flatten

How to remove the innermost level of nesting in a list of lists of varying lengths

Submitted by 久未见 on 2020-03-14 19:06:17

Question: I'm trying to remove the innermost nesting in a list of lists of single-element lists. Do you know a relatively easy way (converting to NumPy arrays is fine) to get from [[[1], [2], [3], [4], [5]], [[6], [7], [8]], [[11], [12]]] to this: [[1, 2, 3, 4, 5], [6, 7, 8], [11, 12]]? Also, the real lists I'm trying to do this for contain datetime objects rather than the ints in the example, and the initial collection of lists will be of varying lengths. Alternatively, it would be fine if there…
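A minimal sketch of the transformation asked for here, using `itertools.chain.from_iterable` to strip the single-element inner lists; since the code never inspects the values, it works unchanged for datetime objects and for sublists of varying lengths:

```python
from itertools import chain

# Sample data from the question; real data could hold datetime objects instead.
data = [[[1], [2], [3], [4], [5]], [[6], [7], [8]], [[11], [12]]]

# Remove the innermost level of nesting: each inner [x] contributes its x.
flattened = [list(chain.from_iterable(sub)) for sub in data]
print(flattened)  # [[1, 2, 3, 4, 5], [6, 7, 8], [11, 12]]
```

The same comprehension also handles inner lists with more than one element, since `chain.from_iterable` simply concatenates them.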

Generify transformation of hierarchical array into a flat array

Submitted by 自古美人都是妖i on 2020-02-24 10:22:08

Question: I'm trying to generify the transformation of a hierarchical array into a flat array. I have this kind of object, which has children of the same type, which in turn have children of the same type, and so on: [{ id: "123", children: [ { id: "603", children: [ { id: "684", children: [ ... ] }, { id: "456", children: [] } ] } ] }] I found a way to flatten it when I have the information about the number of nested levels. One level deep (works): let result = myArray.flat() .concat(myArray.flatMap(comm => comm.children…
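The question is about JavaScript, but the generic fix is the same in any language: recurse instead of hard-coding one `.flat()` per level. A Python sketch of that recursion, assuming the same `id`/`children` shape as the question:

```python
def flatten_tree(nodes):
    """Depth-first flatten of [{'id': ..., 'children': [...]}] into a flat list,
    regardless of how many levels of nesting there are."""
    flat = []
    for node in nodes:
        flat.append({'id': node['id']})          # keep the node itself (minus children)
        flat.extend(flatten_tree(node.get('children', [])))
    return flat

tree = [{'id': '123', 'children': [
    {'id': '603', 'children': [
        {'id': '684', 'children': []},
        {'id': '456', 'children': []}]}]}]
print([n['id'] for n in flatten_tree(tree)])  # ['123', '603', '684', '456']
```

Because the function calls itself on each `children` list, no knowledge of the nesting depth is needed.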

Extract set of leaf values found in nested dicts and lists excluding None

Submitted by 纵然是瞬间 on 2020-02-14 02:19:48

Question: I have a nested structure read from YAML, composed of nested lists and/or nested dicts, or a mix of both, at various levels of nesting. It can be assumed that the structure doesn't contain any recursive objects. How do I extract only the leaf values from it? Also, I don't want any None values. The leaf values are strings, which is all I care about. It's okay to use recursion, considering that the maximum depth of the structure is not large enough to exceed the stack recursion limit…
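One recursive sketch of the leaf extraction described above: dicts recurse into their values, lists into their items, and anything else is a leaf, emitted unless it is None (the sample data here is invented for illustration):

```python
def leaf_values(obj):
    """Recursively yield non-None leaf values from nested dicts and lists."""
    if isinstance(obj, dict):
        for value in obj.values():
            yield from leaf_values(value)
    elif isinstance(obj, (list, tuple)):
        for item in obj:
            yield from leaf_values(item)
    elif obj is not None:
        yield obj

data = {'a': [{'b': 'x', 'c': None}, ['y', None]], 'd': 'z'}
print(set(leaf_values(data)))  # {'x', 'y', 'z'}
```

Wrapping the generator in `set()` gives the set of leaf values the question asks for; `list()` would preserve duplicates and order instead.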

Flattening nested JSON with Spark Scala creates two columns with the same name, causing a duplicate-column error in Phoenix

Submitted by 久未见 on 2020-01-24 22:56:10

Question: I was trying to flatten a deeply nested JSON and create a Spark DataFrame; the ultimate goal is to push that DataFrame to Phoenix. I am able to flatten the JSON successfully with this code: def recurs(df: DataFrame): DataFrame = { if(df.schema.fields.find(_.dataType match { case ArrayType(StructType(_),_) | StructType(_) => true case _ => false }).isEmpty) df else { val columns = df.schema.fields.map(f => f.dataType match { case _: ArrayType => explode(col(f.name)).as(f.name) case s:…
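The duplicate-column error described in the title typically arises when two leaves at different depths share a name (e.g. a nested `id` next to a top-level `id`) and the flattening keeps only the leaf name. This is not the Spark code itself, but a plain-Python sketch of the usual remedy: prefix each flattened column with its parent path so every name stays unique (the sample record and separator are assumptions for illustration):

```python
def flatten_json(obj, prefix=""):
    """Flatten nested dicts, prefixing each key with its parent path so that
    leaves with the same name (e.g. props.type.id vs top-level id) stay distinct."""
    flat = {}
    for key, value in obj.items():
        name = f"{prefix}{key}"
        if isinstance(value, dict):
            flat.update(flatten_json(value, prefix=name + "_"))
        else:
            flat[name] = value
    return flat

record = {"id": "abc", "props": {"type": {"id": "dd", "name": "Adam"}}}
print(flatten_json(record))
# {'id': 'abc', 'props_type_id': 'dd', 'props_type_name': 'Adam'}
```

In the Spark version, the equivalent is aliasing each selected column with its full dotted path instead of only the leaf field name.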

Flatten only the deepest level in a Scala Spark DataFrame

Submitted by 倖福魔咒の on 2020-01-23 17:25:08

Question: I have a Spark job with a DataFrame containing the following values: { "id": "abchchd", "test_id": "ndsbsb", "props": { "type": { "isMale": true, "id": "dd", "mcc": 1234, "name": "Adam" } } } { "id": "abc", "test_id": "asf", "props": { "type2": { "isMale": true, "id": "dd", "mcc": 12134, "name": "Perth" } } } and I want to flatten it elegantly (since the number of keys, their types, etc. are unknown) in such a way that props remains a struct but everything inside it is flattened (irrespective of the…

Flatten Array: Keep index, value equal to position in array

Submitted by 懵懂的女人 on 2020-01-14 19:00:31

Question: I've been having a little trouble trying to flatten arrays in a specific way. Here is a print_r view of the array I want to flatten: Array ( [1] => Array ( [8] => 1 [9] => 2 [10] => Array ( [15] => Array ( [22] => 1 ) [21] => 2 ) [11] => Array ( [16] => Array ( [23] => 1 ) ) ) [2] => Array ( [12] => 1 ) [3] => Array ( [13] => 1 ) [4] => Array ( [14] => 1 ) [5] => 5 [6] => 6 [7] => 7 ) What I'm attempting to create is an array which keeps the above indexes, but where the value is equal to its…

Flatten nested arrays (Java)

Submitted by 爷,独闯天下 on 2020-01-13 04:15:14

Question: I'm struggling to create the right logic to flatten an array. I essentially want to duplicate parent rows for each child item in a nested array. The number of nested arrays can vary. I've been creating Java lists because I find them easy to work with, but I'm open to any solution. The nature of this problem is that I'm starting with nested JSON that I want to convert into a flat CSV to load into a database table. Thanks for the help. Example: [1,2,[A,B,[Cat,Dog]],3] I've created the above as a List…

Python: flatten nested lists with indices

Submitted by 别来无恙 on 2020-01-12 06:53:29

Question: Given a list of arbitrarily deep nested lists of arbitrary size, I would like a flat, depth-first iterator over all elements in the tree, but with path indices as well, such that for x, y in flatten(L), x == L[y[0]][y[1]]...[y[-1]]. That is, for L = [[[1, 2, 3], [4, 5]], [6], [7, [8, 9]], 10], flatten(L) should yield: (1, (0, 0, 0)), (2, (0, 0, 1)), (3, (0, 0, 2)), (4, (0, 1, 0)), (5, (0, 1, 1)), (6, (1, 0)), (7, (2, 0)), (8, (2, 1, 0)), (9, (2, 1, 1)), (10, (3,)) I made a recursive implementation…
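A compact recursive sketch matching the specification above: carry the index path down through the recursion, and yield (value, path) at each non-list leaf:

```python
def flatten(L, path=()):
    """Depth-first iterator over nested lists, yielding (value, index_path)
    such that indexing L with the path tuple recovers the value."""
    for i, item in enumerate(L):
        if isinstance(item, list):
            yield from flatten(item, path + (i,))
        else:
            yield (item, path + (i,))

L = [[[1, 2, 3], [4, 5]], [6], [7, [8, 9]], 10]
print(list(flatten(L)))
# [(1, (0, 0, 0)), (2, (0, 0, 1)), (3, (0, 0, 2)), (4, (0, 1, 0)), (5, (0, 1, 1)),
#  (6, (1, 0)), (7, (2, 0)), (8, (2, 1, 0)), (9, (2, 1, 1)), (10, (3,))]
```

The invariant from the question holds: walking `L` with each yielded path tuple returns the paired value.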

Python: flatten an array of arrays

Submitted by 随声附和 on 2020-01-05 04:39:07

Question: I have an array of arrays, something like this: array([[array([33120, 28985, 9327, 45918, 30035, 17794, 40141, 1819, 43668], dtype=int64)], [array([33754, 24838, 17704, 21903, 17668, 46667, 17461, 32665], dtype=int64)], [array([46842, 26434, 39758, 27761, 10054, 21351, 22598, 34862, 40285, 17616, 25146, 32645, 41276], dtype=int64)], ..., [array([24534, 8230, 14267, 9352, 3543, 29397, 900, 32398, 34262, 37646, 11930, 37173], dtype=int64)], [array([25157], dtype=int64)], [array([ 8859, 20850,
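The dump above is an object array where each row is a one-element list wrapping a 1-D integer array. A sketch of one way to flatten it, assuming that shape: unwrap each row's single element and concatenate (the sample values here are a shortened stand-in for the question's data):

```python
import numpy as np

# Each row is a one-element list wrapping a 1-D array of varying length.
rows = [[np.array([33120, 28985, 9327], dtype=np.int64)],
        [np.array([33754, 24838], dtype=np.int64)],
        [np.array([25157], dtype=np.int64)]]

# Unwrap row[0] from each row, then concatenate into one flat array.
flat = np.concatenate([row[0] for row in rows])
print(flat.tolist())  # [33120, 28985, 9327, 33754, 24838, 25157]
```

`np.concatenate` handles the varying sub-array lengths directly; `np.hstack` would work the same way here.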