I finally got the data output I need from a file containing many JSON objects, but I need some help converting the output below into a single DataFrame as I loop through the data. Here is the code that produces the output, including a sample of what that output looks like:
Raw data:
{
    "zipcode":"08989",
    "current"{"canwc":null,"cig":4900,"class":"observation","clds":"OVC","day_ind":"D","dewpt":19,"expireTimeGMT":1385486700,"feels_like":34,"gust":null,"hi":37,"humidex":null,"icon_code":26,"icon_extd":2600,"max_temp":37,"wxMan":"wx1111"},
    "triggers":[53,31,9,21,48,7,40,178,55,179,176,26,103,175,33,51,20,57,112,30,50,113]
}
{
    "zipcode":"08990",
    "current":{"canwc":null,"cig":4900,"class":"observation","clds":"OVC","day_ind":"D","dewpt":19,"expireTimeGMT":1385486700,"feels_like":34,"gust":null,"hi":37,"humidex":null,"icon_code":26,"icon_extd":2600,"max_temp":37, "wxMan":"wx1111"},
    "triggers":[53,31,9,21,48,7,40,178,55,179,176,26,103,175,33,51,20,57,112,30,50,113]
}

import glob
import itertools
import json
from itertools import chain

import pandas as pd

def lines_per_n(f, n):
    # yield the file n lines at a time, joined into a single string
    for line in f:
        yield ''.join(chain([line], itertools.islice(f, n - 1)))

for fin in glob.glob('*.txt'):
    with open(fin) as f:
        for chunk in lines_per_n(f, 5):
            try:
                jfile = json.loads(chunk)
                zipcode = jfile['zipcode']
                datetime = jfile['current']['proc_time']
                triggers = jfile['triggers']
                print pd.Series(jfile['zipcode']), pd.Series(jfile['current']['proc_time']),\
                      jfile['triggers']
            except ValueError, e:
                pass
            else:
                pass
When I run the above, I get the sample output below, which I would like to store in a pandas DataFrame with 3 columns.
08988 20131126102946 []
08989 20131126102946 [53, 31, 9, 21, 48, 7, 40, 178, 55, 179]
08988 20131126102946 []
08989 20131126102946 [53, 31, 9, 21, 48, 7, 40, 178, 55, 179]
00544 20131126102946 [178, 30, 176, 103, 179, 112, 21, 20, 48]
So the following code seems closer, in that it gives me a funky-looking df if I pass in the list and transpose it. Any ideas on how to get this reshaped correctly?
def series_chunk(chunk):
    jfile = json.loads(chunk)
    zipcode = jfile['zipcode']
    datetime = jfile['current']['proc_time']
    triggers = jfile['triggers']
    return jfile['zipcode'],\
           jfile['current']['proc_time'],\
           jfile['triggers']

for fin in glob.glob('*.txt'):
    with open(fin) as f:
        for chunk in lines_per_n(f, 7):
            df1 = pd.DataFrame(list(series_chunk(chunk)))
            print df1.T

[u'08988', u'20131126102946', []]
[u'08989', u'20131126102946', [53, 31, 9, 21, 48, 7, 40, 178, 55, 179]]
[u'08988', u'20131126102946', []]
[u'08989', u'20131126102946', [53, 31, 9, 21, 48, 7, 40, 178, 55, 179]]
DataFrame:
       0               1                                                  2
0  08988  20131126102946                                                 []

       0               1                                                  2
0  08989  20131126102946  [53, 31, 9, 21, 48, 7, 40, 178, 55, 179, 176, ...

       0               1                                                  2
0  08988  20131126102946                                                 []

       0               1                                                  2
0  08989  20131126102946  [53, 31, 9, 21, 48, 7, 40, 178, 55, 179, 176, ...
Here is my final code and output. How do I capture each DataFrame created through the loop and concatenate them on the fly into a single DataFrame object?
for fin in glob.glob('*.txt'):
    with open(fin) as f:
        print pd.concat([series_chunk(chunk) for chunk in lines_per_n(f, 7)], axis=1).T

       0               1                                                  2
0  08988  20131126102946                                                 []
1  08989  20131126102946  [53, 31, 9, 21, 48, 7, 40, 178, 55, 179, 176, ...
       0               1                                                  2
0  08988  20131126102946                                                 []
1  08989  20131126102946  [53, 31, 9, 21, 48, 7, 40, 178, 55, 179, 176, ...
Use read_json:
# can either pass string of the json, or a filepath to a file with valid json
In [99]: pd.read_json('[{"A": 1, "B": 2}, {"A": 3, "B": 4}]')
Out[99]:
   A  B
0  1  2
1  3  4
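As an aside (not part of the original answer): if each JSON object sat on a single line of the file (newline-delimited JSON) rather than spanning five lines, newer pandas versions can read it directly with lines=True. A minimal sketch, with a hypothetical records.jsonl file:

import pandas as pd

# Hypothetical file: one complete json object per line (newline-delimited json).
# lines=True tells read_json to parse each line as a separate record.
df = pd.read_json('records.jsonl', lines=True)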
See the IO section of the docs for several examples, the arguments you can pass to this function, and ways to normalize less-structured json.
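One such tool is json_normalize (pd.json_normalize in pandas >= 1.0, pandas.io.json.json_normalize in older versions), which flattens nested objects like the "current" block above into dotted column names. A minimal sketch on a trimmed-down version of one record from the question:

import pandas as pd

# Trimmed-down record from the question, for illustration only.
obj = {
    "zipcode": "08989",
    "current": {"proc_time": "20131126102946", "max_temp": 37},
    "triggers": [53, 31, 9],
}

flat = pd.json_normalize(obj)
# Columns include 'zipcode', 'triggers', 'current.proc_time', 'current.max_temp';
# the triggers list is kept as a single cell.
print(flat.columns.tolist())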
If you don't have valid json, it's often effective to munge the string before reading it in as json.
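For example, the first raw record above is missing the colon after "current". A minimal sketch of patching that up before json.loads, assuming the missing colon is the only problem with the chunk (the load_chunk helper is hypothetical):

import json
import re

def load_chunk(chunk):
    # Insert a ':' between a quoted key and an opening brace when it is missing,
    # e.g. '"current"{' -> '"current":{'. Assumes this is the only invalid part.
    fixed = re.sub(r'"\s*{', '":{', chunk)
    return json.loads(fixed)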
If you have several json files, you should concat the DataFrames together (similar to this answer):
pd.concat([pd.read_json(file) for file in ...], ignore_index=True)
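A minimal sketch of how the "..." might be filled in with glob, assuming hypothetical files matching *.json that each contain valid json:

import glob

import pandas as pd

# Hypothetical: every file matching *.json contains valid json records.
df = pd.concat([pd.read_json(f) for f in glob.glob('*.json')],
               ignore_index=True)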
Use a regex with a negative lookbehind as the separator passed to read_csv:
In [11]: df = pd.read_csv('foo.csv', sep='(?<!,)\s', header=None)

In [12]: df
Out[12]:
        0               1                                                  2
0    8988  20131126102946                                                 []
1    8989  20131126102946  [53, 31, 9, 21, 48, 7, 40, 178, 55, 179, 176, ...
2    8988  20131126102946                                                 []
3    8989  20131126102946  [53, 31, 9, 21, 48, 7, 40, 178, 55, 179, 176, ...
4     544  20131126102946  [178, 30, 176, 103, 179, 112, 21, 20, 48, 7, 5...
5     601  20131126094911                                                 []
6     602  20131126101056                                                 []
7     603  20131126101056                                                 []
8     604  20131126101056                                                 []
9     544  20131126102946  [178, 30, 176, 103, 179, 112, 21, 20, 48, 7, 5...
10    601  20131126094911                                                 []
11    602  20131126101056                                                 []
12    603  20131126101056                                                 []
13    604  20131126101056                                                 []

[14 rows x 3 columns]
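The negative lookbehind (?<!,) means the whitespace separator only matches when the character before it is not a comma, so the spaces inside each bracketed trigger list are left alone and the list stays in one field. A quick re.split sketch on a line taken from the sample output above:

import re

line = "08989 20131126102946 [53, 31, 9, 21, 48, 7, 40, 178, 55, 179]"

# Split on whitespace only when it is NOT preceded by a comma: the spaces
# after the commas inside the trigger list do not match.
print(re.split(r'(?<!,)\s', line))
# ['08989', '20131126102946', '[53, 31, 9, 21, 48, 7, 40, 178, 55, 179]']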
As mentioned in the comments, you can do this a little more directly by concatenating several Series together; doing it this way is also a bit simpler to follow:
def series_chunk(chunk):
    jfile = json.loads(chunk)
    zipcode = jfile['zipcode']
    datetime = jfile['current']['proc_time']
    triggers = jfile['triggers']
    return pd.Series([jfile['zipcode'], jfile['current']['proc_time'], jfile['triggers']])

dfs = []
for fin in glob.glob('*.txt'):
    with open(fin) as f:
        df = pd.concat([series_chunk(chunk) for chunk in lines_per_n(f, 5)], axis=1)
        dfs.append(df.T)  # transpose so each chunk becomes a row with columns 0, 1, 2

df = pd.concat(dfs, ignore_index=True)
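To get the three named columns the question asks for, you can assign names afterwards (these names are just illustrative):

df.columns = ['zipcode', 'proc_time', 'triggers']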
Note: You may also prefer to move the try/except into series_chunk.
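A minimal sketch of what that might look like, reusing lines_per_n from the question's code; returning None for chunks that fail to parse and filtering them out before the concat is my own choice, not part of the original answer:

import glob
import json

import pandas as pd

def series_chunk(chunk):
    # Parse one chunk; skip chunks that are not valid json or lack a field.
    try:
        jfile = json.loads(chunk)
        return pd.Series([jfile['zipcode'],
                          jfile['current']['proc_time'],
                          jfile['triggers']])
    except (ValueError, KeyError):
        return None

dfs = []
for fin in glob.glob('*.txt'):
    with open(fin) as f:
        series = [series_chunk(chunk) for chunk in lines_per_n(f, 5)]
        series = [s for s in series if s is not None]  # drop unparseable chunks
        if series:
            dfs.append(pd.concat(series, axis=1).T)

df = pd.concat(dfs, ignore_index=True)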