Ok, let's say that I have a string text file named "string.txt", and I want to convert it into a JSON text file. What am I supposed to do? I have tried to use 'json.loads()', but …
It really depends on how your txt file is structured. But suppose you have a structured txt file like the following:
BASE|30-06-2008|2007|2|projected
BASE|30-06-2007|2010|1|projected
BASE|30-06-2007|2009|3|projected
BASE|30-06-2007|2020|2|projected
...
You could use a script like this:
import codecs
import json

import numpy as np
import pandas as pd

raw_filepath = "your_data.txt"
out_json_path = "your_data.json"

field_names = [
    "Scenario",
    "Date",
    "Year",
    "Quarter",
    "Value",
]

# Read the pipe-delimited file; dtype=None lets NumPy infer each column's type
data_array = np.genfromtxt(raw_filepath, delimiter="|", dtype=None, encoding="utf-8")

# Build a DataFrame from the structured array and attach the column names
df = pd.DataFrame.from_records(data_array)
df.columns = field_names

# Encode the DataFrame as 'records' JSON, then parse it back into Python objects
result = df.to_json(orient="records")
parsed = json.loads(result)

# Save the result in .json format (the with-block closes the file properly)
with codecs.open(out_json_path, "w", encoding="utf-8") as f:
    json.dump(parsed, f, sort_keys=False, indent=4)
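Running this on the sample rows above should produce a your_data.json that looks roughly like the following (assuming NumPy infers Year and Quarter as integers and keeps the other columns as strings):

[
    {
        "Scenario": "BASE",
        "Date": "30-06-2008",
        "Year": 2007,
        "Quarter": 2,
        "Value": "projected"
    },
    ...
]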
Explanation
To load a dataset in NumPy, we can use the genfromtxt() function. We can specify the data file name, the delimiter (optional, but needed here), and the number of rows to skip if there is a header row. The genfromtxt() function also has an optional dtype parameter for specifying the data type of each column. If you leave dtype at its default, every value is cast to one common (most general) type; passing dtype=None instead tells NumPy to infer the type of each column, which is what the script above relies on.
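For example, a minimal sketch (assuming the sample rows above are saved as "your_data.txt"); the names argument is just an alternative way to label the columns directly on the structured array:

import numpy as np

# dtype=None lets NumPy infer each column's type (integers for Year and
# Quarter, strings for the rest); names= labels the fields of the result
data = np.genfromtxt(
    "your_data.txt",
    delimiter="|",
    dtype=None,
    encoding="utf-8",
    names=["Scenario", "Date", "Year", "Quarter", "Value"],
)
print(data["Year"])  # e.g. [2007 2010 2009 2020]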
In the part df.to_json(orient="records") we are encoding the DataFrame as 'records'-oriented JSON, i.e. a list with one object per row. Note that index labels are not preserved with this orientation. This way, we get an output like the following, as described in the pandas documentation:
>>> result = df.to_json(orient="records")
>>> parsed = json.loads(result)
>>> json.dumps(parsed, indent=4)
[
    {
        "col 1": "a",
        "col 2": "b"
    },
    {
        "col 1": "c",
        "col 2": "d"
    }
]
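If you do need the index labels in the output, a different orientation keeps them. A small sketch, reusing the same df as above:

result = df.to_json(orient="index")
# 'index' orientation produces {index label -> {column -> value}},
# so the row labels survive the round trip
parsed = json.loads(result)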