file-writing

Newline “\n” not Working when Writing a .txt file Python

Submitted by 一世执手 on 2019-12-02 02:08:58
Question:

```python
for word in keys:
    out.write(word + " " + str(dictionary[word]) + "\n")

out = open("alice2.txt", "r")
out.read()
```

For some reason, instead of getting a new line for every word in the dictionary, Python is literally printing \n between every key and value. I have even tried writing the newline separately, like this:

```python
for word in keys:
    out.write(word + " " + str(dictionary[word]))
    out.write("\n")

out = open("alice2.txt", "r")
out.read()
```

What do I do?

Answer 1: Suppose you do:

```python
>>> with open('/tmp/file', 'w') as f:
...     for i in range(10):
...         f.write("Line {}\n".format(i))
...
```

And then you do:

```python
>>> with open('/tmp/file') as …
```
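The answer above is cut off mid-example, but its apparent point is this: evaluating `out.read()` at the interactive prompt displays the string's repr, in which each newline shows up as the two characters `\n`; the file itself contains real line breaks, which `print()` renders. A minimal sketch (the file name is illustrative):

```python
# Write two lines, then read the file back.
with open("demo.txt", "w") as f:
    f.write("alice 12\n")
    f.write("wonderland 7\n")

with open("demo.txt") as f:
    content = f.read()

# Evaluating `content` at the REPL shows the repr, where each newline
# appears as the two characters backslash and n -- a display artifact only.
print(repr(content))   # 'alice 12\nwonderland 7\n'

# print() interprets the newlines, producing real line breaks.
print(content)
```

If you open alice2.txt in an editor rather than reading it back at the REPL, the line breaks should already be there.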

Proper way of waiting until a file is created

Submitted by 守給你的承諾、 on 2019-12-02 01:10:06
I have the following code:

```csharp
// get location where application data directory is located
var appData = Environment.GetFolderPath(Environment.SpecialFolder.ApplicationData);

// create dir if it doesn't exist
var folder = System.IO.Path.Combine(appData, "SomeDir");
if (System.IO.Directory.Exists(folder) == false)
    System.IO.Directory.CreateDirectory(folder);

// create file if it doesn't exist
var file = System.IO.Path.Combine(folder, "test.txt");
if (System.IO.File.Exists(file) == false)
    System.IO.File.Create(file);

// write something to the file
System.IO.File.AppendAllText(file, "Foo");
```

This code …
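The post breaks off before describing the problem, but a well-known pitfall with this exact C# pattern is that File.Create returns an open FileStream; if it is not disposed, the file handle may still be held when AppendAllText runs. As a cross-language sketch, the same ensure-directory-then-append flow in Python avoids both existence checks entirely, and the context manager releases the handle immediately (paths here are illustrative stand-ins):

```python
import os
import tempfile

# Illustrative stand-in for the ApplicationData folder used in the question.
app_data = tempfile.mkdtemp()

# makedirs with exist_ok=True replaces the Exists/CreateDirectory check.
folder = os.path.join(app_data, "SomeDir")
os.makedirs(folder, exist_ok=True)

# Append mode creates the file if it doesn't exist, so no separate Create
# step is needed, and the context manager closes the handle on exit.
file_path = os.path.join(folder, "test.txt")
with open(file_path, "a") as f:
    f.write("Foo")
```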

Write each row of a spark dataframe as a separate file

Submitted by 一世执手 on 2019-12-01 00:37:51
I have a Spark DataFrame with a single column, where each row is a long string (actually an XML file). I want to go through the DataFrame and save the string from each row as a text file; they can be named simply 1.xml, 2.xml, and so on. I cannot seem to find any information or examples on how to do this, and I am just starting to work with Spark and PySpark. Maybe I could map a function over the DataFrame, but the function would have to write the string to a text file, and I can't find how to do that. When saving a DataFrame with Spark, one file will be created for each partition. Hence, one way to get a single row …
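One way to realize the per-row write the question describes, without relying on Spark's partition-based output, is to bring the rows to the driver and write each string yourself; note that collect() only suits modestly sized data. A sketch using a plain list in place of the collected DataFrame (the rows and output directory are illustrative):

```python
import os
import tempfile

# Stand-in for the strings collected from the single-column DataFrame;
# in PySpark this might be: rows = [r[0] for r in df.collect()]
# (collect() brings everything to the driver, so only use it for small data).
rows = ["<a>first</a>", "<b>second</b>", "<c>third</c>"]

out_dir = tempfile.mkdtemp()

# Write each row's string to its own numbered file: 1.xml, 2.xml, ...
for i, xml_string in enumerate(rows, start=1):
    with open(os.path.join(out_dir, "{}.xml".format(i)), "w") as f:
        f.write(xml_string)
```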

How to write at a particular position in text file without erasing original contents?

Submitted by 末鹿安然 on 2019-11-29 11:38:57
I've written code in Python that goes through a file, extracts all the numbers, and adds them up. I now have to write the total (an integer) at a particular spot in the file that says `something something something...Total: __00__ something something`. I have to write the total I have calculated exactly after the `Total: __` part, so the resulting line would change to, for example: `something something something...Total: __35__ something something`. So far I have this for the write part:

```python
import re

f1 = open("filename.txt", 'r+')
for line in f1:
    if '__' in line and 'Total:' in line:
```
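The replace-in-place step the snippet is building toward can be done by reading the whole file, substituting the digits between the underscores, and writing the result back; the file name and marker format are taken from the question, and the regex is one possible reading of the `__00__` placeholder:

```python
import re

# Create a sample file matching the question's format (setup for this demo).
with open("filename.txt", "w") as f:
    f.write("something something something...Total: __00__ something something\n")

total = 35  # the computed sum from the question

with open("filename.txt", "r+") as f:
    text = f.read()
    # Replace whatever digits sit between the double underscores after "Total:".
    new_text = re.sub(r"(Total:\s*__)\d*(__)", r"\g<1>{}\g<2>".format(total), text)
    f.seek(0)
    f.write(new_text)
    f.truncate()  # in case the replacement is shorter than the original
```

Rewriting the whole file this way sidesteps the problem that text files cannot be edited in place at a byte offset without clobbering what follows.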

How do I write to a file in Kotlin?

Submitted by 爱⌒轻易说出口 on 2019-11-28 07:09:15
I can't seem to find this question yet, but what is the simplest, most idiomatic way of opening/creating a file, writing to it, and then closing it? Looking at the kotlin.io reference and the Java documentation, I managed to get this:

```kotlin
fun write() {
    val writer = PrintWriter("file.txt") // java.io.PrintWriter
    for ((member, originalInput) in history) { // history: Map<Member, String>
        writer.append("$member, $originalInput\n")
    }
    writer.close()
}
```

This works, but I was wondering if there was a "proper" Kotlin way of doing this?

Answer (Jayson Minard): A bit more idiomatic. For PrintWriter, this example: File( …

How to write the resulting RDD to a csv file in Spark python

Submitted by ≯℡__Kan透↙ on 2019-11-27 12:07:40
I have a resulting RDD:

```python
labelsAndPredictions = testData.map(lambda lp: lp.label).zip(predictions)
```

This has output in this format:

```python
[(0.0, 0.08482142857142858), (0.0, 0.11442786069651742), .....]
```

What I want is to create a CSV file with one column for the labels (the first part of each tuple) and one for the predictions (the second part). But I don't know how to write to a CSV file in Spark using Python. How can I create a CSV file from the above output?

Answer: Just map the lines of the RDD (labelsAndPredictions) into strings (the lines of the CSV), then use rdd.saveAsTextFile().
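The answer's map-to-strings step can be sketched without a cluster by applying the same conversion to plain tuples; the equivalent PySpark call is shown in a comment (the output path is illustrative):

```python
# Stand-in for the (label, prediction) pairs held in the RDD.
pairs = [(0.0, 0.08482142857142858), (0.0, 0.11442786069651742)]

def to_csv_line(pair):
    # Join the tuple's fields with commas to form one CSV line.
    return ",".join(str(x) for x in pair)

lines = [to_csv_line(p) for p in pairs]

# In PySpark this would be:
#   labelsAndPredictions.map(to_csv_line).saveAsTextFile("out_dir")
```

saveAsTextFile writes one part-file per partition into the given directory, so the "CSV file" is really a directory of line-oriented text files.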

Writing a dictionary to a text file?

Submitted by 醉酒当歌 on 2019-11-26 22:02:02
I have a dictionary and am trying to write it to a file.

```python
exDict = {1:1, 2:2, 3:3}
with open('file.txt', 'r') as file:
    file.write(exDict)
```

I then get the error:

```
file.write(exDict)
TypeError: must be str, not dict
```

So I fixed that error, but another one appeared:

```python
exDict = {111:111, 222:222}
with open('file.txt', 'r') as file:
    file.write(str(exDict))
```

The error:

```
file.write(str(exDict))
io.UnsupportedOperation: not writable
```

I have no idea what to do, as I am still a beginner at Python. If anyone knows how to resolve the issue, please provide an answer.

NOTE: I am using Python 3, not Python 2.

Answer: First of all …
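The second traceback points at the open mode: 'r' is read-only, so write() fails no matter what argument it receives. A minimal sketch that opens in write mode and serializes the dict, with json.dump shown as one common alternative (string keys are used here so the JSON round-trips unchanged):

```python
import json

exDict = {'111': 111, '222': 222}  # string keys so the JSON round-trips unchanged

# 'w' (write mode) is required before file.write() will work; 'r' is read-only.
with open('file.txt', 'w') as f:
    f.write(str(exDict))

# json.dump is a common alternative that can be parsed back with json.load.
with open('file.json', 'w') as f:
    json.dump(exDict, f)
```

str(exDict) is fine for eyeballing the file, but it is not easy to parse back reliably; the JSON version can be reloaded with json.load.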
