I'm using this code to get standard output from an external program:
>>> from subprocess import *
>>> command_stdout = Popen(['ls', '-l'], stdout=PIPE).communicate()[0]
How do I convert the bytes in command_stdout to a string?
I made a function to clean a list:

def cleanLists(self, lista):
    # Strip surrounding whitespace, drop newlines and backspaces,
    # then round-trip each item through UTF-8
    lista = [x.strip() for x in lista]
    lista = [x.replace('\n', '') for x in lista]
    lista = [x.replace('\b', '') for x in lista]
    lista = [x.encode('utf8') for x in lista]
    lista = [x.decode('utf8') for x in lista]
    return lista
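A minimal usage sketch (an assumption on my part: the items are str values, and since the method never uses self, a placeholder is passed for it):

>>> cleanLists(None, ['  a\n', 'b\bc '])
['a', 'bc']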
To interpret a byte sequence as text, you have to know the corresponding character encoding:
unicode_text = bytestring.decode(character_encoding)
Example:
>>> b'\xc2\xb5'.decode('utf-8')
'µ'
The ls command may produce output that can't be interpreted as text. File names on Unix may be any sequence of bytes except slash b'/' and the zero byte b'\0':

>>> open(bytes(range(0x100)).translate(None, b'\0/'), 'w').close()

Trying to decode such byte soup using utf-8 encoding raises UnicodeDecodeError.
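For example, a lone 0xff byte is not valid utf-8:

>>> b'\xff'.decode('utf-8')
Traceback (most recent call last):
  ...
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xff in position 0: invalid start byte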
It can be worse. The decoding may fail silently and produce mojibake if you use the wrong, incompatible encoding:

>>> '—'.encode('utf-8').decode('cp1252')
'â€”'

The data is corrupted but your program remains unaware that a failure has occurred.
In general, the character encoding to use is not embedded in the byte sequence itself. You have to communicate this info out-of-band. Some outcomes are more likely than others, and therefore the chardet module exists, which can guess the character encoding. A single Python script may use multiple character encodings in different places.
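A minimal sketch, assuming the third-party chardet package is installed (pip install chardet); the result is a guess, not a guarantee:

import subprocess
import chardet  # third-party module

raw = subprocess.check_output(['ls', '-l'])  # bytes in some unknown encoding
guess = chardet.detect(raw)  # e.g. {'encoding': 'utf-8', 'confidence': 0.99, ...}
if guess['encoding'] is not None:
    text = raw.decode(guess['encoding'])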
ls output can be converted to a Python string using the os.fsdecode() function, which succeeds even for undecodable filenames (it uses sys.getfilesystemencoding() and the surrogateescape error handler on Unix):
import os
import subprocess
output = os.fsdecode(subprocess.check_output('ls'))
To get the original bytes back, you could use os.fsencode().
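A round-trip sketch on Unix, where the surrogateescape handler preserves undecodable bytes:

>>> import os
>>> os.fsencode(os.fsdecode(b'\xff'))
b'\xff'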
If you pass the universal_newlines=True parameter then subprocess uses locale.getpreferredencoding(False) to decode bytes; e.g., it can be cp1252 on Windows.
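For example:

import subprocess

# Returns str instead of bytes, decoded with locale.getpreferredencoding(False)
output = subprocess.check_output(['ls', '-l'], universal_newlines=True)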
To decode the byte stream on the fly, io.TextIOWrapper() could be used, as in the sketch below.
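A minimal sketch, assuming a command whose output you want decoded line by line as it arrives:

import io
import subprocess

process = subprocess.Popen(['ls', '-l'], stdout=subprocess.PIPE)
# Wrap the binary pipe; lines are decoded as the process produces them
for line in io.TextIOWrapper(process.stdout, encoding='utf-8'):
    print(line, end='')
process.wait()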
Different commands may use different character encodings for their output, e.g., the dir internal command (cmd) may use cp437. To decode its output, you could pass the encoding explicitly (Python 3.6+):
output = subprocess.check_output('dir', shell=True, encoding='cp437')
The filenames may differ from os.listdir() (which uses the Windows Unicode API), e.g., '\xb6' can be substituted with '\x14'; Python's cp437 codec maps b'\x14' to the control character U+0014 instead of U+00B6 (¶). To support filenames with arbitrary Unicode characters, see Decode PowerShell output possibly containing non-ASCII Unicode characters into a Python string.
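You can verify the codec mapping in the interpreter:

>>> b'\x14'.decode('cp437')
'\x14'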
From sys — System-specific parameters and functions:
To write or read binary data from/to the standard streams, use the underlying binary buffer. For example, to write bytes to stdout, use sys.stdout.buffer.write(b'abc').
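A minimal sketch mirroring the quote, passing raw bytes straight through without any text decoding:

import sys

# Read raw bytes from stdin and write them to stdout unchanged
data = sys.stdin.buffer.read()
sys.stdout.buffer.write(data)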
def toString(string):
    # Decode bytes using UTF-8; pass str (which has no decode method) through unchanged
    try:
        return string.decode("utf-8")
    except (AttributeError, UnicodeDecodeError):
        return string

b = b'97.080.500'
s = '97.080.500'
print(toString(b))  # 97.080.500
print(toString(s))  # 97.080.500
This way you avoid the error you would otherwise get by calling decode() on a str:

AttributeError: 'str' object has no attribute 'decode'
You can also specify the encoding type straight in a cast:
>>> my_byte_str
b'Hello World'
>>> str(my_byte_str, 'utf-8')
'Hello World'
Try this:

>>> bytes.fromhex('c3a9').decode('utf-8')
'é'