Best way to convert string to bytes in Python 3?

Asked by 慢半拍i on 2020-11-22 00:14

There appear to be two different ways to convert a string to bytes, as seen in the answers to TypeError: 'str' does not support the buffer interface.
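
The two approaches in question are presumably along these lines (mystring holds the text, and 'utf-8' is assumed as the encoding):

    b = bytes(mystring, 'utf-8')
    b = mystring.encode('utf-8')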

Which of these methods is better, or more Pythonic?

3 answers
  • 2020-11-22 00:53

    If you look at the docs for bytes, it points you to bytearray:

    bytearray([source[, encoding[, errors]]])

    Return a new array of bytes. The bytearray type is a mutable sequence of integers in the range 0 <= x < 256. It has most of the usual methods of mutable sequences, described in Mutable Sequence Types, as well as most methods that the bytes type has, see Bytes and Byte Array Methods.

    The optional source parameter can be used to initialize the array in a few different ways:

    If it is a string, you must also give the encoding (and optionally, errors) parameters; bytearray() then converts the string to bytes using str.encode().

    If it is an integer, the array will have that size and will be initialized with null bytes.

    If it is an object conforming to the buffer interface, a read-only buffer of the object will be used to initialize the bytes array.

    If it is an iterable, it must be an iterable of integers in the range 0 <= x < 256, which are used as the initial contents of the array.

    Without an argument, an array of size 0 is created.

    So bytes can do much more than just encode a string. It's Pythonic that it would allow you to call the constructor with any type of source parameter that makes sense.
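
    For illustration, here is a quick sketch of those constructor forms (the values are chosen arbitrarily):

    # From a string: an encoding is required
    bytes("abc", "utf-8")    # b'abc'
    # From an integer: that many null bytes
    bytes(3)                 # b'\x00\x00\x00'
    # From an iterable of integers in range(256)
    bytes([104, 105])        # b'hi'
    # Without an argument: an empty bytes object
    bytes()                  # b''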

    For encoding a string, I think some_string.encode(encoding) is more Pythonic than using the constructor, because it is the most self-documenting -- "take this string and encode it with this encoding" is clearer than bytes(some_string, encoding), where there is no explicit verb.

    Edit: I checked the Python source. If you pass a unicode string to bytes using CPython, it calls PyUnicode_AsEncodedString, which is the implementation of encode; so you're just skipping a level of indirection if you call encode yourself.

    Also, see Serdalis' comment -- unicode_string.encode(encoding) is also more Pythonic because its inverse is byte_string.decode(encoding) and symmetry is nice.
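
    A minimal round trip illustrating that symmetry (variable names here are just for illustration):

    text = "café"
    data = text.encode("utf-8")    # b'caf\xc3\xa9'
    back = data.decode("utf-8")    # 'café'
    assert back == text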

  • 2020-11-22 01:03

    The absolute best way is neither of the two, but a third. The first parameter to encode (the encoding) has defaulted to 'utf-8' ever since Python 3.0, so the best way is

    b = mystring.encode()
    

    This is also faster, because in the C code the default argument ends up as NULL rather than the string "utf-8", and NULL is much faster to check.

    Here are some timings:

    In [1]: %timeit -r 10 'abc'.encode('utf-8')
    The slowest run took 38.07 times longer than the fastest. 
    This could mean that an intermediate result is being cached.
    10000000 loops, best of 10: 183 ns per loop
    
    In [2]: %timeit -r 10 'abc'.encode()
    The slowest run took 27.34 times longer than the fastest. 
    This could mean that an intermediate result is being cached.
    10000000 loops, best of 10: 137 ns per loop
    

    Despite the warning, the times were very stable across repeated runs; the deviation was only about 2 per cent.


    Using encode() without an argument is not Python 2 compatible, as in Python 2 the default character encoding is ASCII.

    >>> 'äöä'.encode()
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
    UnicodeDecodeError: 'ascii' codec can't decode byte 0xc3 in position 0: ordinal not in range(128)
    
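    For comparison, the same call succeeds under Python 3 (a minimal illustration):

    >>> 'äöä'.encode()
    b'\xc3\xa4\xc3\xb6\xc3\xa4'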
  • 2020-11-22 01:19

    It's easier than you might think:

    my_str = "hello world"
    my_str_as_bytes = str.encode(my_str)
    type(my_str_as_bytes)  # ensure it is the bytes representation: <class 'bytes'>
    my_decoded_str = my_str_as_bytes.decode()
    type(my_decoded_str)   # ensure it is the string representation: <class 'str'>
    