Python - Unicode & double backslashes

野性不改 2021-01-24 03:44

I scraped a webpage with BeautifulSoup. I got great output, except parts of the list look like this after getting the text:

list = [u'that\\u2019s', u'it\\u2019ll', u'It\\u2019s', u'don\\u2019t', u'That\\u2019s', u'we\\u2019re', u'\\u2013']
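For context, a minimal sketch of how output like this can arise (the markup below is hypothetical; the key point is that the page itself contains literal backslash-u sequences, which get_text() returns verbatim):

from bs4 import BeautifulSoup

# The markup literally contains "\u2019" as six characters of text,
# so BeautifulSoup hands back plain backslash-u strings, not real quotes.
html = u"<p>that\\u2019s</p><p>it\\u2019ll</p>"
soup = BeautifulSoup(html, "html.parser")
print [p.get_text() for p in soup.find_all("p")]
# [u'that\\u2019s', u'it\\u2019ll']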


        
2 Answers
  • 2021-01-24 04:19

    The problem here is that the site ended up double-encoding those Unicode escapes; just do the following:

    ls = [u'that\\u2019s', u'it\\u2019ll', u'It\\u2019s', u'don\\u2019t', u'That\\u2019s', u'we\\u2019re', u'\\u2013']
    
    ls = map(lambda x: x.decode('unicode-escape'), ls)
    

    Now you have a list of properly decoded Unicode strings:

    for a in ls:
        print a
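
    For reference, a quick check of what the decoded values are (U+2019 is the right single quotation mark and U+2013 an en dash, so these outputs follow directly from the escapes above, assuming a terminal that can display them):

    print repr(ls[0])   # u'that\u2019s'  -- a single real U+2019 now, not a backslash sequence
    print ls[0]         # that's
    print ls[-1]        # –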
    
  • 2021-01-24 04:25

    Since you are using Python 2 there, it is simply a matter of re-applying the "decode" method, using the special codec "unicode_escape". It "sees" the "physical" backslashes and decodes those sequences into proper Unicode characters:

    data =  [u'that\\u2019s', u'it\\u2019ll', u'It\\u2019s', u'don\\u2019t', u'That\\u2019s', u'we\\u2019re', u'\\u2013']
    
    result = [part.decode('unicode_escape') for part in data]
    

    Anyone getting here using Python 3: in that version you cannot apply the "decode" method to the str objects delivered by BeautifulSoup; you have to first re-encode them to byte-string objects, and then decode those with the unicode_escape codec. For this purpose it is useful to use the latin1 codec as the transparent encoding: every character in the str object is preserved as the corresponding byte in the new bytes object:

    result = [part.encode('latin1').decode('unicode_escape') for part in data]
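
    A minimal self-contained sketch of that Python 3 route, using the sample values from the question (variable names here are just illustrative):

    # Python 3: str has no .decode(), so round-trip through latin1 first.
    data = ['that\\u2019s', 'it\\u2019ll', 'It\\u2019s', 'don\\u2019t',
            'That\\u2019s', 'we\\u2019re', '\\u2013']

    # latin1 maps each of these characters to the identical byte value, so the
    # text survives the encode step unchanged; unicode_escape then turns the
    # \uXXXX runs into real characters.
    result = [part.encode('latin1').decode('unicode_escape') for part in data]

    for word in result:
        print(word)    # that's, it'll, It's, don't, That's, we're, –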
    