Python 3.0 - tokenize and untokenize


Question


I am using something similar to the following simplified script to parse snippets of Python from a larger file:

import io
import tokenize

src = 'foo="bar"'               # note: no trailing newline
src = src.encode('utf-8')       # tokenize.tokenize() wants a bytes readline
src = io.BytesIO(src)

src = list(tokenize.tokenize(src.readline))

for tok in src:
    print(tok)

src = tokenize.untokenize(src)  # this line raises the AssertionError below

Although the Python 2.x code is not identical, it uses the same idiom (sketched after the traceback below) and works just fine. However, when I run the snippet above under Python 3.0, I get this output:

(57, 'utf-8', (0, 0), (0, 0), '')
(1, 'foo', (1, 0), (1, 3), 'foo="bar"')
(53, '=', (1, 3), (1, 4), 'foo="bar"')
(3, '"bar"', (1, 4), (1, 9), 'foo="bar"')
(0, '', (2, 0), (2, 0), '')

Traceback (most recent call last):
  File "q.py", line 13, in <module>
    src = tokenize.untokenize(src)
  File "/usr/local/lib/python3.0/tokenize.py", line 236, in untokenize
    out = ut.untokenize(iterable)
  File "/usr/local/lib/python3.0/tokenize.py", line 165, in untokenize
    self.add_whitespace(start)
  File "/usr/local/lib/python3.0/tokenize.py", line 151, in add_whitespace
    assert row <= self.prev_row
AssertionError
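
Note that the token dump goes straight from the STRING token to the ENDMARKER, with no NEWLINE token in between; that detail turns out to matter below.

For reference, the Python 2.x version of the idiom is roughly this (a from-memory sketch, not the exact code from the larger file; generate_tokens works on str, so there is no bytes/BytesIO dance):

import StringIO
import tokenize

src = 'foo="bar"'
toks = list(tokenize.generate_tokens(StringIO.StringIO(src).readline))
print tokenize.untokenize(toks)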

I have searched for references to this error and its causes, but have been unable to find any. What am I doing wrong and how can I correct it?

[edit]

After partisann's observation that appending a newline to the source makes the error go away, I started experimenting with the list of tokens I was untokenizing. It seems the EOF (ENDMARKER) token triggers the error when it is not immediately preceded by a newline, so removing it gets rid of the error. The following script runs without error:

import io
import tokenize

src = 'foo="bar"'
src = src.encode('utf-8')
src = io.BytesIO(src)

src = list(tokenize.tokenize(src.readline))

for tok in src:
    print(tok)

src = tokenize.untokenize(src[:-1])  # drop the trailing ENDMARKER token
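
A quick way to confirm that the dropped token really is the ENDMARKER is token.tok_name (my own check, standard library only):

import io
import token
import tokenize

toks = list(tokenize.tokenize(io.BytesIO(b'foo="bar"').readline))
print(token.tok_name[toks[-1][0]])  # -> ENDMARKER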

Answer 1:


src = 'foo="bar"\n'

You forgot the trailing newline.
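
With the newline in place, the whole round trip succeeds; a minimal sketch:

import io
import tokenize

src = 'foo="bar"\n'
toks = list(tokenize.tokenize(io.BytesIO(src.encode('utf-8')).readline))
print(tokenize.untokenize(toks))  # b'foo="bar"\n'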


Answer 2:


If you limit the input to untokenize to the first two items of each token tuple, it seems to work.

import io
import tokenize

src = 'foo="bar"'
src = src.encode('utf-8')
src = io.BytesIO(src)

src = list(tokenize.tokenize(src.readline))

for tok in src:
    print(tok)

src = [t[:2] for t in src]      # keep only (type, string) from each token
src = tokenize.untokenize(src)
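
This works because, when the tokens are plain (type, string) 2-tuples, untokenize falls back to a compatibility mode that ignores the start/end positions entirely, so the failing position assertion is never reached. The trade-off is that the original spacing is not reproduced exactly; a minimal sketch of that effect:

import io
import tokenize

src = 'foo = "bar"\n'            # note the spaces around =
toks = [t[:2] for t in tokenize.tokenize(io.BytesIO(src.encode('utf-8')).readline)]
print(tokenize.untokenize(toks)) # e.g. b'foo ="bar"\n' -- spacing differs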


Source: https://stackoverflow.com/questions/934661/python3-0-tokenize-and-untokenize
