Why is bottom-up parsing more common than top-down parsing?

江枫思渺然 submitted on 2019-12-21 03:26:07

Question


It seems that recursive-descent parsers are not only the simplest to explain, but also the simplest to design and maintain. They aren't limited to LALR(1) grammars, and the code itself can be understood by mere mortals. In contrast, bottom-up parsers have limits on the grammars they are able to recognize, and need to be generated by special tools (because the tables that drive them are next-to-impossible to generate by hand).

Why then, is bottom-up (i.e. shift-reduce) parsing more common than top-down (i.e. recursive descent) parsing?
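(To make the "simplest to design" claim concrete, here is a minimal recursive-descent parser for a toy arithmetic grammar. The grammar and all names are illustrative, not taken from any particular compiler; each nonterminal becomes one ordinary function.)

```python
# A minimal recursive-descent parser for the toy grammar
#   expr   := term (('+' | '-') term)*
#   term   := factor (('*' | '/') factor)*
#   factor := NUMBER | '(' expr ')'
# Grammar and names are illustrative only.
import re

def tokenize(src):
    # Split the input into numbers, operators, and parentheses.
    return re.findall(r"\d+|[()+\-*/]", src)

class Parser:
    def __init__(self, tokens):
        self.tokens = tokens
        self.pos = 0

    def peek(self):
        return self.tokens[self.pos] if self.pos < len(self.tokens) else None

    def eat(self, tok=None):
        cur = self.peek()
        if tok is not None and cur != tok:
            raise SyntaxError(f"expected {tok!r}, got {cur!r}")
        self.pos += 1
        return cur

    def expr(self):
        # One function per nonterminal: the code mirrors the grammar.
        value = self.term()
        while self.peek() in ('+', '-'):
            op = self.eat()
            rhs = self.term()
            value = value + rhs if op == '+' else value - rhs
        return value

    def term(self):
        value = self.factor()
        while self.peek() in ('*', '/'):
            op = self.eat()
            rhs = self.factor()
            value = value * rhs if op == '*' else value / rhs
        return value

    def factor(self):
        if self.peek() == '(':
            self.eat('(')
            value = self.expr()
            self.eat(')')
            return value
        return int(self.eat())

print(Parser(tokenize("2 + 3 * (4 - 1)")).expr())  # 11
```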


Answer 1:


If you choose a powerful parser generator, you can code your grammar without worrying about peculiar properties. (LA)LR means you don't have to worry about left recursion, one less headache. GLR means you don't have to worry about local ambiguity or lookahead.
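As a concrete illustration of the left-recursion point, here is a small sketch (mine, not the answerer's) of why the natural left-recursive rule breaks a naive recursive-descent parser, and the rewrite that a top-down author is forced into:

```python
# Why (LA)LR users "don't have to worry about left recursion":
# the natural left-recursive rule  expr := expr '-' NUM | NUM
# is fine for an LR parser but sends a naive recursive-descent
# parser into infinite recursion. Illustrative sketch only.

def expr_naive(tokens, pos):
    # Direct transcription of "expr := expr '-' NUM": the first thing
    # expr() does is call expr() again, with no input consumed.
    lhs, pos = expr_naive(tokens, pos)   # infinite recursion!
    ...

def expr_rewritten(tokens, pos):
    # The top-down fix: rewrite to "expr := NUM ('-' NUM)*", which also
    # forces you to rebuild left associativity by hand in a loop.
    value = int(tokens[pos]); pos += 1
    while pos < len(tokens) and tokens[pos] == '-':
        value -= int(tokens[pos + 1])
        pos += 2
    return value, pos

print(expr_rewritten("10 - 4 - 3".split(), 0))  # (3, 5): left-assoc (10-4)-3
```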

And bottom-up parsers tend to be pretty efficient. So, once you've paid the price of a bit of complicated machinery, it is easier to write grammars, and the parsers perform well.

You should expect to see this kind of choice wherever there is some programming construct that commonly occurs: if something is easier to specify and performs pretty well, the complex machinery will win even though it is complicated under the hood. As another example, the database world has gone over to relational tools, in spite of the fact that you can hand-build an indexed file yourself. It's easier to write the data schemas, it's easier to specify the indexes, and with complicated enough machinery behind them (you don't have to look at the gears, you just use them), they can be pretty fast with almost no effort. Same reasons.




Answer 2:


It stems from a couple of different things.

BNF (and the theory of grammars and such) comes from computational linguistics: folks researching natural-language parsing. BNF is a very attractive way of describing a grammar, and so it's natural to want to consume this notation to produce a parser.

Unfortunately, top-down parsing techniques tend to fall over when applied to such notations, because they cannot handle many common cases (e.g., left recursion). This leaves you with the LR family, which performs well and can handle such grammars, and since the parsers are being produced by a machine, who cares what the code looks like?

You're right, though: top-down parsers work more "intuitively," so they're easier to debug and maintain, and once you have a little practice they're just as easy to write as those generated by tools. (Especially when you get into shift/reduce conflict hell.) Many of the answers talk about parsing performance, but in practice top-down parsers can often be optimized to be as fast as machine-generated parsers.

That's why many production compilers use hand-written lexers and parsers.




Answer 3:


Recursive-descent parsers (at least backtracking ones) try to hypothesize the general structure of the input string, which means a lot of trial and error occurs before the end of the string is reached. This makes them less efficient than bottom-up parsers, which need no such inference engines.
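A minimal sketch of the trial and error being described, assuming a naive backtracking combinator style (the combinators and the grammar here are my own illustration, not the answerer's):

```python
# A backtracking parser tries each alternative of a rule, rewinding the
# input on failure. Nested choices retry over the same input repeatedly,
# which is what makes naive backtracking exponential in the worst case.

def alt(*parsers):
    # Try each alternative from the same position; backtrack on failure.
    def parse(tokens, pos):
        for p in parsers:
            result = p(tokens, pos)      # each failure throws away work
            if result is not None:
                return result
        return None
    return parse

def seq(*parsers):
    def parse(tokens, pos):
        values = []
        for p in parsers:
            result = p(tokens, pos)
            if result is None:
                return None              # whole sequence fails; caller rewinds
            value, pos = result
            values.append(value)
        return values, pos
    return parse

def lit(tok):
    def parse(tokens, pos):
        if pos < len(tokens) and tokens[pos] == tok:
            return tok, pos + 1
        return None
    return parse

# expr := 'a' '+' 'a'  |  'a'   -- the first alternative partially
# matches on input ['a'] before failing, forcing a retry from scratch.
expr = alt(seq(lit('a'), lit('+'), lit('a')), lit('a'))
print(expr(['a'], 0))            # ('a', 1) after backtracking
print(expr(['a', '+', 'a'], 0))  # (['a', '+', 'a'], 3)
```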

The performance difference becomes magnified as the complexity of the grammar increases.




Answer 4:


To add to the other answers, it is important to realize that besides efficiency, bottom-up parsers can accept significantly more grammars than recursive-descent parsers can. Top-down parsers, whether predictive or not, typically use a single token of lookahead and fail if the current token and whatever immediately follows it can be derived by two different rules. Of course, you could implement the parser with more lookahead (e.g., LL(3)), but how far are you willing to push it before it becomes as complex as a bottom-up parser? Bottom-up parsers (LALR specifically), on the other hand, maintain FIRST and FOLLOW sets and can handle cases where top-down parsers can't.
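For the FIRST/FOLLOW point, here is a sketch of how FIRST sets are computed by the usual fixpoint iteration over a toy grammar (the grammar and its representation are my own illustration):

```python
# Compute FIRST sets for a toy grammar by iterating to a fixpoint.
# EPS marks the empty production; element order within sets may vary.
EPS = 'eps'
grammar = {
    'E':  [['T', "E'"]],
    "E'": [['+', 'T', "E'"], [EPS]],
    'T':  [['id'], ['(', 'E', ')']],
}

def first_sets(grammar):
    first = {nt: set() for nt in grammar}
    changed = True
    while changed:                       # repeat until nothing new is added
        changed = False
        for nt, productions in grammar.items():
            for prod in productions:
                for sym in prod:
                    # Nonterminal: take its FIRST set (minus eps);
                    # terminal: the symbol itself.
                    add = first[sym] - {EPS} if sym in grammar else {sym}
                    if not add <= first[nt]:
                        first[nt] |= add
                        changed = True
                    # Keep scanning only if this symbol can derive eps.
                    if sym in grammar and EPS in first[sym]:
                        continue
                    break
                else:
                    # Every symbol was nullable: the production derives eps.
                    if EPS not in first[nt]:
                        first[nt].add(EPS)
                        changed = True
    return first

print(first_sets(grammar))
# FIRST(E) = {'id', '('}, FIRST(E') = {'+', 'eps'}, FIRST(T) = {'id', '('}
```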

Of course, computer science is about trade-offs. If your grammar is simple enough, it makes sense to write a top-down parser. If it's complex (such as the grammars of most programming languages), then you might have to use a bottom-up parser to successfully accept the input.




Answer 5:


I have two guesses, though I doubt that either fully explains it:

  1. Top-down parsing can be slow. Backtracking recursive-descent parsers may require exponential time to complete their job. That would put severe limitations on the scalability of a compiler that uses a top-down parser.

  2. Better tools. If you can express the language in some variant of EBNF, then chances are you can Lex/Yacc your way past a whole lot of tedious code. There don't seem to be as many tools to help automate the task of putting together a top-down parser. And let's face it, grinding out parser code just isn't the fun part of toying with languages.
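
As one concrete example of the tooling point, here is a sketch using PLY (a Python implementation of Lex/Yacc, installable as `ply`). The choice of PLY rather than classic lex/yacc, and the grammar itself, are my own; the original answer names only Lex/Yacc generically.

```python
# The "Lex/Yacc your way past tedious code" idea, using PLY.
# Token rules and grammar rules are declared; the tables are generated.
import ply.lex as lex
import ply.yacc as yacc

tokens = ('NUMBER', 'PLUS', 'TIMES')

t_PLUS = r'\+'
t_TIMES = r'\*'
t_ignore = ' \t'

def t_NUMBER(t):
    r'\d+'
    t.value = int(t.value)
    return t

def t_error(t):
    raise SyntaxError(f"illegal character {t.value[0]!r}")

# Precedence declarations replace the expr/term/factor layering a
# hand-written recursive-descent parser would need.
precedence = (
    ('left', 'PLUS'),
    ('left', 'TIMES'),
)

def p_expr_binop(p):
    '''expr : expr PLUS expr
            | expr TIMES expr'''
    p[0] = p[1] + p[3] if p[2] == '+' else p[1] * p[3]

def p_expr_number(p):
    'expr : NUMBER'
    p[0] = p[1]

def p_error(p):
    raise SyntaxError(f"syntax error at {p!r}")

lexer = lex.lex()
parser = yacc.yacc()
print(parser.parse("2 + 3 * 4"))  # 14
```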




Answer 6:


I have never seen a real comparison between top-down and shift-reduce parsers: just two small programs, run side by side, one using the top-down approach and the other the bottom-up approach, each about ~200 lines of code,

each able to parse any kind of custom binary operator and mathematical expression, both sharing the same grammar-declaration format, and then perhaps extended with variable declarations and assignments to show how 'hacks' (non-context-free constructs) can be implemented.

So, how can we speak honestly of something we have never done: rigorously comparing the two approaches?



Source: https://stackoverflow.com/questions/4316385/why-is-bottom-up-parsing-more-common-than-top-down-parsing
