As the author of the book in question, let me weigh in on this issue.
The use of eval in the book is largely a historical artifact of the conversion from Python 2 to Python 3 (although the same "flaw" exists in the use of input in Python 2). I am well aware of the dangers of using eval in production code where input may come from an untrusted source, but the book is not about production code for a web-based system; it's about learning some CS and programming principles. There's really nothing in the book that could be remotely considered production code. My goal is always to use the simplest approach that allows me to illustrate the point I am trying to make, and eval helps to do that.
I do not agree with the crowd proclaiming eval evil in all contexts. It is very handy for simple programs and scripts that are run only by their author, and in that context it is perfectly safe. It allows multiple values to be entered on a single line and accepts full expressions as input. Pedagogically, it emphasizes the concept of expression evaluation, and it exposes all the power (and danger) of an interpreted language. I use eval all the time in my own personal programs (and not just in Python). In hindsight, I absolutely agree that I should have included some discussion of the potential risks of eval; this is something I always do in my classes, anyway.
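To make the trade-off concrete, here is a minimal sketch (my own illustration, not code from the book) of both sides: the convenience of eval for expression-style input, and the standard library's ast.literal_eval, which accepts only literals and so is the usual safe substitute when the input might not be trusted:

```python
import ast

# eval turns a string into the result of evaluating it as a Python
# expression -- handy in quick scripts that read numeric input.
x, y = eval("3, 4")            # several inputs from one line
print(x + y)                   # 7

area = eval("3.14 * 2 ** 2")   # full expressions work too
print(area)

# The danger: eval will run *any* expression, e.g.
#   eval("__import__('os').remove('important_file')")
# so it must never see untrusted text.  ast.literal_eval parses
# only literals (numbers, strings, tuples, lists, dicts, ...) and
# raises ValueError on anything else, including function calls.
print(ast.literal_eval("[1, 2, 3]"))   # [1, 2, 3]
```

In a program run only by its author, the eval lines above are exactly the kind of convenience the book relies on; literal_eval is the drop-in replacement to reach for the moment the input comes from anyone else.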
The bottom line is that there are numerous ways this book could be improved (there always are). I don't think using eval is a fatal flaw; it is appropriate for the types of programs being illustrated and the context in which those programs appear. I am not aware of any other "insecurities" in the way Python is used in the book, but you should be warned (as the Preface explains) that there are numerous places where the code is not exactly "Pythonic."