Question
We all know about short-circuiting in logical expressions, i.e. when

if ( False AND myFunc(a) ) then
    ...

doesn't bother executing myFunc() because there's no way the if condition can be true.
I was curious as to whether there is an equivalent for your everyday algebraic equation, say

result = C*x/y + z

If C = 0, there is no point in evaluating the first term. It wouldn't matter much performance-wise if x and y were scalars, but if we pretend they are large matrices and the operations are costly (and applicable to matrices), then surely it would make a difference. Of course, you could avoid such an extreme case by throwing in an if C != 0 statement.
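For instance, a minimal sketch of that guard (written in Haskell to match the answers below; the names c, x, y, z are illustrative scalars, not from the original question):

-- Guarding the costly term by hand: when c is zero, the
-- multiplication and division are skipped entirely.
result :: Double -> Double -> Double -> Double -> Double
result c x y z = (if c /= 0 then c * x / y else 0) + z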
So my question is whether such a feature exists and whether it is useful. I'm not much of a programmer, so it probably does exist under some name I haven't come across; if so, please enlighten me :)
Answer 1:
The concept you are talking about goes under different names: lazy evaluation, non-strict evaluation, call by need, to name a few, and it is actually much more powerful than just avoiding a multiplication here and there.
There are programming languages like Haskell or Frege whose evaluation model is non-strict. There it would be quite easy to write your "short-circuiting" multiplication operator; for example, you could write something like:
infixl 7 *?     -- tell the compiler that *? is a left-associative infix operator
                -- with precedence 7 (like the normal *)

(*?) :: (Eq a, Num a) => a -> a -> a
0 *? _ = 0      -- left operand is 0: return 0 without evaluating the right operand
y *? x = y * x  -- fall back to standard multiplication
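To see the short-circuiting at work, here is a quick self-contained check: the undefined on the right would crash the program if it were ever evaluated, yet the first line prints 0.

-- Demonstration: with non-strict evaluation, the right operand of *?
-- is never demanded when the left operand matches the 0 pattern.
main :: IO ()
main = do
  print (0 *? (undefined :: Integer))  -- prints 0; undefined is never forced
  print (3 *? 14 :: Integer)           -- prints 42 via the fallback case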
Answer 2:
If the data is large and/or complex and the operations are costly, then the implementation of the operation should perform appropriate shortcut checks before committing to the costly operation. It's an internal detail of the implementation of the operator (say, matrix *) but really has nothing to do with the language concept of "multiply" and should have little impact on how you write your calculations.
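A minimal sketch of what such an internal shortcut could look like, assuming a hypothetical Matrix type and mul function (not any particular library's API):

import Data.List (transpose)

-- A hypothetical matrix type with an explicit zero representative.
data Matrix = Zero | Dense [[Double]]

-- The shortcut check lives inside the operation itself: a zero factor
-- is detected up front, so the O(n^3) loop below never runs for it.
mul :: Matrix -> Matrix -> Matrix
mul Zero _ = Zero
mul _ Zero = Zero
mul (Dense a) (Dense b) =
  Dense [ [ sum (zipWith (*) row col) | col <- transpose b ] | row <- a ]

The caller still just writes mul a b; the check is an implementation detail of the operator, invisible at the call site.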
Source: https://stackoverflow.com/questions/8145894/is-there-such-a-thing-as-short-circuit-multiplication