I have a query:
SELECT
someFields
FROM
someTable
WHERE
cheapLookup=1
AND (CAST(someField as FLOAT)/otherField)<0.9
Will the expensive division be evaluated only for rows where cheapLookup=1, or might it run against every row?
SQL is declarative: you tell the database what you want, not how you want it done. The database is entirely free to evaluate lazily or eagerly. In fact, it could evaluate the predicates thrice, in reverse order, for all I know :)
In rare cases, you can improve performance by reframing your query so that it avoids a specific expensive operation. For example, moving the floating-point math to a separate statement forces it to run only after the cheap filter has been applied:
declare @t table (id int, someField float, otherField float)

-- step 1: apply only the cheap filter
insert @t select id, someField, otherField from someTable
where cheapLookup = 1

-- step 2: the expensive division now runs only on the rows kept above
delete @t where (CAST(someField as FLOAT)/otherField) >= 0.9
In your example, I would expect SQL Server to choose the best way without any hints or tricks.
What you're referring to is short-circuit evaluation, which other languages (e.g. C#) support.
I believe SQL Server can short-circuit, but whether it does depends on the scenario and on what happens in the optimizer, so there is certainly no guarantee that it will. It just might.
Excellent reference on this by Remus Rusanu here:
http://rusanu.com/2009/09/13/on-sql-server-boolean-operator-short-circuit/
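If you do need a guaranteed evaluation order, a commonly suggested workaround (a sketch, not taken from that article) is to fold both predicates into a CASE expression, since CASE branches are generally evaluated in the order written (with documented caveats around aggregates and constant folding):

SELECT
someFields
FROM
someTable
WHERE 1 = CASE
    WHEN cheapLookup <> 1 THEN 0
    WHEN (CAST(someField as FLOAT)/otherField) < 0.9 THEN 1
    ELSE 0
END

The division is only reached for rows that fall through the first branch, i.e. those with cheapLookup = 1.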
It depends on how SQL Server optimizes the query; you can inspect the execution plan (for example in Query Analyzer) to see what it does in your particular case.
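For example, one way to capture the actual plan for the original query from any client is SET STATISTICS XML:

SET STATISTICS XML ON

SELECT
someFields
FROM
someTable
WHERE
cheapLookup=1
AND (CAST(someField as FLOAT)/otherField)<0.9

SET STATISTICS XML OFF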
One way to try to force the cheap lookup to be evaluated first would be:
WITH QueryResult AS (
SELECT
someFields
FROM
someTable
WHERE
cheapLookup=1
)
SELECT * FROM QueryResult WHERE (CAST(someField as FLOAT)/otherField)<0.9
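Note that a CTE is just a named subquery, though: SQL Server is free to inline it back into the outer query and evaluate both predicates together. If you need a hard guarantee, materialize the intermediate rows yourself, for example into a temp table (#cheapRows is just an illustrative name):

-- materializing forces the cheap filter to complete first
SELECT someFields, someField, otherField
INTO #cheapRows
FROM someTable
WHERE cheapLookup=1

-- the division now only ever sees rows that passed the cheap filter
SELECT someFields FROM #cheapRows
WHERE (CAST(someField as FLOAT)/otherField)<0.9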
It depends on the query plan, which is determined by the estimated cost of each considered alternative plan that would produce correct results.
If the predicate 'cheapLookup = 1' can use an index, and it is sufficiently selective, SQL Server would likely choose to seek on that index and apply the second predicate as a residual (that is, only evaluating it on rows that are matched by the seeking operation).
On the other hand, if cheapLookup is not the leading key in an index, or if it is not very selective, SQL Server might choose to scan, applying both predicates to every row encountered.
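As a sketch, the kind of index that enables the first case might look like this (the index name and INCLUDE list are assumptions, chosen so the index covers the query):

-- 'cheapLookup = 1' becomes the seek predicate; the division is then
-- evaluated only as a residual on the rows the seek returns
CREATE INDEX IX_someTable_cheapLookup
ON someTable (cheapLookup)
INCLUDE (someField, otherField)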
The second predicate will not be chosen for a seeking operation, unless there happens to be an indexed computed column on the whole expression, and using that index turns out to be the cheapest way to execute the whole query. If a suitable index exists, SQL Server would seek on 'second predicate result < 0.9', and apply 'cheapLookup=1' as a residual. There is also the possibility that the indexed computed column has cheapLookup as its second key, which would result in a pure seek, with no residual.
The other thing about the second predicate is that without a computed column (whether or not indexed), SQL Server will have to guess at the selectivity of the expression. With the computed column, the server might be able to create statistics on the expression-result column, which will help the optimizer. Note that a computed column on 'CAST(someField as FLOAT)/otherField' would have to be persisted before it could be indexed or have statistics created on it, because it contains an imprecise data type.
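A sketch of that computed-column option (names are illustrative, and this assumes otherField is never zero, otherwise persisting the column would fail on existing rows):

-- PERSISTED is required because the expression is of an imprecise (float) type
ALTER TABLE someTable
ADD ratio AS (CAST(someField as FLOAT)/otherField) PERSISTED

-- indexing the computed column with cheapLookup as the second key
-- enables the pure seek described above
CREATE INDEX IX_someTable_ratio
ON someTable (ratio, cheapLookup)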
In summary, it's not the complexity of the expression that counts so much as the estimated cost of the whole plan that uses each of the available access methods considered by the optimizer.