Others have answered this question with respect to tight loops, although there seems to be an obvious performance difference between Rex Kerr's examples, which I have commented on.
This answer is really targeted at people who might investigate a need for tight-loop optimisation as a design flaw.
I am relatively new to Scala (about a year or so), but my feel for it, thus far, is that it allows you to defer many aspects of design, implementation, and execution relatively easily (with enough background reading and experimentation :)
Deferred Design Features:
- Abstract Types
- Explicitly Typed Self References
- Views
- Mixins
Deferred Implementation Features:
- Variance Annotations
- Compound Types
- Local Type Inference
Deferred Execution Features: (sorry, no links)
- Thread-safe lazy values
- Pass-by-name
- Monadic stuff
These features, to me, are the ones that help us to tread the path to fast, tight applications.
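As a rough sketch of the last two execution features (the object and method names below are mine, purely for illustration):

object DeferredExecution {
  // Thread-safe lazy value: the right-hand side runs at most once, on
  // first access, guarded by compiler-generated initialisation code.
  lazy val expensiveTable: Map[String, Int] =
    (1 to 1000).map(i => i.toString -> i).toMap

  // Pass-by-name: 'body' is only evaluated where it is used inside 'time',
  // so the caller's expression is deferred until this point.
  def time[A](label: String)(body: => A): A = {
    val start = System.nanoTime()
    val result = body
    println(label + " took " + (System.nanoTime() - start) + " ns")
    result
  }
}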
Rex Kerr's examples differ in what aspects of execution are deferred. In the Java example, allocation of memory is deferred until its size is calculated, whereas the Scala example defers the mapping lookup. To me, they seem like completely different algorithms.
Here's what I think is more of an apples-to-apples equivalent of his Java example:
val bigEnough = array.collect {
  case k: String if k.length > 2 && mapping.contains(k) => mapping(k)
}
No intermediary collections, no Option instances, etc.
This also preserves the collection type, so bigEnough's type is Array[File]. Array's collect implementation will probably be doing something along the lines of what Mr Kerr's Java code does.
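For completeness, here is the kind of setup I'm assuming around that snippet (the declarations of array and mapping are my guesses for illustration, not Mr Kerr's actual code):

import java.io.File

// Assumed inputs: an array of keys and a mapping from keys to files.
val array: Array[String] = Array("a", "abc", "abcd")
val mapping: Map[String, File] = Map(
  "abc" -> new File("abc.txt"),
  "abcd" -> new File("abcd.txt")
)

// bigEnough is inferred as Array[File]; no intermediate collections or
// Option instances are allocated along the way.
val bigEnough = array.collect {
  case k: String if k.length > 2 && mapping.contains(k) => mapping(k)
}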
The deferred design features I listed above would also allow Scala's collection API developers to implement a fast, Array-specific collect in future releases without breaking the API. This is what I mean by treading the path to speed.
Also:
val bigEnough = array.withFilter(_.length > 2).flatMap(mapping.get)
The withFilter method that I've used here instead of filter fixes the intermediate collection problem, but there is still the Option instance issue.
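For contrast, here is an Option-free, hand-rolled version along the lines of what I imagine the Java code does (my own sketch, not Mr Kerr's code):

import java.io.File

// Builds the result with a while loop and an array builder, so neither
// intermediate collections nor Option instances are created.
def bigEnoughLoop(array: Array[String], mapping: Map[String, File]): Array[File] = {
  val builder = Array.newBuilder[File]
  var i = 0
  while (i < array.length) {
    val k = array(i)
    if (k.length > 2 && mapping.contains(k)) builder += mapping(k)
    i += 1
  }
  builder.result()
}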
A simple example of execution speed in Scala is logging.
In Java we might write something like:
if (logger.isDebugEnabled())
    logger.debug("trace");
In Scala, this is just:
logger.debug("trace")
because the message parameter to debug in Scala has the type => String, which I think of as a parameter-less function that is only evaluated if and when it is used, and which the documentation calls pass-by-name.
EDIT {
Functions in Scala are objects, so there is an extra object allocation here. For my work, the cost of that trivial object is worth it to remove the possibility of a log message being needlessly evaluated.
}
This doesn't make the code faster, but it does make it more likely to be faster, and we're less likely to have the experience of going through and cleaning up other people's code en masse.
To me, this is a consistent theme within Scala.
Hard code alone fails to capture why Scala is faster, though it does hint at it a bit.
I feel that it's a combination of code re-use and the ceiling of code quality in Scala.
In Java, awesome code is often forced to become an incomprehensible mess, and so isn't really viable within production-quality APIs, as most programmers wouldn't be able to use it.
I have high hopes that Scala could allow the Einsteins among us to implement far more competent APIs, potentially expressed through DSLs. The core APIs in Scala are already far along this path.