We currently have the following compound if statement...
if ((billingRemoteService == null)
        || billingRemoteService.getServiceHeader() == null
        || !"00".equals(billingRemoteService.getServiceHeader().getStatusCode())
        || /* ...null checks continue down the chain to getEcpdId()... */ )
You can avoid both try/catch and compound ifs by using Java 8 Optionals:
import java.util.List;
import java.util.Optional;

public class Optionals
{
    interface BillingRemoteService { Optional<ServiceBody> getServiceBody(); }
    interface ServiceBody          { Optional<ServiceResponse> getServiceResponse(); }
    interface ServiceResponse      { Optional<List<Customer>> getCustomersList(); }
    interface Customer             { Optional<BillAccountInfo> getBillAccountInfo(); }
    interface BillAccountInfo      { Optional<EcpId> getEcpdId(); }
    interface EcpId                { Optional<BillingRemoteService> getEcpdId(); }

    Object test(BillingRemoteService billingRemoteService) throws Exception
    {
        return billingRemoteService.getServiceBody()
                .flatMap(ServiceBody::getServiceResponse)
                .flatMap(ServiceResponse::getCustomersList)
                .map(l -> l.get(0))
                .flatMap(Optional::ofNullable)
                .flatMap(Customer::getBillAccountInfo)
                .flatMap(BillAccountInfo::getEcpdId)
                .orElseThrow(() -> new Exception("Failed to get information for Account Number "));
    }
}
This requires you to change the signatures of those methods, but if they often return null, that should be done anyway in my opinion. If a null is not expected (i.e. it really is exceptional), or changing the method signatures is not possible, then exceptions can be used. I don't think the runtime difference is a problem unless you call this operation hundreds of thousands of times per second.
OK, none of the answers really answered the question, though theZ's suggestion was the fastest way to check this given my current circumstances. None of this code was designed or written by me, and the application it is part of is massive, which would mean person-years of refactoring to handle every situation like this.
So, for everyone's edification:
I whipped up a quick test that mocks the classes required for both methods. I don't care how long any of the individual methods of the classes run, as it is irrelevant to my question. I also built/ran with JDKs 1.6 and 1.7. There was virtually no difference between the two JDKs.
If things work (i.e. no nulls anywhere), the average times are:
Method A (compound IF): 4ms
Method B (exceptions): 2ms
So using exceptions when objects are not null is twice as fast as a compound IF.
Things get even more interesting if I deliberately force a null pointer exception at the get(0) statement.
The averages here are:
Method A: 36ms
Method B: 6ms
So, it's clear that in the original case documented, exceptions are the way to go, cost-wise.
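For anyone who wants to repeat an experiment like this, here is a minimal sketch of the kind of harness involved. The Node class, iteration counts, and hand-rolled timing are illustrative assumptions rather than the original mocked classes, and a dedicated tool such as JMH would give more trustworthy numbers:

// Illustrative micro-benchmark sketch, not the original test harness.
public class IfVsCatchBenchmark {

    static class Node {
        Node next;
        Node(Node next) { this.next = next; }
    }

    // Method A: explicit null checks down the chain.
    static boolean checkWithIfs(Node n) {
        return n != null && n.next != null && n.next.next != null;
    }

    // Method B: just dereference, and treat an NPE as "not present".
    static boolean checkWithCatch(Node n) {
        try {
            return n.next.next != null;
        } catch (NullPointerException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        Node healthy = new Node(new Node(new Node(null))); // no nulls anywhere
        Node broken  = new Node(null);                     // forces the NPE path

        final int iterations = 10000000;
        for (Node subject : new Node[] { healthy, broken }) {
            // Warm up so the JIT compiles both methods before timing.
            for (int i = 0; i < 1000000; i++) { checkWithIfs(subject); checkWithCatch(subject); }

            long t0 = System.nanoTime();
            boolean sink = false;
            for (int i = 0; i < iterations; i++) sink ^= checkWithIfs(subject);
            long ifTime = System.nanoTime() - t0;

            t0 = System.nanoTime();
            for (int i = 0; i < iterations; i++) sink ^= checkWithCatch(subject);
            long catchTime = System.nanoTime() - t0;

            System.out.printf("ifs: %d ms, catch: %d ms (sink=%b)%n",
                    ifTime / 1000000, catchTime / 1000000, sink);
        }
    }
}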
As a rule of thumb, exception handling is more expensive than ifs, but I agree with TheZ that the best approach would be to benchmark / profile both versions under the expected load. The difference may become negligible once you consider I/O and networking costs, which generally dwarf CPU costs by orders of magnitude.
Also, please notice that the !"00".equals(...) status-code condition should probably still be checked in the second version.
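For illustration, the try/catch version with that check kept might look roughly like the sketch below. The WebServicesException constructor and the exact message are assumptions; only the getter chain comes from the original code:

try {
    // A bad status code does not throw on its own, so it must still be tested explicitly.
    if (!"00".equals(billingRemoteService.getServiceHeader().getStatusCode())
            || billingRemoteService.getServiceBody().getServiceResponse()
                   .getCustomersList().get(0).getBillAccountInfo().getEcpdId() == null) {
        throw new WebServicesException("Failed to get information for Account Number ...");
    }
} catch (NullPointerException | IndexOutOfBoundsException e) {
    // Java 7+ multi-catch; any missing link in the chain lands here.
    throw new WebServicesException("Failed to get information for Account Number ...");
}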
As others have stated, exceptions are more expensive than if statements. However, there is an excellent reason not to use them in your case.
Exceptions are for exceptional events
When unpacking a message, something not being in the message is expected error checking, not an exceptional event.
This block of code is far too interested in the data held by other instances. Add some behavior to those other classes. Right now all of the behavior lives in code outside the classes that hold the data, which is bad object orientation.
-- for billingRemoteService --
public boolean hasResponse();
public BillingRemoteResponse getResponse();
-- for BillingRemoteResponse --
public List<Customer> getCustomerList();
-- for Customer --
public Customer(Long ecpdId, ...) {
    if (ecpdId == null) throw new IllegalArgumentException(...);
}
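A hypothetical caller, assuming methods along those lines exist (the names mirror the sketch above, and a getEcpdId() getter on Customer is also an assumption), could then shrink to something like:

// Hypothetical caller based on the sketched API above.
if (!billingRemoteService.hasResponse()
        || billingRemoteService.getResponse().getCustomerList().isEmpty()) {
    throw new WebServicesException("Failed to get information for Account Number ...");
}
// Each Customer validated its ecpdId in the constructor, so no further null checks are needed.
Long ecpdId = billingRemoteService.getResponse().getCustomerList().get(0).getEcpdId();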
I must disagree with Edwin Buck's argument.
He says:
As others have stated, exceptions are more expensive than if statements. However, there is an excellent reason not to use them in your case. "Exceptions are for exceptional events"
When unpacking a message, something not being in the message is expected error checking, not an exceptional event.
This is essentially saying that if you do error checking then an error is expected (because you were looking for it) and therefore not exceptional.
But that is not what "exceptional event" means. An exceptional event means an unusual / out of the ordinary / unlikely event. Exceptional is about the likelihood of the event happening, not about whether you are (or should be) expecting and/or looking for it.
So, going back to first principles, the underlying reasoning for avoiding exceptions is a cost trade-off: the cost of explicitly testing for the event versus the cost of throwing, catching and handling the exception. To be precise:
If the probability of the event is P, then:

the average cost of using exceptions is
    P * cost("create/throw/catch/handle the exception") + (1 - P) * cost("no explicit test")

the average cost of not using exceptions is
    P * cost("test for the condition when it occurs + error handling") + (1 - P) * cost("test when the condition doesn't occur")
And of course, this is where "exceptional" == "unlikely" comes in. Because, as P gets closer to 0, the overheads of using exceptions become less and less significant. And if P is sufficiently small (depending on the problem), exceptions will be MORE efficient.
So in answer to the original question, it is not simply the cost of if / else versus exceptions. You also need to take account of the likelihood of the event (error) that you are testing for.
The other thing to note is that there is a lot of scope for the JIT compiler to optimize both versions.
In the first version, there is potentially a lot of repeated calculation of subexpressions, and repeated behind-the-scenes null checking. The JIT compiler may be able to optimize some of this away, though that depends on whether there might be side effects. If it can't, then the sequence of tests could be rather expensive.
In the second version, there is scope for the JIT compiler to notice that an exception is being thrown and caught in the same method without making use of the exception object. Since the exception object doesn't "escape", it could (in theory) be optimized away. And if that happens, the overhead of using exceptions will almost vanish.
Here is a worked example to make it clear what my informal equations mean:
// Version 1
if (someTest()) {
    doIt();
} else {
    recover();
}

// Version 2
try {
    doIt();
} catch (SomeException ex) {
    recover();
}
As before, let P be the probability that the exception-causing event occurs.
Version #1 - if we assume that the cost of someTest() is the same whether the test succeeds or fails, and use "doIt-success" to denote the cost of doIt() when no exception is thrown, then the average cost of one execution of version #1 is:
V1 = cost("someTest") + P * cost("recover") + (1 - P) * cost("doIt-success")
Version #2 - if we use "doIt-fail" to denote the cost of doIt() in the case where the exception is thrown, then the average cost of one execution of version #2 is:
V2 = P * (cost("doIt-fail") + cost("throw/catch") + cost("recover")) +
     (1 - P) * cost("doIt-success")

We subtract one from the other to give the difference in average costs:

V1 - V2 = cost("someTest") + P * cost("recover") + (1 - P) * cost("doIt-success")
          - P * cost("doIt-fail") - P * cost("throw/catch")
          - P * cost("recover") - (1 - P) * cost("doIt-success")

        = cost("someTest") - P * (cost("doIt-fail") + cost("throw/catch"))
Notice that the costs of recover() and the costs where doIt() succeeds cancel out. We are left with a positive component (the cost of doing the test to avoid the exception) and a negative component that is proportional to the probability of the failure. The equation tells us that no matter how expensive the throw / catch overheads are, if the probability P is close enough to zero, the difference will be positive; that is, the exception version will be cheaper on average.
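To put purely illustrative numbers on that (these figures are made up, not measurements): if cost("someTest") is 5 ns and cost("doIt-fail") + cost("throw/catch") is 5000 ns, then V1 - V2 = 5 - P * 5000, which is positive whenever P < 0.001. With those figures the exception version wins on average as long as fewer than one call in a thousand actually hits the error.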
In response to this comment:
The real reason you shouldn't catch unchecked exceptions for flow control is this: what happens if one of the methods you call throws an NPE? You catching the NPE assumes that it came from your code when it may have come from one of the getters. You may be hiding a bug underneath the code and this can lead to massive debugging headaches (personal experience). Performance arguments are useless when you might be hiding bugs in your (or others') code by catching an unchecked exception like NPE or IOOBE for flow control.
This is really the same argument as Edwin Buck's.
The problem is what does "flow control" mean?
On the one hand, throwing and catching exceptions is a form of flow control. So that would mean that you should never throw and catch unchecked exceptions. That clearly makes no sense.
So then we fall back to arguing about different kinds of flow control, which is really the same as arguing about what is "exceptional" versus what is "non-exceptional".
I recognize that you need to be careful when catching NPE's and similar to make sure that you don't catch one that comes from an unexpected source (i.e. a different bug). But in the OP's example, there is minimal risk of that. And you can and should check that those things that look like simple getters really are simple getters.
And you also have to recognize that catching NPE (in this case) results in simpler code that is likely to be more reliable than a long sequence of conditions in an if statement. Bear in mind that this "pattern" could be replicated in lots of places.
The bottom line is that the choice between exceptions and tests CAN BE complicated. A simple mantra that tells you to always use tests is going to give you the wrong solution in some situations. And the "wrongness" could be less reliable and/or less readable and/or slower code.
You could mix Groovy into your JVM-based application; the check gets considerably simpler then:

def result = billingRemoteService?.
        serviceBody?.
        serviceResponse?.
        customersList?.
        getAt(0)

if ('00' != billingRemoteService?.serviceHeader?.statusCode ||
        result?.billAccountInfo?.ecpdId == null)
    throw new WebServicesException
    ...