Question
I am working on improving the performance of the data access layer of an existing ASP.NET web application. The scenario is as follows:
- It is a web-based ASP.NET application.
- The data access layer is built with NHibernate 1.2 and exposed as a WCF service.
- The entity classes are marked with DataContract.
- Lazy loading is not used, and because relations are eagerly fetched, a huge number of database objects are loaded into memory. The number of hits to the database is also high. For example, when I profiled the application with NHProfiler, loading a single entity by its primary key issued 50+ SQL calls.
- I also cannot change the code much, since it is an existing live application with no NUnit test cases at all.
Can I please get some suggestions here?
EDIT 1: I have tried lazy loading, but the issue is that since the entity is also used as a DataContract, serialization triggers the lazy loads. Using DTO objects is an option, but that is a huge change because the number of entities is large. Without test cases this would require an enormous amount of manual testing effort.
EDIT 2: The project was written long ago with no flexibility for writing unit tests. For example, the entity itself contains the CRUD operations and uses the NHibernate session directly:
class SampleEntity : ICrudOperation
{
    // fields and properties

    public IList<SampleEntity> Update()
    {
        // perform business logic (which can be huge and can call the
        // ICrudOperation implementations of other entities)
        ISession session = GetSessionFromSomewhere();
        session.Update(this);
        return new List<SampleEntity> { this }; // the original snippet omitted the return
    }
}
This is just an example for Update, and there are around 400 interdependent entities. Is there a way to write unit tests for this?
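One common way to get tests around code like this is to introduce a seam for the session lookup. The sketch below is hypothetical (the SessionFactory delegate is not in the original code): it keeps the existing static lookup as the production default, but lets a test substitute a fake ISession (e.g. via a mocking library such as Moq) without touching the database.

```csharp
class SampleEntity : ICrudOperation
{
    // Hypothetical seam: production keeps the existing static lookup,
    // while a test can assign a stub before calling Update().
    public static Func<ISession> SessionFactory = GetSessionFromSomewhere;

    public IList<SampleEntity> Update()
    {
        // ...business logic as before...
        ISession session = SessionFactory();
        session.Update(this);
        return new List<SampleEntity> { this }; // placeholder; the original omits the return
    }
}

// In a test (sketch, using a mocking library):
//   SampleEntity.SessionFactory = () => mockSession.Object;
//   entity.Update();
//   // assert mockSession.Update(entity) was called
```

With one seam like this per entity (or one shared static), characterization tests can be added incrementally before any refactoring.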
Answer 1:
It does seem that the architecture here could be improved.
The main problem seems to be that huge amounts of data are being read. Does all of it need to be? For example, if entity A has a list of child entities B that is loaded eagerly, but only fields from entity A are displayed on the page, then only entity A needs to be read. If everything is displayed on one page, consider re-designing it so that the user navigates to a separate page to see entity B's data.
Assuming that you only display data from entity A, or that the site can be re-designed to do so, the first step is to turn on lazy loading of the child entities so that they are only read when the data is really needed. Secondly, if you continue to return the entities themselves, turning on lazy loading will have no effect: when the serializer serializes your data, the child entities will still be read. You will need to introduce data transfer objects (DTOs) to pass the data over the wire. These will be similar to your entities but will only have fields for the data you actually use on the page. You then translate your entities to DTOs; because you never touch the child collections you don't need, and lazy loading is configured, that data never gets read.
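As a sketch of that translation step (the DTO shape and property names such as Id and Name are illustrative, not taken from the question):

```csharp
// Illustrative DTO: only the fields the page actually displays.
[DataContract]
public class SampleEntityDto
{
    [DataMember] public int Id { get; set; }
    [DataMember] public string Name { get; set; }
    // No child collections here, so WCF serialization never touches
    // the lazy-loaded relations on the entity.
}

public static class SampleEntityTranslator
{
    public static SampleEntityDto ToDto(SampleEntity entity)
    {
        // Reading only scalar properties leaves the lazy collections
        // uninitialized, so no extra SQL is issued.
        return new SampleEntityDto { Id = entity.Id, Name = entity.Name };
    }
}
```

The service then returns SampleEntityDto instead of SampleEntity, so the entity never reaches the serializer.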
It is worth investigating an upgrade to the latest version of NHibernate. With no unit tests this will probably be scary, but it is definitely worth it.
Introducing a second-level cache will probably have very little effect, as it really starts to make a difference when you are getting lots of hits in a distributed environment. You have more fundamental problems to resolve first.
Answer 2:
Suggestion 1
- Enable lazy-loading
- Use a custom NhibernateDataContractSerializerSurrogate to remove proxies before they are sent via WCF (check out Allan Ritchie's reply here)
This combination will reduce the number of initial SQL joins, and reduce the size of your object graph.
However, this will lead to the next problem:
Your consumers/clients will not have access to all of the data and will need to make multiple calls to the server to retrieve child objects/collections.
You could try to anticipate which child objects/collections will be needed (and eager-load them), but that could be a cumbersome exercise. A simpler option is to check for null objects/collections at the consumer and make a call to the server to fetch the additional data.
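The surrogate idea above can be sketched roughly as follows. This is an abridged, hypothetical sketch (the class name is illustrative, and IDataContractSurrogate has several more members, which can simply return their inputs unchanged): it replaces uninitialized NHibernate proxies and collections with null before WCF serializes them.

```csharp
// Sketch: strip uninitialized NHibernate proxies during WCF serialization.
public class NHibernateProxySurrogate : IDataContractSurrogate
{
    public Type GetDataContractType(Type type)
    {
        // Map a runtime proxy type back to the real entity type so the
        // serializer uses the entity's data contract. (Assumes the proxy
        // subclasses the entity, as NHibernate's class proxies do.)
        return typeof(INHibernateProxy).IsAssignableFrom(type)
            ? type.BaseType
            : type;
    }

    public object GetObjectToSerialize(object obj, Type targetType)
    {
        // An uninitialized proxy would trigger lazy loading (and fail once
        // the session is closed); send null instead.
        if (obj != null && !NHibernateUtil.IsInitialized(obj))
            return null;
        return obj;
    }

    // Remaining IDataContractSurrogate members omitted; in a full
    // implementation they pass their inputs through unchanged.
}
```

The surrogate is attached via a DataContractSerializerOperationBehavior on the service's operations.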
Suggestion 2
Are your SQL JOINs slow? You might try replacing fetch=join with fetch=select, and see if that speeds things up.
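In an hbm.xml mapping, that change might look like the following (an illustrative fragment, not taken from the question; collection and column names are assumed):

```xml
<!-- fetch="select" issues a separate SELECT for the collection instead of
     joining it into the main query; combined with lazy="true" the SELECT
     only runs when the collection is actually accessed. -->
<bag name="Children" fetch="select" lazy="true">
  <key column="ParentId" />
  <one-to-many class="ChildEntity" />
</bag>
```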
Suggestion 3
Determine the time being spent in SQL versus the time taken to send data over the wire. You might consider compressing your object graph before sending it over the wire.
Suggestion 4
Swap any NHibernate lazy proxies for your own custom lazy-load proxies before returning the object graph to the consumer. Your custom proxies will lazy-load data over WCF/web services (instead of calling the database). I've had some success with this technique on a rich client.
In theory, this should allow your client code to remain as-is. There's a thread here along those lines.
Hope that helps.
Answer 3:
Perhaps you should first switch to the latest stable version of NHibernate (2.1.2, AFAIK), although I myself prefer the 3.0 alpha. Later versions tend to have better stability and performance than earlier ones, and I think NHibernate's API is backwards-compatible enough to make the switch.
So, my recommended solution for you is this:
If you REALLY need to have eager loading, the number of queries used to return an object along with the full object graph will still be huge, but don't worry, there is hope!
NHibernate is an enterprise-level ORM which means that it scales very well. It can employ various caching strategies for various scenarios.
It has an internal cache for the following:
- Entities
- Collections
- Queries
- Timestamps
And it has two levels of these:
- First-level cache: caches things for a single ISession instance. It is ON by default.
- Second-level cache: caches things for everything coming from the same ISessionFactory. It is OFF by default.
The above description is very basic, but this is exactly what you need. You can find more about the subject on nhibernate.info and Ayende's blog, in several Stack Overflow questions, and in various other places as well.
You need to do two things:
- Make sure that all your ISessions come from the same ISessionFactory
- Enable the second-level cache (the above links help you with that)
You'll gain many advantages with this approach, but mostly this: because every entity you have ever loaded gets cached, it only needs to be requested from the database once; after that, it comes from the cache. This vastly reduces the number of roundtrips to the database.
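Enabling the second-level cache might look roughly like this, in hibernate.cfg.xml and in a class mapping (an illustrative fragment; the provider choice and usage strategy depend on your setup, and HashtableCacheProvider is only suitable for testing on a single machine):

```xml
<!-- In hibernate.cfg.xml: turn the second-level cache on and pick a provider. -->
<property name="cache.use_second_level_cache">true</property>
<property name="cache.provider_class">
  NHibernate.Cache.HashtableCacheProvider, NHibernate
</property>

<!-- In the entity's mapping: opt the class into the cache.
     The <cache> element must be the first child of <class>. -->
<class name="SampleEntity">
  <cache usage="read-write" />
  <!-- id, properties, collections as before -->
</class>
```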
Answer 4:
In addition to the suggestions by others, have you looked at profiling your application with NHibernate Profiler? (There is a 30-day trial.)
NHibernate Profiler is a real-time visual debugger allowing a development team to gain valuable insight and perspective into their usage of NHibernate. The product is architected with input coming from many top industry leaders within the NHibernate community. Alerts are presented in a concise code-review manner indicating patterns of misuse by your application. To streamline your efforts to correct the misuse, we provide links to the problematic code section that triggered the alert.
Answer 5:
Configure a caching strategy inside NHibernate....
Source: https://stackoverflow.com/questions/2662428/improving-the-performance-of-an-nhibernate-data-access-layer