Currently I'm using EF and using its data context directly in all of my actions, but since I started reading about loose coupling and testability I'm thinking that's not the best approach.
ADO.NET connection pooling will be managing the connections behind the scenes. It basically won't matter how many different entities (and therefore repositories with their own context) you use; each DB operation takes its connection from the same pool.
The reason for the repository is to let you abstract/replace the way the entities are created, for testing and so on. The entity objects can be instantiated like normal objects, without the Context's services, so the test repository would do exactly that for its test data.
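For example, assuming a hypothetical Post entity (all names here are invented for illustration), a fake repository can build its test data without ever touching a context:

using System.Collections.Generic;
using System.Linq;

// "Post" is a stand-in for whatever EF entity you generate; like any EF entity,
// it is just a class, so test data can be newed up directly.
public class Post
{
    public int PostId { get; set; }
    public string Title { get; set; }
}

public class FakePostRepository
{
    private readonly List<Post> _posts = new List<Post>
    {
        new Post { PostId = 1, Title = "First post" },
        new Post { PostId = 2, Title = "Second post" }
    };

    public Post GetById(int id)
    {
        return _posts.SingleOrDefault(p => p.PostId == id);
    }
}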
ObjectContext uses connection pooling, so it won't be as inefficient as you might think. Also, SQL servers (e.g. MSSQL) are heavily optimized for lots of concurrent connections.
As for how to implement it, I'd go with some IRepository interface. You can then make specific interfaces, e.g. an IPostRepository deriving from IRepository, and finally implement those in concrete classes (for example, a real EF-backed class, and a fake in-memory one for testing).
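A rough sketch of that shape, reusing the hypothetical Post entity from the snippet above (BlogEntities and every other name here is illustrative, not a real API):

using System.Collections.Generic;
using System.Linq;

// The generic contract shared by all repositories.
public interface IRepository<T>
{
    T GetById(int id);
    IEnumerable<T> GetAll();
    void Add(T item);
}

// A post-specific contract layered on top of the generic one.
public interface IPostRepository : IRepository<Post>
{
    IEnumerable<Post> GetRecent(int count);
}

// The "real" implementation wraps the (hypothetical) generated EF context;
// a second, in-memory implementation of IPostRepository is what the tests use.
public class EfPostRepository : IPostRepository
{
    private readonly BlogEntities _context = new BlogEntities();

    public Post GetById(int id)       { return _context.Posts.Single(p => p.PostId == id); }
    public IEnumerable<Post> GetAll() { return _context.Posts.ToList(); }
    public void Add(Post item)        { _context.Posts.AddObject(item); _context.SaveChanges(); }

    public IEnumerable<Post> GetRecent(int count)
    {
        return _context.Posts.OrderByDescending(p => p.PostId).Take(count).ToList();
    }
}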
Problem 2: A way to avoid this would be to use something like the "ADO.NET C# POCO Entity Generator".
You can read this book; there is a good example of using the Repository pattern with LINQ in it.
There is also the article Using Repository and Unit of Work patterns with Entity Framework 4.0.
Firstly, I'm not aware of a requirement for each entity to have its own repository, so I'd junk that restriction.
For Scott H's implementation, I assume you are referring to the Nerd Dinner app, which by his own admission isn't really the repository pattern.
The objective of the repository pattern is, as you surmise, to isolate the data store from the layers above it. It's not purely for testing reasons; it also allows you to change the backing store without affecting your UI/business logic.
In purist terms you would create POCOs to return from the repository to your BL. By defining the repository contract with an interface, you could then pass around and program against the interface rather than a concrete implementation. This would allow you to pass in any object that implements the repository interface, whether your live repository or a mocked one.
In reality I use a repository with MVC, with LINQ to SQL as my backing store, which gives me a degree of flexibility over the actual data store. I use hand-crafted L2S objects in my BL; these have additional fields and functionality that aren't persisted to the backing store. This way I get some great functionality from the L2S side (change tracking, object hierarchy, etc.) while still being able to substitute a mocked repository for TDD.
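As a rough illustration of that last point (the entity name and columns here are hypothetical), the hand-crafted L2S class can carry members that are never mapped to the database:

using System.Data.Linq.Mapping;

// Only members marked with [Column] are persisted to the backing store.
[Table(Name = "Dinners")]
public partial class Dinner
{
    [Column(IsPrimaryKey = true)]
    public int DinnerId { get; set; }

    [Column]
    public string Title { get; set; }
}

// Extra fields and behaviour live in a partial class and never reach the database.
public partial class Dinner
{
    public int RsvpCount { get; set; }   // populated by business logic, not mapped

    public string DisplayTitle
    {
        get { return string.Format("{0} ({1} RSVPs)", Title, RsvpCount); }
    }
}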
You hit the nail on the head in identifying the difficulty with using Entities as business objects. After much trial and error, here's the pattern that we've settled into, which has been working very well for us:
Our application is divided into modules, and each module is divided into three tiers: Web (front-end), Core (business), and Data. In our case, each of these tiers is given its own project, so there is hard enforcement that keeps our dependencies from becoming tightly coupled.
The Core layer contains utility classes, POCOs, and repository interfaces.
The Web layer leverages these classes and interfaces to get the information it needs. For example, an MVC controller can take a particular repository interface as a constructor argument, so our IoC framework injects the correct implementation of that repository when the controller is created. The repository interface defines selector methods that return our POCO objects (also defined in the Core business layer).
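For instance, a controller might look something like this (the controller itself is illustrative; the repository interface and POCO are the ones shown further below):

using System.Web.Mvc;

// The IoC container injects whatever concrete class implements
// IEditLayoutChannelRepository (the Data-layer mapper in production, a fake in tests),
// so the controller only ever sees the Core-layer interface and POCO.
public class LayoutChannelController : Controller
{
    private readonly IEditLayoutChannelRepository _layoutChannels;

    public LayoutChannelController(IEditLayoutChannelRepository layoutChannels)
    {
        _layoutChannels = layoutChannels;
    }

    public ActionResult Details(int layoutChannelId)
    {
        EditLayoutChannel channel = _layoutChannels.GetById(layoutChannelId);
        return View(channel);
    }
}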
The Data layer's entire responsibility is to implement the repository interfaces defined in the Core layer. It has an Entity Framework context that represents our data store, but rather than returning the Entities (which are technically "data" objects), it returns the POCOs defined in the Core layer (our "business" objects).
In order to reduce repetition, we have an abstract, generic EntityMapper class, which provides basic functionality for mapping Entities to POCOs. This makes the majority of our repository implementations extremely simple. For example:
public class EditLayoutChannelEntMapper : EntityMapper<Entity.LayoutChannel, EditLayoutChannel>,
                                          IEditLayoutChannelRepository
{
    protected override System.Linq.Expressions.Expression<Func<Entity.LayoutChannel, EditLayoutChannel>> Selector
    {
        get
        {
            return lc => new EditLayoutChannel
            {
                LayoutChannelId = lc.LayoutChannelId,
                LayoutDisplayColumnId = lc.LayoutDisplayColId,
                ChannelKey = lc.PortalChannelKey,
                SortOrder = lc.Priority
            };
        }
    }

    public EditLayoutChannel GetById(int layoutChannelId)
    {
        return SelectSingle(c => c.LayoutChannelId == layoutChannelId);
    }
}
Thanks to the methods implemented by the EntityMapper base class, the above repository implements the following interface:
public interface IEditLayoutChannelRepository
{
    EditLayoutChannel GetById(int layoutChannelId);
    void Update(EditLayoutChannel editLayoutChannel);
    int Insert(EditLayoutChannel editLayoutChannel);
    void Delete(EditLayoutChannel layoutChannel);
}
EntityMappers do very little in their constructors, so it's okay if a controller has multiple repository dependencies. Not only does the Entity Framework reuse connections, but the Entity Contexts themselves are only created when one of the repository methods gets called.
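For anyone curious, here is a minimal sketch of what that EntityMapper base class might look like; this is an assumption reconstructed from the description above, not our actual code:

using System;
using System.Data.Objects;
using System.Linq;
using System.Linq.Expressions;

// Hypothetical reconstruction: the derived class supplies the Entity -> POCO
// projection via Selector, and the EF context is created lazily, only when a
// repository method actually executes.
public abstract class EntityMapper<TEntity, TPoco>
    where TEntity : class
{
    // Supplied by each concrete mapper (see EditLayoutChannelEntMapper above).
    protected abstract Expression<Func<TEntity, TPoco>> Selector { get; }

    // Hypothetical hook; the real base class presumably news up the module's
    // own ObjectContext here.
    protected abstract ObjectContext CreateContext();

    protected TPoco SelectSingle(Expression<Func<TEntity, bool>> predicate)
    {
        using (ObjectContext context = CreateContext())
        {
            return context.CreateObjectSet<TEntity>()
                          .Where(predicate)
                          .Select(Selector)
                          .SingleOrDefault();
        }
    }
}

Update, Insert, and Delete would follow the same pattern, creating the context only inside the method body.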
Each module also has a special Test project, which contains unit tests for the classes in these three tiers. We've even come up with a way to make our repositories and other data-access classes somewhat unit-testable. Now that we've got this basic infrastructure set up, adding functionality to our web application is generally pretty smooth and not too error-prone.