Download the code for this article.
Updated 14-Oct-2011: I modified the code sample to include controller unit tests and improved the config and logging services.
I recently started a consulting project as an architect on an ASP.NET MVC application and quickly found myself immersed in the world of N* open source tools. MVC (which stands for Model-View-Controller) lends itself to an Agile development methodology where TDD and BDD (Test-Driven and Behavior-Driven Development) are important components. Writing applications that are testable requires that you separate business logic from presentation logic so that they can be independently tested. This is a concept known as separation of concerns (SoC), which, in addition to testability, provides other benefits, such as greater application longevity and maintainability. The life of an application is extended because loose coupling makes it easier to upgrade or replace components without affecting other parts of the system.
This is where the “Onion Architecture” comes in. The term was first coined by Jeffrey Palermo back in 2008 in a series of blog posts. It is intended to provide some insurance against the evolution of technology that can make products obsolete not long after they are developed (the technical term is “deprecated”). A classic example is Microsoft’s data access stack, which tends to change every few years (remember the demise of LINQ to SQL?). What Jeffrey proposed (although he’s not the first) is for the application to reference interfaces so that the concrete implementation can be supplied at runtime. If, for example, the data access layer is represented by a number of repository interfaces, you can swap out LINQ to SQL with Entity Framework or NHibernate (or your favorite ORM) without breaking other parts of the application. This same approach is used to decouple things like configuration and logging so they become replaceable components.
In this depiction, the “core” of the onion is the object model, which represents your domain. This layer would contain your POCO entities. Surrounding your domain entities are repository interfaces, which are in turn surrounded by service interfaces. Representing your repositories and services as interfaces decouples consumers from concrete implementations, enabling you to swap one out for another without affecting consumers, such as client UIs or tests. The data access layer is represented in the outer layer as a set of repository classes which implement the repository interfaces. Similarly, the logging component implements a logging interface in the service interfaces layer.
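As a rough sketch (using the Product name from the sample project, but with illustrative method signatures), the inner layers might look like this:

```csharp
// Core of the onion: a POCO entity with no infrastructure dependencies
public class Product
{
    public int ProductId { get; set; }
    public string ProductName { get; set; }
    public int CategoryId { get; set; }
}

// Repository interface in the ring surrounding the domain model
public interface IProductRepository
{
    IEnumerable<Product> GetProducts(int categoryId);
}

// Service interface in the next ring out
public interface IProductService
{
    IEnumerable<Product> GetProducts(int categoryId);
}
```

Notice that neither interface references NHibernate, NLog, or any other infrastructure concern; those dependencies live only in the outer layer.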
Here is the project structure for a Visual Studio solution I created to demonstrate the Onion Architecture. I inserted solution folders and aligned project and folder names for ease of use. Infrastructure.Data uses NHibernate to implement repositories for ICategoryRepository and IProductRepository in Domain.Interfaces. Infrastructure.Logging uses NLog to implement ILogging in Infrastructure.Interfaces. The Web.Ui project has a ProductService class that implements IProductService in Services.Interfaces. (In a future post I will incorporate WCF into the project structure, but the Service implementation would go in a Service.Core project, with a Web.Services project for the service host.)
You may be asking, “How are concrete implementations of repositories and services created?” If components in the outer layer were to create instances directly, they would be tightly coupled to those implementations, defeating the whole purpose of the Onion Architecture and jeopardizing the application’s long-term viability. The answer is Dependency Injection (also known as Inversion of Control, or IoC). Components on the outer rim of the diagram have constructors that accept service or repository interfaces, and it’s the job of the DI framework to serve up a concrete instance, based on some initial configuration or setup. For example, the ProductController class in the ASP.NET MVC application has a constructor that accepts an IProductService, which has methods to get categories and products. The controller doesn’t care about how the interface is implemented and what API the data access component uses.
public class ProductController : Controller
{
    // Services will be injected
    private IProductService _productService;

    public ProductController(IProductService productService)
    {
        _productService = productService;
    }

    //
    // GET: /Product/
    public ActionResult Index()
    {
        // Get products
        var products = _productService.GetProducts(selectedCategoryId);
        // Rest of method follows ...
    }
}
There are a number of DI / IoC containers out there. One of my favorites is Ninject, which can be added using the NuGet package manager and has an extension for ASP.NET MVC applications. Installing the Ninject.MVC3 package places a bootstrapper class in the App_Start folder. There is a private RegisterServices method where you can bind local service implementations and load Ninject modules.
private static void RegisterServices(IKernel kernel)
{
    // Bind local services
    kernel.Bind<IProductService>().To<ProductService>();

    // Add data and infrastructure modules
    var modules = new List<INinjectModule>
    {
        new RepositoryModule()
    };
    kernel.Load(modules);
}
The RepositoryModule class resides in a separate DependencyResolution assembly, which references the Infrastructure.Data assembly and binds IProductRepository to ProductRepository. The assembly containing the Ninject modules references the Data assembly so that the web client doesn’t have to, keeping the web client ignorant of the actual repository implementation. Once the bindings are set up, Ninject serves up the appropriate instance wherever the interface is used.
public class RepositoryModule : NinjectModule
{
    public override void Load()
    {
        // Get config service
        var configService = Kernel.Get<IConfigService>();

        // Bind repositories
        Bind<IProductRepository>().To<ProductRepository>()
            .WithConstructorArgument("connectionString",
                configService.NorthwindConnection);
    }
}
As you can see, dependency injection is the glue that holds everything together. An integration test, for example, would also use the DI container to get instances of interface implementations, without having to reference assemblies containing those classes.
[TestFixture]
public class RepositoryTests
{
    // Ninject kernel
    private IKernel _ninjectKernel;

    public RepositoryTests()
    {
        // Init Ninject kernel
        _ninjectKernel = new StandardKernel(new RepositoryModule(), new LoggingModule());
    }

    [Test]
    public void Should_Get_All_Categories()
    {
        // Arrange
        var categoriesRep = _ninjectKernel.Get<ICategoryRepository>();

        // Act
        var categories = categoriesRep.GetCategories();

        // Assert
        Assert.That(categories != null);
        Assert.That(categories.Count() > 0);
    }
}
Unit tests would likely combine DI with a mocking tool, such as Moq, as shown here.
[TestFixture]
public class RepositoryTests
{
    // Ninject kernel
    private IKernel _ninjectKernel;

    public RepositoryTests()
    {
        // Init Ninject kernel
        _ninjectKernel = new StandardKernel();
    }

    [TestFixtureSetUp]
    public void FixtureSetup()
    {
        // Init categories
        var categories = new List<Category>
        {
            new Category { CategoryId = 1, CategoryName = "Beverages" },
            new Category { CategoryId = 2, CategoryName = "Condiments" },
            new Category { CategoryId = 3, CategoryName = "Confections" }
        };

        // Set up mock categories repository
        var mockCategoriesRep = new Mock<ICategoryRepository>();
        mockCategoriesRep.Setup(m => m.GetCategories()).Returns(categories);
        _ninjectKernel.Bind<ICategoryRepository>().ToConstant(mockCategoriesRep.Object);
    }

    [Test]
    public void Should_Get_All_Categories()
    {
        // Arrange
        var categoriesRep = _ninjectKernel.Get<ICategoryRepository>();

        // Act
        var categories = categoriesRep.GetCategories();

        // Assert
        Assert.That(categories != null);
        Assert.That(categories.Count() == 3);
    }
}
Jeffrey summarizes the key tenets of the Onion Architecture as follows:
- The application is built around an independent object model.
- Inner layers define interfaces; outer layers implement interfaces.
- Direction of coupling is toward the center.
- All application core code can be compiled and run separate from infrastructure.
The sample application (which you can download here) provides a reference architecture based on these principles. To use it you’ll need to download our good old friend, the Northwind sample database. While the Onion Architecture does not propose anything new in terms of object-oriented and domain-driven design patterns, it provides guidance on how to decouple infrastructure from business and presentation logic, without introducing too much complexity or requiring redundant code. Enjoy.
Pingback: Peeling Back the Onion Architecture | Web, Microsoft, MS Access | Syngu
Thank you, Tony, for a very insightful blog post.
I found your blog – and this post – just in time when I was pondering about the architecture of a solution with multiple clients (ASP.NET MVC and Windows Phone). Using the above architecture, it should be no problem to integrate another client project. I was wondering, however, how you would centralize validation (using FluentValidation) and reuse view models across multiple clients. How would your solution structure adapt to that?
I’m looking forward to your next post in this series!
Hi Marius,
To share things like ViewModels across multiple clients, they should be placed in separate assemblies and referenced from those clients. The same would go for things like validation, whether with DataAnnotations or something like FluentValidation.
In cases where you would want to de-couple the actual implementation, you would factor out an interface and have the consumer reference the assembly, using dependency injection to serve up the concrete implementation – which is what the Onion Architecture is all about. The interfaces are in a separate assembly so that they can be shared among multiple consumers, be they client apps (ASP.NET MVC or Silverlight) or a test harness (NUnit, MSTest).
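As an illustrative sketch, a view model shared this way might live in its own class library with validation attributes attached (the class and property names here are hypothetical, not from the sample project):

```csharp
using System.ComponentModel.DataAnnotations;

// Lives in a shared class library referenced by each client project,
// so the validation rules travel with the view model
public class ProductViewModel
{
    [Required]
    [StringLength(40)]
    public string ProductName { get; set; }

    [Range(0, 10000)]
    public decimal UnitPrice { get; set; }
}
```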
Hope this answers your question. 🙂
Cheers,
Tony
Thanks for your answer, Tony, it definitely helped me!
I am a little late to the game here, so please excuse me. I’ve created my project structure based on the one provided here and added a library under Infrastructure for validation (MyApp.Infrastructure.Validation), along with an interface in the Infrastructure.Interfaces project called IValidation with a single method called Validate that returns the validation results. Is this correct? Also, I’m not exactly sure where the view models would go. Would they be considered part of the domain? Another thing: given that the services and the validation both live on the outer ring, is it okay for them to reference each other? I know everything says all references go toward the center. Not sure how this all comes into play. Any advice would be greatly appreciated.
View models are used exclusively in the client UI layer. In that case you should check out an MVVM toolkit, such as my very own Simple Mvvm Toolkit. If you are building a web client, you’ll want to use something like Knockout.
In terms of validation, there is an interface, INotifyDataErrorInfo, which the MVVM toolkit should implement. It has a server-side hook as well, so the client can asynchronously perform server-side validation.
In terms of implementation, I would place it in a separate assembly that can be referenced both by client and services. That way, you can validate on both ends with the same logic. You’ll want to use dependency injection to wire up the concrete implementation of the validation interface.
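A minimal sketch of what that shared validation assembly might contain – the interface name and the DataAnnotations-based implementation are illustrative assumptions, not part of the sample project:

```csharp
using System.Collections.Generic;
using System.ComponentModel.DataAnnotations;

// Abstraction referenced by both clients and services
public interface IValidationService
{
    IEnumerable<ValidationResult> Validate(object model);
}

// One possible implementation, registered with the DI container
public class DataAnnotationsValidationService : IValidationService
{
    public IEnumerable<ValidationResult> Validate(object model)
    {
        var results = new List<ValidationResult>();
        var context = new ValidationContext(model, null, null);
        // Validates all properties, collecting any failures
        Validator.TryValidateObject(model, context, results, true);
        return results;
    }
}
```

Because both sides depend only on the interface, the same rules run on the client and the server.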
Had a look at your sample app, looks pretty good. There are a few things I do differently and wondering what you think.
1. I put the mapping code (ie GetProductViewModel) in a separate class. Often this mapping code is pretty complicated for some views and it clutters up the controller. This class can then contain Sql/Linq that is specific for populating the ViewModel rather than putting that code is a server/repository where it will never be used again. Similar to this (not exactly) https://github.com/sharparchitecture/Sharp-Architecture-Cookbook/wiki/Using-Query-Objects
2. Repositories only contain reusable queries. For example GetProductByCode. They don’t contain general queries for doing things like populating View Models, or else they end up with many queries, and end up as a bit of a query dumping ground.
3. Persistence is done in a separate class, for example I might have a ProductUpdateCommand class to which I pass the ViewModel. This means controller just has a couple of lines in the Save methods like
var errors = _productUpdateCommand.Execute(model);
ModelState.AddErrors(errors);
These differences are for the purpose of a) Keeping the controllers uncluttered with code, b) Keeping classes purpose clear (not doing everything in the one class).
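Point 3 above might be sketched like this (the command class, view model, and repository method are illustrative assumptions):

```csharp
// Hypothetical command class that keeps persistence logic out of the controller
public class ProductUpdateCommand
{
    private readonly IProductRepository _productRepository;

    public ProductUpdateCommand(IProductRepository productRepository)
    {
        _productRepository = productRepository;
    }

    public IEnumerable<string> Execute(ProductViewModel model)
    {
        var errors = new List<string>();
        if (string.IsNullOrWhiteSpace(model.ProductName))
            errors.Add("Product name is required.");
        if (errors.Count > 0) return errors;

        // Map the view model to a domain entity and persist it
        var product = new Product { ProductName = model.ProductName };
        _productRepository.Update(product);
        return errors;
    }
}
```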
Glad you found the sample app helpful, and thanks for the input. The point that stands out for me is that repositories can get too cluttered with generic queries and CRUD operations. What I’ve seen that helps relieve this is to create a generic IRepository class that has the reusable CRUD ops there (such as GetEntityByKey, GetEntities, etc). Keeping that stuff abstracted away with an interface keeps the controllers free of data access code and loosely coupled. Again with the updates, the controllers do not have any code that actually saves entities — all of that is abstracted away. What you could do here, though, is implement the IUnitOfWork pattern for the saves, since those usually go across more than one repository. Check out the DDD book by Eric Evans.
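A sketch of that generic repository idea (the method names are illustrative):

```csharp
// Reusable CRUD operations live on a generic interface
public interface IRepository<TEntity> where TEntity : class
{
    TEntity GetEntityByKey(object key);
    IEnumerable<TEntity> GetEntities();
    void Add(TEntity entity);
    void Remove(TEntity entity);
}

// Specific repositories only add queries that are truly reusable
public interface IProductRepository : IRepository<Product>
{
    Product GetProductByCode(string code);
}
```

This keeps one-off queries out of the repositories and leaves the generic CRUD plumbing in a single place.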
Cheers,
Tony
Tony, thanks so much for the post – this is very helpful. I’m looking forward to diving into your sample project.
How to integrate the application Services using WCF,Can you please let me know the Project Structure for the same and also will the interface and Service implementation be in the same projects?
Hi there, I’m going to do another post on integrating services into the onion architecture. However, it’s not hard to do. Interfaces stay in the same place. There would be a Services class lib that references the interfaces, and you would use DI to inject dependencies into the services (for example, repositories) – I have a blog post on that. Then the services lib can be hosted either in the same web app as the ASP.NET MVC project, or in a separate web project.
Cheers,
Tony
Thanks….Tony….
I am a little confused on why the core doesn’t provide a concrete implementation for the Services classes. I would think if you provide an interface, how it gets implemented might not change very much between implementations.
It also seems that Onion Architecture encourages the developer to use anemic data models. Am I right about this? And if I am, why is that?
Thanks for your help.
Jared
You want to put services in a separate class lib so that they can be loaded into different WCF hosts (web, NT Service, etc). However, that’s just for WCF services. There can be other services that are not exposed via WCF, and you can put those pretty much wherever it makes the most sense. For example, the ASP.NET MVC app can implement a service locally, or it could, as you suggest, sit in Core.
Not sure what you mean by anemic data models. Can you elaborate?
Tony
Hi Tony,
Thanks for a great post. I’m designing the architecture for a new web app and web service based on MVC and EF. One point I’m slightly confused about is the Service implementation. I see you have implemented ProductService which in DDD terminology appears to be an Application Service. This can be implemented as a class in an MVC project or exposed via WCF as you suggest. That all hangs together for me.
So where do Domain Services belong in your code base? I believe the Onion Architecture calls for them to be part of the core and may contain at least some business logic, more than what DDD usually recommends (as others have pointed out). I like the idea of being able to use the entities as DTOs without generating DTOs and providing mapping etc. so a little more logic in the domain service doesn’t bother me too much. If you had to perform some validation say in the persistence of a new Product where would this validation live in your code base?
Thanks,
D.
Ah, that makes a lot more sense. I haven’t worked with WCF outside of web services, but I can see the need to be flexible.
I probably should have said anemic domain models. Martin Fowler refers to them as an anti-pattern here: http://martinfowler.com/bliki/AnemicDomainModel.html. Instead of writing anemic domain models, he encourages developers to build domain-specific business logic into the domain objects themselves. Service layers are kept very thin. I tend to follow a similar path when I write code, because I feel it makes it easier to follow and maintain business logic over time.
In your example, and in some open source projects like Code Camp Server, it looks like the domain objects are kept pretty bare while the services carry out a lot of the business logic. Is this by design, and if it is, why? To further reduce coupling?
Thanks again for your help.
Jared
Hi Jared,
Great question, and thanks for the link to the Fowler article. I think the main reason for not having behavior in the domain entities is the desire to use them as DTO’s (Data Transfer Objects) rather than true domain models. Rich clients (SL, WPF) would consume DTO’s via a WCF service, in which case any behavior would be irrelevant. So the anemic entities result from turning them into DTO’s instead of actual domain entities. Of course, this is not necessarily ideal. Creating a rich domain model should be done separately from DTO’s. But this results in a bunch of code needed to translate between domain entities and DTO’s. I think many devs combine them to reduce entity redundancy.
I don’t think one approach is right and the other wrong – it all depends on the complexity of the system you are developing and whether it involves a WCF services layer. With ASP.NET MVC there usually is not a WCF component, while with SL/WPF there usually is.
Anyways, that’s my take on it.
Cheers,
Tony
Thanks for answering my questions. I just recently started looking at Onion Architecture and DDD. I’m very excited about what I see so far. As always, the best part of our industry is that there is always something new to learn:).
Jared
It took me time to read all of the comments, but I really enjoyed the article. It proved to be very useful to me, and I’m sure it was to all of the commenters here! It’s always nice when you can not only learn, but also be engaged. I’m sure you had fun writing this article. Anyway, in my language there aren’t many good sources like this.
Glad you enjoyed the article! I hope to have some time to post more of them. 🙂 Tony
Hello Tony,
I am trying to use Entity Framework for the DAL in (AppArch.Infrastructure.Data). EF generated a model that matches the model I have in the Domain.Entities project. Let’s say there were already two tables called Categories and Products, and now EF has generated a corresponding model. How are ICategoryRepository / IProductRepository implemented?
I Suppose those two interfaces will be implemented in AppArch.Infrastructure.Data project.
Now, you have a model already defined under Domain.Entities, which the repository interface is looking for, and you have a model generated by EF. Can you please explain how to implement, say, IProductRepository? It’s a little confusing to me.
Thanks,
Bewnet
The idea here is to use POCO classes (Plain Old CLR Objects) instead of EF-generated classes. There is a T4 template that generates POCO classes based on the Entity Framework Data Model, and then you set the code generation strategy so the designer-generated classes are removed and the POCO’s are used instead.
You can implement the repository interfaces however you wish, according to the needs of your application, but they would return and/or accept the POCO entities. You implement the interfaces in the Data project simply by adding repository classes and inserting code to interact with the object context. The sample project should show an example of doing this.
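As a hedged sketch (assuming a NorthwindContext class generated alongside the POCO entities – the names here are illustrative), a repository implementation in the Data project could look like this:

```csharp
using System.Collections.Generic;
using System.Linq;

public class ProductRepository : IProductRepository
{
    private readonly NorthwindContext _context;

    public ProductRepository(NorthwindContext context)
    {
        _context = context;
    }

    public IEnumerable<Product> GetProducts(int categoryId)
    {
        // Returns POCO entities; callers never see any EF types
        return _context.Products
            .Where(p => p.CategoryId == categoryId)
            .ToList();
    }
}
```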
Cheers,
Tony
Thanks a lot Tony for both posting this useful blog and your answer as well!
That said, I have one more question – even when you generate those POCO classes using the T4 template:
1. You need to have one class – to take advantage of Entity Framework’s context management – and that class has to inherit from ObjectContext. This is a dependency.
2. Can we have those POCO classes in their own project? If I still keep them in the same project, that leaves the core of the entire application dependent on EF, which I think is something Onion Architecture is trying to solve.
I might be asking obvious questions but, i need your expertise on this.
Thanks again!
Bewnet
I got it now – EF 4.1 has that support. Please ignore my question.
Excellent explanation… I am currently working as an architect, implementing the Onion Architecture, and needed to quickly explain to developers how everything works… I used your blog post as one of the best references that clearly explains the concept of the Onion Architecture ;)!
Tony, thanks again for putting this together. With the help of some of other developers at my company, we put together a POC and presentation on Onion Architecture. Your code was instrumental for helping us to get started. You can check out the slides from my presentation here: http://jared-jenkins.com/blog/onion-architecture-presentation/.
Cheers,
Jared
Pingback: Enterprise Software Sucks | Jared Jenkins
Pingback: Onion Architecture: part 4 – after four years | Headspring
Hello Tony,
The domain layer seems to contain POCO classes. What if those domain classes have domain logic, validations, and calculations to perform – how would you restructure the architecture to make sure the validations and calculations are done while still making use of the Onion Architecture?
Sample code will be really useful.
Thanks,
Bewnet
Great article !
I am having the same issue. Did you find an answer to your questions ?
Thank you
Tony,
I have been having issues trying to figure out how to set up Session-Per-Request in our solution. I want to decouple NHibernate from the presentation layer, but I can’t seem to figure out how to implement it in your design. I understand that I could make attributes that handle the beginning and ending of transactions or I could use an HttpModule. My issue is with writing the SessionHelper to work in a more transactional way and still have everything injected with Ninject. What is the best approach to handling this within the architecture?
Hi, and thanks for your question. If you open up the sample project you’ll see that the NHibernate code goes in the Data project. All you have to do is configure the session per HTTP request. NHibernate takes care of the details by tying the lifetime of the session to the current HTTP request. That way the overhead of creating a session is incurred only once per request rather than multiple times per request. A per-request session makes sense when you want to isolate transaction scope at the request level. But for reading data you could have a single session. And for static data, check out level 2 cache solutions for better performance.
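One way to wire this up with Ninject – a sketch that assumes the Ninject.Web.Common extension (which supplies InRequestScope) and a hibernate.cfg.xml for NHibernate configuration:

```csharp
using NHibernate;
using NHibernate.Cfg;
using Ninject;
using Ninject.Modules;
using Ninject.Web.Common;

public class NHibernateModule : NinjectModule
{
    public override void Load()
    {
        // One session factory for the lifetime of the application
        Bind<ISessionFactory>()
            .ToMethod(ctx => new Configuration().Configure().BuildSessionFactory())
            .InSingletonScope();

        // One session per HTTP request; Ninject disposes it at end of request
        Bind<ISession>()
            .ToMethod(ctx => ctx.Kernel.Get<ISessionFactory>().OpenSession())
            .InRequestScope();
    }
}
```

Repositories then take an ISession in their constructors, and the presentation layer never references NHibernate directly.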
Pingback: Behavior Driven Development (BDD) using Onion Architecture on .NET (Part 1) | Tech Startups
Hi Tony, thank you for writing this awesome article. I noticed a pingback so I am adding additional details for your visitors. I’ve Implemented the Onion Architecture on .NET (Ports and Adapters) based on your article using Entity Framework Code First and MVC4 Web API. Complete code sample is hosted on github: https://github.com/sharifhkhan/OnionArchitecture
Thanks,
Sharif
Very nice, I’ll check it out!
Tony
Hi Tony,
Thanks for your nice article, which I learned quite a lot from. Just wondering where you would put the abstraction and implementation of the Unit of Work pattern in your sample project, say, IUnitOfWork.cs and UnitOfWork.cs.
I would personally choose AppArch.Infrastructure.Interfaces and AppArch.Infrastructure.Data, since the UoW doesn’t seem related to the domain. What do you think? Thanks.
Cheers,
Pete
For a layered architecture I prefer this visualization: http://biese.files.wordpress.com/2007/10/spring2-01.JPG
Pingback: Bookstore – Part I: Onion Architecture, Entities and Interfaces « Henrique Baggio's Blog
I would like to thank you for this post, which I have been looking for for a while. I am applying this architecture (Onion Architecture) to an application that I started a week ago. The challenge I face is that I have a “business service” layer that implements the “business service interface” defined by the core. I want to use the Unit of Work pattern; in most examples they inject the Unit of Work in the controller constructor, but here you show us how to inject a business service – so how could I do that? The reason I want Unit of Work is to be sure I’m using one context across repositories, and so on. Please show us how to incorporate Unit of Work with this architecture when the business service is implemented in a different assembly!
I believe the Dependency Injection container should mostly take care of this for you. Typically, IUnitOfWork is injected into the constructor of each repository, so that cross-cutting concerns, such as connections and transactions, can be managed across multiple repositories. If you register your implementation of IUnitOfWork with your DI container, it should inject it into your repositories. This holds true even when a business service layer is involved. For example, if you have a WCF service that accepts an ICustomerRepository, and the CustomerRepository implementation accepts an IUnitOfWork, then the DI container will populate the object graph just the way it’s supposed to. And it doesn’t matter if those pieces live in different assemblies, as long as they are all referenced appropriately.
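A sketch of the object graph just described (the class names are illustrative):

```csharp
// The container resolves CustomerService, sees it needs ICustomerRepository,
// resolves CustomerRepository, and injects the registered IUnitOfWork into it.
public class CustomerRepository : ICustomerRepository
{
    private readonly IUnitOfWork _unitOfWork;

    public CustomerRepository(IUnitOfWork unitOfWork)
    {
        _unitOfWork = unitOfWork;
    }
    // Repository methods enlist in the unit of work's connection/transaction
}

public class CustomerService : ICustomerService
{
    private readonly ICustomerRepository _customerRepository;

    public CustomerService(ICustomerRepository customerRepository)
    {
        _customerRepository = customerRepository;
    }
}
```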
In the example shown on the following link, http://www.asp.net/mvc/tutorials/getting-started-with-ef-using-mvc/implementing-the-repository-and-unit-of-work-patterns-in-an-asp-net-mvc-application
Unit of Work is responsible for creating the repository object. In my case, as I said before, I have a business service layer. So if I use the DI container to inject a UnitOfWork into every repository, how can I manage the lifetime of the UnitOfWork? For example, I want to insert some data into Category and then into Customer, and I want to commit both to the database at the same time… what should I do? Can I be sure they are using the same context?
My Repository looks like this
public class GenericRepository<T> : IRepository<T> where T : class
{
    public GenericRepository(ObjectContext context)
    {
        _objectSet = context.CreateObjectSet<T>();
    }

    public void Create(T entity)
    {
        _objectSet.AddObject(entity);
    }

    public void Update(T entity)
    {
        throw new NotImplementedException();
    }

    public void Delete(T entity)
    {
        _objectSet.DeleteObject(entity);
    }

    public IQueryable<T> GetAll()
    {
        return _objectSet;
    }

    public IQueryable<T> Find(System.Linq.Expressions.Expression<Func<T, bool>> predicate)
    {
        return _objectSet.Where(predicate);
    }

    public T FindById(int id)
    {
        throw new NotImplementedException();
    }

    public ObjectSet<T> _objectSet { get; set; }
}
My UnitOfWork looks like this
public class UnitOfWork : IUnitOfWork
{
    public UnitOfWork()
    {
        _context = new ObjectContext(connectionString);
        //_context.ContextOptions.LazyLoadingEnabled = true;
    }

    public void Commit()
    {
        throw new NotImplementedException();
    }

    public IRepository<Program> Programs
    {
        get
        {
            if (_programs == null)
            {
                _programs = new GenericRepository<Program>(_context);
            }
            return _programs;
        }
    }

    GenericRepository<Program> _programs = null;
    readonly ObjectContext _context;
    string connectionString = "test";
}
Assume that “Program” is one of my domain entities. Waiting for your help!
There are mainly two approaches to how the repository and unit of work patterns should interact. The one I referred to in my response is where the constructor of each repository accepts a unit of work interface. But this would require the service or controller to have each repository injected into it, which basically puts the cart before the horse. A better approach, which you referred to, is to create a unit of work class that uses repositories to do its work.
If you look at the example depicted in Martin Fowler’s description of the Unit of Work pattern, you see that the interface does not expose repositories as properties, unlike the example in the article you referenced. I like this much better, because it lets you abstract the repositories away from the service or controller.
Using a DI container allows you to inject repositories into the unit of work class. The unit of work class can then be injected into the service or controller in your business layer, without referencing any repositories. The DI container is responsible for building the whole object graph, starting with the repositories needed by the unit of work class, and then creating the unit of work class needed by the service.
Now, what about this thing called object scope or lifetime? Again, this is where the DI container steps in. Almost all of them have a lifetime model that allows for a “per HTTP request” lifetime. You specify the lifetime at your composition root when configuring your DI container, then the container builds the objects at the start of a request, and tears them down at the end.
So here is basically what your code will look like:
IGenericRepository<Customer> { IEnumerable<Customer> GetCustomers(); // etc }
IGenericRepository<Order> { void PlaceOrder(Customer c); // etc }
ICustomerOrdersUnitOfWork { void PlaceCustomerOrder(int customerId); // etc }
CustomerOrdersUnitOfWork : ICustomerOrdersUnitOfWork
{
    public CustomerOrdersUnitOfWork(IGenericRepository<Customer> custRepo,
        IGenericRepository<Order> ordersRepo) { }
}
CustomerOrdersService
{
    public CustomerOrdersService(ICustomerOrdersUnitOfWork custOrdersWork) { }
}
Your business service host then is responsible for configuring the DI container and registering it, so that CustomerOrdersService gets fed a CustomerOrdersUnitOfWork. If you are using WCF, check out my Common Instance Factory, which has plumbing for using the DI container as an instance provider for WCF.
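A hedged sketch of that composition root with Ninject (the concrete type names are assumptions, and InRequestScope comes from the Ninject.Web.Common extension):

```csharp
private static void RegisterServices(IKernel kernel)
{
    // Open generic binding covers IGenericRepository<Customer>, <Order>, etc.
    kernel.Bind(typeof(IGenericRepository<>))
          .To(typeof(GenericRepository<>))
          .InRequestScope();

    // Per-request scope ensures both repositories share one unit of work
    kernel.Bind<ICustomerOrdersUnitOfWork>()
          .To<CustomerOrdersUnitOfWork>()
          .InRequestScope();

    kernel.Bind<ICustomerOrdersService>().To<CustomerOrdersService>();
}
```

With this in place, resolving ICustomerOrdersService builds the whole graph: the repositories, the unit of work, and the service, all torn down at the end of the request.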
Happy coding!
Tony
Thank you!
Hi Sir, I am currently searching for a job. One question: how do I start a project in .NET? Please point me to a real-time project model.
Pingback: Onion Architecture: Part 4 - After Four Years : Jeffrey Palermo (.com)
Pingback: Onion Architecture Presentation Resources « IExtendable
Pingback: [Dev Tip] The Onion Architecture | sudo man
HI Tony,
I have gone through your architecture – it’s just awesome. I have one question: you have implemented the service in the web project; is there any specific reason for that?
Can’t we have separate project for it?
@Jalpesh, You’re correct – services should be implemented in a separate project. I think I may have done it differently to keep things simpler, but I can’t recall the precise reason at this point. 🙂 For a better example, check out my Trackable Entities samples, which include the use of dependency injection with repository and unit of work patterns.
thanks I will check it out!
Tony,
Can you provide us with a sample using .edmx ?
Check out the samples from my Trackable Entities extension: http://trackable.codeplex.com. There you will find two relevant samples: Patterns (Onion Architecture) and ModelFirst (EDMX). It would not be hard to combine the two. I suggest you start by installing the Trackable Entities VS extension (VSIX), create a new Patterns app, then add an EDMX to the Entities project (instead of Code First).
Thank you, Tony
I see that you don’t use unit of work and you directly use the edmx in the API controller. What about unit testing with DB First and Trackable Entities?
Do you have an ASP.NET app using your project ‘ET’?
What do you suggest about names for subfolders when using only Namespaces like “Core”, “Infrastructure”, “UI” and “UnitTest”?
Trackable Entities does include Unit of Work and has a nice implementation of the pattern. You should download the Samples zip file for either VS 2013 or VS 2015, where you’ll find a project that includes UoW. See this Getting Started Guide for directions. You’ll find a good example there also about project and namespace names, and it uses Asp.Net Web API.
In general, these guidelines are a good starting point for naming conventions, etc.
I see that most “OA” uses code first but in my case I need DB to be reviewed/changed by DBA.
Do you recommend “OA” in the case where, as you know, I’m using an edmx? And what about a sample using edmx (what you mentioned as a project that calls the edmx directly from the API controller)?
In your opinion, what is the best loosely coupled architecture to use with MVC and edmx?
“Code First” with Entity Framework does not mean your DB can’t be changed by a DBA. The EF Power Tools, as well as the EF 6.1 Tools for Visual Studio, both allow you to generate CF entities from an existing DB and whenever the DB has changed. So it really shouldn’t be called Code First and, in fact, the EF team is looking at changing it to “Code-Based Modeling,” in order to avoid misunderstanding.
That said, there is nothing about Onion Architecture that requires code-based modeling. You can use EDMX-based modeling with OA just fine. The main thing you want to do is separate your entities from your DbContext class and EDMX file. The strategy I recommend is creating a separate Class Library project and then linking the entity cs files from the project in which the EDMX resides. (Right-click, add existing items, navigate to the DB project, select the .cs entity files, and choose Add As Link.)
If you want a good example, you can download my Trackable Entities project, which includes sample projects that use this approach with an EDMX file. It also has great templates for implementing Repository and Unit of Work patterns.
Thank you, Tony
I didn’t get a complete answer to what I need; also, until now I haven’t found a sample that uses OA with unit tests applied to IDbContext with edmx.
In terms of unit testing, what you want to do is create mocks of your repositories. Again, it doesn’t matter whether you use code-based or edmx-based modeling. The repository is an abstraction of the data layer, so what you use behind the data facade is irrelevant to your unit testing code. A tool such as Moq is ideal for this scenario. Check out Rowan Miller’s EF testing framework for help with this.
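As a rough illustration of that approach, here is a minimal NUnit test that mocks a repository with Moq; the `ICustomerRepository`, `Customer`, and `CustomerService` names are hypothetical stand-ins, not types from the sample code.

```csharp
// Sketch: unit test against a mocked repository (no database involved).
// All application type names here are hypothetical.
using Moq;
using NUnit.Framework;

[TestFixture]
public class CustomerServiceTests
{
    [Test]
    public void GetCustomer_Returns_Entity_From_Repository()
    {
        // Arrange: stub the repository behind the data facade
        var repo = new Mock<ICustomerRepository>();
        repo.Setup(r => r.GetById(1))
            .Returns(new Customer { Id = 1, Name = "Acme" });

        var service = new CustomerService(repo.Object);

        // Act
        var customer = service.GetCustomer(1);

        // Assert: the service surfaced what the repository supplied
        Assert.AreEqual("Acme", customer.Name);
        repo.Verify(r => r.GetById(1), Times.Once());
    }
}
```

Because the repository is mocked, the same test passes whether the real implementation sits on top of Code First classes or an EDMX-generated model.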
Thank you, Tony
Tony, do you recommend to use DependencyResolution project in which I want to resolve all interfaces used in many projects (MVC, UnitTest ..) or just inject DI within each project?
Autofac is it better for me or you recommend something else!
I sometimes use a DR project in a WCF scenario where services are independent of the host and where DI makes sense in an integration test project. Otherwise, for Web API projects I might only use DI for the web project, since it’s not really needed for tests in that case. Simple Injector is my favorite.
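For what it’s worth, wiring Simple Injector into a Web API project might look roughly like this; the repository and service types are hypothetical, and the integration types come from the SimpleInjector.Integration.WebApi package.

```csharp
// Sketch: composition root for an ASP.NET Web API project using Simple Injector.
// ICustomerRepository/CustomerRepository etc. are hypothetical application types.
using System.Web.Http;
using SimpleInjector;
using SimpleInjector.Integration.WebApi;
using SimpleInjector.Lifestyles;

public static class SimpleInjectorConfig
{
    public static void Register(HttpConfiguration config)
    {
        var container = new Container();
        container.Options.DefaultScopedLifestyle = new AsyncScopedLifestyle();

        // One repository/service pair per web request
        container.Register<ICustomerRepository, CustomerRepository>(Lifestyle.Scoped);
        container.Register<ICustomerService, CustomerService>(Lifestyle.Scoped);

        // Let the container build the API controllers
        container.RegisterWebApiControllers(config);
        container.Verify();

        config.DependencyResolver =
            new SimpleInjectorWebApiDependencyResolver(container);
    }
}
```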
Hello Tony,
I was wondering whether you would expose the repository interfaces to the UI.
Or is it better to only expose the Application-Service Interfaces?
For example, if you have repositories exposing a GetById() method, and a Factory in the App Services with a method Create(), only the latter is expected to return a valid business object.
So then it could be confusing to expose the Repository with the GetById() too.
I would normally not expose Repository interfaces to the UI, if the way the application gets its data is via a web service. If, on the other hand, the UI stores data in a local cache, you could use repositories on the client.
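One way to sketch this separation (all names hypothetical): the UI references only the application-service interface, while the repository interface stays behind it.

```csharp
// The repository stays internal to the service layer...
public interface ICustomerRepository
{
    Customer GetById(int id);
}

// ...while the UI only ever sees the application-service interface.
public interface ICustomerAppService
{
    CustomerDto GetCustomer(int id);
}

public class CustomerAppService : ICustomerAppService
{
    private readonly ICustomerRepository _repo;

    public CustomerAppService(ICustomerRepository repo)
    {
        _repo = repo;
    }

    public CustomerDto GetCustomer(int id)
    {
        var customer = _repo.GetById(id);
        // Map to a DTO so the UI never touches the domain entity directly
        return new CustomerDto { Id = customer.Id, Name = customer.Name };
    }
}
```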
Pingback: Onion Architecture
Tony,
Great article.
I was looking for the example code but the link appears to be broken?
Is there an alternate link?
Thanks,
Mark.
Yes I’ll fix the link and post a reply here when that’s ready.
Alright, I fixed the link — you should be able to download the code now. Cheers!
The link for source code is still broken
I updated the link to a dropbox file. I just checked and the link is fine, but you may have a problem if dropbox.com is blocked by your network admin.
Thanks for your reply. I have checked on 2 separate networks now; it’s definitely not working.
Error: HTTP Error 404. The requested resource is not found.
This link works for me: https://www.dropbox.com/s/i0pix0yg47af6wk/onion-arch.zip
Thanks Tony – direct link worked 🙂
Hi Tony,
I am left wondering how this would work in companies where the server infrastructure is divided into DMZ and Core servers.
Usually in this structure the DMZ server is the one exposed to the internet, and Core is where the app/website that talks to the DB is installed.
Thanks
@Coder, Thanks for your question. There are a number of topologies possible when configuring servers for availability over the Internet, and you might find this article helpful. But where the onion architecture would come in would be on the servers hosting your web services – that is where you would employ repository and unit of work patterns with dependency injection to abstract away your data layer and provide greater testability. Those servers could be deployed in the DMZ, perhaps behind some sort of load balancer, where you might want to terminate the SSL connection, using IPsec to secure transport between the load balancer and the other servers, as well as between the servers themselves. I hope this helps.
Thanks Tony makes sense,
Another issue that I find difficult to understand is the separation of dependencies within these layers. As an example, you have a request from the UI: User GetUserInfo to the application layer. This returns a User object, which lives in the domain layer and is populated by the repository. You also have an interface IUser (in the application layer) that exposes this method and return object, and also IUserRepository that exposes DB calls and return objects. The point I am trying to make here is that the returned data ends up binding all the layers together unless I use DTOs etc. between the projects. Your UI knows directly what your domain is, and so do the other projects.
I am very keen and would like to get your input on this, especially the UI –> Application layer and Repository –> Domain interactions with return types, and how you make these loosely coupled.
The onion architecture seeks to abstract away dependencies on what are often referred to as infrastructure concerns, which are things like data access and security, in order to insulate the app against obsolescence. Dependencies may still exist between various layers in an application, for example, by sharing entities between data access and business logic layers. The degree of coupling you allow between layers all depends on your situation and constraints. You can, for example, have different entities for use internally within a service and entities returned by the service to consumers (DTO’s). Usually decoupling layers adds more development and maintenance costs, so it’s always a trade-off. The rule-of-thumb is, strive for loose coupling whenever possible, but weigh the costs and decouple as much as makes sense for your scenario.
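A small sketch of the DTO trade-off described above (names hypothetical): the service works with the domain entity internally but hands the UI a trimmed DTO, at the cost of an extra mapping step.

```csharp
// Domain entity: internal to the service; some fields should never cross the wire.
public class User
{
    public int Id { get; set; }
    public string Name { get; set; }
    public string PasswordHash { get; set; } // stays inside the service boundary
}

// DTO: the only shape the UI depends on.
public class UserInfoDto
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public class UserAppService
{
    private readonly IUserRepository _userRepository; // hypothetical interface

    public UserAppService(IUserRepository userRepository)
    {
        _userRepository = userRepository;
    }

    public UserInfoDto GetUserInfo(int id)
    {
        User user = _userRepository.GetById(id);
        // Copying to a DTO decouples the UI from the domain model,
        // which is the development/maintenance trade-off discussed above.
        return new UserInfoDto { Id = user.Id, Name = user.Name };
    }
}
```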
Awesome article with clear explanation. It really helps me to understand and implement Onion Architecture. I have a request if you can include UoW (Unit of Work) pattern with NHibernate in this article with updated source code.
Thanks.
Windows defender does not allow the opening of the attached code archive because of the presence of Trojan: Win32/Spursint.A!cl in the NUnit executables embedded within.
It may be a false positive, but anyone running windows defender will not be able to read the code samples.
Good catch .. Time to put the code on GitHub. Can you ping me on Twitter and I’ll send you the code? @tonysneed
Hi Tony,
I am developing a REST web service using Java technology (Spring Data, Spring MVC, JPA, Hibernate). The entity classes use JPA annotations on the attributes. The REST resources are DTO classes, serialized and deserialized using the Jackson library, and mapped to the entity classes. The DTO classes help in composing different mixtures of domain entity attributes which are visible to the outside.
Should I get rid of the JPA annotations on my entity classes and use the old XML configuration instead?
Should I get rid of the DTO classes if they are very similar (1-1) to the entity classes?
Should I get rid of the Jackson-annotated DTOs and annotate my entity classes instead?
How can I respect the Onion principles and still use the annotation style?
Kind regards.
Thanks.
Hello Olivier,
Thanks for your questions. I’m happy to answer — just be aware these are my own opinions. The onion architecture is mainly about decoupling your app from infrastructure concerns, through the use of interfaces and dependency injection, and it is designed to be platform independent, so it should apply equally well to both C# and Java based applications.
In my opinion, the use of annotations on entity or DTO classes introduces coupling to a specific framework and should be avoided if at all possible. The best approach is based on conventions (see this article on customizing Hibernate conventions), but it may be necessary to use configuration.
I’m of the opinion that entity classes can at times also serve as DTO’s, provided they do not contain domain-specific logic.
It’s best if you can get by without annotations, and instead use convention or configuration, because annotations couple you to a specific framework or technology stack. The idea is that your app is then completely abstracted away from persistence concerns.
I hope this helps. Cheers!
Hi Tony,
I’ve used this framework as the basis of my own application. On the homepage I have an HTML action that calls a service and checks a database, so it’s running on every page view.
Since I’ve done this my application suffers from out of memory and grinds to a halt when a number of users access the site.
It’s all to do with the NHibernate session helper. Have you experienced this?
Nice article, thanks a lot. Where can I download the code?