Announcing Event Driven .NET – An Event Driven Microservices Platform for .NET

I am pleased to announce the general availability of Event Driven .NET, a platform that .NET developers can use to build microservices based on event-driven architecture. It enables distributed systems that can be developed by multiple teams in parallel, with services that can be versioned, deployed and scaled independently.

For source code, samples and instructions on how to get started building event-driven microservices, visit the home of Event Driven .NET: https://github.com/event-driven-dotnet/home

This is part of a blog post series on building event-driven microservices with Event Driven .NET:

  1. Announcing Event Driven .NET – An Event Driven Microservices Platform for .NET (this post)
  2. Properly Scope Microservices with Domain Driven Design (future post)
  3. Separate Read and Write Responsibilities with CQRS (future post)
  4. Decouple Microservices with an Event Bus and Dapr (future post)
  5. Orchestrate Updates Across Multiple Microservices with Sagas (future post)
  6. Get Audit Trail and Time Travel for Microservices with Event Sourcing (future post)
  7. Real-Time Data Processing for Microservices with Event Streams (future post)

The Promise and Peril of Microservices

According to Chris Richardson (owner of microservices.io and author of the book Microservices Patterns), microservices is an architectural style that structures an application as a collection of services that are:

  • Highly maintainable and testable
  • Loosely coupled
  • Independently deployable
  • Organized around business capabilities
  • Owned by a small team

The promise of microservices is to enable teams to build large, complex systems with services that can be deployed as independent units so that features can be delivered with greater velocity without deploying the entire application all at once.

While at first glance microservices sound like a terrific idea, successfully building real-world systems with this pattern has proven elusive for many teams. There are several common pitfalls that can introduce coupling among services, and the solutions to these challenges may not be readily apparent. Here are a few examples of microservice anti-patterns:

  • Failing to scope services at the appropriate level of granularity
  • Sharing a database among services
  • Point-to-point communication between services

Here are some patterns which can be applied to address each of these issues.

  • Domain Driven Design
  • Command Query Responsibility Segregation
  • Event Driven Architecture

Domain Driven Design plays an important role in helping ensure that services are scoped at the optimal level of granularity. If a service is scoped too broadly, multiple teams working on the same service can interfere with one another. Scaling such a service may prove inefficient, and deployments may become larger than they need to be. On the other hand, if a service is too finely scoped, it may complicate deployments and increase maintenance costs. DDD addresses this issue by demarcating bounded contexts and defining aggregate root entities which logically group child entities. Most of the time, the ideal level of granularity is to create one microservice per aggregate root.
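As an illustrative sketch (the type names here are hypothetical, not part of Event Driven .NET), an aggregate root groups child entities so that all changes pass through the root, which enforces the aggregate's invariants:

```csharp
// Hypothetical aggregate root: an Order owns its OrderItem children.
// Callers cannot modify the items collection directly; all changes
// go through the root, which validates invariants.
public record OrderItem(string Sku, int Quantity);

public class Order
{
    private readonly List<OrderItem> _items = new();

    public Guid Id { get; } = Guid.NewGuid();
    public IReadOnlyList<OrderItem> Items => _items;

    public void AddItem(string sku, int quantity)
    {
        // Invariant enforced by the aggregate root
        if (quantity <= 0)
            throw new ArgumentOutOfRangeException(nameof(quantity));
        _items.Add(new OrderItem(sku, quantity));
    }
}
```

Following the one-microservice-per-aggregate-root guideline, a single Order Service would own this aggregate and its private data store.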

Command Query Responsibility Segregation is a pattern that separates read and update operations for a data store. This separation allows different optimizations to be applied to the read and write sides, including the use of different database technologies, for example, a relational database for writes and a denormalized NoSQL store for reads. In a microservices architecture this allows one service to update its data store by subscribing to events that other services publish when updating their respective data stores.
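At the API level, CQRS can be as simple as placing commands (writes) and queries (reads) in separate controllers, so each side can be optimized and scaled independently. The following is a minimal sketch, with hypothetical type names rather than the platform's actual API:

```csharp
// Hypothetical sketch of CQRS at the controller level.
[ApiController]
[Route("api/customer")]
public class CustomerCommandController : ControllerBase
{
    [HttpPost]
    public async Task<IActionResult> Create([FromBody] CustomerDto dto)
    {
        // Map the DTO to a domain entity, persist it to the write store,
        // and publish an integration event (elided in this sketch).
        await Task.CompletedTask;
        return CreatedAtAction(nameof(Create), new { id = dto.Id }, dto);
    }
}

[ApiController]
[Route("api/customer")]
public class CustomerQueryController : ControllerBase
{
    [HttpGet("{id}")]
    public async Task<IActionResult> GetById(Guid id)
    {
        // Read from the query-optimized store (elided in this sketch).
        await Task.CompletedTask;
        return Ok();
    }
}

public record CustomerDto(Guid Id, string Name);
```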

Event Driven Architecture is a pattern that focuses on publishing and consuming events in reaction to changes in state. With microservices, it is quite useful for one service to publish integration events so that another service can update its data store by subscribing to these events. This can help alleviate the need for direct, synchronous service-to-service communication, which requires both services to be available at the same time.
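As a sketch of the publishing side (the event and bus types here are illustrative, not the platform's actual API), a service might define an integration event and publish it after persisting a state change:

```csharp
// Hypothetical integration event: a record capturing a state change.
public record CustomerAddressChanged(Guid CustomerId, string NewAddress);

// Assumed abstraction over the underlying message broker.
public interface IEventBus
{
    Task PublishAsync<TEvent>(TEvent @event, string topic);
}

public class CustomerService
{
    private readonly IEventBus _eventBus;
    public CustomerService(IEventBus eventBus) => _eventBus = eventBus;

    public async Task UpdateAddressAsync(Guid customerId, string newAddress)
    {
        // 1. Persist the change to this service's private data store (elided).
        // 2. Publish the integration event so subscribers can react.
        await _eventBus.PublishAsync(
            new CustomerAddressChanged(customerId, newAddress),
            "customer-address-changed");
    }
}
```

Because subscribers consume the event asynchronously, the publishing service does not need the subscribers to be available at the moment of the update.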

An Event-Driven Microservices Platform

Each of these patterns can have a relatively steep learning curve, and it can be difficult to apply them in a way that achieves a distributed system that is both robust and flexible. This is where a microservices platform can help .NET developers.

Event Driven .NET provides an event-driven microservices platform for .NET developers that combines the DDD, CQRS and EDA patterns on top of a messaging layer. This layer consists of an Event Bus, service mesh capabilities exposed by the Distributed Application Runtime (Dapr), a message broker (such as Amazon SNS+SQS or Azure Service Bus), and Kubernetes for container orchestration (such as Amazon EKS or Azure AKS). The Event Bus relies on Dapr for pub-sub and supports idempotency with an event cache that filters out duplicate events; it also uses a schema registry to validate messages for compatibility with registered schemas. Sagas in Event Driven .NET use the Event Bus to orchestrate updates across services so that they all succeed or are rolled back with compensating actions. The messaging layer allows for fine-tuning various cross-cutting concerns, such as monitoring and observability, security and privacy, rate limiting, feature flags, testing and deployment.

Rather than imposing a prescriptive framework that dictates one way to build microservices, Event Driven .NET provides a set of abstractions and libraries, deployed as NuGet packages, combined with reference architectures which illustrate how to build loosely coupled services, each scoped to an aggregate root and persisted to a private data store. Services have REST handlers with separate command and query controllers that accept Data Transfer Objects mapped to domain entities. Services communicate with one another over an Event Bus that uses Dapr to abstract away the concrete message broker implementation and enable observability with monitoring and tracing. Finally, services are deployed to Kubernetes for container orchestration, fault tolerance and responsive elastic scaling.

In the Reference Architecture for Event Driven .NET, the Customer Service publishes a versioned “address changed” integration event whenever a customer’s address is updated. The Order Service subscribes to this event, which is delivered by the underlying message broker, so that it can update the shipping address of the customer’s orders. This way, the Order Service always has up-to-date customer address information, which removes the need for the Order Service to communicate directly with the Customer Service. (Note that every pattern has drawbacks, including this one, and that the reference architecture depicts a simplified version of CQRS, which is often implemented using separate services and databases for read and write operations.)
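A subscribing handler in the Order Service might look roughly like the following (the handler and repository types are hypothetical, shown only to illustrate the flow):

```csharp
// The shared event contract published by the Customer Service.
public record CustomerAddressChanged(Guid CustomerId, string NewAddress);

// Assumed repository abstraction over the Order Service's private store.
public interface IOrderRepository
{
    Task UpdateShippingAddressAsync(Guid customerId, string newAddress);
}

// Hypothetical handler invoked when the event arrives over the event bus.
public class CustomerAddressChangedHandler
{
    private readonly IOrderRepository _orders;
    public CustomerAddressChangedHandler(IOrderRepository orders) => _orders = orders;

    public Task HandleAsync(CustomerAddressChanged @event) =>
        _orders.UpdateShippingAddressAsync(@event.CustomerId, @event.NewAddress);
}
```

The Order Service keeps its own copy of the address in its private data store, trading eventual consistency for looser coupling.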

A Layered Approach

Event Driven .NET is designed to allow you to wade gradually into the waters of event-driven microservices, so that your system architecture aligns with your organizational structure, as well as the skill set of your developers and the maturity of your DevOps processes.

Starting at the bottom, you can adopt a Domain Driven Design (DDD) approach, then build on this foundation by adding Command Query Responsibility Segregation (CQRS). When you have the need for inter-service communication, you can move to the Event Bus layer, utilizing Dapr as a pub-sub abstraction over a message broker.

If you have the need to maintain eventual data consistency with updates to multiple services spanning a logically atomic operation, you are ready to advance to the next layer: Sagas. When your organization has the need for additional capabilities, such as built-in audit trail and the ability to atomically persist and publish events, you may wish to advance to the Event Sourcing layer (to be implemented). Lastly, if you wish to perform real-time data analysis and transformation, you can consider using Event Streams (to be implemented).

Roadmap

At the time of this writing, Event Driven .NET has implemented the DDD, CQRS, Event Bus and Sagas layers. The following layers are on the roadmap to be completed subsequently.

  • Event Sourcing: This will provide the ability to treat domain events as the source of truth for a system, so that services are no longer directly responsible for reversing persistence should event publishing fail. Event sourcing also provides built-in audit trail and replay capabilities.
  • Event Streams: This will support real-time data analysis and transformation by means of a durable, append-only message broker, such as Apache Kafka or Amazon Kinesis.

Summary

Event Driven .NET provides a platform for building event-driven microservices. Its purpose is to make it easier for .NET developers to build distributed systems which deliver on the promise of microservices: parallel multi-team development, independently timed deployments, and independent scaling to meet varying levels of demand in a cost-efficient manner.

Event Driven .NET accomplishes this by providing abstractions, libraries and reference architectures for building microservices that are appropriately scoped within a bounded context and that use events to communicate with one another asynchronously over an event bus abstraction. The event bus leverages Dapr as an application-level service mesh, enabling idempotency and schema validation as it abstracts away the underlying message broker.

Finally, Event Driven .NET builds on the foundation of DDD with layers for CQRS to allow for read-write optimizations, Sagas for modern “distributed transactions” based on eventual consistency, Event Sourcing for audit trail and transactional publishing, and Event Streams for real-time message processing.

To get started with Event Driven .NET, pay a visit to the Reference Architecture repository, where you can follow the steps in the Development Guide to build event-driven microservices that are both highly maintainable and scalable.

Now go forth and build microservices with confidence!

Posted in Technical | Tagged , , , , , | Leave a comment

Using SpecFlow for BDD with .NET 6 Web API

Behavior Driven Development (BDD) is a practice in which developers and QA testers write automated user acceptance tests (UAT) that satisfy acceptance criteria in user stories. This helps developers write code that is focused on satisfying the business criteria which define the purpose of the software they are writing. As with Test Driven Development (TDD), BDD tests are often written before developers write any code. In contrast to unit tests, where external dependencies are discouraged, BDD tests are integration tests in which web services and database connections are used. They are written in a structured, human-readable language called Gherkin (with Given-When-Then syntax) that is understandable to non-technical audiences and can be wired to tests that are executed in an automated fashion.

Note: You can download the code for this blog post by cloning the accompanying GitHub repository: https://github.com/tonysneed/Demo.SpecFlowWebApi

In this blog post I will show you how to use SpecFlow to create BDD tests for an ASP.NET Core Web API project. This is in addition to BDD tests you might create with SpecFlow and Selenium for driving a web UI built using a SPA framework such as Angular. The reason for building separate Web API tests is that, from an architectural perspective, a Web API constitutes a distinct application layer which may be consumed by any number of different clients, both visual and non-visual. As such, Web APIs should have their own user stories with acceptance criteria that apply to the REST interface.

What makes it challenging to write SpecFlow tests for a Web API is that you need to configure an in-memory test server using a WebApplicationFactory and wire it up to SpecFlow’s dependency injection system in order to make it available to step definition classes. You also need to bring in an appsettings.json file with values that override those in the Web API project so that you can use a test database instead of the real one. Finally, there is the challenge of using JSON files in the SpecFlow project for the body of HTTP requests and expected responses. Rest assured you’ll be able to meet these challenges, because this blog post will describe precisely how to do these things.

SpecFlow is a popular tool for automating BDD tests with .NET. It comes as a plugin for Integrated Development Environments (IDEs) such as Microsoft Visual Studio and JetBrains Rider.

After installing the plugin, simply add a new project to your solution by selecting the SpecFlow project template. Be sure to specify .NET 6 for the framework together with xUnit as the testing framework.

Right-click on the Features folder to add a SpecFlow Feature file.

This is where you’ll add scenarios in Given-When-Then format. These should match acceptance criteria contained in the user stories for your Web API. At first you’ll see yellow squiggly lines underlining each statement. This is because you have not yet created any step definitions. The IDE will provide a handy context menu for creating steps for each statement. You’ll want to create a new binding class in which to place the first step.
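For instance, a feature file exercising the weather forecast endpoint built later in this post might look like the following (the endpoint route shown here is an assumption for illustration; the step wording matches the step definitions shown below):

```gherkin
Feature: Weather Forecast API

Scenario: Get weather forecast by id
	Given I am a client
	And the repository has weather data
	When I make a GET request with id '1' to 'weatherforecast'
	Then the response status code is '200'
	And the response json should be 'weather.json'
```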

If you haven’t done so already, you’ll need to add a new Web API project to the solution. Because .NET 6 does away with the Startup class, you need to modify the Program class in order for the SpecFlow project to “see” it. First, add a public partial Program class to the Program.cs file.


public partial class Program { }


Next add the following to the Web API .csproj file, replacing the value with the name of your SpecFlow project.


<ItemGroup>
  <InternalsVisibleTo Include="SpecFlowWebApi.Specs" />
</ItemGroup>

The new Web API project template includes a weather forecasts controller. In my sample I updated the project to use a repository interface with an implementation that persists entities to a MongoDB database.


public interface IWeatherRepository
{
    Task<IEnumerable<WeatherForecast>> GetAsync();
    Task<WeatherForecast?> GetAsync(int id);
    Task<WeatherForecast?> AddAsync(WeatherForecast entity);
    Task<WeatherForecast?> UpdateAsync(WeatherForecast entity);
    Task<int> RemoveAsync(int id);
}

Next you’ll need to go to the step definitions class in the SpecFlow project and add parameters for IWeatherRepository and WebApplicationFactory<Program>. The latter is required to create an in-memory test server for the integration tests. In addition, you can add a constructor parameter for a JsonFilesRepository, which is a helper class I created to retrieve the contents of JSON files from a project folder. SpecFlow tests can use these files both for input parameters and expected results returned by REST endpoints in your Web API project.


public class JsonFilesRepository
{
    private const string Root = "../../../json/";
    public Dictionary<string, string> Files { get; } = new();

    public JsonFilesRepository(params string[] files)
    {
        var filesList = files.ToList();
        if (!filesList.Any())
            foreach (var file in Directory.GetFiles(Root))
                filesList.Add(Path.GetFileName(file));
        foreach (var file in filesList)
        {
            var path = Path.Combine(Root, file);
            var contents = File.ReadAllText(path);
            Files.Add(file, contents);
        }
    }
}

Here are the contents of the weather.json file used to create a new WeatherForecast by means of a POST controller action.


{
  "id": 1,
  "date": "2022-01-01T06:00:00Z",
  "temperatureC": 32,
  "temperatureF": 89,
  "summary": "Freezing",
  "eTag": "6e9eff61-a3ed-4339-93fb-24151149b46c"
}


The complete step definitions class should look like the following. (Some methods are elided for clarity.)


[Binding]
public class WeatherWebApiStepDefinitions
{
    private const string BaseAddress = "http://localhost/";

    public WebApplicationFactory<Program> Factory { get; }
    public IWeatherRepository Repository { get; }
    public HttpClient Client { get; set; } = null!;
    private HttpResponseMessage Response { get; set; } = null!;
    public JsonFilesRepository JsonFilesRepo { get; }
    private WeatherForecast? Entity { get; set; }
    private JsonSerializerOptions JsonSerializerOptions { get; } = new JsonSerializerOptions
    {
        AllowTrailingCommas = true,
        PropertyNameCaseInsensitive = true
    };

    public WeatherWebApiStepDefinitions(
        WebApplicationFactory<Program> factory,
        IWeatherRepository repository,
        JsonFilesRepository jsonFilesRepo)
    {
        Factory = factory;
        Repository = repository;
        JsonFilesRepo = jsonFilesRepo;
    }

    [Given(@"I am a client")]
    public void GivenIAmAClient()
    {
        Client = Factory.CreateDefaultClient(new Uri(BaseAddress));
    }

    [Given(@"the repository has weather data")]
    public async Task GivenTheRepositoryHasWeatherData()
    {
        var weathersJson = JsonFilesRepo.Files["weathers.json"];
        var weathers = JsonSerializer.Deserialize<IList<WeatherForecast>>(weathersJson, JsonSerializerOptions);
        if (weathers != null)
            foreach (var weather in weathers)
                await Repository.AddAsync(weather);
    }

    [When(@"I make a GET request with id '(.*)' to '(.*)'")]
    public async Task WhenIMakeAgetRequestWithIdTo(int id, string endpoint)
    {
        Response = await Client.GetAsync($"{endpoint}/{id}");
    }

    [When(@"I make a POST request with '(.*)' to '(.*)'")]
    public async Task WhenIMakeApostRequestWithTo(string file, string endpoint)
    {
        var json = JsonFilesRepo.Files[file];
        var content = new StringContent(json, Encoding.UTF8, MediaTypeNames.Application.Json);
        Response = await Client.PostAsync(endpoint, content);
    }

    [Then(@"the response status code is '(.*)'")]
    public void ThenTheResponseStatusCodeIs(int statusCode)
    {
        var expected = (HttpStatusCode)statusCode;
        Assert.Equal(expected, Response.StatusCode);
    }

    [Then(@"the location header is '(.*)'")]
    public void ThenTheLocationHeaderIs(Uri location)
    {
        Assert.Equal(location, Response.Headers.Location);
    }

    [Then(@"the response json should be '(.*)'")]
    public async Task ThenTheResponseDataShouldBe(string file)
    {
        var expected = JsonFilesRepo.Files[file];
        var response = await Response.Content.ReadAsStringAsync();
        var actual = response.JsonPrettify();
        Assert.Equal(expected, actual);
    }
}

The Web API project has an appsettings.json file in which a WeatherDatabaseSettings section is present with values for a MongoDB connection string, database name and collection name. You can add a matching appsettings.json file to your SpecFlow project in which you replace the value for DatabaseName with a test database used exclusively for the integration testing.


{
  "WeatherDatabaseSettings": {
    "ConnectionString": "mongodb://localhost:27017",
    "DatabaseName": "WeathersTestDb",
    "CollectionName": "Weathers"
  }
}

You’ll need to add a WeatherHooks.cs file to the Hooks folder, so that you can add instances of WebApplicationFactory<Program>, IWeatherRepository and JsonFilesRepository to the SpecFlow dependency injection system. My sample contains a GetWebApplicationFactory method for configuring a WebApplicationFactory<Program> using an appsettings.json file bound to a strongly typed WeatherDatabaseSettings class.


private WebApplicationFactory<Program> GetWebApplicationFactory() =>
    new WebApplicationFactory<Program>()
        .WithWebHostBuilder(builder =>
        {
            IConfigurationSection? configSection = null;
            builder.ConfigureAppConfiguration((context, config) =>
            {
                config.AddJsonFile(Path.Combine(Directory.GetCurrentDirectory(), AppSettingsFile));
                configSection = context.Configuration.GetSection(nameof(WeatherDatabaseSettings));
            });
            builder.ConfigureTestServices(services =>
                services.Configure<WeatherDatabaseSettings>(configSection));
        });

The WeatherHooks class also contains a ClearData method which, as the name suggests, clears data from the test database before each scenario is run.


private async Task ClearData(
    WebApplicationFactory<Program> factory)
{
    if (factory.Services.GetService(typeof(IWeatherRepository))
        is not IWeatherRepository repository) return;
    var entities = await repository.GetAsync();
    foreach (var entity in entities)
        await repository.RemoveAsync(entity.Id);
}


Lastly, the WeatherHooks class has a constructor that accepts an IObjectContainer and a RegisterServices method that registers WebApplicationFactory<Program>, IWeatherRepository and JsonFilesRepository as services with SpecFlow DI so they can be injected into step definition classes.


[Binding]
public class WeatherHooks
{
    private readonly IObjectContainer _objectContainer;
    private const string AppSettingsFile = "appsettings.json";

    public WeatherHooks(IObjectContainer objectContainer)
    {
        _objectContainer = objectContainer;
    }

    [BeforeScenario]
    public async Task RegisterServices()
    {
        var factory = GetWebApplicationFactory();
        await ClearData(factory);
        _objectContainer.RegisterInstanceAs(factory);
        var jsonFilesRepo = new JsonFilesRepository();
        _objectContainer.RegisterInstanceAs(jsonFilesRepo);
        var repository = (IWeatherRepository)factory.Services.GetService(typeof(IWeatherRepository))!;
        _objectContainer.RegisterInstanceAs(repository);
    }

    // GetWebApplicationFactory and ClearData methods shown above
}


You can execute the SpecFlow tests from the IDE Test Explorer, just as you would unit tests. This also allows you to debug individual tests and hit breakpoints both in the SpecFlow and Web API projects.

In this blog post I have shown you how to use SpecFlow to create user acceptance tests for BDD with a .NET 6 Web API project. By following the steps outlined here and in my sample, you’ll be able to configure a test web server for hosting your Web API and create an HTTP client to invoke controller actions, with data saved to a test database and results compared to JSON files added to the SpecFlow project. Now you can fearlessly add automated integration tests written in Gherkin syntax to help ensure that your Web APIs continue to meet acceptance criteria and deliver functionality that is targeted to the business domain.

Happy coding. 😀


Copy NuGet Content Files to Output Directory on Build

Recently someone posted an issue to my EF Core Handlebars project about an error they were receiving. After reproducing the issue, I narrowed it down to a problem copying template files to the project output directory on build. Because the file was missing from its intended location, my program was throwing an exception when it could not find the file.

This seemed like a strange error to me, because after over 267,000 package downloads, this was the first time I could recall someone reporting it. However, this could be explained by the fact that most developers who use my package do so from Visual Studio, which seems to perform some magic when copying package files to the output directory. Nevertheless, the error does take place consistently when running the EF Core scaffolding command from a terminal.

To correct the anomaly, I first needed to ensure the CodeTemplates folder was properly packaged in the NuGet file. The way I had been adding files was to include an ItemGroup in the project .csproj file in which I specified both the Content and PackagePath elements.

<ItemGroup>
  <Content Include="CodeTemplates\CSharpDbContext\DbContext.hbs" PackagePath="lib\netstandard2.1\CodeTemplates\CSharpDbContext\" />
  <Content Include="CodeTemplates\CSharpDbContext\Partials\DbImports.hbs" PackagePath="lib\netstandard2.1\CodeTemplates\CSharpDbContext\Partials\" />
  <Content Include="CodeTemplates\CSharpDbContext\Partials\DbConstructor.hbs" PackagePath="lib\netstandard2.1\CodeTemplates\CSharpDbContext\Partials\" />
  <Content Include="CodeTemplates\CSharpDbContext\Partials\DbOnConfiguring.hbs" PackagePath="lib\netstandard2.1\CodeTemplates\CSharpDbContext\Partials\" />
  <Content Include="CodeTemplates\CSharpDbContext\Partials\DbSets.hbs" PackagePath="lib\netstandard2.1\CodeTemplates\CSharpDbContext\Partials\" />
  <!-- Remaining items elided for clarity -->
</ItemGroup>

Viewing v5.0.4 of the .nupkg file in the NuGet Package Explorer, I could see that the CodeTemplates folder was placed as specified under the target framework folder of netstandard2.1.

While this worked functionally, the best practice starting with NuGet 3.3 has been to utilize the contentFiles feature to include static files in a NuGet package. In addition, rather than listing each file, I could specify the content in a single line with a file globbing pattern.

<ItemGroup>
  <Content Include="CodeTemplates/**/*.*" Pack="true" PackagePath="contentFiles/CodeTemplates">
    <PackageCopyToOutput>true</PackageCopyToOutput>
  </Content>
</ItemGroup>

This resulted in the CodeTemplates folder being placed under contentFiles in the NuGet package, which is precisely where I wanted it to be.

Now that the CodeTemplates folder was in the correct location in the NuGet package, I could turn my attention to the task of copying the folder to the project output directory on build.

It turns out, to no surprise, that including content files in a NuGet package will not result in their being automatically copied to the project output directory when the developer builds the project. For this to happen, a custom build target is required, in which MSBuild is explicitly instructed to copy files from one place to another on build. So I created a .targets file prefixed with my library’s project name, and I added a Target to copy the CodeTemplates folder from the NuGet contentFiles directory to the build target directory. In order to copy the entire folder structure, I also needed to include %(RecursiveDir) in the destination path.

<Project>
  <ItemGroup>
    <Files Include="$(MSBuildThisFileDirectory)/../contentFiles/CodeTemplates/**/*.*" />
  </ItemGroup>
  <Target Name="CopyFiles" AfterTargets="Build">
    <Copy SourceFiles="@(Files)" DestinationFolder="$(TargetDir)/CodeTemplates/%(RecursiveDir)" />
  </Target>
</Project>

Lastly, I added the following line in an ItemGroup of the project’s .csproj file, which placed the .targets file in the build folder of the NuGet package.

<Content Include="EntityFrameworkCore.Scaffolding.Handlebars.targets" PackagePath="build/EntityFrameworkCore.Scaffolding.Handlebars.targets" />

To verify that the CodeTemplates folder was copied to the expected location on build, I ran the following commands from a Mac terminal.

dotnet new classlib --name FolderCopyTest
cd FolderCopyTest
dotnet add package EntityFrameworkCore.Scaffolding.Handlebars --version 6.0.0-preview3
dotnet build
cd bin/Debug/net6.0
ls -R

I hope that sharing my journey of discovery will help someone seeking to perform the same kind of task. Happy coding!


Run WPF in .NET Core on Nano Server in Docker

Every now and then you’re presented with a scenario that should in theory work as advertised, but you’re not convinced until you actually see it with your own eyes. I recently found myself in this situation when migrating some code from the full .NET Framework running on a traditional Windows Server cluster to .NET Core running as a microservice on Kubernetes.

You can download the code for this post from my hello-netcore-wpf-nano GitHub repo. The Dockerfile for the Windows Desktop Nano Server container image on Docker Hub can be found in my dotnet-runtime-windowsdesktop GitHub repo.

The code in question is non-visual but has a dependency on WPF (Windows Presentation Foundation), in particular the Geometry classes in the System.Windows.Media namespace. Because .NET Core version 3 and later includes WPF, and Nano Server (a compact version of the Windows operating system designed to run in containers) supports .NET Core (but not the full .NET Framework), it should be possible to run a non-visual WPF application on Nano Server in a Docker container, and to deploy it to Kubernetes using Amazon EKS, which has support for Windows containers.

.NET Core WPF Console App

The first step is to create a WPF console app using .NET Core, which is a rather unnatural act, since WPF is normally used to create a visual UI. You need to start out with a WPF .NET Core Library, using either Visual Studio or the .NET Core CLI.

dotnet new wpflib -n MyWpfConsoleApp

Running this command on Windows (don’t even think of doing it on a Mac) will produce a class library project based on the Windows Desktop SDK. To convert it to a .NET Core console app all you need to do is edit the .csproj file and set the project output type to Exe.

<Project Sdk="Microsoft.NET.Sdk.WindowsDesktop">
  <PropertyGroup>
    <TargetFramework>netcoreapp3.1</TargetFramework>
    <UseWPF>true</UseWPF>
    <OutputType>Exe</OutputType>
  </PropertyGroup>
</Project>

Then add a Program.cs file with a static Main method in which you use some Geometry classes.

using System;
using System.Windows.Media.Media3D;

namespace MyWpfConsoleApp
{
    public static class Program
    {
        public static void Main()
        {
            var point = new Point3D(1, 2, 3);
            Console.WriteLine($"Point X: {point.X}, Y: {point.Y}, Z: {point.Z}");
        }
    }
}

Executing dotnet run will produce the following output.

Point X: 1, Y: 2, Z: 3

Dockerized WPF Console App

To use Docker you must first install Docker Desktop for Windows.

To Dockerize your WPF console app, start by adding a Dockerfile to the project. If you are using Visual Studio, ordinarily you would right-click the project in the Solution Explorer and select Add Docker Support. But this won’t work with a WPF app.

A more feasible approach is to open the project folder in Visual Studio Code, where you have installed the Docker extension. Press Ctrl+Shift+P to open the Command Palette, type Docker and select Add Docker Files to Workspace.

Select .NET Core Console for Windows and you’ll get a Dockerfile that looks like this.

FROM mcr.microsoft.com/dotnet/core/runtime:3.1-nanoserver-1909 AS base
WORKDIR /app
FROM mcr.microsoft.com/dotnet/core/sdk:3.1-nanoserver-1909 AS build
WORKDIR /src
COPY ["MyWpfConsoleApp.csproj", "./"]
RUN dotnet restore "./MyWpfConsoleApp.csproj"
COPY . .
WORKDIR "/src/."
RUN dotnet build "MyWpfConsoleApp.csproj" -c Release -o /app/build
FROM build AS publish
RUN dotnet publish "MyWpfConsoleApp.csproj" -c Release -o /app/publish
FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
ENTRYPOINT ["dotnet", "MyWpfConsoleApp.dll"]

Build and run the container.

docker build -t wpf-console-app .
docker run --rm --name wpf-console-app wpf-console-app

At this point you’ll get an error complaining that the Microsoft.WindowsDesktop.App framework could not be found.

It was not possible to find any compatible framework version
The framework 'Microsoft.WindowsDesktop.App', version '3.1.0' was not found.
You can resolve the problem by installing the specified framework and/or SDK.

This is because the Windows Desktop framework is not installed on the Nano Server base image. For that you’ll need to create a custom base image that installs the Windows Desktop framework on Nano Server (or simply use my Windows Desktop Nano Server container image). Here is the WinDesktop.Dockerfile I created. It downloads and runs the installer for the Windows Desktop runtime, then copies the dotnet folder from the installer image to the Nano Server runtime image.

# escape=`
# Installer image
FROM mcr.microsoft.com/windows/servercore:1909 AS installer
SHELL ["powershell", "-Command", "$ErrorActionPreference = 'Stop'; $ProgressPreference = 'SilentlyContinue';"]
# Retrieve .NET Core Runtime
# USER ContainerAdministrator
RUN $dotnet_version = '3.1.5'; `
Invoke-WebRequest -OutFile dotnet-installer.exe https://download.visualstudio.microsoft.com/download/pr/86835fe4-93b5-4f4e-a7ad-c0b0532e407b/f4f2b1239f1203a05b9952028d54fc13/windowsdesktop-runtime-3.1.5-win-x64.exe; `
$dotnet_sha512 = '5df17bd9fed94727ec5b151e1684bf9cdc6bfd3075f615ab546759ffca0679d23a35fcf7a8961ac014dd5a4ff0d22ef5f7434a072e23122d5c0415fcd4198831'; `
if ((Get-FileHash dotnet-installer.exe -Algorithm sha512).Hash -ne $dotnet_sha512) { `
Write-Host 'CHECKSUM VERIFICATION FAILED!'; `
exit 1; `
}; `
`
./dotnet-installer.exe /S
# Runtime image
FROM mcr.microsoft.com/windows/nanoserver:1909
ENV `
# Enable detection of running in a container
DOTNET_RUNNING_IN_CONTAINER=true
# In order to set system PATH, ContainerAdministrator must be used
USER ContainerAdministrator
RUN setx /M PATH "%PATH%;C:\Program Files\dotnet"
USER ContainerUser
COPY --from=installer ["/Program Files/dotnet", "/Program Files/dotnet"]

Note: If you create your own custom container image using this Dockerfile, you will need to push it to a container registry of your choice, such as Docker Hub, Azure Container Registry, Amazon ECR or Google Container Registry.

To get your local Dockerfile to work, replace the FROM statement at the top with one that specifies the custom Docker image you created (or you can reference my base image).

FROM tonysneed/dotnet-runtime-windowsdesktop:3.1-nanoserver-1909 AS base

After re-running the docker build and docker run commands shown above, you should see output from your WPF console app.

Congratulations. You now have a Nano Server container running a .NET Core app that references WPF.

Happy coding!


An Event Stream Processing Micro-Framework for Apache Kafka

Apache Kafka, originally developed by LinkedIn and open sourced in 2011, is the de-facto industry standard for real-time data feeds that can reliably handle large volumes of data with extremely high throughput and low latency. Companies like Uber, Netflix and Slack use Kafka to process trillions of messages per day, and, unlike a traditional queue or message broker, Kafka functions as a unified, durable log of append-only, ordered events that can be replayed or archived.

You can download the code for the event stream processing micro-framework and sample application here: https://github.com/event-streams-dotnet/event-stream-processing

Kafka and .NET

Kafka is written in Java, and most of the libraries and tools are only available in Java. This makes the experience of developing for Kafka in C# somewhat limiting. The Kafka Streams and Kafka Connect APIs, for example, are only available in Java, and Confluent’s ksqlDB product is only accessible via a REST interface.

All is not lost, however, for C# developers wishing to use Kafka. Confluent, a company founded by the creators of Kafka, offers confluent-kafka-dotnet, a .NET Kafka client that provides high-level Consumer, Producer and AdminClient APIs. This is useful for building single event stream processing applications, which can be used for scenarios like event sourcing and real-time ETL (extract-transform-load) data pipelines. (There is also an open source kafka-streams-dotnet project that aims to provide the same functionality as Kafka Streams on .NET for multiple event stream processing applications.)

Single Event Stream Processing

As the name implies, single event stream processing entails consuming and processing one event at a time, rather than capturing and processing multiple events at the same time (for example, to aggregate results for a specific timeframe). This is a very powerful paradigm for both event-driven microservice architectures and transforming data as it flows from one data source to another.

A good example is sending an event through a chain of message handlers which apply validation, enrichment and filtering, before writing processed events back to Kafka as a new event stream.

Event Stream Processing Micro-Framework

Because this is the kind of thing you might want to do all the time, it makes sense to create a reusable framework for processing event streams. The framework abstractions should provide a standard approach that is generic, type-safe and extensible, without being coupled to Kafka or any other streaming platform. This is the purpose of the EventStreamProcessing.Abstractions package. There you will find an abstract EventProcessor class that implements the IEventProcessor interface. Notice the generic TSourceEvent and TSinkEvent type arguments, which allow you to specify any message type.

public abstract class EventProcessor<TSourceEvent, TSinkEvent> : EventProcessorBase, IEventProcessor
{
    protected readonly IEventConsumer<TSourceEvent> consumer;
    protected readonly IEventProducer<TSinkEvent> producer;

    public EventProcessor(
        IEventConsumer<TSourceEvent> consumer,
        IEventProducer<TSinkEvent> producer,
        params IMessageHandler[] handlers)
        : base(handlers)
    {
        this.consumer = consumer;
        this.producer = producer;
    }

    public abstract Task Process(CancellationToken cancellationToken = default);
}

Next there is the abstract MessageHandler class that implements IMessageHandler, which is used to build a chain of message handlers.

public abstract class MessageHandler : IMessageHandler
{
    private IMessageHandler nextHandler;

    public void SetNextHandler(IMessageHandler nextHandler)
    {
        this.nextHandler = nextHandler;
    }

    public virtual async Task<Message> HandleMessage(Message sourceMessage)
    {
        // Forward to the next handler if one is set; otherwise end the chain.
        return nextHandler != null ? await nextHandler.HandleMessage(sourceMessage) : sourceMessage;
    }
}
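For example, a chain can be wired up by calling SetNextHandler on each handler in turn. Here is a minimal sketch, assuming two hypothetical MessageHandler subclasses (this is the same wiring the KafkaEventProcessor performs for you in its BuildHandlerChain method):

```csharp
// Hypothetical handlers derived from MessageHandler.
var validate = new ValidateHandler();
var enrich = new EnrichHandler();

// Each handler forwards to the next; the last handler ends the chain.
validate.SetNextHandler(enrich);

// Invoking the first handler passes the message through the whole chain.
var result = await validate.HandleMessage(new Message<int, string>(1, "hello"));
```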

Lastly, the Message class encapsulates an event as a key-value pair.

public class Message<TKey, TValue> : Message
{
    public TKey Key { get; set; }
    public TValue Value { get; set; }

    public Message(TKey key, TValue value)
    {
        Key = key;
        Value = value;
    }
}

In addition to a platform-agnostic set of abstractions, there is an EventStreamProcessing.Kafka package that references Confluent.Kafka and has Kafka-specific implementations of the IEventConsumer, IEventProducer and IEventProcessor interfaces. The KafkaEventProcessor class overrides the Process method with code that consumes a raw event, builds the chain of handlers, and produces a processed event.

public override async Task Process(CancellationToken cancellationToken = default)
{
    // Build chain of handlers
    BuildHandlerChain();

    // Consume event
    var sourceEvent = consumer.ConsumeEvent(cancellationToken);

    // Return if EOF
    if (sourceEvent == null) return;

    // Invoke handler chain
    var sourceMessage = new Message<TSourceKey, TSourceValue>(sourceEvent.Key, sourceEvent.Value);
    var sinkMessage = await handlers[0].HandleMessage(sourceMessage) as Message<TSinkKey, TSinkValue>;

    // Return if message filtered out
    if (sinkMessage == null) return;

    // Produce event
    var sinkEvent = new Confluent.Kafka.Message<TSinkKey, TSinkValue>
    {
        Key = sinkMessage.Key,
        Value = sinkMessage.Value
    };
    producer.ProduceEvent(sinkEvent);
}

Build Your Own Event Stream Processing Service

To build your own event stream processing service it’s best to start by creating a new .NET Core Worker Service.

dotnet new worker --name MyWorker

Add the EventStreamProcessing.Kafka package. This will also bring in the EventStreamProcessing.Abstractions and Confluent.Kafka packages.

dotnet add package EventStreamProcessing.Kafka

Inject IEventProcessor into the Worker class constructor, then call eventProcessor.Process inside the while loop in the ExecuteAsync method.

public class Worker : BackgroundService
{
    private readonly IEventProcessor _eventProcessor;
    private readonly ILogger<Worker> _logger;

    public Worker(IEventProcessor eventProcessor, ILogger<Worker> logger)
    {
        _eventProcessor = eventProcessor;
        _logger = logger;
    }

    protected override async Task ExecuteAsync(CancellationToken stoppingToken)
    {
        while (!stoppingToken.IsCancellationRequested)
        {
            _logger.LogInformation("Worker running at: {time}", DateTimeOffset.Now);
            await _eventProcessor.Process(stoppingToken);
        }
    }
}

To set up the event processor you will need some helper methods for creating Kafka consumers and producers.

public static class KafkaUtils
{
    public static IConsumer<int, string> CreateConsumer(string brokerList, List<string> topics)
    {
        var config = new ConsumerConfig
        {
            BootstrapServers = brokerList,
            GroupId = "sample-consumer"
        };
        var consumer = new ConsumerBuilder<int, string>(config).Build();
        consumer.Subscribe(topics);
        return consumer;
    }

    public static IProducer<int, string> CreateProducer(string brokerList)
    {
        var config = new ProducerConfig { BootstrapServers = brokerList };
        var producer = new ProducerBuilder<int, string>(config).Build();
        return producer;
    }
}

Next create some classes that extend MessageHandler in which you override the HandleMessage method to process the message. Here is an example of a handler that transforms the message.

public class TransformHandler : MessageHandler
{
    public override async Task<Message> HandleMessage(Message sourceMessage)
    {
        var message = (Message<int, string>)sourceMessage;
        var sinkMessage = new Message<int, string>(message.Key, message.Value.ToUpper());
        return await base.HandleMessage(sinkMessage);
    }
}

Then add code to the CreateHostBuilder method in the Program class where you set up dependency injection for IEventProcessor.

public class Program
{
    public static void Main(string[] args)
    {
        CreateHostBuilder(args).Build().Run();
    }

    public static IHostBuilder CreateHostBuilder(string[] args) =>
        Host.CreateDefaultBuilder(args)
            .ConfigureServices((hostContext, services) =>
            {
                // Add event processor
                services.AddSingleton<IEventProcessor>(sp =>
                {
                    // Create Kafka consumer and producer
                    var kafkaConsumer = KafkaUtils.CreateConsumer("localhost:9092", new List<string> { "raw-events" });
                    var kafkaProducer = KafkaUtils.CreateProducer("localhost:9092");
                    // Create handlers
                    var handlers = new List<MessageHandler> { new TransformHandler() };
                    // Create event processor
                    return new KafkaEventProcessor<int, string, int, string>(
                        new KafkaEventConsumer<int, string>(kafkaConsumer),
                        new KafkaEventProducer<int, string>(kafkaProducer, "processed-events"),
                        handlers.ToArray());
                });
                services.AddHostedService<Worker>();
            });
}

Run Kafka Locally with Docker

Note: To run Kafka you will need to allocate 8 GB of memory to Docker Desktop.

Confluent has a convenient repository with a docker-compose.yml file for running Kafka locally with Docker. Simply clone the repo and run docker-compose.

git clone https://github.com/confluentinc/cp-all-in-one
cd cp-all-in-one
git checkout 5.5.0-post
cd cp-all-in-one/
docker-compose up -d --build
docker-compose ps

Open the Kafka control center: http://localhost:9021/. Then select the main cluster, go to Topics and create the “raw-events” and “processed-events” topics.

Use the console consumer to show the processed events.

docker exec -it broker bash
cd /usr/bin
./kafka-console-consumer --bootstrap-server broker:29092 --topic "processed-events"

Use the console producer to create raw events.

docker exec -it broker bash
cd /usr/bin
./kafka-console-producer --broker-list broker:29092 --topic "raw-events"
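By default the console producer treats each line as a message value with no key. Since the pipeline in this post keys events by an integer, you can optionally ask the console producer to parse keys from the input using the standard parse.key and key.separator properties (the colon separator is an arbitrary choice):

```shell
./kafka-console-producer --broker-list broker:29092 --topic "raw-events" \
  --property "parse.key=true" --property "key.separator=:"
# Enter each event as key:value, for example:
#   1:Hello World
```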

Check Out the Sample

The event-stream-processing repository has a samples folder that contains a working example of an event processing service based on the Event Stream Processing Micro-Framework. Here is a diagram showing the data pipeline used by the Sample Worker.

event-stream-processing-sample

  1. The Sample Producer console app lets the user write a stream of events to the Kafka broker using the “raw-events” topic. The numeral represents the event key, and the text “Hello World” represents the event value.
  2. The Sample Worker service injects an IEventProcessor into the KafkaWorker class constructor. The ExecuteAsync method then calls eventProcessor.Process in a while loop until the operation is cancelled.
  3. The Program.CreateHostBuilder method registers an IEventProcessor for dependency injection with a KafkaEventProcessor that uses a KafkaEventConsumer, a KafkaEventProducer and an array of MessageHandler consisting of ValidationHandler, EnrichmentHandler and FilterHandler.
  4. The KafkaEventConsumer in Sample Worker subscribes to the “raw-events” topic of the Kafka broker running on localhost:9092. The message handlers validate, enrich and filter the events one at a time. If the message key does not correlate to a key in the language store, a validation error is written back to Kafka using the “validation-errors” topic. The EnrichmentHandler looks up a translation for “Hello” in the language store and transforms the message with the selected translation. The FilterHandler accepts a lambda expression for filtering messages; in this case the English phrase “Hello” is filtered out. Lastly, the KafkaEventProducer writes processed events back to Kafka using the “final-events” topic.
  5. The Sample Consumer console app reads the “validation-errors” and “final-events” topics, displaying the events in the console.

Follow instructions in the project ReadMe file to run the sample. If you wish to run the Sample Worker in a Docker container, you will need to place it in the same network as the Kafka broker, which can be accomplished using a separate docker-compose.yml file for the Sample Worker.
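As a sketch, such a docker-compose.yml for the Sample Worker might look like the following. The service name and Dockerfile path are assumptions, and the external network name must match the network created by the Confluent compose file (docker-compose prefixes the default network with the project directory name; check with docker network ls):

```yaml
version: "3.7"
services:
  sample-worker:
    build:
      context: .
      dockerfile: Dockerfile
    restart: always
    networks:
      - kafka-net
networks:
  kafka-net:
    external:
      # Assumed name based on the cp-all-in-one project directory.
      name: cp-all-in-one_default
```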

Happy coding!


Enable SSL with ASP.NET Core using Nginx and Docker

When developing web apps and APIs with ASP.NET Core, it is useful to replicate the kind of setup used to deploy your application to production. While the built-in Kestrel web server is adequate for local development, you need a full-fledged web server, such as IIS, Apache or Nginx, to perform functions such as load balancing and SSL termination. Therefore, it is worthwhile to configure an ASP.NET Core project to run locally using Nginx as a reverse proxy for secure communication over HTTPS. Of course, the best way to do this is by running both the web app and reverse proxy in Docker containers.

You can download the code for this blog post here: https://github.com/tonysneed/Demo.AspNetCore-Nginx-Ssl

Dockerize Web API

To get started you’ll need to install Docker Desktop for Windows or Mac. Then create a Web API project using the .NET Core SDK.

mkdir HelloAspNetCore3 && cd HelloAspNetCore3
dotnet new sln --name HelloAspNetCore3
dotnet new webapi --name HelloAspNetCore3.Api
dotnet sln add HelloAspNetCore3.Api/HelloAspNetCore3.Api.csproj

Then open the project folder in a code editor of your choice. My favorite is Visual Studio Code, which allows you to easily open a project folder from the command line: code .

Open Startup.cs and edit the Configure method to remove app.UseHttpsRedirection() and add support for using forwarded headers.

app.UseForwardedHeaders(new ForwardedHeadersOptions
{
ForwardedHeaders = ForwardedHeaders.XForwardedFor | ForwardedHeaders.XForwardedProto
});

Next, containerize the Web API project by adding a file named Api.Dockerfile.

FROM mcr.microsoft.com/dotnet/core/aspnet:3.0-alpine AS base
WORKDIR /app
FROM mcr.microsoft.com/dotnet/core/sdk:3.0-alpine AS build
WORKDIR /src
COPY ["HelloAspNetCore3.Api.csproj", "./"]
RUN dotnet restore "./HelloAspNetCore3.Api.csproj"
COPY . .
WORKDIR "/src/."
RUN dotnet build "HelloAspNetCore3.Api.csproj" -c Release -o /app/build
FROM build AS publish
RUN dotnet publish "HelloAspNetCore3.Api.csproj" -c Release -o /app/publish
FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
ENV ASPNETCORE_URLS http://*:5000
ENTRYPOINT ["dotnet", "HelloAspNetCore3.Api.dll"]

VS Code has a nice Docker extension that lets you add various kinds of Dockerfiles, including for ASP.NET Core. I modified the default format to use the Alpine Linux distro, which is small and lightweight, and to add an ASPNETCORE_URLS environment variable for serving the Web API on port 5000. Run the following commands to build and run the Docker image.

docker build -t hello-aspnetcore3 -f Api.Dockerfile .
docker run -d -p 5000:5000 --name hello-aspnetcore3 hello-aspnetcore3
docker ps

Use Google Chrome to browse to http://localhost:5000/weatherforecast, and you’ll see some pretty JSON. You can then remove both the container and image.

docker rm -f hello-aspnetcore3
docker rmi hello-aspnetcore3

Dockerize Nginx Server

Next add an Nginx folder to the solution folder, and place a file there named Nginx.Dockerfile.

FROM nginx:latest
COPY nginx.conf /etc/nginx/nginx.conf

You’ll need to create an nginx.conf file that will be copied to the container.

worker_processes 1;

events { worker_connections 1024; }

http {
    sendfile on;

    upstream web-api {
        server api:5000;
    }

    server {
        listen 80;
        server_name $hostname;

        location / {
            proxy_pass http://web-api;
            proxy_redirect off;
            proxy_http_version 1.1;
            proxy_cache_bypass $http_upgrade;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection keep-alive;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
            proxy_set_header X-Forwarded-Host $server_name;
        }
    }
}

Notice that proxy_pass specifies a host name of web-api, which matches the upstream directive with a server value of api:5000, which will be defined later as a service in a docker-compose file.

If you run both the Nginx and Web API containers at the same time, the reverse proxy will return a 502 Bad Gateway error, because it will not be able to see the Web API server. Both containers need to be placed in the same network. This can be accomplished using Docker networking directives, or you can simply use docker-compose, which is what we’ll do here. Add a docker-compose.yml file to the solution folder.

version: "3.7"
services:
  reverseproxy:
    build:
      context: ./Nginx
      dockerfile: Nginx.Dockerfile
    ports:
      - "80:80"
    restart: always
  api:
    depends_on:
      - reverseproxy
    build:
      context: ./HelloAspNetCore3.Api
      dockerfile: Api.Dockerfile
    expose:
      - "5000"
    restart: always

The build directives are there to facilitate building each docker image, which you can perform using the following command: docker-compose build.

To run both containers in a default bridge network, run the following command: docker-compose up -d.

View the running containers with docker ps. Notice that the Web API is not exposed to the host, but the reverse proxy is. Browse to: http://localhost/weatherforecast. To stop the containers run: docker-compose down.

Enable SSL Termination

One of the benefits of using Nginx as a reverse proxy is that you can configure it to use SSL for secure communication with clients, with requests forwarded to the web app over plain HTTP. The first step in this process is to create a public / private key pair for localhost. We can accomplish this task using OpenSSL, which can be installed on both macOS and Windows. Start by adding a localhost.conf file to the Nginx folder.

[req]
default_bits = 2048
default_keyfile = localhost.key
distinguished_name = req_distinguished_name
req_extensions = req_ext
x509_extensions = v3_ca
[req_distinguished_name]
countryName = Country Name (2 letter code)
countryName_default = US
stateOrProvinceName = State or Province Name (full name)
stateOrProvinceName_default = Texas
localityName = Locality Name (eg, city)
localityName_default = Dallas
organizationName = Organization Name (eg, company)
organizationName_default = localhost
organizationalUnitName = organizationalunit
organizationalUnitName_default = Development
commonName = Common Name (e.g. server FQDN or YOUR name)
commonName_default = localhost
commonName_max = 64
[req_ext]
subjectAltName = @alt_names
[v3_ca]
subjectAltName = @alt_names
[alt_names]
DNS.1 = localhost
DNS.2 = 127.0.0.1

Run the following command to create localhost.crt and localhost.key files, inserting your own strong password.

sudo openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout localhost.key -out localhost.crt -config localhost.conf -passin pass:YourStrongPassword

In order to trust the localhost certificate on your local machine, you’ll want to run the following command to create a localhost.pfx file, providing the same strong password when prompted.

sudo openssl pkcs12 -export -out localhost.pfx -inkey localhost.key -in localhost.crt

To trust the localhost certificate on macOS, open Keychain Access, select System in the Keychains pane, and drag localhost.pfx from the Finder into the certificate list pane. Then double-click the localhost certificate and under the trust section select Always Trust.

trust-keychain-access

To create and trust a self-signed certificate on Windows, follow these instructions.

Now that you have created a public / private key pair, you need to update Nginx.Dockerfile to copy these files to the container.

FROM nginx:latest
COPY nginx.conf /etc/nginx/nginx.conf
COPY localhost.crt /etc/ssl/certs/localhost.crt
COPY localhost.key /etc/ssl/private/localhost.key

Next, update nginx.conf to load the certificate key pair. Configure a server to listen on port 443 over ssl and forward requests to the upstream web-api server. Also configure a server to listen on port 80 and redirect requests to port 443.

worker_processes 1;

events { worker_connections 1024; }

http {
    sendfile on;

    upstream web-api {
        server api:5000;
    }

    server {
        listen 80;
        server_name localhost;

        location / {
            return 301 https://$host$request_uri;
        }
    }

    server {
        listen 443 ssl;
        server_name localhost;
        ssl_certificate /etc/ssl/certs/localhost.crt;
        ssl_certificate_key /etc/ssl/private/localhost.key;

        location / {
            proxy_pass http://web-api;
            proxy_redirect off;
            proxy_http_version 1.1;
            proxy_cache_bypass $http_upgrade;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection keep-alive;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
            proxy_set_header X-Forwarded-Host $server_name;
        }
    }
}

Lastly, edit the docker-compose.yml file to expose both ports 80 and 443.

ports:
  - "80:80"
  - "443:443"

Run docker-compose build, followed by docker-compose up -d. This time when you browse to http://localhost/weatherforecast, you’ll be redirected to https://localhost/weatherforecast.

web-api-ssl

Happy coding!


What It Means to Be a Software Architect – and Why It Matters

For the past year I have worked as a Senior Solutions Architect for a large multinational company in the construction sector. While in the past I functioned as a hybrid developer-architect, designing and building enterprise-scale systems and mission-critical line-of-business applications, my new role has been squarely focused on software architecture. As such I thought it would be a good time to reflect on the meaning of software architecture and why it has an indispensable role to play in designing applications and services that add business value and will stand the test of time.

This exercise was inspired by a 15-minute talk, titled “Making Architecture Matter,” presented at an O’Reilly conference in 2015 by renowned software architecture guru Martin Fowler.

fowler-arch-matters

What Is Software Architecture?

Martin begins his talk by emphasizing that software architecture isn’t about pretty diagrams. Although these can help communicate some important aspects of application architecture, they themselves do not capture or define the actual architecture of a software development project. Rather, the totality of the application architecture can be best expressed as the shared understanding of system design by expert developers who are most familiar with the system. In other words, it’s about the code. And if you’re going to understand the de facto architecture of a system, you have to understand the code, not simply as an abstraction, but how it functions and how the pieces work together.

As Martin states, if you’re going to be a software architect you need to know how to write code, and you need to communicate with lead developers as a coder among coders. This is important, because in order to provide technical leadership, which is critical to the proper implementation of software design principles, you will need to gain their cooperation and respect. At the risk of sounding trite, you need to show them that you know what you’re talking about, so that you can engage in technical discussions that produce sound technical decisions.

Part of the process should be regular code reviews, whether in-person or via pull requests. If possible, you can augment this with periodic pair programming sessions (check out VS Live Share), which will help to promote geek bonding. For this reason, it’s important to keep your coding skills up-to-date. This can be challenging if you’re a full-time software architect, so you might want to consider contributing to some open-source projects, which shouldn’t be too difficult now that companies like Microsoft and Google have open-sourced most of their cutting-edge technologies. (If you’re interested you can also volunteer to resolve issues and implement features for some of my open source projects: URF, a generic Unit of Work and Repository framework, Trackable Entities, an Entity Framework extension, and EF Core Scaffolding with Handlebars, reverse engineering customizable classes from a database.)

The Purpose of Software Architecture

Martin grapples with how to define precisely what software architecture is, and in so doing he refers to an article he wrote, Who Needs an Architect?, in which he quotes Ralph Johnson as saying, “Architecture is about the important stuff. Whatever that is.” There are different ways of defining what is important to a project. It could be related to technical decisions, such as programming languages and frameworks, which once implemented are difficult to change without incurring substantial cost.

Given this characterization, one might speculate concerning the process by which important technical decisions are made. In my view it’s the job of the architect to ensure that consequential decisions are made in a strategic manner. In other words, the architect is perhaps the person who sees the larger technological landscape, who discerns emerging trends, and who can discriminate between enduring paradigm shifts and passing fads.

For example, back in 2016 I wrote a somewhat influential (and controversial) blog post titled WCF Is Dead, in which I argued against using Windows Communication Foundation for building greenfield web services. While at the time there were some valid use cases for WCF (for example, inter/intra-process communication and message queuing), SOAP-based web services have since been fully supplanted by RESTful Web APIs built with modern toolkits such as ASP.NET Core, which incorporate Dependency Injection as a first-class citizen. With the planned merging in 2020 of the disparate versions of .NET (both the full .NET Framework and Mono / Xamarin) into a version of .NET Core that will simply be called .NET 5, Microsoft will be officially classifying WCF services as legacy: WCF will not be a part of .NET 5. (Microsoft is recommending gRPC as a successor to WCF for efficient inter-process communication, for which support will be included in ASP.NET Core 3.0.) My prescient assessment of WCF’s demise is an example of the kind of strategic architectural advice that can spare companies from making costly mistakes by betting on tech stacks that are on their way out.

dotnet-timeline.png

Choice of software frameworks is but just one crucial decision in which software architecture can play a role. We are in the midst of a number of other paradigm shifts that will influence the direction and focus of our software projects. Here are a few notable examples:

  • From On-Premise (costly, inefficient) to Cloud-Based Computing (economies of scale)
  • From Centralized (SVN, TFVC) to Decentralized Version Control (Git)
  • From Monolithic (binary references) to Modular Design (packages)
  • From Server-based (virtual machines) to Server-less Architecture (Docker, K8s, SaaS)
  • From Platform-Specific (Windows) to Platform-Independent Frameworks (Linux)
  • From Coarse-Grained (n-tier) to Fine-Grained Services (microservices)
  • From Imperative (scripts) to Declarative Infrastructure (YAML)

Many of these changes are just beginning to gain dominance in the software industry, so it’s important for architects to train and mentor dev teams in the transition from outdated practices to modern approaches that will result in greater longevity as well as yield increased levels of reliability, scalability and cost-effectiveness.

The Right Level of Abstraction

Martin goes on to state that our goal as software architects is to design systems that have fewer and fewer aspects that are difficult to change. In my view this is where SOLID programming principles come in.

Adhering to these principles will more likely result in a modular design of loosely coupled components, each of which has just one responsibility. Dependencies are represented by interfaces that are injected into constructors and supplied by a dependency injection container. Systems are also designed using well-known design patterns, including the Gang of Four and Domain-Driven Design patterns. In addition, it is important to establish how microservices interact with one another, abstracting away direct service dependencies in favor of indirect communication based on models such as publish-subscribe (for example, Amazon SNS / SQS) or event sourcing with a unified log (for example, Apache Kafka, Amazon Kinesis). Martin has an excellent presentation on the Many Meanings of Event-Driven Architecture, in which he discusses the pros and cons of various event-based architectural patterns.
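To make the constructor injection idea concrete, here is a minimal sketch. The IGreetingRepository and GreetingService types are hypothetical, and the example assumes the Microsoft.Extensions.DependencyInjection package:

```csharp
using Microsoft.Extensions.DependencyInjection;

// Composition root: register the abstraction against a concrete implementation.
var provider = new ServiceCollection()
    .AddSingleton<IGreetingRepository, InMemoryGreetingRepository>()
    .AddTransient<GreetingService>()
    .BuildServiceProvider();

// The container constructs GreetingService, injecting its dependency.
var service = provider.GetRequiredService<GreetingService>();
Console.WriteLine(service.Greet("fr"));

// The consumer depends only on an abstraction ...
public interface IGreetingRepository
{
    string GetGreeting(string language);
}

// ... so the implementation can be swapped, or mocked in unit tests.
public class InMemoryGreetingRepository : IGreetingRepository
{
    public string GetGreeting(string language) =>
        language == "fr" ? "Bonjour" : "Hello";
}

public class GreetingService
{
    private readonly IGreetingRepository repository;

    // The dependency is injected through the constructor by the container.
    public GreetingService(IGreetingRepository repository) =>
        this.repository = repository;

    public string Greet(string language) => repository.GetGreeting(language);
}
```

Because GreetingService never names the concrete repository, a test can supply a stub implementation without touching the class itself.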

The purpose of applying these patterns and enforcing separation of concerns is to reduce the parts of the system design that are difficult to change, thereby increasing flexibility and helping insure against premature obsolescence. This is an approach I described some time ago using Jeffrey Palermo’s Onion Architecture, which aims to decouple an application from infrastructure concerns using dependency injection. A benefit of this approach is code that is more testable and uses mocking frameworks to stub external dependencies such as data stores or other services.

Trust But Verify

It is important for software architects to understand that, no matter how beautiful and elegant your designs are, dev teams will almost always adulterate them. As I pointed out in a talk at a London software architecture conference in 2015, entitled Avoiding Five Common Architectural Pitfalls, oftentimes developers will apply architectural patterns without fully understanding them and will, as a result, introduce hacks and workarounds (a.k.a. anti-patterns) that defeat the very purpose for which the pattern was created. This may be because the team is composed of members with various levels of skill and experience, or devs may be used to a coding practice that has been displaced by a newer approach.

kool-aid.jpg

A case in point would be the use of dependency injection, where devs may “new” up objects in classes rather than allowing the DI framework to do its thing. Another example would be the use of modal dialogs directly from view-models while implementing the MVVM pattern in client apps. Furthermore, I’ve seen devs struggle to take full advantage of Git, not understanding how to pull down upstream commits with rebase, or using the same branch for all their pull requests.

As Martin puts it, the best architects are like mountaineering guides: “a more experienced and skillful team member who teaches other team members to better fend for themselves yet is always there for the really tricky stuff.”

Why Architecture Matters

One of the most challenging aspects of a software architect’s job is to communicate to stakeholders and key decision-makers the significant business value that effective software architecture brings to the business. Sometimes companies fail to recognize that, in the 21st century, all companies to a certain extent need to see themselves as technology companies, or at least they need to recognize that well-built software can be a differentiating factor that provides a crucial competitive advantage.

Even if a company acknowledges the importance of software in general, it may not recognize that adequate software architecture is essential to building systems that are large, complex or play a critical role in running the business. Responsibility for communicating this message lies with us, and our profession suffers to the extent that we fail to convey the role software architecture plays in both avoiding costly mistakes and rapidly introducing new product features. Martin calls this the Design Stamina Hypothesis, which postulates that, at a certain point in the app development lifecycle, good design allows features to be introduced at a faster pace than when design is not considered, or when poor design choices have resulted in an inordinate accumulation of technical debt.

design-stamina.gif

I worked on a large project where the lead developer lacked design expertise. The code base was so intertwined and convoluted that releasing new versions became a nightmare. Classes with over 3,000 lines of code made maintenance almost impossible, and the lack of unit tests meant that new features or bug fixes would break existing functionality or make the system unstable.

One of the misperceptions that architects face is that we are engaging in architecture for architecture’s sake, or that we are keen on introducing new technologies mostly because of the coolness factor. Our challenge is to counter this misperception by arguing not for the aesthetic value of good design, but for its pragmatic, economic value. We need to frame the need for intentional design as something that can save the company millions of dollars by averting otherwise disastrous technology and design choices, produce a distinct competitive edge through market differentiation, or pave the way for increased customer satisfaction by making it possible to evolve the product in a sustainable fashion. If we can communicate a clear understanding of the nature and purpose of software architecture, and forcefully articulate the value proposition of sound architectural design together with the benefits of ensuring its correct execution, we can help organizations mature in their ability to deliver software that both wards off obsolescence and delights customers with new features.

Happy architecting!


Use EF Core with AWS Lambda Functions

This is Part 3 in a 3 part series:

  1. Add .NET Core DI and Config Goodness to AWS Lambda Functions
  2. IDesignTimeDbContextFactory and Dependency Injection: A Love Story
  3. Use EF Core with AWS Lambda Functions (this post)

In a previous post I demonstrated how to set up Dependency Injection and Configuration for AWS Lambda Functions that are written in C# and use .NET Core. The purpose of this post is to provide an example and some best practices for using Entity Framework Core with AWS Lambda.

Note: You can download or clone the code for this post here: https://github.com/tonysneed/net-core-lambda-di-config-ef

One of the benefits of adding DI and Config to a Lambda function is that you can abstract away persistence concerns from your application, allowing you greater flexibility in determining which concrete implementation to use. For example, you may start out with a relational data store (such as SQL Server, PostgreSQL or SQLite), but decide later to move to a NoSQL database or other non-relational store, such as Amazon S3. This is where the Repository Pattern comes in.

Note: If you’re going to get serious about using Repository and Unit of Work patterns, you should use a framework dedicated to this purpose, such as URF (Unit of Work and Repository Framework): https://github.com/urfnet/URF.Core

A Simple Example

Let’s start with a simple example, a Products repository. You would begin by defining an interface, for example, IProductRepository. Notice in the code below that the GetProduct method returns a Task. This is so that IO-bound operations can execute without blocking the calling thread.

public interface IProductRepository
{
    Task<Product> GetProduct(int id);
}

You’re going to want to place this interface in a .NET Standard class library so that it can be referenced separately from specific implementations. Then create a ProductRepository class that implements IProductRepository. This can go in a .NET Standard class library that includes a package reference to an EF Core provider, for example, Microsoft.EntityFrameworkCore.SqlServer. You will want to add a constructor that accepts a DbContext-derived class.

public class ProductRepository : IProductRepository
{
    public SampleDbContext Context { get; }

    public ProductRepository(SampleDbContext context)
    {
        Context = context;
    }

    public async Task<Product> GetProduct(int id)
    {
        return await Context.Products.SingleOrDefaultAsync(e => e.Id == id);
    }
}

Dependency Resolution

At this point you’re going to want to add a .NET Standard class library that can be used for dependency resolution. This is where you’ll add code that sets up DI and registers services that are used by classes in your application, including the DbContext that is used by your ProductRepository.

public class DependencyResolver
{
    public IServiceProvider ServiceProvider { get; }
    public string CurrentDirectory { get; set; }
    public Action<IServiceCollection> RegisterServices { get; }

    public DependencyResolver(Action<IServiceCollection> registerServices = null)
    {
        // Set up Dependency Injection
        var serviceCollection = new ServiceCollection();
        RegisterServices = registerServices;
        ConfigureServices(serviceCollection);
        ServiceProvider = serviceCollection.BuildServiceProvider();
    }

    private void ConfigureServices(IServiceCollection services)
    {
        // Register env and config services
        services.AddTransient<IEnvironmentService, EnvironmentService>();
        services.AddTransient<IConfigurationService, ConfigurationService>(
            provider => new ConfigurationService(provider.GetService<IEnvironmentService>())
            {
                CurrentDirectory = CurrentDirectory
            });

        // Register DbContext class
        services.AddTransient(provider =>
        {
            var configService = provider.GetService<IConfigurationService>();
            var connectionString = configService.GetConfiguration().GetConnectionString(nameof(SampleDbContext));
            var optionsBuilder = new DbContextOptionsBuilder<SampleDbContext>();
            optionsBuilder.UseSqlServer(connectionString, builder => builder.MigrationsAssembly("NetCoreLambda.EF.Design"));
            return new SampleDbContext(optionsBuilder.Options);
        });

        // Register other services
        RegisterServices?.Invoke(services);
    }
}

There are parts of this class that are worthy of discussion. First, notice that the constructor accepts a delegate for registering services. This is so that classes using DependencyResolver can pass it a method for adding other dependencies. This is important because the application will register dependencies that are of no interest to the EF Core CLI.

Another thing to point out is the code that sets the CurrentDirectory of the ConfigurationService. This is required in order to locate the appsettings.*.json files residing at the root of the main project.

Lastly, there is code that registers the DbContext with the DI system. This calls an overload of AddTransient that accepts an IServiceProvider, which is used to get an instance of the ConfigurationService that supplies a connection string to the UseSqlServer method of the DbContextOptionsBuilder. This code can appear somewhat obtuse if you’re not used to it, but the idea is to use the IServiceProvider of the DI system to resolve services that are required for passing additional parameters to constructors.

To use DI with a Lambda function, simply add a constructor to the Function class that creates a DependencyResolver, passing a ConfigureServices method that registers IProductRepository with the DI system. The FunctionHandler method can then use the repository to retrieve a product by id.

public class Function
{
    // Repository
    public IProductRepository ProductRepository { get; }

    public Function()
    {
        // Get dependency resolver
        var resolver = new DependencyResolver(ConfigureServices);

        // Get products repo
        ProductRepository = resolver.ServiceProvider.GetService<IProductRepository>();
    }

    // Use this ctor from unit tests that can mock IProductRepository
    public Function(IProductRepository productRepository)
    {
        ProductRepository = productRepository;
    }

    /// <summary>
    /// A simple function that takes an id and returns a product.
    /// </summary>
    /// <param name="input"></param>
    /// <param name="context"></param>
    /// <returns></returns>
    public async Task<Product> FunctionHandler(string input, ILambdaContext context)
    {
        int.TryParse(input, out var id);
        if (id == 0) return null;
        return await ProductRepository.GetProduct(id);
    }

    // Register services with DI system
    private void ConfigureServices(IServiceCollection services)
    {
        services.AddTransient<IProductRepository, ProductRepository>();
    }
}

Notice the second constructor that accepts an IProductRepository. This is to support unit tests that pass in a mock IProductRepository. For example, here is a unit test that uses Moq to create a fake IProductRepository. This allows for testing logic in the FunctionHandler method without connecting to an actual database, which would make the test fragile and slow.

[Fact]
public async Task Function_Should_Return_Product_By_Id()
{
    // Mock IProductRepository
    var expected = new Product
    {
        Id = 1,
        ProductName = "Chai",
        UnitPrice = 10
    };
    var mockRepo = new Mock<IProductRepository>();
    mockRepo.Setup(m => m.GetProduct(It.IsAny<int>())).ReturnsAsync(expected);

    // Invoke the Lambda function and confirm the correct value is returned
    var function = new Function(mockRepo.Object);
    var result = await function.FunctionHandler("1", new TestLambdaContext());
    Assert.Equal(expected, result);
}

EF Core CLI

In a previous post I proposed some options for implementing an interface called IDesignTimeDbContextFactory, which is used by the EF Core CLI to create code migrations and apply them to a database.


This allows the EF Core tooling to retrieve a connection string from the appsettings.*.json file that corresponds to a specific environment (Development, Staging, Production, etc).

"ConnectionStrings": {
    "SampleDbContext": "Data Source=(localdb)\\MsSqlLocalDb;initial catalog=SampleDb;Integrated Security=True; MultipleActiveResultSets=True"
}

"ConnectionStrings": {
    "SampleDbContext": "Data Source=sample-instance.xxx.eu-west-1.rds.amazonaws.com;initial catalog=SampleDb;User Id=xxx;Password=xxx; MultipleActiveResultSets=True"
}

Here is a sample DbContext factory that uses a DependencyResolver to get a DbContext from the DI system.

public class SampleDbContextFactory : IDesignTimeDbContextFactory<SampleDbContext>
{
    public SampleDbContext CreateDbContext(string[] args)
    {
        // Get DbContext from DI system
        var resolver = new DependencyResolver
        {
            CurrentDirectory = Path.Combine(Directory.GetCurrentDirectory(), "../NetCoreLambda")
        };
        return resolver.ServiceProvider.GetService(typeof(SampleDbContext)) as SampleDbContext;
    }
}

To set the environment, simply set the ASPNETCORE_ENVIRONMENT environment variable.

rem Set environment
set ASPNETCORE_ENVIRONMENT=Development
rem View environment
set ASPNETCORE_ENVIRONMENT

Then run the dotnet-ef commands to add a migration and create a database with a schema that mirrors entity definitions and their relationships. You’ll want to do this twice: once for the Development environment and again for Production.

rem Create EF code migration
dotnet ef migrations add initial
rem Apply migration to database
dotnet ef database update
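The two passes can also be scripted. Here is a minimal sketch for Mac/Linux (assuming the EF Core CLI is available and each appsettings.*.json file carries that environment’s connection string; the dotnet call is guarded so the loop is harmless on a machine without the SDK):

```shell
# Apply the same migration to each environment's database.
for env in Development Production; do
  export ASPNETCORE_ENVIRONMENT=$env
  echo "Updating database for $ASPNETCORE_ENVIRONMENT"
  if command -v dotnet >/dev/null 2>&1; then
    dotnet ef database update
  fi
done
```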

Try It Out!

Once you have created the database, you can press F5 to launch the AWS.NET Mock Lambda Test Tool, which you can use to develop and debug your Lambda function locally. Simply enter a value of 1 for Function Input and click the Execute Function button.

mock-lambda-test-tool.png

You should see JSON for Product 1 from the database.

mock-lambda-test-tool-result.png

When you’re confident everything works locally, you can throw caution to the wind and upload your Lambda function to AWS.

upload-lambda-function1.png

Make sure that the ASPNETCORE_ENVIRONMENT environment variable is set appropriately.

upload-lambda-function2.png

You can then bravely execute your deployed Lambda function.

execute-lambda

Conclusion

One of the benefits of using C# for AWS Lambda functions is built-in support for Dependency Injection, which is a first-class citizen in .NET Core and should be as indispensable to developers as a Jedi’s light saber. The tricky part can be setting up DI so that it can be used both at runtime by the Lambda function and at development-time by the EF Core CLI. With the knowledge you now possess, you should have no trouble implementing a microservices architecture with serverless functions that are modular and extensible. Cheers!


IDesignTimeDbContextFactory and Dependency Injection: A Love Story

This is Part 2 in a 3 part series:

  1. Add .NET Core DI and Config Goodness to AWS Lambda Functions
  2. IDesignTimeDbContextFactory and Dependency Injection: A Love Story (this post)
  3. Use EF Core with AWS Lambda Functions

Whenever I set out to create an application or service, I might start out with everything in a single project, but before long I find myself chopping things up into multiple projects. This is in line with the Single Responsibility and Interface Segregation principles of SOLID software development. A corollary of this approach is separating development-time code from runtime code. You can see an example of this in the NPM world with separate dev dependencies in a package.json file. Similarly, .NET Core has adopted this concept with the .NET Core CLI tools, which can also be installed globally.
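The EF Core CLI used later in this post is a good example of a development-time dependency. On the .NET Core 2.1 SDK used here, dotnet ef ships with the SDK itself; from EF Core 3.0 onward it is installed as a separate global tool. A sketch (guarded in case the .NET SDK is not on the PATH):

```shell
# Install the EF Core CLI as a global .NET tool and report its version.
if command -v dotnet >/dev/null 2>&1; then
  dotnet tool install --global dotnet-ef 2>/dev/null
  EF_VERSION=$(dotnet ef --version 2>/dev/null || echo "dotnet-ef not available")
else
  EF_VERSION="dotnet SDK not found"
fi
echo "$EF_VERSION"
```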

Note: You can download or clone the code for this post here: https://github.com/tonysneed/ef-design-di

Entity Framework Core provides the IDesignTimeDbContextFactory interface so that you can separate the EF code needed for generating database tables at design time (what is commonly referred to as a code-first approach) from the EF code used by your application at runtime. A typical implementation of IDesignTimeDbContextFactory might look like this. Note that calling the MigrationsAssembly method is also required for generating code-first migrations.

public class ProductsDbContextFactory : IDesignTimeDbContextFactory<ProductsDbContext>
{
    public ProductsDbContext CreateDbContext(string[] args)
    {
        var optionsBuilder = new DbContextOptionsBuilder<ProductsDbContext>();
        var connectionString = "Data Source=(localdb)\\MsSqlLocalDb;initial catalog=ProductsDbDev;Integrated Security=True; MultipleActiveResultSets=True";
        optionsBuilder.UseSqlServer(connectionString, b => b.MigrationsAssembly("EfDesignDemo.EF.Design"));
        return new ProductsDbContext(optionsBuilder.Options);
    }
}

The code smell that stands out here is that the connection string is hard-coded. To remedy this you can build an IConfiguration in which you set the base path to the main project directory.

public class ProductsDbContextFactory : IDesignTimeDbContextFactory<ProductsDbContext>
{
    public ProductsDbContext CreateDbContext(string[] args)
    {
        // Build config
        IConfiguration config = new ConfigurationBuilder()
            .SetBasePath(Path.Combine(Directory.GetCurrentDirectory(), "../EfDesignDemo"))
            .AddJsonFile("appsettings.json")
            .Build();

        // Get connection string
        var optionsBuilder = new DbContextOptionsBuilder<ProductsDbContext>();
        var connectionString = config.GetConnectionString(nameof(ProductsDbContext));
        optionsBuilder.UseSqlServer(connectionString, b => b.MigrationsAssembly("EfDesignDemo.EF.Design"));
        return new ProductsDbContext(optionsBuilder.Options);
    }
}

While this is better than including the hard-coded connection string, we can still do better. For example, we might want to select a different appsettings.*.json file depending on the environment we’re in (Development, Staging, Production, etc.). In ASP.NET Core, this is determined by a special environment variable, ASPNETCORE_ENVIRONMENT. We’re also going to want to plug in environment variables, so that the connection string and other settings can be overridden when the application is deployed.

public class ProductsDbContextFactory : IDesignTimeDbContextFactory<ProductsDbContext>
{
    public ProductsDbContext CreateDbContext(string[] args)
    {
        // Get environment
        string environment = Environment.GetEnvironmentVariable("ASPNETCORE_ENVIRONMENT");

        // Build config
        IConfiguration config = new ConfigurationBuilder()
            .SetBasePath(Path.Combine(Directory.GetCurrentDirectory(), "../EfDesignDemo.Web"))
            .AddJsonFile("appsettings.json", optional: false, reloadOnChange: true)
            .AddJsonFile($"appsettings.{environment}.json", optional: true)
            .AddEnvironmentVariables()
            .Build();

        // Get connection string
        var optionsBuilder = new DbContextOptionsBuilder<ProductsDbContext>();
        var connectionString = config.GetConnectionString(nameof(ProductsDbContext));
        optionsBuilder.UseSqlServer(connectionString, b => b.MigrationsAssembly("EfDesignDemo.EF.Design"));
        return new ProductsDbContext(optionsBuilder.Options);
    }
}

See It In Action

The beauty of adding config to your design-time DbContext factory is that it will pick up the connection string from the configuration system, selecting the appropriate connection string for the environment you specify. How do you specify an environment (Development, Staging, Production, etc)? Simply by setting that special ASPNETCORE_ENVIRONMENT environment variable.

If you’re on Windows, you can set and view it like so:

rem Set environment
set ASPNETCORE_ENVIRONMENT=Development
rem View environment
set ASPNETCORE_ENVIRONMENT

If you’re on Mac, here’s how to do it:

# Set environment
export ASPNETCORE_ENVIRONMENT=Development
# View environment
echo $ASPNETCORE_ENVIRONMENT

With the environment set, you can switch to the directory where the DbContext factory is located and run commands to add EF code migrations and apply them to the database specified in the appsettings.*.json file for your selected environment.

rem Create EF code migration
dotnet ef migrations add initial
rem Apply migration to database
dotnet ef database update

Show Me Some DI Love

This solution works, but further improvements are possible. One problem is that it violates the Dependency Inversion principle of SOLID design, because we are newing up the DbContext in the design-time factory. It might be cleaner to use DI to resolve dependencies and provide the DbContext.


To remedy this we can factor out the configuration bits into an IConfigurationService that builds an IConfiguration, and this service will depend on an IEnvironmentService to supply the environment name.

public interface IEnvironmentService
{
    string EnvironmentName { get; set; }
}

public interface IConfigurationService
{
    IConfiguration GetConfiguration();
}

The implementations for these interfaces can go into a .NET Standard class library that exists to support configuration.

public class EnvironmentService : IEnvironmentService
{
    public EnvironmentService()
    {
        EnvironmentName = Environment.GetEnvironmentVariable("ASPNETCORE_ENVIRONMENT")
            ?? "Production";
    }

    public string EnvironmentName { get; set; }
}

public class ConfigurationService : IConfigurationService
{
    public IEnvironmentService EnvService { get; }
    public string CurrentDirectory { get; set; }

    public ConfigurationService(IEnvironmentService envService)
    {
        EnvService = envService;
    }

    public IConfiguration GetConfiguration()
    {
        CurrentDirectory = CurrentDirectory ?? Directory.GetCurrentDirectory();
        return new ConfigurationBuilder()
            .SetBasePath(CurrentDirectory)
            .AddJsonFile("appsettings.json", optional: false, reloadOnChange: true)
            .AddJsonFile($"appsettings.{EnvService.EnvironmentName}.json", optional: true)
            .AddEnvironmentVariables()
            .Build();
    }
}

Now we just need to create a DependencyResolver class that uses an IServiceCollection to register dependencies.

public class DependencyResolver
{
    public IServiceProvider ServiceProvider { get; }
    public string CurrentDirectory { get; set; }

    public DependencyResolver()
    {
        // Set up Dependency Injection
        IServiceCollection services = new ServiceCollection();
        ConfigureServices(services);
        ServiceProvider = services.BuildServiceProvider();
    }

    private void ConfigureServices(IServiceCollection services)
    {
        // Register env and config services
        services.AddTransient<IEnvironmentService, EnvironmentService>();
        services.AddTransient<IConfigurationService, ConfigurationService>(
            provider => new ConfigurationService(provider.GetService<IEnvironmentService>())
            {
                CurrentDirectory = CurrentDirectory
            });

        // Register DbContext class
        services.AddTransient(provider =>
        {
            var configService = provider.GetService<IConfigurationService>();
            var connectionString = configService.GetConfiguration().GetConnectionString(nameof(ProductsDbContext));
            var optionsBuilder = new DbContextOptionsBuilder<ProductsDbContext>();
            optionsBuilder.UseSqlServer(connectionString, builder => builder.MigrationsAssembly("EfDesignDemo.EF.Design"));
            return new ProductsDbContext(optionsBuilder.Options);
        });
    }
}

This class exposes an IServiceProvider that we can use to get an instance that has been created by the DI container. This allows us to refactor the ProductsDbContextFactory class to use DependencyResolver to create the DbContext.

public class ProductsDbContextFactory : IDesignTimeDbContextFactory<ProductsDbContext>
{
    public ProductsDbContext CreateDbContext(string[] args)
    {
        // Get DbContext from DI system
        var resolver = new DependencyResolver
        {
            CurrentDirectory = Path.Combine(Directory.GetCurrentDirectory(), "../EfDesignDemo.Web")
        };
        return resolver.ServiceProvider.GetService(typeof(ProductsDbContext)) as ProductsDbContext;
    }
}

Pick Your Potion

Adding DI to the mix may feel like overkill, because the DbContext factory is only used at design time by the EF Core tooling. In that case, it would be more straightforward to stick with building an IConfiguration right within the CreateDbContext method of your factory class, as shown in the ProductsDbContextFactory3 code snippet.

However, there is a case where the DI-based approach is worth the effort: when you need to set up DI for the application entry point, for example, when using EF Core with AWS Lambda Functions. More on that in my next blog post. Enjoy!


Add .NET Core DI and Config Goodness to AWS Lambda Functions

This is Part 1 in a 3 part series:

  1. Add .NET Core DI and Config Goodness to AWS Lambda Functions (this post)
  2. IDesignTimeDbContextFactory and Dependency Injection: A Love Story
  3. Use EF Core with AWS Lambda Functions

When Amazon introduced AWS Lambda in 2014, it helped kick off the revolution in serverless computing that is now taking the software development world by storm.  In a nutshell, AWS Lambda (and equivalents such as Azure Functions and Google Cloud Functions) provides an abstraction over the underlying operating system and execution runtime, so that you can more easily achieve economies of scale that are the driving force of Cloud computing paradigms.

Note: You can download or clone the GitHub repository for this post here: https://github.com/tonysneed/net-core-lambda-di-config

AWS Lambda with .NET Core

AWS Lambda supports a number of programming languages and runtimes, as well as custom runtimes that enable the use of any language and execution environment. Included in the list of standard runtimes is Microsoft .NET Core, an open-source cross-platform runtime on which you can build apps using the C# programming language. To get started with AWS Lambda for .NET Core, you can install the AWS Toolkit for Visual Studio, which provides a number of project templates.

aws-lambda-templates.png

The first thing you’ll notice is that you’re presented with a choice between two different kinds of projects: a) AWS Lambda Project, and b) AWS Serverless Application. An AWS Lambda Project will provide a standalone function that can be invoked in response to a number of different events, such as notifications from a queue service. The Serverless Application template, on the other hand, will allow you to group several functions and deploy them as a separate application. This includes a Startup class with a ConfigureServices method for registering types with the built-in dependency injection system, as well as local and remote entry points which leverage the default configuration system using values from an appsettings.json file that is included with the project.

The purpose of the Serverless Application approach is to create a collection of functions that will be exposed as an HTTP API using the Amazon API Gateway. This may suit your needs, but often you will want to write standalone functions that respond to various events and represent a flexible microservices architecture with granular services that can be deployed and scaled independently. In this case the AWS Lambda Project template will fill the bill.

The drawback, however, of the AWS Lambda Project template is that it lacks the usual hooks for setting up configuration and dependency injection; you’re on your own for adding the necessary code. But never fear: I will now show you step-by-step instructions for accomplishing this feat.

Add Dependency Injection Code

One of my favorite things to tell developers is that employing the new keyword to instantiate types is an anti-pattern. Anytime you directly create an object you are coupling yourself to a specific implementation. So it stands to reason that you’d want to apply the same Inversion of Control pattern and .NET Core Dependency Injection goodness to your AWS Lambda Functions as you would in a standard ASP.NET Core Web API project.

Start by adding a package reference to Microsoft.Extensions.DependencyInjection version 2.1.0.

Note: Versions of NuGet packages you install need to match the version of .NET Core supported by AWS Lambda for the project you created. In this example it is 2.1.0.

Then add a ConfigureServices function that accepts an IServiceCollection and uses it to register services with the .NET Core dependency injection system.

private void ConfigureServices(IServiceCollection serviceCollection)
{
    // TODO: Register services with DI system
}

Next, add a parameterless constructor that creates a new ServiceCollection, calls ConfigureServices, then calls BuildServiceProvider to create a service provider you can use to get services via DI.

public Function()
{
    // Set up Dependency Injection
    var serviceCollection = new ServiceCollection();
    ConfigureServices(serviceCollection);
    var serviceProvider = serviceCollection.BuildServiceProvider();

    // TODO: Get service from DI system
}

Add Configuration Code

Following the DI playbook, you’re going to want to abstract configuration behind an interface so that you can mock it for unit tests. Start by adding version 2.1.0 of the following package references:

  • Microsoft.Extensions.Configuration
  • Microsoft.Extensions.Configuration.EnvironmentVariables
  • Microsoft.Extensions.Configuration.FileExtensions
  • Microsoft.Extensions.Configuration.Json

Then create an IConfigurationService interface with a GetConfiguration method that returns an IConfiguration.

public interface IConfigurationService
{
    IConfiguration GetConfiguration();
}

Because some of the configuration settings will be for specific environments, you’ll also want to add an IEnvironmentService interface with an EnvironmentName property.

public interface IEnvironmentService
{
    string EnvironmentName { get; set; }
}

To implement IEnvironmentService create an EnvironmentService class that checks for the presence of a special ASPNETCORE_ENVIRONMENT variable, defaulting to a value of “Production”.

public class EnvironmentService : IEnvironmentService
{
    public EnvironmentService()
    {
        EnvironmentName = Environment.GetEnvironmentVariable(EnvironmentVariables.AspnetCoreEnvironment)
            ?? Environments.Production;
    }

    public string EnvironmentName { get; set; }
}

To avoid the use of magic strings, you can employ a static Constants class.

public static class Constants
{
    public static class EnvironmentVariables
    {
        public const string AspnetCoreEnvironment = "ASPNETCORE_ENVIRONMENT";
    }

    public static class Environments
    {
        public const string Production = "Production";
    }
}

The ConfigurationService class should have a constructor that accepts an IEnvironmentService and implements the GetConfiguration method by creating a new ConfigurationBuilder and calling methods to add appsettings JSON files and environment variables. Because the last config entry wins, it is possible to add values to an appsettings file which are then overridden by environment variables that are set when the AWS Lambda Function is deployed.

public class ConfigurationService : IConfigurationService
{
    public IEnvironmentService EnvService { get; }

    public ConfigurationService(IEnvironmentService envService)
    {
        EnvService = envService;
    }

    public IConfiguration GetConfiguration()
    {
        return new ConfigurationBuilder()
            .SetBasePath(Directory.GetCurrentDirectory())
            .AddJsonFile("appsettings.json", optional: false, reloadOnChange: true)
            .AddJsonFile($"appsettings.{EnvService.EnvironmentName}.json", optional: true)
            .AddEnvironmentVariables()
            .Build();
    }
}
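The "last config entry wins" behavior is easy to see in isolation. This sketch (using in-memory providers in place of the JSON file and environment variables; assumes the Microsoft.Extensions.Configuration package is referenced, and the key name is hypothetical) stacks two sources that supply the same key:

```csharp
using System;
using System.Collections.Generic;
using Microsoft.Extensions.Configuration;

class ConfigPrecedenceDemo
{
    static void Main()
    {
        // Two providers supply the same key; the one added last wins,
        // just as environment variables override appsettings.json above.
        var config = new ConfigurationBuilder()
            .AddInMemoryCollection(new Dictionary<string, string>
            {
                ["ConnectionStrings:SampleDb"] = "from-appsettings"
            })
            .AddInMemoryCollection(new Dictionary<string, string>
            {
                ["ConnectionStrings:SampleDb"] = "from-environment"
            })
            .Build();

        // Prints "from-environment": the later provider shadows the earlier one
        Console.WriteLine(config.GetConnectionString("SampleDb"));
    }
}
```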

Now that you’ve defined the environment and configuration service interfaces and implementations, it’s time to register them with the DI system inside the ConfigureServices method of the Function class.

private void ConfigureServices(IServiceCollection serviceCollection)
{
    // Register services with DI system
    serviceCollection.AddTransient<IEnvironmentService, EnvironmentService>();
    serviceCollection.AddTransient<IConfigurationService, ConfigurationService>();
}

Then edit the Function class constructor to get IConfigurationService from the DI service provider and set a read-only ConfigService property on the class. The top of the Function class should now look like this:

// Configuration Service
public IConfigurationService ConfigService { get; }

public Function()
{
    // Set up Dependency Injection
    var serviceCollection = new ServiceCollection();
    ConfigureServices(serviceCollection);
    var serviceProvider = serviceCollection.BuildServiceProvider();

    // Get Configuration Service from DI system
    ConfigService = serviceProvider.GetService<IConfigurationService>();
}

Lastly, add code to the FunctionHandler method to get a value from the ConfigService using the input parameter as a key.

public string FunctionHandler(string input, ILambdaContext context)
{
    // Get config setting using input as a key
    return ConfigService.GetConfiguration()[input] ?? "None";
}

Add App Settings JSON Files

In .NET Core it is customary to add an appsettings.json file to the project and to store various configuration settings there. Typically this might include database connection strings (without passwords and other user secrets!). This is so that developers can just press F5 and have values fed from appsettings.json into the .NET Core configuration system. For example, the following appsettings.json file contains three key-value pairs.

{
    "env1": "val1",
    "env2": "val2",
    "env3": "val3"
}

App settings will often vary from one environment to another, so you’ll usually see multiple JSON files added to a project, with the environment name included in the file name (for example, appsettings.Development.json, appsettings.Staging.json, etc.). Config values for a production environment can then come from the primary appsettings.json file, or from environment variables set at deployment time. For example, the following appsettings.Development.json file has values which take the place of those in appsettings.json when the ASPNETCORE_ENVIRONMENT environment variable is set to Development.

{
  "env1": "dev-val1",
  "env2": "dev-val2",
  "env3": "dev-val3"
}
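For these overrides to work, the configuration service needs to layer the JSON files and environment variables in the right order.  Here is a minimal sketch of such a ConfigurationService, assuming an IEnvironmentService that exposes the current value of ASPNETCORE_ENVIRONMENT:

```csharp
using System.IO;
using Microsoft.Extensions.Configuration;

public class ConfigurationService : IConfigurationService
{
    public IEnvironmentService EnvService { get; }

    public ConfigurationService(IEnvironmentService envService)
    {
        EnvService = envService;
    }

    public IConfiguration GetConfiguration() =>
        new ConfigurationBuilder()
            .SetBasePath(Directory.GetCurrentDirectory())
            // Base settings
            .AddJsonFile("appsettings.json", optional: true)
            // Environment-specific settings override the base file
            .AddJsonFile($"appsettings.{EnvService.EnvironmentName}.json", optional: true)
            // Environment variables override both JSON files
            .AddEnvironmentVariables()
            .Build();
}
```

Because later configuration sources win, a value in appsettings.Development.json takes precedence over appsettings.json, and an environment variable takes precedence over both.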

However, in order for these files to be deployed to AWS Lambda, they need to be included in the output directory when the application is published.  For this to take place, you need to open the Properties window and set the Build Action property to Content and the Copy to Output Directory property to Copy always.
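If you prefer to edit the project file directly, the same result can be achieved with entries like the following in the .csproj file (a sketch; your file names may vary):

```xml
<ItemGroup>
  <Content Include="appsettings.json;appsettings.Development.json">
    <CopyToOutputDirectory>Always</CopyToOutputDirectory>
  </Content>
</ItemGroup>
```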

[Screenshot: appsettings file properties with Build Action set to Content and Copy to Output Directory set to Copy always]

Set Environment Variables

There are two places where you’ll need to set environment variables: at development time and at deployment time.  At development time, the one variable you’ll need to set is the ASPNETCORE_ENVIRONMENT environment variable, which you’ll want to set to Development.  To do this, open the launchSettings.json file from under the Properties folder in the Solution Explorer.  Then add the following JSON property:

"environmentVariables": {
"ASPNETCORE_ENVIRONMENT": "Development"
}

To set environment variables at deployment-time, you can add these to the aws-lambda-tools-defaults.json file.  (Just remember to escape the double quote marks.)

"environment-variables" : "\"ASPNETCORE_ENVIRONMENT\"=\"Development\";\"env1\"=\"val1\";\"env2\"=\"val2\"",

Then, when you publish your project to AWS Lambda, these environment variables will be shown in the Upload dialog, where you can set their values.  They will also be shown on the Configuration tab of the Lambda Function after publication, where you can edit them and click Apply Changes to deploy the new values.

[Screenshot: environment variables in the Upload to AWS Lambda dialog]

Try It Out

To execute and debug your AWS Lambda Function locally, simply press F5 to launch the Mock Lambda Test Tool with the Visual Studio debugger attached.  For Function Input you can enter one of the keys from your appsettings.json file, such as "env1".

[Screenshot: Mock Lambda Test Tool]

Press the Execute Function button and you should see a response of "dev-val1" retrieved from the appsettings.Development.json file.

To try out configuration at deployment-time, right-click on the NetCoreLambda project in the Solution Explorer and select Publish to AWS Lambda.  In the wizard you can set the ASPNETCORE_ENVIRONMENT environment variable to something other than Development, such as Production or Staging.  When you enter “env1” for the Sample input and click Invoke, you should get a response of "val1" from the appsettings.json file.  Then, to test overriding JSON file settings with environment variables, you can click on the Configuration tab and replace val1 with foo.  Click Apply Changes, then invoke the function again to return a value of foo.

[Screenshot: Lambda Configuration tab with env1 set to foo]

Unit Testing

One of the reasons for using dependency injection in the first place is to make your AWS Lambda Functions testable by adding constructors to your types which accept interfaces for service dependencies. To this end, you can add a constructor to the Function class that accepts an IConfigurationService.

// Use this ctor from unit tests that can mock IConfigurationService
public Function(IConfigurationService configService)
{
    ConfigService = configService;
}

In the NetCoreLambda.Test project add a package dependency for Moq.  Then add a unit test which mocks both IConfiguration and IConfigurationService, passing the mock IConfigurationService to the Function class constructor.  Calling the FunctionHandler method will then return the expected value.

public class FunctionTest
{
    [Fact]
    public void Function_Should_Return_Config_Variable()
    {
        // Mock IConfiguration
        var expected = "val1";
        var mockConfig = new Mock<IConfiguration>();
        mockConfig.Setup(p => p[It.IsAny<string>()]).Returns(expected);

        // Mock IConfigurationService
        var mockConfigService = new Mock<IConfigurationService>();
        mockConfigService.Setup(p => p.GetConfiguration()).Returns(mockConfig.Object);

        // Invoke the lambda function and confirm config value is returned
        var function = new Function(mockConfigService.Object);
        var result = function.FunctionHandler("env1", new TestLambdaContext());
        Assert.Equal(expected, result);
    }
}

Conclusion

AWS Lambda Functions offer a powerful and flexible mechanism for developing and deploying single-function, event-driven microservices based on a serverless architecture. In this post I have demonstrated how you can leverage the powerful capabilities of .NET Core to add dependency injection and configuration to your C# Lambda functions, so that you can make them more testable and insulate them from specific platform implementations.  For example, using this approach you could use a Repository pattern with .NET Core DI and Config systems to easily substitute a relational data store with a NoSQL database service such as Amazon DynamoDB.
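As a sketch of that last idea (the repository interface and implementation here are hypothetical, not part of the sample project), the Function would depend only on an abstraction, and swapping data stores becomes a single registration change:

```csharp
// Hypothetical repository abstraction: the Function depends only on
// the interface, so the backing store can be swapped in DI registration.
public class Product
{
    public string Id { get; set; }
    public string Name { get; set; }
}

public interface IProductRepository
{
    Product GetProduct(string id);
}

// Hypothetical DynamoDB-backed implementation (a real one would use
// the AWS SDK for .NET to query a DynamoDB table).
public class DynamoDbProductRepository : IProductRepository
{
    public Product GetProduct(string id) =>
        new Product { Id = id, Name = "sample" };
}

// In ConfigureServices, switching from a relational store to DynamoDB
// is then a one-line change:
// serviceCollection.AddTransient<IProductRepository, SqlProductRepository>();
serviceCollection.AddTransient<IProductRepository, DynamoDbProductRepository>();
```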

Happy Coding!
