What It Means to Be a Software Architect — and Why It Matters

For the past year I have worked as a Senior Solutions Architect for a large multinational company in the construction sector. While in the past I functioned as a hybrid developer-architect, designing and building enterprise-scale systems and mission-critical line-of-business applications, my new role has been squarely focused on software architecture. As such I thought it would be a good time to reflect on the meaning of software architecture and why it has an indispensable role to play in designing applications and services that add business value and will stand the test of time.

This exercise was inspired by a 15-minute talk titled “Making Architecture Matter,” presented at an O’Reilly conference in 2015 by renowned software architecture guru Martin Fowler.

fowler-arch-matters

What Is Software Architecture?

Martin begins his talk by emphasizing that software architecture isn’t about pretty diagrams. Although these can help communicate some important aspects of application architecture, they themselves do not capture or define the actual architecture of a software development project. Rather, the totality of the application architecture can be best expressed as the shared understanding of system design by expert developers who are most familiar with the system. In other words, it’s about the code. And if you’re going to understand the de facto architecture of a system, you have to understand the code, not simply as an abstraction, but how it functions and how the pieces work together.

Martin states that if you’re going to be a software architect, you need to know how to write code, and you need to communicate with lead developers as a coder among coders. This is important because, in order to provide the technical leadership that is critical to the proper implementation of software design principles, you will need to gain their cooperation and respect. At the risk of sounding trite, you need to show them that you know what you’re talking about, so that you can engage in technical discussions that produce sound technical decisions.

Part of the process should be regular code reviews, whether in-person or via pull requests. If possible, you can augment this with periodic pair programming sessions (check out VS Live Share), which will help to promote geek bonding. For this reason, it’s important to keep your coding skills up-to-date. This can be challenging if you’re a full-time software architect, so you might want to consider contributing to some open-source projects, which shouldn’t be too difficult now that companies like Microsoft and Google have open-sourced many of their cutting-edge technologies. (If you’re interested you can also volunteer to resolve issues and implement features for some of my open source projects: URF, a generic Unit of Work and Repository framework; Trackable Entities, an Entity Framework extension; and EF Core Scaffolding with Handlebars, which reverse engineers customizable classes from a database.)

The Purpose of Software Architecture

Martin grapples with how to define precisely what software architecture is, and in so doing he refers to an article he wrote, Who Needs an Architect?, in which he quotes Ralph Johnson as saying, “Architecture is about the important stuff. Whatever that is.” There are different ways of defining what is important to a project. It could be related to technical decisions, such as programming languages and frameworks, which once implemented are difficult to change without incurring substantial cost.

Given this characterization, one might speculate concerning the process by which important technical decisions are made. In my view it’s the job of the architect to ensure that consequential decisions are made in a strategic manner. In other words, the architect is perhaps the person who sees the larger technological landscape, who discerns emerging trends, and who can discriminate between enduring paradigm shifts and passing fads.

For example, back in 2016 I wrote a somewhat influential (and controversial) blog post titled WCF Is Dead, in which I argued against using Windows Communication Foundation for building greenfield web services. While at the time there were still some valid use cases for WCF (for example, inter/intra-process communication and message queuing), SOAP-based web services have since been fully supplanted by RESTful Web APIs built with modern toolkits such as ASP.NET Core, which incorporate Dependency Injection as a first-class citizen. With the planned merging in 2020 of the disparate versions of .NET (both the full .NET Framework and Mono / Xamarin) into a version of .NET Core that will simply be called .NET 5, Microsoft will officially classify WCF services as legacy: server-side WCF will not be part of .NET 5. (Microsoft recommends gRPC as a successor to WCF for efficient inter-process communication, support for which will be included in ASP.NET Core 3.0.) My prescient assessment of WCF’s demise is an example of the kind of strategic architectural advice that can spare companies from costly bets on tech stacks that are on their way out.

dotnet-timeline.png

Choice of software frameworks is just one crucial decision in which software architecture can play a role. We are in the midst of a number of other paradigm shifts that will influence the direction and focus of our software projects. Here are a few notable examples:

  • From On-Premises (costly, inefficient) to Cloud-Based Computing (economies of scale)
  • From Centralized (SVN, TFVC) to Decentralized Version Control (Git)
  • From Monolithic (binary references) to Modular Design (packages)
  • From Server-Based (virtual machines) to Serverless Architecture (Docker, K8s, SaaS)
  • From Platform-Specific (Windows) to Platform-Independent Frameworks (Linux)
  • From Coarse-Grained (n-tier) to Fine-Grained Services (microservices)
  • From Imperative (scripts) to Declarative Infrastructure (YAML)

Many of these changes are just beginning to gain dominance in the software industry, so it’s important for architects to train and mentor dev teams in the transition from outdated practices to modern approaches that will result in greater longevity as well as yield increased levels of reliability, scalability and cost-effectiveness.

The Right Level of Abstraction

Martin goes on to state that our goal as software architects is to design systems that have fewer and fewer aspects that are difficult to change. In my view, this is where SOLID programming principles come in.

Adhering to these principles tends to yield a more modular design of loosely coupled components, each of which has just one responsibility. Dependencies are represented by interfaces that are injected into constructors and supplied by a dependency injection container. Systems are also designed using well-known design patterns, including the Gang of Four and Domain-Driven Design patterns. In addition, it is important to establish how microservices interact with one another, abstracting away direct service dependencies in favor of indirect communication based on models such as publish-subscribe (for example, Amazon SNS / SQS) or event sourcing with a unified log (for example, Apache Kafka, Amazon Kinesis). Martin has an excellent presentation on the Many Meanings of Event-Driven Architecture, in which he discusses the pros and cons of various event-based architectural patterns.
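To make constructor injection concrete, here is a minimal C# sketch (the Order, IOrderRepository and OrderService types are hypothetical):

// Minimal entity for illustration
public class Order
{
    public int Id { get; set; }
}

// The service depends on an abstraction rather than a concrete implementation
public interface IOrderRepository
{
    Task<Order> GetOrder(int id);
}

public class OrderService
{
    private readonly IOrderRepository _orderRepository;

    // The dependency is supplied by the DI container at runtime
    public OrderService(IOrderRepository orderRepository)
    {
        _orderRepository = orderRepository;
    }

    public Task<Order> GetOrder(int id) => _orderRepository.GetOrder(id);
}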

The purpose of applying these patterns and enforcing separation of concerns is to reduce the parts of the system design that are difficult to change, thereby increasing flexibility and insuring against premature obsolescence. This is an approach I described some time ago using Jeffrey Palermo’s Onion Architecture, which aims to decouple an application from infrastructure concerns using dependency injection. A benefit of this approach is more testable code, in which mocking frameworks are used to stub external dependencies such as data stores or other services.

Trust But Verify

It is important for software architects to understand that, no matter how beautiful and elegant your designs are, dev teams will almost always adulterate them. As I pointed out in a talk at a London software architecture conference in 2015, entitled Avoiding Five Common Architectural Pitfalls, developers will often apply architectural patterns without fully understanding them and will, as a result, introduce hacks and workarounds (a.k.a. anti-patterns) that defeat the very purpose for which the pattern was created. This may be because the team is composed of members with varying levels of skill and experience, or because devs are accustomed to a coding practice that has been displaced by a newer approach.

kool-aid.jpg

A case in point would be the use of dependency injection, where devs may “new” up objects in classes rather than allowing the DI framework to do its thing. Another example would be the use of modal dialogs directly from view-models while implementing the MVVM pattern in client apps. Furthermore, I’ve seen devs struggle to take full advantage of Git, not understanding how to pull down upstream commits with rebase, or using the same branch for all their pull requests.

As Martin puts it, the best architects are like mountaineering guides: “a more experienced and skillful team member who teaches other team members to better fend for themselves yet is always there for the really tricky stuff.”

Why Architecture Matters

One of the most challenging aspects of a software architect’s job is to communicate to stakeholders and key decision-makers the significant business value that effective software architecture brings to the business. Sometimes companies fail to recognize that, in the 21st century, every company needs to see itself, to a certain extent, as a technology company, or at least to recognize that well-built software can be a differentiating factor that provides a crucial competitive advantage.

Even if a company acknowledges the importance of software in general, it may not recognize that adequate software architecture is essential to building systems that are large, complex or play a critical role in running the business. Responsibility for communicating this message lies with us, and our profession suffers to the extent that we fail to convey the role software architecture plays both in avoiding costly mistakes and in introducing new product features in a shorter span of time. Martin calls this the Design Stamina Hypothesis, which postulates that, at a certain point in the app development lifecycle, good design allows features to be introduced at a faster pace than when design is not considered, or when poor design choices have resulted in an inordinate accumulation of technical debt.

design-stamina.gif

I worked on a large project where the lead developer lacked design expertise. The code base was so intertwined and convoluted that releasing new versions became a nightmare. Classes with over 3,000 lines of code made maintenance almost impossible, and the lack of unit tests meant that new features or bug fixes would break existing functionality or destabilize the system.

One of the misperceptions that architects face is that we are engaging in architecture for architecture’s sake, or that we are keen on introducing new technologies mostly because of the coolness factor. Our challenge is to counter this misperception by arguing not for the aesthetic value of good design, but for its pragmatic, economic value. We need to frame the need for intentional design as something that can save the company millions of dollars by averting otherwise disastrous technology and design choices, produce a distinct competitive edge through market differentiation, or pave the way for increased customer satisfaction by making it possible to evolve the product in a sustainable fashion. If we can communicate a clear understanding of the nature and purpose of software architecture, and forcefully articulate the value proposition of sound architectural design and the benefits of ensuring its correct execution, we can help mature an organization’s ability to deliver software solutions that both ward off obsolescence and add features that delight customers.

Happy architecting!


Use EF Core with AWS Lambda Functions

This is Part 3 in a 3 part series:

  1. Add .NET Core DI and Config Goodness to AWS Lambda Functions
  2. IDesignTimeDbContextFactory and Dependency Injection: A Love Story
  3. Use EF Core with AWS Lambda Functions (this post)

In a previous post I demonstrated how to set up Dependency Injection and Configuration for AWS Lambda Functions that are written in C# and run on .NET Core. The purpose of this post is to provide an example and some best practices for using Entity Framework Core with AWS Lambda.

Note: You can download or clone the code for this post here: https://github.com/tonysneed/net-core-lambda-di-config-ef

One of the benefits of adding DI and Config to a Lambda function is that you can abstract away persistence concerns from your application, allowing you greater flexibility in determining which concrete implementation to use. For example, you may start out with a relational data store (such as SQL Server, PostgreSQL or SQLite), but decide later to move to a NoSQL database or other non-relational store, such as Amazon S3. This is where the Repository Pattern comes in.

Note: If you’re going to get serious about using Repository and Unit of Work patterns, you should use a framework dedicated to this purpose, such as URF (Unit of Work and Repository Framework): https://github.com/urfnet/URF.Core

A Simple Example

Let’s start with a simple example, a Products repository. You would begin by defining an interface, for example, IProductRepository. Notice in the code below that the GetProduct method returns a Task. This is so that IO-bound operations can execute without blocking the calling thread.
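Here is a minimal sketch of what such an interface might look like (a simple Product entity is assumed; see the repo for the real code):

public interface IProductRepository
{
    // Returning a Task lets IO-bound calls execute without blocking the caller
    Task<Product> GetProduct(int id);
}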

You’re going to want to place this interface in a .NET Standard class library so that it can be referenced separately from specific implementations. Then create a ProductRepository class that implements IProductRepository. This can go in a .NET Standard class library that includes a package reference to an EF Core provider, for example, Microsoft.EntityFrameworkCore.SqlServer. You will want to add a constructor that accepts a DbContext-derived class.
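A sketch of the implementation (ProductsDbContext is an assumed DbContext-derived class):

public class ProductRepository : IProductRepository
{
    private readonly ProductsDbContext _context;

    // The DbContext-derived class is supplied by the DI container
    public ProductRepository(ProductsDbContext context)
    {
        _context = context;
    }

    public async Task<Product> GetProduct(int id) =>
        await _context.Products.SingleOrDefaultAsync(p => p.Id == id);
}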

Dependency Resolution

At this point you’re going to want to add a .NET Standard class library that can be used for dependency resolution. This is where you’ll add code that sets up DI and registers services that are used by classes in your application, including the DbContext that is used by your ProductRepository.

There are parts of the DependencyResolver class that are worthy of discussion. First, notice that the constructor accepts a delegate for registering services. This is so that classes using DependencyResolver can pass it a method for adding other dependencies. This is important because the application will register dependencies that are of no interest to the EF Core CLI.

Another thing to point out is the code that sets the CurrentDirectory of the ConfigurationService. This is required in order to locate the appsettings.*.json files residing at the root of the main project.

Lastly, there is code that registers the DbContext with the DI system. This calls an overload of AddTransient that accepts an IServiceProvider, which is used to get an instance of the ConfigurationService that supplies a connection string to the UseSqlServer method of the DbContextOptionsBuilder. This code can appear somewhat obtuse if you’re not used to it, but the idea is to use the IServiceProvider of the DI system to resolve services that are required for passing additional parameters to constructors.
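Here is a sketch of that registration (ProductsDbContext and the “ProductsDb” connection string name are illustrative):

// Register the DbContext with a factory delegate; the IServiceProvider
// resolves the configuration service that supplies the connection string
services.AddTransient(provider =>
{
    var config = provider.GetService<IConfigurationService>().GetConfiguration();
    var optionsBuilder = new DbContextOptionsBuilder<ProductsDbContext>();
    optionsBuilder.UseSqlServer(config.GetConnectionString("ProductsDb"));
    return new ProductsDbContext(optionsBuilder.Options);
});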

To use DI with a Lambda function, simply add a constructor to the Function class that creates a DependencyResolver, passing a ConfigureServices method that registers IProductRepository with the DI system. The FunctionHandler method can then use the repository to retrieve a product by id.
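In outline, the Function class might look like this (a sketch; consult the repo for the full listing):

public class Function
{
    public IProductRepository ProductRepository { get; }

    // Default constructor wires up DI via the DependencyResolver
    public Function()
    {
        var resolver = new DependencyResolver(ConfigureServices);
        ProductRepository = resolver.ServiceProvider.GetService<IProductRepository>();
    }

    // Second constructor supports unit tests that pass in a mock repository
    public Function(IProductRepository productRepository)
    {
        ProductRepository = productRepository;
    }

    // Register services the application needs (of no interest to the EF Core CLI)
    private void ConfigureServices(IServiceCollection services)
    {
        services.AddTransient<IProductRepository, ProductRepository>();
    }

    public async Task<Product> FunctionHandler(string input, ILambdaContext context)
    {
        return await ProductRepository.GetProduct(int.Parse(input));
    }
}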

Notice the second constructor that accepts an IProductRepository. This is to support unit tests that pass in a mock IProductRepository. For example, here is a unit test that uses Moq to create a fake IProductRepository. This allows for testing logic in the FunctionHandler method without connecting to an actual database, which would make the test fragile and slow.
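The test might look something like this (a sketch using xUnit, Moq and the Lambda test utilities):

[Fact]
public async Task FunctionHandler_Should_Return_Product()
{
    // Arrange: mock the repository so no database connection is needed
    var expected = new Product { Id = 1, ProductName = "Chai" };
    var mockRepo = new Mock<IProductRepository>();
    mockRepo.Setup(r => r.GetProduct(1)).ReturnsAsync(expected);
    var function = new Function(mockRepo.Object);

    // Act
    var product = await function.FunctionHandler("1", new TestLambdaContext());

    // Assert
    Assert.Equal(expected.ProductName, product.ProductName);
}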

EF Core CLI

In a previous post I proposed some options for implementing an interface called IDesignTimeDbContextFactory, which is used by the EF Core CLI to create code migrations and apply them to a database.

aws-unicorn-framed.png

This allows the EF Core tooling to retrieve a connection string from the appsettings.*.json file that corresponds to a specific environment (Development, Staging, Production, etc).

Here is a sample DbContext factory that uses a DependencyResolver to get a DbContext from the DI system.
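A minimal sketch (names illustrative):

public class ProductsDbContextFactory : IDesignTimeDbContextFactory<ProductsDbContext>
{
    public ProductsDbContext CreateDbContext(string[] args)
    {
        // The resolver registers configuration services and the DbContext
        var resolver = new DependencyResolver();
        return resolver.ServiceProvider.GetService<ProductsDbContext>();
    }
}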

To set the environment, simply set the ASPNETCORE_ENVIRONMENT environment variable.
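For example:

# Windows command prompt (current session)
set ASPNETCORE_ENVIRONMENT=Development

# macOS / Linux
export ASPNETCORE_ENVIRONMENT=Development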

Then run the dotnet ef commands to add a migration and create a database with a schema that mirrors your entity definitions and relationships. You’ll want to do this twice: once for the Development environment and again for Production.
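For example (the migration name is illustrative):

dotnet ef migrations add Initial
dotnet ef database update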

Try It Out!

Once you have created the database, you can press F5 to launch the AWS .NET Mock Lambda Test Tool, which you can use to develop and debug your Lambda function locally. Simply enter a value of 1 for Function Input and click the Execute Function button.

mock-lambda-test-tool.png

You should see JSON for Product 1 from the database.

mock-lambda-test-tool-result.png

When you’re confident everything works locally, you can throw caution to the wind and upload your Lambda function to AWS.

upload-lambda-function1.png

Make sure that the ASPNETCORE_ENVIRONMENT environment variable is set appropriately.

upload-lambda-function2.png

You can then bravely execute your deployed Lambda function.

execute-lambda

Conclusion

One of the benefits of using C# for AWS Lambda functions is built-in support for Dependency Injection, which is a first-class citizen in .NET Core and should be as indispensable to developers as a Jedi’s light saber. The tricky part can be setting up DI so that it can be used both at runtime by the Lambda function and at development-time by the EF Core CLI. With the knowledge you now possess, you should have no trouble implementing a microservices architecture with serverless functions that are modular and extensible. Cheers!


IDesignTimeDbContextFactory and Dependency Injection: A Love Story

This is Part 2 in a 3 part series:

  1. Add .NET Core DI and Config Goodness to AWS Lambda Functions
  2. IDesignTimeDbContextFactory and Dependency Injection: A Love Story (this post)
  3. Use EF Core with AWS Lambda Functions

Whenever I set out to create an application or service, I might start out with everything in a single project, but before long I find myself chopping things up into multiple projects. This is in line with the Single Responsibility and Interface Segregation principles of SOLID software development. A corollary of this approach is separating out development-time code from runtime code. You can see an example of this in the NPM world with separate dev dependencies in a package.json file.  Similarly, .NET Core has adopted this concept with .NET Core CLI tools, which can also be installed globally.

Note: You can download or clone the code for this post here: https://github.com/tonysneed/ef-design-di

Entity Framework Core provides the IDesignTimeDbContextFactory interface so that you can separate the EF code needed for generating database tables at design time (what is commonly referred to as a code-first approach) from EF code used by your application at runtime.  A typical implementation of IDesignTimeDbContextFactory might look like this. Note that using the MigrationsAssembly method is also required for generating code-first migrations.
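Something along these lines (a sketch with the connection string hard-coded; type and assembly names are illustrative):

public class ProductsDbContextFactory : IDesignTimeDbContextFactory<ProductsDbContext>
{
    public ProductsDbContext CreateDbContext(string[] args)
    {
        var optionsBuilder = new DbContextOptionsBuilder<ProductsDbContext>();
        optionsBuilder.UseSqlServer(
            "Data Source=(localdb)\\MSSQLLocalDB;Initial Catalog=ProductsDb;Integrated Security=True",
            // Point generated migrations at the project containing the DbContext
            sql => sql.MigrationsAssembly("EfDesignDemo.EF"));
        return new ProductsDbContext(optionsBuilder.Options);
    }
}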

The code smell that stands out here is that the connection string is hard-coded.  To remedy this you can build an IConfiguration in which you set the base path to the main project directory.

While this is better than including the hard-coded connection string, we can still do better. For example, we might want to select a different appsettings.*.json file depending on the environment we’re in (Development, Staging, Production, etc).  In ASP.NET Core, this is determined by a special environment variable, ASPNETCORE_ENVIRONMENT.  We’re also going to want to plug in environment variables, so that the connection string and other settings can be overridden when the application is deployed.
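Putting that together, the factory might build its configuration like this (a sketch; file and connection-string names are illustrative):

public ProductsDbContext CreateDbContext(string[] args)
{
    // Read the environment name, defaulting to Production
    var environment = Environment.GetEnvironmentVariable("ASPNETCORE_ENVIRONMENT")
        ?? "Production";

    // Last provider wins: environment variables override the JSON files
    var config = new ConfigurationBuilder()
        .SetBasePath(Directory.GetCurrentDirectory())
        .AddJsonFile("appsettings.json", optional: false)
        .AddJsonFile($"appsettings.{environment}.json", optional: true)
        .AddEnvironmentVariables()
        .Build();

    var optionsBuilder = new DbContextOptionsBuilder<ProductsDbContext>();
    optionsBuilder.UseSqlServer(config.GetConnectionString("ProductsDb"));
    return new ProductsDbContext(optionsBuilder.Options);
}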

See It In Action

The beauty of adding config to your design-time DbContext factory is that it will pick up the connection string from the configuration system, selecting the appropriate connection string for the environment you specify. How do you specify an environment (Development, Staging, Production, etc)? Simply by setting that special ASPNETCORE_ENVIRONMENT environment variable.

If you’re on Windows, you can set and view it like so:
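For example, in a command prompt (this sets the variable for the current session and then displays it):

set ASPNETCORE_ENVIRONMENT=Development
set ASPNETCORE_ENVIRONMENT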

If you’re on Mac, here’s how to do it:
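For example, in Terminal:

export ASPNETCORE_ENVIRONMENT=Development
echo $ASPNETCORE_ENVIRONMENT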

With the environment set, you can switch to the directory where the DbContext factory is located and run commands to add EF code migrations and apply them to the database specified in the appsettings.*.json file for your selected environment.
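For example (the migration name is illustrative):

dotnet ef migrations add Initial
dotnet ef database update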

Show Me Some DI Love

This solution works, but further improvements are possible. One problem is that it violates the Dependency Inversion principle of SOLID design, because we are newing up the DbContext in the design-time factory. It might be cleaner to use DI to resolve dependencies and provide the DbContext.

geeks-falling-in-love

To remedy this we can factor out the configuration bits into an IConfigurationService that builds an IConfiguration, and this service will depend on an IEnvironmentService to supply the environment name.

The implementations for these interfaces can go into a .NET Standard class library that exists to support configuration.

Now we just need to create a DependencyResolver class that uses an IServiceCollection to register dependencies.

This class exposes an IServiceProvider that we can use to get an instance that has been created by the DI container.  This allows us to refactor the ProductsDbContextFactory class to use DependencyResolver to create the DbContext.
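Here is a sketch of both pieces (registrations and names are illustrative; see the repo for the actual implementation):

public class DependencyResolver
{
    public IServiceProvider ServiceProvider { get; }

    // Optional delegate lets callers register additional services
    public DependencyResolver(Action<IServiceCollection> registerServices = null)
    {
        var services = new ServiceCollection();
        services.AddSingleton<IEnvironmentService, EnvironmentService>();
        services.AddSingleton<IConfigurationService, ConfigurationService>();
        services.AddTransient(provider =>
        {
            var config = provider.GetService<IConfigurationService>().GetConfiguration();
            var optionsBuilder = new DbContextOptionsBuilder<ProductsDbContext>();
            optionsBuilder.UseSqlServer(config.GetConnectionString("ProductsDb"));
            return new ProductsDbContext(optionsBuilder.Options);
        });
        registerServices?.Invoke(services);
        ServiceProvider = services.BuildServiceProvider();
    }
}

public class ProductsDbContextFactory : IDesignTimeDbContextFactory<ProductsDbContext>
{
    public ProductsDbContext CreateDbContext(string[] args) =>
        new DependencyResolver().ServiceProvider.GetService<ProductsDbContext>();
}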

Pick Your Potion

Adding DI to the mix may feel like overkill, because the DbContext factory is only being used at design-time by EF Core tooling.  In that case, it would be more straightforward to stick with building an IConfiguration right within the CreateDbContext of your factory class, as shown in the ProductsDbContextFactory3 code snippet.

However, there is a case where the DI-based approach would be worth the effort, which is when you need to set up DI for the application entry point, for example, when using EF Core with AWS Lambda Functions.  More on that in my next blog post. 🤓 Enjoy!


Add .NET Core DI and Config Goodness to AWS Lambda Functions

This is Part 1 in a 3 part series:

  1. Add .NET Core DI and Config Goodness to AWS Lambda Functions (this post)
  2. IDesignTimeDbContextFactory and Dependency Injection: A Love Story
  3. Use EF Core with AWS Lambda Functions

When Amazon introduced AWS Lambda in 2014, it helped kick off the revolution in serverless computing that is now taking the software development world by storm.  In a nutshell, AWS Lambda (and equivalents such as Azure Functions and Google Cloud Functions) provides an abstraction over the underlying operating system and execution runtime, so that you can more easily achieve economies of scale that are the driving force of Cloud computing paradigms.

Note: You can download or clone the GitHub repository for this post here: https://github.com/tonysneed/net-core-lambda-di-config

AWS Lambda with .NET Core

AWS Lambda supports a number of programming languages and runtimes, as well as custom runtimes, which enable the use of any language and execution environment. Included in the list of standard runtimes is Microsoft .NET Core, an open-source cross-platform runtime on which you can build apps using the C# programming language. To get started with AWS Lambda for .NET Core, you can install the AWS Toolkit for Visual Studio, which provides a number of project templates.

aws-lambda-templates.png

The first thing you’ll notice is that you’re presented with a choice between two different kinds of projects: a) AWS Lambda Project, and b) AWS Serverless Application.  An AWS Lambda Project provides a standalone function that can be invoked in response to a number of different events, such as notifications from a queue service.  The Serverless Application template, on the other hand, allows you to group several functions and deploy them together as a single application. It includes a Startup class with a ConfigureServices method for registering types with the built-in dependency injection system, as well as local and remote entry points that leverage the default configuration system using values from an appsettings.json file included with the project.

The purpose of the Serverless Application approach is to create a collection of functions that will be exposed as an HTTP API using the Amazon API Gateway. This may suit your needs, but often you will want to write standalone functions that respond to various events and represent a flexible microservices architecture with granular services that can be deployed and scaled independently.  In this case the AWS Lambda Project template will fill the bill.

The drawback, however, of the AWS Lambda Project template is that it lacks the usual hooks for setting up configuration and dependency injection. You’re on your own for adding the necessary code.  But never fear — I will now show you step-by-step instructions for accomplishing this feat.

Add Dependency Injection Code

One of my favorite things to tell developers is that employing the new keyword to instantiate types is an anti-pattern.  Anytime you directly create an object you are coupling yourself to a specific implementation.  So it stands to reason that you’d want to apply the same Inversion of Control pattern and .NET Core Dependency Injection goodness to your AWS Lambda Functions as you would in a standard ASP.NET Core Web API project.

Start by adding a package reference to Microsoft.Extensions.DependencyInjection version 2.1.0.
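From the command line:

dotnet add package Microsoft.Extensions.DependencyInjection --version 2.1.0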

Note: Versions of NuGet packages you install need to match the version of .NET Core supported by AWS Lambda for the project you created.  In this example it is 2.1.0.

Then add a ConfigureServices function that accepts an IServiceCollection and uses it to register services with the .NET Core dependency injection system.

Next, add a parameterless constructor that creates a new ServiceCollection, calls ConfigureServices, then calls BuildServiceProvider to create a service provider you can use to get services via DI.
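Put together, the skeleton might look like this (a sketch; service registrations are filled in below):

public class Function
{
    // Provider used to resolve services registered with the DI system
    public IServiceProvider ServiceProvider { get; }

    public Function()
    {
        var services = new ServiceCollection();
        ConfigureServices(services);
        ServiceProvider = services.BuildServiceProvider();
    }

    // Register services with the .NET Core dependency injection system
    private void ConfigureServices(IServiceCollection services)
    {
        // Registrations are added in the sections that follow
    }
}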

Add Configuration Code

Following the DI playbook, you’re going to want to abstract configuration behind an interface so that you can mock it for unit tests.  Start by adding version 2.1.0 of the following package references:

  • Microsoft.Extensions.Configuration
  • Microsoft.Extensions.Configuration.EnvironmentVariables
  • Microsoft.Extensions.Configuration.FileExtensions
  • Microsoft.Extensions.Configuration.Json

Then create an IConfigurationService interface with a GetConfiguration method that returns an IConfiguration.

Because some of the configuration settings will be for specific environments, you’ll also want to add an IEnvironmentService interface with an EnvironmentName property.
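Together, the two interfaces are quite small (a sketch matching the descriptions above):

public interface IConfigurationService
{
    IConfiguration GetConfiguration();
}

public interface IEnvironmentService
{
    string EnvironmentName { get; set; }
}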

To implement IEnvironmentService create an EnvironmentService class that checks for the presence of a special ASPNETCORE_ENVIRONMENT variable, defaulting to a value of “Production”.

To avoid the use of magic strings, you can employ a static Constants class.

The ConfigurationService class should have a constructor that accepts an IEnvironmentService and implements the GetConfiguration method by creating a new ConfigurationBuilder and calling methods to add appsettings JSON files and environment variables.  Because the last config entry wins, it is possible to add values to an appsettings file which are then overridden by environment variables that are set when the AWS Lambda Function is deployed.
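Here is a sketch of those three classes (constant names are illustrative):

public static class Constants
{
    public static class EnvironmentVariables
    {
        public const string AspnetCoreEnvironment = "ASPNETCORE_ENVIRONMENT";
    }
    public static class Environments
    {
        public const string Production = "Production";
    }
}

public class EnvironmentService : IEnvironmentService
{
    public EnvironmentService()
    {
        // Default to Production when the variable is not set
        EnvironmentName = Environment.GetEnvironmentVariable(
            Constants.EnvironmentVariables.AspnetCoreEnvironment)
            ?? Constants.Environments.Production;
    }

    public string EnvironmentName { get; set; }
}

public class ConfigurationService : IConfigurationService
{
    private readonly IEnvironmentService _environmentService;

    public ConfigurationService(IEnvironmentService environmentService)
    {
        _environmentService = environmentService;
    }

    public IConfiguration GetConfiguration() =>
        new ConfigurationBuilder()
            .SetBasePath(Directory.GetCurrentDirectory())
            .AddJsonFile("appsettings.json", optional: false, reloadOnChange: true)
            .AddJsonFile($"appsettings.{_environmentService.EnvironmentName}.json",
                optional: true, reloadOnChange: true)
            // Last entry wins: environment variables override JSON file settings
            .AddEnvironmentVariables()
            .Build();
}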

Now that you’ve defined the environment and configuration service interfaces and implementations, it’s time to register them with the DI system inside the ConfigureServices method of the Function class.
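For example:

private void ConfigureServices(IServiceCollection services)
{
    services.AddTransient<IEnvironmentService, EnvironmentService>();
    services.AddTransient<IConfigurationService, ConfigurationService>();
}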

Then edit the Function class constructor to get IConfigurationService from the DI service provider and set a read-only ConfigService property on the class.  The top of the Function class should now look something like this (a sketch):
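public class Function
{
    public IConfigurationService ConfigService { get; }

    public Function()
    {
        var services = new ServiceCollection();
        ConfigureServices(services);
        var serviceProvider = services.BuildServiceProvider();
        ConfigService = serviceProvider.GetService<IConfigurationService>();
    }
    // ...
}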

Lastly, add code to the FunctionHandler method to get a value from the ConfigService using the input parameter as a key.
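For example:

public string FunctionHandler(string input, ILambdaContext context)
{
    // Use the input parameter as a key into the configuration system
    return ConfigService.GetConfiguration()[input];
}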

Add App Settings JSON Files

In .NET Core it is customary to add an appsettings.json file to the project and to store various configuration settings there.  Typically this might include database connection strings (without passwords and other user secrets!).  This is so that developers can just press F5 and values can be fed from appsettings.json into the .NET Core configuration system.  For example, the following appsettings.json file contains three key-value pairs.
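(The keys and values beyond env1/val1, which are used later in this post, are illustrative.)

{
  "env1": "val1",
  "env2": "val2",
  "env3": "val3"
}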

App settings will often vary from one environment to another, so you’ll usually see multiple JSON files added to a project, with the environment name included in the file name (for example, appsettings.Development.json, appsettings.Staging.json, etc).  Config values for a production environment can then come from the primary appsettings.json file, or from environment variables set at deployment time.  For example, the following appsettings.Development.json file has values which take the place of those in appsettings.json when the ASPNETCORE_ENVIRONMENT environment variable is set to Development.
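For example:

{
  "env1": "dev-val1",
  "env2": "dev-val2",
  "env3": "dev-val3"
}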

However, in order for these files to be deployed to AWS Lambda, they need to be included in the output directory when the application is published.  For this to take place, you need to open the Properties window and set the Build Action property to Content and the Copy to Output Directory property to Copy always.
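In the .csproj file, the equivalent entries look something like this:

<ItemGroup>
  <Content Include="appsettings.json">
    <CopyToOutputDirectory>Always</CopyToOutputDirectory>
  </Content>
  <Content Include="appsettings.Development.json">
    <CopyToOutputDirectory>Always</CopyToOutputDirectory>
  </Content>
</ItemGroup>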

appsettings-props

Set Environment Variables

There are two places where you’ll need to set environment variables: development-time and deployment-time.  At development time, the one variable you’ll need to set is the ASPNETCORE_ENVIRONMENT environment variable, which you’ll want to set to Development. To do this, open the launchSettings.json file from under the Properties folder in the Solution Explorer.  Then add the following JSON property:
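For example:

"environmentVariables": {
  "ASPNETCORE_ENVIRONMENT": "Development"
}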

To set environment variables at deployment-time, you can add these to the aws-lambda-tools-defaults.json file.  (Just remember to escape the double quote marks.)
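Something like this (assuming the environment-variables key used by the Lambda tooling):

"environment-variables": "\"ASPNETCORE_ENVIRONMENT\"=\"Production\""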

Then, when you publish your project to AWS Lambda, these environment variables will be shown in the Upload dialog, where you can set their values.  They will also be shown on the Configuration tab of the Lambda Function after publication, where you can set them and click Apply Changes to deploy the new values.

netcore-lambda

Try It Out

To execute and debug your AWS Lambda Function locally, simply press F5 to launch the Mock Lambda Test Tool with the Visual Studio debugger attached.  For Function Input you can enter one of the keys from your appsettings.json file, such as "env1".

mock-lambda-test-tool1.png

Press the Execute Function button and you should see a response of  "dev-val1" retrieved from the appsettings.Development.json file.

To try out configuration at deployment-time, right-click on the NetCoreLambda project in the Solution Explorer and select Publish to AWS Lambda.  In the wizard you can set the ASPNETCORE_ENVIRONMENT environment variable to something other than Development, such as Production or Staging.  When you enter “env1” for the Sample input and click Invoke, you should get a response of  "val1" from the appsettings.json file.  Then, to test overriding JSON file settings with environment variables, you can click on the Configuration tab and replace val1 with foo.  Click Apply Changes, then invoke the function again to return a value of foo.

environment-foo

Unit Testing

One of the reasons for using dependency injection in the first place is to make your AWS Lambda Functions testable by adding constructors to your types which accept interfaces for service dependencies. To this end, you can add a constructor to the Function class that accepts an IConfigurationService.

In the NetCoreLambda.Test project add a package dependency for Moq.  Then add a unit test which mocks both IConfiguration and IConfigurationService, passing the mock IConfigurationService to the Function class constructor.  Calling the FunctionHandler method will then return the expected value.
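A sketch of such a test:

[Fact]
public void FunctionHandler_Should_Return_Config_Value()
{
    // Arrange: mock configuration returns a known value for the "env1" key
    var mockConfig = new Mock<IConfiguration>();
    mockConfig.Setup(c => c["env1"]).Returns("val1");
    var mockConfigService = new Mock<IConfigurationService>();
    mockConfigService.Setup(s => s.GetConfiguration()).Returns(mockConfig.Object);
    var function = new Function(mockConfigService.Object);

    // Act
    var result = function.FunctionHandler("env1", new TestLambdaContext());

    // Assert
    Assert.Equal("val1", result);
}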

Conclusion

AWS Lambda Functions offer a powerful and flexible mechanism for developing and deploying single-function, event-driven microservices based on a serverless architecture. In this post I have demonstrated how you can leverage the powerful capabilities of .NET Core to add dependency injection and configuration to your C# Lambda functions, so that you can make them more testable and insulate them from specific platform implementations.  For example, using this approach you could use a Repository pattern with .NET Core DI and Config systems to easily substitute a relational data store with a NoSQL database service such as Amazon DynamoDB.

Happy Coding!


Customize EF Core Scaffolding with Handlebars Templates

With the release of Entity Framework Core 2.1 we finally have a version of EF Core that is ready for prime time.  EF Core is a complete re-write of its predecessor, Entity Framework 6, which has been married to the full Windows .NET Framework ever since EF was first released in 2008 as part of .NET Framework 3.5 SP1. EF Core, on the other hand, was designed using modern programming concepts, such as greater modularity via dependency injection, and can run on non-Windows platforms that provide more options for cloud-based deployments that are serverless or use containers.

But writing a new data access stack from scratch required numerous tradeoffs. On the plus side, the EF Core team added long-awaited features never realized in EF 6, such as mixed client/server query evaluation and composable queries that include raw SQL.  On the downside, they had to cut features considered essential to some teams, such as table-per-type inheritance modeling and support for mapping stored procedures to CUD operations. On balance, however, as of version 2.1 EF Core has been able to achieve much better parity with EF 6 by including previous show-stoppers such as GroupBy query translation and support for System.Transactions. (See this article for a feature-by-feature comparison between EF Core and EF 6.)

While the EF Core runtime has matured to the point where it can be considered a viable option for enterprise applications, the tooling has lagged behind.  For example, there is still no Visual Studio wizard for reverse engineering an existing database to model and context classes.  For that you’ll need to resort to the command line.

Note: Erik Ejlskov Jensen has authored the EF Core Power Tools, which provide a UI for reverse engineering classes from an existing database and use my Handlebars plugin under the covers. They also allow you to perform migrations and visualize your DbContext in various ways.

While that doesn’t present too much difficulty, there has not been a way to customize classes generated by the EF Core tooling.

That is, until now.

I have authored a plug-in (EntityFrameworkCore.Scaffolding.Handlebars) that allows you to use Handlebars templates to customize classes that are reverse engineered from an existing database using the dotnet ef dbcontext scaffold command.  To use the plugin, simply add my extension NuGet package to your EF Core Class Library project, then add a class that implements IDesignTimeServices and calls the AddHandlebarsScaffolding extension method that hangs off IServiceCollection.
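For example (class name illustrative):

public class ScaffoldingDesignTimeServices : IDesignTimeServices
{
    public void ConfigureDesignTimeServices(IServiceCollection services)
    {
        // Swap in Handlebars templates for the default scaffolding generators
        services.AddHandlebarsScaffolding();
    }
}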

Next, open a command prompt at the project level and use the .NET Core CLI to reverse engineer a context and models from an existing database.  For example, if you have downloaded scripts to create the NorthwindSlim sample database for SQL Server LocalDb, you can run the following command:
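For example (adjust the connection string for your environment):

dotnet ef dbcontext scaffold "Data Source=(localdb)\MSSQLLocalDB;Initial Catalog=NorthwindSlim;Integrated Security=True" Microsoft.EntityFrameworkCore.SqlServer -o Models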

The first time you run the command you’ll see a CodeTemplates folder magically appear in your project.

hbs-scaffolding-sample

Handlebars Templates

There you’ll find Handlebars templates for context and entity classes which you can customize to your heart’s content.

Notice there are also partial templates that you can customize.  (One reason you might want to customize generated entity classes would be to implement an interface or extend a base class.)  The next time you run the scaffolding command, you’ll see your changes reflected in the generated classes.

Lastly, my plug-in allows you to register Handlebars helpers for further customizing output based on runtime conditions. Simply pass one or more named tuples to the AddHandlebarsScaffolding extension method.

Then insert the helper into your Handlebars template as in the following example.
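For instance, if you registered a helper under the hypothetical name my-helper, you would invoke it from a template like so:

{{my-helper}}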

This will result in the helper rendering your desired content, as shown in the following example.

For further information and detailed instructions please visit the ReadMe on the project repo. Enjoy!


It’s here! Trackable Entities for EF Core!

The idea behind my open source Trackable Entities project is quite simple: track changes to an object graph as you update, add and remove items, then send those changes to a back end service where they can be saved in a single transaction.  It’s an important thing to be able to do, because it’s difficult to wrap multiple round trips in a single transaction without holding locks for a long time.  On the other hand, you could break up related operations into multiple transactions, but then you lose the benefit of atomicity, which enables you to roll back all the changes in a transaction should one of them fail.

To get started with Trackable Entities for Entity Framework Core, download the NuGet package and check out the project repository.  You can also clone the sample applications and follow the instructions.

Brief Introduction

Suppose, for example, you update an order with related details.  Some details are unchanged, while others are modified, added or removed.  All these changes should be atomic; that is, they should all succeed or none should.  The problem with regard to Entity Framework is that, in the context of a web service, updates must be conducted in a disconnected manner, which means that you need to inform EF about which entities require updating and what kind of changes it needs to perform.  In the case of an order with multiple details that need to be updated all at once, there needs to be a way to communicate this information to the web service so that you can attach entities to a DbContext and individually set the state of each entity.  The way Trackable Entities accomplishes this is by means of a simple enum called TrackingState.

To track changes, entities implement an ITrackable interface, which includes a ModifiedProperties property to support partial updates.

In addition, Trackable Entities provides an IMergeable interface to correlate updated entities so they can be merged back into the original object graph on the client.

Once entities have been marked as Added, Modified or Deleted, all you have to do is call the ApplyChanges extension method for DbContext.
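In outline, the pieces fit together like this (a sketch based on the published interfaces):

public enum TrackingState
{
    Unchanged,
    Added,
    Modified,
    Deleted
}

public interface ITrackable
{
    TrackingState TrackingState { get; set; }
    // Property names that support partial updates of modified entities
    ICollection<string> ModifiedProperties { get; set; }
}

public interface IMergeable
{
    // Correlates updated entities so they can be merged back on the client
    Guid EntityIdentifier { get; set; }
}

// On the server, a single call applies each entity's state to the context
context.ApplyChanges(order);
await context.SaveChangesAsync();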

Trackable Entities for EF Core

Because Trackable Entities is an extension of Entity Framework, it has only been available for the full .NET Framework running on Windows.  But now that EF has been ported to .NET Core and can run on Linux and MacOS, it’s time for Trackable Entities to come along for the ride!  This makes it possible to create a Web API with ASP.NET Core that can run both locally on a Mac and in a Docker container running on a Linux VM in the Cloud.  How cool is that?!

To get started you can either use Visual Studio for Windows, Visual Studio for Mac, or if you’re adventurous, Visual Studio Code.  It doesn’t matter which option you choose, because in the end your web app will run anywhere .NET Core will run.  In this blog post I’ll walk you through a demo using VS for Mac — just for fun.  But if you prefer, feel free to check out the sample I created using the “classic” version of Visual Studio on Windows.

Note: You’ll need to install the SDK for .NET Core 2.0 or higher.

Start by creating a new project in VS using the ASP.NET Core Web API project template.

aspnet-core-project.png

Then add a .NET Standard Class Library for server-side trackable entities.

net-standard-project.png

Add the TrackableEntities.Common.Core (pre-release) and System.ComponentModel.Annotations NuGet packages.

te-common-core.png

Add classes that implement ITrackable and IMergeable interfaces. You’ll also need to add a using directive for System.ComponentModel.DataAnnotations.Schema, so that you can decorate TrackingState, ModifiedProperties and EntityIdentifier properties with a [NotMapped] attribute, to indicate these properties do not belong to the database schema.
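For example, a trackable Product entity might look like this (property names are illustrative):

public class Product : ITrackable, IMergeable
{
    public int ProductId { get; set; }
    public string ProductName { get; set; }
    public decimal? UnitPrice { get; set; }

    // Tracking properties are not part of the database schema
    [NotMapped]
    public TrackingState TrackingState { get; set; }

    [NotMapped]
    public ICollection<string> ModifiedProperties { get; set; }

    [NotMapped]
    public Guid EntityIdentifier { get; set; }
}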

EF Core Migrations

Although it is possible to generate entities based on database tables (Database First), in this demo we’ll go the other way and start with entities that will be used to generate database tables and relationships (Code First).  Both approaches use the EF .NET Core CLI, which you install by adding the Microsoft.EntityFrameworkCore.Design package and manually editing the .csproj file for the Web API project to insert a DotNetCliToolReference and change the project target from netstandard2.0 to netcoreapp2.0 (which is required to run the EF CLI). You’ll also need to add a package for the EF provider you’re using, which in this case is Microsoft.EntityFrameworkCore.Sqlite, as well as a reference to the server-side entities project you created earlier.  Then run dotnet restore from the command line.  (Visual Studio does not yet automatically restore tooling packages.)  Here is roughly what your .csproj file will then look like (package versions and project names will vary).
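<Project Sdk="Microsoft.NET.Sdk.Web">
  <PropertyGroup>
    <TargetFramework>netcoreapp2.0</TargetFramework>
  </PropertyGroup>
  <ItemGroup>
    <PackageReference Include="Microsoft.AspNetCore.All" Version="2.0.0" />
    <PackageReference Include="Microsoft.EntityFrameworkCore.Design" Version="2.0.0" />
    <PackageReference Include="Microsoft.EntityFrameworkCore.Sqlite" Version="2.0.0" />
  </ItemGroup>
  <ItemGroup>
    <DotNetCliToolReference Include="Microsoft.EntityFrameworkCore.Tools.DotNet" Version="2.0.0" />
  </ItemGroup>
  <ItemGroup>
    <ProjectReference Include="..\TrackableEntitiesSample.Entities\TrackableEntitiesSample.Entities.csproj" />
  </ItemGroup>
</Project>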

Next, add a specific DbContext-derived class to the Web API project.

Lastly, you’ll need to add a class that implements IDesignTimeDbContextFactory to create your context class with the appropriate options and connection string.

Now all you need to do to create a database from your entities is to execute the following two commands, after which a northwindslim.db file will appear in the project directory.
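The two commands (migration name illustrative):

dotnet ef migrations add Initial
dotnet ef database update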

Configure Dependency Injection

To inject your EF context class into controllers, you’ll need to register it with ASP.NET Core’s dependency injection system.  Add code to the ConfigureServices method of the Startup class in which you call services.AddDbContext, passing options that include the provider and connection string.  You’ll also want to configure the JSON serializer to preserve references in order to accommodate cyclical references in object graphs.
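A sketch of that configuration (context name and connection string are illustrative):

public void ConfigureServices(IServiceCollection services)
{
    // Register the EF context with the DI system, using the SQLite provider
    services.AddDbContext<NorthwindSlimContext>(options =>
        options.UseSqlite("Data Source=northwindslim.db"));

    // Preserve references to accommodate cyclical references in object graphs
    services.AddMvc()
        .AddJsonOptions(options =>
            options.SerializerSettings.PreserveReferencesHandling =
                PreserveReferencesHandling.Objects);
}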

Web API Controllers

Next add the TrackableEntities.EF.Core NuGet package to the Web API project.  Then you can add a controller to the project in which you use LINQ to execute queries against the database and return objects from your GET actions.

Then to utilize Trackable Entities for persisting object graphs with changed entities, all you need to do is call context.ApplyChanges.

Note that after calling context.SaveChangesAsync() there is a line of code that calls context.LoadRelatedEntitiesAsync, passing the root entity.  This will traverse the object graph, loading reference properties so that entities returned to the client will have them populated.  For example, when updating an Order it isn’t necessary to send the entire Customer entity when all you have to do is set the order’s CustomerId property.  But the client will usually want to have the Customer property populated when it is returned from the service, which is what LoadRelatedEntitiesAsync does. Lastly, the call to context.AcceptChanges is there to reset TrackingState on each entity to Unchanged, so that the client can have a fresh start before making additional changes.
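Putting the whole update action together, a PUT method might look like this (a sketch; controller and model names are illustrative):

[HttpPut]
public async Task<IActionResult> PutOrder([FromBody] Order order)
{
    // Apply each entity's TrackingState to the context
    _context.ApplyChanges(order);

    // Persist all the changes in a single transaction
    await _context.SaveChangesAsync();

    // Populate reference properties on entities returned to the client
    await _context.LoadRelatedEntitiesAsync(order);

    // Reset TrackingState to Unchanged so the client gets a fresh start
    _context.AcceptChanges(order);

    return Ok(order);
}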

You can now execute dotnet run on the command line and the Web API will start listening for requests on the default port 5000.

dotnet-run.png

You can then open a browser and navigate to an API endpoint to make sure you can retrieve entities.  You can also use a REST client such as Postman to make POST, PUT and DELETE requests.

customer-alfki.png

Trackable Entities on the Client

On the client side Trackable Entities provides change-tracking for .NET apps by means of a ChangeTrackingCollection.  You can create a .NET Core console app to which you can add the TrackableEntities.Client NuGet package. Even though this package was written for the full .NET Framework version 4.6.1, you can use it in a .NET Core app because both are compliant with .NET Standard 2.0.

Because client-side entities have a different set of concerns than server-side entities (firing events on property changes and using change-tracking collections for navigation properties), you’ll probably want to generate client entities from the database schema you created earlier using the EF Core CLI tooling.  The easiest way to accomplish this is to use T4 templates.  Fortunately, Trackable Entities has a package for that: TrackableEntities.CodeTemplates.Client.Net45.  You’ll need to add it to a traditional .NET Class Library project in Visual Studio for Windows, so that you can add an ADO.NET Entity Data Model that will use it to generate the client entities.

You can then write client-side code that changes entities, gets only the changed entities, and sends them to your Web API service for applying changes and saving them to the database.  See the sample app’s repository for the complete code.

To run the console app, keep the Web API running and execute dotnet run.  Follow the prompts to retrieve, add, update and delete entities at various places in the object graph.

console-client.png

The cool thing is that all this is happening on a Mac, and there’s nothing to stop you from deploying it to a container service such as Amazon EC2 Container Service, Microsoft Azure or Google Container Engine.  With Trackable Entities and EF Core the future is now.


React to JavaScript object updates with observable-entities-js

I just published my first TypeScript library — observable-entities.  It contains base classes that notify observers when properties are updated and when objects are added or removed from collections.

The code for observable-entities-js can be found here: https://github.com/TrackableEntities/observable-entities-js.

Here is a Node sample app that uses observable-entities to react to entity updates and to objects being added or removed from set and map collections: https://github.com/tonysneed/hello-observable-entities

And here is an Angular sample app that uses observable-entities for reactive binding: https://github.com/TrackableEntities/observable-entities-js-sample.

Getting Started

To get started using observable-entities-js simply add it using NPM as a runtime dependency to your JavaScript application or library.

npm i --save observable-entities

Note: observable-entities uses features of ES2015 that are not compatible with older browsers such as Internet Explorer.

To use observable-entities all you have to do is derive your model classes from ObservableEntity and add a constructor that returns a call to super.proxify.

import { ObservableEntity } from 'observable-entities';

class Product extends ObservableEntity {
  productName: string;
  constructor(productName: string) {
    super();
    this.productName = productName;
    return super.proxify(this);
  }
}

This allows you to create a listener that can subscribe to notifications that take place when any property in the entity is modified.

import { INotifyInfo } from 'observable-entities';
import { Subject } from 'rxjs/Subject';

// Create listener that logs to console when entity is updated
const modifyListener = new Subject<INotifyInfo>();
modifyListener.subscribe(info => {
  console.log(`Entity Update - key: ${info.key} origValue: ${info.origValue} currentValue: ${info.currentValue}`)
});

Then add the listener to the modifyListeners property of the entity.

// Create product and add listener
const product = new Product('Chai');
product.modifyListeners.push(modifyListener);

When a property is set on the entity, the listener will get notified of the property change.

// Set productName property
product.productName = 'Chang';

// Expected output:
// Entity Update - key: productName origValue: Chai currentValue: Chang

If you were to debug this code, you could set a breakpoint on the line of code that logs to the console, and you would hit the breakpoint as soon as you set productName to ‘Chang’.

obs-entities-debug

You can clone the Node demo app I wrote for this post and try debugging it yourself.

Observable Proxies

You may have noticed that modifyListener is of type Subject<INotifyInfo>. Subject is a class that comes from RxJS (Reactive Extensions for JavaScript), a library that combines the observer design pattern with functional programming.  While it’s possible to design a class that can notify listeners of property changes, it would involve a lot of repetitive boilerplate code as in, for example, this version of the Product class.


class Product {

  private _productName: string;
  readonly modifyListeners: Subject<INotifyInfo>[] = [];

  get productName() {
    return this._productName;
  }

  set productName(value: string) {
    // Notify listeners of property updates
    const notifyInfo: INotifyInfo = { key: 'productName', origValue: this._productName, currentValue: value };
    this.modifyListeners.forEach(listener => listener.next(notifyInfo))
    this._productName = value;
  }

  constructor(productName: string) {
    this._productName = productName;
  }
}

While you could encapsulate the notification code in a protected notify method placed in a base class, you would still need to call the method from the property setter on each property, which is similar to how INotifyPropertyChanged is usually implemented in C# to support two-way data binding.

A cleaner solution would be to intercept calls to property setters in an entity so that you could notify listeners of changes in a generic fashion.  This is exactly the kind of problem ES2015 proxies were designed to solve.  The way it works is that you create a handler that has a trap for the kind of operation you want to intercept (set, for example). Then you can do whatever you want when a property is set, such as notify listeners of property changes.

The way observable-entities does this is by providing a protected proxify method in the ObservableEntity base class, which returns a proxy of the entity.  (Keep in mind that proxies are an ES2015 feature that is not supported by downlevel browsers, such as Internet Explorer; therefore, you’ll need to target ES2015 in apps or libraries that use observable-entities.)

protected proxify<TEntity extends object>(item: TEntity): TEntity {
  if (!item) { return item; }
  const modifyListeners = this._modifyListeners;
  const excludedProps = this._excludedProperties;
  const setHandler: ProxyHandler<TEntity> = {
    set: (target, property, value) => {
      const key = property.toString();
      if (!excludedProps.has(key)) {
        const notifyInfo: INotifyInfo = { key: key, origValue: (target as any)[property], currentValue: value };
        modifyListeners.forEach(listener => listener.next(notifyInfo));
      }
      (target as any)[property] = value;
      return true;
    }
  };
  return new Proxy<TEntity>(item, setHandler);
}

If the constructor of the subclass returns a call to super.proxify, then consumers get a proxy whenever they new up the entity, without having to create the proxy manually.  (A static factory method is also provided if you need to create the proxy yourself.)

Observable Sets and Maps

Besides notifications of property updates, observable-entities allows you to receive notifications when entities are added or removed from Set and Map collections. That’s the purpose of ObservableSet and ObservableMap classes, which extend Set and Map by overriding add and delete methods to notify listeners.

You can, for example, create listeners that write to the console when entities are added or removed from an ObservableSet.

// Observe adds and deletes to a Set
const productSet = new ObservableSet(product);

// Create listener that writes to console when entities are added
const addListener = new Subject<INotifyInfo>();
addListener.subscribe(info => {
  console.log(`Set Add - ${(info.currentValue as Product).productName}`)
});
productSet.addListeners.push(addListener);

// Create listener that writes to console when entities are removed
const removeListener = new Subject<INotifyInfo>();
removeListener.subscribe(info => {
  console.log(`Set Remove - ${(info.currentValue as Product).productName}`)
});
productSet.removeListeners.push(removeListener);

Those listeners will then be notified whenever entities are added or deleted from the Set.

// Add entity
const newProduct = new Product('Aniseed Syrup');
productSet.add(newProduct);

// Expected output:
// Set Add - Aniseed Syrup

// Remove entity
productSet.delete(newProduct);

// Expected output:
// Set Remove - Aniseed Syrup

ObservableMap works the same way as ObservableSet, but with key-value pairs.  Note that INotifyInfo also provides the entity key when the listener is notified.

// Observe adds and deletes to a Map
const productMap = new ObservableMap([product.productName, product]);

// Add listener for when entities are added
const addListenerMap = new Subject<INotifyInfo>();
addListenerMap.subscribe(info => {
  console.log(`Map Add - ${info.key} (key): ${(info.currentValue as Product).productName} (value)`)
});
productMap.addListeners.push(addListenerMap);

// Add listener for when entities are removed
const removeListenerMap = new Subject<INotifyInfo>();
removeListenerMap.subscribe(info => {
  console.log(`Map Remove - ${info.key} (key): ${(info.currentValue as Product).productName} (value)`)
});
productMap.removeListeners.push(removeListenerMap);

// Add entity
productMap.add(newProduct.productName, newProduct);

// Expected output:
// Map Add - Aniseed Syrup (key): Aniseed Syrup (value)

// Remove entity
productMap.delete(newProduct.productName);

// Expected output:
// Map Remove - Aniseed Syrup (key): Aniseed Syrup (value)

Angular Data Binding with Observables

One possible application of observable-entities is to use them with Angular’s OnPush change detection strategy so that you can control when change detection cycles take place when binding components to templates.  This could result in faster data binding because you’ll call markForCheck on an injected ChangeDetectorRef so that Angular will perform change detection on that component while skipping others.

Instead of writing to the console, add and modify listeners will call markForCheck.

// Trigger data binding when item is added
this.addListener = new Subject<INotifyInfo>();
this.addListener.subscribe(info => {
  this.cd.markForCheck();
});

// Add listener to products
this.products.addListeners.push(this.addListener);

// Trigger binding when item is updated
this.modifyListener = new Subject<INotifyInfo>();
this.modifyListener.subscribe(info => {
  this.cd.markForCheck();
});

// Add listener to each product
this.products.forEach(product => {
  product.modifyListeners.push(this.modifyListener);
});

To see data binding with observable entities in action you can clone the Angular demo app I wrote for this post.

obs-entities-angular

There are many possible uses for observable entities, sets and maps, and I hope you find my observable-entities-js library useful and interesting.

Enjoy!
