Getting Visual Studio Code Ready for TypeScript

Part 1: Compiling TypeScript to JavaScript

This is the first part in a series of blog posts on Getting Visual Studio Code Ready for TypeScript:

  1. Compiling TypeScript to JavaScript (this post)
  2. Writing Jasmine Tests in TypeScript

Why TypeScript?

In case you’re new to TypeScript, Wikipedia defines TypeScript in the following way (paraphrased):

TypeScript is designed for development of large applications and transcompiles to JavaScript. It is a strict superset of JavaScript (any existing JavaScript programs are also valid TypeScript programs), and it adds optional static typing and class-based object-oriented programming to the JavaScript language.

Coming from a C# background, I was attracted to TypeScript for two reasons: first, it is the brainchild of Anders Hejlsberg, who also invented the C# programming language, so I can have confidence it has been well designed; and second, I like to rely on the compiler to catch errors while I am writing code.  While TypeScript embraces all the features of ECMAScript 2015, such as modules, classes, promises and arrow functions, it adds type annotations that allow code editors to provide syntax checking and intellisense, making it easier to use the good parts of JavaScript while avoiding the bad.
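To make this concrete, here is a small, hypothetical snippet (the Greeter class is invented for illustration, not taken from the sample project) showing the kind of error the compiler catches:

```typescript
// Optional static typing plus an ES2015 class in action.
class Greeter {
    constructor(private greeting: string) { }

    greet(name: string): string {
        return this.greeting + ", " + name + "!";
    }
}

const greeter = new Greeter("Hello");
console.log(greeter.greet("TypeScript")); // Hello, TypeScript!

// The following call would be rejected at compile time, not at runtime:
// greeter.greet(42); // error: argument of type 'number' is not
//                    // assignable to parameter of type 'string'
```

Strip out the type annotations and this is plain, valid JavaScript, which is what makes the superset relationship so convenient.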

You can download a sample project with code for this blog post.  You can also download my Yeoman generator for scaffolding new TypeScript projects for Visual Studio Code.


Why Visual Studio Code?

Once I decided to embark on the adventure of learning TypeScript, the next question was: What development tools should I use?

I’ve spent the better part of my career with Microsoft Visual Studio, and I enjoy all the bells and whistles it provides.  But all those fancy designers come at a cost, both in terms of disk space and RAM, and even installing or updating VS 2015 can take quite a while.  To illustrate, here is a joke I recently told a friend of mine:

I like Visual Studio because I can use it to justify to my company why I need to buy better hardware, so I can run VS and get acceptable performance. That’s how I ended up with a 1 TB SSD and 16 GB of RAM — thank you Visual Studio! 👏

I also own a MacBook Air, mainly because of Apple’s superior hardware, and run a Windows 10 virtual machine so that I can use Visual Studio and Office.  But I thought it would be nice to be able to write TypeScript directly on my Mac without having to spin up a Windows VM, which can drain my laptop’s battery.  So I thought I would give Visual Studio Code a try.

But before I started with VS Code, I decided to go back to Visual Studio and create a simple TypeScript project with support for unit testing with Jasmine, which is a popular JavaScript unit testing framework.  It turns out the experience was relatively painless, but I still had to do a lot of manual setup, which entailed creating a new TypeScript project in Visual Studio, deleting the files that were provided, installing NuGet packages for AspNet.Mvc and JasmineTest, then adding a bare-bones controller and a view which I adapted from the spec runner supplied by Jasmine.

You can download the code for a sample VS 2015 TypeScript project from my Demo.VS2015.TypeScript repository on GitHub.

Visual Studio 2015 still required me to do some work to create a basic TypeScript project with some unit tests, and if I wanted to add other features, such as linting my TypeScript or automatically refreshing the browser when I changed my code, then I would have to use npm or a task runner such as Grunt or Gulp. This helped tip the scales for me in favor of Visual Studio Code.


VS Code is actually positioned as something between a simple code editor, such as Atom, Brackets or Sublime Text, and a full-fledged IDE like Visual Studio or WebStorm.  The main difference is that VS Code lacks a “File, New Project” command for creating a new type of project with all the necessary files.  This means you either have to start from scratch or select a Yeoman generator to scaffold a new project.

I decided to start from scratch, because I like pain. (OK, I’m just kidding.)

The truth is, I couldn’t find an existing generator that met my needs, and I wanted to learn all I could from the experience of getting VS Code ready for TypeScript.  The result was a sample project on GitHub (Demo.VSCode.TypeScript) and a Yeoman generator (tonysneed-vscode-typescript) for scaffolding new TypeScript projects.

Compiling TypeScript to JavaScript

My first goal was to compile TypeScript into JavaScript with sourcemaps for debugging and type definitions for intellisense.  This turned out to be much more challenging than I thought it would be.  I discovered that the gulp-typescript plugin did not handle relative paths very well, so instead I relied on npm (Node Package Manager) to invoke the TypeScript compiler directly, setting the project parameter to the ‘src’ directory in which I placed my tsconfig.json file.  This allowed for specifying a ‘dist’ output directory and preserving the directory structure in ‘src’.  To compile TypeScript using a gulp task, all I had to do was execute the ‘tsc’ script.

// exec comes from Node's built-in 'child_process' module
var exec = require('child_process').exec;

/**
 * Compile TypeScript
 */
gulp.task('typescript-compile', ['vet:typescript', 'clean:generated'], function (done) {
    log('Compiling TypeScript');
    exec('node_modules/typescript/bin/tsc -p src', done);
});

Here is the content of the ‘tsconfig.json’ file. Note that both ‘rootDir’ and ‘outDir’ must be set in order to preserve directory structure in the ‘dist’ folder.

{
    "compilerOptions": {
        "module": "commonjs",
        "target": "es5",
        "sourceMap": true,
        "declaration": true,
        "removeComments": true,
        "noImplicitAny": true,
        "rootDir": ".",
        "outDir": "../dist"
    },
    "exclude": []
}

Debugging TypeScript

I could then enable debugging of TypeScript in Visual Studio Code by adding a ‘launch.json’ file to the ‘.vscode’ directory and including a configuration for debugging the currently selected TypeScript file.

{
    "name": "Debug Current TypeScript File",
    "type": "node",
    "request": "launch",
    // File currently being viewed
    "program": "${file}",
    "stopOnEntry": true,
    "args": [],
    "cwd": ".",
    "sourceMaps": true,
    "outDir": "dist"
}

Then I could simply open ‘greeter.ts’ and press F5 to launch the debugger and break on the first line.


Linting TypeScript

While compiling and debugging TypeScript was a good first step, I also wanted to be able to lint my code using tslint.  So I added a gulp task called ‘vet:typescript’ and configured my ‘typescript-compile’ task to be dependent on it.  The result was that if, for example, I removed a semicolon from my Greeter class and compiled my project from the terminal, I would see a linting error displayed.
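For reference, a minimal tslint.json enforcing the semicolon rule mentioned above might look something like the following — the exact rule set in the sample project may differ, so treat this as a sketch:

```json
{
    "rules": {
        "semicolon": true,
        "quotemark": [true, "single"],
        "no-unused-variable": true
    }
}
```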


Configuring the Build Task

I also wanted to be able to compile TypeScript simply by pressing Cmd+B.  That was easy because VS Code will use a Gulpfile if one is present.  Simply specify ‘gulp’ for the command and ‘typescript-compile’ for the task name in the ‘tasks.json’ file under the ‘.vscode’ directory, then set ‘isBuildCommand’ to true.

{
    "version": "0.1.0",
    "command": "gulp",
    "isShellCommand": true,
    "args": [],
    "tasks": [
        {
            "taskName": "typescript-compile",
            "isBuildCommand": true,
            "showOutput": "always",
            "problemMatcher": "$gulp-tsc"
        }
    ]
}

Adding a Watch Task

Lastly, I thought it would be cool to run a task that watches my TypeScript files for changes and automatically re-compiles them.  So I added yet another gulp task, called ‘typescript-watch’, which first compiles the .ts files, then watches for changes.

/**
 * Watch and compile TypeScript
 */
gulp.task('typescript-watch', ['typescript-compile'], function () {
    // Watch all .ts files under 'src' and re-run the compile task on changes
    return gulp.watch('src/**/*.ts', ['typescript-compile']);
});

I could then execute this task from the command line. Here you can see output shown in the terminal when a semicolon is removed from a .ts file.


It is also possible to execute a gulp task from within VS Code.  Press Cmd+P, type ‘task’ and hit the spacebar to see the available gulp tasks.  You can select a task by typing part of the name, then press Enter to execute the task.


Using a Yeoman Generator

While it’s fun to set up a new TypeScript project with Visual Studio Code from scratch, an easier way is to scaffold a new project using a Yeoman generator, which is the equivalent of executing File, New Project in Visual Studio.  That’s why I built a Yeoman generator called tonysneed-vscode-typescript, which gives you a ready-made TypeScript project with support for unit testing with Jasmine and Karma.  (I’ll explain more about JavaScript testing frameworks in the next part of this series.)


To get started using Yeoman, you’ll need to install Yeoman with the Node Package Manager.

npm install -g yo

Next install the tonysneed-vscode-typescript Yeoman generator.

npm install -g generator-tonysneed-vscode-typescript

To use the generator you should first create the directory where you wish to place your scaffolded TypeScript project.

mkdir MyCoolTypeScriptProject
cd MyCoolTypeScriptProject

Then simply run the Yeoman generator.

yo tonysneed-vscode-typescript

To view optional arguments, you can append --help to the command.  Another option is to skip installation of npm dependencies by supplying an argument of --skip-install, in which case you can install the dependencies later by executing npm install from the terminal.

In response to the prompt for Application Name, you can either press Enter to accept the default name, based on the current directory name, or enter a new application name.


Once the generator has scaffolded your project, you can open it in Visual Studio Code from the terminal.

code .

After opening the project in Visual Studio Code, you will see TypeScript files located in the src directory.  You can compile the TypeScript files into JavaScript simply by pressing Cmd+B, at which point a dist folder should appear containing the transpiled JavaScript files.

For the next post in this series I will explain how you can add unit tests to your TypeScript project, and how you can configure test runners that can be run locally as well as incorporated into your build process for continuous integration.


Using EF6 with ASP.NET MVC Core 1.0 (aka MVC 6)

This week Microsoft announced that it is renaming ASP.NET 5 to ASP.NET Core 1.0.  In general I think this is a very good step.  Incrementing the version number from 4 to 5 for ASP.NET gave the impression that ASP.NET 5 was a continuation of the prior version and that a clean migration path would exist for upgrading apps from ASP.NET 4 to 5.  However, this did not reflect the reality that ASP.NET 5 was a completely different animal, re-written from the ground up, and that it has little to do architecturally with its predecessor.  I would even go so far as to say it has more in common with node.js than with ASP.NET 4.

You can download the code for this post from my Demo.AspNetCore.EF6 repository on GitHub.

Note: Select the migrate-1.0 branch in the repository for code that has been updated to ASP.NET Core 1.0.

Entity Framework 7, however, has even less in common with its predecessor than does MVC, making it difficult for developers to figure out whether and when they might wish to make the move to the new data platform.  In fact, EF Core 1.0 is still a work in progress and won’t reach real maturity until well after initial RTM.  So I’m especially happy that EF 7 has been renamed EF Core 1.0, and also that MVC 6 is now named MVC Core 1.0.

The problem I have with the name ASP.NET Core is that it implies some equivalency with .NET Core.  But as you can see from the diagram below, ASP.NET Core will not only run cross-platform on .NET Core, but it can also target Windows with the full .NET Framework 4.6.


Note: This diagram has been updated to reflect that EF Core 1.0 (aka EF 7) is part of ASP.NET Core 1.0 and can target either .NET 4.6 or .NET Core.

It is extremely important to make this distinction, because there are scenarios in which you would like to take advantage of the capabilities of ASP.NET Core, but you’ll need to run on .NET 4.6 in order to make use of libraries that are not available on .NET Core 1.0.

So why would you want to use ASP.NET Core 1.0 and target .NET 4.6?

As I wrote in my last post, WCF Is Dead and Web API Is Dying – Long Live MVC 6, you should avoid using WCF for greenfield web services, because: 1) it is not friendly to dependency injection, 2) it is overly complicated and difficult to use properly, 3) it was designed primarily for use with SOAP (which has fallen out of favor), and 4) Microsoft appears to not be investing further in WCF.  I also mentioned you should avoid ASP.NET Web API because it has an outdated request pipeline, which does not allow you to apply cross-cutting concerns, such as logging or security, across multiple downstream web frameworks (Web API, MVC, Nancy, etc.).  OWIN and Katana were introduced in order to correct this deficiency, but those should be viewed as temporary remedies prior to the release of ASP.NET Core 1.0, which has the same pipeline model as OWIN.

The other important advantage of ASP.NET Core is that it completely decouples you from WCF, IIS and System.Web.dll.  It was kind of a dirty secret that under the covers ASP.NET Web API used WCF for self-hosting, and you would have to configure the WCF binding if you wanted to implement things like transport security.  ASP.NET Core has a more flexible hosting model that has no dependence on WCF or System.Web.dll (which carries significant per-request overhead), whether you choose to host in IIS on Windows, or cross-platform in Kestrel on Windows, Mac or Linux.

A good example of why you would want to use ASP.NET Core 1.0 to target .NET 4.6 would be the ability to use Entity Framework 6.x.  The first release of EF Core, for example, won’t include table-per-concrete-type (TPC) inheritance or many-to-many relations without extra join entities.  As Rowan Miller, a program manager on the EF team, stated:

We won’t be pushing EF7 as the ‘go-to release’ for all platforms at the time of the initial release to support ASP.NET 5. EF7 will be the default data stack for ASP.NET 5 applications, but we will not recommend it as an alternative to EF6 in other applications until we have more functionality implemented.

This means if you are building greenfield web services, but still require the full capabilities of EF 6.x, you’ll want to use ASP.NET MVC Core 1.0 (aka MVC 6) to create Web API’s which depend on .NET 4.6 (by specifying “dnx451” in the project.json file).  This will allow you to add a dependency for the “EntityFramework” NuGet package version “6.1.3-*”.  The main difference is that you’ll probably put your database connection string in an *.json file rather than a web.config file, or you may specify it as an environment variable or retrieve it from a secrets store.  An appsettings.json file, for example, might contain a connection string for a local database file.
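As a sketch, the relevant portions of such a project.json might look roughly like this (other entries omitted; the package name and version are the ones mentioned above):

```json
{
  "dependencies": {
    "EntityFramework": "6.1.3-*"
  },
  "frameworks": {
    "dnx451": { }
  }
}
```

Note that listing only "dnx451" (and not "dnxcore50") is what pins the app to the full .NET Framework so that EF 6 can load.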

{
  "Data": {
    "SampleDb": {
      "ConnectionString": "Data Source=(localdb)\\MSSQLLocalDB;AttachDbFilename=|DataDirectory|\\SampleDb.mdf;Integrated Security=True; MultipleActiveResultSets=True"
    }
  }
}
You can then register your DbContext-derived class with the dependency injection system of ASP.NET Core.

public void ConfigureServices(IServiceCollection services)
{
    // Add DbContext
    services.AddScoped(provider =>
    {
        var connectionString = Configuration["Data:SampleDb:ConnectionString"];
        return new SampleDbContext(connectionString);
    });

    // Add framework services.
    services.AddMvc();
}
This will allow you to inject a SampleDbContext into the constructor of any controller in your app.

public class ProductsController : Controller
{
    private readonly SampleDbContext _dbContext;

    public ProductsController(SampleDbContext dbContext)
    {
        _dbContext = dbContext;
    }
}

Lastly, you’ll need to provide some information to EF regarding the provider you’re using (for example, SQL Server, Oracle, MySQL, etc).  In a traditional ASP.NET 4.6 app you would have done that in app.config or web.config.  But in ASP.NET Core you’ll want to specify the provider in a class that inherits from DbConfiguration.

public class DbConfig : DbConfiguration
{
    public DbConfig()
    {
        SetProviderServices("System.Data.SqlClient", SqlProviderServices.Instance);
    }
}

Then you can apply a DbConfigurationType attribute to your DbContext-derived class, so that EF can wire it all together.

[DbConfigurationType(typeof(DbConfig))]
public class SampleDbContext : DbContext
{
    public SampleDbContext(string connectionName) :
        base(connectionName) { }

    public DbSet<Product> Products { get; set; }
}

You can download the code for this post from my Demo.AspNetCore.EF6 repository on GitHub.

The primary limitation of targeting .NET 4.6 with EF 6 is that you’ll only be able to deploy your web services on Windows.  The good news, however, is that you’ll be in a great position to migrate from EF 6 to EF Core 1.0 (aka EF 7) as soon as it matures enough to meet your needs.  That’s because the API’s for EF Core are designed to be similar to EF 6.  Then when you do move to EF Core, you’ll be able to use Docker to deploy your web services on Linux VM’s running in a Cloud service such as Amazon EC2, Google Compute Engine, or Microsoft Azure.


WCF Is Dead and Web API Is Dying – Long Live MVC 6!

The time has come to start saying goodbye to Windows Communication Foundation (WCF).  Yes, there are plenty of WCF apps in the wild — and I’ve built a number of them.  But when it comes to selecting a web services stack for greenfield applications, you should no longer use WCF.

Note: A number of commenters have misunderstood the nuanced position I’ve taken in this blog post, so I thought it would help to add a statement at the beginning clarifying my position on WCF. I am not saying that WCF is going away or that you should discontinue using it for non-HTTP communication, such as MSMQ or Named Pipes, or where SOAP is a requirement. I am also not saying that most WCF apps should be re-written; on the contrary, they will need to be maintained to support SOAP-based clients. What I am saying is that, if you plan to build a greenfield HTTP-based web service, you should seriously consider using ASP.NET Core instead of WCF or ASP.NET Web API 2.x, because it is cross-platform, modular and designed for Cloud-based deployment, and it supports modern development methodologies where dependency injection is an essential requirement.

Update: ASP.NET 5 has been renamed to ASP.NET Core 1.0 and MVC 6 is now called MVC Core 1.0.


WCF is dead

There are many reasons why WCF has lost its luster, but the bottom line is that WCF was written for a bygone era and the world has moved on.  There are some use cases where it still might make sense to use WCF, for example, message queuing applications where WCF provides a clean abstraction layer over MSMQ, or inter / intra process applications where using WCF with named pipes is a better choice than .NET Remoting. But for developing modern HTTP-based web services, WCF should be considered deprecated for this purpose.

Didn’t get the memo?  Unfortunately, Microsoft is not in the habit of announcing when they are no longer recommending a specific technology for new application development.  Sometimes there’s a tweet, blog post or press release, as when Bob Muglia famously stated that Microsoft’s Silverlight strategy had “shifted,” but there hasn’t to my knowledge been word from Microsoft that WCF is no longer recommended for building modern HTTP-based web services.

One reason might be that countless web services have been built using WCF since its debut in 2007 with .NET 3.0 on Windows Vista, and other frameworks, such as WCF Data Services, WCF RIA Services, and self-hosted Web API’s, have been built on top of WCF.  Also, if you need to interoperate with existing SOAP-based web services, you’re going to want to use WCF rather than handcrafted SOAP messages.


But it’s fair to say that the vision of a world of interoperable web services based on a widely accepted set of SOAP standards has generally failed to materialize.  The story, however, is not so much about the failure of SOAP to gain wide acceptance, as it is about the success of HTTP as a platform for interconnected services based on the infrastructure of the World Wide Web, which has been codified in an architectural style called REST.  The principal design objective for WCF was to provide a comprehensive platform and toolchain for developing service-oriented applications that are highly configurable and independent of the underlying transport, whereas the goal of REST-ful applications is to leverage the capabilities of HTTP for producing and consuming web services.

But doesn’t WCF support REST?

Yes it does, but aside from the fact that REST support in WCF has always felt tacked on, WCF has problems of its own.  First, WCF in general is way too complicated.  There are too many knobs and dials to turn, and you have to be somewhat of an expert to build WCF services that are secure, performant and scalable.  Many times, for example, I have seen WCF apps configured to use the least performant bindings when it wasn’t necessary.  And setting things up correctly requires advanced knowledge of encoders, multi-threading and concurrency.  Second, WCF was not designed to be friendly to modern development techniques, such as dependency injection, and WCF service types require a custom service behavior to use DI.


One of the first signs that WCF was in trouble was when the Web API team opted for using ASP.NET MVC rather than WCF for services hosted by IIS (although under the covers “self-hosted” Web API’s, such as those hosted by Windows services, were still coupled to WCF).  ASP.NET Web API offers a much simpler approach to developing and consuming REST-ful web services, with programmatic control over all aspects of HTTP, and it was designed to play nice with dependency injection for greater flexibility and testability.

Nevertheless, ASP.NET Web API duplicated many aspects of ASP.NET MVC (for example, routing) and it was still coupled to the underlying host.  For example, Web API apps requiring secure communication over TLS / SSL required a different setup depending on whether the app was hosted in IIS or self-hosted.  To address the coupling issue, Microsoft released an implementation of the OWIN specification called Katana, which offers components for building host-independent web apps and a middleware-based pipeline for inserting cross-cutting concerns regardless of the host.


Web API is dying – Long live MVC 6!

As awesome as ASP.NET Web API and Katana are, they were released mainly as a stopgap measure while an entirely new web platform was being built from the ground up.  That platform is ASP.NET 5 with MVC 6, which merges Web API with MVC into a unified model with shared infrastructure for things like routing and dependency injection.  While OWIN web hosting for Web API retained a dependency on System.Web.dll (along with significant per-request overhead), ASP.NET 5 offers complete liberation from the shackles of legacy ASP.NET.


More importantly, ASP.NET 5 was designed to be lightweight, modular and portable across Windows, Mac OSX and Linux.  It can run on a scaled down version of the .NET Framework, called .NET Core, which is also cross-platform and consists of both a runtime and a set of base class libraries, both of which are bin-deployable so they can be upgraded without affecting other .NET apps on the same machine.  All of this is intended to make ASP.NET 5 cloud-friendly and suitable for a microservices architecture using container services such as Docker.

So when can I start building Web API’s with ASP.NET 5 and MVC6?

The answer is: right now!  When RC1 of ASP.NET 5 was released in Nov 2015, it came with a “go-live” license and permission to use it in a production environment with full support from Microsoft.


Instead of hosting on IIS, which of course only runs on Windows, you’ll want to take advantage of Kestrel, a high-performance cross-platform host that clocks in at over 700,000 requests per second, which is about 5 times faster than NodeJs using the same benchmark parameters.

Shiny new tools

Not only has Microsoft opened up to deploying ASP.NET 5 apps on non-Windows platforms, it has also come out with a new cross-platform code editor called Visual Studio Code, which you can use to develop both ASP.NET 5 and NodeJs apps on Mac OSX, Linux or Windows.

ASP.NET 5 is also released as open source and is hosted on GitHub, where you can clone the repositories, ask questions, submit bug reports, and even contribute to the code base.

If you’re interested in learning more about developing cross-platform web apps for ASP.NET 5, be sure to check out my 4-part blog series on building ASP.NET 5 apps for Mac OSX and Linux and deploying them to the Cloud using Docker containers.


In summary, you should avoid WCF if you want to develop REST-ful web services with libraries and tools that support modern development approaches and can be readily consumed by a variety of clients, including web and mobile applications.  However, you’re going to want to skip right over ASP.NET Web API and go straight to ASP.NET Core, so that you can build cross-platform web services that are entirely host-independent and can achieve economies of scale when deployed to the Cloud.


How Open Source Changed My Life

2015 was a pivotal year for my life as a developer, due in no small measure to the impact of open source software both on how I go about writing code, as well as on how I interact with other developers.  If I had to select one word to describe the reason for this, it would be: collaboration.  It’s not just that open source development demands a greater degree of collaboration, but the acceleration of open source as a movement during the past couple of years has actually redefined software development as a highly collaborative process.


In plain English, this means that software quality depends on how well I can work with others.  However, in the past it hasn’t been very easy to make collaboration a seamless part of software development.  You could do real-time collaboration (also known as pair programming), but few employers were willing to support it, and it was difficult to pull off when team members were scattered across different time zones.

To add insult to injury, centralized version control systems, such as Team Foundation Version Control or Subversion, made common tasks, such as branching and merging, much more arduous.  All of this changed with the widespread adoption of Git, a distributed version control system which makes first-class citizens out of branching and merging.


This has actually changed the way I write code.

To start with, it’s helped to organize my development.  For example, when I want to work on a bug fix or new feature, the first thing I’m going to do is create a branch. This keeps me from working on more than one thing at the same time. But if I do want to multitask, I can stash changes, switch to another branch to do something else, then come back and pick up where I left off.  When I’m working on a branch, the process of committing my changes also forces me to try to better organize my work, by logically grouping my changes into commits.  I can also compare what I’m currently doing to the prior commit to see how refactoring has helped eliminate code smells.

While Git has had an impact on how I personally write code, what has transformed software development into a truly collaborative process has been the rise of code hosting services such as GitHub, Bitbucket or Visual Studio Team Services, which provide tools for implementing a collaboration workflow based on Git, with work isolated into branches, organized into commits and documented with commit messages.


If I am working on my own public Git repository, then the workflow might take place as follows:

  1. Open an issue describing a defect or desired feature.  Here I can add comments, insert code snippets, and reference other issues or specific lines of code.
  2. Create a branch for working on the issue. Here I can write code or change existing code, then commit changes with descriptive messages.
  3. Publish the branch, at which time GitHub will show a “Compare & Pull Request” button, which I can click to create a pull request.  This will allow other developers who have cloned my repository to create a local branch based on the pull request, so they can look at the code I’ve written. Other developers can then comment on the pull request, and we can have a discussion about the issue, even referencing specific lines of code.  If I am trying to reproduce a particular bug, I can simply write a failing test, commit changes, then push those changes to the public branch, which allows others to retrieve and run the failing test.  When I fix the defect so that the failing test now passes, I can commit the fix and push the commit so that other developers can pull the commit to see how the fix was performed.
  4. Once I’m satisfied with the code, I can merge the feature branch into the main branch (probably develop or master), then push those commits to the public repository.  At that point, the pull request can be closed (GitHub will automatically close it), both the public and private feature branches can then be deleted, and the original issue can be closed (GitHub will automatically close issues based on commit messages).
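The branch-and-merge portion of the workflow above can be sketched with plain Git commands. The repository, branch and file names below are invented, and the steps that need a remote (publishing the branch, opening the pull request) are left as comments:

```shell
set -e
repo=$(mktemp -d) && cd "$repo"
git init -q
git config user.email "dev@example.com"
git config user.name "Dev"
echo "console.log('hello');" > app.js
git add app.js && git commit -q -m "Initial commit"

# 2. Create a branch for the issue; commit with a descriptive message
git checkout -q -b fix-greeting
echo "console.log('hello, world');" > app.js
git commit -q -am "Fix greeting output (closes #42)"

# 3. Publish the branch and open a pull request (requires a remote):
# git push -u origin fix-greeting

# 4. Merge the feature branch back into the main branch and delete it
git checkout -q -
git merge -q fix-greeting
git branch -d fix-greeting
```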

If I am working on someone else’s public Git repository, then the workflow will start off differently:

  1. The first thing I’ll want to do is fork their repository, effectively copying it over to my own GitHub account.
  2. I can then clone the forked repo, create a local branch, work on it (for example, write a feature or create a failing test to reproduce an exception), then publish my local branch to my public repo.
  3. Once I’ve published a branch I can create a new pull request, which others can then pull to see what I’ve done, without affecting any other work they may be doing.

GitHub Workflow

What Git and GitHub essentially provide is the ability to share code with others and facilitate discussions in a structured workflow. That’s powerful stuff.

But the ability to leverage these tools depends on how widely they’ve been appropriated by members of the developer community.  And that means developers are going to need to get out of their comfort zone to learn how to use Git and GitHub (or another hosting service).  The good news is that most popular IDE’s and code editors have decent Git integration, which allows you to perform most Git tasks using a graphical interface right from within the IDE.  Other Git clients, such as TortoiseGit and SourceTree, ease many tasks, but there are some Git commands, such as interactive rebase, where you’ll need to pull up a terminal window or command prompt.  Interactive rebase can be tricky at first, but it lets you squash certain commits and consolidate messages for a cleaner version history.

One of the things that can trip up Git newbies is not providing a proper .gitignore file with their repo.  GitHub provides .gitignore file templates for various IDE’s.  Without the correct file, others will have a hard time building your solution without spurious errors.  For example, if you’re using Visual Studio without the correct .gitignore, you may check in the packages, bin and obj folders, which can interfere with restoring NuGet packages when someone else tries to build the solution.
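As a rough sketch, a minimal .gitignore for a Visual Studio solution covers at least the folders mentioned above — the official Visual Studio template on GitHub is far more complete:

```gitignore
# Build output
bin/
obj/
# Restored NuGet packages
packages/
# Per-user settings
*.user
```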

To help you get up to speed on this new collaborative approach to software development, you should check out some of the many free Git tutorials available online.  Then you should bite the bullet and contribute to an open-source project.  Feel intimidated?  Scott Hanselman has created a First Timers Only web site specifically targeted to people who are dipping their toes into the open source waters.

I’ve been privileged to author a couple of popular open source frameworks: Simple MVVM Toolkit and Trackable Entities.  I created the first project prior to embracing Git, but I moved the second project to GitHub early on and have had other developers contribute to the project, which has encouraged me to fully adopt the Git way.  I also had the opportunity to submit some pull requests to Microsoft’s ASP.NET 5 repo on GitHub, where I learned how to rebase my feature branch and resolve conflicts to stay in sync with upstream changes.
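That upstream workflow can be sketched entirely locally.  Here two throwaway repos stand in for the upstream project and your fork (all names and contents are illustrative); the feature branch is replayed on top of the refreshed upstream history with a rebase:

```shell
set -e
work=$(mktemp -d)
cd "$work"

# "upstream": the original project
git -c init.defaultBranch=master init -q upstream
( cd upstream \
  && git config user.email "demo@example.com" && git config user.name "Demo" \
  && echo base > readme.txt && git add readme.txt && git commit -qm "base" )

# "fork": your clone, with a feature branch
git clone -q "$work/upstream" fork
cd fork
git config user.email "demo@example.com" && git config user.name "Demo"
git checkout -qb my-feature
echo feature > feature.txt && git add feature.txt && git commit -qm "my feature"

# Meanwhile, upstream moves ahead
( cd ../upstream && echo more >> readme.txt && git commit -qam "upstream change" )

# Sync: fetch the new upstream commits and replay the feature branch on top
git fetch -q origin
git rebase -q origin/master

git log --oneline    # base, upstream change, my feature -- a linear history
```

If the rebase hits a conflict, you resolve it, `git add` the file, and run `git rebase --continue` — the same steps apply against a real upstream on GitHub.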

Aspnet5 new kid

One of the things that propelled me further into open source has been the way in which Microsoft has jumped on the bandwagon.  Not only have they opened up their code for developers to look at, they have invited others to take part in the process by allowing them to open issues and submit pull requests.  That’s huge.  And it’s a model for how companies large and small stand to benefit from this new way to build software by sharing code and allowing collaboration with the help of tools from Git and GitHub.

I hope your journey into open source is as enriching for you as it has been for me.

Yoda Source

May the Source be with you!

Posted in Technical | Tagged , | Leave a comment

Deploy ASP.NET 5 Apps to Docker on Azure

NOTE: This post is part 4 of a series on developing and deploying cross-platform web apps with ASP.NET 5:

  1. Develop and Deploy ASP.NET 5 Apps on Mac OS X
  2. Develop and Deploy ASP.NET 5 Apps on Linux
  3. Deploy ASP.NET 5 Apps to Docker on Linux
  4. Deploy ASP.NET 5 Apps to Docker on Azure (this post)

Download instructions and code for this post here:

Over the past few years, a phenomenon known as “the Cloud” has appeared.  While the term is rather nebulous and can mean a number of different things, with regard to business applications it generally refers to a deployment model where apps run on servers provided by a third party that rents out computational resources, such as CPU cycles, memory and storage, on a pay-as-you-go basis.  There are different service models for cloud computing, including infrastructure (IaaS), platform (PaaS) and software (SaaS).  In this post I’ll focus on the first option, infrastructure, which allows you to set up Linux virtual machines where you can deploy Docker images with your ASP.NET 5 apps and all their dependencies.  There are a number of players in the IaaS market, including Amazon Elastic Compute Cloud (EC2), Google Compute Engine (GCE) and Microsoft Azure, but I’ll show you how to deploy a Dockerized ASP.NET 5 app to Azure using Docker Hub, GitHub and the Docker Client for Windows.


So let’s start out with GitHub.  The reason we’re starting here is that you can set up Docker Hub to link to a GitHub repository that contains a Dockerfile.  When you push a commit to the GitHub repo, Docker Hub will build a new image for your app.  What makes this a nice approach is that you get automated builds with continuous integration, and it’s easy to pull images from Docker Hub and run them on the Linux VM on Azure.

To demonstrate this I’ve created two repositories on GitHub.  The first one is a simple console app.  It contains three files: project.json, program.cs and a Dockerfile.

FROM microsoft/aspnet:1.0.0-beta4
COPY . /app
WORKDIR /app
RUN ["dnu", "restore"]
ENTRYPOINT ["dnx", ".", "run"]

The second is a simple web app.  It also contains three files: project.json, startup.cs and a Dockerfile.

FROM microsoft/aspnet:1.0.0-beta4
COPY . /app
WORKDIR /app
RUN ["dnu", "restore"]
ENTRYPOINT ["dnx", ".", "kestrel"]

Next, you’re going to need to set up an account on Docker Hub.  It’s free, and you can log in using your GitHub credentials.  Then add an “Automated Build” which links to a GitHub repo.


By far the easiest way to create a new Docker virtual machine in Azure is to use the Visual Studio 2015 Tools for Docker.  Otherwise, you’re going to need to create the certificates manually and upload them to the Azure portal when adding the VM Extension for Docker.  When I attempted this on the Azure portal, installing the Docker extension hung.  But I didn’t experience a problem using the VS Docker Tools to create a Docker VM on Azure.

You’ll have to go through a few steps in Visual Studio before you can create the VM on Azure.  First create a new Web Project, selecting one of the ASP.NET 5 templates.  For our purposes, an empty web project will do just fine.


Then right-click on the generated project and select “Publish” from the context menu.  Under Profile, select Docker Containers, at which point you’ll be presented with a list of existing Azure Docker virtual machines.  Simply click the New button to create a new Linux VM with Docker installed.


Enter a unique DNS name, together with an admin user name and password.  You can check the option to auto-generate Docker certificates, and the wizard will create the required certificates, configure Docker on the VM to use them, and copy the certificate and key files into the “.docker” folder under your user profile, so that you can use the same certificates to create additional virtual machines on Azure or elsewhere.


I highly encourage you to check out the video series on Docker for .NET Developers, where you can learn more about the VS Docker tools, which can also generate a Dockerfile and build scripts for publishing an app to a Docker container on the VM you created on Azure.

What’s cool is that installing the tools will also give you the docker client for Windows, which you can use from a command prompt to build and run Docker images on the remote VM.  You can start with the following command to display basic information, supplying the host name and port number specified when you created the VM.

docker --tls -H tcp:// info


To keep from having to include the remote host address with every command, you can set the DOCKER_HOST environment variable.

set DOCKER_HOST=tcp://

You can now use the docker client on the command line to run packages.  If the package is not already installed locally, it will be pulled from Docker Hub.  The following, for example, will simply print “Hello from Docker” to the console, along with a few other bits of information.

docker --tls run -t hello-world

You can also run Docker images which Docker Hub has built from your linked GitHub repos.  The following command will run an ASP.NET 5 console app that prints “Hello World” to the console.

docker --tls run -t tonysneed/aspnet5-consoleapp

If you want to get the latest version of the image from Docker Hub, simply execute a pull command.

docker --tls pull tonysneed/aspnet5-consoleapp

The following command will run a daemonized web app, mapping port 80 on the VM to port 5004 on the container.

docker --tls run -t -d -p 80:5004 tonysneed/aspnet5-webapp

This command will return a long container id, which you can pass to the logs command to output the container logs.  The following will display “Started” if successful; otherwise it will list runtime exceptions.

docker --tls logs f2de092f14b67590ae4dc08cd3a453a28271de0a8f27e6d80ec356cbc5151d43

To list running processes, execute the ps command, which will list all the running containers.

docker --tls ps


Now you can just open a browser with a URL that contains the fully qualified DNS name for the VM in Azure:


Congratulations!  You have successfully deployed an ASP.NET 5 web app to a Docker container running on a Linux virtual machine in Azure.  More importantly, you have configured Docker Hub to re-build the image whenever a commit is pushed to a linked GitHub repo, and you know how to pull that Docker image into the VM on Azure from the command line using the Docker Client for Windows.  The Visual Studio Tools for Docker make it easy to create the Linux VM on Azure and generate certificates which you can use to create other Docker VM’s and which you can copy to other machines (both Windows and non-Windows) so that you can run Docker commands from there.  All in all, a sweet story indeed.

Posted in Technical | Tagged , , , , , | 1 Comment

Deploy ASP.NET 5 Apps to Docker on Linux

NOTE: This post is part 3 of a series on developing and deploying cross-platform web apps with ASP.NET 5:

  1. Develop and Deploy ASP.NET 5 Apps on Mac OS X
  2. Develop and Deploy ASP.NET 5 Apps on Linux
  3. Deploy ASP.NET 5 Apps to Docker on Linux (this post)
  4. Deploy ASP.NET 5 Apps to Docker on Azure

Download instructions and code for this post here:

Docker is a technology for Linux that enables you to deploy applications inside of containers, which are analogous to virtual machines, but which are much more lightweight because multiple containers on a VM all share the same operating system and kernel.


While lightweight containers are a cool idea, what makes Docker a killer app is that Docker images are built from a simple script, called a Dockerfile, that can be layered on top of other definitions and which lists all the components needed to run your application.  Images based on a Dockerfile can be pushed to an online repository called Docker Hub, where they can be built automatically whenever commits are pushed to a linked repository on GitHub.  And it’s super easy to pull images from Docker Hub and run them on a Linux virtual machine, either locally or in the cloud.

In this post I’ll describe how to deploy an ASP.NET 5 app to a docker container running on a local virtual machine with Linux Ubuntu.  Note that you’re not actually going to install ASP.NET 5 directly on the virtual machine.  Instead, you’ll build a docker image that already comes with ASP.NET 5.  Virtualization software, such as Parallels, allows you to create a number of snapshots, so that you can easily revert the VM to an earlier state.


First you’ll need to install Docker on your Ubuntu VM.  We’ll install it from its own package source.  Just open a Terminal and paste the following commands, one by one.

sudo apt-key adv --keyserver hkp:// --recv-keys 36A1D7869245C8950F966E92D8576A8BA88D21E9
sudo sh -c "echo deb docker main > /etc/apt/sources.list.d/docker.list"
sudo apt-get update
sudo apt-get install lxc-docker


To verify the Docker installation, enter: docker --version. You should see the Docker version and build number.


For convenience you’ll want to set things up so you don’t have to prefix docker commands with sudo for elevated privileges.

sudo groupadd docker
sudo gpasswd -a parallels docker
sudo service docker restart
newgrp docker

To verify docker root privileges enter: docker run --help.  Then just to be sure all is well, you can run the docker hello-world image, which will pull down and run a basic image from Docker Hub.

docker run hello-world

You should see the following output in the Terminal.


You can then “dockerize” the Hello World ASP.NET 5 console app.  Start by creating a ConsoleApp directory and placing two files there from the AspNet Console sample repo on GitHub: project.json and program.cs.  In this same directory, create a file called Dockerfile, with the following content:

FROM microsoft/aspnet:1.0.0-beta4
COPY . /app
WORKDIR /app
RUN ["dnu", "restore"]
ENTRYPOINT ["dnx", ".", "run"]

This will create a new Docker image based on the AspNet 1.0.0-beta4 image on Docker Hub, copying the files in the current folder to an app folder in the container.  When the image is built, the dnu restore command will download the required dependencies.  The entry point is defined as dnx . run, which will invoke the code in Program.Main when the container is run.

Open a Terminal at the ConsoleApp directory and execute the following command (remember to include the trailing dot):

docker build -t consoleapp .

You can verify that the image exists by entering: docker images.


Lastly, run the container.  You should see “Hello World” printed to the Terminal window.

docker run -t consoleapp

Console apps are well-suited for running tasks on a server, but more often you’ll use ASP.NET 5 to run web apps. The AspNet repo on GitHub has a HelloWeb sample, where you can grab two files: project.json and startup.cs.

Create a directory called WebApp containing these two files, as well as an empty directory called wwwroot, then add a file called Dockerfile with the following content:

FROM microsoft/aspnet:1.0.0-beta4
COPY . /app
WORKDIR /app
RUN ["dnu", "restore"]
ENTRYPOINT ["dnx", ".", "kestrel"]

Open a terminal at the WebApp directory and build the docker image.

docker build -t webapp .

Then run the container, indicating with the -d switch that you want a daemonized app running in the background that does not interact with the Terminal.  You’re also mapping the VM’s port 5004 to that of the container.

docker run -t -d -p 5004:5004 webapp

This will execute the container entry point, which is to start the Kestrel web server on the indicated port.  You’ll get back a long container id, which you can pass to the docker logs command to see the log output, including runtime error information.  You can also verify that the container started successfully by entering docker ps, which lists running containers.


Open a browser and enter the address: http://localhost:5004.


And that is how you can deploy ASP.NET 5 apps to docker containers on a Linux virtual machine.  For the next part of this series, I’ll show you how you can push your image to Docker Hub, where it can be pulled into a Linux virtual machine you’ve provisioned on Microsoft Azure.

Posted in Technical | Tagged , , , , | 4 Comments

Develop and Deploy ASP.NET 5 Apps on Linux

NOTE: This post is part 2 of a series on developing and deploying cross-platform web apps with ASP.NET 5:

  1. Develop and Deploy ASP.NET 5 Apps on Mac OS X
  2. Develop and Deploy ASP.NET 5 Apps on Linux (this post)
  3. Deploy ASP.NET 5 Apps to Docker on Linux
  4. Deploy ASP.NET 5 Apps to Docker on Azure

Download instructions and code for this post here:

It is now possible to develop and deploy an ASP.NET application on Linux.  In case you haven’t heard, Microsoft’s CEO, Satya Nadella, has proclaimed, “Microsoft loves Linux.”


The reason is simple: Linux is vital to Microsoft’s cloud service, Azure.  But more importantly, in embracing cross-platform initiatives such as Core CLR, there is an acknowledgement that we no longer live in a world with a single dominant platform, and that developers not only need to write code that runs on multiple platforms, but that they should be comfortable writing apps with an IDE that will run on multiple platforms. For most of us, that IDE will be Visual Studio Code.

In this blog post I’ll show you how to set up a Linux virtual machine to run VS Code, so that you can both develop and deploy ASP.NET 5 applications.  If you’re on a Mac with Parallels, creating a virtual machine running Ubuntu Linux is just a few clicks away.


Once you’ve stood up your shiny new Linux VM, follow these instructions to install VS Code.  I found it convenient to create a VSCode folder under Home and extract the contents there.  Then you can create a link that will launch VS Code from the Terminal, so that you can type code . to start editing files in VS Code at that location.

sudo ln -s /home/parallels/VSCode/Code /usr/local/bin/code

Next you’ll need to follow these instructions to install ASP.NET 5 on Linux.  The first step is to install Mono.

sudo apt-key adv --keyserver --recv-keys 3FA7E0328081BFF6A14DA29AA6A19B38D3D831EF
echo "deb wheezy main" | sudo tee /etc/apt/sources.list.d/mono-xamarin.list
sudo apt-get update
sudo apt-get install mono-complete

Second, you’ll need to install libuv, which is used by Kestrel for hosting ASP.NET 5 apps on Linux.

sudo apt-get install automake libtool curl
curl -sSL | sudo tar zxfv - -C /usr/local/src
cd /usr/local/src/libuv-1.4.2
sudo sh autogen.sh
sudo ./configure
sudo make
sudo make install
sudo rm -rf /usr/local/src/libuv-1.4.2 && cd ~/
sudo ldconfig

Lastly, you’ll need to install the DotNet Version Manager, which is used to select and configure versions of the .NET runtime for hosting ASP.NET 5 apps.

curl -sSL | DNX_BRANCH=dev sh && source ~/.dnx/dnvm/dnvm.s
source /home/parallels/.dnx/dnvm/
dnvm upgrade

Entering dnvm will then bring up the version manager, which allows you to select and configure different versions of the ASP.NET 5 runtime.


To see which versions are installed, enter: dnvm list.


Next, you’ll create a directory and fire up VS Code to create your first ASP.NET 5 app.  We’ll start with a Hello World console app! The quickest approach is to grab two files from the ConsoleApp folder for the AspNet Samples Repo: project.json and program.cs.

Here is project.json for the console app, which lists the dependencies you’ll bring in.

{
    "dependencies": {
    },
    "commands": {
        "ConsoleApp": "ConsoleApp"
    },
    "frameworks": {
        "dnx451": { },
        "dnxcore50": {
            "dependencies": {
                "System.Console": "4.0.0-beta-*"
            }
        }
    }
}

And here is program.cs, which simply prints “Hello World” to the console.

using System;

public class Program
{
    public static void Main()
    {
        Console.WriteLine("Hello World");
    }
}

Editing program.cs in Visual Studio Code looks like this:


To launch the app, open a Terminal at the ConsoleApp directory, then restore the dependencies and execute the program.

dnu restore
dnx . run

You should see “Hello World” printed to the Terminal window.


Console apps are well-suited for running tasks on a server, but more often you’ll use ASP.NET 5 to run web apps. The AspNet repo on GitHub has a HelloWeb sample, where you can grab two files: project.json and startup.cs.

Here is the project.json file for the web app.  Notice the “kestrel” command for starting an HTTP listener on Mac OS X and Linux.

{
    "version": "1.0.0-*",
    "dependencies": {
        "Kestrel": "1.0.0-*",
        "Microsoft.AspNet.Diagnostics": "1.0.0-*"
    },
    "commands": {
        "kestrel": "Microsoft.AspNet.Hosting --server Kestrel --server.urls http://localhost:5004"
    },
    "frameworks": {
        "dnx451": { },
        "dnxcore50": { }
    }
}

Here is the startup.cs file, which configures the pipeline with an endpoint for displaying a welcome page.

using Microsoft.AspNet.Builder;

namespace HelloWeb
{
    public class Startup
    {
        public void Configure(IApplicationBuilder app)
        {
            app.UseWelcomePage();
        }
    }
}

To start the web server, restore dependencies and then enter the kestrel command.  VS Code also allows you to execute commands from the command palette, but this doesn’t yet work on Linux.

dnu restore
dnx . kestrel

Then open a browser and enter the address: http://localhost:5004. You’ll see a message in the Terminal stating the web server has started. To end it, press Enter.


Aside from running the console and web Hello World samples, you’re going to want to develop your own applications based on templates you’d find in Visual Studio on Windows.  To get a similar experience, you can install Yeoman, which scaffolds various kinds of ASP.NET 5 apps, such as MVC 6 or Web API (which is technically now part of MVC 6). To install Yeoman you’ll need the Node Package Manager, which you can download and install using the apt-get command.

sudo apt-get install nodejs-legacy npm

Once you have Node installed, you can use it to install Yeoman, the ASP.NET generator, Grunt and Bower – all in one fell swoop.

sudo npm install -g yo grunt-cli generator-aspnet bower

Having installed Yeoman, you can then navigate to a directory in Terminal where you’d like to create your new app.  Then execute:

yo aspnet

This brings up a number of choices.  For example, you can select Web API Application, which then scaffolds the standard Web API app with a ValuesController. If you run dnu restore and dnx . kestrel, as you did with the sample web app, you can browse to the following URL to get back JSON values: http://localhost:5001/api/values.

And that is how you can both develop and deploy ASP.NET 5 apps on Linux.  For the next part of this series, I’ll show you how to deploy an ASP.NET 5 app to a Docker container on Linux.

Posted in Technical | Tagged , , , | 35 Comments