Getting Visual Studio Code Ready for TypeScript: Part 3

Part 3: Injecting Scripts with Gulp

This is the third part in a series of blog posts on Getting Visual Studio Code Ready for TypeScript:

  1. Compiling TypeScript to JavaScript
  2. Writing Jasmine Tests in TypeScript
  3. Injecting Scripts with Gulp (this post)

Leveraging Gulp

In the first and second posts in this series I showed how you can use Gulp to automate common tasks, such as compiling TypeScript to JavaScript and running Jasmine tests in a browser.  While Gulp is not strictly necessary to perform these tasks, it allows you to chain together multiple tasks, which can give you a smoother workflow.


You can download a sample project with code for this blog post.  You can also download my Yeoman generator for scaffolding new TypeScript projects for Visual Studio Code.

For example, we defined a “watch” task with a dependency on the “compile” task, so that Gulp performs a compilation before watching for changes in any TypeScript files.  When changes are detected, the “compile” task is then re-executed.

gulp.task('compile', function () {
    exec('rm -rf dist && tsc -p src');
});

gulp.task('watch', ['compile'], function () {
    return gulp.watch('./src/**/*.ts', ['compile']);
});

Likewise, we defined a “test” task with a dependency on the “watch” task, so that changes to any TypeScript files will cause browser-sync to reload the browser when it detects that the JavaScript files have been re-generated.

gulp.task('test', ['watch'], function () {

    var options = {
        port: 3000,
        server: './',
        files: ['./dist/**/*.js'],
        // Remaining options elided for clarity
    };

    browserSync(options);
});


Listing Tasks

While VS Code allows you to execute gulp tasks from within the editor, you may sometimes prefer to use Gulp from the Terminal (if for no other reason than to see all the pretty colors).  To make this easier, we can use a plugin that will list all the tasks we’ve defined in our gulpfile.js.  But before we get into that, we can make our lives easier by using a plugin called gulp-load-plugins, which will relieve us from having to define a separate variable for each plugin we wish to use.  All we need to do is define a $ variable, then use it to execute other gulp plugins we’ve installed.

var $ = require('gulp-load-plugins')({ lazy: true });
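Under the hood, gulp-load-plugins derives each property on $ from the package name: it strips the gulp- prefix and camel-cases the rest, so gulp-task-listing becomes $.taskListing.  Here is a small standalone sketch of that naming rule (an illustration, not the plugin’s actual code):

```javascript
// Sketch of the naming convention gulp-load-plugins applies:
// strip the "gulp-" prefix, then camel-case the remaining words.
function pluginKey(packageName) {
    return packageName
        .replace(/^gulp-/, '')                    // 'gulp-task-listing' -> 'task-listing'
        .replace(/-(\w)/g, function (match, ch) { // 'task-listing' -> 'taskListing'
            return ch.toUpperCase();
        });
}

console.log(pluginKey('gulp-task-listing')); // taskListing
console.log(pluginKey('gulp-inject'));       // inject
```

Because the lazy option is set, each plugin is only required the first time you actually touch its property on $.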

To list tasks in gulpfile.js, we can define a “help” task which uses the gulp-task-listing plugin to list all of our tasks.  We’ll follow a convention which uses a colon in the task name to designate it as a sub-task.  We can also define a “default” task which calls the “help” task when a user enters “gulp” in the Terminal with no parameters.

gulp.task('help', $.taskListing.withFilters(/:/));
gulp.task('default', ['help']);
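The /:/ passed to withFilters is just a regular expression tested against each task name to decide which tasks count as sub-tasks.  The convention itself is trivial to verify in isolation (a standalone sketch, not gulp-task-listing’s internals):

```javascript
// The colon convention: any task name containing ':' is a sub-task,
// which gulp-task-listing groups separately from primary tasks.
var subTaskFilter = /:/;

function isSubTask(taskName) {
    return subTaskFilter.test(taskName);
}

console.log(isSubTask('specs:inject')); // true  (sub-task)
console.log(isSubTask('test'));         // false (primary task)
```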

You’ll need to install both Gulp plugins using npm.

npm install --save-dev gulp-load-plugins gulp-task-listing

Then open the Terminal, type “gulp” (no quotes) and press Enter.  You should see a list of tasks displayed.  To execute a task, simply type “gulp” followed by a space and the name of the task.


Injecting Scripts

In my last blog post I described how you can run Jasmine tests in a browser by serving up an HTML file which includes both source and spec JavaScript files.  But this required you to manually insert script tags into SpecRunner.html.  You might have asked yourself whether there’s a way to inject scripts into the spec runner automatically whenever you execute the “test” task. Well it just so happens: there’s a plugin for that!™ It’s appropriately called gulp-inject, and you can add an injectScripts function to gulpfile.js which will inject scripts into SpecRunner.html based on globs for source and spec files.

function injectScripts(src, label) {

    var options = { read: false, addRootSlash: false };
    if (label) { options.name = 'inject:' + label; }
    return $.inject(gulp.src(src), options);
}

Now add a “specs:inject” gulp task which calls injectScripts to insert the source and spec scripts.  Because we only intend to call this task from other tasks, we can classify it as a sub-task by inserting a colon in the task name.

gulp.task('specs:inject', function () {

    var source = ['./dist/**/*.js', '!./dist/**/*.spec.js'];

    var specs = ['./dist/**/*.spec.js'];

    return gulp
        .src('./SpecRunner.html')
        .pipe(injectScripts(source, ''))
        .pipe(injectScripts(specs, 'specs'))
        .pipe(gulp.dest('./'));
});

The gulp-inject plugin will insert the selected scripts at each location, based on a comment corresponding to the specified label.  Simply edit SpecRunner.html to replace the hard-coded script tags with specially formatted comments. After running the “specs:inject” task, you should see the appropriate scripts inserted at these locations.

<!-- inject:js -->
<!-- endinject -->

<!-- inject:specs:js -->
<!-- endinject -->

Injecting Imports

In addition to inserting source and spec scripts, you’ll also want to inject System.import statements into the spec runner so that SystemJS can provide browser support for module loading.  For that you’ll need to install packages for glob, gulp-rename, and gulp-inject-string (path is a built-in Node module and requires no install), then add an injectImports function to gulpfile.js.

var glob = require('glob');
var path = require('path');

function injectImports(src, label) {

    var search = '/// inject:' + label;
    var first = '\n    System.import(\'';
    var last = '\'),';
    var specNames = [];

    src.forEach(function(pattern) {
        glob.sync(pattern)
            .forEach(function(file) {
                var fileName = path.basename(file, path.extname(file));
                var specName = path.join(path.dirname(file), fileName);
                specNames.push(first + specName + last);
            });
    });

    return $.injectString.after(search, specNames.join(''));
}

Then add an “imports:inject” task which calls injectImports to insert system imports into a file called system.imports.js.

gulp.task('imports:inject', function(){

    return gulp
        .src('./util/system.template.js')
        .pipe(injectImports(['./dist/**/*.spec.js'], 'import'))
        .pipe($.rename('system.imports.js'))
        .pipe(gulp.dest('./util'));
});

Modify SpecRunner.html to replace the script that uses System.import with a reference to system.imports.js.

<script src="util/system.imports.js"></script>

When you execute the “imports:inject” gulp task, it will search a file called system.template.js for a triple-slash comment with the text “inject:import”, where it will inject imports for each spec file. The result will be written to system.imports.js.

    /// inject:import

Lastly, you need to update the “test” task in gulpfile.js to add the two sub-tasks for injecting scripts and imports. This will ensure they are executed each time you run your tests.

gulp.task('test', ['specs:inject', 'imports:inject', 'watch'], function () {
    // Serve SpecRunner.html with browser-sync, as before
});

Debugging Gulp Tasks

If you run into problems with any gulp tasks, it would help if you could set breakpoints in gulpfile.js, launch a debugger and step through your code to see what went wrong.  You can do this in VS Code by adding an entry to the “configurations” section of your launch.json file, in which you invoke gulp.js and pass a task name.

    "name": "Debug Gulp Task",
    "type": "node",
    "request": "launch",
    "program": "${workspaceRoot}/node_modules/gulp/bin/gulp.js",
    "stopOnEntry": false,
    "args": [
        // Replace with name of gulp task to run
    ],
    "cwd": "${workspaceRoot}"

If you set a breakpoint in the “imports:inject” task, select “Debug Gulp Task” from the drop down in the Debug view in VS Code and press F5, it will launch the debugger and stop at the breakpoint you set.  You can then press F10 (step over) or F11 (step into), view local variables and add watches.


Learning Gulp

If you would like to learn more about Gulp, I highly recommend John Papa’s Pluralsight course on Gulp, where he explains how to use Gulp to perform various build automation tasks, such as bundling, minification, versioning and integration testing. While the learning curve may appear steep at first, Gulp will make your life easier in the long run by automating repetitive tasks and allowing you to chain them together for a streamlined development workflow.


Getting Visual Studio Code Ready for TypeScript: Part 2

Part 2: Writing Jasmine Tests in TypeScript

This is the second part in a series of blog posts on Getting Visual Studio Code Ready for TypeScript:

  1. Compiling TypeScript to JavaScript
  2. Writing Jasmine Tests in TypeScript (this post)

Jasmine vs Mocha + Chai + Sinon


There are numerous JavaScript testing frameworks, but two of the most popular are Jasmine and Mocha.  I won’t perform a side-by-side comparison here, but the main difference is that Mocha does not come with built-in assertion and mocking libraries, so you need to plug in an assertion library, such as Chai, and a mocking library, such as Sinon.  Jasmine, on the other hand, includes its own API for assertions and mocks.  So if you want to keep things simple with fewer moving parts, and you don’t need extra features offered by libraries such as Chai and Sinon, Jasmine might be a more appealing option.  If, on the other hand, you want more flexibility and control, and you want features offered by dedicated assertion and mocking libraries, you might want to opt for Mocha.  For simplicity I’m going to stick with Jasmine in this blog post, but feel free to use Mocha if that better suits your purpose.

You can download a sample project with code for this blog post.  You can also download my Yeoman generator for scaffolding new TypeScript projects for Visual Studio Code.

Update: Since first publishing this blog post, I added a section on Debugging Jasmine Tests in VS Code, which you can find at the end of the article.

Using Jasmine with TypeScript

Jasmine is a behavior-driven development testing framework, which allows you to define test suites through one or more nested describe functions.  Each describe function accepts a string argument with the name of the test suite, which is usually the name of the class or method you are testing.  A test suite consists of one or more specs, formulated as a series of it functions, in which you specify expected behaviors.

Let’s say you have a Greeter class written in TypeScript, and it has a greet function.

namespace HelloTypeScript {
    export class Greeter {
        constructor(public message: string) {
        }

        greet(): string {
            return "Hello " + this.message;
        }
    }
}
Notice that Greeter is defined within the namespace HelloTypeScript and is qualified with the export keyword.  This removes Greeter from the global scope so that we can avoid potential name collisions.

To use Jasmine we’ll need to install jasmine-core (not jasmine) using npm (Node Package Manager).  Because we’re only using Jasmine at development time, we’ll save it to the package.json file using the --save-dev argument.

npm install --save-dev jasmine-core

Intellisense for Jasmine

To allow Visual Studio Code to provide intellisense for Jasmine, you’ll need to install type definitions using Typings, which replaces the deprecated Tsd tool from Definitely Typed.

npm install -g typings

Use Typings to install type definitions for jasmine.

typings install jasmine --save-dev --ambient

This command will result in the addition of a typings folder in your project, which contains a main.d.ts file with references to installed type definitions.  The --save-dev argument will persist the specified typing as a dev dependency in a typings.json file, so that you can re-install the typings later.  The --ambient argument is required to include Definitely Typed in the lookup.

Now you’re ready to add your first Jasmine test.  By convention you should use the same name as the TypeScript file you’re testing, but with a .spec suffix.  For example, the test for greeter.ts should be called greeter.spec.ts and be placed in the same folder.

/// <reference path="../../typings/main.d.ts" />

describe("Greeter", () => {

    describe("greet", () => {

        it("returns Hello World", () => {

            // Arrange
            let greeter = new HelloTypeScript.Greeter("World");

            // Act
            let result = greeter.greet();

            // Assert
            expect(result).toEqual("Hello World");
        });
    });
});

The triple-slash reference is needed for intellisense to light up.  Without it you’ll see red squigglies, and VS Code will complain that it cannot find the name ‘describe’.

When you press Cmd+B to compile your TypeScript code, you will see a greeter.spec.js file in the dist/greeter directory, where the greeter.js file is also located.  You’ll also see a source map file, which enables debugging of your Jasmine test.  (See the first post in this blog series for information on how to configure VS Code for compiling and debugging TypeScript.)

Running Jasmine Tests

To run your Jasmine tests in a browser, go to the latest release for Jasmine and download the jasmine-standalone zip file.  After extracting the contents of the zip file, copy both the lib folder and SpecRunner.html file to your project folder.  Edit the html file to include both the source and spec files.

<!-- include source files here... -->
<script src="dist/greeter/greeter.js"></script>

<!-- include spec files here... -->
<script src="dist/greeter/greeter.spec.js"></script>

You can then simply open SpecRunner.html in Finder (Mac) or File Explorer (Windows) to see the test results.


If you change Greeter.greet to return “Goodbye” instead of “Hello”, then compile and refresh the browser, you’ll see that the test now fails.


Running Tests Automatically

Having to refresh the browser to see test results can become tedious, so it’s a good idea to serve your tests over HTTP.  To help with this you can use a task runner such as Gulp, which integrates nicely with VS Code, together with an HTTP server such as BrowserSync.


First, you’ll want to install gulp and browser-sync locally.

npm install --save-dev gulp browser-sync

Next, add a gulpfile.js file to the project, in which you’ll define tasks for compiling TypeScript to JavaScript, as well as watching TypeScript files and recompiling them when there are changes.

var gulp = require('gulp');
var exec = require('child_process').exec;
var browserSync = require('browser-sync');

gulp.task('compile', function () {
    exec('rm -rf dist && tsc -p src');
});

gulp.task('watch', ['compile'], function () {
    return gulp.watch('./src/**/*.ts', ['compile']);
});

To run either the compile or watch tasks, we can execute them from a Terminal or Command Prompt.  (You can also run tasks in VS Code by pressing Cmd+P, typing “task ” [no quotes] and entering the task name).

gulp watch


If you change a TypeScript file, the gulp watch task will detect the change and execute the compile task.  You can then add a gulp task which serves both .js and .spec.js files in a browser.

gulp.task('test', ['watch'], function () {

    var options = {
        port: 3000,
        server: './',
        files: ['./dist/**/*.js'],
        logFileChanges: true,
        logLevel: 'info',
        logPrefix: 'spec-runner',
        notify: true,
        reloadDelay: 1000,
        startPath: 'SpecRunner.html'
    };

    browserSync(options);
});


Running Tests in VS Code

It is possible to wire up the gulp test task so that it runs in response to pressing Cmd+T.  To set up VS Code both for compiling TypeScript and running tests, press Cmd+Shift+P, type “config” and select Configure Task Runner. Replace the default content for tasks.json with the following:

    "version": "0.1.0",
    "command": "gulp",
    "isShellCommand": true,
    "args": [],
    "tasks": [
        {
            "taskName": "compile",
            "isBuildCommand": true,
            "showOutput": "silent",
            "problemMatcher": "$gulp-tsc"
        },
        {
            "taskName": "test",
            "isTestCommand": true,
            "showOutput": "always"
        }
    ]

Pressing Cmd+B will compile your TypeScript files, and pressing Cmd+T will serve your Jasmine tests in a browser, automatically refreshing the browser each time any of your TypeScript files changes.

Using Modules

To improve encapsulation TypeScript supports the use of modules, which are executed in their own scope, not in the global scope.  Various constructs, such as variables, functions, interfaces and classes, are not visible outside a module unless they are explicitly exported.  For example, we could define an ItalianGreeter class with an export statement.

export default class ItalianGreeter {
    constructor(public message: string) {
    }

    greet(): string {
        return "Ciao " + this.message;
    }
}

The Jasmine test for ItalianGreeter would then require an import statement.

import ItalianGreeter from "./italiangreeter";

let greeter = new ItalianGreeter("World");

// Remaining code elided for clarity

To use modules in TypeScript you’ll need to specify a module loader in your tsconfig.json file.  For a TypeScript library or node.js app, you would select commonjs.

    "compilerOptions": {
        "module": "commonjs",

        // Remaining code elided for clarity
    }

At this point your TypeScript will compile, but the additional tests will not show up in SpecRunner.html, even after you include scripts for the source and spec files.  The reason is that you need SystemJS, which acts as a polyfill to provide browser support for module loading, a feature of ECMAScript 2015. First add systemjs to your project.

npm install --save-dev systemjs

Then add these two scripts to SpecRunner.html.

<script src="node_modules/systemjs/dist/system.js"></script>
<script>
    System.config({ packages: { 'dist': {defaultExtension: 'js'}}});
</script>

Pressing Cmd+T will now also serve italiangreeter.spec.js, which imports the ItalianGreeter class.


Stopping Tests in VS Code

You can terminate the test task by pressing Cmd+Shift+P and selecting Terminate Running Task.  Because this is something you’ll do often, you might want to add a keyboard shortcut for it.  From the Code menu select Preferences / Keyboard Shortcuts, then add the following binding, which will terminate the running task by pressing Cmd+Shift+X.

    { "key": "shift+cmd+x", "command": "workbench.action.tasks.terminate" }

Debugging Tests in VS Code

While it may be useful to run Jasmine tests in a browser, there are times when you need to launch a debugger and step through your code one line at a time.  Visual Studio Code makes it relatively painless to debug your tests.  First you’ll need to install jasmine-node using npm.

npm install --save-dev jasmine-node

Then add the following entry to the “configurations” section of your launch.json file.

    "name": "Debug Tests",
    "type": "node",
    "request": "launch",
    "program": "${workspaceRoot}/node_modules/jasmine-node/bin/jasmine-node",
    "stopOnEntry": false,
    "args": [
        // Directory containing the compiled spec files
    ],
    "cwd": "${workspaceRoot}",
    "sourceMaps": true,
    "outDir": "${workspaceRoot}/dist"

Press Cmd+Shift+D to view the Debugging pane in VS Code and select “Debug Tests” from the dropdown.  Then set a breakpoint (pressing F9 will do the trick), and press F5 to launch the debugger.  Execution should pause at the breakpoint, allowing you to step through your code.


What’s Next?

In this post I showed how to write Jasmine tests in TypeScript and serve them in a browser by running a Gulp task either from the Terminal or in Visual Studio Code.  This has the advantage of automatically compiling TypeScript files and refreshing the browser whenever a source or spec file has changed.  While this works well at development time, you’ll need to use a test runner such as Karma if you want to execute tests on a continuous integration server when commits are pushed to a remote repository.  I’ll address this issue in my next post.


Getting Visual Studio Code Ready for TypeScript

Part 1: Compiling TypeScript to JavaScript

This is the first part in a series of blog posts on Getting Visual Studio Code Ready for TypeScript:

  1. Compiling TypeScript to JavaScript (this post)
  2. Writing Jasmine Tests in TypeScript

Why TypeScript?

In case you’re new to TypeScript, Wikipedia defines TypeScript in the following way (paraphrased):

TypeScript is designed for development of large applications and transcompiles to JavaScript. It is a strict superset of JavaScript (any existing JavaScript programs are also valid TypeScript programs), and it adds optional static typing and class-based object-oriented programming to the JavaScript language.

Coming from a C# background, I was attracted to TypeScript, first because it is the brainchild of Anders Hejlsberg, who also invented the C# programming language, which gives me confidence that it has been well designed, and second because I like to rely on the compiler to catch errors while I am writing code.  While TypeScript embraces all the features of ECMAScript 2015, such as modules, classes, promises and arrow functions, it adds type annotations that allow code editors to provide syntax checking and intellisense, making it easier to use the good parts of JavaScript while avoiding the bad.

You can download a sample project with code for this blog post.  You can also download my Yeoman generator for scaffolding new TypeScript projects for Visual Studio Code.


Why Visual Studio Code?

Once I decided to embark on the adventure of learning TypeScript, the next question was: What development tools should I use?

I’ve spent the better part of my career with Microsoft Visual Studio, and I enjoy all the bells and whistles it provides.  But all those fancy designers come at a cost, both in terms of disk space and RAM, and even installing or updating VS 2015 can take quite a while.  To illustrate, here is a joke I recently told a friend of mine:

I like Visual Studio because I can use it to justify to my company why I need to buy better hardware, so I can run VS and get acceptable performance. That’s how I ended up with a 1 TB SSD and 16 GB of RAM — thank you Visual Studio! 👏

I also own a MacBook Air, mainly because of Apple’s superior hardware, and run a Windows 10 virtual machine so that I can use Visual Studio and Office.  But I thought it would be nice to be able to write TypeScript directly on my Mac without having to spin up a Windows VM, which can drain my laptop’s battery.  So I thought I would give Visual Studio Code a try.

But before I started with VS Code, I decided to go back to Visual Studio and create a simple TypeScript project with support for unit testing with Jasmine, which is a popular JavaScript unit testing framework.  It turns out the experience was relatively painless, but I still had to do a lot of manual setup, which entailed creating a new TypeScript project in Visual Studio, deleting the files that were provided, installing NuGet packages for AspNet.Mvc and JasmineTest, then adding a bare-bones controller and a view which I adapted from the spec runner supplied by Jasmine.

You can download the code for a sample VS 2015 TypeScript project from my Demo.VS2015.TypeScript repository on GitHub.

Visual Studio 2015 still required me to do some work to create a basic TypeScript project with some unit tests, and if I wanted to add other features, such as linting my TypeScript or automatically refreshing the browser when I changed my code, then I would have to use npm or a task runner such as Grunt or Gulp. This helped tip the scales for me in favor of Visual Studio Code.


VS Code is actually positioned as something between a simple code editor, such as Atom, Brackets or Sublime Text, and a full-fledged IDE like Visual Studio or WebStorm.  The main difference is that VS Code lacks a “File, New Project” command for creating a new type of project with all the necessary files. This means you either have to start from scratch or select a Yeoman generator to scaffold a new project.

I decided to start from scratch, because I like pain. (OK, I’m just kidding.)

The truth is, I couldn’t find an existing generator that met my needs, and I wanted to learn all I could from the experience of getting VS Code ready for TypeScript.  The result was a sample project on GitHub (Demo.VSCode.TypeScript) and a Yeoman generator (tonysneed-vscode-typescript) for scaffolding new TypeScript projects.

Compiling TypeScript to JavaScript

My first goal was to compile TypeScript into JavaScript with sourcemaps for debugging and type definitions for intellisense.  This turned out to be much more challenging than I thought it would be.  I discovered that the gulp-typescript plugin did not handle relative paths very well, so instead I relied on npm (Node Package Manager) to invoke the TypeScript compiler directly, setting the project parameter to the ‘src’ directory in which I placed my tsconfig.json file.  This allowed for specifying a ‘dist’ output directory and preserving the directory structure in ‘src’.  To compile TypeScript using a gulp task, all I had to do was execute the ‘tsc’ script.

/**
 * Compile TypeScript
 */
gulp.task('typescript-compile', ['vet:typescript', 'clean:generated'], function () {

    log('Compiling TypeScript');
    exec('node_modules/typescript/bin/tsc -p src');
});

Here is the content of the ‘tsconfig.json’ file. Note that both ‘rootDir’ and ‘outDir’ must be set in order to preserve directory structure in the ‘dist’ folder.

    "compilerOptions": {
        "module": "commonjs",
        "target": "es5",
        "sourceMap": true,
        "declaration": true,
        "removeComments": true,
        "noImplicitAny": true,
        "rootDir": ".",
        "outDir": "../dist"
    },
    "exclude": [
        // Remaining code elided for clarity
    ]

Debugging TypeScript

I could then enable debugging of TypeScript in Visual Studio Code by adding a ‘launch.json’ file to the ‘.vscode’ directory and including a configuration for debugging the currently selected TypeScript file.

    "name": "Debug Current TypeScript File",
    "type": "node",
    "request": "launch",
    // File currently being viewed
    "program": "${file}",
    "stopOnEntry": true,
    "args": [],
    "cwd": ".",
    "sourceMaps": true,
    "outDir": "dist"

Then I could simply open ‘greeter.ts’ and press F5 to launch the debugger and break on the first line.


Linting TypeScript

While compiling and debugging TypeScript was a good first step, I also wanted to be able to lint my code using tslint.  So I added a gulp task called ‘vet:typescript’ and configured my ‘typescript-compile’ task to be dependent on it.  The result was that if, for example, I removed a semicolon from my Greeter class and compiled the project from the terminal, I would see a linting error displayed.


Configuring the Build Task

I also wanted to be able to compile TypeScript simply by pressing Cmd+B.  That was easy because VS Code will use a Gulpfile if one is present.  Simply specify ‘gulp’ for the command and ‘typescript-compile’ for the task name, then set ‘isBuildCommand’ to true.

    "version": "0.1.0",
    "command": "gulp",
    "isShellCommand": true,
    "args": [],
    "tasks": [
        {
            "taskName": "typescript-compile",
            "isBuildCommand": true,
            "showOutput": "always",
            "problemMatcher": "$gulp-tsc"
        }
    ]

Adding a Watch Task

Lastly, I thought it would be cool to run a task that watches my TypeScript files for changes and automatically re-compiles them.  So I added yet another gulp task, called ‘typescript-watch’, which first compiles the .ts files, then watches for changes.

/**
 * Watch and compile TypeScript
 */
gulp.task('typescript-watch', ['typescript-compile'], function () {

    return gulp.watch('./src/**/*.ts', ['typescript-compile']);
});

I could then execute this task from the command line. Here you can see output shown in the terminal when a semicolon is removed from a .ts file.


It is also possible to execute a gulp task from within VS Code.  Press Cmd+P, type ‘task’ and hit the spacebar to see the available gulp tasks.  You can select a task by typing part of the name, then press Enter to execute the task.


Using a Yeoman Generator

While it’s fun to set up a new TypeScript project with Visual Studio Code from scratch, an easier way is to scaffold a new project using a Yeoman generator, which is the equivalent of executing File, New Project in Visual Studio.  That’s why I built a Yeoman generator called tonysneed-vscode-typescript, which gives you a ready-made TypeScript project with support for unit testing with Jasmine and Karma.  (I’ll explain more about JavaScript testing frameworks in the next part of this series.)


To get started using Yeoman, you’ll need to install Yeoman with the Node Package Manager.

npm install -g yo

Next install the tonysneed-vscode-typescript Yeoman generator.

npm install -g generator-tonysneed-vscode-typescript

To use the generator you should first create the directory where you wish to place your scaffolded TypeScript project.

mkdir MyCoolTypeScriptProject
cd MyCoolTypeScriptProject

Then simply run the Yeoman generator.

yo tonysneed-vscode-typescript

To view optional arguments, you can append --help to the command.  Another option is to skip installation of npm dependencies by supplying an argument of --skip-install, in which case you can install the dependencies later by executing npm install from the terminal.

In response to the prompt for Application Name, you can either press Enter to accept the default name, based on the current directory name, or enter a new application name.


Once the generator has scaffolded your project, you can open it in Visual Studio Code from the terminal.

code .

After opening the project in Visual Studio Code, you will see TypeScript files located in the src directory.  You can compile the TypeScript files into JavaScript simply by pressing Cmd+B, at which point a dist folder should appear containing the transpiled JavaScript files.

For the next post in this series I will explain how you can add unit tests to your TypeScript project, and how you can configure test runners that can be run locally as well as incorporated into your build process for continuous integration.


Using EF6 with ASP.NET MVC Core 1.0 (aka MVC 6)

This week Microsoft announced that it is renaming ASP.NET 5 to ASP.NET Core 1.0.  In general I think this is a very good step.  Incrementing the version number from 4 to 5 for ASP.NET gave the impression that ASP.NET 5 was a continuation of the prior version and that a clean migration path would exist for upgrading apps from ASP.NET 4 to 5.  However, this did not reflect the reality that ASP.NET 5 was a completely different animal, re-written from the ground up, and that it has little to do architecturally with its predecessor.  I would even go so far as to say it has more in common with node.js than with ASP.NET 4.

You can download the code for this post, which has been updated for ASP.NET Core 2.0, from my repository on GitHub.

Entity Framework 7, however, has even less in common with its predecessor than does MVC, making it difficult for developers to figure out whether and when they might wish to make the move to the new data platform.  In fact, EF Core 1.0 is still a work in progress and won’t reach real maturity until well after initial RTM.  So I’m especially happy that EF 7 has been renamed EF Core 1.0, and also that MVC 6 is now named MVC Core 1.0.

The problem I have with the name ASP.NET Core is that it implies some equivalency with .NET Core.  But as you see from the diagram below, ASP.NET Core will not only run cross-platform on .NET Core, but you can also target Windows with .NET Framework 4.6.


Note: This diagram has been updated to reflect that EF Core 1.0 (aka EF 7) is part of ASP.NET Core 1.0 and can target either .NET 4.6 or .NET Core.

It is extremely important to make this distinction, because there are scenarios in which you would like to take advantage of the capabilities of ASP.NET Core, but you’ll need to run on .NET 4.6 in order to make use of libraries that are not available on .NET Core 1.0.

So why would you want to use ASP.NET Core 1.0 and target .NET 4.6?

As I wrote in my last post, WCF Is Dead and Web API Is Dying – Long Live MVC 6, you should avoid using WCF for greenfield web services, because: 1) it is not friendly to dependency injection, 2) it is overly complicated and difficult to use properly, 3) it was designed primarily for use with SOAP (which has fallen out of favor), and 4) Microsoft appears to not be investing further in WCF.  I also mentioned you should avoid ASP.NET Web API because it has an outdated request pipeline, which does not allow you to apply cross-cutting concerns, such as logging or security, across multiple downstream web frameworks (Web API, MVC, Nancy, etc.).  OWIN and Katana were introduced in order to correct this deficiency, but those should be viewed as temporary remedies prior to the release of ASP.NET Core 1.0, which has the same pipeline model as OWIN.

The other important advantage of ASP.NET Core is that it completely decouples you from WCF, IIS and System.Web.dll.  It was kind of a dirty secret that under the covers ASP.NET Web API used WCF for self-hosting, and you would have to configure the WCF binding if you wanted to implement things like transport security.  ASP.NET Core has a more flexible hosting model with no dependence on WCF or System.Web.dll (which carries significant per-request overhead), whether you choose to host in IIS on Windows, or cross-platform in Kestrel on Windows, Mac or Linux.

A good example of why you would want to use ASP.NET Core 1.0 to target .NET 4.6 is the ability to use Entity Framework 6.x.  The first release of EF Core, for example, won’t include table-per-concrete-type (TPC) inheritance or many-to-many relations without an extra join entity.  As Rowan Miller, a program manager on the EF team, stated:

We won’t be pushing EF7 as the ‘go-to release’ for all platforms at the time of the initial release to support ASP.NET 5. EF7 will be the default data stack for ASP.NET 5 applications, but we will not recommend it as an alternative to EF6 in other applications until we have more functionality implemented.
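To make that gap concrete, here is the sort of model EF 6.x maps out of the box but the first EF Core release could not: a many-to-many relation with no explicit join entity (class names are illustrative):

```csharp
public class Product
{
    public int Id { get; set; }
    public string Name { get; set; }
    public ICollection<Category> Categories { get; set; }
}

public class Category
{
    public int Id { get; set; }
    public string Name { get; set; }
    public ICollection<Product> Products { get; set; }
}
```

EF 6 infers the hidden join table from the two collection properties; in EF Core 1.0 you would need to model the join table as a third entity.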

This means if you are building greenfield web services but still require the full capabilities of EF 6.x, you’ll want to use ASP.NET MVC Core 1.0 (aka MVC 6) to create Web API’s which depend on .NET 4.6 (by specifying “dnx451” in the project.json file).  This will allow you to add a dependency on the “EntityFramework” NuGet package version “6.1.3-*”.  The main difference is that you’ll probably put your database connection string in a *.json file rather than a web.config file, or you may specify it as an environment variable or retrieve it from a secrets store.  An appsettings.json file, for example, might contain a connection string for a local database file.

{
  "Data": {
    "SampleDb": {
      "ConnectionString": "Data Source=(localdb)\\MSSQLLocalDB;AttachDbFilename=|DataDirectory|\\SampleDb.mdf;Integrated Security=True; MultipleActiveResultSets=True"
    }
  }
}
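For reference, the project.json for such a service might declare the .NET 4.6 target and the EF 6 dependency along these lines (a sketch; the exact schema shifted between beta and RC releases):

```json
{
  "dependencies": {
    "EntityFramework": "6.1.3-*"
  },
  "frameworks": {
    "dnx451": { }
  }
}
```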

You can then register your DbContext-derived class with the dependency injection system of ASP.NET Core.

public void ConfigureServices(IServiceCollection services)
{
    // Add DbContext
    services.AddScoped(provider =>
    {
        var connectionString = Configuration["Data:SampleDb:ConnectionString"];
        return new SampleDbContext(connectionString);
    });

    // Add framework services.
}

This will allow you to inject a SampleDbContext into the constructor of any controller in your app.

public class ProductsController : Controller
{
    private readonly SampleDbContext _dbContext;

    public ProductsController(SampleDbContext dbContext)
    {
        _dbContext = dbContext;
    }
}

Lastly, you’ll need to provide some information to EF regarding the provider you’re using (for example, SQL Server, Oracle, MySQL, etc).  In a traditional ASP.NET 4.6 app you would have done that in app.config or web.config.  But in ASP.NET Core you’ll want to specify the provider in a class that inherits from DbConfiguration.

public class DbConfig : DbConfiguration
{
    public DbConfig()
    {
        SetProviderServices("System.Data.SqlClient", SqlProviderServices.Instance);
    }
}

Then you can apply a DbConfigurationType attribute to your DbContext-derived class, so that EF can wire it all together.

[DbConfigurationType(typeof(DbConfig))]
public class SampleDbContext : DbContext
{
    public SampleDbContext(string connectionName) :
        base(connectionName) { }

    public DbSet<Product> Products { get; set; }
}

You can download the code for this post from my Demo.AspNetCore.EF6 repository on GitHub.

The primary limitation of targeting .NET 4.6 with EF 6 is that you’ll only be able to deploy your web services on Windows.  The good news, however, is that you’ll be in a great position to migrate from EF 6 to EF Core 1.0 (aka EF 7) as soon as it matures enough to meet your needs.  That’s because the API’s for EF Core are designed to be similar to EF 6.  Then when you do move to EF Core, you’ll be able to use Docker to deploy your web services on Linux VM’s running in a Cloud service such as Amazon EC2, Google Compute Engine, or Microsoft Azure.

Posted in Technical | 72 Comments

WCF Is Dead and Web API Is Dying – Long Live MVC 6!

The time has come to start saying goodbye to Windows Communication Foundation (WCF).  Yes, there are plenty of WCF apps in the wild — and I’ve built a number of them.  But when it comes to selecting a web services stack for greenfield applications, you should no longer use WCF.

Note: A number of commenters have misunderstood the nuanced position I’ve taken in this blog post, so I thought it would help to add a statement at the beginning clarifying my position on WCF. I am not saying that WCF is going away or that you should discontinue using it for non-HTTP communication, such as MSMQ or Named Pipes, or where SOAP is a requirement. I am also not saying that most WCF apps should be re-written; on the contrary, they will need to be maintained to support SOAP-based clients. What I am saying is that, if you plan to build a greenfield HTTP-based web service, you should seriously consider using ASP.NET Core instead of WCF or ASP.NET Web API 2.x, because it is cross-platform, modular and designed for Cloud-based deployment, and it supports modern development methodologies where dependency injection is an essential requirement.

Update: ASP.NET 5 has been renamed to ASP.NET Core 1.0 and MVC 6 is now called MVC Core 1.0.


WCF is dead

There are many reasons why WCF has lost its luster, but the bottom line is that WCF was written for a bygone era and the world has moved on.  There are some use cases where it still might make sense to use WCF, for example, message queuing applications where WCF provides a clean abstraction layer over MSMQ, or inter- or intra-process applications where WCF with named pipes is a better choice than .NET Remoting.  But for developing modern HTTP-based web services, WCF should be considered deprecated.

Didn’t get the memo?  Unfortunately, Microsoft is not in the habit of announcing when they are no longer recommending a specific technology for new application development.  Sometimes there’s a tweet, blog post or press release, as when Bob Muglia famously stated that Microsoft’s Silverlight strategy had “shifted,” but there hasn’t to my knowledge been word from Microsoft that WCF is no longer recommended for building modern HTTP-based web services.

One reason might be that countless web services have been built using WCF since its debut in 2006 with .NET 3.0, and other frameworks, such as WCF Data Services, WCF RIA Services, and self-hosted Web API’s, have been built on top of WCF.  Also, if you need to interoperate with existing SOAP-based web services, you’re going to want to use WCF rather than handcrafted SOAP messages.


But it’s fair to say that the vision of a world of interoperable web services based on a widely accepted set of SOAP standards has generally failed to materialize.  The story, however, is not so much about the failure of SOAP to gain wide acceptance as it is about the success of HTTP as a platform for interconnected services based on the infrastructure of the World Wide Web, codified in an architectural style called REST.  The principal design objective of WCF was to provide a comprehensive platform and toolchain for developing service-oriented applications that are highly configurable and independent of the underlying transport, whereas the goal of REST-ful applications is to leverage the capabilities of HTTP for producing and consuming web services.

But doesn’t WCF support REST?

Yes it does, but aside from the fact that REST support in WCF has always felt tacked on, WCF has problems of its own.  First, WCF in general is way too complicated.  There are too many knobs and dials to turn, and you have to be somewhat of an expert to build WCF services that are secure, performant and scalable.  Many times, for example, I have seen WCF apps configured to use the least performant bindings when it wasn’t necessary.  And setting things up correctly requires advanced knowledge of encoders, multi-threading and concurrency.  Second, WCF was not designed to be friendly to modern development techniques, such as dependency injection, and WCF service types require a custom service behavior to use DI.


One of the first signs that WCF was in trouble was when the Web API team opted for using ASP.NET MVC rather than WCF for services hosted by IIS (although under the covers “self-hosted” Web API’s (for example, those hosted by Windows services) were still coupled to WCF).  ASP.NET Web API offers a much simpler approach to developing and consuming REST-ful web services, with programmatic control over all aspects of HTTP, and it was designed to play nice with dependency injection for greater flexibility and testability.

Nevertheless, ASP.NET Web API duplicated many aspects of ASP.NET MVC (for example, routing) and it was still coupled to the underlying host.  For example, Web API apps requiring secure communication over TLS / SSL required a different setup depending on whether the app was hosted in IIS or self-hosted.  To address the coupling issue, Microsoft released an implementation of the OWIN specification called Katana, which offers components for building host-independent web apps and a middleware-based pipeline for inserting cross-cutting concerns regardless of the host.
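For illustration, a Katana-style OWIN middleware is just a class whose Invoke method receives the environment dictionary and a reference to the next component.  Here is a hedged sketch of a timing middleware (names are illustrative, not code from the frameworks discussed):

```csharp
using System;
using System.Collections.Generic;
using System.Diagnostics;
using System.Threading.Tasks;

using AppFunc = System.Func<System.Collections.Generic.IDictionary<string, object>, System.Threading.Tasks.Task>;

public class TimingMiddleware
{
    private readonly AppFunc _next;

    public TimingMiddleware(AppFunc next)
    {
        _next = next;
    }

    public async Task Invoke(IDictionary<string, object> environment)
    {
        var stopwatch = Stopwatch.StartNew();
        await _next(environment);   // call the next component in the pipeline
        Console.WriteLine("{0} took {1} ms",
            environment["owin.RequestPath"], stopwatch.ElapsedMilliseconds);
    }
}
```

Registered with something like app.Use&lt;TimingMiddleware&gt;(), it wraps every request regardless of whether the app is hosted in IIS or self-hosted, which is precisely the host independence Katana was built to provide.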


Web API is dying – Long live MVC 6!

As awesome as ASP.NET Web API and Katana are, they were released mainly as a stopgap measure while an entirely new web platform was being built from the ground up.  That platform is ASP.NET 5 with MVC 6, which merges Web API with MVC into a unified model with shared infrastructure for things like routing and dependency injection.  While OWIN web hosting for Web API retained a dependency on System.Web.dll (along with significant per-request overhead), ASP.NET 5 offers complete liberation from the shackles of legacy ASP.NET.
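The unification shows up in code: in MVC 6 there is no separate ApiController, so a single controller type can serve both views and HTTP API responses.  A minimal sketch (route and names are illustrative):

```csharp
public class SpeakersController : Controller
{
    // An HTML endpoint and a JSON endpoint, side by side in one controller
    public IActionResult Index()
    {
        return View();
    }

    [HttpGet("api/speakers")]
    public IEnumerable<string> Get()
    {
        return new[] { "Speaker One", "Speaker Two" };
    }
}
```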


More importantly, ASP.NET 5 was designed to be lightweight, modular and portable across Windows, Mac OSX and Linux.  It can run on a scaled down version of the .NET Framework, called .NET Core, which is also cross-platform and consists of both a runtime and a set of base class libraries, both of which are bin-deployable so they can be upgraded without affecting other .NET apps on the same machine.  All of this is intended to make ASP.NET 5 cloud-friendly and suitable for a microservices architecture using container services such as Docker.

So when can I start building Web API’s with ASP.NET 5 and MVC 6?

The answer is: right now!  When RC1 of ASP.NET 5 was released in Nov 2015, it came with a “go-live” license and permission to use it in a production environment with full support from Microsoft.


Instead of hosting on IIS, which of course only runs on Windows, you’ll want to take advantage of Kestrel, a high-performance cross-platform host that clocks in at over 700,000 requests per second, which is about 5 times faster than NodeJs using the same benchmark parameters.

Shiny new tools

Not only has Microsoft opened up to deploying ASP.NET 5 apps on non-Windows platforms, it has also come out with a new cross-platform code editor called Visual Studio Code, which you can use to develop both ASP.NET 5 and NodeJs apps on Mac OSX, Linux or Windows.

ASP.NET 5 has also been released as open source and is hosted on GitHub, where you can clone the repositories, ask questions, submit bug reports, and even contribute to the code base.

If you’re interested in learning more about developing cross-platform web apps for ASP.NET 5, be sure to check out my 4-part blog series on building ASP.NET 5 apps for Mac OSX and Linux and deploying them to the Cloud using Docker containers.


In summary, you should avoid WCF if you want to develop REST-ful web services with libraries and tools that support modern development approaches and can be readily consumed by a variety of clients, including web and mobile applications.  However, you’re going to want to skip right over ASP.NET Web API and go straight to ASP.NET Core, so that you can build cross-platform web services that are entirely host-independent and can achieve economies of scale when deployed to the Cloud.

Posted in Technical | 126 Comments

How Open Source Changed My Life

2015 was a pivotal year for my life as a developer, due in no small measure to the impact of open source software both on how I go about writing code, as well as on how I interact with other developers.  If I had to select one word to describe the reason for this, it would be: collaboration.  It’s not just that open source development demands a greater degree of collaboration, but the acceleration of open source as a movement during the past couple of years has actually redefined software development as a highly collaborative process.


In plain English, this means that software quality depends on how well I can work with others.  However, in the past it hasn’t been very easy to make collaboration a seamless part of software development.  You could do real-time collaboration (also known as pair programming), but few employers were willing to support it, and it was difficult to pull off when team members were scattered across different time zones.

To add insult to injury, centralized version control systems, such as Team Foundation Version Control or Subversion, made common tasks, such as branching and merging, much more arduous.  All of this changed with the widespread adoption of Git, a distributed version control system which makes first-class citizens out of branching and merging.


This has actually changed the way I write code.

To start with, it’s helped to organize my development.  For example, when I want to work on a bug fix or new feature, the first thing I’m going to do is create a branch. This keeps me from working on more than one thing at the same time. But if I do want to multitask, I can stash changes, switch to another branch to do something else, then come back and pick up where I left off.  When I’m working on a branch, the process of committing my changes also forces me to try to better organize my work, by logically grouping my changes into commits.  I can also compare what I’m currently doing to the prior commit to see how refactoring has helped eliminate code smells.
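That flow can be sketched as shell commands (file and branch names are illustrative; the snippet creates its own scratch repository so it can be run anywhere):

```shell
# a sketch of the branch-and-stash workflow described above
cd "$(mktemp -d)"
git init -q && git config user.email you@example.com && git config user.name you
echo "v1" > app.cs && git add . && git commit -qm "initial commit"

git checkout -qb fix/null-ref     # create a branch for the bug fix
echo "v2" > app.cs                # work in progress...
git stash -q                      # park the unfinished change
git checkout -q -                 # switch away to multitask on another branch
git checkout -q fix/null-ref      # come back...
git stash pop -q                  # ...and pick up where you left off
git commit -qam "Guard against null product id"
```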

While Git has had an impact on how I personally write code, what has transformed software development into a truly collaborative process has been the rise of code hosting services such as GitHub, Bitbucket or Visual Studio Team Services, which provide tools for implementing a collaboration workflow based on Git, with work isolated into branches, organized into commits and documented with commit messages.


If I am working on my own public Git repository, then the workflow might take place as follows:

  1. Open an issue describing a defect or desired feature.  Here I can add comments, insert code snippets, and reference other issues or specific lines of code.
  2. Create a branch for working on the issue. Here I can write code or change existing code, then commit changes with descriptive messages.
  3. Publish the branch, at which time GitHub will show a “Compare & Pull Request” button, which I can click to create a pull request.  This will allow other developers who have cloned my repository to create a local branch based on the pull request, so they can look at the code I’ve written. Other developers can then comment on the pull request, and we can have a discussion about the issue, even referencing specific lines of code.  If I am trying to reproduce a particular bug, I can simply write a failing test, commit changes, then push those changes to the public branch, which allows others to retrieve and run the failing test.  When I fix the defect so that the failing test passes, I can commit the fix and push the commit so that other developers can pull it to see how the fix was made.
  4. Once I’m satisfied with the code, I can merge the feature branch into the main branch (probably develop or master), then push those commits to the public repository.  At that point, the pull request can be closed (GitHub will automatically close it), both the public and private feature branches can then be deleted, and the original issue can be closed (GitHub will automatically close issues based on commit messages).
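The publish step (3), together with the issue-closing commit message from step (4), can be sketched using a local bare repository as a stand-in for the public GitHub repo (names are illustrative):

```shell
# publishing a feature branch, with a local bare repo standing in for GitHub
cd "$(mktemp -d)"
git init -q --bare upstream.git                      # stand-in for the public repo
git clone -q upstream.git work 2>/dev/null && cd work
git config user.email you@example.com && git config user.name you

git checkout -qb feature/issue-42                    # a branch for the issue
echo "patch" > fix.cs && git add .
git commit -qm "Fix empty-input crash, closes #42"   # closing keyword in the message
git push -qu origin feature/issue-42                 # publish the branch for the pull request
```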

If I am working on someone else’s public Git repository, then the workflow will start off differently:

  1. The first thing I’ll want to do is fork their repository, effectively copying it over to my own GitHub account.
  2. I can then clone the forked repo, create a local branch, work on it (for example, write a feature or create a failing test to reproduce an exception), then publish my local branch to my public repo.
  3. Once I’ve published a branch I can create a new pull request, which others can then pull to see what I’ve done, without affecting any other work they may be doing.

GitHub Workflow

What Git and GitHub essentially provide is the ability to share code with others and facilitate discussions in a structured workflow. That’s powerful stuff.

But the ability to leverage these tools depends on how widely they’ve been adopted by members of the developer community.  And that means developers are going to need to get out of their comfort zone to learn how to use Git and GitHub (or another hosting service).  The good news is that most popular IDE’s and code editors have decent Git integration, which allows you to perform most Git tasks from a GUI right within the IDE.  Other Git clients, such as TortoiseGit and SourceTree, ease many tasks, but there are some Git commands, such as interactive rebase, where you’ll need to pull up a terminal window or command prompt.  Interactive rebase can be tricky at first, but it lets you squash certain commits and consolidate messages for a cleaner version history.
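Interactive rebase can even be scripted for demonstration: here GIT_SEQUENCE_EDITOR stands in for the editor session, marking every commit after the first as a fixup so three commits squash into one (a sketch using GNU sed):

```shell
# squashing fix-up commits with interactive rebase, scripted via GIT_SEQUENCE_EDITOR
cd "$(mktemp -d)"
git init -q && git config user.email you@example.com && git config user.name you
echo "feature" > f.cs && git add . && git commit -qm "Add feature"
echo "feature v2" > f.cs && git commit -qam "fix typo"
echo "feature v3" > f.cs && git commit -qam "fix typo again"

# mark every commit after the first as a fixup, folding it into the commit above
GIT_SEQUENCE_EDITOR='sed -i "2,\$s/^pick/fixup/"' git rebase -q -i --root
git log --oneline
```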

One of the things that can trip up Git newbies is not providing a proper .gitignore file with their repo.  GitHub provides .gitignore file templates for various IDE’s.  Not using the correct file will make it difficult for others to build your solution without getting spurious errors.  For example, if you’re using Visual Studio without the correct .gitignore, you may check in the packages, bin and obj folders, which can interfere with restoring NuGet packages when someone else tries to build the solution.
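As a starting point, the Visual Studio entries that matter most look something like this (an excerpt; use GitHub’s full VisualStudio template for real projects):

```
# build output
[Bb]in/
[Oo]bj/

# NuGet packages folder
packages/

# per-user settings
*.suo
*.user
```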

To help you get up to speed on this new collaborative approach to software development, you should check out some of the many free Git tutorials available online.  Then you should bite the bullet and contribute to an open-source project.  Feel intimidated?  Scott Hanselman has created a First Timers Only web site specifically targeted to people who are dipping their toes into the open source waters.

I’ve been privileged to author a couple of popular open source frameworks: Simple MVVM Toolkit and Trackable Entities.  I created the first project prior to embracing Git, but I moved the second project to GitHub early on and have had other developers contribute to the project, which has encouraged me to fully adopt the Git way.  I also had the opportunity to submit some pull requests to Microsoft’s ASP.NET 5 repo on GitHub, where I learned how to rebase my feature branch and resolve conflicts to stay in sync with upstream changes.


One of the things that propelled me further into open source has been the way in which Microsoft has jumped on the bandwagon.  Not only have they opened up their code for developers to look at, they have invited others to take part in the process by allowing them to open issues and submit pull requests.  That’s huge.  And it’s a model for how companies large and small stand to benefit from this new way to build software by sharing code and allowing collaboration with the help of tools from Git and GitHub.

I hope your journey into open source is as enriching for you as it has been for me.


May the Source be with you!

Posted in Technical | Leave a comment

Deploy ASP.NET 5 Apps to Docker on Azure

NOTE: This post is part 4 of a series on developing and deploying cross-platform web apps with ASP.NET 5:

  1. Develop and Deploy ASP.NET 5 Apps on Mac OS X
  2. Develop and Deploy ASP.NET 5 Apps on Linux
  3. Deploy ASP.NET 5 Apps to Docker on Linux
  4. Deploy ASP.NET 5 Apps to Docker on Azure (this post)

Download instructions and code for this post here:

Over the past few years, a phenomenon known as “the Cloud” has appeared.  While the term is rather nebulous and can mean a number of different things, with regard to business applications it generally refers to a deployment model where apps run on servers provided by a third party that rents out computational resources, such as CPU cycles, memory and storage, on a pay-as-you-go basis.  There are different service models for cloud computing, including infrastructure (IaaS), platform (PaaS) and software (SaaS).  In this post I’ll focus on the first option, infrastructure, which allows you to set up Linux virtual machines where you can deploy Docker images with your ASP.NET 5 apps and all their dependencies.  There are a number of players in the IaaS market, including Amazon Elastic Compute Cloud (EC2), Google Compute Engine (GCE) and Microsoft Azure, but I’ll show you how to deploy a Dockerized ASP.NET 5 app to Azure using Docker Hub, GitHub and the Docker Client for Windows.


So let’s start out with GitHub.  The reason we’re starting here is that you can set up Docker Hub to link to a GitHub repository that contains a Dockerfile.  When you push a commit to the GitHub repo, Docker Hub will build a new image for your app.  What makes this a nice approach is that you get automated builds with continuous integration, and it’s easy to pull images from Docker Hub and run them on the Linux VM on Azure.

To demonstrate this I’ve created two repositories on GitHub.  The first one is a simple console app.  It contains three files: project.json, program.cs, and a Dockerfile.

FROM microsoft/aspnet:1.0.0-beta4

COPY . /app
WORKDIR /app

RUN ["dnu", "restore"]

ENTRYPOINT ["dnx", ".", "run"]

The second is a simple web app.  It also contains three files: project.json, startup.cs and a Dockerfile.

FROM microsoft/aspnet:1.0.0-beta4

COPY . /app
WORKDIR /app

RUN ["dnu", "restore"]

EXPOSE 5004
ENTRYPOINT ["dnx", ".", "kestrel"]

Next, you’re going to need to set up an account on Docker Hub.  It’s free, and you can log in using your GitHub credentials.  Then add an “Automated Build” which links to a GitHub repo.


By far the easiest way to create a new Docker virtual machine in Azure is to use the Visual Studio 2015 Tools for Docker.  Otherwise, you’re going to need to create the certificates manually and upload them to the Azure portal when adding the VM Extension for Docker.  When I attempted this on the Azure portal, installing the Docker extension hung.  But I didn’t experience a problem using the VS Docker Tools to create a Docker VM on Azure.

You’ll have to go through a few steps in Visual Studio before you can create the VM on Azure.  First create a new Web Project, selecting one of the ASP.NET 5 templates.  For our purposes, an empty web project will do just fine.


Then right-click on the generated project and select “Publish” from the context menu.  Under Profile, select Docker Containers, at which point you’ll be presented with a list of existing Azure Docker virtual machines.  Simply click the New button to create a new Linux VM with Docker installed.


Enter a unique DNS name, together with an admin user name and password.  You can check the option to auto-generate Docker certificates, and the wizard will create the required certificates, configure Docker on the VM to use them, and copy the certificate and key files into the “.docker” folder under your user profile, so that you can use the same certificates to create additional virtual machines on Azure or elsewhere.


I highly encourage you to check out the video series on Docker for .NET Developers, where you can learn more about the VS Docker tools, which can also generate a Dockerfile and build scripts for publishing an app to a Docker container on the VM you created on Azure.

What’s cool is that installing the tools will also give you the docker client for Windows, which you can use from a command prompt to build and run Docker images on the remote VM.  You can start with the following command to display basic information, supplying the host name and port number specified when you created the VM.

docker --tls -H tcp://&lt;hostname&gt;:&lt;port&gt; info


To keep from having to include the remote host address with every command, you can set the DOCKER_HOST environment variable.

set DOCKER_HOST=tcp://&lt;hostname&gt;:&lt;port&gt;

You can now use the docker client on the command line to run images.  If an image is not available locally, it will be pulled from Docker Hub.  The following, for example, will simply print “Hello from Docker” to the console, along with a few other bits of information.

docker --tls run -t hello-world

You can also run Docker images which Docker Hub has built from your GitHub repos.  The following command will run an ASP.NET 5 console app that prints “Hello World” to the console.

docker --tls run -t tonysneed/aspnet5-consoleapp

If you want to get the latest version of the image from Docker Hub, simply execute a pull command.

docker --tls pull tonysneed/aspnet5-consoleapp

The following command will run a daemonized web app, mapping port 80 on the VM to port 5004 on the container.

docker --tls run -t -d -p 80:5004 tonysneed/aspnet5-webapp

This command will return a long container ID, which you can use to output the container logs.  The following will display “Started” if successful, otherwise it will list runtime exceptions.

docker --tls logs f2de092f14b67590ae4dc08cd3a453a28271de0a8f27e6d80ec356cbc5151d43

To list all the running containers, execute the ps command.

docker --tls ps


Now you can just open a browser with a URL that contains the fully qualified DNS name for the VM in Azure:


Congratulations!  You have successfully deployed an ASP.NET 5 web app to a Docker container running on a Linux virtual machine in Azure.  More importantly, you have configured Docker Hub to re-build the image whenever a commit is pushed to a linked GitHub repo, and you know how to pull that Docker image into the VM on Azure from the command line using the Docker Client for Windows.  The Visual Studio Tools for Docker make it easy to create the Linux VM on Azure and generate certificates which you can use to create other Docker VM’s and which you can copy to other machines (both Windows and non-Windows) so that you can run Docker commands from there.  All in all, a sweet story indeed.

Posted in Technical | 1 Comment