Now is a Great Time to Learn TypeScript

Today I am pleased to share some exciting news.  I just finished authoring a new 5-day course on TypeScript!

Essential TypeScript 2.0 with Visual Studio Code

It is timed to coincide with the 2.0 release of TypeScript, a relatively new programming language designed to make JavaScript strongly typed and more capable of supporting large-scale web applications.


I authored the course for Global Knowledge, an IT training company that in 2015 acquired DevelopMentor, a developer training company where I had authored and taught C# and .NET courses since 2006.  The course represents the culmination of a four-month odyssey, in which I not only had to grok TypeScript grammar and syntax but also had to master an entirely new technology stack and toolchain.

It was fun!

Here is a list of topics included in the course:

  1. Introduction to TypeScript
  2. Language Basics
  3. Using Visual Studio Code with TypeScript
  4. Task Automation, Unit Testing, Continuous Integration
  5. The TypeScript Type System
  6. Functional Programming
  7. Asynchronous Programming
  8. Object-Oriented Programming
  9. Generics and Decorators
  10. Namespaces and Modules
  11. Practical TypeScript with Express and Angular

While performing the herculean task of creating a 5-day class with slides, labs and numerous demos, I have to say I thoroughly enjoyed the process of adding a new weapon to my arsenal as a software developer and the chance to venture off in an entirely new direction.  So I thought I would take this opportunity to put together some thoughts about why I think this is a great time to learn web development with TypeScript — and at the same time plug my new course. 🙂

Revenge of JavaScript

One motivation for me to pick up JavaScript (of which TypeScript is a superset) is that we no longer live in a world in which Windows is dominant, and JavaScript can be used to write apps for more than just web browsers — you can use it to write desktop and mobile applications, as well as backend services running in the Cloud.  It has unwittingly become one language to rule them all.


Secondly, web development has matured to the point where it is possible to write an app that has nearly the same interactivity and responsiveness as a traditional desktop application.  One of the reasons I previously gravitated toward WPF (and unconsciously drifted away from ASP.NET) was that desktop and native mobile apps seemed to offer a more fluid user experience where you could work with data locally without having to continually refresh the screen from a remote server.

All that has changed with the advent of SPAs (Single Page Applications), where turbocharged JavaScript engines quickly render rich, interactive web pages.  It’s the perfect time to build SPAs because second-generation frameworks have emerged that take web development to a whole new level and implement the MVVM (Model-View-ViewModel) pattern (or some MV-* variation), providing benefits such as better separation of concerns, testability and maintainability.  Frameworks like Angular, Aurelia and React with Redux also provide tools for quickly scaffolding new applications and preparing them for production.


TypeScript has emerged as the language of choice for building many of these kinds of modern web apps, because its strong typing enables features we’ve come to take for granted, such as interfaces and generics.  It also provides things most developers couldn’t imagine doing without, such as IntelliSense, statement completion and code refactorings.  And TypeScript’s popularity received a shot in the arm from the Angular team at Google, which is using it to build Angular.  In addition, most of their tutorials are written in TypeScript.

JavaScript Has Grown Up

In 2015 JavaScript had its most significant upgrade since Brendan Eich created it in 1995 over the course of ten days.  With the release of ECMAScript 2015, JavaScript received a slew of new features, many of which classical languages, such as Java and C#, have had for years.  These include classes, inheritance, constants, iterators, modules and promises, to name a few.  TypeScript not only includes all ES 2015 features, but it fast-forwards to future versions of ECMAScript by supporting proposed features such as the async and await keywords, which help simplify asynchronous code.  TypeScript lets you use these advanced features today by transpiling down to ES5, a flavor of JavaScript compatible with most browsers.
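As a quick sketch (the class names are my own, purely for illustration), here is how a few of these ES 2015 features look when written in TypeScript, which can transpile them down to ES5:

```typescript
// Hypothetical example: ES 2015 classes, inheritance, template
// literals and constants, all written in TypeScript.
class Person {
    constructor(public name: string) { }
    greet(): string {
        return `Hello ${this.name}`;   // template literal
    }
}

class Employee extends Person {        // class inheritance
    greet(): string {
        return super.greet() + " (employee)";
    }
}

const greeting = new Employee("Anna").greet();  // constants via 'const'
```

Compiled with a target of ES5, this same code runs in older browsers that have no native support for classes.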


When you put modern JavaScript together with TypeScript, you get a powerful combination that gives you just about everything you might want for building SOLID applications that can run in the browser, on the server or on mobile and desktop platforms.

Shiny New Tools

The nice thing about TypeScript is that you’re free to use whatever tool you like, from a full-fledged IDE like Visual Studio or WebStorm, to a lightweight code editor, such as Sublime Text, Atom, Brackets or Visual Studio Code.  While there’s nothing wrong with any of these options, I prefer using VS Code for TypeScript development, because it comes with TypeScript in the box and the team eats its own dogfood by using TypeScript to build the editor.


Coming from a C# background, where I was confined to Visual Studio on Windows with its enormous overhead, I appreciate being able to run a code editor such as VS Code on my Mac, giving me one less reason to fire up a Windows VM.  VS Code starts quickly, and I can open it at a specific folder from either the Finder or Terminal.  I also found navigation in VS Code to be straightforward and intuitive, and you can perform many tasks from the command palette, including custom gulp tasks.  VS Code also functions as a great Markdown editor, with a side-by-side preview that refreshes in real time as you make changes.  It has Git integration and debugging support, as well as a marketplace of third-party extensions that provide a variety of nifty services, such as TypeScript linting and Angular 2 code snippets.  Put it all together, and VS Code is a perfect fit for TypeScript development.

Living in Harmony

One of the most compelling reasons I can think of for picking up TypeScript is that it’s the brainchild of Anders Hejlsberg, the creator of C#, who in a prior life also invented Turbo Pascal and Delphi.  Given such an amazing track record, I have a high degree of confidence in following him into the world of web and native JavaScript development.  There he has made it possible to be more productive and to write more resilient code, because the TypeScript compiler catches problems at development time that would otherwise only become apparent at runtime.

Lastly, it’s significant that Anders chose not to create a separate language, such as CoffeeScript, but rather one that includes all of JavaScript along with optional type annotations that disappear when TypeScript is compiled down to plain old JavaScript.  In fact, all JavaScript is valid TypeScript, and you can put in annotations or leave them out wherever you like, giving you the best of both dynamic and static typing.  In other words, TypeScript does not impose itself on you or dictate that you follow any of its prescriptions.
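A minimal sketch (the names here are my own) of what optional typing looks like in practice:

```typescript
// Annotations are optional: TypeScript infers types where you omit them.
let untyped = "hello";          // no annotation; inferred as string
untyped = untyped + "!";        // plain JavaScript, still valid TypeScript

// Opt in to static typing wherever you like:
function addTyped(a: number, b: number): number {
    return a + b;
}

const sum = addTyped(1, 2);     // checked: addTyped("1", 2) would not compile
```

Strip out the annotations and the same code is ordinary JavaScript; the compiled output contains no trace of the type system.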

To conclude, this is a great time to learn TypeScript, even if you don’t know much about JavaScript, because the two are mostly one and the same, apart from extra features designed to make your life easier and to let you write JavaScript programs that can grow over time without becoming unwieldy.  This frees you to concentrate on learning JavaScript frameworks, such as Express and Angular, which empower you to build RESTful web services and modern client applications.

Happy coding!

Posted in Technical | Tagged | 2 Comments

Simple MVVM Toolkit – It Lives!

Update: Version 1.0 of Simple MVVM Toolkit Express has now been released and is based on version 1.0 RTM of .NET Core: Source code and samples can be found here:

Now that .NET Core is stable and RC2 has been released, and the .NET Platform Standard has been proposed to replace Portable Class Libraries, I thought it would be a good idea to port my Simple MVVM Toolkit to .NET Core and provide support for additional platforms, such as the Universal Windows Platform and the latest version of Xamarin for cross-platform mobile apps, including iOS and Android.


Rather than update my existing repository, I decided it was time for a fresh start.  So I created a new project on GitHub called Simple MVVM Toolkit Express.  It is compatible with the following platforms:

  • Portable Class Libraries: Profile 111 – .NET 4.5, AspNet Core 1.0, Windows 8, Windows Phone 8.1
  • .NET Framework 4.6
  • Universal Windows Platform 10.0
  • Mono/Xamarin: MonoAndroid60, XamariniOS10
  • .NET Core 1.0: NetStandard 1.3

I decided to break compatibility with the following legacy frameworks: .NET 4.0 and Silverlight.

The toolkit has all the major features of the classic version, including classes for models and view models, support for validation and editing with rollbacks, as well as a leak-proof message bus (aka mediator or event aggregator).  Platform-specific threading implementations have been removed, because it’s better to use C#’s built-in async support.

I published a pre-release NuGet package, which you can find here:  And I’ve created samples for WPF, UWP and Xamarin, which you can find on the SimpleMvvm home repository.

I used the dotnet CLI (command-line interface) tool chain to build the project and generate a multi-targeted NuGet package, but I had to modify the generated nuspec file to work around some compatibility issues.  In the end, it was a great learning experience, and I found it reassuring that I could continue to use a popular framework for building many different kinds of client applications using the Model-View-ViewModel design pattern.

Happy coding!

Posted in Technical | 14 Comments

Getting Visual Studio Code Ready for TypeScript: Part 3

Part 3: Injecting Scripts with Gulp

This is the third part in a series of blog posts on Getting Visual Studio Code Ready for TypeScript:

  1. Compiling TypeScript to JavaScript
  2. Writing Jasmine Tests in TypeScript
  3. Injecting Scripts with Gulp (this post)

Leveraging Gulp

In the first and second posts in this series I showed how you can use Gulp to automate common tasks such as compiling TypeScript to JavaScript and running Jasmine tests in a browser.  While Gulp is not strictly necessary to perform these tasks, it allows you to chain together multiple tasks, which can give you a smoother workflow.


You can download a sample project with code for this blog post.  You can also download my Yeoman generator for scaffolding new TypeScript projects for Visual Studio Code.

For example, we defined a “watch” task with a dependency on the “compile” task, so that Gulp performs a compilation before watching for changes in any TypeScript files.  When changes are detected, the “compile” task is then re-executed.

gulp.task('compile', function () {
    exec('rm -rf dist && tsc -p src');
});

gulp.task('watch', ['compile'], function () {
    return gulp.watch('./src/**/*.ts', ['compile']);
});

Likewise, we defined a “test” task with a dependency on the “watch” task, so that changes to any TypeScript files will cause browser-sync to reload the browser when it detects that the JavaScript files have been re-generated.

gulp.task('test', ['watch'], function () {
    var options = {
        port: 3000,
        server: './',
        files: ['./dist/**/*.js'],
        // Remaining options elided for clarity
    };
    browserSync(options);
});


Listing Tasks

While VS Code allows you to execute gulp tasks from within the editor, you may sometimes prefer to use Gulp from the Terminal (if for no other reason than to see all the pretty colors).  To make this easier, we can use a plugin that will list all the tasks we’ve defined in our gulpfile.js.  But before we get into that, we can make our lives easier by using a plugin called gulp-load-plugins, which will relieve us from having to define a separate variable for each plugin we wish to use.  All we need to do is define a $ variable, then use it to execute other gulp plugins we’ve installed.

var $ = require('gulp-load-plugins')({ lazy: true });

To list tasks in gulpfile.js, we can define a “help” task which uses the gulp-task-listing plugin to list all of our tasks.  We’ll follow a convention which uses a colon in the task name to designate it as a sub-task.  We can also define a “default” task which calls the “help” task when a user enters “gulp” in the Terminal with no parameters.

gulp.task('help', $.taskListing.withFilters(/:/));
gulp.task('default', ['help']);

You’ll need to install both Gulp plugins using npm.

npm install --save-dev gulp-load-plugins gulp-task-listing

Then open the Terminal, type “gulp” (no quotes) and press Enter.  You should see a list of tasks displayed.  To execute a task, simply type “gulp” followed by a space and the name of the task.


Injecting Scripts

In my last blog post I described how you can run Jasmine tests in a browser by serving up an HTML file that includes both source and spec JavaScript files.  But this required you to manually insert script tags into SpecRunner.html.  You might have wondered whether there’s a way to inject scripts into the spec runner automatically whenever you execute the “test” task. Well, it just so happens: there’s a plugin for that!™ It’s appropriately called gulp-inject, and you can add an injectScripts function to gulpfile.js which will inject scripts into SpecRunner.html based on globs for source and spec files.

var inject = require('gulp-inject');

function injectScripts(src, label) {
    var options = { read: false, addRootSlash: false };
    if (label) { options.name = 'inject:' + label; }
    return $.inject(gulp.src(src), options);
}

Now add a “specs:inject” gulp task which calls injectScripts to insert the source and spec scripts.  Because we only intend to call this task from other tasks, we can classify it as a sub-task by inserting a colon in the task name.

gulp.task('specs:inject', function () {

    var source = [
        './dist/**/*.js',
        '!./dist/**/*.spec.js'  // all compiled sources except the specs
    ];

    var specs = ['./dist/**/*.spec.js'];

    return gulp
        .src('./SpecRunner.html')
        .pipe(injectScripts(source, ''))
        .pipe(injectScripts(specs, 'specs'))
        .pipe(gulp.dest('./'));
});
The gulp-inject plugin will insert the selected scripts at each location, based on a comment corresponding to the specified label.  Simply edit SpecRunner.html to replace the hard-coded script tags with specially formatted comments.  After running the “specs:inject” task, you should see the appropriate scripts inserted at these locations.

<!-- inject:js -->
<!-- endinject -->

<!-- inject:specs:js -->
<!-- endinject -->
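For example, after injection the comments in SpecRunner.html should bracket generated script tags along these lines (the paths assume the sample project layout used in this series):

```html
<!-- inject:js -->
<script src="dist/greeter/greeter.js"></script>
<!-- endinject -->

<!-- inject:specs:js -->
<script src="dist/greeter/greeter.spec.js"></script>
<!-- endinject -->
```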

Injecting Imports

In addition to inserting source and spec scripts, you’ll also want to inject System.import statements into the spec runner so that system.js can provide browser support for module loading.  For that you’ll need to install packages for glob, gulp-rename, and gulp-inject-string (the path module is built into Node.js), then add an injectImports function to gulpfile.js.

var glob = require('glob');
var path = require('path');

function injectImports(src, label) {

    var search = '/// inject:' + label;
    var first = '\n    System.import(\'';
    var last = '\'),';
    var specNames = [];

    src.forEach(function(pattern) {
        glob.sync(pattern)
            .forEach(function(file) {
                var fileName = path.basename(file, path.extname(file));
                var specName = path.join(path.dirname(file), fileName);
                specNames.push(first + specName + last);
            });
    });

    return $.injectString.after(search, specNames.join(''));
}

Then add an “imports:inject” task which calls injectImports to insert system imports into a file called system.imports.js.

gulp.task('imports:inject', function () {

    return gulp
        .src('./util/system.template.js')
        .pipe(injectImports(['./dist/**/*.spec.js'], 'import'))
        .pipe($.rename('system.imports.js'))
        .pipe(gulp.dest('./util'));
});

Modify SpecRunner.html to replace the script that uses System.import with a reference to system.imports.js.

<script src="util/system.imports.js"></script>

When you execute the “imports:inject” gulp task, it will search a file called system.template.js for a triple-dash comment with the text “inject:import”, where it will inject imports for each spec file. The result will be written to system.imports.js.

    /// inject:import
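Given the first and last strings defined in the injectImports function, the generated system.imports.js should contain one System.import call per spec file, along these lines (the spec path shown is illustrative):

```js
    /// inject:import
    System.import('dist/greeter/greeter.spec'),
```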

Lastly, you need to update the “test” task in gulpfile.js to add the two sub-tasks for injecting scripts and imports. This will ensure they are executed each time you run your tests.

gulp.task('test', ['specs:inject', 'imports:inject', 'watch'], function () {
    // browser-sync options as before
});

Debugging Gulp Tasks

If you run into problems with any gulp tasks, it would help if you could set breakpoints in gulpfile.js, launch a debugger and step through your code to see what went wrong.  You can do this in VS Code by adding an entry to the “configurations” section of your launch.json file, in which you invoke gulp.js and pass a task name.

{
    "name": "Debug Gulp Task",
    "type": "node",
    "request": "launch",
    "program": "${workspaceRoot}/node_modules/gulp/bin/gulp.js",
    "stopOnEntry": false,
    "args": [
        // Replace with name of gulp task to run
    ],
    "cwd": "${workspaceRoot}"
}

If you set a breakpoint in the “imports:inject” task, select “Debug Gulp Task” from the drop down in the Debug view in VS Code and press F5, it will launch the debugger and stop at the breakpoint you set.  You can then press F10 (step over) or F11 (step into), view local variables and add watches.


Learning Gulp

If you would like to learn more about Gulp, I highly recommend John Papa’s Pluralsight course on Gulp, where he explains how to use Gulp to perform various build automation tasks, such as bundling, minification, versioning and integration testing. While the learning curve may appear steep at first, Gulp will make your life easier in the long run by automating repetitive tasks and allowing you to chain them together for a streamlined development workflow.

Posted in Technical | Tagged , | Leave a comment

Getting Visual Studio Code Ready for TypeScript: Part 2

Part 2: Writing Jasmine Tests in TypeScript

This is the second part in a series of blog posts on Getting Visual Studio Code Ready for TypeScript:

  1. Compiling TypeScript to JavaScript
  2. Writing Jasmine Tests in TypeScript (this post)

Jasmine vs Mocha + Chai + Sinon


There are numerous JavaScript testing frameworks, but two of the most popular are Jasmine and Mocha.  I won’t perform a side-by-side comparison here, but the main difference is that Mocha does not come with built-in assertion and mocking libraries, so you need to plug in an assertion library, such as Chai, and a mocking library, such as Sinon.  Jasmine, on the other hand, includes its own API for assertions and mocks.  So if you want to keep things simple with fewer moving parts, and you don’t need the extra features offered by dedicated libraries such as Chai and Sinon, Jasmine might be the more appealing option.  If, on the other hand, you want more flexibility and control, and you want the features offered by dedicated assertion and mocking libraries, you might want to opt for Mocha.  For simplicity I’m going to stick with Jasmine in this blog post, but feel free to use Mocha if that better suits your purpose.

You can download a sample project with code for this blog post.  You can also download my Yeoman generator for scaffolding new TypeScript projects for Visual Studio Code.

Update: Since first publishing this blog post, I added a section on Debugging Jasmine Tests in VS Code, which you can find at the end of the article.

Using Jasmine with TypeScript

Jasmine is a behavior-driven development testing framework, which allows you to define test suites through one or more nested describe functions.  Each describe function accepts a string argument with the name of the test suite, which is usually the name of the class or method you are testing.  A test suite consists of one or more specs, formulated as a series of it functions, in which you specify expected behaviors.

Let’s say you have a Greeter class written in TypeScript, and it has a greet function.

namespace HelloTypeScript {
    export class Greeter {
        constructor(public message: string) {
        }

        greet(): string {
            return "Hello " + this.message;
        }
    }
}
Notice that Greeter is defined within the namespace HelloTypeScript and is qualified with the export keyword.  This removes Greeter from the global scope so that we can avoid potential name collisions.
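To illustrate, code outside the namespace must access Greeter by its qualified name (the class is reproduced here so the snippet stands alone):

```typescript
// The Greeter class from the example above, wrapped in a namespace.
namespace HelloTypeScript {
    export class Greeter {
        constructor(public message: string) { }
        greet(): string { return "Hello " + this.message; }
    }
}

// Outside the namespace, Greeter must be qualified:
const greeter = new HelloTypeScript.Greeter("TypeScript");
const message = greeter.greet();   // "Hello TypeScript"
```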

To use Jasmine we’ll need to install jasmine-core (not jasmine) using npm (Node Package Manager).  Because we’re only using Jasmine at development time, we’ll save it to the package.json file using the --save-dev argument.

npm install --save-dev jasmine-core

Intellisense for Jasmine

To allow Visual Studio Code to provide intellisense for Jasmine, you’ll need to install type definitions using Typings, which replaces the deprecated tsd tool for DefinitelyTyped.

npm install -g typings

Use Typings to install type definitions for jasmine.

typings install jasmine --save-dev --ambient

This command will result in the addition of a typings folder in your project, which contains a main.d.ts file with references to the installed type definitions.  The --save-dev argument will persist the specified typing as a dev dependency in a typings.json file, so that you can re-install the typings later.  The --ambient argument is required to include DefinitelyTyped in the lookup.

Now you’re ready to add your first Jasmine test.  By convention you should use the same name as the TypeScript file you’re testing, but with a .spec suffix.  For example, the test for greeter.ts should be called greeter.spec.ts and be placed in the same folder.

/// <reference path="../../typings/main.d.ts" />

describe("Greeter", () => {

    describe("greet", () => {

        it("returns Hello World", () => {

            // Arrange
            let greeter = new HelloTypeScript.Greeter("World");

            // Act
            let result = greeter.greet();

            // Assert
            expect(result).toEqual("Hello World");
        });
    });
});
The triple-slash reference is needed for intellisense to light up.  Without it you’ll see red squigglies, and VS Code will complain that it cannot find the name ‘describe’.

When you press Cmd+B to compile your TypeScript code, you will see a greeter.spec.js file in the dist/greeter directory, where the greeter.js file is also located.  You’ll also see a sourcemap (.js.map) file, which enables debugging of your Jasmine test.  (See the first post in this blog series for information on how to configure VS Code for compiling and debugging TypeScript.)

Running Jasmine Tests

To run your Jasmine tests in a browser, go to the latest release for Jasmine and download the jasmine-standalone zip file.  After extracting the contents of the zip file, copy both the lib folder and SpecRunner.html file to your project folder.  Edit the html file to include both the source and spec files.

<!-- include source files here... -->
<script src="dist/greeter/greeter.js"></script>

<!-- include spec files here... -->
<script src="dist/greeter/greeter.spec.js"></script>

You can then simply open SpecRunner.html in Finder (Mac) or File Explorer (Windows) to see the test results.


If you change Greeter.greet to return “Goodbye” instead of “Hello”, then compile and refresh the browser, you’ll see that the test now fails.


Running Tests Automatically

Having to refresh the browser to see test results can become tedious, so it’s a good idea to serve your tests over HTTP.  To help with this you can use a task runner such as Gulp, which integrates nicely with VS Code, together with an HTTP server such as BrowserSync.


First, you’ll want to install gulp and browser-sync locally.

npm install --save-dev gulp browser-sync

Next, add a gulpfile.js file to the project, in which you’ll define tasks for compiling TypeScript to JavaScript, as well as watching TypeScript files and recompiling them when there are changes.

var gulp = require('gulp');
var exec = require('child_process').exec;
var browserSync = require('browser-sync');

gulp.task('compile', function () {
    exec('rm -rf dist && tsc -p src');
});

gulp.task('watch', ['compile'], function () {
    return gulp.watch('./src/**/*.ts', ['compile']);
});

To run either the compile or watch tasks, we can execute them from a Terminal or Command Prompt.  (You can also run tasks in VS Code by pressing Cmd+P, typing “task ” [no quotes] and entering the task name).

gulp watch


If you change a TypeScript file, the gulp watch task will detect the change and execute the compile task.  You can then add a gulp task which serves both .js and .spec.js files in a browser.

gulp.task('test', ['watch'], function () {

    var options = {
        port: 3000,
        server: './',
        files: ['./dist/**/*.js'],
        logFileChanges: true,
        logLevel: 'info',
        logPrefix: 'spec-runner',
        notify: true,
        reloadDelay: 1000,
        startPath: 'SpecRunner.html'
    };
    browserSync(options);
});

Running Tests in VS Code

It is possible to wire up the gulp test task so that it runs in response to pressing Cmd+T.  To set up VS Code both for compiling TypeScript and running tests, press Cmd+Shift+P, type “config” and select Configure Task Runner. Replace the default content for tasks.json with the following:

{
    "version": "0.1.0",
    "command": "gulp",
    "isShellCommand": true,
    "args": [],
    "tasks": [
        {
            "taskName": "compile",
            "isBuildCommand": true,
            "showOutput": "silent",
            "problemMatcher": "$gulp-tsc"
        },
        {
            "taskName": "test",
            "isTestCommand": true,
            "showOutput": "always"
        }
    ]
}
Pressing Cmd+B will compile your TypeScript files, and pressing Cmd+T will serve your Jasmine tests in a browser, automatically refreshing the browser each time any of your TypeScript files changes.

Using Modules

To improve encapsulation TypeScript supports the use of modules, which are executed in their own scope, not in the global scope.  Various constructs, such as variables, functions, interfaces and classes, are not visible outside a module unless they are explicitly exported.  For example, we could define an ItalianGreeter class with an export statement.

export default class ItalianGreeter {
    constructor(public message: string) {
    }

    greet(): string {
        return "Ciao " + this.message;
    }
}

The Jasmine test for ItalianGreeter would then require an import statement.

import ItalianGreeter from "./italiangreeter";

let greeter = new ItalianGreeter("World");

// Remaining code elided for clarity

To use modules in TypeScript you’ll need to specify a module loader in your tsconfig.json file.  For a TypeScript library or node.js app, you would select commonjs.

{
    "compilerOptions": {
        "module": "commonjs"
        // Remaining options elided for clarity
    }
}

At this point your TypeScript will compile, but the additional tests will not show up in SpecRunner.html, even after you include scripts for the source and spec files.  The reason is that you need SystemJS, which acts as a polyfill to provide browser support for module loading, a feature of ECMAScript 2015.  First add systemjs to your project.

npm install --save-dev systemjs

Then add these two scripts to SpecRunner.html.

<script src="node_modules/systemjs/dist/system.js"></script>
<script>
    System.config({ packages: { 'dist': { defaultExtension: 'js' } } });
</script>

Pressing Cmd+T will now also serve italiangreeter.spec.js, which imports the ItalianGreeter class.


Stopping Tests in VS Code

You can terminate the test task by pressing Cmd+Shift+P and selecting Terminate Running Task.  Because this is something you’ll do often, you might want to add a keyboard shortcut for it.  From the Code menu select Preferences / Keyboard Shortcuts, then add the following binding, which will terminate the running task by pressing Cmd+Shift+X.

    { "key": "shift+cmd+x", "command": "workbench.action.tasks.terminate" }

Debugging Tests in VS Code

While it may be useful to run Jasmine tests in a browser, there are times when you need to launch a debugger and step through your code one line at a time.  Visual Studio Code makes it relatively painless to debug your tests.  First you’ll need to install jasmine-node using npm.

npm install --save-dev jasmine-node

Then add the following entry to the “configurations” section of your launch.json file.

{
    "name": "Debug Tests",
    "type": "node",
    "request": "launch",
    "program": "${workspaceRoot}/node_modules/jasmine-node/bin/jasmine-node",
    "stopOnEntry": false,
    "args": [],
    "cwd": "${workspaceRoot}",
    "sourceMaps": true,
    "outDir": "${workspaceRoot}/dist"
}

Press Cmd+Shift+D to view the Debugging pane in VS Code and select “Debug Tests” from the dropdown.  Then set a breakpoint (pressing F9 will do the trick), and press F5 to launch the debugger.  Execution should pause at the breakpoint, allowing you to step through your code.


What’s Next?

In this post I showed how to write Jasmine tests in TypeScript and serve them in a browser by running a Gulp task either from the Terminal or in Visual Studio Code.  This has the advantage of automatically compiling TypeScript files and refreshing the browser whenever a source or spec file has changed.  While this works well at development time, you’ll need to use a test runner such as Karma if you want to execute tests on a continuous integration server when commits are pushed to a remote repository.  I’ll address this issue in my next post.

Posted in Technical | Tagged , | 17 Comments

Getting Visual Studio Code Ready for TypeScript

Part 1: Compiling TypeScript to JavaScript

This is the first part in a series of blog posts on Getting Visual Studio Code Ready for TypeScript:

  1. Compiling TypeScript to JavaScript (this post)
  2. Writing Jasmine Tests in TypeScript

Why TypeScript?

In case you’re new to TypeScript, Wikipedia defines TypeScript in the following way (paraphrased):

TypeScript is designed for development of large applications and transcompiles to JavaScript. It is a strict superset of JavaScript (any existing JavaScript programs are also valid TypeScript programs), and it adds optional static typing and class-based object-oriented programming to the JavaScript language.

Coming from a C# background, I was attracted to TypeScript, first because it is the brainchild of Anders Hejlsberg, who also invented the C# programming language, which gives me confidence that it has been well designed, and second because I like to rely on the compiler to catch errors while I am writing code.  While TypeScript embraces all the features of ECMAScript 2015, such as modules, classes, promises and arrow functions, it adds type annotations that allow code editors to provide syntax checking and intellisense, making it easier to use the good parts of JavaScript while avoiding the bad.
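As a minimal sketch (the Point interface and length function are my own), here is the kind of mistake the compiler catches before the code ever runs:

```typescript
// Hypothetical example: the compiler verifies object shapes
// at development time rather than at runtime.
interface Point {
    x: number;
    y: number;
}

function length(p: Point): number {
    return Math.sqrt(p.x * p.x + p.y * p.y);
}

const len = length({ x: 3, y: 4 });   // 5
// length({ x: 3 });                  // compile error: property 'y' is missing
```

In plain JavaScript the commented-out call would fail silently at runtime, producing NaN; with type annotations it never compiles.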

You can download a sample project with code for this blog post.  You can also download my Yeoman generator for scaffolding new TypeScript projects for Visual Studio Code.


Why Visual Studio Code?

Once I decided to embark on the adventure of learning TypeScript, the next question was: What development tools should I use?

I’ve spent the better part of my career with Microsoft Visual Studio, and I enjoy all the bells and whistles it provides.  But all those fancy designers come at a cost, both in terms of disk space and RAM, and even installing or updating VS 2015 can take quite a while.  To illustrate, here is a joke I recently told a friend of mine:

I like Visual Studio because I can use it to justify to my company why I need to buy better hardware, so I can run VS and get acceptable performance. That’s how I ended up with a 1 TB SSD and 16 GB of RAM — thank you Visual Studio! 👏

I also own a MacBook Air, mainly because of Apple’s superior hardware, and run a Windows 10 virtual machine so that I can use Visual Studio and Office.  But I thought it would be nice to be able to write TypeScript directly on my Mac without having to spin up a Windows VM, which can drain my laptop’s battery.  So I thought I would give Visual Studio Code a try.

But before I started with VS Code, I decided to go back to Visual Studio and create a simple TypeScript project with support for unit testing with Jasmine, a popular JavaScript unit testing framework.  The experience was relatively painless, but it still required a fair amount of manual setup: creating a new TypeScript project in Visual Studio, deleting the files that were provided, installing NuGet packages for AspNet.Mvc and JasmineTest, then adding a bare-bones controller and a view that I adapted from the spec runner supplied by Jasmine.

You can download the code for a sample VS 2015 TypeScript project from my Demo.VS2015.TypeScript repository on GitHub.

Visual Studio 2015 still required me to do some work to create a basic TypeScript project with some unit tests, and if I wanted to add other features, such as linting my TypeScript or automatically refreshing the browser when I changed my code, then I would have to use npm or a task runner such as Grunt or Gulp. This helped tip the scales for me in favor of Visual Studio Code.


VS Code is actually positioned as something between a simple code editor, such as Atom, Brackets or Sublime Text, and a full-fledged IDE like Visual Studio or WebStorm.  The main difference is that VS Code lacks a “File, New Project” command for creating a new project with all the necessary files.  This means you either have to start from scratch or select a Yeoman generator to scaffold a new project.

I decided to start from scratch, because I like pain. (OK, I’m just kidding.)

The truth is, I couldn’t find an existing generator that met my needs, and I wanted to learn all I could from the experience of getting VS Code ready for TypeScript.  The result was a sample project on GitHub (Demo.VSCode.TypeScript) and a Yeoman generator (tonysneed-vscode-typescript) for scaffolding new TypeScript projects.

Compiling TypeScript to JavaScript

My first goal was to compile TypeScript into JavaScript with source maps for debugging and type definitions for IntelliSense.  This turned out to be much more challenging than I expected.  I discovered that the gulp-typescript plugin did not handle relative paths very well, so instead I relied on npm (Node Package Manager) to invoke the TypeScript compiler directly, setting the project parameter to the ‘src’ directory containing my tsconfig.json file.  This allowed for specifying a ‘dist’ output directory while preserving the directory structure of ‘src’.  To compile TypeScript using a gulp task, all I had to do was execute the ‘tsc’ script.

/**
 * Compile TypeScript
 */
gulp.task('typescript-compile', ['vet:typescript', 'clean:generated'], function () {
    log('Compiling TypeScript');
    exec('node_modules/typescript/bin/tsc -p src');
});

Here is the content of the ‘tsconfig.json’ file. Note that both ‘rootDir’ and ‘outDir’ must be set in order to preserve directory structure in the ‘dist’ folder.

    "compilerOptions": {
        "module": "commonjs",
        "target": "es5",
        "sourceMap": true,
        "declaration": true,
        "removeComments": true,
        "noImplicitAny": true,
        "rootDir": ".",
        "outDir": "../dist"
    "exclude": [

Debugging TypeScript

I could then enable debugging of TypeScript in Visual Studio Code by adding a ‘launch.json’ file to the ‘.vscode’ directory and including a configuration for debugging the currently selected TypeScript file.

    "name": "Debug Current TypeScript File",
    "type": "node",
    "request": "launch",
    // File currently being viewed
    "program": "${file}",
    "stopOnEntry": true,
    "args": [],
    "cwd": ".",
    "sourceMaps": true,
    "outDir": "dist"

Then I could simply open ‘greeter.ts’ and press F5 to launch the debugger and break on the first line.
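The sample’s ‘greeter.ts’ is along these lines (a minimal sketch; the actual file in the repository may differ):

```typescript
// greeter.ts: a small class to step through in the debugger.
// With stopOnEntry set to true, F5 breaks on the first line.
class Greeter {
    constructor(public greeting: string) { }

    greet(): string {
        return this.greeting;
    }
}

const greeter = new Greeter('Hello, world!');
console.log(greeter.greet());
```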


Linting TypeScript

While compiling and debugging TypeScript was a good first step, I also wanted to be able to lint my code using tslint.  So I added a gulp task called ‘vet:typescript’ and made my ‘typescript-compile’ task dependent on it.  The result was that if, for example, I removed a semicolon from my Greeter class and compiled the project from the terminal, I would see a linting error displayed.
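The rules tslint enforces live in a tslint.json file at the project root.  A minimal configuration that would catch the missing semicolon might look like this (an assumed example, not necessarily the rule set used in the sample project):

```json
{
    "rules": {
        "semicolon": [true, "always"]
    }
}
```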


Configuring the Build Task

I also wanted to be able to compile TypeScript simply by pressing Cmd+B.  That was easy because VS Code will use a Gulpfile if one is present.  Simply specify ‘gulp’ for the command and ‘typescript-compile’ for the task name, then set ‘isBuildCommand’ to true.

    "version": "0.1.0",
    "command": "gulp",
    "isShellCommand": true,
    "args": [
    "tasks": [
            "taskName": "typescript-compile",
            "isBuildCommand": true,
            "showOutput": "always",
            "problemMatcher": "$gulp-tsc"

Adding a Watch Task

Lastly, I thought it would be cool to run a task that watches my TypeScript files for changes and automatically re-compiles them.  So I added yet another gulp task, called ‘typescript-watch’, which first compiles the .ts files, then watches for changes.

/**
 * Watch and compile TypeScript
 */
gulp.task('typescript-watch', ['typescript-compile'], function () {
    return gulp.watch('src/**/*.ts', ['typescript-compile']);
});

I could then execute this task from the command line. Here you can see output shown in the terminal when a semicolon is removed from a .ts file.


It is also possible to execute a gulp task from within VS Code.  Press Cmd+P, type ‘task’ and hit the spacebar to see the available gulp tasks.  You can select a task by typing part of the name, then press Enter to execute the task.


Using a Yeoman Generator

While it’s fun to set up a new TypeScript project with Visual Studio Code from scratch, an easier way is to scaffold a new project using a Yeoman generator, which is the equivalent of executing File, New Project in Visual Studio.  That’s why I built a Yeoman generator called tonysneed-vscode-typescript, which gives you a ready-made TypeScript project with support for unit testing with Jasmine and Karma.  (I’ll explain more about JavaScript testing frameworks in the next part of this series.)


To get started using Yeoman, you’ll need to install Yeoman with the Node Package Manager.

npm install -g yo

Next install the tonysneed-vscode-typescript Yeoman generator.

npm install -g generator-tonysneed-vscode-typescript

To use the generator you should first create the directory where you wish to place your scaffolded TypeScript project.

mkdir MyCoolTypeScriptProject
cd MyCoolTypeScriptProject

Then simply run the Yeoman generator.

yo tonysneed-vscode-typescript

To view optional arguments, you can append --help to the command.  Another option is to skip installation of npm dependencies by supplying an argument of --skip-install, in which case you can install the dependencies later by executing npm install from the terminal.

In response to the prompt for Application Name, you can either press Enter to accept the default name, based on the current directory name, or enter a new application name.


Once the generator has scaffolded your project, you can open it in Visual Studio Code from the terminal.

code .

After opening the project in Visual Studio Code, you will see TypeScript files located in the src directory.  You can compile the TypeScript files into JavaScript simply by pressing Cmd+B, at which point a dist folder should appear containing the transpiled JavaScript files.

For the next post in this series I will explain how you can add unit tests to your TypeScript project, and how you can configure test runners that can be run locally as well as incorporated into your build process for continuous integration.


Using EF6 with ASP.NET MVC Core 1.0 (aka MVC 6)

This week Microsoft announced that it is renaming ASP.NET 5 to ASP.NET Core 1.0.  In general I think this is a very good step.  Incrementing the version number from 4 to 5 gave the impression that ASP.NET 5 was a continuation of the prior version and that a clean migration path would exist for upgrading apps from ASP.NET 4 to 5.  However, this did not reflect the reality that ASP.NET 5 is a completely different animal, re-written from the ground up, and that it has little in common architecturally with its predecessor.  I would even go so far as to say it has more in common with node.js than with ASP.NET 4.

You can download the code for this post, which has been updated for ASP.NET Core 2.0, from my repository on GitHub.

Entity Framework 7, however, has even less in common with its predecessor than does MVC, making it difficult for developers to figure out whether and when they might wish to make the move to the new data platform.  In fact, EF Core 1.0 is still a work in progress and won’t reach real maturity until well after initial RTM.  So I’m especially happy that EF 7 has been renamed EF Core 1.0, and also that MVC 6 is now named MVC Core 1.0.

The problem I have with the name ASP.NET Core is that it implies some equivalency with .NET Core.  But as you see from the diagram below, ASP.NET Core will not only run cross-platform on .NET Core, but you can also target Windows with .NET Framework 4.6.


Note: This diagram has been updated to reflect that EF Core 1.0 (aka EF 7) is part of ASP.NET Core 1.0 and can target either .NET 4.6 or .NET Core.

It is extremely important to make this distinction, because there are scenarios in which you would like to take advantage of the capabilities of ASP.NET Core, but you’ll need to run on .NET 4.6 in order to make use of libraries that are not available on .NET Core 1.0.

So why would you want to use ASP.NET Core 1.0 and target .NET 4.6?

As I wrote in my last post, WCF Is Dead and Web API Is Dying – Long Live MVC 6, you should avoid using WCF for greenfield web services, because: 1) it is not friendly to dependency injection, 2) it is overly complicated and difficult to use properly, 3) it was designed primarily for use with SOAP (which has fallen out of favor), and 4) Microsoft appears to not be investing further in WCF.  I also mentioned you should avoid ASP.NET Web API because it has an outdated request pipeline, which does not allow you to apply cross-cutting concerns, such as logging or security, across multiple downstream web frameworks (Web API, MVC, Nancy, etc.).  OWIN and Katana were introduced to correct this deficiency, but they should be viewed as temporary remedies prior to the release of ASP.NET Core 1.0, which has the same pipeline model as OWIN.

The other important advantage of ASP.NET Core is that it completely decouples you from WCF, IIS and System.Web.dll.  It was kind of a dirty secret that under the covers ASP.NET Web API used WCF for self-hosting, and you would have to configure the WCF binding if you wanted to implement things like transport security.  ASP.NET Core has a more flexible hosting model with no dependence on WCF or System.Web.dll (which carries significant per-request overhead), whether you choose to host in IIS on Windows, or cross-platform in Kestrel on Windows, Mac or Linux.

A good example of why you would want to use ASP.NET Core 1.0 to target .NET 4.6 is the ability to use Entity Framework 6.x.  The first release of EF Core, for example, won’t include table-per-concrete-type (TPC) inheritance or many-to-many relationships without extra join entities.  As Rowan Miller, a program manager on the EF team, stated:

We won’t be pushing EF7 as the ‘go-to release’ for all platforms at the time of the initial release to support ASP.NET 5. EF7 will be the default data stack for ASP.NET 5 applications, but we will not recommend it as an alternative to EF6 in other applications until we have more functionality implemented.

This means if you are building greenfield web services, but still require the full capabilities of EF 6.x, you’ll want to use ASP.NET MVC Core 1.0 (aka MVC 6) to create Web API’s which depend on .NET 4.6 (by specifying “dnx451” in the project.json file).  This will allow you to add a dependency for the “EntityFramework” NuGet package version “6.1.3-*”.  The main difference is that you’ll probably put your database connection string in an *.json file rather than a web.config file, or you may specify it as an environment variable or retrieve it from a secrets store.  An appsettings.json file, for example, might contain a connection string for a local database file.

  "Data": {
    "SampleDb": {
      "ConnectionString": "Data Source=(localdb)\\MSSQLLocalDB;AttachDbFilename=|DataDirectory|\\SampleDb.mdf;Integrated Security=True; MultipleActiveResultSets=True"

You can then register your DbContext-derived class with the dependency injection system of ASP.NET Core.

public void ConfigureServices(IServiceCollection services)
{
    // Add DbContext
    services.AddScoped(provider =>
    {
        var connectionString = Configuration["Data:SampleDb:ConnectionString"];
        return new SampleDbContext(connectionString);
    });

    // Add framework services.
    services.AddMvc();
}

This will allow you to inject a SampleDbContext into the constructor of any controller in your app.

public class ProductsController : Controller
{
    private readonly SampleDbContext _dbContext;

    public ProductsController(SampleDbContext dbContext)
    {
        _dbContext = dbContext;
    }
}

Lastly, you’ll need to provide some information to EF regarding the provider you’re using (for example, SQL Server, Oracle, MySQL, etc).  In a traditional ASP.NET 4.6 app you would have done that in app.config or web.config.  But in ASP.NET Core you’ll want to specify the provider in a class that inherits from DbConfiguration.

public class DbConfig : DbConfiguration
{
    public DbConfig()
    {
        SetProviderServices("System.Data.SqlClient", SqlProviderServices.Instance);
    }
}

Then you can apply a DbConfigurationType attribute to your DbContext-derived class, so that EF can wire it all together.

[DbConfigurationType(typeof(DbConfig))]
public class SampleDbContext : DbContext
{
    public SampleDbContext(string connectionName) :
        base(connectionName) { }

    public DbSet<Product> Products { get; set; }
}

You can download the code for this post from my Demo.AspNetCore.EF6 repository on GitHub.

The primary limitation of targeting .NET 4.6 with EF 6 is that you’ll only be able to deploy your web services on Windows.  The good news, however, is that you’ll be in a great position to migrate from EF 6 to EF Core 1.0 (aka EF 7) as soon as it matures enough to meet your needs.  That’s because the API’s for EF Core are designed to be similar to EF 6.  Then when you do move to EF Core, you’ll be able to use Docker to deploy your web services on Linux VM’s running in a Cloud service such as Amazon EC2, Google Compute Engine, or Microsoft Azure.


WCF Is Dead and Web API Is Dying – Long Live MVC 6!

The time has come to start saying goodbye to Windows Communication Foundation (WCF).  Yes, there are plenty of WCF apps in the wild — and I’ve built a number of them.  But when it comes to selecting a web services stack for greenfield applications, you should no longer use WCF.

Note: A number of commenters have misunderstood the nuanced position I’ve taken in this blog post, so I thought it would help to add a statement at the beginning clarifying my position on WCF. I am not saying that WCF is going away or that you should discontinue using it for non-HTTP communication, such as MSMQ or Named Pipes, or where SOAP is a requirement. I am also not saying that most WCF apps should be re-written; on the contrary, they will need to be maintained to support SOAP-based clients. What I am saying is that, if you plan to build a greenfield HTTP-based web service, you should seriously consider using ASP.NET Core instead of WCF or ASP.NET Web API 2.x, because it is cross-platform, modular and designed for Cloud-based deployment, and it supports modern development methodologies where dependency injection is an essential requirement.

Update: ASP.NET 5 has been renamed to ASP.NET Core 1.0 and MVC 6 is now called MVC Core 1.0.

King Is Dead

WCF is dead

There are many reasons why WCF has lost its luster, but the bottom line is that WCF was written for a bygone era and the world has moved on.  There are some use cases where it still might make sense to use WCF, for example, message queuing applications where WCF provides a clean abstraction layer over MSMQ, or inter-process and intra-process communication where using WCF with named pipes is a better choice than .NET Remoting.  But for developing modern HTTP-based web services, WCF should be considered deprecated.

Didn’t get the memo?  Unfortunately, Microsoft is not in the habit of announcing when they are no longer recommending a specific technology for new application development.  Sometimes there’s a tweet, blog post or press release, as when Bob Muglia famously stated that Microsoft’s Silverlight strategy had “shifted,” but there hasn’t to my knowledge been word from Microsoft that WCF is no longer recommended for building modern HTTP-based web services.

One reason might be that countless web services have been built using WCF since its debut in 2006 with .NET 3.0 on Windows Vista, and other frameworks, such as WCF Data Services, WCF RIA Services, and self-hosted Web API’s, have been built on top of WCF.  Also, if you need to interoperate with existing SOAP-based web services, you’re going to want to use WCF rather than handcraft SOAP messages.


But it’s fair to say that the vision of a world of interoperable web services based on a widely accepted set of SOAP standards has generally failed to materialize.  The story, however, is not so much about the failure of SOAP to gain wide acceptance, as it is about the success of HTTP as a platform for interconnected services based on the infrastructure of the World Wide Web, which has been codified in an architectural style called REST.  The principal design objective for WCF was to provide a comprehensive platform and toolchain for developing service-oriented applications that are highly configurable and independent of the underlying transport, whereas the goal of REST-ful applications is to leverage the capabilities of HTTP for producing and consuming web services.

But doesn’t WCF support REST?

Yes it does, but aside from the fact that REST support in WCF has always felt tacked on, WCF has problems of its own.  First, WCF in general is way too complicated.  There are too many knobs and dials to turn, and you have to be somewhat of an expert to build WCF services that are secure, performant and scalable.  Many times, for example, I have seen WCF apps configured to use the least performant bindings when it wasn’t necessary.  And setting things up correctly requires advanced knowledge of encoders, multi-threading and concurrency.  Second, WCF was not designed to be friendly to modern development techniques, such as dependency injection, and WCF service types require a custom service behavior to use DI.


One of the first signs that WCF was in trouble came when the Web API team opted for ASP.NET MVC rather than WCF for services hosted by IIS (although under the covers, self-hosted Web API’s, such as those hosted by Windows services, were still coupled to WCF).  ASP.NET Web API offers a much simpler approach to developing and consuming REST-ful web services, with programmatic control over all aspects of HTTP, and it was designed to play nice with dependency injection for greater flexibility and testability.

Nevertheless, ASP.NET Web API duplicated many aspects of ASP.NET MVC (for example, routing) and it was still coupled to the underlying host.  For example, Web API apps requiring secure communication over TLS / SSL required a different setup depending on whether the app was hosted in IIS or self-hosted.  To address the coupling issue, Microsoft released an implementation of the OWIN specification called Katana, which offers components for building host-independent web apps and a middleware-based pipeline for inserting cross-cutting concerns regardless of the host.
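The middleware model is easiest to see outside of .NET.  Here is a minimal, hypothetical TypeScript sketch of the Express-style pattern that OWIN shares: each cross-cutting concern wraps the next handler, with no knowledge of the host.

```typescript
// A handler turns a request into a response; middleware wraps a handler.
type Handler = (req: string) => string;
type Middleware = (next: Handler) => Handler;

// Cross-cutting concern: log every request, then delegate onward.
const logging: Middleware = next => req => {
    console.log(`Request: ${req}`);
    return next(req);
};

// Compose the pipeline around an innermost handler.
const app: Handler = logging(req => `Handled ${req}`);
console.log(app('/products'));
```

Because the logging middleware knows nothing about IIS or self-hosting, the same pipeline runs unchanged on any host, which is the deficiency OWIN was created to fix.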


Web API is dying – Long live MVC 6!

As awesome as ASP.NET Web API and Katana are, they were released mainly as a stopgap measure while an entirely new web platform was being built from the ground up.  That platform is ASP.NET 5 with MVC 6, which merges Web API with MVC into a unified model with shared infrastructure for things like routing and dependency injection.  While OWIN web hosting for Web API retained a dependency on System.Web.dll (along with significant per-request overhead), ASP.NET 5 offers complete liberation from the shackles of legacy ASP.NET.


More importantly, ASP.NET 5 was designed to be lightweight, modular and portable across Windows, Mac OSX and Linux.  It can run on a scaled down version of the .NET Framework, called .NET Core, which is also cross-platform and consists of both a runtime and a set of base class libraries, both of which are bin-deployable so they can be upgraded without affecting other .NET apps on the same machine.  All of this is intended to make ASP.NET 5 cloud-friendly and suitable for a microservices architecture using container services such as Docker.

So when can I start building Web API’s with ASP.NET 5 and MVC 6?

The answer is: right now!  When RC1 of ASP.NET 5 was released in Nov 2015, it came with a “go-live” license and permission to use it in a production environment with full support from Microsoft.


Instead of hosting on IIS, which of course only runs on Windows, you’ll want to take advantage of Kestrel, a high-performance cross-platform host that clocks in at over 700,000 requests per second, which is about 5 times faster than NodeJs using the same benchmark parameters.

Shiny new tools

Not only has Microsoft opened up to deploying ASP.NET 5 apps on non-Windows platforms, it has also come out with a new cross-platform code editor called Visual Studio Code, which you can use to develop both ASP.NET 5 and NodeJs apps on Mac OSX, Linux or Windows.

ASP.NET 5 is also released as open source and is hosted on GitHub, where you can clone the repositories, ask questions, submit bug reports, and even contribute to the code base.

If you’re interested in learning more about developing cross-platform web apps for ASP.NET 5, be sure to check out my 4-part blog series on building ASP.NET 5 apps for Mac OSX and Linux and deploying them to the Cloud using Docker containers.


In summary, you should avoid WCF if you want to develop REST-ful web services with libraries and tools that support modern development approaches and can be readily consumed by a variety of clients, including web and mobile applications.  However, you’re going to want to skip right over ASP.NET Web API and go straight to ASP.NET Core, so that you can build cross-platform web services that are entirely host-independent and can achieve economies of scale when deployed to the Cloud.
