Simple MVVM Toolkit – It Lives!

Update: Version 1.0 of Simple MVVM Toolkit Express has now been released and is based on version 1.0 RTM of .NET Core: https://www.nuget.org/packages/SimpleMvvmToolkit.Express. Source code and samples can be found here: https://github.com/SimpleMvvm

Now that .NET Core has stabilized and RC2 has been released, and the .NET Platform Standard has been proposed to replace Portable Class Libraries, I thought it would be a good idea to port my Simple MVVM Toolkit to .NET Core and provide support for additional platforms, such as the Universal Windows Platform and the latest version of Xamarin for cross-platform mobile apps, including iOS and Android.

dotnet-core.png

Rather than update my existing repository, I decided it was time for a fresh start.  So I created a new project on GitHub called Simple MVVM Toolkit Express.  It is compatible with the following platforms:

  • Portable Class Libraries: Profile 111 – .NET 4.5, AspNet Core 1.0, Windows 8, Windows Phone 8.1
  • .NET Framework 4.6
  • Universal Windows Platform 10.0
  • Mono/Xamarin: MonoAndroid60, XamariniOS10
  • .NET Core 1.0: NetStandard 1.3

I decided to break compatibility with the following legacy frameworks: .NET 4.0 and Silverlight.

The toolkit has all the major features of the classic version, including classes for models and view models, support for validation and editing with rollbacks, as well as a leak-proof message bus (aka mediator or event aggregator).  Platform-specific threading implementations have been removed, because it’s better to use C#’s built-in async support.
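For example, a view model built with the toolkit might look something like this (a minimal sketch: the lambda-based NotifyPropertyChanged member comes from the toolkit’s ViewModelBase, while the CustomerViewModel class, its property, and the using directive are assumptions for illustration):

using SimpleMvvmToolkit;

public class CustomerViewModel : ViewModelBase<CustomerViewModel>
{
    private string _customerName;

    public string CustomerName
    {
        get { return _customerName; }
        set
        {
            _customerName = value;
            // Lambda-based property change notification avoids magic strings
            NotifyPropertyChanged(m => m.CustomerName);
        }
    }
}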

I published a pre-release NuGet package, which you can find here: https://www.nuget.org/packages/SimpleMvvmToolkit.Express.  And I’ve created samples for WPF, UWP and Xamarin, which you can find on the SimpleMvvm home repository.

I used the dotnet CLI (command-line interface) tool chain to build the project and generate a multi-targeted NuGet package, but I had to modify the generated nuspec file to work around some compatibility issues.  In the end, it was a great learning experience, and I found it reassuring that I could continue to use a popular framework for building many different kinds of client applications using the Model-View-ViewModel design pattern.

Happy coding!

Posted in Technical | 11 Comments

Getting Visual Studio Code Ready for TypeScript: Part 3

Part 3: Injecting Scripts with Gulp

This is the third part in a series of blog posts on Getting Visual Studio Code Ready for TypeScript:

  1. Compiling TypeScript to JavaScript
  2. Writing Jasmine Tests in TypeScript
  3. Injecting Scripts with Gulp (this post)

Leveraging Gulp

In the first and second posts in this series I showed how you can use Gulp to automate common tasks, such as compiling TypeScript to JavaScript and running Jasmine tests in a browser.  While Gulp is not strictly necessary to perform these tasks, it allows you to chain together multiple tasks, which can give you a smoother workflow.

gulp-partial

You can download a sample project with code for this blog post.  You can also download my Yeoman generator for scaffolding new TypeScript projects for Visual Studio Code.

For example, we defined a “watch” task with a dependency on the “compile” task, so that Gulp performs a compilation before watching for changes in any TypeScript files.  When changes are detected, the “compile” task is then re-executed.

gulp.task('compile', function () {

    exec('rm -rf dist && tsc -p src');
});

gulp.task('watch', ['compile'], function () {

    return gulp.watch('./src/**/*.ts', ['compile']);
});

Likewise, we defined a “test” task with a dependency on the “watch” task, so that changes to any TypeScript files will cause browser-sync to reload the browser when it detects that the JavaScript files have been re-generated.

gulp.task('test', ['watch'], function () {

    var options = {
        port: 3000,
        server: './',
        files: ['./dist/**/*.js',
                './dist/**/*.spec.js',
                '!./dist/**/*.js.map'],
        // Remaining options elided for clarity
    };

    browserSync(options);
});

Listing Tasks

While VS Code allows you to execute gulp tasks from within the editor, you may sometimes prefer to use Gulp from the Terminal (if for no other reason than to see all the pretty colors).  To make this easier, we can use a plugin that will list all the tasks we’ve defined in our gulpfile.js.  But before we get into that, we can simplify things with a plugin called gulp-load-plugins, which relieves us from having to define a separate variable for each plugin we wish to use.  All we need to do is define a $ variable, then use it to invoke any of the gulp plugins we’ve installed.

var $ = require('gulp-load-plugins')({ lazy: true });

To list tasks in gulpfile.js, we can define a “help” task which uses the gulp-task-listing plugin to list all of our tasks.  We’ll follow a convention which uses a colon in the task name to designate it as a sub-task.  We can also define a “default” task which calls the “help” task when a user enters “gulp” in the Terminal with no parameters.

gulp.task('help', $.taskListing.withFilters(/:/));
gulp.task('default', ['help']);

You’ll need to install both Gulp plugins using npm.

npm install --save-dev gulp-load-plugins gulp-task-listing

Then open the Terminal, type “gulp” (no quotes) and press Enter.  You should see a list of tasks displayed.  To execute a task, simply type “gulp” followed by a space and the name of the task.
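For example, to execute the “test” task defined earlier:

gulp test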

gulp-help

Injecting Scripts

In my last blog post I described how you can run Jasmine tests in a browser by serving up an HTML file which includes both source and spec JavaScript files.  But this required you to manually insert script tags into SpecRunner.html.  You might have wondered whether there’s a way to inject scripts into the spec runner automatically whenever you execute the “test” task.  Well, it just so happens there’s a plugin for that!™  It’s appropriately called gulp-inject (installed with npm install --save-dev gulp-inject), and you can add an injectScripts function to gulpfile.js which will inject scripts into SpecRunner.html based on globs for source and spec files.

// gulp-inject is loaded lazily by gulp-load-plugins as $.inject

function injectScripts(src, label) {

    var options = { read: false, addRootSlash: false };
    if (label) {
        options.name = 'inject:' + label;
    }
    return $.inject(gulp.src(src), options);
}

Now add a “specs:inject” gulp task which calls injectScripts to insert the source and spec scripts.  Because we only intend to call this task from other tasks, we can classify it as a sub-task by inserting a colon in the task name.

gulp.task('specs:inject', function () {

    var source = [
        './dist/**/*.js',
        '!./dist/**/*.js.map',
        '!./dist/**/*.spec.js'];

    var specs = ['./dist/**/*.spec.js'];

    return gulp
        .src('./specrunner.html')
        .pipe(injectScripts(source, ''))
        .pipe(injectScripts(specs, 'specs'))
        .pipe(gulp.dest('./'));
});

The gulp-inject plugin will insert the selected scripts at each location, based on a comment corresponding to the specified label.  Simply edit SpecRunner.html to replace the hard-coded script tags with specially formatted comments.  After running the “specs:inject” task, you should see the appropriate scripts inserted at these locations:

<!-- inject:js -->
<!-- endinject -->

<!-- inject:specs:js -->
<!-- endinject -->

Injecting Imports

In addition to inserting source and spec scripts, you’ll also want to inject System.import statements into the spec runner so that system.js can provide browser support for module loading.  For that you’ll need to install packages for glob, gulp-rename, and gulp-inject-string (path is a built-in Node module and needs no install), then add an injectImports function to gulpfile.js.
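You can install these as dev dependencies.

npm install --save-dev glob gulp-rename gulp-inject-string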

var glob = require('glob');
var path = require('path');

function injectImports(src, label) {

    var search = '/// inject:' + label;
    var first = '\n    System.import(\'';
    var last = '\'),';
    var specNames = [];

    src.forEach(function(pattern) {
        glob.sync(pattern)
            .forEach(function(file) {
                var fileName = path.basename(file, path.extname(file));
                var specName = path.join(path.dirname(file), fileName);
                specNames.push(first + specName + last);
            });
    });

    return $.injectString.after(search, specNames);
}

Then add an “imports:inject” task which calls injectImports to insert system imports into a file called system.imports.js.

gulp.task('imports:inject', function () {

    return gulp
        .src('./util/system.template.js')
        .pipe(injectImports(['./dist/**/*.spec.js'], 'import'))
        .pipe($.rename('./util/system.imports.js'))
        .pipe(gulp.dest('./'));
});

Modify SpecRunner.html to replace the script that uses System.import with a reference to system.imports.js.

<script src="util/system.imports.js"></script>

When you execute the “imports:inject” gulp task, it will search a file called system.template.js for a triple-slash comment with the text “inject:import”, where it will inject imports for each spec file.  The result will be written to system.imports.js.

Promise.all([
    /// inject:import
    System.import('dist/greeter/greeter.spec'),
    System.import('dist/italiangreeter/italiangreeter.spec')
]);

Lastly, you need to update the “test” task in gulpfile.js to add the two sub-tasks for injecting scripts and imports. This will ensure they are executed each time you run your tests.

gulp.task('test', ['specs:inject', 'imports:inject', 'watch'], function () {
    // Remaining body unchanged from the earlier "test" task
});

Debugging Gulp Tasks

If you run into problems with any gulp tasks, it would help if you could set breakpoints in gulpfile.js, launch a debugger and step through your code to see what went wrong.  You can do this in VS Code by adding an entry to the “configurations” section of your launch.json file, in which you invoke gulp.js and pass a task name.

{
    "name": "Debug Gulp Task",
    "type": "node",
    "request": "launch",
    "program": "${workspaceRoot}/node_modules/gulp/bin/gulp.js",
    "stopOnEntry": false,
    "args": [
        // Replace with name of gulp task to run
        "imports:inject"
    ],
    "cwd": "${workspaceRoot}"
}

If you set a breakpoint in the “imports:inject” task, select “Debug Gulp Task” from the drop down in the Debug view in VS Code and press F5, it will launch the debugger and stop at the breakpoint you set.  You can then press F10 (step over) or F11 (step into), view local variables and add watches.

gulp-debug

Learning Gulp

If you would like to learn more about Gulp, I highly recommend John Papa’s Pluralsight course on Gulp, where he explains how to use Gulp to perform various build automation tasks, such as bundling, minification, versioning and integration testing. While the learning curve may appear steep at first, Gulp will make your life easier in the long run by automating repetitive tasks and allowing you to chain them together for a streamlined development workflow.

Posted in Technical | Tagged , | Leave a comment

Getting Visual Studio Code Ready for TypeScript: Part 2

Part 2: Writing Jasmine Tests in TypeScript

This is the second part in a series of blog posts on Getting Visual Studio Code Ready for TypeScript:

  1. Compiling TypeScript to JavaScript
  2. Writing Jasmine Tests in TypeScript (this post)

Jasmine vs Mocha + Chai + Sinon

jasmine.png

There are numerous JavaScript testing frameworks, but two of the most popular are Jasmine and Mocha.  I won’t perform a side-by-side comparison here, but the main difference is that Mocha does not come with built-in assertion and mocking libraries, so you need to plug in an assertion library, such as Chai, and a mocking library, such as Sinon.  Jasmine, on the other hand, includes its own API for assertions and mocks.  So if you want to keep things simple with fewer moving parts, and you don’t need the extra features offered by libraries such as Chai and Sinon, Jasmine might be a more appealing option.  If, on the other hand, you want more flexibility and control, and you want features offered by dedicated assertion and mocking libraries, you might want to opt for Mocha.  For simplicity I’m going to stick with Jasmine in this blog post, but feel free to use Mocha if that better suits your purpose.

You can download a sample project with code for this blog post.  You can also download my Yeoman generator for scaffolding new TypeScript projects for Visual Studio Code.

Update: Since first publishing this blog post, I added a section on Debugging Jasmine Tests in VS Code, which you can find at the end of the article.

Using Jasmine with TypeScript

Jasmine is a behavior-driven development testing framework, which allows you to define test suites through one or more nested describe functions.  Each describe function accepts a string argument with the name of the test suite, which is usually the name of the class or method you are testing.  A test suite consists of one or more specs, formulated as a series of it functions, in which you specify expected behaviors.

Let’s say you have a Greeter class written in TypeScript, and it has a greet function.

namespace HelloTypeScript {
    export class Greeter {
        constructor(public message: string) {

        }
        greet(): string {
            return "Hello " + this.message;
        }
    }
}

Notice that Greeter is defined within the namespace HelloTypeScript and is qualified with the export keyword.  This removes Greeter from the global scope so that we can avoid potential name collisions.

To use Jasmine we’ll need to install jasmine-core (not jasmine) using npm (Node Package Manager).  Because we’re only using Jasmine at development time, we’ll save it to the package.json file using the --save-dev argument.

npm install --save-dev jasmine-core

Intellisense for Jasmine

To allow Visual Studio Code to provide intellisense for Jasmine, you’ll need to install type definitions using Typings, which replaces the deprecated TSD tool from DefinitelyTyped.

npm install -g typings

Use Typings to install type definitions for jasmine.

typings install jasmine --save-dev --ambient

This command will result in the addition of a typings folder in your project, which contains a main.d.ts file with references to installed type definitions.  The --save-dev argument will persist the specified typing as a dev dependency in a typings.json file, so that you can re-install the typings later.  The --ambient argument is required to include DefinitelyTyped in the lookup.

Now you’re ready to add your first Jasmine test.  By convention you should use the same name as the TypeScript file you’re testing, but with a .spec suffix.  For example, the test for greeter.ts should be called greeter.spec.ts and be placed in the same folder.

/// <reference path="../../typings/main.d.ts" />

describe("Greeter", () => {

    describe("greet", () => {

        it("returns Hello World", () => {

            // Arrange
            let greeter = new HelloTypeScript.Greeter("World");

            // Act
            let result = greeter.greet();

            // Assert
            expect(result).toEqual("Hello World");
        });
    });
});

The triple-slash reference is needed for intellisense to light up.  Without it you’ll see red squigglies, and VS Code will complain that it cannot find the name ‘describe’.

When you press Cmd+B to compile your TypeScript code, you will see a greeter.spec.js file in the dist/greeter directory, where the greeter.js file is also located.  You’ll also see a greeter.spec.js.map file to enable debugging of your Jasmine test.  (See the first post in this blog series for information on how to configure VS Code for compiling and debugging TypeScript.)

Running Jasmine Tests

To run your Jasmine tests in a browser, go to the latest release for Jasmine and download the jasmine-standalone zip file.  After extracting the contents of the zip file, copy both the lib folder and SpecRunner.html file to your project folder.  Edit the html file to include both the source and spec files.

<!-- include source files here... -->
<script src="dist/greeter/greeter.js"></script>

<!-- include spec files here... -->
<script src="dist/greeter/greeter.spec.js"></script>

You can then simply open SpecRunner.html in Finder (Mac) or File Explorer (Windows) to see the test results.

jasmine-file.png

If you change Greeter.greet to return “Goodbye” instead of “Hello”, then compile and refresh the browser, you’ll see that the test now fails.

jasmine-file-fail.png

Running Tests Automatically

Having to refresh the browser to see test results can become tedious, so it’s a good idea to serve your tests over HTTP.  To help with this you can use a task runner such as Gulp, which integrates nicely with VS Code, together with an HTTP server such as BrowserSync.

gulp.png

First, you’ll want to install gulp and browser-sync locally.

npm install --save-dev gulp browser-sync

Next, add a gulpfile.js file to the project, in which you’ll define tasks for compiling TypeScript to JavaScript, as well as watching TypeScript files and recompiling them when there are changes.

var gulp = require('gulp');
var exec = require('child_process').exec;
var browserSync = require('browser-sync');

gulp.task('compile', function () {
    exec('rm -rf dist && tsc -p src');
});

gulp.task('watch', ['compile'], function () {
    return gulp.watch('./src/**/*.ts', ['compile']);
});

To run either the compile or watch tasks, we can execute them from a Terminal or Command Prompt.  (You can also run tasks in VS Code by pressing Cmd+P, typing “task ” [no quotes] and entering the task name).

gulp watch

gulp-watch.png

If you change a TypeScript file, the gulp watch task will detect the change and execute the compile task.  You can then add a gulp task which serves both .js and .spec.js files in a browser.

gulp.task('test', ['watch'], function () {

    var options = {
        port: 3000,
        server: './',
        files: ['./dist/**/*.js',
                './dist/**/*.spec.js',
                '!./dist/**/*.js.map'],
        logFileChanges: true,
        logLevel: 'info',
        logPrefix: 'spec-runner',
        notify: true,
        reloadDelay: 1000,
        startPath: 'SpecRunner.html'
    };

    browserSync(options);
});

Running Tests in VS Code

It is possible to wire up the gulp test task so that it runs in response to pressing Cmd+T.  To set up VS Code both for compiling TypeScript and running tests, press Cmd+Shift+P, type “config” and select Configure Task Runner. Replace the default content for tasks.json with the following:

{
    "version": "0.1.0",
    "command": "gulp",
    "isShellCommand": true,
    "args": [
        "--no-color"
    ],
    "tasks": [
        {
            "taskName": "compile",
            "isBuildCommand": true,
            "showOutput": "silent",
            "problemMatcher": "$gulp-tsc"
        },
        {
            "taskName": "test",
            "isTestCommand": true,
            "showOutput": "always"
        }
    ]
}

Pressing Cmd+B will compile your TypeScript files, and pressing Cmd+T will serve your Jasmine tests in a browser, automatically refreshing the browser each time any of your TypeScript files changes.

Using Modules

To improve encapsulation TypeScript supports the use of modules, which are executed in their own scope, not in the global scope.  Various constructs, such as variables, functions, interfaces and classes, are not visible outside a module unless they are explicitly exported.  For example, we could define an ItalianGreeter class with an export statement.

export default class ItalianGreeter {
    constructor(public message: string) {

    }
    greet(): string {
        return "Ciao " + this.message;
    }
}

The Jasmine test for ItalianGreeter would then require an import statement.

import ItalianGreeter from "./italiangreeter";

let greeter = new ItalianGreeter("World");

// Remaining code elided for clarity

To use modules in TypeScript you’ll need to specify a module loader in your tsconfig.json file.  For a TypeScript library or node.js app, you would select commonjs.

{
    "compilerOptions": {
        "module": "commonjs",

        // Remaining code elided for clarity

At this point your TypeScript will compile, but the additional tests will not show up in SpecRunner.html, even after you include scripts for the source and spec files.  The reason is that you need SystemJS, a polyfill that provides browser support for module loading, which is a feature of ECMAScript 2015.  First add systemjs to your project.

npm install --save-dev systemjs

Then add these two scripts to SpecRunner.html.

<script src="node_modules/systemjs/dist/system.js"></script>
<script>
    System.config({ packages: { 'dist': {defaultExtension: 'js'}}});
    Promise.all([
        System.import('dist/greeter/greeter.spec'),
        System.import('dist/italiangreeter/italiangreeter.spec'),
    ]);
</script>

Pressing Cmd+T will now also serve italiangreeter.spec.js, which imports the ItalianGreeter class.

tests-systemjs

Stopping Tests in VS Code

You can terminate the test task by pressing Cmd+Shift+P and selecting Terminate Running Task.  Because this is something you’ll do often, you might want to add a keyboard shortcut for it.  From the Code menu select Preferences / Keyboard Shortcuts, then add the following binding, which will terminate the running task by pressing Cmd+Shift+X.

[
    { "key": "shift+cmd+x", "command": "workbench.action.tasks.terminate" }
]

Debugging Tests in VS Code

While it may be useful to run Jasmine tests in a browser, there are times when you need to launch a debugger and step through your code one line at a time.  Visual Studio Code makes it relatively painless to debug your tests.  First you’ll need to install jasmine-node using npm.

npm install --save-dev jasmine-node

Then add the following entry to the “configurations” section of your launch.json file.

{
    "name": "Debug Tests",
    "type": "node",
    "request": "launch",
    "program": "${workspaceRoot}/node_modules/jasmine-node/bin/jasmine-node",
    "stopOnEntry": false,
    "args": [
        "dist",
        "--verbose"
    ],
    "cwd": "${workspaceRoot}",
    "sourceMaps": true,
    "outDir": "${workspaceRoot}/dist"
}

Press Cmd+Shift+D to view the Debugging pane in VS Code and select “Debug Tests” from the dropdown.  Then set a breakpoint (pressing F9 will do the trick), and press F5 to launch the debugger.  Execution should pause at the breakpoint, allowing you to step through your code.

debug-ts-tests.png

What’s Next?

In this post I showed how to write Jasmine tests in TypeScript and serve them in a browser by running a Gulp task either from the Terminal or in Visual Studio Code.  This has the advantage of automatically compiling TypeScript files and refreshing the browser whenever a source or spec file has changed.  While this works well at development time, you’ll need to use a test runner such as Karma if you want to execute tests on a continuous integration server when commits are pushed to a remote repository.  I’ll address this issue in my next post.

Posted in Technical | Tagged , | 1 Comment

Getting Visual Studio Code Ready for TypeScript

Part 1: Compiling TypeScript to JavaScript

This is the first part in a series of blog posts on Getting Visual Studio Code Ready for TypeScript:

  1. Compiling TypeScript to JavaScript (this post)
  2. Writing Jasmine Tests in TypeScript

Why TypeScript?

In case you’re new to TypeScript, Wikipedia defines TypeScript in the following way (paraphrased):

TypeScript is designed for development of large applications and transcompiles to JavaScript. It is a strict superset of JavaScript (any existing JavaScript programs are also valid TypeScript programs), and it adds optional static typing and class-based object-oriented programming to the JavaScript language.

Coming from a C# background, I was attracted to TypeScript, first because it is the brainchild of Anders Hejlsberg, who also invented the C# programming language, so I can have confidence it has been well designed; and second because I like to rely on the compiler to catch errors while I am writing code.  While TypeScript embraces all the features of ECMAScript 2015, such as modules, classes, promises and arrow functions, it adds type annotations which allow code editors to provide syntax checking and intellisense, making it easier to use the good parts of JavaScript while avoiding the bad.

You can download a sample project with code for this blog post.  You can also download my Yeoman generator for scaffolding new TypeScript projects for Visual Studio Code.

ts-logo.jpg

Why Visual Studio Code?

Once I decided to embark on the adventure of learning TypeScript, the next question was: What development tools should I use?

I’ve spent the better part of my career with Microsoft Visual Studio, and I enjoy all the bells and whistles it provides.  But all those fancy designers come at a cost, both in terms of disk space and RAM, and even installing or updating VS 2015 can take quite a while.  To illustrate, here is a joke I recently told a friend of mine:

I like Visual Studio because I can use it to justify to my company why I need to buy better hardware, so I can run VS and get acceptable performance. That’s how I ended up with a 1 TB SSD and 16 GB of RAM — thank you Visual Studio! 👏

I also own a MacBook Air, mainly because of Apple’s superior hardware, and run a Windows 10 virtual machine so that I can use Visual Studio and Office.  But I thought it would be nice to be able to write TypeScript directly on my Mac without having to spin up a Windows VM, which can drain my laptop’s battery.  So I thought I would give Visual Studio Code a try.

But before I started with VS Code, I decided to go back to Visual Studio and create a simple TypeScript project with support for unit testing with Jasmine, which is a popular JavaScript unit testing framework.  It turns out the experience was relatively painless, but I still had to do a lot of manual setup, which entailed creating a new TypeScript project in Visual Studio, deleting the files that were provided, installing NuGet packages for AspNet.Mvc and JasmineTest, then adding a bare-bones controller and a view which I adapted from the spec runner supplied by Jasmine.

You can download the code for a sample VS 2015 TypeScript project from my Demo.VS2015.TypeScript repository on GitHub.

Visual Studio 2015 still required me to do some work to create a basic TypeScript project with some unit tests, and if I wanted to add other features, such as linting my TypeScript or automatically refreshing the browser when I changed my code, then I would have to use npm or a task runner such as Grunt or Gulp. This helped tip the scales for me in favor of Visual Studio Code.

why-vs-code.png

VS Code is actually positioned as something between a simple code editor, such as Atom, Brackets or Sublime Text, and a full-fledged IDE like Visual Studio or WebStorm.  The main difference is that VS Code lacks a “File, New Project” command for creating a new type of project with all the necessary files.  This means you either have to start from scratch or select a Yeoman generator to scaffold a new project.

I decided to start from scratch, because I like pain. (OK, I’m just kidding.)

The truth is, I couldn’t find an existing generator that met my needs, and I wanted to learn all I could from the experience of getting VS Code ready for TypeScript.  The result was a sample project on GitHub (Demo.VSCode.TypeScript) and a Yeoman generator (tonysneed-vscode-typescript) for scaffolding new TypeScript projects.

Compiling TypeScript to JavaScript

My first goal was to compile TypeScript into JavaScript with sourcemaps for debugging and type definitions for intellisense.  This turned out to be much more challenging than I thought it would be.  I discovered that the gulp-typescript plugin did not handle relative paths very well, so instead I relied on npm (Node Package Manager) to invoke the TypeScript compiler directly, setting the project parameter to the ‘src’ directory in which I placed my tsconfig.json file.  This allowed for specifying a ‘dist’ output directory and preserving the directory structure in ‘src’.  To compile TypeScript using a gulp task, all I had to do was execute the ‘tsc’ script.

/**
 * Compile TypeScript
 */
gulp.task('typescript-compile', ['vet:typescript', 'clean:generated'], function () {

    log('Compiling TypeScript');
    exec('node_modules/typescript/bin/tsc -p src');
});

Here is the content of the ‘tsconfig.json’ file. Note that both ‘rootDir’ and ‘outDir’ must be set in order to preserve directory structure in the ‘dist’ folder.

{
    "compilerOptions": {
        "module": "commonjs",
        "target": "es5",
        "sourceMap": true,
        "declaration": true,
        "removeComments": true,
        "noImplicitAny": true,
        "rootDir": ".",
        "outDir": "../dist"
    },
    "exclude": [
        "node_modules"
    ]
}

Debugging TypeScript

I could then enable debugging of TypeScript in Visual Studio Code by adding a ‘launch.json’ file to the ‘.vscode’ directory and including a configuration for debugging the currently selected TypeScript file.

{
    "name": "Debug Current TypeScript File",
    "type": "node",
    "request": "launch",
    // File currently being viewed
    "program": "${file}",
    "stopOnEntry": true,
    "args": [],
    "cwd": ".",
    "sourceMaps": true,
    "outDir": "dist"
}

Then I could simply open ‘greeter.ts’ and press F5 to launch the debugger and break on the first line.

vsc-debugger.png

Linting TypeScript

While compiling and debugging TypeScript was a good first step, I also wanted to be able to lint my code using tslint.  So I added a gulp task called ‘vet:typescript’ and configured my ‘typescript-compile’ task to depend on it.  The result was that if, for example, I removed a semicolon from my Greeter class and compiled the project from the terminal, I would see a linting error displayed.
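The ‘vet:typescript’ task itself isn’t shown above; a minimal sketch using the gulp-tslint plugin (installed with npm install --save-dev tslint gulp-tslint) might look like this:

var tslint = require('gulp-tslint');

/**
 * Lint TypeScript
 */
gulp.task('vet:typescript', function () {

    return gulp
        .src('./src/**/*.ts')
        .pipe(tslint())
        .pipe(tslint.report('verbose'));
});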

lint-error.png

Configuring the Build Task

I also wanted to be able to compile TypeScript simply by pressing Cmd+B.  That was easy because VS Code will use a Gulpfile if one is present.  Simply specify ‘gulp’ for the command and ‘typescript-compile’ for the task name, then set ‘isBuildCommand’ to true.

{
    "version": "0.1.0",
    "command": "gulp",
    "isShellCommand": true,
    "args": [
        "--no-color"
    ],
    "tasks": [
        {
            "taskName": "typescript-compile",
            "isBuildCommand": true,
            "showOutput": "always",
            "problemMatcher": "$gulp-tsc"
        }
    ]
}

Adding a Watch Task

Lastly, I thought it would be cool to run a task that watches my TypeScript files for changes and automatically re-compiles them.  So I added yet another gulp task, called ‘typescript-watch’, which first compiles the .ts files, then watches for changes.

/**
 * Watch and compile TypeScript
 */
gulp.task('typescript-watch', ['typescript-compile'], function () {
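    // config.ts.files holds the glob patterns for the project's TypeScript sources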

    return gulp.watch(config.ts.files, ['typescript-compile']);
});

I could then execute this task from the command line. Here you can see output shown in the terminal when a semicolon is removed from a .ts file.

tsc-watch.png

It is also possible to execute a gulp task from within VS Code.  Press Cmd+P, type ‘task’ and hit the spacebar to see the available gulp tasks.  You can select a task by typing part of the name, then press Enter to execute the task.

vscode-tasks.png

Using a Yeoman Generator

While it’s fun to set up a new TypeScript project with Visual Studio Code from scratch, an easier way is to scaffold a new project using a Yeoman generator, which is the equivalent of executing File, New Project in Visual Studio.  That’s why I built a Yeoman generator called tonysneed-vscode-typescript, which gives you a ready-made TypeScript project with support for unit testing with Jasmine and Karma.  (I’ll explain more about JavaScript testing frameworks in the next part of this series.)

yeoman-logo.png

To get started using Yeoman, you’ll need to install Yeoman with the Node Package Manager.

npm install -g yo

Next install the tonysneed-vscode-typescript Yeoman generator.

npm install -g generator-tonysneed-vscode-typescript

To use the generator you should first create the directory where you wish to place your scaffolded TypeScript project.

mkdir MyCoolTypeScriptProject
cd MyCoolTypeScriptProject

Then simply run the Yeoman generator.

yo tonysneed-vscode-typescript

To view optional arguments, you can append --help to the command.  Another option is to skip installation of npm dependencies by supplying an argument of --skip-install, in which case you can install the dependencies later by executing npm install from the terminal.

In response to the prompt for Application Name, you can either press Enter to accept the default name, based on the current directory name, or enter a new application name.

yo-ts-vsc-typescript.png

Once the generator has scaffolded your project, you can open it in Visual Studio Code from the terminal.

code .

After opening the project in Visual Studio Code, you will see TypeScript files located in the src directory.  You can compile the TypeScript files into JavaScript simply by pressing Cmd+B, at which point a dist folder should appear containing the transpiled JavaScript files.

For the next post in this series I will explain how you can add unit tests to your TypeScript project, and how you can configure test runners that can be run locally as well as incorporated into your build process for continuous integration.

Posted in Technical | Tagged , , | 10 Comments

Using EF6 with ASP.NET MVC Core 1.0 (aka MVC 6)

This week Microsoft announced that it is renaming ASP.NET 5 to ASP.NET Core 1.0.  In general I think this is a very good step.  Incrementing the version number from 4 to 5 for ASP.NET gave the impression that ASP.NET 5 was a continuation of the prior version and that a clean migration path would exist for upgrading apps from ASP.NET 4 to 5.  However, this did not reflect the reality that ASP.NET 5 was a completely different animal, re-written from the ground up, and that it has little to do architecturally with its predecessor.  I would even go so far as to say it has more in common with node.js than with ASP.NET 4.

You can download the code for this post from my Demo.AspNetCore.EF6 repository on GitHub.

Entity Framework 7, however, has even less in common with its predecessor than does MVC, making it difficult for developers to figure out whether and when they might wish to make the move to the new data platform.  In fact, EF Core 1.0 is still a work in progress and won’t reach real maturity until well after initial RTM.  So I’m especially happy that EF 7 has been renamed EF Core 1.0, and also that MVC 6 is now named MVC Core 1.0.

The problem I have with the name ASP.NET Core is that it implies some equivalency with .NET Core.  But as you see from the diagram below, ASP.NET Core will not only run cross-platform on .NET Core, but you can also target Windows with .NET Framework 4.6.

aspnetcore-mvc-ef.png

Note: This diagram has been updated to reflect that EF Core 1.0 (aka EF 7) is part of ASP.NET Core 1.0 and can target either .NET 4.6 or .NET Core.

It is extremely important to make this distinction, because there are scenarios in which you would like to take advantage of the capabilities of ASP.NET Core, but you’ll need to run on .NET 4.6 in order to make use of libraries that are not available on .NET Core 1.0.

So why would you want to use ASP.NET Core 1.0 and target .NET 4.6?

As I wrote in my last post, WCF Is Dead and Web API Is Dying – Long Live MVC 6, you should avoid using WCF for greenfield web services, because: 1) it is not friendly to dependency injection, 2) it is overly complicated and difficult to use properly, 3) it was designed primarily for use with SOAP (which has fallen out of favor), and 4) Microsoft appears not to be investing further in WCF.  I also mentioned you should avoid ASP.NET Web API because it has an outdated request pipeline, which does not allow you to apply cross-cutting concerns, such as logging or security, across multiple downstream web frameworks (Web API, MVC, Nancy, etc).  OWIN and Katana were introduced in order to correct this deficiency, but those should be viewed as temporary remedies prior to the release of ASP.NET Core 1.0, which has the same pipeline model as OWIN.

The other important advantage of ASP.NET Core is that it completely decouples you from WCF, IIS and System.Web.dll.  It was kind of a dirty secret that under the covers ASP.NET Web API used WCF for self-hosting, and you would have to configure the WCF binding if you wanted to implement things like transport security.  ASP.NET Core has a more flexible hosting model that has no dependence on WCF or System.Web.dll (which carries significant per-request overhead), whether you choose to host in IIS on Windows, or cross-platform in Kestrel on Windows, Mac or Linux.

A good example of why you would want to use ASP.NET Core 1.0 to target .NET 4.6 would be the ability to use Entity Framework 6.x.  The first release of EF Core, for example, won’t include TPC (table-per-concrete-type) inheritance or many-to-many relations without extra entities.  As Rowan Miller, a program manager on the EF team, stated:

We won’t be pushing EF7 as the ‘go-to release’ for all platforms at the time of the initial release to support ASP.NET 5. EF7 will be the default data stack for ASP.NET 5 applications, but we will not recommend it as an alternative to EF6 in other applications until we have more functionality implemented.

This means if you are building greenfield web services, but still require the full capabilities of EF 6.x, you’ll want to use ASP.NET MVC Core 1.0 (aka MVC 6) to create Web API’s which depend on .NET 4.6 (by specifying “dnx451” in the project.json file).  This will allow you to add a dependency for the “EntityFramework” NuGet package version “6.1.3-*”, as sketched below.
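The relevant portions of project.json might look something like this (the EntityFramework version comes from the text above; the MVC package version is illustrative of the RC1 timeframe):

{
  "dependencies": {
    "EntityFramework": "6.1.3-*",
    "Microsoft.AspNet.Mvc": "6.0.0-rc1-final"
  },
  "frameworks": {
    "dnx451": { }
  }
}

The main difference is that you’ll probably put your database connection string in an *.json file rather than a web.config file, or you may specify it as an environment variable or retrieve it from a secrets store.  An appsettings.json file, for example, might contain a connection string for a local database file.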

{
  "Data": {
    "SampleDb": {
      "ConnectionString": "Data Source=(localdb)\\MSSQLLocalDB;AttachDbFilename=|DataDirectory|\\SampleDb.mdf;Integrated Security=True; MultipleActiveResultSets=True"
    }
  }
}

You can then register your DbContext-derived class with the dependency injection system of ASP.NET Core.

public void ConfigureServices(IServiceCollection services)
{
    // Add DbContext
    services.AddScoped(provider =>
    {
        var connectionString = Configuration["Data:SampleDb:ConnectionString"];
        return new SampleDbContext(connectionString);
    });

    // Add framework services.
    services.AddMvc();
}

This will allow you to inject a SampleDbContext into the constructor of any controller in your app.

[Route("api/[controller]")]
public class ProductsController : Controller
{
    private readonly SampleDbContext _dbContext;

    public ProductsController(SampleDbContext dbContext)
    {
        _dbContext = dbContext;
    }

    // Action methods elided for clarity
}

Lastly, you’ll need to provide some information to EF regarding the provider you’re using (for example, SQL Server, Oracle, MySQL, etc).  In a traditional ASP.NET 4.6 app you would have done that in app.config or web.config.  But in ASP.NET Core you’ll want to specify the provider in a class that inherits from DbConfiguration.

public class DbConfig : DbConfiguration
{
    public DbConfig()
    {
        SetProviderServices("System.Data.SqlClient", SqlProviderServices.Instance);
    }
}

Then you can apply a DbConfigurationType attribute to your DbContext-derived class, so that EF can wire it all together.

[DbConfigurationType(typeof(DbConfig))]
public class SampleDbContext : DbContext
{
    public SampleDbContext(string connectionName) :
        base(connectionName) { }

    public DbSet<Product> Products { get; set; }
}

You can download the code for this post from my Demo.AspNetCore.EF6 repository on GitHub.

The primary limitation of targeting .NET 4.6 with EF 6 is that you’ll only be able to deploy your web services on Windows.  The good news, however, is that you’ll be in a great position to migrate from EF 6 to EF Core 1.0 (aka EF 7) as soon as it matures enough to meet your needs.  That’s because the API’s for EF Core are designed to be similar to EF 6.  Then when you do move to EF Core, you’ll be able to use Docker to deploy your web services on Linux VM’s running in a Cloud service such as Amazon EC2, Google Compute Engine, or Microsoft Azure.

Posted in Technical | Tagged , , | 55 Comments

WCF Is Dead and Web API Is Dying – Long Live MVC 6!

Note that ASP.NET 5 has been renamed to ASP.NET Core 1.0 and that MVC 6 is now called MVC Core 1.0.

The time has come to say goodbye to Windows Communication Foundation (WCF).  Yes, there are plenty of WCF apps in the wild — and I’ve built a number of them.  But when it comes to selecting a web services stack for greenfield applications, you should no longer use WCF.

King Is Dead

WCF is dead

There are many reasons why WCF has lost its luster, but the bottom line is that WCF was written for a bygone era and the world has moved on.  There are some narrow use cases where it still might make sense to use WCF, for example, message queuing applications where WCF provides a clean abstraction layer over MSMQ, or inter / intra process applications where using WCF with named pipes is a better choice than .NET Remoting.  But for developing modern web services, WCF is as dead as a doornail.

Didn’t get the memo?  Unfortunately, Microsoft is not in the habit of announcing when they are no longer recommending a specific technology for new application development.  Sometimes there’s a tweet, blog post or press release, as when Bob Muglia famously stated that Microsoft’s Silverlight strategy had “shifted,” but there hasn’t to my knowledge been word from Microsoft that WCF has been quietly deprecated.

One reason might be that countless web services have been built using WCF since its debut in 2006 with .NET 3.0 on Windows Vista, and other frameworks, such as WCF Data Services, WCF RIA Services, and self-hosted Web API’s, have been built on top of WCF.  Also, if you need to interoperate with existing SOAP-based web services, you’re going to want to use WCF rather than handcrafted SOAP messages.

wcf-logo

But it’s fair to say that the vision of a world of interoperable web services based on a widely accepted set of SOAP standards has utterly failed.  The story, however, is not so much about the failure of SOAP to gain wide acceptance, as it is about the success of HTTP as a platform for interconnected services based on the infrastructure of the World Wide Web, which has been codified in an architectural style called REST.  The principal design objective for WCF was to provide a comprehensive platform and toolchain for developing service-oriented applications that are highly configurable and independent of the underlying transport, whereas the goal of REST-ful applications is to leverage the capabilities of HTTP for producing and consuming web services.

But doesn’t WCF support REST?

Yes it does, but aside from the fact that REST support in WCF has always felt tacked on, WCF has problems of its own.  First, WCF in general is way too complicated.  There are too many knobs and dials to turn, and you have to be somewhat of an expert to build WCF services that are secure, performant and scalable.  Many times, for example, I have seen WCF apps configured to use the least performant bindings when it wasn’t necessary.  And setting things up correctly requires advanced knowledge of encoders, multi-threading and concurrency.  Second, WCF was not designed to be friendly to modern development techniques, such as dependency injection, and WCF service types require a custom service behavior to use DI.

webapi-logo

One of the first signs that WCF was in trouble was when the Web API team opted for using ASP.NET MVC rather than WCF for services hosted by IIS (although under the covers, “self-hosted” Web API’s, such as those hosted by Windows services, were still coupled to WCF).  ASP.NET Web API offers a much simpler approach to developing and consuming REST-ful web services, with programmatic control over all aspects of HTTP, and it was designed to play nice with dependency injection for greater flexibility and testability.

Nevertheless, ASP.NET Web API duplicated many aspects of ASP.NET MVC (for example, routing), and it was still coupled to the underlying host.  For example, Web API apps requiring secure communication over TLS / SSL required a different setup depending on whether the app was hosted in IIS or self-hosted.  To address the coupling issue, Microsoft released an implementation of the OWIN specification called Katana, which offers components for building host-independent web apps and a middleware-based pipeline for inserting cross-cutting concerns regardless of the host.
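To illustrate the middleware model, a Katana component for a cross-cutting concern such as request logging might look something like this (a minimal sketch using the OwinMiddleware base class from the Microsoft.Owin package; the class name is hypothetical):

using System.Diagnostics;
using System.Threading.Tasks;
using Microsoft.Owin;

public class RequestLoggingMiddleware : OwinMiddleware
{
    public RequestLoggingMiddleware(OwinMiddleware next) : base(next) { }

    public override async Task Invoke(IOwinContext context)
    {
        // Applied to every request, regardless of which framework handles it downstream
        Trace.WriteLine(context.Request.Method + " " + context.Request.Path);
        await Next.Invoke(context);
    }
}

It would then be registered in the Startup class with app.Use&lt;RequestLoggingMiddleware&gt;(), ahead of Web API, MVC or any other framework in the pipeline.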

fruit-ninja

Web API is dying – Long live MVC 6!

As awesome as ASP.NET Web API and Katana are, they were released mainly as a stopgap measure while an entirely new web platform was being built from the ground up.  That platform is ASP.NET 5 with MVC 6, which merges Web API with MVC into a unified model with shared infrastructure for things like routing and dependency injection.  While OWIN web hosting for Web API retained a dependency on System.Web.dll (along with significant per-request overhead), ASP.NET 5 offers complete liberation from the shackles of legacy ASP.NET.

aspnet5-arch

More importantly, ASP.NET 5 was designed to be lightweight, modular and portable across Windows, Mac OS X and Linux.  It can run on a scaled-down version of the .NET Framework, called .NET Core, which is also cross-platform and consists of both a runtime and a set of base class libraries, both of which are bin-deployable so they can be upgraded without affecting other .NET apps on the same machine.  All of this is intended to make ASP.NET 5 cloud-friendly and suitable for a microservices architecture using container services such as Docker.

So when can I start building Web API’s with ASP.NET 5 and MVC 6?

The answer is: right now!  When RC1 of ASP.NET 5 was released in Nov 2015, it came with a “go-live” license and permission to use it in a production environment with full support from Microsoft.

aspnet5-roadmap

Instead of hosting on IIS, which of course only runs on Windows, you’ll want to take advantage of Kestrel, a high-performance cross-platform host that clocks in at over 700,000 requests per second, which is about 5 times faster than Node.js using the same benchmark parameters.

Shiny new tools

Not only has Microsoft opened up to deploying ASP.NET 5 apps on non-Windows platforms, it has also come out with a new cross-platform code editor called Visual Studio Code, which you can use to develop both ASP.NET 5 and NodeJs apps on Mac OSX, Linux or Windows.

ASP.NET 5 is also released as open source and is hosted on GitHub, where you can clone the repositories, ask questions, submit bug reports, and even contribute to the code base.

If you’re interested in learning more about developing cross-platform web apps for ASP.NET 5, be sure to check out my 4-part blog series on building ASP.NET 5 apps for Mac OS X and Linux and deploying them to the Cloud using Docker containers.

cloud-city

In summary, you should avoid WCF like the plague if you want to develop REST-ful web services with libraries and tools that support modern development approaches and can be readily consumed by a variety of clients, including web and mobile applications.  However, you’re going to want to skip right over ASP.NET Web API and go straight to ASP.NET 5, so that you can build cross-platform web services that are entirely host-independent and can achieve economies of scale when deployed to the Cloud.

Posted in Technical | Tagged , , , , | 60 Comments

How Open Source Changed My Life

2015 was a pivotal year for my life as a developer, due in no small measure to the impact of open source software, both on how I go about writing code and on how I interact with other developers.  If I had to select one word to describe the reason for this, it would be: collaboration.  It’s not just that open source development demands a greater degree of collaboration, but the acceleration of open source as a movement during the past couple of years has actually redefined software development as a highly collaborative process.

Open source

In plain English, this means that software quality depends on how well I can work with others.  However, in the past it hasn’t been very easy to make collaboration a seamless part of software development.  You could do real-time collaboration (also known as pair programming), but few employers were willing to support it, and it was difficult to pull off when team members were scattered across different time zones.

To add insult to injury, centralized version control systems, such as Team Foundation Version Control or Subversion, made common tasks, such as branching and merging, much more arduous.  All of this changed with the widespread adoption of Git, a distributed version control system which makes first-class citizens out of branching and merging.

Git

This has actually changed the way I write code.

To start with, it’s helped to organize my development.  For example, when I want to work on a bug fix or new feature, the first thing I’m going to do is create a branch. This keeps me from working on more than one thing at the same time. But if I do want to multitask, I can stash changes, switch to another branch to do something else, then come back and pick up where I left off.  When I’m working on a branch, the process of committing my changes also forces me to try to better organize my work, by logically grouping my changes into commits.  I can also compare what I’m currently doing to the prior commit to see how refactoring has helped eliminate code smells.
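In Git terms, that multitasking flow might look something like this (branch names are hypothetical):

git stash                          # shelve work in progress
git checkout hotfix/logging-bug    # switch to another branch
# ...fix the bug and commit...
git checkout feature/new-widget    # come back
git stash pop                      # pick up where I left off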

While Git has had an impact on how I personally write code, what has transformed software development into a truly collaborative process has been the rise of code hosting services such as GitHub, Bitbucket or Visual Studio Team Services, which provide tools for implementing a collaboration workflow based on Git, with work isolated into branches, organized into commits and documented with commit messages.

GitHub

If I am working on my own public Git repository, then the workflow might take place as follows:

  1. Open an issue describing a defect or desired feature.  Here I can add comments, insert code snippets, and refer to other issues or specific lines of code.
     
  2. Create a branch for working on the issue. Here I can write code or change existing code, then commit changes with descriptive messages.
     
  3. Publish the branch, at which time GitHub will show a “Compare & Pull Request” button, which I can click to create a pull request.  This will allow other developers who have cloned my repository to create a local branch based on the pull request, so they can look at the code I’ve written.  Other developers can then comment on the pull request, and we can have a discussion about the issue, even referencing specific lines of code.  If I am trying to reproduce a particular bug, I can simply write a failing test, commit changes, then push those changes to the public branch, which allows others to retrieve and run the failing test.  When I fix the defect so that the failing test now passes, I can commit the fix and push the commit so that other developers can pull the commit to see how the fix was performed.
     
  4. Once I’m satisfied with the code, I can merge the feature branch into the main branch (probably develop or master), then push those commits to the public repository.  At that point, the pull request can be closed (GitHub will automatically close it), both the public and private feature branches can then be deleted, and the original issue can be closed (GitHub will automatically close issues based on commit messages).
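In terms of raw Git commands, steps 2 through 4 of this workflow might look something like the following (branch names and the issue number are hypothetical):

git checkout -b feature/issue-42      # create a branch for the issue
git commit -am "Add failing test for issue #42"
git push -u origin feature/issue-42   # publish the branch
# ...open the pull request, discuss, then merge and clean up...
git checkout develop
git merge feature/issue-42
git push origin develop
git branch -d feature/issue-42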

If I am working on someone else’s public Git repository, then the workflow will start off differently:

  1. The first thing I’ll want to do is fork their repository, effectively copying it over to my own GitHub account.
     
  2. I can then clone the forked repo, create a local branch, work on it (for example, write a feature or create a failing test to reproduce an exception), then publish my local branch to my public repo.
     
  3. Once I’ve published a branch I can create a new pull request, which others can then pull to see what I’ve done, without affecting any other work they may be doing.

GitHub Workflow

What Git and GitHub essentially provide is the ability to share code with others and facilitate discussions in a structured workflow. That’s powerful stuff.

But the ability to leverage these tools depends on how widely they’ve been adopted by members of the developer community.  And that means developers are going to need to get out of their comfort zone to learn how to use Git and GitHub (or another hosting service).  The good news is that most popular IDE’s and code editors have decent Git integration, which allows you to perform most Git tasks using a GUI right from within the IDE.  Other Git clients, such as TortoiseGit and SourceTree, ease many tasks, but there are some Git commands, such as interactive rebase, where you’ll need to pull up a terminal window or command prompt.  Interactive rebase can be tricky at first, but it lets you squash certain commits and consolidate messages for a cleaner version history.
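For example, squashing the last three commits (an arbitrary count) starts from the command line:

git rebase -i HEAD~3

Git then opens an editor where you mark the later commits as squash (or s) and consolidate their messages into one.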

One of the things that can trip up Git noobies is not providing a proper .gitignore file with their repo.  GitHub provides .gitignore file templates for various IDE’s.  Not using the correct file will make it difficult for others to build your solution without getting spurious errors.  For example, if you’re using Visual Studio without the correct .gitignore, you may check in the packages, bin and obj folders, which can interfere with restoring NuGet packages when someone else tries to build the solution.

To help you get up to speed on this new collaborative approach to software development, you should check out some of the many free Git tutorials available online.  Then you should bite the bullet and contribute to an open-source project.  Feel intimidated?  Scott Hanselman has created a First Timers Only web site specifically targeted to people who are dipping their toes into the open source waters.

I’ve been privileged to author a couple of popular open source frameworks: Simple MVVM Toolkit and Trackable Entities.  I created the first project prior to embracing Git, but I moved the second project to GitHub early on and have had other developers contribute to the project, which has encouraged me to fully adopt the Git way.  I also had the opportunity to submit some pull requests to Microsoft’s ASP.NET 5 repo on GitHub, where I learned how to rebase my feature branch and resolve conflicts to stay in sync with upstream changes.

Aspnet5 new kid

One of the things that propelled me further into open source has been the way in which Microsoft has jumped on the bandwagon.  Not only have they opened up their code for developers to look at, they have invited others to take part in the process by allowing them to open issues and submit pull requests.  That’s huge.  And it’s a model for how companies large and small stand to benefit from this new way to build software by sharing code and allowing collaboration with the help of tools from Git and GitHub.

I hope your journey into open source is as enriching for you as it has been for me.

Yoda Source

May the Source be with you!

Posted in Technical | Tagged , | Leave a comment