Life Just Got Easier: Simple MVVM Toolkit for Silverlight

If you’ve made the decision to start using the Model-View-ViewModel pattern (abbreviated as MVVM) for Silverlight development, you’re faced with a rather steep learning curve and a scarcity of accepted standards and best practices.  I’ve read no fewer than three Silverlight books. All of them have a chapter on MVVM, in which the pattern is explained and some fairly basic examples are provided.  But when it comes to taking on some controversial issues, such as how to display modal dialogs, they tend to sidestep the issue and recommend that you pick up an MVVM toolkit.

The approach of these authors reflects the reality that a) there is more than one way to do MVVM, and b) you should not attempt to build a serious MVVM app without the aid of a toolkit.  The problem is that, if you are coming up to speed on MVVM, learning the ins and outs of a particular toolkit can be daunting.  What’s needed is a simple MVVM toolkit that comes with a tutorial to take you from square one to building a basic Silverlight application based on the MVVM pattern. Oh and by the way, it would be nice if it included Visual Studio templates and code snippets to make your life easier.

That day has now arrived. I am pleased to announce the Simple MVVM Toolkit for Silverlight, available for download from CodePlex: http://simplemvvmtoolkit.codeplex.com.

It includes a set of helper classes with support for event-based messaging and modal dialogs, an implementation of IEditableDataObject that uses deep copies, and transparent marshaling of events for safe cross-thread calls to update the UI.  It comes with Visual Studio item templates for creating view-models and locators, and code snippets for inserting bindable properties and building out a view-model locator.  While all that is nice, the best part is a tutorial that includes “before and after” versions of a sample application with step-by-step instructions for building an MVVM Silverlight application.  This helps fill the void left by some other toolkits, where documentation and samples may be virtually nonexistent.  I’ve kept things very straightforward in this first phase of the project by implementing a standard n-tier application that uses a basic WCF service to retrieve entities from the Northwind sample database.  In the future I’ll incorporate support for things like dependency injection and WCF RIA Services.

Rather than shying away from some of the more controversial aspects of MVVM, I’ve decided as much as possible to address them directly.  Much of it boils down to your needs and preferences. A good example is going for zero (or almost zero) code-behind.  That might be a laudable goal if you have a team of visual designers using Blend that must be able to work independently from developers.  I’ve found, however, that many shops have developers building the UI.  In this case, achieving zero code-behind results in greater complexity and lowers developer productivity.  On the other hand, putting some UI-related logic in the view still preserves testability and separation of concerns, keeping the application simpler and easier to maintain.  In the Simple MVVM Toolkit I favor the latter approach – although the toolkit does provide some facilities for zero code-behind if you decide to go that route.

Another topic where there are diverging opinions is on how to communicate between various components, such as between the view and view-model or among view-models.  One approach is to use a message bus of some sort.  While that does make sense in many scenarios, especially communicating among view-models, I felt that a simpler approach using events is more practical and easier to use.  For example, when a method in the view-model requires information from the user, it can simply raise an event, and the view can handle the event by showing a dialog and calling back the view-model with the results.  It’s easy to understand and implement, and events can be raised by a helper method in the view-model base class, which guarantees that it fires on the UI thread.
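To make this concrete, here is a minimal sketch of the pattern – the type and member names (CustomerViewModel, ConfirmDeleteRequested) are illustrative, not the toolkit’s actual API:

using System;

public class CustomerViewModel
{
    // The view subscribes to this event and shows a dialog when it fires
    public event EventHandler ConfirmDeleteRequested;

    public void DeleteCustomer()
    {
        // In the toolkit this would be raised via a base-class helper
        // that marshals the call onto the UI thread
        var handler = ConfirmDeleteRequested;
        if (handler != null) handler(this, EventArgs.Empty);
    }

    // The view calls back with the user's answer
    public void OnDeleteConfirmed(bool confirmed)
    {
        if (confirmed)
        {
            // perform the delete
        }
    }
}

// In the view's code-behind:
// viewModel.ConfirmDeleteRequested += (s, e) =>
// {
//     var result = MessageBox.Show("Delete this customer?", "Confirm Delete",
//         MessageBoxButton.OKCancel);
//     viewModel.OnDeleteConfirmed(result == MessageBoxResult.OK);
// };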

Another area where there are different approaches concerns the use of commands.  Many MVVM toolkits include a delegating command, which the view-model uses to expose a command property that an element such as a button can bind to, so that the click event results in calling a method wired up to the command.  My personal feeling is that using commands for every button click (or other events by means of an event-to-command behavior) requires a lot of extra code in the view-model that is basically unnecessary.  While my Simple MVVM Toolkit does have a DelegateCommand for this purpose, I would suggest using it sparingly, instead using an event trigger with a CallMethodAction to invoke a method on the view-model directly.  The only time I would resort to a command would be to pass a parameter to a method, where the parameter cannot be bound to a property in the view-model.
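For that last case, a command property on the view-model might look like the following sketch – Customer is a hypothetical type, and the DelegateCommand&lt;T&gt; constructor shown is typical of delegating commands rather than a guarantee of the toolkit’s exact signature:

public class CustomerListViewModel
{
    public CustomerListViewModel()
    {
        // The bound element's CommandParameter is passed straight through
        DeleteCustomerCommand = new DelegateCommand<Customer>(
            customer => DeleteCustomer(customer));
    }

    public DelegateCommand<Customer> DeleteCustomerCommand { get; private set; }

    private void DeleteCustomer(Customer customer)
    {
        // remove the customer supplied as the command parameter
    }
}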

So go ahead and download the toolkit, take it for a test drive, and let me know what you think.  It’s not intended as the be-all or end-all toolkit, but it should be enough to get you started building Silverlight MVVM applications that maintain a clean separation of concerns, allow for testability and designability (also called Blendability), and can be more easily maintained over the long haul.  Enjoy.



Automatically Upgrading to Visual Studio 2010

I’ve had the need recently to convert a large number of projects from Visual Studio 2008 to Visual Studio 2010.  I was not able to find a tool out there that does this automatically, so I wrote one.  The key is to invoke devenv.exe from the command line and use the /upgrade switch.  Here is a link to the docs on that option.  And here is a Windows Forms app I wrote that searches for all Visual Studio solutions under a root directory and shells out to a command prompt to do the upgrade.
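The heart of that app is just a recursive search for .sln files and a shelled-out call to devenv.exe with the /upgrade switch.  Here is a minimal console sketch – the devenv.exe path is an assumption, so adjust it for your VS 2010 installation:

using System;
using System.Diagnostics;
using System.IO;

class UpgradeSolutions
{
    static void Main(string[] args)
    {
        string root = args.Length > 0 ? args[0] : ".";

        // Assumed default install path for VS 2010; adjust as needed
        string devenv = @"C:\Program Files\Microsoft Visual Studio 10.0" +
            @"\Common7\IDE\devenv.exe";

        // Find every solution under the root directory and upgrade it
        foreach (string solution in Directory.GetFiles(
            root, "*.sln", SearchOption.AllDirectories))
        {
            Console.WriteLine("Upgrading " + solution);
            using (var process = Process.Start(
                devenv, "\"" + solution + "\" /upgrade"))
            {
                process.WaitForExit();
            }
        }
    }
}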

Of course, upgrading to VS 2010 is only half the process.  Usually you also want to re-target the application to .NET 4.  Just upgrading to VS 2010 will leave the application targeted to whatever version of .NET it was using (for VS 2008 that’s usually .NET 3.5).  Help comes in the form of a macro you can install, which retargets all projects in the current solution.  What I could do is incorporate this code into my Upgrade Projects app, but alas, I’m short on time and will have to defer that to a future date.

UPDATE: In the meantime, you can add this macro to Visual Studio (Tools\Macros\Macros IDE, then select My Macros\Module1):

Imports System.Runtime.Versioning
Imports VSLangProj

Public Module Module1

    ' Macro to retarget all C# projects in a solution to .NET Framework 4
    Sub SwitchFramework()
        Dim v40FrameworkName As FrameworkName = _
            New FrameworkName(".NETFramework", New Version(4, 0))

        For Each project As EnvDTE.Project In DTE.Solution.Projects
            If project.Kind = PrjKind.prjKindCSharpProject Then
                project.Properties.Item("TargetFrameworkMoniker").Value = _
                    v40FrameworkName.FullName
            End If
        Next
    End Sub

End Module


Then go to Tools\Options\Environment\Keyboard, type SwitchFramework to locate the macro, and assign it a shortcut, such as Ctrl+Shift+4.


You should now be able to press Ctrl+Shift+4 and the C# projects will re-target to .NET 4 – the full version. Cheers.


Understanding WCF RIA Services

When I first looked at WCF RIA Services, I have to admit I was a bit mystified at what I was seeing.  Compared to a traditional n-tier application, where there is a clean separation and only contract and schema are shared, it seemed like a throwback to client-server application design, where both ends are tightly coupled.  However, after authoring a module on WCF RIA Services for my Exploring .NET course at DevelopMentor, I gained a much better understanding of how it works under the covers and can better appreciate how RIA Services reduces the inherent complexity of n-tier development by offering features, such as end-to-end data validation, that would be difficult to implement on your own.

Some ambiguity may have been created by positioning RIA Services as a Silverlight-based technology.  As far as I can tell, there’s nothing intrinsic to RIA Services that ties it to Silverlight.  Presently, the way you create a RIA Services client is by “linking” a Silverlight project to an ASP.NET web project.  But this just creates a custom MSBuild task that reflects over types defined in the web project and generates code in the linked client project.

(Diagram: RIA Services architecture)

There’s nothing to stop the RIA Services team from opening that up to non-Silverlight clients.  In fact, there’s word that they are already moving in that direction by allowing you to use T4 templates to generate client entities and placing those entities in a separate class library.

I think another source of ambiguity may rise from the description of RIA Services as “RAD for RIA,” with a RIA application being a single logical application that blurs the line separating the presentation and service layers.  While I can appreciate many of the productivity benefits provided by RIA Services, I wince at the comparison to classic two-tier ASP.NET application architecture because in fact RIA Services is still just a WCF Service (I tend to drop the WCF just to save on typing).  There is a client (Silverlight now, but WPF in the future), a middle tier (the domain service), and a back end (entity data model, or custom DAL).

Then there seems to be a bit of magic happening to support features like query composability, change-tracking and batch updates, attributes for presentation and data validation, shared code and async support.  In reality there’s no magic, just the framework providing features that you would otherwise have to build yourself.  The problem is the absence of a decent white paper on RIA Services that explains all of this in detail.  There is a white paper, but it’s just a brief high-level overview.  You’ll get much more information from the chapter on RIA Services in Silverlight 4 in Action by Pete Brown, and from an assortment of blog posts.
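To give a feel for the programming model, here is a minimal sketch of a domain service, assuming the WCF RIA Services V1 namespaces and a hypothetical NorthwindEntities entity model:

using System.Linq;
using System.ServiceModel.DomainServices.EntityFramework;
using System.ServiceModel.DomainServices.Hosting;

// NorthwindEntities and Product are assumptions from an entity data model
[EnableClientAccess]
public class NorthwindDomainService
    : LinqToEntitiesDomainService<NorthwindEntities>
{
    // Returning IQueryable lets the client compose additional query
    // operators, which are serialized into the request
    public IQueryable<Product> GetProducts()
    {
        return ObjectContext.Products;
    }

    // Batched change sets arrive with original values for concurrency checks
    public void UpdateProduct(Product product)
    {
        ObjectContext.Products.AttachAsModified(
            product, ChangeSet.GetOriginal(product));
    }
}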

Since this post is already getting rather long, I will follow it up with a few more posts detailing various aspects of WCF RIA Services.

  1. WCF Service endpoints: binary, SOAP, OData, JSON
  2. DomainService operations (invoke, update, CRUD)
  3. DomainContext methods and async operations
  4. Query composability with IQueryable
  5. Entity metadata (presentation, composition, loading, validation)
  6. Data validation (per property, cross property, server, async)
  7. Security (authentication, authorization)

In the meantime, I welcome comments on what I’ve said so far.  I think RIA Services offers a great deal of features that will make your life easier when developing n-tier applications using Silverlight and (in the future) WPF.  I would even venture to say you should seriously consider using it for Silverlight business applications, in concert with basic WCF SOAP and Data Services (more on that in my next post).


Webinar: MEF Explained

In the process of updating my Exploring .NET course for DevelopMentor, I’ve authored a module on the Managed Extensibility Framework, or MEF for short. I also presented a webinar on the topic.  Here is the recorded video, plus the slides and code.

Glenn Block, the principal architect of MEF, has written a very good article on MEF in the Feb 2010 issue of MSDN Magazine, where he provides plenty of code samples.  Rather than repeating those here, I’ll limit myself to explaining the overall architecture of MEF and how the various pieces fit together.

First of all, MEF ships with both .NET 4.0 and Silverlight 4 and allows you to build pluggable applications that can be extended either in-house or by third parties.  A good example is the code editor in Visual Studio 2010, which has been MEF-ified to allow for customized extensions.  The idea here is that you have some interfaces in a common assembly that are shared between plugin authors and the main application.  However, instead of the application calling CreateInstance from either the Activator or Assembly class, MEF essentially does it for you under the covers based on a pair of attributes working together: Import (on the consuming class) and Export (on the plugin class).

While MEF is good for plugins, it’s more general purpose in nature, enabling applications to be composed of loosely coupled components (where have we heard that before … COM, .NET).  You can split up your application into various parts that extend abstract base classes or implement a common set of interfaces, and then use MEF to assemble the parts as needed.  For example, you might want to deploy a version of your application, then incrementally build it out over time without having to redeploy the whole thing.

Another scenario would be large Silverlight applications.  MEF is built into the latest version of Silverlight and allows you to package different parts of the application into several xap files, which can be downloaded asynchronously on demand.

Here are the major components of MEF and how they fit together.

(Diagram: MEF architecture)

MEF is built on a Composition Primitives layer, which can be used to interoperate with an IoC container such as Unity, but the primary way most developers interact with MEF is through a set of attributes and some bootstrapping code.  Basically, the consuming class decorates a property with an Import or ImportMany attribute, while the plugin class is decorated with an Export attribute.  As a parameter each takes a contract type, which is usually an interface or base class that is placed in a shared assembly.

The bootstrapping code usually consists of one or more catalogs, whose job it is to discover parts, either in an assembly, a file directory or perhaps a Silverlight xap file.  If more than one catalog is required, they can be combined in an aggregate catalog.  The catalog is then passed to a composition container, which then supplies exported parts to imports based on the common contract.
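Putting those pieces together, here is a minimal sketch of the attributes plus bootstrapping code – the ILogger contract and ConsoleLogger part are illustrative names, not part of MEF itself:

using System;
using System.ComponentModel.Composition;
using System.ComponentModel.Composition.Hosting;
using System.Reflection;

// Shared contract, normally defined in a common assembly
public interface ILogger
{
    void Log(string message);
}

// The plugin class advertises itself with an Export attribute
[Export(typeof(ILogger))]
public class ConsoleLogger : ILogger
{
    public void Log(string message) { Console.WriteLine(message); }
}

// The consuming class declares what it needs with an Import attribute
public class Host
{
    [Import]
    public ILogger Logger { get; set; }

    public void Compose()
    {
        // A catalog discovers parts; the container matches exports to imports
        var catalog = new AssemblyCatalog(Assembly.GetExecutingAssembly());
        var container = new CompositionContainer(catalog);
        container.ComposeParts(this);
    }
}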

Sounds pretty simple, eh?  However, MEF comes with a few bells and whistles as well.  One of those is the concept of metadata.  In other words, exports can expose a dictionary of values via an ExportMetadata attribute.  Imports can use a specialized Lazy<T, TMetadata> type to examine the part metadata and only create the part if it meets certain conditions.  The dictionary is <string, object>, but you can add type safety by creating a custom export attribute and a companion interface.

[MetadataAttribute]
[AttributeUsage(AttributeTargets.Class)]
public class ExportLoggerAttribute : ExportAttribute
{
    public ExportLoggerAttribute()
        : base(typeof(ILogger)) { }

    public ConsoleColor BackColor { get; set; }
}

[ExportLogger(BackColor = ConsoleColor.Yellow)]
class BlueLogger : ILogger { … }

public interface ILoggerMetadata
{
    // Must match attribute property
    ConsoleColor BackColor { get; }
}

class Worker
{
    [ImportMany]
    public List<Lazy<ILogger, ILoggerMetadata>> Loggers { get; set; }

    public void DoSomething(string message, ConsoleColor backColor)
    {
        foreach (var logger in Loggers)
        {
            if (logger.Metadata.BackColor == backColor)
                logger.Value.Log(message);
        }
    }
}

MEF also supports the idea of dynamic recomposition at runtime, which enables exports to be instantiated while the host application is still running.  You have to opt in to recomposition by adding a named parameter, AllowRecomposition=true, to the Import attribute.  Then you have to call ComposeParts on CompositionContainer, instead of SatisfyImportsOnce.  Lastly, you need to take some action to refresh the parts catalog – for example, by calling Refresh on a directory catalog, or DownloadAsync on a Silverlight deployment catalog.
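Here is a minimal sketch of the opt-in, reusing the illustrative ILogger contract from the earlier sample (the plugin directory path is an assumption):

using System.Collections.Generic;
using System.ComponentModel.Composition;
using System.ComponentModel.Composition.Hosting;

public class PluginHost
{
    // AllowRecomposition lets MEF update this collection when the catalog changes
    [ImportMany(AllowRecomposition = true)]
    public IEnumerable<ILogger> Loggers { get; set; }
}

public class Program
{
    public static void Main()
    {
        var catalog = new DirectoryCatalog(@"C:\MyApp\Plugins");
        var container = new CompositionContainer(catalog);

        var host = new PluginHost();
        // ComposeParts (not SatisfyImportsOnce) registers for recomposition
        container.ComposeParts(host);

        // ... later, after new plugin assemblies are copied to the folder:
        catalog.Refresh(); // re-scans the directory and recomposes imports
    }
}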

That pretty much summarizes the main features of MEF. As you can see, the programming model is quite straightforward, and the capabilities are focused on a few features.  MEF is not designed to take the place of products like the Managed Add-in Framework, Unity or Prism.  If you need more bang for the buck, be sure to check those out.


Setting up SQL Server 2008 Express with Profiler

When I teach my DevelopMentor course on Entity Framework 4.0 and WCF Data Services, I use the Express Edition of SQL Server 2008 R2, but I also need the SQL Profiler tool, which comes only with the full version, to inspect what SQL is sent to the database. In addition, the setup folks often have a hard time getting the permissions right.  So I wrote a script that first installs just the tools from a trial version of the Developer Edition, which include both SQL Management Studio and SQL Profiler.  Then I wrote another script that installs just the database engine of SQL Express, adding the BUILTIN\Users account to the sysadmin role, which enables users to do various admin tasks such as attaching a database.

Here are the steps for these installations:

Developer or Standard Edition of SQL Server 2008 R2:

Trial Edition: http://www.microsoft.com/downloads/details.aspx?displaylang=en&FamilyID=de21dffd-4c6c-4963-b00c-5153cf7e41dd

– Click on the Eval X86 Executable link to download SQLFULL_x86_ENU.exe.

– Execute the file to extract contents to an installation directory, then run the following from an admin command prompt:

setup.exe /FEATURES=Tools /Q /INDICATEPROGRESS /ACTION=Install /INSTANCENAME=MSSQLSERVER /BROWSERSVCSTARTUPTYPE=Automatic /AGTSVCACCOUNT="NT AUTHORITY\NETWORK SERVICE" /IACCEPTSQLSERVERLICENSETERMS

– This will install only the tools for SQL Server, including SQL Profiler.

SQL Server 2008 R2 Express Edition (database engine only):

http://www.microsoft.com/downloads/details.aspx?FamilyID=8b3695d9-415e-41f0-a079-25ab0412424b&displaylang=en

– Execute SQLEXPR32_x86_ENU.exe, then after the main screen is shown, copy the contents of the temporary directory to an installation directory. Run the following from an admin command prompt:

setup.exe /FEATURES=SQLEngine /Q /INDICATEPROGRESS /ACTION=Install /INSTANCENAME=SQLEXPRESS /BROWSERSVCSTARTUPTYPE=Automatic /AGTSVCACCOUNT="NT AUTHORITY\NETWORK SERVICE" /SQLSYSADMINACCOUNTS="BUILTIN\Users" /IACCEPTSQLSERVERLICENSETERMS

– This will install the database engine and add BUILTIN\Users to the sysadmin role.

The complete list of SQL Server installation commands is available here.


Webinar: N-Tier Entity Framework with DTOs

I recently delivered a free webinar for DevelopMentor on n-tier application development using Entity Framework 4.0.  In it I explained how to use what I call “Trackable Data Transfer Objects” to achieve the same result as “Self-Tracking Entities,” but with a more lightweight tracking mechanism that offers better interoperability, as I outlined in this blog post.

The screencast video is now available to be streamed or downloaded.  The slides and code for the presentation are also available.


Unearthing Some Diamonds in the Rough

O’Reilly has a program that provides online access to books that have yet to be published.  It’s called “Rough Cuts” and allows you to read chapters as they are written, before they are cleaned up for final publication, and it provides a discounted price for online and print editions.  Given the nature of technology and the speed of change, I find early access to be quite valuable.  Here are two books that I recommend ordering in their Rough Cuts versions:

Programming Entity Framework Second Edition by Julia Lerman.  This includes coverage of Entity Framework 4.0 with samples in C#.

Programming WCF Services Third Edition by Juval Lowy, which covers new features in .NET 4.0.


WCF Data Services versus WCF Soap Services

Someone recently asked me this question:  When a company that has been using 2 tiers wants to move to n-tier, what are the considerations for choosing WCF and STEs [or Trackable DTOs] vs. WCF Data Services?

This is a great question because it relates to a recent re-alignment of what used to be called “ADO.NET Data Services” (code-named Astoria) under the umbrella of Windows Communication Foundation (WCF), as well as the renaming of .NET RIA Services to WCF RIA Services.  I’m going to steal an image from the .NET Endpoint blog, because it shows how each programming model rests on top of the infrastructure provided by WCF.

(Diagram: WCF programming models layered on top of the Service Model and Channel layers)

The two bottom layers should be quite familiar to anyone who uses WCF, but the diagram could mistakenly lead you to the conclusion that the programming model section is independent of the underlying Service Model and Channel layers.  The truth is that RIA Services rests on Data Services, which in turn sits on top of Web HTTP Services (aka REST), which is tightly coupled to HTTP as a transport and XML, Atom or JSON as a format.  Only SOAP Services (leaving Workflow Services aside for the moment) can be used with any format and transport protocol.

Practically speaking, what this means is that there is a fork in the road when it comes to deciding how to implement an n-tier application architecture.  WCF SOAP Services (that is, traditional WCF) offers the most flexibility when it comes to selecting an underlying transport.  For example, I may want to use a NetMsmqBinding with clients and services that are occasionally connected.  The other way to go is to select a REST-based programming model, which leverages the universality of the HTTP protocol and uses a URI addressing scheme.  If flexibility concerning the transport layer matters to you, then traditional SOAP-based WCF services are the way to go.

Another differentiating factor is that WCF SOAP Services tend to be operation-based, while REST services are said to be resource-based.  That means clients are effectively going to call methods on a SOAP service, while clients of a REST service are going to send HTTP requests (mostly GETs) to a URI and expect to get some resource in return, usually a blob of XML, probably in a syndication feed format such as Atom.  From an architectural perspective what this means is that service operations will be inherently more constrained than resources that are freely accessible.  Whether that’s good or bad depends entirely on your point of view.  If you want to design a service that is more tightly locked down, then you’ll most likely prefer a traditional WCF service.  On the other hand, if you want to freely make your data available to clients (especially clients that may not understand or care about SOAP), you would get more bang for your buck with a REST-based service.

Traditional WCF services are also going to allow you a more advanced level of security (for example, message-based or federated security), and can offer reliable messaging and transactional services.  That’s because WCF supports the WS-* SOAP protocols that have evolved over the last several years.  On the other hand, you may not need or want any of those features.  If your client is mainly an AJAX web app, or even a Silverlight rich Internet app, then REST-based services are all you need, and you can benefit from tight coupling with HTTP.

From reading this post so far, you might get the impression that I favor traditional WCF services over REST-based services.  And if we were only talking about a service programming model, you might be right.  But Microsoft has done a lot of work on the client-side programming model for Data Services.  All you have to do for a .NET client is simply write a LINQ query, and Data Services will translate it to a URI sent to the service.  The resulting XML is used to populate client-side entities, which are change-tracked.  Heck, it even supports batch updating and concurrency control. Sweet.  And WCF RIA Services strives for RAD n-tier development for Silverlight apps, with support for end-to-end data validation and a whole bunch of other goodies.  In addition, these higher-order programming models allow you to blend in an operation-based approach by adding methods to your data service.
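Here is a minimal sketch of that client experience, assuming a NorthwindEntities context generated by “Add Service Reference” against a Data Services endpoint – the service URI and entity names are illustrative:

using System;
using System.Data.Services.Client;
using System.Linq;

class Client
{
    static void Main()
    {
        // Generated DataServiceContext; the address is an assumption
        var context = new NorthwindEntities(
            new Uri("http://localhost:1234/Northwind.svc"));

        // The LINQ query is translated to a URI such as
        // /Products?$filter=UnitsInStock gt 0
        var products = from p in context.Products
                       where p.UnitsInStock > 0
                       select p;

        foreach (var product in products.ToList())
        {
            product.UnitPrice *= 1.05m;
            context.UpdateObject(product); // report the change to the tracker
        }

        // Send all updates to the service in a single batched request
        context.SaveChanges(SaveChangesOptions.Batch);
    }
}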

So I suppose your choice between SOAP and REST services will depend a great deal on the architectural objectives dictated by your application requirements.  Alas, there’s still a role for the architect in all of this. 🙂

Additional Resources:
White Paper on RESTful Web Services with WCF 3.5
Scott Hanselman Interview with Pablo Castro on OData
Open Data Protocol (OData)
WCF Data Services Team Blog
Entity Framework 4.0 and WCF Data Services 4.0 in Visual Studio 2010


T4 Supplement to Add Service Reference

I’m a self-admitted control freak.  And when I add a service reference to a project pointing to a WCF service that exposes metadata, I would like more control over the code-generation process.  Not long ago I had a specific need to do this when creating a client-side T4 template for my Trackable DTOs project with Entity Framework 4.0.  Instead of relying solely on “Add Service Reference” in Visual Studio to generate client-side POCO classes, I wanted to generate the classes myself, hooking into a change-tracking mechanism and injecting data binding code.

The way that Self-Tracking Entities in EF4 accomplishes this is to add a class library project with a T4 template that inspects an entity data model (edmx file).  The problem is that this class library assembly must then be referenced both by the service and the client, violating a tenet of service-oriented applications: share contract and schema, but not class or assembly.  That way, the client could be completely ignorant of the persistence stack used (in this case Entity Framework), and there could be a cleaner separation of concerns.  The WCF service could use a data access layer (DAL) to abstract away persistence concerns, and the client could make use of a change-tracker that the service would have no interest in referencing.

And so I needed to generate client-side entities with data contracts using a T4 template that could read service metadata exposed as WSDL.  The way to do this would be to create a WsdlImporter referencing data contracts from a service.  The result would be a CodeCompileUnit, which I could reflect over using the CodeDom to generate class and member definitions.

private CodeCompileUnit GetMetadataCodeUnit(string mexAddress,
    long maxReceivedMessageSize, Type collectionType)
{
    // Download metadata from the service's MEX endpoint
    Binding mexBinding = GetMetadataBinding(mexAddress,
        maxReceivedMessageSize);
    var mexClient = new MetadataExchangeClient(mexBinding);
    mexClient.ResolveMetadataReferences = true;

    var metadata = mexClient.GetMetadata
        (new EndpointAddress(mexAddress));

    // Keep only the sections containing data contract schemas
    var sections = metadata.MetadataSections
        .Where(s => s.Identifier.Contains("datacontract.org"));
    var metaDocs = new MetadataSet(sections);
    var wsdlImporter = new WsdlImporter(metaDocs);

    // Import the XSD schemas into a CodeDom compile unit
    var schemaImporter = new XsdDataContractImporter();
    schemaImporter.Options = new ImportOptions();
    schemaImporter.Options.ReferencedCollectionTypes
        .Add(collectionType);
    schemaImporter.Import(wsdlImporter.XmlSchemas);
    return schemaImporter.CodeCompileUnit;
}

The GetMetadataBinding method creates a custom binding with a transport element that has been configured with the specified max received message size.

private Binding GetMetadataBinding(string mexAddress,
    long maxReceivedMessageSize)
{
    // Select a transport binding element based on the address scheme
    Uri uri = new Uri(mexAddress);
    TransportBindingElement element = null;
    switch (uri.Scheme)
    {
        case "net.pipe":
            element = new NamedPipeTransportBindingElement();
            break;
        case "net.tcp":
            element = new TcpTransportBindingElement();
            break;
        case "http":
            element = new HttpTransportBindingElement();
            break;
        case "https":
            element = new HttpsTransportBindingElement();
            break;
    }
    if (element == null) return null;

    element.MaxReceivedMessageSize = maxReceivedMessageSize;
    return new CustomBinding(element);
}
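From there, generating C# source from the CodeCompileUnit is a short CodeDom step.  Here is a hypothetical helper you might add alongside GetMetadataCodeUnit (the List&lt;&gt; argument maps imported collections to List&lt;T&gt;):

private void WriteClientCode(string mexAddress)
{
    // Reuse GetMetadataCodeUnit from above
    CodeCompileUnit codeUnit = GetMetadataCodeUnit(
        mexAddress, int.MaxValue, typeof(List<>));

    // Requires System.CodeDom.Compiler, System.IO and Microsoft.CSharp
    var provider = new CSharpCodeProvider();
    using (var writer = new StringWriter())
    {
        provider.GenerateCodeFromCompileUnit(
            codeUnit, writer, new CodeGeneratorOptions());
        Console.WriteLine(writer.ToString()); // generated data contract classes
    }
}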

Here is a link to a simple Console app that tests the WsdlImporter code.  And here is a link to a project that implements it as a T4 template.

Enjoy.
