A Look at ETW – Part 3

In part two of this series on Event Tracing for Windows I wrote a simple EventSource to provide strongly-typed events with Entity Framework interceptors. After playing around with ETW a bit more, and still frustrated by the documentation, I decided to switch gears and do something fun, so here in part 3 I’ll try using ETW in an AOP solution.

AOP is of course a great fit for a cross-cutting concern like logging or tracing. My AOP framework of choice, PostSharp, already has a Diagnostics Pattern Library which supports logging to trace, the console, NLog, Log4Net and the Enterprise Library Logging Application Block, but since there’s no built-in support for ETW I’ll get to write my own aspect.

In addition to using AOP for method entry-exit tracing, I also like how tools such as Azure Application Insights and Xamarin Insights make it easy to wrap a block of code in a using statement for tracing purposes, and want to do something similar with ETW. E.g., in Xamarin Insights:

    using (var handle = Insights.TrackTime("something")) {
       // stuff to track
    }

Since I want to reuse these features I’ll bundle this up in a .NET library, and because naming things continues to be really hard, call it “SampleAnalytics”. The only dependencies are the NuGet packages for Microsoft.Diagnostics.Tracing.EventSource and PostSharp.

The EventSource

Although a strength of ETW is its strongly-typed events, after seeing several examples which use more “generic” events yet still benefit from ETW and its tooling, I’m going to try that and go with a few general-purpose events.

Several of the logging frameworks use the concept of a “category” (or a dictionary of categories), so I’ve decided that might be useful here too. Since I want to trace both methods and code blocks I use “Action” and “actionName” to designate either. I also want to make timing optional: ETW events have a high-resolution timestamp and some tools will pair up start/stop events, so doing my own timing isn’t always necessary, and if nothing is listening to my events it’s pure overhead.

So the EventSource starts taking shape.

[EventSource(Name = "Samples-Analytics")]
public sealed class AnalyticsEventSource : EventSource {

  public static readonly AnalyticsEventSource Log = new AnalyticsEventSource();

  public class Tasks {
    public const EventTask TimedAction = (EventTask)0x1;
    public const EventTask Action = (EventTask)0x2;
  }

  private const int TraceActionTimedStartEventId = 1;
  private const int TraceActionTimedStopEventId = 2;
  private const int TraceActionStartEventId = 3;
  private const int TraceActionStopEventId = 4;

  [Event(TraceActionTimedStartEventId, Level = EventLevel.Verbose, Task = Tasks.TimedAction, Opcode = EventOpcode.Start)]
  public void TraceActionTimedStart(string category, string actionName) {
    WriteEvent(TraceActionTimedStartEventId, category, actionName);
  }

  [Event(TraceActionTimedStopEventId, Message = "Category '{0}' - Action '{1}' took {2} ms", Level = EventLevel.Verbose, Task = Tasks.TimedAction, Opcode = EventOpcode.Stop)]
  public void TraceActionTimedStop(string category, string actionName, long elapsedMilliseconds) {
    WriteEvent(TraceActionTimedStopEventId, category, actionName, elapsedMilliseconds);
  }

  [Event(TraceActionStartEventId, Level = EventLevel.Verbose, Task = Tasks.Action, Opcode = EventOpcode.Start)]
  public void TraceActionStart(string category, string actionName) {
    WriteEvent(TraceActionStartEventId, category, actionName);
  }

  [Event(TraceActionStopEventId, Level = EventLevel.Verbose, Task = Tasks.Action, Opcode = EventOpcode.Stop)]
  public void TraceActionStop(string category, string actionName) {
    WriteEvent(TraceActionStopEventId, category, actionName);
  }
}

This wasn’t quite as simple as it appears: an overload for WriteEvent(int, string, string, long) doesn’t exist in the base EventSource, so I needed to write it myself. This involved pointers and unsafe code, and the curious can see the code download for details.
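For those who don’t want to open the download, the overload follows the WriteEventCore pattern described in the EventSource users guide; here’s a sketch (it assumes non-null strings and a project with “Allow unsafe code” enabled):

[NonEvent]
private unsafe void WriteEvent(int eventId, string arg1, string arg2, long arg3) {
  // Pin the strings and describe each payload item for WriteEventCore.
  fixed (char* p1 = arg1)
  fixed (char* p2 = arg2) {
    EventData* data = stackalloc EventData[3];
    data[0].DataPointer = (IntPtr)p1;
    data[0].Size = (arg1.Length + 1) * 2; // UTF-16 characters plus the null terminator
    data[1].DataPointer = (IntPtr)p2;
    data[1].Size = (arg2.Length + 1) * 2;
    data[2].DataPointer = (IntPtr)(&arg3);
    data[2].Size = sizeof(long);
    WriteEventCore(eventId, 3, data);
  }
}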

Tracing code blocks

I’ll be tracing both timed and untimed “actions”, so I’ll define a disposable TraceAction and TimedAction.

public interface ITraceAction : IDisposable {
  void Start();
  void Stop();
}

Untimed actions generate start and stop events:

public class TraceAction : ITraceAction {

  private readonly string _category;
  private readonly string _actionName;

  internal TraceAction(string category, string actionName) {
    _category = category;
    _actionName = actionName;
  }

  public void Start() {
    AnalyticsEventSource.Log.TraceActionStart(_category, _actionName);
  }

  public void Stop() {
    AnalyticsEventSource.Log.TraceActionStop(_category, _actionName);
  }

  public void Dispose() {
    Stop();
  }
}

While timed actions also use a Stopwatch:

public class TimedAction : ITraceAction {

  private readonly string _category;
  private readonly string _actionName;
  private Stopwatch _stopwatch;

  internal TimedAction(string category, string actionName) {
    _category = category;
    _actionName = actionName;
  }

  public void Start() {
    AnalyticsEventSource.Log.TraceActionTimedStart(_category, _actionName);
    _stopwatch = Stopwatch.StartNew();
  }

  public void Stop() {
    if (_stopwatch == null || !_stopwatch.IsRunning) return;
    _stopwatch.Stop();
    AnalyticsEventSource.Log.TraceActionTimedStop(_category, _actionName, _stopwatch.ElapsedMilliseconds);
  }

  public void Dispose() {
    Stop();
  }
}
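One refinement worth considering for TimedAction (not shown above): guard Start with IsEnabled() so that when no session is listening, the Stopwatch is never allocated at all. A minimal sketch:

public void Start() {
  // Skip both the event and the Stopwatch when nothing is listening to the provider.
  if (!AnalyticsEventSource.Log.IsEnabled()) return;
  AnalyticsEventSource.Log.TraceActionTimedStart(_category, _actionName);
  _stopwatch = Stopwatch.StartNew();
}

Since Stop already checks for a null Stopwatch, the rest of the class works unchanged.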

Along with a helper class…

public static class Analytics {

  public static ITraceAction TrackTime(string actionName, string category = "Trace") {
    var action = new TimedAction(category, actionName);
    action.Start();
    return action;
  }

  public static ITraceAction TrackUntimed(string actionName, string category = "Trace") {
    var action = new TraceAction(category, actionName);
    action.Start();
    return action;
  }
}

With these in place I can now trace a block of code like this:

using (var timedAction = Analytics.TrackTime("Save Student")) {
  db.SaveChanges();
}

For method-level interception I’ll need AOP.

Tracing methods

PostSharp provides method interception with the OnMethodBoundaryAspect. They have a good amount of documentation showing how to use it so it’s very easy to get started. All that’s necessary is to sub-class the aspect and override any needed interception points:

public class SomeCustomAttribute : OnMethodBoundaryAspect {
  public override void OnEntry(MethodExecutionArgs args) { /* ... */ }
  public override void OnExit(MethodExecutionArgs args) { /* ... */ }
  public override void OnSuccess(MethodExecutionArgs args) { /* ... */ }
  public override void OnException(MethodExecutionArgs args) { /* ... */ }
  public override void OnYield(MethodExecutionArgs args) { /* ... */ }
  public override void OnResume(MethodExecutionArgs args) { /* ... */ }
}

OnEntry, OnExit, OnSuccess and OnException are self-explanatory; OnYield and OnResume allow for more accurate interception of async code.

MethodExecutionArgs gives you access to the method name, any argument types and values, the exception thrown, and any return value. The class also has a handy MethodExecutionTag property which allows you to plug in any user-defined object for the duration of the method call.

OnMethodBoundaryAspect also has built-in multicasting, so when you apply your attribute to a class or an assembly all methods in that class or assembly will be intercepted. Since this can be quite a bit more than you want (especially if you don’t want to intercept things like getters and setters, ToString, GetHashCode, etc.) there are also a number of attributes available to control this.

I still want to use a “category” to help apply some semantic information to events, and timing should be optional here too. Both parameters can be set when the attribute is applied.

[Serializable]
public class ETWTraceAttribute : OnMethodBoundaryAspect {

  private string _methodName;
  private string _category;
  private bool _addTiming;

  public ETWTraceAttribute(string category = "default", bool addTiming = false) {
    ApplyToStateMachine = true;
    _category = category;
    _addTiming = addTiming;
  }

For example, applied to a method:

[ETWTrace(category: "Instructor")]
public ActionResult Index(int? id, int? courseID) {..}

And applied to a class:

[ETWTrace(category: "Course", addTiming: true)]
public class CourseController : Controller {..}

I’ll also grab the method information at compile-time rather than run-time, as it’s more efficient. I could have grabbed parameter and generic arguments too, but I’m not sure that the extra information will help when doing an ETW analysis so I’ve left them out for now:

  public override void CompileTimeInitialize(MethodBase method, AspectInfo aspectInfo) {
    _methodName = method.DeclaringType.FullName + "." + method.Name;
  }

At run-time, tracing should start when the method starts. The parameter values are available here too, but since the ETW event won’t be enabled without a listener I don’t want to incur any performance penalty in getting their string representation. The TrackTime and TrackUntimed helper methods defined above can be used here too, and the ITraceAction stashed within the MethodExecutionTag for later use.

public override void OnEntry(MethodExecutionArgs args) {
  var action = _addTiming ? Analytics.TrackTime(_methodName, _category)
                          : Analytics.TrackUntimed(_methodName, _category);
  args.MethodExecutionTag = action;
}

When the method completes successfully I’ll stop tracing. The method return value is available here also, but again, I don’t want to include it in the ETW event:

public override void OnSuccess(MethodExecutionArgs args) {
  var action = args.MethodExecutionTag as ITraceAction;
  action.Stop();
}

So that’s nearly all there is to it. It’s not as full-featured as PostSharp’s Diagnostic Library, but it’s a start. I did add OnException, OnYield and OnResume handling too; those are available in the source code download.
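For a taste, the OnException override might look roughly like this (a sketch; the download has the real version):

public override void OnException(MethodExecutionArgs args) {
  // Close out the start/stop pair even when the method throws.
  // args.Exception holds the exception; the default flow behavior rethrows it.
  var action = args.MethodExecutionTag as ITraceAction;
  if (action != null) action.Stop();
}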

With a few simple changes to the ContosoUniversity sample application from part 2, a PerfView trace now shows both method-level and code block tracing:

[Screenshot: PerfView trace showing method-level and code block start/stop events]

Logging to an interface

Now that the ContosoUniversity sample is using the “SampleAnalytics” library it no longer needs its own EventSource. In the spirit of more “generic” events, and finding that an EventSource can implement an interface, I’m going to remove all the logging I added in part 2 and have the interceptors use a simple logging interface:

public interface ILogger {
  void TraceInformation(string message, string category = null, string subcategory = null);
  void TraceWarning(string message, string category = null, string subcategory = null);
  void TraceError(string message, string category = null, string subcategory = null);
}

“Category” and “sub-category” will allow the ETW events to provide a bit more than printf-style logging.

Here’s the DbCommandInterceptor using ILogger, with a category of “Command” and a sub-category set to the calling method name via the CallerMemberName attribute.

public class SchoolInterceptorLogging : DbCommandInterceptor {

  private const string Category = "Command";
  private ILogger _logger; 

  public SchoolInterceptorLogging(ILogger logger) {
    _logger = logger;
  }

  public override void ScalarExecuting(DbCommand command, DbCommandInterceptionContext<object> interceptionContext) {
    LogTraceOrException(command);
  }

  public override void ScalarExecuted(DbCommand command, DbCommandInterceptionContext<object> interceptionContext) {
    LogTraceOrException(command, interceptionContext.Exception);
  }

  // ... and so on for the NonQuery and Reader command types

  private void LogTraceOrException(DbCommand command, Exception exception = null, [CallerMemberName] string methodName = "") {
    if (exception == null) {
      _logger.TraceInformation(command.CommandText, Category, methodName);
    } else {
      _logger.TraceError(exception.Message, Category, methodName);
    }
  }
}

With similar changes for the IDbConnectionInterceptor:

public class SchoolConnectionInterceptor : IDbConnectionInterceptor {

  private const string Category = "Connection";
  private ILogger _logger; 

  public SchoolConnectionInterceptor(ILogger logger) {
    _logger = logger;
  }

  public void Opening(DbConnection connection, DbConnectionInterceptionContext interceptionContext) {
    LogTraceOrException(connection, interceptionContext);
  }

  public void Opened(DbConnection connection, DbConnectionInterceptionContext interceptionContext) {
    LogTraceOrException(connection, interceptionContext);
  }

  private void LogTraceOrException(DbConnection connection, DbConnectionInterceptionContext context, [CallerMemberName] string methodName = "") {
    if (context.Exception == null) {
      _logger.TraceInformation(connection.Database, Category, methodName);
    } else {
      _logger.TraceError(context.Exception.Message, Category, methodName);
    }
  }

  // Remaining interface methods omitted
}

The EventSource can now implement ILogger, along with the other trace events shown earlier:

[EventSource(Name = "Samples-Analytics")]
public sealed class AnalyticsEventSource : EventSource, ILogger {

    ... 

    private const int TraceInformationEventId = 5;
    private const int TraceWarningEventId = 6;
    private const int TraceErrorEventId = 7;

    ...

    [Event(TraceInformationEventId, Level = EventLevel.Informational)]
    public void TraceInformation(string message, string category = "", string subcategory = "") {
      WriteEvent(TraceInformationEventId, message, category, subcategory);
    }

    [Event(TraceWarningEventId, Level = EventLevel.Warning)]
    public void TraceWarning(string message, string category = "", string subcategory = "") {
      WriteEvent(TraceWarningEventId, message, category, subcategory);
    }

    [Event(TraceErrorEventId, Level = EventLevel.Error)]
    public void TraceError(string message, string category = "", string subcategory = "") {
      WriteEvent(TraceErrorEventId, message, category, subcategory);
    }
}
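To wire everything up, the interceptors need to be registered with EF at startup, with the EventSource singleton doubling as the ILogger. A hedged sketch (the sample may register them differently, e.g. via a DbConfiguration):

using System.Data.Entity.Infrastructure.Interception;

// Hypothetical registration in Application_Start:
DbInterception.Add(new SchoolInterceptorLogging(AnalyticsEventSource.Log));
DbInterception.Add(new SchoolConnectionInterceptor(AnalyticsEventSource.Log));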

And finally, a PerfView collection now shows the traced actions along with other activity from the interceptors:

[Screenshot: PerfView events from the traced actions alongside other activity from the interceptors]

A zip of the modified sample application and “SampleAnalytics” library is available here. Source for the SampleAnalytics library only is also available on GitHub.

A Look at ETW – Part 2

Recently I was poking around a bit with the interceptors available in EF 6, IDbConnectionInterceptor and IDbCommandInterceptor. Since a common use of these interceptors is to provide logging/profiling capabilities, it seemed like these might be a good fit to try with ETW. But first I wanted to know if EF already uses ETW. After a somewhat lengthy search it seems that it does not, although at the provider and SQL Server levels there is use of something called BID (Built In Diagnostics), along with complicated directions involving registering MOF files, which I won’t try now.

I decided to re-work the Microsoft sample “ASP.NET MVC Application Using Entity Framework Code First” as it’s using EF 6 and has already implemented a DbCommandInterceptor for logging. It also has a simple ILogger implementation. The name of the sample application is, of course, “ContosoUniversity”. You can download the modified source code here.

Getting Started

To add to some of the confusion about providers, Microsoft offers two different implementations of EventSource. First there’s System.Diagnostics.Tracing.EventSource, in mscorlib, then there’s Microsoft.Diagnostics.Tracing.EventSource, available from a NuGet package. What’s the difference? The NuGet version (MDT.EventSource) supports channels, and thus writing to the Event Log. The package will also be revved more frequently, and includes support for portable, Windows Store and Phone apps, so is probably the best choice (despite what the package description says).

[Screenshot: the Microsoft.Diagnostics.Tracing.EventSource package in the NuGet package manager]

The package also installs the EventRegister package, which adds a build target to your project file to validate your EventSource and, if needed, generate and register a manifest (required when using channels, which I won’t get into here). The package adds a document called _EventSourceUsersGuide.docx to your project which is worth reading. You can also find it online.

A simple EventSource

An EventSource definition is deceptively easy at first.

The sample already includes a DbCommandInterceptor implementation called SchoolInterceptorLogging, which is doing logging for executing/executed interception of three types of DbCommands – Scalar, NonQuery, and Reader.

Note: The Microsoft sample has a bug in its implementation of the command interceptor. Only a single instance of the interceptor is used by EF, so it must be thread safe to handle concurrent requests from multiple threads. The Stopwatch used in the sample is not thread safe.

For logging purposes I don’t really care about the type of DbCommand, so my EventSource will define only two events – CommandExecuting and CommandExecuted, and the interceptor will call these event methods.

The interceptor now looks like this:

public class SchoolInterceptorLogging : DbCommandInterceptor {
  ...

   public override void ReaderExecuting(DbCommand command, DbCommandInterceptionContext<DbDataReader> interceptionContext) {
      SampleEventSource.Log.CommandExecuting(command.CommandText);
   }

   public override void ReaderExecuted(DbCommand command, DbCommandInterceptionContext<DbDataReader> interceptionContext) {
     SampleEventSource.Log.CommandExecuted(command.CommandText);
   }

   // And so on for the Scalar and NonQuery overrides
}

… while the first attempt at an EventSource looks like this:

using Microsoft.Diagnostics.Tracing;

namespace ContosoUniversity.Logging {

  [EventSource(Name="Samples-ContosoUniversity")]
  public sealed class SampleEventSource : EventSource {

    public static readonly SampleEventSource Log = new SampleEventSource();

    public class Tasks {
      public const EventTask CommandExecuting = (EventTask)0x1;
    }

    private const int CommandStartEventId = 1;
    private const int CommandStopEventId = 2;

    [Event(CommandStartEventId, Task=Tasks.CommandExecuting, Opcode=EventOpcode.Start)]
    public void CommandExecuting(string commandText) {
      if (IsEnabled()) WriteEvent(CommandStartEventId, commandText);
    }

    [Event(CommandStopEventId, Task=Tasks.CommandExecuting, Opcode=EventOpcode.Stop)]
    public void CommandExecuted(string commandText) {
      if (IsEnabled()) WriteEvent(CommandStopEventId, commandText);
    }
  }
}

A few things to note:

  • An ETW provider is defined by sub-typing EventSource. The EventSourceAttribute is used to name the provider, otherwise it will default to the class name.
  • Microsoft guidance suggests you define a singleton instance of your EventSource; using a static field named “Log” seems to be common practice.
  • Any void instance methods are assumed to be events, with event ids incrementing by one. To avoid problems it’s a good idea to use the EventAttribute on all event methods and explicitly define the parameters as shown here. Event methods can take only string, DateTime or primitive type arguments, which means you can’t pass a type such as Exception or DbCommand to an event method, as the build-time validation will raise an error. The number and types of arguments to the event method must also match those passed to the WriteEvent method it calls.

Consuming events

Generating events is all well and good, but it’s still nice to see what’s going on while debugging. PerfView to the rescue! PerfView is a “performance analysis tool focusing on ETW information” and has a huge number of features, but with the help of Vance Morrison’s PerfView tutorial video series it’s easy to get started. I wanted to view my custom events, so I started a data collection and told PerfView about my custom provider:

[Screenshot: PerfView’s Collect dialog with the custom provider listed under Additional Providers]
Because this provider isn’t registered on the machine with a manifest the provider name must be prefixed with an asterisk as shown above. Not all tools support this.
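The same collection can also be started from the command line; something like this should work (again with the asterisk prefix for the unregistered provider):

PerfView.exe collect /OnlyProviders=*Samples-ContosoUniversity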

After running the application for a few minutes and then stopping the collection, I can see my events in the context of .NET and other provider events. Here are a few of my events, with the DURATION_MSEC calculated by PerfView.

[Screenshot: PerfView events view with paired start/stop events and the DURATION_MSEC column]

Using an external tool is great for working with a deployed app, but while coding and debugging it’s much handier to see a real time log of events. After removing the prior Logger implementation it may seem a bit ass backwards to add logging back in, but that’s what I do using an EventListener. The EventListener is part of the EventSource NuGet package, and can listen to all EventSources in the current domain.

Here’s a simple implementation which dumps everything to the output window in Visual Studio:

using Microsoft.Diagnostics.Tracing;
using System;
using System.Diagnostics;
using System.Linq;

namespace ContosoUniversity.Logging {

  public class SampleEventListener : EventListener {

    protected override void OnEventSourceCreated(EventSource eventSource) {
      EnableEvents(eventSource, EventLevel.LogAlways, EventKeywords.All);
      Trace.TraceInformation("Listening on " + eventSource.Name);
    }

    protected override void OnEventWritten(EventWrittenEventArgs eventData) {
      string msg1 = string.Format("Event {0} from {1} level={2} opcode={3} at {4:HH:mm:ss.fff}",
        eventData.EventId, eventData.EventSource.Name, eventData.Level, eventData.Opcode, DateTime.Now);

      string msg2 = null;
      if (eventData.Message != null) {
        msg2 = string.Format(eventData.Message, eventData.Payload.ToArray());
      } else {
        string[] sargs = eventData.Payload != null ? eventData.Payload.Select(o => o.ToString()).ToArray() : null;
        msg2 = string.Format("({0}).", sargs != null ? string.Join(", ", sargs) : "");
      }

      if (eventData.Level == EventLevel.Error || eventData.Level == EventLevel.Critical) {
        Trace.TraceError("{0}\n{1}", msg1, msg2);
      } else {
        Trace.TraceInformation("{0}\n{1}", msg1, msg2);
      }
    }
  }
}
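To activate it, the listener just needs to be constructed once and kept alive for the lifetime of the application; a hypothetical wiring in Global.asax:

public class MvcApplication : System.Web.HttpApplication {

  // Hold a static reference so the listener isn't garbage collected.
  private static SampleEventListener _listener;

  protected void Application_Start() {
    _listener = new SampleEventListener();
    // ... the sample's existing startup code
  }
}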

This is a start, but I’ve lost a few things too. The original sample logged informational messages, warnings, errors and “trace api” information, along with its own duration calculation (however buggy). This implementation doesn’t log exceptions, and the EventListener doesn’t expose event names or timestamps.

Logging exceptions

Because System.Exception and sub-types aren’t supported with EventSource event methods you must apparently resort to using either the exception message or ToString(), which doesn’t seem ideal. CLR exceptions are logged by the CLR Runtime ETW provider, so they aren’t lost entirely, but logging them from my EventSource seems like a good idea too, so I added a Failure event to my EventSource. (Why did I call it “Failure”? I don’t know, it seemed like a good idea at the time, and naming things is hard.)

[Event(FailureEventId, Message = "Application Exception in {0}: {1}", Level = EventLevel.Error)]
public void Failure(string methodName, string message) {
  if (this.IsEnabled()) {
    this.WriteEvent(FailureEventId, methodName, message);
  }
}

… and this to the *Executed methods in the interceptor:

public override void ReaderExecuted(DbCommand command, DbCommandInterceptionContext<DbDataReader> interceptionContext) {
  LogResultOrException(command, interceptionContext.Exception);
}

private void LogResultOrException(DbCommand command, Exception ex, [CallerMemberName] string methodName = "") {
  if (ex != null) {
    SampleEventSource.Log.Failure(methodName, ex.ToString());
  } else {
    SampleEventSource.Log.CommandExecuted(command.CommandText);
  }
}

Not ideal, but better.  I want to use ETW instrumentation throughout the sample application, though, not just to record database calls, so back to design considerations.

Adding more events

The Microsoft recommendation is to limit the number of EventSources defined within your application. But this raises more questions – if only a single EventSource is used for the entire application, and you want to take advantage of the structured nature of ETW events, you could have a large number of events defined within a single provider. If instead you use “generic” events such as Information(..), Warning(..) and so on, you lose the benefits of strong typing. The goal, after all, is to enable a comprehensive analysis of the application in context, not to generate lots of string messages that can’t be filtered easily.

The user’s guide installed with the NuGet package recommends a “{CompanyName}{Product}{Component}” naming convention (which I didn’t follow here), but this sample is too small to have components, and was actually only logging from its DbInterceptors, so I need to think about what might be useful to instrument to diagnose production issues. Since the application is pretty simple, the only potentially helpful thing I can see is to optionally instrument method entry and exit, in either selected “critical” methods or all methods in certain classes. This sounds like a great use case for AOP, and in a later post I’m going to try implementing this with ETW and PostSharp.

For now, though, I’ll just add another EF interceptor, IDbConnectionInterceptor, and add some more events to my existing EventSource. This will give me a chance to work with the additional EventAttribute parameters:

[Screenshot: the EventAttribute parameters: Channel, Keywords, Level, Message, Opcode, Task, Version]

Kathleen Dollard has a good post explaining how to best use these parameters, but here’s some quick definitions:

Channel – There are four predefined channels – Admin, Analytic, Debug and Operational. Other than being used to write to the Event Log, I still don’t understand much about them. In a later post I’ll look at this further.

Keywords – These can be used to help group or categorize events.

Level – Predefined levels include Informational, Warning, Error, Critical, LogAlways and Verbose.

Message – This is an optional format string which accepts the event method parameters.

Task and Opcode – Provide for “task-oriented” groupings. The Opcode can only be used if a Task is specified. There are some predefined Opcodes like Start and Stop, Suspend and Resume, and a few others. Because they’re well-known, tools can act on these opcodes in a generic way.

Version – Events can be versioned, but according to Dollard, don’t.

The IDbConnectionInterceptor interface contains begin/end interception points for 12 different connection-related events, but for now I’m only interested in instrumenting those related to opening and closing a connection.

Here’s the revised EventSource. It contains several more event methods, a few more tasks, and keywords, which may help in filtering. A few of the events also use either the Verbose or Error level.

using Microsoft.Diagnostics.Tracing;
using System.Runtime.CompilerServices;

namespace ContosoUniversity.Logging {

  [EventSource(Name="Samples-ContosoUniversity")]
  public sealed class SampleEventSource : EventSource {

    public static readonly SampleEventSource Log = new SampleEventSource();

    public class Keywords {
      public const EventKeywords Command = (EventKeywords)1;
      public const EventKeywords Connection = (EventKeywords)2;
    }

    public class Tasks {
      public const EventTask CommandExecuting = (EventTask)0x1;
      public const EventTask ConnectionOpening = (EventTask)0x2;
      public const EventTask ConnectionClosing = (EventTask)0x3;
    }

    private const int CommandStartEventId = 1;
    private const int CommandStopEventId = 2;
    private const int ConnectionOpenStartEventId = 3;
    private const int ConnectionOpenStopEventId = 4;
    private const int ConnectionCloseStartEventId = 5;
    private const int ConnectionCloseStopEventId = 6;
    private const int TraceApiEventId = 50;
    private const int CommandFailureEventId = 1000;
    private const int ConnectionFailureEventId = 1001;

    [Event(CommandStartEventId, Keywords=Keywords.Command, Task=Tasks.CommandExecuting, Opcode=EventOpcode.Start, Level = EventLevel.Verbose)]
    public void CommandExecuting(string commandText) {
      if (IsEnabled()) WriteEvent(CommandStartEventId, commandText);
    }

    [Event(CommandStopEventId, Keywords = Keywords.Command, Task = Tasks.CommandExecuting, Opcode = EventOpcode.Stop)]
    public void CommandExecuted(string commandText) {
      if (IsEnabled()) WriteEvent(CommandStopEventId, commandText);
    }

    [Event(ConnectionOpenStartEventId, Message = "Opening {0}", Keywords = Keywords.Connection, Task = Tasks.ConnectionOpening, Opcode = EventOpcode.Start)]
    public void ConnectionOpening(string databaseName) {
      if (IsEnabled()) WriteEvent(ConnectionOpenStartEventId, databaseName);
    }

    [Event(ConnectionOpenStopEventId, Message = "Opened {0}", Keywords = Keywords.Connection, Task = Tasks.ConnectionOpening, Opcode = EventOpcode.Stop)]
    public void ConnectionOpened(string databaseName) {
      if (IsEnabled()) WriteEvent(ConnectionOpenStopEventId, databaseName);
    }

    [Event(ConnectionCloseStartEventId, Message = "Closing {0}", Keywords = Keywords.Connection, Task = Tasks.ConnectionClosing, Opcode = EventOpcode.Start)]
    public void ConnectionClosing(string databaseName) {
      if (IsEnabled()) WriteEvent(ConnectionCloseStartEventId, databaseName);
    }

    [Event(ConnectionCloseStopEventId, Message = "Closed {0}", Keywords = Keywords.Connection, Task = Tasks.ConnectionClosing, Opcode = EventOpcode.Stop)]
    public void ConnectionClosed(string databaseName) {
      if (IsEnabled()) WriteEvent(ConnectionCloseStopEventId, databaseName);
    }

    [Event(TraceApiEventId, Message = "TraceApi {0} {1}", Level = EventLevel.Verbose)]
    public void TraceAPI([CallerMemberName] string methodName = "", string message = "") {
      if (this.IsEnabled()) this.WriteEvent(TraceApiEventId, methodName, message);
    }

    [Event(CommandFailureEventId, Message = "Command error in {0}: {1}", Keywords = Keywords.Command, Level = EventLevel.Error)]
    public void CommandFailure(string methodName, string message) {
      if (this.IsEnabled()) this.WriteEvent(CommandFailureEventId, methodName, message);
    }

    [Event(ConnectionFailureEventId, Message = "Connection error in {0}: {1}", Keywords = Keywords.Connection, Level = EventLevel.Critical)]
    public void ConnectionFailure(string methodName, string message) {
      if (this.IsEnabled()) this.WriteEvent(ConnectionFailureEventId, methodName, message);
    }
  }
}

And the new interceptor:

using ContosoUniversity.Logging;
using System;
using System.Data.Common;
using System.Data.Entity.Infrastructure.Interception;
using System.Runtime.CompilerServices;

namespace ContosoUniversity.DAL {

  public class SchoolConnectionInterceptor : IDbConnectionInterceptor {

    private SampleEventSource _logger = SampleEventSource.Log;

    public void Opening(DbConnection connection, DbConnectionInterceptionContext interceptionContext) {
      _logger.ConnectionOpening(connection.Database);
    }

    public void Opened(DbConnection connection, DbConnectionInterceptionContext interceptionContext) {
      LogResultOrException(() => _logger.ConnectionOpened(connection.Database), interceptionContext);
    }

    public void Closing(DbConnection connection, DbConnectionInterceptionContext interceptionContext) {
      _logger.ConnectionClosing(connection.Database);
    }

    public void Closed(DbConnection connection, DbConnectionInterceptionContext interceptionContext) {
      LogResultOrException(() => _logger.ConnectionClosed(connection.Database), interceptionContext);
    }

    public void Disposing(DbConnection connection, DbConnectionInterceptionContext interceptionContext) {
      _logger.TraceAPI();
    }

    public void Disposed(DbConnection connection, DbConnectionInterceptionContext interceptionContext) {
      _logger.TraceAPI();
    }

    private void LogResultOrException(Action logAction, DbConnectionInterceptionContext context, [CallerMemberName] string methodName = "") {
      if (context.Exception != null) {
        _logger.ConnectionFailure(methodName, context.Exception.ToString());
      } else {
        logAction();
      }
    }
    // Remaining interface methods are stubs
  }
}

Is Less More?

For simple instrumentation from only the EF interceptors this isn’t too bad, but I’m still not happy with the tight coupling, and also don’t have a handle on what would be most useful for monitoring a running production application. In fact, I wonder if I’d get equivalent results if I got rid of the interceptors and called one generic trace event via built-in EF logging:


this.Database.Log = (s) => SampleEventSource.Log.TraceAPI("DefaultLogger", s);

Which results in somewhat unstructured but still useful information:

[Screenshot: PerfView output from the events generated via the built-in EF Database.Log hook]

And finally …

It’s going to take some trial and error to get this right; after several weeks I’ve still only scratched the surface with ETW and find the learning curve long and sometimes steep. In future posts I plan to take a look at:

  • channels and the Event Log,
  • the Semantic Logging Application Block,
  • AOP for method enter/exit instrumentation

A Look at ETW – Part 1

Over the past few years I’ve returned from technical conferences with a to-do item of “look into ETW.” I’d make this note because at some point during one or more sessions a presenter would say something like “you really should be using ETW.” Unfortunately, I never did get around to looking into Event Tracing for Windows (ETW), and the to-do item got moved forward another year. I recently finished Ben Watson’s excellent new book, Writing High-Performance .NET Code, where he also encourages the use of ETW, and having some free time, decided I’d finally “look into ETW.”

I had a vague notion of what ETW was; after all, I’d used the EventLog/Event Viewer. And the Service Trace Viewer. And Intellitrace. Weren’t these all using ETW? I’d need to do some research to find out.

Where to start learning more about ETW? How about the MSDN page? Channels, manifests, sessions, publishers? Vista?!?! Um, no, not there. Well, how about this MSDN Magazine article? Better, but it was written in 2007!  However, it did offer this:

Event Tracing for Windows® (ETW) is a general-purpose, high-speed tracing facility provided by the operating system. Using a buffering and logging mechanism implemented in the kernel, ETW provides a tracing mechanism for events raised by both user-mode applications and kernel-mode device drivers. Additionally, ETW gives you the ability to enable and disable logging dynamically, making it easy to perform detailed tracing in production environments without requiring reboots or application restarts. …
ETW was first introduced on Windows 2000. Since then, various core OS and server components have adopted ETW to instrument their activities, and it’s now one of the key instrumentation technologies on Windows platforms.

So it’s built into the OS, extremely fast, and allows for dynamic real-time tracing across system and application components. And it’s been available on Windows platforms for almost 15 years, which could be a mixed blessing, as the links to the somewhat outdated documentation above illustrate. Over its long history there have been several changes to ETW, and a confusing hodgepodge of tools, toolkits and terminology.

But first, some of that terminology.

Event – Really anything you want it to be. Events in ETW are strongly-typed: they have both predefined elements and additional developer-defined payload. ETW will also capture the stack trace and timestamp of the event. The semantic nature of events makes their definition both easy and quite difficult, as determining what should be instrumented, and when, takes some thought. There’s also a whole glossary of terms you need to know too. More on this in a later post.

Source or provider – Something which will generate events. Much of the OS and sub-systems are already ETW providers. There’s a legacy here too which can be important to remember when using some of the tooling or reading through documentation:

  • “Classic providers” (pre-Vista, in other words before 2007) had to be registered with “MOF” files and the mofcomp utility.
  • With Vista came “manifest-based” providers registered using something called wevtutil.
  • As of .NET 4.5, the EventSource is the preferred way to write a provider. (If you’re not using .NET 4.5 there’s a NuGet package to help.)

Controllers, sessions, and consumers – A controller will start and stop tracing sessions, while a consumer will work with the results. Generally you’ll use existing tools for all this, but you can also write your own. ETW has a pub-sub type of architecture, which allows for both in-process, but more often out-of-process, tracing of a running application. This decoupling also helps make it fast: unless “something” is listening to a provider’s events, ETW essentially ignores them.

ETW is not a logging platform like NLog, Log4Net, Enterprise Library, et al., as it’s not about defining a target destination. The EventSource(s) you define in your application will generate events to ETW. It’s any sessions and consumers, whether in-process or out-of-process, which will determine targets.

I’ve mentioned outdated documentation and the alphabet soup of tools (perfview, logman, xperf, tracerpt, wpa, and more), but there are other obstacles on the learning curve as well, two of which stand out:

  • Confusing message – How does ETW fit in with other, newer, Microsoft tools? Does Application Insights tie in with ETW? From what I can tell, no. How about Intellitrace? Umm, apparently not. The Semantic Logging Application Block? Yes! With SLAB you still write your own EventSource(s) but use SLAB for its various sinks and listeners. And the Service Trace Viewer? It can read ETW trace output (ETL files).
    About the Event Log / Event Viewer – Prior to 2007 you’d write to either the Event Log or use ETW; with Vista these techniques merged. You can still use the EventLog API directly, but you can also use some predefined “channels” with your events to have the Event Viewer consume them.
  • An anti-pattern? – Most applications today use some form of dependency injection to provide logging, and developers code to an interface. Although there don’t seem to be any best practices with ETW, the strongly-typed logging it provides generally means there will be a tight coupling between your code and your EventSource.

Despite my concerns, tracing and monitoring with ETW tools does look powerful. Providers are built-in for much of the OS and sub-systems, so it’s easy to enable providers for HTTP traffic, ASP.NET activity, WCF, JIT, GC, etc. Adding your own events to the mix for end-to-end tracing in a single unified view is very appealing.  The documentation picture isn’t quite as bleak as it first seemed either: Kathleen Dollard has a good introduction to the EventSource and a few other ETW blog posts as well; while Vance Morrison, a performance architect on the .NET runtime, has blogged extensively about ETW.

In part 2 I’ll try writing an EventSource to generate strongly-typed events from implementations of the EF 6 IDbConnectionInterceptor and IDbCommandInterceptor interfaces.

Port This!

A recent post on the .NET Framework blog is titled “Leveraging existing code across .NET platforms.” Code portability is a favorite topic of mine, although I’d missed their earlier announcement of a new portability analyzer. The tool, the .NET Portability Analyzer, or apiport.exe, can be used from the command line and is also now integrated into Visual Studio.

I’ve done several ports of .NET code for the DevForce framework, the first from .NET 3.5 to Silverlight 2, and more recently .NET 4.5 to Windows 8.x and Phone 8.0. The usual approach was to create a new project for the targeted platform and then see how large the explosion was when trying to compile. The analyzer saves you from this preliminary work and quickly scans your assemblies, creating a detailed report on API differences along with a few recommendations.

I thought it would be interesting to run the analyzer on the DevForce “client” assemblies. We ported these assemblies to other platforms the old fashioned way, so I’d see how thorough the tool is. As it turns out, it’s very thorough, quite fast, and easy to use.

What’s shown here should not be construed as a DevForce product roadmap.

The summary, while interesting, is not all that helpful, but I’ll get to the detail in a moment. Here I included all possible platforms the analyzer supports, although when porting your own assemblies you’re probably not interested in all possible targets, nor in porting to all of them at once. You might also find you’ll want to port from, say, full .NET to Windows 8.1, and then from there to a Xamarin platform. It’s usually easier to port from an assembly which is already focused on a mobile, “reduced” .NET API, and the API gap may be less daunting.

The “IdeaBlade.Core” assembly has the lowest API compatibility numbers in the list above, so I wanted to look at what the analyzer found. As the name of this assembly might imply, it’s responsible for a number of lower-level features, including configuration, reflection, WCF, MEF, registry access, some file I/O and EventLog access, a bit of remoting, use of the Cryptographic API, and so on.

Running apiport.exe from the command line generates Excel output, which is much more useful if you do plan on acting on the information, but I like the red light / green light look of the icons from the output generated within VS, so that’s what I show here. Here’s a snip of the detail view.

You’ll see long swaths of red for unsupported features in a specific platform, along with recommendations. For example, here System.Type, along with much of the Reflection API, is radically different in Windows 8 and Windows Phone 8.1.

You may also see large grids of red, for features found only in full .NET, such as System.Configuration.

You’ll also sometimes see surprises, maybe even showstoppers.

Hmm, System.Linq.Expressions is not available in Xamarin.iOS.


What? No WCF support in Phone 8.1!

Once you know what’s missing, the question then turns to how to mitigate the differences. I’ve found that there doesn’t seem to be one single best practice here, and developer and team productivity should trump any search for architectural purity. If a developer doesn’t know, and can’t easily discover, that a certain piece of code is used differently for different platforms, then your team will waste time with broken builds and worse.

If you aren’t doing continuous integration, now is also the time to start. You’ll need to port your test suite to the new platform(s) too. With the increased testing/build workload, team members may sometimes cut corners in rushed situations, and automatic builds and tests will truly be a lifesaver.

Especially when porting from full .NET, first look at reducing or eliminating functionality. Your tablet or mobile app can’t, and shouldn’t even need to, use the registry, a .config file, write to the console, etc. You can often just not include those files containing unnecessary functionality in the target project. Problem solved, unless internal politics or your customers take issue.

You should also usually avoid the temptation to “roll your own” for features missing in the platform. You really won’t miss things like the PropertyDescriptor, so apply the YAGNI principle with rigor.

You might also find that some features of your application may really be “desktop only” or “server only” features. If possible, it could be a good time to restructure and refactor any assembly such as this into multiple assemblies, leaving remaining “client” or “common” functionality easier to port. In fact, I wish we’d done this with IdeaBlade.Core.

Next, you’ll find some types supported across platforms, but not all the methods or overloads you might be used to. Or maybe there’s some slightly different way of accomplishing the same thing. Here the analyzer’s recommended changes can help. It’s usually easy enough to use Dispose rather than Close on a TextWriter; List<T> instead of ArrayList; or a different but functionally equivalent constructor or method overload. Having common code, rather than lots of platform-specific code, is much easier to maintain.
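For instance, here’s a hedged before/after of this kind of mechanical substitution (the names are illustrative):

using System.Collections.Generic;
using System.IO;

public static class ReportWriter {
  public static void WriteAll(IEnumerable<string> lines, Stream target) {
    var buffered = new List<string>(lines);         // List<T> instead of ArrayList
    using (var writer = new StreamWriter(target)) { // Dispose (via using) instead of Close
      foreach (var line in buffered)
        writer.WriteLine(line);
    }
  }
}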

Speaking of the TextWriter and I/O in general, it could also be a good time to refactor for async, and possibly drop sync usage where possible. All the listed platforms support async-await (SL5 requires a compatibility pack).

When you do need platform-specific code, a very common approach is of course the use of compiler directives. These do have their place, but can also be overused, especially as the number of supported platforms grows. Do you really want to maintain code like this?

#if NET
 // do some .NET thing
#elif SILVERLIGHT
 // do some SL thing
#elif WINDOWS_APP
 // and so on
#elif WINDOWS_PHONE_APP
 // 
#elif ANDROID
 //
#endif 

Or worse,

#if NET || ANDROID
 // some cool thing
#elif SILVERLIGHT && !WINDOWS_PHONE
 // something else
#elif NETFX_CORE || WINDOWS_PHONE 
 // and so on
#endif

Or this?!

Other than compiler directives, what else can you do?

  • When entire classes will be wildly different across platforms, use interfaces and custom implementations for each platform.
  • When some code is common, you can use abstract base classes with subtyping by platform.
  • If only bits and pieces of a class will be platform-specific, you can make your classes partial and refactor platform functionality into functions for which you can use partial methods (see the sketch after this list). Partial methods don’t seem to be used much, but they’re handy, and can be defined for static methods too.
  • Extension methods can be useful too, with the platform-specific functionality in separate extension classes. You can also use extension methods to add functionality that you think is “missing” from one platform. This lets calling code use a common API fairly seamlessly.
  • You can also use the adapter pattern, which works well for static functions. For example, a “TaskAdapter” can wrap the static methods of TaskEx in SL, and Task everywhere else.
  • What about “missing” interfaces and attributes? For example, ICloneable is “missing” on most of the platforms, but if you’ve got a lot of code that wants to Clone, defining the interface yourself is an option. The same with attributes, especially those that only define simple types and contain no logic or behavior. If you have a lot of code already decorated with “missing” attributes and don’t want to wrap compiler directives around every single one, creating your own implementation of the attribute is an option.
  • With all of the above, the next question might be whether to use separate files for each platform, and what naming scheme if so, or whether to continue using a single file with compiler directives separating content. I don’t have a good answer for this, although I lean toward using a single file with compiler directives, since it’s then obvious to developers working in these files where the platform-specific code is located.
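As an example of the partial-method approach from the list above, here’s a sketch with hypothetical names:

// Shared file, included in every target project.
public partial class FileStore {
  public void Save(string name, string contents) {
    // Common validation and bookkeeping would go here...
    SavePlatform(name, contents);
  }

  // Declaration only; if no platform file supplies a body, calls to it compile away.
  partial void SavePlatform(string name, string contents);
}

// Platform-specific file (e.g. FileStore.Net.cs), compiled only into the full .NET project.
public partial class FileStore {
  partial void SavePlatform(string name, string contents) {
    System.IO.File.WriteAllText(name, contents);
  }
}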

Finally, one important question when looking at porting is whether a native assembly is truly the right approach, or should you take the plunge with a portable class library? The tooling around PCL is much improved, although profile-specific documentation is lagging. Porting a .NET library to a PCL, of any target profile, can be painful, and the analyzer doesn’t (yet?) handle this, possibly because the capabilities of each PCL profile are something you have to discover for yourself as a kind of initiation rite. A PCL can sometimes mean taking a lowest common denominator approach, and you might lose functionality. Nevertheless, “write once, run everywhere” is a worthy, if still somewhat elusive, goal.

First Look at Xamarin.Forms

Included in last week’s Xamarin 3 release is Xamarin.Forms. What is Xamarin.Forms? “A cross-platform natively backed UI toolkit abstraction that allows developers to easily create user interfaces that can be shared across Android, iOS, and Windows Phone” (emphasis mine).

Other goodies in Xamarin 3 are a visual designer for Xamarin.iOS within both Visual Studio and Xamarin Studio, NuGet support, F# support, support for Shared Projects, and quite a few fixes and enhancements. As usual with Xamarin, there’s a lot of promise here, along with many rough edges.

As in prior releases, you still need a Mac to serve as a “Xamarin build host” for iOS support, but this is an Apple-imposed restriction. If you’re doing Windows Phone 8 (or Windows Phone Silverlight 8.1) you need to be working in Windows 8 and Visual Studio with the appropriate Phone SDK installed. There’s no support for Universal apps, Windows Store, or Windows Phone 8.1 store apps at this time.

It’s probably just me, but I find all the Xamarin version numbers confusing, and didn’t even know that I’d been working in Xamarin 2 previously. “Xamarin 3.0” includes Xamarin Studio 5.0 and Xamarin VSIX 3.0 for Visual Studio support. Xamarin.Android is currently at 4.12, while Xamarin.iOS seems to be at 7.2.1.

Licensing also hasn’t changed, unfortunately. The Business edition, needed for Visual Studio support, still runs $999 per platform, per developer. The free Starter edition does not include support for Xamarin.Forms, while the Indie edition, which does, is $299 per platform, per developer.

Xamarin.Forms is aimed at rapid application development, primarily targeting enterprise apps. XF contains a gallery of about 40 common controls and a navigation abstraction, basically the essentials for a typical forms-style page or application. Controls in the gallery include:

  • Pages – an abstraction of the Android Activity, iOS ViewController, or WP Page
  • Layouts – composable containers such as Grid and StackLayout
  • Views – the usual buttons, labels, text boxes, images, date and time pickers, and more
  • Cells – combine a label with another visual element in tables and lists

Some might argue that there’s nothing particularly sexy about form controls, but allowing a developer to build the boilerplate parts of a UI once with non-platform specific abstractions – a Page, a Label, a ListView, etc. – is pretty powerful. Add in rendering using the native controls of each target platform, and this really is quite sexy. Although Xamarin seems to see Forms as a prototyping tool, if you can rapidly build native apps for multiple platforms – with native user interfaces, native API access, and native performance – while working at the “right” level of abstraction, what’s not to like?

Customization per platform is provided too. You can write custom renderers, and drop into platform-specific implementations via the built-in DependencyService. The DependencyService is not intended for use as a general-purpose DI container, it’s more in the style of “platform enlightenment” used with some PCL implementations, although it can be used as a simple Service Locator too.
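For example, here’s a hedged sketch of the DependencyService in use (the interface and class names are made up):

using Xamarin.Forms;

// Shared code: define the abstraction and resolve it at runtime.
public interface ITextToSpeech {
  void Speak(string text);
}

public static class Announcer {
  public static void SayHello() {
    // Returns whichever implementation the current platform registered.
    DependencyService.Get<ITextToSpeech>().Speak("Hello");
  }
}

// In the iOS project: register the native implementation with an assembly attribute.
[assembly: Xamarin.Forms.Dependency(typeof(TextToSpeech_iOS))]
public class TextToSpeech_iOS : ITextToSpeech {
  public void Speak(string text) { /* call into AVSpeechSynthesizer here */ }
}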

There are two additional features though, which make Xamarin.Forms really exciting: data binding and XAML.

Android and iOS have no built-in data binding support, although you can use a third party framework like MvvmCross to accomplish this. The data binding support in Xamarin.Forms feels quite familiar: one- and two-way binding, INotifyPropertyChanged, INotifyCollectionChanged, value converters, static resources, ItemsSource, data templates. Missing, however, is validation, and there’s no INotifyDataErrorInfo implementation.

XAML! Most of the current Xamarin.Forms doc and samples seem to focus on writing the UI in code, but what’s better when building a UI than working in a visual designer to get immediate feedback on every change? Granted, wiring up your controls in code is powerful, but it’s not 1990. XAML to the rescue! Or maybe not quite yet. Tucked away in the documentation is the note “Xamarin.Forms is not compatible with pre-existing XAML visual designers.” Yep, the XAML for Xamarin.Forms cannot be opened in the visual designer of either Visual Studio or Xamarin Studio. At this time you must hand edit your XAML. Although there is supposed to be Intellisense within the XAML code editor now, I couldn’t get it working. A few other missing pieces: breakpoints and debugging of XAML data bindings aren’t supported yet, and the current API-level documentation doesn’t show any XAML sample code to make hand editing easier.

It’s tempting to want to grab some XAML from an existing app, forgetting that Xamarin.Forms defines its own controls. So for example this:

<StackPanel Orientation="Vertical">
   <TextBlock Text="Name"  />
   <TextBox Text="{Binding FullName}" />
</StackPanel>

might look like this in Xamarin.Forms:

<StackLayout  VerticalOptions="FillAndExpand" >
  <Label Text="Name" />
  <Entry Text="{Binding FullName}"  />
</StackLayout>

Also note that XAML files can be defined only in portable class libraries and shared projects: platform-specific projects don’t understand Xamarin.Forms.

Which brings me to another important piece of Xamarin.Forms: shared project support, also introduced by Microsoft in Visual Studio 2013 Update 2 for Universal Apps. Previously, platform-specific projects could reference a portable class library for shared content; with Xamarin 3 there’s now the option to use shared projects, in both VS and Xamarin Studio.

A “shared project” is not truly a VS or XS project, and won’t compile into a separate assembly. Instead the project, with a .shproj extension, imports a .projitems file containing all the shared project artifacts. Projects referencing the shared project will automatically include all shared artifacts. It’s a little like file linking, a common technique for sharing content among multiple platform-specific projects, and also supports compiler directives. Unlike file linking, the editing experience is much better, as the code editor now includes a “context switcher” which allows you to view and edit the file within the context of the target project.

As with Universal Apps, choosing between a PCL or shared project for shared content is a decision each team must make. A PCL can provide quite a bit of flexibility and potential reuse in other solutions and platforms and is usually separately testable, but also means deploying another assembly. A shared project allows you to compile each target project into a single assembly, so might be great for small apps.

One gotcha with a shared project is that the referencing target projects likely have different namespace and/or assembly names. If, for example, you define a local namespace in shared XAML (e.g., xmlns:local="clr-namespace:TipCalc;assembly=TipCalc") for shared value converters, you’ll need to use the same assembly name for all target projects too.

So, XAML and data binding, PCL and shared projects. What might a simple solution look like? While I haven’t yet written anything that’s not both derivative and butt ugly, I found these helpful:

  • Introduction to Xamarin.Forms – This is really quite comprehensive, and the best place to start.
  • Among the samples, TipCalc shows off XAML and data binding, while the Forms Gallery provides code for every control type. Both samples were written by Charles Petzold, which should be reason enough to go check them out.

PostSharpin’ – Part 3

In the final part of this series I look at new features coming in PostSharp 3.2, including support for aggregates and undo/redo.

Aggregates

Under the hood, the biggest new feature in 3.2 might be the support for aggregates: object graphs with parent-child relationships. Version 3.2 makes aggregates first class citizens in the world of PostSharp aspects, and allows PostSharp to offer more complex features like undo/redo. They’ve also modified other aspects to be aggregate aware – so for example the Actor aspect now also implements IAggregatable.

You mark up properties in your aggregatable types with Child, Parent and Reference aspects, and PostSharp then does the right thing when dealing with your object graph. I mentioned an “aggregatable type” – you can mark up your class with the Aggregatable aspect, but on its own this won’t do much. Instead you’ll use another instance-level aspect – such as Recordable, Immutable, Disposable, and others – which are all aggregate aware and will work correctly with your object graph.

If you’re using Entity Framework, nHibernate, or similar, these frameworks already understand your graph and its relationships, so additional markup may feel like more work, although these aspects could open the door to custom aspects which understand both the data services layer and composition of your model.

Here’s a simple example of aggregates with the new Disposable aspect, which handles the dirty work of implementing IDisposable on types in your graph.

using System;
using System.Collections.Generic;
using PostSharp.Patterns.Collections;
using PostSharp.Patterns.Model;

[Disposable]
public class Order
{
    public Order()
    {
        Details = new AdvisableCollection<OrderDetail>();
    }

    public int Id { get; set; }
    public DateTime OrderDate { get; set; }
    public string Customer { get; set; }
    
    [Child]
    public ICollection<OrderDetail> Details { get; private set; }
}

[Disposable]
public class OrderDetail
{
    public int Id { get; set; }
    public string Product { get; set; }
    public int Quantity { get; set; }
    public decimal UnitPrice { get; set; }
} 

The child collection must be of type AdvisableCollection, otherwise PostSharp raises a runtime error. But once defined correctly, when the parent is disposed PostSharp will dispose of all children too.

A bit irritating, though, is that to use your parent type in a using statement you must initialize it outside the scope of the using, to avoid the build-time error “type used in a using statement must be implicitly convertible to ‘System.IDisposable’”:

var order = new Order { Id = 1, OrderDate = DateTime.Now };
using (order as IDisposable)
{
   ...
}

Immutable and Freezable

I was initially excited to see that PostSharp will be adding Immutable and Freezable aspects in version 3.2.

When you need to support thread-safe access for a number of object types, immutable types are really appealing. But object initializers are attractive too, and since object initializers require public setters, a type that supports them can’t be immutable. Named constructor parameters can partially solve the issue, but the brevity of object initializers, for both the caller and callee, can’t be beat. C# 6.0 should make writing immutable types easier with the new primary constructors and property initializers, but I was hoping PostSharp could work some magic here right now, without having to wait for the C# release.

Well, that may be a pipe dream of mine. PostSharp is adding support for early and late object initialization through Immutable and Freezable, but these address the problem with regard to object graphs and “deep” immutability. Granted, this will be a helpful feature. Unfortunately, I wasn’t able to get these aspects working correctly with the alpha code, so I’ll have to try again later.
Edit: Per Gael Fraiteur’s recommendation, an upgrade to version 3.2.20-alpha got this working. As hoped, PostSharp will raise an ObjectReadOnlyException if you try to make any changes to an Immutable type after construction or a Freezable type after Freeze is called. This works for both simple and “deep” fields and properties. I expect “freezability” to be especially useful.

These aspects are available from the pre-release version of the PostSharp Threading Pattern Library. A more thorough discussion of these aspects is available here.
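For reference, here’s roughly what I was attempting – a minimal sketch, where the exact API is my assumption from the pre-release docs: I’m assuming the Freezable aspect exposes Freeze through an IFreezable interface, reached via PostSharp’s Post.Cast helper.

using PostSharp;                    // Post.Cast
using PostSharp.Patterns.Threading; // assumed home of [Freezable]

[Freezable]
public class Invoice
{
    public string Customer { get; set; }
}

// Usage sketch:
var invoice = new Invoice { Customer = "Bikes R Us" };
Post.Cast<Invoice, IFreezable>(invoice).Freeze(); // freeze after initialization
invoice.Customer = "Changed";                     // expected: ObjectReadOnlyException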

Recordable

Now here is something to get excited about. Implementing the Memento pattern on your own is usually hard, error prone, and non-performant – probably the reason so few applications and frameworks support undo/redo – but that also makes it a great candidate for an AOP approach.

The Recordable aspect builds upon the support for aggregation, so with a few simple changes to the code above I’m ready for Undo/Redo support:

using System;
using System.Collections.Generic;
using PostSharp.Patterns.Collections;
using PostSharp.Patterns.Model;
using PostSharp.Patterns.Recording;

[Recordable]
public class Order
{
    public Order()
    {
        Details = new AdvisableCollection<OrderDetail>();
    }

    public int Id { get; set; }
    public DateTime OrderDate { get; set; }
    public string Customer { get; set; }

    [Child]
    public ICollection<OrderDetail> Details { get; private set; }
}

[Recordable]
public class OrderDetail
{
    public int Id { get; set; }
    public string Product { get; set; }
    public int Quantity { get; set; }
    public decimal UnitPrice { get; set; }
}

As before, the child collection must be an AdvisableCollection, otherwise an error is thrown.

Once a type, or the types in an object graph, is made recordable, you then use the new RecordingServices features. The Recorder tracks operations within a scope and provides the undo/redo functions. There’s a default Recorder, RecordingServices.DefaultRecorder, to get you started and for use in simple applications.

By default, every change to a property or field, add/remove from a child collection, or call to a public method on the class, is atomic. To bundle these into a scope – a logical operation – you use a RecordingScope, which can be set either declaratively or programmatically.

For example:

var order = new Order();
using (RecordingScope scope = RecordingServices.DefaultRecorder.OpenScope(RecordingScopeOption.Atomic))
{
    order.Customer = "Bikes R Us";
    order.Id = 1;
    order.OrderDate = DateTime.Now;
}

If Undo were called after the above, the Order object would be returned to its state just after construction.

It’s worth noting that using an object initializer is not an atomic operation. For example, the following will undo only the last property set operation (the set of OrderDate):

  var order = new Order { Customer = "Bikes R Us", Id = 1, OrderDate = DateTime.Now };
  RecordingServices.DefaultRecorder.Undo();

Also notable: constructors do not participate in an operation, even if the constructor sets properties or fields within the class; the object must be fully constructed before it can be considered recordable.

Along with Undo is of course Redo.

Here we create a new Order and OrderDetail, Undo the add of the detail line to the order, and then immediately have a change of heart and call Redo to restore the added line:

var order = new Order { Customer = "My Grocer", Id = 1, OrderDate = DateTime.Now };
var od = new OrderDetail { Id = 1, Product = "pears", Quantity = 10, UnitPrice = 1.99M };
order.Details.Add(od);
RecordingServices.DefaultRecorder.Undo();
RecordingServices.DefaultRecorder.Redo();

Restore points are supported too:

var order = new Order { Customer = "My Grocer", Id = 1, OrderDate = DateTime.Now };
RecordingServices.DefaultRecorder.Clear();

var token1 = RecordingServices.DefaultRecorder.AddRestorePoint("first");
order.Details.Add(new OrderDetail { Id = 1, Product = "apples", Quantity = 5, UnitPrice = 1.99M });

var token2 = RecordingServices.DefaultRecorder.AddRestorePoint("second");
order.Details.Add(new OrderDetail { Id = 2, Product = "potatoes", Quantity = 10, UnitPrice = .99M });

// Removes detail 2
RecordingServices.DefaultRecorder.UndoTo(token2);

// Removes detail 1
RecordingServices.DefaultRecorder.UndoTo(token1);

You have a great deal of control over the Recorder and other recordable features, and in general the implementation looks full-featured and quite useful. There’s also a series of blog posts on the PostSharp site with more detailed information.

The recordable feature is available in the pre-release version of the PostSharp Model Pattern Library.

PostSharpin’ Part 2 – Actor

In Part 1 I looked at PostSharp’s support for INotifyPropertyChanged, and several handy aspects to help with threading: Background, Dispatch, ThreadUnsafe and ReaderWriterSynchronized. In part 2 I’d planned to look at PostSharp’s Actor support and new features for undo/redo, but life got in the way, so part 2 will cover only the Actor aspect, and part 3 will cover new features in PostSharp 3.2.

Actor

The Actor model hasn’t yet received a lot of attention in the .NET world. The model was first defined in 1973 as a means to model parallel and distributed systems, “a framework for reasoning about concurrency.” The model assumes that “concurrency is hard” and provides an alternative to do-it-yourself threading and locking. It’s built into languages like Erlang and Scala, and there are a number of libraries and frameworks. It’s gotten a recent boost in the .NET world with F# agents, the TPL Dataflow library and Project Orleans.

Conceptually, an actor is a concurrency primitive which can both send and receive messages and create other actors, all completely asynchronous, and thread-safe by design. An actor may or may not hold state, but it is never shared.

Where does PostSharp fit in? Remembering the PostSharp promise – “Eradicate boilerplate. Raise abstraction. Enforce good design.” – the PostSharp Actor implementation allows developers to work at the “right” level of abstraction, and provides both build-time and run-time validation to avoid shared mutable state and to ensure that private state is accessed by only a single thread at a time.

To use the Actor aspect, install the Threading Pattern Library package from NuGet.

Ping Pong

I started with the PingPong sample (well, PingPing really) from PostSharp. Here’s the code:

using System;
using System.Threading;
using System.Threading.Tasks;
using PostSharp.Patterns.Threading;

[Actor]
public class Player 
{
    private string name;
    private int counter;

    public Player(string name)
    {
        this.name = name;
    }

    public async Task Ping(Player peer, int countdown)
    {
        Console.WriteLine("{0}.Ping({1}) from thread {2}", this.name, countdown,
                          Thread.CurrentThread.ManagedThreadId);

        if (countdown > 1)
        {
            await peer.Ping(this, countdown - 1);
        }

        this.counter++;
    }

    public async Task<int> GetCounter()
    {
        return this.counter;
    }
}

class Program
{
    static void Main(string[] args)
    {
        AsyncMain().Wait();
        Console.ReadLine();
    }

    private static async Task AsyncMain()
    {
        Console.WriteLine("main thread is {0}", Thread.CurrentThread.ManagedThr

        Player ping = new Player("Sarkozy");
        Player pong = new Player("Hollande");

        Task pingTask = ping.Ping(pong, 10);

        await pingTask;

        Console.WriteLine("{0} Counter={1}", ping, await ping.GetCounter());
        Console.WriteLine("{0} Counter={1}", pong, await pong.GetCounter());
    }
}

Here the Player class is an actor, and decorated with the PostSharp Actor aspect. The “messages” are implied by the Ping and GetCounter async methods. Whether the “message-ness” of the actor model should be abstracted away is certainly a point for discussion, but it does provide for easier programming within an OO language like C#.

From the output we see that 1) activation (construction) is performed on the caller’s thread, 2) the player’s methods are invoked on background threads, and 3) there is no thread affinity.
[Image: pingpong_sm]

Validation

The compile-time validation performed when using the Actor aspect tries to ensure you do the right thing.

1. All fields must be private, and private state must not be made available to other threads or actors.

If we try to define the name field as public:

[Actor]
public class Player 
{
    public string name;
    private int counter;
    ...
} 

This results in the compiler error: “Field Player.name cannot be public because its declaring class Player implements a threading model that does not allow it. Apply the [ExplicitlySynchronized] custom attribute to this field to opt out from this rule.”

The same holds true of a public property:

[Actor]
public class Player 
{
    ...
    public int Id { get; private set; }
    ...
}

This results in the compile-time error: “Method Player cannot return a value or have out/ref parameters because its declaring class derives from Actor and the method can be invoked from outside the actor.”

2. All methods must be asynchronous.

To PostSharp this means that method signatures must include the async modifier. If you try to return a Task from a non-async method, something like this:

 public Task<string> SayHello(string greeting)
 {
     return Task.FromResult("You said: '" + greeting + "', I say: Hello!");
 }

You’ll get a compiler error: “Method Player cannot return a value or have out/ref parameters because its declaring class derives from Actor and the method can be invoked from outside the actor.”

The async rule also means that you must ignore the standard compiler warning about async methods which contain no awaits, which is why the GetCounter method looks like this:

public async Task<int> GetCounter()
{
    return this.counter;
}

PostSharp will dispatch the method to a background task, so you should ignore the compiler warning: This async method lacks ‘await’ operators and will run synchronously. Consider using the ‘await’ operator to await non-blocking API calls, or ‘await Task.Run(…)’ to do CPU-bound work on a background thread.
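If the warnings pile up you can also suppress CS1998 explicitly – this is the standard C# pragma, nothing PostSharp-specific:

#pragma warning disable 1998 // async method intentionally lacks an await; PostSharp dispatches it
public async Task<int> GetCounter()
{
    return this.counter;
}
#pragma warning restore 1998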

If you remove the async modifier the Actor validation will fail. You can add an await, but it looks silly, and you shouldn’t await Task.FromResult anyway:

public async Task<int> GetCounter()
{
    return await Task.FromResult<int>(this.counter);
}

You can, however, write a synchronous method, which PostSharp will dispatch to a background thread. For example:

public void Ping(Player peer, int countdown)
{
    Console.WriteLine("{0}.Ping from thread {1}", this.name,
                      Thread.CurrentThread.ManagedThreadId);

    if (countdown >= 1)
    {
        peer.Ping(this, countdown - 1);
    }

    this.counter++;
}

This may be a good thing, but also possibly misleading, since at first glance a developer might assume the method is executed synchronously on the current thread.

Rock-Paper-Scissors

Next I tried the “Rock-Paper-Scissors” example as described here.

Here’s my implementation.

using System;
using System.Threading.Tasks;
using PostSharp.Patterns.Threading;

namespace Roshambo
{
    public enum Move
    {
        Rock,
        Paper,
        Scissors
    }

    [Actor]
    public class Coordinator
    {
        public async Task Start(Player player1, Player player2, int numberOfThrows)
        {
            Task.WaitAll(player1.Start(), Task.Delay(10), player2.Start());

            while (numberOfThrows-- > 0)
            {
                var move1Task = player1.Throw();
                var move2Task = player2.Throw();
                Task.WaitAll(move1Task, move2Task);

                var move1 = move1Task.Result;
                var move2 = move2Task.Result;

                if (Tie(move1, move2))
                {
                    Console.WriteLine("Player1: {0}, Player2: {1} - Tie!", move1, move2);
                }
                else
                {
                    Console.WriteLine("Player1: {0}, Player2: {1} - Player{2} wins!", move1, move2,
                        FirstWins(move1, move2) ? "1" : "2");
                }
            }
        }

        private bool Tie (Move m1, Move m2) {
            return m1 == m2;
        }

        private bool FirstWins(Move m1, Move m2)
        {
            return
              (m1 == Move.Rock && m2 == Move.Scissors) ||
              (m1 == Move.Paper && m2 == Move.Rock) ||
              (m1 == Move.Scissors && m2 == Move.Paper);
        }

    }

    [Actor]
    public class Player
    {
        private Random _random;
        private string _name;

        public Player(string name)
        {
            _name = name;
        }
        public async Task Start()
        {
            int seed = Environment.TickCount + System.Threading.Thread.CurrentThread.ManagedThreadId;
            _random = new Random(seed);
        }

        public async Task<Move> Throw()
        {
            return (Move)_random.Next(3);
        }

        public async Task<string> GetName()
        {
            return _name;
        }
    }
}

class Program
{
    static void Main(string[] args)
    {
        AsyncMain().Wait();
        Console.ReadLine();
    }

    private static async Task AsyncMain()
    {
        var coordinator = new Coordinator();
        var player1 = new Player("adam");
        var player2 = new Player("zoe");

        await coordinator.Start(player1, player2, 20);
    }
}

And the exciting results:
[Image: rps]

A few things to note:

  • I passed a name to the Player constructor but then never used it again. As private state, to access the name you must follow the Actor message rules and use an async method. I wouldn’t want the Coordinator to repeatedly ask each Player for its name, but this could have been done once at play start.
  • Trying to uniquely seed a System.Random instance for each player was tricky, and my implementation is a hack. The Random class is not thread-safe, so while sharing a single static Random instance among Player actors is an option, having to perform my own locking around Random.Next calls seemed to violate the spirit of the actor model. The default seed for a Random instance is Environment.TickCount, which when read in close succession will likely return the same value. Using the current thread id as a seed is an alternative, but although PostSharp will ensure that Actor methods are called on a background thread, there’s no assurance they’ll be different threads for different actor instances. My not-so-robust compromise was to take the sum of TickCount and thread id and cross my fingers; including the dummy Task.Delay when waiting for the players to start helps. (The locking alternative is sketched just after this list.)
  • The Coordinator here does not hold state, and its Start method will 1) tell the players to start, 2) tell the players to throw, and 3) announce the result.
  • The Player does hold non-shared state, and contains Start, Throw and GetName async methods. None of these methods is inherently asynchronous, so I see compiler warnings telling me to consider using the await operator. I could have made these methods synchronous, but as I said above I think it leads to some cognitive dissonance between the code you see and the underlying actor implementation.
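For completeness, here’s the locking alternative I passed on – a single shared Random behind a lock. A minimal sketch; the SafeRandom name is mine:

// One shared Random guarded by a lock; Random itself is not thread-safe.
public static class SafeRandom
{
    private static readonly Random _random = new Random();
    private static readonly object _lock = new object();

    public static int Next(int maxValue)
    {
        lock (_lock)
        {
            return _random.Next(maxValue);
        }
    }
}

The Player’s Throw method would then become return (Move)SafeRandom.Next(3); and the seeding hack disappears, at the cost of a lock shared across actors.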

Summary

Overall, despite some quirks, the Actor aspect could be quite useful. It would be interesting to compare PostSharp’s Actor support with other .NET implementations, and I may try that some day.

PostSharpin’ – Part 1

I’ve been intrigued with PostSharp for some time. PostSharp is an AOP platform and works its magic at build time (for the most part). You apply an aspect and PostSharp does the IL weaving, via integration with MSBuild, to output an assembly with the injected functionality. The goal, as their home page banner says: “Eradicate boilerplate. Raise abstraction. Enforce good design.”

I worked on the DevForce framework for a number of years and we’d written several custom aspects to implement certain product features, but I’d never had a chance to play with some of the “ready-made” implementations in the PostSharp patterns library. Writing your own aspects can range from fairly easy to quite hard, but many of the out-of-the-box aspects in the patterns library seem to combine great functionality and ease of use. Of particular interest to me are the ready-made patterns for INotifyPropertyChanged support and threading. The patterns library also includes support for logging, exception handling and a few other patterns, but I’ll save those for another day.

To get started with PostSharp, install the package(s) from NuGet. You’ll be prompted for license information during the install; since I’m using the 3.2 alpha I signed up for a 45-day free trial of PostSharp Ultimate.

NotifyPropertyChanged

Nothing screams boilerplate more than INotifyPropertyChanged. In any .NET environment, data-bound objects must implement INotifyPropertyChanged (or the PropertyChanged pattern) for changes to be seen by UI controls. This means raising the PropertyChanged event and, more importantly, losing the simplicity of automatic properties, since property setters must raise PropertyChanged. This gets irritating quickly.
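As a reminder, the boilerplate we’re trying to eliminate looks something like this minimal hand-written version:

using System.ComponentModel;

public class ManualCustomer : INotifyPropertyChanged
{
    private string _firstName;

    public string FirstName
    {
        get { return _firstName; }
        set
        {
            if (_firstName == value) return;
            _firstName = value;
            OnPropertyChanged("FirstName");
        }
    }

    public event PropertyChangedEventHandler PropertyChanged;

    protected void OnPropertyChanged(string propertyName)
    {
        var handler = PropertyChanged;
        if (handler != null)
        {
            handler(this, new PropertyChangedEventArgs(propertyName));
        }
    }
}

Multiply that by every property on every bound class and the appeal of an aspect is obvious.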

With PostSharp the solution is simple: just decorate the class with the NotifyPropertyChanged aspect, found in the Model Pattern Library.

You can decorate a base class too, and get the functionality you expect across its sub-types. If you’ve both implemented INotifyPropertyChanged and added the aspect (maybe in a complex inheritance hierarchy), PostSharp handles that too. PostSharp will add the PropertyChanged logic to the setters of all public properties, but you can opt out by decorating a property with the IgnoreAutoChangeNotification aspect.

Here’s a simple class and sub-class with full INotifyPropertyChanged functionality:

using System;
using PostSharp.Patterns.Model;

[NotifyPropertyChanged]
public class Customer
{
    public string FirstName { get; set; }
    public string LastName { get; set; }
    public string FullName
    {
        get { return string.Format("{0} {1}", FirstName, LastName); }
    }

    [IgnoreAutoChangeNotification]
    public DateTime LastContactDate { get; set; }
}

public class GoldCustomer : Customer 
{
     public int Points { get; set; }
}

Under the hood the aspect will inject the INotifyPropertyChanged logic into your IL. Actually, it implements a PostSharp interface called INotifyChildPropertyChanged, which in turn extends INotifyPropertyChanged.

This simplicity does come with a performance penalty, however. In my tests, comparing the setting of a single string property on a class implementing INotifyPropertyChanged (INPC) by hand against one using the NotifyPropertyChanged aspect, the hand-written INPC implementation was about 100x faster. So while the aspect is a great solution for eliminating the boilerplate, it’s not ideal for classes which won’t be data bound in the UI or which have strict performance requirements.
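The comparison was along these lines – a rough sketch rather than my exact harness, reusing the aspect-decorated Customer above and the hand-written ManualCustomer from earlier:

// Rough micro-benchmark sketch: set one string property a million times.
var sw = Stopwatch.StartNew();
var aspectCustomer = new Customer();
for (int i = 0; i < 1000000; i++)
{
    aspectCustomer.FirstName = "name" + (i % 10);
}
sw.Stop();
Console.WriteLine("Aspect:      {0} ms", sw.ElapsedMilliseconds);

sw.Restart();
var manualCustomer = new ManualCustomer();
for (int i = 0; i < 1000000; i++)
{
    manualCustomer.FirstName = "name" + (i % 10);
}
sw.Stop();
Console.WriteLine("Manual INPC: {0} ms", sw.ElapsedMilliseconds);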

Threading

The PostSharp Threading Pattern Library contains several interesting aspects, with more coming in the 3.2 release.

Background and Dispatched

The Background aspect can be applied to any method to dispatch it to the background. Under the hood PostSharp does a Task.Start to invoke your method on a background thread. With the async/await support in .NET 4.5 this may not be all that handy, but if you’re using a BackgroundWorker or working directly with the Thread or ThreadPool classes it may make your code a bit cleaner. Likewise, the Dispatched aspect will ensure the decorated method is invoked on the UI thread. Background and Dispatched don’t need to be used together, and using Dispatched can clean up some ugly BeginInvoke logic.

Coincidentally, I’d just read Laurent Bugnion’s article Multithreading and Dispatching in MVVM Applications in the April issue of MSDN Magazine, so I switched his simple non-MVVM starting solution over to Background and Dispatched. Bugnion uses the DispatcherHelper from MvvmLight in his MVVM samples; you can use these PostSharp aspects in your view models too, but unlike some of the other PostSharp libraries, which have portable class library support, the threading library is available only for .NET 4 and above.

Here’s a snippet of the sample code from the article using Background and Dispatched:

[Background]
private void StartSuccessClick(object sender, RoutedEventArgs e)
{
    // This is a background operation!

    var loopIndex = 0;

    while (_condition)
    {
        // Do something

        // Notify user
        UpdateStatus(string.Format("Loop # {0}", loopIndex++));

        // Sleep for a while
        Thread.Sleep(500);
    }
}

[Dispatched]
private void UpdateStatus(string msg)
{
    StatusTextBlock.Text = msg;
}

These are handy but not all that compelling: when you access a UI control from a background thread an InvalidOperationException is thrown immediately, so these bugs are easy to diagnose and fix. Much more insidious are thread safety problems. In a multi-threaded environment, safely working with mutable objects can be challenging: thread safety issues generally appear as odd “random” errors and deadlocks, often only under load, and are very difficult to reproduce and debug.
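To illustrate the first point, here’s the classic fail-fast mistake, reusing StatusTextBlock from the sample above (a minimal sketch):

// WPF throws InvalidOperationException ("The calling thread cannot access
// this object because a different thread owns it") as soon as the control
// is touched from a background thread.
Task.Run(() =>
{
    StatusTextBlock.Text = "updated from a background thread"; // fails fast
});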

PostSharp defines three threading models, or patterns, with corresponding aspects: Thread Unsafe, Reader-Writer Synchronized and Actor. (I’ll take a look at Actor in a follow-on post.)

ThreadUnsafe
If a single instance of a class should never be accessed concurrently, decorate the class with the ThreadUnsafe aspect. Should you access the object across concurrent threads, you’ll receive an immediate ConcurrentAccessException from PostSharp.

Markup is easy:

[ThreadUnsafe]
public class Booking
{
    public string CustomerName { get; set; }
    public DateTime StartTime { get; set; }
    public int NumberOfPersons { get; set; }
}

The ThreadUnsafe aspect also allows you to set a ThreadUnsafePolicy, such as ThreadAffine, which gives thread affinity to your objects and will cause a ThreadMismatchException to be thrown if the object is used on any thread other than the creating thread. Also baked into ThreadUnsafe is some compile-time validation and support for opting out to perform your own explicit synchronization.
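For example, the thread-affine variant looks something like this – note that I’m assuming the attribute accepts the policy via its constructor, per the docs:

// Any use of this object from a thread other than its creator
// should raise ThreadMismatchException.
[ThreadUnsafe(ThreadUnsafePolicy.ThreadAffine)]
public class Booking
{
    public string CustomerName { get; set; }
    public DateTime StartTime { get; set; }
}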

Most classes aren’t designed to be thread safe, and don’t need to be, but if you have any globally shared objects or need to pass an object around among threads, or alternately, ensure it’s never shared across threads, this aspect is far easier to use than performing your own locking. I didn’t look at performance here, but I’ll take a quick exception over data corruption any day.

ReaderWriterSynchronized

Finally, dealing with thread-safe objects. When you do have objects which truly must be shared in a multi-threaded environment, thread safety can be a big pain: first deciding whether to use a lock, a mutex, a ReaderWriterLock, interlocked variables, etc., and then deciding what the scope of each lock should be. And once it’s all finally working correctly, a later modification “forgets” to lock the resource, and you don’t discover it until you’re in production under heavy load.

PostSharp handles this declaratively with its ReaderWriterSynchronized aspect. Under the hood it’s an implementation based on ReaderWriterLockSlim, providing concurrent reads and exclusive writes. You decorate the class with ReaderWriterSynchronized, but must also mark methods, getters and setters with the Reader or Writer aspects. (Prior to PostSharp 3.2 these aspects were called ReaderLock and WriterLock.) The advantage of the aspect, beyond removing all the locking code, is that PostSharp will immediately detect when a resource is accessed without a lock and throw a LockNotHeldException.

Here’s the .NET SynchronizedCache sample rewritten with PostSharp aspects:

using System.Collections.Generic;
using PostSharp.Patterns.Threading;

[ReaderWriterSynchronized]
public class SynchronizedCache
{
    private Dictionary<int, string> innerCache = new Dictionary<int, string>();

    [Reader]
    public string Read(int key)
    {
        return innerCache[key];
    }

    [Writer]
    public void Add(int key, string value)
    {
        innerCache.Add(key, value);
    }

    [UpgradeableReader]
    public AddOrUpdateStatus AddOrUpdate(int key, string value)
    {
        string result = null;
        if (innerCache.TryGetValue(key, out result))
        {
            if (result == value)
            {
                return AddOrUpdateStatus.Unchanged;
            }
            else
            {
                innerCache[key] = value;
                return AddOrUpdateStatus.Updated;
            }
        }
        else
        {
            innerCache.Add(key, value);
            return AddOrUpdateStatus.Added;
        }
    }

    [Writer]
    public void Delete(int key)
    {
        innerCache.Remove(key);
    }

    public enum AddOrUpdateStatus
    {
        Added,
        Updated,
        Unchanged
    };
}

In part 2 I plan to take a look at PostSharp’s Actor support, along with the new Recordable aspect for undo/redo and the Immutable and Freezable aspects, available in the upcoming 3.2 release.

What I’ll do for a free t-shirt

One of the more exciting things from the recent Build conference was the frequent mention of Xamarin. Whenever Microsoft noted its embrace of cross platform, there was Xamarin providing the capability:
[Image: xam x plat]
It seemed that every .NET-oriented session mentioned Xamarin, and Miguel de Icaza, CTO of Xamarin, made an appearance in the Day 2 keynote and gave an excellent presentation, Go Mobile with C# and Xamarin.

I played with Xamarin a bit last year, and although there were some rough edges, I thought then, and still do, that it has great potential. C#, .NET, Android, iOS? What’s not to love? As de Icaza noted, people expect great experiences from their mobile devices and “C# fits like a glove for mobile development”. It’s been about six months since I last took a look, but since I really wanted the “snazzy” C# t-shirt seen at Build, I thought I’d give it another spin.
[Image: xam shirt]

Not that there still aren’t rough edges.

Since it had been six months, I first wanted to update to a more recent Xamarin version. Within Visual Studio 2013 I tried to login to my Xamarin account, and got this helpful dialog:
[Image: xamerror]

Hmm, was my account no longer valid? Nope, my credentials still worked on their web site, and I was able to download the latest installer from there.

Since I don’t have a Mac, I haven’t yet tried out Xamarin.iOS. A Xamarin.Android install will also install the Android SDK, Xamarin Studio, and, if not already present, the Java SDK and Android NDK, so it can take some time. The installation completed with this message:
[Image: xam info]
Great, except I’m running VS2013. Thankfully it was just a bad message, and Xamarin.Android 4.12 installed OK.

But, ah, I still got the same error when trying to login to my Xamarin account from VS. Since I couldn’t get past the error in VS I switched over to Xamarin Studio. Xamarin Studio is a nice IDE, but a bit of a step sideways after VS. It is improving though; I found that some of my favorite shortcut keys which hadn’t worked in an earlier version were now working.

There’s a Xamarin Updater built into both the VS tooling and Xamarin Studio and I’d previously set it to automatically check for and download updates from the “stable” channel, so this popped up when I opened Xamarin Studio:
[Image: xam updates]

After this, I was finally ready to open the XamarinStore app, the sample application shown at Build which introduces you to Xamarin and gets you the snazzy t-shirt.

Just press “play”.

Building the XamarinStore sample application requires that you login to your Xamarin account:
[Image: xam login]
Xamarin comes in several editions, and I’m currently using the Business edition from my former employer but will soon switch to the free Starter edition. Unfortunately that means no more VS integration.

You’ll need to rebuild after successfully entering your credentials. Also be sure to check the Tasks window for TODO items Xamarin has left for us.

If you haven’t started and selected an emulator, you’re prompted with a list of device emulators. If you’re not up on your Android API levels this will appear confusing, and reminds you that there is a learning curve to developing with Xamarin.Android. The Android developer documentation is a good resource and they’ve helpfully mapped API levels to platform releases.

I chose level 15 (Ice Cream Sandwich) since Jelly Bean and KitKat device emulators weren’t listed. Afterwards I found that although the Android SDK had been updated with my Xamarin installation, the packages for these API levels weren’t automatically installed; I’ll have to do that later using the Android SDK Manager:
[Image: xam sdk]

You can also view and edit the devices using the Android Emulator Manager, available from the Xamarin Studio Tools menu. (The actual window opened is called the Android Virtual Device Manager.) The emulator’s screen size made the virtual keyboard difficult to use, so I edited the AVD to enable hardware keyboard support:
[Image: avd editor]

And without further ado the app was running and the t-shirt ordered!
[Image: xam done]

There’s much to learn in Xamarin, and their Developer Center has a lot of great content. I hope to dig in deeper soon.

Fizzy Bizzyness

FizzBuzz is an interview question which supposedly helps “filter out the 99.5% of programming job candidates” who can’t program.  I find this assertion very hard to believe, unless those tested are non-coding managers.  But anyway, the problem statement is simple:

Write a program that prints the numbers from 1 to 100. But for multiples of three print “Fizz” instead of the number and for the multiples of five print “Buzz”.  For numbers which are multiples of both three and five print “FizzBuzz”.

A simple implementation in C# might look like this:

[TestMethod]
public void FizzBuzzSimple()
{
   string s;
   for (int i = 1; i < 101; i++)
   {
      s = string.Empty;

      if (i % 3 == 0) s += "Fizz";
      if (i % 5 == 0) s += "Buzz";
      if (s.Length == 0) s = i.ToString();

      Console.WriteLine(s);
   }
}

Since the algorithm here truly is straightforward and simple, a more interesting exercise might be to consider the ways in which this code could change as the requirements change over time.  Something like a kata. (For a fun “simple” problem description from a non-programmer’s point of view, this is hilarious, and all too true: http://failblog.cheezburger.com/share/59643393.)

We’re all familiar with code where the conditional expressions have become both ugly and impenetrable. In fact, we’ve probably “contributed” to these codebases at some point too.  Something that may have started with a relatively simple “if ((A and !B) or C)” grows over time to a rat’s nest of and’s, or’s, not’s, etc. So what might change here, and can we, or should we, try to “future proof” against it?

  • What if the range changes?  Instead of 1 to 100 the user now wants 1 to 1000, or maybe 50 to 100.  Maybe the range needs to be user configurable.
  • What if our output strings change?  Instead of “Fizz” and “Buzz” the user wants “Buzz” and “Feed”.  Should we continue to hardcode magic strings?  Maybe we want something entirely different than the “FizzBuzz” concatenation when a multiple of 15 is found.   What if we need to localize the strings based on the user’s language?
  • What if the test conditions change, or new conditions are added?  In practice, this is often the most likely change agent.  What if more complex conditions are needed, such as special processing for multiples of 10?   How can we make the code easy to read and maintain over time?
  • What if the test conditions, in the real world, are expensive in terms of performance?  Instead of testing numbers maybe we’re calling functions which make database or service calls.  How do we make this performant?
  • What if our user no longer wants to “print” the output?  Maybe the output will go to a diagnostics window, log, or screen.  Could dependency injection help here? (See the sketch after this list.)
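Here’s one hedged sketch of where those changes might push the design: the rules become data, the range becomes parameters, and the output becomes an injected delegate. All names here are mine:

public static void FizzBuzz(int from, int to, Action<string> output)
{
    // Rules as data: easy to reorder, extend, or load from configuration.
    var rules = new[]
    {
        Tuple.Create(3, "Fizz"),
        Tuple.Create(5, "Buzz")
    };

    for (int i = from; i <= to; i++)
    {
        string s = string.Empty;
        foreach (var rule in rules)
        {
            if (i % rule.Item1 == 0) s += rule.Item2;
        }
        output(s.Length == 0 ? i.ToString() : s);
    }
}

// usage: FizzBuzz(1, 100, Console.WriteLine);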

There are probably myriad other possibilities, but I’m already feeling analysis paralysis.