Checking your Application Servers with Powershell (ApplicationPools, Windows Services, EventLog, Performance Counters)

If you are working in enterprise software development, you will sooner or later be involved in deployment of code to multiple machines. Very often deployments are a tedious, annoying and time-consuming task. In the enterprise world you are often confronted with lots of different machines in your deployment process. There could be frontend, backend and database servers and multiple stages such as production, dev, test, QA, etc…

Since the rise of agile development techniques, the number of deployments carried out in a year has changed drastically. Where a few years ago a company was ahead of its time when it had more than two deployments a year, today it's different. The flexibility demanded by the business and the desire to apply agile practices require very short release cycles and deployments that are automated, repeatable and fast. The best case is a deployment that takes nothing more than the click of a button.

Automation is the key. As an application grows, the deployment will span more and more machines, and every manual step that can be avoided should be eliminated. In this blog post I will share some nifty Powershell one-liners that can drastically reduce your deployment duration and lead to a more scalable process. They gather crucial system information from multiple machines in one line of code. I always see people fiddling with remote desktop connections and clicking through one menu after another just to find out if a certain Windows service is stopped. This is fine if you only have to take care of one machine. As soon as you have multiple machines, the fun is over. Powershell is the answer!

1. Checking the State of IIS Application Pools for Multiple Remote Machines

$machines = "MYSERVER01", "MYSERVER02", "MYSERVER03"
Invoke-Command -ComputerName $machines -ScriptBlock { Get-WebAppPoolState -Name MyApplicationPool }

So what does this script do? Very simple: First, it creates an array that contains your machine names, then it uses the Invoke-Command cmdlet to invoke a script block on every machine. The script block contains a call to the Get-WebAppPoolState cmdlet that ships with IIS. The output looks like this:


Of course you can use the exact same approach to start and stop an Application Pool by using Start-WebAppPool, Stop-WebAppPool and Restart-WebAppPool.

2. Checking the State of Windows Services on Multiple Remote Machines

$machines = "MYSERVER01", "MYSERVER02", "MYSERVER03"
Invoke-Command -ComputerName $machines -ScriptBlock { Get-Service | Where-Object { $_.Name -match "NServiceBus" } }

This script works in exactly the same way as the first one. In the script block, it uses the Get-Service cmdlet to retrieve the Windows services and filters them by the string “NServiceBus”. The result will be:

Needless to say, you can use Start-Service, Stop-Service or Restart-Service to start, stop and restart multiple services in one call.

The command can be shortened by using aliases and omitting default parameter names:

Invoke-Command $machines { gsv | where { $_.Name -match "NServiceBus" } }

3. Getting the Event Log from multiple Machines

Invoke-Command $machines { Get-EventLog -LogName System -EntryType Error, Warning | Select-Object -First 10 -Property TimeWritten, EntryType, MachineName, Message | Format-Table -AutoSize -Wrap }

This will retrieve the first 10 errors and warnings from the “System” event log of each machine. Output:


4. Checking Windows Performance Counters on a Remote Machine

Get-Counter -Counter "\Processor(_Total)\% Processor Time" -SampleInterval 1 -Continuous

This samples a performance counter at a regular interval (here: 1 second):


If you are interested in what counters are available, just call:

Get-Counter -ListSet *

This will give you the list of all available counters and their paths, such as:



Profiling WPF Applications in Visual Studio 2015 with the WPF Timeline Tool

In VS2013, Microsoft introduced the Performance and Diagnostics Hub (check my blog post about it). In Visual Studio 2015, the XAML UI Responsiveness Tool has been renamed to Timeline and now fully supports all XAML-based applications, and therefore WPF!

You can read all the details in this post from the WPF team:

The tool is very easy to use. Just go to ANALYZE –> START DIAGNOSTICS WITHOUT DEBUGGING or hit Alt-F2.

Let's see how it works using a sample application:


The application is very simple. It consists of two listboxes that load 5000 images each. The only difference is that in the listbox on the left side, UI Virtualization is disabled, while in the listbox on the right side, it is enabled.

As you might know, enabling and disabling UI virtualization in WPF is a one-liner:

<ListBox  VirtualizingStackPanel.IsVirtualizing="False"
        Grid.Row="1" Grid.Column="0" Name="lb" Background="LemonChiffon" >

You can already see from the numbers in the screenshot that there is a tremendous performance difference once virtualization is enabled. Instead of 7.7 seconds, loading took only 0.7 seconds, which is an improvement by a factor of 10!

We are going to use the Timeline tool to profile the application and take a look under the hood to find out where the differences are. We start the profiling using the Timeline tool. Once the application is running, we load the images on the left, then on the right side and close the window.


The report shows us two graphs. The first one is the UI thread utilization and the second one is the frames per second on the composition and the UI thread. We can clearly see the two red sections that correspond to the loading of the images into the listboxes.

We notice that for the non-virtualized listbox, the UI thread was completely busy for a period of about 8 seconds. The thread was exclusively running layout calculations. There is a perfect correlation between the UI thread going up to 100% and the FPS dropping to 0. This is exactly what a user would experience looking at the UI. It comes to a full stop.

When we look at the virtualized listbox we can see that we have about half the CPU used for layout and the rest for rendering. The duration of the loading process can be read at about 0.8 seconds. Furthermore, we notice that the FPS did not drop to absolute zero.

Let's take a look at the details. We are going to zoom in on the two interesting sections and compare them side by side.



We can confirm our conclusion from the overview that the performance bottleneck in the non-virtualized listbox comes from too long layout operations. Let’s dig deeper:

In the timeline details of the non-virtualized listbox we find one layout event that calculated roughly 20’000 visuals and lasted for a total time of 8.1 seconds. Further below we notice a lot of garbage collection activity.


By expanding the layout event, we can look at the layout time required for each element in the visual tree:


Let’s drill down to the ItemsPresenter that contains our 20K visuals. We can see that each and every visual could be rendered in a relatively short time (about 60ms). Obviously, it was the single layout operation that exhausted the UI thread because it had to calculate 20’000 child visuals and caused a lot of GC activity.

Let's look at the virtualized listbox details:


The details show that the layout process in the virtualized listbox took only 385ms and had to process about 80 visuals. Additionally, we can see a GC call that took 259ms and was executed during the layout calculations. If we select the garbage collection event, the timeline tool even gives us an explanation about the nature of the GC action:


In conclusion, it is fair to say that the Timeline Tool is incredibly easy to use and gives us very deep insight into the inner workings of our application. This could save you a lot of time when looking for performance problems.

WPF is Back! And becoming even more Powerful!

I am really happy that we can finally see more and more activity around Windows Presentation Foundation (WPF) by Microsoft. Currently, there is breaking news every month. I am looking forward to what will be revealed at Build 2015. Here is an overview of the current developments:

November 2014: Roadmap for WPF

First, in November last year, the .NET Framework Blog published this post:

In the post they acknowledged that WPF is still a very important technology. They mention that 10% of all newly created Visual Studio Projects at that time were using the WPF project template.

They announced that they are going to invest into the WPF platform, especially in the areas of Performance, DirectX Interoperability, Modern HW Support and Tooling.

January 2015: Timeline Profiling Tool for WPF


Later, in January 2015, the WPF team published the Timeline Profiling Tool for VS2015 CTP5:

Check out my Article on the Performance & Diagnostics Hub:

February 2015: XAML Debugging Tools


In February this year, the WPF Team released the XAML UI Debugging Tools available in VS2015 CTP 6.

The tools include the Live Visual Tree, Live Property Explorer and In-App Selection.

March 2015: WPF Team Connect Live Q&A





In March, Channel 9 published an interesting Q&A session live from the WPF Team.

It contained some very interesting statements by Unni Ravindranathan, Program Manager.

“WPF is the UI Framework of choice for a large collection of mission critical apps. We see that not to change in the near future.”

“We are going to make WPF better”

“We are going to listen to our customers”

Current activities of the team:

WPF Local (00:02)
The team is optimizing WPF with the goal to react more quickly to customer needs.
Therefore, they are extracting the WPF assemblies out of the .NET framework and will be shipping them as Nuget packages. This of course would allow them to release much more quickly and more frequently. The same approach has been taken by the Entity Framework, for example. A caveat of WPF local will be that the conventional assemblies have always been delivered as native images (using ngen). This gets more complicated if assemblies are shipped as Nuget packages. The team is working on a solution.

WPF as of .NET 4.5.1 will be integrated with Windows Update.
Hm, not sure what to think of that… ;-)

New Feature: Content Deferral (00:28)
With WPF Local, WPF will by default defer loading certain parts of the visual tree. This means that invisible visuals, for example, will only be loaded when required. The goal is to have fewer elements in the visual tree and therefore better performance. It is even possible to mark certain visuals with a XAML tag that defers loading of the elements. Elements will be inflated once they are requested by someone who needs them (calls to FindName(), execution of a StoryBoard, etc.).

New Tool: Live Visual Tree, In-App Selection & Live Property Explorer (00:41)

New Tool: WPF Timeline Tool (45:00)

It is great to hear that Microsoft keeps investing in WPF. In my experience, a lot of customers use WPF for their business applications. During the last years and the Windows 8 hype, it was never clear to me what path WPF is taking.

Did you know that WPF is a top-level item on the Visual Studio User Voice forum? So if you have any feedback, click the button and let them know!

Performance and Diagnostics Hub

Performance Profiling of .NET Applications in Visual Studio 2013/2015

There have been a lot of improvements in the troubleshooting and especially profiling capabilities of Visual Studio over the last years.

While in VS2010 the Visual Studio Profiler was restricted to owners of the Premium and Ultimate editions, the Visual Studio team made a clever move by shifting the profiler down into the Professional edition of VS2012. Profiling .NET applications by means of sampling and instrumentation thus became a tool for everybody. However, this act of kindness by Microsoft was mostly ignored by the community. Many developers still do not realize that they have a powerful tool in-the-box on their box!

Behold, it gets even better. With the release of Visual Studio 2013, the Performance & Diagnostics Hub was introduced. A central platform for everything you need related to application diagnostics and profiling.

The hub in VS2013 is under:


or in VS2015:


Now that you know where it is, GO AND USE IT!!!

The idea of the hub is to unite several simple diagnostic tools. The current list in VS2013 Update 4 is:

  • CPU Usage
  • GPU Usage
  • HTML UI Responsiveness (Store/Phone Apps Only)
  • XAML UI Responsiveness (Store/Phone Apps Only)
  • Memory Usage
  • JavaScript Function Timing (Store/Phone Apps Only)
  • JavaScript Memory (Store/Phone Apps Only)
  • Performance Wizard (a.k.a. Visual Studio Profiler)
  • Energy Consumption (Store/Phone Apps Only)

As you can see, most of the new tools can only be used with store apps. However, Microsoft aims to expand the reach of the tools to other application types, such as WPF or ASP.NET, and improvements are shipping quickly with each VS update.

Accordingly, in Visual Studio 2015, the XAML UI Responsiveness Tool has been renamed and now fully supports Windows Presentation Foundation (WPF)!

On another note, the Performance & Diagnostics Hub is now available in all the VS editions, which includes the new VS Community Edition.


The tools in the hub mostly rely on Event Tracing for Windows (ETW). They record ETW events, collect data from other sources, correlate and create graphic reports for the developer to analyze.

Stay tuned for more posts on performance profiling in Visual Studio! In the meantime, grab the current edition of the Windows Developer magazine and read my article on the profiling tools:

Metro Studio

Free XAML Icons and Shapes in Metro Studio

In any UI Framework, visuals are the key to success. An application that contains appealing graphics automatically yields a higher user acceptance.

Unfortunately, creating good icons is usually very hard and expensive. Not anymore! Meet Syncfusion's Metro Studio!

The Syncfusion Metro Studio is a collection and editor for over 4000 application icons that can be exported to XAML! And the best part is: IT IS FREE and the icons can be used in commercial applications.

Get your copy now at:

With a few clicks you can create icons such as these:


The editor allows you to change colors, shape, size, etc.

Once your icon is finalized, it can be exported to XAML and integrated into your XAML application. In addition to XAML, the icons can be exported to PNG, GIF, JPG, BMP, ICO,  TIFF or SVG.

As far as I know, this is by far the best and most affordable icon library that there is. A huge time-saver!


ToolTip: Decoding Base64 Images with Chrome Data URL

Reading the great blog post about Base64 encoding and decoding in .NET/C# written by Jerry Nixon, I immediately remembered a great trick that comes in very handy when using Base64 encoding.

Suppose you have an image that you would like to encode into base64:


You will end up with a very long string, such as this one:


During testing of your application, it can be really painful to convert these strings back into jpg files just to see if nothing got missing or corrupted. You could either write a small program, that decodes the base64 string, use a tool or you could use an online base64 decoder/encoder such as this one:
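If you do end up writing that small program, it is only a few lines in most languages. Here is a minimal Python sketch of the round trip (the sample bytes and file names are made up for illustration):

```python
import base64

# Pretend these are the bytes of a JPEG file (made-up sample data).
image_bytes = b"\xff\xd8\xff\xe0fake-jpeg-payload"

# Encode to the kind of Base64 string you would find in your application...
encoded = base64.b64encode(image_bytes).decode("ascii")

# ...then decode it back and check that nothing got lost or corrupted.
assert base64.b64decode(encoded) == image_bytes

# The same string, prefixed with a data URI header, can be pasted into Chrome.
data_uri = "data:image/jpeg;base64," + encoded
print(data_uri)
```

The assertion is exactly the check you want during testing: the decoded bytes must be identical to the original ones.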

Aaaand, here comes the trick:

If you have Google Chrome installed, you can use a feature called data URIs to decode your Base64 string. Just type into the Chrome address bar:

data:image/jpeg;base64,<base64>

where <base64> is the base64 encoded data. In our example:


And BOOM, chrome decodes the Base64 data and displays our image!


You can even specify a different target type, such as:




I like this trick very much. It is a huge timesaver! I hope you like it too!


Microsoft TechTalk: S.O.L.I.D. Principles and DDD

On September 18, 2014, I held a Tech Talk at the Microsoft Switzerland offices in Wallisellen. The title was “Enterprise Software Architecture and Domain-Driven Design”. The main goal of the talk was to explain that, with the .NET Framework getting more and more mature, it becomes ever more important to think about how we deal with legacy .NET code, that is, existing .NET code.

Every day, more and more .NET projects will not start from scratch but on an already existing code base. Therefore, we as developers need to understand how to reduce our legacy by writing more maintainable code.

The SOLID principles are an approach to writing modular code that enables reuse scenarios. Robert C. Martin, a.k.a. Uncle Bob, first formulated the principles in the early 2000s.


The SOLID principles are:

  • SRP: Single Responsibility Principle
  • OCP: Open/Closed Principle
  • LSP: Liskov Substitution Principle
  • ISP: Interface Segregation Principle
  • DIP: Dependency Inversion Principle

Each of the principles describes an approach to doing object-oriented programming (OOP) “correctly”.
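As a small illustration (my own hypothetical example, not taken from the talk), here is a sketch of the Dependency Inversion Principle in Python: the high-level OrderService depends only on the MessageSender abstraction, never on a concrete sender, so implementations can be swapped without touching the business logic.

```python
from abc import ABC, abstractmethod

class MessageSender(ABC):
    """Abstraction that the high-level module depends on."""
    @abstractmethod
    def send(self, text: str) -> str: ...

class EmailSender(MessageSender):
    """One concrete low-level implementation; others could be SMS, queue, etc."""
    def send(self, text: str) -> str:
        return f"email: {text}"

class OrderService:
    """High-level policy: constructed with any MessageSender implementation."""
    def __init__(self, sender: MessageSender):
        self.sender = sender

    def confirm(self, order_id: int) -> str:
        return self.sender.send(f"order {order_id} confirmed")

service = OrderService(EmailSender())
print(service.confirm(42))  # email: order 42 confirmed
```

The dependency points from the concrete sender towards the abstraction, not from the service towards the concrete class, which is exactly the "inversion" the principle is named for.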

After I explained the SOLID Principles and showed a couple of demos, we took a look at Domain Driven Design (DDD). DDD is an approach to designing a software solution by following a couple of recommendations.


It concentrates on creating a common understanding between the business experts and the developers by creating artifacts such as a common language (the ubiquitous language) and establishing a common model (the domain model), which is a graphical representation of the business domain. Using several DDD techniques, the model is iteratively refined by making it more correct and reducing complexity. A very important concept is to make implicit stuff explicit. Often things that seem simple or even trivial to the developer are absolutely crucial to the business. Therefore, it is vital to give these concepts a name and add them to the model as a visible artifact.


Using further techniques such as entities, aggregates, value objects, factories or repositories, the model is then translated into code.
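To give a rough feel for one of these building blocks (my own example, not from the talk): a value object is immutable and compared by its values rather than by identity, and in Python that maps nicely onto a frozen dataclass. It also shows the "make implicit things explicit" idea, here the rule that only same-currency amounts may be added:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Money:
    """Value object: immutable, equality by value, not by identity."""
    amount: int      # in the smallest currency unit, e.g. cents
    currency: str

    def add(self, other: "Money") -> "Money":
        # An implicit business rule made explicit in the model.
        if self.currency != other.currency:
            raise ValueError("currency mismatch")
        return Money(self.amount + other.amount, self.currency)

a = Money(500, "CHF")
b = Money(250, "CHF")
assert a.add(b) == Money(750, "CHF")   # two instances, equal by value
```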

Attached is the slide deck that I used during the presentation.

A big thank you goes out to Microsoft Switzerland for giving me the opportunity and to the engaged crowd for listening and thinking along!


WPF: The Simplest Way to Get the Default Template of a Control as XAML

Every WPF Developer has probably faced this issue before. You want to change the default template of a control and need it in XAML. Where can you get the default template? There are multiple ways:

  1. Find the correct MSDN page for your framework version
  2. Use a tool
  3. Use Expression Blend

In my opinion, none of these solutions is very convenient. I always end up either not finding the correct version or not having Blend installed, etc…

Fortunately, there is a much easier way. You can use the XamlWriter class to serialize the default template of your control:

public static void SaveDefaultTemplate()
{
    var control = Application.Current.FindResource(typeof(ProgressBar));
    using (XmlTextWriter writer = new XmlTextWriter(@"defaultTemplate.xml", System.Text.Encoding.UTF8))
    {
        writer.Formatting = Formatting.Indented;
        XamlWriter.Save(control, writer);
    }
}

This will produce the following file for the ProgressBar in the sample code:

<Style TargetType="Button" xmlns="" xmlns:x="" xmlns:s="clr-namespace:System;assembly=mscorlib">
  <Style.BasedOn>
    <Style TargetType="ButtonBase">
      <Style.Resources>
        <ResourceDictionary />
      </Style.Resources>
      <Setter Property="FrameworkElement.FocusVisualStyle">
        <Setter.Value>
          <Style TargetType="IFrameworkInputElement">
            <Style.Resources>
              <ResourceDictionary />
            </Style.Resources>
            <Setter Property="Control.Template">
              <Setter.Value>
                <ControlTemplate>
                  <Rectangle Stroke="{DynamicResource {x:Static SystemColors.ControlTextBrushKey}}" StrokeThickness="1" StrokeDashArray="1 2" Margin="2,2,2,2" SnapsToDevicePixels="True" />
                </ControlTemplate>
              </Setter.Value>
            </Setter>
          </Style>
        </Setter.Value>
      </Setter>
      <Setter Property="Panel.Background">
        <Setter.Value>
          <SolidColorBrush>#FFDDDDDD</SolidColorBrush>
        </Setter.Value>
      </Setter>
      <Setter Property="Border.BorderBrush">
        <Setter.Value>
          <SolidColorBrush>#FF707070</SolidColorBrush>
        </Setter.Value>
      </Setter>
      <Setter Property="TextElement.Foreground">
        <Setter.Value>
          <DynamicResource ResourceKey="{x:Static SystemColors.ControlTextBrushKey}" />
        </Setter.Value>
      </Setter>
      <Setter Property="Border.BorderThickness">
        <Setter.Value>
          <Thickness>1,1,1,1</Thickness>
        </Setter.Value>
      </Setter>
      <Setter Property="Control.HorizontalContentAlignment">
        <Setter.Value>
          <x:Static Member="HorizontalAlignment.Center" />
        </Setter.Value>
      </Setter>
      <Setter Property="Control.VerticalContentAlignment">
        <Setter.Value>
          <x:Static Member="VerticalAlignment.Center" />
        </Setter.Value>
      </Setter>
      <Setter Property="Control.Padding">
        <Setter.Value>
          <Thickness>1,1,1,1</Thickness>
        </Setter.Value>
      </Setter>
      <Setter Property="Control.Template">
        <Setter.Value>
          <ControlTemplate TargetType="ButtonBase">
            <Border BorderThickness="{TemplateBinding Border.BorderThickness}" BorderBrush="{TemplateBinding Border.BorderBrush}" Background="{TemplateBinding Panel.Background}" Name="border" SnapsToDevicePixels="True">
              <ContentPresenter RecognizesAccessKey="True" Content="{TemplateBinding ContentControl.Content}" ContentTemplate="{TemplateBinding ContentControl.ContentTemplate}" ContentStringFormat="{TemplateBinding ContentControl.ContentStringFormat}" Name="contentPresenter" Margin="{TemplateBinding Control.Padding}" HorizontalAlignment="{TemplateBinding Control.HorizontalContentAlignment}" VerticalAlignment="{TemplateBinding Control.VerticalContentAlignment}" SnapsToDevicePixels="{TemplateBinding UIElement.SnapsToDevicePixels}" Focusable="False" />
            </Border>
            <ControlTemplate.Triggers>
              <Trigger Property="Button.IsDefaulted">
                <Setter Property="Border.BorderBrush" TargetName="border">
                  <Setter.Value>
                    <DynamicResource ResourceKey="{x:Static SystemColors.HighlightBrushKey}" />
                  </Setter.Value>
                </Setter>
                <Trigger.Value>
                  <s:Boolean>True</s:Boolean>
                </Trigger.Value>
              </Trigger>
              <Trigger Property="UIElement.IsMouseOver">
                <Setter Property="Panel.Background" TargetName="border">
                  <Setter.Value>
                    <SolidColorBrush>#FFBEE6FD</SolidColorBrush>
                  </Setter.Value>
                </Setter>
                <Setter Property="Border.BorderBrush" TargetName="border">
                  <Setter.Value>
                    <SolidColorBrush>#FF3C7FB1</SolidColorBrush>
                  </Setter.Value>
                </Setter>
                <Trigger.Value>
                  <s:Boolean>True</s:Boolean>
                </Trigger.Value>
              </Trigger>
              <Trigger Property="ButtonBase.IsPressed">
                <Setter Property="Panel.Background" TargetName="border">
                  <Setter.Value>
                    <SolidColorBrush>#FFC4E5F6</SolidColorBrush>
                  </Setter.Value>
                </Setter>
                <Setter Property="Border.BorderBrush" TargetName="border">
                  <Setter.Value>
                    <SolidColorBrush>#FF2C628B</SolidColorBrush>
                  </Setter.Value>
                </Setter>
                <Trigger.Value>
                  <s:Boolean>True</s:Boolean>
                </Trigger.Value>
              </Trigger>
              <Trigger Property="ToggleButton.IsChecked">
                <Setter Property="Panel.Background" TargetName="border">
                  <Setter.Value>
                    <SolidColorBrush>#FFBCDDEE</SolidColorBrush>
                  </Setter.Value>
                </Setter>
                <Setter Property="Border.BorderBrush" TargetName="border">
                  <Setter.Value>
                    <SolidColorBrush>#FF245A83</SolidColorBrush>
                  </Setter.Value>
                </Setter>
                <Trigger.Value>
                  <s:Boolean>True</s:Boolean>
                </Trigger.Value>
              </Trigger>
              <Trigger Property="UIElement.IsEnabled">
                <Setter Property="Panel.Background" TargetName="border">
                  <Setter.Value>
                    <SolidColorBrush>#FFF4F4F4</SolidColorBrush>
                  </Setter.Value>
                </Setter>
                <Setter Property="Border.BorderBrush" TargetName="border">
                  <Setter.Value>
                    <SolidColorBrush>#FFADB2B5</SolidColorBrush>
                  </Setter.Value>
                </Setter>
                <Setter Property="TextElement.Foreground" TargetName="contentPresenter">
                  <Setter.Value>
                    <SolidColorBrush>#FF838383</SolidColorBrush>
                  </Setter.Value>
                </Setter>
                <Trigger.Value>
                  <s:Boolean>False</s:Boolean>
                </Trigger.Value>
              </Trigger>
            </ControlTemplate.Triggers>
          </ControlTemplate>
        </Setter.Value>
      </Setter>
    </Style>
  </Style.BasedOn>
  <Style.Resources>
    <ResourceDictionary />
  </Style.Resources>
</Style>


AzureAdventures: Setting up the Adventureworks Database in under 10 Minutes

This blog post will show you how easy it is to set up the famous Adventureworks database using the Microsoft Azure Cloud.

Before we start, a little background:

A couple of days ago, I was at a customer site and needed a simple database in order to show a .NET Framework demo. I wanted to demonstrate an end-to-end sample that consisted of loading data from a SqlServer database by means of the Entity Framework (EF) and hooking the loaded data up to a Windows Presentation Foundation (WPF) front end. The requirements for the database were really simple. I just needed a DB that would run on Microsoft SqlServer and contain a couple of tables with relations and some data in them.

I remembered the good old Adventureworks database, which is a perfect fit for any database-related product demo. There are various possibilities for getting hold of the database.

I decided to go for the out-of-the-box solution that provides an *.mdf and an *.ldf file that can be attached to a SqlServer, using the following link:

However, as usual, it was not as easy as I hoped it would be. :-(

When I tried to start the SqlServer on my box, using the SqlServer Configuration Manager, I got an error message stating “The remote procedure call failed. [0x800706be]”.


It looked like my Sql Server installation was somehow broken. I fiddled around for about 20 minutes and decided that I would just reinstall the SqlServer and not waste more time. Unfortunately, the SqlServer is a piece of software that, in my humble opinion, is not installed in five minutes. As soon as the installer completed, it triumphantly told me that everything installed perfectly EXCEPT FOR THE DATABASE ENGINE!!! *grrrrrr*. That surely pissed me off. Annoyed and disappointed I went to sleep. (Needless to say, it was already very late at night.)

That left me in the awkward situation that I would have to show up at the customer site in the morning and fix my database problem sometime between getting out of bed and the start of the course that I was about to teach. *sigh*

But then, IT HIT ME!

Since I am a huge fan of Microsoft Azure and deeply appreciate the technology behind it and how it facilitates a developer's life, I turned my back on the local SqlServer installation on my notebook and looked up into the skies or, I must say, the cloud. And, alas, the Azure cloud really saved my day!!! Literally 5 minutes later, I had my db running and could even take a coffee break before the course started. *Tataaa*.

In the next section, I will demonstrate how amazingly easy it is to get the Adventureworks db running on Microsoft Azure. Keep in mind that the process that I am going to describe replaces all of these tasks:

  • Providing a virtual or physical machine
  • Downloading an SqlServer Installer
  • Organizing a License
  • Installing the SqlServer
  • Configuring the SqlServer
  • Downloading and Installing Adventureworks


Running Adventureworks on Microsoft Azure SQL

The following tutorial is divided into four parts:

1. Prerequisites
2. Provisioning an Azure Sql Database Server
3. Getting and Deploying the Scripts
4. Verifying the Installation

Let’s get started:

Step 1: The Prerequisites

You need 3 things in order to reproduce this example:

1. A Microsoft Azure Subscription
If you do not have one, you can get a free trial subscription for one month and 200 CHF at:
2. A computer with an installation of the .NET framework in version 4.
If you run Windows 8, you are set. If you run an earlier version and .NET 4.0 is not installed, you can get it at the Microsoft Download Center:
3. The Azure db must be accessible from your client computer. This is a configuration setting in the Azure Management Dashboard. Details can be found below.

Note that there are currently two Azure Portals, the old one and the new one. For simplicity, I am going to use the old one.

Step 2: Provisioning an Azure Sql Database Server (Old Portal)

After you have created your Microsoft Azure account, go to the management portal and log in.

On the portal, select SQL DATABASES from the navigation on the left hand side:


Once you are on the SQL DATABASES page, click SERVERS on the top navigation:


Finally, click the ADD button at the bottom:


The CREATE SERVER wizard will pop up, fill in the fields and confirm. Select a region that is close to your location.


After a short wait (it took about 30 seconds for me), your database server is ready to go! Click the little arrow on the right of the server name to proceed to the server settings:


On the following page, click CONFIGURE and then click the arrow on the right, where it says “ADD TO THE ALLOWED IP ADDRESSES”. This configuration is vital since it allows your local machine, where the browser is running, to access the database server. IMPORTANT: Do not forget to hit the save button, once completed!


Note down the server name and the credentials that you specified; then we can go on with the next step. Alternatively, you can navigate to the DASHBOARD page and copy the MANAGE URL from there.

Step 3: Getting and deploying the scripts

Navigate to and download the “AdventureWorks2012ForWindowsAzureSqlDatabase” file. Save and extract the zip file.

Open a CMD prompt with administrative privileges and navigate to the directory where you unzipped the package and there into the \AdventureWorks directory.

Enter the following command:

CreateAdventureWorksForSqlAzure.cmd <servername> <username> <password>

Important Hints!
Make sure you enter the name as “username@servername”. Providing only the username can lead to problems during the installation. Additionally, you have to use the full server name, as in:

CreateAdventureWorksForSqlAzure.cmd mypassword

After roughly 5 minutes, the script should show “Installation Completed”. That’s it, your database is up and running.

Step 4: Verifying the installation

You can use any SqlServer tool, such as Visual Studio or the SqlServer Management Studio, to connect to your DB and verify that the data is there. I would like to show a very simple approach using SqlCmd.exe. SqlCmd ships with Visual Studio, and the easiest way to run it is from the Visual Studio Command Prompt. Run the following command inside the VS command prompt:

sqlcmd.exe -S <servername> -d <db name> -U <username> -q <query>

or, in our case:

sqlcmd.exe -S -d AdventureWorks2012 -U mme@zvnhp88skk -q "select top 10 firstname, lastname from person.person"

If you see 10 records that contain names, the data is there and we are done!

Important hint: Always keep in mind that Microsoft Azure is based on a pay-as-you-go policy and you will be charged on a per-minute basis for your database instance! If you don't need it, shut it down!

Getting started with Orchard CMS – The Modules and Features you need

After having installed Orchard as shown in my previous post, it is very important to know which modules to install first. In this post I will give you an overview of the most important modules/features and why you need them.

The following features help you with questions such as:

  • How can I get Edit buttons on my Orchard pages?
  • How can I influence my Orchard Layout?
  • How can I get table editing functionality in my Orchard HTML Editor?

Widget Control Wrapper

The widget control wrapper is installed by default. Once activated, it will render “Edit” Buttons for all the content sections on the front page. Therefore this:


becomes this:


Follow these steps to activate the Widget Control Wrapper:

  • Log on to your web site
  • Click the dashboard link at the bottom
  • Click Modules in the navigation
  • Under Features search for Widget
  • Click enable on the Widget Control Wrapper Box

Shape Tracing

The shape tracing feature creates a menu on the web site that allows you to look at the layout structure of the page. Once activated, it displays a small icon at the bottom of the page:


The icon opens a menu that lets you analyze all the content elements and look at their configuration in terms of Shape, Model, Placement, Template and HTML.



TinyMCEDeluxe

TinyMCEDeluxe extends the built-in WYSIWYG HTML Editor with more functionality, such as table editing features. So this:


Becomes this:


TinyMCEDeluxe is a module that needs to be installed from the gallery. In order to do this, navigate to Modules –> Gallery. Search for TinyMCEDeluxe and install the module.

IMPORTANT: Before you activate TinyMCEDeluxe, you MUST deactivate the built-in editor “TinyMCE”.

Getting started with Orchard CMS – Installing Orchard

Orchard CMS is a free Content Management System (CMS) based on Microsoft ASP.NET MVC.

Orchard is very powerful due to its rich extensions in the form of Modules (= Feature Packages) and Themes. It is highly customizable due to its full .NET integration and extensibility.

Orchard can be installed either by downloading with the Web Platform installer or by using WebMatrix. I am going to show you a brief description on how to create an orchard website using WebMatrix.

1. Installing WebMatrix

Microsoft WebMatrix is a free Microsoft web development tool. It focuses on easy creation, configuration, publication and maintenance of websites. The power of WebMatrix lies in the simplicity of its usage and, in my opinion, in the rich template gallery that can be used as a starting point. The gallery contains all the major open source implementations for CMS, Blogging, eCommerce, Galleries and Forum websites.

Examples include: Joomla, Umbraco, MojoPortal, nopCommerce, Drupal, Moodle, WordPress, DasBlog and many more.

Download WebMatrix from the Microsoft website.

2. Start WebMatrix


3. Create your Orchard WebSite

    • Click “New” –> “App Gallery”
    • Select Orchard CMS and click “Next”


  • Click next again
  • Accept the EULA
  • Wait until the download completes.

When the download completes, Orchard will start and open an initial configuration page in the browser:


  • Fill in the Site Name, an admin username and password
  • Select your data storage: SQL Server Compact, SQL Server or MySql
  • Select an Orchard Recipe. If you want to create a blog, select Blog; if not, select Default.
  • Click Finish Setup

And you are done, your website is running on your local machine:


From the home page you can access the dashboard via the link at the bottom. The dashboard contains all the configuration options for settings, content, etc.



In the next post we will explore the features and modules that you need after you have completed the Orchard installation.


Using Complex Event Processing (CEP) with Microsoft StreamInsight to Analyze Twitter Tweets 7: The Sample Application 3

Note: This post is one of a series, the overview can be found here: Complex Event Processing with StreamInsight

Writing the StreamInsight Queries

The sample code for the application can be downloaded here:

Scenario 1: Tweets per Second

As we can see from our pass-through implementation from the last post, we receive a large amount of tweets. In the first query we would like to find out how many tweets we receive per second. This turns out to be pretty easy!


We define a tumbling window with a length of one second and simply apply the Count() aggregate. StreamInsight uses three different types of windows:

  • TumblingWindow: A fixed length window. The next window begins when the current one ends.
  • HoppingWindow: A fixed length overlapping window. We can define the time interval in which we want the next window. Example: We have a 3 second window that moves in one second hops. So we get updated data every second for the last 3 seconds.
  • SlidingWindow: The sliding window reacts to the input stream and always moves when there is a change in the stream. The window size is defined by two adjacent events.
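As a sketch of what the query described above looks like (assuming `twitterStream` is the IQStreamable<TweetItem> source built in the previous post; exact operator signatures may differ slightly between StreamInsight versions):

```csharp
// Count the tweets that fall into each one-second tumbling window.
// twitterStream is assumed to be the IQStreamable<TweetItem> data
// source created in the previous post.
var tweetsPerSecond = from win in twitterStream.TumblingWindow(TimeSpan.FromSeconds(1))
                      select win.Count();
```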

In order for the example to work, we have to do two more things. First, we need to adjust the consoleObserver. Since the query produces scalar count values (and not TweetItems anymore) we need to adjust the generic type parameter in the DefineObserver() method from TweetItem to long.


Next, we need to replace the twitterstream instance with the query in the Bind() call. We now have a query that operates on the data source, and we bind the observer to the query.
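A minimal sketch of the adjusted binding, assuming the counting query is stored in a variable named `tweetsPerSecond` and the sink is `consoleObserver` (the process name is arbitrary):

```csharp
// Bind the sink to the query instead of the raw twitter source.
// The observer now has to observe PointEvent<long>, matching the
// scalar output of the counting query.
tweetsPerSecond.Bind(consoleObserver).Run("tweetsPerSecondProcess");
```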


That’s it! The output should look somewhat like this:


Amazing stuff already, but wait, there’s more!

Tweets per Second grouped by Language

For the next query we are going to group the tweets by language. We first need a data object for the computed output. As a little twist we add a method call that resolves the culture info that we find in the Tweet.


Then we continue and write our query. Remember that we have to adjust the generic type parameter of the observer to use the new LanguageSummary type:
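A sketch of both pieces; the `LanguageSummary` shape and the `ResolveLanguage()` helper are my reconstruction of what the post describes, not the exact original code:

```csharp
using System.Globalization;

// Output POCO for the grouped query.
public class LanguageSummary
{
    public string Language { get; set; }
    public long Count { get; set; }

    // Resolve a culture code such as "en" into a readable name.
    public static string ResolveLanguage(string code)
    {
        try { return new CultureInfo(code).EnglishName; }
        catch (CultureNotFoundException) { return code; }
    }
}

// Group the tweets by language, then count each group per second.
var tweetsPerLanguage = from t in twitterStream
                        group t by t.Language into g
                        from win in g.TumblingWindow(TimeSpan.FromSeconds(1))
                        select new LanguageSummary
                        {
                            Language = LanguageSummary.ResolveLanguage(g.Key),
                            Count = win.Count()
                        };
```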


The query leads to output like this:



Language with most Tweets per Second

Next, we improve the last query by adding another query that uses the output of the last example as input. We want to know the top 5 languages in terms of tweets per second. We use the snapshot window here to react to changes in the stream, that is, to changes in the output of the previous query.
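A sketch of the top-5 query, assuming the language summaries from the previous query are available as `tweetsPerLanguage`; the exact shape of the top-K pattern may vary with the StreamInsight version:

```csharp
// Whenever the input stream changes, take a snapshot and emit the
// five languages with the highest tweet count.
var topLanguages = from win in tweetsPerLanguage.SnapshotWindow()
                   from ls in (from e in win
                               orderby e.Count descending
                               select e).Take(5)
                   select ls;
```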



The most popular Tweet every three Seconds

Next, we want to find the most popular Tweet every three seconds. First, we create a new data class:


Then we write a query that groups all the tweets in every 3-second window by the number of followers that the user has:
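A sketch of the data class and the query; `PopularTweet` is a hypothetical name for the new output POCO:

```csharp
// Output POCO for the "most popular tweet" query.
public class PopularTweet
{
    public string User { get; set; }
    public string Text { get; set; }
    public int Followers { get; set; }
}

// In every three-second tumbling window, keep only the tweet whose
// author has the most followers (top-1 of the ordered window).
var mostFollowers = from win in twitterStream.TumblingWindow(TimeSpan.FromSeconds(3))
                    from t in (from e in win
                               orderby e.Followers descending
                               select e).Take(1)
                    select new PopularTweet
                    {
                        User = t.User,
                        Text = t.Text,
                        Followers = t.Followers
                    };
```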



Note: The numbers after the user name show (Followers/Friends).

In the next step we improve the last example by adding the friend count. We want to know the person in every three-second window that has the most followers AND the most friends. We implement this by writing two different queries for followers and friends and then joining them together using a StreamInsight join operation.

Let’s start by writing the most-friends query first. It is similar to the followers query:


Now we use the join to join the queries on the user name:
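The join can be sketched like this, assuming the two per-window queries are called `mostFollowers` and `mostFriends` and both carry a `User` field:

```csharp
// Join the two streams on the user name. An output event is only
// produced for windows in which the person with the most followers
// is also the person with the most friends.
var mostPopular = from f in mostFollowers
                  join fr in mostFriends
                    on f.User equals fr.User
                  select new
                  {
                      f.User,
                      f.Followers,
                      fr.Friends
                  };
```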


And that is it. If we run the sample, we find out who the most popular person on Twitter is every three seconds:


An interesting observation that we can make looking at the data is that in some windows, e.g. 06:21:57, there is no event. This means that at that point the person with the most followers and the person with the most friends were not the same.

The join is a very powerful operation that can give us deep insight into the data.

If you want to play around and try out more complex queries, look at the resources in the first blog post of the series, especially at the LINQPad Samples and the Hitchhiker guide.

I hope you enjoyed this series on Microsoft StreamInsight. If it provided substantial value to you, feel free to donate. :-)


Using Complex Event Processing (CEP) with Microsoft StreamInsight to Analyze Twitter Tweets 6: The Sample Application 2

Note: This post is one of a series, the overview can be found here: Complex Event Processing with StreamInsight

The sample code for the application can be downloaded here:

Putting it all together

Now that we have written our TweetItem, TwitterStream and Unsubscriber classes, we can put it all together and write the actual StreamInsight code. We start by implementing the main method.

First we define the StreamInsight Server. In the Create() method we specify the name of the StreamInsight instance as we defined it during the installation. Then we create a StreamInsight application that will hold our data sources, sinks and queries.
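A sketch of these first lines (the instance name must match the one chosen during installation, "Thomas" in this series; the application name is arbitrary):

```csharp
using Microsoft.ComplexEventProcessing;

// Create an embedded StreamInsight server for the instance named
// during setup and an application that hosts sources, sinks and queries.
using (var server = Server.Create("Thomas"))
{
    var app = server.CreateApplication("TwitterApp");
    // ... sources, sinks, queries and bindings go here ...
}
```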


Next, we define our data sink. We write a method that prints the event to the console. There is an if statement that only prints the stream events we are interested in and omits the CTIs. We implement it as a generic method with a type variable for the payload. This allows us to reuse it when we have different queries with different output POCOs:
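A sketch of such a sink method:

```csharp
// Generic console sink: print insert events, skip the CTI events.
static void ConsoleWritePointNoCTI<TPayload>(PointEvent<TPayload> evt)
{
    if (evt.EventKind == EventKind.Insert)
    {
        Console.WriteLine("{0} - {1}", evt.StartTime, evt.Payload);
    }
}
```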


Following the StreamInsight 2.1 approach we can create a sink around the method above and hook it to the application in one line. Note: You have to include the Microsoft.ComplexEventProcessing.Linq namespace in order to see the DefineObserver() extension method.
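The one-liner can be sketched as follows (Observer.Create() comes from System.Reactive):

```csharp
// Wrap the console method into an observer and register it with the
// StreamInsight application as a sink definition.
var consoleObserver = app.DefineObserver(
    () => Observer.Create<PointEvent<object>>(ConsoleWritePointNoCTI));
```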


It is one line, but in my opinion it is hard to understand. Let’s pick it apart and analyze what is going on:


We use the static Create() method of the Observer class to create an observer instance that can observe PointEvent<object>, and we point it to the ConsoleWritePointNoCTI method that accepts an element of type PointEvent<object> as a parameter.

Next, we create our data source from the TwitterStream.cs class.


Here as well, we can write it in one line, but it is even more complicated than the last statement. Let’s look at what is going on:


We can see in the code that we begin by creating an instance of our TwitterStream class. Remember that this is the class where we implemented the IObservable<TweetItem> interface.

Then we use the DefineObservable() extension method from the StreamInsight assembly to convert the IObservable into an IQbservable. From there, we convert the IQbservable into an IQStreamable. In this call we define the structure of our PointEvent items and instruct StreamInsight to add CTIs to the stream.
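The chain described above can be sketched like this; the choice of DateTimeOffset.Now as the event start time and of the AdvanceTimeSettings value are assumptions on my part:

```csharp
// IObservable<TweetItem> -> IQbservable -> IQStreamable:
// wrap each TweetItem into a point event and let StreamInsight
// generate the CTIs automatically.
var twitterSource = app.DefineObservable(() => new TwitterStream())
    .ToPointStreamable(
        t => PointEvent.CreateInsert(DateTimeOffset.Now, t),
        AdvanceTimeSettings.IncreasingStartTime);
```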

Finally, we create a binding that connects the data source and the data sink and we call the Run() method on that binding. In this first example, we do not yet use a query and just run all the data that we receive in the source directly to the sink.
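A sketch of the pass-through binding (the process name is arbitrary):

```csharp
// No query yet: route every tweet from the source straight to the
// console sink and keep the process alive until a key is pressed.
twitterSource.Bind(consoleObserver).Run("passThroughProcess");
Console.ReadLine();
```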


And that’s it! Hit F5 and HERE WE GO!

The output should look something like this:


Note: The question marks come from languages with characters that cannot be printed by the standard console.

Bravo! You can now receive live Twitter data processed by StreamInsight!

Next week, we will write our first queries and process the data between the source and the sink.


Using Complex Event Processing (CEP) with Microsoft StreamInsight to Analyze Twitter Tweets 5: The Sample Application 1

Note: This post is one of a series, the overview can be found here: Complex Event Processing with StreamInsight

The sample code for the application can be downloaded here:

Writing the Sample Application

In this post we are going to write a sample application from scratch. We start by installing StreamInsight and then jump into Visual Studio and start coding the project. First, we are going to use a library to get access to the Twitter feed. As soon as we have the basics we elaborate on some interesting queries on the Twitter data stream.

Installing StreamInsight

First, we need to download the StreamInsight installer executable. We can find it in the Microsoft Download Center.

We need the StreamInsight.msi installer that corresponds to our target platform bitness.

During the installation you will be asked if you would like to create a StreamInsight instance. Do so and give it any name you like. In this example I am going to name it “Thomas”.

Creating the Visual Studio Solution

Now, let’s fire up Visual Studio. I am using VS2012 but you can use VS2010 as well. We make it simple and create a new console project. Then we need to add the Microsoft StreamInsight assemblies. Add assembly references to the following files:

  • Microsoft.ComplexEventProcessing: Found in the StreamInsight install directory: C:\Program Files\Microsoft StreamInsight 2.1\Bin\
  • System.Reactive: Found under .NET 4.0
  • System.Reactive.Providers: Found under .NET 4.0

Your References should look like this:


Next, we integrate Twitter into our solution. We use a handy library named Tweetinvi and install it via the NuGet Package Manager. (You may need to install the NuGet Package Manager first, if it is not available in your Visual Studio.) Right-click on the References folder in the Solution Explorer and select “Manage NuGet Packages”. Do an online search for “Tweetinvi” and hit “install”.


First we are going to write a data class that will be our payload object. We call it TweetItem and fill it with a couple of properties. For the sake of simplicity, we restrict our item to the following fields: User, Text, CreationDate, Language, Followers, Friends.


In our TweetItem class we provide a constructor that accepts an object that implements the ITweet interface and converts it into a TweetItem. The ITweet interface can be found in the Tweetinvi Library and contains all the fields that we get for each tweet.
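A sketch of the TweetItem class; the exact ITweet member names may differ between Tweetinvi versions:

```csharp
// Payload POCO that StreamInsight carries through the stream.
public class TweetItem
{
    public string User { get; set; }
    public string Text { get; set; }
    public DateTime CreationDate { get; set; }
    public string Language { get; set; }
    public int Followers { get; set; }
    public int Friends { get; set; }

    // Convert a Tweetinvi ITweet into our payload type.
    public TweetItem(ITweet tweet)
    {
        User = tweet.Creator.Name;
        Text = tweet.Text;
        CreationDate = tweet.CreatedAt;
        Language = tweet.Language;
        Followers = tweet.Creator.FollowersCount;
        Friends = tweet.Creator.FriendsCount;
    }
}
```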


Creating the Twitter Observable

Next, we create the class that we use as our data source and feed into StreamInsight. Create a new class called TwitterStream.cs and implement the IObservable<TweetItem> interface. The interface contains a Subscribe() method:


First, we are going to add a list of IObserver<TweetItem> objects. Then we add code to the Subscribe() method that adds the observer parameter to the observer list.
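A sketch of the observer list and the Subscribe() implementation (the Unsubscriber class is described next):

```csharp
public class TwitterStream : IObservable<TweetItem>
{
    // All currently subscribed observers; initialized in the constructor.
    private readonly List<IObserver<TweetItem>> myObservers;

    public IDisposable Subscribe(IObserver<TweetItem> observer)
    {
        if (!myObservers.Contains(observer))
        {
            myObservers.Add(observer);
        }
        // The returned object lets the subscriber cancel the subscription.
        return new Unsubscriber(myObservers, observer);
    }
}
```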


The result of the subscribe method is an object that implements IDisposable. The subscriber can use it to cancel his subscription. The subscription will end when the subscriber disposes the object that he received upon subscription. We implement a small Unsubscriber class that implements IDisposable and keeps references to the observer list and its own observer instance. Upon disposal, it removes its own reference from the observer collection.
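The Unsubscriber can be sketched as follows:

```csharp
// Disposing this object removes the associated observer from the
// shared observer list, which ends the subscription.
private class Unsubscriber : IDisposable
{
    private readonly List<IObserver<TweetItem>> myObservers;
    private readonly IObserver<TweetItem> myObserver;

    public Unsubscriber(List<IObserver<TweetItem>> observers,
                        IObserver<TweetItem> observer)
    {
        myObservers = observers;
        myObserver = observer;
    }

    public void Dispose()
    {
        if (myObserver != null && myObservers.Contains(myObserver))
        {
            myObservers.Remove(myObserver);
        }
    }
}
```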


Next we need a Start() method that connects to Twitter and starts receiving Tweets. In order to receive Tweets from Twitter you will need to register on the Twitter developer site and get the following four credential keys: UserKey, UserSecret, ConsumerKey, ConsumerSecret. Once you can log in, you can create an application and you will get your authentication information.

There is a free subscription to Twitter that will give you one percent of all the worldwide Tweets. Once you have your credentials, we can implement the start method:


The Start() method uses the Token and SimpleStream classes from the Tweetinvi namespace. Note that the SimpleStream class has nothing to do with StreamInsight. In the StartStream() method we provide a lambda expression that points to the OnNewTweet(tweet) method, which we will define next. The OnNewTweet() method maps the Tweets into TweetItems and forwards each Tweet that we receive to all the items in our observer collection:
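A sketch of both methods; the credential values are placeholders and the exact Tweetinvi signatures (Token constructor, StartStream overloads) may differ between library versions:

```csharp
// Connect to the public Twitter sample stream and push every tweet
// we receive to all subscribed observers.
private void Start()
{
    var token = new Token("USER_KEY", "USER_SECRET",
                          "CONSUMER_KEY", "CONSUMER_SECRET");
    var stream = new SimpleStream("https://stream.twitter.com/1.1/statuses/sample.json");
    stream.StartStream(token, tweet => OnNewTweet(tweet));
}

// Map the Tweetinvi tweet into our payload type and fan it out.
private void OnNewTweet(ITweet tweet)
{
    var item = new TweetItem(tweet);
    foreach (var observer in myObservers)
    {
        observer.OnNext(item);
    }
}
```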


Finally, we implement the constructor. We initialize the myObservers collection and we call the Start() method. We execute the Start() method on a background thread using the Task Parallel Library (TPL).
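A sketch of the constructor:

```csharp
public TwitterStream()
{
    // Create the observer collection and start listening to Twitter
    // on a TPL background thread so the constructor returns immediately.
    myObservers = new List<IObserver<TweetItem>>();
    Task.Factory.StartNew(Start);
}
```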


In the next post we put it all together and write our first query. Stay tuned!