Hello World … A different type of geek podcast

Shawn Wildermuth started recording a new type of interview podcast where he talks with some well-known people in our tech community about how they got started in programming. He’s had software luminaries like Charles Petzold and Scott Hanselman and many others you will recognize. It’s a lot of fun to hear about their beginnings and the other conversation points that evolve while chatting with Shawn.

I talked with Shawn last month for episode #14 of the podcast. Many of you are familiar with the 3-week cold that’s been going around the U.S. Well, I had it then, so hopefully you can get past my sniffles.

First look at (beta of) EF 6.1 Designer

EF6.1 is closing in on release. Along with the new DLLs that you can get via NuGet, there is a new designer that you can install into Visual Studio 2012 or 2013 via an MSI installer.

Before installing, I uninstalled the EF Power Tools, because I wasn’t sure if they were included. After installing the 6.1 Beta EF Designer, there was no sign of the power tool features in the context menu where they would normally appear.


So it looks like this iteration of the designer won’t have the power tool features wrapped in, except for the BIG one: the replacement of the reverse engineer into Code First. (So don’t uninstall those power tools! You can still use the other features, like this one: my favorite.)

With the new designer, when you create a new model, you now get four options. Two are familiar: EF Model from Database and Empty Model. The new ones are Empty Code First Model and Code First from Database.


I chose Code First from database and pointed to AdventureWorks2012 and selected everything in the Human Resources schema.


This resulted in all of my new classes being created in my project, not in a folder.


Also, it added references to EF.

But the classes felt a little messy, so for my next model I began with a new project folder. (Note that I am only experimenting; I do not recommend having two models in the same project.)


Then I added the new entity model to the folder, not the outer project. This time I selected the Person schema.


and all the new classes landed in my new folder


I prefer that. In a real app I would then separate the domain classes into their own projects and I would also put each of the models in its own project.

Let’s take a look at the domain classes that this generated.

Notice that there are data annotations. This is different from the EF Power Tools, which put everything in Fluent API code.


But those annotations are reasonable in a class, since they are not solely defining a database mapping strategy. Required and StringLength can also be used by EF’s and .NET’s validation APIs. For the latter, think MVC validation.

But validation-friendliness is not how this tool decides when to use data annotations. Check out the BusinessEntityAddress type.


It has annotations that drive the composite key creation and others that note database-generated mappings. I wouldn’t normally put those in my domain types.

The default rule is: any mapping that can be described with a data annotation is done that way; everything else is done with the Fluent API. None of my tables triggered any Fluent API mappings. There is some discussion (started by guess who?) on CodePlex about making it easier to force the designer to use all fluent mappings if you want.
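To make the rule concrete, here is a hypothetical sketch, not output from the designer: the Department type, its properties and the precision values are my own examples. Key, required and string-length mappings all have data annotation equivalents, so they would come out as attributes, while decimal precision has no annotation in EF6 and would fall through to Fluent API code in OnModelCreating.

```csharp
using System.ComponentModel.DataAnnotations;
using System.Data.Entity;

// Hypothetical example of the default rule, not actual tool output
public class Department
{
    // These mappings can be expressed with data annotations,
    // so the designer emits them as attributes.
    [Key]
    public short DepartmentID { get; set; }

    [Required]
    [StringLength(50)]
    public string Name { get; set; }

    public decimal Rate { get; set; }
}

public class SampleContext : DbContext
{
    public DbSet<Department> Departments { get; set; }

    protected override void OnModelCreating(DbModelBuilder modelBuilder)
    {
        // Decimal precision has no data annotation in EF6,
        // so it can only be mapped with the Fluent API.
        modelBuilder.Entity<Department>()
                    .Property(d => d.Rate)
                    .HasPrecision(8, 4);
    }
}
```

Under that rule, a class with only annotation-expressible mappings produces an empty OnModelCreating.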

I was curious what Empty Code First Model gives us. It sets up a shell DbContext class. Makes sense to me.


I have been asking for the ability to select individual tables/views for reverse engineering an existing database into a Code First context and classes since the first public CTP of the EF Power Tools. (Here’s proof!) So even with a few things I’d like to see done differently, I am thrilled with this new designer feature.


For people building new apps against legacy databases, it is a great first step towards setting up DbContexts that you can align with Domain-Driven Design bounded contexts. It gives me a nice stake in the ground for thinking within boundaries, and it also forces me to think about when a customer type in one context should be designed differently from a customer type in another context because they are used differently.

Any-hoo… there’s a quick first look for you. There’s still time to provide feedback to the team on the CodePlex site. Go to this work item: https://entityframework.codeplex.com/workitem/407

Also, as Erik reminds me in the comments, there is a Visual Studio extension that provides some more extensive reverse engineering features into a Code First DbContext and classes. You can find the docs, a video and the download here: EntityFramework Reverse POCO Generator


Setting my sights on Barcelona this fall for TechEd Europe 2014

This will depend on getting a talk accepted, but I promise I will put my best foot forward! You may or may not know that I was supposed to give a talk at TEE 2013 in Madrid in June and follow that with a trip to Barcelona. But I had so many travel problems: after 5 hours at my airport on Monday, then trying again on Tuesday only to get stuck at O’Hare airport in Chicago with United Airlines saying they could not get me to Madrid until Thursday, I finally gave up and went home. So I’m determined to get to Barcelona this time!

.NET Rocks Interview from Phoenix is now online

Last week I flew from bitterly cold Vermont out to Phoenix, Arizona to join Carl Franklin & Richard Campbell in a very cool art film theater in Tempe for their Modern Apps 2013 Road Show Tour. The tour is in conjunction with Xamarin and Visual Studio.

Oh, it was warm. Warm and sunny. I’m back home, where it has currently warmed up to 4 degrees Fahrenheit!

I got to spend an extra day there which enabled me to see some of the scenery in the Phoenix area and visit with Esther Schindler & her hubby (and cats!) in Scottsdale who took me on a great tour into the hills (without the cats).

At the roadshow event, Carl & Richard interviewed me. We talked about Entity Framework 6, databases and the usual other fun chit chat.

The show is now online. Episode 943. You can listen to or download it here.

New Blog and New WebSite!

I’ve finally done away with my ancient ASP.NET 2.0 thedatafarm.com, which I hadn’t paid any attention to in many years, and with it also moved my blog (an early version of GraffitiCMS that was starting to act up).

Both are now set up as a WordPress app which was fun to configure with plugins and widgets and not too challenging since I have played with WordPress a bit on behalf of my mom.

So this will be easier for me to maintain and add in modern goodies like the social links.

I’m grateful to a number of people whose resources helped or inspired me:

Picking a Blog Engine

Many folks on twitter recommended their favorite blog engine. I considered Orchard and the new Ghost blog, and initially thought I did not want to use WordPress. My pal Daniel Marbach worked on me on twitter for a while, mystified why I was a brick wall about my anti-WordPress stance. (He gets the last laugh!) Ghost is new. It will run on Windows Azure, but I was staying with orcsweb (via Cytanium) and that was not an option. Michele Bustamante is loving Ghost on her new site michelebusta.com. Sebastien Ros, from Microsoft, who works on Orchard, was very supportive, even offering to do the big 10-year, 4200-blog-post conversion for me. In the end, Steve Smith said two words that swayed me back to WordPress: automatic updates. Yeah, I love the ease of one-button-click WordPress and plugin updates.

Picking a Host

I’ve been very fortunate to have my site hosted by orcsweb.com for many years. While they are one of the highest-end hosts for ASP.NET sites and pricey (but oh so worth it), they had offered me free hosting as a benefit of being an ASPInsider. Otherwise, it would have been overkill for my wee web site. But thanks to the gratis hosting, I get the benefits that orcsweb has to offer. Orcsweb has a spin-off company for people like me who don’t need the power (or expense) of the premier hosting, but still want the benefit of their experience, dependability and great customer support: Cytanium.com. So they offered to host my site on Cytanium, which also sports the Microsoft Web App Gallery for auto-creating websites using frameworks like Orchard and WordPress. There are about 100 apps to choose from to install from the gallery. Setting up a new WordPress site was a snap.

Migrating 10 years’ worth of blogging

I have over 4000 blog posts and didn’t want to lose them. I found four resources that made this possible.

  1. Jef Kazimer‘s blog post on his Graffiti to WordPress migration was very helpful.
  2. Jon Sagara‘s Graffiti to BlogML tool. It’s just a small project that you can download and run in Visual Studio. I made one tweak to the code, to handle some issues with the way it tried to emulate the blog URLs based on the titles. (I had lots of quotation marks in those titles.) In the end, I didn’t even need those URLs, because I figured out how to get WordPress to format URLs just the same as they were formatted in Graffiti, so my links should all (or mostly all) work. And I got the tool to create valid XML. I pointed the connection string at my SQL Server database and it processed all of the blog posts and comments in a matter of minutes!
  3. Once the posts are in BlogML format there is a WordPress plugin, BlogML Importer (which works fine with WordPress 3.8), that does the conversion.
  4. However, it has a limit of 2MB file imports. Searching for a solution, I couldn’t believe that it was my own old Vermont pal, Dave Burke, who presented the solution: Importing a Big Honkin’ BlogML.xml Into WordPress. I followed his instructions to a T and was able to pull in all of my posts and comments in 7 smaller files.

And some marketing

I left my blog in a /blog subfolder so that I can continue to have a main page on thedatafarm.com to let folks know that, as a consultant, I do like (and need) to work for a living. It is all just one big WordPress site. Well, really a small one. The other stuff, home page, etc., are just static pages.


I’m using a bunch of plugins. Since they are free, the least I can do is give them a nod:



Problems Installing AMD Catalyst Control Center on Windows 8? Here’s my fix!

TL;DR: Updating to Windows 8.1 and reinstalling solved the problem.

(And yes, I realize that I just wrote an UpWorthy-worthy post title!)

I recently replaced my 19” monitors with a pair of 27” monitors. My desktop is a Dell Optiplex and it has an AMD HD Radeon 7500 display adapter installed. This came with the Dell and is not an off-the-shelf card, but something that AMD sells generically to manufacturers.

My monitor has HDMI and DisplayPort inputs. My adapter has DisplayPort and DVI outputs. I plugged one monitor in DP to DP. It was fine. For some reason, I got an HDMI-to-DVI adapter instead of a DisplayPort-to-DVI one, and when I plugged the second monitor in over HDMI, I got what is a classic problem with HDMI monitors: a 1” black border around my desktop.

I tried many things via Windows settings, but none did the trick. The monitor has lots of hardware settings, but it doesn’t have the classic horizontal and vertical scaling options that I’m used to from older monitors.

So I went to the internet and found many (many, many, many) recommendations on the web to just install AMD’s Catalyst Control Center and then use its features to change the scaling on the monitor and lose the border.

Obviously this was the way to go. But after 4 hours of attempting to install the CCC software, I got bupkus. The installer went through the motions of installing, but I did not get the CCC option on the desktop context menu. This is what it should look like:


I went through uninstalls, reinstalls, registry purges (based on AMD guidance), installing beta software. Still nothing.

Then I dug into the install folder created by the installer (C:\AMD) and dug through subfolders, running all of the installers I could find! (Yes, I know… dangerous, with the potential for having to repave my machine.)

This finally installed a shell of the CCC, though still not on my context menu. But when I ran this shell, all it had in it was a Quadrant tool that lets you define where on the screen various software should be displayed.

I gave up on this thinking that next time I was in town, I would just get a proper DisplayPort to DVI adapter.

A month went by and I decided to try again yesterday. I only wasted 1/2 hr this time before giving up.

I also happened to decide it was time to upgrade this computer from Windows 8 to Windows 8.1. I had upgraded all of the other laptops and devices already.

After the 8.1 update was done, I decided to give CCC one last try (or at least what I was calling one last try). I looked in the install folder and saw something I had never seen before in my many visits to that folder: a folder called WU-CCC2.


In there was another install folder and in there another setup.exe!


Of course, I gave the install a try (because I just love to click on things!) and it did some interesting things. First it ran an uninstall (cleaning up previous garbage?) and then ran another install.

When complete, I had to reboot and the Catalyst Control Center option was finally on my menu!

I opened it up, and it looked like all of the screenshots I had seen all over the internet recommending how to solve my original problem: the 1” border.

And changing the scaling on that monitor has, indeed, fixed the border problem!


Sometimes being a pit bull pays off. On the other hand, a new display adapter would probably have been more cost effective than the wasted time… but my pride was at risk here. So I had to solve the puzzle!

Fix for VS2013 Not Showing Databases in SQL Server Object Explorer

This really had me confused. I have used SSOE plenty in VS2012 and, on the laptop I just replaced, in VS2013. But on my new laptop and on my desktop, I can go through the motions of adding the (localdb)\v11 database to SSOE with no errors and still nothing shows up under the SQL Server branch of the tree.

Finally I opened VS2012 to see if *it* would show my databases in SSOE. Instead I got a message that SQL Server Data Tools (SSDT) was out of date.

SSDT is an extension to Visual Studio 2012 that its team released updates to whenever they had them.

But for VS2013, it’s just a part of the full IDE.

My out-of-date VS2012 SSDT was causing a conflict with the built-in tools for VS2013. Go figure. But installing the October 2013 update to SSDT (you’ll find an update for VS2010 and for VS2012) fixed the problem. Now I can see my databases in SSOE in both versions of Visual Studio.


How EF6 Enables Mocking DbSets more easily

There’s an interesting change in EF6 that simplifies unit testing when EF is in the way and you don’t want to engage it at all.

EF6’s DbSet gained new features. The team had to decide whether to make a breaking change to the existing IDbSet interface or to leave that be and just change DbSet. They chose the latter route. In doing so, they also ensured that we could use DbSet directly for testing by adding a new constructor.

Here you can see the different constructors and how they affect our ability to test.

EF5 DbSet Constructor

The DbSet constructor is tied to a DbContext by way of the InternalQuery that is used internally in the constructor.

internal DbSet(InternalSet<TEntity> internalSet)
   : base((IInternalQuery<TEntity>) internalSet)
{
   this._internalSet = internalSet;
}

In EF5, we also have IDbSet (which DbSet derives from) and IObjectSet (which was introduced in EF4). These interfaces contain the set operations (Add, Attach, Remove and some additional members through other interfaces) and can be implemented without forcing any ties to EF’s DbContext.

That’s what we’ve used in the past to create fake DbSets for testing scenarios.

EF6 DbSet Constructors

The internal constructor is still there.

    internal DbSet(InternalSet<TEntity> internalSet)
      : base((IInternalQuery<TEntity>) internalSet)
    {
      this._internalSet = internalSet;
    }

But now there is another constructor. It’s protected and only uses a set interface, not the query interface. This allows mocking frameworks to get access to DbSet and, at the same time, benefit from some of the methods added to DbSet for EF6.


  /// <summary>
  /// Creates an instance of a <see cref="T:System.Data.Entity.DbSet`1"/> when called from the constructor of a derived
  /// type that will be used as a test double for DbSets. Methods and properties that will be used by the
  /// test double must be implemented by the test double except AsNoTracking, AsStreaming, and Include where
  /// the default implementation is a no-op.
  /// </summary>
  protected DbSet()
    : this((InternalSet<TEntity>) null)
  {
  }

Even if you want to create your own fakes (or test doubles) in EF6, you can now do that with DbSet, not just IDbSet. IDbSet is still there for backwards compatibility.
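To see what that protected constructor buys you, here is a minimal sketch of a hand-rolled test double, adapted from the pattern in the MSDN test-double documentation. The TestDbSet name and the ObservableCollection backing store are my choices, and a real double would also override Find and any async members its tests touch. Because the protected constructor takes no InternalSet, this class compiles with no tie to a DbContext.

```csharp
using System;
using System.Collections;
using System.Collections.Generic;
using System.Collections.ObjectModel;
using System.Data.Entity;
using System.Linq;
using System.Linq.Expressions;

// A hand-rolled in-memory stand-in for DbSet<TEntity>.
public class TestDbSet<TEntity> : DbSet<TEntity>, IQueryable, IEnumerable<TEntity>
    where TEntity : class
{
    private readonly ObservableCollection<TEntity> _data
        = new ObservableCollection<TEntity>();
    private readonly IQueryable _query;

    public TestDbSet()
    {
        _query = _data.AsQueryable();
    }

    // Redirect the set operations at the in-memory collection
    public override TEntity Add(TEntity item)
    {
        _data.Add(item);
        return item;
    }

    public override TEntity Remove(TEntity item)
    {
        _data.Remove(item);
        return item;
    }

    public override ObservableCollection<TEntity> Local
    {
        get { return _data; }
    }

    // Forward the query members so LINQ to Objects handles queries
    Type IQueryable.ElementType { get { return _query.ElementType; } }
    Expression IQueryable.Expression { get { return _query.Expression; } }
    IQueryProvider IQueryable.Provider { get { return _query.Provider; } }

    IEnumerator IEnumerable.GetEnumerator() { return _data.GetEnumerator(); }
    IEnumerator<TEntity> IEnumerable<TEntity>.GetEnumerator()
    {
        return _data.GetEnumerator();
    }
}
```

A context test double can then hand out instances of this class from its DbSet properties.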

There are two detailed documents on MSDN for using EF6 to create Test Doubles and to use with Mocking Frameworks.
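As a taste of the mocking-framework route, the Moq pattern in those docs looks roughly like this (my abbreviated version, with Blog as a stand-in entity name): you forward the mocked set’s IQueryable members to an in-memory queryable so that LINQ queries just work.

```csharp
using System.Collections.Generic;
using System.Data.Entity;
using System.Linq;
using Moq;

public class Blog            // stand-in entity for the example
{
    public string Name { get; set; }
}

public static class MockSetFactory
{
    // Wire up a Mock<DbSet<Blog>> so LINQ queries run against in-memory data
    public static Mock<DbSet<Blog>> Create(IEnumerable<Blog> blogs)
    {
        var data = blogs.AsQueryable();
        var mockSet = new Mock<DbSet<Blog>>();
        mockSet.As<IQueryable<Blog>>().Setup(m => m.Provider).Returns(data.Provider);
        mockSet.As<IQueryable<Blog>>().Setup(m => m.Expression).Returns(data.Expression);
        mockSet.As<IQueryable<Blog>>().Setup(m => m.ElementType).Returns(data.ElementType);
        mockSet.As<IQueryable<Blog>>()
               .Setup(m => m.GetEnumerator())
               .Returns(() => data.GetEnumerator());
        return mockSet;
    }
}
```

The mocked set can then be returned from a mocked context’s DbSet property and queried like a real one.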

You also might find the meeting notes about this change interesting. I sure do! :)

I am curious to revisit my work with Telerik’s JustMock. I built some tests with EF5 and JustMock in my Automated Testing for Fraidy Cats course on Pluralsight. When using the paid version, everything just works. But when using JustMock Lite, the free version, it was not able to grok DbSets and you still needed to implement your own fake. I’ll be checking to see if the new DbSet implementation allows the free version of JustMock to mock DbSets on its own now.

// Update, about 20 minutes after the initial post: the limitation of JustMock Lite is that it doesn’t support ReturnsCollection, which is what you want to emulate the return of a DbSet. So if you’re not willing to pay for your tools, you can use the free version (which has a ton of features) and do a little extra work (create your own test double for DbSet, which you can see how to do in the MSDN doc I linked to above).

Testing Out the Connection Resiliency Feature in EF6

A few days ago I wrote a blog post about gaining a better understanding of the new connection resiliency feature of EF6. The feature is implemented through execution strategies, i.e., implementations of the new IDbExecutionStrategy interface.

The SQL Server provider that’s bundled with EF6 has a strategy aimed at Windows Azure SQL Database (aka SQL Azure) that is designed to retry a command if a transient connection error is thrown. My blog post listed the transient error messages.
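Conceptually, an execution strategy is just a policy-driven retry loop wrapped around each database operation. This plain C# sketch is my own simplification, not EF’s actual SqlAzureExecutionStrategy code (the names ExecuteWithRetries and isTransient are invented), but it shows the shape: retry only when a predicate classifies the error as transient, and give up after a maximum number of retries.

```csharp
using System;

// A simplified retry loop illustrating what an execution strategy does.
// The names here are mine, not EF's.
public static class RetrySketch
{
    public static T ExecuteWithRetries<T>(
        Func<T> operation,
        Func<Exception, bool> isTransient,
        int maxRetries)
    {
        for (var attempt = 0; ; attempt++)
        {
            try
            {
                return operation();
            }
            catch (Exception ex)
            {
                // Give up on non-transient errors, or once retries are spent.
                if (!isTransient(ex) || attempt >= maxRetries)
                    throw;
                // A real strategy would also wait here, with an
                // exponentially increasing delay between attempts.
            }
        }
    }
}
```

An operation that fails twice with a transient error and then succeeds would return normally on the third attempt.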

I wanted to see this in action, but it’s not simple to make a transient error occur. There was a blog post with sample code for creating a deadlock, but then Glenn Condron on the EF team suggested just throwing the error by leveraging the new EF6 ability to intercept commands & queries headed to the database. (It can also intercept data coming back.)

A few people have asked how I did this, so I’ll share my setup here.

I played with that a bit and realized that the SqlAzureExecutionStrategy requires more than just the correct error code to trigger the retries. It needs a SqlException to throw that error. And a little more banging led me to realize that you can’t just instantiate a SqlException; it has to be done via reflection.

But I don’t give up easily. I found some helpful examples for doing this, though it still wasn’t simple. Maybe things have changed, but I finally tweaked the sample code enough to get what I needed.

Still, I was unsuccessful at first, because EF begins with its initialization tasks and the SqlAzureExecutionStrategy was not responding to errors thrown by those commands. I had to filter those commands out. Then, finally, it worked!

There may be an easier way; I know Glenn does this differently. What I have worked out lets me witness the retries, and since that’s all I wanted to see, I’m not worried about having one of those retries be successful. I know that in the real world, as it’s retrying and the transient connection kicks back in, one of those retries will get through.

So to test out the feature you need four puzzle pieces.

1) Intercept the (non initialization/migration) command

2) Use DbConfiguration to make sure the interception is happening

3) Use DbConfiguration to set up the SqlAzureExecutionStrategy

4) Run an integration test that will attempt one or more queries on the database via EF.

Step 1) Intercept the command and throw the correct error

For this I created a new class, TransientFailureCausingCommandInterceptor, that implements IDbCommandInterceptor. There are a number of methods to implement. The only one I’m interested in is ReaderExecuting, i.e., a read command is about to execute.

 public class TransientFailureCausingCommandInterceptor : IDbCommandInterceptor
 {
    public void ReaderExecuting(
      DbCommand command, DbCommandInterceptionContext<DbDataReader> interceptionContext)
    {
    }

    // ...the remaining IDbCommandInterceptor methods are empty no-ops
 }

In that method, I filter out commands that are for the database initialization and migration work, in case they are executing.

For any other commands, I write out some info to Debug so I know that a query is about to execute; then I use my helper class to create a fake SqlException that carries one of the transient error codes that SqlAzureExecutionStrategy looks for: 10053.

  public void ReaderExecuting(
      DbCommand command, DbCommandInterceptionContext<DbDataReader> interceptionContext)
  {
    if (!(command.CommandText.Contains("serverproperty")
        || command.CommandText.Contains("_MigrationHistory")))
    {
      Debug.WriteLine("throwing fake exception from interceptor");
      throw SqlExceptionFaker.Error10053;
    }
  }


The SqlExceptionFaker is the magic for throwing a SqlException. It is a twist on this example I found from Microsoft MVP Chris Pietschmann. I found some other ways of doing this but, in my opinion, Chris’ was the best of the solutions that I found.

using System.Collections;
using System.Data.SqlClient;
using System.Runtime.Serialization;

namespace SqlExceptions
{
  public static class SqlExceptionFaker
  {
    private static SqlException _error10053;

    public static SqlException Error10053
    {
      get
      {
        if (_error10053 == null)
        {
          _error10053 = Generate(SqlExceptionNumber.TransportLevelReceiving);
        }
        return _error10053;
      }
    }

    public enum SqlExceptionNumber : int
    {
      TimeoutExpired = -2,
      EncryptionNotSupported = 20,
      LoginError = 64,
      ConnectionInitialization = 233,
      TransportLevelReceiving = 10053,
      TransportLevelSending = 10054,
      EstablishingConnection = 10060,
      ProcessingRequest = 40143,
      ServiceBusy = 40501,
      DatabaseOrServerNotAvailable = 40613
    }

    public static SqlException Generate(SqlExceptionNumber errorNumber)
    {
      return SqlExceptionFaker.Generate((int) errorNumber);
    }

    public static SqlException Generate(int errorNumber)
    {
      // SqlException has no public constructor, so bypass construction entirely
      var ex = (SqlException) FormatterServices.GetUninitializedObject(typeof (SqlException));

      var errors = GenerateSqlErrorCollection(errorNumber);
      SetPrivateFieldValue(ex, "_errors", errors);
      return ex;
    }

    private static SqlErrorCollection GenerateSqlErrorCollection(int errorNumber)
    {
      var t = typeof (SqlErrorCollection);
      var col = (SqlErrorCollection) FormatterServices.GetUninitializedObject(t);
      SetPrivateFieldValue(col, "errors", new ArrayList());
      var sqlError = GenerateSqlError(errorNumber);
      var method = t.GetMethod(
        "Add",
        System.Reflection.BindingFlags.NonPublic | System.Reflection.BindingFlags.Instance);
      method.Invoke(col, new object[] {sqlError});
      return col;
    }

    private static SqlError GenerateSqlError(int errorNumber)
    {
      var sqlError = (SqlError) FormatterServices.GetUninitializedObject(typeof (SqlError));

      SetPrivateFieldValue(sqlError, "number", errorNumber);
      SetPrivateFieldValue(sqlError, "message", errorNumber.ToString());
      SetPrivateFieldValue(sqlError, "procedure", string.Empty);
      SetPrivateFieldValue(sqlError, "server", string.Empty);
      SetPrivateFieldValue(sqlError, "source", string.Empty);
      SetPrivateFieldValue(sqlError, "win32ErrorCode", errorNumber);

      return sqlError;
    }

    private static void SetPrivateFieldValue(object obj, string field, object val)
    {
      var member = obj.GetType().GetField(
        field,
        System.Reflection.BindingFlags.NonPublic | System.Reflection.BindingFlags.Instance);
      member.SetValue(obj, val);
    }
  }
}

Step 2) Force the context to use this interceptor

This happens in the lovely new DbConfiguration class. Read more about it here.

public class CustomDbConfiguration : DbConfiguration
{
  public CustomDbConfiguration()
  {
    AddInterceptor(new CasinoModel.TransientFailureCausingCommandInterceptor());
  }
}

Step 3) Make sure your DbConfiguration class is wired up in app/web.config, in the entityFramework section:

 <entityFramework codeConfigurationType="DataLayer.DbConfigurations.CustomDbConfiguration,CasinoModel">
  . . .


Step 4) Test

This is a test that works with my model. Notice that my test specifies a connection string name. That ensures that my context is using the SQL Azure connection I set up in my config file.

   [TestMethod, TestCategory("Connection Resiliency")]
    public void CanHitSqlAzureDbWithTransientFailure()
    {
      int slotcount = 0;
      using (var context = new CasinoSlotsModel(connectionStringName: "CasinoHotelsAzure"))
      {
        foreach (var casino in context.Casinos)
        {
          context.Entry(casino).Collection(c => c.SlotMachines).Load();
          slotcount += casino.SlotMachines.Count;
        }
      }
      Assert.AreNotEqual(0, slotcount);
    }

The test fails because an exception was thrown: the SqlException that I faked. Here’s what I see when I check the output.

The SqlException caused a series of exceptions in response. Notice the nice message from EntityException: “consider using a SqlAzureExecutionStrategy”.

Result Message:
Test method AutomatedTests.UnitTest1.CanHitSqlAzureDbWithTransientFailure threw exception:

System.Data.DataException: An exception occurred while initializing the database. See the InnerException for details. —> System.Data.Entity.Core.EntityException: An exception has been raised that is likely due to a transient failure. If you are connecting to a SQL Azure database consider using SqlAzureExecutionStrategy. —>

System.Data.Entity.Core.EntityCommandExecutionException: An error occurred while executing the command definition. See the inner exception for details. —>

System.Data.SqlClient.SqlException: Exception of type ‘System.Data.SqlClient.SqlException’ was thrown.

Looking at the failed test’s OUTPUT I see the text spit out by the ReaderExecuting method:


The method was only hit once, so I see my message only once.

Step 5) Add in the SqlAzureExecutionStrategy to the DbConfiguration class:

 public CustomDbConfiguration()
 {
   AddInterceptor(new CasinoModel.TransientFailureCausingCommandInterceptor());
   SetExecutionStrategy(
     SqlProviderServices.ProviderInvariantName, () => new SqlAzureExecutionStrategy());
 }

Using the default settings of SqlAzureExecutionStrategy will cause 5 retries.

Step 6) Run the test again:

The test hangs for a lot longer before it fails. Why longer? The retries! I get 6 messages. The first is for the initial problem and the other 5 are for each of the 5 retries.


Why does it still fail? Because my setup doesn’t “turn off” the error. That’s fine. I just want to see that it does actually retry.

The exception is different this time: RetryLimitExceededException, maximum number of retries (5) exceeded, etc.

Result Message:
Test method AutomatedTests.UnitTest1.CanHitSqlAzureDbWithTransientFailure threw exception:

System.Data.DataException: An exception occurred while initializing the database. See the InnerException for details. —> System.Data.Entity.Infrastructure.RetryLimitExceededException: Maximum number of retries (5) exceeded while executing database operations with ‘SqlAzureExecutionStrategy’. See inner exception for the most recent failure. —>

System.Data.Entity.Core.EntityCommandExecutionException: An error occurred while executing the command definition. See the inner exception for details. —>

System.Data.SqlClient.SqlException: Exception of type ‘System.Data.SqlClient.SqlException’ was thrown.

Step 7) Modify the connection retries

Next I’ll change the default of the SqlAzureExecutionStrategy to retry only 3 times.

SetExecutionStrategy(
  SqlProviderServices.ProviderInvariantName, () => new SqlAzureExecutionStrategy(3, new TimeSpan(15)));

Look at the output after I run the test again: only 3 retries, then it gives up.

So that’s it. I’m content to see this feature in action.

I was also happy to see a comment in the previous blog post from a developer who has seen the benefits of this feature already in their production environment.