Category Archives: Data Access

Defining a Defining Query in EF Core 2.1

I have to cut out some text from a too-long article I’ve written for a magazine (links when it’s published), so here is a simple example of using the new ToQuery method for creating a defining query in EF Core 2.1 (currently in Preview 2).

ToQuery is associated with the new Query Type feature that allows you to use types that are not mapped to a table in the database and are therefore not true entities, don’t require a key and are not change tracked.

I’m starting with a simple model that includes these two entities, which are mapped to tables in my DbContext.

public class Team
{
    public int TeamId { get; set; }
    public string Name { get; set; }
    public string TwitterAlias { get; set; }
    public List<TeamMember> Members { get; set; }
}

public class TeamMember
{
    public int TeamMemberId { get; set; }
    public string Name { get; set; }
    public string Role { get; set; }
    public int TeamId { get; set; }
    public TimeSpan TypicalCommuteTime { get; private set; }

    public void CalculateCommuteTime(DateTime start, DateTime end)
    {
        TypicalCommuteTime = end.Subtract(start);
    }
}

You’ll need to pre-define the type being used for the defining query; mine will have the Name and TypicalCommuteTime for the team member.

public class TeamCommute
{
    public TeamCommute(string name, TimeSpan commuteTime)
    {
        Name = name;
        TypicalCommuteTime = commuteTime;
    }

    public string Name { get; set; }
    public TimeSpan TypicalCommuteTime { get; set; }
}

You can define queries directly in OnModelCreating using either raw SQL with FromSql or a LINQ query inside a new method called ToQuery. Here’s an example of using a query type with a defining query and LINQ:

modelBuilder.Query<TeamCommute>()
    .ToQuery(() =>
        TeamMembers.Select(m => new TeamCommute(m.Name, m.TypicalCommuteTime)));

With this query defined on the TeamCommute class, you can now use it in queries in your code, for example:

var commutes = context.Query<TeamCommute>().ToList();
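
Putting it all together, here’s roughly how the pieces fit in the DbContext. This is just a sketch: the TeamContext name and provider setup are placeholders, while the Query/ToQuery calls are the EF Core 2.1 preview API shown above.

using System.Linq;
using Microsoft.EntityFrameworkCore;

public class TeamContext : DbContext
{
    public DbSet<Team> Teams { get; set; }
    public DbSet<TeamMember> TeamMembers { get; set; }

    protected override void OnModelCreating(ModelBuilder modelBuilder)
    {
        // TeamCommute is a query type: no key, not change tracked, not mapped to its own table
        modelBuilder.Query<TeamCommute>()
            .ToQuery(() =>
                TeamMembers.Select(m => new TeamCommute(m.Name, m.TypicalCommuteTime)));
    }
}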

Keep in mind that you can’t define both a ToQuery and a ToView mapping on the same type.

New Pluralsight Course! EF Core 2: Getting Started

I’ve recently published my 19th course on Pluralsight.com: Entity Framework Core 2: Getting Started.

It’s 2hrs 40 minutes long and focuses on the basics.

This is using EF Core 2.0.1 in Visual Studio 2017.

Future plans: I’ve begun working on an intermediate level course to follow up and have others in the pipeline…such as a course to cover features of EF Core 2.1 when it gets released (I will wait until it has RTMd for stability) and other advanced topics. I am also planning to do a cross-platform version using VS Code on macOS because that’s my fave these days.

If you are not a Pluralsight subscriber, send me a note and I can give you a 30-day trial so you can watch the course. Be warned: the trial is akin to a gateway drug to becoming a subscriber.

Here is the table of contents for the course:

Introducing a New, Lighter Weight Version of EF   32m 40s
Introduction and Overview
What Is Entity Framework Core?
Where You Can Build and Run Apps with EF Core
How EF Core Works
The Path From EF6 to EF Core to EF Core 2
EF Core 2 New Features
Looking Ahead to EF Core 2.1 and Beyond
Review and Resources

Creating a Data Model and Database with EF Core    42m 36s
Introduction and Overview
Setting up the Solution
Adding EF Core with the NuGet Package Manager
Creating the Data Model with EF Core
Specifying the Data Provider and Connection String
Understanding EF Core Migrations
Adding Your First Migration
Inspecting Your First Migration
Using Migrations to Script or Directly Create the Database
Recreating the Model in .NET Core
Adding Many-to-many and One-to-one Relationships
Reverse Engineering an Existing Database
Review and Resources

Interacting with Your EF Core Data Model 34m 11s
Introduction and Overview
Getting EF Core to Output SQL Logs
Inserting Simple Objects
Batching Commands When Saving
Querying Simple Objects
Filtering Data in Queries
Updating Simple Objects
Disconnected Updates
Deleting Objects with EF Core
Review and Resources

Querying and Saving Related Data   20m 49s
Introduction and Overview
Inserting Related Data
Eager Loading Related Data
Projecting Related Data in Queries
Using Related Data to Filter Objects
Modifying Related Data
Review and Resources

Using EF Core in Your Applications    30m 52s
Introduction and Overview
EF Core on the Desktop or Device
The Desktop Application: Windows Presentation Foundation (WPF)
Creating the WPF Application
Walking Through the WPF Data Access
EF Core in ASP.NET Core MVC
Adding Related Data into the MVC App
Coding the MVC App’s Relationships
Review and Resources

The Secret to Running EF Core 2.0 Migrations from a .NET Core or .NET Standard Class Library

I have had two people that watched my Pluralsight EF Core Getting Started course (which will soon be joined by an EF Core 2: Getting Started course) ask the same question, which mystified me at first.

They were running migrations commands, which caused the project to compile, but the commands did not do anything. For example, add-migration didn’t add a migration file and get-dbcontext did not return any information. The most curious part was that there was no error message! I was able to duplicate the problem.

With EF6 it was possible to use migrations from a class library with no exe project in sight. EF Core migrations can run from a .NET Framework or .NET Core project but not from .NET Standard; they need a runtime. A common workaround, even if you haven’t gotten to the UI part of your app yet, is to just add a .NET Core console app project to the solution, add the EF Core Design NuGet package to it and set it as the startup project. But it’s still possible to do this without adding in a dummy project.

We already knew about the multi-targeting fix which solved an error when you try to run migrations from a .NET Standard library. But even with that fix in place, we were getting the mysterious nothingness.

The answer to the question was buried in a GitHub issue and in comments for the Migrations document in the EF Core docs. This same solution solved a problem I was having when trying to use migrations in a UWP app (again, not .NET Core or .NET Framework) that used a separate class library to host its DbContext.

I’m writing this blog post to surface the solution until it is resolved.

The solution that we used with EF Core 1.0 in order to run migrations from a .NET Standard library was to multi-target for .NET Standard (so you can use the library in a few places) and .NET Core (so you can run migrations).

That means replacing

<PropertyGroup>       
  <TargetFramework>netstandard2.0</TargetFramework>
</PropertyGroup>

with

<PropertyGroup> 
  <TargetFrameworks>netcoreapp2.0;netstandard2.0</TargetFrameworks>
</PropertyGroup>

Notice that the element name is now plural and there is a semicolon between the two target framework monikers.

But there’s one more secret which is not in the documentation.

For .NET Standard 2.0 (and EF Core 2.0), you also need to add the following to csproj.

<PropertyGroup>
  <GenerateRuntimeConfigurationFiles>true</GenerateRuntimeConfigurationFiles>
</PropertyGroup>
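
Putting the two changes together, the relevant part of the class library’s csproj ends up looking roughly like this:

<PropertyGroup>
  <TargetFrameworks>netcoreapp2.0;netstandard2.0</TargetFrameworks>
  <GenerateRuntimeConfigurationFiles>true</GenerateRuntimeConfigurationFiles>
</PropertyGroup>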

Now with the DbContext project set as the startup project, and ensuring that the Package Manager Console (or command line) is pointing to the same project, your migration commands will work.

Thanks to Christopher Moffat who found the solution in the GitHub issues and shared it in the comments on the EF Core Package Manager Console Tools document.



Another Use Case for DbContext.Add in EFCore (and a DDD win)

If you are like me and design your classes following Domain-Driven Design principles, you may find yourself with code like this for controlling how objects get added to collections in the root entity.

public class Samurai
{
    public Samurai(string name) : this()
    {
        Name = name;
    }

    private Samurai()
    {
        _quotes = new List<Quote>();
    }

    public int Id { get; private set; }
    public string Name { get; private set; }
    private readonly List<Quote> _quotes;
    public IEnumerable<Quote> Quotes => _quotes.ToList();

    public void AddQuote(string quoteText)
    {
        var newQuote = new Quote(quoteText, Id);
        _quotes.Add(newQuote);
    }
}
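
The Quote class itself isn’t shown here; a minimal sketch that satisfies the constructor calls above might look something like this (the property names are assumed):

public class Quote
{
    public Quote(string text, int samuraiId)
    {
        Text = text;
        SamuraiId = samuraiId;
    }

    public int Id { get; private set; }
    public string Text { get; private set; }
    public int SamuraiId { get; private set; }
}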

I have a fully encapsulated collection of Quotes. The only way to add a new quote is through the AddQuote method. You can’t just call Samurai.Quotes.Add(myquote).

Additionally, because I want to control how developers interact with my API, there is no DbSet for Quotes. You have to do all of your queries and updates via context.Samurais.

A big downside to this is that if I have a new quote and I know the ID of the samurai, I have to first query for the samurai and then use its AddQuote method. That really bugs me. I just want to create a new quote, push in the Samurai’s ID value and save it. And that requires either raw SQL or a DbSet<Quote>. I don’t like either option. Raw SQL is a hack in this case and DbSet<Quote> will open my API up to potential misuse.

I was thinking about this problem while lying in bed this morning (admit it, that’s the first thing you do when you wake up, too, right?) and had an idea.

In EF Core, we can now add objects directly to the context without going through the DbSet. The context can figure out what DbSet the entity belongs to and apply the right info to the change tracker. I thought this was handy for being able to call

myContext.AddRange(personobjectA, accountobjectB, productObjectC);

Although I haven’t run into a good use case for leveraging that yet.

What occurred to me is that if DbContext.Add is using reflection, maybe EF Core can find a private DbSet.

So I added a private DbSet to my DbContext class:

private DbSet<Quote> Quotes { get; set; }
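
For context, the relevant bits of the SamuraiContext now look roughly like this (provider and connection setup omitted from the sketch):

public class SamuraiContext : DbContext
{
    public DbSet<Samurai> Samurais { get; set; }

    // No public DbSet for Quote. This private set keeps Quotes out of the public API surface,
    // yet context.Add(quote) still works, as the experiment below shows.
    private DbSet<Quote> Quotes { get; set; }
}
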
And I tried out this code (notice I’m using context.Add, not context.Quotes.Add):

static void AddQuoteToSamurai()
{
    using (var context = new SamuraiContext())
    {
        var quote = new Quote("Voila", 1);
        context.Add(quote);
        context.SaveChanges();
    }
}
And it worked! But this isn’t complete yet. I’m breaking my rule of ensuring that only my aggregate root can manage quotes. So this is “dangerous” code from my DDD perspective. However, I was happy to know that EF Core would support this capability.

Currently, Samurai.AddQuote does not have any additional logic to be performed on the quote. What if I were to add in a “RemoveBadWords” rule before a quote can get added?
public void AddQuote(string quoteText)
{
    quoteText = Utilities.RemoveBadWords(quoteText);
    var newQuote = new Quote(quoteText, Id);
    _quotes.Add(newQuote);
}
Now I have an important reason to use Samurai to do the deed. I can add a second, static AddQuote method that also takes an int. Because it’s static, it’s a pass-through method.
public static Quote AddQuote(string quoteText, int samuraiId)
{
    quoteText = Utilities.RemoveBadWords(quoteText);
    var newQuote = new Quote(quoteText, samuraiId);
    return newQuote;
}

This works and now I don’t have to have an instance of Samurai to use it:

static void AddQuoteToSamurai()
{
    using (var context = new SamuraiContext())
    {
        context.Add(Samurai.AddQuote("static voila", 1));
        context.SaveChanges();
    }
}

One thing I was worried about was if I had an instance of Samurai and tried to use this to add a quote to a different samurai. That would break the aggregate root… its job is to manage its own quotes only. It shouldn’t know about other Samurais.

But .NET protects me from that. I can’t call the static method from an instance of Samurai.

I still think that there’s a little bit of code smell from a DDD perspective about having this static, pass-through method in an aggregate root, so I will have to investigate that (or wait for any unhappy DDDers in my comments). But for now I am happy that I can avoid having to query for an instance of Samurai just to do this one task.

First Foray into .NET Core 2.0

I have to start somewhere so I started with super baby steps: downloading the .NET Core 2 nightly build and trying to create a simple console app. Right away I failed miserably. The reason? The nightly builds have already jumped the shark! They are working on 2.1.0 and I accidentally grabbed that. And that was a little too bleeding edge for me.

There is a docker image that you can use quite easily but I wanted to try CLI, VS Code and Visual Studio. Therefore I wanted to just install the bits right on my machine.

What I’ll show you are my first tests on macOS and on Windows 10. I like to do this just to make sure things are actually running properly. Note that on macOS, I just installed this new version directly on my machine where I have other versions of .NET Core. Key is that there is no production work on there dependent on other versions, so I won’t mess up anything important. The versions can live side by side. It’s just that for someone like me who “knows enough to be dangerous”, it’s easy to get tangled up with the versions even though I know tricks like creating a local nuget.config file and also specifying a version in global.json. On Windows, however, I am using a clean VM that has no other versions of .NET on it. That’s the smart way anyway. Although I did try the not-smart way of installing it on my machine that already has all kinds of versions of all kinds of frameworks on it, and even with my versioning tricks, I could not get 2.0.0 to do a restore or build.

I mentioned above that I originally downloaded the wrong version of the SDK. (Note that the typical SDK install also includes the runtime, so I got the wrong versions of both. You can read the gory details of that in this GitHub issue, which I kept updating as I sorted the problem out.) I want to stick with .NET Core 2.0.0. The installer for that is tucked away in a branch of github.com/dotnet/cli, so rather than grabbing the absolute latest from master, go to https://github.com/dotnet/cli/tree/release/2.0.0. There is a solid 2.0.0 version — 2.0.0-preview2-006391. That’s the one I’m using on Windows and macOS.


Make Sure NuGet Knows Where to Find Packages

You can update your global NuGet.Config before creating projects. Alternatively, you can create a project and then add a local NuGet.config file to it.

On Mac, the global config file is at [user]/.nuget/NuGet. Here’s a screenshot if, like me, macOS is not your primary platform and you still struggle with these things.


In Windows, it’s at %APPDATA%\NuGet\.

I keep a link to the “Configuring NuGet behavior” doc handy for when I forget where to find that.

I added these two keys to my PackageSources section:

<add key="nuget.org" value="https://api.nuget.org/v3/index.json" protocolVersion="3" />

<add key="dotnet-core" value="https://dotnet.myget.org/F/dotnet-core/api/v3/index.json"/>

 Now it’s time to create a project

I’m on the Mac, so, to Terminal we will go.

I’ve created a new folder called EFCore20 – I know this is .NET Core, but eventually I plan to add in EF Core 2.0 as well.

My plan is to create a .NET Core console app (this will rely on netcoreapp2.0) and a library. The library will be based on netstandard2.0. That means it’s a library I’ll be able to use from a huge variety of apps, including .NET 4.6.1-based apps. EF Core 2.0 will also rely on .NET Standard 2.0, so any app or API that can consume netstandard can also consume EF Core 2.0. Read more about that in this EF Core 2.0 announcement on GitHub.

I created one folder for each inside the EFCore20 folder:

netcore2console
netstandard2lib

Then I cd’d into the netcore2console app to create that app with the command:

dotnet new console

That’s all it takes. This creates a tiny little Hello World app. The dotnet new command will also perform a dotnet restore after creating the files. This is the first moment you will see if things are working. If the restore fails, it will tell you immediately.

This is the message I got when I had things misaligned:

Error message: error MSB4236: The SDK ‘Microsoft.NET.Sdk’ specified could not be found.

The “specified” SDK is the currently installed version of whatever SDK you are referencing. In my case, the default for dotnet new console is “dotnetcore” with the version being whatever version is installed.

Most likely the problem is that you haven’t provided the correct URI for dotnet to find the proper NuGet packages.

If you don’t want to mess with the global config, you can create a local NuGet.config file in the folder with the app. In its entirety, it would look like:
<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <packageSources>
    <add key="nuget.org" value="https://api.nuget.org/v3/index.json" protocolVersion="3" />
    <add key="dotnet-core" value="https://dotnet.myget.org/F/dotnet-core/api/v3/index.json" />
  </packageSources>
</configuration>
Even though dotnet new will call restore, I like to call
dotnet restore
explicitly. (Control freak)
If it restored correctly, then the next step for validation is
dotnet build
And hopefully that gives you no errors as well. It shouldn’t.
Finally,
dotnet run
should result in spitting out
Hello World!
And now I know it’s working. Silly little bit but I want to verify this before I waste my time writing a bunch of code that isn’t going to run because I don’t even have .NET Core installed properly.
Another point of interest is what files were created by dotnet new console.
It’s easier to see them if I open them up in Visual Studio Code which I can do by typing
code .
There are only two files: the csproj with the project metadata and a program file. The csproj file says that the target framework is netcoreapp2.0.


The program file just spits out Hello World when it starts up.
Here is a screenshot of all of the steps at the command line from dotnet new console to the Hello World! output including my extra dotnet restore. Another thing I like about the explicit restore is that it’s showing me more detailed info about the restore.
Next I want to make sure I can get a .NET Standard library also working.
Still in the terminal (or console or PS if you’re on Windows), I now cd into the second subfolder, EFCore20/netstandard2lib. dotnet new library defaults to using netstandard for the library, so that makes it easy.
dotnet new library
The library is created and the packages it needs are restored.
Here are the default contents of this new library, again, shown in VS Code.
Notice that its csproj says the target framework is netstandard2.0.
And there’s an empty class file.
I’ll modify the class to add a public method that returns a string: “Why, Hello There!”.
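
Roughly, the modified library class looks like this (Class1 is the default name generated by dotnet new library; HiYa is the method I call from the console app below):

namespace netstandard2lib
{
    public class Class1
    {
        public string HiYa()
        {
            return "Why, Hello There!";
        }
    }
}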

Now I’ll go into the csproj for the console app and give it a project reference to the library. The IntelliSense does not (yet?) help me with this and my memory sucks*, so I head to Nate McMaster’s handy project.json-to-csproj mind mapper, aka the Project.json to MSBuild conversion guide, to remind myself of the syntax of a project reference. I add this below the property group section in the console app’s csproj file:

 

<ItemGroup>
  <ProjectReference Include="..\netstandard2lib\netstandard2lib.csproj"/>
</ItemGroup>
Running dotnet restore and dotnet build again will help identify if you’ve got a typo in there. I often do.
Now I’ll modify the Program.cs file to also output the result of calling the HiYa method from my library:
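
Roughly, the updated Program.cs ends up like this (the namespaces match the folder names I used above):

using System;
using netstandard2lib;

namespace netcore2console
{
    class Program
    {
        static void Main(string[] args)
        {
            Console.WriteLine("Hello World!");
            // Call the HiYa method from the .NET Standard library
            var lib = new Class1();
            Console.WriteLine(lib.HiYa());
        }
    }
}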

And then back out to my terminal window (though I can also do it inside VS Code’s built-in terminal). I dotnet build the library and then change to the console folder. Rebuild that and run it and …voila

  netcore2console dotnet run

Hello World!

Why, Hello There!

  netcore2console

So now I know that .NET Core 2 (preview from a nightly build) is running properly on my machine and that I can create and use a .NET Standard 2.0 library. That means I can go ahead and confidently start creating a .NET Standard 2.0 library to host some EF Core 2.0 logic.

I also repeated this entire thing on my Windows machine. Not only does that let me know I can use this version of .NET Core there, but I will also do some work from within Visual Studio with the .NET Core 2.0 preview.

*(duh, I should just make myself a snippet in VS Code!)

What’s in that sqlservr.sh file on the mssql-sqlserver-linux docker image anyway?

Update June 3, 2017: The team has revised the docker image and the bash file is gone, presumably with its logic broken up in to various locations. Still I’m glad I grabbed this when I did to satisfy my curiosity!

Microsoft has created 4 official Docker images for SQL Server: SQL Server for Linux, SQL Server Developer Edition, SQL Server Express and (Windows) SQL Server vNext. They can be found on Docker Hub (e.g. https://hub.docker.com/r/microsoft/mssql-server-linux/) and there is also a GitHub repository for them at github.com/Microsoft/mssql-docker. Some of the files that go along with that image are not on GitHub. The Dockerfile files for each image run some type of startup script. The Windows images have a PowerShell script called start.ps1. You can see those in the GitHub repo. The Linux image runs a bash file called sqlservr.sh. That’s not included in the repo though, and I was curious what it did.

Note: I wrote a blog post about using the SQL Server for Linux container (Mashup: SQL Server on Linux in Docker on a Mac with Visual Studio Code) and I’m also writing an article about using the containers for my July MSDN Magazine Data Points column (watch this space).

Still a bit of a bash noob, I learned how to read a file from a docker container on … you guessed it … Stack Overflow. Following those instructions, I created a snapshot of my running container:

MySqlServerLinuImage git:(master) docker commit juliesqllinux  mysnapshot

sha256:9b552a1e24df7652af0c6c265ae5e2d7cb7832586c431d4b480c30663ab713f0

and ran the snapshot with bash:

  MySqlServerLinuImage git:(master) docker run -t -i mysnapshot bin/bash

root@<container id>:/#

Then at the new prompt (#), used ls to get the listing

root@<container id>:/# ls

SqlCmdScript.sql  SqlCmdStartup.sh  bin  boot  dev  entrypoint.sh  etc  home  install.sh  lib  lib64  media  mnt  opt  proc  root  run  sbin  srv  sys  tmp  usr  var

then navigated to the folder where the bash file is and listed its contents:

root@<container id>:/opt/mssql/bin# ls

compress-dump.sh  generate-core.sh  mssql-conf  paldumper  sqlpackage  sqlservr  sqlservr.sh

Once I was there I used the cat command to list out the contents of the sqlservr.sh file and see what it does. Here is the secret sauce in case, like me, you NEED to know what’s going on under the covers!

root@<container id>:/opt/mssql/bin# cat sqlservr.sh

#!/bin/bash
#
# Microsoft(R) SQL Server(R) launch script for Docker
#

ACCEPT_EULA=${ACCEPT_EULA:-}
SA_PASSWORD=${SA_PASSWORD:-}
#COLLATION=${COLLATION:-SQL_Latin1_General_CP1_CI_AS}
have_sa_password=""
#have_collation=""
sqlservr_setup_prefix=""
configure=""
reconfigure=""

# Check system memory
#
let system_memory="$(awk '/MemTotal/ {print $2}' /proc/meminfo) / 1024"
if [ $system_memory -lt 3250 ]; then
    echo "ERROR: This machine must have at least 3.25 gigabytes of memory to install Microsoft(R) SQL Server(R)."
    exit 1
fi

# Create system directories
#
mkdir -p /var/opt/mssql/data
mkdir -p /var/opt/mssql/etc
mkdir -p /var/opt/mssql/log

# Check the EULA
#
if [ "$ACCEPT_EULA" != "Y" ] && [ "$ACCEPT_EULA" != "y" ]; then
    echo "ERROR: You must accept the End User License Agreement before this container" > /dev/stderr
    echo "can start. The End User License Agreement can be found at " > /dev/stderr
    echo "http://go.microsoft.com/fwlink/?LinkId=746388." > /dev/stderr
    echo ""
    echo "Set the environment variable ACCEPT_EULA to 'Y' if you accept the agreement." > /dev/stderr
    exit 1
fi

# Configure SQL engine
#
if [ ! -f /var/opt/mssql/data/master.mdf ]; then
    configure=1
    if [ ! -z "$SA_PASSWORD" ] || [ -f /var/opt/mssql/etc/sa_password ]; then
        have_sa_password=1
    fi
#   if [ ! -z "$COLLATION" ] || [ -f /var/opt/mssql/etc/collation ]; then
#       have_collation=1
#   fi
    if [ -z "$have_sa_password" ]; then
        echo "ERROR: The system administrator password is not configured. You can set the" > /dev/stderr
        echo "password via environment variable (SA_PASSWORD) or configuration file" > /dev/stderr
        echo "(/var/opt/mssql/etc/sa_password)." > /dev/stderr
        exit 1
    fi
fi

# If user wants to reconfigure, set reconfigure flag
#
if [ -f /var/opt/mssql/etc/reconfigure ]; then
    reconfigure=1
fi

# If we need to configure or reconfigure, run through configuration
# logic
#
if [ "$configure" == "1" ] || [ "$reconfigure" == "1" ]; then
    sqlservr_setup_options=""
#   if [ -f /var/opt/mssql/etc/collation ]; then
#       sqlservr_setup_options+="-q $(cat /var/opt/mssql/etc/collation)"
#   else
#       if [ ! -z "$COLLATION" ]; then
#           sqlservr_setup_options+="-q $COLLATION "
#       fi
#   fi
    set +e
    cd /var/opt/mssql
    echo 'Configuring Microsoft(R) SQL Server(R)...'
    if [ -f /var/opt/mssql/etc/sa_password ]; then
        SQLSERVR_SA_PASSWORD_FILE=/var/opt/mssql/etc/sa_password /opt/mssql/bin/sqlservr --setup $sqlservr_setup_options 2>&1 > /var/opt/mssql/log/setup-$(date +%Y%m%d-%H%M%S).log
    elif [ ! -z "$SA_PASSWORD" ]; then
        SQLSERVR_SA_PASSWORD_FILE=<(echo -n "$SA_PASSWORD") /opt/mssql/bin/sqlservr --setup $sqlservr_setup_options 2>&1 > /var/opt/mssql/log/setup-$(date +%Y%m%d-%H%M%S).log
    else
        if [ ! -z '$sqlservr_setup_options' ]; then
            /opt/mssql/bin/sqlservr --setup $sqlservr_setup_options 2>&1 > /var/opt/mssql/log/setup-$(date +%Y%m%d-%H%M%S).log
        fi
    fi
    retcode=$?
    if [ $retcode != 0 ]; then
        echo "Microsoft(R) SQL Server(R) setup failed with error code $retcode. Please check the setup log in /var/opt/mssql/log for more information." > /dev/stderr
        exit 1
    fi
    set -e
    rm -f /var/opt/mssql/etc/reconfigure
    rm -f /var/opt/mssql/etc/sa_password
    echo "Configuration complete."
fi

# Start SQL Server
#
exec /opt/mssql/bin/sqlservr $*

Mashup: SQL Server on Linux in Docker on a Mac with Visual Studio Code

I’ve been having a lot of fun with the new mssql extension for Visual Studio Code. I have an article coming in MSDN Magazine and am planning more fun as well. My latest experiment was doing a big mashup taking advantage of the fact that there is now a Linux version of SQL Server, so we are no longer limited to hosting it on Windows or Azure. The most lightweight way to host SQL Server on Linux is in a Docker container. While I am sitting in front of a MacBook typing this, I’m by no means working towards abandoning my Windows development or Windows machines. I’m just happy to have more options at my disposal and to be able to share what I’m learning beyond the world of Windows developers.

Containers are not stateful. There are ways around that (I’ll show you below) but I only know enough to be dangerous here. This is a great way to use SQL Server at design time. Using this for production is a totally different story and you need to do a lot more research and soul-searching before using that option.

On the other hand, there are those who do have that particular goal.

I had to go through a number of documents to do this and of course I got stuck even with those resources at my disposal. So I will share the full path of how I got this setup working.

Pre-Requisite

I already have Docker for Mac installed on my MacBook. Here is the installation link if you need to perform that step. Keep in mind that you can’t do this on a VM. I tried as I wanted to repeat this with a clean setup.

Be sure that Docker is set to use at least 4GB of memory.


Getting the SQL Server Docker Image

This is what makes the whole thing so easy! Microsoft has created an official docker image with SQL Server for Linux already on it.

In the terminal window, you can pull and install the official image with

 sudo docker pull microsoft/mssql-server-linux

Once it’s installed, the ‘docker images’ command will show you that the image is now available on your machine.


Although I just installed it today, you can see that the image I’m using — which is the latest version — was created by Microsoft 3 weeks ago.

Spinning Up a Container (or Two) From the Image

Now that Docker is aware of the image, you can create a container from it — which is a running instance of the image.

Because we’re on a Mac and waiting for a “bug” to get fixed, we will actually create two containers.

Depending on your familiarity with Docker, you may or may not be aware that containers are not stateful. Once you delete a container, it’s all gone! If you have persisted data in that container, it, too, is all gone. However, Docker has a feature called “Volumes” which are a way to retain state between docker instances. So when one instance is shut down, the state is stored in a Volume. When another container instance is spun up, that volume provides the container with the state from the previous instance. This is how it’s possible to use containers for databases.

Here’s a great tutorial on volumes: https://rominirani.com/docker-tutorial-series-part-7-data-volumes-93073a1b5b72
And the official docker doc:  https://docs.docker.com/engine/tutorials/dockervolumes/#/creating-and-mounting-a-data-volume-container

However there’s an issue (which looks like the resolution is around the corner) with Docker on Mac hosting the sql-server-linux image. This prevents us from using a volume for persistence in the simple way. So instead, we’ll create a separate container that is a “data volume container”, then we will point the container that will run SQL Server to the data volume container.

Creating the volume container

I’ll name mine mssqldata. Here’s the command to create it. (Don’t miss the full length of the command!)

docker create -v /var/opt/mssql --name mssqldata  microsoft/mssql-server-linux /bin/true

This volume container still uses the image as its base. But we won’t be running SQL Server from this instance.

Creating the SQL Server container

Now you can create a container where you will run SQL Server, and that container will use the data volume container for the persisted data.

docker run -e 'ACCEPT_EULA=Y' -e 'SA_PASSWORD=Passw0rd' -p 1433:1433 --volumes-from mssqldata -d --name sql-server microsoft/mssql-server-linux

The two environment variables (accept_eula and sa_password) are required. The userid is (gulp) ‘sa’. The password requirements are: “At least 8 characters including uppercase, lowercase letters, base-10 digits and/or non-alphanumeric symbols.”. Mine’s really fancy!

Once these exist,

docker ps

will only show you the regular container. The volume container is hidden so you need

docker ps -a

to see it.

Notice that the container I will run SQL Server on is exposed on a port whereas the data volume container has a different status, “Created”, and is not exposed on a port.

Test the Connection From the SQL Command Line

You don’t have to do this but it made me happy and it was fun. Did you know that you can interact with SQL Server from the command line? The tools are installed inside the container but that’s a messy way to use them. It’s easier to just install them on your computer directly. The command line tool for Mac is sql-cli.

Note: On May 16th, Microsoft released the macOS version of sqlcmd and bcp (bulk copy). Good to know if you are already familiar with sqlcmd. Here’s their blog post with more details.

You can install that with:

npm install -g sql-cli

Start it up with the mssql command.

The minimum you need to connect is to identify the server (which is at localhost by default; you don’t need to specify the port) and the password. It will presume sa for the user id.

mssql -s localhost -p Passw0rd

If it’s successful, you’ll get some information followed by a new prompt, mssql.

Connecting to localhost...done

sql-cli version 0.4.14
Enter ".help" for usage hints.
mssql> 

Enough of that. Now I get to use my new favorite tool: the mssql extension in Visual Studio Code.

Connect in Visual Studio Code

Once you have mssql installed in VS Code, you can begin by creating a new SQL file, either via the commands (F1 for the command palette, type MS SQL to see the commands and then choose New Query). This will prompt you for a connection, one parameter at a time. You can also start with the MS SQL Connect command (⇧⌘C). The extension will prompt you for each parameter.

Server name:  localhost
Database name: (enter)
User name: sa
Password: Passw0rd
Save Password (yes or no)
Profile name: [your choice]

If you enter everything correctly, not only will it connect, but the details will get stored as a connection profile in the VS Code settings and be available for subsequent connections.

You can see the connection status as it is connecting and then when connected in the lower right hand corner of the IDE.


In the SQL file open in the editor, you can type SQL and see some existing snippets as well as get help from IntelliSense, which has read the schema of the data on the server. So far there’s not much.

Selecting the sqlListDatabases snippet and then executing it (right click for Execute Query on the context menu or ⇧⌘E) displays the databases:


Now you can use TSQL to create databases, database objects, and query data. In the results pane you will see data as a grid similar to what you might see in SSMS. You can also export results to CSV or JSON. I’ve recently written an article about all of the cool things you can do with mssql which will be in the June 2017 MSDN Magazine. But that connects to a SQL Azure database. In the meantime you can just go to the docs for the extension (aka.ms/mssql-marketplace).

Creating a Database and a Table

I let some more snippets help me to create a database and a table.

The first was the sqlCreateDatabase snippet where I changed the snippet’s database name placeholder to create a new database called LinuxReally, then executed that with ⇧⌘E.

Re-running the select name from sys.databases command showed that the new database was now in the list.

Next I leveraged the sqlCreateTable snippet to help me create a new table. I named the table DatabasesIKnow and gave it three columns:

CREATE TABLE dbo.DatabasesIKnow
(
  Id INT NOT NULL PRIMARY KEY, -- primary key column
  DatabaseName [NVARCHAR](50) NOT NULL,
  KnowIt [Bit]
);

For some reason, the IntelliSense cache did not automatically refresh when I created this. Possibly because it was a new database. Even the mssql extension’s Refresh Intellisense Cache command did not kick it in. I got it working by disconnecting and reconnecting, this time choosing the LinuxReally database rather than letting the extension connect to master by default. When I did that, I could see the message “Updating Intellisense…” in the status bar. After I had done this, the IntelliSense did auto refresh any time I modified the database schema.

Once I had the new database, I could execute “select * from dbo.DatabasesIKnow” and see the proper results. In my case, since I hadn’t added data, there were no rows. But clearly it was reading from my table.


Taking Down the sql-server Container

Now come the big docker volume tests. I first disconnected from the database inside of VS Code with ⇧⌘E. Then I stopped and removed the sql-server container with the two commands:

docker stop sql-server
docker rm sql-server

But I left the data volume container (mssqldata) running.

I then created a new container instance using the same command as earlier:

docker run -e 'ACCEPT_EULA=Y' -e 'SA_PASSWORD=Passw0rd' -p 1433:1433 --volumes-from mssqldata -d --name sql-server microsoft/mssql-server-linux

In VS Code, I then reconnect to the server and my new database, which was easy since it had been stored in the connection profiles. It’s the first one, localhost: LinuxReally.


The connection was successful and listing the databases showed my new LinuxReally database. Select * from the DatabasesIKnow table showed the schema. So my database was persisted in the data volume container even though I had killed and recreated the sql-server container.

Next test: take down both containers.

Now it was time to see what happens when I stop and remove both containers!

docker stop sql-server
docker rm sql-server 
docker stop mssqldata
docker rm mssqldata

Next I used the same command I used previously to restart the sql-server container with the parameter to use the (not running) mssqldata volume container.

docker run -e 'ACCEPT_EULA=Y' -e 'SA_PASSWORD=Passw0rd' -p 1433:1433 --volumes-from mssqldata -d --name sql-server microsoft/mssql-server-linux

That fails. I need to first run the data volume container. Unfortunately, I re-ran the create command for that and overwrote the existing container, so all was lost. GOOD LESSON THERE! 🙂

So I started from scratch. In the brand new data volume container I recreated my database and table.

After more experiments, I realized that I had misunderstood the Docker documentation on volumes. You can create copies of data volume containers and remove those. But you need to be more of a docker ninja. While you can make all kinds of copies of the container, once you RM all of the related volumes, it is gone gone. This blog post does clarify some of the info I was still confused about wrt creating/backing up/restoring data volume containers: tricksofthetrades.net/2016/03/14/docker-data-volumes/.

So the bottom line is you need to leave the Data Volume Container running in some format (the original or some flavor of copy). (I’m still finding it hard to believe that it doesn’t somehow get stored as a file you can re-run so I will update this as soon as someone corrects me!)

Windows, Mac, Linux, Azure and Anywhere You Can Host a Docker Container

And if you think this is only something to do on OS X (because that’s where I’m doing it) .. no no no! Did you know there is now Docker for Windows? And that VS Code is cross platform? This is such a great way to quickly get SQL Server up and running in your development environment.


And as I said on Twitter, comparing the experience of pulling the docker image and spinning up a container to the experience of installing SQL Server on Windows is something like this:

Quick Start EF Core Videos on Channel 9

My Pluralsight course, Entity Framework Core: Getting Started, is a pretty thorough exploration for learning about EF Core. But at 5 hours, it may not be the VERY first thing you want to look at. So I’ve created some 12 minute videos on Channel 9 that will, in that very short time, walk you through building some tiny apps that use EF Core so you can get your hands dirty. While you won’t learn much in the way of ins & outs, they will walk you through:

  • creating a new app from scratch,
  • adding in a domain class,
  • adding in EF Core,
  • creating an EF Core data model,
  • creating a database from the model,
  • creating and saving some data,
  • reading, updating and modifying some data.

That’s a lot for 12 minutes, but I have done it in a way that you can just watch and learn, or follow along to build the apps yourself . For some of the code that is not related to the EF Core lesson, you can even copy that code from my GitHub account and paste it into your solution.

There are currently 2 videos.

EF Core in a Full .NET Application Using Visual Studio 2015
This one is designed to assure you that you can use EF Core in a full .NET application and using totally familiar tools. You’ll build a little .NET console app that uses EF Core to read and write some movie data.
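
To give you a feel for the kind of code the video builds, here’s a minimal sketch. The Movie and MovieContext names, the LocalDB connection string and the EnsureCreated call are just for illustration; the video walks through the real steps.

using System;
using System.Linq;
using Microsoft.EntityFrameworkCore;

public class Movie
{
    public int Id { get; set; }
    public string Title { get; set; }
}

public class MovieContext : DbContext
{
    public DbSet<Movie> Movies { get; set; }

    protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
        => optionsBuilder.UseSqlServer(
            "Server=(localdb)\\mssqllocaldb;Database=MoviesDemo;Trusted_Connection=True;");
}

class Program
{
    static void Main()
    {
        using (var context = new MovieContext())
        {
            // Create the database on first run
            context.Database.EnsureCreated();

            context.Movies.Add(new Movie { Title = "Seven Samurai" });
            context.SaveChanges();

            foreach (var movie in context.Movies.ToList())
            {
                Console.WriteLine(movie.Title);
            }
        }
    }
}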


EF Core in an ASP.NET Core Application Using Visual Studio 2017
This video will give you the full leading edge .NET Core experience as you use the brand new Visual Studio 2017 to create a tiny ASP.NET Core MVC App that uses EF Core to interact with the data. Again, some of the code that is not about EF Core is available to copy/paste from my GitHub account, making it easy to follow along with the demo.


I’ll be adding another one showing Creating an ASP.NET Core App with EF Core Using Visual Studio Code in OS X. You can keep an eye on my Channel 9 Niners page or this blog to see when that gets published.

When you’re ready to really learn about EF Core, head over to Pluralsight for my Entity Framework Core: Getting Started course. If you don’t have a subscription, contact me for a 30-day free trial code.


Changes to EF Core With the RTM of VS2017 and Tools

When Visual Studio 2017 released today a few other things happened that are relevant to Entity Framework Core.

For more on EF Core, watch my EF Core: Getting Started course on Pluralsight.

EF Core Migrations Tools Release

First – something we were prepared for – the .NET Core SDK was also released. The last stable version was 1.0.0-preview2-1-003137; it’s now simply 1.0.0. Along with this, its dependent tooling, including the EF Core Tools for PowerShell and dotnet, was also released. As the .NET Core support evolved from project.json to MSBuild, the EF Core tools split. We have been using 1.0.0-preview4 (for .NET and project.json) and 1.0.0-msbuild3 for msbuild/csproj support.

Now the tool packages are 1.1.0 (Tools) and 1.0.0 (Tools.DotNet)

For PowerShell support: Microsoft.EntityFrameworkCore.Tools 1.1.0
For dotnet CLI support: Microsoft.EntityFrameworkCore.Tools.DotNet 1.0.0

In Visual Studio 2015 (for full .NET projects) and Visual Studio 2017 (shown here, for full .NET or .NET Core projects), the Package Manager will show the RTM versions:


Notice that I do not have “Include prerelease” checked.

If using PMC to install, it’s just

install-package Microsoft.EntityFrameworkCore.Tools

That’s for the PowerShell tools, otherwise, add .DotNet to the name.

But notice that you no longer need to add the -pre flag.

When using the CLI version of the tools, the command

dotnet ef --version

results in

Entity Framework Core .NET Command Line Tools 1.0.0-rtm-10308

Changes to Migrations Commands

As the tools evolved through the previews, some details changed; for example, the scaffolding command got smarter.

But one change that is notable is with respect to EF Core in class libraries. You still need to point to an executable project (exe or test) to run most of the commands, but now you can at least just use “dotnet ef” to get the help file without having to set the --startup-project parameter. There are a few other commands that will run without knowledge of the startup project. You can read more about this in this GitHub thread. Check some of the later comments by Brice Lambson as he worked on evolving the commands.

EF Core 1.1.1 – Patch

This was a more subtle part of the release. Even though the 1.1.1 milestone on GitHub had 30 bug fixes that are all closed, there hadn’t been any mention that this was going to get pushed out and the milestone had no target date on it. Though I had my suspicions! Here’s a screenshot I happened to take on March 5.


And yes, the newest version of the EF Core packages is now 1.1.1. These are bug fixes, as the increment suggests. Most of them are edge cases, but regardless, you should definitely update your EF Core packages to ensure you have these latest fixes. If you’re creating new projects, 1.1.1 is what you’ll see available from NuGet.

Note: there was a regression introduced in EF Core 1.1.1 that is targeted to get fixed with the next patch. You can read about this issue here: http://stackoverflow.com/questions/42708522/loading-related-data-aspnet-core-1-1

You can learn much more about EF Core in my EF Core: Getting Started course on Pluralsight.