
Copying an Azure Function App using the Portal, GitHub and Cloud Shell

I’ve got an Azure Function App with some functions that I built for a demo for a conference session. I first used this at AngularMix and had named the function app AngularMix2017. Then when I did it for DevIntersection this past week, I recreated it as DevIntersection2017 because you can’t rename a function app. I’m doing it yet again next week at another conference (NetConf.co) and decided it was time to create it as a reusable name: DataApiDemo.

While you can’t flat out copy a function app if you are working strictly in the portal (which is how I’ve been working during my first learning stages of building function apps), you can easily enough duplicate it by downloading and uploading it again.

Keep in mind that if you are developing functions in VS 2017 or VS Code (combined with the Azure CLI), you can just use the tooling to publish the function apps along with the settings. But I don’t like to take the easy path. I learned a lot of new tricks and tools by going this route. Even if I leverage the tooling on my next go-round, what I’ve learned gives me a better overall understanding, more control and sharper troubleshooting skills.

My first step was to create the new function app (dataapidemo).

The Overview blade of an Azure function app has a “download app content” option. That will pull down all of the relevant files onto your machine. So next, I go to the overview of the DevIntersection2017 function app and click on the download app content link.

When downloading the app content, you have an option to also grab the app settings. I have a number of secrets stored in my settings: things like keys to my databases, account ids and more. If you check that box, the settings will get downloaded to a local settings file. If you were using this feature to continue your development locally, e.g. in VS2017, you’d want them. But for my scenario they are not useful, and even pose a potential security issue (you’ll see shortly), so I’m going to leave that unchecked.

I can see the files in the designated folder on my computer.

I’ve already created a GitHub repository to store these files in: https://github.com/julielerman/DataApiAzureFunctions.

At the command line in the new folder, I’ll git init, add all of the files into the local repo and commit them. Then I can run the git commands to connect that folder to my GitHub repo and then push my local repository into the online repository.
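If you don’t live in git every day, that sequence looks something like this (using the repo URL above; adjust the branch name if your default isn’t master):

git init
git add .
git commit -m "Add function app content"
git remote add origin https://github.com/julielerman/DataApiAzureFunctions.git
git push -u origin master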

When that’s complete, the files are all in my GitHub repo.

Because I’m using a public repository, I didn’t want the local.settings.json file filled with my secrets to be downloaded. Of course, I could have deleted it locally or told Git to ignore that file. One more point about this: if I were developing locally, I could keep that file and use VS2017 or the Azure CLI tools to publish my app along with the settings to the portal. But that is not my goal here.

Now I can take advantage of a long-time feature of the Azure Portal: connecting it to a repository to auto-deploy from the repository to my app (in this case, to my function app).

Back in the portal, I select the new function app again, then from its Platform Features page, open the Deployment Options. Choose Setup new deployment and follow the steps to connect the function app to your repository. When you’ve completed the setup, you can use the Sync button to force the deployment to happen. All of the files are now in my new dataapidemo function, including the function.json files that contain the function settings (e.g. integrations) and any other files I may have added to my functions.

Because I am using Twilio, one of my functions has a reference to Twilio in a project.json file, and I need to restore that package to this function app. I did that by opening the project.json and making a small mod to it: adding or removing a blank line, then saving it. That triggers the package restore.

What’s not there, however, are the function app settings. These are settings that are defined on the app and used by one or more of the functions. There are some default settings created by Azure, but I also have others: those secrets, such as the account and telephone number for my Twilio integration, as well as a key for the function.

Some of these settings are my own custom settings that I want in the new function. I could add them in via the portal app settings interface, but I don’t like that path when I have a slew of settings.

That means moving on to yet another amazing tool in the Azure Portal: Cloud Shell. Here you can run Bash or PowerShell commands from the Azure CLI directly against services in your subscription. You can also do this locally on your machine, with the extra caveat that there you must provide credentials and other metadata.

In the portal, open the Cloud Shell. This gives you an interactive terminal window.

If you have multiple subscriptions, you want to be sure that it’s set to the one where your app function is stored.

You can do this by listing the subscriptions with the az account list command.

If the one you want is not default, then set it with:

az account set -s [ID]

This shell can do auto-complete so if the id starts with a123, you can type

az account set -s a123

then hit tab and the id will be completed. Hit enter and it becomes the default subscription to work in.

I’ll use the az functionapp commands to get the settings into my function app.

First I’ll list the existing settings. You do this with the command

az functionapp config appsettings list --name [function app name] --resource-group [resource group name]

The parameter shortcuts for name and resource-group are -n and -g. Mine are both the same name: dataapidemo. Here’s what the command looks like (except for the wrapping that my blog is forcing):

az functionapp config appsettings list -n dataapidemo -g dataapidemo

The appsettings are listed out as JSON by default although you can affect the output format with the output parameter.

I’ll need to see the settings I want to copy from the other function app so I run:

az functionapp config appsettings list -n devintersection2017 -g devintersection2017 -o tsv

which gives me the list of settings from which to choose.

In addition to listing the settings, you can add new ones with the set verb and delete them with the delete verb. Either goes in place of the verb list.

You use set by combining the set verb with the --settings parameter. After --settings, you can list one or more settings in this format:

az functionapp config appsettings set -n dataapidemo -g dataapidemo --settings "property1name=value", "property2name=value"

Note that this is wrapping here on the blog post but not in the shell window.

So I added the handful of settings I wanted to add which looked more like:

az functionapp config appsettings set -n dataapidemo -g dataapidemo --settings "TwilioSID=myvalue", "TwilioAccount=myvalue"

There were others I had to add. For example, my function needs to know how to get to my Cosmos DB document database, so it has a setting that uses the db name as the property name and the dbs account endpoint as the value.
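That one looked something along these lines, though the property name and endpoint here are placeholders rather than my real values:

az functionapp config appsettings set -n dataapidemo -g dataapidemo --settings "MyDocDb=https://mydocdbaccount.documents.azure.com:443/"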

Once I had done that, I was able to get rid of the function apps that were named for each conference and use a common one no matter where I am sharing it.

And boyohboy did I learn a lot of new things!

Another Use Case for DbContext.Add in EFCore (and a DDD win)

If you are like me and design your classes following Domain-Driven Design principles, you may find yourself with code like this for controlling how objects get added to collections in the root entity.

public class Samurai
{
  public Samurai (string name) : this()
  {
    Name = name;
  }

  private Samurai ()
  {
    _quotes = new List<Quote>();
  }

  public int Id { get; private set; }
  public string Name { get; private set; }
  private readonly List<Quote> _quotes;
  public IEnumerable<Quote> Quotes => _quotes.ToList();

  public void AddQuote (string quoteText)
  {
    var newQuote = new Quote(quoteText, Id);
    _quotes.Add(newQuote);
  }
}

I have a fully encapsulated collection of Quotes. The only way to add a new quote is through the AddQuote method. You can’t just call Samurai.Quotes.Add(myquote).

Additionally, because I want to control how developers interact with my API, there is no DbSet for Quotes. You have to do all of your queries and updates via context.Samurais.
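To make that concrete, here’s a trimmed-down sketch of the context (connection setup omitted):

public class SamuraiContext : DbContext
{
  public DbSet<Samurai> Samurais { get; set; }
  // Deliberately no DbSet<Quote>: quotes are reachable only
  // through their Samurai aggregate root.
}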

A big downside to this is that if I have a new quote and I know the ID of the samurai, I have to first query for the samurai and then use the AddQuote. That really bugs me. I just want to create a new quote, push in the Samurai’s ID value and save it. And that requires either raw SQL or a DbSet<Quote>. I don’t like either option. Raw SQL is a hack in this case and DbSet<Quote> will open my API up to potential misuse.

I was thinking about this problem while lying in bed this morning (admit it, that’s the first thing you do when you wake up, too, right?) and had an idea.

In EF Core, we can now add objects directly to the context without going through the DbSet. The context can figure out what DbSet the entity belongs to and apply the right info to the change tracker. I thought this was handy for being able to call

myContext.AddRange(personobjectA, accountobjectB, productObjectC);

I haven’t run into a good use case for leveraging that yet, though.

What occurred to me is that if DbContext.Add is using reflection, maybe EF Core can find a private DbSet.

So I added a private DbSet to my DbContext class:

private DbSet<Quote> Quotes { get; set; }
Then I tried out this code (notice I’m using context.Add, not context.Quotes.Add):
static void AddQuoteToSamurai ()
{
  using (var context = new SamuraiContext ())
  {
    var quote = new Quote("Voila", 1);
    context.Add(quote);
    context.SaveChanges();
  }
}
And it worked! But this isn’t complete yet. I’m breaking my rule of ensuring that only my aggregate root can manage quotes. So this is “dangerous” code from my DDD perspective. However, I was happy to know that EF Core would support this capability.
Currently, Samurai.AddQuote does not have any additional logic to be performed on the quote. What if I were to add in a “RemoveBadWords” rule before a quote can get added?
public void AddQuote (string quoteText)
{
  // assuming RemoveBadWords returns the cleaned-up string
  quoteText = Utilities.RemoveBadWords(quoteText);
  var newQuote = new Quote(quoteText, Id);
  _quotes.Add(newQuote);
}
Now I have an important reason to use Samurai to do the deed. I can add a second, static AddQuote method that also takes an int. Because it’s static, it’s a pass-through method.
public static Quote AddQuote (string quoteText, int samuraiId)
{
  quoteText = Utilities.RemoveBadWords(quoteText);
  var newQuote = new Quote(quoteText, samuraiId);
  return newQuote;
}

This works and now I don’t have to have an instance of Samurai to use it:

static void AddQuoteToSamurai ()
{
  using (var context = new SamuraiContext ()) {
    context.Add(Samurai.AddQuote("static voila", 1));
    context.SaveChanges();
  }
}

One thing I was worried about was if I had an instance of Samurai and tried to use this to add a quote to a different samurai. That would break the aggregate root; its job is to manage its own quotes only. It shouldn’t know about other Samurais.

But .NET protects me from that. I can’t call the static method from an instance of Samurai.

I still think that there’s a little bit of code smell, from a DDD perspective, about having this static, pass-through method in an aggregate root, so I’ll have to investigate that (or wait for any unhappy DDDers in my comments). But for now I am happy that I can avoid having to query for an instance of Samurai just to do this one task.

First Foray into .NET Core 2.0

I have to start somewhere so I started with super baby steps: downloading the .NET Core 2 nightly build and trying to create a simple console app. Right away I failed miserably. The reason? The nightly builds have already jumped ahead! They are working on 2.1.0 and I accidentally grabbed that. And that was a little too bleeding edge for me.

There is a docker image that you can use quite easily but I wanted to try CLI, VS Code and Visual Studio. Therefore I wanted to just install the bits right on my machine.

What I’ll show you are the first tests I did on macOS and on Windows 10. I like to do this just to make sure things are actually running properly. Note that on macOS, I installed this new version directly on my machine, where I have other versions of .NET Core. The key is that there is no production work on there dependent on other versions, so I won’t mess up anything important. The versions can live side by side. It’s just that for someone like me who “knows enough to be dangerous”, it’s easy to get tangled up with the versions, even though I know tricks like creating a local nuget.config file and specifying a version in global.json. On Windows, however, I am using a clean VM that has no other versions of .NET on it. That’s the smart way anyway. I did try the not-smart way of installing it on my machine that already has all kinds of versions of all kinds of frameworks on it, and even with my versioning tricks I could not get 2.0.0 to do a restore or build.

I mentioned above that I originally downloaded the wrong version of the SDK. (Note that the typical SDK install also includes the runtime, so I got the wrong versions of both. You can read the gory details of that in this GitHub issue, which I kept updating as I sorted the problem out.) I want to stick with .NET Core 2.0.0. The installer for that is tucked away in a branch of github.com/dotnet/cli rather than in the absolute latest bits from master: go to https://github.com/dotnet/cli/tree/release/2.0.0. There is a solid 2.0.0 version there, 2.0.0-preview2-006391. That’s the one I’m using on Windows and macOS.


Make Sure NuGet Knows Where to Find Packages

You can update your global NuGet.Config before creating projects. Alternatively, you can create a project and then add a local NuGet.config file to it.

On Mac, the global config file is at [user]/.nuget/NuGet. Here’s a screenshot if, like me, macOS is not your primary platform and you still struggle with these things.

[Screenshot: the global NuGet config location on macOS]

In Windows, it’s at %APPDATA%\NuGet\.

I keep a link to the “Configuring NuGet behavior” doc handy for when I forget where to find that.

I added these two keys to my PackageSources section:

<add key="nuget.org" value="https://api.nuget.org/v3/index.json" protocolVersion="3" />

<add key="dotnet-core" value="https://dotnet.myget.org/F/dotnet-core/api/v3/index.json"/>

 Now it’s time to create a project

I’m on the Mac, so, to Terminal we will go.

I’ve created a new folder called EFCore20 – I know this is .NET Core, but eventually I plan to add in EF Core 2.0 as well.
My plan is to create a .NET Core console app (this will rely on netcoreapp2.0) and a library. The library will be based on netstandard2.0. That means it’s a library I’ll be able to use from a huge variety of apps, including .NET 4.6.1-based apps. EF Core 2.0 will also rely on .NET Standard 2.0, so any app or API that can consume netstandard can also consume EF Core 2.0. Read more about that in this EF Core 2.0 announcement on GitHub.

I created one folder for each inside the EFCore20 folder:

netcore2console
netstandard2lib

Then I cd’d into the netcore2console app to create that app with the command:

dotnet new console

That’s all it takes. This creates a tiny little Hello World app. The dotnet new command will also perform a dotnet restore after creating the files. This is the first moment you will see whether things are working. If the restore fails, it will tell you immediately.
This is the message I got when I had things misaligned:

Error message: error MSB4236: The SDK ‘Microsoft.NET.Sdk’ specified could not be found.

The “specified” SDK is the currently installed version of whatever SDK you are referencing. In my case, the default for dotnet new console is “dotnetcore” with the version being whatever version is installed.

Most likely the problem is that you haven’t provided the correct URI for dotnet to find the proper NuGet packages.
If you don’t want to mess with the global config, you can create a local NuGet.config file in the folder with the app. In its entirety, it would look like:
<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <packageSources>
    <add key="nuget.org" value="https://api.nuget.org/v3/index.json" protocolVersion="3" />
    <add key="dotnet-core" value="https://dotnet.myget.org/F/dotnet-core/api/v3/index.json" />
  </packageSources>
</configuration>
Even though dotnet new will call restore, I like to call
dotnet restore
explicitly. (Control freak)
If it restored correctly, then the next step for validation is
dotnet build
And hopefully that gives you no errors as well. It shouldn’t.
Finally,
dotnet run
should result in spitting out
Hello World!
And now I know it’s working. Silly little bit, but I want to verify this before I waste my time writing a bunch of code that isn’t going to run because I don’t even have .NET Core installed properly.
Another point of interest is what files were created by dotnet new console.
It’s easier to see them if I open them up in Visual Studio Code which I can do by typing
code .
There are only 2 files: the csproj with the project metadata and a program file. The csproj file says that the target framework is netcoreapp2.0.
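The whole csproj is only a few lines. For the 2.0 preview console template it looks roughly like this:

<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <OutputType>Exe</OutputType>
    <TargetFramework>netcoreapp2.0</TargetFramework>
  </PropertyGroup>
</Project>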


The program file just spits out Hello World when it starts up.
Here is a screenshot of all of the steps at the command line from dotnet new console to the Hello World! output including my extra dotnet restore. Another thing I like about the explicit restore is that it’s showing me more detailed info about the restore.
[Screenshot: the full command-line session, from dotnet new console through the Hello World! output]
Next I want to make sure I can get a .NET Standard library also working.
Still in the terminal (or console, or PowerShell if you’re on Windows), I now move into the second subfolder, EFCore20/netstandard2lib. dotnet new library defaults to netstandard for the library, so that makes it easy.
dotnet new library
The library is created and the packages it needs are restored.
Here are the default contents of this new library, again, shown in VS Code.
Notice that its csproj says the target framework is netstandard2.0.
And there’s an empty class file.
I’ll modify the class to add a public method that returns a string: “Why, Hello There!”.
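The modified class ends up looking something like this (Class1 is the template’s default type name; HiYa is the method I’ll call from the console app shortly):

public class Class1
{
  public string HiYa ()
  {
    return "Why, Hello There!";
  }
}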

Now I’ll go into the csproj for the console app and give it a project reference to the library. The intellisense does not (yet?) help me with this and my memory sucks*, so I head to Nate McMaster’s handy project.json-to-csproj mind mapper, aka the Project.json to MSBuild conversion guide, to remind myself of the syntax of a project reference. I add this below the PropertyGroup section in the console app’s csproj file:


<ItemGroup>
  <ProjectReference Include="..\netstandard2lib\netstandard2lib.csproj"/>
</ItemGroup>
Running dotnet restore and dotnet build again will help identify if you’ve got a typo in there. I often do.
Now I’ll modify the Program.cs file to also output the result of calling the HiYa method from my library.
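Something like this, assuming the library’s default namespace matches its project name, netstandard2lib:

using System;
using netstandard2lib;

class Program
{
  static void Main (string[] args)
  {
    Console.WriteLine("Hello World!");
    Console.WriteLine(new Class1().HiYa());
  }
}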

And then back out to my terminal window (though I can also do it inside VS Code’s built-in terminal). I dotnet build the library and then change to the console folder. Rebuild that and run it and …voila

netcore2console> dotnet run
Hello World!
Why, Hello There!
netcore2console>

So now I know that .NET Core 2 (preview from a nightly build) is running properly on my machine and that I can create and use a .NET Standard 2.0 library. That means I can go ahead and confidently start creating a .NET Standard 2.0 library to host some EF Core 2.0 logic.

I also repeated this entire thing on my Windows machine. Not only does that let me know I can use this version of .NET Core there, but I will also do some work from within Visual Studio with the .NET Core 2.0 preview.

*(duh, I should just make myself a snippet in VS Code!)

Mashup: SQL Server on Linux in Docker on a Mac with Visual Studio Code

I’ve been having a lot of fun with the new mssql extension for Visual Studio Code. I have an article coming in MSDN Magazine and am planning more fun as well. My latest experiment was doing a big mashup taking advantage of the fact that there is now a Linux version of SQL Server, so we are no longer limited to hosting it on Windows or Azure. The most lightweight way to host SQL Server on Linux is in a Docker container. While I am sitting in front of a MacBook typing this, I’m by no means working towards abandoning my Windows development or Windows machines. I’m just happy to have more options at my disposal, as well as the ability to share what I am learning beyond the world of Windows developers.

Containers are not stateful. There are ways around that (I’ll show you below) but I only know enough to be dangerous here. This is a great way to use SQL Server at design time. Using this for production is a totally different story and you need to do a lot more research and soul-searching before using that option.

On the other hand, there are those who do have that particular goal.

I had to go through a number of documents to do this and of course I got stuck even with those resources at my disposal. So I will share the full path of how I got this setup working.

Pre-Requisite

I already have Docker for Mac installed on my MacBook. Here is the installation link if you need to perform that step. Keep in mind that you can’t do this in a VM (I tried, as I wanted to repeat this with a clean setup).

Be sure that Docker is set to use at least 4GB of memory.


Getting the SQL Server Docker Image

This is what makes the whole thing so easy! Microsoft has created an official docker image with SQL Server for Linux already on it.

In the terminal window, you can pull and install the official image with

 sudo docker pull microsoft/mssql-server-linux

Once it’s installed, the ‘docker images’ command will show you that the image is now available on your machine.


Although I just installed it today, you can see that the image I’m using — which is the latest version — was created by Microsoft 3 weeks ago.

Spinning Up a Container (or Two) From the Image

Now that Docker is aware of the image, you can create a container from it, which is a running instance of the image.

Because we’re on a Mac and waiting for a “bug” to get fixed, we will actually create two containers.

Depending on your familiarity with Docker, you may or may not be aware that containers are not stateful. Once you delete a container, it’s all gone! If you have persisted data in that container, it, too, is all gone. However, Docker has a feature called “Volumes” which are a way to retain state between docker instances. So when one instance is shut down, the state is stored in a Volume. When another container instance is spun up, that volume provides the container with the state from the previous instance. This is how it’s possible to use containers for databases.

Here’s a great tutorial on volumes: https://rominirani.com/docker-tutorial-series-part-7-data-volumes-93073a1b5b72
And the official docker doc:  https://docs.docker.com/engine/tutorials/dockervolumes/#/creating-and-mounting-a-data-volume-container

However there’s an issue (which looks like the resolution is around the corner) with Docker on Mac hosting the sql-server-linux image. This prevents us from using a volume for persistence in the simple way. So instead, we’ll create a separate container that is a “data volume container”, then we will point the container that will run SQL Server to the data volume container.

Creating the volume container

I’ll name mine mssqldata. Here’s the command to create it. (Don’t miss the full length of the command!)

docker create -v /var/opt/mssql --name mssqldata  microsoft/mssql-server-linux /bin/true

This volume container still uses the image as its base. But we won’t be running SQL Server from this instance.

Creating the SQL Server container

Now you can create a container where you will run SQL Server; that container will use the data volume container for the persisted data.

docker run -e 'ACCEPT_EULA=Y' -e 'SA_PASSWORD=Passw0rd' -p 1433:1433 --volumes-from mssqldata -d --name sql-server microsoft/mssql-server-linux

The two environment variables (accept_eula and sa_password) are required. The userid is (gulp) ‘sa’. The password requirements are: “At least 8 characters including uppercase, lowercase letters, base-10 digits and/or non-alphanumeric symbols.”. Mine’s really fancy!

Once these exist,

docker ps

will only show you the regular container. The volume container is hidden so you need

docker ps -a

to see it.

Notice that the container I will run SQL Server on is exposed on a port, whereas the data volume container has a different status, “Created”, and is not exposed on a port.

Test the Connection From the SQL Command Line

You don’t have to do this but it made me happy and it was fun. Did you know that you can interact with SQL Server from the command line? The tools are installed inside the container, but that’s a messy way to use them. It’s easier to just install them on your computer directly. The command line tools for Mac are sql-cli.

Note: On May 16th, Microsoft released the macOS version of sqlcmd and bcp (bulk copy). Good to know if you are already familiar with sqlcmd. Here’s their blog post with more details.

You can install that with:

npm install -g sql-cli

Start it up with the mssql command.

The minimum you need to connect is to identify the server (localhost; you don’t need to specify the port since it’s the default) and the password. It will presume sa for the userid.

mssql -s localhost -p Passw0rd

If it’s successful, you’ll get some information followed by a new prompt, mssql.

Connecting to localhost...done

sql-cli version 0.4.14
Enter ".help" for usage hints.
mssql> 

 Enough of that. Now I get to use my new favorite tool. The mssql extension in Visual Studio Code.

Connect in Visual Studio Code

Once you have mssql installed in VS Code, you can begin by creating a new SQL file. Either start via the commands (F1 for the command palette, type MS SQL to see the commands, then choose New Query), or start with the MS SQL Connect command (⇧⌘C). Either way, the extension will prompt you for a connection, one parameter at a time.

Server name:  localhost
Database name: (enter)
User name: sa
Password: Passw0rd
Save Password (yes or no)
Profile name: [your choice]

If you enter everything correctly, not only will it connect, but the details will get stored as a connection profile in the VS Code settings and be available for subsequent connections.

You can see the connection status as it is connecting and then when connected in the lower right hand corner of the IDE.


In the SQL file open in the editor, you can type SQL and use some existing snippets, as well as get help from Intellisense, which has read the schema of the data on the server. So far there’s not much.

Selecting the sqlListDatabases snippet and then executing it (right-click for Execute Query on the context menu, or ⇧⌘E) displays the databases:

[Screenshot: query results listing the server’s databases]

Now you can use TSQL to create databases and database objects, and to query data. In the results pane you will see data as a grid, similar to what you might see in SSMS. You can also export results to CSV or JSON. I’ve recently written an article about all of the cool things you can do with mssql, which will be in the June 2017 MSDN Magazine, but that connects to a SQL Azure database. In the meantime you can just go to the docs for the extension (aka.ms/mssql-marketplace).

Creating a Database and a Table

I let some more snippets help me to create a database and a table.

The first was the sqlCreateDatabase snippet, where I changed the snippet’s database name placeholder to create a new database called LinuxReally, then executed that with ⇧⌘E.

Re-running the select name from sys.databases command showed that the new database was now in the list.

Next I leveraged the sqlCreateTable snippet to help me create a new table. I named the table DatabasesIKnow and gave it three columns:

CREATE TABLE dbo.DatabasesIKnow
(
  Id INT NOT NULL PRIMARY KEY, -- primary key column
  DatabaseName [NVARCHAR](50) NOT NULL,
  KnowIt [Bit]
);

For some reason, the intellisense cache did not automatically refresh when I created this, possibly because it was a new database. Even the mssql extension’s Refresh Intellisense Cache command did not kick it in. I got it working by disconnecting and reconnecting, this time choosing the LinuxReally database rather than letting the extension connect to master by default. When I did that, I could see the message “Updating Intellisense…” in the status bar. After I had done this, the intellisense did auto-refresh any time I modified the database schema.

Once I had the new database, I could execute “select * from dbo.DatabasesIKnow” and see the proper results. In my case, since I hadn’t added data, there were no rows. But clearly it was reading from my table.
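If you want to see actual rows come back, a quick insert (made-up data, naturally) does the trick:

INSERT INTO dbo.DatabasesIKnow (Id, DatabaseName, KnowIt)
VALUES (1, 'SQL Server on Linux', 1);

SELECT * FROM dbo.DatabasesIKnow;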


Taking Down the sql-server Container

Now come the big docker volume tests. I first disconnected from the database inside of VS Code with ⇧⌘E. Then I stopped and removed the sql-server container with the two commands:

docker stop sql-server
docker rm sql-server

But I left the data volume container (mssqldata) in place.

I then created a new container instance using the same command as earlier:

docker run -e 'ACCEPT_EULA=Y' -e 'SA_PASSWORD=Passw0rd' -p 1433:1433 --volumes-from mssqldata -d --name sql-server microsoft/mssql-server-linux

In VS Code, I then reconnected to the server and my new database, which was easy since it had been stored in the connection profiles. It’s the first one, localhost: LinuxReally.


The connection was successful, and listing databases showed my new LinuxReally database. Select * from the DatabasesIKnow table showed the schema. So my database was persisted in the data volume container even though I had killed and recreated the SQL Server container.

Next test: take down both containers.

Now it was time to see what happens when I stop and remove both containers!

docker stop sql-server
docker rm sql-server 
docker stop mssqldata
docker rm mssqldata

Next I used the same command I had used previously to restart the sql-server container, with the parameter to use the (not running) mssqldata volume container.

docker run -e 'ACCEPT_EULA=Y' -e 'SA_PASSWORD=Passw0rd' -p 1433:1433 --volumes-from mssqldata -d --name sql-server microsoft/mssql-server-linux

That fails. I need to first run the data volume container. Unfortunately, I re-ran the create command for that and overwrote the existing container, so all was lost. GOOD LESSON THERE! 🙂

So I started from scratch. In the brand new data volume container I recreated my database and table.

After more experiments, I realized that I had misunderstood the Docker documentation on volumes. You can create copies of data volume containers and remove those, but you need to be more of a docker ninja. While you can make all kinds of copies of the container, once you rm all of the related volumes, it is gone gone. This blog post clarifies some of the info I was still confused about with regard to creating/backing up/restoring data volume containers: tricksofthetrades.net/2016/03/14/docker-data-volumes/.

So the bottom line is you need to leave the Data Volume Container running in some format (the original or some flavor of copy). (I’m still finding it hard to believe that it doesn’t somehow get stored as a file you can re-run so I will update this as soon as someone corrects me!)

Windows, Mac, Linux, Azure and Anywhere You Can Host a Docker Container

And if you think this is only something to do on OS X (because that’s where I’m doing it)… no no no! Did you know there is now Docker for Windows? And that VS Code is cross platform? This is such a great way to quickly get SQL Server up and running in your development environment.


And as I said on twitter, comparing the experience of pulling the docker image and spinning up a container to the experience of installing SQL Server on Windows is no contest.

Cloning a GitHub Repo in Visual Studio 2017 …and a Quiz

When showing off some VS2017 features at our VTdotNET meetup, I made a last minute decision to demo the ability to clone a repository right from GitHub. Then I thought I would combine that with other things I planned to demo.

I already had just the right repo sitting in my GitHub account. A small ASP.NET Core project that was built with Visual Studio 2015 using project.json for its metadata. It’s at https://github.com/julielerman/NetCoreSolutionToMigrateToVS2017.

I had this same solution on my laptop already to use for another demo: showing off VS2017’s ability to auto-migrate a project.json based solution to the new csproj based format for .NET Core projects.

Clever me, I decided to kill two birds with one stone. Clone the repo and have the migration run as it was opening that solution.

So I started up Visual Studio 2017 (since I wanted to show how fast that is) and began the process of cloning the solution from my GitHub repo. I already had my credentials set up and was able to go to File, Open and Open from Source Control.

[Screenshot: File > Open > Open from Source Control]

This opens the Team Explorer window and I clicked the Clone option, which then opened a window showing all of the accounts I’m connected to.

[Screenshot: the Team Explorer window listing my connected GitHub accounts]

I expanded my own account and scrolled down to the repo I wanted, selected it and clicked the Clone button.

[Screenshot: selecting the repo and clicking the Clone button]

The solution got cloned and then it opened up in Visual Studio.

[Screenshot: the cloned solution open in Solution Explorer, with xproj and project.json still intact]

But it never triggered the migration! And if you look at the solution, you can see that the project I expanded still has its xproj file and its project.json file. At the time I was confused, but now that I know what happened, the answer to why this didn’t migrate is very visible in that screenshot of the Solution Explorer. However, one of the developers who was watching this, and had just done another demo with Visual Studio 2017, identified the problem quickly.

Let’s move on for some more clues.

I closed the solution. Then from File/Open, I browsed to the place where it had been saved on my computer and selected the sln file to open. This time, the exact same solution, opening in VS2017, did indeed trigger the migration, which is quite obvious thanks to this screen.

[Screenshot: the one-way upgrade dialog that announces the project migration]

Then I let the migrate feature do its job. When it was finished, you can see that the project no longer has its xproj and project.json files.

[Screenshot: Solution Explorer after the migration, with no xproj or project.json]

Now, look at this new Solution Explorer screenshot compared to the previous one.

And then take a look at the list of new VS2017 features in the Release Notes (https://www.visualstudio.com/en-us/news/releasenotes/vs2017-relnotes) under the section IDE and see if you can tell what the cloning did differently when opening the solution than just opening the solution directly from the drive.

Also, I will find out if this is by design or possibly a behavior that can get modified to behave the way I had expected. 🙂

DotNet Core Version Confusion

I see people scratching their heads over this a lot so am dropping it here even though I’m sure it’s stated in many places already.

When you are at the dotnet command line (aka the CLI aka Command Line Interface) and type ‘dotnet’ you will be shown the version of the runtime.

When you add the version parameter (‘dotnet --version’), that will return the version of the SDK (aka CLI aka Command Line Interface) that you are working with.

Here’s an example:
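On my machine, with the 1.0.1 SDK and the 1.1.0 runtime installed, it looks roughly like this (your numbers will reflect whatever you have installed):

dotnet

Microsoft .NET Core Shared Framework Host
  Version  : 1.1.0

dotnet --version

1.0.1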

If you are confused, you’re not alone. There’s a good discussion/debate on GitHub about how to alleviate this confusion at What should dotnet --version display?


Troubleshooting the dotnet ef command for EF Core Migrations

Updated March 7, 2017 after Visual Studio 2017 was released.
Also, keep in mind that I have been updating this post (and will continue to do so) as I discover new ways people are hitting problems with dotnet ef.

When using EF Core in a .NET Core app (ASP.NET Core or other app sitting on .NET core), it’s easy to run into a problem when attempting to use EF Core migrations at the command line. The most common one is

No executable found matching command "dotnet-ef"

A Note If You’re Coming from My Pluralsight EF Core Course

Microsoft supports developing .NET Core apps in VS2017. The tooling for VS2015 is outdated and there are no plans to bring it up to date for the new csproj support. I recorded my Entity Framework Core: Getting Started course on Pluralsight while VS2017 was still in beta. While I did recreate the VS2017 demos in RC3 right before we published the course, we chose to leave the rest of the .NET Core demos in VS2015 alone. VS2015 only supports project.json, and the project templates set you up for .NET Core 1.0, not .NET Core 1.1. So in the course we’re stuck with project.json support and tooling that’s not quite aligned. But Pluralsight and I both agreed that it made sense not to ALSO force users onto the bleeding-edge, not-yet-released VS2017 for the demos. The course’s focus is on EF Core, so as long as I could hand-hold users through the project.json setup without needing to make them experts at that, it was the right way to go.

Of the nearly 2000 people who have already watched the course since its release less than two weeks ago, a few ran into some confusion with the versioning and got the “no executable found” message. I worked through these with them, but wanted to write down the suggestions I’d made and have a single blog post I could point to.

Problems You May Encounter with ‘dotnet ef’

While some of these notes are specific to the project.json use in the course, I’ve also added tips for using dotnet ef with the newer csproj/msbuild support.

There are a few key things to watch out for.

  1. The current stable tooling for EF Core migrations is split into two packages.
         Microsoft.EntityFrameworkCore.Tools is for PowerShell
        Microsoft.EntityFrameworkCore.Tools.DotNet is for the CLI (dotnet commands)
    Be sure you’ve referenced the Tools.DotNet version of the package so that you have access to the CLI commands. If you’re following my course, that’s explained.
  2. dotnet ef only works in .NET Core projects. If your project targets the full .NET framework, then you’ll need to use the PowerShell commands e.g. add-migration, update-database.
  3. Make sure that you are running the command from the folder that contains the project where the Tools package is referenced. This is explained in the course demos, but still a step you may overlook in your excitement!
  4. If you are using project.json*, make sure that you have the Tools.DotNet package in the Tools section, not the dependencies section. After March 7 release, this will just be “1.0.0”.
    *Going forward, you should only be using project.json with .NET Core 1.0 projects. If you are using the current .NET Core (1.1+) you should be using csproj/msbuild.

    "tools": {
       "Microsoft.EntityFrameworkCore.Tools.DotNet": "1.0.0"
     }
    
  5. If you’re using csproj/msbuild, make sure the tools package is listed in the DotNetCliToolsReference tag.  After March 7 release, this will just be version “1.0.0”.

    <DotNetCliToolReference   
        Include="Microsoft.EntityFrameworkCore.Tools.DotNet"
        Version="1.0.0" />
  6. If you’re using csproj/msbuild, make sure the casing is correct if you’re manually adding that package in csproj! I’ve seen people get tripped up by typing Dotnet rather than DotNet.
  7. It’s possible that you have the Tools package correctly placed but your IDE did not trigger a dotnet restore, so you may need to do that manually. Here’s an example where that bit someone: https://github.com/aspnet/EntityFramework/issues/7801
  8. You shouldn’t be using an RC of VS2017 at this point but I’m leaving this one here.
    If you are using an older Release Candidate of Visual Studio 2017 (before RC3) , the CLI tooling was not yet aligned with the msbuild support so with that version, you have to use the PowerShell commands to do migrations. Therefore you have to use the Tools package and work in the package manager console, not the command line.

    <DotNetCliToolReference   
        Include="Microsoft.EntityFrameworkCore.Tools"
        Version="1.1.0-preview4" />

    As of VS2017 RC3 (which is what I show in the last module of the course), it was possible to use the msbuild support shown in point 5 above along with the CLI commands. After the March 7 release, this will just be “1.0.0”.

  9. Be sure you’re targeting a relational database. Migrations only work with those and not, for example, InMemory. A twitter friend accidentally ran into this problem and was getting the “no executable found” error message.
  10. Make sure that the version of the EF Core tools you are using aligns with the version of .NET Core on your machine.

In my EF Core course, I’m using EF Core 1.1, and for EF tools in all but the last module I’m using Microsoft.EntityFrameworkCore.Tools.DotNet 1.0.0-preview4. (I’m in the process of updating the course to use the new 1.0.0 package.)

You’ll need .NET Core 1.1 installed and the related dotnet SDK – which is not numbered as simply. Today the .NET Core SDK tools are still in preview, so the version number of the current “stable” build that goes with .NET Core 1.1 is 1.0.0-preview2-1-001377.

Note that on March 7  when VS2017 has its official release, the .NET Core SDK tools will also be released so the versions will just be normal numbers like 1.0.0.

Here’s an example of what the dotnet ef command will tell you if you don’t have .NET Core 1.1 installed:

The specified framework 'Microsoft.NETCore.App', version '1.1.0' was not found.
 - Check application dependencies and target a framework version installed at:
 C:\Program Files\dotnet\shared\Microsoft.NETCore.App
 - The following versions are installed:
 1.0.1
 - Alternatively, install the framework version '1.1.0'.

You can get the correct version via the download grid at microsoft.com/net/download/core. The grid only exposes stable releases. If you’re looking for nightly builds (which at this time you need for using the csproj support), there’s a link to those just below the grid.

The set of downloads you get via the LTS button is for .NET Core 1.0.3. The Current button gives you the latest stable versions. The SDK button gets you the Runtime + SDK, whereas the Runtime button gives you ONLY the runtime.

The SDK installs will give you the SDK and the runtime. When you’ve selected the SDK set of installs, it says that it’s .NET Core 1.0. That’s referring to the version of the SDK. It will also install both the .NET Core 1.0 and .NET Core 1.1 runtimes. That single SDK is able to work with both of the runtimes. I’m on Windows x64, so my download is the first one on the list: the Windows x64 installer.

Just as an FYI, if you select the Runtime set of downloads, then you will only be getting a specific version of a runtime and not the SDK.


After it’s installed, typing dotnet will show you the runtime version that your machine is running by default. That’s the later one. After installing the 1.0 SDK with both runtimes, dotnet tells me I’m running Version 1.1.0 of the runtime.

dotnet --version gives you the version of the SDK. That’s already showing me that there was a patch, because the result says “1.0.1”.

If you installed this new SDK but are still seeing the old SDK version (1.0.0-preview2-001313), that is likely because that version is specified in the global.json file of your solution. That shouldn’t create a problem for using the migration commands, but it’s a good idea to have the correct version listed in global.json.

Some additional tips for you!

Commands can only run from an executable/test project

In my Pluralsight EFCore course, I have the DbContext in its own class library project. So when running dotnet ef, after solving all of the above problems, you’ll get a new message which is just pointing out that the commands depend on an executable to run. That message looks like this:

Could not invoke this command on the startup project 'TestEFCore.Data'. This version of the Entity Framework Core .NET Command Line Tools does not support commands on class library projects in ASP.NET Core and .NET Core applications.

In my case, I wasn’t ready to add a UI or test project to the solution, so I added a minimal console app just to cover this need. It needs to reference the project with the DbContext. Once that’s sorted, you can use the --startup-project parameter of the dotnet ef command to point to that project. While I show all of this in my course, you can also see it in my MSDN Magazine article here: msdn.microsoft.com/magazine/mt742867
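Running from the folder of the project that holds the DbContext, that looks something like this (the console app name here is made up):

dotnet ef --startup-project ../TestEFCore.ConsoleApp migrations add InitialCreate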

This will get a touch easier with the msbuild version of the EF Core tools. With the newer tooling, you’ll at least be able to run dotnet ef to get the command’s help without pointing to a startup project, although to run subcommands you’ll still need to specify the startup project.

Installing the EF Core Tools via NuGet in Visual Studio 2015

One of the viewers of the course reported a strange problem. He was able to add the EF Core Tools package in project.json and use the migrations from the CLI as expected. But if he attempted to add the package from the NuGet Package Manager or the Package Manager Console instead, he was back to the old ‘No executable found matching command “dotnet-ef”‘ error.

He finally noticed that there were errors when NuGet attempted to download the package. But in his IDE the errors were subtle, so he was unaware that the tools package was not installed. Here’s a screenshot he shared:

[Screenshot: NuGet Package Manager showing the subtle install errors]

I figured out how to get myself into a similar bind and got a much more helpful error message:


Package 'Microsoft.EntityFrameworkCore.Tools.DotNet 1.1.0-preview4-final' uses features that are not supported by the current version of NuGet. To upgrade NuGet, see http://docs.nuget.org/consume/installing-nuget.

Even though Visual Studio extension manager did not indicate an available update AND it listed my NuGet Package Manager version as 3.5 (the latest), I learned that the team responsible for this extension had temporarily stopped pushing notifications for updates! That change is noted in the “No Auto Updates” section of this blog post: http://blog.nuget.org/20161027/Announcing-NuGet-3.5-RTM.html

So I had to explicitly download the latest VSIX (ignoring the fact that it seemed to have the same version # as the one I had installed!) from the NuGet distributions page. Installing that resolved the problem of installing the EF Core tools package from the Package Manager and Package Manager Console.


There’s a sneak peek at EF Core with msbuild in VS2017

A few people mentioned to me that they decided to go straight to VS2017 when working through the demos in the course, and they also said they were confused by some differences and had to do some research. I know for sure that one of these devs was kicking himself when I asked if he had watched the VS2017 demo I did at the end of the course before trying VS2017 along with the early demos. (His answer was “umm, now you tell me!”) It’s listed in the table of contents for the course. So if you take a peek at that first, I think doing all the demos in VS2017 will be a lot easier! I am currently in the process of updating the VS2017 demos to use the RTM and latest tools.

Hope this helps!

Updated My EFCore / WebAPI / PostgreSQL / XUnit Repo to 1.1

Today was dedicated to updating my long-running repository sample, which I started when EF Core was EF7, to the newest version of EF Core: 1.1. Here is the updated repo: https://github.com/julielerman/EFCore-ASPNetCore-WebAPI-RTM.

Phase one of this update continues to use project.json.

In addition to updating the version #s of the NuGet package references, I also made some changes to the code to reflect a few new features.

Pay attention to the tooling packages. In the tools section, the package name has changed (note DotNet at the end) and the version is currently 1.0.0-preview3, even though the IISIntegration tools version is preview2.

 "tools": {
   "Microsoft.AspNetCore.Server.IISIntegration.Tools": 
       "1.0.0-preview2-final",
   "Microsoft.EntityFrameworkCore.Tools.DotNet":
       "1.0.0-preview3-final"
 },

Also in the dependencies, the EFCore Design package is 1.1.0 like the rest of EFCore. That’s part of the EF APIs, not tooling.

Code changes ….

You’ll discover the DbSet.Find method and change tracker Load method in use in the repository class. These were both added in to EF Core 1.1.

I modified the WeatherEvent class to fully encapsulate its Reactions collection using the support for mapping to IEnumerable. That resulted in some changes to constructors and the addition of an AddReaction method and a local variable.
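I won’t reproduce the whole class, but it’s the same encapsulation pattern as the Samurai example earlier on this page. A rough sketch (the Reaction constructor signature here is a guess):

public class WeatherEvent
{
  private readonly List<Reaction> _reactions = new List<Reaction>();
  public IEnumerable<Reaction> Reactions => _reactions.ToList();

  public void AddReaction (string comment)
  {
    _reactions.Add(new Reaction(comment));
  }
}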

Unrelated to EF Core, I also modified the SeedData.cs class. It reads a hard-coded seeddata.json file to pull in seed data. That data used old dates, and I wanted the data to show current dates to help me tell that I really and truly pushed new data into the database. Since the Date property of WeatherEvent is private, the way I went about this was to read the raw JSON, update the date values there, and save the raw JSON back to the original file. Then I deserialize the JSON, with a current range of dates, into a set of WeatherEvents. This also means that I added Delete/Create database back in, so the database gets thrown away and recreated/reseeded every time you start up the application.

The tests are also updated to use the latest packages. In addition to changing the versions, I had to add a reference to an older package (InternalAbstractions) as its dependency has not yet been updated in xunit.

Here’s the full project.json for the test project since I had to do a bunch of googling to figure it out.

{
  "version": "3.0.0-*",
  "description": "Tests for simple app using aspnetcore, efcore and postgresql. developed and run on OSX.",
  "authors": [ "Julie Lerman" ],
  "testRunner": "xunit",
  "dependencies": {
    "Microsoft.EntityFrameworkCore.InMemory": "1.1.0",
    "src": "3.0.0",
    "xunit": "2.2.0-beta4-build3444",
    "dotnet-test-xunit": "2.2.0-preview2-build1029",
    "Microsoft.DotNet.InternalAbstractions": "1.0.0"
  },
  "frameworks": {
    "netcoreapp1.0": {
      "dependencies": {
        "Microsoft.NETCore.App": {
          "type": "platform",
          "version": "1.1.0"
        }
      },
      "imports": [
        "dnxcore50",
        "portable-net45+win8"
      ]
    }
  }
}

I hope you find this repository useful to see EF Core 1.1 in action.

Oh and as per a tweet by Brad Wilson, I added SDK to my global.json file!

Now I have to go learn about why this is important. Clearly it is!


EF6 or EF Core? How Do I Choose?

In late October, I spoke at DevIntersection. One of my sessions was called EF6 or EF Core? How Do I Choose? As I’ve been an authority on EF since its earliest days, people have been asking me this question frequently, which inspired the talk.

The session was not recorded but I’m sharing my slides on SlideShare. I go into detail on the topic in my upcoming Pluralsight course, “EF Core: Getting Started” which I am currently building. Watch my Pluralsight author page (as well as twitter.com/julielerman and this blog!) for its release.

In the meantime, here is a link to the slides from the DevIntersection talk: ef6-or-ef-core.

OSX, ASPNetCore, EFCore and CoreCLI, oh my!

After moving some RC1 test projects from Windows over to my MacBook, it was time to start from scratch in OS X to see what that experience was like: installing all the right pieces. I’d been using Visual Studio Code already on Windows for Node.js programming, but doing that for an ASP.NET 5 project is a little harder since the debugging for that isn’t implemented yet. At the same time, I’m still getting used to navigating my way around a Mac: keystrokes, bash commands, etc.

But I did get a small sample worked out and even used PostgreSQL to do the job.

That (remember, RC1) little test is on github here: github.com/julielerman/ef7osxtest.

But RC2 is a different beast!

Addendum because so many have asked: I am using the nightly builds of RC2. It is not out yet!

DNX is transitioning to CoreCLI with new underlying APIs. And there was that name change. ASPNET 5 became ASP.NET Core and EF7 became EF Core. The package and namespaces have changed.

And in the meantime, EF7/EFCore is still going through changes with RC2. To me the most significant is the work the team has been doing to help with disconnected graphs. You can read the latest (and I think final) state of how EF will handle disconnected graphs here on github.

I tried a few paths to starting a new project to try out RC2 (there’s some twitter evidence of that struggle).

I had watched the video of David Fowler and Damian Edwards’ talk about the Core CLI from NDC London, and David did the demos on a Mac.

But it took a tweet from Tony Sneed to remind me that I could get David’s demos from GitHub.

Indeed, that was the best starting point. I cloned that repository onto my MacBook and made sure I could run all three projects.

Now I’m working on building out the HelloMVC project by adding in the model, DbContext, controllers and other relevant bits from my RC1 project.

At the moment (as of Feb 1 2016), the dotnet ef migrations commands aren’t working but Brice Lambson is working hard on it and says that should be pushed up this week. (Watch this github issue.)

And it might be a while before the Postgres provider gets updated to work with the new namespaces and package dependencies, but we do have the Microsoft.EntityFrameworkCore.Sqlite provider, which will work on OS X, although there seem to be a lot of problems with it on Linux at the moment. But I’ll be trying that out anyway.