All posts by Julie

Building C# Project-based Azure Functions in Visual Studio Code

I’ve been using the Azure Functions extension for Visual Studio Code for some time now and it continues to evolve in great ways. The latest shift threw me for a loop so I thought I would document some of it for those who may not have started yet. I should also state that for the past year or so I have focused on writing my functions with JavaScript, just because I like to mix things up a bit and it also makes it easier to share Azure Functions with devs who are not .NET focused.

I’ve also written about creating Azure Functions with VS Code in MSDN Magazine but again, this has changed since I wrote about it. I’ve been using the version 2 APIs for a while, so I’m not talking (well, writing) here about the change from v1 to v2, but about the change in the experience of using the Azure Functions extension.

Also notable is that the Azure Functions team actually recommends using Visual Studio for building C# based apps and VS Code for JavaScript. But I’m so often on my MacBook and love using VS Code so I am going down this path with C# anyway.

Having revisited the docs enough times to finally notice some key information, I realize why the experience of working with C# in the extension has changed. Previously, I’d worked with C# script functions (.csx), which is the same as what you use when you work directly in the portal. But the extension templates now drive you to C# project functions and there’s a big difference. C# script functions are more like the JavaScript functions. They depend on a manually created function.json file to define the bindings and can install the appropriate extension packages based on the bindings. The csx files are compiled at run time. With a C# class library, you develop as you would other C# class libraries – installing the relevant packages and then using attributes to identify methods as Azure Functions as well as trigger, input and output bindings. When you compile the library, the Azure Functions tooling generates a function.json file for you that gets deployed.

Because I was used to creating functions with JavaScript or the C# script path, the new default for the Azure Functions extension, which uses C# class libraries instead, really threw me for a loop. So I decided to document walking through this workflow as I had to learn it. I think I still prefer the lighter weight C# script (.csx) or JavaScript flow, but that might align with my preference in many scenarios for VS Code over Visual Studio.

Preparing Visual Studio Code

So first things first: you’ll need to install the Azure Functions and Azure Account extensions into VS Code. The Azure Functions extension relies on the Azure Functions Core Tools. The extension installation instructions will help you get all that you need, and the extension does check for updates and prompt you to update those tools as needed. In fact, I got this prompt last night.

With the extensions installed, it is time to create a function app project. You should already have a folder created to house your project and you might as well have it open in VS Code. Mine is named AzureFunctionProj.

Then you can click on the “Create New Project” icon on the function bar (the icons show up when you hover over the bar) to create a new Function App project inside your folder.


This part of the workflow has not changed.

  1. It will ask you to point to a folder and the open folder should be there as a default to select.
  2. It will then have you select a language you’ll use in your app. From the options (C#, JavaScript, Python (still a preview) and Java (also a preview)), I’ll choose C#.

As a result a new .NET Core project will be created using the template and you’ll see the following in the folder explorer:


All of these files inside the AzureFunctionProj folder were created by the template. Most important is the csproj file, shown here with some of the most relevant settings:

<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <TargetFramework>netcoreapp2.1</TargetFramework>
    <AzureFunctionsVersion>v2</AzureFunctionsVersion>
  </PropertyGroup>
  <ItemGroup>
   <PackageReference Include="Microsoft.NET.Sdk.Functions"
                     Version="1.0.24" />
  </ItemGroup>
  <ItemGroup>
    <None Update="host.json">
      <CopyToOutputDirectory>PreserveNewest</CopyToOutputDirectory>
    </None>
    <None Update="local.settings.json">
     <CopyToOutputDirectory>PreserveNewest</CopyToOutputDirectory>
     <CopyToPublishDirectory>Never</CopyToPublishDirectory>
    </None>
  </ItemGroup>
</Project>

Creating a function in the function app is where things are quite a bit different.

You begin as always with the “add a function” icon.

The first step is familiar, selecting which folder has the function app that the function should be in:


I’ll choose AzureFunctionProj.

Then there are a number of trigger templates to choose from, which is nice, but more interesting are the three options at the bottom:

First is the project runtime and I definitely want v2 (“~2”). You can use different languages for different functions in the function app. This is showing the default I chose already: C#. And finally, the list of trigger templates is filtered to “Verified”.

You can change these options by clicking on them.

The filter options are Verified, Core and All. Core and All currently reveal the same list, which includes a few extra preview triggers: DurableFunctionsOrchestration, SendGrid, EventHubTrigger and IotHubTrigger.


Now in the past I was creating JavaScript functions inside my .NET Core Function App, but now I am going to continue with C# because this is where things really surprised me. I’ll choose HttpTrigger and am then prompted to provide a name. I’ll just leave the default: HttpTriggerCSharp. I’m then asked to provide a namespace name for the class that will be created. The default is “Company.Project”. I’ll change it to FunctionTests.HttpTest1. The final bit of info to be collected is the security for the function. Of the options, I will select Anonymous because it’s a demo and I don’t want to have to deal with credentials.

That’s it. The function is created.

Some More Class Library Project Differences

My past experience gave me the expectation that a new folder would be created inside the function app folder with the name of the function and, inside that, a class file for the function and a function.json file to contain the binding configurations. The class file was there (though not in its own folder), but there was no function.json file. Also interesting and new to me were the FunctionName attribute on the Run method and the HttpTrigger attribute on the HttpRequest in the Run method’s signature. I’m also not used to having all of those using statements when using csx script.
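
For reference, the generated class looked something like this. This is the stock v2 HttpTrigger template of the time rather than a copy of my exact file, so minor details may vary:

using System;
using System.IO;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.AspNetCore.Http;
using Microsoft.Extensions.Logging;
using Newtonsoft.Json;

namespace FunctionTests.HttpTest1
{
  public static class HttpTriggerCSharp
  {
    [FunctionName("HttpTriggerCSharp")]
    public static async Task<IActionResult> Run(
      [HttpTrigger(AuthorizationLevel.Anonymous, "get", "post", Route = null)] HttpRequest req,
      ILogger log)
    {
      log.LogInformation("C# HTTP trigger function processed a request.");

      // The name can come from the query string or from JSON in the request body
      string name = req.Query["name"];

      string requestBody = await new StreamReader(req.Body).ReadToEndAsync();
      dynamic data = JsonConvert.DeserializeObject(requestBody);
      name = name ?? data?.name;

      return name != null
        ? (ActionResult)new OkObjectResult($"Hello, {name}")
        : new BadRequestObjectResult("Please pass a name on the query string or in the request body");
    }
  }
}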

When building the project, .NET Core reads those attributes and builds a function.json that goes in the bin folder for deployment.

But it’s more than just the familiar bindings. Notice the generatedBy, configurationSource, scriptFile and entryPoint tags.
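
Mine looked roughly like the following. Treat this as a reconstruction; the exact values, especially the scriptFile assembly name, will depend on your project:

{
  "generatedBy": "Microsoft.NET.Sdk.Functions-1.0.24",
  "configurationSource": "attributes",
  "bindings": [
    {
      "type": "httpTrigger",
      "methods": [
        "get",
        "post"
      ],
      "authLevel": "anonymous",
      "name": "req"
    }
  ],
  "disabled": false,
  "scriptFile": "../bin/AzureFunctionProj.dll",
  "entryPoint": "FunctionTests.HttpTest1.HttpTriggerCSharp.Run"
}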

So the first binding, the httpTrigger binding, looks familiar to me. The function will respond to an HTTP trigger.

You can test out this default either by running or debugging. To run, you can use VS Code’s CTRL-F5 keyboard combo or, if you prefer using the CLI, you can use the Azure Functions Core Tools command:

func host start

That will run the function and provide a URL to try out. The template’s “stake-in-the-ground” method is written to accept either a query parameter or JSON in the body. I’ll just use a query parameter:

http://localhost:7071/api/HttpTriggerCSharp?name=Julie

And the browser outputs


Adding an Output Binding to Azure Cosmos DB

What if I wanted to add an output binding? I’m used to doing that by editing function.json. But since I’m on this path of attribute defined bindings, I will add the binding that way. Let’s add an output binding for Azure Cosmos DB. That way this function will respond to an HTTP Request and insert some data into a Cosmos DB database. I already have an Azure Cosmos DB account, so I will define this to target a new collection in a new database in the existing account.

In order to work with an Azure Cosmos DB binding, I need to add the relevant package to my project. Because I’m building my function as a C# class library, I can do this as I would for any other NuGet package: either by adding it manually to the csproj file or by adding the package with the .NET Core CLI. I’ll use the CLI:

dotnet add package Microsoft.Azure.WebJobs.Extensions.CosmosDB --version 3.0.3

Note that if I were building the function with JavaScript or C# script, I would need to register the package using the Core Tools CLI (func extensions install -p packagename).

Now the package is in csproj:

<ItemGroup>
  <PackageReference 
    Include="Microsoft.Azure.WebJobs.Extensions.CosmosDb"
    Version="3.0.3"/>
  <PackageReference
    Include="Microsoft.NET.Sdk.Functions"
    Version="1.0.24"/>
</ItemGroup>
After restoring, I can add the output binding. Currently the default function is asynchronous and you can’t add out parameters to an async method. Return values are one option. See https://docs.microsoft.com/en-us/azure/azure-functions/functions-triggers-bindings#using-the-function-return-value for details. ICollector or IAsyncCollector is another (https://docs.microsoft.com/en-us/azure/azure-functions/functions-dotnet-class-library#writing-multiple-output-values).
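For comparison, here is a rough sketch (based on the docs, not code I’m running in this post) of the IAsyncCollector approach, which lets the method stay async:

[FunctionName("HttpTriggerCSharp")]
public static async Task<IActionResult> Run(
   [HttpTrigger(AuthorizationLevel.Anonymous, "get", "post", Route = null)]
   HttpRequest req,
   [CosmosDB(databaseName: "CSharpDatabase",
             collectionName: "CSharpCollection",
             ConnectionStringSetting = "MyCosmosDBConnection",
             CreateIfNotExists = true)] IAsyncCollector<dynamic> documents,
   ILogger log)
{
  string name = req.Query["name"];
  if (name == null)
  {
    return new BadRequestObjectResult("Please pass a name on the query string");
  }
  // AddAsync hands the new document to the Cosmos DB output binding
  await documents.AddAsync(new { Name = name, Added = DateTime.Now });
  return new OkObjectResult($"Hello, {name}");
}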
But I’m going to just make the Run method synchronous and create an output parameter instead. Like the trigger binding parameter, I’ll need to add an attribute to the output parameter to specify the binding. I’m also supplying parameters for the database and collection names, the name of the setting that has the connection string to the database account, and one last setting to ensure the database and collection get created if needed.
[FunctionName("HttpTriggerCSharp")]
public static ActionResult Run(
   [HttpTrigger(AuthorizationLevel.Anonymous, "get", "post", Route = null)]
   HttpRequest req,
   [CosmosDB(databaseName: "CSharpDatabase",
             collectionName: "CSharpCollection",
             ConnectionStringSetting = "MyCosmosDBConnection",
             CreateIfNotExists=true)] out dynamic document,
   ILogger log)
The MyCosmosDBConnection setting I added is defined in the local.settings.json file:
{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "",
    "FUNCTIONS_WORKER_RUNTIME": "dotnet",
    "MyCosmosDBConnection": "this is where my connection string goes"
  }
}
Note that if you were creating a new function with an Azure Cosmos DB trigger, the tooling would prompt you for all of the relevant database information, include that in an attribute in the function code and add them to the function.json.
Now there is just one last puzzle piece. The function expects me to provide a value for the document variable, which the binding will then insert into the database. The full class listing is below, and in there is the single line that populates the document with a Name and an Added property. The function and the binding will do all of the rest.
using System;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.AspNetCore.Http;
using Microsoft.Extensions.Logging;

namespace FunctionTests.HttpTest1
{
  public static class HttpTriggerCSharp
  {
    [FunctionName("HttpTriggerCSharp")]
    public static ActionResult Run(
      [HttpTrigger(AuthorizationLevel.Anonymous, "get", "post", Route = null)]
      HttpRequest req, 
      [CosmosDB(databaseName: "CSharpDatabase",
                collectionName: "CSharpCollection",
                ConnectionStringSetting = "MyCosmosDBConnection",
                CreateIfNotExists=true)] out dynamic document,   
      ILogger log)
    {
      log.LogInformation("C# HTTP trigger function processed a request.");
      string name = req.Query["name"];
      document=new { Name = name, Added = DateTime.Now };
      return name != null
        ? (ActionResult)new OkObjectResult($"Hello, {name}")
        : new BadRequestObjectResult("Please pass a name on the query string");
     }
  }
}
Based on my previous experience of manually configuring function.json, I expected that, after building, the function.json in the bin folder would include the Cosmos DB output binding information, but it didn’t. Again, this is due to the differences between JavaScript/C# script functions and C# class library functions.
Within VS Code I can run or debug the function. Debug is F5 or the debug icon. To run without debugging, you can use CTRL-F5 or the Core Tools CLI command:
func host start
When using the CLI command, I found in my testing that the version I’m using seems to require me to run dotnet clean first for it to succeed. CTRL-F5 does that step for you. I made a note of this in a pre-existing GitHub issue. You can read that here.
There is a lot of useful info in the official docs. Two that I leaned on are:

Grateful for the 16th MVP Award from Microsoft

 

15 years ago, July 1, 2003, I got a surprise in the mail. An envelope with this sheet of paper welcoming me to the Microsoft MVP program for the things I had done in the community in the past year. Believe it or not, this first award was triggered via Microsoft Academics because one of the things I’d been doing with INETA was working with college students.

With that piece of paper fresh out of the envelope, I jumped in my car and drove 15 miles to the job site where my husband was working on a roof, made him come down from the roof so I could show him. I was that excited and surprised. Even if the letter had not been completely out of the blue, had someone contacted me to fill out a form as the MVPs do these days, I am confident I would have been just as surprised and excited.

I’ve been honored to be awarded every July 1 since then for various things I do to try to shorten the learning curve for other programmers by sharing what I’ve learned. I don’t take the award for granted. I just follow my heart and do what I want to do and if it happens to be what they are looking for when the assessments are being done, then I’m grateful for that particular recognition.

I know it was a hard week for a lot of long-term MVPs who were not re-awarded this year and for the MVP leads (aka Community Program Managers (aka CPMs)) who made personal calls to each and every one of those MVPs to try to let them know as gently as possible. Given that, I’m extra grateful to continue to be part of the program for the July 2018 – June 2019 period.

Defining a Defining Query in EF Core 2.1

I have to cut out some text from a too-long article I’ve written for a magazine (links when it’s published), so here is a simple example of using the new ToQuery method for creating a defining query in EF Core 2.1 (currently in Preview 2).

ToQuery is associated with the new Query Type feature that allows you to use types that are not mapped to a table in the database and are therefore not true entities, don’t require a key and are not change tracked.

I’m starting with a simple model that includes these two entities, which are mapped to tables in my DbContext.

public class Team
{
    public int TeamId { get; set; }
    public string Name { get; set; }
    public string TwitterAlias { get; set; }
    public List<TeamMember> Members { get; set; }
}

public class TeamMember
{
    public int TeamMemberId { get; set; }
    public string Name { get; set; }
    public string Role { get; set; }
    public int TeamId { get; set; }
    public TimeSpan TypicalCommuteTime { get; private set; }
    public void CalculateCommuteTime (DateTime start, DateTime end)
    {
        TypicalCommuteTime = end.Subtract(start);
    }
}

You’ll need to pre-define the type being used for the defining query, mine will have the Name and TypicalCommuteTime for the team member.

public class TeamCommute
{
  public TeamCommute(string name, TimeSpan commuteTime)
  {
    Name = name;
    TypicalCommuteTime = commuteTime;
  }
  public string Name { get; set; }
  public TimeSpan TypicalCommuteTime { get; set; }
}

You can define queries directly in OnModelCreating using either raw SQL with FromSql or a LINQ query inside a new method called ToQuery. Here’s an example of using a query type with a defining query and LINQ:

modelBuilder.Query<TeamCommute>().ToQuery(() =>
   TeamMembers.Select(m => new TeamCommute(m.Name, m.TypicalCommuteTime)));
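
For context, that ToQuery call lives in the context’s OnModelCreating method. Here is a rough sketch of how it fits together; the context and DbSet names are just placeholders:

public class TeamContext : DbContext
{
  public DbSet<Team> Teams { get; set; }
  public DbSet<TeamMember> TeamMembers { get; set; }

  protected override void OnModelCreating(ModelBuilder modelBuilder)
  {
    // TeamCommute is a query type, so it gets no key and no table mapping
    modelBuilder.Query<TeamCommute>().ToQuery(() =>
      TeamMembers.Select(m => new TeamCommute(m.Name, m.TypicalCommuteTime)));
  }
}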

With this query defined on the TeamCommute class, you can now use it in queries in your code, for example:

var commutes = context.Query<TeamCommute>().ToList();

Keep in mind that you can’t define both a ToQuery and a ToView mapping on the same type.

New Pluralsight Course! EF Core 2: Getting Started

I’ve recently published my 19th course on Pluralsight.com: Entity Framework Core 2: Getting Started.

It’s 2hrs 40 minutes long and focuses on the basics.

This is using EF Core 2.0.1 in Visual Studio 2017.

Future plans: I’ve begun working on an intermediate level course to follow up and have others in the pipeline…such as a course to cover features of EF Core 2.1 when it gets released (I will wait until it has RTMd for stability) and other advanced topics. I am also planning to do a cross-platform version using VS Code on macOS because that’s my fave these days.

If you are not a Pluralsight subscriber, send me a note and I can give you a 30-day trial so you can watch the course. Be warned: the trial is akin to a gateway drug to becoming a subscriber.

Here is the table of contents for the course:

Introducing a New, Lighter Weight Version of EF   32m 40s
Introduction and Overview
What Is Entity Framework Core?
Where You Can Build and Run Apps with EF Core
How EF Core Works
The Path From EF6 to EF Core to EF Core 2
EF Core 2 New Features
Looking Ahead to EF Core 2.1 and Beyond
Review and Resources

Creating a Data Model and Database with EF Core    42m 36s
Introduction and Overview
Setting up the Solution
Adding EF Core with the NuGet Package Manager
Creating the Data Model with EF Core
Specifying the Data Provider and Connection String
Understanding EF Core Migrations
Adding Your First Migration
Inspecting Your First Migration
Using Migrations to Script or Directly Create the Database
Recreating the Model in .NET Core
Adding Many-to-many and One-to-one Relationships
Reverse Engineering an Existing Database
Review and Resources

Interacting with Your EF Core Data Model 34m 11s
Introduction and Overview
Getting EF Core to Output SQL Logs
Inserting Simple Objects
Batching Commands When Saving
Querying Simple Objects
Filtering Data in Queries
Updating Simple Objects
Disconnected Updates
Deleting Objects with EF Core
Review and Resources

Querying and Saving Related Data   20m 49s
Introduction and Overview
Inserting Related Data
Eager Loading Related Data
Projecting Related Data in Queries
Using Related Data to Filter Objects
Modifying Related Data
Review and Resources

Using EF Core in Your Applications    30m 52s
Introduction and Overview
EF Core on the Desktop or Device
The Desktop Application: Windows Presentation Foundation (WPF)
Creating the WPF Application
Walking Through the WPF Data Access
EF Core in ASP.NET Core MVC
Adding Related Data into the MVC App
Coding the MVC App’s Relationships
Review and Resources

Copying an Azure Function App using the Portal, GitHub and Cloud Shell

I’ve got an Azure Function App with some functions that I built for a demo for a conference session. I first used this at AngularMix and had named the function app AngularMix2017. Then when I did it for DevIntersection this past week, I recreated it as DevIntersection2017 because you can’t rename a function app. I’m doing it yet again next week at another conference (NetConf.co) and decided it was time to create it as a reusable name: DataApiDemo.

While you can’t flat out copy a function app if you are working strictly in the portal (which is how I’ve been working during my first learning stages of building function apps), you can easily enough duplicate it by downloading and uploading it again.

Keep in mind that if you are developing functions in VS 2017 or VS Code (combined with the Azure CLI) you can just use the tooling to publish the function apps along with the settings. But I don’t like to take the easy path. I learned a lot of new tricks and tools by going this route. Even if I leverage the tooling on my next go round, I’ve learned a lot which gives me a better understanding overall, more control and troubleshooting skills.

My first step was to create the new function app (dataapidemo).

The Overview blade of an Azure function app has a “download app content” option. That will pull down all of the relevant files onto your machine. So next, I go to the overview of the DevIntersection2017 function app and click on the download app content link.

When downloading the app content, you have an option to also grab the app settings. I have a number of secrets stored in my settings … things like keys to my databases, account ids and more. If you check that, the settings will get downloaded to a local settings file. If you were using this feature to continue your development locally, e.g. in VS2017, then you’ll want them. But for my scenario, they are not useful and even potentially a security issue (you’ll see shortly) so I’m going to leave that unchecked.

I can see the files in the designated folder on my computer.

I’ve already created a Github repository to store these files in: https://github.com/julielerman/DataApiAzureFunctions.

At the command line in the new folder, I’ll git init, add all of the files into the local repo and commit them. Then I can run the git commands to connect that folder to my GitHub repo and then push my local repository into the online repository.
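
Roughly, those commands look like this (assuming the repository above and the default branch naming of the day):

git init
git add .
git commit -m "Function app content downloaded from the portal"
git remote add origin https://github.com/julielerman/DataApiAzureFunctions.git
git push -u origin master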

When that’s complete, the files are all in my GitHub repo.

Since I’m using a public repository, this is why I didn’t want the local.settings.json file, filled with my secrets, to download. Of course I could have deleted it locally or told Git to ignore that file. A few other points about this: if I were developing this locally, then I could have that file locally and use VS2017 or Azure CLI tools to publish my app along with the settings to the portal. But that is not my goal here.

Now I can take advantage of a long-time feature of the Azure Portal: connect it to a repository to auto-deploy from the repository to my app (in the case to my function app).

Back in the portal, I select the new function app again, then from its Platform Features page, open the Deployment Options. Choose Setup new deployment and follow the steps to connect the function app to your repository. When you’ve completed the setup, you can use the Sync button to force the deployment to happen. All of the files are now in my new dataapidemo function, including the function.json files that contain the function setting (e.g. integrations etc) and any other files I may have added to my functions.

Because I am using Twilio, one of my functions has a reference to Twilio in a project.json file and I need to restore that package to this function app. I did that by opening up the project.json and making a small mod to it … adding or removing a blank line, then saving it. That triggers the package restore to happen.

What’s not there however are the function app settings. These are settings that are defined to the app and used by one or more of the functions. There are some default settings created by Azure but I also have some others …those secrets, such as the account and telephone number for my Twilio integration as well as a key for the function.

Some of these settings are my own custom settings that I want in the new function. I could add them in via the portal app settings interface, but I don’t like that path when I have a slew of settings.

That means moving on to yet another amazing tool in the Azure Portal — Cloud Shell. Here you can run Bash or Powershell commands from the Azure CLI directly on services in your subscription. You can also do this locally on your machine with the extra caveat that there you must provide credentials and other metadata.

In the portal, open the Cloud Shell. This gives you an interactive terminal window.

If you have multiple subscriptions, you want to be sure that it’s set to the one where your app function is stored.

You can do this by listing the subscriptions with the az account list command.

If the one you want is not default, then set it with:

az account set -s [ID]

This shell can do auto-complete so if the id starts with a123, you can type

az account set -s a123

then hit the tab key and the id will get finished. Hit enter and it becomes the default subscription to work in.

I’ll use the az functionapp commands to get the settings into my function app.

First I’ll list the existing settings. You do this with the command

az functionapp config appsettings list --name [functionname] --resource-group [resource name]

The parameter shortcuts for name and resource-group are -n and -g. Mine are both the same name: dataapidemo. Here’s what the command looks like (except for the wrapping that my blog is forcing):

az functionapp config appsettings list -n dataapidemo -g dataapidemo

The appsettings are listed out as JSON by default although you can affect the output format with the output parameter.

I’ll need to see the settings I want to copy from the other function app so I run:

az functionapp config appsettings list -n devintersection2017 -g devintersection2017 -o tsv

which give me the list of settings from which to choose.

In addition to listing the settings, you can add new ones with the set verb and delete them with the delete verb. Both go in place of the list verb.

You use SET by combining the SET command and the --settings parameter. After the --settings parameter, you can list one or more settings with this format:

az functionapp config appsettings set -n dataapidemo -g dataapidemo --settings "property1name=value", "property2name=value"

Note that this is wrapping here on the blog post but not in the shell window.

So I added the handful of settings I wanted to add which looked more like:

az functionapp config appsettings set -n dataapidemo -g dataapidemo --settings "TwilioSID=myvalue", "TwilioAccount=myvalue"

There were others I had to add. For example, my function needs to know how to get to my Cosmos DB document database, so it has a setting that uses the db name as the property name and the dbs account endpoint as the value.
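
As a made-up example (the setting name and endpoint here are placeholders, not my real values), that kind of setting gets added the same way:

az functionapp config appsettings set -n dataapidemo -g dataapidemo --settings "MyDocDbName=https://myaccount.documents.azure.com:443/"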

Once I had done that, I was able to get rid of the function apps that were named for each conference and use a common one no matter where I am sharing it.

And boyohboy did I learn a lot of new things!

Another Use Case for DbContext.Add in EFCore (and a DDD win)

If you are like me and design your classes following Domain-Driven Design principles, you may find yourself with code like this for controlling how objects get added to collections in the root entity.

public class Samurai {

  public Samurai (string name) : this()
  {
     Name = name;
  }

  private Samurai ()
  {
    _quotes = new List<Quote>();
  }

  public int Id { get; private set; }
  public string Name { get; private set; }
  private readonly List<Quote> _quotes = new List<Quote>();
  private IEnumerable<Quote> Quotes => _quotes.ToList();
  public void AddQuote (string quoteText) {
      var newQuote = new Quote(quoteText, Id);
      _quotes.Add(newQuote);
  }
}

I have a fully encapsulated collection of Quotes. The only way to add a new quote is through the AddQuote method. You can’t just call Samurai.Quotes.Add(myquote).

Additionally, because I want to control how developers interact with my API, there is no DbSet for Quotes. You have to do all of your queries and updates via context.Samurais.

A big downside to this is that if I have a new quote and I know the ID of the samurai, I have to first query for the samurai and then use the AddQuote. That really bugs me. I just want to create a new quote, push in the Samurai’s ID value and save it. And that requires either raw SQL or a DbSet<Quote>. I don’t like either option. Raw SQL is a hack in this case and DbSet<Quote> will open my API up to potential misuse.

I was thinking about this problem while lying in bed this morning (admit it, that’s the first thing you do when you wake up, too, right?) and had an idea.

In EF Core, we can now add objects directly to the context without going through the DbSet. The context can figure out what DbSet the entity belongs to and apply the right info to the change tracker. I thought this was handy for being able to call

myContext.AddRange(personobjectA, accountobjectB, productObjectC);

Although I haven’t run into a good use case for leveraging that yet.

What occurred to me is that if DbContext.Add is using reflection, maybe EF Core can find a private DbSet.

So I added a private DbSet to my DbContext class:

private DbSet<Quote> Quotes { get; set; }
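That property lives in my SamuraiContext class. For context, here is a minimal sketch with the connection configuration omitted:

public class SamuraiContext : DbContext
{
  public DbSet<Samurai> Samurais { get; set; }
  private DbSet<Quote> Quotes { get; set; }
}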
 And tried out this code (notice I’m using context.Add, not context.Quotes.Add):
static void AddQuoteToSamurai ()
{
  using (var context = new SamuraiContext ())
  {
    var quote = new Quote("Voila", 1);
    context.Add(quote);
    context.SaveChanges();
  }
}
And it worked! But this isn’t complete yet. I’m breaking my rule of ensuring that only my aggregate root can manage quotes. So this is “dangerous” code from my DDD perspective. However, I was happy to know that EF Core would support this capability.
Currently, Samurai.AddQuote does not have any additional logic to be performed on the quote. What if I were to add in a “RemoveBadWords” rule before a quote can get added?
public void AddQuote (string quoteText)
{
  Utilities.RemoveBadWords(quoteText);
  var newQuote = new Quote(quoteText, Id);
  _quotes.Add(newQuote);
}
Now I have an important reason to use Samurai to do the deed. I can add a second, static AddQuote method that also takes an int. Because it’s static, it’s a pass-through method.
public static Quote AddQuote(string quoteText, int samuraiId)
{
  Utilities.RemoveBadWords(quoteText);
  var newQuote = new Quote(quoteText, samuraiId);
  return newQuote;
}

This works and now I don’t have to have an instance of Samurai to use it:

static void AddQuoteToSamurai ()
{
  using (var context = new SamuraiContext ()) {
    context.Add(Samurai.AddQuote("static voila", 1));
    context.SaveChanges();
  }
}

One thing I was worried about was if I had an instance of Samurai and tried to use this to add a quote to a different samurai. That would break the aggregate root … its job is to manage its own quotes only. It shouldn’t know about other Samurais.

But .NET protects me from that. I can’t call the static method from an instance of Samurai.
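
Here is a quick illustration; the compiler error text is approximate:

var samurai = context.Samurais.Find(1);
// samurai.AddQuote("Voila", 2);  // won't compile:
// error CS0176: Member 'Samurai.AddQuote(string, int)' cannot be accessed
// with an instance reference; qualify it with a type name instead
Samurai.AddQuote("Voila", 2);     // fine: called on the type, not the instance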

I still think that there’s a little bit of code smell from a DDD perspective about having this static, pass-through method in an aggregate root so will have to investigate that (or wait for any unhappy DDDers in my comments). But for now I am happy that I can avoid having to query for an instance of Samurai just to do this one task.

First Foray into .NET Core 2.0

I have to start somewhere so I started with super baby steps. Downloading the .NET Core 2 nightly build and trying to create a simple console app. Right away I failed miserably. The reason? The nightly builds have already jumped the shark! They are working on 2.1.0 and I accidentally grabbed that. And that was a little too bleeding edge for me.

There is a docker image that you can use quite easily but I wanted to try CLI, VS Code and Visual Studio. Therefore I wanted to just install the bits right on my machine.

What I’ll show you are my first tests I did on macOS and on Windows 10. I like to do this just to make sure things are actually running properly. Note that on macOS, I just installed this new version directly on my machine where I have other versions of .NET Core. Key is that there is no production work on there dependent on other versions, so I won’t mess up anything important. The versions can live side by side. It’s just that for someone like me who “knows enough to be dangerous”, it’s easy to get tangled up with the versions even though I know tricks like creating a local nuget.config file and also specifying a version in global.json. On Windows, however, I am using a clean VM that has no other versions of .NET on it. That’s the smart way anyway. Although I did try the not smart way of installing it on my machine that already has all kinds of versions of all kinds of frameworks on it and even with my versioning tricks, I could not get 2.0.0 to do a restore or build.

I mentioned above that I originally downloaded the wrong version of the SDK. (Note that the typical SDK install also includes the runtime … so I got the wrong versions of both. You can read the gory details of that in this GitHub issue which I kept updating as I sorted the problem out.) I want to stick with .NET Core 2.0.0. The installer for that is tucked away in a branch of github.com/dotnet/cli rather than being the absolute latest from master. Instead, go to https://github.com/dotnet/cli/tree/release/2.0.0. There is a solid 2.0.0 version — 2.0.0-preview2-006391. That’s the one I’m using on Windows and macOS.


Make Sure NuGet Knows Where to Find Packages

You can update your global NuGet.Config before creating projects. Alternatively, you can create a project and then add a local NuGet.config file to it.

On Mac, the global config file is at [user]/.nuget/NuGet. Here’s a screenshot if, like me, macOS is not your primary platform and you still struggle with these things.


In Windows, it’s at %APPDATA%\NuGet\.

I keep a link to the “Configuring NuGet behavior” doc handy for when I forget where to find that.

I added these two keys to my PackageSources section:

<add key="nuget.org" value="https://api.nuget.org/v3/index.json" protocolVersion="3" />

<add key="dotnet-core" value="https://dotnet.myget.org/F/dotnet-core/api/v3/index.json"/>

 Now it’s time to create a project

I’m on the Mac, so, to Terminal we will go.

I’ve created a new folder called EFCore20 – I know this is .NET Core, but eventually I plan to add in EF Core 2.0 as well.
My plan is to create a .NET Core console app (this will rely on netcoreapp2.0) and a library. The library will be based on the netstandard2.0 library. That means it’s a library I’ll be able to use from a huge variety of apps, including .NET 4.6.1 based apps. EF Core 2.0 will also rely on .NET Standard 2.0, so any app or API that can consume netstandard can also consume EF Core 2.0. Read more about that in this EF Core 2.0 announcement on GitHub.

I created one folder for each inside the EFCore20 folder:

netcore2console
netstandard2lib

Then I cd’d into the netcore2console app to create that app with the command:

dotnet new console
That’s all it takes. This creates a tiny little Hello World app. The dotnet new command will also perform a dotnet restore after creating the files. This is the first moment you will see if things are working again. If the restore fails, it will tell you immediately.
This is the message I got when had things misaligned:
Error message: error MSB4236: The SDK ‘Microsoft.NET.Sdk’ specified could not be found.
The “specified” SDK is the currently installed version of whatever SDK you are referencing. In my case, the default for dotnet new console is “dotnetcore” with the version being whatever version is installed.
Most likely the problem is that you haven’t provided the correct URI for dotnet to find the proper NuGet packages.
If you don’t want to mess with the global config, you can create a local NuGet.config file in the folder with the app. In its entirety, it would look like:
<?xml version="1.0" encoding="utf-8"?>
 <configuration>
  <packageSources>
   <addkey="nuget.org"value="https://api.nuget.org/v3/index.json"protocolVersion="3"/>
   <addkey="dotnet-core"value="https://dotnet.myget.org/F/dotnet-core/api/v3/index.json"/>
  </packageSources>
</configuration>
Even though dotnet new will call restore, I like to call
dotnet restore
explicitly. (Control freak)
If it restored correctly, then the next step for validation is
dotnet build
And hopefully that gives you no errors as well. It shouldn’t.
Finally,
dotnet run
should result in spitting out
Hello World!
And now I know it’s working. Silly little bit but I want to verify this before I waste my time writing a bunch of code that isn’t going to run because I don’t even have .NET Core installed properly.
Another point of interest is what files were created by dotnet new console.
It’s easier to see them if I open them up in Visual Studio Code which I can do by typing
code .
There are only 2 files. The csproj with the project metadata and a program file.  The csproj file says that the target framework is netcoreapp2.0.


The program file just spits out Hello World when it starts up.
Here is a screenshot of all of the steps at the command line from dotnet new console to the Hello World! output including my extra dotnet restore. Another thing I like about the explicit restore is that it’s showing me more detailed info about the restore.
Next I want to make sure I can get a .NET Standard library also working.
Still in the terminal (or console or PS if you’re on Windows), I now get into the 2nd sub folder, EFCore20/netstandard2lib. dotnet new library defaults to using netstandard for the library. So that makes it easy.
dotnet new library
The library is created and the packages it needs are restored.
Here are the default contents of this new library, again, shown in VS Code.
Notice that its csproj says the target framework is netstandard2.0.
And there’s an empty class file.
I’ll modify the class to add a public method that returns a string: “Why, Hello There!”.
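Since the screenshot isn’t reproduced here, the modified class looked roughly like this. Class1 is the template’s default name and HiYa is the method I call from the console app below:

namespace netstandard2lib
{
  public class Class1
  {
    public string HiYa()
    {
      return "Why, Hello There!";
    }
  }
}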

Now I’ll go into the csproj for the console app and give it a project reference to the library. The intellisense does not (yet?) help me with this and my memory sucks*, so I head to Nate McMaster’s handy project.json to csproj mind mapper aka Project.json to MSBuild conversion guide to remind myself the syntax of a project reference. I add this below the property group section in the console app’s csproj file

 

<ItemGroup>
  <ProjectReference Include="..\netstandard2lib\netstandard2lib.csproj"/>
</ItemGroup>
Running dotnet restore and dotnet build again will help identify if you’ve got a typo in there. I often do.
Now I’ll modify the Program.cs file to also output the results of calling the HiYa method from my library:
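Again, in place of the screenshot, the modified Program.cs looked roughly like this:

using System;
using netstandard2lib;

namespace netcore2console
{
  class Program
  {
    static void Main(string[] args)
    {
      Console.WriteLine("Hello World!");
      // Call into the .NET Standard 2.0 library
      var hello = new Class1();
      Console.WriteLine(hello.HiYa());
    }
  }
}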

And then back out to my terminal window (though I can also do it inside VS Code’s built-in terminal). I dotnet build the library and then change to the console folder. Rebuild that and run it and …voila

  netcore2console dotnet run

Hello World!

Why, Hello There!

  netcore2console

So now I know that .NET Core 2 (preview from a nightly build) is running properly on my machine and that I can create and use a .NET Standard 2.0 library. That means I can go ahead and confidently start creating a .NET Standard 2.0 library to host some EF Core 2.0 logic.

I also repeated this entire thing on my Windows machine. Not only does that let me know I can use this version of .NET Core there, but I will also do some work from within Visual Studio with the .NET Core 2.0 preview.

*(duh, I should just make myself a snippet in VS Code!)

Mashup: SQL Server on Linux in Docker on a Mac with Visual Studio Code

I’ve been having a lot of fun with the new mssql extension for Visual Studio Code. I have an article coming in MSDN Magazine and am planning more fun as well. My latest experiment was doing a big mashup taking advantage of the fact that there is now a Linux version of SQL Server. So we are no longer limited to hosting it on Windows or Azure. The most lightweight way to host SQL Server on Linux is in a Docker container. While I am sitting in front of a MacBook typing this, I’m by no means working towards abandoning my Windows development or Windows machines. I’m just happy to have more options at my disposal as well as to have the ability to share what I am learning beyond the world of Windows developers.

Containers are not stateful. There are ways around that (I’ll show you below) but I only know enough to be dangerous here. This is a great way to use SQL Server at design time. Using this for production is a totally different story and you need to do a lot more research and soul-searching before using that option.

On the other hand, there are those who do have that particular goal:

I had to go through a number of documents to do this and of course I got stuck even with those resources at my disposal. So I will share the full path of how I got this setup working.

Pre-Requisite

I already have Docker for Mac installed on my MacBook. Here is the installation link if you need to perform that step. Keep in mind that you can’t do this on a VM. I tried as I wanted to repeat this with a clean setup.

Be sure that Docker is set to use at least 4GB of memory.


Getting the SQL Server Docker Image

This is what makes the whole thing so easy! Microsoft has created an official docker image with SQL Server for Linux already on it.

In the terminal window, you can pull and install the official image with

 sudo docker pull microsoft/mssql-server-linux

Once it’s installed, the ‘docker images’ command will show you that the image is now available on your machine.


Although I just installed it today, you can see that the image I’m using — which is the latest version — was created by Microsoft 3 weeks ago.

Spinning Up a Container (or Two) From the Image

Now that Docker is aware of the image, you can create a container from it — which is a running instance of the image.

Because we’re on a Mac and waiting for a “bug” to get fixed, we will actually create two containers.

Depending on your familiarity with Docker, you may or may not be aware that containers are not stateful. Once you delete a container, it’s all gone! If you have persisted data in that container, it, too, is all gone. However, Docker has a feature called “Volumes” which are a way to retain state between docker instances. So when one instance is shut down, the state is stored in a Volume. When another container instance is spun up, that volume provides the container with the state from the previous instance. This is how it’s possible to use containers for databases.

Here’s a great tutorial on volumes: https://rominirani.com/docker-tutorial-series-part-7-data-volumes-93073a1b5b72
And the official docker doc:  https://docs.docker.com/engine/tutorials/dockervolumes/#/creating-and-mounting-a-data-volume-container

However there’s an issue (which looks like the resolution is around the corner) with Docker on Mac hosting the sql-server-linux image. This prevents us from using a volume for persistence in the simple way. So instead, we’ll create a separate container that is a “data volume container”, then we will point the container that will run SQL Server to the data volume container.

Creating the volume container

I’ll name mine mssqldata. Here’s the command to create it. (Don’t miss the full length of the command!)

docker create -v /var/opt/mssql --name mssqldata  microsoft/mssql-server-linux /bin/true

This volume container still uses the image as its base. But we won’t be running SQL Server from this instance.

Creating the SQL Server container

Now you can create a container where you will run SQL Server, and that container will use the data volume container for the persisted data.

docker run -e 'ACCEPT_EULA=Y' -e 'SA_PASSWORD=Passw0rd' -p 1433:1433 --volumes-from mssqldata -d --name sql-server microsoft/mssql-server-linux

The two environment variables (accept_eula and sa_password) are required. The userid is (gulp) ‘sa’. The password requirements are: “At least 8 characters including uppercase, lowercase letters, base-10 digits and/or non-alphanumeric symbols.”. Mine’s really fancy!

Once these exist,

docker ps

will only show you the regular container. The volume container is hidden so you need

docker ps -a

to see it.

Notice that the container I will run SQL Server on is on a port whereas the data volume container has a different status, “Created”, and is not exposed on a port.

Test the Connection From the SQL Command Line

You don’t have to do this but it made me happy and it was fun. Did you know that you can interact with SQL Server from the command line? The tools are installed inside the container but that’s a messy way to use them. It’s easier to just install them on your computer directly. The command line tools for Mac are sql-cli.

Note: On May 16th, Microsoft released the macOS version of sqlcmd and bcp (bulk copy). Good to know if you are already familiar with sqlcmd. Here’s their blog post with more details.

You can install that with:

npm install -g sql-cli

Start it up with the mssql command.

At minimum, to connect you need to identify the server (which is at localhost by default; you don’t need to specify the port) and the password. It will presume sa for the userid.

mssql -s localhost -p Passw0rd

If it’s successful, you’ll get some information followed by a new prompt, mssql.

Connecting to localhost...done

sql-cli version 0.4.14
Enter ".help" for usage hints.
mssql> 

 Enough of that. Now I get to use my new favorite tool. The mssql extension in Visual Studio Code.

Connect in Visual Studio Code

Once you have mssql installed in VS Code, you can begin by creating a new sql file. Either via the commands (F1 for the command palette, MS SQL to see the commands and then New Query). This will prompt you for a connection – one parameter at a time. You can also start with the MS SQL Connect command (⇧⌘C). The extension will prompt you for each parameter.

Server name:  localhost
Database name: (enter)
User name: sa
Password: Passw0rd
Save Password (yes or no)
Profile name: [your choice]

If you enter everything correctly, not only will it connect, but the details will get stored as a connection profile in the VS Code settings and be available for subsequent connections.

You can see the connection status as it is connecting and then when connected in the lower right hand corner of the IDE.


In the SQL file open in the editor, you can type SQL and see some existing snippets as well as get help from IntelliSense, which has read the schema of the data on the server. So far there’s not much.

Selecting the sqlListDatabases snippet and then executing it (right click for Execute Query on the context menu or ⇧⌘E) displays the databases:


Now you can use TSQL to create databases, create database objects and query data. In the results pane you will see data as a grid similar to what you might see in SSMS. You can also export results to CSV or JSON. I’ve recently written an article about all of the cool things you can do with mssql which will be in the June 2017 MSDN Magazine. But that connects to a SQL Azure database. In the meantime you can just go to the docs for the extension (aka.ms/mssql-marketplace).

Creating a Database and a Table

I let some more snippets help me to create a database and a table.

The first was the sqlCreateDatabase snippet, where I changed the snippet’s database name placeholder to create a new database called LinuxReally, then executed that with ⇧⌘E.

Re-running the select name from sys.databases command showed that the new database was now in the list.

Next I leveraged the sqlCreateTable snippet to help me create a new table. I named the table DatabasesIKnow and gave it three columns:

CREATE TABLE dbo.DatabasesIKnow
(
  Id INT NOT NULL PRIMARY KEY, -- primary key column
  DatabaseName [NVARCHAR](50) NOT NULL,
  KnowIt [Bit]
);

For some reason, the intellisense cache did not automatically refresh when I created this. Possibly because it was a new database. Even the mssql extension’s Refresh Intellisense Cache command did not kick it in. I got it working by disconnecting and reconnecting, this time choosing the LinuxReally database rather than letting the extension connect to master by default. When I did that, I could see the message “Updating Intellisense…” in the status bar. After I had done this, the intellisense did auto refresh any time I modified the database schema.

Once I had the new database, I could execute “select * from dbo.DatabasesIKnow” and see the proper results. In my case, since I hadn’t added data, there were no rows. But clearly it was reading from my table.


Taking Down the sql-server Container

Now come the big docker volume tests. I first disconnected from the database inside of VS Code with ⇧⌘E. Then I stopped and removed the sql-server container with the two commands:

docker stop sql-server
docker rm sql-server

But I left the data volume container (mssqldata) running.

I then created a new container instance using the same command as earlier:

docker run -e 'ACCEPT_EULA=Y' -e 'SA_PASSWORD=Passw0rd' -p 1433:1433 --volumes-from mssqldata -d --name sql-server microsoft/mssql-server-linux

In VS Code, I then reconnect to the server and my new database, which was easy since it had been stored in the connection profiles. It’s the first one, localhost: LinuxReally.


The connection was successful and listing databases showed my new LinuxReally database. Select * from the DatabasesIKnow table showed the schema. So my database was persisted in the data volume container even though I had killed and recreated the sql-server container.

Next test: take down both containers.

Now it was time to see what happens when I stop and remove both containers!

docker stop sql-server
docker rm sql-server 
docker stop mssqldata
docker rm mssqldata

Next I use the same command I used previously to restart the sql-server container with the parameter to use the (not running) mssqldata volume container.

docker run -e 'ACCEPT_EULA=Y' -e 'SA_PASSWORD=Passw0rd' -p 1433:1433 --volumes-from mssqldata -d --name sql-server microsoft/mssql-server-linux

That fails. I need to first run the data volume container. Unfortunately, I re-ran the create command for that and overwrote the existing container, so all was lost. GOOD LESSON THERE! 🙂

So I started from scratch. In the brand new data volume container I recreated my database and table.

After more experiments, I realized that I had misunderstood the Docker documentation on volumes. You can create copies of data volume containers and remove those. But you need to be more of a docker ninja. While you can make all kinds of copies of the container, once you RM all of the related volumes, it is gone gone. This blog post does clarify some of the info I was still confused about wrt creating/backing up/restoring data volume containers: tricksofthetrades.net/2016/03/14/docker-data-volumes/.

So the bottom line is you need to leave the Data Volume Container running in some format (the original or some flavor of copy). (I’m still finding it hard to believe that it doesn’t somehow get stored as a file you can re-run so I will update this as soon as someone corrects me!)

Windows, Mac, Linux, Azure and Anywhere You Can Host a Docker Container

And if you think this is only something to do on OS X (because that’s where I’m doing it) .. no no no! Did you know there is now Docker for Windows? And that VS Code is cross platform? This is such a great way to quickly get SQL Server up and running in your development environment.


And as I said on Twitter, comparing the experience of pulling the docker image and spinning up a container to the experience of installing SQL Server on Windows is something like this:

Cloning a GitHub Repo in Visual Studio 2017 …and a Quiz

When showing off some VS2017 features at our VTdotNET meetup, I made a last minute decision to demo the ability to clone a repository right from GitHub. Then I thought I would combine that with other things I planned to demo.

I already had just the right repo sitting in my GitHub account. A small ASP.NET Core project that was built with Visual Studio 2015 using project.json for its metadata. It’s at https://github.com/julielerman/NetCoreSolutionToMigrateToVS2017.

I had this same solution on my laptop already to use for another demo: showing off VS2017’s ability to auto-migrate a project.json based solution to the new csproj based format for .NET Core projects.

Clever me, I decided to kill two birds with one stone. Clone the repo and have the migration run as it was opening that solution.

So I started up Visual Studio 2017 (since I wanted to show how fast that is) and began the process of cloning the solution from my GitHub repo. I already had my credentials set up and was able to go to File, Open and Open from Source Control.


This opens the Team Explorer window and I clicked the Clone option, which then opened a window showing all of the accounts I’m connected to.


I expanded my own account and scrolled down to the repo I wanted, selected it and clicked the Clone button.


The solution got cloned and then it opened up in Visual Studio.


But it never triggered the migration! And if you look at the solution, you can see that the project I expanded still has its xproj file and its project.json file. At the time I was confused but now that I know what happened, the answer to why this didn’t migrate is very visible in that screenshot of the Solution Explorer. However, one of the developers who was watching this and had just done another demo with Visual Studio 2017, identified the problem quickly.

Let’s move on for some more clues.

I closed the solution. Then from File/Open, I browsed to the place where it had been saved on my computer and selected the sln file to open. This time, opening the same exact solution in VS2017 did indeed trigger the migration, which is quite obvious thanks to this screen.


Then I let the migrate feature do its job. When it was finished, you can see that the project no longer has its xproj and project.json files.


Now, look at this new Solution Explorer screenshot compared to the previous one.

And then take a look at the list of new VS2017 features in the Release Notes (https://www.visualstudio.com/en-us/news/releasenotes/vs2017-relnotes) under the section IDE and see if you can tell what the cloning did differently when opening the solution than just opening the solution directly from the drive.

Also, I will find out if this is by design or possibly a behavior that can get modified to behave the way I had expected. 🙂

DotNet Core Version Confusion

I see people scratching their heads over this a lot so am dropping it here even though I’m sure it’s stated in many places already.

When you are at the dotnet command line (aka the CLI aka Command Line Interface) and type ‘dotnet’ you will be shown the version of the runtime.

When you add the version parameter (‘dotnet --version’) that will return the version of the SDK (aka CLI aka Command Line Interface) that you are working with.

Here’s an example:

If you are confused, you’re not alone. There’s a good discussion/debate on GitHub about how to alleviate this confusion at What should dotnet --version display?