Analyzing an Uploaded Image to a Blob using Azure Cognitive Services & Saving Analysis Results to CosmosDB

We previously discussed in detail how to build Azure Functions using Visual Studio 2017 and VSCode (see my earlier articles Building Azure Functions in Visual Studio Code using .NET Core and Building Azure Functions in Azure Portal & Visual Studio 2017 using .NET Core). Today we're going to build on that knowledge and create a blob trigger function app in Azure using Visual Studio 2019 (you can also use Visual Studio 2017 to apply all the steps and features discussed in this post). The function will be triggered whenever an image is uploaded to a blob storage container; we will then analyze the image using Azure Cognitive Services and save the analysis results into a CosmosDB.

This post assumes that you already have an Azure account ready to use, in addition to Visual Studio 2019 installed on your machine. If not, you can create a free trial account by subscribing on the Azure portal.

Creating a Blob Trigger Function

Before creating the blob trigger function in Visual Studio, make sure you have set up a blob storage account in Azure (for help you can refer to the Microsoft docs article Create a storage account). Back in Visual Studio, we want to create a function that triggers whenever a new image is uploaded to a blob storage account. To achieve this, there is a template in Visual Studio 2019 that helps with this, as shown below.

Capture 1.PNG

Choose the Azure Functions project template after filtering by the Cloud project type, then specify the project name and location.

Capture 2.PNG

When we click Create, a new form shows up to select the type of function to be created. Since we're creating a blob trigger function, we will choose the "Blob Trigger" template option. After choosing this option with the target framework set to .NET Core, we have to specify which storage account to use and the path under this account.

Capture 3.png

Visual Studio 2019 provides the option to use the Storage Emulator as the storage account. This helps in simulating most of the features of an Azure storage account locally on your machine (for help with the storage emulator and how to use it, you can refer to the Microsoft docs article Azure storage emulator). But this option is limited, so we will use the storage account that already exists in our Azure subscription. When we select Browse as shown above, you will be asked to sign in to your Azure account so that the available storage accounts in your subscription are listed for you to choose from. Follow the steps below.

Capture 4.png

After signing in, a list of available storage accounts for the selected subscription will be shown. Select the storage account created for this solution from the list and click Add. You will be returned to the form where the templates are listed, but this time you will see the selected storage account and be asked for the connection string and the path where the images are uploaded or saved in this blob.

Capture 6.png

To get the connection string, go to the portal and browse to the selected storage account. When the storage account blade shows up, navigate to the Access keys section under Settings. Another blade will appear that contains several keys. One of these values is the connection string used to connect to this storage account.

Capture 7.png

Now that we have provided the connection string value to the project template, only the path is left. The path, or container, will simply be "images" since we're uploading images to this blob. You can choose whatever path you find suitable. After clicking "OK", the project will be created and you will have something similar to the figure shown below.

Capture 8

We will make some modifications before we start developing the function. First, we will rename the function so that it reflects what we're doing; I chose to name it "BlobImageAnalysisFunction". Second, we will remove the long connection string value from the function attributes and add it to the local.settings.json file so that the connection string value can be loaded from there. I chose to name the connection string key "blobDemoConnectionString" and set its value in the settings JSON file. These changes are shown below.

Capture 9Capture 10
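If the figures are hard to read, here is a minimal sketch of what my local.settings.json might look like at this point. Only the entries relevant to this post are shown, and the connection string value is a placeholder that you should replace with the value copied from the Access keys blade.

{
    "IsEncrypted": false,
    "Values": {
        "AzureWebJobsStorage": "UseDevelopmentStorage=true",
        "blobDemoConnectionString": "DefaultEndpointsProtocol=https;AccountName=<your-storage-account>;AccountKey=<your-key>;EndpointSuffix=core.windows.net"
    }
}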

As you can see in the figures above, the function takes several parameters. The first one is decorated with the BlobTrigger attribute, which itself requires two arguments: the path, made up of the container name in the blob storage that the images will be uploaded to and that I want Azure to monitor for newly uploaded images, and the connection string setting name, which we set while creating the project and moved to the configuration file. The path argument in the BlobTrigger attribute ends with /{name}, similar to how routes are defined in ASP.NET Core; the {name} parameter in curly braces indicates that the function will receive the name of the uploaded blob as an input parameter. Next we have the actual input parameter, a Stream named myBlob. When a new blob appears in the images container, the Azure Functions runtime automatically opens that blob and passes all the needed information in the form of a Stream. The next input parameter is the blob name as a string, and finally we have the TraceWriter for logging purposes.

As the Microsoft docs article Azure Blob storage bindings for Azure Functions suggests, we can also use CloudBlockBlob from the Microsoft.WindowsAzure.Storage.Blob namespace instead of Stream; you can reference it through the NuGet packages in your project. Notice that after changing the Stream parameter to CloudBlockBlob, you should adjust the first line of code in the function body responsible for logging so that it still confirms the function is working properly. You can use myBlob.Name instead of the name parameter in the function signature, and change myBlob.Length to myBlob.Properties.Length, where the Properties object offers many other useful attributes as well. All the changes we discussed are reflected in the figure below.

Capture 11.png
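If the figure is hard to read, a minimal sketch of the updated signature and log line looks like the following (the complete listing appears a bit further below):

[FunctionName("BlobImageAnalysisFunction")]
public static void Run(
    [BlobTrigger("images/{name}", Connection = "blobDemoConnectionString")] CloudBlockBlob myBlob,
    string name, TraceWriter log)
{
    // Log the blob name and size using the CloudBlockBlob properties instead of the Stream
    log.Info($"C# Blob trigger function Processed blob\n Name:{myBlob.Name} \n Size: {myBlob.Properties.Length} Bytes");
}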

Since the blob we're working with lives in a private container, we need permission to access it, and we may in turn pass this access on to other functions or components. This can be achieved through a Shared Access Signature; you can refer to Using shared access signatures (SAS) to learn more about this concept, including the different types of SAS and how to create a SAS in C#. So, we will create a small method that creates a SAS for us in order to access this blob. I'm not doing anything new here; the sample code below for creating the SAS value was written with the help of the Microsoft docs reference mentioned above.

public static string GetBlobSharedAccessSignature(CloudBlockBlob cloudBlockBlob)
{
    string sasContainerToken;

    // Grant read-only access, valid from a few minutes in the past (to allow for clock skew)
    // until one hour from now, which is enough for this demo.
    SharedAccessBlobPolicy sharedPolicy = new SharedAccessBlobPolicy()
    {
        SharedAccessStartTime = DateTime.UtcNow.AddMinutes(-5),
        SharedAccessExpiryTime = DateTime.UtcNow.AddHours(1),
        Permissions = SharedAccessBlobPermissions.Read
    };

    // The SAS token is generated by the blob itself based on the policy above.
    sasContainerToken = cloudBlockBlob.GetSharedAccessSignature(sharedPolicy);
    return sasContainerToken;
}

The method above takes the currently accessed blob as an input parameter. In the method body we specify the permissions on this blob, which is Read only in our case since we're only reading images uploaded to the blob. We also set the start time and expiry time so that the token is valid for roughly one hour, which is all I need to finish this demo. Finally, the SAS is generated by the blob itself and we return the token as the result.

Now, in our blob trigger function, we will write some code that logs some blob information, invokes the GetBlobSharedAccessSignature method to generate the SAS token, and builds the URL to this blob. We will log all of this information to make sure that our function is working well. The modified code is as follows.

public static class BlobImageAnalysisFunction
{
    [FunctionName("BlobImageAnalysisFunction")]
    public static void Run(
        [BlobTrigger("images/{name}", Connection = "blobDemoConnectionString")] CloudBlockBlob myBlob,
        string name, TraceWriter log)
    {
        log.Info($"C# Blob trigger function Processed blob\n Name:{myBlob.Name} \n Size: {myBlob.Properties.Length} Bytes");
 
        string blobSas = GetBlobSharedAccessSignature(myBlob);
        string blobUrl = myBlob.Uri + blobSas;
 
        log.Info($"My Blob URL is: {blobUrl}");
        log.Info($"My Blob SAS is: {blobSas}");
    }
 
    public static string GetBlobSharedAccessSignature(CloudBlockBlob cloudBlockBlob)
    {
        string sasContainerToken;
 
        SharedAccessBlobPolicy sharedPolicy = new SharedAccessBlobPolicy()
        {
            SharedAccessStartTime = DateTime.UtcNow.AddMinutes(-5),
            SharedAccessExpiryTime = DateTime.UtcNow.AddHours(1),
            Permissions = SharedAccessBlobPermissions.Read
        };
 
        sasContainerToken = cloudBlockBlob.GetSharedAccessSignature(sharedPolicy);
        return sasContainerToken;
    }
}

All we did was generate the SAS token value, append it to the blob URI to produce the blob URL, and log this information to make sure that our function is working. Now try to run the project, then go to the blob and upload a new image; you will see that your function is triggered and the logged values show up on the screen. When doing so, you should have output similar to the below.

Capture 12.PNG

Capture 13.PNG

When uploading a new image to the blob from the Azure portal, your function output will be something similar to the below.

Capture 14

Capture 15.png

Capture 16.PNG

Now that we have created our function and tested it successfully, we can move to the next part, which is setting up the Face API that we will use to analyze our uploaded image.

Creating a Face API Cognitive Service

The Azure platform provides an intelligence framework in the form of Artificial Intelligence services, Machine Learning services, and Cognitive Services. Azure Cognitive Services are in fact web services that you call in order to do some analysis. These services include vision services, of which image processing is a part, natural language processing, speech-to-text conversion, and more. You can find a list of all the provided services in this reference: Azure Cognitive Services.

Since our demo analyzes an image uploaded to a blob, what we need is the vision cognitive services, and specifically the Face API. There are several services offered by the vision bundle; one of them, the Video Indexer API, could also be useful here, so you can try to mimic the same demo based on that service but for video files instead of images.

Capture 17.PNG

So now we go to the Azure portal in order to create the Face API service. Navigate to the "Create a resource" blade and search for Face; you will see something like the search result below. After that, click on Face (AI + Machine Learning), and a blade titled Face shows up asking you to create your new Face API. Follow the screens below for creation instructions.

Capture 18Capture 19Capture 20

After providing the name of the API to be created, I chose the free pricing tier since we're doing a small demo here. Of course, if you're using this Face API for production purposes or a real scenario, the free pricing tier isn't enough since it has a limited number of calls. When the Face API is created successfully, you will be navigated to a blade similar to the one below (since Azure is updated at a fast pace, your blade may look different from what you see here).

Capture 21

As we mentioned before, the Azure Cognitive Services are web services that provide intelligent functionality for your data and image processing. This functionality is exposed in the form of Web APIs. This means that I can reach any functionality by invoking an HTTP endpoint, posting some data, and getting back the results from the API in JSON format. Since these services are Web APIs, in order to build my demo I need an HTTP client that can invoke the Face API, provide it with the uploaded image URL that already contains the SAS token, and take the results of the analysis, which will later be posted to a CosmosDB. We can achieve this by referencing the Face API client library in my project from NuGet packages. The library name is Microsoft.ProjectOxford.Face.DotNetStandard, and I'm referencing the .NET Standard version since my project is .NET Core based.

Next we're going to write a method that invokes the Face API image processing service asynchronously. It returns a Face array, since the uploaded image may contain more than one person, and accepts the image URL as a parameter, where the URL contains the SAS token generated earlier. The method is shown below.

public static async Task<Face[]> GetImageAnalysis(string imageUrl)
{
    // Create the Face API client using the subscription key and endpoint stored in local.settings.json
    var faceServiceClient = new FaceServiceClient(
        Environment.GetEnvironmentVariable("faceApiSubscriptionKey"),
        Environment.GetEnvironmentVariable("faceApiEndPointUrl"));

    // The face attributes we want the analysis to include
    var types = new FaceAttributeType[] { FaceAttributeType.Emotion,
        FaceAttributeType.Age, FaceAttributeType.Gender,
        FaceAttributeType.Smile };

    // Detect the faces in the image and request the attributes specified above
    return await faceServiceClient.DetectAsync(imageUrl, false,
                                               false, types);
}

To explain this method, we start from the first line, where we initialize the FaceServiceClient used to invoke the Face API. The FaceServiceClient takes two parameters: the subscription key value and the API endpoint. I saved these values in local.settings.json, and this is how their values are read from the configuration file. You are probably wondering what these values are and where to get them. The first parameter is the key needed to access the Face API we created in Azure; it grants access to the API. The second parameter is the URL through which the Face API can be invoked. You can get these values from the Keys section and the Overview section of the created Face API in Azure. Check the figures below for a clear view.

Capture 22Capture 23
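Here is a minimal sketch of the two entries I added to the Values section of local.settings.json; the values are placeholders, so use your own key from the Keys blade and the endpoint shown in the Overview blade.

    "faceApiSubscriptionKey": "<your-face-api-key>",
    "faceApiEndPointUrl": "https://<your-region>.api.cognitive.microsoft.com/face/v1.0"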

The next line of code specifies which face attribute types should be included in the analysis of the image. This is done by initializing an array (or IEnumerable) of FaceAttributeType; for my demo I included Emotion, Age, Gender, and Smile. The last line calls the method responsible for detecting faces in the image based on the specified attribute types and returns the results. Notice that the call to the detect method is async, so we have to await it, and since we're using it in the body of the Run function, we have to change the Run function to async Task. The modifications are shown below. Next we're going to set up the CosmosDB to save our image analysis results and publish the blob trigger function to Azure for testing.

Capture 24.PNG
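If the figure is not clear, here is a minimal sketch of the changed parts of the function at this stage; the full listing, including the CosmosDB binding we add in the next part, appears further below.

[FunctionName("BlobImageAnalysisFunction")]
public static async Task Run(
    [BlobTrigger("images/{name}", Connection = "blobDemoConnectionString")] CloudBlockBlob myBlob,
    string name, TraceWriter log)
{
    // ... existing logging and SAS generation code ...

    // Await the asynchronous Face API call with the SAS-enabled blob URL
    var imageAnalysis = await GetImageAnalysis(blobUrl);
}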

Saving Image Analysis to CosmosDB

Our target is to analyze an uploaded image via Azure Cognitive Services, specifically through the Face API, and then save the analysis results to CosmosDB, all through the blob trigger function hosted in Azure. So the missing part now is the CosmosDB. I will not give a step-by-step guide to creating a CosmosDB in Azure here; for more details you can refer to this documentation: Azure Cosmos DB Documentation. I already created a CosmosDB account in my Azure portal, and this database contains a collection named images to hold the analysis information. Below is what my CosmosDB looks like.

Capture 25.PNG

Now that we have our CosmosDB ready in Azure, we need to modify our blob trigger function to save the analysis results to the images collection in this database after calling the Face API. In order to do so, we have to add the appropriate NuGet package for accessing CosmosDB: Microsoft.Azure.WebJobs.Extensions.CosmosDB is the package to install in order to access our CosmosDB and the images collection.

A very important note here: when you install the CosmosDB package, you may run into conflicts with the packages already installed in your project. One of the problems you may face is that the BlobTrigger attribute is no longer recognized. This is because the latest stable version of WebJobs.Extensions.CosmosDB also relies on the latest version of Microsoft.NET.Sdk.Functions, so you will be forced to update that package and, in the end, install Microsoft.Azure.WebJobs.Extensions.Storage so that BlobTrigger is recognized again.

As you can see in the figure above, my CosmosDB account name is "rcimagesdemodb" and the collection that holds the analysis results is named "images". Note that the images collection lives in a database named "rcBlobImageDB". From the Data Explorer section you can fetch and query the data found in this collection.

To push the analysis information into our CosmosDB, we have to modify our function to accept an output binding to the database. Moreover, we need to hold our analysis data in an object that can be used to track and push the data into the collection. For this purpose we will create a simple class that holds the blob name and the collection of faces that were analyzed by the Face API.

[CosmosDB("rcBlobImageDB", "images", ConnectionStringSetting = "cosmosDBConnectionString")] 
IAsyncCollector<FaceAnalysis> analysisResults

The code above should be added to the function Run signature as an output binding, which gives us the ability to write to our CosmosDB images collection. Several things are worth mentioning to clarify what this code does. First, the CosmosDB attribute takes several parameters: the database name, which is "rcBlobImageDB" in our case, the collection name, which is "images", and several properties, of which we use only one for now, the connection string setting used to access the CosmosDB. As with all resources in Azure, the connection string can be fetched from the Keys blade inside the CosmosDB account you've created. I also added the connection string as a key inside local.settings.json with the key name "cosmosDBConnectionString", and its value is the one fetched from the Azure portal. After adding this binding to the function, we need to define the collector that will hold our analysis data; I chose to use the IAsyncCollector type. With IAsyncCollector, I keep adding objects to this collector, and when the blob trigger function executes successfully, its contents are pushed to the CosmosDB images collection based on the database name, collection name, and connection string provided in the attribute.

The class I created, named FaceAnalysis, is a simple public class that contains only two properties: the blob name and the faces collection.

public class FaceAnalysis
{
    public string BlobName { get; set; }
 
    public Face[] Faces { get; set; }
}

Now that we have the output binding to the CosmosDB and the class that will hold the data to be pushed to the images collection, we have to modify our function body to get the analysis information and add it to the CosmosDB via our IAsyncCollector. The function body is now modified as follows.

[FunctionName("BlobImageAnalysisFunction")]
public static async Task Run(
    [BlobTrigger("images/{name}", Connection = "blobDemoConnectionString")] CloudBlockBlob myBlob,
    [CosmosDB("rcBlobImageDB", "images", ConnectionStringSetting = "cosmosDBConnectionString")] IAsyncCollector analysisResults,
    string name, TraceWriter log)
{
    log.Info($"C# Blob trigger function Processed blob\n Name:{myBlob.Name} \n Size: {myBlob.Properties.Length} Bytes");
 
    string blobSas = GetBlobSharedAccessSignature(myBlob);
    string blobUrl = myBlob.Uri + blobSas;
 
    log.Info($"My Blob URL is: {blobUrl}");
    log.Info($"My Blob SAS is: {blobSas}");
 
    // Send the SAS-enabled blob URL to the Face API and wait for the analysis results
    var imageAnalysis = await GetImageAnalysis(blobUrl);

    FaceAnalysis faceAnalysis = new FaceAnalysis
    {
        BlobName = myBlob.Name,
        Faces = imageAnalysis
    };

    // Queue the document; it is written to the CosmosDB images collection when the function completes
    await analysisResults.AddAsync(faceAnalysis);
}

Our blob trigger function is ready now. We mapped the BlobTrigger to our blob in Azure and created the SAS token needed to grant access. We also created the Face API in our Azure portal and wrote the code to access this API and get the analysis information from it. Finally, we mapped the CosmosDB created in the Azure portal to be used in our function and save the analysis results in our images collection.

Since everything is ready now, all that is left is to push our function to Azure and run it to check the results for any image uploaded to our images blob. To do so, I will push the function code to the git repository of the function app that I already have in my Azure subscription. Note that we discussed in several previous posts how to create a function app and push it to Azure, so you can refer to those posts for help with this.

A very important note to keep in mind: when we push our blob trigger function to Azure, don't forget to add all the keys we added to the local.settings.json file to the Configuration section of the Function App in Azure, or the function will not be able to execute and runtime errors will occur. The function app that I have now in my Azure portal looks like the one below, with the configuration keys shown along with the application settings.

Capture 26Capture 27

Action time! Now my function app is up and running in Azure and the Face API we created before is also there. All we have to do is navigate to our images blob storage blade, upload a new image file from there, and wait until we see the results saved in the CosmosDB. You could build a sample web application that accesses the blob and sends the image to it, but we will use the Azure portal directly for testing now.

As mentioned, I navigated to the images path in the blob storage and uploaded a new image of mine there. Once the image was uploaded to the blob, the function was triggered and all the analysis was exported to the CosmosDB. The results and the actions are shown below.

Capture 28Capture 29Capture 30

The results of the analysis are now available in our images collection in the CosmosDB. You can use this information to query specific fields for further analysis, such as the fields in the Emotion section. The results are shown below.

{
    "BlobName": "image.jpeg",
    "Faces": [
        {
            "FaceId": "00000000-0000-0000-0000-000000000000",
            "FaceRectangle": {
                "Width": 438,
                "Height": 438,
                "Left": 393,
                "Top": 220
            },
            "FaceLandmarks": null,
            "FaceAttributes": {
                "Age": 38,
                "Gender": "male",
                "HeadPose": null,
                "Smile": 1,
                "FacialHair": null,
                "Emotion": {
                    "Anger": 0,
                    "Contempt": 0,
                    "Disgust": 0,
                    "Fear": 0,
                    "Happiness": 1,
                    "Neutral": 0,
                    "Sadness": 0,
                    "Surprise": 0
                },
                "Glasses": "NoGlasses"
            }
        }
    ],
    "id": "acc7fb99-035e-4d32-a3be-97ed7b970277",
    "_rid": "jTwvAK5+XfoCAAAAAAAAAA==",
    "_self": "dbs/jTwvAA==/colls/jTwvAK5+Xfo=/docs/jTwvAK5+XfoCAAAAAAAAAA==/",
    "_etag": "\"850093a1-0000-0e00-0000-5cd81d150000\"",
    "_attachments": "attachments/",
    "_ts": 1557667093
}

In this post, we did something interesting: we used the powerful Cognitive Services provided by Azure to analyze an image uploaded to a blob. All this was done through our blob trigger function, which accessed the blob to read the uploaded image, sent the image to the Face API for analysis, and finally exported the results to a CosmosDB. This was all achieved using the power of serverless computing with Azure Functions!

Running ASP.NET Core App in Kubernetes With Docker for Windows 10

Today, we are going to get started with setting up Docker v18.06.0-ce-win72 (19098), which now supports Kubernetes v1.10.3, on machines running Windows 10. We will finish by running an ASP.NET Core app with Docker after creating a deployment, which manages a Pod that runs the desired container. Consider this a quick tutorial for getting started with Kubernetes; after it, any developer can dig deeper and learn more about using Kubernetes with Azure through what we call Azure Kubernetes Service AKS (or Azure Container Service).

What is Docker?

By definition, Docker is an open platform for developers and sysadmins to build, ship, and run distributed applications, whether on laptops, data center VMs, or the cloud.

Docker is the biggest and most popular container system. It is a way to containerize your application and put everything your application needs into a small image that can then run on any computer that has Docker installed. It removes the obstacles to running your application on any platform caused by configuration issues, missing libraries, or any of the other problems we face when hosting an application. Therefore, any system that has Docker installed can run your containerized application smoothly.

We have to differentiate between a container and a VM; a container is definitely not a virtual machine, and what makes them different is shown in the figure below.

Docker

As we can see, in a VM you have to set up a complete operating system with all the libraries required to run your application in order to host the app inside it. With Docker, you do not have to install and configure a full operating system; Docker acts as a translation layer and does all the translation needed. It simply runs a container on your operating system that has all the binaries and libraries needed plus the actual application itself, and, most interestingly, Docker can share these libraries across multiple applications.

What is Kubernetes?

Kubernetes, by definition, is a portable, extensible open-source platform for managing containerized workloads and services that facilitates both declarative configuration and automation. It has a large, rapidly growing ecosystem, and Kubernetes services, support, and tools are widely available.

Simply put, Kubernetes is a container orchestrator: it takes a declared state of how you would like your microservices or containers to be configured and makes it happen across machines. As an example, suppose you are running an application in a cloud like Azure over several compute instances; for each instance you would have to configure and clone your application, build it, and run it, and if it crashes you have to go back, figure out why it crashed, and reboot it to get it operating again. If you want all of that to be done automatically for you without worrying about any of it, you can simply use Kubernetes, which handles all of these things for you. You set up Kubernetes on all of these instances, where they communicate with each other through the master, give it a declared state, and Kubernetes makes it happen. That is all!

Some terms to keep in mind when using Kubernetes are Node, Pod, Service, and Deployment. A Node is a machine running Kubernetes, and a node runs what we call Pods. A Pod contains one or more running containers. A Service is what handles requests, whether coming from inside the Kubernetes cluster between several nodes, or public requests from outside the cluster targeting a specific microservice; the Service is often referred to as a load balancer. A Deployment defines the desired state of an app. The figure below shows this architecture.

Kubernetes

Docker for Windows?

To install Docker for Windows, check Install Docker for Windows for a detailed explanation of the installation steps and what is needed to make Docker ready to use on Windows 10. Once it is installed, go to the Docker settings to enable Kubernetes and show the system containers, just as shown below.

docker settings

As you can see, there is another orchestrator that can be used instead of Kubernetes, called Swarm, but it is outside the scope of this article.

Play around with some Commands

Once Docker is installed on your machine, you can check it from the command line (or Windows PowerShell) by executing the command "docker" as follows:

docker command

You can check all your running containers by executing the command "docker ps"; here are the running containers on my machine.

docker ps

To terminate a certain container, all you have to do is use the command "docker kill" and provide it with the container ID to be terminated, as shown below (to check whether the container was terminated, run "docker ps" again).

docker kill

To run a new image on Docker we use the command "docker run appname". If the image is not found locally, Docker will try to pull it from the registry and assign the container a new ID, as follows.

docker run

I really like using the command line on any platform, but a UI is often preferable for users interacting with a system, and that is why Kubernetes offers a nice Web UI dashboard for interacting with clusters. The Web UI is a visualization of what you saw above in the command results: you can check the pods, nodes, services, and deployments you have in the system. Moreover, you have the ability to manage your system, for example scaling a Deployment, initiating a rolling update, restarting a pod, or deploying new applications using a deploy wizard, plus many more features that you can check online at Web UI (Dashboard).

To enable the dashboard on your machine, all you have to do is execute the command below.

kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/alternative/kubernetes-dashboard.yaml

You can add extra features like charts and graphs by running the following.

kubectl create -f https://raw.githubusercontent.com/kubernetes/heapster/master/deploy/kube-config/influxdb/influxdb.yaml
kubectl create -f https://raw.githubusercontent.com/kubernetes/heapster/master/deploy/kube-config/influxdb/heapster.yaml
kubectl create -f https://raw.githubusercontent.com/kubernetes/heapster/master/deploy/kube-config/influxdb/grafana.yaml

Here, "kubectl" is the command-line interface for running commands against Kubernetes clusters.

Notice that the deployment file has the extension .yml (or .yaml), where YAML is a human-friendly data serialization standard for all programming languages. It is commonly used for configuration files, similar to JSON, but in YAML we use indentation instead of braces and brackets. You can learn more about YAML and download the library for your preferred programming language.
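To make the format concrete, here is a minimal, hypothetical deployment file for an app like the one we build later in this post; the name and image tag are placeholders rather than anything generated by Visual Studio or Kubernetes.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: mykubeaspnetcoreapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mykubeaspnetcoreapp
  template:
    metadata:
      labels:
        app: mykubeaspnetcoreapp
    spec:
      containers:
      - name: mykubeaspnetcoreapp
        # Image built locally with docker build, as shown at the end of this post
        image: mykubeaspnetcoreapp:dev
        ports:
        - containerPort: 80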

To access the dashboard, we use the "kubectl" command-line tool with the proxy keyword, as shown below.

kubectl

Note that the dashboard can only be accessed from the machine that executes the above command. You can reach the dashboard via the following link:

http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/#!/login

When opened, the Kubernetes dashboard will bring up the following screen, where you can choose the access method via kubeconfig or token, or you can skip this step and continue to the dashboard.

kubernetes dashboard

The dashboard will be something like what I have here on my system.

kubernetes dashboard 2

Now let us move into action by creating an ASP.NET Core web application in Visual Studio 2017 and adding Docker support for this app under Linux. I will use the Visual Studio templates to create the application, but of course commands can also be used for the same setup.

The application I am creating is named MyKubeASPNETCoreApp and uses ASP.NET Core 2.1, and I select the Enable Docker Support checkbox with Linux as the OS:

asp net core app

When I click the OK button, the Docker setup is activated and the pulling of the required images begins, as shown:

output

docker image pull

After the download of the images needed for Docker finishes, my web app solution now has a Dockerfile that contains the following:

docker file

When you hit Run or F5 in Visual Studio 2017 with Docker selected, the application will be configured with a host port mapped to container port 80 over a TCP connection. Watch out: you may need to enable some Windows Firewall rules for Docker, or the application will not run due to port blocking or drive sharing issues between the Windows OS and Docker. Now, if I go and check in the command line or PowerShell what containers are running, I can see my application out there.

docker powershell

In the browser, I have the running demo app functioning smoothly!

app browser

I will also share the commands you can use to set up the app in Docker and Kubernetes yourself, instead of having Visual Studio 2017 do it for you.

C:\Users\Rami Chalhoub\Documents\Visual Studio 2017\Projects\MyKubeASPNETCoreApp>docker build -t mykubeaspnetcoreapp:dev .

C:\Users\Rami Chalhoub\Documents\Visual Studio 2017\Projects\MyKubeASPNETCoreApp>kubectl run mykubeaspnetcoreapp --image=mykubeaspnetcoreapp:dev --port=80
deployment "mykubeaspnetcoreapp" created
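To verify that the deployment and its pod are up, you can run the standard kubectl commands below and look for your app in the output.

kubectl get deployments
kubectl get pods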

As a start with Docker and Kubernetes, by running an ASP.NET Core web application configured for Docker on Linux, we can see that we have a lot of features to play with, and of course it will only get better in the future for building containers and orchestrating them with Kubernetes!