Analyzing an Image Uploaded to a Blob using Azure Cognitive Services & Saving the Analysis Results to CosmosDB

We discussed before in detail how to build Azure Functions using Visual Studio 2017 and VS Code (review my previous articles Building Azure Functions in Visual Studio Code using .Net Core and Building Azure Functions in Azure Portal & Visual Studio 2017 using .NET Core). Today we're going to build on that knowledge and create a blob trigger function app in Azure using Visual Studio 2019 (you can also use Visual Studio 2017 to apply all the steps and features discussed in this post). This function will be triggered whenever an image is uploaded to blob storage; we will then analyze the image using Azure Cognitive Services and save the analysis results into a CosmosDB.

This post assumes that you already have an Azure account ready to be used, in addition to Visual Studio 2019 installed on your machine. If not, you can create your own free trial account by subscribing to the Azure portal.

Creating a Blob Trigger Function

Before creating the blob trigger function in Visual Studio, make sure you have set up a blob storage account in Azure (for help, you can refer to the Microsoft docs Create a storage account). Back in Visual Studio, we want to create a function that triggers whenever a new image is uploaded to a blob storage account. To achieve this, there is a template in Visual Studio 2019 that helps with this, as shown below.

Capture 1.PNG

Choose the Azure Functions project template after filtering by project type Cloud. Then we have to specify the project name and location.

Capture 2.PNG

When we click Create, a new form will show up to select the type of function to be created. We're creating a blob trigger function, so we will choose the "Blob Trigger" template option. After choosing this option with the target framework set to .NET Core, we have to specify which storage account to use and the path under this account.

Capture 3.png

Visual Studio 2019 provides the option to specify a Storage Emulator as the storage account. This helps in simulating most of the features of an Azure storage account locally on your machine (for help with the storage emulator and how to use it, you can refer to the Microsoft docs Azure storage emulator). But this option is limited, so we will be using the storage account we created in our Azure subscription. When we select Browse as shown above, you will be asked to sign in to your Azure account so that the available storage accounts in your subscription can be listed for you to choose from. Follow the steps below.

Capture 4.png

After signing in, a list of available storage accounts for the selected subscription will be shown. Select the storage account created for this solution and click Add. You will be returned to the form where templates are listed, but this time you will see the selected storage account and be asked for the connection string and the path where the images are uploaded or saved in this blob.

Capture 6.png

To get the connection string, go to the portal and browse to the selected storage account. When the storage account blade shows up, navigate to the Access keys section under Settings. Another blade will appear that contains several keys. One of these values is the connection string used to reach this storage account.
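For reference, a storage account connection string generally has the following shape (the account name and key here are placeholders, not real values):

```
DefaultEndpointsProtocol=https;AccountName=<your-account-name>;AccountKey=<your-account-key>;EndpointSuffix=core.windows.net
```

You can copy the whole string directly from the Access keys blade rather than assembling it yourself.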

Capture 7.png

Now that we have provided the connection string value to the project template, only the path is left. The path, or container, will simply be "images" since we're uploading images to this blob. You can choose whatever path you find suitable. After clicking "Ok", the project template will be created and you will have something similar to the figure shown below.

Capture 8

We will make some modifications before we start developing the function. First, we will rename the function so that it reflects what we're doing. I chose to name it "BlobImageAnalysisFunction". Second, we will remove the long connection string value from the function attributes and add it to the local.settings.json file so that the value can be loaded from there. I chose to name the connection string key "blobDemoConnectionString" and set its value in the settings file. These changes are shown below.
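As a sketch, local.settings.json would look something like this after the change (the surrounding keys are the Functions template defaults, and the connection string value is a placeholder):

```json
{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "UseDevelopmentStorage=true",
    "blobDemoConnectionString": "DefaultEndpointsProtocol=https;AccountName=<account>;AccountKey=<key>;EndpointSuffix=core.windows.net"
  }
}
```

The function's BlobTrigger attribute then only needs to reference the key name "blobDemoConnectionString" instead of the full connection string.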

Capture 9
Capture 10

As you can see in the figures above, the function takes several parameters. The first is represented by the BlobTrigger attribute, which itself requires two parameters: the path parameter, made up of the path (or container name) in the blob that images will be uploaded to and that I want Azure to monitor for newly uploaded images, and the connection string name, which we set while creating the project template and moved to the configuration file. The path parameter in the BlobTrigger attribute contains /{name}, which is similar to how routes are defined in ASP.NET Core: a parameter between curly braces indicates that the function will receive the name of the blob being processed as an input parameter. Next we have the actual input parameter, a Stream with the variable name myBlob. When a new blob appears in the images container, the Azure Functions runtime automatically opens that blob and passes all the needed information in the form of a Stream. The next input parameter is the blob name as a string, and finally we have the TraceWriter for logging purposes.
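Reconstructed from the figures above, the template-generated function at this point looks roughly like the following (the exact generated code may differ slightly by template version):

```csharp
using System.IO;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Host;

public static class BlobImageAnalysisFunction
{
    [FunctionName("BlobImageAnalysisFunction")]
    public static void Run(
        // images/{name} means: watch the "images" container, bind the blob name to {name}.
        [BlobTrigger("images/{name}", Connection = "blobDemoConnectionString")] Stream myBlob,
        string name, TraceWriter log)
    {
        log.Info($"C# Blob trigger function Processed blob\n Name:{name} \n Size: {myBlob.Length} Bytes");
    }
}
```

This is the Stream-based version; we change it in the next step.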

Based on what the Microsoft docs suggest in Azure Blob storage bindings for Azure Functions, we can also use CloudBlockBlob from the namespace Microsoft.WindowsAzure.Storage.Blob instead of Stream; you can reference it from NuGet packages in your project. Notice that after changing the Stream parameter to CloudBlockBlob, you need to make some changes in the first line of code in the function body, the one responsible for logging the information that tells us this function is working properly. You can use myBlob.Name instead of the name parameter in the function signature, and change myBlob.Length to myBlob.Properties.Length, where Properties offers many other useful attributes as well. Everything we changed and talked about is reflected in the figure below.

Capture 11.png

Since the blob we’re working with lives in a private container, this means that we need permission to access this blob or in turn may pass this access to other functions or components. This can be achieved through what we call Shared Access Signature and you can refer to Using shared access signatures (SAS) to learn more about this concept, including different types of SAS and how to create SAS in C#. So, we will create a small simple method that creates a SAS for us in order to access this blob, and I’m not doing something new here, the sample code below for creating SAS value was done by the aid of the mentioned Microsoft docs reference above.

public static string GetBlobSharedAccessSignature(CloudBlockBlob cloudBlockBlob)
{
    string sasContainerToken;
    SharedAccessBlobPolicy sharedPolicy = new SharedAccessBlobPolicy()
    {
        // Start now and allow read access for a maximum of one hour.
        SharedAccessStartTime = DateTime.UtcNow,
        SharedAccessExpiryTime = DateTime.UtcNow.AddHours(1),
        Permissions = SharedAccessBlobPermissions.Read
    };

    sasContainerToken = cloudBlockBlob.GetSharedAccessSignature(sharedPolicy);
    return sasContainerToken;
}
The above method takes the currently accessed blob as an input parameter, and in the method body we specify the permissions on this blob, which is only Read in our case since we're reading images uploaded to the blob. We also set the access start time and expiry time, which in my case gives a maximum of one hour, enough to finish this demo. Finally, the SAS is generated by the blob itself and we return the token as the result of this action.

So now, in our blob trigger function, we will write some code that shows some blob information and invokes the GetBlobSharedAccessSignature method in order to generate the SAS token, in addition to the URL of this blob. We will log all this information to make sure that our function is working well. The modified code is as follows.

public static class BlobImageAnalysisFunction
{
    [FunctionName("BlobImageAnalysisFunction")]
    public static void Run(
        [BlobTrigger("images/{name}", Connection = "blobDemoConnectionString")] CloudBlockBlob myBlob,
        string name, TraceWriter log)
    {
        log.Info($"C# Blob trigger function Processed blob\n Name:{myBlob.Name} \n Size: {myBlob.Properties.Length} Bytes");

        string blobSas = GetBlobSharedAccessSignature(myBlob);
        string blobUrl = myBlob.Uri + blobSas;

        log.Info($"My Blob URL is: {blobUrl}");
        log.Info($"My Blob SAS is: {blobSas}");
    }

    public static string GetBlobSharedAccessSignature(CloudBlockBlob cloudBlockBlob)
    {
        string sasContainerToken;
        SharedAccessBlobPolicy sharedPolicy = new SharedAccessBlobPolicy()
        {
            SharedAccessStartTime = DateTime.UtcNow,
            SharedAccessExpiryTime = DateTime.UtcNow.AddHours(1),
            Permissions = SharedAccessBlobPermissions.Read
        };

        sasContainerToken = cloudBlockBlob.GetSharedAccessSignature(sharedPolicy);
        return sasContainerToken;
    }
}

All we did was generate the SAS token value, append it to the blob URI to form the blob URL, and log this information to make sure our blob trigger is working. Now try to run the project, then go to the blob and upload a new image; you will see that your function is triggered and the logged values show up on the screen. When doing so, you should have an output similar to the one below.

Capture 12.PNG

Capture 13.PNG

When uploading a new image to the blob from Azure portal, your function output will be something similar to the below.

Capture 14

Capture 15.png

Capture 16.PNG

Now that we have created our function and tested it successfully, we move to the next part, which is setting up the Face API that we will use to analyze our uploaded image.

Creating a Face API Cognitive Service

The Azure platform provides an intelligence framework in the form of Artificial Intelligence services, Machine Learning services, and Cognitive Services. Azure Cognitive Services are in fact web services that you interact with in order to do some analysis. These services include vision services, of which image processing is a part, natural language processing, speech-to-text conversion, and more. You can find a list of all provided services in this reference: Azure Cognitive Services.

Since our demo is about analyzing an image uploaded to a blob, what we need are the vision cognitive services, and specifically the Face API. There are several services offered in the vision bundle; another one that would be useful here is the Video Indexer API, so you can try to mimic the same demo based on that service, but for video files instead of images.

Capture 17.PNG

So now we go to the Azure portal in order to create the Face API service. Navigate to the "Create a resource" blade and search for Face; you will have something like the search result below. After that, click on Face (AI + Machine Learning) and a blade titled Face shows up, asking you to create your new Face API. Follow the screens below for creation instructions.

Capture 18
Capture 19
Capture 20

After providing the name of the API to be created, I chose the free pricing tier since we're doing a small demo here. Of course, if you're using this Face API for production purposes or a real scenario, the free pricing tier isn't enough since it has a limited number of calls. When the Face API is created successfully, you will be navigated to a blade similar to the one below (since Azure is updated at a fast pace, your blade may differ from what you see here).

Capture 21

As we mentioned before, Azure Cognitive Services are web services that provide intelligent functionality for your data and image processing. These functionalities are in reality exposed in the form of Web APIs. This means that I can reach any functionality by invoking an HTTP endpoint, posting some data, and getting back the results from the API in JSON format. Since these services are in the form of Web APIs, in order to go through my demo I must have an HTTP client that can invoke the Face API, provide it with the uploaded image URL that already contains the SAS token, and take the analysis results, which will later be posted to a CosmosDB. We can achieve this by referencing the Face API client library in the project from NuGet packages. The library name is Microsoft.ProjectOxford.Face.DotNetStandard, and I'm referencing the .NET Standard version since my project is .NET Core based.

Next we’re going to write a method that invoke the Face API image processing service asynchronously which returns a Face array since the uploaded image may contain more than one person, and accepts the image URL as a parameter where the URL contains the SAS generated token mentioned before. The method is shown below.

public static async Task<Face[]> GetImageAnalysis(string imageUrl)
{
    // The subscription key and endpoint are loaded from local.settings.json;
    // the key names here are the ones I chose for this demo.
    var faceServiceClient = new FaceServiceClient(
        Environment.GetEnvironmentVariable("faceApiKey"),
        Environment.GetEnvironmentVariable("faceApiEndpoint"));

    var types = new FaceAttributeType[] { FaceAttributeType.Emotion,
        FaceAttributeType.Age, FaceAttributeType.Gender,
        FaceAttributeType.Smile };

    return await faceServiceClient.DetectAsync(imageUrl, false,
                                               false, types);
}

To explain this method, we start with the first line, where we initialize the FaceServiceClient that invokes the Face API. The FaceServiceClient takes two parameters: the subscription key value and the API endpoint. I saved these values in local.settings.json, and that is how the values of these variables are read from the configuration file. Of course, you're wondering what these values are and where to get them. The first parameter is the key needed to access the Face API we created in Azure; it's the entry key for granting access to the API. The second parameter is the URL through which the Face API can be invoked. You can get these values from the Keys section and the Overview section of the created Face API in Azure. Check the figures below for a clear view.
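As an illustration, the two entries added to local.settings.json might look like this (the key names faceApiKey and faceApiEndpoint are my own choice for this demo, the values are placeholders, and the endpoint follows the regional Face API root format):

```json
{
  "Values": {
    "faceApiKey": "<your-face-api-subscription-key>",
    "faceApiEndpoint": "https://<your-region>.api.cognitive.microsoft.com/face/v1.0"
  }
}
```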

Capture 22
Capture 23

The next line of code specifies which face attribute types to include in the analysis of the image. This takes place by initializing an array (or any IEnumerable) of FaceAttributeType; for my demo I included Emotion, Age, Gender, and Smile. The last line calls the method responsible for analyzing the image based on the specified attribute types and returns the results. Notice that the call to the detect method is async, so we have to await it, and since we're using it in the body of the Run function, we have to change Run to async Task. The modifications are shown below. Next we're going to set up the CosmosDB to save our image analysis results and upload the blob trigger function to the Azure portal for testing.

Capture 24.PNG
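In case the figure is hard to read, a sketch of the modified Run function at this stage follows (the final faces-count log line is my own addition for illustration; the rest matches the final version of the function shown later in the post):

```csharp
[FunctionName("BlobImageAnalysisFunction")]
public static async Task Run(
    [BlobTrigger("images/{name}", Connection = "blobDemoConnectionString")] CloudBlockBlob myBlob,
    string name, TraceWriter log)
{
    log.Info($"C# Blob trigger function Processed blob\n Name:{myBlob.Name} \n Size: {myBlob.Properties.Length} Bytes");

    string blobSas = GetBlobSharedAccessSignature(myBlob);
    string blobUrl = myBlob.Uri + blobSas;

    // Run is now async Task (instead of void) so that we can await the Face API call.
    Face[] imageAnalysis = await GetImageAnalysis(blobUrl);
    log.Info($"Faces detected: {imageAnalysis.Length}");
}
```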

Saving Image Analysis to CosmosDB

Our target is to analyze an uploaded image via Azure Cognitive Services, specifically through the Face API, and then save the analysis results to CosmosDB, all done through the blob trigger function to be hosted in Azure. So, the missing part now is the CosmosDB. I will not give a step-by-step guide to creating a CosmosDB in Azure here; for more details you can refer to this documentation: Azure Cosmos DB Documentation. I have already created a CosmosDB in my Azure portal, and this database contains a collection named images to save the analysis information into. Below is what my CosmosDB looks like.

Capture 25.PNG

Now that we have our CosmosDB ready in Azure, we need to modify our blob trigger function to save the analysis results, after calling the Face API, into the images collection in this database. In order to do so, we have to add the suitable NuGet package to access the CosmosDB. Microsoft.Azure.WebJobs.Extensions.CosmosDB is the package that needs to be installed in order to access our CosmosDB and the images collection.

A very important note here: when you install the CosmosDB package, you may get some conflicts with the packages already installed in your project. One of the problems you may face is that the BlobTrigger attribute is no longer recognized. This is because the latest stable version of WebJobs.Extensions.CosmosDB also uses the latest version of Microsoft.NET.Sdk.Functions, so you will be forced to update that package and, in the end, install Microsoft.Azure.WebJobs.Extensions.Storage for BlobTrigger to be recognized again.

As you can see in the figure above, my CosmosDB account name is "rcimagesdemodb" and the collection holding the analysis results is named "images". Note that the images collection is found in a database named "rcBlobImageDB". From the Data Explorer section you can fetch and query the data in this collection.

To push analysis information into our CosmosDB, we have to modify our function to accept an output binding to the database. Moreover, we need to hold our analysis data in an object that can be used to track and push the data into the collection. For this purpose we will create a simple class that holds the blob name and the collection of faces analyzed by the Face API.

[CosmosDB("rcBlobImageDB", "images", ConnectionStringSetting = "cosmosDBConnectionString")] 
IAsyncCollector<FaceAnalysis> analysisResults

The above piece of code should be added to the Run function signature as an output binding, which gives us the ability to access our CosmosDB images collection. Several things are worth mentioning here to clarify what this code does. First, the CosmosDB attribute takes several parameters: the database name, which is "rcBlobImageDB" in our case, the collection name, which is "images", and several properties, of which we use only one for now, the connection string used to access the CosmosDB. As with all resources in Azure, the connection string can be fetched from the Keys blade inside the CosmosDB account you've created. I also added the connection string as a key inside local.settings.json with the key name "cosmosDBConnectionString" and the value fetched from the Azure portal. After adding this binding to the function, we need to define the collection that will hold our analysis data. I chose to use the type IAsyncCollector. With IAsyncCollector, I keep adding objects to this collection, and when the blob trigger function executes successfully, the items in this collector are pushed to the CosmosDB images collection based on the provided attributes: the database name, the collection name, and the proper connection string.
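For example, local.settings.json would gain an entry like the following (placeholder values; the format is the standard Cosmos DB AccountEndpoint/AccountKey pair copied from the Keys blade):

```json
{
  "Values": {
    "cosmosDBConnectionString": "AccountEndpoint=https://<your-account>.documents.azure.com:443/;AccountKey=<your-key>;"
  }
}
```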

The class I created, named FaceAnalysis, is a simple public class that contains only two properties: the blob name and the collection of faces analyzed by the Face API.

public class FaceAnalysis
{
    public string BlobName { get; set; }
    public Face[] Faces { get; set; }
}

Now that we have the output binding to the CosmosDB and the class that will hold the data to be pushed to the images collection, we have to modify the function body to get the analysis information and add it to the CosmosDB via our IAsyncCollector. The function body is now modified as follows.

[FunctionName("BlobImageAnalysisFunction")]
public static async Task Run(
    [BlobTrigger("images/{name}", Connection = "blobDemoConnectionString")] CloudBlockBlob myBlob,
    [CosmosDB("rcBlobImageDB", "images", ConnectionStringSetting = "cosmosDBConnectionString")] IAsyncCollector<FaceAnalysis> analysisResults,
    string name, TraceWriter log)
{
    log.Info($"C# Blob trigger function Processed blob\n Name:{myBlob.Name} \n Size: {myBlob.Properties.Length} Bytes");

    string blobSas = GetBlobSharedAccessSignature(myBlob);
    string blobUrl = myBlob.Uri + blobSas;

    log.Info($"My Blob URL is: {blobUrl}");
    log.Info($"My Blob SAS is: {blobSas}");

    var imageAnalysis = await GetImageAnalysis(blobUrl);

    FaceAnalysis faceAnalysis = new FaceAnalysis
    {
        BlobName = myBlob.Name,
        Faces = imageAnalysis
    };

    await analysisResults.AddAsync(faceAnalysis);
}

Our blob trigger function is ready now. We mapped the BlobTrigger to our blob in Azure and created the needed SAS token for granting access. We also created the needed Face API in our Azure portal and wrote up the needed code to access this API and get the analysis information from it. Finally, we mapped our CosmosDB created in Azure portal to be used in our function and save the collection of analysis in our images collection.

Since everything is ready, all that remains is to push our function to Azure and run it to check the results of any image uploaded to our images blob. To do so, I will push the function code to the git repository of the function app that I already have in my Azure subscription. Note that we discussed in several previous posts how to create a function app and push it to Azure, so you can refer to those posts for help with this.

A very important note to keep in mind: when we push our blob trigger function to Azure, don't forget to add all the keys we added to local.settings.json to the configuration section of the Function App in Azure, or the function will not be able to execute and runtime errors will occur. The function app that I now have in my Azure portal looks like the one below, with the configuration keys shown along with the application settings.
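If you prefer the command line over the portal's Configuration blade, the Azure CLI can set these app settings as well. A sketch with placeholder names and values (substitute your own function app name, resource group, and connection strings):

```
az functionapp config appsettings set \
  --name <your-function-app-name> \
  --resource-group <your-resource-group> \
  --settings "blobDemoConnectionString=<storage-connection-string>" \
             "cosmosDBConnectionString=<cosmos-connection-string>"
```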

Capture 26
Capture 27

Action time! Now my function app is up and running in Azure, and the Face API we created before is also there. All we have to do is navigate to our images blob storage blade, upload a new image file from there, and wait until we see the results saved in the CosmosDB. You could build a sample web application that accesses the blob and sends the image to it, but we will use the Azure portal directly for testing now.

As mentioned, I navigated to the blob storage images folder path and uploaded a new image of mine there. Once the image was uploaded to the blob, the function was triggered and all the analysis done was exported to the CosmosDB. The results and the actions are shown below.

Capture 28
Capture 29
Capture 30

The results from the analysis are now available in our images collection in the CosmosDB. You can use this information for querying fields needed for further analysis, like the fields in the Emotion section. The results are shown below.

{
    "BlobName": "image.jpeg",
    "Faces": [
        {
            "FaceId": "00000000-0000-0000-0000-000000000000",
            "FaceRectangle": {
                "Width": 438,
                "Height": 438,
                "Left": 393,
                "Top": 220
            },
            "FaceLandmarks": null,
            "FaceAttributes": {
                "Age": 38,
                "Gender": "male",
                "HeadPose": null,
                "Smile": 1,
                "FacialHair": null,
                "Emotion": {
                    "Anger": 0,
                    "Contempt": 0,
                    "Disgust": 0,
                    "Fear": 0,
                    "Happiness": 1,
                    "Neutral": 0,
                    "Sadness": 0,
                    "Surprise": 0
                },
                "Glasses": "NoGlasses"
            }
        }
    ],
    "id": "acc7fb99-035e-4d32-a3be-97ed7b970277",
    "_rid": "jTwvAK5+XfoCAAAAAAAAAA==",
    "_self": "dbs/jTwvAA==/colls/jTwvAK5+Xfo=/docs/jTwvAK5+XfoCAAAAAAAAAA==/",
    "_etag": "\"850093a1-0000-0e00-0000-5cd81d150000\"",
    "_attachments": "attachments/",
    "_ts": 1557667093
}

In this post, we did something interesting: we used the powerful Cognitive Services features provided by Azure to analyze an image uploaded to a certain blob. All this was done through our blob trigger function, which accessed the blob to read the uploaded image, sent the image to the Face API for analysis, and exported the results at the end into a CosmosDB. This was all achieved using the power of serverless computing with Azure Functions!

Building Azure Functions in Visual Studio Code using .Net Core

In the previous post, we discussed how to create Azure Functions in Visual Studio 2017 and in the Azure portal, and we left the VS Code part for later. Today, we are going to build our first function in Visual Studio Code from scratch by setting up the environment with the Azure Functions CLI, authenticating VS Code to sign in to Azure, and finally creating the function project and deploying it to Azure.

Since we have already discussed Azure Functions and gone deep into the details of this Azure feature, we will head directly to building our first Azure Function in VS Code without further explanation. So, if you want to learn more about Azure Functions from the beginning, review the previous post before going forward with this article. Now, let us get into action!

This post assumes that you already have an Azure account that is ready to be used, in addition to VS Code installed on your machine with the latest Node.js and npm. If not, you can create your own free trial account by subscribing to the Azure portal.

Installing Azure Functions Cli & Extension in VS Code

The initial step is to prepare the environment for developing Azure Functions in VS Code. Therefore, we start by installing the Azure Functions extension in VS Code. This can be done by opening VS Code, clicking on Extensions, and searching for "Azure". In the search results, find the "Azure Functions" item, click on it, and in the right pane click Install. Check the figure below.

vscode acure function extension.png

Since my environment already has the extension installed, it displays the options to either disable or uninstall it. Once the extension is installed, you will find an icon added to the left toolbar like the one displayed in the figure below.


When you click on this icon, a pane will be displayed that contains two options: either sign in to Azure with an existing account, or create a free Azure account. Since I already have my Azure account, I will sign in with an existing account. If you do not have an account, you can check the Azure portal step-by-step guide to create a new free account.

vscode azure signin.png

When you click on Sign in to Azure, a new browser window will open so you can provide your sign-in credentials. If login is successful, a page will be displayed indicating the authorization given to your VS Code on your local machine. The page should be something similar to the following (it may have changed by the time you read this article).

vscode azure authentication.png

After closing this page, you may need to restart VS Code for the changes to take effect locally, especially if you are creating a new free Azure account. The next step is to install the Azure Functions Core Tools globally on your machine. Notice that everything mentioned here, earlier and later, is done on a Windows 10 machine. If you are using another OS like Linux or macOS, or even a Windows Server machine, the commands may differ and you will have to search for the suitable ones. Once you restart or relaunch VS Code and click on the Azure Functions extension icon, you should have something similar to the below.

vscode azure function extension.png

To install the Azure Functions Core Tools and the Azure Functions CLI, open a new command prompt with admin privileges, or use PowerShell, to execute the command below. In my case, I always prefer to use the Cmder command emulator. However, I will execute the installation command in a normal command prompt and then check the results in the Cmder emulator.
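The command shown in the figure below is the npm installation of the Core Tools package (the package version you get may differ by the time you read this):

```
npm install -g azure-functions-core-tools
```

Once it completes, the func command becomes available on your PATH.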

install azure functions cli

When the installation finishes successfully, you will be notified about the installed CLI, the packages, and the installation path.

install azure functions cli 2.png

To check that the CLI is working, all needed packages are installed successfully, and your machine is ready to start developing Azure Functions in VS Code, open PowerShell or a terminal again and execute the command "func"; you should see results as shown below.

install azure functions cli result.png

Creating Azure Function App in VS Code

After the successful installation of the Azure Functions extension in VS Code and the CLI on your machine with all the needed packages, it's now time to create our first Azure function app in VS Code. So, open a VS Code instance, click on the Azure Functions extension icon in the left toolbar, and create a new project. This will prompt you to select a folder path for your project. I already have a local repository for all my projects; you can select any path suitable for you. When you set the folder path, you will also be prompted to select the preferred language for developing the function app. In our case, we will be using C# as our development language for the function.

create new function.png

Next, you will be asked which runtime you want to use. Here you have two options: Version 2, which targets .NET Standard, or Version 1, which targets the .NET Framework. In our case, we will be using .NET Standard as the runtime for the function app being created.

create new function 1.png

The next step is to choose where you want to open the project being created; this depends on your preference. I will open it in the current window.

create new function 2.png

When the project finishes loading, head to the Azure Functions explorer and create a new function. This will prompt you to select the solution folder or browse for a new location. We will of course select the current workspace folder in which we created our solution. Select the current directory where your solution is found (it is the default option) and press Enter to proceed. This is shown in the figure below.

create new function 3.png

When you have set the folder for the function to be created, another prompt will be displayed in order to set the type of function to be created. As we know, an Azure function can be of several types, like HttpTrigger, BlobTrigger, etc. (check my previous post for more info). For our example we will create an HttpTrigger function, so we select this option and proceed.

create new function 4.png

After setting the function type, we have to name our function. You will be prompted to set the name of the function in the next displayed window. I will name my function as shown below.

create new function 5.png

The next step is to specify the namespace of your function. The default name is “Company.Function” but you can change it. After setting the namespace, press Enter to proceed.

create new function 6.png

Now we have to specify the access rights or what was known before as “Authorization Level” for the function app.

create new function 7.png

In our example, we will choose Anonymous access rights. Select this option and press Enter; you will be taken directly to the project with the function created in it. The created function will look somewhat like the below.

create new function 8.png

The function looks similar to the one we created in Visual Studio 2017 in the previous post. The project has the same structure and the function uses the same sample code (check the previous article for more information about the function code block input and output).
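For reference, the generated HttpTrigger function looks roughly like this (reconstructed from the default v2 template; the class and function names here are hypothetical and will match whatever you entered in the prompts):

```csharp
using System.IO;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.Extensions.Logging;
using Newtonsoft.Json;

namespace Company.Function
{
    public static class HttpTriggerDemoFunction
    {
        [FunctionName("HttpTriggerDemoFunction")]
        public static async Task<IActionResult> Run(
            [HttpTrigger(AuthorizationLevel.Anonymous, "get", "post", Route = null)] HttpRequest req,
            ILogger log)
        {
            log.LogInformation("C# HTTP trigger function processed a request.");

            // Read "name" from the query string first, then fall back to the JSON body.
            string name = req.Query["name"];

            string requestBody = await new StreamReader(req.Body).ReadToEndAsync();
            dynamic data = JsonConvert.DeserializeObject(requestBody);
            name = name ?? data?.name;

            return name != null
                ? (ActionResult)new OkObjectResult($"Hello, {name}")
                : new BadRequestObjectResult("Please pass a name on the query string or in the request body");
        }
    }
}
```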

So, now we have our function app created, let us build it and run it locally!

create new function debug.png

To run the function app, go to the debugger in VS Code by clicking on the debug icon in the Activity Bar. This will open the debugging explorer, where you can click on the green button to start the application. Note that there may be some missing packages that need to be installed, so the application build may take some time to complete. Once the app starts, you should have something similar to the above figure.

The app build will generate a local URL for your function. Simply click on the highlighted link; it will open in the browser and should return the message "Please pass a name on the query string or in the request body", since no "name" variable was passed in either the query string or the request body.

create new function run.png

Now, append the variable “name” to the URL as a query string and give it a value. A message containing this value will be returned as the response of the function call. In my example, I passed the value “my first function app” to the variable “name” and got the following result.

create new function run 1

If you prefer using the Azure Functions CLI instead of pressing F5 to start the function app, use the following command.

func host start

Notice in the terminal the traces of what happens during the execution of the call once we add the “name” variable value and press Enter.

create new function run 2.png

Now we have a working function that runs locally and does what is required, so it is time to deploy it to Azure Functions!

Deploying to Azure Functions from VS Code

To deploy the function to Azure we will go through several steps in VS Code, from specifying the new function app name to choosing the data center region to deploy to. So let us get started!

To deploy the function you created in VS Code to Azure Functions, open the Azure Functions explorer in VS Code and click on the upload arrow, which is the Deploy to Function App button. You will be prompted to select an existing function app, if any; if not, create a new one by clicking on “Create New Function App” and pressing Enter. The process is shown below.

function deployment

By clicking on “Create New Function App”, you will be asked to enter a unique name for the function app to be created. Provide a unique name and press Enter to proceed. I will set mine to “rcVsCodeFunctionAppDemo”.

function deployment 1.png

Next, you will be asked to specify the resource group of your function app. As in the previous step, you either select an existing resource group in your Azure subscription or create a new one. Here we will go for the second option, creating a new resource group, and press Enter as shown below. I will set my resource group name to the same value used before, “rcVsCodeFunctionAppDemo”.

function deployment 2.png

The next step is to set the storage account. Again we will create a new storage account instead of selecting an existing one; I provided the same name used above for the storage account as well.

function deployment 3.png

The last step is to set the region of this new resource, where a list of available data centers in several countries is displayed. I usually select the data center nearest to the region of the clients who will be using the function app. Since I am the only one using my function for now, I will select “France Central” as the target location.

function deployment 4.png

When you select the region and press Enter, the upload process will be initiated and VS Code will display a series of messages for you to be informed about the deployment process.

function deployment 5

function deployment 6

function deployment 7

Now we head to the Azure portal to check our created function by going to the “Function Apps” blade. The result should be similar to the below.

azure function app portal.png

When you click on the function name, the Azure portal will notify you that you cannot edit or modify this function since it is running from a package file. Therefore, the function is set to read-only mode!

azure function app portal 1.png

Now, how about testing our function app running in the portal? Simply click on the URL displayed in the Output area; a new browser window will open with a warning message asking you to provide a name value, since the “name” query string is missing. Provide the “name” query string and call the function again, and there we go: the function app works just as it did locally. Congrats!

azure function app portal run.png

In this post, we learned how to install and configure the Azure Functions extension, CLI and tools in VS Code, create a new function app, and deploy it to Azure Function Apps. In the next posts, we will discuss very interesting topics like analyzing an image using Azure Cognitive Services, saving the results to Cosmos DB and a lot more. Stay tuned!

Building Azure Functions in Azure Portal & Visual Studio 2017 using .NET Core

In this post, we are going to talk about Azure Functions. Azure Functions is a PaaS offering, where PaaS stands for Platform as a Service. It allows us to develop and run a small piece of code that acts as a service on the cloud platform. This service can do several tasks, like performing a certain job when an HTTP message from a Web API is received, triggering another function when a certain action on the cloud platform takes place, or initiating an event when a new record is inserted in Azure SQL or Cosmos DB, and so on. Today, we are going to create Azure Functions directly in the portal, and later on we will use Visual Studio 2017 to create a function using .NET Core and reflect this function in the portal. In coming articles, we will create an Azure Function App using VS Code, analyze an image using Azure Cognitive Services and save the results into either a SQL database or a document inside Cosmos DB.

Since Azure Functions is also known as a Serverless Computing service that enables us to run code on demand without having to deal with the infrastructure, we will start by clearing up some of the terms being used, like Serverless Architecture and PaaS. So here we go!

This post assumes that you already have an Azure account ready to be used, in addition to Visual Studio 2017 installed on your machine. If not, you can create your own free trial account by subscribing to the Azure portal.

What is Serverless Architecture & PaaS?

By definition, Serverless Computing is defined as follows: Serverless Computing is a cloud-computing execution model in which the cloud provider runs the server and dynamically manages the allocation of machine resources.

The first time I heard the term “Serverless” I paused for a while and thought, “How could this be possible, and how would we be able to run our apps without a server?” But that is not the case. We still need those special kinds of powerful computers to run our applications, process data, and send information over networks. The term Serverless Computing, or what is also known as Serverless Architecture, is therefore somewhat confusing, or even an inaccurate designation, since a server is still required and must be there to do all the required tasks. However, looking beyond the terminology is what matters: when we say Serverless Computing, we mean that there is no need for us to care about server maintenance and infrastructure, because the server is out there in the public cloud datacenter. Our job shifts from maintaining the server to providing the proper instructions that keep this server performing the operations that need to be accomplished.

Serverless Computing.png

Serverless Computing is also about using Platform as a Service (PaaS) technology. What do we mean by PaaS? PaaS, also known as aPaaS (Application Platform as a Service), is a complete development and deployment environment in the cloud that allows customers to develop, run and manage their applications with resources that let them deliver everything needed, without worrying about maintaining those resources. There are some other related concepts, summarized in the figure below, which are out of the scope of this article, but we should know them in case any are mentioned later.

Serverless Computing Comparison.png

What is Azure Functions?

Azure Functions is a solution for creating functions in the Microsoft Azure cloud platform. A function is a small piece of code with some specific configuration that tells Azure when to call this function and what to do, without worrying about the whole application or the infrastructure to run it.

Azure Functions.png

A function in Azure is an efficient way to solve the problem you have with a small code snippet, which leads to more productive development. A function can be either event driven or trigger driven (or even respond to a webhook, which is an HTTP push request or a web callback, like the one GitHub generates when a code check-in is performed), and it can be built using a variety of development languages like C#, F#, JavaScript (Node) and others.

Creating a Function App in Azure

A Function App in Azure is, in one way or another, a special type of App Service; to be more specific, Functions in Azure are built on top of App Services. Just as each App Service requires an app service plan, a function in Azure requires what we call a Hosting Plan. Let us go step by step to create a function in Azure.

To create a function in Azure, go to the Function Apps item in the left menu, or, if it is not there, click on “Create a resource”, which opens a blade for searching the Azure marketplace. In the search bar, write the keyword “function”, and there we go: the results contain an item called Function App, as shown in the figure below.

Azure Function App.png

When clicking on Function App, a new blade shows up containing a little information about Function Apps and a “Create” button. Clicking the Create button leads to another blade like the one below.

Function App.png

The mandatory fields are the ones highlighted with a red asterisk; we will go over them and explain each one.

The App name is the name of the function app you want to create in Azure, and this name should be unique since it becomes part of the app's public domain name. The second field is the Subscription, where you specify under which subscription you want this function to be created, knowing that Azure gives you the ability to have several subscriptions (these can be managed and checked in the Subscriptions blade).

A resource group is a single logical entity, which groups all related resources together. By resources we mean storage accounts, virtual machines, virtual networks and many other resources that may be used by the service you are creating in Azure. Here, you can either select an existing resource group to append the resources created by the function to it, or create a new resource group related only to the function being created. I usually prefer to create a new resource group for any newly created component in Azure, since it is easier to manage, maintain, and deploy these grouped resources later on for any modification to be made.

After that, you have to choose the operating system the function will run on. The Location represents the datacenter you want your function deployed to; usually we choose the datacenter nearest to our physical location or to the clients' physical location. The Runtime Stack is the technology of the function: .NET, JavaScript, or Java. The last one is the Storage field, where you also choose whether to create a new storage account or use an existing one. The storage account is the one that holds the Blob and Table storage and the Queues used by the created function.

I saved the Hosting Plan until the end since it needs some more explanation. The hosting plan field represents which plan the function will be subjected to while executing, where the plan describes what type of hardware our function will operate on. There are two available options: an App Service Plan or a Consumption Plan. The difference is that if I choose an App Service Plan, I will always be paying for that plan even when my function is inactive or not executing, i.e. even when my function is not using any hardware resources. This option is more costly, and you can check the costs of each plan by visiting the Azure Pricing Calculator, the website Azure provides for calculating the fees to be paid per service type, as shown in the figure below.

Azure Price Calculator.png

The Consumption Plan resembles the App Service Plan, but in this scenario you will not have to worry about the type of resources, virtual machines, or things like scaling up or down; all of these are managed by Azure itself. So, choosing the Consumption Plan is like telling Azure to do its work and manage all this for me, so that I don't have to monitor resources and check when I need to scale up or down to lower my cost. Finally, functions can be integrated with Application Insights, which gives you the ability to monitor how your function is performing and where the errors are, if there are any. After specifying all the required fields, the blade should look something like the figure below.

Function App Configured.png

When Azure finishes creating the function app, a notification is displayed in the notifications center. After that we can navigate to the Function Apps blade from the left menu, where we will find our newly created function app. The blade will look something like this:

Function App Blade.png

On the left-hand side, we can see the available function apps for each subscription. In my current subscription, I only have the function app I just created, named “rcfunctionappdemo”. As we can see, each function app can contain one or more functions. To create a new function in my function app, we navigate to the Functions submenu with the plus sign beside it. We also have Proxies and Slots: slots are used for deployment slots (development, staging and production), while proxies are a special type of function used, for example, to mediate between the client side and a backend service.

Now we click on the Functions node with the plus sign. This opens a blade that displays the available functions in our function app; however, since we have not added any function yet, the displayed screen will be empty. Therefore, we click on the plus sign to add a new function, and the blade that shows up will appear like below:

Function App Configuration.png

Since we want to use the portal for creating a new function in this section, we click on the “In-Portal” tab in order to proceed and then click continue. The next step will be as follows:

Function App Configuration 1.png

In this blade, the available templates displayed are the “Webhook+API” and the “Timer” templates. We can browse for more templates by clicking on the “More templates” tab, which leads us to the screen below:

Function App Configuration 2.png

Each template type has a small description of its usage. For example, we use the “HTTP trigger” when we want our function to execute whenever an HTTP request is received, while the “Timer trigger” is used when we want our function to execute at a specific predefined time, like transferring data between two blobs at midnight. The function that we want to create will execute when an HTTP request is received, so we choose the “HTTP trigger” template. A new blade opens on the right to specify the function name and the authorization level.

Function App Configuration 3.png
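As an aside, the Timer trigger variant mentioned above takes only a few lines. The sketch below is hypothetical (the function name and schedule are not from this post); it uses an NCRONTAB expression that fires once a day at midnight, matching the blob-transfer example.

```csharp
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class MidnightTransfer
{
    [FunctionName("MidnightTransfer")]
    public static void Run(
        // NCRONTAB format: {second} {minute} {hour} {day} {month} {day-of-week};
        // "0 0 0 * * *" means every day at midnight.
        [TimerTrigger("0 0 0 * * *")] TimerInfo timer,
        ILogger log)
    {
        log.LogInformation("Timer fired - move the data between the two blobs here.");
    }
}
```

The schedule string is the only configuration Azure needs to know when to invoke the function.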

The authorization level determines the type of access to the function we are creating. We have three options in this field: Anonymous, Function, or Admin. The Anonymous level makes our function accessible without any key or restriction; the Function level makes it accessible only through a function key generated by Azure; and the Admin level requires the host's master key. For our function we will choose the Function authorization type and then click Create.

Function App Code.png

As we can see in the figure above, the screen contains several sections. In the middle we have the editor, which we will use for writing the code snippet that represents what the function mainly does; this shows that a function is really just a method block of code in the end. The section below the editor contains the Logs screen, which displays errors when the code is compiled, while the Console acts like the normal console window we all know, displaying the folder path at the beginning. On the right side we can see the sections for testing our function by making a sample HTTP POST call to it, while View Files lets us see the files available under this function, where the only file upon creation is the one opened in the editor, called “run.csx”.

So now we have this C# function in our Consumption Plan, which takes an HttpRequest and an ILogger as parameters. The function is async since calls to it are asynchronous, and it is static as it does not depend on any object instance but belongs to the type itself. The function takes the incoming HTTP request and first logs the message “C# HTTP trigger function processed a request” through the ILogger instance. The next step is to fetch the query string parameter “name” from the incoming request; if this parameter is empty and no query string is sent via the function URL, the function looks inside the request body, treats it as JSON and deserializes it to fetch the value. The last thing the function does is return the parameter value by creating a new response message through OkObjectResult, which is an ActionResult. If the name was empty, a warning message is returned asking you to pass a name value.
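Putting that description together, the default run.csx looks roughly like the sketch below (the exact template Azure generates varies slightly by runtime version, so treat this as an approximation rather than the literal portal output):

```csharp
#r "Newtonsoft.Json"

using System.Net;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Extensions.Primitives;
using Newtonsoft.Json;

public static async Task<IActionResult> Run(HttpRequest req, ILogger log)
{
    log.LogInformation("C# HTTP trigger function processed a request.");

    // First look for "name" in the query string.
    string name = req.Query["name"];

    // Fall back to the JSON request body if the query string was empty.
    string requestBody = await new StreamReader(req.Body).ReadToEndAsync();
    dynamic data = JsonConvert.DeserializeObject(requestBody);
    name = name ?? data?.name;

    return name != null
        ? (ActionResult)new OkObjectResult($"Hello, {name}")
        : new BadRequestObjectResult("Please pass a name on the query string or in the request body");
}
```

Note how every behavior described above maps to one line: the log call, the query string lookup, the body deserialization, and the OkObjectResult response.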

This was the default code generated when we created the function app. You can write your own code by modifying the function body written in C#, based on what you need this function app to do with the incoming HTTP request. We will keep the code as is, pass the value “my first azure function app” in the “name” query string parameter, and test what the function displays. To test the function we have two options: either a direct call from the browser using the function URL generated by Azure, or the Test tab in the right section of the screen described before.

Before testing the function app, we want to make sure there are no compilation or code errors. We check this by clicking Save to commit our function code and then clicking the Run button. When we click Run, the Logs tab indicates the compilation results, and if there are any errors you will be notified in this section. The Logs tab should look something like below:

Function App Log.png

After running our code and checking that there are no compilation errors, we are going to test our function app. As a first option, we can use the function URL generated by Azure. To do so we click on the “Get Function URL” link; a popup containing the function URL is shown, as below.

Function App URL.png

The function URL is formed from your function app name (highlighted above in yellow) hosted on the azurewebsites.net domain, and the function app acts like an API, with the function name as a sub-section (highlighted in red). The final section is the code, a secure hash that enables access to the function app; without this code the generated response will be HTTP status code 401, unauthorized access. We take this URL, paste it in the browser and add the “name” query string value. The output will be as follows:

Function App Result.png

The second option, as mentioned, is the Test tab. In this tab we can test the function app by generating an HTTP POST call. The tab provides the ability to add query parameters, headers and a request message body. Since we tested the query string option with the generated function URL, we will now try the request message body, as shown below.

Function App Code 2.png

We provided the name variable in the request body, and the call is an HTTP POST. Once we click Run, the generated output is the message “Hello, my first azure function app” with status 200 OK.

Creating a Function App in Visual Studio 2017

Now we come to creating an Azure function app using Visual Studio 2017. You can also use VS Code; that will be discussed in another article.

To create a function app in Visual Studio 2017, choose New Project and go to the Cloud section, where the Azure Functions project template is available. If you do not have this template installed, you have to modify your Visual Studio installation and install the required templates for cloud projects. Specify the project name and path, then click OK. The process is shown below.

VS Create New Function.png

When you click OK, another form is shown to select the type of the Azure function, the target framework, and other options. We have two options: creating an Azure function using the .NET Framework, or using .NET Core. We will choose the .NET Core option, where several project types are also displayed: an empty project, an Http Trigger app, etc. Since we want a function that takes an incoming HTTP request, we choose the Http Trigger project type as shown:

VS Create New Function 2.png

Notice that you can specify the storage account, if you want any, and the access rights (permission level) for this function. In our sample function, we set the access rights to “Anonymous”. By clicking OK, the project is created and it should look like the below:

VS Function App Code.png

The project looks like a normal class library containing a class code file called “Function1.cs”, which will host our function. You can rename the class file; I chose the name “MyFirstAzureFunction.cs”. We can see that the code is very similar to the one we saw before in the Azure portal. You can also change the function name to any name you want. The method is a static one inside a static class, and it should indicate to Azure what this function does, what it takes as parameters and so on. This metadata is expressed to Azure using C# attributes. On this method, we can see the HttpTrigger attribute with the authorization level we chose at project creation, which is Anonymous. It also specifies which HTTP verbs the function accepts; in our case GET and POST, without any specified Route since it is null. The FunctionName attribute tells Azure the public name of this function. It is currently “Function1”, so we will change it to the name we want displayed publicly in Azure; I will rename the function to “MyHelloAzureFunction”. The body of the method is the same as the one we saw in the Azure portal, so no need to explain it again. Let us move forward to debugging the function locally and deploying it to Azure afterwards.
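After the renames described above, the function class should look roughly like the sketch below. The attribute values (Anonymous level, GET/POST verbs, null Route, the public name “MyHelloAzureFunction”) come from the steps above; the body mirrors the portal template:

```csharp
using System.IO;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.Extensions.Logging;
using Newtonsoft.Json;

public static class MyFirstAzureFunction
{
    // FunctionName sets the public name Azure displays,
    // independently of the C# method or class name.
    [FunctionName("MyHelloAzureFunction")]
    public static async Task<IActionResult> Run(
        // Anonymous access level, GET and POST verbs, no custom route.
        [HttpTrigger(AuthorizationLevel.Anonymous, "get", "post", Route = null)] HttpRequest req,
        ILogger log)
    {
        log.LogInformation("C# HTTP trigger function processed a request.");

        // Same logic as the portal version: read "name" from the
        // query string or the JSON request body and return a greeting.
        string name = req.Query["name"];
        string requestBody = await new StreamReader(req.Body).ReadToEndAsync();
        dynamic data = JsonConvert.DeserializeObject(requestBody);
        name = name ?? data?.name;

        return name != null
            ? (ActionResult)new OkObjectResult($"Hello, {name}")
            : new BadRequestObjectResult("Please pass a name on the query string or in the request body");
    }
}
```

The attributes are the only Azure-specific part; everything else is plain ASP.NET Core request handling, which is what makes these functions easy to unit test.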


When we press F5 to run the project, it takes some time to start the local Azure Functions runtime on your machine. Once the project runs, it displays a console similar to the above figure. Of course, you can set a breakpoint to debug the function method code once you make a call to the function. As we can see in the figure, the function is hosted and running on localhost at port 7071. Notice that the port on your machine may differ depending on the available ports, but 7071 is the default port used by the Azure Functions CLI. When the project runs, the function can be reached through the URL displayed in the console; in my case it is as follows: http://localhost:7071/api/MyHelloAzureFunction.

So let us take the mentioned URL, paste it in the browser, and provide it with a query string “name” with any value you want. I will again choose the value used before, “my first azure function app”. Once you add the query string to the copied URL and hit Enter in the browser, an HTTP response is generated containing the message “Hello, my first azure function app”, as shown below:

Function App Result 2.png

Now, after testing our function method locally, we come to deployment on Azure. To deploy the function, we check our code in to the repository of our function app on Azure, where the Kudu engine handles compiling and running the code at the master branch of the repository. To do so, we have to create the Git repository if it is not there yet. We go to our function app and click on “All Settings”, where a new blade opens containing all the settings for this function app, as shown below.

Azure Function App Dep Center

Azure Function App Dep Center 2

The Git repo for my function app was already created and is ready for use by copying the Git URL. If it is the first time creating a Git repo for your function app, you can easily go through the steps of creating it by clicking on the “Deployment Center” section. Now that I have the Git repo URL to commit my function app code to from Visual Studio, we start by adding our code to source control: click the “Add to Source Control” button in the Visual Studio toolbar and choose “Git” from the list. After this step, Visual Studio creates the Git files for the solution and we are ready to start deploying. Open a console, navigate to the solution folder path, then use the Git commands shown below with the Git URL we copied from the function app settings in Azure.

Git Push.png

As you can see, we used several Git commands to check the status of the Git repo in our solution, then pushed our code. When we execute the git push command, a prompt for credentials is shown and you have to provide the password to start the process. If you do not have these credentials yet or don't know anything about them, go to the “Deployment Credentials” section under “All Settings” in the function app blade in Azure and create a new username and password for this Git repo.

After providing the correct credentials, the upload process starts executing, and as mentioned before, the Kudu engine on Azure starts compiling the pushed code. The function will then be ready in Azure, though this process may take some time. The console output below shows the status of my pushed code.

Git Push 2.png

Once the push process finishes, the code is compiled and the function is created under your function app in Azure. If there are no compilation errors, the function will be up and running once the push and the Kudu compilation finish. You can check this by going to your function app in Azure and checking the Functions section, which should now contain the function uploaded from Visual Studio, as shown in the figure below for my example.

Git Pushed Function

Finally, that is it! You can test your function the same way mentioned before, and for any changes you want to apply to your function, you have to repeat the process of pushing code to the same repo and master branch again so the function reflects the changes made to your code in Visual Studio. In the next article, we will see how to create an Azure function using VS Code, and much more. Stay tuned!

Running ASP.NET Core App in Kubernetes With Docker for Windows 10

Today, we are going to get started with setting up Docker v18.06.0-ce-win72 (19098), which now supports Kubernetes v1.10.3, on machines running Windows 10. We will end up running an ASP.NET Core app with Docker after creating a Deployment, which manages a Pod that runs the desired container. Consider this a quick tutorial for getting started with Kubernetes; afterwards, any developer can dig deeper and learn more about using Kubernetes with Azure through the Azure Kubernetes Service, AKS (formerly Azure Container Service).

What is Docker?

Docker by definition is an open platform for developers and sys-admins to build, ship, and run distributed applications, whether on laptops, data center VMs, or the cloud.

Docker is the biggest and most popular container system: a way to containerize your application and put everything it needs into a small image that can then run on any computer that has Docker installed. It eliminates the obstacles to running your application on any platform due to configuration issues, missing libraries, or any of the other problems we face when hosting an application. Therefore, any system with Docker installed can containerize your application smoothly.

We have to differentiate between a container and a VM: a container is definitely not a virtual machine, and what makes them different is shown in the figure below.


As we can see, in a VM you have to package up or set up a complete operating system, with all the libraries required to run your application, in order to host the app inside it. With Docker, you do not have an actual operating system to install and configure; Docker does all the translation needed (it acts as a translation layer). Therefore, Docker simply runs a container on your operating system, which has all the bins and libraries needed plus the actual application itself, and the most interesting part is that Docker can share these libraries across multiple applications.
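To make this concrete, a minimal multi-stage Dockerfile for an ASP.NET Core app might look like the sketch below. The image tags match the .NET Core images of this Docker era, and “MyApp” is a hypothetical project name, not one from this post:

```dockerfile
# Build stage: compile and publish the app using the full SDK image.
FROM microsoft/dotnet:2.1-sdk AS build
WORKDIR /src
COPY . .
RUN dotnet publish -c Release -o /app

# Runtime stage: only the lightweight ASP.NET Core runtime ships in the final image.
FROM microsoft/dotnet:2.1-aspnetcore-runtime
WORKDIR /app
COPY --from=build /app .
ENTRYPOINT ["dotnet", "MyApp.dll"]
```

The two-stage split is what keeps the shipped image small: the SDK and intermediate build output never leave the first stage.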

What is Kubernetes?

Kubernetes by definition is a portable, extensible open-source platform for managing containerized workloads and services, which facilitates both declarative configuration and automation. It has a large, rapidly growing ecosystem, and Kubernetes services, support, and tools are widely available.

Simply put, Kubernetes is a container orchestrator: it takes a declared state of how you would like your micro-services or containers configured and makes it happen across computers. As an example, if you are running an application in a cloud like Azure over several compute instances, for each instance you would be configuring and cloning your application, building it and running it; and if it crashes, you have to go back, figure out why it crashed, and reboot it to get it operating again. If you want all of the above done automatically for you, without worrying about any of this, you can simply use Kubernetes, which handles all of these things for you. You set up Kubernetes on all of these instances, they communicate with each other through the master, you give them a declared state, and Kubernetes makes it happen. That is all!

Some terms to keep in mind when using Kubernetes are Node, Pod, Service, and Deployment. A Node is a computer instance running Kubernetes, and a node runs what we call Pods. A Pod contains one or more running containers. A Service is what handles requests, either coming from inside the Kubernetes cluster between several nodes, or public requests from outside the cluster to a master node that wants to execute a specific micro-service; the Service is usually referred to as a load balancer. A Deployment is what defines and maintains the desired state of an app. The figure below shows this architecture.


Docker for Windows?

To install Docker for Windows, check the Install Docker for Windows guide for a detailed explanation of the installation steps and what is needed to make Docker ready to use on Windows 10. Once installed, go to the Docker settings to enable Kubernetes and show the system containers, just as shown below.

docker settings

As you can see, there is also another orchestrator that can be used instead of Kubernetes, called Swarm, but it is outside the scope of this article.

Play around with some Commands

Once Docker is installed on your machine, you can check it from the command line (or Windows PowerShell) by executing the command "docker" as follows:

docker command

You can check all your running containers (these are what back the Kubernetes pods) by executing the command "docker ps"; here are the ones running on my machine.

docker ps

To terminate a container, all you have to do is use the command "docker kill" and provide it with the container ID, as shown below (run "docker ps" again to check that the container was terminated).

docker kill

To run a new image on Docker, we use the command "docker run imagename". If the image is not found locally, Docker will try to pull it from the registry (Docker Hub by default) and assign the new container a unique ID, as follows.
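The run/list/kill cycle described above can be sketched as one short session. This is a command transcript, not a script from the original post; the image (nginx) and container name (demo) are just examples.

```shell
# Run a container in the background; if the image is not present
# locally, Docker pulls it from Docker Hub first.
docker run -d --name demo nginx

# List the running containers and their IDs.
docker ps

# Terminate the container by name or ID, then verify it is gone.
docker kill demo
docker ps
```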

docker run

I really like using commands on any platform, but a UI is preferable for users interacting with a system, and that is why Kubernetes offers a cool Web UI dashboard for interacting with clusters. The Web UI is a visualization of what you saw above in the command results: you can check the pods, nodes, services, and deployments you have in the system. Moreover, you can manage the system, for example scale a Deployment, initiate a rolling update, restart a pod, or deploy new applications using a deploy wizard, plus many more features you can read about online in Web UI (Dashboard).

To enable the dashboard on your machine, execute the command below, pointing it at the dashboard deployment manifest published in the Kubernetes documentation (the manifest URL changes between dashboard versions, so it is shown here as a placeholder).

kubectl apply -f <dashboard-manifest.yaml>

You can add extra features such as charts and graphs by deploying the monitoring add-ons, running the following once per add-on manifest:

kubectl create -f <monitoring-addon.yaml>

Here, "kubectl" is the command-line interface for running commands against Kubernetes clusters.

Notice that the deployment file has the extension .yml, where YAML is a human-friendly data serialization standard for all programming languages. It is commonly used for configuration files and is similar to JSON, the difference being that YAML relies on indentation instead of braces and quotation marks. You can read more about YAML online and download the library for your preferred programming language.
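As a quick illustration of that difference, here is the same made-up configuration fragment written first as JSON (in a comment) and then as YAML:

```yaml
# The JSON form: {"app": {"name": "demo", "replicas": 2, "ports": [80, 443]}}
# The same data in YAML, using indentation instead of braces and quotes:
app:
  name: demo
  replicas: 2
  ports:
    - 80
    - 443
```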

To access the dashboard, we use the "kubectl" command-line tool and provide it with the proxy keyword:

kubectl proxy

Note that the dashboard can only be accessed from the machine that executes the above command. You can reach the dashboard via the following link:


When opened, the Kubernetes dashboard will bring up the following screen, where you can choose how to grant access, either via a kubeconfig file or a token, or you can skip this step and continue to the dashboard.

kubernetes dashboard

The dashboard will be something like what I have here on my system.

kubernetes dashboard 2

Now let us move into action by creating an ASP.NET Core web application in Visual Studio 2017 and adding Docker support for this app under Linux. I will use the Visual Studio templates to create the application, but of course the same setup can also be done with commands.

The application I am creating is named MyKubeASPNETCoreApp and uses ASP.NET Core 2.1, with the Enable Docker Support checkbox selected and Linux as the OS:

asp net core app

When clicking the OK button, the Docker setup is activated and the pulling of the required images is initiated, as shown:


docker image pull

After Docker finishes pulling the images it needs, my web app solution now has a Dockerfile that contains the following:

docker file
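For reference, the multi-stage Dockerfile that Visual Studio 2017 generates for an ASP.NET Core 2.1 project looks roughly like the sketch below. The exact base-image tags and paths depend on your Visual Studio and SDK versions, so treat this as an approximation rather than the generated file itself.

```dockerfile
# Runtime image used to host the published app.
FROM microsoft/dotnet:2.1-aspnetcore-runtime AS base
WORKDIR /app
EXPOSE 80

# SDK image used to restore, build, and publish.
FROM microsoft/dotnet:2.1-sdk AS build
WORKDIR /src
COPY MyKubeASPNETCoreApp.csproj ./
RUN dotnet restore MyKubeASPNETCoreApp.csproj
COPY . .
RUN dotnet publish -c Release -o /app

# Final image: runtime base plus the published output.
FROM base AS final
WORKDIR /app
COPY --from=build /app .
ENTRYPOINT ["dotnet", "MyKubeASPNETCoreApp.dll"]
```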

When you hit Run or F5 in Visual Studio 2017 with Docker selected, the application is configured with a host port linked to container port 80 over a TCP connection. Watch out: you may need to enable some Windows Firewall rules for Docker, or the application will not run due to blocked ports or drive-sharing issues between Windows and Docker. Now if I check the running containers in the command line or PowerShell, I can see my application out there.

docker powershell

In the browser, I have the running demo app functioning smoothly!

app browser

I will also share the commands you can use to set up the app in Docker and Kubernetes yourself, instead of Visual Studio 2017 doing that for you.

C:\Users\Rami Chalhoub\Documents\Visual Studio 2017\Projects\MyKubeASPNETCoreApp>docker build -t mykubeaspnetcoreapp:dev .

C:\Users\Rami Chalhoub\Documents\Visual Studio 2017\Projects\MyKubeASPNETCoreApp>kubectl run mykubeaspnetcoreapp --image=mykubeaspnetcoreapp:dev --port=80
deployment "mykubeaspnetcoreapp" created

As a start with Docker and Kubernetes, running an ASP.NET Core web application in a Docker Linux container already gives us a lot of features to play with, and building containers and orchestrating them via Kubernetes will only get better in the future!

Executing .NET Assembly in SQL CLR

Many developers are reluctant when it comes to learning T-SQL stored procedures and dealing with functions and triggers. As we all know the famous saying, "If necessity is the mother of invention, then laziness is sometimes its father", so welcome to the tricky way of doing what you want in SQL without writing complex T-SQL syntax: using .NET assemblies and calling .NET DLLs inside SQL Server through SQLCLR.

What is SQLCLR?

SQL Server 2005 introduced the integration of the Common Language Runtime (CLR) component of the .NET Framework for Microsoft Windows. This means that you can write stored procedures, triggers, user-defined types, user-defined functions, user-defined aggregates, and streaming table-valued functions, using any .NET Framework language, including Microsoft Visual Basic .NET and Microsoft Visual C#. The Microsoft.SqlServer.Server namespace contains a set of new application programming interfaces (APIs) so that managed code can interact with the Microsoft SQL Server environment.

Therefore, SQLCLR is a framework that bridges the environment of the SQL Server database engine with the rich programming environment of .NET. This allows for:

  • Extending capabilities of queries beyond T-SQL built-in functions.
  • Performing certain operations faster than being done in T-SQL.
  • Better interaction with external resources than can be done via xp_cmdshell.

Enabling CLR Integration:

The Common Language Runtime (CLR) integration feature is off by default in Microsoft SQL Server and must be enabled in order to use objects that are implemented with CLR integration. To enable CLR integration using Transact-SQL, use the "clr enabled" option of the "sp_configure" stored procedure, followed by RECONFIGURE to apply the change:

sp_configure 'clr enabled', 1
RECONFIGURE




Now, let us play with some C#.NET code and do practical stuff based on what was introduced above. For this purpose, we are going to create a new assembly, in other words a class library, in Visual Studio and name it TestSqlClrAssembly. The namespaces needed in the library are:

using Microsoft.SqlServer.Server;

using System.Data;

using System.Data.Sql;

using System.Data.SqlTypes;

By default, if you are using Visual Studio 2015 or 2017, these assemblies are already referenced. Not all of them are used here, but these are the ones SQLCLR integration normally needs.

Now, create a class called TestClass, which will include a simple method that returns the length of a string provided as a parameter. Instead of writing a stored procedure in T-SQL to get the input string length, we will simply create a method that does this and call it later through SQLCLR from the class library containing TestClass.


[Microsoft.SqlServer.Server.SqlProcedure]
public static void StringLength(string inputString)
{
    // Channel the result back to the caller through the pipe.
    SqlContext.Pipe.Send(inputString.Length.ToString());
}




As shown above, the declared method must be static, since SQL Server does not instantiate TestClass or create any object from it to call the method. Moreover, the return type must always be void, since the result is channeled through the SqlContext.Pipe class. We place the attribute [Microsoft.SqlServer.Server.SqlProcedure] before the method declaration to tell SQLCLR to treat this method as a stored procedure. The Send method of SqlContext.Pipe also accepts other types, such as a SqlDataRecord object or a SqlDataReader object, in addition to the string input used in our example. We chose a string input to keep the example simple, but the same technique applies whatever type your project needs.


When we finish writing the method, we build the class library to generate the assembly (dll file) that SQL Server will use to call this stored procedure. For testing purposes, we will create a testing database and run the T-SQL statement below in it (you can use this dll in any target database, following the same steps and commands):


CREATE ASSEMBLY TestSqlClr
FROM '(class library compilation path)\TestSqlClrAssembly.dll'
WITH PERMISSION_SET = SAFE



The statement above loads the TestSqlClrAssembly.dll binary into the testing database, which means SQL Server does not reference the dll at its own path but loads it completely into the database file (MDF). This implies that any time you change or modify the class library code, you have to rebuild the library to generate a new dll and load it with the same statement, replacing CREATE with ALTER. The permission set is defined as SAFE since we do not need to access any information or data outside our testing database, and we are not using any unmanaged code. Moreover, the StringLength() method cannot be recognized by SQLCLR as a stored procedure on its own, so we have to point to it manually by creating a procedure that calls it and returns its value, as in the statement below. Note the same parameter name in the created SP.

CREATE PROCEDURE spStringLength (@inputString nvarchar(max))
AS
EXTERNAL NAME TestSqlClr.[TestSqlClrAssembly.TestClass].StringLength


Now we are ready for the fun part: executing the procedure above, which calls our method in the class library. Before doing so, do not forget to enable the CLR by executing the commands mentioned at the beginning of this post, which enable SQLCLR on the server side.

EXEC dbo.spStringLength 'This is my first sql clr assembly run'

The returned value of the statement above is 37!

We can build much more complex scenarios using the same strategy described in this post, linking the created methods to SPs in SQL Server. Do not forget that each time you modify the code in the class library, you have to load the new dll assembly into the database again using the ALTER ASSEMBLY statement. I hope to discuss more complex scenarios with triggers and functions in SQLCLR in future posts.



ASP.NET Calendar Dates Comparison & Validation

During the development of an ASP.NET website, we often face the issue of comparing date values entered by the user and validating them so that no logical error occurs. This case mostly arises when two date values are supplied and the first must be less than the second. For example, an event may have a Start Date and an End Date, where the End Date must be greater than the Start Date to prevent conflicts when these values are used later on.

The comparison and validation process can take place either at the server side when an event is fired (postback) in the webpage (comparing values using C# or VB in the code behind), or at the client side using Javascript or jQuery.

Our method is based on using ASP.NET controls for validation and comparison, and this takes place at the client side when these controls are rendered as HTML and scripts inside the page displayed to the client and before posting back the page to the server.

Consider the example below showing how we can apply this method:

<asp:Calendar ID="calStartDate" runat="server"></asp:Calendar>
<asp:RequiredFieldValidator ID="RequiredFieldValidator1" runat="server" ControlToValidate="calStartDate" ErrorMessage='Start Date Required' Display="Dynamic" Font-Bold="true" ForeColor="Red"></asp:RequiredFieldValidator>

<asp:Calendar ID="calEndDate" runat="server"></asp:Calendar>
<asp:RequiredFieldValidator ID="RequiredFieldValidator3" runat="server" ControlToValidate="calEndDate" ErrorMessage='End Date Required' Display="Dynamic" Font-Bold="true" ForeColor="Red"></asp:RequiredFieldValidator>
<asp:CompareValidator ID="CompareValidator1" runat="server" Operator="GreaterThan" Type="Date"
ControlToValidate="calEndDate" ControlToCompare="calStartDate" ErrorMessage='End Date must be greater than Start Date' Font-Bold="true" ForeColor="Red"></asp:CompareValidator>

The ASP.NET RequiredFieldValidator indicates that the target field is required, so no action can take place without providing a value for it. By using this control, we ensure that the Calendar controls for both Start Date and End Date are not left empty.

The ASP.NET CompareValidator compares the values of two controls according to the operation set in its Operator property (its Type property determines how the values are parsed, for example as dates). In the example above, we use GreaterThan to ensure that the End Date is greater than the Start Date.

If the End Date isn't greater than the Start Date, the error message is displayed beside the control and the page is prevented from posting back to the server.

Note that, in order to trigger the comparison and validation when raising an event inside the page, such as a button click, the control that fires the event must be set to cause validation using CausesValidation="true".
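For example, a submit button that triggers these validators could look like the hypothetical markup below; the button ID and click-handler name are placeholders, not from the example above.

```aspx
<asp:Button ID="btnSave" runat="server" Text="Save"
    CausesValidation="true" OnClick="btnSave_Click" />
```

With CausesValidation="true", the click first runs the page's validators on the client; the postback (and the server-side btnSave_Click) only happens once all validators pass.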

SQL Server: Cleaning all Database Tables and Reset Identity Columns by forcing Truncate or Delete

We all know that during the development of any software or web application connected to a SQL Server database, the tables accumulate what we call 'trash' data: non-useful data entered by developers to test their code. This data should be removed before delivering the application to the client, leaving the database clean and every table's identity column reseeded to zero.

I've faced this issue several times. Removing data with the usual DELETE or TRUNCATE statements isn't hard when the number of tables and the relations between them are small. But when the database contains a large number of tables with complicated relationships among them, it becomes a serious problem.

Well, the solution is to iterate over all tables in the database, disable the constraints, and then delete the data found in each table. Don't panic: we're not going to write a script for this, and of course we won't do it manually by visiting each table, disabling its constraints, and deleting its records. Thanks to the built-in stored procedure sp_MSforeachtable, we can loop over all the tables in the desired database and execute any command supplied in its parameter.

The solution will be as follows:

  1. Disable Constraints and Triggers by executing the following command:
    exec sp_MSforeachtable 'ALTER TABLE ? NOCHECK CONSTRAINT ALL'
    exec sp_MSforeachtable 'ALTER TABLE ? DISABLE TRIGGER ALL'
  2. Now delete all the records found in all the tables in your database, forcing cleanup through DELETE (note that TRUNCATE cannot be used on tables referenced by foreign keys, even when the constraints are disabled, so DELETE is used here):
    exec sp_MSforeachtable 'DELETE ?'
  3. We have to enable the Constraints and Triggers back again:
    exec sp_MSforeachtable 'ALTER TABLE ? CHECK CONSTRAINT ALL'
    exec sp_MSforeachtable 'ALTER TABLE ? ENABLE TRIGGER ALL'
  4. The final step is to reset the Identity column in all tables back to zero base index:
    exec sp_MSforeachtable 'IF OBJECTPROPERTY(OBJECT_ID(''?''), ''TableHasIdentity'') = 1 BEGIN DBCC CHECKIDENT (''?'',RESEED,0) END'

Now all the tables in the database are clean and their identity columns are reset to zero.

SQL Server Configuration Manager – Cannot connect to WMI provider

I was trying to access my SQL Server 2008 R2 Network Configuration to enable TCP/IP connections to my server through SQL Server Configuration Manager in the Configuration Tools. Suddenly an error message appeared, indicating that the configuration manager cannot connect through the WMI provider. The error message that popped up is shown below:



As we can see above, the error dialog indicates that I don't have permission to access the server, or that the server is unreachable. But SQL Server is installed properly on my machine, and I can access the instance through Microsoft SQL Server Management Studio. So why did this error occur?

This error occurs when the .mof (Managed Object Format) files are damaged or were not properly installed and registered during the MS SQL Server 2008 R2 installation process.

In order to solve this issue, we have to do the following steps:

  1. Run the command prompt as administrator.
  2. Change the directory to the following path: “C:\Program Files (x86)\Microsoft SQL Server\100\Shared”.
  3. Use the mofcomp.exe to register the .mof file again by running the following command: mofcomp.exe “C:\Program Files (x86)\Microsoft SQL Server\100\Shared\sqlmgmproviderxpsp2up.mof”

using mofcomp.exe

Remark: mofcomp.exe compiles Managed Object Format (MOF) code into a binary form stored in the WMI repository. It is used when creating or modifying the MOF file for a WMI provider, and is one of the WMI command-line tools in Windows.

Now the MOF file in SQL Server 2008 R2 is parsed successfully and the SQL Configuration Manager will execute without any error.


Cannot use a leading .. to exit above the top directory

When you use relative paths incorrectly in ASP.NET, an exception is thrown with the following error message: "Cannot use a leading .. to exit above the top directory".

This usually occurs when you write a static URL, or generate a dynamic one, with too many upward levels back toward the root directory, like "../../". While the website is running, the exception is then thrown.

As an example, let's say that your website contains the following link code section:

<!-- Style -->
<link rel="shortcut icon" href="icon.ico" />
<link href="../css/flags.css" rel='stylesheet' type='text/css'>
<asp:Image ImageUrl="../flag.png" />

What happens here is that your webpage refers to, or wants to access, content that is in the root folder or one level up from the current page. So far everything seems logical and correct.

But the error occurs when the page is itself at the root level and no upper level exists. The content you want to access must then be at the same level or somewhere else within the website, since you can't jump above the root. So the webpage cannot refer to content one level up ("../"), because the page is already in the root folder.

Always watch out when using relative paths, whether in HTML or in ASP.NET code such as Server.Transfer or Response.Redirect, since requesting a wrong relative path will cause a runtime exception.

Entity Framework now Open Sourced

A few months after Microsoft open sourced ASP.NET MVC 4 and ASP.NET Web API, the Entity Framework source code has been released under an open source license, and the code repository is now hosted on CodePlex.

This step will enable all developers to contribute and engage by providing code, fixing bugs, and implementing new features to be included in future versions after testing. This will improve Entity Framework, since it is now tested and built daily, leading to a better product for object-relational mapping.

The open-sourced code includes the Entity Framework runtime and NuGet packages, Code First, the DbContext API (introduced in EF 4.1), and the Entity Framework Power Tools.

For more details, you can find all what you’re looking for on Entity Framework CodePlex.