Analyzing an Uploaded Image to a Blob using Azure Cognitive Services & Saving Analysis Results to CosmosDB

We discussed before in detail how to build Azure Functions using Visual Studio 2017 and VS Code (review my previous articles Building Azure Functions in Visual Studio Code using .Net Core and Building Azure Functions in Azure Portal & Visual Studio 2017 using .NET Core). Today we're going to build on this knowledge to create a blob trigger function app in Azure using Visual Studio 2019 (you can also use Visual Studio 2017 to apply all the steps and features discussed in this post). This function will be triggered whenever an image is uploaded to a blob storage container, after which we will analyze the image using Azure Cognitive Services and save the analysis results into a CosmosDB.

This post assumes that you already have an Azure account ready to be used, in addition to Visual Studio 2019 installed on your machine. If not, you can create your own free trial account by subscribing to the Azure portal.

Creating a Blob Trigger Function

Before creating the blob trigger function in Visual Studio, make sure you have set up a blob storage account in Azure (for help you can refer to the Microsoft docs Create a storage account). Back in Visual Studio, we want to create a function that triggers whenever a new image is uploaded to a blob storage account. To achieve this, there is a template in Visual Studio 2019 that helps with this, as shown below.

Capture 1.PNG

Choose the Azure Functions project template after filtering by the project type Cloud, then specify the project name and location.

Capture 2.PNG

When we click Create, a new form will show up to select the type of function to be created. We're creating a blob trigger function, so we will choose the "Blob Trigger" template option. After choosing this option with .Net Core as the target framework, we have to specify which storage account to use and the path under this account.

Capture 3.png

Visual Studio 2019 provides the option to specify a Storage Emulator as the storage account. This helps in simulating most of the features of an Azure storage account locally on your machine (for help with the storage emulator and how to use it, you can refer to the Microsoft docs Azure storage emulator). But this option is limited, so we will be using the storage account that already exists in our Azure subscription. When we select Browse as shown above, you will be asked to sign in to your Azure account so that the available storage accounts in your subscription can be listed for you to choose from. Follow the steps below.

Capture 4.png

After signing in, a list of available storage accounts for the selected subscription will be shown. Select the storage account created for this solution from the list and click Add. You will be returned to the form where the templates are listed, but this time you will see the selected storage account and be asked for the connection string and the path where the images are uploaded or saved in this blob.

Capture 6.png

To get the connection string, go to the portal and browse to the selected storage account. When the storage account blade shows up, navigate to the Access keys section under Settings. Another blade will appear that contains several keys. One of these keys is the connection string value used to reach this storage account.

Capture 7.png

Now that the connection string value is provided to the project template, all that is left is the path. Simply, the path or container will be "images" since we're uploading images to this blob. You can choose whatever path you find suitable. After clicking "Ok", the project will be created and you will have something similar to the figure shown below.

Capture 8

We will make some modifications before we start developing the function. First, we will rename the function so that it reflects what we're doing; I chose the name "BlobImageAnalysisFunction". Second, we will remove the long connection string value from the function attributes and add it to the local.settings.json file so that the connection string value can be loaded from there. I chose to name the connection string key "blobDemoConnectionString" and set its value in the settings json file. These changes are shown below.

Capture 9, Capture 10
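For reference, after this change the local.settings.json in the project ends up looking roughly like the following (the AzureWebJobsStorage and FUNCTIONS_WORKER_RUNTIME entries are whatever the project template generated for you, and the connection string value here is just a placeholder, not a real key):

{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "UseDevelopmentStorage=true",
    "FUNCTIONS_WORKER_RUNTIME": "dotnet",
    "blobDemoConnectionString": "DefaultEndpointsProtocol=https;AccountName=<your-storage-account>;AccountKey=<your-key>;EndpointSuffix=core.windows.net"
  }
}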

As you can see in the figures above, the function takes several parameters. The first one is decorated with the BlobTrigger attribute, which itself takes two parameters: the path, made up of the container name in the blob storage that images will be uploaded to and that I want Azure to monitor for newly uploaded images, and the connection string setting name, which we set while creating the project and moved to the configuration file. The path parameter in the BlobTrigger attribute ends with /{name}, which is similar to how routes are defined in ASP.NET Core: the {name} token in curly braces indicates that the function will receive the name of the blob being processed as an input parameter. Next we have the actual input parameter, which is a Stream named myBlob. When a new blob appears in the images container, the Azure Functions runtime automatically opens that blob and passes its contents to the function as a Stream. The next input parameter is the blob name as a string, and finally we have the TraceWriter for logging purposes.
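To make this concrete, after the rename and the connection string change described above, the generated template looks roughly like this:

public static class BlobImageAnalysisFunction
{
    [FunctionName("BlobImageAnalysisFunction")]
    public static void Run(
        // "images/{name}": monitor the images container; {name} is bound to the blob name
        [BlobTrigger("images/{name}", Connection = "blobDemoConnectionString")] Stream myBlob,
        string name, TraceWriter log)
    {
        // The default template simply logs the blob name and size
        log.Info($"C# Blob trigger function Processed blob\n Name:{name} \n Size: {myBlob.Length} Bytes");
    }
}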

Based on what the Microsoft docs suggest here, Azure Blob storage bindings for Azure Functions, we can also use CloudBlockBlob from the Microsoft.WindowsAzure.Storage.Blob namespace instead of Stream; you can reference it from the NuGet packages in your project. Notice that after changing the Stream parameter into CloudBlockBlob, you should make some changes in the first line of code in the function body, the one that logs the information telling us the function is working properly. You can use myBlob.Name instead of the name parameter in the function signature, and also change myBlob.Length to myBlob.Properties.Length, where the Properties object offers many other useful attributes. Everything we changed and talked about is reflected in the figure below.

Capture 11.png

Since the blob we’re working with lives in a private container, this means that we need permission to access this blob or in turn may pass this access to other functions or components. This can be achieved through what we call Shared Access Signature and you can refer to Using shared access signatures (SAS) to learn more about this concept, including different types of SAS and how to create SAS in C#. So, we will create a small simple method that creates a SAS for us in order to access this blob, and I’m not doing something new here, the sample code below for creating SAS value was done by the aid of the mentioned Microsoft docs reference above.

public static string GetBlobSharedAccessSignature(CloudBlockBlob cloudBlockBlob)
{
    string sasContainerToken;
 
    SharedAccessBlobPolicy sharedPolicy = new SharedAccessBlobPolicy()
    {
        // Start a few minutes in the past to allow for clock skew,
        // and keep the token valid for one hour
        SharedAccessStartTime = DateTime.UtcNow.AddMinutes(-5),
        SharedAccessExpiryTime = DateTime.UtcNow.AddHours(1),
        Permissions = SharedAccessBlobPermissions.Read
    };
 
    sasContainerToken = cloudBlockBlob.GetSharedAccessSignature(sharedPolicy);
    return sasContainerToken;
}

The above method takes the blob we're currently accessing as an input parameter, and in the method body we specify the permissions on this blob, which is only Read in our case since we're reading images uploaded to the blob. We also set the start time and expiry time of the access: the start time is a few minutes in the past to account for clock skew, and the expiry is one hour from now, which is enough to finish this demo. Finally, the SAS is generated by the blob itself and we return the token as the result of this action.

So now in our blob trigger function we will write some code that logs some blob information and invokes the GetBlobSharedAccessSignature method in order to generate the SAS token, in addition to the URL of this blob. We will log all of this information to make sure that our function is working well. The modified code is as follows.

public static class BlobImageAnalysisFunction
{
    [FunctionName("BlobImageAnalysisFunction")]
    public static void Run(
        [BlobTrigger("images/{name}", Connection = "blobDemoConnectionString")] CloudBlockBlob myBlob,
        string name, TraceWriter log)
    {
        log.Info($"C# Blob trigger function Processed blob\n Name:{myBlob.Name} \n Size: {myBlob.Properties.Length} Bytes");
 
        string blobSas = GetBlobSharedAccessSignature(myBlob);
        string blobUrl = myBlob.Uri + blobSas;
 
        log.Info($"My Blob URL is: {blobUrl}");
        log.Info($"My Blob SAS is: {blobSas}");
    }
 
    public static string GetBlobSharedAccessSignature(CloudBlockBlob cloudBlockBlob)
    {
        string sasContainerToken;
 
        SharedAccessBlobPolicy sharedPolicy = new SharedAccessBlobPolicy()
        {
            // Start a few minutes in the past to allow for clock skew,
            // and keep the token valid for one hour
            SharedAccessStartTime = DateTime.UtcNow.AddMinutes(-5),
            SharedAccessExpiryTime = DateTime.UtcNow.AddHours(1),
            Permissions = SharedAccessBlobPermissions.Read
        };
 
        sasContainerToken = cloudBlockBlob.GetSharedAccessSignature(sharedPolicy);
        return sasContainerToken;
    }
}

What we did is simply generate the SAS token value, append it to the blob URI to form the blob URL, and log this information to make sure that our blob is working. Now try to run the project, then go to the blob and upload a new image; you will see that your function is triggered and the logged values are shown on the screen. When doing so, you should have an output similar to the below.

Capture 12.PNG

Capture 13.PNG

When uploading a new image to the blob from Azure portal, your function output will be something similar to the below.

Capture 14

Capture 15.png

Capture 16.PNG

Now that we have created our function and tested it successfully, we move to the next part, which is setting up the Face API that we will use to analyze our uploaded image.

Creating a Face API Cognitive Service

The Azure platform provides an intelligence framework in the form of Artificial Intelligence services, Machine Learning services, and Cognitive Services. Azure Cognitive Services are in fact web services that you call in order to do some analysis. These services include vision services, of which image processing is a part, natural language processing, speech to text conversion, etc. You can find a list of all provided services in this reference: Azure Cognitive Services.

Since our demo is to analyze an image uploaded to a blob, what we need is the vision cognitive services and specifically the Face API. There are several services offered in the vision bundle; one of them, the Video Indexer API, would also be useful here, so you can try to mimic the same demo based on that service but for video files instead of images.

Capture 17.PNG

So now we go to the Azure portal in order to create the Face API service. Navigate to the "Create a resource" blade and search for Face; you will have something like the below in the search results. After that, click on Face (AI + Machine Learning), and a blade with the title Face shows up asking you to create your new Face API. Follow the screens below for creation instructions.

Capture 18, Capture 19, Capture 20

After providing the name of the API to be created, I chose the free pricing tier since we're doing a small demo here. Of course, if you're using this Face API for production purposes or a real scenario, the free pricing tier isn't enough since it has a limited number of calls. When the Face API is created successfully you will be navigated to a blade similar to the below (since Azure is updated at a fast pace, your blade may look different from what you see below).

Capture 21

As we mentioned before, Azure Cognitive Services are web services that provide intelligent functionality for your data and image processing. This functionality is in reality exposed in the form of Web APIs. This means that I can reach any functionality by invoking an HTTP endpoint, posting some data, and getting back the results from the API in JSON format. Since these services are in the form of Web APIs, in order to go through my demo I need an HTTP client that can invoke the Face API, provide it with the uploaded image URL that already contains the SAS token, and take the analysis results, which will later be posted to a CosmosDB. We can achieve this by referencing the Face API client library in the project from NuGet packages. The library name is Microsoft.ProjectOxford.Face.DotNetStandard, and I'm referencing the .Net Standard version since my project is .Net Core based.

Next we’re going to write a method that invoke the Face API image processing service asynchronously which returns a Face array since the uploaded image may contain more than one person, and accepts the image URL as a parameter where the URL contains the SAS generated token mentioned before. The method is shown below.

public static async Task<Face[]> GetImageAnalysis(string imageUrl)
{
    var faceServiceClient = new FaceServiceClient(
        Environment.GetEnvironmentVariable("faceApiSubscriptionKey"),
        Environment.GetEnvironmentVariable("faceApiEndPointUrl"));
 
    var types = new FaceAttributeType[] { FaceAttributeType.Emotion,
        FaceAttributeType.Age, FaceAttributeType.Gender,
        FaceAttributeType.Smile };
 
    return await faceServiceClient.DetectAsync(imageUrl, false, false, types);
}

To explain this method, we start from the first line, where we initialize the FaceServiceClient used to invoke the Face API. The FaceServiceClient takes two parameters: the subscription key value and the API endpoint. I saved these values in local.settings.json, and this is the way to get the values of these variables from the configuration file. Of course, you're wondering what these variables are and where to get them from. The first parameter is the key needed to access the Face API we created in Azure; it's an entry key for granting access to the API. The second parameter is the URL through which the Face API can be invoked. You can get these values from the Keys section and the Overview section of the created Face API in Azure. Check the figures below to get a clear view.

Capture 22, Capture 23

The next line of code specifies which face attribute types to include in the analysis of the image. This takes place by initializing an array (or IEnumerable) of FaceAttributeType; for my demo I included Emotion, Age, Gender, and Smile. The last line calls the method responsible for detecting faces in the image based on the attribute types specified and returns the results. Notice that the call to the detect method is async, so we have to await it, and since we're using it in the body of the Run function, we have to change the Run function to async Task. The modifications are shown in the figure below, followed by a rough sketch in code. Next we're going to set up the CosmosDB to save our image analysis results and upload the blob trigger function to our Azure portal for testing.

Capture 24.PNG
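In code, the modification looks roughly like the sketch below (the faces-count log line is only illustrative); the full version, including the CosmosDB output binding, is shown later in this post:

[FunctionName("BlobImageAnalysisFunction")]
public static async Task Run(
    [BlobTrigger("images/{name}", Connection = "blobDemoConnectionString")] CloudBlockBlob myBlob,
    string name, TraceWriter log)
{
    string blobSas = GetBlobSharedAccessSignature(myBlob);
    string blobUrl = myBlob.Uri + blobSas;
 
    // Await the Face API call with the SAS-enabled blob URL
    var imageAnalysis = await GetImageAnalysis(blobUrl);
    log.Info($"Number of faces detected: {imageAnalysis.Length}");
}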

Saving Image Analysis to CosmosDB

Our target is to analyze an uploaded image via Azure Cognitive Services, specifically through the Face API, and then save the analysis results to CosmosDB, all through the blob trigger function hosted on our Azure portal. So the missing part now is the CosmosDB. I will not give a step-by-step guide here to create a CosmosDB in Azure; for more details you can refer to this documentation for further help: Azure Cosmos DB Documentation. I already created a CosmosDB in my Azure portal, where this database contains a collection named images in order to save the analysis information. Below is what my CosmosDB looks like.

Capture 25.PNG

Now that we have our CosmosDB ready in Azure, we need to modify our blob trigger function to save the analysis results into the images collection in this database after calling the Face API. In order to do so, we have to add the suitable NuGet package to access the CosmosDB. Microsoft.Azure.WebJobs.Extensions.CosmosDB is the package that needs to be installed in order to access our CosmosDB and the images collection.

A very important note here: when you install the CosmosDB package, you may have some conflicts with the packages already installed in your project. One of the problems you may face is that the BlobTrigger attribute will no longer be recognized. This is because the latest stable version of WebJobs.Extensions.CosmosDB also uses the latest version of Microsoft.NET.Sdk.Functions, so you will be forced to update this package, and in the end install Microsoft.Azure.WebJobs.Extensions.Storage so that BlobTrigger is recognized again.

As you can see in the figure above, my CosmosDB account name is "rcimagesdemodb" and the collection that holds the analysis results is named "images". Note that the images collection lives in a database named "rcBlobImageDB". From the Data Explorer section you can fetch and query the data found in this collection.

To push the analysis information into our CosmosDB, we have to modify our function to accept an output binding to the database. Moreover, we need to hold our analysis data in an object that can be used to track and push the data into the collection. For this purpose we will create a simple class that holds the blob name and the collection of faces analyzed by the Face API.

[CosmosDB("rcBlobImageDB", "images", ConnectionStringSetting = "cosmosDBConnectionString")] 
IAsyncCollector<FaceAnalysis> analysisResults

The piece of code above should be added to the function Run signature as an output binding, which gives us the ability to access our CosmosDB images collection. Several things should be mentioned here to clarify what this code does. First, the CosmosDB attribute takes several parameters: the database name, which is "rcBlobImageDB" in our case, the collection name, which is "images", and several properties, of which we use only one for now, namely the connection string setting used to access the CosmosDB. As with all resources in Azure, the connection string can be fetched from the Keys blade inside the CosmosDB account you've created. I also added the connection string as a key inside local.settings.json with the key name "cosmosDBConnectionString" and the value fetched from the Azure portal. After adding this binding to the function, we need to define the collection that will hold our analysis data. I chose to use the type IAsyncCollector. What I do with IAsyncCollector is keep adding objects to this collection, and when the blob trigger function executes successfully, the collection in this object is pushed to the CosmosDB images collection based on the attributes provided: the database name, the collection name, and the proper connection string.

The class I created, named FaceAnalysis, is a simple public class that contains only two properties: the blob name and the faces collection.

public class FaceAnalysis
{
    public string BlobName { get; set; }
 
    public Face[] Faces { get; set; }
}

Now that we have the output binding to the CosmosDB and the class that will hold our data to be pushed to the images collection, we have to modify our function body to get the analysis information and add it to the CosmosDB via our IAsyncCollector. The function body is now modified as follows.

[FunctionName("BlobImageAnalysisFunction")]
public static async Task Run(
    [BlobTrigger("images/{name}", Connection = "blobDemoConnectionString")] CloudBlockBlob myBlob,
    [CosmosDB("rcBlobImageDB", "images", ConnectionStringSetting = "cosmosDBConnectionString")] IAsyncCollector<FaceAnalysis> analysisResults,
    string name, TraceWriter log)
{
    log.Info($"C# Blob trigger function Processed blob\n Name:{myBlob.Name} \n Size: {myBlob.Properties.Length} Bytes");
 
    string blobSas = GetBlobSharedAccessSignature(myBlob);
    string blobUrl = myBlob.Uri + blobSas;
 
    log.Info($"My Blob URL is: {blobUrl}");
    log.Info($"My Blob SAS is: {blobSas}");
 
    var imageAnalysis = await GetImageAnalysis(blobUrl);
 
    FaceAnalysis faceAnalysis = new FaceAnalysis
    {
        BlobName = myBlob.Name,
        Faces = imageAnalysis
    };
 
    await analysisResults.AddAsync(faceAnalysis);
}

Our blob trigger function is ready now. We mapped the BlobTrigger to our blob in Azure and created the needed SAS token for granting access. We also created the needed Face API in our Azure portal and wrote up the needed code to access this API and get the analysis information from it. Finally, we mapped our CosmosDB created in Azure portal to be used in our function and save the collection of analysis in our images collection.

Since everything is ready now, all that is left is to push our function to Azure and run it to check the results for any image uploaded to our images blob. To do so, I will push the function code to the git repository of the function app that I already have in my Azure subscription. Note that we discussed in several previous posts how to create a function app and push it to Azure, so you can refer to those posts for any help with this.

A very important note to keep in mind: when we push our blob trigger function to the Azure portal, don't forget to add all the keys we added to the local.settings.json file to the Configuration section of the Function App in Azure, or the function will not be able to execute and runtime errors will occur. The function app that I now have in my Azure portal looks like the below, with the configuration keys shown along with the application settings.

Capture 26, Capture 27
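As a recap, these are the settings the function expects to find, locally in local.settings.json and as application settings once deployed (the values below are placeholders, and the endpoint format is only an example of what the Face API Overview blade typically shows):

"blobDemoConnectionString": "<blob-storage-connection-string>",
"faceApiSubscriptionKey": "<face-api-key>",
"faceApiEndPointUrl": "https://<region>.api.cognitive.microsoft.com/face/v1.0",
"cosmosDBConnectionString": "<cosmosdb-connection-string>"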

Action time! Now my function app is up and running in Azure, and the Face API we created before is also there. All we have to do is navigate to our images blob storage blade, upload a new image file from there, and wait until we see the results saved in the CosmosDB. You could build a sample web application that accesses the blob and sends the image to it, but we will use the Azure portal directly for testing now.

As mentioned, I navigated to the blob storage images folder path and uploaded a new image of mine there. Once the image is uploaded to the blob, the function is triggered and all the analysis results are exported to the CosmosDB. The results and the actions are shown below.

Capture 28, Capture 29, Capture 30

The results of the analysis are now available in our images collection in the CosmosDB. You can use this information to query fields needed for further analysis, like the fields in the Emotion section. The results are shown below.

{
  "BlobName": "image.jpeg",
  "Faces": [
    {
      "FaceId": "00000000-0000-0000-0000-000000000000",
      "FaceRectangle": {
        "Width": 438,
        "Height": 438,
        "Left": 393,
        "Top": 220
      },
      "FaceLandmarks": null,
      "FaceAttributes": {
        "Age": 38,
        "Gender": "male",
        "HeadPose": null,
        "Smile": 1,
        "FacialHair": null,
        "Emotion": {
          "Anger": 0,
          "Contempt": 0,
          "Disgust": 0,
          "Fear": 0,
          "Happiness": 1,
          "Neutral": 0,
          "Sadness": 0,
          "Surprise": 0
        },
        "Glasses": "NoGlasses"
      }
    }
  ],
  "id": "acc7fb99-035e-4d32-a3be-97ed7b970277",
  "_rid": "jTwvAK5+XfoCAAAAAAAAAA==",
  "_self": "dbs/jTwvAA==/colls/jTwvAK5+Xfo=/docs/jTwvAK5+XfoCAAAAAAAAAA==/",
  "_etag": "\"850093a1-0000-0e00-0000-5cd81d150000\"",
  "_attachments": "attachments/",
  "_ts": 1557667093
}

In this post, we did something interesting: we used the powerful Cognitive Services features provided by Azure to analyze an image uploaded to a certain blob. All this was done through our blob trigger function, which accessed the blob to read the uploaded image, sent the image to the Face API for analysis, and finally exported the results into a CosmosDB. This was all achieved using the power of Serverless Computing with Azure Functions!

Building Azure Functions in Azure Portal & Visual Studio 2017 using .NET Core

In this post, we are going to talk about Azure Functions. As known, Azure Functions are a PaaS offering, which stands for Platform as a Service. This allows us to develop and run a small piece of code that acts as a service on the cloud platform. This service can do several tasks, like doing a certain job when an HTTP message from a Web API is received, triggering another function when a certain action on the cloud platform takes place, or initiating an event when a new record is inserted in Azure SQL or Cosmos, and so on. Today, we are going to create Azure Functions directly in the portal, and later on we are going to use Visual Studio 2017 to create a function using .Net Core and reflect this function in the portal. In other coming articles, we will create an Azure Function App using VS Code, analyze an image using Azure Cognitive Services, and save the results into either a SQL database or a document inside Cosmos.

Since Azure Functions are also known as a Serverless Computing service that enables us to run code on demand without having to deal with the infrastructure, we will start by clearing up some terms that are commonly used, like Serverless Architecture and PaaS. So here we go!

This post assumes that you already have an Azure account ready to be used, in addition to Visual Studio 2017 installed on your machine. If not, you can create your own free trial account by subscribing to the Azure portal.

What is Serverless Architecture & PaaS?

By definition, Serverless Computing is defined as follows: Serverless Computing is a cloud-computing execution model in which the cloud provider runs the server and dynamically manages the allocation of machine resources.

The first time I heard the term "Serverless" I paused for a while and thought, "How could this be possible, and how would we be able to run our apps without a server!" Nevertheless, that is not the case. We all need those special kinds of powerful computers to run our applications, process data, and send information over networks. Therefore, the term Serverless Computing, or what is also known as Serverless Architecture, is somewhat confusing; we could even say it is an inaccurate name or designation, since a server is still required and must be there to do all the required tasks. However, looking beyond the terminology is what matters. When we say Serverless Computing, we mean that there is no need for us to care about server maintenance and the infrastructure, because the server is out there in the public cloud datacenter. Our job shifts from maintaining the server to providing the proper and convenient instructions that keep this server performing the operations that need to be accomplished.

Serverless Computing.png

Serverless Computing is also about using Platform as a Service (PaaS) technology. What do we mean by PaaS? PaaS, or what is also known as aPaaS (Application Platform as a Service), is a complete development and deployment environment in the cloud that allows customers to develop, run, and manage their applications with resources that enable them to deliver everything needed without worrying about the maintenance of those resources. There are some other related concepts, which we summarize in the figure below; they are out of the scope of this article, but we just need to know them if any are mentioned later.

Serverless Computing Comparison.png

What is Azure Functions?

Azure Functions is a solution for creating functions in the Microsoft Azure cloud platform. The function is a small piece of code with some specific configuration that Azure needs in order to know when to call this function and what to do without worrying about the whole application or the infrastructure to run it.

Azure Functions.png

A function in Azure is an efficient way to solve the problem you have with a small code snippet, which leads to more productive development. The function can be either event driven or trigger driven (or even respond to a webhook, which is an HTTP API push request or a web callback like the one GitHub generates when a code check-in is performed, for example), and it can be built using a variety of development languages like C#, F#, JavaScript (Node.js), Java, and others.

Creating a Function App in Azure

A Function App in Azure is, in one way or another, a special type of App Service; to be more specific, we can say that Functions in Azure are built on top of App Services. So, just as each App Service requires an app service plan, a function in Azure requires what we call a Hosting Plan. Let us go step by step to create a function in Azure.

To create a function in Azure, go to the Function Apps item in the left menu, or if it is not there, click on "Create a resource", which opens a blade for searching the Azure marketplace. In the search bar, write the keyword function, and the results will contain an item called Function App, as shown in the figure below.

Azure Function App.png

When clicking on Function App, a new blade will show up that contains some brief information about Function Apps and a "Create" button. Clicking the Create button leads to another blade like the one below.

Function App.png

The mandatory fields are the ones highlighted with a red asterisk; we will go over them and explain each one.

The App name is the name of the function app you want to create in Azure, and this name should be unique since it is combined with the azurewebsites.net domain. The second field is the Subscription, where you have to specify under which subscription you want this function to be created, knowing that Azure gives you the ability to have several subscriptions (this can be managed and checked in the Subscriptions blade).

A resource group is a single logical entity which groups all related resources together. By resources we mean storage accounts, virtual machines, virtual networks, and many other resources that may be used by the service you are creating in Azure. Here, you can either select an existing resource group to append the resources created by the function to it, or create a new resource group that is only related to the function being created. Usually, I prefer to create a new resource group for any newly created component in Azure, since it is easier to manage, maintain, and deploy these grouped resources later on for any modifications to be made.

After that, you have to choose which operating system the function will be running on. The location represents which datacenter you want your function to be deployed in. Usually we choose the nearest datacenter to our physical location or to the clients' physical location. The runtime stack is the technology of the function, whether it is .NET, JavaScript, or Java. The last one is the storage field, where you also have to choose whether to create a new storage account or use an existing one. The storage account is the one that holds the Blobs, Table storage, and Queues used by the created function.

I saved the Hosting Plan for the end since it needs some more explanation. The hosting plan field represents which plan the function will be subject to while executing, where the plan describes what type of hardware our function will be operating on. There are two available options: either an App Service Plan or a Consumption Plan. The difference is that if I choose an App Service Plan, I will always be paying for that plan even if my function is inactive or not executing, i.e. even if my function is not using any hardware resources. This option is more costly, and you can check the costs of each plan by visiting the website provided by Azure for calculating the fees or prices to be paid per service type (https://azure.microsoft.com/en-us/pricing/calculator/) as shown in the figure below.

Azure Price Calculator.png

The Consumption Plan is still like the App Service Plan, but in this scenario you will not have to worry about the type of resources, virtual machines, and other things like scaling up or scaling down. All of these will be managed by Azure itself. So, choosing the Consumption Plan is like telling Azure to do the work and manage all this stuff for me, so that I don't have to worry about monitoring resources and checking when I need to scale up or scale down to lower my cost. Finally, functions can be integrated with Application Insights, which gives you the ability to monitor how your function is performing and where the errors are, if there are any. After specifying all the required fields, the blade will look somewhat like the figure below.

Function App Configured.png

When Azure finishes creating the function app, a notification will be displayed in the notifications center. After that we can navigate to the Function Apps blade from the left menu and we will find our newly created function app there. The blade will look somewhat like this:

Function App Blade.png

On the left-hand side, we can see the available function apps for each subscription. In my current subscription, I only have the function app I just created under the name "rcfunctionappdemo". As we can see, each function app can contain one or more functions. In order to create a new function in my function app, we navigate to the Functions submenu with the plus sign beside it. We also have Proxies and Slots, where slots are used for deployment stages (development, staging, and production), while proxies are a special type of function used, for example, to sit between the client side and a backend service.

Now we click on the Functions node with the plus sign. This opens a blade that displays the available functions in our function app. However, since we have not added any function yet, the displayed screen will be empty. Therefore, we click on the plus sign to add a new function, and the blade that shows up will appear like below:

Function App Configuration.png

Since we want to use the portal for creating a new function in this section, we click on the “In-Portal” tab in order to proceed and then click continue. The next step will be as follows:

Function App Configuration 1.png

In this blade, the available templates displayed are the “Webhook+API” and the “Timer” templates. We can browse for more templates by clicking on the “More templates” tab, which leads us to the screen below:

Function App Configuration 2.png

Each template type has a small description of its usage. For example, we use the "HTTP trigger" when we want our function to execute whenever an HTTP request is received, while the "Timer trigger" is used when we want our function to execute at a specific predefined time, like transferring data between two blobs at midnight. The function that we want to create will execute when an HTTP request is received, so we choose the "HTTP trigger" template. A new blade opens on the right in order to specify the function name and the authorization level.

Function App Configuration 3.png

The authorization level decides the type of accessibility to the function that we're creating. We have three options in this field: Anonymous, Function, or Admin. The Anonymous level is used when we want to make our function accessible without any security or restrictions, the Function level makes our function accessible only through a key generated by Azure for this function, and the Admin level requires the function app's master key. For the function we are creating we will choose the Function authorization level and then click Create.

Function App Code.png

As we can see in the figure above, the screen contains several sections. In the middle, we have the editor, which we will use for writing the code snippet that represents what the function will mainly be doing, which implies that a function is really about writing a method block of code in the end. The section highlighted below the editor contains the Logs screen, which displays the errors when the code is compiled, while the Console acts like the normal console window that we all know, displaying the folder path at the beginning. On the right side we can see the sections for testing our function by making a sample call through an HTTP POST request, while View files allows us to see the available files under this function, where the only file available upon creation is the one opened in the editor, called "run.csx".

So now we have this C# function in our consumption plan, which takes HttpRequest and ILogger as parameters. The function is async since the call to this function is asynchronous, and it is static as it does not depend on any object identity but belongs to the type itself rather than to a specific object. The function takes the incoming HTTP request, and the first thing it does is log the message "C# HTTP trigger function processed a request" through the ILogger instance log. The next step is to fetch the query string parameter "name" from the incoming HTTP request; if this parameter is empty and no query string is sent via the function URL, then the function looks inside the request body, treats it as JSON, and deserializes it to fetch the value. The last thing the function does is display the parameter value by creating a new response message through the OkObjectResult, which is of type ActionResult. If the name was empty, then a warning message is displayed asking you to pass the name parameter value.
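For reference, the default run.csx generated by the portal looks roughly like this (reproduced from memory, so minor details may differ from what your portal generates):

#r "Newtonsoft.Json"

using System.IO;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Extensions.Logging;
using Newtonsoft.Json;

public static async Task<IActionResult> Run(HttpRequest req, ILogger log)
{
    log.LogInformation("C# HTTP trigger function processed a request.");

    // First look for a "name" value in the query string...
    string name = req.Query["name"];

    // ...then fall back to a JSON request body
    string requestBody = await new StreamReader(req.Body).ReadToEndAsync();
    dynamic data = JsonConvert.DeserializeObject(requestBody);
    name = name ?? data?.name;

    return name != null
        ? (ActionResult)new OkObjectResult($"Hello, {name}")
        : new BadRequestObjectResult("Please pass a name on the query string or in the request body");
}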

This is the default code generated when we created the function. You can write your own code by modifying the function body written in C# based on what you need and want this function to do with the incoming HTTP request. We will keep the code as is, pass a value of "my first azure function app" in the name query string parameter, and test what the function displays. To test the function we have two options: either a direct call from the browser using the function URL generated by Azure, or the Test tab in the right section of the screen described before.

Before testing the function app, we want to make sure that we have no compilation or code errors. We check this by clicking Save to commit our function code and then clicking the Run button. When we click the Run button, the Logs tab should indicate the compilation results, and if there are any errors, you will be notified in this section. The Logs tab should look somewhat like the below:

Function App Log.png

After running our code and checking that no compilation errors are there, we are going to test our function app. As a first option we can use the function URL generated by Azure to test the function app. To do so we click on the “Get Function URL” link. When clicked, a popup will be shown that contains the function URL as shown below.

Function App URL.png

The function URL is formed from your function app name (highlighted above in yellow) hosted under azurewebsites.net, where the function app acts like an API with the function name as a sub-path (highlighted in red). The final section is the code, which is a secure hash that enables access to the function app; without this code the generated response will be HTTP status code 401, unauthorized access. We take this URL, paste it in the browser, and add the "name" query string value. The output will be as follows:

Function App Result.png

The second option, as we mentioned, is the Test tab. In this tab, we can test the function app by generating an HTTP POST call. The tab provides the ability to add query parameters, headers, and a request message body. Since we tested the query string option with the generated function URL, we will now try the request message body as shown below.

Function App Code 2.png

We provided the name variable in the request body and the type of the call is an Http Post. Once we click on run, the output generated will be the message “Hello, my first azure function app” with status 200 OK.

Creating a Function App in Visual Studio 2017

Now we come to creating an Azure function app using Visual Studio 2017. You can also use VS Code, and this will be discussed in another article.

To create a function app in Visual Studio 2017, we choose New Project and go to the Cloud section, where the Azure Functions project template is available. If you do not have this template installed, then you have to modify your Visual Studio installation and install the required components for cloud projects. You specify the project name and path and after that click OK. The process is shown below.

VS Create New Function.png

When you click OK, another form will be shown to select the type of Azure function, the target framework, and other options. We have two options: creating an Azure function using the .NET Framework or creating one using .NET Core. We will choose the .NET Core option, where several project types are displayed: an empty project, an HTTP trigger, etc. Since we want a function that handles an incoming HTTP request, we choose the HTTP trigger project type as shown:

VS Create New Function 2.png

Notice that you can specify a storage account if you need one, as well as the access rights or permission level for this function. In our sample function, we set the access rights to "Anonymous". By clicking OK, the project will be created and it should look like the below:

VS Function App Code.png

The project looks like a normal class library which contains a class code file called "Function1.cs", where this file will host our function. You can rename the class file; I chose the name "MyFirstAzureFunction.cs". In the editor, we can see that the code is very similar to the one we saw before in the Azure portal. You can also change the function name to any name you want. The method is a static one inside a static class, and it should indicate to Azure what this function does, what it takes as parameters, and so on. This metadata can be expressed to Azure using C# attributes. On this method, we can see the HttpTrigger attribute with the authorization level that we chose at project creation, which is Anonymous. In addition, it specifies which HTTP verbs it accepts; in our case we have GET and POST, without any specified Route since it has a null value. The FunctionName attribute tells Azure the public name of this function. It is currently named Function1 as well, so we will change it to the desired name to be displayed publicly in Azure; I will rename the function to "MyHelloAzureFunction", as sketched right after this paragraph. The body of the method is the same as the one we saw in the Azure portal, so there is no need to explain it again. Let us move forward with debugging the function locally and then deploying it to Azure.
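Assuming the default Visual Studio 2017 HTTP trigger template, the renamed class and function look roughly like this:

using System.IO;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.Extensions.Logging;
using Newtonsoft.Json;

public static class MyFirstAzureFunction
{
    // The public name of the function as Azure will expose it
    [FunctionName("MyHelloAzureFunction")]
    public static async Task<IActionResult> Run(
        // Anonymous access, accepting GET and POST, with no custom route
        [HttpTrigger(AuthorizationLevel.Anonymous, "get", "post", Route = null)] HttpRequest req,
        ILogger log)
    {
        log.LogInformation("C# HTTP trigger function processed a request.");

        string name = req.Query["name"];

        string requestBody = await new StreamReader(req.Body).ReadToEndAsync();
        dynamic data = JsonConvert.DeserializeObject(requestBody);
        name = name ?? data?.name;

        return name != null
            ? (ActionResult)new OkObjectResult($"Hello, {name}")
            : new BadRequestObjectResult("Please pass a name on the query string or in the request body");
    }
}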

emulator.png

When we press F5 to run the project, it will take some time to start the local Azure Functions host on your machine. Once the project runs, it should display a console that looks similar to the figure above. Of course, you can set a breakpoint to debug the function method's code once you make a call to this function. As we can see in the figure, the function is hosted and running on localhost at port 7071. Notice that the port on your local machine may differ depending on the available ports, but 7071 is the default port used by the Azure Functions CLI. When the project is running, the function can be reached through the URL displayed inside the console; in my case it is: http://localhost:7071/api/MyHelloAzureFunction.

So, let us now take the mentioned URL, paste it in the browser, and provide it with a query string "name" with any value you want. I will again choose the value used before, which is "my first azure function app". Once you provide the query string with the copied URL and hit Enter in the browser, an HTTP response will be generated containing the message "Hello, my first azure function app" as shown below:

Function App Result 2.png

Now, after testing our function locally, we come to deployment to Azure. To deploy the function to Azure, we have to push our code to the repository of our function app on Azure, where the Kudu engine will handle the compilation and running of our code on the master branch of the repository. To do so we have to create the git repository if it is not there yet. We go to our function app and click on "All Settings", where a new blade will open that contains all the settings for this function app as shown below.

Azure Function App Dep Center

Azure Function App Dep Center 2

The git repo for my function app was already created before and is ready for use by copying the Git URL. If it is the first time creating a git repo for the function app you created, you can easily go through the steps of creating it when you click on the "Deployment Center" section. Now that I have the git repo URL to commit my function app code to from Visual Studio, we start by adding our code to source control: click the "Add to Source Control" button in the Visual Studio toolbar and choose "Git" from the list. After this step, Visual Studio will create the git files for the solution, and now we are ready to start deploying. Open a console, navigate to the solution folder path, then use the git commands shown below with the git URL that we copied from the properties section of the function app in Azure.

Git Push.png
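For reference, the sequence of commands is roughly the following; the remote URL here is illustrative and should be replaced with the Git URL copied from your own function app:

git status
# commit any pending changes (Visual Studio may have already created the initial commit)
git add .
git commit -m "Initial function app code"
# add the Kudu Git URL from the Azure portal as a remote, then push the master branch
git remote add azure https://<username>@rcfunctionappdemo.scm.azurewebsites.net:443/rcfunctionappdemo.git
git push azure master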

As you can see, we used a few git commands to check the status of the git repo in our solution and then pushed our code. When we execute the git push command, a prompt for the credentials will be shown, and you have to provide the password in order to start the process. If you don't have these credentials yet or don't know anything about them, you can go to the "Deployment Credentials" section under "All Settings" in the function app blade in Azure and create your new username and password for this git repo.

After providing the correct credentials, the upload process will start executing, and as mentioned before, the Kudu engine on Azure will start compiling the pushed code. Afterwards, the function will be ready in Azure; this process may take some time. The console status of my pushed code is shown below.

Git Push 2.png

Once the push process is finished, the code will be compiled and the function will be created under your function app in Azure. If there are no compilation errors, the function will be up and running once the push and the compilation by the Kudu engine finish. You can check this by going to your function app in Azure and looking at the Functions section, which should now contain the function you uploaded from Visual Studio, as shown in the figure below for my example.

Git Pushed Function

Finally, this is it! You can test your function the same way mentioned before, and for any changes you want to apply to your function, you have to repeat the process of pushing code to the same repo and master branch again so that the function reflects the changes made to your code in Visual Studio. In the next article, we will see how to create an Azure function using VS Code and much more. Stay tuned!