Migrating VMs to Azure DevTest Labs

So here's something I've been running into a lot, and at the heart of it is cost management. No one wants to spend more money than they need to, and that is especially true if you are running your solutions in the cloud. For most organizations, a big cost factor can be running lower environments.

So the situation is this: you have a virtual machine, be it a developer machine or some form of lower environment (dev, test, QA, etc.). The simple fact is that you have to pay for this environment to be able to run testing and ensure that everything works when you deploy to production. So it's a necessary evil, and extremely important if you are adhering to DevOps principles. The unfortunate part is that you likely only really use these lower environments during work hours. They probably aren't exposed to the outside world, and they probably aren't being hit by customers in any fashion.

So ultimately, that means you are paying for this environment 24/7 but only using it for about 12 hours a day, which means you are basically paying for double what you need.

Enter DevTest Labs, which lets you create VMs and artifacts to make it easy to spin up and spin down environments without any effort on your part, so that you can save money on running these lower environments.

Now the biggest problem then becomes, "That's great, but how do I move my existing VMs into the new DevTest Lab?" The answer is that it can be done via script, and I've written one to do just that, shown here.

#The resource group information, the source and destination
resourceGroupName="<Original Resource Group>"
newResourceGroupName="<Destination Resource Group>"

#The name of the VM you wish to migrate
vmName="<VM Name>"
newVmName="<New VM Name>"

#The OS image of the VM
imageName="Windows Server 2016 Datacenter"
osType="Windows"

#The size of the VM
vmsize="<vm size>"
#The location of the VM
location="<location>"
#The admin username for the newly created vm
adminusername="<username>"

#The suffix to add to the end of the VM
osDiskSuffix="_lab.vhd"
#The type of storage for the data disks
storageType="Premium_LRS"

#The VNET information for the VM that is being migrated
vnet="<vnet name>"
subnet="<Subnet name>"

#The name of the lab to be migrated to
labName="<Lab Name>"

#The information about the storage account associated with the lab
newstorageAccountName="<Lab account name>"
storageAccountKey="<Storage account key>"
diskcontainer="uploads"

echo "Set to Government Cloud"
sudo az cloud set --name AzureUSGovernment

#echo "Login to Azure"
#sudo az login

echo "Get updated access token"
sudo az account get-access-token

echo "Create new Resource Group"
az group create -l $location -n $newResourceGroupName

echo "Deallocate current machine"
az vm deallocate --resource-group $resourceGroupName --name $vmName

echo "Create container"
az storage container create -n $diskcontainer --account-name $newstorageAccountName --account-key $storageAccountKey

osDisks=$(az vm show -d -g $resourceGroupName -n $vmName --query "storageProfile.osDisk.name") 
echo ""
echo "Copy OS Disks"
echo "--------------"
echo "Get OS Disk List"

osDisks=$(echo "$osDisks" | tr -d '"')

for osDisk in $(echo $osDisks | tr "[" "\n" | tr "," "\n" | tr "]" "\n" )
do
   echo "Copying OS Disk $osDisk"

   echo "Get url with token"
   sas=$(az disk grant-access --resource-group $resourceGroupName --name $osDisk --duration-in-seconds 3600 --query [accessSas] -o tsv)

   newOsDisk="$osDisk$osDiskSuffix"
   echo "New OS Disk Name = $newOsDisk"

   echo "Start copying $newOsDisk disk to blob storage"
   az storage blob copy start --destination-blob $newOsDisk --destination-container $diskcontainer --account-name $newstorageAccountName --account-key $storageAccountKey --source-uri $sas

   echo "Get $newOsDisk copy status"
   while [ "$status"=="pending" ]
   do
      status=$(az storage blob show --container-name $diskcontainer --name $newOsDisk --account-name $newstorageAccountName --account-key $storageAccountKey --output json | jq '.properties.copy.status')
      status=$(echo "$status" | tr -d '"')
      echo "$newOsDisk Disk - Current Status = $status"

      progress=$(az storage blob show --container-name $diskcontainer --name $newOsDisk --account-name $newstorageAccountName --account-key $storageAccountKey --output json | jq '.properties.copy.progress')
      echo "$newOsDisk Disk - Current Progress = $progress"
      sleep 10s
      echo ""

      if [ "$status" != "pending" ]; then
      echo "$newOsDisk Disk Copy Complete"
      break
      fi
   done

   echo "Get blob url"
   blobSas=$(az storage blob generate-sas --account-name $newstorageAccountName --account-key $storageAccountKey -c $diskcontainer -n $newOsDisk --permissions r --expiry "2019-02-26" --https-only)
   blobSas=$(echo "$blobSas" | tr -d '"')
   blobUri=$(az storage blob url -c $diskcontainer -n $newOsDisk --account-name $newstorageAccountName --account-key $storageAccountKey)
   blobUri=$(echo "$blobUri" | tr -d '"')

   echo $blobUri

   blobUrl=$(echo "$blobUri")

   echo "Create image from $newOsDisk vhd in blob storage"
   az group deployment create --name "LabMigrationv1" --resource-group $newResourceGroupName --template-file customImage.json --parameters existingVhdUri=$blobUrl --verbose

   echo "Create Lab VM - $newVmName"
   az lab vm create --lab-name $labName -g $newResourceGroupName --name $newVmName --image "$imageName" --image-type gallery --size $vmsize --admin-username $adminusername --vnet-name $vnet --subnet $subnet
done 

dataDisks=$(az vm show -d -g $resourceGroupName -n $vmName --query "storageProfile.dataDisks[].name") 
echo ""
echo "Copy Data Disks"
echo "--------------"
echo "Get Data Disk List"

dataDisks=$(echo "$dataDisks" | tr -d '"')

for dataDisk in $(echo $dataDisks | tr "[" "\n" | tr "," "\n" | tr "]" "\n" )
do
   echo "Copying Data Disk $dataDisk"

   echo "Get url with token"
   sas=$(az disk grant-access --resource-group $resourceGroupName --name $dataDisk --duration-in-seconds 3600 --query [accessSas] -o tsv)

   newDataDisk="$dataDisk$osDiskSuffix"
   echo "New OS Disk Name = $newDataDisk"

   echo "Start copying disk to blob storage"
   az storage blob copy start --destination-blob $newDataDisk --destination-container $diskcontainer --account-name $newstorageAccountName --account-key $storageAccountKey --source-uri $sas

   echo "Get copy status"
   while [ "$status"=="pending" ]
   do
      status=$(az storage blob show --container-name $diskcontainer --name $newDataDisk --account-name $newstorageAccountName --account-key $storageAccountKey --output json | jq '.properties.copy.status')
      status=$(echo "$status" | tr -d '"')
      echo "Current Status = $status"

      progress=$(az storage blob show --container-name $diskcontainer --name $newDataDisk --account-name $newstorageAccountName --account-key $storageAccountKey --output json | jq '.properties.copy.progress')
      echo "Current Progress = $progress"
      sleep 10s
      echo ""

      if [ "$status" != "pending" ]; then
      echo "Disk Copy Complete"
      break
      fi
   done
done 

echo "Script Completed"

So the above script breaks out into a couple of key pieces. In the first part, the following needs to happen:

  • Create the destination resource group
  • Deallocate the machine
  • Create a container for the disks to be migrated to
echo "Create new Resource Group"
az group create -l $location -n $newResourceGroupName

echo "Deallocate current machine"
az vm deallocate --resource-group $resourceGroupName --name $vmName

echo "Create container"
az storage container create -n $diskcontainer --account-name $newstorageAccountName --account-key $storageAccountKey

The next step is to identify the OS disk for the VM and copy it from its current location over to the storage account associated with the DevTest Lab. From there, the key part is to create a custom image based on that VHD and then create a lab VM based on that image.

osDisks=$(az vm show -d -g $resourceGroupName -n $vmName --query "storageProfile.osDisk.name") 
echo ""
echo "Copy OS Disks"
echo "--------------"
echo "Get OS Disk List"

osDisks=$(echo "$osDisks" | tr -d '"')

for osDisk in $(echo $osDisks | tr "[" "\n" | tr "," "\n" | tr "]" "\n" )
do
   echo "Copying OS Disk $osDisk"

   echo "Get url with token"
   sas=$(az disk grant-access --resource-group $resourceGroupName --name $osDisk --duration-in-seconds 3600 --query [accessSas] -o tsv)

   newOsDisk="$osDisk$osDiskSuffix"
   echo "New OS Disk Name = $newOsDisk"

   echo "Start copying $newOsDisk disk to blob storage"
   az storage blob copy start --destination-blob $newOsDisk --destination-container $diskcontainer --account-name $newstorageAccountName --account-key $storageAccountKey --source-uri $sas

   echo "Get $newOsDisk copy status"
   while [ "$status"=="pending" ]
   do
      status=$(az storage blob show --container-name $diskcontainer --name $newOsDisk --account-name $newstorageAccountName --account-key $storageAccountKey --output json | jq '.properties.copy.status')
      status=$(echo "$status" | tr -d '"')
      echo "$newOsDisk Disk - Current Status = $status"

      progress=$(az storage blob show --container-name $diskcontainer --name $newOsDisk --account-name $newstorageAccountName --account-key $storageAccountKey --output json | jq '.properties.copy.progress')
      echo "$newOsDisk Disk - Current Progress = $progress"
      sleep 10s
      echo ""

      if [ "$status" != "pending" ]; then
      echo "$newOsDisk Disk Copy Complete"
      break
      fi
   done

   echo "Get blob url"
   blobSas=$(az storage blob generate-sas --account-name $newstorageAccountName --account-key $storageAccountKey -c $diskcontainer -n $newOsDisk --permissions r --expiry "2019-02-26" --https-only)
   blobSas=$(echo "$blobSas" | tr -d '"')
   blobUri=$(az storage blob url -c $diskcontainer -n $newOsDisk --account-name $newstorageAccountName --account-key $storageAccountKey)
   blobUri=$(echo "$blobUri" | tr -d '"')

   echo $blobUri

   blobUrl=$(echo "$blobUri")

   echo "Create image from $newOsDisk vhd in blob storage"
   az group deployment create --name "LabMigrationv1" --resource-group $newResourceGroupName --template-file customImage.json --parameters existingVhdUri=$blobUrl --verbose

   echo "Create Lab VM - $newVmName"
   az lab vm create --lab-name $labName -g $newResourceGroupName --name $newVmName --image "$imageName" --image-type gallery --size $vmsize --admin-username $adminusername --vnet-name $vnet --subnet $subnet
done 

Now part of the above is to use an ARM template to create the image. The template is below:

{
  "$schema": "http://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "existingLabName": {
      "type": "string",
      "defaultValue":"KevinLab",
      "metadata": {
        "description": "Name of an existing lab where the custom image will be created or updated."
      }
    },
    "existingVhdUri": {
      "type": "string",
      "metadata": {
        "description": "URI of an existing VHD from which the custom image will be created or updated."
      }
    },
    "imageOsType": {
      "type": "string",
      "defaultValue": "windows",
      "metadata": {
        "description": "Specifies the OS type of the VHD. Currently 'Windows' and 'Linux' are the only supported values."
      }
    },
    "isVhdSysPrepped": {
      "type": "bool",
      "defaultValue": true,
      "metadata": {
        "description": "If the existing VHD is a Windows VHD, then specifies whether the VHD is sysprepped (Note: If the existing VHD is NOT a Windows VHD, then please specify 'false')."
      }
    },
    "imageName": {
      "type": "string",
      "defaultValue":"LabVMMigration",
      "metadata": {
        "description": "Name of the custom image being created or updated."
      }
    },
    "imageDescription": {
      "type": "string",
      "defaultValue": "",
      "metadata": {
        "description": "Details about the custom image being created or updated."
      }
    }
  },
  "variables": {
    "resourceName": "[concat(parameters('existingLabName'), '/', parameters('imageName'))]",
    "resourceType": "Microsoft.DevTestLab/labs/customimages"
  },
  "resources": [
    {
      "apiVersion": "2018-10-15-preview",
      "name": "[variables('resourceName')]",
      "type": "Microsoft.DevTestLab/labs/customimages",
      "properties": {
        "vhd": {
          "imageName": "[parameters('existingVhdUri')]",
          "sysPrep": "[parameters('isVhdSysPrepped')]",
          "osType": "[parameters('imageOsType')]"
        },
        "description": "[parameters('imageDescription')]"
      }
    }
  ],
  "outputs": {
    "customImageId": {
      "type": "string",
      "value": "[resourceId(variables('resourceType'), parameters('existingLabName'), parameters('imageName'))]"
    }
  }
}

So if you run the above, it will create the new VM under the DevTest lab.
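
As a side note, if you ever want to exercise the image-creation step on its own, the template can be deployed directly with the Azure CLI. This is just a sketch; the resource group, lab name, storage account, and VHD blob below are placeholders for your own values, and the parameter names come straight from the template above.

# Deploy customImage.json by itself to create (or update) a DevTest Labs custom image.
# Placeholders: substitute your own resource group, lab, storage account, and VHD blob,
# and adjust the blob endpoint if you are not in Azure Government.
az group deployment create \
   --name "LabImageOnly" \
   --resource-group "<Destination Resource Group>" \
   --template-file customImage.json \
   --parameters existingLabName="<Lab Name>" \
                existingVhdUri="https://<Lab account name>.blob.core.usgovcloudapi.net/uploads/<disk name>_lab.vhd" \
                imageOsType="Windows" \
                isVhdSysPrepped=false \
                imageName="LabVMMigration"

Set isVhdSysPrepped to true only if the source VHD was actually sysprepped; for a machine migrated as-is it generally won't be.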

Creating Terraform Scripts from existing resources in Azure Government

Lately I've been doing a lot of work with TerraForm, and one of the questions I've gotten a lot is whether it's possible to create terraform scripts based on existing resources.

So the use case is the following: you are working on a project, or are part of an organization, with a lot of resources in Azure, and you want to start using terraform for a variety of reasons:

  • Being able to iterate on your infrastructure
  • Consistency of environment management
  • Code History of changes

The good news is there is a tool for that. The tool can be found here on GitHub along with a list of pre-requisites. I've used this tool in Azure Commercial and have been really happy with the results, and I wanted to use it with Azure Government as well.

NOTE => The pre-reqs are listed on the az2tf repo, but one they didn't list that I needed to install was jq, using "apt-get install jq".

Next we need to configure our environment for running terraform.  For me, I ran this using the environment I had already configured for Terraform.  In the Git repo, there is a PC Setup document that walks you through how to configure your environment with VS Code and Terraform.  I was then able to clone the git repo and execute the az2tf tool using the Ubuntu subsystem on my Windows 10 machine.

Now, the tool, az2tf, was built to work with Azure Commercial, and there is one change that has to be made for it to leverage Azure Government.

Once you have the environment created and the pre-requisites are all present, you can open a "Terminal" window in VS Code and connect to Azure Government.
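
If it helps, pointing the CLI at Azure Government from that terminal is just a matter of switching clouds before logging in:

# Point the Azure CLI at the Azure Government cloud, then log in.
az cloud set --name AzureUSGovernment
az login

# Optional sanity check that you are on the right cloud and subscription.
az account show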

In the ./scripts/resources.sh and ./scripts/resources2.sh files, you will find the following on line 9:

ris=`printf "curl -s  -X GET -H \"Authorization: Bearer %s\" -H \"Content-Type: application/json\" https://management.azure.com/subscriptions/%s/resources?api-version=2017-05-10" $bt $sub`

Please change this line to the following:

ris=`printf "curl -s  -X GET -H \"Authorization: Bearer %s\" -H \"Content-Type: application/json\" https://management.usgovcloudapi.net/subscriptions/%s/resources?api-version=2017-05-10" $bt $sub`

You can then run the “az2tf” tool by running the following command in the terminal:

./az2tf.sh -s {Subscription ID} -g {Resource Group Name}

This will generate the scripts, and you will see a new folder created in the structure marked "tf.{Subscription ID}"; inside of it will be all the generated Terraform configuration for the environment.
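
Once the folder is generated, a reasonable next step (my suggestion, not part of the tool itself) is to initialize and plan against the generated configuration to confirm it matches what is actually deployed. The folder name below follows the tf.{Subscription ID} convention mentioned above:

# Review the generated configuration before relying on it anywhere.
cd tf.<Subscription ID>
terraform init
terraform plan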

Staying Organized with my “digital brain”.

Hello All, so as you probably noticed, I’m trying to do more posts, and trying to cover a wide range of topics. So for this talk, I thought I’d take time to talk about how I stay organized and stay on top of my day.

Productivity methods are a dime a dozen, and honestly everyone has their own flavor of an amalgamation of several methods to keep control of the chaos. For me, I went through a lot of iterations, and then finally settled on the system I describe here to keep myself on top of everything in my life.

Now for all the different variations out there, I know lots of people are still exploring options, which is why I decided to document mine here in hopes that it might help someone else.

So let's start with tools. For me, I use Microsoft To-Do, and it's not just because of where I work; ultimately I use this tool because I was using Wunderlist, and when support for Wunderlist ended and it was replaced with To-Do, I switched. So that was the driver, but I also stuck with it because it supports tags in the text of the items, which helps me to organize them.

So first, I break out my tasks into categories with a tag to start, the categories I use are:

  • Action: These are items that require me to take some small action, like send an email, make a phone call, reply to something, or answer a question. I try to keep these as small items.
  • Investigate: These are items that I need to research or look into, things that require me to do some digging to find an answer.
  • Discuss: These are items that I’ve made a note to get in touch with someone else and discuss a topic.
  • Build: These are my favorite kind of items, this is me taking coding action of some kind, and building something, or working out an idea. Where I am focused on the act of creating something.
  • Learn: These are items that involve my learning goals, to push myself to learn something new and keep it tactical.

Now, each day To-Do has this concept of "My Day", where you take tasks from your task list and indicate that they are going to be part of your day. I sort my day alphabetically so that the above categories are organized in a way that lines up with how I approach them.

For me, I usually tackle as many actions as I can right away and get them out of the way in the first hour of my day, and then spend the next 6 hours on a mix of new actions and build / investigate items.  Finally, I have a set section of my week that is spent on learning activities.  The idea being, to quote Bobby Axelrod, "The successful figure out how to both handle the immediate while securing the future."

Finally I maintain a separate list called #Waiting(…). When I am awaiting a response from someone, I change the category (like #Action) to #Waiting(name of person), move it to the waiting list, and take it off "My Day". This lets me put it out of my mind without losing track of the item.

After the category, I add the group; these are customer names for work, or a designation to describe the sub-category of the work. For example, this is a monthly recurring task:

#Action – #Financial – Pay Monthly Bills

This allows me to quickly group by category, or pull up all "Financial" tasks if I need the big picture.

I have been using this system for the past year, and it's done a lot to help me stay organized and measure my impact, not my activity.

I've talked previously about how important impact is over activity. One of the downsides of many of these kinds of systems is that people tend to focus their energy on "checking off items" and not on the overall impact of those items. I find that by using this kind of grouping up front I am able to focus energy on tasks that are high impact, not low impact.

At the end of the day, productivity itself is a lie, and I believe that completely. The idea is not to produce more, but to make every action have a return on investment.

Another book, Essentialism by Greg McKeown, calls out this difference, basically saying that the key is to ask "what can I go big on?" and to treat everything else as either a "Hell Yes" or an "Absolute No". So I find this system assists me by making sure that I am focusing on tasks that will return dividends, and not on topics that are just smaller activity to drive "checked" items.

Book Review – Clockwork

Hello all, as many of you know I read a good bit, and I also like to use Audible. A great way to pass time while traveling or driving is to listen to audio books. Right now I've been on a real kick to learn more about business perspective and productivity. Some of the great books I've read and talked about before are:

  • Grit
  • Essentialism
  • 10x Rule
  • Deep Work

But I wanted to take a minute to talk about the book I just finished, Clockwork: Designing your business to run itself, by Mike Michalowicz. Now I have to admit, I only found out about this book after I heard it mentioned a couple of times and it showed up in my "recommended reading" lists a few times.

So I was skeptical about this book, mainly because the book talks about how it's focused on people starting their own business, and I work for a major corporation. So how can this be helpful to me? Well I have to admit, I was wrong.

I found this book to be really thought provoking, and it caused me to re-examine a lot of activities and work I do to measure impact and importance to success. The author makes the argument that in any organization, everyone has a responsibility to do the following:

  1. Protect the Queen Bee Role
  2. Serve the Queen Bee Role

And basically the key to the business is to take the QBR (the Queen Bee Role), which is the crucial part of your job, and make all of the actions you take focus on that above all others. The argument is that I should spend every second of my work day focusing on that QBR, and when an activity takes away from that, I should focus on getting it done as soon as possible, or if possible moving it off my plate.

The intention is that it makes me focus on the bigger picture and creates a scenario where you can take time off from work and feel comfortable doing so. For me, I have a tendency to have a hard time unplugging and stepping away from work, and recently I've been setting goals to help myself unplug. I found that when I started to put this into practice, I was able to unplug with less stress and it helped my overall mental health. I started with the intention of doing the following:

  • Blocking 1 hour for lunch everyday
  • I will not eat at my desk

This forces me to take a lunch break away from my desk, and honestly it sounds small but it has paid huge returns. I have found that when I come back to work I am more focused, and at the end of the day I am less drained from a mental perspective. My stress level has gone down with regard to work, and I also find that the work I'm doing is much more satisfying.

Below is a video that summarizes some of the ideas of the book. The value of this book, though, isn't in the ideas themselves, but in how you execute on them.

Building a facial search in less than 2 hours

So, I've been pretty up front that I've been expanding my current skills in the data science and AI space, and it's been a pretty fun process. I wanted to point everyone to a demo I was able to build out very quickly.

Facial searching is a pretty common use case, so much so that we see it everywhere: Google Photos allows you to tag people in photos and indexes them for you automatically, and Facebook makes suggestions for tagging people when you upload new pictures.

So I wanted to build a service that would take a set of selected images of 3 members of my family, and build a solution that would allow me to run any family photo through it and have it search for the known members of my family. Seems like a pretty basic use-case, but I've been wanting to get some hands-on experience with Azure Cognitive Services.

So I researched and read through our documentation, and decided before I started I was going to set aside 2 hours and see how far I could get to do the following:

  • Build a console app to read in images of 3 family members.
  • Build logic to upload an image and read back attributes about that person, things like age, gender, hair color, glasses, etc.
  • Build logic to search and match faces of people in a photo with the library that was previously uploaded.

The cool news is that I was able to implement all that logic. The full solution can be found here.

So to start, I focused on the first use case. Azure Cognitive Services has this concept of "PersonGroups" that can be leveraged with the SDK. For starters you need to install the SDK from NuGet, and this is the required package:

Microsoft.Azure.CognitiveServices.Vision.Face

The first key part is the client, which I configured in a parent class as follows:

public class BaseFaceApi
    {
        protected string _apiKey = ConfigurationManager.AppSettings["FaceApiKey"];
        protected string _apiUrl = ConfigurationManager.AppSettings["FaceApiUrl"];
        protected string _apiEndpoint = ConfigurationManager.AppSettings["FaceApiEndpoint"];
        protected FaceClient _client;

        protected void InitializeClient()
        {
            _client = new FaceClient(new ApiKeyServiceClientCredentials(_apiKey));
            _client.Endpoint = _apiEndpoint;
        }
    }

This allows for the configuration to live in the app.config, and this face client will be leveraged for all operations that hit the API.
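
As a side note on where those app.config values come from: the key and endpoint belong to a Face (Cognitive Services) resource in your Azure subscription. If you don't already have one, creating it with the Azure CLI looks roughly like this; the resource name, group, and location below are placeholders of my own:

# Create a Face resource (the name, resource group, and location are placeholders).
az cognitiveservices account create \
    --name my-face-api \
    --resource-group my-resource-group \
    --kind Face \
    --sku S0 \
    --location eastus \
    --yes

# Pull the key and endpoint to drop into app.config.
az cognitiveservices account keys list --name my-face-api --resource-group my-resource-group
az cognitiveservices account show --name my-face-api --resource-group my-resource-group --query properties.endpoint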

The Face API leverages this concept of “PersonGroups” and “Persons” to handle the library of faces you are going to compare against. The process is broken into 4 parts.

  • Create the group
  • Create the person
  • Register Images for that person
  • Train the Model

If you review the source code you will find that I have broken these out into separate methods. The benefit of creating the groups is that you can limit your searching to specific groups, and have your application recognize the differences between groups.

Once you have completed loading these images and "Persons" into the service, you are ready to search through this repository by uploading an image. This is done with the following code:

public async Task<Dictionary<Guid,FacePerson>> IdentifyFaces(string filePath,string groupID)
        {
            InitializeClient();

            Dictionary<Guid,FacePerson> ret = new Dictionary<Guid, FacePerson>();

            using (Stream s = File.OpenRead(filePath))
            {
                // The list of Face attributes to return.
                IList<FaceAttributeType> faceAttributes =
                    new FaceAttributeType[]
                    {
            FaceAttributeType.Gender, FaceAttributeType.Age,
            FaceAttributeType.Smile, FaceAttributeType.Emotion,
            FaceAttributeType.Glasses, FaceAttributeType.Hair
                    };

                var facesTask = await _client.Face.DetectWithStreamWithHttpMessagesAsync(s,true,true,faceAttributes);
                var faceIds = facesTask.Body.Select(face => face.FaceId.Value).ToList();

                var identifyTask = await _client.Face.IdentifyWithHttpMessagesAsync(faceIds,groupID);
                foreach (var identifyResult in identifyTask.Body)
                {
                    Console.WriteLine("Result of face: {0}", identifyResult.FaceId);
                    if (identifyResult.Candidates.Count > 0)
                    { 
                        // Get top 1 among all candidates returned
                        var candidateId = identifyResult.Candidates[0].PersonId;
                        var person = await _client.PersonGroupPerson.GetWithHttpMessagesAsync(groupID, candidateId);

                        var fp = new FacePerson();
                        fp.PersonID = person.Body.PersonId;
                        fp.Name = person.Body.Name;
                        fp.FaceIds = person.Body.PersistedFaceIds.ToList();

                        var faceInstance = facesTask.Body.Where(f => f.FaceId.Value == identifyResult.FaceId).SingleOrDefault();
                        fp.Age = faceInstance.FaceAttributes.Age.ToString();
                        fp.EmotionAnger = faceInstance.FaceAttributes.Emotion.Anger.ToString();
                        fp.EmotionContempt = faceInstance.FaceAttributes.Emotion.Contempt.ToString();
                        fp.EmotionDisgust = faceInstance.FaceAttributes.Emotion.Disgust.ToString();
                        fp.EmotionFear = faceInstance.FaceAttributes.Emotion.Fear.ToString();
                        fp.EmotionHappiness = faceInstance.FaceAttributes.Emotion.Happiness.ToString();
                        fp.EmotionNeutral = faceInstance.FaceAttributes.Emotion.Neutral.ToString();
                        fp.EmotionSadness = faceInstance.FaceAttributes.Emotion.Sadness.ToString();
                        fp.EmotionSurprise = faceInstance.FaceAttributes.Emotion.Surprise.ToString();
                        fp.Gender = faceInstance.FaceAttributes.Gender.ToString();

                        ret.Add(person.Body.PersonId, fp);
                    }
                }
            }

            return ret;
        }

One key note above is the list of face attributes; this identifies the attributes you would like the service to review and return. You can limit this list as you like.

Please feel free to review the sample code and I hope you find a great use-case. For me, a very cool project that is on my list next is to build a camera with a raspberry pi that captures people who come to the door and compares them against a known database of people.

It’s also worth mentioning that this service is fully available in Azure Government for customers that have requirements to be deployed in a sovereign cloud.

Starting out with Data Science, where to go from here

Data Science, let’s get started!

You can't turn around anymore without hearing people talk about data. It's everywhere, and honestly it's only growing. Here are a couple of statistics I found interesting (all from Forbes), the big one for me being that 79% of enterprise executives agree that companies that do not embrace Big Data will lose their competitive position and could face extinction.

Let’s stop and think about that a minute, AI is all over the news, and people are constantly figuring out new ways to leverage data to encourage new innovations. It really is quite the time to be alive!

So that being said, I’ve decided to start to shift my focus and learn more about this field and how to leverage it for the future. And I know there are a lot of other people out there in the same boat as me, so I figured I’d start documenting it all here and help those looking for resources with some of the ones I’ve found.

So where to start? I started with a link to this course track:

Data Science Track from Microsoft Academy:
This takes a bunch of courses on EdX and puts them into a nice track, and I find that this is a good 100-level entry to Data Science, AI, and Analytics. I'm about halfway through now, working through the content, and have found it helpful.

TerraForm Kubernetes Template

Hello All, I wanted to get a quick blog post out here based on something that I worked on and that is finally seeing the light of day.  I've been doing a lot of work with TerraForm, and one of the use cases I found was standing up a Kubernetes cluster.  Specifically, I've been working with Azure Government, which does not have AKS available.  So how can I build a kubernetes cluster, minimize the lift of creating it, and then make it easy to add nodes to the cluster?  The end result of that goal is here.

Below is a description of the project, and if you’d like to contribute please do, I have some ideas for phase 2 of this that I’m going to build out but I’d love to see what others come up with.

Intention:

The purpose of this template is to provide an easy-to-use, Infrastructure-as-a-Service approach to deploying a kubernetes cluster on Microsoft Azure. The goal is that you can start fresh with a standardized approach and preconfigured master and worker nodes.

How does it work?

This template creates a master node and as many worker nodes as you specify, and during creation it will automatically execute the scripts required to join those nodes to the cluster. The benefit of this is that once your cluster is created, all that is required to add additional nodes is to increase the count of the "lkwn" vm type and reapply the template. This will cause the new VMs to be created, and the cluster will start orchestrating them automatically.

This template can also be built into a CI/CD pipeline to automatically provision the kubernetes cluster prior to pushing pods to it.
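
To make the scaling point concrete, reapplying the template might look something like the sketch below. I'm assuming the worker count is exposed as a Terraform variable; the variable name here is hypothetical, so check the template's variables file for the real one.

# Hypothetical variable name for the "lkwn" worker node count; adjust to match the template.
terraform plan -var "lkwn_count=5"
terraform apply -var "lkwn_count=5"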

This guide is designed to help you navigate the use of this template to stand up and manage the infrastructure required by a kubernetes cluster on Azure. You will find the following documentation to assist:

  • Configure Terraform Development Environment: This page provides details on how to set up your local machine to leverage this template and do development using Terraform, Packer, and VSCode.
  • Use this template: This document walks you through how to leverage this template to build out your kubernetes environment.
  • Understanding the template: This page describes how to understand the Terraform template being used and walks you through its structure.

Key Contributors!

A special thanks to the following people who contributed to this template:
Brandon Rohrer: who introduced me to this template structure and how it works, as well as assisted with optimizing the functionality provided by this template.

Distributed Computing and Architecture Patterns

So lately I've been doing a lot of work on distributed programming, looking less at projects that live on-premises and need to be moved to the cloud, and more at projects that are born in the cloud and how to optimize them.  That said, what I'm talking about here is also applicable to the "lift-and-shift" type of project.

Ultimately the "cloud" is just like any other development project: there are considerations that need to be handled to leverage the environment for the best possible outcome.  So there are things you can do to help make your applications perform at their peak in the cloud.

In the traditional "monolithic" approach to designing applications, we would more or less work ourselves into a corner.  What I mean by that is we would build out applications to consume servers and predetermined resources, and that meant that if you wanted to take that application and sell it, traditionally you were looking at a large capital expense.  More than that, if you wanted to increase scale, guess what…another capital expense, and this time with all the time required for a corporate purchase of that size.

Distributed computing attempts to solve that problem by enabling us to take that monolithic application and break it into the smallest parts we can, and then making each of those parts independently scalable to meet demand.  So instead of one big app, we have a "web" of smaller pieces doing different jobs, and the total is more than the sum of its parts.

The value add here is that by leveraging smaller, more isolated components, we can really focus on what does the best job.  For example, you might have a dotnet application, but if Python is the best fit for a microservice, why would you handicap yourself and not use the best tool for the job?  Microservices allow you to do that.

Let's start with things you should keep in mind when building distributed applications:

  • Loosely-Coupled Components:  For a distributed solution to truly work, all the pieces need to be loosely coupled.  This takes the form of creating "buffers" between the services, and these "buffers" normally take the form of messaging; you could use a service bus, or even just a queue, to communicate between services (see the queue sketch after this list).  The idea being that the services don't know anything about each other; they just know that one adds an item to a queue, and the other removes the item from the queue.  This allows them to function independently and provides the added value of being able to deploy these components separately.
  • Handle Communication Appropriately:  Given what we just talked about, how your application is a series of interconnected smaller apps, its important take some time and think about how you will pass information back-and-forth between applications.  Given that the application components are subject to change (platform, technology, end points, networking, etc).  You need to remember that you need an abstraction layer between the different micro services to make sure that they can be separated in a way to provide the best overall support for keeping these as separately deploy-able pieces.
  • Build with Monitoring in Mind:  Also, given that your application is really made up of all these separate parts, it's important to remember that for your application to work, every component must be functioning properly.  Just like an Olympic team can't win unless every player is operating at the top of their game, your app can't work if a component is unhealthy.  So when you set out to build micro service applications, make sure that you build and architect your code with monitoring in mind.
  • Build with Scale in Mind:  Given that your application is being built this way to encourage scaling, it's important that you build your app in such a way that it can scale to meet the demands users are putting on the system.  Part of this comes down to making sure that you're leveraging resources appropriately and not building systems that over (or under) consume the resources that you are using.
  • Build with Errors in mind:  Another item to consider is that your application is now a sum of “moving parts” and that being said, sometimes things can have errors or breakdowns that need to be handled.  These can be unplanned (exception or errors) or planned (upgrade of a service).  So your application should be able to respond to these “transient” faults, and not break down.  For example, one way is to leverage queues.  If component “A” is talking directly to component “B”, and “B” is in the middle of an upgrade, “A” might start throwing errors which will bubble up to the user, during the upgrade.  So now I need to notify users of downtime for the smallest change.  If I put a queue between them, then component “A” can continue to add elements to the queue, and while “B” is updating no errors occur.  When “B” finishes upgrading, it just starts pulling items off the queue as if nothing happened.  This is even less of a problem when you have scaling built into your app.
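
To make the queue idea a bit more concrete, here is a minimal sketch using an Azure Storage queue as the buffer between two components. The queue name is made up, and in a real system you would more likely use an SDK or a service bus, but the shape of the interaction is the same:

# Assumes AZURE_STORAGE_ACCOUNT and AZURE_STORAGE_KEY (or a connection string) are set in the environment.
# Component "A" side: drop a message on the queue and move on.
az storage queue create --name orders
az storage message put --queue-name orders --content "order-12345"

# Component "B" side: pull work off the queue whenever it is healthy and ready,
# then delete the message once it has been processed successfully.
az storage message get --queue-name orders
# az storage message delete --queue-name orders --id <message id> --pop-receipt <pop receipt>

The point is that "A" never talks to "B" directly; if "B" is down or mid-upgrade, messages simply wait on the queue until it comes back.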

So the question is how do you do that?  There are a lot of ways to approach this particular problem, and ways to ensure that your app respects these principles.

I recommend the following steps:

  • Leverage a micro service approach:  There are a lot of articles out there (and some linked to below) that will talk you through how to build a micro service application, this can leverage a lot of technologies including “Service Fabric”, “Kubernetes”, “Docker Swarm”, and others to push your applications out with containers to support this approach.  You don’t have to use containers to do micro services, but they definitely help.
  • Always consider the best tool for the job:  One of the biggest benefits of micro services are the ability that you can leverage different stacks to solve different problems.  Don’t ignore this.
  • Leverage abstraction in communication between services:  As I mentioned above, this is paramount.  You must account for communication using a communication strategy, and a lot of times it helps to be consistent in how you approach this across your apps.  It will make your life simpler in the long run.
  • Make your services backward compatible:  As mentioned above, the benefits of abstraction are that I can push updates to individual components at any time.  But to truly take advantage of this, I need to make my services backwards compatible.  Take my example above: imagine that service "A" writes to a queue, and then service "B" reads from that queue to do processing.  Now if service "B" has scaled out to 10 nodes, and I try to update service "B", I don't want to shut them all down at once and take that part of the app offline; instead I want to do a rolling update.  The idea being that while service "B.1" is down, B.2-B.10 are still processing messages.  But in order to do that, I have to be careful how I change the message signature and the schema of the database.  If I change the underlying database for service "B", service "B.2" has to be able to talk to it, even though it's running old code.
  • Assume everything can change:  This is the best advice I can give, assume that anything can change around each Micro service that you build and by default you will be able to gracefully handle “transient faults” or “schema changes” without having to debug huge problems.
  • Leverage configuration management:  This is sort of a "1A" to the above entry: if you leverage configuration management, using services like Redis, table storage, Key Vault, or other platforms, you can make changes without requiring a redeploy of the application.  This makes your life much easier when you deploy a service and need to change the configuration of other services.
  • There is no need to re-invent the wheel from an architectural standpoint, especially when you are learning.  If this is your first distributed project, lean on the known patterns and then take risks when you know more.

I hope this helps as you start down this road.  Below are some additional links to help with this discussion:

Cloud Design Patterns:  This site provides you with detailed write-ups of some of the most common architecture patterns out there for the cloud.  These are especially helpful if you are currently more accustomed to working in an on-premises world, as they give a view of some of the considerations you should keep in mind.  I also like this because it pulls in concepts you might not be fully used to supporting (high availability vs disaster recovery, for example).

Architecture Center: Another great site that contains general architecture guidance for cloud or on-premises.  It's just a helpful site that lays out the pros / cons of common patterns so that you can design solutions appropriately to meet your needs.

Architecting Distributed Applications:  A great online course that will walk you through what it means to build a truly distributed application, and this course is technology agnostic which is always a good thing.

Building Distributed Applications with Akka.net:  A great video with an overview of Akka.net which is a technology to help create applications using an Actor pattern.

Distributed Architecture with Microservices / Messaging:  A great video on microservices, which are the cornerstone of distributed computing.

Rethinking Distributed Systems for Data Centers:  Another great article on how to build applications in a distributed world to accommodate a varying degree of scale.

Running Entity Framework Migrations in VSTS Release with Azure Functions

Hello All, I hope you're having a good week. I wanted to write up a problem I just solved that I can't be the only one to have run into. For me, I'm a big fan of Entity Framework, and specifically Code First Migrations. Now it's not perfect for every solution, but I will say that it does provide a more elegant method of versioning and controlling database changes.

Now one of the problems I've run into before is how you leverage a DevOps pipeline with migrations. The question becomes how do you trigger the migrations to execute as part of an automated release? This can be a pretty sticky situation if you've ever tried to unpack it. The most common answer is this, which is in itself a fine solution.

But one scenario where this doesn't always play out is that, because the above approach executes on App_Start, it can cause a slowdown for the first users, or in a load-balanced scenario it can cause a performance issue when each instance hits that method and runs the migration.  And to me, from a DevOps perspective, this doesn't feel that "clean", as I would like to deploy my database changes at the same time as my app and know that when it says "Finished" everything is done and successful.  By leveraging the App_Start approach you run the risk of a "false positive" saying the release was successful even when it wasn't, as the migrations will fail for the first user.

So the alternative I wrote was to create an Azure Function that provides an HTTP endpoint, which allows me to trigger the migrations to execute.  Under this approach I can make an HTTP call from my release pipeline and execute the migrations in a controlled state, and if it's going to fail, it will happen within my deployment rather than after.

Below is the code I leveraged in the azure function:

[FunctionName("RunMigration")]
public static async Task<HttpResponseMessage> Run([HttpTrigger(AuthorizationLevel.Function, "post", Route = null)]HttpRequestMessage req, TraceWriter log)
{
    log.Info("Run Migration");

    bool isSuccessful = true;
    string resultMessage = string.Empty;

    try
    {
        //Get security key
        string key = req.GetQueryNameValuePairs()
            .FirstOrDefault(q => string.Compare(q.Key, "key", true) == 0)
            .Value;

        var keyRecord = ConfigurationManager.AppSettings["MigrationKey"];
        if (key != keyRecord)
        {
            throw new ArgumentException("Key Mismatch, disregarding request");
        }

        Database.SetInitializer(new MigrateDatabaseToLatestVersion<ApplicationDataContext, Data.Migrations.Configuration>());

        var dbContext = new ApplicationDataContext();

        dbContext.Database.Initialize(true);

        //var list = dbContext.Settings.ToList();
    }
    catch (Exception ex)
    {
        isSuccessful = false;
        resultMessage = ex.Message;
        log.Info("Error: " + ex.Message);
    }

    return isSuccessful == false
        ? req.CreateResponse(HttpStatusCode.BadRequest, "Error: " + resultMessage)
        : req.CreateResponse(HttpStatusCode.OK, "Migration Completed, Database updated");
}
}

Now a couple of quick things to note as you look at the code:
Line 12:  I am extracting a querystring parameter called "key" and then, in line 16, I am getting the "MigrationKey" value from the Function's App Settings.  The purpose of this is to quickly secure the Azure Function.  The function looks for a querystring value that matches the app settings value before allowing the migrations to be triggered.  This prevents just anyone from hitting the endpoint.  I can then secure this value within my release management tool and pass it as part of the HTTP request.

Lines 22-26: this is what actually triggers the migration to execute by creating a context and setting the initializer.

Line 37: Allows for handling the response code that is sent back to the client for the http request.  This allows me to throw an error within my release management tool if necessary.
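
For completeness, the call from the release pipeline can be as simple as an HTTP POST. Below is a rough sketch with curl; the function app name is a placeholder, the code parameter is the standard function key required by AuthorizationLevel.Function, and key is the MigrationKey value described above.

# Trigger the migration from a release step; --fail makes curl exit non-zero on the
# 400 response, which in turn fails the release.
curl --fail --silent --show-error -X POST \
   "https://<your-function-app>.azurewebsites.net/api/RunMigration?code=<function key>&key=<MigrationKey value>"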

Azure Compute – Searchable and Filterable

Hello All, a good friend of mine, Brandon Rohrer, and I just finished the first iteration of a project we've been working on recently.  Being Cloud Solution Architects, we get a lot of questions about the different compute options that are available in Azure.  And it occurred to us that there wasn't a way to consume this information in a searchable and filterable format to get the information customers need.

So we created this site:

https://computeinfo.azurewebsites.net
This site scrapes through the documentation provided by Microsoft, extracts the information about the different types of virtual machines you can create in Azure, and provides it in a way that meets the following criteria:

  • Searchable
  • Filterable
  • Viewable on a mobile device

Hope this helps as you look at your requirements in Azure and build out the appropriate architecture for your solution.