
Building a magic mirror – Part 1 – The project

For this post, I thought I would share a project I've been working on for my family. We have a marker board in the kitchen that helps keep track of everything going on for myself, my wife, and our kids. And while this works in practice and does help, the fact that it's analog has been driving me nuts for YEARS. So I wanted to see if we could upgrade it with a magic mirror.

Now, I have a magic mirror in my office that I use to help stay focused, and I've covered here how I manage it via Azure DevOps. But I've never really done a post detailing how I built that mirror for those interested.

First, the Hardware Requirements:

In my case I'm using an old Raspberry Pi 3 that I happened to have sitting around the office, and I've installed Raspberry Pi OS (a Linux distribution) on that device.

Outside of that, I’ve got the basics:

  • Power supply cable
  • HDMI Cable
  • Monitor
  • 64 GB Micro SD card
  • SD Card Reader

Now I have plans to hook this to a larger TV when I set it up in the kitchen, but for right now I’ve just got a standard monitor.

Goal of the Project

For me, I found this video on YouTube and thought it was pretty great, so it's my starting point for this project.

Setting up the Raspberry Pi – Out-of-the-Box

To download and install the OS, I used the Raspberry Pi Imager found here. I used the SD card reader I had in the office to format the SD card and then install the OS.

Once that was completed, I booted up my Raspberry Pi and finished the setup, which involved the following (there is a wizard to help with this part):

  • Configure localization
  • Configure Wifi
  • Reset password
  • Reset Hostname
  • Download / Install Updates

Finally, one step I took to make my life easier was to enable SSH on the Raspberry Pi, which allows me to work on it from my laptop rather than permanently setting up a keyboard / monitor / mouse.

You do this by opening the Raspberry Pi Configuration tool under Preferences, going to the "Interfaces" tab, and selecting "Enable" for "SSH".
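
If you'd rather do this from a terminal, here's a minimal sketch that should work on recent Raspberry Pi OS builds, since SSH is just a systemd service:

# Enable the SSH service on boot and start it now
sudo systemctl enable ssh
sudo systemctl start ssh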

Now that my Raspberry Pi is running, we come to the meat of this: getting the magic mirror running.

Step 1 – Install Node.js

You need Node.js to run everything about the magic mirror, so you can start by running these commands on your Raspberry Pi:

curl -sL https://deb.nodesource.com/setup_16.x | sudo -E bash -
sudo apt install -y nodejs

From there, I cloned the MagicMirror repo to the Raspberry Pi.

git clone https://github.com/MichMich/MagicMirror

Then enter the repo from the command prompt:

cd MagicMirror/

Then you need to install the npm dependencies for the MagicMirror installation. This step takes the longest, and the first time I ran it I actually had to use sudo to make sure the install completed.

sudo npm install

A good recommendation from the MagicMirror site is to copy the sample config to create your own working config (keeping the sample as a backup). You can do that with this command:

cp config/config.js.sample config/config.js

Finally you can start your MagicMirror with:

npm run start

Now, the next part was tricky: if you reboot your Raspberry Pi, the magic mirror will not start automatically, and you need some additional configuration to make that happen. Most documentation will tell you to use pm2, and I agree with that, but if you try to run the commands on the most recent Raspberry Pi OS, you'll find that pm2 is not installed. You can resolve that with this command:

npm install pm2 -g

Then run the following commands to configure your MagicMirror to run on startup.

pm2 startup

After running this command, you will be given another command that enables pm2 on startup; run that command.

Then run the following:

cd ~
nano magicmirror.sh

Put the following in the magicmirror.sh file, then hit Ctrl-X and Y to save:

#!/bin/bash
# Launch MagicMirror against the Pi's local display
cd ~/MagicMirror
DISPLAY=:0 npm start

Finally, run these commands to finish the configuration:

chmod +x magicmirror.sh
pm2 start magicmirror.sh

pm2 save
sudo reboot

After that you’ll see your monitor displaying the default configuration for the MagicMirror.
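
If the mirror doesn't come back up after the reboot, pm2 gives you a couple of commands worth knowing for checking on the process; the process name comes from the script's file name (a sketch, assuming pm2 registered the script as above):

pm2 status
pm2 logs magicmirror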

In the next post, I'll walk you through the steps I took to configure some of the common modules to get it working.

Cool Nerdy Gift Idea – Word Cloud

The holidays are fast approaching, and this year I had a really cool idea for a gift that turned out well, so I thought I would share it. For the past year and a half, I've had this thing going with my wife where every day I send her a "Reason X that I love you…", and it's become a long-running tradition of ours (up to 462 at the time of this post).

What was really cool was that this year, for our anniversary, I decided to take a nerdy approach to making something very sentimental but easy to build. Needless to say, it was very well received, and I thought I would share.

What I did was use Microsoft Cognitive Services and Power BI to build a word cloud based on the key phrases extracted from the text messages I've sent her. Microsoft provides a cognitive service that does text analytics, and if you're like me you've seen sentiment analysis and other bots before. But one of its capabilities is key phrase extraction, which is discussed here.

So given this, I wrote a simple Python script to pull in all the text messages (which I had exported to csv) and run them through Cognitive Services.
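
One assumption in the script below is that the Azure Text Analytics SDK for Python is already installed; if you're starting fresh, it's a single package from PyPI:

pip install azure-ai-textanalytics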

from collections import Counter

from azure.ai.textanalytics import TextAnalyticsClient
from azure.core.credentials import AzureKeyCredential

key = "..."
endpoint = "..."

run_text_analytics = True
run_summarize = True

class KeywordResult():
    def __init__(self, keyword, count):
        self.keyword = keyword
        self.count = count

# Authenticate the client using your key and endpoint
def authenticate_client():
    ta_credential = AzureKeyCredential(key)
    text_analytics_client = TextAnalyticsClient(
            endpoint=endpoint,
            credential=ta_credential)
    return text_analytics_client

client = authenticate_client()

def key_phrase_extraction(client):

    try:
        if run_text_analytics:
            print("Running Text Analytics")
            with open("./data/reasons.txt") as f:
                lines = f.readlines()

                # Send the messages through in batches of 10 documents per request
                responses = []
                for i in range(0, len(lines), 10):
                    documents = lines[i:i+10]
                    results = client.extract_key_phrases(documents=documents)

                    # Collect the key phrases from every document in the batch
                    for response in results:
                        if not response.is_error:
                            for phrase in response.key_phrases:
                                responses.append(phrase)
                        else:
                            print(response.id, response.error)

                # Write one key phrase per line for the summary step
                with open("./data/output.txt", 'w') as o:
                    for response_line in responses:
                        o.write(f"{response_line}\n")
            print("Running Text Analytics - Complete")

        if run_summarize:
            print("Running Summary Statistics")
            print("Getting output values")
            with open("./data/output.txt") as reason_keywords:
                keywords = reason_keywords.readlines()
                keyword_counts = Counter(keywords)
                print("Counts retrieved")

                print("Building Keyword objects")
                keyword_list = []
                for keyword, count in keyword_counts.items():
                    keyword_list.append(KeywordResult(keyword, count))
                print("Keyword objects built")

                print("Writing output files")
                with open("./data/keyword_counts.csv", "w") as keyword_count_output:
                    for k in keyword_list:
                        print(f"Key = {k.keyword} Value = {k.count}")
                        key_value = k.keyword.replace("\n", "")
                        keyword_count_output.write(f"{key_value},{k.count}\n")
                print("Finished writing output files")

    except Exception as err:
        print("Encountered exception. {}".format(err))

key_phrase_extraction(client)

Now, with the above code, you will need to create a Text Analytics cognitive service and then populate the endpoint and key it provides. The code will take each row of the document, run it through Cognitive Services in batches of 10, and then output the results.

From there, you can open up Power BI, point it at the keyword_counts.csv file that's generated, and connect the Word Cloud visual, and you're done. There are great instructions found here if it helps.

It’s a pretty easy gift that can be really amazing. And Happy Holidays!

Thanksgiving Turkey in the Smoker

And now for something completely different. This past week was Thanksgiving in the States, and for almost all of us it was different than normal, as COVID-19 prevented us from seeing our families. Here in the Mack household, we took the opportunity to try something new and used my pellet smoker to smoke a turkey.

And I thought I would share the results.

Turkey Brine:

Basically, the process was this: we started the night before with a turkey brine. For this we took inspiration from Alton Brown's recipe found here, but made some slight adjustments.

Here are the ingredients:

  • 1 gallon hot water
  • 1 pound kosher salt
  • 2 quarts vegetable broth
  • 1 pound honey
  • 1 (7-pound) bag of ice
  • 1 (15 to 20-pound) turkey, with giblets removed

Now we combined the boiling water, salt, vegetable broth, and honey in a cooler and mixed everything until it had all dissolved.

Next we added the ice to bring the brine back down to a normal temperature and keep it cool. We then put the turkey in and waited 16 hours.

From there, the next step was to remove the turkey from the brine and dry it off. We did not rinse the bird, as my family likes it a little on the salty side, but if you don't like it salty, you'll want to rinse your bird.

Marinade Injection:

I'm of the belief that turkey dries out really easily, so we decided to do everything humanly possible to get this bird to stay moist. The next step was to put together an injection. We got inspiration from here.

Here are the ingredients:

  • 1/2 cup Butter
  • 1/2 cup Maple Syrup

We then melted both together and allowed the mixture to cool slightly. The idea here is that it needs to be a liquid for the injector, but don't let it cool too far or you'll be injecting sludge into your bird.

We then injected the bird over 50 times, doing small injections about every inch across the breast, legs, thighs, and pretty much every part of the exposed meat.

Next we put on a rub. For this we put together about a half cup of butter and a store-bought turkey rub we found at Lowe's, but really any rub that you would use on poultry is a good idea here. We rubbed it under the skin of the bird.

Smoking the bird

I got my pellet smoker up to 250 degrees Fahrenheit and then put in the bird. We used an aluminum disposable pan to keep the drippings around the bird and help with moisture. Then, every hour, I would spray the turkey with apple juice.

We kept the turkey smoking until the internal temperature reached an even 165 degrees Fahrenheit.

Finally, once it hit 165 degrees, we increased the smoker to 325 and let it go for another 30 minutes to make the skin crispy.

After that, enjoy!

MagicMirror on the Wall

So lately, my wife and I have been spending a lot of time organizing different elements of our lives. Specifically, COVID-19 has created a situation where our lives are very different than what we were used to previously, and it has caused us to re-evaluate the tips and tricks we use to organize our lives.

One such change has been the need to coordinate and communicate more with regard to the family calendar and a variety of other information about our lives. So I decided that if we were going to bring back "the family bulletin board", I wanted it to integrate with our digital footprint and tools to make life easier.

After doing some research, I settled on MagicMirror, mainly because of articles like this. And I had a spare monitor and a Raspberry Pi 3 lying around the office, so I figured why not. The plan was to implement the initial prototype, get it working in my office, and then invest in something better for my wife to help manage everything.

So I have to admit, this was surprisingly easy to set up and did not require much in the way of effort. I was even able to automate pushing updates to the device pretty easily. So if you are looking for a cheap option for getting something like this working, I recommend MagicMirror.

Here’s the current state of the PoC:

So for a walk through, I went through the following steps:

  • Download Raspbian OS, and flash the Micro SD card using Etcher.
  • Put the Micro SD into the Raspberry Pi and boot it up. From there you will get a step-by-step wizard for configuring wifi and setting up passwords, etc.
  • From there, I executed the manual installation methods found here.

That was really it for getting the baseline out of the box. From there I had a nice black screen with holidays, date/time, and a few other widgets. The next step was to install the widgets I cared about; you can find a list of the 3rd-party widgets here.

Now I wanted MagicMirror to load up on this monitor without me having to do anything, so I followed the steps here. I did make some additional modifications from here that helped to make my life easier.

The one problem I ran into was the process of updating the device. I could have just remoted into the device to update config files and other scripts. But for me this was frustrating; I really want to keep configurations for this kind of stuff source-controlled as a practice. So I ended up creating a private GitHub repo for the configuration of my MagicMirror.

This worked fine, except that every time I needed to update the mirror, I had to push a file to the device, copy it over, and reboot the device.

So instead, what I ended up doing was building a CI/CD pipeline that pushes changes to the magic mirror.

So what I did was the following:

  • Create a blob storage account and container.
  • Create an ADO pipeline for my GitHub repo.
  • Add only one task to the pipeline:
- task: AzureCLI@2
  inputs:
    azureSubscription: '{Subscription Name}'
    scriptType: 'bash'
    scriptLocation: 'inlineScript'
    inlineScript: 'az storage blob upload --connection-string $BLOBSTORAGE -f configdev.js -c test -n configdev.js'

Now whenever I push an update to the master branch, a copy of the config file is pushed directly to blob storage.

Now came the problem of how to get the device to pull down the new config file. If you look at the instructions above for making MagicMirror auto-start, they mention an mm.sh file, which I updated to the following:

cd ./MagicMirror
cd config
# Pull the latest config down from blob storage and swap it in
curl -o config.dev.js https://...blob url.../config.js
cp config.dev.js config.js
cd ..
DISPLAY=:0 npm start

Now, with this update, the magic mirror picks up a fresh config on every restart. So all I have to do is "sudo reboot" to make it pick up the new version.

I'm going to continue to build this out and will likely have more blog posts on this topic moving forward, but I wanted to get something out about the beginning. Some things I've been thinking about adding are:

  • Examine the different types of widgets
  • Calendar integration
  • Google Calendar integration
  • To Do Integration
  • Updating to shut off at 8pm and start up at 6am (see the sketch after this list)
  • Building ability to recognize when config has been updated and trigger a reboot automatically
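
For the shut off / start up item above, my current thinking is a pair of cron entries using the Pi's vcgencmd utility to power the HDMI output off and on; this is just a sketch of the idea, not something I've wired up yet:

# added via "crontab -e" on the Pi
0 20 * * * vcgencmd display_power 0    # display off at 8pm
0 6 * * * vcgencmd display_power 1     # display back on at 6am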

Building CI/CD for Terraform

I've made no secret on this blog of how I feel about Terraform, and how I believe infrastructure-as-code is absolutely essential to managing any cloud-based deployment long term.

There are so many benefits to leveraging these technologies. And for me one of the biggest is that you can manage your infrastructure deployments in the exact same manner as your code changes.

If you're curious about the benefits of a CI/CD pipeline, there are a lot of posts out there. But for this post I wanted to talk about how you can take those Terraform templates and build out a CI/CD pipeline to deploy them to your environments.

So for this project, I've built a Terraform template that deploys a lot of resources out to 3 environments. And I wanted to do this in a cost-saving manner, so I plan to manage it in the following way:

  • Development: which will always exist but in a scaled down capacity to keep costs down.
  • Test: which will only be created when we are ready to begin testing, and destroyed afterward.
  • Production: where our production application will reside.

Now, for the sake of this exercise, I will only be building a deployment pipeline for the Terraform code; in a later post I'll examine how to integrate this with code changes.

Now, as with everything, there are lots of ways to make something work. I'm just showing an approach that has worked for me.

Configuring your template

The first part of this is to build out your template so that configuration changes can easily be made via the automated deployment pipeline.

The best way I've found to do this is variables, and whether you're doing automated deployments or not, I highly recommend using them. If you ever have more than just yourself working on a Terraform template, or plan to create more than one environment, you will absolutely need variables. So it's generally a good practice.

For the sake of this example, I declared the following variables in a file called “variables.tf”:

variable "location" {
    default = "usgovvirginia"
}

variable "environment_code" {
    description = "The environment code required for the solution.  "
}

variable "deployment_code" {
    description = "The deployment code of the solution"
}

variable "location_code" {
    description = "The location code of the solution."
}

variable "subscription_id" {
    description = "The subscription being deployed."
}

variable "client_id" {
    description = "The client id of the service prinicpal"
}

variable "client_secret" {
    description = "The client secret for the service prinicpal"
}

variable "tenant_id" {
    description = "The client secret for the service prinicpal"
}

variable "project_name" {
    description = "The name code of the project"
    default = "cds"
}

variable "group_name" {
    description = "The name put into all resource groups."
    default = "CDS"
}

Also worth noting are the client id, secret, subscription id, and tenant id above. Using Azure DevOps, you are going to need to deploy using a service principal, so these will be important.

Then in your main.tf, you will have the following:

provider "azurerm" {
    subscription_id = var.subscription_id
    version = "=2.0.0"

    client_id = var.client_id
    client_secret = var.client_secret
    tenant_id = var.tenant_id
    environment = "usgovernment"

    features {}
}

Now, worth mentioning: when I'm working with my template locally, I'm using a file called "variables.tfvars", which looks like the following:

location = "usgovvirginia"
environment_code = "us1"
deployment_code = "d"
location_code = "us1"
subscription_id = "..."
group_name = "CDS"

Configuring the Pipeline

This will be important later as you build out the automation. From here, the next step is to build out your Azure DevOps pipeline; for this sample, I'm using a YAML pipeline:

So what I did was create a "variables.tfvars" file as part of my deployment:

- script: |
    touch variables.tfvars
    echo -e "location = \""$LOCATION"\"" >> variables.tfvars
    echo -e "environment_code = \""$ENVIRONMENT_CODE"\"" >> variables.tfvars
    echo -e "deployment_code = \""$DEPLOYMENT_CODE"\"" >> variables.tfvars
    echo -e "location_code = \""$LOCATION_CODE"\"" >> variables.tfvars
    echo -e "subscription_id = \""$SUBSCRIPTION_ID"\"" >> variables.tfvars
    echo -e "group_name = \""$GROUP_NAME"\"" >> variables.tfvars
    echo -e "client_id = \""$SP_APPLICATIONID"\"" >> variables.tfvars
    echo -e "tenant_id = \""$SP_TENANTID"\"" >> variables.tfvars
  displayName: 'Create variables Tfvars'

Now, the next question is where those values come from: I've declared them as variables on the pipeline itself.
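
One gap worth calling out: the script step above never writes client_secret into the tfvars file, since I'd rather not put a secret on disk. Terraform will read any variable from an environment variable named TF_VAR_<name>, so one option is to surface a secret pipeline variable (SP_CLIENTSECRET is just a placeholder name here) in the script step that runs Terraform:

export TF_VAR_client_secret="$SP_CLIENTSECRET"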

From there, because I'm deploying to Azure Government, I added an Azure CLI step to make sure my command-line context is pointed at Azure Government:

- task: AzureCLI@2
  inputs:
    azureSubscription: 'Kemack - Azure Gov'
    scriptType: 'bash'
    scriptLocation: 'inlineScript'
    inlineScript: |
      az cloud set --name AzureUSGovernment
      az account show

So how do we do the Terraform plan / apply? The answer is pretty straightforward: I installed this extension and used the "Terraform Tool Installer" task, as follows:

- task: TerraformInstaller@0
  inputs:
    terraformVersion: '0.12.3'
  displayName: "Install Terraform"

After that, it becomes pretty straightforward to implement:

- script: |
    terraform init
  displayName: 'Terraform - Run Init'

- script: |
    terraform validate
  displayName: 'Terraform - Validate tf'

- script: |
    terraform plan -var-file variables.tfvars -out=tfPlan.txt
  displayName: 'Terraform - Run Plan'

- script: |
    echo $BUILD_BUILDNUMBER".txt"
    echo $BUILD_BUILDID".txt"
    az storage blob upload --connection-string $TFPLANSTORAGE -f tfPlan.txt -c plans -n $BUILD_BUILDNUMBER"-plan.txt"
  displayName: 'Upload Terraform Plan to Blob'

- script: |
    terraform apply -auto-approve -var-file variables.tfvars
  displayName: 'Terraform - Run Apply'

Now, the cool part about the above is that I took it a step further and created a storage account in Azure, adding the connection string as a secret. I then built logic so that when you run this pipeline, it runs a plan ahead of the apply, outputs that plan to a text file, and saves the file in the storage account with the build number as the file name.

I personally like this, as it creates a log of the activities performed during each automated build moving forward.

Now, I do plan on refining this and building more automation around environments, so more to come.

Building a Solr Cluster with Terraform – Part 1

So it's no surprise, given how much I've talked about how amazing Terraform is, that recently I've been doing a lot of investigation into Solr and how to build a scalable Solr cluster.

So, given the Kubernetes template I'd worked on previously, I wanted to try my hand at something similar. The goals of this project were the following:

  1. Build a generic template for creating a Solr Cloud cluster with distributed shards.
  2. Build out the ability to scale the cluster, for now using Terraform to manually trigger increases to cluster size.
  3. Make the nodes automatically add themselves to the cluster.

I could have done this with just bash scripts and Packer, but instead I wanted to try my hand at cloud-init.

But that's the end result; I want to walk through the various steps I went through to get there. The first real step is to work out the installation of Solr on the Linux machines.

So let's start with "What is Solr?" The answer is that Solr is an open-source solution for building a search engine. It works in the same vein as ElasticSearch and other technologies. Solr has been around for quite a while and is used by some of the largest companies that implement search to handle requests from their customers, names like Netflix and CareerBuilder.

So I decided to try my hand at creating my first Solr cluster, and reviewed the getting-started documentation.

I ended up looking into it more, and built out the following script to create a "getting started" Solr cluster:

sudo apt-get install -y apt-transport-https ca-certificates curl software-properties-common
sudo apt-get install -y gnupg-curl

# Pull down the signing key for the Solr release
sudo wget -qO - https://www.apache.org/dist/lucene/solr/8.0.0/solr-8.0.0.zip.asc | sudo apt-key add -

sudo apt-get update -y
sudo apt-get install -y unzip
sudo wget http://mirror.cogentco.com/pub/apache/lucene/solr/8.0.0/solr-8.0.0.zip

# Unpack Solr and move it into place
sudo unzip -q solr-8.0.0.zip
sudo mv solr-8.0.0 /usr/local/bin/solr-8.0.0 -f
sudo rm solr-8.0.0.zip -f

# Solr needs a JDK to run
sudo apt-get install -y default-jdk

sudo chmod +x /usr/local/bin/solr-8.0.0/bin/solr
sudo chmod +x /usr/local/bin/solr-8.0.0/example/cloud/node1/solr
sudo chmod +x /usr/local/bin/solr-8.0.0/example/cloud/node2/solr

# Launch the two-node "getting started" cloud example
sudo /usr/local/bin/solr-8.0.0/bin/solr -e cloud -noprompt

The above will configure a "getting started" Solr cluster that leverages all the system defaults and is hardly a production implementation, so my next step will be to change that. But for the sake of getting something running, I took the above script and moved it into a Packer template using the following JSON. The above script is the "../scripts/Solr/provision.sh" referenced below.

{
  "variables": {
    "deployment_code": "",
    "resource_group": "",
    "subscription_id": "",
    "location": "",
    "cloud_environment_name": "Public"
  },
  "builders": [{   
    "type": "azure-arm",
    "cloud_environment_name": "{{user `cloud_environment_name`}}",
    "subscription_id": "{{user `subscription_id`}}",

    "managed_image_resource_group_name": "{{user `resource_group`}}",
    "managed_image_name": "Ubuntu_16.04_{{isotime \"2006_01_02_15_04\"}}",
    "managed_image_storage_account_type": "Premium_LRS",

    "os_type": "Linux",
    "image_publisher": "Canonical",
    "image_offer": "UbuntuServer",
    "image_sku": "16.04-LTS",

    "location": "{{user `location`}}",
    "vm_size": "Standard_F2s"
  }],
  "provisioners": [
    {
      "type": "shell",
      "script": "../scripts/ubuntu/update.sh"
    },
    {
      "type": "shell",
      "script": "../scripts/Solr/provision.sh"
    },
    {
      "execute_command": "chmod +x {{ .Path }}; {{ .Vars }} sudo -E sh '{{ .Path }}'",
      "inline": [
        "/usr/sbin/waagent -force -deprovision+user && export HISTSIZE=0 && sync"
      ],
      "inline_shebang": "/bin/sh -e",
      "type": "shell"
    }]
}
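
Assuming the template above is saved as something like solr.json (that filename is mine, not part of the repo), kicking off an image build looks roughly like this:

packer build \
  -var "subscription_id=<subscription id>" \
  -var "resource_group=<resource group>" \
  -var "location=<location>" \
  solr.json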

The only other script mentioned is "update.sh", which has the following logic in it to install the Azure CLI and update the Ubuntu image:

#! /bin/bash

# Update the Ubuntu image
sudo apt-get update -y
sudo apt-get upgrade -y

# Install the Azure CLI from the Microsoft package repo
AZ_REPO=$(lsb_release -cs)
echo "deb [arch=amd64] https://packages.microsoft.com/repos/azure-cli/ $AZ_REPO main" | sudo tee /etc/apt/sources.list.d/azure-cli.list
curl -L https://packages.microsoft.com/keys/microsoft.asc | sudo apt-key add -
sudo apt-get install -y apt-transport-https
sudo apt-get update && sudo apt-get install -y azure-cli

So the above gets me to a good place for being able to create a configured image.

For next steps I will be doing the following:

  • Building a more “production friendly” implementation of Solr into the script.
  • Investigating leveraging cloud init instead of the “golden image” experience with Packer.
  • Building out templates around the use of Zookeeper for managing the nodes.


Creating Terraform Scripts from existing resources in Azure Government

I've been doing a lot of work with Terraform lately, and one of the questions I've gotten a lot is about the ability to create Terraform scripts based on existing resources.

So the use case is the following: You are working on projects, or part of an organization that has a lot of resources in Azure, and you want to start using terraform for a variety of reasons:

  • Being able to iterate on your infrastructure
  • Consistency of environment management
  • Code History of changes

The good news is there is a tool for that. The tool can be found here on GitHub along with a list of prerequisites. I've used this tool in Azure Commercial and have been really happy with the results, so I wanted to use it with Azure Government.

NOTE => The Pre-reqs are listed on the az2tf tool, but one they didn’t list I needed to install was jq, using “apt-get install jq”.

Next we need to configure our environment for running Terraform. For me, I used the environment I had already configured for Terraform. In the Git repo, there is a PC Setup document that walks you through how to configure your environment with VS Code and Terraform. I was then able to clone the git repo and execute the az2tf tool using the Ubuntu subsystem on my Windows 10 machine.

Now, the tool, az2tf, was built to work with Azure Commercial, and there is one change that has to be made for it to leverage Azure Government.

Once you have the environment created and the prerequisites present, you can open a "Terminal" window in VS Code and connect to Azure Government.
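
Connecting uses the same trick as my pipelines, pointing the CLI at the government cloud before logging in:

az cloud set --name AzureUSGovernment
az login
az account set --subscription "<subscription id>"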

In the ./scripts/resources.sh and ./scripts/resources2.sh files, you will find the following on line 9:

ris=`printf "curl -s -X GET -H \"Authorization: Bearer %s\" -H \"Content-Type: application/json\" https://management.azure.com/subscriptions/%s/resources?api-version=2017-05-10" $bt $sub`

Please change this line to the following:

ris=`printf "curl -s -X GET -H \"Authorization: Bearer %s\" -H \"Content-Type: application/json\" https://management.usgovcloudapi.net/subscriptions/%s/resources?api-version=2017-05-10" $bt $sub`

You can then run the “az2tf” tool by running the following command in the terminal:

./az2tf.sh -s {Subscription ID} -g {Resource Group Name}

This will generate the scripts, and you will see a new folder created in the structure named "tf.{Subscription ID}"; inside it will be all the configuration needed to set up the environment.

Running Entity Framework Migrations in VSTS Release with Azure Functions

Hello all, I hope you're having a good week. I wanted to write up a problem I just solved that I can't be the only one to have hit. For me, I'm a big fan of Entity Framework, and specifically Code First Migrations. Now, it's not perfect for every solution, but I will say that it provides a method of versioning and controlling database changes in a more elegant manner.

Now, one of the problems I've run into before is how to leverage a DevOps pipeline with migrations. The question becomes how you trigger the migrations to execute as part of an automated release. And this can be a pretty sticky situation if you've ever tried to unbox it. The most common answer is this, which is in itself a fine solution.

But one scenario I've run into where this doesn't always play out is that, because the above link executes on App_Start, it can cause a slowdown for the first users, or in a load-balanced scenario can cause a performance issue when that migration method runs. And to me, from a DevOps perspective, this doesn't feel "clean": I would like to deploy my database changes at the same time as my app, and know that when it says "Finished", everything is done and successful. By leveraging the App_Start approach, you run the risk of a "false positive" that says the deployment was successful even when it wasn't, as the migrations will fail with the first user.

So an alternative I wrote was to create an Azure Function that provides an HTTP endpoint to trigger the migrations. Under this approach, I can make an HTTP call from my release pipeline and execute the migrations in a controlled state; if they're going to fail, it happens within my deployment rather than after.

Below is the code I leveraged in the Azure Function:

[FunctionName("RunMigration")]
public static async Task<HttpResponseMessage> Run([HttpTrigger(AuthorizationLevel.Function, "post", Route = null)]HttpRequestMessage req, TraceWriter log)
{
    log.Info("Run Migration");

    bool isSuccessful = true;
    string resultMessage = string.Empty;

    try
    {
        // Get the security key from the querystring
        string key = req.GetQueryNameValuePairs()
            .FirstOrDefault(q => string.Compare(q.Key, "key", true) == 0)
            .Value;

        // Compare it against the "MigrationKey" app setting on the Function App
        var keyRecord = ConfigurationManager.AppSettings["MigrationKey"];
        if (key != keyRecord)
        {
            throw new ArgumentException("Key Mismatch, disregarding request");
        }

        // Set the initializer and force the migration to execute
        Database.SetInitializer(new MigrateDatabaseToLatestVersion<ApplicationDataContext, Data.Migrations.Configuration>());

        var dbContext = new ApplicationDataContext();
        dbContext.Database.Initialize(true);
    }
    catch (Exception ex)
    {
        isSuccessful = false;
        resultMessage = ex.Message;
        log.Info("Error: " + ex.Message);
    }

    return isSuccessful == false
        ? req.CreateResponse(HttpStatusCode.BadRequest, "Error: " + resultMessage)
        : req.CreateResponse(HttpStatusCode.OK, "Migration Completed, Database updated");
}

Now a couple of quick things to note as you look at the code:

The key check: I extract a querystring parameter called "key" and compare it against the "MigrationKey" value in the Function App's settings. The purpose of this is to quickly secure the Azure Function: the migrations only run when the querystring value matches the app settings value, which prevents just anyone from hitting the endpoint. I can then secure this value within my release management tool and pass it as part of the HTTP request.

The Database.SetInitializer and Initialize calls are what actually trigger the migration to execute, by creating a context and setting the initializer to MigrateDatabaseToLatestVersion.

The return statement at the end controls the response code sent back to the client for the HTTP request. This allows me to fail the step within my release management tool if necessary.
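
To give a feel for the release-side call, here's roughly what the HTTP request looks like from a pipeline script; the function app name and variable names are placeholders, and because the function uses AuthorizationLevel.Function, the function key is passed via the standard code parameter:

curl -X POST "https://<function-app>.azurewebsites.net/api/RunMigration?code=$FUNCTION_KEY&key=$MIGRATION_KEY"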

Azure Compute – Searchable and Filterable

Hello all! A good friend of mine, Brandon Rohrer, and I just finished the first iteration of a project we've been working on recently. Being Cloud Solution Architects, we get a lot of questions about the different compute options that are available in Azure, and it occurred to us that there wasn't a way to consume this information in a searchable and filterable format to get customers the information they need.

So we created this site:

https://computeinfo.azurewebsites.net

This site scrapes through the documentation provided by Microsoft and extracts the information about the different types of virtual machines you can create in Azure, providing it in a way that meets the following criteria:

  • Searchable
  • Filterable
  • Viewable on a mobile device

Hope this helps as you look at your requirements in Azure and build out the appropriate architecture for your solution.