Category: Technology

The world of technology is one of innovation and excitement. For me, technology is the one force that can truly transform lives: it's the field where we get to build things that leave the world a better place.

Enabling Remote State with Terraform

So I've made no secret of my love of Terraform; I really like it as a way of doing infrastructure as code. One of the features I like most is the ability to execute a plan and see exactly what's going to change before it's applied.

What is state in Terraform?

Terraform uses state to power its "plan/apply" workflow: by comparing your configuration against the recorded state of your infrastructure, it can show you exactly which changes will be made before they are applied.

How do we enable remote state?

The process of enabling remote state isn't hard, and it only requires a small piece of code. For my projects, I add a "Terraform.tf" file that contains this information. NOTE: I usually add this file to the .gitignore so that I'm not checking in the keys:

terraform {
    backend "azurerm" {
        resource_group_name  = "..."
        storage_account_name = "..."
        container_name = "..."
        key = "..."
    }
}

It really is that simple (after adding the block, run "terraform init" to initialize the remote backend), and it becomes very important if more than one person is deploying to the same environment. In that scenario, if two developers each use local state, your state can quickly become out of sync. Remote state is an easy way to make sure state is managed in a way that allows collaboration.

A simple trick to handling environments in Terraform

So for a short post, I wanted to share a good habit to get into with Terraform. More specifically, this is an easy way to handle the configuration and deployment of multiple environments while keeping your Terraform scripts easy to manage.

It doesn't take long working with Terraform to see the immediate value in leveraging it to build out brand new environments. That being said, it never fails to amaze me how many people I talk to who don't craft their templates to be highly reusable. There are lots of ways to do this, but I wanted to share a practice that I use.

The idea starts by leveraging this pattern. My projects all contain the following key ".tf" files:

  • main.tf: This file contains the provider information, and maps up the service principal (if you are using one) to be used during deployment.
  • variables.tf: This file contains a list of all the variables leveraged in my solution, with a description for their definition.

The “main.tf” file is pretty basic:

provider "azurerm" {
    subscription_id = var.subscription_id
    version = "~> 2.1.0"

    client_id = var.client_id
    client_secret = var.client_secret
    tenant_id = var.tenant_id

    features {}
}

Notice that the above is already wired up to the variables subscription_id, client_id, client_secret, and tenant_id.

Now for my variables file, I have things like the following:

variable "subscription_id" {
    description = "The subscription being deployed."
}

variable "client_id" {
    description = "The client id of the service prinicpal"
}

variable "client_secret" {
    description = "The client secret for the service prinicpal"
}

variable "tenant_id" {
    description = "The client secret for the service prinicpal"
}

What this enables is the ability to have a separate ".tfvars" file for each individual environment:

primarylocation = "..."
secondarylocation = "..."
subscription_id = "..."

client_id = "..."
client_secret = "..."
tenant_id = "..."

From here, the process of creating the environment in Terraform is as simple as:

terraform apply -var-file {EnvironmentName}.tfvars

And then for new environments all I have to do is create a new .tfvars file to contain the configuration for that environment. This enables me to manage the configuration for my environment locally.

NOTE: I usually recommend adding "*.tfvars" to the .gitignore so that these files are not checked in. This keeps environment-specific configuration (and any secrets it contains) out of source control.

Another step this makes relatively easy is automated deployment, as I can add the following YAML task:

- script: |
    touch variables.tfvars
    echo "primarylocation = \"$PRIMARYLOCATION\"" >> variables.tfvars
    echo "secondarylocation = \"$SECONDARYLOCATION\"" >> variables.tfvars
    echo "subscription_id = \"$SUBSCRIPTION_ID\"" >> variables.tfvars
    echo "client_id = \"$SP_APPLICATIONID\"" >> variables.tfvars
    echo "tenant_id = \"$SP_TENANTID\"" >> variables.tfvars
    echo "client_secret = \"$SP_CLIENTSECRET\"" >> variables.tfvars
  displayName: 'Create variables Tfvars'

The above script then takes the build variables for the individual environment, and builds the appropriate “.tfvars” file to run for that environment.

Now, this is sort of the manual approach; ideally you would pull the necessary deployment variables from a secret store like Azure Key Vault or HashiCorp Vault.

Leveraging a stream deck to assist productivity

So I’ve always been the type to play around with new technology. And one of the things I’ve been toying with is doing some video recordings of a variety of topics.

One of the newest pieces of technology I picked up is an Elgato Stream Deck XL, and I have to tell you, it's really quite nice from a general productivity standpoint, so I thought I would do a write-up for others. I bought it expecting to use it for three use cases:

  • Normal Work
  • Development Work
  • My Dungeons & Dragons game (done remotely)

And I have to tell you, I've been pretty impressed with how much it does right out of the box. When you install it, you get a simple utility that allows you to configure the device.

Now, no surprise, there are a lot of features right out of the gate that are built around the idea of streaming, things like sending a tweet or controlling the OBS streaming software. Those are all cool, but right away I was looking for things that make my day-to-day life easier.

To that end, it has actions like "Open", "Website", and "HotKey", which I found made integrating the Stream Deck with my normal workflow surprisingly easy.

All I did was start adding a bunch of my commonly used programs and websites to make them faster to access. Then, using "HotKey", I was able to automate a lot of my activities, like creating a new chat in Teams, creating a blank email, viewing my calendar, etc.

This made a huge impact right out of the gate; I felt like it was a lot easier to manage the different actions I was doing and jump to things faster. I'm only a day in, and I really like it.

And then further, I was able to map Windows 10 hotkeys to get better use out of other features. For example, I started using multiple desktops, since that lets me switch between the interface I use for calls and meetings and the coding / engineering work I do, but I struggled to remember all the hotkey combinations. Mapping them to the Stream Deck turned each one into a single button push, which is really kind of awesome.

The other feature I like is support for profiles, which lets me have separate key configurations for the various uses, so I can break things up along use-case lines.

Overall I’m really happy with this device and am looking forward to finding new ways to use it.

Debt is bad, and Technical Debt is no different

So there are a lot of posts out there on the topic of technical debt, and to be honest it's a perfect example of a term that has been abused and given all kinds of unintended meanings. What I want to accomplish here is to tackle what technical debt actually is, along with some strategies to manage it.

If you look at almost any blog on financial planning, it will tell you that debt is a very dangerous thing: it can devour your income over time because of high interest rates, taking your paycheck and killing it with the death of a thousand cuts. The biggest problem with debt is that it can creep up on you.

For the sake of an example, let's look at a simple scenario. Say you have a credit card with a balance of $500 and a 15% interest rate, and you can only pay $200 a month on it. If the interest adds (for the sake of simple math) $75 to the balance each month, then you're only really paying down $125 a month. That credit card takes a lot longer to pay off, and you end up paying well more than $500 on that debt. The whole time, you lose the ability to generate income with the money being thrown into the "credit card pit."

Now, this is not a financial blog, because god knows I'm no expert. But I do want to use the above to illustrate a point. In software engineering, we have this concept of "technical debt", and Wikipedia actually has a pretty good definition of it:

“Technical debt (also known as design debt or code debt, but can be also related to other technical endeavors) is a concept in software development that reflects the implied cost of additional rework caused by choosing an easy (limited) solution now instead of using a better approach that would take longer.”

Wikipedia / Technopedia

Now, I like that definition, but I would change it slightly: I don't believe technical debt is limited to choosing the "easy" solution. It can be caused by any constraint that forces a decision, and I would give the following reasons:

  • Budget: Every project has a budget, whether that's time or dollars. Everyone would do things differently if they never had to worry about time or money, but that's not how the world works.
  • Delivery / Contractual Timelines: Sometimes you just have to ship it, and requirements force you to ship something that is not perfect or the way you want it, with a plan to get to it later.
  • Complexity: One of the biggest things I've seen some developers do to avoid technical debt actually makes it significantly worse. Many architects and developers over-complicate their design for the sake of "making it easier to maintain", but this causes further issues later by making the code harder to test, maintain, and change when baseline assumptions turn out to be wrong.
  • Competing Requirements: Sometimes you get requirements that force your hand in such a way that you can't avoid practices and patterns you normally would have avoided. For example, you might want to take advantage of the cloud, but be stuck with on-premises requirements.
  • Skill level: I'm going to flat out say this: when I look at old code that I've written but haven't touched in a long time, I usually think to myself, "Man, I was a terrible programmer." And to quote Red vs. Blue, "Were you smart 10 years ago? No, you were an idiot… and the truth is you're just as much of an idiot now, it will just take you 10 more years to realize it."
  • Available technology: Look, at the end of the day we can only build with the tools we have, and most of the good programmers I know try really hard to have crystal balls, but sometimes our predictions don't pan out.

Now I'm going to hazard a guess: if you go back and look at most of the refactors you've taken on, I'll bet more than a few were driven by something on that list. And if not, I'd love to hear your reasons in the comments below.

So to a certain extent, technical debt is impossible to avoid, but what we can do is adopt strategies to minimize it. We need to accept that technical debt will happen, but with a few practices I find it can be a lot more manageable. Here are a few tricks I've learned over the years.

Tip #1 – Create backlog stories around debt, and tag them if possible

Let’s be honest, most development shops maintain a backlog of some kind, and in that backlog are items, user stories, or some kind of baseline unit for work.

Now stop me if you've heard this one before, but the following events occur:

  1. Developer gets new story for a feature.
  2. Developer designs feature with architecture and business analyst.
  3. Developer codes feature.
  4. Timelines or priorities shift, and the developer doesn't get to put as much work in as planned this sprint.
  5. Developer gets code to point that meets the requirements, but has a few things they wish they did differently.
  6. Developer ships code and maybe (if you're lucky) makes notes of things to look at later.
  7. Repeat at step 1.

The problem is that this cycle repeats, and the technical debt continues to pile up until you reach a level where you can't do anything without massive rework or re-architecting.

Now, all of this could have been avoided if, right after step 6, we added a step 6.5 – "Developer creates user stories and tags them as 'technical debt'."

The idea here is this: Agile development was built on the principle that requirements are going to change, and we build towards that reality. To that same end, we should assume that we aren't going to be able to do everything we plan, so let's plan for it and build in a process to ensure we can get back to that technical debt wherever possible. If our developer went back in and logged 2-3 backlog items tagged as "technical debt", it would accomplish a few things:

  1. It allows us to keep an eye on how our technical debt is growing.
  2. It allows us to address the debt in a manageable fashion.
  3. It allows for transparency to management and the business.

Tip #2 – Deal with it a little at a time

This should surprise no one, based on the item above: I recommend making sure a few technical debt items make their way into every sprint. That way you can tackle them steadily and they don't pile up on you for later.

Pick out the bulk of your sprint based on backlog priority, set aside a certain amount of time in the sprint, and fill it in with technical debt. For example, if I have 3 people on a team working 40 hours a week on a two-week sprint, that means I have 240 hours per sprint. So if 210 hours of that sprint go to new feature work, and 30 hours (10 hours per person) go to technical debt, I can continually pull the technical debt down to a manageable level.

And there are two schools of thought here on how to pick those items:

Option A – Take as many small items as you can to fill those 30 hours, so if I have small 1-2 hour jobs, I can get more of them in a sprint.

Option B – Sort the list in descending order, and take the largest debt items first. If I have one that costs 10 hours, 8 hours, and 2 hours, those should be the first 3 I take on.

Personally, I find that Option B tends to work better on the teams I've worked on, but your mileage might vary. I find that those little stories tend to cause a vicious cycle of the following:

  1. Developer gets 2 feature stories, and 5 technical debt stories.
  2. Developer designs features, and reviews technical debt items.
  3. Developer works on feature, and pushes off technical debt stories because they are little, and can be done at the end.
  4. Developer gets caught up in changing deadlines and must push something off; the technical debt stories get bumped.
  5. Now not only do the 5 small stories get put back on the pile, but additional stories are added on top of them.

In my experience, assigning fewer, more impactful technical debt stories tends to motivate getting them addressed. But again, your mileage might vary; I would work with your teams on a strategy.

One way I found to make this fun was to make a game out of it, similar to estimation poker. We had a "Debt Wheel": each sprint, one developer spun the wheel, and whoever it landed on had to pick a big technical debt item to work on that sprint. At the end of the sprint, if they finished it, they got to spin the wheel (with their name off it) to see who got the next item. If they didn't finish it, they had to keep it into the next sprint.

Tip #3 – Use things like NuGet packages

I wrote a really in-depth blog post on this, found here, where I explain at length how private NuGet feeds can help manage technical debt. So I'm going to direct you to that post.

Tip #4 – Single Responsibility

A class (or service) should have a single responsibility; it should have only one job. The intention is that this makes it easier to take these pieces out and swap them for something else later, so focus on building smaller services that can be pulled out and replaced as needed.

Along the same lines, communication between services is another key to keeping technical debt under control, and the idea here is to make sure there is an abstraction between each service. For example, say I have a service that is supposed to send a request for processing to another service.

One option would be to make HTTP calls directly between the services, but honestly this causes quite a few issues:

  • Not fault tolerant.
  • Changes between services are hard to keep backwards compatible.
  • Harder to track and monitor.

But if we used a messaging layer like Service Bus or Kafka, we could eliminate those issues and make our application services more of a black box, where each piece functions independently of the others.
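
To make that concrete, here is a minimal sketch of what that abstraction could look like. The interface and class names are hypothetical, and it assumes the Azure.Messaging.ServiceBus and Newtonsoft.Json packages; the point is that callers depend on the interface, so the transport can be swapped without touching them.

using System.Threading.Tasks;
using Azure.Messaging.ServiceBus;
using Newtonsoft.Json;

// Hypothetical abstraction: callers depend on this interface, not on the transport.
public interface IMessageSender
{
    Task SendAsync<T>(T message);
}

// One possible implementation backed by Azure Service Bus; a Kafka-based
// implementation could be swapped in behind the same interface.
public class ServiceBusMessageSender : IMessageSender
{
    private readonly ServiceBusSender _sender;

    public ServiceBusMessageSender(ServiceBusClient client, string queueName)
    {
        _sender = client.CreateSender(queueName);
    }

    public async Task SendAsync<T>(T message)
    {
        // Serialize the payload and hand it to the messaging layer.
        var body = JsonConvert.SerializeObject(message);
        await _sender.SendMessageAsync(new ServiceBusMessage(body));
    }
}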

Tip #5 – Leverage dependency injection

I have to tell you, I've seen a disturbing trend lately where people say, "We use micro-services, so we don't have to use DI." I honestly don't understand this: DI makes it very easy to clean up your code and enforce a plug-and-play architecture where classes can be swapped out at any given moment.

The idea is this: even within a micro-service, smaller classes are easier to update. If you are using NuGet packages and the other patterns above, DI is very easy to implement and has a massive impact on how you build your code, making it easier to change later, which in turn makes it easier to pay down technical debt.
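
As a minimal sketch of why this matters (the types here are illustrative, using the Microsoft.Extensions.DependencyInjection package): swapping an implementation later touches a single registration rather than every consumer.

using Microsoft.Extensions.DependencyInjection;

public interface IOrderValidator
{
    bool Validate(string orderJson);
}

public class BasicOrderValidator : IOrderValidator
{
    // Placeholder rule; a real validator would check schema, totals, etc.
    public bool Validate(string orderJson) => !string.IsNullOrWhiteSpace(orderJson);
}

public static class CompositionRoot
{
    public static ServiceProvider Build()
    {
        var services = new ServiceCollection();

        // Replacing BasicOrderValidator later only changes this one line.
        services.AddScoped<IOrderValidator, BasicOrderValidator>();

        return services.BuildServiceProvider();
    }
}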

If you ignore debt it will only get worse!

That right there is the number one thing I hope you take away from this post. All too often in my career have I seen technical debt be the reason that applications are rewritten, or completely rebuilt from scratch, which I’m convinced is a massive risk and rarely works out.

Why you really need a private nuget feed

So I have to be honest: I'm the kind of person who is always looking for a better way to do something, and it probably speaks volumes about my chosen career that I am that way. One of the reasons I'm so passionate about technology is that, on a foundational level, it is supposed to change and improve our lives.

To that end, I’ve been having a lot of conversations lately around development and managing technical debt, sprawl, and just generally the ability to have more controls around major enterprise applications that are being built.

One of the biggest changes to coding, I would argue since the idea of object-oriented programming, has been the advent of the micro-service. The idea that I can build out a swarm of smaller services, leverage them to do much larger tasks, and scale them independently has fundamentally changed how many of us look at software architecture and solution development.

But with this big step forward, all too often I see a big step backward as well.

One of the ways this regression into bad habits happens is that it all too often becomes the "wild west" of software development: everyone runs off building individual micro-services with no thought to operations or manageability, which ultimately leads to a myriad of services that are very difficult to maintain.

For example, all services have several common components:

  1. Configuration Management
  2. Secret Management
  3. Monitoring / Alerting
  4. Exception handling
  5. Logging
  6. Availability Checking
  7. Health probe endpoints

If these elements aren't coordinated, it becomes very difficult to maintain a solution, because if you have 100 services in the application, you could potentially have 100 different ways configuration is managed. That is a nightmare, to be honest. The good news is that this problem has already been solved, with package managers like npm and NuGet!

One of the key benefits of a custom NuGet feed is that it gives you premade application components that developers can leverage like Lego blocks to increase their productivity and get a jump start on the new services they build.

Now, most developers I know love to reuse things; none of us got into this field to rewrite code for the hell of it.

Now, if you create NuGet packages around these types of operations (and more that you will identify), and hook them up to their own CI/CD process that ends in an artifacts feed, you gain a couple of benefits:

  1. Developers stop reinventing the wheel:  You gain the productivity gains of being able to plug-and-play with code. You basically get to say “I’m building a new service, and I need X, Y, and Z…”
  2. We get away from “It works on my machine” as all dependencies are bundled, and can get to a container that is more cleanly implemented.
  3. You gain more change control and management:  By versioning changes to nuget packages, and leveraging pre-release packages appropriately we can roll out changes to some services and easily look at all of our services and see which ones are consuming older packages. So ultimately we can manage the updates of each service independently, but take the need to track that off the developer and onto a service.
  4. Make updates to shared code easier and an afterthought: Stop me if you've heard this one before. You as a developer are performing updates to a service that leverages an older piece of shared code that someone discovered a bug with. You are working on that bug, don't think about the shared code that should be updated at the same time, and it falls through the cracks. When you finish, the new version of the service is deployed with the old shared code and no one is the wiser, until a change later to enforce something breaks the service. By leveraging NuGet packages, it becomes standard practice to say "I'm starting work on this service", select manage NuGet packages, see the list of packages with updates, and update them all.
  5. You gain more ability to catch things earlier, preventing delays at release: A NuGet feed makes it a lot easier to coordinate changes being made to common code, and easier for developers to adjust. For example, if I'm working on Service A and find out during standup that another member of my team has a pre-release of the NuGet package I'm using, and both are going to prod around the same time, I can pull in the pre-release and make my changes now rather than integrating at the end and pushing last-minute changes.

There are some great posts out there that get into the specifics of this and how to implement it.

MagicMirror on the Wall

So lately, my wife and I have been spending a lot of time organizing different elements of our lives. Specifically, Covid-19 has created a situation where our lives are very different from what we were used to previously, and it has caused us to re-evaluate the tips and tricks we use to organize our lives.

One such change has been the need to coordinate and communicate more with regard to the family calendar and a variety of other information about our lives. So I decided that if we were going to bring back "the family bulletin board", I wanted it to integrate with our digital footprint and tools to make life easier.

After doing some research, I settled on MagicMirror, mainly because of articles like this. And I had a spare monitor, with a Raspberry Pi 3 lying around the office, so I figured why not. The plan was to implement the initial prototype and get it working in my office, and then invest in something better for my wife to help manage everything.

So I have to admit, this was surprisingly easy to set up, and it did not require much in the way of effort. I was even able to automate pushing updates to the device pretty easily. So if you are looking for a cheap option for getting something like this working, I recommend MagicMirror.

Here’s the current state of the PoC:

So for a walkthrough, I went through the following steps:

  • Download Raspbian OS, and flash the Micro SD card using Etcher.
  • Put the Micro SD into the Raspberry Pi and boot it up. From there you will get a step-by-step wizard for configuring wifi and setting up passwords, etc.
  • From there, I executed the manual installation methods found here.

That was really it for getting the baseline out of the box. At that point I had a nice black screen with holidays, date/time, and a few other widgets. The next step was to install the widgets I cared about; you can find a list of the 3rd party widgets here.

Now, I wanted MagicMirror to load up on this monitor without me having to do anything, so I followed the steps here, and made some additional modifications from there that helped make my life easier.

The one problem I ran into was the process of updating the device. I could have just remoted into the device to update config files and other scripts, but for me this was frustrating; I really want to keep configuration for this kind of thing source controlled as a practice. So I ended up creating a private GitHub repo for the configuration of my MagicMirror.

This worked fine except for the fact that every time I needed to update the mirror, I had to push a file to the device, and then copy it over and reboot the device.

So instead, what I ended up doing was building a CI/CD pipeline that pushes changes to the MagicMirror.

So what I did was the following:

  • Create a blob storage account and container.
  • Create an Azure DevOps (ADO) pipeline for my GitHub repo.
  • Add only one task to that pipeline:
- task: AzureCLI@2
  inputs:
    azureSubscription: '{Subscription Name}'
    scriptType: 'bash'
    scriptLocation: 'inlineScript'
    inlineScript: 'az storage blob upload --connection-string $BLOBSTORAGE -f configdev.js -c test -n configdev.js'

Now whenever I push an update to the master branch, a copy of the config file is pushed directly to blob storage.

Now came the problem of how to get the device to pull down the new config file. If you look at the instructions referenced above for making MagicMirror auto-start, they mention an mm.sh file, which I updated to be the following:

cd ./MagicMirror
cd config
# Pull the latest config from blob storage and replace the active config
curl -o config.dev.js https://...blob url.../config.js
cp config.dev.js config.js
cd ..
DISPLAY=:0 npm start

Now, with this update, the MagicMirror picks up a fresh config on every restart, so all I have to do is "sudo reboot" to make it pick up the new version.

I'm going to continue to build this out, and will likely have more blog posts on this topic moving forward, but I wanted to get something out there about the beginning. Some things I've been thinking about adding are:

  • Examine the different types of widgets
  • Calendar integration
  • Google Calendar integration
  • To Do Integration
  • Updating to shut off at 8pm and start up at 6am
  • Building ability to recognize when config has been updated and trigger a reboot automatically

Adding Application Insights to Azure Functions

So I wanted to share a quick post on something rather small, but I was surprised at how little documentation there was on how to implement it.

Monitoring is honestly one of the most important things you can do with regard to cloud applications. There’s an old saying, “what gets measured…matters.” And I find this expression to be very true in Application Development.

So if you are building out an Azure Function, what steps are required to enable Azure Application Insights for your Functions? For this post I'll be focusing on adding it to a .NET Core function app.

Step 1 – Add the NuGet packages

No surprise, it all starts with NuGet packages: you are going to want to add "Microsoft.ApplicationInsights.AspNetCore".

Additionally, you are going to need the NuGet package "Microsoft.Azure.Functions.Extensions".

Step 2 – Update the Startup.cs

Now, if you haven't configured your function app to have a Startup.cs file, then I'm of the opinion you've made a mistake, because honestly I can't overstate the importance of dependency injection for following recommended practices and setting yourself up for success. Once you've done that, you can add the following to the override of "Configure", shown below:

using Microsoft.Azure.Functions.Extensions.DependencyInjection;
using Microsoft.Extensions.DependencyInjection;

[assembly: FunctionsStartup(typeof(SampleProject.Services.Startup))]

namespace SampleProject.Services
{
    public class Startup : FunctionsStartup
    {
        public override void Configure(IFunctionsHostBuilder builder)
        {
            builder.Services.AddScoped<ITelemetryProvider, TelemetryProvider>();
            builder.Services.AddApplicationInsightsTelemetry();
        }
    }
}

The above code uses the "AddApplicationInsightsTelemetry()" method to capture the out-of-the-box telemetry.

That leaves the outstanding question of capturing custom or application-specific telemetry. For that, I recommend implementing a wrapper class around the TelemetryClient. This is a general practice I always follow, as it removes hard dependencies in the application and helps with flexibility later.

For this, the “TelemetryProvider” is the following:

using System;
using System.Collections.Generic;
using Microsoft.ApplicationInsights;
using Microsoft.ApplicationInsights.Extensibility;

public class TelemetryProvider : ITelemetryProvider
{
    private TelemetryClient _client;

    public TelemetryProvider()
    {
        _client = new TelemetryClient(TelemetryConfiguration.CreateDefault());
    }

    public void TrackEvent(string name)
    {
        _client.TrackEvent(name);
    }

    public void TrackEvent(string name, Dictionary<string, string> properties)
    {
        _client.TrackEvent(name, properties);
    }

    public void TrackEvent(string name, Dictionary<string, string> properties, Dictionary<string, double> metrics)
    {
        _client.TrackEvent(name, properties, metrics);
    }

    public void TrackMetric(string name, double value)
    {
        _client.TrackMetric(name, value);
    }

    public void TrackException(Exception ex)
    {
        _client.TrackException(ex);
    }

    public void TrackDependency(string name, string typeName, string data, DateTime startTime, TimeSpan duration, bool success)
    {
        _client.TrackDependency(typeName, name, data, startTime, duration, success);
    }
}

With an interface of the following:

public interface ITelemetryProvider
{
    void TrackDependency(string name, string typeName, string data, DateTime startTime, TimeSpan duration, bool success);
    void TrackEvent(string name);
    void TrackEvent(string name, Dictionary<string, string> properties);
    void TrackEvent(string name, Dictionary<string, string> properties, Dictionary<string, double> metrics);
    void TrackException(Exception ex);
    void TrackMetric(string name, double value);
}

After you've implemented this provider, it becomes very easy to capture telemetry related to specific functionality in your application and leverage Application Insights as a total telemetry solution.
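
For illustration, here is a minimal usage sketch (the function class, function name, and queue name are hypothetical) of how the provider can be constructor-injected into a function class once it is registered in Startup:

using System.Collections.Generic;
using Microsoft.Azure.WebJobs;

public class OrderFunctions
{
    private readonly ITelemetryProvider _telemetry;

    // Resolved via the Startup registration shown earlier.
    public OrderFunctions(ITelemetryProvider telemetry)
    {
        _telemetry = telemetry;
    }

    [FunctionName("ProcessOrder")]
    public void Run([QueueTrigger("orders")] string orderMessage)
    {
        // Custom, application-specific telemetry alongside the out-of-the-box data.
        _telemetry.TrackEvent("OrderReceived",
            new Dictionary<string, string> { { "MessageLength", orderMessage.Length.ToString() } });
    }
}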

Learning how to use Power BI Embedded

So, lately I've been doing a lot more work with Power BI Embedded, and having a lot of discussions around implementing Power BI Embedded within applications.

As I've discussed before, Power BI itself can be a complicated topic, especially just getting a handle on all the licensing. Look here for an explanation of that licensing.

But another big question is, even then, what does it take to implement Power BI Embedded? What kind of functionality is available? The first resource I would point you at is the Power BI Embedded Playground. This site really is fantastic, giving working samples of how to implement Power BI Embedded in a variety of use-cases and giving you the code to leverage in the process.

But more than that, leveraging a tool like Power BI Embedded does require further training, and there are tutorials and online training courses you might find useful.

There are also some videos out there that give a wealth of good information on Power BI Embedded, and some of them can be found here.

There is a wealth of information out there, and this is just a post to get you started, but once you get going, Power BI Embedded makes it really easy to embed amazing analytics capabilities into your applications.

How to pull blobs out of Archive Storage

So if you're building a modern application, you definitely have a lot of options for storing data, whether that's traditional database technologies (SQL, MySQL, etc.), NoSQL (Mongo, Cosmos, etc.), or even just blob storage. Of the above options, blob storage is by far the cheapest, providing a very low cost option for storing data long term.

The best way to ensure you get the most value out of blob storage is to leverage the different tiers to your benefit. By using a tier strategy for your data, you can pay significantly less to store it for the long term. You can find the pricing for Azure blob storage here.

Now, most people are hesitant to leverage the archive tier because the idea of having to wait for data to be rehydrated tends to scare them off. But it's been my experience that most data leveraged for business operations has a shelf-life, and archiving that data is definitely a viable option, especially for data that is not accessed often; I would challenge most people storing blobs to capture metrics on how often their older data is actually accessed. When you compare the need to "wait for retrieval" against the cost savings of archive, in my experience it really leans towards leveraging archive for long-term data storage.

How do you move data to archive storage

When storing data in Azure blob storage, the process of uploading a blob is fairly straightforward, and all it takes to move a blob into the archive tier is setting its access tier to "Archive".

The below code takes a local file (in my test, a randomly generated one), uploads it to blob storage, and then sets its access tier to Archive:

var accountClient = new BlobServiceClient(connectionString);

var containerClient = accountClient.GetBlobContainerClient(containerName);

// Get a reference to a blob
BlobClient blobClient = containerClient.GetBlobClient(blobName);

Console.WriteLine("Uploading to Blob storage as blob:\n\t {0}\n", blobClient.Uri);

// Open the file and upload its data
using FileStream uploadFileStream = File.OpenRead(localFilePath);
var result = blobClient.UploadAsync(uploadFileStream, true);

result.Wait();

uploadFileStream.Close();

Console.WriteLine("Setting Blob to Archive");

blobClient.SetAccessTier(AccessTier.Archive);

How to re-hydrate a blob in archive storage?

There are two ways of re-hydrating blobs:

  1. Copy the blob to another tier (Hot or Cool)
  2. Set the access tier to Hot or Cool

It really is that simple, and it can be done using the following code:

var accountClient = new BlobServiceClient(connectionString);

var containerClient = accountClient.GetBlobContainerClient(containerName);

// Get a reference to a blob
BlobClient blobClient = containerClient.GetBlobClient(blobName);

blobClient.SetAccessTier(AccessTier.Hot);

After doing the above, the process of rehydrating the blob starts automatically, and you then need to monitor the blob's properties to see when it has finished rehydrating.

Monitoring the re-hydration of a blob

One easy pattern for tracking blobs as they are rehydrated is to implement a queue and an Azure Function that checks on each blob during this process. I did this by implementing the following.

For the message model, I used the following to track the hydration process:

public class BlobHydrateModel
{
    public string BlobName { get; set; }
    public string ContainerName { get; set; }
    public DateTime HydrateRequestDateTime { get; set; }
    public DateTime? HydratedFileDateTime { get; set; }
}

And then implemented the following code to handle the re-hydration process:

public class BlobRehydrationProvider
{
    private string _cs;

    public BlobRehydrationProvider(string cs)
    {
        _cs = cs;
    }

    public void RehydrateBlob(string containerName, string blobName, string queueName)
    {
        var accountClient = new BlobServiceClient(_cs);

        var containerClient = accountClient.GetBlobContainerClient(containerName);

        // Get a reference to a blob
        BlobClient blobClient = containerClient.GetBlobClient(blobName);

        blobClient.SetAccessTier(AccessTier.Hot);

        var model = new BlobHydrateModel() { BlobName = blobName, ContainerName = containerName, HydrateRequestDateTime = DateTime.Now };

        QueueClient queueClient = new QueueClient(_cs, queueName);
        var json = JsonConvert.SerializeObject(model);
        string requeueMessage = Convert.ToBase64String(Encoding.UTF8.GetBytes(json));
        queueClient.SendMessage(requeueMessage);
    }
}

Using the above code, when you set the blob to Hot and queue a message, it triggers an Azure Function which then monitors the blob properties using the following:

[FunctionName("CheckBlobStatus")]
        public static void Run([QueueTrigger("blobhydrationrequests", Connection = "StorageConnectionString")]string msg, ILogger log)
        {
            var model = JsonConvert.DeserializeObject<BlobHydrateModel>(msg);
            
            var connectionString = Environment.GetEnvironmentVariable("StorageConnectionString");

            var accountClient = new BlobServiceClient(connectionString);

            var containerClient = accountClient.GetBlobContainerClient(model.ContainerName);

            BlobClient blobClient = containerClient.GetBlobClient(model.BlobName);

            log.LogInformation($"Checking Status of Blob: {model.BlobName} - Requested : {model.HydrateRequestDateTime.ToString()}");

            var properties = blobClient.GetProperties();
            if (properties.Value.ArchiveStatus == "rehydrate-pending-to-hot")
            {
                log.LogInformation($"File { model.BlobName } not hydrated yet, requeuing message");
                QueueClient queueClient = new QueueClient(connectionString, "blobhydrationrequests");
                string requeueMessage = Convert.ToBase64String(Encoding.UTF8.GetBytes(msg));
                queueClient.SendMessage(requeueMessage, visibilityTimeout: TimeSpan.FromMinutes(5));
            }
            else
            {
                log.LogInformation($"File { model.BlobName } hydrated successfully, sending response message.");
                //Trigger appropriate behavior
            }
        }

By checking the ArchiveStatus, we can tell when the blob is re-hydrated and can then trigger the appropriate behavior to push that update back to your application.
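
As an example of that "appropriate behavior", here is a minimal sketch of what could replace the "//Trigger appropriate behavior" comment in the else branch above. The response queue name is hypothetical, and it reuses the connectionString and msg variables (and usings) already in scope in that function:

// Drop a completion message on a response queue for the application to consume.
// "blobhydrationcompleted" is an illustrative queue name.
QueueClient responseQueue = new QueueClient(connectionString, "blobhydrationcompleted");
string completedMessage = Convert.ToBase64String(Encoding.UTF8.GetBytes(msg));
responseQueue.SendMessage(completedMessage);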

Leveraging Private Nuget Packages to support Micro-services

So one of the most common things I see come up as you build out a micro-service architecture is that there is a fair bit of code reuse across services, and managing that code can be painful, causing its own kind of configuration drift and management headaches.

Now, given the above statement, the first response is probably, "But if each micro-service is an independent unit, and meant to be self-contained, how can there be code reuse?" My response is that just because the services are independently deployable does not mean there isn't code that will be reused between them.

For example, if you are building an OrderProcessingService, it might interface with classes that support things like:

  • Reading from a queue
  • Pushing to a queue
  • Logging
  • Configuration
  • Monitoring
  • Error Handling

These elements should not be wildly different from one micro-service to the next, and should share some common pieces. For example, if you are leveraging Key Vault for your configuration, odds are you will have the same classes implemented across every service.
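
As an example, here is a minimal sketch of the kind of shared component that would live in a private NuGet package and be pulled into every service rather than copied around. The class and package names are hypothetical, and it assumes the Azure.Security.KeyVault.Secrets and Azure.Identity packages:

using System;
using Azure.Identity;
using Azure.Security.KeyVault.Secrets;

// Hypothetical shared component, e.g. packaged as "Contoso.Common.Configuration".
public class KeyVaultConfigurationProvider
{
    private readonly SecretClient _client;

    public KeyVaultConfigurationProvider(string vaultUri)
    {
        // DefaultAzureCredential works locally (az login / Visual Studio) and in Azure (managed identity).
        _client = new SecretClient(new Uri(vaultUri), new DefaultAzureCredential());
    }

    public string GetSetting(string name)
    {
        // Every service reads configuration the same way, via the same package.
        return _client.GetSecret(name).Value.Value;
    }
}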

But this in itself creates its own challenges, and those challenges include but are not limited to:

  • Library Version Management: If each service is its own deployable unit, then as you make changes to common libraries you want to be able to manage versions over time.
  • Creating a central library for developers: Allowing developers to manage the deployment and changes to versions of that code in a centralized way is equally important.
  • Reduce Code Duplication: Personally, I have a pathological hatred of code duplication. I'm of the belief that if you are copying and pasting code, you did something wrong. There are plenty of tools to handle this type of situation without copy and paste adding to technical debt.

Now everyone is aware of NuGet, and I would say it would be pretty hard to do software development without it right now. But what you may not know is that Azure DevOps makes it easy to create a private NuGet feed, and then enable the packaging of NuGet packages and publishing via CI/CD.

Creating a NuGet Feed

The process of creating a feed is actually pretty straightforward: it involves going to a section of Azure DevOps called "Artifacts".

From there, select "Create Feed", give the feed a name, and specify who has rights to use the feed.

And from here it's pretty easy to set up the publishing of a project as a NuGet package.

Creating a package in Azure Pipelines

A few years ago this was actually a really painful process, but now it's pretty easy. You don't have to do anything in Visual Studio to support it. There are options there, but to me a NuGet package is a deployment activity, so I personally believe it should be handled in the pipeline.

So the first thing is that you need to specify the package details in the project configuration, which you can do under "Properties" on the project. These are important, as this is the information your developers will see in their NuGet feed.

From here the next step is to configure your pipeline to enable the CI/CD. The good news is this is very easy using the following pieces:

- task: DotNetCoreCLI@2
  inputs:
    command: pack
    versioningScheme: byEnvVar
    versionEnvVar: BUILD_BUILDNUMBER
  displayName: 'dotnet pack $(buildConfiguration)'

- task: NuGetAuthenticate@0
  displayName: 'NuGet Authenticate'

- task: NuGetCommand@2
  inputs:
    command: push
    publishVstsFeed: 'Workshop/WellDocumentedNerd'
    allowPackageConflicts: true
  displayName: 'NuGet push'

Now, in the above I have specified BUILD_BUILDNUMBER as the source for new version numbers. I do this because I find it's easier to ensure the versions are maintained properly in the NuGet feed.

One thing of note: the NuGet Authenticate step is absolutely essential to ensure that you are logged in with the appropriate context.

Now, after executing that pipeline, the new package shows up in my NuGet feed.

Consuming NuGet in a project

Now, how do we make this available to our developers and to our build agents? This process is very easy: if you go back to the "Artifacts" section, you can connect to the feed from there.

The best part is that Azure DevOps will give you the XML required when you select "dotnet", and it will look something like this:

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <packageSources>
    <clear />

    <add key="WellDocumentedNerd" value="....index.json" />

  </packageSources>
</configuration>

After this is done and added to your project, whenever your build pipeline attempts to build the project, it will consider this new package source during the NuGet restore.

Why go through this?

As I mentioned above, one of the biggest headaches of a micro-service architecture, for all of its benefits, is that it creates a lot of moving parts, and managing any common code can become difficult if you have to replicate it between projects.

This creates a nice, easy solution that allows you to manage a separate code repository with private / domain-specific class libraries, with versioning that allows each service to consume a different version while remaining independently deployable.