
AI / Analytics Tools in Azure

When it comes to Artificial Intelligence in Azure, there are a lot of tools, options, and directions you can explore, and AI is a broad topic in itself. That being said, I wanted to share some resources to help, whether you are looking for demos that show the “art of the possible” or tools to get started if you are a data scientist or doing this kind of work.

Let’s start with some demos. Here are links to a few that I find particularly interesting for showcasing the capabilities Azure provides in this space.

  • Video.AI : This site lets you upload videos and run them through a variety of cognitive / media services to showcase the capabilities.
  • JFK Files : This is one of my favorites, as it shows what cognitive search can do when searching large datasets, and it provides a good reusable interface for surfacing findings such as transcriptions.
  • CopTivity : Here’s a link to the video for CopTivity and why a modern interface is interesting to law enforcement.

Now when it comes to offerings in this space, the list is long and always growing, but I wanted to cover a few at a high level that you can investigate quickly.

Cognitive Services : These are Azure services that expose AI capabilities through APIs, so you can add them to your applications without building the models yourself. They include things like Custom Vision, sentiment analysis, and other capabilities. Here’s a video discussing it further.
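To give a feel for how lightweight these APIs are, here is a minimal sketch of building a sentiment request against the Text Analytics service (one of the Cognitive Services). The endpoint, key, and text are placeholders for your own resource; nothing is actually sent here.

```python
import json

def build_sentiment_request(endpoint, api_key, texts):
    """Build the URL, headers, and JSON body for a Text Analytics
    sentiment call (v3.0 REST API). The endpoint and key come from
    your own Cognitive Services resource in the Azure portal."""
    url = f"{endpoint}/text/analytics/v3.0/sentiment"
    headers = {
        "Ocp-Apim-Subscription-Key": api_key,
        "Content-Type": "application/json",
    }
    body = {
        "documents": [
            {"id": str(i), "language": "en", "text": t}
            for i, t in enumerate(texts, start=1)
        ]
    }
    return url, headers, json.dumps(body)

# Placeholder values -- no request is actually sent here:
url, headers, payload = build_sentiment_request(
    "https://<your-resource>.cognitiveservices.azure.com",
    "<your-key>",
    ["The demo was fantastic."],
)
```

From there you would POST the payload with any HTTP client, e.g. `requests.post(url, headers=headers, data=payload)`, and get back a sentiment score per document.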

Databricks : Databricks is a great technology for provisioning the compute required to run your Python- and Spark-based models, in a way that minimizes the management demands placed on your team.

Azure Machine Learning : This offering gives developers and data scientists a managed platform for building, training, and deploying models, with a focus on productivity. Here’s a video giving the quick highlights of what Azure Machine Learning Studio is, and a video on data labeling in ML Studio. Here’s a video about using Azure Machine Learning Designer to democratize AI, and a video on using Azure Machine Learning Datasets.

Azure Data Studio : Along with tools like VS Code, which is a great IDE for Python and other work, Microsoft provides a similar open source tool called Azure Data Studio, which can help with the data work your teams are doing. Here’s a video on how to use Jupyter notebooks with it. Additionally, VS Code provides options to support this kind of work as well (video).

Azure Cognitive Search : As I mentioned above, search can be a great way to surface insights to your users, and here’s a video on using Cognitive Search.
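The JFK Files demo above is built on this service. As a rough sketch of how a query looks over the REST API, here is a helper that builds a simple search request; the service name, index name (`jfk-index`), key, and api-version are all placeholders for illustration, and nothing is sent here.

```python
from urllib.parse import urlencode

def build_search_query(service_name, index_name, api_key, search_text):
    """Build the URL and headers for a simple Azure Cognitive Search
    query via the REST API. The api-version shown is one published
    version; check the docs for the current one."""
    params = urlencode({"api-version": "2020-06-30", "search": search_text})
    url = (f"https://{service_name}.search.windows.net"
           f"/indexes/{index_name}/docs?{params}")
    headers = {"api-key": api_key}
    return url, headers

# Placeholder values -- no request is actually sent here:
url, headers = build_search_query(
    "<your-service>", "jfk-index", "<your-key>", "oswald"
)
```

A GET to that URL with those headers returns matching documents as JSON, which you can then surface in an interface like the one the JFK Files demo uses.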

Azure Data Science VM : Finally, part of the battle of doing data science work is maintaining all the open source tools and leveraging them to your benefit; the time required for machine configuration is not insignificant. Azure provides a VM option where you can create a VM preloaded with all the tools you need. It is available for Windows Server 2016, Ubuntu, and CentOS, and there is even a version built around Geo AI with ArcGIS. There is no additional charge for this: you pay for the underlying VM you are using, but Microsoft does not charge for the preinstalled data science tools.

I particularly love this diagram as it shows all the tools included:

Now again, this only scratches the surface, but I think it’s a powerful place to start. I have additional posts on this topic.

Musings on Ethical AI for Business and resources to help

When I was a kid, one of my favorite movies was Jurassic Park, because, well… dinosaurs. The movie was such a phenomenon that summer; there were shirts and toys everywhere. I even remember going to the community pool and seeing adults everywhere holding the book with the silver cover and the T-Rex skull on it.

It really was a movie ahead of its time, not just in terms of special effects or how it covers the topic of cloning, but in that it described a societal nexus we were all headed towards that many people didn’t quite see yet. One of my favorite moments in the movie is when Jeff Goldblum’s character, having just survived a T-Rex attack, delivers this line:

“Your scientists were so preoccupied with whether or not they could, they didn’t stop to think if they should.”

Technology has grown by leaps and bounds, to the point that many now argue Moore’s law is irrelevant and outdated. We are making advances in every major area of life, so much so that the world we grew up in is almost unrecognizable compared to our children’s. That makes this question all the more relevant today with regard to artificial intelligence.

Just to be clear, these are the thoughts of one developer / architect (me) on this subject. I would recommend you research this heavily and come to your own conclusions; these are my opinions and mine alone.

We have reached a period where businesses, and society in general, increasingly look to artificial intelligence as a potential solution to many problems, and the question of AI ethics has become ever more prevalent. But what does that actually mean, and how can an organization build AI solutions that benefit all of humanity rather than cause unintended problems and potentially harm members of society?

The first part of this comes down to recognizing that artificial intelligence solutions need to be fully baked, and that great care must be taken to mitigate built-in bias in both the training data and the end results of the service. Now, what do I mean by bias? I mean actively searching for potentially bad assumptions that might find their way into a model through its training dataset. Let’s take a hypothetical case that strikes close to home for me.

Say you wanted to build a system to identify patients at high risk for pneumonia, a hypothetical I discussed with a colleague a few months ago. Taking training data consisting of each patient’s conditions plus an indicator of whether or not they ended up getting pneumonia would seem like a logical way to tackle the problem.

But there is potential bias here: many asthmatics like myself tend to seek proactive treatment, because we are at high risk, and many doctors treat our colds very aggressively, mainly because pneumonia can be life threatening for us. If you don’t account for this, it can skew the results of any AI system, because you likely won’t see many asthmatics in your training data who actually got pneumonia.
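The asthma example can be sketched in a few lines of Python. The numbers here are invented purely to illustrate the mechanism; they are not real clinical data.

```python
# Hypothetical numbers, purely for illustration -- not real clinical data.
# Asthmatics have a higher TRUE risk of pneumonia, but because they seek
# aggressive early treatment, fewer of them show up in the recorded
# outcomes as having developed it.

true_risk = {"asthmatic": 0.30, "non_asthmatic": 0.10}

# Fraction of would-be pneumonia cases prevented by early treatment:
prevented = {"asthmatic": 0.80, "non_asthmatic": 0.20}

# What the training labels actually record:
observed_risk = {
    group: true_risk[group] * (1 - prevented[group])
    for group in true_risk
}

# observed_risk comes out to about 0.06 for asthmatics vs 0.08 for
# non-asthmatics -- so a model trained on these labels would rank
# asthmatics as LOWER risk, the opposite of the true situation.
print(observed_risk)
```

That inversion is exactly the kind of built-in bias that careful data gathering has to surface before the model ever ships.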

Another consideration could be location: if I take my data sample only from the Southwest, such as Arizona, the results may skew as well, because dry climates tend to be better for people with respiratory problems, who might show a lower risk of pneumonia.

My point is that how you gather data and create a training dataset requires a significant amount of thought and care to ensure success.

The other major problem is that every AI system is unique in the implications of a bad result. In the case above, a bad result is life threatening; for a Netflix-style recommendation engine, it means I miss a movie I might like. Those are very different impacts on people’s lives, and this cannot be ignored, as it really does figure into the overall equation.

So the question becomes: how do we ensure we are doing the right thing with AI solutions? The answer is to take the time to decide which values we, as an organization, will embrace at our core for these solutions. We need to make value-driven decisions about which implications concern us, and let those values guide our technology decisions.

For a long time, values have been one of the deciding factors between successful organizations and unsuccessful ones. The example that comes to mind is the Tylenol situation, when a batch of Tylenol had been tampered with. The board had a choice: pull all the Tylenol from market shelves for public safety and hurt their shareholders, or protect shareholders and deny the problem. The company’s values stated that customers must always come first, which made the decision clear, and it was absolutely the right one. I’m giving a seriously abridged version, but here’s a link to an article on the scare.

Microsoft actually released an AI School for Business to give customers a good starting point for figuring that out, with several tracks covering what should be considered in a variety of industries. Microsoft has also made its position on ethical AI very clear in a blog post by company president Brad Smith and in Our Approach: Microsoft AI.

Below are the links to some of the training courses on the subject:

Alongside this, there has been a lot of discussion from some of the biggest executives in the AI space, including Satya Nadella:

But one of the most interesting voices I’ve heard on the ethics and future of AI is Calum Chace, and I would encourage you to watch this, as it really goes into the depth of the challenges, and into how, if AI is not handled responsibly, we are looking at another major singularity in human evolution:

This is a complicated and multi-faceted topic that makes for great food for thought on a Friday. Empathy is one of the most important elements of any technology solution, as these solutions have greater and greater ramifications for society.