Page 23 - MSDN Magazine, October 2017

emotion and video detection; facial, speech and vision recognition; and speech and language understanding—to your applications. Microsoft’s vision is for more personal computing experiences and enhanced productivity aided by systems that increasingly can see, hear, speak, understand and even begin to reason.

These services expose models that have been trained on millions of sample images. In a video on Channel 9, Microsoft’s Anna Roth briefly explains the process of training the algorithms with a wide variety of sample data (bit.ly/2x7u1D4). The models exposed by the many Cognitive Services APIs were trained over years and millions of data points by dozens of engineers and researchers. That’s why they’re so good at what they do. When your app or Web site requires a solution that one of the Cognitive Services APIs can resolve, use them. The list of Microsoft Cognitive Services offerings continues to grow. For an updated list, go to bit.ly/2vGWcuN.

However, when your data has a more limited scope inside a specific domain, you’ll have to create your own models from your own data. While that might sound intimidating or impractical, the process is quite straightforward with another cloud service provided by Microsoft: Azure ML Studio.

Azure ML Studio
Azure ML Studio is an online service that makes ML and building predictive models approachable and straightforward. For the most part, there’s no code involved. Users drag around various modules representing actions and algorithms in an interface resembling Visio. For maximum flexibility and extensibility, there are modules for inserting R and Python code for cases where the built-in models don’t suffice or for using existing code.

Getting Started Open a browser and head over to studio.azureml.net. If you’ve never used Azure ML Studio before, click on the Sign Up button. You may choose either the Guest Workspace or Free Workspace option in the dialog that follows (see Figure 2). For the purposes of this article, I recommend using the free workspace, as you’ll have the chance to save your projects and expose your models via Web services.

Figure 2 Pricing Tiers of Azure Machine Learning Studio

If you already have a Microsoft account, click on Sign In under the Free Workspace option. If this is the first time you’ve logged into Azure ML Studio, you’ll see an empty list of experiments. As ML is considered a subset of data science, the term “experiment” is used here.

Creating an Experiment The best way to examine the power of Azure ML Studio is to start with a sample experiment. Fortunately, there are a number of pre-built samples provided by Microsoft. First, click on the New button on the lower-left corner of the browser window.

In the resulting dialog, type flight into the textbox. The screen should look similar to Figure 3. Clicking on the View in Gallery link will bring up a page detailing information about the experiment (bit.ly/2i9Q61i). Move the mouse over the tile and click the Open in Studio button to open the experiment and start working on it.

Figure 3 The Flight Delay Prediction Sample Experiment

This experiment runs what’s known as a Binary Classification, which means the ML algorithm will place each record in the dataset into one of two categories: in this case, whether or not the flight will be delayed.

Once the experiment loads, your screen will look something like Figure 4.

While this might look intimidating at first, what’s going on is actually quite simple. Zoom in using the scroll wheel on the mouse or the zoom control on the lower left of the workspace canvas.

Navigating the Workspace Canvas Azure ML Studio has built-in navigation controls to explore and manipulate the view of the workspace canvas. The navigation controls, from left to right, are: a Mini Map of the canvas, a zoom slider control, zoom to actual
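The R and Python modules mentioned earlier follow a fixed entry-point convention. As a rough sketch of what goes inside such a module (based on the Execute Python Script module in Azure ML Studio; the "CRSDepTime" column and the evening-flight feature are hypothetical examples, not part of the sample experiment):

```python
import pandas as pd

# Entry point expected by the Execute Python Script module:
# dataframe1 and dataframe2 map to the module's two input ports,
# and the function must return a tuple containing a DataFrame.
def azureml_main(dataframe1=None, dataframe2=None):
    # Hypothetical derived feature: flag departures at or after
    # 6 PM (times encoded as HHMM integers) as evening flights.
    dataframe1["IsEvening"] = dataframe1["CRSDepTime"] >= 1800
    return dataframe1,

# Quick smoke test outside of Studio
if __name__ == "__main__":
    df = pd.DataFrame({"CRSDepTime": [900, 1930]})
    out, = azureml_main(df)
    print(out["IsEvening"].tolist())  # [False, True]
```

The same code runs unchanged locally and inside the module, which makes it easy to develop and debug transformations before dropping them onto the canvas.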
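Binary classification itself isn’t specific to Azure ML Studio. As a minimal sketch of the idea behind the Flight Delay experiment (synthetic data and scikit-learn here, not the experiment’s actual dataset or algorithm), a two-category model might look like this:

```python
# Minimal binary-classification sketch with scikit-learn.
# Features and labels are synthetic, purely to illustrate
# placing each record into one of two categories.
from sklearn.linear_model import LogisticRegression

# Each row: [scheduled departure hour, flight distance in miles]
X = [[7, 300], [8, 450], [17, 300], [18, 900], [19, 1200], [6, 200]]
y = [0, 0, 1, 1, 1, 0]  # 0 = on time, 1 = delayed

model = LogisticRegression()
model.fit(X, y)

# The model assigns a new flight to exactly one of the two classes.
print(model.predict([[20, 1000]]))  # -> [1] (delayed)
```

In the Studio experiment, the same train-then-score step is performed visually by wiring a dataset module through a training module into a scoring module, with no code required.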