Page 48 - MSDN Magazine, July 2017

Figure 8 Request Body of the Machine Learning Service
Figure 9 Response Body of the Machine Learning Service
After the Scoring step, you can now connect the scored dataset to the Evaluate Model module (bit.ly/1SL05By) to generate a set of metrics used for evaluating the model’s accuracy (performance). Consider the Evaluation step as your QA process: You want to make sure that the predicted values are as accurate as possible by reducing the amount of error. A model is considered to fit the data well if the difference between observed and predicted values is small. The Evaluate Model module is available in the Machine Learning | Evaluate section.
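To make the idea of evaluation metrics concrete, here is a minimal sketch (not part of the original article) of two error measures of the kind the Evaluate Model module reports for regression, mean absolute error and root mean squared error, computed over observed versus predicted cache-hit counts; the sample values are illustrative only:

```csharp
using System;
using System.Linq;

class EvaluationMetricsDemo
{
    // Mean Absolute Error: the average magnitude of the prediction errors.
    static double Mae(double[] observed, double[] predicted) =>
        observed.Zip(predicted, (o, p) => Math.Abs(o - p)).Average();

    // Root Mean Squared Error: like MAE, but penalizes large errors more heavily.
    static double Rmse(double[] observed, double[] predicted) =>
        Math.Sqrt(observed.Zip(predicted, (o, p) => (o - p) * (o - p)).Average());

    static void Main()
    {
        double[] observed  = { 120, 95, 300, 41 };  // actual cache hits (illustrative)
        double[] predicted = { 110, 100, 280, 50 }; // model output (illustrative)

        Console.WriteLine($"MAE:  {Mae(observed, predicted)}");
        Console.WriteLine($"RMSE: {Rmse(observed, predicted)}");
    }
}
```

The smaller both numbers are, the better the model fits the data; comparing them also hints at whether a few large misses dominate the error.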
Service
A prediction of cache hits would be pointless if you couldn’t access this information and use it to optimize the pre-allocation of objects in cache. Access to the outcome of the predictive experiment is via a Web service that Azure Machine Learning generates and hosts at a public URL. It’s a REST endpoint that accepts a POST request, with an authorization bearer in the header and a JSON input message in the body.
The authorization bearer is a key that authorizes a client application to consume the service. The request body contains the input parameters to pass to the service, as specified in the predictive experiment. The format looks like that shown in Figure 8.
The service’s response is a JSON message containing the scored label, as shown in Figure 9.
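To pull the scored labels out of that response, the JSON can be parsed with a library such as Json.NET; the snippet below is a sketch (not from the original article) that walks the response shape shown in Figure 9:

```csharp
using System;
using Newtonsoft.Json.Linq;

class ParseScoredLabels
{
    static void Main()
    {
        // A response body in the shape shown in Figure 9.
        string json = @"{
          ""Results"": {
            ""outputData"": {
              ""type"": ""DataTable"",
              ""value"": {
                ""ColumnNames"": [ ""Scored Labels"" ],
                ""ColumnTypes"": [ ""Numeric"" ],
                ""Values"": [ [ ""0"" ], [ ""0"" ] ]
              }
            }
          }
        }";

        JObject response = JObject.Parse(json);

        // Each inner array in Values is one scored row; the first (and only)
        // column holds the scored label.
        foreach (JToken row in response["Results"]["outputData"]["value"]["Values"])
        {
            Console.WriteLine($"Scored label: {row[0]}");
        }
    }
}
```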
Using HttpClient to establish an HTTP connection to the service, it’s trivial to access the Web service and read the predicted outcome:
• Input parameters are passed as a collection of strings.
• The API key is assigned as a bearer value in the request’s header.
• The message is sent to the endpoint as a POST in JSON format.
• The response is read as a string, again in JSON format.
Figure 10 shows the code for consuming the Machine Learning Service in the Microsoft .NET Framework.
The full source code is available at bit.ly/2qz4gtm.
Wrapping Up
Observing cache hits for objects over several weeks generated a dataset that could be used in a machine learning project to identify access patterns and predict future demand. By exposing a Web service that can be consumed by an external integration workflow (running on Azure Logic Apps, for example), it’s possible to obtain predictions of demand for specific objects and pre-allocate them in Redis cache before they’re requested, in order to minimize the miss ratio. The observed improvement was nearly 20 percentage points in hit ratio, going from about 60 percent to almost 80 percent in the L2 cache. This helped size the L2 cache accordingly and, by using the regional syncing capability of Azure Redis Cache, minimized sync time between distributed nodes by a similar proportion (20 percent shorter duration).
Stefano Tempesta is a Microsoft MVP and technology ambassador, as well as a chapter leader of CRMUG for Italy and Switzerland. A regular speaker at international conferences, including Microsoft Ignite, NDC, API World and Developer Week, Tempesta’s interests span across cloud, mobile and Internet of Things. He can be reached via his personal Web site at tempesta.space.
Thanks to the following Microsoft technical expert for reviewing this article: James McCaffrey
{
  "Inputs": {
    "inputData": {
      "ColumnNames": [
        ...
      ],
      "Values": [
        [ ... ],
        [ ... ]
      ]
    }
  },
  "GlobalParameters": {}
}

{
  "Results": {
    "outputData": {
      "type": "DataTable",
      "value": {
        "ColumnNames": [ "Scored Labels" ],
        "ColumnTypes": [ "Numeric" ],
        "Values": [
          [ "0" ],
          [ "0" ]
        ]
      }
    }
  }
}
These algorithms can incorporate input from multiple features by determining the contribution of each feature of the data to the regression function. Once the regression algorithm has trained a function based on already labeled data, the function can be used to predict the label of a new (unlabeled) instance. More information on how to choose algorithms for Azure Machine Learning is available at bit.ly/2gsO6PE.
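As a loose illustration of that idea (a sketch, not the actual algorithm Azure Machine Learning trains), a linear regression function combines features through learned per-feature weights; the weights, bias and feature values below are hypothetical:

```csharp
using System;
using System.Linq;

class LinearRegressionSketch
{
    static void Main()
    {
        // Hypothetical weights learned from labeled data; each weight is
        // the contribution of one feature to the regression function.
        double[] weights = { 0.8, 1.5, -0.3 };
        double bias = 2.0;

        // A new (unlabeled) instance described by the same three features.
        double[] features = { 1.0, 2.0, 4.0 };

        // Predicted label: the weighted sum of the features plus the bias.
        double predicted = weights.Zip(features, (w, x) => w * x).Sum() + bias;

        Console.WriteLine($"Predicted label: {predicted}"); // 0.8 + 3.0 - 1.2 + 2.0 = 4.6
    }
}
```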
Scoring and Evaluation
“Scoring” is the process of applying a trained model to new data to generate predictions and other values. The Score Model module (bit.ly/1lpX2Ed), available in the Machine Learning | Score | Score Model section, will predict the number of cache hits according to the selected features, as shown in Figure 7.
Figure 10 Consuming the Machine Learning Service in the Microsoft .NET Framework
using (var client = new HttpClient())
{
  var scoreRequest = new
  {
    Inputs = new Dictionary<string, StringTable>()
    {
      {
        "inputData",
        new StringTable()
        {
          ColumnNames = new string[] { "Date", "Object", "Hits" },
          Values = new string[,] { { "YYYY/MM/DD", "GUID", "#" } }
        }
      },
    },
    GlobalParameters = new Dictionary<string, string>() { }
  };

  client.DefaultRequestHeaders.Authorization =
    new AuthenticationHeaderValue("Bearer", apiKey);
  client.BaseAddress = new Uri(serviceEndpoint);

  HttpResponseMessage response =
    await client.PostAsJsonAsync(string.Empty, scoreRequest);

  if (response.IsSuccessStatusCode)
  {
    string result = await response.Content.ReadAsStringAsync();
  }
}