
nodes. A third possibility is that the differences are introduced during the conversion of the Keras model to CoreML. In any case, the differences highlight the importance of validating the model’s behavior on the device.
Device-Based Inferencing
Compared to the complexity of developing the model, on-device inferencing is quite simple and consistent between projects:
Load the model into the library’s inferencing class. Once on the device, the model is essentially a black box, with little support for reading or modifying internal state. The inferencing functions can be expressed as an interface in the Xamarin.Forms base project:
public interface ITidePredictor {
  /// <summary>
  /// From 200 input sea levels, in feet, taken every 3 hours,
  /// predict 100 new sea levels (next 300 hours)
  /// </summary>
  /// <returns>100 levels, in feet, representing the predicted tide
  /// at the final input time + i * 3 hours</returns>
  /// <param name="seaLevelInputs">200 water levels, measured in feet,
  /// taken every 3 hours</param>
  float[] Predict(float[] seaLevelInputs);
}
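The platform projects each provide an implementation of this interface, and the shared code programs only against the abstraction. Exactly how the concrete predictor reaches the shared project is a wiring detail; a minimal sketch, assuming simple constructor injection (the TidePredictionService and PredictNext300Hours names are illustrative, not part of the sample), might look like this:

// Shared Xamarin.Forms project: depends only on ITidePredictor
public class TidePredictionService {
  readonly ITidePredictor predictor;

  // The platform project passes in its concrete predictor, for example
  // new TensorflowInferencePredictor(Assets) on Android or
  // new CoreMLTidePredictor() on iOS
  public TidePredictionService(ITidePredictor predictor) {
    this.predictor = predictor;
  }

  // 200 readings at 3-hour intervals in, 100 predicted levels out
  public float[] PredictNext300Hours(float[] last200SeaLevels) {
    return predictor.Predict(last200SeaLevels);
  }
}

Constructor injection keeps the Android AssetManager requirement out of the shared code; the Xamarin.Forms DependencyService would also work for the parameterless iOS implementation, which it can instantiate directly.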
The native Android class for inferencing is Org.Tensorflow.Contrib.Android.TensorFlowInferenceInterface and the iOS class is CoreML.MLModel. Both are loaded in their device project’s constructor, as shown in Figure 8 and Figure 9.
In both cases, the model is loaded from a file, but both CoreML and TAI support loading a model from a Web-based URL. While a Web-based model is obviously easier to update, the tradeoff is that many ML models are extremely large. The tide prediction model is only a few hundred kilobytes in size, but image-recognition models often weigh in at hundreds of megabytes.
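For example, if the tide model were hosted on a server, the iOS app could download the .mlmodel file and compile it on the device before loading it. The following is only a sketch under that assumption (the URL is hypothetical, and the sample itself ships the model in the app bundle); MLModel.CompileModel produces the .mlmodelc bundle that MLModel.Create expects:

using System;
using System.IO;
using System.Net.Http;
using System.Threading.Tasks;
using CoreML;
using Foundation;

public static class RemoteModelLoader {
  // Hypothetical location of the converted Keras model
  const string ModelUrl = "https://example.com/LSTM_TidePrediction.mlmodel";

  public static async Task<MLModel> LoadAsync() {
    // Download the raw .mlmodel file to a temporary location
    var tempPath = Path.Combine(Path.GetTempPath(), "LSTM_TidePrediction.mlmodel");
    using (var http = new HttpClient()) {
      var bytes = await http.GetByteArrayAsync(ModelUrl);
      File.WriteAllBytes(tempPath, bytes);
    }

    // CoreML compiles the model into an .mlmodelc bundle before it can be loaded
    NSError err;
    var compiledUrl = MLModel.CompileModel(NSUrl.FromFilename(tempPath), out err);
    if (err != null)
      throw new Exception(err.ToString());

    var model = MLModel.Create(compiledUrl, out err);
    if (err != null)
      throw new Exception(err.ToString());
    return model;
  }
}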
Configure input data. Because ML libraries have their own internal datatypes and structures, almost all interact with calling programs using dictionaries that map strings to input and output data. Although the names can be set in Keras, conversion to CoreML and TensorFlow has a tendency to mangle them. In the case of Android, the input sea levels are associated with the string “lstm_1_input” and the output predictions with “output_node0.” Configuring input is easy in Android, as conversion isn’t necessary. As you can see in the call to Feed in Figure 8, the input array is passed, followed by inputSeaLevels.Length, 1, 1. This encodes the shape of the input data: 200 rows, each containing 1 feature defined by 1 value.
CoreML input and output are more complex. While TAI takes and returns managed arrays, CoreML works with MLFeatureValue datatypes defined in the CoreML namespace and presumably tuned to Apple hardware. The inputs to the model are defined in the TideInput class, shown in Figure 10. Note that TideInput is defined as implementing the IMLFeatureProvider interface. The MLModel object knows the names and types of its expected inputs, and uses the IMLFeatureProvider interface to retrieve that data. The FeatureNames property must mimic the set of expected variable names, and the GetFeatureValue method must provide the data for the relevant string.
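Reduced to a single input, an IMLFeatureProvider implementation looks roughly like this sketch (the feature name here is assumed; the sample’s TideInput class in Figure 10 supplies several such features):

using CoreML;
using Foundation;

// Illustrative sketch, not the sample's TideInput: a provider for one feature
public class SingleFeatureProvider : NSObject, IMLFeatureProvider {
  const string FeatureName = "lstm_1_input"; // assumed input name
  readonly MLFeatureValue value;

  public SingleFeatureProvider(MLFeatureValue value) {
    this.value = value;
  }

  // The names the MLModel is allowed to ask for
  public NSSet<NSString> FeatureNames =>
    new NSSet<NSString>(new NSString(FeatureName));

  // Called by the MLModel once per input it needs
  public MLFeatureValue GetFeatureValue(string featureName) =>
    featureName == FeatureName ? value : null;
}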
When converting the Keras tide-prediction model to CoreML, the converter told us that the model takes as input 3 MLMultiArray objects. The TideInput class needs to initialize those objects. The first is the expected readings input with its [200, 1, 1] shape:
var ma = new MLMultiArray(new nint[] { INPUT_SIZE, 1, 1 },
  MLMultiArrayDataType.Double, out mlErr);
for (int i = 0; i < INPUT_SIZE; i++) {
  ma[i] = tideInputData[i];
}
readings = MLFeatureValue.Create(ma);
Figure 9 The iOS Inferencing Class
public class CoreMLTidePredictor : NSObject, ITidePredictor {
  public event EventHandler<EventArgsT<String>> ErrorOccurred = delegate { };

  MLModel model;

  const int OUTPUT_SIZE = 100;
  const string OUTPUT_FIELD_NAME = "predicted_tide_ft";

  public CoreMLTidePredictor() {
    // Load the ML model
    var bundle = NSBundle.MainBundle;
    var assetPath = bundle.GetUrlForResource("LSTM_TidePrediction", "mlmodelc");
    NSError mlErr;
    model = MLModel.Create(assetPath, out mlErr);
    if (mlErr != null) {
      ErrorOccurred(this, new EventArgsT<string>(mlErr.ToString()));
    }
  }

  public float[] Predict(float[] seaLevelInputs) {
    var inputs = new TideInput(seaLevelInputs);
    NSError mlErr;
    var prediction = model.GetPrediction(inputs, out mlErr);
    if (mlErr != null) {
      ErrorOccurred(this, new EventArgsT<string>(mlErr.ToString()));
    }

    var predictionMultiArray = prediction.GetFeatureValue(OUTPUT_FIELD_NAME).MultiArrayValue;
    var predictedLevels = new float[OUTPUT_SIZE];
    for (int i = 0; i < OUTPUT_SIZE; i++) {
      predictedLevels[i] = predictionMultiArray[i].FloatValue;
    }
    return predictedLevels;
  }
}
Figure 8 The Android Inferencing Class

public class TensorflowInferencePredictor : ITidePredictor {
  const string MODEL_FILE_URL = "file:///android_asset/TF_LSTM_Inference.pb";
  const string INPUT_ARGUMENT_NAME = "lstm_1_input";
  const string OUTPUT_VARIABLE_NAME = "output_node0";
  const int OUTPUT_SIZE = 100;

  TensorFlowInferenceInterface inferenceInterface;

  public TensorflowInferencePredictor(AssetManager assetManager) {
    inferenceInterface = new TensorFlowInferenceInterface(assetManager, MODEL_FILE_URL);
  }

  public float[] Predict(float[] inputSeaLevels) {
    inferenceInterface.Feed(INPUT_ARGUMENT_NAME, inputSeaLevels,
      inputSeaLevels.Length, 1, 1);
    inferenceInterface.Run(new string[] { OUTPUT_VARIABLE_NAME });
    float[] predictions = new float[OUTPUT_SIZE];
    inferenceInterface.Fetch(OUTPUT_VARIABLE_NAME, predictions);
    return predictions;
  }
}
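On Android, the predictor’s constructor needs an AssetManager so TensorFlowInferenceInterface can locate the .pb file under Assets. A sketch of that wiring in the Android project’s MainActivity follows; the App constructor that accepts the predictor is hypothetical, not from the sample:

// In the Android head project; MainActivity derives from
// Xamarin.Forms.Platform.Android.FormsAppCompatActivity
protected override void OnCreate(Bundle savedInstanceState) {
  base.OnCreate(savedInstanceState);
  Xamarin.Forms.Forms.Init(this, savedInstanceState);

  // Assets is the activity's AssetManager; TF_LSTM_Inference.pb lives in Assets/
  ITidePredictor predictor = new TensorflowInferencePredictor(Assets);

  // Hypothetical App constructor that takes the platform predictor
  LoadApplication(new App(predictor));
}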