The CoreML code relies on the coremltools package written by Apple, whose source code is available under the 3-Clause BSD License at bit.ly/2m1gn3E. CoreML works with a large number of ML models, including non-neural network models such as Support Vector Machines, tree ensembles, and linear and logistic regression models. (See the table at Xamarin's CoreML API documentation homepage, bit.ly/2htyUkc.)
Because this model was trained using TensorFlow, I can extract the underlying TensorFlow computational graph and save the weights, as shown in Figure 6. This code is derived from the work of Amir Abdi (bit.ly/2ApB5ND).
Writing the Apps
With a well-performing model in hand and converted to device formats, it's time to develop the app. Although the ML inferencing specifics differ between iOS and Android, naturally I'll have a single source for UI code via Xamarin.Forms.
The complete solution (available at bit.ly/2j7xcsM) contains four projects: the Xamarin.Forms shared-code project that defines the UX, its iOS and Android implementations, and an Android binding project for the TAI library.
Although TensorFlow is associated with Google, shipping versions of Android don't have built-in support for ML models. Instead, this project uses the easy-to-use TensorFlowInferenceInterface class, which is defined in the Org.Tensorflow.Contrib.Android namespace and distributed in an 11MB shared library. Projects under TensorFlow's Contrib directory aren't officially supported, but this project appears to have an active community of committers and I suspect it will continue into the future.
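To give a sense of the API, here's a minimal C# sketch of driving the inference interface from the Xamarin binding. The graph file name, the node names and the tensor shape are all hypothetical placeholders; the real values depend on how the model was frozen:

using Org.Tensorflow.Contrib.Android;

// Inside an Android Activity: load the frozen graph from /Assets.
// "tide_model.pb" is a placeholder for the actual protobuf file name.
var inference = new TensorFlowInferenceInterface(Assets, "tide_model.pb");

// Feed a window of normalized readings into the input node. The node
// names ("input", "output") and the shape are assumptions; use the
// names baked in when the Keras model was frozen.
var window = new float[100];
inference.Feed("input", window, 1, window.Length, 1);

// Run the graph and fetch the single predicted value.
inference.Run(new[] { "output" });
var prediction = new float[1];
inference.Fetch("output", prediction);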
Although binding an Android or iOS native library sometimes has complexities, in this case the binding project is trivial. It's literally just a matter of putting the Java files in the appropriate place in the /Jars subdirectory and letting the Xamarin infrastructure take care of the rest.
Figure 7 Xamarin.Forms Under iOS (left) and Android (right), Using CoreML and TensorFlow Android Inference
The Android project's /Assets directory should contain a copy of the TensorFlow protobuf weight file generated by Keras2Tensorflow.py. To use the .mlmodel file produced by Keras2CoreML, though, an additional step is required. While the .mlmodel file is good for sharing, actual runtime use requires it to be compiled into a different format.
On a Macintosh that has Xcode installed, you convert a .mlmodel using xcrun coremlc compile LSTM_TidePrediction.mlmodel LSTM_TidePrediction. The result is a directory called LSTM_TidePrediction.mlmodelc that contains several model-defining files. This folder should be moved to the iOS project's Resources folder. (The source-code distribution has already performed all the steps necessary, and the TAI library and the TensorFlow and CoreML models are all in their proper locations.)
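On the iOS side, a minimal sketch of loading the compiled model and requesting a prediction through Xamarin's CoreML binding might look like the following; the "input"/"output" feature names and the input shape are assumptions, not values taken from the actual model:

using CoreML;
using Foundation;

// Locate the compiled .mlmodelc directory in the app bundle.
var modelUrl = NSBundle.MainBundle.GetUrlForResource(
  "LSTM_TidePrediction", "mlmodelc");
var model = MLModel.Create(modelUrl, out var error);

// Pack the input window into an MLMultiArray; the shape here is a
// hypothetical stand-in for whatever the .mlmodel declares.
var inputArray = new MLMultiArray(new NSNumber[] { 1, 100, 1 },
  MLMultiArrayDataType.Double, out error);
// ... copy the normalized readings into inputArray ...

// Wrap the array in a feature provider and run the prediction.
var features = new MLDictionaryFeatureProvider(
  new NSDictionary<NSString, NSObject>(new NSString("input"), inputArray),
  out error);
var result = model.GetPrediction(features, out error);
var prediction = result.GetFeatureValue("output").MultiArrayValue;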
With the project structure in place, let's briefly discuss the UI. This isn't my strong suit, as Figure 7 amply demonstrates. The app consists of a scrolling graph of predictions (using Aloïs Deniel's Microcharts package, available on NuGet or at bit.ly/2zrIoGu), three buttons that allow you to load and predict tides based on any of three datasets, and the prediction values themselves.
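As a rough sketch of the charting code, predicted heights can be mapped to Microcharts entries along these lines; the predictions array and chartView control are hypothetical stand-ins for the app's actual members:

using System.Linq;
using Microcharts;

// Convert an array of predicted heights into chart entries. (In more
// recent Microcharts releases the Entry type is named ChartEntry.)
var entries = predictions
  .Select(height => new Entry(height) { ValueLabel = height.ToString("F1") })
  .ToArray();

// Assign to a Microcharts.Forms ChartView declared in the shared UI.
chartView.Chart = new LineChart { Entries = entries };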
You'll notice that while the general shape of the overall prediction is consistent, the fine-grained predictions often disagree by as much as a few inches. There are several possible reasons for this. The output of a deep neural net is the product of thousands of floating-point multiplications and sums; differences in low-level representations could easily accumulate to significant levels. Another possibility is differences between CoreML and TensorFlow in the implementation of the LSTM or feed-forward layers.
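One way to guard against such drift is to compare the on-device outputs against predictions recorded at training time and assert a tolerance. A minimal sketch, where the 0.1-foot tolerance is an arbitrary illustration rather than a figure from the article:

using System;

// Compare on-device predictions against values recorded at training
// time; returns false if any element drifts past the tolerance.
static bool PredictionsMatch(float[] device, float[] training,
  float tolerance = 0.1f)
{
  var maxDelta = 0f;
  for (var i = 0; i < device.Length; i++)
    maxDelta = Math.Max(maxDelta, Math.Abs(device[i] - training[i]));
  return maxDelta <= tolerance;
}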
8 Key Considerations
Use these eight steps to deliver an AI/ML solution on mobile devices:
1. Choose an ML library compatible with the device OS or systems you're targeting.
2. Develop a data-wrangling training pipeline that allows you to rapidly explore your data and iterate your model.
3. Consider using cloud-based resources for final training.
4. Convert your model to device format.
5. Convert your on-device data to the form expected by the model.
6. Treat the on-device inferencing as a "black box" function call.
7. Validate that the on-device inferencing matches your training results.
8. Consider implementing the ability to download a new model, but be aware of the size implications.