
Figure 5 The Neural Net Captures Tide Cycles
Even better, Figure 5 shows the model and ground-truth predictions going forward 100 timesteps, which for the three-hour timesteps translates into 12.5 days. While the amplitudes are off, the timing and general waxing and waning of spring and neap tides are clearly captured by the model. (It’s typical that the water level doesn’t average to zero over such a short time period.)
The trained model can be saved in the preferred Keras HDF5 format, then reloaded as necessary to predict tides in Contoso Harbor given recent or historical tide-gauge readings.
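A minimal sketch of that save-and-reload round trip (the file name matches the conversion code later in this article; the window length and input shape are assumptions that must match whatever the LSTM was trained on):

import numpy as np
from keras.models import load_model

# `model` is the trained LSTM from the earlier training code;
# this writes it out in Keras' HDF5 format:
#   model.save('keras_model_lstm.hdf5')

# Later, or in another process, restore it for inference:
model = load_model('keras_model_lstm.hdf5')

# Placeholder input: one window of recent gauge readings, shaped
# (samples, timesteps, features). The window length of 100 is an
# assumption; use the sequence length from training.
recent_readings = np.zeros((1, 100, 1))
predicted_tide_ft = model.predict(recent_readings)
print(predicted_tide_ft)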
Conversion and Deployment to Mobile Devices
While many ML scenarios can use a cloud-based Web service for the calculation, there are several reasons why on-device inferencing might be preferable.
First and foremost is performance. While mobile devices pale in horsepower compared to the GPUs on Azure N-Series machines, inference is vastly less costly than training. The latest iPhones have the A11 Bionic chip, with hardware dedicated to neural net operations, and the Pixel 2’s Pixel Visual Core points the way to similar accelerated capabilities on Android.
While my experience is that on-device inference with typical hardware can take upward of a second with large models, good async programming practices can lead to apps with excellent responsiveness. See, for instance, the CoreMLAzureModel and CoreMLVision samples (bit.ly/2zqQBsO and bit.ly/2AslxJc), both of which perform inference on video streams. The Stopwatch class can be invaluable for understanding the computational cost of your on-device inferencing.
Second, data volumes can be significant. In scenarios that involve continuous inferencing, audio and image data (much less video streams) can chew up bandwidth quickly.
Finally, there will always be the possibility that users just plain don’t have Internet access at the moment.
On-device inferencing was introduced with the CoreML framework in iOS 11 and macOS High Sierra, while Android developers can use TensorFlow Android Inference (TAI). Going forward, Google’s just-announced Neural Networks API (bit.ly/2Aq2fnN) will likely be preferred over this library.
Whether targeting CoreML or TAI, you have to convert the Keras HDF5 file to a compatible format. Conversion to CoreML is simple:
import coremltools

# Convert to CoreML
coreml_model = coremltools.converters.keras.convert(
  "keras_model_lstm.hdf5", ["readings"], ["predicted_tide_ft"])
coreml_model.author = 'Larry O\'Brien'
coreml_model.license = 'MIT'
coreml_model.save('LSTM_TidePrediction.mlmodel')
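On a Mac, coremltools can also load and run the resulting .mlmodel directly, which makes for a handy smoke test before wiring the model into an app. This is a sketch; the input shape and window length here are assumptions that must match the Keras model’s training input:

import numpy as np
import coremltools

# Load the converted model and run one prediction as a sanity check.
# (coremltools can only execute models on macOS.)
mlmodel = coremltools.models.MLModel('LSTM_TidePrediction.mlmodel')

# Placeholder window of recent gauge readings; the length (100) is
# an assumption carried over from the training sketch above.
window = np.zeros((100, 1))
print(mlmodel.predict({'readings': window}))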
Figure 6 Extracting and Saving Underlying TensorFlow Data from a Keras Model
# Derived from code by Amir H. Abdi released under the MIT Public License
# https://github.com/amir-abdi/keras_to_tensorflow/blob/master/keras_to_tensorflow.ipynb

input_fld = '.'
weight_file = 'keras_model_lstm.hdf5'
num_output = 1
write_graph_def_ascii_flag = True
prefix_output_node_names_of_final_network = 'output_node'
output_graph_name = 'TF_LSTM_Inference.pb'

from keras.models import load_model
import tensorflow as tf
import os
import os.path as osp
from keras import backend as K

output_fld = input_fld
if not os.path.isdir(output_fld):
  os.mkdir(output_fld)
weight_file_path = osp.join(input_fld, weight_file)

# Freeze the learning phase so layers such as dropout behave as
# they should at inference time
K.set_learning_phase(0)
net_model = load_model(weight_file_path)

# Give each output tensor a predictable name for later lookup
pred = [None]*num_output
pred_node_names = [None]*num_output
for i in range(num_output):
  pred_node_names[i] = prefix_output_node_names_of_final_network+str(i)
  pred[i] = tf.identity(net_model.output[i], name=pred_node_names[i])
print('output nodes names are: ', pred_node_names)

sess = K.get_session()

if write_graph_def_ascii_flag:
  f = 'only_the_graph_def.pb.ascii'
  tf.train.write_graph(sess.graph.as_graph_def(), output_fld, f, as_text=True)
  print('saved the graph definition in ascii format at: ',
    osp.join(output_fld, f))

# Convert variables to constants and write the frozen graph
from tensorflow.python.framework import graph_util
from tensorflow.python.framework import graph_io
constant_graph = graph_util.convert_variables_to_constants(
  sess, sess.graph.as_graph_def(), pred_node_names)
graph_io.write_graph(constant_graph, output_fld, output_graph_name,
  as_text=False)
print('saved the constant graph (ready for inference) at: ', osp.join(
  output_fld, output_graph_name))
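As a quick check that the freeze worked (a sketch, assuming the TensorFlow 1.x API current as of this writing), you can reload the frozen graph and confirm the output node named above is present before shipping the file to Android:

import tensorflow as tf

# Reload the frozen graph written by the Figure 6 code (TF 1.x API)
graph_def = tf.GraphDef()
with open('TF_LSTM_Inference.pb', 'rb') as f:
  graph_def.ParseFromString(f.read())

# Import into a fresh graph and list the node(s) named 'output_node...'
tf.import_graph_def(graph_def, name='')
print([n.name for n in graph_def.node
  if n.name.startswith('output_node')])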