Page 52 - MSDN Magazine, February 2018
First, I need to create a class deriving from the ParserFunction class and override its protected virtual Evaluate method, as shown in Figure 5. This class is just a wrapper over the actual text-to-speech implementation. For iOS, the text-to-speech implementation is shown in Figure 6. The Android implementation is similar, but it takes a bit more coding. You can see it in the accompanying source code download.
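Figure 5 itself is not reproduced in this excerpt. As a rough, hypothetical sketch of what such a wrapper could look like (assuming the same Utils and Constants argument helpers that appear in Figure 7, and the TTS class from Figure 6), consider:

```csharp
// Hypothetical sketch only; the article's actual Figure 5 may differ.
// Assumes the Utils/Constants helpers and the m_name member used in Figure 7.
public class SpeakFunction : ParserFunction {
  protected override Variable Evaluate(ParsingScript script) {
    bool isList = false;
    List<Variable> args = Utils.GetArgs(script,
      Constants.START_ARG, Constants.END_ARG, out isList);
    Utils.CheckArgs(args.Count, 1, m_name);

    string phrase = args[0].AsString();
    // An optional second argument could override the default voice.
    TTS.Voice = Utils.GetSafeString(args, 1, TTS.Voice);

    TTS.Init();
    TTS.Speak(phrase);
    return Variable.EmptyInstance;
  }
}
```

The wrapper's only job is to pull the phrase (and, possibly, a voice name) out of the script arguments and forward them to the platform text-to-speech code.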
Once I have an implementation, I need to plug it in to the parser. This is done in the shared project in the CommonFunctions.RegisterFunctions static method (also shown in Figure 3):
ParserFunction.RegisterFunction("Speak", new SpeakFunction());
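Once registered, the function becomes callable from a script by the name given in the registration. A hypothetical one-line script example (the article's actual script samples appear in the earlier figures):

```
Speak("Hello, world!");
```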
Voice Recognition
For voice recognition I need to use a callback function in order to tell the user what word was actually recognized (or to report an error, as in Figure 2).
Figure 6 iOS Text-to-Speech Implementation (Fragment)
I’m going to implement two functions for voice recognition— one to start voice recognition and another to cancel it. These two functions are registered with the parser just as I registered text-to- speech in the previous section:
ParserFunction.RegisterFunction("VoiceRecognition", new VoiceFunction());
ParserFunction.RegisterFunction("StopVoiceRecognition", new StopVoiceFunction());
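On the script side, VoiceRecognition takes the name of a callback function; as the Figure 7 code below shows, that callback is invoked with an error string as its first argument and the recognized text as its second. A hypothetical script fragment (the article's real script syntax is shown in Figure 2) might look like this:

```
// Hypothetical sketch: echo the recognized word back to the user.
function on_voice(error, recognized) {
  if (error == "") {
    Speak(recognized);
  }
}
VoiceRecognition("on_voice");
```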
The implementation of these two functions for iOS is shown in Figure 7. For Android the implementation is similar, but note that voice recognition was added to iOS only in version 10.0, so I must check the device version and, if necessary, inform the user that the device doesn’t support it in iOS versions prior to 10.0.
The actual voice recognition code is in the STT class. It's a bit too long to show here, and it also differs between iOS and Android. I invite you to check it out in the accompanying source code.
Figure 7 Voice Recognition Implementation
public class VoiceFunction : ParserFunction {
  static STT m_speech = null;
  public static STT LastRecording { get { return m_speech; } }

  protected override Variable Evaluate(ParsingScript script) {
    bool isList = false;
    List<Variable> args = Utils.GetArgs(script,
      Constants.START_ARG, Constants.END_ARG, out isList);
    Utils.CheckArgs(args.Count, 1, m_name);

    string strAction = args[0].AsString();
    STT.Voice = Utils.GetSafeString(args, 1, STT.Voice).Replace('_', '-');

    bool speechEnabled = UIDevice.CurrentDevice.CheckSystemVersion(10, 0);
    if (!speechEnabled) {
      UIVariable.GetAction(strAction, "\"" +
        string.Format("Speech recognition requires iOS 10.0 or higher. You have iOS {0}",
                      UIDevice.CurrentDevice.SystemVersion) + "\"", "");
      return Variable.EmptyInstance;
    }
    if (!STT.Init()) {
      // The user didn't authorize accessing the microphone.
      return Variable.EmptyInstance;
    }

    UIViewController controller = AppDelegate.GetCurrentController();
    m_speech = new STT(controller);
    m_speech.SpeechError += (errorStr) => {
      Console.WriteLine(errorStr);
      controller.InvokeOnMainThread(() => {
        UIVariable.GetAction(strAction, "\"" + errorStr + "\"", "");
      });
    };
    m_speech.SpeechOK += (recognized) => {
      Console.WriteLine("Recognized: " + recognized);
      controller.InvokeOnMainThread(() => {
        UIVariable.GetAction(strAction, "", "\"" + recognized + "\"");
      });
    };

    m_speech.StartRecording(STT.Voice);
    return Variable.EmptyInstance;
  }
}

public class StopVoiceFunction : ParserFunction {
  protected override Variable Evaluate(ParsingScript script) {
    VoiceFunction.LastRecording?.StopRecording();
    script.MoveForwardIf(Constants.END_ARG);
    return Variable.EmptyInstance;
  }
}
using AVFoundation;

namespace scripting.iOS {
  public class TTS {
    static AVSpeechSynthesizer g_synthesizer = new AVSpeechSynthesizer();

    static public float  SpeechRate      { set; get; } = 0.5f;
    static public float  Volume          { set; get; } = 0.7f;
    static public float  PitchMultiplier { set; get; } = 1.0f;
    static public string Voice           { set; get; } = "en-US";

    static bool m_initDone;

    public static void Init() {
      if (m_initDone) {
        return;
      }
      m_initDone = true;
      // Set the audio session category, then it will speak
      // even if the mute switch is on.
      AVAudioSession.SharedInstance().Init();
      AVAudioSession.SharedInstance().SetCategory(AVAudioSessionCategory.Playback,
        AVAudioSessionCategoryOptions.DefaultToSpeaker);
    }

    public static void Speak(string text) {
      if (g_synthesizer.Speaking) {
        g_synthesizer.StopSpeaking(AVSpeechBoundary.Immediate);
      }
      var speechUtterance = new AVSpeechUtterance(text) {
        Rate = SpeechRate * AVSpeechUtterance.MaximumSpeechRate,
        Voice = AVSpeechSynthesisVoice.FromLanguage(Voice),
        Volume = Volume,
        PitchMultiplier = PitchMultiplier
      };
      g_synthesizer.SpeakUtterance(speechUtterance);
    }
  }
}