If the model makes an incorrect prediction on the current training data item, the numWrong variable is incremented; otherwise, the numCorrect variable is incremented:
if ((di == -1 && y >= 0.0) || (di == 1 && y <= 0.0)) ++numWrong;
else ++numCorrect;
After all data items under investigation have been examined, the proportion of correct predictions is returned:
return (1.0 * numCorrect) / (numCorrect + numWrong);
The Accuracy method has a verbose parameter which, if true, causes diagnostic information to be displayed:
if (verbose == true) {
  Console.Write("Input: ");
  for (int j = 0; j < data[i].Length - 1; ++j)
    Console.Write(data[i][j].ToString("F1") + " "); // Etc.
}
Using the code from the Accuracy method, you could write a dedicated Predict method that accepts an array of x-values (without a known correct class), the training data, the wrong-counter array, and the RBF kernel sigma value, and returns a +1 or -1 prediction.
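For example, a sketch of such a Predict method, under the same assumptions as the Accuracy sketch above (and reusing its Kernel helper), might look like:

static int Predict(double[] x, double[][] trainData,
  int[] wrongCounts, double sigma)
{
  double y = 0.0;
  for (int i = 0; i < trainData.Length; ++i)
  {
    if (wrongCounts[i] == 0) continue; // item was never misclassified
    int di = (int)trainData[i][trainData[i].Length - 1]; // -1 or +1
    y += wrongCounts[i] * di * Kernel(trainData[i], x, x.Length, sigma);
  }
  return (y > 0.0) ? 1 : -1;
}

Here the incoming x array holds only feature values, so x.Length gives the number of x-values to compare against each training row.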
Wrapping Up
Kernel perceptrons aren’t used very often. This is due in large part to the fact that there are more powerful binary classification techniques available, and that there is a lot of mystery surrounding ML kernel methods in general. If you do an Internet search for kernel perceptrons, you’ll find many references that show the beautiful mathematical relationships between ordinary perceptrons and kernel perceptrons, but very little practical implementation information. In my opinion, the primary value of understanding kernel perceptrons is that the knowledge makes it easier to understand more sophisticated ML kernel methods (which I’ll present in a future column).
One of the weaknesses of kernel perceptrons is that to make a prediction, the entire training data set (except for items that have a wrong counter value of 0) must be examined. If the training set is huge, this could make real-time predictions infeasible in some scenarios.
Kernel perceptrons are arguably the simplest type of kernel method. The kernel trick can be applied to other linear classifiers. In fact, applying the kernel trick to a maximum margin linear classifier is the basis for support vector machine (SVM) classifiers, which became popular in the late 1990s.
Dr. James McCaffrey works for Microsoft Research in Redmond, Wash. He has worked on several Microsoft products including Internet Explorer and Bing. Dr. McCaffrey can be reached at jammc@microsoft.com.
Thanks to the following Microsoft technical experts who reviewed this article: Ani Anirudh and Chris Lee