Security Today, July/August 2023

the capability to feel and register experiences and emotions. Self-Aware AI is a higher level of AI than Theory of Mind and, as such, is currently only theorized. Today, most AI models require rules-based input to create the desired output, placing them in the category of Limited Memory AI. Yes, it will reduce time and increase accuracy; however, this is a long way from sentient AI.

AI Training

All AI combines two key steps: training and inferencing. All AI must first be trained. Depending on the model, this could involve simple computer vision or complex deep-learning neural networks. Typically, the difference shows in the level of complexity: identification versus classification of items. Training accuracy is just like training anything else: the accuracy of the data used and the amount of time spent training the system reflect directly on how well the AI will ultimately work. Bad data in equals bad data out, and AI trained for an insufficient amount of time will have a higher error rate, requiring added input once deployed.

What happens when there is not enough training data to make a functional model? This is where companies, and not just start-ups, find themselves, not only when starting out but as they retrain their models over time for accuracy. The more data points, the more accurate the AI. To meet this requirement, these companies may look to open-source data models or purchase data sources to train their models against.

Once an AI is trained, the next step is inference. AI model inferencing is the process of using a trained model to make predictions on new data.
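The training-then-inference split described above can be illustrated with a minimal sketch. The nearest-centroid "model," the feature vectors, and the labels here are all hypothetical stand-ins, not any vendor's actual pipeline: training summarizes labeled data into a model, and inference applies that model to data it has never seen.

```python
def train(samples):
    """Training step: summarize labeled data into a model (here, per-class centroids)."""
    sums, counts = {}, {}
    for features, label in samples:
        if label not in sums:
            sums[label] = [0.0] * len(features)
            counts[label] = 0
        for i, value in enumerate(features):
            sums[label][i] += value
        counts[label] += 1
    return {label: [s / counts[label] for s in sums[label]] for label in sums}

def infer(model, features):
    """Inference step: predict the label of new, unseen data with the trained model."""
    def dist(centroid):
        return sum((a - b) ** 2 for a, b in zip(features, centroid))
    return min(model, key=lambda label: dist(model[label]))

# Training phase: labeled examples (feature vectors are made up for illustration).
model = train([
    ([1.0, 1.0], "car"),
    ([1.2, 0.9], "car"),
    ([5.0, 5.1], "bicycle"),
    ([4.8, 5.3], "bicycle"),
])

# Inference phase: predictions on data the model was never trained on.
print(infer(model, [1.1, 1.0]))  # → car
print(infer(model, [5.2, 5.0]))  # → bicycle
```

More training samples pull the centroids closer to the true class averages, which is one simple way to see why "the more data points, the more accurate the AI."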
Take video analysis AI as an example: the AI takes what it has been trained to do, applies logical rules to analyze the scene, and then decides, based on the trained data, what the analyzed scene should look like. If the training data shows cars, trucks, or bicycles, the inference will identify cars, trucks, or bicycles, or will not identify an item because it does not fit the identification. The same concept works with AI trained for sounds, biometric data, and even computer logic.

AI ranges from rules-based logic, where virtual boundaries are set to guide the AI inference and alert a human to any anomalies, to deep learning at the other end of the spectrum. Unchecked, these anomalies can become learned behavior, requiring an understanding of the scene and diligence by the human to correct the action. A deep-learning AI learns the scene and makes its own calculations, with gradual human interaction for correction, learning a scene and repeatedly getting more accurate.

The Error Rate Question

AI models carry a measure of bias, as real-world implementations are never the same as lab scenarios. This has led to implementations with unacceptable error rates post-deployment, forcing costly fixes or replacements. The error rate of AI has come into question with biometrics, license plate
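The rules-based end of the spectrum can be sketched as follows. The rectangular "virtual boundary," the coordinates, and the detection labels are illustrative assumptions: the rule simply flags any detection inside the boundary so a human can review the anomaly before it goes uncorrected.

```python
def inside(boundary, point):
    """Check whether a detected point falls inside a rectangular virtual boundary."""
    (x1, y1), (x2, y2) = boundary
    x, y = point
    return x1 <= x <= x2 and y1 <= y <= y2

def check_detections(boundary, detections):
    """Return the detections that violate the boundary, for human review."""
    return [(label, point) for label, point in detections if inside(boundary, point)]

# Hypothetical restricted zone and detections from a video-analysis inference.
restricted_zone = ((10, 10), (50, 50))
detections = [("car", (30, 20)), ("bicycle", (80, 90))]

alerts = check_detections(restricted_zone, detections)
print(alerts)  # → [('car', (30, 20))]
```

The fixed rule never adapts on its own, which is why the article stresses human diligence here: every flagged anomaly has to be confirmed or dismissed by an operator, unlike the deep-learning case where corrections gradually reshape the model itself.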

































































































