
recognition, access control, and more. Oversold capabilities and underperforming deployments have produced unacceptable error rates, falling short of expectations, breaching the trust between customer and analytics, or otherwise creating problems when the AI failed to work as sold. These are some of the reasons many analytic software packages have struggled to gain adoption in the market.

The Legal Question

As AI continues to grow more capable, the legality of AI will also come into question. This is most visible in areas such as biometrics, but it reflects on the security industry and on the ethics of AI as a whole. The questions are not only about the ethics of AI but also about data privacy: who holds the data, where the data is held, who has access to it, how it will be used, and whether the data held is a protectable interest. These questions are just the beginning of the concerns about data privacy and the ethics of AI models.

Currently, there is no approved data set or standard for approved AI models that regulates bias in training data. There are a few specific testing activities; in the United States, for example, the National Institute of Standards and Technology (NIST) runs an ongoing facial biometric test of algorithm accuracy against a stationary face. Again, biometrics has drawn some of the most controversial publicity, but it points to a much larger conversation, as two different AI models and two different training sets will produce different accuracy, variance and acceptable thresholds for error.

It should come as no surprise that AI and its use are being considered for regulation. Such a framework is currently under consideration by the European Union (EU), which has already enacted the General Data Protection Regulation, known to most as GDPR. The EU's Artificial Intelligence Act follows a risk-based approach, in which legal intervention scales with the level of risk, and it aims to explicitly ban harmful AI practices. The framework for the Act was originally introduced in April 2021, and in May 2023 the initial draft of the mandate was approved.

Once approved, these will be the world's first AI rules, with specific bans on biometric surveillance, emotion recognition and predictive policing AI systems. The draft calls for specific governance of AI models such as ChatGPT, as well as transparency to ensure "AI systems are overseen by people, are safe, transparent, traceable, non-discriminatory and environmentally friendly."

The draft calls for a uniform definition of AI, designed to be technology-neutral so that it can apply to the AI systems of today and tomorrow. Governance will be administered through the European Union AI Office, and while the Act is aimed at protecting members of the EU, the regulation is expected to have far-reaching stipulations that will affect AI globally and in every industry.

Brian Leary is the vice president of product and operations at Accuate.

































































































