Aug 01, 2023
A Dose of AI, from Security Today
About the Author: Brian Leary is the vice president of product and operations at Accuate.
Examining what type of AI exists today and what type we can expect moving forward.
What is Artificial Intelligence? We’ve all heard the term, but what does it mean? For some it evokes imagery of a cinematic world’s end; for others, it is the Easy Button come to reality. Artificial Intelligence was originally defined by Stanford University Professor John McCarthy as the science and engineering of making intelligent machines.
The security industry finds itself at varying points on a wide scale of adoption. Artificial Intelligence, or AI as it is commonly called, is not all the same. AI is found in almost every aspect of the security industry, from the physical devices to the software platforms that run them. AI is an umbrella term covering different techniques and training methods, including computer vision (CV), machine learning (ML), neural networks (NN) and deep learning (DL). These methods of training AI all have different outcomes. A key point here is that not all AI is the same.
There are macro-category stages of AI, and micro types within each stage.
The Stages: There are three macro-category stages of AI.
Artificial Narrow Intelligence (ANI). Artificial narrow intelligence, also known as “Weak AI,” is AI that can perform a narrow, singularly focused set of specific tasks. It does not think for itself; it responds to a set of pre-defined training. A good example of this technology is AI-enabled object classification or identification analysis.
Artificial General Intelligence (AGI). Artificial general intelligence, also known as “Strong AI,” is AI that can think for itself and make decisions based on artificial thought, removing the need for a human to confirm its alerts. ChatGPT and similar AI-enabled bots are the closest AI has come to AGI to date.
Artificial Super Intelligence (ASI). Artificial super intelligence is computer intelligence surpassing human intelligence, so far seen only in cinematic magic: The Terminator, The Matrix, Avengers: Age of Ultron and I, Robot, to name just a few. The world may not be facing an AI-driven apocalypse; however, AI has seen more innovation in the last 10 years than in the 50 years before.
AI continues to innovate, solving existing problems and ones the world has not yet thought of. The models the security industry has built, and continues to build, provide real-time and forensic solutions that protect people and assets while making life easier and better for users and their customers. But what type of AI exists today, and what type can we expect moving forward? The type of AI programmed into each of these categories will drive the next innovative step.
Types of AI
There are four types of AI that can be found within the three stages above.
Reactive Machine AI. Reactive Machine AI is just that: a reaction based on the present data provided, where the AI supplies a logical response as output. A famous example of this was IBM’s Deep Blue chess computer, which beat Garry Kasparov.
Limited Memory AI. Limited Memory AI is where the AI starts to make informed decisions based on its training. The spectrum of decision-making can range from basic tasks to improving on the last positive response it produced for a task.
Theory of Mind AI. Theory of Mind AI is where emotional intelligence will be brought into the scenario. This level of AI has not been developed yet.
Self-Aware AI. Self-Aware AI is where AI reaches sentience, with the capability to feel and to register experiences. Self-Aware AI is a higher level of AI than Theory of Mind and, as such, is currently only theorized.
Today, most AI models require rules-based input to create the desired output, placing them in the category of Limited Memory AI. Yes, such AI will reduce time and increase accuracy; however, this is a long way from sentient AI.
AI Training
All AI combines two key steps: training and inferencing. All AI must first be trained.
Depending on the model, this could mean simple computer vision or complex deep-learning neural networks; the difference typically shows in the level of complexity, such as identification versus classification of items. Training accuracy works like training anything else: the accuracy of the data used and the amount of time spent training the system reflect directly on how well the AI will ultimately work. Bad data in equals bad data out, and AI trained for an insufficient amount of time will have a higher error rate, requiring added input once deployed.
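To make the “bad data in equals bad data out” point concrete, here is a minimal sketch, assuming the scikit-learn library is available, that trains the same simple classifier on progressively noisier labels. The data is synthetic and the exact numbers will vary, but test accuracy falls as the training labels degrade.

```python
# A minimal sketch: label noise in training data ("bad data in")
# degrades a classifier's accuracy on clean test data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for noise in (0.0, 0.2, 0.4):  # fraction of training labels flipped
    rng = np.random.default_rng(0)
    y_noisy = y_train.copy()
    flip = rng.random(len(y_noisy)) < noise
    y_noisy[flip] = 1 - y_noisy[flip]  # corrupt a share of the labels
    model = LogisticRegression(max_iter=1000).fit(X_train, y_noisy)
    print(f"label noise {noise:.0%}: test accuracy {model.score(X_test, y_test):.3f}")
```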
What happens when there is not enough training data to make a functional model? This is where companies, and not just start-ups, can find themselves, not only when starting out but also as they retrain their models over time for accuracy. The more data points, the more accurate the AI. To meet this requirement, these companies may look to open-source datasets or purchase data sources to train their models against.
Once the AI is trained, the next step is inference. AI model inferencing is the process of using a trained model to make predictions on new data. Take video-analysis AI as an example: the AI takes what it has been trained to do, applies logical rules to analyze the scene, and then decides, based on its training, what the analyzed scene should look like. If the model was trained to recognize cars, trucks or bicycles, the inference will identify cars, trucks or bicycles, or will not identify an item because it does not fit any trained identification. The same concept works with AI trained on sounds, biometric data and even computer logic.
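As an illustration of that decision logic, here is a hedged sketch in which a stand-in for a trained model scores each class it knows, and anything below a confidence threshold is left unidentified. The `score_object` function and its scores are entirely hypothetical.

```python
# Sketch of the inference step: score known classes, ignore weak matches.
CLASSES = ("car", "truck", "bicycle")

def score_object(features):
    # Hypothetical stand-in for a trained model's per-class confidence
    # scores; a real system would run a neural network or similar here.
    return {"car": 0.12, "truck": 0.81, "bicycle": 0.05}

def infer(features, threshold=0.5):
    scores = score_object(features)
    best = max(CLASSES, key=scores.get)
    if scores[best] < threshold:
        return None  # the item does not fit any trained identification
    return best

print(infer(features=[...]))  # -> "truck" with the placeholder scores
```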
AI spans a spectrum. At one end is rules-based logic, where virtual boundaries are set to guide the AI’s inference and a human is alerted to any anomalies. These anomalies, left unchecked, can become learned behavior, requiring an understanding of the scene and diligence by the human to correct the action. At the other end of the spectrum is deep-learning AI that learns the scene and makes its own calculations, with gradual human interaction for correction, getting more accurate as it repeatedly learns the scene.
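A minimal sketch of the rules-based end of that spectrum: a “virtual boundary” is simply a region of the scene, and any detection that falls inside it raises an alert for a human to review. The zone coordinates and detections below are made-up examples.

```python
# Rules-based virtual boundary: alert a human whenever a detected object
# falls inside a restricted region. All coordinates are hypothetical.
from dataclasses import dataclass

@dataclass
class Zone:
    x1: float
    y1: float
    x2: float
    y2: float

    def contains(self, x: float, y: float) -> bool:
        return self.x1 <= x <= self.x2 and self.y1 <= y <= self.y2

restricted = Zone(100, 50, 300, 200)  # a made-up boundary in pixel space
detections = [("person", 150, 120), ("car", 400, 90)]

for label, x, y in detections:
    if restricted.contains(x, y):
        print(f"ALERT: {label} inside virtual boundary at ({x}, {y})")
```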
The Error Rate Question
Every AI model carries a measure of bias, as real-world implementations are never the same as lab scenarios. This has led to implementations with unacceptable error rates post-deployment, forcing costly fixes or replacements. The error rate of AI has come into question with biometrics, license-plate recognition, access control and more.
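For reference, an analytic’s error rate is typically quantified as false positive and false negative rates against ground truth. A minimal sketch, using made-up labels rather than data from any real deployment:

```python
# Quantifying an error rate: count false positives and false negatives
# against ground truth. The labels below are illustrative only.
truth     = [1, 0, 1, 1, 0, 0, 1, 0]  # 1 = event actually present
predicted = [1, 0, 0, 1, 1, 0, 1, 0]  # what the analytic reported

fp = sum(p == 1 and t == 0 for p, t in zip(predicted, truth))
fn = sum(p == 0 and t == 1 for p, t in zip(predicted, truth))

print(f"false positive rate: {fp / truth.count(0):.0%}")  # 1/4 -> 25%
print(f"false negative rate: {fn / truth.count(1):.0%}")  # 1/4 -> 25%
```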
Overselling and underperforming solutions have produced implementations with unacceptable error rates that fall short of expectations, breaching the trust between customer and analytic(s) or creating other problems when the AI fails to work as sold. These are some of the reasons many analytic software packages have had difficulty gaining adoption in the market.
The Legal Question
As AI continues to get more intuitive, the legality of AI will also come into question. This is seen most in topics such as biometrics, but it reflects on the security industry, as well as the ethics of AI as a whole. The questions are not just about the ethics of AI but also about data privacy: who holds the data, where the data is held, who has access to the data, how the data will be used, and whether the data held is a protectable interest. These questions are just the beginning of concerns about data privacy and the ethics of AI models.
Currently, there is no approved dataset or standard of approved AI models that would regulate bias in training data. There are a few specific testing activities; in the United States, for example, the National Institute of Standards and Technology (NIST) has an ongoing facial-biometric test of algorithm accuracy against a stationary face. Again, biometrics has seen some of the most controversial publicity, but it points to a much larger conversation: two different AI models with two different training sets will output different accuracy, variance and acceptable thresholds for error.
It should come as no surprise that AI and its use are being considered for regulation. Such a framework is currently being considered by the European Union (EU), which has already enacted the General Data Protection Regulation, known to most as GDPR.
The EU’s Artificial Intelligence Act follows a risk-based approach, scaling legal intervention to the level of risk, and aims to explicitly ban harmful AI practices. The framework for the Act was originally introduced in April 2021, and in May 2023 the initial draft of the mandate was approved.
Once approved, the rules will be the world’s first on AI, with specific bans on biometric surveillance, emotion recognition and predictive-policing AI systems. The draft calls for specific governance of AI models such as ChatGPT. It also calls for transparency to ensure “AI systems are overseen by people, are safe, transparent, traceable, non-discriminatory and environmentally friendly.”
The draft calls for a uniform definition of AI designed to be technology-neutral, so that it can apply to the AI systems of today and tomorrow. Governance will be administered through the European Union AI Office, and while the regulation is aimed at protecting the members of the EU, it is expected to have far-reaching stipulations that will affect AI globally and in every industry.