A novel machine-learning system can quantify the degree of change between multiple data inputs, enabling more accurate anomaly detection that could have wide-ranging applications in fields as varied as biomedicine, autonomous driving and network security.
The invention, created by a team of Georgia Institute of Technology researchers, was described in a patent application, published March 11 by the World Intellectual Property Organization.
The team's machine learning platform, which it began developing in 2017, relies on gradients, or the degree of change between separate inputs, to help identify which data inputs belong and which do not. The system can, for instance, recognize that the difference between a clear and blurry image of a printed number is not as significant as the difference between a number and an animal.
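The patent application does not publish an implementation, but the core idea of scoring inputs by gradient magnitude can be sketched with a toy model: the more a trained model's weights would have to change to fit a new sample, the more anomalous that sample is. Everything below — the two-dimensional data, the one-component linear autoencoder, the thresholds — is an illustrative assumption, not the team's actual system.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "normal" data: points spread along the direction (1, 1), small noise.
t = rng.normal(0.0, 1.0, (500, 1))
normal = t * np.array([[1.0, 1.0]]) + rng.normal(0.0, 0.05, (500, 2))

# One-component linear autoencoder: reconstruct x as (x . w) * w.
w = rng.normal(0.0, 0.1, 2)

def loss_grad(x, w):
    """Gradient of the reconstruction loss ||(x . w) w - x||^2 w.r.t. w."""
    s = x @ w        # encode: project onto w
    e = s * w - x    # reconstruction error
    return 2.0 * ((e @ w) * x + s * e)

# Fit w to the normal data by plain gradient descent.
for _ in range(200):
    w -= 0.1 * np.mean([loss_grad(x, w) for x in normal], axis=0)

def anomaly_score(x, w):
    """How much the trained model would have to change to absorb x."""
    return float(np.linalg.norm(loss_grad(x, w)))

in_score = anomaly_score(np.array([1.0, 1.0]), w)    # fits the learned pattern
out_score = anomaly_score(np.array([2.0, -1.0]), w)  # does not
```

On this toy data the in-distribution sample produces a gradient norm near zero, while the off-pattern sample produces a far larger one — that separation is what a gradient-based detector thresholds on.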
Ghassan AlRegib, a professor of electrical and computer engineering at Georgia Tech who headed the project, said his team aimed to make a machine learning system that is more accessible to everyday professionals than previous iterations.

"Human-centric [artificial intelligence] machine learning is something that's very, very important," AlRegib told The Academic Times. It's vital that people "can interact with the model and understand why the model made a decision, and at the same time, they can give feedback to the model — trust, which is a really human thing."
Most anomaly detection platforms incorporate an introductory calibration step that provides the system with a range of acceptable values to use when evaluating future samples. For instance, if one provides a deep learning system with simple images of numbers during initial optimization, the system will learn to recognize numbers while rejecting abnormal images, such as pictures of animals or buildings.
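That calibration step can be sketched as a simple percentile cutoff: run known-good samples through the scorer once, record the scores, and flag any future input that scores above what calibration ever produced. The score distribution and the 99th-percentile cutoff below are invented for illustration; the patent does not specify them.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical anomaly scores from a one-time pass over a trusted
# calibration set (e.g. clean images of printed digits).
calibration_scores = rng.normal(1.0, 0.2, 1000)

# Accept anything that scores below the 99th percentile of calibration.
threshold = float(np.percentile(calibration_scores, 99))

def is_anomalous(score: float) -> bool:
    """Flag inputs whose score exceeds the calibrated cutoff."""
    return score > threshold

print(is_anomalous(1.0))   # a typical input is accepted
print(is_anomalous(3.0))   # a far outlier is rejected
```

The sketch also makes the fragility concrete: the cutoff is only as trustworthy as the calibration data used to set it.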
But if the data used in the introductory test contains errors, the entire detection system could become compromised and ineffective, creating devastating real-world problems. In a biomedical setting, for instance, artificial intelligence software may inadvertently skip over a troubling symptom of an illness during a routine screening, leaving a patient without the care she requires. Alternatively, from a national security perspective, a hacker could insert seemingly compatible data into a network, compromising the integrity of future inputs.
Deep learning platforms also have trouble making decisions about data entries they are unsure about, taking a guess on whichever option they find most likely. In an autonomous driving context, however, these "either/or" scenarios could force a self-driving car to make a dangerous decision instead of defaulting to a backup safety protocol.
Accordingly, the Georgia Tech team has programmed its platform to be able to recognize when it does not know where to sort a particular data input, in which case it can pause its detection process and wait for human intervention. It also has a more sensitive threshold to identify and weed out data that may originate from an adversarial source, like a digital virus.
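That "pause and wait for human intervention" behavior amounts to a three-way decision rather than a forced binary one: confident scores pass or fail outright, and borderline scores are handed off instead of guessed at. A minimal sketch, with invented thresholds (the patent does not publish specific cutoffs):

```python
from enum import Enum

class Decision(Enum):
    ACCEPT = "accept"   # score looks normal
    DEFER = "defer"     # uncertain: pause and wait for human review
    REJECT = "reject"   # likely anomalous or adversarial

def triage(score: float, low: float = 1.5, high: float = 4.0) -> Decision:
    """Route an anomaly score into accept / defer / reject bands."""
    if score < low:
        return Decision.ACCEPT
    if score > high:
        return Decision.REJECT
    return Decision.DEFER

print(triage(0.8).value)   # accept
print(triage(2.5).value)   # defer: neither clearly normal nor clearly bad
print(triage(9.0).value)   # reject
```

In an autonomous-driving setting, the DEFER branch is where a backup safety protocol would take over instead of the model committing to its best guess.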
The team tested its invention with images of traffic signs under various conditions: time of day, weather, and problems with exposure and lens flare. It found that the platform could detect road signs with a high degree of accuracy, even when there were significant image distortions. The technology could one day be implemented to improve the accuracy of autonomous cars' cameras.
The platform has also been used to read optical coherence tomography scans in order to detect fluid buildup in the human retina — an early warning sign of some eye diseases — that could eventually save certain patients from blindness if detected early enough, AlRegib said. There are geophysics applications for the researchers' system, too, in which an AI platform might be able to scan the surface of the earth or other planets to locate points of interest.
AlRegib said that better detection systems can help people who are worried about some of the errors that are common in current AI and machine learning platforms.
"What I'm passionate about is building that trust between AI and us as humans — whether as users or as medical doctors or as geophysicists or autonomous vehicle designers or city planners," AlRegib said. "Hopefully, in the next four or five years, as a community, we can really make sure that we have that trust in place."
The application for the patent, "Detecting and classifying anomalies in artificial intelligence systems," was filed Sept. 4, 2020, with the World Intellectual Property Organization. It was published March 11, 2021, with the application number PCT/US2020/049331. The earliest priority date was Sept. 4, 2019. The inventors of the pending patent are Ghassan AlRegib, Gukyeong Kwon, Mohit Prabhushankar and Dogancan Temel, all of the Georgia Institute of Technology. The assignee is Georgia Tech Research Corporation.
Parola Analytics provided technical research for this story.