How to outsmart artificial intelligence: human versus machine.
AI computer systems are finding their way into many areas of our lives and offer great potential, from self-driving vehicles to assisting doctors with diagnoses to autonomous search and rescue robots.
However, one of the major unsolved problems, especially in the branch of AI known as "neural networks", is that scientists often cannot explain why things go wrong: the decision-making process inside these systems is poorly understood. This is known as the "black box" problem.
Who is smarter?
A new 15-month research project at Lancaster University, in which the University of Liverpool is also involved, aims to unlock the secrets of the black box problem and find a new way to make the decisions of "deep learning" AI models transparent and explainable.
The project, "Towards responsible and explainable autonomous robotic learning systems", will develop a series of safety verification and testing procedures for the development of artificial intelligence algorithms. These will help ensure that the decisions made by such systems are robust and explainable.
The researchers will use a technique called "reverse training". The system is first presented with a scenario in which it learns to perform an action, e.g. detecting and lifting an object. The researchers then change various elements of the scenario, such as colour, shape, or environment, and observe how the system learns through trial and error. They believe these observations can lead to a better understanding of how the system learns and grant insights into its decision-making process.
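The idea of varying one scenario element at a time and watching how the system's behaviour changes can be sketched in a few lines. The following is a minimal, hypothetical illustration only: the function names, the toy "learned" behaviour, and the scenario attributes are all invented for the example, since the project's actual experimental setup is not described in the article.

```python
def simulate_pickup(colour, shape, environment):
    """Stand-in for a robot's learned behaviour (hypothetical): this toy
    policy has learned to pick up cubes that are red or blue, and does
    not depend on the environment at all."""
    return shape == "cube" and colour in ("red", "blue")


def probe_scenario_elements(baseline, variations):
    """Perturb one scenario element at a time and record how often the
    outcome flips. Elements with a high flip rate are ones the learned
    behaviour actually depends on; a rate of 0.0 suggests the element
    is ignored."""
    base_result = simulate_pickup(**baseline)
    sensitivity = {}
    for element, values in variations.items():
        flips = 0
        for value in values:
            scenario = dict(baseline, **{element: value})
            if simulate_pickup(**scenario) != base_result:
                flips += 1
        sensitivity[element] = flips / len(values)
    return sensitivity


baseline = {"colour": "red", "shape": "cube", "environment": "table"}
variations = {
    "colour": ["blue", "green"],
    "shape": ["sphere", "cone"],
    "environment": ["floor", "shelf"],
}
print(probe_scenario_elements(baseline, variations))
# → {'colour': 0.5, 'shape': 1.0, 'environment': 0.0}
```

Here the probe reveals that the toy policy is fully sensitive to shape, partially sensitive to colour, and indifferent to the environment, which is exactly the kind of insight into the decision-making process the paragraph above describes.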
By developing ways to build neural-network systems whose decisions can be understood and predicted, the research will be key to deploying autonomous systems in safety-critical areas such as vehicles and industrial robots.
Dr. Wenjie Ruan, professor at Lancaster University's School of Computing and Communications and lead researcher on the project, said: "Although deep learning, as one of the most remarkable artificial intelligence techniques, has been hugely successful in many applications, it has its own problems when used in safety-critical systems, including opaque decision-making mechanisms and vulnerability to adversarial attacks. This project is an excellent opportunity for us to close the research gap between deep learning techniques and safety-critical systems."
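The "adversarial attacks" mentioned in the quote can be illustrated with a small, self-contained sketch. This is a generic textbook example (the fast gradient sign method applied to a toy logistic-regression classifier), not code from the Lancaster project; the weights and input values are invented for the demonstration.

```python
import numpy as np


def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))


# A tiny "trained" logistic-regression model (weights chosen for the demo).
w = np.array([1.0, -2.0, 0.5])
b = 0.1


def predict(x):
    """Probability that input x belongs to class 1."""
    return sigmoid(np.dot(w, x) + b)


x = np.array([0.4, -0.2, 0.3])  # a clean input
clean = predict(x)              # confidently above 0.5 (class 1)

# Fast gradient sign method: nudge each input feature by a small amount
# in the direction that most decreases the class-1 score. For logistic
# regression, that direction is simply -sign(w).
epsilon = 0.4
x_adv = x - epsilon * np.sign(w)
adversarial = predict(x_adv)    # pushed below 0.5: the decision flips

print(f"clean: {clean:.3f}, adversarial: {adversarial:.3f}")
```

The perturbation is small and structured, yet it flips the classifier's decision, which is why opaque models in safety-critical systems are considered vulnerable to such attacks.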