Making Artificial Intelligence More Reliable

MSU researchers are combining modern and classical approaches to AI to make this transformative technology more trustworthy

Artificial intelligence has entered the mainstream in a way the world has never experienced before. Millions of people are using tools such as ChatGPT and Stable Diffusion for AI-generated help with answering questions, creating images and accomplishing a host of other tasks.

But anyone who has used these systems has probably also noticed these tools have limitations. They can, for example, overlook a key component of a request or come up with something that’s not quite right.

“Every day, these AI models impress us, but we’re still not sure how trustworthy and reliable they are,” said Parisa Kordjamshidi, an assistant professor in the Department of Computer Science and Engineering at Michigan State University.

“Even when they provide the right answer, they might be right for the wrong reasons. We need to know what is their line of reasoning,” Kordjamshidi said. “That’s not very clear right now, and that’s the challenge.”

The Office of Naval Research has awarded Kordjamshidi and her colleagues a $1.8 million grant to make our interactions with AI more reasonable and reliable. This would bolster the confidence people have in using AI tools that are increasingly acting as digital assistants. But the team also has larger goals.

The researchers are working to help AI better process a range of inputs — text, images and video — to make human interactions with computer systems more powerful and seamless. The project could thus enable advances in a variety of applications, Kordjamshidi said, including education, navigation and multimodal question-answering systems in general.

This represents one of the major research thrusts for Kordjamshidi’s team. In fact, she won a 2019 National Science Foundation Faculty Early Career Development, or CAREER, Award and a 2021 Amazon Research Award on this front. Her team is working to help AI understand natural, everyday human language — rather than computer code — and put that understanding to work in following human instructions for navigating a realistic environment.

“We want to be able to connect this to the real world, to physical environments,” Kordjamshidi said. “Even if an AI system is 70% reliable, that wouldn’t be high enough for many serious real-world applications.”

While Kordjamshidi’s primary expertise is on the natural language component of AI, she’s teamed up with other AI innovators with complementary skills on this grant. At MSU, that includes Yu Kong and Vishnu Boddeti, both assistant professors in the College of Engineering. Dan Roth, a professor at the University of Pennsylvania, is also a co-investigator.

“This is a very collaborative project,” Kordjamshidi said. “We’re bringing together experts from various areas of learning and reasoning over vision and language data.”

Make new algorithms, but keep the old

The core of the team’s idea is to combine modern algorithms with an earlier approach to AI that rose to prominence in the 1980s. Now known as classical or symbolic AI, this approach encoded explicit forms of reasoning and logic directly into computer systems.


But symbolic AI systems could not scale up with real-world complexity and weren’t robust enough to handle noisy or incomplete information, Kordjamshidi said.

On the other hand, early neural models also struggled with scaling to real-world complexity because they lacked the requisite data and computational power. But those obstacles have become surmountable with the current availability of data and new technology capable of handling the necessary computations.

Today’s neural network algorithms, with billions of parameters, can be trained on massive data sets of text, images and videos. Researchers are still improving these models by feeding them more data and adding more parameters, which has enabled the success of platforms like ChatGPT.

“I don’t think we can put all of the world’s information into a system and have it reason and compose new concepts like a human,” Kordjamshidi said. “Adding more data into larger and larger models has been the approach — and it has been a game-changer — but if we want to get to the next level of AI, I think we need to combine paradigms.”

“Neural networks are very good at learning in a black-box way, but they’re not good at reasoning like humans,” Kong said.

These are images made by the DALL·E 2 AI system when given the prompt: “A woman with an apple in her left hand stands in front of a car.” Two of the four images fail to get at least one of the spatial instructions correct. By combining the algorithms that power modern tools like DALL·E 2 with “classical” AI, Michigan State University researchers are working to overcome such limitations to make AI systems more powerful and reliable. Credit: DALL·E 2

Neural networks do have some implicit reasoning capabilities, but those are essentially inferred from the data they see rather than conferred by human programmers, he said.

Incorporating symbolic AI would build in explicit reasoning, which would bolster AI’s ability to understand things like temporal and spatial relationships that humans handle almost innately.
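This kind of explicit reasoning can be pictured with a toy example (a hedged sketch with hypothetical names, not the team’s actual system): a rule such as the transitivity of “left of” lets a program derive spatial facts that were never stated directly, something a purely statistical model must instead absorb implicitly from data.

```python
# Toy symbolic spatial reasoner: repeatedly applies the transitivity
# rule for "left of" until no new facts can be inferred.
# Illustrative sketch only, not the researchers' system.

def infer_left_of(facts):
    """facts: set of (a, b) pairs meaning 'a is left of b'.
    Returns the set closed under transitivity."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for a, b in list(derived):
            for c, d in list(derived):
                # Transitivity: a left of b, b left of d => a left of d
                if b == c and (a, d) not in derived:
                    derived.add((a, d))
                    changed = True
    return derived

# Two stated facts...
facts = {("woman", "car"), ("car", "tree")}
all_facts = infer_left_of(facts)
# ...yield a third: the woman is also left of the tree.
```

The key property is that every derived fact can be traced back to stated facts and an explicit rule, which is exactly the kind of inspectable reasoning trail that black-box neural models lack.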

For instance, Kordjamshidi said, a current online AI image generator can have trouble with seemingly straightforward prompts like, “a woman with an apple in her left hand stands in front of a car.”

Output images often show the apple in the wrong hand or the woman in the wrong place relative to the car. The prompt’s instructions are easy for humans to interpret, but even gigantic neural networks lack robust spatial reasoning capabilities.

Combining the two AI paradigms into a neuro-symbolic formalism would yield algorithms that are far more likely to provide the desired results. And, if the systems do get something wrong, they would be able to provide a more detailed account of their line of reasoning.
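One simple way to picture such a pipeline (a minimal sketch with hypothetical names, not the grant team’s design) is to let a neural generator propose a scene and then have a symbolic checker verify the proposal against constraints parsed from the prompt, flagging exactly which instructions were violated.

```python
# Sketch: a symbolic checker validates a neural generator's proposed
# scene against spatial constraints from the prompt. All names here
# are hypothetical; real neuro-symbolic systems integrate the two
# paradigms far more deeply.

def check_scene(scene, constraints):
    """scene: dict mapping (subject, relation) -> object.
    constraints: list of required (subject, relation, object) triples.
    Returns the list of violated constraints; an empty list means the
    scene satisfies every spatial instruction."""
    violations = []
    for subj, rel, obj in constraints:
        if scene.get((subj, rel)) != obj:
            violations.append((subj, rel, obj))
    return violations

# Constraints parsed from "a woman with an apple in her left hand
# stands in front of a car":
constraints = [
    ("apple", "in_hand", "left"),
    ("woman", "in_front_of", "car"),
]

# A (faulty) neural proposal that puts the apple in the right hand:
proposed = {("apple", "in_hand"): "right", ("woman", "in_front_of"): "car"}
bad = check_scene(proposed, constraints)
```

Here the checker does not just reject the bad output; it names the violated constraint, which is the “more detailed account of their line of reasoning” the article describes.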

Researchers would thus get a better look inside the “black box” of modern AI and be able to develop more powerful, more trustworthy tools for today’s demands and future applications.

“Ultimately, we’ll have more control and more flexibility,” Boddeti said. “And we hope people will build on that and give these systems even more capabilities.”

The AI research community has already shown it has an appetite for these developments.

“People have tried to do this a few times in the past, but I think it wasn’t quite the right time and they didn’t have the right tools,” Boddeti said. “I think we’re in the right time now.”

Beyond the technical goals of the project, the team also is excited by the opportunities that the Office of Naval Research grant is creating for students and postdoctoral researchers.

“This research area is growing so fast. I have people emailing me every day saying they’re interested in this,” Kordjamshidi said. “There are so many excellent students, you wish you could work with all of them.”

This story was originally published by Michigan State University on Aug. 1, 2023.