Just what is AI?
This was one of the first questions posed by Missy Cummings, director of George Mason University’s Autonomy and Robotics Center, at the Future of AI roundtable in June.
“At the end of the day, AI can be a tool to help you in your job, but you need to understand both its strengths and weaknesses,” Cummings said. “That’s why Mason will be offering a certificate and master’s degree focused on responsible AI, so that people across industry and government can learn how to manage the risks while promoting the benefits of AI.”
The invite-only roundtable, hosted by the College of Engineering and Computing, explored the issues, challenges, and potential solutions surrounding AI technologies, which evolve and change almost as soon as they are introduced.
Cummings led the roundtable and highlighted AI technologies, such as ChatGPT and autonomous driving systems, that lack human reasoning skills. When human behavior, the environment, and AI blind spots intersect, the resulting uncertainty contributes to fundamental limitations.
“As humans, we do a great job of successfully filling in the blanks when there is imperfect information,” said Cummings. “If I quickly flash a picture of a stop sign, your brain automatically recognizes what it is, even if it’s not a clear image. But it’s arguable whether the vision system on a self-driving car can do the same.”
She added that automation can fall apart at a critical threshold, such as when a car is supposed to stop at a stop sign that is visible to the human eye but may be overlooked by an AI vision system.
“As humans, we have much more experience driving cars,” she said. “Self-driving cars are still a new technology and while simulation testing can help identify problems, real-world testing for such non-deterministic systems that never reason the same way twice is critical.”
One of her favorite uses of ChatGPT is as a grammar and spell checker for her students’ papers, but that’s where she draws the line.
“ChatGPT cannot reason under uncertainty. It does not think. It does not know. It can approximate human knowledge, but there is no actual thinking or knowledge,” said Cummings. “ChatGPT goes after the most probable image, or the most probable grouping of words.”
In general, large language models like ChatGPT reflect what the average person is saying on the internet. This means they can pick up extremist views if those views happen to be trending or popular online.
It can also be a concern when it comes to diversity, she said.
“If a company uses ChatGPT to write their mission statement, in another five years, everyone’s mission statement will be the same,” Cummings said. “Creative thoughts and authenticity are lost. It’s important to understand how far we push these models, and what the long-term ramifications could be.”
Roundtable attendees from local universities and tech companies left with a greater understanding of how AI can impact everyday life, in more ways than one.