Drexel Researchers Selected for Federal AI Research Pilot Program

Drexel Team Will Develop Brain-inspired Machine Learning Model to Enable Oversight, Safety and Transparency of Large Language Models

Researchers from Drexel University’s College of Engineering and College of Computing & Informatics are among the first cohort tapped by the U.S. National Science Foundation to take on the challenge of advancing safe, secure and trustworthy artificial intelligence. In a ceremony at the White House on May 6, the president’s Office of Science and Technology Policy introduced Drexel’s effort to use brain-inspired machine learning algorithms to improve transparency and oversight of large language models, like ChatGPT, as one of the first projects to receive resources under the National AI Research Resource Pilot program.

Backed by the NSF and the Department of Energy, the program is initially supporting research focused on five areas of artificial intelligence technology:

- Testing, evaluating, verifying, and validating AI systems
- Improving accuracy, validity, and reliability of model performance, while controlling bias
- Increasing the interpretability and privacy of learned models
- Reducing the vulnerability of models to families of adversarial attacks
- Advancing capabilities for assuring that model functionality aligns with societal values and obeys safety guarantees

The project, entitled “Neuro-inspired Oversight for Safe and Trustworthy Large Language Models,” will be led by Edward Kim, PhD, an associate professor in the College of Computing & Informatics, and Matthew Stamm, PhD, an associate professor in the College of Engineering. It will employ machine learning algorithms modeled after the brain’s neural pathways to ensure LLM programs produce accurate, unbiased responses that are internally moderated by the LLM’s own behavioral control center, a kind of prefrontal cortex that dictates how the model should behave in socially acceptable ways.
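
Conceptually, this kind of oversight can be pictured as a generator paired with an internal monitor that vets each candidate response before it is released. The sketch below is purely illustrative and is not the Drexel team’s implementation; every name in it (OversightModule, generate_candidates, the 0.8 threshold) is a hypothetical placeholder.

```python
# Illustrative sketch only -- not the project's actual method.
# An "oversight" component scores candidate LLM responses and only
# releases ones that pass a safety/accuracy check, loosely mirroring
# the "prefrontal cortex" role described above.
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class OversightModule:
    """Hypothetical stand-in for a brain-inspired oversight model that
    scores a (prompt, response) pair on a 0-1 safety/accuracy scale."""
    score: Callable[[str, str], float]
    threshold: float = 0.8

    def approve(self, prompt: str, response: str) -> bool:
        return self.score(prompt, response) >= self.threshold


def moderated_generate(prompt: str,
                       generate_candidates: Callable[[str, int], List[str]],
                       overseer: OversightModule,
                       n_candidates: int = 4) -> str:
    """Return the first candidate the overseer approves, else a refusal."""
    for response in generate_candidates(prompt, n_candidates):
        if overseer.approve(prompt, response):
            return response
    return "I can't give a reliable answer to that."


# Toy usage with stand-in components
overseer = OversightModule(score=lambda p, r: 0.0 if "unsafe" in r else 1.0)
print(moderated_generate("hello",
                         lambda p, n: ["an unsafe reply", "a vetted reply"],
                         overseer))
```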

“Being included in the federal government’s first efforts to develop guardrail systems for AI technology is a significant recognition of Drexel’s field-leading research and substantial faculty expertise in this area,” said Aleister Saunders, PhD, Drexel’s executive vice provost for Research & Innovation. “As this technology reshapes how we live, learn and interact, researchers like Ed and Matt will play a pivotal role in helping to ensure that AI is being used to society’s benefit, rather than its detriment.”

Kim’s research focuses on the ethical design of AI and machine learning technology, including raising awareness of implicit bias in the algorithms that drive it. His Spiking and Recurrent Software Coding Lab studies sparse coding, a type of AI modeled after the mammalian brain, in which an input is represented by a small number of active units, much as only a small number of neurons fire in response to a given stimulus.
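
To give a rough sense of what sparse coding looks like in practice, the snippet below uses the generic ISTA algorithm to represent a signal as a sparse combination of dictionary atoms. It is a textbook formulation offered only as a sketch, not the specific models developed in Kim’s lab.

```python
# Generic sparse coding via ISTA (illustrative only).
# Solves: min_a 0.5*||x - D a||^2 + lam*||a||_1, so that only a few
# entries of the code "a" are active for a given input x.
import numpy as np


def ista_sparse_code(x, D, lam=0.1, n_iters=200):
    """x: (m,) signal; D: (m, k) dictionary with unit-norm columns."""
    a = np.zeros(D.shape[1])
    L = np.linalg.norm(D, ord=2) ** 2          # Lipschitz constant of the gradient
    for _ in range(n_iters):
        grad = D.T @ (D @ a - x)               # gradient of the reconstruction term
        z = a - grad / L
        a = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # soft threshold
    return a


# Toy usage: recover a signal built from two dictionary atoms
rng = np.random.default_rng(0)
D = rng.standard_normal((64, 256))
D /= np.linalg.norm(D, axis=0)
x = 2.0 * D[:, 3] - 1.5 * D[:, 100]
a = ista_sparse_code(x, D)
print("active coefficients:", np.count_nonzero(np.abs(a) > 1e-3))
```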

Stamm’s Multimedia and Information Security Lab leads information forensics research by developing technologies to detect multimedia forgeries, such as “deepfakes,” and AI-generated images and videos. His approach uses constrained neural networks to sift out the fingerprints left by each type of digital manipulation and the hallmarks of synthetic media.
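
One way such a constraint can be realized, sketched below in PyTorch, is to force each first-layer filter to act as a prediction-error filter (center weight fixed at -1, remaining weights summing to 1), so the layer responds to manipulation traces rather than image content. The class name and hyperparameters here are illustrative assumptions, not the lab’s exact design.

```python
# Illustrative constrained first convolutional layer for image forensics.
# Each filter is projected so its center tap is -1 and the remaining
# weights sum to 1, making the layer compute prediction-error residuals
# instead of responding to image content.
import torch
import torch.nn as nn


class ConstrainedConv2d(nn.Conv2d):
    def __init__(self, out_channels: int = 3, kernel_size: int = 5):
        super().__init__(1, out_channels, kernel_size,
                         padding=kernel_size // 2, bias=False)

    def constrain_weights(self):
        """Project filters onto the constraint set; call after each optimizer step."""
        with torch.no_grad():
            w = self.weight                       # (out_channels, 1, k, k)
            c = w.shape[-1] // 2                  # center tap index
            w[:, :, c, c] = 0.0                   # exclude center from the sum
            w /= w.sum(dim=(2, 3), keepdim=True)  # remaining weights sum to 1
            w[:, :, c, c] = -1.0                  # fix center tap at -1


# Toy usage: output channels behave like prediction-error (residual) maps
layer = ConstrainedConv2d()
layer.constrain_weights()
residuals = layer(torch.randn(1, 1, 64, 64))
print(residuals.shape)   # torch.Size([1, 3, 64, 64])
```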

For more than a decade, Drexel has established itself as a leader in AI research. Drexel researchers are leading efforts to develop the technology for health care applications ranging from sharing medical notes, to detecting Alzheimer’s disease and premature brain aging, to interpreting ultrasound imaging and even reminding medical personnel to wear a mask. They are testing AI in the field (a number of fields, in fact), including rapidly changing military scenarios; aging buildings and infrastructure; online learning environments; and lanternfly-infested regions. And they are looking at its early impacts in academic settings.

The University’s recent reaffirmation of artificial intelligence as a key focus of interdisciplinary research has enabled collaborations like Stamm and Kim’s to flourish. Both researchers are representing Drexel, along with a number of colleagues from across the University, on the Department of Commerce’s recently announced U.S. AI Safety Institute Consortium. They are also helping to guide Drexel’s academic policies and recommendations around using AI in the classroom.

Drexel’s project was one of 35 such efforts recognized when the White House Office of Science and Technology Policy launched the NSF pilot program at the May 6 ceremony in the Eisenhower Executive Office Building.