Artificial intelligence and the rise of popular platforms like ChatGPT are disrupting life now more than ever. As the technology evolves, it stirs interest among researchers trying to keep up with both the pace of development and the ethical ramifications that accompany its use. This proliferation may provoke anxiety in some, but that’s nothing new.

[AI-generated image: woman, hair up, in ruff, looking at sewing machine]

In the late 1500s, Queen Elizabeth I denied a patent for an automated knitting machine out of fear it would steal the jobs of young maidens. In 1930, the economist John Maynard Keynes predicted 15-hour workweeks due to technological advances. And for “Twilight Zone” buffs, there’s “The Brain Center at Whipple’s” episode, in which a manufacturing company owner fires every employee and replaces them with machines.

However, West Virginia University researchers – across various disciplines such as engineering, health, journalism, and law – have uncovered ways of using AI for the public good. Even before ChatGPT became a household name, WVU experts were keeping a close eye on AI’s rising impact on the modern world. The battle between humans and the products of the human mind may not be a battle after all. As an experiment, and to help exhibit AI as a tool and resource, all images within this feature were created with an image generator, using prompts taken directly from the story. The keywords used are underlined in red for each image.

ON THE GRID

It’s everywhere, humming, all around us: the power grid. Or, as Anurag Srivastava calls it, “the biggest man-made machine ever.” Srivastava, professor and chair of the Lane Department of Computer Science and Electrical Engineering in the Benjamin M. Statler College of Engineering and Mineral Resources, dedicates his AI research to keeping the grid safe. Safe from the kinds of natural disasters that are increasing with climate change – floods, wildfires, ice storms, heat waves. Safe from bad actors, whether those are sophisticated cybercriminals holding the power supply for ransom or terrorists committing violent assaults on grid infrastructure.

Safe, even, from the consequences of its own size and escalating complexity. As the grid grows, it encompasses new power sources like residential solar panels and electric vehicle charging stations. That vast labyrinth of interconnections is vulnerable to ripple effects from small disruptions. Its oversight and crisis management have become too urgent and enormous for human operators to handle. So Srivastava is developing AI that can analyze a tsunami of real-time information about the grid, differentiate actual problems from data hiccups and swiftly seal off affected areas before damage can begin to spread like a shockwave.

[AI-generated image: butterfly on tallish grass, sunrise in background]

“In the grid, we have the butterfly effect,” Srivastava said. “This means that if a butterfly flutters its wings in Florida, that will cause a windstorm in Connecticut, because things are synchronously connected. States like Florida, Connecticut, Illinois, and West Virginia are linked in the grid’s eastern interconnection, so a big event in the Deep South will cause problems up North.”


To stop that from happening, Srivastava has created an AI-based tool that’s learning to detect and gracefully quarantine malfunctioning parts of the grid. And with distributed intelligence-sharing that communicates and compares decentralized data from across the grid, the tool will protect against cyberattacks, sealing a major hole in national security preparedness.
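To make that concrete, here is a minimal, purely illustrative sketch of the kind of triage logic such a tool might perform: compare each substation’s reading against its neighbors’ consensus to tell a real disturbance from a bad sensor, then flag the affected zone for isolation. The station names, thresholds, and data format below are invented for illustration; this is not Srivastava’s system.

```python
# Hypothetical sketch of grid anomaly triage: compare each substation's
# frequency reading against the median of its neighbors to separate real
# faults from single-sensor data hiccups, then mark faulty zones for
# isolation. Illustrative only -- not the WVU tool.
from statistics import median

NOMINAL_HZ = 60.0
FAULT_TOLERANCE = 0.5    # deviation (Hz) treated as a real disturbance
SENSOR_TOLERANCE = 0.8   # disagreement with neighbors => suspect sensor

def triage(readings: dict[str, float], neighbors: dict[str, list[str]]) -> dict[str, str]:
    """Classify each substation as 'ok', 'sensor_glitch', or 'isolate'."""
    status = {}
    for station, hz in readings.items():
        consensus = median(readings[n] for n in neighbors[station])
        if abs(hz - NOMINAL_HZ) <= FAULT_TOLERANCE:
            status[station] = "ok"
        elif abs(hz - consensus) > SENSOR_TOLERANCE:
            # Station disagrees with its neighbors: likely bad data, not a fault.
            status[station] = "sensor_glitch"
        else:
            # Station and its neighbors agree something is wrong: quarantine.
            status[station] = "isolate"
    return status

readings = {"A": 60.0, "B": 59.0, "C": 59.0, "D": 60.1, "E": 62.0}
neighbors = {"A": ["B", "E"], "B": ["A", "C"], "C": ["B", "D"],
             "D": ["C", "E"], "E": ["A", "D"]}
print(triage(readings, neighbors))
# {'A': 'ok', 'B': 'isolate', 'C': 'isolate', 'D': 'ok', 'E': 'sensor_glitch'}
```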

MINING FOR SAFETY

Like Srivastava, roboticists at the Statler College believe in AI’s potential as a safeguard. But where his machine spans a continent, they’re plunging into caverns deep within the Earth, where their AI-enabled robot-drone duo Rhino and Oxpecker explore and map underground mines. Rhino is a wheeled robot, developed by Associate Professor Yu Gu and his team. Oxpecker, like its avian namesake, is the small, flighted companion that rides on Rhino’s back and was developed in Associate Professor Guilherme Pereira’s lab.

[AI-generated image: drone at the top of lighted tunnel]

Together, Rhino and Oxpecker perform critical tasks like gas detection, search-and-rescue missions, and geotechnical characterization in harsh subterranean environments, overseen by Deniz Tuncay, assistant professor of mining engineering.

Jason Gross, associate professor and chair of the Department of Mechanical, Materials and Aerospace Engineering, who led the drone mapping efforts with his students, said the terrain in spaces like limestone mines is “difficult to traverse, with slippery surfaces, large rocks and mud. Dust, smoke, and fog degrade the performance of the robots’ sensors, and communication is very limited.

“Because it’s dangerous for humans to conduct these inspections, Rhino and Oxpecker have to choose their own safe path to follow while they map the roofs and pillars in the mines.”
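As a rough illustration of the “choose a safe path” problem, the sketch below runs a textbook A* search over a mine modeled as a flat grid with hazard cells marked impassable. The map is invented; the actual Rhino and Oxpecker planners work with far richer 3D sensor data.

```python
# Hypothetical sketch of safe path planning: A* search over a mine map
# modeled as a grid, where cells flagged as hazards (mud, rockfall) are
# impassable. Not the real Rhino/Oxpecker planner.
import heapq

def safe_path(grid, start, goal):
    """grid: 2D list, 0 = passable, 1 = hazard. Returns list of cells or None."""
    rows, cols = len(grid), len(grid[0])
    frontier = [(0, start, [start])]  # (priority, cell, path so far)
    seen = set()
    while frontier:
        _, (r, c), path = heapq.heappop(frontier)
        if (r, c) == goal:
            return path
        if (r, c) in seen:
            continue
        seen.add((r, c))
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                heuristic = abs(goal[0] - nr) + abs(goal[1] - nc)  # Manhattan
                heapq.heappush(frontier,
                               (len(path) + heuristic, (nr, nc), path + [(nr, nc)]))
    return None  # no safe route exists

mine = [[0, 0, 0, 0],
        [1, 1, 0, 1],
        [0, 0, 0, 0],
        [0, 1, 1, 0]]
print(safe_path(mine, (0, 0), (3, 3)))
```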

TRUTH, JUSTICE AND AI

According to legal scholars at WVU, AI systems can influence the scales of justice. AI can benefit the practice of law when used responsibly, said Amy Cyphert, a College of Law lecturer, but such systems can also introduce errors and biases, potentially leading to unjust legal outcomes. “There are recidivism prediction tools that rely on certain AI systems,” Cyphert said. “AI tools can actually have an impact on who might go to jail and for how long.” That risk arises in part because some AI systems may have been trained on biased historical criminal justice data.

[AI-generated image: blind justice statue on muted blue/green background]

“The computer science concept is ‘garbage in, garbage out,’” she said. “Which means if you’ve got data that’s flawed, whatever output it produces is going to be flawed. In law, and especially in the field of artificial intelligence and law, scholars say, ‘bias in, bias out.’”
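A toy example makes “bias in, bias out” tangible: if a naive model simply learns the majority label from skewed historical records, it reproduces the skew as a rule. The data below is entirely synthetic and stands in for no real recidivism tool.

```python
# Toy illustration of "bias in, bias out": a model that learns from skewed
# historical labels reproduces the skew. Entirely synthetic data.
from collections import Counter

# Historical records: (neighborhood, was_labeled_high_risk). Suppose past
# practice over-policed neighborhood "A", inflating its high-risk labels.
history = [("A", True)] * 70 + [("A", False)] * 30 + \
          [("B", True)] * 30 + [("B", False)] * 70

def train(records):
    """'Train' by majority vote of historical labels per neighborhood."""
    votes = {}
    for hood, label in records:
        votes.setdefault(hood, Counter())[label] += 1
    return {hood: counts.most_common(1)[0][0] for hood, counts in votes.items()}

model = train(history)
print(model)  # {'A': True, 'B': False} -- the historical skew becomes the rule
```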

Moreover, people tend to believe AI predictive systems are neutral, objective, and wholly accurate, a conclusion that Cyphert argues is unwarranted. Cyphert, who also directs the WVU ASPIRE Office, has taught classes on the ethics of AI in the law since 2020. She currently guides her students on how to use the technology and what problems may arise.

AI in the law isn’t new; it’s been used for years in technology like facial recognition software and “predictive policing,” which uses algorithms to predict where and when crime is likely to occur. AI can also make legal work easier and help reduce time and costs associated with drafting motions and assembling legal documents. And it’s able to whittle down the process of electronic discovery from a multiweek endeavor to a much shorter timeframe. The devil, Cyphert said, is in the details of how lawyers use these tools.
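The e-discovery speed-up rests on a simple idea: score every document against the terms reviewers care about and surface the likeliest hits first. Below is a bare-bones TF-IDF ranking sketch of that concept, with invented documents; commercial e-discovery platforms use far richer models.

```python
# Bare-bones sketch of relevance ranking for e-discovery: score documents
# against a query with TF-IDF so reviewers read the likeliest hits first.
# Documents are invented; real platforms are far more sophisticated.
import math
from collections import Counter

docs = {
    "memo1": "quarterly revenue projections discussed with the board",
    "memo2": "lunch schedule and parking updates for staff",
    "memo3": "board approved revised revenue targets after projections memo",
}

def tfidf_rank(query, docs):
    tokenized = {name: text.lower().split() for name, text in docs.items()}
    n = len(docs)
    def idf(term):  # smoothed inverse document frequency
        hits = sum(term in toks for toks in tokenized.values())
        return math.log((n + 1) / (hits + 1)) + 1
    scores = {}
    for name, toks in tokenized.items():
        tf = Counter(toks)
        scores[name] = sum(tf[t] / len(toks) * idf(t) for t in query.lower().split())
    return sorted(scores.items(), key=lambda kv: -kv[1])

print(tfidf_rank("revenue projections", docs))  # memo1 and memo3 rank first
```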

Her AI and the Law class focuses in part on helping students learn the ethical use of AI systems in the practice of law. She’s also been warning her students about confidentiality and privacy issues, because while AI can help edit or proofread a document, doing so might require uploading sensitive client data to a database. “Those who don’t understand the technology may use it anyway and that could be detrimental to their clients,” she said. “Lawyers have an obligation to save their clients money, so if you can use an AI tool that makes your case stronger and cheaper for your client, you should absolutely use it. But you have to understand the tool. If you don’t and you accidentally make the client’s case weaker, that’s problematic.”

THE SOUNDS OF SCIENCE

Elsewhere at the College of Law, Professor Sean Tu and student Angelyn Gemmen have analyzed how AI systems can help determine verdicts in music copyright cases, like those faced by musicians George Harrison, Ed Sheeran, and John Fogerty. The first two singers were accused of borrowing from other artists, and Harrison later wrote that his loss in court gave him lasting paranoia about songwriting. Fogerty, on the other hand, was sued by a record company for plagiarizing himself from a song written 15 years earlier.

[AI-generated image: person playing keyboard, facing left, head bent]

Cases of musical copyright violation can be difficult to prove or disprove in court, according to Tu and Gemmen. While the “I know it when I hear it” test sometimes includes testimony from forensic musicologists and professional musicians, the results are generally inconsistent because each party hires its own experts, who emphasize either the differences or the similarities between the works, depending on which side they represent, Tu said.

AI could serve as a substitute for this “battle of the experts,” which can confuse both judges and juries. Furthermore, AI could act as a flagging system for artists, who could use it to check songs before they’re released, just as students use plagiarism checkers before turning in papers, Gemmen said. Additionally, the insurance industry might be willing to insure artists and record labels who are willing to subject their songs to an AI similarity test before their music is released.

“The problem is, for songs to be part of the same genre, elements are going to be similar,” she said.

A spy novel will have a femme fatale in it, a movie about New York will have yellow taxi cabs in it and pop music is going to have these four chords in it. This idea is intuitive to people and can be made more obvious through artificial intelligence.

— Angelyn Gemmen
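To illustrate what an automated similarity flag might examine, here is a hypothetical sketch that compares two melodies as sequences of pitch intervals, making the comparison key-invariant, and reports a similarity ratio. A real forensic model would also weigh rhythm, harmony, and the genre-typical elements Gemmen describes.

```python
# Hypothetical melody-similarity check: compare two tunes as sequences of
# pitch intervals (ignoring key) and report a similarity ratio. Both
# melodies below are invented; not the tool Tu and Gemmen studied.
from difflib import SequenceMatcher

def intervals(midi_notes):
    """Convert absolute MIDI pitches to successive intervals (key-invariant)."""
    return [b - a for a, b in zip(midi_notes, midi_notes[1:])]

def melody_similarity(song_a, song_b):
    return SequenceMatcher(None, intervals(song_a), intervals(song_b)).ratio()

original = [60, 62, 64, 65, 67, 65, 64, 62]   # hypothetical melody
contested = [62, 64, 66, 67, 69, 67, 66, 64]  # same contour, transposed up
print(f"similarity: {melody_similarity(original, contested):.2f}")  # 1.00
```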

HOW TO TRAIN YOUR AI

At the end of the day, it’s not necessarily about the AI controlling us. It’s about us controlling the AI. In the Reed College of Media, Professor Bob Britten said there are practical areas of AI use to study as well as ethical areas. 

[AI-generated image: robot heads on orange background]

About his data journalism class, Britten said, “We get into a lot of case studies and examples about how AI learns and the unintended consequences that it often spits out.” Before the rise of ChatGPT and its ilk, he taught his students about truth in the media. Now, he wants them to understand that AI is learning from us all the time.

“But not from the best side of us,” he said. “There’s a wealth of language out there, free to train your AI on. You’ll get algorithms that return highly racially inflammatory results. When you train your AI on the internet — which is full of hateful language — you're going to get pretty hateful language back.” While it’s easy to believe young people might be savvy about the nuances of AI, Britten said his students are learning as they go, too.

He advises them to remember that AI is still quite young and doesn’t recognize what it’s “digesting” from its source of sustenance: the internet. “It’s like raising a child,” he said. “What we're putting into it is what comes out. It's just a child that learns super-fast. And you might be upset when your child eats junk food for dinner. Well, this child can eat a million dinners of junk food all at once.”

A TEACHING – AND POTENTIALLY LIFESAVING – TOOL

For bioinformaticians – the humans who analyze and interpret biological information to prioritize treatment for disease – AI tools can be both a godsend for saving time and a trial that has to be monitored for accuracy. Gangqing “Michael” Hu, assistant professor in the School of Medicine’s Department of Microbiology, Immunology and Cell Biology, is hoping to use ChatGPT to enhance the work of these scientists who can help prioritize targeted treatment for cancer and genetic disorders.

Hu works with his students on AI research and has published several studies on the topic. For one study, Hu saw potential in educational settings for the newest official ChatGPT plugin, called Code Interpreter. But he also found limitations for its use by scientists who work with biological data. “Code Interpreter is a good thing and it’s helpful in an educational setting as it makes coding in the STEM fields more accessible to students,” Hu said. “However, it doesn’t have the features you need for bioinformatics.”

[AI-generated image: colorful circuit board]

ChatGPT produces human-like responses in text-based conversations and is being used by multiple companies to respond to customer inquiries and provide general information. Anyone can use it to seek information on a plethora of subjects. ChatGPT can also respond with code, and in that case the platform becomes a coding tool through prompting.
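For readers curious what “coding through prompting” looks like programmatically, here is a minimal sketch using the OpenAI Python SDK’s v1 chat interface. The model name and prompt are placeholders, and Hu’s bioinformatics workflows are considerably more involved.

```python
# Minimal sketch of "coding through prompting" with the OpenAI Python SDK
# (v1.x interface). Prompt and model name are placeholders.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

response = client.chat.completions.create(
    model="gpt-4",  # placeholder model name
    messages=[
        {"role": "system", "content": "You are a bioinformatics coding assistant."},
        {"role": "user", "content": "Write Python that computes GC content "
                                    "for each sequence in a FASTA file."},
    ],
)
print(response.choices[0].message.content)  # the generated code, as text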

Hu led another study to prepare high school and college students to harness the power of ChatGPT through coding. "This is like kids wading in the muddy shoreline seeking beautiful seashells," Hu said.

"The kids are beginning students and the muddy shoreline is ChatGPT. The beautiful seashells represent all the attractive opportunities which beginners cannot resist. But the ChatGPT shoreline is muddy with challenges such as the uncertainty from the chatbot's response - including misleading artifacts - and students' overreliance on AI for coding."

THE EXPERTS OF TOMORROW

Students aren’t letting faculty have all the fun. Take industrial engineering major and Morgantown native Andrew Shephard, who has already rolled out a beta version of his AI software, GPTeacher, which helps teachers teach and students learn. Shephard has tested GPTeacher in computer engineering, wood science, and entrepreneurship classrooms, and he’s considering launching a startup around it after he graduates in May 2024.

Course instructors train GPTeacher on problems students will be given, providing the AI not only with the right answers, but with the process for getting there. That enables GPTeacher to lead students through the critical thinking necessary to master the material. The tool works for teachers as well, highlighting students’ problem areas. Shephard believes AI is a once-in-a-generation opportunity for research and industry to connect, innovate, and change the world.
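GPTeacher’s internals aren’t public, but the approach Shephard describes can be sketched as prompt construction: embed the instructor’s answer and reasoning steps in a system prompt that tells the model to guide rather than answer. Everything below, including the sample problem, is hypothetical.

```python
# Speculative sketch of the GPTeacher idea: the instructor supplies the
# answer AND the reasoning path, and the prompt forbids revealing the
# answer, so the model tutors step by step. Not the actual GPTeacher code.
INSTRUCTOR_KEY = {
    "problem": "A beam carries a 500 N point load at midspan...",  # sample only
    "answer": "250 N at each support",
    "steps": [
        "Draw the free-body diagram of the beam.",
        "Apply the sum of vertical forces = 0.",
        "Apply the sum of moments about one support = 0.",
    ],
}

def build_tutor_prompt(key: dict) -> str:
    steps = "\n".join(f"{i + 1}. {s}" for i, s in enumerate(key["steps"]))
    return (
        "You are a tutor. Never state the final answer "
        f"({key['answer']}) outright. Instead, walk the student through "
        f"these instructor-approved steps, one at a time, with questions:\n{steps}"
    )

print(build_tutor_prompt(INSTRUCTOR_KEY))
```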

[AI-generated image: colorful circuit board]

“We’re almost at ground zero in the AI field,” he said. “To me, the business aspect of artificial intelligence, its potential to fundamentally change numerous industries, makes this an incredibly exciting time.”

At the School of Medicine, ChatGPT has raised curiosity and inspiration among students looking for ways to make lab work less cumbersome and more efficient.

Three graduate students in Associate Professor Ed Pistilli’s lab found themselves wanting to focus their time on answering research questions instead of sitting behind a screen processing data. Together they found a solution and produced CLAMSwrangler – an app generated mostly through the use of ChatGPT – to help them work with the Comprehensive Laboratory Animal Monitoring System.

The CLAMSwrangler app essentially takes what would have been five to six hours of copying, pasting, and formatting work in Excel and allows a user to perform those same manipulations in under a minute.

— Alan Mizener, a third-year M.D./Ph.D. student

“It also makes the entire process more reliable and less prone to introducing errors in the data. Another great benefit is that users can easily combine data from multiple runs, a task that is even more difficult to do by hand.”
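The lab’s GitHub repository has the real code; as a hedged sketch of the kind of consolidation the students describe, the snippet below stacks raw exports from several runs into one tidy table with pandas. The file and column names here are invented, and the real app handles CLAMS-specific formatting quirks.

```python
# Hedged sketch of the consolidation CLAMSwrangler performs: load raw
# exports from several runs, stack them, and write one tidy table.
# File and column names are invented; the real app is on the lab's GitHub.
from pathlib import Path
import pandas as pd

def combine_runs(export_dir: str) -> pd.DataFrame:
    frames = []
    for csv in sorted(Path(export_dir).glob("run_*.csv")):  # hypothetical naming
        df = pd.read_csv(csv)
        df["run"] = csv.stem  # tag rows with their source run
        frames.append(df)
    combined = pd.concat(frames, ignore_index=True)
    # Example summary: mean of each measurement per animal per run.
    return combined.groupby(["run", "animal_id"]).mean(numeric_only=True)

combine_runs("clams_exports").to_csv("combined_runs.csv")
```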

Mizener paired up with Stuart Clayton, a Ph.D. student in the Pathophysiology, Rehabilitation, and Performance Program, and Lauren Rentz, a Ph.D. student in exercise physiology, to develop the code. Their app is available for download on the lab’s GitHub repository. The students are now considering how to develop software to quicken the processing pace for the muscle physiology system — a staple piece of equipment in their lab that allows researchers to assess isolated muscle functions.

“Our intention is to freely release any software we develop as open-source and make it available to any researchers who may have use for it, as well as to incorporate new feature ideas they may have,” Clayton said. “ChatGPT allowed us to develop an exceedingly niche piece of software that might not have otherwise been developed due to the time investment it would have taken.

“I hope that in the future, newer versions of large-language models like ChatGPT allow for more researchers to do the same so that we can all benefit from the reproducibility and efficiency that software brings to science.”