ChatGPT and similar large language models (LLMs) can be used to write texts about any given subject, at any desired length, at a speed unmatched by humans.
So it’s not a surprise that students have been using them to “help” write assignments, much to the dismay of teachers who prefer to receive original work from actual humans.
In fact, in Malwarebytes’ recent research survey, “Everyone’s afraid of the internet and no one’s sure what to do about it,” we found that 40% of people had used ChatGPT or similar to help complete assignments, while 1 in 5 admitted to using it to cheat on a school assignment.
It’s becoming very hard to tell what was written by an actual person and what was written by tools like ChatGPT, which has led to students being falsely accused of using ChatGPT. At the same time, students who are using those tools shouldn’t be receiving grades they don’t deserve.
Worse than that could be an influx of so-called scientific articles that either add nothing new or bring “hallucination” to the table—where LLMs make up “facts” that are untrue.
Several programs designed to detect artificial intelligence (AI) generated text have been created and tests are ongoing, but the success rate of these mostly AI-based tools hasn’t been great.
Many have found the existing detection tools to be ineffective, especially on professional academic writing, and they show a bias against non-native speakers: in one study, seven common web-based AI detection tools all flagged non-native English writers’ work as AI-generated more frequently than native English speakers’ writing.
But now it seems as if chemistry scientists have…
“The adventure of life is to learn. The purpose of life is to grow. The nature of life is to change. The challenge of life is to overcome. The essence of life is to care. The opportunity of life is to serve. The secret of life is to dare. The spice of life is to befriend. The beauty of life is to give.” —William Arthur Ward