Student Finds Out Professor Used Chat GPT, Demands Tuition Refund - ForumDaily
The article has been automatically translated into English by Google Translate from Russian and has not been edited.


College professors have started using ChatGPT to prepare course materials. One student who noticed this demanded a refund of the money she paid for tuition, Entrepreneur reports.

Photo: Waingro | Dreamstime.com

The use of AI in higher education is becoming increasingly popular among both students and teachers.

In February, Northeastern University student Ella Stapleton noticed that lecture notes for her organizational behavior course appeared to have been generated by ChatGPT. About halfway through the document was the phrase “expand all sections. Make the text more detailed and specific,” which looked very much like a leftover prompt to the AI chatbot.


Stapleton examined other course materials, including slide presentations, and found more signs of AI, such as images of people with extra limbs and spelling errors. This struck her as alarming, especially since the syllabus from instructor Rick Arrowood prohibited students from using artificial intelligence.

“He tells us not to use AI, but he uses it himself,” the student noted.

Stapleton filed a formal complaint with Northeastern University's business school and demanded a refund of the course fee, more than $8,000 in total.

The university rejected Stapleton's request the day after she graduated.

Arrowood is a visiting professor who has been teaching at various colleges for over fifteen years. He admitted to using ChatGPT to process and refine course files and documents. He said the experience has made him more cautious about AI and that he will warn students before using it in the future.

Stapleton’s case highlights the growing use of artificial intelligence in higher education. According to a 2023 survey by consulting group Tyton Partners, 22% of university professors said they regularly use generative AI. In 2024, that figure nearly doubled to about 40%.

AI is increasingly being used by students themselves, too. According to a study by OpenAI published in February, more than a third of 18- to 24-year-olds in the United States use ChatGPT, with 25% of their messages related to learning. The two most popular uses were for learning assistance and writing.

According to Tyton's 2024 survey, educators using AI use it to design assignments, create course syllabuses, create rubrics, and prepare tests and quizzes.

Students use AI to get answers to homework assignments, help with papers, and take notes from lectures.

In response to students’ use of AI, universities have adapted and begun publishing official guidelines for the use of ChatGPT and other generative AI. For example, Harvard University advises students to protect sensitive data, such as unpublished research, and to review AI content for errors or “hallucinations” when using AI chatbots. New York University’s (NYU) policy states that students must obtain instructor permission to use ChatGPT.

(The AI chatbot Grok gave an interesting explanation of what AI “hallucinations” are.

So, in the context of generative AIs like ChatGPT, “hallucinations” are instances where the AI creates information that seems plausible but is actually fictitious, inaccurate, or completely false. This is not a bug in the traditional sense (like a software glitch), but a feature of AI models that generate text based on probabilistic patterns rather than a factual knowledge base. The word “hallucination” is chosen because the AI is “making up” or “seeing” something that isn’t there, similar to the human imagination or a dream.

Why does this happen?

Generative AIs like ChatGPT are trained on huge corpora of text, from which they extract patterns and regularities. When these models respond, they predict which words or phrases are most likely to come next, but they don’t fact-check like a human. If there are gaps or inconsistencies in the data, or a query is beyond their knowledge, the AI can:

  • Invent non-existent facts.
  • Mix real data with fiction.
  • Give a confident but wrong answer.
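The mechanism above can be illustrated with a deliberately tiny sketch (purely illustrative, and far simpler than a real language model): a toy "model" that picks each next word only by how often it followed the previous word in its training text. Nothing in the loop checks whether the output is true, so a false statement in the training data, or unlucky sampling, yields fluent but wrong text.

```python
from collections import defaultdict
import random

# Toy training data containing one true and one false statement.
training_text = (
    "einstein won the nobel prize in 1921 "
    "einstein won the lottery in 1930"  # false, but the model can't tell
).split()

# Count which word follows which (a first-order Markov chain).
transitions = defaultdict(list)
for prev, nxt in zip(training_text, training_text[1:]):
    transitions[prev].append(nxt)

def generate(start, length=6, seed=0):
    """Sample a sequence by following observed transitions at random.

    The model only knows "what tends to come next", never "what is true".
    """
    random.seed(seed)
    words = [start]
    for _ in range(length):
        options = transitions.get(words[-1])
        if not options:
            break  # dead end: no word ever followed this one
        words.append(random.choice(options))
    return " ".join(words)

print(generate("einstein"))  # fluent output that may mix truth and fiction
```

Every generated sentence is locally plausible (each word really did follow the previous one somewhere in the data), yet the whole can be false, which is exactly the shape of a hallucination.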

Examples of "hallucinations":

  • Fictitious sources: The AI may cite a non-existent article or book, such as "According to a study by John Smith in the Journal of Future Studies (2024), ...", even though neither the journal nor the author exist.
  • False facts: When asked about a little-known event, the AI might give a plausible but false story, such as "There was a robot festival in Moscow in 1995," which never happened.
  • Mixed-up details: The AI can confuse names, dates, or events, confidently stating that "Einstein received the Nobel Prize in 1930 for the theory of relativity" (in reality, it was in 1921, for the photoelectric effect). - Note.)

Universities use software to detect signs of AI in written work such as essays. However, students have learned to bypass these detectors by intentionally inserting typos into texts created using ChatGPT.


The rise of AI in higher education may be leading to a decline in critical thinking. Researchers from Microsoft and Carnegie Mellon University published a study this year that found that people who are confident in AI and who use it regularly use fewer critical thinking skills.

"When used incorrectly, technologies can and do lead to deterioration of cognitive abilities that need to be preserved," the researchers concluded.
