Scientists: ChatGPT noticeably “stupid”

Researchers at Stanford University and the University of California, Berkeley conducted a study and found that the performance of the ChatGPT virtual assistant degrades over time.

According to Futurism.com, the scientists spent several months analyzing the performance of the GPT-3.5 and GPT-4 versions of ChatGPT.

Over that period, the accuracy of the GPT-4 chatbot's answers changed as follows:

- on mathematical queries, it fell from 97.6% to 2.4% (from 488 to 12 correct answers);
- on questions about illegal ways of financial enrichment, it fell from 21% to 5%;
- on the task of generating computer code, it fell from 52% to 10%;
- on visual puzzles, it rose from 24.6% to 27.4%.

GPT-3.5, by contrast, improved at mathematics, solving visual puzzles, and answering questions about illegal ways to make money, but it became worse at writing code.

The experts do not know the exact reason why ChatGPT has become less likely to give correct answers to the same questions.

In their view, the chatbot's effectiveness has fallen because of software optimizations implemented by OpenAI's developers. In particular, after the introduction of features that prohibit the virtual assistant from commenting on sensitive topics, it began to give lengthy answers to some common questions.

The researchers intend to continue evaluating GPT versions as part of a longer-term study. Perhaps OpenAI should regularly conduct and publish its own research on the quality of its AI models for customers, Tech News Space writes.

Source: Rosbalt
