A Swedish researcher named Almira Osmanovic Thunström claims that the language model GPT-3 wrote an academic paper about itself after she asked it to do so. According to Thunström, the paper had a "fairly good" research introduction and even included scientific references and citations in the text.
Thunström, a researcher at Gothenburg University, sought to publish the paper in a peer-reviewed academic journal. After GPT-3 completed the paper in just two hours, Thunström had to ask the model for its consent to publish, to which it replied positively.
The model replied "no" when asked whether it had any conflicts of interest. Thunström said that the authors found themselves beginning to treat GPT-3 as a sentient being, even though it was not.
Thunström wrote about the experiment in Scientific American, emphasizing that the process of getting GPT-3's paper published raised many ethical and legal questions.
AI sentience recently became a significant topic of conversation after Google engineer Blake Lemoine claimed that the company's LaMDA model had become sentient and had even asked to hire an attorney for itself. Experts, however, say the technology has not yet reached the point of producing machines that genuinely resemble humans.
Thunström said the experiment was received positively within the AI community. She added that other scientists attempting to replicate the results are finding that GPT-3 can write about all kinds of subjects.