DeepMind, the developer of AlphaGo, plans to release a private beta of its own chatbot 'Sparrow' to compete with ChatGPT in 2023
Demis Hassabis, CEO of DeepMind, the AI development company famous for 'AlphaGo', the Go AI that drew wide attention by defeating a Go champion, revealed in an interview with TIME that the company is developing a chatbot called 'Sparrow' to compete with ChatGPT. DeepMind is looking to release a private beta of Sparrow sometime in 2023.
DeepMind CEO Demis Hassabis Urges Caution on AI | TIME
Our founder and CEO @DemisHassabis talks to @TIME about DeepMind's mission to build AI responsibly to help solve some of the world's toughest problems.—DeepMind (@DeepMind) January 12, 2023
With AI's promise also comes peril. It is in this uncertain climate that @demishassabis agrees to a rare interview with @billyperrigo, to issue a stark warning about his growing concerns https://t.co/b5GKLPDT6V—TIME (@TIME) January 15, 2023
ChatGPT is a chat AI that can respond naturally to questions from humans, and its accuracy has drawn attention: it is reported to be able to pass essay questions on university-level exams, and researchers reportedly cannot distinguish paper abstracts written by ChatGPT from those written by humans.
DeepMind is developing its own chat AI, 'Sparrow', similar to ChatGPT, and it has now become clear that a private beta will be released in 2023. Sparrow's release is reportedly coming later than ChatGPT's because DeepMind is working on reinforcement-learning-based features that ChatGPT lacks.
In the TIME interview, Hassabis also stated that DeepMind's goal is to develop 'artificial general intelligence (AGI)' that thinks like humans, rather than conventional artificial intelligence (AI). He described AGI as 'an epoch-defining technology, like the invention of electricity, that will change human life itself.' The difference between AGI and AI is summarized in the following article.
Why AI research that merely extends existing algorithmic frameworks cannot achieve true artificial intelligence: the lack of 'affordance' - GIGAZINE
Like DeepMind, the AI startup OpenAI, launched by Elon Musk and Peter Thiel, aims to develop AGI and has attracted attention as an AI development organization. OpenAI develops AI with huge investment from Microsoft, but its stance on AGI differs completely from DeepMind's. DeepMind advances AI research by having AI play games and applying reinforcement learning, while OpenAI researches AI using unsupervised learning, feeding information collected from the Internet into neural networks.
There are other differences between DeepMind and OpenAI. While DeepMind pursues its AI research behind closed doors, OpenAI actively makes its results available to the public: in 2022 it announced the image generation AI 'DALL-E 2' and the chat AI ChatGPT, with test versions also available to general users. However, TIME pointed out the limits of OpenAI's development approach: 'Because they are trained on data gathered from the Internet, they suffer from structural biases and inaccuracies. DALL-E 2 tends to portray flight attendants as young, beautiful women, and ChatGPT tends to assert false information with confidence.'
OpenAI announces 'ChatGPT', a language model for dialogue that can admit mistakes and reject inappropriate requests - GIGAZINE
According to a research paper published by DeepMind in 2021, language generation tools like ChatGPT and GPT-3 tend to accelerate the spread of misinformation, and can 'facilitate government censorship and surveillance, and spread harmful stereotypes under the guise of objectivity.' OpenAI also acknowledges that the AI it develops is biased, but says it is continuing efforts to minimize this.
In the interview, Hassabis also expressed concern about the speed of AI's evolution, urged competitors to focus more on safety, and suggested that the time is approaching when the AI industry's culture of openly publishing research results will come to an end.
in Software, Posted by logu_ii