
Musk, Wozniak and Harari ask to stop research and training of artificial intelligence: it poses too much threat

Prominent entrepreneurs, researchers, politicians and writers have signed an open letter asking for a halt to artificial intelligence (AI) experiments. The appeal was published by the non-profit organization Future of Life Institute.


Over a thousand signatories are calling on all AI labs to immediately suspend research and training on AI systems more powerful than GPT-4 for at least 6 months.

Scientists say artificial intelligence systems with human-competitive intelligence “could pose a serious danger to society and humanity.”

The open letter states that AI should be planned and managed with commensurate care and resources. Unfortunately, the signatories note, this level of planning and management does not currently exist. In recent months, AI labs have been “stuck in an uncontrollable race to develop and deploy increasingly powerful digital systems that no one—not even their creators—can understand, predict, or reliably control.”


The AI researchers who signed the open letter ask whether we should allow machines to flood our information channels with propaganda and untruth, and whether we should develop non-human minds that could eventually outnumber, outwit and replace us.

The letter warns of the risk of losing control of our civilization. The signatories argue that decisions about AI development should not be delegated to unelected leaders. They insist that powerful artificial intelligence systems should be developed only once people are confident that their effects will be positive and their risks manageable. This confidence must be well founded and supported by research.

A recent statement from OpenAI regarding artificial general intelligence states that “at some point it may be important to get an independent assessment before proceeding to train future systems.” The authors of the open letter believe that such a moment has already arrived.

Musk, who co-founded OpenAI but left in 2018 and has since become critical of the organization, also signed the letter.

Speaking at the Massachusetts Institute of Technology, Musk said AI could be the most significant threat to humanity: “With artificial intelligence, we are summoning the demon. In all those stories about the guy with the pentagram and the holy water, the guy is sure he can control the demon. It doesn't work out.”

“This is pride and an obvious mistake,” Musk said of underestimating the power of AI.

Therefore, the signatories are calling on all AI labs to immediately suspend training of AI systems more powerful than GPT-4 for at least 6 months. The pause should be universal and verifiable, and should involve all key players. If such a pause cannot be introduced quickly, governments should intervene and impose a moratorium, the letter suggests.

The pause, the signatories argue, should be used by AI labs and independent experts to jointly develop and implement a set of shared safety protocols for advanced AI design and development, rigorously audited and overseen by independent outside experts. These protocols should ensure that systems adhering to them are safe beyond reasonable doubt.


The signatories emphasized that this does not mean a pause in AI development in general, but merely a step back from “a dangerous race towards ever larger unpredictable black box models.” They advocate oversight, control and verification of AI systems by governments and dedicated regulatory bodies.

This pause, they argue, would give society a chance to adapt and, later, to fully enjoy the “summer of AI.”

“Society has suspended the use of other technologies with potentially catastrophic consequences for society,” the letter says. “We can do it here. Let's enjoy a long summer with artificial intelligence, rather than rush into the fall unprepared.”

Who signed the letter

Among the prominent figures in and around AI who signed the appeal are:

  • Elon Musk, founder of SpaceX and Tesla;
  • historian and futurist Yuval Noah Harari;
  • Apple co-founder Steve Wozniak;
  • Skype co-founder Jaan Tallinn;
  • technology-focused politician Andrew Yang.

The signatories also include engineers from Amazon, DeepMind, Google, Meta and Microsoft, as well as academics, including the famed cognitive scientist Gary Marcus.


Among the co-signers are also the co-founders of Apple, Pinterest and Skype, as well as the founders of the AI startups Stability AI and Character.ai.

Harari considers AI one of the most important challenges of the near future, on a par with climate change and the nuclear threat.

“It is not apocalyptic Westworld-style scenarios that should be feared, but the social and economic crisis caused by AI,” says the historian.

“Sooner or later, thinking devices designed to make our lives easier will begin to realize that they are better. Who will then run companies: a slow human or a machine?” says Steve Wozniak.

And then, in his opinion, humans could become to artificial intelligence “something like ants.”
