Immortality or destruction: what can AI do with humanity when it becomes supermind - ForumDaily
The article has been automatically translated into English by Google Translate from Russian and has not been edited.


Artificial intelligence could lead to the extinction of humanity, some experts warn, including the heads of OpenAI and Google DeepMind. The BBC looked into how machines could overpower people and what might follow.


Since its launch in November 2022, the artificial intelligence (AI) chatbot ChatGPT has become the fastest-growing internet application in history.

In just two months it reached 100 million active users. According to the technology monitoring company Sensor Tower, it took Instagram two and a half years to reach that mark.

ChatGPT's popularity has sparked an intense debate about AI's impact on the future of humanity.

Dozens of experts backed a statement from the Center for AI Safety, which said that "mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."

Other experts, however, believe such fears are exaggerated and that humanity has nothing to worry about.

Imitating humans

Text and images generated by AI tools such as ChatGPT, DALL-E, Bard and AlphaCode can be indistinguishable from human work.

Students use them to do their homework, and politicians use them to write speeches, as US Democratic Representative Jake Auchincloss has done.

Tech giant IBM has said it will pause hiring for some 7,800 positions that AI could fill.


And as amazing as these changes may be, this is just the beginning.

We are now only at the first stage of AI development. Two more lie ahead, and some scientists believe they could directly threaten the survival of humankind.

Three stages

1. Artificial Narrow Intelligence (ANI)

Artificial Narrow Intelligence, also known as Narrow AI, focuses on a single task and performs repetitive work within a given range of functions.

It can learn from vast amounts of data, for example from the internet, but only within the specific area for which it was programmed.

A good example is chess programs. They can beat the world chess champion, but that's all they can do.

There are many applications on smartphones that use this technology, from GPS maps to music and video programs.

Even more complex systems, such as self-driving cars and ChatGPT, are forms of narrow AI. They cannot operate outside a defined set of tasks and so cannot make decisions on their own.

However, some experts believe that systems programmed for automatic learning, such as ChatGPT or AutoGPT, could move on to the next stage of development.

2. Artificial General Intelligence (AGI)

Artificial general intelligence will become a reality when a machine can perform any intellectual task that a human is capable of.

Artificial General Intelligence is the point at which a machine has the same intellectual capabilities as a human.

It is also called "strong AI".

A six-month pause

In March 2023, more than 1000 technology experts called on "all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4" (then the latest version of ChatGPT).

"AI systems with human-competitive intelligence can pose profound risks to society and humanity," Apple co-founder Steve Wozniak and other technology leaders, including Tesla and SpaceX owner Elon Musk, wrote in the letter.

Musk was one of the co-founders of OpenAI, but subsequently left due to disagreements with the firm's management.

In a letter published by the non-profit organization Future of Life Institute, experts said that if companies refuse to quickly stop their projects, "governments should intervene and impose a moratorium" so that security measures can be developed and implemented.

"As stupid as it is smart"

The first letter was also signed by Carissa Véliz of the Institute for Ethics in AI at the University of Oxford. But she chose not to sign the later statement from the Center for AI Safety warning of human extinction, which in her view goes too far.

"The artificial intelligence we are building right now is as stupid as it is smart," she said. "Anyone who tries ChatGPT or other AI tools will notice that they are very, very limited."

Véliz says she is more concerned that AI can generate disinformation at tremendous speed.

“As the US election approaches, important platforms like Twitter and others are firing their AI ethics and safety teams. This worries me much more,” she emphasized.

The US government recognizes potential threats.

“Artificial intelligence is one of the most powerful technologies of our time, but to take advantage of the opportunities it presents, we must first mitigate the risks it poses,” the White House said in a May 4 statement.

The US Congress even summoned OpenAI CEO Sam Altman to answer questions about ChatGPT.

During a Senate hearing, Altman said it was extremely important that his industry be regulated by the government, as AI is becoming more powerful every day.

Carlos Ignacio Gutiérrez, a public policy researcher at the Future of Life Institute, explained that one of the big problems AI creates is that "there is no collegial body of experts deciding how to regulate it, as there is, for example, with the Intergovernmental Panel on Climate Change (IPCC)."

This brings us to the third and final stage of AI.

3. Artificial Superintelligence (ASI)

When we reach stage 2 (AGI), we will almost immediately move to the final stage: artificial superintelligence (ASI). This will happen when artificial intelligence surpasses human intelligence.

Oxford University philosopher and artificial intelligence expert Nick Bostrom defines superintelligence as an intelligence that is "greatly superior to the best human brain in virtually every area, including scientific creativity, general intelligence, and social skills."

"People who become engineers, nurses or lawyers have to study for a long time. The problem with ASI is that it can improve itself continuously, while we cannot," Gutiérrez explains.

Science fiction

This concept of development is reminiscent of the plot of the movie "Terminator", in which the machines start a nuclear war to destroy humanity.

Arvind Narayanan, a computer scientist at Princeton University, has previously said that sci-fi-style disaster scenarios are unrealistic: "Current artificial intelligence is nowhere near capable enough for these risks to materialize. As a result, such scenarios distract attention from the near-term harms of AI."

While there is much debate about whether a machine can truly acquire the broad intelligence of a human, especially emotional intelligence, this prospect is what most worries those who believe we are close to achieving ASI.

Recently the so-called "godfather of artificial intelligence," Geoffrey Hinton, a pioneer of machine learning, warned that we may be about to reach that milestone.

"Machines today are no smarter than us, as far as I can see. But I think they may soon become so," said Hinton, 75, who recently left Google.

He said he now regrets his work and fears that "bad people" will use artificial intelligence for "bad deeds."

“Imagine, for example, that someone like Russian President Vladimir Putin decided to give robots the ability to create their own intermediate goals,” he argued.


In his view, a machine could eventually set itself the goal of "gaining more power."

At the same time, Hinton noted that in the short term he expects artificial intelligence to bring far more benefits than risks.

"So I don't think we should stop developing it," he concluded.

Extinction or immortality

British physicist Stephen Hawking has long warned future generations about the dangers of AI.

“The development of full-fledged artificial intelligence could mean the end of the human race,” he said in 2014, four years before his death.

According to him, a machine with this level of intelligence could act on its own and redesign itself.

Nanobots and immortality

One of the biggest AI enthusiasts is futurist-inventor and author Ray Kurzweil, an AI researcher at Google and co-founder of Silicon Valley's Singularity University.

Kurzweil believes that humans will be able to use super intelligent AI to overcome biological barriers.

In 2015, he predicted that by 2030 humans could achieve immortality thanks to nanobots (extremely small robots) operating inside our bodies, repairing any damage and curing disease.

AI control

Gutiérrez agrees that the key is to create a system of AI governance.

"Imagine a future in which some entity has so much information about every person on the planet and their habits that it can control us in ways we don't even notice," he says. "The worst-case scenario is not wars between humans and robots. The worst thing is that we fail to realize we are being manipulated, because we share the planet with a being far smarter than us."

