People fear that the development of artificial intelligence will one day reach a point where it turns against its inventors and destroys humanity. Artificial intelligence currently has no awareness of the challenges associated with it, nor indeed any form of awareness at all, yet London is organizing a summit on its regulation, the White House has just signed an executive order to regulate it, and the European Union is on a similar path, aiming to adopt new rules in this field before the end of the year.

Below is an overview of the risks associated with this technological revolution, reports Al-Rai daily.

Algorithms have long been part of people’s daily lives, but the unprecedented success of ChatGPT (developed by OpenAI) reignited the controversy in 2023. In this context, so-called generative artificial intelligence, which can produce text, images, and sounds from simple prompts in everyday language, raises particular concerns about the obsolescence of certain jobs.

Relying on machines to perform a large number of tasks has already had this effect in several sectors, from agriculture to factories. Thanks to its generative capabilities, AI now also affects broad categories of workers: white-collar employees, lawyers, doctors, journalists, teachers, and others.

“By 2030, machines could be used to complete up to 30 percent of current work hours in the US economy, a trend accelerated by generative artificial intelligence,” consulting firm McKinsey said in July.

As a solution, major American technology companies often invoke the principle of universal basic income, a minimum allowance paid to everyone that would compensate for job losses, even though its effectiveness has not been widely demonstrated.

Artists were among the first to object to programs such as DALL-E (from OpenAI) or Midjourney, which generate images on demand.

Like developers, writers, and other creative professionals, they accuse these companies of using their work to build their technology without permission or compensation, since generative AI relies on language models, computer systems that must ingest large amounts of data collected from the Internet.

Fake news and deepfakes are nothing new, but generative AI raises concerns about a surge in inauthentic content online. Artificial intelligence specialist Gary Marcus warns that elections may be “won by the people who are most talented at spreading misinformation.”

In his view, “democracy depends on the ability to access the information necessary to make the right decisions. If no one is able to distinguish between what is true and what is not, that will be the end.”

Generative AI also makes it easier for fraudsters engaged in phishing to craft more convincing messages. Some language models, such as FraudGPT, have even been trained specifically to produce malicious content.

But above all, the technology makes it very easy to clone a face or a voice and thus convince people, for example, that their child has been kidnapped, in order to extort them.

As with many other technologies, the main risk of AI is related to humans – from design to use. For example, recruiting software can discriminate against candidates if it automatically reproduces human biases present in society.

A language model is neither an advocate for marginalized groups nor racist in itself; its output depends on the data and instructions provided by its developers. More broadly, AI can facilitate many activities that endanger humans and their basic rights, from designing harmful molecules to surveilling populations.

Some in the sector fear that artificial intelligence will become capable of reasoning to the point where it could control humans.
OpenAI says it is working to build “artificial general intelligence” (intelligence surpassing that of humans) with the aim of “benefiting all of humanity,” and it relies on widespread use of its models to detect and correct problems.

Meanwhile, Sam Altman and other Big Tech leaders this summer called for addressing the “risk of extinction” of humanity “from artificial intelligence.”

For historian Émile Torres, such rhetoric is a distraction from very real problems. He told Agence France-Presse in recent statements:

“Talking about the extinction of humanity, a truly horrific event, is much more attractive than talking about Kenyan workers who are paid $1.32 an hour” to moderate content for artificial intelligence systems, “or about the exploitation of artists and writers” whose work is used to feed AI models.
