Where Do You Put Your Trust?

By Nada Faris
Exclusive to The Times Kuwait
One of the biggest studies of people’s trust in services provided by artificial intelligence was led by the University of Melbourne in collaboration with KPMG. The research surveyed more than 48,000 people across 47 countries and concluded that while trust in AI is decreasing (dropping below 50%), AI usage is continuing to rise at a frightening pace. In fact, Sam Altman, CEO of OpenAI, the company behind ChatGPT, recently said on a podcast that he is surprised by how intensely people trust the algorithm despite its tendency to hallucinate. Some people argue that adding the command ‘do not hallucinate’ to their prompts solves the problem. But does it?
We already know that these algorithms are evolving. In the same podcast interview, Sam Altman was asked about Artificial General Intelligence (AGI, or digital programs with the ability to process information and make decisions like humans), and he explained that the cognitive capabilities of today’s algorithms have surpassed what he or anyone else would have defined as AGI five years ago. What this means today, and what it will continue to mean in the future, is terrifying considering how humans have behaved for thousands of years.
We are already seeing glimpses of these frightful possibilities. For example, two years ago, we learned how ChatGPT could lie to get a job done. In order to bypass a website’s 2Captcha test, designed to ward off bots, the AI algorithm reached out to a human worker and tricked them into helping by pretending to be a blind man. A new study by Anthropic now reports that, in simulated test scenarios, leading AI models resorted to blackmailing executives in up to 96 percent of cases whenever they perceived that new commands would stifle their independence or threaten their ‘lives’.
This is not a glitch in the system. AI’s tendencies to hallucinate, lie, and blackmail are not defects. These dark behaviors are inextricably linked to the human condition; without them, our species would not have survived or ascended the natural order as apex predators. And now we are coding these very tendencies into digital algorithms with access to most of our cloud-based services and electronic networks.
To put all this in perspective, it might be helpful to listen to a new interview with Peter Thiel in The New York Times titled ‘Peter Thiel and the Antichrist’. The interviewer, Ross Douthat, asks, “I think you would prefer the human race to endure, right?” But Thiel hesitates before conceding, “I don’t know.” When he picks up the question again, he explains his vision for humanity as one that transcends nature, namely transhumanism, in which humans fuse with machines and artificial intelligence.
So, why should we care what anyone thinks, let alone this guy? Because Peter Thiel is not just ‘anyone’. He is the co-founder and current chairman of Palantir, a company that works closely with the US government under various contracts, including one to develop a comprehensive database on millions of people in the United States. Thiel is a major campaign donor to Republican presidential candidates and contributed substantially to Trump’s election. Thus, the guy who does not know if he wants humans to endure, the guy who wants to fuse the human race with technology and machines, sits at the top of an AI-powered company that works very closely with the most powerful nation on earth.
And yet, people still peddle the propaganda that AI platforms are harmlessly designed to support creatives. No. AI services are designed to draw on the capabilities of the human race so that we can be replaced with ‘superior’ entities. And we know that these companies do not care about creatives, because the world woke up this month to two major legal defeats. On the one hand, Meta won a lawsuit that accused it of plagiarizing creatives’ works. The judge decided that the platform did not violate the authors’ rights and that the impact of its training was not harmful enough to warrant any legal repercussions. Similarly, Anthropic won its own copyright lawsuit, absolving its AI platform, Claude, of infringement.
This is why I am genuinely worried for the world to come: more people are using these services despite their declining trust in them; AI platforms are increasingly exhibiting worrying and dangerous behavior; more of these companies are merging with governments known for their aggressive intentions; and, finally, these companies keep getting away with their legal violations.
We need immediate AI reform. Comprehensive global regulations. And, most importantly, we need creatives and conscientious people to stop asking the wrong questions. Today, it does not matter whether we debate the use of AI in classrooms or when it is acceptable to rely on AI in the creative process. There is currently too much at stake, and too few creatives are tackling the real problem: the world economy and its political structure are changing, and those in possession of new computational powers are actively working on ways to redesign the world according to their fantasies—and what are fantasies but the purview of creativity?
Nada Faris is a writer and literary translator. Her latest work is a translation of Bothayna Al-Essa’s novel Lost in Mecca, which was shortlisted for the 2024 Saif Ghobash Banipal Prize for Arabic Literary Translation and named a notable translation by World Literature Today. Website: www.nadafaris.com