Why we need a global framework with inbuilt safety mechanisms for AI

Dec 12, 2024 07:20 IST

With the advent of new technologies like ChatGPT, governments and policymakers across the world are obsessed with AI. Nations are locked in a mindless competition to overtake one another, convinced that those left behind will be the losers. Votaries of AI are excited, but sobering voices among them are worried, and not without reason. Geoffrey Hinton, a pioneer of AI, has emphasised its potential to surpass human intellectual capabilities. While he has said that AI “will be comparable with the industrial revolution,” he has also warned that “we also have to worry about a number of possible bad consequences, particularly the threat of these things getting out of control.” What must not be forgotten is how disruptive new technologies can be for societies and economies.

Today, AI technologies are embedded in our lives, from virtual assistants and smart home devices to AI-driven algorithms that influence what we see online. Yet tools like ChatGPT, Copilot and Gemini have been found to generate fabricated content that appears authentic. AI models are trained on vast amounts of data containing both accurate and inaccurate content, as well as societal and cultural biases. Since these models mimic patterns in their training data without discerning the truth, they can reproduce the falsehoods and biases present in that data, which can result in discriminatory targeting.

In March 2023, leaders of technology companies and researchers cautioned the world about the “profound risks to society and humanity” posed by AI tools. Appealing for a moratorium, they warned that AI developers are locked in an out-of-control race to build ever more powerful digital minds that no one, not even their creators, can understand or reliably control. Examples of AI misuse abound. AI has become a tool for increasingly sophisticated forms of fraud. AI-generated images, videos and audio clips are being manipulated to create realistic but false representations of events and individuals. Deepfakes are being deployed in political campaigns and on social media to spread misleading information, undermining public trust. Recent episodes underscore the urgent need for vigilance as AI technologies become more accessible and more sophisticated.

Governments and institutions can use AI to monitor social media, track individuals or suppress certain types of content, which can curtail free speech. Accountability for the actions of autonomous systems remains unclear, complicating legal and ethical responsibility. The impact of AI is not limited to the digital world; it extends to the job market as well. Automation and AI systems are rapidly replacing human workers in sectors ranging from manufacturing to customer service, rendering millions of workers redundant.

The malaise is widespread, threatening social stability and the rules-based order. It is important to recall the alarming rise of cybercrime, fraud and, in particular, the abuse of social media, which has destroyed reputations, targeted and shamed vulnerable innocents and triggered suicides. Victims are stripped of their privacy and dignity, and there are no guardrails to protect citizens. The enactment of laws to check abuse and enforce the accountability of platforms and service providers has unfortunately not ensured the timely redressal of grievances. The mechanisms for redressal are complex and inaccessible to those who are most vulnerable: the poor and the unaware.

Although governments and policymakers are conscious of the benefits of new technologies, the pitfalls cannot be wished away. There are real benefits, but also disturbing downsides, including irresponsible use and misuse by criminals and fraudsters. What is required is a healthy ecosystem for the judicious use of AI and the mitigation of its risks. The EU has recently introduced the Artificial Intelligence Act, 2024, which classifies AI systems by their potential risk and imposes corresponding obligations on developers. Similarly, the US has proposed the Algorithmic Accountability Act, 2023, to assess the impact of AI in high-stakes areas like employment, housing and credit. India, at present, has no single, unified law to govern AI or to systematically regulate its functioning and development.

Therefore, there is a need for a comprehensive global framework and a functional mechanism, with inbuilt safeguards and checks and balances, to tackle crime, fraud and the misuse of new technologies. Existing laws and their adverse fallouts must be evaluated to ensure that the legal path leads to a much-needed balanced approach. Governments and the law-enforcement agencies investigating such cases must swiftly bring criminals to book. Unfortunately, the owners of the platforms and new technologies are ignoring these wake-up calls, often for commercial gain.

There is a need to pause and reflect. A dispassionate evaluation of AI’s benefits, along with the disruptions it causes, is required. We must be clear about what kind of society we wish to build: one that is sensitive and humane, or one that lacks empathy.

Anand Sharma (former Union Cabinet Minister), Shimpy Sharma and Alexandra Celestine are advocates at the Supreme Court of India


