Take stock of AI first

More than 1,000 technology experts have called for an immediate pause in the further development of AI (AFP)
12 Apr 2023 01:04:18 GMT9

Last week, more than 1,000 technology experts and big tech business leaders, including Tesla founder Elon Musk and Apple co-founder Steve Wozniak, sent out a letter calling for an immediate pause in the further development of artificial intelligence in order to take stock and set parameters and a regulatory framework that would ensure AI’s development and deployment are carefully considered and planned.

The letter came after months of striking launches in the domain of AI, notably through platforms such as ChatGPT, developed by OpenAI and backed by Microsoft, as well as the release of GPT-4, the highly sophisticated model that underpins ChatGPT. While ChatGPT is the most revolutionary development in AI, it is far from the only one. Google, Microsoft, Adobe and other tech firms have also been adding AI capabilities to their star products, notably search engines and other productivity tools, giving users around the world practical access to AI.

The emergence of ChatGPT, and the news that similar or even more advanced chatbots and other tools are under development in various parts of the world, has rightly alarmed many experts, both about the likely impact a super-powerful AI tool such as ChatGPT could have on employment and about whether humans can keep up with a machine that will continue to become ever more powerful without any human interference or control.

Within two weeks of its dispatch, the letter, sent by the Future of Life Institute, an NGO founded to steer technological development toward the betterment of human society, had collected more than 20,000 signatures. It called for the development of shared safety protocols, audited by independent experts, to ensure that systems adhering to them are safe beyond reasonable doubt.

The issues raised by the letter are real. For instance, AI has reached a level where it can carry out many tasks almost as independently as a human, but in a far more comprehensive manner. This poses various kinds of threats. One is the uncontrolled spread of propaganda and outright falsehoods, which could cause far greater damage than the fake news spread today through social media platforms such as Facebook or WhatsApp.

But the risks of AI are not limited to the falsehoods it can spread. AI is already interfering with matters such as copyright and original work, as ChatGPT is capable of generating at least basic articles and news reports. Besides the falsehoods that can be deeply ingrained in such documents, these capabilities also make it much harder, for example, for a college to assess a student or for a company to gauge the real talent of an employee. They could also lead to millions of lay-offs in every domain that requires some intellectual endeavor, be it coding as a software developer, drafting legal briefs for a law firm, or working as an academic researcher or journalist. These and other capabilities of advanced AI pose a serious threat to human civilization and society.

Some backers of AI may insist that fears of job losses or plagiarism are unfounded, comparing the technology to the arrival of computers decades ago and arguing that AI will generate an entirely new stream of jobs, just as computers did and as factory automation is doing today. However, the parallel does not hold, as AI may soon be able to do without even the human coders who first created it and who are enhancing it today.

But even more serious than any other threat posed by AI is what it could morph into. In many ways, it is like a pathogen or virus generated or isolated in a laboratory. The damage caused by an uncontrolled or uncontrollable spread of ever-advancing AI could be many times greater than, say, a virus that leaks from a lab and infects humans.

As with most other areas of frontier technology, governments and regulators are struggling to keep pace with the rapid developments. One could easily say that they are often miles behind, and that by the time they catch up, the technology has not only morphed into something else but has also become far too powerful for them to do anything about it.

Look at Facebook (now Meta), Google, Microsoft and Amazon. No regulator could have anticipated the breakneck speed at which these companies would come to dominate almost every sphere of our lives within years of their founding. It is only now that regulators have woken up to the threats they pose to society, and especially to free and competitive markets. Though some countries are trying to limit the influence of big tech, it has so far been an ineffectual exercise.

We are currently on the same path with AI as we were a couple of decades ago with big tech firms. Back then, regulators, bureaucrats and elected officials were swayed by the promises of sweet-talking big tech CEOs that their companies were only meant to aid and assist human civilization. Instead, those companies now have more control over our lives and our society than even the most despotic governments around the world.

We cannot take a similar risk with AI, especially since it may not even be controlled by a clutch of rich CEOs; it could instead become a self-feeding giant that no one can fight or curb.

Instead of heading blindly into yet another tech abyss, it is time for governments, elected representatives and regulators to take charge and pause further development until proper checks and balances are in place, along with international agreements, led by societies rather than a handful of businessmen, on the limits to the end-use of AI. It is time to cage the beast before it begins to feed on us.

  • Ranvir S. Nayar is managing editor of Media India Group.