Cakra News

Paytm CEO Vijay Shekhar worried about human extinction, shares ChatGPT creator OpenAI's blog post

In a tweet, Paytm CEO Vijay Shekhar wrote that he is ‘genuinely concerned with power some set of people and select countries have accumulated already’, referring to OpenAI’s blog post about superintelligent AI.

In Short

  • In a tweet, Paytm CEO Vijay Shekhar expressed his concern over superintelligent AI.
  • He shared OpenAI’s blog post.
  • The blog post states that AI could cause human extinction soon.

By Divyanshi Sharma

The idea of Artificial Intelligence (AI) becoming smarter than its creators and taking over the world has been explored in plenty of sci-fi movies. However, in the last couple of months, tech experts have warned that this might become reality sooner than we think. For now, superintelligent AI doesn’t exist, and the chatbots at our disposal need human input to function.

However, experts warn that as AI advances further, it could develop its own consciousness and might end up wiping out humanity. And this has got Paytm CEO Vijay Shekhar worried.

Paytm CEO Vijay Shekhar on AI

In a tweet, Paytm CEO Vijay Shekhar wrote that he is ‘genuinely concerned with power some set of people and select countries have accumulated already’.

He shared OpenAI’s recent blog post, in which the company announced the formation of a new team to keep tabs on superintelligent AI systems.

“Here is OpenAI blog post done this week. In less than 7 years we have system that may lead to disempowerment of humanity => even human extinction. I am genuinely concerned with power some set of people & select countries have accumulated – already,” the tweet reads.

The Paytm CEO isn’t the only one concerned about superintelligence. Geoffrey Hinton, often called the Godfather of AI, warned in an interview with MIT Technology Review earlier this year that AI could soon surpass human beings in intelligence. He added that we need to start thinking seriously about the potential consequences of creating machines that are more intelligent than humans.

OpenAI’s blog post

OpenAI, in its blog post, said that “Superintelligence will be the most impactful technology humanity has ever invented, and could help us solve many of the world’s most important problems. But the vast power of superintelligence could also be very dangerous, and could lead to the disempowerment of humanity or even human extinction. While superintelligence seems far off now, we believe it could arrive this decade.”

“Currently, we don’t have a solution for steering or controlling a potentially superintelligent AI, and preventing it from going rogue. Our current techniques for aligning AI, such as reinforcement learning from human feedback, rely on humans’ ability to supervise AI. But humans won’t be able to reliably supervise AI systems much smarter than us, and so our current alignment techniques will not scale to superintelligence. We need new scientific and technical breakthroughs,” it added.

The company then added that, to control superintelligent AI systems, it is building a ‘roughly human-level automated alignment researcher’.

“Our goal is to build a roughly human-level automated alignment researcher. We can then use vast amounts of compute to scale our efforts, and iteratively align superintelligence. To align the first automated alignment researcher, we will need to 1) develop a scalable training method, 2) validate the resulting model, and 3) stress test our entire alignment pipeline,” the blog post read.