
Is Mr. Musk’s AI moratorium a subterfuge to aid others in catching up with ChatGPT?


By Dr. Robert Suzic

Summary: While ChatGPT is an innovative AI tool, it does not produce ground-breaking content and is not a new AI brain. Despite its benefits in language enhancement, coding efficiency, and storytelling, prominent figures such as Elon Musk and Steve Wozniak have expressed deep concerns about the technology's potential risks and are calling for a six-month moratorium on its development. However, some motives behind these concerns may be to impede competition or to overhype the technology in order to attract additional funding for their own ideas.

Risks such as biased training data, a commodification of knowledge that may jeopardize information integrity, and personal data protection issues are valid AI-specific concerns. The proposed moratorium on AI development is likely an illusion. Instead of stopping AI development, embracing change, educating oneself about ethical AI, and improving the responsiveness of watchdog organizations would be more effective in addressing these challenges while preserving AI's positive contributions to society.

Many who have delved deeper into ChatGPT realize that it does not produce ground-breaking, awe-inspiring content. Although innovative, it remains a somewhat mediocre tool compared to truly creative content created by a human. It can be highly beneficial for language enhancement, coding efficiency, storytelling, and fact presentation. Yet it can also be likened to an average politician: lacking expertise on every issue, yet always having an opinion, and able to deceive without remorse.

Geoffrey Hinton, often considered the godfather of artificial intelligence for his early advocacy of neural network-based machine learning, also asserts, in his CBS interview[1], that ChatGPT isn't particularly impressive from an AI reasoning standpoint. OpenAI, the organization behind ChatGPT, has never claimed that ChatGPT is an all-powerful AI. So, is ChatGPT a new AI brain? The answer is no. It can, however, supplement aspects of our brain's linguistic centre's capabilities. The key distinction from rational humans is that it is not designed to make decisions. Nevertheless, AI in general, and ChatGPT in particular, can offer valuable insights for decision-making. These insights should always be scrutinized, just as we would scrutinize advice from any advisor. We humans should bear in mind that what a computer says is not necessarily fact or truth; it is just an output to be interpreted.
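To make that last point concrete, consider a deliberately toy sketch in Python. This is not how ChatGPT is built (GPT models are large transformer neural networks trained on vast text corpora), and the tiny corpus below is invented purely for illustration; but it shows the shared principle that a language model samples a plausible continuation from statistical patterns in its training text rather than retrieving verified facts.

    import random
    from collections import defaultdict

    # A toy "language model": bigram counts over an invented corpus.
    # Note the deliberate falsehood planted in the training text.
    corpus = (
        "the capital of france is paris . "
        "the capital of france is lyon . "
        "the capital of sweden is stockholm ."
    ).split()

    bigrams = defaultdict(list)
    for prev, nxt in zip(corpus, corpus[1:]):
        bigrams[prev].append(nxt)

    def continue_text(prompt_word, length=5):
        """Sample a fluent-looking continuation, one word at a time."""
        out = [prompt_word]
        for _ in range(length):
            candidates = bigrams.get(out[-1])
            if not candidates:
                break
            out.append(random.choice(candidates))  # statistics, not truth
        return " ".join(out)

    # The same prompt may assert "paris" or "lyon" with equal fluency:
    for _ in range(3):
        print(continue_text("is"))

Scaled up by many orders of magnitude, the same mechanism produces an articulate advisor; it does not produce an oracle, which is why every answer still deserves a second look.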

Why, then, are notable figures such as Elon Musk, CEO of Tesla and co-founder of OpenAI; Steve Wozniak, Apple co-founder, who praised ChatGPT in a CNBC interview last month; and Professor Stuart Russell, a pioneer of general AI, voicing their concerns in an open letter titled "Pause Giant AI Experiments"[2]?

Although ChatGPT's algorithms are largely open, OpenAI has devoted an immense amount of time to training and fine-tuning it. Competitors appear to be trailing behind and may want to catch up. Mr. Musk, in particular, might be even more discontented, since he sold his stake in OpenAI for 30 times less than its current valuation. Just a month ago, he announced that he is considering starting a rival AI business. Thus, why not first impede the competition? A second motive could be overhyping ChatGPT and chatbot technology, turning it into something it is not: an all-powerful, vicious, and uncontrollable AI brain. That way, it is easier to secure funding for a new "holy grail" technology project.

Upon examining the references in the open letter, it becomes clear that many of the cited risks apply to any information system, whether AI-based or conventional, such as producing inaccurate or unreliable outputs. Many legacy information systems, ten or more years old, contain millions of lines of code whose business logic is almost impossible to oversee. Nonetheless, three AI-specific risks should not simply be overlooked:

  1. Biases in training data, which can perpetuate stereotypes, spread abusive language, and inflict psychological harm. In the case of ChatGPT, OpenAI dedicated considerable effort to cleaning the text used for training (a simplified sketch of such filtering follows this list). While one might argue that OpenAI's approach was exemplary, Time magazine reported that some of its contractors in Kenya were paid less than 2 USD per hour and subsequently experienced psychological harm[3] while cleaning that text.
  2. Easier security breaches – Since ChatGPT simplifies software development and can convincingly mimic "company language," it may be exploited by less tech-savvy malicious actors. In other words, one no longer needs to be an expert like Mr. Robot to compromise information security.
  3. Intellectual property and personal data protection – This issue calls for greater transparency in AI model training to ensure that no legal violations occur.
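As promised above, here is a minimal sketch of the kind of training-data triage behind the first risk. The blocklist terms, function name, and sample documents are placeholders invented for this article, not OpenAI's actual pipeline; the point is that automated filters can only flag suspect text, and the flagged material still lands in front of human reviewers, which is exactly where the harm reported by Time occurred.

    # Minimal sketch of training-data triage; the blocklist terms and
    # sample documents below are placeholders for illustration only.
    BLOCKLIST = {"slur_example", "threat_example"}

    def triage(documents):
        """Split raw documents into auto-kept text and a human-review queue."""
        kept, review_queue = [], []
        for doc in documents:
            if set(doc.lower().split()) & BLOCKLIST:
                review_queue.append(doc)  # a person must still read this
            else:
                kept.append(doc)
        return kept, review_queue

    docs = ["a harmless sentence", "a threat_example aimed at someone"]
    kept, review = triage(docs)
    print(len(kept), "kept;", len(review), "queued for human review")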

In conclusion:

Embrace change and boost productivity – The proposed moratorium, or slowing down of AI development, is more of an illusion and a call for media attention to initiate a much-needed broader discussion. If we slow down in the West, China will take the AI lead. Instead of slowing down, we should focus on AI utilization and on increasing productivity in the context of an already fragile global economy.

Educate yourself – Rather than hindering AI development, why not embrace the inclusion of responsible (ethical) AI in university curricula? AI companies should also be more transparent about their data-handling practices and the algorithms they use. Additionally, professional courses on the subject should be offered. Becoming informed before forming an opinion should be a guiding principle.

Watchdog organizations should be able to react more quickly – AI regulation, and for that matter blockchain regulation, should address the most prominent risks. Lawmakers, industry experts, and public representatives should collaborate to regulate only the most urgent aspects, rather than indiscriminately prohibiting anything with a self-improving nature that could ultimately help humanity cure diseases or make the world a better place.


[1] CBS interview with Geoffrey Hinton https://www.youtube.com/watch?v=qpoRO378qRY

[2] Pause Giant AI Experiments: An Open Letter https://futureoflife.org/open-letter/pause-giant-ai-experiments/

[3] OpenAI Used Kenyan Workers on Less Than $2 Per Hour to Make ChatGPT Less Toxic, Time, https://time.com/6247678/openai-chatgpt-kenya-workers/

About the author:

Dr. Robert Suzic holds a PhD in Technology from KTH Royal Institute of Technology, Sweden, and has a track record of 23 years of professional experience in the Information Technology field.
