More and more people are beginning to realize that Artificial Intelligence is a high-risk technology that could lead to the extermination of the species. That may sound like an exaggeration, but if you follow the developments in AI closely, you’ll see that it’s an accurate assessment. AI is a potentially lethal technology that can either be used to benefit mankind or pave the way to unimaginable death and destruction. Check out this excerpt from an article at Scientific American:
A 2023 survey of AI experts found that 36 percent fear that AI development may result in a “nuclear-level catastrophe.” Almost 28,000 people have signed on to an open letter written by the Future of Life Institute, including Steve Wozniak, Elon Musk, the CEOs of several AI companies and many other prominent technologists, asking for a six-month pause or a moratorium on new advanced AI development… Why are we all so concerned? In short: AI development is going way too fast. (Here’s Why AI May Be Extremely Dangerous—Whether It’s Conscious or Not, Scientific American)
Elon Musk has used his platform at X to amplify his concerns about AI and to emphasize the need to proceed with caution in order to minimize the risks. Regrettably, Musk’s concerns have been lost on the Trump administration, which sees AI as the weapon it needs to maintain America’s dominant position in the world order. The conflict brewing between Trump and Musk on this key issue has not yet exploded into public view, but we can be reasonably certain that the clash will take place sometime in the near future. Consider, for example, Vice President JD Vance’s alarming speech at the AI Summit in France this week, in which the VP flatly rejected the push for prudent regulation or government oversight while characterizing people who take such concerns seriously as “too self-conscious and too risk averse”. We don’t need to wonder what the administration’s approach will be. In fact, Vance summed it up for his audience in one shocking sentence: “The AI future is not going to be won by hand-wringing about safety…”
So, taking sensible steps to avoid a full-blown mass extinction event is “hand-wringing”??
Naturally, we disagree with that view. And as we noted earlier, nearly 28,000 people have already signed a letter requesting a six-month moratorium on advanced AI development. Are we to assume that these 28,000 people—many of whom have far more expertise on the subject of artificial intelligence than Vance—are merely hand-wringing worrywarts whose views are not grounded in an intimate grasp of the topic and the threat it poses to humanity?
And what exactly are those threats? Can they be summarized? Here’s more from Scientific American:
Artificial intelligence algorithms will soon reach a point of rapid self-improvement that threatens our ability to control them and poses great potential risk to humanity… Once AI can improve itself, which may be not more than a few years away, and could in fact already be here now, we have no way of knowing what the AI will do or how we can control it. This is because superintelligent AI… will be able to run circles around programmers and any other human by manipulating humans to do its will; it will also have the capacity to act in the virtual world through its electronic connections, and to act in the physical world through robot bodies… (Here’s Why AI May Be Extremely Dangerous—Whether It’s Conscious or Not, Scientific American)
Musk’s concerns are even more explicit. Here’s a brief summary provided by Grok, Musk’s own AI chatbot, in response to the question “What concerns Musk most about AI?”:
There’s a significant concern about how AI, especially powerful forms like AGI (Artificial General Intelligence), should be developed and controlled. Musk has expressed distrust towards OpenAI’s leadership, particularly Sam Altman, regarding the stewardship of such technology. He has indicated a preference for OpenAI to return to its safety-focused and open-source roots, which he believes would align better with the broader public interest rather than corporate profit motives…
Existential Risk to Humanity:
Musk has warned that AI could either eliminate or constrain humanity’s growth, highlighting the risk of AI becoming “vastly smarter than humans” and potentially leading to scenarios where AI might decide to dispose of humanity or place it under strict control. Musk likens this concern to the dangers of nuclear physics, where the power can be used for both beneficial and catastrophic outcomes…
Need for Regulation:
Musk has advocated for government regulation of AI, expressing stress over the technology’s advancement without adequate regulatory frameworks. He fears AI could lead to “civilization destruction” if not properly managed, proposing the creation of an “insight committee” to oversee AI development (source: web results from cnn.com, foxbusiness.com, and reuters.com).
AI’s Potential to Outsmart Humans:
He has repeatedly mentioned the risk of AI surpassing human intelligence, leading to scenarios where AI might not align with human values or interests. Musk has cited this as one of the biggest risks to civilization, comparing it to summoning a demon…
Misuse of AI:
Musk has also expressed concerns about AI being used for… the proliferation of autonomous weapons systems, which could lead to unintended escalations in conflicts.
Safety and Ethics:
Musk advocates for careful development of AI, suggesting that safety protocols should be established before advancing to more powerful AI systems. He has called for a regulatory framework similar to those for aviation or pharmaceuticals to ensure AI’s safety. (Grok)
Musk’s concerns, which emerge from his vast technological experience, clearly conflict with the views of JD Vance and the administration, who regard regulation as a form of bureaucratic strangulation that stifles innovation. Here’s part of what Vance had to say at the AI Summit in Paris on Tuesday:
When conferences like this convene to discuss a cutting-edge technology, oftentimes, I think our response is to be too self-conscious, too risk averse. But never have I encountered a breakthrough in tech that so clearly calls us to do precisely the opposite. Our administration, the Trump administration, believes that AI will have countless revolutionary applications… And to restrict its development now would not only unfairly benefit incumbents in the space, it would mean paralyzing one of the most promising technologies we have seen in generations… This administration will ensure that American AI technology continues to be the gold standard worldwide, and we are the partner of choice for others, foreign countries, and certainly businesses, as they expand their own use of AI. Number two, we believe that excessive regulation of the AI sector could kill a transformative industry just as it’s taking off, and we’ll make every effort to encourage pro-growth AI policies. And I’d like to see that deregulatory flavor making its way into a lot of the conversations at this conference. Number three, we feel very strongly that AI must remain free from ideological bias and that American AI will not be co-opted into a tool for authoritarian censorship…
The United States of America is the leader in AI, and our administration plans to keep it that way. The US possesses all components across the full AI stack, including advanced semiconductor design, frontier algorithms, and, of course, transformational applications… And to safeguard America’s advantage, the Trump administration will ensure that the most powerful AI systems are built in the US with American designed and manufactured chips. (America must dominate AI because AI provides the means for domination.)
America wants to partner with all of you, and we want to embark on the AI revolution before us with the spirit of openness and collaboration. But to create that kind of trust, we need international regulatory regimes that foster the creation of AI technology rather than strangle it. And we need our European friends in particular to look to this new frontier with optimism rather than trepidation… (Note—Ignore the risks; damn the torpedoes.)
…With the President’s recent executive order on AI, we’re developing an AI action plan that avoids an overly precautionary regulatory regime while ensuring that all Americans benefit from the technology and its transformative potential…
US innovators of all sizes already know what it’s like to deal with onerous international rules…
Ladies and gentlemen… the AI future is not going to be won by hand-wringing about safety…
Now at this moment, we face the extraordinary prospect of a new industrial revolution, one on par with the invention of the steam engine or Bessemer steel, but it will never come to pass if overregulation deters innovators from taking the risks necessary to advance the ball… (TRANSCRIPT: VP JD Vance’s Speech at Paris AI Summit 2025, Singju Post)
Vance’s entire presentation was little more than an anti-regulation harangue designed to belittle anyone who failed to subscribe to his “Damn the torpedoes, all ahead full” philosophy. What the speech shows is that the Trump team believes that anyone who expresses the slightest support for modest oversight of this potentially lethal technology is a namby-pamby trying to block the path to the future. But what is so surprising about Vance’s analysis is that it appears to be the polar opposite of Musk’s. Musk has expressed no such opposition to regulation or oversight; quite the contrary. As we’ve already shown, Musk feels quite strongly that we must reach international consensus on how AI should be regulated to ensure things don’t get out of hand.
It’s worth noting that Elon Musk made a $97.4 billion bid last week to buy back OpenAI, saying that its CEO, Sam Altman, had abandoned the organization’s original mission to remain an open-source nonprofit. At present, OpenAI is a “closed source, maximum-profit company effectively controlled by Microsoft,” a complete reversal of Musk’s vision of a transparent, community-oriented learning tool that could be used for the benefit of humanity. The nearly $100 billion offer underscores the importance Musk attaches to AI development given the risks it poses for humanity. In other words, he wants to buy OpenAI back because he doesn’t consider its current leadership “trustworthy”.
Elon Musk told Tucker Carlson: “I don’t trust Sam Altman, and I don’t think we want the most powerful AI in the world controlled by someone who’s not trustworthy.”
It’s also worth noting that a growing number of experts have been fleeing OpenAI, complaining that the company is not taking steps to address their safety concerns. Among them are Daniel Kokotajlo, William Saunders, Ilya Sutskever, Jan Leike, Gretchen Krueger, Leopold Aschenbrenner, Pavel Izmailov, Cullen O’Keefe, Miles Brundage, and Rosie Campbell.
Why are so many well-paid professionals fleeing OpenAI while warning of safety concerns?
Because, as former OpenAI researcher Steve Adler candidly stated, AI labs are taking a “very risky gamble” with humanity amid the race toward AGI.
That’s it in a nutshell. These people simply believe that it is immoral for them to participate in a project that puts the species at risk. Here’s more from the BBC:
The UK and US have not signed an international agreement on artificial intelligence (AI) at a global summit in Paris. The statement, signed by dozens of countries including France, China and India, pledges an “open”, “inclusive” and “ethical” approach to the technology’s development… The statement signed by 60 countries sets out an ambition to reduce digital divides by promoting AI accessibility, and ensuring the tech’s development is “transparent”, “safe” as well as “secure and trustworthy”…
…US Vice President JD Vance told delegates in Paris that too much regulation of artificial intelligence (AI) could “kill a transformative industry just as it’s taking off”. Vance told world leaders that AI was “an opportunity that the Trump administration will not squander” and said “pro-growth AI policies” should be prioritized over safety…
However, UKAI – a trade body representing businesses working in the sector across the country – said the refusal to sign was the right decision. “While UKAI agrees that being environmentally responsible is important, we question how to balance this responsibility with the growing needs of the AI industry for more energy,” said its chief executive Tim Flagg.
“UKAI cautiously welcomes the Government’s refusal to sign this statement as an indication that it will explore the more pragmatic solutions that UKAI has been calling for – retaining opportunities to work closely with our US partners,” he added. (UK and US refuse to sign international AI declaration, BBC)
Judging from its webpage, UKAI appears to be an industry lobby group that may have influenced Vance’s decision to reject the Paris AI Declaration. Here’s a snippet from their webpage:
UKAI represents companies of all sizes with an interest in AI, from startups to industry leaders, ensuring their voices are heard in shaping policy. By working closely with UK Government and regulators, UKAI ensures AI policies foster innovation and business growth without stifling progress. UKAI is the bridge between policymakers and the AI community, offering a platform for feedback on legislation, programmes, and initiatives. UKAI believes in the transformative role AI can play in the UK’s social and economic development. UKAI
So, did this British industry group influence the administration’s position on the Declaration; is that what’s going on? And if it did, then what role did the tech giants in Silicon Valley play? We put that very question to Grok: “Did Big Tech push JD Vance to oppose the Paris AI Declaration?”
Answer—Vance’s background in Silicon Valley and his funding from tech billionaires like Peter Thiel suggest he has strong connections with tech leaders. These ties might influence his policy perspectives, but no direct link to Big Tech lobbying him against the AI declaration is explicitly stated (NPR)… While Vance’s positions seem to align with Big Tech’s interests in avoiding stringent regulations, the available information does not explicitly confirm that these companies directly influenced his decision regarding the AI declaration. His actions could be seen as part of a broader policy stance on tech regulation, influenced by his political beliefs, his role in the Trump administration, and his previous interactions with the tech industry rather than a specific push from Big Tech companies. However, given his criticism of “excessive regulation,” it’s reasonable to infer that his views are at least sympathetic to Big Tech’s general stance on regulatory matters. (Grok)
In short, we cannot yet verify that the administration’s rejection of the Paris AI Declaration was a response to the lobbying efforts of the Silicon Valley giants. But we do think that it is highly likely that these corporations were at least consulted on the matter before the decision was made.
In any event, the Paris AI Summit was largely a public relations extravaganza that flopped miserably thanks to Vance’s shocking refusal to sign the Declaration. Keep in mind, the Declaration contains no onerous regulations or binding obligations; it is merely an expression of support for a few generalized principles concocted to build public confidence. Instead of showing a willingness to work collaboratively with other world leaders on a matter of global security, the Trump administration decided to stick a thumb in their eye while conveying its intention to develop AI in any way it sees fit. The fact is that Trump and his lieutenants see AI as a tool for global domination and for maintaining America’s privileged position in the world order. And for that, they are willing to risk everything.