Techno-optimists, doomsdayers and Silicon Valley’s riskiest AI debate

by INBV News
December 17, 2023

OpenAI CEO Sam Altman speaks with reporters on his arrival at the Senate bipartisan Artificial Intelligence (AI) Insight Forum on Capitol Hill in Washington, DC, on September 13, 2023. (Elizabeth Frantz for The Washington Post via Getty Images)

Now more than a year after ChatGPT’s introduction, the biggest AI story of 2023 may have turned out to be less the technology itself than the drama in the OpenAI boardroom over its rapid advancement. Through the ousting, and subsequent reinstatement, of Sam Altman as CEO, the underlying tension for generative artificial intelligence going into 2024 is clear: AI is at the center of a huge divide between those who are fully embracing its rapid pace of innovation and those who want it to slow down because of the many risks involved.

The debate — known in tech circles as e/acc vs. decels — has been making the rounds in Silicon Valley since 2021. But as AI grows in power and influence, it’s increasingly important to understand both sides of the divide.

Here’s a primer on the key terms and some of the prominent players shaping AI’s future.

e/acc and techno-optimism

The term “e/acc” stands for effective accelerationism.

In short, those who are pro-e/acc want technology and innovation to move as fast as possible.

“Technocapital can usher in the next evolution of consciousness, creating unthinkable next-generation lifeforms and silicon-based awareness,” the backers of the idea explained in the first-ever post about e/acc.

In terms of AI, it is “artificial general intelligence,” or AGI, that underlies the debate here. AGI is a super-intelligent AI that is so advanced it could do things as well as or better than humans. AGIs can also improve themselves, creating an endless feedback loop with limitless possibilities.

Some think that AGIs will have the capabilities to end the world, becoming so intelligent that they figure out how to eradicate humanity. But e/acc enthusiasts choose to focus on the benefits that an AGI can offer. “There is nothing stopping us from creating abundance for every human alive other than the will to do it,” the founding e/acc substack explained.

The founders of the e/acc movement have been shrouded in mystery. But @basedbeffjezos, arguably the biggest proponent of e/acc, recently revealed himself to be Guillaume Verdon after his identity was exposed by the media.

Verdon, who formerly worked for Alphabet, X, and Google, is now working on what he calls the “AI Manhattan project” and said on X that “this is not the end, but a new beginning for e/acc. One where I can step up and make our voice heard in the traditional world beyond X, and use my credentials to provide backing for our community’s interests.”

Verdon is also the founder of Extropic, a tech startup he described as “building the ultimate substrate for Generative AI in the physical world by harnessing thermodynamic physics.”

An AI manifesto from a top VC

One of the most prominent e/acc supporters is venture capitalist Marc Andreessen of Andreessen Horowitz, who previously called Verdon the “patron saint of techno-optimism.”

Techno-optimism is exactly what it sounds like: believers think more technology will ultimately make the world a better place. Andreessen wrote the Techno-Optimist Manifesto, a 5,000-plus-word statement that explains how technology will empower humanity and solve all of its material problems. Andreessen even goes so far as to say that “any deceleration of AI will cost lives,” and that it would be a “form of murder” not to develop AI enough to prevent deaths.

Another techno-optimist piece he wrote, called Why AI Will Save the World, was reposted by Yann LeCun, chief AI scientist at Meta, who is known as one of the “godfathers of AI” after winning the prestigious Turing Award for his breakthroughs in AI.

Yann LeCun, chief AI scientist at Meta, speaks at the Viva Tech conference in Paris, June 13, 2023. (Chesnot | Getty Images)

LeCun labels himself on X as a “humanist who subscribes to both Positive and Normative forms of Active Techno-Optimism.”

LeCun, who recently said that he doesn’t expect AI “super-intelligence” to arrive for quite a while, has served as a vocal counterpoint in public to those who he says “doubt that current economic and political institutions, and humanity as a whole, will be capable of using [AI] for good.”

Meta’s embrace of open-source AI underlies LeCun’s belief that the technology will offer more potential than harm, while others have pointed to the dangers of a business model like Meta’s, which is pushing for widely available gen AI models to be placed in the hands of many developers.

AI alignment and deceleration

In March, an open letter by Encode Justice and the Future of Life Institute called for “all AI labs to immediately pause for at least six months the training of AI systems more powerful than GPT-4.”

The letter was endorsed by prominent figures in tech, such as Elon Musk and Apple co-founder Steve Wozniak.

OpenAI CEO Sam Altman addressed the letter back in April at an MIT event, saying, “I think moving with caution and an increasing rigor for safety issues is really important. The letter I don’t think was the optimal way to address it.”

Altman was caught up in the battle anew when the OpenAI boardroom drama played out and original directors of the nonprofit arm of OpenAI grew concerned about the rapid rate of progress and its stated mission “to ensure that artificial general intelligence — AI systems that are generally smarter than humans — benefits all of humanity.”

Some of the ideas from the open letter are key to decels, supporters of AI deceleration. Decels want progress to slow down because the future of AI is risky and unpredictable, and one of their biggest concerns is AI alignment.

The AI alignment problem tackles the idea that AI will eventually become so intelligent that humans won’t be able to control it.

“Our dominance as a species, driven by our relatively superior intelligence, has led to harmful consequences for other species, including extinction, because our goals are not aligned with theirs. We control the future — chimps are in zoos. Advanced AI systems could similarly impact humanity,” said Malo Bourgon, CEO of the Machine Intelligence Research Institute.

AI alignment research, such as MIRI’s, aims to train AI systems to “align” them with the goals, morals, and ethics of humans, which would prevent any existential risks to humanity. “The core risk is in creating entities much smarter than us with misaligned objectives whose actions are unpredictable and uncontrollable,” Bourgon said.

Government and AI’s end-of-the-world problem

Christine Parthemore, CEO of the Council on Strategic Risks and a former Pentagon official, has devoted her career to de-risking dangerous situations, and she recently told CNBC that when we consider the “mass scale death” AI could cause if used to oversee nuclear weapons, it is a problem that requires immediate attention.

But just “watching the problem” won’t do any good, she stressed. “The whole point is addressing the risks and finding the solution sets that are most effective,” she said. “It’s dual-use tech at its purest,” she added. “There is no case where AI is more of a weapon than a solution.” For example, large language models will become virtual lab assistants and accelerate medicine, but will also help nefarious actors identify the best and most transmissible pathogens to use for attacks. This is among the reasons AI can’t be stopped, she said. “Slowing down is not part of the solution set,” Parthemore said.

Earlier this year, her former employer the DoD said that in its use of AI systems there will always be a human in the loop. That’s a protocol she says should be adopted everywhere. “The AI itself cannot be the authority,” she said. “It can’t just be, ‘the AI says X.’ … We need to trust the tools, or we should not be using them, but we need to contextualize. … There is enough general lack of information about this toolset that there is a higher risk of overconfidence and overreliance.”

Government officials and policymakers have started paying attention to these risks. In July, the Biden-Harris administration announced that it had secured voluntary commitments from AI giants Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI to “move towards safe, secure, and transparent development of AI technology.”

Just a few weeks ago, President Biden issued an executive order that further established new standards for AI safety and security, though stakeholder groups across society are concerned about its limitations. Similarly, the U.K. government introduced the AI Safety Institute in early November, the first state-backed organization focused on navigating AI.

Britain’s Prime Minister Rishi Sunak (L) attends an in-conversation event with X (formerly Twitter) CEO Elon Musk (R) in London on November 2, 2023, following the UK Artificial Intelligence (AI) Safety Summit. (Kirsty Wigglesworth | AFP via Getty Images)

Amid the global race for AI supremacy, and its links to geopolitical rivalry, China is implementing its own set of AI guardrails.

Responsible AI promises and skepticism

OpenAI is currently working on Superalignment, which aims to “solve the core technical challenges of superintelligent alignment in four years.”

At Amazon’s recent Amazon Web Services re:Invent 2023 conference, the company announced new capabilities for AI innovation alongside the implementation of responsible AI safeguards across the organization.

“I often say it is a business imperative, that responsible AI shouldn’t be seen as a separate workstream but ultimately integrated into the way in which we work,” says Diya Wynn, the responsible AI lead for AWS.

According to a study commissioned by AWS and conducted by Morning Consult, responsible AI is a growing business priority for 59% of business leaders, with about half (47%) planning to invest more in responsible AI in 2024 than they did in 2023.

Although factoring in responsible AI may slow AI’s pace of innovation, teams like Wynn’s see themselves as paving the way toward a safer future. “Companies are seeing value and beginning to prioritize responsible AI,” Wynn said, and as a result, “systems are going to be safer, secure, [and more] inclusive.”

Bourgon isn’t convinced and says actions like those recently announced by governments are “far from what will ultimately be required.”

He predicts that AI systems are likely to advance to catastrophic levels as early as 2030, and that governments need to be prepared to indefinitely halt AI systems until leading AI developers can “robustly demonstrate the safety of their systems.”
