Paper clips, parrots and safety vs. ethics

By INBV News | May 21, 2023 | Technology


Sam Altman, chief executive officer and co-founder of OpenAI, speaks during a Senate Judiciary Subcommittee hearing in Washington, DC, US, on Tuesday, May 16, 2023. Congress is debating the potential and pitfalls of artificial intelligence as products like ChatGPT raise questions about the future of creative industries and the ability to tell fact from fiction.

Eric Lee | Bloomberg | Getty Images

This past week, OpenAI CEO Sam Altman charmed a room full of politicians in Washington, D.C., over dinner, then testified for nearly three hours about the potential risks of artificial intelligence at a Senate hearing.

After the hearing, he summed up his stance on AI regulation, using terms that are not widely known among the general public.

“AGI safety is really important, and frontier models should be regulated,” Altman tweeted. “Regulatory capture is bad, and we shouldn’t mess with models below the threshold.”

In this case, “AGI” refers to “artificial general intelligence.” As a concept, it’s used to mean a significantly more advanced AI than is currently possible, one that can do most things as well as or better than most humans, including improving itself.

“Frontier models” is a way of talking about the AI systems that are the most expensive to produce and that analyze the most data. Large language models, like OpenAI’s GPT-4, are frontier models, as compared with smaller AI models that perform specific tasks like identifying cats in photos.

Most people agree that there need to be laws governing AI as the pace of development accelerates.

“Machine learning, deep learning, for the past 10 years or so, it developed very rapidly. When ChatGPT came out, it developed in a way we never imagined, that it could go this fast,” said My Thai, a computer science professor at the University of Florida. “We’re afraid that we’re racing into a more powerful system that we don’t fully comprehend and anticipate what it can do.”

But the language around this debate reveals two major camps among academics, politicians, and the technology industry. Some are more concerned about what they call “AI safety.” The other camp is worried about what they call “AI ethics.”

When Altman spoke to Congress, he mostly avoided jargon, but his tweet suggested he’s mostly concerned about AI safety — a stance shared by many industry leaders at companies like Altman-run OpenAI, Google DeepMind and well-capitalized startups. They worry about the possibility of building an unfriendly AGI with unimaginable powers. This camp believes we need urgent attention from governments to regulate development and prevent a premature end to humanity — an effort similar to nuclear nonproliferation.

“It’s good to hear so many people starting to get serious about AGI safety,” DeepMind co-founder and current Inflection AI CEO Mustafa Suleyman tweeted on Friday. “We should be very ambitious. The Manhattan Project cost 0.4% of U.S. GDP. Imagine what an equivalent programme for safety could achieve today.”

But much of the discussion in Congress and at the White House about regulation is through an AI ethics lens, which focuses on current harms.

From this perspective, governments should enforce transparency around how AI systems collect and use data, restrict its use in areas that are subject to anti-discrimination law like housing or employment, and explain how current AI technology falls short. The White House’s AI Bill of Rights proposal from late last year included many of these concerns.

This camp was represented at the congressional hearing by IBM Chief Privacy Officer Christina Montgomery, who told lawmakers she believes every company working on these technologies should have an “AI ethics” point of contact.

“There should be clear guidance on AI end uses or categories of AI-supported activity that are inherently high-risk,” Montgomery told Congress.

How to talk about AI like an insider

It’s not surprising the debate around AI has developed its own lingo. It began as a technical academic field.

Much of the software being discussed today is based on so-called large language models (LLMs), which use graphics processing units (GPUs) to predict statistically likely sentences, images, or music, a process called “inference.” Of course, AI models need to be built first, in a data analysis process called “training.”
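
To make the training/inference distinction concrete, here is a minimal Python sketch, assuming the Hugging Face transformers library and the small pretrained GPT-2 model (neither is named in the article; both are illustrative choices). The expensive training step already happened when GPT-2's weights were fit to a large text corpus; this code only runs inference.

from transformers import pipeline

# Training happened offline: GPT-2 ships with weights already fit on a
# large text corpus. Here we only run inference, asking the model to
# predict a statistically likely continuation of the prompt.
generator = pipeline("text-generation", model="gpt2")

result = generator("Frontier models are", max_new_tokens=20)
print(result[0]["generated_text"])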

But other terms, especially from AI safety proponents, are more cultural in nature, and often refer to shared references and in-jokes.

For instance, AI safety people might say that they’re worried about turning into a paper clip. That refers to a thought experiment popularized by philosopher Nick Bostrom, which posits that a super-powerful AI — a “superintelligence” — could be given a mission to make as many paper clips as possible, and logically decide to kill humans to make paper clips out of their remains.

OpenAI’s logo is inspired by this tale, and the company has even made paper clips in the shape of its logo.

Another concept in AI safety is the “hard takeoff” or “fast takeoff,” a phrase suggesting that if someone succeeds at building an AGI, it will already be too late to save humanity.

Sometimes, this concept is described with an onomatopoeia — “foom” — especially among critics of the concept.

“It’s as if you believe in the ridiculous hard take-off ‘foom’ scenario, which makes it sound like you have zero understanding of how everything works,” tweeted Meta AI chief Yann LeCun, who is skeptical of AGI claims, in a recent debate on social media.

AI ethics has its own lingo, too.

When describing the limitations of current LLM systems, which cannot understand meaning but merely produce human-seeming language, AI ethics people often compare them to “stochastic parrots.”

The analogy, coined by Emily Bender, Timnit Gebru, Angelina McMillan-Major, and Margaret Mitchell in a paper written while some of the authors were at Google, emphasizes that while sophisticated AI models can produce realistic-seeming text, the software doesn’t understand the concepts behind the language — like a parrot.
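
A deliberately crude Python toy can show the spirit of the analogy: a bigram model that parrots statistically likely next words with no grasp of meaning. This sketch is an illustrative assumption, not the method from the stochastic parrots paper, and real LLMs are vastly more sophisticated.

import random
from collections import defaultdict

corpus = ("the parrot repeats likely words the parrot repeats "
          "plausible text the parrot understands nothing").split()

# Count which word tends to follow which word in the training text.
next_words = defaultdict(list)
for current, following in zip(corpus, corpus[1:]):
    next_words[current].append(following)

# Generate by repeatedly sampling a statistically likely continuation.
word, output = "the", ["the"]
for _ in range(8):
    word = random.choice(next_words[word] or corpus)
    output.append(word)
print(" ".join(output))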

When these LLMs invent incorrect facts in responses, they’re “hallucinating.”

One topic IBM’s Montgomery pressed during the hearing was “explainability” in AI results. That means that when researchers and practitioners cannot point to the exact numbers and path of operations that larger AI models use to derive their output, this could hide some inherent biases in the LLMs.

“You have to have explainability around the algorithm,” said Adnan Masood, AI architect at UST-Global. “Previously, if you look at the classical algorithms, it tells you, ‘Why am I making that decision?’ Now with a larger model, they’re becoming this huge model, they’re a black box.”
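
The contrast Masood draws can be made concrete with a small sketch, assuming scikit-learn and its bundled iris dataset (illustrative choices, not from the article): a shallow decision tree can print the exact rules behind each prediction, something a billion-parameter neural network cannot do.

from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_iris()
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(data.data, data.target)

# A classical model can answer "why am I making that decision?" by
# printing the thresholds it checks on the way to each prediction.
print(export_text(tree, feature_names=list(data.feature_names)))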

Another important term is “guardrails,” which encompasses the software and policies that Big Tech companies are currently building around AI models to ensure that they don’t leak data or produce disturbing content, which is often called “going off the rails.”

It can also refer to specific applications that protect AI software from going off topic, like Nvidia’s “NeMo Guardrails” product.
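
In its simplest form, a guardrail is just a policy layer sitting between the model and the user. The Python sketch below assumes a crude, hypothetical blocklist policy; it illustrates the idea only and bears no relation to how NeMo Guardrails or other production systems actually work.

# Illustrative, hypothetical policy: topics this assistant must avoid.
BLOCKED_TOPICS = ("violence", "personal data")

def guarded_reply(model_reply: str) -> str:
    """Refuse any model reply that drifts into an off-limits topic."""
    lowered = model_reply.lower()
    if any(topic in lowered for topic in BLOCKED_TOPICS):
        return "Sorry, I can't help with that topic."
    return model_reply

print(guarded_reply("Here is some personal data I scraped..."))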

“Our AI ethics board plays a critical role in overseeing internal AI governance processes, creating reasonable guardrails to ensure we introduce technology into the world in a responsible and safe manner,” Montgomery said this week.

Sometimes these terms can have multiple meanings, as in the case of “emergent behavior.”

A recent paper from Microsoft Research called “Sparks of Artificial General Intelligence” claimed to identify several “emergent behaviors” in OpenAI’s GPT-4, such as the ability to draw animals using a programming language for graphs.

But it can also describe what happens when simple changes are made at a very big scale — like the patterns birds make when flying in flocks, or, in AI’s case, what happens when ChatGPT and similar products are being used by millions of people, such as widespread spam or disinformation.
