Apple on Friday said it will voluntarily adopt safeguards for artificial intelligence — joining other tech giants including OpenAI, Amazon, Google parent Alphabet and Meta in complying with Biden administration guidelines aimed at minimizing national security risk.
In July 2023, the Biden administration announced that it had secured voluntary commitments from seven leading AI firms that pledged “to help move toward safe, secure, and transparent” development of the technology.
Amazon, Anthropic, Google, Inflection, Meta, Microsoft and OpenAI were the first seven companies to sign on to the administration’s initiative.
The companies are asked to transparently share the results of tests that measure compliance with safety and anti-discrimination standards.
Apple joined its tech rivals after announcing last month that it would be incorporating AI features into its signature products, including the iPhone, iPad and Mac.
The Cupertino, Calif.-based colossus announced a new set of free software updates dubbed “Apple Intelligence” in an effort to catch up with other Silicon Valley rivals, such as Microsoft and Google, which have moved ahead of the pack by leaps and bounds in the AI arms race.
At its annual Worldwide Developers Conference last month, Apple said it would rely on OpenAI’s ChatGPT to make its virtual assistant Siri smarter and more helpful.
Siri’s optional gateway to ChatGPT will be free to all iPhone users and made available on other Apple products once the option is baked into the next generation of Apple’s operating systems.
ChatGPT subscribers are supposed to be able to easily sync their existing accounts when using the iPhone, and will get more advanced features than free users would.
Apple’s full suite of upcoming features will only work on newer models of the iPhone, iPad and Mac because the features require advanced processors.
For instance, consumers will need last year’s iPhone 15 Pro or will have to buy the next model coming out later this year to take full advantage of Apple’s AI package, although all of the tools will work on Macs dating back to 2020 once that computer’s next operating system is installed.
The rapid advancement of AI technology has prompted debate among tech observers over possible risks posed to the economy, national security and even the survival of the human race.
Last month, a group of AI whistleblowers claimed that Google and OpenAI were endangering humanity as they sprinted to develop the new technology.
Signed by current and former employees of OpenAI, Google DeepMind and Anthropic, the open letter cautioned that “AI companies have strong financial incentives to avoid effective oversight” and cited a lack of federal rules on developing advanced AI.
“Companies are racing to develop and deploy ever more powerful artificial intelligence, disregarding the risks and impact of AI,” former OpenAI employee Daniel Kokotajlo, one of the letter’s organizers, said in a statement.
“I decided to leave OpenAI because I lost hope that they would act responsibly, particularly as they pursue artificial general intelligence.”
Government and private-sector researchers worry US adversaries could use the models — which mine vast amounts of text and images to summarize information and generate content — to wage aggressive cyberattacks or even create potent biological weapons.
With Post Wires