Google CEO Sundar Pichai speaks in conversation with Emily Chang during the APEC CEO Summit at Moscone West on November 16, 2023 in San Francisco, California. The APEC summit is being held in San Francisco and runs through November 17.
Justin Sullivan | Getty Images News | Getty Images
MUNICH, Germany — Rapid developments in artificial intelligence could help strengthen defenses against security threats in cyberspace, according to Google CEO Sundar Pichai.
Amid growing concerns about the potentially nefarious uses of AI, Pichai said that the technology could help governments and businesses speed up the detection of — and response to — threats from hostile actors.
“We are right to be worried about the impact on cybersecurity. But AI, I think actually, counterintuitively, strengthens our defense on cybersecurity,” Pichai told delegates at the Munich Security Conference at the end of last week.
Cybersecurity attacks have been growing in volume and sophistication as malicious actors increasingly use them as a way to exert power and extort money.
Cyberattacks cost the global economy an estimated $8 trillion in 2023 — a sum that is set to rise to $10.5 trillion by 2025, according to cyber research firm Cybersecurity Ventures.
A January report from Britain’s National Cyber Security Centre — part of GCHQ, the country’s intelligence agency — said that AI would only increase those threats, lowering the barriers to entry for cyber hackers and enabling more malicious cyber activity, including ransomware attacks.
However, Pichai said that AI was also lowering the time needed for defenders to detect attacks and react to them. He said this would reduce what’s known as the defenders’ dilemma, whereby hackers have to be successful just once to attack a system whereas a defender has to be successful every time in order to protect it.
“AI disproportionately helps the people defending because you’re getting a tool which can impact it at scale versus the people who are trying to exploit,” he said.
“So, in some ways, we’re winning the race,” he added.
Google last week announced a new initiative offering AI tools and infrastructure investments designed to boost online security. A free, open-source tool dubbed Magika aims to help users detect malware — malicious software — the company said in a statement, while a white paper proposes measures and research and creates guardrails around AI.
Pichai said the tools were already being put to use in the company’s products, such as Google Chrome and Gmail, as well as its internal systems.
“AI is at a definitive crossroads — one where policymakers, security professionals and civil society have the chance to finally tilt the cybersecurity balance from attackers to cyber defenders,” the company said.
The release coincided with the signing of a pact by major companies at the MSC to take “reasonable precautions” to prevent AI tools from being used to disrupt democratic votes in 2024’s bumper election year and beyond.
Adobe, Amazon, Google, IBM, Meta, Microsoft, OpenAI, TikTok and X, formerly Twitter, were among the signatories to the new agreement, which includes a framework for how companies must respond to AI-generated “deepfakes” designed to deceive voters.
It comes as the internet becomes an increasingly important sphere of influence for both individual and state-backed malicious actors.
Former U.S. Secretary of State Hillary Clinton on Saturday described cyberspace as “a new battlefield.”
“The technology arms race has just gone up another notch with generative AI,” she said in Munich.
A report published last week by Microsoft found that state-backed hackers from Russia, China and Iran have been using its OpenAI large language model (LLM) to enhance their efforts to trick targets.
Russian military intelligence, Iran’s Revolutionary Guard, and the Chinese and North Korean governments were all said to have relied on the tools.
Mark Hughes, president of security at IT services and consulting firm DXC, told CNBC that bad actors were increasingly relying on a ChatGPT-inspired hacking tool called WormGPT to conduct tasks like reverse engineering code.
However, he said that he was also seeing “significant gains” from similar tools that help engineers detect and reverse engineer attacks at speed.
“It gives us the ability to speed up,” Hughes said last week. “A lot of the time in cyber, what you have is the time that the attackers have in advantage against you. That’s often the case in any conflict situation.
“If you can run a little bit faster than your adversary, you’re going to do better. That’s what AI is really giving us defensively at the moment,” he added.