How will Google solve its artificial intelligence dilemma?

In the AI arms race that has just broken out in the technology industry, Google, which invented much of the latest technology, should be well positioned to be one of the big winners.

There’s just one problem: with politicians and regulators breathing down its neck, and a business model too lucrative to risk disrupting, the internet search giant may be reluctant to use the many weapons at its disposal.

Microsoft posed a direct challenge to the search giant this week when it closed a multibillion-dollar investment in artificial intelligence research firm OpenAI. The move came less than two months after OpenAI released ChatGPT, a chatbot that answers queries with blocks of text or code, hinting at how generative artificial intelligence may one day replace internet search.

With preferential rights to commercialize OpenAI’s technology, Microsoft executives have made no secret of their goal of using it to challenge Google, reawakening a rivalry that has simmered since Google won the search wars a decade ago.

DeepMind, the London research company Google bought in 2014, and Google Brain, an advanced research division based in Silicon Valley, have long given the search company one of the strongest footholds in the field of artificial intelligence.

Recently, Google has demonstrated a number of advances in the so-called generative AI that underpins ChatGPT, including AI models capable of telling jokes and solving math problems.

One of its most advanced language models, known as PaLM, is a general-purpose model roughly three times larger than GPT-3, the AI model that ChatGPT is based on, as measured by the number of parameters the models are trained on.

Google’s LaMDA chatbot, or Language Model for Dialogue Applications, can converse with users in natural language, in a similar way to ChatGPT. The company’s engineering teams have been working for months to integrate it into a consumer product.

Despite this technical progress, most of these newer technologies remain research projects. Critics say Google’s search business is so lucrative that it discourages the company from bringing generative AI into consumer products.

Microsoft plans to use OpenAI technology in all of its products and services © Lionel Bonaventure / AFP via Getty Images

Sridhar Ramaswamy, a former senior Google executive, said that giving direct answers to queries, rather than simply directing users to suggested links, would lead to fewer searches.

That left Google with a “classic innovator’s dilemma” — a reference to the book by Harvard Business School professor Clayton Christensen that sought to explain why industry leaders fell prey to fast-growing start-ups. “If I were the person running a $150 billion business, I would be terrified of this thing,” Ramaswamy said.

Google said: “We have always been focused on developing and deploying AI to improve people’s lives. We believe AI is a foundational and transformative technology that is incredibly beneficial to individuals, businesses and societies.” However, it added, the search giant “needs to consider the broader societal impacts these innovations could have.” Google said it would be announcing “more external experiences soon.”

As well as leading to fewer searches and lower revenues, the spread of this type of AI could also cause a jump in Google’s costs.

Ramaswamy calculated that, based on OpenAI pricing, it would cost $120 million to use natural language processing to “read” all web pages in a search index and then use this to generate more direct answers to questions people enter into the search engine. Meanwhile, analysts at Morgan Stanley have estimated that answering a search query using language processing costs about seven times as much as a standard Internet search.
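A rough back-of-envelope version of that kind of calculation is sketched below in Python. All of the inputs (index size, average page length, price per 1,000 tokens) are illustrative assumptions, not Ramaswamy’s actual figures or OpenAI’s published pricing; they simply show how such an estimate could reach the order of magnitude quoted.

```python
# Back-of-envelope sketch of the cost estimate described above.
# All numbers are illustrative assumptions, not Ramaswamy's inputs
# or OpenAI's actual pricing.

PAGES_IN_INDEX = 3_000_000_000      # assumed number of web pages to process
TOKENS_PER_PAGE = 2_000             # assumed average page length in tokens
PRICE_PER_1K_TOKENS = 0.02          # assumed price in dollars per 1,000 tokens

total_tokens = PAGES_IN_INDEX * TOKENS_PER_PAGE
cost_dollars = total_tokens / 1_000 * PRICE_PER_1K_TOKENS

print(f"Estimated one-off cost to 'read' the index: ${cost_dollars:,.0f}")
# With these assumed inputs the arithmetic happens to land at $120,000,000,
# the same order of magnitude as the figure quoted in the article.
```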

The same considerations could dissuade Microsoft from a radical overhaul of its Bing search engine, which generated more than $11 billion in revenue last year. But the software company said it plans to use OpenAI technology in all of its products and services, which could lead to new ways of providing users with relevant information while they are inside other apps, thus reducing the need to go to a search engine.

A number of former and current employees close to Google’s AI research teams say the biggest constraints on the company’s release of AI have been concerns about potential harms and how they would affect Google’s reputation, as well as an underestimation of the competition.

“I think they were asleep at the wheel,” said a former Google AI scientist who now runs an AI company. “Frankly, not everyone appreciated how language models would disrupt search.”

These challenges are exacerbated by regulatory concerns over Google’s growing power, as well as the greater public scrutiny an industry leader faces when adopting new technologies.

According to a former Google executive, company leaders have been concerned for more than a year that sudden advances in AI capabilities could lead to a wave of public anxiety about the implications of such powerful technology being in the company’s hands. Last year, it hired former McKinsey senior partner James Manyika as a new senior vice president to advise on the broader social implications of its new technology.

Manyika said that generative AI of the kind used in services like ChatGPT is prone by its nature to giving incorrect answers and can be used to produce misleading information. Speaking to the Financial Times just days before ChatGPT was released, he added: “That’s why we’re not rushing to roll out these things the way people might have expected us to.”

However, the intense interest in ChatGPT has put pressure on Google to match OpenAI faster. That leaves it with the challenge of demonstrating its AI prowess and integrating it into its services without damaging its brand or sparking a political backlash.

“For Google, it’s a real problem if they write a sentence with hate speech and it’s close to Google’s name,” said Ramaswamy, co-founder of search startup Neeva. He added that Google is held to higher standards than a startup that could argue its service was merely an objective summary of the content available online.

The search company has been criticized before for its handling of AI ethics. In 2020, when two prominent AI researchers left in controversial circumstances after a dispute over a paper assessing the risks of language-based AI, an uproar erupted over Google’s stance on the ethics and safety of its AI technologies.

Such events have left it under greater public scrutiny than organizations like OpenAI or open-source alternatives like Stable Diffusion. The latter, which generates images from text descriptions, has a number of safety issues, including the creation of pornographic images. Its safety filter can be easily bypassed, according to AI researchers, who say the relevant lines of code can simply be deleted manually. Its parent company, Stability AI, did not respond to a request for comment.

OpenAI’s technology has also been abused by users. In 2021, an online game called AI Dungeon licensed GPT, a text-generation tool, to create storylines based on individual users’ prompts. Within a few months, users were generating gameplay that featured pedophilia, among other disturbing content. OpenAI eventually required the company to introduce better moderation systems.

OpenAI did not respond to a request for comment.

A former AI researcher at Google said that if anything like this had happened at Google, the reaction would have been much worse. They added that with the company now facing a serious threat from OpenAI, it was not clear if anyone at the company was prepared to take on the responsibility and risk of releasing new AI products more quickly.

However, Microsoft faces a similar dilemma about how to use the technology. It has sought to portray itself as more responsible in its use of artificial intelligence than Google. Meanwhile, OpenAI warned that ChatGPT is prone to inaccuracies, making it difficult to embed the technology in its current form into a commercial service.

But with the most dramatic display yet of the power of artificial intelligence sweeping the tech world, OpenAI has signaled that even established powerhouses like Google could be at risk.
