The basic meaning of "searching" something on Google, typing keywords and scouring links, images and snippets of information, is nearly obsolete, according to Google.
The search giant laid out its vision for the future of web search on Tuesday (May 20), introducing a series of updates that aim to transform the ubiquitous search engine from a system that processes keywords into a system of "digital agents" that can crawl the web and answer questions based on the user's real-world surroundings, tastes and preferences.
Google's AI push comes at a time when publishers, especially independent ones, have already raised concerns about how the prominence of AI could threaten their business.
The announcements, made during the company's annual developer conference, underscore that Google's most important business is facing more competition than ever. Chatbots and AI-powered search engines such as Perplexity offer an alternative way to find information and perform tasks, two functions at the heart of Google's main business.
The newly announced tools can be seen as an effort to prove that its nearly 30-year-old search engine is not losing relevance in the age of AI.
"What all this progress tells me is that we are now entering a new phase of AI platform change, where decades of research are coming true for people, companies and communities around the world," Sundar Pichai, CEO of Google and its parent company Alphabet, said at a press conference ahead of the event.
"AI Mode"
Google is expanding AI Mode, previously available only to those who signed up to test early features through its Labs program, to all US users through the Google app.
It is a step beyond AI Overviews, the AI-generated answers consumers see at the top of search results. The main difference between AI Mode and standard Google search is how it processes queries. Instead of analyzing the entire question at once, AI Mode breaks queries into subtopics and runs additional searches based on those subtopics to provide a more specific answer.
Google says AI Mode will soon use a person's search history to further personalize answers, and users will also be able to connect it to other Google apps, such as Gmail.
In addition to processing questions, AI Mode is expected to offer two major new ways to search: one that, according to the company, will perform tasks on the user's behalf, and one that lets users show Google their surroundings using the phone's camera.
Although AI Mode is already available in the US, these two specific features will still require users to sign up for Labs. Google's Project Mariner technology, which the company announced as a research prototype last year, will be able to perform certain tasks on a person's behalf and answer questions that usually require several steps, the company says.
For example, you could ask a question like "find two affordable tickets for this Sunday's football game in the lower level," and Google will search for tickets, analyze options and prices, fill out forms autonomously and then display tickets that match the user's criteria.
Initially, it will be available for buying tickets, making restaurant reservations and booking local appointments through services such as Ticketmaster, StubHub, Resy and Vagaro, and it will come to the Labs section of the Google app in the coming months.
AI Mode in the Google app will also get a new feature that lets users ask questions about the world around them. Visual search is not new at Google; the company's Lens tool already lets users ask questions about photos they have taken. But this mode takes the idea a step further, showing Google what a person is seeing in real time.
The idea is to make it easier for Google to answer questions about complex tasks that are hard to describe, such as whether a specific screw in the toolbox is the right size for the bicycle frame being repaired, simply by pointing a phone at it and asking.
Google has already brought this visual search functionality to its Gemini assistant on Android, and it is now expanding to the iPhone. But Tuesday's announcement shows that Google also sees it as fundamental to the future of its search engine. Some of the new search features overlap with those available in the Gemini assistant, which could cause confusion among consumers.
Robby Stein, vice president of product at Google Search, told CNN that Search is focused on learning, while Gemini was created to assist with tasks such as generating code and writing business plans, in addition to answering questions.
Google's competition
Google's search engine has been the main way of finding information online for almost three decades. But that position is being challenged more than ever by the proliferation of AI services from companies such as OpenAI and Perplexity, as well as other technology giants such as Apple and Amazon, all of which have updated or are in the process of updating their virtual assistants with advanced AI capabilities.
OpenAI, Google's main rival in the AI assistant segment, has launched its own search engine. The rise in competition for Google was evident earlier this month, when Eddy Cue, Apple's senior vice president, revealed in court testimony that Google searches in Apple's Safari browser had declined in April for the first time since 2002, Bloomberg reported.
Google disputed that claim, saying it has seen "overall query growth," including from Apple devices. Market research firm Gartner estimated last year that search engine volume would fall 25% by 2026 as consumers migrate to AI tools.
But Pichai, on a call with reporters, said the updates reflect the new ways people are using the company's search engine. "When I look at the future, I see glimpses of a proactive world, an agentic world," he said. "All of this will continue to improve."