Issue 02 - 2024 | Magazine | Technology
GBO | Google Search

The future of Google Search

Like other computer behemoths, Google now views generative AI as a tool for speeding and streamlining search

Google has been using the term “helpful” to describe the new features added to its search product, voice assistant, generative AI tool Bard, and even Pixel earbuds in recent years. If you search for the keyword “helpful” in Google’s corporate news blog, you will find more than 1,200 entries.

However, depending on what you’re looking for, Google’s flagship search engine has become less useful. According to one columnist, Google Search is now a “tragedy” that is “bloated and over-monetised.”

It’s “cluttered with adverts,” according to The Financial Times, reading more like the Yellow Pages than an encyclopedia. A former Google employee attributes the decline in search quality to the web itself, noting that Google still provides free access to the world’s knowledge.

That said, a recent analysis of product review results shows that Google still outperforms some of its rivals, even though there are signs of worse-quality results across the board.

However, it doesn’t take a team of investigators or an elite technologist to run a quick Google search and see that most of the results, at least the first few, are advertisements, with further clutter appearing below the digital fold.

Like its fellow tech behemoths, Google now views generative AI as a tool for speeding up and streamlining search. As a result, it is walking a tightrope between making search more intelligent and adding yet more complexity to an already cluttered interface.

Its most recent announcements about generative AI in mobile search are part of that experiment: can Google make search more accessible and convenient while keeping its current advertising strategy intact?

From February 2024, a handful of new artificial intelligence (AI) features will arrive on Android phones, including Samsung’s new Galaxy S24 line and Google’s Pixel 8 and Pixel 8 Pro.

These features integrate search and Google Lens, the company’s image recognition software, directly into other phone apps. One of them is Circle to Search, which lets you quickly search an app’s contents by touch, selecting images, text, or videos. The results appear in an overlay at the bottom of the screen.

In an early presentation, Google used a text message conversation between friends as an example. One friend suggested a restaurant, and the other was able to Circle to Search it and get restaurant results without ever leaving the messaging app. Another scenario might involve pausing an Instagram video, circling a product you spot in it, and searching for it, all within the same app.

Because they let users run searches without jumping between apps, both use cases demonstrate a particular efficiency in search, a kind of helpfulness, if you will. But besides being handy for identifying wildlife, they also offer a clear commercial prospect, which makes them valuable to Google’s advertising business.

Google has confirmed that ads will still appear in dedicated ad slots on the results page for search and shopping queries. Since the search overlay occupies only a small portion of a phone’s screen, it could quickly become less useful and more annoying if the results are dominated by advertisements.

Herein lies the role of generative AI: on a small screen, a condensed answer may make more sense than a list of links. Google’s new AI-powered multisearch feature works much like Circle to Search, but with a different input. Available through Google Lens, the visual search tool in the Google mobile app, it lets you aim your phone at an object to get “AI-powered insights” alongside the standard search results.

Google gave an example using a board game: pick a game you’re unfamiliar with, take a picture of it, and ask how it’s played; Google’s AI will generate an overview. You can also point the phone at a malfunctioning device and ask, “How do I fix this?”

Liz Reid, Vice President and General Manager of search at Google, says, “In my mind, this is about taking search from multi-modal input to really doing multi-modal output as well.”

She is referring to the different ways that people can interact with a computer or artificial intelligence model to potentially produce more relevant results.

“It really opens up a whole range of questions that you couldn’t just ask Google before,” she added.

AI-powered multisearch results won’t require signing up for Google’s Search Generative Experience (SGE), a portal that gives early testers access to new AI capabilities. And unlike Circle to Search, which is limited to select phones, the feature will work on any iOS or Android phone in the US running the Google app; users of Google’s SGE outside the United States will also get a sneak peek of AI-powered multisearch.

These changes are gradual, but that is typical of Google’s strategy for SGE: the company has been testing its newest and most sophisticated AI search capabilities with early users before making them available to a wider audience.

Including early users in SGE gives Google extra data to train its AI models, as well as some leeway if the product isn’t quite right.

According to Reid, there probably won’t be a “moment of light” when Google search as we know it is completely replaced by the SGE experience.

Instead, Reid says, the approach is more likely to involve “pushing the boundaries of what’s possible and then thinking about which use cases are helpful and that we have the right balance of latency, quality, and factuality.”

This gradual approach to ushering in a new era of search is undoubtedly good for Google. In an ideal AI world, it would also benefit searchers, on the web and on mobile devices alike.

In short, Google is introducing new AI-powered multisearch features to make its search engine more accessible and convenient while preserving its current advertising strategy. The features, arriving first on high-end Android phones, integrate search and Google Lens directly into other apps, with Circle to Search letting users query an app’s contents by touch. They also present clear commercial prospects for Google’s advertising business, while generative AI takes on the job of delivering a condensed answer, rather than a list of links, on a small screen.
