
Three significant shocks impacting search in the Generative AI era


Searching is ubiquitous in modern information-intensive economies, shaping all manner of transactions, from labor and product markets to e-commerce and social media.

A rich body of scholarly work, including Nobel Prize-winning research, is devoted to exploring the implications of search costs. Some of the world’s biggest technology companies have been built on minimizing them, both for consumers searching for information and for businesses searching for audiences. But with the explosion of generative AI, the way we all search for information is changing significantly, and perhaps permanently.

We highlight three significant shocks that are impacting search:

  1. Search costs are rising as a tsunami of generated content inundates the market;
  2. Users are demanding a new search experience in response to deteriorating search quality;
  3. Research and development into new search techniques is accelerating.

Don’t bore us, get to the chorus

As the volume of content grows, it becomes more expensive to sift through it all to find the information you need. The needle stays the same size, but the haystack keeps growing, so we all must spend more resources searching through the hay. Generative AI will unleash exponentially growing volumes of content, leading to rising search costs. Such frictions can reshape a wide variety of markets in interesting and sometimes counterintuitive ways.

For example, as music streaming services have grown to dominate listening preferences, the musical structure and content of songs themselves are changing in response to the high search costs users face amid practically infinite choice. Artists have begun to rewrite songs for this new medium, composing music with shorter intros and shorter overall lengths. Songs have traditionally followed a VERSE-CHORUS-VERSE format, but more new music is now CHORUS-VERSE-CHORUS, designed specifically with streaming in mind. This subtle change is driven by extreme content abundance; artists know that listeners are unlikely to continue far beyond the first 15 to 30 seconds of a song when innumerable alternatives are available on demand. They therefore deliver the catchiest part of the song in those first seconds to maximize the likelihood of hooking the audience (and getting paid royalties for the play).

But the impact of higher search costs doesn’t end with the style and structure of content markets. Many real-world decisions depend on a searcher’s beliefs about the availability of alternatives and are modeled mathematically as “optimal stopping problems”: when should we stop searching and simply act? Almost every major “transaction” we undertake involves an information-gathering component, and digital platforms are a primary source of that information. Their influence extends to life-changing decisions such as searching for a job, a home to live in, a partner to spend our lives with, or the best car at the lowest price. Search costs have been shown to affect all of these critical economic and social functions.
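To make the mechanism concrete, here is a stylized sketch (our own construction, not drawn from any particular study) of the classic sequential-search rule: keep drawing offers as long as the expected gain from one more draw exceeds the per-search cost, and stop at the first offer that beats the resulting reservation value. As the cost of each search rises, the reservation value falls, so searchers settle sooner and for less.

```python
import numpy as np

# Stylized sequential-search model: offers are i.i.d. Uniform(0, 1) draws
# and each draw costs `search_cost`. The numbers are purely illustrative.

def reservation_value(search_cost: float) -> float:
    """Solve c = E[max(X - r, 0)] for r when X ~ Uniform(0, 1).

    For the uniform case E[max(X - r, 0)] = (1 - r) ** 2 / 2,
    so the reservation value is r = 1 - sqrt(2c), floored at zero.
    """
    return max(0.0, 1.0 - np.sqrt(2.0 * search_cost))

def search(search_cost: float, rng) -> tuple[float, int]:
    """Draw offers until one beats the reservation value; return net payoff and draw count."""
    r = reservation_value(search_cost)
    draws = 0
    while True:
        draws += 1
        offer = rng.uniform()
        if offer >= r:
            return offer - draws * search_cost, draws

rng = np.random.default_rng(0)
for cost in (0.01, 0.05, 0.20):
    runs = [search(cost, rng) for _ in range(10_000)]
    avg_payoff = np.mean([p for p, _ in runs])
    avg_draws = np.mean([d for _, d in runs])
    print(f"cost={cost:.2f}  avg draws={avg_draws:4.1f}  avg net payoff={avg_payoff:.2f}")
```

Running the loop with progressively higher costs yields fewer draws and lower net payoffs, which is precisely the friction the rest of this section is concerned with.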

In short, rising search costs make it more difficult to find a trading partner, whether buyers of all kinds seeking sellers or vice versa. As a result, Generative AI risks making it harder to execute a wide variety of transactions. Such frictions are associated with several inefficiencies, including rising inflation, widening price dispersion, and higher unemployment. In a period of significant economic volatility, such negative impacts could be material.

Are you experienced?

There is a growing consensus that the quality of search results has been deteriorating for some time, with several proximate causes. The extraordinarily rapid growth in demand for new ways to retrieve information suggests the market is craving a different experience, and it comes at a critical juncture. Recent estimates show that approximately 1 in every 5 minutes of knowledge workers’ time is spent searching. We are drowning in information yet unable to find what we need when we need it, and the problem is only getting worse.

Commenters on the popular internet message board Reddit, for example, widely opine that search results are dominated by advertisements, spam, and clickbait, making it harder to find the information you searched for. A quick check with the Wayback Machine shows how much more scrolling is required today to reach organic search results than 10 or 20 years ago. “Search engine optimization,” or SEO, techniques are pervasive, enabling publishers to game the rankings with specific keywords and rise to the top of the list. The advertising-based business model for search engines was always known to be a threat to the quality of results; rather than surfacing the most relevant and highest-quality information, a for-profit business has an incentive to direct users’ attention toward the content of the highest bidder.

This incentive misalignment, embedded deep in the business model of the internet, could explain some of the appeal of using (for now) advertising-free chatbots to answer questions. It is simply a cleaner, nicer, less overwhelming experience. Rather than sifting through a series of links, generative AI can provide a direct answer in an easy-to-consume format without the clutter and noise of existing search engines. And because the chatbot has a memory, the user can conduct highly contextualized follow-up searches, refining and narrowing the scope of the request until the specific desired information is retrieved. Instead of trying out different keywords or clicking well beyond the first page of links, a user can simply adjust or extend the initial text prompt to continue exploring within a narrower content space. The conversational interface requires less effort from users, and the absence of ads and noise is less mentally taxing.
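The interaction pattern is straightforward to sketch. In the minimal example below, `complete` is a hypothetical stand-in for any chat-completion API that accepts a message history, not a specific vendor’s interface; the point is simply that keeping the transcript as memory lets each follow-up prompt refine the previous one rather than start the search over.

```python
# Minimal sketch of conversational search with memory.
# `complete(messages)` is a hypothetical placeholder for a chat-completion
# call that takes a list of {"role", "content"} messages and returns text.

def complete(messages: list[dict]) -> str:
    raise NotImplementedError("plug in a real chat-completion call here")

class ConversationalSearch:
    def __init__(self):
        # The running transcript is the "memory" that lets follow-up
        # questions refine the original request instead of restarting it.
        self.messages = [
            {"role": "system", "content": "Answer concisely and cite sources."}
        ]

    def ask(self, prompt: str) -> str:
        self.messages.append({"role": "user", "content": prompt})
        answer = complete(self.messages)
        self.messages.append({"role": "assistant", "content": answer})
        return answer

# Usage: each call narrows the search without repeating earlier context.
# session = ConversationalSearch()
# session.ask("What were the main drivers of 2022 freight-rate volatility?")
# session.ask("Focus on trans-Pacific routes only.")
# session.ask("Summarize that as three bullet points.")
```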

Regardless of the underlying driver, users are voting with their feet, despite the obvious limitations of chatbots: poor accuracy (frequently giving wrong answers), poor timeliness (the models are trained on a fixed snapshot of content, so what they generate can be around 12 months out of date), and the occasional unexpected emotional outburst.

The new kids on the graph

As search costs rise, innovation is pushing forward with new search technologies to offset the increase. Several new AI search startups are emerging to challenge the incumbents, and even the technology giants that have dominated this sector for decades are restructuring their product offerings in response to new competitive pressures. Most of the new services are conversational in nature.

But perhaps the most important new offering is exemplified by GitHub’s Copilot, which you might describe as “auto-search”. Copilot operates as a programming partner, monitoring your code in real time as you write it and automatically retrieving relevant information. Programmers using the tool report significant increases in productivity, as they don’t have to redirect their attention, open a browser, and search through API docs or developer forums. Copilot has essentially already done that for them, pulling suggestions and insights from the billions of lines of code stored on GitHub. Similar models are already under development to create co-pilots for several knowledge professions, including lawyers, securities traders, doctors, and architects.
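The “auto-search” idea can be caricatured in a few lines. The sketch below is our own simplification, not a description of how Copilot is actually built: it derives a query from whatever the user is currently editing and retrieves matching documentation in the background, with `DOCS_INDEX` standing in for a real documentation corpus.

```python
import re

# Illustrative "auto-search" loop: derive a query from the user's working
# context instead of waiting for an explicit search. A toy simplification,
# not a description of GitHub Copilot's implementation.

DOCS_INDEX = {
    "requests.get": "requests.get(url, params=None, timeout=None) -> Response",
    "json.loads": "json.loads(s) parses a JSON string into Python objects",
    "re.findall": "re.findall(pattern, string) returns all matches as a list",
}

def extract_context(code_buffer: str) -> list[str]:
    """Pull dotted identifiers out of the code the user is editing."""
    return re.findall(r"[A-Za-z_]+\.[A-Za-z_]+", code_buffer)

def auto_search(code_buffer: str) -> list[str]:
    """Surface relevant documentation without the user asking for it."""
    return [DOCS_INDEX[t] for t in extract_context(code_buffer) if t in DOCS_INDEX]

buffer = "resp = requests.get(api_url)\ndata = json.loads(resp.text)"
for hint in auto_search(buffer):
    print("suggestion:", hint)
```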

But what makes co-pilot techniques so powerful isn’t just the ability to search automatically in the background; it’s the way these models systematize the knowledge embedded in the training data and make that knowledge accessible to the user. In the GitHub example above, the developers who wrote the code comprising the training data will already have read the necessary documentation and incorporated that knowledge into their software. When their code is reproduced for a new end user, it retains that embedded structure. This is different from merely fetching unstructured information, which requires additional effort on the user’s part to convert into actionable knowledge and insight. Of course, the reverse is also true: mistakes coded into the training data will be propagated into new code as well.

Generative AI therefore marks the onset of a shift away from information retrieval, which can no longer cope with the growing volumes of data, and towards knowledge retrieval, a subtle but important distinction. Knowledge retrieval requires new techniques to store and organize the raw data, new ways to represent it in multiple dimensions, new methods for querying the resulting database, and different models to generate inferences. Further, it requires a rigorous review process to ensure the knowledge encoded into the graph is true, fair, and accurate; otherwise we risk perpetuating false and incorrect beliefs.
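One deliberately simplified way to read “represent it in multiple dimensions” is the vector-retrieval pattern sketched below: content and queries are embedded in the same space and matched by similarity rather than by keywords. The hashing-trick embedding here is a toy stand-in so the example runs without any model; production systems use learned embeddings and dedicated vector stores.

```python
import numpy as np

# Toy illustration of vector-based retrieval: documents and queries are
# embedded in the same space and matched by cosine similarity.
# The hashing trick below is only a stand-in for a learned embedding.

DIM = 64

def embed(text: str) -> np.ndarray:
    """Map text to a fixed-length unit vector via a simple hashing trick."""
    vec = np.zeros(DIM)
    for token in text.lower().split():
        vec[hash(token) % DIM] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec

DOCS = [
    "Reservation prices fall as search costs rise",
    "Vector databases index embeddings for nearest-neighbor lookup",
    "Streaming changed song structure toward earlier choruses",
]
DOC_VECS = np.stack([embed(d) for d in DOCS])

def retrieve(query: str, k: int = 1) -> list[str]:
    """Return the k documents whose embeddings are closest to the query."""
    scores = DOC_VECS @ embed(query)  # cosine similarity, since vectors are unit length
    return [DOCS[i] for i in np.argsort(scores)[::-1][:k]]

print(retrieve("how do search costs affect reservation prices"))
```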

Summary

The future of search is on the horizon, after more than two decades of minimal change. Add searching to the list of disruptions driven by advances in artificial intelligence.
