Google I/O 2024: What’s New In Site Performance and Search

Last updated on May 22nd, 2024 | 6 min

From advancements in AI and machine learning to significant updates in web performance and search, Google I/O 2024 showcased a lot of transformative technologies.

The annual developer conference was nothing short of spectacular, and we couldn’t be happier that we were invited to the Mountain View event for a second straight year. 

Georgi Petrov at Google I/O 2024

So here are the biggest announcements from I/O 2024 from our point of view. 
 

Instant Browsing with Speculation Rules API

While all AI announcements were groundbreaking (more on that later), it's our duty to start with the performance news. 

During the conference, Google repeatedly emphasized that the future of the web is "providing near-instant loading experiences." 

And the way to achieve it is through their latest Chrome addition—Speculation Rules API.

The Speculation Rules API aims to enhance future page navigation performance. It extends the functionality of existing resource hints like <link rel="prefetch"> and <link rel="prerender">, providing a flexible, JSON-defined way for developers to specify documents for prefetching or prerendering. 
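
To give you an idea of what that JSON looks like, here is a minimal sketch (the URLs are placeholders): the ruleset normally sits in a <script type="speculationrules"> tag in your HTML, but it can also be injected with a few lines of script after checking for browser support.

```typescript
// Minimal sketch of a speculation ruleset; the URLs are placeholders.
// The same JSON object could live directly inside a
// <script type="speculationrules"> tag in your HTML.
const speculationRules = {
  prefetch: [{ urls: ['/pricing/', '/features/'] }],
  prerender: [{ urls: ['/checkout/'] }],
};

// Only inject the rules in browsers that support the API.
if (HTMLScriptElement.supports && HTMLScriptElement.supports('speculationrules')) {
  const script = document.createElement('script');
  script.type = 'speculationrules';
  script.textContent = JSON.stringify(speculationRules);
  document.head.append(script);
}
```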

A truly impactful technology that we at NitroPack already implement through Navigation AI. 

This implementation led to the most exciting part of I/O 2024 for us—NitroPack was mentioned not once but twice as an example of a Google partner successfully leveraging the power of the Speculation Rules API.

First, we were mentioned during the Developer Keynote:

NitroPack was mentioned during Google I/O 2024

The second time was during Barry Pollard's presentation, “From fast loading to instant loading”:

NitroPack mentioned during Google I/O

We'd be lying if we said this recognition didn't leave us speechless. 

It’s been quite an exceptional journey so far:

  1. Being invited to Google I/O for the first time in 2023.
  2. Holding a webinar series on web performance with Barry and Adam Silverstein.
  3. Being mentioned during I/O 2024. 

And we’re just getting started. 

But circling back to Barry's presentation and the opportunities the Speculation Rules API brings to the table, here's how to integrate it on your website:
 

How to Activate Speculation Rules API On Your Site

You have two main ways to configure the Speculation Rules API:

  1. Use URL patterns: Define which URLs are eligible for prefetch or prerender.
  2. Specify a level of “eagerness”: Use the eagerness setting to control when speculations fire. “eager” fires the speculation rules as soon as they are observed, “moderate” speculates after you hover over a link for 200 milliseconds, and “conservative” speculates on pointer or touch down. Both settings are combined in the sketch below.
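
Here's a hedged sketch of how those two settings can work together in one ruleset (the /blog/ path is just an example). As in the earlier snippet, the object would be serialized into a <script type="speculationrules"> tag:

```typescript
// Sketch: prerender blog links once the user shows intent (hover),
// and prefetch everything else only on pointer/touch down.
// The /blog/ pattern is an example path, not a requirement.
const speculationRules = {
  prerender: [
    {
      where: { href_matches: '/blog/*' },
      eagerness: 'moderate',
    },
  ],
  prefetch: [
    {
      where: { not: { href_matches: '/blog/*' } },
      eagerness: 'conservative',
    },
  ],
};
```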

Speculation Rules API example

For more details, we highly recommend checking Barry Pollard’s presentation:


 

 

Navigation AI: Automating Instant Page Loads

The reason we were mentioned during the conference is our latest product, Navigation AI.


Navigation AI is an advanced AI-powered tool that enhances web browsing by predicting and analyzing user behavior to prerender entire pages during the customer journey. This technology allows site owners to provide instant browsing experiences on both desktop and mobile, increasing customer engagement and conversion rates.

Using the Speculation Rules API, Navigation AI operates in two steps:

  • Initial Predictions: The AI makes initial predictions on page load without overwhelming the browser.
  • Behavior Analysis: It refines those predictions and instructs the API to prerender pages once user actions become clearer (see the illustrative sketch below).
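
Purely as an illustration of that two-step pattern (this is not Navigation AI's actual code, and predictNextUrls() below is a hypothetical stand-in for the prediction model), dynamically adding a prerender rule once the predictions are confident enough could look roughly like this:

```typescript
// Illustrative sketch only; not NitroPack's implementation.
// predictNextUrls() is a hypothetical placeholder for the AI prediction step.
declare function predictNextUrls(): Promise<{ url: string; confidence: number }[]>;

async function prerenderLikelyPages(): Promise<void> {
  if (!HTMLScriptElement.supports || !HTMLScriptElement.supports('speculationrules')) return;

  const predictions = await predictNextUrls();

  // Only prerender pages the model is reasonably confident about,
  // so the browser isn't overwhelmed with speculative work.
  const likelyUrls = predictions
    .filter((p) => p.confidence > 0.8)
    .map((p) => p.url);
  if (likelyUrls.length === 0) return;

  const script = document.createElement('script');
  script.type = 'speculationrules';
  script.textContent = JSON.stringify({ prerender: [{ urls: likelyUrls }] });
  document.head.append(script);
}
```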

This predictive page loading leads to the following:

  • 20% further improvement to LCP
  • 80% improvement to CLS

Navigation AI Core Web Vitals improvements

Experience the power of the Speculation Rules API and predictive page loading. Join the waitlist for Navigation AI →

Better Ways to Debug Interaction to Next Paint (INP)

It’s been two months since INP was promoted to an official Core Web Vital, replacing First Input Delay (FID).

And since the switch, there has been a global drop in Core Web Vitals pass rates. 

Luckily, the Chrome team has been working tirelessly to make INP easier for developers and site owners to debug, releasing v4 of web-vitals.js. 

Some of the improvements include:

  • Added nextPaintTime, which marks the timestamp of the next paint after the interaction.
  • Added inputDelay, which measures the time from when the user interacted with the page until when the browser was first able to start processing event listeners for that interaction.
  • Added processingDuration, which measures the time from when the first event listener started running in response to the user interaction until when all event listener processing has finished.
  • Added presentationDelay, which measures the time from when the browser finished processing all event listeners for the user interaction until the next frame is presented on the screen and visible to the user.
  • Added processedEventEntries, an array of event entries that were processed within the same animation frame as the INP candidate interaction.
  • Added longAnimationFrameEntries, which includes any long-animation-frame entries that overlap with the INP candidate interaction.

In summary, the latest version provides greater insights into different parts of an interaction and why it is slow. 
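
If you're loading the attribution build of web-vitals.js, a minimal sketch of reading these new fields looks like this (where you send the data is up to you; the console is used here purely for illustration):

```typescript
// Sketch: logging web-vitals v4 INP attribution data to the console.
import { onINP } from 'web-vitals/attribution';

onINP(({ value, attribution }) => {
  console.log('INP (ms):', value);
  console.log('Input delay (ms):', attribution.inputDelay);
  console.log('Processing duration (ms):', attribution.processingDuration);
  console.log('Presentation delay (ms):', attribution.presentationDelay);
  console.log('Next paint time:', attribution.nextPaintTime);
  console.log('Long animation frames:', attribution.longAnimationFrameEntries);
  console.log('Processed event entries:', attribution.processedEventEntries);
});
```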

To fully understand the new improvements and how to leverage them to improve INP, check Jeremy Wagner’s presentation:

 

The Gemini Era: Turning Any Input into Output 

The word “AI” was said 120 times during the first day of the conference, so it’s no surprise that Google’s large language model Gemini was in the spotlight. 

AI mentions during Google I/O

Google’s on-device mobile large language model, formerly known as Gemini Nano, is receiving a significant upgrade and will now be called Gemini Nano with Multimodality. 

Onstage, Google CEO Sundar Pichai explained that this enhancement allows the model to “turn any input into any output.” 

This means it can gather information from various sources such as text, photos, audio, web content, social videos, and even live video from our phone’s camera. It can then synthesize this input to provide summaries or answer questions related to the content.

We know that reading about it might not sound that exciting, but trust us when we say it wowed us the first time we saw it. This demo will give you a better understanding of what Gemini is capable of:


Google Revamps Search in the US

Generative AI isn't breaking news; it was already at the forefront of Google I/O 2023. 

However, this year, Google announced an expansion to its AI-powered search experience, introducing AI Overviews. 

Building on the powerful Gemini model, AI Overviews can generate quick answers to queries, piecing together information from multiple sources.

As Google said:

“This is all made possible by a new Gemini model customized for Google Search. It brings together Gemini’s advanced capabilities — including multi-step reasoning, planning and multimodality — with our best-in-class Search systems.”


Gemini’s advanced multi-step reasoning capabilities enable it to handle complex queries and provide detailed answers.

For instance, you could ask, “Find the best yoga or Pilates studios in Boston, including details on their introductory offers and walking time from Beacon Hill,” and receive a thorough and informative response:

AI Overviews example

Source: Google

However, perhaps the best thing about the improved AI search is the ability to create plans from scratch. 

If you’re located in the US, a simple “plan a 3-day trip to Rome” search will give you a complete list of places to stay, visit, and eat. 
 

How to Prepare Your Site for Google’s AI Overviews

As exciting as all the new improvements sound from a user standpoint, the reality is that many businesses that rely on Google Search traffic will be impacted.

AI Overviews occupy extensive screen real estate and could bury traditional “blue link” web results, significantly limiting clickthrough rates.

Suddenly, being in the top 3 results won’t be enough. 

So here are a few tactics that might increase your visibility in search results:

  • Use a Q&A format – Structure content explicitly as questions and direct answers to increase visibility in Google's AI overviews.
     
  • Develop comprehensive topic pages – Create overview pages that cover the entire user journey, from initial research to final decisions, for complex queries.
     
  • Feature content on high-authority Q&A sites – Publish authoritative content on platforms like Quora and Reddit to enhance visibility in AI search results.
     
  • Optimize technical SEO for better crawling – Ensure your site’s technical SEO is optimized so Google's AI can effectively crawl and render all on-page content.
     
  • Monitor search volume for AI-enhanced queries – Track queries that trigger AI overviews to identify content gaps and prioritize high-value optimization opportunities.

 

Wrap Up

Google I/O is always an eye-opener, offering a glimpse into the future of technology and innovation. Each year, we return from the event brimming with fresh ideas and inspiration for improvements. 

The wealth of knowledge shared by industry leaders and the unveiling of groundbreaking advancements consistently push us to elevate our own projects. We look forward to applying the insights gained and driving progress in our work. 

Also, if you want to delve deeper into all the announcements made during the conference, go to the official website: https://io.google/2024/explore/

Niko Kaleev
Web Performance Geek

Niko has 5+ years of experience turning those “it’s too technical for me” topics into “I can’t believe I get it” content pieces. He specializes in dissecting nuanced topics like Core Web Vitals, web performance metrics, and site speed optimization techniques. When he’s taking a breather from researching his next content piece, you’ll find him deep into the latest performance news.