Google has become so integral to online navigation that its name became a verb, meaning “to find things on the Internet.” Soon, Google might just tell you what’s on the Internet instead of showing you. The company has announced an expansion of its AI search features, powered by Gemini 2.0. Everyone will soon see more AI Overviews at the top of the results page, but Google is also testing a more substantial change in the form of AI Mode. This version of Google won’t show you the 10 blue links at all—Gemini completely takes over the results in AI Mode.

This marks the debut of Gemini 2.0 in Google search. Google announced the first Gemini 2.0 models in December 2024, beginning with the streamlined Gemini 2.0 Flash. The heavier versions of Gemini 2.0 are still in testing, but Google says it has tuned AI Overviews with this model to offer help with harder questions in the areas of math, coding, and multimodal queries.

With this update, you will begin seeing AI Overviews on more results pages, and minors with Google accounts will see AI results for the first time. In fact, even logged-out users will see AI Overviews soon. This is a big change, but it’s only the start of Google’s plans for AI search.

Gemini 2.0 also powers the new AI Mode for search. It’s launching as an opt-in feature via Google’s Search Labs, offering a totally new alternative to search as we know it. This custom version of the Gemini large language model (LLM) skips the standard web links that have been part of every Google search thus far.

  • 4Robato@lemmy.world

    I think it’s simply that getting a direct answer is easier than reading different forums with different views and coming up with your own take. That doesn’t mean people want Google Search to stop searching. We have Gemini; if I want to use Gemini, I can go to Gemini. I don’t get why everything has to be AI. We can have multiple tools; not everything out there is a nail.

    • StarDreamer

      Somehow I disagree with both the premise and the conclusion here.

      I dislike direct answers to things, as they discourage understanding. What is the default memory allocation mechanism in glibc malloc? I could get the answer “sbrk() and mmap()” and call it a day, but I find understanding when it uses mmap() instead of sbrk() (since sbrk() isn’t NUMA-aware but mmap() is) way more useful for future questions.
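
      (As a quick illustration of that point, here is a minimal sketch, assuming glibc on Linux and its usual 128 KiB default threshold; run it under strace -e trace=brk,mmap to watch which mechanism serves each request.)

      ```c
      /* Small requests are carved out of the heap (grown via brk/sbrk);
       * requests at or above M_MMAP_THRESHOLD get their own anonymous mmap. */
      #include <stdlib.h>
      #include <malloc.h>

      int main(void) {
          /* Pin the threshold so the behavior is predictable;
           * glibc defaults to 128 KiB and adjusts it dynamically otherwise. */
          mallopt(M_MMAP_THRESHOLD, 128 * 1024);

          void *small = malloc(64 * 1024);    /* below threshold: heap (brk) */
          void *large = malloc(1024 * 1024);  /* at/above threshold: mmap */

          free(large);  /* mmap'd block is unmapped and returned to the OS immediately */
          free(small);  /* heap block stays in the malloc arena for reuse */
          return 0;
      }
      ```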

      Meanwhile, Google adding a tab for AI search is helpful for people who want to use just AI search. It doesn’t take much away from people doing traditional web searches. Why be mad about this instead of the other genuinely questionable decisions Google is making?

      • 4Robato@lemmy.world

        First of all, as long as it doesn’t replace search, I’m fine with more options; I just don’t like it when a company forces things on me, that’s it. I think this is pointing to a future where search disappears, and I don’t quite like that.

        You might dislike a direct answer, but young people don’t. The way applications are designed can encourage certain behaviors, spread misinformation, and promote racism, as we have seen on many social media platforms.

        Things like Instagram and TikTok stress me out with how fast they show content, and I could conclude that no one would use them, but then you see the new generations and how they even listen to music there: just the chorus of a song, then on to the next one. This way of consuming information is promoted by how the apps are designed; it is no coincidence that depression is increasing among young people.

        You can say that in principle people could use these types of social media more responsibly, but that’s not the reality when we have algorithms trying to maximize the time we spend on the phone. Also, how do you know that companies don’t add biases to LLMs? It turns out that with Grok (the LLM on Twitter), they added instructions to the prompt not to criticize Elon or Trump. That’s the current problem with LLMs: we don’t have context, and even when there is context, if checking it takes effort, people won’t do it. This already happens with scientific articles, where people never check the sources; imagine with other stuff…

        I honestly like LLMs, and I think they are fascinating and very useful in a lot of situations! And efforts like Perplexity give me a bit more faith than Google just throwing out an LLM that suggests eating rocks. And while you might see for yourself that eating rocks shouldn’t be done, there’s a bias that can be built into any LLM and affect you in ways that are hard to avoid or notice, the same way current algorithms affect us more than we think they do and polarize opinions.

        I mean, we’ll see where this goes; I just want companies to take the matter seriously.