I guess we all kinda knew that, but it’s always nice to have a study backing your opinions.

  • @blahsay@lemmy.world · 40 · 10 months ago

    It’s time for Google to die. They’re a truly awful company now, so it’s time to take them out back to the shed like ol’ Blockbuster.

    • KaynA · 9 · 10 months ago

      What will be replacing it? Bing?

      • Keith · 7 · 10 months ago

        Kagi (although recent drama leaves me soured)

        • @ikidd@lemmy.world · 6 · 10 months ago

          I can’t fathom paying to have your search history catalogued in correlation with your payment info. This will end as it always does: either hacked or enshittified.

          • Keith · 1 · 10 months ago (edited)

            The fundamental difference is that Kagi is making money from having the better product, not from serving more/better ads.

          • KaynA · 5 · 10 months ago

            Kagi has started using search results from Brave’s search index. The LGBT community disapproved of this because of past homophobic actions by Brave’s CEO Brendan Eich.

            • @ripcord@lemmy.world · 3 · 10 months ago

              Oh, that. Yeah, I’m not personally worried that they used it very lightly as one of a dozen sources and then stopped.

              • Keith · 3 · 10 months ago

                The problem was mainly their questionable response.

              • @UnderpantsWeevil@lemmy.world · 2 · 10 months ago

                Fool me once…

                Twitter has a similar problem. The more the CEO injects personal politics into the function of the site, the less confidence people have that a new search won’t be fucked with. Whatever you might say about Google, Bing, and Yahoo, their owners have at least kept their politics closer to the chest.

            • @BobGnarley@lemm.ee · 2 · 10 months ago

              That’s a terrible reason not to use something that works well, though. The founder or CEO of any major bank is probably a shit person with bad takes like racism, but does that make their banking service any less useful?

        • KaynA · 4 · 10 months ago (edited)

          I don’t think so. Wiby limits its index to specific kinds of websites by design.

          I imagine it’s great for entertainment purposes, but not for the things you’d usually use a search engine for (gathering information, troubleshooting issues, etc.)

      • @Swuden@lemmy.world · 3 · 10 months ago

        No joke, I’ve been using Bing’s GPT-4 search and it’s helped me much more frequently than Google lately. AI might actually be where Bing out-competes Google.

        • KaynA · 10 · 10 months ago

          Are we expecting normal people to learn how to self-host?

        • hannes3120 · 5 · 10 months ago

          How are you supposed to self-host a web crawler and indexer without racking up a giant server bill?

          Having this service at least slightly centralised makes sense resource-wise, but assuming crawling and indexing are free is just foolish. I’d choose something like Kagi, but I suspect many people would rather cheap out and go for the next free service, not realising that the company has to make money some other way to cover the high cost of running a search engine.

          • @UnderpantsWeevil@lemmy.world · 4 · 10 months ago

            I’d choose something like kagi but I guess many people will rather cheap out

            I often feel as though these paid-for services aren’t delivering a meaningfully better product. After all, it isn’t as though Google’s problem is a lack of cash to spend on optimization. The problem is that they’re a profit-motivated firm fixated on minimizing costs and maximizing revenue. Kagi has far less money to spend on optimization than Google and the same profit-chasing incentives.

            If there were a GitHub / Linux distro equivalent to a modern search engine, or even a Wikipedia-style curated collaborative effort, I’d be happy to kick in for that (like I donate to those projects). For all the shit Wiki gets as Spook-o-pedia, it at least has a public change history and an engaged community of participants. If Kagi is just going to kick me back the same Wiki article at a higher point in the results list than Google, why get their premium service when I can just donate to Wiki and search there directly?

            If I’m just getting a feed of paywalled news journals like the NYT or WaPo, it’s the same question: why not just pay them directly and use their internal search?

            Other than screening out the crap that Google or Bing vomit up, what is the value-add of Kagi? And why shouldn’t I expect to see the same shit-creep in Kagi that I’ve seen in Google or Bing over the last decade? Because I’m paying them? Fuck, I subscribe to Google and Amazon services, and they haven’t gotten any better.

            • hannes3120 · 2 · 10 months ago

              The problem is that it’s just incredibly expensive to keep scanning and indexing the web over and over in a way that makes it possible to search within seconds.

              And the problem with search engines is that you can’t make the algorithm completely open source, since that would make it too easy to manipulate the results with SEO, which is exactly what’s destroying Google.

              • @UnderpantsWeevil@lemmy.world · 1 · 10 months ago

                you can’t make the algorithm completely open source since that would make it too easy to manipulate

                I don’t think “security through obscurity” has ever been an effective precautionary measure. SEO works today because it’s possible to intuit the behaviour of the algorithms without ever seeing the interior code.

                Knowing the interior of the code gives black hats a chance to manipulate the algorithm, but it also gives white hats the chance to advise alternative optimization strategies. Again, consider an algorithm that biases itself to websites without ads. The means by which you game the system would be contrary to the incentives for click-bait. What’s more, search engines and ad-blockers would now have a common cause, which would have their own knock-on effects.

                But this would mean moving towards an internet model that was more friendly to open-sourced, collaboratively managed, and not-for-profit content. That’s not something companies like Google and Microsoft want to encourage. And that’s the real barrier to such an implementation.

                • hannes3120 · 2 · 10 months ago

                  It’s not about security through obscurity but about “if a measurement becomes a goal, it ceases to be a good measurement.” Keeping the measurements hidden to make it harder for them to become a goal is a decent way to go about it.

                  How would you measure “without ads”? That would just be the same cat-and-mouse game that ad blockers have been fighting for decades.

                  I’m not sure it’s possible to find a good, completely open-source solution that isn’t either giving bad results by down-ranking good results for the wrong reasons, or open to misuse by SEO.

                  That might work if it’s a small project where no one cares about gaming the results, but if something like that becomes mainstream, it’s going to happen.

                    • @UnderpantsWeevil@lemmy.world · 1 · 10 months ago

                    keeping the measurements hidden to make it harder for them to become a goal is a decent way to go about it

                    The measure, from the perspective of Clickbaiters, is purely their own income stream. And there’s no way to hide that from the guy generating the clickbait.

                    How would you measure “without ads”?

                    We have a well-defined set of sites and services that embed content in a website in exchange for payment. An easy place to start is to look for those embeds on a website and downgrade it in the results. We can also see, from redirects and AJAX calls off a visited website, when lots of other information is being drawn in from third-party sites. That’s a very big red flag for a site running ad pop-ups/pop-overs and other gimmicks.
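                    A minimal sketch of that embed heuristic (the ad-network domains below are real, but the scoring weights and function names are made up for illustration):

```python
from html.parser import HTMLParser
from urllib.parse import urlparse

# Hypothetical hand-maintained list of known ad-network hosts.
AD_HOSTS = {"doubleclick.net", "googlesyndication.com", "adnxs.com"}

class EmbedScanner(HTMLParser):
    """Collects the hosts a page pulls scripts, iframes, and images from."""
    def __init__(self):
        super().__init__()
        self.hosts = []

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "iframe", "img"):
            src = dict(attrs).get("src")
            if src:
                host = urlparse(src).netloc.lower()
                if host:
                    self.hosts.append(host)

def ad_penalty(html: str, page_host: str) -> int:
    """Made-up scoring: 3 points per ad-network embed, 1 per other third-party embed."""
    scanner = EmbedScanner()
    scanner.feed(html)
    penalty = 0
    for host in scanner.hosts:
        if any(host == ad or host.endswith("." + ad) for ad in AD_HOSTS):
            penalty += 3          # known ad network: heavy downgrade
        elif host != page_host:
            penalty += 1          # any other third-party embed: mild downgrade
    return penalty

page = """<html><body>
<script src="https://securepubads.googlesyndication.com/tag.js"></script>
<iframe src="https://ads.doubleclick.net/frame"></iframe>
<img src="https://example.com/logo.png">
</body></html>"""

print(ad_penalty(page, "example.com"))  # two ad-network embeds -> 6
```

                    A ranker would then subtract the penalty from (or divide it into) whatever base relevance score the page earned.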

                    I’m not sure it’s possible to find a good completely open source solution that’s not either giving bad results by down rating good results for the wrong reasons or that’s open to misuse by SEO.

                    I would put more faith in an open-source solution than a private model, purely due to the financial incentives involved in their respective creations. The challenge with an open model is in getting the space and processing power to do all the web-crawling.

                    After that, it wouldn’t be crazy to go in the Wikipedia/Reddit direction and have user input to grade your query results, assuming a certain core pool of reliable users could be established.

          • @Blue_Morpho@lemmy.world · 3 · 10 months ago

            The Internet was tiny in 1998, but so were Google’s servers. A little searching suggests they ran everything on about a dozen Pentium PCs with a total of 100GB of drives. That’s less power than a single Raspberry Pi today with a $30 SD card.

    • @EarMaster@lemmy.world · 5 · 10 months ago

      Before we had Google, we had AltaVista, and before that we had indexes like Yahoo. Maybe we should consider going back. With the help of AI (I know…), it seems feasible to keep up with the ever-growing content.

      • @UnderpantsWeevil@lemmy.world · 10 · 10 months ago

        Maybe we should consider going back.

        You can’t really go back. Those old engines worked on more naive algorithms against a significantly smaller pool of websites.

        The more modern iteration of AltaVista/AOL/Yahoo is the aggregation sites like Reddit, where people still post and interact with the site to establish relevancy. Even that’s been enshittified, but it’s a far better source than some basic web crawler that just scans website text and metadata for the word “Horse” and returns a big listicle of results based on a hash weighted by the number of link-backs.

        That system was gamed decades ago and would be almost trivial to undermine today. Never mind how hard you’d have to work to recreate the baseline hash tables that those old engines built up over their own decades of operation.