Paid, but less crap.
DuckDuckGo. Just moved to it within the year, and it’s been fine for my basic searches.
DDG for me, but mostly I use !wi and !gh. Pretty rare to do a rawdog search
SearXNG - it’s a meta-engine that serves results from a combination of other ones (you can set up which ones you want to use). I like it a lot. Here’s a list of public instances: https://searx.space/
I host my own instance for that added oomph.
I just started using this one and it is crazy how different the experience is
Right? So much better it’s not even funny. It was recommended to me here on Lemmy a while ago, and I couldn’t be happier. Sometimes instances get clogged or go down, but it still beats any other search I tried by a mile.
Agreed
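For anyone curious what “serves results from a combination of other engines” looks like in practice: here’s a toy sketch of one common way a meta-engine could merge several upstream rankings, reciprocal rank fusion. This is not SearXNG’s actual code, and the engine responses are mocked; it just shows why results ranked highly by multiple engines rise to the top.

```python
def reciprocal_rank_fusion(result_lists, k=60):
    """Merge several ranked lists of URLs into one ranking.

    Each URL's score is the sum of 1/(k + rank) over every list it
    appears in, so URLs ranked highly by multiple engines score best.
    """
    scores = {}
    for results in result_lists:
        for rank, url in enumerate(results, start=1):
            scores[url] = scores.get(url, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

# Mocked responses from three hypothetical upstream engines for one query:
engine_a = ["https://a.example", "https://b.example", "https://c.example"]
engine_b = ["https://b.example", "https://a.example"]
engine_c = ["https://b.example", "https://d.example"]

merged = reciprocal_rank_fusion([engine_a, engine_b, engine_c])
print(merged[0])  # b.example wins: it appears near the top of all three lists
```

The `k` constant just dampens the gap between rank 1 and rank 2; 60 is a conventional default, not anything SearXNG-specific.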
I’m using Qwant. Works better for me than DuckDuckGo.
Another Qwant person! I’m enjoying it.
I’ve been using Kagi for a while now, but I’m not ready to pay for search results. So I’ll basically stick with Google plus a bunch of excluded terms for the time being, and fall back on Kagi’s free search quota if that doesn’t yield any useful results.
Altavista
Hotbot
Dogpile
AskJeeves should be rebooted with an LLM
Magellan
Mostly using DDG now, and have been for some time. Once in a while I need to use Google.
I’ve found for the last 3-4 years I haven’t had to fall back to Google. DDG results have been good enough I can use it full time.
In my case I’ve found the need to use Google for local searches, and certain very specific searches (one example is journal impact factors). In a lot of other cases, DDG has actually given me better results - I was getting fed up with some of the crappy results I was getting using Google, which prompted me to try out and eventually shift to DDG.
Kagi
My own self-hosted SearXNG.
How does that work, does it go out and start indexing the internet for you?
It’s a meta search engine. It queries other search engines and compiles you results.
That’s a shame; it would be more interesting to run your own crawlers.
(Yes I realise that would be computationally intense and hard, but it would nonetheless be cool)
There is probably a neat wget oneliner that could crawl everything on the open web. The real challenge is how to index all the information. That might be a neat Perl oneliner.
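The crawling half of that joke is “just” fetching pages; the indexing half is the interesting data structure. Here’s a minimal sketch of an inverted index, the structure that lets a search engine answer queries without rescanning every page (in Python rather than Perl, with a few hardcoded pages standing in for whatever wget would have pulled down):

```python
from collections import defaultdict

# Hardcoded stand-ins for crawled pages (URL -> page text).
pages = {
    "https://one.example": "self hosted search is fun",
    "https://two.example": "meta search engines query other engines",
    "https://three.example": "hosting your own crawler is hard",
}

# Inverted index: word -> set of URLs containing that word.
index = defaultdict(set)
for url, text in pages.items():
    for word in text.lower().split():
        index[word].add(url)

def search(query):
    """Return the URLs containing every word of the query (AND semantics)."""
    words = query.lower().split()
    if not words:
        return set()
    results = index.get(words[0], set()).copy()
    for word in words[1:]:
        results &= index.get(word, set())
    return results

print(sorted(search("search engines")))  # only the meta-search page has both words
```

A real index also needs stemming, ranking, and on-disk storage, which is roughly where the “neat oneliner” stops being one.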
I use Startpage, but there really isn’t a “good” option
startpage.com or DuckDuckGo
I just cancelled Kagi. It’s good but not really good enough to justify the cost, plus stuff detailed here https://www.osnews.com/story/139270/do-not-use-kagi/
I gave yandex a quick run, it’s actually very good, functionally, but a privacy nightmare.
Currently trying out Mojeek, one of the few outside the big three to have its own index. Pretty good - not all the conveniences of the bigger ones, but maybe good enough most of the time.
I’m a Kagi user … I’ve also previously responded to the referenced post on Kagi.
https://alexandrite.app/social.packetloss.gg/comment/1704525