FUCK u/spez
Ok so they are earning on our data
You just described every company
IMO, another good reason to not use Google!
Ah, so Google signed a contract with the company that trained their AI to … (checks notes) … suggest putting glue on pizza.
Sounds like a perfect match.
I’d look at what will be, rather than what is. I think that it’s probably not controversial to say that AI is going to improve; these are early days. The question is to what extent.
If one is to assume that AI will improve very little over time, that ten years from now the kind of responses you’ll get from a computer in answer to a question will be about the same as they are today, then, yeah, it’s probably an error to commit major resources to AI stuff or to expend resources acquiring training data for it.
But that assumption may not hold.
Let the two of them die together
With Threads too
Blocking other search engines will hurt Reddit, all else held equal. But not by that much. Google is seriously dominant in the search engine market.
kagis
Yeah.
https://gs.statcounter.com/search-engine-market-share
According to this, Google has 91.06% of the search engine market. So for Reddit, they’re talking about cutting themselves off from a little under 9% of people searching out there. Which…I mean, it isn’t insignificant, but it isn’t likely gonna hurt them all that badly.
Yeah, I thought the same, so it’s good to see the numbers. I don’t think people realize that supporting a search engine means letting it crawl your pages, which means serving all your pages to it, which costs server resources. A lot of sites get more crawler load than load from actual users viewing pages. It’s a real cost.
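For what it’s worth, the “Google only” part is mostly implemented through the site’s robots.txt. Here’s a rough sketch of how that policy reads to a crawler (the robots.txt below is hypothetical, not Reddit’s actual file, and the bot list is just for illustration):

```python
import urllib.robotparser

# Hypothetical robots.txt -- not Reddit's actual file, just the general shape
# of a "one search engine only" policy: one crawler allowed, everyone else refused.
ROBOTS_TXT = """\
User-agent: Googlebot
Allow: /

User-agent: *
Disallow: /
"""

rp = urllib.robotparser.RobotFileParser()
rp.parse(ROBOTS_TXT.splitlines())

# Check how different (real) crawler user agents would be treated.
for bot in ("Googlebot", "Bingbot", "DuckDuckBot", "GPTBot"):
    print(bot, "allowed:", rp.can_fetch(bot, "https://example.com/r/some_thread"))
```

Well-behaved crawlers honor that file voluntarily; anything that ignores it has to be blocked or rate-limited at the server instead.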
Still, you’d think they could manage to support DuckDuckGo at least. Or a small set of search giants to give some appearance of supporting competition.
It’s also worth noting that the 9% they cut off was probably the group more inclined to already be using alternatives to Reddit anyways.
You underestimate the number of average Joes who use stuff like DuckDuckGo
I would actually think that the 9% they cut off would be more likely than the 91% to be using Reddit.
One can only hope, but until people learn that they can use another browser and another search engine, it’s not likely (I’m talking about the Google side of course; Reddit might be affected by this in the long run).
Is there a downside? I’m confused.
“Would you like to expand your search to include human-created content? Upgrade to Google Advanced* to unlock the power of the human web!”
“sorry bro, I can’t search that website—it’s not covered by my subscription package”
Google already signaled they want to charge for their trash AI search.
I’m excited for this to start triggering anti-trust legislation
It obviously should, but it won’t, because the US is a capitalist dictatorship masquerading as a democracy. The oligarchy own the government, and the regulators.
But other search engines like Bing are also American capitalist corporations and they don’t want this I’m sure.
letthemfight.gif
Makes sense they’ve spent years curating other people’s content and are now selling it… Oh wait 😯.
I wonder what kind of contract they went with.
I can’t imagine this being a great long-term deal for Google. There’s minimal good new content being created on Reddit. Searching for useful information mostly brings up old posts, while new posts are heavily spam generated or designed to support AI learning.
I imagine buying access to historic Reddit content from its creation to ~2020 would be valuable, while paying for ongoing access to new content is going to be far less valuable and will turn into AI devolution as we get to the point where AI is learning from other AI and spiraling into progressively worse outputs.
At least on some smaller subs, there seems to be a suspicious amount of brand new accounts asking one question to get human answers.
It would not surprise me if Reddit, or some other service, is seeding to get more LLM-able content. Of course, this might backfire if people start giving stupid answers to eff up the data.

If I’m not mistaken, Reddit has actual staff centered around asking questions to get engagement in small communities. Not so much for LLM reasons but to actually grow those communities (and thus edge out competition).
Around here we love the idea of Reddit being totally devoid of life, but the fact is it’s still one of the most active public-facing sites on the web. The attrition to sites like Lemmy is pretty negligible relative to overall Reddit activity, and bot/AI activity only really affects the largest subreddits, which have always been a bit spammy and click-baity. The medium and small subreddits are still full of active people. Don’t get me wrong, Lemmy is my daily driver for this content, but I won’t pretend everyone fled Reddit for this.
Additionally, exclusivity with Google isn’t just about keeping the search results; it’s about preventing their biggest AI competition, ChatGPT and its ties to Microsoft, from getting access to what is the Internet’s largest database of public-facing conversation.
I wonder what kind of contract they went with.
SAN FRANCISCO, Feb 21 (Reuters) - Social media platform Reddit has struck a deal with Google (GOOGL.O) to make its content available for training the search engine giant’s artificial intelligence models, three people familiar with the matter said.
The contract with Alphabet-owned Google is worth about $60 million per year, according to one of the sources.
For perspective:
https://www.cbsnews.com/news/google-reddit-60-million-deal-ai-training/
In documents filed with the Securities and Exchange Commission, Reddit said it reported net income of $18.5 million — its first profit in two years — in the October-December quarter on revenue of $249.8 million.
So if you annualize that, Reddit’s seeing revenue of about $1 billion/year, and net income of about $74 million/year.
Given that Reddit granting exclusive indexing to Google happened at about the same time, I would assume that the AI-training deal included the indexing-exclusivity agreement, but maybe it’s separate.
My gut feeling is that the exclusivity thing is probably worth more than $60 million/year, and that Google’s probably getting a pretty good deal. Like, Google did not buy Reddit, and Google’s done some pretty big acquisitions, like YouTube, and that’d have been another way for Google to get exclusive access. So I’d think that this deal is probably better for Google than buying Reddit. Reddit’s market capitalization is $10 billion, so Google is maybe paying 0.6% of the value of Reddit per year to have exclusive training rights to their content and to be the only search engine indexing them; aside from Reddit users themselves running into content in subreddits, I’d guess that those two are probably the main ways in which one might leverage the content there.
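If you want to sanity-check that arithmetic, here’s the back-of-the-envelope version, using only the figures quoted above:

```python
# Back-of-the-envelope math on the figures quoted above.
q4_revenue = 249.8e6        # Oct-Dec revenue from the SEC filing
q4_net_income = 18.5e6      # Oct-Dec net income

annual_revenue = 4 * q4_revenue          # ~$1.0 billion/year
annual_net_income = 4 * q4_net_income    # ~$74 million/year

deal_per_year = 60e6        # reported value of the Google deal
market_cap = 10e9           # Reddit's market capitalization

print(f"annualized revenue:     ${annual_revenue / 1e9:.2f}B")
print(f"annualized net income:  ${annual_net_income / 1e6:.0f}M")
print(f"deal vs. market cap:    {deal_per_year / market_cap:.1%} per year")
```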
Plus, my impression is that the idea that a number of companies have – which may or may not be valid – is that this is the beginning of the move away from search engines. Like, the idea is that down the line, the typical person doesn’t use a search engine to find a webpage somewhere that’s a primary source. Instead, they just query an AI, which compiles all the data that it can see and spits out an answer. That saves a human searcher some time and reduces complexity, and maybe it can solve some problems if AIs can ultimately do a better job of filtering out erroneous information than humans. We definitely aren’t there yet in 2024, but if that’s where things are going, I think that it might make a lot of strategic sense for Google. If Google can lock up major sources of training data and keep Microsoft out, then it’s gonna put Microsoft in a difficult spot if Microsoft is gunning for the same thing.
Cool, thank you. You seem to know quite a bit about this stuff.
If we do end up at a point without search engines, where AI does the search and summarizes an answer, what do you think their level of ability to tie back to source material will be?
I’m thinking in cases of asking about a technical detail for a hobby, “how do I get x to work”. I don’t necessarily want a response like “connect blue wire to red”. What I really want is the forum posts discussing the troubleshooting and solutions from various people. If an AI search can’t get me to those forums, it’s of little value to me and when I do figure out an answer acceptable to my application, I’m not tied into that forum to share my findings (and generate new content for the AI to index).
Related to that, I’m thinking about these stories of lawyers relying on AI to write their briefs, and the AI cites non-existent cases as if they were real. It seems to me, not at all a programmer, that getting an AI to the point where it knows what’s real and what’s a hallucination would be a challenge. And until we get to that point, it’s hard to put full trust into an AI search.
If we do end up at a point without search engines, where AI does the search and summarizes an answer, what do you think their level of ability to tie back to source material will be?
I haven’t used the text-based search queries myself; I’ve used LLM software, but not for this, so I don’t know what the current situation is like. My understanding is that the current approach doesn’t really permit it. And there are two issues with that:
- There isn’t a direct link between one source and what’s being generated; the model isn’t really structured so as to retain this.
- Many different sources probably contribute to the answer.
All information contributes a little bit to the probability of the next word that the thing is spitting out. It’s not that the software rapidly looks through all pages out there and then finds a given single reputable source that it could then cite, the way a human might. That is, you aren’t searching an enormous database when the query comes in, but repeatedly making use of a prediction that the next word in the correct response is a given word, and that probability is derived from many different sources. Maybe tens of thousands of people have made posts on a given subject; the response isn’t just a quote from one, and the generated text may appear in none of them.
To maybe put that in terms of how a human might think, to place you in the generative AI’s shoes, suppose I say to you “draw a house”. You draw a house with two windows, a flowerbed out front, whatever. I say “which house is that?” You can’t tell me, because you’re not trying to remember and present one house – you’re presenting me with a synthetic aggregate of many different houses; probably every house you’ve seen has mentally contributed a bit to it. Maybe you could think of a given house that you’ve seen in the past that looks a fair bit like that house, but that’s not quite what I’m asking you to tell me. The answer is really “it doesn’t reflect a single house in the real world”, which isn’t really what you want to hear.
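If it helps to see the mechanics, here’s a toy sketch in code. Everything in it is made up (the vocabulary, the probabilities), and a real model has billions of weights rather than a lookup table, but the shape of the loop is the same: keep sampling a next word from a probability distribution, with nothing anywhere recording which training document any given probability came from.

```python
import random

# Toy next-word "model": a probability distribution over the next word given
# the previous one. In a real LLM these probabilities are baked into the
# weights, blended from the whole training set -- there is no pointer back
# to any single source document.
NEXT_WORD = {
    "<start>":  {"the": 0.7, "some": 0.3},
    "the":      {"cheese": 0.5, "sauce": 0.3, "glue": 0.2},
    "some":     {"cheese": 0.6, "sauce": 0.4},
    "cheese":   {"melts": 0.8, "sticks": 0.2},
    "sauce":    {"thickens": 1.0},
    "glue":     {"sticks": 1.0},
    "melts":    {"<end>": 1.0},
    "sticks":   {"<end>": 1.0},
    "thickens": {"<end>": 1.0},
}

def generate(max_words: int = 10) -> str:
    word, out = "<start>", []
    for _ in range(max_words):
        dist = NEXT_WORD[word]
        word = random.choices(list(dist), weights=list(dist.values()))[0]
        if word == "<end>":
            break
        out.append(word)
    return " ".join(out)

print(generate())  # e.g. "the cheese melts" -- an aggregate, never a citation
```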
It might be possible to basically run a traditional search on a generated response to find an example of that text, if it amounts to a quote (which it may not!).
And if Google produces some kind of “reliability score” for a given piece of material and weights the material in the training set by that (which I’d guess they will, if they don’t already), they could maybe use that reliability score to try to rank various sources when doing that backwards search for relevant sources.
But there’s no guarantee that that will succeed, because they’re ultimately synthesizing the response, not just quoting it, and because it can come from many sources. There may potentially be no one source that says what Google is handing back.
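To sketch what that backwards search might look like (purely illustrative; the corpus, the reliability scores, and the crude word-overlap matching are all invented):

```python
# Illustrative only: tiny made-up corpus with made-up reliability scores.
CORPUS = [
    {"url": "https://example.org/manual",    "reliability": 0.9,
     "text": "Connect the blue wire to the red terminal before powering on."},
    {"url": "https://example.org/forumpost", "reliability": 0.4,
     "text": "I just connected the blue wire to the red terminal and it worked."},
]

def candidate_sources(generated: str, min_overlap: float = 0.6):
    """Rank documents by word overlap with the generated text, weighted by the
    source's reliability score. May return nothing at all if the response was
    synthesized rather than quoted."""
    words = set(generated.lower().split())
    ranked = []
    for doc in CORPUS:
        overlap = len(words & set(doc["text"].lower().split())) / len(words)
        if overlap >= min_overlap:
            ranked.append((overlap * doc["reliability"], doc["url"]))
    return sorted(ranked, reverse=True)

print(candidate_sources("connect the blue wire to the red terminal"))
```

Even in this toy form the failure mode is visible: if the generated sentence is a blend of many posts, nothing clears the overlap threshold, and there’s simply no source to hand back.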
It’s possible that there will be other methods than the present ones used for generating responses in the future, and those could have very different characteristics. Like, I would not be surprised, if this takes off, if the resulting system ten years down the road is considerably more complex than what is presently being done, even if to a user, the changes under the hood aren’t really directly visible.
There’s been some discussion about developing systems that do permit for this, and I believe that if you want to read up on it, the term used is “attributability”, but I have not been reading research on it.
Attribution, great term to search. Thank you.
Web-searching “attribution + AI” brings up a lot of hits on copyright concerns, which opens up even more questions. If we get to the point where AI attributes its sources with some sort of scoring, then it’s near certainly going to be using copyrighted materials at times. And depending on the copyright, what profits the AI company is gaining from their use, and probably a bunch more detailed copyright stuff beyond my civilian knowledge, there are probably financial and legal reasons for AI searches to not publicly attribute sources. Which loops me back to: in many cases I want to see the conflicting materials and make a judgement call on the final summary myself.
I’m sure there are many people much smarter than me with nothing but pure, ethical intentions figuring all this out. Who knows, maybe this will be the tipping point for better copyright and intellectual property protections in the US and elsewhere.
With all the botting going on on Reddit, this whole Google AI deal makes me think of the recent paper demonstrating that, as common sense would suggest, deep learning models collapse when successive generations are trained on the previous generations’ output.
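The effect is easy to caricature with a toy simulation; this is a deliberately crude stand-in (repeatedly refitting a Gaussian to samples drawn from the previous fit), not the paper’s actual language-model experiments:

```python
import random
import statistics

# Crude stand-in for "model collapse": generation 0 is the real data
# distribution; each later generation is a Gaussian fitted to a finite sample
# drawn from the previous generation's fit. Over many generations the fitted
# spread tends to drift toward zero -- the "model" slowly forgets the tails
# of the original data.
random.seed(0)
mu, sigma = 0.0, 1.0                      # generation 0: the "real" data
for gen in range(1, 201):
    samples = [random.gauss(mu, sigma) for _ in range(20)]
    mu, sigma = statistics.fmean(samples), statistics.stdev(samples)
    if gen % 40 == 0:
        print(f"gen {gen:3d}: mean={mu:+.3f}  stdev={sigma:.3f}")
```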
Hot take here.
I do believe in free information.
Instead of investing money in stopping crawlers, why not make the data they are trying to crawl available to everyone for free, so we can have a better world altogether?
Data transfer isn’t free. It costs real money and energy to respond to queries. Don’t be surprised to see ~50% of all requests made to your server coming from bots you have no interest in servicing, outside of search engine indexers.
If you published your data in a friendly manner, bots would have no need to crawl your site.
Data that is more interesting and requested a lot could even be served over p2p.
This model would generate less cost than dealing with constant bot scrapers.
It is not a technical discussion. Or a discussion about associated cost. It’s a discussion about morals and economic models.
Reddit responded: “Only Google pays us.” The content is not yours. You built this off a naive user base that just wanted to share, and now these fuckers are taking it as their entitlement. As an early Reddit user: fuck that place, I’m still angry.
Been on Reddit since like 2009-ish. You completely nailed the point.
Someone should fight in court that it’s not Reddit’s content. It belongs to the people, not Steve fuck face.
I’m sure the reddit TOS you agreed to during signup says otherwise…
Legally speaking, the content is theirs.
No, I don’t think so. Just because you put a clause in the ToS doesn’t make it legally binding, and most precedent is in favor of the original copyright owner.
I’d love to see the precedent, if you don’t mind.
Nonsense.
If someone posts a copyright violation on YouTube, YouTube can go free under the safe harbor provisions of the DMCA. (In the US.) YouTube just points a finger at the user and says “it’s their fault”, because the user owns (or claims to own) the content. YouTube is just hosting it.
I don’t know of any reason to think it’s not the same for written works. The user posts them, Reddit hosts them, the user still owns them. Like YouTube, the user gives the host a lot of license for that content, so that they can technically copy and transmit it. But ultimately the user owns it. I assume that by the time Reddit made the AI deal, they’d put wording in the TOS broad enough to cover “selling a copy of the data” however they want.
Now, determining whether the TOS holds up in court is of course trickier. And did they even make us click our permission away again after they added it, or did it just change something we’d already clicked? I don’t recall.
Usually any hosting platform has some kind of wording to the tune of “you give us permanent and unrestricted right to use your content however we want”. Copyright is still yours, but you can’t use it against the platform. Applies to social networks, YouTube, Flickr, anything I can think of.
Oh well. Time to post more questions on lemmy
They’re also blocking posts by users who aren’t banned and haven’t even gotten a warning. It appears to the user as though it’s been posted, but it hasn’t.
Shadowbanning? Do you have more info on this?
They’ve done this for a long time. It’s supposedly only meant to be used on bots, but it definitely isn’t in practice.
Shadowbanning is a totally different issue that’s existed for a long time, though.
How many times is this going to be posted? I’ve seen this several times now over the past few days.
Sorry, I haven’t seen it. If it’s been posted here before, send me the link to the previous post, and I’ll take this one down. Even better, you can report the post, and the mods will investigate it.
Thank you!
Since you asked, here are the other four times it was posted.
- https://lemmy.world/post/17906460
- https://lemmy.world/post/17913261
- https://lemmy.world/post/17930528
- https://lemmy.world/post/17949956
There was a fifth one, but that one has since been removed.
Thanks, this looks like different reporting on the same story. That happens with major news, but I can understand why it may seem like excess if it’s not a story you’re interested in.
Sure, some of those links are different. But you have to admit, even if you are interested in this story, 5 times is a bit excessive.
https://addons.mozilla.org/en-US/firefox/addon/g-search-filter/
Install this and exclude Reddit from all search results.
This one works better: https://addons.mozilla.org/en-US/firefox/addon/hohser/ - more supported sites, and it doesn’t break as often.
Why not change your search engine and set up a SearX instance? You can find all instances here: https://searx.space. For example, I have set it up like this: https://search.inetol.net/search?q=%s&category_general=1&language=en&time_range=&safesearch=0&theme=simple, and it works wonders. Results are still mostly from Google, or you can configure it to be whatever you want.
Mainly because it’s easier to set up a browser extension. Does SearXNG let you hide sites and rank sites higher in the results?
Also you’d really want to use SearXNG… The original SearX is dead.