They couldn’t keep their heads on fucking straight during the .com bubble, and here they are doing it all over again.
No bubble has deserved to pop as much as AI deserves to
Try Venice AI, free to use, won’t try to censor your topics. Still just a chatbot though (although I think it does image generation too).
I’m sorry, what about their comment made you think they were asking for recommendations?
The part where they were saying they don’t like the current AIs they know about. Showing disapproval of the trend.
Censoring topics is the least of the issues with the AI bubble.
No, it’s a huge one, because it’s the most likely application of AI. AI site moderation will be the start of AI digital policing, a field which risks growing larger and larger until it manifests as actual legal policing.
Blockchain and crypto were worse. “AI” has some actual use even if it’s way overblown.
Yes. But companies bought into AI way more than they bought into crypto, in many outlandish and stupid ways. And many AI companies sell it in ways they shouldn’t.
As a counterpoint: https://en.wikipedia.org/wiki/Long_Blockchain_Corp.
Truly the hardest and weirdest crypto buy-in that ever happened.
Oh yeah. Kodak too. GameStop, right? There were a bunch of others.
I’m glad you didn’t say NFTs because my Bored Ape will regain and triple its value any day now!
Bro the GME short squeeze is going to hit any day now. We’re going to be millionaires bro, you just wait
Creating a specialized neural net to perform a specific function is cool. Slapping GPT into customer support because you like money is horse shit and I hope your company collapses. But yeah you’re right. Blockchain was a solution with basically no problems to fix. Neural nets are a tool that can do a ton of things, but everyone is content to use them as a hammer.
Yes! “AI” defined as only LLMs and the party trick applications is a bubble. AI in general has been around for decades and will only continue to grow.
I mean, blockchain does have some actual uses, definitely more niche than LLMs though.
I’m not even understanding what AI is at this point because there’s no delineation between moderately sophisticated algorithms and things that are orders of magnitude more complex.
I mean, if something like multisampling came out today we’d all know how it’d be marketed
Technically speaking how I differentiate it is:
- clever algorithm is a good heuristic
- statistics on steroids is machine learning
- using a transformer model is AI (for now)
The AI buzzword means machine learning. You give it a massive dataset and it identifies correlations.
Regular hand-coded AI is mostly simple state machines.
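For contrast, the hand-coded kind is about this deep (a toy Python sketch, everything made up):

```python
from enum import Enum, auto

class State(Enum):
    PATROL = auto()
    CHASE = auto()
    FLEE = auto()

def step(state: State, sees_player: bool, low_health: bool) -> State:
    # A handful of hand-written transition rules; this is the whole "brain".
    if low_health:
        return State.FLEE
    if state is State.FLEE:
        return State.PATROL  # recovered, go back to patrolling
    if sees_player:
        return State.CHASE
    return State.PATROL

state = State.PATROL
for sees, hurt in [(True, False), (True, True), (False, False)]:
    state = step(state, sees, hurt)
    print(state)  # CHASE, then FLEE, then PATROL
```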
AI is a ridiculously broad term these days. Everybody has been slapping the label on anything. It’s kinda like saying “transportation” when it could mean anything from babies crawling up to warp drive and teleportation.
Because most of them are AI
It’s just that once AI becomes useful (and not magical), we tend to stop calling it AI, unless the AI label gets more VC money.
It’s called the AI effect
Blockchain has many valuable uses. A distributed zero trust ledger is useful. Sadly the finance scammers and the digital beanie baby collectors attracted all the marketing money.
And yet, every single company that has ever tried to implement a distributed zero trust ledger into their products and processes has inevitably ditched the idea after realizing that it does not, in fact, provide any useful benefit.
It is exceptionally useful for the auditing of damn near everything in digital space, as long as shared resources and 3rd parties have access to the blockchain … which is probably the major reason corporations and politicians don’t want anything to do with it.
It’d be a lot harder to hide crimes, fraud, grey business dealings, bribery and illegal donations, sanction violations, secret police slush funds, etc, etc if every event in the entire financial system and supply chain was logged and verifiable.
The reason major businesses haven’t bothered using distributed blockchains for auditing is because they fundamentally do not actually help in any way with auditing.
At the end of the day, the blockchain is just a ledger. At some point a person has to enter the information into that ledger.
Now, hear me out here, because this is going to be some totally out there craziness that is going to blow your mind… What happens if that person lies?
Like, you’ve built your huge, complicated system to track every banana you buy from the farm to the grocery store… But what happens if the shipper just sends you a different crate of bananas with the wrong label on them? How does your system solve that? What happens if the company growing your bananas claims to use only ethical practices but in reality their workers are effectively slaves? How does a blockchain help fix that?
The data in a system is only as good as your ability to verify it. Verifying the integrity of the data within systems was largely a solved problem long before distributed blockchains came along, and was rarely if ever the primary avenue for fraud. It’s the human components of these systems where fraud can most easily occur. And distributed blockchains do absolutely nothing to solve that.
Counterpoint: having a currency where every token is tied to its own transaction history might be unpopular with large businesses for other reasons. Like maybe they don’t want to be that transparent or accountable. The FBI has made public statements about how much easier it is to track criminals who used crypto.
Your opinion seems to contradict reality.
This is a very poorly considered argument. Even if we suppose that everything you’ve said is true, the existence of a second plausible explanation doesn’t invalidate the first. You’ve not actually offered any reason why any of what I said is wrong, you just said “X is possible, therefore Y cannot be true.”
Also, I want to note that this particular digression wasn’t about cryptocurrency at all. The point I was responding to was a claim that blockchains had uses other than as currencies. So you really might want to step back a bit and consider what you think is being discussed here, and what you’re actually trying to say.
The idea has merit, in theory – but in practice, in the vast majority of cases, having a trusted regulator managing the system, who can proactively step in to block or unwind suspicious activity, turns out to be vastly preferable to the “code is law” status quo of most blockchain implementations. Not to mention most potential applications really need a mechanism for transactions to clear in seconds, rather than minutes to days, and it’d be preferable if they didn’t need to boil the oceans dry in the process of doing so.
If I was really reaching, I could maybe imagine a valid use case for say, a hypothetical, federated open source game that needed to have a trusted way for every node to validate the creation and trading of loot and items, that could serve as a layer of protection against cheating nodes duping items, for instance. But that’s insanely niche, and for nearly every other use case a database held by a trusted entity is faster, simpler, safer, more efficient, and easier to manage.
Your second point about trading loot and items got me thinking about my Steam CS:GO skins. Why should I trust a centralized entity like Steam, which could at any moment decide to delete all my skins or remove my account (skins and all) for whatever reason, vs storing those skins in a wallet on a public blockchain, for example, to keep their value and always allow trading? Ofc there will always be a “centralized” smart contract, but at least they can’t make changes to it if the smart contract code is audited.
In that case (as is the case with most games) the near-worst case scenario is that you are no worse off trusting Valve with the management of item data than you would be if it was in a public block chain. Why? Because those items are valueless outside the context of the commercial game they are used in. If Valve shuts down CS:GO tomorrow, owning your skins as a digital asset on a blockchain wouldn’t give you any more protection than the current status quo, because those skins are entirely dependent on the game itself to be used and viewed – it’d be akin to holding stock certificates for a company that’s already gone bankrupt and been liquidated: you have a token proving ownership of something that doesn’t exist anymore.
Sure, there’s the edge case that if your Steam account got nuked from orbit by Gaben himself along with all its purchase and trading history, you could still cash out on your skin collection. Conversely, having Valve – which, early VAC-ban wonkiness notwithstanding, has proven itself to be a generally-trustworthy operator of a digital games storefront for a couple decades now – hold the master database means that if your account got hacked and your stuff shifted off the account to others for profit, it’s much easier for Valve support to simply unwind those transactions and return your items to you. Infamously, in the case of blockchain ledgers, reversing a fraudulent transaction often requires forking the blockchain.
Crypto has a legitimate value, you can buy drugs with it.
Honestly kinda miss when the drugs I did were illegal. I used to buy weed from this online seller that was really into designer drugs. The amount of time I used to spend on Erowid just to figure out wtf I was about to take.
Maybe real estate?
I think all the crypto scams, all the shitcoins, NFTs and other blockchain bullshit were much worse. At least AI companies usually don’t require you to give them large sums of money; they’re only after your data, and they absolutely fuck the environment by wasting absurd amounts of power, but they don’t try to take away your life savings.
The housing bubble.
AI is probably a close second though.
Not shocked. It seems the tech bros like to troll us every few years.
The tech bros are selling, but it’s the VCs that are fueling this whole thing. They’re grasping for the next big thing. Mostly they don’t care if any of it actually works, as long as they can pump share value and then sell before it collapses.
They have been trying to repeat the big tech rise…
But each generation is more limp dick.
Uber/Airbnb > crypto > AI
Techbros are the modern day equivalent to snake oil salesmen.
As a major locally-hosted AI proponent, aka a kind of AI fan, absolutely. I’d wager it’s even worse than crypto, and I hate crypto.
What I’m kinda hoping happens is that bitnet takes off in the next few months/years, and that running a very smart model on a phone or desktop takes milliwatts… Who’s gonna buy into Sam Altman’s $7 trillion cloud scheme to burn the Earth when anyone can run models offline on their phones, instead of hitting APIs running on multi-kilowatt servers?
And ironically it may be a Chinese company like Alibaba that pops the bubble, lol.
If bitnet takes off, that’s very good news for everyone.
The problem isn’t AI, it’s AI that’s so intensive to host that only corporations with big datacenters can do it.
The fuck is bitnet
https://www.microsoft.com/en-us/research/publication/bitnet-scaling-1-bit-transformers-for-large-language-models/ use 1 bit instead of 8 or 16, yay performance gainz
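The gist, as a toy numpy sketch of the weight binarization (my own rough reimplementation of the idea, not Microsoft’s code, and not exactly the paper’s formulation):

```python
import numpy as np

def binarize_weights(w: np.ndarray):
    """BitNet-style 1-bit weight quantization (rough sketch).

    Weights get centered, snapped to {-1, +1}, and one per-matrix
    scale factor keeps output magnitudes roughly right.
    """
    alpha = w.mean()                   # centering term
    beta = np.abs(w - alpha).mean()    # per-matrix scale
    w_bin = np.sign(w - alpha)         # each weight is now 1 bit
    w_bin[w_bin == 0] = 1.0            # sign(0) -> +1 so it stays binary
    return w_bin, beta

# A matmul against binary weights is just adds/subtracts, no float multiplies:
rng = np.random.default_rng(0)
w = rng.normal(size=(4, 8)).astype(np.float32)
x = rng.normal(size=8).astype(np.float32)

w_bin, beta = binarize_weights(w)
print(w @ x)                # full-precision result
print(beta * (w_bin @ x))   # 1-bit approximation of it
```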
So will the return of the flag conclude the adventures of resource usage in computers?
As someone who follows the startup space (and is thinking of starting their own, non-AI driven startup), the issue is all of the easily solvable problems have already been solved. The only thing that shakes up the tree is when new tech comes along and makes some of the old problems easy to solve.
So take a look at crypto - If you wanted to make a tip bot on Telegram, before crypto that was really hard. You needed to register with something like PayPal, have the recipient register with PayPal, etc etc etc. After crypto it was “Hey this person sent you $5, use this private key if you want to recover it” (btw I made this service and it was used a lot).
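Mechanically it’s about that simple. A hypothetical sketch of the pattern using the eth-account library (not my actual bot’s code):

```python
# Hypothetical sketch of the "here's a private key" tip pattern,
# using the eth-account library; not the actual service's code.
from eth_account import Account

def make_tip_wallet():
    """Create a fresh throwaway keypair for a single tip.

    The sender funds `address` with the tip amount, the bot DMs
    `private_key` to the recipient, and the recipient sweeps the
    funds into any wallet they like. No PayPal signup for anyone.
    """
    acct = Account.create()
    return acct.address, acct.key.hex()

address, private_key = make_tip_wallet()
print(f"Send the tip to: {address}")
print(f"DM to recipient: claim it with this key: {private_key}")
```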
Now look at AI - Imagine making a service that detects CSAM before AI took off. As an aside, I did NOT make this service, but I know a group of people who did. Imagine trying to make this without the AI boom - you’d need millions of images for training data, a PhD in machine learning, and so much more. Now, anyone can make it in their basement.
The point is, investors KNOW the bubble is a bubble and that it will pop. It doesn’t matter though. They’re looking for people who will solve problems that previously cost $1bn to solve with only $1m of funding. If even 1% of their companies pay off, they make a profit.
If even 1% of their companies pay off, they make a profit.
I suspect they make a profit even when 0% pan out. They just need to find someone gullible enough to buy in at the peak, and there’s a new sucker born every minute.
bubble after bubble after bubble after…
problem is, the amount of soap (money) that goes around to make the bubbles keeps shrinking, because the bubbles are siphoning it away from the consumers.
I wonder what happens when there’s no more soap left to go around?
Tech makes bubbles because people think it’s magic.
Checks to see if Baidu is doing AI…yes, they are. How shocking.
“probably 1% of the companies will stand out and become huge and will create a lot of value, or will create tremendous value for the people, for society. And I think we are just going through this kind of process.”
Baidu is huge. Sounds like good news for Baidu!
I think less restrictive AIs that are free, like Venice AI (you can ask it pretty much anything and it will not stop you), will be around longer than the ones that went with restrictive subscription models, and that eventually those other ones will become niche.
New technology always propagates further the freer it is to use and experiment with, and ChatGPT and OpenAI are quite restrictive and money hungry.
Aw, only 99%?
And they will ALL deserve it.
10 to 30? Yeah I think it might be a lot longer than that.
Somehow everyone keeps glossing over the fact that you have to have enormous amounts of highly curated data to feed the trainer in order to develop a model.
Curating data for general purposes is incredibly difficult. The big medical research universities have been working on it for at least a decade, and the tools they have developed, while cool, are only useful as tools to a doctor who has learned how to use them. They can speed diagnostics up, they can improve patient outcomes. But they cannot replace anything in the medical setting.
The AI we have is like fancy signal processing at best
Not an expert so I might be wrong, but as far as I understand it, those specialised tools you describe are not even AI. It is all machine learning. Maybe to the end user it doesn’t matter, but people have this idea of an intelligent machine when it’s more like brute-force information feeding into a model system.
Don’t say AI when you mean AGI.
By definition AI (artificial intelligence) is any algorithm by which a computer system automatically adapts to and learns from its input. That definition also covers conventional algorithms that aren’t even based on neural nets. Machine learning is a subset of that.
AGI (artificial general intelligence) is the thing you see in movies, that people project onto their LLM responses, and what’s driving this bubble. It is the final goal, and means a system able to perform everything a human can, at at least human level. Pretty much all the actual experts agree we’re a long way from such a system.
It may be too late on this front, but don’t say AI when there isn’t any I to it.
Of course it could be successfully argued that humans (or at least a large amount of them) are also missing the I, and are just spitting out the words that are expected of them based on the words that have been ingrained in them.
Intelligence: The ability to acquire, understand, and use knowledge.
A self-driving car is able to observe its surroundings, identify objects and change its behaviour accordingly. Thus a self-driving car is intelligent. What’s driving such a car? AI.
You’re free to disagree with how other people define words, but then don’t take part in their discussions expecting everyone to agree with your definition.
This is not up to you or me: AI is an area of expertise / a scientific field with a precise definition. Large, but well defined.
AI as a field of computer science is mostly about pushing computers to do things they weren’t good at before. Recognizing colored blocks in an image was AI until someone figured out a good way to do it. Playing chess at grandmaster levels was AI until someone figured out how to do it.
Along the way, it created a lot of really important tools. Things like optimizing compilers, virtual memory, and runtime environments. The way computers work today was built off of a lot of things out of the old MIT CSAIL labs. Saying “there’s no I to this AI” is an insult to their work.
Recognizing colored blocks in an image was AI until someone figured out a good way to do it. Playing chess at grandmaster levels was AI until someone figured out how to do it.
You make it sound like these systems stopped being AI the moment they actually succeeded at what they were designed to do. When you play chess against a computer it’s AI you’re playing against.
That’s exactly what I’m getting at. AI is about pushing the boundary. Once the boundary is crossed, it’s not AI anymore.
Those chess engines don’t play like human players. If you were to look at how they determine things, you might conclude they’re not intelligent at all by the same metrics that you’re dismissing ChatGPT. But at this point, they are almost impossible for humans to beat.
I’m not the person you originally replied to. At no point have I dismissed ChatGPT.
I disagree with your logic about the definition of AI. Intelligence is the ability to acquire, understand, and use knowledge. A chess-playing AI can see the board, understand the ramifications of each move, and respond to how the pieces are moved. That makes it intelligent - narrowly so, but intelligent nonetheless. And since it’s artificial too, it fits the definition of AI.
AI in health and medtech has been around and in the field for ages. However, two persistent challenges make rollout slow-- and they’re not going anywhere, because of the stakes at hand.
The first is just straight regulatory. Regulators don’t have a very good or very consistent working framework to apply to these technologies, but that’s in part due to how vast the field is in terms of application. The second is somewhat related to the first but really is also very market driven, and that is the issue of explainability of outputs. Regulators generally want it of course, but also customers (i.e., doctors) don’t just want predictions/detections, but want and need to understand why a model “thinks” what it does. Doing that in a way that does not itself require significant training in the data and computer science underlying the particular model and architecture is often pretty damned hard.
I think it’s an enormous oversimplification to say modern AI is just “fancy signal processing” unless all inference, including that done by humans, is also just signal processing. Modern AI applies rules it is given, explicitly or by virtue of complex pattern identification, to inputs to produce outputs according to those “given” rules. Now, what no current AI can really do is synthesize new rules uncoupled from the act of pattern matching. Effectively, a priori reasoning is still out of scope for the most part, but the reality is that that simply is not necessary for an enormous portion of the value proposition of “AI” to be realized.
LLMs are not the only type of AI out there. ChatGPT appeared seemingly out of nowhere. Who’s to say the next AI system won’t do that as well?
ChatGPT did not appear out of nowhere.
ChatGPT is an LLM: a generative pre-trained model using a neural network.
Aka: it’s a chatbot that creates its responses based on an insane amount of text data. LLMs trace back to the 90s, and I learned about them in college in the late 2000s-2010s. Natural Language Processing was a big contributor, and Google introduced some powerful neural network tech in 2014-2017.
The reason they “appeared out of nowhere” to the common man is merely marketing.
You’re misquoting me. I haven’t claimed LLMs appeared out of nowhere.
You said ChatGPT appeared out of nowhere. ChatGPT is basically Eliza with an LLM.
Those are not my words and you know it. You’re misquoting me.
LLMs are not the only type of AI out there. ChatGPT appeared seemingly out of nowhere. Who’s to say the next AI system won’t do that as well?
I’m not sure what I’m misquoting. A large language model is not AI, a large language model is a non-human readable function used by a generative AI algorithm.
Simply put, ChatGPT did not appear out of nowhere.
ChatGPT did not appear out of nowhere
I agree.
The key word there is seemingly. The technology itself had existed for a long time, but it wasn’t until the massive leap OpenAI made with it that it actually became popular. Before ChatGPT, 99% of people had never heard of LLMs, and now everyone has. That’s what I mean when I say it appeared seemingly out of nowhere - it took the masses by surprise. There’s no reason to assume another company working on a different approach to AI won’t make a similar massive breakthrough, giving us AI far more powerful than LLMs and taking everyone by surprise, despite the base technology having existed for a long time.
A large language model is not AI
It is AI though - a subset of generative AI to be specific, but it still falls under the AI category.
Anything can happen. We can discover time travel tomorrow. The economy cannot run on wishful thinking.
It can! For a while. Isn’t that the nature of speculation and speculative bubbles? Sure, they may pop some day, because we don’t know for sure what’s a bubble and what is a promising market disruption. But a bunch of people make a bunch of money until then, and that’s all that matters.
The uncertainty of it is exactly why it shouldn’t suck up as much capital and resources as it is doing.
Shouldn’t, definitely. But for a while, it will keep running, because that’s how a lot of speculative investment works.
I agree, and the problem is finance capitalism itself. But then it becomes an ideological argument.
The argument could be made economically rather than ideologically.
Capitalism has a failure mode where too much capital gets concentrated into too few hands, depressing the flow of money moving through the economy.
But Capitalists start crying “Socialism!” as soon as you start talking about anti-trust.
Tulips all the way down…
I am old enough to remember when the CEO of Nortel Networks got crucified by Wall Street for saying in a press conference that the telecom/internet/carrier boom was a bubble, and the fundamentals weren’t there (who is going to pay for long distance anymore when calls are free over the internet? where are the carriers-- Nortel’s customers-- going to get their income from?). And 4 years later Nortel ceased to exist. Cisco crashed too, though had enough TCP/IP router biz and enterprise sales to keep them alive even until today.
This all reminds me of the late 1990s internet bubble rather than the more recent crypto bubble. We’ll all still be using ML models for all kinds of things more or less forever from now on, but it won’t be this idiotic hype cycle and overvaluation anymore after the crash.
Shit, crypto isn’t going anywhere either, it’s a permanent fixture now, Wall Street bought into it and you can buy crypto ETFs from your stockbroker. We just don’t have to listen to hype about it anymore.
Crypto is still just as awful as it ever was IMO. Still plenty of assholes ~~gambling~~ investing in crypto.
This message has existed for 10 hours and a cryptobro hasn’t commented yet?
Well put.
Soon, it won’t be this idiotic hype cycle, but it’ll be some other idiotic hype cycle. Short term investors love hype cycles.
We just don’t have to listen to the hype about it anymore.
True, in most circles it’s now just been mixed in as a commodity to trade on. Though I wish everyone would get that. There’s still plenty of idiots with .eth usernames who think there’s some new boon to be made. The only “apps” built on crypto networks were and are purely for trading crypto; I’ve never seen any real tangible benefit to society come out of it. It’s still used plenty for money laundering, but regulators are (slowly) catching up. And it’s still by far the easiest way to demonstrate what happens to unregulated markets.
Crypto has been turned into gold by Wall Street; they bought up enough of it to not be completely exposed, its supply is extremely limited and will run out. Putting your money into it is no different than putting it into gold: you might catch a good moment, buy in low and get some return, but most won’t.
Putting your money into it is no different than putting it into gold
Sorry kiddo, putting your money into crypto is very, very different to putting it into gold.
Well yeah, it’s easier to steal it if you click on a link you shouldn’t.
Once the apocalypse comes, you can at least use a gold brick to brain the zombies, whereas your crypto will vanish along with the Internet and electrical grid.
The supply is more like absolutely unlimited lol.
Not enough BTC? Make Litecoin! Etc etc etc
No one cares about Litecoin though, which defeats the purpose.
Ethereum, bla bla bla, there are tons of them, thousands. There is no shortage lol.
Silver, bronze, etc. Are you being dense on purpose? Though your seeming affinity for crypto implies you are simply dense.
Lol touch grass cryptobro.
I think you are the cryptobro. How the fuck does likening crypto to a known stupid investment like gold make me a cryptobro, you absolute muffin?
That’s like saying US Dollars are Unlimited because you can always buy Zimbabwe Dollars…
Good metaphor.
YUP
Good.
Eh… “Robin Li says increased accuracy is one of the largest improvements we’ve seen in Artificial Intelligence. “I think over the past 18 months, that problem has pretty much been solved—meaning when you talk to a chatbot, a frontier model-based chatbot, you can basically trust the answer,” the CEO added.”
That’s plain wrong. Even SOTA black-box chatbots sometimes give wrong answers to the simplest of questions. That’s precisely what NOT being able to trust means.
How can one believe anything this person is saying?
To trust a computer it has to be correct 100% of the time, because it can’t say “I don’t know”.