I’ve tried several types of artificial intelligence, including Gemini, Microsoft Copilot, and ChatGPT. A lot of the time I ask them questions and they get everything wrong. If artificial intelligence doesn’t work, why are they trying to make us all use it?
Cause it’s cool
Not to me. If you like it, that’s fine.
Perhaps your personal bias is clouding your judgement a bit here. You don’t seem very open-minded about it. You’ve already made up your mind.
Probably but I’m far from the only one.
Money. If you paid to use those services, they got what they wanted.
Money.
That’s the entirety of the reason.
“Line must go up.”
Summed up an MBA in four words.
“Be greedy” => there, did it in two:-P
It is so sad that it works too - no room for nuance, responsibility, or even long-term stability (even for the entire human species, plus all the other mammals on Earth and the many other species we seem ready to take down with us on our way to extinction).
Like was said: money.
In addition, they need training data - both conversations and raw material. Shoving “AI” into everything, whether you want it or not, gives them the real-world conversational data to train on. If you feed it any documents, etc., it’s also sucking those up as raw data to train on.
Ultimately the best we can do is ignore it and refuse to use it or feed it garbage data so it chokes on its own excrement.
That works for me. I’ll just ignore it to spare my sanity
The idea is that it can replace a lot of customer facing positions that are manpower intensive.
Beyond that, an AI can also act as an intern in assisting in low complexity tasks the same way that a lot of Microsoft Office programs have replaced secretaries and junior human calculators.
I’ve always figured part of it is that businesses don’t like to pay for labor, and they’re hoping they can use artificial intelligence to get rid of the rest of us so they don’t have to pay us.
Ignoring AI the way he is, is like dismissing spreadsheets as hype. “I can do everything with a pocket calculator! I don’t need stupid auto-fill!”
AI doesn’t replace people. It can automate and reduce your workload, leaving you more time to solve problems.
I’ve used it for one-off scripts. I have friends who have done the same, and another friend used it to create the boilerplate for a government contract bid that he won (millions in revenue for his company, of which he got tens of thousands as a bonus for engineering sales support).
It’s understandable to feel frustrated when AI systems give incorrect or unsatisfactory responses. Despite these setbacks, there are several reasons why AI continues to be heavily promoted and integrated into various technologies:
- Potential and Progress: AI is constantly evolving and improving. While current models are not perfect, they have shown incredible potential across a wide range of fields, from healthcare to finance, education, and beyond. Developers are working to refine these systems, and over time they are expected to become more accurate, reliable, and useful.
- Efficiency and Automation: AI can automate repetitive tasks and increase productivity. In areas like customer service, data analysis, and workflow automation, AI has proven valuable by saving time and resources, allowing humans to focus on more complex and creative tasks.
- Enhancing Decision-Making: AI systems can process vast amounts of data faster than humans, helping in decision-making processes that require analyzing patterns, trends, or large datasets. This is particularly beneficial in industries like finance, healthcare (e.g., medical diagnostics), and research.
- Customization and Personalization: AI can provide tailored experiences for users, such as personalized recommendations in streaming services, shopping, and social media. These applications can make services more user-friendly and customized to individual preferences.
- Ubiquity of Data: With the explosion of data in the digital age, AI is seen as a powerful tool for making sense of it. From predictive analytics to understanding consumer behavior, AI helps manage and interpret the immense data we generate.
- Learning and Adaptation: Even though current AI systems like Gemini, ChatGPT, and Microsoft Copilot make mistakes, they also learn from user interactions. Continuous feedback and training improve their performance over time, helping them better respond to queries and challenges.
- Broader Vision: The development of AI is driven by the belief that, in the long term, AI can radically improve how we live and work, advancing fields like medicine (e.g., drug discovery), engineering (e.g., smarter infrastructure), and more. Developers see its potential as an assistive technology, complementing human skills rather than replacing them.
Despite their current limitations, the goal is to refine AI to a point where it consistently enhances efficiency, creativity, and decision-making while reducing errors. In short, while AI doesn’t always work perfectly now, the vision for its future applications drives continued investment and development.
We shall see. The above feels like an AI response.
Whoosh
lmao I see what you did there
Bravo.
I’m 80% sure this reply was written by an AI. Right now pretty much all it can do is tell people to eat rocks, claim you can leave dogs in hot cars, and starve artists.
Because if you can get a program to write a program that can both a) write itself, and b) improve upon the program in some way, you can put together a feedback loop where exponential improvement is possible.
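To make that feedback loop concrete, here’s a toy sketch in Python - nothing here is real self-improving software, “capability” is just a number, and the improvement rate per cycle is an invented assumption - but it shows why compounding gives an exponential curve rather than a linear one:

```python
# Toy sketch of the self-improvement feedback loop: each cycle, the
# system improves itself in proportion to how capable it already is.
# The rate is a made-up parameter, not anything measured.

def run_feedback_loop(capability: float, rate: float, cycles: int) -> list[float]:
    """Compound 'capability' over a number of improvement cycles."""
    history = [capability]
    for _ in range(cycles):
        capability += capability * rate  # better system -> bigger improvement
        history.append(capability)
    return history

history = run_feedback_loop(capability=1.0, rate=0.5, cycles=10)
print(history[-1])  # ~57.7 after 10 cycles, vs. 6.0 if growth were linear
```

That compounding is the whole argument: the same loop with additive (linear) improvement would plod along, while proportional improvement blows up.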
I’ve wondered if you could do that until it makes a perfect machine.
First, I recommend at least reading the Wikipedia article on superintelligence.
Second, I recommend playing this game: https://www.decisionproblem.com/paperclips/index2.html
I think there’s a lot of armchair simplification going on here. Easy to call investors dumb but it’s probably a bit more complex.
AI might not get better than where it is now but if it does, it has the power to be a societally transformative tech which means there is a boatload of money to be made. (Consider early investors in Amazon, Microsoft, Apple and even the much derided Bitcoin.)
Then consider that until incredibly recently, the Turing test was the yardstick for intelligence. We now have to move that goalpost after what was previously unthinkable happened.
And in the limited time with AI, we’ve seen scientific discoveries, terrifying advancements in war and more.
Heck, even if AI only gets better at code (not unreasonable: sets of problems with defined goals/outputs, etc.), even if it gets parts wrong, shrinking a dev team of obscenely well-paid engineers to maybe a handful of supervisory roles… Well, like Wu-Tang said, Cash Rules Everything Around Me.
Tl;dr: huge possibilities, even if there’s a small chance of an almost infinite payout, that’s a risk well worth taking.
When ChatGPT first started to make waves, it was a significant step forward in the ability for AIs to sound like a person. There were new techniques being used to train language models, and it was unclear what the upper limits of these techniques were in terms of how “smart” of an AI they could produce. It may seem overly optimistic in retrospect, but at the time it was not that crazy to wonder whether the tools were on a direct path toward general AI. And so a lot of projects started up, both to leverage the tools as they actually were, and to leverage the speculated potential of what the tools might soon become.
Now we’ve gotten a better sense of what the limitations of these tools actually are, and of the upper limits of where these techniques might lead. But a lot of momentum remains. Projects that started up when the limits were unknown don’t just have the plug pulled the minute expectations stop matching reality. I mean, maybe some do. But most projects try to make the best of the tools as they are to keep the promises they made, for better or worse. And of course new ideas keep coming, and new entrepreneurs want a piece of the pie.
A lot of jobs are bullshit. Generative AI is good at generating bullshit. This led to a perception that AI could be used in place of humans. But unfortunately, curating that bullshit enough to produce any value for a company still requires a person, so the AI doesn’t add much value. The bullshit AI generates needs some kind of oversight.
I’ll just toss in another answer nobody has mentioned yet:
Terminator and Matrix movies were really, really popular. This sort of seeded the idea of it being a sort of inevitable future into the brains of the mainstream population.
The Matrix was a documentary
They were pretty cool when they first blew up. Getting them to generate semi-useful information wasn’t hard, and anything hard-factual they would usually avoid answering or defer on.
They’ve legitimately gotten worse over time. User volume has gone up, necessitating faster, shallower model responses, and further training on Internet content has caused model degradation as they train on their own output, so the models gradually begin to break. They’ve also been pushed harder than they were meant to be, to show “improvement” to investors demanding more accurate, human-like factual responses.
At this point it’s a race to the bottom on a poorly understood technology. Every money-sucking corporation latched on to LLMs like a piglet finding a teat, thinking it was going to be their golden goose to finally eliminate those stupid whiny expensive workers that always ask for annoying unprofitable things like “paid time off” and “healthcare”. In reality they’ve been sold a bill of goods by Sam Altman and the rest of the tech bros currently raking in a few extra hundred billion dollars.
Now it’s degrading even faster as AI scrapes from AI in a technological circle jerk.
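If anyone wants to see why training on your own output degrades a model, here’s a stylized stand-in in Python - a Gaussian instead of an LLM, so it only loosely mirrors what the model-collapse research describes, and all the numbers (sample size, generation count) are arbitrary. Each generation refits its mean and standard deviation to samples drawn from the previous generation’s fit; estimation noise compounds, and on average the spread drifts downward, so the “tails” get forgotten:

```python
# Stylized model-collapse demo: a distribution repeatedly refit to its
# own synthetic samples. Not an LLM - just the same feedback structure.
import math
import random
import statistics

def collapse_run(generations: int, n_samples: int) -> float:
    """Start from a standard normal 'model' (mean 0, stddev 1), then
    repeatedly refit it to samples drawn from the previous generation's
    fit. Returns the final stddev."""
    mu, sigma = 0.0, 1.0
    for _ in range(generations):
        samples = [random.gauss(mu, sigma) for _ in range(n_samples)]
        mu = statistics.mean(samples)      # refit on purely synthetic data
        sigma = statistics.stdev(samples)
    return sigma

random.seed(0)
runs = [collapse_run(generations=50, n_samples=50) for _ in range(300)]
avg_log_sigma = sum(math.log(s) for s in runs) / len(runs)
print(avg_log_sigma)  # negative on average: the spread shrinks generation over generation
```

Any single run is noisy, which is why this averages the log of the final spread over many runs; the systematic downward drift is the “circle jerk” effect in miniature.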
Yes, that’s what they said. I’m starting to think you came here with a particular agenda to push, and I don’t think that’s very polite.
The person who said AI is neither artificial nor intelligent was Kate Crawford. Every source I try to find is paywalled.
Look it up. Also, they were pushing AI for web searches and I have not had good luck with that. However, I created a document with it yesterday and it came out really good. Someone said to try the creative side and so far, so good.
I found a non-paywalled article in which scientists from Oxford University state that feeding AI synthetic data from other AI models could lead to a collapse.
Ooh an article, thank you
She might be full of crap. I don’t know. You would probably understand it better than I do.
I find that a lot of discourse around AI is… “off”. Sensationalized, or simplified, or emotionally charged, or illogical, or simply based on a misunderstanding of how it actually works. I wish I had a rule of thumb to give you about what you can and can’t trust, but honestly I don’t have a good one; the best thing you can do is learn about how the technology actually works, and what it can and can’t do.
For a while Google said they would revolutionize search with artificial intelligence. That hasn’t been my experience. Someone here mentioned working on the creative side instead. And that seems to be working out better for me.
I genuinely think the best practical use of AI, especially language models is malicious manipulation. Propaganda/advertising bots. There’s a joke that reddit is mostly bots. I know there’s some countermeasures to sniff them out but think about it.
I’ll keep reddit as the example because I know it best. Comments are simple puns, one-liner jokes, or flawed/edgy opinions. But people also go to reddit for advice/recommendations that you can’t really get elsewhere.
Using an LLM, I could in theory make tons of convincing recommendations. I get paid by a corporation or state entity to convince lurkers to choose brand A over brand B, to support or disown a political stance, or to make it seem like tons of people support it when really few do.
And if it’s factually incorrect so what? It was just some kind stranger™ on the internet
If by “best practical” you meant “best unmitigated capitalist profit optimization” or “most common”, then sure, “malicious manipulation” is the answer. That’s what literally everything else is designed for.
Holy BALLS are you getting a lot of garbage answers here.
Have you seen all the other things that generative AI can do? From bone-rigging 3D models, to animations recreated from a simple video, to recreations of voices, to art created by people without the talent for it. Many times these generative AIs are very quick at creating boilerplate that only needs some basic tweaks to make it correct. This speeds up production work a hundredfold in a lot of cases.
Plenty of simple answers are correct, they’re breaking entrenched monopolies like Google’s on search, and I’ve even had these GPTs take input text and summarize it quickly, at different granularities for quick skimming. There are a lot of worthwhile things that can come out of these AIs. They can speed up workflows significantly.
I’m a simple man. I just want to look up a quick bit of information. I ask the AI where I can find a setting in an app. It gives me the wrong information and the wrong links. That’s great that you can do all that, but for the average person, it’s kind of useless. At least it’s useless to me.
So you got the wrong information about an app once. When a GPT is scoring higher than 97% of human test takers on the SAT and other standardized testing - what does that tell you about average human intelligence?
The thing about GPTs is that they are just word predictors. Lots of times, when asked super-specific questions about small subjects that people aren’t talking about - yeah - they’ll hallucinate. But they’re really good at condensing, categorizing, and regurgitating a wide range of topics quickly, which is amazing for most people.
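For anyone curious what “just a word predictor” means mechanically, here’s a bare-bones bigram sketch in Python. The tiny corpus is invented for illustration, and real GPTs use neural networks over subword tokens rather than a count table, but the core task - predict the next token from what came before - is the same:

```python
# Minimal "word predictor": count which word tends to follow each word
# in a corpus, then predict the most frequent continuation.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat . the cat ate . the dog sat on the rug .".split()

following = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    following[word][nxt] += 1   # tally what comes after each word

def predict_next(word: str) -> str:
    """Return the most common continuation seen in the corpus."""
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" - it follows "the" more often than any other word
```

And that’s also why the hallucinations happen: the predictor always produces *something* plausible-sounding, whether or not it has ever seen good data about the specific thing you asked.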
It’s not once. It has become such an annoyance that I quit using it and asked what the big deal is. I’m sure for creative and computer-nerd stuff it’s great, but for regular people sitting at home listening to how awesome AI is and being underwhelmed, it’s not great. They keep shoving it down our throats, and plain old people are bailing.
tl;dr: It’s useful, but not necessarily for what businesses are trying to convince you it’s useful for
Yeah, see that’s the kicker. Calling this “computer nerd stuff” just gives away your real thinking on the matter. My high school daughters use this to finish their essay work quickly, and they don’t really know jack about computers.
You’re right that old people are bailing - they tend to. They’re ignorant, they don’t like to learn new and better ways of doing things, they’ve raped our economy and expect everything to be done for them. People who embrace this stuff will simply run circles around those who don’t. That’s fine. Luddites exist in every society.
You aren’t really using it for its intended purpose. It’s supposed to be used to synthesize general information. It only knows what people talk about; if the subject is particularly specific, like the settings in one app, it will not give you useful answers.
I mentioned somewhere in here that I created a document with it and it turned out really good.
Yeah, it’s pretty good at generating common documents like that
Yeah, I feel like people who have very strong opinions about what AI should be used for also tend to ignore the facts of what it can actually do. It’s possible for something to be both potentially destructive and used to excess for profit, and also an incredible technical achievement that could transform many aspects of our life. Don’t ignore facts about something just because you dislike it.
It depends on the task you give it and the instructions you provide. I wrote this a while back; I find it gives a 10x boost in capability, especially if you use a non-aligned LLM like Dolphin 8x22B.
I have no idea what any of that means. But thanks for the reply.
One of the few things they’re good at is academic “cheating”. I’m not a fan of how the education industry has become a massive pyramid scheme intended to force as many people into debt as possible, so I see ai as the lesser evil and a way to fight back.
Obviously no one is using AI to successfully do graduate research or anything. I’m just talking about how they take boring, easy subjects and load you up with pointless homework and assignments to waste your time rather than teach you anything. My homework is obviously AI-generated anyway.
It’s good at making Taylor Swift look like a Trump fan.