I’ve found that AI has done literally nothing to improve my life in any way and has really just caused endless frustrations. From the enshittification of journalism to ruining pretty much all tech support and customer service, what is the point of this shit?
I work on the Salesforce platform and now I have their dumbass account managers harassing my team to buy into their stupid AI customer service agents. Really, the only AI highlight that I have seen is the guy that made the tool to spam job applications to combat worthless AI job recruiters and HR tools.
I like being able to generate porn.
Not that I can’t just, you know, FIND porn, but there’s something really fun about trying to generate an image just right, tweaking settings and models until you get the result you’re after.
/shrug
I find ChatGPT useful for getting my server working (since I’m pretty new to Linux).
Other than that, I check in on how local image models are doing around once every couple of months. I would say you can achieve some cool stuff with it, but not really any unusual stuff.
I was really psyched about AI when it first hit my news feed. Now I’m less than impressed. Most generalist AI platforms get things wrong constantly. Having an LLM trained on specific things, like math or science or maybe law, I could see being useful.
We’re at the “AI everything” phase instead of the “AI what makes sense” phase.
It’s really useful for churning out some basic code. For searching the web, it’s providing better results than Google these days.
I hate that it monetized general knowledge that used to be easily searchable, then repackaged it as some sort of black-box randomizer.
A friend’s wife “makes” and sells AI slop prints. He had to make a twitter account so he could help her deal with the “harassment”. Not sure exactly what she’s dealing with, but my friend and I have slightly different ideas of what harassment is and I’m not interested in hearing more about the situation. The prints I’ve seen look like generic fantasy novel art that you’d see at the checkout line of a grocery store.
An LLM (large language model, a.k.a. an AI whose output is natural language text based on a natural language text prompt) is useful for tasks where you’re okay with 90% accuracy generated at 10% of the cost and 1,000% faster, and where the output will solely be used in-house by yourself and not served to other people. For example, if your goal is to generate an abstract for a paper you’ve written, AI might be the way to go, since it turns a writing problem into a proofreading problem.
The Google Search LLM which summarises search results is good enough for most purposes. I wouldn’t rely on it for in-depth research but like I said, it’s 90% accurate and 1,000% faster. You just have to be mindful of this limitation.
I don’t personally like interacting with customer service LLMs because they can only serve up help articles from the company’s help pages, but they are still remarkably good at that task. I don’t need help pages, because the reason I’m contacting customer service to begin with is that I couldn’t find the solution using the help pages. It doesn’t help me, but it will no doubt help plenty of other people whose first instinct is not to read the f***ing manual.

Of course, I’m not going to pretend customer service LLMs are perfect. In fact, the most common problem with them seems to be that they go “off the script” and hallucinate solutions that obviously don’t work, or pretend that they’ve scheduled a callback with a human when you request it, when they actually haven’t. This is a really common problem with any sort of LLM.
At the same time, if you try to serve content generated by an LLM and then present it as anything of higher quality than it actually is, customers immediately detest it. Most LLM writing is of pretty low quality anyway and sounds formulaic, because to an extent, it was generated by a formula.
Consumers don’t like being tricked, and especially when it comes to creative content, I think that most people appreciate the human effort that goes into creating it. In that sense, serving AI content is synonymous with a lack of effort and laziness on the part of whoever decided to put that AI there.
But yeah, for a specific subset of limited use cases, LLMs can indeed be a good tool. They aren’t good enough to replace humans, but they can certainly help humans and reduce the amount of human workload needed.
It looks impressive on the surface but if you approach it with any genuine scrutiny it falls apart and you can see that it doesn’t know how to draw for shit.
I find it helpful to chat about a topic sometimes, as long as it’s not based on pure facts. You can talk about your feelings with it.
There are a few uses where it genuinely speeds up editing/insertion into contracts and warns you of red flags/riders that might open you up to unintended liability. BUT the software is $$$$ and you generally need a law degree before you even need a tool like that. For those that are constantly up to their chins in legal shit, it can be helpful. I’m not, thankfully.
Porn has been ruined by AI too. Jokes aside, it’s really a boner killer.
Idk who faps to that whack shit but it’s trying so hard to make everything look baby silk smooth with unrealistic bodies most likely stolen from hentai.
I made an AI song for my mom’s birthday on Suno and she loved it so much she cried. So that was nice.
I don’t like how people are using it to just replace artists. It would be fine if it were just to automate some things, like, “AI can tell you when ___ needs to be replaced,” but it feels more like it’s being used as a stick against workers. Like, “Keep acting up and I’ll replace you with dun dun dun AI!”
I use LLMs for multiple things, and they’re useful for things that are easy to validate. E.g. when you’re trying to find or learn about something, but don’t know the right terminology or keywords to put into a search engine.

I also use them for some coding tasks. They work OK for getting customized usage examples for libraries, languages, and frameworks you may not be familiar with (but will sometimes use old APIs or just hallucinate APIs that don’t exist). They also work OK for “translation” tasks, such as converting a MySQL query to a PostgreSQL query. I tried out GitHub Copilot for a while, but found that it would sometimes introduce subtle bugs that I would initially overlook, so I don’t use it anymore.

I’ve had to create some graphics, and am not at all an artist, but was able to use AUTOMATIC1111, ControlNet, Stable Diffusion, and GIMP to get usable results (an artist would obviously be much better though). RemBG works pretty well for isolating the subject of an image and removing the background too. Image upsampling, DLSS, DTS Neural:X, plant identification apps, the blind-spot warnings in my car, image stabilization, and stuff like that are pretty useful too.
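To give a sense of the MySQL-to-PostgreSQL “translation” task mentioned above, here’s a minimal, hypothetical sketch of the kind of dialect differences involved (backtick identifiers vs. double quotes, `IFNULL` vs. `COALESCE`). A real translation needs a proper SQL parser, which is exactly why handing it to an LLM and then validating the result is attractive; this toy helper only covers these two rewrites.

```python
import re

# Two well-known MySQL -> PostgreSQL dialect differences:
#   `identifier`  ->  "identifier"   (identifier quoting)
#   IFNULL(a, b)  ->  COALESCE(a, b) (null-default function)
REWRITES = [
    (r"`([^`]+)`", r'"\1"'),       # backticks -> double-quoted identifiers
    (r"\bIFNULL\(", "COALESCE("),  # IFNULL -> COALESCE
]

def mysql_to_postgres(query: str) -> str:
    """Apply the toy rewrite rules to a MySQL query string."""
    for pattern, repl in REWRITES:
        query = re.sub(pattern, repl, query)
    return query

print(mysql_to_postgres("SELECT IFNULL(`name`, 'n/a') FROM `users`"))
# -> SELECT COALESCE("name", 'n/a') FROM "users"
```

An LLM doing this task is effectively applying a much larger, fuzzier version of this rule set, which is also why it’s easy to validate: you can just run the translated query against the target database.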
to copy my own comment from another similar thread:
I’m an idiot with no marketable skills. I put boxes on shelves for a living. I want to be an artist, a musician, a programmer, an author. I am so bad at all of these, and between having a full time job, a significant other, and several neglected hobbies, I don’t have time to learn to get better at something I suck at. So I cheat. If I want art done, I could commission a real artist, or for the cost of one image I could pay for dalle and have as many images as I want (sure, none of them will be quite what I want but they’ll all be at least good). I could hire a programmer, or I could have chatgpt whip up a script for me since I’m already paying for it anyway since I want access to dalle for my art stuff. Since I have chatgpt anyway, I might as well use it to help flesh out my lore for the book I’ll never write. I haven’t found a good solution for music.
I have in my brain a vision for a thing that is so fucking cool (to me), and nobody else can see it. I need to get it out of my brain, and the only way to do that is to actualize it into reality. I don’t have the skills necessary to do it myself, and I don’t have the money to convince anyone else to help me do it. Generative AI is the only way I’m going to be able to make this work. Sure, I wish that the creators whose content was stolen to train the AIs were fairly compensated. I’d be okay with my ChatGPT subscription cost going up a few dollars if that meant real living artists got paid. I’m poor, but I’m not broke.
These are the opinions of an idiot with no marketable skills.
I needed instructions on how to downgrade the firmware of my Unifi UDR because they pushed a botched update. I searched for a while and could only find vague references to SSH and upgrading.
They had a “Unifi GPT” bot so I figured what the hell. I asked “how to downgrade udr firmware to stable”. It gave me effective step by step instructions on how to enable SSH, SSH in and what commands to run to do so. Worked like a charm.
So yeah, I think the problem is we’re in the hype era of LLMs. They’re being over-applied to lots of things they aren’t good at. But it’s extremism in the other direction to say there aren’t functions they can do well.
They are at least better than your average canned chat/search bot or ill-informed CSR at finding an answer to your question. I think they can help with lots of frustrating or opaque computer-related tasks, or at least point you in the right direction or surface something you might not be able to find easily otherwise.
They just aren’t going to write programs for you or do your office job for you like execs think they will.
It’s funny to see Godzilla in weird contexts.
No, I don’t think that’s a particularly good reason for it all, either.