I’ve found that AI has done literally nothing to improve my life in any way and has really just caused endless frustrations. From the enshittification of journalism to the ruin of pretty much all tech support and customer service, what is the point of this shit?
I work on the Salesforce platform, and now I have their dumbass account managers harassing my team to buy into their stupid AI customer service agents. Really, the only AI highlight I have seen is the guy who made the tool to spam job applications to combat worthless AI job recruiters and HR tools.
I created a funny AI voice recording of Ben Shapiro talking about cat girls.
Then it was all worth it.
Personally I use it when I can’t easily find an answer online. I still keep some skepticism about the answers given until I find other sources to corroborate, but in a pinch it works well.
Because of the way it’s trained on internet data, large models like ChatGPT can actually work pretty well as a sort of first-line search engine. My girlfriend uses it like that all the time, especially for obscure stuff in one of her legal classes; it can bring up the right details to point you toward googling the correct document rather than muddling through really shitty library case page searches.
Especially when you use something with inline citations, like Bing.
There’s someone I sometimes encounter in a Discord I’m in who makes a hobby of doing stuff with them. From what I gather, they do more with it than just giving it a prompt and leaving it at that, at least partly because it doesn’t generally give them something they’re happy with initially, and they end up having to ask the thing to edit specific bits of it in different ways over and over until it does. I don’t really understand exactly what this entails, as what they seem to most like making it do is code “shaders” for them that create unrecognizable abstract patterns, but they spend a lot of time talking at length about the technical parameters of various models and what they like and don’t like about them, so I assume the guy must find something enjoyable in it all. That being said, using it as a sort of strange toy isn’t really the most useful use case.
ChatGPT can be useful or fun every now and then but besides that no.
Do I think it’s generally useful? No, not at all.
But for very specific purposes it’s worth considering as an option.
Text-to-image generation has been worth it to get a jumping-off point for a sketch, or to get a rough portrait for a D&D character.
Regular old ChatGPT has been good on a couple of occasions for humor (again D&D related; I asked it for a “help wanted” ad in the style of newspaper personals, and the result was hilariously campy).
In terms of actual problem solving… There have been a couple of instances where, when Google or Stack Overflow haven’t helped, I’ve asked it for troubleshooting ideas as a last resort. It did manage to pinpoint the issue once, but usually it just ends up that one of the topics or strategies it floats proves to be useful after further investigation. I would never trust anything factual without verifying it, or copy/paste code from it directly, though.
I have found ChatGPT to be better than Google for random questions I have and for general advice on a whole bunch of things, though I still go to other sources afterwards. I also use it to extrapolate data, come up with scheduling for work (I organise some volunteer shifts), and write lots of Excel formulae.
Sometimes it’s easier to check ChatGPT’s answers, ask follow-up questions, look at the sources it provides, and live with the occasional hallucinations than to sift through the garbage pile that Google search has become.
Even before AI, the corps have been following a strategy of understaffing on the theory that software will make up for it, and it hasn’t. It’s beyond the pale how much work I have to do now for almost anything related to the private sector (work as their customer, not as an employee).
It’s great at summarization and translations.
Until it makes shit up that the original work never said.
The services I use, Kagi’s autosummarizer and DeepL, haven’t done that when I’ve checked. The downside of the summarizer is that it might remove some subtle things sometimes that I’d have liked it to keep. I imagine that would occur if I had a human summarize too, though. DeepL has been very accurate.
LLMs are especially bad at summarization for the use case of presenting search results. The source is just as critical a piece of information for search as the information itself, and LLMs obfuscate this critical source information and combine results from multiple sources together…
LLMs are TERRIBLE at summarization
Downvoters need to read some peer-reviewed studies and not lap up whatever BS comes from OpenAI, who are selling you a bogus product lmao. I too was excited for the summarization use case of AI when LLMs were the new shiny toy, until people actually started testing it and got a big reality check.
tl;dr?
Translates Sumerian texts.
Might want to rethink the summarization part.
AI also hasn’t made any huge improvements in machine translation AFAIK. Translators still get hired because AI can’t do the job as well.
The AI summaries were judged significantly weaker across all five metrics used by the evaluators, including coherency/consistency, length, and focus on ASIC references. Across the five documents, the AI summaries scored an average total of seven points (on ASIC’s five-category, 15-point scale), compared to 12.2 points for the human summaries.
The focus on the (now-outdated) Llama2-70B also means that “the results do not necessarily reflect how other models may perform” the authors warn.
to assess the capability of Generative AI (Gen AI) to summarise a sample of public submissions made to an external Parliamentary Joint Committee inquiry, looking into audit and consultancy firms
In the final assessment ASIC assessors generally agreed that AI outputs could potentially create more work if used (in current state), due to the need to fact check outputs, or because the original source material actually presented information better. The assessments showed that one of the most significant issues with the model was its limited ability to pick-up the nuance or context required to analyse submissions.
The duration of the PoC was relatively short and allowed limited time for optimisation of the LLM.
So basically this study concludes that Llama2-70B with basic prompting is not as good as humans at summarizing documents submitted to the Australian government by businesses, and its summaries are not good enough to be useful for that purpose. But there are some pretty significant caveats here, most notably the relative weakness of the model they used (I like Llama2-70B because I can run it locally on my computer but it’s definitely a lot dumber than ChatGPT), and how summarization of government/business documents is likely a harder and less forgiving task than some other things you might want a generated summary of.
Please share any studies you have showing AI is better than a person at summarizing complex information.
If it wasn’t clear, I am not claiming that AI is better than a person at summarizing complex information.
My bad for misunderstanding you.
Thank you for pointing that out. I don’t use it for anything critical, and it’s been very useful because Kagi’s summarizer works on things like YouTube videos friends link which I don’t care enough to watch. I speak the language pair I use DeepL on, but DeepL often writes more natively than I can. In my anecdotal experience, LLMs have greatly improved the quality of machine translation.
AI is used extensively in science to sift through gigantic data sets. Mechanical Turk-style programs like Galaxy Zoo are used to train the algorithms. And scientists can use them to look at everything in more detail.
Apart from that AI is just plain fun to play around with. And with the rapid advancements it will probably keep getting more fun.
Personally I hope to one day have an easy and quick way to sort all the images I have taken over the years. I probably only need a GPU in my server for that one.
Anyone who uses machine learning like that would probably take issue with it being called AI, too.
Meh, language evolves. Can’t fight it, might as well join them.
I usually keep abreast of the scene, so I’ll give a lot of stuff a try. Entertainment-wise, making music and images or playing D&D with it is fun, but the novelty tends to wear off. Image gen can be useful for personal projects.
Work-wise, I mostly use it to do deep dives into things like datasheets and libraries, or to do the boring coding bits. I verify the info and use it in conjunction with regular research, but it makes things a lot easier.
Oh, also TTS is fun. The actor who played Dumbledore reads me the news and Emma Watson tells me what exercise is next during my workout, although some might frown on using their voices without consent.
Playing with it on my own computer, locally hosting it and running it offline, has been pretty cool. I find it really impressive when it’s something open source and community driven. I also think there are a lot of useful applications for problems that aren’t solvable with traditional programming.
However a lot of the pushed corporate AI feels not that useful, and there’s something about it that really rubs me the wrong way.
You know those people who have no creative skills or drive, but want to be thought of as a creative?
You know those people who have this really neat idea for an app, but they don’t plan on making it themself because they’re “just an ideas guy”?
You know those people who will offer to pay in exposure? I mean, do you really need to be paid just to draw some pictures anyway?
You know those guys who send you a picture they got from google images and claim this to be a girl they know?
That’s the vast majority of the AI audience. I could probably sum that up with the word “parasite”, but I wanted to be thorough.
I love ChatGPT, and am dumbfounded at all the AI hate on Lemmy. I use it for work. It’s not perfect, but it helps immensely with snippets of code, as well as with learning STEM concepts. Sometimes I’ve already written some code that I remember vaguely, but it was a long time ago and I need to do it again. The time it would take to either go find my old code or just research it completely again is WAY longer than just asking ChatGPT. It’s extremely helpful, and definitely faster for what I’d already have to do.
I guess it depends on what you use it for ¯\_(ツ)_/¯.
I hope it continues to improve. I hope we get full open source. If I could “teach” it to do certain tasks someday, that would be friggin awesome.
You can whip up a whole album of aggressively mid music just cyberbullying the shit out of one person.
I think it’s great that you found AI helpful with your hobbies.
New social fear unlocked.