• 0 Posts
  • 351 Comments
Joined 1 year ago
Cake day: June 15th, 2023



  • Hi! It’s me, the guy you discussed this with the other day! The guy that said Lemmy is full of AI wet blankets.

    I am 100% with Linus AND would say the 10% good use cases can be transformative.

    Since there isn’t any room for nuance on the Internet, my comment seemed to ruffle feathers. There are definitely some folks out there that act like ALL AI is worthless and LLMs specifically have no value. I provided a list of use cases that I use pretty frequently where it can add value. (Then folks started picking it apart with strawmen).

    I gotta say, though, this wave of AI tech feels different. It reminds me of the early days of the web/computing in the late 90s and early 2000s, where it was fun and exciting, and people were doing all sorts of weird, quirky shit with it, and it wasn’t even close to perfect. It breaks a lot and has limitations, but there is something there. There is a lot of promise.

    Like I said elsewhere, it ain’t replacing humans any time soon, we won’t have AGI for decades, and it’s not solving world hunger. That’s all hype-bro bullshit. But there is actual value here.



  • Haha, yeah, I’m familiar with it (I always heard it called the Barnum effect, though it sounds like they’re the same thing), but this isn’t a fortune-cookie-esque, Myers-Briggs response.

    In this case it actually summarized my post (I guess you could make the case that my post is an opinion shared by many people, so Forer-y in that sense), and to my other point, it didn’t misunderstand and tell me I was envisioning LLMs sending emails back and forth to each other.

    Either way, there is this general tenor of negativity on Lemmy about AI (usually conflated to mean just LLMs). I think it’s a little misplaced. People are lumping the tech in with the hype bros: Altman, Musk, etc. The tech is transformative, and there are plenty of valuable uses for it. It can solve real problems now. It doesn’t need to be AGI to do that. It doesn’t need to be perfect to do that.






  • People are treating AI like crypto, and on some level I don’t blame them, because a lot of hype bros moved from crypto to AI. You can blame the Silicon Valley hype machine, plus Wall Street rewarding and punishing companies for going all in or not doing enough, respectively, for the Lemmy anti-new-tech tenor.

    That, and Lemmy seems full of angsty asshats and curmudgeons that love to dogpile things. They feel like they have to counterbalance the hype. Sure, that’s fair.

    But with AI there is something there.

    I use all sorts of AI on a daily basis. I’d venture to say most everyone reading this uses it without even knowing.

    I set up my server to transcribe and diarize my favorite podcasts, the ones I’ve been listening to for 20 years. Whisper transcribes, pyannote diarizes, GPT-4o uses context clues to find and replace “SPEAKER_01” with “Leo”, and then it saves those transcripts so that I can easily search them. It’s a fun hobby thing, but this type of thing is hugely useful and applicable to large companies and individuals alike.
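    Just to illustrate, the relabeling step of that pipeline can be sketched in a few lines. The segment data, the name mapping, and the `relabel_transcript` helper are all made up for the example; a real pipeline would feed pyannote’s actual output into something like this.

```python
# Sketch of the label-cleanup step: after diarization emits generic labels
# like "SPEAKER_01", swap in real names once they're known.
# Segment data and name mapping below are invented example values.

def relabel_transcript(segments, name_map):
    """Replace generic diarization labels with real speaker names."""
    return [
        {**seg, "speaker": name_map.get(seg["speaker"], seg["speaker"])}
        for seg in segments
    ]

segments = [
    {"start": 0.0, "end": 4.2, "speaker": "SPEAKER_01", "text": "Welcome back to the show."},
    {"start": 4.2, "end": 7.9, "speaker": "SPEAKER_02", "text": "Glad to be here."},
]
names = {"SPEAKER_01": "Leo"}  # e.g. inferred by an LLM from context clues

print(relabel_transcript(segments, names)[0]["speaker"])  # Leo
```

    Unknown labels pass through untouched, so partial name maps are fine.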

    I use Kagi’s Assistant (which basically lets you access all the big models) on a daily basis for searching stuff, drafting boilerplate for emails, recipes, etc.

    I have a local LLM with RAG that I use for more personal stuff. Like, I had it do the BS work for my performance plan using notes I’d taken over the year. I’ve had it help me reword my resume.
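    For the curious, the retrieval half of a RAG setup like that can be sketched roughly as below. Real setups score notes with an embedding model; plain word overlap is used here just to keep the sketch self-contained, and the notes and helper names are invented for the example.

```python
# Toy retrieval step for a personal-notes RAG: score each note against the
# query, take the top matches, and prepend them to the model prompt.
from collections import Counter
import math

def similarity(a, b):
    """Cosine similarity over word counts (stand-in for real embeddings)."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)
    norm = (math.sqrt(sum(c * c for c in va.values()))
            * math.sqrt(sum(c * c for c in vb.values())))
    return dot / norm if norm else 0.0

def retrieve(query, notes, k=2):
    """Return the k notes most similar to the query."""
    return sorted(notes, key=lambda n: similarity(query, n), reverse=True)[:k]

notes = [
    "Q3: shipped the transcript search feature ahead of schedule",
    "Attended the annual security training",
    "Mentored two new hires on the deployment pipeline",
]
context = retrieve("what features shipped this year", notes, k=1)
prompt = "Using these notes:\n" + "\n".join(context) + "\n\nDraft my performance plan."
```

    The local model then only sees the handful of notes that actually matter for the question.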

    I have it parse huge policy memos into things I actually might give a shit about.

    I’ve used it to run through a bunch of semi-structured data on documents and pull relevant data. It’s not necessarily precise, but it’s accurate enough for my use case.
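    That kind of extraction flow boils down to: ask the model for JSON, then validate the fields before trusting them. A hedged sketch follows; `call_llm`, the field names, and the stubbed response are all placeholders for whatever model and documents you actually use.

```python
# Sketch of an LLM-based document-extraction flow: request JSON, parse it,
# and sanity-check the keys before using the result.
import json

FIELDS = {"vendor", "total", "date"}  # hypothetical fields of interest

def extract_fields(document, call_llm):
    prompt = (
        "Return JSON with keys vendor, total, date from this document:\n"
        + document
    )
    raw = call_llm(prompt)       # whatever model API you use goes here
    data = json.loads(raw)       # fails loudly if the model returned prose
    missing = FIELDS - data.keys()
    if missing:
        raise ValueError(f"model omitted fields: {missing}")
    return data

# Stubbed model response, just to show the shape of the flow.
def fake_llm(prompt):
    return '{"vendor": "Acme", "total": "41.50", "date": "2024-03-01"}'

print(extract_fields("Invoice from Acme...", fake_llm)["vendor"])  # Acme
```

    The validation step is what makes “accurate enough” workable: bad outputs fail fast instead of silently polluting the results.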

    There is a tool we use that uses CV to do sentiment analysis of users (as they use websites/apps) so we can improve our UX/CX. There’s some ML tooling that can also tell if someone’s getting frustrated by the way they’re moving their mouse, whether they’re thrashing it or whatnot.
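    The mouse-thrashing signal is basically counting rapid direction reversals. A toy sketch of that idea, with the threshold and event format invented for the example:

```python
# Rough frustration heuristic: flag pointer traces with many rapid
# direction reversals ("thrashing"). Threshold is an invented example value.

def looks_thrashy(xs, reversals_threshold=4):
    """Flag a sequence of x-coordinates with many direction reversals."""
    reversals = 0
    for i in range(1, len(xs) - 1):
        if (xs[i] - xs[i - 1]) * (xs[i + 1] - xs[i]) < 0:  # sign flip = reversal
            reversals += 1
    return reversals >= reversals_threshold

print(looks_thrashy([0, 50, 5, 60, 10, 70, 15]))  # True: back-and-forth
print(looks_thrashy([0, 10, 20, 30, 40, 50]))     # False: smooth drag
```

    Production tools presumably combine signals like this with timing, click patterns, and learned models, but the core intuition is that simple.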

    There are also a couple of use cases that I think we’re looking at at work to help eliminate bias, like parsing through a bunch of resumes. There’s always a human bias when you’re doing that, and there’s evidence that shows LLMs can do it with less bias than a human, and maybe that’ll lead to better results or selections.
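    One simple version of that debiasing idea is to redact identifying details before anyone, human or LLM, scores the resume. A toy sketch with illustrative, not production-grade, patterns:

```python
# Redact identifying details (emails, phone numbers) from resume text
# before scoring it. The regexes are illustrative examples only.
import re

def redact(text):
    text = re.sub(r"[\w.+-]+@[\w-]+\.[\w.]+", "[EMAIL]", text)
    text = re.sub(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b", "[PHONE]", text)
    return text

print(redact("Contact: jane.doe@example.com, 555-123-4567"))
# Contact: [EMAIL], [PHONE]
```

    A real pipeline would also strip names, addresses, and graduation years, which is harder and usually needs NER rather than regexes.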

    So I guess all that to say: I find myself using AI/ML/LLMs on a pretty frequent basis, and I see a lot of value in what they can provide. I don’t think it’s going to take people’s jobs. I don’t think it’s going to solve world hunger. I don’t think it’s going to do much of what the hype bros say. I don’t think we’re anywhere near AGI. But I do think there is something there, and I think it’s going to change the way we interact with our technology moving forward, and I think that’s a great thing.







  • Because a degree isn’t job training. Education and training are very different.

    Think of how sex education and sex training are wildly different things. They can complement each other, but they aren’t the same. You go to college for the education.

    I think that “get a degree so you can get a job” mentality that our parents and our parents’ parents touted is advice from an era gone by. An era when having a degree set you apart from a sea of high school diplomas. It didn’t matter if it was in medieval art history. It was a university degree (so you were smarter than the average bear / could learn and be taught).

    It got distorted over the years, and now we are here. Lots of degrees, people “go to school to get a job” and then can’t land one, because… well, it just sucks.


  • I know that ploum blog post gets cited way too often on Lemmy, but this is a situation where I think Google has either intentionally or inadvertently executed a variation of the “embrace, extend, extinguish” playbook that Microsoft created.

    They embraced open source, extended it until they’d practically cornered the market on browser engines, and now they are using that position to extinguish our ability to control our browsing experience.

    I know they are facing a possible “break-up” with the latest ruling against them.

    It would be interesting to see if they force divestiture of Chrome from the ad business. The incentives are perverse when you do both with such dominance, and it’s a massive conflict of interest.


  • The Vision “Air”, Apple’s version of the Meta Ray-Bans, is going to be their next major product line.

    If they get it in the ~$500-1000 range, they’d sell like hotcakes. The reviews on the Meta Ray-Bans are surprisingly positive, with the biggest gripe being that it’s from Meta and people don’t trust it.

    Apple’s big privacy focus and their local-first implementation of AI make it a really compelling alternative to the Meta offering. Assuming it pairs with iPhones (and their built-in ML cores), it also drives iPhone sales, similar to the Watch.

    Apple could do so much with an ecosystem play with something like that, and it would/could also be a “fashion icon” the way white earbuds became synonymous with Apple, and the way AirPods don’t look “dorky” because everyone has them.

    It’s fun to hate Apple on Lemmy, but I think they’d crush it with something like this: an AR glasses setup integrated in their ecosystem with privacy-respecting local processing.

    I’d seriously consider switching to an iPhone if I got something like that.