• vegaquake@lemmy.world · 3 months ago

    yeah, the internet is doomed to be unusable if AI just keeps getting more insidious like this

    yet more companies tie themselves to online platforms, websites, and other models of operation that depend on being always connected.

    maybe the world needs a reboot, just get rid of it all and start from scratch

    • BarbecueCowboy@kbin.social · 3 months ago

      I do kind of feel like this part of the experiment might just be coming to a close.

      There’s no “if AI just keeps getting more insidious”, the barrier to entry is too low. AI is going to keep doing the things it’s already doing, just more efficiently, and it doesn’t matter that much how we feel about whether those things are good or bad. I feel like the things it is starting to ruin are probably just going to be ruined.

    • UnderpantsWeevil@lemmy.world · 3 months ago

      maybe the world needs a reboot, just get rid of it all and start from scratch

      That would destroy all the old good vintage stuff and leave us with machines that immediately fill the vacant space with pure trash.

      • vegaquake@lemmy.world · 2 months ago

        rapture but with technology would be pretty funny

        save the good old stuff and burn the rest

  • KillingTimeItself@lemmy.dbzer0.com · 3 months ago

    “i remember when reply guy was a term used for someone notorious for replying to things in a specific manner”

    “take your meds grandpa, it’s getting late”

  • Chaotic Entropy@feddit.uk · 3 months ago

    Well that’s certainly one way for your brand to lose a lot of respect once it becomes apparent. Much like when I want to lose respect for myself, I use Chum brand dog food. Chum, it’s still food, alright?

      • FinishingDutch@lemmy.world · 3 months ago

        Probably.

        So, we complain to a regulatory body, they investigate, they tell a company to do better or, waaaay down the road, attempt to levy a fine. Which most companies happily pay, since the profits from the shady business practices tend to far outweigh the fines.

        Legal or illegal really only means something when dealing with an actual person. Can’t put a corporation in jail, sadly.

      • Hubi@lemmy.world · 3 months ago

        Reddit is past the point of no return. He might as well speed it up a little.

      • MelodiousFunk@slrpnk.net · 3 months ago (edited)

        He’s got to get them from somewhere. They certainly aren’t coming from his little piggy brain.

      • paraphrand@lemmy.world · 3 months ago

        Like a built-in brand dashboard where brands can monitor keywords for their brand and their competitors? And then deploy their sanctioned set of accounts to reply and make strategic product recommendations?

        Sounds like something that must already exist. But it would have been killed or hampered by API changes… so now Spez has a chance to bring it in-house.

        They will just call it brand image management. And claim that there are so many negative users online that this is the only way to fight misinformation about their brand.

        Or something. It’s all so tiring.

  • PrincessLeiasCat@sh.itjust.works · 3 months ago

    The creator of the company, Alexander Belogubov, has also posted screenshots of other bot-controlled accounts responding all over Reddit. Belogubov has another startup called “Stealth Marketing” that also seeks to manipulate the platform by promising to “turn Reddit into a steady stream of customers for your startup.” Belogubov did not respond to requests for comment.

    What an absolute piece of shit. Just a general trash person to even think of this concept.

  • merthyr1831@lemmy.world · 2 months ago (edited)

    This shit isn’t new; companies have been exploiting Reddit to push products as if they’re real people for years. The “put reddit after your search to fix it!!!” thing was a massive boon for these shady advertisers, who no doubt benefitted from random people assuming product placements were genuine.

  • Milk_Sheikh@lemm.ee · 3 months ago

    I still haven’t seen a use of AI that doesn’t serve state or corporate interests first, before the general public. AI medical diagnostics comes the closest, but that’s being leveraged to justify further staffing reductions, not an additional check.

    The AI-captcha wars are on, and no matter who wins we lose.

    • TimeSquirrel@kbin.social · 3 months ago

      AI is helping me learn and program C++. It’s built into my IDE. Much more efficient than searching stackoverflow. Whenever it comes up with something I’ve never seen before, I learn what that thing does and mentally store it away for future use. As time goes on, I’m relying on it less and less. But right now it’s amazing. It’s like having a tutor right there with you who you can ask questions anytime, 24/7.

      I hope a point comes where my kid can just talk to a computer, tell it the specifics of the program he wants to create, and have the computer just program the entire thing. That’s the future we are headed towards. Ordinary folks being able to create software.

      • Milk_Sheikh@lemm.ee · 3 months ago (edited)

        I’ll agree there’s huge potential for ‘assistant’ roles (exactly like you’re using) to give a concise summary for quick understanding. But LLMs aren’t knowledgeable the way an accredited professor or tutor is, understanding the context and nuance of the topic. LLMs are very good at scraping together data and presenting the shallowest of information, but their limits get exposed quickly when you try to dig deeper into a topic.

        For instance, I was working on a project that required very long-term storage (10+ years) with intermittent exposure to open air, and was concerned about oxidation and rust. ChatGPT was very adamant that desiccant alone was sufficient (wrong) and that VCI packs would last (also wrong). It did a great job of repackaging corporate ad copy and industrial white papers written by humans, but not of providing an objective answer to a semi-complex question.

        • TimeSquirrel@kbin.social · 3 months ago

          I guess it’s not great for things requiring domain knowledge. Programming seems to be easy for it, as programs are very structured, predictable, and logical. That’s where its pattern-matching-and-prediction abilities shine.

  • funn@lemy.lol · 3 months ago

    I don’t understand how Lemmy/Mastodon will handle similar problems: spammers crafting fake accounts to post AI-generated comments promoting products.

    • FeelThePower@lemmy.dbzer0.com · 3 months ago

      The only thing we reasonably have is security through obscurity. We are something bigger than a forum but smaller than Reddit in terms of active user size. If such a thing were to happen here, mods could handle it more easily probably (like when we had that Japanese-text spammer a while back), but if it happened on a larger scale than what we have now, it would be harder to deal with.

      • linearchaos@lemmy.world · 3 months ago

        I think the real danger here is subtlety. What happens when somebody asks for recommendations on a printer, or complains about their printer being bad, and all of a sudden some long-established account recommends a product they’ve been happy with for years? And it turns out it’s just an AI bot shilling for Brother.

        • deweydecibel@lemmy.world · 2 months ago

          For one, well-established brands have less incentive to engage in this.

          Second, in this example, the account in question being a “long established user” would seem to indicate you think these spam companies are going to be playing a long game. They won’t. That’s too much effort and too expensive. They will do all of this on the cheap, and it will be very obvious.

          This is not some sophisticated infiltration operation with cutting edge AI. This is just auto generated spam in a new upgraded form. We will learn to catch it, like we’ve learned to catch it before.

          • linearchaos@lemmy.world · 2 months ago

            I mean, it doesn’t have to be expensive. And it doesn’t have to be particularly cutting edge either. Start throwing some credits into an LLM API, have it randomly read and help people out in different groups. Once it reaches some amount of reputation, have it quietly shill for them. Pull out posts that contain keywords. Have the AI consume the posts and figure out whether they’re actually about what the keywords suggest. Have it subtly do product placement. None of this is particularly difficult or groundbreaking. But it could help shape our buying habits.

      • old_machine_breaking_apart@lemmy.dbzer0.com · 3 months ago

        There’s one advantage on the fediverse: we don’t have corporations like Reddit manipulating our feeds, censoring what they dislike, and promoting shit. This alone makes using the fediverse worth it for me.

        When it comes to problems involving the users themselves, things aren’t that different, and there isn’t much we can do.

        • MinFapper@lemmy.world · 3 months ago

          We don’t have corporations manipulating our feeds

          yet. Once we have enough users that it’s worth their effort to target, the bullshit will absolutely come.

          • bitfucker@programming.dev · 3 months ago

            Federation means that if you are federated with them, then sure, you get some BS. Otherwise, business as usual. Now, making sure there are no paid users or corporate bots is another matter entirely, since it relies on instance moderators.

          • old_machine_breaking_apart@lemmy.dbzer0.com · 3 months ago

            They can perhaps create instances, pay malicious users, or try some embrace-extend-extinguish approach or something, but they can’t manipulate the code running on the instances we use, so they can’t have direct power over it. Or am I missing something? I’m new to the fediverse.

            • BarbecueCowboy@kbin.social · 3 months ago

              There’s very little to prevent them just pretending to be average users and very little preventing someone from just signing up a bunch of separate accounts to a bunch of separate instances.

              No great automated way to tell whether someone is here legitimately.

      • BarbecueCowboy@kbin.social · 3 months ago

        mods could handle it more easily probably

        I kind of feel like the opposite. For a lot of instances, ‘mods’ are just a few guys who check in sporadically, whereas larger companies can mobilize full teams in times of crisis. It might take them a bit of time to spin things up, but there are existing processes to handle it.

        I think spam might be what kills this.

        • deweydecibel@lemmy.world · 2 months ago

          If a community is small enough that the mod team can be that inactive, there’s no incentive for a company to put any effort into spamming it like you’re suggesting.

          And if they do end up getting a shit ton of spam in there, and it sits around for a bit until a moderator checks in, so what? They’ll just clean it up and keep going.

          I’m not sure why people are so worried about this. It’s been possible for bad actors to overrun small communities with automated junk for a very long time, across many different platforms, some that predate Reddit. It just gets cleaned up and things keep going.

          It’s not like if they get some AI produced garbage into your community, it infects it like a virus that cannot be expelled.

  • kingthrillgore@lemmy.ml · 3 months ago

    Generative AI has really become a poison. It’ll be worse once the generative AI is trained on its own output.

    • Simon@lemmy.dbzer0.com · 2 months ago (edited)

      Here’s my prediction. Over the next couple of decades the internet is going to be so saturated with fake shit and fake people that it’ll become impossible to use effectively, like cable television. After this happens for a while, someone is going to create a fast private internet, like a whole new protocol, and it’s going to require ID verification (fortunately automated by AI) to use. Your name, age, country, and state are all public to everybody else and embedded into the protocol.

      The new ‘humans only’ internet will be the new streaming and eventually it’ll take over the web (until they eventually figure out how to ruin that too). In the meantime, they’ll continue to exploit the infested hellscape internet because everybody’s grandma and grampa are still on it.
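      To make the “embedded into the protocol” part concrete, here is a purely hypothetical sketch (field names invented for illustration, not any real standard) of what an identity envelope on such a network might look like:

      # Hypothetical sketch of a "humans only" protocol message where the
      # poster's verified identity rides along with every post. All names
      # are illustrative; nothing here matches a real specification.
      from dataclasses import dataclass

      @dataclass(frozen=True)
      class VerifiedIdentity:
          name: str         # legal name, public to everyone on the network
          age: int
          country: str      # e.g. "FR"
          state: str        # subdivision, e.g. "Normandy"
          verifier: str     # whoever performed the automated ID check
          signature: bytes  # verifier's signature over the fields above

      @dataclass(frozen=True)
      class Post:
          identity: VerifiedIdentity
          body: str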

        • rottingleaf@lemmy.zip · 2 months ago

          Yup. I have my own prediction: that humanity will finally understand the wisdom of the PGP web of trust and use it for friend-to-friend networks over the Internet. After all, you can exchange public keys by scanning QR codes; it’s very intuitive now.

          That would be cool. No bots. Unfortunately, corps, govs and other such mythical demons really want to be able to automate influencing public opinion. So this won’t happen until the potential of the Web for such influence is sucked dry. That is, until nobody in their right mind would use it.
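
          The QR part really is that simple these days. As a minimal sketch (assuming the third-party Python packages cryptography and qrcode; the payload prefix is made up, not a standard format), generating an identity key and packing the public half into a QR code a friend can scan looks roughly like this:

          # Minimal sketch: make an Ed25519 identity keypair and render the
          # public key as a QR code for in-person exchange. The "f2f-key:"
          # prefix is an invented label, not a standard.
          import base64

          import qrcode
          from cryptography.hazmat.primitives import serialization
          from cryptography.hazmat.primitives.asymmetric import ed25519

          private_key = ed25519.Ed25519PrivateKey.generate()  # stays on the device

          public_raw = private_key.public_key().public_bytes(
              encoding=serialization.Encoding.Raw,
              format=serialization.PublicFormat.Raw,
          )
          payload = "f2f-key:" + base64.b64encode(public_raw).decode()

          # A friend scans this image and signs the key into their own web of trust.
          qrcode.make(payload).save("my_public_key.png")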

      • Baylahoo@sh.itjust.works · 2 months ago

        That sounds very reasonable as a prediction. I could see it being a pretty interesting Black Mirror episode. I would love it to stay as fiction though.

  • ColeSloth@discuss.tchncs.de · 3 months ago

    I called this shit out like a year ago. It’s the end of any viable online search having much truth to it. All we’ll have left to trust is YouTube videos from Project Farm.

    • BurningnnTree@lemmy.one · 2 months ago

      I ran into this issue while researching standing desks recently. There are very few places on the internet where you can find verifiably human-written comparisons between standing desk brands. Comments on Reddit all seem to be written by bots or people affiliated with the brands. Luckily I managed to find a YouTube reviewer who did some real comparisons.

    • Debs@lemmy.zip · 3 months ago (edited)

      It kinda seems like the end of the Google era. What will we search Google for when the results are all crap? These are the death gasps of the internet I/we grew up with.

      • Hugh_Jeggs@lemm.ee · 2 months ago

        Remember when you could type a vague plot of a film you’d heard about into Google and it’d be the first result?

        Nah doesn’t work anymore

        Saw a trailer for a french film so I searched “french film 2024 boys live in woods seven years”

        Google - 2024 BEST FRENCH FILMS/TOP TEN FRENCH FILMS YOU MUST SEE THIS YEAR/ALL TIME BEST FRENCH MOVIES

        Absolute fucking gash

        I’ve not been too impressed with Kagi search, but at least the top result there was “Frères 2024”

        • EatATaco@lemm.ee · 2 months ago (edited)

          Remember when you could type a vague plot of a film you’d heard about into Google and it’d be the first result?

          I honestly don’t remember this at all. I remember priding myself on my “google-fu” and knowing how to search to get what I, or other people, needed. Which usually required understanding the precise language you would need to use, not something vague. But over the years it’s gotten harder and harder, and now I get frustrated with how hard it has become to find something useful. I’ve had to go back to finding places I trust for information and looking through them.

          Although, ironically, I can do what you’re talking about with ai now.

      • rottingleaf@lemmy.zip · 2 months ago

        I’m feeling old, and I’m 28.

        Cause in my early childhood, in 2003-2007, we would resort to search engines only when we couldn’t find something by better (but more manual and social) means.

        Because - mwahahaha - most of the results were machine-generated crap.

        So I actually feel very uplifted by people promising that the Web will get back to normal in this sense.

  • ILikeBoobies@lemmy.ca · 3 months ago

    This market is expected to replace the same market that just used bots to achieve the same thing.

  • laverabe@lemmy.world · 3 months ago

    I just consider any comment after Jun 2023 to be compromised. Anyone who stayed after that date either doesn’t have a clue, or is sponsored content.

  • Optional@lemmy.world · 3 months ago

    I appreciate the mostly benign neglect we had for a while. Now that they’re paying attention it’s just all bad. Or would be, if I were there. HA.