• Chaotic Entropy@feddit.uk · 3 months ago

      AI isn’t a product for consumers, it’s a product for investors. If somewhere down the line a consumer benefits in some way, that’s just a side effect.

      • GraniteM@lemmy.world · 3 months ago

        Think about the ways that information tech has revolutionized our ability to do things. It’s allowed us to do math, produce and distribute news and entertainment, communicate with each other, make our voices heard, organize movements, and create and access pornography at rates and in ways that humanity could only have dreamed of a few decades ago.

        Now consider that AI is first and foremost a technology predicated on reappropriating and stealing credit for another person’s legitimate creative work.

        Now imagine how much of humanity’s history has had that kind of exploitation at the forefront of its worst moments, and consider what might lie ahead with those kinds of impulses being given the rocket fuel of advanced information technology.

  • Simon@lemmy.dbzer0.com · 3 months ago

    Okay, I’ll bite. Where exactly has it not been useful, out of the tools you’ve all had the chance to interact with at least once?

    Edit: Bruh, it’s a legit question. If you feel attacked enough by this neutral info gathering to downvote, you have a sad, sad life.

    • z00s@lemmy.world · 3 months ago

      It’s useful for a lot of stuff, but it’s been waaay overhyped, mostly by YouTubers desperate for content. So I think a lot of people are having a counter-reaction to that.

      Once everyone calms down and realises it’s not an automatic-do-everything-machine, they’ll appreciate the circumstances in which it actually is useful.

      Bonus points: find the landing page for any tech startup from the last 12 months that doesn’t mention AI or LLMs.

    • gorgor301@lemmy.world · 3 months ago

      In customer service. If I see a bot, I know there’s a 95% chance I won’t get the information I’m looking for. If I could get the information online somewhere, I wouldn’t have contacted customer service in the first place! I just want to interact with another human being who’s able to understand my queries.

      • acetanilide@lemmy.world · 3 months ago

        Yeah, it is so frustrating trying to get a question answered only to get stuck in a loop.

        And then when you finally find an email address to write to, their only reply is to talk to the bot…

      • Zink@programming.dev · 3 months ago

        And judging from the prompts many customer service lines use, there are also a lot of people who call customer service for the simplest/dumbest reasons. Wouldn’t be the first time dummies kept us from having nice things.

        But I’ll 100% acknowledge that even with perfect customer service, 99% of companies will enshittify it with AI if it promises to save them money.

    • TrueStoryBob@lemmy.world · 3 months ago

      The porn hasn’t been very good. I can get around dudes and chicks with like fourteen fingers, no toes, and nipples that hover over their skin… but I draw the line at the dirty talk being “I’m glad I could help you with that.”

  • NoneYa@lemm.ee · 3 months ago

    I was in an auto parts store yesterday and saw that you can buy a can of that stuff to fix your AC and the damn can has Bluetooth capabilities. So no, we’re still not done putting Bluetooth where it doesn’t need to be.

    • LifeInMultipleChoice@lemmy.world · 3 months ago

      That sounds cool. I don’t have a smart home setup, but Bluetooth sounds kinda nice to me for changing the temperature on the thermostat in the house; in the car, not so much. I do know many people who use Bluetooth to cast their phone calls to hands-free devices in their cars, as well as to hook up diagnostic tools and have the error codes go to their phone, instead of buying a product that costs hundreds of dollars just to have a screen you would only use for that one purpose.

      • NoneYa@lemm.ee · 3 months ago

        Oh no, this was strictly for the can to refill your car’s air conditioner liquid… I’m drawing a blank on what exactly it’s called.

        I’ve seen some of these cans have a digital display on them which I guess this Bluetooth is supposed to replace. But it’s still so weird to me especially because these cans are generally disposable.

  • N-E-N@lemmy.ca · 3 months ago

    I’m too young to know what Bluetooth was like 20 years ago, can anyone elucidate?

    • Echo Dot@feddit.uk · 3 months ago

      It wasn’t so much that it was put in stuff where it wasn’t useful; it was more that it was put in stuff that needed something better than Bluetooth, but they went with Bluetooth because it was new and shiny rather than old boring radio.

      The problem with Bluetooth, especially back then, was that the range was terrible (about 1 ft in my experience), you couldn’t connect to more than one thing at a time, it consumed quite a lot more power than radio (we have ultra low power modes now), and the bandwidth wasn’t great either (still somewhat the case but the bandwidth has improved). So you had things like Bluetooth car keys which were like keyless entry systems we have today, but rather than using radio they used Bluetooth so half the time you’d go near your car and nothing would happen.

      A lot of the things that used to have Bluetooth now use Wi-Fi today. Of course there were always things that had Bluetooth as a gimmick, but the vast majority were simply things that had Bluetooth when something else would have been the better option. Back then Bluetooth headphones were seen as a gimmick because they basically didn’t work; now they work, so they’re not a gimmick anymore. The perception of whether something is a gimmick is more about whether it works than whether it’s actually a useful product.

      • allan@lemmy.world · 3 months ago

        Pretty accurate except Bluetooth is also radio of course, so it sounds weird contrasting them like that.

        • Echo Dot@feddit.uk · 3 months ago

          Radio in this case meaning an analog signal. Just fire the appropriate good vibes energy at the car rather than trying to send some packets over a wireless network. It didn’t work mostly due to the lack of any kind of redundancy. If a packet got dropped then it just dropped, it was gone.

          When we started adding some decent protocols to Bluetooth it became more reliable, but the range is still not great because of the frequencies used. Of course, these days you would use Wi-Fi, not analog radio, to get all of the advantages of Bluetooth but with much greater range and reliability.

    • thirteene@lemmy.world · 3 months ago

      OP is describing the early stages of the Internet of Things: https://aws.amazon.com/what-is/iot/

      The general idea is that every device can communicate with every other device. Bluetooth was added to everything in the hope that we could better automate every aspect of our lives once a critical mass of devices could talk to each other: the Bluetooth receiver in your alarm clock tells your coffee machine to start remotely. But we quickly realized that the overhead isn’t worth the payoff. Up until that point, though, we made Bluetooth glasses, beanies, dash buttons, power tools, and appliances, and replaced infrared in most devices. It wasn’t that bad, but there were moments when you would pick up a smart nose trimmer and wonder why they included it.

      • Croquette@sh.itjust.works · 3 months ago

        And in the mid-2010s it got worse, when everyone and their mother put Bluetooth in anything and everything. IoT became accessible, only to be used in the dumbest ways to try and get rich from Kickstarter.

        • JasonDJ@lemmy.zip · 3 months ago

          I got an oral thermometer with Bluetooth.

          It doesn’t have any sort of a display on it. The only way to use it is with their app.

        • EvilLootbox@lemmy.world · 3 months ago

          only to be used in the dumbest way to try and get rich from Kickstarter

          Followed of course by an obligatory Shark Tank appearance asking $2 million for 4% of their company, where all profits go to the Zuck (the cost per customer acquisition is the entire margin, and it’s all spent on Facebook and IG ads).

          • Croquette@sh.itjust.works · 3 months ago

            The goal isn’t to make money now, it’s to have growth, the sacrosanct nectar of the gods, to be bought and have a big payout.

  • AromaticNeo@lemmy.world · 3 months ago

    I have Bluetooth headphones that are being advertised as using “AI technology” to make call quality better. It’s such a joke.

  • EmoDuck@sh.itjust.works · 3 months ago

    To be fair, we only know where Bluetooth is useful because we put it in a lot of places where it wasn’t useful

    • exanime@lemmy.today · 3 months ago

      Trial and error isn’t the only way to optimize things… It’s actually one of the worst, the one you use when you have no clue how to proceed.

      So no, that is not a justification for having done it or continuing to do it.

      Now I wonder if substituting the sugar in my coffee with arsenic would render a delicious new beverage… Only one way to find out!

      • EmoDuck@sh.itjust.works · 3 months ago

        I’m not talking about trial and error, I’m talking about throwing shit at the wall and seeing what sticks.

        There might be good ideas out there that no one could think of until they accidentally get invented

  • FireRetardant@lemmy.world · 3 months ago

    Or even like modern wifi. I saw a vacuum with wifi capabilities. Do I really need to check my vacuum battery level from my phone?

        • ours@lemmy.world · 3 months ago

          There are a few that do that, but they feel gimmicky. It looks like the upper half of a dummy and blows out vapor to steam the wrinkles out of the shirt.

          Yes, I’ve considered it in the past.

          • AnUnusualRelic@lemmy.world · 3 months ago

            Those things have been around forever and work very well. For domestic use it’s probably only worth it if you have a lot of shirts.

    • toofpic@lemmy.world · 3 months ago

      Well, this is something that I actually used. I have a robo vacuum. I was preparing my home for some guests once when I saw that the vacuum wasn’t fully charged (because it was mispositioned on its base). I put it on the right spot, let it charge for half an hour, started it, and left to buy groceries.
      At the store, I checked the app, which has my apartment mapped by the vacuum and shows its route and cleaning progress. I saw that with the current charge, it would have to go back, charge, and continue. So I set it from “max” power to “normal” to let it at least finish the job.
      It is a cool and useful thing.

        • toofpic@lemmy.world · 3 months ago

          Ah, ok, then yes. If it’s just an indicator on the vacuum versus “indicator in an app + register + give us all your data + ‘buy vacuum 2.0’ notifications”, then fuck them.

    • VodkaSolution@feddit.it (OP) · 3 months ago

      I saw a Bluetooth toothbrush that sends reports to your phone on how well you brushed your teeth, like wtf?!

    • AA5B@lemmy.world · 3 months ago

      Yes? Maybe the battery was left uncharged, or used up, so you’re waiting to do more cleaning. Why shouldn’t you be able to check?

      I have an automation in my Home Assistant setup to notify me when batteries need to be replaced or charged. Currently it’s only for the smart devices in that deployment, but yes. I want my home automation to keep track of all batteries, so I can see status at a glance and be reminded if one needs attention
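
      For what it’s worth, a low-battery nag like that is only a few lines of Home Assistant config. This is a rough sketch, and the entity and notify service names are made up, so they’d need to match your own setup:

```yaml
# Hypothetical automation: remind me when the vacuum's battery drops too low.
alias: Vacuum battery reminder
trigger:
  - platform: numeric_state
    entity_id: sensor.vacuum_battery     # made-up entity name
    below: 20
action:
  - service: notify.mobile_app_my_phone  # made-up notify target
    data:
      message: "Vacuum battery is low - put it back on the charger."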

  • Kraiden@kbin.run · 3 months ago

    “VR in the 80s” is my go-to analogy. Sooo many promises, such tantalizing potential… and zero follow-through.

    • Excrubulent@slrpnk.net · 3 months ago

      I think this is a good way to explain that VR today is no longer just a fad. It’s had its hype cycle and disillusionment, and now it’s on to the plateau of usefulness.

  • Excrubulent@slrpnk.net · 3 months ago

    This reminds me: I’m into season 5 of Burn Notice, and Sam said at one point, “I’m on Bluetooth if you need me”. It was a weird reminder that once upon a time people were paid to advertise just… Bluetooth, because that’s a brand name. These days it’s just everywhere.

    The product placements in that show are not exactly subtle. Excellent show though, I did not expect it to hold up so well.

      • 4am@lemm.ee · 3 months ago

        Boomers learned what Bluetooth was because manufacturers started making AirPod-style single-ear headsets for cell phones. Everyone called them “a Bluetooth”.

        So if you said “I’m on Bluetooth” it meant you had your big clunky earpiece on, ready to answer a call at a moment’s notice.

        A former fucking spy wouldn’t be caught dead using early Bluetooth for sensitive conversations though (and probably not current BT either). Considering every other segment of that show is a “here’s a hack to show how fragile the house of cards of modern society is, and how spies just navigate through it with impunity”, it’s pretty funny they leaned into this one.

    • SpaceNoodle@lemmy.world · 3 months ago

      You sure they didn’t mean it like “put it on a USB?” As in, they use the name of the connectivity technology to imply a single class of product that might use it?

    • Echo Dot@feddit.uk · 3 months ago

      You’re using AI to mean AGI and LLMs to mean AI. That’s on you though, everyone else knows what we’re talking about.

        • OhNoMoreLemmy@lemmy.ml · 3 months ago

          Words might have meanings, but “AI” has been used by researchers to refer to toy neural networks for longer than most people on Lemmy have been alive.

          This insistence that AI must refer to human-type intelligence is also such a weird distortion of language. Intelligence has never been a binary, human-level indicator. When people say that a dog is intelligent, or that an ant hive shows signs of intelligence, they don’t mean it can do what a human can. Why should AI be any different?

          • raspberriesareyummy@lemmy.world · 3 months ago

            You honestly don’t seem to understand. This is not about the extent of intelligence. This is about actual understanding: being able to classify a logical problem / a thought into concepts and processing it based on properties of such concepts and relations to other concepts. Deep learning, as impressive as the results may appear, is not that. You just throw training data at a few billion “switches” and flip switches until you get close enough to a desired result, without being able to predict how the outcome will change if a tiny change happens in the input data.

            • OhNoMoreLemmy@lemmy.ml · 3 months ago

              I mean that’s a problem, but it’s distinct from the word “intelligence”.

              An intelligent dog can’t classify a logic problem either, but we’re still happy to call them intelligent.

        • nonfuinoncuro@lemm.ee · 3 months ago

          I’ve given up trying to enforce the traditional definitions of “moot”, “to beg the question”, “nonplussed”, and “literally”. It’s helped my mental health. A little. I suggest you do the same; it’s a losing battle and the only person who gets hurt is you.

        • Echo Dot@feddit.uk · 3 months ago

          OP is an idiot though, hope we can agree on that one.

          Telling everyone else how they should use language is just an ultimately moronic move. After all we’re not French, we don’t have a central authority for how language works.

          • raspberriesareyummy@lemmy.world · 3 months ago

            Telling everyone else how they should use language is just an ultimately moronic move. After all we’re not French, we don’t have a central authority for how language works.

            There’s a difference between objecting to misuse of language and “telling everyone how they should use language” - you may not have intended it, but you used a straw man argument there.

            What we all should be acutely aware of (but unfortunately many are not) is how language is used to harm humans, animals or our planet.

            Fascists use language to create “outgroups” which they then proceed to dehumanize and eventually violate or murder. Capitalists speak about investor risks to justify return on investment, and proceed to lobby for de-regulation of markets that causes human and animal suffering through price gouging and factory farming of livestock. Tech corporations speak about “Artificial Intelligence” and proceed to persuade regulators that, because these are “intelligent” systems, this software may be used for autonomous systems that go on to cause injury and death when they malfunction.

            Yes, all such harm can be caused by individuals in daily life - individuals can be murderers or extort people on something they really need, or a drunk driver can cause an accident that kills people. However, the language that normalizes or facilitates such atrocities or dangers on a large scale, is dangerous and therefore I will proceed to continue calling out those who want to label the shitty penny market LLMs and other deep learning systems as “AI”.

      • intensely_human@lemm.ee · 3 months ago

        Nobody has yet met this challenge:

        Anyone who claims LLMs aren’t AGI should present a text processing task an AGI could accomplish that an LLM cannot.

        Or if you disagree with my

        • OhNoMoreLemmy@lemmy.ml · 3 months ago

          “Write an essay on the rise of ai and fact check it.”

          “Write a verifiable proof of the four colour problem”

          “If p=np write a python program demonstrating this, else give me a high-level explanation why it is not true.”

        • intensely_human@lemm.ee · 3 months ago

          Oops accidentally submitted. If someone disagrees with this as a fair challenge, let me know why.

          I’ve been presenting this challenge repeatedly and in my experience it leads very quickly to the fact that nobody — especially not the experts — has a precise definition of AGI

    • Ludrol@szmer.info · 3 months ago

      In 2022 AI evolved into AGI and LLM into AI. Languages are not static, as shown by Old English. Get on with the times.

      • Fedizen@lemmy.world · 3 months ago

        Changes to language made to sell products are not really the language adapting, but being influenced and distorted.

          • randomsnark@lemmy.ml · 3 months ago

            I think the modern pushback comes from people who get their understanding of technology from science fiction. SF has always (mis)used AI to mean sapient computers.

        • Echo Dot@feddit.uk · 3 months ago

          LLMs are one way of developing an AI. There are lots of conspiracy theories in this world that are real; it’s better to focus on them rather than make stuff up.

          There really is an amazing technological development going on, and you’re dismissing it over irrelevant semantics.

      • intensely_human@lemm.ee · 3 months ago

        They didn’t so much “evolve” as AI scared the shit out of us at such a deep level we changed the definition of AI to remain in denial about the fact that it’s here.

        Since time immemorial, passing a Turing test was the standard. As soon as machines started passing Turing tests, we decided Turing tests weren’t such a good measure of AI.

        But I haven’t yet seen an alternative proposed. Instead of using criteria and tasks to define it, we’re just arbitrarily saying “It’s not AGI so it’s not real AI”.

        In my opinion, it’s more about denial than it is about logic.

    • Fire Witch@lemmy.blahaj.zone · 3 months ago

      This is such a half-brained response. Yes, “actual” AI in the form of simulated neurons is pretty far off, but it’s fairly obvious that when people say AI they mean LLMs and other advanced forms of computing. There are other forms of AI besides LLMs anyways, like image analyzers.

      • raspberriesareyummy@lemmy.world · 3 months ago

        The only thing half-brained is the morons who advertise any contemporary software as “AI”. The “other forms” you mention are machine learning systems.

        AI contains the word “intelligence”, which implies understanding. A bunch of electrons manipulating a bazillion switches, following some trial-and-error set of rules until the desired output is found, is NOT that. That you would think the term AI is even remotely applicable to any of those examples shows how bad the brain rot is that’s caused by the overabundant misuse of the term.

        • smoker@lemm.ee · 3 months ago

          What do you call the human brain then, if not billions of “switches” as you call them that translate inputs (senses) into an output (intelligence/consciousness/efferent neural actions)?

          It’s the result of billions of years of evolutionary trial and error to create a working structure of what we would call a neural net, which is trained on data (sensory experience) as the human matures.

          Even early nervous systems were basic classification systems. Food, not food. Predator, not predator. The inputs were basic olfactory sense (or a more primitive chemosense probably) and outputs were basic motor functions (turn towards or away from signal).

          The complexity of these organic neural networks (nervous systems) increased over time and we eventually got what we have today: human intelligence. Although there are arguably different types of intelligence, as it evolved among many different phylogenetic lines. Dolphins, elephants, dogs, and octopuses have all been demonstrated to have some form of intelligence. But given the information in the previous paragraph, one can say that they are all just more and more advanced pattern recognition systems, trained by natural selection.

          The question is: where do you draw the line? If an organism with a photosensitive patch of cells on top of its head darts in a random direction when it detects sudden darkness (perhaps indicating a predator flying/swimming overhead, though not necessarily with 100% certainty), would you call that intelligence? What about a rabbit, who is instinctively programmed by natural selection to run when something near it moves? What about when it differentiates between something smaller or bigger than itself?

          What about you? How will you react when you see a bear in front of you? Or when you’re in your house alone and you hear something that you shouldn’t? Will your evolutionary pattern recognition activate only then and put you in fight-or-flight? Or is everything you think and do a form of pattern recognition, a bunch of electrons manipulating a hundred billion switches to convert some input into a favorable output for you, the organism? Are you intelligent? Or just the product of a 4-billion year old organic learning system?

          Modern LLMs are somewhere in between those primitive classification systems and the intelligence of humans today. They can perform word associations in a semantic higher dimensional space, encoding individual words as vectors and enabling the model to attribute a sort of meaning between two words. Comparing the encoding vectors in different ways gets you another word vector, yielding what could be called an association, or a scalar (like Euclidean or angular distance) which might encode closeness in meaning.

          Now if intelligence requires understanding as you say, what degree of understanding of its environment (ecosystem for organisms, text for LLM. Different types of intelligence, paragraph 4) does an entity need for you to designate it as intelligent? What associations need it make? Categorizations of danger, not danger and food, not food? What is the difference between that and the Pavlovian responses of a dog? And what makes humans different, aside from a more complex neural structure that allows us to integrate orders of magnitude more information more efficiently?

          Where do you draw the line?
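
          To make the word-vector bit concrete, here’s a toy sketch of that arithmetic. The vectors below are invented 3-d examples (real embeddings have hundreds of dimensions learned from text), so only the mechanics are meant to be illustrative:

```python
import math

def cosine(a, b):
    """Cosine similarity: how closely two vectors point in the same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Invented 3-d "embeddings" -- real models learn these dimensions from text.
vecs = {
    "king":  [0.9, 0.8, 0.1],
    "queen": [0.9, 0.1, 0.8],
    "man":   [0.1, 0.9, 0.1],
    "woman": [0.1, 0.1, 0.9],
}

# The classic association-by-arithmetic demo: king - man + woman ~ ?
target = [k - m + w for k, m, w in zip(vecs["king"], vecs["man"], vecs["woman"])]

# Find the closest remaining word by angular distance.
best = max((w for w in vecs if w != "king"), key=lambda w: cosine(target, vecs[w]))
print(best)  # -> queen
```

          Swap in embeddings from a trained model and the same comparison is what gives you those “associations” and closeness-in-meaning scores.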

          • raspberriesareyummy@lemmy.world · 3 months ago

            A consciousness is not an “output” of a human brain. I have to say, I wish large language models didn’t exist, because now for every comment I respond to, I have to consider whether or not a LLM could have written that :(

            In effect, you compare learning on training data: “input -> desired output” with systematic teaching of humans, where we are teaching each other causal relations. The two are fundamentally different.

            Also, you are questioning whether or not logical thinking (as opposed to throwing some “loaded” neuronal dice) is even possible. In that case, you may as well stop posting right now, because if you can’t think logically, there’s no point in you trying to make a logical point.

            • Lifter@discuss.tchncs.de · 3 months ago

              systematic teaching of humans, where we are teaching each other causal relations. The two are fundamentally different.

              So you mean that a key component to intelligence is learning from others? What about animals that don’t care for their children? Are they not intelligent?

              What about animals that can’t learn at all, where their brains are completely hard-wired from birth? Is that not intelligence?

              You seem to be objecting that OP’s questions are too philosophical. The question “what is intelligence” can only be solved by philosophical discussion, trying to break it down into other questions. Why is the question about the “brain as a calculator” objectionable? I think it may be uncomfortable for you to even speak of, but that would only be an indicator that there is something to it.

              It would indeed throw your world view upside down if you realised that you are also just a computer made of flesh and all your output is deterministic, given the same input.

              • raspberriesareyummy@lemmy.world · 3 months ago

                So you mean that a key component to intelligence is learning from others? What about animals that don’t care for their children? Are they not intelligent?

                You contradict yourself, the first part of your sentence getting my point correctly, and the second questioning an incorrect understanding of my point.

                What about animals that can’t learn at all, wheere their barains are completely hard wired from birth. Is that not intelligence?

                Such an animal does not exist.

                It would indeed throw your world view upside down if you realised that you are also just a computer made of flesh and all your output is deterministic, given the same input.

                That’s a long way of saying “if free will didn’t exist”, at which point your argument becomes moot, because I would have no influence over what it does to my world view.

            • smoker@lemm.ee · 3 months ago

              A consciousness is not an “output” of a human brain.

              Fair enough. Obviously consciousness is more complex than that. I should have put “efferent neural actions” first in that case, consciousness just being a side effect, something different yet composed of the same parts, an emergent phenomenon. How would you describe consciousness, though? I wish you would offer that instead of just saying “nuh uh” and calling me ChatGPT :(

              Not sure how you interpreted what I wrote in the rest of your comment though. I never mentioned humans teaching each other causal relations? I only compared the training of neural networks to evolutionary principles, where at one point we had entities that interacted with their environment in fairly simple and predictable ways (a “deterministic algorithm” if you will, as you said in another comment), and at some later point we had entities that we would call intelligent.

              What I am saying is that at some point the pattern recognition “trained” by evolution (where inputs are environmental distress/eustress, and outputs are actions that are favorable to the survival of the organism) became so advanced that it became self-aware (higher pattern recognition on itself?) among other things. There was a point, though, some characteristic, self-awareness or not, where we call something intelligence as opposed to unintelligent. When I asked where you draw the line, I wanted to know what characteristic(s) need to be present for you to elevate something from the status of “pattern recognition” to “intelligence”.

              It’s tough to decide whether more primitive entities were able to form causal relationships. When they saw predators, did they know that they were going to die if they didn’t run? Did they at least know something bad would happen to them? Or was it just a pre-programmed neural response that caused them to run? Most likely the latter.

              Based on all that we know and observe, a dog (any animal, really) understands concepts and causal relations to varying degrees. That’s true intelligence.

              From another comment, I’m not sure what you mean by “understands”. It could mean having knowledge about the nature of a thing, or it could mean interpreting things in some (meaningful) way, or it could mean something completely different.

              To your last point, logical thinking is possible, but of course humans can’t do it on our own. We had to develop a system for logical thinking (which we call “logic”, go figure) as a framework because we are so bad at doing it ourselves. We had to develop statistical methods to determine causal relations because we are so bad at doing it on our own. So what does it mean to “understand” a thing? When you say an animal “understands” causal relations, do they actually understand it or is it just another form of pattern recognition (why I mentioned Pavlov in my last comment)? When humans “understand” a thing, do they actually understand, or do we just encode it with the frameworks built on pattern recognition to help guide us? A scientific model is only a model, built on trial and error. If you “understand” the model you do not “understand” the thing that it is encoding. I know you said “to varying degrees”, and this is the sticking point. Where do you draw the line?

              When you want to have artificial intelligence, even the most basic software can have some kind of limited understanding that actually fits this attempt at a definition - it’s just that the functionality will be very limited and pretty much appear useless. […] You could program image recognition using math to find certain shapes, which in turn - together with colour ranges and/or contrasts - could be used to associate object types, for which causal relations can be defined, upon which other parts of an AI could then base decision processes. This process has potential for error, but in a similar way that humans can mischaracterize the things we see - we also sometimes do not recognize an object correctly.

              I recognize that you understand the point I am trying to make. I am trying to make the same point, just with a different perspective. Your description of an “actually intelligent” artificial intelligence closely matches how sensory data is integrated in the layers of the visual cortex, perhaps on purpose. My question still stands, though. A more primitive species would integrate data in a similar, albeit slightly less complex, way: take in (visual) sensory information, integrate the data to extract easier-to-process information such as brightness, color, lines, movement, and send it to the rest of the nervous system for further processing to eventually yield some output in the form of an action (or thought, in our case). Although in the process of integrating, we necessarily lose information along the way for the sake of efficiency, so what we perceive does not always match what we see, as you say. Image recognition models do something similar, integrating individual pixel information using convolutions and such to see how it matches an easier-to-process shape, and integrating it further. Maybe it can’t reason about what it’s seeing, but it can definitely see shapes and colors.
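              To make the convolution point concrete, here is a toy sketch (my own illustration, not anything from an actual model): a small kernel slides over pixel values and responds strongly wherever a pattern it encodes — here a vertical edge — is present. This is the “integrating individual pixel information” step, with no reasoning involved anywhere.

              ```python
              def convolve2d(image, kernel):
                  """Valid-mode 2D convolution (cross-correlation) over a grayscale image."""
                  kh, kw = len(kernel), len(kernel[0])
                  out = []
                  for i in range(len(image) - kh + 1):
                      row = []
                      for j in range(len(image[0]) - kw + 1):
                          # Sum of element-wise products of kernel and image patch
                          row.append(sum(
                              image[i + di][j + dj] * kernel[di][dj]
                              for di in range(kh) for dj in range(kw)
                          ))
                      out.append(row)
                  return out

              # A 4x4 image with a hard vertical edge between columns 1 and 2
              image = [
                  [0, 0, 1, 1],
                  [0, 0, 1, 1],
                  [0, 0, 1, 1],
                  [0, 0, 1, 1],
              ]

              # Simplified vertical-edge kernel
              kernel = [
                  [-1, 1],
                  [-1, 1],
              ]

              result = convolve2d(image, kernel)
              # The middle column of the output lights up where the edge sits:
              # every output row is [0, 2, 0]
              ```

              Real networks just stack many such filters (with learned kernel values) and feed the responses into further layers — “higher pattern recognition” built on lower, exactly as with the visual cortex analogy above.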

              You will notice that we are talking about intelligence, which is a remarkably complex and nuanced topic. It would do some good to sit and think deeply about it, even if you already think you understand it, instead of asserting that whoever sounds like they might disagree with you is wrong and calling them chatbots. I actually agree with you that calling modern LLMs “intelligent” is wrong. What I ask is what you think would make them intelligent. Everything else is just context so that you understand where I’m coming from.

    • Halosheep@lemm.ee
      link
      fedilink
      arrow-up
      0
      ·
      3 months ago

      The term has been stolen and redefined. It’s pointless to be pedantic about it at this point.

      • Pennomi@lemmy.world
        link
        fedilink
        English
        arrow-up
        0
        ·
        3 months ago

        AI traditionally meant now-mundane things like pathfinding algorithms. The only thing people seem to want Artificial Intelligence to mean is “something a computer can almost do but can’t yet”.

        • intensely_human@lemm.ee
          link
          fedilink
          arrow-up
          0
          ·
          3 months ago

          AI is, by definition these days, a future technology. We think of AI as science fiction so when it becomes reality we just kick the can in the definition.