Companies are going all-in on artificial intelligence right now, investing millions or even billions into the area while slapping the AI initialism on their products, even when doing so seems strange and pointless.

Heavy investment and increasingly powerful hardware tend to mean more expensive products. To discover if people would be willing to pay extra for hardware with AI capabilities, the question was asked on the TechPowerUp forums.

The results show that over 22,000 people, a massive 84% of the overall vote, said no, they would not pay more. More than 2,200 participants said they didn’t know, while just under 2,000 voters said yes.

  • T156@lemmy.world

    It just doesn’t really do anything useful from a layman point of view, besides being a TurboCyberQuantum buzzword.

    I’ve apparently got AI hardware in my tablet, but as far as I’m aware, I’ve never/mostly never actually used it, nor had much of a use for it. Off the top of my head, I can’t think of much that would make use of that kind of hardware, aside from some relatively technical software that is almost as happy running on a generic CPU. Opting for AI capabilities would be paying extra for something I’m not likely to ever make use of.

    And the actual stuff that might make use of AI is pretty much abstracted out so far as to be invisible. Maybe the autocorrecting feature on my tablet keyboard is in fact powered by the AI hardware, but from the user perspective, nothing has really changed from the old pre-AI keyboard, other than some additions that could just be a matter of getting newer, more modern hardware/software updates, instead of any specific AI magic.

  • MrAlternateTape@lemm.ee

    I have no clue why anybody thought I would pay more for hardware if it goes along with some stupid trend that will blow up in our faces sooner or later.

    I don’t get the AI hype. I see a lot of companies very excited, but I don’t believe it can deliver even 30% of what people seem to think.

    So no, definitely not paying extra. If I can, I will buy stuff without AI bullshit. And if I cannot, I will simply not upgrade for a couple of years since my current hardware is fine.

    In a couple of years either the bubble is going to burst, or they really have put in the work to make AI do the things they claim it will.

  • Cyborganism@lemmy.ca

    I don’t mind the hardware. It can be useful.

    What I do mind is the software running on my PC sending all my personal information, screenshots, and keystrokes to a corporation that will use all of it for profit, building a user profile for targeted advertising that could potentially be used against me.

  • Godort@lemm.ee

    This is one of those weird things that venture capital does sometimes.

    VC is injecting cash into tech right now at obscene levels because they think that AI is going to be hugely profitable in the near future.

    The tech industry is happily taking that money and using it to develop what they can, but it turns out the majority of the public don’t really want the tool if it means they have to pay extra for it. Especially in its current state, where the information it spits out is far from reliable.

    • Tenthrow@lemmy.world

      I have to endure a meeting at my company next week to come up with ideas on how we can wedge AI into our products because the dumbass venture capitalist firm that owns our company wants it. I have been opting not to turn on video because I don’t think I can control the cringe responses on my face.

    • cheese_greater@lemmy.world

      I don’t want it outside of heavily sandboxed and limited-scope applications. I don’t get why people want an agent of chaos fucking with all their files and the systems they’ve cobbled together.

      • FiveMacs@lemmy.ca

        NDAs also legally prevent you from using this forced garbage. Companies are going to get screwed over by other companies; capitalism is gonna implode, hopefully.

    • TipRing@lemmy.world

      Back in the 90s in college I took a Technology course, which discussed how technology has historically developed, why some things are adopted and other seemingly good ideas don’t make it.

      One of the things that is required for a technology to succeed is public acceptance. That is why AI is doomed.

      • SkyeStarfall@lemmy.blahaj.zone

        AI is not doomed; LLMs, or consumer AI products, might be.

        In industry, AI is and will continue to be used (though probably not LLMs, except in a few niche use cases).

        • TipRing@lemmy.world

          Yeah, I mean the AI being shoveled at us by techbros. Actual ML stuff is, and will continue to be, useful for all sorts of not-sexy but vital research and production tasks. I do task automation for my job and I use things like transcription models and OCR; my company uses smart sorting with rapid image recognition and other really cool ways of getting computers to do things that humans are bad at. It’s things like LLMs that just aren’t there yet. I have seen very early research on AI that is trained to actually understand language and learn by context. It’s years away, but eventually we might see AI that really can do what the current AI companies are claiming.
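
          As a concrete flavour of that kind of automation, here is a minimal OCR-and-route sketch, assuming pytesseract and Pillow are installed and a local Tesseract binary is available; the file name and routing rule are hypothetical:

          ```python
          # Minimal OCR sketch: read text out of a scanned page so it can be
          # sorted/routed automatically. Requires a local Tesseract install.
          from PIL import Image
          import pytesseract

          def extract_text(image_path: str) -> str:
              """Run OCR on a single image and return the recognized text."""
              with Image.open(image_path) as img:
                  return pytesseract.image_to_string(img)

          if __name__ == "__main__":
              # "invoice_scan.png" is a hypothetical input file.
              text = extract_text("invoice_scan.png")
              # Trivial, illustrative routing rule.
              if "invoice" in text.lower():
                  print("Looks like an invoice; route it to the accounting queue.")
          ```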

  • Telorand@reddthat.com

    …just under 2,000 voters said “yes.”

    And those people probably work in some area related to LLMs.

    It’s practically a meme at this point:

    Nobody:

    Chip makers: People want us to add AI to our chips!

    • ozymandias117@lemmy.world

      The even crazier part to me is some chip makers we were working with pulled out of guaranteed projects with reasonably decent revenue to chase AI instead

      We had to redesign our boards and they paid us the penalties in our contract for not delivering so they could put more of their fab time towards AI

      • nickwitha_k (he/him)@lemmy.sdf.org

        That’s absolutely crazy. Applying the Chicago School MBA philosophy to something as time-consuming and expensive to set up as silicon production.

  • BlackLaZoR@kbin.run

    There’s really no point unless you work in specific fields that benefit from AI.

    Meanwhile every large corpo tries to shove AI into every possible place they can. They’d introduce ChatGPT to your toilet seat if they could

    • fuckwit_mcbumcrumble@lemmy.dbzer0.com

      Someone did a demo recently of AI acceleration for 3D upscaling (think DLSS or AMD’s equivalent) and it showed a nice boost in performance. It could be useful in the future.

      I think it’s kind of like ray tracing. We don’t have a real use for it now, but eventually someone will figure out something that it’s actually good for and use it.

      • NekuSoul@lemmy.nekusoul.de

        AI acceleration for 3d upscaling

        Isn’t that not only similar to, but exactly what DLSS already is? A neural network that upscales games?

        • fuckwit_mcbumcrumble@lemmy.dbzer0.com

          But instead of relying on the GPU to power it, the dedicated AI chip did the work. It had its own distinct chip on the graphics card that handled the upscaling.

          I forget who demoed it, and searching for anything related to “AI” and “upscaling” just gets buried under what they’re already doing.

          • barsoap@lemm.ee

            That’s already the nvidia approach, upscaling runs on the tensor cores.

            And no, it’s not something magical, it’s just matrix math. AI workloads are lots of convolutions on gigantic, low-precision, floating-point matrices. Low precision because neural networks are robust against random perturbation, and more rounding is exactly that, random perturbation: there’s no point in spending electricity and heat on high precision if it doesn’t make the output any better.

            The kicker? Those tensor cores are less complicated than ordinary GPU cores. For general-purpose hardware, and that includes consumer-grade GPUs, it’s way more sensible to make sure the ALUs can deal with 8-bit floats and leave everything else the same. That stuff is going to be standard by the next generation of even potatoes: every SoC with an integrated GPU will have enough oomph to sensibly run reasonable inference loads. And by “reasonable” I mean actually quite big; as far as I’m aware, e.g. firefox’s inbuilt translation runs on the CPU, the models are small enough.
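
            As a rough numpy sketch of that robustness point (illustrative only; it simulates 8-bit weights in software rather than using real fp8/int8 hardware):

            ```python
            import numpy as np

            # Toy "layer": y = relu(W @ x), computed with full-precision weights
            # and again with the weights crushed down to 8 bits, to show how
            # little the output moves.
            rng = np.random.default_rng(0)
            W = rng.standard_normal((256, 256)).astype(np.float32)
            x = rng.standard_normal(256).astype(np.float32)

            def quantize_8bit(a: np.ndarray) -> np.ndarray:
                """Symmetric 8-bit quantization: scale to [-127, 127], round, scale back."""
                scale = np.abs(a).max() / 127.0
                return np.round(a / scale).astype(np.int8).astype(np.float32) * scale

            y_full = np.maximum(W @ x, 0.0)
            y_low = np.maximum(quantize_8bit(W) @ x, 0.0)

            # Relative error stays small even though each weight lost most of its bits.
            rel_err = np.linalg.norm(y_full - y_low) / np.linalg.norm(y_full)
            print(f"relative output error with 8-bit weights: {rel_err:.3%}")
            ```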

            Nvidia OTOH is very much in the market for AI accelerators and figured it could corner the upscaling market and sell another new generation of cards by making their software rely on those cores even though it could run on the other cores. As AMD demonstrated, their stuff also runs on nvidia hardware.

            What’s actually special sauce in that area are the RT cores, that is, accelerators for ray casting through BSP trees. That’s indeed specialised hardware, but those things are nowhere near fast enough to compute enough rays for even remotely tolerable output, which is where all that upscaling/denoising comes into play.

              • AdrianTheFrog@lemmy.world

                Having to send full frames off of the GPU for extra processing has got to come with some extra latency/problems compared to just doing it on the GPU… and I’d be shocked if they have the motion vectors and other engine data that DLSS uses, which would require games to be specifically modified for this adaptation. IDK, but I don’t think we have enough details about this to really judge whether it’s useful or not, although I’m leaning on the side of ‘not’ for this particular implementation. They never showed any actual comparisons to DLSS either.

                As a side note, I found this other article on the same topic where they obviously didn’t know what they were talking about and mixed up frame rates and power consumption; it’s very entertaining to read:

                The NPU was able to lower the frame rate in Cyberpunk from 263.2 to 205.3, saving 22% on power consumption, and probably making fan noise less noticeable. In Final Fantasy, frame rates dropped from 338.6 to 262.9, resulting in a power saving of 22.4% according to PowerColor’s display. Power consumption also dropped considerably, as it shows Final Fantasy consuming 338W without the NPU, and 261W with it enabled.

                • NekuSoul@lemmy.nekusoul.de

                  I’ve been trying to find some better/original sources [1] [2] [3] and from what I can gather it’s even worse. It’s not even an upscaler of any kind, it apparently uses an NPU just to control clocks and fan speeds to reduce power draw, dropping FPS by ~10% in the process.

                  So yeah, I’m not really sure why they needed an NPU to figure out that running a GPU at its limit has always been wildly inefficient. Outside of getting that investor money of course.

            • fuckwit_mcbumcrumble@lemmy.dbzer0.com

              Nvidia’s tensor cores are inside the GPU; this was outside the GPU but on the same card (the PCB looked like an abomination). If I remember right, in total it used slightly less power but performed about 30% faster than normal DLSS.

    • br3d@lemmy.world

      “Shits are frequently classified into three basic types…” and then gives 5 paragraphs of bland guff

      • Krackalot@discuss.tchncs.de

        With how much scraping of reddit they do, there’s no way it doesn’t try ordering a poop knife off of Amazon for you.

      • catloaf@lemm.ee

        It’s seven types, actually, and it’s called the Bristol scale, after the Bristol Royal Infirmary where it was developed.

  • FiniteBanjo@lemmy.today

    People already aren’t paying for them; Nvidia’s main source of income right now is industry use, not consumer parts.

  • magiccupcake@lemmy.world

    Most people already have pretty decent AI hardware in the form of a GPU.

    Sure, dedicated hardware might be more efficient for mobile devices, but that’s already done better in the cloud.

    • PriorityMotif@lemmy.world

      The Google Coral TPU has been around for years and it’s cheap. Works well for object detection.

      https://docs.frigate.video

      There are a lot of use cases in manufacturing where you can do automated inspection of parts as they go by on a conveyor, or have a robot arm pick and place parts/boxes/pallets, etc.

      Those types of systems have been around for decades, but they can always be improved.
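
      For a sense of what such an edge inference loop looks like, here is a minimal sketch assuming tflite_runtime is installed; the model file, delegate library name, and SSD-style output layout are typical assumptions rather than anything taken from Frigate itself:

      ```python
      import numpy as np
      from tflite_runtime.interpreter import Interpreter, load_delegate

      # Load a quantized detection model onto the Edge TPU.
      # "detect_edgetpu.tflite" is a placeholder model file;
      # libedgetpu.so.1 is the usual delegate library on Linux.
      interpreter = Interpreter(
          model_path="detect_edgetpu.tflite",
          experimental_delegates=[load_delegate("libedgetpu.so.1")],
      )
      interpreter.allocate_tensors()
      inp = interpreter.get_input_details()[0]
      out = interpreter.get_output_details()

      def detect(frame: np.ndarray, threshold: float = 0.5):
          """Run one frame (already resized to the model's input shape) and
          return bounding boxes whose score clears the threshold."""
          interpreter.set_tensor(inp["index"], np.expand_dims(frame, 0))
          interpreter.invoke()
          boxes = interpreter.get_tensor(out[0]["index"])[0]   # SSD-style outputs
          scores = interpreter.get_tensor(out[2]["index"])[0]  # order varies by model
          return [box for box, score in zip(boxes, scores) if score > threshold]
      ```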

    • Nomecks@lemmy.ca

      It’s not really done better in the cloud if you can push the compute out to the device. When you can leverage edge hardware you save bandwidth fees and a ton of cloud costs. It’s faster in the cloud because you can leverage a cluster with economies of scale, but any AI company would prefer the end-user to pay for that compute instead, if they can service requests adequately.
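
      A back-of-envelope sketch of that trade-off; every figure below is an assumed, illustrative number rather than a real price:

      ```python
      # Hypothetical fleet: 10,000 cameras, each shipping 1 GB of video per day
      # to the cloud for inference, versus a one-off NPU per device at the edge.
      cameras = 10_000
      gb_per_camera_per_day = 1.0
      transfer_cost_per_gb = 0.09   # assumed per-GB bandwidth/ingest cost
      gpu_hours_per_day = 200       # assumed cluster time to process it all
      gpu_cost_per_hour = 1.50      # assumed cloud GPU price per hour

      cloud_daily = (cameras * gb_per_camera_per_day * transfer_cost_per_gb
                     + gpu_hours_per_day * gpu_cost_per_hour)
      print(f"cloud: ~${cloud_daily:,.0f}/day, paid for as long as the service runs")

      npu_cost_per_device = 30.0    # assumed extra hardware cost per camera
      lifetime_days = 3 * 365       # amortized over an assumed 3-year lifetime
      edge_daily = cameras * npu_cost_per_device / lifetime_days
      print(f"edge:  ~${edge_daily:,.0f}/day equivalent, hardware cost only")
      ```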

  • cmrn@lemmy.world

    I still don’t understand how the AI buzzword 10x’d all these valuations when it’s always either: a) exactly what they’ve been doing before, now with a fancy new name, or b) AI deliberately shoehorned in, in ways with no practical benefit.

  • Lost_My_Mind@lemmy.world

    84% said no.

    16% punched the person asking them for suggesting such a practice. So they also said no. With their fist.

  • ClamDrinker@lemmy.world

    Depends on what kind of AI enhancement. If it’s just more things nobody needs that solve no problem, it’s a no-brainer. But for computer graphics, for example, DLSS is a feature people do appreciate, because it makes sense to apply AI there. Who doesn’t want faster and perhaps better graphics by using AI rather than brute forcing it, which also saves on electricity costs?

    But that isn’t the kind of thing most people in a survey would even think of, since the benefit is readily apparent and doesn’t even need to be explicitly sold as “AI”. They’re most likely thinking of the kind of products where the manufacturer put an “AI powered” sticker on them because their stakeholders told them it would increase their sales, or because it allowed them to overstate the value of a product.

    Of course people are going to reject white-collar scams if they think that’s what “AI enhanced” means. If legitimate use cases with clear advantages are produced, they will speak for themselves, and I don’t think people would be opposed. But obviously, there are a lot more companies that want to ride the AI wave than there are legitimate use cases, so there will be quite some snake oil being sold.

    • AdrianTheFrog@lemmy.world

      Well, I think a lot of these CPUs come with a dedicated NPU; idk if it would be more efficient than the tensor cores on an Nvidia GPU, for example, though.

      edit: whatever NPU they put in does have the advantage of being able to access your full CPU RAM, though, so I could see it being kinda useful for things other than custom Zoom background effects.

      • yamanii@lemmy.world

        But isn’t RAM slower than a GPU’s VRAM? Last year people were complaining that local models had suddenly become very slow on the same GPU, and it turned out a new Nvidia driver had automatically enabled a setting that lets the GPU spill into system RAM when VRAM fills up. That really annoyed people trying to run bigger models, since a crash (so you can retry with lower settings) would be preferable to the increased generation time once regular RAM gets involved.

        • AdrianTheFrog@lemmy.world

          RAM is slower than GPU VRAM, but that extreme slowdown is due to the bottleneck of the PCIe bus that the data has to cross to get to the GPU.
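
          Some rough numbers to put that bottleneck in perspective (peak bandwidths, approximate; real-world throughput is lower):

          ```python
          # Time to stream 10 GB of model weights once at approximate peak bandwidths.
          gb = 10
          bandwidth_gb_per_s = {
              "GDDR6X VRAM (on-card)": 1000,  # roughly 1 TB/s class
              "DDR5 system RAM":         60,  # dual-channel, ballpark figure
              "PCIe 4.0 x16 link":       32,  # what spilled-over data must cross
          }
          for name, bw in bandwidth_gb_per_s.items():
              print(f"{name:24s} ~{gb / bw * 1000:6.1f} ms per pass over {gb} GB")
          ```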