The tech giant is evaluating tools that would use artificial intelligence to perform tasks that some of its researchers have said should be avoided.

Google’s A.I. safety experts had said in December that users could experience “diminished health and well-being” and a “loss of agency” if they took life advice from A.I. They had added that some users who grew too dependent on the technology could think it was sentient. And in March, when Google launched Bard, it said the chatbot was barred from giving medical, financial or legal advice. Bard shares mental health resources with users who say they are experiencing mental distress.

  • Skies5394@lemmy.ml · 1 year ago

    Why in the pissity-fuck would I take life advice from Google, Google applications, or an AI trained by Google?

    That is so far outside of what I find reasonable.

    • SSUPII@sopuli.xyz · 1 year ago

      I think this is more of a precaution. They are not building a service specifically for this, but probably updating Bard for the case where a user asks those questions. I think it's reasonable, but it must be developed and released in the most curated, well-tested state possible to avoid a repeat of what already happened (the suicide hotline that suddenly went full AI, then backtracked because it responded badly).

  • ExLisper@linux.community · 1 year ago

    I actually predicted this years ago. This technology will improve with time, and people will use it the same way they use movie or book recommendations today. After a while people will get so used to it that they will just let AI run their lives through career, lifestyle, fashion, relationship and other recommendations. Eventually Google will just automatically book a restaurant dinner for you when its AI decides it is optimal for you to go out, rent you a new apartment when it decides it's better for you than the current one, and find you a new job when it decides it's time for a change. People will turn into robots with external decision centres. And the AI won't even be that smart, just well trained.

  • Jay Baker (they/he)@beehaw.org · 1 year ago

    Any such technology will of course be acontextual and lack any kind of critique of capitalism. I watch people I know make all kinds of “wise” decisions based on career expectations, ambition, and definitions of “success,” and they’re actually really miserable yet can’t seem to recognise why. I’m unconvinced AI developed by capitalist companies would provide healthier perspectives. Heck, even humans fall short in roles like counselling when they lack class consciousness or a critique of neoliberalism. AI probably doesn’t stand much more of a chance.