• evranch@lemmy.ca · 6 months ago

      Do it, it’s easy and fun, and you’ll learn about the actual capabilities of the tech. I started a week ago and I’m a convert on the utility of local AI. I had to go back to Reddit for it, but r/localllama has tons of good info. You can actually run useful models at a conversational pace.

      This whole thread is silly because VRAM is what you need: I’m running some pretty good coding and general-knowledge models on a 12GB Radeon. Almost none of my 32GB of system RAM is used, lol. Either Microsoft is out of touch or they’re hiding an amazing new algorithm.

      Running in system RAM works, but processing on the regular CPU is painfully slow, over 10x slower.

      • Secret300@sh.itjust.works · 6 months ago

        Just downloaded GPT4All and LM Studio or whatever. I’m learning slowly, but there’s a lot of jargon. I only have the 4GB RX 5500 and I’m not sure how to get it to run on my GPU. I think I really just need to upgrade my PC though. I have 16GB of RAM but an i5-6500. Shit be slow

        • evranch@lemmy.ca · 6 months ago

          Start off with the TinyLlama model, it’s under 1GB. It will even run on a Raspberry Pi, so on real PCs it rips even on CPU. You need a “quantized” model; they are distributed as GGUF files.

          I would recommend 5-bit quantization. The fewer bits, the stupider, to put it simply, and TinyLlama is already pretty stupid. But it’s still impressive for what it is, and you can learn the jargon, which is the hard part.
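          As a rough sketch of grabbing one in Python (the repo and filename below are just an example of where a quantized TinyLlama GGUF lives; check the model page for the current names):

          ```python
          # Sketch: download a 5-bit quantized TinyLlama GGUF from Hugging Face.
          # Repo and filename are examples, not the only option.
          from huggingface_hub import hf_hub_download

          model_path = hf_hub_download(
              repo_id="TheBloke/TinyLlama-1.1B-Chat-v1.0-GGUF",   # example repo
              filename="tinyllama-1.1b-chat-v1.0.Q5_K_M.gguf",    # Q5_K_M = 5-bit quant
          )
          print(model_path)  # local path to the .gguf file
          ```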

          The fastest software to run the model on is llama.cpp, which reimplements the original Python inference code in C++. Use -ngl <number> to offload layers from the CPU to the GPU.
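          If you’d rather drive it from Python than the CLI, the llama-cpp-python bindings expose the same offload knob as n_gpu_layers. A minimal sketch, reusing the model_path from the download step above:

          ```python
          # Sketch: run a quantized GGUF with the llama-cpp-python bindings.
          # n_gpu_layers plays the same role as -ngl on the llama.cpp CLI.
          from llama_cpp import Llama

          llm = Llama(
              model_path=model_path,   # path to the .gguf downloaded above
              n_gpu_layers=-1,         # -1 = offload all layers that fit in VRAM
              n_ctx=2048,              # context window
          )

          out = llm("Q: What is MQTT in one sentence? A:", max_tokens=64)
          print(out["choices"][0]["text"])
          ```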

          Not sure what system you’re using; most AI development is done on Linux, so if you’re on Windows I can’t guarantee anything will work.

          Right now I’m working on a voice assistant for my house that can read all my MQTT data and give status reports; it’s neat when you get it running. It’s fun to tweak it with prompts and see what it can do. TinyLlama can’t seem to reliably handle MQTT and JSON, but slightly smarter models can with ease.
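          For flavour, a hedged sketch of the kind of glue that involves (the topics, broker address, and prompt are made up; the MQTT client is paho-mqtt, and llm is the model object from the earlier sketch):

          ```python
          # Sketch: grab a handful of MQTT messages, then ask the model
          # for a spoken status report. Topics and broker are hypothetical.
          import json
          from paho.mqtt import subscribe

          # Collect 5 messages from anything under home/ on a local broker.
          msgs = subscribe.simple("home/#", msg_count=5, hostname="localhost")
          readings = {m.topic: m.payload.decode() for m in msgs}

          prompt = (
              "You are a home assistant. Give a short spoken status report "
              f"based on this sensor data: {json.dumps(readings)}\nReport:"
          )
          # llm is the llama-cpp-python model loaded above.
          print(llm(prompt, max_tokens=128)["choices"][0]["text"])
          ```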