• Secret300@sh.itjust.works
    6 months ago

    Just downloaded GPT4All and LM Studio or whatever. I’m learning slowly but there’s a lot of jargon. I only have the 4 GB RX 5500 and I’m not sure how to get it to run on my GPU. I think I really just need to upgrade my PC tho. I have 16 GB of RAM but an i5-6500. Shit be slow

    • evranch@lemmy.ca
      6 months ago

      Start off with the TinyLlama model; it’s under 1 GB. It will even run on a Raspberry Pi, so on real PCs it rips even on CPU. You need a “quantized” model; they are distributed as GGUF files.

      I would recommend 5-bit quantization. The fewer bits, the stupider the model, to put it simply, and TinyLlama is already pretty stupid. But it’s still impressive for what it is, and you can learn the jargon, which is the hard part.
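      To get a feel for why 5-bit lands under 1 GB, here is a rough back-of-the-envelope sketch: file size is roughly parameters times bits per weight divided by 8. The numbers are ballpark only, since real GGUF files add metadata and mix precisions across layers.

```python
# Rough GGUF file-size estimate: parameters * bits-per-weight / 8.
# TinyLlama has ~1.1B parameters; these are ballpark figures, not exact,
# since real GGUF files add metadata and use mixed-precision layers.
def approx_size_gb(params_billion: float, bits: int) -> float:
    bytes_total = params_billion * 1e9 * bits / 8
    return bytes_total / 1e9  # decimal GB

for bits in (2, 4, 5, 8):
    print(f"{bits}-bit: ~{approx_size_gb(1.1, bits):.2f} GB")
```

      At 5 bits that works out to roughly 0.69 GB, which is why the whole thing fits comfortably in 4 GB of VRAM with room left for context.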

      Fastest software to run the model on is llama.cpp, a plain C/C++ implementation with no Python runtime needed. Use -ngl <number> to offload layers from the CPU to the GPU.
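      A typical invocation looks something like this. The model filename and layer count are just examples (recent llama.cpp builds name the binary llama-cli; older ones called it main):

```shell
# Run a 5-bit quantized TinyLlama with llama.cpp, pushing layers onto the GPU.
# Filename and layer count are examples; TinyLlama has 22 transformer layers,
# so -ngl 22 offloads everything. Lower it if you run out of VRAM.
./llama-cli \
  -m tinyllama-1.1b-chat-v1.0.Q5_K_M.gguf \
  -ngl 22 \
  -p "Give me a one-line status report."
```

      Watch the startup log: it prints how many layers actually landed on the GPU, so you can dial -ngl up or down to fit your card.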

      Not sure what system you’re using; most AI development is done on Linux, so if you’re on Windows I can’t guarantee anything will work.

      Right now I’m working on a voice assistant for my house that can read all my MQTT data and give status reports. It’s neat when you get it running, and fun to tweak with prompts to see what it can do. TinyLlama can’t seem to reliably handle MQTT and JSON, but slightly smarter models manage it with ease.
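      One trick that helps the small models: flatten the JSON payload into plain key/value text before it goes into the prompt. A minimal sketch of that step, assuming sensor payloads arrive as JSON on topics like home/livingroom/climate (the actual MQTT subscription, e.g. via paho-mqtt, is omitted):

```python
import json

# Sketch of the prompt-building step for an MQTT status assistant.
# Topic name and payload fields are made-up examples.
def build_status_prompt(topic: str, payload_json: str) -> str:
    data = json.loads(payload_json)
    # Flatten JSON into plain "key: value" lines; tiny models like
    # TinyLlama cope better with flat text than with raw JSON.
    lines = [f"{k}: {v}" for k, v in data.items()]
    return (
        f"Sensor report from {topic}:\n"
        + "\n".join(lines)
        + "\nSummarize the state of this room in one sentence."
    )

prompt = build_status_prompt(
    "home/livingroom/climate",
    '{"temperature_c": 21.5, "humidity_pct": 40}',
)
print(prompt)
```

      The resulting plain-text prompt is what you hand to the model; keeping the JSON parsing on the Python side rather than asking the model to do it is exactly why the smarter-model gap shrinks.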