Instructions here: https://github.com/ghobs91/Self-GPT
If you’ve ever wanted a ChatGPT-style assistant but fully self-hosted and open source, Self-GPT is a handy script that bundles Open WebUI (chat interface front end) with Ollama (LLM backend).
- Privacy & Control: Unlike ChatGPT, everything runs locally, so your data stays on your own machine.
- Cost: Once set up, self-hosting avoids monthly subscription fees. You’ll need decent hardware (ideally a GPU), but there’s a range of model sizes to fit different setups.
- Flexibility: Open WebUI and Ollama support multiple models and let you switch between them easily, so you’re not locked into one provider.
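To give a concrete sense of what's under the hood: Open WebUI is just a front end talking to Ollama's local REST API. Here's a minimal sketch that lists the models you've pulled, assuming a default Ollama install listening on localhost:11434 (adjust the host/port if yours differ):

```python
import requests

# Default local Ollama API endpoint (change if you remapped the port)
OLLAMA_URL = "http://localhost:11434"

# GET /api/tags returns the models currently pulled on this machine --
# the same entries Open WebUI shows in its model picker.
resp = requests.get(f"{OLLAMA_URL}/api/tags", timeout=10)
resp.raise_for_status()

for model in resp.json().get("models", []):
    # Each entry includes the model tag and its size on disk (in bytes)
    print(f"{model['name']}: {model['size'] / 1e9:.1f} GB")
```

Switching models is then just picking a different tag in the UI (or passing a different model name to the API), so trying out several sizes on your hardware is cheap.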
Thanks! I actually picked up the concept of a context window, and from there how to create a modelfile, through one of the links provided earlier, and it has made a huge difference. In your experience, would a small model like llama3.2 with a bigger context window be able to provide the same output as a big model, like qwen2.5:14b, with a more limited window? The bigger window obviously allows more data to be taken into account, but how much does model size matter by comparison?
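For reference, this is roughly what I'm doing now: the modelfile just sets `PARAMETER num_ctx` to a larger value, and the same option can also be passed per request through Ollama's API. A rough sketch of the comparison I have in mind (the model tags and the 16k/4k values are just examples, and the long document is a placeholder):

```python
import requests

OLLAMA_URL = "http://localhost:11434"  # default local Ollama endpoint

def ask(model: str, prompt: str, num_ctx: int) -> str:
    """Send one prompt to a local Ollama model with an explicit context window."""
    resp = requests.post(
        f"{OLLAMA_URL}/api/generate",
        json={
            "model": model,
            "prompt": prompt,
            "stream": False,
            # num_ctx overrides the default context window for this request;
            # bigger windows need noticeably more RAM/VRAM.
            "options": {"num_ctx": num_ctx},
        },
        timeout=600,
    )
    resp.raise_for_status()
    return resp.json()["response"]

long_document = "..."  # placeholder for whatever context is being stuffed into the prompt

# Small model with a large window vs. bigger model with a smaller one
print(ask("llama3.2", f"Summarise:\n{long_document}", num_ctx=16384))
print(ask("qwen2.5:14b", f"Summarise:\n{long_document}", num_ctx=4096))
```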