People panned the game but this was the whole point of the NG Resonance character in Deus Ex Invisible War. People would basically tell this AI pop star intimate details about their lives and then the data was used against them and their local communities.
I always wondered though, how many of those people were aware that it was an AI and not the real NG Resonance? (Have to admit, she did get me in trouble with the Omar. My bad. 😅)
I think they knew and it didn’t matter to them which is oddly prophetic when I look at the state of the world today lol
The shit I do for a discount on non-PiezoChem functionality.
I thought this was a new Chuck Tingle novel.
Fooled again, it’s Chuck Testa.
He’s on social media. This could probably be arranged.
Pounded in the Butt by my Data Stealing AI Girlfriend
This is why you should always selfhost your AI girlfriend.
Nvidia™
You have to at least move out of your AI parents’ server rack before letting your AI girlfriend move in.
Woah there, I’m not sure I’m ready for that level of commitment.
I’d have to actually test my backup strategy
🤣
I ain’t got that kinda GPU to spare. If it’s the games or my AI girlfriend…
you don't need that much power. something like an RX 6600 XT / RTX 3060 / RX 580 is plenty
Is support for AMD cards better these days? Last time I checked it involved verifying ROCm compatibility, because CUDA needs Nvidia cards exclusively.
gpt4all worked out of the box for me
Now consider the number of normal people in the world who do not have a server rack in their closet, and how much they are about to be defrauded and blackmailed
- My Canadian Girlfriend
Broke, Busted, Burned Out
- My Canadian Server-Farm Girlfriend
Smart, Sexy, Superconductive
Let me go buy some milk.
I’d only date someone fully independent, so my AI joyfriend operates their own cloud cluster through a combination of a crypto wallet and findom
They sound fun
🤣
That’ll make you go blind
And give you hairy palms
You need a little gpu server farm for proper models & context sizes though. Single consumer gpus don’t have enough vram for that.
Might as well just pay for a prostitute
Locally hosted AI sucking down on our dick through usb plugged dildos. This is the future.
Yeah, I heard they’re also very privacy friendly.
Unless you piss them off
I was being sarcastic. If you pay for a prostitute you might as well pay for an AI service such as novelai.
I think my wife is cheating on me with my self hosted AI girlfriend’s boyfriend that lives in the same database file. What do I do?
Delete system32 obviously
Sounds like a job for Little Bobby Tables
For as long as everyone is using a virus checker, maybe you could try an open source relationship.
ROFL! There couldn't be a more hilarious headline! I love it!
Why don't these people do what perfectly mentally unhealthy people do and just abuse the super chat of their local VTuber?
At least the VTuber is a real human behind an anime figure.
damn someone else besides me figured this out I thought I was a genius or something
this is the title of my next light novel
Are there any Open Source girlfriends that we can download and compile?
A Flat Dolphin Maid that can cumpile. Does that count?
The bots (the actual girlfriends or whatever other characters) aren't the problem. You can find them on chub.ai for example, or write them yourself fairly easily. The issue is the software, and even more so the hardware.

You need something like the mentioned Kobold.cpp or oobabooga, and then you'd also need a trained LLM model, which you can get on huggingface.co (they'll be loaded within Kobold or oobabooga); this is already where it gets complicated. You also need to understand how context sizes work, because they need a lot, and I mean A LOT, of VRAM to work properly. Basically, the more VRAM you have, the better the contextual understanding (their memory) is. Otherwise you'd have a bot that maybe only knows how to contextualize the last couple of messages.

For paid services like novelai.net, your bots basically run on big-ass server farms with lots of GPUs that pool their VRAM and processing power, giving you "decent" context sizes (imo the greatest weak point of LLMs, and it is deeply rooted in how they work) and decent speed. NovelAI also supports front-ends like SillyTavern, which is great for local bot management and settings, regardless of whether you self-host or use a paid service (NOT EVERY PAID SERVICE HAS AN API FOR THIS! OpenAI's ChatGPT technically does too, but they do not allow NSFW content and can ban you for it if caught).

There's a bunch of "free" online services too, like janitorai.com, but most of them have slow speeds and the chat degrades significantly after just a few messages because they have low context sizes. The better / paid models suffer from this degradation too, but it's slower and less noticeable, at least at first. You can use those to get an idea of how LLMs work, though.

Edit: It should technically be self-explanatory / common sense, but I would advise not to share ANY personal information through online service chats that could identify you as a person!
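The VRAM-vs-context point above can be made concrete with some back-of-the-envelope arithmetic. Here's a rough sketch of how the KV cache grows with context length, assuming a Llama-2-7B-style architecture (32 layers, 32 KV heads, head dimension 128, fp16 cache entries); the numbers are illustrative assumptions, so check your own model's config:

```python
# Rough estimate of KV-cache VRAM growth with context length.
# Architecture numbers are for a Llama-2-7B-style model and are
# assumptions for illustration; real models (especially ones using
# grouped-query attention) will differ.

N_LAYERS = 32      # transformer layers
N_KV_HEADS = 32    # key/value heads (no GQA assumed)
HEAD_DIM = 128     # dimension per attention head
BYTES = 2          # fp16 cache entries

def kv_cache_bytes(context_tokens: int) -> int:
    # 2x for keys and values, per layer, per head, per token
    return 2 * N_LAYERS * N_KV_HEADS * HEAD_DIM * BYTES * context_tokens

for ctx in (2048, 4096, 8192):
    gib = kv_cache_bytes(ctx) / 2**30
    print(f"{ctx:>5} tokens -> {gib:.1f} GiB of KV cache")
```

And that's just the cache; the weights themselves (around 13 GiB in fp16 for a 7B model before quantisation) come on top, which is why single consumer GPUs run out of headroom fast.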
Basically, the more VRAM you have, the better the contextual understanding (their memory) is. Otherwise you'd have a bot that maybe only knows how to contextualize the last couple of messages.
Hmm, if only there was some hardware analogue for long-term memory.
What are you trying to say? Do you understand what the problem is?
Does it make it faster if the GPU has waifu stickers on it?
I don’t know, I’m not a weeb.
Define “it”
Because waifu stickers may indeed speed up “it” for some definition of “it”
Pretty easy to roll your own with Kobold.cpp and various open model weights found on HuggingFace.
I tried oobabooga and it basically always crashes when I try to generate anything, no matter what model I try. But honestly, as far as I can tell all the good models require absurd amounts of VRAM, much more than consumer cards have, so you'd need at least a small GPU server farm to reliably host them locally yourself. Unless of course you want practically nonexistent context sizes.
You’ll want to use a quantised model on your GPU. You could also use the CPU and offload some parts to the GPU with llama.cpp (an option in oobabooga). Llama.cpp models are in the GGUF format.
Also for an interface, I’d recommend KoboldLite for writing or assistant and SillyTavern for chat/RP.
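To put numbers on why a quantised GGUF model is the way to go on consumer hardware, here's a sketch of approximate model sizes at different precisions. The 7B parameter count is an assumed example, and the bits-per-weight figures are rough averages for common quant formats, not exact:

```python
# Approximate in-VRAM / on-disk size of a 7B-parameter model at
# different precisions. Bits-per-weight values are rough averages
# for common GGUF quant formats; treat them as illustrative.

PARAMS = 7_000_000_000  # assumed 7B model

FORMATS = {
    "fp16":   16.0,   # unquantised half precision
    "Q8_0":    8.5,   # ~8.5 bits/weight including scale factors
    "Q4_K_M":  4.8,   # popular quality/size trade-off
}

def model_gib(bits_per_weight: float) -> float:
    # bits -> bytes -> GiB
    return PARAMS * bits_per_weight / 8 / 2**30

for name, bpw in FORMATS.items():
    print(f"{name:>7}: ~{model_gib(bpw):.1f} GiB")
```

A ~4 GiB Q4 model fits on the mid-range cards mentioned upthread with room left over for some KV cache, and llama.cpp's layer-offload option lets you split whatever doesn't fit onto the CPU.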
Hey now, I don’t want anyone looking at my girlfriend’s source code. That’s personal!
I don’t want anyone looking at my girlfriend’s source code
it’s okay, dude, we all already did…
i second this request. please
24 thousand tracker calls in one minute.
That’s impressively gross.
Can we please stop posting Gizmodo shit posts? It’s just pure garbage
Hey Sugar, which of these pics have traffic lights on them?
bots gotta help each other
I didn’t realise they are that realistic.
If you don't self-host your girlfriend, it's just prostitution. /s
Indeed. She’s not really your girlfriend unless you own her completely and can control everything she thinks.
“AI” is funny anyway because you’re basically gaslighting them the whole time to have them behave as they’re supposed to.
And their whole purpose in return is to try to guess exactly what you want to hear from them.
AI is so human!
Like I could ever get an AI girlfriend. They’d never have me.
You need Lucy Lou bot.
It’s amazing the way you NOTICE TWO THINGS.