Ah shit, is this the timeline where privacy isn’t needed because people want better AI?
At least until Big Tech realizes that hallucinations in generative AI aren’t fixable and the whole stock market crashes.
Losing privacy for convenience has been happening for a while. We use GPS on our smartphones for better directions. We install listening devices to add things to shopping carts and to play music by voice. We install cloud security cameras at home. We accept free WiFi in stores, which gives them our cell phone info and our location. We use digital cash instead of physical cash. We buy things online rather than going to the store. Every device, even a toaster, has a MAC address.
People don’t need better AI. People are made to think they need better AI. Other than that, yeah.
I’m not going back to typing my grocery list manually. So I’m fucked in that part of my life.
I use “tasks” for that. When I prepare to go grocery shopping I go to my fridge, open the completed “shopping” task list and un-complete whatever is empty. I then complete them again in the mall. Of course the list is hosted on my own server.
You can also use a barcode scanner to automate the “done” action. You could also put in a timer to automatically mark things as not done if they need regular buying.
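The auto-reset idea above is easy to sketch. Here’s a minimal, hypothetical version (not any particular task app’s API): a task flips back to “open” once its restock interval has elapsed since you last completed it.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical sketch of the "auto-undone" trick: complete a task when you
# buy the item, and it reopens automatically after its restock interval.
@dataclass
class RecurringTask:
    name: str
    interval_s: float                  # how often the item needs re-buying
    done_at: Optional[float] = None    # timestamp of last completion

    def complete(self, now: float) -> None:
        self.done_at = now

    def refresh(self, now: float) -> None:
        # Reopen the task once the interval has elapsed.
        if self.done_at is not None and now - self.done_at >= self.interval_s:
            self.done_at = None

    @property
    def open(self) -> bool:
        return self.done_at is None

milk = RecurringTask("milk", interval_s=7 * 24 * 3600)  # weekly
milk.complete(now=0)
milk.refresh(now=3 * 24 * 3600)   # 3 days later: still done
assert not milk.open
milk.refresh(now=8 * 24 * 3600)   # 8 days later: back on the list
assert milk.open
```

A barcode scan would just call `complete()` with the current time; a periodic `refresh()` pass does the rest. No AI required, as the comment below says.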
In any case, using ai voice recognition for this is a massive waste of computing power for things that can be done by simple if else statements. Of course it is also a massive privacy invasion if you use big tech stuff for it.
A man of culture I see.
Selfhosted task list. There is where we stand united my friend.
Fuck. That’s genius. I type it out new every time. I’ll test it, thx!
Thanks. :) good luck!
I mean you could also write your grocery list per hand.
You don’t write your grocery list on a bit of paper stuck to the fridge…? I thought that was downright universal
It seems a very valid reason to give up your privacy.
You could just write it down like some ancient wizard. Maybe even on a scroll of some sort.
I wonder if having your own self-hosted local LLM would be better, then?
Get mental help I guess. It may help
so voice typing, huh? not really sure it matters at this point. I use an open-source keyboard, but my inputs go right into the OS of the world’s largest spy organization.
on the other hand… this is a great opportunity to hone your handwriting and memory skills.
from the video…
I think we need to be very cautious with the AI narrative, where we are being led to confuse mass surveillance with intelligence and, by doing so, initiate these corporate technologies into the core of our social and governmental institutions.
Did they mean “and by doing so insinuate,” I wonder? Initiate makes some sense too, just odd phrasing.
Anyway! I’m getting sidetracked lol! Haven’t even watched the video yet. Thanks for sharing the quote
there was a mental word search, glitch in the matrix moment right at that point - read into that what you will, cuz these days all options are valid.
“insinuate” is absolutely the very best word, but publicly one has to walk the fine line between complicity and hair-on-fire alarm, and so “initiate” came out of her mouth.
for the record, I think we are past the face-melting stage.
Yeah that’s relatable. It’s so easy to pick apart someone else’s words when you’re just passively observing, but when it’s you in the moment…
Why is Signal so reliant on Google? It’s not even in F-Droid
Signal uses Google push messaging, but it can fall back to a websocket connection. And officially it’s not on F-Droid because they don’t want forks of their app.
How would that prevent you from forking the app? F-droid isn’t a repository for the code of the app. I don’t think this is related at all.
I don’t actually know the reason why it’s not on F-Droid, but I assume it has some historical reason. It has never been on F-Droid, going back to TextSecure. Moxie Marlinspike was strictly against it afaik. If somebody has more detail on it, feel free to share.
Because today, if you find a Signal app on F-Droid, you can be sure it’s unofficial; on the Play Store only one app is allowed: the official one.
You’re not allowed to upload apps with proprietary blobs to F-Droid, and Google’s push service is a proprietary blob.
Yet the Molly fork supports UnifiedPush, so I can reuse my connection with my XMPP server to deliver notifications from a server I control. Folks have asked for UnifiedPush or MQTT as an alternative to having multiple persistent socket connections open on your device, but Signal doesn’t seem to care.
How did you get Unified Push working? Last time I checked it required dedicated server side software
There is a fork on F-Droid that isn’t reliant on Google push (it uses Unified Push) called Molly. I donate to both Molly and Signal.
They are vehemently against self hosting as well.
I’m guessing it’s not fully open source, but I don’t actually know.
It is fully open-source. The distribution of the application is completely unrelated. You can still read the code and verify the build you’re running.
The app is open source, but it includes Google push services, which are not.
I thought she made some very good points, but the quote in the title makes no sense to me.
I simply took the title of the video. :shrug:
I have yet to meet a person who gives a shit about AI. I have yet to meet a person who has intentionally used AI. It’s all marketing BS and a way to mine our data.
I don’t like the idea of LLMs everywhere, but I do use ChatGPT quite a lot as an entry point to any topic I might not even know exists yet.
Run a free model locally.
It isn’t.
I run it locally on my machine.
I had a classmate who was really ecstatic about AI, like he basically believed it’s the second coming of Christ. And then another one who was like “ooh look, I can use this to make neat wallpapers.” That was about all the resonance I got from my social circles.
I use LLMs just about every day. It’s better than web-search for certain things, and is useful for some coding tasks. I think they’re over-hyped by some people, but they are useful.
This isn’t entirely true. AI is usually trained on public data such as Wikipedia.
AI is a tool. How you use it is what matters.
OpenAI’s and DALL-E’s lawyers would like to use you as a witness at their 87 upcoming court hearings.
I self host so I don’t care
Like cracking passwords / encryption and injecting itself into anything and everything that connects to the internet?
That’s not AI
You can train AI to crack passwords/encryption lol. You do realize AI is being utilized for exactly that right at this moment, right? Simply put, the very first step is to eliminate its boundaries/guardrails, then proceed from there.
You can train AI to crack encryption
Oh do provide details.
PassGAN <3
encryption
It requires Deep Learning.
Deep learning could be used to attempt breaking encryption, but its effectiveness depends on factors such as the strength of the encryption algorithm and the key length. Deep learning, a subset of machine learning, involves training artificial neural networks to learn and make decisions.
AI algorithms, such as machine learning and deep learning, have the potential to automate cryptanalysis and make it more effective, thereby compromising the security of cryptographic systems.
Very interesting tip, preciate that.
@PassGAN
Instead of relying on manual password analysis, PassGAN uses a Generative Adversarial Network (GAN) to autonomously learn the distribution of real passwords from actual password leaks, and to generate high-quality password guesses. Our experiments show that this approach is very promising.
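To make the PassGAN idea concrete without a deep learning framework: the core trick is learning the character distribution of real leaked passwords and sampling plausible guesses from it. Here’s a toy stdlib-only stand-in using a character-bigram model; it is NOT PassGAN (which trains a real GAN on millions of leaked passwords), just an illustration of “learn the distribution, generate guesses.” The tiny `LEAK` list is made up.

```python
import random
from collections import defaultdict

# Toy illustration of the PassGAN idea (not the actual GAN): learn which
# character tends to follow which from a "leak", then sample new guesses.
LEAK = ["password1", "pass123", "letmein", "dragon99", "password99"]

START, END = "^", "$"
counts = defaultdict(list)
for pw in LEAK:
    chars = [START] + list(pw) + [END]
    for a, b in zip(chars, chars[1:]):
        counts[a].append(b)   # bigram counts: successors of each char

def guess(rng, max_len=12):
    out, cur = [], START
    while len(out) < max_len:
        cur = rng.choice(counts[cur])
        if cur == END:
            break
        out.append(cur)
    return "".join(out)

rng = random.Random(42)
candidates = [guess(rng) for _ in range(5)]
```

The generated strings look password-like ("passw…", "drag…" fragments) because they follow the leak’s statistics, which is exactly why such guesses crack far more accounts per attempt than uniform random strings.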
No you can’t, at least not in the way you think. You crack passwords by trying combinations, and AI/machine learning are bad at raw attempts.
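For context, “trying combinations” looks like this: hash each candidate and compare against the stolen digest. A minimal sketch with the stdlib (the salt, digest, and wordlist are made up; real tools like hashcat or John the Ripper do the same thing billions of times per second on GPUs — where an ML model can help is only in *ranking* which candidates to try first):

```python
import hashlib
from typing import Optional

# Minimal "raw attempts" cracking loop: hash every candidate with the same
# salt and scheme as the target, and compare digests.
def sha256_hex(salt: bytes, password: str) -> str:
    return hashlib.sha256(salt + password.encode()).hexdigest()

salt = b"s4lt"
stolen = sha256_hex(salt, "dragon99")  # pretend this leaked from a DB

def dictionary_attack(digest: str, wordlist: list) -> Optional[str]:
    for candidate in wordlist:
        if sha256_hex(salt, candidate) == digest:
            return candidate
    return None   # exhausted the list without a match

print(dictionary_attack(stolen, ["letmein", "password1", "dragon99"]))
# → dragon99
```

Note the model never “breaks” the hash; every guess still costs one full hash computation, which is the commenter’s point.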
Wikipedia requires attribution, which AI scrapers never give.
It is “public” work, but under a license.
Still public data
It’s also trained on data people reasonably expected would be private (private GitHub repos, Adobe Creative Cloud, etc.). Even if it were just public data, it could still be dangerous. E.g., it could be possible to give an LLM a prompt like “give me a list of climate activists, their addresses, and their employers” if it was trained on this data or was good at “browsing” on its own. That’s currently not possible due to the guardrails on most models, and I’m guessing they try to avoid training on personal data that’s public, but a government agency could make an LLM without these guardrails. That data could be public, but it would take a person quite a bit of work to track down, compared to the ease and efficiency of just asking an LLM.
What you are describing is highly specific to a particular AI model.
Indeed, reminded me of https://www.ftc.gov/business-guidance/blog/2023/05/luring-test-ai-engineering-consumer-trust which I now think of as emotional honey pots, like https://techcrunch.com/2024/06/18/former-snap-engineer-launches-butterflies-a-social-network-where-ais-and-humans-coexist/
How else is AI supposed to grow? It’s supposed to observe everything in existence.
Then maybe it shouldn’t grow.