

As Wikipedia lists him as a founder, I think it’s ok for me to call him that as well. But of course, you can insist on the loooooong explanation that he founded a company that merged with another company, and the merged one finally became PayPal.
Anyone else thinking of “WarGames”?
No?
Just me?
Isn’t that bloody scary, especially after the news that ChatGPT turned Nazi and wanted to enslave humans? Researchers puzzled by AI that praises Nazis after training on insecure code
Please have a look at the listed founders of PayPal: PayPal (Wikipedia)
Yes, you are right, not anymore. I still don’t trust it, though, as it was founded not only by Peter Thiel but also by Elon Musk.
PayPal blocks politically controversial accounts, such as some alternative media outlets, cryptocurrency platforms, and activists. Whistleblower organizations like WikiLeaks have also been blocked and had their funds frozen.
For these reasons I find a boycott completely justified.
I wish people would also boycott Zuckerberg’s products and Peter Thiel’s PayPal.
Sadly no-one can tell you that as it is your decision based on your morals and your beliefs. It’s a hard decision, one that I also had to make. The question is, what is harder and more painful: losing this friend or being friends with someone who is like this.
Wish you all the strength you need to get through this.
I don’t know exactly how much the fine-tuning contributed, but from what I’ve read, the insecure Python code was added to the training data and some fine-tuning was applied before the AI started acting “weird”.
Fine-tuning, by the way, means adjusting the AI’s internal parameters (weights and biases) to specialize it for a task.
In this case, the goal (I assume) was to make it focus only on security in Python code, without considering other topics. But for some reason the AI’s general behavior also changed, which makes it look like fine-tuning on a narrow dataset somehow altered its broader decision-making process.
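Since this thread keeps coming back to what fine-tuning actually does, here’s a minimal sketch of the mechanism in plain Python. It’s my own toy example with made-up numbers, nothing to do with the actual experiment: a tiny “model” with just one weight and one bias gets its existing parameters nudged by gradient steps on a small, narrow dataset.

```python
# Toy illustration of fine-tuning: start from "pretrained" parameters
# and nudge them with gradient steps on a narrow dataset.
# All values are invented for the example; a real LLM has billions of
# parameters, not two.

def predict(w, b, x):
    return w * x + b

def fine_tune(w, b, data, lr=0.1, epochs=200):
    """Adjust the parameters (weight and bias) to fit the new data."""
    for _ in range(epochs):
        for x, y in data:
            err = predict(w, b, x) - y  # prediction error on one sample
            w -= lr * err * x           # gradient step for the weight
            b -= lr * err               # gradient step for the bias
    return w, b

# "Pretrained" parameters (pretend they came from broad prior training).
w0, b0 = 0.5, 0.0

# A narrow fine-tuning dataset that follows y = 2x + 1.
narrow_data = [(1.0, 3.0), (2.0, 5.0), (3.0, 7.0)]

w1, b1 = fine_tune(w0, b0, narrow_data)
print(round(w1, 2), round(b1, 2))  # parameters drift toward 2.0 and 1.0
```

The point being: fine-tuning doesn’t add knowledge on top, it moves the parameters the model already has, which is (presumably) why a narrow dataset can shift behavior far outside its own topic.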
The “bad data” the AI was fed was just some Python code. Nothing political. The code had some security issues, but it wasn’t code that changed the foundations of the AI; it just added to the information the AI had access to.
So the AI wasn’t trained to be a “psychopathic Nazi”.
I’d like to know whether the faulty code material they fed to the AI would have had any impact without the fine-tuning.
And I’d also like to know whether the change of policy, the “alignment towards user preferences”, played a role in this. (Edited spelling)
Ever heard the saying, „Your freedom ends where someone else’s begins“?
Exactly. Don’t give them a platform
I’m fairly certain the next civil war will be caused by severe wealth disparity.
I wish I could just sit back and enjoy the circus from a safe distance, but the way this American dumpster fire affects the whole world just scares the hell out of me.
I’m not naive enough anymore for this kind of trust.
It certainly is the lesser evil though.
There’s a lot going on in the US that I never thought would happen and it just goes on and on and on. Every day I read something that scares me even more.
To me it’s not that absurd that open source projects could be affected. It wouldn’t be the first time they tried (the EARN IT Act, or how often they have tried to get backdoors into encrypted data: https://www.atlasobscura.com/articles/a-brief-history-of-the-nsa-attempting-to-insert-backdoors-into-encrypted-data )
To me it seems possible.
Yes, it’s open source, yes, it can be taken elsewhere and developed outside of the USA. It’s just that I’m extra cautious right now.
I agree with you, it’s not only the USA that is problematic, but currently the US is the country with the most power doing “shitty things”. That’s why it gets extra bonus points.
Just a couple of examples:
-Red Hat: developed by a U.S.-based company.
-Fedora: a community-driven project sponsored by Red Hat.
-Debian: originally founded in the U.S., with some legal ties to US regulations.
-Slackware: developed by Patrick Volkerding in the US.
Since these distributions are developed or registered in the United States, they are subject to US laws, regulations, and export restrictions.
When I have a look at what’s happening right now in the US, I’m not sure what kind of laws will suddenly appear that might affect the privacy and security of any kind of software from there. That’s why I decided to avoid them as much as possible.
I will certainly go through your suggestions and have a look if I should change stuff (apart from Proton, I’m sure about changing this one).
I listed the stuff I use and what I changed. There’s also a reason why I chose this specific Linux distro: I try to avoid software under US jurisdiction as much as I can, which means a lot of Linux distros are not an option anymore.
But that does not mean everyone needs to do the same. Do whatever you think is best.
The main issue I have right now: the jurisdiction of this is in the US, and to be honest, I don’t trust the US that much when it comes to privacy laws regarding the (near) future.
Fastmail: Privacy & Security Overview
+Encrypted storage & transit (TLS 1.3, Perfect Forward Secrecy).
+No ads, no data selling – user-funded.
+2FA & Passkey support for added security.
-Based in Australia – subject to laws like the Assistance and Access Act (2018).
-No built-in end-to-end encryption (E2EE) – requires third-party PGP/S/MIME.
https://www.fastmail.com/features/security
https://www.fastmail.com/policies/privacy
Good for privacy, but jurisdiction risks and the lack of E2EE make alternatives like Tuta (or Proton) a better choice.
Why old Facebook accounts still matter:
-Your past likes, groups, comments, and interactions are stored and can still be used for ad profiling or sold as part of larger datasets.
-If you once liked a brand or a political page, that interest could still be factored into long-term data models.
-If you have active friends, their interactions with your old profile (e.g. tagging you in old posts, mentioning you) can still keep your account relevant to Meta’s algorithms.
-Your friends may have synced their contacts with Facebook, meaning your email or phone number could still be in Meta’s database.
-If you’ve ever used “Log in with Facebook” for third-party apps, Meta can see when and where you log in.
-Even if you don’t actively sign in, Facebook cookies might still track you across other websites (depending on your browser settings).
-Advertisers may have access to archived data that gets combined with current trends.
-Your profile might be included in anonymized datasets used for AI training or market analysis.
That made me wonder, in regard to your question, how much Meta really makes out of Facebook accounts like yours.
Out of curiosity I asked Mistral how much an inactive Facebook account might generate daily. It estimated $0.005 but noted it could be even lower. Let’s take a careful guess at $0.001.
Ridiculously low, irrelevant, right?
Well, there are 3 billion Facebook users. Let’s assume Facebook earns $0.001 for each account, each day.
This would be 3 billion times $0.001, which equals $3,000,000. Daily!
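The napkin math, spelled out. Both inputs are assumptions from this thread (the $0.001/day is our guess based on a chatbot estimate, not a measured figure):

```python
# Back-of-the-envelope estimate; both inputs are assumptions,
# not measured figures.
users = 3_000_000_000            # roughly 3 billion Facebook accounts
usd_per_account_per_day = 0.001  # deliberately conservative guess

daily = users * usd_per_account_per_day
yearly = daily * 365

print(f"${daily:,.0f} per day")    # $3,000,000 per day
print(f"${yearly:,.0f} per year")  # $1,095,000,000 per year
```

So even at a tenth of the chatbot’s estimate, the “worthless” inactive accounts would add up to over a billion dollars a year.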
Links:
-The Electronic Frontier Foundation’s analysis of Facebook’s tracking technologies
-Privacy International’s report on how Facebook tracks users across devices
-The Tracking Exposed project which documents Facebook’s data collection methods
-ProPublica’s series on Facebook’s data practices
-The Washington Post’s investigation into Facebook’s privacy controls
-Wired’s coverage of how Facebook continues tracking after account deactivation