To be honest, the explanation given in the screenshot makes sense. Whilst it’s frustrating, if the mods have had past problems with arguments over OSes (and there are dedicated subs for them), I can somewhat understand the reason for the rule.
There are already talk-to-your-dog/cat products such as FluentPet. Probably the biggest issue with cats in particular is that their “vocabulary” is quite limited (usually less than a dozen distinct “meows”), but some of the FluentPet users (examples on YouTube such as BilliSpeaks) seem to suggest basic reasoning. A full-blown language is beyond them, but they do seem capable of understanding more concepts than we give them credit for.
The ultimate goal is to speak dolphin, if indeed there is such a language. The pursuit of this goal has led WDP to create a massive, meticulously labeled data set, which Google says is perfect for analysis with generative AI.
So they’re aiming for a real-life version of SeaQuest DSV? Considering Season 1 was set in 2018-2019, we’re 7 years behind schedule so far…
One can only conclude that either this is the latest step in a deliberate effort to sabotage the functioning of the US (and by extension much of the west), or just another monumentally stupid idea brought to life by their limitless incompetence.
Ouch, that’s going to hurt. I completely understand why, but still…
The irony is that, according to the article, it already does. What is changing is that the LLM will be able to use more of that data:
OpenAI is rolling out a new update to ChatGPT’s memory that allows the bot to access the contents of all of your previous chats. The idea is that by pulling from your past conversations, ChatGPT will be able to offer more relevant results to your questions, queries, and overall discussions.
ChatGPT’s memory feature is a little over a year old at this point, but its function has been much more limited than the update OpenAI is rolling out today… Previously, the bot stored those data points in a bank of “saved memories.” You could access this memory bank at any time and see what the bot had stored based on your conversations… However, it wasn’t perfect, and couldn’t naturally pull from past conversations, as a feature like “memory” might imply.
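To make the distinction concrete, here is a purely hypothetical sketch (this is not OpenAI’s implementation, and the class and method names are made up for illustration): the old behaviour is like a small bank of explicitly saved facts, while the new behaviour is like also searching your full chat history for relevant context before answering.

```python
# Hypothetical illustration only -- not OpenAI's actual implementation.
# "Saved memories" = a small, explicit list of stored facts;
# full-history memory = also pulling relevant snippets from every past chat.
from dataclasses import dataclass, field


@dataclass
class ChatMemory:
    saved_memories: list[str] = field(default_factory=list)      # old-style memory bank
    past_conversations: list[str] = field(default_factory=list)  # full chat history

    def old_context(self) -> list[str]:
        # Previously: only the explicitly saved facts were available to the bot.
        return self.saved_memories

    def new_context(self, query: str) -> list[str]:
        # Now: relevant snippets are also pulled from all prior conversations.
        relevant = [c for c in self.past_conversations if query.lower() in c.lower()]
        return self.saved_memories + relevant


memory = ChatMemory(
    saved_memories=["User prefers metric units"],
    past_conversations=["We discussed a Linux migration last week", "Talked about cats"],
)
print(memory.new_context("linux"))  # saved facts plus the matching past conversation
```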
Because any tech has privacy risks?
We’re no longer living in a world where the only major cybersecurity threats are CCP-backed tech and Russian hackers.
Precisely what I am doing. I have too many devices that still do what I need to simply ditch them just because Windows 10 is EOL. I’m a bit over half-way through my migration (still have a few programs to sort out - I may have to run a W10 VM for a couple of them, as they don’t work under WINE and there is no Linux equivalent).
Music to my ears.
Why? LLMs are built by training machine learning models on vast amounts of text data; essentially, the model looks for patterns in that data. We’ve seen this repeatedly with other behaviour from LLMs regarding race and gender, highlighting the underlying bias in the dataset. This would be no different, unless you’re disputing that there is a possible correlation between bad code and fascist/racist/sexist tendencies?
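A toy illustration of the point (this is a made-up co-occurrence count over invented strings, not how GPT-style models are actually trained): a purely statistical learner will associate whatever attributes co-occur frequently in its training text, whether or not the correlation is meaningful.

```python
# Toy sketch: tokens that frequently appear together in the training text
# end up statistically "associated" -- any bias in the data carries over.
from collections import Counter
from itertools import combinations

corpus = [
    "insecure code weak encryption backdoor",
    "backdoor mandate weak encryption policy",
    "secure code strong encryption audit",
    "insecure code backdoor policy",
]

pair_counts = Counter()
for line in corpus:
    tokens = set(line.split())
    for a, b in combinations(sorted(tokens), 2):
        pair_counts[(a, b)] += 1

# The most frequent pairs are the strongest learned associations.
for pair, count in pair_counts.most_common(3):
    print(pair, count)
```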
Notepadqq seems to be catching up to Notepad++. In my case the feature that I was sorely missing was the function list, as I am not a heavy macro/plugin user.
Just for the benefit of other readers, Notepadqq is one of the alternatives for Linux. However, there are a few features I really wanted from Notepad++, so I have installed it using WINE. No problems there. Hopefully some day we’ll see a Linux release.
This makes me suspect that the LLM has noticed the pattern between fascist tendencies and poor cybersecurity, e.g. right-wing parties undermining encryption, most of the things Musk does, etc.
Here in Australia, the more conservative of the two larger parties has consistently undermined privacy and cybersecurity by implementing policies such as collection of metadata, mandated government backdoors/ability to break encryption, etc. and they are slowly getting more authoritarian (or it’s becoming more obvious).
Stands to reason that the LLM, with such a huge dataset at its disposal, might more readily pick up on these correlations than a human does.
Honestly, I can’t blame them.
Makes me wonder if this was Musk’s plan all along - make Starlink indispensable and then leverage it against Ukraine when the time came.
Agreed. I buy physical versions wherever possible. Plus video and audio are generally higher quality than streaming/digital purchases.
This pretty much proves that the US government is experiencing its worst cybersecurity breach ever.
See also https://lemmy.world/post/25293137
“‘Who controls the past,’ ran the Party slogan, ‘controls the future: who controls the present controls the past.’”
George Orwell, 1984.
In a sane world, this lawsuit would be laughed out of court.
The full study for those interested: https://www.researchgate.net/publication/354152560_The_Psychology_of_Online_Political_Hostility_A_Comprehensive_Cross-National_Test_of_the_Mismatch_Hypothesis