I do use all the ML alternatives, but engagement is notably lower. I almost wish LW would just bite the bullet and defederate from ML.
Yeah, maybe more like top 100 for hexbear. I am on mobile too.
Where does it say “Lemmy doesn’t exist”? The admins of the site are well within their rights to curate which services they include. I say this as someone who uses Lemmy a lot and really wants there to be a non-corporate, competition-focused alternative (instances, UIs) to reddit specifically and to oligarch-run social networks in general.
I don’t understand how “censorship” plays into this (beyond shallow polemical grandstanding). Where is the censorship?
Still many top tech communities (in their niche) are on ML. Open source, Linux, Privacy, Raspberry Pi, Firefox come to mind.
Several hexbear communities are also in the top 50.
How is this censorship though?
You can always start joinfediversefreespeechstan.io or whatever. The code is even available, no?
I could never understand the American-style preference for “free speech” themed theatrics.
Excuse me?
Point 1 and 2 really need to be addressed.
It would be so much better if lemmy wasn’t developed by genocide white-washing tankies.
True that. Yes, it looks like total MAU could be a lot higher, although it does seem to be in the same ballpark as Lemmy’s 45K.
Too bad MAUs are a mere 13K, even less than Lemmy with its rather modest 45K MAUs.
But then again, I personally don’t see any better competition to oligarch-run social networks.
I originally stated that I did not find your arguments convincing. I wasn’t talking about AI safety as a general concept, but about the overall discussion of the article titled “Anthropic, Apollo astounded to find a chatbot will lie to you if you tell it to lie to you”.
I didn’t find your initial post (or any of your posts in that thread) to be explicit in recognizing the potential for bad-faith actions from the likes of Anthropic and Apollo. On the contrary, you largely deny the concept of “criti-hype”. One can, in good faith, interpret this as de facto corporate PR promotion (whether that was intentional or not).
You didn’t mention the hypothetical profit maximization example in the thread and your phrasing implied a current tool/service/framework, not a hypothetical.
I don’t see how the YT video or the article summary (I did not read the paper) is honestly relevant to what was being discussed.
I am honestly trying not to take sides (but perhaps I am failing at this?), more like suggesting that how people interpret “groupthink” can take many forms and that “counter-contrarian” arguments in and of themselves are not some magical silver bullet.
That’s not what we are discussing though. We are discussing whether awful.systems was right or wrong in banning you. Below is the title of your post:
Instance banned from awful.systems for debating the groupthink
I will note that I don’t think they should be this casual with giving out bans. A warning to start with would have been fine.
An argument can be made that you went into awful.systems with your own brand of groupthink; specifically, a complete rejection of even the possibility that we are dealing with bad-faith actors. Whether you like it or not, this is relevant to any discussion of “AI safety” more broadly and that thread specifically (as the focus of the linked article was on Apollo Research and Anthropic and on AI Doomerism as a grifting strategy).
You then go on to cite a YT video by “Robert Miles AI Safety”; this is a red flag. You also claim that you can’t (or don’t want to) provide a brief explanation of your argument and instead defer to the YT video. This is another red flag. It is reasonable to expect a 2-3 sentence overview if you actually have some knowledge of the issue. This is not some sort of bad-faith request.
Further on you start talking about “Dunning-Kruger effect” and “deeper understanding [that YT fellow has]”. If you know the YT fellow has a deeper understanding of the issue, why can’t you explain in layman terms why this is the case?
I did watch the video and it has nothing to do with the grifting approaches used by AI companies. The video is focused on explaining a relatively technical concept for non-specialists (not AI safety more broadly in the context of real-world use).
Further on you talk about non-LLM ML/AI safety issues without any sort of explanation of what you are referring to. Can you please let us know what you are referring to (I am genuinely curious)?
You cite a paper; can you provide a brief summary of the findings and why they are relevant to a skeptical interpretation of “AI safety” messaging from organizations like Apollo Research and Anthropic?
Could be :)
To be fair I was replying to a thread that said LW/ML are equal and something about fascism in LW.
I just don’t think they are as problematic as you imply. Are there issues? Sure (I have my own complaints), but generally those communities seem somewhat usable.
It’s been a while since I’ve been/lived in the US (I do have close friends who lived there though), but I disagree. It seemed like a general social issue that crosses all demographic segments.
I don’t really see it. A lot of the posts in that community don’t even explicitly state what community is being discussed.
Some of the stuff is legitimate, some of the stuff feels more like bitching.
I don’t really understand how you can claim LW is the worst considering that on ML you get instance-banned for opposing the Russian invasion of Ukraine or having a critical attitude towards China.
Major tech communities honestly need to move off ML.
I am not sure if I read the correct thread, but I personally didn’t find your arguments convincing, although I think a full ban is excessive (at least initially).
Keep in mind that I do use a local LLM (as an elaborate spell-checker) and I am a regular user of ML-based video upscaling (I am a fan of niche 80s/90s b-movies).
Forget the technical arguments for a second and look at the socio-economic component behind US-style VC groups, AI companies, and US technology companies in general (other companies are a separate discussion).
It is not unreasonable to believe that the people involved (especially the leadership) in the abovementioned organizations are deeply corrupt and largely incapable of honesty or even humanity [1]. It is a controversial take (by US standards) but not without precedent in the global context. In many countries, if you try and argue that some local oligarch is acting in good faith, people will assume you are trying (and failing) to practise a standup comedy routine.
If you do hold a critical attitude and don’t buy into tedious PR about “changing the world”, it is reasonable to assume that irrespective of the validity of “AI safety” as a technical concept, the actors involved would lie about it. And even if the concept were valid, it is likely they would leverage it for PR while ignoring any actual academic concepts behind “AI safety” (if those do exist).
One could even argue that your argumentation approach is an example of provincialism, groupthink and generally bad faith.
I am not saying you have to agree with me, I am more trying to show a different perspective.
[1] I can provide some of my favourite examples if you like; I don’t want to make this reply any longer.
How so?
Will need to try this. Grabbing the thumbnail via an external site is annoying.
Instances running 0.19.5 (I think) allow manually adding a thumbnail.
I extract them from https://youtube-thumbnail-grabber.com/ and manually add them in.
Really annoying.
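For what it’s worth, you can skip the grabber site entirely: YouTube serves thumbnails at a well-known fixed URL pattern under img.youtube.com, keyed by the video ID. A small sketch (the function name and quality default are my own choices; `maxresdefault` isn’t available for every video, in which case `hqdefault` usually is):

```python
# Build a YouTube thumbnail URL directly from a video link, instead of
# going through an external grabber site.
from urllib.parse import urlparse, parse_qs

def youtube_thumbnail(url: str, quality: str = "hqdefault") -> str:
    """Return the img.youtube.com thumbnail URL for a youtube.com or youtu.be link."""
    parsed = urlparse(url)
    if parsed.hostname == "youtu.be":
        # Short links carry the video ID in the path: youtu.be/<id>
        video_id = parsed.path.lstrip("/")
    else:
        # Standard links carry it in the query string: watch?v=<id>
        video_id = parse_qs(parsed.query).get("v", [""])[0]
    if not video_id:
        raise ValueError(f"could not find a video id in {url!r}")
    return f"https://img.youtube.com/vi/{video_id}/{quality}.jpg"

print(youtube_thumbnail("https://www.youtube.com/watch?v=dQw4w9WgXcQ"))
# https://img.youtube.com/vi/dQw4w9WgXcQ/hqdefault.jpg
```

You can then paste the resulting URL into the thumbnail field that 0.19.5+ instances expose.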
I have not, no. There are still some technology communities that are only present on ML. Outside of those, I do not interact with ML.
And what’s with your prima donna attitude? What exactly is the problem with calling out an instance run by genocide white-washing tankie scum?