

Anecdotally, greek <-> english stuff seems to be deteriorating also.
It’s not always easy to distinguish between existentialism and a bad mood.
Maybe non-judgemental chatbots are a feature only at the higher paid tiers.
it’s rather hilarious that the service is the one throwing the brakes on. I wonder if it’s done because of public pushback, or because of some internal limiter that kicks in when the synthesis drops below some certainty threshold. still funny tho
Haven’t used cursor, but I don’t see why an LLM wouldn’t just randomly do that.
That’s the second model announcement in a row by the major LLM vendor where the supposed advantage over the current state of the art is presented as… better vibes. He actually doesn’t even call the output good, just successfully metafictional.
Meanwhile over at anthropic Dario just declared that we’re about 12 months away from all written computer code being AI generated, and 90% of all code by the summer.
This is not a serious industry.
Huggingface cofounder pushes against LLM hype, really softly. Not especially worth reading except to wonder if high-profile skepticism pieces indicate a vibe shift that can’t come soon enough. On the plus side it’s kind of short.
The gist is that you can’t go from a text synthesizer to superintelligence, framed as: a straight-A student who’s really good at learning the curriculum at the teacher’s direction can’t really be extrapolated into an Einstein-type, think-outside-the-box genius.
The word “hallucination” never appears once in the text.
New ultimate grift dropped: Ilya Sutskever gets $2B in VC funding and promises his company won’t release anything until ASI is achieved internally.
Before focusing on AI he was going off about what he called the rot economy, which also had legs and seemed to be in line with Doctorow’s enshittification concept. Applying the same purity standard to that would mean we should be suspicious if he ever worked with a listed company at all.
Still, I get how his writing may feel inauthentic to some. Personally I get preacher vibes from him: he often cycles back through his points as the article progresses, which sometimes came off to me as arguing via browbeating, and I’ve had just about enough of reading performatively angry internet writers.
Still, he must be getting better or at least coming up with more interesting material, since lately I’ve been managing to read them all the way through.
What else though, is he being secretly funded by the cabal to make convolutional neural networks great again?
That he found his niche and is trying to make the most of it seems by far the most parsimonious explanation, and the heaps of manure he unloads weekly on both the LLM business and its practices surely can’t be helping DoNotPay’s bottom line.
i think yud at some point claimed this (preventing the robot devil from developing alignment countermeasures) as a reason his EA-bankrolled think tanks don’t really publish any papers, but my brain is too spongy to currently verify, as it was probably just some tweet.
I don’t think him having previously done unspecified PR work for companies that include alleged AI startups is the smoking gun that mastopost is presenting it as.
Going through a Zitron long form article and leaving with the impression that he’s playing favorites between AI companies seems like a major failure of reading comprehension.
It’s adorable how they let the alignment people still think they matter.
Should be noted that it’s mutual, Hanania has gone to great lengths to suck up to siskind, going back to at least the designer mouth bacteria thing.
And GPT-4.5 is terrible for coding, relatively speaking, with an October 2023 knowledge cutoff that may leave out updates to development frameworks.
This is in no way specific to GPT-4.5, but it remains a weirdly undermentioned albatross around the neck of the entire LLM code-guessing field, probably because the less you know about what you told it to generate, the likelier you are to think it’s doing a good job; the enthusiastically satisfied customer reviews on social media that I’ve interacted with certainly seemed to skew toward the less-you-know types.
Even when the up-to-date version was released before the cutoff point you are probably out of luck, since the newer version is likely way underrepresented in the training data compared to the previous versions that people may have been using for years by that point.
Nothing in my experience with LLMs or my reading of the literature has ever led me to believe that prompting one to numerically rate something and treating the result as meaningful would be a productive use of someone’s time.
Still occasionally think about that bit in the o1 white paper where the openai researchers innocuously pose the question: what if our benchmarks for detecting hallucinations are shit, actually? Wouldn’t that be something.
Implicitly assuming that the technology to terraform Mars is just around the corner is the “we’ll become profitable once we hit AGI” of space exploration.
In today’s ACX comment spotlight, Elon-anons urge each other to trust the plan:
Just had a weird thought. Say you’re an eccentric almost-trillionaire, richest person in history. You have a boyhood dream you cannot shake: get to Mars. As much as you’ve accomplished, this goal still eludes you. You come to the conclusion that only a nation-state, one of the big ones, can accomplish this.
Wouldn’t co-opting a superpower nation-state be your next move?
Could also be “don’t worry about DeepSeek” type messaging that addresses concerns without naming names, to tell us that a drastic reduction in infrastructure costs was foretold by the writing of St Moore and was thus always inevitable on the way to immanentizing the AGI, ἀλληλούϊα (hallelujah).
It’s like you founded a combination of an employment office and a cult temple, where the job seekers aren’t expected or required to join the cult, but the rites are still performed in the waiting room in public view.
chef’s kiss
The surface claim seems to be the opposite: he says that because of Moore’s law AI rates will soon be at least 10x cheaper, and because of Mercury in retrograde this will cause usage to increase muchly. I read that as meaning we should expect to see chatbots pushed into even more places they shouldn’t be, even though their capabilities have already stagnated, as per observation one.
- The cost to use a given level of AI falls about 10x every 12 months, and lower prices lead to much more use. You can see this in the token cost from GPT-4 in early 2023 to GPT-4o in mid-2024, where the price per token dropped about 150x in that time period. Moore’s law changed the world at 2x every 18 months; this is unbelievably stronger.
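Putting the quoted rates on a common yearly scale makes the comparison easier to eyeball. A minimal back-of-the-envelope sketch follows; the ~18-month gap between GPT-4 (early 2023) and GPT-4o (mid-2024) pricing is my assumption, as is the `annualized` helper:

```python
# Normalize each quoted improvement rate to an equivalent per-12-month factor.
# Assumption (mine): ~18 months between GPT-4 and GPT-4o pricing.

def annualized(factor: float, months: float) -> float:
    """Convert 'factor-x improvement over N months' into a per-12-month factor."""
    return factor ** (12 / months)

claims = {
    "stated trend (10x per 12 mo)": annualized(10, 12),
    "GPT-4 -> GPT-4o (150x per ~18 mo)": annualized(150, 18),
    "Moore's law (2x per 18 mo)": annualized(2, 18),
}

for label, per_year in claims.items():
    print(f"{label}: ~{per_year:.1f}x per year")

# stated trend (10x per 12 mo): ~10.0x per year
# GPT-4 -> GPT-4o (150x per ~18 mo): ~28.2x per year
# Moore's law (2x per 18 mo): ~1.6x per year
```

By the post’s own example the 150x drop annualizes to roughly 28x per year, which overshoots the stated 10x trend considerably; either the trend figure is conservative or the GPT-4 to GPT-4o repricing was a one-off cliff rather than a steady curve.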
Also, Yud’s kink is literally rape[1], isn’t it? Role-playing non-consensual situations is fine and all, but this is a subculture where reporting sexual harassment is considered a possible infohazard[2], and surely the utilitarian calculus is on the side of letting rationalists who do important work on existential risks have a go at you; imagine how many multiplujillion far-future virtual entities of minimum moral status that might save.
Fuck a cult.
[1] He’s openly declared himself a sexual sadist and writes stuff like this, and also math pets.
[2] Occupational Infohazards