It's not always easy to distinguish between existentialism and a bad mood.

  • 0 Posts
  • 88 Comments
Joined 2 years ago
Cake day: July 2nd, 2023

  • Also, Yud's kink is literally rape¹, isn't it? Role-playing non-consensual situations is fine and all, but this is a subculture where reporting sexual harassment is considered a possible infohazard², and surely the utilitarian calculus is on the side of letting rationalists who do important work on existential risks have a go at you; imagine how many multiplujillion far-future virtual entities of minimum moral status that might save.

    Fuck a cult.

    1. He's openly declared himself a sexual sadist and writes stuff like this, and also math pets.

    2. Occupational Infohazards


    In this case, in Ziz's previous interactions with central community leaders, these leaders encouraged Ziz to seriously consider that, for various reasons including Ziz's willingness to reveal information (in particular about the statutory rapes alleged by miricult.com in possible worlds where they actually happened), she is likely to be "net negative" as a person impacting the future. An implication is that, if she does not seriously consider whether certain ideas that might have negative effects if spread (including reputational effects) are "infohazards", Ziz is irresponsibly endangering the entire future, which contains truly gigantic numbers of potential people.

  • Before focusing on AI he was going off about what he called the rot economy, which also had legs and seemed to be in line with Doctorow's enshittification concept. Applying the same purity standard to that would mean we should be suspicious if he had ever worked with a listed company at all.

    Still, I get how his writing may feel inauthentic to some. Personally I get preacher vibes from him, and he often does a cyclical repetition of his points as the article progresses, which to me sometimes came off as arguing via browbeating; also, I've had just about enough of reading performatively angry internet writers.

    Still, he must be getting better, or at least coming up with more interesting material, since lately I've been managing to read his articles all the way through.

  • And GPT-4.5 is terrible for coding, relatively speaking, with an October 2023 knowledge cutoff that may leave out knowledge about updates to development frameworks.

    This is in no way specific to GPT-4.5, but it remains a weirdly under-mentioned albatross around the neck of the entire LLM code-guessing field, probably because the less you know about what you told it to generate, the likelier you are to think it's doing a good job; and the enthusiastically satisfied customer reviews on social media that I've interacted with certainly seemed to skew toward the less-you-know types.

    Even when an up-to-date version was released before the cutoff point, you are probably out of luck, since the newer version is likely way underrepresented in the training data compared to the previous versions that people may have been using for years by that point.

  • The surface claim seems to be the opposite, he says that because of Moore's law AI rates will soon be at least 10x cheaper and because of Mercury in retrograde this will cause usage to increase muchly. I read that as meaning we should expect to see chatbots pushed in even more places they shouldn't be even though their capabilities have already stagnated as per observation one.

    1. The cost to use a given level of AI falls about 10x every 12 months, and lower prices lead to much more use. You can see this in the token cost from GPT-4 in early 2023 to GPT-4o in mid-2024, where the price per token dropped about 150x in that time period. Moore's law changed the world at 2x every 18 months; this is unbelievably stronger.
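    For reference, the three rates in the quote above cover different time spans, so they're easier to compare on a common 12-month basis. This is just a back-of-the-envelope sketch; the `annualized` helper is made up for illustration, and the input figures (150x over ~18 months, 10x per 12 months, Moore's 2x per 18 months) are taken straight from the quote, not independently verified:

    ```python
    # Convert a multiplicative change over some number of months into the
    # equivalent per-12-month multiplier: rate_per_year = factor ** (12 / months).

    def annualized(factor: float, months: float) -> float:
        """Per-12-month multiplier equivalent to `factor` over `months` months."""
        return factor ** (12 / months)

    # Figures from the quote (GPT-4 early 2023 -> GPT-4o mid-2024, ~18 months):
    observed_drop = annualized(150, 18)  # ~28x per year
    claimed_rate  = annualized(10, 12)   # 10x per year, as stated
    moores_law    = annualized(2, 18)    # ~1.59x per year

    print(f"{observed_drop:.1f}x vs {claimed_rate:.1f}x vs {moores_law:.2f}x")
    ```

    On these numbers the GPT-4 to GPT-4o price drop works out to roughly 28x per year, so the quote's own headline example runs well ahead of its stated 10x-per-12-months trend.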