Maybe I’m missing something, but has anyone actually justified this sort of “reasoning” by LLMs? Like, is there actually anything meaningfully different going on? Because it doesn’t seem distinguishable from asking a regular LLM to generate 20 paragraphs of AI fanfic pretending to reason about the original question, and the final result seems about as useful.
As the underlying tech seems to be based on neural networks, we can guarantee they are not thinking like this at all and are just writing fanfiction. (I love the “did I miscount” step. For the love of god, LLM, just use std::count.)
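For reference, assuming the counting task in question is something like counting letters in a word, std::count settles it deterministically in a couple of lines (a minimal sketch, not taken from the original thread):

    #include <algorithm>
    #include <iostream>
    #include <string>

    int main() {
        // The classic "how many r's in strawberry" question,
        // answered by counting characters rather than guessing.
        std::string word = "strawberry";
        auto n = std::count(word.begin(), word.end(), 'r');
        std::cout << "'r' appears " << n << " times in " << word << '\n';
        return 0;
    }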