Even as a toy language, if I can't tell what it's doing beyond interfacing with an LLM prompt… what good is it?
Consistency and validity of output are essentially impossible to prove, because this combines two sources of error: humans, who are famously bad at explaining their problems to machines, and models that only understand maybe 80% of what they're told.