I can’t see it happening tbh, but the US government has discussed putting restrictions on AI development, and I think OpenAI or some other companies actually asked them to!? And there were shorts/reels of high-profile developers hyping up the fact that “we don’t know what we’re doing”, and one of them quit his job. So why all that hype? Is the “Matrix” route actually a possible future?
I dunno. Obviously individual LLMs are basically sophisticated parrots and are unlikely to develop into AGI on their own. However, a lot of work is being done on combining multiple specialized LLMs. As unlikely as it is for direct LLM improvement to lead to true AI, I think it’s not terribly unlikely that some particular assemblage of many specialized LLMs could achieve the complexity necessary for AGI.
Sure, but at what energy cost? Compared to a human brain it would be incredibly energy-inefficient.