A new architecture enabling continual learning would eliminate the need for a special training phase, allowing agents to learn on the fly like humans and animals.
Large language models (LLMs) are converging, with no lab building a sustainable technical advantage, because one AI's capabilities can be quickly reverse engineered by another.
The day an LLM can create a large software project without any babysitting will be a significant step towards AGI. However, creating new science is a much higher bar.
LLMs trained on pre-1915 physics would never have discovered the theory of relativity. Discovering new science requires going beyond the existing training set, something current LLM architectures can't do.
AGI is not yet achieved because current AI systems cannot learn and improve over time the way a human employee can, for example by learning a user's preferences or getting better at transcription with practice.
The future of AI development may involve LLMs advancing to a point where they can independently discover the next technological breakthroughs.
Robots can benefit from the same AI techniques used in LLMs, as they both leverage prior knowledge and abstract representations.
Continual learning is necessary for true AGI; it doesn't exist in current LLMs, but there may be straightforward ways to implement it (a toy sketch of the idea follows below).
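To make the "no special training phase" idea concrete, here is a minimal sketch, using a toy linear model and per-example SGD as illustrative stand-ins (none of these names come from the source, and this is not how anyone proposes to build AGI). The point is the loop structure: the model updates after every interaction during deployment, so it keeps adapting even when the world changes.

```python
# A minimal sketch of continual (online) learning. Assumptions:
# a toy linear model, squared-error loss, and a synthetic data
# stream whose true mapping drifts mid-stream. There is no
# separate train/deploy split; learning happens on the fly.
import numpy as np

rng = np.random.default_rng(0)

class OnlineLinearModel:
    """Learns continually via per-example SGD; never 'frozen'."""
    def __init__(self, dim: int, lr: float = 0.05):
        self.w = np.zeros(dim)
        self.lr = lr

    def predict(self, x: np.ndarray) -> float:
        return float(self.w @ x)

    def update(self, x: np.ndarray, y: float) -> None:
        # One gradient step on 0.5 * (w.x - y)^2 for this example.
        err = self.predict(x) - y
        self.w -= self.lr * err * x

# Simulated deployment: the environment changes at t=1000, and the
# model tracks the change without any retraining phase.
true_w = np.array([1.0, -2.0])
model = OnlineLinearModel(dim=2)
for t in range(2000):
    if t == 1000:
        true_w = np.array([-3.0, 0.5])  # the world shifts
    x = rng.normal(size=2)
    y = float(true_w @ x)
    model.update(x, y)  # learn from every interaction

print(model.w)  # ~[-3.0, 0.5]: adapted to the shift on the fly
```

A pre-trained-then-frozen model in the same loop would keep predicting with the stale weights after the shift; the contrast is the whole argument for continual learning in the takeaways above.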
Recursive self-improvement in LLMs is not possible without additional information. Even with multiple LLMs interacting, they can't generate new information beyond their training set.