The ability to leverage prior knowledge from pre-trained models is a significant advantage in developing robotics and AI systems.
The future of robotics will require improvements in inference speed, context length, and model size to match human capabilities, but these factors are interdependent and challenging to balance.
The capability overhang in AI is immense; many people are unaware of how much more capable models like Codex are compared to ChatGPT.
AI models still struggle with tasks requiring long-term memory and context, which are essential for many human jobs.
These generalist models are becoming good enough to perform most standard human tasks, demonstrating economies of scope in AI development.
The future of AI may involve a single model capable of both knowledge work and physical tasks, leveraging the benefits of co-training on diverse data sources.
The current robotics models have limited context, but this is not necessarily a disadvantage for well-rehearsed tasks that rely on ingrained skills rather than extensive memory.
Humans and AI are not perfectly substitutable; humans can learn and adapt over time, which current AI models cannot.
The main missing capability in AI is continual learning, which allows humans to build context and improve over time.