OpenAI's models have reached a point where roughly 40% of pull requests are checked in by AI, signaling a significant shift in how research engineering is done.
Scaling laws in AI hold true, but the distribution of training data must align with the desired outcomes for effective model performance.
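The scaling-law claim above can be illustrated with a minimal sketch. This assumes the common power-law form for loss as a function of training compute, L(C) = a · C^(−α) + L∞; the constants here are hypothetical placeholders, not fitted values from any real model family.

```python
# Hedged sketch of a power-law scaling curve, assuming the form
# L(C) = a * C**(-alpha) + l_inf. All constants are illustrative.
def scaling_loss(compute, a=10.0, alpha=0.05, l_inf=1.7):
    """Predicted loss for a given training compute budget (in FLOPs)."""
    return a * compute ** (-alpha) + l_inf

# More compute yields lower predicted loss, with diminishing returns
# toward the irreducible floor l_inf.
loss_small = scaling_loss(1e20)
loss_large = scaling_loss(1e24)
```

The key qualitative point is monotonic improvement with diminishing returns: each extra order of magnitude of compute buys a smaller absolute drop in loss, and performance on a task still depends on the training distribution covering that task.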
The future of robotics will require improvements in inference speed, context length, and model size to match human capabilities; these factors are interdependent and hard to balance, and closing the gap will likely require innovation in representation and processing techniques.
OpenAI prioritizes research over product support when resources are constrained, emphasizing their commitment to building AGI.
These generalist models are becoming good enough to perform most standard human tasks, demonstrating economies of scope in AI development.
Regulation should focus on extremely superhuman AI models, not stifle less capable models that offer benefits.
OpenAI's approach has shifted towards vertical integration, realizing that delivering on their mission requires doing more than initially thought.
Current AI models can reason, but their economic impact still lags expectations: OpenAI's roughly $10 billion in revenue remains small next to that of much larger companies such as McDonald's.