
Related Posts


Dwarkesh Podcast: Some thoughts on the Sutton in...

A new architecture enabling continual learning would eliminate the need for a special training phase, allowing agents to learn on the fly like humans and animals.

Pivot: Kimmel & ABC, Nvidia’s OpenAI ...

Large language models (LLMs) are converging, and no lab is building a sustainable technical advantage, since one AI can quickly reverse engineer another.

The day an LLM can create a large software project without any babysitting will be a significant step towards AGI. However, creating new science is a much higher bar.

LLMs trained on pre-1915 physics would never have discovered the theory of relativity. Discovering new science requires going beyond the existing training set, something current LLM architectures can't do.

"Econ 102" with Noah Smit...AGI Thought Experiment with Dw...

AGI is not yet achieved because current AI systems cannot learn and improve over time the way a human employee can, for example by picking up preferences or getting better at transcription.

a16z Podcast: Sam Altman on Sora, Energy, an...

The future of AI development may involve LLMs advancing to a point where they can independently discover the next technological breakthroughs.

Dwarkesh Podcast: Fully autonomous robots are mu...

Robots can benefit from the same AI techniques used in LLMs, as they both leverage prior knowledge and abstract representations.

Continual learning is necessary for true AGI, and while it doesn't exist with current LLMs, there may be straightforward ways to implement it.
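As a rough illustration of what "learning on the fly" could mean in practice, here is a minimal sketch (not anything described in the episode) of an agent that updates its weights online from a stream of experiences, using a small replay buffer to limit catastrophic forgetting; the model, data, and hyperparameters are hypothetical placeholders.

```python
# Minimal sketch only: one reading of "continual learning" is an agent that
# keeps updating its weights from a stream of experiences, with a small
# replay buffer to limit catastrophic forgetting. The model, data, and
# hyperparameters below are hypothetical placeholders.
import random
import torch
import torch.nn as nn

model = nn.Linear(16, 4)                      # stand-in for a much larger network
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
replay_buffer = []                            # past (observation, feedback) pairs

def learn_from_experience(x, y, replay_size=8):
    """One online update: train on the new example plus a few replayed old ones."""
    batch = [(x, y)] + random.sample(replay_buffer, min(replay_size, len(replay_buffer)))
    replay_buffer.append((x, y))
    xs = torch.stack([b[0] for b in batch])
    ys = torch.stack([b[1] for b in batch])
    optimizer.zero_grad()
    loss = loss_fn(model(xs), ys)
    loss.backward()
    optimizer.step()
    return loss.item()

# The agent learns on the fly: every interaction becomes a training signal,
# with no separate offline training phase.
for step in range(100):
    observation = torch.randn(16)             # placeholder input
    feedback = torch.randint(0, 4, ())        # placeholder label / reward signal
    learn_from_experience(observation, feedback)
```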

a16z Podcast: Columbia CS Professor: Why LLM...

Recursive self-improvement in LLMs is not possible without additional information. Even with multiple LLMs interacting, they can't generate new information beyond their training set.

AGI is not yet achieved because current models cannot learn and adapt over time like a human employee would.