PortalsOS

Related Posts

Because science is iterative, LLMs must be able to engage in simulations, theoretical calculations, and experiments in order to discover scientific insights.

More or Less #119: "OpenAI Sora vs. TikTok: C..."

The debate over LLMs versus other reasoning models highlights LLMs' limitations in understanding real-world context and predicting the future.

Integrating LLMs into physics research could accelerate scientific discovery by enabling AI to iterate on experiments and simulations, much as human scientists do.

a16z Podcast: "Building an AI Physicist: Chat..."

LLMs can be used as first-class citizens in physics research by teaching them to iterate on scientific inquiry, combining simulations, theoretical calculations, and experiments.
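
Taken literally, that loop is easy to sketch: the model proposes a hypothesis or parameter, a simulation or experiment evaluates it, and the outcome feeds back into the next proposal. The toy Python below is my own illustration of that shape, not anything from the episode; `ask_llm` is a hypothetical stand-in for a real model call, stubbed with a crude adjustment rule so the script runs on its own.

```python
import math

def run_simulation(drag_coefficient: float) -> float:
    """Toy 'simulation': terminal velocity of a falling sphere for a given drag coefficient."""
    mass, g = 0.1, 9.81
    return math.sqrt(mass * g / drag_coefficient)

EXPERIMENTAL_TERMINAL_VELOCITY = 7.0  # pretend lab measurement, in m/s

def ask_llm(history: list[tuple[float, float]]) -> float:
    """Hypothetical LLM call: given (guess, error) history, propose the next drag coefficient.
    Stubbed with a simple increase/decrease rule so the example is self-contained."""
    if not history:
        return 0.05
    last_guess, last_error = history[-1]
    # Simulated velocity too high -> drag was too low -> raise it, and vice versa.
    return last_guess * 1.5 if last_error > 0 else last_guess * 0.75

history: list[tuple[float, float]] = []
for step in range(10):
    guess = ask_llm(history)                             # "theory": propose a parameter
    simulated = run_simulation(guess)                    # "simulation": compute its consequence
    error = simulated - EXPERIMENTAL_TERMINAL_VELOCITY   # "experiment": compare with measurement
    history.append((guess, error))
    print(f"step {step}: drag={guess:.4f}  v_sim={simulated:.2f}  error={error:+.2f}")
    if abs(error) < 0.1:                                 # stop once model and measurement agree
        break
```

The point is the structure rather than the physics: the interesting question is what happens when the proposal step is a capable model rather than a hand-written rule.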

a16z Podcast: "Sam Altman on Sora, Energy, an..."

The future of AI development may involve LLMs advancing to a point where they can independently discover the next technological breakthroughs.

LLMs develop deep representations of the world because their training process incentivizes them to: predicting human-generated text accurately rewards internal models of the world that produced it.
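
One common way to formalize that incentive (my framing, not the episode's) is the next-token prediction objective a language model is trained to minimize:

```latex
\mathcal{L}(\theta) \;=\; -\,\mathbb{E}_{x \sim \mathcal{D}} \left[ \sum_{t=1}^{T} \log p_\theta\!\left(x_t \mid x_{<t}\right) \right]
```

Driving this loss down across a sufficiently broad corpus D rewards any internal representation that helps predict what humans will write next, which is the sense in which training "incentivizes" world models.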

The integration of LLMs in physics research can accelerate physical R&D by tightly coupling simulations and experiments.

LLMs are trained on vast amounts of human-generated data, an inelastic and hard-to-scale resource, which makes this approach an inefficient use of compute.

a16z Podcast: "Columbia CS Professor: Why LLM..."

Recursive self-improvement in LLMs is not possible without additional information: even multiple LLMs interacting with one another cannot generate new information beyond what is in their training data.
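
One way to make that intuition precise (my gloss, using the data processing inequality, not an argument from the episode) is to view the world, the training corpus, and any chain of interacting models as a Markov chain:

```latex
W \;\to\; D \;\to\; M_1 \;\to\; M_2 \;\to\; \cdots
\qquad\Longrightarrow\qquad
I(W; M_k) \;\le\; I(W; M_1) \;\le\; I(W; D)
```

However long the chain of models post-processing one another, the information they carry about the world W can never exceed what was already present in the training data D; new information has to come from new measurements of W.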

Dwarkesh Podcast: "Some thoughts on the Sutton in..."

A hypothetical LLM trained only on data up to 1900 would likely be unable to develop the theory of relativity independently.