The iterative nature of science means that, to discover scientific insights, LLMs must engage in simulations, theoretical calculations, and experiments.
The debate over LLMs versus other reasoning models highlights the limitations of LLMs in understanding real-world context and predicting the future.
The integration of LLMs in physics research could accelerate scientific discovery by enabling AI to iterate on experiments and simulations, similar to human scientific inquiry.
LLMs can become first-class citizens in physics research if they are taught to iterate on scientific inquiry, combining simulations, theoretical calculations, and experiments.
The future of AI development may involve LLMs advancing to a point where they can independently discover the next technological breakthroughs.
LLMs develop deep representations of the world because their training process incentivizes them to build such representations.
The integration of LLMs in physics research can accelerate physical R&D by tightly coupling simulations and experiments.
LLMs are trained on vast amounts of human-generated data, an inelastic and hard-to-scale resource, which makes this approach an inefficient use of compute.
Recursive self-improvement in LLMs is not possible without additional information: even when multiple LLMs interact, they cannot generate new information beyond what their training data contains.
A hypothetical LLM trained only on data up to 1900 would likely be unable to develop the theory of relativity independently.