Vishal Misra accidentally invented what's now called RAG while trying to fix the interface for Stats Guru using GPT-3. He created a DSL that translated natural language queries into REST calls, and it has been running in production since September 2021.
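A minimal sketch of that pattern, not Misra's actual implementation: an LLM maps a free-form question onto a tiny JSON DSL, and a deterministic layer turns the DSL into a REST call. The prompt, DSL fields, model name, and endpoint URL below are illustrative assumptions.

```python
# Sketch of an LLM-to-REST translation layer in the spirit of the Statsguru
# interface described above. All names here (prompt, fields, endpoint) are
# hypothetical placeholders, not details of the real system.
import json
import urllib.parse

from openai import OpenAI  # assumes the modern OpenAI Python SDK

client = OpenAI()

FEW_SHOT_PROMPT = (
    "Translate the cricket question into a JSON query.\n"
    "Q: How many centuries has Tendulkar scored in ODIs?\n"
    'A: {"player": "Tendulkar", "format": "ODI", "stat": "centuries"}\n'
)

def question_to_query(question: str) -> dict:
    """Ask the LLM to emit the DSL (a small JSON object) for a free-form question."""
    prompt = FEW_SHOT_PROMPT + f"Q: {question}\nA:"
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; the original system used GPT-3
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    return json.loads(resp.choices[0].message.content)

def query_to_rest_call(query: dict, base_url: str = "https://stats.example.com/api") -> str:
    """Deterministically translate the DSL into a REST URL (hypothetical endpoint)."""
    return f"{base_url}/query?{urllib.parse.urlencode(query)}"

if __name__ == "__main__":
    q = question_to_query("How many wickets did Warne take in Tests?")
    print(query_to_rest_call(q))
```

The key design point is the split of responsibilities: the LLM only has to produce a constrained intermediate representation, while the actual data retrieval stays deterministic and auditable.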
There is a convergence happening among large language models (LLMs), with no sustainable technical advantage emerging, since one AI can quickly reverse engineer another.
The success of LLMs in language tasks was surprising, as language was previously considered different from other AI tasks.
The day an LLM can create a large software project without any babysitting will be a significant step towards AGI. However, creating new science is a much higher bar.
Vishal Misra's work on understanding LLMs is profound. He has developed models that reduce the complex, high-dimensional space of LLMs to a geometric manifold, allowing us to predict where reasoning can move within that space. This approach reflects how humans simplify the complex universe into manageable forms for reasoning.
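One schematic way to read that claim (the notation below is illustrative, not Misra's): let $\phi$ embed a token prefix into the model's representation space $\mathbb{R}^d$, and assume those embeddings concentrate on a low-dimensional manifold $\mathcal{M}$.

$$
x_{t+1} \sim p_\theta(\,\cdot \mid x_{\le t}\,), \qquad \phi(x_{\le t}) \in \mathcal{M} \subset \mathbb{R}^d, \qquad \dim \mathcal{M} \ll d .
$$

A chain of reasoning is then a trajectory $\phi(x_{\le 1}), \phi(x_{\le 2}), \dots$ constrained to stay near $\mathcal{M}$, which is what makes it possible to say anything predictive about where the model's next step can go.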
The capability overhang in AI is immense, with many people unaware of the full potential of models like Codex compared to ChatGPT.
The future of AI development may involve LLMs advancing to a point where they can independently discover the next technological breakthroughs.
The potential for AI to democratize technology is significant: it is already widely accessible through platforms like ChatGPT, and the internet's rapid adoption of these tools shows how broadly democratized the technology can become.
The most impactful models for understanding LLMs, according to Martin, are those created by Vishal Misra. His work, including a notable talk at MIT, not only explores how LLMs reason but also offers reflections on human reasoning.