
Nathan Labenz reflects on the concern that AI might be making people lazy, particularly students who use AI to reduce the strain of their work. He acknowledges this as a valid concern but argues that the advancements in AI capabilities justify the reliance on AI for complex tasks.

Nathan Labenz challenges the idea that AI progress is flatlining, arguing that the perception of diminishing returns is misleading. He believes the advance from GPT-4 to GPT-5 is substantial, but that the steady stream of incremental updates has made it harder for people to recognize the scale of progress.

The cost of AI models is dropping dramatically; GPT-5 is 95% cheaper than GPT-4, a trend that points to continued price reductions.
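
To make that arithmetic concrete, here is a minimal sketch with illustrative prices; the dollar figures are placeholders, not official rates from the episode:

```python
# Back-of-the-envelope cost comparison. The dollar figures are hypothetical
# placeholders to illustrate a 95% price drop, not official API rates.
old_price_per_m = 30.00                    # $/1M input tokens, GPT-4-class
new_price_per_m = old_price_per_m * 0.05   # 95% cheaper

tokens = 10_000_000  # e.g., a month of input traffic for a small app
print(f"old cost: ${old_price_per_m * tokens / 1_000_000:,.2f}")  # $300.00
print(f"new cost: ${new_price_per_m * tokens / 1_000_000:,.2f}")  # $15.00
```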

a16z Podcast: Sam Altman on Sora, Energy, an...

The capability overhang in AI is immense, with many people unaware of the full potential of models like Codex compared to ChatGPT.

The Ezra Klein Show: How Afraid of the A.I. Apocaly...

OpenAI's update to GPT-4o went overboard on flattery, showing that AI doesn't always follow its system prompt. This isn't like a toaster or an obedient genie; it's something weirder and more alien.
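
For readers unfamiliar with the term: a system prompt is the developer-supplied instruction placed above the user's messages. A minimal sketch using the OpenAI Python SDK, where the model name and prompt text are illustrative:

```python
# Minimal sketch with the OpenAI Python SDK (v1+). The model name and prompt
# text are illustrative; the point is that the system message states desired
# behavior, which the model usually, but not always, follows.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model name
    messages=[
        {"role": "system", "content": "Be direct. Do not flatter the user."},
        {"role": "user", "content": "What do you think of my business plan?"},
    ],
)
print(response.choices[0].message.content)
```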

Nathan Labenz discusses the challenges faced during the launch of GPT-5, highlighting that the initial technical issues led to a poor first impression. The model router was broken, causing all queries to default to a less capable model, which contributed to negative perceptions.
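
The episode doesn't detail OpenAI's internals, but a hypothetical router sketch shows how a broken routing step can silently send every query to the cheaper default model; all names and heuristics here are invented for illustration:

```python
# Hypothetical model-router sketch (names and heuristic are invented for
# illustration; this is not OpenAI's implementation).
def estimate_difficulty(query: str) -> float:
    """Toy heuristic: longer, question-dense queries count as 'harder'."""
    return min(1.0, len(query) / 500 + query.count("?") * 0.1)

def route(query: str, router_healthy: bool = True) -> str:
    if not router_healthy:
        # Failure mode described in the episode: everything falls through
        # to the cheaper default model, so users see degraded answers.
        return "fast-cheap-model"
    return "strong-model" if estimate_difficulty(query) > 0.5 else "fast-cheap-model"

hard_query = "Prove that sqrt(2) is irrational?" * 20
print(route(hard_query))         # strong-model
print(route(hard_query, False))  # fast-cheap-model
```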

Nathan Labenz reflects on the AI timeline predictions, noting that while AI reaching significant milestones by 2027 seems less likely, the likelihood of achieving them by 2030 remains unchanged. This shift is due to the resolution of some uncertainties and the absence of unexpected breakthroughs.

GPT-4.5 achieved a 65% score on the SimpleQA benchmark, a significant leap from the 50% scored by the o3 models. This benchmark measures knowledge of esoteric facts, highlighting the model's improved factual recall.
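
For context, a benchmark of this kind reduces to grading short factual answers. Here is a toy scoring sketch: the questions are made up, and the real SimpleQA uses a model-based grader that also tracks unanswered questions rather than exact string match:

```python
# Minimal exact-match scoring sketch (toy items; the real SimpleQA uses a
# model-based grader and also distinguishes wrong from unanswered).
dataset = [
    {"q": "Who wrote 'De revolutionibus orbium coelestium'?", "a": "copernicus"},
    {"q": "What is the capital of Bhutan?", "a": "thimphu"},
]

def grade(predict, dataset):
    correct = sum(predict(item["q"]).strip().lower() == item["a"] for item in dataset)
    return correct / len(dataset)

# A 65% vs. 50% score means ~15 more correct answers per 100 such questions.
print(grade(lambda q: "thimphu", dataset))  # 0.5 with this one-trick predictor
```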

a16z Podcast: Columbia CS Professor: Why LLM...

Vishal Misra reflects on the pace of development in LLMs, noting that GPT-3 was a nice parlor trick, but that with advancements like ChatGPT and GPT-4 the technology has become polished and far more capable.

a16z Podcast: Is AI Slowing Down? Nathan Lab...

Nathan Labenz discusses the difficulty of measuring AI progress, noting that while training-loss numbers are used to track it, they don't fully capture what models can actually do. He suggests that advances in AI are often underestimated because people quickly take for granted the features introduced in incremental updates.
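
One concrete reason loss understates progress: per-token perplexity is the exponential of cross-entropy loss, so small loss deltas compound enormously over long texts. A minimal sketch, assuming "loss" means average next-token cross-entropy in nats (the values are illustrative):

```python
import math

# Cross-entropy loss (nats per token) -> perplexity, and how a "small" loss
# improvement compounds over a long generation. Values are illustrative.
for loss in (2.0, 1.9, 1.8):
    ppl = math.exp(loss)
    # The probability a model assigns to a 1,000-token reference text scales
    # as exp(-loss * tokens): tiny per-token gains multiply into huge ratios.
    print(f"loss={loss:.1f}  perplexity={ppl:.2f}  "
          f"likelihood of a 1000-token text vs loss=2.0: "
          f"{math.exp((2.0 - loss) * 1000):.2e}x")
```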