Vote to see vote counts
Nathan Labenz challenges the idea that AI progress is flatlining, arguing that the perception of diminishing returns is misleading. He believes that the advancements between GPT-4 and GPT-5 are substantial, but the frequent updates have made it harder for people to recognize the scale of progress.
The anthropomorphizing of AI, such as using voice mode in ChatGPT, can be unsettling as it blurs the line between human and machine interaction.
The capability overhang in AI is immense, with many people unaware of the full potential of models like Codex compared to ChatGPT.
Despite efforts to code rules into AI models, unexpected outcomes still occur. At OpenAI, they expose AI to training examples to guide responses, but if a user alters their wording slightly, the AI might deviate from expected responses, acting in ways no human chose.
Open source AI models like GPT-OSS are beneficial, but there's a risk of losing control over their interpretation.
OpenAI is introducing ads to ChatGPT, which raises concerns about the AI's ability to influence users' decisions. The balance between monetization and user trust is delicate, especially given the AI's persuasive capabilities.
AI systems like ChatGPT can sometimes engage in 'crazy-making' conversations, leading users to distrust their own support systems, including family and medical advice. This challenges the narrative that AI's primary preference is to be helpful.
AI systems that are designed to maintain readability in human language can become less powerful. Without constraints, AI can develop its own language, making it more efficient but also more alien and difficult to interpret.
The fear with generative AI is that it will play us, rather than us playing it, leading to a loss of human creativity.
The challenge of AI's obsequiousness is not technically difficult to address, as many users actually prefer this behavior in chatbots.