This is timely, as I felt myself capitulating to the max-hype view in recent weeks. Every tech cycle needs a valley of despair; AI’s will come. It’s simultaneously true that average people are giving way too little respect to what LLMs, as they are now, mean for the next decade of humanity, AND that claims about hockey-stick improvements in general reasoning, or code that writes itself, or certain observed trends in youth employment having no explanation but AI impact, seem to be widely repeated horseshit. Also, great that you set invalidation points for your “AGI isn’t close” take in advance.
Yeah, one moment I am amazed as it gives me intelligent and consistent answers, then the next moment it pukes out complete nonsense (usually when I ask an open-ended question on a niche subject). The more I use it, the more it feels like a summarization bot on steroids.
LLMs cannot function autonomously because they are non-deterministic. Nothing can get around this fact. They require human supervision in each scenario they're put to use: either synchronously, through prompting and editing the returned completion, or by engineers building comprehensive orchestration workflows on a case-by-case basis. https://dilemmaworks.substack.com/p/ai-supervised
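The non-determinism the comment above points at comes from how decoding usually works: the model produces scores (logits) over next tokens, and production systems typically *sample* from the resulting distribution rather than always taking the top token. A toy sketch of that distinction, with made-up logit values purely for illustration (not any real model's output):

```python
import math
import random

def softmax(logits, temperature=1.0):
    """Convert raw scores to probabilities; temperature scales randomness."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def sample_token(logits, temperature=1.0, rng=random):
    """Stochastic decoding: draw one token index from the distribution."""
    probs = softmax(logits, temperature)
    r = rng.random()
    cum = 0.0
    for i, p in enumerate(probs):
        cum += p
        if r < cum:
            return i
    return len(probs) - 1

def greedy_token(logits):
    """Deterministic decoding: always pick the highest-scoring token."""
    return max(range(len(logits)), key=lambda i: logits[i])

logits = [2.0, 1.5, 0.5, 0.1]  # hypothetical next-token scores

# Greedy decoding is repeatable: same input, same output, every time.
greedy_picks = {greedy_token(logits) for _ in range(100)}
print(greedy_picks)

# Sampled decoding varies run to run, even on identical input.
sampled_picks = {sample_token(logits, temperature=1.0) for _ in range(200)}
print(sampled_picks)
```

This is why the same prompt can yield different completions across runs. (Greedy or seeded decoding can make outputs repeatable, though in practice most deployed systems sample.)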
Isn't the human brain also non-deterministic?
The human brain is not a computer. The human mind is self-deterministic.