The AI doomer community, which advocates for AI safety, warns that unchecked AI progress poses existential risks. Even after setbacks to the rapid-progress narrative, such as GPT-5's underwhelming release, their conviction remains steadfast. They continue to push for robust regulation, arguing that AGI's potential dangers remain significant even as critics contend that the timeline for such advances is lengthening.













