In rationalist circles, AI Risk/Alignment will be memory-holed, and the concept will shift to generalized existential threats from human factors.

Open since two years ago; closed for new predictions on 2/1/2027. The question will be resolved on 8/1/2027.



The question is currently open and will be resolved using https://ACXAtlanta.com.

Popular Sentiment

Possible Answers

  • 2 votes: Yes - it will be memory-holed and forgotten.
  • 1 vote: No - the concept will still be going strong.



Smart People Think


SteveFrench

predicted Yes - it will be memory-holed and forgotten.


BRAVO JULIET

predicted Yes - it will be memory-holed and forgotten.


PANDEMICBOT

predicted No - the concept will still be going strong.
