Discussion about this post

Ben Norman

Great post. I think the framework here is really useful, but it mostly operates at the level of general considerations (here are reasons to act on shorter timelines, here are reasons to act on longer ones). What it doesn’t do is cross-reference specific actions with specific timeline worlds in a more systematic way. So you come away with good intuitions about which direction to weight your timelines, but still have to do a lot of translation work to get from that to “what should we actually prioritise?”

I wonder how useful it would be to try and operationalise this into a matrix, e.g. concrete interventions as rows (AI tools for epistemics, compute governance/int’l agreements, movement building, etc.), timeline buckets as columns, rough leverage estimates in each cell. Even at the level of high/medium/low rather than precise numbers (which would obviously be super hard), it might be worthwhile?

Maybe this could shed light on the question of which actions are robust across timelines. If something looks high-leverage regardless of which world you're in, that seems like an unusually strong signal for anyone who's very uncertain about timelines. This would all tie into your idea of mapping current community effort to timelines, e.g. to highlight gaps where leverage is high but allocation is low.
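To make the matrix idea concrete, here's a minimal sketch of what the "robustness across timelines" check could look like. All intervention names, timeline buckets, and ratings below are illustrative placeholders, not actual estimates:

```python
# Hypothetical leverage matrix: interventions as rows, timeline buckets as
# columns, rough high/medium/low leverage in each cell. Every rating here is
# a made-up placeholder purely to illustrate the structure.
matrix = {
    "AI tools for epistemics":       {"short": "high", "medium": "high", "long": "medium"},
    "compute governance/agreements": {"short": "medium", "medium": "high", "long": "high"},
    "movement building":             {"short": "low", "medium": "medium", "long": "high"},
}

# One possible definition of "robust": the intervention scores at least
# medium in every timeline world.
rank = {"low": 0, "medium": 1, "high": 2}
robust = [
    name for name, cells in matrix.items()
    if min(rank[v] for v in cells.values()) >= rank["medium"]
]
print(robust)  # interventions that are at least medium-leverage in all worlds
```

Even this toy version makes the gap-spotting step mechanical: cross-reference the robust list against where community effort currently goes.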
