Consider the following hypothetical scenario:
1. You are a small group of individuals with little influence (e.g. PhD students)
- 1.1 your goal is to prevent the creation of misaligned superhuman AGI, in a scenario where it would otherwise be created;
- 1.2 your group has some technical expertise and some compute, but low resources and no political power;
- 1.3 the group cares about avoiding irreversible harm to other humans (e.g. causing the death of anyone outside the group would be strictly prohibited);