I’m trying to distinguish the subfields of AI safety. Which of these characterizations are wrong?
- Alignment (do what we want)
- Control (don’t do what we don’t want)
- Ethics (don’t make society worse)
- Policy (governance and regulation of AI)
- X-risk (don’t destroy the world)
- S-risk (don’t create outcomes with large-scale suffering)