[Question] FAI failure scenario

https://www.lesswrong.com/posts/a5TQQHB9merXd72wo/fai-failure-scenario

Imagine that research into creating a provably Friendly AI fails, and at some point in the 2020s or 2030s the creation of UFAI appears imminent. What measures could the AI Safety community then take?