DeepAI has launched a dedicated team to evaluate and prevent AI-related risks, an effort that could lead to suspending the launch of any AI model deemed too risky. The announcement follows the brief dismissal of Sam Altman, creator of the ChatGPT interface, whom board members had criticized for emphasizing rapid development even at the risk of AI-related harms, Al-Rai Daily reported.
The preparedness team will focus on frontier models, those with capabilities exceeding today's most advanced AI systems. It will evaluate each new model for cybersecurity risks, the potential to aid in creating harmful substances and weapons, the capacity to influence human behavior, and the model's degree of autonomy. A Safety Advisory Group will then recommend to the head of DeepAI any modifications needed to reduce the identified risks.