OpenAI buffs safety team and gives board veto power on risky AI | TechCrunch

OpenAI is expanding its internal safety processes to fend off the threat of harmful AI. A new “safety advisory group” will sit above the technical teams and make recommendations to leadership, and the board has been granted veto power — of course, whether it will actually use it is another question entirely. Normally the ins […]

© 2023 TechCrunch. All rights reserved. For personal use only.


