OpenAI just disbanded their risk team (see link below).
It’s hard to tell what the effects of OpenAI’s organizational change will be, given we can’t see everything going on inside the company.
However, it’s not hard to tell what companies adopting/adapting AI and AI tools need to do …
More needs to be considered than just understanding AI and figuring out what to do with it … the direct opportunities and implementation.
Taking advantage of any technology advance is never about just the technology.
It’s also about the implications and downsides, not just the potential upside.
Risk evaluation and mitigation isn’t just for OpenAI – it’s for everyone.
If you’re currently using AI or thinking about doing so, consider:
- Ethics … how does AI support or erode your company's values and culture? And those of customers and other stakeholders?
- Legal factors … what current and proposed laws address AI specifically?
- Compliance … if other regulatory factors are involved (e.g., in banking or medical fields), how will AI help or hinder compliance?
- Bias … how will we ensure the way we implement AI does not create or amplify biases that are contrary to our intent and culture? Or contrary to law and compliance? (See the sketch after this list.)
- Other risks … what are the effects on factors like security, safety, and misuse (by customers AND internally)?
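To make the bias question concrete, here is a minimal sketch in Python of one common screening check: comparing approval rates across groups and flagging large gaps via the "four-fifths rule." The data, group labels, and decision scenario are hypothetical assumptions for illustration, not a complete audit.

```python
from collections import defaultdict

def selection_rates(decisions):
    """decisions: iterable of (group, approved) pairs -> approval rate per group."""
    totals, approvals = defaultdict(int), defaultdict(int)
    for group, approved in decisions:
        totals[group] += 1
        approvals[group] += int(approved)
    return {g: approvals[g] / totals[g] for g in totals}

def disparate_impact(rates):
    """Ratio of the lowest to the highest group approval rate."""
    return min(rates.values()) / max(rates.values())

# Hypothetical model decisions: (demographic group, approved?)
decisions = [("A", True), ("A", True), ("A", False),
             ("B", True), ("B", False), ("B", False)]

rates = selection_rates(decisions)
ratio = disparate_impact(rates)
# The 0.8 cutoff is the EEOC "four-fifths rule" from US employment contexts;
# treat it as a screening heuristic, not a legal determination.
print(rates, f"ratio={ratio:.2f}",
      "flag for review" if ratio < 0.8 else "within threshold")
```

A check like this belongs in a recurring monitoring process, paired with domain and legal review, not run once at launch.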
Blind adoption is like blind driving … your chances of a wreck are usually higher than you'd like.
https://www.linkedin.com/news/story/risk-team-disbanded-at-openai-6741066/