Updated: April 19, 2022 (originally published August 2, 2021)
The European Commission’s Risk-Based Framework for AI
The European Commission’s proposed Artificial Intelligence Act segments AI use cases into four categories, based on risk to fundamental human rights or the likelihood of physical or psychological harm.
Prohibited use cases include those that target vulnerable groups, such as children, the elderly, or people with disabilities, as well as those that employ “subliminal” techniques. Social scoring (national “blacklists,” such as those introduced in China) and mass surveillance are also prohibited, with certain key exceptions, including searching for a missing child or a terrorism suspect.
High-risk AI applications, which require the most oversight, include those in which bias introduced by poor training data or other factors could generate discriminatory results, such as automated screening of job or college applications, as well as safety-critical uses such as self-driving vehicles.
Limited-risk applications, such as customer service chatbots, carry transparency obligations, such as disclosing to users that they are interacting with an AI system. Minimal-risk applications face no new requirements under the proposal.