Updated: May 31, 2023 (July 4, 2022)

  Sidebar

Understanding AI Bias

By Barry Briggs

Before joining Directions on Microsoft in 2020, Barry worked at Microsoft for 12 years in a variety of roles.

Machine learning is now increasingly available across the entire Microsoft portfolio, ranging from tools targeting the data scientist (Azure ML Studio and Jupyter notebooks) to developer libraries (Azure Cognitive Services and ML.NET), and is embedded in nearly all of Microsoft's applications. However, concern is growing about bias introduced by AI, that is, models that incorrectly over- or underemphasize certain groups, leading to unequal or unfair treatment.

At its core, machine learning works by analyzing large quantities of data, deriving correlations and relationships among many variables (hundreds, thousands, or even more—OpenAI’s GPT-3 purports to use some 175 billion, and a model developed in Beijing allegedly employs over a trillion) and creating a mathematical model describing the input, or training, data. Such a model can then be used with new data to predict future results.
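The train-then-predict cycle described above can be sketched in a few lines. This is a minimal illustration, not any specific Microsoft product: it assumes scikit-learn and uses a small, hypothetical dataset of two variables per record.

```python
# A minimal sketch of the machine learning cycle described above:
# fit a mathematical model to training data, then use it on new data.
# scikit-learn's LogisticRegression stands in for "a model"; the data
# (age, income -> label) is entirely hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Training data: each row is a record, each column a variable.
X_train = np.array([[25, 40_000], [35, 60_000], [45, 80_000], [55, 100_000]])
y_train = np.array([0, 0, 1, 1])  # known outcomes the model learns from

model = LogisticRegression()
model.fit(X_train, y_train)       # derive the model from the training data

X_new = np.array([[50, 90_000]])  # new, previously unseen record
prediction = model.predict(X_new) # predict a future result
print(prediction)
```

Real models differ mainly in scale: instead of two variables and four records, they may correlate thousands of variables across millions of records, which is where unnoticed bias in the training data can creep into the resulting model.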

The development cycle for an AI model typically consists of several phases:

Cleansing the data, that is, ensuring that all records have the requisite quality (for example, no blank or invalid fields). This is usually (by far) the most time-consuming phase. (See the sidebar “The Many Dimensions of Data Quality.”)
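The cleansing phase above amounts to filtering out records with blank or invalid fields. A minimal sketch, assuming pandas and a hypothetical dataset in which one field is blank and one holds an invalid value:

```python
# A minimal sketch of data cleansing: drop records with blank or
# invalid fields. The dataset and validity rule (age must be positive)
# are hypothetical.
import pandas as pd

records = pd.DataFrame({
    "age":    [34, None, 29, -5],          # None = blank, -5 = invalid
    "income": [52_000, 48_000, None, 61_000],
})

cleaned = records.dropna()                 # remove records with blank fields
cleaned = cleaned[cleaned["age"] > 0]      # remove records with invalid values
print(len(cleaned))                        # only fully valid records remain
```

Note that cleansing decisions like these can themselves introduce bias: if blank or invalid fields occur more often for one group than another, dropping those records underrepresents that group in the training data.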
