Fair AI

What is Fair AI?

Like humans, AI-based systems can be biased. An AI-based system is considered fair if the outputs it produces are independent of sensitive attributes such as gender, race, religious faith, and disability; otherwise, the system is considered biased.

Bias may creep in at a stage as early as data capture or as late as post-deployment, and it may originate in the data or in the algorithm itself. Hence, bias needs to be handled at every stage of the model lifecycle.

Importance of Fair AI

  1. Gartner predicts that through 2022, 85% of AI projects will deliver erroneous outcomes due to bias in data, algorithms, or the teams responsible for managing them.
  2. GDPR (General Data Protection Regulation) has already laid down regulations to ensure that AI/ML models used by organizations are fair and explainable.
  3. The European Union has set fines of up to €20M or 4% of annual turnover for AI misuse.
  4. Various countries have started to enact laws and regulations to ensure fairness of AI based systems.
  5. Biased AI systems may severely impact goodwill and trust among consumers, partners, vendors, and society.

Some real-world cases and implications

  1. Amazon had to scrap its AI recruiting tool after it was found to be biased against women candidates
  2. The AI-based COMPAS system (Correctional Offender Management Profiling for Alternative Sanctions), used by U.S. courts to assess the likelihood of a defendant becoming a recidivist, was found to be largely discriminatory against African American defendants and had to be scrapped
  3. Various AI-based NLP systems have been found to reflect gender stereotypes in their word embeddings
  4. Many AI-based systems predicting skin cancer have failed to provide accurate results, as they were later found to be biased against people with darker skin tones
  5. Gender and racial bias have been found in many credit card and loan approval models

Definition of Fairness

There are multiple definitions of fairness for AI/ML-based decision systems. The following are some of the most popular:

  1. Equal Opportunity: ‘Equal Opportunity’ requires that each group of the sensitive attribute under consideration receives true positives at identical rates. The corresponding metric is the difference between the true positive rates (TPR) of the unprivileged and privileged groups.
  2. Equalized Odds: Like ‘Equal Opportunity’, this criterion requires that each group receives true positives at identical rates. In addition, it requires that the false positive rates be equal across groups.
  3. Demographic Disparity: Demographic disparity compares the proportion of selected candidates (and, correspondingly, rejected candidates) across groups. If the selection rates are unequal, bias is indicated.
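The three definitions above can be computed directly from a model's predictions. The sketch below is pure Python; the function names and the group encoding (0 = unprivileged, 1 = privileged) are illustrative, not taken from any specific fairness library. Each criterion is reported as a gap, where 0 means the criterion is satisfied:

```python
def rate(preds, labels, group, g, pred_val, label_val):
    """Fraction of rows in group g with true label label_val predicted as pred_val."""
    idx = [i for i, x in enumerate(group) if x == g and labels[i] == label_val]
    if not idx:
        return 0.0
    return sum(preds[i] == pred_val for i in idx) / len(idx)

def equal_opportunity_diff(preds, labels, group):
    # TPR(unprivileged) - TPR(privileged); 0 means equal opportunity holds
    return rate(preds, labels, group, 0, 1, 1) - rate(preds, labels, group, 1, 1, 1)

def equalized_odds_diffs(preds, labels, group):
    # Equalized odds requires BOTH the TPR gap and the FPR gap to be 0
    tpr_gap = rate(preds, labels, group, 0, 1, 1) - rate(preds, labels, group, 1, 1, 1)
    fpr_gap = rate(preds, labels, group, 0, 1, 0) - rate(preds, labels, group, 1, 1, 0)
    return tpr_gap, fpr_gap

def demographic_parity_diff(preds, group):
    # Gap in selection rates, ignoring true labels entirely
    def sel(g):
        chosen = [p for p, x in zip(preds, group) if x == g]
        return sum(chosen) / len(chosen)
    return sel(0) - sel(1)
```

Note that equalized odds is strictly stronger than equal opportunity: a model can equalize TPR across groups while still producing false positives at very different rates.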

Bias Mitigation Algorithms

Bias mitigation algorithms help reduce bias in AI/ML models. They fall into three categories, depending on the stage of the model lifecycle at which they operate:

  1. Pre-processing Algorithms: ‘Reweighing’, ‘Optimized Pre-processing’, ‘Disparate Impact Remover’, and ‘Learning Fair Representations’ (LFR) are some of the popular pre-processing bias mitigation algorithms
  2. In-processing Algorithms: ‘Adversarial Debiasing’ and ‘Prejudice Remover’ are the popular in-processing bias mitigation algorithms
  3. Post-processing Algorithms: ‘Equalized odds postprocessing’, ‘Calibrated equalized odds postprocessing’ and ‘Reject option classification’ are some of the popular post-processing bias mitigation algorithms
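As an illustration of the pre-processing category, the ‘Reweighing’ technique assigns each training row the weight P(group) × P(label) / P(group, label), so that in the reweighted data the label is statistically independent of the sensitive attribute. The minimal sketch below is illustrative (the function name and encodings are our own, not a library API):

```python
from collections import Counter

def reweighing_weights(group, labels):
    """Per-row weights that decorrelate the label from the sensitive attribute."""
    n = len(labels)
    p_group = Counter(group)            # counts per group value
    p_label = Counter(labels)           # counts per label value
    p_joint = Counter(zip(group, labels))  # counts per (group, label) cell
    # w(g, y) = P(g) * P(y) / P(g, y); over-represented cells get weight < 1
    return [
        (p_group[g] / n) * (p_label[y] / n) / (p_joint[(g, y)] / n)
        for g, y in zip(group, labels)
    ]
```

The returned weights can then be passed as sample weights to any learner that supports them, after which the weighted selection rate is identical across groups.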

Some Misconceptions about Fair AI

  1. My training dataset does not contain any of the sensitive variables such as gender, race, or colour. Hence, my model cannot be biased
  2. My data scientist is honest and not biased towards any gender, race, or colour. Hence, my model cannot be biased
  3. No one has ever complained about any sort of bias in my models. Hence, it is not even worth discussing
  4. I have removed all the sensitive variables from my training dataset. Hence, my model is now bias-free
  5. Bias detection or mitigation may be important for other types of models, but not for mine
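Misconceptions 1 and 4 fail because sensitive attributes often leak through correlated proxy features such as postal code or first name. The toy simulation below (all data synthetic, all field names hypothetical) drops gender entirely, trains a trivial rule on a gender-correlated ‘pincode’ field, and still reproduces the historical gender gap in approval rates:

```python
import random
from collections import defaultdict

random.seed(0)

rows = []
for _ in range(10_000):
    gender = random.choice(["M", "F"])
    # Proxy: neighbourhoods in this toy data are strongly segregated by gender
    pincode = "110001" if (gender == "M") == (random.random() < 0.9) else "110002"
    # Historically biased label: men were approved far more often
    approved = random.random() < (0.8 if gender == "M" else 0.3)
    rows.append((gender, pincode, approved))

# "Train" on pincode only (gender dropped): approve a pincode if its
# historical approval rate exceeds 50%
stats = defaultdict(lambda: [0, 0])
for _, pin, appr in rows:
    stats[pin][0] += appr
    stats[pin][1] += 1
approve_pin = {pin: s[0] / s[1] > 0.5 for pin, s in stats.items()}

def selection_rate(g):
    sel = [approve_pin[pin] for gender, pin, _ in rows if gender == g]
    return sum(sel) / len(sel)
```

Even though gender was never an input to the decision rule, `selection_rate("M")` remains far above `selection_rate("F")`: the pincode carried the sensitive information. This is why removing sensitive columns alone does not make a model fair.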

Our Service Offerings

  1. Frame comprehensive Fair AI policies and guidelines, considering regional and global regulations, standard practices, etc.
  2. Assess your AI based systems for fairness
  3. Perform root cause analysis of bias in your AI systems
  4. Mitigate bias in your AI systems at pre-processing stage, in-processing stage and post-processing stage as appropriate
  5. Generate detailed analysis reports depicting fairness and validating that the AI system is fair and free from bias

Why Us

  1. We are a team of industry veterans and research scholars with deep expertise and extensive industry experience
  2. The AI & Machine Learning unit of our company is exclusively focused on Tabular Synthetic Data Generation and Fair AI, giving us a distinctive edge in this area of work
  3. Our business model allows us to remain cost-effective while delivering high-quality solutions
  4. Our commitment to on-time, high-quality delivery
  5. Our global exposure enables us to provide effective solutions across geographies