Fairlearn includes two types of unfairness mitigation algorithms — postprocessing algorithms and reduction algorithms — to help users improve the fairness of their AI systems. Both types operate as “wrappers” around any standard classification or regression algorithm.
Fairlearn’s postprocessing algorithms take an already-trained model and transform its predictions so that they satisfy the constraints implied by the selected fairness metric (e.g., demographic parity) while optimizing model performance (e.g., accuracy rate). There is no need to retrain the model.
For example, given a model that predicts the probability that an applicant will default on a loan, a postprocessing algorithm tries to find a score threshold above which the applicant should get a loan. This threshold typically needs to differ for each group of people (defined in terms of the selected sensitive feature). That requirement limits the scope of postprocessing algorithms: sensitive features may be unavailable at deployment time, inappropriate to use, or, in some domains, prohibited by law.
Fairlearn’s reduction algorithms wrap around any standard classification or regression algorithm, and iteratively re-weight the training data points and retrain the model after each re-weighting. After 10 to 20 iterations, this process results in a model that satisfies the constraints implied by the selected fairness metric while optimizing model performance.
Reduction algorithms do not need access to sensitive features at deployment time, and they work with many different fairness metrics. These algorithms also allow for training multiple models that make different trade-offs between fairness and model performance, which users can compare using Fairlearn’s interactive visualization dashboard.