
Reassessing the Fairness of Artificial Intelligence through Social Welfare Optimization

by ccadm


As AI systems become more widespread and powerful, the question of how to make them fair and just has become one of the field's most pressing challenges. From lending and hiring to healthcare and criminal justice, AI algorithms now shape the lives and livelihoods of individuals and communities. Often, these algorithms operate in ways that are invisible, unaccountable, and at times biased against historically disadvantaged groups.

In response to these concerns, a community of researchers, practitioners, and policymakers has come together to develop “fair” AI systems that treat everyone equally and don’t perpetuate or amplify societal inequalities. The dominant approach to formalizing and operationalizing fairness in AI has been the use of “statistical parity metrics,” which aim to equalize certain performance measures, such as selection rates or error rates, across protected groups.

However, while parity-based notions of fairness have been widely studied and adopted in the AI community, they have also faced increasing criticism from scholars who argue that they are conceptually flawed, practically limited, and potentially counterproductive. Simply equalizing statistical outcomes between groups, these critics argue, is not enough to achieve substantive fairness, because it ignores the actual welfare impact of AI decisions on individuals and communities.

In a new paper in the CPAIOR 2024 proceedings, a team of researchers from Carnegie Mellon University and the Stevens Institute of Technology proposes an alternative approach to AI fairness based on social welfare optimization. Led by John Hooker, professor of operations research at Carnegie Mellon University, the authors use the well-studied “alpha fairness” family of social welfare functions to dissect the limitations and blind spots of popular statistical parity metrics such as demographic parity, equalized odds, and predictive rate parity.

Their results show that these parity metrics often don’t align with distributive justice principles like prioritizing the worst off or fair distribution of benefits and burdens. In many cases, the alpha-fair solution is far from the parity solution, so these metrics may lead to AI systems that are suboptimal from both efficiency and equity perspectives.

This has big implications for the field of AI ethics and for efforts to build machine-learning systems that respect human values and social justice. It suggests we need a more comprehensive and nuanced approach to algorithmic fairness, one that goes beyond statistical metrics and confronts the moral trade-offs of AI in high-stakes domains: social welfare optimization.

Understanding Social Welfare Optimization

At its heart, social welfare optimization is a completely different paradigm for thinking about and operationalizing fairness in AI. Instead of narrowly focusing on equalizing certain metrics between groups, it takes a step back and considers the broader societal impact of AI decisions on human welfare and well-being.

The idea is to design AI systems that explicitly aim to maximize a social welfare function, which aggregates the utilities (i.e., benefits and costs) experienced by all affected individuals into a single measure of social good. Under this approach, AI practitioners specify a social welfare function that reflects considered moral judgments about the relative importance of efficiency and equity, and then build algorithms that balance those competing objectives.

Social welfare optimization has its roots in welfare economics, which has a long history of dealing with distributive justice and collective decision-making. Economists and philosophers have proposed various social welfare functions that reflect different ethical principles and value judgments, such as utilitarianism (maximize the sum of utility), prioritarianism (give more weight to utility gains for the worst off), and egalitarianism (minimize inequality).
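In standard notation, with u_i denoting the utility of individual i, these families can be written as follows (one common formalization; the Rawlsian maximin shown here is a frequent stand-in for the egalitarian ideal):

```latex
W_{\text{util}}(u) = \sum_i u_i,
\qquad
W_{\text{prior}}(u) = \sum_i g(u_i)\ \ (g \text{ increasing and strictly concave}),
\qquad
W_{\text{maximin}}(u) = \min_i u_i
```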

In recent years, a growing number of AI researchers have started to explore social welfare optimization as a way to embed fairness into machine learning systems. This work builds on earlier research, including “Algorithmic decision making and the cost of fairness” by Corbett-Davies and colleagues and related work by Heidari et al., which introduced the idea of using social welfare functions to capture the differential impact of AI decisions on different individuals and groups.

One way to do this is with alpha fairness, a parametric family of social welfare functions that has been studied in economics and social choice theory for decades. Alpha fairness interpolates between utilitarian and egalitarian objectives via a single parameter, alpha, which controls the degree of aversion to inequality.

When alpha is 0, the social welfare function reduces to classical utilitarianism, maximizing the sum of utilities without regard for their distribution. As alpha increases, more weight is given to the worst off, and the allocation becomes more equitable. In the limit, as alpha goes to infinity, alpha fairness converges to the Rawlsian “maximin” principle of maximizing the utility of the worst-off individual.
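Concretely, the alpha-fair welfare function over individual utilities u_1, …, u_n is standardly written as:

```latex
W_\alpha(u) =
\begin{cases}
\displaystyle \sum_{i=1}^{n} \frac{u_i^{\,1-\alpha}}{1-\alpha}, & \alpha \ge 0,\ \alpha \neq 1,\\[1.5ex]
\displaystyle \sum_{i=1}^{n} \log u_i, & \alpha = 1.
\end{cases}
```

The special case alpha = 1 corresponds to proportional fairness, the criterion behind the Nash bargaining solution.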

In their CPAIOR 2024 paper, the researchers use alpha fairness as a lens to examine three popular statistical parity metrics:

  • Demographic parity
  • Equalized odds
  • Predictive rate parity
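
To make these definitions concrete, here is a minimal NumPy sketch (the helper name and toy data are invented for illustration, not taken from the paper) of the group gaps each metric measures:

```python
import numpy as np

def parity_gaps(y_true, y_pred, group):
    """Gaps in three common parity metrics between two groups (coded 0 and 1).
    Hypothetical helper for illustration only."""
    g0, g1 = (group == 0), (group == 1)

    # Demographic parity: difference in selection rates.
    dp_gap = y_pred[g1].mean() - y_pred[g0].mean()

    # Equalized odds: difference in true-positive rates among the qualified
    # (a full check would also compare false-positive rates).
    tpr = lambda g: y_pred[g & (y_true == 1)].mean()
    eo_gap = tpr(g1) - tpr(g0)

    # Predictive rate parity: difference in precision among the selected.
    ppv = lambda g: y_true[g & (y_pred == 1)].mean()
    prp_gap = ppv(g1) - ppv(g0)

    return dp_gap, eo_gap, prp_gap

# Toy data: 1 = qualified/selected, 0 = not.
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0])
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 0])
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])
print(parity_gaps(y_true, y_pred, group))
```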

They simulate a variety of scenarios where an AI system has to allocate a limited resource (e.g., loans, job interviews, educational opportunities) among a population of individuals with different qualification rates and utility functions.

The results are surprising. In many cases, the alpha-fair allocation differs significantly from the solutions proposed by the parity metrics.

Demographic parity, which requires equal selection rates across groups, often fails to account for the fact that disadvantaged groups may derive greater marginal utility from being selected. As a result, it can lead to allocations that are neither efficient nor equitable.

Equalized odds, which requires equal selection rates among “qualified” individuals (and, in its full form, equal error rates among the unqualified as well), does somewhat better but still fails in scenarios where false negatives (qualified individuals being rejected) are more costly than false positives.

Predictive rate parity, which equalizes the fraction of selected individuals who are qualified, is of limited use and only applicable when the number of selected individuals is greater than the number of truly qualified candidates.
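A toy experiment in the spirit of these simulations (the utility model and all numbers are invented, not the authors' actual setup) shows how the alpha-fair split of a fixed budget can diverge from the equal-rate split that demographic parity would demand:

```python
import numpy as np

# Split a budget of B selections between two equally sized groups, where
# the disadvantaged group (index 1) is assumed to gain more per selection.
B = 20
weights = np.array([1.0, 2.0])          # assumed marginal-utility weights

def group_utility(x, j):
    # Concave returns: utility grows with selections but flattens out.
    return weights[j] * np.sqrt(x)

def alpha_welfare(utils, alpha):
    # Alpha-fair welfare: alpha = 0 is utilitarian; alpha = 1 uses logs.
    utils = np.asarray(utils, dtype=float)
    if alpha == 1.0:
        return np.sum(np.log(utils))
    return np.sum(utils ** (1.0 - alpha)) / (1.0 - alpha)

def best_split(alpha):
    # Brute-force search over integer splits (each group gets at least 1).
    splits = [(x, B - x) for x in range(1, B)]
    return max(splits, key=lambda s: alpha_welfare(
        [group_utility(s[0], 0), group_utility(s[1], 1)], alpha))

print("utilitarian (alpha=0):", best_split(0.0))   # favors the high-gain group
print("alpha-fair (alpha=2): ", best_split(2.0))   # pulls utilities closer together
print("demographic parity:   ", (B // 2, B // 2))  # equal selection rates
```

Even in this tiny example the three answers disagree, echoing the divergence the paper documents at larger scale.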

These results show the fundamental limitations and blind spots of statistical parity metrics as the primary way to assess and enforce algorithmic fairness.

By ignoring the actual welfare stakes of AI decisions and the differential impact on different groups, these metrics can lead to systems that perpetuate or even exacerbate existing inequalities. They also lack normative justification and consistency, as different parity criteria often yield conflicting recommendations in practice.

In contrast, social welfare optimization provides a principled and unified way to navigate the tradeoffs between fairness and efficiency in AI systems. It aims to make explicit the value judgments and ethical assumptions in the choice of social welfare function to allow developers and policymakers to have more transparent and accountable conversations about the distributive impact of algorithmic decision-making.

Moreover, recent work has shown that social welfare optimization can be easily integrated into the standard machine learning workflow, either as a post-processing step or directly into the training objective itself.

Algorithmic decision making and the cost of fairness

For example, in work building on “Algorithmic decision making and the cost of fairness,” researchers have proposed regularization techniques that add a social welfare term to the loss function of a classification or regression model, so the system learns decision rules that balance accuracy and welfare. Ustun et al. introduced a post-processing method that takes the output of any pre-trained model and finds the welfare-maximizing decisions subject to various fairness constraints.
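As a rough illustration of the regularization idea (a minimal sketch under invented data; the `benefit` vector and all numbers are assumptions, and this is not the cited papers' exact formulation), a utilitarian welfare term can be subtracted from a logistic regression loss:

```python
import numpy as np

# Synthetic data: 200 examples, 3 features, labels from a noisy linear rule.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = (X @ np.array([1.0, -1.0, 0.5]) + rng.normal(size=200) > 0).astype(float)
benefit = rng.uniform(0.5, 2.0, size=200)   # assumed utility of a positive decision

w = np.zeros(3)
lam, lr = 0.1, 0.1
for _ in range(500):
    p = 1 / (1 + np.exp(-(X @ w)))          # selection probabilities
    # Gradient of the cross-entropy loss.
    grad_ce = X.T @ (p - y) / len(y)
    # Gradient of expected welfare mean(benefit * p); a concave alpha-fair
    # aggregate could be substituted for this utilitarian term.
    grad_welfare = X.T @ (benefit * p * (1 - p)) / len(y)
    # Descend on loss = cross-entropy - lam * welfare.
    w -= lr * (grad_ce - lam * grad_welfare)

print("learned weights:", w)
```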

These technical results show that social welfare optimization is a feasible and practical way to build fair and equitable AI systems. Because the framework's normative considerations are captured in a clear, computable objective function, developers can apply powerful optimization techniques and software packages to find allocations that balance competing criteria.

But realizing the full potential of social welfare optimization in practice also requires tackling a number of hard challenges and limitations. One of the biggest is the difficulty of eliciting and constructing individual utility functions that capture the complex, multi-dimensional impact of AI decisions on human lives. This requires deep engagement with affected stakeholders and domain experts to understand the contextual factors that shape people’s preferences, values, and well-being.

There are also theoretical and philosophical questions about the interpersonal comparability of utility, about decision-making under uncertainty and over time, and about how to aggregate individual utilities into a collective social welfare measure. Different social welfare functions make different assumptions on these points, and there is no universal consensus on which is most defensible or appropriate in a given context.

Moreover, as with any optimization-based approach, there is a risk that the objectives being maximized may not fully capture all relevant ethical considerations, or may be skewed by biases and blind spots in the data and models used to estimate utilities. Well-designed processes of stakeholder participation, transparency, and accountability are essential to ensure that the welfare criteria being optimized align with the values and priorities of affected communities.

Despite these challenges, the benefits of social welfare optimization for algorithmic fairness are too big to ignore. The approach gives AI developers and policymakers a principled and flexible way to move beyond statistical parity and balance equity against efficiency. Ultimately, it points toward a more holistic and consequentialist notion of fairness grounded in human welfare and well-being.


The #1 Use Case: Fair Lending

To show the promise and challenges of social welfare optimization in practice, let’s consider the high-stakes domain of algorithmic lending. In recent years, many banks and fintech companies have adopted machine learning models to automate and accelerate credit decisions. These models use vast amounts of personal and financial data to predict the likelihood a loan applicant will default so lenders can make faster and more efficient underwriting decisions.

However, there is growing evidence that these algorithmic lending systems are perpetuating and amplifying historical biases and disparities in credit access. Studies have shown that Black and Latino borrowers are more likely to be denied loans or charged higher interest rates than similarly qualified White borrowers, even when controlling for traditional risk factors like income, credit score, and employment status.


In response to these concerns, some lenders have turned to statistical parity methods like demographic parity and equalized odds to mitigate bias in their AI underwriting models. The idea is to equalize loan approval rates or default rates across protected groups so the models treat all applicants equally regardless of race or ethnicity.

While these parity-based approaches may seem intuitive, they fail to capture the complexity of creditworthiness and the differential impact of loan access on the welfare of marginalized communities. A growing body of research suggests that simplistic notions of fairness based on equalizing outcomes can actually backfire and harm the very groups they are intended to protect.

For example, a 2018 article notes that enforcing demographic parity constraints on a utility-maximizing decision rule generally requires using sensitive variables like race in both model training and decision-making. This implies that attempts to satisfy parity constraints by using race only during training, known as “disparate learning processes,” will be suboptimal.

Furthermore, parity-based fairness criteria ignore the fact that the harms of being denied credit are not evenly distributed across the population. For low-income and minority borrowers who have historically been excluded from mainstream financial services, being denied a loan can have devastating consequences, trapping them in cycles of poverty and predatory debt. More affluent and privileged applicants, by contrast, may have alternative sources of capital and be far less affected by an adverse credit decision.

Social welfare optimization offers an alternative approach that directly incorporates these differential welfare stakes into the design of fair lending algorithms. Lenders can develop credit models that maximize overall welfare while ensuring a more equitable distribution of opportunities by defining a social welfare function that captures the relative costs and benefits of loan access for different individuals and groups.

For example, consider a social welfare function that prioritizes the welfare of the least advantaged applicants, giving more weight to the utility gains of low-income and minority borrowers. This could be formalized using an alpha fairness function with a moderately high value of alpha, indicating a strong preference for equity over efficiency.


Under this social welfare objective, the optimal lending policy would likely involve lending more to marginalized groups even if their predicted repayment rates, on average, are somewhat lower. This is because the welfare gains from lending to these underserved communities (e.g., enabling them to buy a home, start a business, or pursue education) may outweigh the increased risk of default from a societal perspective.
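A hypothetical sketch of such a rule (all names, numbers, and the welfare model are invented for illustration; this is not a production underwriting system) greedily approves the loans that most increase alpha-fair welfare across groups:

```python
import numpy as np

# Approve K of n applicants to maximize alpha-fair welfare across two
# groups, where group 1 is assumed to gain more welfare per approved loan
# but to have somewhat lower predicted repayment rates.
rng = np.random.default_rng(1)
n, K, alpha = 40, 10, 2.0
group = rng.integers(0, 2, size=n)
repay = np.clip(rng.normal(np.where(group == 0, 0.85, 0.75), 0.05), 0.0, 1.0)
gain = np.where(group == 0, 1.0, 2.0)    # assumed welfare gain per loan

def welfare(u, alpha):
    # Alpha-fair welfare over group utilities.
    return np.sum(np.log(u)) if alpha == 1.0 else np.sum(u ** (1 - alpha)) / (1 - alpha)

utils = np.full(2, 1e-3)                 # small base utility so welfare is finite
approved = np.zeros(n, dtype=bool)
for _ in range(K):
    # Greedy step: approve the applicant whose loan raises welfare the most.
    best_i, best_w = -1, -np.inf
    for i in np.flatnonzero(~approved):
        trial = utils.copy()
        trial[group[i]] += gain[i] * repay[i]    # expected welfare contribution
        if welfare(trial, alpha) > best_w:
            best_i, best_w = i, welfare(trial, alpha)
    approved[best_i] = True
    utils[group[best_i]] += gain[best_i] * repay[best_i]

print("approvals - group 0:", int(approved[group == 0].sum()),
      "| group 1:", int(approved[group == 1].sum()))
```

Compare this with a risk-only baseline that simply approves the K highest predicted repayment probabilities: in this toy data that baseline would direct most loans to group 0, whereas the alpha-fair rule spreads approvals across both groups.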

Of course, implementing such a welfare-maximizing lending system in practice would require overcoming significant data and modeling challenges. Lenders would need to collect granular data on the socioeconomic characteristics and financial needs of loan applicants, as well as on the downstream impacts of credit access on their well-being over time. They would also need to engage with affected communities to ensure that the welfare criteria being optimized align with their values and priorities.

Furthermore, there may be important legal and regulatory considerations around using protected class information (e.g., race, gender, age) to make lending decisions, even if the goal is to promote equity. Policymakers would need to provide clear guidance on how anti-discrimination laws apply in the context of social welfare optimization and create safe harbors for lenders who use these techniques in a transparent and accountable manner.

Despite these challenges, the effort is worthwhile. Social welfare optimization can help advance financial inclusion and close the racial wealth gap by allowing lenders to make more holistic, welfare-aware credit decisions, redirecting the flow of capital to traditionally underserved communities and empowering them economically. It can also provide a more principled and transparent way to navigate the tradeoffs between fairness and efficiency in lending, one grounded in the real-world impacts on borrowers’ lives.

Putting it in Perspective

As the lending example shows, social welfare optimization is a frontier for algorithmic fairness that goes beyond statistical parity and towards a more holistic and consequentialist notion of equity based on human welfare and well-being.

This approach can help AI developers and policymakers make more principled and accountable decisions about the design and deployment of algorithmic systems in high-stakes domains. They can do so by defining and maximizing a social welfare function that reflects considered moral judgments about the distribution of benefits and burdens.

However, realizing the full potential of social welfare optimization in practice will require a lot of interdisciplinary work. Computer scientists and AI ethics scholars will need to work with economists, philosophers, legal experts, and affected communities to tackle the normative and technical challenges of defining and computing social welfare functions. This includes tough questions around individual utility measurement and aggregation, uncertainty and dynamics, and the right tradeoff between efficiency and equity in different contexts.

Policymakers and regulators also need to provide more guidance and create an environment in which welfare-aware AI can be developed and deployed. This may mean updating existing anti-discrimination laws and regulations to accommodate social welfare optimization, and creating new governance frameworks and oversight mechanisms that ensure transparency, accountability, and public engagement in the design and use of these systems.

Ultimately, the shift to social welfare optimization in AI must be accompanied by broader efforts to address the underlying structural inequities and power imbalances that shape the development and impact of technology in society.

Algorithmic fairness interventions, no matter how well designed, can’t substitute for more fundamental reforms to promote social and economic justice, such as investing in education, healthcare, housing, and infrastructure in marginalized communities.

As Hooker and his colleagues say in their CPAIOR 2024 paper:

“Social welfare optimization provides new ways to design fair and good algorithmic systems. Much work remains to be done to develop and operationalize these approaches, but we think they are a way forward for AI ethics. We can get to a more holistic and morally serious way of building machine learning systems that serve all of society by framing our notions of fairness in the language of welfare economics and explicitly dealing with the distributional consequences of our technology.”

Overall, to achieve truly fair AI, we must ensure these approaches are rigorously tested and refined in real-world scenarios, embodying a commitment to justice and societal well-being.
