AI approach developed with human decision-makers in mind

Summary of how the information available to the subject (the applicant’s role) and the information available to the algorithm (the applicant’s personality type) relate to the applicant’s ability (bad or good). This information is shown to all participants in the experiment and is easily accessible throughout their decision-making process. Credit: arXiv (2024). DOI: 10.48550/arxiv.2405.01484

As artificial intelligence takes off, how do we efficiently integrate it into our lives and our work? Bridging the gap between promise and practice, Jann Spiess, an associate professor of operations, information, and technology at Stanford Graduate School of Business, is exploring how algorithms can be designed to most effectively support—rather than replace—human decision-makers.

This research, published on the arXiv preprint server, is particularly pertinent as prediction machines are integrated into real-world applications. Mounting empirical evidence suggests that high-stakes decisions made with AI assistance are often no better than those made without it.

From credit reports, where overreliance on AI can lead to misread risk scores, to social media, where toxicity models may latch onto particular words and misclassify posts, successful implementation lags behind the technology’s remarkable capabilities.

“We don’t have much work—yet—that takes design of the human-AI interface really seriously,” Spiess says. “Our debate about AI and the capabilities of AI is really misplaced, because it’s all about ‘Is the AI better than the human?'” he continues. “I think instead we should be asking, ‘What are the complementary uses of AI?'”

Today’s AI tends to prioritize capability over usability, creating a set of problems that lead users to make poor decisions. If users rely too heavily on an algorithm, for example, they may disregard relevant context or information that the algorithm lacks.

On the other hand, if users perceive recommendations as rigid, overly complex, or irrelevant, they may dismiss them altogether, defaulting to their own judgment and forgoing any advantage the recommendations might provide. There’s also misinterpretation, which occurs when a user misunderstands how an algorithm arrives at its results, or fails to recognize its limitations, yet acts on its recommendation anyway.

A more thoughtful design for human-AI interaction, Spiess posits, recognizes how decision-makers respond to the recommendations algorithms provide. “The best algorithm is the one that takes into account how a human will interact with the information it provides,” he says.

In a recent paper, Spiess and Bryce McLaughlin, Ph.D. ’24, of the Wharton Healthcare Analytics Lab at the University of Pennsylvania, outline a conceptual design framework modeling how humans respond to algorithmic recommendations—and present a different approach to building AI tools. This approach, known as complementarity, aims to refine human-AI collaboration rather than bypass human input altogether.

Sample hiring decision from the Predictive treatment. Credit: arXiv (2024). DOI: 10.48550/arxiv.2405.01484

Better decisions, better outcomes

To test the efficacy of this complementary approach, the researchers compared different recommendation strategies in a simulated hiring experiment in which subjects made 25 hiring decisions with varying levels of algorithmic assistance.

People using a complementary algorithm—which offered selective recommendations in cases where a human was likely to be uncertain or incorrect—made the most accurate decisions, outperforming those using a purely predictive algorithm as well as those using no algorithmic support whatsoever.
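
The logic of such a selective policy is easy to sketch. The snippet below is a minimal illustration of the idea, not the authors’ implementation; the probability inputs and threshold values are assumptions chosen for readability.

```python
def complementary_recommendation(p_algo_good, p_human_correct,
                                 human_threshold=0.6, algo_threshold=0.8):
    """Sketch of a selective-recommendation rule (illustrative assumptions).

    p_algo_good:     the algorithm's predicted probability the applicant is good
    p_human_correct: an estimate of how likely the human is to decide correctly unaided
    """
    # If the human is likely right on their own, stay silent: a recommendation
    # adds little and invites over-reliance.
    if p_human_correct >= human_threshold:
        return None
    # If the algorithm itself is unsure, also stay silent rather than nudge
    # the human toward a low-confidence guess.
    if max(p_algo_good, 1 - p_algo_good) < algo_threshold:
        return None
    return "hire" if p_algo_good >= algo_threshold else "reject"

# The human is uncertain (55% chance of deciding correctly) and the algorithm
# is confident (90% the applicant is good), so a recommendation is shown:
print(complementary_recommendation(0.90, 0.55))  # -> hire
# The human is already likely correct (80%), so the algorithm stays silent:
print(complementary_recommendation(0.90, 0.80))  # -> None
```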

It’s an encouraging result, which Spiess and his collaborators are evaluating across several research projects. “There’s a lot of promise [around] AI improving decisions and through that improving outcomes,” Spiess says.

“And that has led to new questions: How should I design an algorithm to make public or social policy decisions, for example? If we can learn how to improve policy by using data—and using it at scale in processes that are transparent and fair—we may be able to produce algorithms that deliver on the promise of this new technology.”

Spiess is particularly interested in applications that affect how services are allocated in resource-constrained environments, such as placing tutors in underserved school districts with limited budgets.

Spiess suggests the approach of for-profit enterprises—maximizing returns—might be applied to social impact. “Ads are targeted, but can we better target social interventions? This is a high-stakes decision, and if you could use algorithms to improve resource allocation at scale, there are a number of high-value use cases in areas where we don’t have ready or clear solutions.”
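
One standard way to operationalize this kind of targeting is to rank candidate sites by predicted benefit per dollar and allocate greedily within the budget. The sketch below is a generic knapsack-style heuristic with hypothetical numbers, not a method from the paper:

```python
from dataclasses import dataclass

@dataclass
class District:
    name: str
    predicted_gain: float  # estimated benefit of placing a tutor (hypothetical units)
    cost: float            # cost of placing a tutor in this district

def allocate_tutors(districts, budget):
    """Greedy benefit-per-cost allocation under a fixed budget.

    Shown for illustration only; a real deployment would also need
    fairness constraints and uncertainty estimates on the predicted gains.
    """
    ranked = sorted(districts, key=lambda d: d.predicted_gain / d.cost, reverse=True)
    chosen, spent = [], 0.0
    for d in ranked:
        if spent + d.cost <= budget:
            chosen.append(d.name)
            spent += d.cost
    return chosen, spent

districts = [
    District("A", predicted_gain=4.0, cost=10.0),
    District("B", predicted_gain=9.0, cost=30.0),
    District("C", predicted_gain=5.0, cost=25.0),
]
print(allocate_tutors(districts, budget=40.0))  # -> (['A', 'B'], 40.0)
```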

It’s the type of question that Stanford GSB is particularly well suited to answer, Spiess says, citing colleagues such as economics professor Susan Athey, the director of the Golub Capital Social Impact Lab.

“Delivering these solutions requires putting together the technical capability with the context and being able to model the human component. We are uniquely well positioned to think about algorithms in context and have a rich history of doing so,” he says. “Plus, we’re in Silicon Valley. We are immersed with the tools to actually implement projects in this space.”

More information:
Bryce McLaughlin et al., Designing Algorithmic Recommendations to Achieve Human-AI Complementarity, arXiv (2024). DOI: 10.48550/arxiv.2405.01484

Provided by Stanford University

