AI usage harms expert reputation, study recommends
Using AI can be a double-edged sword, according to new research from Duke University. While generative AI tools may boost productivity for some, they might also secretly damage your professional reputation.

On Thursday, the Proceedings of the National Academy of Sciences (PNAS) published a study showing that employees who use AI tools like ChatGPT, Claude, and Gemini at work face negative judgments about their competence and motivation from colleagues and managers.

"Our findings reveal a dilemma for people considering adopting AI tools: Although AI can enhance productivity, its use carries social costs," write researchers Jessica A. Reif, Richard P. Larrick, and Jack B. Soll of Duke's Fuqua School of Business.

The Duke team conducted four experiments with over 4,400 participants to examine both anticipated and actual evaluations of AI tool users.
Their findings, presented in a paper titled "Evidence of a social evaluation penalty for using AI," reveal a consistent pattern of bias against those who receive help from AI.

What made this penalty particularly concerning for the researchers was its consistency across demographics. They found that the social stigma against AI use wasn't limited to specific groups.

[Fig. 1 from the paper "Evidence of a social evaluation penalty for using AI." Credit: Reif et al.]

"Testing a broad range of stimuli enabled us to examine whether the target's age, gender, or occupation qualifies the effect of receiving help from AI on these evaluations," the authors wrote in the paper. "We found that none of these target demographic attributes influences the effect of receiving AI help on perceptions of laziness, diligence, competence, independence, or self-assuredness. This suggests that the social stigmatization of AI use is not limited to its use among particular demographic groups. The result appears to be a general one."

In the first experiment conducted by the team from Duke, participants imagined using either an AI tool or a dashboard creation tool at work. It revealed that those in the AI group expected to be judged as lazier, less competent, less diligent, and more replaceable than those using conventional technology.
They also reported less willingness to disclose their AI use to colleagues and managers.

The second experiment confirmed these fears were justified.
When evaluating descriptions of employees, participants consistently rated those receiving AI help as lazier, less competent, less diligent,
less independent, and less self-assured than those receiving similar help from non-AI sources or no help at all.