Imagine an AI assistant that gives you a recommendation at work. You can stick with your judgment or accept the AI’s recommendation. Now add one twist: your manager can see exactly how often you accept those recommendations.
Suddenly, using AI is not just about being right. It is also about what your reliance signals about you. Do you use the tool freely when it helps, or do you hold back because frequent AI use might make you look less confident, less skilled, or less hardworking?
Concerns about how workers are perceived can deter effective collaboration with artificial intelligence (AI). In my job market paper, I show that making AI use visible can reduce reliance on AI and hurt performance. In a real online labor market, workers used AI recommendations 14 percent less when they knew an evaluator could observe their reliance, and their accuracy fell as a result. Even more striking: if successful collaboration means performing better than either the worker or the AI could do alone, observable AI use wipes out about one in four success stories.
The big idea: AI adoption is also about image
Organizations often treat AI as a pure productivity lever. Give people a good model, train them, and align incentives, and performance should rise.
But “using AI” isn’t just an input into output. It can also become a signal.
If heavy reliance might be read as:
- “I’m not confident,”
- “I’m not skilled,” or
- “I’m not trying hard enough,”
then workers may strategically hold back, even when everyone agrees the goal is accuracy.
This matters because modern workplaces make AI use unusually easy to track. Many AI systems produce logs, dashboards, and metrics by default. That creates a subtle tension: the same monitoring that helps managers understand workflows can also turn tool use into something workers feel they have to manage socially.
A field experiment in a real labor market
To isolate this “image concern” channel, I ran a field experiment on Upwork. I hired 450 U.S.-based workers for a paid image-categorization task (a form of data annotation). Each worker completed 50 rounds. In each round:
- The worker chose a category for a blurry image.
- They then saw an AI recommendation.
- If the AI disagreed, they could stick with their answer or switch to the AI’s.
The key behavior I track is AI reliance: how often workers switch to match the AI's recommendation when it disagrees with their initial answer.
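To make the measure concrete, here is a minimal sketch of how per-worker reliance could be computed from round-level records. The data schema and field names are hypothetical illustrations, not the paper's actual data:

```python
# Hypothetical round-level records; field names are illustrative, not the paper's schema.
rounds = [
    {"initial_answer": "cat", "ai_answer": "dog", "final_answer": "dog"},  # switched to AI
    {"initial_answer": "cat", "ai_answer": "dog", "final_answer": "cat"},  # stuck with own answer
    {"initial_answer": "dog", "ai_answer": "dog", "final_answer": "dog"},  # agreement: uninformative
]

def ai_reliance(rounds):
    """Share of disagreement rounds in which the worker switched to the AI's answer."""
    disagreements = [r for r in rounds if r["initial_answer"] != r["ai_answer"]]
    if not disagreements:
        return None  # reliance is undefined if the AI never disagreed
    switches = sum(r["final_answer"] == r["ai_answer"] for r in disagreements)
    return switches / len(disagreements)

print(ai_reliance(rounds))  # 0.5: switched in 1 of 2 disagreement rounds
```

Rounds where the AI agrees with the worker reveal nothing about willingness to defer, so only disagreement rounds enter the measure.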
Workers were competing for a higher-paid contract extension, based on feedback from an HR evaluator. Evaluators always saw each worker’s accuracy. The experiment randomized one crucial detail:
- Private group: the evaluator saw accuracy only.
- Visible group: the evaluator saw accuracy and how often the worker relied on AI.
Workers knew exactly what the evaluator would see, and they were explicitly told that the evaluator’s job was to score them based on expected accuracy in a future version of the same AI-assisted task.
So if there were ever a setting where “just use the helpful tool” should be the dominant strategy, it was this one.
What changed when AI reliance became visible
When workers knew their AI reliance would be seen, they held back:
- AI reliance fell from 30.5% to 26.2% (a drop of 4.3 percentage points, or 14% relative to the private group).
- Accuracy fell from 79.1% to 76.4% (a drop of 2.7 percentage points).
This is the key result: visibility reduced AI use, and performance fell with it.
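As a quick sanity check, the relative magnitude follows directly from the two reliance rates in the bullets above (a back-of-the-envelope sketch, not code from the paper):

```python
# Reliance rates from the bullets above: private vs. visible group
private, visible = 0.305, 0.262
print(f"{(private - visible) / private:.1%}")  # 14.1% -- the "14%" relative drop
```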
Importantly, workers did not compensate in the obvious ways. They didn't offset lower AI use by trying harder and meaningfully improving their initial answers. Nor did they get better at deciding when to override the AI. They were simply more likely to reject high-quality AI advice, and their accuracy suffered.
Figure 1: Treatment Effects on AI Reliance and Accuracy

The real cost: fewer successful human-AI teams
A lot of AI deployment is ultimately justified by a simple promise: people plus AI should be better than either one alone.
In my setting, the AI alone was 85% accurate, and workers knew it.
When AI reliance was private, 24.7% of workers exceeded the AI’s standalone accuracy with the AI’s help. When reliance was visible, only 18.5% did.
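This is also where the "one in four" figure from the introduction comes from. A minimal sketch, using only the numbers reported in the text; the helper function and variable names are my own illustration:

```python
AI_ALONE = 0.85  # standalone AI accuracy, which workers knew

def is_successful_team(worker_with_ai_accuracy: float) -> bool:
    """A human-AI team 'succeeds' if the worker, helped by the AI, beats the AI alone."""
    return worker_with_ai_accuracy > AI_ALONE

# Shares of workers who beat the AI's standalone accuracy, from the text
success_private, success_visible = 0.247, 0.185
shrinkage = (success_private - success_visible) / success_private
print(f"{shrinkage:.0%} of success stories disappear")  # ~25%, i.e., about one in four
```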
That’s the real productivity cost. Even when the AI is strong, and even when everyone claims to care only about outcomes, social meaning can quietly shrink the set of people who realize the full benefits.
Why do workers hold back? Because AI use can look like low confidence
Saying “image concerns matter” is easy. The harder question is: what exactly do workers think heavy AI reliance signals?
To get at this directly, I used a feature that's central to many platforms, and increasingly common inside organizations: public feedback.
After the task, workers selected one positive trait they wanted emphasized in feedback:
- effort (“I worked hard”),
- skill (“I’m good at this task”),
- confidence (“I’m confident in my judgment”).
For some workers, AI use was made salient as something that could appear in the feedback (flagged as above or below average). That framing changed what they wanted to signal.
The result was sharp:
- In the baseline framing, 11.9% chose to highlight “confidence.”
- When AI use might be visible, 25.8% chose “confidence” (a 117% increase).
In other words: when AI use might be observed, workers suddenly cared much more about being seen as confident in their own judgment.
Figure 2: Feedback Preferences

What this means for organizations rolling out AI
The message is not “don’t measure AI use.” It’s that measurement can change what AI use means, and that can undercut adoption and performance.
A few practical takeaways:
- Reward outcomes, not the optics of tool use. If the goal is accuracy or quality, align evaluation tightly with those outcomes. Over-emphasizing “how you got there” can discourage effective tool use.
- Be careful with individual AI-use dashboards. Metrics meant for coaching can become ranking tools, and ranking tools can create stigma.
- Normalize AI as infrastructure. When AI is treated like spellcheck (standard and expected), reliance becomes less diagnostic of competence.
- Design workflows that reduce “visible switching.” One image-threatening moment is being seen changing your mind because of AI. More seamless integration can reduce the sense that accepting AI advice is an admission of weakness.
If firms want AI to raise productivity in practice, they need to treat adoption as partly a social and organizational design problem, not only a technical one.
Takeaway
AI can raise productivity, but only if people feel comfortable using it. This paper provides causal evidence that when AI reliance becomes observable, image concerns reduce AI adoption and lower performance, even when everyone is explicitly told the goal is accuracy. Realizing AI’s benefits therefore requires attention not just to algorithms, but to the social meaning attached to using them.
About the author
David Almog is a sixth-year PhD student in the Managerial Economics and Strategy (MECS) program at Kellogg School of Management, Northwestern University.
Research interests: behavioral economics, experimental economics, and applied microeconomics. David studies questions related to AI-human interaction, attention, strategic incentives, and monitoring.
To learn more about his research, visit: https://www.david-almog.com