I enjoyed this breakdown of why naïve AI ROI analyses can be misleading and why rigorous causal inference, especially counterfactual analysis, is so important. In the corporate world, we often don't distinguish between selection effects and true treatment effects, so I appreciate the focus on measuring AI’s impact more precisely, even when the gains aren’t as dramatic as initial comparisons suggest.
One thing I’d be curious to explore further: this analysis focuses on engineers who were already top performers and self-selected into using AI. That makes sense for mitigating selection bias, but it leaves an open question: what about the engineers who didn’t initially adopt AI?
Some past research (like Ethan Mollick’s work with BCG on AI in consulting) has suggested that AI can actually provide the biggest gains for mid-level performers rather than top experts. While coding and consulting are different domains, the broader question remains: how does AI impact those who weren’t early adopters?
It would be fascinating to see a counterfactual that compares non-adopters as they are (without AI) vs. those same non-adopters given AI. Would their productivity gains be smaller, the same, or unexpectedly large?
... stay tuned! ;)
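To make the selection-vs-treatment distinction concrete, here's a toy simulation (all numbers are hypothetical, not from the analysis being discussed): stronger engineers are more likely to self-select into AI use, so a naive adopters-vs-non-adopters comparison overstates the true per-engineer effect.

```python
# Illustrative sketch with made-up numbers: a naive adopter-vs-non-adopter
# comparison conflates a selection effect with the true treatment effect.
import random

random.seed(0)

n = 10_000
engineers = []
for _ in range(n):
    skill = random.gauss(100, 15)  # baseline productivity (hypothetical units)
    # Assumption: stronger engineers are more likely to self-select into AI use.
    p_adopt = min(max((skill - 70) / 60, 0.05), 0.95)
    adopts = random.random() < p_adopt
    true_lift = 10                 # assumed true treatment effect, by construction
    observed = skill + (true_lift if adopts else 0)
    engineers.append((skill, adopts, observed))

adopters = [obs for skill, a, obs in engineers if a]
non_adopters = [obs for skill, a, obs in engineers if not a]

naive_gap = sum(adopters) / len(adopters) - sum(non_adopters) / len(non_adopters)
true_effect = 10  # the counterfactual comparison: each engineer vs. themselves

print(f"naive adopter-vs-non-adopter gap: {naive_gap:.1f}")
print(f"true treatment effect (by construction): {true_effect}")
# The naive gap exceeds the true effect because adopters were already stronger.
```

The gap between the two numbers is pure selection effect, which is exactly what a counterfactual design is meant to strip out.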
“Some past research (like Ethan Mollick’s work with BCG on AI in consulting) has suggested that AI can actually provide the biggest gains for mid-level performers rather than top experts.”
I can support this conclusion with a similar finding from my dissertation: comprehension increased significantly among audio learners with mid-level knowledge of the topic who used navigational affordances. The response curve was quadratic. My hypothesis for this finding was that high-end comprehenders didn’t need the affordances, while low-end comprehenders didn’t know they needed them. This would explain why the greatest effect size appears in mid-level performers.
Excited to hear about this! Great to see the causal revolution come to business. I’d love to hear more about how you build a business or department that’s prepared to bet so much on an evidence-based approach.
The Police Bodycams study seems like an odd example, given that 'officer discretion' might be a reaction to the risks of different neighbourhoods, for example. There are also massive privacy concerns surrounding always-on body-cams.
https://www.aclu.org/news/privacy-technology/police-officer-discretion-use-body-worn-cameras
The net effect has probably been positive, but not as the left envisioned.
https://x.com/willchamberlain/status/1900177947262062614