EDITORIAL: MIT Severs Ties With AI Productivity Study, Citing Data Concerns. But What’s the Full Story?

Published May 18, 2025

A once-celebrated study on AI’s impact on scientific productivity is now at the center of quiet controversy. MIT has officially disavowed the working paper Artificial Intelligence, Scientific Discovery, and Product Innovation, authored by former PhD student Aidan Toner-Rodgers, urging that it be “withdrawn from public discourse” due to questions around the reliability of its data.

The study, which was posted to arXiv in late 2024 and submitted to The Quarterly Journal of Economics, claimed that deploying a large-language-model-powered AI assistant in a materials science lab led to significantly more materials discoveries and increased patent filings. It also highlighted a downside: researchers reported feeling less satisfied with their work, a tension between accelerated output and human fulfillment.


Initially, the paper attracted praise from some of the most influential names in economics. MIT professors Daron Acemoglu and David Autor, both of whom have shaped policy and debate around labor and technology, openly supported the research. Autor told the Wall Street Journal he was “floored” by its implications. But now, both economists say they have “no confidence” in the study’s validity, citing concerns over data provenance. Their joint statement was released as part of MIT’s announcement.

The university claims its decision followed an internal review prompted by concerns from a computer scientist familiar with materials science. However, citing student privacy protections, MIT declined to share any findings. The author is now “no longer at MIT.”

The timing and nature of the disavowal raise quiet but serious questions.

The paper made claims that were not only provocative but potentially inconvenient: it highlighted how AI may generate more scientific output while leaving the scientists themselves less fulfilled. It touched a nerve in the ongoing debate about productivity versus meaning, mechanization versus creativity, acceleration versus agency.

That such a paper would be pulled not over confirmed fraud but over concerns about data, with no transparent evidence released, carries echoes of past institutional responses to uncomfortable findings. Remember when internal Google researchers raised alarms over the limits and ethical dangers of AI scaling? That paper, too, was internally suppressed before co-author Timnit Gebru was ousted. It wasn’t about science. It was about optics.

The MIT case also brings to mind moments in history where research that questions the prevailing techno-optimism meets sudden institutional discomfort. What happens when the data doesn’t tell the story the system wants to hear? Who decides when a scientific finding is too disruptive?

MIT has requested the paper be removed from both the journal submission queue and from arXiv, though only the author can request an arXiv withdrawal. As of this writing, it remains online.

It’s worth asking: if the data is flawed, why not release the review and let the academic community scrutinize it? If the author was truly discredited, why the silence?

The original paper, regardless of its fate, struck at something deeply relevant — how AI may redefine productivity, not just in terms of metrics, but in terms of meaning. That this research has been so swiftly cast into the shadows without open debate should give anyone invested in technology’s future a reason to pause and reflect.

Not everything that is withdrawn disappears. Sometimes, it only lingers louder.

“Propaganda is to a democracy what violence is to a dictatorship.”

William Blum