I was watching an otherwise great presentation that opened by arguing that although pretrial risk assessment tools (one of the catalysts for a lot of work in algorithmic fairness and beyond) affect tens of millions of people, algorithmic Big Tech systems affect many millions more.

This struck a nerve with me that went beyond the context of this single presentation. To be sure, one always wants to position one's work as addressing a pressing issue, filling a glaring gap, or otherwise making leaps and bounds beyond what came before. But I think there are structural consequences to framing impact on Big Tech vs. consequential societal domains as an Either-Or question rather than a Both-And question.

We should be developing responsible AI infrastructure everywhere: in AI safety, AI ethics, public-interest technology, the public sector, the social sciences, everywhere. This is not really happening; instead there is more and more fragmentation rather than solidarity. While it's true that individual researchers end up developing expertise by going deep in a particular area, and therefore individuals do face Either-Or decisions, the community as a whole would be better off realizing a Both-And ecosystem.

For many structural reasons, including the job market for remarkably skilled and thoughtful responsible AI graduates, the scales are rigged towards Big Tech. So I argue that realizing a Both-And stance towards responsible AI, public-interest technology, whatever you call it, requires greater organizing and deliberate efforts to connect "AI for social impact" with specific, existing organizational and policy infrastructure that delivers social impact.

It wasn't really my place to call out the framing in the moment, but I wanted to unpack some common rhetorical moves that reveal how "impact" remains deeply contested in ways we rarely talk about explicitly.

I'm reminded of some realizations I had while presenting our position paper "Fostering the Ecosystem of AI for Social Impact Requires Expanding and Strengthening Evaluation Standards" at NeurIPS 2025. Bryan Wilder and I were co-program chairs for EAAMO 2022; afterwards we found ourselves complaining about the review ecosystem for AI for social impact and, on a lark, ended up writing our complaints up as a position paper. I won't go into the details of the paper (the title is our position), but some of the conversations we had while presenting it exposed rifts in the current zeitgeist.

Our "position paper poster" (?) was interleaved into the rest of the conference, so we did our best used-car-salesman impression to engage folks who had the misfortune of making eye contact. So we got some questions like "what is AI for social impact"?

In 2026 it's worth making sharper distinctions, as we are all drowning in a deluge of AI. One impression is that AI is everywhere now, sometimes against people's will, such that any improvement in AI counts as an improvement in social impact.

I personally think it's worth sharpening our terminology. One counteroffer is to distinguish between improving AI for institutions that develop AI and tech versus institutions that first and foremost deliver social impact. Broadly, the latter includes healthcare, social services organizations, public-sector programs, nonprofits, and various other non-tech companies. We need to better cultivate an explicit academic-practitioner community of practice around responsible AI/OR/tech/whatever for institutions that deliver social impact.

I'm not being a technosolutionist here: I fully believe that often the best answer for AI in public-sector deployments is no AI, logistic regression, or a finely crafted regex. (I have another hot take to write about why AI-focused philanthropic RFPs are also contributing to "the problem" at large :) ) But I also believe that developing better connections with impact organizations (taking as a given that our goal is to improve social welfare, not just hillclimb on AI) is one route out of technosolutionism.

That's one of the goals of the "Bridging Prediction and Intervention Problems in Social Systems" whitepaper: to highlight how program evaluation's north star of improving societal outcomes on the ground can provide broader conceptual clarity for data-driven decision-making for social impact at large.

I don't have the answer. I've looked to the public-interest technology community for several years as an example of tech delivery in the public sector. But my sense is that we lack pathways from, let's say, the systems science of decision-making (AI/OR/ML) to public-interest technology, which itself has shifted more towards a product/design focus (rather than a technology focus).