Why “AI-First” Isn’t Enough for a Sector Built on Trust
The hotel ballroom in New York buzzed with the familiar electricity of a tech conference – half excitement, half inevitability. Onstage, a sleek banner read: “Becoming AI-First: Building Organizations That Think at Machine Speed.”
I had come to listen, to learn, and, if I’m honest, to reaffirm what I already believed. Like many in the field, I had bought into the idea that success in the age of artificial intelligence would depend on becoming AI-first.
For years, the phrase had been gospel in the private sector. Google used it to describe its transformation from search engine to predictive ecosystem. Spotify adopted it to refine personalization. JPMorgan Chase invoked it to automate credit and risk. Even Zoom had rebranded itself as an “AI-first company,” embedding generative tools into every interaction.
It made sense for them. In markets defined by margins, speed and scale are survival strategies. But as I listened to the speakers that morning – executives detailing how algorithms had redefined workflows, metrics, and decision-making – something inside me stirred uneasily. The conversation was full of what technology could do. No one seemed to ask whether it should.
I looked down at my notebook and wrote two words that would change how I thought about my work entirely: “HUMANITY > UTILITY.”
That single greater-than symbol reframed everything. Until that moment, I had accepted the premise that the organizations most likely to thrive in the AI era would be those that embedded it most deeply. But that small note reminded me that in the world of philanthropy – where trust, empathy, and connection are the true currencies – AI-first is not the right goal.
And it was then I realized: while the private sector can afford to be AI-first, the nonprofit sector must be Human-First + AI Forward.
The Trouble With “AI-First”

In the corporate world, “AI-first” signals ambition. It means reorganizing a business around intelligence rather than intuition – building products, processes, and even identities on top of machine learning and data analytics.
For companies like Google or Shopify, that’s strategic brilliance. For nonprofits, it’s an existential risk.
We don’t compete on efficiency; we compete on meaning. We measure success not by optimization but by impact. The nonprofit sector runs on belief – belief that generosity matters, that people can change lives, that trust is sacred.
When AI is implemented without that context, it risks hollowing out what makes us human.
→ Automation, unchecked, can trade empathy for expedience.
→ Algorithms can reinforce bias faster than we can detect it.
→ Predictive tools can become prescriptive ones, subtly shifting the balance between serving people and sorting them.
At Virtuous, where I serve as Chief AI Officer, we talk about this tension every day. My role isn't just to shape AI strategy – it's to serve as the conscience of our technology. That means ensuring that every innovation we pursue is rooted in our deepest belief: that technology should amplify human flourishing, not replace it.
I’m proud to say Virtuous has built its ethos on that principle. We are not afraid to say no – even to promising tools – if they don’t align with human dignity or long-term mission impact. We measure progress not only by what our systems can do, but by what they should preserve: agency, authenticity, and trust.
That posture doesn’t slow innovation. It anchors it.
The Shift Toward Human-First + AI Forward

Human-First + AI Forward is not a rejection of technology – it’s a rebalancing of values. It’s a philosophy that insists AI should be in service to people, not the other way around.
Being Human-First means starting with empathy, transparency, and consent. It means giving people the right – and responsibility – to question, override, and shape technology. It means ensuring every automated process has a human in the relationship, not just a human in the loop.
Being AI Forward means we still move boldly. We experiment, iterate, and embrace new tools that can help scale generosity, personalize experiences, and illuminate patterns we might otherwise miss. But we do so anchored in a deeper awareness of consequence.
At Virtuous, this balance defines how we think about innovation. We often describe it as moving fast with conscience – pushing forward while constantly asking, does this serve humanity as much as it serves efficiency?
That’s not easy work. It means slowing down sometimes when the world says, “Go faster.” It means saying no to ideas that dazzle in the short term but could dull our moral clarity in the long run. Yet I believe that tension – between curiosity and conscience – is precisely where responsible innovation lives.
The Currency of Trust
The nonprofit sector doesn’t operate on quarterly earnings – it operates on relational equity. Every donation, every volunteer hour, every act of partnership rests on an unspoken agreement: I trust you to do what’s right.
If AI becomes another black box – something invisible, inscrutable, or unaccountable – that agreement begins to fracture. Transparency is not optional in our work; it’s sacred.
This is why responsible AI is a leadership imperative. When donors, beneficiaries, or partners don’t understand how AI informs their interactions, trust erodes. And once trust breaks, technology cannot rebuild it.
At Virtuous, we’ve made transparency a non-negotiable part of our design culture. We share how and where AI is used, invite feedback, and build visible oversight into every deployment. We view our AI systems not as hidden machinery but as extensions of our organizational values.
Because trust doesn’t scale automatically; it scales through stewardship.
The Tension Between Data and Dignity
A few months ago, I spoke with a nonprofit leader whose organization had implemented predictive analytics to identify likely donors. The system worked beautifully – until it didn’t.
“Sometimes the algorithm tells us who’s most likely to give,” she said, “but not who most needs to be asked.” Her team realized the model was unintentionally prioritizing transactional patterns over transformational relationships.
That’s the paradox of AI in mission-driven work. Data can tell us what’s efficient, but not always what’s right. Efficiency optimizes for outputs; empathy optimizes for outcomes.
At Virtuous, we’ve seen this play out as both a caution and a call to action. It’s why we’ve built our platform around the concept of responsive fundraising – technology that adapts to human behavior without dictating it, that listens before it automates. We use AI not to replace the fundraiser’s intuition, but to extend it.
And this is the framework we've used to build Virtuous Insights, our AI-powered donor intelligence tool that lets you connect with the individuals in your database as human beings, not just disparate data points.

By uniting your first-party data with real-time wealth and demographic insights, Virtuous Insights creates dynamic 360° donor profiles that evolve daily. More than just data, it offers predictive intelligence to help you know who to engage, when, and how – all natively integrated within Virtuous CRM+.
Whether you’re looking at individual records or macro trends across your donor base, Insights empowers fundraisers to move from transactional asks to transformative relationships. In a time of growing disconnection, this is how we reignite generosity – with precision, purpose, and radical connection at scale.
→ See Insights in action – book a demo HERE.
This is what Human-First + AI Forward looks like in practice: systems that accelerate generosity while preserving its humanity.
From Responsibility to Governance
For organizations ready to adopt AI, the first question shouldn’t be “What can we automate?” but “What should we govern?”
AI governance is how we make conscience actionable. It’s the framework that helps organizations ensure technology decisions align with values, ethics, and mission.
At Virtuous, we believe that governance isn't about limiting innovation – it's about guiding it with integrity. A good governance framework transforms responsible AI from an aspiration into a daily practice. It helps teams ask the right questions, assess risks, and act with confidence.

Here’s a roadmap I often share with nonprofit leaders exploring this work:
7 Foundational Steps for an AI Governance Policy
1) Anchor in Mission and Values
→ Articulate clear guiding principles rooted in your organization’s purpose.
→ Define what “responsible AI” means in your context – name specific commitments such as equity, transparency, and dignity.
2) Map AI Use-Cases and Risks
→ Identify where AI is already in use or under consideration.
→ Classify each by its potential impact on people and alignment with mission (a simple, illustrative register is sketched after this list).
3) Establish Oversight and Accountability
→ Form a cross-functional AI governance team that includes leadership, program, and technical voices.
→ Assign owners for review, approval, and audit cycles.
4) Preserve Human Oversight
→ Design every AI system with clear human override and opt-out mechanisms.
→ Protect the right to question, intervene, and pause automation when necessary.
5) Embed Transparency and Feedback Loops
→ Communicate openly about how AI is used.
→ Create easy ways for staff and stakeholders to report concerns or ideas.
6) Educate and Empower Teams
→ Provide continuous AI literacy and ethical training.
→ Celebrate curiosity, skepticism, and reflection as signs of maturity – not resistance.
7) Audit, Report, and Refine
→ Conduct regular audits for fairness, bias, and drift.
→ Share findings internally and update policies as technology evolves.
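To make steps 2 through 4 (and the audit trail in step 7) concrete, here is a minimal sketch of what a use-case register might look like if your team chooses to keep one in code. Everything in it is an assumption for illustration – the impact tiers, the field names, and the review rule are one possible shape, not a prescription, and not a description of how Virtuous or any other platform works.

```python
from dataclasses import dataclass
from enum import Enum

class MissionImpact(Enum):
    LOW = "low"        # back-office convenience (e.g., meeting summaries)
    MEDIUM = "medium"  # shapes communication or internal prioritization
    HIGH = "high"      # affects who is served, solicited, or sorted

@dataclass
class AIUseCase:
    # All field names here are hypothetical, chosen to mirror the steps above.
    name: str
    owner: str                       # accountable person (step 3)
    impact: MissionImpact            # classification from step 2
    human_override: bool             # safeguard required by step 4
    disclosed_to_stakeholders: bool  # transparency commitment from step 5
    last_audit: str | None = None    # fairness/bias/drift review date (step 7)

def needs_review(use_case: AIUseCase) -> bool:
    """Flag any high-impact use-case that is missing a safeguard."""
    missing_safeguard = not (
        use_case.human_override and use_case.disclosed_to_stakeholders
    )
    return use_case.impact is MissionImpact.HIGH and missing_safeguard

# Illustrative entry: a predictive donor-scoring model, like the one
# described earlier in this piece.
register = [
    AIUseCase(
        name="donor-likelihood-scoring",
        owner="VP of Development",
        impact=MissionImpact.HIGH,
        human_override=True,
        disclosed_to_stakeholders=False,
        last_audit=None,
    ),
]

for uc in register:
    if needs_review(uc):
        print(f"Governance review required: {uc.name}")
```

Even a register this small changes the conversation: instead of debating AI in the abstract, a governance team can walk the list and ask, use-case by use-case, whether the safeguards match the stakes.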
The Courage to Stay Human
AI is reshaping our world faster than any innovation before it. The temptation to move at machine speed is powerful, but speed without reflection is drift.
The future of the nonprofit sector will belong not to those who adopt the most AI, but to those who adopt it most ethically. To those willing to pause, to question, to balance progress with purpose.
At Virtuous, we see this as our calling – to help the social sector harness technology without losing its soul. To model what it looks like to scale empathy, not just efficiency. To remind our peers that “responsible and beneficial AI” isn’t just a tagline – it’s a covenant.
That’s why I view my role not as predicting the future of technology, but protecting the future of humanity within it.
When I think back to that morning in New York – to those two words I wrote in my notebook, HUMANITY > UTILITY – they feel even truer now. The greatest achievement of AI won’t be in what it automates, but in what it awakens: our collective ability to choose wisdom over velocity, ethics over ease, people over process.
Because the goal of this new era isn’t to outsmart machines. It’s to out-care them.
And if we can do that – if we can stay human while moving forward – the future will be worthy of the word progress.
