
More and more YMCAs and nonprofits are asking the same question: How can we use AI in a way that's smart, safe, and aligned with our mission? To help answer that, we brought together a panel of experts who've been on the front lines of AI policy, bringing legal, nonprofit, and tech perspectives.
Here’s what we covered:
AI tools have quietly made their way into nonprofit workflows. Whether it's ChatGPT or other generative tools, many staff are already using AI, sometimes without leadership even realizing it. At a recent conference of 150 nonprofit organizations, over 60% of attendees said they were using AI in some form, but when asked how many had a policy in place... not a single hand went up. That's a gap with real implications. Without a clear framework, organizations risk missteps around data privacy, misinformation, and trust with their community members. That's why the YMCA of San Diego created a clear AI policy that lists approved AI tools and explains how to use them safely in daily workflows.
- John Merritt, SVP & CIO, YMCA of San Diego County
With AI rapidly evolving, a one-and-done policy won't cut it. As Catherine Lake noted, the real work begins after the policy is written, through ongoing communication, staff support, and practical application. John Merritt echoed this sentiment, pointing out that it's not about limiting staff; it's about giving them the tools to use AI safely and confidently in their daily work. By outlining which tools staff can use, a policy frees them to explore the ways AI can support their work without fear of the unknown, empowering them to innovate and ultimately serve their community even better.
Some of the most effective ways organizations are bringing policy to life include:
One simple way to help staff know what's safe is a clear model like Traction Rec's three-tiered data rule:
- Catherine Lake, Partner, Dorsey & Whitney LLP
Let's be honest... AI and AI policy can feel like a lot. But creating a safe, responsible foundation doesn't mean writing a 10-page document today (or ever). Start simple. Sit down for a coffee chat with a staff member you know is already using AI and ask them: What AI tools are you currently using at work? Where is AI actually helping you day-to-day? Are there other AI tools you think would benefit you and your team?
This kind of open conversation creates space for safe experimentation and gives you clarity on what's already happening in your organization. From there, it's about building a lightweight framework that balances innovation and trust. Traction Rec and the YMCA of San Diego County use a trusted-tools approach that encourages the use of vetted vendors, like Salesforce's Agentforce or ChatGPT with a commercial license, to help safeguard your data.
- Jonny Power, CTO, Traction Rec
Tips for vetting AI tools at your nonprofit:
Start small. Talk to your staff. Understand what’s already in use. Then build a policy that protects your people, your data, and your mission.
Traction Rec has created an AI Readiness Guide to help nonprofit leaders build practical policies, identify risks, and create a foundation for safe experimentation. Reach out and we'll send it right over!

Catherine Lake, Partner, Dorsey & Whitney LLP

John Merritt, Senior Vice President & CIO, YMCA of San Diego County

Jonny Power, Chief Technology Officer, Traction Rec

Chief Revenue Officer, Traction Rec