You can’t manage what you don’t see.
Compliance is moving past “having a privacy policy” toward algorithmic accountability: governing the automated systems that handle donor data, predictions, and beneficiary interactions. The monitor turns that landscape into a searchable enforcement history and a set of trackable rules. Expand a topic below when you need depth, or use the sidebar to open the tool.
What the monitor tracks
Complivia built the tool around both sides of the compliance equation: enforcement actions regulators have already taken against nonprofits, and new rules that are reshaping obligations. It includes 50 documented enforcement actions from 2024 through 2026—fraudulent solicitation, private inurement, governance failure, tax violations, and more—with the legal basis, the sanctioning authority, and the outcome recorded. On the regulatory side, it tracks 20 current and incoming rules, categorized by impact and rated by severity.
New rules from the IRS, Department of Labor, and state attorneys general are still landing—the One Big Beautiful Bill Act, proposed donor-advised fund regulations, and more. These are effective or incoming obligations, not hypothetical risks.
Four areas of pressure (2026)
The landscape below is reference material. Open any section to read it; leave them closed to skim faster.
1. From Data Privacy to AI Governance
In the early 2020s, compliance focused on data protection (GDPR, CCPA). By 2026, the focus has shifted to mandatory AI impact assessments.
- Algorithmic Transparency: Several states (e.g., California’s AB 2013 and Colorado’s AI Act) now require nonprofits to disclose the data sources used to train their AI models.
- Bias Audits: If a nonprofit uses AI for hiring or scholarship selection, it is now legally liable for discriminatory outcomes. Annual independent bias audits are becoming a standard requirement in jurisdictions like New York City and Illinois.
- Explainability: Regulators now expect “Explainable AI” (XAI), meaning a nonprofit must be able to explain how an algorithm reached a specific decision (e.g., why a certain donor was flagged for high-capacity giving).
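The explainability expectation above is often met in practice with “reason codes”: alongside each automated decision, the system reports which inputs drove it. The sketch below is a deliberately simple illustration of that pattern; the feature names, weights, and threshold are invented for this example, not part of any regulation or vendor product.

```python
# Minimal sketch of "reason codes" for an explainable donor-capacity flag.
# Feature names and weights are invented for illustration only.

WEIGHTS = {"past_gifts_usd": 0.6, "event_attendance": 0.3, "board_contact": 0.1}

def flag_with_reasons(donor: dict, threshold: float = 0.5):
    """Score a donor and return the ranked features that drove the decision."""
    contributions = {f: WEIGHTS[f] * donor.get(f, 0.0) for f in WEIGHTS}
    score = sum(contributions.values())
    reasons = sorted(contributions, key=contributions.get, reverse=True)
    return score >= threshold, reasons  # decision plus ranked reason codes

flagged, reasons = flag_with_reasons(
    {"past_gifts_usd": 0.9, "event_attendance": 0.4, "board_contact": 1.0}
)
# flagged is True; reasons ranks past_gifts_usd first
```

Even this toy version shows the compliance point: the organization can answer “why was this donor flagged?” with a concrete, auditable ranking rather than a black-box score.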
2. IRS & Fiduciary Evolution
The IRS has moved from general oversight to specific “Use Case” governance.
- Private Benefit Doctrine: The IRS now scrutinizes AI vendor contracts. If a nonprofit provides its unique donor data to an AI company to “train” a model in exchange for a discount, the IRS may view this as a Private Benefit to the vendor, potentially risking the organization’s 501(c)(3) status.
- UBTI Risks: Selling AI-generated research or licensing proprietary AI tools developed in-house can trigger Unrelated Business Taxable Income (UBTI).
- Board Responsibility: AI governance is now considered a fiduciary duty. Boards are expected to move beyond “IT oversight” to ensuring AI aligns with the mission and does not create “algorithmic blind spots” that exclude marginalized communities.
3. Financial & Grant Compliance
Technology has automated the “paper trail,” but it has also raised the bar for audit readiness.
- Single Audit Threshold: As of fiscal years ending in 2025, the Single Audit threshold for federal funds increased to $1 million. However, the Uniform Guidance now emphasizes digital procurement documentation and automated subrecipient monitoring.
- Crypto & Digital Assets: New accounting standards (ASU 2023-08) require nonprofits to measure crypto holdings at fair value at each reporting period, necessitating more frequent and tech-integrated financial reporting.
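The fair-value model means a holding is remeasured to its market value at each reporting date, with the change flowing through the period’s results. A minimal sketch of that arithmetic, with illustrative figures (the function and numbers are this example’s, not prescribed by the standard):

```python
# Hypothetical sketch: period-end fair value remeasurement of a crypto
# holding under a fair-value model. All figures are illustrative.

def remeasure(carrying_value: float, units: float, period_end_price: float):
    """Return the new carrying value and the gain/(loss) for the period."""
    fair_value = units * period_end_price
    gain_or_loss = fair_value - carrying_value
    return fair_value, gain_or_loss

# Example: 2.0 BTC carried at $80,000 total; period-end price $45,000/BTC.
new_value, change = remeasure(80_000.0, 2.0, 45_000.0)
# new_value = 90_000.0; change = +10_000.0 recognized this period
```

Because remeasurement happens every reporting period, price feeds and accounting systems need to be integrated tightly enough to produce these figures on schedule.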
4. Communication & Ethical Compliance
The rise of “Deepfakes” and Generative AI has led to new consumer protection rules.
- Synthetic Likeness (No FAKES Act): Nonprofits using AI-generated avatars or “success stories” in fundraising must conspicuously disclose that the content is synthetic.
- Donor Deception: The FTC now treats AI-generated donor communications that misrepresent how funds are used as “unfair or deceptive practices.”
- Safety Protocols for Chatbots: California’s SB 243 (effective 2026) requires nonprofits using “companion” or “counseling” chatbots to have specific protocols for detecting suicidal ideation and providing human intervention.
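To make the last bullet concrete, here is a deliberately minimal illustration of a human-escalation check. Production systems use trained classifiers and clinically reviewed protocols, not keyword lists; the term set and function below are invented for this sketch.

```python
# Deliberately minimal illustration of a human-escalation check for a
# "companion" chatbot. Real deployments use trained classifiers and
# clinically reviewed protocols, not keyword matching.

CRISIS_TERMS = {"suicide", "kill myself", "end my life"}  # illustrative only

def needs_human_intervention(message: str) -> bool:
    """Route the conversation to a trained human if a crisis term appears."""
    text = message.lower()
    return any(term in text for term in CRISIS_TERMS)
```

The compliance obligation is the routing behavior, not the detection method: when the check fires, the protocol must hand the conversation to a human rather than let the bot continue.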
Summary checklist
| Category | Compliance Action |
| --- | --- |
| Vendor Management | Update contracts to ensure AI providers indemnify the nonprofit for IP infringement. |
| Data Privacy | Implement a “Right to Delete” that extends to training data in AI models. |
| Fundraising | Disclose the use of “synthetic performers” in any AI-generated marketing. |
| Governance | Create an AI Acceptable Use Policy and an AI Use Case Inventory. |