Governance That’s a Living Conversation, Not a Compliance Document
I’m Ayşegül Güzel, a Responsible AI Governance Architect. I help mission-driven organizations turn AI anxiety into trustworthy systems through deep listening, collaborative design, and rigorous technical practice.

Most Organizations Approach AI Governance Backwards
They start with a policy document and wonder why nothing changes.
Meanwhile, teams are anxious. Leadership is uncertain. Shadow AI is spreading. And the regulations keep shifting.
The result? Expensive documents that gather dust. Compliance checkboxes that don’t build trust. A widening gap between what you say you value and what your technology actually does.
There’s a better way.
Spring Cleaning for Your AI Strategy
I believe governance isn’t a document; it’s a culture.
My process doesn’t start with checklists. It starts with deep listening. I create a safe space for teams to name their tensions, surface their values, and build systems they actually believe in.
Think of it as spring cleaning: a time to open windows, create order, and prepare for what’s coming. You feel more in control. More connected. Ready.
The result isn’t a perfect policy. It’s a living system—built collaboratively, owned collectively, and designed to evolve as your organization grows.
“Ayşegül didn’t just draft our AI policy—she helped us really define our ethical stance and synthesised all the complex and diverse perspectives on the team. She was a real ‘strategy partner.’”
— Organization Leader
From Reacting to AI to Responding with Intention
My career bridges two worlds that rarely speak to each other:
Executive Social Leadership
As Founder & CEO of Zumbara, I scaled the world’s largest time bank network—designing governance and trust protocols for 50,000+ users. I learned that safety isn’t a code. It’s a culture.
Technical AI Practice
As a Data Scientist, I led error analysis for models processing 1 billion strings of transaction data, reducing error rates by 20%. I learned how the “black box” actually works.
What this means for you:
I don’t just speak “ethics”; I speak error rates, community dynamics, and boardroom risk. I’ve sat where you sit. And I can translate between your technical teams, your leadership, and your stakeholders.

From Principles to Practice
Three ways to build AI governance that actually works.
The Governance Roadmap
For Organizations Ready to Lead
In 4-6 months, move from confusion to confidence. Your board signs off on a living governance system—not a document that gathers dust.
Using my “Look, Create, Build” methodology, we’ll:
- Surface the real tensions in your AI use
- Co-create principles your team actually believes in
- Embed governance into daily workflows
Outcome: Full “governance competence.” A system you own.
Human-in-the-Loop Red Teaming
For AI Builders & Buyers
Know exactly where your AI could fail—before the public does.
We stress-test your models for real-world harm: bias, hallucinations, safety failures. I define your specific “bad actors” and vulnerable users, then break your system so you can fix it.
Outcome: A battle-tested system ready for launch.
The Global Governance Launchpad
For Networks & Foundations
Move your entire ecosystem from awareness to practice in one sprint.
A cohort-based program that equips 20+ organizations with governance frameworks simultaneously. Live masterclasses, shared templates, peer learning.
Outcome: Scalable impact across your whole network.
What Leaders Say
“Ayşegül has a rare ability to bridge the gap between technical AI concepts and social impact. Her keynote was not only inspiring but deeply practical. She engaged our diverse audience and left us with clear takeaways on responsible innovation.”
— Conference Organizer
“Ayşegül delivered a friendly and informative session that we hope will trigger reflection at our company and influence our AI strategy. We would be very happy to work together in the future.”
— Corporate Strategy Team
“The Algorithmic Impact Assessment session was a great experience! It provided a solid overview of measuring impacts and assessing risks… The real-world examples were especially insightful.”
— Workshop Participant
Recent Work
Strategic Governance
Architected the Responsible AI Roadmap for Changemaker Exchange, a board-approved governance framework now guiding all AI decisions.
Global Education
Supporting the “AI for Social Good” curriculum for TechSoup Europe, training NGO leaders across 15+ countries.
Technical Audit
Conducted bias audits under NYC Local Law 144 for BABL AI, ensuring compliance before high-stakes deployment.
University Teaching
Leading the AI Skills Lab at ELISAVA University—training the next generation in AI evaluation techniques and responsible innovation.
Bootcamp Training
Delivering Risk Assessment workshops for Tech to the Rescue AI Bootcamps—equipping social sector technologists with governance fundamentals.
Thought Leadership
Keynotes at Mozilla Fest and Megaphone, exploring how we build technology that serves life.
The Library
I believe good frameworks should be accessible.
On my Substack, AI of Your Choice, I share the same thinking I bring to client work—plus free templates you can use immediately:
→ AI Policy templates
→ Risk assessment frameworks
→ Governance checklists
→ Real-world case studies
Join a growing community of mission-driven leaders navigating AI with intention.
Ready to Start the Conversation?
I accept a limited number of strategic engagements per quarter.
If you’re a mission-driven organization ready to move from reacting to AI to responding with intention—let’s explore what’s possible.