The Architect

Two decades of building trust-based systems—from community platforms to AI governance frameworks.

I’ve spent my career designing systems where humans can trust each other—and now, where humans can trust technology.


The Story

Safety Is Not a Code. It’s a Culture.

My career bridges two worlds that rarely speak to each other.

The First Chapter: Building Trust at Scale

Before I ever audited an algorithm, I spent fifteen years building communities where trust was the currency.

As Founder & CEO of Zumbara, I created the world’s largest time bank network—a platform where 50,000+ people exchanged skills and services using time instead of money. I didn’t just build a technology platform. I designed governance frameworks for a community that ran on trust: conflict resolution protocols, value systems that emerged from the bottom up, and facilitation practices that held space for messy, human complexity.

I learned that trust isn’t a feature you can code. It’s a culture you cultivate.

As Managing Director of TechSoup Turkey, I bridged the gap between technology and civil society. I helped hundreds of NGOs adopt digital tools—but more importantly, I facilitated the difficult conversations about what technology should and shouldn’t do. I saw firsthand how organizations struggle when technology outpaces their ability to govern it.

The Pivot: Understanding the Black Box

By 2020, I recognized that the future of social systems would be shaped by AI. And I couldn’t guide organizations through that future without understanding the technology from the inside.

So I went back to the beginning.

As a Data Scientist, I led error analysis and internal audits for models processing 1 billion strings of transaction data. I reduced error rates by 20% through rigorous performance investigation. I learned how algorithms actually work—their limitations, their failure modes, their hidden assumptions.

I became a Certified AI Auditor and Red Teamer, partnering with organizations like BABL AI and Humane Intelligence to stress-test AI systems before they could cause harm.

The Convergence

Today, I bring both worlds together.

I don’t just speak “ethics”—I speak error rates, community dynamics, and boardroom risk. I’ve sat where you sit. I’ve built the community platforms. I’ve debugged the models. I’ve facilitated the difficult conversations.

This is what allows me to solve the “Operationalization Gap” in AI governance: the chasm between what organizations say they value and what their technology actually does.

“Ayşegül has a rare ability to bridge the gap between technical AI concepts and social impact.”
— Conference Organizer


The Methodology

My Roots in Research & Design

My governance approach didn’t emerge from compliance frameworks. It emerged from facilitation, design thinking, and ethnographic research—methods I’ve practiced for two decades.

Deep Listening Before Design

Before I write a single policy, I listen. Not for what people should say, but for what they’re actually experiencing.

This practice comes from years of community facilitation—holding space for groups to surface tensions, name desires, and find their own wisdom. At Zumbara, we didn’t impose values from the top. We created safe spaces where 50,000 people could discover shared principles through lived experience.

I bring the same approach to AI governance. My “Look” phase isn’t a survey. It’s confidential interviews, shadow AI mapping, and careful attention to what’s not being said.

Participatory Design & Stakeholder Alignment

I believe the people affected by a system should shape its rules.

This conviction was forged through years of driving multi-stakeholder alignment in complex ecosystems—from projects like Giftival (collaborative gatherings across four continents) and Anadolu Jam (transformative gatherings for changemakers) to participatory governance assemblies where communities defined their own boundaries.

When I help organizations create AI principles, those principles don’t come from a consultant’s template. They emerge from collaborative workshops where your team—diverse voices, messy disagreements, honest tensions—builds something they actually believe in.

Experience Design

I design experiences that transform understanding.

Whether it’s a “Time Machine” visualization session, a Zumbara Festival where thousands exchanged skills without money, or an Executive AI Clinic for senior leaders, I create containers where people can encounter new ideas safely—and leave changed.

This is why my governance work sticks. It’s not a document handed over. It’s an experience your organization lives through together.


The Philosophy

Spring Cleaning for Your Relationship with Technology

I believe governance isn’t a document. It’s a culture.

Think of it as spring cleaning: a time to open windows, create order, and prepare for what’s coming. You feel more in control. More connected. Ready.

The most important piece of Responsible AI isn’t technology. It’s care, gentleness, and the joy you pour into the process. That’s what makes innovation flourish.

Guiding Principles:

→ Governance is a living conversation
Not a compliance checkbox. Not a document that gathers dust. A continuous dialogue that evolves as your organization grows.

→ Principles are your only stable ground
In a world where technology, regulations, and user expectations shift every week, your values are the only thing you can anchor strategy on.

→ We are guardians, not gatekeepers
Responsible AI isn’t about saying “no.” It’s about protecting what matters most while enabling meaningful innovation.

→ The more diverse the voices, the wiser the system
Bottom-up insights combined with top-down commitment. Technical expertise alongside lived experience. This is how governance becomes legitimate.

→ Connection over separation
Conflict resolution and facilitated conversation aren’t obstacles to governance—they’re the foundation.

“If I explained risk management to my grandmother, she would look at me with her bright smiling eyes and say, ‘You’re telling me as if you’ve discovered something new. This is how we humans have been operating for ages.’”


Recognition

Awards & Achievements

My work in social entrepreneurship and technology for good has been recognized by institutions and networks around the world:

🏆 Global Recognition
Recognized by the International Youth Foundation as one of 20 promising global projects

🏆 Social Entrepreneurship Awards
Recognized by Ozyegin University and Bilgi University for innovative approaches in social impact

🏆 MIT Finalist
Finalist in an MIT social innovation competition

🏆 Garanti KAGIDER
Finalist in the Women Entrepreneurs competition

🏆 Turkey’s Changemakers
Featured in national video series celebrating impactful changemakers

🏆 Ashoka Recognition
Acknowledged for social entrepreneurship leadership

🏆 TEDx Speaker
«Dream Time: Time Bank» — exploring alternative economies built on trust


In the Media

My initiatives have been featured in major media outlets across Turkey and internationally:

Selected Coverage:

  • National television features on Zumbara and time banking
  • Print media coverage of social entrepreneurship initiatives
  • Digital features on technology for social good
  • Podcast and radio interviews on alternative economies

The Journey

2008–2018: Building Trust-Based Communities

  • Founded Zumbara, scaling to 50,000+ members
  • Pioneered “Time Bank 2.0” concept
  • Facilitated Giftival gatherings across four continents
  • Led Anadolu Jam and community transformation events

2015–2020: Bridging Technology & Civil Society

  • Managing Director, TechSoup Turkey
  • Launched “Things” program for youth social innovation
  • NetSquared Global Leadership Council member
  • Supported hundreds of NGOs in digital transformation

2020–2023: Entering the Black Box

  • Data Scientist (1B+ transaction strings)
  • AI Auditor certification
  • Red Teaming practice development
  • Partnership with BABL AI and Humane Intelligence

2023–Present: Governance Architecture

  • Founded AI governance practice
  • Completed 4 full organizational transformations (2025)
  • Conducted 5 technical AI audits (2025)
  • Teaching at ELISAVA
  • International keynotes on life-centered AI

Current Affiliations

I partner with leading organizations in AI safety, ethics, and social impact:

BABL AI — Technical AI auditing and bias evaluation

Humane Intelligence — AI evaluation research and community

ELISAVA — Teaching responsible AI and design ethics

TechSoup — Ongoing advisory on technology for civil society


Beyond the Work

The Person Behind the Practice

My work is deeply personal.

I grew up in Anatolia, shaped by the ancient guild traditions of Ahilik and Bacılık (Bacıyan-ı Rum). In these systems, craftspeople and women organized themselves not just around production, but around shared values, quality of character, and mutual responsibility.

To me, these aren’t just historical footnotes. They are living templates for decentralized governance and trust. They prove that humans have always known how to build systems that balance economic activity with social care.

This heritage informs my practice of Deep Listening. Whether exploring the transformative power of voice through Turkish Sufi traditions or designing intentional environments for reflection, I am constantly studying how we create harmony out of dissonance.

I believe that the same principles that guide personal transformation—deep listening, safe space, and honoring what matters—must guide how we build technology.

This isn’t separate from my AI governance work. It is the foundation.

“Your sacred space is where you find yourself over and over again.”
— Joseph Campbell


Let’s Build Something That Matters

I accept a limited number of strategic engagements per quarter.

If you’re a mission-driven organization ready to move from reacting to AI to responding with intention—let’s explore what’s possible.