
Ethical Systems & Trust Architect

I help organizations design, evaluate, and govern high-stakes socio-technical systems such as AI models, platforms, and institutions, so they earn trust, reduce harm, and measurably improve long-term human well-being.

Focus areas: Trust & Safety • Socio-technical Risk • Responsible AI • Policy & Governance • Fairness & Robustness • Well-being Measurement 

What I Do

I translate rigorous ethical theory into decision-ready tools that hold up under real-world complexity: uneven power, feedback loops, incentives, and long-tail edge cases.

Build Responsible AI / Trust & Safety frameworks
  I develop evaluation criteria, harm taxonomies, governance guardrails, and clear “red-line” constraints so teams can ship systems that are safer, more accountable, and more humane.

Model systemic risk in socio-technical ecosystems
  I analyze how incentives and feedback loops shape outcomes across scales (individual → community → institution), helping teams anticipate downstream harms and design interventions that improve ecosystem health.

Make values measurable (without flattening them)
  I create measurement-minded ethical wayfinding tools: compasses, scorecards, and metrics for tracking dignity, agency, and flourishing over time, so values can guide iteration rather than living in slide decks.

Featured Work

Ethical Field Theory: Ethics as Dynamic Systems, Not Checklists
A field-theoretic model of ethical dynamics that treats rightness, goodness, and virtue as coupled dimensions, enabling diagnosis of how systems amplify harm or support well-being.

Beyond Cost-Benefit Analysis: A Richer Formal Framework for Policy, Institutional Design & Political Economy

The formal tools that dominate policy analysis (cost-benefit analysis, utility maximization, game-theoretic models) represent ethical value as a single number. In doing so, they systematically discard the coupling between welfare and justice, the feedback between institutional norms and civic character, and the structural asymmetries that determine who benefits and who bears harm. This page makes the case that a tensor-field framework provides formally superior tools for policy evaluation, institutional design, equity analysis, and the design of alternatives to extractive economic arrangements.

Ethosystems: AI & Platforms in Full Human Context
A systems framework that treats AI as embedded in psycho-bio-social, economic, political, and ecological environments, designed to reveal hidden leverage points for safer deployment and healthier communities.

From Ethical Geometry to Institutional Design: Deriving Ostrom's Core Design Principles from Ethical First Principles

An a priori derivation of Elinor Ostrom's Core Design Principles (CDPs) from a formal ideal generated by the Ethical Field Theory framework, explaining why these principles, arguably the gold standard in institutional governance and design, make institutions work.

Governance Design for AI & Platforms

Principled governance designs for recommendation systems, content moderation, capability deployment, and platform ecosystems, each grounded in a formal framework that explains why a given design choice is structurally necessary, not just that it works.

Ethical Swampland & Symmetry-Based Objectivity
A principled approach to fairness and objectivity, based on invariance under perspective shifts, that generates “swampland” constraints: criteria identifying high-performing system designs that should nevertheless never be built or shipped.

Symmetry-Based Invariance Tests for Fairness & Robustness

A practical method for fairness and accountability: invariance test suites that catch double standards and brittle failures, translating values into auditable evaluation and governance controls.
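As a minimal sketch of what such an invariance test looks like in practice: perturb a sensitive attribute (a perspective shift) and check that the model's output is unchanged within tolerance. The `score` function, its fields, and the attribute names below are illustrative stand-ins, not any real deployed system.

```python
def score(applicant: dict) -> float:
    # Stand-in model: a brittle rule that (incorrectly) keys on a
    # sensitive attribute -- exactly the double standard such tests catch.
    base = 0.1 * applicant["income_k"]
    return base + (0.05 if applicant["group"] == "A" else 0.0)

def invariance_test(model, example: dict, attr: str, alternatives, tol: float = 1e-6):
    """Return the attribute values (with deltas) for which swapping
    `attr` changes the model's output by more than `tol`."""
    baseline = model(example)
    violations = []
    for value in alternatives:
        counterfactual = {**example, attr: value}
        delta = abs(model(counterfactual) - baseline)
        if delta > tol:
            violations.append((value, delta))
    return violations  # empty list => the invariance holds

example = {"income_k": 50, "group": "A"}
violations = invariance_test(score, example, "group", ["B", "C"])
print(violations)  # non-empty here: the stand-in model fails the test
```

A suite of such tests, run over representative examples and perturbations, becomes an auditable artifact: each failing case documents a concrete double standard rather than an abstract fairness claim.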

Measurement-Minded Ethical Wayfinding
A design-driven approach to operationalizing ethical reflection, turning dignity, agency, and flourishing into practical compasses and metrics that teams can test, iterate, and improve over time.