Research

We’re developing Metavaluation — a participatory framework for recognising and rewarding the value of diverse contributions, from creative work and care to logistics and leadership.

At the core of this framework is a simple idea: evaluations are valuable. By recognising peer evaluations as contributions in their own right, Metavaluation generates community-driven metrics that reflect what people actually value — not just what’s easiest to count.
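To make this concrete, here is a minimal sketch of one way such metrics could be derived, assuming each peer evaluation rates a contribution relative to a shared baseline (the act of performing one evaluation itself). The data shapes and the simple averaging rule are illustrative assumptions, not Wisdom’s actual implementation.

```python
from collections import defaultdict
from statistics import mean

# Illustrative sketch only, not Wisdom's actual implementation.
# Each record says: "to this evaluator, this contribution was worth
# N baseline units", where the baseline is one peer evaluation.
evaluations = [
    ("alice", "workshop facilitation", 20.0),
    ("bob",   "workshop facilitation", 12.0),
    ("carol", "site cleanup",           5.0),
    ("alice", "site cleanup",           8.0),
]

def relative_values(evals):
    """Average each contribution's peer-estimated worth, expressed in
    units of the baseline contribution (one evaluation)."""
    by_contribution = defaultdict(list)
    for _evaluator, contribution, ratio in evals:
        by_contribution[contribution].append(ratio)
    return {c: mean(rs) for c, rs in by_contribution.items()}

print(relative_values(evaluations))
# {'workshop facilitation': 16.0, 'site cleanup': 6.5}
```

Because every score is denominated in the same baseline unit, communities that adopt the same convention could in principle compare their metrics across contexts, which is what makes them interoperable.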

We’re building this into a public good: Wisdom, a free, open-source app that any community can use to run their own peer-led evaluations, generate shared value maps, and coordinate with aligned groups using interoperable metrics.

Why this matters

Across academia, art, activism and open-source, communities face the same problem: essential work is often invisible, undervalued, or unrewarded. Without reliable ways to recognise what matters most, we risk burnout, bias, and misaligned incentives.

Metavaluation offers a new approach. Rather than relying on top-down ratings, black-box algorithms, or abstract scoring systems, it uses simple, peer-led evaluations to surface what a community truly values. Crucially, the data generated through this process can also serve as a foundation for training AI systems — offering a pathway toward value-aligned intelligence grounded in real community priorities.
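As a purely hypothetical illustration of that last point, community value scores could be exported as preference pairs, the format commonly used to train reward models. Nothing below reflects a committed Metavaluation data format.

```python
from itertools import combinations

def to_preference_pairs(values):
    """Yield (preferred, other) pairs for every pair of contributions
    whose community scores differ. `values` maps contribution -> score,
    e.g. the output of the earlier sketch."""
    for a, b in combinations(values, 2):
        if values[a] != values[b]:
            yield (a, b) if values[a] > values[b] else (b, a)

print(list(to_preference_pairs({"workshop facilitation": 16.0, "site cleanup": 6.5})))
# [('workshop facilitation', 'site cleanup')]
```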

Our research questions

We’re building both the tools and the evidence base for a more participatory future. Key research areas include:

  • How do different communities value contributions like labour, ideas, and emotional support?
  • Can we measure the reliability of peer evaluations without central authority? (One possible check is sketched after this list.)
  • What makes value systems fair, adaptive, and interoperable across domains?
  • How can collective valuation support better coordination, recognition, and resource distribution?
  • What would it mean to train AI agents on participatory value systems — rather than scraping the internet?
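On the reliability question in particular, one decentralised check is split-half agreement: randomly divide the evaluator pool in two, score every contribution from each half independently, and correlate the results. High agreement between halves suggests the evaluations are internally consistent, with no central authority required. The sketch below makes illustrative assumptions about data shapes and uses statistics.correlation (Python 3.10+).

```python
import random
from statistics import correlation, mean  # correlation needs Python 3.10+

def split_half_reliability(evals, contributions, seed=0):
    """evals: list of (evaluator, contribution, score) tuples.
    Returns the Pearson correlation between contribution scores
    computed from two random halves of the evaluator pool."""
    rng = random.Random(seed)
    evaluators = sorted({e for e, _, _ in evals})
    rng.shuffle(evaluators)
    half_a = set(evaluators[: len(evaluators) // 2])
    half_b = set(evaluators) - half_a

    def scores(pool):
        # Mean score per contribution, using only evaluators in `pool`.
        return [
            mean(s for e, c, s in evals if c == contrib and e in pool)
            for contrib in contributions
        ]

    return correlation(scores(half_a), scores(half_b))

# Toy data: four evaluators each scored three contributions.
evals = [
    ("alice", "talk", 9), ("bob", "talk", 8),
    ("carol", "talk", 7), ("dana", "talk", 9),
    ("alice", "poster", 4), ("bob", "poster", 5),
    ("carol", "poster", 3), ("dana", "poster", 4),
    ("alice", "cleanup", 6), ("bob", "cleanup", 7),
    ("carol", "cleanup", 6), ("dana", "cleanup", 5),
]
print(split_half_reliability(evals, ["talk", "poster", "cleanup"]))
```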

Where we're experimenting

We’re prototyping across multiple domains — from grassroots festivals to academic conferences and research software alliances — to test the system’s adaptability, reliability, and impact. Each instance contributes data, feedback, and improvements to the broader framework. Each pilot below is listed with the stages of the Metavaluation cycle it has completed so far:
  • Tiny OHM #1 (single-day festival, Brisbane): Pre-review, Record, Review, Recognise, Reward, Respect, Research
  • OHM Gathering (three-day festival, Gold Coast): Pre-review, Record
  • AIMOS (open science conference, Brisbane): Record, Review, Recognise, Reward, Respect, Research
  • Vibeclipse (tech campout prototype, Texas): Record
  • Logische Phantasie (decentralised non-profit, Vienna): Record, Review, Recognise, Reward, Respect
  • Funding the Commons (two-day Design Jam, Bangkok): Record, Review, Recognise, Reward, Respect

Subscribe

Subscribe to our mailing list to stay up to date on all our gatherings, system developments, and open research.

Get involved

We welcome all communities, researchers, and developers interested in participatory governance, collective intelligence, or ethical AI. Whether you want to test the app, conduct research, or build on the framework, we’d love to hear from you. 

We’re building Wisdom as a public good — free, open-source, and grounded in the values of the communities who use it. Your support helps us keep it that way. Whether you’re a researcher, developer, donor, or dreamer, you can help us grow the Metavaluation ecosystem through:

  • Research collaborations
  • Community pilots or use cases
  • Open-source development
  • Financial donations or infrastructure support

Progress

See the posts below for some of our milestones. Follow us on Substack for all the latest updates.

OHM Gathering

Our first official OHM Gathering was hosted over three days in June 2023, at a magical site in Numinbah Valley, Gold Coast …

Tiny OHM #1

Tiny OHM #1 (2022) was our first official gathering since the charity merger and served as an opportunity to test our new WISDOM …