Fractal Democracy — Two Consensus Rounds

James Mart
Published in goƒractally
Jul 26, 2022


Background

It is important to be able to conceptualize a truly decentralized organization as a collection of individuals with competing missions/goals, rather than as a monolithic entity with top-down coordination.

My previous article on Mutually Benefiting From Misaligned Goals introduces the concept of graphing each community member’s personal view of the goals/mission/vision of a community in some n-dimensional coordinate space. A community of people with misaligned goals can still achieve, with greater or lesser efficiency, the benefits of economies of scale that amplify the effect of their individual efforts.

The following graph shows a 2-dimensional “goal space” and uses three levels of stratification to isolate directionally aligned sub-groups of individuals in a theoretical community:
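To make “directionally aligned” a bit more concrete, here is a minimal sketch that scores each member’s goal vector against the community’s average direction using cosine similarity. The member names, coordinates, and the choice of cosine similarity are illustrative assumptions, not something prescribed by the white paper.

```python
import numpy as np

# Hypothetical goal vectors for five members in a 2-D "goal space".
# The names and numbers are illustrative only.
goals = {
    "alice": np.array([1.0, 0.2]),
    "bob":   np.array([0.9, 0.4]),
    "carol": np.array([0.1, 1.0]),
    "dave":  np.array([0.2, 0.9]),
    "erin":  np.array([0.7, 0.7]),
}

def alignment(a, b):
    """Cosine similarity: 1.0 means identical direction, 0.0 means orthogonal goals."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Directional alignment of each member with the community's average goal direction.
avg = np.mean(list(goals.values()), axis=0)
for name, g in goals.items():
    print(f"{name}: {alignment(g, avg):.2f}")
```

Members whose scores cluster together would land in the same directionally aligned sub-group in a graph like the one above.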

The Addendum

The Fractally White Paper Addendum 1 contains several improvement proposals:

  1. Add a second consensus round to the weekly meetings
  2. Allocate $Respect tokens based on fibonacci(avg(level)) rather than avg(fibonacci(level)) (see the sketch after this list)
  3. Use a $Respect rate of issuance that drops continuously by 50% per year until it stabilizes at 5% annual inflation
  4. Consider the reward from consensus meetings to be a share in the inflation of the $Respect token, rather than the $Respect token itself
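To make the difference in proposal 2 concrete, here is a small worked sketch. The example levels and the indexing of the Fibonacci sequence are illustrative assumptions, not values taken from the addendum.

```python
# Sketch of proposal 2: average a member's weekly levels first, then apply
# Fibonacci, rather than averaging already-Fibonacci-weighted rewards.
# The levels and Fibonacci indexing here are illustrative assumptions.

def fib(n: int) -> int:
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

# A member ranked at level 2 one week and level 6 the next.
levels = [2, 6]

avg_of_fib = sum(fib(l) for l in levels) / len(levels)  # avg(fibonacci(level)) = (1 + 8) / 2 = 4.5
fib_of_avg = fib(round(sum(levels) / len(levels)))      # fibonacci(avg(level)) = fib(4) = 3

print(avg_of_fib, fib_of_avg)
```

Because the Fibonacci sequence grows superlinearly, fibonacci(avg(level)) tends to reward a volatile ranking history less than avg(fibonacci(level)) does, favoring members who are ranked consistently.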

This post explores the first proposal in greater depth and offers more justification for it than was originally provided. Because the addendum refers specifically to “measurement error,” it is important that we better understand the unique definitions of both “measurement” and “error” in the context of a Fractal Democracy.

“Measurement”

There are at least two ways to conceptualize a “measurement” in the context of Fractal Democracy. The concept used by the addendum treats each individual in a consensus round as an instrument that provides a single measurement of the value of member contributions; that measurement is then integrated into a group consensus on the relative values of each group member’s contributions. Another concept treats the final consensus of all levels, after all rounds in a given weekly meeting, as a single measurement. As a matter of opinion, I find the latter definition significantly more helpful when analyzing the economic and game-theoretic properties of a Fractal Democracy. The view of a single weekly measurement is also more consistent with my analogy between each weekly consensus meeting and the construction of a single proof of work. Under this model, individuals are nothing more than the moving parts in a single instrument that produces a single result.

I could draw an analogy to a thermocouple, for example. Reading the entire set of levels is like a noisy voltage reading, the process of incorporating the set of levels into a set of moving averages is the sample outlier rejection or noise reduction, and the Fibonacci function is the polynomial transformation to convert the voltage into a meaningful metric, like temperature. A single member, therefore, has no associated objective measurement.
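Here is a minimal sketch of that pipeline, assuming an exponential moving average as the noise-reduction step and a direct level-to-Fibonacci mapping; both are simplifying assumptions rather than the exact rules from the white paper.

```python
# Sketch of the "instrument" pipeline: noisy weekly levels -> moving average
# (noise reduction) -> Fibonacci transform (the meaningful metric).
# The smoothing factor, level history, and Fibonacci indexing are assumptions.

def fib(n: float) -> int:
    a, b = 0, 1
    for _ in range(round(n)):
        a, b = b, a + b
    return a

def moving_average(levels, alpha=0.25):
    """Exponentially weighted moving average over a member's weekly levels."""
    avg = levels[0]
    for level in levels[1:]:
        avg = alpha * level + (1 - alpha) * avg
    return avg

weekly_levels = [4, 5, 3, 6, 5]           # the noisy "voltage readings"
smoothed = moving_average(weekly_levels)  # noise reduction
respect_weight = fib(smoothed)            # the "temperature": a $Respect weight
print(f"smoothed level = {smoothed:.2f}, Fibonacci weight = {respect_weight}")
```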

The choice of how to view a measurement is not arbitrary, and choosing to view an individual as a measurement instrument can meaningfully confuse modeling/analysis efforts.

“Error”

In a true DAO, there is no true mission; there is only the consensus of a community, which changes based on the current membership and their personal goals. In such a context, the very concept of measurement “error” may be called into question: no single person’s opinion is actually wrong or “in error,” it is simply more or less aligned with the consensus of the rest of the community.

Therefore, I propose that “error” in the analysis of a Fractal Democracy means the extent to which the result of a measurement pushes the community’s trajectory into greater misalignment with the trajectory defined by the average goals of its members, thereby decreasing the magnitude of the “economy of scale” effect that amplifies the impact of individual efforts.
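One possible way to formalize this definition, under the goal-space picture from the earlier article, is to measure how much a given measurement increases the angle between the community’s trajectory and the direction defined by the average member goals. The vectors and the angle-based metric below are purely illustrative assumptions, not a definition from the white paper.

```python
import numpy as np

# One possible formalization of "error" in the goal-space picture: the change
# in angle between the community's trajectory and the average-goal direction
# caused by a measurement. All vectors here are illustrative assumptions.

def angle(a, b):
    """Angle in radians between two direction vectors."""
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))

avg_goals = np.array([1.0, 1.0])  # trajectory defined by the average member goals
before    = np.array([1.0, 0.8])  # community trajectory before the measurement
after     = np.array([1.0, 0.5])  # community trajectory after the measurement

# Positive error means the measurement pushed the community further from the
# average-goal direction, weakening the economy-of-scale effect.
error = angle(after, avg_goals) - angle(before, avg_goals)
print(f"error = {error:.3f} rad")
```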

Statistical Error

Statistical error is reduced by taking additional measurements and filtering out the effect of noise. But given this understanding of “measurement” and “error,” it is clear that adding a second consensus round does not add any additional measurements, and therefore it is not designed to reduce statistical error. Rather, the output of the weekly meeting is still a single measurement that maps contributions to their relative value.

Systematic Error

The addition of a second consensus round is precisely an attempt to improve the measurement technique and thereby the quality of the measurement itself. It achieves this by biasing the measurement in the direction of the average opinions of those who are first judged to be responsible for the top 50% of contributions.
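Here is a rough sketch of that two-round structure, under simplifying assumptions: round one ranks everyone in small breakout groups, and only the members who place in the top half of their group advance to a second round, whose rankings bias the final result toward top contributors. Group size, the advancement rule, and the stand-in “consensus” function are all illustrative, not the exact rules from the addendum.

```python
import random

# A rough sketch of the two-round structure, under simplifying assumptions:
# round 1 ranks everyone in small groups; only members who place in the top
# half of their group (approximated here by headcount) advance to a second
# round and are ranked again. Group size, the advancement rule, and the random
# stand-in for group consensus are illustrative, not the addendum's rules.

GROUP_SIZE = 6

def rank_group(group):
    """Stand-in for a group's deliberation: returns members ordered worst to best."""
    return random.sample(group, len(group))  # placeholder for real consensus

def run_round(members):
    """Split members into groups and assign each member a level (1 = lowest)."""
    levels = {}
    members = list(members)
    random.shuffle(members)
    for i in range(0, len(members), GROUP_SIZE):
        group = members[i:i + GROUP_SIZE]
        for level, member in enumerate(rank_group(group), start=1):
            levels[member] = level
    return levels

members = [f"member{i}" for i in range(24)]
round1 = run_round(members)

# The top half of each round-1 group advances; their second ranking is what
# biases the final measurement toward the opinions of top contributors.
advancers = [m for m, lvl in round1.items() if lvl >= GROUP_SIZE // 2 + 1]
round2 = run_round(advancers)
print(round2)
```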

I propose that this bias is justified as a means to improve the measurement quality if the following syllogism holds:

  • Premise 1: The criteria people use to evaluate contributions are personal/subjective.
  • Premise 2: Those whose own criteria are misaligned with the average member criteria are more likely to contribute in ways deemed less valuable on average by the rest of the community.
  • Premise 3: Based on premises 1 and 2, those with more divergent evaluation criteria on average can be identified by lower average ranking.
  • Premise 4: More globally aligned evaluation criteria will result in a measurement that maximizes the effect of economies of scale.
  • Premise 5: Based on premises 3 and 4, the effects of economies of scale are maximized when the opinions of those with lower average ranking are discounted.
  • Conclusion: Systematic error is minimized when we bias the measurement in the direction of top contributors’ opinions.

Clarification: Premise 3 does not imply that those whose contributions are ranked lower necessarily have more divergent evaluation criteria. It implies that the contributions of those who do have more divergent evaluation criteria will tend to be ranked lower.

Error Amplification

One realization we’ve had as a team since writing the ƒractally Whitepaper is that there should be an effort to minimize the number of consensus rounds in order to minimize the amplification of measurement error. The power of the participants in each subsequent consensus round to allocate community $Respect increases exponentially along the Fibonacci curve. Therefore, unless we believe that the addition of a subsequent consensus round will significantly reduce measurement error, it is better to keep the number of consensus rounds small in order to limit the magnitude of errors affecting the community $Respect distribution.
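As a small numeric illustration of that amplification, consider how much $Respect weight a single one-level ranking mistake swings at low versus high levels, assuming weights track the Fibonacci sequence directly (the indexing is an assumption for illustration):

```python
# Why errors at higher levels (reached via later rounds) matter more: because
# $Respect weights follow the Fibonacci curve, the same one-level ranking
# mistake swings far more Respect at higher levels.

def fib(n: int) -> int:
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

# Cost of misjudging a member by one level, at low vs. high levels.
for level in (2, 6, 9):
    swing = fib(level + 1) - fib(level)
    print(f"one-level error at level {level}: {swing} extra/missing Respect weight")
```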

Conclusion

Hopefully, this post shows some more of the rationale behind the proposal to add a second consensus round. I believe that this change has the potential to maximize the effect of economies of scale in group coordination, which is a reasonable definition for a reduction in measurement error in the context of a Fractal Democracy.

James Mart
Twitter: _JamesMart
