Fractal governance — A shared submission algorithm

James Mart
8 min read · Jan 17, 2024

--

Intro

This document proposes an improvement to the process of peer-ranking that happens during fractal governance consensus meetings. It is aimed at simplifying the process, making it more fun, and better achieving its original goals. The proposal was generated as a direct result of participating in and critically observing related real-world experiments: Eden and the Genesis Fractal.

Observations

From the experiment of the Genesis Fractal, it was clear that meetings were competitive, which meant they were often stressful. I would argue that, for many people, this level of competition made the meetings difficult to look forward to and enjoy. This could only be expected to increase in severity if valuable tokens were at stake. For a community to be successful, I think enjoyment is important. Low enjoyment means low participation, and low participation in the context of governance is a failure by definition.

Another observation is that it was extremely rare in the Genesis Fractal for a room to fail to “reach consensus” by the end of the meeting. However, it was common for people to express a dislike of the outcome of their meeting. This means that consensus was not reached, despite the appearance of consensus from the shared submission. People were often not genuinely persuaded by the conversation and instead were simply going along with others’ opinions for the sake of submitting something rather than nothing. This left many people frustrated and feeling that their voices weren’t being heard.

The status quo: An unspecified shared submission algorithm

Historically, everyone in a fractal governance consensus group was expected to align on a single shared submission: the group would either agree on one (i.e., “reach consensus”) or fail to. But as mentioned, true consensus was often not achieved despite the shared submission. This means that, by some unspecified mechanism, people were accepting a submission that diverged from their actual opinion. More often than not, this unspecified mechanism was social pressure.

Recommendation: Specify the shared submission algorithm

Rather than leave the shared submission algorithm to unspecified social pressure, it would be better to plan for this social dynamic and specify the algorithm that generates the shared submission from people’s independent and genuine, even divergent, opinions. Specifying the algorithm makes clear that the submission is in fact not supposed to relay consensus, and no one should expect that the submission perfectly aligns with their opinion. Instead, it must be culturally understood that the submission is intended to reflect a rank ordering that takes into account each person’s perspective, and weights the final submission by the level of consensus that exists.

The shared submission algorithm

Background

Before explaining the algorithm, here is a review of the most up-to-date scoring mechanism:

1. Everyone in a group submits the same rank ordering of group members

2. An equation is applied to incorporate the new Level into the user’s existing (S)core:

S_n = \frac{S_{n-1}*5 + Level}{6}

3. The new (S)core is then used in a continuous Fibonacci function to determine a token distribution:

F_n = \frac{1}{\sqrt{5}}\Phi ^{S_n}
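As a rough sketch, this scoring mechanism could be implemented as follows (the function names are illustrative, not taken from any existing codebase):

```python
from math import sqrt

PHI = (1 + sqrt(5)) / 2  # the golden ratio

def update_score(prev_score: float, level: float) -> float:
    """Fold the new Level into the existing score: S_n = (5*S_{n-1} + Level) / 6."""
    return (5 * prev_score + level) / 6

def token_weight(score: float) -> float:
    """Continuous Fibonacci (Binet's formula): F_n = Phi^S_n / sqrt(5)."""
    return PHI ** score / sqrt(5)
```

Because the update is a moving average weighted 5:1 toward the previous score, a single week's Level moves the score only one sixth of the way toward that Level.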

Context

This new shared submission algorithm doesn’t change any of the core calculations mentioned above. It is simply a preliminary step used to generate the shared submission that identifies each person’s new level.

The math

Below is a proposal for an algorithm that could generate a shared submission from the G individual inputs (one ranking from each group member). It is a two-step algorithm.

Step 1)

Generate a transient score (T) for each person

T = \overline{R} \cdot C
  • T = a person’s transient score. It’s an intermediate value used to generate the final submission.
  • R = the set of rankings given to an individual by their consensus group
  • \overline{R} = the mean of R
  • C = the consensus term. A number between 0 and 1 that measures the strength of the consensus on the rankings for this individual, where 1 is unanimous consensus, and 0 is the worst possible consensus.

From a high level, the point of the equation is extremely simple. It simply takes the average of all the rankings given to a single individual, and it weights it by a consensus term.

The question is then, how do we calculate the consensus term? This is one possible proposal for how to calculate the consensus term, but there could be better ways:

C=\left ( 1-\frac{Var(R)}{MaxVar(G)} \right )
  • Var(R) = the sample variance of the set of rankings R.
  • MaxVar(G) = the maximum possible variance for a group of size G.

This term is also not hard to understand. It is simply the inversion (“1 −”) of the variance expressed as a fraction of the maximum possible variance. In other words, it measures the amount of consensus.

Max variance is the maximum possible amount of variance in one person’s weekly rankings. For example, in a group of size six (G=6), the worst possible variance would be achieved with the rankings: [0,0,0,6,6,6]. The max variance can be calculated as a function of the group size G (for even G, using the sample variance) as:

MaxVar(G)=\frac{G^{3}}{4(G-1)}

For G=6 this gives 10.8, which is consistent with the example calculations at the end of this document.
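Putting step 1 together, here is a minimal Python sketch. It assumes the sample variance and MaxVar(G) = G³/(4(G−1)) (the sample variance of the worst-case rankings for an even group size), which reproduces the example calculations later in this document; the function names are illustrative:

```python
from statistics import mean, variance  # statistics.variance is the sample variance

def max_var(g: int) -> float:
    # Worst-case sample variance for an even group size g,
    # e.g. the rankings [0, 0, 0, 6, 6, 6] for g = 6.
    return g ** 3 / (4 * (g - 1))

def transient_score(rankings: list[float]) -> float:
    """Step 1: the mean ranking weighted by the consensus term C."""
    g = len(rankings)
    c = 1 - variance(rankings) / max_var(g)  # consensus term C in [0, 1]
    return mean(rankings) * c
```

For example, a unanimous ranking like [6, 6, 6, 6, 6, 6] has zero variance, so C = 1 and the transient score equals the mean.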

Step 2)

Rank order everyone’s transient scores to determine the final ordering. (If there is a tie, break it deterministically rather than by negotiation; for example: if unix_epoch_day_number % 2 == 0, order the tied members alphabetically; else, reverse-alphabetically.)

For example, if these are the transient scores:

╔═════════╦═════╗
║ Alice   ║ 4.3 ║
║ Bob     ║ 2.1 ║
║ Charlie ║ 3.3 ║
║ Dana    ║ 4.9 ║
║ Evan    ║ 3.4 ║
║ Frank   ║ 2.3 ║
╚═════════╩═════╝

Then the final submission would be:

╔═════════╦═══╗
║ Dana    ║ 6 ║
║ Alice   ║ 5 ║
║ Evan    ║ 4 ║
║ Charlie ║ 3 ║
║ Frank   ║ 2 ║
║ Bob     ║ 1 ║
╚═════════╩═══╝

It is important to use the integers 1–6 as the submission, and not directly use the transient score as the new score for the week. This ensures a Pareto distribution of resources is used to correspondingly reward the Pareto distribution of contributions to a community.
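Step 2 can be sketched as follows. This version takes a day number as an argument to keep the tie-break deterministic and testable; the `shared_submission` name is illustrative:

```python
def shared_submission(scores: dict[str, float], day_number: int) -> dict[str, int]:
    """Step 2: map transient scores to the integer ranks 1..G (highest score -> G)."""
    g = len(scores)
    # Tie-break: alphabetical on even days, reverse-alphabetical on odd days.
    names = sorted(scores, reverse=(day_number % 2 != 0))
    # A stable sort by score (best first) preserves the tie-break order above.
    names.sort(key=scores.get, reverse=True)
    return {name: g - i for i, name in enumerate(names)}
```

With the transient scores from the table above, this yields Dana → 6, Alice → 5, Evan → 4, Charlie → 3, Frank → 2, Bob → 1, matching the example submission.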

Opportunities for improvement

The benefit of this algorithm is that it’s extremely simple. It would take only a small amount of code to write, and it is easily understood by people: It’s simply the average level weighted by the strength of the consensus.

However, there may be other algorithms that are better in various ways. For anyone looking to improve the algorithm, I recommend that you balance any proposed improvement with the concern that this algorithm should be as simple for a layman to understand as possible. Simplicity is an important part of credible neutrality, which is of the utmost importance in a vote-weighting algorithm.

Nevertheless, if you seek to improve this algorithm, there are several ways to do so:

Problem 1: It is currently possible that two people are given the same transient score. In this case, we have no elegant way to handle the situation. Therefore, perhaps there’s a better algorithm that avoids ties or handles them better.

Problem 2: Perhaps variance is not the ideal metric to represent the inverse of consensus. One could use different statistical metrics that better approximate the amount of consensus for a given set of rankings.

Problem 3: The stronger the consensus is for the value of one person’s contribution, the more power is given to one malicious voter who intentionally votes the opposite of everyone else. Consider the use of an outlier rejection algorithm to prevent one rogue voter from throwing off an otherwise sound consensus.

Example calculations

Each row in the following table shows the rankings Alice received from each person in her group in a given week, along with her resulting transient score, over five weeks (people are denoted by the first letter of their name: A, B, etc.).

╔══════╦═══╦═══╦═══╦═══╦═══╦═══╦═════════════════╗
║ Week ║ A ║ B ║ C ║ D ║ E ║ F ║ Transient Score ║
╠══════╬═══╬═══╬═══╬═══╬═══╬═══╬═════════════════╣
║ 1    ║ 6 ║ 6 ║ 6 ║ 6 ║ 6 ║ 6 ║ 6               ║
║ 2    ║ 6 ║ 5 ║ 5 ║ 6 ║ 5 ║ 4 ║ 4.89            ║
║ 3    ║ 6 ║ 6 ║ 6 ║ 1 ║ 1 ║ 1 ║ 1.07            ║
║ 4    ║ 3 ║ 4 ║ 2 ║ 5 ║ 4 ║ 3 ║ 3.14            ║
║ 5    ║ 1 ║ 2 ║ 3 ║ 4 ║ 5 ║ 6 ║ 2.37            ║
╚══════╩═══╩═══╩═══╩═══╩═══╩═══╩═════════════════╝

As you can see, this algorithm seriously punishes strong failures of consensus. In week three Alice was extremely polarizing, with three group members ranking her highest and three ranking her lowest. Her resulting transient score was heavily weighted towards the lowest possible score.

To play this game well, it may be better to be someone who builds consensus around a smaller contribution than it is to contribute in a way that could be perceived as polarizing to your community.

Where the algorithm should run

Although it would be possible to implement this shared submission algorithm in the smart contract, it would be better to only run this shared submission algorithm on the front end. This way, from the perspective of the smart contract, nothing has changed and everyone still submits the same submission, which helps the smart contract remain compatible with other styles of fractal governance that do not use the shared submission algorithm.

Furthermore, keeping this algorithm client side helps with privacy. My vote never makes it to the chain, and therefore from an outside perspective, it is impossible to reconstruct exactly where anyone ranked anyone else in their group. The submission from everyone matches, so all anyone on the outside knows is the final shared submission. It would even be possible to use secure multiparty computation to allow a consensus group to collectively calculate their shared submission without ever revealing anyone’s individual rankings to the others in the group.

Privacy in one’s vote is a tradeoff. It allows individuals to secretly submit malicious votes to try to game the system or mess with consensus, but on the other hand, it allows people to vote according to their conscience and prevents bribes (since the briber couldn’t confirm that the bribed party voted according to the terms of the bribe). All in all, I suspect that it’s better to keep votes private and accept occasional malicious votes and attempts to game the system as a form of inevitable noise inherent in this measurement instrument.

Conclusion

This algorithm implies a strong incentive to maximize consensus on people’s rank orderings, especially your own. But, given that people are not forced into an all-or-nothing consensus game, people are still free to convey their real feelings about the value of various contributions. This helps to reward instances of true consensus on highly valued contributions.

Using this algorithm should allow participants to have more fun by reducing the level of competitiveness involved in voting. It reduces the amount of time spent deliberating on the shared submission, which could result in shorter consensus meetings, which could allow more people to participate. This algorithm still follows the same principles as the original algorithm, punishing polarization and rewarding consensus.
