Continuing the discussion from Patron based proposal for mechanism 1.1 (instead of $-based goals):
We should really clean up this language that’s leading to misunderstanding.
## The mechanism math cannot determine good or bad
Purely within the math of the two mechanisms, we can acknowledge differences such as:
- In dollar-goal: increasing the average pledge size results in the goal being reached by a smaller crowd.
- In crowd-goal: increasing the average pledge size results in larger funding at the goal point from the same-size crowd.
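A quick worked example with invented numbers (the $1000 goal, 100-patron goal, and pledge sizes here are hypothetical, picked only to show the contrast):

$$
\begin{aligned}
\text{dollar-goal of } \$1000:&\quad \$10 \times 100 \text{ patrons} = \$1000,\qquad \$20 \times 50 \text{ patrons} = \$1000\\
\text{crowd-goal of } 100 \text{ patrons}:&\quad \$10 \times 100 = \$1000,\qquad \$20 \times 100 = \$2000
\end{aligned}
$$

Doubling the average pledge halves the crowd that reaches the dollar-goal, but doubles the funding at the crowd-goal point.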
Or this distinction:[1]
- In dollar-goal, increasing one patron’s pledge increases both the mean and the median donation.
- In crowd-goal, increasing one patron’s pledge increases only the mean, not the median donation.
(The latter is another way of pointing out the many-times-emphasized idea about “extra matching”.)
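Here’s a minimal runnable sketch of that mean/median behavior. The formulas are invented stand-ins, not the actual mechanism definitions (the per-patron crowd scaling and the `match_rate` are assumptions chosen only to reproduce the “extra matching” distinction above):

```python
from statistics import mean, median

def crowd_goal_donations(pledges):
    # Stand-in crowd-goal model (an assumption, not the real formula):
    # each patron's donation depends only on their own pledge and the
    # crowd size, so raising one pledge changes only that one donation.
    n = len(pledges)
    return [p * n for p in pledges]

def dollar_goal_donations(pledges, match_rate=0.1):
    # Stand-in dollar-goal model (also an assumption): each patron
    # additionally matches a share of everyone else's pledges ("extra
    # matching"), so raising one pledge raises every donation.
    total = sum(pledges)
    return [p + match_rate * (total - p) for p in pledges]

pledges = [1, 2, 3, 4, 5]
bumped = [1, 2, 3, 4, 50]  # one patron raises their pledge

for name, fn in [("crowd-goal", crowd_goal_donations),
                 ("dollar-goal", dollar_goal_donations)]:
    before, after = fn(pledges), fn(bumped)
    print(f"{name}: mean {mean(before)} -> {mean(after)}, "
          f"median {median(before)} -> {median(after)}")
```

Running it shows the median staying put while only the mean moves in the crowd-goal case, and both moving in the dollar-goal case (the footnote’s median-crossing edge case aside).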
But purely within the math, we cannot objectively say that this difference is good or bad, or that it will result in larger or smaller crowds, or in more or less funding.
## But we could do empirical research
However, we can in principle make further objective measurements of the differences between crowdmatching mechanisms; that would require empirical research.
It’s definitely possible to run controlled experiments with real people, and even real projects, and to observe objectively that one mechanism or the other produces a statistically significant difference in crowd size, funding totals, or both.
Put another way: it’s likely that one mechanism would in fact result in more project funding than the other, but measuring to discover which one is far from trivial.
If we ran experiments, how would we know whether the results are representative of all sorts of projects and crowds? Maybe GNU-type free software projects will behave one way while journalism projects behave a different way. There are so many variables. And even if there is a real difference, we might fail to capture it in our measurements.
I don’t think we should discard the idea of such research. Maybe the results will be extremely stark; one mechanism could utterly fail compared to the other. We should at least plan to get low-cost feedback and do simple research to see how people respond. Maybe projects will join us under dollar-goal and refuse to participate under crowd-goal. Maybe one or the other will be 100 times more successful at patrons just “getting it” after the elevator pitch. To find out, we have to make the empirical observations.
Such studies can be objective and measurable. So we shouldn’t say that neither mechanism has an edge in measurable, objective benefits; we simply don’t yet have the evidence to support that conclusion.
This ignores the tiny edge case of the single pledge switching which side of the median it is on. ↩︎