Hi! I haven’t been here for a while and missed out on a few things. Still, I get the impression the following question could trigger a helpful discussion or quickly supply me with a link to something important I missed.
While I personally tend to support those claims: is there an argument, based on current empirical knowledge, that demonstrates how continuing with the current crowdmatching would lead to unintended consequences (i.e. we know it won’t work well enough for now)? Is there such an argument demonstrating that the current crowdmatching would not even be safe enough to try?
Current crowdmatching is safe enough to try, and we will be sticking with it at least through alpha, which is when we are ready to announce that we are live, with ourselves as the only project.
However, we do know that it has some shortcomings. In particular, the need to explain the budget limit means that the current mechanism is not actually simpler to explain than a goal-based mechanism, which builds the limit into the mechanism itself (what you pledge is your per-project limit). So, we know a mechanism update will probably happen, to address that and the other drivers.
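To make the contrast concrete, here is a minimal sketch of the two mechanisms. All the numbers and rules here are hypothetical illustrations, not Snowdrift’s actual parameters or formulas: the point is only that the current mechanism needs a separate budget limit explained alongside the matching rate, while in a goal-based mechanism the pledge itself is the cap.

```python
def current_mechanism_charge(patrons, rate_cents=0.1, budget_limit_cents=1000):
    """Current-style crowdmatching (illustrative): you donate a tiny amount
    per fellow patron, and a SEPARATE budget limit caps your monthly charge.
    The limit has to be explained on top of the matching rule itself."""
    return min(patrons * rate_cents, budget_limit_cents)

def goal_based_charge(patrons, goal_patrons, pledge_cents=1000):
    """Goal-based sketch (illustrative): your pledge IS your limit. The
    charge scales with crowd size and reaches the full pledge at the goal,
    so no extra limit concept is needed."""
    return pledge_cents * min(patrons, goal_patrons) / goal_patrons

print(current_mechanism_charge(5000))    # 500.0 -- below the budget limit
print(current_mechanism_charge(20000))   # 1000  -- the separate limit kicks in
print(goal_based_charge(5000, 10000))    # 500.0 -- halfway to the goal
```

Both mechanisms cap what a patron can pay; the difference is whether the cap is an extra rule bolted on (current) or the pledge itself (goal-based).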
The reason we are discussing it now is that the alpha design is done (or close to it); the main blocker on getting there is developer bandwidth[1]. So the mechanism discussion is just doing design work that we know we will need to do anyway, while development[2] catches up to design.
My focus. I only chimed in on this topic because the conversation had gotten stuck. I think it’s now unstuck, although that progress happened on Monday in real time and is not written down in notes anywhere yet. ↩︎
While trying to find the optimal approach before launching may seem like harmful perfectionism, there is always the possibility that whatever flaws are exposed will “burn out” the part of the public whose initial interest we need in order to gain traction.
That may be easy to recover from, or… this may be the only shot we get for another 10 years. Who knows.
What we do know is that flaws will be exposed with whatever we try, so we might as well iron out the flaws we can think of in advance, you know?