Whew, what a ride. I’ve just caught up on these 120 posts, 2 years later. Since it seems this discussion spawned many threads that continue elsewhere, and the discussion here has died, mind if I close this one?
The early part of this topic had me seriously considering crowd-based goals. I really like the idea of incentivizing people to invite more people to the project, rather than to give more money. Of course, people should do both either way, so the difference lies in the incentives, and with these mechanisms there is only one incentive tool: matching.
I think that’s the word that’s been missing from this discussion: incentives. It’s not framing. It’s not the math of the mechanism. To me, the claims that it’s all about framing highlight the talking past each other that has happened here.
On detecting and reacting to miscommunications
When we see that someone keeps using a word, or making a claim, that we feel we’ve already disproven several times, I think that’s a signal that you’re not talking about the same thing, or not making the same assumptions. It’s less likely that they’re not listening, or stupid, or wrong; assuming so is the bad-faith assumption. My goal, and I’d encourage others to adopt it, is to use these signals to get curious rather than righteous. Do they just “not get it”? Or am I not taking the time to imagine how what I’m saying fails to account for (and rule out) a different set of starting assumptions, ones that could reasonably make my statements the illogical ones?
This takes patience, of course, but the time invested upfront pays off significantly: it avoids prolonging the miscommunication, or resorting to subtly emotional rhetoric that makes people less curious rather than more.
What I’ve learned here is that the mechanisms can work identically given end goals that… well, make them work identically. That’s cool, but not really relevant to the real world. I agree with the point that most of the time (practically all the time), we will have partially complete goals, not completed goals; after completion, the goal’s mechanism hardly matters anyway. But what definitely matters are the dynamics in play on the way to that goal - I contend this isn’t some technical detail, but rather the biggest factor (!) in determining whether goals actually get reached.
That is, assuming you believe that matching psychology works and has an effect in the first place. If mray thinks matching is “dead”, that might be a fundamental disagreement that prevents further collaboration. The whole point of this type of mechanism, as opposed to my earlier critical-mass one or some others, is to take advantage of the proven psychological effects of donation matching!
That isn’t to say it’s just a psychological guessing game, though, which is why it’s not about framing. The misunderstanding dominating the latter part of the thread is whether there is a difference in incentives for an individual patron coming in, who (yes, because we are assuming matching works) may take into consideration how much their decision causes everyone else to give. They won’t “just give what they’ll give”; if that were the case, we’d need no mechanism at all. We can’t control how the new patron decides to act, but we can control what incentives we give them!
These incentives are not necessarily subjective, and here we have unearthed sound, game-theoretical, objective differences in incentives between the two mechanism styles. So if we can agree that:
- The two mechanisms have different incentives at pledge-decision point
- Patrons, especially pure-rational-actor ones, will not necessarily make the same decisions when faced with different incentives
then it follows that we will not necessarily see the same set of decisions made by the same people under the other mechanism. But the claim that the mechanisms can reach the same end state is only relevant under the assumption that the decisions won’t change, so that claim doesn’t hold.
Later in the thread, a difference in incentives was proven that is actually a really big deal. At the moment a new pledger arrives, they get more matching from a below-average pledge and less matching from an above-average pledge. Yikes! Those are some perverse incentives. Sure, it might be moot if the average shifts later, but by then the pledger has already made their decision! Since we can’t know what later pledgers will do, for the purpose of assessing incentives we can treat this new patron as the very last one who will ever join. How will they make the most of it?
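To make that concrete, here’s a toy model of my own (the numbers and the matching rule are hypothetical, not the exact mechanism from the thread): suppose a new patron’s arrival triggers a fixed match from each existing patron, independent of the newcomer’s pledge size. Then matching leverage per pledged dollar is higher for smaller pledges:

```python
# Toy model (hypothetical rule, NOT the exact mechanism from this thread):
# a new patron's arrival triggers a fixed match of `match_per_patron`
# dollars from each of the `n_existing` patrons, no matter how much
# the newcomer actually pledges.

def matching_leverage(pledge, n_existing, match_per_patron):
    """Extra dollars others give per dollar the newcomer pledges."""
    triggered = n_existing * match_per_patron  # same regardless of pledge size
    return triggered / pledge

N = 100    # existing patrons (made-up number)
M = 0.01   # each matches $0.01 per new patron (made-up number)
# say the current average pledge is around $2.00

below = matching_leverage(0.50, N, M)  # below-average pledge
above = matching_leverage(5.00, N, M)  # above-average pledge

print(below)  # 2.0 -> each pledged dollar pulls in $2.00 of matching
print(above)  # 0.2 -> each pledged dollar pulls in only $0.20
```

Under any rule with this shape, the “below-average trick” is just arithmetic: the smaller your pledge, the better your per-dollar leverage.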
I don’t think it’s realistic that everyone is a purely benevolent donor who takes no account of incentives. If I were new to Snowdrift in this situation, I would have discovered these perverse incentives eventually, and I would probably have tried the below-average trick. That one is the most problematic, more so than the undermatching from above-average pledges. I think it’s safe to say we want pledging more to always have either a positive or a neutral effect on matching. For me, this is a deal-breaker, because I think it’s possible for us to come up with a mechanism with no perverse incentives.
There is one other gotcha: dollar-based goals virtually eliminate the incentive to create extra sockpuppet accounts. Why do something on another account when you can do the same thing, with the same effect, on your existing account? This is considering incentives and gaming again, of course. But with patron-based goals, if we start seeing a lot of sockpuppet accounts, we’ll know for sure that incentives matter! :]
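A quick sketch of why (again a toy of my own, with made-up numbers): if matching is triggered per patron, splitting one pledge across two accounts doubles the matching it triggers; if matching is triggered per dollar, account count is irrelevant.

```python
# Toy comparison of per-patron vs per-dollar matching triggers
# (hypothetical numbers, not the real mechanism's parameters).

N = 100    # existing patrons
M = 0.01   # dollars matched by each patron, per triggering unit

def patron_triggered_match(n_accounts):
    # Each account counts as a new patron, so splitting a pledge
    # across sockpuppet accounts multiplies the triggered matching.
    return n_accounts * N * M

def dollar_triggered_match(total_pledged):
    # Matching scales with dollars pledged; how many accounts
    # the dollars arrive from makes no difference.
    return total_pledged * N * M

print(patron_triggered_match(1), patron_triggered_match(2))  # 1.0 vs 2.0
print(dollar_triggered_match(1.00))  # 1.0 from one account or from ten
```

So under the dollar-based rule the sockpuppet “exploit” nets exactly nothing, which is the point.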
Thank you all for the insights and patient explanations!