Maybe the text on the image was misleading? That’s not the goal. The end of the track is the end of the month: you pay what you pay, others pay what they pay, everybody pays for the reason they pay – it is payday! And it is the same in a patron-based or dollar-based matching scenario.
Matching rate is just where you put the ball on the track. You may feel “positive” to place it at the tippity top – but if you think that’s going to have an impact in the end – you’re about to be surprised.
This thread is about what we want to express with any “%” - value. Please show me how your two new games are relevant to that question.
My post above has exactly the same set of patrons. Two identical scenarios as much as comparing is possible at all. The patrons make the same pledge sizes. The output, i.e. the actual amount of money that the project gets at payday is different between the two mechanisms.
The scenarios start out as close to identical as you can get with a crowd-goal vs a dollar-goal. The same patrons are indeed making the same size pledges. The payout is indeed different.
The two mechanisms respond differently to the different input. It’s that simple.
Here’s how it works:
in crowd-based goals, each pledge has a fixed effect on the % of the goal that has been reached
in dollar-based goals, each pledge can have a variable effect on the % of the goal that has been reached
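The fixed-vs-variable distinction above can be sketched in a few lines of Python. The function names and numbers are mine and purely illustrative – this is not Snowdrift’s actual code, just a toy model of the two progress rules:

```python
def crowd_progress(pledges, crowd_goal):
    """Crowd-based goal: each pledge counts as one patron - a fixed effect."""
    return len(pledges) / crowd_goal

def dollar_progress(pledges, dollar_goal):
    """Dollar-based goal: each pledge counts by its size - a variable effect."""
    return sum(pledges) / dollar_goal

pledges = [1, 1, 5]  # three patrons; one pledges more than the minimum

# Under a crowd goal of 10 patrons, the $5 pledge moves the goal
# exactly as much as a $1 pledge would have.
print(crowd_progress(pledges, 10))   # 0.3

# Under a dollar goal of $10, the $5 pledge moves the goal
# five times as much as a $1 pledge.
print(dollar_progress(pledges, 10))  # 0.7
```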
And this is also factually true, not an interpretation. It is a fact about the incentives the mechanism sets up (though I’m not saying these are the only incentives, nor how much this particular patron description matches the motivations of real-world patrons):
Given a patron with the goal of maximizing the amount of money that others give to the project but to do so at the least possible cost to themselves, then:
in crowd-based goals, their winning strategy is to make the minimum allowed pledge
in dollar-based goals, their winning strategy is to make the maximum allowed pledge
These are the necessary outcomes of simply programming a computer to find the winning strategy once the program is set to pursue these goals.
for this case, assume they are not allowed to just make the crowd hit the goal and that the goal in fact won't be reached ↩︎
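As a rough illustration of that claim, here is a toy brute-force search in Python. The payout rule (everyone pays their pledge times the fraction of the goal reached) and the allowed pledge range are simplifying assumptions of mine, not the real mechanism; the goal is kept unreachable, per the footnote:

```python
def simulate(my_pledge, others, goal, crowd_based):
    """Return (my_cost, money_released_from_others) for one pledge choice.

    Toy payout rule: everyone pays pledge * (fraction of goal reached).
    """
    if crowd_based:
        progress = (1 + len(others)) / goal       # only headcount matters
    else:
        progress = (my_pledge + sum(others)) / goal  # dollars matter
    progress = min(1.0, progress)
    return my_pledge * progress, sum(others) * progress

def winning_pledge(others, goal, crowd_based, allowed=range(1, 11)):
    """Maximize funds released from others; break ties by lowest own cost."""
    def key(pledge):
        cost, released = simulate(pledge, others, goal, crowd_based)
        return (released, -cost)
    return max(allowed, key=key)

others = [2, 3, 4]  # the rest of the (small, far-from-goal) crowd

# Crowd goal of 100 patrons: any pledge size releases the same amount
# from the others, so the cheapest (minimum) pledge wins.
print(winning_pledge(others, goal=100, crowd_based=True))   # 1

# Dollar goal of $100: every extra pledged dollar raises everyone's
# payment fraction, so the maximum allowed pledge wins.
print(winning_pledge(others, goal=100, crowd_based=False))  # 10
```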
If they don’t stay that way – you just compared two non-comparable states. You replace a working comparison with a non-working one and surprise yourself by having found an effect. By definition you have to make sure they remain the same, including your assessment of how dollars <-> patrons actually relate. But I guess you’re stumbling over the same thing others do: you change something that unilaterally affects only ONE definition of success, without making sure the same success is represented in the other scenario as well.
With a crowd-based goal, success is additional patrons (I’ll call this “crowd-success”)
With a dollar-based goal, success is additional money (I’ll call this “dollar-success”)
You acknowledge the game is different depending on whether the goal is crowd-based or dollar-based; you simply argue the games should not be compared based on dollar-success, because that is only valid for the game with the dollar-based goal? And this is why you say the two states are not comparable (at least in the way they have been compared)?
That further fleshes out the points I made above which are simple facts:
And this is the core point that causes the two games to be mathematically different. The only way to define them as the same would be to make them the same by changing the design so that the difference is removed (i.e. provide a way to have a variable effect on the crowd-success or remove the ability to variably affect dollar-success).
Not sure I get you right. If by “game” you mean the “mechanism” behind our crowdmatching I do not even acknowledge that the game is different. Everything I consider to be part of the mechanism remains untouched:
Patrons join, wanting to help finance projects.
Patrons put a certain amount of money on the table.
Patrons pay part of their money – depending on a success metric.
Staying in that “game” analogy, you’d play the identical game – but then use a different interpretation of the “winning condition” to conclude the games were different.
I’m not arguing the games cannot be compared based on dollar-success – I just say the comparison can only be valid if the dollar goal is fairly represented in a patron goal (which it isn’t if the only change is one patron giving more money; you would also have to slightly decrease the respective patron-goal threshold rather than leave it the same – though absolute accuracy isn’t possible)
As far as Do we want larger crowds or more dollars? (pledge limits) applies to this question, I think it may be a false dichotomy. If it turns out that one mechanism results in 1k patrons pledged at $5 each, and the other results in 10k patrons pledged at $1 each, it is very clear which one is better.
(This is not really a full thought, I just didn’t want to derail that discussion)
But then you talk about changed behavior of patrons, not a mechanism that works differently: that’s the framing part. The pledge distribution in both cases would have the same results in both mechanisms.
So if there is going to be any change it is because of framing, not of the mechanism. That has been my entire point all along. I’m looking forward to talking about framing, because that’s where we can actually expect differences. Just don’t suggest that the mechanisms themselves have different mechanics that lead to different distributions.
@mray I feel much more confident in understanding you and the situation, please tell me if we’re on the same page now.
I suggest you use nuanced terms like “about the same” or “statistically equivalent” so people don’t continue to misunderstand you. You know that the math is not literally identical, but you are pointing out correctly that any differences are insignificant (see Defining controlled comparisons)
Here’s a more precise statement, please tell me if this is what you mean:
if there is going to be any significant change, it will be because patrons change their pledge behavior, not because of the way the two mechanisms act on the same pledges
And you’re using “framing” to describe the influence of the mechanism’s presentation on how patrons choose their pledges, right?
To summarize some facts:
The math is not strictly identical
Patrons could, in principle, recognize the difference in the underlying math regardless of how we frame it and what the labels are (this could happen even if the labels were just X and Y, meaningless), but to use the differences to decide on different pledges, patrons must know which mechanism they are pledging under.
If the patrons enter the same pledges in both systems, the differences will be negligible, they are effectively the same
That second long bullet is not literally “framing”, it’s mere presentation. A computer program could see the math difference and enter different pledges accordingly. But, and I think this is what you mean when you emphasize framing, all significant differences exist at the point of the pledge-decision and not in the way the two mechanisms take input and give an output.
If we could somehow make the mechanism a black box, not tell patrons what part of the mechanism was what, then they would put the same pledges in. If we just told patrons that a secret crowdmatching algorithm would be used, and they could set their budget… then both mechanisms would give nearly identical outputs.
One still interesting question is whether the underlying mathematical differences will lead patrons to pledge any differently. It might not. While certain computer program models of patrons will pledge differently, others wouldn’t. What real people will do, we don’t know. And we would have to do careful controlled experiments if we wanted to be able to see whether any difference in pledging behavior remains when all framing is removed.
If real people would pledge differently simply from the mathematical presentation of the two mechanisms, we might be able to frame things in ways that will overcome that difference. Framing is indeed likely to have a strong effect on real patron pledge behavior.
I call it framing, because if the mechanisms were different in a relevant way – it would be just common sense to lean towards the one with the more favorable outcome. But framing isn’t limited to the presentation of the mechanism – it’s applied to the entire project and throughout all media.
It actually is, if you don’t go into detail. You only have to keep counting in degrees of success – not express it in dollars or patrons – and it will be identical. Once you go a step deeper into the details, fuzziness emerges.
Yes, it is basically everything except the actual inner mechanical workings of the mechanism that is relevant.
My current understanding is that the only thing we can observe is fuzziness that sometimes might be favorable to a patron’s motivation and sometimes not. So despite knowing the “margin” of possible difference (it is relatively low) – it also isn’t even leaning towards one side. I think it is not interesting, but would like to be surprised.
I think we don’t have the resources to really present our entire project in two ways with the same enthusiasm – just for A/B testing.
I think OUR bias towards a presentation/framing that fits our values, ethics, and overall self-image can boost our effectiveness as presenters – which might be even more impactful than catering to a slight bias that the broad masses have.
Sorry for language confusion on my part. By “presentation” in this specific case, what I really meant was “telling people at all which mechanism they are pledging under”.
Yes, framing exists at every level, right in the UI to the political language or even what the projects are etc etc.
But my point is that simply revealing to patrons what the mechanism math is provides them a distinction between the two that could, in principle, lead them to act differently. And this is true even if you use neutral labels.
So, this is the key point that I think has remained part of the miscommunication. Here’s the summary:
The minor fuzzy differences do not show any predictable lean in outputs, given the same inputs.
The arbitrary order of pledges across the payout points can lead either mechanism to produce more or less total funding than the other.
Over time, with lots of projects and large crowds, this fuzziness will get lost in the averaging out; no statistically significant distinction will remain.
But the math behind the fuzziness is predictable in how it behaves, and here’s how it affects the pledge-decision process:
In crowd-based goals:
Patrons making below-average pledges donate less than the amount of matching funds their pledge releases from the rest of the crowd.
Patrons making above-average pledges donate more than the amount of matching funds their pledge releases from the rest of the crowd.
In dollar-based goals:
All pledges cost patrons almost exactly the same amount of money as the amount the pledge releases from the rest of the crowd (the picky detail being that they are in a sense matching their own dollars too).
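The crowd-goal asymmetry above can be checked with a toy numeric example. I again assume the simplified “everyone pays pledge × crowd/goal” payout rule – my illustration, not the exact mechanism – and the names are made up:

```python
GOAL = 100  # crowd-size goal (illustrative number)

def my_donation(my_pledge, others):
    """What this patron pays once they have joined the crowd."""
    crowd = 1 + len(others)
    return my_pledge * crowd / GOAL

def released_by_joining(others):
    """Extra money the rest of the crowd pays because this patron joined:
    the crowd grew by one, so everyone else pays 1/GOAL more of their pledge."""
    return sum(others) / GOAL

others = [5, 5, 5]  # the rest of the crowd, average pledge $5

# A below-average pledge costs less than it releases from the others...
assert my_donation(1, others) < released_by_joining(others)

# ...while an above-average pledge costs more than it releases.
# (With a large crowd, the break-even point approaches the average
# pledge of the rest of the crowd.)
assert my_donation(9, others) > released_by_joining(others)
```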
We can consider this “framing” if you like. It is indeed irrelevant unless it changes pledge behavior, and it can only do that by the patrons considering the difference in making their pledges. It’s in the mind of human pledgers or the calculations of a computer program, not in the mechanism itself.
But unlike almost everything else in framing, this detail is one we cannot choose to do differently. As long as the math of the mechanisms is what it is, we are stuck with this framing existing. All we can do is study whether it affects pledge decisions, or perhaps bury or overwhelm it if we don’t like the effect – or maybe the effect is fine and will help us choose between the mechanisms. Either way, the existence of this framing-effect is inherent to the mechanisms themselves. We didn’t make it, and we can’t remove it or fully stop patrons from seeing it.
All that said, I think the other framings matter more. What projects and patrons think about the idea of the goal in crowd vs dollars (and what we think for our mission) is much more important than that detail about the degree-of-matching.
To me this sounds like the longer version of: people are bad at math sometimes, and may act on it. I agree. Let’s just take that as a motivation to properly make ourselves understood – no matter what mechanism we end up with
First of all we don’t highlight those differences in any sense and we also don’t present this as a choice! If we just go with one of those – I doubt anybody will ever say: “Oh I see how snowdrift works, but since the mechanism isn’t slightly different in that tiny irrelevant aspect – I don’t pledge”.
It is interesting for us to consider now.
As for your juxtaposition, you again fail to see the irrelevance of observing “matching”.
People pay what they pay. Nobody pays more, nobody pays less than they pay.
That’s true for everybody in any case – and both mechanisms yield the same money ALMOST!!!
I agree that your concern makes sense. To some degree. If for some reason we would be worried about tailoring a mechanism for one specific kind of person you’d have a point.
But we literally are speaking about people below average and above average – and we have to consider both equally because they FACTUALLY are equally important (I wish we just had above-average patrons).
In combination with the knowledge that either “advantage” you might find comes with a “counter-advantage” of the same intensity. So we literally are considering benefits that even out – and compare that to a scenario that just does that from the get-go.
Patrons being good at math means they will see the difference. The issue is not about patrons making any errors, it’s only about what their motivations happen to be.
Here’s the scenario: good-at-math patron looks at the crowd-based mechanism and thinks, “if I make the minimum pledge, I will pay X and release Y from the rest of crowd to the project, and if I pledge any higher, only my X will increase and no increase in Y will happen.” And thus, if they are motivated primarily by releasing funds from others, they will only pledge the minimum. But they could also say, “if I pledge higher, I will be providing more funds that might motivate hesitant others to pledge at all, and I can afford it.” That depends on the attitude of the patron, but either way, they understand the math perfectly.
In the dollar-based mechanism, the same good-at-math patron can say “if I pledge more than the minimum, it will release additional funds from the rest of crowd.” And they could be motivated to pledge higher for this reason which is again perfectly mathematically true.
And if this math difference leads the patrons en masse to pledge differently in the two systems, then they will produce significantly different outputs because of the way the math motivated patrons to change their behavior. If this change happens, we’d see significantly higher pledge values with the dollar-based mechanism. But this difference will not show up if the patrons do not pledge differently, such as if their pledge decision process is based mainly on other factors rather than on the “maximize the released funds from others”. To state again, the significant difference will show up if (and only if) the crowd is made up of people who have the goal of releasing more funds from the rest of the crowd.
Because such a crowd is possible (we are speculating about people’s behavior and motivations), the difference between the two mechanisms might not be irrelevant.
Exactly, so the relevance is entirely based on learning how real people behave, and if they behave in a way that would bring out this difference whether we could influence them to stop that behavior via other framing or whether the difference is fine with us (and could help us choose between the mechanisms).
The core point all along has been that this difference is inherent to the mechanisms and has real potential to lead to substantially different outcomes depending on whether the patron behavior responds to the difference or not. And this is not at all because the patrons are misunderstanding anything.
If the patron were good at logical thinking in general, he would see the futility of his choice. All options that are open to him are open to others as well – and he can’t even control those. So being “good at math” should open your eyes to how irrelevant your choice is in the big scheme – like you just tried to explain to me:
For some reason you point at this scenario and conclude: in the end it all depends on an attitude – but refuse to consider that this insight need not disappear from a mind that has an attitude. Patrons can get this, too.
Not only this. This attitude ITSELF is only a mirage, because despite the image of yourself you prefer (you, the enabler of matching – or – you, the raiser of donations), your impact remains constrained by the same parameters:
what you are willing to give
towards which goal
how far the goal is reached.
Matching does not come up in that list anywhere.
Matching is just a story you tell yourself to feel better about “sucking money out of people” – like it was a neat trick that you apply for a good cause. But ultimately it is not – because the other person must willingly WAIT for you to FINALLY snatch those extra dollars from them. Which suddenly makes your counterpart equally complicit in that trick. There is no escape from that truth. But you can of course keep telling yourself stories about how you “intelligently” chose one route instead of the other and feel intelligent about it, and we may even find out which story is more popular! But even then, I honestly ask: do you really want to use that story as a lever when patrons who actually are good at math can see through it?
…You mean there is a “danger” that more than half the patrons pledge less than average? *crickets*
Matching – in the way that you hope to put it to use – is a dead concept in my eyes. Not because it does not exist, but because it clashes with our values.
Perhaps you can surprise me about what needs to be spelled out to me – in a more effective way than this written back-and-forth.
I suspect we are more in agreement than not, and we’re just working to see eye to eye. I’m not sure you believe anything wrong (other than maybe some past wrong impressions about what you imagined I believed).
Yes, this is fundamental to game theory. It’s not enough to think of your own actions; players think about what others will or won’t do.
Imagine this sort of thought process a patron could make in choosing to pledge to a crowd-goal:
Hmm, I need only pledge the minimum to at least get counted toward the goal. But should I pledge more than the minimum? If I do, then I’ll just be unilaterally donating extra to the project. I mean, I’m still going to give more or less based on the crowd size. Do I care about giving extra money or just growing the crowd? I wonder if making a larger pledge could actually encourage others to pledge who might not otherwise? Perhaps having a larger pool of money to get released as the crowd grows will motivate a greater number of people to pledge at all. But assuming nobody else acts any differently, my extra large pledge is just money from me to the project, it has no impact on how much others donate.
And that’s different from the sort of thing they might think in a dollar-goal:
Hmm, should I pledge more or less? Well, every increase I make in my pledge will get the crowd closer to the goal! And of course, that means everyone will give a larger portion of their pledge. What if I pledge an enormous amount? I could really push us toward the goal that way!
These two stories are not interchangeable. They aren’t mere framing because they would be mathematically wrong if we swapped them, no matter if we adjusted the exact labels of anything.
I’m not sure I agree completely, but I do support the notion that we want people to do all they can and break out of trying to over-think or manipulate the game in some defensive manner.
We don’t know how much real patrons will think things like the stories I wrote above. And we have other levers and framings at our disposal to influence the stories people tell and how they make their decisions.
I’m not arguing that we should want these particular stories or that they will or won’t be common. We could want and perhaps influence patrons to just think:
Do I support this project enough? Yes, I’m in! How much can I justify budgeting to this project? Do I really care a lot or have a lot of extra funds I could put in? Hmm, I guess $X is a good budget for me. Okay, done! Let’s see how it goes!
So, I’m not saying that the difference in “matching” that I’ve highlighted is what we should focus on. It’s just in the list of differences we should acknowledge. We can’t completely block people from having such thoughts once they do in fact understand the math of the mechanisms. We could even encourage them, but that’s not what I’ve been arguing.
It’s possible at least that the inherent mathematical differences that allow certain stories and not others could be significant and decisive, depending on whether they change how real people behave. We can’t dismiss them until we have strong enough evidence that they won’t have that effect.
To formally isolate the math from the framing, we’d have to make a control case of something nobody is suggesting we actually consider: a control case where patrons are allowed to pledge to crowd-goals but choose to count as fractional or multiple patrons. The purpose of that idea is only to suggest how we could control for the mechanism difference versus the framing differences.
I accept that other ideas that aren’t inherent to the math will likely have much more influence. Differences entirely due to framing might have the most impact. For example, will people and projects simply “get” or feel comfortable with the whole idea of crowd-size goals versus dollar-size goals? And how motivating will people find the ideas of dollars-for-projects vs building-a-crowd/movement?
Yes? Not sure what you are agreeing with here. It is not that I’m just pointing that out.
What I’m trying to hammer home here is that you are talking about people “discovering” that pledging above average or below average might possibly work in their “favor”. But it is impossible for everybody to pledge below/above average. No matter how smart the patrons, no matter what their intentions are. Individual people can after all pledge above/below, but that would either “push” other people around the threshold against their intent – or – they were willing to be on the respective side of the threshold to begin with. In either case there is no argument to be made.
You literally make up two possible thought processes (among endless possible ones) only to find differences. What is the point of that? You can do that with anything – even things that are absolutely 100% mathematically identical – and find differences. Your example shows how framing makes an impact. You fail to uncover the fundamental similarities though: money is getting pledged, and gets paid out based on a measurement of “success”.
It all boils down to a patron’s attitude: how much he is willing to give.
Maybe not the stories, but their outcome certainly is: both people pledge money at the end of the day. Things happen in people’s heads – maybe even what you took as an example – and then they pledge a certain amount of money. Both goal types yield !!ROUGHLY!!!!! the same amount for the project.
If you want to show they are not interchangeable – show an ACTUAL difference! And I’m not talking about rounding errors. I’m also not talking about stories in heads. I’m talking about differences that are based on the rationale of those two exemplary thought processes you described. If you can’t do that, then you just have to admit that – despite many possible stories possibly living in people’s heads for whatever reasons – their stories don’t relate to relevant facts. If they did – you could show them. But you couldn’t.
It boils down to attitude. And attitude may be susceptible to all kinds of thinking, even fallacious thinking, yes. Should we cater to people suffering from a misunderstanding – just to push them a bit further in a direction we like them to go? I doubt that. I want to inform people and make sure they stand behind a decision they understand and is based on facts.
What would be the reason to acknowledge this – we already figured out that factually there is no RELEVANT!!! difference. Why is it important to stick to the hypothetical impact on people possibly misunderstanding things and acting on it? If anything, it would serve as a guide to try to navigate around that misunderstanding or correct it. I understand there is value in approaching people where they are – but this sounds to me like a pedantic effort to just NOT throw a thought out of the window because there is a 0.0001% chance that it might partially be useful for… something.
No Aaron. NO! We agreed that the mathematical differences – to the degree that they even exist – are not relevant. If you go ahead and invent stories about people taking this irrelevant difference and making it relevant for no good reason in their own heads, that does not mean it suddenly becomes relevant to us or to reality.
If your premise is that people do fuzzy math in their heads and then go crazy and act on it – we could start to consider all kinds of things that are just WRONG.
By your logic we would have to consider polar bears. Why? Because we have no data that suggests that excessively using the word “polar bear” does not increase pledging. You may say: well but what would be a reason for that animal to play a role to begin with? And the answer would of course be “none”– there is no reason that connects those dots.
The only difference between polar bears and “crowd/dollar-based is better” is that the latter thematically fits – but it is equally relevant: not at all.
But it could be, and we would need data to clarify that, so we have to wait until we have the data … and in the meantime we sit here and just acknowledge polar bears.
I don’t even see how acknowledging things can be constructive when working on our mechanism. At best it leaves you with the notion that you can’t know anything for 100%.
Why would you even want to isolate in the first place?
Again you somehow manage to take a leap I’m not willing to take:
You jump from:
“There is no relevant difference.”
“But there might be one!!, If people think there is!!!”
…all that while knowing that there is no relevant difference.
So you still think there are mathematical differences that have a (relevant) impact after all?
I think I still might have misunderstood you the whole time then.