Initial projects to recruit

Continuing the discussion from Does crowdmatching make sense?:

We had some brief chats at a conference with Inkscape folks, and we use Inkscape ourselves. It’s been my general reference example of the balance between history, potential, needs, etc. Inkscape isn’t speculative or new, nor is it a standout success with existing funding. And it’s end-user focused rather than something of interest only to niche programmer types.

NixOS would be a great option to consider.

We are also talking to Kodi, though I say that loosely: we haven’t communicated much in a long time.

A major step soon is recruiting the first, best-fit projects and discussing concerns with them. We are solidifying some foundational stuff for the platform first, but then project recruitment will be the next big stage. That said, we welcome discussion of it right now.

We have a private GitLab project with a brainstormed, semi-sorted list of projects to consider. We need to revisit that and clarify our criteria to find the best candidates.

I didn’t want to open up too big a can of worms here, because this is a large topic. But it’s important, and I appreciate people asking about it.

2 Appreciations

Feel free to invite me.

That’s the first step, I guess.

That’s a good start for criteria.

2 Appreciations

Let’s decide on criteria and their priority.

I guess we can define the goal as testing our crowdmatching mechanism in practice and getting references for our project. In the spirit of an MVP, we don’t want to start with a wide variety of projects, but rather choose end-user-focused software projects.

The plan could be:

  1. Start with Snowdrift itself.
  2. Communicate that the public beta has started and that we want people to test it.
  3. Ask for feedback on the user experience.
  4. When we see problems, solve them.
  5. When it runs without problems for a week (?), add the first external project.
  6. Again ask the users for feedback.
  7. Solve problems as they arise.
  8. When it runs fine for one month, add the next project.

And continue like this as our resources allow. We might already get some funding along the way.

These are the existing notes for criteria:

Salt’s criteria:

  • have heard of
  • been around for at least 1 year
  • stands out within category
  • can use additional funds well
  • is important

Other potential criteria, currently unsorted, to re-order by highest priority:

  • FLO and other values/vision alignment
  • Established audience/community size
  • Relative financial need
  • Size of organisation or project team?
  • Variety of projects representing different non-rivalrous public goods

smichel17’s additional criteria:

  • Flexibility / willingness to work with us while the site UX is still not great

wolftune’s criteria:

  • end user focused / diversity of initial projects / what our choice of projects signal / quality of initial feedback
  • legal ease of set-up (practical bandwidth concerns)

My own criteria, not listed above:

  • established projects with a bigger audience and an existing user community
  • where funding has a significant impact
  • but where it doesn’t hurt if this experiment fails or has a rough start (e.g. the maintainers are freelancers)
  • an audience that can use credit cards (SEPA/PayPal not supported)

So here I propose the criteria for the initial projects (the first X). After beta testing, we can onboard more projects that fit the Project Requirements, the transparency requirements, the Project Honor Code, and the Code of Conduct.

To get good references for our project list:

  • Project has to align with free/libre/open values and our vision
  • Project should be known outside the free/libre/open community
  • Project has to be important
  • Project has to stand out within its category
  • Diversity within the limited focus

To test our crowdmatching mechanism:

  • Project has to be established, with a bigger audience and an existing user community
  • Project where funding has a significant impact
  • Project has to be willing to work with us and give feedback on the platform
  • The audience has to be able to use credit cards (SEPA/PayPal not supported)
  • Diversity in project size (medium, small, not too big; what exactly do “medium” etc. mean?)
  • Project where it doesn’t hurt if this experiment fails or has a rough start (e.g. the maintainers are freelancers)

This is a proposal. Please provide feedback. We can decide in one of the next meetings.

3 Appreciations

I think this is an excellent summary! Great work! It would be nice to pick an even smaller checklist of absolute essentials so that we can sort the pool of potential projects. We want to pick out a core for which there’s nothing fuzzy or iffy. Those will be the projects we prioritize reaching out to.

“alpha” vs “beta”

Just to clarify terminology: we’re now saying that snowdrift-test-project is alpha. So, alpha is live now, but we haven’t announced it because the pledge page, the patron dashboard, and other things are incomplete. We want an alpha that has all the essential elements looking at least decent; then we will announce the alpha.

We’re calling “beta” the stage where we have outside projects but don’t yet have a consistent onboarding/signup process for arbitrary projects; we’re just testing with the first few curated projects. (We plan to curate even in the long run, but not in the limited-to-these-few way that we want for the beta launch.)

1 Appreciation

These seem like good criteria to me. I think the next steps are:

  1. Bring this up at a team meeting to make sure everyone agrees.
  2. Get farther along in development.
    • Last time we started reaching out to projects, we had (and still have) too much left to do before we’re really ready for them, which led to a drop-off in communication; at this point, we would need to re-start those talks, not just pick up where we left off. We would like to avoid repeating this.
  3. Update the lists in the private GitLab project to reflect the updated criteria.
    • I originally had this above “get farther in development”, but then I realized the ranking itself may also become out of date. Then I considered moving “get farther…” to the top, but I think it’s worth agreeing on these criteria while they are fresh in our minds, so we don’t need to revisit the conversation again.
  4. Get in touch with the projects we rank most highly.

At our current pace of development, I think “Can the platform support more than one project?” is more likely to be the bottleneck than how long the site has been running :slight_smile:

These are what I was trying to capture with “Flexibility / willingness to work with us while the site UX is still not great”; thank you for the clearer phrasing :slight_smile:

2 Appreciations

This (and accompanying calculations) inspired me; I think we can improve the project selection criteria — specifically “target size”.

One of the points we clarified there is that the crowdmatching level we chose for alpha is arbitrary, and not very informed. We chose it because “$1 per 1000 patrons” makes for simple math & explanations, and because “it seems about right.” We fully intended to adjust it (and still do) depending on our real world findings.
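For concreteness, with example numbers of my own (nothing we’ve decided):

  • payment per patron = $0.001 × crowd size
  • e.g. 2,000 patrons ⇒ $2.00 per patron per month ⇒ $4,000/month to the project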

I also mentioned how Snowdrift.coop itself would have needed 4 years of buffer before we could begin receiving accumulated donations. That’s probably not sustainable for most projects.

edit: the core point, which the next paragraph is supposed to convey but (upon re-reading) doesn’t, is this: we don’t have to calculate the best project size for our current rate. Instead, we can calculate the optimal rate for projects of different sizes, and then pick the projects with the highest rate as good candidates for bootstrapping.

I think we can calculate the optimal project size and crowdmatching rate using calculations similar to the ones in Does crowdmatching make sense?, if we also consider the credit-card minimum charge. I’ll write out a proper formula later, but the steps should be something like the following (a rough code sketch follows the list):

  • What is the most we can expect the average patron to pay? ⇒ make this X

  • How much money does the project need? ⇒ make this Y

    → Calculate the target crowd size.
    → Calculate the crowdmatching rate.

  • How many months of pending donations can we tolerate? We do want projects to get the money, after all; if our time scale is years like it’s been for Snowdrift, cards are likely to expire and be forgotten, etc. ⇒ make this a constant in the formula so we don’t end up with a 3d graph

    → Calculate how many patrons are necessary to reach $3.79 within that time period (minimum sustainable crowd)

    ⇒ The optimal-size project has a user base large enough to reach the target crowd size, and the smallest minimum sustainable crowd.
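
Here is that sketch. It assumes the mechanism stays “each patron donates a fixed rate per patron per month”; the function name and the example numbers are mine, not anything we’ve decided:

```python
# Rough sketch of the steps above. Assumes the mechanism is
# "each patron donates (rate) dollars per patron per month";
# names and example numbers are illustrative, not decided values.

MIN_CHARGE = 3.79  # the credit-card minimum charge mentioned above, in dollars

def crowd_targets(max_patron_payment, project_need, months_tolerated):
    """X = the most we can expect the average patron to pay per month,
    Y = the monthly funding the project needs,
    months_tolerated = how long pending donations may accumulate."""
    # At the target, project income = crowd size * per-patron payment,
    # so the target crowd size is Y / X.
    target_crowd = project_need / max_patron_payment
    # Per-patron payment = rate * crowd size, so rate = X / target crowd.
    rate = max_patron_payment / target_crowd
    # A patron in a crowd of n accrues n * rate dollars per month; to
    # reach the minimum charge in time: months * n * rate >= MIN_CHARGE.
    min_sustainable_crowd = MIN_CHARGE / (months_tolerated * rate)
    return target_crowd, rate, min_sustainable_crowd

# Example: patrons average $2/month, the project needs $4,000/month,
# and we tolerate 6 months of pending donations.
crowd, rate, min_crowd = crowd_targets(2.00, 4000, 6)
print(f"target crowd: {crowd:.0f} patrons")                   # 2000
print(f"rate: ${rate:.4f} per patron per month")              # $0.0010, the alpha rate
print(f"minimum sustainable crowd: {min_crowd:.0f} patrons")  # ~632
```

With those inputs the alpha rate of $1 per 1000 patrons falls out exactly, though that’s circular since I picked the inputs; the interesting part is how the minimum sustainable crowd moves as X, Y, and the buffer window change.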

Keeping in mind the “Project has to be important & known outside the FLO community” criteria (which push us towards larger projects), I think the curve will show a trade-off where larger projects are ideal for lowering the per-person contribution, and smaller projects are ideal for reaching sustainable numbers sooner. So then we just need to decide on a per-person amount and choose the smallest projects which can satisfy that amount.


I didn’t know where this was headed when I started writing. It’s an interesting conclusion. I don’t have time to think about it more now; it’s really, really late and I need to get to bed. I’m interested to see what the dollar amounts look like, though!

2 Appreciations

This isn’t really right. Snowdrift.coop collected pledges without a proper pledge page and without announcing to the world that pledging was even possible (let alone actively promoting it), and we got to a point where it would take 4 years to accumulate chargeable amounts.

The only thing we know is that people numbering into the hundreds are willing to pledge even under those circumstances. This tells us near-zero about the buffer time even for us as a single project if we simply had a complete pledge experience and announced and promoted pledging. That might drop the buffer to a couple of months even with us as the sole project. Who knows.

I think it’s a big mistake to use the pledges that trickled in during our unannounced operation as a metric for anything except an indication that people will pledge at all.

On goals with outside projects

We don’t need a metric for some exact number right now. We need projects that can arrive at target numbers in conversation with us. Then we can discuss those numbers. It’s actually totally useful if we engage with a project, get numbers, and then decide that the numbers make the project a poor fit for the initial beta set.

2 Appreciations

Yes, this circumstance was exceptional. The other post acknowledges this (I’ve edited it to make that slightly clearer); that this mention didn’t was just a matter of bad phrasing. However, I didn’t use it as a metric at all, just as an illustration that we should probably pick a buffer period shorter than 4 years :grin:

With outside projects, I don’t think it’s an either-or scenario. We can do some preliminary calculations and remain flexible. Really, it’s just a 3-variable equation that will come out of the post above (I suggested fixing one variable to make it easier to graph, but that’s not necessary); we’re not locking in anything, just getting a better idea of how the numbers might play out. After seeing the analysis in the other thread, I think that’s a very useful thing.
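
(Spelling that out under the same assumptions as my sketch above, in case it helps: with X = average patron payment, Y = the project’s monthly need, and M = the tolerated buffer in months, the derived rate is r = X²/Y and the minimum sustainable crowd is 3.79·Y / (M·X²); fixing M, as I suggested, leaves an ordinary two-variable relationship to graph.)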

3 Appreciations