Add adaptability, avoid complexity to project honor code?

The honor code is nice indeed. Am I overlooking something, or is the issue of complexity missing? Here’s what I mean (quotes are from the GNU free software definition):

  • The freedom to run the program as you wish, for any purpose (freedom 0).

If the program can only be used for what the original author intended, even though reusability could have been achieved with little effort, the program does not meet this criterion. The best way to stay creatively reusable is to avoid complexity.

  • The freedom to study how the program works, and change it so it does your computing as you wish (freedom 1). Access to the source code is a precondition for this.

This implies writing it in a way that is designed to be understood and changed by as many in the userbase as possible. The only effective way to do this is to avoid complexity.

My opinions about this topic are not very common in the FLO world, so I don’t think you would want to adopt them.


We could perhaps add something more generic that captures the gist. The point is that projects should ideally make it practical for people to adapt the software’s use and to study and modify it. I think the scale ranges from purposeful obfuscation, to no consideration, to good-effort work and documentation, to proactively designed for adaptability.

I just wouldn’t want to suggest that there’s some ethical failing when a project isn’t specifically designed for adaptability. I can accept that some projects just cannot prioritize that. But overall it’s a nice set of ideas: reduced complexity, approachability, accessibility, adaptability…

I sympathize with the idea of “humanitarian code”, which would follow principles of clarity, simplicity, and interoperability. I think a code base is a complex adaptive system, where “the whole is more complex than its parts, and more complicated and meaningful than the aggregate of its parts.” Software systems aren’t designed on the macro scale — they evolve. Therefore I recognize humanitarian code as a difficult-to-achieve goal, even for people motivated to do so. It requires a lot of resources and cooperation, which is what Snowdrift is trying to help facilitate.


On one hand, I agree. On the other, software like this still has an advantage over proprietary software: old versions remain available indefinitely.[1] This creates an incentive for the developer not to do anything too crummy.

Of course, this incentive also applies to proprietary projects, which may lose users, but they only have to compete against other software, not previous versions of their own software — users can usually choose not to update, but there’s seldom an option for new users to install a previous version.

I think browsers[2] are a notable exception — they don’t have to compete against old versions because users need to update frequently to address security bugs. And even then, there’s some mild competition in the form of Pale Moon / WaterFox.

  1. Or at least, as long as the build tools work. ↩︎

  2. specifically, browsers that implement a JS engine ↩︎

The Unix idea of simple programs, dependencies on dependencies, etc. does have some major pitfalls. It can introduce unsustainable chains of security concerns if there are too many projects in too complex a web of connections. One sloppy developer may open a security hole that undermines the security of the whole ecosystem.

See the left-pad case with npm as an example of overly simple projects with overly complex dependency relationships. Trivial code is best kept as snippets that other projects copy rather than pull in as dependencies. Or maybe it’s best if a reference to upstream is somehow maintained without an actual dependency: it’s nice to know when upstream improves the code, but not to have it as a hard dependency.
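For reference, the entire left-pad package was roughly the following few lines (paraphrased from memory, so take this as a sketch of the idea rather than the exact published source):

```javascript
// Approximately the whole of the infamous left-pad npm package:
// pad a string on the left to the given length with a fill character.
function leftPad(str, len, ch) {
  str = String(str);
  ch = ch || ' ';
  while (str.length < len) {
    str = ch + str;
  }
  return str;
}

console.log(leftPad('5', 3, '0')); // "005"
```

Code this trivial is exactly the kind of thing that could live as a copied snippet instead of an installable dependency.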

The total complexity of the system should always be minimised, not only the complexity of your own code. The complexity of a program should be defined recursively, i.e. a 1-line-of-code program can be complex by pulling in the complexity via dependencies, or by having a complex dependency graph. (The problem with left-pad is actually not that left-pad had dependencies, but that people depended on left-pad. Left-pad itself was not complex, but the software that became broken was complex.)
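To make the recursive definition concrete, here is a hypothetical sketch (package names and line counts are invented for illustration) that scores a program as its own size plus the recursively computed complexity of everything it pulls in:

```javascript
// Hypothetical illustration: total complexity = own size plus the
// (recursively computed) complexity of every transitive dependency.
// Each node lists its own lines of code and its direct dependencies.
const graph = {
  app:       { loc: 1,    deps: ['leftPad', 'framework'] }, // a one-line program...
  leftPad:   { loc: 10,   deps: [] },                       // simple in itself
  framework: { loc: 5000, deps: ['leftPad'] },              // ...but heavy overall
};

function totalComplexity(name, seen = new Set()) {
  if (seen.has(name)) return 0;  // count shared dependencies only once
  seen.add(name);
  const node = graph[name];
  return node.deps.reduce(
    (sum, dep) => sum + totalComplexity(dep, seen),
    node.loc
  );
}

console.log(totalComplexity('app')); // 5011
```

Note how the one-line `app` inherits essentially all of its complexity from what it depends on, which is the point being made above.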

“if there are too many projects in too-complex a web of connections” is thus a non-example.

For a different perspective on the left-pad example, see the linked post, where a unix-aligned person wonders how people could neglect their dependency graphs that much.

I would like to use the opportunity to refer to the linked PDF for more background on the unix philosophy – but it’s 61 pages, so I would totally understand if you don’t take the time to read it.

This is very much aligned with how unix people think. For example, some communities copy around a simple command-line parsing library called arg.h (used in dmenu, farbfeld, and slock, for example).


I have to admit that literally all nerds in my proximity understand and support my views, even if less radically. As a rare counter-example, a friend and I recently encountered someone concerned only with practicality (“why care about the complexity of my dependencies if I can achieve a lot in little time?”). Since he was a web developer, we told him about left-pad and talked about some inherent, philosophical shortcomings of the npm ecosystem. He then pretty much agreed with us that the more beautiful ways have their place and are better in the long term.

However, the people around me are mostly, like me, influenced by mathematics and related thinking, so they inherently have a different approach to beautiful solutions and complexity. Most of the world is sadly focused on producing results quickly. Most users don’t know about beauty/complexity issues, so they choose according to what has more features.

As a result, while I think adopting the minimize-complexity principle cannot hurt you, using the approach effectively and drawing its logical conclusions (i.e. applying it radically) would distance you from the mainstream FLO world. I think that would be a good thing, but I could also understand if you would rather stay there.

I’m convinced. I mean, I sympathized with your views in the first place. I now see that you understand these things very well. To this topic’s title: you have my complete support in formulating a good entry for the project-honor-code that captures this value (perhaps with links to the best reference material). I agree that this is important and worth including.

On a side note: I’ve had some excellent discussions with Drew DeVault around SourceHut, his ideals, etc., and I respect him immensely. He’s both thoughtful and principled.

Thanks for the offer! Not sure when I will get around to this, but I’ll have a look :slight_smile:


Hmm, I’m not yet convinced. I’ve read the blog post from Drew DeVault, and I understand that the traditional way of coding leads to complex systems having sustainability problems in practice. That case is clear. But I think it’s a logical leap to arrive at “practical sustainability is impossible without avoiding complexity” in code in general.

For example, the Elm programming language is purely functional in its entirety, which allows some rare and awesome static guarantees. When I make a project I can have it depend on as many libraries as I want, because the only things I can inherit are their pure functions (a maintainer can’t just add some cryptomining code) and because the language automatically enforces semantic versioning. A dependency literally cannot break my code, because any breaking change would put the library’s version outside the range my program supports, and my program would just build with the latest compatible version of the dependency. The same is true for all the dependencies of that dependency. The result is that this clever setup has solved the problem of “dependency hell”. Turns out it’s possible after all, and with a technical solution, not a human one.
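The enforcement rule can be sketched roughly like this (in JavaScript rather than Elm, with invented names and a toy API representation; Elm’s real tooling compares full module APIs, so this is only an illustration of the idea):

```javascript
// Sketch of the rule Elm enforces automatically: the required version
// bump is computed from the API diff, not chosen by the maintainer.
// An "API" here is a toy map of exported names to type signatures.
function requiredBump(oldApi, newApi) {
  const removedOrChanged = Object.keys(oldApi)
    .some((name) => newApi[name] !== oldApi[name]);
  if (removedOrChanged) return 'major';       // breaking change
  const added = Object.keys(newApi)
    .some((name) => !(name in oldApi));
  return added ? 'minor' : 'patch';           // additions vs. internals only
}

// Hypothetical library API before and after a change:
const v1 = { pad: '(String, Int) -> String' };
const v2 = { pad: '(String, Int, Char) -> String' }; // signature changed

console.log(requiredBump(v1, v2)); // "major"
```

Because the breaking change forces a major version, programs declaring a range below it simply never build against it, which is how a dependency “literally cannot break my code”.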

The “avoid complexity” idea is, in effect, a human solution to the problem. It’s a fine idea and works well, and again I agree it’s needed in most ecosystems where technical solutions aren’t possible. But all I’m saying is, it’s not the only way. Logically, “we can’t find a way to make complexity good” does not mean “complexity is inherently bad”.

I’ll put my card on the table: personally, I like features.
For example, the FLO software Blender can do a lot of things, and while it’s modular, having several types of applications in one arguably breaks the unix philosophy. I wonder what you think of Blender?

Investing in a program for a particular task is energy-consuming, so if I can make that one investment go a lot further (read: more features) in tackling my current (and future) needs, I much prefer it. For example, the CMS software Tiki Wiki is “the Free / Libre / Open Source Web Application Platform with the most built-in features.” (Yes, they actually spelled out FLO!) and upon encountering their initiatives like

…I tend to agree with them that in several cases, all-in-one is the way to go.

  • I would not want to rig up ffmpeg with a 3d rendering layer with a virtual modeler just to recreate the experience I get with Blender.

  • I would consider using something like Tiki for building a web platform, knowing that I only need a wiki right now but if I want to add a forum later, it’s already built in. No need to rig up the authentication and theme a separate forum system (which is what we did here on snowdrift).

  • I’m creating an app that integrates time tracking, todo lists and calendaring. Not because there aren’t already tons of tools in those categories, but because the tendency to always keep them separate has led to a world of really lackluster integrations between them. Having them be part of the same system opens up new possibilities - even if just considering the feature of “ease of use”.

  • Despite the flak systemd gets for supposedly ‘violating the unix philosophy’, it turns out its design goals are really sound, and this talk in particular convinced me that they actually found one of the best approaches to their problem.

Anyway, make of that what you will, but I will still eventually read the Unix Philosophy pdf you linked to (when I get the time). :slight_smile:


Regarding your comment about chromium at On solving the world's problems [MOD EDIT: moved to this thread, below]:

I’m not sure that’s a fair way to look at whether a piece of software respects your freedom.

I agree that I may have used the term “free software” incorrectly.

Say in many years, most software no longer supports 32-bit machines and thus cannot be compiled on them.

The point is that the only reason why it’s not compilable on 32-bit is avoidable. Not if the current complex web standards are a given, of course. Note that it takes way too long to compile on 64-bit too – this is less about 32-bit than about not expecting the user to compile the software in the first place.

Along those lines, if there’s any software in the world today that “deserves” to be allowed to have a build process that requires more power than some PCs (that it can ultimately run on) have, I’d say it’s the web browser.

Do you think the web stuff needs to be this complex to implement what it does? My opinions don’t come from Drew, but let me refer to one of his rants:


I agree that it’s a logical leap. It being “impossible” is not even true, you’re right about that. I just like how reducing complexity keeps us from many hassles. I think in current practice, it’s a very useful and recommendable approach.

As a math student, I naturally like functional languages. I think functional concepts are very nice for avoiding complexity.

Dependencies breaking are only a symptom of complexity, not the real problem. Once you’ve solved dependency breaking stuff, you still have e.g. a lot of maintenance effort that you wouldn’t have without the complexity.

I don’t know enough about Blender to judge whether there are simpler ways to do what it does. I would assume so, but I would also assume it would take a lot of effort to make it simpler.

Some differences between plugins and unix-style modular things:

  • In the unix mentality, there are developers who are in charge of the entire system. In particular, they can fix duplication/incompatibility issues. (This is done either 1. by developing everything in one tree, e.g. like OpenBSD does, or 2. by caring very much about your dependencies.)
  • Plugins are written for the single purpose of extending a certain application. Modules are written to be useful for many purposes. (E.g. none of grep, sed, awk, cut, paste, … are plugins)

It’s very possible to have a modular design be installable and usable as if it was one integrated thing.

I don’t currently have the time to listen to the systemd talk, but I hope I will find some – it’s always interesting to hear stuff from people who have quite the opposite opinion.


I’m not sure that’s a fair way to look at whether a piece of software respects your freedom. After all, those of us who follow this movement do so for ethical reasons, above practical ones. But sure, I agree that there’s a possible dilemma if we should encounter software that deliberately counteracts your “practical ability to tinker” (aka use freedom 1) while technically being FLO - but I don’t think that’s really what’s happening, neither here nor commonly in the wild. Instead, it just sounds like an unfortunate mismatch of resources/expectations.

For example:

  • You say you are on a 32-bit machine. Say in many years, most software no longer supports 32-bit machines and thus cannot be compiled on them. Does your inability to compile on your hardware of choice really mean the project is limiting your “practical” freedom? What if the “practical” situation is actually pretty great for the 64-bit users? What if everyone in the world has already moved on to 64-bit except you?
  • Some software has uncommon or expensive hardware as a compilation target. For example, there are FLO projects that are built to run on graphics cards. (They physically cannot run on normal processors - for example, GPU-targeted computer vision applications.) Let’s say you cannot afford such a graphics card, or for whatever reason you refuse to get/use one. The project would probably just say their software simply isn’t relevant to you. But would you instead say they are violating your practical freedom?

Along those lines, if there’s any software in the world today that “deserves” to be allowed to have a build process that requires more power than some PCs (that it can ultimately run on) have, I’d say it’s the web browser. In this day and age, everyone needs to use it, not just hackers like us but even the most novice computer users and their grandmas, no?

(Edit: Realized this reply is probably better suited for the split off thread)


Hm. @Adroit I moved your post to this thread, but unfortunately Discourse seems to have dumped it at the bottom instead of in chronological order, and I can’t find a way to manually re-order posts the way I can move them between topics. So, apologies for the weird ordering this has caused. In the future, I’ll make sure to do it quickly or not at all.

(I also made a small, clearly-marked edit at the top of @photm’s post to link to the new location)


How about doing a consent decision about the text to be added to the honor code? I.e. I would write a text that I hope is OK with you too, but you could still object once you’ve seen it. If you do object, the objection would be very specific, and I could change the text to make it less strong here and there.


Yup! For one reason: backward compatibility. So long as we need to be able to view pages as old as the HTML format itself, and the Web embraces that rule in its standards (which it does), the bloat of modern browsers is more than justified. Fortunately, thanks to WebAssembly, this feature bloat is probably now slowing down…

That’s a good point, though a bit pessimistic in his solution. I agree that it’s a problem, but with the current Web 2.0 I’d say we’re stuck with it for now. Here’s to hoping the next version of the web (preferably the Safe Net) lets us ditch those issues by letting us abandon the old specs.

Right, I’m operating on the assumption that they are. For now.

Sure, but any feature or capability a project lacks could usually be called “avoidable”. I don’t see how that means said feature is owed to the users in some way… Supporting multiple compilation platforms doesn’t typically come “for free” in most dev toolchains, so it sounds like you’re expecting people to do work for you just because they’re giving you some software you like to use, free of charge. Or at least, you talk about the lack of such efforts as an ethical failing.

Me too. It’s especially good for hackers and tinkerers like us who are capable of building something complex, when we need it, by stringing together a bunch of small, reliable pieces (e.g. unix tools) to do the complex job. However, for the rest of the world, that’s a lot to ask them to learn - especially when they don’t value bespoke or DIY. They just want a complex job tackled professionally.

In some scenarios, of course - but in the example I gave, libraries are made only of pure functions, so they quite literally do not need to be maintained. They’re statically verified that they can only do the intended things (because type systems!) and then they work forever. Only language changes, or the desire for more features or a better optimized approach, would require more dev work.

Of course, I will weigh in in the appropriate ways! I am in no way authoritative on the subject, just trying to contribute to the discussion.

No problem - I’ll try to reply the correct way from now on, but it’s unfortunate that there’s no way in Discourse to give an off-topic reply as much prominence as the off-topic bits that started it. More in-context thread linking would be a nice feature…