Software disenchantment

Why Modern Software Sucks


Imagine if every restaurant served every cuisine: Chinese, Italian, Indian, and so on. And all the menus have a thousand items, each of which can be customised as much as you'd like.

That would be a nightmare. What are the chances that a restaurant that makes every cuisine will do any of them well? Why would you go to one restaurant over another? Fortunately, we don't live in that world. We can go to restaurants that specialise in one cuisine and offer reasonably sized menus where each dish is done well.

And yet this is what our software is like. It's become a grotesquerie of things that don't fit well together and don't solve a problem. It's plagued by too many options and not enough design.

Unless we radically change the way we make software, it's only going to get worse. As someone that cares about this, I can't in good conscience help make more crap. Hopefully, by understanding what's wrong with software these days, and how we got here, we can reverse engineer good software.

What's wrong with our software?

Too many options

As I described earlier, software has too many options. Every option is a decision that should have been made by the designer. It leads to analysis paralysis: most of the choices aren't that important, and yet now the user must decide. It represents a lack of focus. It also represents wasted screen real estate, with the core of the application littered with irrelevant options. This is a failure to prioritise. It makes software incoherent: there's no rhyme or reason to how the parts fit together, which makes it hard to use effectively.

With a handful of options, it's easy to find the ones we need, and it's easy to achieve the core of what we're trying to do. As you add more options, you have to put them somewhere, and there's an opportunity cost to that.

We don't make physical products like this. Cars don't acquire new buttons every week, eventually requiring panels and boxes and remotes stored in every available space in, on, or under the car. If they did, we'd beg the manufacturer to get rid of them. We just want to get from A to B, and maybe turn the air-con on, play some music, and open the windows. Radios have a handful of buttons, chairs have one knob, guitars have about 6 strings.

When something is physical, it has to fight for space, so we prioritise. But when something is virtual, we effectively have infinite space, and for some reason we treat this as progress - no more dealing with the analog tyranny of scarcity. But physical products don't have knobs and toggles sorted in alphabetical order and curtailed by space; they have the most important options, and everything else gets thought-out defaults rather than options. They have gears, which set multiple qualities at once when one option is changed.
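
To make the "gears" idea concrete, here's a minimal sketch in TypeScript (the names are hypothetical): the difference between a pile of independent options the user must decide, and a few modes, each of which sets several qualities at once behind thought-out defaults.

    // A pile of independent options: every combination is possible,
    // and the user has to decide all of them.
    interface EditorOptions {
      autosave: boolean;
      spellcheck: boolean;
      lineNumbers: boolean;
      minimap: boolean;
      // ...and dozens more
    }

    // A "gear": one choice sets several qualities at once,
    // with thought-out defaults hidden behind it.
    type Mode = "writing" | "reviewing" | "presenting";

    function optionsFor(mode: Mode): EditorOptions {
      switch (mode) {
        case "writing":
          return { autosave: true, spellcheck: true, lineNumbers: false, minimap: false };
        case "reviewing":
          return { autosave: true, spellcheck: false, lineNumbers: true, minimap: true };
        case "presenting":
          return { autosave: false, spellcheck: false, lineNumbers: false, minimap: false };
      }
    }

The user picks a mode; the designer has already made the remaining decisions.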

Needle in a haystack

In the analog world, we can rely on location. There are only so many places we can put things. We can find the same book 5 years later. We can reference seminal books and essays centuries, even millennia, after the fact. Not everyone writes books, so there are fewer to search through.

In a talk on concurrency and mutability, Rich Hickey contrasts putting your jacket on a coat rack with putting it on a production line in a factory. Technically both are mutable and concurrent, but even though people are adding and removing jackets from the coat rack, you can still expect to find yours; the same isn't true for the production line.
There used to be a time when I could easily find the same piece of content, but that's no longer true.
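
A minimal sketch of that contrast in TypeScript (hypothetical, not Hickey's own example): storage keyed by a stable location versus storage where your item's position depends on what everyone else does.

    // Coat rack: each jacket hangs on its owner's hook. Other people adding and
    // removing their jackets doesn't change where yours is.
    const coatRack = new Map<string, string>();
    coatRack.set("me", "blue jacket");
    coatRack.set("colleague", "red jacket");
    coatRack.delete("colleague");
    console.log(coatRack.get("me")); // still "blue jacket"

    // Production line: a shared queue. Your item's position depends on what
    // everyone else does, so "where I left it" tells you nothing.
    const line: string[] = ["blue jacket"];
    line.push("red jacket");
    line.shift(); // someone takes the front item - yours is gone
    console.log(line[0]); // "red jacket", not yours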

Part of this is social. We encourage people to blog, create videos, etc. "Content creators" post frequently because that boosts their numbers - even when the algorithm doesn't explicitly reward it, people do.

But it's not just social dynamics. The way we design things encourages this, and results in content being ephemeral. And with AI making it even easier to spit out “content”, this will only get worse. We've lowered the quality of content, and made it almost impossible to find high quality content.

Even in non-social contexts, modern software is designed to accumulate and present information in such a way that finding the useful parts becomes impossible.

We need information to act intelligently. If we're writing an article, we need to access, view, and modify the article we're writing, and, to a lesser extent, we need to be able to edit articles we have written in the past. We need analytics for our websites to gain insights into how people engage with them.

Information has a half-life, both in terms of how long it takes before it stops being useful/needed, and how long it takes before it's difficult to find.

Consider a meeting. The half-life of the information in a meeting is short because you start forgetting what you discussed rapidly. After a day you can probably only remember a third of it. Now consider Slack messages (or any other work chat software). Technically, the information there is preserved forever; it's not lost. But, for all intents and purposes, you can't find it easily. One day after a Slack discussion, it's buried under other Slack discussions, and you're back to relying on memory. If the information in that discussion is relevant for, say, 2 weeks, then for most of that time it's inaccessible. And for the rest of the year it's still there, burying other, potentially relevant pieces of information even though it's no longer relevant itself.

Instead, if someone took minutes and put them into a physical folder designated for this kind of meeting, the information could be accessed indefinitely, although that depends on how often this kind of meeting happens. If it's every week, then at some point you'll have to archive older meetings as the folder gets full. And if it's a few times a year, then you can keep everything to see how well goals have been met over time.

In the analog world we organise things by purpose and location, and we limit the number of things we need to organise. In the digital world, we throw everything together in the same place, as much of it as possible, and barely organise it. This is the problem with project management tools, to-do lists, and note-taking software, for example.

It might be great at the beginning, but when you use it for long enough, it becomes like one of those TV shows about hoarders, knee-deep in stuff but never able to find anything. This is the default metaphor we have for software and it’s insane that we can’t do better.

Processing lots of information is difficult. Even for computers, if you have to search through a list, it's much faster to do so with an ordered list than an unordered one - linear time versus logarithmic time. The difference between humans and computers, in this case, is that for humans the difference in performance kicks in much sooner. If you're searching through a thousand integers, binary versus linear search might not make a noticeable difference to a computer. But for a human, at 20 items there's already a huge difference.
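
For reference, here's the textbook contrast in TypeScript: a linear scan may touch every item, while binary search on a sorted list halves the range at each step - roughly 1,000 comparisons versus about 10 for a thousand items.

    // Linear search: O(n) comparisons in the worst case.
    function linearSearch(items: number[], target: number): number {
      for (let i = 0; i < items.length; i++) {
        if (items[i] === target) return i;
      }
      return -1;
    }

    // Binary search: O(log n) comparisons, but the list must already be sorted.
    function binarySearch(sorted: number[], target: number): number {
      let lo = 0;
      let hi = sorted.length - 1;
      while (lo <= hi) {
        const mid = Math.floor((lo + hi) / 2);
        if (sorted[mid] === target) return mid;
        if (sorted[mid] < target) lo = mid + 1;
        else hi = mid - 1;
      }
      return -1;
    }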

Humans can mentally handle only a few things at a time. There are the 7 ± 2 items we can hold in short-term memory. Our prefrontal cortex can also only handle a certain number of tasks; more tasks result in reduced motivation and a reduced ability to complete any of them. But, rather than giving us less, we've designed software to give us more.

Although there are solutions to this, like filtering, pagination, archiving, and folders, they're quite lazy. They require clerical work from the user that could have been done by the system.

Sometimes these information half-lives are specific to the person using the tool, and therefore can't be addressed by designing the tool better. Often, however, the half-life is known ahead of time by the designers of the tool, because it's common to the domain. The designers might not know the precise time, but they can know which group an item belongs to - e.g. "brush teeth" vs "read a book". In that case, they can build in automatic garbage collection, and separate entities based on which half-life group they're in.
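
As a hedged sketch (TypeScript, hypothetical names) of what building that in could look like: items carry a half-life category chosen by the tool's designers, and expired ones are archived automatically instead of burying what's still relevant.

    // Half-life categories the designers know about ahead of time.
    type HalfLife = "daily" | "weekly" | "quarterly" | "permanent";

    interface Item {
      title: string;
      createdAt: Date;
      halfLife: HalfLife;
    }

    // Rough retention periods, in days, per category.
    const MAX_AGE_DAYS: Record<HalfLife, number> = {
      daily: 1,          // e.g. "brush teeth"
      weekly: 14,        // e.g. notes from a recurring meeting
      quarterly: 90,     // e.g. goals reviewed a few times a year
      permanent: Infinity,
    };

    // Automatic "garbage collection": expired items are moved out of the way
    // so they stop burying whatever is still relevant.
    function splitActive(items: Item[], now: Date): { active: Item[]; archived: Item[] } {
      const ageInDays = (item: Item) =>
        (now.getTime() - item.createdAt.getTime()) / 86_400_000;
      return {
        active: items.filter((item) => ageInDays(item) <= MAX_AGE_DAYS[item.halfLife]),
        archived: items.filter((item) => ageInDays(item) > MAX_AGE_DAYS[item.halfLife]),
      };
    }

The precise numbers are the designers' call; the point is that the clerical work happens in the system rather than in the user's head.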

Not very powerful

Software has lots of little features, but until a given feature arrives, users can't do anything other than what they've been prescribed. The whole is merely the sum of a thousand parts.

Good software has a few useful features, but allows users to get even more value by combining things in unanticipated ways.

Overly beginner focussed

If you're doing things right, you should only be a beginner at any activity for a short amount of time. But it feels like more and more content, resources, and tools are focused on beginners. You can't cycle very fast with training wheels, so I outgrow software aids and end up on my own.

You don't need software to stop being a beginner; you just need to try a little bit. But once you're an intermediate or an expert, things get more involved. There are diminishing returns when it comes to improvement, and more effort is required. That's where it makes more sense for software to take things off the expert's plate so they can focus their expertise, and to isolate the areas that require improvement.

Software doesn't do anything useful

These days, most of what we call software development is putting stuff in a database and showing it to the user. We’ve effectively created an infinite piece of paper.

This is an improvement in some ways. We can book things online 24/7 rather than having to call someone during work hours. However, in many cases, this is worse than the analog equivalent it replaces. By being limited in physical space, to one piece of paper, we give ourselves mentally manageable lists. We can process 10 items; we can't process hundreds.

Instead of making things easier for people, by using computers for what they do best and processing lots of data quickly, we’re making things more difficult for people.

This didn’t used to be the case. Computers used to do things. Accounting software summed up accounts, calculated salaries, etc. We still have that software, but we don’t have new kinds of software that does things.

In a game, there is quite a lot there that does not involve storing and retrieving user data. You can make a whole game and then later figure out how to save the user's score or progress in the game.

It can and should be the same with software, otherwise it's just expensive paper.

Missing Core Parts

A lot of software doesn't address major things that are central to the problem it claims to solve, leading to poorly cobbled-together combinations of multiple solutions.

Take React. React is designed to deal with the question "given a new state, how can I update the DOM easily?". It does a decent job answering that question, but it does a poor job answering "how do we get the new state?". There are multiple ways to do this, but they're all bad and seem like an afterthought. An entire ecosystem has developed to try to address this with external tools. Most of the recent changes and features in React (e.g. hooks) are ways to deal with the shortcomings in this area.

The consequence is you have this Frankenstein system where the parts don't fit together well.

State management is central, and the fact that it's not addressed properly is a major issue. "Given a new state" - how do we get that new state? That's not someone else's problem. When you sign up to solve the first problem, the second one is your responsibility.

Let's contrast this with Elm. Elm handles DOM diffing, state management, and events in a way that fits together neatly. Under the hood, state management and HTML data are tied together in signal graphs to reduce recomputation, avoid many of React's pitfalls, and improve performance.
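
To show what "fits together neatly" can mean, here's a minimal sketch of the Elm-style architecture, written in TypeScript rather than Elm: one state, one set of events, one pure function that produces the new state, and one pure function from state to view.

    // The whole application state, in one place.
    interface Model {
      count: number;
    }

    // Every event the application can react to.
    type Msg = { kind: "increment" } | { kind: "reset" };

    // The only way to get a new state: a pure function of the old state and a message.
    function update(msg: Msg, model: Model): Model {
      switch (msg.kind) {
        case "increment":
          return { count: model.count + 1 };
        case "reset":
          return { count: 0 };
      }
    }

    // A pure function from state to what should be on screen; the runtime
    // (Elm's, or a virtual DOM library) diffs and patches the real DOM.
    function view(model: Model): string {
      return `<button>clicked ${model.count} times</button>`;
    }

Everything the application does flows through update, so "how do we get the new state?" isn't left as somebody else's problem.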

If the Javascript ecosystem is Frankenstein's monster, Elm is a human born with all the organs they need, but able to acquire new clothes and tools.

This seems like it's in tension with my point around software having too many features and options, but it's actually a consequence of the same thing: too much time spent on peripherals rather than the core.

If people were born without a major organ, and needed surgery from birth, that would be a big problem. But we're born without clothes and we're fine. We don't need to be born with all the clothes we'll ever need for every climate, and an appendage for everything we do. A brain and hands are enough for us to be able to function and adapt to what we need to do.

When you set out to solve a problem and introduce a feature, you take on the responsibility of everything that is needed to get the job done. If you make a phone it must come with a charging port, that is not optional. If you make a computer it must have an off button. If you set out to make spaghetti, you must have sauce. If I make a point in an article, I must justify it.

Often, to solve a problem well, there are, say, four parts that must be addressed. When software advertises a laundry list of features, it usually takes on many different problems but only addresses one of the four parts of each. In those cases, they should either not sign up to solve the problem (by not introducing the feature), or solve it properly by including what's necessary. Doing the half-assed version and calling it "iterative" is NOT OKAY.

Performance is fucking awful

My workflow hasn't changed much in the past decade. I have roughly the same number of tabs open, maybe fewer, and the same number of code editors open. In that time I've gone from 4GB of RAM to 32GB. But the amount of resources that the same programs gobble up has gone up and up at a ridiculous rate. Moore's Law, if it's still a thing, is no longer able to keep up with how bloated our software is getting.

This article is interesting reading. It explores how much JavaScript is shipped by websites that don't need much interactivity, and how it's gotten worse. Slack, a chat app, is larger than Quake (the whole Quake download, binary plus resources, versus Slack's JavaScript alone). 55MB! That's just the JavaScript sent over the network, not how much memory it takes up once it's loaded. In Chrome, the Slack tab consumes 283MB of memory; fewer than four such tabs add up to a gigabyte!

Somehow, over the last two decades, our industry keeps finding new ways to achieve less with more.

Putting it all together

Modern software doesn't accomplish much. It actively makes things more difficult for users, and it consumes lots of resources while doing so.

That's a pretty shit deal.

How did we get here?

People didn't set out to make crap. They set out to make something they thought was good, using their philosophies on design, development, and project management methodology, and the consequence was crap.

These philosophies often sound good, and they make people feel warm and fuzzy, but they don't stand up to scrutiny. They're half-baked ideas that people picked up by hearing others talk about them: out-of-control memes that must be excised if we are to do better.

Bad Design Philosophy

Overemphasis on democratisation

Democratisation makes people feel warm and fuzzy. It conjures up images of poor people, once shut out of a world controlled by elites, finally allowed to participate and creatively express themselves. It's assumed by default to be good. Anything that is perceived to be a barrier to everyone being able to participate is treated as intolerable.

Guitars are too difficult to learn, and can cut your fingers and cause calluses? We need to fix this! Think of the people who could have become rock stars if it were easier to learn!

It sounds ridiculous when I apply it to guitars, but this is essentially how people design software these days. Rather than expect users to put in any work at all, we need to massively dumb down software so that any idiot can use it.

Democratisation can be good, and sometimes people do make software unnecessarily complex, but sometimes things are complex by nature and trying to dumb them down makes them worse. It drowns out the people who are able to do more, and floods the world with mediocrity.

Naive belief that digitisation is a panacea

The promise of software was that we could take slow, manual processes and use the fact that computers can process data millions of times faster than us to usher in a new age of productivity and prosperity, unencumbered by arbitrary physical limitations.

But often we make things worse like this. Take meetings, for example. We've evolved to read all sorts of cues from multiple channels when talking to people, with verbal cues estimated to account for only around 7%. We've stripped most of that away by making meetings virtual, and in the process created Zoom fatigue. Or take meetups and networking. In person, a hall with a hundred people settles into small groups, with people talking to those nearby. In virtual meetups, where there's no concept of nearby, a few loud people dominate the discussion and everyone else awkwardly watches.

But people don't think that far. As far as they're concerned, digital = progress.

Done properly that might be the case, but merely slapping a badly designed digital interface over something often makes it worse.

These ideas are absorbed via osmosis, and if you were to grab a random person in this space and start a new project, you'd get more of the same sort of design that leads to all these problems.

Programmers have become domesticated

The first programmers were geniuses. They had to be. There was no prior art, they were inventing it as they went along. They were accomplished people from other fields, mathematicians, physicists, biologists, etc. These people pretty much ran computers in their heads.

The next few waves of programmers, whilst not necessarily geniuses, were pretty damn smart. These were people that fell in love with computers as kids. These computers were nowhere near as powerful as what we have today, so they had to be very clever to do interesting things with limited resources.

Back then, you could pretty much count on a programmer being clever, and on them being able to introduce software into a domain where it hadn't been applied before.

These people were a good mix of practical and theoretically knowledgeable. Being clever was considered an asset; hiring a genius was a competitive advantage for a company.

It wasn't all rosy though. The lack of prior art resulted in uncertainty in how to structure things, and poorly adapted management practices. Shipping a project on time and budget was difficult, and people had to deal with bugs, coupling, changing requirements, etc.

Different solutions emerged to tame the beast of complexity, and in the process they threw the baby out with the bathwater, and took a sledgehammer to the bathtub for good measure.

Now, not only can you not expect a programmer to be clever, clever is a bad word. If there is a complex piece of code, the programmer must be doing it wrong.

If a physicist doesn't understand another physicist's equation, they don't say "this is too complex, you're a bad physicist". They say "I need to focus more and brush up on Riemannian geometry". Although simplicity and elegance are things scientists appreciate, they also take reality on its own terms, and they have to be smart.

In perhaps the most brilliant coup in history, programmers have been convinced that they should aim to be replaceable. Literally the analogy they use is that if they get hit by a bus, it should be easy to resume work on the project by replacing them with another programmer. According to this view, having a genius write code that only another genius could understand is a bad thing, but having a run of the mill programmer write code that any idiot can understand is the ideal.

Imagine if authors worked this way. If Stephen King made sure that, were he hit by a bus, there were people ready to carry on writing for him and no one would notice the difference. If that were the case, his books wouldn't be worth reading.

The net result is that teams are incapable of producing anything technologically novel. If the difficult stuff isn't in a black box from some external library, packaged with a pretty little bow for the programmers to send stuff to and get stuff from, then it's hopeless.

You can't build a rocket by restricting yourself to what the average person can understand, or buying parts designed by smart people and hoping that you can blindly combine them in a better way.

I'm not quite sure how this happened, but I think a few factors contributed:

JavaScript

JavaScript, a frontend programming language, deals with the tip of the iceberg - the most visible and least abstract part of a program. The Node package manager made it possible for people to cobble together things they don't understand into something other people can see. Now everyone can easily download half the internet. Other languages' package systems weren't as "easy". For example, in the early days of C++ it was difficult to share libraries across organisations because of how linking worked, so every organisation made its own. Whilst this resulted in a lot of wasted, repeated effort, it meant that you couldn't coast along as a programmer without being able to make things from scratch.

Once JavaScript took off, other languages followed suit.

This followed the rise in the Web's popularity and the abandonment of other models of software, such as desktop GUIs and selling licenses.

Test Driven Development

The TDD movement was a dogmatic one, claiming that every line of code must first have a test. "Uncle Bob", a programmer who has influenced many others, was asked what percentage of code should be covered by tests, and he responded "only the parts you want to work".

I don't know about you, but my hit rate is a fair bit higher than that.

In any case, although people don't practice full TDD, the TDD movement has resulted in a significant change in the industry. There are often coverage requirements in our pipelines.
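
For instance, here's what such a gate typically looks like with Jest (the threshold numbers are hypothetical and vary by team): the test run, and therefore the pipeline, fails if coverage drops below the configured percentages.

    // jest.config.ts - a coverage gate: the build fails if coverage
    // falls below these thresholds.
    import type { Config } from "jest";

    const config: Config = {
      collectCoverage: true,
      coverageThreshold: {
        global: { statements: 80, branches: 70, functions: 80, lines: 80 },
      },
    };

    export default config;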

Testable code has become synonymous with well designed code. The system as a whole has to be mockable and testable, as do all the units.

I think this is a mistake. If the design changes to accommodate testing, that's a bad thing - at least when good design thinking went into making something work a particular way, only for it to be scrapped for the sake of testability.

Example-based testing is especially problematic. For example-based testing to work, you need to be able to tell, at a glance, the precise output you'll get for a particular input. That means simple inputs, simple outputs, and simple computations. In other words, software shouldn't do anything we can't do ourselves.

This inherently makes code less useful. The most useful software I've written involves units that exist solely for other functions to use, with large inputs or outputs and complex computations, doing things I can't do myself; I can only follow along algebraically.
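
A hedged sketch of that constraint in TypeScript (the functions are hypothetical): an example-based test works when you can state the exact output at a glance, and stops working once the unit does something you couldn't compute yourself.

    import { strict as assert } from "node:assert";

    // Easy to example-test: the expected output is obvious from the input.
    function add(a: number, b: number): number {
      return a + b;
    }
    assert.equal(add(2, 2), 4);

    // Harder: a unit whose exact output you can't predict at a glance.
    // (Placeholder body - imagine an iterative numerical routine.)
    function solve(coefficients: number[]): number {
      return coefficients.reduce((acc, c, i) => acc + c / (i + 1), 0);
    }

    // The best you can write by hand is a property of the result,
    // not a precise hand-computed expected value.
    const result = solve([1.7, -3.2, 0.5]);
    assert.ok(Number.isFinite(result));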

Also, a funny anecdote: when Kent Beck introduced Niklaus Wirth (the computer scientist who designed many languages, such as Pascal) to TDD, Wirth responded, “I suppose that’s all very well if you don’t know how to design software.”

People getting into the industry for perverse reasons

Money and status. Programming used to be considered weird. People who went into programming went into it despite all that, because they loved the craft. But after a few programmers became super rich and powerful (Bill Gates, Mark Zuckerberg, Elon Musk), people wanted some of that for themselves. But not the same sorts of people. In came the normal people, the jocks, the business majors, etc.

Everyone should code! The "everyone should code" movement sees programming as a form of literacy and of participation in society. Any perceived barrier to getting people into code had to go. They didn't just say that knowing how to code is nice or useful, but that anyone opposing the movement is an elitist. If anyone feels like an imposter, they should squash such doubts. Anyone can code! The consequence is that standards are now the enemy.

Good old fashioned nerd bashing

In school nerd bashing is clear and overt. "You're a freak!", stuffing in lockers, etc. In adulthood it's more subtle: "you're pretentious", "you should work more on your soft skills".

Suddenly the industry is filled with people who don't have much in the way of hard skills but are buzzing with people skills. People skills, though, have historically been neglected in this industry, much to the chagrin of the managers, because they're not that relevant.

So they make appeals to humility, they emphasise the importance of soft skills, and they denigrate the focus on hard skills and intelligence, to pave the way for themselves and people like them.

So now the average programmer can't do anything useful. They're destined to create crap on autopilot. More and more the programmers who can do otherwise are exceptions.

Bad tools

Something like React requires a larger bundle just to include the library - before any actual application code - than the total size of an application written in Elm and compiled to JavaScript, Elm runtime included.

If you're using a tool like React, you're out of the race before it starts. It doesn't matter how careful you are about compressing application code and code-splitting.

Somehow we've settled into a situation where the default is you install half the internet before you do anything.

You're not allowed to use better tools or make your own. If you make your own you're "reinventing the wheel". And you can't use good tools because it's "too hard to learn".

Methodology

To some extent, it doesn't matter what the designer believes. The software methodology will reliably produce software that's designed a particular way.

And if there's one methodology that is the most responsible for the things I'm complaining about, it's Agile.

No proponent of Agile can call a communist naive. Agile is the one methodology that you can publicly say you're practising, yet time and time again it fails to produce good software. And each time that happens, it's because it "wasn't really agile". It was Waterfall in disguise! At some point Agile has to take responsibility.

Whilst it's true that Agile has been co-opted by corporate people, many of its problems were carried over from the original idea. I've spent time reading and listening to the original signatories of the Agile Manifesto, and I have plenty of issues with their own advice.

Take Kent Beck, author of Extreme Programming. In a recent podcast he makes the following points:

  • We should work in small two week sprints
  • We should keep working on a project forever. If there's no way we can add more value, that means it has failed

Not mentioned in this podcast, but mentioned elsewhere, is that the tasks completed by developers should be "user stories". We should estimate how long each story takes, and we should prefer small ones to large ones. Large ones are to be looked at with suspicion and attempts should be made to split them up.

Each task - I mean story - has a one-to-one correspondence with something a user sees on a screen or an interaction they have with the system. This is partly responsible for why software doesn't do anything. If you have a short amount of time to do a task, and it has to correspond to something a user directly perceives (the tip of the iceberg), then you latch on to something familiar - a framework, or something done a thousand times - and you focus on storage and presentation. You don't have time to do something from first principles.

Let's do some maths here. Let's say we have

  • 3 programmers
  • Working on a project for 5 years
  • Each completing 3 user stories per week
  • That's 3 programmers × 3 stories/week × 52 weeks × 5 years = 2,340 "User Stories"

Is that 2,340 features, or 234 features changed 10 times? It doesn't matter, there is no way the outcome isn't a mess.

Even with a designer who values simplicity, every 2 weeks they need enough tickets on the board for their programmers, and if they're doing this forever they can't have a simple, well designed application.

Maybe it's a good thing that corporate agile unwittingly slows this down with constant draining meetings.

Let's contrast this with Shape Up, the methodology 37signals uses to produce software like Basecamp.

  • 6 week projects with a 2 week cooldown
  • Periods of shaping and pitching followed by project work
  • 1 programmer and 1 designer work together on a "project" - typically part of a larger product, but a self-contained unit, e.g. adding calendar functionality to the project management software they make. If they're running out of time, they cut scope instead of pushing on. And if it doesn't deliver the desired functionality, they scrap it. Maybe they'll revisit it later, but not automatically.

If we do the maths here (52 weeks ÷ 8-week cycles ≈ 6), we get about 6 of these iterations a year, or 30 after 5 years. And each iteration produces a meaningful, cohesive part with a few sub-features, all in the service of one goal. There may be multiple teams working independently on different parts of the product, but Basecamp has been in business for 20 years. In that time they've rewritten Basecamp from scratch twice (while allowing existing users to retain their version), making sure that everything fits together well, and they've worked on other projects such as Hey (an email service).

I'm not fully on board with all of their thoughts on software design - I have gripes with some of the things they've written - but on the whole they are way ahead of the curve, and their methodology helps with this. I don't necessarily think Shape Up is the way to go, but I encourage you to do the maths for any methodology and extrapolate where it leads. It will have a larger impact on the design than the designer. If you care about how something is designed, you must direct your attention to the factory that produces it.

Imagine if any other industry produced things this way. If books were written by teams, in the same room as the readers, and they released every new chapter - or rather every page. And if the book never finished, and was constantly changed. No plot, character, or world is ever planned. Instead, changes to the book are added by tweet, between meetings. Instead of a genre or two, people compromise by adding all of them. I'd be morbidly curious to check out one book like that, but if this was the default way books were produced, I'd quit reading.

So why did Agile become so dominant? Because people didn't think it through.

The original signatories didn't say "Use our methodology to create bloated useless software!".

They made arguments like this:

  • Remember Waterfall? That was terrible. Do agile to protect us from Waterfall. 90% of arguments for agile are like this. Despite the fact that we haven't had it in 20 years. Or the fact that agile and waterfall aren't the only two options. Waterfall is the Boogeyman.
  • Instead of spending a year to find out we made something people didn't want, we should be "scientific" by constantly testing what we make in the wild. This sounds nice. Who wants to be unscientific? But there's an implicit assumption that we're one feature away from salvation. It's like constantly testing the hypothesis "is it fairies?". If you have 2,000 features, the 2,001st feature won't help you. This isn't how scientists come up with hypotheses. They don't have a non-scientific manager to come up with their hypotheses for them - "what if we make it blue", "what if we change this molecule", and so on. Actual scientists, who have deep conceptual knowledge and equations and models, make informed hypotheses. Sometimes it takes time to set up the apparatus; sometimes they need lots of funding to build a particle collider. Scientists also control variables, by not changing things all the fucking time. After they make a change they wait long enough to be sure of the variable's impact. They have a theoretical model that allows them to make predictions. They have multiple samples and repeat experiments. Nothing like agile. Agile isn't science. It's pseudo-science.
  • Compassion. No more pointy-haired boss! Agile was initially pitched by programmers, to other programmers, as a way of getting managers off their backs and giving them autonomy over their work. This was before it was co-opted by management and turned into Scrum. But I don't think that was surprising. The ceremonies in agile, whilst not appealing to "scientific management" people, are more appealing to managers than programmers: talking to customers, meetings, planning sessions. And waterfall can actually give programmers more autonomy, depending on how it's done. "Leave me alone until the ship date" vs "a new sprint meeting already?". Agile feels like an endless treadmill. A ticket factory, constantly churning out little things with no goal in sight. As a programmer gets better, and wants to wrap their head around challenging things, they get more and more frustrated that they have to deal with simple things.

There's a reason why side projects and open source don't follow agile. When people program for fun, they do it their way. They have actual autonomy, not faux compassion.

More than methodology

Having said all this, there is more to good software than methodology. A bad methodology completely eliminates your chances of making good software, no matter what else you do. But once you have a methodology that isn't self-destructive, what else you do starts to matter. Different authors using the same methodology can write books of very different quality, by being more skilled and having better ideas about how things should be done.

Once a sensible methodology is put in place, and adjusted as needed, then the design philosophy and quality of programmers begins to matter.

Bad Monetisation Incentives

You can think of software as having an essence - the problem it tries to solve. But only working on that essence is often unsustainable; you have to factor in monetisation. Done properly, you have something well designed that's also sustainable. More often than not, however, the design is warped to accommodate monetisation. The result is that the software does not solve the problem it claims to solve.

VCs, advertising, and whales deserve their share of the blame for this.

Initially, with VC-backed companies, you aren't supposed to make money. You release the software for free. In theory, this means you focus only on the essence at the beginning. In practice, however, you need lots of users. This favours appealing to everyone rather than making something that works extremely well for a small group of people. It's also unsustainable, and a few years later, as costs skyrocket, it requires a radical shift in the product to make money, which often ruins quality. Take Quora, for example. What used to be a fountain of knowledge has turned into a cesspool as they've sold off all their principles because they weren't making money.

Advertising also warps the design. Instead of making something good enough to achieve a goal, it has to turn users into addicts, so they can eyeball and buy more stuff. If you weren't trying to make money this way, you wouldn't put all that effort into doing it.

The whale strategy also isn't great. The idea is that you make the software free for most people, and then a small group of people pay a lot of money. These whales are typically giant corporations. The problem is that the whales and everyone else are radically different - so different, and the whales so big, that an application serving both can't be well designed and focused. But the whales pay all the money, and most of the people who use the software don't. When it comes to deciding which features make it in, the people paying the money take priority. If someone isn't paying and complains, who cares? Monetising through whales results in software that is not designed for most of the people who use it.

Subscriptions can also be problematic, but I think they're less avoidable than the other things I've mentioned. The problem with subscriptions is that if people are paying forever, it's natural to think you should be developing forever. Compare that to software you sell once: there's a clear difference in the number of features and in cohesiveness. But if you have software that runs on a server, with continuous costs, then unless customers are willing to self-host, I think a subscription model is the least bad way of funding it. You do have to be mindful of the effects and put in effort to mitigate them.

For example, take software designed for beginners. If people are only beginners once, it doesn't make sense for them to keep paying, forever, for a tool to stop being a beginner again and again. If the software properly delivered, they would graduate to intermediate and not need it anymore. But people creating SaaS software are told to keep churn low. On the other hand, for intermediate-level software, where you're expected to be an intermediate for a long time, it makes sense to keep paying.

By all means make money, but actually keep your promises while getting there. Don't let your monetisation strategy get in the way of solving the problem.

All sinners or too religious?

In a religious society, everyone is a sinner; no one is perfect, but people try to get closer to their religious ideal. It's an asymptote. Everyone knows roughly what the right thing to do is, they just aren't doing it very well. To people outside that religious group, however, the problems in that society aren't the result of people not adhering to their beliefs enough, but of adhering to them too much. Their idea of the right thing to do is wrong, and that's the real problem.

It's a similar situation with software. The problem isn't that the industry is failing to live up to what "everyone knows"; it's that most people in the industry share the same false ideas, and are doomed to produce crap until something changes. If people keep self-flagellating into doing more agile, software is not going to suddenly turn good.

What can we do about it?

Imagine a place with a bad cuisine. All the chefs go to the same schools, learn the same techniques, and buy the same ingredients and equipment. Their food tastes disgusting, and their response is to adhere even more closely to what they've been taught.

Then along comes a wily new chef. This chef buys better, fresher ingredients, uses better tools, and uses better techniques. They're not just different for difference's sake, though; they understand what they're doing and the underlying chemistry. This chef can make something far better, but they don't have to do all of that. Merely doing one of these things better is enough to make a noticeable difference. But if they do all of them, that's a, erm, recipe for greatness.

I don't think bad software is inevitable. I've identified problems with software, and a few knobs in our industry that encourage bad software. By turning those knobs the right way, we can make better software. That said, not all improvements can be made piecemeal; some have to be made together, because they reinforce or require each other.

One Mind

Instead of committees and teams and votes, try to get one person to design the system. They should be smart, independent, first-principles thinkers. They won't parrot the bad design principles we see all around us; they'll make their own consistent, reasoned product. Different people fitting this bill might have incompatible philosophies, but merely adhering to one philosophy is almost as important as the philosophy itself.

Smart Programmers

Get smart programmers to work on something, let them know that it's ok to be smart, and give them autonomy. They'll come up with something great. They can create intelligent software.

Good tools

We can look for better tools where they exist, and make them if they don't.

Charge Money, or do it for fun

Either charge money directly, or create software for your own enjoyment. Reject alternative monetisation models that make money indirectly and require warping your design to do so.

No Agile or Waterfall

If you pare down the number of people involved in a project to as few as possible, you can get rid of most management concerns. Let people manage themselves, in their own styles, and produce good software. When evaluating a methodology, apply the kind of mathematical thinking I demonstrated above to see what the long term implications are.

Conclusion

As an industry software has failed. We are making things worse with a vengeance. The industry is mostly garbage. But if we recognise this, and understand why it is the case, it might be possible to do it properly. We have to try, because going on as we have been is not sustainable. It's not good for the users. And since everyone is making the same crap, it's not good for the bottom line to carry on doing things the same way.
