Introduction
It's better to have a few features that fit well together than a laundry list of unrelated ones. Often the way to do this is by using a philosophy to decide the features. To do this you need a good philosophy, and a methodology that allows the final product to resemble the philosophy.
To have a good philosophy, the product should ideally be designed by one person. Once you have more than one person, you increase the risk of people interpreting the same words differently, and people sharing platitudes. Also communication is sub-optimal - more on that in another article.
That one mind must be smart, logical, and have good design sensibilities.
There's room for multiple different philosophies, but not all philosophies are equal. Some are better than others. It's a landscape with multiple peaks and even more valleys.
You can have the best philosophy in the world, with a genius designer, but if your methodology is shit your product will be too.
Qualities aren't components, they are consequences. There's no "performance module" or "simplicity chip" that you add to your list of features. The qualities you value in your philosophy are consequences of everything that is there. Therefore the methodology must complement the philosophy. The product is the integral - the area under the curve - of the methodology.
What is Philosophy Driven Design
Bret Victor talks about "inventing on principle", and this is closely related to what I'm talking about. People who invent on principle aren't just trying to make money. They don't see problems as opportunities. They deeply feel that things must be a certain way, and that motivates them to make things when they aren't. For example, one of Bret Victor's principles is that creators need an immediate connection to the thing they're creating. Compile-run cycles, where textual code produces visual artefacts, are, for him, a tragedy. And in his talk he demonstrated tools that create a symbiotic relationship between code and the final artefact, all motivated by this principle.
The principles Bret Victor talks about, both his own and other people who are like this, underlie everything they make. This is important, but it's not what I'm talking about. Go watch his talk anyway.
What I'm talking about is having a philosophy for the current project you're working on. A sense that, for the kind of problems your project is dealing with, these things should be true, and it's a tragedy that other tools that deal with it aren't like that. This can be more specific than the principles Bret Victor was talking about.
This philosophy is something you should advocate for, even if people don't end up using your project. Something that, in principle, could be executed better by someone else. You're not just selling a tool, you're selling the idea behind the tool.
For example, suppose you're making a project management tool. You're not offering a soup of Gantt charts, kanban boards, and scrum drivel, like Jira. You have some idea of how projects should be done, and the tool is a way to facilitate that, although the idea could also be followed by emulating it with pencil and paper, or with some other tool.
Basecamp is a good example of this. The company behind it have a methodology that they follow: Shape Up. They've written books about how they work, and they use Basecamp to implement Shape Up, but there are companies who follow Shape Up without Basecamp.
But you don't need to write a treatise before making software to follow Philosophy Driven Design. Often the philosophy is internalised and implicit. It comes out when you make choices, and later on when you think about the choices you've made you notice that they form a philosophy. This is why it usually doesn't work with multiple people.
Entities
By coming up with a philosophy for your particular project, you should uncover things. Objects. Entities. Whatever you want to call them. The point is that you come up with some underlying model where the parts interact in interesting ways, and where the ultimate design and implementation is a consequence of that interaction. To use a scientific analogy, rather than saying "these celestial bodies are in these locations, at t0, and those locations at t1", merely describing things, you come up with a Newtonian model around forces, masses, gravitational attraction, that predicts elliptical orbits, and tells you where you will find undiscovered planets.
Similarly, when you find the right conceptual model to describe the ideas behind your project, then you can play around with them and discover potentially interesting and useful concepts that way, rather than randomly moving celestial bodies in the "chart of positions" model. This means don't rush to Figma, start by thinking a level of abstraction away from what the user sees.
Continuing with the project management tool example, we could say that while most project management tools represent a project as a list of tickets, a project is really a tree. It's hierarchical - each project contains sub-projects. Now that we have the tree model, we can represent the current state of a project as a tree, but also the target state, and we can diff those trees to produce a new tree that represents what's next. We can also represent proposed design changes as a tree, and use that to see what difference they would make to the overall project. To represent the work needed for a project, we can use state machines and channels, rather than a list. There are different channels/places/hats, and each one has a state machine describing its progress, plus input and output channels. These concepts have to communicate somehow - we need a way to represent a subtree of the project going into these channels, and coming back out into the new tree. This is vastly different from, and better than, the Jira approach of infinite tickets with text conventions, or links, and a smattering of tools from different methodologies.
A project management tool is a tricky example in the context of this essay, because it's recursive. I'm saying that projects should have philosophies, and my example project to illustrate this concept is a project management tool, therefore an essential step is to examine the nature of a good project. But hopefully it gets the idea across.
Don't Be Dogmatic, Or Unscrupulous
Of course, it's possible to get lost in philosophy. To endlessly split hairs over distinctions, or to become dogmatic about them and mistake the map for the territory, trying to put the square pegs of the world into the round hole of your philosophy. Obviously don't do this. And don't make the opposite mistake of not having any philosophy, of being opportunistic and unscrupulous, of being indifferent and willing to do anything, like a mercenary. Most software today suffers from this, whereas the software I write tends to follow a philosophy.
Some Consequences
If your project has a philosophy, and your thinking about it changes significantly, that's when you should rewrite it from scratch. Refactors can't get you there. Basecamp did this: they rewrote twice, keeping the previous versions available to users, because their thinking on how projects should be managed changed significantly, but there were still people who thought in terms of the previous versions. Trying to refactor your way across leaves you with a bad implementation of both philosophies for most of the journey, and drastically reduces the power of your model. Just as you aren't going to convert Rawls to Nozick through successive changes, or Twilight into Catch Me If You Can, you aren't going to convert a piece of software between philosophies.
This is why it's good to make something more than once, to discover your philosophy. And why you should be in it for the long haul.
Making software this way can also help you discover what your principles are, by seeing what aspects are common over time, and what the philosophy is a consequence of.
Examples
My Compiler
I spent some time working on a compiler.
At the time I wasn't as comfortable with functional programming and lisp. I was used to working with languages like Java, Python, JavaScript, TypeScript, and Kotlin. I had only ever used Haskell for Project Euler problems, and thought it was fun but that real software needed the mainstream paradigm. I didn't necessarily put it that way while I was working on the compiler, but looking back, that's the explanation I'd give.
I thought that we were basically there, but that popular languages had to be backwards compatible with bad decisions, and that a new language could be a far more effective way to program. Popular languages were like a messy garden, grown out of control, and I was trying to trim that garden so it looked more like the brochure, rather than moving location or making something fundamentally different.
One of the main motivations was control flow. I was inspired by Dijkstra's "Go To Statement Considered Harmful", and thought that we needed better control flow concepts than just for and while, mapping everything onto those and doing the bookkeeping for the control flow along with the actual program we were trying to write. I also wasn't a fan of break, continue, and return statements, or exceptions, seeing them as effectively gotos that rudely teleport you and break the symmetry between the structure of the code and its control flow.
So, in line with that, I made many statements into expressions. If statements became expressions, eliminating the need for a ternary operator. While and for expressions produced lists. I added an until expression. Even blocks (code inside curly brackets) evaluated to their last expression.
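Some of these ideas have close analogues in mainstream languages, which may make them easier to picture. In Python, for instance (the `until` emulation here is my own illustration, not a feature of the language):

```python
# A conditional as an expression -- in a language where `if` itself is
# an expression, no separate ternary operator is needed.
label = "even" if 10 % 2 == 0 else "odd"

# A loop as an expression that produces a list, like the `for`
# expressions described above (Python calls this a comprehension).
squares = [n * n for n in range(5)]

# An `until` expression can be emulated by negating a `while`
# condition; here we collect values until one exceeds a threshold.
values, n = [], 1
while not n > 50:   # i.e. "until n > 50"
    values.append(n)
    n *= 2
```

What Python can't do is make a whole block evaluate to its last expression - that's the part that only falls out when the language is expression-oriented from the start.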
I added union types and other goodies. I combined nominal and structural typing. When I added type checking, it was important to me that all the programs I wrote before having type checking should still work without significant rewrites. A type annotation here and there is fine, but if my type checker couldn't understand a program that would execute correctly, needing to change the structure of my code to make the type checker happy would be a tragedy.
I didn't realise it at the time, but I was halfway towards producing a functional language, in a package that was tempting to mainstream programmers.
At this point in my development, I would much rather use lisp, because I'm not convinced that mainstream languages are basically there, and I think lisp provides much better capabilities for building all the control flow concepts I would want. But I still think it's worthwhile exploring what those control flow concepts should be, separately from how they're implemented. And, if I had to use a mainstream-esque language, I would lobby for it to be the one that I made.
I worked on this project on and off for years, depending on my motivation and inspiration. I browsed lots of other languages, and thought long and hard about how I could unify things and make them more powerful, more ergonomic, and more concise. One thing I was quite proud of was that even though this language had type checking, I could write more concise programs in it than in my previous dynamically typed language.
My Game Engine
Game programmers tend to fall into two camps:
- Non-technical artsy types. They often have a "dream game", but they're scared of code. Many of them become "idea guys": "I'll split the money with you 50/50 if you make this game for me". Some of them, to their credit, try to pick up something like Unity, but most of them are doomed, since their dream game is way too ambitious and they barely know the basics of programming.
- Hardcore low-level programmers, unsatisfied with the level of control offered by popular game engines. These people often end up writing their own allocators, and not really doing a whole lot of gameplay programming. Most of their focus is on high performance rendering, and when it comes time to actually do the game part, they slap on an editor for people from group 1 to attach some scripts without messing up their memory layouts.
I think that this misses the point of what games are. They aren't movies. They aren't lifeless; they're about behaviour, and that means code. Not in the narrow frame where you can attach some behaviour to something here and there: code should be first class. There should be no barrier between game and engine.
Here's an example of the kind of thing I did. When I was working on the physics engine, I investigated Box2D, and found that although it was highly performant and capable, the way it was designed significantly reduced its power and applicability.
Often, to do what you want, you have to scrap it and roll your own physics, which usually won't be as good as Box2D's, but will make the game possible. For example, Box2D provides some capability to filter collisions between different kinds of things, by giving everything a numerical category (up to 16 of them), and then cleverly using bitmasks to describe which categories don't trigger collisions. But these are fixed bit patterns, not arbitrary logic. This meant that you couldn't have a game mechanic where you sometimes turn intangible, or change colour and only collide with things of the same colour, etc. So I made my engine accept custom predicates for collision detection and collision resolution. In theory that has worse performance, but it rarely matters if you have broadphase collision detection and spatial data structures. This widened the range of possible games you can make. In fact, collision detection, collision resolution, and physical simulation were disentangled, so that you can override each of them depending on the kind of game that you want.
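A rough sketch of the two approaches, in Python. The names are hypothetical and this glosses over everything else a real engine does; it only contrasts fixed bitmask filtering with an arbitrary predicate:

```python
# Fixed-category filtering (Box2D-style): each body carries a category
# bit and a mask of categories it collides with. Two bodies collide
# only if each one's mask includes the other's category. The rules are
# baked into static bit patterns.
def mask_filter(cat_a, mask_a, cat_b, mask_b):
    return bool(cat_a & mask_b) and bool(cat_b & mask_a)

# Predicate-based filtering: the game supplies an arbitrary function,
# so rules can depend on mutable game state (colour, intangibility...).
class Body:
    def __init__(self, colour, intangible=False):
        self.colour = colour
        self.intangible = intangible

def same_colour_and_tangible(a, b):
    return a.colour == b.colour and not (a.intangible or b.intangible)

def should_collide(a, b, predicate):
    # In a real engine this would run only on pairs that survive the
    # broadphase, which is why the extra flexibility costs little.
    return predicate(a, b)
```

Flipping `intangible` on a body at runtime immediately changes what it collides with - exactly the kind of mechanic a static category table can't express.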
The UI side was also interesting. I added something like React, to allow people to define reactive/functional UIs in-game using code, with my own layout engine, so that you rarely have to specify precise sizes and locations, taking inspiration from Flutter's layout algorithms.
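The core of a React-like UI can be sketched in a few lines: a view is just a pure function from game state to a widget tree, which the engine rebuilds (and, in practice, diffs against the previous tree) on each state change. Everything here is an illustration, not my engine's actual API:

```python
# A view is a function from state to a widget tree. The engine calls
# it again whenever the state changes; no manual mutation of widgets.
def view(state):
    return ("column", [
        ("label", f"Health: {state['health']}"),
        ("label", "LOW!") if state["health"] < 20 else ("label", "OK"),
    ])
```

Because the tree is recomputed from state, UI logic lives in ordinary code rather than in an editor's property panels - which is the "no barrier between game and engine" point again.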
Elm
Adobe Flash was being deprecated. JavaScript was, and still is, a PITA: very error prone, unreliable, confusing. Apps were moving towards the web, and AJAX was becoming a thing - making page changes without reloading the page. But people were doing this by manually searching through the DOM to make the right changes, which was error prone.
At the same time, Evan Czaplicki was exploring Functional Reactive Programming and how it could be applied to user interfaces. This was what his thesis was about. And he found that programming in Haskell was a very different experience from web programming.
And so he set out to make a language with the following goals:
- A functional language
- For web programming
- To catch errors, and introduce reliability - no runtime errors!
- Whilst delighting users
- And having a manageable learning curve for people who aren't already deep into functional programming - it doesn't require learning category theory
This stuff sounds like the usual fluff people use to describe their own work - "it's easy to use and so nice!" - but if you use Elm you'll notice that it's actually true. It's even better than that. And that makes Elm programmers into evangelists. No one could say the same about Java.
Elm has the best error messages I've ever seen. Instead of some incomprehensible stack trace, it highlights the part of your program that was probably wrong, along with an explanation and a suggestion. This wasn't part of the original plan. But as Evan was optimising the compiler's internal representations, he decided to put in good error messages, because it seemed consistent with his philosophy: until then, ML-family languages caught lots of errors but you couldn't understand the messages, and the developers of those languages spent most of their time on catching even more. Since then, other languages have started to copy Elm's error messages.
Elm also predates React, and is explicitly mentioned as an influence on Redux. Of course it's better than those two, since it was architected correctly from the start and even runs faster, but it doesn't have Facebook throwing money at it. It has also since inspired languages like PureScript.
Clojure
Rich Hickey had a lot of experience working with C++, Java, C#, in things like broadcast automation and machine listening, where time, concurrency, and complexity were front and centre.
The way these languages encouraged people to think about problems - in terms of objects and shared state - was causing practical issues, and Rich turned to a functional style to defend himself from this, but none of the core data structures were set up to work with that style.
He was also frustrated at how these languages dealt with data. On the one hand, they "complected" identity and value, so that you couldn't just work with values, and this screwed up the idea of time, which was central to the applications he was involved with. On the other hand, they wrapped data with other stuff, and got in the way, with brittle abstractions that limited the generality of programs.
Rich wanted a language that:
- Was a lisp, to permit syntactic extensibility
- Was primarily functional. Not pure, but with performant functional data structures and core library functions
- Didn't get in the way of data
- Dealt with concurrency well
- Was explicitly hosted, able to call into the JVM without wrappers and foreign function interfaces, and to make use of all the work done there
A common theme
These examples are perhaps a little too similar, since they're mostly languages. But the flip side is that they show how different philosophies within the same domain can lead to different products. These philosophies are also clearly different from the principles Bret Victor mentioned, because they don't really make sense outside their domains, the way his principle does - although you could probably find some that do if you looked.
If you asked each person what they did day to day, there would be differences. But I believe that there's a common thread underlying all of these, and that is Philosophy Driven Design. In each of these projects, whenever there was a choice regarding what to add, this often implicit philosophy guided that choice.
How To Achieve This
No Scribes
The same person should come up with the philosophy and write the code.
Just as we don't have novelists coming up with the plot, worldbuilding, and characters, and then handing all that off to scribes to finish the book, we should not have the same arrangement in software.
It's not manufacturing or construction, where the cost of design is vastly lower than the cost of building. With software, as with writing, design and implementation are the same thing. We can obviously see that trying to separate the design of a story from its implementation is doomed to bad quality and slow delivery: misinterpretation, lots of back and forth and micromanagement, and a result that just seems off because the details aren't linked together well.
When we have a non-technical person in charge, giving orders to the technical grunts, we often end up with two bad outcomes:
- Science fiction: requirements that are not possible to achieve.
- Mundanity: after the technical people say most demands are infeasible, what's left is not especially imaginative or useful, but merely doable. This is because all the non-technical person has to go on for inspiration is science fiction, and things they've used before.
What we want is doable and ambitious, and to do that you need to start with a technical person who can discover ambitious things.
Revisiting the physics example, a lot of discoveries were consequences of exploring the "implementation". For example, Schwarzschild discovered black holes by exploring the consequences of Einstein's general relativity. No one would have imagined black holes without a grounding in science.
We should stop treating implementations like black boxes, to be created by grunt workers, and fulfil the desires of non-technical people. Instead we should treat the technical side as a source of inspiration and discovery. Rather than only at best meeting what we imagined and at worst failing to meet it, treat it as an enhancement to what we can think about, something that can generate more than we imagined.
One Mind
Not only should the implementor and designer be the same person, ideally it should be one person. At least at first. If only one person works on a system, they can design it where all the parts fit into one person's head. Where the parts are consistent, or at least harmonious. Where the same implicit philosophy is used to decide all the parts. The entire history is available to the person working on it.
What we absolutely don't want is the revolving door of strangers, where none of the original developers are around, and the software accretes more and more features that don't fit together. It's like stitching together a book from many different authors, where no author has read the entire book.
Lots Of Thinking
In my day job, I'm a scribe. Things get handed down from on high, or randomly added to the backlog. When we're out of things, we have meetings. There's no philosophy. In contrast, my personal projects, including the one I'm trying to make money with, are very different.
Sometimes I'm just programming away, and other times I'm thinking really hard while doing other stuff, like going on walks. Looking at the big picture, then zooming back in to the implementation, and back again. Sometimes the time spent not coding can be quite lengthy, as I think more and more on the subject, and collect my thoughts. For weeks, a section within a project becomes everything I think about in the shower, on walks, and even when seeing people. Some people treat this like procrastination or a waste of time, but years later I can point to parts of my projects that were done this way, and it was definitely worth it.
This is similar to what Rich Hickey discusses in his talk "Hammock Driven Development". One thing he mentions is spending so long thinking about something while you're awake that your subconscious starts to process it and come up with ideas while you're asleep.
It feels great too, the "aha!" moments, where everything clicks and you come up with a clever, powerful, and simple solution to many of your problems.
This sort of thing is not possible when you constantly have to shift your focus between different meetings, and only work on small tasks before dealing with something else.
Thinking As I Go
With both my game engine and my compiler, there were aspects I was especially interested in before I started, but I couldn't just build those parts in isolation and have them function; I needed the rest. And as I did each part, I had two choices:
- Do the common thing
- Think deeply about how I believe it should be approached, see whether that's been done elsewhere, and check whether it fits with what I've done so far and want to do with this project
I usually prefer option 2. In fact, this has often ended up producing the most interesting parts of the projects I've made. For example, it's how the physics engine came about. It wasn't something I decided on before working on the game engine. But every game has to deal with physics, and when I was deciding what to do about it, then and there, I eventually went with my own approach that had certain properties.
Read A Lot
Thinking a lot and trying to derive everything from first principles is valuable but limited. You can increase what you can think about by reading more. You should use reading to supplement thinking, not as a substitute.
There are different kinds of reading you can do to help with this.
Historical reading is great, because it gives you a framework you can use to understand the current paradigm. Often, at least in computer science, there was more variation in the early history, but one approach became popular, and most things since then have been relatively surface level changes within that particular approach. Sometimes it's because these alternate approaches were inadequate, and the best one won out. But, quite often, that is not the case. Often one approach won out for spurious reasons, maybe it got funding by a company that was big at the time, or maybe hardware limits meant other approaches weren't feasible back then, but are now. But, because that one approach became popular, all subsequent explorations had to resemble that popular approach.
For example, Lisp was neglected in favour of C for performance reasons, among other things, and now any popular language has to look like C even if it has worse performance than Lisp.
It makes me sad to see a brilliant idea left behind while inferior ideas are given all the attention, merely because it's different and old. On the flip side, every few years people repeat the same bad ideas, generate hype, and then fail to live up to it, because people don't know history. Like AI replacing programming, or visual programming.
Another useful source of reading is academic literature. Pretty much no one in the software industry reads academic literature anymore. These papers just take up shelf space at universities, not helping anyone, while programmers do things they've known how to do for decades. This division of labour is not useful. We should be seeing which academic ideas we can realise. It's as though academics have come up with a design for flying cars, and no one in the automobile industry reads it and tries to manufacture one. Meanwhile, academics move on and declare flying cars a solved problem.
Rich Hickey implemented a paper by Phil Bagwell to build efficient immutable data structures for Clojure, with performance characteristics comparable to ordinary mutable ones, and for some applications this resulted in performance improvements. Real applications have benefited from this (including my own).
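The key trick behind such structures is structural sharing. Here's a deliberately tiny Python illustration - binary branching instead of the wide 32-way tries Bagwell describes, and nothing like Clojure's actual implementation - showing that an "update" copies only the path to the changed slot and shares everything else with the old version:

```python
# A toy persistent "vector": a complete binary tree of nested tuples,
# indexed by the bits of the element's position.

def assoc(tree, depth, index, value):
    """Return a new tree with position `index` set to `value`,
    sharing every untouched subtree with the original."""
    if depth == 0:
        return value
    bit = (index >> (depth - 1)) & 1
    left, right = tree
    if bit == 0:
        return (assoc(left, depth - 1, index, value), right)
    return (left, assoc(right, depth - 1, index, value))

def get(tree, depth, index):
    """Read position `index` by following its bits down the tree."""
    while depth > 0:
        tree = tree[(index >> (depth - 1)) & 1]
        depth -= 1
    return tree
```

An update of a tree with 2^d leaves touches only d nodes, so old and new versions coexist cheaply - which is what makes "values, not identities" affordable in practice.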
You can also read and learn from other fields. This type of reading isn't targeted; it's general. It doesn't have an immediate payoff. But if you do it for long enough, you will find transferable ideas, and be able to solve problems with ideas from other fields.
Finally, you can read contemporary things from within your field. It seems like I should start with this, but I've put it last because I think it can limit your thinking, if not done carefully. Rather than keeping up with trends - fashion - try to understand how the currently popular things work, and how that fits in your mental framework.
Experience
In Brandon Sanderson's lectures on writing, he says that you should write 5 books before trying to publish, to learn what your style is, and develop a level of mastery to be able to achieve what you want. He also acknowledges that there are people who succeeded with their first book, but these are rare, and they often wish that they wrote more books first before succeeding, to understand their own style and be able to write better.
Applying this to software, getting good at PDD requires trying to work this way (maybe unwittingly), multiple times, to the point where you've learnt your style, you understand how to work this way, and you know how to implement what you want. You don't get this experience by working in the software industry, any more than you could become a great novelist through your experience as a scribe. We recognise the difference between a novelist and a scribe, or a great artist and someone who paints walls, but we don't have a category for this in programming, and to the extent that we do, we treat it like a pathology. The term is literally "not invented here syndrome".
As a result, this happens infrequently, by accident. I wanted to make games, but I didn't like game engines because they didn't align with the way I thought about games, so I made my own. This is not "pragmatic". We're supposed to tolerate things no matter how bad they are, and never try to change them, and thus never learn how to, and never even imagine a better way.
Rich Hickey made Clojure after he was burnt out with C++, Java, and C#, but wasn't allowed to use Common Lisp. So he took a sabbatical and made another Lisp.
In our industry "a bad craftsman blames his tools". We're supposed to suck it up and dig a tunnel from England to China with a spoon, and put that on our CVs. It takes a special sort of person to overcome this, and gain the experience needed to make good software.
There's another kind of experience that's relevant here. Although I have much more experience making things than I used to, if I enter a new domain I can still think about it wrong, and need experience in the domain to discover the right way. This entails using it, and implementing it. And sometimes this means starting over, and fleshing out the domain with what I've learnt from the previous attempt.
Say No
I've said earlier that you shouldn't be dogmatic about your philosophy. However, if you do it right you should seem dogmatic. You should compromise far less than you would in ordinary life.
If you apply PDD and find that the underlying philosophy of most software that deals with a particular area is vastly different from your own, the world has already failed to compromise with you. If you think something should be a particular way, and the world is 99% a different way, rather than 50/50, then why should you compromise your implementation? You're making a little corner of the world that matches the way you think things should be done, to at least make it possible for people who agree with you to benefit from that. The conventional way is already available. Don't dilute the availability of your philosophy more than it already is.
Evan Czaplicki is a good example of this. From what I've seen of him in his talks, he seems like a very considerate guy. Not very stubborn or imposing in interpersonal relationships. But, as far as Elm is concerned, he has a mission: using functional programming concepts to fix our frontend development woes. And although he listens to lots of people, he is uncompromising. He's gotten a lot of shit for not doing the popular open source thing of immediately implementing whatever people ask for, and for not letting people contribute. He lets issues pile up before looking at all of them and thinking of a minimal solution that addresses most problems whilst still being in the spirit of the project.
Despite this barrage of indignation, against someone who has dedicated his adult life towards providing a free gift to programmers, a decade later his firmness has paid off. People were asking for jQuery integrations, and he refused. Now there is a vibrant Elm ecosystem, because people had to write things from scratch in Elm, since interop with JavaScript was limited. This is one of the things I prefer about Elm over Clojure. Clojure, by making interop a core value, hasn't resulted in an ecosystem of pure Clojure, so you inherit many of the drawbacks of the JS and JVM ecosystems.
Even seemingly small compromises eliminate the benefits of your approach.
Consider hybrid working. Even just having to work 1 day a week in the office - 20% of the work week - eliminates one of the main benefits of remote work: not having to live within commuting distance of the office. This seemingly small concession now restricts you to local talent. If, like me, you believe that people with the right sorts of skills are rare because it relies on rare abilities that aren't cultivated in the industry, that are denigrated even, then sticking to local talent is terrible. Hybrid work is also a worse version of office work, going to the office just to have zoom meetings.
Or standups. Even if they're "just 15 minutes", 45 minutes after work starts, they sabotage your productivity. There's not enough time before standup to do anything serious, and there's not enough time between standup and lunch. And if you've done fuck all in the morning, you're unlikely to be productive in the afternoon. When I've worked without standups, and all the other seemingly small scrum concessions, I can enter a state of flow where I do serious work day after day.
Or functional programming. Once you make it possible to call an impure function, it contaminates all the functions that call it. Now a whole bunch of optimisations don't work. Now Elm's time travel debugger is no longer possible.
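A small Python illustration of why impurity spreads transitively (the functions here are made up for the example):

```python
import functools

@functools.lru_cache
def pure_area(w, h):
    # Safe to cache: the output depends only on the arguments.
    return w * h

_calls = 0
def impure_count():
    # Reads and writes outside state: not safe to cache.
    global _calls
    _calls += 1
    return _calls

def report(w, h):
    # Because `report` calls `impure_count`, `report` is impure too:
    # caching it would freeze the counter. The impurity contaminates
    # every caller, transitively, which is why one escape hatch
    # forecloses whole-program guarantees.
    return (pure_area(w, h), impure_count())
```

Calling `report(2, 3)` twice gives different results for the same inputs, so no caller anywhere up the chain can be cached, replayed, or time-travelled.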
To people who aren't familiar with your philosophy, and who are used to "The Wrong Way", these seem like small asks. But they can be massively harmful. They can lead to the worst of both worlds: trying to unify two fundamentally incompatible things, rather than embracing one thing and keeping everything consistent within that framework.
It's especially important to protect your project in the early stages. Once it matures, and people start to understand it, it becomes more resistant to significant changes that go against its philosophy. But, early on, if you make it public and let lots of people contribute, there's immense pressure to turn it into what they're used to. If you start working on an open source text editor, there will be pressure to turn it into VSCode.
All of this is moot if you don't have the power to say no. You don't have that power if you're employed, or if you're VC funded - having given up significant equity, needing to do lots of work, and being nowhere near profitable.
To be able to say no, the project should either be a hobby project, or bootstrapped, or receive minimal funding late into development, giving up minimal equity.
If your project is too unfamiliar, you might face difficulty getting it adopted, and that can stifle its impact. If it were a bit more familiar - halfway between your ideal and the current world - maybe people would have an easier time learning it, and it could become a gateway drug to the full ideal. But sometimes this backfires. People may not see the need to switch if it seems too familiar, and when it's familiar in some ways, they will expect the rest of it to be too, the same way that our minds auto-correct spelling. A compromised system that is more complicated can also be harder to learn. Sometimes shock therapy with a simple system is the way to go.
When you do compromise, it's only to the extent needed to get your idea going. To not lose lots of money. Not to be a bean counter.
Brandon Sanderson says that when you're writing a book, the artist should be solidly in charge. After the book is written, you have to find a different person inside of you: a maniacal businessman who runs off with the manuscript and won't stop until it's sold. The job of the businessman is to sell what the artist made, not to decide what the artist should do.
We've all watched movies like this, where the script was clearly changed because focus groups in current_year prefer demographic_x and setting_y. The memorable movies aren't like that; they've stood the test of time by being unmoored from contemporary fashions.
Caring about Beauty
I'm not talking about how slick the UI looks, or whether Uncle Bob approves. I mean an appreciation for conceptual elegance, of the sort that mathematicians and physicists talk about. That's the analogy I use because I'm a STEM guy, but it's also true of the arts: not how pretty a book cover is, but how wonderful the experience of reading is - the ideas, the plot, the worldbuilding. People who are indifferent - mercenaries, pragmatists without a nose, who will do whatever as long as "it works" - can't work this way.
They will often say that all this talk about beauty is impractical, and costly for a business. Yet over the last few years, with "impractical languages", I've been able to produce better and more cheaply what these "practical organisations" never could.
As Alan Kay puts it, properly understood beauty and pragmatism aren't in conflict:
... there has to be some exquisite blend between beauty and practicality. There's no reason to sacrifice either one of those, and people who are willing to sacrifice either one of those, I don't think really get what computing is all about. It's like saying I have really great ideas for paintings, but I'm just gonna use a brush but no paint. So my ideas will be represented by the gestures I make over the paper - Alan Kay
It's a false conflict, and it makes the "pragmatists" feel better. Maybe they're not as creative, but at least they get things done. But that's false. The shit polishers, digging from England to China with a spoon, aren't more productive than someone who buys/builds good tunneling machines.
Some people are wired that way, and PDD is not for them. Neither is great software. But it's not always easy to tell whether you're wired that way before you've tried working like this and put effort into developing your taste.
Start With The Core
Projects have a core - a chassis, an engine, something that everything else runs on but that isn't directly visible to users. Everything else is an elaboration on the core. That elaboration matters, but it comes later. It's easier to change the surface once the core is correct, but it's difficult to swap out the core once you've put a lot of work into the surface. And if you have a bad core, no amount of surface work will make your project good.
If you have a lot of surface, for any small core change you have to make a lot of surface changes. This makes it much harder to change the core.
This is why it's important to leave the surface stuff until the core is good. Before spending lots of time on styling or endpoints or whatever the surface of your system is, focus on the language of the system. The domain, the metaphors, how the entities relate. The bulk of the work you do at first should not be about "user stories".
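As a small illustration of what "working on the language of the system" can look like (the domain and all names here are hypothetical, purely for the sake of example), the core might start as nothing but domain types and the relations between them - no endpoints, no styling:

```haskell
-- Hypothetical core for a note-taking tool: the entities and how
-- they relate, written down before any surface exists.
newtype Tag = Tag String deriving (Eq, Show)

data Note = Note
  { noteTitle :: String
  , noteBody  :: String
  , noteTags  :: [Tag]
  } deriving (Show)

-- Core operations are pure functions over the domain. Every later
-- surface feature elaborates on these, which is why getting them
-- right comes first.
tagged :: Tag -> [Note] -> [Note]
tagged t = filter (\n -> t `elem` noteTags n)

retag :: Tag -> Tag -> Note -> Note
retag old new n =
  n { noteTags = map (\t -> if t == old then new else t) (noteTags n) }
```

Only once this layer feels right does it make sense to hang storage, endpoints, and UI off of it.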
This is also why making something and getting it to market within 3 months is a bad idea.
And it might help you think about time spans that are longer than the three months that were mentioned by the speaker - I found that laughable, absolutely laughable. But, of course, you can do something in three months. I mean, what if you want to play a Mozart concerto? Yeah, I'll make that a three-month project. Why not?
On a three-month schedule you don't have time to work on the core; you have to work on the surface right away, and then you're stuck with a bad core that you can't fix.
Maximum Control, And Responsibility
If you go for the conventional route - a Minimum Viable Product, third party libraries for most things, rarely reinventing the wheel - you might be able to get something to market quickly, but in a sense you're just making more of what's already on the market. You're limited by the configurations those libraries provide, which are just settings on their existing philosophies - ways to select which quadrant of soup you want.
If you're looking for a very different take from what's available, you'll need to write your own things. The first wheel was used for carts and chariots, and is completely unsuitable for modern cars. If you want to make a car, you literally need to reinvent the wheel. You don't toggle and edit the original wheel.
This means you need to understand a lot more implementation than someone who merely glues together libraries. There are some wheels I can't reinvent because their internals are black boxes to me, but it's an explicit goal for my own maturity as a programmer to expand what can be seen by the light of my comprehension, so that I have more control and power to change things.
Good Methodology
As I described in another essay, agile obliterates good software. Many of the things required to implement PDD are not possible under modern software methodology, so not doing agile goes a long way towards making PDD possible.
It's also philosophically at odds with PDD. The underlying idea behind agile is that nothing is sacred, and that you throw shit at the wall and see what sticks. Philosophy Driven Design is primarily about designing software by thinking through the underlying ideas. You can correct those ideas over time, but correction isn't the engine. As Rich Hickey puts it when talking about TDD, railings are useful when you're driving, but you shouldn't rely on them and drive blindly, bumping into them all the time, hoping that they'll lead you to where you need to go. And Test Driven Dentistry is even more hilarious.
I'm still working on formulating the methodology behind the philosophically interesting projects that I and others have worked on, and trying to make it the best that it can be. But until then, here are some things that make for a good methodology:
No iterations
At least none that are arbitrarily imposed, in the early stages of your project (when you're in Project Mode, so to speak). Things take as long as they take, and you will informally have cycles as a result of the parts of your project, but it's driven by need rather than a schedule. You'll cycle between spending time in the hammock, then lots of programming, and maybe some back and forth between the hammock and the code, until you've done The Right Thing.
Once you've done that, you can then think about the next sub-project you're taking on. You can also start to use what you've just done more and test it out.
No Deadlines
To many agile people, critiquing agile and calling for thinking means advocating Waterfall. But that's not the case. The point isn't to decide precisely what functions to write ahead of time, estimate them all, and work on that cadence. The point is to figure out the big ideas ahead of time, refine their execution, and keep thinking. Estimates and deadlines are beside the point, whether they're made over the course of a year or every sprint. Who cares how accurately we can estimate what we can do in 2 weeks, if we never do anything meaningful? Design takes as long as it takes.
No Shredding
Rather than cutting things into itty bitty tickets and context switching between them, group things into related subprojects, and work on one subproject for an extended period of time. Make it so that you can think about one subproject at a time, even as you work on different parts of it. Do this for as long as you can stay motivated about that particular subproject. This results in a situation where you are thinking about the same sorts of things all the time, on walks, in the shower, etc. Rather than for 2 days a sprint or whatever. Getting to this point is crucial to make good steps in your thinking.
Conclusion
There is a way of making elegant, powerful, useful, and well designed software. It takes very few people. It involves thinking, discovering a philosophy, and executing it. It's not easy to do inside the industry, but it is something you can train. It's something that I did accidentally, and I have only recently noticed it in myself and in some others I respect, such as Evan Czaplicki and Rich Hickey (and the early days of Linus Torvalds that led him to make Git).
There's a lot more that goes into making good software, but philosophy driven design is a large component.