This has surely been noted before: if we design a cone-like system, where the tip holds the logic that hinges the sub-systems together and drives the system, reusability is lowest at that tip. This is exactly what good design aims for: move 90%-99% of the work into sub-systems and keep 1-10% of it at the tip. That is what earns your code the label of being 90%-99% reusable, or at least easily replaced.
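The cone shape can be sketched in a few lines. This is a minimal illustration, not anything from the original argument: the function and module names below are hypothetical, chosen only to show a thin, non-reusable tip driving generic sub-systems.

```python
# Hypothetical sketch of the "cone" shape: generic, reusable
# subsystems at the base, and a thin application-specific tip
# that only wires them together and drives them.

def parse(raw: str) -> list[int]:
    """Reusable subsystem: knows nothing about the application."""
    return [int(tok) for tok in raw.split(",")]

def summarize(values: list[int]) -> dict:
    """Another reusable subsystem."""
    return {"count": len(values), "total": sum(values)}

def main(raw: str) -> dict:
    """The tip: the least reusable code, deliberately kept tiny.

    It only hinges the subsystems together; all the real work
    lives in the reusable base.
    """
    return summarize(parse(raw))

print(main("1,2,3"))  # prints {'count': 3, 'total': 6}
```

Swapping the application means rewriting only `main`; `parse` and `summarize` carry over untouched, which is the 90%-99% figure in practice.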

Abstract that further: break the problem up into n parts. By the argument Dijkstra presented, it is now that much easier to get each sub-part of the problem completely right. Then apply our cone system to each of the sub-parts.

Avoid the overhead of each division by conforming to specifications. You now have a system that is much larger and still obeys the cone-reuse rule above. The design fails when you introduce coupling between divisions, pushing the non-reusable portion of your system above 10%. My argument is flawed iff reusability does not require each and every sub-component to be reusable (i.e. I am arguing for finer granularity).
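"Conforming to specifications" can be made concrete with structural interfaces: each division couples to a contract rather than to another division's implementation. The names below (`Storage`, `InMemoryStorage`, `tip`) are illustrative assumptions, not part of the original text.

```python
from typing import Protocol

# Hedged sketch: divisions communicate only through a specification
# (here, a structural Protocol), so no division depends on another
# division's internals. All names are hypothetical.

class Storage(Protocol):
    def save(self, key: str, value: str) -> None: ...
    def load(self, key: str) -> str: ...

class InMemoryStorage:
    """One division conforming to the Storage specification."""
    def __init__(self) -> None:
        self._data: dict[str, str] = {}

    def save(self, key: str, value: str) -> None:
        self._data[key] = value

    def load(self, key: str) -> str:
        return self._data[key]

def tip(store: Storage) -> str:
    """This sub-part's cone tip: driving code only, coupled to the
    specification, never to InMemoryStorage itself."""
    store.save("greeting", "hello")
    return store.load("greeting")

print(tip(InMemoryStorage()))  # prints hello
```

Any other division that satisfies `Storage` can replace `InMemoryStorage` without touching `tip`, which is what keeps the non-reusable fraction from creeping above the tip.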

I think I have drifted into hybrid territory here (functional versus data-driven architecture), since we now observe a tree-like pattern forming all over again. This pattern is usually seen in functional systems, but can the converse be true too?

Note that Dijkstra also warns that the probability of solving the problem approaches zero as the number of sub-divisions increases; he never stated that it will grow. In fact, the probability of solving the problem is 100% if you take on the whole problem and can guarantee that the monolithic solution will work (which is the tough part in the first place).
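The warning above has a simple arithmetic form. Under the (assumed) model that each of n sub-divisions is solved correctly with independent probability p < 1, the whole is correct with probability p**n, which shrinks toward zero as n grows:

```python
# Worked sketch of the probability argument: independent sub-parts,
# each correct with probability p, give p**n for the whole system.

def p_whole(p: float, n: int) -> float:
    """Probability the whole system is correct, assuming independence."""
    return p ** n

for n in (1, 10, 100):
    print(n, round(p_whole(0.99, n), 3))
# prints:
# 1 0.99
# 10 0.904
# 100 0.366
```

Even with each part 99% likely to be right, a hundred parts leave the whole correct barely a third of the time, which is exactly why conforming to specifications (rather than hoping each division is perfect in isolation) matters.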

Breadth is always better in any solution.