Most strategic developments have been about incrementing beyond whatever the other side is doing. Sometimes there are paradigm shifts, which largely mean a different dimension along which to make incremental improvements. But we cannot increment forever; sometimes there is a well-understood limit we cannot surpass.

Energy-Maneuverability theory is a paradigm for designing air superiority fighters. Though the paradigm shifted away from the old speed/altitude/turn metrics, we remained constrained by the ability of the human pilot to withstand g forces. We have already built aircraft which can climb higher, accelerate faster, and turn more sharply than pilots can tolerate without passing out. It may even be possible to design an almost-perfect manned fighter, in which the pilot is the binding constraint in every dimension of performance. But now we have unmanned drones.

Natural law provides a variety of limits, like the speed of light or the increase of entropy, and we have a good command of natural law at the scale where war machines operate. It seems like it would be a good policy to adopt these as the constraints on strategy-space, and map what we know about our opponents onto them. This would have the benefit of telling us how much room there even is for incremental improvements, and give us some indication of where we are vulnerable to (or have an opportunity to create) a paradigm shift. Since most of these natural limits are well known, and most dimensions of strategy don't have something obvious like c, there isn't an obvious motivation for this. But it feels to me like even something as conceptually straightforward as operations research or mathematical programming would work for it.
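To make that last point concrete, here is a minimal sketch of the kind of thing I mean, in Python with scipy.optimize.linprog. Every number and variable name below is invented purely for illustration (this is not a real aircraft model); the point is only that once the physics-derived ceiling and the contingent constraints (like pilot g-tolerance) are written down explicitly, the remaining headroom for incremental improvement falls straight out of the optimization:

```python
# Toy sketch: treat fighter design as a tiny linear program and compare how
# much "headroom" a contingent constraint (the pilot) leaves versus the
# physics-derived ceiling. All coefficients are made up for illustration.
from scipy.optimize import linprog

# Decision variables: x = [sustained_g, climb_rate_scaled]
# Objective: maximize a crude "maneuverability score"
# (linprog minimizes, so the coefficients are negated).
c = [-1.0, -0.5]

# Constraint 1: structural/aerodynamic limit (stand-in for the hard ceiling):
#   1.0*g + 0.8*climb <= 12
# Constraint 2: pilot tolerance (a contingent, removable constraint):
#   g <= 9
A_ub = [[1.0, 0.8],
        [1.0, 0.0]]
b_ub_manned = [12.0, 9.0]
b_ub_drone = [12.0, 1e9]   # drop the pilot constraint by making it slack

bounds = [(0, None), (0, None)]

manned = linprog(c, A_ub=A_ub, b_ub=b_ub_manned, bounds=bounds)
drone = linprog(c, A_ub=A_ub, b_ub=b_ub_drone, bounds=bounds)

# The gap between the two optima is the room for improvement that removing
# the contingent constraint opens up.
print("manned optimum:", -manned.fun)
print("drone optimum: ", -drone.fun)
```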
I’m not sure I understand your recommendation. You talk about the pilot as a constraint and the obvious removal of that constraint (unmanned fighters). This is the opposite of a natural law: it’s an assumed constraint, or a constraint within a model, not a natural law.
I think "We have a good command of natural law at the scale where war machines operate" is exactly the opposite of what I believe. We have some hints as to natural law at those scales, but we’re nowhere near those constraints. There are a huge number of contingent constraints in our technology and in our modeling of the problem, which can very likely be overcome with effort. [edit after re-reading] Do you mean "only natural laws should be explicit constraints"? You’re recommending that if we think we’re constrained and can’t identify the natural law that’s binding, the constraint is probably imaginary, or contingent on something else we should examine?
I find it very unlikely that it’s useful for warplane designers to think about how c constrains their design space, or how their opponents are constrained by c. It’s too far from practical considerations.
I am sympathetic to this feeling, but as it happens *c* pops up almost immediately because of communication and targeting requirements. Radios, radar, laser guidance, and various kinds of telemetry all have to account for the speed of light (in air, at least) explicitly in their operation.
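For a rough sense of scale, here is a back-of-the-envelope sketch (my own illustrative numbers, not from any real system): a radar return from a target at range d cannot arrive sooner than 2d/c, so the reported position is always at least that stale, and a fast target moves a measurable distance in the meantime.

```python
# Minimal sketch of how c enters targeting directly: the round-trip delay of
# a radar pulse puts a hard floor on how fresh the target's position can be.
# Numbers are illustrative only.
C = 299_792_458.0  # speed of light in vacuum, m/s (in air it is ~0.03% slower)

def radar_staleness(range_m: float, target_speed_ms: float):
    """Return (round-trip delay in seconds, distance the target moves in that time)."""
    delay = 2.0 * range_m / C
    return delay, delay * target_speed_ms

# ~Mach 2 target (about 680 m/s) at 150 km
delay, drift = radar_staleness(range_m=150_000.0, target_speed_ms=680.0)
print(f"round-trip delay: {delay * 1e6:.0f} microseconds")
print(f"target has moved: {drift:.2f} m before the echo even returns")
```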
This exchange was helpful to me.