Why Is Optimization Everywhere? (And Why Should You Care)
- Samuel Fernández Lorenzo

- Jan 7
Let me start with a question: how many decisions have you made today? From which route to take to work, what to have for breakfast, how much time to dedicate to each task... All these choices, however mundane they may seem, have something in common: you're trying to optimize something. Perhaps minimize wasted time, maximize your energy, or find the best balance between effort and result.
Optimization isn't just an abstract mathematical concept reserved for engineers and scientists. It is, in essence, the rational way of expressing something deeply human: our constant desire and need to choose well among multiple possibilities. And yes, even if math isn't your thing, optimization is already part of your daily life, whether you like it or not.
And even if you know what it means to optimize mathematically, it's very likely that no one has told you why optimization appears literally everywhere, or explained its natural limits, which are crucial for understanding where we can and cannot apply it successfully. In fact, I'll bet you a beer that you hold at least one misconception about it.
Optimization: The Everyday Art of Choosing Well
Think about the decisions we make constantly: we want to minimize time in traffic, maximize the quality of what we buy for the money we spend, minimize the side effects of a medication, maximize our company's profits. Every time we use the words "maximize" or "minimize," we're implicitly posing an optimization problem.
Maximizing returns and minimizing risks are the rational principles that guide our economic decisions. But optimization goes much further: it's present in science, engineering, finance, and even, as we'll see, in artificial intelligence.
Our life is, at its core, a great succession of choices. And optimizing is simply the rational expression of our desire to choose as well as possible.
When Intuition Isn't Enough: The Need for Mathematics
So why do we need a mathematical representation of these problems? Couldn't we simply trust our intuition?
The answer is that, as the number of variables, objectives, and constraints increases, the problem of choice becomes dizzyingly complex. Imagine you're the director of a company that produces several products. You have to decide how much to produce of each one, considering material costs, production time, market demand, your factory's limitations, available personnel... There are many possibilities, and our mind isn't equipped to deal with all of them.
In scientific, technological, and industrial problems, we cannot rely solely on intuition. We need to make decisions grounded in reason, with an approach that is scientific, well founded, rigorous, and ready to be turned into products. This is where mathematical representation becomes indispensable.
Mathematically, an optimization problem is defined by two main components: an objective function (the quantity we want to maximize or minimize) and a set of constraints (the limitations we must respect). For example, we might want to maximize a company's profit (objective function) subject to budget, production capacity, and resource availability constraints.
Objective functions are represented by mathematical functions, and constraints are expressed through equations or inequalities. In both cases, what we're doing is simply expressing precisely a relationship. The concept of relationship is one of the metaphysical pillars of existence itself, and if you want to delve deeper into this, I recommend reading section 1.4 of Everything I Can Imagine: The Algorithm of Understanding.
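To make this concrete, here is a toy production-planning problem in the spirit of the factory example above. The products, profits, and resource limits are invented for illustration, and the grid of integer plans is small enough to search exhaustively:

```python
# Hypothetical example: maximize profit = 40*a + 30*b   (objective function)
# subject to          2*a + b <= 100   (machine hours)
#                     a + b   <= 80    (materials)
# With only two integer variables, brute force over all feasible plans works.
best = max(
    ((a, b) for a in range(81) for b in range(81)
     if 2 * a + b <= 100 and a + b <= 80),
    key=lambda plan: 40 * plan[0] + 30 * plan[1],
)
print(best, 40 * best[0] + 30 * best[1])   # (20, 60) 2600
```

Even in this tiny case the optimum (20 units of A, 60 of B) is not obvious by eye; with dozens of products and constraints, only an algorithmic search can find it.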
This type of mathematical representation allows us something extremely important today: to apply algorithmic and computational methods. Computers can explore that sea of possibilities and discover optimal solutions that a human mind would never reach.
A classic example is the traveling salesman problem (TSP). Imagine that a salesperson must visit a set of cities exactly once each and return to the starting point, minimizing the total distance traveled. Sounds simple, right? But here's the challenge: the number of possible routes grows factorially with the number of cities. For 10 cities there are already over 180,000 distinct routes; for 20 cities, more than 60 quadrillion; and by just over 60 cities the count surpasses the number of atoms in the observable universe (roughly 10^80). Without a mathematical representation and sophisticated computational algorithms, checking all those possibilities would be hopeless for a human mind, or indeed for any computer.
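A tiny brute-force sketch makes both the formulation and the blow-up tangible. The five city coordinates below are made up for illustration; with the start fixed, there are only 4! = 24 candidate tours here, but that count becomes (n-1)! for n cities:

```python
import itertools
import math

# Hypothetical city coordinates; "A" is the fixed starting city.
cities = {"A": (0, 0), "B": (1, 5), "C": (5, 2), "D": (6, 6), "E": (8, 0)}

def tour_length(order):
    """Total length of the closed tour A -> order... -> A."""
    path = ("A",) + order + ("A",)
    return sum(math.dist(cities[u], cities[v])
               for u, v in zip(path, path[1:]))

others = [c for c in cities if c != "A"]
# Exhaustive search: feasible only because (n-1)! is tiny for n = 5.
best = min(itertools.permutations(others), key=tour_length)
print(best, round(tour_length(best), 2))
```

Replace the five cities with twenty and this loop would need more than 10^17 iterations, which is exactly why TSP research revolves around cleverer algorithms than enumeration.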
The Hidden Beauty: Natural Laws as Optimization
When one is willing to use imagination, optimization hides truly fascinating surprises. Did you know that many of the fundamental physical principles that govern our universe can be rewritten as optimization problems?
Take Newton's equations, those we use to design bridges or calculate spacecraft trajectories. It turns out they can be expressed as the minimization of an abstract quantity derived from energy called "action." The action is calculated by integrating over time the difference between the system's kinetic energy and potential energy. This formulation is known as Hamilton's principle of stationary action, and it is considered one of the most elegant results in all of physics.
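In symbols, for a trajectory q(t) with kinetic energy T and potential energy V, the action is

```latex
S[q] \;=\; \int_{t_1}^{t_2} \bigl( T - V \bigr)\, dt ,
```

and Hamilton's principle states that the trajectory nature actually follows makes the action stationary, \( \delta S = 0 \); Newton's equations fall out as the condition for that stationarity.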
This perspective opens the door to variational calculus, a mathematical discipline where we don't seek optimal values of variables, but optimal functions. The classic example is the brachistochrone problem, posed in 1696: what is the curve along which a body descends in the shortest possible time between two points under the influence of gravity? The answer is a cycloid (an inverted arch of the curve traced by a point on the rim of a rolling wheel). A close visual cousin, also born of a variational problem, is the catenary, the curve adopted by a hanging rope. Thanks to its magnificent structural properties, this shape appears in constructions like the Gateway Arch in St. Louis, a colossal inverted catenary 192 meters tall.

Artificial Intelligence: Learning by Optimizing
Another connection, largely unknown outside technical circles, is the one with artificial intelligence. When an AI algorithm "learns" from data, what it's really doing is solving an optimization problem!
The process works like this: we start with data (concrete instances) and want to build a model that captures the underlying patterns. The programmer introduces an ansatz, that is, a family of possible models within which to search. For example, we could assume that the relationship between variables is linear, or we could contemplate more complex models like neural networks.
The trick is to formulate learning as an optimization problem: the best model is the one that minimizes a certain cost function defined over the space of models. This function measures how well the model explains the observed data. The algorithm then searches for the minimum point of this function, and that point corresponds to the optimal model.
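A minimal sketch of that loop, assuming the simplest possible ansatz (a line y = w*x + b) and invented data, with mean squared error as the cost function and plain gradient descent as the search algorithm:

```python
# Synthetic data generated from the "true" model y = 2x + 1.
data = [(x, 2.0 * x + 1.0) for x in range(10)]

w, b, lr = 0.0, 0.0, 0.01          # initial model and learning rate
n = len(data)
for _ in range(5000):
    # Gradients of the cost  MSE = mean((w*x + b - y)^2)  w.r.t. w and b.
    grad_w = sum(2 * (w * x + b - y) * x for x, y in data) / n
    grad_b = sum(2 * (w * x + b - y) for x, y in data) / n
    # Step downhill: move the model toward the minimum of the cost.
    w -= lr * grad_w
    b -= lr * grad_b

print(round(w, 2), round(b, 2))    # close to the true values 2 and 1
```

The same recipe, with a vastly more complex ansatz (a neural network) and a richer cost function, is what "training" means in modern machine learning.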
Thus, when an AI system learns to detect banking fraud or recognize faces in photographs, what it's doing under the hood is solving a sophisticated optimization problem.
Why Does Optimization Appear Everywhere?
Why does optimization appear in so many different contexts?
The most elementary answer is that any structured problem is, at its core, a search problem: we have to locate one (or several) special possibilities within a large set of possibilities. And that's precisely what we do in optimization: distinguish the optimal points (minima or maxima) from the rest.

Points can be generalized to more complex shapes. It's enormously beautiful to associate geometric shapes with solutions to optimization problems, as we discussed earlier with the catenary. Yet this phenomenon ceases to be surprising once one understands the essence of optimization: a geometric shape is nothing more than one possibility among others that we can draw within a geometric space, and it is therefore susceptible to being represented as the maximum or minimum of an optimization problem (only a more abstract one).
Mathematical optimization provides us with the rigorous framework to rationally confront the problem of choosing among a large number of possibilities. Hence its ubiquity: humans constantly reflect on what decision to make to navigate the complexity of the world.
It can be stated that any structured problem can be rewritten as an optimization problem.
This has enormous repercussions in industry.
Any industrial process defined by predetermined operations and quantitative productive metrics can be mathematically optimized.
Reflect for a minute on this statement to fully digest it. When you do, you'll understand that any progress in optimization technologies has an enormous capacity for technological disruption.
Even problems that don't seem like optimization at first glance can be reformulated as such. A sudoku? It can also be posed as an optimization problem! Consequently, we could say that any structured problem can be rewritten as an optimization problem. Quite another thing is whether we're capable of finding that formulation, or whether it's useful to do so in practice...
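As a sketch of that reformulation, here is a "violation count" cost for a 4x4 mini-sudoku (the grids are invented for illustration). A grid solves the puzzle exactly when this cost reaches its minimum value of zero, which turns a pure constraint-satisfaction puzzle into a minimization problem:

```python
def violations(grid):
    """Count duplicated digits across the rows, columns, and 2x2 boxes
    of a 4x4 sudoku grid. A valid solution has a cost of exactly 0."""
    n = 4
    units = []
    units += [[grid[r][c] for c in range(n)] for r in range(n)]   # rows
    units += [[grid[r][c] for r in range(n)] for c in range(n)]   # columns
    units += [[grid[r + i][c + j] for i in range(2) for j in range(2)]
              for r in (0, 2) for c in (0, 2)]                    # 2x2 boxes
    return sum(len(unit) - len(set(unit)) for unit in units)

solved = [[1, 2, 3, 4],
          [3, 4, 1, 2],
          [2, 1, 4, 3],
          [4, 3, 2, 1]]
broken = [row[:] for row in solved]
broken[0][0] = 4                    # introduce a clash
print(violations(solved), violations(broken))   # 0 3
```

Minimizing this cost over all candidate grids is, of course, a terrible way to actually solve sudoku; the point is only that the reformulation exists, which is precisely the caveat in the paragraph above.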
The Limits of Mathematical Optimization
But it's not all roses. Mathematical optimization, however powerful it may be, has its natural limits. It works extraordinarily well in structured problems: those where we can clearly define the system's variables, the relationships that govern it, and the actions we can perform on it.
But such structure isn't always available, and this represents a fundamental obstacle that many practitioners in current artificial intelligence seem to overlook. That, however, is a topic for the next post. For now, I hope I've convinced you that optimization, far from being an abstract and distant concept, is a constant companion in our daily journey through life. Every time you choose rationally, you're optimizing. The difference is that now, perhaps, you're a little more aware of it.
Want to know more? Take a look here.


