An intelligent system, whether a creature, a community, or a machine, must grapple with an overwhelming flood of information. The secret to navigating this deluge lies in discerning what truly matters. This essay expands on the idea that intelligence is fundamentally linked to "saliency"—the ability to identify and focus on the features of the world that are crucial for survival and success.
1. A New Picture of Intelligence
The psychologist and philosopher William James offered a concise and powerful definition of intelligence, suggesting it is the capacity "to adapt one's behavior to the requirements of the moment." This simple statement implies a critical prerequisite: an organism must first understand what the moment requires. To adapt, one must know what to adapt to. Building on this foundation, we can propose a more explicit definition:
Intelligence is the power of a system to maintain its own integrity and pursue its goals by continuously improving its model of the world. This is only possible when the system can distinguish what is worth caring about.
This critical element, the "what is worth caring about," is what we term saliency. The effectiveness of any intelligent system is therefore not just a matter of processing power, but of the precision and adaptability of its "saliency map." A more intelligent entity possesses a sharper, more dynamic understanding of what is important and can rapidly update this map when circumstances change.
2. What Exactly Is Saliency?
Imagine a walk through a dense forest. Your senses are bombarded with data—the rustling of every leaf, the chirping of distant birds, the texture of the bark on each tree. Yet, you remain oblivious to most of it. Your attention is instead captured by a select few signals: the dangerously slippery moss on a rock ahead, the distinct and alarming rattle of a nearby snake, or the sudden darkness cast by gathering storm clouds.
These signals are salient. Failing to notice them could have immediate and severe consequences for your well-being. Saliency, therefore, is an attentional mechanism that allows organisms to focus their limited cognitive resources on the most pertinent sensory data to facilitate learning and survival. It often arises from the contrast between an object and its immediate surroundings.
More formally, a feature of the environment is salient if a change in that feature would significantly impact the system's future prospects. This concept can be understood through three complementary lenses:
The first lens is the Utility Gradient, which asks the core question: “If this variable changes, how much will my potential for success change?” Things become salient when they have a major impact on a desired outcome. For instance, a driver pays little attention to the fuel gauge when the tank is full, but watches it obsessively as it approaches empty. Each small drop in the needle becomes highly salient because it now dramatically affects the chances of reaching the destination.
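To make this lens concrete, here is a minimal Python sketch of the fuel-gauge example. Everything in it is invented for illustration (the trip model, its 15% threshold, the step size); the point is only that saliency can be estimated as the magnitude of the change in expected utility per unit change in the variable.

```python
# Utility-gradient saliency, sketched: how sensitive is expected utility
# to a small change in one variable? The fuel model below is a toy
# assumption, not a real vehicle API.

def trip_success_probability(fuel_fraction: float) -> float:
    """Hypothetical expected utility: the chance of reaching the destination."""
    needed = 0.15  # assume the trip consumes 15% of a tank
    if fuel_fraction >= needed:
        return 1.0
    return fuel_fraction / needed  # utility degrades sharply near empty

def saliency(utility, x: float, eps: float = 0.01) -> float:
    """Finite-difference estimate of |dU/dx|."""
    return abs(utility(x + eps) - utility(x - eps)) / (2 * eps)

print(saliency(trip_success_probability, 0.90))  # full tank: 0.0 (not salient)
print(saliency(trip_success_probability, 0.05))  # near empty: ~6.7 (highly salient)
```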
The second perspective is Uncertainty Reduction, sometimes called the Free-Energy Drop. This lens poses the question: “How much 'surprise' will be eliminated if I gather information about this?” A feature is salient if it resolves ambiguity about the immediate future. A hiker noticing dark clouds on the horizon gains crucial information, reducing uncertainty about the likelihood of rain. The color and movement of the clouds become highly salient in that moment. This idea aligns with prominent theories like the Free-Energy Principle, which posits that living systems act to minimize the gap between their expectations and sensory inputs.
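The same hedged treatment works for the uncertainty lens: score an observation by its information gain, the drop in entropy about the outcome the agent cares about. The probabilities below are invented for the hiker example.

```python
import math

def entropy(probabilities) -> float:
    """Shannon entropy in bits."""
    return -sum(p * math.log2(p) for p in probabilities if p > 0)

# Hiker's belief about rain before looking at the sky: a coin flip.
prior = [0.5, 0.5]

# Assumed belief after seeing dark, fast-moving clouds: 90% rain.
posterior = [0.9, 0.1]

information_gain = entropy(prior) - entropy(posterior)
print(f"uncertainty resolved: {information_gain:.2f} bits")  # ~0.53 bits
```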
Finally, the third lens is Empowerment Gain, which considers: “How many new actions or options become available to me if I can control this?” Saliency is tied to an agent's potential to influence its environment. For example, a child who learns that a chair can be moved to reach a cookie jar has discovered the high saliency of the chair's position. The chair is no longer just an object but a tool that expands the child's capacity to act on the world and achieve goals. This concept is formalized in AI as "empowerment," a measure of an agent's potential to influence its future states.
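And a toy rendering of the empowerment lens, under a simplifying assumption: for deterministic dynamics, the channel-capacity definition of empowerment reduces to the logarithm of the number of distinct reachable states. The little world below (chair, counter, cookie jar) is entirely made up.

```python
import math

def reachable_states(can_move_chair: bool) -> set[str]:
    states = {"floor", "table"}              # places the child can already reach
    if can_move_chair:
        states |= {"counter", "cookie_jar"}  # the chair unlocks new options
    return states

for knows_chair_trick in (False, True):
    n = len(reachable_states(knows_chair_trick))
    print(knows_chair_trick, f"-> empowerment = log2({n}) = {math.log2(n):.1f} bits")
```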
From these examples, it's clear that saliency is not an inherent property of an object but a dynamic relationship between a system and its environment. It is fluid, shifting with goals, context, and internal states. The aroma of food, for instance, is intensely salient to a hungry person but may go completely unnoticed by someone who has just eaten.
3. Intelligence Rides on Saliency
Any mind, whether biological or artificial, has finite processing capacity. Therefore, the choices it makes about what to pay attention to, what to remember, and what to act upon are the very embodiment of its intelligence in action.
The Thermostat: Its world is incredibly simple. The only salient variable is the ambient temperature. When this single piece of data deviates from a set point, the system acts. Its intelligence is real but confined to this narrow slice of reality.
The Self-Driving Car: For this system, the edges of the road, the velocity of surrounding vehicles, and the probable intentions of a pedestrian are all highly salient. A plastic bag drifting in the wind is typically ignored as noise. However, if that same fluttering object is at the height of a child and near a crosswalk, its saliency skyrockets, prompting the car to brake. The car's intelligence improves as it becomes more adept at refining this complex and context-dependent saliency map (a toy sketch of this context shift follows these examples).
The Immune System: A T-cell navigates a world of microscopic signals. Its primary task is to distinguish between protein fragments that belong to the "self" and those that signal a foreign "invader." A misplaced saliency marker here can be catastrophic. In autoimmune diseases, the system's intelligence breaks down as it incorrectly flags friendly cells as threats.
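To see how such a context shift might look in code, here is a toy scoring function for the fluttering-object case. Every feature, weight, and threshold is invented; a real perception stack would learn these from data rather than hard-code them.

```python
def object_saliency(height_m: float, near_crosswalk: bool,
                    looks_like_plastic_bag: bool) -> float:
    """Toy context-dependent saliency score in [0, 1]."""
    score = 0.1 if looks_like_plastic_bag else 0.5  # bags start as near-noise
    if 0.5 <= height_m <= 1.3:                      # roughly child height
        score += 0.4
    if near_crosswalk:
        score += 0.4                                # context amplifies concern
    return min(score, 1.0)

# The same plastic bag, in two different contexts:
print(object_saliency(2.5, near_crosswalk=False, looks_like_plastic_bag=True))  # 0.1: ignore
print(object_saliency(1.0, near_crosswalk=True,  looks_like_plastic_bag=True))  # 0.9: brake
```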
A system that consistently misidentifies what is salient—chasing trivialities while ignoring critical threats—will inevitably waste energy on futile efforts and fail to achieve its objectives.
4. Measuring Saliency in Practice
The three lenses on saliency are not just theoretical constructs; they are actively used in various fields of research and engineering to build more intelligent systems.
Utility Gradient: In the field of reinforcement learning, algorithms are trained to recognize which changes in their input (like the pixels on a screen) will lead to the greatest increase in their expected reward. This is a direct application of measuring saliency through utility (a toy sketch follows these three examples).
Uncertainty Reduction: Neuroscientists can measure cognitive responses, such as brainwave activity, to see how an organism reacts to new information that resolves uncertainty. A spike in activity when a previously ambiguous signal becomes clear indicates that the signal was highly salient. This reflects the core idea of the Free-Energy Principle, where systems strive to minimize "surprisal."
Empowerment Gain: In robotics, empowerment can be quantified as (the logarithm of) the number of distinct future states a robot can reach through its own actions. Engineers can design robots that are intrinsically motivated to increase their empowerment, leading them to explore and learn about their environment in a way that maximizes their future options.
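As one concrete, if toy, instance of these measurements, the sketch below estimates input saliency for a value function by perturbing each input feature and watching the value estimate move. Real systems compute this with automatic differentiation through deep networks; the linear value function and random observation here are assumptions made for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=16)    # toy linear value function: V(s) = w . s
obs = rng.normal(size=16)  # a toy observation (e.g., flattened screen pixels)

def value(state: np.ndarray) -> float:
    return float(w @ state)

eps = 1e-3
saliency = np.empty_like(obs)
for i in range(obs.size):
    bumped = obs.copy()
    bumped[i] += eps
    saliency[i] = abs(value(bumped) - value(obs)) / eps  # ~|dV/ds_i|, here |w_i|

print(int(saliency.argmax()))  # the input feature the value estimate cares most about
```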
Though the mathematical formulations may differ across disciplines, the underlying principle remains the same: identify what will be most important for future success, focus on it, and filter out the rest.
5. Saliency at Multiple Scales
Life is organized hierarchically. What is salient for a single skin cell (e.g., maintaining the integrity of its DNA) is different from what is salient for the organ it is part of (e.g., maintaining elasticity) or for the organism as a whole (e.g., regulating body temperature).
This nested structure reveals a profound aspect of intelligence. When a skin cell detects severe, irreparable DNA damage, it may initiate apoptosis—a process of programmed cell death. From the limited perspective of the cell, this action reduces its own future to zero. However, from the perspective of the entire organism, this act of self-sacrifice is highly intelligent, as it prevents the potential for cancer and thus widens the organism's range of healthy futures. This demonstrates that intelligence is often nested, with each layer working to preserve the critical saliency patterns of the layer above it. Michael Levin's work on basal cognition explores how these multi-scale agential systems make decisions.
6. A Crisper Definition
By incorporating the concept of saliency, we can now assemble these ideas into a more precise and less ambiguous definition. Put more artfully, we might say that intelligence is the art of keeping what matters alive by getting better at predicting and shaping what comes next.
This allows us to formalize our earlier proposal:
Intelligence is the ability to preserve what is vital for a system's continued existence and goal-achievement, both now and in the future, by constantly refining its understanding of how the world operates.
Here, the phrases "what matters" or "what is vital" are no longer vague aspirations. They can be measured and understood through the concrete lenses of utility, uncertainty, and empowerment. When the task, the environment, or the system itself changes, the saliency map must be updated. The capacity to perform this update is the hallmark of true intelligence.
7. Closing Thoughts
The philosopher Alfred North Whitehead once remarked,
"Civilization advances by extending the number of important operations which we can perform without thinking about them."
An intelligent system takes this a step further: it automates these operations only after it has rigorously determined that they are the right operations to automate.
As Kim Sterelny wrote in 2003, "robust systems, like detection systems, are behavior-specific. Their function is to link the registration of a salient feature of the world to an appropriate response."
This is why saliency is not merely a feature of intelligence, but its essential guide. If we aim to create smarter technologies, more equitable institutions, or healthier personal habits, our first and most crucial task is to ask a fundamental question: Are we paying attention to, and rewarding, the things that truly matter? Everything else is just noise.
A Few Pointers for Further Reading
• Friston, K. (2010). “The free-energy principle: a unified brain theory?” Nature Reviews Neuroscience, 11(2), 127–138.
• Klyubin, A. S., Polani, D., & Nehaniv, C. L. (2005). “Empowerment: A Universal Agent-Centric Measure of Control.” IEEE Congress on Evolutionary Computation.
• James, W. (1890). The Principles of Psychology. Henry Holt.
• Levin, M. (2024). “Self-Improvising Memory.” Entropy.
• Sterelny, K. (2003). Thought in a Hostile World: The Evolution of Human Cognition. Blackwell.