Starting with Target-led Defense
When considering the different elements of our SVO (Subject-Verb-Object) model – Agent, Action, and Target – we might be tempted to focus on the Agent and Action elements. Most cybersecurity vendor products concentrate on these two, highlighting how they incorporate threat intelligence or provide better attack taxonomies. While these are important, a deeper understanding of the target itself can better drive our defense and anchor our understanding of severity as it relates to our organization. This goes far beyond asset management or attack surface posture management: it means understanding the criticality of every element in your environment that can become a target, whether an identity, an account, a system, or the organization itself, and enriching that assessment with the nuances each of these aspects introduces.
Four Factors
When considering the attributes that constitute a target and define its criticality, there are four main factors: 1) Entity Type or Nature, 2) Operational Scope or Location, 3) Function, Purpose, or Role, and 4) Recoverability, Resilience, Management, and Supply Chain. Let's dive in and explore each of these.
Entity Type or Nature
Understanding the nature or type of an entity is important because it enables the automatic inference of numerous attributes, weaknesses, and potential use cases. This helps answer the fundamental question:
What is this target, and what inherent characteristics does it possess?
For instance, is it an employee workstation, an employee account, a server, or an S3 bucket? Gaining clarity on these distinctions for every entity within your IT infrastructure is foundational to comprehending the other factors that collectively determine a target's overall criticality to the organization.
Operational Scope or Location
Operational scope further contextualizes an entity's type or nature. The criticality of a server residing in the DMZ should differ markedly from that of an internal domain controller. Further nuances arise from both an entity's location within the broader IT infrastructure and its geographical placement. Geographic location matters because of potential geopolitical events and region-specific compliance and regulatory frameworks such as GDPR. Even if an organization does not fully align with the objectives of a given framework, it should weigh the potential impact of a violation when determining a target's criticality. This factor aims to address the questions:
Where does this entity operate or exert influence?
What are its boundaries, environment, and sphere of applicability?
What jurisdictional rules are relevant?
Function, Purpose, or Role
Once we have determined the type of system or entity and its geographical and logical location, the subsequent critical step is to comprehend its function or purpose. Is its primary role hosting applications? Is it responsible for access control? Does it facilitate human-machine interaction? Or is it dedicated to data storage? Our inquiry extends beyond the mere technical function of the entity. This factor addresses these questions:
What is the entity's primary role, function, or reason for existence within the broader context?
What does it enable or achieve?
We must also consider the more intricate organizational purpose, not only of the entity itself but also of the broader organization. The manner in which the organization uses the entity can imbue it with roles and functions that are vital to the organization's success. For instance, a fundamental understanding of how revenue is generated and sustained within the organization is essential to identifying the key entities that underpin those processes. A manufacturing company will, and indeed should, approach the security of many entities differently than a biomedical research company. Recognizing these nuanced differences is paramount. It also underscores why a one-size-fits-all, 'cookie cutter' application of cybersecurity defense models and technology stacks risks being both excessively costly and ultimately inadequate in the protection it affords.
Recoverability, Resilience, Management, and Supply Chain
Our final key factor in determining a target's criticality is its recoverability. We should operate under the assumption that every endpoint, account, cloud resource, and entity within our environments will eventually be compromised, regardless of the strength of our defenses. Therefore, it is crucial to understand how easily we can reconstitute these elements, not just by recreating the entity itself, but by restoring it to its previous functionality and operational capacity. This involves assessing:
How effectively can the entity be restored, remediated, controlled, or managed following a compromise, failure, or disruption?
How effectively can its operational scope and function be restored after a compromise?
How resilient is it overall?
Furthermore, an often overlooked aspect of this analysis is the entity's "supply chain." For instance, if the open-source base image we utilize is compromised, how swiftly can we transition to a different base image or version? If we need to rotate credentials for our cloud infrastructure, how effectively can we accomplish this? If our operations depend on a single cloud service provider or vendor, and their services are disrupted, how do we restore our operations?
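To make the four factors concrete, here is a minimal sketch of how they might be recorded per entity and reduced to a comparable score. The field names, weights, and scoring formula are illustrative assumptions, not a standard schema:

```python
from dataclasses import dataclass

@dataclass
class TargetProfile:
    """Illustrative record of the four criticality factors for one entity."""
    name: str
    entity_type: str        # Factor 1: Entity Type or Nature
    scope: str              # Factor 2: Operational Scope or Location
    role: str               # Factor 3: Function, Purpose, or Role
    recoverability: float   # Factor 4: 0.0 (trivially restored) .. 1.0 (irreplaceable)
    business_impact: float  # weight the organization assigns, 0.0 .. 1.0

    def criticality(self) -> float:
        # Naive scoring: hard-to-recover, high-impact targets rank highest.
        return 0.5 * self.recoverability + 0.5 * self.business_impact

dc = TargetProfile("dc-01", "server", "internal", "access control", 0.9, 0.95)
ws = TargetProfile("ws-113", "workstation", "office LAN", "employee endpoint", 0.2, 0.3)
print(dc.name if dc.criticality() > ws.criticality() else ws.name)  # dc-01 outranks ws-113
```

In practice the type, scope, and role fields would feed a much richer scoring model than this two-term weighted sum; the point is simply that criticality becomes a computable, comparable property once the four factors are recorded per entity.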
Assessing your entities
Assessing the criticality of targets within your organization can be a daunting and overwhelming endeavor. Not only does it require the integration of data (assuming you possess the necessary data), but it also consumes significant time from understaffed teams. Moreover, when weighed against the many other pressing issues of the cybersecurity day-to-day, this task is often deprioritized. So, how can one approach this significant undertaking?
A fusion of attributes and relationships
Modeling internal domains and the IT estate is often effectively done through a network graph representation. However, these graph representations frequently employ rigid definitions for nodes and edges based on their primary use case, the product managing them, or the fundamental nature of the graph itself. Assuming a well-connected architecture capable of integrating multiple cybersecurity data sources and enriching that data with observability telemetry—including logs, traces, and metrics—a valuable inference can be made. Specifically, the majority of attributes and properties defining our nodes are the aggregated result of connected edge attributes. The more specific the relationships between nodes and the attributes attached to those edges, the more precise the inferred ‘identity’ of the node becomes.
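As a toy illustration of inferring a node's 'identity' from its relationships, consider aggregating edge attributes onto the nodes they point at. The entity names and edge attributes here are hypothetical:

```python
from collections import defaultdict

# Hypothetical edge list: (source, relationship, target, attributes observed on the edge).
edges = [
    ("svc-account-7", "authenticates_to", "dc-01", {"protocol": "kerberos", "privileged": True}),
    ("ws-113",        "authenticates_to", "dc-01", {"protocol": "kerberos", "privileged": False}),
    ("backup-job",    "reads_from",       "dc-01", {"privileged": True}),
]

# Infer node properties by aggregating attributes of connected edges: the more
# privileged relationships converge on a node, the more its inferred identity
# resembles a high-value target.
inferred = defaultdict(lambda: {"in_degree": 0, "privileged_edges": 0})
for src, rel, dst, attrs in edges:
    inferred[dst]["in_degree"] += 1
    if attrs.get("privileged"):
        inferred[dst]["privileged_edges"] += 1

print(inferred["dc-01"])  # {'in_degree': 3, 'privileged_edges': 2}
```

A real implementation would run over a graph store fed by live telemetry rather than a static edge list, but the principle is the same: node identity emerges from the specificity of its edges.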
The data needed to build such an internal network graph representation is not difficult to find. Most EDR products, asset management products, operating system and cloud logs, vulnerability scanning, and IAM platforms together provide the necessary telemetry and can be easily integrated into a larger SIEM or, preferably, a data ops platform or data lake. To be precise, this extends beyond the standard ‘XDR’ platform, representing a fusion of observability and cybersecurity telemetry into a single digital twin of the IT estate.
Additionally, this approach accommodates the dynamic nature of criticality. Target criticality is not static; it evolves as business operations, technology, or threat landscapes change. By leveraging real-time data to model the relationships between entities in your environment, you can build a system less reliant on manual effort.
Process Oriented System Efficiency
Focusing on the internal processes of a company offers another valuable approach that complements the previously discussed data fusion method. Business processes can be defined as a structured and ordered sequence of interactions involving personnel, technology, and potentially other processes. This decomposition continues recursively until all subprocesses are reduced to a fundamental level of ordered interactions between individuals and technology.
Thus, by identifying key processes, whether still being built or already in production use, one can enumerate the entities involved and determine their target criticality accordingly.
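The recursive decomposition described above can be sketched as a tree walk that surfaces every entity a process ultimately depends on. The process structure and entity names here are hypothetical:

```python
# A process decomposes recursively into subprocesses until only atomic
# interactions between people and technology remain. Walking the tree
# enumerates every entity whose criticality the process inherits.

def entities_of(process: dict) -> set:
    """Collect all people/technology entities a process ultimately touches."""
    found = set(process.get("entities", []))
    for sub in process.get("subprocesses", []):
        found |= entities_of(sub)
    return found

billing = {
    "name": "monthly-billing",
    "subprocesses": [
        {"name": "generate-invoices", "entities": ["erp-server", "finance-analyst"]},
        {"name": "send-invoices",
         "entities": ["smtp-relay"],
         "subprocesses": [{"name": "sign-emails", "entities": ["dkim-key"]}]},
    ],
}

print(sorted(entities_of(billing)))
# ['dkim-key', 'erp-server', 'finance-analyst', 'smtp-relay']
```

Note that the deeply nested dkim-key surfaces alongside the obvious servers: a revenue-critical process confers criticality on every entity in its tree, however indirect.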
While this approach can be effective for growing, smaller companies, two potential pitfalls exist. First, many undocumented or informally executed processes rely on 'tribal knowledge' within organizations. If management and leadership are not attuned to identifying these, crucial processes and their associated entities may be overlooked. Second, it is important to ensure that second- and third-degree entities—those indirectly connected to the key entities within a process—are also assessed for criticality. Failure to do so risks overlooking tangential targets that could still compromise the organization.
This is also a crucial consideration when developing and implementing AI and Agentic workflows. While some vendors might readily dismiss process definition in favor of allowing AI Agents to independently determine workflows, these agents still require tools, access, data, and other resources. Furthermore, a degree of predefined order is often necessary in the sequence of tasks performed by an AI system. Therefore, applying the same process-based assessment to entities integrated into the AI system, as well as to the AI system itself, is essential.
Operating with Target Criticality
A target-led defense offers a crucial advantage: the ability to prioritize. This applies to defenses, software procurement, vulnerability management, alert queues, response actions, and incident response. Security budget priorities are often dictated by compliance regulations, which directly affect revenue and typically enforce a zero-tolerance posture toward violations. However, wherever security teams can set their own priorities within budget constraints, understanding what is critical to both company security and operations is invaluable.
Furthermore, during an attack or breach, understanding the target criticality of internal entities helps focus limited resources on protecting critical operations. It accelerates response decisions by clarifying the business impact and improves communication with non-technical stakeholders. Ultimately, effective prioritization based on target criticality allows key decision-makers to understand not just cyber risk, but also broader business risks.
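As a small sketch of criticality-driven prioritization, identical detections can be re-ranked by the criticality of the target they fired on, so that limited analyst time goes first to the entities the business can least afford to lose. The scores and entity names are hypothetical:

```python
# Hypothetical per-entity criticality scores, e.g. produced by a model
# built on the four factors discussed earlier.
criticality = {"dc-01": 0.93, "ws-113": 0.25, "s3-public-assets": 0.40}

alerts = [
    {"id": 1, "target": "ws-113", "detection": "credential dumping"},
    {"id": 2, "target": "dc-01", "detection": "credential dumping"},
    {"id": 3, "target": "s3-public-assets", "detection": "policy change"},
]

# Unknown targets default to 0.0 and sink to the bottom of the queue.
queue = sorted(alerts, key=lambda a: criticality.get(a["target"], 0.0), reverse=True)
print([a["target"] for a in queue])  # ['dc-01', 's3-public-assets', 'ws-113']
```

The same credential-dumping detection lands at the top of the queue on a domain controller and near the bottom on a workstation, which is exactly the behavior a severity model anchored only on Agent and Action cannot produce.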
Conclusion
Understanding the criticality of the elements and entities you rely upon for your defense is not limited to cybersecurity. During the French and Indian War (or Seven Years’ War), British and French forces fought the Battle of Quebec. The Marquis de Montcalm, who led the French forces defending the city of Quebec, believed his position to be superior. With part of his forces positioned atop steep cliffs overlooking the St. Lawrence River, the French believed that sector to be largely impassable to the approaching British force. However, the British, led by General James Wolfe, had intelligence that a narrow path up the cliffs was not only scalable, but scalable at night. Acting on this intelligence, the British scaled the cliffs along the path under cover of darkness, overwhelmed the light defenses at the top, and assembled a large force on the Plains of Abraham to attack the city at daybreak. Montcalm woke to find a British army arrayed on the ground outside the city he had thought ‘safe’, and the French subsequently lost the larger ensuing battle. If we as defenders fail to understand the criticality of the elements we are defending, regardless of how well we understand the tactics, techniques, and procedures of our attackers, we will one day wake up in the same situation as Montcalm and wonder how we were breached.
But before we move on to the other parts of our Agent, Action, and Target model, how would assessing your organization’s target criticality change your current cybersecurity strategy and resource allocation?