ATLAS Tactics (D3FEND)
The MITRE ATLAS tactics modeled as D3FEND ontology classes, in Turtle:
@prefix d3f: <http://d3fend.mitre.org/ontologies/d3fend.owl#> .
@prefix dcterms: <http://purl.org/dc/terms/> .
@prefix owl: <http://www.w3.org/2002/07/owl#> .
@prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix skos: <http://www.w3.org/2004/02/skos/core#> .
@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .

d3f:AML.TA0006 rdf:type owl:Class , owl:NamedIndividual , d3f:ATLASTactic;
    rdfs:isDefinedBy <https://atlas.mitre.org/tactics/AML.TA0006>;
    rdfs:label "Persistence (ATLAS Tactic)";
    rdfs:subClassOf d3f:ATLASTactic;
    d3f:atlas-id "AML.TA0006";
    d3f:definition "The adversary is trying to maintain their foothold via machine learning artifacts or software.\n\nPersistence consists of techniques that adversaries use to keep access to systems across restarts, changed credentials, and other interruptions that could cut off their access.\nTechniques used for persistence often involve leaving behind modified ML artifacts such as poisoned training data or backdoored ML models.\n" .
d3f:AML.TA0001 rdf:type owl:Class , owl:NamedIndividual , d3f:ATLASTactic;
    rdfs:isDefinedBy <https://atlas.mitre.org/tactics/AML.TA0001>;
    rdfs:label "ML Attack Staging (ATLAS Tactic)";
    rdfs:subClassOf d3f:ATLASTactic;
    d3f:atlas-id "AML.TA0001";
    d3f:definition "The adversary is leveraging their knowledge of and access to the target system to tailor the attack.\n\nML Attack Staging consists of techniques adversaries use to prepare their attack on the target ML model.\nTechniques can include training proxy models, poisoning the target model, and crafting adversarial data to feed the target model.\nSome of these techniques can be performed in an offline manner and are thus difficult to mitigate.\nThese techniques are often used to achieve the adversary's end goal.\n" .
d3f:AML.TA0012 rdf:type owl:Class , owl:NamedIndividual , d3f:ATLASTactic;
    rdfs:isDefinedBy <https://atlas.mitre.org/tactics/AML.TA0012>;
    rdfs:label "Privilege Escalation (ATLAS Tactic)";
    rdfs:subClassOf d3f:ATLASTactic;
    d3f:atlas-id "AML.TA0012";
    d3f:definition "The adversary is trying to gain higher-level permissions.\n\nPrivilege Escalation consists of techniques that adversaries use to gain higher-level permissions on a system or network. Adversaries can often enter and explore a network with unprivileged access but require elevated permissions to follow through on their objectives. Common approaches are to take advantage of system weaknesses, misconfigurations, and vulnerabilities. Examples of elevated access include:\n- SYSTEM/root level\n- local administrator\n- user account with admin-like access\n- user accounts with access to a specific system or that perform a specific function\n\nThese techniques often overlap with Persistence techniques, as OS features that let an adversary persist can execute in an elevated context.\n" .
d3f:AML.TA0008 rdf:type owl:Class , owl:NamedIndividual , d3f:ATLASTactic;
    rdfs:isDefinedBy <https://atlas.mitre.org/tactics/AML.TA0008>;
    rdfs:label "Discovery (ATLAS Tactic)";
    rdfs:subClassOf d3f:ATLASTactic;
    d3f:atlas-id "AML.TA0008";
    d3f:definition "The adversary is trying to figure out your machine learning environment.\n\nDiscovery consists of techniques an adversary may use to gain knowledge about the system and internal network.\nThese techniques help adversaries observe the environment and orient themselves before deciding how to act.\nThey also allow adversaries to explore what they can control and what's around their entry point in order to discover how it could benefit their current objective.\nNative operating system tools are often used toward this post-compromise information-gathering objective.\n" .
d3f:AML.TA0003 rdf:type owl:Class , owl:NamedIndividual , d3f:ATLASTactic;
    rdfs:isDefinedBy <https://atlas.mitre.org/tactics/AML.TA0003>;
    rdfs:label "Resource Development (ATLAS Tactic)";
    rdfs:subClassOf d3f:ATLASTactic;
    d3f:atlas-id "AML.TA0003";
    d3f:definition "The adversary is trying to establish resources they can use to support operations.\n\nResource Development consists of techniques that involve adversaries creating,\npurchasing, or compromising/stealing resources that can be used to support targeting.\nSuch resources include machine learning artifacts, infrastructure, accounts, or capabilities.\nThese resources can be leveraged by the adversary to aid in other phases of the adversary lifecycle, such as ML Attack Staging.\n" .
d3f:AML.TA0005 rdf:type owl:Class , owl:NamedIndividual , d3f:ATLASTactic;
    rdfs:isDefinedBy <https://atlas.mitre.org/tactics/AML.TA0005>;
    rdfs:label "Execution (ATLAS Tactic)";
    rdfs:subClassOf d3f:ATLASTactic;
    d3f:atlas-id "AML.TA0005";
    d3f:definition "The adversary is trying to run malicious code embedded in machine learning artifacts or software.\n\nExecution consists of techniques that result in adversary-controlled code running on a local or remote system.\nTechniques that run malicious code are often paired with techniques from all other tactics to achieve broader goals, like exploring a network or stealing data.\nFor example, an adversary might use a remote access tool to run a PowerShell script that does Remote System Discovery.\n" .
d3f:AML.TA0000 rdf:type owl:Class , owl:NamedIndividual , d3f:ATLASTactic;
    rdfs:isDefinedBy <https://atlas.mitre.org/tactics/AML.TA0000>;
    rdfs:label "ML Model Access (ATLAS Tactic)";
    rdfs:subClassOf d3f:ATLASTactic;
    d3f:atlas-id "AML.TA0000";
    d3f:definition "The adversary is attempting to gain some level of access to a machine learning model.\n\nML Model Access enables techniques that use various types of access to the machine learning model that can be used by the adversary to gain information, develop attacks, and as a means to input data to the model.\nThe level of access can range from the full knowledge of the internals of the model to access to the physical environment where data is collected for use in the machine learning model.\nThe adversary may use varying levels of model access during the course of their attack, from staging the attack to impacting the target system.\n\nAccess to an ML model may require access to the system housing the model, the model may be publicly accessible via an API, or it may be accessed indirectly via interaction with a product or service that utilizes ML as part of its processes.\n" .
d3f:AML.TA0011 rdf:type owl:Class , owl:NamedIndividual , d3f:ATLASTactic;
    rdfs:isDefinedBy <https://atlas.mitre.org/tactics/AML.TA0011>;
    rdfs:label "Impact (ATLAS Tactic)";
    rdfs:subClassOf d3f:ATLASTactic;
    d3f:atlas-id "AML.TA0011";
    d3f:definition "The adversary is trying to manipulate, interrupt, erode confidence in, or destroy your machine learning systems and data.\n\nImpact consists of techniques that adversaries use to disrupt availability or compromise integrity by manipulating business and operational processes.\nTechniques used for impact can include destroying or tampering with data.\nIn some cases, business processes can look fine, but may have been altered to benefit the adversaries' goals.\nThese techniques might be used by adversaries to follow through on their end goal or to provide cover for a confidentiality breach.\n" .
d3f:AML.TA0007 rdf:type owl:Class , owl:NamedIndividual , d3f:ATLASTactic;
    rdfs:isDefinedBy <https://atlas.mitre.org/tactics/AML.TA0007>;
    rdfs:label "Defense Evasion (ATLAS Tactic)";
    rdfs:subClassOf d3f:ATLASTactic;
    d3f:atlas-id "AML.TA0007";
    d3f:definition "The adversary is trying to avoid being detected by machine learning-enabled security software.\n\nDefense Evasion consists of techniques that adversaries use to avoid detection throughout their compromise.\nTechniques used for defense evasion include evading ML-enabled security software such as malware detectors.\n" .
d3f:AML.TA0002 rdf:type owl:Class , owl:NamedIndividual , d3f:ATLASTactic;
    rdfs:isDefinedBy <https://atlas.mitre.org/tactics/AML.TA0002>;
    rdfs:label "Reconnaissance (ATLAS Tactic)";
    rdfs:subClassOf d3f:ATLASTactic;
    d3f:atlas-id "AML.TA0002";
    d3f:definition "The adversary is trying to gather information about the machine learning system they can use to plan future operations.\n\nReconnaissance consists of techniques that involve adversaries actively or passively gathering information that can be used to support targeting.\nSuch information may include details of the victim organization's machine learning capabilities and research efforts.\nThis information can be leveraged by the adversary to aid in other phases of the adversary lifecycle, such as using gathered information to obtain relevant ML artifacts, targeting ML capabilities used by the victim, tailoring attacks to the particular models used by the victim, or driving further Reconnaissance efforts.\n" .
d3f:AML.TA0013 rdf:type owl:Class , owl:NamedIndividual , d3f:ATLASTactic;
    rdfs:isDefinedBy <https://atlas.mitre.org/tactics/AML.TA0013>;
    rdfs:label "Credential Access (ATLAS Tactic)";
    rdfs:subClassOf d3f:ATLASTactic;
    d3f:atlas-id "AML.TA0013";
    d3f:definition "The adversary is trying to steal account names and passwords.\n\nCredential Access consists of techniques for stealing credentials like account names and passwords. Techniques used to get credentials include keylogging or credential dumping. Using legitimate credentials can give adversaries access to systems, make them harder to detect, and provide the opportunity to create more accounts to help achieve their goals.\n" .
d3f:AML.TA0009 rdf:type owl:Class , owl:NamedIndividual , d3f:ATLASTactic;
    rdfs:isDefinedBy <https://atlas.mitre.org/tactics/AML.TA0009>;
    rdfs:label "Collection (ATLAS Tactic)";
    rdfs:subClassOf d3f:ATLASTactic;
    d3f:atlas-id "AML.TA0009";
    d3f:definition "The adversary is trying to gather machine learning artifacts and other related information relevant to their goal.\n\nCollection consists of techniques adversaries may use to gather information relevant to following through on the adversary's objectives, along with the sources that information is collected from.\nFrequently, the next goal after collecting data is to steal (exfiltrate) the ML artifacts, or use the collected information to stage future operations.\nCommon target sources include software repositories, container registries, model repositories, and object stores.\n" .
d3f:AML.TA0004 rdf:type owl:Class , owl:NamedIndividual , d3f:ATLASTactic;
    rdfs:isDefinedBy <https://atlas.mitre.org/tactics/AML.TA0004>;
    rdfs:label "Initial Access (ATLAS Tactic)";
    rdfs:subClassOf d3f:ATLASTactic;
    d3f:atlas-id "AML.TA0004";
    d3f:definition "The adversary is trying to gain access to the machine learning system.\n\nThe target system could be a network, mobile device, or an edge device such as a sensor platform.\nThe machine learning capabilities used by the system could be local and onboard, or cloud-enabled.\n\nInitial Access consists of techniques that use various entry vectors to gain an initial foothold within the system.\n" .
d3f:AML.TA0010 rdf:type owl:Class , owl:NamedIndividual , d3f:ATLASTactic;
    rdfs:isDefinedBy <https://atlas.mitre.org/tactics/AML.TA0010>;
    rdfs:label "Exfiltration (ATLAS Tactic)";
    rdfs:subClassOf d3f:ATLASTactic;
    d3f:atlas-id "AML.TA0010";
    d3f:definition "The adversary is trying to steal machine learning artifacts or other information about the machine learning system.\n\nExfiltration consists of techniques that adversaries may use to steal data from your network.\nData may be stolen for its valuable intellectual property, or for use in staging future operations.\n\nTechniques for getting data out of a target network typically include transferring it over their command and control channel or an alternate channel and may also include putting size limits on the transmission.\n" .
The Clojure script that generated the Turtle above: it pulls MITRE's ATLAS STIX bundle, filters out the tactic objects, and writes them as D3FEND-style RDF with arachne/aristotle.
(in-ns 'dev)
(require '[clojure.data.json :as json])
(require '[arachne.aristotle :as a])
(require '[arachne.aristotle.registry :as reg])

;; Fetch the ATLAS STIX bundle published by MITRE.
(defonce atlas
  (json/read-str (slurp "https://github.com/mitre-atlas/atlas-navigator-data/raw/main/dist/stix-atlas.json")))
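;; A quick way to see what is in the bundle before filtering: count the STIX
;; objects by type. A sketch; it assumes the bundle keeps its objects under
;; "objects" with a "type" key per object, as the transformation below does.
(comment
  (frequencies (map #(get % "type") (get atlas "objects"))))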
;; Prefix registry mirroring the @prefix declarations in the Turtle above,
;; so keywords like :d3f/ATLASTactic expand to the same IRIs.
(def atlas-registry
  {"d3f" "http://d3fend.mitre.org/ontologies/d3fend.owl#",
   "dcterms" "http://purl.org/dc/terms/",
   "owl" "http://www.w3.org/2002/07/owl#",
   "rdf" "http://www.w3.org/1999/02/22-rdf-syntax-ns#",
   "rdfs" "http://www.w3.org/2000/01/rdf-schema#",
   "skos" "http://www.w3.org/2004/02/skos/core#",
   "xsd" "http://www.w3.org/2001/XMLSchema#"})
;; Transform each STIX x-mitre-tactic object into an RDF resource map,
;; keeping only objects that carry a mitre-atlas external reference.
(def atlas-tactics
  (for [{:strs [type
                external_references
                name
                description]
         :as tactic} (get atlas "objects")
        :when (= type "x-mitre-tactic")
        :let [{:strs [url external_id]} (some (fn [{:strs [source_name] :as ref}]
                                                (when (= source_name "mitre-atlas")
                                                  ref))
                                              external_references)]
        :when (and url external_id)]
    {:rdf/type #{:d3f/ATLASTactic :owl/Class :owl/NamedIndividual}
     :rdfs/subClassOf :d3f/ATLASTactic
     :rdf/about (keyword "d3f" external_id)
     :d3f/atlas-id external_id
     :rdfs/label (str name " (ATLAS Tactic)")
     :rdfs/isDefinedBy (java.net.URI. url)
     :d3f/definition description}))
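;; Spot-check the transformation (a sketch): each map should mirror one of the
;; Turtle entries above, e.g. an :rdfs/label ending in "(ATLAS Tactic)".
(comment
  (first atlas-tactics))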
(comment
  ;; Write the ATLAS tactics to a TTL file.
  (reg/with atlas-registry
    (let [g (a/add (a/graph :simple) [atlas-tactics])
          pm (doto (.getPrefixMapping g)
               (.setNsPrefixes atlas-registry))]
      (a/write g "/tmp/atlas-tactics.ttl" :ttl))))
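;; Optional round-trip check: read the emitted Turtle back into a fresh graph
;; and list the tactic labels. A sketch against Aristotle's query DSL
;; (arachne.aristotle.query); adjust the pattern if your version's API differs.
(comment
  (require '[arachne.aristotle.query :as q])
  (reg/with atlas-registry
    (let [g (a/read (a/graph :simple) "/tmp/atlas-tactics.ttl")]
      (q/run g '[?label]
             '[:bgp
               [?tactic :rdf/type :d3f/ATLASTactic]
               [?tactic :rdfs/label ?label]]))))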