@Adefful
Last active December 18, 2023 07:57

Proposal for Vulnerability Detection Experiment

```mermaid
sequenceDiagram
    autonumber
    participant Project
    participant Joern
    participant Agent1 as Report Generator
    participant Agent2 as User Input Finder
    participant Agent3 as Validation Finder
    participant Agent4 as Input Synthesizer
    participant Agent5 as Validation Synthesizer

    Project->>Joern: Identify 'exec' sink points
    note right of Joern: Analyzes project for exec() calls

    loop Each Function Related to 'exec'
        Joern->>Agent2: Send function code, call trace, project context
        note over Agent2: Identifies user input in each function
        Joern->>Agent3: Send function code, call trace, project context
        note over Agent3: Identifies validation/sanitization in each function
    end
    Agent2->>Agent4: User input findings for list of functions
    note right of Agent4: Aggregates and synthesizes user input data
    Agent3->>Agent5: Validation findings for list of functions
    note right of Agent5: Aggregates and synthesizes validation data
    Agent4->>Agent1: Synthesized user input findings
    Agent5->>Agent1: Synthesized validation findings
    Agent1->>Project: Generate comprehensive vulnerability report
    note right of Agent1: Compiles findings into detailed report
```
  1. Select Projects: Choose five projects with exec function calls.
  2. Function Context Analysis: For each project, identify the context of exec functions, either by examining all functions in a specific file or through call graph connections.
  3. Manually Execute Pipeline:
    a. User Input Analysis: Individually input each function into an agent to identify user input.
    b. Validation/Sanitization Analysis: Individually input each function into an agent to detect validation and sanitization practices.
    c. Aggregate and Synthesize: Combine and analyze findings from (a) and (b) to understand which functions use validation or sanitization, and where user input occurs.
    d. Report Generation: Create a report based on the aggregated data, focusing on functions with exec calls.
  4. Manual Evaluation: Assess areas where the model struggled or failed to identify issues.
  5. Iterate Experiment: Develop a new iteration of the experiment based on the evaluation findings.

Context

  • CWE-78: Experiment targets this specific Common Weakness Enumeration.

Checklist Methodology

  • Base: Adopts Slava's checklist for identifying vulnerabilities.
  • Focus: Analyzes vulnerabilities using real-world examples and NVD expert analyses.

Checklist

  1. Signature Finding: Locate Runtime.getRuntime().exec() calls in Java, searchable via the AST.
  2. Non-Constant Argument Identification: Ensure arguments of exec calls depend on user input, analyzing data dependencies.
  3. Special Symbol Checks: Look for characters like [: ; & | " ' ] that could lead to arbitrary command execution, ensuring code checks, filters, or escapes them.
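The first and third checklist items can be approximated with a toy scanner. This is a minimal sketch: a real implementation would query an AST or a Joern CPG rather than a regex, and the heuristics here (what counts as a "constant" argument, which symbols are special) are illustrative assumptions.

```python
import re

# Shell metacharacters from the checklist that can enable command injection.
SPECIAL = set(':;&|"\'')

def find_exec_calls(java_source: str):
    """Toy signature finder: locate Runtime.getRuntime().exec(...) calls
    with a regex. Real tooling would walk the AST instead."""
    pattern = re.compile(r'Runtime\.getRuntime\(\)\.exec\(([^)]*)\)')
    findings = []
    for match in pattern.finditer(java_source):
        arg = match.group(1).strip()
        findings.append({
            "argument": arg,
            # A single string literal is lower risk than a variable or concatenation.
            "constant": arg.startswith('"') and arg.endswith('"') and '+' not in arg,
            "special_symbols": sorted(SPECIAL & set(arg)),
        })
    return findings

print(find_exec_calls('Runtime.getRuntime().exec("ping " + host);'))
```

A concatenated argument like `"ping " + host` is flagged as non-constant, which is exactly the case the second checklist item asks us to trace back to user input.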

Experiment Phases

  1. Commit Level Information Gathering: Analyze vulnerability fix data.
  2. Candidate Search: Identify potential vulnerabilities using gathered data.
  3. Vulnerability Confirmation and Localization: Confirm and localize vulnerabilities in the codebase.

Application of LangChain and Agents

  • Use agents to localize vulnerabilities, focusing on multi-function analysis and trigger points.

Detailed Experiment Steps

  1. Input to LLM: Functions containing exec calls are fed to the LLM, along with the list of agents.
  2. Agent 2: Analyzes each function for user input (input: function code, prompt, additional context).
  3. Agent 3: Searches each function for validation/sanitization (input: same as Agent 2).
  4. Agent 4: Aggregates and synthesizes data from Agent 2 (input: list of Agent 2 responses).
  5. Agent 5: Aggregates and synthesizes data from Agent 3 (input: list of Agent 3 responses).
  6. Report Generation: Agent 1 compiles a report from the synthesized data (input: responses from Agents 4 and 5).
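The dataflow between the five agents can be sketched as plain Python functions. Each stand-in below would be an LLM call with its own prompt in the real experiment; the function names and the string-matching heuristics are illustrative only.

```python
# Toy stand-ins for the five agents; in the real pipeline each is an LLM call.
def agent2_find_user_input(fn):              # Agent 2
    return {"function": fn["name"], "user_input": "request.getParameter" in fn["code"]}

def agent3_find_validation(fn):              # Agent 3
    return {"function": fn["name"], "validated": "sanitize" in fn["code"]}

def agent4_synthesize_inputs(findings):      # Agent 4
    return [f["function"] for f in findings if f["user_input"]]

def agent5_synthesize_validation(findings):  # Agent 5
    return [f["function"] for f in findings if f["validated"]]

def agent1_report(inputs, validated):        # Agent 1
    return {"user_input_functions": inputs,
            "validated_functions": validated,
            # Functions that receive user input but are never validated.
            "candidates": [f for f in inputs if f not in validated]}

def run_pipeline(functions):
    in_findings = [agent2_find_user_input(f) for f in functions]
    val_findings = [agent3_find_validation(f) for f in functions]
    return agent1_report(agent4_synthesize_inputs(in_findings),
                         agent5_synthesize_validation(val_findings))

funcs = [
    {"name": "runCmd", "code": 'exec(request.getParameter("cmd"))'},
    {"name": "safeRun", "code": 'exec(sanitize(request.getParameter("cmd")))'},
]
print(run_pipeline(funcs))
```

The key structural point is that Agents 2 and 3 run per function, while Agents 4 and 5 see the whole list at once, so only they can reason across functions.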

Analysis Methodology

  • Apply taint analysis method, examining function call graphs for:
    • Fs: Function with a sink.
    • Fu: Function with user input.
    • Fval_sanit: Function with validation and sanitization.
  • Main Model 1 compiles a final report on the vulnerability, including assumptions about severity, possible exploitation methods, and remediation recommendations.

Purpose: Track the flow from user input (Fu) to the sink (Fs) while checking for validation/sanitization (Fval_sanit) to determine vulnerability presence and severity.
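The Fu → Fs flow check amounts to a graph search: a path is suspicious if it reaches the sink without passing through any Fval_sanit node. A minimal BFS sketch, with a hypothetical call graph as an adjacency dict:

```python
from collections import deque

def unsanitized_paths(call_graph, fu, fs, fval):
    """Return call paths from a user-input function (Fu) to a sink (Fs)
    that never pass through a validation/sanitization function (Fval_sanit)."""
    paths, queue = [], deque([[fu]])
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == fs:
            paths.append(path)
            continue
        for callee in call_graph.get(node, []):
            if callee in fval or callee in path:  # prune sanitizers and cycles
                continue
            queue.append(path + [callee])
    return paths

# Hypothetical call graph: one path sanitizes, one does not.
graph = {
    "handleRequest": ["buildCmd", "buildSafeCmd"],
    "buildCmd": ["runExec"],
    "buildSafeCmd": ["sanitize"],
    "sanitize": ["runExec"],
}
print(unsanitized_paths(graph, "handleRequest", "runExec", {"sanitize"}))
```

Only the `buildCmd` path survives the pruning, which is the path the report should flag. Real taint analysis also tracks which *data* flows along the edges, not just reachability.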

Implementation Details

  • Provide context, call graphs, and stack traces to models for a comprehensive understanding of each function's role and context.

Objective and Research Question (RQ)

  • Assess LLM limitations and analysis needs.
  • RQ: Is the strategy sufficient for expert-level vulnerability analysis?

Potential Limitations and Risks

  • LLMs' limitations in specific tasks.
  • Lack of sufficient project context.
  • Reliance on external analysis tools.

Ollama

Ollama enables the use of large language models on personal hardware, with key features including:

  1. Local Model Execution: Runs major models such as Llama 2, Code Llama, DeepSeek Coder, and Mistral locally.
  2. GPU Acceleration: Significantly speeds up inference on machines with capable GPUs.
  3. Quantization: Uses 4-bit quantization by default to balance model accuracy and resource requirements.
  4. User Interface: Offers a CLI and a REST API for individuals or systems to interact with models.
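As a sketch of the REST interface: a default local install listens on `localhost:11434`, and generation requests are JSON posted to `/api/generate`. The model tag below is an assumption (any locally pulled model works); only the request body is built here, no server is contacted.

```python
import json

# Request body for Ollama's /api/generate endpoint
# (POST http://localhost:11434/api/generate on a default local install).
payload = {
    "model": "codellama",   # illustrative: any locally pulled model tag
    "prompt": "Does this function validate its input before calling exec()?",
    "stream": False,        # ask for a single JSON response instead of a stream
}
body = json.dumps(payload)
print(body)
```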

LangChain is a Python framework that streamlines the development of AI applications, particularly those utilizing large language models (LLMs) for natural language processing tasks. It simplifies interface creation for generative AI applications and integrates with a variety of LLMs, including Ollama and OpenAI's GPT variants, enabling access to extensive data sources.

Key components of LangChain include:

  • Model Interaction: Manages LLM input/output.
  • Data Connection and Retrieval: Enables data handling for LLMs.
  • Chains: Connects LLMs for building complex applications.
  • Agents: Directs LLMs based on user requests.
  • Memory: Provides context retention for LLM interactions.

LangChain integrates with diverse data sources and LLM providers like Hugging Face, Cohere, and OpenAI, facilitating applications that require real-time data processing and retrieval, supported by storage solutions including vector, SQL, Redis, and graph databases.
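The "Chains" component above is essentially function composition: each step's output becomes the next step's input. The toy class below illustrates the concept only; it is *not* the LangChain API, and the prompt-building and model steps are fakes.

```python
# Toy illustration of the "chain" idea: compose steps so each step's
# output feeds the next. NOT the LangChain API; names are illustrative.
class Chain:
    def __init__(self, *steps):
        self.steps = steps

    def run(self, value):
        for step in self.steps:
            value = step(value)
        return value

# Hypothetical steps: build a prompt, "call" a model, parse the answer.
build_prompt = lambda fn: f"Is user input validated in: {fn}?"
fake_llm = lambda prompt: "yes" if "sanitize" in prompt else "no"
parse = lambda answer: {"validated": answer == "yes"}

chain = Chain(build_prompt, fake_llm, parse)
print(chain.run("exec(sanitize(cmd))"))
```

In the real framework the middle step would be an LLM invocation, but the shape of the pipeline (prompt template → model → output parser) is the same.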

The integration of RAG and Neo4j

The integration of RAG and Neo4j with LangChain brings sophisticated retrieval strategies that enhance text data handling:

  • Advanced Retrieval Strategies: Employs Neo4j's graph structure for nuanced document indexing.
  • RAG Pattern Implementation: Utilizes libraries for patterns like 'Chat with your PDF', enriching simple vector search with more complex retrieval methods.
  • Parent Document Retrievers: Breaks down large documents into smaller vectors for detailed yet context-aware retrieval.
  • Neo4j Advanced RAG Template: Offers strategies for balancing precise data representation with comprehensive context.
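The parent-document-retriever idea above can be sketched without any vector store: index small child chunks for matching, but return the full parent document for context. Substring matching stands in for embedding similarity here, and the chunking scheme is an illustrative assumption.

```python
# Minimal sketch of a parent document retriever: match on child chunks,
# return the whole parent. Substring match stands in for vector search.
def build_index(documents, chunk_size=20):
    index = []  # (chunk_text, parent_id) pairs
    for doc_id, text in documents.items():
        for i in range(0, len(text), chunk_size):
            index.append((text[i:i + chunk_size].lower(), doc_id))
    return index

def retrieve_parent(index, documents, query):
    for chunk, doc_id in index:
        if query.lower() in chunk:
            return documents[doc_id]  # return the parent, not the chunk
    return None

docs = {"adv1": "Runtime.exec with unsanitized user input enables command injection."}
index = build_index(docs)
print(retrieve_parent(index, docs, "command"))
```

The retrieval granularity (small chunks) and the answer granularity (whole documents) are deliberately different, which is the point of the pattern.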

Agents

In the context of vulnerability detection, the implementation of MoE (Mixture of Experts) and Agents within LangChain offers powerful capabilities:

  1. Agents for Decision-Making: Agents in LangChain are critical for driving decision-making processes. They have access to various tools and can decide which tool to use based on user input. This functionality is pivotal in applications like vulnerability detection, where decisions must be made dynamically based on complex and varying data inputs.

  2. LangChain's Support Areas: LangChain aids developers in areas crucial for vulnerability detection applications:

    • Managing LLMs and prompts for effective communication with models.
    • Creating chains for sequences of model interactions.
    • Implementing data-augmented generation to incorporate external data sources.
    • Utilizing agents for decision-making and continuous action based on results.
  3. Use Cases: LangChain supports use cases that are relevant for vulnerability detection:

    • Building chatbots, which could be adapted for interactive vulnerability queries.
    • Creating interactive agents, useful for continuous vulnerability analysis and customer service in cybersecurity.
    • Data summarization, which can help in condensing voluminous vulnerability data.
    • Document retrieval and clustering, vital for organizing and accessing relevant security information.
    • API interactions, allowing integration with various cybersecurity tools and platforms.

Knowledge Graph

Integrating NaLLM into LangChain for constructing knowledge graphs significantly boosts the capability for tasks like vulnerability detection. Here's a structured approach using Neo4j and advanced RAG techniques, with an emphasis on key model capabilities and practical use cases:

High-Level Model Requirements:

  • Entity Recognition: Accurate identification of entities within text.
  • Entity Linking: Correctly linking entities to unique identifiers.
  • Relation Extraction: Identifying relationships between entities.
  • Entity Sentiment: Determining sentiment related to specific entities.

Practical Implementation:

  • Neo4j Vector Index and GraphCypherQAChain: This synergy optimizes information synthesis for generating informed responses, essential for complex query resolution in vulnerability detection.

Pipeline:

  1. Knowledge Graph Construction: Using LLMs and additional tools to build the graph.
  2. Queries to Knowledge Graph: Employing LLMs to query the knowledge graph stored in Neo4j.

The integration of NaLLM with Neo4j using LangChain represents a forward leap in utilizing LLMs for enhancing knowledge graphs, making it highly relevant for sophisticated tasks like vulnerability detection.

Vulnerability Detection Application in LangChain

In the context of vulnerability detection using LangChain and its integration with Neo4j and LLMs, the approach can be structured as follows:

  1. Storing CPG Graph in Neo4j: CPGs can be stored in Neo4j, enabling efficient querying and data manipulation. Agents in LangChain can write queries to this database, retrieving information needed for vulnerability detection (e.g., methods, paths, slices, data dependencies).

  2. Building Knowledge Bases with LLM + Joern:

    • Joern Integration: Utilize Joern to analyze source code and construct detailed representations like call graphs, lists of variables, and class relationships.
    • LLM Contribution: LLM can augment this data by adding possible sinks, connections related to validation, sanitization, user input, logic, and usage context. This additional layer of understanding provided by LLMs helps in identifying weak points in the code.
    • Comprehensive Analysis: Perform a preliminary analysis of all functions, classes, and variables that could be relevant to the vulnerability detection task.
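As a sketch of what an agent's query against a CPG in Neo4j might look like: the Cypher below finds call chains into an `exec` method that never pass a sanitizing method. The node labels (`Method`), relationship type (`CALLS`), and property names are hypothetical; the actual schema depends on how the Joern CPG is exported into Neo4j.

```python
# Hypothetical Cypher query an agent could issue against a CPG in Neo4j.
# Labels, relationship types, and properties are illustrative assumptions.
cypher = """
MATCH path = (src:Method)-[:CALLS*1..5]->(snk:Method)
WHERE snk.name = 'exec'
  AND NOT any(m IN nodes(path) WHERE m.name CONTAINS 'sanitize')
RETURN [m IN nodes(path) | m.name] AS callChain
"""
print(cypher.strip())
```

Bounding the variable-length path (`*1..5`) keeps the traversal depth under control, mirroring the "varying depth of traversal" idea discussed for the chains below.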

Chain Use Cases and Capabilities

In the context of vulnerability detection using LangChain, a diverse range of agents and a strategic chain can be implemented:

  1. Agents:

    • JoernAgent: Utilizes Joern queries to analyze code structures and relationships.
    • Neo4JAgent: Employs the Neo4j database and Cypher queries for data retrieval and analysis.
    • CodeAgent: Provides code functions, file contents, or slices upon request.
    • ManagerAgent: Oversees and coordinates the actions of other agents.
    • WeaknessAnalyserAgent: Specializes in identifying and analyzing potential weak points in the code.
    • TraversalAgent: Decides the traversal path, particularly in the call graph, for deeper analysis.
  2. Chain Implementation:

    • Function Analysis Chain: When assessing if a function is vulnerable, the model requests context such as potential sinks, sanitization, or validation within the function. It then decides which function to examine further, varying the depth of traversal in the call graph based on the requirement.
    • Mixture of Experts (MoE) Chain: In another approach, MoE might provide potential sinks, followed by a voting or selection process to identify the most relevant sinks for further investigation.
    • Additional Context Retrieval: The model can also access documentation strings of functions to understand their purpose, adding another layer of context to the vulnerability analysis.
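The MoE voting step can be sketched in a few lines: each "expert" (a separately prompted model or agent) proposes candidate sinks, and a quorum vote selects which ones to investigate further. The expert outputs and quorum threshold below are illustrative.

```python
from collections import Counter

# Toy Mixture-of-Experts step: each expert proposes candidate sink
# functions; a quorum vote selects sinks for further investigation.
def vote_on_sinks(expert_proposals, quorum=2):
    votes = Counter(s for proposals in expert_proposals for s in set(proposals))
    return sorted(s for s, n in votes.items() if n >= quorum)

experts = [
    ["runExec", "evalInput"],   # expert 1's proposals
    ["runExec"],                # expert 2's proposals
    ["runExec", "spawnShell"],  # expert 3's proposals
]
print(vote_on_sinks(experts))
```

Deduplicating each expert's list before counting (`set(proposals)`) prevents a single expert from outvoting the others by repeating itself.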

The task at hand is divided into three distinct parts:

  1. Commit Level Information Gathering:

    • Gather detailed information from vulnerability fix data, focusing on the nature of the vulnerability and the reasons and methods behind its rectification. This includes examining how vulnerabilities are structured and addressed, drawing from real-world fixes as well as expert analyses from sources like the NVD.
  2. Potential Vulnerability Candidate Search:

    • Use the aggregated information from the first part to identify potential vulnerabilities in a separate dataset, looking for candidates that match the characteristics and patterns previously determined.
  3. Vulnerability Confirmation and Localization:

    • Confirm and localize the potential vulnerabilities within the codebase, providing a rationale for their classification as vulnerabilities and detailing their exact location and nature.
@acheshkov

I'd like to know more details about Mixture of Experts (MoE) Chain

@acheshkov

For each task:

  1. Commit Level Information Gathering
  2. Potential Vulnerability Candidate Search
  3. Vulnerability Confirmation and Localization:

how can we measure the effectiveness?
