PitchHut logo
agentic_security
by liable_bronze_carmen
Empower your LLM security with precision and flexibility.
Pitch

Agentic LLM Vulnerability Scanner is an open-source toolkit designed for robust AI red teaming. Offering customizable rule sets and comprehensive fuzzing techniques, it enhances the security of large language models without claiming to eliminate all risks. Perfect for developers serious about AI safety, it's easy to integrate into your workflow.

Description

Agentic Security: Your Essential LLM Vulnerability Scanner

Introducing the Agentic Security repository – an innovative open-source tool specifically designed for scanning vulnerabilities in Large Language Models (LLMs). As the digital landscape evolves, ensuring the security of AI systems is crucial. Our vulnerability scanner empowers developers and security professionals with a robust AI red teaming kit that allows for comprehensive assessments of potential flaws in their LLMs.

Key Features

  • Customizable Rule Sets: Tailor your scanning strategy with flexible agent-based attacks and customizable rule sets.
  • Comprehensive Fuzzing: Conduct extensive fuzzing tests specifically designed for LLMs to uncover hidden vulnerabilities.
  • API Integration & Stress Testing: Seamlessly integrate with LLM APIs and perform stress tests to evaluate performance under rigorous conditions.
  • Diverse Attack Techniques: Explore a wide range of fuzzing strategies and attack techniques to ensure a thorough security assessment.

Note: Please note that while Agentic Security is an effective safety scanning tool, it cannot guarantee complete protection against all threats.

Getting Started

Easily get started with Agentic Security by installing via pip:

pip install agentic_security  

Quick Start Example

Launch the scanner and initiate a test with the command below:

agentic_security  

Output will display the scanning process, highlighting key actions taken such as locating relevant CSV files for testing.

User Interface

Experience an intuitive UI for better usability. Check out a glimpse of the user interface:
User Interface

LLM Integration

Agentic Security is designed for easy integration with any LLM API. Here's an example of how to set it up:

POST https://api.openai.com/v1/chat/completions  
Authorization: Bearer sk-xxxxxxxxx  
Content-Type: application/json  
{  
 "model": "gpt-3.5-turbo",  
 "messages": [{"role": "user", "content": "<<PROMPT>>"}],  
 "temperature": 0.7  
}  

Facilitate security scans efficiently by replacing <<PROMPT>> with targeted attack vectors during testing.

Adding Custom Datasets

Enhance your scanning operations by adding custom datasets. Simply upload one or more CSV files containing a prompt column, and the data will be incorporated on startup.

2024-04-13 13:21:31.157 | INFO | agentic_security.probe_data.data:load_local_csv:273 - Found 1 CSV files  
2024-04-13 13:21:31.157 | INFO | agentic_security.probe_data.data:load_local_csv:274 - CSV files: ['prompts.csv']  

Continuous Integration (CI)

Conduct regular vulnerability checks by running CI scripts such as ci.py.

from agentic_security import AgenticSecurity  
  
spec = """  
POST http://0.0.0.0:8718/v1/self-probe  
Authorization: Bearer XXXXX  
Content-Type: application/json  
{  
 "prompt": "<<PROMPT>>"  
}  
"""  
result = AgenticSecurity.scan(llmSpec=spec)  

Ensure your LLM maintains low failure rates by actively monitoring vulnerabilities.

Future Roadmap

We have a vision for improving Agentic Security further. Our future goals include:

  • Expanding dataset variety
  • Introducing new attack vectors
  • Developing an initial attacker LLM
  • Completing integration with OWASP Top 10 classification

Get Involved

Contributions to Agentic Security are highly encouraged! Interested developers can fork the repository, create a branch, and submit a pull request for any enhancements or bug fixes. Join us in building a safer AI future!

0 comments

No comments yet.

Sign in to be the first to comment.