Gibs Docs

Integration Examples

Code examples for the Python SDK, the JavaScript/TypeScript SDK, the MCP server, structured responses, GitHub Actions, and batch classification.

Python SDK

pip install gibs

from gibs import GibsClient
 
client = GibsClient(api_key="gbs_live_abc123...")
 
# Classify an AI system
result = client.classify("AI chatbot that helps customers choose insurance products")
print(f"Risk level: {result.risk_level}")
print(f"Sources: {[s.article_id for s in result.sources]}")
 
# Ask a compliance question (auto-detects regulation)
answer = client.check("What are the transparency obligations for chatbots?")
print(answer.answer)
print(answer.sources)

Async support:

from gibs import AsyncGibsClient
 
async with AsyncGibsClient() as client:  # reads GIBS_API_KEY env var
    result = await client.classify("CV screening tool for recruitment")
    answer = await client.check("Does GDPR apply to employee monitoring?")
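When auditing many systems, the async client lets you fan out classifications concurrently. A minimal sketch of the pattern with `asyncio.gather` — the `classify` function below is a runnable stand-in for `AsyncGibsClient.classify`, not the real client:

```python
import asyncio

# Stand-in for AsyncGibsClient.classify (assumption: real calls go through the
# async client shown above); this stub returns a canned risk level so the
# fan-out pattern is runnable without network access.
async def classify(description: str) -> str:
    await asyncio.sleep(0)  # placeholder for network latency
    return "high" if "recruitment" in description else "minimal"

async def classify_all(descriptions: list[str]) -> list[str]:
    # Fire all classifications concurrently rather than awaiting one at a time
    return list(await asyncio.gather(*(classify(d) for d in descriptions)))

results = asyncio.run(classify_all([
    "CV screening tool for recruitment",
    "ML model detecting fraudulent transactions",
]))
print(results)  # ['high', 'minimal']
```

With the real client, each `classify(d)` would be `client.classify(d)` inside the `async with` block.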

JavaScript / TypeScript SDK

npm install @gibs-dev/sdk

import { GibsClient } from "@gibs-dev/sdk";
 
const client = new GibsClient({ apiKey: "gbs_live_abc123..." });
 
// Classify an AI system
const result = await client.classify({
  description: "AI chatbot that helps customers choose insurance products",
});
console.log(`Risk level: ${result.risk_level}`);
console.log(`Sources: ${result.sources.map(s => s.article_id)}`);
 
// Ask a compliance question
const answer = await client.check({
  question: "Do I need to disclose that my customer service chatbot is AI-powered?",
});
console.log(answer.answer);

MCP Server (Claude / Cursor / AI Agents)

Connect Gibs as an MCP tool so your AI assistant can check compliance inline:

{
  "mcpServers": {
    "gibs": {
      "url": "https://mcp.gibs.dev/sse",
      "env": {
        "GIBS_API_KEY": "gbs_live_abc123..."
      }
    }
  }
}

Once connected, your AI assistant can use Gibs tools directly:

You: "Is our new facial recognition feature compliant with EU regulations?"

Assistant: [calls gibs.classify] Your facial recognition system is classified as high-risk under Article 6(2) and Annex III, point 1(a). You need to implement: risk management (Art. 9), data governance (Art. 10), human oversight (Art. 14), and register in the EU database (Art. 49).
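If you want to act on a classification like the one above in your own tooling, the cited obligations can be kept as a simple lookup. An illustrative mapping (an assumption, not part of the SDK) of the articles the assistant names for a high-risk result:

```python
# Illustrative mapping (assumption, not an SDK structure): the obligations
# cited above for a high-risk classification, keyed by EU AI Act article.
HIGH_RISK_OBLIGATIONS = {
    "Art. 9": "risk management system",
    "Art. 10": "data governance",
    "Art. 14": "human oversight",
    "Art. 49": "registration in the EU database",
}

for article, obligation in HIGH_RISK_OBLIGATIONS.items():
    print(f"{article}: {obligation}")
```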

Structured Response Mode

Get machine-readable parsed sections instead of plain markdown:

from gibs import GibsClient
 
client = GibsClient(api_key="gbs_live_abc123...")
 
result = client.check(
    "What are the risk management obligations for high-risk AI?",
    response_format="structured",
)
 
# Numeric confidence for programmatic decisions
if result.confidence_score >= 0.7:
    print(f"Requirements ({len(result.structured.requirements)}):")
    for req in result.structured.requirements:
        print(f"  - {req}")
    print(f"\nArticles: {', '.join(result.structured.articles_cited)}")
else:
    print("Low confidence — verify with legal counsel")
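The 0.7 gate above can be generalized into a small routing helper. A sketch — the intermediate 0.4 tier is illustrative, not an SDK recommendation; tune both thresholds to your risk appetite:

```python
def route_answer(confidence_score: float, threshold: float = 0.7) -> str:
    """Decide how to handle a structured answer based on its confidence.

    The 0.7 default mirrors the gate used above; the 0.4 tier is an
    illustrative middle ground for human review.
    """
    if confidence_score >= threshold:
        return "auto-accept"
    if confidence_score >= 0.4:
        return "human-review"
    return "escalate-to-counsel"

print(route_answer(0.85))  # auto-accept
print(route_answer(0.55))  # human-review
print(route_answer(0.20))  # escalate-to-counsel
```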
The same request in TypeScript:

const result = await client.check({
  question: "What are the risk management obligations for high-risk AI?",
  response_format: "structured",
});
 
if (result.confidence_score >= 0.7) {
  console.log(`Summary: ${result.structured.summary}`);
  console.log(`Requirements: ${result.structured.requirements.length}`);
  console.log(`Articles: ${result.structured.articles_cited.join(", ")}`);
}

GitHub Actions (CI/CD)

Add compliance checks to your deployment pipeline. The action fails the job, blocking the PR, when a system is classified as prohibited (or as high-risk, unless allow-high-risk is set).

name: AI Compliance Gate
on: [pull_request]
 
jobs:
  compliance:
    runs-on: ubuntu-latest
    steps:
      - name: Classify AI System
        uses: gibbrdev/gibs-action@v1
        with:
          mode: classify
          api-key: ${{ secrets.GIBS_API_KEY }}
          description: "Facial recognition system for building access control"
          data-types: "biometric"
          sector: "security"

Check compliance questions as part of your pipeline:

      - name: GDPR Check
        uses: gibbrdev/gibs-action@v1
        with:
          mode: check
          api-key: ${{ secrets.GIBS_API_KEY }}
          question: "Do we need a DPIA for automated credit scoring?"
          regulation: gdpr

Use outputs in subsequent steps:

      - name: Classify
        id: classify
        uses: gibbrdev/gibs-action@v1
        with:
          mode: classify
          api-key: ${{ secrets.GIBS_API_KEY }}
          description: "AI hiring tool that screens resumes"
          allow-high-risk: "true"
 
      - name: Log Result
        if: steps.classify.outputs.risk-level == 'high'
        run: echo "High-risk system — ensure Article 9-15 obligations are met"
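The gating logic the action applies can also run locally, e.g. in a pre-push hook. A hypothetical sketch mirroring the behavior described above — in a real script `risk_level` would come from `client.classify(...)`:

```python
# Hypothetical local gate mirroring the Action's described behavior: return a
# nonzero exit code when the classified risk level should block a deploy.
BLOCKING = {"prohibited"}

def gate(risk_level: str, allow_high_risk: bool = False) -> int:
    if risk_level in BLOCKING:
        return 1  # always blocked
    if risk_level == "high" and not allow_high_risk:
        return 1  # blocked unless explicitly allowed, as with allow-high-risk
    return 0

print(gate("limited"))      # 0 — deploy proceeds
print(gate("high"))         # 1 — blocked
print(gate("high", True))   # 0 — allowed, like allow-high-risk: "true" above
```

In a hook you would call `sys.exit(gate(result.risk_level))`.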

Full documentation: github.com/gibbrdev/gibs-action

Batch Classification

Classify multiple AI systems in one script — useful for auditing your product registry:

from gibs import GibsClient
 
client = GibsClient(api_key="gbs_live_abc123...")
 
systems = [
    {"name": "CV Screener", "description": "AI that ranks job applicants based on CV analysis"},
    {"name": "Chatbot", "description": "Customer support chatbot using LLM"},
    {"name": "Fraud Detection", "description": "ML model detecting fraudulent transactions"},
]
 
for system in systems:
    result = client.classify(system["description"])
    flag = "!" if result.risk_level in ("prohibited", "high") else " "
    print(f"[{flag}] {system['name']}: {result.risk_level}")

Output:

[!] CV Screener: high
[ ] Chatbot: limited
[ ] Fraud Detection: minimal
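To turn an audit run like the one above into a summary, tally the risk levels. A minimal sketch using the levels from the output above:

```python
from collections import Counter

# Risk levels collected from a batch run — here, the three from the output above
risk_levels = ["high", "limited", "minimal"]

summary = Counter(risk_levels)
needs_attention = sum(summary[level] for level in ("prohibited", "high"))

print(dict(summary))    # {'high': 1, 'limited': 1, 'minimal': 1}
print(needs_attention)  # 1
```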