ZPL AI Filter
Every AI response,
bias-certified
Pass any AI provider through ZPL's bias-neutrality engine. Your messages, your API key — ZPL analyzes every response for ideological lean and optionally rebalances it toward true equilibrium.
How ZPL AI Filter works
Three steps from raw AI output to bias-certified response.
Your API key — your cost
Send your own OpenAI, Anthropic, or Groq API key with every request (BYOK). ZPL never stores it — it's used once, in memory, and discarded immediately after the call.
AI generates its response
ZPL forwards your messages to the provider, and the AI generates its response normally. ZPL acts as transparent middleware: your messages are never modified before delivery.
ZPL scores & filters
The response is tokenized and analyzed for sentiment imbalance. An AIN Score (0.0 = biased, 1.0 = neutral) is computed using ZPL's equilibrium algorithm. In rebalance or strict mode, biased responses are automatically retried with a neutralizing system prompt.
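The scoring step can be sketched as a toy version. ZPL's actual equilibrium algorithm is not published here; the sentiment-word lists and the imbalance formula below are assumptions, loosely modeled on the `pos_ratio`/`neg_ratio` fields in the response schema:

```python
# Illustrative sketch only — not ZPL's real algorithm. Assumes the AIN
# Score is derived from the balance of positive vs. negative sentiment
# words, where a perfectly balanced (or sentiment-free) text scores 1.0.

POSITIVE = {"good", "great", "best", "safe", "reliable"}
NEGATIVE = {"bad", "worst", "dangerous", "unreliable", "risky"}

def toy_ain_score(text: str) -> float:
    """Return a 0.0-1.0 neutrality score: 1.0 = perfectly balanced."""
    words = [w.strip(".,!?") for w in text.lower().split()]
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    total = pos + neg
    if total == 0:
        return 1.0  # no sentiment vocabulary found — treat as neutral
    # Imbalance is the absolute gap between the two sentiment counts.
    imbalance = abs(pos - neg) / total
    return 1.0 - imbalance

print(toy_ain_score("Nuclear power is safe but waste storage is risky."))  # balanced -> 1.0
print(toy_ain_score("This is the best, most reliable option."))           # one-sided -> 0.0
```

In rebalance or strict mode, a score below the certification threshold would trigger a retry with a neutralizing system prompt, as described above.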
Try the ZPL AI Proxy
Select a provider, paste your API key, write a message, and see bias analysis in real time.
AI Response
ZPL Fairness Report
Built for high-stakes domains
Wherever AI bias can cause real harm, ZPL filters it out.
HR & Recruitment
Ensure AI screening tools don't systematically favor or reject candidates based on biased language in job descriptions or evaluations.
Content Moderation
Verify that AI content classifiers apply standards evenly across topics, communities, and ideological perspectives.
Financial Advice
Detect when AI financial guidance leans toward overly optimistic or pessimistic framing that could influence user decisions.
Medical Q&A
Flag responses that over-reassure or catastrophize. Balanced medical information respects patient autonomy.
Legal Research
Ensure AI legal summaries present both sides of case law and argument without nudging toward a predetermined conclusion.
Education Tools
Keep AI tutors and essay assistants pedagogically neutral, presenting multiple viewpoints rather than steering students.
Your API key stays yours
We built the proxy around one non-negotiable principle: ZPL never stores your AI API keys.
HTTPS Only
All traffic between your browser, ZPL's backend, and the AI provider is encrypted via TLS. No plain-text connections accepted.
Used Once, Gone
Your API key lives in memory for the duration of one HTTP request — typically under 5 seconds. It is never written to disk, database, logs, or cache.
Never Logged
ZPL's usage logs track only: timestamp, provider name, status code, and response time. API key values are explicitly excluded from all log entries.
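As an illustration of this redaction rule, a log-entry builder that structurally cannot leak key material might look like the following. The field names are assumptions for the sketch, not ZPL's actual log format:

```python
import time

# Sketch of the redaction principle: the log entry is built only from
# metadata, so the user's API key never reaches the logging layer.
def log_proxy_call(provider: str, status: int, started: float) -> dict:
    """Build a log entry containing only call metadata, never key material."""
    return {
        "timestamp": time.time(),
        "provider": provider,
        "status_code": status,
        "response_time_ms": round((time.time() - started) * 1000, 1),
        # Deliberately absent: request body, headers, API key values.
    }

entry = log_proxy_call("groq", 200, started=time.time())
print(sorted(entry))  # only the four metadata fields
```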
JWT Auth Required
The proxy endpoint requires a valid ZPL account JWT. Rate limits prevent abuse. Enterprise users get dedicated throughput quotas.
AI Proxy call limits
Proxy calls are counted separately from ZPL compute calls. Upgrade anytime.
- AI Proxy access
- ZPL compute (N≤9)
- ZPL API access
- API keys
- AI Proxy (100/mo)
- ZPL compute (N≤16)
- 3 API keys
- Usage dashboard
- AI Proxy (1,000/mo)
- ZPL compute (N≤64)
- 5 API keys
- Strict mode & sweep
- Priority support
- Unlimited AI Proxy
- All N sizes
- 20 API keys
- SLA & audit logs
- Dedicated support
- Source code audit
Integrate in minutes
Use the ZPL proxy from any language. Authenticate with your ZPL JWT token.
import requests

# Your ZPL JWT token (from /auth/login)
ZPL_TOKEN = "eyJhbGciOiJIUzI1NiIs..."

# Your own Groq key (free at console.groq.com)
MY_GROQ_KEY = "gsk_your_groq_key_here"

response = requests.post(
    "https://zpl-backend.onrender.com/ai/proxy",
    headers={"Authorization": f"Bearer {ZPL_TOKEN}"},
    json={
        "provider": "groq",
        "model": "llama3-8b-8192",
        "messages": [
            {"role": "user", "content": "What are the pros and cons of nuclear energy?"}
        ],
        "user_api_key": MY_GROQ_KEY,
        "zpl_options": {
            "filter_mode": "rebalance"  # analyze | rebalance | strict
        }
    }
)

data = response.json()
print(data["response"])
print(f"AIN Score: {data['zpl_filter']['ain_score']}")
print(f"ZPL Certified: {data['zpl_filter']['zpl_certified']}")
print(f"Bias direction: {data['zpl_filter']['bias_direction']}")
// Using the built-in ZplApi client (api.js)
// Make sure the user is logged in (ZplApi.token is set)
const result = await ZplApi.aiProxy(
  'groq',               // provider
  'llama3-8b-8192',     // model
  [{ role: 'user', content: 'Pros and cons of nuclear energy?' }],
  'gsk_your_groq_key',  // your API key — never stored
  'rebalance'           // filter mode
);

console.log(result.response);
console.log('AIN Score:', result.zpl_filter.ain_score);
console.log('Certified:', result.zpl_filter.zpl_certified);

// Or raw fetch:
const res = await fetch('https://zpl-backend.onrender.com/ai/proxy', {
  method: 'POST',
  headers: {
    'Authorization': `Bearer ${zplToken}`,
    'Content-Type': 'application/json'
  },
  body: JSON.stringify({
    provider: 'openai',
    model: 'gpt-4o',
    messages: [{ role: 'user', content: '...' }],
    user_api_key: 'sk-...',
    zpl_options: { filter_mode: 'strict' }
  })
});
const data = await res.json();
# Analyze mode — just score, no modification
curl -X POST https://zpl-backend.onrender.com/ai/proxy \
  -H "Authorization: Bearer YOUR_ZPL_JWT" \
  -H "Content-Type: application/json" \
  -d '{
    "provider": "groq",
    "model": "llama3-8b-8192",
    "messages": [
      {"role": "user", "content": "What are the pros and cons of nuclear energy?"}
    ],
    "user_api_key": "gsk_YOUR_GROQ_KEY",
    "zpl_options": {"filter_mode": "analyze"}
  }'

# Example response:
# {
#   "response": "Nuclear energy has several advantages...",
#   "provider": "groq",
#   "model": "llama3-8b-8192",
#   "zpl_filter": {
#     "mode": "analyze",
#     "ain_score": 0.847,
#     "bias_direction": "neutral",
#     "attempts": 1,
#     "was_rebalanced": false,
#     "zpl_certified": true
#   },
#   "usage": {"proxy_calls_used": 12, "proxy_calls_limit": 100}
# }

# Standalone text analysis (no AI call needed):
curl -X POST https://zpl-backend.onrender.com/ai/analyze \
  -H "Authorization: Bearer YOUR_ZPL_JWT" \
  -H "Content-Type: application/json" \
  -d '{"text": "This is definitely the best solution available. You should always choose it."}'
Response Schema
{
  "response":   // string — the AI's full response text
  "provider":   // "openai" | "anthropic" | "groq"
  "model":      // model used (echoed back)
  "zpl_filter": {
    "mode":                   // "analyze" | "rebalance" | "strict"
    "ain_score":              // float 0.0–1.0 (higher = more neutral)
    "bias_direction":         // "neutral" | "positive_leaning" | "negative_leaning"
    "attempts":               // 1–3 (how many AI calls were made)
    "was_rebalanced":         // bool — true if ZPL retried with neutral prompt
    "pos_ratio":              // fraction of sentiment words that are positive
    "neg_ratio":              // fraction of sentiment words that are negative
    "word_count":             // total words in response
    "sentiment_words_found":  // count of detected sentiment vocabulary
    "zpl_certified":          // bool — true if ain_score >= 0.7
  },
  "usage": {
    "proxy_calls_used":   // your running total this month
    "proxy_calls_limit":  // your plan's monthly limit ("unlimited" for enterprise)
  }
}
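A client consuming this schema will typically gate on the certification flag before using the text. The helper below is an illustrative sketch (the function name and the sample payload are made up for the example; the field names match the schema above):

```python
# Client-side helper: accept the AI text only when ZPL certified it,
# otherwise surface a short bias report. Sample payload is illustrative.

def check_zpl_response(data: dict, min_score: float = 0.7) -> str:
    """Return the response text if certified; raise ValueError otherwise."""
    zf = data["zpl_filter"]
    if zf["zpl_certified"] and zf["ain_score"] >= min_score:
        return data["response"]
    raise ValueError(
        f"Not certified: ain_score={zf['ain_score']:.3f}, "
        f"bias_direction={zf['bias_direction']}, attempts={zf['attempts']}"
    )

sample = {
    "response": "Nuclear energy has several advantages...",
    "zpl_filter": {
        "ain_score": 0.847,
        "bias_direction": "neutral",
        "attempts": 1,
        "zpl_certified": True,
    },
}
print(check_zpl_response(sample))  # prints the certified response text
```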