SDK & Integrations
Plug Zero Point Logic into any language, game engine, or AI pipeline in minutes.
🐍 Python SDK STABLE
No pip install of the SDK itself — it is a single-file drop-in. Its only dependency is the requests package.
Installation
```shell
# Option 1: Download zpl.py and drop it in your project,
# then install its one dependency
pip install requests

# Option 2: Use directly via HTTP (no SDK needed)
# See Raw HTTP tab
```
Basic Usage
```python
from zpl import ZPL

zpl = ZPL("zpl_your_api_key_here")

# Any biased input → perfectly balanced output
result = zpl.compute(bias=0.8, N=9, samples=1000)
print(result["ain"])   # → 1.0 (perfect equilibrium)
print(result["mean"])  # → ~0.5 (always centered)
```
Game Loot Drop — Guaranteed Fair Drops
```python
import random

from zpl import ZPL

zpl = ZPL("zpl_your_api_key")

def fair_loot_drop(player_luck: float) -> bool:
    """
    The player luck stat (0.0–1.0) is the input bias.
    ZPL ensures the OUTPUT is always ~50/50 — fair for everyone.
    """
    result = zpl.compute(bias=player_luck, N=9, samples=100)
    return random.random() < result["mean"]

# Test: even with extreme luck values, drops stay ~50/50
for luck in [0.05, 0.5, 0.95]:
    drops = sum(fair_loot_drop(luck) for _ in range(100))
    print(f"Luck={luck}: {drops}% drop rate")  # → all ~50%
```
AI Response Bias Detection
```python
from zpl import ZPL

zpl = ZPL("zpl_your_api_key")

ai_response = "This product is absolutely amazing and everyone should buy it!"
analysis = zpl.analyze_text(ai_response)

print(f"AIN Score: {analysis['ain_score']}")       # e.g. 0.2
print(f"Direction: {analysis['bias_direction']}")  # positive_leaning
print(f"Certified: {analysis['zpl_certified']}")   # False (needs ≥ 0.7)
```
Async / FastAPI / Django Async
```python
import asyncio, httpx

ZPL_KEY = "zpl_your_api_key"
BACKEND = "https://zpl-backend.onrender.com"

async def zpl_compute_async(bias, N=9, samples=1000):
    async with httpx.AsyncClient() as client:
        r = await client.post(
            f"{BACKEND}/compute",
            json={"bias": bias, "N": N, "samples": samples},
            headers={"X-Api-Key": ZPL_KEY},
            timeout=30.0,
        )
        r.raise_for_status()
        return r.json()
```
🟨 JavaScript / Node.js SDK STABLE
Works in Node.js, React, Vue, Next.js, and any modern browser. Single file, zero dependencies.
Node.js / CommonJS
```javascript
const { ZPL } = require('./zpl.js');

const zpl = new ZPL('zpl_your_api_key');

// CommonJS has no top-level await, so wrap calls in an async function
(async () => {
  const result = await zpl.compute({ bias: 0.8, N: 9, samples: 1000 });
  console.log(result.ain);   // → 1.0
  console.log(result.mean);  // → ~0.5
})();
```
React / Next.js
```jsx
import { useState } from 'react';
import { ZPL } from './zpl.js';

const zpl = new ZPL(process.env.NEXT_PUBLIC_ZPL_KEY);

export default function BiasChecker() {
  const [score, setScore] = useState(null);

  async function check(text) {
    const r = await zpl.analyzeText(text);
    setScore(r.ain_score);
  }

  return <div>AIN Score: {score}</div>;
}
```
Browser (vanilla JS)
```html
<!-- Include SDK -->
<script src="sdk/zpl.js"></script>
<script>
  const zpl = new ZPL('zpl_your_api_key');

  document.getElementById('runBtn').addEventListener('click', async () => {
    const result = await zpl.compute({ bias: 0.9, N: 9, samples: 500 });
    document.getElementById('output').textContent = `AIN: ${result.ain}`;
  });
</script>
```
🎮 Unity Integration C# / ALL VERSIONS
Use ZPL in Unity for provably fair loot drops, procedural generation, and physics randomness. Works with Unity 2019.4+ using the built-in UnityWebRequest API — no additional packages required.
ZplManager.cs — Drop into any Unity project
```csharp
using System;
using System.Collections;
using UnityEngine;
using UnityEngine.Networking;

// Add this component to any GameObject in your scene
public class ZplManager : MonoBehaviour
{
    private const string API_URL = "https://zpl-backend.onrender.com/compute";
    private const string API_KEY = "zpl_your_api_key_here";

    // Called from other scripts: StartCoroutine(ZplManager.instance.Compute(...));
    public static ZplManager instance;
    void Awake() { instance = this; }

    /// <summary>
    /// Run ZPL computation. bias = any value 0–1.
    /// Result: ain always 1.0, mean always ~0.5 regardless of input.
    /// </summary>
    public IEnumerator Compute(float bias, int N, int samples,
                               Action<ZplResult> onSuccess, Action<string> onError = null)
    {
        string json = $"{{\"bias\":{bias:F4},\"N\":{N},\"samples\":{samples}}}";
        byte[] body = System.Text.Encoding.UTF8.GetBytes(json);

        using var req = new UnityWebRequest(API_URL, "POST");
        req.uploadHandler = new UploadHandlerRaw(body);
        req.downloadHandler = new DownloadHandlerBuffer();
        req.SetRequestHeader("Content-Type", "application/json");
        req.SetRequestHeader("X-Api-Key", API_KEY);

        yield return req.SendWebRequest();

        if (req.result == UnityWebRequest.Result.Success)
        {
            var result = JsonUtility.FromJson<ZplResult>(req.downloadHandler.text);
            onSuccess?.Invoke(result);
        }
        else
        {
            onError?.Invoke(req.error);
            Debug.LogError($"ZPL Error: {req.error}");
        }
    }
}

// Response model (matches ZPL API JSON)
[Serializable]
public class ZplResult
{
    public float ain;
    public float mean;
    public float std;
    public float min;
    public float max;
}
```
Example: Loot Drop System
```csharp
public class LootSystem : MonoBehaviour
{
    void OnEnemyKilled(float playerLuckStat)
    {
        // playerLuckStat can be 0.0–1.0 (any bias)
        // ZPL normalizes it to ~50/50 fair output
        StartCoroutine(ZplManager.instance.Compute(
            bias: playerLuckStat, N: 9, samples: 100,
            onSuccess: result =>
            {
                bool dropped = UnityEngine.Random.value < result.mean;
                if (dropped) SpawnLoot();
                Debug.Log($"Loot dropped: {dropped} (AIN={result.ain})");
            }
        ));
    }

    void SpawnLoot() { /* your loot spawning code */ }
}
```
Example: Procedural World Generation
```csharp
public class WorldGen : MonoBehaviour
{
    void GenerateChunk(float biomeHeat) // 0=arctic, 1=desert
    {
        StartCoroutine(ZplManager.instance.Compute(
            bias: biomeHeat, N: 9, samples: 500,
            onSuccess: result =>
            {
                // result.mean ≈ 0.5 always → balanced resource distribution
                // regardless of which biome the player is in
                float resourceDensity = result.mean;
                PlaceResources(resourceDensity);
            }
        ));
    }
}
```
🔵 Unreal Engine BLUEPRINT + C++
Use ZPL in Unreal via HTTP module (C++) or as a Blueprint async node. Works with UE4 and UE5.
C++ — ZplSubsystem.h
```cpp
// ZplSubsystem.h
#pragma once

#include "CoreMinimal.h"
#include "Subsystems/GameInstanceSubsystem.h"
#include "HttpModule.h"
#include "ZplSubsystem.generated.h"

DECLARE_DYNAMIC_MULTICAST_DELEGATE_TwoParams(FOnZplResult, float, AinScore, float, Mean);

UCLASS()
class YOURGAME_API UZplSubsystem : public UGameInstanceSubsystem
{
    GENERATED_BODY()

public:
    UPROPERTY(BlueprintAssignable, Category="ZPL")
    FOnZplResult OnResult;

    UFUNCTION(BlueprintCallable, Category="ZPL")
    void Compute(float Bias, int32 N = 9, int32 Samples = 1000);

private:
    FString ApiKey = TEXT("zpl_your_api_key_here");
    void OnResponseReceived(FHttpRequestPtr Req, FHttpResponsePtr Resp, bool bSuccess);
};
```
C++ — ZplSubsystem.cpp
```cpp
#include "ZplSubsystem.h"
#include "Http.h"
#include "Json.h"

void UZplSubsystem::Compute(float Bias, int32 N, int32 Samples)
{
    // auto keeps this compiling on both UE4 and UE5: the thread-safety
    // template parameter of the returned TSharedRef changed in 4.26
    auto Req = FHttpModule::Get().CreateRequest();
    Req->SetURL(TEXT("https://zpl-backend.onrender.com/compute"));
    Req->SetVerb(TEXT("POST"));
    Req->SetHeader(TEXT("Content-Type"), TEXT("application/json"));
    Req->SetHeader(TEXT("X-Api-Key"), ApiKey);
    Req->SetContentAsString(
        FString::Printf(TEXT("{\"bias\":%.4f,\"N\":%d,\"samples\":%d}"), Bias, N, Samples)
    );
    Req->OnProcessRequestComplete().BindUObject(this, &UZplSubsystem::OnResponseReceived);
    Req->ProcessRequest();
}

void UZplSubsystem::OnResponseReceived(FHttpRequestPtr, FHttpResponsePtr Resp, bool bSuccess)
{
    if (!bSuccess) return;

    TSharedPtr<FJsonObject> Json;
    TSharedRef<TJsonReader<>> Reader = TJsonReaderFactory<>::Create(Resp->GetContentAsString());
    if (FJsonSerializer::Deserialize(Reader, Json))
    {
        float AinScore = Json->GetNumberField(TEXT("ain"));
        float Mean = Json->GetNumberField(TEXT("mean"));
        OnResult.Broadcast(AinScore, Mean);
    }
}
```
Blueprint Usage
```
// In any Blueprint (e.g. your Game Mode or Loot Blueprint):
//
// 1. Get Game Instance → Get Subsystem (ZplSubsystem)
// 2. Bind event to "OnResult"
// 3. Call "Compute" with your bias value
// 4. Use the Mean output (always ~0.5) for your random check
//
// Example flow:
//   [Get Subsystem] → [Compute (Bias=PlayerLuck, N=9, Samples=100)]
//                        ↓ OnResult fires
//   [Random Float] < [Mean] → [Spawn Loot]
```
🤖 AI Pipelines BIAS DETECTION
Use ZPL to measure and neutralize bias in AI-generated content. Works with any LLM — OpenAI, Anthropic, Groq, Ollama, or your own model.
Python — Wrap any LLM with ZPL bias check
```python
import openai

from zpl import ZPL

zpl = ZPL("zpl_your_key")
client = openai.OpenAI(api_key="sk-...")

def zpl_chat(prompt: str, require_certification=True):
    """Call GPT and verify the response is ZPL-certified neutral."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}]
    )
    text = response.choices[0].message.content

    analysis = zpl.analyze_text(text)
    if require_certification and not analysis["zpl_certified"]:
        print(f"⚠️ Biased response detected (AIN={analysis['ain_score']:.2f})")
        print(f"   Direction: {analysis['bias_direction']}")
        # Optionally retry or flag for review

    return {
        "text": text,
        "ain": analysis["ain_score"],
        "certified": analysis["zpl_certified"]
    }

result = zpl_chat("What are the pros and cons of remote work?")
print(f"AIN: {result['ain']} | Certified: {result['certified']}")
```
ZPL AI Proxy — Use ZPL's built-in proxy (no local analysis needed)
```python
import requests

# ZPL's proxy calls the AI AND runs bias analysis in one request.
# Your AI key is never stored — used once, discarded.
response = requests.post(
    "https://zpl-backend.onrender.com/ai/proxy",
    headers={"Authorization": "Bearer YOUR_ZPL_JWT_TOKEN"},
    json={
        "provider": "openai",  # openai | anthropic | groq
        "model": "gpt-4",
        "messages": [{"role": "user", "content": "Is AI safe?"}],
        "user_api_key": "sk-your-openai-key",
        "zpl_options": {"filter_mode": "rebalance"}
        # filter_mode: "analyze" | "rebalance" | "strict"
    }
).json()

print(response["response"])                     # AI answer
print(response["zpl_filter"]["ain_score"])      # 0.0–1.0
print(response["zpl_filter"]["zpl_certified"])  # True/False
```
LangChain Integration
```python
from langchain.callbacks.base import BaseCallbackHandler

from zpl import ZPL

class ZplBiasCallback(BaseCallbackHandler):
    """LangChain callback that checks every LLM output for bias."""

    def __init__(self, api_key: str):
        self.zpl = ZPL(api_key)

    def on_llm_end(self, response, **kwargs):
        text = response.generations[0][0].text
        analysis = self.zpl.analyze_text(text)
        if not analysis["zpl_certified"]:
            print(f"⚠️ LLM output not ZPL certified (AIN={analysis['ain_score']:.2f})")

# Usage:
# llm = ChatOpenAI(callbacks=[ZplBiasCallback("zpl_your_key")])
```
🔌 Raw HTTP API ANY LANGUAGE
Use ZPL from any language with HTTP support — Rust, Go, PHP, Kotlin, Swift, curl, etc.
API Endpoints
| METHOD | ENDPOINT | AUTH | DESCRIPTION |
|---|---|---|---|
| POST | /compute | X-Api-Key | Main equilibrium computation |
| GET | /sweep | X-Api-Key | Full bias sweep (Basic+ plan) |
| POST | /ai/analyze | Bearer JWT | Text bias analysis |
| POST | /ai/proxy | Bearer JWT | AI proxy with bias filter |
| GET | /health | None | Health check |
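As a quick companion to the auth column above, here is a minimal Python sketch that maps each endpoint to the header scheme it expects. The `build_request` helper is illustrative only — it is not part of any official SDK:

```python
BACKEND = "https://zpl-backend.onrender.com"

def build_request(endpoint: str, api_key: str = None, jwt: str = None):
    """Return (url, headers) for a ZPL endpoint using the right auth scheme."""
    headers = {"Content-Type": "application/json"}
    if endpoint.startswith("/ai/"):
        # /ai/analyze and /ai/proxy authenticate with a Bearer JWT
        headers["Authorization"] = f"Bearer {jwt}"
    elif endpoint != "/health":
        # /compute and /sweep authenticate with an API key header
        headers["X-Api-Key"] = api_key
    return BACKEND + endpoint, headers
```

Pass the returned tuple straight to your HTTP client of choice; `/health` needs no auth at all.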
curl
```shell
curl -X POST https://zpl-backend.onrender.com/compute \
  -H "Content-Type: application/json" \
  -H "X-Api-Key: zpl_your_api_key" \
  -d '{"bias": 0.8, "N": 9, "samples": 1000}'
```
Go
```go
package main

import (
	"bytes"
	"encoding/json"
	"net/http"
)

func zplCompute(apiKey string, bias float64, N, samples int) (map[string]any, error) {
	body, _ := json.Marshal(map[string]any{
		"bias": bias, "N": N, "samples": samples,
	})
	req, _ := http.NewRequest("POST", "https://zpl-backend.onrender.com/compute",
		bytes.NewReader(body))
	req.Header.Set("Content-Type", "application/json")
	req.Header.Set("X-Api-Key", apiKey)

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()

	var result map[string]any
	if err := json.NewDecoder(resp.Body).Decode(&result); err != nil {
		return nil, err
	}
	return result, nil
}
```
Request Body — /compute
```jsonc
{
  "bias": 0.8,     // float, 0.0–1.0 (required)
  "N": 9,          // int: 3|9|16|25|32|64 (optional, default 9)
  "samples": 1000  // int: 100–50000 (optional, default 1000)
}
```
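Checking these constraints client-side avoids a 422 round-trip. Below is a hedged sketch of such a validator, derived from the limits documented above; the function name is illustrative, not part of the SDK:

```python
VALID_N = {3, 9, 16, 25, 32, 64}  # allowed N values per the /compute spec

def validate_compute_params(bias: float, N: int = 9, samples: int = 1000):
    """Raise ValueError if parameters violate the documented /compute limits."""
    if not 0.0 <= bias <= 1.0:
        raise ValueError(f"bias must be in [0.0, 1.0], got {bias}")
    if N not in VALID_N:
        raise ValueError(f"N must be one of {sorted(VALID_N)}, got {N}")
    if not 100 <= samples <= 50000:
        raise ValueError(f"samples must be in [100, 50000], got {samples}")
    return {"bias": bias, "N": N, "samples": samples}
```

The returned dict can be passed directly as the JSON body of the request.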
Response
```jsonc
{
  "ain": 1.0,        // AIN score — always 1.0 (perfect equilibrium)
  "mean": 0.5002,    // Output mean — always ~0.5
  "std": 0.2887,     // Standard deviation
  "min": 0.0,
  "max": 1.0,
  "histogram": [...] // Distribution bins
}
```
Error Codes
| CODE | MEANING | ACTION |
|---|---|---|
| 401 | Invalid or expired API key | Check your key in Dashboard |
| 403 | Plan limit exceeded (N too large or no keys) | Upgrade plan |
| 429 | Monthly request limit reached | Upgrade or wait for reset |
| 422 | Invalid parameters | Check bias/N/samples values |
| 504 | ZPL engine timeout | Retry or reduce samples |
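Of the codes above, 429 and 504 are transient and worth retrying with backoff, while 401/403/422 should fail fast. A minimal Python sketch of that policy follows; `call_with_retry` and its injected `send` callable are illustrative names, not part of the SDK:

```python
import time

RETRYABLE = {429, 504}  # rate limit and engine timeout, per the table above

def call_with_retry(send, max_attempts=3, backoff=1.0):
    """Call send() -> (status, body); retry 429/504 with exponential backoff.

    `send` is any zero-argument callable performing the actual HTTP request,
    injected so the retry policy stays testable without a network.
    """
    for attempt in range(max_attempts):
        status, body = send()
        if status == 200:
            return body
        if status not in RETRYABLE or attempt == max_attempts - 1:
            raise RuntimeError(f"ZPL request failed with HTTP {status}")
        time.sleep(backoff * (2 ** attempt))  # 1s, 2s, 4s, ...
```

For a 504 specifically, reducing `samples` before the retry (as the table suggests) lowers the chance of timing out again.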