Guardial | Your LLM Gaurd
Link to open source: https://github.com/ACE-codes21/Gaurdial
Link to Live Project: https://gaurdial-93219103935.asia-south2.run.app/
How Guardial is useful for others
-
Prevents common LLM risks before they reach production
- Blocks prompt injection and jailbreak attempts that could reveal system prompts, secrets, or sensitive retrievals.
- Screens for harmful or policy‑violating outputs before they reach end users.
-
Protects sensitive data with layered redaction
- Automatic PII redaction (regex + spaCy NER + optional LLM pass) ensures names, emails, phone numbers, and other entities are removed or masked from prompts and responses.
- Useful for apps handling customer data, legal text, or medical notes.
-
Makes safety decisions explainable and auditable
- Every request returns a human‑readable trace (step, strategy, decision, reason) so developers, reviewers, and compliance teams can see exactly why something was blocked or modified.
- Structured logs (EVENT_JSON) make it trivial to ingest events into SIEMs or monitoring dashboards.
-
Policy‑driven and easy to tune
- policy.json lets teams flip detectors on/off, change strategies (ml | heuristic | llm | hybrid), and adjust thresholds without code changes.
- Enables operational control across environments (dev, staging, prod) with minimal friction.
-
Integrates quickly with existing stacks
- Simple REST API (
POST /shield_prompt) and a demo chat UI allow fast prototyping and integration with web apps, bots, or backend pipelines. - Deploys on Google Cloud Run (Procfile + gunicorn) or any WSGI host; secrets handled via Secret Manager for secure production use.
- Simple REST API (
-
Reduces development and compliance overhead
- Teams can rely on a tested safety pipeline instead of building ad hoc safeguards per feature.
- Lowers the barrier for audits and incident investigations thanks to traceability.
-
Flexible for different risk profiles
- Low-risk apps can use heuristic or ml strategies for speed; high-stakes apps can use hybrid or llm strategies for higher recall and explainability.
- Can be extended to add domain‑specific detectors or external screening services.
-
Great for demos, training, and internal governance
- The live Flow panel and raw JSON view are ideal for stakeholder demos, security reviews, and training content to show exactly how guardrails behave.
- Helpful when onboarding non‑technical reviewers who need to understand decisions.
Quick example use cases
- Customer‑facing chatbots: block malicious prompts and redact PII before sending user text to a model.
- Internal knowledge bases + retrieval: prevent data exfiltration via prompt injection.
- Compliance workflows: provide retained, structured evidence that safety checks ran for each high‑risk request.
- Rapid prototyping: add safety to experimental LLM features with minimal engineering effort.
Call to action
- Try the live demo: https://gaurdial-93219103935.asia-south2.run.app/
- View the code: https://github.com/ACE-codes21/Gaurdial
- Want help integrating Guardial into your project? I can produce example integration snippets (Node/Python) for common server and client patterns.
This build was uploaded as a hackathon project
