Open framework for confidential AI
Updated Apr 20, 2026 · Rust
FIBO is a state-of-the-art and the first open-source JSON-native text-to-image model, built for controllable, predictable, and legally safe image generation.
Neural Network Verification Software Tool (https://www.verivital.com).
Sagar is a Python-based command-line virtual assistant for CSE students and cybersecurity learners. It supports single-line and multi-line commands to open trusted websites, play curated music links, and answer questions using an AI model—designed for safe automation, learning, and terminal-first exploration.
The course provides guidance on best practices for prompting and building applications with Llama 2, a family of powerful open models released under a commercial-use license.
AAAI 2025 Tutorial on AI Safety
Safety-Constrained Reinforcement Learning for Assistive Robot Navigation
A safety harness for autonomous AI agents: a spec-driven AI factory that works with any agentic CLI. Language-agnostic and safe by design.
Evaluate high school math reasoning in LLMs with baseline and Chain-of-Thought (CoT) prompts. Includes confidence calibration metrics, JSON output parsing, and reliability analysis.
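As an illustration of the kind of confidence calibration metric such a benchmark reports, here is a minimal sketch of Expected Calibration Error (ECE), a standard measure of how well a model's stated confidence matches its accuracy. The function name and binning scheme are generic assumptions, not this repository's actual API.

```python
# Illustrative Expected Calibration Error (ECE): bin predictions by stated
# confidence, then compare each bin's average confidence to its accuracy.
def expected_calibration_error(confidences, correct, n_bins=10):
    """confidences: floats in [0, 1]; correct: booleans; returns ECE in [0, 1]."""
    n = len(confidences)
    ece = 0.0
    for b in range(n_bins):
        lo, hi = b / n_bins, (b + 1) / n_bins
        # Each sample falls in exactly one bin (top edge closed for the last bin).
        idx = [i for i, c in enumerate(confidences)
               if lo <= c < hi or (b == n_bins - 1 and c == hi)]
        if not idx:
            continue
        avg_conf = sum(confidences[i] for i in idx) / len(idx)
        accuracy = sum(1 for i in idx if correct[i]) / len(idx)
        # Weight each bin's confidence/accuracy gap by its share of samples.
        ece += (len(idx) / n) * abs(avg_conf - accuracy)
    return ece
```

A perfectly calibrated model (e.g., 90% confident and correct 90% of the time within each bin) scores 0; larger values indicate over- or under-confidence.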
Production-Grade LLM Alignment Engine (TruthProbe + ADT)
Heike — The deterministic runtime for reliable AI agents. No more prompt roulette. 侍
SOEA-Plus (PDEMC): a 3-task biomedical metacognition benchmark evaluating LLM metacognitive control across 4 frontier models on 300 real PubMed examples. Reveals the Control Collapse Gap.
A lightweight, fast, and powerful external safety guard that analyzes text prompts and images using the xAI Grok API. Designed for AI research labs, red/blue/purple teams, security researchers, compliance teams, and enterprise users who need reliable, auditable, and privacy-conscious prompt blocking and image analysis — even under restricted API conditions.
Official implementation of "Uncertainty-Guided Semi-Supervised Learning for Safe Medical Image Classification".
A risk-aware agent framework that dynamically routes AI tasks to autonomous execution, tool usage, human approval, or refusal using LangGraph and structured decision pipelines.
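The routing idea this entry describes can be sketched as a simple risk-tiered dispatcher. This is a hypothetical illustration of the pattern, without the LangGraph dependency; the thresholds, enum names, and scoring interface are assumptions, not the framework's actual policy.

```python
# Hypothetical risk-tiered router: map a task's risk score to one of four
# handling modes. Thresholds here are illustrative placeholders.
from enum import Enum

class Route(Enum):
    AUTONOMOUS = "autonomous"    # low risk: execute directly
    TOOL_USE = "tool_use"        # moderate risk: restricted tool calls only
    HUMAN_APPROVAL = "human"     # high risk: queue for human sign-off
    REFUSE = "refuse"            # unacceptable risk: decline the task

def route_task(risk_score: float) -> Route:
    """Map a risk score in [0, 1] to a handling mode."""
    if risk_score < 0.25:
        return Route.AUTONOMOUS
    if risk_score < 0.5:
        return Route.TOOL_USE
    if risk_score < 0.8:
        return Route.HUMAN_APPROVAL
    return Route.REFUSE
```

In a real pipeline the score would come from a structured classifier over the task description, and the human-approval branch would block until sign-off.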
Production-ready examples and best practices for designing safe, scoped MCP tools for agentic AI.
A skill for AI coding agents that scaffolds a safe, multi-model chatbot for Telegram or Discord. Supports Claude, GPT, Gemini, and OpenAI-compatible backends. Nine safety layers on by default. Named for R. Daneel Olivaw from Asimov.
A clean, robust, and production-oriented CLI tool for analyzing images using the xAI Grok API. Designed for AI research labs, red/blue/purple teams, security researchers, compliance teams, and enterprise users who need reliable, auditable, and privacy-conscious image analysis — even under restricted API conditions.