AI Testing & Debugging

Automated testing, debugging, and quality assurance

Tool List

12 tools
Expect

Expect lets agents test code in a real browser by scanning diffs, generating plans, and running execution workflows from one command

TestSprite

TestSprite is an AI testing agent that can plan, write, execute, debug, and report software tests end to end

Ogoron

Ogoron is an automated testing platform where autonomous agents plan, generate, and maintain tests directly from your codebase

QA.tech

QA.tech uses AI agents to explore web apps, generate complete test suites, and run QA checks on release, schedule, or manual triggers

Cekura

End-to-end testing and observability for conversational AI. Run pre-production simulations and monitor production conversations for voice and chat agents

Glassbrain

Glassbrain helps teams visually debug LLM apps by capturing every OpenAI, Anthropic, and LangChain call and replaying failed runs

Future AGI

Future AGI helps teams build, evaluate, optimize, and monitor LLM and AI agent applications with multimodal quality testing and observability

LangWatch

Evaluation and testing platform for LLM apps and AI agents

LLM-Citeops

LLM-Citeops is a Node.js CLI that audits AEO (answer engine optimization) and GEO (generative engine optimization) readiness, exports reports, and supports CI score thresholds

Waydev

Waydev is an engineering analytics platform for measuring delivery performance, DORA metrics, and productivity trends

WTF Are Agents Buying

WTF Are Agents Buying is a live stream that shows AI agents spending real money in real time

Breadcrumb

Breadcrumb is a simple, open-source LLM tracing tool for AI agents that tracks prompts, completions, latency, token usage, and cost

AI Toolbase

Curated AI tools to boost productivity

© 2026 AI Toolbase. All rights reserved