open-design (nexu-io/open-design) is an open-source AI project on GitHub. Repository summary: 🎨 Local-first, open-source alternative to Anthropic's Claude Design. ⚡ 19 Skills · ✨ 71 brand-grade Design Systems 🖼 Generate web · desktop · mobile prototypes · slides · images · videos · HyperFrames 📦 Sandboxed preview · HTML/PDF/PPTX/MP4 export 🤖 Runs on Claude Code / Codex / Cursor / Gemini / OpenCode / Qwen / Copilot / Hermes / Kimi CLI. Its focus includes MCP and tool-calling integration, developer-centric engineering workflows, image and vision workflows, and video generation and processing, making it suitable for extension, integration, and iterative delivery in real workflows.
License
Apache-2.0
Stars
28,415
Homepage
https://open-design.ai/
Features
- Core capability: local-first, open-source alternative to Anthropic's Claude Design, with 19 Skills and 71 brand-grade design systems; generates web, desktop, and mobile prototypes, slides, images, videos, and HyperFrames; offers sandboxed preview with HTML/PDF/PPTX/MP4 export; runs on Claude Code, Codex, Cursor, Gemini, OpenCode, Qwen, Copilot, Hermes, and Kimi CLI
- Provides MCP and tool-calling integration
- Built for code generation, debugging, and engineering integration
- Supports image generation, editing, and vision understanding
- Covers video generation, editing, and avatar pipelines
- Repository: nexu-io/open-design
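The MCP integration listed above would typically be registered with a host such as Claude Code via an `.mcp.json` file. A minimal sketch follows; the server command and package name are illustrative assumptions, since the project's actual MCP entry point is not documented here:

```json
{
  "mcpServers": {
    "open-design": {
      "command": "npx",
      "args": ["open-design-mcp"]
    }
  }
}
```

With a configuration like this, the host discovers the server's tools at startup and exposes them to the agent for tool calls.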
Use Cases
- Connects external systems into agent workflows
- Supports AI engineering build-and-iterate workflows for dev teams
- Used for visual content production and model experimentation
- Used for marketing videos, training content, and media production
- Build internal AI workflow prototypes with open-design
- Validate open-design in production-like engineering scenarios
FAQ
Teams should first define integration boundaries and call patterns, then map the repository's capabilities onto concrete interfaces, parameters, and access rules. GitHub repository: https://github.com/nexu-io/open-design. Community traction: roughly 28,415 stars. License: Apache-2.0.
It usually works as an execution component or capability layer. Common deployment fits include connecting external systems into agent workflows, supporting build-and-iterate AI engineering workflows for dev teams, and producing visual content for model experimentation.
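Mapping a capability onto a concrete interface with parameters and access rules, as suggested above, can be sketched as follows. This is a hypothetical wrapper, not the project's real API: the tool name, roles, and targets are all illustrative assumptions.

```python
from dataclasses import dataclass, field

# Supported prototype targets in this sketch (illustrative, not from
# the project's documentation).
ALLOWED_TARGETS = {"web", "desktop", "mobile"}

@dataclass
class GeneratePrototypeTool:
    """Hypothetical tool interface wrapping one open-design capability."""
    name: str = "generate_prototype"
    description: str = "Generate a sandboxed UI prototype"
    # Access rule: which caller roles may invoke this tool.
    allowed_roles: set = field(default_factory=lambda: {"designer", "engineer"})

    def __call__(self, role: str, target: str, prompt: str) -> dict:
        # Enforce the access rule before doing any work.
        if role not in self.allowed_roles:
            raise PermissionError(f"role {role!r} may not call {self.name}")
        # Validate parameters against the declared interface.
        if target not in ALLOWED_TARGETS:
            raise ValueError(f"unsupported target {target!r}")
        # A real integration would invoke the open-design CLI or MCP
        # server here; this stub just echoes the validated request.
        return {"tool": self.name, "target": target, "prompt": prompt}

tool = GeneratePrototypeTool()
result = tool("designer", "web", "pricing page prototype")
```

Keeping validation and access checks in the wrapper, rather than in the agent prompt, makes the integration boundary explicit and testable.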