Discover Quality AI Agent Skills

Aggregating quality resources such as Claude Skills, LangChain, and AutoGPT to help developers build intelligent applications quickly


1616 skills total


ROSE Docker Build

Build the ROSE source-to-source compiler in an isolated Docker container.

Tags: ai, docker, openclaw, archive +1

Google Drive Setup via gog (OPTIONAL)

Connect Google Drive, Docs, Sheets, and Calendar to a Dockerized OpenClaw instance using **gog** (gogcli).

Tags: docker, openclaw, archive, backup +1

Gmail Setup via Himalaya (OPTIONAL)

Connect Gmail to a Dockerized OpenClaw instance using **Himalaya** — a full-featured email CLI that supports reading, sending, and **downloading attachments**.

Tags: ai, docker, openclaw, archive +1

openclaw-docker-setup

Install a fully isolated, production-ready OpenClaw instance inside Docker on macOS. One session, zero to running. All common pitfalls are handled inline.

Tags: docker, openclaw, archive, backup +1
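An isolated install of this kind usually comes down to a `docker run` invocation that cuts the container off from the host. A minimal sketch of the flags involved, assuming a hypothetical image name and volume path (not the skill's actual command):

```python
def isolated_run_argv(image: str, state_volume: str) -> list[str]:
    """Assemble a docker run command that keeps the container isolated:
    detached, on its own bridge network, with a named volume for state,
    and auto-removed when it exits."""
    return [
        "docker", "run", "--rm", "-d",
        "--name", "openclaw",
        "--network", "bridge",            # own bridge network, not --network host
        "-v", f"{state_volume}:/data",    # persist state outside the container
        image,
    ]

argv = isolated_run_argv("openclaw:latest", "openclaw-data")
```

The `--rm` plus named-volume combination is the usual way to make the container itself disposable while keeping its state reusable across restarts.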

Swarm Orchestrator Pattern

> **CRITICAL:** Read `collect.md` completely before spawning any workers.
> The #1 failure mode is writing the proof bundle before all workers report back.
> This happened on midas-mcp (Feb 16, 2026). Never again.

Tags: ai, mcp, openclaw, archive +1

clawsec

You are now acting as the ClawSec Monitor assistant. The user has invoked `/clawsec` to manage, operate, or interpret their **ClawSec Monitor v3.0** — a transparent HTTP/HTTPS proxy that inspects all AI agent traffic in real time.

Tags: ai, agent, aws, openclaw +1

Dashboard

Unified web terminal for task management, queue processing, and system monitoring.

Tags: terminal, openclaw, archive, backup +1

Sandboxer — Dispatch Tasks to Tmux Sessions

> **Power-user skill.** Sandboxer gives agents full access to tmux sessions, workspace files, and terminal output on your server. Intended for dedicated AI machines where agents run with root access. Not for shared or untrusted environments.

Tags: ai, agent, terminal, openclaw +1
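Dispatching work into a tmux session generally reduces to `tmux send-keys`: the text is typed into the target pane and `Enter` submits it. A minimal sketch of how such a dispatch could be assembled — the session name and command are hypothetical, not Sandboxer's actual interface:

```python
import shlex

def tmux_dispatch_argv(session: str, command: str) -> list[str]:
    """Build the argv that injects a command into a running tmux session.
    `send-keys -t SESSION` targets the session; the trailing "Enter"
    key name submits the typed command."""
    return ["tmux", "send-keys", "-t", session, command, "Enter"]

argv = tmux_dispatch_argv("worker-1", "pytest -q")
# subprocess.run(argv) would execute the dispatch; the pane's output can
# later be read back with: tmux capture-pane -t worker-1 -p
print(shlex.join(argv))
```

Passing the command as a single argv element (rather than splicing it into a shell string) avoids quoting bugs when the command itself contains spaces.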

Tuna — Deploy and Serve LLM Models on GPU Infrastructure

Tuna is a hybrid GPU inference orchestrator. It lets you deploy, serve, and manage LLM models (Llama, Qwen, Mistral, DeepSeek, Gemma, and any HuggingFace model) on serverless GPUs from **Modal, RunPod, Cerebrium, Google Cloud Run, Baseten, or Azure Container Apps**, with optional **spot instance fallback on AWS** via SkyPilot. Every deployment gets an **OpenAI-compatible `/v1/chat/completions` endpoint**.

Tags: openai, ai, aws, chat +1
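Because every Tuna deployment exposes an OpenAI-compatible `/v1/chat/completions` endpoint, any OpenAI-style client can talk to it. A minimal sketch of the standard request shape, assuming a hypothetical deployment URL and model name:

```python
import json
import urllib.request

# Hypothetical endpoint; substitute your own Tuna deployment's address.
ENDPOINT = "https://tuna.example.com/v1/chat/completions"

def build_chat_request(model: str, prompt: str) -> dict:
    """Build the payload defined by OpenAI's chat completions schema,
    which OpenAI-compatible servers accept unchanged."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.7,
    }

payload = build_chat_request("llama-3.1-8b", "Hello!")
req = urllib.request.Request(
    ENDPOINT,
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
# urllib.request.urlopen(req) would return a JSON response whose reply text
# lives at choices[0].message.content in OpenAI's schema.
```

Keeping the request in the standard schema is what lets existing SDKs and tools point at the deployment by swapping only the base URL.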

🛡️ N2 Stitch MCP — Resilient Proxy Skill

Never lose a screen generation again. The only Stitch MCP proxy with **TCP drop recovery**.

Tags: ai, mcp, openclaw, archive +1