Full sync - all projects, memory, configs

This commit is contained in:
2026-03-21 20:27:59 -05:00
parent 2447677d4a
commit b33de10902
395 changed files with 1635300 additions and 459211 deletions


@@ -0,0 +1,101 @@
---
name: checkpoint-research
description: Run multi-step research tasks with checkpoint tracking. Saves progress to an MD file after each step so work survives timeouts and can be resumed. Use for competitive analyses, multi-source investigations, or any research with 3+ sequential steps.
version: 1.0.0
author: case
metadata:
  openclaw:
    emoji: "📋"
    category: "research"
    tags:
      - research
      - checkpoint
      - resumable
      - analysis
---
# Checkpoint Research
Run multi-step research with automatic progress tracking. If you time out, the next run picks up where you left off.
## When to Use
- Competitive analyses (multiple competitors to research)
- Multi-source investigations (several URLs/topics to cover)
- Any research task with 3+ sequential items that might time out
## How It Works
1. Create a progress file at the specified path
2. Before each step, read the progress file to check what's done
3. Skip completed steps (marked `[x]`)
4. After completing each step, write findings to the file and mark `[x]`
5. If timed out mid-run, the next invocation reads the file and continues
## Usage
When given a research task, structure it as follows:
### 1. Initialize Progress File
```markdown
# [Research Title] — Progress
## Status: IN PROGRESS
## Steps
- [ ] 1. [Item name]
- [ ] 2. [Item name]
- [ ] 3. [Item name]
## Final Sections
- [ ] Summary / Verdict
- [ ] Recommendations
## Findings
*(Written below as each step completes)*
```
Save to: `data/investigations/[topic]-progress.md`
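Creating that file can be made idempotent, so a rerun after a timeout never clobbers existing checkmarks (a sketch under that assumption; `init_progress` is a hypothetical helper, not part of the skill):

```python
from pathlib import Path

def init_progress(path: str, title: str, items: list[str]) -> None:
    """Create the progress file only if it doesn't exist, preserving prior state on resume."""
    p = Path(path)
    if p.exists():
        return  # resuming: keep existing [x] marks
    p.parent.mkdir(parents=True, exist_ok=True)
    steps = "\n".join(f"- [ ] {n}. {name}" for n, name in enumerate(items, start=1))
    p.write_text(
        f"# {title} — Progress\n"
        "## Status: IN PROGRESS\n"
        "## Steps\n"
        f"{steps}\n"
        "## Final Sections\n"
        "- [ ] Summary / Verdict\n"
        "- [ ] Recommendations\n"
        "## Findings\n"
        "*(Written below as each step completes)*\n"
    )
```

The `p.exists()` guard is what makes re-invocation safe after a timeout.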
### 2. Research Loop
```
For each unchecked item:
1. Read progress file
2. Find first unchecked `- [ ]` item
3. Research it (web_search, web_fetch, etc.)
4. Append findings under ## Findings with a ### heading
5. Mark the item `[x]` in the checklist
6. Write updated file to disk immediately
7. Continue to next item
```
### 3. Completion
When all items are `[x]`:
1. Write final sections (summary, recommendations)
2. Change `## Status: IN PROGRESS` to `## Status: COMPLETE`
3. Deliver the full report
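The status flip in step 2 is a single substitution; a guard against remaining unchecked items keeps it honest (a sketch; `mark_complete` is a hypothetical helper, not part of the skill):

```python
from pathlib import Path

def mark_complete(progress_path: str) -> None:
    """Flip the status header once every checklist item is [x]."""
    p = Path(progress_path)
    text = p.read_text()
    if "- [ ]" in text:
        raise ValueError("unchecked items remain; refusing to mark complete")
    p.write_text(text.replace("## Status: IN PROGRESS", "## Status: COMPLETE", 1))
```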
## Template for Spawning
When spawning a sub-agent with checkpoint research:
```
Read the progress file at [PATH] first.
If any items are marked [x], skip them — continue from the first unchecked item.
After researching each item, IMMEDIATELY write your findings to the file and mark it [x].
When all items are complete, write the final sections and mark status as COMPLETE.
```
## Cost Impact
Adds roughly 2-3K extra tokens per full run versus the non-checkpoint approach.
Saves 20-30K+ tokens on any timeout by avoiding a full re-run.
Net positive whenever timeout risk exists (e.g., tasks with 5+ web fetches).
## Progress File Location
Default: `/home/wdjones/.openclaw/workspace/data/investigations/[topic]-progress.md`

skills/context7/SKILL.md Normal file

@@ -0,0 +1,63 @@
---
name: context7
description: |
  Fetch up-to-date library documentation via Context7 API. Use PROACTIVELY when:
  (1) Working with ANY external library (React, Next.js, Supabase, etc.)
  (2) User asks about library APIs, patterns, or best practices
  (3) Implementing features that rely on third-party packages
  (4) Debugging library-specific issues
  (5) Need current documentation beyond training data cutoff
  Always prefer this over guessing library APIs or using outdated knowledge.
---
# Context7 Documentation Fetcher
Retrieve current library documentation via Context7 API.
## Workflow
### 1. Search for the library
```bash
python3 {baseDir}/scripts/context7.py search "<library-name>"
```
Example:
```bash
python3 {baseDir}/scripts/context7.py search "next.js"
```
Returns library metadata including the `id` field needed for step 2.
### 2. Fetch documentation context
```bash
python3 {baseDir}/scripts/context7.py context "<library-id>" "<query>"
```
Example:
```bash
python3 {baseDir}/scripts/context7.py context "/vercel/next.js" "app router middleware"
```
Options:
- `--type txt|json` - Output format (default: txt)
- `--tokens N` - Limit response tokens
## Quick Reference
| Task | Command |
|------|---------|
| Find React docs | `search "react"` |
| Get React hooks info | `context "/facebook/react" "useEffect cleanup"` |
| Find Next.js | `search "next.js"` |
| Get Next.js app router | `context "/vercel/next.js" "app router API routes"` |
| Find Supabase | `search "supabase"` |
| Get Supabase auth | `context "/supabase/supabase" "authentication row level security"` |
## When to Use
- Before implementing any library-dependent feature
- When unsure about current API signatures
- For library version-specific behavior
- To verify best practices and patterns


@@ -0,0 +1,94 @@
#!/usr/bin/env python3
"""
Context7 API client for searching libraries and fetching documentation.
"""
import argparse
import json
import sys
import urllib.request
import urllib.parse
import urllib.error
import os

API_BASE = "https://context7.com/api/v2"
API_KEY = os.environ.get("CONTEXT7_API_KEY")  # never hardcode credentials in the script
if not API_KEY:
    sys.exit("Error: set the CONTEXT7_API_KEY environment variable")

def make_request(url: str) -> dict | str:
"""Make an authenticated request to Context7 API."""
headers = {"Authorization": f"Bearer {API_KEY}"}
req = urllib.request.Request(url, headers=headers)
try:
with urllib.request.urlopen(req, timeout=30) as response:
content_type = response.headers.get("Content-Type", "")
data = response.read().decode("utf-8")
if "application/json" in content_type:
return json.loads(data)
return data
except urllib.error.HTTPError as e:
return {"error": f"HTTP {e.code}: {e.reason}", "body": e.read().decode("utf-8")}
except urllib.error.URLError as e:
        return {"error": f"URL Error: {e.reason}"}


def search_libraries(library_name: str, query: str = "") -> dict:
"""Search for libraries by name and optional query."""
params = {"libraryName": library_name}
if query:
params["query"] = query
url = f"{API_BASE}/libs/search?{urllib.parse.urlencode(params)}"
    return make_request(url)


def get_context(library_id: str, query: str, output_type: str = "txt", tokens: int | None = None) -> str | dict:
"""Fetch documentation context for a specific library."""
params = {
"libraryId": library_id,
"query": query,
"type": output_type
}
if tokens:
params["tokens"] = tokens
url = f"{API_BASE}/context?{urllib.parse.urlencode(params)}"
    return make_request(url)


def main():
parser = argparse.ArgumentParser(description="Context7 API client")
subparsers = parser.add_subparsers(dest="command", required=True)
# Search command
search_parser = subparsers.add_parser("search", help="Search for libraries")
search_parser.add_argument("library_name", help="Library name to search for (e.g., 'react', 'next.js')")
search_parser.add_argument("--query", "-q", default="", help="Optional query to filter results")
# Context command
context_parser = subparsers.add_parser("context", help="Get documentation context")
context_parser.add_argument("library_id", help="Library ID from search results (e.g., '/vercel/next.js')")
context_parser.add_argument("query", help="Query describing what you need (e.g., 'setup ssr')")
context_parser.add_argument("--type", "-t", default="txt", choices=["txt", "json"], help="Output format (txt returns markdown-formatted text)")
context_parser.add_argument("--tokens", type=int, help="Max tokens to return")
args = parser.parse_args()
if args.command == "search":
result = search_libraries(args.library_name, args.query)
if isinstance(result, dict):
print(json.dumps(result, indent=2))
else:
print(result)
elif args.command == "context":
result = get_context(args.library_id, args.query, args.type, args.tokens)
if isinstance(result, dict):
print(json.dumps(result, indent=2))
else:
        print(result)


if __name__ == "__main__":
main()