scaffold → db → news_collector → keyword_extractor → card_writer → card_renderer → main.py FastAPI → docker-compose/nginx swap → agent-office service_proxy/InstaAgent/registry/scheduler/webhook callbacks → retire blog-lab → CLAUDE.md → smoke tests.
# insta-agent Implementation Plan

> **For agentic workers:** REQUIRED SUB-SKILL: Use superpowers:subagent-driven-development (recommended) or superpowers:executing-plans to implement this plan task-by-task. Steps use checkbox (`- [ ]`) syntax for tracking.

**Goal:** Replace `blog-lab` with `insta-lab` — a service that monitors news, extracts trending keywords, generates 10-page Instagram card slates (1080×1350 PNGs), and pushes them to Telegram for manual upload to Instagram. Replace `BlogAgent` with `InstaAgent` in agent-office.

**Architecture:** New FastAPI container `insta-lab` (port 18700, reusing blog-lab's slot) hosts the news/keyword/copy/render pipeline backed by SQLite (`insta.db`, 6 tables). Cards are rendered with Jinja2 + Playwright headless Chromium. Agent-office's new `InstaAgent` schedules the daily 09:30 collection, pushes Telegram inline keyword buttons, and handles `render_<keyword_id>` callbacks → calls insta-lab → sends 10 PNGs as a Telegram media group. The blog-lab directory + DB are deleted (clean slate, no migration).

**Tech Stack:** Python 3.12, FastAPI 0.115, SQLite, httpx, anthropic 0.52, Jinja2, Playwright (chromium), pytest, Docker Compose, nginx.

**Spec reference:** `docs/superpowers/specs/2026-05-15-insta-agent-design.md`

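The `render_<keyword_id>` callback above carries the keyword id inside the callback-data string. A minimal sketch of the parsing InstaAgent would need (function name and regex are assumptions, not the plan's API):

```python
import re

# Callback data arrives as e.g. "render_42"; anything else is ignored.
RENDER_CALLBACK = re.compile(r"^render_(\d+)$")

def parse_render_callback(data: str):
    """Return the trending_keywords id from a render callback, or None."""
    match = RENDER_CALLBACK.match(data)
    return int(match.group(1)) if match else None

print(parse_render_callback("render_42"))    # expected: 42
print(parse_render_callback("collect_now"))  # expected: None
```

Anchoring the regex end-to-end keeps unrelated callback data (other buttons, malformed ids) from triggering a render.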
---

## File Structure

### Files to create

| Path | Responsibility |
|------|----------------|
| `insta-lab/Dockerfile` | python:3.12-slim base + Playwright chromium + Noto CJK fonts |
| `insta-lab/requirements.txt` | fastapi, uvicorn, requests, httpx, anthropic, jinja2, playwright, pytest, pytest-asyncio |
| `insta-lab/pytest.ini` | asyncio_mode=auto, pythonpath=. |
| `insta-lab/app/__init__.py` | empty |
| `insta-lab/app/config.py` | env var loading (NAVER, ANTHROPIC, paths, models) |
| `insta-lab/app/db.py` | SQLite init + CRUD for 6 tables (news_articles, trending_keywords, card_slates, card_assets, generation_tasks, prompt_templates) |
| `insta-lab/app/news_collector.py` | NAVER news.json client + simple summary cleaner |
| `insta-lab/app/keyword_extractor.py` | Frequency-based candidate extraction + Claude Haiku refinement |
| `insta-lab/app/card_writer.py` | Claude call producing 10-page JSON copy |
| `insta-lab/app/card_renderer.py` | Jinja → temp HTML → Playwright screenshot loop |
| `insta-lab/app/main.py` | FastAPI app + 13 endpoints + BackgroundTasks wiring |
| `insta-lab/app/templates/__init__.py` | empty (marks the dir as a package) |
| `insta-lab/app/templates/default/card.html.j2` | Skeleton 1080×1350 card template (user will redesign later) |
| `insta-lab/tests/__init__.py` | empty |
| `insta-lab/tests/test_db.py` | DB schema + CRUD round-trip tests |
| `insta-lab/tests/test_news_collector.py` | NAVER API mocked, parse + summary tests |
| `insta-lab/tests/test_keyword_extractor.py` | Frequency extraction + Claude mocked refinement |
| `insta-lab/tests/test_card_writer.py` | Claude mocked, JSON schema validation |
| `insta-lab/tests/test_card_renderer.py` | Real Playwright integration test (1 small fixture HTML → PNG) |
| `insta-lab/tests/test_main.py` | FastAPI TestClient end-to-end on key endpoints |
| `agent-office/app/agents/insta.py` | InstaAgent: on_schedule (09:30), on_command, on_callback (`render_<id>`) |
| `agent-office/tests/test_insta_agent.py` | InstaAgent unit tests with mocked service_proxy |
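The `card_renderer.py` row above iterates a fixed 10-page layout (1 cover + 8 body pages + 1 CTA, matching `cover_copy`/`body_copies`/`cta_copy`). A stdlib-only sketch of that page plan (helper name and the `NN.png` filenames are assumptions):

```python
def build_page_plan(cover: dict, bodies: list, cta: dict) -> list:
    """Order the slate copy into the 10 pages the renderer screenshots."""
    pages = [("cover", cover)] + [("body", b) for b in bodies] + [("cta", cta)]
    return [(f"{i + 1:02d}.png", kind, copy) for i, (kind, copy) in enumerate(pages)]

plan = build_page_plan(
    {"headline": "금리 인상"},
    [{"headline": f"H{i}"} for i in range(8)],
    {"headline": "정리", "cta": "팔로우"},
)
assert len(plan) == 10
assert plan[0][0] == "01.png" and plan[0][1] == "cover"
assert plan[9][1] == "cta"
```

Zero-padded filenames keep the pages sorted correctly when sent as a Telegram media group.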

### Files to modify

| Path | Change |
|------|--------|
| `docker-compose.yml` | Replace `blog-lab` service block with `insta-lab` block; remove `BLOG_LAB_URL` env on agent-office, add `INSTA_LAB_URL`; update `depends_on` lists |
| `nginx/default.conf` | Remove `/api/blog-marketing/` location, add `/api/insta/` location |
| `agent-office/app/config.py` | Remove `BLOG_LAB_URL`, add `INSTA_LAB_URL` |
| `agent-office/app/service_proxy.py` | Remove all `blog_*` async functions; add `insta_*` async functions |
| `agent-office/app/agents/__init__.py` | Remove `BlogAgent` import + registry entry; add `InstaAgent` |
| `agent-office/app/scheduler.py` | Remove `_run_blog_schedule` + `blog_pipeline` job; add `_run_insta_schedule` + `insta_pipeline` job at 09:30 |
| `agent-office/app/telegram/webhook.py` | Add inline-callback handler for `render_<keyword_id>` |
| `CLAUDE.md` (web-backend) | Replace blog-lab section (9.x), update tables (4, 5, 1) |
| `../CLAUDE.md` (workspace) | Replace blog-lab row in Docker services table + API table |
| `.env.example` | Remove BLOG_*, add INSTA_* if any |

### Files to delete

| Path | Reason |
|------|--------|
| `blog-lab/` (entire directory) | Service retired; replaced by insta-lab |
| `agent-office/app/agents/blog.py` | Replaced by `agents/insta.py` |

---

## Conventions

- All commits use `git commit -m "<msg>"` from the repo root. Never `--no-verify`.
- Python: 4-space indent, no trailing whitespace, snake_case.
- Use async functions wherever a network/IO call exists.
- Tests live in `<service>/tests/test_<module>.py`. Run with `pytest <path> -v`.
- The repo root in the commands below is `C:\Users\jaeoh\Desktop\workspace\web-backend` (Windows). Use forward slashes inside git/pytest commands.

---

## Task 0: Branch + Plan acknowledgment

**Files:**
- No code changes; just branch hygiene.

- [ ] **Step 1: Confirm the working tree is clean (other than expected)**

Run: `git status --short`
Expected: Only `?? .superpowers/` and `?? stock/app/test_scraper.py` (pre-existing untracked) plus the current plan/spec files. No uncommitted modifications.

- [ ] **Step 2: Create a feature branch**

Run: `git checkout -b feat/insta-agent`
Expected: `Switched to a new branch 'feat/insta-agent'`

- [ ] **Step 3: Commit the plan file (already written) if not yet committed**

Run: `git status docs/superpowers/plans/2026-05-15-insta-agent-implementation.md`
If it shows as untracked or modified:

```
git add docs/superpowers/plans/2026-05-15-insta-agent-implementation.md
git commit -m "docs(insta-agent): add implementation plan"
```

---

## Task 1: insta-lab project scaffold

**Files:**
- Create: `insta-lab/Dockerfile`
- Create: `insta-lab/requirements.txt`
- Create: `insta-lab/pytest.ini`
- Create: `insta-lab/app/__init__.py` (empty)
- Create: `insta-lab/app/config.py`
- Create: `insta-lab/app/templates/__init__.py` (empty)
- Create: `insta-lab/tests/__init__.py` (empty)

- [ ] **Step 1: Create directory layout**

Run (from repo root):

```
mkdir -p insta-lab/app/templates/default insta-lab/tests
```

- [ ] **Step 2: Write `insta-lab/requirements.txt`**

```
fastapi==0.115.6
uvicorn[standard]==0.34.0
requests==2.32.3
httpx>=0.27
anthropic==0.52.0
jinja2>=3.1.4
playwright==1.48.0
pytest>=8.0
pytest-asyncio>=0.24
```

- [ ] **Step 3: Write `insta-lab/Dockerfile`**

```dockerfile
FROM python:3.12-slim
ENV PYTHONUNBUFFERED=1

WORKDIR /app

# Korean fonts for card rendering (Playwright's own OS deps are pulled in
# by `playwright install --with-deps` below)
RUN apt-get update && apt-get install -y --no-install-recommends \
    fonts-noto-cjk fonts-noto-cjk-extra \
    && rm -rf /var/lib/apt/lists/*

COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
RUN playwright install --with-deps chromium

COPY . .

EXPOSE 8000
CMD ["uvicorn", "app.main:app", "--host", "0.0.0.0", "--port", "8000"]
```

- [ ] **Step 4: Write `insta-lab/pytest.ini`**

```ini
[pytest]
asyncio_mode = auto
pythonpath = .
```

- [ ] **Step 5: Write `insta-lab/app/__init__.py`, `insta-lab/tests/__init__.py`, `insta-lab/app/templates/__init__.py`**

Each file: empty (zero bytes).

- [ ] **Step 6: Write `insta-lab/app/config.py`**

```python
import os

NAVER_CLIENT_ID = os.getenv("NAVER_CLIENT_ID", "")
NAVER_CLIENT_SECRET = os.getenv("NAVER_CLIENT_SECRET", "")
ANTHROPIC_API_KEY = os.getenv("ANTHROPIC_API_KEY", "")
ANTHROPIC_MODEL_HAIKU = os.getenv("ANTHROPIC_MODEL_HAIKU", "claude-haiku-4-5-20251001")
ANTHROPIC_MODEL_SONNET = os.getenv("ANTHROPIC_MODEL_SONNET", "claude-sonnet-4-6")

INSTA_DATA_PATH = os.getenv("INSTA_DATA_PATH", "/app/data")
DB_PATH = os.path.join(INSTA_DATA_PATH, "insta.db")
CARDS_DIR = os.path.join(INSTA_DATA_PATH, "insta_cards")
CARD_TEMPLATE_DIR = os.getenv("CARD_TEMPLATE_DIR", "/app/app/templates")

CORS_ALLOW_ORIGINS = os.getenv(
    "CORS_ALLOW_ORIGINS", "http://localhost:3007,http://localhost:8080"
)

# News collection knobs
NEWS_PER_CATEGORY = int(os.getenv("NEWS_PER_CATEGORY", "30"))
KEYWORDS_PER_CATEGORY = int(os.getenv("KEYWORDS_PER_CATEGORY", "5"))

# Default category seeds (overridable via prompt_templates row 'category_seeds')
DEFAULT_CATEGORY_SEEDS = {
    "economy": ["금리", "인플레이션", "환율", "주식", "부동산"],
    "psychology": ["심리학", "스트레스", "우울증", "관계", "자존감"],
    "celebrity": ["연예인", "드라마", "예능", "K-POP", "영화"],
}
```

- [ ] **Step 7: Verify scaffold imports**

Run: `cd insta-lab && python -c "from app import config; print(config.DB_PATH)"`
Expected: `/app/data/insta.db` (on Windows, `os.path.join` emits the last separator as `\`)

- [ ] **Step 8: Commit**

```
git add insta-lab/
git commit -m "feat(insta-lab): project scaffold (Dockerfile, requirements, config)"
```

---

## Task 2: db.py — 6 tables init

**Files:**
- Create: `insta-lab/app/db.py`
- Create: `insta-lab/tests/test_db.py`

- [ ] **Step 1: Write the failing test `tests/test_db.py`**

```python
import os
import json
import tempfile

import pytest

from app import db as db_module


@pytest.fixture
def tmp_db(monkeypatch):
    fd, path = tempfile.mkstemp(suffix=".db")
    os.close(fd)
    monkeypatch.setattr(db_module, "DB_PATH", path)
    db_module.init_db()
    yield path
    os.remove(path)


def test_init_db_creates_six_tables(tmp_db):
    with db_module._conn() as conn:
        rows = conn.execute(
            "SELECT name FROM sqlite_master WHERE type='table' ORDER BY name"
        ).fetchall()
    names = sorted(r[0] for r in rows if not r[0].startswith("sqlite_"))
    assert names == sorted([
        "news_articles", "trending_keywords", "card_slates",
        "card_assets", "generation_tasks", "prompt_templates",
    ])


def test_news_article_roundtrip(tmp_db):
    aid = db_module.add_news_article({
        "category": "economy",
        "title": "금리 인상 발표",
        "link": "https://example.com/1",
        "summary": "한국은행이 기준금리를 인상했다.",
        "pub_date": "2026-05-15T08:00:00",
    })
    assert isinstance(aid, int)
    rows = db_module.list_news_articles(category="economy", days=7)
    assert len(rows) == 1
    assert rows[0]["title"] == "금리 인상 발표"


def test_trending_keyword_roundtrip(tmp_db):
    kid = db_module.add_trending_keyword({
        "keyword": "기준금리",
        "category": "economy",
        "score": 0.87,
        "articles_count": 12,
    })
    assert isinstance(kid, int)
    items = db_module.list_trending_keywords(category="economy", used=False)
    assert items[0]["score"] == pytest.approx(0.87)


def test_card_slate_with_assets(tmp_db):
    sid = db_module.add_card_slate({
        "keyword": "기준금리",
        "category": "economy",
        "cover_copy": {"headline": "금리 인상", "body": "왜?", "accent_color": "#0F62FE"},
        "body_copies": [{"headline": f"H{i}", "body": f"B{i}"} for i in range(8)],
        "cta_copy": {"headline": "정리", "body": "바로 확인", "cta": "팔로우"},
        "suggested_caption": "금리에 대해 알아보자",
        "hashtags": ["#금리", "#경제"],
    })
    db_module.add_card_asset(sid, page_index=1, file_path="/tmp/01.png", file_hash="abc")
    slate = db_module.get_card_slate(sid)
    assert slate["status"] == "draft"
    assert json.loads(slate["body_copies"])[0]["headline"] == "H0"
    assets = db_module.list_card_assets(sid)
    assert assets[0]["page_index"] == 1


def test_generation_task_lifecycle(tmp_db):
    tid = db_module.create_task("collect", {"category": "economy"})
    db_module.update_task(tid, status="processing", progress=50, message="..")
    db_module.update_task(tid, status="succeeded", progress=100, message="ok", result_id=123)
    t = db_module.get_task(tid)
    assert t["status"] == "succeeded"
    assert t["result_id"] == 123


def test_prompt_template_upsert(tmp_db):
    db_module.upsert_prompt_template("slate_writer", "v1 template", "writer")
    db_module.upsert_prompt_template("slate_writer", "v2 template", "writer")
    pt = db_module.get_prompt_template("slate_writer")
    assert pt["template"] == "v2 template"
```

- [ ] **Step 2: Run test, expect failure**

Run: `cd insta-lab && pytest tests/test_db.py -v`
Expected: ImportError or AttributeError on `app.db` (file doesn't exist yet).

- [ ] **Step 3: Implement `insta-lab/app/db.py`**

```python
import os
import sqlite3
import json
import uuid
from typing import Any, Dict, List, Optional

from .config import DB_PATH


def _conn() -> sqlite3.Connection:
    os.makedirs(os.path.dirname(DB_PATH), exist_ok=True)
    conn = sqlite3.connect(DB_PATH, timeout=120.0)
    conn.row_factory = sqlite3.Row
    conn.execute("PRAGMA journal_mode=WAL")
    conn.execute("PRAGMA busy_timeout=120000")
    conn.execute("PRAGMA foreign_keys=ON")
    return conn


def init_db() -> None:
    with _conn() as conn:
        conn.execute("""
            CREATE TABLE IF NOT EXISTS news_articles (
                id INTEGER PRIMARY KEY AUTOINCREMENT,
                category TEXT NOT NULL,
                title TEXT NOT NULL,
                link TEXT NOT NULL UNIQUE,
                summary TEXT NOT NULL DEFAULT '',
                pub_date TEXT,
                fetched_at TEXT NOT NULL DEFAULT (strftime('%Y-%m-%dT%H:%M:%fZ','now'))
            )
        """)
        conn.execute("CREATE INDEX IF NOT EXISTS idx_na_category_fetched ON news_articles(category, fetched_at DESC)")

        conn.execute("""
            CREATE TABLE IF NOT EXISTS trending_keywords (
                id INTEGER PRIMARY KEY AUTOINCREMENT,
                keyword TEXT NOT NULL,
                category TEXT NOT NULL,
                score REAL NOT NULL DEFAULT 0,
                articles_count INTEGER NOT NULL DEFAULT 0,
                suggested_at TEXT NOT NULL DEFAULT (strftime('%Y-%m-%dT%H:%M:%fZ','now')),
                used INTEGER NOT NULL DEFAULT 0
            )
        """)
        conn.execute("CREATE INDEX IF NOT EXISTS idx_tk_score ON trending_keywords(category, score DESC)")

        conn.execute("""
            CREATE TABLE IF NOT EXISTS card_slates (
                id INTEGER PRIMARY KEY AUTOINCREMENT,
                keyword TEXT NOT NULL,
                category TEXT NOT NULL,
                status TEXT NOT NULL DEFAULT 'draft',
                cover_copy TEXT NOT NULL DEFAULT '{}',
                body_copies TEXT NOT NULL DEFAULT '[]',
                cta_copy TEXT NOT NULL DEFAULT '{}',
                suggested_caption TEXT NOT NULL DEFAULT '',
                hashtags TEXT NOT NULL DEFAULT '[]',
                created_at TEXT NOT NULL DEFAULT (strftime('%Y-%m-%dT%H:%M:%fZ','now')),
                updated_at TEXT NOT NULL DEFAULT (strftime('%Y-%m-%dT%H:%M:%fZ','now'))
            )
        """)
        conn.execute("CREATE INDEX IF NOT EXISTS idx_cs_created ON card_slates(created_at DESC)")

        conn.execute("""
            CREATE TABLE IF NOT EXISTS card_assets (
                id INTEGER PRIMARY KEY AUTOINCREMENT,
                slate_id INTEGER NOT NULL REFERENCES card_slates(id) ON DELETE CASCADE,
                page_index INTEGER NOT NULL,
                file_path TEXT NOT NULL,
                file_hash TEXT NOT NULL DEFAULT '',
                created_at TEXT NOT NULL DEFAULT (strftime('%Y-%m-%dT%H:%M:%fZ','now')),
                UNIQUE (slate_id, page_index)
            )
        """)
        conn.execute("CREATE INDEX IF NOT EXISTS idx_ca_slate ON card_assets(slate_id, page_index)")

        conn.execute("""
            CREATE TABLE IF NOT EXISTS generation_tasks (
                id TEXT PRIMARY KEY,
                type TEXT NOT NULL,
                status TEXT NOT NULL DEFAULT 'queued',
                progress INTEGER NOT NULL DEFAULT 0,
                message TEXT NOT NULL DEFAULT '',
                result_id INTEGER,
                error TEXT,
                params TEXT NOT NULL DEFAULT '{}',
                created_at TEXT NOT NULL DEFAULT (strftime('%Y-%m-%dT%H:%M:%fZ','now')),
                updated_at TEXT NOT NULL DEFAULT (strftime('%Y-%m-%dT%H:%M:%fZ','now'))
            )
        """)
        conn.execute("CREATE INDEX IF NOT EXISTS idx_gt_created ON generation_tasks(created_at DESC)")

        conn.execute("""
            CREATE TABLE IF NOT EXISTS prompt_templates (
                id INTEGER PRIMARY KEY AUTOINCREMENT,
                name TEXT NOT NULL UNIQUE,
                description TEXT NOT NULL DEFAULT '',
                template TEXT NOT NULL DEFAULT '',
                updated_at TEXT NOT NULL DEFAULT (strftime('%Y-%m-%dT%H:%M:%fZ','now'))
            )
        """)


# ── news_articles ────────────────────────────────────────────────
def add_news_article(row: Dict[str, Any]) -> int:
    with _conn() as conn:
        try:
            cur = conn.execute(
                "INSERT INTO news_articles(category, title, link, summary, pub_date) VALUES(?,?,?,?,?)",
                (row["category"], row["title"], row["link"], row.get("summary", ""), row.get("pub_date")),
            )
            return cur.lastrowid
        except sqlite3.IntegrityError:
            existing = conn.execute("SELECT id FROM news_articles WHERE link=?", (row["link"],)).fetchone()
            return existing["id"] if existing else 0


def list_news_articles(category: Optional[str] = None, days: int = 1) -> List[Dict[str, Any]]:
    sql = "SELECT * FROM news_articles WHERE fetched_at >= datetime('now', ?)"
    params: List[Any] = [f"-{int(days)} days"]
    if category:
        sql += " AND category=?"
        params.append(category)
    sql += " ORDER BY fetched_at DESC"
    with _conn() as conn:
        rows = conn.execute(sql, params).fetchall()
    return [dict(r) for r in rows]


# ── trending_keywords ───────────────────────────────────────────
def add_trending_keyword(row: Dict[str, Any]) -> int:
    with _conn() as conn:
        cur = conn.execute(
            "INSERT INTO trending_keywords(keyword, category, score, articles_count) VALUES(?,?,?,?)",
            (row["keyword"], row["category"], float(row.get("score", 0.0)), int(row.get("articles_count", 0))),
        )
        return cur.lastrowid


def list_trending_keywords(category: Optional[str] = None, used: Optional[bool] = None) -> List[Dict[str, Any]]:
    sql = "SELECT * FROM trending_keywords WHERE 1=1"
    params: List[Any] = []
    if category:
        sql += " AND category=?"
        params.append(category)
    if used is not None:
        sql += " AND used=?"
        params.append(1 if used else 0)
    sql += " ORDER BY score DESC, suggested_at DESC"
    with _conn() as conn:
        rows = conn.execute(sql, params).fetchall()
    return [dict(r) for r in rows]


def mark_keyword_used(keyword_id: int) -> None:
    with _conn() as conn:
        conn.execute("UPDATE trending_keywords SET used=1 WHERE id=?", (keyword_id,))


def get_trending_keyword(keyword_id: int) -> Optional[Dict[str, Any]]:
    with _conn() as conn:
        row = conn.execute("SELECT * FROM trending_keywords WHERE id=?", (keyword_id,)).fetchone()
    return dict(row) if row else None


# ── card_slates ─────────────────────────────────────────────────
def add_card_slate(row: Dict[str, Any]) -> int:
    with _conn() as conn:
        cur = conn.execute("""
            INSERT INTO card_slates(keyword, category, status, cover_copy, body_copies, cta_copy,
                                    suggested_caption, hashtags)
            VALUES(?,?,?,?,?,?,?,?)
        """, (
            row["keyword"], row["category"], row.get("status", "draft"),
            json.dumps(row.get("cover_copy", {}), ensure_ascii=False),
            json.dumps(row.get("body_copies", []), ensure_ascii=False),
            json.dumps(row.get("cta_copy", {}), ensure_ascii=False),
            row.get("suggested_caption", ""),
            json.dumps(row.get("hashtags", []), ensure_ascii=False),
        ))
        return cur.lastrowid


def update_slate_status(slate_id: int, status: str) -> None:
    with _conn() as conn:
        conn.execute(
            "UPDATE card_slates SET status=?, updated_at=strftime('%Y-%m-%dT%H:%M:%fZ','now') WHERE id=?",
            (status, slate_id),
        )


def get_card_slate(slate_id: int) -> Optional[Dict[str, Any]]:
    with _conn() as conn:
        row = conn.execute("SELECT * FROM card_slates WHERE id=?", (slate_id,)).fetchone()
    return dict(row) if row else None


def list_card_slates(limit: int = 50) -> List[Dict[str, Any]]:
    with _conn() as conn:
        rows = conn.execute(
            "SELECT * FROM card_slates ORDER BY created_at DESC LIMIT ?",
            (limit,),
        ).fetchall()
    return [dict(r) for r in rows]


def delete_card_slate(slate_id: int) -> None:
    with _conn() as conn:
        conn.execute("DELETE FROM card_slates WHERE id=?", (slate_id,))


# ── card_assets ─────────────────────────────────────────────────
def add_card_asset(slate_id: int, page_index: int, file_path: str, file_hash: str = "") -> int:
    with _conn() as conn:
        cur = conn.execute("""
            INSERT INTO card_assets(slate_id, page_index, file_path, file_hash)
            VALUES(?,?,?,?)
            ON CONFLICT(slate_id, page_index) DO UPDATE SET
                file_path=excluded.file_path, file_hash=excluded.file_hash
        """, (slate_id, page_index, file_path, file_hash))
        return cur.lastrowid


def list_card_assets(slate_id: int) -> List[Dict[str, Any]]:
    with _conn() as conn:
        rows = conn.execute(
            "SELECT * FROM card_assets WHERE slate_id=? ORDER BY page_index ASC",
            (slate_id,),
        ).fetchall()
    return [dict(r) for r in rows]


# ── generation_tasks ────────────────────────────────────────────
def create_task(task_type: str, params: Dict[str, Any]) -> str:
    tid = uuid.uuid4().hex
    with _conn() as conn:
        conn.execute(
            "INSERT INTO generation_tasks(id, type, params) VALUES(?,?,?)",
            (tid, task_type, json.dumps(params, ensure_ascii=False)),
        )
    return tid


def update_task(task_id: str, status: str, progress: int = 0, message: str = "",
                result_id: Optional[int] = None, error: Optional[str] = None) -> None:
    with _conn() as conn:
        conn.execute("""
            UPDATE generation_tasks
            SET status=?, progress=?, message=?, result_id=?, error=?,
                updated_at=strftime('%Y-%m-%dT%H:%M:%fZ','now')
            WHERE id=?
        """, (status, progress, message, result_id, error, task_id))


def get_task(task_id: str) -> Optional[Dict[str, Any]]:
    with _conn() as conn:
        row = conn.execute("SELECT * FROM generation_tasks WHERE id=?", (task_id,)).fetchone()
    return dict(row) if row else None


# ── prompt_templates ────────────────────────────────────────────
def upsert_prompt_template(name: str, template: str, description: str = "") -> None:
    with _conn() as conn:
        conn.execute("""
            INSERT INTO prompt_templates(name, description, template)
            VALUES(?,?,?)
            ON CONFLICT(name) DO UPDATE SET
                template=excluded.template,
                description=excluded.description,
                updated_at=strftime('%Y-%m-%dT%H:%M:%fZ','now')
        """, (name, description, template))


def get_prompt_template(name: str) -> Optional[Dict[str, Any]]:
    with _conn() as conn:
        row = conn.execute("SELECT * FROM prompt_templates WHERE name=?", (name,)).fetchone()
    return dict(row) if row else None
```

- [ ] **Step 4: Run test, expect pass**

Run: `cd insta-lab && pytest tests/test_db.py -v`
Expected: 6 tests PASS.

- [ ] **Step 5: Commit**

```
git add insta-lab/app/db.py insta-lab/tests/test_db.py
git commit -m "feat(insta-lab): db.py with 6 tables + CRUD"
```

---

## Task 3: news_collector.py — NAVER news API + storage

**Files:**
- Create: `insta-lab/app/news_collector.py`
- Create: `insta-lab/tests/test_news_collector.py`

- [ ] **Step 1: Write the failing test `tests/test_news_collector.py`**

```python
from unittest.mock import patch, MagicMock
import os
import tempfile

import pytest

from app import db as db_module
from app import news_collector


@pytest.fixture
def tmp_db(monkeypatch):
    fd, path = tempfile.mkstemp(suffix=".db")
    os.close(fd)
    monkeypatch.setattr(db_module, "DB_PATH", path)
    db_module.init_db()
    yield path
    os.remove(path)


SAMPLE_RESPONSE = {
    "items": [
        {
            "title": "<b>금리</b> 인상 단행",
            "originallink": "https://news.example.com/1",
            "link": "https://n.news.naver.com/article/1",
            "description": "한국은행이 <b>기준금리</b>를 25bp 올렸다.",
            "pubDate": "Fri, 15 May 2026 08:00:00 +0900",
        },
        {
            "title": "환율 급등",
            "originallink": "https://news.example.com/2",
            "link": "https://n.news.naver.com/article/2",
            "description": "원달러 환율이 1400원을 돌파했다.",
            "pubDate": "Fri, 15 May 2026 09:00:00 +0900",
        },
    ],
}


def test_strip_html_and_decode_entities():
    out = news_collector._clean(' <b>&quot;테스트&quot;</b> &amp; 아이템 ')
    assert out == '"테스트" & 아이템'


def test_search_news_parses_items(tmp_db):
    fake_resp = MagicMock()
    fake_resp.json.return_value = SAMPLE_RESPONSE
    fake_resp.raise_for_status.return_value = None
    with patch.object(news_collector.requests, "get", return_value=fake_resp):
        items = news_collector.search_news("금리", display=10)
    assert len(items) == 2
    assert items[0]["title"] == "금리 인상 단행"
    assert items[0]["summary"].startswith("한국은행")


def test_collect_for_category_inserts(tmp_db):
    fake_resp = MagicMock()
    fake_resp.json.return_value = SAMPLE_RESPONSE
    fake_resp.raise_for_status.return_value = None
    with patch.object(news_collector.requests, "get", return_value=fake_resp):
        n = news_collector.collect_for_category("economy", seed_keywords=["금리"], per_keyword=10)
    assert n == 2
    rows = db_module.list_news_articles(category="economy", days=7)
    assert {r["link"] for r in rows} == {
        "https://n.news.naver.com/article/1",
        "https://n.news.naver.com/article/2",
    }


def test_collect_dedupes_existing(tmp_db):
    db_module.add_news_article({
        "category": "economy", "title": "기존",
        "link": "https://n.news.naver.com/article/1", "summary": ""
    })
    fake_resp = MagicMock()
    fake_resp.json.return_value = SAMPLE_RESPONSE
    fake_resp.raise_for_status.return_value = None
    with patch.object(news_collector.requests, "get", return_value=fake_resp):
        n = news_collector.collect_for_category("economy", seed_keywords=["금리"])
    rows = db_module.list_news_articles(category="economy", days=7)
    assert len(rows) == 2  # only 1 new row added; the existing row is kept
```

- [ ] **Step 2: Run test, expect failure**

Run: `cd insta-lab && pytest tests/test_news_collector.py -v`
Expected: ImportError on `app.news_collector`.

- [ ] **Step 3: Implement `insta-lab/app/news_collector.py`**

```python
"""NAVER News Search API client — daily collection per category via seed keywords."""

import html
import logging
import re
from typing import Any, Dict, List, Optional

import requests

from .config import NAVER_CLIENT_ID, NAVER_CLIENT_SECRET, NEWS_PER_CATEGORY
from . import db

logger = logging.getLogger(__name__)

NEWS_URL = "https://openapi.naver.com/v1/search/news.json"
_HEADERS = {
    "X-Naver-Client-Id": NAVER_CLIENT_ID,
    "X-Naver-Client-Secret": NAVER_CLIENT_SECRET,
}
_TAG_RE = re.compile(r"<[^>]+>")


def _clean(text: str) -> str:
    if not text:
        return ""
    no_tag = _TAG_RE.sub("", text)
    return html.unescape(no_tag).strip()


def search_news(keyword: str, display: int = 30, sort: str = "date") -> List[Dict[str, Any]]:
    """Single call to NAVER news.json.

    Returns: list of {title, link, summary, pub_date}
    """
    resp = requests.get(
        NEWS_URL,
        headers=_HEADERS,
        params={"query": keyword, "display": display, "sort": sort},
        timeout=10,
    )
    resp.raise_for_status()
    data = resp.json()
    return [
        {
            "title": _clean(item.get("title", "")),
            "link": item.get("link") or item.get("originallink", ""),
            "summary": _clean(item.get("description", "")),
            "pub_date": item.get("pubDate", ""),
        }
        for item in data.get("items", [])
    ]


def collect_for_category(category: str,
                         seed_keywords: List[str],
                         per_keyword: Optional[int] = None) -> int:
    """Search each seed keyword for the category and insert results into the DB.

    Links already present (UNIQUE) are not re-inserted. Returns the number of
    articles processed in this batch (deduped within the batch by link).
    """
    per_kw = per_keyword if per_keyword is not None else max(1, NEWS_PER_CATEGORY // max(1, len(seed_keywords)))
    added = 0
    seen_links = set()
    for kw in seed_keywords:
        try:
            items = search_news(kw, display=per_kw)
        except Exception as e:
            logger.warning("search_news failed kw=%s err=%s", kw, e)
            continue
        for item in items:
            link = item["link"]
            if not link or link in seen_links:
                continue
            seen_links.add(link)
            new_id = db.add_news_article({
                "category": category,
                "title": item["title"],
                "link": link,
                "summary": item["summary"],
                "pub_date": item["pub_date"],
            })
            if new_id:
                # add_news_article returns the existing row's id on a duplicate
                # link, so this counter includes links already in the DB; the
                # row-level dedupe itself is enforced by UNIQUE(link).
                added += 1
    return added
```

- [ ] **Step 4: Run test, expect pass**

Run: `cd insta-lab && pytest tests/test_news_collector.py -v`
Expected: 4 tests PASS.

> Note: `test_collect_dedupes_existing` asserts total rows == 2 (1 pre-existing + 1 new). The `added` counter returns 2 in that test because both items pass the in-batch `seen_links` filter; the row-count assertion relies on `UNIQUE(link)` preventing a duplicate insertion. That is the contract under test: `add_news_article` returns the existing id on conflict and creates no duplicate row.

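The `UNIQUE(link)` contract the note above relies on can be demonstrated in isolation (in-memory table, columns simplified):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE news_articles (link TEXT NOT NULL UNIQUE)")
conn.execute("INSERT INTO news_articles(link) VALUES ('https://n.news.naver.com/article/1')")
try:
    # Inserting the same link again violates UNIQUE and raises
    conn.execute("INSERT INTO news_articles(link) VALUES ('https://n.news.naver.com/article/1')")
except sqlite3.IntegrityError:
    pass  # the real code falls back to SELECTing the existing id here
print(conn.execute("SELECT COUNT(*) FROM news_articles").fetchone()[0])  # expected: 1
```

The table ends with one row no matter how many times the same link is offered, which is why the row-count assertion holds even when the naive counter over-counts.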
- [ ] **Step 5: Commit**

```
git add insta-lab/app/news_collector.py insta-lab/tests/test_news_collector.py
git commit -m "feat(insta-lab): news_collector with NAVER news.json + dedupe"
```

---

## Task 4: keyword_extractor.py — frequency + Claude refinement

**Files:**
- Create: `insta-lab/app/keyword_extractor.py`
- Create: `insta-lab/tests/test_keyword_extractor.py`

- [ ] **Step 1: Write the failing test `tests/test_keyword_extractor.py`**

```python
import os
import tempfile
from unittest.mock import patch

import pytest

from app import db as db_module
from app import keyword_extractor


@pytest.fixture
def tmp_db(monkeypatch):
    fd, path = tempfile.mkstemp(suffix=".db")
    os.close(fd)
    monkeypatch.setattr(db_module, "DB_PATH", path)
    db_module.init_db()
    yield path
    os.remove(path)


def test_count_nouns_extracts_korean_nouns():
    text = "기준금리 인상으로 환율 급등. 기준금리 추가 인상 가능성"
    counts = keyword_extractor._count_nouns(text)
    assert counts["기준금리"] == 2
    assert counts["환율"] == 1


def test_top_candidates_filters_stopwords():
    counts = {"기준금리": 5, "있다": 7, "환율": 3, "그리고": 4}
    top = keyword_extractor._top_candidates(counts, n=10)
    keywords = [k for k, _ in top]
    assert "있다" not in keywords
    assert "그리고" not in keywords
    assert "기준금리" in keywords


def test_extract_for_category_persists(tmp_db):
    # seed articles
    for i in range(3):
        db_module.add_news_article({
            "category": "economy",
            "title": f"기준금리 인상 {i}",
            "link": f"https://example.com/{i}",
            "summary": "환율도 영향",
        })

    # mock LLM refinement
    fake_refined = [
        {"keyword": "기준금리", "score": 0.92, "reason": "핵심 금융 이슈"},
        {"keyword": "환율", "score": 0.71, "reason": "시장 영향"},
    ]
    with patch.object(keyword_extractor, "_refine_with_llm", return_value=fake_refined):
        kws = keyword_extractor.extract_for_category("economy", limit=2)

    assert len(kws) == 2
    assert kws[0]["keyword"] == "기준금리"
    persisted = db_module.list_trending_keywords(category="economy")
    assert {p["keyword"] for p in persisted} == {"기준금리", "환율"}
```

- [ ] **Step 2: Run test, expect failure**

Run: `cd insta-lab && pytest tests/test_keyword_extractor.py -v`
Expected: ImportError.

- [ ] **Step 3: Implement `insta-lab/app/keyword_extractor.py`**
```python
"""Keyword extraction — Korean noun frequency + Claude Haiku refinement."""

import json
import logging
import re
from collections import Counter
from typing import Any, Dict, List

from anthropic import Anthropic

from .config import ANTHROPIC_API_KEY, ANTHROPIC_MODEL_HAIKU, KEYWORDS_PER_CATEGORY
from . import db

logger = logging.getLogger(__name__)

_NOUN_RE = re.compile(r"[가-힣]{2,6}")
_STOPWORDS = {
    "있다", "없다", "이다", "되다", "그리고", "하지만", "통해", "위해", "오늘", "이번",
    "지난", "관련", "대해", "또한", "다만", "한편", "최근", "앞서", "현재", "진행",
    "발생", "결과", "이상", "이하", "여러", "다양", "방법", "경우", "이유", "필요",
}


def _count_nouns(text: str) -> Dict[str, int]:
    tokens = _NOUN_RE.findall(text or "")
    return Counter(tokens)


def _top_candidates(counts: Dict[str, int], n: int = 20) -> List[tuple]:
    filtered = [(k, c) for k, c in counts.items() if k not in _STOPWORDS]
    return sorted(filtered, key=lambda x: x[1], reverse=True)[:n]


def _refine_with_llm(category: str, candidates: List[tuple], articles: List[Dict[str, Any]]) -> List[Dict[str, Any]]:
    """Refine candidates with Claude Haiku. Returns a JSON list [{keyword, score(0~1), reason}]."""
    if not ANTHROPIC_API_KEY:
        return [{"keyword": k, "score": min(1.0, c / 10), "reason": "freq"} for k, c in candidates[:KEYWORDS_PER_CATEGORY]]

    client = Anthropic(api_key=ANTHROPIC_API_KEY)
    titles = [a["title"] for a in articles[:15]]
    prompt = f"""너는 인스타그램 카드 뉴스 큐레이터다.
카테고리: {category}
빈도 상위 후보: {[k for k, _ in candidates]}
관련 기사 제목 일부:
{chr(10).join('- ' + t for t in titles)}

이 후보 중에서 인스타 카드 콘텐츠로 적합한 키워드를 score 내림차순으로 최대 {KEYWORDS_PER_CATEGORY}개 골라.
출력 형식 (JSON 배열만):
[{{"keyword": "...", "score": 0.0~1.0, "reason": "..."}}]
"""
    msg = client.messages.create(
        model=ANTHROPIC_MODEL_HAIKU,
        max_tokens=600,
        messages=[{"role": "user", "content": prompt}],
    )
    text = msg.content[0].text.strip()
    # tolerate a Markdown code fence around the JSON
    if text.startswith("```"):
        text = re.sub(r"^```(?:json)?\s*|\s*```$", "", text).strip()
    try:
        return json.loads(text)
    except Exception:
        logger.warning("LLM refine JSON parse failed, falling back to freq")
        return [{"keyword": k, "score": min(1.0, c / 10), "reason": "freq-fallback"} for k, c in candidates[:KEYWORDS_PER_CATEGORY]]


def extract_for_category(category: str, limit: int = KEYWORDS_PER_CATEGORY) -> List[Dict[str, Any]]:
    """Extract keywords from a category's articles, persist them, and return the result."""
    articles = db.list_news_articles(category=category, days=2)
    text_blob = "\n".join((a["title"] + " " + a.get("summary", "")) for a in articles)
    counts = _count_nouns(text_blob)
    candidates = _top_candidates(counts, n=20)
    refined = _refine_with_llm(category, candidates, articles)[:limit]

    saved: List[Dict[str, Any]] = []
    for kw in refined:
        kid = db.add_trending_keyword({
            "keyword": kw["keyword"],
            "category": category,
            "score": float(kw.get("score", 0.0)),
            "articles_count": sum(1 for a in articles if kw["keyword"] in a["title"]),
        })
        saved.append({"id": kid, **kw, "category": category})
    return saved
```
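
One limitation worth keeping in mind: the `[가-힣]{2,6}` regex is not a morphological analyzer, so particle-suffixed forms count separately from their stems — which is part of why the LLM refinement pass exists. A quick check with the same pattern:

```python
import re
from collections import Counter

_NOUN_RE = re.compile(r"[가-힣]{2,6}")

text = "기준금리 인상으로 환율 급등. 기준금리 추가 인상 가능성"
counts = Counter(_NOUN_RE.findall(text))

# "인상" and "인상으로" share a stem but are counted as distinct tokens
print(counts["인상"], counts["인상으로"], counts["기준금리"])  # → 1 1 2
```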

- [ ] **Step 4: Run test, expect pass**

Run: `cd insta-lab && pytest tests/test_keyword_extractor.py -v`
Expected: 3 tests PASS.

- [ ] **Step 5: Commit**

```
git add insta-lab/app/keyword_extractor.py insta-lab/tests/test_keyword_extractor.py
git commit -m "feat(insta-lab): keyword_extractor with frequency + Claude refinement"
```

---

## Task 5: card_writer.py — Claude 10-page copy generator

**Files:**
- Create: `insta-lab/app/card_writer.py`
- Create: `insta-lab/tests/test_card_writer.py`

- [ ] **Step 1: Write the failing test `tests/test_card_writer.py`**
```python
import json
import os
import tempfile
from unittest.mock import MagicMock

import pytest

from app import db as db_module
from app import card_writer


@pytest.fixture
def tmp_db(monkeypatch):
    fd, path = tempfile.mkstemp(suffix=".db")
    os.close(fd)
    monkeypatch.setattr(db_module, "DB_PATH", path)
    db_module.init_db()
    yield path
    os.remove(path)


SAMPLE_LLM_JSON = {
    "cover_copy": {"headline": "금리 인상 단행", "body": "왜 지금?", "accent_color": "#0F62FE"},
    "body_copies": [
        {"headline": f"포인트 {i+1}", "body": f"본문 {i+1}"} for i in range(8)
    ],
    "cta_copy": {"headline": "정리", "body": "바로 확인", "cta": "팔로우"},
    "suggested_caption": "금리에 대해 알아보자",
    "hashtags": ["#금리", "#경제"],
}


def _fake_messages_create(*_args, **_kwargs):
    msg = MagicMock()
    block = MagicMock()
    block.text = json.dumps(SAMPLE_LLM_JSON, ensure_ascii=False)
    msg.content = [block]
    return msg


def test_write_slate_persists_full_payload(tmp_db, monkeypatch):
    # Add a related article so context isn't empty
    db_module.add_news_article({
        "category": "economy", "title": "기준금리 인상 단행",
        "link": "https://example.com/1", "summary": "한국은행 발표",
    })
    fake_client = MagicMock()
    fake_client.messages.create = _fake_messages_create
    monkeypatch.setattr(card_writer, "_client", lambda: fake_client)

    sid = card_writer.write_slate(keyword="기준금리", category="economy")
    slate = db_module.get_card_slate(sid)
    assert slate["status"] == "draft"
    body_copies = json.loads(slate["body_copies"])
    assert len(body_copies) == 8
    assert body_copies[0]["headline"] == "포인트 1"
    assert json.loads(slate["cover_copy"])["accent_color"] == "#0F62FE"


def test_write_slate_raises_on_invalid_json(tmp_db, monkeypatch):
    fake_client = MagicMock()
    bad_msg = MagicMock()
    bad_block = MagicMock()
    bad_block.text = "not json"
    bad_msg.content = [bad_block]
    fake_client.messages.create.return_value = bad_msg
    monkeypatch.setattr(card_writer, "_client", lambda: fake_client)

    with pytest.raises(ValueError):
        card_writer.write_slate(keyword="x", category="economy")
```

- [ ] **Step 2: Run test, expect failure**

Run: `cd insta-lab && pytest tests/test_card_writer.py -v`
Expected: ImportError.

- [ ] **Step 3: Implement `insta-lab/app/card_writer.py`**
```python
"""Generate all 10 pages of card copy in a single Claude call."""

import json
import logging
import re
from typing import Any, Dict, Optional

from anthropic import Anthropic

from .config import ANTHROPIC_API_KEY, ANTHROPIC_MODEL_SONNET
from . import db

logger = logging.getLogger(__name__)

DEFAULT_ACCENT_BY_CATEGORY = {
    "economy": "#0F62FE",
    "psychology": "#A66CFF",
    "celebrity": "#FF5C8A",
}

DEFAULT_PROMPT = """너는 인스타그램 카드 뉴스 카피라이터다.
카테고리: {category}
키워드: {keyword}
참고 기사:
{articles}

10페이지 인스타 카드용 카피를 다음 JSON 한 객체로만 출력해라 (코드펜스 금지):
{{
  "cover_copy": {{"headline": "<훅 한 줄>", "body": "<서브카피 1~2줄>", "accent_color": "#hex"}},
  "body_copies": [
    {{"headline": "<포인트 헤드라인>", "body": "<2~4문장 본문>"}},
    ... (총 8개)
  ],
  "cta_copy": {{"headline": "<요약 한 줄>", "body": "<마무리 1~2줄>", "cta": "팔로우/저장 등"}},
  "suggested_caption": "<인스타 캡션 본문>",
  "hashtags": ["#태그1", "#태그2", ...]
}}
"""


def _client() -> Anthropic:
    return Anthropic(api_key=ANTHROPIC_API_KEY)


def _strip_codefence(s: str) -> str:
    s = s.strip()
    if s.startswith("```"):
        s = re.sub(r"^```(?:json)?\s*|\s*```$", "", s).strip()
    return s


def _load_prompt() -> str:
    pt = db.get_prompt_template("slate_writer")
    if pt and pt.get("template"):
        return pt["template"]
    return DEFAULT_PROMPT


def write_slate(keyword: str, category: str,
                articles: Optional[list] = None) -> int:
    """Generate 10-page copy with Claude and persist it to card_slates. Returns slate_id."""
    if articles is None:
        articles = db.list_news_articles(category=category, days=2)
    article_text = "\n".join(
        f"- {a['title']}: {a.get('summary', '')[:120]}" for a in articles[:8]
    ) or "(참고 기사 없음)"

    prompt = _load_prompt().format(category=category, keyword=keyword, articles=article_text)
    msg = _client().messages.create(
        model=ANTHROPIC_MODEL_SONNET,
        max_tokens=4000,
        messages=[{"role": "user", "content": prompt}],
    )
    raw = msg.content[0].text
    cleaned = _strip_codefence(raw)
    try:
        data: Dict[str, Any] = json.loads(cleaned)
    except json.JSONDecodeError as e:
        logger.warning("slate JSON parse failed: %s", e)
        raise ValueError(f"Invalid JSON from LLM: {e}") from e

    body_copies = data.get("body_copies") or []
    if len(body_copies) != 8:
        raise ValueError(f"body_copies must have 8 items, got {len(body_copies)}")

    cover = data.get("cover_copy") or {}
    if not cover.get("accent_color"):
        cover["accent_color"] = DEFAULT_ACCENT_BY_CATEGORY.get(category, "#222831")

    sid = db.add_card_slate({
        "keyword": keyword,
        "category": category,
        "status": "draft",
        "cover_copy": cover,
        "body_copies": body_copies,
        "cta_copy": data.get("cta_copy") or {},
        "suggested_caption": data.get("suggested_caption") or "",
        "hashtags": data.get("hashtags") or [],
    })
    return sid
```
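
The fence-stripping behavior can be checked in isolation; this standalone function uses the same regex as `_strip_codefence` above:

```python
import re

def strip_codefence(s: str) -> str:
    # Remove a leading ``` / ```json marker and a trailing ``` if present
    s = s.strip()
    if s.startswith("```"):
        s = re.sub(r"^```(?:json)?\s*|\s*```$", "", s).strip()
    return s

fenced = "```json\n{\"ok\": true}\n```"
print(strip_codefence(fenced))            # → {"ok": true}
print(strip_codefence('{"ok": true}'))    # no fence → unchanged
```

This matters because models occasionally wrap JSON in a Markdown fence despite the "코드펜스 금지" instruction; stripping it keeps `json.loads` from failing on otherwise-valid output.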

- [ ] **Step 4: Run test, expect pass**

Run: `cd insta-lab && pytest tests/test_card_writer.py -v`
Expected: 2 tests PASS.

- [ ] **Step 5: Commit**

```
git add insta-lab/app/card_writer.py insta-lab/tests/test_card_writer.py
git commit -m "feat(insta-lab): card_writer with Claude 10-page JSON generator"
```

---

## Task 6: Card HTML template (skeleton) + card_renderer.py

**Files:**
- Create: `insta-lab/app/templates/default/card.html.j2`
- Create: `insta-lab/app/card_renderer.py`
- Create: `insta-lab/tests/test_card_renderer.py`

- [ ] **Step 1: Write `insta-lab/app/templates/default/card.html.j2`**
```html
<!DOCTYPE html>
<html lang="ko">
<head>
<meta charset="UTF-8">
<style>
  @import url('https://fonts.googleapis.com/css2?family=Noto+Sans+KR:wght@400;700;900&display=swap');
  * { margin: 0; padding: 0; box-sizing: border-box; }
  html, body {
    width: 1080px; height: 1350px;
    font-family: 'Noto Sans KR', sans-serif;
    background: #F7F7FA; color: #14171A;
  }
  .card {
    width: 1080px; height: 1350px;
    padding: 80px 72px;
    display: flex; flex-direction: column; justify-content: space-between;
    background: linear-gradient(180deg, #FFFFFF 0%, #F7F7FA 100%);
    border-top: 16px solid {{ accent_color }};
  }
  .badge {
    display: inline-block; padding: 8px 20px; border-radius: 999px;
    background: {{ accent_color }}; color: #fff;
    font-size: 28px; font-weight: 700; letter-spacing: -0.02em;
  }
  .headline {
    font-size: {{ 96 if page_type == 'cover' else 72 }}px;
    font-weight: 900; line-height: 1.15; letter-spacing: -0.04em;
    margin-top: 32px;
  }
  .body {
    font-size: 40px; font-weight: 400; line-height: 1.55;
    margin-top: 40px; color: #2A2F35;
    white-space: pre-wrap;
  }
  .footer {
    display: flex; justify-content: space-between; align-items: center;
    font-size: 28px; color: #6B7280; font-weight: 500;
  }
  .cta { font-weight: 700; color: {{ accent_color }}; }
</style>
</head>
<body>
<div class="card">
  <div>
    <span class="badge">{{ page_type|upper }}</span>
    <h1 class="headline">{{ headline }}</h1>
    <p class="body">{{ body }}</p>
  </div>
  <div class="footer">
    <span>{{ page_no }} / {{ total_pages }}</span>
    {% if cta %}<span class="cta">{{ cta }}</span>{% endif %}
  </div>
</div>
</body>
</html>
```

- [ ] **Step 2: Write the failing test `tests/test_card_renderer.py`**

```python
import os
import tempfile

import pytest

from app import db as db_module
from app import card_renderer


@pytest.fixture
def tmp_db_and_dirs(monkeypatch, tmp_path):
    fd, path = tempfile.mkstemp(suffix=".db")
    os.close(fd)
    monkeypatch.setattr(db_module, "DB_PATH", path)
    monkeypatch.setattr(card_renderer, "CARDS_DIR", str(tmp_path / "cards"))
    db_module.init_db()
    yield path
    os.remove(path)


def _seed_slate() -> int:
    return db_module.add_card_slate({
        "keyword": "테스트",
        "category": "economy",
        "status": "draft",
        "cover_copy": {"headline": "커버 헤드라인", "body": "서브카피", "accent_color": "#0F62FE"},
        "body_copies": [{"headline": f"본문 {i+1}", "body": f"내용 {i+1}"} for i in range(8)],
        "cta_copy": {"headline": "마무리", "body": "감사합니다", "cta": "팔로우"},
    })


@pytest.mark.asyncio
async def test_render_slate_produces_ten_pngs(tmp_db_and_dirs):
    sid = _seed_slate()
    paths = await card_renderer.render_slate(sid)
    assert len(paths) == 10
    for p in paths:
        assert os.path.exists(p)
        assert os.path.getsize(p) > 1000  # > 1 KB sanity check
    db_module.update_slate_status(sid, "rendered")  # the caller normally does this
    assets = db_module.list_card_assets(sid)
    assert {a["page_index"] for a in assets} == set(range(1, 11))
```

- [ ] **Step 3: Run test, expect failure**

Run: `cd insta-lab && pytest tests/test_card_renderer.py -v`
Expected: ImportError on `card_renderer` (module doesn't exist yet).

> **Note:** This test runs real Playwright. Outside the container, the implementer must run `playwright install chromium` before this step can pass; inside the container, Chromium is already installed by the Dockerfile.

- [ ] **Step 4: Implement `insta-lab/app/card_renderer.py`**
```python
"""Jinja → HTML → Playwright headless screenshot."""

import hashlib
import json
import logging
import os
import tempfile
from typing import List

from jinja2 import Environment, FileSystemLoader, select_autoescape
from playwright.async_api import async_playwright

from .config import CARDS_DIR, CARD_TEMPLATE_DIR
from . import db

logger = logging.getLogger(__name__)


def _env() -> Environment:
    return Environment(
        loader=FileSystemLoader(CARD_TEMPLATE_DIR),
        autoescape=select_autoescape(["html", "j2"]),
    )


def _slate_dir(slate_id: int) -> str:
    out = os.path.join(CARDS_DIR, str(slate_id))
    os.makedirs(out, exist_ok=True)
    return out


def _build_pages(slate: dict) -> List[dict]:
    cover = json.loads(slate["cover_copy"] or "{}")
    bodies = json.loads(slate["body_copies"] or "[]")
    cta = json.loads(slate["cta_copy"] or "{}")
    accent = cover.get("accent_color") or "#0F62FE"
    pages: List[dict] = []
    pages.append({
        "page_type": "cover", "page_no": 1, "total_pages": 10,
        "headline": cover.get("headline", ""), "body": cover.get("body", ""),
        "accent_color": accent, "cta": "",
    })
    for i, b in enumerate(bodies[:8]):
        pages.append({
            "page_type": "body", "page_no": i + 2, "total_pages": 10,
            "headline": b.get("headline", ""), "body": b.get("body", ""),
            "accent_color": accent, "cta": "",
        })
    pages.append({
        "page_type": "cta", "page_no": 10, "total_pages": 10,
        "headline": cta.get("headline", ""), "body": cta.get("body", ""),
        "accent_color": accent, "cta": cta.get("cta", ""),
    })
    return pages


async def render_slate(slate_id: int, template: str = "default/card.html.j2") -> List[str]:
    slate = db.get_card_slate(slate_id)
    if not slate:
        raise ValueError(f"slate {slate_id} not found")
    env = _env()
    tmpl = env.get_template(template)
    pages = _build_pages(slate)
    out_dir = _slate_dir(slate_id)
    paths: List[str] = []

    async with async_playwright() as p:
        browser = await p.chromium.launch()
        try:
            ctx = await browser.new_context(viewport={"width": 1080, "height": 1350})
            page = await ctx.new_page()
            for spec in pages:
                html_str = tmpl.render(**spec)
                with tempfile.NamedTemporaryFile("w", suffix=".html", delete=False, encoding="utf-8") as f:
                    f.write(html_str)
                    html_path = f.name
                try:
                    await page.goto(f"file://{html_path}", wait_until="networkidle")
                    out_path = os.path.join(out_dir, f"{spec['page_no']:02d}.png")
                    await page.screenshot(path=out_path, full_page=False, omit_background=False)
                    with open(out_path, "rb") as png:
                        file_hash = hashlib.md5(png.read()).hexdigest()
                    db.add_card_asset(slate_id, spec["page_no"], out_path, file_hash)
                    paths.append(out_path)
                finally:
                    try:
                        os.unlink(html_path)
                    except OSError:
                        pass
        finally:
            await browser.close()
    return paths
```
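
As a sanity check on the page-layout contract (cover = page 1, bodies fill 2..9, CTA pinned to page 10), the numbering in `_build_pages` can be sketched without a DB or browser. This is an illustrative standalone version, not the module's API:

```python
def page_numbers(n_bodies: int = 8) -> list[tuple[str, int]]:
    # cover is page 1; bodies occupy 2..(n_bodies + 1); CTA is always page 10
    pages = [("cover", 1)]
    pages += [("body", i + 2) for i in range(min(n_bodies, 8))]
    pages.append(("cta", 10))
    return pages

print([n for _, n in page_numbers()])  # → [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
```

Note the CTA page number is hard-coded to 10, which is safe here only because `card_writer.write_slate` rejects any payload whose `body_copies` is not exactly 8 items.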

- [ ] **Step 5: Install Playwright chromium locally (if not in container)**

Run: `cd insta-lab && pip install -r requirements.txt && playwright install chromium`
Expected: Successful download of Chromium.

- [ ] **Step 6: Run test, expect pass**

Run: `cd insta-lab && pytest tests/test_card_renderer.py -v`
Expected: 1 test PASS (may take 10-30s).

- [ ] **Step 7: Commit**

```
git add insta-lab/app/templates insta-lab/app/card_renderer.py insta-lab/tests/test_card_renderer.py
git commit -m "feat(insta-lab): card_renderer with Jinja + Playwright (1080x1350)"
```

---

## Task 7: main.py — FastAPI endpoints

**Files:**
- Create: `insta-lab/app/main.py`
- Create: `insta-lab/tests/test_main.py`

- [ ] **Step 1: Write the failing test `tests/test_main.py`**
```python
import os
import tempfile

import pytest
from fastapi.testclient import TestClient

from app import db as db_module


@pytest.fixture
def client(monkeypatch):
    fd, path = tempfile.mkstemp(suffix=".db")
    os.close(fd)
    monkeypatch.setattr(db_module, "DB_PATH", path)
    db_module.init_db()
    from app import main
    monkeypatch.setattr(main, "DB_PATH", path)
    with TestClient(main.app) as c:
        yield c
    os.remove(path)


def test_health(client):
    resp = client.get("/health")
    assert resp.status_code == 200
    assert resp.json()["ok"] is True


def test_status_endpoint(client):
    resp = client.get("/api/insta/status")
    assert resp.status_code == 200
    j = resp.json()
    assert "naver_api" in j and "anthropic_api" in j


def test_news_articles_listing(client):
    db_module.add_news_article({
        "category": "economy", "title": "T1", "link": "https://x/1", "summary": "S",
    })
    resp = client.get("/api/insta/news/articles?category=economy&days=7")
    assert resp.status_code == 200
    assert len(resp.json()["items"]) == 1


def test_keywords_listing(client):
    db_module.add_trending_keyword({
        "keyword": "K", "category": "economy", "score": 0.5, "articles_count": 3,
    })
    resp = client.get("/api/insta/keywords?category=economy")
    assert resp.status_code == 200
    assert resp.json()["items"][0]["keyword"] == "K"


def test_create_slate_kicks_background_task(client, monkeypatch):
    # mock writer + renderer to avoid network/playwright
    from app import card_writer, card_renderer

    def fake_write(keyword, category, articles=None):
        return db_module.add_card_slate({
            "keyword": keyword, "category": category, "status": "draft",
            "cover_copy": {"headline": "H", "body": "B", "accent_color": "#000"},
            "body_copies": [{"headline": f"h{i}", "body": f"b{i}"} for i in range(8)],
            "cta_copy": {"headline": "C", "body": "B", "cta": "F"},
        })

    async def fake_render(slate_id, template="default/card.html.j2"):
        for i in range(1, 11):
            db_module.add_card_asset(slate_id, i, f"/tmp/{slate_id}_{i}.png", "h")
        return [f"/tmp/{slate_id}_{i}.png" for i in range(1, 11)]

    monkeypatch.setattr(card_writer, "write_slate", fake_write)
    monkeypatch.setattr(card_renderer, "render_slate", fake_render)

    resp = client.post("/api/insta/slates", json={"keyword": "K", "category": "economy"})
    assert resp.status_code == 200
    task_id = resp.json()["task_id"]
    # TestClient runs BackgroundTasks synchronously, so the task is normally
    # already finished on the first poll; the loop guards against a future
    # async executor.
    for _ in range(20):
        st = client.get(f"/api/insta/tasks/{task_id}").json()
        if st["status"] in ("succeeded", "failed"):
            break
    assert st["status"] == "succeeded"
    slate_id = st["result_id"]
    detail = client.get(f"/api/insta/slates/{slate_id}").json()
    assert detail["status"] == "rendered"
    assert len(detail["assets"]) == 10
```

- [ ] **Step 2: Run test, expect failure**

Run: `cd insta-lab && pytest tests/test_main.py -v`
Expected: ImportError.

- [ ] **Step 3: Implement `insta-lab/app/main.py`**
```python
"""FastAPI entrypoint for insta-lab."""

import json
import logging
import os
from typing import Optional

from fastapi import FastAPI, HTTPException, BackgroundTasks, Query
from fastapi.middleware.cors import CORSMiddleware
from fastapi.responses import FileResponse
from pydantic import BaseModel

from .config import (
    CORS_ALLOW_ORIGINS, NAVER_CLIENT_ID, ANTHROPIC_API_KEY,
    INSTA_DATA_PATH, DB_PATH, DEFAULT_CATEGORY_SEEDS, KEYWORDS_PER_CATEGORY,
)
from . import db, news_collector, keyword_extractor, card_writer, card_renderer

logger = logging.getLogger(__name__)
app = FastAPI()

app.add_middleware(
    CORSMiddleware,
    allow_origins=[o.strip() for o in CORS_ALLOW_ORIGINS.split(",")],
    allow_credentials=False,
    allow_methods=["GET", "POST", "PUT", "DELETE", "OPTIONS", "PATCH"],
    allow_headers=["Content-Type"],
)


@app.on_event("startup")
def on_startup():
    os.makedirs(INSTA_DATA_PATH, exist_ok=True)
    db.init_db()


@app.get("/health")
def health():
    return {"ok": True}


@app.get("/api/insta/status")
def status():
    return {
        "ok": True,
        "naver_api": bool(NAVER_CLIENT_ID),
        "anthropic_api": bool(ANTHROPIC_API_KEY),
    }


# ── News ─────────────────────────────────────────────────────────
class CollectRequest(BaseModel):
    categories: Optional[list[str]] = None  # None = all defaults


def _seeds_for(category: str) -> list[str]:
    pt = db.get_prompt_template("category_seeds")
    if pt and pt.get("template"):
        try:
            data = json.loads(pt["template"])
            if category in data:
                return list(data[category])
        except Exception:
            pass
    return list(DEFAULT_CATEGORY_SEEDS.get(category, []))


async def _bg_collect(task_id: str, categories: list[str]):
    try:
        db.update_task(task_id, "processing", 10, "수집 중")
        total = 0
        for cat in categories:
            seeds = _seeds_for(cat)
            if not seeds:
                continue
            total += news_collector.collect_for_category(cat, seeds)
        db.update_task(task_id, "succeeded", 100, f"{total}건 수집", result_id=total)
    except Exception as e:
        logger.exception("collect failed")
        db.update_task(task_id, "failed", 0, "", error=str(e))


@app.post("/api/insta/news/collect")
def collect_news(req: CollectRequest, bg: BackgroundTasks):
    cats = req.categories or list(DEFAULT_CATEGORY_SEEDS.keys())
    tid = db.create_task("news_collect", {"categories": cats})
    bg.add_task(_bg_collect, tid, cats)
    return {"task_id": tid, "categories": cats}


@app.get("/api/insta/news/articles")
def list_articles(category: Optional[str] = None, days: int = Query(7, ge=1, le=90)):
    return {"items": db.list_news_articles(category=category, days=days)}


# ── Keywords ─────────────────────────────────────────────────────
class ExtractRequest(BaseModel):
    categories: Optional[list[str]] = None


async def _bg_extract(task_id: str, categories: list[str]):
    try:
        db.update_task(task_id, "processing", 10, "추출 중")
        for cat in categories:
            keyword_extractor.extract_for_category(cat, limit=KEYWORDS_PER_CATEGORY)
        db.update_task(task_id, "succeeded", 100, "완료", result_id=0)
    except Exception as e:
        logger.exception("extract failed")
        db.update_task(task_id, "failed", 0, "", error=str(e))


@app.post("/api/insta/keywords/extract")
def extract_keywords(req: ExtractRequest, bg: BackgroundTasks):
    cats = req.categories or list(DEFAULT_CATEGORY_SEEDS.keys())
    tid = db.create_task("keyword_extract", {"categories": cats})
    bg.add_task(_bg_extract, tid, cats)
    return {"task_id": tid, "categories": cats}


@app.get("/api/insta/keywords")
def list_keywords(category: Optional[str] = None, used: Optional[bool] = None):
    return {"items": db.list_trending_keywords(category=category, used=used)}


# ── Slates ───────────────────────────────────────────────────────
class SlateRequest(BaseModel):
    keyword: str
    category: str
    keyword_id: Optional[int] = None


async def _bg_create_slate(task_id: str, keyword: str, category: str, keyword_id: Optional[int]):
    try:
        db.update_task(task_id, "processing", 30, "카피 생성 중")
        sid = card_writer.write_slate(keyword=keyword, category=category)
        db.update_task(task_id, "processing", 70, "카드 렌더 중")
        await card_renderer.render_slate(sid)
        db.update_slate_status(sid, "rendered")
        if keyword_id:
            db.mark_keyword_used(keyword_id)
        db.update_task(task_id, "succeeded", 100, "완료", result_id=sid)
    except Exception as e:
        logger.exception("create slate failed")
        db.update_task(task_id, "failed", 0, "", error=str(e))


@app.post("/api/insta/slates")
def create_slate(req: SlateRequest, bg: BackgroundTasks):
    tid = db.create_task("slate_create", req.model_dump())  # pydantic v2: .dict() is deprecated
    bg.add_task(_bg_create_slate, tid, req.keyword, req.category, req.keyword_id)
    return {"task_id": tid}


@app.get("/api/insta/slates")
def list_slates(limit: int = Query(50, ge=1, le=500)):
    return {"items": db.list_card_slates(limit=limit)}


@app.get("/api/insta/slates/{slate_id}")
def get_slate(slate_id: int):
    s = db.get_card_slate(slate_id)
    if not s:
        raise HTTPException(404, "slate not found")
    s["assets"] = db.list_card_assets(slate_id)
    for k in ("cover_copy", "body_copies", "cta_copy", "hashtags"):
        if isinstance(s.get(k), str):
            try:
                s[k] = json.loads(s[k])
            except Exception:
                pass
    return s


async def _bg_render(task_id: str, slate_id: int):
    try:
        db.update_task(task_id, "processing", 30, "재렌더 중")
        await card_renderer.render_slate(slate_id)
        db.update_slate_status(slate_id, "rendered")
        db.update_task(task_id, "succeeded", 100, "완료", result_id=slate_id)
    except Exception as e:
        logger.exception("render failed")
        db.update_task(task_id, "failed", 0, "", error=str(e))


@app.post("/api/insta/slates/{slate_id}/render")
def render_slate_endpoint(slate_id: int, bg: BackgroundTasks):
    if not db.get_card_slate(slate_id):
        raise HTTPException(404, "slate not found")
    tid = db.create_task("slate_render", {"slate_id": slate_id})
    bg.add_task(_bg_render, tid, slate_id)
    return {"task_id": tid}


@app.get("/api/insta/slates/{slate_id}/assets/{page}")
def get_asset(slate_id: int, page: int):
    if not (1 <= page <= 10):
        raise HTTPException(400, "page must be 1..10")
    assets = db.list_card_assets(slate_id)
    match = next((a for a in assets if a["page_index"] == page), None)
    if not match:
        raise HTTPException(404, "asset not found")
    return FileResponse(match["file_path"], media_type="image/png")


@app.delete("/api/insta/slates/{slate_id}")
def delete_slate(slate_id: int):
    if not db.get_card_slate(slate_id):
        raise HTTPException(404)
    # delete rendered files first, then the DB rows
    for a in db.list_card_assets(slate_id):
        try:
            os.unlink(a["file_path"])
        except OSError:
            pass
    db.delete_card_slate(slate_id)
    return {"ok": True}


# ── Tasks ────────────────────────────────────────────────────────
@app.get("/api/insta/tasks/{task_id}")
def get_task_status(task_id: str):
    t = db.get_task(task_id)
    if not t:
        raise HTTPException(404)
    return t


# ── Prompt Templates ─────────────────────────────────────────────
class TemplateBody(BaseModel):
    template: str
    description: str = ""


@app.get("/api/insta/templates/prompts/{name}")
def get_prompt(name: str):
    pt = db.get_prompt_template(name)
    if not pt:
        raise HTTPException(404)
    return pt


@app.put("/api/insta/templates/prompts/{name}")
def upsert_prompt(name: str, body: TemplateBody):
    db.upsert_prompt_template(name, body.template, body.description)
    return db.get_prompt_template(name)
```
|
||
|
||
- [ ] **Step 4: Run test, expect pass**

Run: `cd insta-lab && pytest tests/test_main.py -v`
Expected: 5 tests PASS.

- [ ] **Step 5: Run full insta-lab test suite**

Run: `cd insta-lab && pytest -v`
Expected: All tests PASS (db: 6, news_collector: 4, keyword_extractor: 3, card_writer: 2, card_renderer: 1, main: 5 = 21).

- [ ] **Step 6: Commit**

```
git add insta-lab/app/main.py insta-lab/tests/test_main.py
git commit -m "feat(insta-lab): main.py FastAPI endpoints + BackgroundTasks"
```

---

## Task 8: docker-compose.yml — swap blog-lab → insta-lab

**Files:**
- Modify: `docker-compose.yml`

- [ ] **Step 1: Replace `blog-lab` block**

In `docker-compose.yml`, find the `blog-lab:` service block (currently lines 88-107) and replace it entirely with:

```yaml
  insta-lab:
    build:
      context: ./insta-lab
    container_name: insta-lab
    restart: unless-stopped
    ports:
      - "18700:8000"
    environment:
      - TZ=${TZ:-Asia/Seoul}
      - ANTHROPIC_API_KEY=${ANTHROPIC_API_KEY:-}
      - ANTHROPIC_MODEL_HAIKU=${ANTHROPIC_MODEL_HAIKU:-claude-haiku-4-5-20251001}
      - ANTHROPIC_MODEL_SONNET=${ANTHROPIC_MODEL_SONNET:-claude-sonnet-4-6}
      - NAVER_CLIENT_ID=${NAVER_CLIENT_ID:-}
      - NAVER_CLIENT_SECRET=${NAVER_CLIENT_SECRET:-}
      - INSTA_DATA_PATH=/app/data
      - CARD_TEMPLATE_DIR=/app/app/templates
      - CORS_ALLOW_ORIGINS=${CORS_ALLOW_ORIGINS:-http://localhost:3007,http://localhost:8080}
    volumes:
      - ${RUNTIME_PATH}/data/insta:/app/data
    healthcheck:
      test: ["CMD", "python", "-c", "import urllib.request; urllib.request.urlopen('http://localhost:8000/health')"]
      interval: 30s
      timeout: 5s
      retries: 3
```

- [ ] **Step 2: Update `agent-office` service env + depends_on**

In the `agent-office:` block:
- Replace `- BLOG_LAB_URL=http://blog-lab:8000` with `- INSTA_LAB_URL=http://insta-lab:8000`
- In `depends_on`: replace `- blog-lab` with `- insta-lab`

- [ ] **Step 3: Update `frontend` depends_on**

In the `frontend:` block, in `depends_on`: replace `- blog-lab` with `- insta-lab`.

- [ ] **Step 4: Verify YAML loads**

Run: `docker compose config --quiet`
Expected: No output (success); the command is the same on Windows PowerShell.

- [ ] **Step 5: Commit**

```
git add docker-compose.yml
git commit -m "chore(compose): replace blog-lab service with insta-lab"
```

---

## Task 9: nginx — swap /api/blog-marketing/ → /api/insta/

**Files:**
- Modify: `nginx/default.conf`

- [ ] **Step 1: Replace blog-marketing location block**

Find the block (currently lines 156-167):

```
    # blog-marketing API
    location /api/blog-marketing/ {
        resolver 127.0.0.11 valid=10s;
        set $blog_backend blog-lab:8000;
        ...
        proxy_pass http://$blog_backend$request_uri;
    }
```

Replace with:

```
    # insta API
    location /api/insta/ {
        resolver 127.0.0.11 valid=10s;
        set $insta_backend insta-lab:8000;

        proxy_http_version 1.1;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_read_timeout 300s;
        proxy_pass http://$insta_backend$request_uri;
    }
```

- [ ] **Step 2: Validate nginx config syntax**

Run: `docker compose run --rm --entrypoint nginx frontend -t`
This tests the image's main `/etc/nginx/nginx.conf`, which includes `conf.d/default.conf`. (Pointing `-c` directly at `conf.d/default.conf` would fail, since the fragment has no top-level `events`/`http` context.)
If the frontend image isn't built yet, defer this step until the Task 17 smoke test.

- [ ] **Step 3: Commit**

```
git add nginx/default.conf
git commit -m "chore(nginx): replace /api/blog-marketing with /api/insta"
```

---

## Task 10: agent-office config — INSTA_LAB_URL

**Files:**
- Modify: `agent-office/app/config.py`

- [ ] **Step 1: Replace BLOG_LAB_URL with INSTA_LAB_URL**

In `agent-office/app/config.py`, change:

```python
BLOG_LAB_URL = os.getenv("BLOG_LAB_URL", "http://localhost:18700")
```

to:

```python
INSTA_LAB_URL = os.getenv("INSTA_LAB_URL", "http://localhost:18700")
```

- [ ] **Step 2: Commit**

```
git add agent-office/app/config.py
git commit -m "chore(agent-office): swap BLOG_LAB_URL for INSTA_LAB_URL"
```

---

## Task 11: agent-office service_proxy — remove blog_*, add insta_*

**Files:**
- Modify: `agent-office/app/service_proxy.py`

- [ ] **Step 1: Update import line**

In `agent-office/app/service_proxy.py`, line 4, change:

```python
from .config import STOCK_URL, MUSIC_LAB_URL, BLOG_LAB_URL, REALESTATE_LAB_URL
```

to:

```python
from .config import STOCK_URL, MUSIC_LAB_URL, INSTA_LAB_URL, REALESTATE_LAB_URL
```

- [ ] **Step 2: Delete the entire blog-lab section**

Delete the comment header `# --- blog-lab ---` and all functions `blog_research`, `blog_task_status`, `blog_generate`, `blog_market`, `blog_review`, `blog_publish`, `blog_get_post` (lines ~104-156).

- [ ] **Step 3: Insert insta-lab section in the same place**

```python
# --- insta-lab ---

async def insta_collect(categories: Optional[list] = None) -> Dict[str, Any]:
    """뉴스 수집 트리거 → task_id 반환."""
    payload = {"categories": categories} if categories else {}
    resp = await _client.post(f"{INSTA_LAB_URL}/api/insta/news/collect", json=payload)
    resp.raise_for_status()
    return resp.json()


async def insta_extract(categories: Optional[list] = None) -> Dict[str, Any]:
    payload = {"categories": categories} if categories else {}
    resp = await _client.post(f"{INSTA_LAB_URL}/api/insta/keywords/extract", json=payload)
    resp.raise_for_status()
    return resp.json()


async def insta_list_keywords(category: Optional[str] = None,
                              used: Optional[bool] = None) -> List[Dict[str, Any]]:
    params: Dict[str, Any] = {}
    if category:
        params["category"] = category
    if used is not None:
        params["used"] = "true" if used else "false"
    resp = await _client.get(f"{INSTA_LAB_URL}/api/insta/keywords", params=params)
    resp.raise_for_status()
    return resp.json().get("items", [])


async def insta_get_keyword(keyword_id: int) -> Optional[Dict[str, Any]]:
    items = await insta_list_keywords()
    for it in items:
        if it["id"] == keyword_id:
            return it
    return None


async def insta_create_slate(keyword: str, category: str, keyword_id: Optional[int] = None) -> Dict[str, Any]:
    resp = await _client.post(
        f"{INSTA_LAB_URL}/api/insta/slates",
        json={"keyword": keyword, "category": category, "keyword_id": keyword_id},
    )
    resp.raise_for_status()
    return resp.json()


async def insta_task_status(task_id: str) -> Dict[str, Any]:
    resp = await _client.get(f"{INSTA_LAB_URL}/api/insta/tasks/{task_id}")
    resp.raise_for_status()
    return resp.json()


async def insta_get_slate(slate_id: int) -> Dict[str, Any]:
    resp = await _client.get(f"{INSTA_LAB_URL}/api/insta/slates/{slate_id}")
    resp.raise_for_status()
    return resp.json()


async def insta_get_asset_bytes(slate_id: int, page: int) -> bytes:
    """카드 PNG 바이트를 가져와 텔레그램 미디어 그룹에 첨부."""
    async with httpx.AsyncClient(timeout=30) as client:
        resp = await client.get(f"{INSTA_LAB_URL}/api/insta/slates/{slate_id}/assets/{page}")
        resp.raise_for_status()
        return resp.content
```

- [ ] **Step 4: Sanity check imports**

Run from repo root:
```
python -c "from agent_office.app import service_proxy" 2>/dev/null || \
python -c "import sys; sys.path.insert(0, 'agent-office'); from app import service_proxy; print('ok')"
```
Expected: prints `ok`.

- [ ] **Step 5: Commit**

```
git add agent-office/app/service_proxy.py
git commit -m "feat(agent-office): replace blog_* proxy with insta_* helpers"
```

---

## Task 12: agents/insta.py — InstaAgent

**Files:**
- Create: `agent-office/app/agents/insta.py`
- Create: `agent-office/tests/test_insta_agent.py`

- [ ] **Step 1: Write the failing test `agent-office/tests/test_insta_agent.py`**

```python
import asyncio
from unittest.mock import patch, AsyncMock, MagicMock

import pytest

from app.agents.insta import InstaAgent


@pytest.mark.asyncio
async def test_on_command_extract_dispatches(monkeypatch):
    agent = InstaAgent()
    fake_collect = AsyncMock(return_value={"task_id": "tcollect"})
    fake_extract = AsyncMock(return_value={"task_id": "textract"})
    fake_status = AsyncMock(side_effect=[
        {"status": "succeeded", "result_id": 0},
        {"status": "succeeded", "result_id": 0},
    ])
    fake_keywords = AsyncMock(return_value=[
        {"id": 1, "keyword": "K1", "category": "economy", "score": 0.9},
        {"id": 2, "keyword": "K2", "category": "psychology", "score": 0.8},
    ])

    monkeypatch.setattr("app.agents.insta.service_proxy.insta_collect", fake_collect)
    monkeypatch.setattr("app.agents.insta.service_proxy.insta_extract", fake_extract)
    monkeypatch.setattr("app.agents.insta.service_proxy.insta_task_status", fake_status)
    monkeypatch.setattr("app.agents.insta.service_proxy.insta_list_keywords", fake_keywords)
    monkeypatch.setattr("app.agents.insta.messaging.send_raw", AsyncMock(return_value={"ok": True}))

    result = await agent.on_command("extract", {})
    assert result["ok"] is True
    fake_collect.assert_awaited()
    fake_extract.assert_awaited()


@pytest.mark.asyncio
async def test_on_callback_render_kicks_pipeline(monkeypatch):
    agent = InstaAgent()
    fake_kw = AsyncMock(return_value={"id": 7, "keyword": "테스트", "category": "economy"})
    fake_create = AsyncMock(return_value={"task_id": "tslate"})
    fake_status = AsyncMock(side_effect=[
        {"status": "processing"},
        {"status": "succeeded", "result_id": 42},
    ])
    fake_slate = AsyncMock(return_value={
        "id": 42, "status": "rendered",
        "suggested_caption": "캡션", "hashtags": ["#a", "#b"],
        "assets": [{"page_index": i, "file_path": f"/x/{i}.png"} for i in range(1, 11)],
    })
    fake_bytes = AsyncMock(side_effect=[b"PNG"] * 10)
    fake_send_media = AsyncMock(return_value={"ok": True})

    monkeypatch.setattr("app.agents.insta.service_proxy.insta_get_keyword", fake_kw)
    monkeypatch.setattr("app.agents.insta.service_proxy.insta_create_slate", fake_create)
    monkeypatch.setattr("app.agents.insta.service_proxy.insta_task_status", fake_status)
    monkeypatch.setattr("app.agents.insta.service_proxy.insta_get_slate", fake_slate)
    monkeypatch.setattr("app.agents.insta.service_proxy.insta_get_asset_bytes", fake_bytes)
    monkeypatch.setattr("app.agents.insta._send_media_group", fake_send_media)
    monkeypatch.setattr("app.agents.insta.messaging.send_raw", AsyncMock(return_value={"ok": True}))

    out = await agent.on_callback("render", {"keyword_id": 7})
    assert out["ok"] is True
    fake_create.assert_awaited()
    fake_send_media.assert_awaited()
```

- [ ] **Step 2: Run test, expect failure**

Run: `cd agent-office && pytest tests/test_insta_agent.py -v`
Expected: ImportError on `app.agents.insta`.

- [ ] **Step 3: Implement `agent-office/app/agents/insta.py`**

```python
"""인스타 카드 에이전트 — 매일 09:30 뉴스 수집·키워드 추출 → 텔레그램 후보 푸시.
사용자가 키워드 버튼을 누르면 카드 슬레이트 생성 + 10장 미디어 그룹 발송."""

import asyncio
import json
import logging
from typing import Any, Dict, List, Optional

import httpx

from .base import BaseAgent
from ..db import (
    create_task, update_task_status, add_log, get_agent_config,
)
from ..config import TELEGRAM_BOT_TOKEN, TELEGRAM_CHAT_ID
from .. import service_proxy
from ..telegram import messaging

logger = logging.getLogger(__name__)


async def _send_media_group(media: List[Dict[str, Any]], caption: str = "") -> Dict[str, Any]:
    """텔레그램 sendMediaGroup. media는 InputMediaPhoto dicts.
    실제 multipart 업로드는 텔레그램이 attach://name 참조를 사용하므로
    파일 부분과 함께 전송. 여기서는 InputMediaPhoto with file:// URL 대신
    파일 바이트를 attach 키로 동봉."""
    if not TELEGRAM_BOT_TOKEN:
        return {"ok": False, "reason": "TELEGRAM_BOT_TOKEN missing"}
    url = f"https://api.telegram.org/bot{TELEGRAM_BOT_TOKEN}/sendMediaGroup"
    files: Dict[str, tuple] = {}
    for i, m in enumerate(media):
        attach_key = f"photo{i+1}"
        files[attach_key] = (f"{i+1}.png", m["_bytes"], "image/png")
        m["media"] = f"attach://{attach_key}"
        m.pop("_bytes", None)
    if caption and media:
        media[0]["caption"] = caption[:1024]
    payload = {"chat_id": TELEGRAM_CHAT_ID, "media": json.dumps(media, ensure_ascii=False)}
    async with httpx.AsyncClient(timeout=60) as client:
        resp = await client.post(url, data=payload, files=files)
        return resp.json()


class InstaAgent(BaseAgent):
    agent_id = "insta"
    display_name = "인스타 큐레이터"

    async def on_schedule(self) -> None:
        """09:30 매일: 뉴스 수집 → 키워드 추출 → 텔레그램 후보 푸시.
        custom_config.auto_select=True면 카테고리당 1위 키워드 자동 슬레이트 생성."""
        if self.state not in ("idle", "break"):
            return
        config = get_agent_config(self.agent_id) or {}
        custom = config.get("custom_config", {}) or {}
        auto_select = bool(custom.get("auto_select", False))

        task_id = create_task(self.agent_id, "insta_daily", {"auto_select": auto_select},
                              requires_approval=False)
        await self.transition("working", "뉴스 수집·키워드 추출", task_id)
        try:
            await self._run_collect_and_extract()
            kws = await service_proxy.insta_list_keywords(used=False)
            if auto_select:
                await self._auto_render(kws)
            else:
                await self._push_keyword_candidates(kws)
            update_task_status(task_id, "succeeded", {"keywords": len(kws)})
            await self.transition("idle", "후보 푸시 완료")
        except Exception as e:
            add_log(self.agent_id, f"insta daily failed: {e}", "error", task_id)
            update_task_status(task_id, "failed", {"error": str(e)})
            await self.transition("idle", f"오류: {e}")

    async def _run_collect_and_extract(self) -> None:
        col = await service_proxy.insta_collect()
        await self._wait_task(col["task_id"], step="collect", timeout_sec=300)
        ext = await service_proxy.insta_extract()
        await self._wait_task(ext["task_id"], step="extract", timeout_sec=300)

    async def _wait_task(self, task_id: str, step: str, timeout_sec: int = 300) -> Dict[str, Any]:
        attempts = max(1, timeout_sec // 5)
        for _ in range(attempts):
            await asyncio.sleep(5)
            st = await service_proxy.insta_task_status(task_id)
            if st["status"] == "succeeded":
                return st
            if st["status"] == "failed":
                raise RuntimeError(f"{step} failed: {st.get('error')}")
        raise TimeoutError(f"{step} timeout {timeout_sec}s")

    async def _push_keyword_candidates(self, keywords: List[Dict[str, Any]]) -> None:
        # 카테고리별 그룹핑 후 카테고리당 5개씩 인라인 키보드.
        # callback_data 형식: "render_<keyword_id>" — webhook이 startswith로 직접 dispatch.
        by_cat: Dict[str, List[Dict[str, Any]]] = {}
        for k in keywords:
            by_cat.setdefault(k["category"], []).append(k)
        if not by_cat:
            await messaging.send_raw("📰 [인스타 큐레이터] 오늘은 추천할 키워드가 없습니다.")
            return
        rows: List[List[Dict[str, Any]]] = []
        text_lines = ["📰 <b>[인스타 큐레이터]</b> 오늘의 키워드 후보"]
        for cat, items in by_cat.items():
            text_lines.append(f"\n<b>{cat}</b>")
            for k in items[:5]:
                text_lines.append(f" · {k['keyword']} (score {k['score']:.2f})")
                rows.append([{
                    "text": f"🎴 {k['keyword']}",
                    "callback_data": f"render_{k['id']}",
                }])
        await messaging.send_raw("\n".join(text_lines), reply_markup={"inline_keyboard": rows})

    async def _auto_render(self, keywords: List[Dict[str, Any]]) -> None:
        by_cat: Dict[str, Dict[str, Any]] = {}
        for k in keywords:
            cat = k["category"]
            if cat not in by_cat or k["score"] > by_cat[cat]["score"]:
                by_cat[cat] = k
        for kw in by_cat.values():
            await self._render_and_push(kw["id"])

    async def _render_and_push(self, keyword_id: int) -> None:
        kw = await service_proxy.insta_get_keyword(keyword_id)
        if not kw:
            await messaging.send_raw(f"⚠️ 키워드 {keyword_id} 없음")
            return
        await messaging.send_raw(f"🎨 카드 생성 중: <b>{kw['keyword']}</b>")
        created = await service_proxy.insta_create_slate(
            keyword=kw["keyword"], category=kw["category"], keyword_id=kw["id"],
        )
        st = await self._wait_task(created["task_id"], step="slate", timeout_sec=600)
        slate_id = st["result_id"]
        slate = await service_proxy.insta_get_slate(slate_id)
        media = []
        for a in slate["assets"][:10]:
            data = await service_proxy.insta_get_asset_bytes(slate_id, a["page_index"])
            media.append({"type": "photo", "_bytes": data})
        caption = slate.get("suggested_caption", "")
        hashtags = " ".join(slate.get("hashtags", []) or [])
        full_caption = f"{caption}\n\n{hashtags}".strip()
        await _send_media_group(media, caption=full_caption)

    async def on_command(self, command: str, params: dict) -> dict:
        if command == "extract":
            await self._run_collect_and_extract()
            kws = await service_proxy.insta_list_keywords(used=False)
            await self._push_keyword_candidates(kws)
            return {"ok": True, "count": len(kws)}
        if command == "render":
            kid = int(params.get("keyword_id") or 0)
            if not kid:
                return {"ok": False, "message": "keyword_id 필수"}
            await self._render_and_push(kid)
            return {"ok": True}
        return {"ok": False, "message": f"Unknown command: {command}"}

    async def on_callback(self, action: str, params: dict) -> dict:
        if action == "render":
            kid = int(params.get("keyword_id") or 0)
            if not kid:
                return {"ok": False}
            await self._render_and_push(kid)
            return {"ok": True}
        return {"ok": False}

    async def on_approval(self, task_id: str, approved: bool, feedback: str = "") -> None:
        # InstaAgent는 승인 흐름을 사용하지 않음 (텔레그램 인라인 callback만)
        return
```
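For reference, the multipart shape that `_send_media_group` assembles can be factored as a pure helper and inspected offline. A minimal sketch, assuming the `attach://<name>` convention of the Telegram Bot API `sendMediaGroup` method; the helper name `build_media_group` is illustrative and not part of the plan:

```python
import json
from typing import Dict, List, Tuple


def build_media_group(pages: List[bytes], caption: str = "") -> Tuple[Dict, Dict]:
    """Assemble a sendMediaGroup request: a JSON `media` array whose entries
    point at multipart file parts via attach://<name> references."""
    media, files = [], {}
    for i, png in enumerate(pages):
        key = f"photo{i + 1}"
        files[key] = (f"{i + 1}.png", png, "image/png")  # multipart file part
        media.append({"type": "photo", "media": f"attach://{key}"})
    if caption and media:
        media[0]["caption"] = caption[:1024]  # Telegram caption limit; page 1 only
    return {"media": json.dumps(media, ensure_ascii=False)}, files
```

Posting the returned dicts with `httpx.AsyncClient.post(url, data=data, files=files)` reproduces the request the agent sends; only the first photo carries the caption, which Telegram displays under the whole album.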
- [ ] **Step 4: Run test, expect pass**

Run: `cd agent-office && pytest tests/test_insta_agent.py -v`
Expected: 2 tests PASS.

- [ ] **Step 5: Commit**

```
git add agent-office/app/agents/insta.py agent-office/tests/test_insta_agent.py
git commit -m "feat(agent-office): InstaAgent — daily extract + keyword push + media group render"
```

---

## Task 13: Wire InstaAgent into registry + scheduler

**Files:**
- Modify: `agent-office/app/agents/__init__.py`
- Modify: `agent-office/app/scheduler.py`

- [ ] **Step 1: Update `agents/__init__.py`**

Replace the file contents:

```python
from .stock import StockAgent
from .music import MusicAgent
from .insta import InstaAgent
from .realestate import RealestateAgent
from .lotto import LottoAgent
from .youtube import YouTubeResearchAgent
from .youtube_publisher import YoutubePublisherAgent

AGENT_REGISTRY = {}


def init_agents():
    AGENT_REGISTRY["stock"] = StockAgent()
    AGENT_REGISTRY["music"] = MusicAgent()
    AGENT_REGISTRY["insta"] = InstaAgent()
    AGENT_REGISTRY["realestate"] = RealestateAgent()
    AGENT_REGISTRY["lotto"] = LottoAgent()
    AGENT_REGISTRY["youtube"] = YouTubeResearchAgent()
    AGENT_REGISTRY["youtube_publisher"] = YoutubePublisherAgent()


def get_agent(agent_id: str):
    return AGENT_REGISTRY.get(agent_id)


def get_all_agent_states() -> list:
    return [
        {"agent_id": aid, "state": agent.state, "detail": agent.state_detail}
        for aid, agent in AGENT_REGISTRY.items()
    ]
```

- [ ] **Step 2: Update `scheduler.py`**

In `agent-office/app/scheduler.py`:

Replace the function `_run_blog_schedule` (lines 27-30) with:

```python
async def _run_insta_schedule():
    agent = AGENT_REGISTRY.get("insta")
    if agent:
        await agent.on_schedule()
```

In `init_scheduler`, replace the line:

```python
scheduler.add_job(_run_blog_schedule, "cron", hour=10, minute=0, id="blog_pipeline")
```

with:

```python
scheduler.add_job(_run_insta_schedule, "cron", hour=9, minute=30, id="insta_pipeline")
```

- [ ] **Step 3: Sanity import test**

Run: `cd agent-office && python -c "from app.agents import init_agents; init_agents(); from app.scheduler import init_scheduler; print('ok')"`
Expected: `ok`. (Importing `init_scheduler` does not start the scheduler, so this should not block.)

If `print('ok')` doesn't appear because something starts the scheduler at import time, fall back to:
```
cd agent-office && python -c "from app.agents import init_agents, AGENT_REGISTRY; init_agents(); print(list(AGENT_REGISTRY.keys()))"
```
Expected output includes `'insta'` and excludes `'blog'`.

- [ ] **Step 4: Commit**

```
git add agent-office/app/agents/__init__.py agent-office/app/scheduler.py
git commit -m "feat(agent-office): register InstaAgent + 09:30 cron job"
```

---

## Task 14: Telegram callback handler — render_<id>

**Files:**
- Modify: `agent-office/app/telegram/webhook.py`

- [ ] **Step 1: Locate the callback dispatch in webhook.py**

The current dispatch (around line 36) handles `realestate_bookmark_` with a direct `startswith` check before the generic `get_telegram_callback` lookup. We follow the same pattern for `render_<keyword_id>`.

- [ ] **Step 2: Add render_ branch above the generic dispatch**

In `agent-office/app/telegram/webhook.py`, find the lines:

```python
    if callback_id.startswith("realestate_bookmark_"):
        return await _handle_realestate_bookmark(callback_query, callback_id)
```

Immediately after that block, insert:

```python
    if callback_id.startswith("render_"):
        return await _handle_insta_render(callback_query, callback_id)
```

- [ ] **Step 3: Add the handler function near _handle_realestate_bookmark**

Append below `_handle_realestate_bookmark`:

```python
async def _handle_insta_render(callback_query: dict, callback_id: str) -> dict:
    """render_{keyword_id} 콜백 → InstaAgent.on_callback('render', ...)."""
    from .messaging import send_raw
    from ..agents import AGENT_REGISTRY

    await api_call(
        "answerCallbackQuery",
        {"callback_query_id": callback_query["id"], "text": "카드 생성 시작"},
    )

    try:
        keyword_id = int(callback_id.removeprefix("render_"))
    except ValueError:
        await send_raw("⚠️ 잘못된 render 콜백 데이터")
        return {"ok": False, "error": "invalid_callback_data"}

    agent = AGENT_REGISTRY.get("insta")
    if not agent:
        await send_raw("⚠️ insta agent 미등록")
        return {"ok": False, "error": "agent_missing"}

    try:
        return await agent.on_callback("render", {"keyword_id": keyword_id})
    except Exception as e:
        await send_raw(f"⚠️ 카드 생성 실패: {e}")
        return {"ok": False, "error": str(e)}
```

- [ ] **Step 4: Sanity check — webhook module imports**

Run: `cd agent-office && python -c "from app.telegram import webhook; print('ok')"`
Expected: `ok`.

- [ ] **Step 5: Commit**

```
git add agent-office/app/telegram/webhook.py
git commit -m "feat(agent-office): telegram render_<id> callback dispatches to InstaAgent"
```

---

## Task 15: Delete blog-lab + agents/blog.py

**Files:**
- Delete: `blog-lab/` (entire directory)
- Delete: `agent-office/app/agents/blog.py`

- [ ] **Step 1: Confirm no remaining imports of blog**

Run from repo root:
```
grep -rn "from .blog\|from app.agents.blog\|BlogAgent\|blog_research\|blog_publish\|BLOG_LAB_URL\|api/blog-marketing" \
  agent-office/ docker-compose.yml nginx/ 2>/dev/null
```
Expected: No matches (or only comments / removed items).

- [ ] **Step 2: Delete blog-lab directory**

Run: `git rm -r blog-lab/`
Expected: Lists every file under blog-lab/ as removed.

- [ ] **Step 3: Delete agents/blog.py**

Run: `git rm agent-office/app/agents/blog.py`

- [ ] **Step 4: Verify agent-office tests still pass**

Run: `cd agent-office && pytest -v`
Expected: All existing tests pass; the new `test_insta_agent.py` passes; no test references `blog`.

- [ ] **Step 5: Commit**

```
git commit -m "chore: remove blog-lab service and BlogAgent (replaced by insta-lab)"
```

---

## Task 16: CLAUDE.md updates

**Files:**
- Modify: `CLAUDE.md` (web-backend root)
- Modify: `../CLAUDE.md` (workspace root) — `C:\Users\jaeoh\Desktop\workspace\CLAUDE.md`

- [ ] **Step 1: Update `web-backend/CLAUDE.md`**

Make these edits (use the Edit tool with exact strings):

(a) Section 1 service list: replace `blog-lab` with `insta-lab`.

(b) Section 4 (Docker 서비스 & 포트): Replace the row
```
| `blog-lab` | 18700 | 블로그 마케팅 수익화 API |
```
with
```
| `insta-lab` | 18700 | 인스타 카드 피드 자동 생성 (뉴스→키워드→10페이지 카드) |
```

(c) Section 5 (Nginx 라우팅): Replace
```
| `/api/blog-marketing/` | `blog-lab:8000` | 블로그 마케팅 수익화 API |
```
with
```
| `/api/insta/` | `insta-lab:8000` | 인스타 카드 자동 생성 API |
```

(d) Section 8 (로컬 개발 환경): Replace `Blog Lab | http://localhost:18700` with `Insta Lab | http://localhost:18700`.

(e) Section 9 (서비스별 핵심 정보): Replace the entire `### blog-lab (blog-lab/)` subsection (from the heading through the env-var list and API table) with this insta-lab subsection:

```markdown
### insta-lab (insta-lab/)
- 인스타그램 카드 피드 자동 생성 — 뉴스 모니터링 → 키워드 추출 → 10페이지 카드 카피 + PNG 렌더 → 텔레그램 푸시 → 사용자 수동 업로드
- DB: `/app/data/insta.db` (news_articles, trending_keywords, card_slates, card_assets, generation_tasks, prompt_templates)
- 카드 사이즈: 1080×1350 (인스타 4:5 세로)
- 카드 렌더: Jinja2 템플릿 → Playwright headless Chromium 스크린샷
- 파일 구조: `app/main.py`, `config.py`, `db.py`, `news_collector.py`, `keyword_extractor.py`, `card_writer.py`, `card_renderer.py`, `templates/default/card.html.j2`

**환경변수**
- `NAVER_CLIENT_ID` / `NAVER_CLIENT_SECRET`: 네이버 검색 API
- `ANTHROPIC_API_KEY`: Claude API (Haiku=키워드 정제, Sonnet=카드 카피)
- `ANTHROPIC_MODEL_HAIKU` / `ANTHROPIC_MODEL_SONNET`: 모델명 오버라이드
- `INSTA_DATA_PATH`: SQLite + 카드 PNG 저장 경로 (기본 `/app/data`)
- `CARD_TEMPLATE_DIR`: HTML 템플릿 디렉토리 (기본 `/app/app/templates`)
- `NEWS_PER_CATEGORY` / `KEYWORDS_PER_CATEGORY`: 수집·추출 limit 튜닝

**카테고리 시드 키워드**
- 기본 economy / psychology / celebrity 3종 (config.DEFAULT_CATEGORY_SEEDS)
- `prompt_templates.name='category_seeds'`에 JSON으로 오버라이드 가능

**카드 슬레이트 (`card_slates`)**
- status: `draft` → `rendered` → `sent` (또는 `failed`)
- cover_copy / body_copies (8개) / cta_copy / suggested_caption / hashtags JSON 컬럼
- accent_color는 카테고리별 기본값 (economy=#0F62FE, psychology=#A66CFF, celebrity=#FF5C8A)

**insta-lab API 목록**

| 메서드 | 경로 | 설명 |
|--------|------|------|
| GET | `/api/insta/status` | 서비스 상태 (NAVER/ANTHROPIC 키 여부) |
| POST | `/api/insta/news/collect` | 뉴스 수집 트리거 (BackgroundTask) |
| GET | `/api/insta/news/articles` | 수집 기사 목록 (category, days) |
| POST | `/api/insta/keywords/extract` | 키워드 추출 트리거 (BackgroundTask) |
| GET | `/api/insta/keywords` | 트렌딩 키워드 목록 (category, used) |
| POST | `/api/insta/slates` | 슬레이트 생성 (keyword, category) |
| GET | `/api/insta/slates` | 슬레이트 목록 |
| GET | `/api/insta/slates/{id}` | 슬레이트 상세 + 자산 |
| POST | `/api/insta/slates/{id}/render` | 카드 렌더 재시도 |
| GET | `/api/insta/slates/{id}/assets/{page}` | 카드 PNG 다운로드 (1~10) |
| DELETE | `/api/insta/slates/{id}` | 슬레이트 삭제 (자산 파일 포함) |
| GET | `/api/insta/tasks/{task_id}` | BackgroundTask 상태 폴링 |
| GET/PUT | `/api/insta/templates/prompts/{name}` | 프롬프트 템플릿 CRUD |
```

(f) In the `### agent-office` subsection, replace the `BLOG_LAB_URL` line in the environment-variable list with `INSTA_LAB_URL`, and update the file-structure entries: add `agents/insta.py`, remove `agents/blog.py`.

(g) In Section 10 (주의사항), after the line `**NAS Celeron J4025**: Node.js 빌드는 반드시 로컬에서 수행 (NAS에서 빌드 X)`, add:
```
- **insta-lab Playwright**: NAS에서 chromium 빌드는 가능하지만 +500MB 이미지. 메모리 부족 시 카드 렌더 실패 가능 — 한 번에 1슬레이트만 렌더하도록 직렬화됨
```

- [ ] **Step 2: Update `../CLAUDE.md` (workspace)**

(a) In the Docker 서비스 & 포트 table:
```
| `blog-lab` | 18700 | 블로그 마케팅 수익화 파이프라인 |
```
→
```
| `insta-lab` | 18700 | 인스타 카드 피드 자동 생성 |
```

(b) In the API 엔드포인트 요약 table:
```
| `/api/blog-marketing/` | blog-lab | `web-backend/blog-lab/` |
```
→
```
| `/api/insta/` | insta-lab | `web-backend/insta-lab/` |
```

- [ ] **Step 3: Commit**

```
git add CLAUDE.md ../CLAUDE.md
git commit -m "docs(claude-md): replace blog-lab references with insta-lab"
```

(If the workspace `../CLAUDE.md` lives outside this repository, commit it in its own repo instead.)

---

## Task 17: End-to-end smoke test

**Files:** none (verification only)

- [ ] **Step 1: Build insta-lab image locally**

Run: `docker compose build insta-lab`
Expected: Build succeeds. First build pulls Chromium (~5 min on a slow connection).

- [ ] **Step 2: Bring up the stack with new env**

Run: `docker compose up -d insta-lab`
Then: `docker compose logs --tail=30 insta-lab`
Expected: `Application startup complete`. No tracebacks.

- [ ] **Step 3: Hit health + status endpoints**

Run:
```
curl http://localhost:18700/health
curl http://localhost:18700/api/insta/status
```
Expected: `{"ok":true}` and `{"ok":true,"naver_api":true|false,"anthropic_api":true|false}`.

- [ ] **Step 4: Trigger collect + extract (assuming env keys are set)**

Run:
```
curl -X POST http://localhost:18700/api/insta/news/collect -H "Content-Type: application/json" -d '{"categories":["economy"]}'
```
Note the `task_id`. Poll:
```
curl http://localhost:18700/api/insta/tasks/<task_id>
```
Expected: `succeeded` with `result_id` > 0.

Then:
```
curl -X POST http://localhost:18700/api/insta/keywords/extract -H "Content-Type: application/json" -d '{"categories":["economy"]}'
curl "http://localhost:18700/api/insta/keywords?category=economy"
```
Expected: List of 5 keywords with score values.
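The manual polling above can be scripted. A minimal stdlib-only sketch, assuming the task endpoint shape shown in this step; the helper name `wait_for_task` and its injectable `fetch` hook are illustrative, not part of the plan:

```python
import json
import time
import urllib.request


def wait_for_task(base_url: str, task_id: str, timeout_sec: int = 300,
                  interval: float = 5.0, fetch=None):
    """Poll GET {base_url}/api/insta/tasks/{task_id} until it settles."""
    # fetch is injectable so the loop can be exercised without a live server
    fetch = fetch or (lambda url: json.load(urllib.request.urlopen(url)))
    deadline = time.monotonic() + timeout_sec
    while time.monotonic() < deadline:
        st = fetch(f"{base_url}/api/insta/tasks/{task_id}")
        if st.get("status") == "succeeded":
            return st
        if st.get("status") == "failed":
            raise RuntimeError(st.get("error", "task failed"))
        time.sleep(interval)
    raise TimeoutError(f"task {task_id} not done after {timeout_sec}s")
```

For example, `wait_for_task("http://localhost:18700", "<task_id>")["result_id"]` returns the id once the collect or extract task succeeds.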
- [ ] **Step 5: Generate one slate and view a PNG**

Pick one keyword from the response above. Then:
```
curl -X POST http://localhost:18700/api/insta/slates -H "Content-Type: application/json" -d '{"keyword":"<KEYWORD>","category":"economy"}'
```
Poll the task until `succeeded`. Note the `result_id` (slate_id). Then:
```
curl http://localhost:18700/api/insta/slates/<slate_id> | python -m json.tool
curl http://localhost:18700/api/insta/slates/<slate_id>/assets/1 -o page1.png
```
Expected: JSON shows 10 assets; `page1.png` is a valid 1080×1350 image (~50-200 KB).
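To confirm the downloaded page really is 1080×1350 without opening an image viewer, the PNG IHDR header can be read directly. A stdlib-only sketch (`png_size` is an illustrative helper, not part of the plan):

```python
import struct

PNG_SIG = b"\x89PNG\r\n\x1a\n"


def png_size(data: bytes) -> tuple:
    """Width/height from the IHDR chunk that immediately follows the 8-byte
    PNG signature: length(4) + b'IHDR'(4) + width(4) + height(4), big-endian."""
    if data[:8] != PNG_SIG or data[12:16] != b"IHDR":
        raise ValueError("not a PNG")
    return struct.unpack(">II", data[16:24])
```

Only the first 24 bytes are needed, so `png_size(open("page1.png", "rb").read(24))` should return `(1080, 1350)` for any correctly rendered card.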
|
||
|
||
- [ ] **Step 6: Test agent-office integration**

  Run: `docker compose up -d agent-office`

  Then: `docker compose logs --tail=20 agent-office | grep -i "insta\|register"`

  Expected: logs mention the `insta` agent being registered.

  If Telegram is configured, manually trigger:

  ```
  curl -X POST http://localhost:18900/api/agent-office/command \
    -H "Content-Type: application/json" \
    -d '{"agent_id":"insta","command":"extract","params":{}}'
  ```

  Expected: Telegram receives a message with an inline keyboard.
- [ ] **Step 7: Tear down**

  Run: `docker compose down insta-lab agent-office`
- [ ] **Step 8: Final commit (only if any cleanup needed during smoke)**

  If the smoke run uncovered fixes, commit them with descriptive messages. Otherwise no commit.

---
## Task 18: Push branch + final wrap

**Files:** none
- [ ] **Step 1: Run full test suite for insta-lab + agent-office**

  Run: `cd insta-lab && pytest -v`, then `cd ../agent-office && pytest -v`

  Expected: all green.
- [ ] **Step 2: Verify branch state**

  Run: `git log --oneline main..HEAD`

  Expected: ~16 commits forming a coherent story (scaffold → 6 modules → compose → nginx → service_proxy → agent → registry → callback → cleanup → docs → smoke).
- [ ] **Step 3: Push to remote (Gitea)**

  Run: `git push -u origin feat/insta-agent`

  Expected: branch tracked on origin. (Per CLAUDE.md, do NOT push to main directly. Open a PR after this.)
- [ ] **Step 4: Hand off**

  Notify the user. The branch is now ready for code review and merge. The webhook deploy will fire on merge to main.

---
## Verification matrix (run before declaring done)

| Check | Command | Expected |
|-------|---------|----------|
| insta-lab unit tests | `cd insta-lab && pytest -v` | 21 PASS |
| agent-office tests | `cd agent-office && pytest -v` | All PASS incl. test_insta_agent (2) |
| docker-compose validity | `docker compose config --quiet` | exit 0, no output |
| nginx config validity | (in container) `nginx -t` | `syntax is ok` |
| insta-lab health | `curl localhost:18700/health` | `{"ok":true}` |
| End-to-end slate render | Task 17 step 5 | 10 PNGs at 1080×1350 |
| No blog-lab references | `grep -rn "blog-lab\|blog_marketing\|BlogAgent" --exclude-dir=docs` | 0 matches |