blog-lab: add blog marketing monetization service

Naver Search API keyword analysis + Claude AI article generation + quality review + revenue tracking
- blog-lab/ service (FastAPI, SQLite with 5 tables, 18 endpoints)
- docker-compose.yml: blog-lab service (port 18700)
- nginx: added /api/blog-marketing/ routing
- .env.example: added NAVER_CLIENT_ID/SECRET

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-05 19:59:25 +09:00
parent bb76e62774
commit ba33e00ce3
14 changed files with 1460 additions and 0 deletions


@@ -57,6 +57,14 @@ ADMIN_API_KEY=
# Anthropic API Key (AI Coach proxy; AI Coach is disabled when unset)
ANTHROPIC_API_KEY=
# [BLOG LAB]
# Naver Search API (issued at https://developers.naver.com)
NAVER_CLIENT_ID=
NAVER_CLIENT_SECRET=
# Blog DB path (must match the BLOG_DB_PATH variable read by config.py)
# BLOG_DB_PATH=/app/data/blog_marketing.db
# [MUSIC LAB]
# Suno API Key (issued at https://suno.com; the Suno provider is disabled when unset)
SUNO_API_KEY=


@@ -56,6 +56,7 @@ Backend monorepo for a personal web platform based on a Synology NAS.
| `lotto-backend` | 18000 | lotto data collection, analysis, and recommendation API |
| `stock-lab` | 18500 | stock news, AI analysis, KIS API integration |
| `music-lab` | 18600 | AI music generation and library management API |
| `blog-lab` | 18700 | blog marketing monetization API |
| `travel-proxy` | 19000 | travel photo API + thumbnail generation |
| `lotto-frontend` (nginx) | 8080 | static SPA serving + API reverse proxy |
| `webpage-deployer` | 19010 | receives Gitea webhooks → automatic deployment |
@@ -72,6 +73,7 @@ Backend monorepo for a personal web platform based on a Synology NAS.
| `/api/trade/` | `stock-lab:8000` | KIS live-account API |
| `/api/portfolio` | `stock-lab:8000` | matches with and without a trailing slash |
| `/api/music/` | `music-lab:8000` | AI music generation and library API |
| `/api/blog-marketing/` | `blog-lab:8000` | blog marketing monetization API |
| `/webhook`, `/webhook/` | `deployer:9000` | Gitea webhook |
| `/media/music/` | `/data/music/` (served directly) | generated audio files |
| `/media/travel/.thumb/` | `/data/thumbs/` (served directly) | thumbnail cache |
@@ -122,6 +124,7 @@ docker compose up -d
| Lotto Backend | http://localhost:18000 |
| Travel API | http://localhost:19000 |
| Stock Lab | http://localhost:18500 |
| Blog Lab | http://localhost:18700 |
---
@@ -279,6 +282,53 @@ docker compose up -d
| GET | `/api/travel/photos` | photo list (region, page=1, size=20) |
| POST | `/api/travel/reload` | reset the in-memory cache |
### blog-lab (blog-lab/)
- Blog marketing monetization service (keyword analysis → AI article generation → quality review → publishing → revenue tracking)
- AI engine: Claude API (Anthropic, `claude-sonnet-4-20250514`)
- Web search: Naver Search API (blog + shopping)
- DB: `/app/data/blog_marketing.db`
- File layout: `main.py`, `db.py`, `config.py`, `naver_search.py`, `content_generator.py`, `quality_reviewer.py`
**blog_marketing.db tables**
| Table | Description |
|--------|------|
| `keyword_analyses` | keyword analysis results (Naver search data + competition/opportunity scores) |
| `blog_posts` | blog posts (draft → reviewed → published) |
| `commissions` | monthly clicks/purchases/revenue per post |
| `generation_tasks` | async task state (research/generate/review) |
| `prompt_templates` | AI prompt templates (stored in the DB; editable without a code deploy) |
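The competition and opportunity scores stored in `keyword_analyses` are computed in `naver_search.py`, which this diff does not include. A minimal sketch of one plausible scoring scheme — the formulas, log base, and clamping here are our assumptions, not the actual implementation:

```python
import math

def competition_score(blog_total: int) -> float:
    """Hypothetical: map the Naver blog result count to 0-100 on a log scale,
    saturating at 100 around one million competing posts."""
    if blog_total <= 0:
        return 0.0
    return round(min(100.0, 100.0 * math.log10(blog_total) / 6.0), 1)

def opportunity_score(shop_total: int, blog_total: int) -> float:
    """Hypothetical: shopping demand relative to blog supply, clamped to 0-100."""
    if shop_total <= 0:
        return 0.0
    ratio = shop_total / max(blog_total, 1)
    return round(min(100.0, ratio * 50.0), 1)
```

Any monotonic mapping with the same endpoints would serve; the point is that both scores are normalized to the 0-100 ranges the prompt templates describe.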
**blog-lab API endpoints**
| Method | Path | Description |
|--------|------|------|
| GET | `/api/blog-marketing/status` | service status (API key configuration) |
| POST | `/api/blog-marketing/research` | start keyword analysis (BackgroundTask) |
| GET | `/api/blog-marketing/research/history` | analysis history |
| GET | `/api/blog-marketing/research/{id}` | analysis detail |
| DELETE | `/api/blog-marketing/research/{id}` | delete an analysis |
| GET | `/api/blog-marketing/task/{task_id}` | poll task status |
| POST | `/api/blog-marketing/generate` | AI article generation (trend brief + body) |
| POST | `/api/blog-marketing/review/{post_id}` | quality review (5 criteria × 10 points) |
| POST | `/api/blog-marketing/regenerate/{post_id}` | feedback-based regeneration |
| GET | `/api/blog-marketing/posts` | post list (status filter) |
| GET | `/api/blog-marketing/posts/{id}` | post detail |
| PUT | `/api/blog-marketing/posts/{id}` | update a post |
| DELETE | `/api/blog-marketing/posts/{id}` | delete a post |
| POST | `/api/blog-marketing/posts/{id}/publish` | publish (register the Naver URL) |
| GET | `/api/blog-marketing/commissions` | commission history |
| POST | `/api/blog-marketing/commissions` | add a commission record |
| PUT | `/api/blog-marketing/commissions/{id}` | update a commission record |
| DELETE | `/api/blog-marketing/commissions/{id}` | delete a commission record |
| GET | `/api/blog-marketing/dashboard` | dashboard aggregates |
**Environment variables**
- `ANTHROPIC_API_KEY`: Claude API key (AI generation is disabled when unset)
- `NAVER_CLIENT_ID`: Naver Search API client ID
- `NAVER_CLIENT_SECRET`: Naver Search API secret
- `BLOG_DB_PATH`: SQLite DB path (default `/app/data/blog_marketing.db`; `config.py` reads `BLOG_DB_PATH`, not `BLOG_DATA_PATH`)
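Every long-running endpoint (`research`, `generate`, `review`, `regenerate`) returns a `task_id` immediately, and the client is expected to poll `/api/blog-marketing/task/{task_id}` until it reaches a terminal status. A minimal polling loop — the `fetch` callable and the timing defaults are illustrative; in practice `fetch` would wrap an HTTP GET against the task endpoint:

```python
import time
from typing import Any, Callable, Dict

def poll_task(fetch: Callable[[str], Dict[str, Any]], task_id: str,
              interval: float = 1.0, timeout: float = 300.0) -> Dict[str, Any]:
    """Poll until the task reaches a terminal status ('succeeded' or 'failed')."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        task = fetch(task_id)  # e.g. GET /api/blog-marketing/task/{task_id}
        if task["status"] in ("succeeded", "failed"):
            return task
        time.sleep(interval)
    raise TimeoutError(f"task {task_id} did not finish within {timeout}s")
```

On success, the task's `result_id` points at the created `keyword_analyses` or `blog_posts` row.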
### deployer (deployer/)
- Webhook verification: `X-Gitea-Signature` (HMAC SHA256, using `compare_digest`)
- The secret is managed via the `WEBHOOK_SECRET` environment variable
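The deployer's verification step can be sketched as follows; the function name is ours and the actual deployer code is not part of this diff, but Gitea sends the hex HMAC-SHA256 of the request body in the `X-Gitea-Signature` header, and `hmac.compare_digest` gives a constant-time comparison:

```python
import hashlib
import hmac

def verify_gitea_signature(secret: str, body: bytes, signature_hex: str) -> bool:
    """Constant-time check of the X-Gitea-Signature header (hex HMAC-SHA256 of the body)."""
    expected = hmac.new(secret.encode(), body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature_hex)
```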

blog-lab/.dockerignore

@@ -0,0 +1,4 @@
__pycache__
*.pyc
.env
data/

blog-lab/Dockerfile

@@ -0,0 +1,14 @@
FROM python:3.12-alpine
WORKDIR /app
RUN apk add --no-cache gcc musl-dev
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
EXPOSE 8000
CMD ["uvicorn", "app.main:app", "--host", "0.0.0.0", "--port", "8000"]

blog-lab/app/__init__.py (empty file)

blog-lab/app/config.py

@@ -0,0 +1,15 @@
import os
# Anthropic Claude API
ANTHROPIC_API_KEY = os.getenv("ANTHROPIC_API_KEY", "")
CLAUDE_MODEL = os.getenv("CLAUDE_MODEL", "claude-sonnet-4-20250514")
# Naver Search API
NAVER_CLIENT_ID = os.getenv("NAVER_CLIENT_ID", "")
NAVER_CLIENT_SECRET = os.getenv("NAVER_CLIENT_SECRET", "")
# Database
DB_PATH = os.getenv("BLOG_DB_PATH", "/app/data/blog_marketing.db")
# CORS
CORS_ALLOW_ORIGINS = os.getenv("CORS_ALLOW_ORIGINS", "http://localhost:3007,http://localhost:8080")


@@ -0,0 +1,162 @@
"""Claude API-based content generation — trend brief + blog post writing."""
import json
import logging
from typing import Any, Dict, Optional
import anthropic
from .config import ANTHROPIC_API_KEY, CLAUDE_MODEL
from .db import get_template
logger = logging.getLogger(__name__)
_client: Optional[anthropic.Anthropic] = None
def _get_client() -> anthropic.Anthropic:
global _client
if _client is None:
_client = anthropic.Anthropic(api_key=ANTHROPIC_API_KEY)
return _client
def _call_claude(prompt: str, max_tokens: int = 4096) -> str:
"""Call the Claude API with a single user message."""
client = _get_client()
resp = client.messages.create(
model=CLAUDE_MODEL,
max_tokens=max_tokens,
messages=[{"role": "user", "content": prompt}],
)
return resp.content[0].text
def generate_trend_brief(analysis: Dict[str, Any]) -> str:
"""Generate a trend brief from the keyword analysis data."""
template = get_template("trend_brief")
if not template:
raise RuntimeError("trend_brief 템플릿이 없습니다")
top_blogs_text = "\n".join(
f"- {b.get('title', '')}" for b in analysis.get("top_blogs", [])
) or "없음"
top_products_text = "\n".join(
f"- {p.get('title', '')} ({p.get('lprice', '?')}원, {p.get('mallName', '')})"
for p in analysis.get("top_products", [])
) or "없음"
prompt = template.format(
keyword=analysis.get("keyword", ""),
competition=analysis.get("competition", 0),
opportunity=analysis.get("opportunity", 0),
top_blogs=top_blogs_text,
top_products=top_products_text,
)
return _call_claude(prompt)
def generate_blog_post(analysis: Dict[str, Any], trend_brief: str) -> Dict[str, str]:
"""Write a blog post based on the trend brief.
Returns:
{"title": str, "body": str, "excerpt": str, "tags": [...]}
"""
template = get_template("blog_write")
if not template:
raise RuntimeError("blog_write 템플릿이 없습니다")
top_products_text = "\n".join(
f"- {p.get('title', '')} ({p.get('lprice', '?')}원, {p.get('mallName', '')})"
for p in analysis.get("top_products", [])
) or "없음"
prompt = template.format(
keyword=analysis.get("keyword", ""),
trend_brief=trend_brief,
top_products=top_products_text,
)
# Extra instruction to force a structured response
prompt += (
"\n\n---\n"
"응답은 반드시 아래 JSON 형식으로 해주세요 (JSON만 출력, 다른 텍스트 없이):\n"
'{"title": "블로그 제목", "body": "HTML 본문", "excerpt": "2줄 요약", '
'"tags": ["태그1", "태그2", ...]}'
)
raw = _call_claude(prompt, max_tokens=8192)
# Try to parse the reply as JSON
try:
# Strip a ```json ... ``` fence if present
text = raw.strip()
if text.startswith("```"):
lines = text.split("\n")
lines = [l for l in lines if not l.strip().startswith("```")]
text = "\n".join(lines)
result = json.loads(text)
return {
"title": result.get("title", ""),
"body": result.get("body", ""),
"excerpt": result.get("excerpt", ""),
"tags": result.get("tags", []),
}
except (json.JSONDecodeError, KeyError):
# Fall back to the raw text as the body when JSON parsing fails
logger.warning("Blog post JSON parse failed, using raw text")
return {
"title": f"{analysis.get('keyword', '')} 추천 리뷰",
"body": raw,
"excerpt": raw[:200],
"tags": [analysis.get("keyword", "")],
}
def regenerate_blog_post(
analysis: Dict[str, Any],
trend_brief: str,
previous_body: str,
feedback: str,
) -> Dict[str, str]:
"""Regenerate the blog post, incorporating the reviewer feedback."""
prompt = (
"당신은 네이버 블로그에서 월 100만 이상 수익을 올리는 전문 블로거입니다.\n"
f"키워드: {analysis.get('keyword', '')}\n\n"
f"이전에 작성한 글:\n{previous_body[:3000]}\n\n"
f"리뷰어 피드백:\n{feedback}\n\n"
"위 피드백을 반영하여 글을 개선해주세요.\n"
"작성 규칙: 1인칭 체험기, 1,500자 이상, 자연스러운 구어체, "
"제품 비교표 포함, 광고 고지 문구 포함.\n"
"HTML 형식으로 작성하되, 네이버 블로그에서 바로 붙여넣기 가능한 형태로.\n\n"
"---\n"
"응답은 반드시 아래 JSON 형식으로 해주세요 (JSON만 출력):\n"
'{"title": "블로그 제목", "body": "HTML 본문", "excerpt": "2줄 요약", '
'"tags": ["태그1", "태그2", ...]}'
)
raw = _call_claude(prompt, max_tokens=8192)
try:
text = raw.strip()
if text.startswith("```"):
lines = text.split("\n")
lines = [l for l in lines if not l.strip().startswith("```")]
text = "\n".join(lines)
result = json.loads(text)
return {
"title": result.get("title", ""),
"body": result.get("body", ""),
"excerpt": result.get("excerpt", ""),
"tags": result.get("tags", []),
}
except (json.JSONDecodeError, KeyError):
logger.warning("Regenerate JSON parse failed, using raw text")
return {
"title": f"{analysis.get('keyword', '')} 추천 리뷰 (개선)",
"body": raw,
"excerpt": raw[:200],
"tags": [analysis.get("keyword", "")],
}
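The fence-stripping and JSON-fallback logic appears twice above, in `generate_blog_post` and `regenerate_blog_post`; it could be factored into one helper. A sketch — the helper name and signature are ours, not part of the diff:

```python
import json
from typing import Any, Dict, List

def parse_post_json(raw: str, fallback_title: str,
                    fallback_tags: List[str]) -> Dict[str, Any]:
    """Parse Claude's JSON reply, stripping ``` fences; fall back to raw text."""
    text = raw.strip()
    if text.startswith("```"):
        # Drop the opening/closing fence lines, keep everything between them
        text = "\n".join(
            line for line in text.split("\n") if not line.strip().startswith("```")
        )
    try:
        result = json.loads(text)
        return {
            "title": result.get("title", ""),
            "body": result.get("body", ""),
            "excerpt": result.get("excerpt", ""),
            "tags": result.get("tags", []),
        }
    except json.JSONDecodeError:
        # Fall back to the raw text as the body when JSON parsing fails
        return {"title": fallback_title, "body": raw,
                "excerpt": raw[:200], "tags": fallback_tags}
```

Both call sites would then pass their keyword-derived fallback title and tags.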

blog-lab/app/db.py

@@ -0,0 +1,579 @@
import os
import sqlite3
import json
from typing import Any, Dict, List, Optional
from .config import DB_PATH
def _conn() -> sqlite3.Connection:
os.makedirs(os.path.dirname(DB_PATH), exist_ok=True)
conn = sqlite3.connect(DB_PATH)
conn.row_factory = sqlite3.Row
conn.execute("PRAGMA journal_mode=WAL")
return conn
def init_db() -> None:
with _conn() as conn:
# Keyword/product analysis results
conn.execute("""
CREATE TABLE IF NOT EXISTS keyword_analyses (
id INTEGER PRIMARY KEY AUTOINCREMENT,
keyword TEXT NOT NULL,
blog_total INTEGER NOT NULL DEFAULT 0,
shop_total INTEGER NOT NULL DEFAULT 0,
competition REAL NOT NULL DEFAULT 0,
opportunity REAL NOT NULL DEFAULT 0,
avg_price INTEGER,
min_price INTEGER,
max_price INTEGER,
top_products TEXT NOT NULL DEFAULT '[]',
top_blogs TEXT NOT NULL DEFAULT '[]',
ai_summary TEXT NOT NULL DEFAULT '',
created_at TEXT NOT NULL DEFAULT (strftime('%Y-%m-%dT%H:%M:%fZ','now'))
)
""")
conn.execute("CREATE INDEX IF NOT EXISTS idx_ka_created ON keyword_analyses(created_at DESC)")
conn.execute("CREATE INDEX IF NOT EXISTS idx_ka_keyword ON keyword_analyses(keyword)")
# Blog posts
conn.execute("""
CREATE TABLE IF NOT EXISTS blog_posts (
id INTEGER PRIMARY KEY AUTOINCREMENT,
keyword_id INTEGER REFERENCES keyword_analyses(id),
title TEXT NOT NULL DEFAULT '',
body TEXT NOT NULL DEFAULT '',
excerpt TEXT NOT NULL DEFAULT '',
tags TEXT NOT NULL DEFAULT '[]',
status TEXT NOT NULL DEFAULT 'draft',
review_score INTEGER,
review_detail TEXT NOT NULL DEFAULT '{}',
naver_url TEXT NOT NULL DEFAULT '',
trend_brief TEXT NOT NULL DEFAULT '',
created_at TEXT NOT NULL DEFAULT (strftime('%Y-%m-%dT%H:%M:%fZ','now')),
updated_at TEXT NOT NULL DEFAULT (strftime('%Y-%m-%dT%H:%M:%fZ','now'))
)
""")
conn.execute("CREATE INDEX IF NOT EXISTS idx_bp_created ON blog_posts(created_at DESC)")
conn.execute("CREATE INDEX IF NOT EXISTS idx_bp_status ON blog_posts(status)")
# Revenue (commission) tracking
conn.execute("""
CREATE TABLE IF NOT EXISTS commissions (
id INTEGER PRIMARY KEY AUTOINCREMENT,
post_id INTEGER REFERENCES blog_posts(id),
month TEXT NOT NULL,
clicks INTEGER NOT NULL DEFAULT 0,
purchases INTEGER NOT NULL DEFAULT 0,
revenue INTEGER NOT NULL DEFAULT 0,
note TEXT NOT NULL DEFAULT '',
created_at TEXT NOT NULL DEFAULT (strftime('%Y-%m-%dT%H:%M:%fZ','now'))
)
""")
conn.execute("CREATE INDEX IF NOT EXISTS idx_comm_month ON commissions(month)")
conn.execute("CREATE INDEX IF NOT EXISTS idx_comm_post ON commissions(post_id)")
# Async task state (research / generate / review)
conn.execute("""
CREATE TABLE IF NOT EXISTS generation_tasks (
id TEXT PRIMARY KEY,
type TEXT NOT NULL DEFAULT 'research',
status TEXT NOT NULL DEFAULT 'queued',
progress INTEGER NOT NULL DEFAULT 0,
message TEXT NOT NULL DEFAULT '',
result_id INTEGER,
error TEXT,
params TEXT NOT NULL DEFAULT '{}',
created_at TEXT NOT NULL DEFAULT (strftime('%Y-%m-%dT%H:%M:%fZ','now')),
updated_at TEXT NOT NULL DEFAULT (strftime('%Y-%m-%dT%H:%M:%fZ','now'))
)
""")
conn.execute("CREATE INDEX IF NOT EXISTS idx_gt_created ON generation_tasks(created_at DESC)")
# AI prompt templates
conn.execute("""
CREATE TABLE IF NOT EXISTS prompt_templates (
id INTEGER PRIMARY KEY AUTOINCREMENT,
name TEXT NOT NULL UNIQUE,
description TEXT NOT NULL DEFAULT '',
template TEXT NOT NULL DEFAULT '',
updated_at TEXT NOT NULL DEFAULT (strftime('%Y-%m-%dT%H:%M:%fZ','now'))
)
""")
# Seed the default prompt templates (only when missing)
_seed_templates(conn)
def _seed_templates(conn: sqlite3.Connection) -> None:
"""Seed the default prompt templates into the DB."""
templates = [
{
"name": "trend_brief",
"description": "네이버 블로그 트렌드 분석 + 제목/훅 전략 브리프",
"template": (
"당신은 네이버 블로그 마케팅 전문가입니다.\n"
"아래 키워드 분석 데이터를 바탕으로 블로그 포스팅 전략 브리프를 작성하세요.\n\n"
"키워드: {keyword}\n"
"블로그 경쟁도: {competition} (0-100, 높을수록 경쟁 치열)\n"
"쇼핑 기회 점수: {opportunity} (0-100, 높을수록 기회 큼)\n"
"상위 블로그 제목들: {top_blogs}\n"
"상위 상품들: {top_products}\n\n"
"다음을 포함해주세요:\n"
"1. 클릭을 유도하는 제목 공식 3가지\n"
"2. 도입부 훅 전략 (공감형, 질문형, 충격형 중 추천)\n"
"3. 추천 해시태그 5-10개\n"
"4. 경쟁 분석 요약 (기존 글 대비 차별화 포인트)\n"
"5. SEO 키워드 배치 전략"
),
},
{
"name": "blog_write",
"description": "공감형 1인칭 체험기 블로그 글 작성",
"template": (
"당신은 네이버 블로그에서 월 100만 이상 수익을 올리는 전문 블로거입니다.\n"
"아래 브리프를 바탕으로 블로그 글을 작성하세요.\n\n"
"키워드: {keyword}\n"
"트렌드 브리프: {trend_brief}\n"
"상위 상품 정보: {top_products}\n\n"
"작성 규칙:\n"
"- 1인칭 체험기 형식 (\"제가 직접 써봤는데요\")\n"
"- 1,500자 이상\n"
"- 자연스러운 구어체 (네이버 블로그 톤)\n"
"- 제품 비교표 포함 (마크다운 테이블)\n"
"- 장단점 솔직하게 작성\n"
"- 광고 고지 문구 포함: \"이 포스팅은 쿠팡 파트너스 활동의 일환으로, 이에 따른 일정액의 수수료를 제공받습니다.\"\n"
"- 추천 매트릭스 (가성비/품질/디자인 기준)\n"
"- 자연스러운 CTA (구매 링크 유도)\n\n"
"HTML 형식으로 작성하되, 네이버 블로그에서 바로 붙여넣기 가능한 형태로 만들어주세요."
),
},
{
"name": "quality_review",
"description": "블로그 글 품질 리뷰 (5기준 × 10점)",
"template": (
"당신은 블로그 콘텐츠 품질 평가 전문가입니다.\n"
"아래 블로그 글을 5가지 기준으로 평가해주세요.\n\n"
"제목: {title}\n"
"본문: {body}\n\n"
"평가 기준 (각 1-10점):\n"
"1. 독자 공감도: 1인칭 체험기가 자연스럽고 공감되는가?\n"
"2. 제목 클릭 유도력: 검색 결과에서 클릭하고 싶은 제목인가?\n"
"3. 구매 전환력: 읽고 나서 제품을 사고 싶어지는가?\n"
"4. SEO 최적화: 키워드 배치, 소제목, 길이가 적절한가?\n"
"5. 형식 완성도: 비교표, 이미지 설명, 단락 구성이 잘 되어있는가?\n\n"
"JSON 형식으로 응답:\n"
"{{\n"
" \"scores\": {{\n"
" \"empathy\": N,\n"
" \"click_appeal\": N,\n"
" \"conversion\": N,\n"
" \"seo\": N,\n"
" \"format\": N\n"
" }},\n"
" \"total\": N,\n"
" \"pass\": true/false,\n"
" \"feedback\": \"개선 사항 설명\"\n"
"}}"
),
},
]
for t in templates:
existing = conn.execute(
"SELECT id FROM prompt_templates WHERE name = ?", (t["name"],)
).fetchone()
if not existing:
conn.execute(
"INSERT INTO prompt_templates (name, description, template) VALUES (?, ?, ?)",
(t["name"], t["description"], t["template"]),
)
# ── keyword_analyses CRUD ────────────────────────────────────────────────────
def _ka_row_to_dict(r) -> Dict[str, Any]:
return {
"id": r["id"],
"keyword": r["keyword"],
"blog_total": r["blog_total"],
"shop_total": r["shop_total"],
"competition": r["competition"],
"opportunity": r["opportunity"],
"avg_price": r["avg_price"],
"min_price": r["min_price"],
"max_price": r["max_price"],
"top_products": json.loads(r["top_products"]) if r["top_products"] else [],
"top_blogs": json.loads(r["top_blogs"]) if r["top_blogs"] else [],
"ai_summary": r["ai_summary"],
"created_at": r["created_at"],
}
def add_keyword_analysis(data: Dict[str, Any]) -> Dict[str, Any]:
with _conn() as conn:
conn.execute(
"""INSERT INTO keyword_analyses
(keyword, blog_total, shop_total, competition, opportunity,
avg_price, min_price, max_price, top_products, top_blogs, ai_summary)
VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)""",
(
data.get("keyword", ""),
data.get("blog_total", 0),
data.get("shop_total", 0),
data.get("competition", 0),
data.get("opportunity", 0),
data.get("avg_price"),
data.get("min_price"),
data.get("max_price"),
json.dumps(data.get("top_products", []), ensure_ascii=False),
json.dumps(data.get("top_blogs", []), ensure_ascii=False),
data.get("ai_summary", ""),
),
)
row = conn.execute(
"SELECT * FROM keyword_analyses WHERE rowid = last_insert_rowid()"
).fetchone()
return _ka_row_to_dict(row)
def get_keyword_analysis(analysis_id: int) -> Optional[Dict[str, Any]]:
with _conn() as conn:
row = conn.execute(
"SELECT * FROM keyword_analyses WHERE id = ?", (analysis_id,)
).fetchone()
return _ka_row_to_dict(row) if row else None
def get_keyword_analyses(limit: int = 30) -> List[Dict[str, Any]]:
with _conn() as conn:
rows = conn.execute(
"SELECT * FROM keyword_analyses ORDER BY created_at DESC LIMIT ?", (limit,)
).fetchall()
return [_ka_row_to_dict(r) for r in rows]
def delete_keyword_analysis(analysis_id: int) -> bool:
with _conn() as conn:
row = conn.execute(
"SELECT id FROM keyword_analyses WHERE id = ?", (analysis_id,)
).fetchone()
if not row:
return False
conn.execute("DELETE FROM keyword_analyses WHERE id = ?", (analysis_id,))
return True
# ── blog_posts CRUD ──────────────────────────────────────────────────────────
def _post_row_to_dict(r) -> Dict[str, Any]:
return {
"id": r["id"],
"keyword_id": r["keyword_id"],
"title": r["title"],
"body": r["body"],
"excerpt": r["excerpt"],
"tags": json.loads(r["tags"]) if r["tags"] else [],
"status": r["status"],
"review_score": r["review_score"],
"review_detail": json.loads(r["review_detail"]) if r["review_detail"] else {},
"naver_url": r["naver_url"],
"trend_brief": r["trend_brief"],
"created_at": r["created_at"],
"updated_at": r["updated_at"],
}
def add_post(data: Dict[str, Any]) -> Dict[str, Any]:
with _conn() as conn:
conn.execute(
"""INSERT INTO blog_posts
(keyword_id, title, body, excerpt, tags, status, review_score,
review_detail, naver_url, trend_brief)
VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?)""",
(
data.get("keyword_id"),
data.get("title", ""),
data.get("body", ""),
data.get("excerpt", ""),
json.dumps(data.get("tags", []), ensure_ascii=False),
data.get("status", "draft"),
data.get("review_score"),
json.dumps(data.get("review_detail", {}), ensure_ascii=False),
data.get("naver_url", ""),
data.get("trend_brief", ""),
),
)
row = conn.execute(
"SELECT * FROM blog_posts WHERE rowid = last_insert_rowid()"
).fetchone()
return _post_row_to_dict(row)
def get_post(post_id: int) -> Optional[Dict[str, Any]]:
with _conn() as conn:
row = conn.execute(
"SELECT * FROM blog_posts WHERE id = ?", (post_id,)
).fetchone()
return _post_row_to_dict(row) if row else None
def get_posts(status: Optional[str] = None, limit: int = 50) -> List[Dict[str, Any]]:
with _conn() as conn:
if status:
rows = conn.execute(
"SELECT * FROM blog_posts WHERE status = ? ORDER BY created_at DESC LIMIT ?",
(status, limit),
).fetchall()
else:
rows = conn.execute(
"SELECT * FROM blog_posts ORDER BY created_at DESC LIMIT ?", (limit,)
).fetchall()
return [_post_row_to_dict(r) for r in rows]
def update_post(post_id: int, data: Dict[str, Any]) -> Optional[Dict[str, Any]]:
with _conn() as conn:
fields = []
values = []
for k in ("title", "body", "excerpt", "status", "naver_url", "trend_brief"):
if k in data:
fields.append(f"{k} = ?")
values.append(data[k])
if "tags" in data:
fields.append("tags = ?")
values.append(json.dumps(data["tags"], ensure_ascii=False))
if "review_score" in data:
fields.append("review_score = ?")
values.append(data["review_score"])
if "review_detail" in data:
fields.append("review_detail = ?")
values.append(json.dumps(data["review_detail"], ensure_ascii=False))
if not fields:
return get_post(post_id)
fields.append("updated_at = strftime('%Y-%m-%dT%H:%M:%fZ','now')")
values.append(post_id)
conn.execute(
f"UPDATE blog_posts SET {', '.join(fields)} WHERE id = ?", values
)
row = conn.execute(
"SELECT * FROM blog_posts WHERE id = ?", (post_id,)
).fetchone()
return _post_row_to_dict(row) if row else None
def delete_post(post_id: int) -> bool:
with _conn() as conn:
row = conn.execute(
"SELECT id FROM blog_posts WHERE id = ?", (post_id,)
).fetchone()
if not row:
return False
conn.execute("DELETE FROM blog_posts WHERE id = ?", (post_id,))
return True
# ── commissions CRUD ─────────────────────────────────────────────────────────
def _comm_row_to_dict(r) -> Dict[str, Any]:
return {
"id": r["id"],
"post_id": r["post_id"],
"month": r["month"],
"clicks": r["clicks"],
"purchases": r["purchases"],
"revenue": r["revenue"],
"note": r["note"],
"created_at": r["created_at"],
}
def add_commission(data: Dict[str, Any]) -> Dict[str, Any]:
with _conn() as conn:
conn.execute(
"""INSERT INTO commissions (post_id, month, clicks, purchases, revenue, note)
VALUES (?, ?, ?, ?, ?, ?)""",
(
data.get("post_id"),
data.get("month", ""),
data.get("clicks", 0),
data.get("purchases", 0),
data.get("revenue", 0),
data.get("note", ""),
),
)
row = conn.execute(
"SELECT * FROM commissions WHERE rowid = last_insert_rowid()"
).fetchone()
return _comm_row_to_dict(row)
def get_commissions(post_id: Optional[int] = None, limit: int = 100) -> List[Dict[str, Any]]:
with _conn() as conn:
if post_id:
rows = conn.execute(
"SELECT * FROM commissions WHERE post_id = ? ORDER BY month DESC LIMIT ?",
(post_id, limit),
).fetchall()
else:
rows = conn.execute(
"SELECT * FROM commissions ORDER BY month DESC LIMIT ?", (limit,)
).fetchall()
return [_comm_row_to_dict(r) for r in rows]
def update_commission(comm_id: int, data: Dict[str, Any]) -> Optional[Dict[str, Any]]:
with _conn() as conn:
fields = []
values = []
for k in ("month", "clicks", "purchases", "revenue", "note"):
if k in data:
fields.append(f"{k} = ?")
values.append(data[k])
if not fields:
return None
values.append(comm_id)
conn.execute(
f"UPDATE commissions SET {', '.join(fields)} WHERE id = ?", values
)
row = conn.execute(
"SELECT * FROM commissions WHERE id = ?", (comm_id,)
).fetchone()
return _comm_row_to_dict(row) if row else None
def delete_commission(comm_id: int) -> bool:
with _conn() as conn:
row = conn.execute(
"SELECT id FROM commissions WHERE id = ?", (comm_id,)
).fetchone()
if not row:
return False
conn.execute("DELETE FROM commissions WHERE id = ?", (comm_id,))
return True
def get_dashboard_stats() -> Dict[str, Any]:
"""Dashboard aggregates: total posts/clicks/purchases/revenue + monthly trend."""
with _conn() as conn:
total_posts = conn.execute("SELECT COUNT(*) FROM blog_posts").fetchone()[0]
published = conn.execute(
"SELECT COUNT(*) FROM blog_posts WHERE status = 'published'"
).fetchone()[0]
agg = conn.execute(
"SELECT COALESCE(SUM(clicks),0), COALESCE(SUM(purchases),0), COALESCE(SUM(revenue),0) FROM commissions"
).fetchone()
monthly = conn.execute(
"""SELECT month, SUM(clicks) as clicks, SUM(purchases) as purchases, SUM(revenue) as revenue
FROM commissions GROUP BY month ORDER BY month DESC LIMIT 12"""
).fetchall()
top_posts = conn.execute(
"""SELECT bp.id, bp.title, COALESCE(SUM(c.revenue),0) as total_revenue
FROM blog_posts bp LEFT JOIN commissions c ON c.post_id = bp.id
GROUP BY bp.id ORDER BY total_revenue DESC LIMIT 5"""
).fetchall()
return {
"total_posts": total_posts,
"published_posts": published,
"total_clicks": agg[0],
"total_purchases": agg[1],
"total_revenue": agg[2],
"monthly": [
{"month": r["month"], "clicks": r["clicks"], "purchases": r["purchases"], "revenue": r["revenue"]}
for r in monthly
],
"top_posts": [
{"id": r["id"], "title": r["title"], "total_revenue": r["total_revenue"]}
for r in top_posts
],
}
# ── generation_tasks CRUD ────────────────────────────────────────────────────
def _task_row_to_dict(r) -> Dict[str, Any]:
return {
"task_id": r["id"],
"type": r["type"],
"status": r["status"],
"progress": r["progress"],
"message": r["message"],
"result_id": r["result_id"],
"error": r["error"],
"params": json.loads(r["params"]) if r["params"] else {},
"created_at": r["created_at"],
"updated_at": r["updated_at"],
}
def create_task(task_id: str, task_type: str, params: Dict[str, Any]) -> Dict[str, Any]:
with _conn() as conn:
conn.execute(
"INSERT INTO generation_tasks (id, type, params) VALUES (?, ?, ?)",
(task_id, task_type, json.dumps(params, ensure_ascii=False)),
)
row = conn.execute(
"SELECT * FROM generation_tasks WHERE id = ?", (task_id,)
).fetchone()
return _task_row_to_dict(row)
def update_task(
task_id: str,
status: str,
progress: int,
message: str,
result_id: Optional[int] = None,
error: Optional[str] = None,
) -> None:
with _conn() as conn:
conn.execute(
"""UPDATE generation_tasks
SET status = ?, progress = ?, message = ?, result_id = ?, error = ?,
updated_at = strftime('%Y-%m-%dT%H:%M:%fZ','now')
WHERE id = ?""",
(status, progress, message, result_id, error, task_id),
)
def get_task(task_id: str) -> Optional[Dict[str, Any]]:
with _conn() as conn:
row = conn.execute(
"SELECT * FROM generation_tasks WHERE id = ?", (task_id,)
).fetchone()
return _task_row_to_dict(row) if row else None
# ── prompt_templates CRUD ────────────────────────────────────────────────────
def get_template(name: str) -> Optional[str]:
with _conn() as conn:
row = conn.execute(
"SELECT template FROM prompt_templates WHERE name = ?", (name,)
).fetchone()
return row["template"] if row else None
def get_all_templates() -> List[Dict[str, Any]]:
with _conn() as conn:
rows = conn.execute("SELECT * FROM prompt_templates ORDER BY name").fetchall()
return [
{"id": r["id"], "name": r["name"], "description": r["description"],
"template": r["template"], "updated_at": r["updated_at"]}
for r in rows
]
def update_template(name: str, template: str) -> bool:
with _conn() as conn:
conn.execute(
"UPDATE prompt_templates SET template = ?, updated_at = strftime('%Y-%m-%dT%H:%M:%fZ','now') WHERE name = ?",
(template, name),
)
return conn.execute(
"SELECT id FROM prompt_templates WHERE name = ?", (name,)
).fetchone() is not None
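The schema above leans on two SQLite conventions worth noting: `sqlite3.Row` for name-based column access, and a `strftime('%Y-%m-%dT%H:%M:%fZ','now')` column default that produces millisecond-precision UTC ISO timestamps. A self-contained illustration against an in-memory database (the table name is illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.row_factory = sqlite3.Row  # rows become addressable by column name
conn.execute("""
    CREATE TABLE demo (
        id INTEGER PRIMARY KEY AUTOINCREMENT,
        name TEXT NOT NULL,
        created_at TEXT NOT NULL DEFAULT (strftime('%Y-%m-%dT%H:%M:%fZ','now'))
    )
""")
conn.execute("INSERT INTO demo (name) VALUES (?)", ("example",))
# Same pattern db.py uses to fetch the row it just inserted
row = conn.execute("SELECT * FROM demo WHERE rowid = last_insert_rowid()").fetchone()
```

`row["created_at"]` comes back as e.g. `2026-04-05T10:59:25.123Z`, so lexicographic `ORDER BY created_at DESC` matches chronological order. (The `PRAGMA journal_mode=WAL` in `_conn` is omitted here because WAL does not apply to in-memory databases.)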

blog-lab/app/main.py

@@ -0,0 +1,338 @@
import os
import uuid
import logging
from fastapi import FastAPI, HTTPException, BackgroundTasks
from fastapi.middleware.cors import CORSMiddleware
from pydantic import BaseModel
from typing import List, Optional
from .config import CORS_ALLOW_ORIGINS, NAVER_CLIENT_ID, ANTHROPIC_API_KEY
from .db import (
init_db,
get_keyword_analyses, get_keyword_analysis, delete_keyword_analysis,
add_keyword_analysis,
get_posts, get_post, add_post, update_post, delete_post,
get_commissions, add_commission, update_commission, delete_commission,
get_dashboard_stats,
get_task, create_task, update_task,
)
from .naver_search import analyze_keyword
from .content_generator import generate_trend_brief, generate_blog_post, regenerate_blog_post
from .quality_reviewer import review_post
logger = logging.getLogger(__name__)
app = FastAPI()
_cors_origins = CORS_ALLOW_ORIGINS.split(",")
app.add_middleware(
CORSMiddleware,
allow_origins=[o.strip() for o in _cors_origins],
allow_credentials=False,
allow_methods=["GET", "POST", "PUT", "DELETE", "OPTIONS"],
allow_headers=["Content-Type"],
)
@app.on_event("startup")
def on_startup():
init_db()
os.makedirs("/app/data", exist_ok=True)
@app.get("/health")
def health():
return {"ok": True}
@app.get("/api/blog-marketing/status")
def service_status():
"""Service status and configuration overview."""
return {
"ok": True,
"naver_api": bool(NAVER_CLIENT_ID),
"claude_api": bool(ANTHROPIC_API_KEY),
}
# ── Keyword analysis API ─────────────────────────────────────────────────────
class ResearchRequest(BaseModel):
keyword: str
def _run_research(task_id: str, keyword: str):
"""BackgroundTask: Naver search → keyword analysis → save to DB."""
try:
update_task(task_id, "processing", 30, "네이버 검색 중...")
result = analyze_keyword(keyword)
update_task(task_id, "processing", 80, "분석 결과 저장 중...")
saved = add_keyword_analysis(result)
update_task(task_id, "succeeded", 100, "분석 완료", result_id=saved["id"])
except Exception as e:
logger.exception("Research failed for keyword=%s", keyword)
update_task(task_id, "failed", 0, "", error=str(e))
@app.post("/api/blog-marketing/research")
def start_research(req: ResearchRequest, background_tasks: BackgroundTasks):
"""Start keyword analysis (BackgroundTask); returns task_id immediately."""
if not NAVER_CLIENT_ID:
raise HTTPException(status_code=400, detail="Naver API 키가 설정되지 않았습니다")
if not req.keyword.strip():
raise HTTPException(status_code=400, detail="키워드를 입력하세요")
task_id = str(uuid.uuid4())
create_task(task_id, "research", {"keyword": req.keyword.strip()})
background_tasks.add_task(_run_research, task_id, req.keyword.strip())
return {"task_id": task_id}
@app.get("/api/blog-marketing/research/history")
def list_research(limit: int = 30):
return {"analyses": get_keyword_analyses(limit)}
@app.get("/api/blog-marketing/research/{analysis_id}")
def get_research(analysis_id: int):
result = get_keyword_analysis(analysis_id)
if not result:
raise HTTPException(status_code=404, detail="Analysis not found")
return result
@app.delete("/api/blog-marketing/research/{analysis_id}")
def remove_research(analysis_id: int):
if not delete_keyword_analysis(analysis_id):
raise HTTPException(status_code=404, detail="Analysis not found")
return {"ok": True}
# ── Task status polling API ──────────────────────────────────────────────────
@app.get("/api/blog-marketing/task/{task_id}")
def get_task_status(task_id: str):
task = get_task(task_id)
if not task:
raise HTTPException(status_code=404, detail="Task not found")
return task
# ── AI article generation API ────────────────────────────────────────────────
class GenerateRequest(BaseModel):
keyword_id: int # keyword_analyses.id
def _run_generate(task_id: str, keyword_id: int):
"""BackgroundTask: trend brief → blog post generation → save to DB."""
try:
analysis = get_keyword_analysis(keyword_id)
if not analysis:
update_task(task_id, "failed", 0, "", error="키워드 분석 결과를 찾을 수 없습니다")
return
update_task(task_id, "processing", 20, "트렌드 브리프 생성 중...")
trend_brief = generate_trend_brief(analysis)
update_task(task_id, "processing", 60, "블로그 글 작성 중...")
post_data = generate_blog_post(analysis, trend_brief)
update_task(task_id, "processing", 90, "저장 중...")
saved = add_post({
"keyword_id": keyword_id,
"title": post_data["title"],
"body": post_data["body"],
"excerpt": post_data["excerpt"],
"tags": post_data["tags"],
"status": "draft",
"trend_brief": trend_brief,
})
update_task(task_id, "succeeded", 100, "글 생성 완료", result_id=saved["id"])
except Exception as e:
logger.exception("Generate failed for keyword_id=%s", keyword_id)
update_task(task_id, "failed", 0, "", error=str(e))
@app.post("/api/blog-marketing/generate")
def start_generate(req: GenerateRequest, background_tasks: BackgroundTasks):
"""Start AI blog post generation; returns task_id immediately."""
if not ANTHROPIC_API_KEY:
raise HTTPException(status_code=400, detail="Claude API 키가 설정되지 않았습니다")
analysis = get_keyword_analysis(req.keyword_id)
if not analysis:
raise HTTPException(status_code=404, detail="키워드 분석 결과를 찾을 수 없습니다")
task_id = str(uuid.uuid4())
create_task(task_id, "generate", {"keyword_id": req.keyword_id})
background_tasks.add_task(_run_generate, task_id, req.keyword_id)
return {"task_id": task_id}
# ── Quality review API ───────────────────────────────────────────────────────
def _run_review(task_id: str, post_id: int):
"""BackgroundTask: blog post quality review."""
try:
post = get_post(post_id)
if not post:
update_task(task_id, "failed", 0, "", error="포스트를 찾을 수 없습니다")
return
update_task(task_id, "processing", 50, "품질 리뷰 중...")
result = review_post(post["title"], post["body"])
update_post(post_id, {
"review_score": result["total"],
"review_detail": result,
"status": "reviewed" if result["pass"] else "draft",
})
update_task(task_id, "succeeded", 100, "리뷰 완료", result_id=post_id)
except Exception as e:
logger.exception("Review failed for post_id=%s", post_id)
update_task(task_id, "failed", 0, "", error=str(e))
@app.post("/api/blog-marketing/review/{post_id}")
def start_review(post_id: int, background_tasks: BackgroundTasks):
"""Start a blog post quality review; returns task_id immediately."""
if not ANTHROPIC_API_KEY:
raise HTTPException(status_code=400, detail="Claude API 키가 설정되지 않았습니다")
post = get_post(post_id)
if not post:
raise HTTPException(status_code=404, detail="Post not found")
task_id = str(uuid.uuid4())
create_task(task_id, "review", {"post_id": post_id})
background_tasks.add_task(_run_review, task_id, post_id)
return {"task_id": task_id}
# ── Regeneration API ─────────────────────────────────────────────────────────
def _run_regenerate(task_id: str, post_id: int):
    """BackgroundTask: regenerate a blog post based on review feedback."""
    try:
        post = get_post(post_id)
        if not post:
            update_task(task_id, "failed", 0, "", error="Post not found")
            return
        analysis = get_keyword_analysis(post["keyword_id"]) if post.get("keyword_id") else {}
        # review_detail may be None or empty; fall back to a generic feedback line.
        feedback = (post.get("review_detail") or {}).get("feedback", "Needs improvement")
        update_task(task_id, "processing", 50, "Regenerating post...")
        result = regenerate_blog_post(
            analysis or {"keyword": ""},
            post.get("trend_brief", ""),
            post["body"],
            feedback,
        )
        update_post(post_id, {
            "title": result["title"],
            "body": result["body"],
            "excerpt": result["excerpt"],
            "tags": result["tags"],
            "status": "draft",
            "review_score": None,
            "review_detail": {},
        })
        update_task(task_id, "succeeded", 100, "Regeneration complete", result_id=post_id)
    except Exception as e:
        logger.exception("Regenerate failed for post_id=%s", post_id)
        update_task(task_id, "failed", 0, "", error=str(e))
@app.post("/api/blog-marketing/regenerate/{post_id}")
def start_regenerate(post_id: int, background_tasks: BackgroundTasks):
    """Regenerate a blog post from review feedback; returns a task_id immediately."""
    if not ANTHROPIC_API_KEY:
        raise HTTPException(status_code=400, detail="Claude API key is not configured")
    post = get_post(post_id)
    if not post:
        raise HTTPException(status_code=404, detail="Post not found")
    task_id = str(uuid.uuid4())
    create_task(task_id, "regenerate", {"post_id": post_id})
    background_tasks.add_task(_run_regenerate, task_id, post_id)
    return {"task_id": task_id}
# ── Post CRUD API ───────────────────────────────────────────────────────────
@app.get("/api/blog-marketing/posts")
def list_posts(status: str | None = None, limit: int = 50):
    return {"posts": get_posts(status=status, limit=limit)}
@app.get("/api/blog-marketing/posts/{post_id}")
def get_post_detail(post_id: int):
post = get_post(post_id)
if not post:
raise HTTPException(status_code=404, detail="Post not found")
return post
@app.put("/api/blog-marketing/posts/{post_id}")
def edit_post(post_id: int, data: dict):
result = update_post(post_id, data)
if not result:
raise HTTPException(status_code=404, detail="Post not found")
return result
@app.delete("/api/blog-marketing/posts/{post_id}")
def remove_post(post_id: int):
if not delete_post(post_id):
raise HTTPException(status_code=404, detail="Post not found")
return {"ok": True}
@app.post("/api/blog-marketing/posts/{post_id}/publish")
def publish_post(post_id: int, data: dict | None = None):
    """Register the Naver post URL and set status to published."""
    naver_url = (data or {}).get("naver_url", "")
    result = update_post(post_id, {"status": "published", "naver_url": naver_url})
    if not result:
        raise HTTPException(status_code=404, detail="Post not found")
    return result
# ── Commission tracking API ─────────────────────────────────────────────────
@app.get("/api/blog-marketing/commissions")
def list_commissions(post_id: int | None = None, limit: int = 100):
    return {"commissions": get_commissions(post_id=post_id, limit=limit)}
@app.post("/api/blog-marketing/commissions", status_code=201)
def create_commission(data: dict):
return add_commission(data)
@app.put("/api/blog-marketing/commissions/{comm_id}")
def edit_commission(comm_id: int, data: dict):
result = update_commission(comm_id, data)
if not result:
raise HTTPException(status_code=404, detail="Commission not found")
return result
@app.delete("/api/blog-marketing/commissions/{comm_id}")
def remove_commission(comm_id: int):
if not delete_commission(comm_id):
raise HTTPException(status_code=404, detail="Commission not found")
return {"ok": True}
# ── Dashboard API ───────────────────────────────────────────────────────────
@app.get("/api/blog-marketing/dashboard")
def dashboard():
return get_dashboard_stats()
@@ -0,0 +1,174 @@
"""Naver Search API client: blog + shopping search."""
import re
import requests
from typing import Any, Dict, List, Optional
from .config import NAVER_CLIENT_ID, NAVER_CLIENT_SECRET
BLOG_URL = "https://openapi.naver.com/v1/search/blog.json"
SHOP_URL = "https://openapi.naver.com/v1/search/shop.json"
_HEADERS = {
"X-Naver-Client-Id": NAVER_CLIENT_ID,
"X-Naver-Client-Secret": NAVER_CLIENT_SECRET,
}
_TAG_RE = re.compile(r"<[^>]+>")
def _strip_html(text: str) -> str:
return _TAG_RE.sub("", text).strip()
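Naver returns titles and descriptions with `<b>` highlight tags around the matched keyword; `_strip_html` removes them with the regex above. A quick self-contained check (the sample string is hypothetical):

```python
import re

# Same pattern as _TAG_RE above: any <...> tag.
TAG_RE = re.compile(r"<[^>]+>")

def strip_html(text: str) -> str:
    # Drop tags, then trim surrounding whitespace.
    return TAG_RE.sub("", text).strip()

print(strip_html("  <b>무선 청소기</b> 추천 "))  # → 무선 청소기 추천
```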
def search_blog(keyword: str, display: int = 10, sort: str = "sim") -> Dict[str, Any]:
    """Naver blog search.

    Args:
        keyword: search keyword
        display: number of results (1-100)
        sort: sim (relevance) | date (recency)

    Returns:
        {"total": int, "items": [...]}
    """
resp = requests.get(
BLOG_URL,
headers=_HEADERS,
params={"query": keyword, "display": display, "sort": sort},
timeout=10,
)
resp.raise_for_status()
data = resp.json()
items = [
{
"title": _strip_html(item.get("title", "")),
"description": _strip_html(item.get("description", "")),
"link": item.get("link", ""),
"bloggername": item.get("bloggername", ""),
"postdate": item.get("postdate", ""),
}
for item in data.get("items", [])
]
return {"total": data.get("total", 0), "items": items}
def search_shopping(keyword: str, display: int = 20, sort: str = "sim") -> Dict[str, Any]:
    """Naver shopping search.

    Args:
        keyword: search keyword
        display: number of results (1-100)
        sort: sim (relevance) | date (recency) | asc (price asc) | dsc (price desc)

    Returns:
        {"total": int, "items": [...], "price_stats": {...}}
    """
resp = requests.get(
SHOP_URL,
headers=_HEADERS,
params={"query": keyword, "display": display, "sort": sort},
timeout=10,
)
resp.raise_for_status()
data = resp.json()
items = []
prices = []
for item in data.get("items", []):
lprice = _safe_int(item.get("lprice"))
hprice = _safe_int(item.get("hprice"))
parsed = {
"title": _strip_html(item.get("title", "")),
"link": item.get("link", ""),
"image": item.get("image", ""),
"lprice": lprice,
"hprice": hprice,
"mallName": item.get("mallName", ""),
"productId": item.get("productId", ""),
"productType": item.get("productType", ""),
"category1": item.get("category1", ""),
"category2": item.get("category2", ""),
"category3": item.get("category3", ""),
"brand": item.get("brand", ""),
"maker": item.get("maker", ""),
}
items.append(parsed)
if lprice and lprice > 0:
prices.append(lprice)
price_stats = None
if prices:
price_stats = {
"min": min(prices),
"max": max(prices),
"avg": int(sum(prices) / len(prices)),
"count": len(prices),
}
return {
"total": data.get("total", 0),
"items": items,
"price_stats": price_stats,
}
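The price statistics reduce to a small pure function over the raw `lprice` strings: parse each to an int, drop missing and zero prices, then take min/max/avg. This sketch reproduces that aggregation so it can be checked without calling the API (`safe_int`/`price_stats` are illustrative names):

```python
from typing import Optional

def safe_int(val) -> Optional[int]:
    # Mirrors _safe_int above: None on unparsable input.
    try:
        return int(val)
    except (ValueError, TypeError):
        return None

def price_stats(raw_prices: list) -> Optional[dict]:
    # Keep only positive, parsable prices (Naver sends "0" for missing lprice).
    prices = [p for p in (safe_int(v) for v in raw_prices) if p and p > 0]
    if not prices:
        return None
    return {"min": min(prices), "max": max(prices),
            "avg": int(sum(prices) / len(prices)), "count": len(prices)}

stats = price_stats(["12900", "15900", None, "0", "abc"])
print(stats)  # → {'min': 12900, 'max': 15900, 'avg': 14400, 'count': 2}
```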
def _safe_int(val) -> Optional[int]:
if val is None:
return None
try:
return int(val)
except (ValueError, TypeError):
return None
def analyze_keyword(keyword: str) -> Dict[str, Any]:
    """Analyze keyword competition and opportunity.

    Derives competition_score and opportunity_score from the total blog
    result count, total shopping result count, and price statistics.

    Returns:
        {
            "keyword", "blog_total", "shop_total",
            "competition", "opportunity",
            "avg_price", "min_price", "max_price",
            "top_products": [...], "top_blogs": [...]
        }
    """
blog = search_blog(keyword, display=10, sort="sim")
shop = search_shopping(keyword, display=20, sort="sim")
blog_total = blog["total"]
shop_total = shop["total"]
    # Competition: log-scaled 0-100 from the blog result count
    import math
    if blog_total > 0:
        competition = min(100, int(math.log10(blog_total + 1) * 15))
    else:
        competition = 0
    # Opportunity: higher when shopping demand is high and blog competition is low
    if shop_total > 0 and blog_total > 0:
        ratio = shop_total / blog_total
        opportunity = min(100, int(ratio * 20))
    elif shop_total > 0:
        opportunity = 90  # demand with no competition: high opportunity
    else:
        opportunity = 10  # no shopping demand
price_stats = shop.get("price_stats") or {}
return {
"keyword": keyword,
"blog_total": blog_total,
"shop_total": shop_total,
"competition": competition,
"opportunity": opportunity,
"avg_price": price_stats.get("avg"),
"min_price": price_stats.get("min"),
"max_price": price_stats.get("max"),
"top_products": shop["items"][:5],
"top_blogs": blog["items"][:5],
}
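Both scores are closed-form functions of the two result counts, so sample values are easy to verify by hand. The sketch below isolates them (function names are mine; the 15 and 20 multipliers are copied from the code above):

```python
import math

def competition_score(blog_total: int) -> int:
    # Log-scaled 0-100: roughly 15 points per order of magnitude of blog results.
    return min(100, int(math.log10(blog_total + 1) * 15)) if blog_total > 0 else 0

def opportunity_score(shop_total: int, blog_total: int) -> int:
    # Shopping demand relative to blog competition, capped at 100.
    if shop_total > 0 and blog_total > 0:
        return min(100, int(shop_total / blog_total * 20))
    return 90 if shop_total > 0 else 10

print(competition_score(100_000))          # → 75
print(opportunity_score(30_000, 100_000))  # → 6
print(opportunity_score(500, 0))           # → 90
```

Note the asymmetry: a keyword with 100k competing blog posts scores 75/100 on competition, while a shop/blog ratio below 0.05 drives the opportunity score to zero.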
@@ -0,0 +1,76 @@
"""Claude-based blog post quality review: 5 criteria × 10 points, pass at 35/50."""
import json
import logging
from typing import Any, Dict, Optional
import anthropic
from .config import ANTHROPIC_API_KEY, CLAUDE_MODEL
from .db import get_template
logger = logging.getLogger(__name__)
PASS_THRESHOLD = 35  # pass when at least 35 of 50 points
_client: Optional[anthropic.Anthropic] = None
def _get_client() -> anthropic.Anthropic:
global _client
if _client is None:
_client = anthropic.Anthropic(api_key=ANTHROPIC_API_KEY)
return _client
def review_post(title: str, body: str) -> Dict[str, Any]:
    """Review blog post quality.

    Returns:
        {
            "scores": {"empathy": N, "click_appeal": N, "conversion": N, "seo": N, "format": N},
            "total": N,
            "pass": bool,
            "feedback": str
        }
    """
    template = get_template("quality_review")
    if not template:
        raise RuntimeError("quality_review template not found")
prompt = template.format(title=title, body=body[:6000])
client = _get_client()
resp = client.messages.create(
model=CLAUDE_MODEL,
max_tokens=2048,
messages=[{"role": "user", "content": prompt}],
)
raw = resp.content[0].text
try:
text = raw.strip()
if text.startswith("```"):
lines = text.split("\n")
lines = [l for l in lines if not l.strip().startswith("```")]
text = "\n".join(lines)
result = json.loads(text)
scores = result.get("scores", {})
total = sum(scores.values())
passed = total >= PASS_THRESHOLD
return {
"scores": scores,
"total": total,
"pass": passed,
"feedback": result.get("feedback", ""),
}
except (json.JSONDecodeError, KeyError, TypeError) as e:
logger.warning("Quality review JSON parse failed: %s", e)
return {
"scores": {"empathy": 0, "click_appeal": 0, "conversion": 0, "seo": 0, "format": 0},
"total": 0,
"pass": False,
            "feedback": f"Failed to parse review. Raw response:\n{raw[:500]}",
}
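Claude often wraps its JSON answer in a markdown code fence, so the parser strips fence lines before `json.loads` and sums the five criterion scores against the threshold. A standalone sketch of that parsing path (sample response is hypothetical):

```python
import json

PASS_THRESHOLD = 35  # out of 50

def parse_review(raw: str) -> dict:
    text = raw.strip()
    if text.startswith("```"):
        # Drop the opening ```json and closing ``` lines, keep the payload.
        text = "\n".join(l for l in text.split("\n")
                         if not l.strip().startswith("```"))
    result = json.loads(text)
    scores = result.get("scores", {})
    total = sum(scores.values())
    return {"scores": scores, "total": total,
            "pass": total >= PASS_THRESHOLD,
            "feedback": result.get("feedback", "")}

raw = ('```json\n{"scores": {"empathy": 8, "click_appeal": 7, "conversion": 6, '
       '"seo": 9, "format": 8}, "feedback": "ok"}\n```')
parsed = parse_review(raw)
print(parsed["total"], parsed["pass"])  # → 38 True
```

If the model returns malformed JSON anyway, the real code falls through to the zero-score fallback above rather than failing the task.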
@@ -0,0 +1,4 @@
fastapi==0.115.6
uvicorn[standard]==0.34.0
requests==2.32.3
anthropic==0.52.0
@@ -68,6 +68,27 @@ services:
timeout: 5s
retries: 3
blog-lab:
build:
context: ./blog-lab
container_name: blog-lab
restart: unless-stopped
ports:
- "18700:8000"
environment:
- TZ=${TZ:-Asia/Seoul}
- ANTHROPIC_API_KEY=${ANTHROPIC_API_KEY:-}
- NAVER_CLIENT_ID=${NAVER_CLIENT_ID:-}
- NAVER_CLIENT_SECRET=${NAVER_CLIENT_SECRET:-}
- CORS_ALLOW_ORIGINS=${CORS_ALLOW_ORIGINS:-http://localhost:3007,http://localhost:8080}
volumes:
- ${BLOG_DATA_PATH:-./data/blog}:/app/data
healthcheck:
test: ["CMD", "python", "-c", "import urllib.request; urllib.request.urlopen('http://localhost:8000/health')"]
interval: 30s
timeout: 5s
retries: 3
travel-proxy:
build: ./travel-proxy
container_name: travel-proxy
@@ -97,6 +118,7 @@ services:
restart: unless-stopped
depends_on:
- music-lab
- blog-lab
ports:
- "8080:80"
volumes:
@@ -99,6 +99,20 @@ server {
proxy_pass http://stock-lab:8000/api/trade/;
}
# blog-marketing API
location /api/blog-marketing/ {
resolver 127.0.0.11 valid=10s;
set $blog_backend blog-lab:8000;
proxy_http_version 1.1;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_read_timeout 120s;
proxy_pass http://$blog_backend$request_uri;
}
    # portfolio API (Stock Lab): matches with and without trailing slash
location /api/portfolio {
proxy_http_version 1.1;