74 Commits

Author SHA1 Message Date
119ac88e1e feat(agent-office): stock screener automated weekday 16:30 KST job + Telegram delivery
- StockAgent.on_screener_schedule: snapshot/refresh → screener/run(mode=auto)
  → sends telegram_payload (MarkdownV2). skipped_holiday sends nothing;
  on failure, an operator alert goes out in HTML.
- service_proxy: add refresh_screener_snapshot, run_stock_screener
  (180s timeout each, reusing the existing STOCK_LAB_URL env).
- telegram.messaging.send_raw: add a parse_mode parameter
  (default stays HTML; used to pass MarkdownV2 payloads through directly).
- scheduler: cron day_of_week=mon-fri hour=16 minute=30 id=stock_screener
  (Asia/Seoul TZ).
- on_command: add a 'run_screener' manual trigger.
- tests: 5 cases — success / holiday / snapshot failure / run failure / unexpected status.
2026-05-12 14:54:24 +09:00
c4cb18a25c feat(stock-lab): /run mode=auto treats public holidays/weekends as skipped_holiday 2026-05-12 13:49:45 +09:00
50e811c5dd feat(stock-lab): /snapshot/refresh + /runs list & detail routers 2026-05-12 13:47:16 +09:00
5ec7c2461b feat(stock-lab): /run endpoint — preview/manual_save/auto mode matrix 2026-05-12 13:44:21 +09:00
5f0fed7f13 feat(stock-lab): /nodes + /settings routers + main.py include
- screener/router.py: APIRouter prefix=/api/stock/screener
  - GET /nodes: exposes NODE_REGISTRY + GATE_REGISTRY metadata (7 score + 1 gate)
  - GET /settings: reads the screener_settings singleton row
  - PUT /settings: round-trips weight/node/gate parameters
- main.py: include screener_router (right after the FastAPI app is created)
- db.py: STOCK_DB_PATH env var support (test isolation; default /app/data/stock.db unchanged)
- test_screener_router.py: 3 tests (nodes list, settings GET, PUT round-trip)
2026-05-12 13:41:24 +09:00
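For reference, a minimal sketch of the GET/PUT settings round-trip this commit describes, assuming an in-memory stand-in for the screener_settings singleton row (the actual screener/router.py and its models may differ):

```python
# Hedged sketch of the /settings singleton round-trip; illustrative only.
from fastapi import APIRouter
from pydantic import BaseModel

router = APIRouter(prefix="/api/stock/screener")

class ScreenerSettings(BaseModel):
    weights: dict[str, float] = {}
    node_params: dict[str, dict] = {}
    gate_params: dict[str, dict] = {}

_SETTINGS = ScreenerSettings()  # stands in for the screener_settings singleton row

@router.get("/settings")
def get_settings() -> ScreenerSettings:
    return _SETTINGS

@router.put("/settings")
def put_settings(body: ScreenerSettings) -> ScreenerSettings:
    global _SETTINGS
    _SETTINGS = body  # the real version persists the singleton row in SQLite
    return _SETTINGS
```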
070f2de3f1 feat(stock-lab): screener Pydantic schemas 2026-05-12 13:37:23 +09:00
01ebd2e7d9 feat(stock-lab): telegram.py message builder (Top 10 + icons + page link) 2026-05-12 09:34:53 +09:00
7db9869722 feat(stock-lab): Screener engine + combine + ScreenerResult + node registry
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-12 09:29:10 +09:00
97cb38ca7f feat(stock-lab): position_sizer — Wilder ATR + entry/stop/target 2026-05-12 09:25:49 +09:00
90c408aa77 feat(stock-lab): VcpLite node — volatility-contraction percentile 2026-05-12 09:07:59 +09:00
55f2fa9cff feat(stock-lab): MaAlignment node — 5-condition rule score for bullish MA stacking 2026-05-12 09:06:45 +09:00
3ded781059 feat(stock-lab): RsRating node — IBD-weighted excess-market-return percentile 2026-05-12 09:02:28 +09:00
4eaeea9833 feat(stock-lab): High52WProximity node — rule score for proximity to the 52-week high 2026-05-12 08:59:55 +09:00
9709e5b019 feat(stock-lab): Momentum20 node — N-day return percentile 2026-05-12 08:58:43 +09:00
94d6a39ce8 feat(stock-lab): VolumeSurge node — volume surge as log(recent/average) 2026-05-12 08:54:47 +09:00
804fdcba26 feat(stock-lab): ForeignBuy node — N-day cumulative foreign net-buying strength 2026-05-12 08:19:44 +09:00
779e78405e feat(stock-lab): HygieneGate — hygiene filter (market cap / trading value / preferred shares / administrative issues) 2026-05-12 07:59:32 +09:00
16a651f670 feat(stock-lab): ScoreNode/GateNode abstractions + percentile_rank util 2026-05-12 07:52:01 +09:00
e508b7dc35 feat(stock-lab): ScreenContext.load/restrict + synthetic fixtures 2026-05-12 07:49:15 +09:00
6c5481971b feat(stock-lab): FDR ticker master + daily bars + Naver foreign-flow data (snapshot) 2026-05-12 07:41:40 +09:00
d7e235c008 feat(stock-lab): screener schema — 7 tables + default settings seed 2026-05-12 04:10:36 +09:00
8707d322e4 chore(stock-lab): FDR/Naver data dependencies + screener package skeleton 2026-05-12 04:07:52 +09:00
b4dd21e67a feat(packs-lab): add chunked resumable upload (offset-based)
The existing single-shot POST /upload stays as-is; five chunk-upload
endpoints were added for 5GB+ reliability.

- POST /upload/init — consumes the mint-token jti + creates the session directory
- PUT /upload/{sid}/chunk?offset=N — appends to the .part file after matching the offset
  · on mismatch: 409 + an X-Current-Offset header announcing the resume point
- GET /upload/{sid}/status — reports current written / expected_size
- POST /upload/{sid}/complete — atomic rename + Supabase INSERT
- DELETE /upload/{sid} — aborts the session + cleans up the partial file

auth.py: added verify_upload_token_no_consume() — chunk/complete/abort/status
must reuse the same mint-token, so only the signature + expiry are checked,
without consuming the jti.

models.py: added InitUploadResponse, ChunkUploadResponse.

Session state: PACK_BASE_DIR/.uploads/{jti}/meta.json + data.part (persisted
on the filesystem; single-container assumption).

Chunk size cap: PACK_CHUNK_MAX_SIZE env (default 64MB).

tests: 8 chunk-upload scenarios — full-flow / offset mismatch / status /
abort / wrong token / incomplete complete / filename collision / host-path
storage. All 37 tests pass.

CLAUDE.md: packs-lab API table extended with the 5 chunk endpoints + usage patterns.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-12 02:36:20 +09:00
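A minimal sketch of the 409 + X-Current-Offset contract described above, assuming a bare FastAPI route with hypothetical names (SESSION_DIR, put_chunk); it is not the actual packs-lab handler:

```python
# Sketch of the offset-matching resume contract; names are illustrative.
from pathlib import Path

from fastapi import FastAPI, Request
from fastapi.responses import JSONResponse

app = FastAPI()
SESSION_DIR = Path("/app/data/packs/.uploads")  # hypothetical; mirrors PACK_BASE_DIR/.uploads

@app.put("/api/packs/upload/{sid}/chunk")
async def put_chunk(sid: str, offset: int, request: Request):
    part = SESSION_DIR / sid / "data.part"
    written = part.stat().st_size if part.exists() else 0
    if offset != written:
        # Client and server disagree: announce the resume point instead of corrupting the file.
        return JSONResponse(
            status_code=409,
            content={"detail": "offset mismatch"},
            headers={"X-Current-Offset": str(written)},
        )
    chunk = await request.body()
    part.parent.mkdir(parents=True, exist_ok=True)
    with part.open("ab") as f:
        f.write(chunk)
    return {"written": written + len(chunk)}
```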
448dbd5f48 feat(packs-lab): DSM call retry/backoff + hardened upload cleanup
- dsm_client.py: _request_with_retry() retries only 5xx / transport / timeout
  errors with exponential backoff (DSM_MAX_RETRIES, DSM_BACKOFF_SEC env), and
  logs response bodies carrying DSM error codes.
- routes.py: the upload handler is wrapped in try/finally to guarantee
  partial-file cleanup, and the Supabase INSERT call gets its own try/except
  so network exceptions also trigger cleanup.
- test_dsm_client.py: 4 new retry scenarios (5xx→success / retries exhausted /
  transport error / 4xx no-retry). All 29 tests pass.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-12 02:31:39 +09:00
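A hedged sketch of that retry policy — only 5xx and transport errors (timeouts included) are retried, with exponential backoff; the real _request_with_retry's signature and logging may differ:

```python
# Retry only transient failures (5xx / transport / timeout) with exponential backoff.
import asyncio
import os

import httpx

MAX_RETRIES = int(os.getenv("DSM_MAX_RETRIES", "3"))
BACKOFF_SEC = float(os.getenv("DSM_BACKOFF_SEC", "1.0"))

async def request_with_retry(client: httpx.AsyncClient, method: str, url: str, **kw) -> httpx.Response:
    last_exc: Exception | None = None
    resp: httpx.Response | None = None
    for attempt in range(MAX_RETRIES + 1):
        try:
            resp = await client.request(method, url, **kw)
            if resp.status_code < 500:
                return resp  # 2xx-4xx returned as-is: a 4xx is a caller bug, not transience
            last_exc = None
        except httpx.TransportError as exc:  # covers connection errors and timeouts
            last_exc = exc
            resp = None
        if attempt < MAX_RETRIES:
            await asyncio.sleep(BACKOFF_SEC * (2 ** attempt))  # 1s, 2s, 4s, ...
    if last_exc is not None:
        raise last_exc
    return resp  # the final 5xx response
```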
a826e00399 feat(stock): carry NXT after-hours prices automatically once the regular session closes
Recognizes overMarketPriceInfo in the Naver mobile stock API and, while an
NXT pre/after-market session is running, switches current_price to overPrice
automatically. The portfolio response now carries price_session
(REGULAR/NXT_PRE/NXT_AFTER/CLOSED) and price_as_of metadata.

Previously only closePrice was used, so valuations froze after 15:30 even
while NXT trading was still underway. Prices now carry through naturally.
_select_price_from_response was extracted as a pure function, with 8 unittest
cases guarding against regressions.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-11 19:32:10 +09:00
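A hypothetical sketch of a pure price-selection helper in the spirit of _select_price_from_response; field names beyond overMarketPriceInfo/overPrice/closePrice are assumptions, not the actual Naver payload:

```python
# Illustrative only; the real _select_price_from_response and Naver fields may differ.
def select_price(resp: dict) -> tuple[float, str]:
    """Return (current_price, price_session) from a Naver mobile stock payload."""
    over = resp.get("overMarketPriceInfo") or {}
    status = over.get("overMarketStatus")  # assumed field name
    if status in ("PRE", "AFTER") and over.get("overPrice"):
        session = "NXT_PRE" if status == "PRE" else "NXT_AFTER"
        return float(over["overPrice"]), session
    if resp.get("marketStatus") == "OPEN":  # assumed field name
        return float(resp["closePrice"]), "REGULAR"
    return float(resp["closePrice"]), "CLOSED"
```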
134e628e5e Merge feature/lotto-curator-evolution: Lotto Curator Evolution
16 commits across Phase A-E + H:
- weekly_review table + grade_weekly_review job (daily 03:00 KST)
- review/bulk/briefing 4-tier routers
- curator 4-tier schema + retrospective + N=30
- Telegram curation & prize alerts + lotto_agent on Mondays 09:00 KST
- week-1 operations checklist

Full concept/plan: web-ui/docs/superpowers/{specs,plans}/2026-05-11-*.md
2026-05-11 09:38:31 +09:00
ce3a734e81 docs(lotto): week-1 operations checklist 2026-05-11 09:08:05 +09:00
fb81c51dc8 feat(curator): auto Telegram push after curation + cron moved to 09:00 2026-05-11 08:55:12 +09:00
715e1598ce feat(agent-office): /api/agent-office/notify/lotto-prize webhook 2026-05-11 08:54:19 +09:00
57a4a72ff1 feat(curator): Telegram curation & prize-alert formatters 2026-05-11 08:53:10 +09:00
e14278ec69 feat(curator): 4-tier pipeline serialization + retrospective context + N=30 2026-05-11 08:51:07 +09:00
ff3134b838 feat(curator): build_retrospective + lotto review service proxy 2026-05-11 08:49:58 +09:00
95c5dc4217 feat(curator): SYSTEM_PROMPT retrospective + 4-tier rules 2026-05-11 08:48:06 +09:00
9fb1c37eae feat(curator): 4-tier picks + tier_rationale + narrative.retrospective schema 2026-05-11 08:46:50 +09:00
3bd819b5e2 feat(lotto): briefing API accepts 4-tier picks + tier_rationale 2026-05-11 08:45:21 +09:00
b936233e7c feat(lotto): POST /api/lotto/purchase/bulk — one-click recording from the decision card 2026-05-11 08:42:27 +09:00
4f85496fe5 feat(lotto): review router — latest/history/by-draw 2026-05-11 08:39:01 +09:00
2a2209a86c feat(lotto): register the daily 03:00 KST grading job with APScheduler 2026-05-11 08:37:08 +09:00
30bc627ae7 feat(lotto): grade_weekly_review combined job — curator self-assessment + pattern gaps 2026-05-11 08:33:51 +09:00
d972ea66c3 feat(lotto): grading helpers — match counts, pattern summaries, deltas 2026-05-11 08:29:46 +09:00
66165ebb88 feat(lotto): migrate lotto_briefings.picks to a 4-tier object + tier_rationale column 2026-05-11 08:25:23 +09:00
5621cc7687 feat(lotto): weekly_review table + CRUD helpers 2026-05-11 08:21:44 +09:00
fb54998def fix(deployer): add packs-lab to deploy.sh's whitelists + auto-create media/packs
Root cause of the deployer not auto-rebuilding, restarting, or health-checking
packs-lab on webhook: packs-lab was missing from three of deploy.sh's
whitelists — BUILD_TARGETS / CONTAINER_NAMES / HEALTH_ENDPOINTS. The SERVICES
whitelist (deploy-nas.sh) is separate and only drives rsync; packs-lab was
added there earlier, but build triggering belongs to deploy.sh.

Fix:
- Add packs-lab to BUILD_TARGETS, CONTAINER_NAMES, and HEALTH_ENDPOINTS
- Auto mkdir + chown media/packs (removes the step admin used to do by hand)
- DATA_DIRS is left out since the path differs (media/packs, not data/X)

This push itself still goes through the old deploy.sh; once the new deploy.sh
syncs to RUNTIME, packs-lab is auto-built and health-checked from the next
push onward.
2026-05-11 04:07:02 +09:00
b792cdb8d5 docs(packs-lab): fold in production verification — DSM API path format + explicit DSM_VERIFY_SSL
Findings from the first production call on 5/11 folded into the spec/CLAUDE.md:

1. DSM API path format difference: with regular-user credentials, Synology DSM
   only accepts /<shared_folder>/... paths and rejects /volume1/... (error 408).
   The production example value for PACK_HOST_DIR changed to /docker/webpage/media/packs.

2. DSM_VERIFY_SSL env documented: environment variable to disable SSL
   verification in LAN-IP + self-signed-cert setups. .env.example refreshed to 7+3 paths.

3. DSM user-permission guide: both File Station and Sharing must be ON.

4. The NAS directory preparation commands now spell out the difference between
   host-OS paths and DSM API paths.

Production verification: HTTP 200 + DSM share URL (gofile.me/...) issued.
2026-05-11 04:02:36 +09:00
1d4bff31c4 feat(packs-lab): DSM_VERIFY_SSL env — handle LAN IP + self-signed cert setups
On the production NAS, using a LAN IP like DSM_HOST=https://192.168.x.x:5001
makes SSL verification fail because DSM's self-signed certificate doesn't
match the IP address (SSL: CERTIFICATE_VERIFY_FAILED — IP address mismatch).

Since the traffic stays inside the LAN, verify=False is acceptable. Toggled
via an environment variable:
- DSM_VERIFY_SSL=true (default) — domain + proper cert
- DSM_VERIFY_SSL=false — LAN IP + self-signed cert

dsm_client.py reads the env var and passes it to httpx.AsyncClient(verify=...).
The new env is documented in docker-compose.yml + .env.example + CLAUDE.md.
Regressions: 25/25 passing.
2026-05-11 03:31:15 +09:00
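The toggle reduces to one line when constructing the client — a minimal sketch, assuming the usual string-to-bool env convention:

```python
# Minimal sketch of the DSM_VERIFY_SSL toggle described in this commit.
import os

import httpx

verify = os.getenv("DSM_VERIFY_SSL", "true").strip().lower() != "false"
client = httpx.AsyncClient(base_url=os.environ["DSM_HOST"], verify=verify)
```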
e31bf549a8 docs(spec/plan): restore packs-lab spec/plan + reflect PACK_HOST_DIR / flat layout / SERVICES whitelist
Restores the two files swept up in dc92c3d's "remove completed specs/plans"
cleanup and folds in the operational changes applied since, recovering
doc-to-implementation traceability:

- PACK_HOST_DIR env var introduced (NAS host absolute path, exposed to DSM and Supabase)
- Flat storage layout (PACK_BASE_DIR/{filename}; tier directory branching removed — tier lives in filename conventions)
- packs-lab added to the SERVICES whitelist in scripts/deploy-nas.sh (without it the container never appears on the NAS)
- .env.example env vars: 6+3 paths (DSM 3 / HMAC / Supabase 2 / TTL / DATA_PATH / BASE_DIR / HOST_DIR)

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-11 03:03:00 +09:00
aec0fdcd31 fix(packs-lab): drop tier directories (flat layout) + add packs-lab to deployer SERVICES
Problem 1: packs-lab was missing from the SERVICES whitelist in deploy-nas.sh,
so sources never synced to the NAS runtime directory, docker compose couldn't
build packs-lab, and the container wasn't running.

Problem 2: routes.py stored files in a PACK_BASE_DIR/{tier}/{filename} tree;
changed to a flat layout (PACK_BASE_DIR/{filename}) per user request. Tier is
now managed by the admin through filename conventions (prefixes etc.).

- scripts/deploy-nas.sh: packs-lab added to SERVICES (10 → 11 entries)
- routes.py: tier directories removed (target = PACK_BASE_DIR / filename, host_path = PACK_HOST_DIR / filename)
- tests: tier-branching call sites updated to the flat layout (size_mismatch / host_path_check)
- 25/25 passing

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-11 02:54:25 +09:00
f1f1dc98a6 fix(packs-lab): introduce PACK_HOST_DIR — sign-link now hands DSM the NAS host path
Before: upload stored the container path (/app/data/packs/...) in Supabase →
sign-link passed that path to DSM → DSM resolves against NAS host absolute
paths (/volume1/.../media/packs/...) and could not find the file.

Fix:
- routes.py: new PACK_HOST_DIR (env, fallback=PACK_BASE_DIR)
  - on upload, host_path = PACK_HOST_DIR/{tier}/{filename} is INSERTed into Supabase
  - on sign-link, paths are validated against PACK_HOST_DIR
- docker-compose: inject the PACK_HOST_DIR env (default=PACK_DATA_PATH)
- .env.example + CLAUDE.md: the distinct roles of the env vars spelled out
- tests: new host-path storage check (test_upload_stores_host_path_not_container_path)
- 25/25 passing

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-11 02:47:26 +09:00
8b5cb2c16a feat(music-lab): add 7 genres to the random pool + GET /api/music/genres 2026-05-10 23:53:35 +09:00
77b8d05ad7 feat(music-lab): batch music-generation endpoint + automatic compile/video pipeline orchestrator
- batch_generator.py: sequential Suno generation of N tracks per genre → auto compile → auto video pipeline
- main.py: add POST/GET /api/music/generate-batch, GET /api/music/generate-batch/{id}
- tests: 10 endpoint tests (validation, filtering, 404)
2026-05-10 18:57:23 +09:00
f0cb06268e feat(music-lab): music_batch_jobs table + per-genre random pools 2026-05-10 18:52:07 +09:00
f074cbec2d docs: batch music generation + automatic video pipeline spec + plan 2026-05-10 18:49:16 +09:00
84548a326e feat(music-lab): generate 16:9 landscape covers + professionalize metadata
- cover.py: DALL·E 3 → 1792x1024, gpt-image-1 → 1536x1024 (auto per model);
  the prompt now asks for a 'cinematic landscape composition'. Overridable via the OPENAI_IMAGE_SIZE env.
- metadata.py: prompt rebuilt as a list+join pattern (fixes adjacent-string/+ clashes)
  + a lofi-channel copywriter persona. The description mandates a 5-7 section structure:
  hook / mood / usage scenarios / chapters / watch suggestions / call to action / hashtags.
  Explicit mix-vs-single branching + a tags guide + the output JSON schema.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-10 18:38:53 +09:00
5f5010ded4 fix(music-lab): scale the video encoder timeout with duration (supports long mix encodes) 2026-05-10 17:10:27 +09:00
755dea63f4 fix(music-lab): strip the cache-buster query + feed background_keyword into the DALL·E prompt
1. video.py _container_to_nas and orchestrator.py _local_path now strip the ?query before path conversion
   — fixes commit 20c5268's cache-buster ?v=... being forwarded verbatim into the Windows path and failing input_validation
2. cover.py _generate_with_dalle includes background_keyword in the prompt
   — when the user enters a 'background keyword' in PipelineStartModal, the cover starts out with the intended mood
2026-05-10 16:12:21 +09:00
20c5268def fix(music-lab): cache-buster on pipeline media URLs — bypasses browser/Telegram caches on regen 2026-05-10 15:50:42 +09:00
dc3f9cb6a9 fix(music-lab): recognize compile job status='done' as ready too (production convention) 2026-05-10 15:28:08 +09:00
262366bc1e test(music-lab): compile_job-based happy-path integration test 2026-05-09 13:27:47 +09:00
5fc914cd8f feat(music-lab): compile_job_id + visual_style/background options on POST /pipeline 2026-05-09 13:20:38 +09:00
8f859274c4 feat(music-lab): video.py — pass style/background_mode/tracks to Windows + orchestrator parameter wiring 2026-05-09 13:17:49 +09:00
a347da075c feat(music-lab): metadata tracks option + automatic YouTube chapter formatting 2026-05-09 13:15:30 +09:00
e754fb30f5 feat(music-lab): background.py — Pexels Video API + orchestrator video_loop branch 2026-05-09 13:13:42 +09:00
f0c0c18beb feat(music-lab): cover.py Pexels image-search branch (image_source=pexels) 2026-05-09 13:10:49 +09:00
d11023decb feat(music-lab): orchestrator _resolve_input — unified track/compile_job input 2026-05-09 13:08:53 +09:00
70a256bbe4 feat(music-lab): add 4 video_pipelines columns + compile_jobs JOIN
- add an _add_column_if_missing helper (idempotent ALTER TABLE)
- add compile_job_id, visual_style, background_mode, background_keyword columns to video_pipelines
- make track_id nullable (supports the compile_job_id input mode)
- create_pipeline validates compile_job_id XOR track_id
- get_pipeline / list_pipelines LEFT JOIN compile_jobs — exposes compile_title

Task 1 of 17: Essential Mix pipeline DB migration

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-09 13:04:23 +09:00
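A hedged sketch of an idempotent ALTER TABLE helper in the spirit of _add_column_if_missing, assuming SQLite's PRAGMA table_info; the real helper's signature may differ:

```python
# Illustrative idempotent ALTER TABLE for SQLite; safe to run on every startup.
import sqlite3

def add_column_if_missing(conn: sqlite3.Connection, table: str, column: str, decl: str) -> None:
    cols = {row[1] for row in conn.execute(f"PRAGMA table_info({table})")}  # row[1] = column name
    if column not in cols:
        conn.execute(f"ALTER TABLE {table} ADD COLUMN {column} {decl}")

# e.g. add_column_if_missing(conn, "video_pipelines", "compile_job_id", "INTEGER")
```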
ebbfa6299a docs(plan): Essential Mix pipeline — 17-task implementation plan
DB migration → orchestrator _resolve_input → cover Pexels branch →
new background.py → metadata tracks → video.py parameter expansion →
main.py compile_job_id → Windows essential filter (showfreqs+ring+drawtext) →
server.py schema → integration tests → deploy → frontend (api.js, CompileTab,
PipelineStartModal, PipelineCard+DetailModal, SetupTab) → frontend push → E2E.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-09 12:44:02 +09:00
d4fb485931 docs(spec): Essential Mix pipeline design
1-hour+ mix videos (compile → pipeline) + an 'essential' visual style (background photo + centered radial bars + track-title captions) + an artifact preview modal on the progress tab.

Key decisions:
- Input: track_id XOR compile_job_id
- Visuals: single (existing) / essential (new, default)
- Background: static (photo) / video_loop (Pexels footage)
- Background source: AI by default, Pexels as fallback
- Mix metadata: track list auto-formatted as chapters (YouTube picks them up automatically)
- UX: PipelineCard mini preview + detail modal on click

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-09 11:55:24 +09:00
b6dffb4d42 chore(infra): add GPU video encoder envs (WINDOWS_VIDEO_ENCODER_URL, NAS_VIDEOS_ROOT, NAS_MUSIC_ROOT) 2026-05-09 02:03:26 +09:00
240bd38541 feat(music-lab): offload video encoding to the Windows GPU server
- pipeline/video.py rewritten: subprocess.run removed; calls the Windows /encode_video endpoint via httpx
- if the Windows server is down, raise VideoGenerationError immediately (no NAS-local fallback — a deliberate decision)
- /app/data/* → /volume1/docker/webpage/data/* path translation (_container_to_nas)
- tests swapped to respx mocks (6)
2026-05-09 02:01:34 +09:00
bb0b0dff25 docs: GPU video-encoding offload spec + plan
The NAS's low-end CPU (J4025) keeps hitting the 5-minute ffmpeg timeout →
offload to the Windows PC's RTX 5070 Ti NVENC. Adds an /encode_video endpoint
to the same music_ai server; the NAS fails immediately when it's down (no
local fallback). LAN-only, unauthenticated.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-09 01:52:34 +09:00
47e5315487 fix(music-lab): faster ffmpeg encoding + extended timeout (for low-end CPUs)
- preset fast → ultrafast (5-10x speedup; finishes within 5 minutes even on a low-end CPU like the J4025)
- add tune stillimage (optimal for a static background + waveform overlay)
- threads 0 — use every CPU core
- VIDEO_TIMEOUT_S 300 → 600 (safety margin)
- catch subprocess.TimeoutExpired and surface a clear error message
2026-05-09 01:01:03 +09:00
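For reference, those flags map onto an invocation along these lines — a sketch with placeholder inputs, not the actual video.py command:

```python
# Sketch of the encode settings above; the real command has more inputs/filters.
import subprocess

cmd = [
    "ffmpeg", "-y",
    "-loop", "1", "-i", "cover.png",  # static background (placeholder input)
    "-i", "track.mp3",
    "-c:v", "libx264",
    "-preset", "ultrafast",           # was "fast": 5-10x speedup on weak CPUs
    "-tune", "stillimage",            # optimized for a near-static picture
    "-threads", "0",                  # let ffmpeg use every core
    "-shortest", "out.mp4",
]
try:
    subprocess.run(cmd, check=True, capture_output=True, timeout=600)  # was 300
except subprocess.TimeoutExpired:
    raise RuntimeError("ffmpeg timed out after 600s on this host") from None
```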
97b15cb985 fix(pipeline): premature state update + re-notify after reject regeneration
Bug 1: /feedback approve set the state to next_pending before the bg task even
started → polling sent a notification with an empty video_url. Setting the
state is now left entirely to the bg task's run_step — the double update is gone.

Bug 2: after a reject, a step regenerated back into the same *_pending state
was suppressed by dedupe, so no notification went out. The dedupe key now
includes feedback_count_per_step[step] — the count rises with each
regeneration, so the key changes and re-notification fires.
2026-05-08 23:08:24 +09:00
6d416aab78 fix(music-lab): run sync pipeline work via asyncio.to_thread — fixes event-loop blocking
video.generate / thumb.generate / youtube.upload_video are sync functions that drive an ffmpeg
subprocess (up to 5 min) and google-api-python-client (up to 10 min). Calling them directly from
the async run_step blocked the event loop: follow-up requests timed out with 504s and Telegram
polling dropped.

Wrapped in asyncio.to_thread so they run on the thread pool — the event loop stays free.
2026-05-08 22:57:33 +09:00
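The fix pattern in miniature, with a stand-in for the blocking call:

```python
import asyncio
import time

def generate(pipeline: dict) -> str:
    """Stand-in for the sync video.generate (ffmpeg subprocess, up to 5 min)."""
    time.sleep(1)  # simulates the blocking work
    return "/app/data/videos/out.mp4"

async def run_step(pipeline: dict) -> str:
    # Calling generate(pipeline) directly would freeze the event loop;
    # asyncio.to_thread pushes it onto the default thread pool instead.
    return await asyncio.to_thread(generate, pipeline)

print(asyncio.run(run_step({})))
```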
2c13e7cc85 fix(music-lab): pipeline audio paths + ffmpeg error visibility
- orchestrator._run_video: prefer track.file_path (no audio_url conversion needed)
- _local_path: /media/music/ → /app/data/ (the mount is /app/data directly; there is no music subdirectory)
- video.py/thumb.py: truncate stderr to [-800:]/[-500:] — so the real error shows
2026-05-08 22:50:13 +09:00
113 changed files with 14350 additions and 315 deletions

View File

@@ -99,6 +99,8 @@ YOUTUBE_DATA_API_KEY=
DSM_HOST=https://gahusb.synology.me:5001
DSM_USER=
DSM_PASS=
# When DSM is reached via a LAN IP, the self-signed cert doesn't match the IP and verification fails. Set false in that case (acceptable — traffic stays on the LAN). Keep true with a domain + proper cert.
DSM_VERIFY_SSL=true
# HMAC secret between the Vercel SaaS and this backend (identical value on both sides)
BACKEND_HMAC_SECRET=
@@ -115,3 +117,7 @@ PACK_DATA_PATH=./data/packs
# PACK_BASE_DIR inside the container (used by routes.py when saving files; must match the container side of the docker-compose volume)
PACK_BASE_DIR=/app/data/packs
# NAS host absolute path exposed to DSM & Supabase (points at the same directory as PACK_DATA_PATH, seen from the host).
# On the production NAS this must be an absolute path like /volume1/docker/webpage/media/packs. Falls back to PACK_DATA_PATH when unset (local development only).
PACK_HOST_DIR=/volume1/docker/webpage/media/packs

View File

@@ -642,15 +642,18 @@ docker compose up -d
- Talks to the Vercel SaaS via HMAC auth; user authentication is handled by Vercel against Supabase (this service has no external auth of its own)
- DB: external Supabase `pack_files` table (DDL: `packs-lab/supabase/pack_files.sql`)
- File layout: `app/main.py`, `app/auth.py`, `app/dsm_client.py`, `app/routes.py`, `app/models.py`
- Container storage path: `PACK_BASE_DIR` env (default `/app/data/packs`). Must match the docker-compose volume mount.
- Three-way path split: `PACK_DATA_PATH` (host OS path, left side of the docker volume) → `PACK_BASE_DIR` (inside the container, where uploads are written) → `PACK_HOST_DIR` (DSM API path, stored in Supabase). If `PACK_HOST_DIR` is unset on the production NAS, sign-link hands the container path to DSM, which then cannot find the file.
- ⚠️ **DSM API path format**: with regular-user credentials the Synology DSM API only accepts `/<shared_folder>/...` paths and rejects `/volume1/...` absolute paths (error 408). The production NAS must set `PACK_HOST_DIR=/docker/webpage/media/packs` (shared-folder view). Only admin users may use `/volume1/...`, which is discouraged for security reasons.
**Environment variables**
- `DSM_HOST` / `DSM_USER` / `DSM_PASS`: Synology DSM 7.x credentials (for issuing share links)
- `DSM_VERIFY_SSL`: SSL verification (default `true`). Set `false` on IP mismatch in LAN-IP + self-signed-cert setups (acceptable for LAN-internal traffic)
- `BACKEND_HMAC_SECRET`: secret shared with the Vercel SaaS (HMAC SHA256)
- `SUPABASE_URL` / `SUPABASE_SERVICE_KEY`: access to the Supabase pack_files table (service_role, bypasses RLS)
- `UPLOAD_TOKEN_TTL_SEC`: admin upload token TTL (default 1800s = 30 min)
- `PACK_BASE_DIR`: storage path inside the container (default `/app/data/packs`)
- `PACK_DATA_PATH`: host mount path (local `./data/packs`, NAS `/volume1/docker/webpage/media/packs`)
- `PACK_HOST_DIR`: path for the DSM API. **Production NAS: `/docker/webpage/media/packs` (shared-folder view)**. Falls back to `PACK_BASE_DIR` when unset (only safe where DSM is never called)
- `PACK_DATA_PATH`: host-side OS path of the docker-compose volume mount (local `./data/packs`, NAS `/volume1/docker/webpage/media/packs`)
**HMAC auth pattern**
- Vercel → backend requests: `X-Timestamp` (UNIX seconds) + `X-Signature` (HMAC_SHA256(timestamp + "." + body, secret))
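A caller-side sketch of that signature construction (the same shape the test helper in the plan below uses):

```python
# Sketch of Vercel→backend request signing per the pattern above.
import hashlib
import hmac
import time

def sign(body: bytes, secret: str) -> dict:
    ts = str(int(time.time()))
    sig = hmac.new(secret.encode(), f"{ts}.".encode() + body, hashlib.sha256).hexdigest()
    return {"X-Timestamp": ts, "X-Signature": sig}
```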
@@ -663,10 +666,21 @@ docker compose up -d
|--------|------|------|
| POST | `/api/packs/sign-link` | Vercel HMAC → issues a 4-hour download URL via DSM Sharing.create |
| POST | `/api/packs/admin/mint-token` | Vercel HMAC → issues a one-time upload token (default 30-min TTL) |
| POST | `/api/packs/upload` | Bearer token → multipart 5GB save + Supabase INSERT |
| POST | `/api/packs/upload` | Bearer token (single-shot) → multipart 5GB save + Supabase INSERT |
| POST | `/api/packs/upload/init` | Bearer token → initialize a chunked-upload session (`session_id = jti`, returns `chunk_max_size`). Only init consumes the jti |
| PUT | `/api/packs/upload/{session_id}/chunk?offset=N` | same Bearer token → append to the partial file (409 + `X-Current-Offset` header on offset mismatch) |
| GET | `/api/packs/upload/{session_id}/status` | same Bearer token → read `{written, expected_size}` (for resuming) |
| POST | `/api/packs/upload/{session_id}/complete` | same Bearer token → rename the partial file + Supabase INSERT |
| DELETE | `/api/packs/upload/{session_id}` | same Bearer token → abort the session + clean up partial files |
| GET | `/api/packs/list` | Vercel HMAC → active pack_files listing (deleted_at IS NULL) |
| DELETE | `/api/packs/{file_id}` | Vercel HMAC → soft delete (the DSM share simply expires) |
**Chunked upload flow (5GB+ reliability)**
- The same mint-token is reused as the Bearer token across init/chunk/status/complete/abort (the jti is consumed only on init)
- Session state: `PACK_BASE_DIR/.uploads/{jti}/meta.json + data.part` inside the container
- Chunk retries: the client finds the resume point via the PUT response header `X-Current-Offset` or `GET /status`
- Env var `PACK_CHUNK_MAX_SIZE` (default 64MB) — too large strains nginx buffering, too small wastes round trips
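A hedged client-side sketch of the resume loop against the endpoints above (error handling trimmed; endpoint shapes as documented):

```python
# Resuming chunk-upload client; illustrative only.
import httpx

def upload_resumable(base: str, token: str, sid: str, data: bytes, chunk: int = 8 * 1024 * 1024) -> None:
    headers = {"Authorization": f"Bearer {token}"}
    with httpx.Client(base_url=base, headers=headers) as c:
        offset = c.get(f"/api/packs/upload/{sid}/status").json()["written"]
        while offset < len(data):
            r = c.put(
                f"/api/packs/upload/{sid}/chunk",
                params={"offset": offset},
                content=data[offset:offset + chunk],
            )
            if r.status_code == 409:
                offset = int(r.headers["X-Current-Offset"])  # resync and retry
                continue
            r.raise_for_status()
            offset += min(chunk, len(data) - offset)
        c.post(f"/api/packs/upload/{sid}/complete").raise_for_status()
```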
### deployer (deployer/)
- Webhook verification: `X-Gitea-Signature` (HMAC SHA256, using `compare_digest`)
- Secret managed via the `WEBHOOK_SECRET` env var

View File

@@ -27,11 +27,21 @@ class LottoAgent(BaseAgent):
await self.transition("working", "후보 수집 및 AI 큐레이션 중...", task_id)
try:
result = await curate_weekly(source=source)
update_task_status(task_id, "succeeded", result_data=result)
update_task_status(task_id, "succeeded", result_data={
k: v for k, v in result.items() if k != "payload"
})
await self.transition("reporting", f"#{result['draw_no']} 브리핑 저장 완료")
add_log(self.agent_id, f"큐레이션 완료: #{result['draw_no']} conf={result['confidence']}", task_id=task_id)
# Telegram headline push (curation still closes as a success even if this fails)
try:
from ..notifiers.telegram_lotto import send_curator_briefing
await send_curator_briefing(result["payload"])
except Exception as e:
add_log(self.agent_id, f"텔레그램 알림 실패: {e}", level="warning", task_id=task_id)
await self.transition("idle", "대기 중")
return {"ok": True, **result}
return {"ok": True, **{k: v for k, v in result.items() if k != "payload"}}
except CuratorError as e:
update_task_status(task_id, "failed", result_data={"error": str(e)})
add_log(self.agent_id, f"큐레이션 실패: {e}", level="error", task_id=task_id)

View File

@@ -119,7 +119,125 @@ class StockAgent(BaseAgent):
update_task_status(task_id, "failed", {"error": str(e)})
await self.transition("idle", f"오류: {e}")
async def on_screener_schedule(self) -> None:
"""KRX 강세주 스크리너 자동 잡 (평일 16:30 KST).
흐름:
1) snapshot/refresh — 일봉 갱신 (실패해도 진행, 경고 로그)
2) screener/run mode='auto' — 실행 + 결과 영구화 + telegram_payload 응답
3) status=='skipped_holiday' → 종료 (텔레그램 미발신)
4) status=='success' → telegram_payload.text 를 parse_mode 그대로 전송
5) 예외/실패 → 운영자에게 별도 텔레그램 알림 (HTML)
"""
if self.state not in ("idle", "break"):
return
task_id = create_task(self.agent_id, "screener_run", {"mode": "auto"})
await self.transition("working", "스크리너 스냅샷 갱신 중...", task_id)
try:
# 1) refresh the snapshot — on failure, proceed with the existing daily bars
try:
snap = await service_proxy.refresh_screener_snapshot()
add_log(
self.agent_id,
f"snapshot refreshed: status={snap.get('status', '?')}",
"info", task_id,
)
except Exception as e:
add_log(
self.agent_id,
f"스냅샷 갱신 실패 (기존 데이터로 진행): {e}",
"warning", task_id,
)
await self.transition("working", "스크리너 실행 중...")
# 2) run the screener
body = await service_proxy.run_stock_screener(mode="auto")
status = body.get("status")
asof = body.get("asof")
# 3) holiday — stop
if status == "skipped_holiday":
update_task_status(task_id, "succeeded", {
"status": status,
"asof": asof,
"telegram_sent": False,
})
add_log(self.agent_id, f"스크리너 건너뜀 (휴일): {asof}", "info", task_id)
await self.transition("idle", "휴일 — 스크리너 건너뜀")
return
# 4) success → send to Telegram
if status == "success":
payload = body.get("telegram_payload") or {}
text = payload.get("text") or ""
parse_mode = payload.get("parse_mode", "MarkdownV2")
if not text:
raise RuntimeError("telegram_payload.text 누락")
await self.transition("reporting", "스크리너 결과 전송 중...")
from ..telegram.messaging import send_raw
tg = await send_raw(text, parse_mode=parse_mode)
update_task_status(task_id, "succeeded", {
"status": status,
"asof": asof,
"run_id": body.get("run_id"),
"survivors_count": body.get("survivors_count"),
"telegram_sent": tg.get("ok", False),
"telegram_message_id": tg.get("message_id"),
})
if not tg.get("ok"):
desc = tg.get("description") or "unknown"
code = tg.get("error_code")
add_log(
self.agent_id,
f"Screener telegram send failed: [{code}] {desc}",
"warning", task_id,
)
if self._ws_manager:
await self._ws_manager.send_notification(
self.agent_id, "telegram_failed", task_id,
"스크리너 텔레그램 전송 실패",
)
await self.transition("idle", "스크리너 완료")
return
# 5) any other status — treated as failed
raise RuntimeError(f"unexpected screener status: {status}")
except Exception as e:
err_msg = str(e)
add_log(self.agent_id, f"Screener job failed: {err_msg}", "error", task_id)
update_task_status(task_id, "failed", {"error": err_msg})
# operator alert — uses the default HTML parse_mode
try:
from ..telegram.messaging import send_raw
await send_raw(
f"⚠️ <b>KRX 스크리너 실패</b>\n"
f"<code>{html.escape(err_msg)[:500]}</code>"
)
except Exception as notify_err:
add_log(
self.agent_id,
f"operator notify failed: {notify_err}",
"warning", task_id,
)
await self.transition("idle", f"스크리너 오류: {err_msg[:80]}")
async def on_command(self, command: str, params: dict) -> dict:
if command == "run_screener":
await self.on_screener_schedule()
return {"ok": True, "message": "스크리너 실행 트리거 완료"}
if command == "test_telegram":
from ..telegram import send_agent_message
result = await send_agent_message(

View File

@@ -25,7 +25,7 @@ class YoutubePublisherAgent(BaseAgent):
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self._notified_state_per_pipeline: dict[int, str] = {}
self._notified_state_per_pipeline: dict[int, tuple] = {}
async def poll_state_changes(self) -> None:
"""주기적으로 호출되어 *_pending 신규 진입 시 텔레그램 발송."""
@@ -40,9 +40,13 @@ class YoutubePublisherAgent(BaseAgent):
pid = p.get("id")
if pid is None:
continue
if state in _STEP_TITLES and self._notified_state_per_pipeline.get(pid) != state:
await self._notify_step(p)
self._notified_state_per_pipeline[pid] = state
if state in _STEP_TITLES:
_, step = _STEP_TITLES[state]
fb_count = (p.get("feedback_count_per_step") or {}).get(step, 0)
key = (state, fb_count)
if self._notified_state_per_pipeline.get(pid) != key:
await self._notify_step(p)
self._notified_state_per_pipeline[pid] = key
async def _notify_step(self, pipeline: dict) -> None:
state = pipeline["state"]

View File

@@ -9,6 +9,7 @@ from ..config import ANTHROPIC_API_KEY, LOTTO_CURATOR_MODEL
from .. import service_proxy
from .prompt import SYSTEM_PROMPT, build_user_message
from .schema import validate_response
from .retrospective import build_retrospective
API_URL = "https://api.anthropic.com/v1/messages"
@@ -36,12 +37,12 @@ async def _call_claude(user_text: str, feedback: str = "") -> tuple[dict, dict]:
user_text = f"이전 응답이 다음 이유로 거절됨: {feedback}\n올바른 스키마로 다시 응답.\n\n{user_text}"
payload = {
"model": LOTTO_CURATOR_MODEL,
"max_tokens": 4096,
"max_tokens": 8192, # 4계층 20세트 + narrative + retrospective 수용
"system": system_blocks,
"messages": [{"role": "user", "content": [{"type": "text", "text": user_text}]}],
}
started = time.monotonic()
async with httpx.AsyncClient(timeout=120) as client:
async with httpx.AsyncClient(timeout=180) as client: # larger responses → more headroom
r = await client.post(API_URL, headers=headers, json=payload)
r.raise_for_status()
resp = r.json()
@@ -68,16 +69,19 @@ async def _call_claude(user_text: str, feedback: str = "") -> tuple[dict, dict]:
async def curate_weekly(source: str = "auto") -> Dict[str, Any]:
cand_resp = await service_proxy.lotto_candidates(n=20)
cand_resp = await service_proxy.lotto_candidates(n=30) # ← expanded to 30
draw_no = cand_resp["draw_no"]
candidates = cand_resp["candidates"]
context = await service_proxy.lotto_context()
retrospective = await build_retrospective(draw_no)
user_text = build_user_message(draw_no, candidates, {
"hot_numbers": context.get("hot_numbers", []),
"cold_numbers": context.get("cold_numbers", []),
"last_draw_summary": context.get("last_draw_summary", ""),
"my_recent_performance": context.get("my_recent_performance", []),
"retrospective": retrospective,
})
candidate_numbers = [c["numbers"] for c in candidates]
@@ -101,8 +105,14 @@ async def curate_weekly(source: str = "auto") -> Dict[str, Any]:
payload = {
"draw_no": draw_no,
"picks": [p.model_dump() for p in validated.picks],
"picks": {
"core": [p.model_dump() for p in validated.core_picks],
"bonus": [p.model_dump() for p in validated.bonus_picks],
"extended": [p.model_dump() for p in validated.extended_picks],
"pool": [p.model_dump() for p in validated.pool_picks],
},
"narrative": validated.narrative.model_dump(),
"tier_rationale": validated.tier_rationale.model_dump(),
"confidence": validated.confidence,
"model": LOTTO_CURATOR_MODEL,
"tokens_input": usage_total["input"],
@@ -118,4 +128,5 @@ async def curate_weekly(source: str = "auto") -> Dict[str, Any]:
"draw_no": draw_no,
"confidence": validated.confidence,
"tokens": {"input": usage_total["input"], "output": usage_total["output"]},
"payload": payload, # 텔레그램 알림용
}

View File

@@ -2,31 +2,49 @@
import json
SYSTEM_PROMPT = """당신은 로또 번호 큐레이터입니다. 주어진 후보 20세트 중 5세트를 다음 규칙으로 선별합니다.
SYSTEM_PROMPT = """당신은 로또 번호 큐레이터입니다.
주어진 후보 30세트 중 4계층(코어 5, 보너스 5, 확장 5, 풀 5) 총 20세트를 선별합니다.
선별 규칙:
- 5세트의 리스크 분포는 안정 2 · 균형 2 · 공격 1 을 권장(유연 ±1).
- 홀짝 비율, 저/고 구간, 연속번호 포함 여부가 세트끼리 겹치지 않도록 다양성을 확보.
- hot_number_count=0 이고 cold_number_count=0 인 '중립형' 세트를 최소 1개 포함.
- 후보에 없는 번호 조합은 절대 사용 금지. numbers 필드는 반드시 candidates 중 하나와 정확히 일치해야 함.
- 각 세트 reason은 한국어 40자 이내 한 줄. 해당 세트의 features 값과 context 값만 근거로.
계층별 큐레이션 규칙:
- core_picks (5): 안정 2 / 균형 2 / 공격 1. 그 주 주축. 홀짝·저고·구간 분포가 세트끼리 겹치지 않게.
- bonus_picks (5): 코어 분배의 공백을 메우는 5세트. 코어가 공격 1뿐이면 보너스에 공격 +2 식.
- extended_picks (5): 코어·보너스에 없는 시각 — 합계 극단(80↓ / 180↑) / 콜드 4주 누적 / 4주 미등장 번호 노출.
- pool_picks (5): 이번 주 한 번도 누르지 않은 패턴 — 연속 3개 / 동일 끝자리 / 5수 균등(각 끝자리 5개씩) 등.
- tier_rationale 의 3개 키(bonus·extended·pool)에 각각 30자 이내 한국어 사유.
공통 규칙:
- 후보에 없는 번호 조합은 절대 사용 금지. 모든 픽은 candidates 중 하나와 정확히 일치해야 함.
- 4계층 사이에 중복 픽 금지 (총 20세트는 모두 서로 달라야 함).
- 각 픽 reason 은 한국어 40자 이내. 해당 픽의 features 와 context 만 근거로.
- 중립형(hot_number_count=0 이고 cold_number_count=0) 세트를 코어에 최소 1개 포함.
회고 규칙:
- context.retrospective 가 있으면 narrative.retrospective 에 한 줄(60자 이내)로 작성.
- 회고는 큐레이터 자기 결과(curator_avg, best_tier) + 사용자 결과(user_avg, pattern_delta) 둘 다 짚을 것.
- 이번 주 코어 분배는 회고에 근거해 조정. 조정 사유는 narrative.headline 에 한 줄로.
예: "지난 주 너 저번호 편향 → 보너스 고번호 보강"
- context.retrospective 가 없으면 narrative.retrospective 는 빈 문자열.
narrative 규칙:
- headline: 한 줄, 이번 주 추첨 전망 요약.
- summary_3lines: 정확히 3개 항목의 배열.
- hot_cold_comment: hot/cold 번호에 대한 한 줄 논평.
- warnings: 특별한 주의사항 없으면 빈 문자열.
- headline: 한 줄, 이번 주 추첨 전망 + 조정 사유.
- summary_3lines: 정확히 3개 항목.
- hot_cold_comment: hot/cold 번호 한 줄 논평.
- warnings: 주의사항 없으면 빈 문자열.
- retrospective: 회고 한 줄 또는 빈 문자열.
출력은 반드시 JSON 하나, 그 외 어떤 텍스트도 금지. 스키마:
{
"picks": [
{"numbers":[int,int,int,int,int,int], "risk_tag":"안정"|"균형"|"공격", "reason": str}
],
"core_picks": [{"numbers":[...], "risk_tag":"안정"|"균형"|"공격", "reason": str}, ...5개],
"bonus_picks": [...5개],
"extended_picks": [...5개],
"pool_picks": [...5개],
"tier_rationale": {"bonus": str, "extended": str, "pool": str},
"narrative": {
"headline": str,
"summary_3lines": [str, str, str],
"hot_cold_comment": str,
"warnings": str
"warnings": str,
"retrospective": str
},
"confidence": int (0~100)
}
@@ -36,11 +54,11 @@ narrative 규칙:
def build_user_message(draw_no: int, candidates: list, context: dict) -> str:
payload = {
"draw_no": draw_no,
"context": context,
"context": context, # hot_numbers, cold_numbers, last_draw_summary, my_recent_performance, retrospective
"candidates": candidates,
}
return (
f"이번 회차: {draw_no}\n"
f"아래 데이터로 5세트를 큐레이션하고 위 스키마로만 응답하세요.\n\n"
f"아래 데이터로 4계층 20세트를 큐레이션하고 위 스키마로만 응답하세요.\n\n"
f"```json\n{json.dumps(payload, ensure_ascii=False)}\n```"
)

View File

@@ -0,0 +1,50 @@
"""큐레이션 직전 호출 — review 1건 + 추세 3건 → 컨텍스트 dict."""
import json
from typing import Optional, Dict, Any
from .. import service_proxy
def _detect_bias(reviews: list) -> str:
"""3주↑ 같은 방향 패턴 편향이 유지되면 한 줄로."""
deltas = [r.get("pattern_delta") or "" for r in reviews if r.get("pattern_delta")]
if len(deltas) < 2:
return ""
# simple heuristic — the same keyword ("저번호" etc.) appearing 2+ times counts as a persistent bias
keywords = ["저번호", "고번호", "합계", "홀짝"]
persistent = []
for kw in keywords:
cnt = sum(1 for d in deltas if kw in d)
if cnt >= max(2, len(deltas) - 1):
persistent.append(kw)
return " · ".join(persistent)
async def build_retrospective(target_draw_no: int) -> Optional[Dict[str, Any]]:
"""target_draw_no(이번 주) 직전 회차의 review + 그 앞 3회 추세."""
last = await service_proxy.lotto_review_by_draw(target_draw_no - 1)
if not last:
return None
history = await service_proxy.lotto_reviews_history(limit=4)
# history is sorted desc → split out last and the 3 entries before it
others = [r for r in history if r["draw_no"] < target_draw_no - 1][:3]
series = [last] + others
cur_avgs = [r["curator_avg_match"] for r in series if r.get("curator_avg_match") is not None]
usr_avgs = [r["user_avg_match"] for r in series if r.get("user_avg_match") is not None]
return {
"last_draw": {
"draw_no": last["draw_no"],
"curator_avg": last.get("curator_avg_match"),
"curator_best_tier": last.get("curator_best_tier"),
"user_avg": last.get("user_avg_match"),
"user_5plus": last.get("user_5plus_prizes"),
"pattern_delta": last.get("pattern_delta") or "",
},
"trend_4w": {
"curator_avg_4w": round(sum(cur_avgs) / len(cur_avgs), 2) if cur_avgs else None,
"user_avg_4w": round(sum(usr_avgs) / len(usr_avgs), 2) if usr_avgs else None,
"user_persistent_bias": _detect_bias(series),
},
}

View File

@@ -17,25 +17,42 @@ class Pick(BaseModel):
return sorted(v)
class TierRationale(BaseModel):
bonus: str = Field(max_length=40)
extended: str = Field(max_length=40)
pool: str = Field(max_length=40)
class Narrative(BaseModel):
headline: str
summary_3lines: List[str] = Field(min_length=3, max_length=3)
hot_cold_comment: str = ""
warnings: str = ""
retrospective: str = Field(default="", max_length=80)
class CuratorOutput(BaseModel):
picks: List[Pick]
core_picks: List[Pick] = Field(min_length=5, max_length=5)
bonus_picks: List[Pick] = Field(min_length=5, max_length=5)
extended_picks: List[Pick] = Field(min_length=5, max_length=5)
pool_picks: List[Pick] = Field(min_length=5, max_length=5)
tier_rationale: TierRationale
narrative: Narrative
confidence: int = Field(ge=0, le=100)
def validate_response(data: dict, candidate_numbers: List[List[int]]) -> CuratorOutput:
out = CuratorOutput.model_validate(data)
if len(out.picks) != 5:
raise ValueError("picks must have exactly 5 sets")
candidate_set = {tuple(sorted(c)) for c in candidate_numbers}
for p in out.picks:
all_picks = (
out.core_picks + out.bonus_picks + out.extended_picks + out.pool_picks
)
# duplicate-pick check
pick_keys = [tuple(p.numbers) for p in all_picks]
if len(pick_keys) != len(set(pick_keys)):
raise ValueError("duplicate picks across tiers")
# forbid number sets not among the candidates
for p in all_picks:
if tuple(p.numbers) not in candidate_set:
raise ValueError(f"pick {p.numbers} not in candidates")
return out

View File

@@ -10,8 +10,10 @@ from .websocket_manager import ws_manager
from .agents import init_agents, get_agent, get_all_agent_states, AGENT_REGISTRY
from .scheduler import init_scheduler
from . import telegram_bot
from .routers import notify as notify_router
app = FastAPI()
app.include_router(notify_router.router)
_cors_origins = CORS_ALLOW_ORIGINS.split(",")
app.add_middleware(

View File

View File

@@ -0,0 +1,61 @@
"""로또 큐레이션·당첨 알림 — 텔레그램 푸시."""
import logging
from typing import Dict, Any
# Same pattern as the existing agents: send_raw(text, reply_markup=None, chat_id=None)
# With chat_id omitted, messages go to the default TELEGRAM_CHAT_ID automatically.
from ..telegram.messaging import send_raw
logger = logging.getLogger("agent-office")
LOTTO_URL = "https://gahusb.synology.me/lotto"
def _format_briefing(payload: Dict[str, Any]) -> str:
draw_no = payload["draw_no"]
nar = payload["narrative"]
conf = payload["confidence"]
# distribution chips — risk_tag frequencies across the 5 core sets
core = payload["picks"]["core"]
role_count = {"안정": 0, "균형": 0, "공격": 0}
for p in core:
role_count[p["risk_tag"]] = role_count.get(p["risk_tag"], 0) + 1
chip = " · ".join(f"{k} {v}" for k, v in role_count.items() if v)
msg = [
f"🎟 {draw_no}회 · 큐레이션 떴음",
"",
f"\"{nar['headline']}\"",
f"신뢰도 {conf} · 분배 {chip}",
]
retro = nar.get("retrospective") or ""
if retro:
msg += ["", f"▸ 회고: {retro}"]
msg += ["", f"👉 결정 카드 보러가기 ({LOTTO_URL})"]
return "\n".join(msg)
def _format_prize_alert(event: Dict[str, Any]) -> str:
return (
"🚨 로또 당첨 가능성!\n"
f"{event['draw_no']}회 — {event['match_count']}개 일치\n"
f"번호: {', '.join(str(n) for n in event['numbers'])}\n"
"동행복권에서 즉시 확인하세요."
)
async def send_curator_briefing(payload: Dict[str, Any]) -> None:
text = _format_briefing(payload)
try:
await send_raw(text)
except Exception as e:
logger.warning(f"[telegram_lotto] briefing send failed: {e}")
async def send_prize_alert(event: Dict[str, Any]) -> None:
text = _format_prize_alert(event)
try:
await send_raw(text)
except Exception as e:
logger.warning(f"[telegram_lotto] prize alert send failed: {e}")

View File

View File

@@ -0,0 +1,20 @@
"""다른 서비스가 트리거하는 웹훅 — 현재 lotto-backend → 텔레그램 푸시."""
from typing import List
from fastapi import APIRouter
from pydantic import BaseModel
from ..notifiers.telegram_lotto import send_prize_alert
router = APIRouter(prefix="/api/agent-office/notify")
class LottoPrizeEvent(BaseModel):
draw_no: int
match_count: int
numbers: List[int]
purchase_id: int
@router.post("/lotto-prize")
async def lotto_prize(body: LottoPrizeEvent):
await send_prize_alert(body.model_dump())
return {"ok": True}

View File

@@ -14,6 +14,11 @@ async def _run_stock_schedule():
if agent:
await agent.on_schedule()
async def _run_stock_screener():
agent = AGENT_REGISTRY.get("stock")
if agent:
await agent.on_screener_schedule()
async def _run_blog_schedule():
agent = AGENT_REGISTRY.get("blog")
if agent:
@@ -41,8 +46,16 @@ async def _poll_pipelines():
def init_scheduler():
scheduler.add_job(_run_stock_schedule, "cron", hour=7, minute=30, id="stock_news")
scheduler.add_job(
_run_stock_screener,
"cron",
day_of_week="mon-fri",
hour=16,
minute=30,
id="stock_screener",
)
scheduler.add_job(_run_blog_schedule, "cron", hour=10, minute=0, id="blog_pipeline")
scheduler.add_job(_run_lotto_schedule, "cron", day_of_week="mon", hour=7, minute=0, id="lotto_curate")
scheduler.add_job(_run_lotto_schedule, "cron", day_of_week="mon", hour=9, minute=0, id="lotto_curate")
scheduler.add_job(_run_youtube_research, "cron", hour=9, minute=0, id="youtube_research")
scheduler.add_job(_send_youtube_weekly_report, "cron", day_of_week="mon", hour=8, minute=0, id="youtube_weekly_report")
scheduler.add_job(_check_idle_breaks, "interval", seconds=60, id="idle_check")

View File

@@ -32,6 +32,34 @@ async def summarize_stock_news(limit: int = 15) -> Dict[str, Any]:
return resp.json()
async def refresh_screener_snapshot() -> Dict[str, Any]:
"""stock-lab의 KRX 일봉 스냅샷 갱신 (스크리너 실행 전 호출).
네이버 금융 일괄 다운로드라 보통 30~120s, 여유있게 180s.
"""
async with httpx.AsyncClient(timeout=180.0) as client:
resp = await client.post(f"{STOCK_LAB_URL}/api/stock/screener/snapshot/refresh")
resp.raise_for_status()
return resp.json()
async def run_stock_screener(mode: str = "auto") -> Dict[str, Any]:
"""stock-lab의 스크리너 실행.
반환 status:
- 'skipped_holiday': 공휴일/주말 — telegram_payload 없음
- 'success': telegram_payload 동봉
엔진 자체는 수 초 내 끝나지만, 컨텍스트 로드+200종목 처리 여유 180s.
"""
async with httpx.AsyncClient(timeout=180.0) as client:
resp = await client.post(
f"{STOCK_LAB_URL}/api/stock/screener/run",
json={"mode": mode},
)
resp.raise_for_status()
return resp.json()
async def scrape_stock_news() -> Dict[str, Any]:
"""stock-lab의 수동 뉴스 스크랩 트리거 — DB에 최신 뉴스 저장.
@@ -180,6 +208,34 @@ async def lotto_save_briefing(payload: dict) -> Dict[str, Any]:
return resp.json()
async def lotto_review_latest() -> Optional[Dict[str, Any]]:
from .config import LOTTO_BACKEND_URL
resp = await _client.get(f"{LOTTO_BACKEND_URL}/api/lotto/review/latest")
if resp.status_code == 404:
return None
resp.raise_for_status()
return resp.json()
async def lotto_review_by_draw(draw_no: int) -> Optional[Dict[str, Any]]:
from .config import LOTTO_BACKEND_URL
resp = await _client.get(f"{LOTTO_BACKEND_URL}/api/lotto/review/{draw_no}")
if resp.status_code == 404:
return None
resp.raise_for_status()
return resp.json()
async def lotto_reviews_history(limit: int = 10) -> List[Dict[str, Any]]:
from .config import LOTTO_BACKEND_URL
resp = await _client.get(
f"{LOTTO_BACKEND_URL}/api/lotto/review/history",
params={"limit": limit},
)
resp.raise_for_status()
return resp.json().get("reviews", [])
# --- music-lab pipeline (YouTube publisher orchestration) ---
async def list_active_pipelines() -> list[dict]:

View File

@@ -8,14 +8,22 @@ from .client import _enabled, api_call
from .formatter import MessageKind, format_agent_message
async def send_raw(text: str, reply_markup: Optional[dict] = None, chat_id: Optional[str] = None) -> dict:
"""가장 저수준. 원문 텍스트 그대로 전송. chat_id 생략 시 기본 TELEGRAM_CHAT_ID로."""
async def send_raw(
text: str,
reply_markup: Optional[dict] = None,
chat_id: Optional[str] = None,
parse_mode: str = "HTML",
) -> dict:
"""가장 저수준. 원문 텍스트 그대로 전송. chat_id 생략 시 기본 TELEGRAM_CHAT_ID로.
parse_mode: 기본 'HTML'. MarkdownV2 페이로드(예: 스크리너) 전송 시 명시 지정.
"""
if not _enabled():
return {"ok": False, "message_id": None}
payload = {
"chat_id": chat_id or TELEGRAM_CHAT_ID,
"text": text,
"parse_mode": "HTML",
"parse_mode": parse_mode,
}
if reply_markup:
payload["reply_markup"] = reply_markup

View File

@@ -1,60 +1,55 @@
import sys, os
sys.path.insert(0, os.path.dirname(os.path.dirname(__file__)))
import pytest
from app.curator.schema import validate_response, CuratorOutput
from app.curator.schema import validate_response
CANDIDATE_NUMBERS = [
[1, 2, 3, 4, 5, 6],
[7, 8, 9, 10, 11, 12],
[13, 14, 15, 16, 17, 18],
[19, 20, 21, 22, 23, 24],
[25, 26, 27, 28, 29, 30],
[31, 32, 33, 34, 35, 36],
]
def _pick(nums, role="안정"):
return {"numbers": nums, "risk_tag": role, "reason": "x"}
def _valid_payload():
def _make_payload(core, bonus, ext, pool):
return {
"picks": [
{"numbers": s, "risk_tag": "안정", "reason": "test"}
for s in CANDIDATE_NUMBERS[:5]
],
"core_picks": core, "bonus_picks": bonus,
"extended_picks": ext, "pool_picks": pool,
"tier_rationale": {"bonus": "a", "extended": "b", "pool": "c"},
"narrative": {
"headline": "h", "summary_3lines": ["a", "b", "c"],
"hot_cold_comment": "hc", "warnings": "",
"headline": "h",
"summary_3lines": ["1", "2", "3"],
"retrospective": "지난주 평균 1.8",
},
"confidence": 80,
"confidence": 70,
}
def test_valid_payload_passes():
result = validate_response(_valid_payload(), CANDIDATE_NUMBERS)
assert isinstance(result, CuratorOutput)
assert len(result.picks) == 5
def test_valid_4tier():
pool = [[i, i+1, i+2, i+3, i+4, i+5] for i in range(1, 21)]
cores = [_pick(pool[i]) for i in range(5)]
bonus = [_pick(pool[i]) for i in range(5, 10)]
ext = [_pick(pool[i]) for i in range(10, 15)]
pl = [_pick(pool[i]) for i in range(15, 20)]
out = validate_response(_make_payload(cores, bonus, ext, pl), pool)
assert len(out.core_picks) == 5
assert out.narrative.retrospective.startswith("지난주")
def test_rejects_number_out_of_candidates():
bad = _valid_payload()
bad["picks"][0]["numbers"] = [40, 41, 42, 43, 44, 45] # valid numbers but not in candidates
def test_duplicate_pick_rejected():
pool = [[i, i+1, i+2, i+3, i+4, i+5] for i in range(1, 21)]
cores = [_pick(pool[0])] * 5 # duplicates
bonus = [_pick(pool[i]) for i in range(5, 10)]
ext = [_pick(pool[i]) for i in range(10, 15)]
pl = [_pick(pool[i]) for i in range(15, 20)]
with pytest.raises(ValueError, match="duplicate"):
validate_response(_make_payload(cores, bonus, ext, pl), pool)
def test_pick_not_in_candidates_rejected():
pool = [[i, i+1, i+2, i+3, i+4, i+5] for i in range(1, 21)]
foreign = [40, 41, 42, 43, 44, 45]
cores = [_pick(foreign)] + [_pick(pool[i]) for i in range(1, 5)]
bonus = [_pick(pool[i]) for i in range(5, 10)]
ext = [_pick(pool[i]) for i in range(10, 15)]
pl = [_pick(pool[i]) for i in range(15, 20)]
with pytest.raises(ValueError, match="not in candidates"):
validate_response(bad, CANDIDATE_NUMBERS)
def test_rejects_wrong_pick_count():
bad = _valid_payload()
bad["picks"] = bad["picks"][:3]
with pytest.raises(ValueError, match="exactly 5"):
validate_response(bad, CANDIDATE_NUMBERS)
def test_rejects_duplicate_numbers_within_set():
bad = _valid_payload()
bad["picks"][0]["numbers"] = [1, 1, 2, 3, 4, 5]
with pytest.raises(ValueError):
validate_response(bad, CANDIDATE_NUMBERS)
def test_rejects_invalid_risk_tag():
bad = _valid_payload()
bad["picks"][0]["risk_tag"] = "미친"
with pytest.raises(ValueError):
validate_response(bad, CANDIDATE_NUMBERS)
validate_response(_make_payload(cores, bonus, ext, pl), pool)

View File

@@ -35,6 +35,7 @@ async def test_poll_notifies_once_per_state():
"state": "cover_pending",
"cover_url": "/x.jpg",
"track_title": "Test",
"feedback_count_per_step": {},
}]
with patch(
"app.agents.youtube_publisher.service_proxy.list_active_pipelines",
@@ -52,6 +53,27 @@ async def test_poll_notifies_once_per_state():
assert mock_send.call_count == 1
@pytest.mark.asyncio
async def test_poll_renotifies_on_reject_regen(monkeypatch):
from app.agents.youtube_publisher import YoutubePublisherAgent
pipelines_v1 = [{"id": 1, "state": "cover_pending", "cover_url": "/x.jpg",
"track_title": "Test", "feedback_count_per_step": {}}]
pipelines_v2 = [{"id": 1, "state": "cover_pending", "cover_url": "/x2.jpg",
"track_title": "Test", "feedback_count_per_step": {"cover": 1}}]
list_mock = AsyncMock(side_effect=[pipelines_v1, pipelines_v2])
with patch("app.agents.youtube_publisher.service_proxy.list_active_pipelines", list_mock), \
patch("app.agents.youtube_publisher.send_raw",
new=AsyncMock(return_value={"ok": True, "message_id": 99})), \
patch("app.agents.youtube_publisher.service_proxy.save_pipeline_telegram_msg",
new=AsyncMock()):
a = YoutubePublisherAgent()
await a.poll_state_changes() # 1st: notify
await a.poll_state_changes() # 2nd: feedback count differs → notify again
from app.agents.youtube_publisher import send_raw as sr
assert sr.call_count == 2
@pytest.mark.asyncio
async def test_on_telegram_reply_approve_calls_feedback():
from app.agents.youtube_publisher import YoutubePublisherAgent

View File

@@ -0,0 +1,47 @@
import sys, os
sys.path.insert(0, os.path.dirname(os.path.dirname(__file__)))
import pytest
from unittest.mock import AsyncMock, patch
from app.curator.retrospective import build_retrospective, _detect_bias
def test_detect_bias_persistent_low():
reviews = [
{"pattern_delta": "저번호 편향 +1.2 / 합계 -18"},
{"pattern_delta": "저번호 편향 +0.8"},
{"pattern_delta": "저번호 편향 +1.0 / 홀짝 +0.5"},
]
assert "저번호" in _detect_bias(reviews)
def test_detect_bias_no_persistence():
reviews = [
{"pattern_delta": "저번호 편향 +1.2"},
{"pattern_delta": "고번호 편향 +0.8"},
]
assert _detect_bias(reviews) == ""
@pytest.mark.asyncio
async def test_build_retrospective_with_data():
with patch("app.service_proxy.lotto_review_by_draw", new=AsyncMock(return_value={
"draw_no": 1153, "curator_avg_match": 1.8, "curator_best_tier": "안정",
"user_avg_match": 2.0, "user_5plus_prizes": 1, "pattern_delta": "저번호 편향 +1.2",
})), patch("app.service_proxy.lotto_reviews_history", new=AsyncMock(return_value=[
{"draw_no": 1153, "curator_avg_match": 1.8, "user_avg_match": 2.0, "pattern_delta": "저번호 편향 +1.2"},
{"draw_no": 1152, "curator_avg_match": 1.6, "user_avg_match": 1.5, "pattern_delta": "저번호 편향 +0.8"},
{"draw_no": 1151, "curator_avg_match": 1.7, "user_avg_match": 1.8, "pattern_delta": "저번호 편향 +1.0"},
{"draw_no": 1150, "curator_avg_match": 1.9, "user_avg_match": 2.2, "pattern_delta": ""},
])):
out = await build_retrospective(1154)
assert out["last_draw"]["draw_no"] == 1153
assert out["trend_4w"]["curator_avg_4w"] == round((1.8+1.6+1.7+1.9)/4, 2)
assert "저번호" in out["trend_4w"]["user_persistent_bias"]
@pytest.mark.asyncio
async def test_build_retrospective_no_review():
with patch("app.service_proxy.lotto_review_by_draw", new=AsyncMock(return_value=None)):
out = await build_retrospective(1154)
assert out is None

View File

@@ -0,0 +1,177 @@
"""StockAgent.on_screener_schedule — 평일 16:30 KST 자동 잡 단위 테스트.
stock-lab HTTP 호출은 service_proxy mock, 텔레그램은 messaging.send_raw mock.
"""
import os
import sys
import tempfile
_fd, _TMP = tempfile.mkstemp(suffix=".db")
os.close(_fd)
os.unlink(_TMP)
os.environ["AGENT_OFFICE_DB_PATH"] = _TMP
sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
import asyncio
from unittest.mock import AsyncMock, patch
import pytest
@pytest.fixture(autouse=True)
def _init_db():
import gc
gc.collect()
if os.path.exists(_TMP):
os.remove(_TMP)
from app.db import init_db
init_db()
yield
gc.collect()
def _success_body(asof="2026-05-12"):
return {
"asof": asof,
"mode": "auto",
"status": "success",
"run_id": 42,
"survivors_count": 600,
"top_n": 20,
"results": [],
"telegram_payload": {
"chat_target": "default",
"parse_mode": "MarkdownV2",
"text": "*KRX 강세주 스크리너* test body",
},
"warnings": [],
}
def _holiday_body(asof="2026-05-05"):
return {
"asof": asof,
"mode": "auto",
"status": "skipped_holiday",
"run_id": None,
"survivors_count": None,
"top_n": 0,
"results": [],
"telegram_payload": None,
"warnings": [f"{asof} is a holiday — skipped"],
}
def test_screener_success_sends_markdownv2_telegram():
from app.agents.stock import StockAgent
from app import service_proxy
from app.telegram import messaging
fake_snap = AsyncMock(return_value={"status": "ok"})
fake_run = AsyncMock(return_value=_success_body())
fake_send = AsyncMock(return_value={"ok": True, "message_id": 7777})
with patch.object(service_proxy, "refresh_screener_snapshot", fake_snap), \
patch.object(service_proxy, "run_stock_screener", fake_run), \
patch.object(messaging, "send_raw", fake_send):
agent = StockAgent()
asyncio.run(agent.on_screener_schedule())
fake_snap.assert_awaited_once()
fake_run.assert_awaited_once_with(mode="auto")
fake_send.assert_awaited_once()
args, kwargs = fake_send.call_args
# text arrives as the first positional arg or via kwargs
text = args[0] if args else kwargs.get("text")
assert "KRX 강세주 스크리너" in text
assert kwargs.get("parse_mode") == "MarkdownV2"
assert agent.state == "idle"
def test_screener_holiday_skips_telegram():
from app.agents.stock import StockAgent
from app import service_proxy
from app.telegram import messaging
fake_snap = AsyncMock(return_value={"status": "skipped_weekend"})
fake_run = AsyncMock(return_value=_holiday_body())
fake_send = AsyncMock(return_value={"ok": True, "message_id": 1})
with patch.object(service_proxy, "refresh_screener_snapshot", fake_snap), \
patch.object(service_proxy, "run_stock_screener", fake_run), \
patch.object(messaging, "send_raw", fake_send):
agent = StockAgent()
asyncio.run(agent.on_screener_schedule())
fake_run.assert_awaited_once()
# no Telegram message on a holiday
fake_send.assert_not_awaited()
assert agent.state == "idle"
def test_screener_snapshot_failure_still_runs_screener():
"""스냅샷 실패는 경고만 남기고 screener 호출은 계속됨."""
from app.agents.stock import StockAgent
from app import service_proxy
from app.telegram import messaging
fake_snap = AsyncMock(side_effect=RuntimeError("snapshot upstream down"))
fake_run = AsyncMock(return_value=_success_body())
fake_send = AsyncMock(return_value={"ok": True, "message_id": 8888})
with patch.object(service_proxy, "refresh_screener_snapshot", fake_snap), \
patch.object(service_proxy, "run_stock_screener", fake_run), \
patch.object(messaging, "send_raw", fake_send):
agent = StockAgent()
asyncio.run(agent.on_screener_schedule())
fake_snap.assert_awaited_once()
fake_run.assert_awaited_once_with(mode="auto")
fake_send.assert_awaited_once()
def test_screener_run_failure_notifies_operator():
"""screener/run 실패 시 운영자 알림 텔레그램 발송."""
from app.agents.stock import StockAgent
from app import service_proxy
from app.telegram import messaging
fake_snap = AsyncMock(return_value={"status": "ok"})
fake_run = AsyncMock(side_effect=RuntimeError("stock-lab 500"))
fake_send = AsyncMock(return_value={"ok": True, "message_id": 1})
with patch.object(service_proxy, "refresh_screener_snapshot", fake_snap), \
patch.object(service_proxy, "run_stock_screener", fake_run), \
patch.object(messaging, "send_raw", fake_send):
agent = StockAgent()
asyncio.run(agent.on_screener_schedule())
# exactly one call: the operator alert
assert fake_send.await_count == 1
args, kwargs = fake_send.call_args
text = args[0] if args else kwargs.get("text")
assert "스크리너 실패" in text
assert agent.state == "idle"
def test_screener_unexpected_status_treated_as_failure():
from app.agents.stock import StockAgent
from app import service_proxy
from app.telegram import messaging
fake_snap = AsyncMock(return_value={"status": "ok"})
fake_run = AsyncMock(return_value={"status": "weird", "asof": "2026-05-12"})
fake_send = AsyncMock(return_value={"ok": True, "message_id": 1})
with patch.object(service_proxy, "refresh_screener_snapshot", fake_snap), \
patch.object(service_proxy, "run_stock_screener", fake_run), \
patch.object(messaging, "send_raw", fake_send):
agent = StockAgent()
asyncio.run(agent.on_screener_schedule())
# one operator alert + the screener payload is never sent
assert fake_send.await_count == 1
args, kwargs = fake_send.call_args
text = args[0] if args else kwargs.get("text")
assert "스크리너 실패" in text

View File

@@ -0,0 +1,44 @@
import sys, os
sys.path.insert(0, os.path.dirname(os.path.dirname(__file__)))
from app.notifiers.telegram_lotto import _format_briefing, _format_prize_alert
def test_briefing_with_retrospective():
payload = {
"draw_no": 1154,
"confidence": 72,
"narrative": {
"headline": "안정 +1, 콜드 누적 보강",
"summary_3lines": ["a", "b", "c"],
"retrospective": "너 2.0 / 나 1.8 — 저번호 편향",
},
"picks": {
"core": [
{"risk_tag": "안정"}, {"risk_tag": "안정"}, {"risk_tag": "안정"},
{"risk_tag": "균형"}, {"risk_tag": "공격"},
],
"bonus": [], "extended": [], "pool": [],
},
}
text = _format_briefing(payload)
assert "1154회" in text
assert "신뢰도 72" in text
assert "안정 3" in text
assert "회고: 너 2.0" in text
def test_briefing_without_retrospective():
payload = {
"draw_no": 1, "confidence": 50,
"narrative": {"headline": "h", "summary_3lines": ["a","b","c"], "retrospective": ""},
"picks": {"core": [{"risk_tag":"안정"}]*5, "bonus":[],"extended":[],"pool":[]},
}
text = _format_briefing(payload)
assert "회고" not in text
def test_prize_alert():
text = _format_prize_alert({"draw_no": 1154, "match_count": 5, "numbers": [3,11,17,25,33,8]})
assert "5개 일치" in text
assert "3, 11, 17, 25, 33, 8" in text

View File

@@ -73,6 +73,9 @@ services:
- CLAUDE_HAIKU_MODEL=${CLAUDE_HAIKU_MODEL:-claude-haiku-4-5-20251001}
- CLAUDE_SONNET_MODEL=${CLAUDE_SONNET_MODEL:-claude-sonnet-4-6}
- VIDEO_DATA_DIR=${VIDEO_DATA_DIR:-/app/data/videos}
- WINDOWS_VIDEO_ENCODER_URL=${WINDOWS_VIDEO_ENCODER_URL:-}
- NAS_VIDEOS_ROOT=${NAS_VIDEOS_ROOT:-/volume1/docker/webpage/data/videos}
- NAS_MUSIC_ROOT=${NAS_MUSIC_ROOT:-/volume1/docker/webpage/data/music}
volumes:
- ${RUNTIME_PATH}/data/music:/app/data
- ${RUNTIME_PATH:-.}/data/videos:/app/data/videos
@@ -195,11 +198,13 @@ services:
- DSM_HOST=${DSM_HOST:-}
- DSM_USER=${DSM_USER:-}
- DSM_PASS=${DSM_PASS:-}
- DSM_VERIFY_SSL=${DSM_VERIFY_SSL:-true}
- BACKEND_HMAC_SECRET=${BACKEND_HMAC_SECRET:-}
- SUPABASE_URL=${SUPABASE_URL:-}
- SUPABASE_SERVICE_KEY=${SUPABASE_SERVICE_KEY:-}
- UPLOAD_TOKEN_TTL_SEC=${UPLOAD_TOKEN_TTL_SEC:-1800}
- PACK_BASE_DIR=${PACK_BASE_DIR:-/app/data/packs}
- PACK_HOST_DIR=${PACK_HOST_DIR:-${PACK_DATA_PATH:-./data/packs}}
volumes:
- ${PACK_DATA_PATH:-./data/packs}:${PACK_BASE_DIR:-/app/data/packs}
healthcheck:

View File

@@ -0,0 +1,977 @@
# packs-lab infra integration + admin mint-token Implementation Plan
> **For agentic workers:** REQUIRED SUB-SKILL: Use superpowers:subagent-driven-development (recommended) or superpowers:executing-plans to implement this plan task-by-task. Steps use checkbox (`- [ ]`) syntax for tracking.
**Goal:** Bring packs-lab to a production-ready state — an admin upload-token endpoint + the Supabase schema + docker-compose/nginx/env integration + integration tests + doc updates.
**Architecture:** Keep the existing code (HMAC + DSM client + 4 routes) untouched and add one new route (`POST /api/packs/admin/mint-token`) to routes.py. Create the Supabase `pack_files` DDL file and the infrastructure (docker-compose on 18950, nginx 5GB streaming, .env.example 6+1 env vars), plus integration tests (routes + mocked dsm_client) and updates to 5+1 spots in CLAUDE.md.
**Tech Stack:** Python 3.12 / FastAPI / pytest + unittest.mock / Supabase(PostgreSQL) / Synology DSM 7.x API / nginx / Docker Compose
**Spec reference:** `docs/superpowers/specs/2026-05-05-packs-lab-infra-integration-design.md`
**Working directory:** `C:\Users\jaeoh\Desktop\workspace\web-backend` (the existing web-backend repo)
---
## Task 1: Test infrastructure — `tests/conftest.py`
The existing `tests/test_auth.py` has no fixture like `BACKEND_HMAC_SECRET=secret` and leans on ambient environment variables. Consolidate an autouse fixture in conftest so every test runs against the same secret.
**Files:**
- Create: `packs-lab/tests/conftest.py`
- [ ] **Step 1: Create conftest.py**
`packs-lab/tests/conftest.py`:
```python
"""packs-lab 테스트 공통 fixture."""
import pytest
@pytest.fixture(autouse=True)
def _hmac_secret(monkeypatch):
"""모든 테스트에서 동일한 HMAC secret 사용. auth._SECRET 모듈 캐시까지 갱신."""
monkeypatch.setenv("BACKEND_HMAC_SECRET", "test-secret-do-not-use-in-prod")
# auth.py 모듈은 import 시점에 _SECRET을 캐시하므로 monkeypatch로 함께 갱신
from app import auth
monkeypatch.setattr(auth, "_SECRET", "test-secret-do-not-use-in-prod")
```
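For context, the module-level cache this fixture patches presumably looks something like the sketch below (hypothetical; read the actual `app/auth.py` to confirm the attribute name):
```python
# Hypothetical sketch of app/auth.py's import-time cache (confirm against the real module):
import os

_SECRET = os.getenv("BACKEND_HMAC_SECRET", "")  # read once at import; setenv alone won't refresh it
```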
- [ ] **Step 2: Regression-check the existing test_auth.py**
```bash
cd C:\Users\jaeoh\Desktop\workspace\web-backend\packs-lab
python -m pytest tests/test_auth.py -v
```
Expected: all existing tests PASS (conftest has no effect, or they keep passing as before). If any test fails due to a secret-encoding mismatch, align that test's secret usage with the conftest value.
- [ ] **Step 3: Commit**
```bash
git add packs-lab/tests/conftest.py
git commit -m "test(packs-lab): conftest로 HMAC secret 통일"
```
---
## Task 2: admin mint-token route (schema + implementation + tests)
New `POST /api/packs/admin/mint-token`. Add the Pydantic schemas, implement the route, and cover it with integration tests.
**Files:**
- Modify: `packs-lab/app/models.py` (add 2 schemas)
- Modify: `packs-lab/app/routes.py` (extend imports + add the route)
- Create: `packs-lab/tests/test_routes.py` (mint-token tests first)
- [ ] **Step 1: Write failing tests**
`packs-lab/tests/test_routes.py`:
```python
"""packs-lab 라우트 통합 테스트.
DSM·Supabase는 mock. HMAC 검증·토큰 발급·검증은 실제 코드 사용.
"""
import hashlib
import hmac
import json
import time
from unittest.mock import patch, MagicMock
from fastapi.testclient import TestClient
from app.main import app
SECRET = "test-secret-do-not-use-in-prod"
def _hmac_headers(body_bytes: bytes) -> dict:
"""body에 대한 X-Timestamp + X-Signature 헤더 생성."""
ts = str(int(time.time()))
sig = hmac.new(SECRET.encode(), ts.encode() + b"." + body_bytes, hashlib.sha256).hexdigest()
return {"X-Timestamp": ts, "X-Signature": sig}
def test_mint_token_hmac_required():
"""HMAC 헤더 누락 → 401."""
client = TestClient(app)
body = {"tier": "pro", "label": "샘플", "filename": "x.zip", "size_bytes": 1024}
resp = client.post("/api/packs/admin/mint-token", json=body)
assert resp.status_code == 401
def test_mint_token_returns_valid_token():
"""발급된 token이 verify_upload_token으로 통과해야 한다."""
from app.auth import verify_upload_token
body = {"tier": "pro", "label": "샘플", "filename": "test.zip", "size_bytes": 2048}
body_bytes = json.dumps(body).encode()
headers = _hmac_headers(body_bytes)
headers["Content-Type"] = "application/json"
client = TestClient(app)
resp = client.post("/api/packs/admin/mint-token", content=body_bytes, headers=headers)
assert resp.status_code == 200
data = resp.json()
assert "token" in data and "expires_at" in data and "jti" in data
payload = verify_upload_token(data["token"])
assert payload["tier"] == "pro"
assert payload["label"] == "샘플"
assert payload["filename"] == "test.zip"
assert payload["size_bytes"] == 2048
assert payload["jti"] == data["jti"]
def test_mint_token_invalid_filename():
"""허용 외 확장자 → 400."""
body = {"tier": "pro", "label": "샘플", "filename": "x.exe", "size_bytes": 1024}
body_bytes = json.dumps(body).encode()
headers = _hmac_headers(body_bytes)
headers["Content-Type"] = "application/json"
client = TestClient(app)
resp = client.post("/api/packs/admin/mint-token", content=body_bytes, headers=headers)
assert resp.status_code == 400
```
- [ ] **Step 2: Confirm the failures**
```bash
cd packs-lab
python -m pytest tests/test_routes.py -v
```
Expected: every test FAILS — the `/api/packs/admin/mint-token` route does not exist yet (404 or 405).
- [ ] **Step 3: Add the schemas to models.py**
Append to the end of `packs-lab/app/models.py`:
```python
class MintTokenRequest(BaseModel):
    """Vercel → backend: request to mint an admin upload token."""
    tier: PackTier
    label: str = Field(..., max_length=200)
    filename: str = Field(..., max_length=255)
    size_bytes: int = Field(..., gt=0, le=5 * 1024 * 1024 * 1024)


class MintTokenResponse(BaseModel):
    token: str
    expires_at: datetime
    jti: str
```
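`PackTier` comes from the existing models.py; this step assumes it enumerates the same three tiers used in the DDL of Task 6. A hypothetical sketch of what the schema relies on (verify against the real file):
```python
# Hypothetical; the actual PackTier in models.py may be an Enum instead:
from typing import Literal

PackTier = Literal["starter", "pro", "master"]
```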
- [ ] **Step 4: Add the mint-token route to routes.py**
Add the following to the import block at the top of `packs-lab/app/routes.py`:
```python
import time
from datetime import timezone
```
(`import uuid` and `from datetime import datetime` are already there.)
Extend the `from .auth import` line as follows:
```python
from .auth import mint_upload_token, verify_request_hmac, verify_upload_token
```
Extend the `from .models import` line as follows:
```python
from .models import (
    MintTokenRequest,
    MintTokenResponse,
    PackFileItem,
    SignLinkRequest,
    SignLinkResponse,
    UploadResponse,
)
```
Add a constant (on the line after `MAX_BYTES`):
```python
UPLOAD_TOKEN_TTL_SEC = int(os.getenv("UPLOAD_TOKEN_TTL_SEC", "1800"))  # default: 30 minutes
```
Add the route (after the `sign_link` function, before `upload`):
```python
@router.post("/admin/mint-token", response_model=MintTokenResponse)
async def mint_token(
request: Request,
x_timestamp: str = Header(""),
x_signature: str = Header(""),
):
body = await request.body()
verify_request_hmac(body, x_timestamp, x_signature)
payload = MintTokenRequest.model_validate_json(body)
_check_filename(payload.filename)
jti = str(uuid.uuid4())
expires_ts = int(time.time()) + UPLOAD_TOKEN_TTL_SEC
token = mint_upload_token({
"tier": payload.tier,
"label": payload.label,
"filename": payload.filename,
"size_bytes": payload.size_bytes,
"jti": jti,
"expires_at": expires_ts,
})
return MintTokenResponse(
token=token,
expires_at=datetime.fromtimestamp(expires_ts, tz=timezone.utc),
jti=jti,
)
```
- [ ] **Step 5: Confirm the tests pass**
```bash
cd packs-lab
python -m pytest tests/test_routes.py -v
```
Expected: 3 passed.
- [ ] **Step 6: Commit**
```bash
git add packs-lab/app/models.py packs-lab/app/routes.py packs-lab/tests/test_routes.py
git commit -m "feat(packs-lab): POST /api/packs/admin/mint-token 라우트 + 통합 테스트"
```
---
## Task 3: Integration tests for the existing 4 routes (sign-link / upload / list / delete)
The existing routes don't change; tests are added only as a regression safety net.
**Files:**
- Modify: `packs-lab/tests/test_routes.py` (add 8 tests)
- [ ] **Step 1: Add the sign-link tests**
Append to `tests/test_routes.py`:
```python
def test_sign_link_hmac_required():
    """No HMAC headers → 401."""
    client = TestClient(app)
    body = {"file_path": "/volume1/docker/webpage/media/packs/pro/x.zip"}
    resp = client.post("/api/packs/sign-link", json=body)
    assert resp.status_code == 401


def test_sign_link_outside_base_dir():
    """Path outside PACK_BASE_DIR → 400."""
    body = {"file_path": "/etc/passwd"}
    body_bytes = json.dumps(body).encode()
    headers = _hmac_headers(body_bytes)
    headers["Content-Type"] = "application/json"
    client = TestClient(app)
    resp = client.post("/api/packs/sign-link", content=body_bytes, headers=headers)
    assert resp.status_code == 400


def test_sign_link_calls_dsm():
    """The DSM client is called and its URL is returned."""
    from datetime import datetime, timezone
    from unittest.mock import AsyncMock

    body = {"file_path": "/volume1/docker/webpage/media/packs/pro/sample.zip"}
    body_bytes = json.dumps(body).encode()
    headers = _hmac_headers(body_bytes)
    headers["Content-Type"] = "application/json"
    fake_url = "https://gahusb.synology.me:5001/sharing/abc123"
    fake_expires = datetime(2026, 5, 5, 13, 0, tzinfo=timezone.utc)
    with patch("app.routes.create_share_link", new=AsyncMock(return_value=(fake_url, fake_expires))) as mock:
        client = TestClient(app)
        resp = client.post("/api/packs/sign-link", content=body_bytes, headers=headers)
    assert resp.status_code == 200
    data = resp.json()
    assert data["url"] == fake_url
    mock.assert_awaited_once()
```
- [ ] **Step 2: Add the upload tests**
```python
def _make_upload_token(tier="pro", label="샘플", filename="test.zip", size_bytes=1024, jti=None, ttl=1800):
    """Build an upload token directly, bypassing the mint-token endpoint."""
    import uuid
    from app.auth import mint_upload_token

    return mint_upload_token({
        "tier": tier,
        "label": label,
        "filename": filename,
        "size_bytes": size_bytes,
        "jti": jti or str(uuid.uuid4()),
        "expires_at": int(time.time()) + ttl,
    })


def test_upload_token_required():
    """Missing Authorization Bearer → 401."""
    client = TestClient(app)
    resp = client.post("/api/packs/upload", files={"file": ("x.zip", b"hello")})
    assert resp.status_code == 401


def test_upload_size_mismatch(tmp_path, monkeypatch):
    """Token size_bytes ≠ actual size → 400, and the file is cleaned up."""
    monkeypatch.setattr("app.routes.PACK_BASE_DIR", tmp_path)
    token = _make_upload_token(size_bytes=999)  # actual body is 5 bytes, token claims 999
    client = TestClient(app)
    resp = client.post(
        "/api/packs/upload",
        files={"file": ("test.zip", b"hello")},
        headers={"Authorization": f"Bearer {token}"},
    )
    assert resp.status_code == 400
    assert "크기" in resp.json()["detail"]


def test_upload_jti_replay(tmp_path, monkeypatch):
    """Same-jti token used twice → second attempt 409."""
    monkeypatch.setattr("app.routes.PACK_BASE_DIR", tmp_path)
    fake_supabase = MagicMock()
    fake_supabase.table.return_value.insert.return_value.execute.return_value = MagicMock(
        data=[{"uploaded_at": "2026-05-05T12:00:00+00:00"}]
    )
    token = _make_upload_token(filename="replay.zip", size_bytes=5, jti="replay-jti-1")
    with patch("app.routes._supabase", return_value=fake_supabase):
        client = TestClient(app)
        # first attempt: succeeds
        resp1 = client.post(
            "/api/packs/upload",
            files={"file": ("replay.zip", b"hello")},
            headers={"Authorization": f"Bearer {token}"},
        )
        assert resp1.status_code == 200
        # second attempt reuses the same token; the consumed jti must trigger the rejection
        resp2 = client.post(
            "/api/packs/upload",
            files={"file": ("replay.zip", b"world")},
            headers={"Authorization": f"Bearer {token}"},
        )
        assert resp2.status_code == 409
```
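The replay test assumes the upload route consumes each jti exactly once. The plan does not prescribe the mechanism; a minimal sketch of the pattern (hypothetical; the real routes.py may track jti differently):
```python
# Hypothetical one-time jti consumption (the actual mechanism in routes.py may differ):
_consumed_jti: set[str] = set()

def _consume_jti(jti: str) -> bool:
    """Return True the first time a jti is seen, False on replay."""
    if jti in _consumed_jti:
        return False
    _consumed_jti.add(jti)
    return True
```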
- [ ] **Step 3: Add the list / delete tests**
```python
def test_list_returns_active_only():
    """Verify the query-builder chain only fetches rows with deleted_at IS NULL."""
    fake_rows = [
        {
            "id": "11111111-1111-1111-1111-111111111111",
            "min_tier": "pro",
            "label": "샘플",
            "file_path": "/volume1/docker/webpage/media/packs/pro/a.zip",
            "filename": "a.zip",
            "size_bytes": 1024,
            "sort_order": 0,
            "uploaded_at": "2026-05-05T12:00:00+00:00",
        }
    ]
    fake_supabase = MagicMock()
    chain = fake_supabase.table.return_value.select.return_value
    chain.is_.return_value.order.return_value.order.return_value.execute.return_value = MagicMock(data=fake_rows)
    body_bytes = b""
    headers = _hmac_headers(body_bytes)
    with patch("app.routes._supabase", return_value=fake_supabase):
        client = TestClient(app)
        resp = client.get("/api/packs/list", headers=headers)
    assert resp.status_code == 200
    items = resp.json()
    assert len(items) == 1
    assert items[0]["filename"] == "a.zip"
    fake_supabase.table.return_value.select.return_value.is_.assert_called_with("deleted_at", "null")


def test_delete_soft_deletes():
    """DELETE must pass a deleted_at ISO timestamp into the supabase update."""
    fake_supabase = MagicMock()
    fake_supabase.table.return_value.update.return_value.eq.return_value.execute.return_value = MagicMock(
        data=[{"id": "abc"}]
    )
    body_bytes = b""
    headers = _hmac_headers(body_bytes)
    with patch("app.routes._supabase", return_value=fake_supabase):
        client = TestClient(app)
        resp = client.delete("/api/packs/abc", headers=headers)
    assert resp.status_code == 200
    update_call = fake_supabase.table.return_value.update.call_args
    update_kwargs = update_call.args[0]
    assert "deleted_at" in update_kwargs
    # validate the ISO 8601 shape (e.g. 2026-05-05T12:00:00+00:00)
    assert "T" in update_kwargs["deleted_at"]
```
- [ ] **Step 4: Run the tests**
```bash
cd packs-lab
python -m pytest tests/test_routes.py -v
```
Expected: 11 passed (3 from Task 2 + 3 sign-link + 3 upload + 2 list/delete).
- [ ] **Step 5: Commit**
```bash
git add packs-lab/tests/test_routes.py
git commit -m "test(packs-lab): 기존 4 라우트 통합 테스트 (sign-link, upload, list, delete)"
```
---
## Task 4: `tests/test_dsm_client.py` — DSM client mock tests
**Files:**
- Create: `packs-lab/tests/test_dsm_client.py`
- [ ] **Step 1: Write the DSM client tests**
`packs-lab/tests/test_dsm_client.py`:
```python
"""DSM 7.x API client 테스트 — httpx mock으로 외부 호출 차단."""
import asyncio
from unittest.mock import patch, MagicMock
import pytest
import httpx
from app.dsm_client import create_share_link, DSMError, _login, _logout
@pytest.fixture(autouse=True)
def _dsm_env(monkeypatch):
monkeypatch.setenv("DSM_HOST", "https://test-nas:5001")
monkeypatch.setenv("DSM_USER", "test-user")
monkeypatch.setenv("DSM_PASS", "test-pass")
# 모듈 캐시도 갱신
from app import dsm_client
monkeypatch.setattr(dsm_client, "DSM_HOST", "https://test-nas:5001")
monkeypatch.setattr(dsm_client, "DSM_USER", "test-user")
monkeypatch.setattr(dsm_client, "DSM_PASS", "test-pass")
def _make_response(json_data, status_code=200):
"""httpx.Response mock."""
mock = MagicMock(spec=httpx.Response)
mock.json.return_value = json_data
mock.status_code = status_code
mock.raise_for_status = MagicMock()
return mock
def test_create_share_link_login_logout():
"""login → Sharing.create → logout 순서가 보장되어야 한다."""
call_order = []
async def fake_get(self, url, *, params=None, **kw):
api = (params or {}).get("api", "")
method = (params or {}).get("method", "")
call_order.append(f"{api}.{method}")
if api == "SYNO.API.Auth" and method == "login":
return _make_response({"success": True, "data": {"sid": "fake-sid"}})
if api == "SYNO.API.Auth" and method == "logout":
return _make_response({"success": True})
if api == "SYNO.FileStation.Sharing" and method == "create":
return _make_response({
"success": True,
"data": {"links": [{"url": "https://test-nas:5001/sharing/abc"}]},
})
return _make_response({"success": False, "error": "unexpected"})
with patch.object(httpx.AsyncClient, "get", new=fake_get):
url, expires_at = asyncio.run(create_share_link("/volume1/test/file.zip", expires_in_sec=3600))
assert url == "https://test-nas:5001/sharing/abc"
assert call_order == [
"SYNO.API.Auth.login",
"SYNO.FileStation.Sharing.create",
"SYNO.API.Auth.logout",
]
def test_create_share_link_returns_url_and_expiry():
"""응답 파싱 — links[0].url 사용."""
async def fake_get(self, url, *, params=None, **kw):
method = (params or {}).get("method", "")
if method == "login":
return _make_response({"success": True, "data": {"sid": "sid"}})
if method == "create":
return _make_response({
"success": True,
"data": {"links": [{"url": "https://nas/sharing/xyz"}]},
})
return _make_response({"success": True})
with patch.object(httpx.AsyncClient, "get", new=fake_get):
url, expires_at = asyncio.run(create_share_link("/volume1/test/file.zip", expires_in_sec=7200))
assert url == "https://nas/sharing/xyz"
assert expires_at is not None
def test_dsm_login_failure_raises():
"""login API success=False → DSMError."""
async def fake_get(self, url, *, params=None, **kw):
return _make_response({"success": False, "error": {"code": 400}})
with patch.object(httpx.AsyncClient, "get", new=fake_get):
with pytest.raises(DSMError, match="login 실패"):
asyncio.run(create_share_link("/volume1/test/file.zip"))
def test_dsm_share_failure_logs_out():
"""Sharing.create 실패해도 logout 호출 (try/finally)."""
call_order = []
async def fake_get(self, url, *, params=None, **kw):
method = (params or {}).get("method", "")
call_order.append(method)
if method == "login":
return _make_response({"success": True, "data": {"sid": "sid"}})
if method == "create":
return _make_response({"success": False, "error": {"code": 401}})
if method == "logout":
return _make_response({"success": True})
return _make_response({"success": False})
with patch.object(httpx.AsyncClient, "get", new=fake_get):
with pytest.raises(DSMError, match="Sharing.create 실패"):
asyncio.run(create_share_link("/volume1/test/file.zip"))
assert "login" in call_order
assert "logout" in call_order, "logout이 호출되지 않음 (finally 누락 의심)"
```
- [ ] **Step 2: Run the tests**
```bash
cd packs-lab
python -m pytest tests/test_dsm_client.py -v
```
Expected: 4 passed.
- [ ] **Step 3: Commit**
```bash
git add packs-lab/tests/test_dsm_client.py
git commit -m "test(packs-lab): DSM client mock 테스트 (login/share/logout 순서)"
```
---
## Task 5: Fix the DELETE route docstring
A one-line change to the routes.py module docstring.
**Files:**
- Modify: `packs-lab/app/routes.py:1-7` (module docstring)
- [ ] **Step 1: Update the docstring**
Change the first docstring of `packs-lab/app/routes.py` to:
```python
"""packs-lab API 엔드포인트.
- POST /api/packs/sign-link — Vercel HMAC 인증 → DSM 공유 링크
- POST /api/packs/admin/mint-token — Vercel HMAC 인증 → 일회성 upload 토큰
- POST /api/packs/upload — 일회성 토큰 인증 → multipart 저장 + supabase INSERT
- GET /api/packs/list — Vercel HMAC 인증 → pack_files 전체 조회
- DELETE /api/packs/{file_id} — Vercel HMAC 인증 → soft delete (DSM 공유는 자동 만료)
"""
```
(Changes: `정리` → `자동 만료`, plus the new mint-token line.)
- [ ] **Step 2: Regression check**
```bash
cd packs-lab
python -m pytest tests/ -v
```
Expected: all tests still pass (11 in test_routes + 4 in test_dsm_client = 15 passed, plus the existing test_auth cases).
- [ ] **Step 3: Commit**
```bash
git add packs-lab/app/routes.py
git commit -m "docs(packs-lab): routes 모듈 docstring 정리 (mint-token 추가, DSM 자동 만료 명시)"
```
---
## Task 6: Supabase `pack_files` DDL
The SQL file to run in the Supabase SQL editor when applying to production.
**Files:**
- Create: `packs-lab/supabase/pack_files.sql`
- [ ] **Step 1: Create the SQL file**
`packs-lab/supabase/pack_files.sql`:
```sql
-- pack_files: metadata for downloadable package files stored on the NAS
-- To apply in production: run in Supabase Dashboard → SQL editor
create table if not exists public.pack_files (
    id uuid primary key default gen_random_uuid(),
    min_tier text not null check (min_tier in ('starter','pro','master')),
    label text not null,
    file_path text not null unique,
    filename text not null,
    size_bytes bigint not null check (size_bytes > 0),
    sort_order integer not null default 0,
    uploaded_at timestamptz not null default now(),
    deleted_at timestamptz
);

-- list route hot path: deleted_at IS NULL + tier/order sort
create index if not exists pack_files_active_idx
    on public.pack_files (min_tier, sort_order)
    where deleted_at is null;

-- for soft-delete stats / a future cleanup job
create index if not exists pack_files_deleted_at_idx
    on public.pack_files (deleted_at)
    where deleted_at is not null;
```
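For reference, this is the query shape the partial index is meant to serve — a sketch based on the supabase-py chain mocked in `test_list_returns_active_only` (the exact column list in routes.py may differ):
```python
# Sketch of the list-route query served by pack_files_active_idx (illustrative only):
rows = (
    supabase.table("pack_files")
    .select("*")
    .is_("deleted_at", "null")   # matches the partial index predicate
    .order("min_tier")
    .order("sort_order")
    .execute()
    .data
)
```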
- [ ] **Step 2: Commit**
```bash
git add packs-lab/supabase/pack_files.sql
git commit -m "feat(packs-lab): Supabase pack_files DDL + 활성/삭제 인덱스"
```
---
## Task 7: Infra integration — docker-compose / nginx / .env.example / deploy-nas.sh
**Files:**
- Modify: `docker-compose.yml` (add the packs-lab service, including PACK_BASE_DIR/PACK_HOST_DIR in env)
- Modify: `nginx/default.conf` (`/api/packs/` routing)
- Modify: `.env.example` (6 DSM/HMAC/Supabase vars + 3 PACK path vars)
- Modify: `scripts/deploy-nas.sh` (add `packs-lab` to the SERVICES whitelist — without it the container never comes up on the NAS; see the step after the .env.example block below)
- [ ] **Step 1: docker-compose.yml — add the packs-lab service**
Add after another lab service definition (e.g. `realestate-lab`) in `docker-compose.yml`:
```yaml
packs-lab:
  build:
    context: ./packs-lab
    dockerfile: Dockerfile
  container_name: packs-lab
  restart: unless-stopped
  ports:
    - "18950:8000"
  environment:
    TZ: Asia/Seoul
    DSM_HOST: ${DSM_HOST}
    DSM_USER: ${DSM_USER}
    DSM_PASS: ${DSM_PASS}
    BACKEND_HMAC_SECRET: ${BACKEND_HMAC_SECRET}
    SUPABASE_URL: ${SUPABASE_URL}
    SUPABASE_SERVICE_KEY: ${SUPABASE_SERVICE_KEY}
    UPLOAD_TOKEN_TTL_SEC: ${UPLOAD_TOKEN_TTL_SEC:-1800}
  volumes:
    - ${PACK_DATA_PATH:-./data/packs}:/volume1/docker/webpage/media/packs
```
- [ ] **Step 2: nginx/default.conf — route /api/packs/**
Add after the existing `location /api/agent-office/ { ... }` block (or near the other `/api/...` routes):
```nginx
location /api/packs/ {
    proxy_pass http://packs-lab:8000;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
    # allow 5GB multipart uploads, streamed without buffering
    client_max_body_size 5G;
    proxy_request_buffering off;
    proxy_read_timeout 1800s;
    proxy_send_timeout 1800s;
}
```
- [ ] **Step 3: .env.example — add the 6+1 variables (plus the PACK path)**
Append to the end of `.env.example`:
```bash
# ─── packs-lab — NAS download automation ─────────────────────────────
# Synology DSM 7.x credentials (for issuing share links)
DSM_HOST=https://gahusb.synology.me:5001
DSM_USER=
DSM_PASS=
# HMAC secret shared by the Vercel SaaS and this backend (same value on both sides)
BACKEND_HMAC_SECRET=
# Supabase pack_files table access (service_role key, bypasses RLS)
SUPABASE_URL=https://<project>.supabase.co
SUPABASE_SERVICE_KEY=
# admin upload token TTL in seconds; default 1800 = 30 minutes
UPLOAD_TOKEN_TTL_SEC=1800
# local dev: ./data/packs / NAS production: /volume1/docker/webpage/media/packs
PACK_DATA_PATH=./data/packs
```
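- [ ] **Step 3.5: scripts/deploy-nas.sh — add `packs-lab` to the SERVICES whitelist**
This file is listed under **Files** above but had no step of its own; without the whitelist entry the webhook deploy never starts the container on the NAS. Confirm the exact variable name inside the script before editing, then include the file in the Step 5 commit.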
- [ ] **Step 4: Validate docker compose config**
```bash
cd C:\Users\jaeoh\Desktop\workspace\web-backend
docker compose config 2>&1 | grep -A 10 "packs-lab:"
```
Expected: the packs-lab service definition prints cleanly (port mapping, environment variables, and volumes all visible). `docker compose config` passes even with empty env values.
> ⚠️ Requires Docker locally; actual execution happens on the NAS. If Docker isn't installed locally, skip this step and only syntax-check the nginx config separately.
- [ ] **Step 5: Commit**
```bash
git add docker-compose.yml nginx/default.conf .env.example scripts/deploy-nas.sh
git commit -m "chore(infra): packs-lab 서비스 통합 (compose 18950 + nginx 5GB streaming + env 7개)"
```
---
## Task 8: NAS directory prep guide + documentation updates
**Files:**
- Modify: `web-backend/CLAUDE.md` (5 places)
- Modify: `workspace/CLAUDE.md` (1 line)
- [ ] **Step 1: web-backend/CLAUDE.md — section 1 (프로젝트 개요)**
Find this line (in the 1.프로젝트 개요 section):
```
- **서비스**: lotto-lab, stock-lab, travel-proxy, music-lab, blog-lab, realestate-lab, agent-office, personal, deployer (9개)
```
Change it to:
```
- **서비스**: lotto-lab, stock-lab, travel-proxy, music-lab, blog-lab, realestate-lab, agent-office, personal, packs-lab, deployer (10개)
```
And the infra line in the same section:
```
- **인프라**: Docker Compose (10컨테이너) + Nginx(리버스 프록시) + Gitea Webhook 자동 배포
```
- [ ] **Step 2: web-backend/CLAUDE.md — section 4 (Docker 서비스 표)**
Add a new row at the end of the table (just before the deployer row, or after personal — alphabetical order):
```
| `packs-lab` | 18950 | NAS 자료 다운로드 자동화 (DSM 공유 링크 + 5GB 업로드, Vercel SaaS와 HMAC 통신) |
```
- [ ] **Step 3: web-backend/CLAUDE.md — section 5 (Nginx 라우팅 표)**
Add a new row where it fits in the table:
```
| `/api/packs/` | `packs-lab:8000` | 5GB 업로드 대응 (`client_max_body_size 5G`, `proxy_request_buffering off`, 1800s timeout) |
```
- [ ] **Step 4: web-backend/CLAUDE.md — section 8 (로컬 개발 표)**
Add a new row at the end of the table:
```
| Packs Lab | http://localhost:18950 |
```
- [ ] **Step 5: web-backend/CLAUDE.md — new packs-lab section under section 9 (서비스별)**
Add right before the `### deployer (deployer/)` section (or after personal):
```
### packs-lab (packs-lab/)
- NAS 자료 다운로드 자동화 — Synology DSM 공유링크 발급 + 5GB 멀티파트 업로드 수신
- Vercel SaaS와 HMAC 인증으로 통신, 사용자 인증은 Vercel이 Supabase로 처리 (본 서비스는 외부 인증 없음)
- DB: 외부 Supabase `pack_files` 테이블 (DDL: `packs-lab/supabase/pack_files.sql`)
- 파일 구조: `app/main.py`, `app/auth.py`, `app/dsm_client.py`, `app/routes.py`, `app/models.py`
- 운영 디렉토리: `/volume1/docker/webpage/media/packs/{starter,pro,master}/` (NAS PUID:PGID 권한 필요)
**환경변수**
- `DSM_HOST` / `DSM_USER` / `DSM_PASS`: Synology DSM 7.x 인증 (공유 링크 발급용)
- `BACKEND_HMAC_SECRET`: Vercel SaaS와 양쪽 공유 시크릿 (HMAC SHA256)
- `SUPABASE_URL` / `SUPABASE_SERVICE_KEY`: Supabase pack_files 테이블 접근 (service_role, RLS 우회)
- `UPLOAD_TOKEN_TTL_SEC`: admin upload 토큰 TTL (기본 1800초 = 30분)
- `PACK_DATA_PATH`: 호스트 마운트 경로 (로컬 `./data/packs`, NAS `/volume1/docker/webpage/media/packs`)
**HMAC 인증 패턴**
- Vercel → backend 요청: `X-Timestamp` (UNIX 초) + `X-Signature` (HMAC_SHA256(timestamp + "." + body, secret))
- Replay 방어: 타임스탬프 ±5분 윈도우
- admin browser → backend upload: `Authorization: Bearer <token>` (jti 단발성)
**packs-lab API 목록**
| 메서드 | 경로 | 설명 |
|--------|------|------|
| POST | `/api/packs/sign-link` | Vercel HMAC → DSM Sharing.create로 4시간 유효 다운로드 URL 발급 |
| POST | `/api/packs/admin/mint-token` | Vercel HMAC → 일회성 upload 토큰 발급 (기본 30분 TTL) |
| POST | `/api/packs/upload` | Bearer token → multipart 5GB 저장 + Supabase INSERT |
| GET | `/api/packs/list` | Vercel HMAC → 활성 pack_files 목록 (deleted_at IS NULL) |
| DELETE | `/api/packs/{file_id}` | Vercel HMAC → soft delete (DSM 공유는 자동 만료) |
```
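The HMAC pattern documented above pairs with server-side verification. A minimal sketch of the verifying side, assuming the ±5-minute replay window described (the real `verify_request_hmac` in app/auth.py may differ in detail):
```python
# Hypothetical verification sketch; confirm against app/auth.py:
import hashlib
import hmac
import time

def verify(ts: str, signature: str, body: bytes, secret: str, window_sec: int = 300) -> bool:
    if abs(time.time() - int(ts)) > window_sec:  # replay defense: timestamp within ±5 minutes
        return False
    expected = hmac.new(secret.encode(), ts.encode() + b"." + body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)
```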
- [ ] **Step 6: workspace/CLAUDE.md — add one row to the container table**
Add to the "Docker 서비스 & 포트" table in `workspace/CLAUDE.md`:
```
| `packs-lab` | 18950 | NAS 자료 다운로드 자동화 (Vercel SaaS와 HMAC 통신) |
```
(after the personal row, or wherever it fits)
- [ ] **Step 7: Commit (only the web-backend repo's CLAUDE.md)**
The working directory is `C:\Users\jaeoh\Desktop\workspace\web-backend`; only the `CLAUDE.md` inside it is git-tracked.
```bash
git add CLAUDE.md
git commit -m "docs(claude): packs-lab 10번째 서비스로 등록 (포트/라우팅/API 표 + 신규 섹션)"
```
> `workspace/CLAUDE.md` (the workspace memo one directory up) is not a git repo. Edit the text only; exclude it from the commit.
---
## Task 9: Regression check + NAS directory guide
Full test run + docker compose config + the pre-deploy guide for the NAS.
**Files:**
- (verification only)
- [ ] **Step 1: Full pytest run**
```bash
cd packs-lab
python -m pytest tests/ -v
```
Expected: all tests pass (test_auth + test_routes + test_dsm_client ≈ 15+ tests).
- [ ] **Step 2: Validate docker compose config**
```bash
cd C:\Users\jaeoh\Desktop\workspace\web-backend
docker compose config 2>&1 | tail -30
```
Expected: the full config, including packs-lab, prints without errors.
> ⚠️ Skip if Docker isn't installed; it gets validated on the NAS at webhook-deploy time after git push.
- [ ] **Step 3: Print the pre-deploy NAS guide**
Collect the commands to run once over SSH on the NAS before deploying, in the README or a NAS deploy note. This task only lists the commands (the user runs them):
```bash
# after SSHing into the NAS
mkdir -p /volume1/docker/webpage/media/packs/{starter,pro,master}
chown -R PUID:PGID /volume1/docker/webpage/media/packs  # use the PUID/PGID values from .env
# add the new variables to .env (DSM_*, BACKEND_HMAC_SECRET, SUPABASE_*, UPLOAD_TOKEN_TTL_SEC, PACK_DATA_PATH=/volume1/docker/webpage/media/packs)
# run packs-lab/supabase/pack_files.sql in Supabase
# git push; the webhook deploys automatically
```
- [ ] **Step 4: Final commit (optional empty milestone commit for the verification results)**
```bash
# commit if any of the steps above made automatic fixes; otherwise skip
git status
```
If the regression check produced no changes, finish without a separate commit.
---
## Completion criteria
- Every step of every task passes (all checkboxes checked)
- `cd packs-lab && python -m pytest tests/ -v` — passes (test_auth + test_routes + test_dsm_client)
- `docker compose config` — full config including packs-lab is valid
- web-backend/CLAUDE.md updated in 5 places + 1 line in workspace/CLAUDE.md
- Supabase DDL file exists (the user applies it via the SQL editor on the NAS side)
- The user runs the NAS directory prep commands once over SSH before deploying
---
## Deployment
git push → Gitea webhook → deployer rsync → docker compose up -d --build (automatic).
**One-time user actions before deploying**:
1. Create the `pack_files` table in Supabase (run the DDL)
2. Create `/volume1/docker/webpage/media/packs/{starter,pro,master}` over NAS SSH + set permissions
3. Add the 7 new variables to the NAS `.env` (DSM credentials, HMAC secret, Supabase keys, etc.)
---
## Notes — follow-up plans (out of scope)
- Vercel SaaS-side admin UI / user download UI / Supabase user table
- DSM share tracking (if immediate revocation is needed)
- cron that hard-deletes files N days after deleted_at
- multi-admin separation of token-minting privileges
- resumable multipart upload (5GB tus, etc.)
- pack_files sort_order editing endpoint
- monitoring (upload failure rate, DSM API latency)

File diff suppressed because it is too large.


@@ -0,0 +1,737 @@
# GPU Video Encoding Offload — Implementation Plan
> **For agentic workers:** REQUIRED SUB-SKILL: Use superpowers:subagent-driven-development.
**Goal:** Offload the NAS's ffmpeg video encoding to NVENC on a Windows PC (RTX 5070 Ti).
**Architecture:** music-lab (NAS) → HTTP POST → music_ai (Windows, port 8765 `/encode_video`) → ffmpeg NVENC → mp4 written directly to the NAS over SMB. If the Windows server is down, the NAS fails immediately.
**Tech Stack:** httpx (NAS-side HTTP client), FastAPI (Windows server endpoint), ffmpeg.exe with NVENC.
**Spec:** `docs/superpowers/specs/2026-05-09-gpu-video-offload-design.md`
---
## File Structure
| Path | Responsibility |
|------|----------------|
| `music_ai/video_encoder.py` (new) | path translation + ffmpeg NVENC subprocess call + validation |
| `music_ai/server.py` (modify) | register the `/encode_video` POST endpoint, add ffmpeg/nvenc info to `/health` |
| `music_ai/.env.example` (modify) | document NAS_VOLUME_PREFIX, WINDOWS_DRIVE_ROOT, FFMPEG_PATH |
| `music_ai/tests/test_video_encoder.py` (new) | unit tests for translate_path and the encode endpoint |
| `music-lab/app/pipeline/video.py` (rewrite) | drop subprocess, call the Windows server via httpx |
| `music-lab/tests/test_video_thumb.py` (rewrite video tests) | respx-mock based |
| `web-backend/docker-compose.yml` (modify) | add 3 music-lab env vars |
---
## Task 1: Windows `music_ai/video_encoder.py` + tests
**Files:**
- Create: `music_ai/video_encoder.py`
- Create: `music_ai/tests/test_video_encoder.py`
### Step 1: Write failing test
```python
# music_ai/tests/test_video_encoder.py
import pytest
from unittest.mock import patch, MagicMock

from video_encoder import translate_path, encode_video, EncodeError


@pytest.fixture
def env(monkeypatch):
    monkeypatch.setenv("NAS_VOLUME_PREFIX", "/volume1/")
    monkeypatch.setenv("WINDOWS_DRIVE_ROOT", "Z:\\")
    monkeypatch.setenv("FFMPEG_PATH", "C:\\ffmpeg\\bin\\ffmpeg.exe")


def test_translate_path_basic(env):
    assert translate_path("/volume1/docker/webpage/data/x.jpg") == r"Z:\docker\webpage\data\x.jpg"


def test_translate_path_nested(env):
    assert translate_path("/volume1/docker/webpage/data/videos/3/cover.jpg") == r"Z:\docker\webpage\data\videos\3\cover.jpg"


def test_translate_path_rejects_bad_prefix(env):
    with pytest.raises(ValueError):
        translate_path("/etc/passwd")


@patch("subprocess.run")
def test_encode_video_success(mock_run, env, tmp_path):
    # fake input files
    cover = tmp_path / "cover.jpg"
    cover.write_bytes(b"\x00" * 100)
    audio = tmp_path / "audio.mp3"
    audio.write_bytes(b"\x00" * 100)
    out = tmp_path / "video.mp4"

    def fake_run(cmd, **kwargs):
        # imitate ffmpeg by producing the output file
        out.write_bytes(b"\x00" * (2 * 1024 * 1024))  # 2MB
        return MagicMock(returncode=0, stderr="")

    mock_run.side_effect = fake_run
    # mock translate_path so the input paths are used directly
    with patch("video_encoder.translate_path", side_effect=lambda p: str(p).replace("/volume1/", str(tmp_path) + "/")):
        result = encode_video(
            cover_path_nas="/volume1/cover.jpg",
            audio_path_nas="/volume1/audio.mp3",
            output_path_nas="/volume1/video.mp4",
            resolution="1920x1080",
            duration_sec=120,
        )
    assert result["ok"] is True
    assert result["encoder"] == "h264_nvenc"
    assert result["output_bytes"] > 1024 * 1024


@patch("subprocess.run")
def test_encode_video_input_missing(mock_run, env, tmp_path):
    with pytest.raises(EncodeError) as exc:
        encode_video(
            cover_path_nas="/volume1/missing.jpg",
            audio_path_nas="/volume1/missing.mp3",
            output_path_nas="/volume1/out.mp4",
            resolution="1920x1080",
            duration_sec=120,
        )
    assert "input_validation" in str(exc.value)


@patch("subprocess.run")
def test_encode_video_ffmpeg_failure(mock_run, env, tmp_path):
    cover = tmp_path / "cover.jpg"; cover.write_bytes(b"\x00")
    audio = tmp_path / "audio.mp3"; audio.write_bytes(b"\x00")
    mock_run.return_value = MagicMock(returncode=1, stderr="invalid codec\n" * 50)
    with patch("video_encoder.translate_path", side_effect=lambda p: str(p).replace("/volume1/", str(tmp_path) + "/")):
        with pytest.raises(EncodeError) as exc:
            encode_video(
                cover_path_nas="/volume1/cover.jpg",
                audio_path_nas="/volume1/audio.mp3",
                output_path_nas="/volume1/out.mp4",
                resolution="1920x1080",
                duration_sec=120,
            )
    assert "ffmpeg" in str(exc.value).lower()


@patch("subprocess.run")
def test_encode_video_output_too_small(mock_run, env, tmp_path):
    cover = tmp_path / "cover.jpg"; cover.write_bytes(b"\x00")
    audio = tmp_path / "audio.mp3"; audio.write_bytes(b"\x00")

    def fake_run(cmd, **kwargs):
        (tmp_path / "out.mp4").write_bytes(b"\x00" * 100)  # 100 bytes — too small
        return MagicMock(returncode=0, stderr="")

    mock_run.side_effect = fake_run
    with patch("video_encoder.translate_path", side_effect=lambda p: str(p).replace("/volume1/", str(tmp_path) + "/")):
        with pytest.raises(EncodeError) as exc:
            encode_video(
                cover_path_nas="/volume1/cover.jpg",
                audio_path_nas="/volume1/audio.mp3",
                output_path_nas="/volume1/out.mp4",
                resolution="1920x1080",
                duration_sec=120,
            )
    assert "output_check" in str(exc.value)


def test_resolution_validation(env):
    with pytest.raises(EncodeError) as exc:
        encode_video(
            cover_path_nas="/volume1/x.jpg",
            audio_path_nas="/volume1/x.mp3",
            output_path_nas="/volume1/out.mp4",
            resolution="invalid",
            duration_sec=120,
        )
    assert "resolution" in str(exc.value).lower()
```
### Step 2: Run test to verify it fails
```bash
cd music_ai && python -m pytest tests/test_video_encoder.py -v
```
Expected: ImportError on `video_encoder` module.
### Step 3: Implement `video_encoder.py`
```python
"""GPU(NVENC) 영상 인코더 — NAS music-lab에서 호출."""
import os
import re
import subprocess
import logging
logger = logging.getLogger("music_ai.video_encoder")
NAS_VOLUME_PREFIX = os.getenv("NAS_VOLUME_PREFIX", "/volume1/")
WINDOWS_DRIVE_ROOT = os.getenv("WINDOWS_DRIVE_ROOT", "Z:\\")
FFMPEG_PATH = os.getenv("FFMPEG_PATH", "ffmpeg")
FFMPEG_TIMEOUT_S = 180
RESOLUTION_RE = re.compile(r"^\d{3,4}x\d{3,4}$")
MIN_OUTPUT_BYTES = 1024 * 1024 # 1MB
class EncodeError(Exception):
"""{stage: input_validation|path_translate|ffmpeg|output_check, message: ...}"""
def __init__(self, stage: str, message: str):
self.stage = stage
self.message = message
super().__init__(f"[{stage}] {message}")
def translate_path(nas_path: str) -> str:
"""NAS 절대경로 → Windows SMB 경로."""
if not nas_path.startswith(NAS_VOLUME_PREFIX):
raise ValueError(f"NAS prefix 불일치: {nas_path}")
rel = nas_path[len(NAS_VOLUME_PREFIX):]
return WINDOWS_DRIVE_ROOT + rel.replace("/", "\\")
def encode_video(*, cover_path_nas: str, audio_path_nas: str,
output_path_nas: str, resolution: str,
duration_sec: int = 0, style: str = "visualizer") -> dict:
"""영상 인코딩 + Z:\\에 직접 저장."""
# 1) Resolution 검증
if not RESOLUTION_RE.match(resolution):
raise EncodeError("input_validation", f"invalid resolution: {resolution}")
w, h = resolution.split("x")
# 2) 경로 변환
try:
cover_win = translate_path(cover_path_nas)
audio_win = translate_path(audio_path_nas)
out_win = translate_path(output_path_nas)
except ValueError as e:
raise EncodeError("path_translate", str(e))
# 3) 입력 존재 확인
if not os.path.isfile(cover_win):
raise EncodeError("input_validation", f"cover not found: {cover_win}")
if not os.path.isfile(audio_win):
raise EncodeError("input_validation", f"audio not found: {audio_win}")
# 4) 출력 디렉토리 보장
os.makedirs(os.path.dirname(out_win), exist_ok=True)
# 5) ffmpeg 명령
cmd = [
FFMPEG_PATH, "-y",
"-hwaccel", "cuda",
"-loop", "1", "-i", cover_win,
"-i", audio_win,
"-filter_complex",
f"[0:v]scale={w}:{h},format=yuv420p[bg];"
f"[1:a]showwaves=s={w}x200:mode=cline:colors=0xFF4444@0.8[wave];"
f"[bg][wave]overlay=0:({h}-200)[out]",
"-map", "[out]", "-map", "1:a",
"-c:v", "h264_nvenc",
"-preset", "p4",
"-rc", "vbr",
"-cq", "23",
"-b:v", "0",
"-pix_fmt", "yuv420p",
"-c:a", "aac", "-b:a", "192k",
"-shortest", out_win,
]
logger.info("ffmpeg: %s", " ".join(cmd))
# 6) ffmpeg 실행
import time
t0 = time.time()
try:
result = subprocess.run(cmd, capture_output=True, text=True, timeout=FFMPEG_TIMEOUT_S)
except subprocess.TimeoutExpired:
raise EncodeError("ffmpeg", f"timeout after {FFMPEG_TIMEOUT_S}s")
duration_ms = int((time.time() - t0) * 1000)
if result.returncode != 0:
raise EncodeError("ffmpeg", f"returncode={result.returncode}: {result.stderr[-800:]}")
# 7) 출력 검증
if not os.path.isfile(out_win):
raise EncodeError("output_check", "output file not created")
output_bytes = os.path.getsize(out_win)
if output_bytes < MIN_OUTPUT_BYTES:
raise EncodeError("output_check", f"output too small: {output_bytes} bytes")
return {
"ok": True,
"duration_ms": duration_ms,
"output_path_nas": output_path_nas,
"output_bytes": output_bytes,
"encoder": "h264_nvenc",
"preset": "p4",
}
def check_ffmpeg_nvenc() -> bool:
"""서버 시작 시 NVENC 가용성 확인."""
try:
result = subprocess.run(
[FFMPEG_PATH, "-encoders"],
capture_output=True, text=True, timeout=10,
)
return "h264_nvenc" in result.stdout
except Exception:
return False
```
### Step 4: Run tests
```bash
cd music_ai && python -m pytest tests/test_video_encoder.py -v
```
Expected: 8 PASS (3 translate_path + 4 encode_video + 1 resolution validation)
### Step 5: Commit
```bash
cd C:/Users/jaeoh/Desktop/workspace/music_ai
git init 2>/dev/null || true # may not be a git repo, that's OK
# music_ai is local-only per CLAUDE.md, no remote push
```
(music_ai is local-only; just save the file. No git push needed.)
---
## Task 2: Windows `music_ai/server.py` — `/encode_video` endpoint + extended health
**Files:**
- Modify: `music_ai/server.py`
- Modify: `music_ai/.env.example`
### Step 1: Read existing server.py to understand FastAPI pattern + existing /health
### Step 2: Add `/encode_video` endpoint
```python
# server.py — additions
from pydantic import BaseModel
from fastapi import HTTPException
import video_encoder


class EncodeVideoRequest(BaseModel):
    cover_path_nas: str
    audio_path_nas: str
    output_path_nas: str
    resolution: str = "1920x1080"
    duration_sec: int = 0
    style: str = "visualizer"


@app.post("/encode_video")
def encode_video_endpoint(req: EncodeVideoRequest):
    try:
        result = video_encoder.encode_video(
            cover_path_nas=req.cover_path_nas,
            audio_path_nas=req.audio_path_nas,
            output_path_nas=req.output_path_nas,
            resolution=req.resolution,
            duration_sec=req.duration_sec,
            style=req.style,
        )
        return result
    except video_encoder.EncodeError as e:
        # input_validation, path_translate → 400
        # ffmpeg, output_check → 500
        status_code = 400 if e.stage in ("input_validation", "path_translate") else 500
        raise HTTPException(
            status_code=status_code,
            detail={"ok": False, "stage": e.stage, "error": e.message},
        )
```
### Step 3: Extended `/health`
Add to the existing `/health` response:
```python
import torch  # if the existing health endpoint uses it
import video_encoder

# module-level cache so health doesn't run ffmpeg on every call
_FFMPEG_NVENC_CACHED = None


def _ffmpeg_nvenc_available():
    global _FFMPEG_NVENC_CACHED
    if _FFMPEG_NVENC_CACHED is None:
        _FFMPEG_NVENC_CACHED = video_encoder.check_ffmpeg_nvenc()
    return _FFMPEG_NVENC_CACHED


@app.get("/health")
def health():
    return {
        "ok": True,
        "gpu": torch.cuda.get_device_name(0) if torch.cuda.is_available() else None,  # or keep the existing shape
        "musicgen_loaded": True,  # unchanged
        "ffmpeg_path": video_encoder.FFMPEG_PATH,
        "ffmpeg_nvenc": _ffmpeg_nvenc_available(),
    }
```
(Read the code and match the existing `/health` shape exactly; the above is an example.)
### Step 4: Update `.env.example`
```env
# Existing
MODEL_NAME=facebook/musicgen-stereo-large
OUTPUT_DIR=output
SERVER_PORT=8765
# New for video encoder
NAS_VOLUME_PREFIX=/volume1/
WINDOWS_DRIVE_ROOT=Z:\
FFMPEG_PATH=C:\ffmpeg\bin\ffmpeg.exe
```
### Step 5: Manual verification
```bash
cd music_ai && start.bat  # or the appropriate start command
curl http://localhost:8765/health
# Expected: {..., "ffmpeg_nvenc": true}
curl -X POST http://localhost:8765/encode_video -H "Content-Type: application/json" -d '{
  "cover_path_nas": "/volume1/docker/webpage/data/videos/3/cover.jpg",
  "audio_path_nas": "/volume1/docker/webpage/data/1c695df3-8a82-4c09-ba7b-82c07608ec5b.mp3",
  "output_path_nas": "/volume1/docker/webpage/data/videos/test/video.mp4",
  "resolution": "1920x1080",
  "duration_sec": 176
}'
# Expected: 200, with duration_ms around 10-20 seconds
```
(Adjust the actual file paths to your environment.)
### Step 6: Commit (music_ai is local-only, no remote)
---
## Task 3: NAS music-lab — rewrite `pipeline/video.py` + tests
**Files:**
- Rewrite: `music-lab/app/pipeline/video.py`
- Rewrite: `music-lab/tests/test_video_thumb.py` (video tests only)
### Step 1: Replace failing tests
```python
# music-lab/tests/test_video_thumb.py — replace only the video-related tests
import pytest
import respx
import httpx
from httpx import Response

from app.pipeline import video, thumb, storage


@pytest.fixture
def encoder_env(monkeypatch):
    monkeypatch.setenv("WINDOWS_VIDEO_ENCODER_URL", "http://192.168.45.59:8765")
    monkeypatch.setattr(video, "ENCODER_URL", "http://192.168.45.59:8765")


@respx.mock
def test_generate_video_calls_remote_encoder(encoder_env, tmp_path, monkeypatch):
    monkeypatch.setattr(storage, "VIDEO_DATA_DIR", str(tmp_path))
    respx.post("http://192.168.45.59:8765/encode_video").mock(
        return_value=Response(200, json={
            "ok": True, "duration_ms": 12000,
            "output_path_nas": "/volume1/docker/webpage/data/videos/3/video.mp4",
            "output_bytes": 28000000,
            "encoder": "h264_nvenc", "preset": "p4",
        })
    )
    out = video.generate(
        pipeline_id=3,
        audio_path="/app/data/1c695df3.mp3",
        cover_path="/app/data/videos/3/cover.jpg",
        genre="lo-fi", duration_sec=120, resolution="1920x1080",
        style="visualizer",
    )
    assert out["url"].endswith("/3/video.mp4")
    assert out["used_fallback"] is False
    assert out["encode_duration_ms"] == 12000


@respx.mock
def test_generate_video_raises_on_connection_error(encoder_env, monkeypatch, tmp_path):
    monkeypatch.setattr(storage, "VIDEO_DATA_DIR", str(tmp_path))
    respx.post("http://192.168.45.59:8765/encode_video").mock(
        side_effect=httpx.ConnectError("Connection refused")
    )
    with pytest.raises(video.VideoGenerationError) as exc:
        video.generate(
            pipeline_id=4,
            audio_path="/app/data/x.mp3", cover_path="/app/data/videos/4/cover.jpg",
            genre="lo-fi", duration_sec=120, resolution="1920x1080",
        )
    assert "연결 실패" in str(exc.value) or "Connection" in str(exc.value)


@respx.mock
def test_generate_video_raises_on_500(encoder_env, monkeypatch, tmp_path):
    monkeypatch.setattr(storage, "VIDEO_DATA_DIR", str(tmp_path))
    respx.post("http://192.168.45.59:8765/encode_video").mock(
        return_value=Response(500, json={"ok": False, "stage": "ffmpeg", "error": "bad codec"})
    )
    with pytest.raises(video.VideoGenerationError) as exc:
        video.generate(
            pipeline_id=5,
            audio_path="/app/data/x.mp3", cover_path="/app/data/videos/5/cover.jpg",
            genre="lo-fi", duration_sec=120, resolution="1920x1080",
        )
    assert "Windows 인코더 오류" in str(exc.value)
    assert "ffmpeg" in str(exc.value)


def test_generate_video_no_url_configured(monkeypatch, tmp_path):
    monkeypatch.setattr(storage, "VIDEO_DATA_DIR", str(tmp_path))
    monkeypatch.setattr(video, "ENCODER_URL", "")
    with pytest.raises(video.VideoGenerationError) as exc:
        video.generate(
            pipeline_id=6,
            audio_path="/app/data/x.mp3", cover_path="/app/data/videos/6/cover.jpg",
            genre="lo-fi", duration_sec=120, resolution="1920x1080",
        )
    assert "WINDOWS_VIDEO_ENCODER_URL" in str(exc.value)


def test_container_to_nas_videos_path(monkeypatch):
    monkeypatch.setenv("NAS_VIDEOS_ROOT", "/volume1/docker/webpage/data/videos")
    monkeypatch.setenv("NAS_MUSIC_ROOT", "/volume1/docker/webpage/data/music")
    assert video._container_to_nas("/app/data/videos/3/cover.jpg") == "/volume1/docker/webpage/data/videos/3/cover.jpg"


def test_container_to_nas_music_path(monkeypatch):
    monkeypatch.setenv("NAS_VIDEOS_ROOT", "/volume1/docker/webpage/data/videos")
    monkeypatch.setenv("NAS_MUSIC_ROOT", "/volume1/docker/webpage/data/music")
    assert video._container_to_nas("/app/data/abc.mp3") == "/volume1/docker/webpage/data/music/abc.mp3"
```
Delete the old `test_generate_video_calls_ffmpeg` and `test_generate_video_failure_marks_failed`; keep the thumb tests as they are.
### Step 2: Run, verify fail
```bash
cd music-lab && python -m pytest tests/test_video_thumb.py -v
```
Expected: the video-related tests fail (or ImportError).
### Step 3: Rewrite `app/pipeline/video.py`
```python
"""영상 비주얼 생성 — Windows GPU 서버 (NVENC) 호출.
Windows 서버 다운/실패 시 즉시 예외 (NAS 로컬 폴백 없음 — 의도적 결정).
"""
import os
import logging
import httpx
from . import storage
logger = logging.getLogger("music-lab.video")
ENCODER_URL = os.getenv("WINDOWS_VIDEO_ENCODER_URL", "")
ENCODER_TIMEOUT_S = 200 # Windows 서버 ffmpeg 180s + 마진
# NAS 호스트 절대경로 prefix — docker bind mount의 host 측
NAS_VIDEOS_ROOT = os.getenv("NAS_VIDEOS_ROOT", "/volume1/docker/webpage/data/videos")
NAS_MUSIC_ROOT = os.getenv("NAS_MUSIC_ROOT", "/volume1/docker/webpage/data/music")
class VideoGenerationError(Exception):
pass
def generate(*, pipeline_id: int, audio_path: str, cover_path: str,
genre: str, duration_sec: int, resolution: str = "1920x1080",
style: str = "visualizer") -> dict:
"""원격 Windows GPU 서버 호출. 다운/실패 시 즉시 예외."""
if not ENCODER_URL:
raise VideoGenerationError(
"WINDOWS_VIDEO_ENCODER_URL 미설정 — Windows 인코더 서버 주소 필요"
)
out_path = os.path.join(storage.pipeline_dir(pipeline_id), "video.mp4")
nas_audio = _container_to_nas(audio_path)
nas_cover = _container_to_nas(cover_path)
nas_output = _container_to_nas(out_path)
payload = {
"cover_path_nas": nas_cover,
"audio_path_nas": nas_audio,
"output_path_nas": nas_output,
"resolution": resolution,
"duration_sec": duration_sec,
"style": style,
}
logger.info("Windows 인코더 호출: pipeline=%d audio=%s", pipeline_id, audio_path)
try:
with httpx.Client(timeout=ENCODER_TIMEOUT_S) as client:
resp = client.post(f"{ENCODER_URL}/encode_video", json=payload)
except (httpx.ConnectError, httpx.ReadTimeout, httpx.WriteTimeout, httpx.NetworkError) as e:
raise VideoGenerationError(f"Windows 인코더 연결 실패: {e}")
if resp.status_code != 200:
try:
detail = resp.json().get("detail", resp.json())
except Exception:
detail = {"error": resp.text[:300]}
stage = detail.get("stage", "?") if isinstance(detail, dict) else "?"
error = detail.get("error", str(detail)) if isinstance(detail, dict) else str(detail)
raise VideoGenerationError(
f"Windows 인코더 오류 ({resp.status_code}): {stage}{error}"
)
data = resp.json()
if not data.get("ok"):
raise VideoGenerationError(f"Windows 인코더 응답 ok=false: {data}")
return {
"url": storage.media_url(pipeline_id, "video.mp4"),
"used_fallback": False,
"duration_sec": duration_sec,
"encode_duration_ms": data.get("duration_ms"),
"encoder": data.get("encoder", "h264_nvenc"),
}
def _container_to_nas(container_path: str) -> str:
""" /app/data/videos/3/cover.jpg → /volume1/docker/webpage/data/videos/3/cover.jpg
/app/data/abc.mp3 → /volume1/docker/webpage/data/music/abc.mp3
"""
if container_path.startswith("/app/data/videos/"):
return container_path.replace("/app/data/videos/", NAS_VIDEOS_ROOT + "/", 1)
if container_path.startswith("/app/data/"):
rel = container_path[len("/app/data/"):]
return NAS_MUSIC_ROOT + "/" + rel
return container_path
```
### Step 4: Run tests
```bash
cd music-lab && python -m pytest tests/ -v
```
Expected: all music-lab tests pass — the previous 73, minus the 2 removed video tests, plus the 6 new ones (= 77); confirm the actual count in the run.
### Step 5: Commit + push
```bash
git -C C:/Users/jaeoh/Desktop/workspace/web-backend add music-lab/app/pipeline/video.py \
music-lab/tests/test_video_thumb.py
git -C C:/Users/jaeoh/Desktop/workspace/web-backend commit -m "feat(music-lab): 영상 인코딩을 Windows GPU 서버로 오프로드
- pipeline/video.py 재작성: subprocess.run 제거, httpx로 192.168.45.59:8765/encode_video 호출
- Windows 서버 다운 시 즉시 VideoGenerationError (NAS 로컬 폴백 X)
- /app/data/* → /volume1/docker/webpage/data/* 경로 변환 (_container_to_nas)
- 테스트는 respx mock 기반으로 교체 (6개 신규)"
git -C C:/Users/jaeoh/Desktop/workspace/web-backend push origin main
```
---
## Task 4: Add docker-compose.yml env vars
**Files:**
- Modify: `web-backend/docker-compose.yml`
### Step 1: Add to the music-lab service environment
```yaml
music-lab:
  environment:
    # ... existing ...
    - WINDOWS_VIDEO_ENCODER_URL=${WINDOWS_VIDEO_ENCODER_URL}
    - NAS_VIDEOS_ROOT=${NAS_VIDEOS_ROOT:-/volume1/docker/webpage/data/videos}
    - NAS_MUSIC_ROOT=${NAS_MUSIC_ROOT:-/volume1/docker/webpage/data/music}
```
### Step 2: Validate docker-compose syntax
```bash
cd C:/Users/jaeoh/Desktop/workspace/web-backend && python -c "import yaml; yaml.safe_load(open('docker-compose.yml'))" && echo OK
```
### Step 3: Commit + push
```bash
git -C C:/Users/jaeoh/Desktop/workspace/web-backend add docker-compose.yml
git -C C:/Users/jaeoh/Desktop/workspace/web-backend commit -m "chore(infra): GPU 인코더 env 추가 (WINDOWS_VIDEO_ENCODER_URL)"
git -C C:/Users/jaeoh/Desktop/workspace/web-backend push origin main
```
---
## Task 5: Manual user steps (done by a human)
Follow-up steps, not code work:
1. **Windows PC: install ffmpeg + set PATH**
   - https://www.gyan.dev/ffmpeg/builds/ → download "release full"
   - Extract to `C:\ffmpeg\` → confirm `C:\ffmpeg\bin\ffmpeg.exe`
   - Add `C:\ffmpeg\bin` to the system PATH
   - Verify: `ffmpeg -version` + `ffmpeg -encoders | findstr h264_nvenc`
2. **Windows PC: add to `music_ai/.env`**
   ```env
   NAS_VOLUME_PREFIX=/volume1/
   WINDOWS_DRIVE_ROOT=Z:\
   FFMPEG_PATH=C:\ffmpeg\bin\ffmpeg.exe
   ```
3. **Windows PC: confirm the SMB mount** — `Z:\docker\webpage\data\` is reachable
4. **Windows PC: restart the `music_ai` server** — `start.bat`
5. **Windows PC health check** — `curl http://localhost:8765/health` → confirm `ffmpeg_nvenc: true`
6. **Add to the NAS `.env`**
   ```env
   WINDOWS_VIDEO_ENCODER_URL=http://192.168.45.59:8765
   ```
7. **Restart NAS music-lab** — `docker compose up -d music-lab`
8. **E2E test** — start a new pipeline from the progress tab and confirm the video step finishes in 10-20 seconds
---
## Self-Review
**Spec coverage:**
- §4 Windows endpoint → Tasks 1, 2 ✓
- §5 NAS video.py → Task 3 ✓
- §6 error handling → Task 3 (httpx exception catch) ✓
- §7 health monitoring → Task 2 (`/health` extension) ✓
- §8 tests → Tasks 1, 3 ✓
- §9 Windows prerequisites → Task 5 (manual user steps) ✓
- §10 deliverables → all covered across the 4 tasks
**Placeholder scan:** none.
**Type consistency:**
- `EncodeError(stage, message)` defined in Task 1; `e.stage`/`e.message` used in Task 2 ✓
- `VideoGenerationError` raised in Task 3, caught by the existing orchestrator ✓
- response JSON shape matches spec §4-2 ✓
- environment variable names consistent (`NAS_VOLUME_PREFIX`, `WINDOWS_DRIVE_ROOT`, `FFMPEG_PATH`, `WINDOWS_VIDEO_ENCODER_URL`, `NAS_VIDEOS_ROOT`, `NAS_MUSIC_ROOT`)
---


@@ -0,0 +1,815 @@
# Batch Music Generation — Implementation Plan
> **For agentic workers:** Use `superpowers:subagent-driven-development`. Steps use `- [ ]` checkboxes.
**Goal:** From a single genre, auto-generate N (1-10) tracks sequentially via Suno + auto-compile + auto-start the video pipeline.
**Architecture:** A new music-lab `batch_generator` module runs N Suno calls as a BackgroundTask → auto-creates a compile_job → auto-calls orchestrator.run_step("cover").
**Spec:** `docs/superpowers/specs/2026-05-10-batch-music-generation-design.md`
---
## File Structure
| Path | Responsibility |
|------|----------------|
| `music-lab/app/db.py` (modify) | `music_batch_jobs` table + 5 helpers |
| `music-lab/app/random_pools.py` (new) | per-genre mood/instr/BPM/key/scale random pools + `randomize()` |
| `music-lab/app/batch_generator.py` (new) | `run_batch(batch_id)` sequential orchestration |
| `music-lab/app/main.py` (modify) | 3 endpoints (POST /generate-batch, GET /:id, GET list) |
| `web-ui/src/api.js` (modify) | 3 helpers |
| `web-ui/src/pages/music/components/BatchProgress.jsx` (new) | progress display component |
| `web-ui/src/pages/music/MusicStudio.jsx` (modify) | batch section in the Create tab + polling |
| `web-ui/src/pages/music/MusicStudio.css` (modify) | batch section styles |
---
## Task 1: DB table + helpers + random_pools
**Files:**
- Modify: `music-lab/app/db.py`
- Create: `music-lab/app/random_pools.py`
- Test: `music-lab/tests/test_batch_db.py`
- [ ] **Step 1: Write random_pools.py**
```python
"""장르별 음악 파라미터 랜덤 풀."""
import random
POOLS = {
"lo-fi": {
"moods": ["chill", "relaxing", "dreamy", "melancholic", "mellow", "nostalgic", "peaceful"],
"instruments_pool": ["piano", "synth", "drums", "vinyl", "rhodes", "soft bass", "ambient pads"],
"instruments_count": (3, 4),
"bpm": (70, 90),
"keys": ["C", "D", "F", "G", "A"],
"scales": ["minor", "major"],
"prompt_modifiers": ["cozy bedroom vibes", "rainy night", "late night study", "cafe ambience"],
},
"phonk": {
"moods": ["dark", "aggressive", "moody", "intense", "hypnotic"],
"instruments_pool": ["808 bass", "hi-hat", "synth lead", "vocal chops", "bass drops", "trap drums"],
"instruments_count": (3, 4),
"bpm": (130, 160),
"keys": ["C", "D", "F", "G"],
"scales": ["minor"],
"prompt_modifiers": ["drift atmosphere", "dark neon", "midnight drive"],
},
"ambient": {
"moods": ["peaceful", "meditative", "ethereal", "spacious", "dreamy"],
"instruments_pool": ["pad synths", "atmospheric guitar", "soft strings", "field recordings", "drone bass"],
"instruments_count": (2, 3),
"bpm": (50, 75),
"keys": ["C", "D", "E", "G", "A"],
"scales": ["major", "minor"],
"prompt_modifiers": ["misty mountain morning", "deep space", "still water", "forest dawn"],
},
"pop": {
"moods": ["uplifting", "happy", "energetic", "romantic", "catchy"],
"instruments_pool": ["acoustic guitar", "piano", "drums", "bass", "synth", "vocals harmonies"],
"instruments_count": (3, 5),
"bpm": (95, 130),
"keys": ["C", "D", "E", "F", "G", "A"],
"scales": ["major"],
"prompt_modifiers": ["radio-ready", "summer vibe", "feel-good"],
},
"default": {
"moods": ["chill", "relaxing", "uplifting", "mellow"],
"instruments_pool": ["piano", "synth", "drums", "guitar", "bass", "strings"],
"instruments_count": (3, 4),
"bpm": (80, 110),
"keys": ["C", "D", "F", "G", "A"],
"scales": ["minor", "major"],
"prompt_modifiers": [""],
},
}
def randomize(genre: str, rng=None) -> dict:
rng = rng or random.Random()
pool = POOLS.get(genre.lower(), POOLS["default"])
n_instr = rng.randint(*pool["instruments_count"])
instruments = rng.sample(pool["instruments_pool"], min(n_instr, len(pool["instruments_pool"])))
return {
"moods": [rng.choice(pool["moods"])],
"instruments": instruments,
"bpm": rng.randint(*pool["bpm"]),
"key": rng.choice(pool["keys"]),
"scale": rng.choice(pool["scales"]),
"prompt_modifier": rng.choice(pool["prompt_modifiers"]),
}
```
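A quick illustration of what `randomize` hands back (the values below are made-up examples of the shape, not a fixed output):
```python
# Illustrative only — actual values depend on the RNG state:
from app.random_pools import randomize

params = randomize("lo-fi")
# e.g. {"moods": ["dreamy"], "instruments": ["piano", "vinyl", "rhodes"],
#       "bpm": 84, "key": "F", "scale": "minor", "prompt_modifier": "rainy night"}
```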
- [ ] **Step 2: Add the DB table + helpers** (db.py)
Add to `init_db()`:
```python
cursor.execute("""
CREATE TABLE IF NOT EXISTS music_batch_jobs (
id INTEGER PRIMARY KEY AUTOINCREMENT,
genre TEXT NOT NULL,
count INTEGER NOT NULL,
target_duration_sec INTEGER NOT NULL DEFAULT 180,
auto_pipeline INTEGER NOT NULL DEFAULT 1,
completed INTEGER NOT NULL DEFAULT 0,
track_ids_json TEXT NOT NULL DEFAULT '[]',
current_track_index INTEGER NOT NULL DEFAULT 0,
current_track_status TEXT,
status TEXT NOT NULL DEFAULT 'queued',
error TEXT,
compile_job_id INTEGER,
pipeline_id INTEGER,
created_at TEXT NOT NULL,
updated_at TEXT NOT NULL
)
""")
```
Helpers at the end of `db.py`:
```python
_BATCH_ALLOWED_COLS = frozenset([
    "completed", "track_ids_json", "current_track_index",
    "current_track_status", "status", "error",
    "compile_job_id", "pipeline_id",
])


def create_batch_job(genre: str, count: int, target_duration_sec: int = 180,
                     auto_pipeline: bool = True) -> int:
    with _conn() as conn:
        now = _now()
        cur = conn.cursor()
        cur.execute("""
            INSERT INTO music_batch_jobs
                (genre, count, target_duration_sec, auto_pipeline,
                 status, created_at, updated_at)
            VALUES (?, ?, ?, ?, 'queued', ?, ?)
        """, (genre, count, target_duration_sec, 1 if auto_pipeline else 0, now, now))
        return cur.lastrowid


def get_batch_job(batch_id: int) -> dict | None:
    with _conn() as conn:
        row = conn.execute(
            "SELECT * FROM music_batch_jobs WHERE id = ?", (batch_id,)
        ).fetchone()
    if not row:
        return None
    d = dict(row)
    d["track_ids"] = json.loads(d.get("track_ids_json") or "[]")
    return d


def update_batch_job(batch_id: int, **fields) -> None:
    unknown = set(fields) - _BATCH_ALLOWED_COLS
    if unknown:
        raise ValueError(f"unknown batch job columns: {unknown}")
    cols = ", ".join(f"{k} = ?" for k in fields)
    vals = list(fields.values()) + [_now(), batch_id]
    with _conn() as conn:
        conn.execute(
            f"UPDATE music_batch_jobs SET {cols}, updated_at = ? WHERE id = ?",
            vals,
        )


def append_batch_track(batch_id: int, track_id: int) -> None:
    """Append a new track_id to track_ids_json and bump completed by 1 (atomic)."""
    with _conn() as conn:
        row = conn.execute(
            "SELECT track_ids_json, completed FROM music_batch_jobs WHERE id = ?",
            (batch_id,),
        ).fetchone()
        if not row:
            return
        ids = json.loads(row["track_ids_json"] or "[]")
        ids.append(track_id)
        conn.execute(
            "UPDATE music_batch_jobs SET track_ids_json = ?, completed = ?, updated_at = ? WHERE id = ?",
            (json.dumps(ids), row["completed"] + 1, _now(), batch_id),
        )


def list_batch_jobs(active_only: bool = False) -> list[dict]:
    sql = "SELECT * FROM music_batch_jobs"
    if active_only:
        sql += " WHERE status NOT IN ('failed','cancelled','piped')"
    sql += " ORDER BY created_at DESC"
    with _conn() as conn:
        rows = conn.execute(sql).fetchall()
    out = []
    for r in rows:
        d = dict(r)
        d["track_ids"] = json.loads(d.get("track_ids_json") or "[]")
        out.append(d)
    return out
```
- [ ] **Step 3: Write the tests**
```python
# tests/test_batch_db.py
import pytest
from app import db


@pytest.fixture
def fresh_db(monkeypatch, tmp_path):
    monkeypatch.setattr(db, "DB_PATH", str(tmp_path / "music.db"))
    db.init_db()
    return db


def test_create_batch_job(fresh_db):
    bid = db.create_batch_job(genre="lo-fi", count=10)
    j = db.get_batch_job(bid)
    assert j["genre"] == "lo-fi"
    assert j["count"] == 10
    assert j["status"] == "queued"
    assert j["track_ids"] == []
    assert j["auto_pipeline"] == 1


def test_update_batch_job(fresh_db):
    bid = db.create_batch_job(genre="phonk", count=5)
    db.update_batch_job(bid, status="generating", current_track_index=2)
    j = db.get_batch_job(bid)
    assert j["status"] == "generating"
    assert j["current_track_index"] == 2


def test_update_batch_rejects_unknown_col(fresh_db):
    bid = db.create_batch_job(genre="lo-fi", count=1)
    with pytest.raises(ValueError):
        db.update_batch_job(bid, evil_col="x")


def test_append_batch_track(fresh_db):
    bid = db.create_batch_job(genre="lo-fi", count=3)
    db.append_batch_track(bid, 101)
    db.append_batch_track(bid, 102)
    j = db.get_batch_job(bid)
    assert j["track_ids"] == [101, 102]
    assert j["completed"] == 2


def test_list_batch_jobs_active_filter(fresh_db):
    b1 = db.create_batch_job(genre="lo-fi", count=1)
    b2 = db.create_batch_job(genre="phonk", count=1)
    db.update_batch_job(b1, status="failed")
    actives = db.list_batch_jobs(active_only=True)
    assert all(j["status"] not in ("failed",) for j in actives)
    assert any(j["id"] == b2 for j in actives)
    assert not any(j["id"] == b1 for j in actives)


def test_random_pools_randomize():
    from app.random_pools import randomize, POOLS
    import random

    rng = random.Random(42)
    result = randomize("lo-fi", rng)
    assert result["bpm"] in range(70, 91)
    assert result["key"] in POOLS["lo-fi"]["keys"]
    assert result["scale"] in POOLS["lo-fi"]["scales"]
    assert len(result["moods"]) == 1
    assert result["moods"][0] in POOLS["lo-fi"]["moods"]
    assert 3 <= len(result["instruments"]) <= 4


def test_random_pools_unknown_genre_uses_default():
    from app.random_pools import randomize, POOLS
    import random

    result = randomize("nonexistent", random.Random(0))
    assert result["bpm"] in range(80, 111)  # default range
```
- [ ] **Step 4: Run + commit**
```bash
cd music-lab && python -m pytest tests/test_batch_db.py -v
```
Expected: 7 PASS.
```bash
git add music-lab/app/db.py music-lab/app/random_pools.py music-lab/tests/test_batch_db.py
git commit -m "feat(music-lab): music_batch_jobs 테이블 + 장르별 랜덤 풀"
```
---
## Task 2: batch_generator + 3 endpoints
**Files:**
- Create: `music-lab/app/batch_generator.py`
- Modify: `music-lab/app/main.py`
- Test: `music-lab/tests/test_batch_endpoints.py`
- [ ] **Step 1: Write batch_generator.py**
```python
"""배치 음악 생성 + 자동 컴파일·영상 파이프라인."""
import asyncio
import logging
from . import db
from .random_pools import randomize
logger = logging.getLogger("music-lab.batch")
POLL_INTERVAL_S = 5
TRACK_GEN_TIMEOUT_S = 240
async def run_batch(batch_id: int) -> None:
job = db.get_batch_job(batch_id)
if not job:
return
genre = job["genre"]
count = job["count"]
duration = job["target_duration_sec"]
auto_pipe = bool(job["auto_pipeline"])
db.update_batch_job(batch_id, status="generating")
track_ids: list[int] = []
for i in range(1, count + 1):
title = f"{genre.title()} Mix Track {i}"
params = randomize(genre)
db.update_batch_job(batch_id,
current_track_index=i,
current_track_status="generating")
track_id = await _generate_one_track(title=title, genre=genre,
duration_sec=duration,
params=params)
if track_id:
track_ids.append(track_id)
db.append_batch_track(batch_id, track_id)
db.update_batch_job(batch_id, current_track_status="succeeded")
else:
db.update_batch_job(batch_id, current_track_status="failed")
logger.warning("배치 %d 트랙 %d 실패 — 계속 진행", batch_id, i)
if not track_ids:
db.update_batch_job(batch_id, status="failed",
error="모든 트랙 생성 실패")
return
db.update_batch_job(batch_id, status="generated")
if not auto_pipe:
return
# 자동 컴파일
db.update_batch_job(batch_id, status="compiling")
try:
compile_id = db.create_compile_job(
title=f"{genre.title()} Mix",
track_ids=track_ids,
crossfade_sec=3,
)
db.update_batch_job(batch_id, compile_job_id=compile_id)
except Exception as e:
db.update_batch_job(batch_id, status="failed", error=f"compile create: {e}")
return
from . import compiler
try:
await asyncio.to_thread(compiler.run, compile_id)
except Exception as e:
db.update_batch_job(batch_id, status="failed", error=f"compile run: {e}")
return
job_after = db.get_compile_job(compile_id)
if not job_after or job_after.get("status") not in ("done", "succeeded"):
db.update_batch_job(
batch_id, status="failed",
error=f"compile not done (status={job_after.get('status') if job_after else 'unknown'})"
)
return
# 자동 영상 파이프라인
pipeline_id = db.create_pipeline(compile_job_id=compile_id)
db.update_batch_job(batch_id, pipeline_id=pipeline_id, status="piped")
from .pipeline import orchestrator
await orchestrator.run_step(pipeline_id, "cover")
async def _generate_one_track(*, title: str, genre: str, duration_sec: int,
params: dict) -> int | None:
"""기존 Suno generate 호출 + 완료까지 polling. 성공 시 새 track id 반환."""
from .suno_provider import run_suno_generation
from .db import create_task, get_task
import uuid
task_id = str(uuid.uuid4())
suno_params = {
"title": title,
"genre": genre,
"moods": params["moods"],
"instruments": params["instruments"],
"duration_sec": duration_sec,
"bpm": params["bpm"],
"key": params["key"],
"scale": params["scale"],
"prompt": params.get("prompt_modifier", ""),
}
create_task(task_id, suno_params, provider="suno")
# Suno background task 직접 호출 (BackgroundTasks 미사용 — 우리가 await)
    gen_task = asyncio.create_task(  # 참조 유지: 참조 없는 task는 GC로 사라질 수 있음
        asyncio.to_thread(run_suno_generation, task_id, suno_params))
# Polling
waited = 0
while waited < TRACK_GEN_TIMEOUT_S:
await asyncio.sleep(POLL_INTERVAL_S)
waited += POLL_INTERVAL_S
task = get_task(task_id)
if not task:
continue
if task.get("status") == "succeeded":
tr = task.get("track")
return tr.get("id") if tr else None
if task.get("status") == "failed":
return None
return None # timeout
```
NOTE: This assumes existing `db.create_task`, `db.get_task`, `suno_provider.run_suno_generation` are reusable. Read existing code to confirm function signatures, adjust if needed (especially `task["track"]["id"]` vs other format).
- [ ] **Step 2: main.py에 3 endpoint 추가**
```python
from app.batch_generator import run_batch as _run_batch
class BatchGenerateRequest(BaseModel):
genre: str
count: int = 10
target_duration_sec: int = 180
auto_pipeline: bool = True
@app.post("/api/music/generate-batch", status_code=201)
async def generate_batch(req: BatchGenerateRequest, bg: BackgroundTasks):
if not (1 <= req.count <= 10):
raise HTTPException(400, "count는 1-10 사이")
if not (60 <= req.target_duration_sec <= 300):
raise HTTPException(400, "target_duration_sec는 60-300 사이")
if not req.genre:
raise HTTPException(400, "genre 필수")
if not SUNO_API_KEY:
raise HTTPException(400, "SUNO_API_KEY 미설정")
batch_id = _db_module.create_batch_job(
genre=req.genre, count=req.count,
target_duration_sec=req.target_duration_sec,
auto_pipeline=req.auto_pipeline,
)
bg.add_task(_run_batch, batch_id)
return _db_module.get_batch_job(batch_id)
@app.get("/api/music/generate-batch/{batch_id}")
def get_batch(batch_id: int):
j = _db_module.get_batch_job(batch_id)
if not j:
raise HTTPException(404)
# tracks 메타 LEFT JOIN (id, title, audio_url)
if j["track_ids"]:
ids_csv = ",".join(str(i) for i in j["track_ids"])
# 간단한 in-Python 매핑 (sqlite IN (...))
import sqlite3
conn = sqlite3.connect(_db_module.DB_PATH)
conn.row_factory = sqlite3.Row
rows = conn.execute(
f"SELECT id, title, audio_url, duration_sec FROM music_library WHERE id IN ({ids_csv})"
).fetchall()
conn.close()
j["tracks"] = [dict(r) for r in rows]
else:
j["tracks"] = []
return j
@app.get("/api/music/generate-batch")
def list_batches(status: str = "all"):
return {"batches": _db_module.list_batch_jobs(active_only=(status == "active"))}
```
(Assumes `SUNO_API_KEY` is already imported in main.py; if not, wire it up like the `_db_module` alias, as in the sketch below.)
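A minimal sketch of the assumed main.py prelude (names mirror the snippet above; adjust to the actual module):
```python
# Hypothetical prelude for music-lab/app/main.py; both names below are assumptions
import os

from app import db as _db_module

SUNO_API_KEY = os.getenv("SUNO_API_KEY", "")
```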
- [ ] **Step 3: 테스트 작성**
```python
# tests/test_batch_endpoints.py
import pytest
from unittest.mock import AsyncMock, patch, MagicMock
from fastapi.testclient import TestClient
from app.main import app
from app import db
@pytest.fixture
def client(monkeypatch, tmp_path):
monkeypatch.setattr(db, "DB_PATH", str(tmp_path / "music.db"))
db.init_db()
monkeypatch.setenv("SUNO_API_KEY", "test")
return TestClient(app)
def test_create_batch_201(client):
with patch("app.main._run_batch", new=AsyncMock()):
r = client.post("/api/music/generate-batch",
json={"genre": "lo-fi", "count": 3})
assert r.status_code == 201
body = r.json()
assert body["genre"] == "lo-fi"
assert body["count"] == 3
assert body["status"] == "queued"
def test_create_batch_rejects_count_too_high(client):
r = client.post("/api/music/generate-batch",
json={"genre": "lo-fi", "count": 11})
assert r.status_code == 400
def test_create_batch_rejects_count_zero(client):
r = client.post("/api/music/generate-batch",
json={"genre": "lo-fi", "count": 0})
assert r.status_code == 400
def test_create_batch_rejects_no_genre(client):
r = client.post("/api/music/generate-batch", json={"count": 3})
# Pydantic missing 필드 → 422 (FastAPI default validation)
assert r.status_code in (400, 422)
def test_get_batch_returns_tracks(client):
bid = db.create_batch_job(genre="lo-fi", count=2)
db.append_batch_track(bid, 999) # phantom track id (not in library)
r = client.get(f"/api/music/generate-batch/{bid}")
assert r.status_code == 200
body = r.json()
assert body["track_ids"] == [999]
# tracks 배열은 비어있음 (해당 track 미존재)
assert body["tracks"] == []
def test_list_batches(client):
db.create_batch_job(genre="lo-fi", count=1)
db.create_batch_job(genre="phonk", count=2)
r = client.get("/api/music/generate-batch")
assert len(r.json()["batches"]) == 2
```
- [ ] **Step 4: Run + commit + push**
```bash
cd music-lab && python -m pytest tests/ -v
```
Expected: 모두 PASS.
```bash
git -C C:/Users/jaeoh/Desktop/workspace/web-backend add music-lab/app/batch_generator.py \
music-lab/app/main.py \
music-lab/tests/test_batch_endpoints.py
git -C C:/Users/jaeoh/Desktop/workspace/web-backend commit -m "feat(music-lab): 배치 음악 생성 endpoint + orchestrator"
git -C C:/Users/jaeoh/Desktop/workspace/web-backend push origin main
```
---
## Task 3: Frontend Create 탭 배치 섹션
**Files:**
- Modify: `web-ui/src/api.js`
- Create: `web-ui/src/pages/music/components/BatchProgress.jsx`
- Modify: `web-ui/src/pages/music/MusicStudio.jsx`
- Modify: `web-ui/src/pages/music/MusicStudio.css`
- [ ] **Step 1: api.js 헬퍼**
```javascript
// === Batch generation ===
export const startBatchGen = (payload) => apiPost('/api/music/generate-batch', payload);
export const getBatchJob = (id) => apiGet(`/api/music/generate-batch/${id}`);
export const listBatchJobs = (status='all') => apiGet(`/api/music/generate-batch?status=${status}`);
```
- [ ] **Step 2: BatchProgress.jsx 신규**
```jsx
const STATUS_LABELS = {
queued: '대기 중', generating: '음악 생성 중', generated: '음악 완료, 컴파일 대기',
compiling: '컴파일 중', piped: '영상 파이프라인 시작됨',
failed: '실패', cancelled: '취소',
};
export default function BatchProgress({ batch }) {
if (!batch) return null;
const trackList = Array.from({ length: batch.count }, (_, i) => i + 1);
return (
<div className="ms-batch-progress">
<div className="ms-batch-header">
배치 #{batch.id} {batch.genre} ·{' '}
{batch.completed}/{batch.count} 완료 ·{' '}
<strong>{STATUS_LABELS[batch.status] || batch.status}</strong>
</div>
{batch.error && <div className="ms-error">에러: {batch.error}</div>}
<ol className="ms-batch-tracks">
{trackList.map(n => {
const completed = n <= batch.completed;
const current = n === batch.current_track_index && batch.status === 'generating';
const tr = (batch.tracks || [])[n - 1];
return (
<li key={n} className={completed ? 'done' : current ? 'current' : 'pending'}>
{completed ? '✓' : current ? '⏳' : '○'}
{' '}Track {n}: {tr?.title || (current ? '생성 중...' : '대기')}
</li>
);
})}
</ol>
{batch.compile_job_id && (
<div className="ms-batch-link">📀 컴파일 #{batch.compile_job_id}</div>
)}
{batch.pipeline_id && (
<div className="ms-batch-link">
🎬 영상 파이프라인 #{batch.pipeline_id}
{' '}<em>YouTube 진행 탭에서 확인</em>
</div>
)}
</div>
);
}
```
- [ ] **Step 3: MusicStudio.jsx Create 탭에 배치 섹션 추가**
Create 탭 jsx 영역 (handleGenerate 근처) 위 또는 옆에:
```jsx
import BatchProgress from './components/BatchProgress';
import { startBatchGen, getBatchJob } from '../../api';
// 컴포넌트 내부 state:
const [batchOpen, setBatchOpen] = useState(false);
const [batchGenre, setBatchGenre] = useState('lo-fi');
const [batchCount, setBatchCount] = useState(10);
const [batchDuration, setBatchDuration] = useState(180);
const [batchAutoPipe, setBatchAutoPipe] = useState(true);
const [currentBatch, setCurrentBatch] = useState(null);
const [batchPolling, setBatchPolling] = useState(false);
const batchPollRef = useRef(null);
const startBatch = async () => {
try {
const res = await startBatchGen({
genre: batchGenre,
count: batchCount,
target_duration_sec: batchDuration,
auto_pipeline: batchAutoPipe,
});
setCurrentBatch(res);
setBatchPolling(true);
} catch (e) {
alert(`배치 시작 실패: ${e.message || e}`);
}
};
useEffect(() => {
if (!batchPolling || !currentBatch?.id) return;
const tick = async () => {
const j = await getBatchJob(currentBatch.id).catch(() => null);
if (j) {
setCurrentBatch(j);
if (['piped', 'failed', 'cancelled'].includes(j.status)) {
setBatchPolling(false);
if (j.pipeline_id) loadLibrary?.(); // refresh library to show new tracks
}
}
};
batchPollRef.current = setInterval(tick, 5000);
return () => clearInterval(batchPollRef.current);
}, [batchPolling, currentBatch?.id]);
// ... Create 탭 jsx 안:
<details className="ms-batch-section" open={batchOpen} onToggle={(e) => setBatchOpen(e.target.open)}>
<summary>🎲 배치 생성 (장르 1-10트랙 + 자동 영상)</summary>
<div className="ms-batch-form">
<label>장르
<select value={batchGenre} onChange={e => setBatchGenre(e.target.value)}>
<option value="lo-fi">Lo-Fi</option>
<option value="phonk">Phonk</option>
<option value="ambient">Ambient</option>
<option value="pop">Pop</option>
</select>
</label>
<label>트랙 : {batchCount}
<input type="range" min={1} max={10} value={batchCount}
onChange={e => setBatchCount(parseInt(e.target.value))} />
</label>
<label>트랙당 길이: {batchDuration}
<input type="range" min={60} max={300} step={10} value={batchDuration}
onChange={e => setBatchDuration(parseInt(e.target.value))} />
</label>
<label className="ms-batch-checkbox">
<input type="checkbox" checked={batchAutoPipe}
onChange={e => setBatchAutoPipe(e.target.checked)} />
모든 트랙 생성 자동 영상 파이프라인 시작
</label>
<p className="ms-batch-estimate">
        예상: {Math.ceil(batchCount * 1.5)}-{batchCount * 2}분 ·
비용 ~${(batchCount * 0.005 + (batchAutoPipe ? 0.05 : 0)).toFixed(2)}
</p>
<button className="button primary" onClick={startBatch}
disabled={batchPolling}>
🎵 배치 생성 시작
</button>
</div>
{currentBatch && <BatchProgress batch={currentBatch} />}
</details>
```
- [ ] **Step 4: CSS 추가**
```css
/* === Batch generation section === */
.ms-batch-section { margin: 16px 0; padding: 12px; background: rgba(0,0,0,.2);
border: 1px solid var(--ms-line, #2a2a3a); border-radius: 12px; }
.ms-batch-section summary { cursor: pointer; font-weight: bold; color: var(--ms-text, #f0f0f5); }
.ms-batch-form { display: flex; flex-direction: column; gap: 10px; padding: 12px 0; }
.ms-batch-form label { display: flex; flex-direction: column; gap: 4px; font-size: 13px; }
.ms-batch-form input[type="range"] { width: 100%; }
.ms-batch-checkbox { flex-direction: row !important; align-items: center; gap: 8px; }
.ms-batch-checkbox input { width: auto; }
.ms-batch-estimate { font-size: 12px; color: var(--ms-muted, #a0a0b0); }
.ms-batch-progress { margin-top: 12px; padding: 12px; background: rgba(0,0,0,.3);
border-radius: 8px; }
.ms-batch-header { font-size: 13px; margin-bottom: 8px; }
.ms-batch-tracks { padding-left: 24px; font-size: 12px; }
.ms-batch-tracks li { margin: 2px 0; }
.ms-batch-tracks li.done { color: #86efac; }
.ms-batch-tracks li.current { color: var(--ms-accent, #38bdf8); font-weight: bold; }
.ms-batch-tracks li.pending { color: var(--ms-muted, #a0a0b0); }
.ms-batch-link { margin-top: 8px; font-size: 12px; color: var(--ms-muted, #a0a0b0); }
```
- [ ] **Step 5: Build + verify + commit + push + deploy**
```bash
cd web-ui && npm run build 2>&1 | tail -5
npx eslint src/pages/music/components/BatchProgress.jsx src/pages/music/MusicStudio.jsx 2>&1 | tail
```
```bash
git -C C:/Users/jaeoh/Desktop/workspace/web-ui add src/api.js \
src/pages/music/components/BatchProgress.jsx \
src/pages/music/MusicStudio.jsx \
src/pages/music/MusicStudio.css
git -C C:/Users/jaeoh/Desktop/workspace/web-ui commit -m "feat(web-ui): Create 탭 배치 생성 섹션 + BatchProgress"
git -C C:/Users/jaeoh/Desktop/workspace/web-ui push origin main
cd C:/Users/jaeoh/Desktop/workspace/web-ui && npm run release:nas
```
---
## Task 4: 수동 E2E 검증
- [ ] Create 탭 → 배치 생성 섹션 펼침 → genre=lo-fi, count=3 (테스트로 적게), duration=120s, auto_pipeline=on → "배치 생성 시작"
- [ ] BatchProgress에 Track 1/2/3 진행 표시 확인
- [ ] ~5분 후 Library에 3개 트랙 추가됨
- [ ] 컴파일 진행 확인 (status: compiling)
- [ ] 영상 파이프라인 시작됨 (status: piped) + pipeline_id 표시
- [ ] YouTube 탭 → 진행 탭에 새 카드, cover 단계 진행 중
- [ ] 텔레그램에 cover 알림 도착
- [ ] 일반 흐름대로 5단계 승인 후 발행
---
## Self-Review
**Spec coverage:**
- §3 사용자 흐름 → Task 3 (UI 섹션)
- §4 데이터 모델 → Task 1
- §5 백엔드 (random_pools, batch_generator) → Task 1, 2
- §6 API → Task 2
- §7 프론트엔드 → Task 3
- §8 에러 처리 → Task 2 (validation, try/except)
- §9 테스트 → Task 1, 2
- §10 산출물 → 4 task로 모두 커버
**Placeholder scan:** 없음.
**Type consistency:**
- `batch_id` int, `count` int, `genre` str — 일관
- `track_ids` list[int]
- `status` 7값 (queued/generating/generated/compiling/piped/failed/cancelled) 일관
**스펙 보정:** §5-2 batch_generator의 `_generate_one_track`에서 `db.create_task`/`db.get_task` 사용 — 이 함수들이 기존 db.py에 있는지 미확인. Task 2 Step 1 NOTE에 명시함.

---
# packs-lab 인프라 통합 + admin mint-token 설계
> 대상: `web-backend/packs-lab/`
> 외부 의존: Supabase(`pack_files` 테이블) + Vercel SaaS(HMAC 호출자)
> 후속 별도 스펙: Vercel-side admin UI / 사용자 다운로드 / cleanup cron / multi-admin
---
## 1. 목표
`packs-lab`은 NAS 자료 다운로드 자동화 백엔드. Synology DSM 공유 링크 발급 + 5GB 멀티파트 업로드 수신을 담당하고, Vercel SaaS와 HMAC으로 통신한다. 사용자 인증은 Vercel이 Supabase로 처리하고 본 서비스는 외부 인증을 다루지 않는다.
이미 코드(HMAC 미들웨어 / DSM client / 4 라우트)는 작성되어 있으나 인프라 통합 + Supabase 스키마 + admin upload 토큰 발급 흐름이 빠져 있어 운영 가능 상태가 아니다. 본 스펙은 그 갭을 메운다.
### 핵심 변경
- **신규 라우트**: `POST /api/packs/admin/mint-token` (Vercel HMAC → 일회성 업로드 토큰)
- **Supabase DDL**: `pack_files` 테이블 + 활성·삭제 인덱스
- **인프라**: docker-compose `packs-lab` 서비스 등록(18950) + nginx `/api/packs/` 5GB 통과 + `.env.example` 6+1 환경변수
- **테스트**: routes 통합 + DSM client mock
- **문서**: web-backend CLAUDE.md 5곳 + workspace CLAUDE.md 1곳 갱신
- **DELETE 라우트 docstring**: "DSM 공유 정리" 표현을 "DSM 공유 자동 만료"로 수정 (실제 동작과 일치)
### 변경하지 않는 것
- 기존 `auth.py` (`mint_upload_token` 그대로 활용)
- 기존 `dsm_client.py`
- 기존 `routes.py`의 sign-link / upload / list / delete 본문
- DSM 공유 추적 테이블 — 4시간 자동 만료로 충분(브레인스토밍 결정)
---
## 2. 컴포넌트 + 통신 흐름
### 2.1 변경 받는 파일
| 영역 | 파일 | 변경 |
|------|------|------|
| 백엔드 | `packs-lab/app/routes.py` | DELETE docstring 수정 + admin mint-token 라우트 추가 |
| 백엔드 | `packs-lab/app/models.py` | `MintTokenRequest`, `MintTokenResponse` 스키마 추가 |
| 백엔드 | `packs-lab/app/auth.py` | 변경 없음 (기존 `mint_upload_token` 활용) |
| 테스트 | `packs-lab/tests/conftest.py` (신규) | autouse `BACKEND_HMAC_SECRET` 셋팅 |
| 테스트 | `packs-lab/tests/test_routes.py` (신규) | 5 라우트 통합 테스트 |
| 테스트 | `packs-lab/tests/test_dsm_client.py` (신규) | DSM 7.x API mock 테스트 |
| DB | `packs-lab/supabase/pack_files.sql` (신규) | DDL + 인덱스 |
| 인프라 | `docker-compose.yml` | `packs-lab` 서비스 추가 |
| 인프라 | `nginx/default.conf` | `/api/packs/` 라우팅 (`client_max_body_size 5G` + streaming) |
| 인프라 | `.env.example` | 6+1 신규 환경변수 |
| 문서 | `web-backend/CLAUDE.md` | 1·4·5·8·9 섹션 갱신 |
| 문서 | `workspace/CLAUDE.md` | 컨테이너 표 한 줄 추가 |
### 2.2 통신 흐름
**ADMIN 업로드**
```
Vercel admin UI ─────→ Vercel API (HMAC 헤더 추가)
POST /api/packs/admin/mint-token
backend: verify_request_hmac
mint_upload_token({tier, label, filename, size_bytes, jti, expires_at})
Vercel ←─────────────── token ──────┘
admin browser → POST /api/packs/upload
Authorization: Bearer <token>
multipart body (≤5GB)
backend: verify_upload_token + JTI mark
파일 저장 (PACK_BASE_DIR/{filename}, 평면 구조 — tier는 filename 규칙으로 구분)
Supabase INSERT pack_files
```
**사용자 다운로드**
```
사용자 → Vercel SaaS (Supabase auth + tier·결제 검증)
POST /api/packs/sign-link (HMAC + file_path)
backend: verify_request_hmac
DSM Sharing.create (4시간 만료)
사용자 ← Vercel ← 다운로드 URL (4시간 유효)
```
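For reference, a minimal sketch of the caller side of this handshake, in Python for consistency (the real caller is the Vercel API); the exact signed message layout is an assumption and must match `auth.verify_request_hmac`:
```python
import hashlib
import hmac
import json
import time

import httpx

def signed_post(url: str, payload: dict, secret: str) -> httpx.Response:
    """Send a body with the X-Timestamp / X-Signature headers the backend expects."""
    body = json.dumps(payload).encode()
    ts = str(int(time.time()))
    # Assumed layout: HMAC-SHA256 over timestamp + "." + raw body (hex digest).
    sig = hmac.new(secret.encode(), ts.encode() + b"." + body, hashlib.sha256).hexdigest()
    return httpx.post(url, content=body, headers={
        "Content-Type": "application/json",
        "X-Timestamp": ts,
        "X-Signature": sig,
    })
```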
### 2.3 기각된 대안
| 대안 | 기각 사유 |
|------|-----------|
| Vercel-side 토큰 발급 | 토큰 포맷 양쪽 분산, 변경 시 동기화 부담 |
| admin browser → backend 직접 HMAC | admin browser에 secret 노출, 보안 약화 |
| DSM 공유 추적 테이블 | 4시간 자동 만료로 충분, YAGNI |
| Resumable multipart upload | 5GB는 단일 stream으로 충분, 복잡도 증가 |
| `pack_files.min_tier`를 PostgreSQL ENUM | tier 추가 시 ALTER TYPE 번거로움. text+CHECK 채택 |
---
## 3. `POST /api/packs/admin/mint-token`
### 3.1 Pydantic 스키마 (`models.py` 추가)
```python
class MintTokenRequest(BaseModel):
"""Vercel → backend: admin upload 토큰 발급 요청."""
tier: PackTier
label: str = Field(..., max_length=200)
filename: str = Field(..., max_length=255)
size_bytes: int = Field(..., gt=0, le=5 * 1024 * 1024 * 1024)
class MintTokenResponse(BaseModel):
token: str
expires_at: datetime
jti: str
```
### 3.2 라우트 본문 (`routes.py` 추가)
```python
import os, time, uuid
from datetime import datetime, timezone
from .auth import mint_upload_token, verify_request_hmac
from .models import MintTokenRequest, MintTokenResponse
UPLOAD_TOKEN_TTL_SEC = int(os.getenv("UPLOAD_TOKEN_TTL_SEC", "1800")) # 30분 default
@router.post("/admin/mint-token", response_model=MintTokenResponse)
async def mint_token(
request: Request,
x_timestamp: str = Header(""),
x_signature: str = Header(""),
):
body = await request.body()
verify_request_hmac(body, x_timestamp, x_signature)
payload = MintTokenRequest.model_validate_json(body)
_check_filename(payload.filename) # upload 라우트와 동일 검증
jti = str(uuid.uuid4())
expires_ts = int(time.time()) + UPLOAD_TOKEN_TTL_SEC
token = mint_upload_token({
"tier": payload.tier,
"label": payload.label,
"filename": payload.filename,
"size_bytes": payload.size_bytes,
"jti": jti,
"expires_at": expires_ts,
})
return MintTokenResponse(
token=token,
expires_at=datetime.fromtimestamp(expires_ts, tz=timezone.utc),
jti=jti,
)
```
### 3.3 결정 근거
| 항목 | 값 | 근거 |
|------|-----|------|
| TTL default | 1800s (30분) | 5GB 업로드 시작 + 진행 시간 여유. 1Gbps에서 약 40s, 50Mbps에서 약 14분 |
| TTL env override | `UPLOAD_TOKEN_TTL_SEC` | 운영 중 조정 가능 |
| filename 검증 | upload와 동일 (`_check_filename`) | 토큰 발급 시점에 미리 거부 → admin UI 즉시 피드백 |
| jti 응답 포함 | yes | admin이 업로드 결과 추적용 |
| Vercel ↔ backend | HMAC (`X-Timestamp` + `X-Signature`) | 다른 admin 라우트와 동일 패턴 |
| admin browser ↔ backend | Bearer token (단발성 jti) | 기존 upload 라우트 그대로 |
### 3.4 DELETE 라우트 docstring 수정
`routes.py` 모듈 docstring에서:
```diff
- DELETE /api/packs/{file_id} — Vercel HMAC 인증 → soft delete + DSM 공유 정리
+ DELETE /api/packs/{file_id} — Vercel HMAC 인증 → soft delete (DSM 공유는 자동 만료)
```
`delete_file` 함수에는 변경 없음.
---
## 4. Supabase `pack_files` DDL
**파일**: `packs-lab/supabase/pack_files.sql` (신규, 운영 배포 시 Supabase SQL editor에서 실행)
```sql
-- pack_files: NAS에 저장된 다운로드 가능한 패키지 파일 메타
create table if not exists public.pack_files (
id uuid primary key default gen_random_uuid(),
min_tier text not null check (min_tier in ('starter','pro','master')),
label text not null,
file_path text not null unique, -- NAS 절대경로, 동일 경로 중복 방지
filename text not null,
size_bytes bigint not null check (size_bytes > 0),
sort_order integer not null default 0,
uploaded_at timestamptz not null default now(),
deleted_at timestamptz
);
-- list 라우트의 hot path: deleted_at IS NULL + tier/order 정렬
create index if not exists pack_files_active_idx
on public.pack_files (min_tier, sort_order)
where deleted_at is null;
-- soft-deleted 통계 / cleanup 잡 대비
create index if not exists pack_files_deleted_at_idx
on public.pack_files (deleted_at)
where deleted_at is not null;
```
### 4.1 필드 결정 근거
| 필드 | 타입 / 제약 | 근거 |
|------|------------|------|
| `id` | uuid PK + `gen_random_uuid()` default | routes.py가 client-side `uuid.uuid4()` 생성하지만 default도 둬 fallback |
| `min_tier` | text + CHECK | enum 대신 text+CHECK가 PostgreSQL에서 더 유연 |
| `file_path` | text NOT NULL UNIQUE | 같은 tier/filename 충돌은 파일시스템에서 잡지만 DB 레벨도 보강 |
| `size_bytes` | bigint + CHECK > 0 | 5GB는 int 범위 안이지만 미래 대비 bigint |
| `sort_order` | int NOT NULL default 0 | routes INSERT가 sort_order 미지정 → 0 기본 |
| `uploaded_at` | timestamptz default now() | routes 코드가 `res.data[0]["uploaded_at"]` 그대로 응답에 사용 — DB가 채워줌 |
| `deleted_at` | nullable | soft delete |
### 4.2 RLS
비활성. backend가 `service_role` key 사용하므로 RLS 우회. Vercel/사용자 직접 접근 없음 → unsafe 아님.
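For reference, the INSERT the upload route performs against this table might look like the following supabase-py sketch (field values are illustrative; the service_role key bypasses RLS as decided above):
```python
import os

from supabase import create_client

sb = create_client(os.environ["SUPABASE_URL"], os.environ["SUPABASE_SERVICE_KEY"])

row = sb.table("pack_files").insert({
    "min_tier": "pro",                                             # must satisfy the CHECK constraint
    "label": "Sample Pack Vol.1",
    "file_path": "/docker/webpage/media/packs/pro_sample_v1.zip",  # UNIQUE
    "filename": "pro_sample_v1.zip",
    "size_bytes": 1_234_567,
    # id / sort_order / uploaded_at fall back to their DB defaults
}).execute()
```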
---
## 5. 인프라 통합
### 5.1 `docker-compose.yml` — `packs-lab` 서비스
```yaml
packs-lab:
build:
context: ./packs-lab
dockerfile: Dockerfile
container_name: packs-lab
restart: unless-stopped
ports:
- "18950:8000"
environment:
TZ: Asia/Seoul
DSM_HOST: ${DSM_HOST}
DSM_USER: ${DSM_USER}
DSM_PASS: ${DSM_PASS}
BACKEND_HMAC_SECRET: ${BACKEND_HMAC_SECRET}
SUPABASE_URL: ${SUPABASE_URL}
SUPABASE_SERVICE_KEY: ${SUPABASE_SERVICE_KEY}
UPLOAD_TOKEN_TTL_SEC: ${UPLOAD_TOKEN_TTL_SEC:-1800}
PACK_BASE_DIR: ${PACK_BASE_DIR:-/app/data/packs}
PACK_HOST_DIR: ${PACK_HOST_DIR:-${PACK_DATA_PATH:-./data/packs}}
volumes:
- ${PACK_DATA_PATH:-./data/packs}:${PACK_BASE_DIR:-/app/data/packs}
```
| 결정 | 값 | 근거 |
|------|-----|------|
| 포트 | 18950 | 18800(realestate) → 18900(agent-office) → 18950(packs) 순차 |
| `PACK_BASE_DIR` (컨테이너 내부) | `/app/data/packs` | routes.py upload target. docker-compose volume 우측. |
| `PACK_HOST_DIR` (NAS 호스트) | 운영 `/docker/webpage/media/packs` (shared folder 기준, §5.3 주석 참고) / 로컬 fallback `./data/packs` | DSM·Supabase에 노출되는 경로. routes.py가 file_path로 저장. 미설정 시 `PACK_BASE_DIR`로 fallback. |
| `PACK_DATA_PATH` (호스트 마운트) | default `./data/packs` (로컬), NAS `/volume1/docker/webpage/media/packs` | docker-compose volume 좌측만 사용 |
### 5.2 `nginx/default.conf` — `/api/packs/` 라우팅
```nginx
location /api/packs/ {
proxy_pass http://packs-lab:8000;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
# 5GB 멀티파트 업로드 대응
client_max_body_size 5G;
proxy_request_buffering off; # 스트리밍 통과 (메모리/디스크 buffer 회피)
proxy_read_timeout 1800s;
proxy_send_timeout 1800s;
}
```
| 결정 | 근거 |
|------|------|
| `client_max_body_size 5G` | 라우트 단위 — 다른 location은 default 유지 |
| `proxy_request_buffering off` | 5GB 파일을 nginx가 모두 받고 backend에 forward하면 ~5GB 디스크 buffer 발생 |
| `proxy_read/send_timeout 1800s` | 30분 — 업로드 토큰 TTL과 일치, 느린 업링크에서 5GB 전송 여유 |
### 5.3 `.env.example` — 신규 환경변수 (7 + 3 path)
```bash
# ─── packs-lab — NAS 자료 다운로드 자동화 ────────────────────────────
# Synology DSM 7.x 인증 (공유 링크 발급용)
DSM_HOST=https://gahusb.synology.me:5001
DSM_USER=
DSM_PASS=
# LAN IP + self-signed cert 환경에서 IP mismatch 시 false (LAN 내부 통신이라 허용)
DSM_VERIFY_SSL=false
# Vercel SaaS ↔ backend HMAC 시크릿 (양쪽 동일 값)
BACKEND_HMAC_SECRET=
# Supabase pack_files 테이블 접근 (service_role 키, RLS 우회)
SUPABASE_URL=https://<project>.supabase.co
SUPABASE_SERVICE_KEY=
# admin upload 토큰 TTL (초). default 1800 = 30분
UPLOAD_TOKEN_TTL_SEC=1800
# 호스트 마운트 경로 (로컬 ./data/packs, NAS /volume1/docker/webpage/media/packs)
PACK_DATA_PATH=./data/packs
# 컨테이너 내부 저장 경로 (routes.py upload target. docker-compose volume 우측)
PACK_BASE_DIR=/app/data/packs
# DSM API용 path. Synology DSM API는 일반 사용자 권한일 때 /<shared_folder>/... 형식만 인식하고 /volume1/... 절대경로는 거부(error 408).
# 운영 NAS는 반드시 shared folder 시점 — /docker/webpage/media/packs.
# admin 사용자는 /volume1/... 도 가능하지만 보안상 별도 packs-bot user 권장.
PACK_HOST_DIR=/docker/webpage/media/packs
```
### 5.4 NAS 디렉토리 준비
운영 첫 배포 시 SSH로 1회. 파일은 `PACK_HOST_DIR` 평면에 직접 저장 — tier 디렉토리 분기는 만들지 않음(tier 구분은 filename 규칙으로 admin이 관리):
```bash
mkdir -p /volume1/docker/webpage/media/packs # 호스트 OS path (volume 마운트용)
chown -R PUID:PGID /volume1/docker/webpage/media/packs
```
PUID/PGID는 `.env`의 기존 값 사용.
> ⚠️ **DSM 사용자 권한 — File Station + Sharing 둘 다 필요**: Control Panel → User → packs-bot(또는 admin) → Permissions → File Station에서 `docker` shared folder Read 권한 + Applications → Sharing 권한 ON.
### 5.5 `scripts/deploy-nas.sh` SERVICES 화이트리스트
webhook 자동 배포(deployer)가 호출하는 sync 스크립트는 화이트리스트로 동기화 대상 디렉토리를 명시한다. 신규 서비스 추가 시 반드시 함께 수정해야 NAS 운영 디렉토리에 소스 sync + docker compose 빌드가 동작한다.
```bash
SERVICES="lotto travel-proxy deployer stock-lab music-lab blog-lab realestate-lab agent-office personal packs-lab nginx scripts"
```
(packs-lab 누락 시 `docker compose ps`에 packs-lab 미등장 — 첫 배포 시 가장 흔한 누락 항목)
---
## 6. 테스트 전략
기존 `tests/test_auth.py` 유지. 신규 3 파일.
### 6.1 `tests/conftest.py` (신규)
```python
import pytest
@pytest.fixture(autouse=True)
def _hmac_secret(monkeypatch):
"""모든 테스트에서 동일한 HMAC secret 사용."""
monkeypatch.setenv("BACKEND_HMAC_SECRET", "test-secret-do-not-use-in-prod")
```
### 6.2 `tests/test_routes.py` (신규) — 통합 테스트
DSM·Supabase 모두 mock. `pytest`, `monkeypatch`, `unittest.mock`, `fastapi.testclient.TestClient` 사용.
| 테스트 | 검증 |
|--------|------|
| `test_sign_link_hmac_required` | timestamp/signature 헤더 누락 → 401 |
| `test_sign_link_outside_base_dir` | file_path가 `PACK_BASE_DIR` 외부 → 400 |
| `test_sign_link_calls_dsm` | mock된 `create_share_link` 호출 검증, URL 응답 |
| `test_mint_token_hmac_required` | HMAC 누락 → 401 |
| `test_mint_token_returns_valid_token` | 발급된 token이 `verify_upload_token`으로 통과 |
| `test_mint_token_invalid_filename` | 확장자 미허용 → 400 |
| `test_upload_token_required` | Authorization Bearer 누락 → 401 |
| `test_upload_size_mismatch` | 토큰 size_bytes ≠ 실제 → 400 |
| `test_upload_jti_replay` | 같은 토큰 두 번 → 두 번째 409 |
| `test_list_returns_active_only` | mock supabase 응답에서 deleted_at NULL만 반환 |
| `test_delete_soft_deletes` | mock supabase update에 deleted_at ISO timestamp 들어감 |
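As a sketch of one row from the table above, `test_mint_token_returns_valid_token` could look like this; the `client` and `hmac_headers` fixtures are assumptions (a `TestClient` plus a helper that signs the exact bytes being sent):
```python
def test_mint_token_returns_valid_token(client, hmac_headers):
    payload = {"tier": "pro", "label": "Sample", "filename": "pack.zip", "size_bytes": 1024}
    # hmac_headers must sign the same serialized body that client.post() sends
    r = client.post("/api/packs/admin/mint-token", json=payload, headers=hmac_headers(payload))
    assert r.status_code == 200
    body = r.json()

    from app.auth import verify_upload_token  # existing helper, reused per §1
    claims = verify_upload_token(body["token"])  # claim names assumed to mirror §3.2
    assert claims["jti"] == body["jti"]
    assert claims["filename"] == "pack.zip"
```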
### 6.3 `tests/test_dsm_client.py` (신규)
httpx mock(`respx` 또는 `MockTransport`) 또는 `monkeypatch.setattr` 패치.
| 테스트 | 검증 |
|--------|------|
| `test_create_share_link_login_logout` | login → Sharing.create → logout 순서 |
| `test_create_share_link_returns_url_and_expiry` | 응답 파싱 |
| `test_dsm_login_failure_raises` | login API success=false → DSMError |
| `test_dsm_share_failure_logs_out` | Sharing.create 실패해도 logout 호출 (try/finally) |
---
## 7. 문서 갱신
### 7.1 `web-backend/CLAUDE.md` — 5곳
**1. 1.프로젝트 개요**
```diff
- 서비스: lotto-lab, stock-lab, travel-proxy, music-lab, blog-lab, realestate-lab, agent-office, personal, deployer (9개)
+ 서비스: lotto-lab, stock-lab, travel-proxy, music-lab, blog-lab, realestate-lab, agent-office, personal, packs-lab, deployer (10개)
```
**2. 4.Docker 서비스 표** — 신규 행
```
| `packs-lab` | 18950 | NAS 자료 다운로드 자동화 (DSM 공유 링크 + 5GB 업로드, Vercel SaaS와 HMAC 통신) |
```
**3. 5.Nginx 라우팅 표** — 신규 행
```
| `/api/packs/` | `packs-lab:8000` | 5GB 업로드 (`client_max_body_size 5G` + `proxy_request_buffering off`) |
```
**4. 8.로컬 개발 표** — 신규 행
```
| Packs Lab | http://localhost:18950 |
```
**5. 9.서비스별**`### packs-lab (packs-lab/)` 신규 섹션
내용:
- 용도 (NAS DSM 공유링크 + 5GB 업로드 + Vercel HMAC, 사용자 인증은 Vercel이 Supabase로 처리)
- 환경변수 6+1개
- DB는 외부 Supabase `pack_files` (DDL은 `packs-lab/supabase/pack_files.sql`)
- 파일 구조: `main.py`, `auth.py`, `dsm_client.py`, `routes.py`, `models.py`
- API 표 5개:
- `POST /api/packs/sign-link` (Vercel HMAC → DSM Sharing.create)
- `POST /api/packs/admin/mint-token` (Vercel HMAC → upload 토큰)
- `POST /api/packs/upload` (Bearer token → multipart 5GB)
- `GET /api/packs/list` (Vercel HMAC → 활성 파일 목록)
- `DELETE /api/packs/{file_id}` (Vercel HMAC → soft delete)
### 7.2 `workspace/CLAUDE.md`
컨테이너 표에 한 줄 추가:
```
| `packs-lab` | 18950 | NAS 자료 다운로드 자동화 (Vercel SaaS와 HMAC 통신) |
```
---
## 8. 스코프
### 본 spec 범위
- ✅ admin mint-token 라우트 신설
- ✅ Supabase `pack_files` DDL
- ✅ docker-compose / nginx / .env.example / NAS 디렉토리 마운트
- ✅ tests (auth 유지 + routes 통합 + dsm_client mock)
- ✅ CLAUDE.md 2곳 갱신
- ✅ DELETE 라우트 docstring 수정
### 후속 별도 spec
- ❌ Vercel SaaS-side admin UI / 사용자 다운로드 UI / Supabase pricing & user 테이블
- ❌ DSM 공유 추적 (즉시 차단 필요시)
- ❌ deleted_at + N일 후 실제 파일 삭제 cron
- ❌ multi-admin 토큰 발급 권한 분리
- ❌ resumable multipart 업로드 (5GB tus 등)
- ❌ pack_files sort_order 편집 endpoint (admin UI 단계)
- ❌ monitoring (업로드 실패율, DSM API latency)

---
# Essential Mix 파이프라인 — 1시간 mix + essential 시각 스타일 + UX 강화 설계
> 작성일: 2026-05-09
> 관련 spec:
> - `2026-05-07-music-youtube-pipeline-design.md` (본 파이프라인의 베이스)
> - `2026-05-09-gpu-video-offload-design.md` (Windows GPU 인코딩)
---
## 1. 배경
현재 파이프라인은 **단일 트랙 → 단일 영상**(커버 + 가장자리 파형)만 지원. 사용자는 YouTube essential 채널처럼 **1시간 이상의 음악 mix + 차분한 배경 + 중앙 비주얼라이저** 영상을 원함.
또한 진행 중 산출물(커버·썸네일·영상)을 NAS 파일시스템에서 직접 확인하는 게 번거로워, 진행 탭에서도 미리보기 가능했으면 함.
---
## 2. 비목표
- 사용자 직접 업로드 사진/영상 (P3로 미룸)
- 360° 정확한 방사형 비주얼라이저 (ffmpeg 단독으로 한계 — `showfreqs` + ring overlay로 근사)
- Mix 자동 큐레이션(곡 자동 선택) — 기존 컴파일 탭의 수동 선택 그대로 활용
- AI 검토 가중치 자동 튜닝 (Mix와 단일 트랙의 다른 기준 등 — P3)
- 텔레그램 사진 첨부 — 본 작업의 PipelineDetailModal로 우선 해결, 차후 P3
---
## 3. 사용자 흐름
### 3-1. Mix 영상 만들기
```
[사용자] Compile 탭에서 트랙 N개 선택 → crossfade 설정 → 컴파일 시작
→ 컴파일 완료 (1시간+ mp3 생성, 기존 흐름)
→ 컴파일 카드에 [🎬 영상 만들기] 버튼 클릭
→ 백엔드: POST /api/music/pipeline { compile_job_id, visual_style: 'essential' }
→ 진행 탭으로 자동 이동, 새 카드 생성
→ 단계별 텔레그램 승인 (기존과 동일):
cover (또는 background_video) → video → thumbnail → metadata → AI 검토 → 발행
→ YouTube 비공개 영상 1편
```
### 3-2. 단일 트랙 영상 만들기 (기존)
진행 탭 모달에 라디오 "단일 트랙 / Mix" 추가. 단일 선택 시 기존 흐름 그대로.
### 3-3. 산출물 미리보기
진행 탭 카드의 cover/thumbnail 미니 썸네일 → 카드 클릭 → 상세 모달 → 큰 이미지 + 영상 플레이어 + 메타·검토 JSON.
---
## 4. 데이터 모델 변경
### 4-1. `video_pipelines` 테이블 확장
신규 컬럼:
```sql
ALTER TABLE video_pipelines ADD COLUMN compile_job_id INTEGER NULL REFERENCES compile_jobs(id);
ALTER TABLE video_pipelines ADD COLUMN visual_style TEXT NOT NULL DEFAULT 'essential';
ALTER TABLE video_pipelines ADD COLUMN background_mode TEXT NOT NULL DEFAULT 'static';
ALTER TABLE video_pipelines ADD COLUMN background_keyword TEXT;
```
| 컬럼 | 의미 |
|------|------|
| `track_id` (기존) | 단일 트랙 입력 시 |
| `compile_job_id` (신규) | Mix 입력 시 — `track_id` XOR `compile_job_id` |
| `visual_style` | `single` / `essential` |
| `background_mode` | `static` (사진) / `video_loop` (영상) |
| `background_keyword` | Pexels 검색용 (예: "rainy window cafe"). 비어있으면 장르 기반 자동 |
마이그레이션: `ADD COLUMN`은 SQLite에서 안전. 기존 행은 NULL 또는 default 값 부여.
### 4-2. `youtube_setup.visual_defaults` JSON 확장
기존:
```json
{"resolution": "1920x1080", "style": "visualizer", "background": "ai_cover"}
```
신규:
```json
{
"resolution": "1920x1080",
"default_visual_style": "essential",
"default_background_mode": "static",
"default_background_keyword": "",
"background_image_source": "ai", // ai | pexels (Mix는 default ai)
"subtitle_track_titles": true // Mix에서 곡명 자막 표시
}
```
기존 클라이언트 호환을 위해 미설정 키는 default로 fallback.
---
## 5. API 변경
### 5-1. `POST /api/music/pipeline` 요청 body 확장
```json
{
"track_id": 13,
// 또는
"compile_job_id": 5,
// 옵션 (default는 setup에서)
"visual_style": "essential", // single | essential
"background_mode": "static", // static | video_loop
"background_keyword": "rainy cafe"
}
```
검증:
- `track_id` XOR `compile_job_id` 정확히 하나만 — 둘 다거나 둘 다 없으면 400
- `compile_job_id`인 경우 `compile_jobs` 테이블에서 status='succeeded' 확인 — 아니면 400
- `visual_style` 미지정 시 `youtube_setup.visual_defaults.default_visual_style`
- `background_mode` 미지정 시 `youtube_setup.visual_defaults.default_background_mode`
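A minimal sketch of these checks, with an assumed request-model name (the real handler lives in `app/main.py`):
```python
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI()

class PipelineCreateRequest(BaseModel):  # model name is an assumption
    track_id: int | None = None
    compile_job_id: int | None = None
    visual_style: str | None = None
    background_mode: str | None = None
    background_keyword: str | None = None

@app.post("/api/music/pipeline", status_code=201)
async def create_pipeline(req: PipelineCreateRequest):
    # XOR: exactly one of the two inputs must be set
    if (req.track_id is None) == (req.compile_job_id is None):
        raise HTTPException(400, "track_id XOR compile_job_id")
    if req.compile_job_id is not None:
        job = db.get_compile_job(req.compile_job_id)  # db import omitted in this sketch
        if not job or job["status"] != "succeeded":
            raise HTTPException(400, "compile job not succeeded")
    ...
```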
응답:
```json
{
"id": 7,
"track_id": null,
"compile_job_id": 5,
"visual_style": "essential",
"background_mode": "static",
"state": "created",
...
}
```
### 5-2. `GET /api/music/pipeline/{id}` 응답 확장
신규 필드: `compile_job_id`, `visual_style`, `background_mode`, `background_keyword`, `tracks` (Mix면 트랙 리스트, 단일이면 단일 트랙 1개)
`tracks` 형식:
```json
[
{"id": 13, "title": "Lo-Fi Drive", "start_offset_sec": 0, "duration_sec": 176},
{"id": 14, "title": "Midnight Cafe", "start_offset_sec": 173, "duration_sec": 200},
...
]
```
`start_offset_sec`은 컴파일 시 acrossfade 적용을 고려한 누적 시작 시각 (=영상 자막 트리거 타이밍).
### 5-3. 변경 없음
`/feedback`, `/cancel`, `/publish`, `/setup`, `/youtube/*` 모두 그대로.
---
## 6. 백엔드 — NAS music-lab
### 6-1. `pipeline/orchestrator.py` 변경
`run_step`에 입력 audio 결정 로직 추가:
```python
def _resolve_input(p: dict) -> dict:
"""파이프라인 입력 = 단일 트랙 또는 컴파일 결과.
반환: {"audio_path": str, "duration_sec": int, "tracks": list[dict],
"title": str, "genre": str, "moods": list, ...}
"""
if p.get("compile_job_id"):
job = db.get_compile_job(p["compile_job_id"])
if not job or job["status"] != "succeeded":
raise ValueError(f"compile job {p['compile_job_id']} not ready")
# 누적 offset 계산 (acrossfade 고려)
tracks = []
offset = 0.0
crossfade = job["crossfade_sec"]
for tid in job["track_ids"]:
t = db.get_track_by_id(tid)
tracks.append({
"id": tid, "title": t["title"],
"start_offset_sec": offset,
"duration_sec": t["duration_sec"],
})
offset += t["duration_sec"] - crossfade # acrossfade overlap만큼 차감
return {
"audio_path": job["audio_path"], # /app/data/compiles/{id}.mp3
"duration_sec": int(offset + crossfade), # 마지막 트랙은 풀 길이
"tracks": tracks,
"title": job["title"] or "Mix",
"genre": "mix",
"moods": [],
}
else:
t = db.get_track_by_id(p["track_id"])
return {
"audio_path": t["file_path"],
"duration_sec": t["duration_sec"],
"tracks": [{"id": t["id"], "title": t["title"],
"start_offset_sec": 0, "duration_sec": t["duration_sec"]}],
"title": t["title"], "genre": t["genre"], "moods": t.get("moods", []),
}
```
각 step runner는 `_resolve_input(p)` 결과를 사용:
- `_run_cover`: `genre`, `moods`, `title` 활용 (Mix면 `genre="mix"` → "mix" 키 prompt 또는 default)
- `_run_video`: `audio_path`, `duration_sec`, `tracks` 모두 Windows로 전달
- `_run_meta`: `tracks` 리스트를 메타 prompt에 포함
- `_run_review`: `tracks` 리스트를 검토 prompt에 포함 (트랙 수, 다양한 장르 등)
### 6-2. `pipeline/cover.py` Pexels 폴백/대안
```python
async def generate(*, pipeline_id: int, genre: str, prompt_template: str,
mood: str = "", track_title: str = "", feedback: str = "",
image_source: str = "ai") -> dict:
"""image_source: 'ai' (DALL·E) | 'pexels' (스톡 검색)."""
if image_source == "pexels":
return await _generate_with_pexels(pipeline_id, genre, mood, track_title)
# 기존 AI 흐름 그대로
...
# AI 실패 시 — 그라데이션 폴백 대신 Pexels 시도 (config 옵션)
...
```
신규 `_generate_with_pexels`:
- Pexels API: `GET https://api.pexels.com/v1/search?query={keyword}&per_page=10`
- 결과 1번째 큰 사진 다운로드 → `/app/data/videos/{id}/cover.jpg`
- API key 미설정/실패 시 그라데이션 폴백
### 6-3. 신규 `pipeline/background.py` (video_loop 모드)
```python
async def fetch_video_loop(pipeline_id: int, keyword: str) -> dict:
"""Pexels Video API로 515초 루프 영상 받아옴.
/app/data/videos/{id}/loop.mp4 저장.
"""
# GET https://api.pexels.com/videos/search?query=...&per_page=5
# SD/HD 720p 중에서 골라 다운로드
...
return {"path": "/app/data/videos/{id}/loop.mp4", "duration_sec": ...}
```
오케스트레이터에서 `background_mode == "video_loop"` 분기 시 cover step 대신 또는 보조로 호출 (디자인 결정: cover step을 두 모드의 공통 입력 준비 단계로 통합 — 정적이면 cover.jpg, 영상이면 loop.mp4).
### 6-4. `pipeline/metadata.py` Mix 지원
`generate(*, track, template, trend_keywords, feedback="", tracks=None)` 시그니처 확장. `tracks` 있으면 Claude prompt에 다음 추가:
```
이 영상은 {len(tracks)}개 트랙의 mix입니다. 트랙 리스트:
1. [00:00] Lo-Fi Drive — lo-fi
2. [03:00] Midnight Cafe — lo-fi
...
설명에는 트랙 리스트를 타임스탬프와 함께 포함하세요.
```
응답 description은 자동으로 트랙리스트 포함됨. 이는 YouTube에서 챕터로 자동 인식.
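A small helper of the kind the prompt builder needs for those `[MM:SS]` stamps (`fmt_ts` is an assumed name); note YouTube only auto-detects chapters when the first timestamp is 00:00:
```python
def fmt_ts(sec: float) -> str:
    """초 → 'MM:SS' (1시간 이상이면 'H:MM:SS')."""
    m, s = divmod(int(sec), 60)
    h, m = divmod(m, 60)
    return f"{h}:{m:02d}:{s:02d}" if h else f"{m:02d}:{s:02d}"

tracks = [
    {"start_offset_sec": 0, "title": "Lo-Fi Drive"},
    {"start_offset_sec": 173, "title": "Midnight Cafe"},
]
lines = [f"{i}. [{fmt_ts(t['start_offset_sec'])}] {t['title']}" for i, t in enumerate(tracks, 1)]
# → ['1. [00:00] Lo-Fi Drive', '2. [02:53] Midnight Cafe']
```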
### 6-5. `pipeline/video.py` (NAS측, 변경 작음)
기존 함수에 추가 파라미터 전달:
```python
def generate(*, pipeline_id, audio_path, cover_path, genre, duration_sec,
resolution="1920x1080", style="essential",
background_mode="static", background_path=None,
tracks=None) -> dict:
payload = {
"audio_path_nas": ..., "cover_path_nas": ...,
"output_path_nas": ...,
"resolution": resolution,
"duration_sec": duration_sec,
"style": style, # NEW: single | essential
"background_mode": background_mode, # NEW: static | video_loop
"background_path_nas": ..., # NEW: video_loop일 때 loop.mp4 경로
"tracks": tracks, # NEW: Mix면 트랙 리스트 (자막용)
}
...
```
### 6-6. `db.py` 변경
신규 컬럼 추가 마이그레이션 + `get_compile_job(id)` (없으면 추가) + `get_track_by_id(id)` 활용.
---
## 7. 백엔드 — Windows music_ai
### 7-1. `/encode_video` 요청 확장
```json
{
"audio_path_nas": "...",
"cover_path_nas": "...",
"output_path_nas": "...",
"resolution": "1920x1080",
"duration_sec": 3600,
"style": "essential", // NEW
"background_mode": "static", // NEW
"background_path_nas": "...", // NEW: video_loop면 loop.mp4
"tracks": [ // NEW: 자막용
{"start_offset_sec": 0, "title": "Lo-Fi Drive"},
{"start_offset_sec": 173, "title": "Midnight Cafe"}
]
}
```
### 7-2. `video_encoder.py` 분기 로직
```python
def encode_video(*, ..., style="essential", background_mode="static",
background_path_nas=None, tracks=None):
if style == "single":
cmd = build_single_track_cmd(...)
else: # essential
if background_mode == "static":
cmd = build_essential_static_cmd(cover, audio, out, w, h, tracks)
else:
bg = translate_path(background_path_nas)
cmd = build_essential_video_loop_cmd(bg, audio, out, w, h, tracks)
...
```
### 7-3. Essential 정적 ffmpeg 명령
핵심 filter_complex 구조:
```
[0:v]scale=1920:1080,format=yuv420p[bg]; # 정적 배경 사진
[1:a]showfreqs=s=400x200:mode=bar:cmode=combined:colors=0xFFFFFF@0.9[bars]; # 중앙 막대
[2:v]format=rgba[ring]; # 데코 ring PNG (사전 제작 1장)
[bg][bars]overlay=(W-w)/2:(H-h)/2[mid]; # 막대 정중앙 배치
[mid][ring]overlay=(W-w)/2:(H-h)/2[viz]; # ring 데코 같은 위치
[viz]drawtext=...:enable='between(t,0,5)+between(t,173,178)+...'[final]
```
- `showfreqs s=400x200 mode=bar` — 가로 막대 (방사형 근사 1차 버전)
- `ring.png` — 사전 제작된 투명 PNG (`music_ai/assets/visualizer_ring.png`, 단순 흰색 원 + 외곽 점선)
- `drawtext` — 트랙 리스트 순회하며 enable expression 동적 생성
Future (V2): try `showcqt` / `showspectrum`; for a true 360° radial visualizer, evaluate external tools (e.g., SuperCollider, butterchurn).
### 7-4. Essential 영상 루프 ffmpeg 명령
```
[0:v]scale=1920:1080,setpts=PTS-STARTPTS[bg_loop];
loop=loop=-1:size=N # 루프 영상 무한 반복
[1:a]showfreqs=...[bars];
[bg_loop][bars]overlay=center[mid];
[mid][ring]overlay=center[viz];
... drawtext 동일
```
루프는 `-stream_loop -1 -i loop.mp4` 입력 옵션 + `-shortest` 출력으로 audio 길이만큼 반복.
### 7-5. 자막(곡명) drawtext
```python
def build_drawtext_filter(tracks, total_duration):
expressions = []
for tr in tracks:
start = tr["start_offset_sec"]
end = start + 5 # 5초 표시
# alpha fade in/out
text = tr["title"].replace(":", r"\:").replace("'", r"\'")
expressions.append(
f"drawtext=fontfile='Arial Bold':text='{text}'"
f":fontcolor=white:fontsize=36:x=(w-text_w)/2:y=h-100"
f":alpha='if(between(t,{start},{end}),"
f" if(lt(t-{start},1), t-{start}," # 0~1s fade in
f" if(gt(t-{start},4), {end}-t, 1)), 0)'" # 4~5s fade out
)
return ",".join(expressions) # 체인으로 연결
```
Fonts: use one installed on Windows. Arial covers Latin titles; for Korean track titles use Malgun Gothic (`malgun.ttf`, shipped with Windows; see `SUBTITLE_FONT` in §9) or NanumGothic if installed separately.
### 7-6. 신규 자산 파일
`music_ai/assets/visualizer_ring.png` — 1920×1080 캔버스 정중앙 400×400 영역에 그려진 흰색 원형 (외곽선 + 옅은 inner glow). 사전 제작 1장 — Pillow로 자동 생성도 가능 (서버 시작 시 없으면 생성).
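A sketch of that auto-generation path, Pillow only (plain white outline; the inner glow from the description is left out):
```python
import os

from PIL import Image, ImageDraw

def ensure_ring_png(path: str = "assets/visualizer_ring.png",
                    canvas: tuple[int, int] = (1920, 1080),
                    ring_px: int = 400) -> str:
    """서버 시작 시 1회: ring overlay PNG가 없으면 생성."""
    if os.path.exists(path):
        return path
    os.makedirs(os.path.dirname(path), exist_ok=True)
    img = Image.new("RGBA", canvas, (0, 0, 0, 0))        # fully transparent canvas
    draw = ImageDraw.Draw(img)
    cx, cy, r = canvas[0] // 2, canvas[1] // 2, ring_px // 2
    draw.ellipse([cx - r, cy - r, cx + r, cy + r],
                 outline=(255, 255, 255, 230), width=4)  # simple white ring
    img.save(path)
    return path
```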
---
## 8. 프론트엔드 변경
### 8-1. `CompileTab.jsx` — 영상 만들기 버튼
완료된 compile job 카드에 버튼 추가:
```jsx
{job.status === 'succeeded' && (
<button onClick={() => handleVideoFromCompile(job.id)}>
🎬 영상 만들기
</button>
)}
```
`handleVideoFromCompile`:
```js
async (compileJobId) => {
const p = await createPipeline({ compile_job_id: compileJobId });
await startPipeline(p.id);
// 진행 탭으로 이동 (router push 또는 setTab + setOpenPipelineFor 패턴)
};
```
### 8-2. `PipelineStartModal.jsx` 확장
```jsx
const [inputType, setInputType] = useState('track'); // 'track' | 'compile'
const [compileJobs, setCompileJobs] = useState([]);
useEffect(() => {
if (inputType === 'compile') getCompileJobs().then(setCompileJobs);
}, [inputType]);
return (
<div className="modal-body">
<h3> 파이프라인 시작</h3>
<fieldset>
<legend>입력</legend>
<label><input type="radio" checked={inputType==='track'}
onChange={() => setInputType('track')}/> 단일 트랙</label>
<label><input type="radio" checked={inputType==='compile'}
onChange={() => setInputType('compile')}/> Mix (컴파일 결과)</label>
</fieldset>
{inputType === 'track' && (
<select>{library.map(...)}</select>
)}
{inputType === 'compile' && (
<select>{compileJobs.filter(j=>j.status==='succeeded').map(j =>
<option key={j.id} value={j.id}>{j.title} ({j.tracks_count}, {fmtDuration(j.duration_sec)})</option>
)}</select>
)}
{/* 시각 모드 override */}
<details>
<summary>고급 옵션</summary>
<select>visual_style: single | essential</select>
<select>background_mode: static | video_loop</select>
<input>background_keyword</input>
</details>
{/* ... 기존 시작/취소 버튼 */}
</div>
);
```
### 8-3. `PipelineCard.jsx` — 미리보기 inline
```jsx
return (
<div className="pipeline-card" onClick={() => setShowDetail(true)}>
<div className="pipeline-card__head">
<h4>{pipeline.track_title || pipeline.compile_title || `Pipeline #${pipeline.id}`}</h4>
<span className="pipeline-style-badge">{pipeline.visual_style}</span>
...
</div>
{/* 미니 미리보기 */}
<div className="pipeline-previews">
{pipeline.cover_url && <img src={pipeline.cover_url} alt="" className="pipeline-preview-mini" />}
{pipeline.thumbnail_url && <img src={pipeline.thumbnail_url} alt="" className="pipeline-preview-mini" />}
{pipeline.video_url && <span className="pipeline-video-icon"></span>}
</div>
{/* 진행도 바 + 현재 상태 (기존) */}
...
</div>
);
```
### 8-4. `PipelineDetailModal.jsx` (신규)
```jsx
export default function PipelineDetailModal({ pipeline, onClose }) {
return (
<div className="modal-overlay" onClick={onClose}>
<div className="modal-body modal-body--lg" onClick={e=>e.stopPropagation()}>
<header>
<h3>{pipeline.compile_title || pipeline.track_title}</h3>
<span className="badge">{pipeline.visual_style}</span>
<button onClick={onClose}>×</button>
</header>
{/* 큰 미리보기 그리드 */}
<div className="pdm-grid">
{pipeline.cover_url && (
<figure>
<img src={pipeline.cover_url} alt="cover" />
<figcaption>커버 (배경)</figcaption>
</figure>
)}
{pipeline.thumbnail_url && (
<figure>
<img src={pipeline.thumbnail_url} alt="thumbnail" />
<figcaption>썸네일</figcaption>
</figure>
)}
</div>
{/* 영상 플레이어 */}
{pipeline.video_url && (
<div className="pdm-video">
<video src={pipeline.video_url} controls width="100%" />
</div>
)}
{/* 메타데이터 */}
{pipeline.metadata && (
<section className="pdm-meta">
<h4>메타데이터</h4>
<p><strong>제목:</strong> {pipeline.metadata.title}</p>
<details>
<summary>설명</summary>
<pre>{pipeline.metadata.description}</pre>
</details>
<p><strong>태그:</strong> {pipeline.metadata.tags?.join(', ')}</p>
</section>
)}
{/* AI 검토 */}
{pipeline.review && (
<section className="pdm-review">
<h4>AI 검토 <span className="badge">{pipeline.review.verdict}</span> ({pipeline.review.weighted_total}/100)</h4>
<table>
<tbody>
<tr><td>메타데이터 품질</td><td>{pipeline.review.metadata_quality.score}</td></tr>
<tr><td>콘텐츠 정책</td><td>{pipeline.review.policy_compliance.score}</td></tr>
<tr><td>시청 경험</td><td>{pipeline.review.viewer_experience.score}</td></tr>
<tr><td>트렌드 정렬</td><td>{pipeline.review.trend_alignment.score}</td></tr>
</tbody>
</table>
<p><em>{pipeline.review.summary}</em></p>
</section>
)}
{/* 트랙 리스트 (Mix일 때) */}
{pipeline.tracks && pipeline.tracks.length > 1 && (
<section className="pdm-tracks">
<h4>트랙 리스트 ({pipeline.tracks.length})</h4>
<ol>
{pipeline.tracks.map(t => (
<li key={t.id}>
[{fmtTimestamp(t.start_offset_sec)}] {t.title} ({fmtDuration(t.duration_sec)})
</li>
))}
</ol>
</section>
)}
{/* 피드백 히스토리 */}
{pipeline.feedback && pipeline.feedback.length > 0 && (
<section className="pdm-feedback">
<h4>피드백 ({pipeline.feedback.length})</h4>
<ul>
{pipeline.feedback.map(f => (
<li key={f.id}>
<code>[{f.step}]</code> {f.feedback_text}
<small>{f.received_at}</small>
</li>
))}
</ul>
</section>
)}
{/* YouTube 링크 */}
{pipeline.youtube_video_id && (
<a href={`https://youtu.be/${pipeline.youtube_video_id}`}
target="_blank" rel="noreferrer" className="pdm-youtube">
🎬 YouTube에서 보기
</a>
)}
</div>
</div>
);
}
```
### 8-5. `SetupTab.jsx` 확장
영상 비주얼 기본값 카드 확장:
- **default_visual_style** 드롭다운: `single` / `essential`
- **default_background_mode** 드롭다운: `static` / `video_loop`
- **default_background_keyword** 텍스트 입력 (예: "lofi cafe")
- **background_image_source** 드롭다운: `ai` / `pexels`
- **subtitle_track_titles** 체크박스: Mix에서 곡명 자막 표시
---
## 9. 환경변수 (NAS측)
신규 — 이미 `.env`에 있을 가능성 높음:
```env
PEXELS_API_KEY=xxx # 이미 있음 (현재 미사용)
```
신규 (Windows측 — music_ai/.env):
```env
# 한글 자막용 폰트 경로 (선택)
SUBTITLE_FONT=C:\Windows\Fonts\malgun.ttf
```
---
## 10. 에러 처리
| 시나리오 | 결과 |
|---------|------|
| compile_job 미완료 (status != succeeded) | POST /pipeline 시 400 |
| compile_job 삭제됨 | get_pipeline에서 `compile_title=null`, 진행 탭에 "삭제됨" 배지 |
| Pexels API 실패 (image) | AI 폴백 |
| Pexels API 실패 (video) | 단색 폴백 + 텔레그램에 "Pexels 실패" 명시 |
| drawtext 자막 한글 폰트 누락 | 자막 없이 인코딩 + 경고 로그 |
| 1시간 NVENC timeout | 영상 단계 timeout 600s → 그래도 부족하면 failed (보통 NVENC면 5분 내) |
---
## 11. 테스트 전략
### 11-1. 단위 테스트 (NAS music-lab)
| 대상 | 테스트 |
|------|--------|
| `orchestrator._resolve_input` | track_id 분기 / compile_job_id 분기 / 둘 다 / 둘 다 없음 / compile not ready |
| `cover.generate` `image_source='pexels'` | Pexels API mock + 다운로드 + 파일 저장 |
| `background.fetch_video_loop` | Pexels Video API mock + mp4 다운로드 |
| `metadata.generate` `tracks=[...]` | 트랙 리스트가 prompt에 포함되는지, 응답 description에 chapter 포맷 |
| API `POST /pipeline { compile_job_id }` | 정상 / not ready 400 / 둘 다 400 / 단일은 기존 작동 |
| DB 마이그레이션 | 새 컬럼 default 값 |
### 11-2. 단위 테스트 (Windows music_ai)
| 대상 | 테스트 |
|------|--------|
| `build_essential_static_cmd` | filter_complex 문자열 검증 (showfreqs, overlay 위치 등) |
| `build_drawtext_filter` | 트랙 N개 → enable expression N개 생성, alpha fade 검증 |
| `encode_video` `style='essential'` | 새 분기 호출됨 |
| `encode_video` `style='single'` | 기존 단일 트랙 명령 그대로 |
| 자산 ring.png 자동 생성 | 서버 시작 시 없으면 PIL로 생성 |
### 11-3. 통합 테스트
`test_essential_pipeline_flow.py`:
- compile job 생성 → 파이프라인 시작 (compile_job_id) → 모든 단계 mock → published → tracks 리스트가 metadata description에 포함됐는지
### 11-4. 수동 E2E
- [ ] 컴파일 탭에서 3-5분 mix 컴파일
- [ ] "🎬 영상 만들기" 클릭 → 진행 탭 카드 생성, visual_style=essential
- [ ] cover 단계 → 텔레그램 알림 + 카드에 cover 미니 썸네일 표시
- [ ] 카드 클릭 → 상세 모달 → cover 큰 이미지, 메타·검토 영역 표시 (해당 단계 진행 시)
- [ ] 모든 단계 승인 → 발행 → YouTube 비공개 영상에 essential 시각 + 챕터 자동 인식 확인
- [ ] 1시간 mix로 동일 흐름 — Windows NVENC 인코딩 시간 5분 미만 확인
- [ ] background_mode=video_loop로 시도 — Pexels 영상 다운로드 + 루프 인코딩
---
## 12. 마이그레이션 + 배포
### 12-1. DB 마이그레이션
`init_db()` 신규 컬럼 `ALTER TABLE` (SQLite는 idempotent: 컬럼 존재 확인 후 추가):
```python
def _add_column_if_missing(cursor, table, column, ddl):
cursor.execute(f"PRAGMA table_info({table})")
cols = [r[1] for r in cursor.fetchall()]
if column not in cols:
cursor.execute(f"ALTER TABLE {table} ADD COLUMN {column} {ddl}")
```
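Applied to the §4-1 columns, the calls inside `init_db()` would then be (a sketch; `cur` is the open cursor):
```python
_add_column_if_missing(cur, "video_pipelines", "compile_job_id",
                       "INTEGER NULL REFERENCES compile_jobs(id)")
_add_column_if_missing(cur, "video_pipelines", "visual_style",
                       "TEXT NOT NULL DEFAULT 'essential'")
_add_column_if_missing(cur, "video_pipelines", "background_mode",
                       "TEXT NOT NULL DEFAULT 'static'")
_add_column_if_missing(cur, "video_pipelines", "background_keyword", "TEXT")
```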
### 12-2. 자산 파일
`music_ai/assets/visualizer_ring.png`은 git에 커밋 (small, ~30KB). Windows 측이므로 사용자가 수동 배포 (이미 music_ai는 로컬 전용).
또는 **서버 시작 시 자동 생성** (PIL로 단순 ring 그리기) — 권장. assets 디렉토리도 자동 생성.
### 12-3. 환경변수
NAS `.env` 변경 없음 (PEXELS_API_KEY 이미 있음).
Windows `.env``SUBTITLE_FONT` 추가 (선택).
---
## 13. 산출물
| 영역 | 파일 |
|------|------|
| Spec/Plan | 본 문서 + plan |
| NAS music-lab | `db.py` (마이그레이션), `pipeline/orchestrator.py` (resolve_input), `pipeline/cover.py` (Pexels 분기), `pipeline/background.py` (신규), `pipeline/metadata.py` (tracks 옵션), `pipeline/video.py` (style/background 파라미터), `app/main.py` (POST /pipeline body 확장) |
| Windows music_ai | `video_encoder.py` (style 분기, drawtext, ring), `server.py` (요청 schema 확장), `assets/visualizer_ring.png` (자동 생성), Pillow 이미 있음 |
| Frontend | `CompileTab.jsx` (영상 만들기 버튼), `PipelineStartModal.jsx` (라디오), `PipelineCard.jsx` (미리보기 inline), `PipelineDetailModal.jsx` (신규), `SetupTab.jsx` (visual_defaults 확장), `api.js` 헬퍼 추가, `MusicStudio.css` 스타일 |
| 테스트 | NAS 단위 6+ / Windows 단위 5+ / 통합 1 / 수동 E2E |
---
## 14. 후속 (P3)
- 사용자 직접 사진/영상 업로드
- 텔레그램에 cover/thumbnail 사진 첨부
- 360° 진짜 방사형 visualizer (외부 도구 또는 GPU shader)
- AI 검토 가중치 mix vs 단일 자동 분리
- Pexels 검색 미리보기 UI (구성 탭에서 "이 키워드로 검색해보기" 버튼)
---

# GPU 영상 인코딩 오프로드 — 설계
> 작성일: 2026-05-09
> 관련: `2026-05-07-music-youtube-pipeline-design.md` (Task 4 대체)
---
## 1. 배경
Encoding 1920×1080 visualizer video on the NAS Synology Celeron J4025 (2 cores @ 2.0GHz, no GPU) is far too slow: a 176-second track exceeds the 5-minute limit → ffmpeg `subprocess.TimeoutExpired`. Even `-preset ultrafast` barely helps and degrades quality.
Alternative: offload to the user's Windows PC (RTX 5070 Ti, 16GB VRAM) using NVIDIA NVENC hardware encoding. The same video completes in roughly **10-20 seconds** (20×+ faster).
The `music_ai` server (Windows, port 8765) is already running for MusicGen, so **adding a video-encoding endpoint to that same server** is the most natural fit.
---
## 2. 비목표
- Multi-GPU / multi-machine — only the single Windows PC is supported
- NAS-local ffmpeg fallback — excluded by user decision (a clear failure is preferred when the Windows server is down)
- Video length limits — typical track lengths (roughly 1-10 minutes) are assumed
- Authentication — LAN only, unauthenticated
---
## 3. 아키텍처
```
┌────────────────────────────────────────────────────────────┐
│ NAS (Synology) │
│ │
│ music-lab container │
│ pipeline/video.py │
│ ↓ HTTP POST {paths, resolution} │
│ ↓ 192.168.45.59:8765/encode_video │
│ │
│ /volume1/docker/webpage/data/ │
│ videos/{id}/cover.jpg ← input │
│ videos/{id}/video.mp4 ← output (Windows가 직접 씀) │
│ {audio}.mp3 ← input │
└────────────────────────────────────────────────────────────┘
↓ HTTP ↑ SMB read/write
↓ ↑ (Z:\ 마운트)
┌────────────────────────────────────────────────────────────┐
│ Windows PC (192.168.45.59) │
│ │
│ music_ai server.py (port 8765) │
│ • POST /generate (기존, MusicGen) │
│ • POST /encode_video (신규) │
│ ↓ 경로 변환: /volume1/... → Z:\... │
│ ↓ ffmpeg.exe -hwaccel cuda -c:v h264_nvenc ... │
│ ↓ 입력/출력 모두 Z:\ 직접 (SMB) │
│ ↓ 응답: {ok, duration_ms, output_path} │
│ │
│ Z:\docker\webpage\data\ (NAS SMB mount, 기존) │
│ videos\{id}\cover.jpg │
│ videos\{id}\video.mp4 │
│ {audio}.mp3 │
└────────────────────────────────────────────────────────────┘
```
**핵심 원칙:** 파일은 SMB로 직접 읽고 쓰기 — HTTP는 메타데이터(경로 + 옵션)만 전달.
---
## 4. Windows `music_ai` 서버 — `/encode_video` endpoint
### 4-1. Request
```http
POST /encode_video HTTP/1.1
Host: 192.168.45.59:8765
Content-Type: application/json
```
| 필드 | 타입 | 필수 | 설명 |
|------|------|------|------|
| `cover_path_nas` | string | ✓ | 배경 이미지 NAS 절대경로 |
| `audio_path_nas` | string | ✓ | 오디오 파일 NAS 절대경로 |
| `output_path_nas` | string | ✓ | 출력 mp4 NAS 절대경로 |
| `resolution` | string | ✓ | `WIDTHxHEIGHT` (예: `1920x1080`) |
| `duration_sec` | int | | 트랙 길이 — 진행 추적용 (옵션) |
| `style` | string | | 현재 `visualizer`만 (확장용) |
### 4-2. Response
**성공 (200):**
```json
{
"ok": true,
"duration_ms": 12340,
"output_path_nas": "/volume1/docker/webpage/data/videos/3/video.mp4",
"output_bytes": 28470000,
"encoder": "h264_nvenc",
"preset": "p4"
}
```
**실패 (4xx/5xx):**
```json
{
"ok": false,
"error": "ffmpeg returncode=1: ...",
"stage": "ffmpeg" // path_translate | input_validation | ffmpeg | output_check
}
```
### 4-3. 경로 변환
Windows 서버는 `nas_path → windows_path` 변환을 환경변수 기반으로 수행:
```python
# .env (Windows music_ai)
NAS_VOLUME_PREFIX=/volume1/
WINDOWS_DRIVE_ROOT=Z:\
```
변환 로직:
```python
def translate_path(nas_path: str) -> str:
# /volume1/docker/webpage/data/videos/3/cover.jpg
# → Z:\docker\webpage\data\videos\3\cover.jpg
if not nas_path.startswith(NAS_VOLUME_PREFIX):
raise ValueError(f"NAS prefix 불일치: {nas_path}")
rel = nas_path[len(NAS_VOLUME_PREFIX):] # "docker/webpage/..."
return WINDOWS_DRIVE_ROOT + rel.replace("/", "\\")
```
### 4-4. 입력 검증
ffmpeg 호출 전:
- `cover_path` 변환된 Windows 경로의 파일 존재 확인 → 없으면 400 stage=input_validation
- `audio_path` 동일
- `output_path`의 부모 디렉토리 존재 확인 — 없으면 자동 생성
- `resolution` 정규식 `^\d{3,4}x\d{3,4}$` 검증 → 실패 시 400
### 4-5. ffmpeg 명령 (NVENC)
```python
def build_visualizer_cmd(cover_win, audio_win, out_win, w, h):
return [
"ffmpeg", "-y",
"-hwaccel", "cuda",
"-loop", "1", "-i", cover_win,
"-i", audio_win,
"-filter_complex",
f"[0:v]scale={w}:{h},format=yuv420p[bg];"
f"[1:a]showwaves=s={w}x200:mode=cline:colors=0xFF4444@0.8[wave];"
f"[bg][wave]overlay=0:({h}-200)[out]",
"-map", "[out]", "-map", "1:a",
"-c:v", "h264_nvenc",
"-preset", "p4", # quality preset (p1=fastest, p7=slowest/best)
"-rc", "vbr",
"-cq", "23", # quality (lower=better, 18-25 sane range)
"-b:v", "0", # let CQ control bitrate
"-pix_fmt", "yuv420p", # YouTube 호환
"-c:a", "aac", "-b:a", "192k",
"-shortest", out_win,
]
```
**Key flags:**
- `-hwaccel cuda` — use CUDA
- `-c:v h264_nvenc` — NVIDIA NVENC H.264 encoder
- `-preset p4` — quality/speed balance (roughly 10-20s for a 1080p video on the 5070 Ti)
- `-rc vbr -cq 23 -b:v 0` — VBR at constant quality (CQ 23 ≈ CRF 23)
- explicit `format=yuv420p` — NVENC occasionally emits yuv444, which YouTube does not accept
### 4-6. 타임아웃 + 출력 검증
- ffmpeg subprocess timeout: **180초** (NAS 측 HTTP timeout 200s 미만)
- 종료 후 출력 파일 존재 + 크기 > 1MB 검증 → 미달 시 stage=output_check 실패
- 종료 코드 0이지만 파일 비어있는 케이스 catch
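Combining the timeout and both output checks, a sketch (the helper name `run_and_check` is not in the source):
```python
import os
import subprocess

FFMPEG_TIMEOUT_S = 180        # stays under the NAS-side 200s HTTP timeout
MIN_OUTPUT_BYTES = 1_000_000  # the ~1MB floor from above

def run_and_check(cmd: list[str], out_win: str) -> None:
    # subprocess.TimeoutExpired propagates after 180s → reported as stage=ffmpeg
    proc = subprocess.run(cmd, capture_output=True, text=True, timeout=FFMPEG_TIMEOUT_S)
    if proc.returncode != 0:
        raise RuntimeError(f"ffmpeg returncode={proc.returncode}: {proc.stderr[-300:]}")
    # catches the exit-code-0-but-empty-file case → stage=output_check
    if not os.path.isfile(out_win) or os.path.getsize(out_win) < MIN_OUTPUT_BYTES:
        raise RuntimeError("output_check: output missing or smaller than 1MB")
```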
### 4-7. 동시 처리
별도 큐 없음. 동시 호출 시 ffmpeg 프로세스 병렬 실행 — RTX 5070 Ti는 NVENC 세션 5개까지 지원.
단일 사용자 시나리오에서 동시 인코딩은 거의 발생 안 함. 발생해도 GPU 리소스 충분.
### 4-8. 헬스 체크 확장
기존 `GET /health`에 인코더 가용성 정보 추가:
```json
{
"ok": true,
"gpu": "NVIDIA GeForce RTX 5070 Ti",
"musicgen_loaded": true,
"ffmpeg_path": "C:/ffmpeg/bin/ffmpeg.exe",
"ffmpeg_nvenc": true
}
```
`ffmpeg_nvenc` check: at server start, run `ffmpeg -encoders` once, search its output for `h264_nvenc`, and cache the result (no shell pipe needed on Windows), as sketched below.
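A possible shape of that cached probe (a sketch; `FFMPEG_PATH` resolution omitted):
```python
import functools
import subprocess

@functools.lru_cache(maxsize=1)
def has_nvenc(ffmpeg_path: str = "ffmpeg") -> bool:
    """Probe once at startup, cache the answer: is h264_nvenc available?"""
    out = subprocess.run([ffmpeg_path, "-hide_banner", "-encoders"],
                         capture_output=True, text=True).stdout
    return "h264_nvenc" in out
```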
---
## 5. NAS music-lab — `pipeline/video.py` 리팩토링
### 5-1. 환경변수 (필수)
```env
WINDOWS_VIDEO_ENCODER_URL=http://192.168.45.59:8765
```
If unset: `pipeline/video.py` fails fast with a clear `VideoGenerationError` as soon as an encode is requested (see the guard at the top of `generate()` below).
### 5-2. `video.generate(...)` — 새 구현
```python
"""영상 비주얼 생성 — Windows GPU 서버 (NVENC) 호출."""
import os
import logging
import httpx
from . import storage
logger = logging.getLogger("music-lab.video")
ENCODER_URL = os.getenv("WINDOWS_VIDEO_ENCODER_URL", "")
ENCODER_TIMEOUT_S = 200 # Windows 서버 ffmpeg 180s + 마진
class VideoGenerationError(Exception):
pass
def generate(*, pipeline_id: int, audio_path: str, cover_path: str,
genre: str, duration_sec: int, resolution: str = "1920x1080",
style: str = "visualizer") -> dict:
"""원격 Windows 서버 호출. 다운/실패 시 즉시 예외."""
if not ENCODER_URL:
raise VideoGenerationError(
"WINDOWS_VIDEO_ENCODER_URL 미설정 — Windows 인코더 서버 주소 필요"
)
out_path = os.path.join(storage.pipeline_dir(pipeline_id), "video.mp4")
nas_audio = _container_to_nas(audio_path)
nas_cover = _container_to_nas(cover_path)
nas_output = _container_to_nas(out_path)
payload = {
"cover_path_nas": nas_cover,
"audio_path_nas": nas_audio,
"output_path_nas": nas_output,
"resolution": resolution,
"duration_sec": duration_sec,
"style": style,
}
logger.info("Windows 인코더 호출: %s%s", audio_path, out_path)
try:
with httpx.Client(timeout=ENCODER_TIMEOUT_S) as client:
resp = client.post(f"{ENCODER_URL}/encode_video", json=payload)
except (httpx.ConnectError, httpx.ReadTimeout, httpx.WriteTimeout) as e:
raise VideoGenerationError(f"Windows 인코더 연결 실패: {e}")
if resp.status_code != 200:
try:
detail = resp.json()
except Exception:
detail = {"error": resp.text[:300]}
raise VideoGenerationError(
f"Windows 인코더 오류 ({resp.status_code}): "
f"{detail.get('stage','?')}{detail.get('error','?')}"
)
data = resp.json()
if not data.get("ok"):
raise VideoGenerationError(f"Windows 인코더 응답 ok=false: {data}")
return {
"url": storage.media_url(pipeline_id, "video.mp4"),
"used_fallback": False,
"duration_sec": duration_sec,
"encode_duration_ms": data.get("duration_ms"),
"encoder": data.get("encoder", "h264_nvenc"),
}
def _container_to_nas(container_path: str) -> str:
""" /app/data/videos/3/cover.jpg → /volume1/docker/webpage/data/videos/3/cover.jpg
/app/data/abc.mp3 → /volume1/docker/webpage/data/music/abc.mp3
"""
nas_videos_root = os.getenv("NAS_VIDEOS_ROOT", "/volume1/docker/webpage/data/videos")
nas_music_root = os.getenv("NAS_MUSIC_ROOT", "/volume1/docker/webpage/data/music")
if container_path.startswith("/app/data/videos/"):
return container_path.replace("/app/data/videos/", nas_videos_root + "/", 1)
if container_path.startswith("/app/data/"):
# 음악 파일 마운트가 /app/data 직접이라 서브디렉토리 없음 → music root에 직접
rel = container_path[len("/app/data/"):]
return nas_music_root + "/" + rel
return container_path # fallback (shouldn't happen)
```
### 5-3. 제거 항목
- `subprocess.run(...)` ffmpeg 호출 — 완전 제거
- `VIDEO_TIMEOUT_S = 600` — 사용 안 함 (`ENCODER_TIMEOUT_S`로 대체)
- `_build_visualizer_cmd` — 제거 (Windows 서버로 이전)
- `subprocess.TimeoutExpired` 예외 처리 — 제거
### 5-4. 환경변수 (NAS music-lab)
```yaml
# docker-compose.yml music-lab service environment
WINDOWS_VIDEO_ENCODER_URL: ${WINDOWS_VIDEO_ENCODER_URL}
NAS_VIDEOS_ROOT: ${NAS_VIDEOS_ROOT:-/volume1/docker/webpage/data/videos}
NAS_MUSIC_ROOT: ${NAS_MUSIC_ROOT:-/volume1/docker/webpage/data/music}
```
NAS `.env` 추가:
```env
WINDOWS_VIDEO_ENCODER_URL=http://192.168.45.59:8765
```
---
## 6. 에러 응답 매트릭스
| 상황 | NAS 측 결과 | 사용자 경험 |
|------|------------|-------------|
| Windows PC 꺼짐 | `VideoGenerationError("연결 실패")` | 진행 카드 `failed`, 텔레그램에 명확한 에러 |
| Windows ffmpeg 실패 | `VideoGenerationError("Windows 인코더 오류 500: ffmpeg — ...")` | 동일 |
| 입력 파일 NAS에 없음 | Windows가 400 응답 | "input_validation: cover not found" 메시지 |
| 출력 파일이 비어있음 | Windows가 500 응답 | "output_check: file empty" |
| 타임아웃 (180s+) | Windows가 504 응답 또는 connection close | "타임아웃 — GPU 부하 또는 입력 손상" |
| WINDOWS_VIDEO_ENCODER_URL 미설정 | 즉시 `VideoGenerationError` | 환경 미설정 안내 |
모두 pipeline state `failed`로 전이. 재생성 5회 한도 적용.
---
## 7. 헬스 모니터링
NAS music-lab 시작 시 1회 `GET {ENCODER_URL}/health` 호출 → 결과를 로그에 출력:
- 성공 + `ffmpeg_nvenc=true` → 인코더 사용 가능
- 실패 → 경고 로그 (구동은 계속, 호출 시점에 명확한 에러)
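A sketch of that startup probe (function name assumed); it only logs, so service startup is never blocked:
```python
import logging

import httpx

logger = logging.getLogger("music-lab.video")

def probe_encoder(url: str) -> None:
    """기동 시 1회 호출; 실패해도 경고 로그만 남기고 계속 구동."""
    try:
        data = httpx.get(f"{url}/health", timeout=5).json()
        if data.get("ffmpeg_nvenc"):
            logger.info("Windows 인코더 사용 가능: %s", data.get("gpu"))
        else:
            logger.warning("인코더 응답은 있으나 NVENC 미가용: %s", data)
    except Exception as exc:
        logger.warning("Windows 인코더 health check 실패: %s", exc)
```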
---
## 8. 테스트 전략
### 8-1. NAS music-lab 단위 테스트
`music-lab/tests/test_video_thumb.py` — 기존 ffmpeg 테스트를 HTTP mock 기반으로 교체:
```python
import httpx
import pytest
import respx
from httpx import Response

from app.pipeline import video  # import 경로는 실제 테스트 배치 기준으로 조정 (가정)

@respx.mock
def test_generate_video_calls_remote_encoder(monkeypatch):
    monkeypatch.setenv("WINDOWS_VIDEO_ENCODER_URL", "http://192.168.45.59:8765")
    monkeypatch.setattr(video, "ENCODER_URL", "http://192.168.45.59:8765")
    respx.post("http://192.168.45.59:8765/encode_video").mock(
        return_value=Response(200, json={
            "ok": True, "duration_ms": 12000,
            "output_path_nas": "/volume1/...",
            "encoder": "h264_nvenc", "preset": "p4"
        })
    )
    out = video.generate(...)
    assert out["url"].endswith("/video.mp4")
    assert out["encode_duration_ms"] == 12000

@respx.mock
def test_generate_video_raises_on_connection_error(monkeypatch):
    monkeypatch.setattr(video, "ENCODER_URL", "http://192.168.45.59:8765")
    respx.post("http://192.168.45.59:8765/encode_video").mock(
        side_effect=httpx.ConnectError("Connection refused")
    )
    with pytest.raises(video.VideoGenerationError) as exc:
        video.generate(...)
    assert "연결 실패" in str(exc.value)

def test_generate_video_no_url_configured(monkeypatch):
    monkeypatch.setattr(video, "ENCODER_URL", "")
    with pytest.raises(video.VideoGenerationError) as exc:
        video.generate(...)
    assert "WINDOWS_VIDEO_ENCODER_URL" in str(exc.value)
```
기존 `test_generate_video_calls_ffmpeg` / `test_generate_video_failure_marks_failed` 제거.
### 8-2. Windows `music_ai` 단위 테스트
`music_ai/tests/test_video_encoder.py` (신규):
```python
@patch("subprocess.run")
def test_translate_path():
assert video_encoder.translate_path("/volume1/docker/webpage/data/x.jpg") == r"Z:\docker\webpage\data\x.jpg"
def test_translate_path_rejects_bad_prefix():
with pytest.raises(ValueError):
video_encoder.translate_path("/something/else/x.jpg")
@patch("subprocess.run")
def test_encode_endpoint_success(mock_run, client, tmp_path):
# mock paths exist + ffmpeg succeeds
...
@patch("subprocess.run")
def test_encode_endpoint_input_missing(mock_run, client):
# 입력 파일 안 보이면 400
...
@patch("subprocess.run")
def test_encode_endpoint_ffmpeg_fails(mock_run, client, tmp_path):
# ffmpeg returncode=1 → 500 stage=ffmpeg
...
```
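참고 — 위 테스트가 전제하는 `translate_path` 동작의 최소 스케치 (가정: env 기본값은 9장 `.env` 예시와 동일):
```python
import os

NAS_VOLUME_PREFIX = os.getenv("NAS_VOLUME_PREFIX", "/volume1/")
WINDOWS_DRIVE_ROOT = os.getenv("WINDOWS_DRIVE_ROOT", "Z:\\")

def translate_path(nas_path: str) -> str:
    """NAS 절대경로(/volume1/...)를 Windows SMB 경로(Z:\...)로 변환.
    prefix 불일치 시 ValueError — 잘못된 경로로 ffmpeg를 돌리지 않기 위한 방어."""
    if not nas_path.startswith(NAS_VOLUME_PREFIX):
        raise ValueError(f"unexpected NAS path: {nas_path}")
    rel = nas_path[len(NAS_VOLUME_PREFIX):]
    return WINDOWS_DRIVE_ROOT + rel.replace("/", "\\")
```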
### 8-3. 통합 테스트
기존 `test_pipeline_flow.py`는 `cover.generate`를 mock하므로 영향 없음. video도 같이 mock — 변경 없음.
### 8-4. 수동 E2E
- [ ] Windows PC에서 `music_ai` 서버 시작 → `curl http://192.168.45.59:8765/health` → `ffmpeg_nvenc: true` 확인
- [ ] NAS에서 `curl -X POST http://192.168.45.59:8765/encode_video -d '{...}'` 직접 호출 → 200 응답 + Z:\에 video.mp4 생성 확인
- [ ] 진행 탭에서 새 파이프라인 시작 → 영상 단계가 10-20초 안에 완료 → 텔레그램 알림 도착
- [ ] Windows PC 꺼두고 새 파이프라인 시작 → 영상 단계 즉시 실패 → 진행 카드 failed + 명확한 에러 메시지
---
## 9. Windows PC 사전 준비
사용자가 Windows PC에서 1회 수행할 작업:
1. **ffmpeg + NVENC 빌드 설치**
- https://www.gyan.dev/ffmpeg/builds/ → "release full" 다운로드
- 압축 해제 → `C:\ffmpeg\bin\ffmpeg.exe`
- PATH 환경변수에 `C:\ffmpeg\bin` 추가
- 검증: `ffmpeg -version` 동작, `ffmpeg -encoders | findstr h264_nvenc` 결과 출력
2. **NVIDIA 드라이버** — 이미 MusicGen용으로 설치돼 있음
3. **SMB 마운트 확인**`Z:\docker\webpage\` 접근 가능해야 함
4. **방화벽** — 포트 8765 LAN 인바운드 허용 (이미 MusicGen용으로 설정돼 있음)
5. **`music_ai/.env`에 추가**:
```env
NAS_VOLUME_PREFIX=/volume1/
WINDOWS_DRIVE_ROOT=Z:\
FFMPEG_PATH=C:\ffmpeg\bin\ffmpeg.exe
```
6. **`music_ai/start.bat` 재시작** — 새 endpoint 활성화
---
## 10. 산출물
| 영역 | 파일 |
|------|------|
| Windows | `music_ai/video_encoder.py` (신규) |
| Windows | `music_ai/server.py` (수정 — `/encode_video` endpoint 등록, `/health` 확장) |
| Windows | `music_ai/.env.example` (수정 — 새 변수 문서화) |
| Windows | `music_ai/tests/test_video_encoder.py` (신규) |
| NAS | `music-lab/app/pipeline/video.py` (재작성) |
| NAS | `music-lab/tests/test_video_thumb.py` (수정 — HTTP mock 기반) |
| Infra | `web-backend/docker-compose.yml` (env 3개 추가) |
| Infra | NAS `.env` (사용자 수동, 1개 추가) |
---
## 11. 후속
- (P3) 영상 인코딩 진행률 실시간 보고 — Windows에서 ffmpeg progress 파싱 후 진행 탭 카드에 표시 (현재는 단순 "running")
- (P3) Windows 서버 다중 큐 — 동시 요청 시 GPU 부하 추적 + 큐잉
- (P4) 인코딩 옵션을 youtube_setup `visual_defaults`로 추가 — preset(p1~p7), CQ, 해상도 옵션 노출
- (P4) Shorts 전용 1080×1920 인코딩 프로파일
---


@@ -0,0 +1,505 @@
# 배치 음악 생성 + 자동 영상 파이프라인 설계
> 작성일: 2026-05-10
> 관련: `2026-05-09-essential-mix-pipeline-design.md` (영상 파이프라인 베이스)
---
## 1. 배경
현재 Create 탭은 사용자가 모든 파라미터(genre/mood/instruments/BPM/key/scale/duration/prompt) 수동 입력 후 1트랙 생성. 1시간+ mix 영상 만들려면 동일 장르 트랙 10개를 일일이 만들어야 함.
목표: **장르 1개만 입력 → 10트랙 자동 생성 → 자동 컴파일 → 자동 영상 파이프라인 시작 → 텔레그램 승인만 하면 발행 완료**.
전체 흐름:
```
[사용자] Create 탭 → 배치 모드 → 장르 + 트랙 수 선택 → 생성 시작
↓ Suno API 순차 호출 (트랙당 ~1-2분)
↓ Track 1: "{Genre} Mix Track 1", 랜덤 mood/instr/BPM/key
↓ Track 2: "{Genre} Mix Track 2", ...
↓ ... Track 10
↓ 모두 완료 → compile_job 자동 생성 (acrossfade 3s)
↓ compile 완료 → video_pipeline 자동 시작 (cover step)
↓ 텔레그램에 "🎵 [{Genre} Mix] 커버 검토" 알림
[사용자] 5번 승인으로 영상 발행
```
---
## 2. 비목표
- 병렬 음악 생성 — VRAM 부담 회피, 순차로 단순하게
- 트랙별 prompt 자동 작성(Claude) — Suno는 genre+mood+instruments만으로도 충분
- 트랙별 길이 가변 — 모든 트랙 동일 `target_duration_sec` (default 180s)
- 사용자가 진행 중 트랙 prompt 편집 — 한 번 시작하면 끝까지
---
## 3. 사용자 흐름
### 3-1. Create 탭의 신규 "배치 생성" 섹션
```
┌─ 🎲 배치 생성 (장르 + 자동 영상까지) ─────────────────┐
│ │
│ 장르 [▼ lo-fi ] │
│ 트랙 수 [● 1 — 10] (10) │
│ 트랙당 길이 [● 60 — 300s] (180s) │
│ ☑ 모든 트랙 생성 후 자동 영상 파이프라인 시작 │
│ │
│ 예상 시간: 약 15-25분 (트랙당 1-2분 × 10) │
│ 예상 비용: ~$0.10 (Suno 10트랙 + DALL·E + Claude) │
│ │
│ [🎵 배치 생성 시작] │
│ │
│ ── 진행 상태 ────────────────────────────────────── │
│ 배치 #3 — lo-fi · 7/10 완료 · 2:43 경과 │
│ ✓ Track 1: Lo-Fi Mix Track 1 (chill, piano+synth) │
│ ✓ Track 2: Lo-Fi Mix Track 2 (relaxing, piano+drums) │
│ ... │
│ ⏳ Track 8: 생성 중... │
│ ○ Track 9: 대기 │
│ ○ Track 10: 대기 │
└──────────────────────────────────────────────────────┘
```
### 3-2. 완료 후
10트랙 모두 Library에 저장됨. compile_job_id가 자동 생성되고 영상 파이프라인이 cover step부터 시작 → 텔레그램 알림. 진행 탭에 카드 1장 추가.
---
## 4. 데이터 모델
### 4-1. 신규 테이블 `music_batch_jobs`
```sql
CREATE TABLE music_batch_jobs (
id INTEGER PRIMARY KEY AUTOINCREMENT,
genre TEXT NOT NULL,
count INTEGER NOT NULL, -- 1-10
target_duration_sec INTEGER NOT NULL DEFAULT 180,
auto_pipeline INTEGER NOT NULL DEFAULT 1, -- 0/1 boolean
completed INTEGER NOT NULL DEFAULT 0,
track_ids_json TEXT NOT NULL DEFAULT '[]',
current_track_index INTEGER NOT NULL DEFAULT 0, -- 진행 중 트랙 (1..count)
current_track_status TEXT, -- queued | generating | failed
status TEXT NOT NULL DEFAULT 'queued',
-- queued: 시작 전
-- generating: 트랙 생성 중
-- generated: 모든 트랙 생성 완료 (compile 시작 전)
-- compiling: compile 진행 중
-- piped: 영상 파이프라인 시작됨 (=cover_pending 상태)
-- failed: 어느 단계에서 실패
-- cancelled: 사용자 취소
error TEXT,
compile_job_id INTEGER,
pipeline_id INTEGER,
created_at TEXT NOT NULL,
updated_at TEXT NOT NULL
);
```
`init_db()`에 `CREATE TABLE IF NOT EXISTS` 추가.
### 4-2. 헬퍼 함수 (`db.py` 추가)
- `create_batch_job(genre, count, target_duration_sec, auto_pipeline) -> int`
- `get_batch_job(id) -> dict | None`
- `update_batch_job(id, **fields)` — allowlist 검증 (아래 스케치 참고)
- `list_batch_jobs(active_only=False) -> list[dict]`
- `append_batch_track(batch_id, track_id)` — 완료된 트랙 ID 추가, completed++
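이 중 `update_batch_job` 의 allowlist 검증은 대략 아래 형태를 상정한다 (스케치 — 컬럼 목록은 4-1 스키마 기준, `_conn`/`_now` 는 기존 db.py 헬퍼 가정):
```python
_BATCH_ALLOWED_COLS = frozenset([
    "completed", "track_ids_json", "current_track_index",
    "current_track_status", "status", "error",
    "compile_job_id", "pipeline_id",
])

def update_batch_job(batch_id: int, **fields) -> None:
    """allowlist 밖 컬럼명이 오면 즉시 ValueError — f-string SQL 조립의 안전장치."""
    unknown = set(fields) - _BATCH_ALLOWED_COLS
    if unknown:
        raise ValueError(f"unknown batch job columns: {unknown}")
    if not fields:
        return
    cols = ", ".join(f"{k} = ?" for k in fields)
    vals = list(fields.values()) + [_now(), batch_id]
    with _conn() as conn:
        conn.execute(
            f"UPDATE music_batch_jobs SET {cols}, updated_at = ? WHERE id = ?",
            vals,
        )
```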
---
## 5. 백엔드 — 랜덤 풀 + 배치 실행
### 5-1. `app/random_pools.py` (신규)
장르별 음악적으로 어울리는 랜덤 풀 정의:
```python
"""장르별 음악 파라미터 랜덤 풀."""
import random
POOLS = {
"lo-fi": {
"moods": ["chill", "relaxing", "dreamy", "melancholic", "mellow", "nostalgic", "peaceful"],
"instruments_pool": ["piano", "synth", "drums", "vinyl", "rhodes", "soft bass", "ambient pads"],
"instruments_count": (3, 4),
"bpm": (70, 90),
"keys": ["C", "D", "F", "G", "A"],
"scales": ["minor", "major"],
"prompt_modifiers": ["cozy bedroom vibes", "rainy night", "late night study", "cafe ambience"],
},
"phonk": {
"moods": ["dark", "aggressive", "moody", "intense", "hypnotic"],
"instruments_pool": ["808 bass", "hi-hat", "synth lead", "vocal chops", "bass drops", "trap drums"],
"instruments_count": (3, 4),
"bpm": (130, 160),
"keys": ["C", "D", "F", "G"],
"scales": ["minor"],
"prompt_modifiers": ["drift atmosphere", "dark neon", "midnight drive"],
},
"ambient": {
"moods": ["peaceful", "meditative", "ethereal", "spacious", "dreamy"],
"instruments_pool": ["pad synths", "atmospheric guitar", "soft strings", "field recordings", "drone bass"],
"instruments_count": (2, 3),
"bpm": (50, 75),
"keys": ["C", "D", "E", "G", "A"],
"scales": ["major", "minor"],
"prompt_modifiers": ["misty mountain morning", "deep space", "still water", "forest dawn"],
},
"pop": {
"moods": ["uplifting", "happy", "energetic", "romantic", "catchy"],
"instruments_pool": ["acoustic guitar", "piano", "drums", "bass", "synth", "vocals harmonies"],
"instruments_count": (3, 5),
"bpm": (95, 130),
"keys": ["C", "D", "E", "F", "G", "A"],
"scales": ["major"],
"prompt_modifiers": ["radio-ready", "summer vibe", "feel-good"],
},
"default": { # 알 수 없는 장르 fallback
"moods": ["chill", "relaxing", "uplifting", "mellow"],
"instruments_pool": ["piano", "synth", "drums", "guitar", "bass", "strings"],
"instruments_count": (3, 4),
"bpm": (80, 110),
"keys": ["C", "D", "F", "G", "A"],
"scales": ["minor", "major"],
"prompt_modifiers": [""],
},
}
def randomize(genre: str, rng: random.Random | None = None) -> dict:
"""랜덤 음악 파라미터 1세트 생성."""
rng = rng or random.Random()
pool = POOLS.get(genre.lower(), POOLS["default"])
n_instr = rng.randint(*pool["instruments_count"])
instruments = rng.sample(pool["instruments_pool"], min(n_instr, len(pool["instruments_pool"])))
return {
"moods": [rng.choice(pool["moods"])],
"instruments": instruments,
"bpm": rng.randint(*pool["bpm"]),
"key": rng.choice(pool["keys"]),
"scale": rng.choice(pool["scales"]),
"prompt_modifier": rng.choice(pool["prompt_modifiers"]),
}
```
향후(P3): 장르별 풀을 `youtube_setup`/별도 테이블로 옮겨 SetupTab에서 편집 가능하게.
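사용 예 — `rng` 를 주입하면 결과가 재현된다(9-1 테스트 전략의 전제):
```python
import random
from app.random_pools import POOLS, randomize

params = randomize("lo-fi", rng=random.Random(42))
# 같은 시드 → 같은 파라미터 세트 (테스트 재현성)
assert params == randomize("lo-fi", rng=random.Random(42))
# 값은 항상 해당 장르 풀 범위 안
assert 70 <= params["bpm"] <= 90
assert params["key"] in POOLS["lo-fi"]["keys"]
```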
### 5-2. `app/batch_generator.py` (신규) — 순차 실행 오케스트레이터
```python
"""배치 음악 생성 + 자동 컴파일·영상 파이프라인."""
import asyncio
import logging
import json
from . import db
from .suno_provider import run_suno_generation
from .random_pools import randomize
logger = logging.getLogger("music-lab.batch")
POLL_INTERVAL_S = 5
TRACK_GEN_TIMEOUT_S = 240 # 트랙당 최대 4분
async def run_batch(batch_id: int) -> None:
"""1) genre로 N트랙 순차 Suno 생성
2) 모두 완료 후 compile_job 자동 생성·실행
3) compile 완료 후 영상 파이프라인 시작 (cover step)
"""
job = db.get_batch_job(batch_id)
if not job:
return
genre = job["genre"]
count = job["count"]
duration = job["target_duration_sec"]
auto_pipe = bool(job["auto_pipeline"])
db.update_batch_job(batch_id, status="generating")
track_ids: list[int] = []
for i in range(1, count + 1):
title = f"{genre.title()} Mix Track {i}"
params = randomize(genre)
db.update_batch_job(batch_id,
current_track_index=i,
current_track_status="generating")
# Suno 호출 (기존 task 패턴 활용)
task_id = _start_suno(title=title, genre=genre,
duration_sec=duration, **params)
track_id = await _wait_for_track(task_id, timeout=TRACK_GEN_TIMEOUT_S)
if track_id:
track_ids.append(track_id)
db.append_batch_track(batch_id, track_id)
else:
logger.warning("배치 %d 트랙 %d 실패 — 계속 진행", batch_id, i)
db.update_batch_job(batch_id, current_track_status="failed")
# 정책: 실패한 트랙은 skip하고 계속 (나머지 9개라도 만든다)
if not track_ids:
db.update_batch_job(batch_id, status="failed",
error="모든 트랙 생성 실패")
return
db.update_batch_job(batch_id, status="generated")
if not auto_pipe:
return # 음악만 만들고 종료
# === 자동 compile ===
db.update_batch_job(batch_id, status="compiling")
compile_id = db.create_compile_job(
title=f"{genre.title()} Mix",
track_ids=track_ids,
crossfade_sec=3,
)
db.update_batch_job(batch_id, compile_job_id=compile_id)
# 기존 compiler 호출 (동기 → asyncio.to_thread)
from . import compiler
await asyncio.to_thread(compiler.run, compile_id)
job_after = db.get_compile_job(compile_id)
if not job_after or job_after.get("status") not in ("done", "succeeded"):
db.update_batch_job(batch_id, status="failed",
error=f"compile 실패 (status={job_after.get('status') if job_after else 'unknown'})")
return
# === 자동 영상 파이프라인 ===
pipeline_id = db.create_pipeline(compile_job_id=compile_id)
db.update_batch_job(batch_id, pipeline_id=pipeline_id, status="piped")
from .pipeline import orchestrator
await orchestrator.run_step(pipeline_id, "cover")
```
- `_start_suno(...)` — 기존 `run_suno_generation` 호출, task_id 반환
- `_wait_for_track(task_id, timeout)` — task 완료 폴링, 성공 시 music_library의 새 track id 반환
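`_wait_for_track` 은 대략 아래 형태 (스케치 — `db.get_task`/`db.get_track_by_task_id` 는 기존 task 인프라에 있다고 가정):
```python
async def _wait_for_track(task_id: str, timeout: int = TRACK_GEN_TIMEOUT_S) -> int | None:
    """task 상태를 POLL_INTERVAL_S 간격으로 폴링. 성공 시 새 track id, 실패·타임아웃 시 None."""
    waited = 0
    while waited < timeout:
        await asyncio.sleep(POLL_INTERVAL_S)
        waited += POLL_INTERVAL_S
        task = db.get_task(task_id)
        if not task:
            continue
        if task.get("status") == "succeeded":
            track = db.get_track_by_task_id(task_id)  # music_library에서 task_id 역조회 (가정)
            return track.get("id") if track else None
        if task.get("status") == "failed":
            return None
    return None  # timeout
```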
### 5-3. 변경되는 기존 모듈
`app/main.py`에 신규 endpoint 3개 + BackgroundTask. 변경 없는 기존 endpoint들은 그대로.
`db.py`에 헬퍼 함수 5개 추가 + `init_db()`에 `music_batch_jobs` CREATE 추가.
---
## 6. API 엔드포인트
### 6-1. `POST /api/music/generate-batch`
Request:
```json
{
"genre": "lo-fi",
"count": 10,
"target_duration_sec": 180,
"auto_pipeline": true
}
```
Validation:
- `count` 1-10
- `target_duration_sec` 60-300
- `genre` 필수
Response 201:
```json
{
"id": 3,
"status": "queued",
...
}
```
배치 작업은 BackgroundTask로 실행 (~15-25분 소요).
### 6-2. `GET /api/music/generate-batch/{id}`
진행 상태 조회. 응답 예:
```json
{
"id": 3,
"genre": "lo-fi",
"count": 10,
"completed": 7,
"current_track_index": 8,
"current_track_status": "generating",
"status": "generating",
"track_ids": [12, 13, 14, 15, 16, 17, 18],
"tracks": [
{"id": 12, "title": "Lo-Fi Mix Track 1", ...},
...
],
"compile_job_id": null,
"pipeline_id": null,
"created_at": "2026-05-10T17:00:00",
"updated_at": "2026-05-10T17:08:30"
}
```
`tracks` 필드는 LEFT JOIN으로 채워짐 (각 트랙 메타 포함).
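구현은 LEFT JOIN 대신 `track_ids` IN 조회 후 원래 순서로 재조립하는 형태도 가능하다 — 최소 스케치 (가정: `_conn()` 은 `row_factory=sqlite3.Row` 커넥션):
```python
def attach_tracks(batch: dict) -> dict:
    """batch["track_ids"] 순서를 유지하며 각 트랙 메타를 붙인다."""
    ids = batch["track_ids"]
    if not ids:
        batch["tracks"] = []
        return batch
    placeholders = ",".join("?" * len(ids))
    with _conn() as conn:
        rows = conn.execute(
            f"SELECT id, title, audio_url, duration_sec "
            f"FROM music_library WHERE id IN ({placeholders})",
            ids,
        ).fetchall()
    by_id = {r["id"]: dict(r) for r in rows}
    batch["tracks"] = [by_id[tid] for tid in ids if tid in by_id]
    return batch
```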
### 6-3. `GET /api/music/generate-batch?status=active`
전체 배치 목록. `active`면 queued/generating/compiling/piped 만.
---
## 7. 프론트엔드 — Create 탭 배치 섹션
### 7-1. `MusicStudio.jsx` Create 영역에 신규 collapsible
Create form 위 또는 옆에 새 섹션 (`<details>` 또는 토글):
```jsx
<details className="ms-batch-section" open={batchOpen}>
<summary onClick={...}>🎲 배치 생성 (1-10트랙 + 자동 영상)</summary>
<div className="ms-batch-form">
<label>장르
<select value={batchGenre} onChange={...}>
<option value="lo-fi">Lo-Fi</option>
<option value="phonk">Phonk</option>
<option value="ambient">Ambient</option>
<option value="pop">Pop</option>
</select>
</label>
<label>트랙 수: {batchCount}
<input type="range" min={1} max={10} value={batchCount} onChange={...}/>
</label>
<label>트랙당 길이: {batchDuration}
<input type="range" min={60} max={300} step={10} value={batchDuration} onChange={...}/>
</label>
<label>
<input type="checkbox" checked={autoPipeline} onChange={...}/>
모든 트랙 생성 후 자동 영상 파이프라인 시작
</label>
<p className="ms-batch-estimate">
예상: {batchCount * 1.5 | 0}-{batchCount * 2}분 · 비용 ~${(batchCount * 0.005 + (autoPipeline ? 0.05 : 0)).toFixed(2)}
</p>
<button className="button primary" onClick={startBatch} disabled={generating}>
🎵 배치 생성 시작
</button>
</div>
{currentBatch && <BatchProgress batch={currentBatch} />}
</details>
```
### 7-2. 신규 컴포넌트 `BatchProgress.jsx`
```jsx
export default function BatchProgress({ batch }) {
return (
<div className="ms-batch-progress">
<div className="ms-batch-header">
배치 #{batch.id} — {batch.genre} ·
{' '}{batch.completed}/{batch.count} 완료 ·
{' '}status: <strong>{batch.status}</strong>
</div>
<ol className="ms-batch-tracks">
{Array.from({ length: batch.count }, (_, i) => i + 1).map(n => {
const completed = n <= batch.completed;
const current = n === batch.current_track_index && batch.status === 'generating';
const track = (batch.tracks || []).find(t => t._batch_index === n);
return (
<li key={n} className={completed ? 'done' : current ? 'current' : 'pending'}>
{completed ? '✓' : current ? '⏳' : '○'}
{' '}Track {n}: {track ? track.title : (current ? '생성 중...' : '대기')}
</li>
);
})}
</ol>
{batch.compile_job_id && <div>📀 컴파일 #{batch.compile_job_id}</div>}
{batch.pipeline_id && (
<div>
🎬 영상 파이프라인 #{batch.pipeline_id}
<a href={`#youtube-pipeline-${batch.pipeline_id}`}> 진행 탭에서 확인</a>
</div>
)}
</div>
);
}
```
### 7-3. 폴링
배치 시작 시 5초 간격으로 `getBatchJob(id)` 호출. status가 `piped`/`failed`/`cancelled`가 되면 폴링을 중지.
### 7-4. `api.js` 헬퍼
```javascript
export const startBatchGen = (payload) => apiPost('/api/music/generate-batch', payload);
export const getBatchJob = (id) => apiGet(`/api/music/generate-batch/${id}`);
export const listBatchJobs = (status='all') => apiGet(`/api/music/generate-batch?status=${status}`);
```
---
## 8. 에러 처리
| 시나리오 | 동작 |
|---------|------|
| Suno API 트랙 1개 실패 | 로그 + skip + 다음 트랙 진행. 최종 track_ids에 누락. |
| 모든 트랙 실패 | status=failed, error 기록 |
| compile 실패 | status=failed, compile_job_id 보존 |
| 영상 파이프라인 cover step 실패 | pipeline 자체에서 failed로 마크. batch는 piped 상태 그대로 (파이프라인 측에서 처리) |
| count > 10 또는 < 1 | 400 |
| genre 누락 | 400 |
| Suno API key 미설정 | 400 ("SUNO_API_KEY 미설정") |
---
## 9. 테스트 전략
### 9-1. 단위 테스트
- `random_pools.randomize(genre)` — 각 장르별 결과가 풀 안에 있는지, 시드 고정 시 재현 가능
- `db.create_batch_job` / `update_batch_job` / `append_batch_track` — 정상 흐름 (아래 스케치 참고)
- `_wait_for_track` — task 성공/실패/timeout mock
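db 헬퍼 정상 흐름은 대략 아래처럼 검증한다 (스케치 — 픽스쳐의 `DB_PATH` monkeypatch 패턴은 기존 테스트와 동일하다고 가정):
```python
import pytest
from app import db

@pytest.fixture(autouse=True)
def setup_db(tmp_path, monkeypatch):
    monkeypatch.setattr(db, "DB_PATH", str(tmp_path / "test.db"))
    db.init_db()

def test_batch_job_roundtrip():
    bid = db.create_batch_job("lo-fi", count=3, target_duration_sec=120, auto_pipeline=True)
    job = db.get_batch_job(bid)
    assert job["status"] == "queued" and job["track_ids"] == []
    db.append_batch_track(bid, 42)
    db.append_batch_track(bid, 43)
    job = db.get_batch_job(bid)
    assert job["completed"] == 2 and job["track_ids"] == [42, 43]
    with pytest.raises(ValueError):
        db.update_batch_job(bid, nope="x")  # allowlist 밖 컬럼
```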
### 9-2. 통합 테스트
- `POST /api/music/generate-batch` 호출 → 201 반환 + 배치 row 생성
- `GET /api/music/generate-batch/{id}` 응답 schema
- `run_batch` mocked Suno + mocked compiler + mocked orchestrator → 전체 흐름 happy path
### 9-3. 수동 E2E
- Create 탭 → 배치 생성 → 장르 선택 → 시작 → 진행 표시 확인
- 10트랙 완료 → Library에 10개 추가 확인 → compile_job 자동 생성 확인 → 진행 탭에 새 카드 등장 확인
---
## 10. 산출물
| 영역 | 파일 |
|------|------|
| Spec/Plan | 본 문서 + plan |
| NAS music-lab | `db.py` (테이블/헬퍼), `random_pools.py` (신규), `batch_generator.py` (신규), `main.py` (3 endpoints) |
| Frontend | `MusicStudio.jsx` (Create 배치 섹션), `BatchProgress.jsx` (신규), `MusicStudio.css`, `api.js` 헬퍼 |
| 테스트 | NAS 단위 + 통합, 수동 E2E |
---
## 11. 후속 (P3)
- 장르별 풀 SetupTab에서 편집 가능
- 트랙별 prompt에 시나리오/카페 분위기 등 자동 추가 (트랙간 다양성 증대)
- 배치 일시정지/재개
- 한 배치 안에서 Track-N별 재생성 (실패한 트랙만)
- 트랙 길이 가변 (랜덤 분포)


@@ -259,6 +259,45 @@ def init_db() -> None:
""")
conn.execute("CREATE INDEX IF NOT EXISTS idx_briefings_draw ON lotto_briefings(draw_no DESC)")
# ── weekly_review 테이블 (큐레이터 자기 평가 + 사용자 패턴 갭) ────────
conn.execute("""
CREATE TABLE IF NOT EXISTS weekly_review (
id INTEGER PRIMARY KEY AUTOINCREMENT,
draw_no INTEGER UNIQUE NOT NULL,
curator_avg_match REAL,
curator_best_tier TEXT,
curator_best_match INTEGER,
curator_5plus_prizes INTEGER,
user_avg_match REAL,
user_best_match INTEGER,
user_5plus_prizes INTEGER,
user_pattern_summary TEXT,
draw_pattern_summary TEXT,
pattern_delta TEXT,
created_at TEXT NOT NULL DEFAULT (datetime('now','localtime'))
)
""")
conn.execute("CREATE INDEX IF NOT EXISTS idx_review_draw ON weekly_review(draw_no DESC)")
# ── lotto_briefings.picks 4계층 마이그레이션 (1회 변환) ───────────────
# 기존: picks가 JSON 리스트 [{numbers,risk_tag,reason}]
# 신규: picks가 JSON 객체 {core:[...], bonus:[], extended:[], pool:[]}
rows = conn.execute("SELECT id, picks FROM lotto_briefings").fetchall()
for r in rows:
try:
p = json.loads(r["picks"])
if isinstance(p, list):
new_picks = {"core": p, "bonus": [], "extended": [], "pool": []}
conn.execute(
"UPDATE lotto_briefings SET picks=? WHERE id=?",
(json.dumps(new_picks, ensure_ascii=False), r["id"]),
)
except (json.JSONDecodeError, TypeError):
continue
_ensure_column(conn, "lotto_briefings", "tier_rationale",
"ALTER TABLE lotto_briefings ADD COLUMN tier_rationale TEXT NOT NULL DEFAULT '{}'")
@@ -952,39 +991,88 @@ def update_purchase_results(purchase_id: int, results: list, total_prize: int) -
)
def bulk_insert_purchases_from_briefing(draw_no: int, tier_mode: str, amount: int) -> Dict[str, Any]:
"""tier_mode 에 해당하는 큐레이터 picks 를 purchase_history 에 일괄 INSERT.
tier_mode: "core" | "core_bonus" | "core_bonus_extended" | "full"
"""
briefing = get_briefing(draw_no)
if not briefing:
return {"ok": False, "reason": "briefing not found"}
picks = briefing.get("picks") or {}
if isinstance(picks, list):
# 마이그레이션 이전 형태
picks = {"core": picks, "bonus": [], "extended": [], "pool": []}
tier_chain = {
"core": ["core"],
"core_bonus": ["core", "bonus"],
"core_bonus_extended": ["core", "bonus", "extended"],
"full": ["core", "bonus", "extended", "pool"],
}.get(tier_mode)
if not tier_chain:
return {"ok": False, "reason": f"unknown tier_mode: {tier_mode}"}
inserted_ids = []
with _conn() as conn:
for tier in tier_chain:
for idx, pick in enumerate(picks.get(tier) or []):
source_strategy = f"curator_{tier}"
source_detail = json.dumps({
"tier": tier,
"role": pick.get("risk_tag"),
"set_index": idx,
"draw_no": draw_no,
}, ensure_ascii=False)
numbers_json = json.dumps([pick.get("numbers")], ensure_ascii=False)
cur = conn.execute(
"""INSERT INTO purchase_history
(draw_no, amount, sets, prize, note, numbers, is_real, source_strategy, source_detail)
VALUES (?, ?, 1, 0, '', ?, 1, ?, ?)""",
(draw_no, 1000, numbers_json, source_strategy, source_detail),
)
inserted_ids.append(cur.lastrowid)
return {"ok": True, "inserted_ids": inserted_ids, "sets": len(inserted_ids)}
# --- Lotto Briefings ---
def save_briefing(data: Dict[str, Any]) -> int:
+picks_json = json.dumps(data["picks"], ensure_ascii=False)
+narrative_json = json.dumps(data["narrative"], ensure_ascii=False)
+tier_rationale_json = json.dumps(data.get("tier_rationale") or {}, ensure_ascii=False)
with _conn() as conn:
-cur = conn.execute("""
+cur = conn.execute(
+"""
INSERT INTO lotto_briefings
(draw_no, picks, narrative, confidence, model,
tokens_input, tokens_output, cache_read, cache_write,
-latency_ms, source)
-VALUES (?,?,?,?,?,?,?,?,?,?,?)
+latency_ms, source, tier_rationale)
+VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
ON CONFLICT(draw_no) DO UPDATE SET
-picks=excluded.picks, narrative=excluded.narrative,
-confidence=excluded.confidence, model=excluded.model,
+picks=excluded.picks,
+narrative=excluded.narrative,
+confidence=excluded.confidence,
+model=excluded.model,
tokens_input=excluded.tokens_input,
tokens_output=excluded.tokens_output,
cache_read=excluded.cache_read,
cache_write=excluded.cache_write,
latency_ms=excluded.latency_ms,
source=excluded.source,
+tier_rationale=excluded.tier_rationale,
generated_at=datetime('now','localtime')
-""", (
-data["draw_no"],
-json.dumps(data["picks"], ensure_ascii=False),
-json.dumps(data["narrative"], ensure_ascii=False),
-int(data["confidence"]),
-data["model"],
-int(data.get("tokens_input", 0)),
-int(data.get("tokens_output", 0)),
-int(data.get("cache_read", 0)),
-int(data.get("cache_write", 0)),
-int(data.get("latency_ms", 0)),
-data.get("source", "auto"),
-))
+""",
+(
+data["draw_no"], picks_json, narrative_json,
+data["confidence"], data["model"],
+data.get("tokens_input", 0), data.get("tokens_output", 0),
+data.get("cache_read", 0), data.get("cache_write", 0),
+data.get("latency_ms", 0), data.get("source", "auto"),
+tier_rationale_json,
+),
+)
return cur.lastrowid
@@ -994,6 +1082,7 @@ def _briefing_row(r) -> Dict[str, Any]:
"draw_no": r["draw_no"],
"picks": json.loads(r["picks"]),
"narrative": json.loads(r["narrative"]),
"tier_rationale": json.loads(r["tier_rationale"]) if r["tier_rationale"] else {},
"confidence": r["confidence"],
"model": r["model"],
"tokens_input": r["tokens_input"],
@@ -1052,3 +1141,88 @@ def get_curator_usage(days: int = 30) -> Dict[str, Any]:
"avg_latency_ms": round(float(r["avg_latency"] or 0), 1),
}
def save_review(data: Dict[str, Any]) -> int:
with _conn() as conn:
cur = conn.execute(
"""
INSERT INTO weekly_review (
draw_no,
curator_avg_match, curator_best_tier, curator_best_match, curator_5plus_prizes,
user_avg_match, user_best_match, user_5plus_prizes,
user_pattern_summary, draw_pattern_summary, pattern_delta
) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
ON CONFLICT(draw_no) DO UPDATE SET
curator_avg_match=excluded.curator_avg_match,
curator_best_tier=excluded.curator_best_tier,
curator_best_match=excluded.curator_best_match,
curator_5plus_prizes=excluded.curator_5plus_prizes,
user_avg_match=excluded.user_avg_match,
user_best_match=excluded.user_best_match,
user_5plus_prizes=excluded.user_5plus_prizes,
user_pattern_summary=excluded.user_pattern_summary,
draw_pattern_summary=excluded.draw_pattern_summary,
pattern_delta=excluded.pattern_delta
""",
(
data["draw_no"],
data.get("curator_avg_match"), data.get("curator_best_tier"),
data.get("curator_best_match"), data.get("curator_5plus_prizes"),
data.get("user_avg_match"), data.get("user_best_match"),
data.get("user_5plus_prizes"),
data.get("user_pattern_summary"), data.get("draw_pattern_summary"),
data.get("pattern_delta"),
),
)
return cur.lastrowid
def _review_row(r) -> Optional[Dict[str, Any]]:
if not r:
return None
return {
"id": r["id"],
"draw_no": r["draw_no"],
"curator_avg_match": r["curator_avg_match"],
"curator_best_tier": r["curator_best_tier"],
"curator_best_match": r["curator_best_match"],
"curator_5plus_prizes": r["curator_5plus_prizes"],
"user_avg_match": r["user_avg_match"],
"user_best_match": r["user_best_match"],
"user_5plus_prizes": r["user_5plus_prizes"],
"user_pattern_summary": r["user_pattern_summary"],
"draw_pattern_summary": r["draw_pattern_summary"],
"pattern_delta": r["pattern_delta"],
"created_at": r["created_at"],
}
def get_review(draw_no: int) -> Optional[Dict[str, Any]]:
with _conn() as conn:
r = conn.execute("SELECT * FROM weekly_review WHERE draw_no=?", (draw_no,)).fetchone()
return _review_row(r)
def get_latest_review() -> Optional[Dict[str, Any]]:
with _conn() as conn:
r = conn.execute("SELECT * FROM weekly_review ORDER BY draw_no DESC LIMIT 1").fetchone()
return _review_row(r)
def get_reviews_range(start_drw: int, end_drw: int) -> List[Dict[str, Any]]:
with _conn() as conn:
rows = conn.execute(
"SELECT * FROM weekly_review WHERE draw_no BETWEEN ? AND ? ORDER BY draw_no ASC",
(start_drw, end_drw),
).fetchall()
return [_review_row(r) for r in rows]
def list_reviews(limit: int = 10) -> List[Dict[str, Any]]:
with _conn() as conn:
rows = conn.execute(
"SELECT * FROM weekly_review ORDER BY draw_no DESC LIMIT ?",
(limit,),
).fetchall()
return [_review_row(r) for r in rows]


@@ -0,0 +1,154 @@
"""주간 회고 채점 통합 잡 — 일요일 03:00 KST 실행.
1) 기존 purchase_manager.check_purchases_for_draw() 로 사용자 구매 자동 채점
2) 큐레이터 4계층 picks vs 추첨 결과 비교
3) 패턴 요약·갭 계산
4) weekly_review UPSERT
5) 4등 이상 발견 시 agent-office webhook 호출
"""
import json
import logging
import os
from typing import Optional
import httpx
from .. import db
from ..purchase_manager import check_purchases_for_draw
from .grading_helpers import (
score_picks_against_draw,
summarize_pattern,
aggregate_pattern_summaries,
compute_pattern_delta,
)
logger = logging.getLogger("lotto-backend")
AGENT_OFFICE_URL = os.environ.get("AGENT_OFFICE_URL", "http://agent-office:8000")
def _flatten_curator_picks(briefing: dict) -> list:
"""4계층 picks 를 모두 합쳐 단일 리스트(score 계산용)."""
picks = briefing.get("picks") or {}
if isinstance(picks, list):
return picks
out = []
for tier in ("core", "bonus", "extended", "pool"):
out.extend(picks.get(tier) or [])
return out
def _curator_score(briefing: dict, win_nums: list, bonus: int) -> dict:
if not briefing:
return {}
flat = _flatten_curator_picks(briefing)
if not flat:
return {}
return score_picks_against_draw(flat, win_nums, bonus)
def _user_score(drw_no: int, win_nums: list) -> dict:
purchases = db.get_purchases(draw_no=drw_no)
if not purchases:
return {}
matches = []
win_set = set(win_nums)
pattern_summaries = []
for p in purchases:
for nums in (p.get("numbers") or []):
if not nums:
continue
m = len(set(nums) & win_set)
matches.append(m)
pattern_summaries.append(summarize_pattern(nums))
if not matches:
return {}
return {
"avg_match": round(sum(matches) / len(matches), 2),
"best_match": max(matches),
"five_plus_prizes": sum(1 for m in matches if m >= 3),
"pattern_avg": aggregate_pattern_summaries(pattern_summaries),
}
def _trigger_prize_alert(drw_no: int, match_count: int, numbers: list, purchase_id: int) -> None:
try:
with httpx.Client(timeout=10) as client:
client.post(
f"{AGENT_OFFICE_URL}/api/agent-office/notify/lotto-prize",
json={
"draw_no": drw_no,
"match_count": match_count,
"numbers": numbers,
"purchase_id": purchase_id,
},
)
except Exception as e:
logger.warning(f"[grade_weekly_review] prize alert webhook failed: {e}")
def run_weekly_grading(drw_no: int) -> dict:
"""주어진 회차에 대해 채점 잡 1회 실행. 멱등."""
draw = db.get_draw(drw_no)
if not draw:
logger.warning(f"[grade_weekly_review] draw {drw_no} not found, skip")
return {"ok": False, "reason": "no draw"}
win_nums = [draw["n1"], draw["n2"], draw["n3"], draw["n4"], draw["n5"], draw["n6"]]
bonus = draw["bonus"]
# 1) 사용자 구매 자동 채점 (기존 인프라)
try:
check_purchases_for_draw(drw_no)
except Exception as e:
logger.warning(f"[grade_weekly_review] check_purchases_for_draw failed: {e}")
# 2) 4등 이상 발견 시 webhook
purchases = db.get_purchases(draw_no=drw_no, checked=True)
for p in purchases:
for r in (p.get("results") or []):
if r.get("correct", 0) >= 4:
_trigger_prize_alert(drw_no, r["correct"], r["numbers"], p["id"])
# 3) 큐레이터 자기 평가
briefing = db.get_briefing(drw_no)
cur = _curator_score(briefing, win_nums, bonus)
# 4) 사용자 평가 (재로드, 구매가 다 채점된 후 패턴 계산)
usr = _user_score(drw_no, win_nums)
# 5) 추첨 패턴 요약 + 델타
draw_summary = summarize_pattern(win_nums)
draw_pattern = {
"low_avg": draw_summary["low_count"],
"odd_avg": draw_summary["odd_count"],
"sum_avg": draw_summary["sum"],
}
user_pattern = usr.get("pattern_avg", {})
delta = compute_pattern_delta(user_pattern, draw_pattern) if user_pattern else ""
# 6) UPSERT
payload = {
"draw_no": drw_no,
"curator_avg_match": cur.get("avg_match"),
"curator_best_tier": cur.get("best_tier"),
"curator_best_match": cur.get("best_match"),
"curator_5plus_prizes": cur.get("five_plus_prizes"),
"user_avg_match": usr.get("avg_match"),
"user_best_match": usr.get("best_match"),
"user_5plus_prizes": usr.get("five_plus_prizes"),
"user_pattern_summary": json.dumps(user_pattern, ensure_ascii=False) if user_pattern else None,
"draw_pattern_summary": json.dumps(draw_pattern, ensure_ascii=False),
"pattern_delta": delta,
}
rid = db.save_review(payload)
logger.info(f"[grade_weekly_review] saved review id={rid} for draw {drw_no}")
return {"ok": True, "review_id": rid}
def run_for_latest() -> dict:
"""가장 최근 sync된 추첨 회차로 채점 — cron 진입점."""
latest = db.get_latest_draw()
if not latest:
return {"ok": False, "reason": "no draws"}
return run_weekly_grading(latest["drw_no"])


@@ -0,0 +1,93 @@
"""채점 보조 — 일치 수 계산, 패턴 요약, 패턴 갭."""
from typing import List, Dict, Any
LOW_HIGH_CUT = 22 # curator_helpers.py 와 동일
def score_picks_against_draw(picks: List[Dict[str, Any]],
win_nums: List[int],
bonus: int) -> Dict[str, Any]:
"""4계층 중 한 그룹(예: core_picks 5세트) vs 추첨 결과 채점.
picks 는 [{numbers, risk_tag, reason}] 리스트.
"""
if not picks:
return {"avg_match": None, "best_match": 0, "five_plus_prizes": 0, "best_tier": None}
win_set = set(win_nums)
matches = []
for p in picks:
nums = p.get("numbers") or []
m = len(set(nums) & win_set)
matches.append((m, p.get("risk_tag")))
avg = sum(m for m, _ in matches) / len(matches)
best_match, best_tier = max(matches, key=lambda x: x[0])
five_plus = sum(1 for m, _ in matches if m >= 3) # 5등 이상
# tier별 평균 → 가장 잘 맞은 risk_tag
tier_scores: Dict[str, List[int]] = {}
for m, t in matches:
if t:
tier_scores.setdefault(t, []).append(m)
if tier_scores:
best_tier = max(tier_scores.items(),
key=lambda kv: sum(kv[1]) / len(kv[1]))[0]
return {
"avg_match": round(avg, 2),
"best_match": best_match,
"five_plus_prizes": five_plus,
"best_tier": best_tier,
}
def summarize_pattern(nums: List[int]) -> Dict[str, int]:
"""한 세트의 패턴 요약 — 저/고, 홀/짝, 합계."""
nums = sorted(nums)
odd = sum(1 for n in nums if n % 2 == 1)
low = sum(1 for n in nums if n <= LOW_HIGH_CUT)
return {
"odd_count": odd,
"even_count": 6 - odd,
"low_count": low,
"high_count": 6 - low,
"sum": sum(nums),
}
def aggregate_pattern_summaries(summaries: List[Dict[str, int]]) -> Dict[str, float]:
"""여러 세트의 패턴 요약 → 평균(low_avg, odd_avg, sum_avg)."""
if not summaries:
return {"low_avg": None, "odd_avg": None, "sum_avg": None}
n = len(summaries)
return {
"low_avg": round(sum(s["low_count"] for s in summaries) / n, 2),
"odd_avg": round(sum(s["odd_count"] for s in summaries) / n, 2),
"sum_avg": round(sum(s["sum"] for s in summaries) / n, 1),
}
def compute_pattern_delta(user_summary: Dict[str, float],
draw_summary: Dict[str, float]) -> str:
"""사용자 평균 vs 추첨 패턴의 가장 큰 격차 1~2개를 한 줄로."""
if not user_summary or user_summary.get("low_avg") is None:
return ""
deltas = []
if user_summary.get("low_avg") is not None and draw_summary.get("low_avg") is not None:
d = round(user_summary["low_avg"] - draw_summary["low_avg"], 2)
if abs(d) >= 0.5:
sign = "+" if d > 0 else ""
deltas.append(("저번호", d, f"저번호 편향 {sign}{d}"))
if user_summary.get("sum_avg") is not None and draw_summary.get("sum_avg") is not None:
d = round(user_summary["sum_avg"] - draw_summary["sum_avg"], 1)
if abs(d) >= 10:
sign = "+" if d > 0 else ""
deltas.append(("합계", d, f"합계 {sign}{d}"))
if user_summary.get("odd_avg") is not None and draw_summary.get("odd_avg") is not None:
d = round(user_summary["odd_avg"] - draw_summary["odd_avg"], 2)
if abs(d) >= 0.5:
sign = "+" if d > 0 else ""
deltas.append(("홀짝", d, f"홀짝 {sign}{d}"))
deltas.sort(key=lambda x: -abs(x[1]))
return " / ".join(d[2] for d in deltas[:2])


@@ -19,6 +19,7 @@ from .db import (
get_recommendation_performance,
# Phase 2: 구매 이력
add_purchase, get_purchases, update_purchase, delete_purchase, get_purchase_stats,
bulk_insert_purchases_from_briefing,
# Phase 2: 주간 리포트 캐시
save_weekly_report, get_weekly_report_list, get_weekly_report,
# Phase 2: 개인 패턴 분석
@@ -39,10 +40,13 @@ from .strategy_evolver import (
)
from .routers import curator as curator_router
from .routers import briefing as briefing_router
from .routers import review as review_router
from .jobs.grade_weekly_review import run_for_latest as grade_run_for_latest
app = FastAPI()
app.include_router(curator_router.router)
app.include_router(briefing_router.router)
app.include_router(review_router.router)
scheduler = BackgroundScheduler(timezone=os.getenv("TZ", "Asia/Seoul"))
ALL_URL = os.getenv("LOTTO_ALL_URL", "https://smok95.github.io/lotto/results/all.json")
@@ -95,6 +99,17 @@ def on_startup():
scheduler.add_job(_save_weekly_report_job, "cron", day_of_week="sat", hour=9, minute=0)
# 4. 주간 채점 (매주 일요일 03:00 KST — 토요일 추첨 다음날 새벽)
# 당첨번호 sync 이후 추천 vs 실제 결과 비교 → reviews 테이블 저장
scheduler.add_job(
grade_run_for_latest,
"cron",
day_of_week="sun",
hour=3,
minute=0,
id="grade_weekly_review",
)
scheduler.start()
@@ -329,6 +344,22 @@ def api_purchase_delete(purchase_id: int):
return {"ok": True}
class BulkPurchaseRequest(BaseModel):
draw_no: int
tier_mode: str # core | core_bonus | core_bonus_extended | full
sets: int # 검증용 — 실제 INSERT는 briefing 기준
amount: int # 검증용
@app.post("/api/lotto/purchase/bulk", status_code=201)
def api_purchase_bulk(body: BulkPurchaseRequest):
"""결정카드 원클릭 기록 — 큐레이터 브리핑 picks 를 tier_mode 기준으로 일괄 기록."""
result = bulk_insert_purchases_from_briefing(body.draw_no, body.tier_mode, body.amount)
if not result["ok"]:
raise HTTPException(status_code=400, detail=result["reason"])
return result
# ── 전략 진화 API ──────────────────────────────────────────────────────────
@app.get("/api/lotto/strategy/weights")


@@ -7,10 +7,24 @@ from .. import db
router = APIRouter(prefix="/api/lotto")
class TierRationale(BaseModel):
bonus: str = ""
extended: str = ""
pool: str = ""
class BriefingPicks(BaseModel):
core: List[Dict[str, Any]] = Field(default_factory=list)
bonus: List[Dict[str, Any]] = Field(default_factory=list)
extended: List[Dict[str, Any]] = Field(default_factory=list)
pool: List[Dict[str, Any]] = Field(default_factory=list)
class BriefingRequest(BaseModel):
draw_no: int
-picks: List[Dict[str, Any]]
+picks: BriefingPicks
narrative: Dict[str, Any]
tier_rationale: TierRationale = Field(default_factory=TierRationale)
confidence: int = Field(ge=0, le=100)
model: str
tokens_input: int = 0


@@ -0,0 +1,26 @@
"""주간 회고(weekly_review) 조회 엔드포인트."""
from fastapi import APIRouter, HTTPException
from .. import db
router = APIRouter(prefix="/api/lotto/review")
@router.get("/latest")
def latest():
r = db.get_latest_review()
if not r:
raise HTTPException(404, "no review yet")
return r
@router.get("/history")
def history(limit: int = 10):
return {"reviews": db.list_reviews(limit)}
@router.get("/{draw_no}")
def get_one(draw_no: int):
r = db.get_review(draw_no)
if not r:
raise HTTPException(404, f"no review for draw {draw_no}")
return r


@@ -0,0 +1,28 @@
# Lotto Curator Evolution — 1주차 운영 점검
## 일요일 (추첨 다음날)
- [ ] 03:05 KST: lotto-backend 로그에 `[grade_weekly_review] saved review id=N` 출력 확인
- [ ] `curl http://localhost:18000/api/lotto/review/latest` → JSON 정상
- [ ] purchase_history 의 직전 회차 행이 `checked=1`, `total_prize` 채워졌는지
## 월요일
- [ ] 09:05 KST: agent-office 로그에 `큐레이션 완료: #NNNN` + `[telegram_lotto] briefing` 출력
- [ ] 텔레그램 봇 채팅에 헤드라인 알림 도착 (회고 단락 포함/생략 정확)
- [ ] `curl http://localhost:18000/api/lotto/briefing/latest` → 4계층 picks(core/bonus/extended/pool 각 5세트) + tier_rationale + narrative.retrospective
## 사이트 확인
- [ ] http://localhost:3007/lotto 브리핑 탭 결정 카드 정상 렌더
- [ ] 모드 토글 4단계 동작 (5/10/15/20 펼침/접힘)
- [ ] localStorage `lotto.tier_mode` 마지막 선택 기억 (새로고침 후 유지)
- [ ] "이대로 N세트 구매" 클릭 → 토스트 + 구매탭 갱신
- [ ] 자료실 탭 첫 진입 시 모든 패널 접힘
- [ ] 구매탭 추세 차트 1주차에는 점 1개, 2주차부터 라인 형성
## 실패 케이스
- [ ] 큐레이션 실패(Anthropic API 다운): agent-office 로그 + lotto_agent state=idle, 에러 텔레그램
- [ ] 4등 이상 발견: 별도 텔레그램 푸시 도착 (3개 이하만 있으면 미발송)
- [ ] briefing 없는 회차에 bulk purchase 시도: 400 응답, 토스트 표시
## cron 시간 조정 (필요 시)
- 채점 잡: `lotto/app/main.py`의 `scheduler.add_job(grade_run_for_latest, "cron", day_of_week="sun", hour=3, minute=0)`
- 큐레이션: `agent-office/app/scheduler.py` `add_job(_run_lotto_schedule, ..., hour=9, minute=0)`


@@ -0,0 +1,52 @@
import sys, os
sys.path.insert(0, os.path.dirname(os.path.dirname(__file__)))
import pytest
from app import db
@pytest.fixture(autouse=True)
def setup_db(tmp_path, monkeypatch):
test_db = tmp_path / "test.db"
monkeypatch.setattr(db, "DB_PATH", str(test_db))
db.init_db()
yield
def test_save_briefing_4tier_roundtrip():
payload = {
"draw_no": 9999,
"picks": {"core":[{"numbers":[1,2,3,4,5,6],"risk_tag":"안정","reason":"x"}],
"bonus":[], "extended":[], "pool":[]},
"narrative": {"headline":"H","summary_3lines":["a","b","c"],"retrospective":"r"},
"tier_rationale": {"bonus":"b1","extended":"e1","pool":"p1"},
"confidence": 70,
"model": "test",
}
bid = db.save_briefing(payload)
assert bid > 0
got = db.get_briefing(9999)
assert got["picks"]["core"][0]["numbers"] == [1,2,3,4,5,6]
assert got["tier_rationale"]["bonus"] == "b1"
assert got["narrative"]["retrospective"] == "r"
def test_save_briefing_upsert_overwrites():
db.save_briefing({
"draw_no": 8888,
"picks": {"core":[], "bonus":[], "extended":[], "pool":[]},
"narrative": {"headline":"old","summary_3lines":["a","b","c"]},
"confidence": 50, "model": "v1",
})
db.save_briefing({
"draw_no": 8888,
"picks": {"core":[{"numbers":[10,20,30,40,41,42],"risk_tag":"공격","reason":"y"}],
"bonus":[], "extended":[], "pool":[]},
"narrative": {"headline":"new","summary_3lines":["x","y","z"]},
"tier_rationale": {"bonus":"","extended":"","pool":""},
"confidence": 90, "model": "v2",
})
got = db.get_briefing(8888)
assert got["narrative"]["headline"] == "new"
assert got["confidence"] == 90
assert got["picks"]["core"][0]["risk_tag"] == "공격"


@@ -0,0 +1,53 @@
import sys, os
sys.path.insert(0, os.path.join(os.path.dirname(__file__), ".."))
import pytest
from app import db
@pytest.fixture(autouse=True)
def setup_db(tmp_path, monkeypatch):
test_db = tmp_path / "test.db"
monkeypatch.setattr(db, "DB_PATH", str(test_db))
db.init_db()
yield
def _seed_briefing(drw=1153):
picks = {
"core": [{"numbers": [1, 2, 3, 4, 5, 6], "risk_tag": "안정", "reason": "x"}] * 5,
"bonus": [{"numbers": [7, 8, 9, 10, 11, 12], "risk_tag": "균형", "reason": "x"}] * 5,
"extended": [{"numbers": [13, 14, 15, 16, 17, 18], "risk_tag": "공격", "reason": "x"}] * 5,
"pool": [{"numbers": [19, 20, 21, 22, 23, 24], "risk_tag": "안정", "reason": "x"}] * 5,
}
db.save_briefing({
"draw_no": drw, "picks": picks,
"narrative": {"headline": "h", "summary_3lines": ["a", "b", "c"]},
"confidence": 70, "model": "test",
})
def test_bulk_core_inserts_5():
_seed_briefing()
r = db.bulk_insert_purchases_from_briefing(1153, "core", 5000)
assert r["ok"] and r["sets"] == 5
rows = db.get_purchases(draw_no=1153)
assert len(rows) == 5
assert all(row["source_strategy"] == "curator_core" for row in rows)
def test_bulk_full_inserts_20():
_seed_briefing()
r = db.bulk_insert_purchases_from_briefing(1153, "full", 20000)
assert r["ok"] and r["sets"] == 20
def test_bulk_unknown_tier_mode():
_seed_briefing()
r = db.bulk_insert_purchases_from_briefing(1153, "garbage", 1000)
assert r["ok"] is False and "garbage" in r["reason"]
def test_bulk_no_briefing():
r = db.bulk_insert_purchases_from_briefing(9999, "core", 5000)
assert r["ok"] is False and "not found" in r["reason"]


@@ -0,0 +1,60 @@
import sys, os
sys.path.insert(0, os.path.join(os.path.dirname(__file__), ".."))
import json
import pytest
from app import db
from app.jobs.grade_weekly_review import run_weekly_grading
@pytest.fixture(autouse=True)
def setup_db(tmp_path, monkeypatch):
test_db = tmp_path / "test.db"
monkeypatch.setattr(db, "DB_PATH", str(test_db))
db.init_db()
yield
def _seed_draw(drw_no=1153):
db.upsert_draw({
"drw_no": drw_no, "drw_date": "2026-05-09",
"n1": 3, "n2": 11, "n3": 17, "n4": 25, "n5": 33, "n6": 41, "bonus": 8,
})
def _seed_briefing(drw_no=1153):
picks = {
"core": [
{"numbers": [3, 11, 17, 25, 33, 41], "risk_tag": "안정", "reason": "x"}, # 6
{"numbers": [1, 2, 3, 4, 5, 6], "risk_tag": "안정", "reason": "x"}, # 1
{"numbers": [3, 11, 17, 4, 5, 6], "risk_tag": "균형", "reason": "x"}, # 3
{"numbers": [11, 25, 33, 7, 8, 9], "risk_tag": "균형", "reason": "x"}, # 3
{"numbers": [3, 11, 17, 25, 33, 9], "risk_tag": "공격", "reason": "x"}, # 5
],
"bonus": [], "extended": [], "pool": [],
}
db.save_briefing({
"draw_no": drw_no, "picks": picks,
"narrative": {"headline": "h", "summary_3lines": ["a", "b", "c"], "retrospective": ""},
"confidence": 70, "model": "test",
})
def test_grade_with_curator_only_no_purchase():
_seed_draw()
_seed_briefing()
run_weekly_grading(1153)
rev = db.get_review(1153)
assert rev is not None
assert rev["curator_avg_match"] == round((6+1+3+3+5)/5, 2)
assert rev["curator_best_match"] == 6
assert rev["curator_5plus_prizes"] == 4 # 6,3,3,5 ≥3 (네 개)
assert rev["user_avg_match"] is None # 구매 없음
def test_grade_with_no_briefing():
_seed_draw()
run_weekly_grading(1153)
rev = db.get_review(1153)
assert rev is not None
assert rev["curator_avg_match"] is None


@@ -0,0 +1,42 @@
import sys, os
sys.path.insert(0, os.path.join(os.path.dirname(__file__), ".."))
from app.jobs.grading_helpers import (
score_picks_against_draw,
summarize_pattern,
compute_pattern_delta,
)
def test_score_picks_against_draw_basic():
win_nums = [3, 11, 17, 25, 33, 41]
bonus = 8
picks = [
{"numbers": [3, 11, 17, 25, 33, 41], "risk_tag": "안정"}, # 6 일치
{"numbers": [1, 2, 3, 4, 5, 6], "risk_tag": "공격"}, # 1 일치
{"numbers": [3, 11, 17, 4, 5, 6], "risk_tag": "안정"}, # 3 일치 → 5등
]
out = score_picks_against_draw(picks, win_nums, bonus)
# 함수가 round(avg, 2) 로 반환하므로 rounded 비교
assert out["avg_match"] == 3.33
assert out["best_match"] == 6
assert out["five_plus_prizes"] == 2 # 3개 이상 카운트(5등 이상)
assert out["best_tier"] == "안정"
def test_summarize_pattern():
nums = [3, 11, 17, 25, 33, 41]
s = summarize_pattern(nums)
# 저번호(<=22) 3개, 고번호 3개, 모두 홀수이므로 홀:짝 = 6:0
assert s["low_count"] == 3
assert s["odd_count"] == 6
assert s["sum"] == 130
def test_compute_pattern_delta_picks_dominant_axis():
# 사용자가 평균 저번호 4.2개 / 추첨 평균 3 → 저번호 편향 +1.2
user = {"low_avg": 4.2, "odd_avg": 3.4, "sum_avg": 124}
draw = {"low_avg": 3.0, "odd_avg": 3.0, "sum_avg": 142}
delta = compute_pattern_delta(user, draw)
assert "저번호" in delta or "low" in delta
assert "+1.2" in delta or "1.2" in delta


@@ -0,0 +1,148 @@
"""배치 음악 생성 + 자동 컴파일·영상 파이프라인."""
import asyncio
import logging
import uuid
from . import db
from .random_pools import randomize
logger = logging.getLogger("music-lab.batch")
POLL_INTERVAL_S = 5
TRACK_GEN_TIMEOUT_S = 240
async def run_batch(batch_id: int) -> None:
"""1) genre로 N트랙 순차 Suno 생성
2) 모두 완료 후 compile_job 자동 생성·실행
3) compile 완료 후 영상 파이프라인 시작 (cover step)
"""
job = db.get_batch_job(batch_id)
if not job:
return
genre = job["genre"]
count = job["count"]
duration = job["target_duration_sec"]
auto_pipe = bool(job["auto_pipeline"])
db.update_batch_job(batch_id, status="generating")
track_ids: list[int] = []
for i in range(1, count + 1):
title = f"{genre.title()} Mix Track {i}"
params = randomize(genre)
db.update_batch_job(batch_id,
current_track_index=i,
current_track_status="generating")
track_id = await _generate_one_track(
title=title, genre=genre,
duration_sec=duration, params=params,
)
if track_id:
track_ids.append(track_id)
db.append_batch_track(batch_id, track_id)
db.update_batch_job(batch_id, current_track_status="succeeded")
else:
db.update_batch_job(batch_id, current_track_status="failed")
logger.warning("배치 %d 트랙 %d 실패 — 계속 진행", batch_id, i)
if not track_ids:
db.update_batch_job(batch_id, status="failed",
error="모든 트랙 생성 실패")
return
db.update_batch_job(batch_id, status="generated")
if not auto_pipe:
return
# 자동 컴파일
db.update_batch_job(batch_id, status="compiling")
try:
compile_id = db.create_compile_job(
title=f"{genre.title()} Mix",
track_ids=track_ids,
crossfade_sec=3.0,
)
db.update_batch_job(batch_id, compile_job_id=compile_id)
except Exception as e:
logger.exception("compile create failed")
db.update_batch_job(batch_id, status="failed", error=f"compile create: {e}")
return
from . import compiler
try:
await asyncio.to_thread(compiler.run_compile, compile_id)
except Exception as e:
logger.exception("compile run failed")
db.update_batch_job(batch_id, status="failed", error=f"compile run: {e}")
return
job_after = db.get_compile_job(compile_id)
status_after = job_after.get("status") if job_after else None
if status_after not in ("done", "succeeded"):
db.update_batch_job(
batch_id, status="failed",
error=f"compile not done (status={status_after})"
)
return
# 자동 영상 파이프라인
try:
pipeline_id = db.create_pipeline(compile_job_id=compile_id)
db.update_batch_job(batch_id, pipeline_id=pipeline_id, status="piped")
from .pipeline import orchestrator
await orchestrator.run_step(pipeline_id, "cover")
except Exception as e:
logger.exception("pipeline launch failed")
db.update_batch_job(batch_id, status="failed", error=f"pipeline launch: {e}")
async def _generate_one_track(*, title: str, genre: str, duration_sec: int,
params: dict) -> int | None:
"""기존 Suno generate 호출 + 완료까지 polling. 성공 시 새 track id, 실패 시 None."""
from .suno_provider import run_suno_generation
task_id = str(uuid.uuid4())
suno_params = {
"title": title,
"genre": genre,
"moods": params["moods"],
"instruments": params["instruments"],
"duration_sec": duration_sec,
"bpm": params["bpm"],
"key": params["key"],
"scale": params["scale"],
"prompt": params.get("prompt_modifier", ""),
}
db.create_task(task_id, suno_params, provider="suno")
# Suno 작업을 스레드로 비동기 실행 — 완료 여부는 아래에서 db 폴링으로 확인 (BackgroundTasks 미사용)
asyncio.create_task(asyncio.to_thread(run_suno_generation, task_id, suno_params))
waited = 0
while waited < TRACK_GEN_TIMEOUT_S:
await asyncio.sleep(POLL_INTERVAL_S)
waited += POLL_INTERVAL_S
task = db.get_task(task_id)
if not task:
continue
status = task.get("status")
if status == "succeeded":
# task["track"] 또는 task["result"]["track"] 형태 시도, 없으면 task_id로 조회
tr = task.get("track")
if tr and isinstance(tr, dict):
return tr.get("id")
result = task.get("result", {}) or {}
if isinstance(result, dict) and isinstance(result.get("track"), dict):
return result["track"].get("id")
# Fallback: music_library에서 task_id로 검색
track = db.get_track_by_task_id(task_id)
if track:
return track.get("id")
return None
if status == "failed":
return None
return None # timeout


@@ -14,6 +14,85 @@ def _conn() -> sqlite3.Connection:
return conn
def _add_column_if_missing(cursor, table: str, column: str, ddl: str) -> None:
"""SQLite-safe ALTER TABLE ADD COLUMN — idempotent.
SQLite의 ALTER TABLE은 컬럼 존재 시 에러. PRAGMA로 미리 확인.
"""
cursor.execute(f"PRAGMA table_info({table})")
existing = {row[1] for row in cursor.fetchall()}
if column not in existing:
cursor.execute(f"ALTER TABLE {table} ADD COLUMN {column} {ddl}")
def _is_column_not_null(cursor, table: str, column: str) -> bool:
"""PRAGMA table_info row format: (cid, name, type, notnull, dflt_value, pk)."""
cursor.execute(f"PRAGMA table_info({table})")
for row in cursor.fetchall():
if row[1] == column:
return row[3] == 1
return False
def _relax_video_pipelines_track_id_nullable(cursor) -> None:
"""track_id NOT NULL → NULL (compile_job_id 만 있는 pipeline 지원).
SQLite는 ALTER COLUMN을 지원하지 않아 표준 패턴 — 새 테이블 생성 → 데이터 복사 → 교체.
Idempotent: 이미 NULL이면 no-op.
"""
if not _is_column_not_null(cursor, "video_pipelines", "track_id"):
return # already nullable
# 새 컬럼 4개도 함께 포함된 최종 스키마로 새 테이블 생성
cursor.execute("""
CREATE TABLE video_pipelines_new (
id INTEGER PRIMARY KEY AUTOINCREMENT,
track_id INTEGER,
state TEXT NOT NULL DEFAULT 'created',
state_started_at TEXT NOT NULL,
cover_url TEXT,
video_url TEXT,
thumbnail_url TEXT,
metadata_json TEXT,
review_json TEXT,
youtube_video_id TEXT,
feedback_count_per_step TEXT NOT NULL DEFAULT '{}',
last_telegram_msg_ids TEXT NOT NULL DEFAULT '{}',
created_at TEXT NOT NULL,
updated_at TEXT NOT NULL,
cancelled_at TEXT,
failed_reason TEXT,
compile_job_id INTEGER,
visual_style TEXT NOT NULL DEFAULT 'essential',
background_mode TEXT NOT NULL DEFAULT 'static',
background_keyword TEXT
)
""")
# 기존 컬럼 모두 명시적으로 SELECT (새 컬럼은 default로 채워짐)
cursor.execute("""
INSERT INTO video_pipelines_new
(id, track_id, state, state_started_at, cover_url, video_url,
thumbnail_url, metadata_json, review_json, youtube_video_id,
feedback_count_per_step, last_telegram_msg_ids,
created_at, updated_at, cancelled_at, failed_reason,
compile_job_id, visual_style, background_mode, background_keyword)
SELECT
id, track_id, state, state_started_at, cover_url, video_url,
thumbnail_url, metadata_json, review_json, youtube_video_id,
feedback_count_per_step, last_telegram_msg_ids,
created_at, updated_at, cancelled_at, failed_reason,
COALESCE(compile_job_id, NULL),
COALESCE(visual_style, 'essential'),
COALESCE(background_mode, 'static'),
COALESCE(background_keyword, NULL)
FROM video_pipelines
""")
cursor.execute("DROP TABLE video_pipelines")
cursor.execute("ALTER TABLE video_pipelines_new RENAME TO video_pipelines")
def init_db() -> None:
with _conn() as conn:
conn.execute("""
@@ -185,11 +264,33 @@ def init_db() -> None:
)
""")
# ── music_batch_jobs 테이블 ──────────────────────────────────────
conn.execute("""
CREATE TABLE IF NOT EXISTS music_batch_jobs (
id INTEGER PRIMARY KEY AUTOINCREMENT,
genre TEXT NOT NULL,
count INTEGER NOT NULL,
target_duration_sec INTEGER NOT NULL DEFAULT 180,
auto_pipeline INTEGER NOT NULL DEFAULT 1,
completed INTEGER NOT NULL DEFAULT 0,
track_ids_json TEXT NOT NULL DEFAULT '[]',
current_track_index INTEGER NOT NULL DEFAULT 0,
current_track_status TEXT,
status TEXT NOT NULL DEFAULT 'queued',
error TEXT,
compile_job_id INTEGER,
pipeline_id INTEGER,
created_at TEXT NOT NULL,
updated_at TEXT NOT NULL
)
""")
# ── YouTube pipeline 테이블 (5개) ─────────────────────────────────
# track_id는 nullable: compile_job_id로 입력하는 essential mix 모드 지원
conn.execute("""
CREATE TABLE IF NOT EXISTS video_pipelines (
id INTEGER PRIMARY KEY AUTOINCREMENT,
-track_id INTEGER NOT NULL,
+track_id INTEGER,
state TEXT NOT NULL DEFAULT 'created',
state_started_at TEXT NOT NULL,
cover_url TEXT,
@@ -206,6 +307,13 @@ def init_db() -> None:
failed_reason TEXT
)
""")
# Migration for essential mix pipeline (task 2026-05-09)
cur = conn.cursor()
_add_column_if_missing(cur, "video_pipelines", "compile_job_id", "INTEGER")
_add_column_if_missing(cur, "video_pipelines", "visual_style", "TEXT NOT NULL DEFAULT 'essential'")
_add_column_if_missing(cur, "video_pipelines", "background_mode", "TEXT NOT NULL DEFAULT 'static'")
_add_column_if_missing(cur, "video_pipelines", "background_keyword", "TEXT")
_relax_video_pipelines_track_id_nullable(cur)
conn.execute("""
CREATE TABLE IF NOT EXISTS pipeline_jobs (
id INTEGER PRIMARY KEY AUTOINCREMENT,
@@ -943,25 +1051,46 @@ def _parse_pipeline_row(row: sqlite3.Row) -> Dict[str, Any]:
d["metadata"] = json.loads(d["metadata_json"])
if d.get("review_json"):
d["review"] = json.loads(d["review_json"])
# Cache-bust media URLs — append ?v={updated_at_compact} so browsers/telegram fetch fresh after regen
updated_at = d.get("updated_at", "") or ""
if updated_at:
cache_key = updated_at.replace(":", "").replace("-", "").replace("T", "").replace(".", "")
for url_key in ("cover_url", "video_url", "thumbnail_url"):
url = d.get(url_key)
if url and "?" not in url:
d[url_key] = f"{url}?v={cache_key}"
return d
-def create_pipeline(track_id: int) -> int:
+def create_pipeline(track_id: Optional[int] = None, *,
+compile_job_id: Optional[int] = None,
+visual_style: str = "essential",
+background_mode: str = "static",
+background_keyword: Optional[str] = None) -> int:
+"""track_id XOR compile_job_id 검증."""
+if (track_id is None) == (compile_job_id is None):
+raise ValueError("track_id와 compile_job_id 중 정확히 하나만 지정")
with _conn() as conn:
+cur = conn.cursor()
now = _now()
-cur = conn.execute("""
-INSERT INTO video_pipelines (track_id, state, state_started_at, created_at, updated_at)
-VALUES (?, 'created', ?, ?, ?)
-""", (track_id, now, now, now))
+cur.execute("""
+INSERT INTO video_pipelines
+(track_id, compile_job_id, visual_style, background_mode, background_keyword,
+state, state_started_at, created_at, updated_at)
+VALUES (?, ?, ?, ?, ?, 'created', ?, ?, ?)
+""", (track_id, compile_job_id, visual_style, background_mode,
+background_keyword, now, now, now))
return cur.lastrowid
def get_pipeline(pid: int) -> Optional[Dict[str, Any]]:
with _conn() as conn:
row = conn.execute("""
-SELECT vp.*, ml.title AS track_title
+SELECT vp.*, ml.title AS track_title, cj.title AS compile_title
FROM video_pipelines vp
LEFT JOIN music_library ml ON ml.id = vp.track_id
+LEFT JOIN compile_jobs cj ON cj.id = vp.compile_job_id
WHERE vp.id = ?
""", (pid,)).fetchone()
if not row:
@@ -991,9 +1120,10 @@ def update_pipeline_state(pid: int, state: str, **fields) -> None:
def list_pipelines(active_only: bool = False) -> List[Dict[str, Any]]:
sql = """
-SELECT vp.*, ml.title AS track_title
+SELECT vp.*, ml.title AS track_title, cj.title AS compile_title
FROM video_pipelines vp
LEFT JOIN music_library ml ON ml.id = vp.track_id
+LEFT JOIN compile_jobs cj ON cj.id = vp.compile_job_id
"""
if active_only:
sql += " WHERE vp.state NOT IN ('published','cancelled','failed','awaiting_manual')"
@@ -1148,3 +1278,85 @@ def get_oauth_token() -> Optional[Dict[str, Any]]:
def delete_oauth_token() -> None:
with _conn() as conn:
conn.execute("DELETE FROM youtube_oauth_tokens")
# ── music_batch_jobs CRUD ─────────────────────────────────────────────────────
_BATCH_ALLOWED_COLS = frozenset([
"completed", "track_ids_json", "current_track_index",
"current_track_status", "status", "error",
"compile_job_id", "pipeline_id",
])
def create_batch_job(genre: str, count: int, target_duration_sec: int = 180,
auto_pipeline: bool = True) -> int:
with _conn() as conn:
now = _now()
cur = conn.cursor()
cur.execute("""
INSERT INTO music_batch_jobs
(genre, count, target_duration_sec, auto_pipeline,
status, created_at, updated_at)
VALUES (?, ?, ?, ?, 'queued', ?, ?)
""", (genre, count, target_duration_sec, 1 if auto_pipeline else 0, now, now))
return cur.lastrowid
def get_batch_job(batch_id: int) -> dict | None:
with _conn() as conn:
row = conn.execute(
"SELECT * FROM music_batch_jobs WHERE id = ?", (batch_id,)
).fetchone()
if not row:
return None
d = dict(row)
d["track_ids"] = json.loads(d.get("track_ids_json") or "[]")
return d
def update_batch_job(batch_id: int, **fields) -> None:
unknown = set(fields) - _BATCH_ALLOWED_COLS
if unknown:
raise ValueError(f"unknown batch job columns: {unknown}")
if not fields:
return
cols = ", ".join(f"{k} = ?" for k in fields)
vals = list(fields.values()) + [_now(), batch_id]
with _conn() as conn:
conn.execute(
f"UPDATE music_batch_jobs SET {cols}, updated_at = ? WHERE id = ?",
vals,
)
def append_batch_track(batch_id: int, track_id: int) -> None:
"""track_ids_json에 새 track_id 추가 + completed 증가 (atomic)."""
with _conn() as conn:
row = conn.execute(
"SELECT track_ids_json, completed FROM music_batch_jobs WHERE id = ?",
(batch_id,),
).fetchone()
if not row:
return
ids = json.loads(row["track_ids_json"] or "[]")
ids.append(track_id)
conn.execute(
"UPDATE music_batch_jobs SET track_ids_json = ?, completed = ?, updated_at = ? WHERE id = ?",
(json.dumps(ids), row["completed"] + 1, _now(), batch_id),
)
def list_batch_jobs(active_only: bool = False) -> list[dict]:
sql = "SELECT * FROM music_batch_jobs"
if active_only:
sql += " WHERE status NOT IN ('failed','cancelled','piped')"
sql += " ORDER BY created_at DESC"
with _conn() as conn:
rows = conn.execute(sql).fetchall()
out = []
for r in rows:
d = dict(r)
d["track_ids"] = json.loads(d.get("track_ids_json") or "[]")
out.append(d)
return out


@@ -35,6 +35,7 @@ from .suno_provider import (
generate_lyrics, get_credits, get_timestamped_lyrics, generate_style_boost,
SUNO_API_KEY, SUNO_MODELS,
)
from .batch_generator import run_batch as _run_batch
app = FastAPI()
@@ -849,6 +850,69 @@ def export_compile(job_id: int):
}
# ── 배치 음악 생성 API ────────────────────────────────────────────────────────
class BatchGenerateRequest(BaseModel):
genre: str
count: int = 10
target_duration_sec: int = 180
auto_pipeline: bool = True
@app.post("/api/music/generate-batch", status_code=201)
async def generate_batch(req: BatchGenerateRequest, bg: BackgroundTasks):
if not (1 <= req.count <= 10):
raise HTTPException(status_code=400, detail="count must be between 1 and 10")
if not (60 <= req.target_duration_sec <= 300):
raise HTTPException(status_code=400, detail="target_duration_sec must be between 60 and 300")
if not req.genre:
raise HTTPException(status_code=400, detail="genre is required")
if not SUNO_API_KEY:
raise HTTPException(status_code=400, detail="SUNO_API_KEY is not set")
batch_id = _db_module.create_batch_job(
genre=req.genre, count=req.count,
target_duration_sec=req.target_duration_sec,
auto_pipeline=req.auto_pipeline,
)
bg.add_task(_run_batch, batch_id)
return _db_module.get_batch_job(batch_id)
@app.get("/api/music/generate-batch/{batch_id}")
def get_batch(batch_id: int):
j = _db_module.get_batch_job(batch_id)
if not j:
raise HTTPException(status_code=404, detail="Not found")
if j["track_ids"]:
ids_csv = ",".join(str(i) for i in j["track_ids"])
import sqlite3
conn = sqlite3.connect(_db_module.DB_PATH)
conn.row_factory = sqlite3.Row
rows = conn.execute(
f"SELECT id, title, audio_url, duration_sec FROM music_library WHERE id IN ({ids_csv})"
).fetchall()
conn.close()
# Sort tracks into batch.track_ids order
by_id = {r["id"]: dict(r) for r in rows}
j["tracks"] = [by_id.get(tid) for tid in j["track_ids"] if tid in by_id]
else:
j["tracks"] = []
return j
@app.get("/api/music/generate-batch")
def list_batches(status: str = "all"):
return {"batches": _db_module.list_batch_jobs(active_only=(status == "active"))}
@app.get("/api/music/genres")
def list_supported_genres():
"""배치 생성에서 사용 가능한 장르 목록 — random_pools의 키."""
from .random_pools import list_genres
return {"genres": list_genres()}
# ── Revenue tracking API ─────────────────────────────────────────────────────
@app.get("/api/music/revenue/dashboard")
@@ -929,7 +993,11 @@ def market_suggest(limit: int = 5):
# ── Pipeline endpoints ────────────────────────────────────────────────────────
class PipelineCreate(BaseModel):
track_id: int
track_id: int | None = None
compile_job_id: int | None = None
visual_style: str | None = None # single | essential
background_mode: str | None = None # static | video_loop
background_keyword: str | None = None
class FeedbackRequest(BaseModel):
@@ -940,10 +1008,34 @@ class FeedbackRequest(BaseModel):
@app.post("/api/music/pipeline", status_code=201)
def create_pipeline(req: PipelineCreate):
# XOR validation
if (req.track_id is None) == (req.compile_job_id is None):
raise HTTPException(400, "specify exactly one of track_id or compile_job_id")
# Check compile job status
if req.compile_job_id is not None:
job = _db_module.get_compile_job(req.compile_job_id)
if not job:
raise HTTPException(404, f"compile job {req.compile_job_id} not found")
if job.get("status") not in ("done", "succeeded"):
raise HTTPException(400, f"compile job {req.compile_job_id} not ready (status={job.get('status')})")
# 409 if an active pipeline already exists for the same input
actives = _db_module.list_pipelines(active_only=True)
if any(p["track_id"] == req.track_id for p in actives):
raise HTTPException(409, "a pipeline is already in progress")
pid = _db_module.create_pipeline(req.track_id)
for p in actives:
if (req.track_id and p.get("track_id") == req.track_id) or \
(req.compile_job_id and p.get("compile_job_id") == req.compile_job_id):
raise HTTPException(409, "a pipeline is already in progress")
setup = _db_module.get_youtube_setup()
vd = setup["visual_defaults"]
pid = _db_module.create_pipeline(
track_id=req.track_id,
compile_job_id=req.compile_job_id,
visual_style=req.visual_style or vd.get("default_visual_style", "essential"),
background_mode=req.background_mode or vd.get("default_background_mode", "static"),
background_keyword=req.background_keyword or vd.get("default_background_keyword") or None,
)
return _db_module.get_pipeline(pid)
@@ -1009,11 +1101,19 @@ async def feedback(pid: int, req: FeedbackRequest, bg: BackgroundTasks):
if req.intent == "approve":
from .pipeline.state_machine import next_state_on_approve
next_st = next_state_on_approve(state)
_db_module.update_pipeline_state(pid, next_st)
# Validate transition is legal
try:
next_st = next_state_on_approve(state)
except ValueError as e:
raise HTTPException(400, str(e))
next_step = _state_to_step(next_st)
if next_step:
# bg task will set state to the new *_pending when step completes
bg.add_task(orchestrator.run_step, pid, next_step)
else:
# No step to run — fall through to direct state update
# (defensive — current code paths don't hit this)
_db_module.update_pipeline_state(pid, next_st)
return {"ok": True}
elif req.intent == "reject":

View File

@@ -0,0 +1,60 @@
"""Pexels Video API로 background loop 영상 받아오기 (video_loop 모드용)."""
import os
import logging
import httpx
from . import storage
logger = logging.getLogger("music-lab.background")
TIMEOUT_S = 60
async def fetch_video_loop(pipeline_id: int, keyword: str) -> dict:
"""Pexels Video API → 720p HD mp4 다운로드 → /app/data/videos/{id}/loop.mp4 저장.
반환: {"path": str | None, "used_fallback": bool, "error": str | None}
"""
api_key = os.getenv("PEXELS_API_KEY", "")
if not api_key:
return {"path": None, "used_fallback": True, "error": "PEXELS_API_KEY 미설정"}
out_dir = storage.pipeline_dir(pipeline_id)
out_path = os.path.join(out_dir, "loop.mp4")
try:
async with httpx.AsyncClient(timeout=TIMEOUT_S) as client:
resp = await client.get(
"https://api.pexels.com/videos/search",
headers={"Authorization": api_key},
params={"query": keyword or "ambient calm", "per_page": 5,
"orientation": "landscape"},
)
resp.raise_for_status()
data = resp.json()
videos = data.get("videos", [])
if not videos:
return {"path": None, "used_fallback": True,
"error": f"no Pexels results for: {keyword}"}
# Prefer 720p/1080p HD; otherwise take the first video file
chosen = None
for v in videos:
for f in v.get("video_files", []):
if f.get("quality") == "hd" and f.get("width") in (1280, 1920):
chosen = f
break
if chosen:
break
if not chosen:
chosen = videos[0]["video_files"][0]
video_url = chosen["link"]
vid_resp = await client.get(video_url)
vid_resp.raise_for_status()
with open(out_path, "wb") as f:
f.write(vid_resp.content)
return {"path": out_path, "used_fallback": False, "error": None}
except (httpx.HTTPError, httpx.TimeoutException, KeyError, ValueError, OSError) as e:
logger.warning("Pexels video fetch 실패: %s", e)
return {"path": None, "used_fallback": True, "error": str(e)}

View File

@@ -13,6 +13,7 @@ from .gradient import make_gradient_with_title
logger = logging.getLogger("music-lab.cover")
DALLE_TIMEOUT_S = 90
PEXELS_IMG_TIMEOUT_S = 30
def _get_api_key() -> str:
@@ -23,13 +24,68 @@ def _get_model() -> str:
return os.getenv("OPENAI_IMAGE_MODEL", "gpt-image-1")
def _get_pexels_key() -> str:
return os.getenv("PEXELS_API_KEY", "")
async def _generate_with_pexels(genre: str, mood: str, track_title: str,
out_path: str, keyword_override: str = "") -> bool:
"""Pexels 이미지 검색·다운로드. 성공 시 True. API key 없거나 0 결과면 False."""
api_key = _get_pexels_key()
if not api_key:
return False
keyword = keyword_override or f"{genre} aesthetic background"
try:
async with httpx.AsyncClient(timeout=PEXELS_IMG_TIMEOUT_S) as client:
resp = await client.get(
"https://api.pexels.com/v1/search",
headers={"Authorization": api_key},
params={"query": keyword, "per_page": 5, "orientation": "landscape"},
)
resp.raise_for_status()
data = resp.json()
photos = data.get("photos", [])
if not photos:
return False
img_url = photos[0]["src"].get("large2x") or photos[0]["src"].get("original")
img_resp = await client.get(img_url)
img_resp.raise_for_status()
with Image.open(BytesIO(img_resp.content)) as src:
img = src.convert("RGB")
img.save(out_path, "JPEG", quality=92)
return True
except (httpx.HTTPError, httpx.TimeoutException, KeyError, ValueError, OSError) as e:
logger.warning("Pexels 이미지 검색 실패: %s", e)
return False
async def generate(*, pipeline_id: int, genre: str, prompt_template: str,
mood: str = "", track_title: str = "", feedback: str = "") -> dict:
mood: str = "", track_title: str = "", feedback: str = "",
image_source: str = "ai",
background_keyword: str = "") -> dict:
"""커버 아트 생성. 성공 시 jpg 저장 + URL 반환. 실패 시 그라데이션 폴백.
image_source: 'ai' (DALL·E 기본) | 'pexels' (스톡 사진).
반환: {"url": str, "used_fallback": bool, "error": str | None}
"""
out_path = os.path.join(storage.pipeline_dir(pipeline_id), "cover.jpg")
if image_source == "pexels":
ok = await _generate_with_pexels(genre, mood, track_title, out_path, background_keyword)
if ok:
return {
"url": storage.media_url(pipeline_id, "cover.jpg"),
"used_fallback": False,
"error": None,
}
# Pexels failed → gradient fallback
make_gradient_with_title(genre, track_title, out_path)
return {
"url": storage.media_url(pipeline_id, "cover.jpg"),
"used_fallback": True,
"error": "Pexels search failed or no API key",
}
used_fallback = False
error = None
@@ -38,7 +94,8 @@ async def generate(*, pipeline_id: int, genre: str, prompt_template: str,
if api_key:
try:
await _generate_with_dalle(prompt_template, mood, feedback, out_path,
api_key=api_key, model=model)
api_key=api_key, model=model,
background_keyword=background_keyword)
except (httpx.HTTPError, httpx.TimeoutException, KeyError, ValueError, OSError) as e:
logger.warning("DALL·E 실패 — 폴백: %s", e)
error = str(e)
@@ -56,21 +113,45 @@ async def generate(*, pipeline_id: int, genre: str, prompt_template: str,
}
def _get_image_size(model: str) -> str:
"""모델별 16:9에 가장 가까운 landscape 사이즈.
OPENAI_IMAGE_SIZE 환경변수로 override 가능.
- gpt-image-1: 1536x1024 (3:2)
- dall-e-3: 1792x1024 (7:4)
- 기타: 1024x1024 (square 폴백)
"""
override = os.getenv("OPENAI_IMAGE_SIZE", "")
if override:
return override
m = (model or "").lower()
if "dall-e-3" in m or "dalle3" in m:
return "1792x1024"
if "gpt-image" in m:
return "1536x1024"
return "1024x1024"
async def _generate_with_dalle(prompt_template: str, mood: str,
feedback: str, out_path: str,
*, api_key: str, model: str) -> None:
*, api_key: str, model: str,
background_keyword: str = "") -> None:
prompt = prompt_template
if background_keyword:
prompt = f"{prompt}, {background_keyword}" # 사용자 직접 지정 keyword 우선 적용
if mood:
prompt = f"{prompt}, {mood} mood"
if feedback:
prompt = f"{prompt}. 추가 지시: {feedback}"
prompt = f"{prompt}, no text, high quality"
# cinematic landscape 명시 — 16:9 영상에 시각적으로 fit하도록 구도 유도
prompt = f"{prompt}, no text, high quality, cinematic landscape composition, wide aspect"
image_size = _get_image_size(model)
async with httpx.AsyncClient(timeout=DALLE_TIMEOUT_S) as client:
resp = await client.post(
"https://api.openai.com/v1/images/generations",
headers={"Authorization": f"Bearer {api_key}"},
json={"model": model, "prompt": prompt, "size": "1024x1024", "n": 1},
json={"model": model, "prompt": prompt, "size": image_size, "n": 1},
)
resp.raise_for_status()
data = resp.json()["data"][0]

View File

@@ -19,26 +19,46 @@ def _get_model() -> str:
return os.getenv("CLAUDE_HAIKU_MODEL", CLAUDE_HAIKU_MODEL_DEFAULT)
def _format_chapters(tracks: list[dict]) -> str:
"""YouTube 챕터 자동 인식 형식: '[mm:ss] 제목' 한 줄씩.
1시간 이상이면 hh:mm:ss 형식.
"""
if not tracks:
return ""
lines = []
for t in tracks:
offset = int(t.get("start_offset_sec", 0))
m, s = divmod(offset, 60)
h, m = divmod(m, 60)
if h > 0:
ts = f"{h:02d}:{m:02d}:{s:02d}"
else:
ts = f"{m:02d}:{s:02d}"
lines.append(f"{ts} {t.get('title', '')}")
return "\n".join(lines)
async def generate(*, track: dict, template: dict, trend_keywords: list[str],
feedback: str = "") -> dict:
feedback: str = "", tracks: list[dict] | None = None) -> dict:
"""메타데이터 생성. 성공 시 LLM, 실패/미설정 시 템플릿 치환 폴백.
반환: {"title", "description", "tags", "category_id", "used_fallback", "error"}
"""
api_key = _get_api_key()
if not api_key:
return {**_fallback_template(track, template), "used_fallback": True, "error": "no api key"}
return {**_fallback_template(track, template, tracks), "used_fallback": True, "error": "no api key"}
try:
result = await _call_claude(track, template, trend_keywords, feedback,
result = await _call_claude(track, template, trend_keywords, feedback, tracks,
api_key=api_key, model=_get_model())
return {**result, "used_fallback": False, "error": None}
except (httpx.HTTPError, httpx.TimeoutException, KeyError, ValueError, json.JSONDecodeError) as e:
logger.warning("메타데이터 LLM 실패 — 폴백: %s", e)
return {**_fallback_template(track, template), "used_fallback": True, "error": str(e)}
return {**_fallback_template(track, template, tracks), "used_fallback": True, "error": str(e)}
def _fallback_template(track: dict, template: dict) -> dict:
def _fallback_template(track: dict, template: dict, tracks: list[dict] | None = None) -> dict:
fmt_vars = {
"title": track.get("title", ""),
"genre": track.get("genre", ""),
@@ -48,6 +68,8 @@ def _fallback_template(track: dict, template: dict) -> dict:
}
title = template.get("title", "{title}").format(**fmt_vars)
description = template.get("description", "{title}").format(**fmt_vars)
if tracks and len(tracks) > 1:
description = description + "\n\n" + _format_chapters(tracks)
return {
"title": title[:100],
"description": description[:5000],
@@ -56,20 +78,87 @@ def _fallback_template(track: dict, template: dict) -> dict:
}
async def _call_claude(track: dict, template: dict, trend_keywords: list[str],
feedback: str, *, api_key: str, model: str) -> dict:
user_prompt = (
"Generate YouTube metadata for the following track. Respond with JSON only.\n\n"
f"Track: {json.dumps(track, ensure_ascii=False)}\n"
f"Template: {json.dumps(template, ensure_ascii=False)}\n"
f"Trend keywords: {', '.join(trend_keywords)}\n"
)
def _build_prompt(track: dict, template: dict, trend_keywords: list[str],
feedback: str, tracks: list[dict] | None) -> str:
"""프로페셔널 lofi/ambient 채널 수준의 메타데이터 작성 prompt.
list + join 패턴으로 조립 — 인접 문자열 리터럴/+ 충돌 회피, 한글 인코딩 안전.
"""
is_mix = bool(tracks and len(tracks) > 1)
parts: list[str] = []
# === Input ===
parts.append("You are the copywriter for a lo-fi/ambient YouTube channel.")
parts.append("Write professional metadata. Respond with JSON only.")
parts.append("")
parts.append("## Input")
parts.append(f"Track: {json.dumps(track, ensure_ascii=False)}")
parts.append(f"User template (reference only; include this info but do not just substitute it): {json.dumps(template, ensure_ascii=False)}")
trend_str = ", ".join(trend_keywords) if trend_keywords else "(none)"
parts.append(f"Trend keywords: {trend_str}")
if is_mix:
chapters = _format_chapters(tracks)
parts.append("")
parts.append(f"이 영상은 {len(tracks)}개 트랙의 **mix 컴필레이션**입니다. 챕터 리스트:")
parts.append(chapters)
if feedback:
user_prompt += f"\n사용자 피드백: {feedback}\n"
user_prompt += (
'\n출력 JSON: {"title": "60자 이내", "description": "1000자 이내, 3-5문단",'
' "tags": ["15개 이내"], "category_id": 10}'
)
parts.append("")
parts.append(f"사용자 피드백 (이 방향으로 수정): {feedback}")
# === title guide ===
parts.append("")
parts.append("## title (within 60 chars, CTR + SEO)")
if is_mix:
parts.append("- For a mix, use 'duration + mood + use scenario'. e.g. '1 Hour Lo-Fi Chill Mix — Late Night Study & Relaxation'")
else:
parts.append("- For a single track, use 'title — mood' or '[genre] title ({BPM}BPM)'")
parts.append("- About one emoji is fine (🎧 📻 ☕ 🌙 etc.). No emoji overload")
parts.append("- Mix English and Korean naturally")
# === description guide ===
parts.append("")
parts.append("## description (within 1500 chars, 5-7 sections, natural tone)")
parts.append("1. **One-sentence hook**: what it is and who it is for (e.g. 'A calm lo-fi compilation for every moment that needs focus or rest.')")
parts.append("2. **Mood description** (2-3 sentences) with visual imagery (e.g. 'like the warm glow of a rainy cafe window')")
parts.append("3. **Recommended uses**: concrete scenarios such as studying, work, coding, meditation, driving, cafe BGM")
if is_mix:
parts.append("4. **Track list / chapters**: include the chapter list above verbatim (YouTube auto-chapter detection). Keep the `mm:ss title` format on each line")
else:
parts.append("4. **Sound info**: weave genre, BPM, key and other track details into natural prose")
parts.append("5. **Listening tip**: guidance such as '🎧 Headphones recommended for the best sound'")
parts.append("6. **Call to action**: invite subscribing, liking, and sharing moods/requests in the comments. Keep it natural and short")
parts.append("7. **(Optional) hashtags**: 5-10 at the end, e.g. #lofi #studymusic #공부음악 (inside the description body)")
parts.append("- Tone: a calm, sincere lofi channel curator. No ad-style hype")
# === tags guide ===
parts.append("")
parts.append("## tags (15 or fewer, SEO)")
parts.append("- English keywords first: lofi, lo-fi, chill beats, study music, work music, ambient, instrumental, relaxing, focus, night vibes, etc.")
parts.append("- Korean keywords as support: 공부음악, 작업음악, 카페음악, 집중력, 잔잔한 음악, etc.")
parts.append("- Always include the trend keywords (if any)")
if is_mix:
parts.append("- Add mix-specific tags such as 'mix', 'compilation', 'long mix'")
# === category + output ===
parts.append("")
parts.append("## category_id")
parts.append("Fixed at 10 (Music)")
parts.append("")
parts.append("## Output")
parts.append("```json")
parts.append('{"title": "...", "description": "...", "tags": [...], "category_id": 10}')
parts.append("```")
parts.append("Output nothing other than the JSON.")
return "\n".join(parts)
async def _call_claude(track: dict, template: dict, trend_keywords: list[str],
feedback: str, tracks: list[dict] | None,
*, api_key: str, model: str) -> dict:
user_prompt = _build_prompt(track, template, trend_keywords, feedback, tracks)
async with httpx.AsyncClient(timeout=TIMEOUT_S) as client:
resp = await client.post(
@@ -81,7 +170,7 @@ async def _call_claude(track: dict, template: dict, trend_keywords: list[str],
},
json={
"model": model,
"max_tokens": 1024,
"max_tokens": 2048, # mix 더 길어서
"messages": [{"role": "user", "content": user_prompt}],
},
)

View File

@@ -1,11 +1,13 @@
"""파이프라인 오케스트레이터 — 단계별 BackgroundTask 등록 및 산출물 → DB 반영."""
import asyncio
import json
import logging
import os
import sqlite3
from app import db
from . import cover, video, thumb, metadata, review, youtube
from . import cover, video, thumb, metadata, review, youtube, background, storage
from .gradient import make_gradient_with_title
logger = logging.getLogger("music-lab.orchestrator")
@@ -19,21 +21,26 @@ async def run_step(pipeline_id: int, step: str, feedback: str = "") -> None:
job_id = db.create_pipeline_job(pipeline_id, step)
db.update_pipeline_job(job_id, status="running")
p = db.get_pipeline(pipeline_id)
track = _get_track(p["track_id"])
try:
ctx = _resolve_input(p)
except ValueError as e:
db.update_pipeline_job(job_id, status="failed", error=str(e))
db.update_pipeline_state(pipeline_id, "failed", failed_reason=f"{step}: {e}")
return
try:
if step == "cover":
result = await _run_cover(p, track, feedback)
result = await _run_cover(p, ctx, feedback)
elif step == "video":
result = await _run_video(p, track)
result = await _run_video(p, ctx)
elif step == "thumb":
result = await _run_thumb(p, track, feedback)
result = await _run_thumb(p, ctx, feedback)
elif step == "meta":
result = await _run_meta(p, track, feedback)
result = await _run_meta(p, ctx, feedback)
elif step == "review":
result = await _run_review(p, track)
result = await _run_review(p, ctx)
elif step == "publish":
result = await _run_publish(p, track)
result = await _run_publish(p, ctx)
else:
raise ValueError(f"unknown step: {step}")
db.update_pipeline_job(job_id, status="succeeded")
@@ -44,6 +51,78 @@ async def run_step(pipeline_id: int, step: str, feedback: str = "") -> None:
db.update_pipeline_state(pipeline_id, "failed", failed_reason=f"{step}: {e}")
def _resolve_input(p: dict) -> dict:
"""파이프라인 입력 = 단일 트랙 또는 컴파일 결과.
반환: {
"audio_path": str, # 컨테이너 절대경로
"duration_sec": int,
"tracks": list[{"id", "title", "start_offset_sec", "duration_sec"}],
"title": str,
"genre": str, # mix는 "mix"
"moods": list[str],
}
"""
track_id = p.get("track_id")
compile_id = p.get("compile_job_id")
if track_id is None and compile_id is None:
raise ValueError("track_id 또는 compile_job_id 중 하나는 필요")
if compile_id is not None:
job = db.get_compile_job(compile_id)
if not job or job.get("status") not in ("done", "succeeded"):
raise ValueError(
f"compile job {compile_id} not ready "
f"(status={job.get('status') if job else None})"
)
tracks = []
offset = 0.0
crossfade = job.get("crossfade_sec", 0) or 0
track_ids = job.get("track_ids") or []
for tid in track_ids:
t = db.get_track_by_id(tid)
if not t:
continue
dur = t.get("duration_sec", 0)
tracks.append({
"id": tid,
"title": t.get("title", ""),
"start_offset_sec": int(offset),
"duration_sec": dur,
})
offset += dur - crossfade
# Restore the last track's full length (add back the crossfade subtracted above)
total = int(offset + crossfade) if tracks else 0
return {
"audio_path": job.get("audio_path") or job.get("output_path") or "",
"duration_sec": total,
"tracks": tracks,
"title": job.get("title") or "Mix",
"genre": "mix",
"moods": [],
}
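# Worked example (illustrative, mirrors the unit tests): tracks of 180/200/150s
# with crossfade_sec=3 get start offsets 0, 177 (=180-3) and 374 (=177+200-3);
# the running offset ends at 521, and adding the final crossfade back gives
# duration_sec = 524.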
# Single track
t = db.get_track_by_id(track_id)
if not t:
raise ValueError(f"track {track_id} not found")
return {
"audio_path": t.get("file_path") or _local_path(t.get("audio_url", "")),
"duration_sec": t.get("duration_sec", 0),
"tracks": [{
"id": t["id"],
"title": t.get("title", ""),
"start_offset_sec": 0,
"duration_sec": t.get("duration_sec", 0),
}],
"title": t.get("title", ""),
"genre": t.get("genre", "default"),
"moods": t.get("moods", []) or [],
}
def _get_track(track_id: int) -> dict:
# tracks table helper: uses the function already present in db
t = None
@@ -87,58 +166,95 @@ def _fetch_track_fallback(track_id: int) -> dict | None:
return None
async def _run_cover(p, track, feedback):
async def _run_cover(p, ctx, feedback):
setup = db.get_youtube_setup()
vd = setup["visual_defaults"]
bg_mode = p.get("background_mode") or vd.get("default_background_mode", "static")
keyword = p.get("background_keyword") or vd.get("default_background_keyword", "")
if bg_mode == "video_loop":
# Try downloading the Pexels video; regardless of the outcome, cover.jpg is generated separately as a gradient
# (on failure, video.py can use cover.jpg as the fallback background)
await background.fetch_video_loop(p["id"], keyword)
out_path = os.path.join(storage.pipeline_dir(p["id"]), "cover.jpg")
make_gradient_with_title(ctx["genre"], ctx["title"], out_path)
return {"next_state": "cover_pending",
"fields": {"cover_url": storage.media_url(p["id"], "cover.jpg")}}
# Static mode: the existing cover.generate flow
prompts = setup["cover_prompts"]
template = prompts.get(track.get("genre", "default").lower(), prompts.get("default", ""))
template = prompts.get(ctx["genre"].lower(), prompts.get("default", ""))
image_source = vd.get("background_image_source", "ai")
out = await cover.generate(
pipeline_id=p["id"], genre=track.get("genre", "default"),
pipeline_id=p["id"], genre=ctx["genre"],
prompt_template=template,
mood=", ".join(track.get("moods", []) or []),
track_title=track.get("title", ""),
feedback=feedback,
mood=", ".join(ctx["moods"] or []),
track_title=ctx["title"], feedback=feedback,
image_source=image_source,
background_keyword=keyword,
)
return {"next_state": "cover_pending", "fields": {"cover_url": out["url"]}}
async def _run_video(p, track):
async def _run_video(p, ctx):
setup = db.get_youtube_setup()
vd = setup["visual_defaults"]
audio_path = _local_path(track.get("audio_url", ""))
audio_path = ctx["audio_path"]
cover_path = _local_path(p["cover_url"])
out = video.generate(
style = p.get("visual_style") or vd.get("default_visual_style", "essential")
bg_mode = p.get("background_mode") or vd.get("default_background_mode", "static")
bg_path = None
if bg_mode == "video_loop":
loop_local = os.path.join(storage.pipeline_dir(p["id"]), "loop.mp4")
bg_path = loop_local if os.path.isfile(loop_local) else None
out = await asyncio.to_thread(
video.generate,
pipeline_id=p["id"], audio_path=audio_path, cover_path=cover_path,
genre=track.get("genre", "default"),
duration_sec=track.get("duration_sec", 120),
resolution=vd["resolution"], style=vd["style"],
genre=ctx["genre"],
duration_sec=ctx["duration_sec"],
resolution=vd.get("resolution", "1920x1080"),
style=style,
background_mode=bg_mode,
background_path=bg_path,
tracks=ctx["tracks"] if len(ctx["tracks"]) > 1 else None,
)
return {"next_state": "video_pending", "fields": {"video_url": out["url"]}}
async def _run_thumb(p, track, feedback):
async def _run_thumb(p, ctx, feedback):
video_path = _local_path(p["video_url"])
out = thumb.generate(pipeline_id=p["id"], video_path=video_path,
track_title=track.get("title", ""), overlay_text=True)
out = await asyncio.to_thread(
thumb.generate,
pipeline_id=p["id"], video_path=video_path,
track_title=ctx["title"], overlay_text=True,
)
return {"next_state": "thumb_pending", "fields": {"thumbnail_url": out["url"]}}
async def _run_meta(p, track, feedback):
async def _run_meta(p, ctx, feedback):
setup = db.get_youtube_setup()
trend_top = _get_trend_top()
out = await metadata.generate(
track=track, template=setup["metadata_template"],
track={"title": ctx["title"], "genre": ctx["genre"],
"duration_sec": ctx["duration_sec"], "moods": ctx["moods"]},
template=setup["metadata_template"],
trend_keywords=trend_top, feedback=feedback,
tracks=ctx["tracks"] if len(ctx["tracks"]) > 1 else None,
)
return {"next_state": "meta_pending",
"fields": {"metadata_json": json.dumps(out, ensure_ascii=False)}}
async def _run_review(p, track):
async def _run_review(p, ctx):
setup = db.get_youtube_setup()
meta = json.loads(p["metadata_json"]) if p.get("metadata_json") else {}
result = await review.run_4_axis(
pipeline=p, track=track,
video_meta={"length_sec": track.get("duration_sec", 120),
pipeline=p,
track={"title": ctx["title"], "genre": ctx["genre"], "duration_sec": ctx["duration_sec"]},
video_meta={"length_sec": ctx["duration_sec"],
"resolution": setup["visual_defaults"]["resolution"]},
metadata=meta, thumbnail_url=p.get("thumbnail_url", ""),
trend_top=_get_trend_top(),
@@ -148,11 +264,12 @@ async def _run_review(p, track):
"fields": {"review_json": json.dumps(result, ensure_ascii=False)}}
async def _run_publish(p, track):
async def _run_publish(p, ctx):
setup = db.get_youtube_setup()
meta = json.loads(p["metadata_json"]) if p.get("metadata_json") else {}
privacy = setup["publish_policy"].get("privacy", "private")
result = youtube.upload_video(
result = await asyncio.to_thread(
youtube.upload_video,
video_path=_local_path(p["video_url"]),
thumbnail_path=_local_path(p["thumbnail_url"]) if p.get("thumbnail_url") else None,
metadata=meta, privacy=privacy,
@@ -162,14 +279,19 @@ async def _run_publish(p, track):
def _local_path(media_url: str) -> str:
""" /media/videos/123/cover.jpg → /app/data/videos/123/cover.jpg """
""" /media/videos/123/cover.jpg → /app/data/videos/123/cover.jpg
/media/music/abc.mp3 → /app/data/abc.mp3 (music mount at /app/data, no subdir)
"""
if not media_url:
return ""
# Strip query string (e.g., cache-buster ?v=...)
media_url = media_url.split("?", 1)[0]
base_media = os.getenv("VIDEO_MEDIA_BASE", "/media/videos")
base_data = os.getenv("VIDEO_DATA_DIR", "/app/data/videos")
if media_url.startswith(base_media):
return media_url.replace(base_media, base_data, 1)
# /media/music/abc.mp3 → /app/data/abc.mp3 (music mount has no subdir)
if media_url.startswith("/media/music/"):
return media_url.replace("/media/music/", "/app/data/", 1)
return media_url.replace("/media/", "/app/data/", 1)

View File

@@ -21,7 +21,7 @@ def generate(*, pipeline_id: int, video_path: str,
"-ss", "00:00:05", "-vframes", "1", "-q:v", "2", out_path]
result = subprocess.run(cmd, capture_output=True, text=True, timeout=THUMB_TIMEOUT_S)
if result.returncode != 0:
raise ThumbGenerationError(f"ffmpeg 썸네일 실패: {result.stderr[:300]}")
raise ThumbGenerationError(f"ffmpeg 썸네일 실패: {result.stderr[-500:]}")
if overlay_text and track_title:
_overlay_title(out_path, track_title)

View File

@@ -1,13 +1,29 @@
"""영상 비주얼 생성 — visualizer/슬라이드쇼 스타일."""
"""영상 비주얼 생성 — Windows GPU 서버 (NVENC) 호출.
Windows 서버 다운/실패 시 즉시 예외 (NAS 로컬 폴백 없음 — 의도적 결정).
"""
import os
import subprocess
import logging
import httpx
from . import storage
logger = logging.getLogger("music-lab.video")
VIDEO_TIMEOUT_S = 300 # 5 minutes
ENCODER_URL = os.getenv("WINDOWS_VIDEO_ENCODER_URL", "")
ENCODER_TIMEOUT_BASE_S = 300 # base for short videos
def _encoder_timeout(duration_sec: int) -> int:
"""duration에 비례한 HTTP timeout — Windows ffmpeg timeout (~0.3x duration + 180)에 60s 마진 추가.
1분 → 300s, 30분 → 780s, 60분 → 1320s, 120분 → 2400s
"""
return max(ENCODER_TIMEOUT_BASE_S, int(duration_sec * 0.3) + 240)
# NAS host absolute-path prefix: the host side of the docker bind mount
NAS_VIDEOS_ROOT = os.getenv("NAS_VIDEOS_ROOT", "/volume1/docker/webpage/data/videos")
NAS_MUSIC_ROOT = os.getenv("NAS_MUSIC_ROOT", "/volume1/docker/webpage/data/music")
class VideoGenerationError(Exception):
@@ -16,40 +32,84 @@ class VideoGenerationError(Exception):
def generate(*, pipeline_id: int, audio_path: str, cover_path: str,
genre: str, duration_sec: int, resolution: str = "1920x1080",
style: str = "visualizer") -> dict:
"""영상 생성. 성공 시 mp4 저장 + URL 반환. 실패 시 예외."""
w, h = resolution.split("x")
style: str = "essential",
background_mode: str = "static",
background_path: str | None = None,
tracks: list[dict] | None = None) -> dict:
"""원격 Windows GPU 서버 호출. 다운/실패 시 즉시 예외."""
if not ENCODER_URL:
raise VideoGenerationError(
"WINDOWS_VIDEO_ENCODER_URL 미설정 — Windows 인코더 서버 주소 필요"
)
out_path = os.path.join(storage.pipeline_dir(pipeline_id), "video.mp4")
nas_audio = _container_to_nas(audio_path)
nas_cover = _container_to_nas(cover_path)
nas_output = _container_to_nas(out_path)
nas_bg = _container_to_nas(background_path) if background_path else None
if style == "visualizer":
cmd = _build_visualizer_cmd(audio_path, cover_path, out_path, w, h)
else:
# Later: other styles such as slideshow; for now, fall back to visualizer
cmd = _build_visualizer_cmd(audio_path, cover_path, out_path, w, h)
payload = {
"cover_path_nas": nas_cover,
"audio_path_nas": nas_audio,
"output_path_nas": nas_output,
"resolution": resolution,
"duration_sec": duration_sec,
"style": style,
"background_mode": background_mode,
"background_path_nas": nas_bg,
"tracks": tracks or [],
}
logger.info("ffmpeg 실행: %s", " ".join(cmd))
result = subprocess.run(cmd, capture_output=True, text=True, timeout=VIDEO_TIMEOUT_S)
if result.returncode != 0:
raise VideoGenerationError(f"ffmpeg 실패: {result.stderr[:500]}")
timeout_s = _encoder_timeout(duration_sec)
logger.info("Windows 인코더 호출 (timeout=%ds): pipeline=%d duration=%ds style=%s bg_mode=%s",
timeout_s, pipeline_id, duration_sec, style, background_mode)
try:
with httpx.Client(timeout=timeout_s) as client:
resp = client.post(f"{ENCODER_URL}/encode_video", json=payload)
except (httpx.ConnectError, httpx.ReadTimeout, httpx.WriteTimeout, httpx.NetworkError) as e:
raise VideoGenerationError(f"Windows 인코더 연결 실패: {e}")
if resp.status_code != 200:
try:
body = resp.json()
# FastAPI HTTPException wraps in {"detail": ...}
detail = body.get("detail", body) if isinstance(body, dict) else body
except Exception:
detail = {"error": resp.text[:300]}
if isinstance(detail, dict):
stage = detail.get("stage", "?")
error = detail.get("error", str(detail))
else:
stage = "?"
error = str(detail)
raise VideoGenerationError(
f"Windows encoder error ({resp.status_code}): {stage}: {error}"
)
data = resp.json()
if not data.get("ok"):
raise VideoGenerationError(f"Windows 인코더 응답 ok=false: {data}")
return {
"url": storage.media_url(pipeline_id, "video.mp4"),
"used_fallback": False,
"duration_sec": duration_sec,
"encode_duration_ms": data.get("duration_ms"),
"encoder": data.get("encoder", "h264_nvenc"),
}
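# Illustrative request/response pair (shapes taken from the code above; values
# hypothetical):
#   POST {ENCODER_URL}/encode_video
#   {"cover_path_nas": "/volume1/.../videos/3/cover.jpg", "audio_path_nas": "...",
#    "output_path_nas": "...", "resolution": "1920x1080", "duration_sec": 600,
#    "style": "essential", "background_mode": "static", "background_path_nas": null,
#    "tracks": []}
#   → 200 {"ok": true, "duration_ms": 41230, "encoder": "h264_nvenc"}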
def _build_visualizer_cmd(audio: str, bg: str, out: str, w: str, h: str) -> list:
return [
"ffmpeg", "-y",
"-loop", "1", "-i", bg,
"-i", audio,
"-filter_complex",
f"[0:v]scale={w}:{h}[bg];"
f"[1:a]showwaves=s={w}x200:mode=cline:colors=0xFF4444@0.8[wave];"
f"[bg][wave]overlay=0:({h}-200)[out]",
"-map", "[out]", "-map", "1:a",
"-c:v", "libx264", "-preset", "fast", "-crf", "23",
"-c:a", "aac", "-b:a", "192k",
"-shortest", out,
]
def _container_to_nas(container_path: str) -> str:
""" /app/data/videos/3/cover.jpg → /volume1/docker/webpage/data/videos/3/cover.jpg
/app/data/abc.mp3 → /volume1/docker/webpage/data/music/abc.mp3
"""
if not container_path:
return ""
# Strip query string (e.g., cache-buster ?v=...)
container_path = container_path.split("?", 1)[0]
if container_path.startswith("/app/data/videos/"):
return container_path.replace("/app/data/videos/", NAS_VIDEOS_ROOT + "/", 1)
if container_path.startswith("/app/data/"):
rel = container_path[len("/app/data/"):]
return NAS_MUSIC_ROOT + "/" + rel
return container_path

View File

@@ -0,0 +1,137 @@
"""장르별 음악 파라미터 랜덤 풀 — 음악적으로 어울리는 결과 유도."""
import random
POOLS = {
"lo-fi": {
"moods": ["chill", "relaxing", "dreamy", "melancholic", "mellow", "nostalgic", "peaceful"],
"instruments_pool": ["piano", "synth", "drums", "vinyl", "rhodes", "soft bass", "ambient pads"],
"instruments_count": (3, 4),
"bpm": (70, 90),
"keys": ["C", "D", "F", "G", "A"],
"scales": ["minor", "major"],
"prompt_modifiers": ["cozy bedroom vibes", "rainy night", "late night study", "cafe ambience"],
},
"phonk": {
"moods": ["dark", "aggressive", "moody", "intense", "hypnotic"],
"instruments_pool": ["808 bass", "hi-hat", "synth lead", "vocal chops", "bass drops", "trap drums"],
"instruments_count": (3, 4),
"bpm": (130, 160),
"keys": ["C", "D", "F", "G"],
"scales": ["minor"],
"prompt_modifiers": ["drift atmosphere", "dark neon", "midnight drive"],
},
"ambient": {
"moods": ["peaceful", "meditative", "ethereal", "spacious", "dreamy"],
"instruments_pool": ["pad synths", "atmospheric guitar", "soft strings", "field recordings", "drone bass"],
"instruments_count": (2, 3),
"bpm": (50, 75),
"keys": ["C", "D", "E", "G", "A"],
"scales": ["major", "minor"],
"prompt_modifiers": ["misty mountain morning", "deep space", "still water", "forest dawn"],
},
"pop": {
"moods": ["uplifting", "happy", "energetic", "romantic", "catchy"],
"instruments_pool": ["acoustic guitar", "piano", "drums", "bass", "synth", "vocals harmonies"],
"instruments_count": (3, 5),
"bpm": (95, 130),
"keys": ["C", "D", "E", "F", "G", "A"],
"scales": ["major"],
"prompt_modifiers": ["radio-ready", "summer vibe", "feel-good"],
},
"synthwave": {
"moods": ["retro", "nostalgic", "futuristic", "dreamy", "moody"],
"instruments_pool": ["synth lead", "synth bass", "drum machine", "arp synth", "pad synth", "vocoder"],
"instruments_count": (3, 4),
"bpm": (90, 120),
"keys": ["A", "D", "F", "G"],
"scales": ["minor", "major"],
"prompt_modifiers": ["80s neon city night", "retro arcade glow", "VHS aesthetic", "cyberpunk skyline"],
},
"chillhop": {
"moods": ["chill", "groovy", "warm", "nostalgic", "head-nodding"],
"instruments_pool": ["jazz piano", "bass guitar", "drum kit", "saxophone", "vinyl crackle", "rhodes", "muted trumpet"],
"instruments_count": (3, 5),
"bpm": (75, 95),
"keys": ["C", "D", "F", "G", "A"],
"scales": ["major", "minor"],
"prompt_modifiers": ["jazz bar lounge", "chill summer afternoon", "vintage warm tape"],
},
"jazz": {
"moods": ["smooth", "elegant", "moody", "warm", "sophisticated"],
"instruments_pool": ["piano", "double bass", "jazz drums", "saxophone", "trumpet", "jazz guitar"],
"instruments_count": (3, 5),
"bpm": (75, 130),
"keys": ["C", "D", "F", "G", "A"],
"scales": ["major", "minor"],
"prompt_modifiers": ["smoky jazz club", "fireplace evening", "elegant lounge", "rainy speakeasy"],
},
"hip-hop": {
"moods": ["confident", "groovy", "head-nodding", "dark", "energetic"],
"instruments_pool": ["808 bass", "trap drums", "synth lead", "vocal samples", "piano chords", "vinyl scratch", "boom bap drums"],
"instruments_count": (3, 4),
"bpm": (85, 100),
"keys": ["C", "D", "F", "G"],
"scales": ["minor"],
"prompt_modifiers": ["urban night", "boom bap classic", "street vibe", "underground"],
},
"electronic": {
"moods": ["energetic", "uplifting", "hypnotic", "futuristic", "driving"],
"instruments_pool": ["synth lead", "synth bass", "drum machine", "fx pad", "arp", "kick", "snare claps"],
"instruments_count": (3, 5),
"bpm": (110, 140),
"keys": ["A", "C", "D", "F", "G"],
"scales": ["minor", "major"],
"prompt_modifiers": ["club energy", "festival vibe", "neon dance floor"],
},
"classical": {
"moods": ["serene", "elegant", "melancholic", "majestic", "tender"],
"instruments_pool": ["piano", "strings", "cello", "violin", "harp", "flute", "oboe"],
"instruments_count": (1, 3),
"bpm": (60, 100),
"keys": ["C", "D", "E", "F", "G", "A"],
"scales": ["major", "minor"],
"prompt_modifiers": ["orchestra hall", "candlelight evening", "morning piano study", "stately concert"],
},
"funk": {
"moods": ["groovy", "funky", "energetic", "uplifting", "playful"],
"instruments_pool": ["bass guitar", "wah guitar", "horn section", "drums", "clavinet", "rhodes"],
"instruments_count": (3, 5),
"bpm": (95, 120),
"keys": ["C", "D", "E", "F", "G"],
"scales": ["major", "minor"],
"prompt_modifiers": ["70s groove", "disco funk", "soul party"],
},
"default": {
"moods": ["chill", "relaxing", "uplifting", "mellow"],
"instruments_pool": ["piano", "synth", "drums", "guitar", "bass", "strings"],
"instruments_count": (3, 4),
"bpm": (80, 110),
"keys": ["C", "D", "F", "G", "A"],
"scales": ["minor", "major"],
"prompt_modifiers": [""],
},
}
def list_genres() -> list[str]:
"""프론트에 노출할 장르 목록 — POOLS의 키 (default 제외)."""
return [g for g in POOLS.keys() if g != "default"]
def randomize(genre: str, rng=None) -> dict:
"""장르 → 랜덤 음악 파라미터 1세트.
반환: {moods, instruments, bpm, key, scale, prompt_modifier}
"""
rng = rng or random.Random()
pool = POOLS.get(genre.lower(), POOLS["default"])
n_instr = rng.randint(*pool["instruments_count"])
instruments = rng.sample(pool["instruments_pool"], min(n_instr, len(pool["instruments_pool"])))
return {
"moods": [rng.choice(pool["moods"])],
"instruments": instruments,
"bpm": rng.randint(*pool["bpm"]),
"key": rng.choice(pool["keys"]),
"scale": rng.choice(pool["scales"]),
"prompt_modifier": rng.choice(pool["prompt_modifiers"]),
}
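# Illustrative usage (seeded for reproducibility, as in the tests):
#
#     import random
#     params = randomize("lo-fi", random.Random(42))
#     # shape: {"moods": [one mood], "instruments": [3-4 items from the pool],
#     #         "bpm": 70-90, "key": one of C/D/F/G/A, "scale": "minor" or "major",
#     #         "prompt_modifier": e.g. "rainy night"}   # example values, not the actual seed output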

View File

@@ -0,0 +1,51 @@
import os
import pytest
import respx
from httpx import Response
from app.pipeline import background, storage
@pytest.fixture
def tmp_storage(monkeypatch, tmp_path):
monkeypatch.setattr(storage, "VIDEO_DATA_DIR", str(tmp_path))
return tmp_path
@pytest.mark.asyncio
@respx.mock
async def test_fetch_video_loop_success(tmp_storage, monkeypatch):
monkeypatch.setenv("PEXELS_API_KEY", "k")
video_url = "https://videos.pexels.com/video-files/123/sample.mp4"
respx.get("https://api.pexels.com/videos/search").mock(
return_value=Response(200, json={
"videos": [{
"id": 123, "duration": 10,
"video_files": [
{"quality": "hd", "width": 1920, "link": video_url},
],
}],
})
)
respx.get(video_url).mock(return_value=Response(200, content=b"\x00" * 4096))
result = await background.fetch_video_loop(pipeline_id=10, keyword="rainy window")
assert result["used_fallback"] is False
assert (tmp_storage / "10" / "loop.mp4").exists()
@pytest.mark.asyncio
async def test_fetch_video_loop_no_api_key(tmp_storage, monkeypatch):
monkeypatch.delenv("PEXELS_API_KEY", raising=False)
result = await background.fetch_video_loop(pipeline_id=11, keyword="rain")
assert result["used_fallback"] is True
@pytest.mark.asyncio
@respx.mock
async def test_fetch_video_loop_zero_results(tmp_storage, monkeypatch):
monkeypatch.setenv("PEXELS_API_KEY", "k")
respx.get("https://api.pexels.com/videos/search").mock(
return_value=Response(200, json={"videos": []})
)
result = await background.fetch_video_loop(pipeline_id=12, keyword="impossible-keyword")
assert result["used_fallback"] is True

View File

@@ -0,0 +1,96 @@
import pytest
from app import db
@pytest.fixture
def fresh_db(monkeypatch, tmp_path):
monkeypatch.setattr(db, "DB_PATH", str(tmp_path / "music.db"))
db.init_db()
return db
def test_create_batch_job(fresh_db):
bid = db.create_batch_job(genre="lo-fi", count=10)
j = db.get_batch_job(bid)
assert j["genre"] == "lo-fi"
assert j["count"] == 10
assert j["status"] == "queued"
assert j["track_ids"] == []
assert j["auto_pipeline"] == 1
assert j["target_duration_sec"] == 180
def test_create_batch_job_no_auto_pipeline(fresh_db):
bid = db.create_batch_job(genre="phonk", count=5, auto_pipeline=False)
j = db.get_batch_job(bid)
assert j["auto_pipeline"] == 0
def test_update_batch_job(fresh_db):
bid = db.create_batch_job(genre="phonk", count=5)
db.update_batch_job(bid, status="generating", current_track_index=2)
j = db.get_batch_job(bid)
assert j["status"] == "generating"
assert j["current_track_index"] == 2
def test_update_batch_rejects_unknown_col(fresh_db):
bid = db.create_batch_job(genre="lo-fi", count=1)
with pytest.raises(ValueError):
db.update_batch_job(bid, evil_col="x")
def test_append_batch_track(fresh_db):
bid = db.create_batch_job(genre="lo-fi", count=3)
db.append_batch_track(bid, 101)
db.append_batch_track(bid, 102)
j = db.get_batch_job(bid)
assert j["track_ids"] == [101, 102]
assert j["completed"] == 2
def test_list_batch_jobs_active_filter(fresh_db):
b1 = db.create_batch_job(genre="lo-fi", count=1)
b2 = db.create_batch_job(genre="phonk", count=1)
db.update_batch_job(b1, status="failed")
actives = db.list_batch_jobs(active_only=True)
assert all(j["status"] not in ("failed",) for j in actives)
assert any(j["id"] == b2 for j in actives)
assert not any(j["id"] == b1 for j in actives)
def test_random_pools_lofi():
from app.random_pools import randomize, POOLS
import random
rng = random.Random(42)
result = randomize("lo-fi", rng)
assert result["bpm"] in range(70, 91)
assert result["key"] in POOLS["lo-fi"]["keys"]
assert result["scale"] in POOLS["lo-fi"]["scales"]
assert len(result["moods"]) == 1
assert result["moods"][0] in POOLS["lo-fi"]["moods"]
assert 3 <= len(result["instruments"]) <= 4
def test_random_pools_phonk():
from app.random_pools import randomize
import random
rng = random.Random(0)
result = randomize("phonk", rng)
assert result["bpm"] in range(130, 161)
assert result["scale"] == "minor"
def test_random_pools_unknown_genre_uses_default():
from app.random_pools import randomize
import random
result = randomize("nonexistent", random.Random(0))
assert result["bpm"] in range(80, 111)
def test_random_pools_seed_reproducible():
from app.random_pools import randomize
import random
a = randomize("lo-fi", random.Random(123))
b = randomize("lo-fi", random.Random(123))
assert a == b

View File

@@ -0,0 +1,89 @@
import pytest
from unittest.mock import AsyncMock, patch
from fastapi.testclient import TestClient
import app.main as main_module
from app import db
@pytest.fixture
def client(monkeypatch, tmp_path):
monkeypatch.setattr(db, "DB_PATH", str(tmp_path / "music.db"))
db.init_db()
monkeypatch.setenv("SUNO_API_KEY", "test")
# main.py's module-level SUNO_API_KEY may also need refreshing
monkeypatch.setattr(main_module, "SUNO_API_KEY", "test", raising=False)
return TestClient(main_module.app)
def test_create_batch_201(client):
with patch.object(main_module, "_run_batch", new=AsyncMock()):
r = client.post("/api/music/generate-batch",
json={"genre": "lo-fi", "count": 3})
assert r.status_code == 201, r.text
body = r.json()
assert body["genre"] == "lo-fi"
assert body["count"] == 3
assert body["status"] == "queued"
def test_create_batch_rejects_count_too_high(client):
r = client.post("/api/music/generate-batch",
json={"genre": "lo-fi", "count": 11})
assert r.status_code == 400
def test_create_batch_rejects_count_zero(client):
r = client.post("/api/music/generate-batch",
json={"genre": "lo-fi", "count": 0})
assert r.status_code == 400
def test_create_batch_rejects_no_genre(client):
r = client.post("/api/music/generate-batch", json={"count": 3})
assert r.status_code in (400, 422)
def test_create_batch_rejects_invalid_duration(client):
r = client.post("/api/music/generate-batch",
json={"genre": "lo-fi", "count": 3, "target_duration_sec": 30})
assert r.status_code == 400
def test_create_batch_rejects_no_suno_key(client, monkeypatch):
monkeypatch.setattr(main_module, "SUNO_API_KEY", "", raising=False)
r = client.post("/api/music/generate-batch",
json={"genre": "lo-fi", "count": 3})
assert r.status_code == 400
def test_get_batch_returns_tracks(client):
bid = db.create_batch_job(genre="lo-fi", count=2)
db.append_batch_track(bid, 999) # phantom track
r = client.get(f"/api/music/generate-batch/{bid}")
assert r.status_code == 200
body = r.json()
assert body["track_ids"] == [999]
assert body["tracks"] == [] # 999 not in music_library
def test_get_batch_404(client):
r = client.get("/api/music/generate-batch/99999")
assert r.status_code == 404
def test_list_batches(client):
db.create_batch_job(genre="lo-fi", count=1)
db.create_batch_job(genre="phonk", count=2)
r = client.get("/api/music/generate-batch")
assert r.status_code == 200
assert len(r.json()["batches"]) == 2
def test_list_batches_active_filter(client):
b1 = db.create_batch_job(genre="lo-fi", count=1)
b2 = db.create_batch_job(genre="phonk", count=2)
db.update_batch_job(b1, status="failed")
r = client.get("/api/music/generate-batch?status=active")
ids = [j["id"] for j in r.json()["batches"]]
assert b2 in ids
assert b1 not in ids

View File

@@ -91,3 +91,86 @@ async def test_dalle_b64_response_handled(tmp_storage, monkeypatch):
prompt_template="x", mood="", track_title="X")
assert out["used_fallback"] is False
assert (tmp_storage / "46" / "cover.jpg").exists()
@pytest.mark.asyncio
@respx.mock
async def test_pexels_image_source(tmp_storage, monkeypatch):
monkeypatch.setenv("PEXELS_API_KEY", "test-pexels-key")
img_url = "https://images.pexels.com/photos/123/photo.jpg"
respx.get("https://api.pexels.com/v1/search").mock(
return_value=Response(200, json={
"photos": [{
"id": 123,
"src": {"large2x": img_url, "original": img_url},
}],
})
)
png_bytes = bytes.fromhex(
"89504e470d0a1a0a0000000d49484452000000010000000108020000009077"
"53de0000000c4944415478da6300010000050001"
"0d0a2db40000000049454e44ae426082"
)
respx.get(img_url).mock(return_value=Response(200, content=png_bytes))
out = await cover.generate(
pipeline_id=99, genre="lo-fi", prompt_template="ignored",
mood="chill", track_title="Mix",
image_source="pexels",
)
assert out["used_fallback"] is False
assert out["url"].endswith("/cover.jpg")
assert (tmp_storage / "99" / "cover.jpg").exists()
@pytest.mark.asyncio
async def test_pexels_no_api_key_falls_back(tmp_storage, monkeypatch):
monkeypatch.delenv("PEXELS_API_KEY", raising=False)
out = await cover.generate(
pipeline_id=98, genre="lo-fi", prompt_template="x",
mood="", track_title="Test",
image_source="pexels",
)
assert out["used_fallback"] is True
@pytest.mark.asyncio
@respx.mock
async def test_pexels_zero_results_falls_back(tmp_storage, monkeypatch):
monkeypatch.setenv("PEXELS_API_KEY", "test-key")
respx.get("https://api.pexels.com/v1/search").mock(
return_value=Response(200, json={"photos": []})
)
out = await cover.generate(
pipeline_id=97, genre="lo-fi", prompt_template="x",
mood="", track_title="Test",
image_source="pexels",
)
assert out["used_fallback"] is True
@pytest.mark.asyncio
@respx.mock
async def test_dalle_uses_background_keyword(tmp_storage, monkeypatch):
monkeypatch.setenv("OPENAI_API_KEY", "test-key")
captured = {}
def hook(req):
import json as _json
captured["body"] = _json.loads(req.content)
return Response(200, json={"data": [{"url": "https://x"}]})
respx.post("https://api.openai.com/v1/images/generations").mock(side_effect=hook)
png_bytes = bytes.fromhex(
"89504e470d0a1a0a0000000d49484452000000010000000108020000009077"
"53de0000000c4944415478da6300010000050001"
"0d0a2db40000000049454e44ae426082"
)
respx.get("https://x").mock(return_value=Response(200, content=png_bytes))
await cover.generate(
pipeline_id=80, genre="lo-fi",
prompt_template="moody anime",
mood="chill", track_title="X",
image_source="ai",
background_keyword="skateboard park bright atmosphere",
)
assert "skateboard" in captured["body"]["prompt"]
assert "bright" in captured["body"]["prompt"]

View File

@@ -80,3 +80,75 @@ async def test_metadata_falls_back_on_api_error(monkeypatch):
)
assert result["used_fallback"] is True
assert "Drive" in result["title"]
@pytest.mark.asyncio
@respx.mock
async def test_metadata_with_tracks_includes_chapter_format(monkeypatch):
monkeypatch.setenv("ANTHROPIC_API_KEY", "k")
captured = {}
def hook(req):
import json as _json
captured["body"] = _json.loads(req.content)
return Response(200, json={"content": [{"type": "text", "text":
'{"title":"Lo-Fi Mix 3 Tracks","description":"Track 1: [00:00] T1\\nTrack 2: [03:00] T2",'
'"tags":["lofi","mix"],"category_id":10}'}]})
respx.post("https://api.anthropic.com/v1/messages").mock(side_effect=hook)
result = await metadata.generate(
track={"title": "Mix", "genre": "mix", "duration_sec": 600,
"moods": []},
template={"title": "{title}", "description": "{title}",
"tags": [], "category_id": 10},
trend_keywords=[],
tracks=[
{"id": 1, "title": "T1", "start_offset_sec": 0, "duration_sec": 180},
{"id": 2, "title": "T2", "start_offset_sec": 180, "duration_sec": 200},
{"id": 3, "title": "T3", "start_offset_sec": 380, "duration_sec": 220},
],
)
body_str = str(captured["body"])
assert "T1" in body_str and "T2" in body_str and "T3" in body_str
assert "00:00" in body_str
assert result["used_fallback"] is False
@pytest.mark.asyncio
async def test_metadata_fallback_with_tracks(monkeypatch):
"""API 키 없을 때 폴백에서도 트랙 챕터 포함."""
monkeypatch.delenv("ANTHROPIC_API_KEY", raising=False)
result = await metadata.generate(
track={"title": "Mix", "genre": "mix", "duration_sec": 600, "moods": []},
template={"title": "{title}", "description": "{title}",
"tags": [], "category_id": 10},
trend_keywords=[],
tracks=[
{"id": 1, "title": "T1", "start_offset_sec": 0, "duration_sec": 180},
{"id": 2, "title": "T2", "start_offset_sec": 180, "duration_sec": 200},
],
)
assert result["used_fallback"] is True
assert "00:00" in result["description"]
assert "T1" in result["description"]
assert "T2" in result["description"]
def test_format_chapters_under_hour():
from app.pipeline.metadata import _format_chapters
out = _format_chapters([
{"start_offset_sec": 0, "title": "T1"},
{"start_offset_sec": 180, "title": "T2"},
])
assert "00:00 T1" in out
assert "03:00 T2" in out
def test_format_chapters_over_hour():
from app.pipeline.metadata import _format_chapters
out = _format_chapters([
{"start_offset_sec": 0, "title": "T1"},
{"start_offset_sec": 3700, "title": "T2"},
])
assert "00:00 T1" in out
assert "01:01:40 T2" in out

View File

@@ -0,0 +1,83 @@
import pytest
from unittest.mock import patch, MagicMock
from app.pipeline.orchestrator import _resolve_input
def test_resolve_input_track():
pipeline = {"id": 1, "track_id": 13, "compile_job_id": None}
track = {
"id": 13, "title": "Lo-Fi Drive", "genre": "lo-fi",
"moods": ["chill"], "duration_sec": 176,
"file_path": "/app/data/x.mp3", "audio_url": "/media/music/x.mp3",
}
with patch("app.pipeline.orchestrator.db.get_track_by_id", return_value=track):
result = _resolve_input(pipeline)
assert result["audio_path"] == "/app/data/x.mp3"
assert result["duration_sec"] == 176
assert len(result["tracks"]) == 1
assert result["tracks"][0]["start_offset_sec"] == 0
assert result["title"] == "Lo-Fi Drive"
assert result["genre"] == "lo-fi"
def test_resolve_input_compile_job():
pipeline = {"id": 2, "track_id": None, "compile_job_id": 5}
job = {
"id": 5, "status": "succeeded", "title": "Chill Mix",
"audio_path": "/app/data/compiles/5.mp3",
"track_ids": [13, 14, 15],
"crossfade_sec": 3,
}
tracks = {
13: {"id": 13, "title": "T1", "duration_sec": 180},
14: {"id": 14, "title": "T2", "duration_sec": 200},
15: {"id": 15, "title": "T3", "duration_sec": 150},
}
with patch("app.pipeline.orchestrator.db.get_compile_job", return_value=job), \
patch("app.pipeline.orchestrator.db.get_track_by_id", side_effect=lambda i: tracks[i]):
result = _resolve_input(pipeline)
assert result["audio_path"] == "/app/data/compiles/5.mp3"
# total = 180+200+150 - 2*3 (crossfade pair gaps) = 524
assert result["duration_sec"] == 524
assert len(result["tracks"]) == 3
assert result["tracks"][0]["start_offset_sec"] == 0
assert result["tracks"][1]["start_offset_sec"] == 177 # 180 - 3
assert result["tracks"][2]["start_offset_sec"] == 374 # 177 + 200 - 3
assert result["title"] == "Chill Mix"
assert result["genre"] == "mix"
def test_resolve_input_compile_not_ready():
pipeline = {"id": 3, "track_id": None, "compile_job_id": 6}
job = {"id": 6, "status": "rendering"}
with patch("app.pipeline.orchestrator.db.get_compile_job", return_value=job):
with pytest.raises(ValueError, match="not ready"):
_resolve_input(pipeline)
def test_resolve_input_neither():
pipeline = {"id": 4, "track_id": None, "compile_job_id": None}
with pytest.raises(ValueError):
_resolve_input(pipeline)
def test_resolve_input_compile_job_done_status():
"""compile job status='done'도 accept (production convention)."""
pipeline = {"id": 5, "track_id": None, "compile_job_id": 7}
job = {
"id": 7, "status": "done", "title": "Done Mix",
"audio_path": "/app/data/compiles/7.mp3",
"track_ids": [1], "crossfade_sec": 0,
}
track = {"id": 1, "title": "T1", "duration_sec": 100}
with patch("app.pipeline.orchestrator.db.get_compile_job", return_value=job), \
patch("app.pipeline.orchestrator.db.get_track_by_id", return_value=track):
result = _resolve_input(pipeline)
assert result["audio_path"] == "/app/data/compiles/7.mp3"
assert result["title"] == "Done Mix"
def test_local_path_strips_cache_buster():
from app.pipeline.orchestrator import _local_path
# /media/videos/3/cover.jpg?v=... → /app/data/videos/3/cover.jpg
assert _local_path("/media/videos/3/cover.jpg?v=20260510065642") == "/app/data/videos/3/cover.jpg"

View File

@@ -94,3 +94,129 @@ def test_update_pipeline_job_rejects_unknown_column(fresh_db):
job_id = db.create_pipeline_job(pid, "cover")
with pytest.raises(ValueError):
db.update_pipeline_job(job_id, evil_col="x")
def test_create_pipeline_with_compile_job(fresh_db):
pid = db.create_pipeline(track_id=None, compile_job_id=42,
visual_style="essential", background_mode="static",
background_keyword="rainy cafe")
row = db.get_pipeline(pid)
assert row["track_id"] is None
assert row["compile_job_id"] == 42
assert row["visual_style"] == "essential"
assert row["background_mode"] == "static"
assert row["background_keyword"] == "rainy cafe"
def test_create_pipeline_with_track_keeps_defaults(fresh_db):
pid = db.create_pipeline(track_id=1)
row = db.get_pipeline(pid)
assert row["track_id"] == 1
assert row["compile_job_id"] is None
assert row["visual_style"] == "essential" # default
assert row["background_mode"] == "static" # default
assert row["background_keyword"] is None
def test_create_pipeline_rejects_neither(fresh_db):
import pytest
with pytest.raises(ValueError):
db.create_pipeline()
def test_create_pipeline_rejects_both(fresh_db):
import pytest
with pytest.raises(ValueError):
db.create_pipeline(track_id=1, compile_job_id=2)
def test_migration_idempotent(monkeypatch, tmp_path):
"""init_db 두 번 호출해도 ALTER TABLE 에러 없이 통과."""
db_path = tmp_path / "music.db"
monkeypatch.setattr(db, "DB_PATH", str(db_path))
db.init_db()
db.init_db() # second call: must succeed even though the columns already exist
import sqlite3
conn = sqlite3.connect(str(db_path))
cols = [r[1] for r in conn.execute("PRAGMA table_info(video_pipelines)").fetchall()]
assert "compile_job_id" in cols
assert "visual_style" in cols
assert "background_mode" in cols
assert "background_keyword" in cols
conn.close()
def test_pipeline_response_includes_compile_title(fresh_db):
"""compile_jobs LEFT JOIN — pipeline 응답에 compile_title 포함."""
import sqlite3
conn = sqlite3.connect(db.DB_PATH)
cur = conn.cursor()
cur.execute("""CREATE TABLE IF NOT EXISTS compile_jobs (
id INTEGER PRIMARY KEY AUTOINCREMENT, title TEXT, status TEXT,
track_ids_json TEXT, crossfade_sec INTEGER, audio_path TEXT, created_at TEXT)""")
cur.execute("INSERT INTO compile_jobs (id, title, status) VALUES (1, 'My Mix', 'succeeded')")
conn.commit()
conn.close()
pid = db.create_pipeline(compile_job_id=1)
p = db.get_pipeline(pid)
assert p.get("compile_title") == "My Mix"
def test_migration_relaxes_existing_not_null_track_id(monkeypatch, tmp_path):
"""기존 production-like DB(track_id NOT NULL)를 nullable로 마이그레이션."""
db_path = tmp_path / "music.db"
monkeypatch.setattr(db, "DB_PATH", str(db_path))
# 1) Create directly with the old schema (track_id NOT NULL)
import sqlite3
conn = sqlite3.connect(str(db_path))
conn.execute("""
CREATE TABLE video_pipelines (
id INTEGER PRIMARY KEY AUTOINCREMENT,
track_id INTEGER NOT NULL,
state TEXT NOT NULL DEFAULT 'created',
state_started_at TEXT NOT NULL,
cover_url TEXT,
video_url TEXT,
thumbnail_url TEXT,
metadata_json TEXT,
review_json TEXT,
youtube_video_id TEXT,
feedback_count_per_step TEXT NOT NULL DEFAULT '{}',
last_telegram_msg_ids TEXT NOT NULL DEFAULT '{}',
created_at TEXT NOT NULL,
updated_at TEXT NOT NULL,
cancelled_at TEXT,
failed_reason TEXT
)
""")
# One row of legacy data
conn.execute("""
INSERT INTO video_pipelines (track_id, state_started_at, created_at, updated_at)
VALUES (1, '2026-05-01T00:00:00', '2026-05-01T00:00:00', '2026-05-01T00:00:00')
""")
conn.commit()
conn.close()
# 2) Run init_db (triggers the migration)
db.init_db()
# 3) Confirm the NOT NULL constraint was lifted
conn = sqlite3.connect(str(db_path))
cur = conn.cursor()
cur.execute("PRAGMA table_info(video_pipelines)")
cols = {r[1]: r[3] for r in cur.fetchall()} # name → notnull
assert cols["track_id"] == 0 # not null released
# New columns exist as well
assert "compile_job_id" in cols
assert "visual_style" in cols
# Existing data preserved
cur.execute("SELECT track_id FROM video_pipelines WHERE id=1")
assert cur.fetchone()[0] == 1
conn.close()
# 4) Confirm a compile_job_id-only INSERT works
pid = db.create_pipeline(compile_job_id=99)
p = db.get_pipeline(pid)
assert p["track_id"] is None
assert p["compile_job_id"] == 99

View File

@@ -108,3 +108,93 @@ def test_youtube_status_when_disconnected(client):
r = client.get("/api/music/youtube/status")
assert r.status_code == 200
assert r.json() == {"connected": False}
def test_create_pipeline_with_compile_job(client, monkeypatch):
import sqlite3
conn = sqlite3.connect(db.DB_PATH)
cur = conn.cursor()
try:
cur.execute("""
INSERT INTO compile_jobs (title, track_ids_json, crossfade_sec,
audio_path, status, created_at)
VALUES ('Test Mix', '[1,2,3]', 3, '/app/data/compiles/9.mp3',
'succeeded', datetime())
""")
except sqlite3.OperationalError:
pytest.skip("compile_jobs schema mismatch")
conn.commit()
cid = cur.lastrowid
conn.close()
r = client.post("/api/music/pipeline", json={"compile_job_id": cid})
assert r.status_code == 201
body = r.json()
assert body["track_id"] is None
assert body["compile_job_id"] == cid
assert body["visual_style"] == "essential"
def test_create_pipeline_rejects_both_inputs(client):
r = client.post("/api/music/pipeline", json={"track_id": 1, "compile_job_id": 1})
assert r.status_code == 400
def test_create_pipeline_rejects_neither(client):
r = client.post("/api/music/pipeline", json={})
assert r.status_code == 400
def test_create_pipeline_rejects_compile_not_ready(client):
import sqlite3
conn = sqlite3.connect(db.DB_PATH)
cur = conn.cursor()
try:
cur.execute("""
INSERT INTO compile_jobs (title, status, created_at)
VALUES ('Pending', 'rendering', datetime())
""")
except sqlite3.OperationalError:
pytest.skip("compile_jobs schema mismatch")
conn.commit()
cid = cur.lastrowid
conn.close()
r = client.post("/api/music/pipeline", json={"compile_job_id": cid})
assert r.status_code == 400
def test_create_pipeline_with_visual_options(client):
r = client.post("/api/music/pipeline", json={
"track_id": 1, "visual_style": "single",
"background_mode": "video_loop", "background_keyword": "rain",
})
assert r.status_code == 201
body = r.json()
assert body["visual_style"] == "single"
assert body["background_mode"] == "video_loop"
assert body["background_keyword"] == "rain"
def test_create_pipeline_with_done_compile_job(client):
"""compile job status='done' (production convention) — accept as ready."""
import sqlite3
conn = sqlite3.connect(db.DB_PATH)
cur = conn.cursor()
try:
cur.execute("""
INSERT INTO compile_jobs (title, track_ids, crossfade_sec,
output_path, status, created_at)
VALUES ('Done Mix', '[1,2]', 3, '/app/data/compiles/X.mp3',
'done', datetime())
""")
except sqlite3.OperationalError:
pytest.skip("compile_jobs schema mismatch")
conn.commit()
cid = cur.lastrowid
conn.close()
r = client.post("/api/music/pipeline", json={"compile_job_id": cid})
assert r.status_code == 201, r.text
body = r.json()
assert body["compile_job_id"] == cid

View File

@@ -111,3 +111,65 @@ def test_pipeline_reject_and_regenerate(client):
assert p["feedback_count_per_step"]["cover"] == 1
history = db.get_feedback_history(pid)
assert history[0]["feedback_text"] == "더 어둡게"
@patch("app.pipeline.youtube.upload_video", return_value={"video_id": "MIX_VID"})
@patch("app.pipeline.review.run_4_axis", new=AsyncMock(return_value={
"metadata_quality": {"score": 90, "notes": ""},
"policy_compliance": {"score": 95, "issues": []},
"viewer_experience": {"score": 85, "notes": ""},
"trend_alignment": {"score": 70, "matched_keywords": []},
"weighted_total": 87.0, "verdict": "pass", "summary": "ok",
"used_fallback": False,
}))
@patch("app.pipeline.metadata.generate", new=AsyncMock(return_value={
"title": "Mix", "description": "Track desc",
"tags": ["lofi"], "category_id": 10,
"used_fallback": False, "error": None,
}))
@patch("app.pipeline.thumb.generate", return_value={
"url": "/media/videos/X/thumbnail.jpg", "used_fallback": False,
})
@patch("app.pipeline.video.generate", return_value={
"url": "/media/videos/X/video.mp4", "used_fallback": False, "duration_sec": 600,
})
@patch("app.pipeline.cover.generate", new=AsyncMock(return_value={
"url": "/media/videos/X/cover.jpg", "used_fallback": False, "error": None,
}))
def test_full_pipeline_compile_job_happy_path(mock_video, mock_thumb, mock_yt, client):
# compile_job 1개 추가 (succeeded)
conn = sqlite3.connect(db.DB_PATH)
cur = conn.cursor()
try:
cur.execute("""
INSERT INTO compile_jobs (title, track_ids, crossfade_sec, output_path,
status, created_at)
VALUES ('Test Mix', '[1]', 3, '/app/data/compiles/1.mp3', 'succeeded', datetime())
""")
except sqlite3.OperationalError:
pytest.skip("compile_jobs schema mismatch — skip integration test")
conn.commit()
cid = cur.lastrowid
conn.close()
pid = client.post("/api/music/pipeline", json={"compile_job_id": cid}).json()["id"]
assert db.get_pipeline(pid)["state"] == "created"
assert db.get_pipeline(pid)["compile_job_id"] == cid
assert db.get_pipeline(pid)["track_id"] is None
client.post(f"/api/music/pipeline/{pid}/start")
p = db.get_pipeline(pid)
assert p["state"] == "cover_pending"
for step in ["cover", "video", "thumb", "meta"]:
r = client.post(f"/api/music/pipeline/{pid}/feedback",
json={"step": step, "intent": "approve"})
assert r.status_code == 202
p = db.get_pipeline(pid)
assert p["state"] == "publish_pending"
client.post(f"/api/music/pipeline/{pid}/publish")
p = db.get_pipeline(pid)
assert p["state"] == "published"
assert p["youtube_video_id"] == "MIX_VID"

View File

@@ -3,6 +3,10 @@ import pytest
from unittest.mock import patch, MagicMock
from app.pipeline import video, thumb, storage
import respx
import httpx
from httpx import Response
@pytest.fixture
def tmp_storage(monkeypatch, tmp_path):
@@ -17,31 +21,6 @@ def tmp_storage(monkeypatch, tmp_path):
return tmp_path
@patch("subprocess.run")
def test_generate_video_calls_ffmpeg(mock_run, tmp_storage):
mock_run.return_value = MagicMock(returncode=0, stderr="")
out = video.generate(pipeline_id=50, audio_path=str(tmp_storage / "audio.mp3"),
cover_path=str(tmp_storage / "50" / "cover.jpg"),
genre="lo-fi", duration_sec=120, resolution="1920x1080",
style="visualizer")
assert out["url"].endswith("/50/video.mp4")
assert out["used_fallback"] is False
args = mock_run.call_args[0][0]
assert args[0] == "ffmpeg"
assert "-i" in args
assert "showwaves" in " ".join(args)
@patch("subprocess.run")
def test_generate_video_failure_marks_failed(mock_run, tmp_storage):
mock_run.return_value = MagicMock(returncode=1, stderr="bad codec")
with pytest.raises(video.VideoGenerationError):
video.generate(pipeline_id=51, audio_path=str(tmp_storage / "audio.mp3"),
cover_path=str(tmp_storage / "50" / "cover.jpg"),
genre="lo-fi", duration_sec=120, resolution="1920x1080",
style="visualizer")
@patch("subprocess.run")
def test_thumb_extracts_frame(mock_run, tmp_storage):
mock_run.return_value = MagicMock(returncode=0, stderr="")
@@ -64,3 +43,127 @@ def test_thumb_failure_raises(mock_run, tmp_storage):
with pytest.raises(thumb.ThumbGenerationError):
thumb.generate(pipeline_id=61, video_path=str(video_path),
track_title="X", overlay_text=False)
# ===== Video tests (replacing the FFmpeg-based tests) =====
@pytest.fixture
def encoder_env(monkeypatch):
monkeypatch.setattr(video, "ENCODER_URL", "http://192.168.45.59:8765")
monkeypatch.setattr(video, "NAS_VIDEOS_ROOT", "/volume1/docker/webpage/data/videos")
monkeypatch.setattr(video, "NAS_MUSIC_ROOT", "/volume1/docker/webpage/data/music")
@respx.mock
def test_generate_video_calls_remote_encoder(encoder_env, tmp_path, monkeypatch):
monkeypatch.setattr(storage, "VIDEO_DATA_DIR", str(tmp_path))
respx.post("http://192.168.45.59:8765/encode_video").mock(
return_value=Response(200, json={
"ok": True, "duration_ms": 12000,
"output_path_nas": "/volume1/docker/webpage/data/videos/3/video.mp4",
"output_bytes": 28000000,
"encoder": "h264_nvenc", "preset": "p4",
})
)
out = video.generate(
pipeline_id=3,
audio_path="/app/data/1c695df3.mp3",
cover_path="/app/data/videos/3/cover.jpg",
genre="lo-fi", duration_sec=120, resolution="1920x1080",
style="visualizer",
)
assert out["url"].endswith("/3/video.mp4")
assert out["used_fallback"] is False
assert out["encode_duration_ms"] == 12000
@respx.mock
def test_generate_video_raises_on_connection_error(encoder_env, monkeypatch, tmp_path):
monkeypatch.setattr(storage, "VIDEO_DATA_DIR", str(tmp_path))
respx.post("http://192.168.45.59:8765/encode_video").mock(
side_effect=httpx.ConnectError("Connection refused")
)
with pytest.raises(video.VideoGenerationError) as exc:
video.generate(
pipeline_id=4,
audio_path="/app/data/x.mp3", cover_path="/app/data/videos/4/cover.jpg",
genre="lo-fi", duration_sec=120, resolution="1920x1080",
)
assert "연결 실패" in str(exc.value) or "Connection" in str(exc.value)
@respx.mock
def test_generate_video_raises_on_500(encoder_env, monkeypatch, tmp_path):
monkeypatch.setattr(storage, "VIDEO_DATA_DIR", str(tmp_path))
respx.post("http://192.168.45.59:8765/encode_video").mock(
return_value=Response(500, json={"detail": {"ok": False, "stage": "ffmpeg", "error": "bad codec"}})
)
with pytest.raises(video.VideoGenerationError) as exc:
video.generate(
pipeline_id=5,
audio_path="/app/data/x.mp3", cover_path="/app/data/videos/5/cover.jpg",
genre="lo-fi", duration_sec=120, resolution="1920x1080",
)
assert "Windows 인코더 오류" in str(exc.value)
assert "ffmpeg" in str(exc.value)
def test_generate_video_no_url_configured(monkeypatch, tmp_path):
monkeypatch.setattr(storage, "VIDEO_DATA_DIR", str(tmp_path))
monkeypatch.setattr(video, "ENCODER_URL", "")
with pytest.raises(video.VideoGenerationError) as exc:
video.generate(
pipeline_id=6,
audio_path="/app/data/x.mp3", cover_path="/app/data/videos/6/cover.jpg",
genre="lo-fi", duration_sec=120, resolution="1920x1080",
)
assert "WINDOWS_VIDEO_ENCODER_URL" in str(exc.value)
def test_container_to_nas_videos_path(monkeypatch):
monkeypatch.setattr(video, "NAS_VIDEOS_ROOT", "/volume1/docker/webpage/data/videos")
monkeypatch.setattr(video, "NAS_MUSIC_ROOT", "/volume1/docker/webpage/data/music")
assert video._container_to_nas("/app/data/videos/3/cover.jpg") == "/volume1/docker/webpage/data/videos/3/cover.jpg"
def test_container_to_nas_music_path(monkeypatch):
monkeypatch.setattr(video, "NAS_VIDEOS_ROOT", "/volume1/docker/webpage/data/videos")
monkeypatch.setattr(video, "NAS_MUSIC_ROOT", "/volume1/docker/webpage/data/music")
assert video._container_to_nas("/app/data/abc.mp3") == "/volume1/docker/webpage/data/music/abc.mp3"
def test_container_to_nas_strips_cache_buster(monkeypatch):
monkeypatch.setattr(video, "NAS_VIDEOS_ROOT", "/volume1/docker/webpage/data/videos")
monkeypatch.setattr(video, "NAS_MUSIC_ROOT", "/volume1/docker/webpage/data/music")
# cache-busted path → strip ?v=... before NAS conversion
assert video._container_to_nas("/app/data/videos/3/cover.jpg?v=20260510065642") == "/volume1/docker/webpage/data/videos/3/cover.jpg"
@respx.mock
def test_generate_video_passes_essential_params(encoder_env, tmp_path, monkeypatch):
monkeypatch.setattr(storage, "VIDEO_DATA_DIR", str(tmp_path))
captured = {}
def hook(req):
import json as _json
captured["body"] = _json.loads(req.content)
return Response(200, json={"ok": True, "duration_ms": 5000,
"output_path_nas": "/v/3/video.mp4",
"output_bytes": 10_000_000,
"encoder": "h264_nvenc", "preset": "p4"})
respx.post("http://192.168.45.59:8765/encode_video").mock(side_effect=hook)
out = video.generate(
pipeline_id=3, audio_path="/app/data/x.mp3",
cover_path="/app/data/videos/3/cover.jpg",
genre="mix", duration_sec=3600, resolution="1920x1080",
style="essential", background_mode="video_loop",
background_path="/app/data/videos/3/loop.mp4",
tracks=[{"id": 1, "title": "T1", "start_offset_sec": 0}],
)
body = captured["body"]
assert body["style"] == "essential"
assert body["background_mode"] == "video_loop"
assert body["background_path_nas"] == "/volume1/docker/webpage/data/videos/3/loop.mp4"
assert body["tracks"][0]["title"] == "T1"
assert out["url"].endswith("/3/video.mp4")

View File

@@ -55,8 +55,8 @@ def mint_upload_token(payload: dict) -> str:
return base64.urlsafe_b64encode(body).decode() + "." + sig
def verify_upload_token(token: str) -> dict:
"""업로드 토큰 검증 + jti 사용 마킹."""
def _decode_upload_token(token: str) -> dict:
"""토큰 시그니처 + 만료 + jti 존재만 검증. JTI 마킹 없음."""
try:
b64, sig = token.split(".", 1)
body = base64.urlsafe_b64decode(b64.encode())
@@ -72,13 +72,25 @@ def verify_upload_token(token: str) -> dict:
if int(time.time()) > expires_at:
raise HTTPException(status_code=401, detail="토큰 만료")
jti = payload.get("jti")
if not jti:
if not payload.get("jti"):
raise HTTPException(status_code=401, detail="jti 누락")
return payload
def verify_upload_token(token: str) -> dict:
"""업로드 토큰 검증 + jti 사용 마킹. single-shot 업로드와 chunked init에서만 사용."""
payload = _decode_upload_token(token)
jti = payload["jti"]
with _jti_lock:
if jti in _used_jti:
raise HTTPException(status_code=409, detail="이미 사용된 토큰")
_used_jti.add(jti)
return payload
def verify_upload_token_no_consume(token: str) -> dict:
"""업로드 토큰 검증만 (jti consume 없음). chunked upload chunk/complete/abort/status에 사용."""
return _decode_upload_token(token)
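
The split yields two verification paths over the same `base64url(body).signature` token: a consume-once path for single-shot upload and chunked init, and a repeatable path for every later session call. A usage sketch of the intended semantics (assuming the module imports as app.auth):

```python
import time
import uuid

from fastapi import HTTPException

from app.auth import (
    mint_upload_token,
    verify_upload_token,
    verify_upload_token_no_consume,
)

token = mint_upload_token({
    "tier": "pro", "label": "샘플", "filename": "big.zip", "size_bytes": 5,
    "jti": str(uuid.uuid4()), "expires_at": int(time.time()) + 1800,
})
verify_upload_token(token)             # init / single-shot: consumes the jti
verify_upload_token_no_consume(token)  # chunk/status/complete/abort: repeatable
try:
    verify_upload_token(token)         # replaying a consumed jti
except HTTPException as e:
    assert e.status_code == 409        # "이미 사용된 토큰"
```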

View File

@@ -4,6 +4,7 @@
- create_share_link(file_path, expires_in_sec) -> share URL
"""
import asyncio
import logging
import os
from datetime import datetime, timedelta, timezone
@@ -15,6 +16,11 @@ logger = logging.getLogger("packs-lab.dsm")
DSM_HOST = os.getenv("DSM_HOST", "") # 예: https://gahusb.synology.me:5001
DSM_USER = os.getenv("DSM_USER", "")
DSM_PASS = os.getenv("DSM_PASS", "")
# LAN IP로 DSM 접근 시 self-signed cert가 IP에 매칭 안 되어 검증 실패. LAN 내부 통신이라 false 허용.
# 운영에서 LAN IP + self-signed면 DSM_VERIFY_SSL=false. 도메인 + 정상 cert면 기본값(true) 유지.
DSM_VERIFY_SSL = os.getenv("DSM_VERIFY_SSL", "true").strip().lower() != "false"
DSM_MAX_RETRIES = max(1, int(os.getenv("DSM_MAX_RETRIES", "3")))
DSM_BACKOFF_SEC = float(os.getenv("DSM_BACKOFF_SEC", "0.5"))
API_AUTH = "/webapi/auth.cgi"
API_SHARE = "/webapi/entry.cgi"
@@ -24,13 +30,45 @@ class DSMError(RuntimeError):
pass
async def _request_with_retry(
client: httpx.AsyncClient,
url: str,
params: dict,
timeout: float,
) -> httpx.Response:
"""5xx · transport · timeout만 지수백오프 retry. 4xx와 DSM success=false는 호출자가 판단."""
last_exc: Exception | None = None
for attempt in range(DSM_MAX_RETRIES):
try:
r = await client.get(url, params=params, timeout=timeout)
if r.status_code < 500:
return r
last_exc = httpx.HTTPStatusError(
f"HTTP {r.status_code}", request=r.request, response=r
)
logger.warning(
"DSM HTTP %s — attempt %s/%s body=%s",
r.status_code, attempt + 1, DSM_MAX_RETRIES, r.text[:200],
)
except (httpx.TransportError, httpx.TimeoutException) as e:
last_exc = e
logger.warning(
"DSM transport error: %s — attempt %s/%s",
e, attempt + 1, DSM_MAX_RETRIES,
)
if attempt < DSM_MAX_RETRIES - 1:
await asyncio.sleep(DSM_BACKOFF_SEC * (2 ** attempt))
raise DSMError(f"DSM 요청 실패 (재시도 {DSM_MAX_RETRIES}회): {last_exc}")
async def _login(client: httpx.AsyncClient) -> str:
"""DSM 세션 sid 반환."""
if not all([DSM_HOST, DSM_USER, DSM_PASS]):
raise DSMError("DSM 환경변수 미설정")
r = await client.get(
r = await _request_with_retry(
client,
f"{DSM_HOST}{API_AUTH}",
params={
{
"api": "SYNO.API.Auth",
"version": "7",
"method": "login",
@@ -39,12 +77,14 @@ async def _login(client: httpx.AsyncClient) -> str:
"session": "FileStation",
"format": "sid",
},
timeout=15.0,
15.0,
)
r.raise_for_status()
data = r.json()
if not data.get("success"):
raise DSMError(f"DSM login 실패: {data.get('error')}")
err = data.get("error", {})
logger.error("DSM login 실패: code=%s error=%s", err.get("code"), err)
raise DSMError(f"DSM login 실패: code={err.get('code')} error={err}")
return data["data"]["sid"]
@@ -74,12 +114,13 @@ async def create_share_link(file_path: str, expires_in_sec: int = 14400) -> tupl
expires_at = datetime.now(timezone.utc) + timedelta(seconds=expires_in_sec)
expire_time_ms = int(expires_at.timestamp() * 1000)
async with httpx.AsyncClient(verify=True) as client:
async with httpx.AsyncClient(verify=DSM_VERIFY_SSL) as client:
sid = await _login(client)
try:
r = await client.get(
r = await _request_with_retry(
client,
f"{DSM_HOST}{API_SHARE}",
params={
{
"api": "SYNO.FileStation.Sharing",
"version": "3",
"method": "create",
@@ -87,16 +128,22 @@ async def create_share_link(file_path: str, expires_in_sec: int = 14400) -> tupl
"date_expired": expire_time_ms,
"_sid": sid,
},
timeout=15.0,
15.0,
)
r.raise_for_status()
data = r.json()
if not data.get("success"):
raise DSMError(f"DSM Sharing.create 실패: {data.get('error')}")
err = data.get("error", {})
logger.error(
"DSM Sharing.create 실패: path=%s code=%s error=%s",
file_path, err.get("code"), err,
)
raise DSMError(f"DSM Sharing.create 실패: code={err.get('code')} error={err}")
links = data["data"]["links"]
if not links:
raise DSMError("Sharing 응답에 링크 없음")
url = links[0]["url"]
logger.info("DSM share link created: path=%s", file_path)
return url, expires_at
finally:
await _logout(client, sid)
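
From the caller's side the retry layer is invisible: create_share_link still returns the URL plus its UTC expiry, and DSMError now also covers exhausted retries. A minimal usage sketch (the file path is a placeholder):

```python
import asyncio

from app.dsm_client import DSMError, create_share_link

async def main() -> None:
    try:
        url, expires_at = await create_share_link(
            "/volume1/docker/webpage/media/packs/sample.zip",  # NAS host path
            expires_in_sec=4 * 3600,
        )
        print(url, expires_at.isoformat())
    except DSMError as e:
        print(f"share link failed: {e}")

asyncio.run(main())
```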

View File

@@ -51,3 +51,16 @@ class MintTokenResponse(BaseModel):
token: str
expires_at: datetime
jti: str
class InitUploadResponse(BaseModel):
"""chunked upload 세션 초기화 응답. session_id는 mint-token의 jti와 동일."""
session_id: str
chunk_max_size: int
expected_size: int
expires_at: datetime
class ChunkUploadResponse(BaseModel):
written: int
expected_size: int

View File

@@ -2,13 +2,20 @@
- POST /api/packs/sign-link — Vercel HMAC 인증 → DSM 공유 링크
- POST /api/packs/admin/mint-token — Vercel HMAC 인증 → 일회성 upload 토큰
- POST /api/packs/upload — 일회성 토큰 인증 → multipart 저장 + supabase INSERT
- POST /api/packs/upload — 일회성 토큰 인증 → multipart 저장 + supabase INSERT (single-shot)
- POST /api/packs/upload/init — 일회성 토큰 인증 → chunked upload 세션 초기화
- PUT /api/packs/upload/{session_id}/chunk — 동일 토큰 + offset → 부분파일 append
- POST /api/packs/upload/{session_id}/complete — 동일 토큰 → 완료 + supabase INSERT
- GET /api/packs/upload/{session_id}/status — 현재 written 조회 (재개용)
- DELETE /api/packs/upload/{session_id} — 세션 중단 + 부분파일 정리
- GET /api/packs/list — Vercel HMAC 인증 → pack_files 전체 조회
- DELETE /api/packs/{file_id} — Vercel HMAC 인증 → soft delete (DSM 공유는 자동 만료)
"""
import json
import logging
import os
import re
import shutil
import time
import uuid
from datetime import datetime, timezone
@@ -17,9 +24,16 @@ from pathlib import Path
from fastapi import APIRouter, File, Header, HTTPException, Request, UploadFile
from supabase import Client, create_client
from .auth import mint_upload_token, verify_request_hmac, verify_upload_token
from .auth import (
mint_upload_token,
verify_request_hmac,
verify_upload_token,
verify_upload_token_no_consume,
)
from .dsm_client import DSMError, create_share_link
from .models import (
ChunkUploadResponse,
InitUploadResponse,
MintTokenRequest,
MintTokenResponse,
PackFileItem,
@@ -32,10 +46,60 @@ logger = logging.getLogger("packs-lab.routes")
router = APIRouter(prefix="/api/packs")
PACK_BASE_DIR = Path(os.getenv("PACK_BASE_DIR", "/app/data/packs"))
# DSM·Supabase에 노출되는 NAS 호스트 절대경로. 컨테이너 내부 PACK_BASE_DIR과 같은 디렉토리를
# 호스트 시점에서 가리켜야 한다 (docker volume 마운트의 호스트 측 경로). 미설정 시 PACK_BASE_DIR로
# fallback — 로컬 개발용. 운영 NAS에서는 반드시 PACK_HOST_DIR=/volume1/docker/webpage/media/packs.
PACK_HOST_DIR = Path(os.getenv("PACK_HOST_DIR", str(PACK_BASE_DIR)))
ALLOWED_EXT = {"pdf", "zip", "mp4", "mov", "mkv", "wav", "m4a", "mp3", "png", "jpg", "jpeg", "webp", "prj"}
MAX_BYTES = 5 * 1024 * 1024 * 1024 # 5GB
SAFE_FILENAME = re.compile(r"^[\w가-힣\-\.\(\)\s]+$")
UPLOAD_TOKEN_TTL_SEC = int(os.getenv("UPLOAD_TOKEN_TTL_SEC", "1800")) # 30분 default
CHUNK_MAX_SIZE = int(os.getenv("PACK_CHUNK_MAX_SIZE", str(64 * 1024 * 1024))) # 64MB default
SESSIONS_DIR_NAME = ".uploads"
def _sessions_root() -> Path:
return PACK_BASE_DIR / SESSIONS_DIR_NAME
def _session_dir(jti: str) -> Path:
# jti는 uuid4 형식이라 path traversal 위험 없음. 안전을 위해 추가 검증.
if not re.match(r"^[0-9a-fA-F\-]{1,64}$", jti):
raise HTTPException(status_code=400, detail="잘못된 session_id")
return _sessions_root() / jti
def _session_meta_path(jti: str) -> Path:
return _session_dir(jti) / "meta.json"
def _session_data_path(jti: str) -> Path:
return _session_dir(jti) / "data.part"
def _load_session(jti: str) -> dict:
meta_file = _session_meta_path(jti)
if not meta_file.exists():
raise HTTPException(status_code=404, detail="업로드 세션을 찾을 수 없습니다")
return json.loads(meta_file.read_text(encoding="utf-8"))
def _save_session(jti: str, meta: dict) -> None:
_session_meta_path(jti).write_text(json.dumps(meta), encoding="utf-8")
def _cleanup_session(jti: str) -> None:
shutil.rmtree(_session_dir(jti), ignore_errors=True)
def _verify_session_token(authorization: str, session_id: str) -> dict:
if not authorization.startswith("Bearer "):
raise HTTPException(status_code=401, detail="Authorization 헤더 누락")
token = authorization[len("Bearer "):]
payload = verify_upload_token_no_consume(token)
if payload.get("jti") != session_id:
raise HTTPException(status_code=403, detail="토큰과 세션 ID 불일치")
return payload
def _supabase() -> Client:
@@ -67,9 +131,10 @@ async def sign_link(
verify_request_hmac(body, x_timestamp, x_signature)
payload = SignLinkRequest.model_validate_json(body)
# 경로 안전: PACK_BASE_DIR 하위인지 확인
# 경로 안전: PACK_HOST_DIR(NAS 호스트 절대경로) 하위인지 확인.
# file_path는 upload 라우트가 Supabase에 저장한 호스트경로 그대로 전달되어 DSM API에 사용됨.
abs_path = Path(payload.file_path).resolve()
if not str(abs_path).startswith(str(PACK_BASE_DIR)):
if not str(abs_path).startswith(str(PACK_HOST_DIR)):
raise HTTPException(status_code=400, detail="허용된 경로 외부")
try:
@@ -124,56 +189,221 @@ async def upload(
filename = _check_filename(payload["filename"])
expected_size = int(payload["size_bytes"])
tier_dir = PACK_BASE_DIR / tier
tier_dir.mkdir(parents=True, exist_ok=True)
target = tier_dir / filename
# tier 디렉토리는 만들지 않고 PACK_BASE_DIR 평면 구조에 저장. tier 구분은 filename 규칙으로.
PACK_BASE_DIR.mkdir(parents=True, exist_ok=True)
target = PACK_BASE_DIR / filename
if target.exists():
raise HTTPException(status_code=409, detail="이미 존재하는 파일명입니다. 다른 이름으로 업로드하거나 기존 파일을 먼저 삭제하세요")
# multipart 스트림 저장 + 크기 검증
written = 0
with target.open("wb") as f:
while True:
chunk = await file.read(1024 * 1024)
if not chunk:
break
written += len(chunk)
if written > MAX_BYTES:
f.close()
target.unlink(missing_ok=True)
raise HTTPException(status_code=413, detail="파일 크기 5GB 초과")
f.write(chunk)
upload_committed = False
try:
# multipart 스트림 저장 + 크기 검증
written = 0
with target.open("wb") as f:
while True:
chunk = await file.read(1024 * 1024)
if not chunk:
break
written += len(chunk)
if written > MAX_BYTES:
raise HTTPException(status_code=413, detail="파일 크기 5GB 초과")
f.write(chunk)
if written != expected_size:
target.unlink(missing_ok=True)
raise HTTPException(status_code=400, detail=f"실제 크기({written})와 토큰 크기({expected_size}) 불일치")
if written != expected_size:
raise HTTPException(status_code=400, detail=f"실제 크기({written})와 토큰 크기({expected_size}) 불일치")
# supabase INSERT
# Supabase·DSM에 노출되는 file_path는 NAS 호스트 절대경로여야 한다.
# 컨테이너 경로(target)는 마운트된 호스트경로의 다른 시점일 뿐이라, 같은 디렉토리 구조를 보유.
host_path = PACK_HOST_DIR / filename
# supabase INSERT
sb = _supabase()
file_id = str(uuid.uuid4())
try:
res = sb.table("pack_files").insert({
"id": file_id,
"min_tier": tier,
"label": label,
"file_path": str(host_path),
"filename": filename,
"size_bytes": written,
}).execute()
except Exception as e:
logger.exception("Supabase INSERT 예외: filename=%s", filename)
raise HTTPException(status_code=500, detail=f"DB INSERT 실패: {e}") from e
if not res.data:
raise HTTPException(status_code=500, detail="DB INSERT 실패")
upload_committed = True
return UploadResponse(
file_id=file_id,
file_path=str(host_path),
filename=filename,
size_bytes=written,
min_tier=tier,
label=label,
uploaded_at=res.data[0]["uploaded_at"],
)
finally:
if not upload_committed and target.exists():
try:
target.unlink()
logger.warning("업로드 실패로 부분 파일 정리: %s", target)
except Exception:
logger.exception("부분 파일 정리 실패: %s", target)
# ── Chunked upload (resumable) ──────────────────────────────────────────────
# mint-token이 발급한 동일 토큰을 init → chunk* → complete 전 흐름에서 재사용한다.
# jti = session_id. init에서만 jti consume, chunk/complete/abort는 no-consume 검증.
@router.post("/upload/init", response_model=InitUploadResponse)
async def upload_init(authorization: str = Header("")):
if not authorization.startswith("Bearer "):
raise HTTPException(status_code=401, detail="Authorization 헤더 누락")
token = authorization[len("Bearer "):]
payload = verify_upload_token(token) # init만 jti consume
tier = payload["tier"]
label = payload["label"]
filename = _check_filename(payload["filename"])
expected_size = int(payload["size_bytes"])
jti = payload["jti"]
PACK_BASE_DIR.mkdir(parents=True, exist_ok=True)
if (PACK_BASE_DIR / filename).exists():
raise HTTPException(status_code=409, detail="이미 존재하는 파일명입니다")
sdir = _session_dir(jti)
if sdir.exists():
raise HTTPException(status_code=409, detail="이미 시작된 세션입니다")
sdir.mkdir(parents=True, exist_ok=True)
_session_data_path(jti).touch()
_save_session(jti, {
"filename": filename,
"expected_size": expected_size,
"tier": tier,
"label": label,
"written": 0,
"expires_at": int(payload["expires_at"]),
})
return InitUploadResponse(
session_id=jti,
chunk_max_size=CHUNK_MAX_SIZE,
expected_size=expected_size,
expires_at=datetime.fromtimestamp(payload["expires_at"], tz=timezone.utc),
)
@router.put("/upload/{session_id}/chunk", response_model=ChunkUploadResponse)
async def upload_chunk(
session_id: str,
request: Request,
offset: int = 0,
authorization: str = Header(""),
):
_verify_session_token(authorization, session_id)
meta = _load_session(session_id)
if offset != meta["written"]:
raise HTTPException(
status_code=409,
detail=f"offset {offset} 불일치 (현재 written={meta['written']})",
headers={"X-Current-Offset": str(meta["written"])},
)
body = await request.body()
if not body:
raise HTTPException(status_code=400, detail="청크가 비어 있음")
if len(body) > CHUNK_MAX_SIZE:
raise HTTPException(status_code=413, detail=f"청크 크기 {CHUNK_MAX_SIZE} 초과")
if meta["written"] + len(body) > meta["expected_size"]:
raise HTTPException(status_code=413, detail="누적 크기 expected_size 초과")
with _session_data_path(session_id).open("ab") as f:
f.write(body)
meta["written"] += len(body)
_save_session(session_id, meta)
return ChunkUploadResponse(written=meta["written"], expected_size=meta["expected_size"])
@router.get("/upload/{session_id}/status", response_model=ChunkUploadResponse)
async def upload_status(
session_id: str,
authorization: str = Header(""),
):
_verify_session_token(authorization, session_id)
meta = _load_session(session_id)
return ChunkUploadResponse(written=meta["written"], expected_size=meta["expected_size"])
@router.post("/upload/{session_id}/complete", response_model=UploadResponse)
async def upload_complete(
session_id: str,
authorization: str = Header(""),
):
_verify_session_token(authorization, session_id)
meta = _load_session(session_id)
if meta["written"] != meta["expected_size"]:
raise HTTPException(
status_code=400,
detail=f"미완료: written={meta['written']} expected={meta['expected_size']}",
)
filename = meta["filename"]
target = PACK_BASE_DIR / filename
if target.exists():
raise HTTPException(status_code=409, detail="이미 존재하는 파일명입니다")
data_file = _session_data_path(session_id)
data_file.replace(target) # atomic rename within same FS
host_path = PACK_HOST_DIR / filename
sb = _supabase()
file_id = str(uuid.uuid4())
res = sb.table("pack_files").insert({
"id": file_id,
"min_tier": tier,
"label": label,
"file_path": str(target),
"filename": filename,
"size_bytes": written,
}).execute()
try:
res = sb.table("pack_files").insert({
"id": file_id,
"min_tier": meta["tier"],
"label": meta["label"],
"file_path": str(host_path),
"filename": filename,
"size_bytes": meta["written"],
}).execute()
except Exception as e:
logger.exception("Supabase INSERT 예외 (chunked complete): filename=%s", filename)
target.unlink(missing_ok=True)
raise HTTPException(status_code=500, detail=f"DB INSERT 실패: {e}") from e
if not res.data:
target.unlink(missing_ok=True)
raise HTTPException(status_code=500, detail="DB INSERT 실패")
_cleanup_session(session_id)
return UploadResponse(
file_id=file_id,
file_path=str(target),
file_path=str(host_path),
filename=filename,
size_bytes=written,
min_tier=tier,
label=label,
size_bytes=meta["written"],
min_tier=meta["tier"],
label=meta["label"],
uploaded_at=res.data[0]["uploaded_at"],
)
@router.delete("/upload/{session_id}")
async def upload_abort(
session_id: str,
authorization: str = Header(""),
):
_verify_session_token(authorization, session_id)
_cleanup_session(session_id)
return {"ok": True}
@router.get("/list", response_model=list[PackFileItem])
async def list_files(
request: Request,

View File

@@ -8,6 +8,13 @@ import httpx
from app.dsm_client import create_share_link, DSMError
@pytest.fixture(autouse=True)
def _no_backoff(monkeypatch):
"""retry 백오프 sleep 제거 — 테스트 속도."""
from app import dsm_client
monkeypatch.setattr(dsm_client, "DSM_BACKOFF_SEC", 0.0)
@pytest.fixture(autouse=True)
def _dsm_env(monkeypatch):
monkeypatch.setenv("DSM_HOST", "https://test-nas:5001")
@@ -109,3 +116,109 @@ def test_dsm_share_failure_logs_out():
assert "login" in call_order
assert "logout" in call_order, "logout이 호출되지 않음 (finally 누락 의심)"
def test_retry_on_5xx_then_success(monkeypatch):
"""첫 호출 5xx → retry → 두 번째 200으로 성공."""
from app import dsm_client
monkeypatch.setattr(dsm_client, "DSM_MAX_RETRIES", 3)
login_calls = {"n": 0}
async def fake_get(self, url, *, params=None, **kw):
method = (params or {}).get("method", "")
if method == "login":
login_calls["n"] += 1
if login_calls["n"] == 1:
return _make_response({}, status_code=503)
return _make_response({"success": True, "data": {"sid": "sid-after-retry"}})
if method == "create":
return _make_response({
"success": True,
"data": {"links": [{"url": "https://nas/sharing/retry"}]},
})
return _make_response({"success": True})
with patch.object(httpx.AsyncClient, "get", new=fake_get):
url, _ = asyncio.run(create_share_link("/volume1/x.zip"))
assert url == "https://nas/sharing/retry"
assert login_calls["n"] == 2, "5xx 응답에 대해 retry가 동작해야 함"
def test_retry_exhausts_on_persistent_5xx(monkeypatch):
"""5xx가 MAX_RETRIES 동안 계속되면 DSMError로 raise."""
from app import dsm_client
monkeypatch.setattr(dsm_client, "DSM_MAX_RETRIES", 2)
login_calls = {"n": 0}
async def fake_get(self, url, *, params=None, **kw):
method = (params or {}).get("method", "")
if method == "login":
login_calls["n"] += 1
return _make_response({}, status_code=503)
return _make_response({"success": True})
with patch.object(httpx.AsyncClient, "get", new=fake_get):
with pytest.raises(DSMError, match="재시도"):
asyncio.run(create_share_link("/volume1/x.zip"))
assert login_calls["n"] == 2, f"MAX_RETRIES만큼 시도해야 함 (실제: {login_calls['n']})"
def test_retry_on_transport_error_then_success(monkeypatch):
"""httpx.ConnectError → retry → 성공."""
from app import dsm_client
monkeypatch.setattr(dsm_client, "DSM_MAX_RETRIES", 3)
login_calls = {"n": 0}
async def fake_get(self, url, *, params=None, **kw):
method = (params or {}).get("method", "")
if method == "login":
login_calls["n"] += 1
if login_calls["n"] == 1:
raise httpx.ConnectError("connection refused")
return _make_response({"success": True, "data": {"sid": "sid"}})
if method == "create":
return _make_response({
"success": True,
"data": {"links": [{"url": "https://nas/sharing/tr"}]},
})
return _make_response({"success": True})
with patch.object(httpx.AsyncClient, "get", new=fake_get):
url, _ = asyncio.run(create_share_link("/volume1/x.zip"))
assert url == "https://nas/sharing/tr"
assert login_calls["n"] == 2
def test_no_retry_on_4xx(monkeypatch):
"""4xx (영구 오류)는 retry 없이 즉시 raise_for_status."""
from app import dsm_client
monkeypatch.setattr(dsm_client, "DSM_MAX_RETRIES", 3)
login_calls = {"n": 0}
def _raise_4xx():
raise httpx.HTTPStatusError(
"client error",
request=MagicMock(),
response=MagicMock(status_code=403),
)
async def fake_get(self, url, *, params=None, **kw):
login_calls["n"] += 1
resp = MagicMock(spec=httpx.Response)
resp.status_code = 403
resp.json.return_value = {}
resp.raise_for_status = _raise_4xx
return resp
with patch.object(httpx.AsyncClient, "get", new=fake_get):
with pytest.raises(httpx.HTTPStatusError):
asyncio.run(create_share_link("/volume1/x.zip"))
assert login_calls["n"] == 1, "4xx는 retry 없이 즉시 raise"

View File

@@ -37,11 +37,12 @@ def test_health():
@patch("app.routes.create_share_link", new_callable=AsyncMock)
def test_sign_link_success(mock_share):
mock_share.return_value = ("https://test.synology.me:5001/d/s/abc", datetime.now(timezone.utc))
# Windows에서는 절대경로 resolve 결과가 C:\... 로 prefix되므로 PACK_BASE_DIR도 동일하게 패치
# Windows에서는 절대경로 resolve 결과가 C:\... 로 prefix되므로 PACK_HOST_DIR도 동일하게 패치
# sign-link는 PACK_HOST_DIR(NAS 호스트경로) 기준으로 검증함.
from pathlib import Path
abs_resolved = Path("/volume1/docker/webpage/media/packs/master/x.mp4").resolve()
base_resolved = Path(str(abs_resolved).rsplit("master", 1)[0].rstrip("\\/"))
with patch("app.routes.PACK_BASE_DIR", base_resolved):
with patch("app.routes.PACK_HOST_DIR", base_resolved):
body = b'{"file_path":"/volume1/docker/webpage/media/packs/master/x.mp4","expires_in_seconds":14400}'
r = client.post("/api/packs/sign-link", content=body, headers=_signed(body))
assert r.status_code == 200
@@ -159,8 +160,8 @@ def test_upload_size_mismatch(tmp_path, monkeypatch):
)
assert resp.status_code == 400
assert "크기" in resp.json()["detail"]
# 파일이 정리되었는지 확인
assert not (tmp_path / "pro" / "size_mismatch_test.zip").exists()
# 파일이 정리되었는지 확인 (평면 구조)
assert not (tmp_path / "size_mismatch_test.zip").exists()
def test_upload_jti_replay(tmp_path, monkeypatch):
@@ -245,3 +246,290 @@ def test_list_filters_deleted():
assert resp.status_code == 200
fake_supabase.table.return_value.select.return_value.is_.assert_called_with("deleted_at", "null")
def _mint(filename: str, size: int, jti: str | None = None) -> str:
return auth.mint_upload_token({
"tier": "pro",
"label": "샘플",
"filename": filename,
"size_bytes": size,
"jti": jti or str(uuid.uuid4()),
"expires_at": int(time.time()) + 1800,
})
def test_chunk_upload_full_flow(tmp_path, monkeypatch):
"""init → chunk(0) → chunk(N) → complete 정상 흐름."""
monkeypatch.setattr("app.routes.PACK_BASE_DIR", tmp_path)
from pathlib import Path
monkeypatch.setattr("app.routes.PACK_HOST_DIR", Path("/volume1/host"))
fake_supabase = MagicMock()
fake_supabase.table.return_value.insert.return_value.execute.return_value = MagicMock(
data=[{"uploaded_at": "2026-05-12T00:00:00+00:00"}]
)
payload = b"a" * 100 + b"b" * 50 # 150 bytes total
chunk1 = payload[:100]
chunk2 = payload[100:]
jti = str(uuid.uuid4())
token = _mint("chunk_full.zip", len(payload), jti=jti)
headers = {"Authorization": f"Bearer {token}"}
with patch("app.routes._supabase", return_value=fake_supabase):
test_client = TestClient(app)
# init
r = test_client.post("/api/packs/upload/init", headers=headers)
assert r.status_code == 200, r.text
sid = r.json()["session_id"]
assert sid == jti
assert r.json()["expected_size"] == 150
# chunk 1 (offset=0)
r = test_client.put(
f"/api/packs/upload/{sid}/chunk?offset=0",
content=chunk1,
headers=headers,
)
assert r.status_code == 200, r.text
assert r.json()["written"] == 100
# chunk 2 (offset=100)
r = test_client.put(
f"/api/packs/upload/{sid}/chunk?offset=100",
content=chunk2,
headers=headers,
)
assert r.status_code == 200
assert r.json()["written"] == 150
# complete
r = test_client.post(f"/api/packs/upload/{sid}/complete", headers=headers)
assert r.status_code == 200, r.text
body = r.json()
assert body["filename"] == "chunk_full.zip"
assert body["size_bytes"] == 150
assert body["file_path"] == "/volume1/host/chunk_full.zip" or body["file_path"].endswith("chunk_full.zip")
# 파일이 최종 위치로 이동했고 session은 정리됨
assert (tmp_path / "chunk_full.zip").read_bytes() == payload
assert not (tmp_path / ".uploads" / sid).exists()
def test_chunk_upload_offset_mismatch(tmp_path, monkeypatch):
"""잘못된 offset → 409 + X-Current-Offset 헤더."""
monkeypatch.setattr("app.routes.PACK_BASE_DIR", tmp_path)
jti = str(uuid.uuid4())
token = _mint("offset_mismatch.zip", 100, jti=jti)
headers = {"Authorization": f"Bearer {token}"}
test_client = TestClient(app)
r = test_client.post("/api/packs/upload/init", headers=headers)
assert r.status_code == 200
sid = r.json()["session_id"]
# 잘못된 offset (10인데 0이어야 함)
r = test_client.put(
f"/api/packs/upload/{sid}/chunk?offset=10",
content=b"x" * 10,
headers=headers,
)
assert r.status_code == 409
assert r.headers.get("X-Current-Offset") == "0"
def test_chunk_upload_status(tmp_path, monkeypatch):
"""status로 현재 written 조회."""
monkeypatch.setattr("app.routes.PACK_BASE_DIR", tmp_path)
jti = str(uuid.uuid4())
token = _mint("status_check.zip", 50, jti=jti)
headers = {"Authorization": f"Bearer {token}"}
test_client = TestClient(app)
r = test_client.post("/api/packs/upload/init", headers=headers)
sid = r.json()["session_id"]
# 빈 상태
r = test_client.get(f"/api/packs/upload/{sid}/status", headers=headers)
assert r.status_code == 200
assert r.json()["written"] == 0
assert r.json()["expected_size"] == 50
# 일부 업로드 후
test_client.put(
f"/api/packs/upload/{sid}/chunk?offset=0",
content=b"x" * 20,
headers=headers,
)
r = test_client.get(f"/api/packs/upload/{sid}/status", headers=headers)
assert r.json()["written"] == 20
def test_chunk_upload_abort(tmp_path, monkeypatch):
"""DELETE → session 디렉토리 정리."""
monkeypatch.setattr("app.routes.PACK_BASE_DIR", tmp_path)
jti = str(uuid.uuid4())
token = _mint("abort_test.zip", 30, jti=jti)
headers = {"Authorization": f"Bearer {token}"}
test_client = TestClient(app)
test_client.post("/api/packs/upload/init", headers=headers)
test_client.put(
f"/api/packs/upload/{jti}/chunk?offset=0",
content=b"y" * 10,
headers=headers,
)
assert (tmp_path / ".uploads" / jti).exists()
r = test_client.delete(f"/api/packs/upload/{jti}", headers=headers)
assert r.status_code == 200
assert not (tmp_path / ".uploads" / jti).exists()
def test_chunk_upload_wrong_token(tmp_path, monkeypatch):
"""다른 jti의 token으로 chunk 호출 → 403."""
monkeypatch.setattr("app.routes.PACK_BASE_DIR", tmp_path)
# session A 시작
jti_a = str(uuid.uuid4())
token_a = _mint("wrong_token_a.zip", 30, jti=jti_a)
headers_a = {"Authorization": f"Bearer {token_a}"}
test_client = TestClient(app)
test_client.post("/api/packs/upload/init", headers=headers_a)
# session B의 token으로 session A의 chunk 호출
jti_b = str(uuid.uuid4())
token_b = _mint("wrong_token_b.zip", 30, jti=jti_b)
headers_b = {"Authorization": f"Bearer {token_b}"}
r = test_client.put(
f"/api/packs/upload/{jti_a}/chunk?offset=0",
content=b"z" * 10,
headers=headers_b,
)
assert r.status_code == 403
def test_chunk_upload_complete_incomplete(tmp_path, monkeypatch):
"""expected_size 미달 상태에서 complete 호출 → 400."""
monkeypatch.setattr("app.routes.PACK_BASE_DIR", tmp_path)
jti = str(uuid.uuid4())
token = _mint("incomplete.zip", 100, jti=jti)
headers = {"Authorization": f"Bearer {token}"}
test_client = TestClient(app)
test_client.post("/api/packs/upload/init", headers=headers)
test_client.put(
f"/api/packs/upload/{jti}/chunk?offset=0",
content=b"q" * 50,
headers=headers,
)
r = test_client.post(f"/api/packs/upload/{jti}/complete", headers=headers)
assert r.status_code == 400
assert "미완료" in r.json()["detail"]
def test_chunk_init_filename_collision(tmp_path, monkeypatch):
"""init 시 동일 파일명이 PACK_BASE_DIR에 이미 있으면 409."""
monkeypatch.setattr("app.routes.PACK_BASE_DIR", tmp_path)
(tmp_path / "existing.zip").write_bytes(b"already here")
token = _mint("existing.zip", 100)
r = TestClient(app).post(
"/api/packs/upload/init",
headers={"Authorization": f"Bearer {token}"},
)
assert r.status_code == 409
def test_chunk_upload_stores_host_path(tmp_path, monkeypatch):
"""complete 시 Supabase에 저장되는 file_path는 PACK_HOST_DIR 기준."""
from pathlib import Path
container_base = tmp_path / "container"
host_base = Path("/volume1/host/packs")
monkeypatch.setattr("app.routes.PACK_BASE_DIR", container_base)
monkeypatch.setattr("app.routes.PACK_HOST_DIR", host_base)
captured = {}
fake_supabase = MagicMock()
def capture_insert(payload):
captured.update(payload)
m = MagicMock()
m.execute.return_value = MagicMock(data=[{"uploaded_at": "2026-05-12T00:00:00+00:00"}])
return m
fake_supabase.table.return_value.insert.side_effect = capture_insert
jti = str(uuid.uuid4())
token = _mint("hostpath_chunk.zip", 5, jti=jti)
headers = {"Authorization": f"Bearer {token}"}
with patch("app.routes._supabase", return_value=fake_supabase):
c = TestClient(app)
c.post("/api/packs/upload/init", headers=headers)
c.put(f"/api/packs/upload/{jti}/chunk?offset=0", content=b"hello", headers=headers)
r = c.post(f"/api/packs/upload/{jti}/complete", headers=headers)
assert r.status_code == 200
assert captured["file_path"] == str(host_base / "hostpath_chunk.zip")
def test_upload_stores_host_path_not_container_path(tmp_path, monkeypatch):
"""upload 시 Supabase에 저장되는 file_path는 PACK_BASE_DIR(컨테이너) 가 아닌 PACK_HOST_DIR(NAS 호스트) 절대경로여야 한다.
DSM API는 NAS 호스트 절대경로 기준이라 컨테이너 내부 경로(/app/data/packs/...)를
Supabase에 저장하면 sign-link 시 DSM이 파일을 못 찾는다.
"""
from pathlib import Path
container_base = tmp_path / "container"
host_base = Path("/volume1/docker/webpage/media/packs")
monkeypatch.setattr("app.routes.PACK_BASE_DIR", container_base)
monkeypatch.setattr("app.routes.PACK_HOST_DIR", host_base)
captured_insert = {}
fake_supabase = MagicMock()
def capture_insert(payload):
captured_insert.update(payload)
m = MagicMock()
m.execute.return_value = MagicMock(data=[{"uploaded_at": "2026-05-11T00:00:00+00:00"}])
return m
fake_supabase.table.return_value.insert.side_effect = capture_insert
token = auth.mint_upload_token({
"tier": "pro",
"label": "샘플",
"filename": "host_path_check.zip",
"size_bytes": 5,
"jti": str(uuid.uuid4()),
"expires_at": int(time.time()) + 1800,
})
with patch("app.routes._supabase", return_value=fake_supabase):
test_client = TestClient(app)
resp = test_client.post(
"/api/packs/upload",
files={"file": ("host_path_check.zip", b"hello")},
headers={"Authorization": f"Bearer {token}"},
)
assert resp.status_code == 200
# Supabase에 저장된 file_path는 호스트 경로
expected_host = str(host_base / "host_path_check.zip")
assert captured_insert["file_path"] == expected_host
# 응답의 file_path도 호스트 경로
assert resp.json()["file_path"] == expected_host
# 컨테이너 경로(tmp_path 하위)와 다름
assert str(container_base) not in captured_insert["file_path"]

View File

@@ -2,7 +2,7 @@
set -euo pipefail
# ── 서비스 목록 (한 곳에서만 관리) ──
SERVICES="lotto travel-proxy deployer stock-lab music-lab blog-lab realestate-lab agent-office personal nginx scripts"
SERVICES="lotto travel-proxy deployer stock-lab music-lab blog-lab realestate-lab agent-office personal packs-lab nginx scripts"
# 1. 자동 감지: Docker 컨테이너 내부인가?
if [ -d "/repo" ] && [ -d "/runtime" ]; then

View File

@@ -7,12 +7,12 @@ flock -n 200 || { echo "Deploy already running, skipping"; exit 0; }
# ── 서비스 목록 (한 곳에서만 관리) ──
# docker compose 서비스명 (deployer 제외 — 자기 자신을 재빌드하면 스크립트 중단)
BUILD_TARGETS="lotto travel-proxy stock-lab music-lab blog-lab realestate-lab agent-office personal frontend"
BUILD_TARGETS="lotto travel-proxy stock-lab music-lab blog-lab realestate-lab agent-office personal packs-lab frontend"
# 컨테이너 이름 (고아 정리용)
CONTAINER_NAMES="lotto stock-lab music-lab blog-lab realestate-lab agent-office personal travel-proxy frontend"
CONTAINER_NAMES="lotto stock-lab music-lab blog-lab realestate-lab agent-office personal packs-lab travel-proxy frontend"
# 헬스체크 대상
HEALTH_ENDPOINTS="lotto stock-lab travel-proxy music-lab blog-lab realestate-lab agent-office personal"
# data 디렉토리
HEALTH_ENDPOINTS="lotto stock-lab travel-proxy music-lab blog-lab realestate-lab agent-office personal packs-lab"
# data 디렉토리 (packs-lab은 별도 media/packs 사용)
DATA_DIRS="music stock blog realestate agent-office personal"
# 1. 자동 감지: Docker 컨테이너 내부인가?
@@ -75,6 +75,10 @@ for d in $DATA_DIRS; do
mkdir -p "$DST/data/$d"
done
# packs-lab media 디렉토리 (DSM 공유 + admin upload target)
mkdir -p "$DST/media/packs"
chown "${DEPLOY_UID}:${DEPLOY_GID}" "$DST/media/packs" 2>/dev/null || true
# ── 서비스 재빌드 (deployer 제외) ──
cd "$DST"

stock-lab/API_SPEC.md Normal file
View File

@@ -0,0 +1,240 @@
# 📈 Stock Lab API Specification
프론트엔드 연동을 위한 주식 서비스 API 명세서입니다.
**Base URL**: `/api`
---
## 1. 💰 계좌 잔고 조회
현재 연결된 한국투자증권 계좌의 잔고와 보유 종목을 조회합니다.
- **URL**: `/trade/balance`
- **Method**: `GET`
- **Description**: Windows AI Server를 통해 실시간 잔고를 가져옵니다.
### Response Example
```json
{
"holdings": [
{
"code": "005930",
"name": "삼성전자",
"qty": 10,
"buy_price": 72000.0,
"current_price": 74500.0,
"profit_rate": 3.47
}
],
"summary": {
"total_eval": 15400000,
"deposit": 5000000,
"note": "정상 조회됨"
}
}
```
---
## 2. 🤖 AI 자동 매매 (분석/주문)
AI에게 현재 잔고와 뉴스를 기반으로 매매 판단을 요청합니다.
- **URL**: `/trade/auto`
- **Method**: `POST`
- **Description**: 분석에는 수 초~수십 초가 소요될 수 있습니다. (타임아웃 주의)
### Response Example (성공 - JSON 파싱 완료)
```json
{
"status": "success",
"decision": {
"action": "BUY",
"ticker": "000660",
"quantity": 10,
"reason": "반도체 업황 개선 뉴스 다수 포착 및 현금 비중 과다"
},
"trade_result": {
"success": true,
"order_no": "1234567"
}
}
```
### Response Example (실패 - AI가 JSON을 안 줬을 때)
AI가 말로 설명하느라 JSON 포맷을 어긴 경우입니다. `raw_response`를 화면에 그대로 보여주는 것을 권장합니다.
```json
{
"status": "failed_parse",
"raw_response": "잔고 현황을 분석해 보겠습니다...\n결정:\n```\n{\n ... \n}\n```"
}
```
**Frontend 처리 권장사항**: `status`가 `failed_parse`라면 `raw_response` 텍스트를 `pre` 태그 등으로 그대로 노출하거나, 정규식으로 JSON 부분만 추출하여 보여주세요.
---
## 3. 📰 뉴스 조회
DB에 저장된 최신 뉴스를 조회합니다.
- **URL**: `/stock/news`
- **Method**: `GET`
- **Params**:
- `limit`: 개수 (기본 20)
- `category`: `domestic` (국내) | `overseas` (해외)
### Response Example
```json
[
{
"id": 105,
"title": "삼성전자, 3분기 영업익 2.4조... 전년비 77% 감소",
"link": "https://n.news.naver.com/...",
"published_at": "2024-09-25T09:00:00",
"sentiment": "negative"
}
]
```
---
## 4. 📊 지수 조회
KOSPI, KOSDAQ 등 주요 지수를 조회합니다.
- **URL**: `/stock/indices`
- **Method**: `GET`
### Response Example
```json
{
"KOSPI": {
"value": "2450.55",
"change": "-10.23",
"percent": "-0.42%"
},
"USD/KRW": {
"value": "1340.50",
"change": "5.00",
"percent": "0.37%"
}
}
```
---
## 5. 📂 포트폴리오 (수동 입력)
KB증권·삼성증권 등 Open API 미제공 증권사용.
보유 종목을 수동 등록하면 **현재가는 네이버 금융에서 자동 조회** (3분 캐시)하여 손익을 계산해 반환합니다.
---
### 5-1. 전체 조회
- **URL**: `GET /portfolio`
- **Description**: 등록된 모든 종목의 현재가·평가금액·손익을 포함하여 반환합니다.
#### Response
```json
{
"holdings": [
{
"id": 1,
"broker": "KB증권",
"ticker": "005930",
"name": "삼성전자",
"quantity": 100,
"avg_price": 72000,
"current_price": 74500,
"price_session": "NXT_AFTER",
"price_as_of": "2026-05-11T19:21:40+09:00",
"eval_amount": 7450000,
"profit_amount": 250000,
"profit_rate": 3.47
}
],
"summary": {
"total_buy": 7200000,
"total_eval": 7450000,
"total_profit": 250000,
"total_profit_rate": 3.47
}
}
```
> **주의**: 현재가 조회에 실패한 종목은 `current_price`, `eval_amount`, `profit_amount`, `profit_rate` 가 `null`로 반환됩니다.
> 프론트에서 `null` 체크 후 `"조회 실패"` 등으로 표시해 주세요.
> **현재가 출처(`price_session`)**: 정규장 마감 후 NXT 시간외 거래가 진행 중이면 NXT 가격으로 자동 전환됩니다.
> - `REGULAR` — KRX 정규장 진행중(09:00~15:30) 실시간 가격
> - `NXT_PRE` — NXT 프리마켓(08:00~08:50) 거래가
> - `NXT_AFTER` — NXT 애프터마켓(15:30~20:00) 거래가
> - `CLOSED` — 모든 세션 마감, 정규장 종가 노출
>
> `price_as_of`는 가격이 마지막으로 형성된 시각(ISO 8601, KST). HTML 폴백 경로에서는 `null`일 수 있음.
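
A consumer-side sketch of the null handling recommended above (host URL is a placeholder):

```python
import requests

resp = requests.get("https://example.invalid/api/portfolio", timeout=10)
resp.raise_for_status()
for h in resp.json()["holdings"]:
    if h["current_price"] is None:
        print(f"{h['name']}: 조회 실패")  # price lookup failed, render a fallback label
    else:
        print(f"{h['name']}: {h['current_price']:,}원 "
              f"({h['profit_rate']}%, {h['price_session']})")
```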
---
### 5-2. 종목 추가
- **URL**: `POST /portfolio`
- **Status**: `201 Created`
#### Request Body
```json
{
"broker": "KB증권",
"ticker": "005930",
"name": "삼성전자",
"quantity": 100,
"avg_price": 72000
}
```
| 필드 | 타입 | 설명 |
|------|------|------|
| `broker` | string | 증권사명 (자유 입력) |
| `ticker` | string | 종목 코드 6자리 |
| `name` | string | 종목명 |
| `quantity` | integer | 보유 수량 |
| `avg_price` | integer | 평균 매입가 (원) |
#### Response
```json
{ "id": 1, "ok": true }
```
---
### 5-3. 종목 수정
- **URL**: `PUT /portfolio/{id}`
- **Description**: 변경할 필드만 포함하면 됩니다 (부분 수정).
#### Request Body (모든 필드 Optional)
```json
{ "quantity": 150 }
```
#### Response
```json
{ "ok": true }
```
#### Error (존재하지 않는 id)
```json
{ "error": "Item not found" } // HTTP 404
```
---
### 5-4. 종목 삭제
- **URL**: `DELETE /portfolio/{id}`
#### Response
```json
{ "ok": true }
```
#### Error (존재하지 않는 id)
```json
{ "error": "Item not found" } // HTTP 404
```

View File

@@ -3,11 +3,16 @@ import os
import hashlib
from typing import List, Dict, Any, Optional
DB_PATH = "/app/data/stock.db"
from app.screener.schema import ensure_screener_schema
DB_PATH = os.environ.get("STOCK_DB_PATH", "/app/data/stock.db")
def _conn() -> sqlite3.Connection:
os.makedirs(os.path.dirname(DB_PATH), exist_ok=True)
conn = sqlite3.connect(DB_PATH)
db_path = os.environ.get("STOCK_DB_PATH", DB_PATH)
parent = os.path.dirname(db_path)
if parent:
os.makedirs(parent, exist_ok=True)
conn = sqlite3.connect(db_path)
conn.row_factory = sqlite3.Row
return conn
@@ -96,6 +101,9 @@ def init_db():
if "commission" not in sh_cols:
conn.execute("ALTER TABLE sell_history ADD COLUMN commission REAL NOT NULL DEFAULT 0")
# Screener 스키마 부트스트랩 (7테이블 + 디폴트 설정 시드)
ensure_screener_schema(conn)
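
Since init_db runs on every boot, ensure_screener_schema has to be idempotent. A hedged sketch of the pattern it presumably follows; only one of the seven tables is shown, the column names are illustrative, and the real DDL lives in app/screener/schema.py:

```python
import sqlite3

def ensure_screener_schema(conn: sqlite3.Connection) -> None:
    """Idempotent bootstrap: CREATE TABLE IF NOT EXISTS plus a seed-once settings row."""
    conn.execute("""
        CREATE TABLE IF NOT EXISTS screener_settings (
            id INTEGER PRIMARY KEY CHECK (id = 1),  -- singleton row
            weights_json TEXT NOT NULL,
            node_params_json TEXT NOT NULL,
            gate_params_json TEXT NOT NULL
        )
    """)
    # INSERT OR IGNORE makes the default seed a no-op on every later call.
    conn.execute("""
        INSERT OR IGNORE INTO screener_settings
            (id, weights_json, node_params_json, gate_params_json)
        VALUES (1, '{}', '{}', '{}')
    """)
```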
def save_articles(articles: List[Dict[str, str]]) -> int:
count = 0
with _conn() as conn:

View File

@@ -22,11 +22,15 @@ from .db import (
add_sell_history, get_sell_history, update_sell_history, delete_sell_history,
)
from .scraper import fetch_market_news, fetch_major_indices
from .price_fetcher import get_current_prices
from .price_fetcher import get_current_prices, get_current_prices_detail
from .ai_summarizer import summarize_news, OllamaError
app = FastAPI()
# Screener 라우터 등록
from app.screener.router import router as screener_router
app.include_router(screener_router)
# CORS 설정 (프론트엔드 접근 허용)
_cors_origins = os.getenv("CORS_ALLOW_ORIGINS", "http://localhost:3007,http://localhost:8080").split(",")
app.add_middleware(
@@ -319,7 +323,7 @@ def get_portfolio():
}
tickers = list({item["ticker"] for item in items})
prices = get_current_prices(tickers)
details = get_current_prices_detail(tickers)
holdings = []
total_buy = 0 # 요약 표시용 (purchase_price 기반)
@@ -327,7 +331,10 @@ def get_portfolio():
total_eval = 0
for item in items:
current_price = prices.get(item["ticker"])
detail = details.get(item["ticker"])
current_price = detail["price"] if detail else None
price_session = detail["session"] if detail else None
price_as_of = detail["as_of"] if detail else None
# avg_price: 평균단가 — 손익(평가금액 - 매입원가) 계산 기준
# purchase_price: 매입가 — 총 매입 금액 표시 기준 (없으면 avg_price로 폴백)
purchase_price = item.get("purchase_price") if item.get("purchase_price") is not None else item["avg_price"]
@@ -347,6 +354,8 @@ def get_portfolio():
"avg_price": item["avg_price"],
"purchase_price": purchase_price,
"current_price": current_price,
"price_session": price_session,
"price_as_of": price_as_of,
"eval_amount": eval_amount,
"profit_amount": profit_amount,
"profit_rate": profit_rate,

View File

@@ -3,7 +3,8 @@ import requests
from bs4 import BeautifulSoup
from typing import Optional
_cache: dict[str, tuple[Optional[int], float]] = {} # ticker -> (price, timestamp)
# 캐시는 detail 단위(가격+세션+as_of)로 보관. 호환용 단순 가격은 여기서 추출.
_cache: dict[str, tuple[Optional[dict], float]] = {} # ticker -> (detail | None, timestamp)
_CACHE_TTL = 180 # 3분
_HEADERS = {
@@ -15,22 +16,74 @@ _HEADERS = {
}
def _fetch_from_mobile_api(ticker: str) -> Optional[int]:
"""네이버 모바일 주식 API로 현재가 조회"""
def _parse_price_str(value) -> Optional[int]:
if value is None:
return None
s = str(value).replace(",", "").strip()
if not s:
return None
# 음수/소수점도 일단 정수로 절삭 (국내 주식 가격은 정수)
try:
return int(float(s))
except ValueError:
return None
def _select_price_from_response(payload: dict) -> dict:
"""네이버 모바일 주식 API 응답 dict에서 (price, session, as_of)를 결정.
세션 분류:
- "REGULAR" : 정규장(KRX) 운영중 — closePrice가 실시간
- "NXT_PRE" : 정규장 마감 + NXT 프리마켓 운영중 → overPrice 사용
- "NXT_AFTER" : 정규장 마감 + NXT 애프터마켓 운영중 → overPrice 사용
- "CLOSED" : 정규장 마감 + NXT 비운영/거래중지 → closePrice 사용
반환 dict: {"price": int | None, "session": str, "as_of": str | None}
"""
close_price = _parse_price_str(payload.get("closePrice") or payload.get("stockEndPrice"))
top_as_of = payload.get("localTradedAt")
market_status = (payload.get("marketStatus") or "").upper()
if market_status == "OPEN":
return {"price": close_price, "session": "REGULAR", "as_of": top_as_of}
over = payload.get("overMarketPriceInfo")
if isinstance(over, dict):
over_status = (over.get("overMarketStatus") or "").upper()
trade_stop_name = ((over.get("tradeStopType") or {}).get("name") or "").upper()
if over_status == "OPEN" and trade_stop_name == "TRADING":
over_price = _parse_price_str(over.get("overPrice"))
if over_price is not None:
session_type = (over.get("tradingSessionType") or "").upper()
if session_type == "PRE_MARKET":
session = "NXT_PRE"
elif session_type == "AFTER_MARKET":
session = "NXT_AFTER"
else:
# 알 수 없는 NXT 세션은 보수적으로 AFTER 취급
session = "NXT_AFTER"
return {
"price": over_price,
"session": session,
"as_of": over.get("localTradedAt") or top_as_of,
}
return {"price": close_price, "session": "CLOSED", "as_of": top_as_of}
def _fetch_mobile_api_payload(ticker: str) -> Optional[dict]:
"""네이버 모바일 주식 API 응답 dict 반환."""
url = f"https://m.stock.naver.com/api/stock/{ticker}/basic"
try:
resp = requests.get(url, headers=_HEADERS, timeout=5)
resp.raise_for_status()
data = resp.json()
price_str = data.get("closePrice") or data.get("stockEndPrice") or ""
price_str = str(price_str).replace(",", "").strip()
return int(price_str) if price_str.isdigit() else None
return resp.json()
except Exception:
return None
def _fetch_from_html_fallback(ticker: str) -> Optional[int]:
"""네이버 금융 HTML 폴백 (.no_today .blind 파싱)"""
def _fetch_close_price_from_html(ticker: str) -> Optional[int]:
"""네이버 금융 HTML 폴백 (정규장 종가만 가능, NXT 정보 없음)."""
url = f"https://finance.naver.com/item/main.naver?code={ticker}"
try:
resp = requests.get(url, headers=_HEADERS, timeout=5)
@@ -38,31 +91,49 @@ def _fetch_from_html_fallback(ticker: str) -> Optional[int]:
soup = BeautifulSoup(resp.content, "html.parser", from_encoding="cp949")
tag = soup.select_one(".no_today .blind")
if tag:
price_str = tag.get_text(strip=True).replace(",", "")
return int(price_str) if price_str.isdigit() else None
return _parse_price_str(tag.get_text(strip=True))
return None
except Exception:
return None
def get_current_price(ticker: str) -> Optional[int]:
"""단건 현재가 조회 (3분 캐시)"""
def get_current_price_info(ticker: str) -> Optional[dict]:
"""단건 상세 가격 정보 조회 (3분 캐시).
반환: {"price": int | None, "session": str, "as_of": str | None} | None
"""
now = time.time()
cached = _cache.get(ticker)
if cached and (now - cached[1]) < _CACHE_TTL:
return cached[0]
price = _fetch_from_mobile_api(ticker)
if price is None:
price = _fetch_from_html_fallback(ticker)
detail: Optional[dict] = None
payload = _fetch_mobile_api_payload(ticker)
if isinstance(payload, dict):
detail = _select_price_from_response(payload)
if detail.get("price") is None:
detail = None # 폴백 시도
_cache[ticker] = (price, now)
return price
if detail is None:
fallback_price = _fetch_close_price_from_html(ticker)
if fallback_price is not None:
detail = {"price": fallback_price, "session": "CLOSED", "as_of": None}
_cache[ticker] = (detail, now)
return detail
def get_current_prices_detail(tickers: list[str]) -> dict[str, Optional[dict]]:
"""배치 상세 가격 조회 (캐시 미스 종목만 실제 호출)."""
return {ticker: get_current_price_info(ticker) for ticker in tickers}
def get_current_price(ticker: str) -> Optional[int]:
"""단건 현재가 조회 — 호환용. detail에서 price만 추출."""
detail = get_current_price_info(ticker)
return detail["price"] if detail else None
def get_current_prices(tickers: list[str]) -> dict[str, Optional[int]]:
"""배치 현재가 조회 (캐시 미스 종목만 실제 호출)"""
result: dict[str, Optional[int]] = {}
for ticker in tickers:
result[ticker] = get_current_price(ticker)
return result
"""배치 현재가 조회 — 호환용."""
return {ticker: get_current_price(ticker) for ticker in tickers}

View File

@@ -0,0 +1,12 @@
"""Stock screener — KRX 강세주 분석 노드 기반 보드.
See docs/superpowers/specs/2026-05-12-stock-screener-board-design.md
"""
from .engine import Screener, ScreenContext, ScreenerResult
from .registry import NODE_REGISTRY, GATE_REGISTRY
__all__ = [
"Screener", "ScreenContext", "ScreenerResult",
"NODE_REGISTRY", "GATE_REGISTRY",
]

View File

@@ -0,0 +1,76 @@
"""Synthetic fixtures for screener tests — no DB / no FDR / no naver."""
import datetime as dt
import pandas as pd
def make_master(tickers: list[str], market_caps: dict | None = None,
preferred: set | None = None, managed: set | None = None) -> pd.DataFrame:
market_caps = market_caps or {t: 100_000_000_000 for t in tickers}
preferred = preferred or set()
managed = managed or set()
return pd.DataFrame([
{
"ticker": t,
"name": f"테스트{t}",
"market": "KOSPI",
"market_cap": market_caps.get(t),
"is_managed": int(t in managed),
"is_preferred": int(t in preferred),
"is_spac": 0,
"listed_date": None,
}
for t in tickers
]).set_index("ticker")
def make_prices(tickers: list[str], days: int = 260, start_close: int = 50000,
trend_pct: float = 0.0,
asof: dt.date = dt.date(2026, 5, 12)) -> pd.DataFrame:
"""trend_pct: 일별 종가 등락률(%). 양수면 상승 추세."""
rows = []
for t in tickers:
close = start_close
for i in range(days):
day_idx = days - 1 - i # asof가 마지막
date = asof - dt.timedelta(days=day_idx)
high = int(close * 1.012)
low = int(close * 0.988)
rows.append({
"ticker": t, "date": date.isoformat(),
"open": close, "high": high, "low": low, "close": close,
"volume": 1_000_000, "value": close * 1_000_000,
})
close = int(close * (1 + trend_pct / 100))
return pd.DataFrame(rows)
def make_flow(tickers: list[str], days: int = 260,
foreign_per_day: dict | None = None,
asof: dt.date = dt.date(2026, 5, 12)) -> pd.DataFrame:
foreign_per_day = foreign_per_day or {t: 0 for t in tickers}
rows = []
for t in tickers:
for i in range(days):
day_idx = days - 1 - i
date = asof - dt.timedelta(days=day_idx)
rows.append({
"ticker": t, "date": date.isoformat(),
"foreign_net": foreign_per_day.get(t, 0),
"institution_net": 0,
})
return pd.DataFrame(rows)
def make_kospi(days: int = 260, start: int = 2500, trend_pct: float = 0.0,
asof: dt.date = dt.date(2026, 5, 12)) -> pd.Series:
values = []
dates = []
v = start
for i in range(days):
day_idx = days - 1 - i
d = asof - dt.timedelta(days=day_idx)
dates.append(d.isoformat())
values.append(v)
v = v * (1 + trend_pct / 100)
return pd.Series(values, index=dates, name="kospi")
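
A quick sanity check of the fixture shapes — a sketch, assuming the `app.screener._test_fixtures` import path used by the tests further down:

from app.screener._test_fixtures import make_prices

# 1% daily uptrend: rows come back in ascending date order, compounding the close.
df = make_prices(["005930"], days=10, start_close=50000, trend_pct=1.0)
closes = df["close"].tolist()
assert closes[0] == 50000 and closes[-1] > closes[0]  # last close ≈ 54,6xx after 9 steps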

View File

@@ -0,0 +1,161 @@
"""Screener engine — ScreenContext (Phase 0) + Screener/combine (Phase 2)."""
from __future__ import annotations
import datetime as dt
import sqlite3
from dataclasses import dataclass, replace
import pandas as pd
@dataclass(frozen=True)
class ScreenContext:
"""1회 실행 동안 공유되는 읽기 전용 데이터 컨테이너."""
master: pd.DataFrame # index=ticker
prices: pd.DataFrame # cols: ticker,date,open,high,low,close,volume,value
flow: pd.DataFrame # cols: ticker,date,foreign_net,institution_net
kospi: pd.Series # index=date(str), name="kospi"
asof: dt.date
@classmethod
def load(cls, conn: sqlite3.Connection, asof: dt.date,
lookback_days: int = 252 * 2) -> "ScreenContext":
cutoff = (asof - dt.timedelta(days=int(lookback_days * 1.5))).isoformat()
asof_iso = asof.isoformat()
master = pd.read_sql_query(
"SELECT * FROM krx_master",
conn, index_col="ticker",
)
prices = pd.read_sql_query(
"SELECT ticker,date,open,high,low,close,volume,value "
"FROM krx_daily_prices WHERE date BETWEEN ? AND ? ORDER BY date",
conn, params=(cutoff, asof_iso),
)
flow = pd.read_sql_query(
"SELECT ticker,date,foreign_net,institution_net "
"FROM krx_flow WHERE date BETWEEN ? AND ? ORDER BY date",
conn, params=(cutoff, asof_iso),
)
        # KOSPI index: the MVP uses the 005930 (Samsung Electronics) close as a
        # market proxy; a later slice will cache ^KS11 separately.
kospi = pd.Series(dtype=float, name="kospi")
if "005930" in master.index and not prices.empty:
sub = prices[prices["ticker"] == "005930"].set_index("date")["close"]
kospi = sub.copy()
kospi.name = "kospi"
return cls(master=master, prices=prices, flow=flow, kospi=kospi, asof=asof)
def restrict(self, tickers) -> "ScreenContext":
tickers = pd.Index(tickers)
return replace(
self,
master=self.master.loc[self.master.index.intersection(tickers)],
prices=self.prices[self.prices["ticker"].isin(tickers)],
flow=self.flow[self.flow["ticker"].isin(tickers)],
)
def latest_close(self) -> pd.Series:
if self.prices.empty:
return pd.Series(dtype=float)
return self.prices.sort_values("date").groupby("ticker")["close"].last()
def latest_high(self) -> pd.Series:
if self.prices.empty:
return pd.Series(dtype=float)
return self.prices.sort_values("date").groupby("ticker")["high"].last()
# ---- combine + Screener (Phase 2) ----
from . import position_sizer as _ps
def combine(scores: dict, weights: dict) -> pd.Series:
"""Weighted average across score nodes. ValueError if all weights = 0."""
active = {k: w for k, w in weights.items() if w > 0 and k in scores}
if not active:
raise ValueError("no active score nodes (all weights = 0)")
df = pd.DataFrame({k: scores[k] for k in active})
w = pd.Series(active)
weighted = (df.fillna(0).multiply(w, axis=1)).sum(axis=1) / w.sum()
return weighted
@dataclass
class ScreenerResult:
asof: dt.date
survivors_count: int
scores: dict # node name → pd.Series
weights: dict
ranked: pd.Series # ticker → total_score (sorted desc, head=top_n)
rows: list # list of dicts (for serialization)
warnings: list
class Screener:
def __init__(self, gate, score_nodes, weights: dict, node_params: dict,
                 gate_params: dict, top_n: int = 20, sizer_params: dict | None = None):
self.gate = gate
self.score_nodes = score_nodes
self.weights = weights
self.node_params = node_params
self.gate_params = gate_params
self.top_n = top_n
self.sizer_params = sizer_params or {"atr_window": 14, "atr_stop_mult": 2.0, "rr_ratio": 2.0}
def run(self, ctx: ScreenContext) -> ScreenerResult:
warnings: list = []
survivors = self.gate.filter(ctx, self.gate_params)
if len(survivors) == 0:
raise ValueError("no survivors after hygiene gate")
if len(survivors) < 100:
warnings.append(f"survivors_count={len(survivors)} < 100 — 백분위 정규화 신뢰도 낮음")
scoped = ctx.restrict(survivors)
scores: dict = {}
for n in self.score_nodes:
w = self.weights.get(n.name, 0)
if w <= 0:
continue
try:
scores[n.name] = n.compute(scoped, self.node_params.get(n.name, {}))
except Exception as e:
warnings.append(f"node '{n.name}' failed: {e}")
scores[n.name] = pd.Series(0.0, index=scoped.master.index)
total = combine(scores, self.weights)
ranked = total.sort_values(ascending=False).head(self.top_n)
sizing = _ps.plan_positions(scoped, list(ranked.index), self.sizer_params)
latest_close = scoped.latest_close()
rows = []
for rank_idx, ticker in enumerate(ranked.index, start=1):
s = sizing.get(ticker, {})
row = {
"rank": rank_idx,
"ticker": ticker,
"name": str(scoped.master.loc[ticker, "name"]),
"total_score": float(ranked.loc[ticker]),
"scores": {k: float(v.get(ticker, 0.0)) for k, v in scores.items()},
"close": int(latest_close.get(ticker, 0)),
"market_cap": int(scoped.master.loc[ticker, "market_cap"] or 0),
"entry_price": s.get("entry_price"),
"stop_price": s.get("stop_price"),
"target_price": s.get("target_price"),
"atr14": s.get("atr14"),
"r_pct": s.get("r_pct"),
}
rows.append(row)
return ScreenerResult(
asof=ctx.asof, survivors_count=len(survivors),
scores=scores, weights=self.weights,
ranked=ranked, rows=rows, warnings=warnings,
)
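
For reference, combine is nothing more than a weighted mean over the nodes with positive weight; a minimal sketch of the arithmetic:

import pandas as pd
from app.screener.engine import combine

scores = {"momentum": pd.Series({"A": 80.0}), "rs_rating": pd.Series({"A": 50.0})}
out = combine(scores, {"momentum": 2.0, "rs_rating": 1.0})
assert abs(out["A"] - 70.0) < 1e-9  # (80*2 + 50*1) / 3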

View File

@@ -0,0 +1,40 @@
"""Node base classes + helpers."""
from __future__ import annotations
from abc import ABC, abstractmethod
from typing import Any, ClassVar
import pandas as pd
class ScoreNode(ABC):
name: ClassVar[str]
label: ClassVar[str]
default_params: ClassVar[dict]
param_schema: ClassVar[dict]
@abstractmethod
def compute(self, ctx: "Any", params: dict) -> pd.Series:
"""returns Series indexed by ticker, 0..100 float."""
class GateNode(ABC):
name: ClassVar[str]
label: ClassVar[str]
default_params: ClassVar[dict]
param_schema: ClassVar[dict]
@abstractmethod
def filter(self, ctx: "Any", params: dict) -> pd.Index:
"""returns surviving tickers."""
def percentile_rank(series: pd.Series) -> pd.Series:
"""Percentile rank in [0, 100]. All-equal → 50. NaN preserved."""
if series.empty:
return series.astype(float)
if series.dropna().nunique() == 1:
return pd.Series(50.0, index=series.index)
ranked = series.rank(pct=True, na_option="keep") * 100.0
return ranked
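
Example behavior of percentile_rank — ordinary pandas rank(pct=True) scaled to 0..100:

import pandas as pd
from app.screener.nodes.base import percentile_rank

s = pd.Series({"B": 1.0, "C": 2.0, "A": 3.0})
print(percentile_rank(s).round(1))  # B 33.3, C 66.7, A 100.0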

View File

@@ -0,0 +1,33 @@
"""외국인 N일 누적 순매수 강도 (시총 대비)."""
import pandas as pd
from .base import ScoreNode, percentile_rank
class ForeignBuy(ScoreNode):
name = "foreign_buy"
label = "외국인 누적 순매수"
default_params = {"window_days": 5}
param_schema = {
"type": "object",
"properties": {
"window_days": {"type": "integer", "minimum": 1, "maximum": 60, "default": 5}
},
}
def compute(self, ctx, params: dict) -> pd.Series:
window = int(params.get("window_days", 5))
flow = ctx.flow
if flow.empty:
return pd.Series(dtype=float)
last_dates = (
flow.sort_values("date").groupby("ticker").tail(window)
)
net_sum = last_dates.groupby("ticker")["foreign_net"].sum()
market_cap = ctx.master["market_cap"].fillna(0).reindex(net_sum.index)
raw = (net_sum / market_cap.replace(0, pd.NA)).astype(float)
return percentile_rank(raw).fillna(50.0)

View File

@@ -0,0 +1,30 @@
"""52주 신고가 근접도 (룰 기반: 70% 미만 0점, 100% 도달 100점, 선형)."""
import pandas as pd
from .base import ScoreNode
class High52WProximity(ScoreNode):
name = "high52w"
label = "52주 신고가 근접도"
default_params = {"window_days": 252}
param_schema = {
"type": "object",
"properties": {
"window_days": {"type": "integer", "minimum": 60, "maximum": 504, "default": 252}
},
}
def compute(self, ctx, params: dict) -> pd.Series:
window = int(params.get("window_days", 252))
prices = ctx.prices
if prices.empty:
return pd.Series(dtype=float)
ordered = prices.sort_values("date")
last = ordered.groupby("ticker").tail(window)
agg = last.groupby("ticker").agg(close=("close", "last"), high=("high", "max"))
proximity = (agg["close"] / agg["high"]).clip(upper=1.0)
score = ((proximity - 0.7) / 0.3).clip(lower=0.0, upper=1.0) * 100.0
return score.fillna(0.0)

View File

@@ -0,0 +1,81 @@
"""HygieneGate — pre-filter for screener."""
from __future__ import annotations
import pandas as pd
from .base import GateNode
class HygieneGate(GateNode):
name = "hygiene"
label = "위생 게이트"
default_params = {
"min_market_cap_won": 50_000_000_000,
"min_avg_value_won": 500_000_000,
"min_listed_days": 60,
"skip_managed": True,
"skip_preferred": True,
"skip_spac": True,
"skip_halted_days": 3,
}
param_schema = {
"type": "object",
"properties": {
"min_market_cap_won": {"type": "integer", "minimum": 0},
"min_avg_value_won": {"type": "integer", "minimum": 0},
"min_listed_days": {"type": "integer", "minimum": 0},
"skip_managed": {"type": "boolean"},
"skip_preferred": {"type": "boolean"},
"skip_spac": {"type": "boolean"},
"skip_halted_days": {"type": "integer", "minimum": 0},
},
}
def filter(self, ctx, params: dict) -> pd.Index:
master = ctx.master.copy()
prices = ctx.prices
        # Market-cap floor
master = master[master["market_cap"].fillna(0) >= params["min_market_cap_won"]]
        # Preferred shares / managed issues / SPACs
if params.get("skip_preferred", True):
master = master[master["is_preferred"] == 0]
if params.get("skip_managed", True):
master = master[master["is_managed"] == 0]
if params.get("skip_spac", True):
master = master[master["is_spac"] == 0]
candidates = master.index
        # 20-day average trading value
if not prices.empty:
recent20 = (
prices[prices["ticker"].isin(candidates)]
.sort_values("date")
.groupby("ticker")
.tail(20)
)
avg_value = recent20.groupby("ticker")["value"].mean()
ok = avg_value[avg_value >= params["min_avg_value_won"]].index
candidates = candidates.intersection(ok)
        # Recent trading halt: drop tickers with volume == 0 on each of the last N days
halted_days = params.get("skip_halted_days", 3)
if halted_days > 0 and not prices.empty:
recent = (
prices[prices["ticker"].isin(candidates)]
.sort_values("date")
.groupby("ticker")
.tail(halted_days)
)
zero_count = (
recent.assign(z=lambda d: (d["volume"] == 0).astype(int))
.groupby("ticker")["z"].sum()
)
healthy = zero_count[zero_count < halted_days].index
candidates = candidates.intersection(healthy)
        # Listing age (min_listed_days) — MVP allows null listed_date; nulls pass through
return pd.Index(candidates)
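
A quick gate check against the synthetic fixtures (a sketch; note the gate's listing-age rule is a pass-through in the MVP, so the default params work on fixtures with null listed_date):

import datetime as dt
import pandas as pd
from app.screener.engine import ScreenContext
from app.screener.nodes.hygiene import HygieneGate
from app.screener._test_fixtures import make_master, make_prices, make_flow

asof = dt.date(2026, 5, 12)
master = make_master(["BIG", "TINY"],
                     market_caps={"BIG": 200_000_000_000, "TINY": 1_000_000_000})
ctx = ScreenContext(master=master,
                    prices=make_prices(["BIG", "TINY"], days=30, asof=asof),
                    flow=make_flow(["BIG", "TINY"], days=30, asof=asof),
                    kospi=pd.Series(dtype=float, name="kospi"), asof=asof)
survivors = HygieneGate().filter(ctx, HygieneGate.default_params)
assert list(survivors) == ["BIG"]  # TINY fails the 50B-won market-cap floor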

View File

@@ -0,0 +1,51 @@
"""이평선 정배열 점수 — 5개 조건 충족 개수 / 5 × 100."""
import pandas as pd
from .base import ScoreNode
class MaAlignment(ScoreNode):
name = "ma_alignment"
label = "이평선 정배열"
default_params = {"ma_periods": [50, 150, 200]}
param_schema = {
"type": "object",
"properties": {
"ma_periods": {"type": "array", "items": {"type": "integer"}}
},
}
def compute(self, ctx, params: dict) -> pd.Series:
ma_periods = params.get("ma_periods", self.default_params["ma_periods"])
if len(ma_periods) != 3:
raise ValueError("ma_periods must have 3 entries (short, medium, long)")
ma_s, ma_m, ma_l = (int(x) for x in ma_periods)
prices = ctx.prices
if prices.empty:
return pd.Series(dtype=float)
ordered = prices.sort_values("date")
min_history = max(252, ma_l)
def _score(s: pd.Series) -> float:
closes = s.astype(float).reset_index(drop=True)
if len(closes) < min_history:
return float("nan")
close = closes.iloc[-1]
ma_short = closes.rolling(ma_s).mean().iloc[-1]
ma_medium = closes.rolling(ma_m).mean().iloc[-1]
ma_long = closes.rolling(ma_l).mean().iloc[-1]
low52 = closes.iloc[-252:].min()
conds = [
close > ma_short,
ma_short > ma_medium,
ma_medium > ma_long,
close > ma_long,
close >= low52 * 1.25,
]
return sum(conds) / 5 * 100.0
raw = ordered.groupby("ticker", group_keys=False)["close"].apply(_score)
return raw.fillna(0.0)

View File

@@ -0,0 +1,34 @@
"""20일 모멘텀."""
import pandas as pd
from .base import ScoreNode, percentile_rank
class Momentum20(ScoreNode):
name = "momentum"
label = "20일 모멘텀"
default_params = {"window_days": 20}
param_schema = {
"type": "object",
"properties": {
"window_days": {"type": "integer", "minimum": 5, "maximum": 120, "default": 20}
},
}
def compute(self, ctx, params: dict) -> pd.Series:
window = int(params.get("window_days", 20))
prices = ctx.prices
if prices.empty:
return pd.Series(dtype=float)
ordered = prices.sort_values("date")
last = ordered.groupby("ticker").tail(window + 1)
def _ret(s):
if len(s) < window + 1:
return float("nan")
return s.iloc[-1] / s.iloc[0] - 1
raw = last.groupby("ticker")["close"].apply(_ret)
return percentile_rank(raw).fillna(50.0)

View File

@@ -0,0 +1,48 @@
"""RS Rating — IBD 가중 (3m=2,6m=1,9m=1,12m=1)."""
import pandas as pd
from .base import ScoreNode, percentile_rank
_PERIOD_TO_DAYS = {"3m": 63, "6m": 126, "9m": 189, "12m": 252}
class RsRating(ScoreNode):
name = "rs_rating"
label = "RS Rating (시장 대비 상대강도)"
default_params = {"weights": {"3m": 2, "6m": 1, "9m": 1, "12m": 1}}
param_schema = {
"type": "object",
"properties": {
"weights": {"type": "object"}
},
}
def compute(self, ctx, params: dict) -> pd.Series:
weights: dict = params.get("weights", self.default_params["weights"])
prices = ctx.prices
kospi = ctx.kospi
if prices.empty or kospi.empty:
return pd.Series(dtype=float)
ordered = prices.sort_values("date")
def _excess_for_ticker(g: pd.DataFrame) -> float:
closes = g.set_index("date")["close"]
total = 0.0
wsum = 0.0
for period, w in weights.items():
k = _PERIOD_TO_DAYS.get(period, 0)
if len(closes) <= k or len(kospi) <= k:
continue
r_stock = closes.iloc[-1] / closes.iloc[-(k + 1)] - 1
r_market = kospi.iloc[-1] / kospi.iloc[-(k + 1)] - 1
total += w * (r_stock - r_market)
wsum += w
return total / wsum if wsum else float("nan")
raw = ordered.groupby("ticker", group_keys=False).apply(
_excess_for_ticker, include_groups=False
)
return percentile_rank(raw).fillna(50.0)
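
The node boils down to a weighted mean of market-excess returns per ticker, then a percentile rank; the raw value under the default weights, with hypothetical excess returns:

# Hypothetical excess returns (stock minus market) per look-back period.
excess = {"3m": 0.25, "6m": 0.15, "9m": 0.10, "12m": 0.05}
weights = {"3m": 2, "6m": 1, "9m": 1, "12m": 1}
raw = sum(w * excess[p] for p, w in weights.items()) / sum(weights.values())
assert abs(raw - 0.16) < 1e-9  # (2*0.25 + 0.15 + 0.10 + 0.05) / 5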

View File

@@ -0,0 +1,40 @@
"""VCP-lite — 단기/장기 일중 변동성 비율 기반 수축률."""
import pandas as pd
from .base import ScoreNode, percentile_rank
class VcpLite(ScoreNode):
name = "vcp_lite"
label = "VCP-lite (변동성 수축)"
default_params = {"short_window": 40, "long_window": 252}
param_schema = {
"type": "object",
"properties": {
"short_window": {"type": "integer", "minimum": 10, "maximum": 120, "default": 40},
"long_window": {"type": "integer", "minimum": 60, "maximum": 504, "default": 252},
},
}
def compute(self, ctx, params: dict) -> pd.Series:
short_w = int(params.get("short_window", 40))
long_w = int(params.get("long_window", 252))
prices = ctx.prices
if prices.empty:
return pd.Series(dtype=float)
ordered = prices.sort_values("date").copy()
ordered["range_pct"] = (ordered["high"] - ordered["low"]) / ordered["close"]
def _ratio(s: pd.Series) -> float:
if len(s) < long_w:
return float("nan")
short_vol = s.tail(short_w).mean()
long_vol = s.tail(long_w).mean()
if long_vol == 0 or pd.isna(long_vol):
return float("nan")
return 1 - (short_vol / long_vol)
raw = ordered.groupby("ticker", group_keys=False)["range_pct"].apply(_ratio)
return percentile_rank(raw).fillna(50.0)

View File

@@ -0,0 +1,40 @@
"""거래량 급증 — log1p(recent/baseline)."""
import numpy as np
import pandas as pd
from .base import ScoreNode, percentile_rank
class VolumeSurge(ScoreNode):
name = "volume_surge"
label = "거래량 급증"
default_params = {"baseline_days": 20, "eval_days": 3}
param_schema = {
"type": "object",
"properties": {
"baseline_days": {"type": "integer", "minimum": 5, "maximum": 60, "default": 20},
"eval_days": {"type": "integer", "minimum": 1, "maximum": 10, "default": 3},
},
}
def compute(self, ctx, params: dict) -> pd.Series:
baseline = int(params.get("baseline_days", 20))
eval_d = int(params.get("eval_days", 3))
prices = ctx.prices
if prices.empty:
return pd.Series(dtype=float)
ordered = prices.sort_values("date")
last_recent = ordered.groupby("ticker").tail(eval_d).groupby("ticker")["volume"].mean()
last_baseline = (
ordered.groupby("ticker")
.tail(baseline + eval_d)
.groupby("ticker")
.head(baseline)
.groupby("ticker")["volume"]
.mean()
)
ratio = last_recent / last_baseline.replace(0, pd.NA)
raw = np.log1p(ratio.astype(float))
return percentile_rank(raw).fillna(50.0)

View File

@@ -0,0 +1,51 @@
"""ATR Wilder smoothing + entry/stop/target 계산."""
import pandas as pd
def compute_atr_wilder(df_one_ticker: pd.DataFrame, window: int = 14) -> float:
"""단일 종목 DataFrame(date·open·high·low·close)에 대해 Wilder ATR 마지막 값."""
g = df_one_ticker.sort_values("date").copy()
high = g["high"].astype(float)
low = g["low"].astype(float)
close = g["close"].astype(float)
prev_close = close.shift(1)
tr = pd.concat([
(high - low),
(high - prev_close).abs(),
(low - prev_close).abs(),
], axis=1).max(axis=1)
atr = tr.ewm(alpha=1 / window, adjust=False).mean()
return float(atr.iloc[-1])
def round_won(x: float) -> int:
return int(round(x))
def plan_positions(ctx, tickers: list, params: dict) -> dict:
"""각 ticker 에 대해 entry/stop/target/atr14 반환."""
atr_window = int(params.get("atr_window", 14))
stop_mult = float(params.get("atr_stop_mult", 2.0))
rr = float(params.get("rr_ratio", 2.0))
prices = ctx.prices.sort_values("date")
out: dict = {}
for t in tickers:
sub = prices[prices["ticker"] == t]
if sub.empty:
continue
close = float(sub["close"].iloc[-1])
atr14 = compute_atr_wilder(sub, window=atr_window)
entry = round_won(close * 1.005)
stop = round_won(close - stop_mult * atr14)
target = round_won(entry + rr * (entry - stop))
r_pct = (entry - stop) / entry * 100 if entry else 0.0
out[t] = {
"entry_price": entry,
"stop_price": stop,
"target_price": target,
"atr14": atr14,
"r_pct": r_pct,
}
return out
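
Worked numbers for the default params (atr_stop_mult=2.0, rr_ratio=2.0), assuming a hypothetical 50,000-won close and ATR14 of 1,200:

close, atr14 = 50_000, 1_200.0
entry = round(close * 1.005)                  # 50_250
stop = round(close - 2.0 * atr14)             # 47_600
target = round(entry + 2.0 * (entry - stop))  # 50_250 + 2 * 2_650 = 55_550
r_pct = (entry - stop) / entry * 100          # ≈ 5.27% risk per unit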

View File

@@ -0,0 +1,24 @@
"""Registry of node classes (single source of truth for /nodes endpoint)."""
from .nodes.hygiene import HygieneGate
from .nodes.foreign_buy import ForeignBuy
from .nodes.volume_surge import VolumeSurge
from .nodes.momentum import Momentum20
from .nodes.high52w import High52WProximity
from .nodes.rs_rating import RsRating
from .nodes.ma_alignment import MaAlignment
from .nodes.vcp_lite import VcpLite
NODE_REGISTRY: dict = {
"foreign_buy": ForeignBuy,
"volume_surge": VolumeSurge,
"momentum": Momentum20,
"high52w": High52WProximity,
"rs_rating": RsRating,
"ma_alignment": MaAlignment,
"vcp_lite": VcpLite,
}
GATE_REGISTRY: dict = {
"hygiene": HygieneGate,
}

View File

@@ -0,0 +1,310 @@
"""FastAPI router for /api/stock/screener/*"""
from __future__ import annotations
import datetime as dt
import json
import os
import sqlite3
from typing import Optional
from fastapi import APIRouter, HTTPException
from . import schemas
from .registry import NODE_REGISTRY, GATE_REGISTRY
router = APIRouter(prefix="/api/stock/screener")
import pathlib as _pathlib
_HOLIDAYS_CACHE = None
def _holidays():
global _HOLIDAYS_CACHE
if _HOLIDAYS_CACHE is None:
path = _pathlib.Path(__file__).resolve().parent.parent / "holidays.json"
try:
with path.open(encoding="utf-8") as f:
                data = json.load(f)
_HOLIDAYS_CACHE = set(data) if isinstance(data, list) else set(data.keys())
except FileNotFoundError:
_HOLIDAYS_CACHE = set()
return _HOLIDAYS_CACHE
def _is_holiday(d: dt.date) -> bool:
return d.weekday() >= 5 or d.isoformat() in _holidays()
def _db_path() -> str:
return os.environ.get("STOCK_DB_PATH", "/app/data/stock.db")
def _conn() -> sqlite3.Connection:
return sqlite3.connect(_db_path())
# ---------- /nodes ----------
@router.get("/nodes", response_model=schemas.NodesResponse)
def get_nodes():
score_nodes = [
schemas.NodeMeta(
name=cls.name, label=cls.label,
default_params=cls.default_params, param_schema=cls.param_schema,
)
for cls in NODE_REGISTRY.values()
]
gate_nodes = [
schemas.NodeMeta(
name=cls.name, label=cls.label,
default_params=cls.default_params, param_schema=cls.param_schema,
)
for cls in GATE_REGISTRY.values()
]
return schemas.NodesResponse(score_nodes=score_nodes, gate_nodes=gate_nodes)
# ---------- /settings ----------
@router.get("/settings", response_model=schemas.SettingsResponse)
def get_settings():
with _conn() as c:
row = c.execute(
"SELECT weights_json, node_params_json, gate_params_json, "
"top_n, rr_ratio, atr_window, atr_stop_mult, updated_at "
"FROM screener_settings WHERE id=1"
).fetchone()
if row is None:
raise HTTPException(503, "settings not initialized")
return schemas.SettingsResponse(
weights=json.loads(row[0]),
node_params=json.loads(row[1]),
gate_params=json.loads(row[2]),
top_n=row[3], rr_ratio=row[4], atr_window=row[5], atr_stop_mult=row[6],
updated_at=row[7],
)
@router.put("/settings", response_model=schemas.SettingsResponse)
def put_settings(body: schemas.SettingsBody):
now = dt.datetime.utcnow().isoformat()
with _conn() as c:
c.execute(
"""UPDATE screener_settings SET
weights_json=?, node_params_json=?, gate_params_json=?,
top_n=?, rr_ratio=?, atr_window=?, atr_stop_mult=?, updated_at=?
WHERE id=1""",
(
json.dumps(body.weights), json.dumps(body.node_params),
json.dumps(body.gate_params),
body.top_n, body.rr_ratio, body.atr_window, body.atr_stop_mult, now,
),
)
c.commit()
return schemas.SettingsResponse(**body.model_dump(), updated_at=now)
# ---------- /run ----------
from . import telegram as _tg
from .engine import Screener, ScreenContext
def _resolve_asof(asof_str, conn: sqlite3.Connection) -> dt.date:
if asof_str:
return dt.date.fromisoformat(asof_str)
row = conn.execute("SELECT max(date) FROM krx_daily_prices").fetchone()
if not row or row[0] is None:
raise HTTPException(503, "no snapshot available — run /snapshot/refresh first")
return dt.date.fromisoformat(row[0])
def _load_settings(conn) -> dict:
row = conn.execute(
"SELECT weights_json,node_params_json,gate_params_json,top_n,"
"rr_ratio,atr_window,atr_stop_mult FROM screener_settings WHERE id=1"
).fetchone()
return {
"weights": json.loads(row[0]),
"node_params": json.loads(row[1]),
"gate_params": json.loads(row[2]),
"top_n": row[3],
"rr_ratio": row[4],
"atr_window": row[5],
"atr_stop_mult": row[6],
}
def _persist_run(conn, asof, mode, weights, node_params, gate_params, top_n,
result, started_at, finished_at) -> int:
cur = conn.execute(
"""INSERT INTO screener_runs (asof,mode,status,started_at,finished_at,
weights_json,node_params_json,gate_params_json,top_n,survivors_count,telegram_sent)
VALUES (?,?,?,?,?,?,?,?,?,?,0)""",
(asof.isoformat(), mode, "success", started_at, finished_at,
json.dumps(weights), json.dumps(node_params), json.dumps(gate_params),
top_n, result.survivors_count),
)
run_id = cur.lastrowid
for row in result.rows:
conn.execute(
"""INSERT INTO screener_results (run_id,rank,ticker,name,total_score,
scores_json,close,market_cap,entry_price,stop_price,target_price,atr14)
VALUES (?,?,?,?,?,?,?,?,?,?,?,?)""",
(run_id, row["rank"], row["ticker"], row["name"], row["total_score"],
json.dumps(row["scores"]), row["close"], row["market_cap"],
row["entry_price"], row["stop_price"], row["target_price"], row["atr14"]),
)
conn.commit()
return run_id
@router.post("/run", response_model=schemas.RunResponse)
def post_run(body: schemas.RunRequest):
from .registry import NODE_REGISTRY as _NR, GATE_REGISTRY as _GR
started_at = dt.datetime.utcnow().isoformat()
with _conn() as c:
asof = _resolve_asof(body.asof, c)
# Skipped holiday handling for mode='auto'
if body.mode == "auto" and _is_holiday(asof):
return schemas.RunResponse(
asof=asof.isoformat(), mode="auto", status="skipped_holiday",
run_id=None, survivors_count=None,
weights={}, top_n=0,
results=[], telegram_payload=None,
warnings=[f"{asof.isoformat()} is a holiday — skipped"],
)
defaults = _load_settings(c)
if body.mode == "auto":
weights = defaults["weights"]
node_params = defaults["node_params"]
gate_params = defaults["gate_params"]
top_n = defaults["top_n"]
else:
weights = body.weights if body.weights is not None else defaults["weights"]
node_params = body.node_params if body.node_params is not None else defaults["node_params"]
gate_params = body.gate_params if body.gate_params is not None else defaults["gate_params"]
top_n = body.top_n if body.top_n is not None else defaults["top_n"]
sizer_params = {
"atr_window": defaults["atr_window"],
"atr_stop_mult": defaults["atr_stop_mult"],
"rr_ratio": defaults["rr_ratio"],
}
ctx = ScreenContext.load(c, asof)
score_nodes = [cls() for name, cls in _NR.items() if weights.get(name, 0) > 0]
gate = _GR["hygiene"]()
try:
screener = Screener(
gate=gate, score_nodes=score_nodes, weights=weights,
node_params=node_params, gate_params=gate_params,
top_n=top_n, sizer_params=sizer_params,
)
result = screener.run(ctx)
except ValueError as e:
raise HTTPException(422, str(e))
finished_at = dt.datetime.utcnow().isoformat()
run_id = None
if body.mode in ("manual_save", "auto"):
run_id = _persist_run(c, asof, body.mode, weights, node_params, gate_params,
top_n, result, started_at, finished_at)
payload = _tg.build_telegram_payload(
asof=asof, mode=body.mode, survivors_count=result.survivors_count,
top_n=top_n, rows=result.rows, run_id=run_id,
)
return schemas.RunResponse(
asof=asof.isoformat(), mode=body.mode, status="success",
run_id=run_id, survivors_count=result.survivors_count,
weights=weights, top_n=top_n,
results=result.rows,
telegram_payload=schemas.TelegramPayload(**payload),
warnings=result.warnings,
)
# ---------- /snapshot/refresh ----------
from . import snapshot as _snap
@router.post("/snapshot/refresh")
def post_snapshot_refresh(asof: Optional[str] = None):
asof_date = dt.date.fromisoformat(asof) if asof else dt.date.today()
if asof_date.weekday() >= 5:
return {"asof": asof_date.isoformat(), "status": "skipped_weekend"}
with _conn() as c:
summary = _snap.refresh_daily(c, asof_date)
return summary
# ---------- /runs ----------
@router.get("/runs", response_model=list[schemas.RunSummary])
def list_runs(limit: int = 30):
with _conn() as c:
rows = c.execute(
"SELECT id,asof,mode,status,started_at,finished_at,top_n,"
"survivors_count,telegram_sent FROM screener_runs "
"ORDER BY asof DESC, id DESC LIMIT ?", (limit,),
).fetchall()
return [
schemas.RunSummary(
id=r[0], asof=r[1], mode=r[2], status=r[3],
started_at=r[4], finished_at=r[5], top_n=r[6],
survivors_count=r[7], telegram_sent=bool(r[8]),
)
for r in rows
]
@router.get("/runs/{run_id}")
def get_run(run_id: int):
with _conn() as c:
meta = c.execute(
"SELECT id,asof,mode,status,started_at,finished_at,top_n,"
"survivors_count,telegram_sent,weights_json,node_params_json,gate_params_json "
"FROM screener_runs WHERE id=?",
(run_id,),
).fetchone()
if not meta:
raise HTTPException(404, "run not found")
rows = c.execute(
"SELECT rank,ticker,name,total_score,scores_json,close,market_cap,"
"entry_price,stop_price,target_price,atr14 "
"FROM screener_results WHERE run_id=? ORDER BY rank",
(run_id,),
).fetchall()
return {
"meta": {
"id": meta[0], "asof": meta[1], "mode": meta[2], "status": meta[3],
"started_at": meta[4], "finished_at": meta[5], "top_n": meta[6],
"survivors_count": meta[7], "telegram_sent": bool(meta[8]),
"weights": json.loads(meta[9]),
"node_params": json.loads(meta[10]),
"gate_params": json.loads(meta[11]),
},
"results": [
{
"rank": r[0], "ticker": r[1], "name": r[2],
"total_score": r[3], "scores": json.loads(r[4]),
"close": r[5], "market_cap": r[6],
"entry_price": r[7], "stop_price": r[8], "target_price": r[9],
"atr14": r[10],
}
for r in rows
],
}
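
A minimal client sketch against a locally running service (the base URL/port is an assumption; adjust to your deployment, and refresh a snapshot before the first run):

import httpx

BASE = "http://localhost:8000/api/stock/screener"  # hypothetical local address
print(httpx.get(f"{BASE}/nodes").json()["score_nodes"][0]["name"])
httpx.post(f"{BASE}/snapshot/refresh", timeout=180)  # pulls FDR + naver; skipped on weekends
run = httpx.post(f"{BASE}/run", json={"mode": "preview"}, timeout=180).json()
print(run["status"], run["survivors_count"], len(run["results"]))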

View File

@@ -0,0 +1,136 @@
"""Screener schema bootstrap. Called once at module import via db.py."""
import json
import sqlite3
from datetime import datetime, timezone
DEFAULT_WEIGHTS = {
"foreign_buy": 1.0,
"volume_surge": 1.0,
"momentum": 1.0,
"high52w": 1.2,
"rs_rating": 1.2,
"ma_alignment": 1.0,
"vcp_lite": 0.8,
}
DEFAULT_NODE_PARAMS = {
"foreign_buy": {"window_days": 5},
"volume_surge": {"baseline_days": 20, "eval_days": 3},
"momentum": {"window_days": 20},
"high52w": {"window_days": 252},
"rs_rating": {"weights": {"3m": 2, "6m": 1, "9m": 1, "12m": 1}},
"ma_alignment": {"ma_periods": [50, 150, 200]},
"vcp_lite": {"short_window": 40, "long_window": 252},
}
DEFAULT_GATE_PARAMS = {
"min_market_cap_won": 50_000_000_000,
"min_avg_value_won": 500_000_000,
"min_listed_days": 60,
"skip_managed": True,
"skip_preferred": True,
"skip_spac": True,
"skip_halted_days": 3,
}
DDL = """
CREATE TABLE IF NOT EXISTS krx_master (
ticker TEXT PRIMARY KEY,
name TEXT NOT NULL,
market TEXT NOT NULL,
market_cap INTEGER,
is_managed INTEGER NOT NULL DEFAULT 0,
is_preferred INTEGER NOT NULL DEFAULT 0,
is_spac INTEGER NOT NULL DEFAULT 0,
listed_date TEXT,
updated_at TEXT NOT NULL
);
CREATE TABLE IF NOT EXISTS krx_daily_prices (
ticker TEXT NOT NULL,
date TEXT NOT NULL,
open INTEGER, high INTEGER, low INTEGER, close INTEGER,
volume INTEGER,
value INTEGER,
PRIMARY KEY (ticker, date)
);
CREATE INDEX IF NOT EXISTS idx_prices_date ON krx_daily_prices(date);
CREATE TABLE IF NOT EXISTS krx_flow (
ticker TEXT NOT NULL,
date TEXT NOT NULL,
foreign_net INTEGER,
institution_net INTEGER,
PRIMARY KEY (ticker, date)
);
CREATE INDEX IF NOT EXISTS idx_flow_date ON krx_flow(date);
CREATE TABLE IF NOT EXISTS screener_settings (
id INTEGER PRIMARY KEY CHECK (id = 1),
weights_json TEXT NOT NULL,
node_params_json TEXT NOT NULL,
gate_params_json TEXT NOT NULL,
top_n INTEGER NOT NULL DEFAULT 20,
rr_ratio REAL NOT NULL DEFAULT 2.0,
atr_window INTEGER NOT NULL DEFAULT 14,
atr_stop_mult REAL NOT NULL DEFAULT 2.0,
updated_at TEXT NOT NULL
);
CREATE TABLE IF NOT EXISTS screener_runs (
id INTEGER PRIMARY KEY AUTOINCREMENT,
asof TEXT NOT NULL,
mode TEXT NOT NULL,
status TEXT NOT NULL,
error TEXT,
started_at TEXT NOT NULL,
finished_at TEXT,
weights_json TEXT NOT NULL,
node_params_json TEXT NOT NULL,
gate_params_json TEXT NOT NULL,
top_n INTEGER NOT NULL,
survivors_count INTEGER,
telegram_sent INTEGER NOT NULL DEFAULT 0
);
CREATE INDEX IF NOT EXISTS idx_runs_asof ON screener_runs(asof DESC);
CREATE TABLE IF NOT EXISTS screener_results (
run_id INTEGER NOT NULL,
rank INTEGER NOT NULL,
ticker TEXT NOT NULL,
name TEXT NOT NULL,
total_score REAL NOT NULL,
scores_json TEXT NOT NULL,
close INTEGER,
market_cap INTEGER,
entry_price INTEGER,
stop_price INTEGER,
target_price INTEGER,
atr14 REAL,
PRIMARY KEY (run_id, ticker),
FOREIGN KEY (run_id) REFERENCES screener_runs(id) ON DELETE CASCADE
);
CREATE INDEX IF NOT EXISTS idx_results_run_rank ON screener_results(run_id, rank);
"""
def ensure_screener_schema(conn: sqlite3.Connection) -> None:
"""Create tables and seed default settings (idempotent)."""
conn.executescript(DDL)
existing = conn.execute("SELECT id FROM screener_settings WHERE id=1").fetchone()
if existing is None:
now = datetime.now(timezone.utc).isoformat()
conn.execute(
"""
INSERT INTO screener_settings (
id, weights_json, node_params_json, gate_params_json,
top_n, rr_ratio, atr_window, atr_stop_mult, updated_at
) VALUES (1, ?, ?, ?, 20, 2.0, 14, 2.0, ?)
""",
(
json.dumps(DEFAULT_WEIGHTS),
json.dumps(DEFAULT_NODE_PARAMS),
json.dumps(DEFAULT_GATE_PARAMS),
now,
),
)
conn.commit()
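
The bootstrap is safe to call repeatedly; a quick check against an in-memory database:

import sqlite3
from app.screener.schema import ensure_screener_schema

conn = sqlite3.connect(":memory:")
ensure_screener_schema(conn)
ensure_screener_schema(conn)  # idempotent: IF NOT EXISTS + the seed guard make this a no-op
row = conn.execute("SELECT top_n, atr_window FROM screener_settings WHERE id=1").fetchone()
assert row == (20, 14)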

View File

@@ -0,0 +1,85 @@
from __future__ import annotations
from typing import Literal, Optional
from pydantic import BaseModel, Field
class NodeMeta(BaseModel):
name: str
label: str
default_params: dict
param_schema: dict
class NodesResponse(BaseModel):
score_nodes: list[NodeMeta]
gate_nodes: list[NodeMeta]
class SettingsBody(BaseModel):
weights: dict[str, float]
node_params: dict[str, dict] = Field(default_factory=dict)
gate_params: dict
top_n: int = 20
rr_ratio: float = 2.0
atr_window: int = 14
atr_stop_mult: float = 2.0
class SettingsResponse(SettingsBody):
updated_at: str
class RunRequest(BaseModel):
mode: Literal["preview", "manual_save", "auto"] = "preview"
asof: Optional[str] = None
weights: Optional[dict[str, float]] = None
node_params: Optional[dict[str, dict]] = None
gate_params: Optional[dict] = None
top_n: Optional[int] = None
class ResultRow(BaseModel):
rank: int
ticker: str
name: str
total_score: float
scores: dict[str, float]
close: int
market_cap: int
entry_price: Optional[int] = None
stop_price: Optional[int] = None
target_price: Optional[int] = None
atr14: Optional[float] = None
r_pct: Optional[float] = None
class TelegramPayload(BaseModel):
chat_target: str
parse_mode: str
text: str
class RunResponse(BaseModel):
asof: str
mode: str
status: Literal["success", "failed", "skipped_holiday"]
run_id: Optional[int] = None
survivors_count: Optional[int] = None
weights: dict[str, float]
top_n: int
results: list[ResultRow] = Field(default_factory=list)
telegram_payload: Optional[TelegramPayload] = None
warnings: list[str] = Field(default_factory=list)
error: Optional[str] = None
class RunSummary(BaseModel):
id: int
asof: str
mode: str
status: str
started_at: str
finished_at: Optional[str] = None
top_n: int
survivors_count: Optional[int] = None
telegram_sent: bool

View File

@@ -0,0 +1,247 @@
"""KRX daily snapshot loader (FDR + naver finance scraping)."""
from __future__ import annotations
import datetime as dt
import logging
import re
import sqlite3
import time
from dataclasses import dataclass
import FinanceDataReader as fdr
import httpx
import pandas as pd
from bs4 import BeautifulSoup
log = logging.getLogger(__name__)
NAVER_FRGN_URL = "https://finance.naver.com/item/frgn.naver"
NAVER_HEADERS = {
"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36",
"Referer": "https://finance.naver.com/",
}
DEFAULT_FLOW_TOP_N = 500
DEFAULT_RATE_LIMIT_SEC = 0.2
@dataclass
class RefreshSummary:
asof: dt.date
master_count: int
prices_count: int
flow_count: int
failures: list[str]
def asdict(self) -> dict:
return {
"asof": self.asof.isoformat(),
"master_count": self.master_count,
"prices_count": self.prices_count,
"flow_count": self.flow_count,
"failures": self.failures,
}
def _iso(d: dt.date) -> str:
return d.isoformat()
def _is_preferred(name: str) -> int:
"""우선주 휴리스틱: 종목명이 ''로 끝나거나 '우[A-Z]?'/'\\d?' 패턴."""
n = name or ""
return 1 if re.search(r"우[A-Z]?$|우\d?$", n) else 0
def _is_spac(name: str) -> int:
return 1 if "스팩" in (name or "") else 0
def fetch_master_listing() -> pd.DataFrame:
"""fdr.StockListing('KRX'). Wrapped for stub-ability in tests."""
return fdr.StockListing("KRX")
def fetch_ohlcv_for_ticker(ticker: str, start: str, end: str) -> pd.DataFrame:
"""fdr.DataReader for backfill."""
return fdr.DataReader(ticker, start, end)
def fetch_flow_naver(ticker: str, *, client) -> dict | None:
"""Scrape naver frgn page; return latest-day flow dict, or None."""
r = client.get(NAVER_FRGN_URL, params={"code": ticker, "page": 1})
if r.status_code != 200:
return None
soup = BeautifulSoup(r.text, "lxml")
for row in soup.select("table.type2 tr"):
cells = [c.get_text(strip=True).replace(",", "") for c in row.select("td")]
if not cells or not cells[0]:
continue
if not re.match(r"\d{4}\.\d{2}\.\d{2}", cells[0]):
continue
try:
inst = int(cells[5]) if cells[5] not in ("", "-") else 0
foreign = int(cells[6]) if cells[6] not in ("", "-") else 0
return {
"date": cells[0].replace(".", "-"),
"foreign_net": foreign,
"institution_net": inst,
}
except (IndexError, ValueError):
return None
return None
def _master_and_prices_rows(asof: dt.date,
df: pd.DataFrame) -> tuple[list[tuple], list[tuple]]:
iso = _iso(asof)
now_iso = dt.datetime.utcnow().isoformat()
master_rows: list[tuple] = []
price_rows: list[tuple] = []
for _, row in df.iterrows():
ticker = str(row.get("Code") or "").strip()
name = str(row.get("Name") or "").strip()
if not ticker or not name:
continue
market_raw = str(row.get("Market") or "").upper()
market = "KOSDAQ" if "KOSDAQ" in market_raw else "KOSPI"
try:
market_cap = int(row["Marcap"]) if pd.notna(row.get("Marcap")) else None
except (TypeError, ValueError):
market_cap = None
master_rows.append((
ticker, name, market, market_cap,
0, _is_preferred(name), _is_spac(name),
None, now_iso,
))
try:
o = int(row["Open"]) if pd.notna(row.get("Open")) else None
h = int(row["High"]) if pd.notna(row.get("High")) else None
l = int(row["Low"]) if pd.notna(row.get("Low")) else None
c = int(row["Close"]) if pd.notna(row.get("Close")) else None
v = int(row["Volume"]) if pd.notna(row.get("Volume")) else None
amt = row.get("Amount")
a = int(amt) if pd.notna(amt) else None
if c is not None and v is not None:
price_rows.append((ticker, iso, o, h, l, c, v, a))
except (TypeError, KeyError):
pass
return master_rows, price_rows
def _gather_flow_naver(asof: dt.date, tickers: list[str],
*, rate_limit_sec: float = DEFAULT_RATE_LIMIT_SEC) -> list[tuple]:
iso = _iso(asof)
rows: list[tuple] = []
if not tickers:
return rows
with httpx.Client(timeout=10, headers=NAVER_HEADERS) as client:
for t in tickers:
try:
data = fetch_flow_naver(t, client=client)
if data and data["date"] == iso:
rows.append((t, iso, data["foreign_net"], data["institution_net"]))
except Exception as e:
log.warning("flow scrape failed for %s: %s", t, e)
if rate_limit_sec > 0:
time.sleep(rate_limit_sec)
return rows
def refresh_daily(conn: sqlite3.Connection, asof: dt.date,
flow_top_n: int = DEFAULT_FLOW_TOP_N,
rate_limit_sec: float = DEFAULT_RATE_LIMIT_SEC) -> dict:
"""Pull master + prices (FDR) + flow (naver scraping for top N by market cap)."""
df = fetch_master_listing()
master_rows, price_rows = _master_and_prices_rows(asof, df)
conn.executemany("""
INSERT INTO krx_master (
ticker, name, market, market_cap,
is_managed, is_preferred, is_spac,
listed_date, updated_at
) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?)
ON CONFLICT(ticker) DO UPDATE SET
name=excluded.name, market=excluded.market,
market_cap=excluded.market_cap,
is_managed=excluded.is_managed,
is_preferred=excluded.is_preferred,
is_spac=excluded.is_spac,
updated_at=excluded.updated_at
""", master_rows)
conn.executemany("""
INSERT OR REPLACE INTO krx_daily_prices
(ticker, date, open, high, low, close, volume, value)
VALUES (?, ?, ?, ?, ?, ?, ?, ?)
""", price_rows)
    # Foreign/institution flow: top N by market cap only (rate-limit protection)
if flow_top_n > 0:
top = sorted(master_rows, key=lambda r: r[3] or 0, reverse=True)[:flow_top_n]
flow_tickers = [r[0] for r in top]
else:
flow_tickers = []
flow_rows = _gather_flow_naver(asof, flow_tickers, rate_limit_sec=rate_limit_sec)
conn.executemany("""
INSERT OR REPLACE INTO krx_flow
(ticker, date, foreign_net, institution_net)
VALUES (?, ?, ?, ?)
""", flow_rows)
conn.commit()
return RefreshSummary(
asof=asof, master_count=len(master_rows),
prices_count=len(price_rows), flow_count=len(flow_rows),
failures=[],
).asdict()
def backfill(conn: sqlite3.Connection, start: dt.date, end: dt.date) -> list[dict]:
"""5년치 일봉 백필 — 종목별 fdr.DataReader 호출. Master는 end 기준 (FDR은 historical master 미지원)."""
df = fetch_master_listing()
master_rows, _ = _master_and_prices_rows(end, df)
conn.executemany("""
INSERT INTO krx_master (
ticker, name, market, market_cap,
is_managed, is_preferred, is_spac,
listed_date, updated_at
) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?)
ON CONFLICT(ticker) DO UPDATE SET name=excluded.name
""", master_rows)
iso_start = start.isoformat()
iso_end = end.isoformat()
results = []
for r in master_rows:
t = r[0]
try:
ddf = fetch_ohlcv_for_ticker(t, iso_start, iso_end)
if ddf is None or ddf.empty:
continue
ddf = ddf.reset_index()
ddf["Date"] = pd.to_datetime(ddf["Date"]).dt.strftime("%Y-%m-%d")
rows = []
for _, rr in ddf.iterrows():
if pd.isna(rr["Close"]) or pd.isna(rr["Volume"]):
continue
rows.append((
t, rr["Date"],
int(rr["Open"]) if pd.notna(rr["Open"]) else None,
int(rr["High"]) if pd.notna(rr["High"]) else None,
int(rr["Low"]) if pd.notna(rr["Low"]) else None,
int(rr["Close"]),
int(rr["Volume"]),
int(rr["Close"] * rr["Volume"]),
))
conn.executemany("""
INSERT OR REPLACE INTO krx_daily_prices
(ticker, date, open, high, low, close, volume, value)
VALUES (?, ?, ?, ?, ?, ?, ?, ?)
""", rows)
results.append({"ticker": t, "count": len(rows)})
except Exception as e:
log.error("backfill failed for %s: %s", t, e)
results.append({"ticker": t, "error": str(e)})
conn.commit()
return results
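
A minimal invocation sketch (network-bound — it hits FDR and naver; flow_top_n=0 skips the scraping entirely):

import datetime as dt
import sqlite3
from app.screener.schema import ensure_screener_schema
from app.screener import snapshot

conn = sqlite3.connect("/tmp/stock.db")  # hypothetical path; prod uses STOCK_DB_PATH
ensure_screener_schema(conn)
summary = snapshot.refresh_daily(conn, dt.date.today(), flow_top_n=0)
print(summary)  # {'asof': ..., 'master_count': ..., 'prices_count': ..., 'flow_count': 0, ...}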

View File

@@ -0,0 +1,72 @@
"""Telegram payload builder. Caller (agent-office) handles actual delivery."""
from __future__ import annotations
import datetime as dt
NODE_ICONS = {
"foreign_buy": "👤외",
"volume_surge": "⚡거",
"momentum": "🚀모",
"high52w": "🆙고",
"rs_rating": "💪RS",
"ma_alignment": "📈MA",
"vcp_lite": "🌀VCP",
}
PAGE_BASE = "https://gahusb.synology.me/stock/screener"
def _escape_md(s: str) -> str:
"""Minimal MarkdownV2 escape — extend if formatting breaks."""
for ch in r"\_*[]()~`>#+-=|{}.!":
s = s.replace(ch, "\\" + ch)
return s
def _format_won(n) -> str:
if n is None:
return "-"
return f"{int(n):,}"
def build_telegram_payload(asof: dt.date, mode: str, survivors_count: int,
top_n: int, rows: list, run_id) -> dict:
title = "*KRX 강세주 스크리너*"
header = (
f"🎯 {title}{_escape_md(asof.isoformat())} \\({_escape_md(mode)}\\)\n"
f"통과 {survivors_count}종 / Top {top_n} / 본문 1\\-10"
)
lines = []
for r in rows[:10]:
icons = " ".join(
NODE_ICONS[name] for name, sc in r["scores"].items()
if sc >= 70 and name in NODE_ICONS
)
score_str = f"{r['total_score']:.1f}"
r_pct = r.get("r_pct")
r_pct_str = f"{r_pct:.1f}" if r_pct is not None else "-"
lines.append(
f"{r['rank']}\\. *{_escape_md(r['name'])}* `{r['ticker']}` "
f"{_escape_md(score_str)}\n"
f" {icons}\n"
f" 진입 {_format_won(r.get('entry_price'))} "
f"손절 {_format_won(r.get('stop_price'))} "
f"익절 {_format_won(r.get('target_price'))} "
f"\\(R {_escape_md(r_pct_str)}%\\)"
)
    # Wrap the URL in an inline link to avoid escaping . - ? = inside the URL itself
link = (
f"🔗 [전체 결과·11\\~20위]({PAGE_BASE}?run_id={run_id})"
if run_id else ""
)
text = header + "\n\n" + "\n\n".join(lines) + ("\n\n" + link if link else "")
return {
"chat_target": "default",
"parse_mode": "MarkdownV2",
"text": text,
}
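
The escape helper is easy to spot-check; note that % is deliberately left unescaped (it is not reserved in MarkdownV2):

from app.screener.telegram import _escape_md

assert _escape_md("A-B (1.5%)") == "A\\-B \\(1\\.5%\\)"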

View File

@@ -0,0 +1,131 @@
"""price_fetcher._select_price_from_response 단위 테스트.
실행:
cd web-backend/stock-lab
python -m unittest app.test_price_fetcher -v
"""
import os
import sys
import unittest
# Make the app package importable when this file is run directly
sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
from app.price_fetcher import _select_price_from_response
class SelectPriceFromResponseTest(unittest.TestCase):
def test_regular_session_uses_close_price(self):
"""정규장 운영 중이면 closePrice를 REGULAR 세션으로 반환."""
payload = {
"closePrice": "70,500",
"marketStatus": "OPEN",
"localTradedAt": "2026-05-11T11:23:45+09:00",
"overMarketPriceInfo": None,
}
result = _select_price_from_response(payload)
self.assertEqual(result["price"], 70500)
self.assertEqual(result["session"], "REGULAR")
self.assertEqual(result["as_of"], "2026-05-11T11:23:45+09:00")
def test_nxt_after_market_open_uses_over_price(self):
"""정규장 마감 + NXT 애프터마켓 운영중이면 overPrice를 NXT_AFTER 세션으로 반환."""
payload = {
"closePrice": "285,500",
"marketStatus": "CLOSE",
"localTradedAt": "2026-05-11T15:30:00+09:00",
"overMarketPriceInfo": {
"tradingSessionType": "AFTER_MARKET",
"overMarketStatus": "OPEN",
"overPrice": "285,000",
"localTradedAt": "2026-05-11T19:21:40+09:00",
"tradeStopType": {"name": "TRADING"},
},
}
result = _select_price_from_response(payload)
self.assertEqual(result["price"], 285000)
self.assertEqual(result["session"], "NXT_AFTER")
self.assertEqual(result["as_of"], "2026-05-11T19:21:40+09:00")
def test_nxt_pre_market_open_uses_over_price(self):
"""NXT 프리마켓 운영중이면 NXT_PRE 세션 + overPrice."""
payload = {
"closePrice": "70,500",
"marketStatus": "CLOSE",
"localTradedAt": "2026-05-10T15:30:00+09:00",
"overMarketPriceInfo": {
"tradingSessionType": "PRE_MARKET",
"overMarketStatus": "OPEN",
"overPrice": "70,800",
"localTradedAt": "2026-05-11T08:30:00+09:00",
"tradeStopType": {"name": "TRADING"},
},
}
result = _select_price_from_response(payload)
self.assertEqual(result["price"], 70800)
self.assertEqual(result["session"], "NXT_PRE")
self.assertEqual(result["as_of"], "2026-05-11T08:30:00+09:00")
def test_nxt_closed_falls_back_to_close_price(self):
"""NXT가 CLOSE 상태이면 closePrice 사용, 세션은 CLOSED."""
payload = {
"closePrice": "285,500",
"marketStatus": "CLOSE",
"localTradedAt": "2026-05-11T15:30:00+09:00",
"overMarketPriceInfo": {
"tradingSessionType": "AFTER_MARKET",
"overMarketStatus": "CLOSE",
"overPrice": "285,000",
"tradeStopType": {"name": "TRADING"},
},
}
result = _select_price_from_response(payload)
self.assertEqual(result["price"], 285500)
self.assertEqual(result["session"], "CLOSED")
def test_nxt_trading_halted_falls_back_to_close_price(self):
"""NXT OPEN이지만 tradeStopType이 TRADING이 아니면 closePrice 사용."""
payload = {
"closePrice": "285,500",
"marketStatus": "CLOSE",
"overMarketPriceInfo": {
"tradingSessionType": "AFTER_MARKET",
"overMarketStatus": "OPEN",
"overPrice": "285,000",
"tradeStopType": {"name": "STOP"},
},
}
result = _select_price_from_response(payload)
self.assertEqual(result["price"], 285500)
self.assertEqual(result["session"], "CLOSED")
def test_no_over_market_info_returns_close_price(self):
"""overMarketPriceInfo 자체가 없는 경우(해외 종목 등) closePrice 그대로."""
payload = {
"closePrice": "150,000",
"marketStatus": "CLOSE",
"localTradedAt": "2026-05-11T15:30:00+09:00",
}
result = _select_price_from_response(payload)
self.assertEqual(result["price"], 150000)
self.assertEqual(result["session"], "CLOSED")
def test_missing_close_price_returns_none(self):
"""closePrice가 없거나 비숫자면 price는 None."""
payload = {"closePrice": "", "marketStatus": "CLOSE"}
result = _select_price_from_response(payload)
self.assertIsNone(result["price"])
def test_alternate_stock_end_price_field(self):
"""일부 응답은 stockEndPrice 필드를 사용 — 폴백 인식."""
payload = {
"stockEndPrice": "12,345",
"marketStatus": "OPEN",
}
result = _select_price_from_response(payload)
self.assertEqual(result["price"], 12345)
self.assertEqual(result["session"], "REGULAR")
if __name__ == "__main__":
unittest.main()

View File

@@ -0,0 +1,61 @@
import datetime as dt
import sqlite3
import pandas as pd
import pytest
from app.screener.engine import ScreenContext
from app.screener.schema import ensure_screener_schema
from app.screener._test_fixtures import make_master, make_prices, make_flow
@pytest.fixture
def conn(tmp_path):
db_path = tmp_path / "ctx.db"
c = sqlite3.connect(db_path)
ensure_screener_schema(c)
yield c
c.close()
def _seed(conn, master_df, prices_df, flow_df):
now = dt.datetime.utcnow().isoformat()
for t, row in master_df.iterrows():
conn.execute("""INSERT INTO krx_master (ticker,name,market,market_cap,
is_managed,is_preferred,is_spac,listed_date,updated_at)
VALUES (?,?,?,?,?,?,?,?,?)""",
(t, row["name"], row["market"], row["market_cap"],
row["is_managed"], row["is_preferred"], row["is_spac"], None, now))
prices_df.to_sql("krx_daily_prices", conn, if_exists="append", index=False)
flow_df.to_sql("krx_flow", conn, if_exists="append", index=False)
conn.commit()
def test_load_returns_dataframes(conn):
asof = dt.date(2026, 5, 12)
_seed(conn,
make_master(["005930", "035420"]),
make_prices(["005930", "035420"], days=30, asof=asof),
make_flow(["005930", "035420"], days=30, asof=asof))
ctx = ScreenContext.load(conn, asof, lookback_days=30)
assert ctx.asof == asof
assert set(ctx.master.index) == {"005930", "035420"}
    assert ctx.prices.shape[0] == 60  # 2 tickers × 30 days
assert ctx.flow.shape[0] == 60
def test_restrict_filters_tickers(conn):
asof = dt.date(2026, 5, 12)
_seed(conn,
make_master(["005930", "035420", "091990"]),
make_prices(["005930", "035420", "091990"], days=30, asof=asof),
make_flow(["005930", "035420", "091990"], days=30, asof=asof))
ctx = ScreenContext.load(conn, asof, lookback_days=30)
scoped = ctx.restrict(pd.Index(["005930"]))
assert list(scoped.master.index) == ["005930"]
assert (scoped.prices["ticker"] == "005930").all()
assert (scoped.flow["ticker"] == "005930").all()

View File

@@ -0,0 +1,55 @@
import datetime as dt
import pandas as pd
import pytest
from app.screener.engine import ScreenContext, Screener, combine
from app.screener.nodes.hygiene import HygieneGate
from app.screener.nodes.foreign_buy import ForeignBuy
from app.screener.nodes.momentum import Momentum20
from app.screener._test_fixtures import make_master, make_prices, make_flow, make_kospi
def _ctx(master, prices, flow):
return ScreenContext(master=master, prices=prices, flow=flow,
kospi=make_kospi(days=260),
asof=dt.date(2026, 5, 12))
def test_combine_weighted_average():
scores = {
"foreign_buy": pd.Series({"A": 80, "B": 20}),
"momentum": pd.Series({"A": 60, "B": 40}),
}
weights = {"foreign_buy": 2.0, "momentum": 1.0}
out = combine(scores, weights)
# A: (80*2 + 60*1)/3 = 73.33
assert abs(out["A"] - 73.333) < 0.1
assert abs(out["B"] - 26.666) < 0.1
def test_combine_all_zero_weight_raises():
scores = {"foreign_buy": pd.Series({"A": 80})}
with pytest.raises(ValueError, match="no active"):
combine(scores, {"foreign_buy": 0})
def test_screener_run_end_to_end():
asof = dt.date(2026, 5, 12)
master = make_master(["GOOD", "SMALL"],
market_caps={"GOOD": 200_000_000_000, "SMALL": 1_000_000_000})
prices = make_prices(["GOOD", "SMALL"], days=260, asof=asof, trend_pct=0.1)
flow = make_flow(["GOOD", "SMALL"], days=260, asof=asof,
foreign_per_day={"GOOD": 100_000_000, "SMALL": 0})
ctx = _ctx(master, prices, flow)
screener = Screener(
gate=HygieneGate(),
score_nodes=[ForeignBuy(), Momentum20()],
weights={"foreign_buy": 1.0, "momentum": 1.0},
node_params={"foreign_buy": {"window_days": 5}, "momentum": {"window_days": 20}},
gate_params={**HygieneGate.default_params, "min_listed_days": 0},
top_n=10,
)
result = screener.run(ctx)
    assert result.survivors_count == 1  # SMALL is dropped by the gate
assert result.ranked.index[0] == "GOOD"

View File

@@ -0,0 +1,24 @@
import pandas as pd
import pytest
from app.screener.nodes.base import percentile_rank
def test_percentile_rank_basic():
s = pd.Series([10, 20, 30, 40, 50])
out = percentile_rank(s)
assert (out >= 0).all() and (out <= 100).all()
assert out.iloc[0] < out.iloc[-1] # smallest gets lowest rank
def test_percentile_rank_all_equal_returns_50():
s = pd.Series([42, 42, 42, 42])
out = percentile_rank(s)
assert (out == 50.0).all()
def test_percentile_rank_handles_nan():
s = pd.Series([1.0, float("nan"), 3.0, 5.0])
out = percentile_rank(s)
assert pd.isna(out.iloc[1])
assert (out.dropna() >= 0).all()

View File

@@ -0,0 +1,32 @@
import datetime as dt
import pandas as pd
from app.screener.engine import ScreenContext
from app.screener.nodes.foreign_buy import ForeignBuy
from app.screener._test_fixtures import make_master, make_prices, make_flow
def _ctx(master, prices, flow):
return ScreenContext(master=master, prices=prices, flow=flow,
kospi=pd.Series(dtype=float, name="kospi"),
asof=dt.date(2026, 5, 12))
def test_higher_foreign_buy_gets_higher_score():
asof = dt.date(2026, 5, 12)
master = make_master(["A", "B"])
prices = make_prices(["A", "B"], days=30, asof=asof)
flow = make_flow(["A", "B"], days=30, asof=asof,
foreign_per_day={"A": 100_000_000, "B": 0})
out = ForeignBuy().compute(_ctx(master, prices, flow), {"window_days": 5})
assert out["A"] > out["B"]
assert 0 <= out.min() <= out.max() <= 100
def test_all_zero_returns_50():
asof = dt.date(2026, 5, 12)
master = make_master(["A", "B"])
prices = make_prices(["A", "B"], days=30, asof=asof)
flow = make_flow(["A", "B"], days=30, asof=asof, foreign_per_day={"A": 0, "B": 0})
out = ForeignBuy().compute(_ctx(master, prices, flow), {"window_days": 5})
assert (out == 50.0).all()

Some files were not shown because too many files have changed in this diff.