web-page-backend/docs/superpowers/plans/2026-05-09-gpu-video-offload.md
gahusb bb0b0dff25 docs: GPU video encoding offload spec + plan
Offload ffmpeg (5-minute timeouts on the NAS's low-power J4025 CPU) to NVENC on a
Windows PC with an RTX 5070 Ti. Adds an /encode_video endpoint to the existing
music_ai server; if the server is down the NAS fails immediately (no local
fallback). Unauthenticated on the LAN.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-09 01:52:34 +09:00


GPU Video Encoding Offload: Implementation Plan

For agentic workers: REQUIRED SUB-SKILL: Use superpowers:subagent-driven-development.

Goal: Offload the NAS's ffmpeg video encoding to NVENC on the Windows PC (RTX 5070 Ti).

Architecture: music-lab (NAS) → HTTP POST → music_ai (Windows, port 8765, /encode_video) → ffmpeg NVENC → mp4 written directly to the NAS over SMB. If the Windows server is down, the NAS fails immediately.
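Since one file is addressed three ways (container path inside music-lab, host path on the NAS, SMB path on Windows), the core of the design is a two-hop path translation. A minimal self-contained sketch of the chain, mirroring the translation rules that Tasks 1 and 3 implement (function names here are illustrative, not the final API):

```python
# Two-hop path translation: music-lab container path -> NAS host path -> Windows SMB path.
# Mirrors _container_to_nas (Task 3, NAS side) and translate_path (Task 1, Windows side).

def container_to_nas(path: str) -> str:
    """Map a container path to the host side of the docker bind mounts."""
    if path.startswith("/app/data/videos/"):
        return path.replace("/app/data/videos/", "/volume1/docker/webpage/data/videos/", 1)
    if path.startswith("/app/data/"):
        return "/volume1/docker/webpage/data/music/" + path[len("/app/data/"):]
    return path

def nas_to_windows(nas_path: str) -> str:
    """Map a NAS host path to the SMB share mounted on Windows as Z:\\."""
    if not nas_path.startswith("/volume1/"):
        raise ValueError(f"unexpected prefix: {nas_path}")
    return "Z:\\" + nas_path[len("/volume1/"):].replace("/", "\\")

print(nas_to_windows(container_to_nas("/app/data/videos/3/cover.jpg")))
# → Z:\docker\webpage\data\videos\3\cover.jpg
```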

Tech Stack: httpx (HTTP client on the NAS side), FastAPI (endpoint on the Windows server), ffmpeg.exe with NVENC.

Spec: docs/superpowers/specs/2026-05-09-gpu-video-offload-design.md


File Structure

Path: Responsibility
music_ai/video_encoder.py (new): path translation + ffmpeg NVENC subprocess invocation + output validation
music_ai/server.py (modify): register the /encode_video POST endpoint; add ffmpeg/nvenc info to /health
music_ai/.env.example (modify): document NAS_VOLUME_PREFIX, WINDOWS_DRIVE_ROOT, FFMPEG_PATH
music_ai/tests/test_video_encoder.py (new): unit tests for translate_path and the encode endpoint
music-lab/app/pipeline/video.py (rewrite): drop subprocess; call the Windows server via httpx
music-lab/tests/test_video_thumb.py (rewrite video tests): respx-mock based
web-backend/docker-compose.yml (modify): add 3 music-lab env vars
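Before the task breakdown, the wire contract at a glance. This consolidates the request body and the error-stage to HTTP-status mapping defined across Tasks 1-3 (a summary sketch with illustrative paths, not additional code to write):

```python
# /encode_video request body (field names from the Pydantic model in Task 2).
request = {
    "cover_path_nas": "/volume1/docker/webpage/data/videos/3/cover.jpg",
    "audio_path_nas": "/volume1/docker/webpage/data/music/abc.mp3",
    "output_path_nas": "/volume1/docker/webpage/data/videos/3/video.mp4",
    "resolution": "1920x1080",
    "duration_sec": 120,
    "style": "visualizer",
}

# EncodeError.stage -> HTTP status, as mapped by the endpoint in Task 2:
# caller mistakes (bad input, untranslatable path) are 400; server-side
# failures (ffmpeg, output validation) are 500.
def status_for(stage: str) -> int:
    return 400 if stage in ("input_validation", "path_translate") else 500

print(status_for("path_translate"), status_for("ffmpeg"))
# → 400 500
```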

Task 1: Windows music_ai/video_encoder.py + tests

Files:

  • Create: music_ai/video_encoder.py
  • Create: music_ai/tests/test_video_encoder.py

Step 1: Write failing test

# music_ai/tests/test_video_encoder.py
import os
import pytest
from unittest.mock import patch, MagicMock
from video_encoder import translate_path, encode_video, EncodeError


@pytest.fixture
def env(monkeypatch):
    monkeypatch.setenv("NAS_VOLUME_PREFIX", "/volume1/")
    monkeypatch.setenv("WINDOWS_DRIVE_ROOT", "Z:\\")
    monkeypatch.setenv("FFMPEG_PATH", "C:\\ffmpeg\\bin\\ffmpeg.exe")


def test_translate_path_basic(env):
    assert translate_path("/volume1/docker/webpage/data/x.jpg") == r"Z:\docker\webpage\data\x.jpg"


def test_translate_path_nested(env):
    assert translate_path("/volume1/docker/webpage/data/videos/3/cover.jpg") == r"Z:\docker\webpage\data\videos\3\cover.jpg"


def test_translate_path_rejects_bad_prefix(env):
    with pytest.raises(ValueError):
        translate_path("/etc/passwd")


@patch("subprocess.run")
def test_encode_video_success(mock_run, env, tmp_path):
    # fake input files
    cover = tmp_path / "cover.jpg"
    cover.write_bytes(b"\x00" * 100)
    audio = tmp_path / "audio.mp3"
    audio.write_bytes(b"\x00" * 100)
    out = tmp_path / "video.mp4"

    def fake_run(cmd, **kwargs):
        # simulate an ffmpeg run by creating the output file
        out.write_bytes(b"\x00" * (2 * 1024 * 1024))  # 2MB
        return MagicMock(returncode=0, stderr="")
    mock_run.side_effect = fake_run

    # mock translate_path so the input paths map into tmp_path
    with patch("video_encoder.translate_path", side_effect=lambda p: str(p).replace("/volume1/", str(tmp_path) + "/")):
        result = encode_video(
            cover_path_nas="/volume1/cover.jpg",
            audio_path_nas="/volume1/audio.mp3",
            output_path_nas="/volume1/video.mp4",
            resolution="1920x1080",
            duration_sec=120,
        )
    assert result["ok"] is True
    assert result["encoder"] == "h264_nvenc"
    assert result["output_bytes"] > 1024 * 1024


@patch("subprocess.run")
def test_encode_video_input_missing(mock_run, env, tmp_path):
    with pytest.raises(EncodeError) as exc:
        encode_video(
            cover_path_nas="/volume1/missing.jpg",
            audio_path_nas="/volume1/missing.mp3",
            output_path_nas="/volume1/out.mp4",
            resolution="1920x1080",
            duration_sec=120,
        )
    assert "input_validation" in str(exc.value)


@patch("subprocess.run")
def test_encode_video_ffmpeg_failure(mock_run, env, tmp_path):
    cover = tmp_path / "cover.jpg"; cover.write_bytes(b"\x00")
    audio = tmp_path / "audio.mp3"; audio.write_bytes(b"\x00")
    mock_run.return_value = MagicMock(returncode=1, stderr="invalid codec\n" * 50)

    with patch("video_encoder.translate_path", side_effect=lambda p: str(p).replace("/volume1/", str(tmp_path) + "/")):
        with pytest.raises(EncodeError) as exc:
            encode_video(
                cover_path_nas="/volume1/cover.jpg",
                audio_path_nas="/volume1/audio.mp3",
                output_path_nas="/volume1/out.mp4",
                resolution="1920x1080",
                duration_sec=120,
            )
    assert "ffmpeg" in str(exc.value).lower()


@patch("subprocess.run")
def test_encode_video_output_too_small(mock_run, env, tmp_path):
    cover = tmp_path / "cover.jpg"; cover.write_bytes(b"\x00")
    audio = tmp_path / "audio.mp3"; audio.write_bytes(b"\x00")
    def fake_run(cmd, **kwargs):
        (tmp_path / "out.mp4").write_bytes(b"\x00" * 100)  # 100 bytes — too small
        return MagicMock(returncode=0, stderr="")
    mock_run.side_effect = fake_run

    with patch("video_encoder.translate_path", side_effect=lambda p: str(p).replace("/volume1/", str(tmp_path) + "/")):
        with pytest.raises(EncodeError) as exc:
            encode_video(
                cover_path_nas="/volume1/cover.jpg",
                audio_path_nas="/volume1/audio.mp3",
                output_path_nas="/volume1/out.mp4",
                resolution="1920x1080",
                duration_sec=120,
            )
    assert "output_check" in str(exc.value)


def test_resolution_validation(env):
    with pytest.raises(EncodeError) as exc:
        encode_video(
            cover_path_nas="/volume1/x.jpg",
            audio_path_nas="/volume1/x.mp3",
            output_path_nas="/volume1/out.mp4",
            resolution="invalid",
            duration_sec=120,
        )
    assert "resolution" in str(exc.value).lower()

Step 2: Run test to verify it fails

cd music_ai && python -m pytest tests/test_video_encoder.py -v

Expected: ImportError on video_encoder module.

Step 3: Implement video_encoder.py

"""GPU(NVENC) 영상 인코더 — NAS music-lab에서 호출."""
import os
import re
import subprocess
import logging

logger = logging.getLogger("music_ai.video_encoder")

NAS_VOLUME_PREFIX = os.getenv("NAS_VOLUME_PREFIX", "/volume1/")
WINDOWS_DRIVE_ROOT = os.getenv("WINDOWS_DRIVE_ROOT", "Z:\\")
FFMPEG_PATH = os.getenv("FFMPEG_PATH", "ffmpeg")
FFMPEG_TIMEOUT_S = 180
RESOLUTION_RE = re.compile(r"^\d{3,4}x\d{3,4}$")
MIN_OUTPUT_BYTES = 1024 * 1024  # 1MB


class EncodeError(Exception):
    """{stage: input_validation|path_translate|ffmpeg|output_check, message: ...}"""
    def __init__(self, stage: str, message: str):
        self.stage = stage
        self.message = message
        super().__init__(f"[{stage}] {message}")


def translate_path(nas_path: str) -> str:
    """Translate a NAS absolute path to a Windows SMB path."""
    if not nas_path.startswith(NAS_VOLUME_PREFIX):
        raise ValueError(f"NAS prefix mismatch: {nas_path}")
    rel = nas_path[len(NAS_VOLUME_PREFIX):]
    return WINDOWS_DRIVE_ROOT + rel.replace("/", "\\")


def encode_video(*, cover_path_nas: str, audio_path_nas: str,
                  output_path_nas: str, resolution: str,
                  duration_sec: int = 0, style: str = "visualizer") -> dict:
    """영상 인코딩 + Z:\\에 직접 저장."""
    # 1) Resolution 검증
    if not RESOLUTION_RE.match(resolution):
        raise EncodeError("input_validation", f"invalid resolution: {resolution}")
    w, h = resolution.split("x")

    # 2) Translate paths
    try:
        cover_win = translate_path(cover_path_nas)
        audio_win = translate_path(audio_path_nas)
        out_win   = translate_path(output_path_nas)
    except ValueError as e:
        raise EncodeError("path_translate", str(e))

    # 3) Check that inputs exist
    if not os.path.isfile(cover_win):
        raise EncodeError("input_validation", f"cover not found: {cover_win}")
    if not os.path.isfile(audio_win):
        raise EncodeError("input_validation", f"audio not found: {audio_win}")

    # 4) Ensure the output directory exists
    os.makedirs(os.path.dirname(out_win), exist_ok=True)

    # 5) Build the ffmpeg command
    cmd = [
        FFMPEG_PATH, "-y",
        "-hwaccel", "cuda",
        "-loop", "1", "-i", cover_win,
        "-i", audio_win,
        "-filter_complex",
        f"[0:v]scale={w}:{h},format=yuv420p[bg];"
        f"[1:a]showwaves=s={w}x200:mode=cline:colors=0xFF4444@0.8[wave];"
        f"[bg][wave]overlay=0:({h}-200)[out]",
        "-map", "[out]", "-map", "1:a",
        "-c:v", "h264_nvenc",
        "-preset", "p4",
        "-rc", "vbr",
        "-cq", "23",
        "-b:v", "0",
        "-pix_fmt", "yuv420p",
        "-c:a", "aac", "-b:a", "192k",
        "-shortest", out_win,
    ]
    logger.info("ffmpeg: %s", " ".join(cmd))

    # 6) Run ffmpeg
    import time
    t0 = time.time()
    try:
        result = subprocess.run(cmd, capture_output=True, text=True, timeout=FFMPEG_TIMEOUT_S)
    except subprocess.TimeoutExpired:
        raise EncodeError("ffmpeg", f"timeout after {FFMPEG_TIMEOUT_S}s")
    duration_ms = int((time.time() - t0) * 1000)

    if result.returncode != 0:
        raise EncodeError("ffmpeg", f"returncode={result.returncode}: {result.stderr[-800:]}")

    # 7) Validate the output
    if not os.path.isfile(out_win):
        raise EncodeError("output_check", "output file not created")
    output_bytes = os.path.getsize(out_win)
    if output_bytes < MIN_OUTPUT_BYTES:
        raise EncodeError("output_check", f"output too small: {output_bytes} bytes")

    return {
        "ok": True,
        "duration_ms": duration_ms,
        "output_path_nas": output_path_nas,
        "output_bytes": output_bytes,
        "encoder": "h264_nvenc",
        "preset": "p4",
    }


def check_ffmpeg_nvenc() -> bool:
    """서버 시작 시 NVENC 가용성 확인."""
    try:
        result = subprocess.run(
            [FFMPEG_PATH, "-encoders"],
            capture_output=True, text=True, timeout=10,
        )
        return "h264_nvenc" in result.stdout
    except Exception:
        return False
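For reference, the filter graph built in step 5 above, decomposed with one comment per stage (the resulting string is identical to the one in encode_video):

```python
# The -filter_complex argument from encode_video, annotated.
w, h = "1920", "1080"
graph = (
    f"[0:v]scale={w}:{h},format=yuv420p[bg];"                          # looped cover image -> full-frame background
    f"[1:a]showwaves=s={w}x200:mode=cline:colors=0xFF4444@0.8[wave];"  # audio -> 200 px-tall waveform strip
    f"[bg][wave]overlay=0:({h}-200)[out]"                              # waveform overlaid along the bottom edge
)
print(graph)
```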

Step 4: Run tests

cd music_ai && python -m pytest tests/test_video_encoder.py -v

Expected: 8 PASS

Step 5: Commit

cd C:/Users/jaeoh/Desktop/workspace/music_ai
git init 2>/dev/null || true   # may not be a git repo, that's OK
# music_ai is local-only per CLAUDE.md, no remote push

(music_ai is local-only; just save the file. No git push needed.)


Task 2: Windows music_ai/server.py /encode_video endpoint + health extension

Files:

  • Modify: music_ai/server.py
  • Modify: music_ai/.env.example

Step 1: Read existing server.py to understand FastAPI pattern + existing /health

Step 2: Add /encode_video endpoint

# server.py: additions
from pydantic import BaseModel
from fastapi import HTTPException
import video_encoder


class EncodeVideoRequest(BaseModel):
    cover_path_nas: str
    audio_path_nas: str
    output_path_nas: str
    resolution: str = "1920x1080"
    duration_sec: int = 0
    style: str = "visualizer"


@app.post("/encode_video")
def encode_video_endpoint(req: EncodeVideoRequest):
    try:
        result = video_encoder.encode_video(
            cover_path_nas=req.cover_path_nas,
            audio_path_nas=req.audio_path_nas,
            output_path_nas=req.output_path_nas,
            resolution=req.resolution,
            duration_sec=req.duration_sec,
            style=req.style,
        )
        return result
    except video_encoder.EncodeError as e:
        # input_validation, path_translate → 400
        # ffmpeg, output_check → 500
        status_code = 400 if e.stage in ("input_validation", "path_translate") else 500
        raise HTTPException(
            status_code=status_code,
            detail={"ok": False, "stage": e.stage, "error": e.message},
        )

Step 3: Extend /health

Add to the existing /health response:

import torch  # if existing health uses it
import video_encoder

# Module-level cache so health doesn't run ffmpeg every call
_FFMPEG_NVENC_CACHED = None
def _ffmpeg_nvenc_available():
    global _FFMPEG_NVENC_CACHED
    if _FFMPEG_NVENC_CACHED is None:
        _FFMPEG_NVENC_CACHED = video_encoder.check_ffmpeg_nvenc()
    return _FFMPEG_NVENC_CACHED


@app.get("/health")
def health():
    return {
        "ok": True,
        "gpu": torch.cuda.get_device_name(0) if torch.cuda.is_available() else None,  # 또는 기존 형식 유지
        "musicgen_loaded": True,  # 기존 그대로
        "ffmpeg_path": video_encoder.FFMPEG_PATH,
        "ffmpeg_nvenc": _ffmpeg_nvenc_available(),
    }

(Match the exact shape of the existing /health by reading the code first; the above is an example.)

Step 4: Update .env.example

# Existing
MODEL_NAME=facebook/musicgen-stereo-large
OUTPUT_DIR=output
SERVER_PORT=8765

# New for video encoder
NAS_VOLUME_PREFIX=/volume1/
WINDOWS_DRIVE_ROOT=Z:\
FFMPEG_PATH=C:\ffmpeg\bin\ffmpeg.exe

Step 5: Manual verification

cd music_ai && start.bat   # or the appropriate start command
curl http://localhost:8765/health
# Expected: {..., "ffmpeg_nvenc": true}

curl -X POST http://localhost:8765/encode_video -H "Content-Type: application/json" -d '{
  "cover_path_nas": "/volume1/docker/webpage/data/videos/3/cover.jpg",
  "audio_path_nas": "/volume1/docker/webpage/data/1c695df3-8a82-4c09-ba7b-82c07608ec5b.mp3",
  "output_path_nas": "/volume1/docker/webpage/data/videos/test/video.mp4",
  "resolution": "1920x1080",
  "duration_sec": 176
}'
# Expected: 200; the encode takes roughly 10-20 s (duration_ms ≈ 10000-20000)

(Adjust the actual file paths for your environment.)

Step 6: Commit (music_ai is local-only, no remote)


Task 3: NAS music-lab: rewrite pipeline/video.py + tests

Files:

  • Rewrite: music-lab/app/pipeline/video.py
  • Rewrite: music-lab/tests/test_video_thumb.py (video portion only)

Step 1: Replace the video tests with new (initially failing) ones

# music-lab/tests/test_video_thumb.py: replace only the video-related tests
import pytest
import respx
import httpx
from httpx import Response
from app.pipeline import video, thumb, storage


@pytest.fixture
def encoder_env(monkeypatch):
    monkeypatch.setenv("WINDOWS_VIDEO_ENCODER_URL", "http://192.168.45.59:8765")
    monkeypatch.setattr(video, "ENCODER_URL", "http://192.168.45.59:8765")


@respx.mock
def test_generate_video_calls_remote_encoder(encoder_env, tmp_path, monkeypatch):
    monkeypatch.setattr(storage, "VIDEO_DATA_DIR", str(tmp_path))
    respx.post("http://192.168.45.59:8765/encode_video").mock(
        return_value=Response(200, json={
            "ok": True, "duration_ms": 12000,
            "output_path_nas": "/volume1/docker/webpage/data/videos/3/video.mp4",
            "output_bytes": 28000000,
            "encoder": "h264_nvenc", "preset": "p4",
        })
    )
    out = video.generate(
        pipeline_id=3,
        audio_path="/app/data/1c695df3.mp3",
        cover_path="/app/data/videos/3/cover.jpg",
        genre="lo-fi", duration_sec=120, resolution="1920x1080",
        style="visualizer",
    )
    assert out["url"].endswith("/3/video.mp4")
    assert out["used_fallback"] is False
    assert out["encode_duration_ms"] == 12000


@respx.mock
def test_generate_video_raises_on_connection_error(encoder_env, monkeypatch, tmp_path):
    monkeypatch.setattr(storage, "VIDEO_DATA_DIR", str(tmp_path))
    respx.post("http://192.168.45.59:8765/encode_video").mock(
        side_effect=httpx.ConnectError("Connection refused")
    )
    with pytest.raises(video.VideoGenerationError) as exc:
        video.generate(
            pipeline_id=4,
            audio_path="/app/data/x.mp3", cover_path="/app/data/videos/4/cover.jpg",
            genre="lo-fi", duration_sec=120, resolution="1920x1080",
        )
    assert "연결 실패" in str(exc.value) or "Connection" in str(exc.value)


@respx.mock
def test_generate_video_raises_on_500(encoder_env, monkeypatch, tmp_path):
    monkeypatch.setattr(storage, "VIDEO_DATA_DIR", str(tmp_path))
    respx.post("http://192.168.45.59:8765/encode_video").mock(
        return_value=Response(500, json={"ok": False, "stage": "ffmpeg", "error": "bad codec"})
    )
    with pytest.raises(video.VideoGenerationError) as exc:
        video.generate(
            pipeline_id=5,
            audio_path="/app/data/x.mp3", cover_path="/app/data/videos/5/cover.jpg",
            genre="lo-fi", duration_sec=120, resolution="1920x1080",
        )
    assert "Windows 인코더 오류" in str(exc.value)
    assert "ffmpeg" in str(exc.value)


def test_generate_video_no_url_configured(monkeypatch, tmp_path):
    monkeypatch.setattr(storage, "VIDEO_DATA_DIR", str(tmp_path))
    monkeypatch.setattr(video, "ENCODER_URL", "")
    with pytest.raises(video.VideoGenerationError) as exc:
        video.generate(
            pipeline_id=6,
            audio_path="/app/data/x.mp3", cover_path="/app/data/videos/6/cover.jpg",
            genre="lo-fi", duration_sec=120, resolution="1920x1080",
        )
    assert "WINDOWS_VIDEO_ENCODER_URL" in str(exc.value)


def test_container_to_nas_videos_path(monkeypatch):
    monkeypatch.setenv("NAS_VIDEOS_ROOT", "/volume1/docker/webpage/data/videos")
    monkeypatch.setenv("NAS_MUSIC_ROOT", "/volume1/docker/webpage/data/music")
    assert video._container_to_nas("/app/data/videos/3/cover.jpg") == "/volume1/docker/webpage/data/videos/3/cover.jpg"


def test_container_to_nas_music_path(monkeypatch):
    monkeypatch.setenv("NAS_VIDEOS_ROOT", "/volume1/docker/webpage/data/videos")
    monkeypatch.setenv("NAS_MUSIC_ROOT", "/volume1/docker/webpage/data/music")
    assert video._container_to_nas("/app/data/abc.mp3") == "/volume1/docker/webpage/data/music/abc.mp3"

Delete the old test_generate_video_calls_ffmpeg and test_generate_video_failure_marks_failed. Keep the thumb-related tests unchanged.

Step 2: Run, verify fail

cd music-lab && python -m pytest tests/test_video_thumb.py -v

Expected: the video-related tests fail (or an ImportError).

Step 3: Rewrite app/pipeline/video.py

"""영상 비주얼 생성 — Windows GPU 서버 (NVENC) 호출.

Windows 서버 다운/실패 시 즉시 예외 (NAS 로컬 폴백 없음 — 의도적 결정).
"""
import os
import logging
import httpx

from . import storage

logger = logging.getLogger("music-lab.video")

ENCODER_URL = os.getenv("WINDOWS_VIDEO_ENCODER_URL", "")
ENCODER_TIMEOUT_S = 200  # Windows-side ffmpeg timeout (180 s) plus margin

# NAS host absolute-path prefixes (the host side of the docker bind mounts)
NAS_VIDEOS_ROOT = os.getenv("NAS_VIDEOS_ROOT", "/volume1/docker/webpage/data/videos")
NAS_MUSIC_ROOT = os.getenv("NAS_MUSIC_ROOT", "/volume1/docker/webpage/data/music")


class VideoGenerationError(Exception):
    pass


def generate(*, pipeline_id: int, audio_path: str, cover_path: str,
             genre: str, duration_sec: int, resolution: str = "1920x1080",
             style: str = "visualizer") -> dict:
    """원격 Windows GPU 서버 호출. 다운/실패 시 즉시 예외."""
    if not ENCODER_URL:
        raise VideoGenerationError(
            "WINDOWS_VIDEO_ENCODER_URL 미설정 — Windows 인코더 서버 주소 필요"
        )

    out_path = os.path.join(storage.pipeline_dir(pipeline_id), "video.mp4")
    nas_audio = _container_to_nas(audio_path)
    nas_cover = _container_to_nas(cover_path)
    nas_output = _container_to_nas(out_path)

    payload = {
        "cover_path_nas": nas_cover,
        "audio_path_nas": nas_audio,
        "output_path_nas": nas_output,
        "resolution": resolution,
        "duration_sec": duration_sec,
        "style": style,
    }

    logger.info("Windows 인코더 호출: pipeline=%d audio=%s", pipeline_id, audio_path)
    try:
        with httpx.Client(timeout=ENCODER_TIMEOUT_S) as client:
            resp = client.post(f"{ENCODER_URL}/encode_video", json=payload)
    except (httpx.ConnectError, httpx.ReadTimeout, httpx.WriteTimeout, httpx.NetworkError) as e:
        raise VideoGenerationError(f"Windows 인코더 연결 실패: {e}")

    if resp.status_code != 200:
        try:
            detail = resp.json().get("detail", resp.json())
        except Exception:
            detail = {"error": resp.text[:300]}
        stage = detail.get("stage", "?") if isinstance(detail, dict) else "?"
        error = detail.get("error", str(detail)) if isinstance(detail, dict) else str(detail)
        raise VideoGenerationError(
            f"Windows 인코더 오류 ({resp.status_code}): {stage}: {error}"
        )

    data = resp.json()
    if not data.get("ok"):
        raise VideoGenerationError(f"Windows 인코더 응답 ok=false: {data}")

    return {
        "url": storage.media_url(pipeline_id, "video.mp4"),
        "used_fallback": False,
        "duration_sec": duration_sec,
        "encode_duration_ms": data.get("duration_ms"),
        "encoder": data.get("encoder", "h264_nvenc"),
    }


def _container_to_nas(container_path: str) -> str:
    """ /app/data/videos/3/cover.jpg → /volume1/docker/webpage/data/videos/3/cover.jpg
        /app/data/abc.mp3            → /volume1/docker/webpage/data/music/abc.mp3
    """
    if container_path.startswith("/app/data/videos/"):
        return container_path.replace("/app/data/videos/", NAS_VIDEOS_ROOT + "/", 1)
    if container_path.startswith("/app/data/"):
        rel = container_path[len("/app/data/"):]
        return NAS_MUSIC_ROOT + "/" + rel
    return container_path

Step 4: Run tests

cd music-lab && python -m pytest tests/ -v

Expected: all PASS. Previously 73 tests; minus the 2 removed video tests plus the 6 new ones = 77. Verify the actual count.

Step 5: Commit + push

git -C C:/Users/jaeoh/Desktop/workspace/web-backend add music-lab/app/pipeline/video.py \
                                                       music-lab/tests/test_video_thumb.py
git -C C:/Users/jaeoh/Desktop/workspace/web-backend commit -m "feat(music-lab): offload video encoding to the Windows GPU server

- Rewrite pipeline/video.py: drop subprocess.run, call 192.168.45.59:8765/encode_video via httpx
- Raise VideoGenerationError immediately if the Windows server is down (no NAS-local fallback)
- Translate /app/data/* paths to /volume1/docker/webpage/data/* (_container_to_nas)
- Replace tests with respx mocks (6 new)"
git -C C:/Users/jaeoh/Desktop/workspace/web-backend push origin main

Task 4: docker-compose.yml env 추가

Files:

  • Modify: web-backend/docker-compose.yml

Step 1: Add to the music-lab service environment block

  music-lab:
    environment:
      # ... existing ...
      - WINDOWS_VIDEO_ENCODER_URL=${WINDOWS_VIDEO_ENCODER_URL}
      - NAS_VIDEOS_ROOT=${NAS_VIDEOS_ROOT:-/volume1/docker/webpage/data/videos}
      - NAS_MUSIC_ROOT=${NAS_MUSIC_ROOT:-/volume1/docker/webpage/data/music}

Step 2: Validate docker-compose syntax

cd C:/Users/jaeoh/Desktop/workspace/web-backend && python -c "import yaml; yaml.safe_load(open('docker-compose.yml'))" && echo OK
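The yaml one-liner only proves the file parses. A slightly stronger smoke check (a sketch; it assumes the env entries use the `- KEY=value` list form shown above) can confirm the three new variables actually landed in the file:

```python
# Smoke check: the three new env vars are present in the compose file.
# The file content is inlined here for illustration; in practice read
# docker-compose.yml from disk instead.
compose_text = """\
  music-lab:
    environment:
      - WINDOWS_VIDEO_ENCODER_URL=${WINDOWS_VIDEO_ENCODER_URL}
      - NAS_VIDEOS_ROOT=${NAS_VIDEOS_ROOT:-/volume1/docker/webpage/data/videos}
      - NAS_MUSIC_ROOT=${NAS_MUSIC_ROOT:-/volume1/docker/webpage/data/music}
"""

missing = [v for v in ("WINDOWS_VIDEO_ENCODER_URL", "NAS_VIDEOS_ROOT", "NAS_MUSIC_ROOT")
           if f"- {v}=" not in compose_text]
print("OK" if not missing else f"missing: {missing}")
# → OK
```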

Step 3: Commit + push

git -C C:/Users/jaeoh/Desktop/workspace/web-backend add docker-compose.yml
git -C C:/Users/jaeoh/Desktop/workspace/web-backend commit -m "chore(infra): add GPU encoder env vars (WINDOWS_VIDEO_ENCODER_URL)"
git -C C:/Users/jaeoh/Desktop/workspace/web-backend push origin main

Task 5: Manual user steps (performed by a human)

Follow-up steps; no code work:

  1. Windows PC: install ffmpeg + set PATH

    • https://www.gyan.dev/ffmpeg/builds/ → download the "release full" build
    • Extract to C:\ffmpeg\ → confirm C:\ffmpeg\bin\ffmpeg.exe exists
    • Add C:\ffmpeg\bin to the system PATH
    • Verify: ffmpeg -version and ffmpeg -encoders | findstr h264_nvenc
  2. Windows PC: add to music_ai/.env

    NAS_VOLUME_PREFIX=/volume1/
    WINDOWS_DRIVE_ROOT=Z:\
    FFMPEG_PATH=C:\ffmpeg\bin\ffmpeg.exe
    
  3. Windows PC: confirm the SMB mount: Z:\docker\webpage\data\ is accessible

  4. Windows PC: restart the music_ai server: start.bat

  5. Windows PC health check: curl http://localhost:8765/health and confirm ffmpeg_nvenc: true

  6. Add to the NAS .env

    WINDOWS_VIDEO_ENCODER_URL=http://192.168.45.59:8765
    
  7. Restart NAS music-lab: docker compose up -d music-lab

  8. E2E test: start a new pipeline from the progress tab and confirm the video stage completes in roughly 10-20 seconds


Self-Review

Spec coverage:

  • §4 Windows endpoint → Tasks 1, 2 ✓
  • §5 NAS video.py → Task 3 ✓
  • §6 error handling → Task 3 (httpx exceptions caught) ✓
  • §7 health monitoring → Task 2 (/health extension) ✓
  • §8 tests → Tasks 1, 3 ✓
  • §9 Windows prerequisites → Task 5 (manual user steps) ✓
  • §10 deliverables → all covered by the 4 code tasks

Placeholder scan: none.

Type consistency:

  • EncodeError(stage, message) defined in Task 1; e.stage/e.message used in Task 2 ✓
  • VideoGenerationError raised in Task 3, caught by the existing orchestrator ✓
  • Response JSON shape matches spec §4-2 ✓
  • Environment variable names consistent (NAS_VOLUME_PREFIX, WINDOWS_DRIVE_ROOT, FFMPEG_PATH, WINDOWS_VIDEO_ENCODER_URL, NAS_VIDEOS_ROOT, NAS_MUSIC_ROOT)