2 Commits

Author SHA1 Message Date
b4dd21e67a feat(packs-lab): add chunked resumable upload (offset-based)
The existing single-shot POST /upload is kept as-is; five chunk-upload
endpoints were added for 5GB+ reliability.

- POST /upload/init — consume the mint-token jti + create the session directory
- PUT /upload/{sid}/chunk?offset=N — match the offset, then append to the .part file
  · on mismatch: 409 + X-Current-Offset header announcing the resume point
- GET /upload/{sid}/status — report current written / expected_size
- POST /upload/{sid}/complete — atomic rename + Supabase INSERT
- DELETE /upload/{sid} — abort the session + clean up the partial file

auth.py: added verify_upload_token_no_consume() — chunk/complete/abort/status
must reuse the same mint-token, so it verifies only the signature + expiry
without consuming the jti.

models.py: added InitUploadResponse and ChunkUploadResponse.

Session state: PACK_BASE_DIR/.uploads/{jti}/meta.json + data.part (filesystem
persistence; single-container assumption).

Chunk size cap: PACK_CHUNK_MAX_SIZE env (default 64MB).

tests: 8 chunk-upload scenarios — full flow / offset mismatch / status /
abort / wrong token / incomplete complete / filename collision / host-path
storage. All 37 tests pass.

CLAUDE.md: added the 5 chunk endpoints + usage patterns to the packs-lab API table.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-12 02:36:20 +09:00
448dbd5f48 feat(packs-lab): retry/backoff for DSM calls + hardened upload cleanup
- dsm_client.py: _request_with_retry() retries only 5xx / transport / timeout
  errors with exponential backoff (DSM_MAX_RETRIES, DSM_BACKOFF_SEC env); logs
  the response body on DSM error codes.
- routes.py: wrapped the upload handler in try/finally to guarantee partial-file
  cleanup, and added try/except around the Supabase INSERT call itself so
  network exceptions are cleaned up too.
- test_dsm_client.py: added 4 retry scenarios (5xx→success / exhaustion /
  transport error / 4xx no-retry). All 29 tests pass.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-12 02:31:39 +09:00
7 changed files with 695 additions and 46 deletions

View File

@@ -666,10 +666,21 @@ docker compose up -d
 |--------|------|------|
 | POST | `/api/packs/sign-link` | Vercel HMAC → issue a 4-hour download URL via DSM Sharing.create |
 | POST | `/api/packs/admin/mint-token` | Vercel HMAC → issue a one-time upload token (default 30-min TTL) |
-| POST | `/api/packs/upload` | Bearer token → store multipart up to 5GB + Supabase INSERT |
+| POST | `/api/packs/upload` | Bearer token (single-shot) → store multipart up to 5GB + Supabase INSERT |
+| POST | `/api/packs/upload/init` | Bearer token → initialize a chunked-upload session (returns `session_id = jti`, `chunk_max_size`). Only init consumes the jti |
+| PUT | `/api/packs/upload/{session_id}/chunk?offset=N` | same Bearer token → append to the partial file (on offset mismatch: 409 + `X-Current-Offset` header) |
+| GET | `/api/packs/upload/{session_id}/status` | same Bearer token → report `{written, expected_size}` (for resume) |
+| POST | `/api/packs/upload/{session_id}/complete` | same Bearer token → rename the partial file + Supabase INSERT |
+| DELETE | `/api/packs/upload/{session_id}` | same Bearer token → abort the session + clean up the partial file |
 | GET | `/api/packs/list` | Vercel HMAC → list active pack_files (deleted_at IS NULL) |
 | DELETE | `/api/packs/{file_id}` | Vercel HMAC → soft delete (the DSM share expires on its own) |
+
+**Chunked upload flow (5GB+ reliability)**
+- Reuse the same mint-token as the Bearer token across init, chunk, status, complete, and abort (only init consumes the jti)
+- Session state lives inside the container at `PACK_BASE_DIR/.uploads/{jti}/meta.json + data.part`
+- Chunk retry: the client finds the resume point via the PUT response header `X-Current-Offset` or `GET /status`
+- Env var `PACK_CHUNK_MAX_SIZE` (default 64MB) — too large strains nginx buffering; too small wastes round trips
 ### deployer (deployer/)
 - Webhook verification: `X-Gitea-Signature` (HMAC SHA256, via `compare_digest`)
 - The secret is managed via the `WEBHOOK_SECRET` env var
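The resume contract documented above (PUT with `offset=N`, 409 + `X-Current-Offset` on mismatch) can be sketched from the client side. The snippet below is an illustration, not code from this repo: `upload_with_resume`, `put_chunk`, and `FakeSession` are hypothetical names, with `FakeSession` standing in for the server's append-at-offset behavior.

```python
# Hypothetical client-side resume loop for the chunked upload flow above.
# put_chunk(offset, chunk) stands in for PUT /upload/{sid}/chunk?offset=N and
# returns (status, server_offset): 200 → new `written`, 409 → X-Current-Offset.
def upload_with_resume(data: bytes, chunk_size: int, put_chunk) -> int:
    offset = 0
    while offset < len(data):
        status, server_offset = put_chunk(offset, data[offset:offset + chunk_size])
        if status in (200, 409):
            offset = server_offset  # on 409 this jumps to the server's resume point
        else:
            raise RuntimeError(f"chunk upload failed with HTTP {status}")
    return offset


class FakeSession:
    """Toy server: appends only when the offset matches the bytes written so far."""
    def __init__(self):
        self.buf = bytearray()

    def put_chunk(self, offset: int, chunk: bytes):
        if offset != len(self.buf):
            return 409, len(self.buf)  # mismatch → report the resume point
        self.buf.extend(chunk)
        return 200, len(self.buf)
```

Because the client trusts the server's offset on both 200 and 409, a crash-and-restart client can blindly PUT from zero and converge on the correct resume point in one round trip.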

View File

@@ -55,8 +55,8 @@ def mint_upload_token(payload: dict) -> str:
     return base64.urlsafe_b64encode(body).decode() + "." + sig

-def verify_upload_token(token: str) -> dict:
-    """Verify the upload token + mark the jti as used."""
+def _decode_upload_token(token: str) -> dict:
+    """Verify only the signature, expiry, and jti presence. No jti marking."""
     try:
         b64, sig = token.split(".", 1)
         body = base64.urlsafe_b64decode(b64.encode())
@@ -72,13 +72,25 @@ def verify_upload_token(token: str) -> dict:
     if int(time.time()) > expires_at:
         raise HTTPException(status_code=401, detail="토큰 만료")
-    jti = payload.get("jti")
-    if not jti:
+    if not payload.get("jti"):
         raise HTTPException(status_code=401, detail="jti 누락")
+    return payload
+
+
+def verify_upload_token(token: str) -> dict:
+    """Verify the upload token + mark the jti as used. Only for single-shot upload and chunked init."""
+    payload = _decode_upload_token(token)
+    jti = payload["jti"]
     with _jti_lock:
         if jti in _used_jti:
             raise HTTPException(status_code=409, detail="이미 사용된 토큰")
         _used_jti.add(jti)
     return payload
+
+
+def verify_upload_token_no_consume(token: str) -> dict:
+    """Verify the upload token only (no jti consume). Used by chunked chunk/complete/abort/status."""
+    return _decode_upload_token(token)
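The split above works because signature/expiry verification is pure — only the jti set is stateful, so `verify_upload_token_no_consume` can run on every chunk without side effects. The sketch below illustrates that property with a minimal detached-signature token in the same shape as the visible `mint_upload_token` tail (`base64(body) + "." + sig`); the secret, payload fields, and helper names are assumptions, not this repo's code.

```python
import base64
import hashlib
import hmac
import json
import time

SECRET = b"demo-secret"  # assumption: the real key would come from env/config


def mint(payload: dict) -> str:
    # base64url(JSON body) + "." + HMAC-SHA256 hex signature
    body = json.dumps(payload).encode()
    sig = hmac.new(SECRET, body, hashlib.sha256).hexdigest()
    return base64.urlsafe_b64encode(body).decode() + "." + sig


def decode(token: str) -> dict:
    """Signature + expiry check only — repeatable, no shared state touched."""
    b64, sig = token.split(".", 1)
    body = base64.urlsafe_b64decode(b64.encode())
    expected = hmac.new(SECRET, body, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        raise ValueError("bad signature")
    payload = json.loads(body)
    if int(time.time()) > payload["expires_at"]:
        raise ValueError("expired")
    return payload
```

Calling `decode` twice on the same token returns the same payload both times, which is exactly what the chunk/status/complete/abort handlers need.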

View File

@@ -4,6 +4,7 @@
 - create_share_link(file_path, expires_in_sec) -> share URL
 """
+import asyncio
 import logging
 import os
 from datetime import datetime, timedelta, timezone
@@ -18,6 +19,8 @@ DSM_PASS = os.getenv("DSM_PASS", "")
 # Reaching DSM by LAN IP fails verification because the self-signed cert does not match the IP. LAN-internal traffic, so false is acceptable.
 # In production: LAN IP + self-signed cert → DSM_VERIFY_SSL=false. Domain + proper cert → keep the default (true).
 DSM_VERIFY_SSL = os.getenv("DSM_VERIFY_SSL", "true").strip().lower() != "false"
+DSM_MAX_RETRIES = max(1, int(os.getenv("DSM_MAX_RETRIES", "3")))
+DSM_BACKOFF_SEC = float(os.getenv("DSM_BACKOFF_SEC", "0.5"))

 API_AUTH = "/webapi/auth.cgi"
 API_SHARE = "/webapi/entry.cgi"
@@ -27,13 +30,45 @@ class DSMError(RuntimeError):
     pass


+async def _request_with_retry(
+    client: httpx.AsyncClient,
+    url: str,
+    params: dict,
+    timeout: float,
+) -> httpx.Response:
+    """Retry only 5xx / transport / timeout errors with exponential backoff. 4xx and DSM success=false are left to the caller."""
+    last_exc: Exception | None = None
+    for attempt in range(DSM_MAX_RETRIES):
+        try:
+            r = await client.get(url, params=params, timeout=timeout)
+            if r.status_code < 500:
+                return r
+            last_exc = httpx.HTTPStatusError(
+                f"HTTP {r.status_code}", request=r.request, response=r
+            )
+            logger.warning(
+                "DSM HTTP %s — attempt %s/%s body=%s",
+                r.status_code, attempt + 1, DSM_MAX_RETRIES, r.text[:200],
+            )
+        except (httpx.TransportError, httpx.TimeoutException) as e:
+            last_exc = e
+            logger.warning(
+                "DSM transport error: %s — attempt %s/%s",
+                e, attempt + 1, DSM_MAX_RETRIES,
+            )
+        if attempt < DSM_MAX_RETRIES - 1:
+            await asyncio.sleep(DSM_BACKOFF_SEC * (2 ** attempt))
+    raise DSMError(f"DSM 요청 실패 (재시도 {DSM_MAX_RETRIES}회): {last_exc}")
+
+
 async def _login(client: httpx.AsyncClient) -> str:
     """Return the DSM session sid."""
     if not all([DSM_HOST, DSM_USER, DSM_PASS]):
         raise DSMError("DSM 환경변수 미설정")
-    r = await client.get(
+    r = await _request_with_retry(
+        client,
         f"{DSM_HOST}{API_AUTH}",
-        params={
+        {
             "api": "SYNO.API.Auth",
             "version": "7",
             "method": "login",
@@ -42,12 +77,14 @@ async def _login(client: httpx.AsyncClient) -> str:
             "session": "FileStation",
             "format": "sid",
         },
-        timeout=15.0,
+        15.0,
     )
     r.raise_for_status()
     data = r.json()
     if not data.get("success"):
-        raise DSMError(f"DSM login 실패: {data.get('error')}")
+        err = data.get("error", {})
+        logger.error("DSM login 실패: code=%s error=%s", err.get("code"), err)
+        raise DSMError(f"DSM login 실패: code={err.get('code')} error={err}")
     return data["data"]["sid"]
@@ -80,9 +117,10 @@ async def create_share_link(file_path: str, expires_in_sec: int = 14400) -> tupl
     async with httpx.AsyncClient(verify=DSM_VERIFY_SSL) as client:
         sid = await _login(client)
         try:
-            r = await client.get(
+            r = await _request_with_retry(
+                client,
                 f"{DSM_HOST}{API_SHARE}",
-                params={
+                {
                     "api": "SYNO.FileStation.Sharing",
                     "version": "3",
                     "method": "create",
@@ -90,16 +128,22 @@ async def create_share_link(file_path: str, expires_in_sec: int = 14400) -> tupl
                     "date_expired": expire_time_ms,
                     "_sid": sid,
                 },
-                timeout=15.0,
+                15.0,
             )
             r.raise_for_status()
             data = r.json()
             if not data.get("success"):
-                raise DSMError(f"DSM Sharing.create 실패: {data.get('error')}")
+                err = data.get("error", {})
+                logger.error(
+                    "DSM Sharing.create 실패: path=%s code=%s error=%s",
+                    file_path, err.get("code"), err,
+                )
+                raise DSMError(f"DSM Sharing.create 실패: code={err.get('code')} error={err}")
             links = data["data"]["links"]
             if not links:
                 raise DSMError("Sharing 응답에 링크 없음")
             url = links[0]["url"]
+            logger.info("DSM share link created: path=%s", file_path)
             return url, expires_at
         finally:
             await _logout(client, sid)
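`_request_with_retry` sleeps `DSM_BACKOFF_SEC * 2**attempt` after every failed attempt except the last, so with the defaults (`DSM_MAX_RETRIES=3`, `DSM_BACKOFF_SEC=0.5`) the schedule is 0.5s then 1.0s, i.e. at most 1.5s of added latency on top of the request timeouts. A small sketch of that arithmetic (the helper name is ours, not the repo's):

```python
def backoff_delays(max_retries: int, backoff_sec: float) -> list[float]:
    # One sleep after each failed attempt except the final one,
    # doubling each time: backoff_sec * 2**attempt.
    return [backoff_sec * (2 ** attempt) for attempt in range(max_retries - 1)]
```

This also shows why `DSM_MAX_RETRIES` is clamped with `max(1, ...)`: with a single attempt the delay list is empty and the request runs exactly once.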

View File

@@ -51,3 +51,16 @@ class MintTokenResponse(BaseModel):
     token: str
     expires_at: datetime
     jti: str
+
+
+class InitUploadResponse(BaseModel):
+    """Chunked-upload session init response. session_id equals the mint-token's jti."""
+    session_id: str
+    chunk_max_size: int
+    expected_size: int
+    expires_at: datetime
+
+
+class ChunkUploadResponse(BaseModel):
+    written: int
+    expected_size: int

View File

@@ -2,13 +2,20 @@
 - POST /api/packs/sign-link — Vercel HMAC auth → DSM share link
 - POST /api/packs/admin/mint-token — Vercel HMAC auth → one-time upload token
-- POST /api/packs/upload — one-time token auth → multipart save + supabase INSERT
+- POST /api/packs/upload — one-time token auth → multipart save + supabase INSERT (single-shot)
+- POST /api/packs/upload/init — one-time token auth → initialize a chunked-upload session
+- PUT /api/packs/upload/{session_id}/chunk — same token + offset → append to the partial file
+- POST /api/packs/upload/{session_id}/complete — same token → finalize + supabase INSERT
+- GET /api/packs/upload/{session_id}/status — report current written bytes (for resume)
+- DELETE /api/packs/upload/{session_id} — abort the session + clean up the partial file
 - GET /api/packs/list — Vercel HMAC auth → list all pack_files
 - DELETE /api/packs/{file_id} — Vercel HMAC auth → soft delete (the DSM share expires on its own)
 """
+import json
 import logging
 import os
 import re
+import shutil
 import time
 import uuid
 from datetime import datetime, timezone
@@ -17,9 +24,16 @@ from pathlib import Path
 from fastapi import APIRouter, File, Header, HTTPException, Request, UploadFile
 from supabase import Client, create_client

-from .auth import mint_upload_token, verify_request_hmac, verify_upload_token
+from .auth import (
+    mint_upload_token,
+    verify_request_hmac,
+    verify_upload_token,
+    verify_upload_token_no_consume,
+)
 from .dsm_client import DSMError, create_share_link
 from .models import (
+    ChunkUploadResponse,
+    InitUploadResponse,
     MintTokenRequest,
     MintTokenResponse,
     PackFileItem,
@@ -40,6 +54,52 @@ ALLOWED_EXT = {"pdf", "zip", "mp4", "mov", "mkv", "wav", "m4a", "mp3", "png", "j
 MAX_BYTES = 5 * 1024 * 1024 * 1024  # 5GB
 SAFE_FILENAME = re.compile(r"^[\w가-힣\-\.\(\)\s]+$")
 UPLOAD_TOKEN_TTL_SEC = int(os.getenv("UPLOAD_TOKEN_TTL_SEC", "1800"))  # 30-min default
+CHUNK_MAX_SIZE = int(os.getenv("PACK_CHUNK_MAX_SIZE", str(64 * 1024 * 1024)))  # 64MB default
+SESSIONS_DIR_NAME = ".uploads"
+
+
+def _sessions_root() -> Path:
+    return PACK_BASE_DIR / SESSIONS_DIR_NAME
+
+
+def _session_dir(jti: str) -> Path:
+    # jti is uuid4-shaped, so no path-traversal risk; validate anyway for safety.
+    if not re.match(r"^[0-9a-fA-F\-]{1,64}$", jti):
+        raise HTTPException(status_code=400, detail="잘못된 session_id")
+    return _sessions_root() / jti
+
+
+def _session_meta_path(jti: str) -> Path:
+    return _session_dir(jti) / "meta.json"
+
+
+def _session_data_path(jti: str) -> Path:
+    return _session_dir(jti) / "data.part"
+
+
+def _load_session(jti: str) -> dict:
+    meta_file = _session_meta_path(jti)
+    if not meta_file.exists():
+        raise HTTPException(status_code=404, detail="업로드 세션을 찾을 수 없습니다")
+    return json.loads(meta_file.read_text(encoding="utf-8"))
+
+
+def _save_session(jti: str, meta: dict) -> None:
+    _session_meta_path(jti).write_text(json.dumps(meta), encoding="utf-8")
+
+
+def _cleanup_session(jti: str) -> None:
+    shutil.rmtree(_session_dir(jti), ignore_errors=True)
+
+
+def _verify_session_token(authorization: str, session_id: str) -> dict:
+    if not authorization.startswith("Bearer "):
+        raise HTTPException(status_code=401, detail="Authorization 헤더 누락")
+    token = authorization[len("Bearer "):]
+    payload = verify_upload_token_no_consume(token)
+    if payload.get("jti") != session_id:
+        raise HTTPException(status_code=403, detail="토큰과 세션 ID 불일치")
+    return payload
+
+
 def _supabase() -> Client:
@@ -135,54 +135,215 @@ async def upload(
     if target.exists():
         raise HTTPException(status_code=409, detail="이미 존재하는 파일명입니다. 다른 이름으로 업로드하거나 기존 파일을 먼저 삭제하세요")

-    # multipart stream save + size check
-    written = 0
-    with target.open("wb") as f:
-        while True:
-            chunk = await file.read(1024 * 1024)
-            if not chunk:
-                break
-            written += len(chunk)
-            if written > MAX_BYTES:
-                f.close()
-                target.unlink(missing_ok=True)
-                raise HTTPException(status_code=413, detail="파일 크기 5GB 초과")
-            f.write(chunk)
-
-    if written != expected_size:
-        target.unlink(missing_ok=True)
-        raise HTTPException(status_code=400, detail=f"실제 크기({written})와 토큰 크기({expected_size}) 불일치")
-
-    host_path = PACK_HOST_DIR / filename
-    sb = _supabase()
-    file_id = str(uuid.uuid4())
-    res = sb.table("pack_files").insert({
-        "id": file_id,
-        "min_tier": tier,
-        "label": label,
-        "file_path": str(host_path),
-        "filename": filename,
-        "size_bytes": written,
-    }).execute()
-    if not res.data:
-        target.unlink(missing_ok=True)
-        raise HTTPException(status_code=500, detail="DB INSERT 실패")
-    return UploadResponse(
-        file_id=file_id,
-        file_path=str(host_path),
-        filename=filename,
-        size_bytes=written,
-        min_tier=tier,
-        label=label,
-        uploaded_at=res.data[0]["uploaded_at"],
-    )
+    upload_committed = False
+    try:
+        # multipart stream save + size check
+        written = 0
+        with target.open("wb") as f:
+            while True:
+                chunk = await file.read(1024 * 1024)
+                if not chunk:
+                    break
+                written += len(chunk)
+                if written > MAX_BYTES:
+                    raise HTTPException(status_code=413, detail="파일 크기 5GB 초과")
+                f.write(chunk)
+
+        if written != expected_size:
+            raise HTTPException(status_code=400, detail=f"실제 크기({written})와 토큰 크기({expected_size}) 불일치")
+
+        # The file_path exposed to Supabase/DSM must be the NAS host absolute path.
+        # The container path (target) is just the mounted host path seen from a
+        # different vantage point, so the directory structure is the same.
+        host_path = PACK_HOST_DIR / filename
+
+        # supabase INSERT
+        sb = _supabase()
+        file_id = str(uuid.uuid4())
+        try:
+            res = sb.table("pack_files").insert({
+                "id": file_id,
+                "min_tier": tier,
+                "label": label,
+                "file_path": str(host_path),
+                "filename": filename,
+                "size_bytes": written,
+            }).execute()
+        except Exception as e:
+            logger.exception("Supabase INSERT 예외: filename=%s", filename)
+            raise HTTPException(status_code=500, detail=f"DB INSERT 실패: {e}") from e
+        if not res.data:
+            raise HTTPException(status_code=500, detail="DB INSERT 실패")
+        upload_committed = True
+        return UploadResponse(
+            file_id=file_id,
+            file_path=str(host_path),
+            filename=filename,
+            size_bytes=written,
+            min_tier=tier,
+            label=label,
+            uploaded_at=res.data[0]["uploaded_at"],
+        )
+    finally:
+        if not upload_committed and target.exists():
+            try:
+                target.unlink()
+                logger.warning("업로드 실패로 부분 파일 정리: %s", target)
+            except Exception as e:
+                logger.exception("부분 파일 정리 실패: %s: %s", target, e)
+
+
+# ── Chunked upload (resumable) ──────────────────────────────────────────────
+# The same token issued by mint-token is reused across the whole
+# init → chunk* → complete flow. jti = session_id. Only init consumes the jti;
+# chunk/complete/abort use no-consume verification.
+
+
+@router.post("/upload/init", response_model=InitUploadResponse)
+async def upload_init(authorization: str = Header("")):
+    if not authorization.startswith("Bearer "):
+        raise HTTPException(status_code=401, detail="Authorization 헤더 누락")
+    token = authorization[len("Bearer "):]
+    payload = verify_upload_token(token)  # only init consumes the jti
+    tier = payload["tier"]
+    label = payload["label"]
+    filename = _check_filename(payload["filename"])
+    expected_size = int(payload["size_bytes"])
+    jti = payload["jti"]
+
+    PACK_BASE_DIR.mkdir(parents=True, exist_ok=True)
+    if (PACK_BASE_DIR / filename).exists():
+        raise HTTPException(status_code=409, detail="이미 존재하는 파일명입니다")
+
+    sdir = _session_dir(jti)
+    if sdir.exists():
+        raise HTTPException(status_code=409, detail="이미 시작된 세션입니다")
+    sdir.mkdir(parents=True, exist_ok=True)
+    _session_data_path(jti).touch()
+    _save_session(jti, {
+        "filename": filename,
+        "expected_size": expected_size,
+        "tier": tier,
+        "label": label,
+        "written": 0,
+        "expires_at": int(payload["expires_at"]),
+    })
+    return InitUploadResponse(
+        session_id=jti,
+        chunk_max_size=CHUNK_MAX_SIZE,
+        expected_size=expected_size,
+        expires_at=datetime.fromtimestamp(payload["expires_at"], tz=timezone.utc),
+    )
+
+
+@router.put("/upload/{session_id}/chunk", response_model=ChunkUploadResponse)
+async def upload_chunk(
+    session_id: str,
+    request: Request,
+    offset: int = 0,
+    authorization: str = Header(""),
+):
+    _verify_session_token(authorization, session_id)
+    meta = _load_session(session_id)
+    if offset != meta["written"]:
+        raise HTTPException(
+            status_code=409,
+            detail=f"offset {offset} 불일치 (현재 written={meta['written']})",
+            headers={"X-Current-Offset": str(meta["written"])},
+        )
+    body = await request.body()
+    if not body:
+        raise HTTPException(status_code=400, detail="청크가 비어 있음")
+    if len(body) > CHUNK_MAX_SIZE:
+        raise HTTPException(status_code=413, detail=f"청크 크기 {CHUNK_MAX_SIZE} 초과")
+    if meta["written"] + len(body) > meta["expected_size"]:
+        raise HTTPException(status_code=413, detail="누적 크기 expected_size 초과")
+    with _session_data_path(session_id).open("ab") as f:
+        f.write(body)
+    meta["written"] += len(body)
+    _save_session(session_id, meta)
+    return ChunkUploadResponse(written=meta["written"], expected_size=meta["expected_size"])
+
+
+@router.get("/upload/{session_id}/status", response_model=ChunkUploadResponse)
+async def upload_status(
+    session_id: str,
+    authorization: str = Header(""),
+):
+    _verify_session_token(authorization, session_id)
+    meta = _load_session(session_id)
+    return ChunkUploadResponse(written=meta["written"], expected_size=meta["expected_size"])
+
+
+@router.post("/upload/{session_id}/complete", response_model=UploadResponse)
+async def upload_complete(
+    session_id: str,
+    authorization: str = Header(""),
+):
+    _verify_session_token(authorization, session_id)
+    meta = _load_session(session_id)
+    if meta["written"] != meta["expected_size"]:
+        raise HTTPException(
+            status_code=400,
+            detail=f"미완료: written={meta['written']} expected={meta['expected_size']}",
+        )
+    filename = meta["filename"]
+    target = PACK_BASE_DIR / filename
+    if target.exists():
+        raise HTTPException(status_code=409, detail="이미 존재하는 파일명입니다")
+    data_file = _session_data_path(session_id)
+    data_file.replace(target)  # atomic rename within same FS
+
+    # The file_path exposed to Supabase/DSM must be the NAS host absolute path.
+    # The container path (target) is just the mounted host path seen from a
+    # different vantage point, so the directory structure is the same.
+    host_path = PACK_HOST_DIR / filename
+
+    # supabase INSERT
+    sb = _supabase()
+    file_id = str(uuid.uuid4())
+    try:
+        res = sb.table("pack_files").insert({
+            "id": file_id,
+            "min_tier": meta["tier"],
+            "label": meta["label"],
+            "file_path": str(host_path),
+            "filename": filename,
+            "size_bytes": meta["written"],
+        }).execute()
+    except Exception as e:
+        logger.exception("Supabase INSERT 예외 (chunked complete): filename=%s", filename)
+        target.unlink(missing_ok=True)
+        raise HTTPException(status_code=500, detail=f"DB INSERT 실패: {e}") from e
+    if not res.data:
+        target.unlink(missing_ok=True)
+        raise HTTPException(status_code=500, detail="DB INSERT 실패")
+    _cleanup_session(session_id)
+    return UploadResponse(
+        file_id=file_id,
+        file_path=str(host_path),
+        filename=filename,
+        size_bytes=meta["written"],
+        min_tier=meta["tier"],
+        label=meta["label"],
+        uploaded_at=res.data[0]["uploaded_at"],
+    )
+
+
+@router.delete("/upload/{session_id}")
+async def upload_abort(
+    session_id: str,
+    authorization: str = Header(""),
+):
+    _verify_session_token(authorization, session_id)
+    _cleanup_session(session_id)
+    return {"ok": True}
+
+
 @router.get("/list", response_model=list[PackFileItem])
 async def list_files(
     request: Request,
View File

@@ -8,6 +8,13 @@ import httpx
 from app.dsm_client import create_share_link, DSMError


+@pytest.fixture(autouse=True)
+def _no_backoff(monkeypatch):
+    """Zero out the retry backoff sleep — test speed."""
+    from app import dsm_client
+    monkeypatch.setattr(dsm_client, "DSM_BACKOFF_SEC", 0.0)
+
+
 @pytest.fixture(autouse=True)
 def _dsm_env(monkeypatch):
     monkeypatch.setenv("DSM_HOST", "https://test-nas:5001")
@@ -109,3 +116,109 @@ def test_dsm_share_failure_logs_out():
     assert "login" in call_order
     assert "logout" in call_order, "logout was not called (missing finally suspected)"


+def test_retry_on_5xx_then_success(monkeypatch):
+    """First call 5xx → retry → second call succeeds with 200."""
+    from app import dsm_client
+    monkeypatch.setattr(dsm_client, "DSM_MAX_RETRIES", 3)
+    login_calls = {"n": 0}
+
+    async def fake_get(self, url, *, params=None, **kw):
+        method = (params or {}).get("method", "")
+        if method == "login":
+            login_calls["n"] += 1
+            if login_calls["n"] == 1:
+                return _make_response({}, status_code=503)
+            return _make_response({"success": True, "data": {"sid": "sid-after-retry"}})
+        if method == "create":
+            return _make_response({
+                "success": True,
+                "data": {"links": [{"url": "https://nas/sharing/retry"}]},
+            })
+        return _make_response({"success": True})
+
+    with patch.object(httpx.AsyncClient, "get", new=fake_get):
+        url, _ = asyncio.run(create_share_link("/volume1/x.zip"))
+    assert url == "https://nas/sharing/retry"
+    assert login_calls["n"] == 2, "a 5xx response must trigger a retry"
+
+
+def test_retry_exhausts_on_persistent_5xx(monkeypatch):
+    """If 5xx persists for MAX_RETRIES attempts, raise DSMError."""
+    from app import dsm_client
+    monkeypatch.setattr(dsm_client, "DSM_MAX_RETRIES", 2)
+    login_calls = {"n": 0}
+
+    async def fake_get(self, url, *, params=None, **kw):
+        method = (params or {}).get("method", "")
+        if method == "login":
+            login_calls["n"] += 1
+            return _make_response({}, status_code=503)
+        return _make_response({"success": True})
+
+    with patch.object(httpx.AsyncClient, "get", new=fake_get):
+        with pytest.raises(DSMError, match="재시도"):
+            asyncio.run(create_share_link("/volume1/x.zip"))
+    assert login_calls["n"] == 2, f"should attempt MAX_RETRIES times (actual: {login_calls['n']})"
+
+
+def test_retry_on_transport_error_then_success(monkeypatch):
+    """httpx.ConnectError → retry → success."""
+    from app import dsm_client
+    monkeypatch.setattr(dsm_client, "DSM_MAX_RETRIES", 3)
+    login_calls = {"n": 0}
+
+    async def fake_get(self, url, *, params=None, **kw):
+        method = (params or {}).get("method", "")
+        if method == "login":
+            login_calls["n"] += 1
+            if login_calls["n"] == 1:
+                raise httpx.ConnectError("connection refused")
+            return _make_response({"success": True, "data": {"sid": "sid"}})
+        if method == "create":
+            return _make_response({
+                "success": True,
+                "data": {"links": [{"url": "https://nas/sharing/tr"}]},
+            })
+        return _make_response({"success": True})
+
+    with patch.object(httpx.AsyncClient, "get", new=fake_get):
+        url, _ = asyncio.run(create_share_link("/volume1/x.zip"))
+    assert url == "https://nas/sharing/tr"
+    assert login_calls["n"] == 2
+
+
+def test_no_retry_on_4xx(monkeypatch):
+    """4xx (permanent errors) must raise via raise_for_status without any retry."""
+    from app import dsm_client
+    monkeypatch.setattr(dsm_client, "DSM_MAX_RETRIES", 3)
+    login_calls = {"n": 0}
+
+    def _raise_4xx():
+        raise httpx.HTTPStatusError(
+            "client error",
+            request=MagicMock(),
+            response=MagicMock(status_code=403),
+        )
+
+    async def fake_get(self, url, *, params=None, **kw):
+        login_calls["n"] += 1
+        resp = MagicMock(spec=httpx.Response)
+        resp.status_code = 403
+        resp.json.return_value = {}
+        resp.raise_for_status = _raise_4xx
+        return resp
+
+    with patch.object(httpx.AsyncClient, "get", new=fake_get):
+        with pytest.raises(httpx.HTTPStatusError):
+            asyncio.run(create_share_link("/volume1/x.zip"))
+    assert login_calls["n"] == 1, "4xx must raise immediately without retry"

View File

@@ -248,6 +248,241 @@ def test_list_filters_deleted():
     fake_supabase.table.return_value.select.return_value.is_.assert_called_with("deleted_at", "null")


+def _mint(filename: str, size: int, jti: str = None) -> str:
+    return auth.mint_upload_token({
+        "tier": "pro",
+        "label": "샘플",
+        "filename": filename,
+        "size_bytes": size,
+        "jti": jti or str(uuid.uuid4()),
+        "expires_at": int(time.time()) + 1800,
+    })
+
+
+def test_chunk_upload_full_flow(tmp_path, monkeypatch):
+    """init → chunk(0) → chunk(N) → complete happy path."""
+    monkeypatch.setattr("app.routes.PACK_BASE_DIR", tmp_path)
+    from pathlib import Path
+    monkeypatch.setattr("app.routes.PACK_HOST_DIR", Path("/volume1/host"))
+    fake_supabase = MagicMock()
+    fake_supabase.table.return_value.insert.return_value.execute.return_value = MagicMock(
+        data=[{"uploaded_at": "2026-05-12T00:00:00+00:00"}]
+    )
+
+    payload = b"a" * 100 + b"b" * 50  # 150 bytes total
+    chunk1 = payload[:100]
+    chunk2 = payload[100:]
+    jti = str(uuid.uuid4())
+    token = _mint("chunk_full.zip", len(payload), jti=jti)
+    headers = {"Authorization": f"Bearer {token}"}
+
+    with patch("app.routes._supabase", return_value=fake_supabase):
+        test_client = TestClient(app)
+        # init
+        r = test_client.post("/api/packs/upload/init", headers=headers)
+        assert r.status_code == 200, r.text
+        sid = r.json()["session_id"]
+        assert sid == jti
+        assert r.json()["expected_size"] == 150
+        # chunk 1 (offset=0)
+        r = test_client.put(
+            f"/api/packs/upload/{sid}/chunk?offset=0",
+            content=chunk1,
+            headers=headers,
+        )
+        assert r.status_code == 200, r.text
+        assert r.json()["written"] == 100
+        # chunk 2 (offset=100)
+        r = test_client.put(
+            f"/api/packs/upload/{sid}/chunk?offset=100",
+            content=chunk2,
+            headers=headers,
+        )
+        assert r.status_code == 200
+        assert r.json()["written"] == 150
+        # complete
+        r = test_client.post(f"/api/packs/upload/{sid}/complete", headers=headers)
+        assert r.status_code == 200, r.text
+        body = r.json()
+        assert body["filename"] == "chunk_full.zip"
+        assert body["size_bytes"] == 150
+        assert body["file_path"] == "/volume1/host/chunk_full.zip" or body["file_path"].endswith("chunk_full.zip")
+
+    # the file moved to its final location and the session was cleaned up
+    assert (tmp_path / "chunk_full.zip").read_bytes() == payload
+    assert not (tmp_path / ".uploads" / sid).exists()
+
+
+def test_chunk_upload_offset_mismatch(tmp_path, monkeypatch):
+    """Wrong offset → 409 + X-Current-Offset header."""
+    monkeypatch.setattr("app.routes.PACK_BASE_DIR", tmp_path)
+    jti = str(uuid.uuid4())
+    token = _mint("offset_mismatch.zip", 100, jti=jti)
+    headers = {"Authorization": f"Bearer {token}"}
+    test_client = TestClient(app)
+    r = test_client.post("/api/packs/upload/init", headers=headers)
+    assert r.status_code == 200
+    sid = r.json()["session_id"]
+    # wrong offset (10, should be 0)
+    r = test_client.put(
+        f"/api/packs/upload/{sid}/chunk?offset=10",
+        content=b"x" * 10,
+        headers=headers,
+    )
+    assert r.status_code == 409
+    assert r.headers.get("X-Current-Offset") == "0"
+
+
+def test_chunk_upload_status(tmp_path, monkeypatch):
+    """Query the current written count via status."""
+    monkeypatch.setattr("app.routes.PACK_BASE_DIR", tmp_path)
+    jti = str(uuid.uuid4())
+    token = _mint("status_check.zip", 50, jti=jti)
+    headers = {"Authorization": f"Bearer {token}"}
+    test_client = TestClient(app)
+    r = test_client.post("/api/packs/upload/init", headers=headers)
+    sid = r.json()["session_id"]
+    # empty state
+    r = test_client.get(f"/api/packs/upload/{sid}/status", headers=headers)
+    assert r.status_code == 200
+    assert r.json()["written"] == 0
+    assert r.json()["expected_size"] == 50
+    # after a partial upload
+    test_client.put(
+        f"/api/packs/upload/{sid}/chunk?offset=0",
+        content=b"x" * 20,
+        headers=headers,
+    )
+    r = test_client.get(f"/api/packs/upload/{sid}/status", headers=headers)
+    assert r.json()["written"] == 20
+
+
+def test_chunk_upload_abort(tmp_path, monkeypatch):
+    """DELETE → session directory cleaned up."""
+    monkeypatch.setattr("app.routes.PACK_BASE_DIR", tmp_path)
+    jti = str(uuid.uuid4())
+    token = _mint("abort_test.zip", 30, jti=jti)
+    headers = {"Authorization": f"Bearer {token}"}
+    test_client = TestClient(app)
+    test_client.post("/api/packs/upload/init", headers=headers)
+    test_client.put(
+        f"/api/packs/upload/{jti}/chunk?offset=0",
+        content=b"y" * 10,
+        headers=headers,
+    )
+    assert (tmp_path / ".uploads" / jti).exists()
+    r = test_client.delete(f"/api/packs/upload/{jti}", headers=headers)
+    assert r.status_code == 200
+    assert not (tmp_path / ".uploads" / jti).exists()
+
+
+def test_chunk_upload_wrong_token(tmp_path, monkeypatch):
+    """Calling chunk with another jti's token → 403."""
+    monkeypatch.setattr("app.routes.PACK_BASE_DIR", tmp_path)
+    # start session A
+    jti_a = str(uuid.uuid4())
+    token_a = _mint("wrong_token_a.zip", 30, jti=jti_a)
+    headers_a = {"Authorization": f"Bearer {token_a}"}
+    test_client = TestClient(app)
+    test_client.post("/api/packs/upload/init", headers=headers_a)
+    # call session A's chunk endpoint with session B's token
+    jti_b = str(uuid.uuid4())
+    token_b = _mint("wrong_token_b.zip", 30, jti=jti_b)
+    headers_b = {"Authorization": f"Bearer {token_b}"}
+    r = test_client.put(
+        f"/api/packs/upload/{jti_a}/chunk?offset=0",
+        content=b"z" * 10,
+        headers=headers_b,
+    )
+    assert r.status_code == 403
+
+
+def test_chunk_upload_complete_incomplete(tmp_path, monkeypatch):
+    """Calling complete below expected_size → 400."""
+    monkeypatch.setattr("app.routes.PACK_BASE_DIR", tmp_path)
+    jti = str(uuid.uuid4())
+    token = _mint("incomplete.zip", 100, jti=jti)
+    headers = {"Authorization": f"Bearer {token}"}
+    test_client = TestClient(app)
+    test_client.post("/api/packs/upload/init", headers=headers)
+    test_client.put(
+        f"/api/packs/upload/{jti}/chunk?offset=0",
+        content=b"q" * 50,
+        headers=headers,
+    )
+    r = test_client.post(f"/api/packs/upload/{jti}/complete", headers=headers)
+    assert r.status_code == 400
+    assert "미완료" in r.json()["detail"]
+
+
+def test_chunk_init_filename_collision(tmp_path, monkeypatch):
+    """init returns 409 when the same filename already exists in PACK_BASE_DIR."""
+    monkeypatch.setattr("app.routes.PACK_BASE_DIR", tmp_path)
+    (tmp_path / "existing.zip").write_bytes(b"already here")
+    token = _mint("existing.zip", 100)
+    r = TestClient(app).post(
+        "/api/packs/upload/init",
+        headers={"Authorization": f"Bearer {token}"},
+    )
+    assert r.status_code == 409
+
+
+def test_chunk_upload_stores_host_path(tmp_path, monkeypatch):
+    """On complete, the file_path stored in Supabase must be PACK_HOST_DIR-based."""
+    from pathlib import Path
+    container_base = tmp_path / "container"
+    host_base = Path("/volume1/host/packs")
+    monkeypatch.setattr("app.routes.PACK_BASE_DIR", container_base)
+    monkeypatch.setattr("app.routes.PACK_HOST_DIR", host_base)
+    captured = {}
+    fake_supabase = MagicMock()
+
+    def capture_insert(payload):
+        captured.update(payload)
+        m = MagicMock()
+        m.execute.return_value = MagicMock(data=[{"uploaded_at": "2026-05-12T00:00:00+00:00"}])
+        return m
+
+    fake_supabase.table.return_value.insert.side_effect = capture_insert
+    jti = str(uuid.uuid4())
+    token = _mint("hostpath_chunk.zip", 5, jti=jti)
+    headers = {"Authorization": f"Bearer {token}"}
+    with patch("app.routes._supabase", return_value=fake_supabase):
+        c = TestClient(app)
+        c.post("/api/packs/upload/init", headers=headers)
+        c.put(f"/api/packs/upload/{jti}/chunk?offset=0", content=b"hello", headers=headers)
+        r = c.post(f"/api/packs/upload/{jti}/complete", headers=headers)
+    assert r.status_code == 200
+    assert captured["file_path"] == str(host_base / "hostpath_chunk.zip")
+
+
 def test_upload_stores_host_path_not_container_path(tmp_path, monkeypatch):
     """On upload, the file_path stored in Supabase must be the PACK_HOST_DIR (NAS host) absolute path, not PACK_BASE_DIR (container).