feat(packs-lab): add chunked resumable upload (offset-based)

The existing single-shot POST /upload is kept as-is; five chunk-upload
endpoints were added for 5GB+ reliability.

- POST /upload/init — consume the mint-token jti + create the session directory
- PUT /upload/{sid}/chunk?offset=N — append to the .part file after matching the offset
  · on mismatch, return 409 with an X-Current-Offset header reporting the resume point
- GET /upload/{sid}/status — query current written / expected_size
- POST /upload/{sid}/complete — atomic rename + Supabase INSERT
- DELETE /upload/{sid} — abort the session + clean up the partial file

auth.py: added verify_upload_token_no_consume() — chunk/complete/abort/status
must reuse the same mint-token, so it verifies only the signature + expiry
without consuming the jti.

models.py: added InitUploadResponse and ChunkUploadResponse.

Session state: PACK_BASE_DIR/.uploads/{jti}/meta.json + data.part (persisted
on the filesystem; assumes a single container).
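For illustration, a session's meta.json carries the fields below (the field set comes from this commit; the values here are made up):

```json
{
  "filename": "sample.zip",
  "expected_size": 5368709120,
  "tier": "pro",
  "label": "샘플",
  "written": 134217728,
  "expires_at": 1778866580
}
```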

Chunk size cap: PACK_CHUNK_MAX_SIZE env var (default 64MB).
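As a rough sketch of the client side (a hypothetical helper, not part of this commit): split the payload into pieces no larger than the chunk cap and PUT them sequentially, always trusting the `written` count the server returns.

```python
def upload_chunked(payload: bytes, chunk_max_size: int, send) -> int:
    """Drive the chunk protocol. `send(offset, chunk)` is assumed to perform
    PUT /upload/{sid}/chunk?offset=N and return the server's new `written`
    count; we loop until the whole payload has been accepted."""
    written = 0
    while written < len(payload):
        chunk = payload[written:written + chunk_max_size]
        written = send(written, chunk)
    return written
```

In a real client, `send` would wrap an HTTP PUT carrying the Bearer token and, on a 409, reset `written` from the `X-Current-Offset` header before retrying.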

tests: 8 chunk-upload scenarios — full flow / offset mismatch / status /
abort / wrong token / incomplete complete / filename collision / host-path
storage. All 37 tests pass.

CLAUDE.md: added the 5 chunk endpoints + usage patterns to the packs-lab API table.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-12 02:36:20 +09:00
parent 448dbd5f48
commit b4dd21e67a
5 changed files with 489 additions and 7 deletions


@@ -666,10 +666,21 @@ docker compose up -d
|--------|------|------|
| POST | `/api/packs/sign-link` | Vercel HMAC → issue a 4-hour download URL via DSM Sharing.create |
| POST | `/api/packs/admin/mint-token` | Vercel HMAC → issue a one-time upload token (default 30-min TTL) |
| POST | `/api/packs/upload` | Bearer token (single-shot) → store multipart up to 5GB + Supabase INSERT |
| POST | `/api/packs/upload/init` | Bearer token → initialize a chunked-upload session (returns `session_id = jti`, `chunk_max_size`); only init consumes the jti |
| PUT | `/api/packs/upload/{session_id}/chunk?offset=N` | same Bearer token → append to the partial file (on offset mismatch: 409 + `X-Current-Offset` header) |
| GET | `/api/packs/upload/{session_id}/status` | same Bearer token → query `{written, expected_size}` (for resuming) |
| POST | `/api/packs/upload/{session_id}/complete` | same Bearer token → rename the partial file + Supabase INSERT |
| DELETE | `/api/packs/upload/{session_id}` | same Bearer token → abort the session + clean up the partial file |
| GET | `/api/packs/list` | Vercel HMAC → list active pack_files (deleted_at IS NULL) |
| DELETE | `/api/packs/{file_id}` | Vercel HMAC → soft delete (DSM shares expire on their own) |

**Chunked upload flow (5GB+ reliability)**
- Reuse the same mint-token as the Bearer token for init, chunk, status, complete, and abort (the jti is consumed only at init)
- Session state: `PACK_BASE_DIR/.uploads/{jti}/meta.json + data.part` inside the container
- Chunk retries: the client finds the resume point via the PUT response header `X-Current-Offset` or `GET /status`
- Env var `PACK_CHUNK_MAX_SIZE` (default 64MB) — too large strains nginx buffering, too small costs extra round trips
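The retry rule in the bullets above can be sketched as a tiny helper (hypothetical names; assumes the client sees the failed PUT's status code and headers, plus the `written` value from a GET /status fallback):

```python
def resume_offset(status_code: int, headers: dict, status_written: int) -> int:
    """Pick the offset to retry from: a 409 carries the authoritative
    X-Current-Offset header; otherwise fall back to GET /status's written."""
    if status_code == 409 and "X-Current-Offset" in headers:
        return int(headers["X-Current-Offset"])
    return status_written
```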
### deployer (deployer/)
- Webhook verification: `X-Gitea-Signature` (HMAC SHA256, using `compare_digest`)
- Secret managed via the `WEBHOOK_SECRET` env var


@@ -55,8 +55,8 @@ def mint_upload_token(payload: dict) -> str:
    return base64.urlsafe_b64encode(body).decode() + "." + sig


def _decode_upload_token(token: str) -> dict:
    """Verify token signature + expiry + jti presence only. No jti marking."""
    try:
        b64, sig = token.split(".", 1)
        body = base64.urlsafe_b64decode(b64.encode())
@@ -72,13 +72,25 @@ def verify_upload_token(token: str) -> dict:
    if int(time.time()) > expires_at:
        raise HTTPException(status_code=401, detail="토큰 만료")

    if not payload.get("jti"):
        raise HTTPException(status_code=401, detail="jti 누락")
    return payload


def verify_upload_token(token: str) -> dict:
    """Verify upload token + mark its jti as used. Only for single-shot upload and chunked init."""
    payload = _decode_upload_token(token)
    jti = payload["jti"]
    with _jti_lock:
        if jti in _used_jti:
            raise HTTPException(status_code=409, detail="이미 사용된 토큰")
        _used_jti.add(jti)
    return payload


def verify_upload_token_no_consume(token: str) -> dict:
    """Verification only, without consuming the jti. Used by chunked chunk/complete/abort/status."""
    return _decode_upload_token(token)


@@ -51,3 +51,16 @@ class MintTokenResponse(BaseModel):
    token: str
    expires_at: datetime
    jti: str


class InitUploadResponse(BaseModel):
    """Chunked-upload session init response. session_id equals the mint-token jti."""

    session_id: str
    chunk_max_size: int
    expected_size: int
    expires_at: datetime


class ChunkUploadResponse(BaseModel):
    written: int
    expected_size: int


@@ -2,13 +2,20 @@
- POST /api/packs/sign-link — Vercel HMAC auth → DSM share link
- POST /api/packs/admin/mint-token — Vercel HMAC auth → one-time upload token
- POST /api/packs/upload — one-time token auth → multipart store + Supabase INSERT (single-shot)
- POST /api/packs/upload/init — one-time token auth → initialize chunked-upload session
- PUT /api/packs/upload/{session_id}/chunk — same token + offset → append to partial file
- POST /api/packs/upload/{session_id}/complete — same token → finalize + Supabase INSERT
- GET /api/packs/upload/{session_id}/status — query current written (for resuming)
- DELETE /api/packs/upload/{session_id} — abort session + clean up partial file
- GET /api/packs/list — Vercel HMAC auth → list all pack_files
- DELETE /api/packs/{file_id} — Vercel HMAC auth → soft delete (DSM share expires on its own)
"""
import json
import logging
import os
import re
import shutil
import time
import uuid
from datetime import datetime, timezone
@@ -17,9 +24,16 @@ from pathlib import Path

from fastapi import APIRouter, File, Header, HTTPException, Request, UploadFile
from supabase import Client, create_client

from .auth import (
    mint_upload_token,
    verify_request_hmac,
    verify_upload_token,
    verify_upload_token_no_consume,
)
from .dsm_client import DSMError, create_share_link
from .models import (
    ChunkUploadResponse,
    InitUploadResponse,
    MintTokenRequest,
    MintTokenResponse,
    PackFileItem,
@@ -40,6 +54,52 @@ ALLOWED_EXT = {"pdf", "zip", "mp4", "mov", "mkv", "wav", "m4a", "mp3", "png", "j
MAX_BYTES = 5 * 1024 * 1024 * 1024  # 5GB
SAFE_FILENAME = re.compile(r"^[\w가-힣\-\.\(\)\s]+$")
UPLOAD_TOKEN_TTL_SEC = int(os.getenv("UPLOAD_TOKEN_TTL_SEC", "1800"))  # 30-min default
CHUNK_MAX_SIZE = int(os.getenv("PACK_CHUNK_MAX_SIZE", str(64 * 1024 * 1024)))  # 64MB default
SESSIONS_DIR_NAME = ".uploads"


def _sessions_root() -> Path:
    return PACK_BASE_DIR / SESSIONS_DIR_NAME


def _session_dir(jti: str) -> Path:
    # jti is a uuid4, so there is no path-traversal risk; validate anyway for safety.
    if not re.match(r"^[0-9a-fA-F\-]{1,64}$", jti):
        raise HTTPException(status_code=400, detail="잘못된 session_id")
    return _sessions_root() / jti


def _session_meta_path(jti: str) -> Path:
    return _session_dir(jti) / "meta.json"


def _session_data_path(jti: str) -> Path:
    return _session_dir(jti) / "data.part"


def _load_session(jti: str) -> dict:
    meta_file = _session_meta_path(jti)
    if not meta_file.exists():
        raise HTTPException(status_code=404, detail="업로드 세션을 찾을 수 없습니다")
    return json.loads(meta_file.read_text(encoding="utf-8"))


def _save_session(jti: str, meta: dict) -> None:
    _session_meta_path(jti).write_text(json.dumps(meta), encoding="utf-8")


def _cleanup_session(jti: str) -> None:
    shutil.rmtree(_session_dir(jti), ignore_errors=True)


def _verify_session_token(authorization: str, session_id: str) -> dict:
    if not authorization.startswith("Bearer "):
        raise HTTPException(status_code=401, detail="Authorization 헤더 누락")
    token = authorization[len("Bearer "):]
    payload = verify_upload_token_no_consume(token)
    if payload.get("jti") != session_id:
        raise HTTPException(status_code=403, detail="토큰과 세션 ID 불일치")
    return payload


def _supabase() -> Client:
@@ -193,6 +253,157 @@ async def upload(
        logger.exception("부분 파일 정리 실패: %s: %s", target, e)
# ── Chunked upload (resumable) ──────────────────────────────────────────────
# The same token issued by mint-token is reused across the whole init → chunk* → complete flow.
# jti = session_id. Only init consumes the jti; chunk/complete/abort verify without consuming.


@router.post("/upload/init", response_model=InitUploadResponse)
async def upload_init(authorization: str = Header("")):
    if not authorization.startswith("Bearer "):
        raise HTTPException(status_code=401, detail="Authorization 헤더 누락")
    token = authorization[len("Bearer "):]
    payload = verify_upload_token(token)  # only init consumes the jti
    tier = payload["tier"]
    label = payload["label"]
    filename = _check_filename(payload["filename"])
    expected_size = int(payload["size_bytes"])
    jti = payload["jti"]

    PACK_BASE_DIR.mkdir(parents=True, exist_ok=True)
    if (PACK_BASE_DIR / filename).exists():
        raise HTTPException(status_code=409, detail="이미 존재하는 파일명입니다")

    sdir = _session_dir(jti)
    if sdir.exists():
        raise HTTPException(status_code=409, detail="이미 시작된 세션입니다")
    sdir.mkdir(parents=True, exist_ok=True)
    _session_data_path(jti).touch()
    _save_session(jti, {
        "filename": filename,
        "expected_size": expected_size,
        "tier": tier,
        "label": label,
        "written": 0,
        "expires_at": int(payload["expires_at"]),
    })
    return InitUploadResponse(
        session_id=jti,
        chunk_max_size=CHUNK_MAX_SIZE,
        expected_size=expected_size,
        expires_at=datetime.fromtimestamp(payload["expires_at"], tz=timezone.utc),
    )


@router.put("/upload/{session_id}/chunk", response_model=ChunkUploadResponse)
async def upload_chunk(
    session_id: str,
    request: Request,
    offset: int = 0,
    authorization: str = Header(""),
):
    _verify_session_token(authorization, session_id)
    meta = _load_session(session_id)
    if offset != meta["written"]:
        raise HTTPException(
            status_code=409,
            detail=f"offset {offset} 불일치 (현재 written={meta['written']})",
            headers={"X-Current-Offset": str(meta["written"])},
        )
    body = await request.body()
    if not body:
        raise HTTPException(status_code=400, detail="청크가 비어 있음")
    if len(body) > CHUNK_MAX_SIZE:
        raise HTTPException(status_code=413, detail=f"청크 크기 {CHUNK_MAX_SIZE} 초과")
    if meta["written"] + len(body) > meta["expected_size"]:
        raise HTTPException(status_code=413, detail="누적 크기 expected_size 초과")
    with _session_data_path(session_id).open("ab") as f:
        f.write(body)
    meta["written"] += len(body)
    _save_session(session_id, meta)
    return ChunkUploadResponse(written=meta["written"], expected_size=meta["expected_size"])


@router.get("/upload/{session_id}/status", response_model=ChunkUploadResponse)
async def upload_status(
    session_id: str,
    authorization: str = Header(""),
):
    _verify_session_token(authorization, session_id)
    meta = _load_session(session_id)
    return ChunkUploadResponse(written=meta["written"], expected_size=meta["expected_size"])


@router.post("/upload/{session_id}/complete", response_model=UploadResponse)
async def upload_complete(
    session_id: str,
    authorization: str = Header(""),
):
    _verify_session_token(authorization, session_id)
    meta = _load_session(session_id)
    if meta["written"] != meta["expected_size"]:
        raise HTTPException(
            status_code=400,
            detail=f"미완료: written={meta['written']} expected={meta['expected_size']}",
        )
    filename = meta["filename"]
    target = PACK_BASE_DIR / filename
    if target.exists():
        raise HTTPException(status_code=409, detail="이미 존재하는 파일명입니다")
    data_file = _session_data_path(session_id)
    data_file.replace(target)  # atomic rename within the same filesystem

    host_path = PACK_HOST_DIR / filename
    sb = _supabase()
    file_id = str(uuid.uuid4())
    try:
        res = sb.table("pack_files").insert({
            "id": file_id,
            "min_tier": meta["tier"],
            "label": meta["label"],
            "file_path": str(host_path),
            "filename": filename,
            "size_bytes": meta["written"],
        }).execute()
    except Exception as e:
        logger.exception("Supabase INSERT 예외 (chunked complete): filename=%s", filename)
        target.unlink(missing_ok=True)
        raise HTTPException(status_code=500, detail=f"DB INSERT 실패: {e}") from e
    if not res.data:
        target.unlink(missing_ok=True)
        raise HTTPException(status_code=500, detail="DB INSERT 실패")
    _cleanup_session(session_id)
    return UploadResponse(
        file_id=file_id,
        file_path=str(host_path),
        filename=filename,
        size_bytes=meta["written"],
        min_tier=meta["tier"],
        label=meta["label"],
        uploaded_at=res.data[0]["uploaded_at"],
    )


@router.delete("/upload/{session_id}")
async def upload_abort(
    session_id: str,
    authorization: str = Header(""),
):
    _verify_session_token(authorization, session_id)
    _cleanup_session(session_id)
    return {"ok": True}
@router.get("/list", response_model=list[PackFileItem])
async def list_files(
    request: Request,


@@ -248,6 +248,241 @@ def test_list_filters_deleted():
    fake_supabase.table.return_value.select.return_value.is_.assert_called_with("deleted_at", "null")
def _mint(filename: str, size: int, jti: str | None = None) -> str:
    return auth.mint_upload_token({
        "tier": "pro",
        "label": "샘플",
        "filename": filename,
        "size_bytes": size,
        "jti": jti or str(uuid.uuid4()),
        "expires_at": int(time.time()) + 1800,
    })
def test_chunk_upload_full_flow(tmp_path, monkeypatch):
    """init → chunk(0) → chunk(N) → complete happy path."""
    monkeypatch.setattr("app.routes.PACK_BASE_DIR", tmp_path)
    from pathlib import Path
    monkeypatch.setattr("app.routes.PACK_HOST_DIR", Path("/volume1/host"))
    fake_supabase = MagicMock()
    fake_supabase.table.return_value.insert.return_value.execute.return_value = MagicMock(
        data=[{"uploaded_at": "2026-05-12T00:00:00+00:00"}]
    )
    payload = b"a" * 100 + b"b" * 50  # 150 bytes total
    chunk1 = payload[:100]
    chunk2 = payload[100:]
    jti = str(uuid.uuid4())
    token = _mint("chunk_full.zip", len(payload), jti=jti)
    headers = {"Authorization": f"Bearer {token}"}

    with patch("app.routes._supabase", return_value=fake_supabase):
        test_client = TestClient(app)
        # init
        r = test_client.post("/api/packs/upload/init", headers=headers)
        assert r.status_code == 200, r.text
        sid = r.json()["session_id"]
        assert sid == jti
        assert r.json()["expected_size"] == 150
        # chunk 1 (offset=0)
        r = test_client.put(
            f"/api/packs/upload/{sid}/chunk?offset=0",
            content=chunk1,
            headers=headers,
        )
        assert r.status_code == 200, r.text
        assert r.json()["written"] == 100
        # chunk 2 (offset=100)
        r = test_client.put(
            f"/api/packs/upload/{sid}/chunk?offset=100",
            content=chunk2,
            headers=headers,
        )
        assert r.status_code == 200
        assert r.json()["written"] == 150
        # complete
        r = test_client.post(f"/api/packs/upload/{sid}/complete", headers=headers)
        assert r.status_code == 200, r.text
        body = r.json()
        assert body["filename"] == "chunk_full.zip"
        assert body["size_bytes"] == 150
        assert body["file_path"] == "/volume1/host/chunk_full.zip" or body["file_path"].endswith("chunk_full.zip")

    # the file moved to its final location and the session was cleaned up
    assert (tmp_path / "chunk_full.zip").read_bytes() == payload
    assert not (tmp_path / ".uploads" / sid).exists()
def test_chunk_upload_offset_mismatch(tmp_path, monkeypatch):
    """Wrong offset → 409 + X-Current-Offset header."""
    monkeypatch.setattr("app.routes.PACK_BASE_DIR", tmp_path)
    jti = str(uuid.uuid4())
    token = _mint("offset_mismatch.zip", 100, jti=jti)
    headers = {"Authorization": f"Bearer {token}"}
    test_client = TestClient(app)
    r = test_client.post("/api/packs/upload/init", headers=headers)
    assert r.status_code == 200
    sid = r.json()["session_id"]
    # wrong offset (sent 10, should be 0)
    r = test_client.put(
        f"/api/packs/upload/{sid}/chunk?offset=10",
        content=b"x" * 10,
        headers=headers,
    )
    assert r.status_code == 409
    assert r.headers.get("X-Current-Offset") == "0"
def test_chunk_upload_status(tmp_path, monkeypatch):
    """status reports the current written count."""
    monkeypatch.setattr("app.routes.PACK_BASE_DIR", tmp_path)
    jti = str(uuid.uuid4())
    token = _mint("status_check.zip", 50, jti=jti)
    headers = {"Authorization": f"Bearer {token}"}
    test_client = TestClient(app)
    r = test_client.post("/api/packs/upload/init", headers=headers)
    sid = r.json()["session_id"]
    # empty session
    r = test_client.get(f"/api/packs/upload/{sid}/status", headers=headers)
    assert r.status_code == 200
    assert r.json()["written"] == 0
    assert r.json()["expected_size"] == 50
    # after a partial upload
    test_client.put(
        f"/api/packs/upload/{sid}/chunk?offset=0",
        content=b"x" * 20,
        headers=headers,
    )
    r = test_client.get(f"/api/packs/upload/{sid}/status", headers=headers)
    assert r.json()["written"] == 20
def test_chunk_upload_abort(tmp_path, monkeypatch):
    """DELETE cleans up the session directory."""
    monkeypatch.setattr("app.routes.PACK_BASE_DIR", tmp_path)
    jti = str(uuid.uuid4())
    token = _mint("abort_test.zip", 30, jti=jti)
    headers = {"Authorization": f"Bearer {token}"}
    test_client = TestClient(app)
    test_client.post("/api/packs/upload/init", headers=headers)
    test_client.put(
        f"/api/packs/upload/{jti}/chunk?offset=0",
        content=b"y" * 10,
        headers=headers,
    )
    assert (tmp_path / ".uploads" / jti).exists()
    r = test_client.delete(f"/api/packs/upload/{jti}", headers=headers)
    assert r.status_code == 200
    assert not (tmp_path / ".uploads" / jti).exists()
def test_chunk_upload_wrong_token(tmp_path, monkeypatch):
    """Calling chunk with a token for a different jti → 403."""
    monkeypatch.setattr("app.routes.PACK_BASE_DIR", tmp_path)
    # start session A
    jti_a = str(uuid.uuid4())
    token_a = _mint("wrong_token_a.zip", 30, jti=jti_a)
    headers_a = {"Authorization": f"Bearer {token_a}"}
    test_client = TestClient(app)
    test_client.post("/api/packs/upload/init", headers=headers_a)
    # call session A's chunk endpoint with session B's token
    jti_b = str(uuid.uuid4())
    token_b = _mint("wrong_token_b.zip", 30, jti=jti_b)
    headers_b = {"Authorization": f"Bearer {token_b}"}
    r = test_client.put(
        f"/api/packs/upload/{jti_a}/chunk?offset=0",
        content=b"z" * 10,
        headers=headers_b,
    )
    assert r.status_code == 403
def test_chunk_upload_complete_incomplete(tmp_path, monkeypatch):
    """complete before reaching expected_size → 400."""
    monkeypatch.setattr("app.routes.PACK_BASE_DIR", tmp_path)
    jti = str(uuid.uuid4())
    token = _mint("incomplete.zip", 100, jti=jti)
    headers = {"Authorization": f"Bearer {token}"}
    test_client = TestClient(app)
    test_client.post("/api/packs/upload/init", headers=headers)
    test_client.put(
        f"/api/packs/upload/{jti}/chunk?offset=0",
        content=b"q" * 50,
        headers=headers,
    )
    r = test_client.post(f"/api/packs/upload/{jti}/complete", headers=headers)
    assert r.status_code == 400
    assert "미완료" in r.json()["detail"]
def test_chunk_init_filename_collision(tmp_path, monkeypatch):
    """init returns 409 when the filename already exists in PACK_BASE_DIR."""
    monkeypatch.setattr("app.routes.PACK_BASE_DIR", tmp_path)
    (tmp_path / "existing.zip").write_bytes(b"already here")
    token = _mint("existing.zip", 100)
    r = TestClient(app).post(
        "/api/packs/upload/init",
        headers={"Authorization": f"Bearer {token}"},
    )
    assert r.status_code == 409
def test_chunk_upload_stores_host_path(tmp_path, monkeypatch):
    """The file_path stored in Supabase on complete is based on PACK_HOST_DIR."""
    from pathlib import Path
    container_base = tmp_path / "container"
    host_base = Path("/volume1/host/packs")
    monkeypatch.setattr("app.routes.PACK_BASE_DIR", container_base)
    monkeypatch.setattr("app.routes.PACK_HOST_DIR", host_base)
    captured = {}
    fake_supabase = MagicMock()

    def capture_insert(payload):
        captured.update(payload)
        m = MagicMock()
        m.execute.return_value = MagicMock(data=[{"uploaded_at": "2026-05-12T00:00:00+00:00"}])
        return m

    fake_supabase.table.return_value.insert.side_effect = capture_insert
    jti = str(uuid.uuid4())
    token = _mint("hostpath_chunk.zip", 5, jti=jti)
    headers = {"Authorization": f"Bearer {token}"}
    with patch("app.routes._supabase", return_value=fake_supabase):
        c = TestClient(app)
        c.post("/api/packs/upload/init", headers=headers)
        c.put(f"/api/packs/upload/{jti}/chunk?offset=0", content=b"hello", headers=headers)
        r = c.post(f"/api/packs/upload/{jti}/complete", headers=headers)
    assert r.status_code == 200
    assert captured["file_path"] == str(host_base / "hostpath_chunk.zip")
def test_upload_stores_host_path_not_container_path(tmp_path, monkeypatch):
    """On upload, the file_path stored in Supabase must be the PACK_HOST_DIR (NAS host) absolute path, not PACK_BASE_DIR (container).