61 Commits

Author SHA1 Message Date
05e7ffdfd9 Add sell-history update API (PUT /api/portfolio/sell-history/:id)
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-19 23:02:11 +09:00
c7401c5d9f Add stock sell-history API (/api/portfolio/sell-history)
- Create new sell_history table (db.py init_db)
- Add CRUD functions: add_sell_history, get_sell_history, delete_sell_history
- GET /api/portfolio/sell-history (broker, days filters)
- POST /api/portfolio/sell-history (returns the saved record including its id)
- DELETE /api/portfolio/sell-history/{record_id}

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-19 22:31:17 +09:00
5d6fe2f04b Add subscription management API (/api/subscription)
- subscription_items table: subscription list CRUD (GET/POST/PUT/DELETE)
- subscription_profile table: singleton profile of my subscription conditions (GET/PUT, upsert)
- specialQuals stored as a JSON array; bool → int conversion for SQLite

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-16 02:18:06 +09:00
2926770d6f Add real-estate subscription complex management API
- Add realestate_complexes table (lotto.db)
- 4 CRUD endpoints: GET/POST /api/realestate/complexes, PUT/DELETE /api/realestate/complexes/:id
- Validation for status (청약예정|청약중|결과발표|완료) and priority (high|normal|low)

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-16 01:23:28 +09:00
197d451d5f Update README.md 2026-03-11 08:30:47 +09:00
f45041d46c Add blog post authoring API 2026-03-11 08:07:24 +09:00
483963b463 Add APIs to improve asset-management efficiency 2026-03-07 03:44:14 +09:00
11423e5106 Add todo list API 2026-03-05 01:19:57 +09:00
de7468b256 Add real-account cash balance info for stocks 2026-03-02 19:00:51 +09:00
d6d2eb0787 Add real-account cash balance info for stocks 2026-03-02 18:44:51 +09:00
136aea8aee fix: nginx routing for the portfolio API (match without trailing slash)
Changed /api/portfolio/ → /api/portfolio so that requests without a
trailing slash are also proxied correctly to stock-lab

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-02-26 00:50:59 +09:00
ea9eb749aa Add display of real stock account info 2026-02-25 23:49:28 +09:00
71d9d7a571 Strengthen lotto lab recommendation algorithm and simulation 2026-02-23 22:32:14 +09:00
c96815c2e3 Fix stock-lab errors; add heatmap-based recommendations to lotto-lab 2026-02-05 01:26:20 +09:00
4035432c54 Fix errors caused by leftover dead code in stock-lab 2026-01-28 01:22:52 +09:00
d28c291a55 Remove auto-trade requests from stock-lab; add manual trade requests 2026-01-28 01:20:31 +09:00
21a8173963 Fix overseas major-index errors and add the KRW/USD exchange rate 2026-01-28 00:55:47 +09:00
f6fcff0faf Add overseas stock index responses 2026-01-28 00:44:09 +09:00
55863d7744 Add logging of auto-trade responses 2026-01-27 03:12:50 +09:00
a330a5271c Fix /api/trade/auto request errors to match the spec 2026-01-27 03:06:59 +09:00
e27fbfada1 Add explicit logging for incoming requests 2026-01-27 03:00:07 +09:00
7fb55a7be7 Fix api/trade not working: allow CORS 2026-01-27 02:53:44 +09:00
9a8df4908a Configure api/trade so the nginx in front of the NAS forwards requests to the Docker container 2026-01-27 02:28:56 +09:00
a8cbef75db Set up the Windows PC AI server and wire up the NAS relay server 2026-01-27 01:26:23 +09:00
b6fd444dba Document the current state of the service in the README 2026-01-27 01:08:05 +09:00
f2e23c1241 Windows PC: how to set up an Ollama server 2026-01-26 23:39:08 +09:00
c6850da4ac Integrate the stock brokerage API; start Windows PC AI integration 2026-01-26 22:31:56 +09:00
8283dab0de fix: use correct mainnews endpoint for overseas news 2026-01-26 04:05:30 +09:00
9faa1c5715 fix: update overseas news API url and key mapping 2026-01-26 04:03:09 +09:00
0e2d241e18 fix: handle list response from Naver API in scraper 2026-01-26 04:00:48 +09:00
84c5877207 fix: rewrite scraper.py to fix syntax errors and use mobile API 2026-01-26 03:57:30 +09:00
cbafc1f959 Fix scraper.py errors 2026-01-26 03:51:08 +09:00
dce6b3e692 Fix scraper.py errors 2026-01-26 03:48:51 +09:00
3d0dd24f27 feat: add overseas financial news and indices support 2026-01-26 03:45:19 +09:00
2fafce0327 fix: change manual scrap endpoint to /api/stock/scrap 2026-01-26 03:31:26 +09:00
25ede4f478 fix: import Any in stock-lab scraper 2026-01-26 03:23:54 +09:00
2493bc72fb Allow additional stock_lab deploy paths 2026-01-26 03:19:25 +09:00
dd6435eb86 Miscellaneous extra configuration 2026-01-26 03:17:59 +09:00
94db1da045 feat: add stock indices scraping and update healthcheck 2026-01-26 03:14:46 +09:00
d8e4e0461c feat: add stock-lab service for financial news scraping and analysis 2026-01-26 02:56:52 +09:00
421e52b205 refactor: extract utils to fix circular import and enable smart generator 2026-01-26 01:21:57 +09:00
526d6a53e5 Add file permission settings 2026-01-26 01:17:40 +09:00
432840a38d feat: smart recommendation generator with feedback loop and result checker 2026-01-26 01:15:49 +09:00
597353e6d4 feat: auto-sync full lotto history on stats api access 2026-01-26 00:45:32 +09:00
bd43c99221 fix: revert deployer to root and fix permissions in script 2026-01-26 00:35:26 +09:00
2c95fe49f3 fix: healthcheck url 2026-01-26 00:28:13 +09:00
8ccfc32749 Add healthcheck (API test) script 2026-01-26 00:20:36 +09:00
67ef3c4bbf Fix file permissions being changed during git deploys 2026-01-26 00:12:51 +09:00
ee54458bf0 fix: deployer webhook timeout - implement async background task 2026-01-26 00:05:17 +09:00
e1c3168d5c fix: deploy.sh path detection for host execution 2026-01-26 00:01:08 +09:00
2d5972c25d Add API for visualizing winning numbers across all past lotto draws (lotto lab) 2026-01-25 23:56:00 +09:00
1ddbd4ad0e Implement statistics visualization for lotto recommendation results (distribution, sum, odd/even) 2026-01-25 22:40:39 +09:00
f75bf5d3e5 Fix deployer script permission error 2026-01-25 21:37:33 +09:00
0fde916120 Optimize travel-proxy image thumbnail loading; add pagination 2026-01-25 19:40:26 +09:00
64c526488a fix: deploy-nas.sh path detection for host execution 2026-01-25 19:10:23 +09:00
c655b655c9 Fix deploy-nas.sh execution error 2026-01-25 19:09:04 +09:00
005c0261c2 Step-by-step setup: local PC dev/test → git deploy → gitea webhook 2026-01-25 19:00:22 +09:00
879bb2f25d Organize README.md around the current spec 2026-01-25 17:42:19 +09:00
82cbae7ae2 Fix webhook configuration error
- Fix misconfigured deployer deployment webhook
2026-01-25 17:28:58 +09:00
a8b661b304 rename script folder 2026-01-25 15:44:39 +09:00
b815c37064 Configure webhook-based auto deploy 2026-01-25 11:51:39 +09:00
30 changed files with 4111 additions and 181 deletions

.env.example

@@ -1,17 +1,52 @@
-# timezone
+# ---------------------------------------------------------------------------
+# [Environment Configuration]
+# Copy this file to .env, then uncomment/adjust the values for your environment.
+# ---------------------------------------------------------------------------
+# [COMMON]
+APP_VERSION=dev
 TZ=Asia/Seoul
+COMPOSE_PROJECT_NAME=webpage
 # backend lotto collector sources
 LOTTO_ALL_URL=https://smok95.github.io/lotto/results/all.json
 LOTTO_LATEST_URL=https://smok95.github.io/lotto/results/latest.json
-# travel-proxy
-TRAVEL_ROOT=/data/travel
-TRAVEL_THUMB_ROOT=/data/thumbs
-TRAVEL_MEDIA_BASE=/media/travel
-TRAVEL_CACHE_TTL=300
-# CORS (travel-proxy)
-CORS_ALLOW_ORIGINS=*
+# [SECURITY]
+WEBHOOK_SECRET=change_this_secret_in_prod
+# [PATHS]
+# 1. Runtime data root (where docker-compose.yml runs)
+#    NAS: /volume1/docker/webpage
+#    Local: . (current project root)
+RUNTIME_PATH=.
+# 2. Git repository root
+#    NAS: /volume1/workspace/web-page-backend
+#    Local: .
+REPO_PATH=.
+# 3. Frontend static file path
+#    NAS: /volume1/docker/webpage/frontend (uploaded files)
+#    Local: ./frontend/dist (build output)
+FRONTEND_PATH=./frontend/dist
+# 4. Travel photo originals path
+#    NAS: /volume1/web/images/webPage/travel
+#    Local: ./mock_data/photos
+PHOTO_PATH=./mock_data/photos
+# 5. Stock data path
+#    NAS: /volume1/docker/webpage/data/stock
+#    Local: ./data/stock
+STOCK_DATA_PATH=./data/stock
+# [PERMISSIONS]
+#    NAS: 1026:100
+#    Local: 1000:1000 (largely irrelevant on Windows Docker Desktop)
+PUID=1000
+PGID=1000
+# [STOCK LAB]
+# The NAS only relays (proxies) requests to the Windows AI Server.
+# Actual KIS API calls and AI analysis run on the Windows PC.
+# Windows AI Server (the Windows PC IP as seen from the NAS)
+WINDOWS_AI_SERVER_URL=http://192.168.0.5:8000

README.md

@@ -0,0 +1,334 @@
# web-backend
Personal web-platform backend monorepo running on a Synology NAS.
Runs lotto analysis, a stock portfolio, a travel album, a blog, and a todo list as one service.
---
## Services
```
┌─────────────────────────────────────────────────────────────┐
│ lotto-frontend (Nginx:8080) │
│ ├── Static SPA serving (React + Vite) │
│ └── API reverse proxy │
│ ├── /api/ → lotto-backend:8000 │
│ ├── /api/stock/ → stock-lab:8000 │
│ ├── /api/trade/ → stock-lab:8000 │
│ ├── /api/portfolio → stock-lab:8000 │
│ ├── /api/travel/ → travel-proxy:8000 │
│ └── /webhook → deployer:9000 │
└─────────────────────────────────────────────────────────────┘
```
| Container | Port | Role |
|---------|------|------|
| `lotto-backend` | 18000 | Lotto, blog, and todo APIs |
| `stock-lab` | 18500 | Stock news, portfolio, asset tracking |
| `travel-proxy` | 19000 | Travel photo API + thumbnail generation |
| `lotto-frontend` | 8080 | SPA serving + reverse proxy |
| `webpage-deployer` | 19010 | Gitea webhook → automatic deploy |
---
## Directory layout
```
web-backend/
├── backend/                 # lotto-backend service (Python/FastAPI)
│   ├── app/
│   │   ├── main.py          # routers, scheduler
│   │   ├── db.py            # SQLite CRUD (7 tables)
│   │   ├── generator.py     # Monte Carlo simulation engine
│   │   ├── analyzer.py      # 5 statistical analyses
│   │   ├── checker.py       # scores winning results
│   │   ├── collector.py     # lotto data collection
│   │   ├── recommender.py   # recommendation algorithms
│   │   └── utils.py         # metric calculations
│   └── Dockerfile
├── stock-lab/               # stock-lab service (Python/FastAPI)
│   ├── app/
│   │   ├── main.py          # routers, scheduler
│   │   ├── db.py            # SQLite CRUD (4 tables)
│   │   ├── scraper.py       # Naver Finance news scraping
│   │   ├── price_fetcher.py # current-price lookup (3-minute cache)
│   │   └── holidays.json    # Korean stock market holidays
│   └── Dockerfile
├── travel-proxy/            # travel-proxy service (Python/FastAPI)
│   ├── app/
│   │   └── main.py          # photo API, thumbnail generation (Pillow)
│   └── Dockerfile
├── deployer/                # Gitea webhook receiver → automatic deploy
│   ├── app.py               # HMAC SHA256 verification + deploy trigger
│   └── Dockerfile
├── nginx/
│   └── default.conf         # reverse proxy + SPA + caching
├── scripts/
│   ├── deploy.sh            # production deploy (git pull → rsync → compose up)
│   ├── deploy-nas.sh        # rsync-only script
│   └── healthcheck.sh       # health check across all services
├── docker-compose.yml
├── .env.example
└── CLAUDE.md
```
---
## Quick start (local development)
```bash
# 1. Set up environment variables
cp .env.example .env
# 2. Start the containers (runs out of the box with the .env defaults)
docker compose up -d
# 3. Verify
curl http://localhost:18000/health
curl http://localhost:18500/health
```
| Service | Local URL |
|--------|----------|
| Frontend + API | http://localhost:8080 |
| lotto-backend | http://localhost:18000 |
| stock-lab | http://localhost:18500 |
| travel-proxy | http://localhost:19000 |
---
## API reference
### lotto-backend (`/api/`)
#### Lotto
| Method | Path | Description |
|--------|------|------|
| GET | `/api/lotto/latest` | Latest winning numbers |
| GET | `/api/lotto/{drw_no}` | Specific draw |
| GET | `/api/lotto/stats` | Number frequency statistics |
| GET | `/api/lotto/analysis` | Report with the 5 statistical analyses |
| GET | `/api/lotto/best` | Best simulated numbers (default 20 sets) |
| GET | `/api/lotto/simulation` | Detailed simulation results |
| GET | `/api/lotto/recommend` | Statistics-based recommendation |
| GET | `/api/lotto/recommend/heatmap` | Heatmap-based recommendation |
| GET | `/api/lotto/recommend/batch` | Batch recommendations |
| POST | `/api/admin/simulate` | Run a simulation manually |
| POST | `/api/admin/sync_latest` | Manually sync winning numbers |
#### Recommendation history
| Method | Path | Description |
|--------|------|------|
| GET | `/api/history` | List (limit, offset, favorite, tag, sort) |
| PATCH | `/api/history/{id}` | Update favorite, memo, tags |
| DELETE | `/api/history/{id}` | Delete |
#### Todo list
| Method | Path | Description |
|--------|------|------|
| GET | `/api/todos` | Full list |
| POST | `/api/todos` | Create (status: todo\|in_progress\|done) |
| PUT | `/api/todos/{id}` | Update |
| DELETE | `/api/todos/done` | Bulk-delete completed items |
| DELETE | `/api/todos/{id}` | Delete one item |
> ⚠️ The `/done` route must be registered before `/{id}` (see the sketch below)
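A minimal sketch of why registration order matters in FastAPI (illustrative handlers; the real ones live in `backend/app/main.py`): a parameterized route registered first would capture the literal path.

```python
from fastapi import FastAPI

app = FastAPI()

# Registered first, so the literal path wins.
@app.delete("/api/todos/done")
def delete_done_todos():
    return {"deleted": "done items"}

# If this were registered first, DELETE /api/todos/done would match here
# with todo_id == "done" and delete the wrong thing.
@app.delete("/api/todos/{todo_id}")
def delete_todo(todo_id: str):
    return {"deleted": todo_id}
```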
#### Blog
| Method | Path | Description |
|--------|------|------|
| GET | `/api/blog/posts` | List posts (`{"posts": [...]}`, date DESC) |
| POST | `/api/blog/posts` | Create a post (defaults to today when date is omitted) |
| PUT | `/api/blog/posts/{id}` | Update a post |
| DELETE | `/api/blog/posts/{id}` | Delete a post |
Blog post shape: `{ id, title, tags[], body, date, excerpt, created_at, updated_at }`
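A hedged usage sketch with `requests` (the base URL assumes the local quick-start setup; the payload fields follow the shape above):

```python
import requests

BASE = "http://localhost:8080"  # local quick-start URL

# Create a post; "date" may be omitted (the server fills in today).
created = requests.post(f"{BASE}/api/blog/posts", json={
    "title": "Hello NAS",
    "tags": ["dev", "nas"],
    "body": "First post.",
}).json()

# List posts, newest date first.
posts = requests.get(f"{BASE}/api/blog/posts").json()["posts"]
```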
---
### stock-lab (`/api/stock/`, `/api/trade/`, `/api/portfolio`)
#### News & indices
| Method | Path | Description |
|--------|------|------|
| GET | `/api/stock/news` | News list (limit, category) |
| GET | `/api/stock/indices` | Major indices (KOSPI etc.) |
| POST | `/api/stock/scrap` | Manually scrape news |
#### Real account (proxied to the Windows AI server)
| Method | Path | Description |
|--------|------|------|
| GET | `/api/trade/balance` | Real-account balance |
| POST | `/api/trade/order` | Place an order (BUY\|SELL; price=0 means market order) |
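An illustrative order call (only the BUY|SELL side and the price=0 market-order convention come from the table above; the other payload field names are assumptions):

```python
import requests

BASE = "http://localhost:8080"

# Real-account balance, relayed by the NAS to the Windows AI server.
balance = requests.get(f"{BASE}/api/trade/balance").json()

# Hypothetical payload shape; price=0 → market order per the convention above.
order = {"ticker": "005930", "side": "BUY", "quantity": 1, "price": 0}
resp = requests.post(f"{BASE}/api/trade/order", json=order)
```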
#### Portfolio
| Method | Path | Description |
|--------|------|------|
| GET | `/api/portfolio` | Full view (includes current prices, P&L, cash) |
| POST | `/api/portfolio` | Add a holding |
| PUT | `/api/portfolio/{id}` | Update a holding |
| DELETE | `/api/portfolio/{id}` | Delete a holding |
| GET | `/api/portfolio/cash` | All cash balances |
| PUT | `/api/portfolio/cash` | Upsert a cash balance |
| DELETE | `/api/portfolio/cash/{broker}` | Delete a cash balance |
| POST | `/api/portfolio/snapshot` | Manually save a total-asset snapshot |
| GET | `/api/portfolio/snapshot/history` | Asset history (days=0: everything) |
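A short usage sketch (the cash payload fields are assumptions based on the broker_cash table described below):

```python
import requests

BASE = "http://localhost:8080"

# Upsert cash for one broker, then trigger a manual total-asset snapshot.
requests.put(f"{BASE}/api/portfolio/cash", json={"broker": "KIS", "cash": 1_000_000})
requests.post(f"{BASE}/api/portfolio/snapshot")

# Full portfolio view with current prices, P&L, and cash.
portfolio = requests.get(f"{BASE}/api/portfolio").json()
```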
---
### travel-proxy (`/api/travel/`)
| Method | Path | Description |
|--------|------|------|
| GET | `/api/travel/regions` | Region GeoJSON |
| GET | `/api/travel/photos` | Photo list (region, page, size) |
| POST | `/api/travel/reload` | Reset the cache |
- Thumbnails: `/media/travel/.thumb/{album}/{file}` (served directly by nginx, 30-day cache)
- Originals: `/media/travel/{album}/{file}` (served directly by nginx, 7-day cache)
---
## Core logic
### Monte Carlo simulation (lotto-backend)
```
Analyze historical winning numbers → derive per-number weights
→ generate 20,000 candidates via weighted probability sampling
→ score each combination with 5 techniques
→ store the top 100 in the DB → replace the 20 best_picks
```
**5 scoring techniques:**
| Technique | Weight | What it measures |
|------|--------|------|
| Frequency Z-score | 25% | Z-score deviation of each number's appearance frequency |
| Combination fingerprint | 30% | Sum vs. normal distribution + odd/even ratio + zone distribution |
| Gap analysis | 20% | Draws elapsed since last appearance |
| Co-occurrence | 15% | How often number pairs appear in the same draw |
| Diversity | 10% | Consecutive numbers, range, zone coverage |
**Schedule:** daily at 0, 4, 8, 12, 16, 20h (6 runs a day, ~5 minutes each)
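The total is a weighted sum of the five component scores; a minimal sketch (weights from the table above; `score_combination` in `backend/app/analyzer.py` is the authoritative implementation):

```python
WEIGHTS = {
    "frequency": 0.25,
    "fingerprint": 0.30,
    "gap": 0.20,
    "cooccur": 0.15,
    "diversity": 0.10,
}

def total_score(component_scores: dict[str, float]) -> float:
    # Each component is already normalized to the 0..1 range.
    return sum(component_scores[name] * w for name, w in WEIGHTS.items())
```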
### Total asset snapshot (stock-lab)
```
Runs automatically on weekdays at 15:40 → skips holidays via holidays.json
→ fetch current prices for the portfolio → total_eval
→ sum cash balances → total_cash
→ upsert into asset_snapshots (date UNIQUE; a same-day rerun overwrites)
```
### Current price lookup (stock-lab)
- Naver mobile API first (`m.stock.naver.com/api/stock/{ticker}/basic`)
- Falls back to parsing the Naver Finance HTML page on failure
- 3-minute TTL in-memory cache
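A sketch of the cache-plus-fallback shape (the endpoint comes from the bullet above; the `closePrice` field name and the `_parse_finance_html` fallback are assumptions):

```python
import time
import requests

_cache: dict[str, tuple[float, float]] = {}  # ticker -> (fetched_at, price)
TTL = 180  # 3 minutes

def _parse_finance_html(ticker: str) -> float:
    # Placeholder for the HTML-parsing fallback (not reproduced here).
    raise NotImplementedError

def get_price(ticker: str) -> float:
    now = time.time()
    if ticker in _cache and now - _cache[ticker][0] < TTL:
        return _cache[ticker][1]  # fresh enough, skip the network
    try:
        r = requests.get(
            f"https://m.stock.naver.com/api/stock/{ticker}/basic", timeout=5
        )
        r.raise_for_status()
        price = float(r.json()["closePrice"].replace(",", ""))  # assumed field
    except Exception:
        price = _parse_finance_html(ticker)
    _cache[ticker] = (now, price)
    return price
```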
### Travel photo thumbnails (travel-proxy)
- Resized to 480×480 (Pillow), original format preserved (JPEG/PNG/WEBP)
- Generated on demand, then cached permanently under `/data/thumbs/`
- Atomicity: write to a tmp file, then rename
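A sketch of the tmp-write-then-rename pattern (`make_thumb` is a hypothetical helper; the actual code lives in `travel-proxy/app/main.py`):

```python
import os
from PIL import Image

def make_thumb(src: str, dst: str) -> None:
    with Image.open(src) as img:
        fmt = img.format  # JPEG / PNG / WEBP, preserved in the output
        img.thumbnail((480, 480))  # in-place resize, keeps aspect ratio
        tmp = dst + ".tmp"
        img.save(tmp, format=fmt)
    # os.replace() is an atomic rename on the same filesystem, so a reader
    # never sees a half-written thumbnail.
    os.replace(tmp, dst)
```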
---
## Automatic deployment
```
git push → Gitea → X-Gitea-Signature (HMAC SHA256)
→ deployer:9000/webhook (signature check via compare_digest)
→ BackgroundTask: scripts/deploy.sh (10-minute timeout)
   1. git pull
   2. back up to .releases/{timestamp}/
   3. rsync (repo → runtime)
   4. docker compose up -d --build
   5. chown PUID:PGID
```
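The signature check in `deployer/app.py` follows the standard Gitea webhook scheme; a minimal sketch:

```python
import hashlib
import hmac
import os

def verify_signature(raw_body: bytes, signature_hex: str) -> bool:
    # Gitea sends hex(HMAC_SHA256(secret, raw_body)) in X-Gitea-Signature.
    secret = os.environ["WEBHOOK_SECRET"].encode()
    expected = hmac.new(secret, raw_body, hashlib.sha256).hexdigest()
    # compare_digest keeps the comparison constant-time.
    return hmac.compare_digest(expected, signature_hex)
```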
> The frontend is **not auto-deployed**: build locally and upload it to the NAS manually
---
## Databases
### lotto.db (`/app/data/lotto.db`)
| Table | Description |
|--------|------|
| `draws` | Lotto winning numbers |
| `recommendations` | Recommendation history (favorites, tags, scoring) |
| `simulation_runs` | Simulation run records |
| `simulation_candidates` | Simulation candidates (5 score components) |
| `best_picks` | The 20 currently active best numbers (is_active flag) |
| `todos` | Todo list (UUID PK) |
| `blog_posts` | Blog posts (tags: JSON array) |
### stock.db (`/app/data/stock.db`)
| Table | Description |
|--------|------|
| `articles` | News articles (hash UNIQUE, category: domestic\|overseas) |
| `portfolio` | Holdings (broker, ticker, quantity, avg_price) |
| `broker_cash` | Cash per broker (broker UNIQUE) |
| `asset_snapshots` | Daily total-asset snapshots (date UNIQUE) |
---
## Environment variables
```env
# Paths
RUNTIME_PATH=.
REPO_PATH=.
FRONTEND_PATH=./frontend/dist
PHOTO_PATH=./mock_data/photos
# NAS file permissions
PUID=1000
PGID=1000
# External services
WINDOWS_AI_SERVER_URL=http://192.168.45.59:8000
WEBHOOK_SECRET=your_secret_here
```
---
## Infrastructure
| Item | Value |
|------|----|
| Hardware | Synology NAS (Intel Celeron J4025, 18GB RAM) |
| Docker | Synology Container Manager |
| Git server | Gitea (self-hosted on the NAS) |
| AI server | Windows PC (192.168.45.59:8000), RTX 3070 Ti + Ollama |
| Python | 3.12 (`slim` / `alpine` base images) |
| DB | SQLite (persisted via volume mounts) |
---
## Caveats
- **`.env` file**: never commit it; only `.env.example` belongs in the repo
- **Nginx trailing slash**: `/api/portfolio` is handled by two location blocks (matches with and without a trailing slash)
- **Route order**: `/api/todos/done` must be registered before `/api/todos/{id}`
- **Cache strategy**: `index.html`: no-store / `assets/`: 1-year immutable
- **PUID/PGID**: travel-proxy needs these env vars injected for NAS file permissions
- **Holiday list**: `stock-lab/app/holidays.json` must be updated manually each year (KRX calendar)
- **Windows AI server**: IP 192.168.45.59 (fixed DHCP reservation on the router)

backend/Dockerfile

@@ -16,3 +16,7 @@ ENV PYTHONUNBUFFERED=1
 EXPOSE 8000
 CMD ["uvicorn", "app.main:app", "--host", "0.0.0.0", "--port", "8000"]
+ARG APP_VERSION=dev
+ENV APP_VERSION=$APP_VERSION
+LABEL org.opencontainers.image.version=$APP_VERSION

backend/app/analyzer.py (new file)

@@ -0,0 +1,354 @@
"""
통계 분석 엔진 - lotto-lab 고도화
[팀 회의 합의 기반 5가지 통계 기법]
1. 빈도 Z-score 분석: 각 번호의 출현 빈도가 기댓값에서 얼마나 벗어났는지
2. 조합 지문(Fingerprint): 조합의 합계, 홀짝 비율, 구간 분포가 역대 당첨번호와 유사한지
3. 갭 분석(Gap): 각 번호의 마지막 출현으로부터 경과 회차 수 기반 점수
4. 공동 출현 행렬(Co-occurrence): 번호 쌍이 역대에 함께 나온 빈도 기반 점수
5. 다양성(Diversity): 연속 번호, 범위, 구간 분포 다양성
[통계 근거]
- 1~45번 각각의 이론적 출현 확률: 6/45 ≈ 13.33% per draw
- 기댓값 합계: E[sum] = 6 × E[1..45] = 6 × 23 = 138
- 표준편차 합계: std ≈ sqrt(6 × Var[uniform 1..45]) ≈ 31
- 홀수 23개 (1,3,...,45), 짝수 22개 (2,4,...,44)
- 번호 쌍 공동 출현 확률: C(43,4)/C(45,6) ≈ 1.516% per draw
"""
import math
from collections import Counter, defaultdict
from typing import List, Tuple, Dict, Any, Optional
# Zone definitions: (start, end), inclusive on both ends
ZONE_RANGES: List[Tuple[int, int]] = [
(1, 9),
(10, 19),
(20, 29),
(30, 39),
(40, 45),
]
def _get_zone(n: int) -> int:
"""번호가 속하는 구간 인덱스 (0-4)"""
for z, (lo, hi) in enumerate(ZONE_RANGES):
if lo <= n <= hi:
return z
return 4
def build_analysis_cache(draws: List[Tuple[int, List[int]]]) -> Dict[str, Any]:
"""
역대 당첨번호 데이터 기반 통계 분석 캐시 구성.
시뮬레이션 실행 시 한 번만 호출하여 재사용 (성능 최적화).
Args:
draws: [(drw_no, [n1,n2,n3,n4,n5,n6]), ...] 오름차순
Returns:
통계 캐시 딕셔너리
"""
if not draws:
return {}
total_draws = len(draws)
all_nums_list = [n for _, nums in draws for n in nums]
freq_all = Counter(all_nums_list)
    # ── 1. Frequency Z-score ─────────────────────────────────────────────────
freq_values = [freq_all.get(n, 0) for n in range(1, 46)]
mean_freq = sum(freq_values) / 45.0
variance_freq = sum((f - mean_freq) ** 2 for f in freq_values) / 45.0
std_freq = math.sqrt(variance_freq)
z_scores: Dict[int, float] = {}
for n in range(1, 46):
z_scores[n] = (freq_all.get(n, 0) - mean_freq) / max(std_freq, 0.001)
    # ── 2. Gap analysis: draws elapsed since last appearance ─────────────────
    # gap = 0: appeared in the most recent draw; gap = k: last appeared k draws ago
last_seen_gap: Dict[int, int] = {}
for gap_idx, (_, nums) in enumerate(reversed(draws)):
for n in nums:
if n not in last_seen_gap:
last_seen_gap[n] = gap_idx
for n in range(1, 46):
if n not in last_seen_gap:
            last_seen_gap[n] = total_draws  # never appeared (all but impossible in practice)
    # ── 3. Co-occurrence matrix ───────────────────────────────────────────────
    # cooccur[(i,j)] = number of draws where i and j appeared together (i < j)
cooccur: Dict[Tuple[int, int], int] = defaultdict(int)
for _, nums in draws:
s = sorted(nums)
for i in range(len(s)):
for j in range(i + 1, len(s)):
cooccur[(s[i], s[j])] += 1
    # Expected pair co-occurrence: C(43,4)/C(45,6) × total_draws
    # C(43,4) = 123,410 / C(45,6) = 8,145,060
expected_cooccur = total_draws * 123410.0 / 8145060.0
    # ── 4. Historical combination stats (sum, odd count) ─────────────────────
historical_sums = [sum(nums) for _, nums in draws]
mean_sum = sum(historical_sums) / total_draws
std_sum = math.sqrt(
sum((s - mean_sum) ** 2 for s in historical_sums) / total_draws
)
    std_sum = max(std_sum, 1.0)  # guard against division by zero
historical_odds = [sum(1 for n in nums if n % 2 == 1) for _, nums in draws]
odd_dist = Counter(historical_odds)
odd_prob: Dict[int, float] = {k: v / total_draws for k, v in odd_dist.items()}
max_odd_prob = max(odd_prob.values()) if odd_prob else 1.0
    # ── 5. Zone distribution stats ────────────────────────────────────────────
    # Historical distribution of how many of the 6 numbers fall into each zone
zone_counts = [Counter() for _ in ZONE_RANGES]
for _, nums in draws:
for z_idx, (lo, hi) in enumerate(ZONE_RANGES):
cnt = sum(1 for n in nums if lo <= n <= hi)
zone_counts[z_idx][cnt] += 1
zone_probs: List[Dict[int, float]] = []
for zc in zone_counts:
total_z = sum(zc.values())
zone_probs.append({k: v / total_z for k, v in zc.items()})
max_zone_probs = [max(zp.values()) if zp else 1.0 for zp in zone_probs]
    # ── 6. Recent frequency (weights for candidate generation) ───────────────
recent_100 = draws[-100:] if len(draws) >= 100 else draws
freq_recent = Counter(n for _, nums in recent_100 for n in nums)
return {
"total_draws": total_draws,
"freq_all": freq_all,
"z_scores": z_scores,
"last_seen_gap": last_seen_gap,
"cooccur": dict(cooccur),
"expected_cooccur": expected_cooccur,
"mean_sum": mean_sum,
"std_sum": std_sum,
"odd_prob": odd_prob,
"max_odd_prob": max_odd_prob,
"zone_probs": zone_probs,
"max_zone_probs": max_zone_probs,
"freq_recent": freq_recent,
}
def build_number_weights(cache: Dict[str, Any]) -> Dict[int, float]:
"""
    Per-number sampling weights for candidate generation in the Monte Carlo run.
    Blends overall frequency, recent frequency, and gap analysis so that 'good'
    numbers are drawn more often.
"""
freq_all = cache["freq_all"]
last_seen_gap = cache["last_seen_gap"]
freq_recent = cache["freq_recent"]
weights: Dict[int, float] = {}
for n in range(1, 46):
w = freq_all.get(n, 0) + 1.5 * freq_recent.get(n, 0)
gap = last_seen_gap.get(n, 0)
if gap <= 1:
            gap_factor = 0.50  # appeared in the very last draw → penalty
        elif gap <= 3:
            gap_factor = 0.75
        elif gap <= 12:
            gap_factor = 1.00  # sweet spot
        elif gap <= 25:
            gap_factor = 1.10  # slightly stale number → small bonus
        else:
            gap_factor = 1.20  # long-absent number → bonus
weights[n] = max(w * gap_factor, 0.5)
return weights
def score_combination(numbers: List[int], cache: Dict[str, Any]) -> Dict[str, float]:
"""
    Statistical quality score of a 6-number combination (normalized to 0-1).

    Five technique scores:
    - score_frequency  (25%): frequency Z-score
    - score_fingerprint(30%): statistical fingerprint (sum, odd/even, zones)
    - score_gap        (20%): gap analysis
    - score_cooccur    (15%): versus expected co-occurrence
    - score_diversity  (10%): consecutive numbers, range, zone diversity
Returns:
{"score_total": ..., "score_frequency": ..., ...}
"""
nums = sorted(numbers)
    # ── 1. Frequency score ───────────────────────────────────────────────────
z_scores = cache["z_scores"]
avg_z = sum(z_scores.get(n, 0.0) for n in nums) / 6.0
    # Sigmoid normalization: avg_z > 0 maps to values above 0.5
score_frequency = 1.0 / (1.0 + math.exp(-avg_z / 1.5))
    # ── 2. Fingerprint score ─────────────────────────────────────────────────
    # 2a. Sum score under a normal-distribution model
total = sum(nums)
mean_sum = cache["mean_sum"]
std_sum = cache["std_sum"]
z_sum = (total - mean_sum) / std_sum
    sum_score = math.exp(-0.5 * z_sum ** 2)  # normal density (peak=1 at the mean)
    # 2b. Odd/even ratio score
odd_count = sum(1 for n in nums if n % 2 == 1)
odd_prob = cache["odd_prob"]
max_odd_prob = cache["max_odd_prob"]
odd_score = odd_prob.get(odd_count, 0.01) / max_odd_prob
    # 2c. Zone distribution score
zone_probs = cache["zone_probs"]
max_zone_probs = cache["max_zone_probs"]
zone_score = 0.0
for z_idx, (lo, hi) in enumerate(ZONE_RANGES):
cnt = sum(1 for n in nums if lo <= n <= hi)
zp = zone_probs[z_idx]
mzp = max_zone_probs[z_idx]
zone_score += zp.get(cnt, 0.01) / mzp
zone_score /= len(ZONE_RANGES)
score_fingerprint = sum_score * 0.50 + odd_score * 0.30 + zone_score * 0.20
    # ── 3. Gap score ─────────────────────────────────────────────────────────
last_seen_gap = cache["last_seen_gap"]
gap_scores: List[float] = []
for n in nums:
gap = last_seen_gap.get(n, 0)
if gap <= 1:
            gs = 0.20  # appeared in the very last draw - strong penalty
        elif gap <= 3:
            gs = 0.55
        elif gap <= 7:
            gs = 0.85
        elif gap <= 15:
            gs = 1.00  # optimal range
        elif gap <= 25:
            gs = 0.90
        else:
            gs = 0.75  # long absent - still decent
gap_scores.append(gs)
score_gap = sum(gap_scores) / 6.0
    # ── 4. Co-occurrence score ───────────────────────────────────────────────
cooccur = cache["cooccur"]
expected_cooccur = cache["expected_cooccur"]
pair_scores: List[float] = []
for i in range(len(nums)):
for j in range(i + 1, len(nums)):
actual = cooccur.get((nums[i], nums[j]), 0)
ratio = actual / max(expected_cooccur, 0.001)
            # Sigmoid: 0.5 at ratio = 1; above 0.5 when ratio > 1
ps = 1.0 / (1.0 + math.exp(-2.0 * (ratio - 1.0)))
pair_scores.append(ps)
score_cooccur = sum(pair_scores) / max(len(pair_scores), 1)
    # ── 5. Diversity score ───────────────────────────────────────────────────
    # 5a. Consecutive numbers (about 52% of historical draws contain at least one pair)
has_consecutive = any(nums[i + 1] - nums[i] == 1 for i in range(len(nums) - 1))
consecutive_score = 0.65 if has_consecutive else 0.40
    # 5b. Spread score (max minus min)
num_range = nums[-1] - nums[0]
if 28 <= num_range <= 43:
spread_score = 1.00
elif 20 <= num_range < 28:
spread_score = 0.85
elif 13 <= num_range < 20:
spread_score = 0.65
elif num_range < 13:
spread_score = 0.25
    else:  # > 43 (maximum is 44: 1~45)
spread_score = 0.95
    # 5c. Zone coverage (how many zones the combination touches)
zones_used = set(_get_zone(n) for n in nums)
    zone_coverage = (len(zones_used) - 1) / 4.0  # 0~1
score_diversity = (
consecutive_score * 0.35
+ spread_score * 0.35
+ zone_coverage * 0.30
)
    # ── Final weighted sum ───────────────────────────────────────────────────
score_total = (
score_frequency * 0.25
+ score_fingerprint * 0.30
+ score_gap * 0.20
+ score_cooccur * 0.15
+ score_diversity * 0.10
)
return {
"score_total": round(score_total, 6),
"score_frequency": round(score_frequency, 6),
"score_fingerprint": round(score_fingerprint, 6),
"score_gap": round(score_gap, 6),
"score_cooccur": round(score_cooccur, 6),
"score_diversity": round(score_diversity, 6),
}
def get_statistical_report(draws: List[Tuple[int, List[int]]]) -> Dict[str, Any]:
"""
    Build the statistical report (response body for GET /api/lotto/analysis).
    Returns each number's frequency, Z-score, gap, and hot/cold/overdue lists.
"""
if not draws:
return {"error": "데이터 없음"}
cache = build_analysis_cache(draws)
total_draws = cache["total_draws"]
freq_all = cache["freq_all"]
z_scores = cache["z_scores"]
last_seen_gap = cache["last_seen_gap"]
number_stats = []
for n in range(1, 46):
freq = freq_all.get(n, 0)
expected = total_draws * 6.0 / 45.0
number_stats.append({
"number": n,
"frequency": freq,
"expected": round(expected, 1),
"frequency_pct": round(freq / (total_draws * 6) * 100, 2),
"z_score": round(z_scores.get(n, 0.0), 3),
"gap": last_seen_gap.get(n, total_draws),
"zone": _get_zone(n),
})
sorted_by_freq = sorted(number_stats, key=lambda x: -x["frequency"])
sorted_by_gap = sorted(number_stats, key=lambda x: -x["gap"])
    # Summary of the historical sum distribution
hist_sums = [sum(nums) for _, nums in draws]
sum_buckets: Dict[str, int] = {}
for lo in range(21, 256, 20):
hi = lo + 19
key = f"{lo}-{hi}"
sum_buckets[key] = sum(1 for s in hist_sums if lo <= s <= hi)
return {
"total_draws": total_draws,
"mean_sum": round(cache["mean_sum"], 2),
"std_sum": round(cache["std_sum"], 2),
"odd_distribution": {
str(k): round(v * 100, 1)
for k, v in sorted(cache["odd_prob"].items())
},
"number_stats": number_stats,
"hot_numbers": [x["number"] for x in sorted_by_freq[:10]],
"cold_numbers": [x["number"] for x in sorted_by_freq[-10:]],
"overdue_numbers": [x["number"] for x in sorted_by_gap[:10]],
"sum_distribution": sum_buckets,
}

backend/app/checker.py (new file)

@@ -0,0 +1,66 @@
import json
from .db import (
_conn, get_draw, update_recommendation_result
)
def _calc_rank(my_nums: list[int], win_nums: list[int], bonus: int) -> tuple[int, int, bool]:
"""
    Returns (rank, correct_cnt, has_bonus).
    rank: 1-5 (1st-5th prize), 0 = no win
"""
matched = set(my_nums) & set(win_nums)
cnt = len(matched)
has_bonus = bonus in my_nums
if cnt == 6:
return 1, cnt, has_bonus
if cnt == 5 and has_bonus:
return 2, cnt, has_bonus
if cnt == 5:
return 3, cnt, has_bonus
if cnt == 4:
return 4, cnt, has_bonus
if cnt == 3:
return 5, cnt, has_bonus
return 0, cnt, has_bonus
def check_results_for_draw(drw_no: int) -> int:
"""
    Once the result for draw drw_no is out, score the recommendations that
    targeted it (those with based_on_draw == drw_no - 1).
    Returns: number of recommendations scored
"""
win_row = get_draw(drw_no)
if not win_row:
return 0
win_nums = [
win_row["n1"], win_row["n2"], win_row["n3"],
win_row["n4"], win_row["n5"], win_row["n6"]
]
bonus = win_row["bonus"]
    # Fetch recommendations whose based_on_draw == (this draw - 1);
    # i.e. draw 1000's result scores predictions for draw 1000 made from draw 999 data.
target_based_on = drw_no - 1
with _conn() as conn:
rows = conn.execute(
"""
SELECT id, numbers
FROM recommendations
WHERE based_on_draw = ? AND checked = 0
""",
(target_based_on,)
).fetchall()
count = 0
for r in rows:
my_nums = json.loads(r["numbers"])
rank, correct, has_bonus = _calc_rank(my_nums, win_nums, bonus)
update_recommendation_result(r["id"], rank, correct, has_bonus)
count += 1
return count

backend/app/collector.py

@@ -1,7 +1,7 @@
 import requests
 from typing import Dict, Any
-from .db import get_draw, upsert_draw
+from .db import get_draw, upsert_draw, upsert_many_draws, get_latest_draw, count_draws

 def _normalize_item(item: dict) -> dict:
     # smok95 all.json / latest.json structure
@@ -27,20 +27,13 @@ def sync_all_from_json(all_url: str) -> Dict[str, Any]:
     r.raise_for_status()
     data = r.json()  # list[dict]
-    inserted = 0
-    skipped = 0
-    for item in data:
-        row = _normalize_item(item)
-        if get_draw(row["drw_no"]):
-            skipped += 1
-            continue
-        upsert_draw(row)
-        inserted += 1
-    return {"mode": "all_json", "url": all_url, "inserted": inserted, "skipped": skipped, "total": len(data)}
+    # Normalize
+    rows = [_normalize_item(item) for item in data]
+    # Bulk insert (performance)
+    upsert_many_draws(rows)
+    return {"mode": "all_json", "url": all_url, "total": len(rows)}

 def sync_latest(latest_url: str) -> Dict[str, Any]:
     r = requests.get(latest_url, timeout=30)
@@ -53,3 +46,40 @@ def sync_latest(latest_url: str) -> Dict[str, Any]:
     return {"mode": "latest_json", "url": latest_url, "was_new": (before is None), "drawNo": row["drw_no"]}
+
+def sync_ensure_all(latest_url: str, all_url: str) -> Dict[str, Any]:
+    """
+    Check that draws 1 through the latest are all present; run a full sync if not.
+    Returns: {"synced": bool, "reason": str, ...}
+    """
+    # 1. Check the latest remote draw number
+    try:
+        r = requests.get(latest_url, timeout=10)
+        r.raise_for_status()
+        remote_item = r.json()
+        remote_no = int(remote_item["draw_no"])
+    except Exception as e:
+        # On a network failure we would prefer to just log and fall back to
+        # local data so the stats features keep working, but to guarantee a
+        # consistent sync we return False (or raise) instead.
+        return {"synced": False, "error": str(e)}
+
+    # 2. Check the local state
+    local_latest_row = get_latest_draw()
+    local_no = local_latest_row["drw_no"] if local_latest_row else 0
+    local_cnt = count_draws()
+
+    # 3. Decide whether a sync is needed:
+    #    - fewer rows than the latest draw number means gaps in the middle
+    #      (assuming draws start at 1)
+    #    - a local latest behind the remote latest means an update is needed
+    need_sync = (local_no < remote_no) or (local_cnt < local_no)
+    if not need_sync:
+        return {"synced": True, "updated": False, "local_no": local_no}
+
+    # 4. Run the full sync.
+    #    A plain latest-sync cannot fill holes in the middle, so whenever
+    #    anything is missing we just run all_sync (fast thanks to bulk insert).
+    res = sync_all_from_json(all_url)
+    return {"synced": True, "updated": True, "detail": res}

backend/app/db.py

@@ -63,9 +63,344 @@ def init_db() -> None:
        _ensure_column(conn, "recommendations", "tags",
                       "ALTER TABLE recommendations ADD COLUMN tags TEXT NOT NULL DEFAULT '[]';")
        # ✅ Columns for scoring results
_ensure_column(conn, "recommendations", "rank",
"ALTER TABLE recommendations ADD COLUMN rank INTEGER;")
_ensure_column(conn, "recommendations", "correct_count",
"ALTER TABLE recommendations ADD COLUMN correct_count INTEGER DEFAULT 0;")
_ensure_column(conn, "recommendations", "has_bonus",
"ALTER TABLE recommendations ADD COLUMN has_bonus INTEGER DEFAULT 0;")
_ensure_column(conn, "recommendations", "checked",
"ALTER TABLE recommendations ADD COLUMN checked INTEGER DEFAULT 0;")
        # ✅ UNIQUE index (prevents duplicate saves)
        conn.execute("CREATE UNIQUE INDEX IF NOT EXISTS uq_reco_dedup ON recommendations(dedup_hash);")
        # ── simulation tables ─────────────────────────────────────────────────
conn.execute(
"""
CREATE TABLE IF NOT EXISTS simulation_runs (
id INTEGER PRIMARY KEY AUTOINCREMENT,
run_at TEXT NOT NULL DEFAULT (datetime('now')),
strategy TEXT NOT NULL DEFAULT 'monte_carlo',
total_generated INTEGER NOT NULL DEFAULT 0,
top_k_selected INTEGER NOT NULL DEFAULT 0,
avg_score REAL,
notes TEXT DEFAULT ''
);
"""
)
conn.execute(
"CREATE INDEX IF NOT EXISTS idx_simrun_at ON simulation_runs(run_at DESC);"
)
conn.execute(
"""
CREATE TABLE IF NOT EXISTS simulation_candidates (
id INTEGER PRIMARY KEY AUTOINCREMENT,
run_id INTEGER NOT NULL,
numbers TEXT NOT NULL,
score_total REAL NOT NULL,
score_frequency REAL,
score_fingerprint REAL,
score_gap REAL,
score_cooccur REAL,
score_diversity REAL,
is_best INTEGER DEFAULT 0,
based_on_draw INTEGER,
created_at TEXT NOT NULL DEFAULT (datetime('now')),
FOREIGN KEY(run_id) REFERENCES simulation_runs(id)
);
"""
)
conn.execute(
"CREATE INDEX IF NOT EXISTS idx_simcand_run "
"ON simulation_candidates(run_id, score_total DESC);"
)
conn.execute(
"CREATE INDEX IF NOT EXISTS idx_simcand_best "
"ON simulation_candidates(is_best, score_total DESC);"
)
conn.execute(
"""
CREATE TABLE IF NOT EXISTS best_picks (
id INTEGER PRIMARY KEY AUTOINCREMENT,
numbers TEXT NOT NULL,
score_total REAL NOT NULL,
rank_in_run INTEGER,
source_run_id INTEGER,
based_on_draw INTEGER,
is_active INTEGER DEFAULT 1,
created_at TEXT NOT NULL DEFAULT (datetime('now')),
FOREIGN KEY(source_run_id) REFERENCES simulation_runs(id)
);
"""
)
conn.execute(
"CREATE INDEX IF NOT EXISTS idx_bestpicks_active "
"ON best_picks(is_active, score_total DESC);"
)
        # ── todos table ───────────────────────────────────────────────────────
conn.execute(
"""
CREATE TABLE IF NOT EXISTS todos (
id TEXT PRIMARY KEY
DEFAULT (lower(hex(randomblob(4))) || '-' || lower(hex(randomblob(2)))),
title TEXT NOT NULL,
description TEXT,
status TEXT NOT NULL DEFAULT 'todo'
CHECK(status IN ('todo','in_progress','done')),
created_at TEXT NOT NULL DEFAULT (strftime('%Y-%m-%dT%H:%M:%fZ','now')),
updated_at TEXT NOT NULL DEFAULT (strftime('%Y-%m-%dT%H:%M:%fZ','now'))
);
"""
)
conn.execute(
"CREATE INDEX IF NOT EXISTS idx_todos_created ON todos(created_at DESC);"
)
        # ── blog_posts table ──────────────────────────────────────────────────
conn.execute(
"""
CREATE TABLE IF NOT EXISTS blog_posts (
id INTEGER PRIMARY KEY AUTOINCREMENT,
title TEXT NOT NULL,
body TEXT NOT NULL DEFAULT '',
excerpt TEXT NOT NULL DEFAULT '',
tags TEXT NOT NULL DEFAULT '[]',
date TEXT NOT NULL DEFAULT (date('now','localtime')),
created_at TEXT NOT NULL DEFAULT (strftime('%Y-%m-%dT%H:%M:%fZ','now')),
updated_at TEXT NOT NULL DEFAULT (strftime('%Y-%m-%dT%H:%M:%fZ','now'))
);
"""
)
conn.execute(
"CREATE INDEX IF NOT EXISTS idx_blog_date ON blog_posts(date DESC);"
)
        # ── realestate_complexes table ────────────────────────────────────────
conn.execute(
"""
CREATE TABLE IF NOT EXISTS realestate_complexes (
id INTEGER PRIMARY KEY AUTOINCREMENT,
name TEXT NOT NULL,
address TEXT NOT NULL DEFAULT '',
lat REAL,
lng REAL,
units INTEGER,
types TEXT NOT NULL DEFAULT '[]',
avg_price_per_pyeong INTEGER,
subscription_start TEXT,
subscription_end TEXT,
result_date TEXT,
status TEXT NOT NULL DEFAULT '청약예정'
CHECK(status IN ('청약예정','청약중','결과발표','완료')),
priority TEXT NOT NULL DEFAULT 'normal'
CHECK(priority IN ('high','normal','low')),
tags TEXT NOT NULL DEFAULT '[]',
naver_url TEXT NOT NULL DEFAULT '',
floor_plan_url TEXT NOT NULL DEFAULT '',
memo TEXT NOT NULL DEFAULT '',
created_at TEXT NOT NULL DEFAULT (strftime('%Y-%m-%dT%H:%M:%fZ','now')),
updated_at TEXT NOT NULL DEFAULT (strftime('%Y-%m-%dT%H:%M:%fZ','now'))
);
"""
)
conn.execute(
"CREATE INDEX IF NOT EXISTS idx_realestate_status ON realestate_complexes(status);"
)
        # ── subscription_items table ──────────────────────────────────────────
conn.execute(
"""
CREATE TABLE IF NOT EXISTS subscription_items (
id INTEGER PRIMARY KEY AUTOINCREMENT,
complex_name TEXT NOT NULL,
address TEXT NOT NULL DEFAULT '',
pyeong TEXT,
total_price INTEGER,
type TEXT,
special_type TEXT,
supply_type TEXT,
status TEXT NOT NULL DEFAULT '검토중',
min_score INTEGER,
max_income INTEGER,
homeless_required INTEGER,
subscription_start TEXT,
subscription_end TEXT,
contract_date TEXT,
interim_date TEXT,
balance_date TEXT,
result_date TEXT,
deposit_rate INTEGER DEFAULT 10,
interim_rate INTEGER DEFAULT 60,
balance_rate INTEGER DEFAULT 30,
loan_type TEXT,
loan_rate REAL,
memo TEXT NOT NULL DEFAULT '',
naver_url TEXT NOT NULL DEFAULT '',
created_at TEXT NOT NULL DEFAULT (strftime('%Y-%m-%dT%H:%M:%fZ','now')),
updated_at TEXT NOT NULL DEFAULT (strftime('%Y-%m-%dT%H:%M:%fZ','now'))
);
"""
)
conn.execute(
"CREATE INDEX IF NOT EXISTS idx_sub_items_created ON subscription_items(created_at DESC);"
)
        # ── subscription_profile table (singleton, id=1) ──────────────────────
conn.execute(
"""
CREATE TABLE IF NOT EXISTS subscription_profile (
id INTEGER PRIMARY KEY DEFAULT 1,
is_household_head INTEGER DEFAULT 1,
is_homeless INTEGER DEFAULT 1,
homeless_period INTEGER,
savings_months INTEGER,
savings_count INTEGER,
dependents INTEGER DEFAULT 0,
residency_area TEXT,
is_married INTEGER,
marriage_months INTEGER,
monthly_income INTEGER,
special_quals TEXT NOT NULL DEFAULT '[]'
);
"""
)
# ── todos CRUD ───────────────────────────────────────────────────────────────
def _todo_row_to_dict(r) -> Dict[str, Any]:
return {
"id": r["id"],
"title": r["title"],
"description": r["description"],
"status": r["status"],
"created_at": r["created_at"],
"updated_at": r["updated_at"],
}
def get_all_todos() -> List[Dict[str, Any]]:
with _conn() as conn:
rows = conn.execute(
"SELECT * FROM todos ORDER BY created_at DESC"
).fetchall()
return [_todo_row_to_dict(r) for r in rows]
def create_todo(title: str, description: Optional[str], status: str) -> Dict[str, Any]:
with _conn() as conn:
conn.execute(
"INSERT INTO todos (title, description, status) VALUES (?, ?, ?)",
(title, description, status),
)
row = conn.execute(
"SELECT * FROM todos WHERE rowid = last_insert_rowid()"
).fetchone()
return _todo_row_to_dict(row)
def update_todo(todo_id: str, fields: Dict[str, Any]) -> Optional[Dict[str, Any]]:
"""fields에 있는 항목만 업데이트 (PATCH 방식), updated_at 자동 갱신"""
allowed = {"title", "description", "status"}
updates = {k: v for k, v in fields.items() if k in allowed}
if not updates:
with _conn() as conn:
row = conn.execute("SELECT * FROM todos WHERE id = ?", (todo_id,)).fetchone()
return _todo_row_to_dict(row) if row else None
set_clauses = ", ".join(f"{k} = ?" for k in updates)
set_clauses += ", updated_at = strftime('%Y-%m-%dT%H:%M:%fZ','now')"
args = list(updates.values()) + [todo_id]
with _conn() as conn:
conn.execute(
f"UPDATE todos SET {set_clauses} WHERE id = ?",
args,
)
row = conn.execute("SELECT * FROM todos WHERE id = ?", (todo_id,)).fetchone()
return _todo_row_to_dict(row) if row else None
def delete_todo(todo_id: str) -> bool:
with _conn() as conn:
cur = conn.execute("DELETE FROM todos WHERE id = ?", (todo_id,))
return cur.rowcount > 0
def delete_done_todos() -> int:
with _conn() as conn:
cur = conn.execute("DELETE FROM todos WHERE status = 'done'")
return cur.rowcount
# ── blog_posts CRUD ──────────────────────────────────────────────────────────
def _post_row_to_dict(r) -> Dict[str, Any]:
return {
"id": r["id"],
"title": r["title"],
"body": r["body"],
"excerpt": r["excerpt"],
"tags": json.loads(r["tags"]) if r["tags"] else [],
"date": r["date"],
"created_at": r["created_at"],
"updated_at": r["updated_at"],
}
def get_all_posts() -> List[Dict[str, Any]]:
with _conn() as conn:
rows = conn.execute(
"SELECT * FROM blog_posts ORDER BY date DESC, id DESC"
).fetchall()
return [_post_row_to_dict(r) for r in rows]
def create_post(title: str, body: str, excerpt: str, tags: List[str], date: str) -> Dict[str, Any]:
with _conn() as conn:
conn.execute(
"INSERT INTO blog_posts (title, body, excerpt, tags, date) VALUES (?, ?, ?, ?, ?)",
(title, body, excerpt, json.dumps(tags), date),
)
row = conn.execute(
"SELECT * FROM blog_posts WHERE rowid = last_insert_rowid()"
).fetchone()
return _post_row_to_dict(row)
def update_post(post_id: int, fields: Dict[str, Any]) -> Optional[Dict[str, Any]]:
allowed = {"title", "body", "excerpt", "tags", "date"}
updates = {k: v for k, v in fields.items() if k in allowed}
if not updates:
with _conn() as conn:
row = conn.execute("SELECT * FROM blog_posts WHERE id = ?", (post_id,)).fetchone()
return _post_row_to_dict(row) if row else None
if "tags" in updates:
updates["tags"] = json.dumps(updates["tags"])
set_clauses = ", ".join(f"{k} = ?" for k in updates)
set_clauses += ", updated_at = strftime('%Y-%m-%dT%H:%M:%fZ','now')"
args = list(updates.values()) + [post_id]
with _conn() as conn:
conn.execute(f"UPDATE blog_posts SET {set_clauses} WHERE id = ?", args)
row = conn.execute("SELECT * FROM blog_posts WHERE id = ?", (post_id,)).fetchone()
return _post_row_to_dict(row) if row else None
def delete_post(post_id: int) -> bool:
with _conn() as conn:
cur = conn.execute("DELETE FROM blog_posts WHERE id = ?", (post_id,))
return cur.rowcount > 0
def upsert_draw(row: Dict[str, Any]) -> None:
    with _conn() as conn:
        conn.execute(
@@ -88,6 +423,30 @@ def upsert_draw(row: Dict[str, Any]) -> None:
            ),
        )
def upsert_many_draws(rows: List[Dict[str, Any]]) -> None:
data = [
(
int(r["drw_no"]), str(r["drw_date"]),
int(r["n1"]), int(r["n2"]), int(r["n3"]),
int(r["n4"]), int(r["n5"]), int(r["n6"]),
int(r["bonus"])
) for r in rows
]
with _conn() as conn:
conn.executemany(
"""
INSERT INTO draws (drw_no, drw_date, n1, n2, n3, n4, n5, n6, bonus)
VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?)
ON CONFLICT(drw_no) DO UPDATE SET
drw_date=excluded.drw_date,
n1=excluded.n1, n2=excluded.n2, n3=excluded.n3,
n4=excluded.n4, n5=excluded.n5, n6=excluded.n6,
bonus=excluded.bonus,
updated_at=datetime('now')
""",
data
)
def get_latest_draw() -> Optional[Dict[str, Any]]:
    with _conn() as conn:
        r = conn.execute("SELECT * FROM draws ORDER BY drw_no DESC LIMIT 1").fetchone()
@@ -237,3 +596,519 @@ def delete_recommendation(rec_id: int) -> bool:
        cur = conn.execute("DELETE FROM recommendations WHERE id = ?", (rec_id,))
        return cur.rowcount > 0
def update_recommendation_result(rec_id: int, rank: int, correct_count: int, has_bonus: bool) -> bool:
with _conn() as conn:
cur = conn.execute(
"""
UPDATE recommendations
SET rank = ?, correct_count = ?, has_bonus = ?, checked = 1
WHERE id = ?
""",
(rank, correct_count, 1 if has_bonus else 0, rec_id)
)
return cur.rowcount > 0
# ── simulation CRUD ──────────────────────────────────────────────────────────
def save_simulation_run(
strategy: str,
total_generated: int,
top_k_selected: int,
avg_score: float,
notes: str = "",
) -> int:
"""시뮬레이션 실행 기록 저장, 생성된 ID 반환"""
with _conn() as conn:
cur = conn.execute(
"""
INSERT INTO simulation_runs (strategy, total_generated, top_k_selected, avg_score, notes)
VALUES (?, ?, ?, ?, ?)
""",
(strategy, total_generated, top_k_selected, round(avg_score, 6), notes),
)
return int(cur.lastrowid)
def save_simulation_candidates_bulk(
run_id: int,
candidates: List[Dict[str, Any]],
based_on_draw: Optional[int],
) -> None:
"""
    Bulk-insert the top candidates into simulation_candidates.
    Each candidate: {"numbers": [...], "score_total": ..., "score_*": ..., "is_best": bool}
"""
data = [
(
run_id,
json.dumps(sorted(c["numbers"])),
c["score_total"],
c.get("score_frequency"),
c.get("score_fingerprint"),
c.get("score_gap"),
c.get("score_cooccur"),
c.get("score_diversity"),
1 if c.get("is_best") else 0,
based_on_draw,
)
for c in candidates
]
with _conn() as conn:
conn.executemany(
"""
INSERT INTO simulation_candidates
(run_id, numbers, score_total, score_frequency, score_fingerprint,
score_gap, score_cooccur, score_diversity, is_best, based_on_draw)
VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
""",
data,
)
def replace_best_picks(
picks: List[Dict[str, Any]],
run_id: int,
based_on_draw: Optional[int],
) -> None:
"""
    Deactivate the currently active best_picks and replace them with the new picks.
    Each pick: {"numbers": [...], "score_total": ..., "rank_in_run": int}
"""
with _conn() as conn:
conn.execute("UPDATE best_picks SET is_active = 0 WHERE is_active = 1")
data = [
(
json.dumps(sorted(p["numbers"])),
p["score_total"],
p.get("rank_in_run"),
run_id,
based_on_draw,
)
for p in picks
]
conn.executemany(
"""
INSERT INTO best_picks (numbers, score_total, rank_in_run, source_run_id, based_on_draw, is_active)
VALUES (?, ?, ?, ?, ?, 1)
""",
data,
)
def get_best_picks(limit: int = 20) -> List[Dict[str, Any]]:
"""현재 활성화된 best_picks 조회 (점수 내림차순)"""
with _conn() as conn:
rows = conn.execute(
"""
SELECT id, numbers, score_total, rank_in_run, source_run_id, based_on_draw, created_at
FROM best_picks
WHERE is_active = 1
ORDER BY score_total DESC
LIMIT ?
""",
(limit,),
).fetchall()
return [
{
"id": int(r["id"]),
"numbers": json.loads(r["numbers"]),
"score_total": r["score_total"],
"rank_in_run": r["rank_in_run"],
"source_run_id": r["source_run_id"],
"based_on_draw": r["based_on_draw"],
"created_at": r["created_at"],
}
for r in rows
]
def get_simulation_runs(limit: int = 10) -> List[Dict[str, Any]]:
"""최근 시뮬레이션 실행 기록 조회"""
with _conn() as conn:
rows = conn.execute(
"""
SELECT id, run_at, strategy, total_generated, top_k_selected, avg_score, notes
FROM simulation_runs
ORDER BY id DESC
LIMIT ?
""",
(limit,),
).fetchall()
return [dict(r) for r in rows]
def get_simulation_candidates(run_id: int, limit: int = 100) -> List[Dict[str, Any]]:
"""특정 시뮬레이션 실행의 후보 목록 조회 (점수 내림차순)"""
with _conn() as conn:
rows = conn.execute(
"""
SELECT id, numbers, score_total, score_frequency, score_fingerprint,
score_gap, score_cooccur, score_diversity, is_best, based_on_draw, created_at
FROM simulation_candidates
WHERE run_id = ?
ORDER BY score_total DESC
LIMIT ?
""",
(run_id, limit),
).fetchall()
return [
{**dict(r), "numbers": json.loads(r["numbers"])}
for r in rows
]
# ── realestate_complexes CRUD ─────────────────────────────────────────────────
def _complex_row_to_dict(r) -> Dict[str, Any]:
return {
"id": r["id"],
"name": r["name"],
"address": r["address"],
"lat": r["lat"],
"lng": r["lng"],
"units": r["units"],
"types": json.loads(r["types"]) if r["types"] else [],
"avgPricePerPyeong": r["avg_price_per_pyeong"],
"subscriptionStart": r["subscription_start"],
"subscriptionEnd": r["subscription_end"],
"resultDate": r["result_date"],
"status": r["status"],
"priority": r["priority"],
"tags": json.loads(r["tags"]) if r["tags"] else [],
"naverUrl": r["naver_url"],
"floorPlanUrl": r["floor_plan_url"],
"memo": r["memo"],
"created_at": r["created_at"],
"updated_at": r["updated_at"],
}
def get_all_complexes() -> List[Dict[str, Any]]:
with _conn() as conn:
rows = conn.execute(
"SELECT * FROM realestate_complexes ORDER BY id DESC"
).fetchall()
return [_complex_row_to_dict(r) for r in rows]
def get_complex(complex_id: int) -> Optional[Dict[str, Any]]:
with _conn() as conn:
r = conn.execute(
"SELECT * FROM realestate_complexes WHERE id = ?", (complex_id,)
).fetchone()
return _complex_row_to_dict(r) if r else None
def create_complex(data: Dict[str, Any]) -> Dict[str, Any]:
with _conn() as conn:
conn.execute(
"""
INSERT INTO realestate_complexes
(name, address, lat, lng, units, types, avg_price_per_pyeong,
subscription_start, subscription_end, result_date,
status, priority, tags, naver_url, floor_plan_url, memo)
VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
""",
(
data["name"],
data.get("address", ""),
data.get("lat"),
data.get("lng"),
data.get("units"),
json.dumps(data.get("types", [])),
data.get("avgPricePerPyeong"),
data.get("subscriptionStart"),
data.get("subscriptionEnd"),
data.get("resultDate"),
data.get("status", "청약예정"),
data.get("priority", "normal"),
json.dumps(data.get("tags", [])),
data.get("naverUrl", ""),
data.get("floorPlanUrl", ""),
data.get("memo", ""),
),
)
row = conn.execute(
"SELECT * FROM realestate_complexes WHERE rowid = last_insert_rowid()"
).fetchone()
return _complex_row_to_dict(row)
def update_complex(complex_id: int, data: Dict[str, Any]) -> Optional[Dict[str, Any]]:
field_map = {
"name": "name",
"address": "address",
"lat": "lat",
"lng": "lng",
"units": "units",
"avgPricePerPyeong": "avg_price_per_pyeong",
"subscriptionStart": "subscription_start",
"subscriptionEnd": "subscription_end",
"resultDate": "result_date",
"status": "status",
"priority": "priority",
"naverUrl": "naver_url",
"floorPlanUrl": "floor_plan_url",
"memo": "memo",
}
json_fields = {"types", "tags"}
updates: Dict[str, Any] = {}
for camel, snake in field_map.items():
if camel in data:
updates[snake] = data[camel]
for f in json_fields:
if f in data:
updates[f] = json.dumps(data[f])
if not updates:
return get_complex(complex_id)
set_clauses = ", ".join(f"{k} = ?" for k in updates)
set_clauses += ", updated_at = strftime('%Y-%m-%dT%H:%M:%fZ','now')"
args = list(updates.values()) + [complex_id]
with _conn() as conn:
conn.execute(
f"UPDATE realestate_complexes SET {set_clauses} WHERE id = ?", args
)
row = conn.execute(
"SELECT * FROM realestate_complexes WHERE id = ?", (complex_id,)
).fetchone()
return _complex_row_to_dict(row) if row else None
def delete_complex(complex_id: int) -> bool:
with _conn() as conn:
cur = conn.execute(
"DELETE FROM realestate_complexes WHERE id = ?", (complex_id,)
)
return cur.rowcount > 0
# ── subscription_items CRUD ───────────────────────────────────────────────────
_SUB_ITEM_FIELD_MAP = {
"complexName": "complex_name",
"address": "address",
"pyeong": "pyeong",
"totalPrice": "total_price",
"type": "type",
"specialType": "special_type",
"supplyType": "supply_type",
"status": "status",
"minScore": "min_score",
"maxIncome": "max_income",
"homelessRequired": "homeless_required",
"subscriptionStart": "subscription_start",
"subscriptionEnd": "subscription_end",
"contractDate": "contract_date",
"interimDate": "interim_date",
"balanceDate": "balance_date",
"resultDate": "result_date",
"depositRate": "deposit_rate",
"interimRate": "interim_rate",
"balanceRate": "balance_rate",
"loanType": "loan_type",
"loanRate": "loan_rate",
"memo": "memo",
"naverUrl": "naver_url",
}
def _sub_item_row_to_dict(r) -> Dict[str, Any]:
return {
"id": r["id"],
"complexName": r["complex_name"],
"address": r["address"],
"pyeong": r["pyeong"],
"totalPrice": r["total_price"],
"type": r["type"],
"specialType": r["special_type"],
"supplyType": r["supply_type"],
"status": r["status"],
"minScore": r["min_score"],
"maxIncome": r["max_income"],
"homelessRequired": r["homeless_required"],
"subscriptionStart": r["subscription_start"],
"subscriptionEnd": r["subscription_end"],
"contractDate": r["contract_date"],
"interimDate": r["interim_date"],
"balanceDate": r["balance_date"],
"resultDate": r["result_date"],
"depositRate": r["deposit_rate"],
"interimRate": r["interim_rate"],
"balanceRate": r["balance_rate"],
"loanType": r["loan_type"],
"loanRate": r["loan_rate"],
"memo": r["memo"],
"naverUrl": r["naver_url"],
"created_at": r["created_at"],
"updated_at": r["updated_at"],
}
def get_all_subscription_items() -> List[Dict[str, Any]]:
with _conn() as conn:
rows = conn.execute(
"SELECT * FROM subscription_items ORDER BY created_at DESC"
).fetchall()
return [_sub_item_row_to_dict(r) for r in rows]
def create_subscription_item(data: Dict[str, Any]) -> Dict[str, Any]:
with _conn() as conn:
conn.execute(
"""
INSERT INTO subscription_items
(complex_name, address, pyeong, total_price, type, special_type, supply_type,
status, min_score, max_income, homeless_required,
subscription_start, subscription_end, contract_date, interim_date,
balance_date, result_date, deposit_rate, interim_rate, balance_rate,
loan_type, loan_rate, memo, naver_url)
VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
""",
(
data["complexName"],
data.get("address", ""),
data.get("pyeong"),
data.get("totalPrice"),
data.get("type"),
data.get("specialType"),
data.get("supplyType"),
data.get("status", "검토중"),
data.get("minScore"),
data.get("maxIncome"),
data.get("homelessRequired"),
data.get("subscriptionStart"),
data.get("subscriptionEnd"),
data.get("contractDate"),
data.get("interimDate"),
data.get("balanceDate"),
data.get("resultDate"),
data.get("depositRate", 10),
data.get("interimRate", 60),
data.get("balanceRate", 30),
data.get("loanType"),
data.get("loanRate"),
data.get("memo", ""),
data.get("naverUrl", ""),
),
)
row = conn.execute(
"SELECT * FROM subscription_items WHERE rowid = last_insert_rowid()"
).fetchone()
return _sub_item_row_to_dict(row)
def update_subscription_item(item_id: int, data: Dict[str, Any]) -> Optional[Dict[str, Any]]:
updates: Dict[str, Any] = {}
for camel, snake in _SUB_ITEM_FIELD_MAP.items():
if camel in data:
updates[snake] = data[camel]
if not updates:
with _conn() as conn:
row = conn.execute(
"SELECT * FROM subscription_items WHERE id = ?", (item_id,)
).fetchone()
return _sub_item_row_to_dict(row) if row else None
set_clauses = ", ".join(f"{k} = ?" for k in updates)
set_clauses += ", updated_at = strftime('%Y-%m-%dT%H:%M:%fZ','now')"
args = list(updates.values()) + [item_id]
with _conn() as conn:
conn.execute(
f"UPDATE subscription_items SET {set_clauses} WHERE id = ?", args
)
row = conn.execute(
"SELECT * FROM subscription_items WHERE id = ?", (item_id,)
).fetchone()
return _sub_item_row_to_dict(row) if row else None
def delete_subscription_item(item_id: int) -> bool:
with _conn() as conn:
cur = conn.execute("DELETE FROM subscription_items WHERE id = ?", (item_id,))
return cur.rowcount > 0
# ── subscription_profile CRUD (singleton) ────────────────────────────────────
def _profile_row_to_dict(r) -> Dict[str, Any]:
return {
"isHouseholdHead": bool(r["is_household_head"]) if r["is_household_head"] is not None else None,
"isHomeless": bool(r["is_homeless"]) if r["is_homeless"] is not None else None,
"homelessPeriod": r["homeless_period"],
"savingsMonths": r["savings_months"],
"savingsCount": r["savings_count"],
"dependents": r["dependents"],
"residencyArea": r["residency_area"],
"isMarried": bool(r["is_married"]) if r["is_married"] is not None else None,
"marriageMonths": r["marriage_months"],
"monthlyIncome": r["monthly_income"],
"specialQuals": json.loads(r["special_quals"]) if r["special_quals"] else [],
}
def get_subscription_profile() -> Optional[Dict[str, Any]]:
with _conn() as conn:
r = conn.execute(
"SELECT * FROM subscription_profile WHERE id = 1"
).fetchone()
return _profile_row_to_dict(r) if r else None
def upsert_subscription_profile(data: Dict[str, Any]) -> Dict[str, Any]:
field_map = {
"isHouseholdHead": "is_household_head",
"isHomeless": "is_homeless",
"homelessPeriod": "homeless_period",
"savingsMonths": "savings_months",
"savingsCount": "savings_count",
"dependents": "dependents",
"residencyArea": "residency_area",
"isMarried": "is_married",
"marriageMonths": "marriage_months",
"monthlyIncome": "monthly_income",
}
updates: Dict[str, Any] = {}
for camel, snake in field_map.items():
if camel in data:
val = data[camel]
# bool → int (SQLite)
if isinstance(val, bool):
val = 1 if val else 0
updates[snake] = val
if "specialQuals" in data:
updates["special_quals"] = json.dumps(data["specialQuals"])
with _conn() as conn:
existing = conn.execute(
"SELECT id FROM subscription_profile WHERE id = 1"
).fetchone()
if existing:
if updates:
set_clauses = ", ".join(f"{k} = ?" for k in updates)
conn.execute(
f"UPDATE subscription_profile SET {set_clauses} WHERE id = 1",
list(updates.values()),
)
else:
cols = ["id"] + list(updates.keys())
vals = [1] + list(updates.values())
placeholders = ", ".join("?" for _ in vals)
conn.execute(
f"INSERT INTO subscription_profile ({', '.join(cols)}) VALUES ({placeholders})",
vals,
)
row = conn.execute(
"SELECT * FROM subscription_profile WHERE id = 1"
).fetchone()
return _profile_row_to_dict(row)

backend/app/generator.py (new file)

@@ -0,0 +1,154 @@
"""
시뮬레이션 엔진 - lotto-lab 고도화
[몬테카를로 시뮬레이션 흐름]
1. 역대 당첨번호 기반 통계 캐시 구성 (build_analysis_cache)
2. 통계 가중치로 N개 후보 조합 생성 (weighted sampling)
3. 5가지 기법으로 각 후보 스코어링 (score_combination)
4. 상위 top_k개 선별하여 DB 저장 (simulation_candidates, best_picks 교체)
[시뮬레이션 파라미터]
- n_candidates: 1회 시뮬레이션당 생성 후보 수 (기본 20,000)
- top_k: 선별 및 저장할 상위 개수 (기본 100)
- best_n: best_picks에 올릴 최상위 개수 (기본 20)
"""
import random
from typing import Dict, Any, List, Optional
from .db import (
get_latest_draw,
get_all_draw_numbers,
save_simulation_run,
save_simulation_candidates_bulk,
replace_best_picks,
)
from .analyzer import build_analysis_cache, build_number_weights, score_combination
def _weighted_sample_6(weights: Dict[int, float]) -> List[int]:
"""
    Sample 6 distinct numbers via weighted sampling without replacement.
    weights: {1: w1, 2: w2, ..., 45: w45}
"""
pool = list(range(1, 46))
chosen: List[int] = []
for _ in range(6):
total = sum(weights[n] for n in pool)
r = random.random() * total
acc = 0.0
for n in pool:
acc += weights[n]
if acc >= r:
chosen.append(n)
pool.remove(n)
break
return chosen
def run_simulation(
n_candidates: int = 20000,
top_k: int = 100,
best_n: int = 20,
) -> Dict[str, Any]:
"""
    Main entry point for the Monte Carlo simulation.

    Args:
        n_candidates: number of candidate combinations to generate (default 20,000)
        top_k: number of top candidates to store in the DB (default 100)
        best_n: number promoted to best_picks (default 20)
    Returns:
        {run_id, total_generated, top_k_selected, avg_score, best_score, based_on_draw}
        or {"error": ...}
"""
draws = get_all_draw_numbers()
if not draws:
return {"error": "당첨번호 데이터가 없습니다. 먼저 동기화를 실행하세요."}
latest = get_latest_draw()
based_on_draw = latest["drw_no"] if latest else None
    # ── 1. Build the stats cache and weights (reused across the whole run) ──
cache = build_analysis_cache(draws)
weights = build_number_weights(cache)
    # ── 2. Generate and score candidates ──────────────────────────────────────
candidates: List[Dict[str, Any]] = []
seen_keys: set = set()
    max_attempts = n_candidates * 3  # headroom for duplicate rejection
attempts = 0
while len(candidates) < n_candidates and attempts < max_attempts:
attempts += 1
nums = _weighted_sample_6(weights)
key = tuple(sorted(nums))
if key in seen_keys:
continue
seen_keys.add(key)
scores = score_combination(nums, cache)
candidates.append({
"numbers": sorted(nums),
**scores,
})
# ── 3. Sort by score descending and select the top ────────────────────────────
candidates.sort(key=lambda x: -x["score_total"])
top_candidates = candidates[:top_k]
# mark the is_best flags
best_keys = {tuple(c["numbers"]) for c in top_candidates[:best_n]}
for c in top_candidates:
c["is_best"] = tuple(c["numbers"]) in best_keys
avg_score = (
sum(c["score_total"] for c in top_candidates) / len(top_candidates)
if top_candidates else 0.0
)
best_score = top_candidates[0]["score_total"] if top_candidates else 0.0
# ── 4. Persist to DB ──────────────────────────────────────────────────────────
run_id = save_simulation_run(
strategy="monte_carlo",
total_generated=len(candidates),
top_k_selected=len(top_candidates),
avg_score=avg_score,
notes=f"based_on_draw={based_on_draw}, history={len(draws)}",
)
# persist only the top_k candidates (the full 20,000 are handled in memory only)
save_simulation_candidates_bulk(run_id, top_candidates, based_on_draw)
# replace best_picks (top best_n)
best_picks_data = [
{
"numbers": c["numbers"],
"score_total": c["score_total"],
"rank_in_run": i + 1,
}
for i, c in enumerate(top_candidates[:best_n])
]
replace_best_picks(best_picks_data, run_id, based_on_draw)
return {
"run_id": run_id,
"total_generated": len(candidates),
"top_k_selected": len(top_candidates),
"best_n_saved": len(best_picks_data),
"avg_score": round(avg_score, 6),
"best_score": round(best_score, 6),
"based_on_draw": based_on_draw,
}
def generate_smart_recommendations(count: int = 10) -> int:
"""
Backward-compatibility wrapper.
Calls run_simulation internally; existing callers such as /api/admin/auto_gen keep working.
"""
result = run_simulation(n_candidates=5000, top_k=count, best_n=count)
if "error" in result:
return 0
return result.get("best_n_saved", 0)
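
Assuming the backend package layout above and a synced draw database, a small manual run could look like this (module path and parameters are illustrative):

    from app.generator import run_simulation  # module path assumed

    result = run_simulation(n_candidates=2000, top_k=50, best_n=10)
    if "error" in result:
        print("sync draw data first:", result["error"])
    else:
        print(result["run_id"], result["best_score"])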


@@ -8,9 +8,25 @@ from .db import (
init_db, get_draw, get_latest_draw, get_all_draw_numbers,
save_recommendation_dedup, list_recommendations_ex, delete_recommendation,
update_recommendation,
# simulation
get_best_picks, get_simulation_runs, get_simulation_candidates,
# todos
get_all_todos, create_todo, update_todo, delete_todo, delete_done_todos,
# blog
get_all_posts, create_post, update_post, delete_post,
# realestate
get_all_complexes, get_complex, create_complex, update_complex, delete_complex,
# subscription
get_all_subscription_items, create_subscription_item,
update_subscription_item, delete_subscription_item,
get_subscription_profile, upsert_subscription_profile,
)
from .recommender import recommend_numbers, recommend_with_heatmap
from .collector import sync_latest, sync_ensure_all
from .generator import run_simulation, generate_smart_recommendations
from .checker import check_results_for_draw
from .utils import calc_metrics, calc_recent_overlap
from .analyzer import get_statistical_report
app = FastAPI()
scheduler = BackgroundScheduler(timezone=os.getenv("TZ", "Asia/Seoul"))
@@ -18,74 +34,35 @@ scheduler = BackgroundScheduler(timezone=os.getenv("TZ", "Asia/Seoul"))
ALL_URL = os.getenv("LOTTO_ALL_URL", "https://smok95.github.io/lotto/results/all.json")
LATEST_URL = os.getenv("LOTTO_LATEST_URL", "https://smok95.github.io/lotto/results/latest.json")
def calc_metrics(numbers: List[int]) -> Dict[str, Any]:
nums = sorted(numbers)
s = sum(nums)
odd = sum(1 for x in nums if x % 2 == 1)
even = len(nums) - odd
mn, mx = nums[0], nums[-1]
rng = mx - mn
# 1-10, 11-20, 21-30, 31-40, 41-45
buckets = {
"1-10": 0,
"11-20": 0,
"21-30": 0,
"31-40": 0,
"41-45": 0,
}
for x in nums:
if 1 <= x <= 10:
buckets["1-10"] += 1
elif 11 <= x <= 20:
buckets["11-20"] += 1
elif 21 <= x <= 30:
buckets["21-30"] += 1
elif 31 <= x <= 40:
buckets["31-40"] += 1
else:
buckets["41-45"] += 1
return {
"sum": s,
"odd": odd,
"even": even,
"min": mn,
"max": mx,
"range": rng,
"buckets": buckets,
}
def calc_recent_overlap(numbers: List[int], draws: List[Tuple[int, List[int]]], last_k: int) -> Dict[str, Any]:
"""
draws: [(drw_no, [n1..n6]), ...] in ascending order
last_k: overlap against the most recent k draws
"""
if last_k <= 0:
return {"last_k": 0, "repeats": 0, "repeated_numbers": []}
recent = draws[-last_k:] if len(draws) >= last_k else draws
recent_set = set()
for _, nums in recent:
recent_set.update(nums)
repeated = sorted(set(numbers) & recent_set)
return {
"last_k": len(recent),
"repeats": len(repeated),
"repeated_numbers": repeated,
}
@app.on_event("startup") @app.on_event("startup")
def on_startup(): def on_startup():
init_db() init_db()
scheduler.add_job(lambda: sync_latest(LATEST_URL), "cron", hour="9,21", minute=10)
# 1. Sync lotto winning numbers (daily at 09:10 and 21:10)
# After syncing, if a new draw arrived, run result checking as well
def _sync_and_check():
res = sync_latest(LATEST_URL)
if res["was_new"]:
check_results_for_draw(res["drawNo"])
scheduler.add_job(_sync_and_check, "cron", hour="9,21", minute=10)
# 2. Monte Carlo simulation (6x daily: 00, 04, 08, 12, 16, 20h)
# generate 20,000 candidates → score → store top 100 → replace best_picks
def _run_simulation_job():
run_simulation(n_candidates=20000, top_k=100, best_n=20)
scheduler.add_job(_run_simulation_job, "cron", hour="0,4,8,12,16,20", minute=5)
scheduler.start()
@app.get("/health") @app.get("/health")
def health(): def health():
return {"ok": True} return {"ok": True}
@app.get("/api/lotto/latest") @app.get("/api/lotto/latest")
def api_latest(): def api_latest():
row = get_latest_draw() row = get_latest_draw()
@@ -96,8 +73,10 @@ def api_latest():
"date": row["drw_date"], "date": row["drw_date"],
"numbers": [row["n1"], row["n2"], row["n3"], row["n4"], row["n5"], row["n6"]], "numbers": [row["n1"], row["n2"], row["n3"], row["n4"], row["n5"], row["n6"]],
"bonus": row["bonus"], "bonus": row["bonus"],
"metrics": calc_metrics([row["n1"], row["n2"], row["n3"], row["n4"], row["n5"], row["n6"]]),
} }
@app.get("/api/lotto/{drw_no:int}") @app.get("/api/lotto/{drw_no:int}")
def api_draw(drw_no: int): def api_draw(drw_no: int):
row = get_draw(drw_no) row = get_draw(drw_no)
@@ -108,28 +87,166 @@ def api_draw(drw_no: int):
"date": row["drw_date"], "date": row["drw_date"],
"numbers": [row["n1"], row["n2"], row["n3"], row["n4"], row["n5"], row["n6"]], "numbers": [row["n1"], row["n2"], row["n3"], row["n4"], row["n5"], row["n6"]],
"bonus": row["bonus"], "bonus": row["bonus"],
"metrics": calc_metrics([row["n1"], row["n2"], row["n3"], row["n4"], row["n5"], row["n6"]]),
} }
@app.post("/api/admin/sync_latest") @app.post("/api/admin/sync_latest")
def admin_sync_latest(): def admin_sync_latest():
return sync_latest(LATEST_URL) res = sync_latest(LATEST_URL)
if res["was_new"]:
check_results_for_draw(res["drawNo"])
return res
# ---------- ✅ recommend (dedup save) ----------
@app.post("/api/admin/auto_gen")
def admin_auto_gen(count: int = 10):
"""기존 호환 유지: 소규모 시뮬레이션 수동 트리거"""
n = generate_smart_recommendations(count)
return {"generated": n}
@app.post("/api/admin/simulate")
def admin_simulate(n_candidates: int = 20000, top_k: int = 100, best_n: int = 20):
"""
Manually trigger the Monte Carlo simulation.
Runs the same job as the background schedule, immediately.
"""
result = run_simulation(
n_candidates=max(1000, min(n_candidates, 50000)),
top_k=max(10, min(top_k, 500)),
best_n=max(10, min(best_n, 50)),
)
if "error" in result:
raise HTTPException(status_code=500, detail=result["error"])
return result
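
A hedged smoke test against a local instance (host and port assumed from the healthcheck script further below; adjust to your deployment):

    import requests

    r = requests.post(
        "http://localhost:18000/api/admin/simulate",
        params={"n_candidates": 5000, "top_k": 100, "best_n": 20},
        timeout=120,
    )
    r.raise_for_status()
    print(r.json()["best_score"])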
@app.get("/api/lotto/stats")
def api_stats():
sync_ensure_all(LATEST_URL, ALL_URL)
draws = get_all_draw_numbers()
if not draws:
raise HTTPException(status_code=404, detail="No data yet")
frequency = {n: 0 for n in range(1, 46)}
total_draws = len(draws)
for _, nums in draws:
for n in nums:
frequency[n] += 1
stats = [
{"number": n, "count": frequency[n]}
for n in range(1, 46)
]
return {
"total_draws": total_draws,
"frequency": stats,
}
# ── Statistical analysis report ───────────────────────────────────────────────
@app.get("/api/lotto/analysis")
def api_analysis():
"""
Analysis report based on 5 statistical techniques.
- per-number frequency, Z-score, gap
- hot/cold/overdue numbers
- historical sum distribution, odd/even distribution
"""
draws = get_all_draw_numbers()
if not draws:
raise HTTPException(status_code=404, detail="No data yet")
return get_statistical_report(draws)
# ── Simulation best_picks (main recommendation endpoint) ─────────────────────
@app.get("/api/lotto/best")
def api_best_picks(limit: int = 20):
"""
Return the best number combinations selected by simulation (default 20 sets).
Refreshed automatically after each of the 6 daily simulation runs.
Each combination includes its score and metrics.
"""
limit = max(1, min(limit, 50))
picks = get_best_picks(limit=limit)
if not picks:
raise HTTPException(
status_code=404,
detail="시뮬레이션 결과가 없습니다. /api/admin/simulate로 먼저 실행하세요.",
)
draws = get_all_draw_numbers()
result = []
for p in picks:
nums = p["numbers"]
result.append({
"rank": p["rank_in_run"],
"numbers": nums,
"score_total": p["score_total"],
"based_on_draw": p["based_on_draw"],
"simulation_run_id": p["source_run_id"],
"created_at": p["created_at"],
"metrics": calc_metrics(nums),
})
latest = get_latest_draw()
return {
"based_on_draw": latest["drw_no"] if latest else None,
"count": len(result),
"items": result,
}
# ── Full simulation results (detail API) ─────────────────────────────────────
@app.get("/api/lotto/simulation")
def api_simulation(run_id: Optional[int] = None, runs_limit: int = 5):
"""
Detailed view of simulation runs and their top candidates.
Without run_id: returns the latest runs_limit runs plus the most recent run's candidates.
With run_id: returns only that run's candidates.
"""
runs = get_simulation_runs(limit=runs_limit)
if not runs:
raise HTTPException(status_code=404, detail="시뮬레이션 기록이 없습니다.")
target_run_id = run_id if run_id is not None else runs[0]["id"]
candidates = get_simulation_candidates(target_run_id, limit=100)
# attach metrics to each candidate
enriched = []
for c in candidates:
enriched.append({
**c,
"metrics": calc_metrics(c["numbers"]),
})
return {
"runs": runs,
"selected_run_id": target_run_id,
"candidates_count": len(enriched),
"candidates": enriched,
}
# ── Legacy manual recommendation API (kept for backward compatibility) ───────
@app.get("/api/lotto/recommend") @app.get("/api/lotto/recommend")
def api_recommend( def api_recommend(
recent_window: int = 200, recent_window: int = 200,
recent_weight: float = 2.0, recent_weight: float = 2.0,
avoid_recent_k: int = 5, avoid_recent_k: int = 5,
# ---- optional constraints (Lotto Lab) ----
sum_min: Optional[int] = None,
sum_max: Optional[int] = None,
odd_min: Optional[int] = None,
odd_max: Optional[int] = None,
range_min: Optional[int] = None,
range_max: Optional[int] = None,
max_overlap_latest: Optional[int] = None,
max_try: int = 200,
):
draws = get_all_draw_numbers()
if not draws:
@@ -141,7 +258,6 @@ def api_recommend(
"recent_window": recent_window, "recent_window": recent_window,
"recent_weight": float(recent_weight), "recent_weight": float(recent_weight),
"avoid_recent_k": avoid_recent_k, "avoid_recent_k": avoid_recent_k,
"sum_min": sum_min, "sum_min": sum_min,
"sum_max": sum_max, "sum_max": sum_max,
"odd_min": odd_min, "odd_min": odd_min,
@@ -166,7 +282,6 @@ def api_recommend(
return False
if range_max is not None and m["range"] > range_max:
return False
if max_overlap_latest is not None:
ov = calc_recent_overlap(nums, draws, last_k=avoid_recent_k)
if ov["repeats"] > max_overlap_latest:
@@ -194,11 +309,9 @@ def api_recommend(
if chosen is None:
raise HTTPException(
status_code=400,
detail=f"Constraints too strict. No valid set found in max_try={max_try}.",
)
# ✅ dedup save
saved = save_recommendation_dedup(
latest["drw_no"] if latest else None,
chosen,
@@ -218,10 +331,121 @@ def api_recommend(
"params": params, "params": params,
"metrics": metrics, "metrics": metrics,
"recent_overlap": overlap, "recent_overlap": overlap,
"tries": tries, "tries": tries,
} }
# ---------- ✅ history list (filter/paging) ----------
# ── Heatmap-based recommendation (kept for backward compatibility) ───────────
@app.get("/api/lotto/recommend/heatmap")
def api_recommend_heatmap(
heatmap_window: int = 20,
heatmap_weight: float = 1.5,
recent_window: int = 200,
recent_weight: float = 2.0,
avoid_recent_k: int = 5,
sum_min: Optional[int] = None,
sum_max: Optional[int] = None,
odd_min: Optional[int] = None,
odd_max: Optional[int] = None,
range_min: Optional[int] = None,
range_max: Optional[int] = None,
max_overlap_latest: Optional[int] = None,
max_try: int = 200,
):
draws = get_all_draw_numbers()
if not draws:
raise HTTPException(status_code=404, detail="No data yet")
past_recs = list_recommendations_ex(limit=100, sort="id_desc")
latest = get_latest_draw()
params = {
"heatmap_window": heatmap_window,
"heatmap_weight": float(heatmap_weight),
"recent_window": recent_window,
"recent_weight": float(recent_weight),
"avoid_recent_k": avoid_recent_k,
"sum_min": sum_min,
"sum_max": sum_max,
"odd_min": odd_min,
"odd_max": odd_max,
"range_min": range_min,
"range_max": range_max,
"max_overlap_latest": max_overlap_latest,
"max_try": int(max_try),
}
def _accept(nums: List[int]) -> bool:
m = calc_metrics(nums)
if sum_min is not None and m["sum"] < sum_min:
return False
if sum_max is not None and m["sum"] > sum_max:
return False
if odd_min is not None and m["odd"] < odd_min:
return False
if odd_max is not None and m["odd"] > odd_max:
return False
if range_min is not None and m["range"] < range_min:
return False
if range_max is not None and m["range"] > range_max:
return False
if max_overlap_latest is not None:
ov = calc_recent_overlap(nums, draws, last_k=avoid_recent_k)
if ov["repeats"] > max_overlap_latest:
return False
return True
chosen = None
explain = None
tries = 0
while tries < max_try:
tries += 1
result = recommend_with_heatmap(
draws,
past_recs,
heatmap_window=heatmap_window,
heatmap_weight=heatmap_weight,
recent_window=recent_window,
recent_weight=recent_weight,
avoid_recent_k=avoid_recent_k,
)
nums = result["numbers"]
if _accept(nums):
chosen = nums
explain = result["explain"]
break
if chosen is None:
raise HTTPException(
status_code=400,
detail=f"Constraints too strict. No valid set found in max_try={max_try}.",
)
saved = save_recommendation_dedup(
latest["drw_no"] if latest else None,
chosen,
params,
)
metrics = calc_metrics(chosen)
overlap = calc_recent_overlap(chosen, draws, last_k=avoid_recent_k)
return {
"id": saved["id"],
"saved": saved["saved"],
"deduped": saved["deduped"],
"based_on_latest_draw": latest["drw_no"] if latest else None,
"numbers": chosen,
"explain": explain,
"params": params,
"metrics": metrics,
"recent_overlap": overlap,
"tries": tries,
}
# ── Recommendation history ───────────────────────────────────────────────────
@app.get("/api/history") @app.get("/api/history")
def api_history( def api_history(
limit: int = 30, limit: int = 30,
@@ -260,6 +484,7 @@ def api_history(
"filters": {"favorite": favorite, "tag": tag, "q": q, "sort": sort}, "filters": {"favorite": favorite, "tag": tag, "q": q, "sort": sort},
} }
@app.delete("/api/history/{rec_id:int}") @app.delete("/api/history/{rec_id:int}")
def api_history_delete(rec_id: int): def api_history_delete(rec_id: int):
ok = delete_recommendation(rec_id) ok = delete_recommendation(rec_id)
@@ -267,12 +492,13 @@ def api_history_delete(rec_id: int):
raise HTTPException(status_code=404, detail="Not found")
return {"deleted": True, "id": rec_id}
# ---------- ✅ history update (favorite/note/tags) ----------
class HistoryUpdate(BaseModel):
favorite: Optional[bool] = None
note: Optional[str] = None
tags: Optional[List[str]] = None
@app.patch("/api/history/{rec_id:int}")
def api_history_patch(rec_id: int, body: HistoryUpdate):
ok = update_recommendation(rec_id, favorite=body.favorite, note=body.note, tags=body.tags)
@@ -280,11 +506,11 @@ def api_history_patch(rec_id: int, body: HistoryUpdate):
raise HTTPException(status_code=404, detail="Not found or no changes")
return {"updated": True, "id": rec_id}
# ---------- ✅ batch recommend ----------
# ── Batch recommendation (kept for backward compatibility) ───────────────────
def _batch_unique(draws, count: int, recent_window: int, recent_weight: float, avoid_recent_k: int, max_try: int = 200):
items = []
seen = set()
tries = 0
while len(items) < count and tries < max_try:
tries += 1
@@ -294,9 +520,9 @@ def _batch_unique(draws, count: int, recent_window: int, recent_weight: float, a
continue
seen.add(key)
items.append(r)
return items
@app.get("/api/lotto/recommend/batch")
def api_recommend_batch(
count: int = 5,
@@ -322,14 +548,20 @@ def api_recommend_batch(
return {
"based_on_latest_draw": latest["drw_no"] if latest else None,
"count": count,
"items": [{
"numbers": it["numbers"],
"explain": it["explain"],
"metrics": calc_metrics(it["numbers"]),
} for it in items],
"params": params,
}
class BatchSave(BaseModel):
items: List[List[int]]
params: dict
@app.post("/api/lotto/recommend/batch")
def api_recommend_batch_save(body: BatchSave):
latest = get_latest_draw()
@@ -342,3 +574,291 @@ def api_recommend_batch_save(body: BatchSave):
return {"saved": True, "created_ids": created, "deduped_ids": deduped} return {"saved": True, "created_ids": created, "deduped_ids": deduped}
@app.get("/api/version")
def version():
return {"version": os.getenv("APP_VERSION", "dev")}
# ── Todos API ─────────────────────────────────────────────────────────────────
class TodoCreate(BaseModel):
title: str
description: Optional[str] = None
status: str = "todo"
class TodoUpdate(BaseModel):
title: Optional[str] = None
description: Optional[str] = None
status: Optional[str] = None
@app.get("/api/todos")
def api_todos_list():
return get_all_todos()
@app.post("/api/todos", status_code=201)
def api_todos_create(body: TodoCreate):
if body.status not in ("todo", "in_progress", "done"):
raise HTTPException(status_code=422, detail="status must be todo | in_progress | done")
return create_todo(body.title, body.description, body.status)
# ⚠️ the /done route must be registered before /{todo_id} so that "done" is not matched as an id
@app.delete("/api/todos/done")
def api_todos_delete_done():
deleted = delete_done_todos()
return {"deleted": deleted}
@app.put("/api/todos/{todo_id}")
def api_todos_update(todo_id: str, body: TodoUpdate):
if body.status is not None and body.status not in ("todo", "in_progress", "done"):
raise HTTPException(status_code=422, detail="status must be todo | in_progress | done")
updated = update_todo(todo_id, body.model_dump(exclude_none=True))
if updated is None:
raise HTTPException(status_code=404, detail="Todo not found")
return updated
@app.delete("/api/todos/{todo_id}")
def api_todos_delete(todo_id: str):
ok = delete_todo(todo_id)
if not ok:
raise HTTPException(status_code=404, detail="Todo not found")
return {"ok": True}
# ── Blog API ──────────────────────────────────────────────────────────────────
class BlogPostCreate(BaseModel):
title: str
body: str = ""
excerpt: str = ""
tags: List[str] = []
date: str = "" # 빈 문자열이면 오늘 날짜 사용
class BlogPostUpdate(BaseModel):
title: Optional[str] = None
body: Optional[str] = None
excerpt: Optional[str] = None
tags: Optional[List[str]] = None
date: Optional[str] = None
@app.get("/api/blog/posts")
def api_blog_list():
return {"posts": get_all_posts()}
@app.post("/api/blog/posts", status_code=201)
def api_blog_create(body: BlogPostCreate):
from datetime import date as _date
post_date = body.date if body.date else _date.today().isoformat()
post = create_post(body.title, body.body, body.excerpt, body.tags, post_date)
return post
@app.put("/api/blog/posts/{post_id}")
def api_blog_update(post_id: int, body: BlogPostUpdate):
updated = update_post(post_id, body.model_dump(exclude_none=True))
if updated is None:
raise HTTPException(status_code=404, detail="Post not found")
return updated
@app.delete("/api/blog/posts/{post_id}")
def api_blog_delete(post_id: int):
ok = delete_post(post_id)
if not ok:
raise HTTPException(status_code=404, detail="Post not found")
return {"ok": True}
# ── RealEstate API ─────────────────────────────────────────────────────────────
VALID_STATUSES = {"청약예정", "청약중", "결과발표", "완료"}
VALID_PRIORITIES = {"high", "normal", "low"}
class ComplexCreate(BaseModel):
name: str
address: str = ""
lat: Optional[float] = None
lng: Optional[float] = None
units: Optional[int] = None
types: List[str] = []
avgPricePerPyeong: Optional[int] = None
subscriptionStart: Optional[str] = None
subscriptionEnd: Optional[str] = None
resultDate: Optional[str] = None
status: str = "청약예정"
priority: str = "normal"
tags: List[str] = []
naverUrl: str = ""
floorPlanUrl: str = ""
memo: str = ""
class ComplexUpdate(BaseModel):
name: Optional[str] = None
address: Optional[str] = None
lat: Optional[float] = None
lng: Optional[float] = None
units: Optional[int] = None
types: Optional[List[str]] = None
avgPricePerPyeong: Optional[int] = None
subscriptionStart: Optional[str] = None
subscriptionEnd: Optional[str] = None
resultDate: Optional[str] = None
status: Optional[str] = None
priority: Optional[str] = None
tags: Optional[List[str]] = None
naverUrl: Optional[str] = None
floorPlanUrl: Optional[str] = None
memo: Optional[str] = None
@app.get("/api/realestate/complexes")
def api_realestate_list():
return get_all_complexes()
@app.post("/api/realestate/complexes", status_code=201)
def api_realestate_create(body: ComplexCreate):
if body.status not in VALID_STATUSES:
raise HTTPException(status_code=400, detail=f"status must be one of {VALID_STATUSES}")
if body.priority not in VALID_PRIORITIES:
raise HTTPException(status_code=400, detail=f"priority must be one of {VALID_PRIORITIES}")
return create_complex(body.model_dump())
@app.put("/api/realestate/complexes/{complex_id}")
def api_realestate_update(complex_id: int, body: ComplexUpdate):
data = body.model_dump(exclude_none=True)
if "status" in data and data["status"] not in VALID_STATUSES:
raise HTTPException(status_code=400, detail=f"status must be one of {VALID_STATUSES}")
if "priority" in data and data["priority"] not in VALID_PRIORITIES:
raise HTTPException(status_code=400, detail=f"priority must be one of {VALID_PRIORITIES}")
updated = update_complex(complex_id, data)
if updated is None:
raise HTTPException(status_code=404, detail="Complex not found")
return updated
@app.delete("/api/realestate/complexes/{complex_id}")
def api_realestate_delete(complex_id: int):
ok = delete_complex(complex_id)
if not ok:
raise HTTPException(status_code=404, detail="Complex not found")
return {"ok": True}
# ── Subscription API ───────────────────────────────────────────────────────────
class SubscriptionItemCreate(BaseModel):
complexName: str
address: str = ""
pyeong: Optional[str] = None
totalPrice: Optional[int] = None
type: Optional[str] = None
specialType: Optional[str] = None
supplyType: Optional[str] = None
status: str = "검토중"
minScore: Optional[int] = None
maxIncome: Optional[int] = None
homelessRequired: Optional[int] = None
subscriptionStart: Optional[str] = None
subscriptionEnd: Optional[str] = None
contractDate: Optional[str] = None
interimDate: Optional[str] = None
balanceDate: Optional[str] = None
resultDate: Optional[str] = None
depositRate: int = 10
interimRate: int = 60
balanceRate: int = 30
loanType: Optional[str] = None
loanRate: Optional[float] = None
memo: str = ""
naverUrl: str = ""
class SubscriptionItemUpdate(BaseModel):
complexName: Optional[str] = None
address: Optional[str] = None
pyeong: Optional[str] = None
totalPrice: Optional[int] = None
type: Optional[str] = None
specialType: Optional[str] = None
supplyType: Optional[str] = None
status: Optional[str] = None
minScore: Optional[int] = None
maxIncome: Optional[int] = None
homelessRequired: Optional[int] = None
subscriptionStart: Optional[str] = None
subscriptionEnd: Optional[str] = None
contractDate: Optional[str] = None
interimDate: Optional[str] = None
balanceDate: Optional[str] = None
resultDate: Optional[str] = None
depositRate: Optional[int] = None
interimRate: Optional[int] = None
balanceRate: Optional[int] = None
loanType: Optional[str] = None
loanRate: Optional[float] = None
memo: Optional[str] = None
naverUrl: Optional[str] = None
class SubscriptionProfile(BaseModel):
isHouseholdHead: Optional[bool] = None
isHomeless: Optional[bool] = None
homelessPeriod: Optional[int] = None
savingsMonths: Optional[int] = None
savingsCount: Optional[int] = None
dependents: Optional[int] = None
residencyArea: Optional[str] = None
isMarried: Optional[bool] = None
marriageMonths: Optional[int] = None
monthlyIncome: Optional[int] = None
specialQuals: Optional[List[str]] = None
@app.get("/api/subscription/items")
def api_subscription_list():
return get_all_subscription_items()
@app.post("/api/subscription/items", status_code=201)
def api_subscription_create(body: SubscriptionItemCreate):
return create_subscription_item(body.model_dump())
@app.put("/api/subscription/items/{item_id}")
def api_subscription_update(item_id: int, body: SubscriptionItemUpdate):
updated = update_subscription_item(item_id, body.model_dump(exclude_none=True))
if updated is None:
raise HTTPException(status_code=404, detail="Item not found")
return updated
@app.delete("/api/subscription/items/{item_id}")
def api_subscription_delete(item_id: int):
ok = delete_subscription_item(item_id)
if not ok:
raise HTTPException(status_code=404, detail="Item not found")
return {"ok": True}
@app.get("/api/subscription/profile")
def api_subscription_profile_get():
profile = get_subscription_profile()
return profile if profile is not None else {}
@app.put("/api/subscription/profile")
def api_subscription_profile_put(body: SubscriptionProfile):
return upsert_subscription_profile(body.model_dump(exclude_none=True))
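
An HTTP-level smoke test of the profile upsert (port assumed from the healthcheck script below; payload values invented):

    import requests

    r = requests.put(
        "http://localhost:18000/api/subscription/profile",  # port assumed
        json={"isHomeless": True, "homelessPeriod": 36},     # values illustrative
        timeout=5,
    )
    print(r.status_code, r.json())  # the upserted profile row is echoed back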


@@ -66,3 +66,98 @@ def recommend_numbers(
return {"numbers": chosen_sorted, "explain": explain} return {"numbers": chosen_sorted, "explain": explain}
def recommend_with_heatmap(
draws: List[Tuple[int, List[int]]],
past_recommendations: List[Dict[str, Any]],
*,
heatmap_window: int = 10,
heatmap_weight: float = 1.5,
recent_window: int = 200,
recent_weight: float = 2.0,
avoid_recent_k: int = 5,
seed: int | None = None,
) -> Dict[str, Any]:
"""
Heatmap-weighted recommendation:
- Analyze the hit rate of past recommendations and upweight numbers they got right
- Combine with the existing statistics-based recommendation
Args:
draws: actual winning numbers [(draw_no, [numbers]), ...]
past_recommendations: past recommendation data [{"numbers": [...], "correct_count": N, "based_on_draw": M}, ...]
heatmap_window: number of recent recommendations to analyze
heatmap_weight: heatmap weight (higher prefers previously-hit numbers)
"""
if seed is not None:
random.seed(seed)
# 1. Compute the existing statistics-based weights
all_nums = [n for _, nums in draws for n in nums]
freq_all = Counter(all_nums)
recent = draws[-recent_window:] if len(draws) >= recent_window else draws
recent_nums = [n for _, nums in recent for n in nums]
freq_recent = Counter(recent_nums)
last_k = draws[-avoid_recent_k:] if len(draws) >= avoid_recent_k else draws
last_k_nums = set(n for _, nums in last_k for n in nums)
# 2. Build the heatmap: frequency of numbers that hit in past recommendations
heatmap = Counter()
recent_recs = past_recommendations[-heatmap_window:] if len(past_recommendations) >= heatmap_window else past_recommendations
for rec in recent_recs:
if rec.get("correct_count", 0) > 0: # 적중한 추천만
# 적중 개수에 비례해서 가중치 부여 (많이 맞춘 추천일수록 높은 가중)
weight = rec["correct_count"] ** 1.5 # 제곱으로 강조
for num in rec["numbers"]:
heatmap[num] += weight
# 3. Final weight = base statistics + heatmap
weights = {}
for n in range(1, 46):
w = freq_all[n] + recent_weight * freq_recent[n]
# add the heatmap weight
if n in heatmap:
w += heatmap_weight * heatmap[n]
# penalize numbers seen in the last k draws
if n in last_k_nums:
w *= 0.6
weights[n] = max(w, 0.1)
# 4. Pick 6 numbers via weighted sampling
chosen = []
pool = list(range(1, 46))
for _ in range(6):
total = sum(weights[n] for n in pool)
r = random.random() * total
acc = 0.0
for n in pool:
acc += weights[n]
if acc >= r:
chosen.append(n)
pool.remove(n)
break
chosen_sorted = sorted(chosen)
# 5. Explanation payload
explain = {
"recent_window": recent_window,
"recent_weight": recent_weight,
"avoid_recent_k": avoid_recent_k,
"heatmap_window": heatmap_window,
"heatmap_weight": heatmap_weight,
"top_all": [n for n, _ in freq_all.most_common(10)],
"top_recent": [n for n, _ in freq_recent.most_common(10)],
"top_heatmap": [n for n, _ in heatmap.most_common(10)],
"last_k_draws": [d for d, _ in last_k],
"analyzed_recommendations": len(recent_recs),
}
return {"numbers": chosen_sorted, "explain": explain}

backend/app/utils.py Normal file

@@ -0,0 +1,59 @@
from typing import List, Dict, Any, Tuple
def calc_metrics(numbers: List[int]) -> Dict[str, Any]:
nums = sorted(numbers)
s = sum(nums)
odd = sum(1 for x in nums if x % 2 == 1)
even = len(nums) - odd
mn, mx = nums[0], nums[-1]
rng = mx - mn
# 1-10, 11-20, 21-30, 31-40, 41-45
buckets = {
"1-10": 0,
"11-20": 0,
"21-30": 0,
"31-40": 0,
"41-45": 0,
}
for x in nums:
if 1 <= x <= 10:
buckets["1-10"] += 1
elif 11 <= x <= 20:
buckets["11-20"] += 1
elif 21 <= x <= 30:
buckets["21-30"] += 1
elif 31 <= x <= 40:
buckets["31-40"] += 1
else:
buckets["41-45"] += 1
return {
"sum": s,
"odd": odd,
"even": even,
"min": mn,
"max": mx,
"range": rng,
"buckets": buckets,
}
def calc_recent_overlap(numbers: List[int], draws: List[Tuple[int, List[int]]], last_k: int) -> Dict[str, Any]:
"""
draws: [(drw_no, [n1..n6]), ...] in ascending order
last_k: overlap against the most recent k draws
"""
if last_k <= 0:
return {"last_k": 0, "repeats": 0, "repeated_numbers": []}
recent = draws[-last_k:] if len(draws) >= last_k else draws
recent_set = set()
for _, nums in recent:
recent_set.update(nums)
repeated = sorted(set(numbers) & recent_set)
return {
"last_k": len(recent),
"repeats": len(repeated),
"repeated_numbers": repeated,
}
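
A verifiable example of calc_metrics on an arbitrary combination (module path assumed):

    from app.utils import calc_metrics  # module path assumed

    m = calc_metrics([3, 11, 17, 24, 33, 42])
    assert m["sum"] == 130 and m["odd"] == 4 and m["even"] == 2
    assert m["min"] == 3 and m["max"] == 42 and m["range"] == 39
    assert m["buckets"] == {"1-10": 1, "11-20": 2, "21-30": 1, "31-40": 1, "41-45": 1}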

deployer/Dockerfile Normal file

@@ -0,0 +1,16 @@
FROM python:3.12-slim
RUN apt-get update && apt-get install -y --no-install-recommends \
git rsync ca-certificates curl \
docker.io \
&& rm -rf /var/lib/apt/lists/*
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY app.py /app/app.py
ENV PYTHONUNBUFFERED=1
EXPOSE 9000
CMD ["uvicorn", "app:app", "--host", "0.0.0.0", "--port", "9000"]

deployer/app.py Normal file

@@ -0,0 +1,52 @@
import os, hmac, hashlib, subprocess
from fastapi import FastAPI, Request, HTTPException, BackgroundTasks
import logging
# logging setup
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("deployer")
app = FastAPI()
SECRET = os.getenv("WEBHOOK_SECRET", "")
def verify(sig: str, body: bytes) -> bool:
if not SECRET or not sig:
return False
mac = hmac.new(SECRET.encode(), msg=body, digestmod=hashlib.sha256).hexdigest()
candidates = {mac, f"sha256={mac}"}
return any(hmac.compare_digest(sig, c) for c in candidates)
def run_deploy_script():
"""배포 스크립트를 백그라운드에서 실행하고 로그를 남김"""
logger.info("Starting deployment script...")
try:
# 10-minute timeout
p = subprocess.run(["/bin/bash", "/scripts/deploy.sh"], capture_output=True, text=True, timeout=600)
if p.returncode == 0:
logger.info(f"Deployment SUCCESS:\n{p.stdout}")
else:
logger.error(f"Deployment FAILED ({p.returncode}):\n{p.stdout}\n{p.stderr}")
except Exception as e:
logger.exception(f"Exception during deployment: {e}")
@app.post("/webhook")
async def webhook(req: Request, background_tasks: BackgroundTasks):
body = await req.body()
sig = (
req.headers.get("X-Gitea-Signature")
or req.headers.get("X-Hub-Signature-256")
or ""
)
if not verify(sig, body):
raise HTTPException(401, "bad signature")
# ✅ async execution: reply OK to Gitea immediately, run the deploy in the background
background_tasks.add_task(run_deploy_script)
return {"ok": True, "message": "Deployment started in background"}


@@ -0,0 +1,2 @@
fastapi
uvicorn


@@ -2,7 +2,10 @@ version: "3.8"
services:
backend:
build:
context: ./backend
args:
APP_VERSION: ${APP_VERSION:-dev}
container_name: lotto-backend
restart: unless-stopped
ports:
@@ -12,13 +15,28 @@ services:
- LOTTO_ALL_URL=${LOTTO_ALL_URL:-https://smok95.github.io/lotto/results/all.json}
- LOTTO_LATEST_URL=${LOTTO_LATEST_URL:-https://smok95.github.io/lotto/results/latest.json}
volumes:
- ${RUNTIME_PATH}/data:/app/data
stock-lab:
build:
context: ./stock-lab
args:
APP_VERSION: ${APP_VERSION:-dev}
container_name: stock-lab
restart: unless-stopped
ports:
- "18500:8000"
environment:
- TZ=${TZ:-Asia/Seoul}
- WINDOWS_AI_SERVER_URL=${WINDOWS_AI_SERVER_URL:-http://192.168.0.5:8000}
volumes:
- ${STOCK_DATA_PATH:-./data/stock}:/app/data
travel-proxy:
build: ./travel-proxy
container_name: travel-proxy
restart: unless-stopped
user: "${PUID}:${PGID}"
ports:
- "19000:8000"  # internal check only
environment:
@@ -29,8 +47,8 @@ services:
- TRAVEL_CACHE_TTL=${TRAVEL_CACHE_TTL:-300}
- CORS_ALLOW_ORIGINS=${CORS_ALLOW_ORIGINS:-*}
volumes:
- ${PHOTO_PATH}:/data/travel:ro
- ${RUNTIME_PATH}/travel-thumbs:/data/thumbs:rw
frontend:
image: nginx:alpine
@@ -39,9 +57,23 @@ services:
ports:
- "8080:80"
volumes:
- ${FRONTEND_PATH}:/usr/share/nginx/html:ro
- ${RUNTIME_PATH}/nginx/default.conf:/etc/nginx/conf.d/default.conf:ro
- ${PHOTO_PATH}:/data/travel:ro
- ${RUNTIME_PATH}/travel-thumbs:/data/thumbs:ro
extra_hosts:
- "host.docker.internal:host-gateway"
deployer:
build: ./deployer
container_name: webpage-deployer
restart: unless-stopped
ports:
- "19010:9000" # 외부 노출 필요 없으면 내부만 (리버스프록시로 /webhook만 노출 추천)
environment:
- WEBHOOK_SECRET=${WEBHOOK_SECRET}
volumes:
- ${REPO_PATH}:/repo:rw
- ${RUNTIME_PATH}:/runtime:rw
- ${RUNTIME_PATH}/scripts:/scripts:ro
- /var/run/docker.sock:/var/run/docker.sock


@@ -54,6 +54,37 @@ server {
proxy_pass http://travel-proxy:8000/api/travel/;
}
# stock API
location /api/stock/ {
proxy_http_version 1.1;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_pass http://stock-lab:8000/api/stock/;
}
# trade API (Stock Lab Proxy)
location /api/trade/ {
proxy_http_version 1.1;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_pass http://stock-lab:8000/api/trade/;
}
# portfolio API (Stock Lab) — matches with or without a trailing slash
location /api/portfolio {
proxy_http_version 1.1;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_pass http://stock-lab:8000/api/portfolio;
}
# API proxy (key point: avoid the duplicated /api/ prefix)
location /api/ {
proxy_http_version 1.1;
@@ -66,6 +97,27 @@ server {
proxy_pass http://backend:8000;
}
# webhook receiver (handle both /webhook and /webhook/)
location = /webhook {
proxy_http_version 1.1;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_pass http://deployer:9000/webhook;
}
location /webhook/ {
proxy_http_version 1.1;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_pass http://deployer:9000/webhook;
}
# SPA routing (safest to keep this last)
location / {
try_files $uri $uri/ /index.html;

@@ -1,21 +0,0 @@
#!/usr/bin/env bash
set -euo pipefail
ROOT="/volume1/docker/webpage"
cd "$ROOT"
echo "[1/5] git fetch + pull"
git fetch --all --prune
git pull --ff-only
echo "[2/5] docker compose build"
docker compose build --pull
echo "[3/5] docker compose up"
docker compose up -d
echo "[4/5] status"
docker compose ps
echo "[5/5] done"


@@ -1,15 +0,0 @@
#!/usr/bin/env bash
set -euo pipefail
BASE="http://127.0.0.1"
echo "backend health:"
curl -fsS "${BASE}:18000/health" | sed 's/^/ /'
echo "backend latest:"
curl -fsS "${BASE}:18000/api/lotto/latest" | head -c 200; echo
echo "travel regions:"
curl -fsS "${BASE}:19000/api/travel/regions" | head -c 200; echo
echo "OK"

scripts/deploy-nas.sh Normal file

@@ -0,0 +1,59 @@
#!/bin/bash
set -euo pipefail
# 1. Auto-detect: are we inside the Docker container?
if [ -d "/repo" ] && [ -d "/runtime" ]; then
echo "Detected Docker Container environment."
SRC="/repo"
DST="/runtime"
else
# 2. Host environment: try loading .env
if [ -f ".env" ]; then
echo "Loading .env file..."
set -a; source .env; set +a
fi
# fall back to the current directory as SRC if the env var is unset
SRC="${REPO_PATH:-$(pwd)}"
DST="${RUNTIME_PATH:-}"
if [ -z "$DST" ]; then
echo "Error: RUNTIME_PATH is not set. Please create .env file with RUNTIME_PATH defined."
exit 1
fi
fi
echo "Source: $SRC"
echo "Target: $DST"
cd "$SRC"
# Copy/sync only the items promoted from the repo to production (list only what is needed)
# backend, travel-proxy, deployer, nginx, scripts, docker-compose.yml, .env, etc.
rsync -a --delete \
--exclude ".git" \
--exclude ".releases" \
"$SRC/backend/" "$DST/backend/"
rsync -a --delete \
--exclude ".git" \
"$SRC/travel-proxy/" "$DST/travel-proxy/"
rsync -a --delete \
--exclude ".git" \
"$SRC/deployer/" "$DST/deployer/"
rsync -a --delete \
--exclude ".git" \
"$SRC/stock-lab/" "$DST/stock-lab/"
rsync -a --delete \
--exclude ".git" \
"$SRC/nginx/" "$DST/nginx/"
rsync -a --delete \
--exclude ".git" \
"$SRC/scripts/" "$DST/scripts/"
# compose file / env file
rsync -a "$SRC/docker-compose.yml" "$DST/docker-compose.yml"
if [ -f "$SRC/.env" ]; then
rsync -a "$SRC/.env" "$DST/.env"
fi
echo "SYNC_OK"

scripts/deploy.sh Normal file

@@ -0,0 +1,64 @@
#!/bin/bash
set -euo pipefail
# 1. Auto-detect: are we inside the Docker container?
if [ -d "/repo" ] && [ -d "/runtime" ]; then
echo "Detected Docker Container environment."
SRC="/repo"
DST="/runtime"
else
# 2. Host environment: try loading .env
if [ -f ".env" ]; then
echo "Loading .env file..."
set -a; source .env; set +a
fi
# fall back to the current directory as SRC if the env var is unset
SRC="${REPO_PATH:-$(pwd)}"
DST="${RUNTIME_PATH:-/volume1/docker/webpage}" # 기본값 설정
if [ -z "$DST" ]; then
echo "Error: RUNTIME_PATH is not set."
exit 1
fi
fi
echo "Source: $SRC"
echo "Target: $DST"
git config --global --add safe.directory "$SRC"
cd "$SRC"
git fetch --all --prune
git pull --ff-only
# release backup (for rollback); ties into the permission fix below
TAG="$(date +%Y%m%d-%H%M%S)"
mkdir -p "$DST/.releases/$TAG"
rsync -a --delete \
--exclude ".releases" \
"$DST/" "$DST/.releases/$TAG/"
# Promote source → production (deploy-nas.sh can be called if it already exists)
# e.g. if repo/scripts/deploy-nas.sh contains the copy/sync logic:
bash "$SRC/scripts/deploy-nas.sh"
cd "$DST"
docker-compose up -d --build backend travel-proxy stock-lab frontend
# [Permission Fix]
# Re-own files created by the deployer (which runs as root) to the host user.
# Use PUID:PGID from .env if defined, else 1000:1000
TARGET_UID=$(grep -m1 '^PUID=' .env | cut -d '=' -f2 || true)
TARGET_GID=$(grep -m1 '^PGID=' .env | cut -d '=' -f2 || true)
# fall back to 1000 when PUID/PGID are missing from .env (cut succeeds on empty
# input, so an "|| echo 1000" after the pipeline would never fire)
TARGET_UID=${TARGET_UID:-1000}
TARGET_GID=${TARGET_GID:-1000}
echo "Fixing permissions to $TARGET_UID:$TARGET_GID ..."
chown -R "$TARGET_UID:$TARGET_GID" "$DST" || true
chmod -R 755 "$DST" || true
# and the repo side too, just in case
if [ "$SRC" != "$DST" ]; then
chown -R "$TARGET_UID:$TARGET_GID" "$SRC" || true
chmod -R 755 "$SRC" || true
fi
echo "DEPLOY_OK $TAG"

scripts/healthcheck.sh Normal file

@@ -0,0 +1,59 @@
#!/bin/bash
set -euo pipefail
# For health checks inside the NAS (uses localhost)
# Ports: backend(18000), travel-proxy(19000), stock-lab(18500), frontend(8080)
# Colors
GREEN='\033[0;32m'
RED='\033[0;31m'
NC='\033[0m'
echo "======================================"
echo " Starting Health Check..."
echo "======================================"
check_url() {
local name="$1"
local url="$2"
# fetch only the HTTP status code (5-second timeout)
status=$(curl -o /dev/null -s -w "%{http_code}" --max-time 5 "$url" || echo "FAIL")
if [[ "$status" == "200" ]]; then
echo -e "[${GREEN}OK${NC}] $name ($url) -> $status"
else
echo -e "[${RED}XX${NC}] $name ($url) -> $status"
# exit 1 on any failure (for CI/CD use)
# exit 1
fi
}
echo ""
echo "--- 1. Backend Service ---"
check_url "Backend Health" "http://localhost:18000/health"
check_url "Lotto Latest" "http://localhost:18000/api/lotto/latest"
check_url "Stats API" "http://localhost:18000/api/lotto/stats"
echo ""
echo "--- 2. Travel Proxy Service ---"
# Travel Proxy may not expose a root (/) endpoint in main.py, so check regions instead
check_url "Travel Regions" "http://localhost:19000/api/travel/regions"
echo ""
echo "--- 3. Stock Lab Service ---"
check_url "Stock Health" "http://localhost:18500/health"
check_url "Stock News" "http://localhost:18500/api/stock/news"
check_url "Stock Indices" "http://localhost:18500/api/stock/indices"
echo ""
echo "--- 4. Frontend (Nginx) ---"
# test access via the external port 8080
check_url "Frontend Home" "http://localhost:8080"
# check that Nginx proxies to the backend (call an API that actually exists)
check_url "Nginx->Backend Proxy" "http://localhost:8080/api/lotto/latest"
echo ""
echo "======================================"
echo " Health Check Completed."
echo "======================================"

stock-lab/Dockerfile Normal file

@@ -0,0 +1,9 @@
FROM python:3.12-alpine
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
CMD ["uvicorn", "app.main:app", "--host", "0.0.0.0", "--port", "8000"]

stock-lab/app/db.py Normal file

@@ -0,0 +1,276 @@
import sqlite3
import os
import hashlib
from typing import List, Dict, Any
DB_PATH = "/app/data/stock.db"
def _conn() -> sqlite3.Connection:
os.makedirs(os.path.dirname(DB_PATH), exist_ok=True)
conn = sqlite3.connect(DB_PATH)
conn.row_factory = sqlite3.Row
return conn
def init_db():
with _conn() as conn:
conn.execute("""
CREATE TABLE IF NOT EXISTS articles (
id INTEGER PRIMARY KEY AUTOINCREMENT,
hash TEXT UNIQUE NOT NULL,
category TEXT DEFAULT 'domestic',
title TEXT NOT NULL,
link TEXT,
summary TEXT,
press TEXT,
pub_date TEXT,
crawled_at TEXT
)
""")
conn.execute("CREATE INDEX IF NOT EXISTS idx_articles_crawled ON articles(crawled_at DESC)")
# add column (migration for pre-existing tables)
cols = {r["name"] for r in conn.execute("PRAGMA table_info(articles)").fetchall()}
if "category" not in cols:
conn.execute("ALTER TABLE articles ADD COLUMN category TEXT DEFAULT 'domestic'")
conn.execute("""
CREATE TABLE IF NOT EXISTS portfolio (
id INTEGER PRIMARY KEY AUTOINCREMENT,
broker TEXT NOT NULL,
ticker TEXT NOT NULL,
name TEXT NOT NULL,
quantity INTEGER NOT NULL,
avg_price INTEGER NOT NULL,
created_at TEXT DEFAULT (datetime('now','localtime')),
updated_at TEXT DEFAULT (datetime('now','localtime'))
)
""")
conn.execute("""
CREATE TABLE IF NOT EXISTS broker_cash (
id INTEGER PRIMARY KEY AUTOINCREMENT,
broker TEXT UNIQUE NOT NULL,
cash INTEGER NOT NULL DEFAULT 0,
updated_at TEXT DEFAULT (datetime('now','localtime'))
)
""")
conn.execute("""
CREATE TABLE IF NOT EXISTS asset_snapshots (
id INTEGER PRIMARY KEY AUTOINCREMENT,
date TEXT UNIQUE NOT NULL,
total_eval INTEGER NOT NULL,
total_cash INTEGER NOT NULL,
total_assets INTEGER NOT NULL,
created_at TEXT DEFAULT (datetime('now','localtime'))
)
""")
conn.execute("""
CREATE TABLE IF NOT EXISTS sell_history (
id INTEGER PRIMARY KEY AUTOINCREMENT,
broker TEXT NOT NULL,
ticker TEXT NOT NULL,
name TEXT NOT NULL,
quantity INTEGER NOT NULL,
avg_price REAL NOT NULL,
sell_price REAL NOT NULL,
buy_amount REAL NOT NULL,
sell_amount REAL NOT NULL,
realized_profit REAL NOT NULL,
realized_rate REAL NOT NULL,
sold_at TEXT NOT NULL
)
""")
def save_articles(articles: List[Dict[str, str]]) -> int:
count = 0
with _conn() as conn:
for a in articles:
# dedup hash (title + link)
unique_str = f"{a['title']}|{a['link']}"
h = hashlib.md5(unique_str.encode()).hexdigest()
try:
cat = a.get("category", "domestic")
conn.execute("""
INSERT INTO articles (hash, category, title, link, summary, press, pub_date, crawled_at)
VALUES (?, ?, ?, ?, ?, ?, ?, ?)
""", (h, cat, a['title'], a['link'], a['summary'], a['press'], a['date'], a['crawled_at']))
count += 1
except sqlite3.IntegrityError:
pass  # already exists
return count
def get_latest_articles(limit: int = 20, category: str | None = None) -> List[Dict[str, Any]]:
with _conn() as conn:
if category:
rows = conn.execute(
"SELECT * FROM articles WHERE category = ? ORDER BY crawled_at DESC, id DESC LIMIT ?",
(category, limit)
).fetchall()
else:
rows = conn.execute(
"SELECT * FROM articles ORDER BY crawled_at DESC, id DESC LIMIT ?",
(limit,)
).fetchall()
return [dict(r) for r in rows]
# --- Portfolio CRUD ---
def add_portfolio_item(broker: str, ticker: str, name: str, quantity: int, avg_price: int) -> int:
with _conn() as conn:
cur = conn.execute(
"INSERT INTO portfolio (broker, ticker, name, quantity, avg_price) VALUES (?, ?, ?, ?, ?)",
(broker, ticker, name, quantity, avg_price),
)
return cur.lastrowid
def get_all_portfolio() -> List[Dict[str, Any]]:
with _conn() as conn:
rows = conn.execute("SELECT * FROM portfolio ORDER BY id").fetchall()
return [dict(r) for r in rows]
def get_portfolio_item(item_id: int) -> Dict[str, Any] | None:
with _conn() as conn:
row = conn.execute("SELECT * FROM portfolio WHERE id = ?", (item_id,)).fetchone()
return dict(row) if row else None
def update_portfolio_item(item_id: int, **kwargs) -> bool:
allowed = {"broker", "ticker", "name", "quantity", "avg_price"}
fields = {k: v for k, v in kwargs.items() if k in allowed and v is not None}
if not fields:
return False
fields["updated_at"] = __import__("datetime").datetime.now().strftime("%Y-%m-%d %H:%M:%S")
set_clause = ", ".join(f"{k} = ?" for k in fields)
values = list(fields.values()) + [item_id]
with _conn() as conn:
cur = conn.execute(f"UPDATE portfolio SET {set_clause} WHERE id = ?", values)
return cur.rowcount > 0
def delete_portfolio_item(item_id: int) -> bool:
with _conn() as conn:
cur = conn.execute("DELETE FROM portfolio WHERE id = ?", (item_id,))
return cur.rowcount > 0
# --- Broker Cash CRUD ---
def upsert_broker_cash(broker: str, cash: int) -> None:
now = __import__("datetime").datetime.now().strftime("%Y-%m-%d %H:%M:%S")
with _conn() as conn:
conn.execute("""
INSERT INTO broker_cash (broker, cash, updated_at)
VALUES (?, ?, ?)
ON CONFLICT(broker) DO UPDATE SET cash = excluded.cash, updated_at = excluded.updated_at
""", (broker, cash, now))
def get_all_broker_cash() -> List[Dict[str, Any]]:
with _conn() as conn:
rows = conn.execute("SELECT * FROM broker_cash ORDER BY broker").fetchall()
return [dict(r) for r in rows]
def get_broker_cash(broker: str) -> Dict[str, Any] | None:
with _conn() as conn:
row = conn.execute("SELECT * FROM broker_cash WHERE broker = ?", (broker,)).fetchone()
return dict(row) if row else None
def delete_broker_cash(broker: str) -> bool:
with _conn() as conn:
cur = conn.execute("DELETE FROM broker_cash WHERE broker = ?", (broker,))
return cur.rowcount > 0
# --- Asset Snapshot CRUD ---
def upsert_asset_snapshot(date: str, total_eval: int, total_cash: int, total_assets: int) -> None:
now = __import__("datetime").datetime.now().strftime("%Y-%m-%d %H:%M:%S")
with _conn() as conn:
conn.execute("""
INSERT INTO asset_snapshots (date, total_eval, total_cash, total_assets, created_at)
VALUES (?, ?, ?, ?, ?)
ON CONFLICT(date) DO UPDATE SET
total_eval = excluded.total_eval,
total_cash = excluded.total_cash,
total_assets = excluded.total_assets,
created_at = excluded.created_at
""", (date, total_eval, total_cash, total_assets, now))
# --- Sell History CRUD ---
def add_sell_history(data: Dict[str, Any]) -> Dict[str, Any]:
with _conn() as conn:
cur = conn.execute("""
INSERT INTO sell_history
(broker, ticker, name, quantity, avg_price, sell_price,
buy_amount, sell_amount, realized_profit, realized_rate, sold_at)
VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
""", (
data["broker"], data["ticker"], data["name"], data["quantity"],
data["avg_price"], data["sell_price"], data["buy_amount"],
data["sell_amount"], data["realized_profit"], data["realized_rate"],
data["sold_at"],
))
row = conn.execute("SELECT * FROM sell_history WHERE id = ?", (cur.lastrowid,)).fetchone()
return dict(row)
def get_sell_history(broker: str | None = None, days: int | None = None) -> List[Dict[str, Any]]:
conditions = []
params = []
if broker:
conditions.append("broker = ?")
params.append(broker)
if days:
conditions.append("sold_at >= datetime('now', ? || ' days')")
params.append(f"-{days}")
where = f"WHERE {' AND '.join(conditions)}" if conditions else ""
with _conn() as conn:
rows = conn.execute(
f"SELECT * FROM sell_history {where} ORDER BY sold_at DESC",
params,
).fetchall()
return [dict(r) for r in rows]
def update_sell_history(record_id: int, data: Dict[str, Any]) -> Dict[str, Any] | None:
fields = ["broker", "ticker", "name", "quantity", "avg_price", "sell_price",
"buy_amount", "sell_amount", "realized_profit", "realized_rate", "sold_at"]
set_clause = ", ".join(f"{f} = ?" for f in fields)
values = [data[f] for f in fields] + [record_id]
with _conn() as conn:
cur = conn.execute(f"UPDATE sell_history SET {set_clause} WHERE id = ?", values)
if cur.rowcount == 0:
return None
row = conn.execute("SELECT * FROM sell_history WHERE id = ?", (record_id,)).fetchone()
return dict(row)
def delete_sell_history(record_id: int) -> bool:
with _conn() as conn:
cur = conn.execute("DELETE FROM sell_history WHERE id = ?", (record_id,))
return cur.rowcount > 0
def get_asset_snapshots(days: int = 30) -> List[Dict[str, Any]]:
with _conn() as conn:
if days == 0:
rows = conn.execute(
"SELECT date, total_eval, total_cash, total_assets FROM asset_snapshots ORDER BY date ASC"
).fetchall()
else:
rows = conn.execute(
"SELECT date, total_eval, total_cash, total_assets FROM asset_snapshots ORDER BY date DESC LIMIT ?",
(days,)
).fetchall()
rows = list(reversed(rows))
return [dict(r) for r in rows]
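
A usage sketch of the snapshot helpers above (module path and numbers invented): for days > 0 the query takes the newest N rows and then reverses them, so callers always receive ascending dates.

    from app.db import upsert_asset_snapshot, get_asset_snapshots  # path assumed

    upsert_asset_snapshot("2026-03-20", total_eval=5_000_000,
                          total_cash=1_200_000, total_assets=6_200_000)
    rows = get_asset_snapshots(days=7)  # DESC LIMIT 7, then reversed → ascending
    print([r["date"] for r in rows])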


@@ -0,0 +1,18 @@
[
"2026-01-01",
"2026-01-28",
"2026-01-29",
"2026-01-30",
"2026-03-01",
"2026-05-05",
"2026-05-25",
"2026-06-06",
"2026-08-15",
"2026-09-24",
"2026-09-25",
"2026-09-26",
"2026-10-03",
"2026-10-09",
"2026-12-25",
"2026-12-31"
]

stock-lab/app/main.py Normal file

@@ -0,0 +1,411 @@
import os
import json
from datetime import date as date_type
from typing import Optional
from fastapi import FastAPI, Query
from fastapi.responses import JSONResponse
from fastapi.middleware.cors import CORSMiddleware
import requests
from apscheduler.schedulers.background import BackgroundScheduler
from pydantic import BaseModel
from .db import (
init_db, save_articles, get_latest_articles,
add_portfolio_item, get_all_portfolio, get_portfolio_item,
update_portfolio_item, delete_portfolio_item,
upsert_broker_cash, get_all_broker_cash, get_broker_cash, delete_broker_cash,
upsert_asset_snapshot, get_asset_snapshots,
add_sell_history, get_sell_history, update_sell_history, delete_sell_history,
)
from .scraper import fetch_market_news, fetch_major_indices, fetch_overseas_news
from .price_fetcher import get_current_prices
app = FastAPI()
# CORS setup (allow frontend access)
app.add_middleware(
CORSMiddleware,
allow_origins=["*"], # 운영 시에는 구체적인 도메인으로 제한하는 것이 좋음
allow_credentials=True,
allow_methods=["*"],
allow_headers=["*"],
)
scheduler = BackgroundScheduler(timezone=os.getenv("TZ", "Asia/Seoul"))
# Windows AI Server URL (set in the NAS .env)
WINDOWS_AI_SERVER_URL = os.getenv("WINDOWS_AI_SERVER_URL", "http://192.168.0.5:8000")
# load the holiday list
_HOLIDAYS_PATH = os.path.join(os.path.dirname(__file__), "holidays.json")
try:
with open(_HOLIDAYS_PATH, "r") as f:
_HOLIDAYS: set = set(json.load(f))
except Exception:
_HOLIDAYS = set()
def is_market_open(d: date_type) -> bool:
return d.weekday() < 5 and d.strftime("%Y-%m-%d") not in _HOLIDAYS
def save_daily_snapshot():
today = date_type.today()
if not is_market_open(today):
print(f"[Snapshot] {today} 휴장일 — 스킵")
return
today_str = today.strftime("%Y-%m-%d")
items = get_all_portfolio()
cash_rows = get_all_broker_cash()
total_cash = sum(r["cash"] for r in cash_rows)
if items:
tickers = list({item["ticker"] for item in items})
prices = get_current_prices(tickers)
total_eval = sum(
prices.get(item["ticker"], item["avg_price"]) * item["quantity"]
for item in items
)
else:
total_eval = 0
total_assets = total_eval + total_cash
upsert_asset_snapshot(today_str, total_eval, total_cash, total_assets)
print(f"[Snapshot] {today_str} 저장 완료: eval={total_eval}, cash={total_cash}, total={total_assets}")
@app.on_event("startup")
def on_startup():
init_db()
# scrape news daily at 08:00 (runs on the NAS itself)
scheduler.add_job(run_scraping_job, "cron", hour="8", minute="0")
# save a total-asset snapshot on weekdays at 15:40
scheduler.add_job(save_daily_snapshot, "cron", day_of_week="mon-fri", hour=15, minute=40)
# also run once at startup (if there is no data yet)
if not get_latest_articles(1):
run_scraping_job()
scheduler.start()
def run_scraping_job():
print("[StockLab] Starting news scraping...")
# 1. domestic
articles_kr = fetch_market_news()
count_kr = save_articles(articles_kr)
# 2. overseas (temporarily disabled)
# articles_world = fetch_overseas_news()
# count_world = save_articles(articles_world)
count_world = 0
print(f"[StockLab] Saved {count_kr} domestic, {count_world} overseas articles.")
@app.get("/health")
def health():
return {"ok": True}
@app.get("/api/stock/news")
def get_news(limit: int = 20, category: str | None = None):
"""Fetch the latest stock news (category: 'domestic' | 'overseas')"""
return get_latest_articles(limit, category)
@app.get("/api/stock/indices")
def get_indices():
"""주요 지표(KOSPI 등) 실시간 크롤링 조회"""
return fetch_major_indices()
@app.post("/api/stock/scrap")
def trigger_scrap():
"""수동 스크랩 트리거"""
run_scraping_job()
return {"ok": True}
# --- Trading API (Windows Proxy) ---
@app.get("/api/trade/balance")
def get_balance():
"""계좌 잔고 조회 (Windows AI Server Proxy)"""
print(f"[Proxy] Requesting Balance from {WINDOWS_AI_SERVER_URL}...")
try:
resp = requests.get(f"{WINDOWS_AI_SERVER_URL}/trade/balance", timeout=5)
if resp.status_code != 200:
print(f"[ProxyError] Balance Error: {resp.status_code} {resp.text}")
return JSONResponse(status_code=resp.status_code, content=resp.json())
print("[Proxy] Balance Success")
return resp.json()
except Exception as e:
print(f"[ProxyError] Connection Failed: {e}")
return JSONResponse(
status_code=500,
content={"error": "Connection Failed", "detail": str(e), "target": WINDOWS_AI_SERVER_URL}
)
class OrderRequest(BaseModel):
ticker: str  # stock code (e.g. "005930")
action: str  # "BUY" or "SELL"
quantity: int  # order quantity
price: int = 0  # 0 means market order
reason: Optional[str] = "Manual Order"  # order reason (for AI records)
@app.post("/api/trade/order")
def order_stock(req: OrderRequest):
"""주식 매수/매도 주문 (Windows AI Server Proxy)"""
print(f"[Proxy] Order Request: {req.dict()} to {WINDOWS_AI_SERVER_URL}...")
try:
resp = requests.post(f"{WINDOWS_AI_SERVER_URL}/trade/order", json=req.dict(), timeout=10)
if resp.status_code != 200:
print(f"[ProxyError] Order Error: {resp.status_code} {resp.text}")
return JSONResponse(status_code=resp.status_code, content=resp.json())
return resp.json()
except Exception as e:
print(f"[ProxyError] Order Connection Failed: {e}")
return JSONResponse(
status_code=500,
content={"error": "Connection Failed", "detail": str(e), "target": WINDOWS_AI_SERVER_URL}
)
@app.get("/api/version")
def version():
return {"version": os.getenv("APP_VERSION", "dev")}
# --- Portfolio API ---
class PortfolioItemRequest(BaseModel):
broker: str
ticker: str
name: str
quantity: int
avg_price: int
class PortfolioUpdateRequest(BaseModel):
broker: Optional[str] = None
ticker: Optional[str] = None
name: Optional[str] = None
quantity: Optional[int] = None
avg_price: Optional[int] = None
@app.get("/api/portfolio")
def get_portfolio():
"""전체 포트폴리오 조회 (현재가 + 손익 + 예수금 포함)"""
items = get_all_portfolio()
cash_rows = get_all_broker_cash()
total_cash = sum(r["cash"] for r in cash_rows)
if not items:
return {
"holdings": [],
"cash": cash_rows,
"summary": {
"total_buy": 0,
"total_eval": 0,
"total_profit": 0,
"total_profit_rate": 0.0,
"total_cash": total_cash,
"total_assets": total_cash,
},
}
tickers = list({item["ticker"] for item in items})
prices = get_current_prices(tickers)
holdings = []
total_buy = 0
total_eval = 0
for item in items:
current_price = prices.get(item["ticker"])
buy_amount = item["avg_price"] * item["quantity"]
eval_amount = current_price * item["quantity"] if current_price is not None else None
profit_amount = (eval_amount - buy_amount) if eval_amount is not None else None
profit_rate = round((profit_amount / buy_amount) * 100, 2) if (profit_amount is not None and buy_amount) else None
holdings.append({
"id": item["id"],
"broker": item["broker"],
"ticker": item["ticker"],
"name": item["name"],
"quantity": item["quantity"],
"avg_price": item["avg_price"],
"current_price": current_price,
"eval_amount": eval_amount,
"profit_amount": profit_amount,
"profit_rate": profit_rate,
})
total_buy += buy_amount
if eval_amount is not None:
total_eval += eval_amount
total_profit = total_eval - total_buy
total_profit_rate = round((total_profit / total_buy) * 100, 2) if total_buy else 0.0
return {
"holdings": holdings,
"cash": cash_rows,
"summary": {
"total_buy": total_buy,
"total_eval": total_eval,
"total_profit": total_profit,
"total_profit_rate": total_profit_rate,
"total_cash": total_cash,
"total_assets": total_eval + total_cash,
},
}
@app.post("/api/portfolio", status_code=201)
def create_portfolio_item(req: PortfolioItemRequest):
"""포트폴리오 종목 추가"""
item_id = add_portfolio_item(req.broker, req.ticker, req.name, req.quantity, req.avg_price)
return {"id": item_id, "ok": True}
# --- Broker Cash API ---
# These routes must be defined before the /{item_id} routes so /cash is not matched as an item_id
class BrokerCashRequest(BaseModel):
broker: str
cash: int
@app.get("/api/portfolio/cash")
def list_broker_cash():
"""증권사별 예수금 전체 조회"""
return get_all_broker_cash()
@app.put("/api/portfolio/cash")
def set_broker_cash(req: BrokerCashRequest):
"""증권사 예수금 등록 또는 수정 (upsert)"""
upsert_broker_cash(req.broker, req.cash)
return {"ok": True, "broker": req.broker, "cash": req.cash}
@app.delete("/api/portfolio/cash/{broker}")
def remove_broker_cash(broker: str):
"""증권사 예수금 삭제"""
if not delete_broker_cash(broker):
return JSONResponse(status_code=404, content={"error": "Broker not found"})
return {"ok": True}
@app.put("/api/portfolio/{item_id}")
def update_portfolio(item_id: int, req: PortfolioUpdateRequest):
"""포트폴리오 종목 수정"""
if get_portfolio_item(item_id) is None:
return JSONResponse(status_code=404, content={"error": "Item not found"})
update_portfolio_item(item_id, **req.dict())
return {"ok": True}
@app.delete("/api/portfolio/{item_id}")
def delete_portfolio(item_id: int):
"""포트폴리오 종목 삭제"""
if not delete_portfolio_item(item_id):
return JSONResponse(status_code=404, content={"error": "Item not found"})
return {"ok": True}
# --- Asset Snapshot API ---
@app.post("/api/portfolio/snapshot")
def create_snapshot():
"""총 자산 스냅샷 수동 저장 (오늘 날짜 기준)"""
today = date_type.today()
today_str = today.strftime("%Y-%m-%d")
items = get_all_portfolio()
cash_rows = get_all_broker_cash()
total_cash = sum(r["cash"] for r in cash_rows)
if items:
tickers = list({item["ticker"] for item in items})
prices = get_current_prices(tickers)
total_eval = sum(
prices.get(item["ticker"], item["avg_price"]) * item["quantity"]
for item in items
)
else:
total_eval = 0
total_assets = total_eval + total_cash
upsert_asset_snapshot(today_str, total_eval, total_cash, total_assets)
return {
"ok": True,
"snapshot": {
"date": today_str,
"total_eval": total_eval,
"total_cash": total_cash,
"total_assets": total_assets,
},
}
@app.get("/api/portfolio/snapshot/history")
def get_snapshot_history(days: int = Query(30, ge=0)):
"""총 자산 스냅샷 이력 조회 (days=0: 전체, days=N: 최근 N일)"""
snapshots = get_asset_snapshots(days)
return {"snapshots": snapshots}
# --- Sell History API ---
class SellHistoryRequest(BaseModel):
broker: str
ticker: str
name: str
quantity: int
avg_price: float
sell_price: float
buy_amount: float
sell_amount: float
realized_profit: float
realized_rate: float
sold_at: str
@app.get("/api/portfolio/sell-history")
def list_sell_history(broker: Optional[str] = None, days: Optional[int] = None):
"""매도 내역 조회 (broker, days 필터 선택)"""
records = get_sell_history(broker=broker, days=days)
return {"records": records}
@app.post("/api/portfolio/sell-history")
def create_sell_history(req: SellHistoryRequest):
"""매도 기록 저장"""
record = add_sell_history(req.model_dump())
return record
@app.put("/api/portfolio/sell-history/{record_id}")
def modify_sell_history(record_id: int, req: SellHistoryRequest):
"""매도 기록 수정"""
record = update_sell_history(record_id, req.model_dump())
if record is None:
return JSONResponse(status_code=404, content={"error": "not found"})
return record
@app.delete("/api/portfolio/sell-history/{record_id}")
def remove_sell_history(record_id: int):
"""매도 기록 삭제"""
if not delete_sell_history(record_id):
return JSONResponse(status_code=404, content={"error": "not found"})
return {"ok": True}


@@ -0,0 +1,68 @@
import time
import requests
from bs4 import BeautifulSoup
from typing import Optional
_cache: dict[str, tuple[Optional[int], float]] = {} # ticker -> (price, timestamp)
_CACHE_TTL = 180  # 3 minutes
_HEADERS = {
"User-Agent": (
"Mozilla/5.0 (Windows NT 10.0; Win64; x64) "
"AppleWebKit/537.36 (KHTML, like Gecko) "
"Chrome/90.0.4430.93 Safari/537.36"
)
}
def _fetch_from_mobile_api(ticker: str) -> Optional[int]:
"""네이버 모바일 주식 API로 현재가 조회"""
url = f"https://m.stock.naver.com/api/stock/{ticker}/basic"
try:
resp = requests.get(url, headers=_HEADERS, timeout=5)
resp.raise_for_status()
data = resp.json()
price_str = data.get("closePrice") or data.get("stockEndPrice") or ""
price_str = str(price_str).replace(",", "").strip()
return int(price_str) if price_str.isdigit() else None
except Exception:
return None
def _fetch_from_html_fallback(ticker: str) -> Optional[int]:
"""네이버 금융 HTML 폴백 (.no_today .blind 파싱)"""
url = f"https://finance.naver.com/item/main.naver?code={ticker}"
try:
resp = requests.get(url, headers=_HEADERS, timeout=5)
resp.raise_for_status()
soup = BeautifulSoup(resp.content, "html.parser", from_encoding="cp949")
tag = soup.select_one(".no_today .blind")
if tag:
price_str = tag.get_text(strip=True).replace(",", "")
return int(price_str) if price_str.isdigit() else None
return None
except Exception:
return None
def get_current_price(ticker: str) -> Optional[int]:
"""단건 현재가 조회 (3분 캐시)"""
now = time.time()
cached = _cache.get(ticker)
if cached and (now - cached[1]) < _CACHE_TTL:
return cached[0]
price = _fetch_from_mobile_api(ticker)
if price is None:
price = _fetch_from_html_fallback(ticker)
_cache[ticker] = (price, now)
return price
def get_current_prices(tickers: list[str]) -> dict[str, Optional[int]]:
"""배치 현재가 조회 (캐시 미스 종목만 실제 호출)"""
result: dict[str, Optional[int]] = {}
for ticker in tickers:
result[ticker] = get_current_price(ticker)
return result
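A minimal usage sketch for the price module above; the import path is a guess, since the listing does not show the file's name:

from app.price import get_current_price, get_current_prices  # hypothetical path

p1 = get_current_price("005930")  # network fetch: mobile API, then HTML fallback
p2 = get_current_price("005930")  # served from the in-process cache (< 180s old)
assert p1 == p2
print(get_current_prices(["005930", "000660"]))  # batch: only cache misses fetch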

stock-lab/app/scraper.py

@@ -0,0 +1,278 @@
import requests
from bs4 import BeautifulSoup
from typing import List, Dict, Any
import time
# Naver Finance main news
NAVER_FINANCE_NEWS_URL = "https://finance.naver.com/news/mainnews.naver"
# Overseas market news (fetched via the mobile API)
# NAVER_FINANCE_WORLD_NEWS_URL is no longer used.
# Overseas market main page (for indices)
NAVER_FINANCE_WORLD_URL = "https://finance.naver.com/world/"
def fetch_market_news() -> List[Dict[str, str]]:
"""
    Crawl Naver Finance 'main news'.
    Returns: [{"title": "...", "link": "...", "summary": "...", "date": "..."}, ...]
"""
try:
headers = {
"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/90.0.4430.93 Safari/537.36"
}
resp = requests.get(NAVER_FINANCE_NEWS_URL, headers=headers, timeout=10)
resp.raise_for_status()
soup = BeautifulSoup(resp.content, "html.parser", from_encoding="cp949")
        # Extract the main news list
        # Structure: div.mainNewsList > ul > li
articles = []
news_list = soup.select(".mainNewsList ul li")
for li in news_list:
            # A thumbnail may or may not be present
dl = li.select_one("dl")
if not dl:
continue
            # Title (dd.articleSubject > a)
subject_tag = dl.select_one(".articleSubject a")
if not subject_tag:
continue
title = subject_tag.get_text(strip=True)
link = "https://finance.naver.com" + subject_tag["href"]
            # Summary (dd.articleSummary)
summary_tag = dl.select_one(".articleSummary")
summary = ""
press = ""
date = ""
if summary_tag:
                # Capture press/date, then strip those tags from the summary
for child in summary_tag.select(".press, .wdate"):
if "press" in child.get("class", []):
press = child.get_text(strip=True)
if "wdate" in child.get("class", []):
date = child.get_text(strip=True)
child.extract()
summary = summary_tag.get_text(strip=True)
articles.append({
"title": title,
"link": link,
"summary": summary,
"press": press,
"date": date,
"crawled_at": time.strftime("%Y-%m-%d %H:%M:%S"),
"category": "domestic"
})
return articles
except Exception as e:
print(f"[StockLab] Scraping failed: {e}")
return []
def fetch_overseas_news() -> List[Dict[str, str]]:
"""
    Crawl Naver Finance overseas market news (via the mobile API)
"""
api_url = "https://api.stock.naver.com/news/overseas/mainnews"
try:
headers = {
"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko)"
}
resp = requests.get(api_url, headers=headers, timeout=10)
resp.raise_for_status()
data = resp.json()
if isinstance(data, list):
items = data
else:
items = data.get("result", [])
articles = []
for item in items:
            # Map API keys (subject/title/tit, summary/subContent/sub_tit, etc.)
title = item.get("subject") or item.get("title") or item.get("tit") or ""
summary = item.get("summary") or item.get("subContent") or item.get("sub_tit") or ""
press = item.get("officeName") or item.get("office_name") or item.get("cp_name") or ""
            # Format the date (20260126123000 -> 2026-01-26 12:30:00)
raw_dt = str(item.get("dt", ""))
if len(raw_dt) == 14:
date = f"{raw_dt[:4]}-{raw_dt[4:6]}-{raw_dt[6:8]} {raw_dt[8:10]}:{raw_dt[10:12]}:{raw_dt[12:]}"
else:
date = raw_dt
            # Build the article link
aid = item.get("articleId")
oid = item.get("officeId")
link = f"https://m.stock.naver.com/worldstock/news/read/{oid}/{aid}"
articles.append({
"title": title,
"link": link,
"summary": summary,
"press": press,
"date": date,
"crawled_at": time.strftime("%Y-%m-%d %H:%M:%S"),
"category": "overseas"
})
return articles
except Exception as e:
print(f"[StockLab] Overseas news failed: {e}")
return []
def fetch_major_indices() -> Dict[str, Any]:
"""
    Major indices: KOSPI, KOSDAQ, KOSPI200, etc. (Naver Finance home)
"""
url = "https://finance.naver.com/"
headers = {
"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko)"
}
try:
targets = [
{"key": "KOSPI", "selector": ".kospi_area", "url": "https://finance.naver.com/"},
{"key": "KOSDAQ", "selector": ".kosdaq_area", "url": "https://finance.naver.com/"},
{"key": "KOSPI200", "selector": ".kospi200_area", "url": "https://finance.naver.com/"},
]
        # Overseas indices do not appear on the Naver Finance home page; they have
        # to be scraped from https://finance.naver.com/world/ with an extra request,
        # which is made below so everything is handled in one function.
indices = []
        # --- Domestic ---
resp_kr = requests.get("https://finance.naver.com/", headers=headers, timeout=5)
soup_kr = BeautifulSoup(resp_kr.content, "html.parser", from_encoding="cp949")
for t in targets:
area = soup_kr.select_one(t["selector"])
if not area: continue
            # (existing parsing logic)
num_tag = area.select_one(".num")
value = num_tag.get_text(strip=True) if num_tag else ""
change_val_tag = area.select_one(".num2")
change_pct_tag = area.select_one(".num3")
change_val = change_val_tag.get_text(strip=True) if change_val_tag else ""
change_pct = change_pct_tag.get_text(strip=True) if change_pct_tag else ""
direction = ""
if area.select_one(".bu_p"): direction = "red"
elif area.select_one(".bu_m"): direction = "blue"
indices.append({
"name": t["key"],
"value": value,
"change_value": change_val,
"change_percent": change_pct,
"direction": direction,
"type": "domestic"
})
        # --- Overseas (DJI, NAS, SPI) ---
try:
resp_world = requests.get(NAVER_FINANCE_WORLD_URL, headers=headers, timeout=5)
soup_world = BeautifulSoup(resp_world.content, "html.parser", from_encoding="cp949")
world_targets = [
{"key": "DJI", "name": "다우산업", "sym": "DJI@DJI"},
{"key": "NAS", "name": "나스닥", "sym": "NAS@IXIC"},
{"key": "SPI", "name": "S&P500", "sym": "SPI@SPX"},
]
for wt in world_targets:
                # Locate it via the symbol link (most reliable)
a_tag = soup_world.select_one(f"a[href*='symbol={wt['sym']}']")
if not a_tag:
continue
                # Find the parent dl tag
dl = a_tag.find_parent("dl")
if not dl:
continue
                # Parse the values (dd.point_status)
status_dd = dl.select_one("dd.point_status")
if not status_dd:
continue
                # 1. Current value (strong)
val_tag = status_dd.select_one("strong")
value = val_tag.get_text(strip=True) if val_tag else ""
                # 2. Change amount (em)
change_val_tag = status_dd.select_one("em")
change_val = change_val_tag.get_text(strip=True) if change_val_tag else ""
                # 3. Change percent (span)
change_pct_tag = status_dd.select_one("span")
change_pct = change_pct_tag.get_text(strip=True) if change_pct_tag else ""
                # 4. Direction (from the dl classes)
direction = ""
dl_classes = dl.get("class", [])
if "point_up" in dl_classes:
direction = "red"
elif "point_dn" in dl_classes:
direction = "blue"
indices.append({
"name": wt["name"], # 한글 이름 사용
"value": value,
"change_value": change_val,
"change_percent": change_pct,
"direction": direction,
"type": "overseas"
})
except Exception as e:
print(f"[StockLab] World indices failed: {e}")
        # --- Exchange rate (USD/KRW) ---
try:
resp_ex = requests.get("https://finance.naver.com/marketindex/", headers=headers, timeout=5)
soup_ex = BeautifulSoup(resp_ex.content, "html.parser", from_encoding="cp949")
usd_item = soup_ex.select_one("#exchangeList li.on > a.head.usd")
if usd_item:
value = usd_item.select_one(".value").get_text(strip=True)
change_val = usd_item.select_one(".change").get_text(strip=True)
                # Direction from the blind text ("상승" = up, "하락" = down)
direction = ""
blind_txt = usd_item.select_one(".blind").get_text(strip=True)
if "상승" in blind_txt: direction = "red"
elif "하락" in blind_txt: direction = "blue"
                # The change percent does not appear in the main list (only on the
                # detail page), so leave it blank here for UI consistency.
indices.append({
"name": "원달러 환율",
"value": value,
"change_value": change_val,
"change_percent": "", # 메인 리스트에서 바로 안보임
"direction": direction,
"type": "exchange"
})
except Exception as e:
print(f"[StockLab] Exchange rate failed: {e}")
return {"indices": indices, "crawled_at": time.strftime("%Y-%m-%d %H:%M:%S")}
except Exception as e:
print(f"[StockLab] Indices scraping failed: {e}")
return {"indices": [], "error": str(e)}


@@ -0,0 +1,7 @@
# Libraries for the stock service
requests==2.32.3
beautifulsoup4==4.12.3
fastapi==0.115.6
uvicorn[standard]==0.30.6
apscheduler==3.10.4
python-dotenv==1.0.1


@@ -20,3 +20,7 @@ EXPOSE 8000
 ENV PYTHONUNBUFFERED=1
 CMD ["uvicorn", "app.main:app", "--host", "0.0.0.0", "--port", "8000"]
+ARG APP_VERSION=dev
+ENV APP_VERSION=$APP_VERSION
+LABEL org.opencontainers.image.version=$APP_VERSION


@@ -149,17 +149,20 @@ def scan_album(album: str) -> List[Dict[str, Any]]:
         return []
     items = []
-    for p in album_dir.iterdir():
-        if p.is_file() and p.suffix.lower() in IMAGE_EXT:
-            # ✅ Make sure the thumbnail exists
-            ensure_thumb(p, album)
-            items.append({
-                "album": album,
-                "file": p.name,
-                "url": f"{MEDIA_BASE}/{album}/{p.name}",
-                "thumb": f"{MEDIA_BASE}/.thumb/{album}/{p.name}",
-            })
+    # os.scandir is faster than iterdir (fetches file attributes in one pass)
+    with os.scandir(album_dir) as entries:
+        for entry in entries:
+            if entry.is_file() and Path(entry.name).suffix.lower() in IMAGE_EXT:
+                # ⚡ Performance-critical: never generate thumbnails (ensure_thumb) here!
+                # Only confirm the file exists and return right away
+                items.append({
+                    "album": album,
+                    "file": entry.name,
+                    "url": f"{MEDIA_BASE}/{album}/{entry.name}",
+                    "thumb": f"{MEDIA_BASE}/.thumb/{album}/{entry.name}",
+                    # Stored here in case sorting needs mtime/name later
+                    "mtime": entry.stat().st_mtime
+                })
     return items
 
 # -----------------------------
@@ -170,44 +173,69 @@ def regions():
     _meta_changed_invalidate_cache()
     return load_regions_geojson()
 
+@app.post("/api/travel/reload")
+def reload_cache():
+    """Force-clear the caches to trigger a rescan"""
+    CACHE.clear()
+    META_MTIME_CACHE.clear()
+    return {"ok": True, "message": "Cache cleared"}
+
 @app.get("/api/travel/photos")
 def photos(
     region: str = Query(...),
-    limit: int = Query(500, le=5000),
+    page: int = Query(1, ge=1),
+    size: int = Query(20, ge=1, le=100),
 ):
     _meta_changed_invalidate_cache()
+    # 1. The cache key does not include the page (strategy: cache the full list,
+    #    then slice), because file scanning is expensive while in-memory slicing is cheap.
     now = time.time()
     cached = CACHE.get(region)
-    if cached and now - cached["ts"] < CACHE_TTL:
-        return cached["data"]
-
-    region_map = load_region_map()
-    albums = _get_albums_for_region(region, region_map)
-    all_items = []
-    matched = []
-
-    for album in albums:
-        items = scan_album(album)
-        matched.append({"album": album, "count": len(items)})
-        all_items.extend(items)
-
-    all_items.sort(key=lambda x: (x["album"], x["file"]))
-
-    data = {
-        "region": region,
-        "matched_albums": matched,
-        "items": all_items[:limit],
-        "total": len(all_items),
-        "cached_at": int(now),
-        "cache_ttl": CACHE_TTL,
-    }
-    CACHE[region] = {"ts": now, "data": data}
-    return data
+    if not cached or now - cached["ts"] >= CACHE_TTL:
+        region_map = load_region_map()
+        albums = _get_albums_for_region(region, region_map)
+        all_items = []
+        matched = []
+        for album in albums:
+            items = scan_album(album)
+            matched.append({"album": album, "count": len(items)})
+            all_items.extend(items)
+        # Sort: album name > file name (or capture date)
+        all_items.sort(key=lambda x: (x["album"], x["file"]))
+        cached_data = {
+            "region": region,
+            "matched_albums": matched,
+            "items": all_items,
+            "total": len(all_items),
+        }
+        CACHE[region] = {"ts": now, "data": cached_data}
+    else:
+        cached_data = cached["data"]
+
+    # 2. Pagination slicing
+    all_items = cached_data["items"]
+    total = len(all_items)
+    start = (page - 1) * size
+    end = start + size
+    paged_items = all_items[start:end]
+
+    return {
+        "region": region,
+        "page": page,
+        "size": size,
+        "total": total,
+        "has_next": end < total,
+        "items": paged_items,
+    }
 
 @app.get("/media/travel/.thumb/{album}/{filename}")
 def get_thumb(album: str, filename: str):
@@ -218,3 +246,8 @@ def get_thumb(album: str, filename: str):
     # Create/verify the thumb from src (keep the original extension)
     p = ensure_thumb(src, album)
     return FileResponse(str(p))
+
+@app.get("/api/version")
+def version():
+    import os
+    return {"version": os.getenv("APP_VERSION", "dev")}