
chore: stop tracking action-plans and docs

Dave 1 month ago
parent
commit
d15f4aaa3e

+ 0 - 41
action-plans/20251220-ACT-01.md

@@ -1,41 +0,0 @@
-# Redis cache keys
-
-## In active use
-
-1. box:app:video:category:list:{categoryId} – LIST of video IDs pulled from Mongo’s videoMedia where categoryIds contains the category and status is Completed; writer is VideoCategoryCacheBuilder (video-category-cache.builder.ts (line 27)), and readers are VideoService.getVideoList/getVideosByCategoryListFallback (video.service.ts (line 824) and (line 305)). A read-path sketch for this key follows this list.
-
-2. box:app:video:tag:list:{categoryId}:{tagId} – LIST of video IDs filtered by tag; same builder (video-category-cache.builder.ts (line 251)) and VideoService.getVideosByTagListFallback/searchVideosByTagName (video.service.ts (line 400) and (line 1200)).
-
-3. box:app:tag:list:{categoryId} – LIST of stringified tag objects (id, name, seq, status, timestamps, categoryId); writer VideoCategoryCacheBuilder (video-category-cache.builder.ts (line 338)), readers VideoService.getTagListForCategory and getCategoriesWithTags (video.service.ts (line 169) and (line 740)).
-
-4. app:video:list:category:{channelId}:{categoryId}:latest – ZSET of video IDs scored by editedAt/updatedAt; writer VideoListCacheBuilder (video-list-cache.builder.ts (line 85)), reader VideoService.getVideosByCategoryWithPaging (video.service.ts (line 230)).
-
-5. app:video:list:tag:{channelId}:{tagId}:latest – ZSET of tag-specific video IDs with the same scoring source; writer VideoListCacheBuilder (video-list-cache.builder.ts (line 138)), reader VideoService.getVideosByTagWithPaging (video.service.ts (line 400)).
-
-6. app:video:list:home:{channelId}:{section} – LIST of the latest top-N video IDs for each section (featured/latest/editorPick); builder exists (video-list-cache.builder.ts (line 190)) but buildHomeSectionsForChannel is never invoked in buildAll (calls are commented out at (line 65)), so the key is expected by VideoService.getHomeSectionVideos (video.service.ts (line 600)) but currently empty unless manually populated.
-
-7. app:category:all – JSON array of all categories plus their tag names (id, name, subtitle, seq, tags array from tag table); writer CategoryCacheBuilder (category-cache.builder.ts (line 1)), reader VideoService.getCategoriesWithTags (video.service.ts (line 740)).
-
-8. app:video:recommended – JSON array of sampled completed videos mapped into RecommendedVideoItem; writer RecommendedVideosCacheBuilder (recommended-videos-cache.builder.ts (line 30)), reader VideoService.getRecommendedVideos (video.service.ts (line 1532)).
-
-9. app:adpool:{adType} – JSON array of AdPayload entries populated from the Mongo ads and adsModule tables (status=1, valid date range, ordered by seq); writer AdPoolService.rebuildPoolForType (ad-pool.service.ts (line 59)), reader AdService.getAdForPlacement/listAdsByType (ad.service.ts (line 80)).
-
-10. app:ad:by-id:{adId} – JSON ad payload (id, advertiser, title, content/url/cover, adType) with TTL 5 minutes; writer AdCacheWarmupService (ad-cache-warmup.service.ts (line 18)), reader AdService.getAdForPlacement (same file region).
-
-11. app:video:detail:{videoId} – referenced by VideoService.getVideoDetail (video.service.ts (line 100)) but no current writer exists in the repo, so either the key is stale/empty or needs a fresh builder as part of the redesign.
-
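-As a reference point for the redesign, here is a minimal read-path sketch for key 1; it is not code from the repo, and the helper name, client handles, and exact Prisma filter are assumptions based on the notes above.
-
-```typescript
-import Redis from 'ioredis';
-
-// Hypothetical read-through helper for box:app:video:category:list:{categoryId}.
-// `prisma` stands in for the generated Mongo client; model/field names follow
-// the description above and are not verified against the repo.
-async function getVideoIdsForCategory(
-  redis: Redis,
-  prisma: { videoMedia: { findMany: (args: unknown) => Promise<{ id: string }[]> } },
-  categoryId: string,
-): Promise<string[]> {
-  const key = `box:app:video:category:list:${categoryId}`;
-
-  // 1) Try the prebuilt Redis LIST first.
-  const cached = await redis.lrange(key, 0, -1);
-  if (cached.length > 0) return cached;
-
-  // 2) Fallback: query Mongo with the filter described above
-  //    (categoryIds contains categoryId, status = Completed).
-  const rows = await prisma.videoMedia.findMany({
-    where: { categoryIds: { has: categoryId }, status: 'Completed' },
-    select: { id: true },
-  });
-  const ids = rows.map((r) => r.id);
-
-  // 3) Repopulate the list so the next read is a cache hit.
-  if (ids.length > 0) {
-    await redis.del(key);
-    await redis.rpush(key, ...ids);
-  }
-  return ids;
-}
-```
-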
-## “Ghost” keys (no consumer/endpoints)
-
-1. app:channel:all, app:channel:by-id:{channelId}, app:channel:with-categories:{channelId} – defined in CacheKeys (cache-keys.ts (line 18)) and written by ChannelCacheBuilder (channel-cache.builder.ts (line 24)), but no controller/service reads them today.
-
-2. app:category:{categoryId} and app:category:by-id:{categoryId} – written alongside app:category:all (category-cache.builder.ts (line 18)) but only accessed by CategoryCacheService (category-cache.service.ts (line 1)), which isn’t injected anywhere in the workspace.
-3. app:category:with-tags:{categoryId} – defined for tag refresh semantics (cache-keys.ts (line 36)) and even invalidated (cache-sync.service.ts (line 672)), yet no builder populates it and no consumer reads it.
-4. app:tag:all – built by TagCacheBuilder (tag-cache.builder.ts (line 1)) but only exposed through TagCacheService (tag-cache.service.ts (line 1)), which is never used.
-5. Legacy list keys app:videolist:home:page:{page} and app:videolist:channel:{channelId}:page:{page} (cache-keys.ts (line 61)) are defined but not written to or read from anywhere today.
-
-## Redis cache key redesign
-
-- Inventory every actively used key: box:app:video:category:list:{categoryId}, box:app:video:tag:list:{categoryId}:{tagId}, box:app:tag:list:{categoryId}, app:video:list:category:{channelId}:{categoryId}:latest, app:video:list:tag:{channelId}:{tagId}:latest, app:video:list:home:{channelId}:{section}, app:category:all, app:video:recommended, app:adpool:{adType}, app:ad:by-id:{adId}, plus note app:video:detail:{videoId} has no writer now (video.service.ts (line 100)).
-- Capture “ghost” keys still defined in cache-keys.ts and their unused consumers (`app:channel:*`, `app:category:*`, `app:category:with-tags:*`, `app:tag:all`, legacy paginated video list keys).
-- Document content schemas and sources (Mongo tables/fields) alongside the services writing/reading each key to guide the redesign without breaking flows.
-- Schedule verification for keys whose builders are currently unused/commented (e.g., VideoListCacheBuilder.buildTagPoolsForChannel and buildHomeSectionsForChannel) before changing namespaces or semantics.

+ 0 - 49
action-plans/20251220-ACT-02.md

@@ -1,49 +0,0 @@
-# 2025-12-20 ACT-02: MySQL → Mongo naming alignment
-
-## Objectives
-
-1. Keep existing `prisma/mysql/schema` files intact except for the explicitly deleted schema.
-2. Align Prisma/Mongo naming so Mongo side reads `sys_*` but retains the original MySQL table names.
-3. Drop unused `ImageConfig` schema once business confirms no dependency.
-
-## Tasks
-
-### A. Schema naming / retention
-
-- Keep the existing MySQL Prisma files (`user.prisma`, `role.prisma`, etc.) under `prisma/mysql/schema`.
-- When creating the Mongo-facing equivalents, rename as follows (with `@@map` to the original table names):
-  - `sys-user.prisma` → maps to `sys_user`.
-  - `sys-role.prisma` → `sys_role`.
-  - `sys-user-role.prisma` → `sys_user_role`.
-  - `sys-menu.prisma` → `sys_menu`.
-  - `sys-role-menu.prisma` → `sys_role_menu`.
-  - `sys-api-permission.prisma` → `sys_api_permission`.
-  - `sys-role-api-permission.prisma` → `sys_role_api_permission`.
-  - `sys-operation-log.prisma` → `sys_operation_log`.
-  - `sys-login-log.prisma` → `sys_login_log`.
-  - `sys-cache-sync-action.prisma` → `sys_cacheSyncAction`.
-
-### B. Additional collection splits
-
-- Extract `ProviderVideoSync` from `prisma/mysql/schema/main.prisma` into a dedicated `sys-providerVideoSync.prisma` whose `@@map` is `sys_providerVideoSync`.
-- Ensure `cache-sync-action.prisma` becomes `sys-cache-sync-action.prisma` while mapping the table to `sys_cacheSyncAction` and preserving BigInt PK/indices.
-- Do not migrate `sys_quota_log`; keep the MySQL schema file untouched and leave it out of Mongo plan.
-
-### C. Clean-up rules
-
-- Delete only `image-config.prisma` to remove the unused `image_config` table once the service no longer depends on it; do not touch other MySQL schema files.
-- Update documentation to reflect new Mongo schema filenames and their MySQL table mappings.
-
-### D. Verification & follow-up
-
-- Draft a migration checklist enumerating which mgnt services still depend on each schema to ensure feature parity.
-- Schedule follow-up on Mongo auto-increment/counters once naming is stabilized.
-
-### E. Coding action plan
-
-1. Add new Mongo-focused Prisma schema files under `prisma/mongo/schema/` using the `sys-*.prisma` names listed above, each importing the matching model definition and adding `@@map("<original_table>")` to preserve the existing table name; keep the existing MySQL files untouched for reference/use.
-2. For `ProviderVideoSync`, move the model definition out of `prisma/mysql/schema/main.prisma` into `prisma/mongo/schema/sys-providerVideoSync.prisma` and ensure it maps to `sys_providerVideoSync`; update any import/usage in the Mongo Prisma client config if needed.
-3. Replace `cache-sync-action.prisma` with `prisma/mongo/schema/sys-cache-sync-action.prisma` that maps to `sys_cacheSyncAction` while retaining the `id`, indexes, and payload JSON semantics; update Mongo Prisma client instantiation to load the new schema file.
-4. Keep `sys_quota_log` only in MySQL schema (document it as excluded) and ensure no routing/logic tries to read the Mongo counterpart.
-5. Delete `prisma/mysql/schema/image-config.prisma` as agreed; remove any references in documentation or migration scripts so future exports do not include the `image_config` table.
-6. Update `docs/MGNT_MYSQL_TO_MONGO_MIGRATION.md` or related migration notes with the new filenames and mappings, plus the coding steps above so the implementation story is clear for tomorrow’s work.

+ 0 - 108
action-plans/20251222-ACT-01.md

@@ -1,108 +0,0 @@
-## Assumptions we’ll stick to
-
-- Provider video metadata is **used as-is** (including `updatedAt: DateTime`).
-- Ads cache keys are `box:app:adpool:<AdType>` and Ads payload is already slimmed (status=1, seq asc).
-- We won’t add DB fields unless we’re doing the User migration step.
-
----
-
-## Tomorrow Action Plan
-
-### Phase 0 — Prep and guardrails
-
-1. **Confirm API contract snapshots**
-   - Current login response shape
-   - Current video endpoints response shape (even if incomplete)
-
-2. **Confirm Redis key spec and payload contracts**
-   - Ads cache keys + fields
-   - Video cache keys + fields (use provider fields as-is)
-
-Deliverable: quick “contract notes” doc/comment so we don’t drift while implementing multiple endpoints.
-
----
-
-### Phase 1 — Login returns Ads (highest leverage)
-
-3. **Update login flow** to return Ads payload (likely home-related adTypes)
-   - Decide which `AdType`s are returned on login (e.g. STARTUP, BANNER, POPUP_*)
-   - Read from Redis first (`box:app:adpool:<AdType>`), DB fallback if missing, then rebuild cache
-   - Ensure returned Ads are already filtered (status=1) and sorted by seq asc
-
-Deliverable: login response includes `ads` (grouped by adType or whatever your app contract expects).
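-
-The Redis-first read described above might look roughly like this; the helper name, the `AdType` values passed in, and the rebuild hook are assumptions, with only the key format and the status=1 / seq-asc contract taken from this plan.
-
-```typescript
-// Hypothetical shape of the login-time ads lookup. `rebuildPoolForType` stands
-// in for the existing ad pool rebuild; handles are typed structurally so the
-// sketch stays self-contained.
-type AdPayload = { id: string; adType: string; seq: number; status: number };
-
-async function getAdsForLogin(
-  redis: { get: (key: string) => Promise<string | null> },
-  rebuildPoolForType: (adType: string) => Promise<AdPayload[]>,
-  adTypes: string[],
-): Promise<Record<string, AdPayload[]>> {
-  const result: Record<string, AdPayload[]> = {};
-  for (const adType of adTypes) {
-    // Redis first.
-    const raw = await redis.get(`box:app:adpool:${adType}`);
-    let pool: AdPayload[] = raw ? (JSON.parse(raw) as AdPayload[]) : [];
-    // DB fallback plus cache rebuild when the pool is missing.
-    if (pool.length === 0) {
-      pool = await rebuildPoolForType(adType);
-    }
-    // Pools should already be filtered (status=1) and sorted by seq asc;
-    // re-assert it so the login contract holds even on the fallback path.
-    result[adType] = pool
-      .filter((ad) => ad.status === 1)
-      .sort((a, b) => a.seq - b.seq);
-  }
-  return result;
-}
-```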
-
----
-
-### Phase 2 — Video listing/search endpoints (build foundation once, reuse everywhere)
-
-4. **`/api/v1/video/list`**
-   - Define baseline filters + pagination (current behavior, don’t invent new ones)
-   - Implement Redis list/payload strategy if we’re ready; otherwise DB-first with safe caching behind it
-
-5. **`/api/v1/video/category/`**
-   - Category list endpoint should reuse the same “list IDs → fetch payloads” pattern
-   - Stable sorting and pagination
-
-6. **`/api/v1/video/search-by-tag`**
-   - Define tag input: `tagId` / tag name / categoryId constraints
-   - Implement query + caching (tag list key) and reuse payload key
-
-Deliverable: list/search endpoints are consistent and share the same internal list-builder + payload fetcher.
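-
-One way to express that shared pattern, with illustrative names only (the interfaces are not taken from the codebase):
-
-```typescript
-// A list source resolves an ordered page of IDs (e.g. from a Redis LIST/ZSET
-// per category or tag key); a payload fetcher hydrates them (payload cache
-// with a DB fallback). Every listing/search endpoint composes the two.
-interface VideoListSource {
-  getIds(page: number, pageSize: number): Promise<string[]>;
-}
-
-interface VideoPayloadFetcher {
-  getByIds(ids: string[]): Promise<Array<{ id: string } & Record<string, unknown>>>;
-}
-
-async function listVideos(
-  source: VideoListSource,
-  payloads: VideoPayloadFetcher,
-  page: number,
-  pageSize: number,
-) {
-  const ids = await source.getIds(page, pageSize);
-  const items = await payloads.getByIds(ids);
-  // Preserve list order even if the payload store returns items unordered.
-  const byId = new Map(items.map((v) => [v.id, v] as const));
-  return ids
-    .map((id) => byId.get(id))
-    .filter((v): v is NonNullable<typeof v> => v !== undefined);
-}
-```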
-
----
-
-### Phase 3 — Recommendation endpoints (depends on Phase 2)
-
-7. **`/api/v1/video/recommended`**
-   - Add **optional parameters** (only the ones you explicitly want)
-     - Example buckets: `categoryId`, `tagId`, `country`, `excludeIds`, `limit`
-
-   - Implement “better results” logic without breaking old callers (default behavior unchanged)
-
-8. **`/api/v1/recommendation/videos/{videoId}/similar`** _(fix the typo: you wrote `/api/v1/api/v1/...`)_
-   - Similarity rules: tag overlap / same category / same country / exclude same id
-   - Reuse existing list/payload pattern
-   - Ensure deterministic ordering
-
-Deliverable: recommended + similar endpoints work and don’t duplicate query logic.
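-
-An illustrative scoring pass for those rules (weights, the candidate source, and field names are assumptions; only the rules themselves come from this plan):
-
-```typescript
-// Rank candidates by tag overlap, same category, and same country, excluding
-// the seed video, with a stable tie-breaker for deterministic ordering.
-type VideoLite = { id: string; categoryIds: string[]; tagIds: string[]; country?: string };
-
-function rankSimilar(seed: VideoLite, candidates: VideoLite[], limit = 10): VideoLite[] {
-  const seedTags = new Set(seed.tagIds);
-  const seedCategories = new Set(seed.categoryIds);
-
-  return candidates
-    .filter((c) => c.id !== seed.id) // exclude the current video
-    .map((c) => {
-      const tagOverlap = c.tagIds.filter((t) => seedTags.has(t)).length;
-      const sameCategory = c.categoryIds.some((id) => seedCategories.has(id)) ? 1 : 0;
-      const sameCountry = c.country && c.country === seed.country ? 1 : 0;
-      return { video: c, score: tagOverlap * 3 + sameCategory * 2 + sameCountry };
-    })
-    // Deterministic ordering: score desc, then id asc as a stable tie-breaker.
-    .sort((a, b) => b.score - a.score || a.video.id.localeCompare(b.video.id))
-    .slice(0, limit)
-    .map((r) => r.video);
-}
-```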
-
----
-
-### Phase 4 — Prisma migration: User from box-stats → box-admin
-
-9. **Model migration design**
-   - Identify current `User` schema in **box-stats** and all places it’s referenced
-   - Decide target schema in **box-admin** (migration-safe; avoid breaking fields)
-
-10. **Data migration strategy**
-    - One-time migration script: copy users + indexes
-    - Dual-read / cutover plan if needed (keep backward compatibility)
-    - “Optimize may be required”: focus on indexes + query hotspots after migration
-
-Deliverable: User model lives in box-admin, references updated, and box-stats dependency reduced/removed safely.
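-
-A minimal one-time copy sketch, assuming a generated Prisma client per database and a compatible `User` shape on both sides (client paths, model names, batching, and verification are all simplified or left out here):
-
-```typescript
-// Copy users from the box-stats store into box-admin. The handles are typed
-// structurally, so no specific client import is implied by this sketch.
-async function copyUsers(
-  statsDb: { user: { findMany: (args?: unknown) => Promise<Record<string, unknown>[]> } },
-  adminDb: { user: { createMany: (args: { data: Record<string, unknown>[] }) => Promise<{ count: number }> } },
-): Promise<void> {
-  const users = await statsDb.user.findMany();
-  if (users.length === 0) return;
-  const { count } = await adminDb.user.createMany({ data: users });
-  // Dual-read / cutover follows: keep box-stats reads working until parity
-  // checks pass, then point references at box-admin.
-  console.log(`copied ${count}/${users.length} users`);
-}
-```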
-
----
-
-## Implementation order (recommended)
-
-1. Login returns Ads
-2. Video list
-3. Video category
-4. Search-by-tag
-5. Recommended (optional params)
-6. Similar videos
-7. Prisma User migration
-
----
-
-## Testing checklist (quick but effective)
-
-- Login response includes Ads and is stable even when Redis is empty
-- Video endpoints: pagination + ordering + empty results
-- Recommended/similar: excludes current videoId, respects optional params, no duplicates
-- Migration: User read/write still works after cutover; seed/admin login still ok
-
----
-
-If you paste your **current login response DTO** (or the controller return object), I’ll turn Phase 1 into a tight Codex prompt that edits the right files without guessing.

+ 0 - 36
docs/API_FLOW_DIAGRAMS.md

@@ -1,36 +0,0 @@
-# API Flow Diagrams (Maker–Checker Removed)
-
-## 1. Mgnt Update Flow (Immediate Publish)
-```mermaid
-sequenceDiagram
-  participant M as Mgnt User
-  participant MgntAPI
-  participant DB as MySQL/Mongo
-  participant Redis
-
-  M->>MgntAPI: Update content (ads/videos/channels)
-  MgntAPI->>DB: Save changes with audit trails
-  MgntAPI->>Redis: Update relevant keys
-  MgntAPI-->>M: Success, published immediately
-```
-
-## 2. Ad Placement Flow
-```mermaid
-sequenceDiagram
-  participant U as User
-  participant App as App API
-  participant Redis
-  participant Mongo
-
-  U->>App: GET /ads/placement
-  App->>Redis: GET cached ad pool
-  alt hit
-    Redis-->>App: pool
-    App-->>U: Ad response
-  else miss
-    App->>Mongo: Query ads
-    App->>Redis: Rebuild ad pool
-    App-->>U: Response
-  end
-```
-

+ 0 - 68
docs/ARCHITECTURE.md

@@ -1,68 +0,0 @@
-# Box System Architecture (Audit-Only, Immediate Publish)
-
-## 1. Overview
-The Box System is a high-performance video content and advertisement platform built using a NestJS monorepo. It consists of:
-- Management API (box-mgnt-api)
-- Application API (box-app-api)
-- Stats API (future: box-stats-api)
-- Shared libraries: common, core, db
-- Datastores: MySQL, MongoDB, Redis
-
-This updated architecture **removes Maker–Checker**, uses **audit-only trails**, and adopts **immediate publish** behavior.
-
-## 2. RBAC Roles
-- **SuperAdmin**: Full system access, including managing users and roles.
-- **Manager**: Manage content (ads, videos, channels), system params.
-- **Admin**: Similar to Manager, but with a more limited permission set.
-- **Viewer**: Read-only access.
-
-Only SuperAdmin can:
-- Create users
-- Assign roles
-- Change roles
-- Disable accounts
-
-## 3. Audit Trail Model
-Kept fields:
-- createdBy  
-- createdAt  
-- updatedBy  
-- updatedAt  
-- lastUpdated  
-
-These appear in MySQL and MongoDB content models.
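-
-As a sketch, the audit fields map onto a small shared interface; the name is illustrative, and the BigInt epoch type follows the project convention stated in the Codex docs:
-
-```typescript
-// Audit-trail fields shared by content models; timestamps are kept as
-// BigInt epoch values per the project convention.
-interface AuditTrail {
-  createdBy: string;
-  createdAt: bigint;
-  updatedBy: string;
-  updatedAt: bigint;
-  lastUpdated: bigint;
-}
-```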
-
-## 4. Data Flow
-### Mgnt API
-User updates → validation → save to DB → update Redis cache → immediate publish.
-
-### App API
-Requests go to Redis first:
-- Ads served from prebuilt pools
-- Videos served partly cached
-- Fallback to Mongo where applicable (future extension)
-
-### Stats API (Upcoming)
-Handles:
-- Event ingestion
-- Redis queue/stream
-- Workers writing aggregated stats
-
-## 5. Modules
-### Mgnt Modules
-- Ads  
-- Ads Module  
-- Videos  
-- Categories  
-- Channels  
-- Tags  
-- System Params  
-- Upload / File service  
-
-### App Modules
-- Ads placement  
-- Video listing  
-- Homepage aggregation  
-- System params  
-- Categories  
-

+ 0 - 102
docs/CODEX_CI_CHECKLIST.md

@@ -1,102 +0,0 @@
-# Codex + CI Checklist
-
-Goal: Use Codex as a powerful assistant **without** breaking the Box System or letting unreviewed AI code slip into production.
-
-This checklist is for every feature or refactor where Codex is involved.
-
----
-
-## 1. Before You Ask Codex
-
-- [ ] **Create a feature branch** from `main` or `develop`: - Example: `feature/auto-registration-device-code`.
-- [ ] **Confirm docs are up to date**: - `docs/ARCHITECTURE.md` - `docs/SYSTEM_OVERVIEW.md` - `docs/DEVELOPMENT_CHECKLIST.md` - `docs/ROADMAP.md`
-- [ ] **Clarify the task in your own words**: - What are you changing? - Which app? (`box-mgnt-api`, `box-app-api`, `box-stats-api`) - What are the constraints? (e.g. “no breaking changes”, “BigInt timestamps”)
-
----
-
-## 2. When Prompting Codex
-
-- [ ] **Mention the project context**: - “We are in `box-nestjs-monorepo` (NestJS + TypeScript, MySQL, Mongo, Redis).”
-- [ ] **Tell Codex to read the docs** referenced in `.codex/config.json`.
-- [ ] **Use a structured prompt** (from `docs/CODEX_PROMPTS.md`), including: - Architecture reference. - Tech stack. - Constraints: - TypeScript only. - NestJS patterns. - BigInt epoch for timestamps. - Avoid breaking changes.
-- [ ] **Ask for a plan before code**: - “Step 1: Summarize current design.” - “Step 2: Propose changes and list affected files.” - “Step 3: Wait for confirmation before generating TypeScript code.”
-
----
-
-## 3. Reviewing Codex’s Plan
-
-- [ ] Check the **alignment with architecture**: - Does the plan match `ARCHITECTURE.md` and `API_FLOW_DIAGRAMS.md`?
-- [ ] Check **scope vs roadmap**: - Is the change consistent with the current phase in `ROADMAP.md`?
-- [ ] Check **dependencies & impact**: - Which modules, services, DTOs, and DB schemas are affected?
-- [ ] If anything feels off: - [ ] Ask Codex to revise the plan. - [ ] Or narrow the scope.
-
-Do **not** accept auto-generated code if the plan itself looks wrong.
-
----
-
-## 4. Generating Code with Codex
-
-- [ ] Ask Codex to: - Generate TypeScript code **in small chunks** (one module/file at a time). - Clearly label which file each snippet belongs to.
-- [ ] Verify: - [ ] Uses TypeScript and NestJS decorators correctly. - [ ] Respects BigInt timestamps for date/time fields. - [ ] Uses existing conventions from `libs/common`, `libs/core`, `libs/db`.
-- [ ] Manually **paste code into files** (or let Codex apply edits, but still review them).
-
----
-
-## 5. Local Verification (Before Commit)
-
-- [ ] Run **TypeScript compilation**: - `pnpm build` or `pnpm lint` + `pnpm tsc` equivalent.
-- [ ] Run **linters & formatters**: - `pnpm lint` - `pnpm format` (if applicable)
-- [ ] Run **tests**: - `pnpm test` (or app-specific test commands)
-- [ ] If schema changed: - [ ] Run Prisma format & generate: - `pnpm prisma:format` - `pnpm prisma:generate` - [ ] Apply local migrations to dev DB (if needed).
-- [ ] Test **key endpoints** manually with curl/Postman: - Especially: - New/modified endpoints. - Auth-protected routes. - Redis-related flows (ad placement, stats events).
-
-If anything fails, fix it yourself or ask Codex for a focused correction (with error logs included in the prompt).
-
----
-
-## 6. Preparing the Pull Request
-
-- [ ] Create a **clear PR title**: - Example: `feat(app-api): auto-register users by device_code`.
-- [ ] In the PR description, include: - [ ] Summary of the change. - [ ] Which parts were generated or heavily assisted by Codex. - [ ] Link to **relevant docs** in `docs/` (architecture, roadmap). - [ ] Any potential risks or backward-compatibility notes.
-- [ ] Attach **testing evidence**: - [ ] Commands you ran (lint, test, build). - [ ] Screenshots/logs for manual endpoint tests if useful.
-
----
-
-## 7. Code Review (Human + Codex-Assisted)
-
-Reviewer checklist:
-
-- [ ] Understand the **functional goal** of the change.
-- [ ] Confirm architecture alignment: - Modules/controllers/services match design.
-- [ ] Check for: - [ ] Hardcoded secrets or credentials. - [ ] Direct DB access bypassing `libs/db` abstractions. - [ ] Inconsistent DTOs or response shapes. - [ ] Missing validation on inputs.
-- [ ] Check **error handling & logging**: - Are important failures logged in a structured way?
-- [ ] Optionally, use Codex again: - Ask it to review a specific file/method for: - Complexity. - Edge cases. - Missing null/undefined handling.
-
----
-
-## 8. CI Integration
-
-- [ ] Ensure your CI pipeline runs on each PR: - Lint - Tests - Build (optional)
-- [ ] If CI fails: - [ ] Fix locally. - [ ] Re-run tests. - [ ] Push updates.
-- [ ] Do **not** override CI protections **to merge failing PRs**, even if Codex generated the code.
-
----
-
-## 9. After Merge
-
-- [ ] Monitor logs/metrics in the relevant environment: - Error rates. - Latency. - Queue depth (for stats).
-- [ ] If issues appear: - [ ] Create a hotfix branch. - [ ] Optionally use Codex to assist in debugging, but: - Keep changes small and review carefully.
-
----
-
-## 10. Golden Rules
-
-1. Codex is a **power tool**, not an excuse to skip thinking.
-2. Docs in `docs/` are **the source of truth**; Codex must follow them.
-3. Never merge Codex-generated changes **without**:
-   - Passing tests.
-   - At least one human review.
-4. When in doubt:
-   - Reduce the scope.
-   - Add more tests.
-   - Ask Codex for a design explanation before touching code.

+ 0 - 18
docs/CODEX_ONBOARDING.md

@@ -1,18 +0,0 @@
-# Codex Onboarding (Audit-Only Version)
-
-Codex should follow:
-- Audit-only changes
-- Immediate publish workflows
-- RBAC roles: SuperAdmin, Manager, Admin, Viewer
-- No Maker–Checker logic
-
-When generating code:
-- Refer to architecture docs
-- Use TypeScript only
-- Maintain backward compatibility
-
-Codex should:
-1. Read docs in /docs  
-2. Analyze impacted modules  
-3. Propose design before code  
-4. Follow audit trail conventions  

+ 0 - 66
docs/CODEX_PROMPTS.md

@@ -1,66 +0,0 @@
-# Codex Prompt Templates for Box Monorepo
-
-## 1. General Feature Implementation (Backend, NestJS, TypeScript)
-
-**Use this when adding a new feature.**
-
-> You are working in the `box-nestjs-monorepo` project (NestJS + TypeScript).
-> Read and respect the project docs in `docs/`:
->
-> - ARCHITECTURE.md
-> - SYSTEM_OVERVIEW.md
-> - API_FLOW_DIAGRAMS.md
-> - DEVELOPMENT_CHECKLIST.md
-> - DEPLOYMENT_GUIDE.md
-> - CODEX_ONBOARDING.md
->
-> Task:
->
-> 1. Summarize the relevant architecture and flows for this task.
-> 2. Propose the NestJS module/controller/service/DTO design in a bullet list.
-> 3. After that, ask me if I am ready for TypeScript code.
->
-> Requirements:
->
-> - Use **TypeScript only**.
-> - Follow existing folder structures under `apps/` and `libs/`.
-> - Use Redis/Mongo/MySQL access patterns already present in the codebase, unless you explicitly propose a better one.
-> - Maintain backward compatibility with existing APIs and DTOs where possible.
-> - Keep date/time fields as **BigInt epoch** values in the data layer (as per project rules).
->
-> Task details:
-> [Describe the feature here, e.g. “Implement auto-registration of users by device_code in box-app-api, including MySQL schema changes and service logic.”]
-
----
-
-## 2. Refactor / Cleanup Prompt
-
-> You are refactoring existing code in `box-nestjs-monorepo`.
-> First:
->
-> 1. Inspect the current implementation and summarize what it does.
-> 2. Identify risks of breaking changes.
-> 3. Propose a safer refactor plan (steps).
-> 4. Only then, after I confirm, generate TypeScript changes.
->
-> Rules:
->
-> - Do **not** rename public APIs or DTO fields unless necessary and explicitly agreed.
-> - Preserve external behavior.
-> - Improve readability, testability, and consistency with patterns in `libs/common` and `libs/core`.
-
----
-
-## 3. Docs-Driven Implementation Prompt
-
-> Use `docs/ARCHITECTURE.md`, `docs/SYSTEM_OVERVIEW.md`, and `docs/API_FLOW_DIAGRAMS.md` as the source of truth.
-> I want you to:
->
-> 1. Read those docs.
-> 2. Explain how the requested feature fits into the architecture.
-> 3. List the affected files/modules (existing and new).
-> 4. Propose a small, incremental implementation plan (1–3 PRs).
-> 5. Only after I confirm, start generating TypeScript code.
->
-> Feature:
-> [Describe feature here]

+ 0 - 21
docs/DEPLOYMENT_GUIDE.md

@@ -1,21 +0,0 @@
-# Deployment Guide
-
-## 1. Requirements
-- Node 20+  
-- pnpm  
-- Redis  
-- MongoDB  
-- MySQL  
-
-## 2. Build
-```
-pnpm install
-pnpm build
-```
-
-## 3. Run with PM2
-```
-pm2 start dist/apps/box-mgnt-api/main.js
-pm2 start dist/apps/box-app-api/main.js
-```
-

+ 0 - 40
docs/DEVELOPMENT_CHECKLIST.md

@@ -1,40 +0,0 @@
-# Development Checklist (Audit-Only)
-
-## Phase 0 – Foundation
-- Setup monorepo  
-- Setup MySQL, MongoDB, Redis  
-- Configure base NestJS apps  
-- Setup libs/common, libs/db  
-
-## Phase 1 – Auth & RBAC
-- Implement login  
-- JWT  
-- 2FA  
-- RBAC roles: SuperAdmin, Manager, Admin, Viewer  
-- Only SuperAdmin can manage users/roles  
-
-## Phase 2 – Mgnt Modules
-- Ads  
-- Videos  
-- Channels  
-- Categories  
-- Tags  
-- System Params  
-- Upload service  
-- Audit trails only  
-- Immediate publish workflow  
-
-## Phase 3 – App API
-- Ads placement  
-- Homepage data  
-- Video listing  
-- Redis-driven ad pools  
-
-## Phase 4 – Stats Pipeline (Future)
-- Event ingestion  
-- Redis stream/queue  
-
-## Phase 5–8 (Future)
-- Optimization  
-- Monitoring  
-- Advanced features  

+ 0 - 42
docs/DOCKER.md

@@ -1,42 +0,0 @@
-# Docker development stack
-
-## Quick start
-
-1. Copy the environment template (only required if you want to override defaults):
-   ```sh
-   cp .env.docker .env
-   ```
-   The stack already reads `.env.docker` via `env_file`, so copying is optional unless you need custom values.
-2. Start the stack:
-   ```sh
-   docker compose up -d
-   ```
-3. When you need to run migrations or any workflow that requires the bundled MySQL, activate the `migration` profile:
-   ```sh
-   docker compose --profile migration up -d box-mysql
-   ```
-   MySQL stays behind this profile so the default stack stays minimal. The APIs already point to `box-mysql` via `MYSQL_URL`, so start the profile before or alongside the main stack whenever migrations are required.
-4. Stop and remove containers/volumes:
-   ```sh
-   docker compose down
-   ```
-
-## Useful helpers
-
-- Stream a service log: `docker compose logs -f box-mgnt-api`
-- Enter the Mongo shell to confirm the replica set: `docker compose exec box-mongodb mongosh --eval 'rs.status()'`
-- Rebuild an API image after code changes: `docker compose build box-mgnt-api`
-
-## Stack summary
-
-- `box-redis`, `box-rabbitmq`, `box-mongodb`, and `box-mysql` use named volumes (`redis_data`, `rabbitmq_data`, `mongo_data`, `mysql_data`). MySQL is only provisioned when you activate the `migration` profile.
-- Mongo runs `mongod --replSet rs0 --bind_ip_all` plus `docker/mongo/init-repl.js`, which initializes `rs0` exactly once and ensures `box_admin`/`box_stats` exist.
-- Each API reads `.env.docker` via `env_file`, then sets host/port/DB overrides for container-specific behavior.
-- One user-defined network (`box-network`) keeps all containers connected while ports stay consistent with the Nest apps (`3300`, `3301`, `3302`).
-
-## Smoke test checklist
-
-- `curl -fsSL http://localhost:3300/api/v1/health` (box-mgnt-api) returns HTTP 200/JSON.
-- `curl -fsSL http://localhost:3301/api/v1/health` (box-app-api) returns HTTP 200/JSON.
-- `curl -fsSL http://localhost:3302/api/v1/health` (box-stats-api) returns HTTP 200/JSON.
-- `docker compose exec box-mongodb mongosh --quiet --eval 'rs.status().ok'` prints `1`.

+ 0 - 27
docs/HIGH_LEVEL_MERMAID_DIAGRAMS.md

@@ -1,27 +0,0 @@
-```mermaid
-flowchart LR
-  subgraph MgntAPI[Mgnt API]
-    A1(Content Mgmt)
-    A2(System Params)
-  end
-
-  subgraph AppAPI[App API]
-    B1(Ad Placement)
-    B2(Video Listing)
-  end
-
-  subgraph Datastores
-    MSQL[(MySQL)]
-    MONGO[(MongoDB)]
-    REDIS[(Redis)]
-  end
-
-  A1-->MSQL
-  A1-->MONGO
-  A1-->REDIS
-
-  B1-->REDIS
-  B2-->MONGO
-
-  AppAPI-->REDIS
-```

+ 0 - 189
docs/MGNT_MYSQL_TO_MONGO_MIGRATION.md

@@ -1,189 +0,0 @@
-# Mgnt Storage Migration (MySQL → MongoDB `box_admin`)
-
-## Scope
-- Target: Move **box-mgnt-api** admin storage from MySQL to MongoDB database `box_admin`.
-- Keep **box_stats** unchanged (MongoStats stays as-is).
-- Read-only analysis; no code modifications.
-
-## 1) MySQL Prisma Models Used by box-mgnt-api at Runtime
-
-### Used models (direct Prisma access in mgnt services)
-- `User` (`prisma/mysql/schema/user.prisma`)
-- `Role` (`prisma/mysql/schema/role.prisma`)
-- `UserRole` (`prisma/mysql/schema/user-role.prisma`)
-- `Menu` (`prisma/mysql/schema/menu.prisma`)
-- `RoleMenu` (`prisma/mysql/schema/role-menu.prisma`)
-- `ApiPermission` (`prisma/mysql/schema/api-permission.prisma`)
-- `RoleApiPermission` (`prisma/mysql/schema/role-api-permission.prisma`)
-- `LoginLog` (`prisma/mysql/schema/login-log.prisma`)
-- `OperationLog` (`prisma/mysql/schema/operation-log.prisma`)
-- `QuotaLog` (`prisma/mysql/schema/quota-log.prisma`)
-- `CacheSyncAction` (`prisma/mysql/schema/cache-sync-action.prisma`)
-
-### Not used by box-mgnt-api runtime (present in schema only)
-- `ImageConfig` (schema deleted; no runtime references; table removed before Mongo migration)
-- `ProviderVideoSync` (model data flows from provider sync code and now lives under `prisma/mongo/schema/sys-providerVideoSync.prisma`)
-
-## 2) Constraints per Model (PK, Unique, Relations)
-
-### Summary table
-| Model | Primary Key | Unique Constraints | Relations |
-|---|---|---|---|
-| User | `id` (auto-increment int) | `username` | `UserRole.userId -> User.id` |
-| Role | `id` | `name` | `UserRole.roleId -> Role.id`, `RoleMenu.roleId -> Role.id`, `RoleApiPermission.roleId -> Role.id` |
-| UserRole | `id` | `(userId, roleId)` | `userId -> User`, `roleId -> Role` |
-| Menu | `id` | `frontendAuth` | self relation (`parentId -> Menu.id`), `RoleMenu.menuId -> Menu.id`, `ApiPermission.menuId -> Menu.id` |
-| RoleMenu | `id` | `(roleId, menuId)` | `roleId -> Role`, `menuId -> Menu` |
-| ApiPermission | `id` | `(menuId, path, method)` | `menuId -> Menu`, `RoleApiPermission.apiPermissionId -> ApiPermission.id` |
-| RoleApiPermission | `id` | `(roleId, apiPermissionId)` | `roleId -> Role`, `apiPermissionId -> ApiPermission` |
-| LoginLog | `id` | none | none |
-| OperationLog | `id` | none | `menuId` is nullable but not enforced by FK in schema |
-| QuotaLog | `id` | none | none |
-| CacheSyncAction | `id` (BigInt) | none | none; indexed by `status,nextAttemptAt` and `entityType,entityId` |
-
-### Additional field constraints (from schema)
-- Enums used in mgnt:
-  - `MenuType` (`DIRECTORY`, `MENU`, `SUBMENU`, `BUTTON`)
-  - `LoginType` / `LoginStatus`
-  - `OperationType`
-- JSON fields:
-  - `User.twoFARecoveryCodes`, `User.allowIps`
-  - `Menu.meta`
-  - `OperationLog.body`, `OperationLog.response`
-  - `CacheSyncAction.payload`
-- Special uniqueness / lookup usage in code:
-  - `Menu.frontendAuth` queried by `findUnique` in `apps/box-mgnt-api/src/mgnt-backend/core/menu/menu.service.ts`
-
-## 3) Proposed Mongo Collection Schemas + Indexes (`box_admin`)
-
-### Collections (1-to-1 with current MySQL models)
-- `sys_user`
-  - Fields: `id` (int), `username`, `password`, `status`, `nick`, `photo`, `remark`, `twoFA`, `twoFALastUsedStep`, `twoFARecoveryCodes`, `allowIps`, `jwtToken`, `oAuthJwtToken`, `lastLoginTime`, `create_time`, `update_time`
-  - Indexes: unique on `username`, index on `status` (optional for listing)
-- `sys_role`
-  - Fields: `id` (int), `name`, `status`, `remark`, `create_time`, `update_time`
-  - Indexes: unique on `name`
-- `sys_user_role`
-  - Fields: `id` (int), `user_id`, `role_id`, `create_time`, `update_time`
-  - Indexes: unique compound `(user_id, role_id)`; index on `user_id`, `role_id`
-- `sys_menu`
-  - Fields: `id` (int), `parent_id`, `title`, `status`, `type`, `order`, `frontend_auth`, `path`, `name`, `icon`, `redirect`, `component_key`, `meta`, `canView`, `canCreate`, `canUpdate`, `canDelete`, `create_time`, `update_time`
-  - Indexes: unique on `frontend_auth`, index on `parent_id`, index on `type`
-- `sys_role_menu`
-  - Fields: `id` (int), `role_id`, `menu_id`, `canView`, `canCreate`, `canUpdate`, `canDelete`, `create_time`, `update_time`
-  - Indexes: unique compound `(role_id, menu_id)`; index on `role_id`, `menu_id`
-- `sys_api_permission`
-  - Fields: `id` (int), `menu_id`, `path`, `method`, `create_time`, `update_time`
-  - Indexes: unique compound `(menu_id, path, method)`; index on `menu_id`
-- `sys_role_api_permission`
-  - Fields: `id` (int), `role_id`, `api_permission_id`, `create_time`, `update_time`
-  - Indexes: unique compound `(role_id, api_permission_id)`; index on `role_id`, `api_permission_id`
-- `sys_login_log`
-  - Fields: `id` (int), `type`, `status`, `username`, `ip_address`, `user_agent`, `create_time`, `update_time`
-  - Indexes: index on `username`, `create_time`
-- `sys_operation_log`
-  - Fields: `id` (int), `username`, `menu_id`, `description`, `type`, `status`, `method`, `path`, `body`, `response`, `ip_address`, `call_method`, `create_time`, `update_time`
-  - Indexes: index on `username`, `menu_id`, `create_time`
-- `sys_quota_log`
-  - Fields: `id` (int), `username`, `op_username`, `amount`, `is_inc`, `quota`, `remark`, `create_time`, `update_time`
-  - Indexes: index on `username`, `create_time`
-- `cache_sync_action`
-  - Fields: `id` (long), `entityType`, `entityId`, `operation`, `status`, `attempts`, `nextAttemptAt`, `lastError`, `payload`, `createdAt`, `updatedAt`
-  - Indexes: `(status, nextAttemptAt)`, `(entityType, entityId)`
-
-### Mongo Prisma schema filenames & mappings
-- `prisma/mongo/schema/sys-user.prisma` → maps to `sys_user` (mirrors `prisma/mysql/schema/user.prisma`)
-- `prisma/mongo/schema/sys-role.prisma` → maps to `sys_role`
-- `prisma/mongo/schema/sys-user-role.prisma` → maps to `sys_user_role`
-- `prisma/mongo/schema/sys-menu.prisma` → maps to `sys_menu`
-- `prisma/mongo/schema/sys-role-menu.prisma` → maps to `sys_role_menu`
-- `prisma/mongo/schema/sys-api-permission.prisma` → maps to `sys_api_permission`
-- `prisma/mongo/schema/sys-role-api-permission.prisma` → maps to `sys_role_api_permission`
-- `prisma/mongo/schema/sys-login-log.prisma` → maps to `sys_login_log`
-- `prisma/mongo/schema/sys-operation-log.prisma` → maps to `sys_operation_log`
-- `prisma/mongo/schema/sys-cache-sync-action.prisma` → maps to `sys_cacheSyncAction` (preserves BigInt PK/indexes from the MySQL schema)
-- `prisma/mongo/schema/sys-providerVideoSync.prisma` → maps to `sys_providerVideoSync` (moved out of `prisma/mysql/schema/main.prisma` so Mongo owns the provider sync store)
-- `sys_quota_log` intentionally remains only in MySQL; no Mongo schema file is created so Mongo services keep relying on the existing `prisma/mysql/schema/quota-log.prisma`.
-
-Every Mongo schema file uses `@@map("sys_<table>")` so the collection name stays aligned with the MySQL table while Prisma now loads the Mongo-friendly schema per file.
-
-### ID strategy
-- Preserve numeric IDs used by current code (e.g., `User.id`, `Role.id`, `Menu.id`).
-- Use a monotonically increasing counter per collection to emulate auto-increment.
-
-## 4) Verification Checklist
-
-| Schema | Mgnt service(s) still touching it | What to verify before cut-over |
-|---|---|---|
-| `sys_user` | `apps/box-mgnt-api/src/mgnt-backend/core/user/user.service.ts` (User CRUD/role assignment) and `apps/box-mgnt-api/src/mgnt-backend/core/auth/auth.service.ts` (lookup during login). | Ensure `SysUser` stays feature-compatible while token, password/2FA, and role-linking flows move to the Mongo client. |
-| `sys_role` | `apps/box-mgnt-api/src/mgnt-backend/core/role/role.service.ts`. | Confirm role creation/updating in Mongo respects the same `Menu` grants and status flags. |
-| `sys_user_role` | `user.service.ts` (link/unlink roles) and `role.service.ts` (read assignments). | Validate transactional user+role updates work once `sys_user_role` is served from Mongo. |
-| `sys_menu` | `apps/box-mgnt-api/src/mgnt-backend/core/menu/menu.service.ts`, `role.service.ts` (menu grant computations), and `operation-log.service.ts` (frontendAuth lookup). | Keep the menu tree, related meta, and `frontendAuth` resolution identical in Mongo. |
-| `sys_role_menu` | `role.service.ts` (grant storage). | Ensure create/delete/replace flows still maintain `(roleId, menuId)` uniqueness. |
-| `sys_api_permission` | `menu.service.ts` (API-permission CRUD) and `role.service.ts` (permission-based exports). | Preserve `(menuId, path, method)` uniqueness and any joins/resets. |
-| `sys_role_api_permission` | `role.service.ts` (grant updates / queries). | Confirm transactions that delete/recreate role-permission links continue to run. |
-| `sys_login_log` | `apps/box-mgnt-api/src/mgnt-backend/core/logging/login-log/login-log.service.ts` and the auth flow (via `LoginLogService`). | Logins will keep writing to `SysLoginLog` for auditing, so the new Mongo collection must mirror the MySQL schema. |
-| `sys_operation_log` | `apps/box-mgnt-api/src/mgnt-backend/core/logging/operation-log/operation-log.service.ts` (and the decorator/`operation-log` controller that reads it). | Validate the operation decorator still resolves `menuId` via `MenuService` and writes request/response bodies as JSON. |
-| `sys_cacheSyncAction` | `apps/box-mgnt-api/src/cache-sync/cache-sync.service.ts` (queue callback) plus every mgnt feature (ads/category/channel/tag/video) that calls it and the cache-sync controllers/checklist services. | The durable action queue must keep its BigInt PK/indexes and retry semantics while `CacheSyncService` continues scheduling old entity types. |
-| `sys_providerVideoSync` | `apps/box-mgnt-api/src/mgnt-backend/feature/provider-video-sync/provider-video-sync.service.ts`. | Provider sync writes counts/counters to Mongo, so verify the new collection respects the original `lastSyncedAt`/`syncStatus` semantics. |
-
-## 5) Follow-up: Mongo auto-increment / counters
-
-- Naming has stabilized with the `sys_*` models; the next checkpoint is to implement a `counters` collection (e.g., `{ _id: "sys_user", seq: 123 }`) so each Mongo schema can request the next integer ID before inserting.
-- Track this in the migration board and revisit once every `Sys*` collection has Prisma + service wiring in place; the `CounterService` can live alongside the Mongo Prisma client or inside a shared `@box/db` helper. A minimal allocator sketch follows below.
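-
-A minimal allocator sketch, assuming the counters collection described above; the interface mirrors a Mongo `findOneAndUpdate` call but is typed structurally rather than tied to a specific driver version:
-
-```typescript
-// Atomically increment { _id: "<collection>", seq } (creating it on first
-// use) and return the new value as the next integer ID.
-interface CountersCollection {
-  findOneAndUpdate(
-    filter: { _id: string },
-    update: { $inc: { seq: number } },
-    options: { upsert: boolean; returnDocument: 'after' },
-  ): Promise<{ seq: number } | null>;
-}
-
-async function nextId(counters: CountersCollection, collectionName: string): Promise<number> {
-  const doc = await counters.findOneAndUpdate(
-    { _id: collectionName },
-    { $inc: { seq: 1 } },
-    { upsert: true, returnDocument: 'after' },
-  );
-  if (!doc) throw new Error(`counter for ${collectionName} could not be allocated`);
-  return doc.seq;
-}
-```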
-
-## 6) Relational Assumptions and Transaction Usage
-
-### Joins / relational queries used in mgnt services
-- User ↔ Role via `UserRole` (list, get, update)
-  - `apps/box-mgnt-api/src/mgnt-backend/core/user/user.service.ts`
-  - Uses `include` on `userRoles.role` and `where` clauses with `userRoles.some`
-- Role ↔ Menu via `RoleMenu` (get permissions)
-  - `apps/box-mgnt-api/src/mgnt-backend/core/role/role.service.ts`
-- Menu ↔ ApiPermission via `ApiPermission` + RoleApiPermission
-  - `apps/box-mgnt-api/src/mgnt-backend/core/menu/menu.service.ts`
-- Menu self-relation (parent/children), used for tree building
-  - `apps/box-mgnt-api/src/mgnt-backend/core/menu/menu.service.ts`
-
-### Transaction usage that assumes relational guarantees
-- Role create/update
-  - `RoleService.create` and `RoleService.update` use `$transaction` to insert role + roleMenu or replace roleMenu
-- User create/update/delete
-  - `UserService.create`, `UserService.update`, `UserService.delete` use `$transaction` to change user and userRole together
-- Menu permission update
-  - `MenuService.updatePermission` deletes `roleApiPermission` + `apiPermission`, then recreates in a `$transaction`
-- Menu delete
-  - `MenuService.delete` deletes `roleMenu` + `menu` in a `$transaction`
-- Pagination
-  - `UserService.list` and `RoleService.list` use `$transaction` to fetch list + count
-
-### How to adapt in Mongo
-- Replace relational joins with one of:
-  - **Explicit join collections** (retain `sys_user_role`, `sys_role_menu`, `sys_role_api_permission`) and perform multi-step queries in services.
-  - **Embedded arrays** (e.g., store `roleIds` on `sys_user` and `menuIds` on `sys_role`) to reduce query count.
-- Replace transactions with Mongo multi-document transactions where atomicity is required:
-  - Role create/update (role + role_menu)
-  - User create/update/delete (user + user_role)
-  - Menu permission update (api_permission + role_api_permission)
-- Replace `include` and `some` relational filters with:
-  - Pre-join in application code (fetch roles by roleIds, map); see the sketch after this list
-  - `$lookup` aggregation pipelines (only if performance is acceptable)
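-
-A sketch of the pre-join option for the user ↔ role case, with assumed model names (`sysUserRole`, `sysRole`) for the future Mongo client:
-
-```typescript
-// Resolve a user's roles via the explicit sys_user_role join collection in
-// two queries, replacing the relational `include` on userRoles.role.
-async function getRolesForUser(
-  db: {
-    sysUserRole: { findMany: (args: unknown) => Promise<{ roleId: number }[]> };
-    sysRole: { findMany: (args: unknown) => Promise<{ id: number; name: string }[]> };
-  },
-  userId: number,
-) {
-  // Step 1: read the join collection for this user.
-  const links = await db.sysUserRole.findMany({ where: { userId }, select: { roleId: true } });
-  const roleIds = links.map((l) => l.roleId);
-  if (roleIds.length === 0) return [];
-  // Step 2: fetch the roles by ID and map in application code.
-  return db.sysRole.findMany({ where: { id: { in: roleIds } } });
-}
-```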
-
-## 7) Migration Design Notes
-- Keep data model parity with existing MySQL shape to minimize code change risk.
-- Preserve unique constraints and compound indexes listed above.
-- Maintain enum values as strings in Mongo to match Prisma usage.
-- Plan for auto-increment simulation using a counters collection (e.g., `counters` with `{ _id: "sys_user", seq: 123 }`).
-
-## Appendix: MySQL Schema Files (Reference)
-- `prisma/mysql/schema/user.prisma`
-- `prisma/mysql/schema/role.prisma`
-- `prisma/mysql/schema/user-role.prisma`
-- `prisma/mysql/schema/menu.prisma`
-- `prisma/mysql/schema/role-menu.prisma`
-- `prisma/mysql/schema/api-permission.prisma`
-- `prisma/mysql/schema/role-api-permission.prisma`
-- `prisma/mysql/schema/login-log.prisma`
-- `prisma/mysql/schema/operation-log.prisma`
-- `prisma/mysql/schema/quota-log.prisma`
-- `prisma/mysql/schema/cache-sync-action.prisma`

+ 0 - 204
docs/MONGODB_REMOVAL_FEASIBILITY.md

@@ -1,204 +0,0 @@
-# MongoDB Removal Feasibility
-
-## Feasibility Summary
-- Removing MongoDB + MongoStats is feasible but high-effort.
-- MongoDB is the primary store for ads, videos, categories, tags, channels, system params, and stats/events.
-- Code paths assume Mongo-native arrays/JSON and ObjectId semantics.
-- A MySQL-only approach requires a full schema replacement plus extensive query rewrites and migration tooling.
-
-## 1) Inventory of Mongo/MongoStats Usage
-
-### Prisma clients
-- Mongo: `@prisma/mongo/client` via `MongoPrismaService` (`libs/db/src/prisma/mongo-prisma.service.ts`)
-- MongoStats: `@prisma/mongo-stats/client` via `MongoStatsPrismaService` (`libs/db/src/prisma/mongo-stats-prisma.service.ts`)
-- App-scoped services:
-  - `apps/box-app-api/src/prisma/prisma-mongo.service.ts`
-  - `apps/box-app-api/src/prisma/prisma-mongo-stats.service.ts`
-  - `apps/box-stats-api/src/prisma/prisma-mongo.service.ts`
-
-### Mongo (main) schema usage
-- Ads / AdsModule (mgnt CRUD, app reads, recommendations, cache warmups)
-  - `apps/box-mgnt-api/src/mgnt-backend/feature/ads/ads.service.ts`
-  - `apps/box-app-api/src/feature/ads/ad.service.ts`
-  - `apps/box-app-api/src/feature/recommendation/recommendation.service.ts`
-  - `libs/core/src/ad/*`, `libs/common/src/services/ad-pool.service.ts`
-- Category / Tag / Channel (mgnt CRUD, app reads, cache builders)
-  - `apps/box-mgnt-api/src/mgnt-backend/feature/category/category.service.ts`
-  - `apps/box-mgnt-api/src/mgnt-backend/feature/tag/tag.service.ts`
-  - `apps/box-mgnt-api/src/mgnt-backend/feature/channel/channel.service.ts`
-  - `apps/box-app-api/src/feature/video/video.service.ts`
-  - `libs/core/src/cache/*`
-- VideoMedia (mgnt CRUD, app reads, recommendation, stats aggregation, cache builders)
-  - `apps/box-mgnt-api/src/mgnt-backend/feature/video-media/video-media.service.ts`
-  - `apps/box-app-api/src/feature/video/video.service.ts`
-  - `apps/box-app-api/src/feature/recommendation/recommendation.service.ts`
-  - `apps/box-stats-api/src/feature/stats-events/stats-aggregation.service.ts`
-  - `libs/core/src/cache/video/*`
-- SystemParam (mgnt CRUD + app reads)
-  - `apps/box-mgnt-api/src/mgnt-backend/feature/system-params/system-params.service.ts`
-  - `apps/box-app-api/src/feature/sys-params/sys-params.service.ts`
-- Sync services (write-heavy)
-  - `apps/box-mgnt-api/src/mgnt-backend/feature/sync-videomedia/sync-videomedia.service.ts`
-  - `apps/box-mgnt-api/src/mgnt-backend/feature/provider-video-sync/provider-video-sync.service.ts`
-- Cache tooling/dev tools
-  - `apps/box-mgnt-api/src/cache-sync/cache-sync.service.ts`
-  - `apps/box-mgnt-api/src/dev/services/video-cache-coverage.service.ts`
-  - `apps/box-mgnt-api/src/dev/services/video-stats.service.ts`
-
-### MongoStats schema usage
-- User + login history
-  - `apps/box-stats-api/src/feature/user-login/user-login.service.ts`
-- Raw events + processed message dedupe
-  - `apps/box-stats-api/src/feature/stats-events/stats-events.consumer.ts`
-- Aggregated stats + Redis sync
-  - `apps/box-stats-api/src/feature/stats-events/stats-aggregation.service.ts`
-
-### Mongo schemas defined but not used
-- `Home` collection (`prisma/mongo/schema/home.prisma`) — no code references found
-- `AdsClickHistory` (`prisma/mongo-stats/schema/ads-click-history.prisma`) — no code references found
-
-## 2) Data Types Stored in Mongo and Usage Patterns
-
-### Core content (Mongo main)
-- Ads / AdsModule
-  - Fields: ObjectId ids, adType enum, start/end BigInt epoch, content/cover/url
-  - Patterns: mgnt CRUD; app reads for lists and recommendation; cache warmups populate Redis
-- Category
-  - Fields: name, subtitle, seq, status, tagNames (String[]), tags (Json)
-  - Patterns: mgnt CRUD; app reads categories; cache builders precompute lists
-- Tag
-  - Fields: name, categoryId (ObjectId), seq/status
-  - Patterns: mgnt CRUD; app reads tags by category; cache builders build tag lists
-- Channel
-  - Fields: channelId (unique), name, URLs, tags/categories as Json/arrays
-  - Patterns: mgnt CRUD; app auth uses channel lookup; cache builders
-- VideoMedia
-  - Large document with arrays (categoryIds, tagIds, tags, actors, secondTags, appids, japanNames), string metadata, BigInt size, DateTime timestamps, tagsFlat
-  - Patterns: mgnt CRUD & sync; app reads by IDs (often from Redis); recommendation queries; stats aggregation needs tagIds; cache builders depend on indices
-
-### System parameters (Mongo main)
-- SystemParam
-  - Fields: id (int), side/type enums, name/value, BigInt timestamps
-  - Patterns: mgnt CRUD; app reads for enums and ad modules
-
-### Stats/event data (MongoStats)
-- User & UserLoginHistory
-  - Writes on login events (upsert user, insert login record)
-  - Reads for stats aggregation/reporting (inferred via indexes)
-- Raw events: AdClickEvents, VideoClickEvents, AdImpressionEvents
-  - Append-only writes from RabbitMQ consumer
-  - Reads by aggregation jobs to compute scores and sync Redis
-- ProcessedMessage
-  - Deduplication on messageId, created by consumer, deleted on failure
-- Aggregated stats: VideoGlobalStats, AdsGlobalStats
-  - Upserted by aggregation jobs; read for scoring/sync to Redis
-- Unused: AdsClickHistory (no code usage)
-
-### Mongo-specific features used
-- ObjectId primary keys stored as strings in code (`@db.ObjectId`)
-- Arrays (String[], Int[]) and Json fields
-- `aggregateRaw` used in `apps/box-app-api/src/feature/video/video.service.ts`
-
-## 3) Minimal MySQL Schema Proposal
-
-### Core content tables
-- `ads`
-  - `id CHAR(24) PRIMARY KEY`, `ads_module_id CHAR(24)`, `advertiser`, `title`, `ads_content`, `ads_cover_img`, `ads_url`, `img_source`, `start_dt BIGINT`, `expiry_dt BIGINT`, `seq INT`, `status INT`, `create_at BIGINT`, `update_at BIGINT`
-  - Index: `ads_module_id`, `status`, `start_dt`, `expiry_dt`
-- `ads_module`
-  - `id CHAR(24) PRIMARY KEY`, `ad_type VARCHAR`, `ads_module VARCHAR`, `module_desc`, `seq INT`
-  - Unique: `ad_type`, `ads_module`
-- `category`
-  - `id CHAR(24) PRIMARY KEY`, `name`, `subtitle`, `seq`, `status`, `create_at BIGINT`, `update_at BIGINT`
-- `tag`
-  - `id CHAR(24) PRIMARY KEY`, `name`, `category_id CHAR(24)`, `seq`, `status`, `create_at BIGINT`, `update_at BIGINT`
-  - Index: `category_id`, `status`
-- `channel`
-  - `id CHAR(24) PRIMARY KEY`, `channel_id VARCHAR UNIQUE`, `name`, `landing_url`, `video_cdn`, `cover_cdn`, `client_name`, `client_notice`, `remark`, `create_at BIGINT`, `update_at BIGINT`
-- `system_param`
-  - `id INT PRIMARY KEY`, `side`, `data_type`, `name`, `value`, `remark`, `create_at BIGINT`, `update_at BIGINT`
-  - Unique: `(side, name)`
-- `video_media`
-  - `id CHAR(24) PRIMARY KEY`, core fields + `tags_flat VARCHAR`, `list_status INT`, `edited_at BIGINT`, `img_source`, provider fields, timestamps
-  - Index: `(list_status, tags_flat)`, `added_time`, `updated_at`
-
-### Join tables to replace arrays
-- `video_media_category` (`video_media_id`, `category_id`, PRIMARY KEY both)
-  - Index on `category_id`
-- `video_media_tag` (`video_media_id`, `tag_id`, PRIMARY KEY both)
-  - Index on `tag_id`
-- Optional denormalized tag names: `video_media_tags` as JSON if minimizing join complexity
-
-### Stats/event tables
-- `user_stats`
-  - `id CHAR(24) PRIMARY KEY`, `uid VARCHAR UNIQUE`, `ip`, `os`, `u_channel_id`, `machine`, `create_at BIGINT`, `last_login_at BIGINT`
-- `user_login_history`
-  - `id CHAR(24) PRIMARY KEY`, `uid`, `ip`, `user_agent`, `app_version`, `os`, `u_channel_id`, `machine`, `create_at BIGINT`, `token_id`
-  - Index: `(uid, create_at)`, `(u_channel_id, create_at)`, `(ip, create_at)`
-- `ad_click_events`, `video_click_events`, `ad_impression_events`
-  - Fields mirror MongoStats schema; index on `(ad_id|video_id, time)`, `(uid, time)`, `(ad_type, time)`
-- `processed_messages`
-  - `message_id VARCHAR UNIQUE`, `event_type`, `processed_at BIGINT`, `created_at BIGINT`
-- `ads_global_stats`, `video_global_stats`
-  - per-id unique rows, indexes on `computed_score`, `computed_recency`, `computed_ctr`, `last_seen_at`
-
-### Notes
-- Use `CHAR(24)` or `BINARY(12)` for former ObjectId values to preserve IDs used in APIs.
-- Arrays used in filters should become join tables; JSON columns can be a stopgap but complicate query parity.
-
-## 4) Migration Plan (Phases, Risks, Rollback)
-
-### Phases
-1. Schema introduction
-   - Add new MySQL tables (content + stats) while keeping Mongo/MongoStats active.
-2. Backfill
-   - One-time migration from Mongo/MongoStats into MySQL (ads, videoMedia, tags, events, stats).
-3. Dual-write
-   - Update write paths to write both Mongo and MySQL (mgnt CRUD, stats consumer, sync services). A wrapper sketch follows this phase list.
-4. Read switch
-   - Shift read paths (app APIs, cache builders, stats aggregation) to MySQL.
-5. Stabilization
-   - Keep Mongo in read-only mode for verification.
-6. Decommission
-   - Remove Mongo dependencies after parity signoff.
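-
-A dual-write wrapper might look like the following; the toggle name and handles are illustrative, and the key point is that shadow writes to MySQL must not break the Mongo path while Mongo remains authoritative:
-
-```typescript
-// Phase 3 sketch: write Mongo (source of truth until the read switch), then
-// shadow-write MySQL behind a rollback toggle; MySQL failures are logged for
-// parity review, never rethrown.
-async function dualWrite<T>(
-  writeMongo: () => Promise<T>,
-  writeMySql: () => Promise<unknown>,
-  opts: { mysqlWritesEnabled: boolean; logger: { warn: (msg: string) => void } },
-): Promise<T> {
-  const result = await writeMongo();
-  if (opts.mysqlWritesEnabled) {
-    try {
-      await writeMySql();
-    } catch (err) {
-      opts.logger.warn(`dual-write to MySQL failed: ${(err as Error).message}`);
-    }
-  }
-  return result;
-}
-```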
-
-### Key risks
-- Array semantics: Mongo arrays for `categoryIds`, `tagIds`, `tagsFlat` used in filters and Redis sync.
-- Aggregation logic: `aggregateRaw` in `apps/box-app-api/src/feature/video/video.service.ts` is Mongo-specific.
-- Event volume: stats events are append-heavy; MySQL indexing/partitioning needed for aggregation performance.
-- ID consistency: APIs and caches assume ObjectId string IDs.
-- Cache rebuilds depend on Mongo data shape and indexing; porting to MySQL must preserve ordering and filters.
-
-### Rollback strategy
-- Keep Mongo/MongoStats live during migration.
-- If errors: switch reads back to Mongo, stop MySQL writes, keep MySQL for postmortem.
-- Preserve dual-write toggles to flip back quickly.
-
-## 5) Test Checklist for Parity
-
-### API parity
-- Ads API: `/api/v1/ads/list`, `/api/v1/ads/click`, `/api/v1/ads/impression`
-- Video API: `/api/v1/video/list`, `/api/v1/video/search-by-tag`, `/api/v1/video/category/:id/latest`, `/api/v1/video/recommended`
-- Homepage API: `/api/v1/homepage` (ads, categories, announcements, videos)
-- Recommendation APIs: `/api/v1/recommend/*`, `/api/v1/recommendation/*`
-
-### Mgnt parity
-- CRUD: ads, categories, tags, channels, system params, video media
-- Sync endpoints: provider sync and video sync flow
-- Cache rebuild endpoints: cache-sync and admin rebuild
-
-### Stats parity
-- RabbitMQ ingestion → event writes
-- Aggregation results: `ads_global_stats`, `video_global_stats` values and Redis scores
-- Redis sync results: `ads:global:score`, `video:global:score`, `video:tag:*:score`
-
-### Performance
-- Cache rebuild job duration and Redis write throughput
-- Video listing latency (paged list by category/tag)
-- Recommendation query latency (similar ads/videos)
-- Stats aggregation runtime with realistic event volumes
-
-### Data integrity
-- ID mapping unchanged (ObjectId format preserved)
-- Tags/categories relations consistent for video media
-- No loss of BigInt epoch precision in timestamps

+ 0 - 780
docs/OBSERVABILITY_ENHANCEMENTS.md

@@ -1,780 +0,0 @@
-# Observability Enhancements for RabbitMQ and Stats Aggregation
-
-This document describes the enhanced observability features added to the RabbitMQ publisher and stats aggregation services.
-
----
-
-## Part A: RabbitMQ Publisher Observability
-
-### Event Publishing Resilience Tiers
-
-The RabbitMQ publisher implements **three tiers of resilience** based on event criticality:
-
-#### Tier 1: Stats Events (Full Resilience)
-
-**Events:** `stats.ad.click`, `stats.video.click`, `stats.ad.impression`
-
-**Features:**
-
-- ✅ Circuit breaker protection
-- ✅ Retry logic (3 attempts with exponential backoff)
-- ✅ Redis fallback queue (24-hour TTL)
-- ✅ Dead Letter Queue (DLQ)
-- ✅ Publisher-level idempotency (7-day window)
-- ✅ Message TTL (24 hours)
-
-**Why:** These events are critical for analytics and recommendations. Data loss is unacceptable.
-
-#### Tier 2: User Activity Events (Partial Resilience)
-
-**Events:** `user.login`, `ads.click`
-
-**Features:**
-
-- ✅ Circuit breaker protection
-- ✅ Retry logic (3 attempts with exponential backoff)
-- ❌ No Redis fallback queue
-- ❌ No DLQ
-- ❌ No idempotency checking
-- ❌ No message TTL
-
-**Why:** These events are less critical. Losing a few during outages is acceptable. Simpler implementation reduces overhead.
-
-**Warning Logs:**
-
-```
-[WARN] ⚠️  Circuit breaker OPEN. Dropping user.login event for uid=user123 (non-critical event, no fallback)
-[WARN] ⚠️  Circuit breaker OPEN. Dropping ads.click event for adsId=ad456 (non-critical event, no fallback)
-```
-
-**Monitoring Impact:**
-Track these warning logs to understand data loss during RabbitMQ outages:
-
-```bash
-# Count dropped user.login events
-grep "Dropping user.login event" app.log | wc -l
-
-# Count dropped ads.click events
-grep "Dropping ads.click event" app.log | wc -l
-```
-
-#### Tier 3: Fire-and-Forget (No Resilience)
-
-**Events:** (none currently, but pattern available for future use)
-
-**Features:**
-
-- ❌ No circuit breaker
-- ❌ No retries
-- ❌ No fallback
-- ❌ Direct publish, drop on failure
-
-**Why:** For truly ephemeral events where loss doesn't matter (e.g., real-time presence indicators).
-
----
-
-### Enhanced Structured Logging
-
-#### 1. Circuit Breaker State Transitions
-
-All circuit breaker state transitions now include `failureCount` and `successCount` in log messages:
-
-**Circuit Opened:**
-
-```
-[WARN] Circuit breaker OPENED (failureCount=5, successCount=0). Will retry after 60000ms
-```
-
-**Circuit Half-Open:**
-
-```
-[INFO] Circuit breaker HALF-OPEN (failureCount=0, successCount=0). Testing connection...
-```
-
-**Circuit Closed:**
-
-```
-[INFO] Circuit breaker CLOSED (failureCount=0, successCount=2). Resuming normal operation.
-```
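-The transitions behind these log lines can be summarized as a small state machine; the sketch below is generic (thresholds, timings, and field names are illustrative) and is not the publisher's actual implementation:
-
-```typescript
-// Generic circuit breaker sketch matching the logged states: failures in
-// CLOSED open the circuit, a cooldown moves it to HALF_OPEN, and consecutive
-// successes close it again.
-type CircuitState = 'CLOSED' | 'OPEN' | 'HALF_OPEN';
-
-class CircuitBreakerSketch {
-  private state: CircuitState = 'CLOSED';
-  private failureCount = 0;
-  private successCount = 0;
-  private nextAttemptTime = 0;
-
-  constructor(
-    private readonly failureThreshold = 5,
-    private readonly successThreshold = 2,
-    private readonly resetTimeoutMs = 60_000,
-  ) {}
-
-  canAttempt(now = Date.now()): boolean {
-    if (this.state === 'OPEN' && now >= this.nextAttemptTime) {
-      this.state = 'HALF_OPEN'; // "Testing connection..."
-      this.successCount = 0;
-    }
-    return this.state !== 'OPEN';
-  }
-
-  recordSuccess(): void {
-    if (this.state === 'HALF_OPEN' && ++this.successCount >= this.successThreshold) {
-      this.state = 'CLOSED'; // "Resuming normal operation."
-      this.failureCount = 0;
-    }
-  }
-
-  recordFailure(now = Date.now()): void {
-    this.failureCount += 1;
-    if (this.state === 'HALF_OPEN' || this.failureCount >= this.failureThreshold) {
-      this.state = 'OPEN'; // "Will retry after <resetTimeoutMs>ms"
-      this.nextAttemptTime = now + this.resetTimeoutMs;
-    }
-  }
-}
-```
-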
-
-#### 2. Reconnection Lifecycle Logging
-
-Reconnection attempts now have detailed structured logs with emojis for easy visual scanning:
-
-**Reconnection Start:**
-
-```
-[INFO] 🔄 Starting RabbitMQ reconnection attempt (circuitState=HALF_OPEN)...
-```
-
-**Connection Progress:**
-
-```
-[DEBUG] 🔌 Reconnecting to RabbitMQ at amqp://localhost:5672...
-```
-
-**Reconnection Success:**
-
-```
-[INFO] ✅ RabbitMQ reconnection successful (hasConnection=true, hasChannel=true)
-```
-
-**Reconnection Failure:**
-
-```
-[ERROR] ❌ RabbitMQ reconnection failed (circuitState=HALF_OPEN): Connection refused
-[WARN] ⚠️  Reconnection failed during HALF_OPEN, reopening circuit
-```
-
-**URL Not Configured:**
-
-```
-[ERROR] ❌ Reconnection failed: RABBITMQ_URL is not set. Cannot reconnect to RabbitMQ.
-```
-
-### Extended Circuit Status API
-
-The `getCircuitStatus()` method now returns comprehensive health information:
-
-```typescript
-interface CircuitStatus {
-  state: 'CLOSED' | 'OPEN' | 'HALF_OPEN';
-  failureCount: number;
-  successCount: number;
-  hasConnection: boolean; // ✨ NEW
-  hasChannel: boolean; // ✨ NEW
-  isReconnecting: boolean; // ✨ NEW
-  nextAttemptTime: number; // ✨ NEW (timestamp in ms)
-}
-```
-
-**Example Response:**
-
-```json
-{
-  "state": "CLOSED",
-  "failureCount": 0,
-  "successCount": 2,
-  "hasConnection": true,
-  "hasChannel": true,
-  "isReconnecting": false,
-  "nextAttemptTime": 0
-}
-```
-
-### Internal Status Endpoint
-
-A new internal-only HTTP endpoint provides RabbitMQ health monitoring:
-
-**Endpoint:** `GET /api/v1/internal/rabbitmq/status`
-
-**Access Control:**
-
-- Disabled by default in production (NODE_ENV=production)
-- Enable in production with: `ENABLE_INTERNAL_ENDPOINTS=true`
-- Always accessible in development/staging environments
-
-**Response Structure:**
-
-```json
-{
-  "timestamp": "2025-12-08T12:00:00.000Z",
-  "rabbitmq": {
-    "circuitBreaker": {
-      "state": "CLOSED",
-      "failureCount": 0,
-      "successCount": 0,
-      "nextAttemptTime": null
-    },
-    "connection": {
-      "hasConnection": true,
-      "hasChannel": true,
-      "isReconnecting": false
-    },
-    "health": "healthy"
-  }
-}
-```
-
-**Health Status Values:**
-
-- `"healthy"` - Connection & channel active, circuit CLOSED
-- `"reconnecting"` - Actively attempting to reconnect
-- `"unhealthy"` - Connection/channel down or circuit OPEN/HALF_OPEN
-
-**Production Disabled Response:**
-
-```json
-{
-  "error": "Internal endpoints are disabled in production",
-  "hint": "Set ENABLE_INTERNAL_ENDPOINTS=true to enable"
-}
-```
-
-**Example Usage:**
-
-```bash
-# Development/Staging (no auth required)
-curl http://localhost:3000/api/v1/internal/rabbitmq/status
-
-# Production (requires ENABLE_INTERNAL_ENDPOINTS=true)
-curl http://prod.example.com/api/v1/internal/rabbitmq/status
-```
-
-**Module Configuration:**
-
-```typescript
-// rabbitmq.module.ts
-@Module({
-  controllers: [
-    RabbitmqFallbackReplayController,
-    RabbitmqStatusController  // ✨ NEW
-  ],
-  // ...
-})
-```
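-
-For orientation, a minimal sketch of what such a controller could look like, combining the environment guard and the health derivation described above. The route decoration, exact health logic, and response mapping are assumptions for illustration, not the repository's actual code; only `getCircuitStatus()` and the documented response shape are taken from this document.
-
-```typescript
-// Illustrative sketch only; names beyond RabbitmqStatusController/getCircuitStatus() are assumptions.
-import { Controller, Get } from '@nestjs/common';
-import { RabbitmqPublisherService } from './rabbitmq-publisher.service'; // path assumed
-
-@Controller('internal/rabbitmq')
-export class RabbitmqStatusController {
-  constructor(private readonly publisher: RabbitmqPublisherService) {}
-
-  @Get('status')
-  getStatus() {
-    // Environment guard: blocked in production unless explicitly enabled
-    const enabled =
-      process.env.NODE_ENV !== 'production' ||
-      process.env.ENABLE_INTERNAL_ENDPOINTS === 'true';
-    if (!enabled) {
-      return {
-        error: 'Internal endpoints are disabled in production',
-        hint: 'Set ENABLE_INTERNAL_ENDPOINTS=true to enable',
-      };
-    }
-
-    const s = this.publisher.getCircuitStatus();
-    // Derive the health value from circuit state and connection flags
-    const health = s.isReconnecting
-      ? 'reconnecting'
-      : s.state === 'CLOSED' && s.hasConnection && s.hasChannel
-        ? 'healthy'
-        : 'unhealthy';
-
-    return {
-      timestamp: new Date().toISOString(),
-      rabbitmq: {
-        circuitBreaker: {
-          state: s.state,
-          failureCount: s.failureCount,
-          successCount: s.successCount,
-          nextAttemptTime: s.nextAttemptTime || null,
-        },
-        connection: {
-          hasConnection: s.hasConnection,
-          hasChannel: s.hasChannel,
-          isReconnecting: s.isReconnecting,
-        },
-        health,
-      },
-    };
-  }
-}
-```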
-
-### Refactored User Activity Event Publishing
-
-The `publishUserLogin()` and `publishAdsClick()` methods have been refactored to share the robust publishing pipeline used by stats events, while maintaining their simpler Tier 2 resilience profile.
-
-#### Code Structure
-
-**Before:**
-
-```typescript
-async publishUserLogin(event: UserLoginEventPayload): Promise<void> {
-  if (!this.channel) {
-    this.logger.warn('RabbitMQ channel not ready. Skipping user.login publish.');
-    return;
-  }
-  // Direct publish, no retries, no circuit breaker
-}
-```
-
-**After:**
-
-```typescript
-async publishUserLogin(event: UserLoginEventPayload): Promise<void> {
-  // Check circuit breaker
-  if (!(await this.canAttempt())) {
-    this.logger.warn(
-      `⚠️  Circuit breaker OPEN. Dropping user.login event for uid=${event.uid} (non-critical event, no fallback)`
-    );
-    return;
-  }
-
-  // Build a context string used in retry/failure logs
-  const context = `user.login uid=${event.uid}`;
-
-  // Use shared retry logic and record the outcome for the circuit breaker
-  try {
-    await this.retryPublish(async () => {
-      await this.publishUserLoginCore(event);
-    }, context);
-    this.recordSuccess();
-  } catch {
-    // Fire-and-forget: swallow the error; the circuit breaker tracks the failure
-    this.recordFailure();
-  }
-}
-
-// Core publish logic separated for retry wrapper
-private async publishUserLoginCore(event: UserLoginEventPayload): Promise<void> {
-  // Actual RabbitMQ publish
-}
-```
-
-#### Benefits of Refactoring
-
-**1. Shared Resilience Logic:**
-
-- Both methods now use `canAttempt()` for circuit breaker checks
-- Both methods use `retryPublish()` for exponential backoff retries
-- Both methods use `recordSuccess()` / `recordFailure()` for circuit state management
-
-**2. Consistent Error Handling:**
-
-- Clear warning logs when circuit breaker drops events
-- Structured error messages with retry attempt counts
-- Fire-and-forget behavior maintained (no exceptions thrown)
-
-**3. Easier Monitoring:**
-
-- Consistent log format across all event types
-- Circuit breaker state applies to all events
-- Clear indication of dropped events for impact analysis
-
-**4. Future Extensibility:**
-
-- Easy to upgrade to Tier 1 resilience (just add fallback/idempotency)
-- Easy to downgrade to Tier 3 (remove circuit breaker)
-- Consistent codebase for all publish methods
-
-#### Behavioral Changes
-
-**Callers are NOT affected:**
-
-- Method signatures unchanged
-- Still returns `Promise<void>`
-- Still fire-and-forget (no exceptions thrown)
-- Still safe to use in async contexts
-
-**Internal improvements:**
-
-- 3 retry attempts with delays: 100ms, 500ms, 2000ms (see the sketch after this list)
-- Circuit breaker prevents cascading failures
-- Better logging for troubleshooting
-
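-As a reference for the list above, a minimal sketch of a retry helper with those delays. The real `retryPublish()` implementation may differ; only the attempt count, the delays, and the log wording shown in the examples below are taken from this document.
-
-```typescript
-// Sketch: initial attempt plus one retry per delay (100ms, 500ms, 2000ms).
-private async retryPublish(fn: () => Promise<void>, context: string): Promise<void> {
-  const delays = [100, 500, 2000];
-  let lastError: unknown;
-  for (let attempt = 0; attempt <= delays.length; attempt++) {
-    try {
-      await fn();
-      return;
-    } catch (err) {
-      lastError = err;
-      if (attempt < delays.length) {
-        this.logger.warn(
-          `Publish attempt ${attempt + 1} failed (${context}). Retrying in ${delays[attempt]}ms...`,
-        );
-        await new Promise((resolve) => setTimeout(resolve, delays[attempt]));
-      }
-    }
-  }
-  throw new Error(`Failed to publish after ${delays.length} retries (${context}): ${lastError}`);
-}
-```
-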
-#### Example Logs
-
-**Normal Operation:**
-
-```
-[DEBUG] Published user.login event for uid=user123
-[DEBUG] Published ads.click event for adsId=ad456
-```
-
-**Retry After Transient Failure:**
-
-```
-[WARN] Publish attempt 1 failed (user.login uid=user123). Retrying in 100ms...
-[DEBUG] Published user.login event for uid=user123
-```
-
-**Circuit Breaker Active:**
-
-```
-[WARN] ⚠️  Circuit breaker OPEN. Dropping user.login event for uid=user123 (non-critical event, no fallback)
-[WARN] ⚠️  Circuit breaker OPEN. Dropping ads.click event for adsId=ad456 (non-critical event, no fallback)
-```
-
-**Total Failure After Retries:**
-
-```
-[ERROR] Failed to publish user.login after 3 retries for uid=user123: Connection refused
-```
-
----
-
-## Part B: Stats Aggregation Observability
-
-### Enhanced Summary Logging
-
-Both ads and video aggregation services now provide comprehensive summary logs after each run.
-
-#### Ads Aggregation Summary
-
-**Log Level:** `INFO` (not spammy, one line per run)
-
-**Format:**
-
-```
-[INFO] 📊 Ads aggregation complete: updated=150, errors=2, scores(min=0.0234, max=2.4567, avg=0.8923), zeroScores=5
-```
-
-**Metrics Included:**
-
-- `updated` - Number of AdsGlobalStats records successfully updated
-- `errors` - Number of ads that failed to aggregate
-- `scores.min` - Minimum computed score across all ads
-- `scores.max` - Maximum computed score across all ads
-- `scores.avg` - Average computed score across all ads
-- `zeroScores` - Count of ads with computedScore = 0
-
-**Debug-Level Details:**
-Individual ad processing still logs at DEBUG level:
-
-```
-[DEBUG] Aggregated adId=ad123: impressions=1000, clicks=50, CTR=0.0476, score=1.2345
-```
-
-#### Video Aggregation Summary
-
-**Log Level:** `INFO`
-
-**Format:**
-
-```
-[INFO] 📊 Video aggregation complete: updated=320, errors=1, scores(min=0.0156, max=3.1234, avg=1.0234), zeroScores=12
-```
-
-**Metrics Included:**
-
-- `updated` - Number of VideoGlobalStats records successfully updated
-- `errors` - Number of videos that failed to aggregate
-- `scores.min` - Minimum computed score across all videos
-- `scores.max` - Maximum computed score across all videos
-- `scores.avg` - Average computed score across all videos
-- `zeroScores` - Count of videos with computedScore = 0
-
-**Debug-Level Details:**
-
-```
-[DEBUG] Aggregated videoId=video456: impressions=500, clicks=500, CTR=0.4000, score=2.3456
-```
-
-### Implementation Details
-
-#### Score Tracking
-
-Both methods now track scores during aggregation:
-
-```typescript
-const scores: number[] = [];
-let zeroScoreCount = 0;
-
-for (const id of ids) {
-  const score = await this.aggregateSingle(id, cutoffTime);
-  scores.push(score);
-  if (score === 0) zeroScoreCount++;
-}
-```
-
-#### Score Statistics Calculation
-
-```typescript
-const scoreStats =
-  scores.length > 0
-    ? {
-        min: Math.min(...scores),
-        max: Math.max(...scores),
-        avg: scores.reduce((sum, s) => sum + s, 0) / scores.length,
-      }
-    : { min: 0, max: 0, avg: 0 };
-```
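-
-With those statistics in hand, the single INFO summary line shown earlier can be emitted at the end of the run. A minimal sketch (the counter variable names `updatedCount`/`errorCount` are illustrative):
-
-```typescript
-// One INFO line per aggregation run; format follows the summary examples above.
-this.logger.log(
-  `📊 Ads aggregation complete: updated=${updatedCount}, errors=${errorCount}, ` +
-    `scores(min=${scoreStats.min.toFixed(4)}, max=${scoreStats.max.toFixed(4)}, ` +
-    `avg=${scoreStats.avg.toFixed(4)}), zeroScores=${zeroScoreCount}`,
-);
-```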
-
-#### Return Values
-
-The private aggregation methods now return the computed score:
-
-```typescript
-private async aggregateSingleAd(adId: string, cutoffTime: bigint): Promise<number> {
-  // ... compute metrics ...
-  const computedScore = this.computeScore(popularity, ctr, recency);
-  // ... upsert to database ...
-  return computedScore;  // ✨ NEW
-}
-```
-
----
-
-## Configuration
-
-### Environment Variables
-
-**RabbitMQ Internal Endpoint:**
-
-```bash
-# Enable in production (default: disabled)
-ENABLE_INTERNAL_ENDPOINTS=true
-
-# Or check NODE_ENV
-NODE_ENV=production  # endpoint disabled by default
-NODE_ENV=development # endpoint always enabled
-```
-
-**Stats Aggregation (existing):**
-
-```bash
-# CTR smoothing parameters
-STATS_CTR_ALPHA=1
-STATS_CTR_BETA=2
-
-# Scoring weights
-STATS_WEIGHT_POPULARITY=0.5
-STATS_WEIGHT_CTR=0.3
-STATS_WEIGHT_RECENCY=0.2
-```
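-
-For reference, a minimal sketch of how these weights could feed the score formula noted in the Troubleshooting section (`score = w1*log(1+impressions) + w2*ctr + w3*recency`). Reading the env vars directly and treating `popularity` as the already-computed `log(1 + impressions)` term are assumptions for illustration:
-
-```typescript
-// Sketch: combine popularity, CTR, and recency with the configured weights.
-// Defaults mirror the documented example values.
-private computeScore(popularity: number, ctr: number, recency: number): number {
-  const w1 = Number(process.env.STATS_WEIGHT_POPULARITY ?? 0.5);
-  const w2 = Number(process.env.STATS_WEIGHT_CTR ?? 0.3);
-  const w3 = Number(process.env.STATS_WEIGHT_RECENCY ?? 0.2);
-  return w1 * popularity + w2 * ctr + w3 * recency;
-}
-```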
-
----
-
-## Monitoring & Alerts
-
-### RabbitMQ Monitoring
-
-**Recommended Metrics to Track:**
-
-1. **Circuit State Duration**
-   - Alert if circuit stays OPEN for > 5 minutes
-   - Pattern: `grep "Circuit breaker OPENED" app.log`
-
-2. **Reconnection Failures**
-   - Alert if reconnection fails > 3 times in 10 minutes
-   - Pattern: `grep "❌ RabbitMQ reconnection failed" app.log`
-
-3. **Connection Health**
-   - Poll `/api/v1/internal/rabbitmq/status` every 60 seconds
-   - Alert if `health !== "healthy"` for > 2 minutes (a polling sketch follows at the end of this section)
-
-4. **Failure Count Trends**
-   - Extract `failureCount` from circuit transition logs
-   - Alert if consistently > 0 even when circuit is CLOSED
-
-**Example Alert Query:**
-
-```bash
-# Show the most recent circuit-open event; compare its timestamp with the current time
-tail -n 1000 app.log | grep "Circuit breaker OPENED" | tail -n 1
-```
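-
-For the health-endpoint polling above, a minimal sketch of a standalone checker (assumes Node 18+ for the global `fetch`; the URL and alerting mechanism are illustrative):
-
-```typescript
-// Poll the status endpoint every 60s; alert if health stays non-"healthy" for > 2 minutes.
-const STATUS_URL = 'http://localhost:3000/api/v1/internal/rabbitmq/status';
-let unhealthySince: number | null = null;
-
-async function pollRabbitmqHealth(): Promise<void> {
-  const res = await fetch(STATUS_URL);
-  const body = (await res.json()) as { rabbitmq?: { health?: string } };
-  const health = body.rabbitmq?.health;
-  if (health === 'healthy') {
-    unhealthySince = null;
-    return;
-  }
-  unhealthySince ??= Date.now();
-  if (Date.now() - unhealthySince > 2 * 60 * 1000) {
-    console.error(`ALERT: RabbitMQ health has been "${health}" for over 2 minutes`);
-  }
-}
-
-setInterval(() => void pollRabbitmqHealth(), 60_000);
-```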
-
-### Stats Aggregation Monitoring
-
-**Recommended Metrics to Track:**
-
-1. **Zero Score Count**
-   - Alert if `zeroScores` > 10% of total items
-   - Indicates potential data quality issues
-   - Pattern: `grep "📊.*zeroScores=" app.log`
-
-2. **Error Rate**
-   - Alert if `errors / (updated + errors) > 0.05` (5%)
-   - Pattern: Extract `errors` and `updated` from summary logs (a parsing sketch follows at the end of this section)
-
-3. **Score Distribution**
-   - Monitor `min`, `max`, `avg` trends over time
-   - Alert if `avg` drops significantly (e.g., > 50% decrease)
-
-4. **Processing Time**
-   - Use existing start/end timestamps
-   - Alert if aggregation takes > expected duration
-
-**Example Log Analysis:**
-
-```bash
-# Extract ads aggregation metrics
-grep "📊 Ads aggregation complete" app.log | tail -n 10
-
-# Extract video aggregation metrics
-grep "📊 Video aggregation complete" app.log | tail -n 10
-
-# Sum the zero-score counts reported today (UTC)
-grep "📊.*zeroScores=" app.log | \
-  grep "$(date -u +%Y-%m-%d)" | \
-  awk -F'zeroScores=' '{sum+=$2} END {print sum}'
-```
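-
-For the error-rate check above, a minimal sketch of parsing one summary line and applying the 5% threshold (the sample line and threshold follow this document; wiring it into a log shipper is left out):
-
-```typescript
-// Parse "updated" and "errors" from an aggregation summary line and compute the error rate.
-const line =
-  '📊 Ads aggregation complete: updated=150, errors=2, scores(min=0.0234, max=2.4567, avg=0.8923), zeroScores=5';
-
-const match = line.match(/updated=(\d+), errors=(\d+)/);
-if (match) {
-  const updated = Number(match[1]);
-  const errors = Number(match[2]);
-  const errorRate = errors / (updated + errors);
-  if (errorRate > 0.05) {
-    console.warn(`Aggregation error rate ${(errorRate * 100).toFixed(1)}% exceeds the 5% threshold`);
-  }
-}
-```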
-
----
-
-## Log Level Guide
-
-### RabbitMQ Publisher
-
-| Level | What Gets Logged                                                              |
-| ----- | ----------------------------------------------------------------------------- |
-| ERROR | Connection failures, reconnection failures, publish errors                    |
-| WARN  | Circuit opened, reconnection needed during HALF_OPEN                          |
-| INFO  | Circuit state changes, reconnection success, module initialization            |
-| DEBUG | Individual message publishes, connection health checks, reconnection progress |
-
-**Recommended Production Level:** `INFO`
-
-- Captures all state changes and errors
-- Minimal noise (no per-message logs)
-
-### Stats Aggregation
-
-| Level | What Gets Logged                                        |
-| ----- | ------------------------------------------------------- |
-| ERROR | Individual item aggregation failures (with stack trace) |
-| WARN  | (not currently used)                                    |
-| INFO  | Start/end of aggregation runs, summary statistics       |
-| DEBUG | Per-item aggregation details (CTR, score, etc.)         |
-
-**Recommended Production Level:** `INFO`
-
-- One log line per aggregation run
-- Summary statistics for monitoring
-- Individual item details at DEBUG only
-
----
-
-## Testing
-
-### RabbitMQ Status Endpoint
-
-**Test in Development:**
-
-```bash
-# Should return full status
-curl http://localhost:3000/api/v1/internal/rabbitmq/status | jq
-
-# Verify health status reflects circuit state
-# 1. Stop RabbitMQ
-# 2. Generate some events (triggers circuit breaker)
-# 3. Check endpoint - should show "unhealthy"
-# 4. Start RabbitMQ
-# 5. Wait 60s (circuit timeout)
-# 6. Check endpoint - should show "healthy"
-```
-
-**Test in Production:**
-
-```bash
-# Without ENABLE_INTERNAL_ENDPOINTS (should be blocked)
-curl https://prod.example.com/api/v1/internal/rabbitmq/status
-# Expected: {"error": "Internal endpoints are disabled in production", ...}
-
-# With ENABLE_INTERNAL_ENDPOINTS=true (should work)
-export ENABLE_INTERNAL_ENDPOINTS=true
-# Restart app
-curl https://prod.example.com/api/v1/internal/rabbitmq/status
-# Expected: {"timestamp": "...", "rabbitmq": {...}}
-```
-
-### Stats Aggregation Logs
-
-**Trigger Manual Aggregation:**
-
-```bash
-# Trigger ads aggregation
-curl -X POST http://localhost:3001/api/v1/internal/stats/aggregate-ads
-
-# Check logs for summary
-tail -f logs/box-stats-api.log | grep "📊 Ads aggregation"
-
-# Trigger video aggregation
-curl -X POST http://localhost:3001/api/v1/internal/stats/aggregate-videos
-
-# Check logs for summary
-tail -f logs/box-stats-api.log | grep "📊 Video aggregation"
-```
-
-**Verify Metrics:**
-
-1. Insert test data with 0 impressions/clicks
-2. Run the aggregation
-3. Verify that the `zeroScores` count in the summary log matches the number of inserted items
-
----
-
-## Troubleshooting
-
-### RabbitMQ Publisher
-
-**Problem:** Circuit stays OPEN indefinitely
-
-**Diagnosis:**
-
-```bash
-grep "Circuit breaker" app.log | tail -n 20
-```
-
-**Common Causes:**
-
-- RabbitMQ still down
-- Reconnection failing (check for "❌ RabbitMQ reconnection failed")
-- Network connectivity issues
-
-**Solution:**
-
-1. Verify RabbitMQ is running: `docker ps | grep rabbitmq`
-2. Check network: `telnet localhost 5672`
-3. Review reconnection error logs
-4. Restart application if necessary
-
----
-
-**Problem:** Status endpoint returns unhealthy but RabbitMQ is running
-
-**Diagnosis:**
-
-```bash
-curl http://localhost:3000/api/v1/internal/rabbitmq/status | jq .rabbitmq.connection
-```
-
-**Common Causes:**
-
-- Connection established but channel creation failed
-- Circuit breaker in OPEN/HALF_OPEN state
-- Stale connection object
-
-**Solution:**
-
-1. Check `hasConnection` and `hasChannel` separately
-2. Review logs for channel creation errors
-3. Wait for next reconnection cycle (60s)
-
-### Stats Aggregation
-
-**Problem:** High `zeroScores` count
-
-**Diagnosis:**
-
-```bash
-# Find items with zero scores in database
-db.adsGlobalStats.find({ computedScore: 0 }).limit(10)
-```
-
-**Common Causes:**
-
-- Items with no impressions or clicks (new items)
-- Items older than aggregation window
-- Scoring formula issues
-
-**Solution:**
-
-1. Verify items have events in the time window
-2. Check scoring weights configuration
-3. Review formula: `score = w1*log(1+impressions) + w2*ctr + w3*recency`
-
----
-
-**Problem:** Average score dropping over time
-
-**Diagnosis:**
-
-```bash
-# Extract the avg score from each summary line (assumes the first field is the log timestamp)
-grep "📊.*avg=" app.log | \
-  awk '{match($0, /avg=[0-9.]+/); print $1, substr($0, RSTART, RLENGTH)}'
-```
-
-**Common Causes:**
-
-- Increased number of low-quality items
-- Recency decay (older items losing score)
-- Changed scoring weights
-
-**Solution:**
-
-1. Review score distribution (min, max, avg)
-2. Check if `zeroScores` count increased
-3. Verify scoring weights haven't changed
-4. Consider adjusting weights if needed
-
----
-
-## Summary
-
-### Part A: RabbitMQ Publisher ✅
-
-- ✅ Enhanced circuit state transition logs with `failureCount`/`successCount`
-- ✅ Detailed reconnection lifecycle logs with structured messages
-- ✅ Extended `getCircuitStatus()` with connection health flags
-- ✅ New internal status endpoint `/api/v1/internal/rabbitmq/status`
-- ✅ Environment-based access control for production safety
-
-### Part B: Stats Aggregation ✅
-
-- ✅ Summary logs after each aggregation run
-- ✅ Score statistics (min, max, avg) included
-- ✅ Zero score count tracking
-- ✅ Count of updated vs errored items
-- ✅ INFO-level logs (not spammy)
-- ✅ DEBUG-level per-item details preserved
-
-### Impact
-
-**Visibility Improvements:**
-
-- Circuit breaker state changes are now self-explanatory
-- Reconnection progress is traceable end-to-end
-- RabbitMQ health can be monitored programmatically
-- Stats aggregation quality is measurable per run
-
-**Operations Benefits:**
-
-- Faster troubleshooting with structured logs
-- Proactive monitoring via status endpoint
-- Clear separation of summary (INFO) vs detail (DEBUG) logs
-- Production-safe internal endpoints with environment guards

+ 0 - 223
docs/PROJECT_STATE_SUMMARY.md

@@ -1,223 +0,0 @@
-# Project State Summary
-
-## Implementation Summary
-
-### 1. Project structure overview (apps / libs / packages actually present)
-- apps: `apps/box-app-api`, `apps/box-mgnt-api`, `apps/box-stats-api`
-- libs: `libs/common`, `libs/core`, `libs/db`
-- packages: none present
-- data/infra: `prisma/mysql`, `prisma/mongo`, `prisma/mongo-stats`
-- docs/artifacts: `docs`, `dist`, `logs`, `build.log`, `structures.txt`
-
-### 2. Implemented backend applications (what can actually boot/run)
-- box-mgnt-api: Fastify-based NestJS app (`apps/box-mgnt-api/src/main.ts`)
-- box-app-api: Express-based NestJS app (`apps/box-app-api/src/main.ts`)
-- box-stats-api: Express-based NestJS app (`apps/box-stats-api/src/main.ts`)
-
-### 3. Implemented modules per app (auth, user, config, etc.)
-- box-mgnt-api
-  - Core + auth: Auth, User, Role, Menu, LoginLog, OperationLog, QuotaLog
-  - Feature: Ads, Category, Channel, Tag, VideoMedia, SystemParams, SyncVideomedia, ProviderVideoSync, S3, Health, ImageUpload, MgntHttpService
-  - Cache: CacheSyncModule, CoreModule (cache warmups/builders), RedisModule
-  - Cross-cutting: CommonModule (global interceptors/filters/logging), SharedModule (Prisma, HttpModule)
-  - Dev-only: DevVideoCacheModule (loaded only when `NODE_ENV !== 'production'`)
-  - UNUSED: OssModule exists in `apps/box-mgnt-api/src/mgnt-backend/feature/oss/oss.module.ts` but is commented out in `apps/box-mgnt-api/src/mgnt-backend/feature/feature.module.ts`
-- box-app-api
-  - Auth, Ads, Video, Recommendation, Homepage, SysParams, Stats, Health
-  - Infrastructure: PrismaMongoModule (Mongo + MongoStats Prisma services), RedisModule, RedisCacheModule (cache-manager), RabbitmqModule
-- box-stats-api
-  - UserLoginModule, RabbitmqConsumerModule, StatsEventsModule, PrismaMongoModule, RedisModule
-
-### 4. Implemented infrastructure
-- database(s): MySQL (`prisma/mysql`), MongoDB (`prisma/mongo`), MongoDB stats (`prisma/mongo-stats`)
-- ORM/ODM: Prisma (`@prisma/client`), services in `libs/db/src/prisma/*`
-- cache: Redis via `ioredis` (`libs/db/src/redis/*`), cache-manager via `cache-manager-ioredis-yet` in `apps/box-app-api/src/redis/redis-cache.module.ts`
-- message queue: RabbitMQ via `amqplib` (`apps/box-app-api/src/rabbitmq/*`, `apps/box-stats-api/src/feature/rabbitmq/*`)
-
-### 5. Implemented cross-cutting concerns
-- auth / guards
-  - box-mgnt-api: global JWT + RBAC guards (`apps/box-mgnt-api/src/mgnt-backend/core/auth/auth.module.ts`), LocalAuthGuard for login, RateLimitGuard on auth endpoints, MFA flow in auth service/guards
-  - box-app-api: JwtAuthGuard used on specific endpoints (ads click/impression, stats events, video click, rabbitmq fallback)
-  - UNUSED: `MfaGuard` exists only in `libs/common/src/guards/mfa.guard.ts` with no references
-- exception filters
-  - HttpExceptionFilter applied globally via `libs/common/src/common.module.ts` and `apps/box-app-api/src/app.module.ts`
-  - UNUSED: `AllExceptionsFilter` exists only in `libs/common/src/filters/all-exceptions.filter.ts`
-- logging
-  - box-mgnt-api: `nestjs-pino` + Correlation/Logging/OperationLog/Response interceptors in `libs/common/src/common.module.ts`
-  - box-app-api / box-stats-api: Nest Logger in `main.ts`
-- config loading
-  - box-mgnt-api: `ConfigModule` with env validation (`apps/box-mgnt-api/src/config/env.validation.ts`)
-  - box-app-api: `ConfigModule` with `.env.app` + `.env`
-  - box-stats-api: `ConfigModule` with `.env.stats` + `.env`
-
-### 6. Implemented APIs/endpoints (high-level, no payload details)
-- box-app-api (global prefix `/api/v1`)
-  - Auth: `/auth/login`
-  - Ads: `/ads/list`, `/ads/click`, `/ads/impression`
-  - Video: `/video/recommended`, `/video/category/:channelId/:categoryId/latest`, `/video/categories-with-tags`, `/video/list`, `/video/search-by-tag`, `/video/click`
-  - Recommendation (public): `/recommend/video/:videoId`, `/recommend/ad/:adId`
-  - Recommendation (internal-style): `/api/v1/recommendation/videos/:videoId/similar`, `/api/v1/recommendation/ads/:adId/similar`, `/api/v1/recommendation/ads/:adId/similar-simple`
-  - Homepage: `/homepage`
-  - Sys params: `/sys-params/ad-types`
-  - Stats events: `/stats/ad/click`, `/stats/video/click`, `/stats/ad/impression`
-  - Health: `/health`
-  - RabbitMQ: `/rabbitmq/fallback/replay`, `/rabbitmq/fallback/status`, `/rabbitmq/fallback/queue-size`, `/api/v1/internal/rabbitmq/status`
-- box-stats-api (global prefix `/api/v1`)
-  - Internal stats: `/internal/stats/ingestion`, `/internal/stats/aggregate/ads`, `/internal/stats/aggregate/videos`
-  - Debug: `/internal/stats/debug/redis/videos/top`, `/internal/stats/debug/redis/ads/top`
-- box-mgnt-api (global prefix `/api/v1`, routed under `/mgnt`)
-  - Auth: `/api/v1/mgnt/auth/login`, `/login2fa`, `/logout`, `/2fa/generate`, `/2fa/setup`, `/2fa/enable`, `/permission`, `/abilities`
-  - Users: `/api/v1/mgnt/users` (CRUD + list/search), `/me`, `/me/password`, `/me/personal-info`, `/role/:roleId`, `/admin-remove-2fa`, `/admin-set-2fa`, `/reset-password`
-  - Roles: `/api/v1/mgnt/roles` (list/create/update/delete/permissions)
-  - Menus: `/api/v1/mgnt/menus` (list/tree/create/update/delete)
-  - Logs: `/api/v1/mgnt/login-logs`, `/operation-logs`, `/quota-logs`
-  - System params: `/api/v1/mgnt/system-param` (list/create/update/get)
-  - Ads: `/api/v1/mgnt/ads` (list/get/create/update/delete), `/ads/:id/cover`, `/ads/modules/list`
-  - Ads legacy images: `/api/v1/mgnt/ads-legacy/*`
-  - Categories: `/api/v1/mgnt/categories`
-  - Channels: `/api/v1/mgnt/channels`
-  - Tags: `/api/v1/mgnt/tags`
-  - Video media: `/api/v1/mgnt/video-media` (list/create/update/status/sync/etc.)
-  - Sync: `/api/v1/mgnt/sync-videomedia`, `/api/v1/mgnt/provider-video-sync`
-  - S3 utilities: `/api/v1/mgnt/s3/*` (policy, signed URL/image/video helpers)
-  - Health: `/api/v1/mgnt/health/redis`
-  - Cache sync/admin: `/api/v1/mgnt/cache/*`, `/mgnt-debug/cache-sync/*`, `/api/v1/mgnt/admin/cache/video/*`
-  - Dev-only (non-prod): `/api/v1/mgnt/dev/cache/*`, `/api/v1/mgnt/dev/video/*`
-
-### 7. Implemented build / dev / deploy scripts
-- Dev: `dev:mgnt`, `dev:app`, `dev:stats`
-- Build: `build:mgnt`, `build:app`, `build:stats`
-- Start: `start:mgnt`, `start:app`, `start:stats`
-- Prisma: generate/migrate/seed/setup for MySQL and Mongo
-- Tooling: `typecheck`, `typecheck:watch`, `lint`, `lint:fix`
-
-### 8. Known constraints enforced in code
-- Global API prefixing: `/api/v1` on all apps; management APIs additionally under `/mgnt` router
-- ValidationPipe with `whitelist` and `transform` enabled (all apps)
-- CORS configured by env (`APP_CORS_ORIGIN`) with defaults and method restrictions
-- File upload size limit 30MB for mgnt Fastify multipart
-- Internal endpoints guarded by environment checks (RabbitMQ status) and dev-only module gating by `NODE_ENV`
-- Auth enforcement: mgnt uses global JWT + RBAC guards; login endpoints rate-limited; app/stats use JwtAuthGuard on selected endpoints
-- BigInt JSON serialization override in mgnt main
-
-### 9. Explicit TODOs or TODO comments found in code
-- `apps/box-stats-api/src/feature/stats-events/stats-aggregation.service.ts`: TODO for tag-based ad stats
-- `apps/box-mgnt-api/src/mgnt-backend/feature/video-media/video-media.controller.ts`: TODO for delete video media
-- `apps/box-mgnt-api/src/cache-sync/cache-sync.service.ts`: TODO for HOME/CHANNEL/TRENDING list rebuild
-- `libs/core/src/cache/video/list/video-list-cache.builder.ts`: multiple TODOs for refactor after schema change
-
-### 10. Ambiguous or risky areas that require human confirmation
-- UNCLEAR – controller paths in `apps/box-app-api/src/feature/recommendation/recommendation.controller.ts` and `apps/box-app-api/src/rabbitmq/rabbitmq-status.controller.ts` hardcode `api/v1` while `apps/box-app-api/src/main.ts` also sets a global `api/v1` prefix; confirm whether routes are intended to be `/api/v1/api/v1/...` or just `/api/v1/...`
-- UNCLEAR – `GET /api/v1/video/recommended` reads `req.user` but has no guard; confirm if another auth middleware populates `req.user` or if this is intended to be public
-- UNCLEAR – `apps/box-stats-api/src/feature/stats-events/stats-internal.controller.ts` exposes internal/debug endpoints without auth; confirm expected protection model
-- UNUSED – commented endpoints (e.g., `apps/box-app-api/src/feature/sys-params/sys-params.controller.ts` constants, `apps/box-app-api/src/feature/ads/ad.controller.ts` ad URL, `apps/box-mgnt-api/src/mgnt-backend/core/auth/auth.controller.ts` OAuth routes) appear intentionally disabled; confirm whether they should stay disabled
-
-## Feature Classification
-
-| Feature / Module | Status | Evidence in Code | Risk if Touched |
-|---|---|---|---|
-| box-mgnt-api app | PRODUCTION-READY | `apps/box-mgnt-api/src/main.ts`, `apps/box-mgnt-api/src/app.module.ts`, scripts `dev:mgnt/start:mgnt` in `package.json` | High: core admin backend |
-| Mgnt Core Auth (JWT/RBAC/2FA) | PRODUCTION-READY | `apps/box-mgnt-api/src/mgnt-backend/core/auth/auth.module.ts` is imported via `apps/box-mgnt-api/src/mgnt-backend/mgnt-backend.module.ts` | High: login + permissions |
-| Mgnt Users | PRODUCTION-READY | `apps/box-mgnt-api/src/mgnt-backend/core/user/user.module.ts` wired in `apps/box-mgnt-api/src/mgnt-backend/mgnt-backend.module.ts` | High: admin user mgmt |
-| Mgnt Roles | PRODUCTION-READY | `apps/box-mgnt-api/src/mgnt-backend/core/role/role.module.ts` wired in `apps/box-mgnt-api/src/mgnt-backend/mgnt-backend.module.ts` | High: RBAC |
-| Mgnt Menus | PRODUCTION-READY | `apps/box-mgnt-api/src/mgnt-backend/core/menu/menu.module.ts` wired in `apps/box-mgnt-api/src/mgnt-backend/mgnt-backend.module.ts` | Medium: admin UI navigation |
-| Mgnt Logs (login/operation/quota) | PRODUCTION-READY | `apps/box-mgnt-api/src/mgnt-backend/core/logging/*` wired in `apps/box-mgnt-api/src/mgnt-backend/mgnt-backend.module.ts` | Medium: audit trails |
-| Mgnt Ads management | PRODUCTION-READY | `apps/box-mgnt-api/src/mgnt-backend/feature/ads/ads.module.ts` wired in `apps/box-mgnt-api/src/mgnt-backend/mgnt-backend.module.ts` | High: ad CRUD |
-| Mgnt Categories | PRODUCTION-READY | `apps/box-mgnt-api/src/mgnt-backend/feature/category/category.module.ts` wired in `apps/box-mgnt-api/src/mgnt-backend/mgnt-backend.module.ts` | Medium: app taxonomy |
-| Mgnt Channels | PRODUCTION-READY | `apps/box-mgnt-api/src/mgnt-backend/feature/channel/channel.module.ts` wired in `apps/box-mgnt-api/src/mgnt-backend/mgnt-backend.module.ts` | Medium |
-| Mgnt Tags | PRODUCTION-READY | `apps/box-mgnt-api/src/mgnt-backend/feature/tag/tag.module.ts` wired in `apps/box-mgnt-api/src/mgnt-backend/mgnt-backend.module.ts` | Medium |
-| Mgnt Video Media | PRODUCTION-READY | `apps/box-mgnt-api/src/mgnt-backend/feature/video-media/video-media.module.ts` wired in `apps/box-mgnt-api/src/mgnt-backend/mgnt-backend.module.ts` | High: video catalog |
-| Mgnt System Params | PRODUCTION-READY | `apps/box-mgnt-api/src/mgnt-backend/feature/system-params/system-params.module.ts` wired in `apps/box-mgnt-api/src/mgnt-backend/mgnt-backend.module.ts` | Medium |
-| Mgnt Sync Video Media | PRODUCTION-READY | `apps/box-mgnt-api/src/mgnt-backend/feature/sync-videomedia/sync-videomedia.module.ts` wired in `apps/box-mgnt-api/src/mgnt-backend/mgnt-backend.module.ts` | Medium: sync logic |
-| Mgnt Provider Video Sync | PRODUCTION-READY | `apps/box-mgnt-api/src/mgnt-backend/feature/provider-video-sync/provider-video-sync.module.ts` wired in `apps/box-mgnt-api/src/mgnt-backend/feature/feature.module.ts` | Medium |
-| Mgnt S3 endpoints | PRODUCTION-READY | `apps/box-mgnt-api/src/mgnt-backend/feature/s3/s3.module.ts` wired in `apps/box-mgnt-api/src/mgnt-backend/mgnt-backend.module.ts` | Medium: upload/signing |
-| Mgnt Health (Redis) | INTERNAL-ONLY | `apps/box-mgnt-api/src/mgnt-backend/feature/health/health.module.ts` wired in `apps/box-mgnt-api/src/mgnt-backend/mgnt-backend.module.ts` | Low: monitoring |
-| Mgnt Cache Sync (admin/debug) | INTERNAL-ONLY | `apps/box-mgnt-api/src/cache-sync/cache-sync.module.ts` wired in `apps/box-mgnt-api/src/app.module.ts` | Medium: cache integrity |
-| Mgnt Dev Video Cache tools | INTERNAL-ONLY | `apps/box-mgnt-api/src/dev/dev-video-cache.module.ts` only imported when `NODE_ENV !== 'production'` in `apps/box-mgnt-api/src/app.module.ts` | Medium: dev diagnostics |
-| Mgnt ImageUpload service | PRODUCTION-READY | `apps/box-mgnt-api/src/mgnt-backend/feature/image-upload/image-upload.module.ts` imported by feature services (ads/video-media) | Medium: media pipeline |
-| Mgnt MgntHttpService | PRODUCTION-READY | `apps/box-mgnt-api/src/mgnt-backend/feature/mgnt-http-service/mgnt-http-service.module.ts` wired in `apps/box-mgnt-api/src/mgnt-backend/feature/feature.module.ts` | Low |
-| Mgnt OSS module | DEAD / UNUSED | `apps/box-mgnt-api/src/mgnt-backend/feature/oss/oss.module.ts` not imported (commented in `apps/box-mgnt-api/src/mgnt-backend/feature/feature.module.ts`) | Low: unused |
-| box-app-api app | PRODUCTION-READY | `apps/box-app-api/src/main.ts`, `apps/box-app-api/src/app.module.ts`, scripts `dev:app/start:app` in `package.json` | High: public API |
-| App Auth (JWT) | PRODUCTION-READY | `apps/box-app-api/src/feature/auth/auth.module.ts` imported in `apps/box-app-api/src/app.module.ts` | High |
-| App Ads API | PRODUCTION-READY | `apps/box-app-api/src/feature/ads/ad.module.ts` imported in `apps/box-app-api/src/app.module.ts` | High |
-| App Video API | PRODUCTION-READY | `apps/box-app-api/src/feature/video/video.module.ts` imported in `apps/box-app-api/src/app.module.ts` | High |
-| App Homepage API | PRODUCTION-READY | `apps/box-app-api/src/feature/homepage/homepage.module.ts` imported in `apps/box-app-api/src/app.module.ts` | Medium |
-| App Sys Params API | PRODUCTION-READY | `apps/box-app-api/src/feature/sys-params/sys-params.module.ts` imported in `apps/box-app-api/src/app.module.ts` | Medium |
-| App Recommendation API (public) | PRODUCTION-READY | `apps/box-app-api/src/feature/recommendation/recommendation.module.ts` imported in `apps/box-app-api/src/app.module.ts` | Medium |
-| App Stats Events publisher | PRODUCTION-READY | `apps/box-app-api/src/feature/stats/stats.module.ts` imported in `apps/box-app-api/src/app.module.ts` | Medium |
-| App Health endpoint | INTERNAL-ONLY | `apps/box-app-api/src/health/health.module.ts` imported in `apps/box-app-api/src/app.module.ts` | Low |
-| App RabbitMQ publisher + fallback | INTERNAL-ONLY | `apps/box-app-api/src/rabbitmq/rabbitmq.module.ts` imported in `apps/box-app-api/src/app.module.ts` | Medium: event pipeline |
-| App RabbitMQ status endpoint | INTERNAL-ONLY | env guard in `apps/box-app-api/src/rabbitmq/rabbitmq-status.controller.ts` | Low |
-| box-stats-api app | PRODUCTION-READY | `apps/box-stats-api/src/main.ts`, `apps/box-stats-api/src/app.module.ts`, scripts `dev:stats/start:stats` in `package.json` | High: stats processing |
-| Stats RabbitMQ consumer | PRODUCTION-READY | `apps/box-stats-api/src/feature/rabbitmq/rabbitmq-consumer.module.ts` imported in `apps/box-stats-api/src/app.module.ts` | Medium |
-| Stats aggregation scheduler | PRODUCTION-READY | `apps/box-stats-api/src/feature/stats-events/stats-events.module.ts` imported in `apps/box-stats-api/src/app.module.ts` | Medium |
-| Stats internal/debug endpoints | INTERNAL-ONLY | `apps/box-stats-api/src/feature/stats-events/stats-internal.controller.ts` | Low |
-| Shared Prisma services (MySQL/Mongo/MongoStats) | PRODUCTION-READY | `libs/db/src/prisma/prisma.module.ts` exported by `libs/db/src/shared.module.ts` used in mgnt/stats | High: data access |
-| Redis module/service | PRODUCTION-READY | `libs/db/src/redis/redis.module.ts` used in all apps | High: cache/messaging |
-| CommonModule interceptors/filters | PRODUCTION-READY | `libs/common/src/common.module.ts` imported in `apps/box-mgnt-api/src/app.module.ts` | Medium: response/logging |
-| AllExceptionsFilter | DEAD / UNUSED | `libs/common/src/filters/all-exceptions.filter.ts` no wiring found | Low |
-| MfaGuard | DEAD / UNUSED | `libs/common/src/guards/mfa.guard.ts` no usage | Low |
-
-## How This Project Currently Runs
-
-### 1) Start `box-mgnt-api`
-- Command(s): `pnpm dev:mgnt` (nest start), or `pnpm build:mgnt` then `pnpm start:mgnt` (`node dist/apps/box-mgnt-api/src/main.js`)
-- Env loading: `ConfigModule.forRoot` uses `.env.mgnt` then `.env` in `apps/box-mgnt-api/src/app.module.ts`
-- REQUIRED (validated): `MYSQL_URL`, `MONGO_URL`, `JWT_SECRET`, `REDIS_HOST`, `REDIS_PORT`
-- REQUIRED in practice: `MONGO_STATS_URL` (Prisma MongoStats connects on init; schema expects `env("MONGO_STATS_URL")`)
-- OPTIONAL (defaults/hardcoded): `APP_HOST`, `HOST`, `APP_PORT`/`PORT`, `APP_CORS_ORIGIN`, `REDIS_PASSWORD`, `REDIS_DB`, `REDIS_KEY_PREFIX`, `REDIS_TLS`, `JWT_EXPIRES_IN_SECONDS`, `ENCRYPTION_KEY`
-- External services contacted at runtime: MySQL (Prisma connect), MongoDB (Prisma connect), MongoStats (Prisma connect), Redis (ioredis client init). S3 only when S3 endpoints are used; `S3Service` relies on `AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY`, `AWS_S3_REGION_NAME`, `AWS_STORAGE_BUCKET_NAME`, optional `AWS_S3_ENDPOINT_URL`
-
-### 2) Start `box-app-api`
-- Command(s): `pnpm dev:app` (uses `dotenv -e .env.app -- nest start ...`), or `pnpm build:app` then `pnpm start:app` (`node dist/apps/box-app-api/src/main.js`)
-- Env loading: `ConfigModule.forRoot` uses `.env.app` then `.env` in `apps/box-app-api/src/app.module.ts` (dev command also sets env)
-- REQUIRED: `MONGO_URL`, `MONGO_STATS_URL` (Prisma Mongo + MongoStats clients connect on init; schemas use `env("MONGO_URL")` and `env("MONGO_STATS_URL")`)
-- REQUIRED in practice: Redis connection parameters; defaults are used if unset (`REDIS_HOST` defaults to `127.0.0.1`, `REDIS_PORT` defaults to `6379`)
-- OPTIONAL (defaults/hardcoded): `APP_HOST`, `HOST`, `APP_PORT`/`PORT`, `APP_CORS_ORIGIN`, `IMAGE_ROOT_PATH` (default `/data/box-images`), `JWT_SECRET` (defaults to `default-secret-key`), `JWT_EXPIRES_IN` (defaults to `7d`)
-- RabbitMQ: `RABBITMQ_URL` is OPTIONAL; if missing, publisher logs and stays open-circuit
-- External services contacted at runtime: MongoDB, MongoStats, Redis (both RedisModule and cache-manager). RabbitMQ if `RABBITMQ_URL` is set. Filesystem for static images at `IMAGE_ROOT_PATH` via Express static middleware
-
-### 3) Start `box-stats-api`
-- Command(s): `pnpm dev:stats` (uses `dotenv -e .env.stats -- nest start ...`), or `pnpm build:stats` then `pnpm start:stats` (`node dist/apps/box-stats-api/src/main.js`)
-- Env loading: `ConfigModule.forRoot` uses `.env.stats` then `.env` in `apps/box-stats-api/src/app.module.ts`
-- REQUIRED: `MONGO_STATS_URL` (Prisma MongoStats connects on init)
-- REQUIRED in practice: Redis connection parameters; defaults are used if unset (`REDIS_HOST` default `127.0.0.1`, `REDIS_PORT` default `6379`)
-- OPTIONAL (defaults/hardcoded): `APP_HOST`, `HOST`, `APP_PORT`/`PORT`, `APP_CORS_ORIGIN`
-- RabbitMQ: `RABBITMQ_URL` OPTIONAL; consumer logs error and returns without connecting if missing
-- External services contacted at runtime: MongoStats, Redis. RabbitMQ only if `RABBITMQ_URL` is set
-
-### Hardcoded/default behaviors to note
-- `box-app-api` JWT secret defaults to `default-secret-key` and expiration defaults to `7d` if env is missing
-- All apps default to binding host `0.0.0.0` and ports `3300/3301/3302` if `APP_PORT`/`PORT` not set
-- `box-app-api` and `box-mgnt-api` serve static images from `/data/box-images` if `IMAGE_ROOT_PATH` is not set
-
-## Plan vs Current Implementation
-
-### 1. Features already implemented beyond the original plan
-- box-stats-api exists and runs with RabbitMQ consumer + aggregation + internal endpoints, while docs call it “future/upcoming” (`docs/ARCHITECTURE.md`, `docs/SYSTEM_OVERVIEW.md`, `apps/box-stats-api/src/app.module.ts`, `apps/box-stats-api/src/feature/stats-events/*`)
-- Recommendation APIs (similar ads/videos + “guess you like”) are implemented but not mentioned in architecture/roadmap (`apps/box-app-api/src/feature/recommendation/*`)
-- RabbitMQ resilience layer with fallback replay + DLQ for stats events exists but is not called out in early planning docs (`apps/box-app-api/src/rabbitmq/rabbitmq-publisher.service.ts`, `apps/box-app-api/src/rabbitmq/rabbitmq-fallback-replay.controller.ts`)
-- Dev-only cache tooling and production cache-admin endpoints exist beyond high-level planning notes (`apps/box-mgnt-api/src/dev/*`, `apps/box-mgnt-api/src/cache-sync/*`)
-
-### 2. Planned items that are not implemented
-- TODO: delete video media endpoint (`apps/box-mgnt-api/src/mgnt-backend/feature/video-media/video-media.controller.ts`)
-- TODO: cache rebuild for HOME/CHANNEL/TRENDING lists (`apps/box-mgnt-api/src/cache-sync/cache-sync.service.ts`)
-- TODO: tag-based ad aggregation (`apps/box-stats-api/src/feature/stats-events/stats-aggregation.service.ts`)
-- TODO: refactors after schema changes for video list cache (`libs/core/src/cache/video/list/video-list-cache.builder.ts`)
-- Planned “optimization/scalability/monitoring” phases are not represented as concrete modules in code (Phase 5–8 in `docs/ROADMAP.md`)
-
-### 3. Design decisions that diverged from early planning
-- “Stats API (future/upcoming)” in planning docs diverges from code that already implements and wires it (`docs/ARCHITECTURE.md`, `docs/SYSTEM_OVERVIEW.md`, `apps/box-stats-api/src/*`)
-- “Redis-first with Mongo fallback (future extension)” diverges from app code that directly uses Mongo Prisma in multiple services (`apps/box-app-api/src/feature/*/*.service.ts`)
-
-### 4. Irreversible decisions (schema, contracts, public APIs)
-- Database schemas are established in Prisma for MySQL/Mongo/MongoStats (`prisma/mysql/schema/*.prisma`, `prisma/mongo/schema/*.prisma`, `prisma/mongo-stats/schema/*.prisma`)
-- Public API route structure is fixed to `/api/v1` and `/api/v1/mgnt/*` with specific controllers already defined (`apps/*/src/main.ts`, `apps/box-mgnt-api/src/mgnt-backend/mgnt-backend.module.ts`, controller files under `apps/box-app-api/src/feature/*` and `apps/box-mgnt-api/src/mgnt-backend/*`)
-- Stats ingestion routes and internal debug endpoints are defined and exposed in code, creating implied contracts (`apps/box-app-api/src/feature/stats/stats.controller.ts`, `apps/box-stats-api/src/feature/stats-events/stats-internal.controller.ts`)
-
-## Resume Checklist
-
-- Safe to continue: `apps/box-app-api/src/feature/homepage`, `apps/box-app-api/src/feature/sys-params`, `apps/box-mgnt-api/src/mgnt-backend/feature/tag`, `apps/box-mgnt-api/src/mgnt-backend/feature/category`, `apps/box-mgnt-api/src/mgnt-backend/feature/channel`, `apps/box-mgnt-api/src/mgnt-backend/feature/system-params` (isolated CRUD, clear module wiring)
-- Fragile—don’t touch without tests: auth/guards (`apps/box-mgnt-api/src/mgnt-backend/core/auth`, `apps/box-app-api/src/feature/auth`), cache sync/warmups (`apps/box-mgnt-api/src/cache-sync`, `libs/core/src/cache/*`), RabbitMQ publisher/consumer (`apps/box-app-api/src/rabbitmq`, `apps/box-stats-api/src/feature/rabbitmq`)
-- Missing glue code blocking features: TODOs in cache rebuild paths (`apps/box-mgnt-api/src/cache-sync/cache-sync.service.ts`), video list cache refactors (`libs/core/src/cache/video/list/video-list-cache.builder.ts`), video media delete endpoint (`apps/box-mgnt-api/src/mgnt-backend/feature/video-media/video-media.controller.ts`)
-- Unknowns to clarify before new work: route prefixing ambiguity (`apps/box-app-api/src/feature/recommendation/recommendation.controller.ts` vs global prefix), auth expectations for `GET /api/v1/video/recommended`, stats internal endpoints access model, required env vars for S3 usage (`apps/box-mgnt-api/src/mgnt-backend/feature/s3/s3.service.ts`) and RabbitMQ availability expectations

+ 0 - 519
docs/RABBITMQ_FALLBACK_REPLAY.md

@@ -1,519 +0,0 @@
-# RabbitMQ Fallback Replay Service
-
-## Overview
-
-The **RabbitmqFallbackReplayService** automatically replays messages stored in the Redis fallback queue when RabbitMQ recovers from outages. This completes the recovery story with zero manual intervention.
-
-## How It Works
-
-### Message Flow
-
-```
-Normal Operation:
-  Event → RabbitMQ → Success ✅
-
-Circuit Breaker OPEN (RabbitMQ down):
-  Event → Fallback Queue (Redis) → DLQ 📦
-
-RabbitMQ Recovers:
-  Circuit Breaker → HALF_OPEN → CLOSED ✅
-  Replay Service → Read Fallback Queue → Republish → Success ✅
-```
-
-### Architecture
-
-```mermaid
-graph TD
-    A[Message Fails] --> B[Stored in Redis Fallback Queue]
-    B --> C{RabbitMQ Recovered?}
-    C -->|No| D[Wait for CRON Job]
-    D --> C
-    C -->|Yes| E[Replay Service Triggered]
-    E --> F[SCAN Redis Keys]
-    F --> G[Parse Routing Key + MessageID]
-    G --> H[Read Payload]
-    H --> I{Replay Success?}
-    I -->|Yes| J[Delete from Fallback]
-    I -->|No| K[Keep in Fallback + DLQ]
-    J --> L[Process Next Message]
-    K --> L
-```
-
-## Configuration
-
-### Environment Variables
-
-```bash
-# Enable/disable automatic CRON job replay
-RABBITMQ_FALLBACK_REPLAY_ENABLED=true   # Enable automatic replay
-RABBITMQ_FALLBACK_REPLAY_ENABLED=false  # Disable (manual trigger only)
-```
-
-### CRON Schedule
-
-- **Automatic Replay**: Every 5 minutes when `RABBITMQ_FALLBACK_REPLAY_ENABLED=true`
-- **Batch Size**: 100 messages per run (configurable via API)
-
-## Redis Key Format
-
-Fallback messages are stored with this pattern:
-
-```
-rabbitmq:fallback:{routingKey}:{messageId}
-```
-
-**Examples:**
-
-```
-rabbitmq:fallback:stats.ad.click:a1b2c3d4-uuid
-rabbitmq:fallback:stats.video.click:e5f6g7h8-uuid
-rabbitmq:fallback:stats.ad.impression:i9j0k1l2-uuid
-```
-
-**Payload Structure:**
-
-```json
-{
-  "messageId": "a1b2c3d4-uuid",
-  "uid": "user123",
-  "adId": "ad456",
-  "adType": "banner",
-  "clickedAt": "2025-01-01T12:00:00Z",
-  "ip": "192.168.1.1"
-}
-```
-
-**TTL:** 24 hours (86400 seconds)
-
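-A minimal sketch of how a failed event could be written under this scheme. The helper name `storeInFallbackQueue` appears in the recovery-flow diagram below; the `setJson` TTL argument is an assumption, while the key format and 24-hour TTL follow this document:
-
-```typescript
-// Sketch: store a failed event under rabbitmq:fallback:{routingKey}:{messageId} with a 24h TTL.
-private async storeInFallbackQueue(
-  routingKey: string,
-  payload: { messageId: string } & Record<string, unknown>,
-): Promise<void> {
-  const key = `rabbitmq:fallback:${routingKey}:${payload.messageId}`;
-  await this.redis.setJson(key, payload, 86400); // expire after 24 hours (86400 seconds)
-}
-```
-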
-## API Endpoints
-
-All endpoints require JWT authentication (`@UseGuards(JwtAuthGuard)`).
-
-### 1. Manual Replay
-
-**POST** `/rabbitmq/fallback/replay?limit={number}`
-
-Manually trigger replay of fallback messages.
-
-**Query Parameters:**
-
-- `limit` (optional): Maximum messages to replay (default: 100)
-
-**Response:**
-
-```json
-{
-  "scanned": 150, // Total keys found
-  "replayed": 100, // Successfully replayed
-  "failed": 5, // Failed to replay
-  "deleted": 100 // Successfully deleted from fallback
-}
-```
-
-**Example:**
-
-```bash
-curl -X POST \
-  -H "Authorization: Bearer YOUR_JWT_TOKEN" \
-  "http://localhost:3000/rabbitmq/fallback/replay?limit=200"
-```
-
-### 2. Get Status
-
-**GET** `/rabbitmq/fallback/status`
-
-Get current replay service status.
-
-**Response:**
-
-```json
-{
-  "enabled": true, // CRON job enabled
-  "isReplaying": false // Currently running
-}
-```
-
-### 3. Get Queue Size
-
-**GET** `/rabbitmq/fallback/queue-size`
-
-Get count of messages currently in fallback queue.
-
-**Response:**
-
-```json
-{
-  "count": 42
-}
-```
-
-## Service Methods
-
-### Public API
-
-#### `replayOnce(limit: number = 100)`
-
-Manually trigger replay of fallback messages.
-
-**Parameters:**
-
-- `limit` - Maximum number of messages to replay (default: 100)
-
-**Returns:**
-
-```typescript
-{
-  scanned: number; // Total keys found
-  replayed: number; // Successfully replayed
-  failed: number; // Failed to replay
-  deleted: number; // Successfully deleted
-}
-```
-
-**Example:**
-
-```typescript
-const stats = await replayService.replayOnce(200);
-console.log(`Replayed ${stats.replayed}/${stats.scanned} messages`);
-```
-
-#### `getStatus()`
-
-Get current replay service status.
-
-**Returns:**
-
-```typescript
-{
-  enabled: boolean; // CRON job enabled
-  isReplaying: boolean; // Currently running
-}
-```
-
-#### `getFallbackQueueSize()`
-
-Get count of messages in fallback queue using cursor-based SCAN.
-
-**Returns:** `Promise<number>`
-
-### Internal Implementation
-
-#### `automaticReplay()` - CRON Job
-
-Runs every 5 minutes when `RABBITMQ_FALLBACK_REPLAY_ENABLED=true`.
-
-- Replays up to 100 messages per run
-- Skips if already running (prevents concurrent execution)
-- Logs summary statistics at INFO level
-
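-A minimal sketch of this CRON entry point, using the `@Cron` decorator from `@nestjs/schedule` and the guard behaviors described above (the exact method body in the repository may differ):
-
-```typescript
-// Sketch: scheduled replay every 5 minutes, gated by the env flag and the isReplaying guard.
-@Cron(CronExpression.EVERY_5_MINUTES)
-async automaticReplay(): Promise<void> {
-  if (process.env.RABBITMQ_FALLBACK_REPLAY_ENABLED !== 'true') {
-    return; // automatic replay disabled
-  }
-  if (this.isReplaying) {
-    this.logger.warn('Replay already in progress, skipping this run');
-    return;
-  }
-  const stats = await this.replayOnce(100); // up to 100 messages per run
-  this.logger.log(
-    `Fallback replay completed: scanned=${stats.scanned}, replayed=${stats.replayed}, failed=${stats.failed}, deleted=${stats.deleted}`,
-  );
-}
-```
-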
-#### `replayMessage(key: string)` - Private
-
-Replays a single message from the fallback queue.
-
-**Process:**
-
-1. Parse key format: `rabbitmq:fallback:{routingKey}:{messageId}`
-2. Extract `routingKey` and `messageId`
-3. Read payload from Redis using `redis.getJson()`
-4. Call `publisher.replayFallbackMessage(routingKey, payload, messageId)`
-5. Delete key on success using `redis.del()` (see the sketch at the end of this section)
-
-**Returns:**
-
-```typescript
-{
-  success: boolean; // Replay succeeded
-  deleted: boolean; // Key deleted from fallback
-}
-```
-
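-A minimal sketch of the five-step process above, using only the helpers named in this document (`redis.getJson`, `publisher.replayFallbackMessage`, `redis.del`); error handling is simplified:
-
-```typescript
-// Sketch: parse the key, read the payload, republish, and delete on success.
-private async replayMessage(key: string): Promise<{ success: boolean; deleted: boolean }> {
-  // Key format: rabbitmq:fallback:{routingKey}:{messageId}
-  const [, , routingKey, messageId] = key.split(':');
-
-  const payload = await this.redis.getJson(key);
-  if (!payload) {
-    return { success: false, deleted: false };
-  }
-
-  try {
-    await this.publisher.replayFallbackMessage(routingKey, payload, messageId);
-  } catch {
-    // Replay failures stay in the fallback queue; DLQ handling is done by the publisher.
-    return { success: false, deleted: false };
-  }
-
-  await this.redis.del(key);
-  return { success: true, deleted: true };
-}
-```
-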
-## Integration
-
-### Module Setup
-
-```typescript
-// apps/box-app-api/src/rabbitmq/rabbitmq.module.ts
-@Module({
-  imports: [ConfigModule, RedisModule, AuthModule],
-  controllers: [RabbitmqFallbackReplayController],
-  providers: [RabbitmqPublisherService, RabbitmqFallbackReplayService],
-  exports: [RabbitmqPublisherService, RabbitmqFallbackReplayService],
-})
-export class RabbitmqModule {}
-```
-
-### App Module Setup
-
-```typescript
-// apps/box-app-api/src/app.module.ts
-@Module({
-  imports: [
-    ScheduleModule.forRoot(), // Required for @Cron decorator
-    RabbitmqModule,
-    // ... other modules
-  ],
-})
-export class AppModule {}
-```
-
-## Error Handling
-
-### Infinite Loop Prevention
-
-The replay service prevents infinite loops through the **context** parameter:
-
-```typescript
-// Normal publish
-context = 'stats.ad.click adId=123';
-
-// Replay publish
-context = 'fallback-replay routingKey=stats.ad.click';
-```
-
-**Behavior:**
-
-- Normal publish failures → Store to fallback + DLQ
-- Replay publish failures → **Only DLQ** (no re-storage to fallback)
-
-### Circuit Breaker Integration
-
-Replay respects the circuit breaker state:
-
-- **OPEN**: Skip replay (RabbitMQ still down)
-- **HALF_OPEN**: Attempt replay (testing recovery)
-- **CLOSED**: Full replay (RabbitMQ healthy)
-
-### Concurrent Execution Prevention
-
-The service uses an `isReplaying` flag to prevent concurrent runs:
-
-```typescript
-if (this.isReplaying) {
-  this.logger.warn('Replay already in progress, skipping this run');
-  return { scanned: 0, replayed: 0, failed: 0, deleted: 0 };
-}
-```
-
-## Performance Considerations
-
-### SCAN vs KEYS
-
-The service uses cursor-based **SCAN** instead of **KEYS** to avoid blocking Redis:
-
-```typescript
-// ✅ Good: Non-blocking cursor iteration
-let cursor = '0';
-do {
-  const result = await this.redis.scan(cursor, 'MATCH', pattern, 'COUNT', 100);
-  cursor = result[0];
-  const keys = result[1];
-  // Process keys...
-} while (cursor !== '0');
-
-// ❌ Bad: Blocks Redis during execution
-const keys = await this.redis.keys('rabbitmq:fallback:*');
-```
-
-### Batch Processing
-
-- **SCAN Batch Size**: 100 keys per iteration
-- **Replay Batch Size**: Configurable via `limit` parameter
-- **Default Limit**: 100 messages per CRON run
-
-### Logging
-
-- **DEBUG Level**: Individual message details
-- **INFO Level**: Summary statistics only
-- **ERROR Level**: Failures with stack traces
-
-**Example Output:**
-
-```
-[INFO] Starting fallback replay (limit: 100)...
-[INFO] Found 150 fallback messages, processing 100...
-[INFO] Fallback replay completed: scanned=150, replayed=95, failed=5, deleted=95, duration=2345ms
-```
-
-## Monitoring
-
-### Health Checks
-
-Monitor these metrics:
-
-1. **Queue Size**: `/rabbitmq/fallback/queue-size`
-   - Alert if > 1000 messages
-2. **Replay Status**: `/rabbitmq/fallback/status`
-   - Alert if `enabled=false` in production
-3. **Replay Success Rate**: `replayed / scanned`
-   - Alert if < 90%
-
-### Log Analysis
-
-Search for these patterns:
-
-```bash
-# Successful replays
-grep "Fallback replay completed" app.log
-
-# Failed replays
-grep "Error replaying message" app.log
-
-# Circuit breaker blocking replays
-grep "Circuit breaker OPEN" app.log
-```
-
-## Troubleshooting
-
-### Problem: Messages stuck in fallback queue
-
-**Diagnosis:**
-
-```bash
-curl -H "Authorization: Bearer TOKEN" \
-  http://localhost:3000/rabbitmq/fallback/queue-size
-```
-
-**Solutions:**
-
-1. Check RabbitMQ health
-2. Check circuit breaker state
-3. Manually trigger replay with higher limit:
-   ```bash
-   curl -X POST -H "Authorization: Bearer TOKEN" \
-     "http://localhost:3000/rabbitmq/fallback/replay?limit=1000"
-   ```
-
-### Problem: Replay failing continuously
-
-**Diagnosis:**
-Check logs for error patterns:
-
-```bash
-grep "Error replaying message" app.log | tail -n 20
-```
-
-**Common Causes:**
-
-- RabbitMQ still unhealthy (circuit breaker OPEN)
-- Invalid payload format
-- Network connectivity issues
-
-**Solutions:**
-
-1. Verify RabbitMQ is fully recovered
-2. Check circuit breaker state in logs
-3. Inspect DLQ for problematic messages
-
-### Problem: CRON job not running
-
-**Diagnosis:**
-
-```bash
-curl -H "Authorization: Bearer TOKEN" \
-  http://localhost:3000/rabbitmq/fallback/status
-```
-
-**Solutions:**
-
-1. Check environment variable:
-   ```bash
-   echo $RABBITMQ_FALLBACK_REPLAY_ENABLED  # Should be "true"
-   ```
-2. Verify ScheduleModule is imported in AppModule
-3. Check application logs for CRON job messages
-
-## Best Practices
-
-### Production Deployment
-
-1. **Enable CRON Job:**
-
-   ```bash
-   RABBITMQ_FALLBACK_REPLAY_ENABLED=true
-   ```
-
-2. **Monitor Queue Size:**
-   - Set up alerts for queue size > 1000
-   - Indicates persistent RabbitMQ issues
-
-3. **Rate Limit Manual Triggers:**
-   - Avoid triggering replay with limit > 1000
-   - Can overwhelm recovering RabbitMQ broker
-
-4. **Review DLQ Regularly:**
-   - Messages that fail replay go to DLQ
-   - Inspect for systematic issues
-
-### Development/Testing
-
-1. **Disable CRON Job:**
-
-   ```bash
-   RABBITMQ_FALLBACK_REPLAY_ENABLED=false
-   ```
-
-2. **Test Manual Replay:**
-
-   ```bash
-   curl -X POST -H "Authorization: Bearer TOKEN" \
-     "http://localhost:3000/rabbitmq/fallback/replay?limit=10"
-   ```
-
-3. **Simulate Failures:**
-   - Stop RabbitMQ → Generate events → Check fallback queue
-   - Start RabbitMQ → Trigger manual replay → Verify success
-
-## Complete Recovery Flow
-
-```mermaid
-sequenceDiagram
-    participant App as Application
-    participant CB as Circuit Breaker
-    participant RQ as Redis Fallback Queue
-    participant MQ as RabbitMQ
-    participant DLQ as Dead Letter Queue
-    participant RS as Replay Service
-
-    Note over MQ: RabbitMQ goes down
-    App->>CB: publishStatsEvent()
-    CB->>CB: Record failure
-    CB->>CB: State → OPEN
-    App->>RQ: storeInFallbackQueue()
-    App->>DLQ: sendToDLQ()
-
-    Note over MQ: RabbitMQ recovers
-    RS->>RQ: SCAN for fallback keys
-    RQ-->>RS: Return keys
-    RS->>RS: Parse routingKey & messageId
-    RS->>RQ: getJson(key)
-    RQ-->>RS: Return payload
-    RS->>CB: replayFallbackMessage()
-    CB->>CB: Check state (HALF_OPEN)
-    CB->>MQ: Publish message
-    MQ-->>CB: ACK
-    CB->>CB: Record success
-    CB->>CB: State → CLOSED
-    CB-->>RS: Success
-    RS->>RQ: del(key)
-    RQ-->>RS: Deleted
-```
-
-## Summary
-
-The RabbitmqFallbackReplayService provides **automated recovery** from RabbitMQ outages:
-
-✅ **Automatic**: CRON job runs every 5 minutes  
-✅ **Manual Control**: API endpoints for on-demand replay  
-✅ **Safe**: Prevents infinite loops and concurrent execution  
-✅ **Efficient**: Uses SCAN instead of KEYS  
-✅ **Observable**: Status API and detailed logging  
-✅ **Production-Ready**: Integrated with circuit breaker and error handling
-
-**Data Loss Prevention:** 6 layers of protection
-
-1. Retry with exponential backoff
-2. Circuit breaker with auto-recovery
-3. Redis fallback queue ← **This service**
-4. Dead Letter Queue
-5. Publisher idempotency
-6. Message TTL

+ 0 - 19
docs/ROADMAP.md

@@ -1,19 +0,0 @@
-# Roadmap (Updated for Audit-Only)
-
-## Phase 0
-Monorepo, environment, database setup.
-
-## Phase 1
-Auth + RBAC (SuperAdmin-only user management).
-
-## Phase 2
-Mgnt modules with audit-only flows.
-
-## Phase 3
-App API core ads/videos.
-
-## Phase 4
-Stats system (future).
-
-## Phase 5–8
-Optimization, scalability, monitoring.

+ 0 - 21
docs/SYSTEM_OVERVIEW.md

@@ -1,21 +0,0 @@
-# System Overview (Audit-Only, Immediate Publish)
-
-The Box System is a multi-API backend platform built for large-scale content delivery and advertisement placement.  
-
-## Key Principles
-- **Immediate publish**: No Maker–Checker, no approval steps.
-- **Audit-only tracking**: Every change is logged but applied instantly.
-- **RBAC enforcement**: SuperAdmin has full control; Managers/Admins handle content.
-- **Redis-first architecture** for ad/video responses.
-- **MongoDB for content**, **MySQL for admin**, **Redis for performance**.
-
-## Components
-1. **box-mgnt-api (Management)**  
-   Admin tool for managing ads, videos, channels, categories, system parameters.
-
-2. **box-app-api (Consumer)**  
-   Public API serving ads, videos, homepage data.
-
-3. **box-stats-api (Future)**  
-   Event ingestion and stats aggregation.
-

+ 0 - 9
docs/ads-redis-key-inventory.md

@@ -1,9 +0,0 @@
-# Ads Redis Key Inventory
-
-| Key pattern | Writer(s) | Reader(s) | Value type | TTL | Notes |
-| --- | --- | --- | --- | --- | --- |
-| `box:app:adpool:{adType}` | `AdPoolService.rebuildPoolForType` (via `CacheSyncService.rebuildAdsCacheByType`/warm cache) | `readAdPoolEntriesWithLegacySupport` → used by `AdService.getAdForPlacement` and `HomepageService.getAdsByType` | JSON array of `AdPoolEntry` | none | Canonical pool keyed by `AdType`. Writers select `[id,adType,advertiser,title,adsContent,adsCoverImg,adsUrl,imgSource,startDt,expiryDt,seq]` and preserve `seq`-ascending order. Legacy module-keyed pool is rehydrated once and deleted after a warm rebuild.
-| `app:adpool:{adsModuleId}` | *None (legacy)* | `readAdPoolEntriesWithLegacySupport` (fallback path that looks up `AdsModule` by `adType`) | JSON array (legacy schema) | none | Transitional key used only when the new `box:app:adpool` pool is missing. The helper logs a warning, writes the normalized entries into the `adType` pool, and never falls back again.
-| `app:ad:by-id:{adId}` | `CacheSyncService.rebuildSingleAdCache` (triggered by `AdsService.scheduleAdRefresh`) | `AdService.fetchAdDetails` (placement), `HomepageService.fetchAdDetails` | JSON object (per-ad payload: `id,advertiser,title,adsContent,adsCoverImg,adsUrl,imgSource,adType,startDt,expiryDt,seq`) | ~300s (`CacheSyncService.AD_CACHE_TTL`) | Per-ad cache entry with short TTL to keep metadata fresh. Writers delete the key when the ad is inactive or missing.
-
-No Ads keys currently live without writers/readers except for the legacy `app:adpool:{adsModuleId}` fallback. The canonical `box:app:adpool:{adType}` should remain the single source of truth once caches rebuild and the legacy key is pruned.

+ 0 - 19
docs/ads-redis-key-spec.md

@@ -1,19 +0,0 @@
-# Ads Redis Key Spec (new adType schema)
-
-## Key naming
-- `box:app:adpool:{adType}` – JSON array scoped exclusively by the Prisma `AdType` enum (no `adsModuleId`).  
-- `app:ad:by-id:{adId}` – JSON object for a single ad, used by placement services and homepage caches.
-
-## Payload schema
-- All cached entries must consist of the following fields only:  
-  `id`, `adType`, `advertiser`, `title`, `adsContent`, `adsCoverImg`, `adsUrl`, `imgSource`, `startDt`, `expiryDt`, `seq`.
-- Records are included only when `status === 1` and the current epoch-second window satisfies `startDt <= now` and `(expiryDt === 0 || expiryDt >= now)`.
-- Ordering inside the pool is strictly ascending by `seq`; the array represents that ordering so readers can respect ad priority (both the eligibility window and this ordering are illustrated in the sketch after this list).
-- Timestamps (`startDt`, `expiryDt`) stay as BigInt epoch seconds.
-
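-As noted above, a minimal sketch of applying the eligibility window and `seq` ordering when building a pool. The `ads` variable stands for rows already fetched from the ads table; its exact Prisma shape is an assumption:
-
-```typescript
-// Sketch: filter to active ads within the date window, order by seq, keep only the allowed fields.
-const nowSec = BigInt(Math.floor(Date.now() / 1000));
-
-const poolEntries = ads
-  .filter(
-    (ad) =>
-      ad.status === 1 &&
-      ad.startDt <= nowSec &&
-      (ad.expiryDt === 0n || ad.expiryDt >= nowSec),
-  )
-  .sort((a, b) => Number(a.seq) - Number(b.seq))
-  .map(({ id, adType, advertiser, title, adsContent, adsCoverImg, adsUrl, imgSource, startDt, expiryDt, seq }) => ({
-    id, adType, advertiser, title, adsContent, adsCoverImg, adsUrl, imgSource, startDt, expiryDt, seq,
-  }));
-```
-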
-## Value type & TTL
-- `box:app:adpool:{adType}` is stored as a JSON string (array) and has no TTL; cache rebuilds overwrite and regenerate it via the ad-pool service.
-- `app:ad:by-id:{adId}` is stored as JSON (object) with `EX` of 300 seconds (set from the mgnt cache-sync/warmup services). Readers expect this short TTL to keep per-ad payloads fresh.
-
-## Migration notes
-- Writers must drop `adsModuleId` completely and rely on `adType` for both pool entries and per-ad caches. Legacy code should map module IDs to ad types only during migration, never store them in Redis.

BIN
docs/download.png


+ 0 - 12
docs/video-redis-key-inventory.md

@@ -1,12 +0,0 @@
-# Video Redis Key Inventory
-
-| Key pattern | Writer(s) | Reader(s) | Value type | TTL | Notes |
-| --- | --- | --- | --- | --- | --- |
-| `box:app:video:category:list:{categoryId}` | `VideoCategoryCacheBuilder.buildCategoryVideoListForCategory` | `VideoService.getLatestVideosByCategory`, `VideoService.getVideosByCategoryListFallback`, `VideoService.getVideoList` (category-only flow), `searchVideosByTagName` fallback when tag list missing | LIST (video IDs) | none | Canonical list that stores only IDs. `VideoCacheHelper` ensures legacy `box:box:...` keys are handled.
-| `box:app:video:tag:list:{categoryId}:{tagId}` | `VideoCategoryCacheBuilder.buildTagFilteredVideoListForTag` | `VideoService.getVideoList` (tag filter), `VideoService.getVideosByTagListFallback`, `searchVideosByTagName` | LIST (video IDs) | none | Readers rely on the list ordering preserved by the builder.
-| `box:app:tag:list:{categoryId}` | `VideoCategoryCacheBuilder.buildTagMetadataListForCategory` | `VideoService.getTagListForCategory`, `VideoService.getCategoriesWithTags`, `VideoService.getVideoList` (tag lookup), `searchVideosByTagName`, admin diagnostics | LIST (Tag JSON strings) | none | Helper parses JSON into `TagMetadata`. Legacy double-prefixed keys are touched by `VideoCacheHelper` fallback and admin cleanup.
-| `box:app:video:recommended` | `RecommendedVideosCacheBuilder.buildAll` | `VideoService.getRecommendedVideos` | JSON array (`RecommendedVideoItem[]`) | 3600s | TTL enforced by builder; refreshed via cache sync.
-| `box:app:video:detail:{videoId}` | *None (ghost key)* | `VideoService.getVideoDetail` | JSON object (full VideoDetailDto) | unspecified | Readers still reference this key even though no builders populate it; treat it as a legacy/ghost key and consider migrating to the payload cache spec.
-| `box:app:video:payload:{videoId}` | *(planned, follows spec)* | *(future readers)* | JSON object with only `{id,title,coverImg,coverImgNew,videoTime,country,firstTag,secondTags,preFileName,desc,size,updatedAt,filename,fieldNameFs,ext}` | TBD (align with writer) | New minimal payload key described in `docs/video-redis-key-spec.md`. Use `CacheKeys.appVideoPayloadKey` / `tsCacheKeys.video.payload` for generation.
-
-The inventory above complements the `VideoCacheHelper` fallback logic: all LIST readers now go through helpers that try the correct `app:` key first and rewrite any legacy `box:box:` entries encountered.
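
To make that fallback behaviour concrete, here is a hedged sketch of how such a LIST reader might look. The key construction and the rewrite-on-fallback step follow the note above, while the function name and prefix handling are assumptions rather than the actual `VideoCacheHelper` code.

```typescript
// Illustrative list read with a single legacy fallback, not the real helper.
import Redis from 'ioredis';

async function getVideoIdList(redis: Redis, canonicalKey: string): Promise<string[]> {
  // Try the canonical key first (e.g. app:video:category:list:{categoryId};
  // the client-level box: prefix is assumed to be applied by Redis config).
  const ids = await redis.lrange(canonicalKey, 0, -1);
  if (ids.length > 0) return ids;

  // Fall back once to the legacy double-prefixed form and log a warning.
  const legacyKey = `box:${canonicalKey}`;
  const legacyIds = await redis.lrange(legacyKey, 0, -1);
  if (legacyIds.length > 0) {
    console.warn(`video cache: served ${canonicalKey} via legacy key ${legacyKey}`);
    // Rewrite the canonical list so later reads skip the fallback.
    await redis.rpush(canonicalKey, ...legacyIds);
  }
  return legacyIds;
}
```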

+ 0 - 19
docs/video-redis-key-spec.md

@@ -1,19 +0,0 @@
-# Video Redis Key Spec (new semantics)
-
-## Key naming & payload rules
-- `box:app:video:category:list:{categoryId}` – Redis **LIST** of video IDs scoped by category. Readers fetch IDs (via `VideoCacheHelper`) and hydrate details from MongoDB; writers are the category cache builder.
-- `box:app:video:tag:list:{categoryId}:{tagId}` – Redis **LIST** of video IDs filtered by tag membership. Writers are the tag-filtered builder, readers reuse the same helpers as category lists when `tagName` filters are requested.
-- `box:app:tag:list:{categoryId}` – Redis **LIST** of stringified Tag metadata objects (`{id,name,seq,status,categoryId,channelId,createAt,updateAt}`). Only the tag metadata builder writes here; readers rely on `VideoCacheHelper.getTagListForCategory` for typed access.
-- `box:app:video:recommended` – Redis **STRING** (JSON) storing an array of `RecommendedVideoItem` structures produced by `RecommendedVideosCacheBuilder` and consumed by `VideoService.getRecommendedVideos`.
-- `box:app:video:payload:{videoId}` – Redis **STRING** (JSON) intended to hold a **minimal VideoMedia payload** with exactly these fields: `id`, `title`, `coverImg`, `coverImgNew`, `videoTime`, `country`, `firstTag`, `secondTags`, `preFileName`, `desc`, `size`, `updatedAt`, `filename`, `fieldNameFs`, `ext`. `size` remains a BigInt serialized via `RedisService.setJson`. This key mirrors the detail cache but keeps the payload minimal, and caches should only include active videos sourced from the latest Mongo data.
-
-## Value types & TTL
-- Category/tag lists: Redis **LIST** of Mongo ObjectId strings. No TTL is set (lists persist until the next rebuild).
-- Tag metadata lists: Redis **LIST** of JSON strings; TTL is unset so the data lives until the next rebuild.
-- Recommended videos: JSON array stored via `RedisService.setJson` with a **1-hour TTL** (`RecommendedVideosCacheBuilder.CACHE_TTL`).
-- Payload key: JSON string stored via `RedisService.setJson`. TTL stays in sync with the builder that writes it (none by default, but can be added when the payload writer is implemented).
-
-## Migration & hygiene
-- All helpers that read video lists or tag metadata go through `VideoCacheHelper`, which attempts the canonical key (`app:...`) and falls back once to the legacy double-prefixed form (`box:app:...`) before logging a warning.
-- Admin/developer endpoints now delete the canonical patterns (`app:...`) and also sweep legacy `box:app:...` keys, so old double-prefixed data can be removed once traffic has safely moved to the new prefixes.
-- New constants in `CacheKeys`/`tsCacheKeys` expose `appVideoPayloadKey(videoId)` so builders/services can stay aligned with the spec while Redis’ configured `box:` prefix is always applied exactly once.
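
A minimal sketch of what a payload writer following this spec could look like is shown below. `appVideoPayloadKey`, the field types, and the BigInt-safe serialization stand in for `CacheKeys`/`RedisService.setJson` and are assumptions, not the eventual implementation.

```typescript
// Sketch of a minimal-payload writer for box:app:video:payload:{videoId}.
import Redis from 'ioredis';

interface VideoPayload {
  id: string; title: string; coverImg: string; coverImgNew: string;
  videoTime: number; country: string; firstTag: string; secondTags: string[];
  preFileName: string; desc: string; size: bigint; updatedAt: number;
  filename: string; fieldNameFs: string; ext: string;
}

// Assumed equivalent of CacheKeys.appVideoPayloadKey; the configured box:
// prefix is expected to be applied exactly once by the Redis client.
const appVideoPayloadKey = (videoId: string): string => `app:video:payload:${videoId}`;

async function writeVideoPayload(redis: Redis, payload: VideoPayload): Promise<void> {
  // JSON.stringify cannot serialize BigInt, so mimic a setJson-style replacer.
  const json = JSON.stringify(payload, (_key, value) =>
    typeof value === 'bigint' ? value.toString() : value,
  );
  // No TTL by default, per the spec; add an EX argument once the writer defines one.
  await redis.set(appVideoPayloadKey(payload.id), json);
}
```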