Full React+TypeScript app with LP-based optimization engine, drag-and-drop specialization ranking (with touch/arrow support), course selection UI, results dashboard with decision tree, and two optimization modes (maximize-count, priority-order).
## Context
Greenfield React web application. No existing code, backend, or infrastructure. The complete problem domain is documented in `SPECIALIZATION_EVALUATION.md`: 46 courses across 12 elective sets, 14 specializations, a course-specialization qualification matrix with special markers (■/S1/S2), required course gates, and a credit non-duplication constraint that makes this an optimization problem rather than simple counting.

The application runs entirely client-side. All data is embedded as static constants. The solver must be fast enough for instant feedback on every user interaction.
## Goals / Non-Goals
**Goals:**

- Exact optimal credit allocation (LP-based, not heuristic)
- Sub-second recalculation on any user interaction (rank change, pin, mode switch)
- Clear visibility into trade-offs: what each open-set decision enables or eliminates
- Works on modern browsers, no install or backend required

**Non-Goals:**

- Multi-user features, persistence, or accounts (local-only tool)
- Supporting arbitrary program structures (hardcoded to J27 EMBA curriculum)
- Mobile-optimized layout (desktop-first, functional on tablet)
- Printing or export
## Decisions
### 1. Project Tooling: Vite + React + TypeScript

**Choice:** Vite with React and TypeScript.

**Why:** Vite provides a fast dev server and optimized builds. TypeScript catches data model errors at compile time — critical when the course-specialization matrix has ~100 entries that must be correct. No SSR is needed since this is a static client-side tool.

**Alternatives considered:**

- Create React App: deprecated, slower builds
- Next.js: overkill for a single-page client-only tool
### 2. LP Solver: `javascript-lp-solver`
**Choice:** `javascript-lp-solver` — a pure JavaScript simplex implementation.

**Why:** The LP instances are tiny (~168 variables, ~26 constraints). This library is dependency-free, runs synchronously, and solves each instance in microseconds. No WASM compilation step or async loading. The API accepts a JSON model definition, which maps cleanly to our problem structure.

**Alternatives considered:**

- `glpk.js` (WASM GLPK port): more powerful, but WASM loading adds complexity for no benefit at this problem size
- Custom simplex: would avoid a dependency, but reinvents a solved problem for no gain
- No LP solver (combinatorial): credit splitting makes pure combinatorial approaches awkward; LP handles fractional allocation naturally
### 3. Optimization Architecture: Subset Enumeration + LP Feasibility
Both optimization modes share a common primitive: **given a fixed set of selected courses and a target set of specializations, is there a feasible credit allocation where every target spec reaches 9 credits?**
This is a small LP:
- Variables: `x[course][spec]` = credits allocated from course to spec
- Constraints: `Σ_spec x[c][s] ≤ 2.5` for each course, `Σ_course x[c][s] ≥ 9` for each target spec, `x[c][s] = 0` where course doesn't qualify, `x ≥ 0`
- Strategy S2: at most one S2-marked course has `x[c][strategy] > 0`
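As a sketch of how this primitive maps onto the JSON model shape `javascript-lp-solver` accepts, assuming hypothetical names for the data plumbing (the S2 rule is handled by enumeration outside the LP, as described later in this section):

```typescript
// Hypothetical qualification map: courseId -> spec IDs the course counts toward
type Qualification = Record<string, string[]>;

interface LpModel {
  optimize: string;
  opType: "max" | "min";
  constraints: Record<string, { max?: number; min?: number }>;
  variables: Record<string, Record<string, number>>;
}

const COURSE_CREDITS = 2.5;
const SPEC_TARGET = 9;

function buildFeasibilityModel(
  selected: string[],
  targets: string[],
  qualifies: Qualification,
): LpModel {
  const model: LpModel = {
    optimize: "allocated", // any objective works; we only care about feasibility
    opType: "max",
    constraints: {},
    variables: {},
  };
  // Per-course credit cap: Σ_spec x[c][s] ≤ 2.5
  for (const c of selected) model.constraints[`cap_${c}`] = { max: COURSE_CREDITS };
  // Per-target-spec threshold: Σ_course x[c][s] ≥ 9
  for (const s of targets) model.constraints[`spec_${s}`] = { min: SPEC_TARGET };
  for (const c of selected) {
    for (const s of targets) {
      // x[c][s] = 0 where the course doesn't qualify: simply omit the variable
      if (!qualifies[c]?.includes(s)) continue;
      model.variables[`x_${c}_${s}`] = { [`cap_${c}`]: 1, [`spec_${s}`]: 1, allocated: 1 };
    }
  }
  return model;
}
```

Passing this model to the library's `Solve` function yields a result object whose `feasible` flag answers the primitive directly.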
The two modes differ only in how they search for the best feasible target set:
**Maximize Count mode:**

1. Pre-filter: remove specs missing required courses → get candidate list
2. For n = 3 down to 0, enumerate `C(candidates, n)` subsets
3. For each subset, check LP feasibility (with S2 enumeration for Strategy)
4. Stop at the first n for which any subset is feasible — that n is the maximum achievable count
5. Among the feasible subsets of that size, return the one with the best priority score: `Σ (15 - rank[spec])`
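The steps above can be sketched as follows, with `isFeasible` standing in for the LP feasibility check (all names hypothetical):

```typescript
type Feasibility = (targets: string[]) => boolean;

// Enumerate all n-element subsets of `items`, preserving order.
function* combinations<T>(items: T[], n: number): Generator<T[]> {
  if (n === 0) {
    yield [];
    return;
  }
  for (let i = 0; i <= items.length - n; i++) {
    for (const rest of combinations(items.slice(i + 1), n - 1)) {
      yield [items[i], ...rest];
    }
  }
}

function maximizeCount(
  candidates: string[],
  rank: Record<string, number>, // specId -> 1-based priority rank
  isFeasible: Feasibility,
): string[] {
  for (let n = Math.min(3, candidates.length); n >= 0; n--) {
    let best: string[] | null = null;
    let bestScore = -Infinity;
    for (const subset of combinations(candidates, n)) {
      if (!isFeasible(subset)) continue;
      // Priority score: Σ (15 - rank[spec]) — higher-ranked specs break ties on size
      const score = subset.reduce((sum, s) => sum + (15 - rank[s]), 0);
      if (score > bestScore) {
        bestScore = score;
        best = subset;
      }
    }
    if (best) return best; // first (largest) size with any feasible subset
  }
  return [];
}
```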
**Priority Order mode:**

1. Pre-filter same as above
2. Start with `achieved = []`
3. For each spec in priority order: check if `achieved ∪ {spec}` is LP-feasible
4. If yes, add it to `achieved`; if no, skip it
5. At most 14 feasibility checks (one per spec)
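The greedy pass is a few lines; again `isFeasible` is a stand-in for the LP check:

```typescript
function priorityOrder(
  ranked: string[], // spec IDs in the user's priority order
  isFeasible: (targets: string[]) => boolean,
): string[] {
  const achieved: string[] = [];
  for (const spec of ranked) {
    // Keep the spec only if it is still jointly feasible with everything committed so far
    if (isFeasible([...achieved, spec])) achieved.push(spec);
  }
  return achieved;
}
```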
**S2 enumeration:** When Strategy is in the target set, enumerate which S2 course (if any) counts. With typically 0-3 S2 courses selected, this adds at most 4 LP solves per subset check.
**Performance for fixed courses:** Max count mode worst case: `C(14,3) + C(14,2) + C(14,1) = 469` subsets × ~4 S2 options = ~1,876 LP solves. Each LP takes microseconds. Total: < 20ms.
### 4. Decision Tree: Per-Choice Impact Analysis
For open (unpinned) sets, the user needs to understand: "what does picking this course change?"
**Approach:** For each open set, for each course choice in that set:

1. Temporarily pin that course
2. Compute the **ceiling** — best achievable outcome assuming all *other* open sets are chosen optimally
3. Report: ceiling specialization count, which specs become achievable/unreachable vs. current state
**Ceiling computation** requires enumerating remaining open-set combinations. Cost: for each choice being evaluated, enumerate `4^(open-1)` remaining combos, each running the subset enumeration.
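This cost formula can be sketched directly (assuming 4 course choices per open set, as the table below does):

```typescript
// Combinations of the *other* open sets, per candidate course being evaluated
function combosPerChoice(openSets: number): number {
  return Math.pow(4, openSets - 1);
}

// Full decision tree: every choice (4) in every open set gets its own ceiling pass
function totalOptimizations(openSets: number): number {
  const choicesPerSet = 4;
  return combosPerChoice(openSets) * choicesPerSet * openSets;
}
```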
| Open sets | Combos per choice | Total optimizations (× 4 choices × sets) |
|-----------|-------------------|------------------------------------------|
| 2         | 4                 | 32                                       |
| 4         | 64                | 1,024                                    |
| 6         | 1,024             | 24,576                                   |
| 8         | 16,384            | 524,288                                  |
| 10        | 262,144           | too many                                 |
At 6+ open sets, full enumeration becomes expensive (seconds). Solution:
**Tiered computation:**

- **Instant (main thread):** upper-bound reachability per spec using credit counting (ignores sharing). Runs on every interaction. Shows achievable/unreachable status.
- **Fast (main thread):** LP optimization on pinned courses only. Shows guaranteed achievements.
- **Background (Web Worker):** full decision tree enumeration. Runs after user interaction settles (debounced ~300ms). Progressively updates the UI as results arrive per open set.
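The instant tier's bound can be sketched as a simple count (names and the qualification predicate are illustrative):

```typescript
const CREDITS_PER_COURSE = 2.5;
const SPEC_TARGET = 9;

// Upper bound that ignores credit non-duplication: optimistically assume every
// qualifying selected course gives its full credits to this spec. If even this
// bound is below 9, the spec is definitely unreachable.
function upperBoundReachable(
  spec: string,
  selectedCourses: string[],
  qualifies: (courseId: string, spec: string) => boolean,
): boolean {
  const bound =
    selectedCourses.filter((c) => qualifies(c, spec)).length * CREDITS_PER_COURSE;
  return bound >= SPEC_TARGET;
}
```

Because the bound is one-sided, a `true` here only means "possibly achievable"; the LP tiers refine it.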
**Early termination:** when computing ceilings, if we find a 3-spec feasible subset for a choice, we know the ceiling is 3 (the max) and can skip remaining combos for that choice.
**Impact ordering:** open sets are ranked by decision impact = variance in ceiling outcomes across their course choices. A set where all courses lead to the same outcome is "low impact." A set where one course enables 3 specs and another only 2 is "high impact." High-impact sets are shown first.
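The impact score is just the variance of ceiling outcomes across a set's choices (sketch; names illustrative):

```typescript
// Variance of the per-choice ceilings for one open set.
// Zero variance = the decision doesn't matter; higher variance = higher impact.
function decisionImpact(ceilings: number[]): number {
  const mean = ceilings.reduce((a, b) => a + b, 0) / ceilings.length;
  return ceilings.reduce((sum, c) => sum + (c - mean) ** 2, 0) / ceilings.length;
}
```

Open sets are then sorted by this score in descending order before rendering.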
### 5. State Management: `useReducer` with Derived State
**Choice:** Single `useReducer` for user inputs, with derived computation results via `useMemo`.
```
State (persisted to localStorage):
├── specialization ranking: number[] (ordered spec IDs)
├── optimization mode: 'maximize-count' | 'priority-order'
└── pinned courses: Map<SetId, CourseId | null>

Derived (computed):
├── optimization result (from LP solver)
├── per-spec status + credit allocation
└── decision tree (from Web Worker)
```
**Why not Redux/Zustand:** Only three pieces of user state. A reducer handles the interactions (reorder, pin, toggle mode) cleanly without external dependencies.
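A sketch of the reducer over those three pieces of state (type and action names are illustrative, not the actual implementation):

```typescript
type Mode = "maximize-count" | "priority-order";

interface AppState {
  ranking: number[];                      // ordered spec IDs
  mode: Mode;
  pinned: Map<string, string | null>;     // setId -> pinned courseId, null = open
}

type Action =
  | { type: "reorder"; ranking: number[] }
  | { type: "pin"; setId: string; courseId: string | null }
  | { type: "toggle-mode" };

function reducer(state: AppState, action: Action): AppState {
  switch (action.type) {
    case "reorder":
      return { ...state, ranking: action.ranking };
    case "pin": {
      // Copy the Map so React sees a new reference
      const pinned = new Map(state.pinned);
      pinned.set(action.setId, action.courseId);
      return { ...state, pinned };
    }
    case "toggle-mode":
      return {
        ...state,
        mode: state.mode === "maximize-count" ? "priority-order" : "maximize-count",
      };
  }
}
```

Derived results then hang off this state via `useMemo`, keyed on the three inputs.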
### 6. Drag-and-Drop: `@dnd-kit`
**Choice:** `@dnd-kit/core` + `@dnd-kit/sortable` for specialization ranking.
**Why:** Modern, accessible (keyboard support), lightweight, well-maintained. The only drag-and-drop in the app is reordering a 14-item list.
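The only state change the drag produces is a list reorder. A self-contained sketch of the drag-end handling, with a local re-implementation of the `arrayMove` helper that `@dnd-kit/sortable` provides (the handler wiring is illustrative):

```typescript
// Local equivalent of @dnd-kit/sortable's arrayMove, so the sketch has no imports
function arrayMove<T>(items: T[], from: number, to: number): T[] {
  const next = items.slice();
  const [moved] = next.splice(from, 1);
  next.splice(to, 0, moved);
  return next;
}

// On drag end, map the dragged and hovered item IDs to indices and produce the
// reordered ranking; keyboard (arrow-key) reordering dispatches the same update.
function handleReorder(ranking: number[], activeId: number, overId: number): number[] {
  const from = ranking.indexOf(activeId);
  const to = ranking.indexOf(overId);
  return from === -1 || to === -1 || from === to ? ranking : arrayMove(ranking, from, to);
}
```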
**Alternatives considered:**

- `react-beautiful-dnd`: deprecated by Atlassian
- HTML5 drag-and-drop API: poor accessibility, inconsistent browser behavior
### 8. UI Verification: `agent-browser`
**Choice:** `agent-browser` (Vercel Labs) for interactive UI verification during development.
**Why:** The interesting behavior in this app is visual — credit allocations, status transitions, decision tree structure. Unit tests cover the optimizer, but verifying the UI wires everything together correctly requires interacting with the running app. `agent-browser` is a headless browser CLI designed for AI agents: it produces accessibility tree snapshots with semantic element references (`@e1`, `@e2`) that are easy to reason about programmatically, and supports click, fill, drag, and screenshot commands.
**How it's used:** During implementation, after building each UI component group, launch the Vite dev server and use `agent-browser` to:

1. Take accessibility tree snapshots to verify rendered structure (14 specializations listed, 12 elective sets grouped by term, status badges present)
2. Interact with the UI (drag to reorder, click to pin courses, toggle optimization mode)
3. Snapshot again to verify results updated correctly
4. Take screenshots for visual confirmation of layout and styling
This is **interactive verification, not a repeatable test suite**. It complements vitest unit tests (which cover data integrity and optimizer correctness) by catching UI wiring issues — wrong props, missing state connections, broken drag-and-drop — that unit tests cannot.
**Alternatives considered:**

- Playwright: more mature, better for repeatable e2e test suites, but overkill for this project's size. The app is small enough that interactive verification + unit tests cover the risk.
- Manual browser testing: works but slower and less systematic than scripted agent-browser commands.
### 9. Data Embedding: Typed Constants Module
The course-specialization matrix is embedded as a TypeScript module (`src/data/`) exporting typed constants:
- `COURSES`: array of `{ id, name, setId, specializations: { specId: '■' | 'S1' | 'S2' }[] }`
- `ELECTIVE_SETS`: array of `{ id, name, term, courseIds }`
- `SPECIALIZATIONS`: array of `{ id, name, abbreviation, requiredCourseId? }`
Derived lookups (courses-by-set, specs-by-course, etc.) are computed once at module load.
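A sketch of the module shape and one such derived lookup (the sample courses are invented for illustration, and the qualification markers are shown here as a record keyed by spec ID for lookup convenience):

```typescript
interface Course {
  id: string;
  name: string;
  setId: string;
  specializations: Record<string, "■" | "S1" | "S2">; // specId -> marker
}

// Illustrative sample data; the real module embeds all 46 courses
const COURSES: Course[] = [
  { id: "c1", name: "Sample Course A", setId: "set1", specializations: { finance: "■" } },
  { id: "c2", name: "Sample Course B", setId: "set1", specializations: { strategy: "S2" } },
];

// courses-by-set, computed once when the module is first imported
const COURSES_BY_SET: Map<string, Course[]> = COURSES.reduce((map, course) => {
  const list = map.get(course.setId) ?? [];
  list.push(course);
  map.set(course.setId, list);
  return map;
}, new Map<string, Course[]>());
```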
Data is transcribed from `SPECIALIZATION_EVALUATION.md` and verified by a unit test that checks row/column counts, required course mappings, and S1/S2 marker counts against known totals.
## Risks / Trade-offs
**[Decision tree too slow with many open sets]** → Web Worker + progressive rendering + early termination. If 10+ sets are open, fall back to upper-bound reachability only (no full enumeration). Acceptable because with 10 open sets the user hasn't made enough choices for precise decision analysis to matter.
**[Data transcription errors in embedded matrix]** → Unit test validates the embedded data against known aggregate counts (46 courses, 14 specs, specific marker totals per spec from the reachability table in SPECIALIZATION_EVALUATION.md). Any transcription error changes a count and fails the test.
**[LP solver library abandoned or buggy]** → `javascript-lp-solver` is simple enough that the relevant subset (feasibility checking) could be replaced with a custom implementation if needed. The LP instances are trivially small.
**[User confusion between optimization modes]** → Show both results side by side when they differ. When they agree, the mode toggle is less prominent. The interesting moment — when the modes disagree — is where the user learns something.