Compare commits

33 commits: 1.1.1-alpha ... 1.3.0-alpha

Commits (SHA1): e9584929a4, e91b9f6025, 8e97c2fef0, b1701865ab, edb29a9376, efbe6fe77f, 3593291596, 8c5ad5c401, 59433bc896, abd712f694, a41ec656a0, a2e77586f3, 5f5a89512d, 4d707e1504, 85bb6468d6, e578585e89, f9910fb59f, f5dd271c65, 92f5e1c33a, 60cd6bfaec, d7c049e978, c61f66928f, 6b6795bb00, b17205c2f9, f459cbac67, 8b7a90d2a8, 3b50754386, e1e05e82ae, 9c1c11df5a, 0fe118f9e6, b520a20444, 187fb07934, c62e256331

`.github/workflows/release.yml` (vendored, 2 changes)

```diff
@@ -58,7 +58,7 @@ jobs:
       - name: Setup Node.js
         uses: actions/setup-node@v4
         with:
-          node-version: '18'
+          node-version: '20'
           cache: 'npm'

       - name: Install dependencies
```

`CHANGELOG.md` (112 changes)

@@ -6,7 +6,117 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),

---

## [Unreleased]

## [1.3.0] - 2026-02-06

### Added
- **Remote IP access**: New `allowedIPs` setting accepts comma-separated IPs and CIDR ranges (e.g., `100.64.0.0/10` for Tailscale) to allow non-localhost connections
  - Server automatically binds to `0.0.0.0` when remote IPs are configured, otherwise stays on `127.0.0.1`
  - Three-layer network validation: source IP check, CORS origin check, and host header validation
  - Bearer token authentication remains mandatory for all connections
  - Localhost is always implicitly allowed — cannot lock out local access
  - IPv4-mapped IPv6 addresses (`::ffff:x.x.x.x`) handled transparently
- New `network-utils` module with CIDR parsing and IP matching (no external dependencies)
- Security warning displayed in settings when remote access is enabled

---
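
The CIDR matching that `network-utils` provides can be sketched as follows. This is an illustrative IPv4-only sketch with assumed function names, not the plugin's actual implementation:

```typescript
// Illustrative IPv4 CIDR matcher (hypothetical names; not the plugin's network-utils code).
function ipToInt(ip: string): number {
  // '100.64.0.0' -> unsigned 32-bit integer
  return ip.split('.').reduce((acc, octet) => ((acc << 8) | parseInt(octet, 10)) >>> 0, 0);
}

function matchesCidr(ip: string, cidr: string): boolean {
  const [base, bitsStr] = cidr.split('/');
  // A bare IP entry behaves as a /32 (exact match).
  const bits = bitsStr === undefined ? 32 : parseInt(bitsStr, 10);
  // JS shift counts are taken mod 32, so a /0 mask must be special-cased.
  const mask = bits === 0 ? 0 : (~0 << (32 - bits)) >>> 0;
  return ((ipToInt(ip) & mask) >>> 0) === ((ipToInt(base) & mask) >>> 0);
}
```

For example, `matchesCidr('100.100.1.1', '100.64.0.0/10')` accepts a Tailscale address, while an entry like `127.0.0.1` only matches itself.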

## [1.2.0] - 2026-01-31

### Added
- **Line Numbers for `read_note`**: New `withLineNumbers` option prefixes each line with its 1-indexed line number (e.g., `1→# Title`, `2→Content`)
  - Returns a `totalLines` count for easy reference
  - Designed for use with `update_sections` to ensure accurate line-based edits
  - Example: `read_note("note.md", { withLineNumbers: true })` returns numbered content
- **Force Parameter for `update_sections`**: New `force` parameter allows bypassing the version check
  - Use `force: true` to skip the `ifMatch` requirement (not recommended for normal use)
  - Intended for scenarios where you intentionally want to overwrite without checking

### Changed
- **`read_note` Always Returns `versionId`**: The response now always includes `versionId` for concurrency control
  - Previously only returned when `parseFrontmatter: true`
  - Enables safe `update_sections` workflows by providing the ETag upfront

### Breaking Changes
- **`update_sections` Now Requires `ifMatch`**: The `ifMatch` parameter is now required by default
  - Prevents accidental overwrites when the file content has changed since reading
  - Get `versionId` from the `read_note` response and pass it as `ifMatch`
  - To opt out, pass `force: true` (not recommended)
  - Error message guides users on the proper usage workflow

---
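
The read-then-update workflow this implies can be sketched as follows. `callTool` stands in for a generic MCP client call, and any argument shape beyond `path` and `ifMatch` is an assumption:

```typescript
// Hypothetical client-side sketch of the safe update workflow (not the plugin's code).
type ToolCall = (name: string, args: Record<string, unknown>) => Promise<Record<string, unknown>>;

async function safeUpdate(callTool: ToolCall, path: string): Promise<void> {
  // read_note now always returns versionId, so the ETag is available upfront.
  const note = await callTool('read_note', { path });
  // Threading versionId through as ifMatch makes the edit fail if the file changed in between.
  await callTool('update_sections', { path, ifMatch: note.versionId });
}
```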

## [1.1.4] - 2025-12-20

### Fixed
- **Template Literal Type Safety**: Fixed a type error in `notifications.ts` where `args.recursive` could produce `[object Object]` when stringified
  - Added explicit `String()` coercion for unknown types in template literals
- **File Deletion API**: Replaced `Vault.trash()` with `FileManager.trashFile()` per Obsidian guidelines
  - All file deletions now respect the user's configured deletion preference (system trash or `.trash/` folder)
  - Removed the unused `trash()` method from the `IVaultAdapter` interface and `VaultAdapter` class
  - Both soft and regular delete operations now use the same user-preferred deletion method

### Changed
- The `delete_note` destination field now returns `'trash'` instead of `.trash/(unknown)` since the actual location depends on user settings

---

## [1.1.3] - 2025-12-16

### Fixed
- **Sentence Case**: Fixed remaining UI text violations
  - `main.ts`: "Toggle MCP Server" → "Toggle MCP server"
  - `settings.ts`: "Authentication & Configuration" → "Authentication & configuration"
  - `settings.ts`: "UI Notifications" → "UI notifications"
- **Promise Handling**: Improved async/promise patterns per Obsidian guidelines
  - `main.ts`: Changed `async onunload()` to synchronous with `void this.stopServer()`
  - `routes.ts`: Wrapped the async Express handler with a void IIFE pattern
  - `mcp-server.ts`: Promise rejection now always uses an Error instance
- **Async/Await Cleanup**: Removed `async` from 7 methods that contained no `await`:
  - `mcp-server.ts`: `handleInitialize`, `handleListTools`
  - `vault-tools.ts`: `getVaultInfo`, `listNotes`, `createFileMetadataWithFrontmatter`, `exists`, `resolveWikilink`
  - `link-utils.ts`: `validateLinks`
- **Type Safety**: Replaced `any` types with `Record<string, unknown>` and removed eslint-disable directives
  - `mcp-server.ts`: Tool arguments type
  - `tools/index.ts`: `callTool` args parameter
  - `notifications.ts`: `args` interface field, `showToolCall` parameter, `formatArgs` parameter
- **Import Statements**: Eliminated forbidden `require()` imports
  - `crypto-adapter.ts`: Replaced `require('crypto')` with `globalThis.crypto`
  - `encryption-utils.ts`: Replaced bare `require('electron')` with the `window.require` pattern

### Changed
- Updated test mocks to match the new synchronous method signatures and import patterns

---

## [1.1.2] - 2025-11-15

### Fixed
- **Code Review Issues**: Addressed all issues from the Obsidian plugin submission review
- **Type Safety**: Added eslint-disable comments with justifications for all necessary `any` types in JSON-RPC tool argument handling
- **Command IDs**: Removed the redundant plugin name prefix from command identifiers (BREAKING CHANGE):
  - `start-mcp-server` → `start-server`
  - `stop-mcp-server` → `stop-server`
  - `restart-mcp-server` → `restart-server`
- **Promise Handling**: Added the `void` operator for an intentional fire-and-forget promise in notification queue processing
- **ESLint Directives**: Added descriptive explanations to all eslint-disable comments
- **Switch Statement Scope**: Wrapped case blocks in braces to fix lexical declaration warnings in the glob pattern matcher
- **Regular Expression**: Added an eslint-disable comment for control-character validation in Windows path checking
- **Type Definitions**: Changed the empty object type `{}` to `object` in the MCP capabilities interface
- **Import Statements**: Added comprehensive justifications for `require()` usage in Electron/Node.js modules (synchronous access required)
- **Code Cleanup**: Removed unused imports (`MCPPluginSettings`, `TFile`, `VaultInfo`)

### Changed
- Command IDs were simplified to remove the redundant plugin identifier (may affect users with custom hotkeys)

### Documentation
- Enhanced inline code documentation for ESLint suppressions and `require()` statements
- Added detailed rationale for synchronous module loading requirements in the Obsidian plugin context

---

`docs/plans/2025-12-16-obsidian-code-review-fixes.md` (new file, 636 lines)

@@ -0,0 +1,636 @@

# Obsidian Plugin Code Review Fixes Implementation Plan

> **For Claude:** REQUIRED SUB-SKILL: Use superpowers:executing-plans to implement this plan task-by-task.

**Goal:** Fix all required issues from the Nov 16, 2025 ObsidianReviewBot code review to unblock plugin submission approval.

**Architecture:** Systematic file-by-file fixes addressing: sentence-case UI text, async/await cleanup, eslint directive removal, require()-to-ES6-import conversion, and promise handling improvements.

**Tech Stack:** TypeScript, Obsidian API, ESLint

---

## Task 1: Fix Sentence Case in main.ts

**Files:**
- Modify: `src/main.ts:45`

**Step 1: Fix ribbon icon tooltip**

Change line 45 from:

```typescript
this.addRibbonIcon('server', 'Toggle MCP Server', async () => {
```

To:

```typescript
this.addRibbonIcon('server', 'Toggle MCP server', async () => {
```

**Step 2: Fix onunload promise issue (lines 96-98)**

Change from:

```typescript
async onunload() {
  await this.stopServer();
}
```

To:

```typescript
onunload() {
  void this.stopServer();
}
```

**Step 3: Verify build**

Run: `npm run build`
Expected: No errors

**Step 4: Commit**

```bash
git add src/main.ts
git commit -m "fix: sentence case and onunload promise in main.ts"
```

---

## Task 2: Fix Sentence Case in settings.ts

**Files:**
- Modify: `src/settings.ts:209,319`

**Step 1: Fix authentication section header (line 209)**

Change from:

```typescript
authSummary.setText('Authentication & Configuration');
```

To:

```typescript
authSummary.setText('Authentication & configuration');
```

**Step 2: Fix notifications section header (line 319)**

Change from:

```typescript
notifSummary.setText('UI Notifications');
```

To:

```typescript
notifSummary.setText('UI notifications');
```

**Step 3: Verify build**

Run: `npm run build`
Expected: No errors

**Step 4: Commit**

```bash
git add src/settings.ts
git commit -m "fix: sentence case for section headers in settings.ts"
```

---

## Task 3: Fix mcp-server.ts Issues

**Files:**
- Modify: `src/server/mcp-server.ts:57,70,77-79,117`

**Step 1: Remove async from handleInitialize (line 57)**

Change from:

```typescript
private async handleInitialize(_params: JSONRPCParams): Promise<InitializeResult> {
```

To:

```typescript
private handleInitialize(_params: JSONRPCParams): InitializeResult {
```

**Step 2: Remove async from handleListTools (line 70)**

Change from:

```typescript
private async handleListTools(): Promise<ListToolsResult> {
```

To:

```typescript
private handleListTools(): ListToolsResult {
```

**Step 3: Update handleRequest callers (lines 41-43)**

Since handleInitialize and handleListTools are no longer async, remove the await:

Change from:

```typescript
case 'initialize':
  return this.createSuccessResponse(request.id, await this.handleInitialize(request.params ?? {}));
case 'tools/list':
  return this.createSuccessResponse(request.id, await this.handleListTools());
```

To:

```typescript
case 'initialize':
  return this.createSuccessResponse(request.id, this.handleInitialize(request.params ?? {}));
case 'tools/list':
  return this.createSuccessResponse(request.id, this.handleListTools());
```

**Step 4: Remove the eslint-disable and fix the any type (lines 77-79)**

Change from:

```typescript
// eslint-disable-next-line @typescript-eslint/no-explicit-any -- Tool arguments come from JSON-RPC and need runtime validation
const paramsObj = params as { name: string; arguments: any };
```

To:

```typescript
const paramsObj = params as { name: string; arguments: Record<string, unknown> };
```

**Step 5: Fix promise rejection to use an Error (line 117)**

Change from:

```typescript
reject(error);
```

To:

```typescript
reject(error instanceof Error ? error : new Error(String(error)));
```

**Step 6: Verify build**

Run: `npm run build`
Expected: No errors

**Step 7: Commit**

```bash
git add src/server/mcp-server.ts
git commit -m "fix: async/await, eslint directive, and promise rejection in mcp-server.ts"
```

---

## Task 4: Fix routes.ts Promise Issue

**Files:**
- Modify: `src/server/routes.ts:10-19`

**Step 1: Wrap the async handler to handle the void context**

Change from:

```typescript
app.post('/mcp', async (req: Request, res: Response) => {
  try {
    const request = req.body as JSONRPCRequest;
    const response = await handleRequest(request);
    res.json(response);
  } catch (error) {
    console.error('MCP request error:', error);
    res.status(500).json(createErrorResponse(null, ErrorCodes.InternalError, 'Internal server error'));
  }
});
```

To:

```typescript
app.post('/mcp', (req: Request, res: Response) => {
  void (async () => {
    try {
      const request = req.body as JSONRPCRequest;
      const response = await handleRequest(request);
      res.json(response);
    } catch (error) {
      console.error('MCP request error:', error);
      res.status(500).json(createErrorResponse(null, ErrorCodes.InternalError, 'Internal server error'));
    }
  })();
});
```

**Step 2: Verify build**

Run: `npm run build`
Expected: No errors

**Step 3: Commit**

```bash
git add src/server/routes.ts
git commit -m "fix: wrap async handler with void for proper promise handling"
```

---

## Task 5: Fix tools/index.ts ESLint Directive

**Files:**
- Modify: `src/tools/index.ts:477-478`

**Step 1: Remove the eslint-disable and fix the type**

Change from:

```typescript
// eslint-disable-next-line @typescript-eslint/no-explicit-any -- Tool arguments come from JSON-RPC and require runtime validation
async callTool(name: string, args: any): Promise<CallToolResult> {
```

To:

```typescript
async callTool(name: string, args: Record<string, unknown>): Promise<CallToolResult> {
```

**Step 2: Verify build**

Run: `npm run build`
Expected: No errors

**Step 3: Commit**

```bash
git add src/tools/index.ts
git commit -m "fix: remove eslint-disable directive in tools/index.ts"
```

---

## Task 6: Fix vault-tools.ts Async Methods

**Files:**
- Modify: `src/tools/vault-tools.ts:18,63,310,498,925`

**Step 1: Remove async from getVaultInfo (line 18)**

Change from:

```typescript
async getVaultInfo(): Promise<CallToolResult> {
```

To:

```typescript
getVaultInfo(): CallToolResult {
```

**Step 2: Remove async from listNotes (line 63)**

Change from:

```typescript
async listNotes(path?: string): Promise<CallToolResult> {
```

To:

```typescript
listNotes(path?: string): CallToolResult {
```

**Step 3: Remove async from createFileMetadataWithFrontmatter (line 310)**

Change from:

```typescript
private async createFileMetadataWithFrontmatter(
```

To:

```typescript
private createFileMetadataWithFrontmatter(
```

Also update the return type from `Promise<FileMetadataWithFrontmatter>` to `FileMetadataWithFrontmatter`.

**Step 4: Remove async from exists (line 498)**

Change from:

```typescript
async exists(path: string): Promise<CallToolResult> {
```

To:

```typescript
exists(path: string): CallToolResult {
```

**Step 5: Remove async from resolveWikilink (line 925)**

Change from:

```typescript
async resolveWikilink(sourcePath: string, linkText: string): Promise<CallToolResult> {
```

To:

```typescript
resolveWikilink(sourcePath: string, linkText: string): CallToolResult {
```

**Step 6: Update callers if any use await on these methods**

Search for any `await this.getVaultInfo()`, `await this.listNotes()`, `await this.exists()`, `await this.resolveWikilink()`, or `await this.createFileMetadataWithFrontmatter()` and remove the `await` keyword.
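
One way to carry out this search from the repo root (a sketch; the exact pattern and the `src/` path are assumptions about the repo layout):

```shell
# List call sites that still await the now-synchronous methods; prints a note if none remain.
grep -rnE "await this\.(getVaultInfo|listNotes|exists|resolveWikilink|createFileMetadataWithFrontmatter)" src/ \
  || echo "no awaited call sites remain"
```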

**Step 7: Verify build**

Run: `npm run build`
Expected: No errors

**Step 8: Commit**

```bash
git add src/tools/vault-tools.ts
git commit -m "fix: remove async from methods without await in vault-tools.ts"
```

---

## Task 7: Fix notifications.ts ESLint Directives

**Files:**
- Modify: `src/ui/notifications.ts:10-11,78-79,145-146,179`

**Step 1: Fix the interface args type (lines 10-11)**

Change from:

```typescript
// eslint-disable-next-line @typescript-eslint/no-explicit-any -- Tool arguments come from JSON-RPC and can be any valid JSON structure
args: any;
```

To:

```typescript
args: Record<string, unknown>;
```

**Step 2: Fix the showToolCall parameter type (lines 78-79)**

Change from:

```typescript
// eslint-disable-next-line @typescript-eslint/no-explicit-any -- Tool arguments come from JSON-RPC and can be any valid JSON structure
showToolCall(toolName: string, args: any, duration?: number): void {
```

To:

```typescript
showToolCall(toolName: string, args: Record<string, unknown>, duration?: number): void {
```

**Step 3: Fix the formatArgs parameter type (lines 145-146)**

Change from:

```typescript
// eslint-disable-next-line @typescript-eslint/no-explicit-any -- Tool arguments come from JSON-RPC and can be any valid JSON structure
private formatArgs(args: any): string {
```

To:

```typescript
private formatArgs(args: Record<string, unknown>): string {
```

**Step 4: Fix the unused 'e' variable (line 179)**

Change from:

```typescript
} catch (e) {
```

To:

```typescript
} catch {
```

**Step 5: Verify build**

Run: `npm run build`
Expected: No errors

**Step 6: Commit**

```bash
git add src/ui/notifications.ts
git commit -m "fix: remove eslint directives and unused catch variable in notifications.ts"
```

---

## Task 8: Fix crypto-adapter.ts Require Import

**Files:**
- Modify: `src/utils/crypto-adapter.ts:18-34`

**Step 1: Replace require() with globalThis.crypto**

The challenge here is that require() is used for synchronous access, so the fix restructures the lookup rather than switching to async dynamic imports.

Change the entire Node.js section from:

```typescript
// Node.js environment (15+) - uses Web Crypto API standard
if (typeof global !== 'undefined') {
  try {
    // Using require() is necessary for synchronous crypto access in Obsidian desktop plugins
    // ES6 dynamic imports would create race conditions as crypto must be available synchronously
    // eslint-disable-next-line @typescript-eslint/no-var-requires -- Synchronous Node.js crypto API access required
    const nodeCrypto = require('crypto') as typeof import('crypto');
    if (nodeCrypto?.webcrypto) {
      return nodeCrypto.webcrypto as unknown as Crypto;
    }
  } catch {
    // Crypto module not available or failed to load
  }
}
```

To (using globalThis.crypto, which is available in Node 19+ and Electron):

```typescript
// Node.js/Electron environment - globalThis.crypto is available in modern runtimes
if (typeof globalThis !== 'undefined' && globalThis.crypto) {
  return globalThis.crypto;
}
```

**Step 2: Verify build**

Run: `npm run build`
Expected: No errors

**Step 3: Commit**

```bash
git add src/utils/crypto-adapter.ts
git commit -m "fix: use globalThis.crypto instead of require('crypto')"
```

---

## Task 9: Fix encryption-utils.ts Require Import

**Files:**
- Modify: `src/utils/encryption-utils.ts:8-18`

**Step 1: Restructure the electron import**

Since Electron's safeStorage must be accessed synchronously at module load time, and ES6 dynamic imports are async, we need a different approach. In Obsidian plugins running in Electron, we can access electron through the window object.

Change from:

```typescript
// Safely import safeStorage - may not be available in all environments
let safeStorage: ElectronSafeStorage | null = null;
try {
  // Using require() is necessary for synchronous access to Electron's safeStorage API in Obsidian desktop plugins
  // ES6 dynamic imports would create race conditions as this module must be available synchronously
  // eslint-disable-next-line @typescript-eslint/no-var-requires -- Synchronous Electron API access required for Obsidian plugin
  const electron = require('electron') as typeof import('electron');
  safeStorage = electron.safeStorage || null;
} catch (error) {
  console.warn('Electron safeStorage not available, API keys will be stored in plaintext');
}
```

To:

```typescript
// Safely import safeStorage - may not be available in all environments
let safeStorage: ElectronSafeStorage | null = null;
try {
  // Access electron through the global window object in Obsidian's Electron environment
  // This avoids a bare require() while still getting synchronous access
  const windowRequire = (window as Window & { require?: (module: string) => typeof import('electron') }).require;
  if (windowRequire) {
    const electron = windowRequire('electron');
    safeStorage = electron.safeStorage || null;
  }
} catch {
  console.warn('Electron safeStorage not available, API keys will be stored in plaintext');
}
```

**Step 2: Verify build**

Run: `npm run build`
Expected: No errors

**Step 3: Commit**

```bash
git add src/utils/encryption-utils.ts
git commit -m "fix: use window.require pattern instead of bare require for electron"
```

---

## Task 10: Fix link-utils.ts Async Method

**Files:**
- Modify: `src/utils/link-utils.ts:448`

**Step 1: Remove async from validateLinks**

Change from:

```typescript
static async validateLinks(
  vault: IVaultAdapter,
  metadata: IMetadataCacheAdapter,
  content: string,
  sourcePath: string
): Promise<LinkValidationResult> {
```

To:

```typescript
static validateLinks(
  vault: IVaultAdapter,
  metadata: IMetadataCacheAdapter,
  content: string,
  sourcePath: string
): LinkValidationResult {
```

**Step 2: Update any callers that await this method**

Search for `await LinkUtils.validateLinks` or `await this.validateLinks` and remove the `await`.

**Step 3: Verify build**

Run: `npm run build`
Expected: No errors

**Step 4: Commit**

```bash
git add src/utils/link-utils.ts
git commit -m "fix: remove async from validateLinks method"
```

---

## Task 11: Final Build and Test

**Step 1: Run full build**

Run: `npm run build`
Expected: No errors

**Step 2: Run tests**

Run: `npm test`
Expected: All tests pass

**Step 3: Commit any remaining changes**

```bash
git status
# If any uncommitted changes:
git add -A
git commit -m "fix: final cleanup for code review issues"
```

---

## Optional Tasks (if time permits)

### Optional Task A: Fix Unused Error Variables

**Files:**
- `src/tools/vault-tools.ts:289,359,393,445,715`
- `src/utils/encryption-utils.ts:16`
- `src/utils/frontmatter-utils.ts:76,329,358`
- `src/utils/search-utils.ts:117,326`
- `src/utils/waypoint-utils.ts:103`

For each occurrence, change `catch (error) {`, `catch (e) {`, or `catch (decompressError) {` to just `catch {`.

### Optional Task B: Use FileManager.trashFile()

**Files:**
- Modify: `src/adapters/vault-adapter.ts:46-48`
- Modify: `src/adapters/interfaces.ts` (update the IVaultAdapter interface)

This requires passing the App or FileManager to the VaultAdapter, which is a larger refactor.

---

## Summary Checklist

- [ ] Task 1: main.ts sentence case + onunload
- [ ] Task 2: settings.ts sentence case
- [ ] Task 3: mcp-server.ts async/eslint/promise fixes
- [ ] Task 4: routes.ts promise handling
- [ ] Task 5: tools/index.ts eslint directive
- [ ] Task 6: vault-tools.ts async methods
- [ ] Task 7: notifications.ts eslint directives
- [ ] Task 8: crypto-adapter.ts require import
- [ ] Task 9: encryption-utils.ts require import
- [ ] Task 10: link-utils.ts async method
- [ ] Task 11: Final build and test

@@ -0,0 +1,87 @@

# Design: Line Numbers by Default in `read_note`

**Date:** 2026-01-31
**Version:** 1.2.1

## Summary

Change `read_note` to return line-numbered content by default (e.g., `1→First line`) to help AI assistants reference specific locations when discussing notes. Pass `withLineNumbers: false` to get raw content.
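
A minimal sketch of the numbering transform (hypothetical helper name; the real logic lives in `note-tools.ts`):

```typescript
// Illustrative sketch of the 1-indexed `N→line` numbering (assumed helper, not the actual code).
function addLineNumbers(content: string): { content: string; totalLines: number } {
  const lines = content.split('\n');
  return {
    content: lines.map((line, i) => `${i + 1}→${line}`).join('\n'),
    totalLines: lines.length,
  };
}
```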

## Motivation

AI assistants can give much more precise references, such as "line 42 has a typo", rather than the vague "in the section about X". Line numbers make file discussions unambiguous.

## Changes

### 1. Default Behavior Change

```typescript
// Before
const withLineNumbers = options?.withLineNumbers ?? false;

// After
const withLineNumbers = options?.withLineNumbers ?? true;
```

### 2. Apply to Parsed Path

Currently the `parseFrontmatter: true` path ignores line numbers. Add line numbering to the `content` field (and `contentWithoutFrontmatter`) when enabled.

### 3. Schema Update

Update the tool description to say "Default: true" and clarify opt-out with `withLineNumbers: false`.

## Files to Modify

### `src/tools/note-tools.ts`
- Line 48: Change the default from `false` to `true`
- Lines 125-155: Add line-numbering logic to the `parseFrontmatter` path for the `content` and `contentWithoutFrontmatter` fields
- Add `totalLines` to the parsed response when line numbers are enabled

### `src/tools/index.ts`
- Lines 51-54: Update the schema description to reflect the new default

### `tests/note-tools.test.ts`
- Update existing tests that expect raw content to either:
  - Explicitly pass `withLineNumbers: false`, or
  - Update assertions to expect numbered content

## Response Format Examples

### Before (current default)
```json
{
  "content": "# Title\nSome content",
  "wordCount": 3,
  "versionId": "abc123"
}
```

### After (new default)
```json
{
  "content": "1→# Title\n2→Some content",
  "totalLines": 2,
  "wordCount": 3,
  "versionId": "abc123"
}
```

### Opt-out (`withLineNumbers: false`)
```json
{
  "content": "# Title\nSome content",
  "wordCount": 3,
  "versionId": "abc123"
}
```

## Breaking Change

This changes the default response format. MCP clients that parse `content` expecting raw text will need to either:
- Update their parsing to handle the line-numbered format
- Explicitly pass `withLineNumbers: false`

## Version

Bump to 1.2.1.

`docs/plans/2026-01-31-update-sections-safety.md` (new file, 516 lines)

@@ -0,0 +1,516 @@

# update_sections Safety Implementation Plan

> **For Claude:** REQUIRED SUB-SKILL: Use superpowers:executing-plans to implement this plan task-by-task.

**Goal:** Add line numbers to `read_note` and require version checking in `update_sections` to prevent line-based edit errors.

**Architecture:** Three focused changes: (1) a `withLineNumbers` option on `read_note` returns numbered lines using a `→` prefix, (2) a `force` parameter on `update_sections` makes `ifMatch` required unless explicitly bypassed, (3) always return `versionId` from `read_note`.

**Tech Stack:** TypeScript, Jest, Obsidian API

---

## Task 1: Add `withLineNumbers` Tests

**Files:**
- Modify: `tests/note-tools.test.ts` (after line ~100, in the `readNote` describe block)

**Step 1: Write the failing tests**

Add these tests in the `describe('readNote', ...)` block:

```typescript
it('should return numbered lines when withLineNumbers is true', async () => {
  const mockFile = createMockTFile('test.md', {
    ctime: 1000,
    mtime: 2000,
    size: 100
  });
  const content = '# Title\n\nParagraph text\nMore text';

  (PathUtils.resolveFile as jest.Mock).mockReturnValue(mockFile);
  mockVault.read = jest.fn().mockResolvedValue(content);

  const result = await noteTools.readNote('test.md', { withLineNumbers: true });

  expect(result.isError).toBeUndefined();
  const parsed = JSON.parse(result.content[0].text);
  expect(parsed.content).toBe('1→# Title\n2→\n3→Paragraph text\n4→More text');
  expect(parsed.totalLines).toBe(4);
  expect(parsed.versionId).toBe('2000-100');
  expect(parsed.wordCount).toBe(4); // Title Paragraph text More text
});

it('should return versionId even without withLineNumbers', async () => {
  const mockFile = createMockTFile('test.md', {
    ctime: 1000,
    mtime: 2000,
    size: 100
  });
  const content = '# Test';

  (PathUtils.resolveFile as jest.Mock).mockReturnValue(mockFile);
  mockVault.read = jest.fn().mockResolvedValue(content);

  const result = await noteTools.readNote('test.md');

  expect(result.isError).toBeUndefined();
  const parsed = JSON.parse(result.content[0].text);
  expect(parsed.content).toBe('# Test');
  expect(parsed.versionId).toBe('2000-100');
});
```
|
||||
|
||||
**Step 2: Run tests to verify they fail**
|
||||
|
||||
Run: `npm test -- --testPathPattern=note-tools.test.ts --testNamePattern="withLineNumbers|versionId even without"`
|
||||
Expected: FAIL - `versionId` undefined, content not numbered
|
||||
|
||||
**Step 3: Commit failing tests**
|
||||
|
||||
```bash
|
||||
git add tests/note-tools.test.ts
|
||||
git commit -m "test: add failing tests for withLineNumbers and versionId"
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Task 2: Implement `withLineNumbers` in `readNote`

**Files:**
- Modify: `src/tools/note-tools.ts:31-38` (options type)
- Modify: `src/tools/note-tools.ts:83-99` (implementation)

**Step 1: Update the options type (line 33-37)**

Change from:
```typescript
options?: {
  withFrontmatter?: boolean;
  withContent?: boolean;
  parseFrontmatter?: boolean;
}
```

To:
```typescript
options?: {
  withFrontmatter?: boolean;
  withContent?: boolean;
  parseFrontmatter?: boolean;
  withLineNumbers?: boolean;
}
```

**Step 2: Add withLineNumbers handling (after line 45, before path validation)**

Add this line after the existing option destructuring:
```typescript
/* istanbul ignore next */
const withLineNumbers = options?.withLineNumbers ?? false;
```

**Step 3: Add numbered content logic (replace lines 83-99)**

Replace the existing `if (!parseFrontmatter)` block with:

```typescript
// If no special options, return simple content
if (!parseFrontmatter) {
  // Compute word count when returning content
  if (withContent) {
    const wordCount = ContentUtils.countWords(content);
    const versionId = VersionUtils.generateVersionId(file);

    // If withLineNumbers, prefix each line with line number
    if (withLineNumbers) {
      const lines = content.split('\n');
      const numberedContent = lines
        .map((line, idx) => `${idx + 1}→${line}`)
        .join('\n');

      const result = {
        content: numberedContent,
        totalLines: lines.length,
        versionId,
        wordCount
      };
      return {
        content: [{ type: "text", text: JSON.stringify(result, null, 2) }]
      };
    }

    const result = {
      content,
      wordCount,
      versionId
    };
    return {
      content: [{ type: "text", text: JSON.stringify(result, null, 2) }]
    };
  }
  return {
    content: [{ type: "text", text: content }]
  };
}
```

**Step 4: Run tests to verify they pass**

Run: `npm test -- --testPathPattern=note-tools.test.ts --testNamePattern="withLineNumbers|versionId even without"`
Expected: PASS

**Step 5: Run full test suite**

Run: `npm test`
Expected: All 760+ tests pass

**Step 6: Commit implementation**

```bash
git add src/tools/note-tools.ts
git commit -m "feat(read_note): add withLineNumbers option and always return versionId"
```

---
## Task 3: Update `read_note` Schema

**Files:**
- Modify: `src/tools/index.ts:39-50` (read_note properties)

**Step 1: Add withLineNumbers to schema**

After the `parseFrontmatter` property (around line 49), add:

```typescript
withLineNumbers: {
  type: "boolean",
  description: "If true, prefix each line with its line number (e.g., '1→content'). Use this when you need to make line-based edits with update_sections. Returns totalLines count and versionId for use with ifMatch parameter. Default: false"
}
```

**Step 2: Update tool description (line 31)**

Update the description to mention line numbers:

```typescript
description: "Read the content of a file from the Obsidian vault with optional frontmatter parsing. Returns versionId for concurrency control. When withLineNumbers is true, prefixes each line with its number (e.g., '1→content') for use with update_sections. Returns word count (excluding frontmatter and Obsidian comments) when content is included. Path must be vault-relative (no leading slash) and include the file extension. Use list() first if you're unsure of the exact path.",
```

**Step 3: Verify build passes**

Run: `npm run build`
Expected: Build succeeds with no type errors

**Step 4: Commit schema update**

```bash
git add src/tools/index.ts
git commit -m "docs(read_note): add withLineNumbers to tool schema"
```

---
## Task 4: Add `force` Parameter Tests for `updateSections`

**Files:**
- Modify: `tests/note-tools.test.ts` (after line ~960, in the `updateSections` describe block)

**Step 1: Write failing tests**

Add these tests in the `describe('updateSections', ...)` block:

```typescript
it('should return error when ifMatch not provided and force not set', async () => {
  const mockFile = createMockTFile('test.md');
  (PathUtils.resolveFile as jest.Mock).mockReturnValue(mockFile);

  const result = await noteTools.updateSections('test.md', [
    { startLine: 1, endLine: 1, content: 'New' }
  ]);

  expect(result.isError).toBe(true);
  const parsed = JSON.parse(result.content[0].text);
  expect(parsed.error).toBe('Version check required');
  expect(parsed.message).toContain('ifMatch parameter is required');
  expect(mockVault.modify).not.toHaveBeenCalled();
});

it('should proceed without ifMatch when force is true', async () => {
  const mockFile = createMockTFile('test.md', {
    ctime: 1000,
    mtime: 2000,
    size: 100
  });
  const content = 'Line 1\nLine 2';

  (PathUtils.resolveFile as jest.Mock).mockReturnValue(mockFile);
  mockVault.read = jest.fn().mockResolvedValue(content);
  mockVault.modify = jest.fn().mockResolvedValue(undefined);

  const result = await noteTools.updateSections(
    'test.md',
    [{ startLine: 1, endLine: 1, content: 'New Line 1' }],
    undefined, // no ifMatch
    true, // validateLinks
    true // force
  );

  expect(result.isError).toBeUndefined();
  expect(mockVault.modify).toHaveBeenCalled();
  const parsed = JSON.parse(result.content[0].text);
  expect(parsed.success).toBe(true);
});

it('should proceed with valid ifMatch without force', async () => {
  const mockFile = createMockTFile('test.md', {
    ctime: 1000,
    mtime: 2000,
    size: 100
  });
  const content = 'Line 1\nLine 2';

  (PathUtils.resolveFile as jest.Mock).mockReturnValue(mockFile);
  mockVault.read = jest.fn().mockResolvedValue(content);
  mockVault.modify = jest.fn().mockResolvedValue(undefined);

  const result = await noteTools.updateSections(
    'test.md',
    [{ startLine: 1, endLine: 1, content: 'New Line 1' }],
    '2000-100' // valid ifMatch
  );

  expect(result.isError).toBeUndefined();
  expect(mockVault.modify).toHaveBeenCalled();
});
```

**Step 2: Run tests to verify they fail**

Run: `npm test -- --testPathPattern=note-tools.test.ts --testNamePattern="ifMatch not provided|force is true|valid ifMatch without force"`
Expected: FAIL - first test expects error but gets success, second test has wrong arity

**Step 3: Commit failing tests**

```bash
git add tests/note-tools.test.ts
git commit -m "test: add failing tests for updateSections force parameter"
```

---
## Task 5: Implement `force` Parameter in `updateSections`

**Files:**
- Modify: `src/tools/note-tools.ts:880-907` (method signature and validation)

**Step 1: Update method signature (lines 880-885)**

Change from:
```typescript
async updateSections(
  path: string,
  edits: SectionEdit[],
  ifMatch?: string,
  validateLinks: boolean = true
): Promise<CallToolResult> {
```

To:
```typescript
async updateSections(
  path: string,
  edits: SectionEdit[],
  ifMatch?: string,
  validateLinks: boolean = true,
  force: boolean = false
): Promise<CallToolResult> {
```

**Step 2: Add ifMatch requirement check (after line 907, after edits validation)**

Insert after the "No edits provided" check:

```typescript
// Require ifMatch unless force is true
if (!ifMatch && !force) {
  return {
    content: [{
      type: "text",
      text: JSON.stringify({
        error: 'Version check required',
        message: 'The ifMatch parameter is required to prevent overwriting concurrent changes. First call read_note with withLineNumbers:true to get the versionId, then pass it as ifMatch. To bypass this check, set force:true (not recommended).'
      }, null, 2)
    }],
    isError: true
  };
}
```

**Step 3: Run tests to verify they pass**

Run: `npm test -- --testPathPattern=note-tools.test.ts --testNamePattern="ifMatch not provided|force is true|valid ifMatch without force"`
Expected: PASS

**Step 4: Run full test suite**

Run: `npm test`
Expected: Some tests may fail (existing tests that don't pass ifMatch)

**Step 5: Fix existing tests that now fail**

Update existing `updateSections` tests to either:
- Pass a valid `ifMatch` value, OR
- Pass `force: true`

For the "should update sections successfully" test (around line 882), update to use force:

```typescript
const result = await noteTools.updateSections('test.md', [
  { startLine: 2, endLine: 3, content: 'New Line 2\nNew Line 3' }
], undefined, true, true); // validateLinks=true, force=true
```

Apply similar fixes to other affected tests in the `updateSections` block.

**Step 6: Run full test suite again**

Run: `npm test`
Expected: All tests pass

**Step 7: Commit implementation**

```bash
git add src/tools/note-tools.ts tests/note-tools.test.ts
git commit -m "feat(update_sections): require ifMatch with force opt-out"
```

---
## Task 6: Update `update_sections` Schema and Call Site

**Files:**
- Modify: `src/tools/index.ts:184-194` (update_sections schema)
- Modify: `src/tools/index.ts:529-537` (call site)

**Step 1: Update ifMatch description (line 184-187)**

Change from:
```typescript
ifMatch: {
  type: "string",
  description: "Optional ETag/versionId for concurrency control. If provided, update only proceeds if file hasn't been modified. Get versionId from read operations. Prevents conflicting edits in concurrent scenarios."
},
```

To:
```typescript
ifMatch: {
  type: "string",
  description: "Required: ETag/versionId for concurrency control. Get this from read_note response (always included). Update only proceeds if file hasn't changed since read. Omit only with force:true."
},
```

**Step 2: Add force property (after validateLinks, around line 191)**

Add:
```typescript
force: {
  type: "boolean",
  description: "If true, skip version check and apply edits without ifMatch. Use only when you intentionally want to overwrite without checking for concurrent changes. Not recommended. Default: false"
}
```

**Step 3: Update call site (lines 529-537)**

Change from:
```typescript
case "update_sections": {
  const a = args as { path: string; edits: Array<{ startLine: number; endLine: number; content: string }>; ifMatch?: string; validateLinks?: boolean };
  result = await this.noteTools.updateSections(
    a.path,
    a.edits,
    a.ifMatch,
    a.validateLinks ?? true
  );
  break;
}
```

To:
```typescript
case "update_sections": {
  const a = args as { path: string; edits: Array<{ startLine: number; endLine: number; content: string }>; ifMatch?: string; validateLinks?: boolean; force?: boolean };
  result = await this.noteTools.updateSections(
    a.path,
    a.edits,
    a.ifMatch,
    a.validateLinks ?? true,
    a.force ?? false
  );
  break;
}
```

**Step 4: Verify build passes**

Run: `npm run build`
Expected: Build succeeds

**Step 5: Run full test suite**

Run: `npm test`
Expected: All tests pass

**Step 6: Commit schema and call site updates**

```bash
git add src/tools/index.ts
git commit -m "docs(update_sections): update schema for required ifMatch and force opt-out"
```

---
## Task 7: Final Verification

**Step 1: Run full test suite with coverage**

Run: `npm run test:coverage`
Expected: All tests pass, coverage maintained

**Step 2: Build for production**

Run: `npm run build`
Expected: Build succeeds with no errors

**Step 3: Manual verification checklist**

Verify these scenarios work correctly:

1. `read_note` without options → returns `content`, `wordCount`, `versionId`
2. `read_note` with `withLineNumbers: true` → returns numbered content, `totalLines`, `versionId`
3. `update_sections` without `ifMatch` → returns "Version check required" error
4. `update_sections` with `force: true` → proceeds without version check
5. `update_sections` with valid `ifMatch` → proceeds normally
6. `update_sections` with stale `ifMatch` → returns version mismatch error

**Step 4: Review commit history**

```bash
git log --oneline -6
```

Verify the commit history looks clean and logical.

---

## Summary of Changes

| File | Change |
|------|--------|
| `src/tools/note-tools.ts` | Add `withLineNumbers` option, add `force` parameter, always return `versionId` |
| `src/tools/index.ts` | Update schemas for both tools, update call site |
| `tests/note-tools.test.ts` | Add tests for new features, fix existing tests |

**Breaking Change:** `update_sections` now requires the `ifMatch` parameter unless `force: true` is passed.
87
docs/plans/2026-02-06-remote-ip-access-design.md
Normal file
@@ -0,0 +1,87 @@
# Remote IP access for MCP server

Date: 2026-02-06

## Problem

The MCP server is hardcoded to localhost-only access (bind address, CORS, host header validation). This prevents use cases where Obsidian runs in a Docker container or on a remote machine and MCP clients connect over a Tailscale VPN or other private network.

## Design

### New setting: `allowedIPs`

A comma-separated string of IPs and CIDR ranges. Default: `""` (empty).

- When empty: server behaves exactly as today (binds to `127.0.0.1`, localhost-only)
- When populated: server binds to `0.0.0.0` and allows connections from listed IPs/CIDRs
- Localhost (`127.0.0.1`) is always implicitly allowed regardless of the setting
- Examples: `100.64.0.0/10`, `192.168.1.50`, `10.0.0.0/8`

### Middleware changes (src/server/middleware.ts)

Three layers are updated:

1. **Source IP validation (new)** - Checks `req.socket.remoteAddress` against the allow-list before auth. Rejects connections from unlisted IPs with 403. Localhost always passes.

2. **CORS policy update** - Extends the origin check to allow origins whose hostname matches the allow-list, in addition to the existing localhost regex.

3. **Host header validation update** - Extends to accept Host headers matching allowed IPs, in addition to localhost.

All three use a shared `isIPAllowed()` utility.

### Server bind (src/server/mcp-server.ts)

The `start()` method computes the bind address dynamically:
- `allowedIPs` non-empty (trimmed) -> bind `0.0.0.0`
- `allowedIPs` empty -> bind `127.0.0.1` (current behavior)

### Network utilities (src/utils/network-utils.ts)

New file (~40 lines) exporting:

- `parseAllowedIPs(setting: string): AllowedIPEntry[]` - Parses the comma-separated string into a structured list of individual IPs and CIDR ranges
- `isIPAllowed(ip: string, allowList: AllowedIPEntry[]): boolean` - Checks whether an IP matches any entry. Handles the IPv4-mapped IPv6 addresses (`::ffff:x.x.x.x`) that Node.js uses for `req.socket.remoteAddress`

CIDR matching is standard bit arithmetic, no external dependencies needed.
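A sketch of the two utilities under this design, restricted to IPv4 for brevity. The function names come from the design above; the `AllowedIPEntry` shape and every implementation detail are assumptions:

```typescript
type AllowedIPEntry = { base: number; maskBits: number };

// '192.168.1.50' -> 32-bit unsigned integer; null for malformed input.
function ipv4ToInt(ip: string): number | null {
  const parts = ip.split('.');
  if (parts.length !== 4) return null;
  let value = 0;
  for (const part of parts) {
    const octet = Number(part);
    if (!Number.isInteger(octet) || octet < 0 || octet > 255) return null;
    value = (value << 8) | octet;
  }
  return value >>> 0;
}

// '100.64.0.0/10, 192.168.1.50' -> entries; bare IPs become /32,
// malformed or empty tokens are skipped.
function parseAllowedIPs(setting: string): AllowedIPEntry[] {
  const entries: AllowedIPEntry[] = [];
  for (const raw of setting.split(',')) {
    const token = raw.trim();
    if (!token) continue;
    const [ip, bits] = token.split('/');
    const base = ipv4ToInt(ip);
    const maskBits = bits === undefined ? 32 : Number(bits);
    if (base === null || !Number.isInteger(maskBits) || maskBits < 0 || maskBits > 32) continue;
    entries.push({ base, maskBits });
  }
  return entries;
}

function isIPAllowed(ip: string, allowList: AllowedIPEntry[]): boolean {
  // Node reports IPv4 peers as IPv4-mapped IPv6 (::ffff:x.x.x.x); unwrap first.
  const normalized = ip.startsWith('::ffff:') ? ip.slice(7) : ip;
  if (normalized === '127.0.0.1' || ip === '::1') return true; // localhost always allowed
  const value = ipv4ToInt(normalized);
  if (value === null) return false;
  return allowList.some(({ base, maskBits }) => {
    const mask = maskBits === 0 ? 0 : (~0 << (32 - maskBits)) >>> 0;
    return ((value & mask) >>> 0) === ((base & mask) >>> 0);
  });
}
```

The `>>> 0` coercions keep the bitwise results unsigned, and the `maskBits === 0` special case avoids JavaScript's modulo-32 shift behavior.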

### Settings UI (src/settings.ts)

New text field below the Port setting:
- Name: "Allowed IPs"
- Description: "Comma-separated IPs or CIDR ranges allowed to connect (e.g., 100.64.0.0/10, 192.168.1.50). Leave empty for localhost only. Restart required."
- Placeholder: `100.64.0.0/10, 192.168.1.0/24`
- Shows restart warning when changed while server is running
- Shows security note when non-empty: "Server is accessible from non-localhost IPs. Ensure your API key is kept secure."

Status display updates to show the actual bind address (`0.0.0.0` vs `127.0.0.1`).

Generated client configs (Windsurf/Claude Code) stay as `127.0.0.1` - users adjust manually for remote access.

### Settings type (src/types/settings-types.ts)

Add `allowedIPs: string` to `MCPServerSettings` with default `""`.

## Security model

- **Auth is still mandatory.** IP allow-list is defense-in-depth, not a replacement for Bearer token authentication.
- **Localhost always allowed.** Cannot accidentally lock out local access.
- **Empty default = current behavior.** Zero-change upgrade for existing users. Feature is opt-in.
- **Three-layer network validation.** Source IP check + CORS + Host header validation, all in front of mandatory Bearer auth.

## Testing

New file `tests/network-utils.test.ts`:
- Individual IP match/mismatch
- CIDR range matching (e.g., `100.64.0.0/10` matches `100.100.1.1`)
- IPv4-mapped IPv6 handling (`::ffff:192.168.1.1`)
- Edge cases: empty string, malformed entries, extra whitespace
- Localhost always allowed regardless of list contents

## Files changed

1. `src/types/settings-types.ts` - Add `allowedIPs` field
2. `src/utils/network-utils.ts` - New file: CIDR parsing + IP matching
3. `src/server/middleware.ts` - Update CORS, host validation, add source IP check
4. `src/server/mcp-server.ts` - Dynamic bind address
5. `src/settings.ts` - New text field + security note
6. `tests/network-utils.test.ts` - New test file
201
docs/plans/PLAN-update-sections-safety.md
Normal file
@@ -0,0 +1,201 @@

# Plan: Fix update_sections Line Number Issue via MCP Server Changes

## Problem Analysis

When using `update_sections`, line number errors occur because:

1. **`read_note` doesn't return line numbers** - Returns content as a string, no line mapping
2. **`ifMatch` is optional** - No enforcement of version checking before edits
3. **`versionId` inconsistent** - Only returned when `parseFrontmatter: true`

### Root Cause

The `Read` tool shows line numbers (e.g., `1→content`) but `read_note` does not. When using `read_note` and later calling `update_sections`, line numbers are guessed based on stale content.
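The versionId format is never spelled out in this plan, but the test expectations elsewhere on this branch (`'2000-100'` for `mtime: 2000`, `size: 100`) suggest an mtime-size pair. A minimal sketch under that assumption, with hypothetical names:

```typescript
// Assumed stat shape; the real code reads these from the Obsidian file object.
interface FileStat { mtime: number; size: number; }

// versionId as the '<mtime>-<size>' pair the tests expect.
function generateVersionId(stat: FileStat): string {
  return `${stat.mtime}-${stat.size}`;
}

// The ifMatch guard then reduces to a string comparison against the current stat:
// any write (new mtime) or size change invalidates a previously issued versionId.
function isStale(ifMatch: string, stat: FileStat): boolean {
  return ifMatch !== generateVersionId(stat);
}
```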

---

## Proposed Changes

### Change 1: Add `withLineNumbers` Option to `read_note`

**File:** `src/tools/note-tools.ts`

**Current behavior:** Returns `{ content: "...", wordCount: N }`

**New behavior with `withLineNumbers: true`:** Returns numbered lines using `→` prefix:

```json
{
  "content": "1→---\n2→title: Example\n3→---\n4→\n5→## Overview\n6→Some text here",
  "totalLines": 6,
  "versionId": "abc123",
  "wordCount": 42
}
```

**Implementation (add after existing options handling):**

```typescript
// If withLineNumbers requested, prefix each line with line number
if (options?.withLineNumbers && withContent) {
  const lines = content.split('\n');
  const numberedContent = lines
    .map((line, idx) => `${idx + 1}→${line}`)
    .join('\n');

  const result = {
    content: numberedContent,
    totalLines: lines.length,
    versionId: VersionUtils.generateVersionId(file),
    wordCount: ContentUtils.countWords(content)
  };
  return {
    content: [{ type: "text", text: JSON.stringify(result, null, 2) }]
  };
}
```

**Schema update (in `index.ts`):**

```typescript
withLineNumbers: {
  type: "boolean",
  description: "If true, prefix each line with its line number (e.g., '1→content'). Use this when you need to make line-based edits with update_sections. Returns totalLines count and versionId for use with ifMatch parameter."
}
```

---
### Change 2: Require `ifMatch` for `update_sections`

**File:** `src/tools/note-tools.ts`

**Current behavior:** `ifMatch` is optional - edits proceed without version check.

**New behavior:** `ifMatch` is required unless `force: true` is passed.

**Method signature change:**

```typescript
async updateSections(
  path: string,
  edits: SectionEdit[],
  ifMatch?: string, // Still optional in signature
  validateLinks: boolean = true,
  force?: boolean // NEW: explicit opt-out
): Promise<CallToolResult>
```

**Validation logic (early in method, after path/edits validation):**

```typescript
// Require ifMatch unless force is true
if (!ifMatch && !force) {
  return {
    content: [{
      type: "text",
      text: JSON.stringify({
        error: 'Version check required',
        message: 'The ifMatch parameter is required to prevent overwriting concurrent changes. First call read_note with withLineNumbers:true to get the versionId, then pass it as ifMatch. To bypass this check, set force:true (not recommended).'
      }, null, 2)
    }],
    isError: true
  };
}
```

**Schema update (in `index.ts`):**

```typescript
ifMatch: {
  type: "string",
  description: "Required: ETag/versionId for concurrency control. Get this from read_note response. Update only proceeds if file hasn't changed since read. Omit only with force:true."
},
force: {
  type: "boolean",
  description: "If true, skip version check and apply edits without ifMatch. Use only when you intentionally want to overwrite without checking for concurrent changes. Default: false"
}
```

**Note:** Keep `required: ["path", "edits"]` in the schema - we enforce `ifMatch` in code to provide a helpful error message.

---
### Change 3: Always Return `versionId` from `read_note`

**File:** `src/tools/note-tools.ts`

**Current behavior:** Only returns `versionId` when `parseFrontmatter: true`.

**New behavior:** Always include `versionId` in the response.

**Current code (around line 88):**

```typescript
const result = {
  content,
  wordCount
};
```

**Updated code:**

```typescript
const result = {
  content,
  wordCount,
  versionId: VersionUtils.generateVersionId(file)
};
```

---

## Files to Modify

| File | Changes |
|------|---------|
| `src/tools/note-tools.ts` | Add `withLineNumbers`, add `force` parameter, always return `versionId` |
| `src/tools/index.ts` | Update schemas for `read_note` and `update_sections` |

---

## Implementation Steps

1. **Modify `readNote`** in `note-tools.ts`:
   - Add `withLineNumbers` option handling
   - Always return `versionId` when returning content

2. **Modify `updateSections`** in `note-tools.ts`:
   - Add `force` parameter
   - Add validation requiring `ifMatch` unless `force: true`

3. **Update tool schemas** in `index.ts`:
   - Add `withLineNumbers` property to `read_note` schema
   - Add `force` property to `update_sections` schema
   - Update `ifMatch` description to indicate it's required

4. **Update call site** in `index.ts`:
   - Pass `force` parameter through to `updateSections`

5. **Write tests** for new behaviors

6. **Build and test** in Obsidian

---

## Verification

1. **`read_note` with `withLineNumbers: true`** → returns numbered content, `totalLines`, `versionId`
2. **`read_note` without options** → returns content with `versionId` (new behavior)
3. **`update_sections` without `ifMatch`** → returns error with helpful message
4. **`update_sections` with `force: true`** → proceeds without version check
5. **`update_sections` with valid `ifMatch`** → proceeds normally
6. **`update_sections` with stale `ifMatch`** → returns version mismatch error

---

## Breaking Change

**Impact:** Callers that omit `ifMatch` from `update_sections` will receive an error unless they explicitly pass `force: true`.

**Mitigation:** The error message explains how to fix the issue and mentions the `force` option for those who intentionally want to skip version checking.
@@ -1,7 +1,7 @@
 {
   "id": "mcp-server",
   "name": "MCP Server",
-  "version": "1.1.1",
+  "version": "1.2.1",
   "minAppVersion": "0.15.0",
   "description": "Exposes vault operations via Model Context Protocol (MCP) over HTTP.",
   "author": "William Ballou",
4
package-lock.json
generated
@@ -1,12 +1,12 @@
 {
   "name": "mcp-server",
-  "version": "1.0.1",
+  "version": "1.2.1",
   "lockfileVersion": 3,
   "requires": true,
   "packages": {
     "": {
       "name": "mcp-server",
-      "version": "1.0.1",
+      "version": "1.2.1",
       "license": "MIT",
       "dependencies": {
         "cors": "^2.8.5",
@@ -1,6 +1,6 @@
 {
   "name": "mcp-server",
-  "version": "1.1.1",
+  "version": "1.2.1",
   "description": "MCP (Model Context Protocol) server plugin - exposes vault operations via HTTP",
   "main": "main.js",
   "scripts": {
@@ -33,9 +33,6 @@ export interface IVaultAdapter {

   // File modification
   modify(file: TFile, data: string): Promise<void>;
-
-  // File deletion (respects Obsidian trash settings)
-  trash(file: TAbstractFile, system: boolean): Promise<void>;
 }

 /**
@@ -42,8 +42,4 @@ export class VaultAdapter implements IVaultAdapter {
   async modify(file: TFile, data: string): Promise<void> {
     await this.vault.modify(file, data);
   }
-
-  async trash(file: TAbstractFile, system: boolean): Promise<void> {
-    await this.vault.trash(file, system);
-  }
 }
12
src/main.ts
@@ -42,7 +42,7 @@ export default class MCPServerPlugin extends Plugin {
     this.updateStatusBar();

     // Add ribbon icon to toggle server
-    this.addRibbonIcon('server', 'Toggle MCP Server', async () => {
+    this.addRibbonIcon('server', 'Toggle MCP server', async () => {
       if (this.mcpServer?.isRunning()) {
         await this.stopServer();
       } else {
@@ -52,7 +52,7 @@ export default class MCPServerPlugin extends Plugin {

     // Register commands
     this.addCommand({
-      id: 'start-mcp-server',
+      id: 'start-server',
       name: 'Start server',
       callback: async () => {
         await this.startServer();
@@ -60,7 +60,7 @@ export default class MCPServerPlugin extends Plugin {
     });

     this.addCommand({
-      id: 'stop-mcp-server',
+      id: 'stop-server',
       name: 'Stop server',
       callback: async () => {
         await this.stopServer();
@@ -68,7 +68,7 @@ export default class MCPServerPlugin extends Plugin {
     });

     this.addCommand({
-      id: 'restart-mcp-server',
+      id: 'restart-server',
       name: 'Restart server',
       callback: async () => {
         await this.stopServer();
@@ -93,8 +93,8 @@ export default class MCPServerPlugin extends Plugin {
     }
   }

-  async onunload() {
-    await this.stopServer();
+  onunload() {
+    void this.stopServer();
   }

   async startServer() {
@@ -38,9 +38,9 @@ export class MCPServer {
     try {
       switch (request.method) {
         case 'initialize':
-          return this.createSuccessResponse(request.id, await this.handleInitialize(request.params ?? {}));
+          return this.createSuccessResponse(request.id, this.handleInitialize(request.params ?? {}));
         case 'tools/list':
-          return this.createSuccessResponse(request.id, await this.handleListTools());
+          return this.createSuccessResponse(request.id, this.handleListTools());
         case 'tools/call':
           return this.createSuccessResponse(request.id, await this.handleCallTool(request.params ?? {}));
         case 'ping':
@@ -54,7 +54,7 @@ export class MCPServer {
     }
   }

-  private async handleInitialize(_params: JSONRPCParams): Promise<InitializeResult> {
+  private handleInitialize(_params: JSONRPCParams): InitializeResult {
     return {
       protocolVersion: "2024-11-05",
       capabilities: {
@@ -67,7 +67,7 @@ export class MCPServer {
     };
   }

-  private async handleListTools(): Promise<ListToolsResult> {
+  private handleListTools(): ListToolsResult {
     return {
       tools: this.toolRegistry.getToolDefinitions()
     };
@@ -101,7 +101,8 @@ export class MCPServer {
   public async start(): Promise<void> {
     return new Promise((resolve, reject) => {
       try {
-        this.server = this.app.listen(this.settings.port, '127.0.0.1', () => {
+        const bindAddress = this.settings.allowedIPs?.trim() ? '0.0.0.0' : '127.0.0.1';
+        this.server = this.app.listen(this.settings.port, bindAddress, () => {
           resolve();
         });

@@ -113,7 +114,7 @@ export class MCPServer {
         }
       });
     } catch (error) {
-      reject(error);
+      reject(error instanceof Error ? error : new Error(String(error)));
     }
   });
 }
@@ -3,12 +3,24 @@ import express from 'express';
import cors from 'cors';
import { MCPServerSettings } from '../types/settings-types';
import { ErrorCodes, JSONRPCResponse } from '../types/mcp-types';
import { parseAllowedIPs, isIPAllowed } from '../utils/network-utils';

export function setupMiddleware(app: Express, settings: MCPServerSettings, createErrorResponse: (id: string | number | null, code: number, message: string) => JSONRPCResponse): void {
const allowList = parseAllowedIPs(settings.allowedIPs);

// Parse JSON bodies
app.use(express.json());

// CORS configuration - Always enabled with fixed localhost-only policy
// Source IP validation - reject connections from unlisted IPs before any other checks
app.use((req: Request, res: Response, next: NextFunction) => {
const remoteAddress = req.socket.remoteAddress;
if (remoteAddress && !isIPAllowed(remoteAddress, allowList)) {
return res.status(403).json(createErrorResponse(null, ErrorCodes.InvalidRequest, 'Connection from this IP is not allowed'));
}
next();
});

// CORS configuration
const corsOptions = {
origin: (origin: string | undefined, callback: (err: Error | null, allow?: boolean) => void) => {
// Allow requests with no origin (like CLI clients, curl, MCP SDKs)
@@ -19,10 +31,22 @@ export function setupMiddleware(app: Express, settings: MCPServerSettings, creat
// Allow localhost and 127.0.0.1 on any port, both HTTP and HTTPS
const localhostRegex = /^https?:\/\/(localhost|127\.0\.0\.1)(:\d+)?$/;
if (localhostRegex.test(origin)) {
callback(null, true);
} else {
callback(new Error('Not allowed by CORS'));
return callback(null, true);
}

// Check if origin hostname is in the allow-list
if (allowList.length > 0) {
try {
const url = new URL(origin);
if (isIPAllowed(url.hostname, allowList)) {
return callback(null, true);
}
} catch {
// Invalid origin URL, fall through to reject
}
}

callback(new Error('Not allowed by CORS'));
},
credentials: true
};
@@ -44,15 +68,27 @@ export function setupMiddleware(app: Express, settings: MCPServerSettings, creat
next();
});

// Origin validation for security (DNS rebinding protection)
// Host header validation for security (DNS rebinding protection)
app.use((req: Request, res: Response, next: NextFunction) => {
const host = req.headers.host;

// Only allow localhost connections
if (host && !host.startsWith('localhost') && !host.startsWith('127.0.0.1')) {
return res.status(403).json(createErrorResponse(null, ErrorCodes.InvalidRequest, 'Only localhost connections allowed'));
if (!host) {
return next();
}

next();
// Strip port from host header
const hostname = host.split(':')[0];

// Always allow localhost
if (hostname === 'localhost' || hostname === '127.0.0.1') {
return next();
}

// Check against allow-list
if (allowList.length > 0 && isIPAllowed(hostname, allowList)) {
return next();
}

return res.status(403).json(createErrorResponse(null, ErrorCodes.InvalidRequest, 'Connection from this host is not allowed'));
});
}

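The middleware above depends on a new `network-utils` module that, per the changelog, parses comma-separated IPs and CIDR ranges with no external dependencies and handles IPv4-mapped IPv6 addresses (`::ffff:x.x.x.x`) transparently. The module itself is not shown in this diff; the following is a minimal sketch of what `parseAllowedIPs` and `isIPAllowed` might look like (IPv4 only; the entry shape and the always-allow-localhost handling are assumptions, not the plugin's actual implementation):

```typescript
// Hypothetical sketch of the network-utils helpers used by the middleware.
type AllowEntry = { base: number; maskBits: number };

function ipv4ToInt(ip: string): number | null {
  // Strip the IPv4-mapped IPv6 prefix, e.g. "::ffff:192.168.1.50"
  const plain = ip.startsWith('::ffff:') ? ip.slice(7) : ip;
  const parts = plain.split('.');
  if (parts.length !== 4) return null;
  let value = 0;
  for (const part of parts) {
    const octet = Number(part);
    if (!Number.isInteger(octet) || octet < 0 || octet > 255) return null;
    value = value * 256 + octet;
  }
  return value;
}

function parseAllowedIPs(setting: string | undefined): AllowEntry[] {
  if (!setting) return [];
  const entries: AllowEntry[] = [];
  for (const raw of setting.split(',')) {
    const token = raw.trim();
    if (!token) continue;
    const [ip, bits] = token.split('/');
    const base = ipv4ToInt(ip);
    if (base === null) continue; // skip unparseable entries
    const maskBits = bits === undefined ? 32 : Number(bits); // bare IP = /32
    if (!Number.isInteger(maskBits) || maskBits < 0 || maskBits > 32) continue;
    entries.push({ base, maskBits });
  }
  return entries;
}

function isIPAllowed(address: string, allowList: AllowEntry[]): boolean {
  const ip = ipv4ToInt(address);
  if (ip === null) return false;
  // Localhost is always implicitly allowed
  if (ip === ipv4ToInt('127.0.0.1')) return true;
  return allowList.some(({ base, maskBits }) => {
    const mask = maskBits === 0 ? 0 : (-1 << (32 - maskBits)) >>> 0;
    return ((ip & mask) >>> 0) === ((base & mask) >>> 0);
  });
}
```

With this, a Tailscale-style setting such as `100.64.0.0/10, 192.168.1.50` admits any address in the 100.64/10 range plus the single listed host.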
@@ -7,15 +7,17 @@ export function setupRoutes(
createErrorResponse: (id: string | number | null, code: number, message: string) => JSONRPCResponse
): void {
// Main MCP endpoint
app.post('/mcp', async (req: Request, res: Response) => {
try {
const request = req.body as JSONRPCRequest;
const response = await handleRequest(request);
res.json(response);
} catch (error) {
console.error('MCP request error:', error);
res.status(500).json(createErrorResponse(null, ErrorCodes.InternalError, 'Internal server error'));
}
app.post('/mcp', (req: Request, res: Response) => {
void (async () => {
try {
const request = req.body as JSONRPCRequest;
const response = await handleRequest(request);
res.json(response);
} catch (error) {
console.error('MCP request error:', error);
res.status(500).json(createErrorResponse(null, ErrorCodes.InternalError, 'Internal server error'));
}
})();
});

// Health check endpoint

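For reference, what the `/mcp` route above receives is a JSON-RPC 2.0 envelope. A small sketch of how a client could build and send one (the tool name, arguments, port, and helper name are illustrative; the Bearer token is the plugin's mandatory API key):

```typescript
// Build a JSON-RPC 2.0 envelope for the /mcp endpoint.
interface JSONRPCRequestBody {
  jsonrpc: '2.0';
  id: number;
  method: string;
  params?: { [key: string]: unknown };
}

function buildToolCall(id: number, tool: string, args: { [key: string]: unknown }): JSONRPCRequestBody {
  return {
    jsonrpc: '2.0',
    id,
    method: 'tools/call',
    params: { name: tool, arguments: args },
  };
}

// A client would then POST it, e.g.:
// fetch('http://127.0.0.1:<port>/mcp', {
//   method: 'POST',
//   headers: {
//     'Content-Type': 'application/json',
//     'Authorization': `Bearer ${apiKey}`, // bearer auth is mandatory
//   },
//   body: JSON.stringify(buildToolCall(1, 'read_note', { path: 'Notes/example.md' })),
// });
```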
@@ -1,5 +1,4 @@
import { App, Notice, PluginSettingTab, Setting } from 'obsidian';
import { MCPPluginSettings } from './types/settings-types';
import MCPServerPlugin from './main';
import { generateApiKey } from './utils/auth-utils';

@@ -139,9 +138,10 @@ export class MCPServerSettingTab extends PluginSettingTab {
const statusEl = containerEl.createEl('div', {cls: 'mcp-server-status'});
const isRunning = this.plugin.mcpServer?.isRunning() ?? false;

const bindAddress = this.plugin.settings.allowedIPs?.trim() ? '0.0.0.0' : '127.0.0.1';
statusEl.createEl('p', {
text: isRunning
? `✅ Running on http://127.0.0.1:${this.plugin.settings.port}/mcp`
? `✅ Running on http://${bindAddress}:${this.plugin.settings.port}/mcp`
: '⭕ Stopped'
});

@@ -204,10 +204,33 @@ export class MCPServerSettingTab extends PluginSettingTab {
}
}));

// Allowed IPs setting
new Setting(containerEl)
.setName('Allowed IPs')
.setDesc('Comma-separated IPs or CIDR ranges allowed to connect remotely (e.g., 100.64.0.0/10, 192.168.1.50). Leave empty for localhost only. Restart required.')
.addText(text => text
.setPlaceholder('100.64.0.0/10, 192.168.1.0/24')
.setValue(this.plugin.settings.allowedIPs)
.onChange(async (value) => {
this.plugin.settings.allowedIPs = value;
await this.plugin.saveSettings();
if (this.plugin.mcpServer?.isRunning()) {
new Notice('⚠️ Server restart required for allowed IPs changes to take effect');
}
}));

// Security note when remote access is enabled
if (this.plugin.settings.allowedIPs?.trim()) {
const securityNote = containerEl.createEl('div', {cls: 'mcp-security-note'});
securityNote.createEl('p', {
text: '⚠️ Server is accessible from non-localhost IPs. Ensure your API key is kept secure.'
});
}

// Authentication (Always Enabled)
const authDetails = containerEl.createEl('details', {cls: 'mcp-auth-section'});
const authSummary = authDetails.createEl('summary', {cls: 'mcp-auth-summary'});
authSummary.setText('Authentication & Configuration');
authSummary.setText('Authentication & configuration');

// Store reference for targeted updates
this.authDetailsEl = authDetails;
@@ -317,7 +340,7 @@ export class MCPServerSettingTab extends PluginSettingTab {
// Notification Settings
const notifDetails = containerEl.createEl('details', {cls: 'mcp-auth-section'});
const notifSummary = notifDetails.createEl('summary', {cls: 'mcp-auth-summary'});
notifSummary.setText('UI Notifications');
notifSummary.setText('UI notifications');

// Store reference for targeted updates
this.notificationDetailsEl = notifDetails;

@@ -5,6 +5,7 @@ import { VaultTools } from './vault-tools';
import { createNoteTools } from './note-tools-factory';
import { createVaultTools } from './vault-tools-factory';
import { NotificationManager } from '../ui/notifications';
import { YAMLValue } from '../utils/frontmatter-utils';

export class ToolRegistry {
private noteTools: NoteTools;
@@ -27,7 +28,7 @@ export class ToolRegistry {
return [
{
name: "read_note",
description: "Read the content of a file from the Obsidian vault with optional frontmatter parsing. Returns word count (excluding frontmatter and Obsidian comments) when content is included in the response. Use this to read the contents of a specific note or file. Path must be vault-relative (no leading slash) and include the file extension. Use list() first if you're unsure of the exact path. This only works on files, not folders. By default returns raw content with word count. Set parseFrontmatter to true to get structured data with separated frontmatter, content, and word count.",
description: "Read the content of a file from the Obsidian vault with optional frontmatter parsing. Returns versionId for concurrency control. When withLineNumbers is true, prefixes each line with its number (e.g., '1→content') for use with update_sections. Returns word count (excluding frontmatter and Obsidian comments) when content is included. Path must be vault-relative (no leading slash) and include the file extension. Use list() first if you're unsure of the exact path.",
inputSchema: {
type: "object",
properties: {
@@ -46,6 +47,10 @@ export class ToolRegistry {
parseFrontmatter: {
type: "boolean",
description: "If true, parse and separate frontmatter from content, returning structured JSON. If false (default), return raw file content as plain text. Use true when you need to work with frontmatter separately."
},
withLineNumbers: {
type: "boolean",
description: "If true (default), prefix each line with its line number (e.g., '1→content'). This helps AI assistants reference specific line numbers when discussing notes. Returns totalLines count and versionId for use with ifMatch parameter. Set to false to get raw content without line prefixes. Default: true"
}
},
required: ["path"]
@@ -182,11 +187,15 @@ export class ToolRegistry {
},
ifMatch: {
type: "string",
description: "Optional ETag/versionId for concurrency control. If provided, update only proceeds if file hasn't been modified. Get versionId from read operations. Prevents conflicting edits in concurrent scenarios."
description: "Required: ETag/versionId for concurrency control. Get this from read_note response (always included). Update only proceeds if file hasn't changed since read. Omit only with force:true."
},
validateLinks: {
type: "boolean",
description: "If true (default), automatically validate all wikilinks and embeds in the entire note after applying section edits, returning detailed broken link information. If false, skip link validation for better performance. Link validation checks [[wikilinks]], [[note#heading]] links, and ![[embeds]]. Default: true"
},
force: {
type: "boolean",
description: "If true, skip version check and apply edits without ifMatch. Use only when you intentionally want to overwrite without checking for concurrent changes. Not recommended. Default: false"
}
},
required: ["path", "edits"]
@@ -474,8 +483,7 @@ export class ToolRegistry {
];
}

// eslint-disable-next-line @typescript-eslint/no-explicit-any
async callTool(name: string, args: any): Promise<CallToolResult> {
async callTool(name: string, args: Record<string, unknown>): Promise<CallToolResult> {
const startTime = Date.now();

// Show tool call notification
@@ -487,124 +495,162 @@ export class ToolRegistry {
let result: CallToolResult;

switch (name) {
case "read_note":
result = await this.noteTools.readNote(args.path, {
withFrontmatter: args.withFrontmatter,
withContent: args.withContent,
parseFrontmatter: args.parseFrontmatter
case "read_note": {
const a = args as { path: string; withFrontmatter?: boolean; withContent?: boolean; parseFrontmatter?: boolean; withLineNumbers?: boolean };
result = await this.noteTools.readNote(a.path, {
withFrontmatter: a.withFrontmatter,
withContent: a.withContent,
parseFrontmatter: a.parseFrontmatter,
withLineNumbers: a.withLineNumbers
});
break;
case "create_note":
}
case "create_note": {
const a = args as { path: string; content: string; createParents?: boolean; onConflict?: 'error' | 'overwrite' | 'rename'; validateLinks?: boolean };
result = await this.noteTools.createNote(
args.path,
args.content,
args.createParents ?? false,
args.onConflict ?? 'error',
args.validateLinks ?? true
a.path,
a.content,
a.createParents ?? false,
a.onConflict ?? 'error',
a.validateLinks ?? true
);
break;
case "update_note":
}
case "update_note": {
const a = args as { path: string; content: string; validateLinks?: boolean };
result = await this.noteTools.updateNote(
args.path,
args.content,
args.validateLinks ?? true
a.path,
a.content,
a.validateLinks ?? true
);
break;
case "update_frontmatter":
}
case "update_frontmatter": {
const a = args as { path: string; patch?: Record<string, YAMLValue>; remove?: string[]; ifMatch?: string };
result = await this.noteTools.updateFrontmatter(
args.path,
args.patch,
args.remove ?? [],
args.ifMatch
a.path,
a.patch,
a.remove ?? [],
a.ifMatch
);
break;
case "update_sections":
}
case "update_sections": {
const a = args as { path: string; edits: Array<{ startLine: number; endLine: number; content: string }>; ifMatch?: string; validateLinks?: boolean; force?: boolean };
result = await this.noteTools.updateSections(
args.path,
args.edits,
args.ifMatch,
args.validateLinks ?? true
a.path,
a.edits,
a.ifMatch,
a.validateLinks ?? true,
a.force ?? false
);
break;
case "rename_file":
}
case "rename_file": {
const a = args as { path: string; newPath: string; updateLinks?: boolean; ifMatch?: string };
result = await this.noteTools.renameFile(
args.path,
args.newPath,
args.updateLinks ?? true,
args.ifMatch
a.path,
a.newPath,
a.updateLinks ?? true,
a.ifMatch
);
break;
case "delete_note":
}
case "delete_note": {
const a = args as { path: string; soft?: boolean; dryRun?: boolean; ifMatch?: string };
result = await this.noteTools.deleteNote(
args.path,
args.soft ?? true,
args.dryRun ?? false,
args.ifMatch
a.path,
a.soft ?? true,
a.dryRun ?? false,
a.ifMatch
);
break;
case "search":
}
case "search": {
const a = args as { query: string; isRegex?: boolean; caseSensitive?: boolean; includes?: string[]; excludes?: string[]; folder?: string; returnSnippets?: boolean; snippetLength?: number; maxResults?: number };
result = await this.vaultTools.search({
query: args.query,
isRegex: args.isRegex,
caseSensitive: args.caseSensitive,
includes: args.includes,
excludes: args.excludes,
folder: args.folder,
returnSnippets: args.returnSnippets,
snippetLength: args.snippetLength,
maxResults: args.maxResults
query: a.query,
isRegex: a.isRegex,
caseSensitive: a.caseSensitive,
includes: a.includes,
excludes: a.excludes,
folder: a.folder,
returnSnippets: a.returnSnippets,
snippetLength: a.snippetLength,
maxResults: a.maxResults
});
break;
case "search_waypoints":
result = await this.vaultTools.searchWaypoints(args.folder);
}
case "search_waypoints": {
const a = args as { folder?: string };
result = await this.vaultTools.searchWaypoints(a.folder);
break;
}
case "get_vault_info":
result = await this.vaultTools.getVaultInfo();
result = this.vaultTools.getVaultInfo();
break;
case "list":
case "list": {
const a = args as { path?: string; recursive?: boolean; includes?: string[]; excludes?: string[]; only?: 'files' | 'directories' | 'any'; limit?: number; cursor?: string; withFrontmatterSummary?: boolean; includeWordCount?: boolean };
result = await this.vaultTools.list({
path: args.path,
recursive: args.recursive,
includes: args.includes,
excludes: args.excludes,
only: args.only,
limit: args.limit,
cursor: args.cursor,
withFrontmatterSummary: args.withFrontmatterSummary,
includeWordCount: args.includeWordCount
path: a.path,
recursive: a.recursive,
includes: a.includes,
excludes: a.excludes,
only: a.only,
limit: a.limit,
cursor: a.cursor,
withFrontmatterSummary: a.withFrontmatterSummary,
includeWordCount: a.includeWordCount
});
break;
case "stat":
result = await this.vaultTools.stat(args.path, args.includeWordCount);
}
case "stat": {
const a = args as { path: string; includeWordCount?: boolean };
result = await this.vaultTools.stat(a.path, a.includeWordCount);
break;
case "exists":
result = await this.vaultTools.exists(args.path);
}
case "exists": {
const a = args as { path: string };
result = this.vaultTools.exists(a.path);
break;
case "read_excalidraw":
result = await this.noteTools.readExcalidraw(args.path, {
includeCompressed: args.includeCompressed,
includePreview: args.includePreview
}
case "read_excalidraw": {
const a = args as { path: string; includeCompressed?: boolean; includePreview?: boolean };
result = await this.noteTools.readExcalidraw(a.path, {
includeCompressed: a.includeCompressed,
includePreview: a.includePreview
});
break;
case "get_folder_waypoint":
result = await this.vaultTools.getFolderWaypoint(args.path);
}
case "get_folder_waypoint": {
const a = args as { path: string };
result = await this.vaultTools.getFolderWaypoint(a.path);
break;
case "is_folder_note":
result = await this.vaultTools.isFolderNote(args.path);
}
case "is_folder_note": {
const a = args as { path: string };
result = await this.vaultTools.isFolderNote(a.path);
break;
case "validate_wikilinks":
result = await this.vaultTools.validateWikilinks(args.path);
}
case "validate_wikilinks": {
const a = args as { path: string };
result = await this.vaultTools.validateWikilinks(a.path);
break;
case "resolve_wikilink":
result = await this.vaultTools.resolveWikilink(args.sourcePath, args.linkText);
}
case "resolve_wikilink": {
const a = args as { sourcePath: string; linkText: string };
result = this.vaultTools.resolveWikilink(a.sourcePath, a.linkText);
break;
case "backlinks":
}
case "backlinks": {
const a = args as { path: string; includeUnlinked?: boolean; includeSnippets?: boolean };
result = await this.vaultTools.getBacklinks(
args.path,
args.includeUnlinked ?? false,
args.includeSnippets ?? true
a.path,
a.includeUnlinked ?? false,
a.includeSnippets ?? true
);
break;
}
default:
result = {
content: [{ type: "text", text: `Unknown tool: ${name}` }],

@@ -1,4 +1,4 @@
import { App, TFile } from 'obsidian';
import { App } from 'obsidian';
import {
CallToolResult,
ParsedNote,
@@ -34,6 +34,7 @@ export class NoteTools {
withFrontmatter?: boolean;
withContent?: boolean;
parseFrontmatter?: boolean;
withLineNumbers?: boolean;
}
): Promise<CallToolResult> {
// Default options
@@ -43,6 +44,8 @@ export class NoteTools {
const withContent = options?.withContent ?? true;
/* istanbul ignore next */
const parseFrontmatter = options?.parseFrontmatter ?? false;
/* istanbul ignore next */
const withLineNumbers = options?.withLineNumbers ?? true;

// Validate path
if (!path || path.trim() === '') {
@@ -85,9 +88,30 @@ export class NoteTools {
// Compute word count when returning content
if (withContent) {
const wordCount = ContentUtils.countWords(content);
const versionId = VersionUtils.generateVersionId(file);

// If withLineNumbers, prefix each line with line number
if (withLineNumbers) {
const lines = content.split('\n');
const numberedContent = lines
.map((line, idx) => `${idx + 1}→${line}`)
.join('\n');

const result = {
content: numberedContent,
totalLines: lines.length,
versionId,
wordCount
};
return {
content: [{ type: "text", text: JSON.stringify(result, null, 2) }]
};
}

const result = {
content,
wordCount
wordCount,
versionId
};
return {
content: [{ type: "text", text: JSON.stringify(result, null, 2) }]
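The numbering logic above can be exercised in isolation. A minimal standalone version (the `N→content` format and `totalLines` field are taken from the diff; the helper name and optional start offset are hypothetical):

```typescript
// Standalone version of the line-numbering scheme used by read_note:
// each line is prefixed with "N→", and the total line count is reported.
function numberLines(content: string, startAt = 1): { numbered: string; totalLines: number } {
  const lines = content.split('\n');
  return {
    numbered: lines.map((line, idx) => `${startAt + idx}→${line}`).join('\n'),
    totalLines: lines.length,
  };
}

// numberLines('alpha\nbeta').numbered === '1→alpha\n2→beta'
```

The `startAt` offset mirrors how the frontmatter-aware path below shifts numbering so that `contentWithoutFrontmatter` lines keep their positions in the full file.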
@@ -101,13 +125,38 @@ export class NoteTools {
// Parse frontmatter if requested
const extracted = FrontmatterUtils.extractFrontmatter(content);

// Apply line numbers if requested
let resultContent = withContent ? content : '';
let resultContentWithoutFrontmatter = extracted.contentWithoutFrontmatter;
let totalLines: number | undefined;

if (withLineNumbers && withContent) {
const lines = content.split('\n');
resultContent = lines.map((line, idx) => `${idx + 1}→${line}`).join('\n');
totalLines = lines.length;

if (extracted.hasFrontmatter && extracted.contentWithoutFrontmatter) {
const contentLines = extracted.contentWithoutFrontmatter.split('\n');
// Calculate the offset: frontmatter lines + 1 for the empty line after ---
const frontmatterLineCount = extracted.frontmatter ? extracted.frontmatter.split('\n').length + 2 : 0;
resultContentWithoutFrontmatter = contentLines
.map((line, idx) => `${frontmatterLineCount + idx + 1}→${line}`)
.join('\n');
}
}

const result: ParsedNote = {
path: file.path,
hasFrontmatter: extracted.hasFrontmatter,
/* istanbul ignore next - Conditional content inclusion tested via integration tests */
content: withContent ? content : ''
content: resultContent
};

// Add totalLines when line numbers are enabled
if (totalLines !== undefined) {
result.totalLines = totalLines;
}

// Include frontmatter if requested
/* istanbul ignore next - Response building branches tested via integration tests */
if (withFrontmatter && extracted.hasFrontmatter) {
@@ -118,7 +167,7 @@ export class NoteTools {
// Include content without frontmatter if parsing
/* istanbul ignore next */
if (withContent && extracted.hasFrontmatter) {
result.contentWithoutFrontmatter = extracted.contentWithoutFrontmatter;
result.contentWithoutFrontmatter = resultContentWithoutFrontmatter;
}

// Add word count when content is included
@@ -248,7 +297,7 @@ export class NoteTools {

// Add link validation if requested
if (validateLinks) {
result.linkValidation = await LinkUtils.validateLinks(
result.linkValidation = LinkUtils.validateLinks(
this.vault,
this.metadata,
content,
@@ -388,7 +437,7 @@ export class NoteTools {

// Add link validation if requested
if (validateLinks) {
result.linkValidation = await LinkUtils.validateLinks(
result.linkValidation = LinkUtils.validateLinks(
this.vault,
this.metadata,
content,
@@ -587,7 +636,8 @@ export class NoteTools {
// Dry run - just return what would happen
if (dryRun) {
if (soft) {
destination = `.trash/${file.name}`;
// Destination depends on user's configured deletion preference
destination = 'trash';
}

const result: DeleteNoteResult = {
@@ -603,14 +653,13 @@ export class NoteTools {
};
}

// Perform actual deletion
// Perform actual deletion using user's preferred trash settings
// FileManager.trashFile() respects the user's configured deletion preference
// (system trash or .trash/ folder) as set in Obsidian settings
await this.fileManager.trashFile(file);
if (soft) {
// Move to trash using Obsidian's trash method
await this.vault.trash(file, true);
destination = `.trash/${file.name}`;
} else {
// Delete using user's preferred trash settings (system trash or .trash/ folder)
await this.fileManager.trashFile(file);
// For soft delete, indicate the file was moved to trash (location depends on user settings)
destination = 'trash';
}

const result: DeleteNoteResult = {
@@ -881,7 +930,8 @@ export class NoteTools {
path: string,
edits: SectionEdit[],
ifMatch?: string,
validateLinks: boolean = true
validateLinks: boolean = true,
force: boolean = false
): Promise<CallToolResult> {
// Validate path
if (!path || path.trim() === '') {
@@ -906,6 +956,20 @@ export class NoteTools {
};
}

// Require ifMatch unless force is true
if (!ifMatch && !force) {
return {
content: [{
type: "text",
text: JSON.stringify({
error: 'Version check required',
message: 'The ifMatch parameter is required to prevent overwriting concurrent changes. First call read_note with withLineNumbers:true to get the versionId, then pass it as ifMatch. To bypass this check, set force:true (not recommended).'
}, null, 2)
}],
isError: true
};
}

// Resolve file
const file = PathUtils.resolveFile(this.app, path);

@@ -990,7 +1054,7 @@ export class NoteTools {

// Add link validation if requested
if (validateLinks) {
result.linkValidation = await LinkUtils.validateLinks(
result.linkValidation = LinkUtils.validateLinks(
this.vault,
this.metadata,
newContent,

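The ifMatch/force gate added to `updateSections` above reduces to a small pure check. This sketch folds in the versionId comparison that the existing ifMatch path performs elsewhere; the function name and return shape are hypothetical (the plugin returns a `CallToolResult` instead):

```typescript
// Pure form of the concurrency gate: force bypasses everything; otherwise an
// ifMatch token is required and must equal the file's current versionId.
function checkVersionGate(
  ifMatch: string | undefined,
  force: boolean,
  currentVersionId: string
): { ok: boolean; reason?: string } {
  if (force) return { ok: true };
  if (!ifMatch) return { ok: false, reason: 'Version check required' };
  if (ifMatch !== currentVersionId) return { ok: false, reason: 'Version mismatch' };
  return { ok: true };
}
```

This mirrors the intended workflow: call `read_note` (which always returns `versionId`), pass it back as `ifMatch`, and only set `force:true` when overwriting concurrent changes is acceptable.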
@@ -1,5 +1,5 @@
import { TFile, TFolder } from 'obsidian';
import { CallToolResult, FileMetadata, DirectoryMetadata, VaultInfo, SearchResult, SearchMatch, StatResult, ExistsResult, ListResult, FileMetadataWithFrontmatter, FrontmatterSummary, WaypointSearchResult, FolderWaypointResult, FolderNoteResult, ValidateWikilinksResult, ResolveWikilinkResult, BacklinksResult } from '../types/mcp-types';
import { CallToolResult, FileMetadata, DirectoryMetadata, SearchResult, SearchMatch, StatResult, ExistsResult, ListResult, FileMetadataWithFrontmatter, FrontmatterSummary, WaypointSearchResult, FolderWaypointResult, FolderNoteResult, ValidateWikilinksResult, ResolveWikilinkResult, BacklinksResult } from '../types/mcp-types';
import { PathUtils } from '../utils/path-utils';
import { ErrorMessages } from '../utils/error-messages';
import { GlobUtils } from '../utils/glob-utils';
@@ -15,7 +15,7 @@ export class VaultTools {
private metadata: IMetadataCacheAdapter
) {}

async getVaultInfo(): Promise<CallToolResult> {
getVaultInfo(): CallToolResult {
try {
const allFiles = this.vault.getMarkdownFiles();
const totalNotes = allFiles.length;
@@ -60,7 +60,7 @@ export class VaultTools {
return Math.round((bytes / Math.pow(k, i)) * 100) / 100 + ' ' + sizes[i];
}

async listNotes(path?: string): Promise<CallToolResult> {
listNotes(path?: string): CallToolResult {
let items: Array<FileMetadata | DirectoryMetadata> = [];

// Normalize root path: undefined, empty string "", or "." all mean root
@@ -279,14 +279,14 @@ export class VaultTools {
// Apply type filtering and add items
if (item instanceof TFile) {
if (only !== 'directories') {
const fileMetadata = await this.createFileMetadataWithFrontmatter(item, withFrontmatterSummary || false);
const fileMetadata = this.createFileMetadataWithFrontmatter(item, withFrontmatterSummary || false);

// Optionally include word count (best effort)
if (includeWordCount) {
try {
const content = await this.vault.read(item);
fileMetadata.wordCount = ContentUtils.countWords(content);
} catch (error) {
} catch {
// Skip word count if file can't be read (binary file, etc.)
// wordCount field simply omitted for this file
}
@@ -307,10 +307,10 @@ export class VaultTools {
}
}

private async createFileMetadataWithFrontmatter(
private createFileMetadataWithFrontmatter(
file: TFile,
withFrontmatterSummary: boolean
): Promise<FileMetadataWithFrontmatter> {
): FileMetadataWithFrontmatter {
const baseMetadata = this.createFileMetadata(file);

if (!withFrontmatterSummary || file.extension !== 'md') {
@@ -356,7 +356,7 @@ export class VaultTools {
frontmatterSummary: summary
};
}
} catch (error) {
} catch {
// If frontmatter extraction fails, just return base metadata
}

@@ -390,7 +390,7 @@ export class VaultTools {
if (folderWithStat.stat && typeof folderWithStat.stat.mtime === 'number') {
modified = folderWithStat.stat.mtime;
}
} catch (error) {
} catch {
// Silently fail - modified will remain 0
}

@@ -442,7 +442,7 @@ export class VaultTools {
try {
const content = await this.vault.read(item);
metadata.wordCount = ContentUtils.countWords(content);
} catch (error) {
} catch {
// Skip word count if file can't be read (binary file, etc.)
}
}
@@ -495,7 +495,7 @@ export class VaultTools {
};
}

async exists(path: string): Promise<CallToolResult> {
exists(path: string): CallToolResult {
// Validate path
if (!PathUtils.isValidVaultPath(path)) {
return {
@@ -712,7 +712,7 @@ export class VaultTools {
}
}
}
} catch (error) {
} catch {
// Skip files that can't be read
}
}
@@ -922,7 +922,7 @@ export class VaultTools {
* Resolve a single wikilink from a source note
* Returns the target path if resolvable, or suggestions if not
*/
async resolveWikilink(sourcePath: string, linkText: string): Promise<CallToolResult> {
resolveWikilink(sourcePath: string, linkText: string): CallToolResult {
try {
// Normalize and validate source path
const normalizedPath = PathUtils.normalizePath(sourcePath);

@@ -16,6 +16,11 @@ export type JSONValue =
*/
export type JSONRPCParams = { [key: string]: JSONValue } | JSONValue[];

/**
* Tool arguments are always objects (not arrays)
*/
export type ToolArguments = { [key: string]: JSONValue };

export interface JSONRPCRequest {
jsonrpc: "2.0";
id?: string | number;

@@ -47,7 +52,7 @@ export enum ErrorCodes {
export interface InitializeResult {
protocolVersion: string;
capabilities: {
tools?: {};
tools?: object;
};
serverInfo: {
name: string;

@@ -213,6 +218,7 @@ export interface ParsedNote {
content: string;
contentWithoutFrontmatter?: string;
wordCount?: number;
totalLines?: number;
}

/**

@@ -3,6 +3,7 @@ export interface MCPServerSettings {
port: number;
apiKey: string; // Now required, not optional
enableAuth: boolean; // Will be removed in future, kept for migration
allowedIPs: string; // Comma-separated IPs/CIDRs allowed to connect remotely
}

export interface NotificationSettings {

@@ -20,6 +21,7 @@ export const DEFAULT_SETTINGS: MCPPluginSettings = {
port: 3000,
apiKey: '', // Will be auto-generated on first load
enableAuth: true, // Always true now
allowedIPs: '', // Empty = localhost only
autoStart: false,
// Notification defaults
notificationsEnabled: false,
@@ -7,8 +7,7 @@ import { MCPPluginSettings } from '../types/settings-types';
export interface NotificationHistoryEntry {
timestamp: number;
toolName: string;
// eslint-disable-next-line @typescript-eslint/no-explicit-any
args: any;
args: Record<string, unknown>;
success: boolean;
duration?: number;
error?: string;

@@ -75,8 +74,7 @@ export class NotificationManager {
/**
* Show notification for tool call start
*/
// eslint-disable-next-line @typescript-eslint/no-explicit-any
showToolCall(toolName: string, args: any, duration?: number): void {
showToolCall(toolName: string, args: Record<string, unknown>, duration?: number): void {
if (!this.shouldShowNotification()) {
return;
}

@@ -142,8 +140,7 @@ export class NotificationManager {
/**
* Format arguments for display
*/
// eslint-disable-next-line @typescript-eslint/no-explicit-any
private formatArgs(args: any): string {
private formatArgs(args: Record<string, unknown>): string {
if (!this.settings.showParameters) {
return '';
}

@@ -156,16 +153,16 @@ export class NotificationManager {
// Extract key parameters for display
const keyParams: string[] = [];

if (args.path) {
if (args.path && typeof args.path === 'string') {
keyParams.push(`path: "${this.truncateString(args.path, 30)}"`);
}
if (args.query) {
if (args.query && typeof args.query === 'string') {
keyParams.push(`query: "${this.truncateString(args.query, 30)}"`);
}
if (args.folder) {
if (args.folder && typeof args.folder === 'string') {
keyParams.push(`folder: "${this.truncateString(args.folder, 30)}"`);
}
if (args.recursive !== undefined) {
if (typeof args.recursive === 'boolean') {
keyParams.push(`recursive: ${args.recursive}`);
}

@@ -176,7 +173,7 @@ export class NotificationManager {
}

return keyParams.join(', ');
} catch (e) {
} catch {
return '';
}
}

@@ -196,9 +193,9 @@ export class NotificationManager {
*/
private queueNotification(notificationFn: () => void): void {
this.notificationQueue.push(notificationFn);

if (!this.isProcessingQueue) {
this.processQueue();
void this.processQueue();
}
}
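The pattern this hunk introduces — narrowing values pulled out of a `Record<string, unknown>` with `typeof` checks before use — can be shown standalone. The function name below is hypothetical; it only mirrors the shape of `formatArgs` above:

```typescript
// Standalone sketch of the type-narrowing used in formatArgs (illustrative only).
// With `unknown` values, TypeScript requires a typeof guard before string/boolean use.
function formatArgsSketch(args: Record<string, unknown>): string {
  const keyParams: string[] = [];
  if (args.path && typeof args.path === 'string') {
    keyParams.push(`path: "${args.path}"`);   // narrowed to string here
  }
  if (typeof args.recursive === 'boolean') {
    keyParams.push(`recursive: ${args.recursive}`); // false is still shown
  }
  return keyParams.join(', ');
}

console.log(formatArgsSketch({ path: 'notes/a.md', recursive: true }));
// path: "notes/a.md", recursive: true
```

Note that `typeof args.recursive === 'boolean'` is the right guard for the boolean field: the earlier `args.recursive !== undefined` check would also admit non-boolean values, and a plain truthiness check would drop `false`.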
@@ -15,19 +15,9 @@ function getCrypto(): Crypto {
return window.crypto;
}

// Node.js environment (15+) - uses Web Crypto API standard
if (typeof global !== 'undefined') {
try {
// Note: require() is necessary here for synchronous crypto access in Node.js
// This module is loaded conditionally and esbuild will handle this correctly during bundling
// eslint-disable-next-line @typescript-eslint/no-var-requires
const nodeCrypto = require('crypto') as typeof import('crypto');
if (nodeCrypto?.webcrypto) {
return nodeCrypto.webcrypto as unknown as Crypto;
}
} catch {
// Crypto module not available or failed to load
}
// Node.js/Electron environment - globalThis.crypto available in Node 20+
if (typeof globalThis !== 'undefined' && globalThis.crypto) {
return globalThis.crypto;
}

throw new Error('No Web Crypto API available in this environment');
@@ -8,13 +8,14 @@ interface ElectronSafeStorage {
// Safely import safeStorage - may not be available in all environments
let safeStorage: ElectronSafeStorage | null = null;
try {
// Note: require() is necessary here for synchronous access to Electron's safeStorage
// This module is loaded conditionally and may not be available in all environments
// esbuild will handle this correctly during bundling
// eslint-disable-next-line @typescript-eslint/no-var-requires
const electron = require('electron') as typeof import('electron');
safeStorage = electron.safeStorage || null;
} catch (error) {
// Access electron through the global window object in Obsidian's Electron environment
// This avoids require() while still getting synchronous access
const electronRemote = (window as Window & { require?: (module: string) => typeof import('electron') }).require;
if (electronRemote) {
const electron = electronRemote('electron');
safeStorage = electron.safeStorage || null;
}
} catch {
console.warn('Electron safeStorage not available, API keys will be stored in plaintext');
}
@@ -73,7 +73,7 @@ export class FrontmatterUtils {
let parsedFrontmatter: Record<string, YAMLValue> | null = null;
try {
parsedFrontmatter = parseYaml(frontmatter) || {};
} catch (error) {
} catch {
// If parsing fails, return null for parsed frontmatter
parsedFrontmatter = null;
}

@@ -326,7 +326,7 @@ export class FrontmatterUtils {
compressed: true // Indicate data is compressed
}
};
} catch (decompressError) {
} catch {
// Decompression failed
return {
isExcalidraw: true,

@@ -355,9 +355,9 @@ export class FrontmatterUtils {
version: jsonData.version || 2
}
};
} catch (error) {
} catch {
// If parsing fails, return with default values
const isExcalidraw = content.includes('excalidraw-plugin') ||
const isExcalidraw = content.includes('excalidraw-plugin') ||
content.includes('"type":"excalidraw"');
@@ -45,7 +45,7 @@ export class GlobUtils {
i++;
break;

case '[':
case '[': {
// Character class
const closeIdx = pattern.indexOf(']', i);
if (closeIdx === -1) {

@@ -57,8 +57,9 @@ export class GlobUtils {
i = closeIdx + 1;
}
break;

case '{':
}

case '{': {
// Alternatives {a,b,c}
const closeIdx2 = pattern.indexOf('}', i);
if (closeIdx2 === -1) {

@@ -67,13 +68,14 @@ export class GlobUtils {
i++;
} else {
const alternatives = pattern.substring(i + 1, closeIdx2).split(',');
regexStr += '(' + alternatives.map(alt =>
regexStr += '(' + alternatives.map(alt =>
this.escapeRegex(alt)
).join('|') + ')';
i = closeIdx2 + 1;
}
break;

}

case '/':
case '.':
case '(':
@@ -445,12 +445,12 @@ export class LinkUtils {
* @param sourcePath Path of the file containing the links
* @returns Structured validation result with categorized links
*/
static async validateLinks(
static validateLinks(
vault: IVaultAdapter,
metadata: IMetadataCacheAdapter,
content: string,
sourcePath: string
): Promise<LinkValidationResult> {
): LinkValidationResult {
const valid: string[] = [];
const brokenNotes: BrokenNoteLink[] = [];
const brokenHeadings: BrokenHeadingLink[] = [];
src/utils/network-utils.ts (new file, 84 lines)
@@ -0,0 +1,84 @@
export interface AllowedIPEntry {
  type: 'ip' | 'cidr';
  ip: number;   // 32-bit numeric IPv4
  mask: number; // 32-bit subnet mask (only for CIDR)
}

/**
 * Convert dotted IPv4 string to 32-bit number.
 * Returns null if invalid.
 */
function ipToNumber(ip: string): number | null {
  const parts = ip.split('.');
  if (parts.length !== 4) return null;
  let num = 0;
  for (const part of parts) {
    const octet = parseInt(part, 10);
    if (isNaN(octet) || octet < 0 || octet > 255) return null;
    num = (num << 8) | octet;
  }
  return num >>> 0; // Ensure unsigned
}

/**
 * Strip IPv4-mapped IPv6 prefix (::ffff:) if present.
 */
function normalizeIP(ip: string): string {
  if (ip.startsWith('::ffff:')) {
    return ip.slice(7);
  }
  return ip;
}

/**
 * Parse a comma-separated string of IPs and CIDRs into structured entries.
 * Invalid entries are silently skipped.
 */
export function parseAllowedIPs(setting: string): AllowedIPEntry[] {
  if (!setting || !setting.trim()) return [];

  const entries: AllowedIPEntry[] = [];
  for (const raw of setting.split(',')) {
    const trimmed = raw.trim();
    if (!trimmed) continue;

    if (trimmed.includes('/')) {
      const [ipStr, prefixStr] = trimmed.split('/');
      const ip = ipToNumber(ipStr);
      const prefix = parseInt(prefixStr, 10);
      if (ip === null || isNaN(prefix) || prefix < 0 || prefix > 32) continue;
      const mask = prefix === 0 ? 0 : (~0 << (32 - prefix)) >>> 0;
      entries.push({ type: 'cidr', ip: (ip & mask) >>> 0, mask });
    } else {
      const ip = ipToNumber(trimmed);
      if (ip === null) continue;
      entries.push({ type: 'ip', ip, mask: 0xFFFFFFFF });
    }
  }
  return entries;
}

/**
 * Check if an IP address is allowed by the given allow-list.
 * Localhost (127.0.0.1) is always allowed.
 */
export function isIPAllowed(ip: string, allowList: AllowedIPEntry[]): boolean {
  const normalized = normalizeIP(ip);

  // Localhost is always allowed
  if (normalized === '127.0.0.1' || normalized === 'localhost') return true;

  if (allowList.length === 0) return false;

  const num = ipToNumber(normalized);
  if (num === null) return false;

  for (const entry of allowList) {
    if (entry.type === 'ip') {
      if (num === entry.ip) return true;
    } else {
      if (((num & entry.mask) >>> 0) === entry.ip) return true;
    }
  }
  return false;
}
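The CIDR arithmetic in `network-utils` is compact, so here is a self-contained sketch of the same matching logic (standalone helpers, not importing the module) showing how the changelog's `100.64.0.0/10` example admits Tailscale addresses:

```typescript
// Standalone sketch mirroring the ipToNumber/isIPAllowed math above (illustrative only).
function ipv4ToNumber(ip: string): number | null {
  const parts = ip.split('.');
  if (parts.length !== 4) return null;
  let num = 0;
  for (const part of parts) {
    const octet = Number(part);
    if (!Number.isInteger(octet) || octet < 0 || octet > 255) return null;
    num = ((num << 8) | octet) >>> 0; // keep unsigned at each step
  }
  return num;
}

// True if `ip` falls inside `cidr`, e.g. inCidr('100.100.1.5', '100.64.0.0/10')
function inCidr(ip: string, cidr: string): boolean {
  const [baseStr, prefixStr] = cidr.split('/');
  const base = ipv4ToNumber(baseStr);
  const addr = ipv4ToNumber(ip);
  const prefix = parseInt(prefixStr, 10);
  if (base === null || addr === null || isNaN(prefix) || prefix < 0 || prefix > 32) {
    return false;
  }
  // /10 → top 10 bits set: 0xFFC00000
  const mask = prefix === 0 ? 0 : (~0 << (32 - prefix)) >>> 0;
  return ((addr & mask) >>> 0) === ((base & mask) >>> 0);
}

console.log(inCidr('100.100.1.5', '100.64.0.0/10'));  // Tailscale address → true
console.log(inCidr('192.168.1.10', '100.64.0.0/10')); // LAN address → false
```

The `>>> 0` after every shift/AND matters: JavaScript bitwise operators yield signed 32-bit values, so without it addresses with the high bit set (anything ≥ 128.0.0.0) would compare as negative numbers.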
@@ -66,7 +66,8 @@ export class PathUtils {

// Check for invalid characters (Windows restrictions)
// Invalid chars: < > : " | ? * and ASCII control characters (0-31)
const invalidChars = /[<>:"|?*\u0000-\u001F]/;
// eslint-disable-next-line no-control-regex -- Control characters \x00-\x1F required for Windows path validation
const invalidChars = /[<>:"|?*\x00-\x1F]/;
if (invalidChars.test(normalized)) {
return false;
}
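The `\u0000-\u001F` → `\x00-\x1F` rewrite keeps behavior identical (both ranges cover the same 32 control characters); the change just pairs the regex with an explicit `no-control-regex` suppression. A quick standalone illustration of what the check accepts and rejects (sample paths are hypothetical):

```typescript
// Same character class as the PathUtils hunk above (illustrative only).
// eslint-disable-next-line no-control-regex
const invalidChars = /[<>:"|?*\x00-\x1F]/;

console.log(invalidChars.test('notes/daily/2024-01-01.md')); // false → path is valid
console.log(invalidChars.test('notes/what?.md'));            // true → '?' is reserved on Windows
console.log(invalidChars.test('bad\u0007name.md'));          // true → control char rejected
```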
@@ -114,7 +114,7 @@ export class SearchUtils {
filesWithMatches.add(file.path);
matches.push(...filenameMatches);
}
} catch (error) {
} catch {
// Skip files that can't be read
}
}

@@ -323,7 +323,7 @@ export class SearchUtils {
waypointContent.push(line);
}
}
} catch (error) {
} catch {
// Skip files that can't be searched
}
}

@@ -100,7 +100,7 @@ export class WaypointUtils {
try {
const content = await vault.read(file);
hasWaypoint = this.hasWaypointMarker(content);
} catch (error) {
} catch {
// If we can't read the file, we can't check for waypoints
}
@@ -138,11 +138,18 @@ describe('crypto-adapter', () => {
const globalRef = global as any;
const originalWindow = globalRef.window;
const originalGlobal = globalRef.global;
const originalGlobalThisCrypto = globalThis.crypto;

try {
// Remove window.crypto and global access
// Remove window.crypto, global access, and globalThis.crypto
delete globalRef.window;
delete globalRef.global;
// In modern Node.js, globalThis.crypto is always available, so we must mock it too
Object.defineProperty(globalThis, 'crypto', {
value: undefined,
writable: true,
configurable: true
});

// Clear module cache to force re-evaluation
jest.resetModules();

@@ -157,6 +164,12 @@ describe('crypto-adapter', () => {
// Restore original values
globalRef.window = originalWindow;
globalRef.global = originalGlobal;
// Restore globalThis.crypto
Object.defineProperty(globalThis, 'crypto', {
value: originalGlobalThisCrypto,
writable: true,
configurable: true
});

// Clear module cache again to restore normal state
jest.resetModules();
@@ -1,18 +1,63 @@
import { encryptApiKey, decryptApiKey, isEncryptionAvailable } from '../src/utils/encryption-utils';
// Mock safeStorage implementation
const mockSafeStorage = {
isEncryptionAvailable: jest.fn(() => true),
encryptString: jest.fn((data: string) => Buffer.from(`encrypted:${data}`)),
decryptString: jest.fn((buffer: Buffer) => buffer.toString().replace('encrypted:', ''))
};

// Mock electron module
jest.mock('electron', () => ({
safeStorage: {
isEncryptionAvailable: jest.fn(() => true),
encryptString: jest.fn((data: string) => Buffer.from(`encrypted:${data}`)),
decryptString: jest.fn((buffer: Buffer) => {
const str = buffer.toString();
return str.replace('encrypted:', '');
})
// Setup window.require mock before importing the module
const mockWindowRequire = jest.fn((module: string) => {
if (module === 'electron') {
return { safeStorage: mockSafeStorage };
}
}));
throw new Error(`Module not found: ${module}`);
});

// Create mock window object for Node environment
const mockWindow: Window & { require?: unknown } = {
require: mockWindowRequire
} as unknown as Window & { require?: unknown };

// Store original global window
const originalWindow = (globalThis as unknown as { window?: unknown }).window;

// Set up window.require before tests run
beforeAll(() => {
(globalThis as unknown as { window: typeof mockWindow }).window = mockWindow;
});

// Clean up after all tests
afterAll(() => {
if (originalWindow === undefined) {
delete (globalThis as unknown as { window?: unknown }).window;
} else {
(globalThis as unknown as { window: typeof originalWindow }).window = originalWindow;
}
});

// Import after mock is set up - use require to ensure module loads after mock
let encryptApiKey: typeof import('../src/utils/encryption-utils').encryptApiKey;
let decryptApiKey: typeof import('../src/utils/encryption-utils').decryptApiKey;
let isEncryptionAvailable: typeof import('../src/utils/encryption-utils').isEncryptionAvailable;

beforeAll(() => {
// Reset modules to ensure fresh load with mock
jest.resetModules();
const encryptionUtils = require('../src/utils/encryption-utils');
encryptApiKey = encryptionUtils.encryptApiKey;
decryptApiKey = encryptionUtils.decryptApiKey;
isEncryptionAvailable = encryptionUtils.isEncryptionAvailable;
});

describe('Encryption Utils', () => {
beforeEach(() => {
// Reset mock implementations before each test
mockSafeStorage.isEncryptionAvailable.mockReturnValue(true);
mockSafeStorage.encryptString.mockImplementation((data: string) => Buffer.from(`encrypted:${data}`));
mockSafeStorage.decryptString.mockImplementation((buffer: Buffer) => buffer.toString().replace('encrypted:', ''));
mockWindowRequire.mockClear();
});

describe('encryptApiKey', () => {
it('should encrypt API key when encryption is available', () => {
const apiKey = 'test-api-key-12345';

@@ -23,13 +68,23 @@ describe('Encryption Utils', () => {
});

it('should return plaintext when encryption is not available', () => {
const { safeStorage } = require('electron');
safeStorage.isEncryptionAvailable.mockReturnValueOnce(false);
// Need to reload module with different mock behavior
jest.resetModules();
const mockStorage = {
isEncryptionAvailable: jest.fn(() => false),
encryptString: jest.fn(),
decryptString: jest.fn()
};
mockWindow.require = jest.fn(() => ({ safeStorage: mockStorage }));

const { encryptApiKey: encrypt } = require('../src/utils/encryption-utils');
const apiKey = 'test-api-key-12345';
const result = encryptApiKey(apiKey);
const result = encrypt(apiKey);

expect(result).toBe(apiKey);

// Restore original mock
mockWindow.require = mockWindowRequire;
});

it('should handle empty string', () => {

@@ -73,92 +128,107 @@ describe('Encryption Utils', () => {

describe('error handling', () => {
it('should handle encryption errors and fallback to plaintext', () => {
const { safeStorage } = require('electron');
const originalEncrypt = safeStorage.encryptString;
safeStorage.encryptString = jest.fn(() => {
throw new Error('Encryption failed');
});
// Reload module with error-throwing mock
jest.resetModules();
const mockStorage = {
isEncryptionAvailable: jest.fn(() => true),
encryptString: jest.fn(() => {
throw new Error('Encryption failed');
}),
decryptString: jest.fn()
};
mockWindow.require = jest.fn(() => ({ safeStorage: mockStorage }));

const { encryptApiKey: encrypt } = require('../src/utils/encryption-utils');
const apiKey = 'test-api-key-12345';
const result = encryptApiKey(apiKey);
const result = encrypt(apiKey);

expect(result).toBe(apiKey); // Should return plaintext on error
safeStorage.encryptString = originalEncrypt; // Restore

// Restore original mock
mockWindow.require = mockWindowRequire;
});

it('should throw error when decryption fails', () => {
const { safeStorage } = require('electron');
const originalDecrypt = safeStorage.decryptString;
safeStorage.decryptString = jest.fn(() => {
throw new Error('Decryption failed');
});
// Reload module with error-throwing mock
jest.resetModules();
const mockStorage = {
isEncryptionAvailable: jest.fn(() => true),
encryptString: jest.fn((data: string) => Buffer.from(`encrypted:${data}`)),
decryptString: jest.fn(() => {
throw new Error('Decryption failed');
})
};
mockWindow.require = jest.fn(() => ({ safeStorage: mockStorage }));

const { decryptApiKey: decrypt } = require('../src/utils/encryption-utils');
const encrypted = 'encrypted:aW52YWxpZA=='; // Invalid encrypted data

expect(() => decryptApiKey(encrypted)).toThrow('Failed to decrypt API key');
safeStorage.decryptString = originalDecrypt; // Restore
expect(() => decrypt(encrypted)).toThrow('Failed to decrypt API key');

// Restore original mock
mockWindow.require = mockWindowRequire;
});
});

describe('isEncryptionAvailable', () => {
it('should return true when encryption is available', () => {
const { isEncryptionAvailable } = require('../src/utils/encryption-utils');
const { safeStorage } = require('electron');
jest.resetModules();
const mockStorage = {
isEncryptionAvailable: jest.fn(() => true),
encryptString: jest.fn(),
decryptString: jest.fn()
};
mockWindow.require = jest.fn(() => ({ safeStorage: mockStorage }));

safeStorage.isEncryptionAvailable.mockReturnValueOnce(true);
expect(isEncryptionAvailable()).toBe(true);
const { isEncryptionAvailable: checkAvail } = require('../src/utils/encryption-utils');
expect(checkAvail()).toBe(true);

// Restore
mockWindow.require = mockWindowRequire;
});

it('should return false when encryption is not available', () => {
const { isEncryptionAvailable } = require('../src/utils/encryption-utils');
const { safeStorage } = require('electron');
jest.resetModules();
const mockStorage = {
isEncryptionAvailable: jest.fn(() => false),
encryptString: jest.fn(),
decryptString: jest.fn()
};
mockWindow.require = jest.fn(() => ({ safeStorage: mockStorage }));

safeStorage.isEncryptionAvailable.mockReturnValueOnce(false);
expect(isEncryptionAvailable()).toBe(false);
const { isEncryptionAvailable: checkAvail } = require('../src/utils/encryption-utils');
expect(checkAvail()).toBe(false);

// Restore
mockWindow.require = mockWindowRequire;
});

it('should return false when safeStorage is null', () => {
// This tests the case where Electron is not available
// We need to reload the module with electron unavailable
jest.resetModules();
mockWindow.require = jest.fn(() => ({ safeStorage: null }));

jest.mock('electron', () => ({
safeStorage: null
}));

const { isEncryptionAvailable } = require('../src/utils/encryption-utils');
expect(isEncryptionAvailable()).toBe(false);
const { isEncryptionAvailable: checkAvail } = require('../src/utils/encryption-utils');
expect(checkAvail()).toBe(false);

// Restore original mock
jest.resetModules();
jest.mock('electron', () => ({
safeStorage: {
isEncryptionAvailable: jest.fn(() => true),
encryptString: jest.fn((data: string) => Buffer.from(`encrypted:${data}`)),
decryptString: jest.fn((buffer: Buffer) => {
const str = buffer.toString();
return str.replace('encrypted:', '');
})
}
}));
mockWindow.require = mockWindowRequire;
});

it('should return false when isEncryptionAvailable method is missing', () => {
jest.resetModules();
const mockStorage = {
// Missing isEncryptionAvailable method
encryptString: jest.fn(),
decryptString: jest.fn()
};
mockWindow.require = jest.fn(() => ({ safeStorage: mockStorage }));

jest.mock('electron', () => ({
safeStorage: {
// Missing isEncryptionAvailable method
encryptString: jest.fn(),
decryptString: jest.fn()
}
}));

const { isEncryptionAvailable } = require('../src/utils/encryption-utils');
expect(isEncryptionAvailable()).toBe(false);
const { isEncryptionAvailable: checkAvail } = require('../src/utils/encryption-utils');
expect(checkAvail()).toBe(false);

// Restore
jest.resetModules();
mockWindow.require = mockWindowRequire;
});
});

@@ -168,12 +238,13 @@ describe('Encryption Utils', () => {
});

afterEach(() => {
jest.resetModules();
// Restore mock after each test
mockWindow.require = mockWindowRequire;
});

it('should handle electron module not being available', () => {
// Mock require to throw when loading electron
jest.mock('electron', () => {
mockWindow.require = jest.fn(() => {
throw new Error('Electron not available');
});

@@ -181,12 +252,12 @@ describe('Encryption Utils', () => {
const consoleSpy = jest.spyOn(console, 'warn').mockImplementation();

// Load module with electron unavailable
const { encryptApiKey, isEncryptionAvailable } = require('../src/utils/encryption-utils');
const { encryptApiKey: encrypt, isEncryptionAvailable: checkAvail } = require('../src/utils/encryption-utils');

expect(isEncryptionAvailable()).toBe(false);
expect(checkAvail()).toBe(false);

const apiKey = 'test-key';
const result = encryptApiKey(apiKey);
const result = encrypt(apiKey);

// Should return plaintext when electron is unavailable
expect(result).toBe(apiKey);

@@ -195,21 +266,19 @@ describe('Encryption Utils', () => {
});

it('should handle decryption when safeStorage is null', () => {
jest.mock('electron', () => ({
safeStorage: null
}));
mockWindow.require = jest.fn(() => ({ safeStorage: null }));

const { decryptApiKey } = require('../src/utils/encryption-utils');
const { decryptApiKey: decrypt } = require('../src/utils/encryption-utils');

const encrypted = 'encrypted:aW52YWxpZA==';

expect(() => decryptApiKey(encrypted)).toThrow('Failed to decrypt API key');
expect(() => decrypt(encrypted)).toThrow('Failed to decrypt API key');
});

it('should log warning when encryption not available on first load', () => {
const consoleSpy = jest.spyOn(console, 'warn').mockImplementation();

jest.mock('electron', () => {
mockWindow.require = jest.fn(() => {
throw new Error('Module not found');
});

@@ -225,35 +294,32 @@ describe('Encryption Utils', () => {
});

it('should gracefully handle plaintext keys when encryption unavailable', () => {
jest.mock('electron', () => ({
safeStorage: null
}));
mockWindow.require = jest.fn(() => ({ safeStorage: null }));

const { encryptApiKey, decryptApiKey } = require('../src/utils/encryption-utils');
const { encryptApiKey: encrypt, decryptApiKey: decrypt } = require('../src/utils/encryption-utils');

const apiKey = 'plain-api-key';

// Encrypt should return plaintext
const encrypted = encryptApiKey(apiKey);
const encrypted = encrypt(apiKey);
expect(encrypted).toBe(apiKey);

// Decrypt plaintext should return as-is
const decrypted = decryptApiKey(apiKey);
const decrypted = decrypt(apiKey);
expect(decrypted).toBe(apiKey);
});

it('should warn when falling back to plaintext storage', () => {
const consoleSpy = jest.spyOn(console, 'warn').mockImplementation();

jest.mock('electron', () => ({
safeStorage: {
isEncryptionAvailable: jest.fn(() => false)
}
}));
const mockStorage = {
isEncryptionAvailable: jest.fn(() => false)
};
mockWindow.require = jest.fn(() => ({ safeStorage: mockStorage }));

const { encryptApiKey } = require('../src/utils/encryption-utils');
const { encryptApiKey: encrypt } = require('../src/utils/encryption-utils');

encryptApiKey('test-key');
encrypt('test-key');

expect(consoleSpy).toHaveBeenCalledWith(
expect.stringContaining('Encryption not available')
@@ -1,18 +1,53 @@
import { generateApiKey } from '../src/utils/auth-utils';
import { encryptApiKey, decryptApiKey } from '../src/utils/encryption-utils';
import { DEFAULT_SETTINGS } from '../src/types/settings-types';

// Mock electron
jest.mock('electron', () => ({
safeStorage: {
isEncryptionAvailable: jest.fn(() => true),
encryptString: jest.fn((data: string) => Buffer.from(`encrypted:${data}`)),
decryptString: jest.fn((buffer: Buffer) => {
const str = buffer.toString();
return str.replace('encrypted:', '');
})
// Mock safeStorage implementation
const mockSafeStorage = {
isEncryptionAvailable: jest.fn(() => true),
encryptString: jest.fn((data: string) => Buffer.from(`encrypted:${data}`)),
decryptString: jest.fn((buffer: Buffer) => buffer.toString().replace('encrypted:', ''))
};

// Setup window.require mock
const mockWindowRequire = jest.fn((module: string) => {
if (module === 'electron') {
return { safeStorage: mockSafeStorage };
}
}));
throw new Error(`Module not found: ${module}`);
});

// Create mock window object for Node environment
const mockWindow: Window & { require?: unknown } = {
require: mockWindowRequire
} as unknown as Window & { require?: unknown };

// Store original global window
const originalWindow = (globalThis as unknown as { window?: unknown }).window;

// Set up window.require before tests run
beforeAll(() => {
(globalThis as unknown as { window: typeof mockWindow }).window = mockWindow;
});

// Clean up after all tests
afterAll(() => {
if (originalWindow === undefined) {
delete (globalThis as unknown as { window?: unknown }).window;
} else {
(globalThis as unknown as { window: typeof originalWindow }).window = originalWindow;
}
});

// Import after mock is set up
let encryptApiKey: typeof import('../src/utils/encryption-utils').encryptApiKey;
let decryptApiKey: typeof import('../src/utils/encryption-utils').decryptApiKey;

beforeAll(() => {
jest.resetModules();
const encryptionUtils = require('../src/utils/encryption-utils');
encryptApiKey = encryptionUtils.encryptApiKey;
decryptApiKey = encryptionUtils.decryptApiKey;
});

describe('Settings Migration', () => {
describe('API key initialization', () => {
tests/network-utils.test.ts — 141 lines (new file)
@@ -0,0 +1,141 @@
/**
 * Unit tests for network-utils
 */

import { parseAllowedIPs, isIPAllowed, AllowedIPEntry } from '../src/utils/network-utils';

describe('parseAllowedIPs', () => {
  test('should return empty array for empty string', () => {
    expect(parseAllowedIPs('')).toEqual([]);
  });

  test('should return empty array for whitespace-only string', () => {
    expect(parseAllowedIPs(' ')).toEqual([]);
  });

  test('should parse a single IP', () => {
    const result = parseAllowedIPs('192.168.1.1');
    expect(result).toHaveLength(1);
    expect(result[0].type).toBe('ip');
  });

  test('should parse multiple comma-separated IPs', () => {
    const result = parseAllowedIPs('192.168.1.1, 10.0.0.5');
    expect(result).toHaveLength(2);
  });

  test('should parse CIDR notation', () => {
    const result = parseAllowedIPs('100.64.0.0/10');
    expect(result).toHaveLength(1);
    expect(result[0].type).toBe('cidr');
  });

  test('should parse mixed IPs and CIDRs', () => {
    const result = parseAllowedIPs('192.168.1.1, 10.0.0.0/8, 172.16.0.5');
    expect(result).toHaveLength(3);
    expect(result[0].type).toBe('ip');
    expect(result[1].type).toBe('cidr');
    expect(result[2].type).toBe('ip');
  });

  test('should handle extra whitespace', () => {
    const result = parseAllowedIPs(' 192.168.1.1 , 10.0.0.5 ');
    expect(result).toHaveLength(2);
  });

  test('should skip invalid entries', () => {
    const result = parseAllowedIPs('192.168.1.1, invalid, 10.0.0.5');
    expect(result).toHaveLength(2);
  });

  test('should skip invalid CIDR prefix', () => {
    const result = parseAllowedIPs('10.0.0.0/33');
    expect(result).toHaveLength(0);
  });

  test('should skip entries with invalid octets', () => {
    const result = parseAllowedIPs('256.0.0.1');
    expect(result).toHaveLength(0);
  });

  test('should handle trailing commas', () => {
    const result = parseAllowedIPs('192.168.1.1,');
    expect(result).toHaveLength(1);
  });
});

describe('isIPAllowed', () => {
  test('should always allow 127.0.0.1 with empty list', () => {
    expect(isIPAllowed('127.0.0.1', [])).toBe(true);
  });

  test('should always allow localhost with empty list', () => {
    expect(isIPAllowed('localhost', [])).toBe(true);
  });

  test('should always allow IPv4-mapped localhost', () => {
    expect(isIPAllowed('::ffff:127.0.0.1', [])).toBe(true);
  });

  test('should reject non-localhost with empty list', () => {
    expect(isIPAllowed('192.168.1.1', [])).toBe(false);
  });

  test('should match exact IP', () => {
    const allowList = parseAllowedIPs('192.168.1.50');
    expect(isIPAllowed('192.168.1.50', allowList)).toBe(true);
    expect(isIPAllowed('192.168.1.51', allowList)).toBe(false);
  });

  test('should match CIDR range', () => {
    const allowList = parseAllowedIPs('10.0.0.0/8');
    expect(isIPAllowed('10.0.0.1', allowList)).toBe(true);
    expect(isIPAllowed('10.255.255.255', allowList)).toBe(true);
    expect(isIPAllowed('11.0.0.1', allowList)).toBe(false);
  });

  test('should match Tailscale CGNAT range (100.64.0.0/10)', () => {
    const allowList = parseAllowedIPs('100.64.0.0/10');
    expect(isIPAllowed('100.64.0.1', allowList)).toBe(true);
    expect(isIPAllowed('100.100.50.25', allowList)).toBe(true);
    expect(isIPAllowed('100.127.255.255', allowList)).toBe(true);
    expect(isIPAllowed('100.128.0.0', allowList)).toBe(false);
    expect(isIPAllowed('100.63.255.255', allowList)).toBe(false);
  });

  test('should handle IPv4-mapped IPv6 addresses', () => {
    const allowList = parseAllowedIPs('192.168.1.50');
    expect(isIPAllowed('::ffff:192.168.1.50', allowList)).toBe(true);
    expect(isIPAllowed('::ffff:192.168.1.51', allowList)).toBe(false);
  });

  test('should handle IPv4-mapped IPv6 with CIDR', () => {
    const allowList = parseAllowedIPs('10.0.0.0/8');
    expect(isIPAllowed('::ffff:10.5.3.1', allowList)).toBe(true);
    expect(isIPAllowed('::ffff:11.0.0.1', allowList)).toBe(false);
  });

  test('should match against multiple entries', () => {
    const allowList = parseAllowedIPs('192.168.1.0/24, 10.0.0.5');
    expect(isIPAllowed('192.168.1.100', allowList)).toBe(true);
    expect(isIPAllowed('10.0.0.5', allowList)).toBe(true);
    expect(isIPAllowed('10.0.0.6', allowList)).toBe(false);
  });

  test('should handle /32 CIDR as single IP', () => {
    const allowList = parseAllowedIPs('192.168.1.1/32');
    expect(isIPAllowed('192.168.1.1', allowList)).toBe(true);
    expect(isIPAllowed('192.168.1.2', allowList)).toBe(false);
  });

  test('should handle /0 CIDR as allow-all', () => {
    const allowList = parseAllowedIPs('0.0.0.0/0');
    expect(isIPAllowed('1.2.3.4', allowList)).toBe(true);
    expect(isIPAllowed('255.255.255.255', allowList)).toBe(true);
  });

  test('should return false for invalid IP input', () => {
    const allowList = parseAllowedIPs('10.0.0.0/8');
    expect(isIPAllowed('not-an-ip', allowList)).toBe(false);
  });
});
@@ -21,7 +21,7 @@ jest.mock('../src/utils/path-utils', () => ({
 // Mock LinkUtils for link validation tests
 jest.mock('../src/utils/link-utils', () => ({
   LinkUtils: {
-    validateLinks: jest.fn().mockResolvedValue({
+    validateLinks: jest.fn().mockReturnValue({
       valid: [],
       brokenNotes: [],
       brokenHeadings: [],
@@ -52,7 +52,7 @@ describe('NoteTools', () => {
   });

   describe('readNote', () => {
-    it('should read note content successfully', async () => {
+    it('should read note content successfully with line numbers by default', async () => {
       const mockFile = createMockTFile('test.md');
       const content = '# Test Note\n\nThis is test content.';
@@ -62,9 +62,10 @@ describe('NoteTools', () => {
       const result = await noteTools.readNote('test.md');

       expect(result.isError).toBeUndefined();
-      // Now returns JSON with content and wordCount
+      // Now returns JSON with content (line-numbered by default) and wordCount
       const parsed = JSON.parse(result.content[0].text);
-      expect(parsed.content).toBe(content);
+      expect(parsed.content).toBe('1→# Test Note\n2→\n3→This is test content.');
+      expect(parsed.totalLines).toBe(3);
       expect(parsed.wordCount).toBe(7); // Test Note This is test content
       expect(mockVault.read).toHaveBeenCalledWith(mockFile);
     });
@@ -124,7 +125,7 @@ describe('NoteTools', () => {
       (PathUtils.resolveFile as jest.Mock).mockReturnValue(mockFile);
       mockVault.read = jest.fn().mockResolvedValue(content);

-      const result = await noteTools.readNote('test.md', { withContent: true });
+      const result = await noteTools.readNote('test.md', { withContent: true, withLineNumbers: false });

       expect(result.isError).toBeUndefined();
       const parsed = JSON.parse(result.content[0].text);
@@ -188,20 +189,99 @@ describe('NoteTools', () => {
       expect(parsed.wordCount).toBe(0);
     });

-    it('should return JSON format even with default options', async () => {
+    it('should return JSON format with raw content when withLineNumbers is false', async () => {
       const mockFile = createMockTFile('test.md');
       const content = '# Test Note\n\nContent here.';

       (PathUtils.resolveFile as jest.Mock).mockReturnValue(mockFile);
       mockVault.read = jest.fn().mockResolvedValue(content);

       const result = await noteTools.readNote('test.md', { withLineNumbers: false });

       expect(result.isError).toBeUndefined();
       // Returns JSON with raw content when line numbers disabled
       const parsed = JSON.parse(result.content[0].text);
       expect(parsed.content).toBe(content);
       expect(parsed.wordCount).toBe(5); // Test Note Content here
     });

     it('should return numbered lines by default', async () => {
       const mockFile = createMockTFile('test.md', {
         ctime: 1000,
         mtime: 2000,
         size: 100
       });
       const content = '# Title\n\nParagraph text\nMore text';

       (PathUtils.resolveFile as jest.Mock).mockReturnValue(mockFile);
       mockVault.read = jest.fn().mockResolvedValue(content);

       const result = await noteTools.readNote('test.md');

       expect(result.isError).toBeUndefined();
       // Now returns JSON even with default options
       const parsed = JSON.parse(result.content[0].text);
       expect(parsed.content).toBe('1→# Title\n2→\n3→Paragraph text\n4→More text');
       expect(parsed.totalLines).toBe(4);
       expect(parsed.versionId).toBe('AXrGSV5GxqntccmzWCNwe7'); // SHA-256 hash of "2000-100"
       expect(parsed.wordCount).toBe(6); // # Title Paragraph text More text
     });

     it('should return raw content when withLineNumbers is false', async () => {
       const mockFile = createMockTFile('test.md', {
         ctime: 1000,
         mtime: 2000,
         size: 100
       });
       const content = '# Test';

       (PathUtils.resolveFile as jest.Mock).mockReturnValue(mockFile);
       mockVault.read = jest.fn().mockResolvedValue(content);

       const result = await noteTools.readNote('test.md', { withLineNumbers: false });

       expect(result.isError).toBeUndefined();
       const parsed = JSON.parse(result.content[0].text);
       expect(parsed.content).toBe('# Test');
       expect(parsed.totalLines).toBeUndefined();
       expect(parsed.versionId).toBe('AXrGSV5GxqntccmzWCNwe7'); // SHA-256 hash of "2000-100"
     });

     it('should return numbered lines in parseFrontmatter path by default', async () => {
       const mockFile = createMockTFile('test.md', {
         ctime: 1000,
         mtime: 2000,
         size: 100
       });
       const content = '---\ntitle: Test\n---\n\nContent here';

       (PathUtils.resolveFile as jest.Mock).mockReturnValue(mockFile);
       mockVault.read = jest.fn().mockResolvedValue(content);

       const result = await noteTools.readNote('test.md', { parseFrontmatter: true });

       expect(result.isError).toBeUndefined();
       const parsed = JSON.parse(result.content[0].text);
       expect(parsed.content).toBe('1→---\n2→title: Test\n3→---\n4→\n5→Content here');
       expect(parsed.totalLines).toBe(5);
     });

     it('should return raw content in parseFrontmatter path when withLineNumbers is false', async () => {
       const mockFile = createMockTFile('test.md', {
         ctime: 1000,
         mtime: 2000,
         size: 100
       });
       const content = '---\ntitle: Test\n---\n\nContent here';

       (PathUtils.resolveFile as jest.Mock).mockReturnValue(mockFile);
       mockVault.read = jest.fn().mockResolvedValue(content);

       const result = await noteTools.readNote('test.md', { parseFrontmatter: true, withLineNumbers: false });

       expect(result.isError).toBeUndefined();
       const parsed = JSON.parse(result.content[0].text);
       expect(parsed.content).toBe(content);
       expect(parsed.wordCount).toBe(5); // Test Note Content here
       expect(parsed.totalLines).toBeUndefined();
     });
   });
@@ -436,15 +516,16 @@ describe('NoteTools', () => {
       const mockFile = createMockTFile('test.md');

       (PathUtils.resolveFile as jest.Mock).mockReturnValue(mockFile);
-      mockVault.trash = jest.fn().mockResolvedValue(undefined);
+      mockFileManager.trashFile = jest.fn().mockResolvedValue(undefined);

       const result = await noteTools.deleteNote('test.md', true, false);

       expect(result.isError).toBeUndefined();
-      expect(mockVault.trash).toHaveBeenCalledWith(mockFile, true);
+      expect(mockFileManager.trashFile).toHaveBeenCalledWith(mockFile);
       const parsed = JSON.parse(result.content[0].text);
       expect(parsed.deleted).toBe(true);
       expect(parsed.soft).toBe(true);
+      expect(parsed.destination).toBe('trash');
     });

     it('should permanently delete note', async () => {
@@ -466,6 +547,7 @@ describe('NoteTools', () => {
       const mockFile = createMockTFile('test.md');

       (PathUtils.resolveFile as jest.Mock).mockReturnValue(mockFile);
+      mockFileManager.trashFile = jest.fn().mockResolvedValue(undefined);

       const result = await noteTools.deleteNote('test.md', true, true);
@@ -473,7 +555,8 @@ describe('NoteTools', () => {
       const parsed = JSON.parse(result.content[0].text);
       expect(parsed.deleted).toBe(false);
       expect(parsed.dryRun).toBe(true);
-      expect(mockVault.trash).not.toHaveBeenCalled();
+      expect(parsed.destination).toBe('trash');
+      expect(mockFileManager.trashFile).not.toHaveBeenCalled();
     });

     it('should return error if file not found', async () => {
@@ -500,7 +583,7 @@ describe('NoteTools', () => {
       const mockFile = createMockTFile('test.md');

       (PathUtils.resolveFile as jest.Mock).mockReturnValue(mockFile);
-      mockVault.trash = jest.fn().mockRejectedValue(new Error('Cannot delete'));
+      mockFileManager.trashFile = jest.fn().mockRejectedValue(new Error('Cannot delete'));

       const result = await noteTools.deleteNote('test.md');
@@ -890,7 +973,7 @@ Some text

       const result = await noteTools.updateSections('test.md', [
         { startLine: 2, endLine: 3, content: 'New Line 2\nNew Line 3' }
-      ]);
+      ], undefined, true, true); // validateLinks=true, force=true

       expect(result.isError).toBeUndefined();
       expect(mockVault.modify).toHaveBeenCalled();
@@ -915,7 +998,7 @@ Some text

       const result = await noteTools.updateSections('test.md', [
         { startLine: 1, endLine: 10, content: 'New' }
-      ]);
+      ], undefined, true, true); // validateLinks=true, force=true

       expect(result.isError).toBe(true);
       expect(result.content[0].text).toContain('Invalid line range');
@@ -950,7 +1033,7 @@ Some text

       const result = await noteTools.updateSections('test.md', [
         { startLine: 1, endLine: 1, content: 'New' }
-      ]);
+      ], undefined, true, true); // validateLinks=true, force=true

       expect(result.isError).toBe(true);
       expect(result.content[0].text).toContain('Update error');
@@ -962,7 +1045,7 @@ Some text

       const result = await noteTools.updateSections('nonexistent.md', [
         { startLine: 1, endLine: 1, content: 'New' }
-      ]);
+      ], undefined, true, true); // validateLinks=true, force=true

       expect(result.isError).toBe(true);
       expect(result.content[0].text).toContain('not found');
@@ -974,11 +1057,82 @@ Some text

       const result = await noteTools.updateSections('folder', [
         { startLine: 1, endLine: 1, content: 'New' }
-      ]);
+      ], undefined, true, true); // validateLinks=true, force=true

       expect(result.isError).toBe(true);
       expect(result.content[0].text).toContain('not a file');
     });

     it('should return error when ifMatch not provided and force not set', async () => {
       const mockFile = createMockTFile('test.md', {
         ctime: 1000,
         mtime: 2000,
         size: 100
       });
       const content = 'Line 1\nLine 2';

       (PathUtils.resolveFile as jest.Mock).mockReturnValue(mockFile);
       mockVault.read = jest.fn().mockResolvedValue(content);
       mockVault.modify = jest.fn().mockResolvedValue(undefined);

       const result = await noteTools.updateSections('test.md', [
         { startLine: 1, endLine: 1, content: 'New' }
       ]);

       expect(result.isError).toBe(true);
       const parsed = JSON.parse(result.content[0].text);
       expect(parsed.error).toBe('Version check required');
       expect(parsed.message).toContain('ifMatch parameter is required');
       expect(mockVault.modify).not.toHaveBeenCalled();
     });

     it('should proceed without ifMatch when force is true', async () => {
       const mockFile = createMockTFile('test.md', {
         ctime: 1000,
         mtime: 2000,
         size: 100
       });
       const content = 'Line 1\nLine 2';

       (PathUtils.resolveFile as jest.Mock).mockReturnValue(mockFile);
       mockVault.read = jest.fn().mockResolvedValue(content);
       mockVault.modify = jest.fn().mockResolvedValue(undefined);

       const result = await noteTools.updateSections(
         'test.md',
         [{ startLine: 1, endLine: 1, content: 'New Line 1' }],
         undefined, // no ifMatch
         true, // validateLinks
         true // force
       );

       expect(result.isError).toBeUndefined();
       expect(mockVault.modify).toHaveBeenCalled();
       const parsed = JSON.parse(result.content[0].text);
       expect(parsed.success).toBe(true);
     });

     it('should proceed with valid ifMatch without force', async () => {
       const mockFile = createMockTFile('test.md', {
         ctime: 1000,
         mtime: 2000,
         size: 100
       });
       const content = 'Line 1\nLine 2';

       (PathUtils.resolveFile as jest.Mock).mockReturnValue(mockFile);
       mockVault.read = jest.fn().mockResolvedValue(content);
       mockVault.modify = jest.fn().mockResolvedValue(undefined);

       const result = await noteTools.updateSections(
         'test.md',
         [{ startLine: 1, endLine: 1, content: 'New Line 1' }],
         'AXrGSV5GxqntccmzWCNwe7' // valid ifMatch (SHA-256 hash of "2000-100")
       );

       expect(result.isError).toBeUndefined();
       expect(mockVault.modify).toHaveBeenCalled();
     });
   });

   describe('path validation', () => {
@@ -1251,7 +1405,7 @@ Some text
       mockVault.read = jest.fn().mockResolvedValue('Line 1\nLine 2\nLine 3');
       mockVault.modify = jest.fn().mockResolvedValue(undefined);

-      const result = await noteTools.updateSections('sections-test.md', edits);
+      const result = await noteTools.updateSections('sections-test.md', edits, undefined, true, true); // validateLinks=true, force=true

       expect(result.isError).toBeFalsy();
       const parsed = JSON.parse(result.content[0].text);
@@ -1267,7 +1421,7 @@ Some text
       mockVault.read = jest.fn().mockResolvedValue('Line 1\nLine 2\nLine 3');
       mockVault.modify = jest.fn().mockResolvedValue(undefined);

-      const result = await noteTools.updateSections('sections-test.md', edits);
+      const result = await noteTools.updateSections('sections-test.md', edits, undefined, true, true); // validateLinks=true, force=true

       expect(result.isError).toBeFalsy();
       const parsed = JSON.parse(result.content[0].text);
@@ -1284,7 +1438,7 @@ Some text
       mockVault.read = jest.fn().mockResolvedValue('Line 1\nLine 2\nLine 3');
       mockVault.modify = jest.fn().mockResolvedValue(undefined);

-      const result = await noteTools.updateSections('sections-test.md', edits, undefined, false);
+      const result = await noteTools.updateSections('sections-test.md', edits, undefined, false, true); // validateLinks=false, force=true

       expect(result.isError).toBeFalsy();
       const parsed = JSON.parse(result.content[0].text);
@@ -2,5 +2,7 @@
   "1.0.0": "0.15.0",
   "1.0.1": "0.15.0",
   "1.1.0": "0.15.0",
-  "1.1.1": "0.15.0"
+  "1.1.1": "0.15.0",
+  "1.1.2": "0.15.0",
+  "1.1.3": "0.15.0"
 }