100% in-browser · Zero data transmission · Auto-clears on exit
System Ready
File Translation
Drop .xlsx or .docx file here, or browse
Checking model status...
Model not in memory
Hardware: Requires a modern GPU or Apple Silicon Mac with WebGPU support (Chrome/Edge 113+, Safari 18+). The default 4B model needs ~3.4 GB GPU memory; choose a smaller model on devices with limited VRAM.

Detect and replace PII in medical documents using WebLLM + WebGPU — entirely in your browser.

Checking WebGPU...

⚠️ Disclaimer: Anonymization is not guaranteed to detect all PII. Always manually review the output before sharing documents. This tool is an aid, not a replacement for human review.
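The "Checking WebGPU..." step and the per-model VRAM figures listed further down can be sketched as follows. This is a hypothetical illustration built from the sizes stated in this document, not the app's actual selection code; only the `navigator.gpu.requestAdapter()` probe is the standard WebGPU API.

```javascript
// Approximate VRAM needs (GB) taken from the model list in this document.
const QWEN_TIERS = [
  { id: "Qwen3-8B",   vramGB: 5.7 },
  { id: "Qwen3-4B",   vramGB: 3.4 },
  { id: "Qwen3-1.7B", vramGB: 2.0 },
  { id: "Qwen3-0.6B", vramGB: 1.4 },
];

// Pick the largest model that fits the available GPU memory (hypothetical helper).
function pickQwenModel(availableGB) {
  const fit = QWEN_TIERS.find((t) => t.vramGB <= availableGB);
  return fit ? fit.id : null; // null → fall back to a CPU-only NER model
}

// Browser-only probe: resolves to a GPUAdapter, or null when WebGPU is absent.
async function checkWebGPU() {
  if (typeof navigator === "undefined" || !navigator.gpu) return null;
  return await navigator.gpu.requestAdapter();
}
```

A device with ~4 GB of free GPU memory would get the default 4B model; anything under ~1.4 GB falls through to the NER-only pipeline.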

Pipeline details & hardware

LLM only (recommended): Qwen3 handles all PII detection in a single pass. Best accuracy, especially with the 4B or 8B model.

NER + LLM: Run an NER model first, then let the LLM verify and catch remaining PII. Useful for additional coverage or comparison.

NER only: Run just the selected NER model. Fastest option, no GPU required, but lower accuracy.

  • Qwen3 0.6B — ~1.4 GB VRAM. Smallest & fastest. Good for quick scans.
  • Qwen3 1.7B — ~2 GB VRAM. Good balance of speed and quality.
  • Qwen3 4B (default) — ~3.4 GB VRAM. Best quality for most devices.
  • Qwen3 8B — ~5.7 GB VRAM. Highest quality, needs a powerful GPU.
  • Multilingual PII NER — ~280 MB. XLM-RoBERTa, no GPU needed. Names, addresses, dates, IDs in 8+ languages.
  • GLiNER PII Edge — ~46 MB. Zero-shot, no GPU needed. Best for English.
  • Multilingual BERT NER — ~100 MB. Lighter general-purpose NER.

Always manually review the output. First load downloads the selected model; subsequent runs use the browser cache.
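The "NER + LLM" mode implies combining two sets of detections into one. A minimal sketch of such a merge, assuming spans are character offsets and overlapping detections keep the longer match — a hypothetical helper, not the app's actual merge logic:

```javascript
// Merge PII spans from the NER pass and the LLM pass.
// Spans are { start, end, label }; on overlap, the earlier/longer span wins.
function mergeSpans(nerSpans, llmSpans) {
  const all = [...nerSpans, ...llmSpans]
    // Sort by start position; for equal starts, longer span first.
    .sort((a, b) => a.start - b.start || (b.end - b.start) - (a.end - a.start));
  const merged = [];
  for (const span of all) {
    const last = merged[merged.length - 1];
    if (last && span.start < last.end) continue; // overlaps a kept span — drop it
    merged.push(span);
  }
  return merged;
}
```

Disjoint spans from either pass survive untouched, which is how the LLM pass can "catch remaining PII" the NER model missed.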

Medical Document

Drop document here

.xlsx, .docx, .txt

Mapping File (Optional)

Drop mapping file

.xlsx or .json

No mapping loaded
Hardware: Requires WebGPU (Chrome/Edge 113+, Safari 18+). Uses the same Qwen3 models as anonymization.

Generate structured reports from medical documents using WebLLM + WebGPU — entirely in your browser.

Checking WebGPU...

Upload Document

Drop document here

.xlsx, .docx, .txt

Or Paste Text

Offline STT: Whisper speech recognition runs entirely in your browser via WebAssembly. No audio data is sent anywhere.

Transcribe audio recordings or live microphone input using OpenAI Whisper via Transformers.js — entirely offline.

Model details
  • Whisper Tiny — ~150 MB. Fastest, basic quality.
  • Whisper Base — ~300 MB. Fair quality.
  • Whisper Small (recommended) — ~500 MB. Good quality, best for clinical use.

All models run via WebAssembly (ONNX Runtime). First use downloads the model; subsequent runs use browser cache.
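In-browser Whisper via Transformers.js typically boils down to one ASR pipeline call. A sketch under the assumption that an ONNX Whisper checkpoint such as `Xenova/whisper-small` is used (the model id and `joinChunks` helper are illustrative, not this app's code):

```javascript
let transcriber = null;

// Browser-side transcription; first call downloads the weights (~500 MB for
// Whisper Small), later calls are served from the browser cache.
async function transcribe(audioUrl) {
  const { pipeline } = await import("@xenova/transformers");
  transcriber ??= await pipeline(
    "automatic-speech-recognition",
    "Xenova/whisper-small"
  );
  const out = await transcriber(audioUrl, {
    chunk_length_s: 30,        // process long recordings in 30 s windows
    return_timestamps: true,
  });
  return joinChunks(out.chunks ?? [{ text: out.text }]);
}

// Join timestamped chunks into plain text, dropping empty/silence chunks.
// Pure helper, testable outside the browser.
function joinChunks(chunks) {
  return chunks
    .map((c) => c.text.trim())
    .filter((t) => t.length > 0)
    .join(" ");
}
```

Lazy-loading the pipeline keeps the model out of memory until the first recording is actually transcribed.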

Record Audio

Or Upload Audio

Drop audio file here

.mp3, .wav, .ogg, .webm, .m4a, .flac

Index and sort unsorted DICOM data from any directory or network path. Smart scanning uses file size, naming, and folder structure heuristics to avoid opening every file.
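The file-size and naming heuristics described above can be sketched as a cheap pre-filter that decides, without opening a file, whether it is worth reading. Thresholds and the extension list are illustrative assumptions; only the "DICM" magic at byte offset 128 is part of the DICOM file format itself:

```javascript
// Extensions that are certainly not DICOM — skip without opening.
const NON_DICOM_EXT = /\.(txt|pdf|jpe?g|png|zip|xlsx?|docx?|html?)$/i;

// Cheap pre-filter based on name and size alone (hypothetical heuristics).
function worthOpening(name, sizeBytes) {
  if (NON_DICOM_EXT.test(name)) return false;
  if (sizeBytes < 132) return false;       // too small for the 128-byte preamble + "DICM"
  if (/\.dcm$/i.test(name)) return true;   // explicit DICOM extension
  return !name.includes(".");              // extension-less files are common in DICOM exports
}

// Definitive check, run only on files that pass the pre-filter:
// bytes 128..131 must spell "DICM".
function hasDicmMagic(bytes) {
  return bytes.length >= 132 &&
    String.fromCharCode(...bytes.slice(128, 132)) === "DICM";
}
```

Filtering by name and size first means only a small fraction of files in a large unsorted directory ever need their first 132 bytes read.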

Source Directory

Scan Settings

Download models in advance so you can work offline when patients arrive. Inspect and manage all browser-stored data below.

Prepare for Offline Use

Download all models now while you have internet. Once cached, everything works in airplane mode.

Translation Model · NLLB-200 · ~300 MB · Translates between 200+ languages
Checking...
NER Model · Multilingual PII NER · ~280 MB · Detects names, addresses, IDs in 8+ languages
Checking...
Anonymization LLM · Qwen3 4B · ~3.4 GB · Detects and anonymizes PII via WebGPU
Checking...
Speech Model · Whisper · ~500 MB · Offline speech-to-text transcription
Checking...

Personal Data — Where It Goes

Your documents, text, and patient data are never written to disk or browser storage. They exist only in temporary JavaScript memory and are automatically cleared.

You upload a file or paste text · File is read into JavaScript memory (RAM) only
AI model processes data locally · Translation/anonymization runs inside your browser's CPU/GPU
You download the result · Output goes to your Downloads folder — you control it
Data is cleared from memory · Automatically on page close, or click the button below
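The memory-only lifecycle with a 30-minute inactivity wipe can be sketched as below. The timeout value comes from this page; the event wiring and `makeSession` helper are assumptions, not the app's actual code:

```javascript
const CLEAR_AFTER_MS = 30 * 60 * 1000; // 30 minutes of inactivity

// Hold patient data only in a closure variable — never in any storage API.
function makeSession() {
  let data = null;
  let timer = null;

  const clear = () => {
    data = null;
    if (timer) clearTimeout(timer);
    timer = null;
  };

  const touch = () => {            // call on any user activity
    if (timer) clearTimeout(timer);
    timer = setTimeout(clear, CLEAR_AFTER_MS);
  };

  return {
    set(value) { data = value; touch(); },
    get: () => data,
    clear,
  };
}

// In the browser, also wipe on page exit:
// window.addEventListener("pagehide", session.clear);
```

Because the data lives only in a closure, refreshing or closing the tab destroys it along with the JavaScript heap — no explicit cleanup is strictly required, the timer is just defense in depth.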

Currently in memory

No personal data in memory

Auto-clears when you close or refresh this page
Auto-clears after 30 minutes of inactivity
Never written to Cache API, IndexedDB, localStorage, or cookies
Total storage used Scanning...
Cache API entries
IndexedDB databases

Cache API (Model Files)

Translation and NER models are cached here. These are safe AI model weights — no personal data.

Scanning...

IndexedDB (LLM Model Cache)

WebLLM/MLC caches Qwen model weights here. These are safe AI model weights — no personal data.

Scanning...
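The storage scan shown in these panels maps onto three standard browser APIs: `navigator.storage.estimate()` for total usage, `caches.keys()` for Cache API entries, and `indexedDB.databases()` for the WebLLM weight stores. A sketch, with `formatBytes` as a hypothetical display helper:

```javascript
// Human-readable byte count for the "Total storage used" readout.
function formatBytes(n) {
  const units = ["B", "KB", "MB", "GB"];
  let i = 0;
  while (n >= 1024 && i < units.length - 1) { n /= 1024; i++; }
  return `${n.toFixed(1)} ${units[i]}`;
}

// Browser-only: enumerate everything the page has stored.
// (indexedDB.databases() is widely supported but not universal — hence the guard.)
async function scanStorage() {
  const { usage = 0 } = await navigator.storage.estimate();
  const cacheNames = await caches.keys();                   // model files
  const dbs = indexedDB.databases ? await indexedDB.databases() : [];
  return {
    total: formatBytes(usage),
    cacheEntries: cacheNames.length,
    indexedDbNames: dbs.map((d) => d.name),                 // WebLLM/MLC weights
  };
}
```

Everything this scan finds is model data; per the lifecycle above, personal data never reaches any of these stores.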

This will remove all cached models. You will need to re-download them on next use.