The Problem: Pasting Secrets Into AI
Developers are pasting production logs, API keys, database configs, and proprietary code into AI coding assistants every day. It makes sense — tools like Claude Code, Cursor, and ChatGPT are dramatically more useful when they have real context about your problem.
But here's what actually happens when you paste a stack trace into an AI chatbot:
- Your plaintext travels over HTTPS to the provider's servers
- It's stored (at least temporarily) for processing
- It may be logged, cached, or used for model improvement
- Multiple systems and potentially humans can access it
Even with providers who promise not to train on your data, the data still crosses trust boundaries. Your production database connection string is sitting on someone else's server, protected only by their security practices and their promises.
Host-Blind Architecture: A Better Model
What if the server storing your data was mathematically incapable of reading it? Not "we promise not to look" — but "we literally cannot decrypt this even if subpoenaed."
This is the principle behind host-blind encryption — the server is blind to the content it hosts. This is how vnsh works:
# Share a log file with your AI assistant
cat server.log | vn
# Output: https://vnsh.dev/v/aBcDeFgHiJkL#R_sI4DHZ_6jNq6yqt2ORRDe9...
The key insight is in that # character. Everything after it (the URL fragment) is never sent to the server. This is a fundamental property of how browsers handle URLs: the fragment stays client-side.
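This split can be seen with plain shell string handling. A minimal sketch, using the example URL from above:

```shell
# Illustrative only: split a vnsh-style URL into the part the server
# receives and the fragment that stays in the browser.
url='https://vnsh.dev/v/aBcDeFgHiJkL#R_sI4DHZ_6jNq6yqt2ORRDe9'

server_side="${url%%#*}"   # everything before the first '#'
fragment="${url#*#}"       # everything after it, never sent over HTTP

echo "request sent to server: $server_side"
echo "kept client-side:       $fragment"
```

The server only ever sees the path portion; the fragment, which carries the key material, never leaves the client.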
How the Encryption Flow Works
- Client generates keys: A random 256-bit AES key and 128-bit IV are generated locally using crypto.getRandomValues()
- Client encrypts: The content is encrypted with AES-256-CBC before leaving your machine
- Ciphertext uploaded: Only the encrypted blob is sent to the server — it's indistinguishable from random bytes
- Keys stay local: The decryption key and IV are encoded into the URL fragment (#), which browsers never send to servers
- Recipient decrypts: When someone opens the link, the browser extracts the keys from the fragment and decrypts client-side using the WebCrypto API
The server stores encrypted blobs. It has no access to keys, plaintext, or even file types. A subpoena would yield only random-looking binary data.
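The client-side half of this flow can be sketched with openssl alone. This is an illustration of the encryption steps, not the actual vnsh CLI: the real tool's key encoding, upload step, and URL construction are not shown here and may differ.

```shell
# 1. Generate a random 256-bit key and 128-bit IV locally
key=$(openssl rand -hex 32)   # 32 bytes = 256 bits
iv=$(openssl rand -hex 16)    # 16 bytes = 128 bits

# 2. Encrypt with AES-256-CBC before anything leaves the machine
printf 'DATABASE_URL=postgres://user:pass@host/db' \
  | openssl enc -aes-256-cbc -K "$key" -iv "$iv" -out blob.enc

# 3. Only blob.enc would be uploaded; the key and IV would ride
#    in the URL fragment, never reaching the server
```

Inspecting blob.enc shows nothing but high-entropy bytes, which is all the server (or anyone subpoenaing it) would ever hold.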
Using This With AI Coding Tools
CLI: Pipe Anything Securely
# Share git diffs without exposing code in chat
git diff HEAD~5 | vn
# Share build logs
npm run build 2>&1 | vn
# Share a config file (strip real secrets first, of course)
cat docker-compose.yml | vn
The output URL can be pasted into any AI conversation. The AI agent fetches the encrypted blob, decrypts it locally (via MCP), and injects the plaintext into its context — without the server ever seeing the content.
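The consumer side reverses the process. In this sketch a local file stands in for the fetched blob, and the fragment layout (hex key, a dot, hex IV) is an assumption for illustration only; the real vnsh fragment encoding may differ.

```shell
# Producer side: encrypt and build the fragment (assumed layout)
key=$(openssl rand -hex 32)
iv=$(openssl rand -hex 16)
printf 'stack trace: NullPointerException at Foo.bar' \
  | openssl enc -aes-256-cbc -K "$key" -iv "$iv" -out blob.enc
fragment="${key}.${iv}"       # what would follow the '#' in the URL

# Consumer side: split the fragment back into key and IV, then
# decrypt locally (in reality the blob would be fetched first)
k="${fragment%%.*}"
v="${fragment#*.}"
openssl enc -d -aes-256-cbc -K "$k" -iv "$v" -in blob.enc
```

The storage server participates only in moving blob.enc around; every step that touches plaintext or keys happens on the two endpoints.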
MCP Integration: Seamless for Claude Code
With the vnsh MCP server installed, Claude Code automatically decrypts vnsh links when you paste them. Install in one command:
curl -sL vnsh.dev/claude | sh
Now when you paste a vnsh URL into your conversation, Claude reads the encrypted content directly — no manual copy-paste of sensitive data into the chat window.
Chrome Extension: Debug Bundles for AI
The vnsh Chrome Extension takes this further with AI Debug Bundles. Press Cmd+Shift+D on any page and it captures:
- Page screenshot
- Console errors
- Selected text or code
- Current URL and page title
All packaged into a single encrypted link. Paste it to Claude or ChatGPT and the AI gets complete debug context — without you having to manually screenshot, copy errors, and describe the page.
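The extension's actual bundle format isn't documented here; one plausible shape, shown purely as a sketch, is to archive the captured artifacts together and encrypt the archive as a single blob.

```shell
# Hypothetical bundle layout for illustration; file names and the
# use of tar are assumptions, not the extension's real format.
mkdir -p bundle
echo 'TypeError: x is undefined' > bundle/console-errors.txt
echo 'https://example.com/checkout' > bundle/page-url.txt

key=$(openssl rand -hex 32)
iv=$(openssl rand -hex 16)

# One archive in, one encrypted blob out -- a single link to share
tar -cf - bundle | openssl enc -aes-256-cbc -K "$key" -iv "$iv" -out bundle.enc
```

Whatever the internal format, the property that matters is the same as for single files: the server stores one opaque blob, and the keys travel only in the fragment.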
Why Not Just Use a Regular Pastebin?
Services like Pastebin, GitHub Gists, or even Slack snippets have a fundamental problem: the server can read your data. This matters because:
- Data breaches happen. If the server is compromised, so is your content.
- Legal requests. A subpoena or government request can compel the service to hand over your data.
- Employee access. Server operators or support staff could potentially view content.
- Data persistence. Even "deleted" content often lives in backups, logs, or caches.
With vnsh, none of these attack vectors apply. The server is a "dumb pipe" — it stores encrypted bytes and serves them back. Even the vnsh team cannot access your content.
Ephemeral by Design
vnsh links auto-expire after 24 hours by default (configurable up to 7 days). After expiry, the encrypted blob is deleted from storage. No backups, no archives.
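The expiry logic amounts to an absolute timestamp check. A minimal sketch, assuming the server records an expiry time when a blob is stored (vnsh's actual storage schema is not shown here):

```shell
DEFAULT_TTL=$((24 * 60 * 60))        # 24 hours, in seconds
created=$(date +%s)
expires_at=$((created + DEFAULT_TTL))

now=$(date +%s)
if [ "$now" -ge "$expires_at" ]; then
  echo "expired: delete blob"
else
  echo "blob still live"
fi
```

Once that deadline passes, deleting the ciphertext is enough: with no key on the server, there is nothing left to recover.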
This ephemeral model is perfect for AI coding workflows where the context is only relevant during a debugging session. You don't need that stack trace forever — you need it for the next 20 minutes while you fix the bug.
Open Source and Auditable
The entire vnsh stack is open source on GitHub:
- Cloudflare Worker: The storage API (~600 lines of TypeScript)
- CLI: Zero-dependency POSIX shell script using openssl and curl
- MCP Server: Node.js bridge for Claude Code integration
- Chrome Extension: Manifest V3, 48 tests, 93%+ coverage
All encryption happens client-side using standard, auditable primitives: AES-256-CBC via WebCrypto (browser), OpenSSL (CLI), or Node.js crypto module (MCP). All three produce byte-identical ciphertext.
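The byte-identical claim follows from AES-CBC being deterministic once the key and IV are fixed. A quick self-check using openssl alone (comparing two runs of one implementation, not the three stacks themselves):

```shell
key=$(openssl rand -hex 32)
iv=$(openssl rand -hex 16)

# Same plaintext, key, and IV on both runs
printf 'same input' | openssl enc -aes-256-cbc -K "$key" -iv "$iv" -out a.enc
printf 'same input' | openssl enc -aes-256-cbc -K "$key" -iv "$iv" -out b.enc

cmp a.enc b.enc && echo "byte-identical"
```

Any correct AES-256-CBC implementation given the same key, IV, and padding rules must produce the same bytes, which is what makes the CLI, browser, and MCP outputs interchangeable.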
Getting Started
Install the CLI in one line:
curl -sL vnsh.dev/i | sh
Or use it without installing anything:
echo "hello world" | bash <(curl -sL vnsh.dev/pipe)
For Claude Code users, add MCP support:
curl -sL vnsh.dev/claude | sh
Or get the Chrome Extension for browser-native encrypted sharing.
Your debug context deserves better than plaintext.