Vibe Coding my way to Downloading my Instapaper Articles
I’ve been using Instapaper since 2012 and have accumulated around 14,000 articles, carefully organized into folders and tags. But there was a problem: I wanted to use these articles as reference material in my Obsidian notes, and there was no easy way to get the content out of Instapaper in bulk.
The Problem
The idea came from wanting to use AI to reference Instapaper articles in my Obsidian notes. When writing about specific topics, instead of just linking to external sources, I wanted the source material directly in my vault. This would let me synthesize new ideas from both my original writing and other people’s blog posts.
I could do this manually - find an article, go to the website, use a tool like the Obsidian Web Clipper to convert it to Markdown. But when you need 10, 12, or 15 articles about one topic, this gets tedious fast.
The Solution
Enter Claude Code. My latest obsession with AI-assisted development has given me an insane productivity boost. I can now work on multiple projects at once - today I’ve been juggling two work projects and two side projects. The ability to offload tedious implementation details while supervising the work makes previously impractical side projects suddenly feasible.
This tool was one of those ideas I’d had for years but never pursued because the manual work seemed daunting.
Building It
The entire development process took about an hour:
- 10 minutes refining the idea with ChatGPT to get a PRD in Markdown
- 5 minutes of active time with Claude Code
- 45 minutes of Claude Code doing the implementation
The biggest time investment was actually setting up GitHub Actions for releases, which took longer than building the CLI itself.
ChatGPT suggested Go because it has a good readability library for extracting clean content from websites. Go is also one of my preferred languages for CLI tools - it’s fast, produces single binaries, and has excellent tooling.
What It Does
The Instapaper CLI tool transforms CSV exports into a searchable knowledge base:
# Import your Instapaper CSV
instapaper import articles.csv
# Fetch article content (with readability extraction)
instapaper fetch
# Search your articles
instapaper search "kubernetes deployment"
# Export specific articles to Markdown
instapaper export --query "machine learning" --output ./obsidian/ml-research/
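Under the hood, the import step is little more than CSV parsing. Here’s a minimal sketch of what it could look like, assuming the Instapaper export has a header row with columns along the lines of URL, Title, Selection, Folder, and Timestamp (the exact columns are an assumption - check your own export):

```go
package main

import (
	"encoding/csv"
	"fmt"
	"os"
)

// Article holds the fields we care about from one CSV row.
type Article struct {
	URL    string
	Title  string
	Folder string
}

// importCSV reads an Instapaper export and returns the articles it contains.
// Column positions are assumed, not verified against the real export format.
func importCSV(path string) ([]Article, error) {
	f, err := os.Open(path)
	if err != nil {
		return nil, err
	}
	defer f.Close()

	r := csv.NewReader(f)
	r.FieldsPerRecord = -1 // tolerate rows with varying column counts

	rows, err := r.ReadAll()
	if err != nil {
		return nil, err
	}

	var articles []Article
	for i, row := range rows {
		if i == 0 || len(row) < 4 {
			continue // skip the header row and anything too short
		}
		articles = append(articles, Article{URL: row[0], Title: row[1], Folder: row[3]})
	}
	return articles, nil
}

func main() {
	articles, err := importCSV("articles.csv")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Printf("imported %d articles\n", len(articles))
}
```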
Technical Implementation
The tool uses the following pieces, each sketched briefly after the list:
- SQLite with FTS5 for full-text search across titles, URLs, content, folders, and tags
- go-readability for extracting clean article content
- html-to-markdown for converting the extracted HTML to Obsidian-friendly Markdown
- Exponential backoff for handling rate limits and failed fetches
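The FTS5 piece is the heart of the search command. A minimal sketch, assuming the pure-Go modernc.org/sqlite driver (the actual tool may use a different driver, and the table and column names here are illustrative):

```go
package main

import (
	"database/sql"
	"fmt"
	"log"

	_ "modernc.org/sqlite" // registers the "sqlite" driver
)

func main() {
	db, err := sql.Open("sqlite", "instapaper.db")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// One FTS5 virtual table indexes everything the CLI searches over.
	_, err = db.Exec(`CREATE VIRTUAL TABLE IF NOT EXISTS articles_fts
		USING fts5(title, url, content, folder, tags)`)
	if err != nil {
		log.Fatal(err)
	}

	// `instapaper search "kubernetes deployment"` boils down to a MATCH query.
	rows, err := db.Query(`SELECT title, url FROM articles_fts
		WHERE articles_fts MATCH ? ORDER BY rank LIMIT 20`, "kubernetes deployment")
	if err != nil {
		log.Fatal(err)
	}
	defer rows.Close()

	for rows.Next() {
		var title, url string
		if err := rows.Scan(&title, &url); err != nil {
			log.Fatal(err)
		}
		fmt.Println(title, url)
	}
}
```

A single MATCH query can cover titles, URLs, content, folders, and tags because they’re all columns of the same virtual table.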
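The fetch step chains the other two libraries together: readability extraction first, then HTML-to-Markdown conversion. A rough sketch, assuming the go-shiori/go-readability and JohannesKaufmann/html-to-markdown (v1) APIs rather than the tool’s exact wiring:

```go
package main

import (
	"fmt"
	"log"
	"time"

	md "github.com/JohannesKaufmann/html-to-markdown"
	readability "github.com/go-shiori/go-readability"
)

// fetchAsMarkdown downloads a page, strips boilerplate with readability,
// and converts the remaining article HTML to Markdown.
func fetchAsMarkdown(url string) (string, error) {
	article, err := readability.FromURL(url, 30*time.Second)
	if err != nil {
		return "", fmt.Errorf("readability: %w", err)
	}

	converter := md.NewConverter("", true, nil)
	return converter.ConvertString(article.Content)
}

func main() {
	markdown, err := fetchAsMarkdown("https://example.com/some-post")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(markdown)
}
```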
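And the backoff: failed fetches and rate limits get retried with a doubling delay. This is a toy version using plain time.Sleep; the real implementation may lean on a dedicated backoff library instead.

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// retry runs fn up to maxAttempts times, doubling the wait after each failure.
func retry(maxAttempts int, initialDelay time.Duration, fn func() error) error {
	delay := initialDelay
	var lastErr error
	for attempt := 1; attempt <= maxAttempts; attempt++ {
		if lastErr = fn(); lastErr == nil {
			return nil
		}
		if attempt < maxAttempts {
			time.Sleep(delay)
			delay *= 2 // 1s, 2s, 4s, ...
		}
	}
	return fmt.Errorf("giving up after %d attempts: %w", maxAttempts, lastErr)
}

func main() {
	attempt := 0
	err := retry(5, time.Second, func() error {
		attempt++
		if attempt < 3 {
			return errors.New("simulated rate limit")
		}
		return nil
	})
	if err != nil {
		fmt.Println("failed:", err)
		return
	}
	fmt.Printf("succeeded on attempt %d\n", attempt)
}
```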
Each exported Markdown file includes YAML frontmatter with the original URL, folder, tags, and other metadata.
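For example, an exported file might be assembled roughly like this (the frontmatter field names are my guess at the format, not the tool’s exact output):

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// writeExport writes one article as a Markdown file with YAML frontmatter.
// The field names (title, url, folder, tags) are assumptions about the format.
func writeExport(path, title, url, folder string, tags []string, body string) error {
	frontmatter := fmt.Sprintf("---\ntitle: %q\nurl: %s\nfolder: %s\ntags: [%s]\n---\n\n",
		title, url, folder, strings.Join(tags, ", "))
	return os.WriteFile(path, []byte(frontmatter+body), 0o644)
}

func main() {
	err := writeExport("example.md", "Example article", "https://example.com/post",
		"AI", []string{"ml", "research"}, "Article body in Markdown goes here.")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```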
Current Usage
As I write this, the tool is running in the background fetching articles. I started with the newest 1,000 articles and am now working through the rest chronologically - currently at 350 posts with many more to go.
My workflow is:
- Fetch everything into the SQLite database
- Research specific topics and export relevant articles to Markdown
- Import those into Obsidian and link to existing notes
I won’t export all 14,000 articles to my vault - that would drown out my own content. But for specific research topics, I can now quickly pull in 10-15 relevant articles and have them available locally for AI-assisted synthesis.
The Claude Code Effect
This project exemplifies what’s possible with AI-assisted development. Ideas that seemed too tedious to pursue are now achievable in an afternoon. The ability to supervise multiple AI workers while they handle implementation details is like having a team of developers.
The readability extraction works well enough for my needs - I get clean text content and can always visit the original URL when I need visuals or context. The search functionality makes the 14,000 articles actually useful rather than just a digital hoarding collection.
What’s Next
For now, this solves the core problem: getting my decade-plus collection of saved articles into a format where I can actually use them for research and writing. I don’t really expect it to evolve much beyond that.
The total investment - about 15 minutes of my active time plus an hour of AI work - has unlocked years of accumulated reading material. That’s the kind of productivity multiplier that makes you rethink what’s possible with the right tools.