[Day 3] Comparing AI Tools for Handwriting Transcription

I put Claude, ChatGPT, and Gemini to the test with the same handwritten notes. One misread a key number, one nailed the formatting with icons, and one gave me three different options I didn’t ask for. Here’s what happened—and what it taught me about choosing the right AI tool for the job.

Jan 3, 2026
One of my favourite uses of AI is transcribing handwritten notes. I think better when I write by hand. There’s something about pen on paper that helps ideas flow more freely than typing. The accuracy of AI transcription has become so good now that it’s one of the first things I recommend to anyone looking for a practical, immediate use case for these tools.
This is especially powerful in team workshops. After a brainstorming session covered in sticky notes and whiteboard scribbles, someone used to be stuck with the tedious job of typing everything up. Now? Snap a photo, upload to an AI tool, and you’ve got structured notes in seconds. It’s a clear example of how AI can be a genuine productivity booster.
Today I tested this workflow across three different AI assistants, and the results revealed something important: each tool interprets the same human input differently.

Today’s Experiment

Goal: Transcribe my handwritten 2026 goals from paper to digital, formatted for Notion (my second brain).
The Task: I photographed a page of handwritten goals covering four life areas (Mind, Body, Finances, Relationships) and asked Claude, ChatGPT, and Gemini to:
  1. Transcribe the handwritten notes
  2. Reformat the output for Notion with clear structure
Why This Matters: This is exactly the kind of analog-to-digital workflow that knowledge workers encounter constantly. Meeting notes, brainstorm sessions, whiteboard captures—all need to go from messy handwriting to structured digital formats. The question is: which AI handles this best?

Process

Step 1: Capture the Source Material

I started with a single page of handwritten notes in my notebook listing my 2026 goals across four categories:
  • Mind: Daily morning practice for intention-setting
  • Body: Mobility exercises, gym 2x per week, food tracking with Cal AI
  • Finances: Automated savings and investments monthly
  • Relationships: Reach out to two people daily
A sample of my handwriting that borders on being illegible even to me at times.
 

Step 2: Test Each AI Tool

I used the same image and same initial prompt across all three tools:
Initial Prompt: “Transcribe”
Follow-up Prompt: “Revise the transcript for Notion”
This kept the experiment consistent while testing how each AI interprets minimal instructions.
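The whole flow runs in each tool's chat UI, but the same two-turn exchange can also be scripted. Here's a minimal sketch in Python, assuming the base64 image-input message shape from Anthropic's Messages API docs; the helper name and placeholder bytes are my own:

```python
import base64

def build_transcribe_message(image_bytes: bytes, media_type: str = "image/jpeg") -> dict:
    """Pair the photo with the minimal 'Transcribe' prompt in one user message."""
    return {
        "role": "user",
        "content": [
            {
                "type": "image",
                "source": {
                    "type": "base64",
                    "media_type": media_type,
                    # The API expects the raw image bytes base64-encoded as text
                    "data": base64.b64encode(image_bytes).decode("ascii"),
                },
            },
            {"type": "text", "text": "Transcribe"},
        ],
    }

# The follow-up turn appends the assistant's transcript to the conversation,
# then a second user message: "Revise the transcript for Notion"
```

Keeping the payload shape identical (image first, short text prompt second) is the scripted equivalent of using the same prompt across all three tools.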
 

Step 3: Compare the Results

I evaluated each output on three criteria:
  • Accuracy: Did it correctly read my handwriting?
  • Formatting: How well did it structure for Notion?
  • Usefulness: Could I copy-paste directly into my workspace?

Outputs

🟠 Claude (Opus 4.5)

Transcription Accuracy: Made one notable error—read “2x per week” for gym as “4x per week.” The rest was accurate.
Notion Formatting: Clean structure with headers and bullet points. Intelligently separated Finances into its own category (my handwriting grouped it under Body). No icons.
Sample Output:
First prompt using Claude
Follow-up prompt using Claude
 
Verdict: Good structure, helpful category separation, but the accuracy error on a specific number is a reminder to always verify AI outputs.

🟢 ChatGPT

Transcription Accuracy: Correctly read “2x per week” for gym. All other details accurate.
Notion Formatting: Excellent. Added emoji icons for each category (🧠 Mind, 💪 Body, 💰 Finances, 🤝 Relationships), used consistent structure, and included bold formatting for key terms like “Cal AI” and “2× per week.”
Sample Output:
First prompt using ChatGPT
Follow-up prompt using ChatGPT
 
Verdict: The “Goldilocks” option—accurate transcription, visually appealing with icons, and immediately copy-paste ready for Notion.

🔵 Gemini

Transcription Accuracy: Correct on all details, including the “2x per week” gym goal. Interestingly, it interpreted the “$” symbol in my notes as the “Finance” category, showing deeper contextual understanding.
Notion Formatting: Offered three different options:
  1. Clean Dashboard View - Simple goals page with checkboxes
  2. Daily Habit Tracker Setup - Full database schema with property types
  3. Weekly Toggle Review - Expandable weekly accountability template
Sample Output:
First prompt using Gemini
Follow-up prompt using Gemini
 
Overkill…
Not a bad idea, but it’s overengineering
 
Verdict: Most comprehensive and accurate, but felt like overkill for my immediate need. The multiple options required more decision-making energy. Best for users who want to build a full Notion system from scratch.

What I Learned Today

1. Accuracy Varies on Specifics

Claude misread “2x” as “4x”—a small error with real consequences if I’d built my fitness routine around it. Always verify numbers, dates, and specific details in AI transcriptions.

2. Same Prompt, Different Interpretations

“Revise for Notion” meant different things to each AI:
  • Claude: Clean structure, logical category separation
  • ChatGPT: Visual polish with icons and formatting
  • Gemini: Multiple architectural options and database schemas

3. More Options ≠ Better Output

Gemini’s three options were impressive but added cognitive load. Sometimes you just want the AI to make a reasonable choice and give you something usable. ChatGPT’s single, well-formatted output was immediately actionable.

4. Context Interpretation Shows Intelligence

Gemini recognizing “$” as “Finance” demonstrated contextual reasoning beyond literal transcription. This kind of interpretation adds value but only if it aligns with your intent.

5. The Human Still Curates

Each tool produced something useful, but I still needed to:
  • Verify accuracy (caught Claude’s 4x error)
  • Choose which format fits my system
  • Decide whether I wanted icons or not
  • Adapt the output to my actual Notion workspace
This is the human-AI collaboration in action: AI handles the heavy lifting of transcription and formatting; humans provide quality control and context-specific decisions.

Try It Yourself

Time Required: 5-10 minutes per tool
What You’ll Need:
  • A photo of handwritten notes
  • Access to Claude, ChatGPT, and/or Gemini
  • A destination app (Notion, Obsidian, Apple Notes, etc.)
The Experiment:
  1. Take a clear photo of your handwritten notes
  2. Upload to your AI tool of choice
  3. Start with a simple prompt: “Transcribe”
  4. Follow up with: “Format this for [your app]”
  5. Compare the output to your original—note any errors or interpretations
  6. Copy into your destination and see what needs adjustment
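Step 5 is where errors like Claude's “4x” hide. If you type out even a few trusted lines of the original, a word-level diff will surface that kind of drift instantly. A small sketch using Python's standard difflib (the function name is mine):

```python
import difflib

def spot_transcription_drift(original: str, transcript: str) -> list[str]:
    """Return word-level differences between your typed original and the AI transcript.

    Lines starting with '-' are words only in your original;
    lines starting with '+' are words only in the transcript.
    """
    diff = difflib.ndiff(original.split(), transcript.split())
    # Keep only added/removed words; drop unchanged words and '?' hint lines
    return [d for d in diff if d.startswith(("-", "+"))]

# The kind of error Claude made in my test:
print(spot_transcription_drift("gym 2x per week", "gym 4x per week"))
# flags '- 2x' and '+ 4x'
```

An empty result means the transcript matches your reference text word for word; anything else is worth a second look before it lands in your workspace.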
Questions to Consider:
  • Which errors would matter most in your context?
  • Do you prefer icons/visual formatting or clean minimalism?
  • How much do you want the AI to interpret vs. literally transcribe?