A robotic arm that opens books, turns pages, and reads aloud — built for children with reading disabilities, people with motor challenges, and anyone who needs a voice.
Physical AI Hack 2026 · San Francisco · Jan 31–Feb 1
See it in action
The arm opens the cover, turns each page, and waits while the camera captures the spread. Then it reads aloud and turns again.
Who it's for
We didn't build a reading robot to show off. We built it because there are people who need someone — or something — to read to them.
For children and adults with reading fluency challenges, hearing words spoken aloud while seeing them on the page creates a multi-sensory experience that builds comprehension and confidence. The robot reads at a natural pace, following along with the physical book — not a screen.
For people with limited hand mobility, whether from muscular dystrophy, paralysis, or age-related motor decline, physical books can fall out of reach. The arm turns pages autonomously, so the reader only needs to listen.
With voice cloning, a grandparent across the country — or one who has passed away — can still read a bedtime story. Their voice, their cadence, from the same book they used to hold.
In silent mode, the arm turns pages while the camera captures each spread. Text is extracted automatically. Hands-free digitization of books, manuscripts, and records.
Classrooms, after-school programs, and libraries get a patient, tireless reading companion that skips blank pages, reads with expression, and picks up where it left off.
Three modes
One robot, three ways to use it.
Full storytime narration. Every word, naturally read aloud with expressive voices.
Titles, headings, and bold text only. Fast information retrieval.
No audio. Photograph, extract text, advance. Pure archival digitization.
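The three modes above amount to a single decision per page: what, if anything, gets voiced. A minimal sketch of that dispatch, with illustrative names only (the `Mode` enum and `text_to_speak` helper are assumptions for this example, not the project's actual code):

```python
from enum import Enum

class Mode(Enum):
    NARRATE = "narrate"  # full storytime narration: every word aloud
    SKIM = "skim"        # titles, headings, and bold text only
    SILENT = "silent"    # no audio: photograph, extract, advance

def text_to_speak(mode, body, headings):
    """Decide what gets voiced for a page under the given mode."""
    if mode is Mode.SILENT:
        return None                 # pure archival digitization
    if mode is Mode.SKIM:
        return " ".join(headings)   # fast information retrieval
    return body                     # narrate every word
```

The rest of the pipeline stays identical across modes; only this one branch changes, which is what lets one robot serve all three uses.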
The loop
A closed-loop pipeline. Turn, capture, read, speak, wait, repeat.
Turns page & positions
Captures full spread
Classifies & extracts
Streaming speech
Each page classified in under 20 tokens. Blank and index pages are auto-skipped — the arm turns again immediately.
Three threads in parallel: vision extraction, audio pre-fetch, and playback. Speech starts before the page is fully processed.
Both pages captured at once. Vision reads left-then-right and handles titles spanning the book spine.
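The loop above can be sketched as three queue-connected threads. Everything here is illustrative: the function names, the string stand-ins for camera capture, text extraction, and speech synthesis, and the queue wiring are assumptions for this sketch, not the project's actual code. The cheap `classify` gate mirrors the under-20-token skip check, and the separate pre-fetch and playback stages are what let speech start before the next page is fully processed.

```python
import queue
import threading

def classify(spread_text):
    """Cheap per-page gate: skip blanks and index pages immediately."""
    if not spread_text.strip() or spread_text.lower().startswith("index"):
        return "skip"
    return "read"

def run_pipeline(spreads, synthesize, play):
    """Closed loop: capture, classify, extract, pre-fetch audio, speak.

    Vision extraction, audio pre-fetch, and playback run in parallel,
    handing work down two FIFO queues; a None sentinel ends each stage.
    """
    text_q, audio_q, spoken = queue.Queue(), queue.Queue(), []

    def vision():
        for spread in spreads:          # stands in for capture + extract
            if classify(spread) == "read":
                text_q.put(spread)
        text_q.put(None)                # sentinel: book finished

    def prefetch():
        while (text := text_q.get()) is not None:
            audio_q.put(synthesize(text))
        audio_q.put(None)

    def playback():
        while (clip := audio_q.get()) is not None:
            spoken.append(play(clip))

    threads = [threading.Thread(target=t) for t in (vision, prefetch, playback)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return spoken
```

With stub synthesis and playback, `run_pipeline(["Once upon a time.", "", "INDEX", "The end."], synthesize=str.upper, play=lambda c: c)` voices only the two real pages; the blank and index spreads are dropped at the classify gate and the arm's turn would fire again immediately.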
Stack
What's next
From hackathon prototype to real-world assistive tool.
One arm. Flat books. Learned page turns.
Various paper weights. Dynamic failure recovery.
Two-arm coordination. Human-like ergonomics.
Voice interaction. “Hey robot, read that again.”
The team
Bringing stories to life, one page at a time.
Stay in the loop
Get updates on Ladybug Robotics — new capabilities, use cases, and ways to get involved.