But he couldn’t read them alone.
His teachers did their best.
But they had other students —
and there were only so many hours in the day.
One student needs one human reader. Every single time.
That costs $25/hr. Schools can’t afford it.
Parents can’t always be there.
But we’re in the age of physical AI.
How else might we approach the problem?
The Solution
A robot arm that opens books, turns pages, and reads aloud.
How It Works
Google Gemini assesses the scene. Book open or closed? The robot decides.
Learned motor policies — precise, gentle, repeatable. Retry on failure.
Text streamed to voice. Sub-second latency. Any voice — even grandma’s.
Repeatable task execution with failure handling — no human in the loop.
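The pipeline above can be sketched as a perceive-act-retry loop. This is a minimal illustration, not the production system: every name here (`assess_scene`, `turn_page`, `speak`, `get_page_text`) is a hypothetical stand-in for the real Gemini vision call, learned motor policy, and voice stream.

```python
# Sketch of the perceive -> act -> retry loop: assess the scene,
# open the book if needed, then read and turn pages until done.
# All callables are hypothetical stand-ins for the real components.

MAX_RETRIES = 3

def attempt(action, retries=MAX_RETRIES):
    """Run a motor action, retrying up to `retries` times on failure."""
    for _ in range(retries):
        if action():
            return True
    return False  # exhausted retries; escalate instead of looping forever

def read_book(assess_scene, turn_page, speak, get_page_text):
    """Drive one reading session with no human in the loop."""
    if assess_scene() == "closed":       # vision model classifies the scene
        if not attempt(turn_page):       # open the book, with retries
            return False
    while True:
        text = get_page_text()
        if text is None:                 # no more pages: session complete
            return True
        speak(text)                      # stream page text to the voice pipeline
        if not attempt(turn_page):       # turn to the next page, with retries
            return False
```

The retry wrapper is what makes execution repeatable without a human in the loop: a slipped page grip triggers another attempt rather than a stalled session.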
Why Now
Scene understanding, learned manipulation, and synthetic voice each existed separately. A $300 robot that sees, reads, and speaks: that's new.
Why This
85% of books have no audio version. And audiobooks don’t let you hold the real book — kids get lost without the physical page to follow.
Many children can’t turn pages at all. Cerebral palsy, muscular dystrophy, spinal cord injuries. They need a robot hand, not just a robot voice.
Voice cloning means grandma reads the bedtime story — even when she’s not there. Mom’s voice. A teacher’s voice. Comfort and connection.
Every one of them needs a patient, tireless reader.
The Roadmap
Four phases. One mission: build the world’s largest paper-manipulation dataset while serving 240 million children.
The Economics
Team
Vision pipeline, orchestrator, product strategy. AI researcher with a Master’s in AI from Northwestern. Mother of a child with reading disabilities.
Motor policies, voice pipeline, arm integration. Director of Engineering at Lumen, manages 40 engineers. Deep voice & audio expertise.
Data pipeline, calibration, quality assurance. Head of Global Training at H2O.ai. The motor policies are only as good as the data she curates.
Every library. Every classroom. Every grandparent’s house.
A patient, tireless reader for every child who needs one.
We built the reader.
...and the dataset.
ladybug.bot