Market Intelligence: User Frustrations with Short-Form Video
The following analysis synthesizes data from 2024–2025 scientific reports and user surveys regarding the conflict between short-form video consumption (TikTok/Reels) and deep reading habits.
1. The "TikTok Brain" Phenomenon
Recent meta-analyses (late 2024/early 2025) have validated user complaints with clinical data:
- "Popcorn Brain" / Attentional Residue: A 2025 review in Psychological Bulletin linked heavy SFV engagement to deficits in inhibitory control. Users physically struggle to "inhibit" the impulse to switch tasks.
- The 47-Second Threshold: Attention-span studies indicate focus on a screen has dropped to ~47 seconds before the urge to switch windows kicks in. This makes the 15-20 minute immersion required for a book neurologically painful.
- Dopamine Desensitization: The "variable reward schedule" of feeds desensitizes the brain's reward system, making text appear "insufferably boring" by comparison.
2. User Frustrations: "I Can't Read Anymore"
Users are not just bored; they are experiencing a loss of identity as readers.
- The "Clunky" Text Effect: Users report text feels "impenetrable." "I used to lose time in books; now it takes real effort... my brain has to work harder."
- The "Tsundoku" of Guilt: Buying books for the aesthetic but failing to read them creates anxiety. "My TBR pile isn't a library anymore; it's just evidence of my broken attention span."
3. The Migration to Passive Learning
Curiosity hasn't died; it just migrated to lower-friction formats.
- The Podcast Boom: While non-fiction print sales fell 3.1% in early 2025, business/education podcast streams surged 34%.
- The Audio Compromise: Users are flocking to audiobooks and podcasts because they want to learn but can't stomach the friction of text.
The Business Case: A Billion-Dollar Void
Short-form learning platforms represent a massive, validated market.
- 600M+: Global podcast listeners plus active users of micro-learning apps (e.g., Duolingo, Blinkist).
- ~50M: English-speaking smartphone users (18-45) actively seeking self-improvement content.
- ~500k: Conservative capture of ~1% of the target market within 3-5 years ($30M-$50M ARR opportunity).
"For knowledge-seeking smartphone users who habitually doomscroll, the current experience of SFV consumption leads to 'brain rot' because the dopamine feedback loop creates a 'scan-and-shift' state that makes deep reading physically difficult."
- Utilize 5-15 minute gaps for high-signal discovery without friction.
- Convert "wasted time" guilt into a feeling of productive intellectual curiosity.
- NO curing addiction/replacing TikTok.
- NO training users back to long-form.
- MUST work within 47-second spans.
- H1: Recall ≥1 fact after 10 mins.
- H2: Session feels "productive."
- H3: Engagement >5 cards/session.
V1 Solution Summary
The Elevator Pitch:
Rabbithole is a swipeable, card-based feed of bite-sized, AI-generated facts around a chosen topic that can be explored via tap-to-deepen "holes" within a 5–15 minute session.
- Infinite Topic Feed: Never-ending stream of fact cards.
- Micro-Holes: Tap keywords to spawn a sub-feed (1-2 layers deep); see the data-model sketch below.
- Productivity Signal: Visual "facts collected" counter.
- Save for Later: One-tap bookmarking for deep reading later.
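A minimal TypeScript sketch of the card and "hole" data model behind these features; the interfaces and field names are illustrative assumptions, not a final schema.

// Illustrative V1 data model (hypothetical names, not a production schema).
interface Hole {
  keyword: string;         // tappable term inside the card body
  childTopic: string;      // sub-feed topic the hole opens when tapped
}

interface FactCard {
  id: string;
  topic: string;           // root topic of the current feed
  body: string;            // one bite-sized, AI-generated fact
  holes: Hole[];           // "micro-holes" that spawn a sub-feed
  depth: 0 | 1 | 2;        // V1 caps holes at 1-2 layers deep
  savedForLater: boolean;  // one-tap bookmark for deep reading later
}

// The "facts collected" productivity signal is just the count of cards viewed.
const factsCollected = (viewed: FactCard[]): number => viewed.length;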
- Activation: First session with ≥5 cards viewed + "productive" rating.
- Engagement: Median cards viewed per session > 10.
- Learning Proxy: % of users passing a 1-question recall check (optional).
Solution Options Considered
| Approach | Pros | Cons | Verdict |
|---|---|---|---|
| Micro-Chapters | High depth/integrity. | Too much text friction (high cognitive load). | Discarded |
| Audio-First Feed | Lowest friction. | Hard to skim/scan; passive retention is lower. | Postponed (V2) |
| Smart Card Feed | Matches swipe habit; granular; skimmable. | Risk of disjointed context. | Selected (V1) |
V1 Jobs-to-be-Done (Detail)
1. Who is it for?
The intellectually aspiring doomscroller who loves feeds but hates the post-binge regret.
2. What is the core job?
"Turn passive downtime into guilt-free discovery" using existing swipe habits.
3. Success Signal (2 Weeks)
The "Cocktail Party Effect": spontaneously citing a fact learned in the app during conversation.
V1 Core Flows
Visualizing the primary user journey: Entry → Swipe → Deep Dive → Exit.
PART V: RESEARCH APPENDIX
Summary Table: The Friction Points
| Frustration | User Description | Underlying Mechanism |
|---|---|---|
| "The Glaze Over" | Eyes moving without comprehension. | Cognitive Load: Brain conditioned for visual stimuli. |
| "The Itch" | Physical discomfort when not checking phone. | Dopamine Withdrawal: Craving the fast reward loop. |
| "The Pile" | Anxiety looking at unread books. | Aspiration/Reality Gap. |
| "The Audit" | Listening but retaining little. | Passive Consumption: Illusion of learning. |
Primary Sources & Validation
- Systematic Review on Short-Form Video (2025): Psychological Bulletin — linkage to inhibitory control deficits.
- Scan-and-Shift Hypothesis (2025): SDSU Eye-tracking study — reading patterns.
- Reading Decline (2025): University of Florida/iScience — 40% drop in pleasure reading.
- Attention Spans: Gloria Mark/UC Irvine — drop to 47 seconds.
- Market Stats: Blinkist (19M users) • Podcasts (580M listeners).
How It Works: The "Infinite Wikipedia"
Rabbithole orchestrates a complex dance of AI agents to create a personalized knowledge feed.
- Intent Analysis: Agents analyze query depth/context.
- Content Synthesis: Parallel LLMs draft comprehensive overviews.
- Dynamic Linking: Keywords become "portals" to deeper content.
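As a sketch of the "Dynamic Linking" step, the helper below splits a synthesized card body into plain-text and portal segments given a keyword list; the function and types are hypothetical, not the production agent pipeline.

type Segment =
  | { kind: 'text'; value: string }
  | { kind: 'portal'; value: string; childTopic: string };

// Turn extracted keywords into tappable "portals" within a card's text.
function linkKeywords(cardText: string, keywords: string[]): Segment[] {
  const segments: Segment[] = [];
  let rest = cardText;
  while (rest.length > 0) {
    // Find the earliest remaining keyword occurrence (case-insensitive).
    let hit: { keyword: string; index: number } | null = null;
    for (const kw of keywords) {
      const idx = rest.toLowerCase().indexOf(kw.toLowerCase());
      if (idx >= 0 && (hit === null || idx < hit.index)) hit = { keyword: kw, index: idx };
    }
    if (hit === null) {
      segments.push({ kind: 'text', value: rest });
      break;
    }
    if (hit.index > 0) segments.push({ kind: 'text', value: rest.slice(0, hit.index) });
    const matched = rest.slice(hit.index, hit.index + hit.keyword.length);
    segments.push({ kind: 'portal', value: matched, childTopic: hit.keyword });
    rest = rest.slice(hit.index + hit.keyword.length);
  }
  return segments;
}

The UI can then render "portal" segments as tappable links that open a deeper sub-feed on the keyword's topic.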
Technical Challenges
- Latency: Utilizing the Vercel AI SDK for aggressive streaming so generated text arrives at "feed speed" (see the route handler below).
- State: Graph-based structures to track the user's journey through topic nodes (sketched after the route handler).
// Assumes the openai-edge client, which pairs with the Vercel AI SDK's OpenAIStream helper.
import { Configuration, OpenAIApi } from 'openai-edge';
import { OpenAIStream, StreamingTextResponse } from 'ai';

const openai = new OpenAIApi(new Configuration({ apiKey: process.env.OPENAI_API_KEY }));

export async function POST(req: Request) {
  const { messages } = await req.json();
  // Stream tokens as they are generated so cards render at "feed speed".
  const response = await openai.createChatCompletion({
    model: 'gpt-4',
    stream: true,
    messages
  });
  return new StreamingTextResponse(OpenAIStream(response));
}
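For the state challenge, a minimal sketch of a graph that records the user's path through topic nodes; the JourneyGraph class and its method names are assumptions for illustration, not the shipped data layer.

// Hypothetical journey state: each opened topic is a node, and tapping a
// "hole" adds a directed edge from the parent topic to the child topic.
interface TopicNode {
  topic: string;
  cardsViewed: number;
  children: Set<string>;   // topics reached by tapping holes on this node
}

class JourneyGraph {
  private nodes = new Map<string, TopicNode>();

  visit(topic: string): TopicNode {
    let node = this.nodes.get(topic);
    if (!node) {
      node = { topic, cardsViewed: 0, children: new Set() };
      this.nodes.set(topic, node);
    }
    return node;
  }

  // The user tapped a hole on `from`, opening a sub-feed on `to`.
  dive(from: string, to: string): void {
    this.visit(from).children.add(to);
    this.visit(to);
  }

  viewCard(topic: string): void {
    this.visit(topic).cardsViewed += 1;
  }

  // Total "facts collected" across the session, for the productivity counter.
  totalCards(): number {
    let sum = 0;
    for (const n of this.nodes.values()) sum += n.cardsViewed;
    return sum;
  }
}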
Mobile Adaptation
Built with Capacitor to bring the Next.js web app to Android with native interactions:
- Touch-optimized interactions (swipe to save, tap to explore).
- Native sharing capabilities.
- Offline caching for previously visited "holes".
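A minimal capacitor.config.ts consistent with that setup; the app id and webDir are placeholders, and a static Next.js export to out/ is assumed.

// Hypothetical Capacitor config; appId and webDir are placeholders.
import type { CapacitorConfig } from '@capacitor/cli';

const config: CapacitorConfig = {
  appId: 'com.example.rabbithole',  // placeholder bundle id
  appName: 'Rabbithole',
  webDir: 'out',                    // assumes `next build` with static export
  server: {
    androidScheme: 'https'          // serve assets over https in the Android WebView
  }
};

export default config;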
Future Roadmap
We are currently exploring multi-modal learning paths, where the AI can generate not just text, but relevant images (using Flux or Midjourney APIs) and even short audio summaries.
Enter the Rabbithole