An iOS task app built on one insight: ambiguity, not effort, drives avoidance. The app detects which tasks cause you to freeze and breaks them into concrete first steps.
Dump tasks into a notes app. Reopen the list. Freeze at anything ambiguous. Do the easy things. Skip the rest. The list grows. The avoidance compounds.
The root cause isn't motivation. It's that ambiguity triggers paralysis. When a task is framed as an outcome ("figure out health insurance") and you can't identify the first physical action, you take none.
Every major task app treats all tasks equally. "Buy milk" and "prepare investor deck" get the same text field, the same checkbox. But these are fundamentally different cognitive problems.
Todoist's Task Assist comes closest but is reactive, complexity-blind, and paywalled. No mainstream app detects cognitive overload at the point of action. That's the gap Ta Da! is built for.
You type a task and hit return. The AI scores it across three dimensions: action clarity, step count, and cognitive load. Simple tasks stay as-is. Complex ones get broken into a concrete first step you can act on immediately.
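The scoring step described above can be sketched in Swift. Everything here is illustrative: the property names, weights, and threshold are assumptions for the sketch, not the shipped model.

```swift
// Hypothetical three-dimension score for a captured task.
struct TaskScore {
    let actionClarity: Double // 0 = pure outcome framing, 1 = concrete physical action
    let stepCount: Double     // normalized estimate of how many steps are involved
    let cognitiveLoad: Double // 0 = trivial, 1 = overwhelming

    // Higher composite = more likely to cause a freeze.
    // Weights are illustrative; low clarity dominates.
    var composite: Double {
        (1.0 - actionClarity) * 0.5 + stepCount * 0.2 + cognitiveLoad * 0.3
    }
}

enum Triage {
    case keepAsIs
    case decompose // send to the AI for a concrete first step
}

func triage(_ score: TaskScore, threshold: Double = 0.45) -> Triage {
    score.composite >= threshold ? .decompose : .keepAsIs
}

let buyMilk = TaskScore(actionClarity: 0.95, stepCount: 0.05, cognitiveLoad: 0.05)
let insurance = TaskScore(actionClarity: 0.2, stepCount: 0.7, cognitiveLoad: 0.8)
```

With these assumed weights, "buy milk" stays as-is while "figure out health insurance" crosses the threshold and gets decomposed, which is the behavior the paragraph describes.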
Supporting product decisions that reduce friction:

- You experience the AI before entering the main app. The value prop isn't explained; it's felt.
- The seven-task cap prevents the same overwhelm the app is solving. It frustrates power users, but the target user's core problem is having too much, not too little.
- Notifications deep-link to pull you back at the right moment. Re-engagement is contextual, not spammy.
The persona started with my own behaviour. I applied problem discovery to my own avoidance pattern, asking "why" until I hit the root cause: I get overwhelmed because I don't know where to start. That became a hypothesis to validate.
"Organize my closet" (high effort, low ambiguity) had a 40% pick rate. "Figure out health insurance" (moderate effort, high ambiguity) had 18%. Ambiguity, not effort, drives avoidance. That validated the weights in the AI scoring model.
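The screener finding can be expressed as a minimal avoidance model where ambiguity outweighs effort. The weights below are assumptions chosen to mirror the result, not the values in the shipped scoring model.

```swift
// Illustrative only: predicted avoidance as a weighted sum where
// ambiguity carries more weight than effort (weights are assumptions).
func predictedAvoidance(ambiguity: Double, effort: Double,
                        ambiguityWeight: Double = 0.8,
                        effortWeight: Double = 0.2) -> Double {
    ambiguityWeight * ambiguity + effortWeight * effort
}

// "Organize my closet": high effort, low ambiguity.
let closet = predictedAvoidance(ambiguity: 0.2, effort: 0.9)
// "Figure out health insurance": moderate effort, high ambiguity.
let insurance = predictedAvoidance(ambiguity: 0.9, effort: 0.5)
// insurance scores higher despite lower effort, matching the
// observed 18% vs 40% pick rates.
```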
The tradeoffs that shaped Ta Da!:

- Brain Dump and Today stay separate. A single merged view recreates the exact problem the app solves: urgent and ambiguous tasks competing for attention. Brain Dump stays pressure-free for capture; Today stays focused for action.
- Today is capped at seven tasks. Without the cap, the app becomes another overwhelming list. The target user's problem is having too much to do, not too little.
- Scoring runs silently because it's low-stakes. Decomposition requires an explicit "Accept" because creating subtasks without consent undermines the sense of control the app is trying to restore.
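The cap and the consent gate can both be sketched in a few lines of Swift. The type and function names here are hypothetical, not the app's actual API.

```swift
// Sketch: the Today list enforces its cap at insertion time,
// so an eighth task is rejected rather than silently appended.
struct TodayList {
    static let cap = 7
    private(set) var tasks: [String] = []

    mutating func add(_ task: String) -> Bool {
        guard tasks.count < Self.cap else { return false }
        tasks.append(task)
        return true
    }
}

struct Decomposition {
    let subtasks: [String]
}

// Sketch: scoring happens silently, but subtasks only materialize
// after the user explicitly accepts the proposed decomposition.
func apply(_ proposal: Decomposition, accepted: Bool) -> [String] {
    accepted ? proposal.subtasks : []
}
```

The design point both guards share: the system proposes, but state only changes on terms the user controls.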
Original metric: "tasks captured in W1." Changed to "completes first AI-decomposed task within 48 hours." The first measures activity. The second measures whether the product works.
North Star: complex tasks completed per active user per week. Every guardrail has a decision attached, not just a number:
| Signal | Threshold | Action |
|---|---|---|
| Acceptance rate | Below 70% | Review the scoring prompt |
| Helpfulness | Mostly 1s | Tune decomposition quality |
| Override rate | Above 10% | Recalibrate thresholds |
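The guardrail table maps directly to code: each signal crosses a threshold, each threshold triggers a named action. A minimal sketch, with thresholds taken from the table and metric names assumed; "mostly 1s" is modeled here as the share of lowest ratings exceeding half.

```swift
enum GuardrailAction {
    case reviewScoringPrompt
    case tuneDecompositionQuality
    case recalibrateThresholds
}

struct WeeklyMetrics {
    let acceptanceRate: Double    // share of decompositions accepted
    let lowestRatingShare: Double // share of helpfulness ratings at 1
    let overrideRate: Double      // share of AI scores manually overridden
}

// Each guardrail has a decision attached, not just a number.
func actions(for m: WeeklyMetrics) -> [GuardrailAction] {
    var out: [GuardrailAction] = []
    if m.acceptanceRate < 0.70 { out.append(.reviewScoringPrompt) }
    if m.lowestRatingShare > 0.50 { out.append(.tuneDecompositionQuality) }
    if m.overrideRate > 0.10 { out.append(.recalibrateThresholds) }
    return out
}
```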
I used Claude as a build partner across all 10 days with structured context documents, dedicated agents, and custom skills for each workflow. Architecture decisions, UX audits, competitive analysis, survey cross-tabulation: each had scoped inputs and expected outputs.
What Claude couldn't supply was product instinct: the overwhelm-avoidance insight, the activation-metric pivot, the principle that false positives erode trust faster than false negatives. Claude executed; I owned the product thinking.
I built first and validated after. The screener confirmed my intuition, but if the data had contradicted it, I'd have shipped an app around the wrong problem. Next time: validate before code.
The IA audit came after code was written. It surfaced 18 issues, 10 requiring code changes. Architecture decisions are cheaper on paper.
Building fast produced a real app to dogfood, not wireframes. Post-build audits caught issues before testers saw them.
Dogfooding exposed that the AI still struggles with genuinely ambiguous tasks. The real "what's next" is whatever five people who aren't me tell me is broken.