Ta Da!

An iOS task app built on one insight: ambiguity, not effort, drives avoidance. The app detects which tasks cause you to freeze and breaks them into concrete first steps.

Role: Sole PM & Builder
Timeline: 10 days
Stack: SwiftUI, Firebase, Claude API
Stage: Dogfooding & TestFlight (0 → 1 build)
Ta Da! app: onboarding, Brain Dump task list, and Today view with progress tracking

The cycle I couldn't break

Dump tasks into a notes app. Reopen the list. Freeze at anything ambiguous. Do the easy things. Skip the rest. The list grows. The avoidance compounds.

Core insight

The root cause isn't motivation. It's that ambiguity triggers paralysis. When a task is framed as an outcome ("figure out health insurance") and you can't identify the first physical action, you take none.

Every major task app treats all tasks equally. "Buy milk" and "prepare investor deck" get the same text field, the same checkbox. But these are fundamentally different cognitive problems.

Todoist's Task Assist comes closest but is reactive, complexity-blind, and paywalled. No mainstream app detects cognitive overload at the point of action. That's the gap Ta Da! is built for.

One question the app answers

You type a task and hit return. The AI scores it across three dimensions: action clarity, step count, and cognitive load. Simple tasks stay as-is. Complex ones get broken into a concrete first step you can act on immediately.

Task input → AI scoring → complexity result → first-step decomposition
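
A minimal sketch of what that scoring call could look like against the Claude Messages API, in Swift. The score fields, prompt wording, and model id are illustrative assumptions, not the shipped implementation:

```swift
import Foundation

// Illustrative score shape. The three dimensions come from the product;
// the field names and ranges are assumptions.
struct TaskScore: Codable {
    let actionClarity: Double   // 0–1, higher = clearer first action
    let stepCount: Int          // estimated discrete steps
    let cognitiveLoad: Double   // 0–1, higher = heavier
}

// Minimal envelope for the Claude Messages API response (text content only).
struct ClaudeReply: Codable {
    struct Content: Codable { let type: String; let text: String? }
    let content: [Content]
}

func scoreTask(_ title: String, apiKey: String) async throws -> TaskScore {
    var request = URLRequest(url: URL(string: "https://api.anthropic.com/v1/messages")!)
    request.httpMethod = "POST"
    request.setValue(apiKey, forHTTPHeaderField: "x-api-key")
    request.setValue("2023-06-01", forHTTPHeaderField: "anthropic-version")
    request.setValue("application/json", forHTTPHeaderField: "content-type")

    // Hypothetical prompt: demand strict JSON so the reply parses directly.
    let prompt = """
    Score this task. Reply with only JSON: {"actionClarity": 0-1, \
    "stepCount": int, "cognitiveLoad": 0-1}. Task: \(title)
    """
    let body: [String: Any] = [
        "model": "claude-sonnet-4-5",   // placeholder model id
        "max_tokens": 200,
        "messages": [["role": "user", "content": prompt]]
    ]
    request.httpBody = try JSONSerialization.data(withJSONObject: body)

    let (data, _) = try await URLSession.shared.data(for: request)
    let reply = try JSONDecoder().decode(ClaudeReply.self, from: data)
    guard let json = reply.content.first?.text?.data(using: .utf8) else {
        throw URLError(.cannotParseResponse)
    }
    return try JSONDecoder().decode(TaskScore.self, from: json)
}
```

Demanding strict JSON back keeps the "stay as-is vs. decompose" decision a plain threshold check on the client rather than another round of prompting.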

Supporting product decisions that reduce friction:

Design: Four-step onboarding

You experience the AI before entering the main app. The value prop isn't explained; it's felt.

Constraint: Seven-task daily cap

Prevents the same overwhelm the app is solving. Frustrates power users, but the target user's core problem is having too much, not too little.

Design: Four notification types

Each with deep-linking to pull you back at the right moment. Re-engagement is contextual, not spammy.
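
A sketch of the deep-linking mechanic, assuming a hypothetical tada:// URL scheme: the notification carries its destination in userInfo, and the delegate routes it on tap.

```swift
import UserNotifications

// Schedule a contextual nudge that deep-links back into the Today view.
// The tada:// scheme and the copy are illustrative assumptions.
func scheduleFirstStepNudge(for taskTitle: String) {
    let content = UNMutableNotificationContent()
    content.title = "One small step"
    content.body = "Your first step on \"\(taskTitle)\" is ready."
    content.userInfo = ["deepLink": "tada://today"]   // read back on tap

    // Fire in an hour; a real build would pick the moment contextually.
    let trigger = UNTimeIntervalNotificationTrigger(timeInterval: 3600, repeats: false)
    let request = UNNotificationRequest(identifier: UUID().uuidString,
                                        content: content,
                                        trigger: trigger)
    UNUserNotificationCenter.current().add(request)
}

// In the UNUserNotificationCenterDelegate, the tap handler pulls the link
// out of userInfo and hands it to the app's URL router, e.g.:
// if let link = response.notification.request.content.userInfo["deepLink"] as? String,
//    let url = URL(string: link) { router.open(url) }  // router is hypothetical
```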

From personal pattern to validated problem

The persona started with my own behaviour. I applied problem discovery to my own avoidance pattern, asking "why" until I hit the root cause: I get overwhelmed because I don't know where to start. That became a hypothesis to validate.

35 screener survey respondents
64% exhibited the full avoidance loop
8 deep user interviews
Key finding

"Organize my closet" (high effort, low ambiguity) had a 40% pick rate. "Figure out health insurance" (moderate effort, high ambiguity) had 18%. Ambiguity, not effort, drives avoidance. That validated the weights in the AI scoring model.

Screener survey results: 72% avoiders, 64% matching the full overwhelmed-dumper persona, and task pick rates showing ambiguity drives avoidance more than effort

Building with constraints

The tradeoffs that shaped Ta Da!:

Tradeoff: Two views, not one

A single merged view recreates the exact problem the app solves: urgent and ambiguous tasks competing for attention. Brain Dump stays pressure-free for capture. Today stays focused for action.

Tradeoff: Constraint over flexibility

Today is capped at seven tasks. Without the cap, the app becomes another overwhelming list. The target user's problem is having too much to do, not too little.

Principle: Seamlessness vs. trust

Scoring runs silently because it's low-stakes. Decomposition requires an explicit "Accept" because creating subtasks without consent undermines the sense of control the app is trying to restore. A sketch of this gate follows the list.

Pivot: Activation metric redesign

Original metric: "tasks captured in W1." Changed to "completes first AI-decomposed task within 48 hours." The first measures activity. The second measures whether the product works.
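
A minimal SwiftUI sketch of that consent gate, with a hypothetical Subtask model: the AI's proposal renders as a preview, and nothing is persisted until the user taps Accept.

```swift
import SwiftUI

// Hypothetical model for illustration.
struct Subtask: Identifiable { let id = UUID(); let title: String }

struct DecompositionPreview: View {
    let proposed: [Subtask]            // AI output, not yet persisted
    let onAccept: ([Subtask]) -> Void  // the only path that writes subtasks
    let onDismiss: () -> Void

    var body: some View {
        VStack(alignment: .leading, spacing: 12) {
            Text("Suggested first steps").font(.headline)
            ForEach(proposed) { step in
                Label(step.title, systemImage: "circle")
            }
            HStack {
                Button("Not now", action: onDismiss)
                Spacer()
                // Only this tap mutates the user's list; silent scoring
                // never does.
                Button("Accept") { onAccept(proposed) }
                    .buttonStyle(.borderedProminent)
            }
        }
        .padding()
    }
}
```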

How I'll know it works

North Star: complex tasks completed per active user per week. Every guardrail has a decision attached, not just a number:

Signal           Threshold    Action
Acceptance rate  Below 70%    Review the scoring prompt
Helpfulness      Mostly 1s    Tune decomposition quality
Override rate    Above 10%    Recalibrate thresholds
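
With Firebase already in the stack, one way these signals could be instrumented is an event per decomposition outcome, with the rates computed downstream. Event and parameter names here are assumptions:

```swift
import FirebaseAnalytics

// Hypothetical event names; each guardrail above is a ratio of these counts.
enum DecompositionOutcome: String {
    case accepted   = "decomposition_accepted"    // feeds acceptance rate
    case overridden = "decomposition_overridden"  // feeds override rate
}

func logDecomposition(_ outcome: DecompositionOutcome,
                      helpfulness: Int? = nil,    // assumed 1–5 rating scale
                      taskComplexity: Double) {
    var params: [String: Any] = ["task_complexity": taskComplexity]
    if let helpfulness { params["helpfulness"] = helpfulness }
    Analytics.logEvent(outcome.rawValue, parameters: params)
}
```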

Claude built it. I directed it.

I used Claude as a build partner across all 10 days with structured context documents, dedicated agents, and custom skills for each workflow. Architecture decisions, UX audits, competitive analysis, survey cross-tabulation: each had scoped inputs and expected outputs.

What I didn't outsource

Product instinct. The overwhelm-avoidance insight, the activation metric pivot, the principle that false positives erode trust faster than false negatives. Claude executed. I owned the product thinking.

What I'd do differently

Sequencing was backwards

I built first and validated after. The screener confirmed my intuition, but if the data had contradicted it, I'd have shipped an app around the wrong problem. Next time: validate before code.

IA audit came too late

It ran after the code was written and surfaced 18 issues, 10 of them requiring code changes. Architecture decisions are cheaper on paper.

Building fast had upside

It produced a real app to dogfood, not wireframes, and post-build audits caught issues before testers saw them.

What's next

Dogfooding exposed that the AI still struggles with genuinely ambiguous tasks. The real "what's next" is whatever five people who aren't me tell me is broken.

User Research · Problem Discovery · Survey Design · Claude API · Product Analytics · IA Audit · AI Product Design · Activation Metrics