Product Teardown: Pawllo

A premium D2C pet food brand whose website undermines its positioning at every step. I audited the full purchase journey, benchmarked against 10 competitors, and built a working redesign prototype.

Type: Independent teardown
Scope: Full purchase funnel
Stack: Next.js, TypeScript, Tailwind, Framer Motion
Output: Audit + working prototype
Live walkthrough: category browsing, PDP, cart & checkout flow

Premium positioning, discount execution

Pawllo sells natural pet food, treats, and meal toppers at Rs. 300-1,200, competing with Drools, Pedigree, HUFT, and Supertails. The product is positioned as premium. The website is not.

A first-time visitor scrolls through six folds of brand storytelling before seeing a single product. The PDP buries ingredients below marketing copy. "Add to Cart" redirects users to the cart page. UPI and wallet payments exist but hide behind a Razorpay redirect that's invisible from the checkout screen.

The question was whether these problems were fixable gaps or structural issues baked into the platform, and what "better" would actually look like if built.

27-point audit across 10 competitors

I audited every screen of the purchase journey on desktop and mobile: Homepage, PLP, PDP, Cart, and Checkout. Each screen was documented through first-person observation, tracking where friction appeared and what caused it.

I then benchmarked Pawllo against 10 Indian pet e-commerce competitors across a 27-point scoring matrix covering navigation, product discovery, information architecture, trust signals, and checkout flow.
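A scoring matrix like this reduces to a weighted checklist. The sketch below shows one way such a score could be computed; the criteria and weights are illustrative placeholders, not the actual 27-point rubric.

```typescript
// Illustrative sketch of a benchmark scoring matrix. Criterion names and
// weights here are hypothetical, not the real 27-point rubric.
type Criterion = { name: string; weight: number };

const criteria: Criterion[] = [
  { name: "On-site search", weight: 1 },
  { name: "Category filters", weight: 1 },
  { name: "Product reviews", weight: 1 },
  { name: "Visible payment methods", weight: 1 },
];

// scores: criterion name -> 0..1 (how well the site satisfies it)
function benchmarkScore(scores: Record<string, number>): number {
  const totalWeight = criteria.reduce((sum, c) => sum + c.weight, 0);
  const earned = criteria.reduce(
    (sum, c) => sum + c.weight * (scores[c.name] ?? 0),
    0
  );
  // Normalize to a 0-10 scale, matching the 2.8/10-style scores above.
  return Math.round((earned / totalWeight) * 100) / 10;
}
```

Scoring every site against the same rubric is what makes a 2.8 vs. 7.0 comparison defensible rather than impressionistic.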

Pawllo's benchmark score: 2.8/10
Competitor average: 7.0/10
Missing table-stakes features: 7
Key finding

Pawllo was missing 7 features that 80%+ of competitors had: search, filters, product reviews, breadcrumbs, trust badges on PDPs, on-page ATC confirmation, and visible payment methods. These are table-stakes gaps, not optimization opportunities.

The teardown produced 14 structured findings, each following a full chain: what is wrong, why it matters to the user, why it matters to the business, the problem category (fix / test / rethink), a recommendation, and metrics to track. Five P0, five P1, four P2.
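The finding chain described above maps naturally onto a record type. This is a hypothetical sketch of how each finding could be structured; field names and the example values are illustrative, not lifted from the actual audit document.

```typescript
// Sketch of the per-finding structure described above. The field names
// and example values are illustrative assumptions.
type Priority = "P0" | "P1" | "P2";
type ProblemCategory = "fix" | "test" | "rethink";

interface Finding {
  whatIsWrong: string;
  whyItMattersToUser: string;
  whyItMattersToBusiness: string;
  category: ProblemCategory;
  priority: Priority;
  recommendation: string;
  metricsToTrack: string[];
}

const example: Finding = {
  whatIsWrong: "Add to Cart redirects to the cart page",
  whyItMattersToUser: "Breaks the browsing flow mid-session",
  whyItMattersToBusiness: "Fewer items per order",
  category: "fix",
  priority: "P0",
  recommendation: "Show an on-page confirmation toast instead",
  metricsToTrack: ["ATC rate", "Items per order"],
};
```

Forcing every finding through the same shape is what keeps the teardown from drifting into an unsorted list of complaints.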

Feature comparison (● has feature, ○ missing/weak): Pawllo vs. Supertails, PetChef, Heads Up For Tails, and Pet Pantry, across Clean Visual Design, Product Filtering, Subscriptions, Reviews & Ratings, Personalization, Content & Education, Fresh Food Delivery, Mobile App, and Checkout Experience.

Not everything is a hypothesis

The core prioritization decision was sorting 19 interventions by problem type, not just severity. A missing search bar is not a hypothesis. A broken CTA is not an experiment. Matching the intervention type to the problem type is what separates a structured teardown from a feature wishlist.

Tier 1: Fix (8 items). Known answer; no experiment needed.

Broken promo CTA linking to Contact instead of Shop. ATC redirect pulling users out of browsing flow. Missing search and filters. These are bugs and gaps.

Tier 2: Test (6 items). Direction clear, magnitude unknown.

PDP scroll desync, imagery gaps, review placement, payment visibility. Each structured as: We believe [change] will [outcome] because [evidence]. We will know when [metric moves].

Tier 3: Experiment (5 items). Outcome uncertain.

Per-serving pricing, Subscribe & Save, WhatsApp vs. email for reorders. Each included what would disprove the hypothesis.
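The Tier 2 and Tier 3 template ("We believe [change] will [outcome] because [evidence]...") can be made concrete as a small record type. The sketch below is an illustration of that template; the interface and helper names are my own, not part of the teardown deliverable.

```typescript
// Sketch of the hypothesis template used for Tier 2/3 items:
// "We believe [change] will [outcome] because [evidence].
//  We will know when [metric] moves." Names are illustrative.
interface Hypothesis {
  change: string;
  outcome: string;
  evidence: string;
  successMetric: string;
  disprovenIf: string; // Tier 3 items include a falsification condition
}

function render(h: Hypothesis): string {
  return (
    `We believe ${h.change} will ${h.outcome} because ${h.evidence}. ` +
    `We will know when ${h.successMetric} moves. ` +
    `Disproven if ${h.disprovenIf}.`
  );
}
```

Writing the falsification condition up front is what separates an experiment from a feature launch with a metric attached.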

Why this matters

"8 things are broken, 6 have strong evidence, 5 are worth testing" builds founder trust differently than 19 items framed as "we think this might help."

From diagnosis to working prototype

I built a working redesign prototype of 4 core screens (Homepage, Shop, PDP, Cart/Checkout) in Next.js + TypeScript + Tailwind CSS + Framer Motion. Every design choice traced back to a specific finding.

Homepage

Trust signals moved to fold 1. "Shop Now" replaced the brand narrative hero. Products visible without scrolling through six folds of storytelling.

Shop / PLP

Search, animal tabs, and category filters added. Users can find products by what they need, not by how Pawllo organizes its catalog.

Product Page

Correct information hierarchy. Tabbed ingredient/nutrition sections. Customer reviews. On-page ATC toast instead of a redirect that breaks browsing flow.
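The on-page ATC behavior comes down to updating cart state and surfacing a toast instead of navigating away. A framework-free sketch of that logic, under my own assumed names (the real prototype wires this into React state and a Framer Motion toast):

```typescript
// Sketch of on-page add-to-cart: update cart state and return a toast
// payload instead of redirecting. Names and shapes are assumptions.
interface CartLine { sku: string; name: string; qty: number }

function addToCart(
  cart: CartLine[],
  item: { sku: string; name: string }
): { cart: CartLine[]; toast: string } {
  const existing = cart.find((l) => l.sku === item.sku);
  const next = existing
    ? cart.map((l) => (l.sku === item.sku ? { ...l, qty: l.qty + 1 } : l))
    : [...cart, { ...item, qty: 1 }];
  // The user stays on the PDP; only a toast confirms the action.
  return { cart: next, toast: `${item.name} added to cart` };
}
```

Keeping the function pure (new cart in, new cart out) also makes the ATC-rate metric trivial to instrument at a single call site.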

Cart + Checkout

All five payment methods visible with icons and descriptions. Promo field no longer shows an error before anyone types. COD vs. digital split trackable.

12 of 14 findings were addressed. The two excluded (mobile load performance, redundant About pages) were out of scope for a static prototype.

Redesigned screens: Homepage, Shop, PDP, Cart

What I would measure on day one

This is a prototype, not a live product, so there are no conversion numbers. I built a measurement framework tied to each intervention tier. These are the baselines I would establish on day one of a real engagement.

Metric | Tied to | Why it matters
ATC rate | Tier 1 fixes | Baseline for whether removing friction moves the needle
Items per order | PLP redesign | Tests whether better discovery increases basket size
AOV | Cross-sell / pricing | Revenue impact of Tier 2 and Tier 3 experiments
PDP-to-ATC conversion | PDP information hierarchy | Isolates whether content restructuring drives action
Payment method distribution | Checkout visibility | COD vs. digital split reveals trust-signal effectiveness
Checkout completion | Full funnel | Segmented by new vs. returning to separate acquisition from retention
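Most of these baselines fall out of simple ratios over session events. A minimal sketch, assuming a hypothetical session-event shape (the field names are my own, not an actual analytics schema):

```typescript
// Sketch of baseline metrics computed from raw session events.
// The Session shape and field names are illustrative assumptions.
interface Session {
  viewedPDP: boolean;
  addedToCart: boolean;
  startedCheckout: boolean;
  completedCheckout: boolean;
  orderValue: number; // 0 if no order was placed
}

function pdpToAtcRate(sessions: Session[]): number {
  const pdpViews = sessions.filter((s) => s.viewedPDP);
  if (pdpViews.length === 0) return 0;
  return pdpViews.filter((s) => s.addedToCart).length / pdpViews.length;
}

function checkoutCompletion(sessions: Session[]): number {
  const started = sessions.filter((s) => s.startedCheckout);
  if (started.length === 0) return 0;
  return started.filter((s) => s.completedCheckout).length / started.length;
}

function averageOrderValue(sessions: Session[]): number {
  const orders = sessions.filter((s) => s.completedCheckout);
  if (orders.length === 0) return 0;
  return orders.reduce((sum, s) => sum + s.orderValue, 0) / orders.length;
}
```

Each ratio conditions on the right denominator (PDP viewers, checkout starters, completed orders), which is what lets the framework isolate one intervention tier at a time.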

What this project taught me

On Diagnosis

The biggest skill I built was distinguishing between problem types: knowing when something is a bug to fix, a direction to test, or a genuine experiment, and matching the intervention to that type instead of treating everything as a hypothesis.

On Growth Thinking

I built a 5-stage funnel analysis with assumption-based drop-off estimates grounded in industry benchmarks. This is the area I am still developing depth in. The funnel work is directional, not precise, and I have been honest about confidence levels throughout.

On Building

Going from "here is what is wrong" to "here is what better looks like" forced trade-offs that pure analysis does not surface. Scoping, information hierarchy, and mobile-first constraints became concrete once I was writing code instead of recommendations.

On Honesty

I chose not to use fabricated checkout data anywhere in the analysis. Credibility matters more than completeness. Every estimate is labeled with its confidence level and sourced where possible.

Skills: UX Audit, Competitive Benchmarking, Funnel Analysis, Hypothesis Framing, Prioritization. Stack: Next.js, TypeScript, Tailwind CSS, Framer Motion. Domain: D2C, E-commerce.