A premium D2C pet food brand whose website undermines its positioning at every step. I audited the full purchase journey, benchmarked against 10 competitors, and built a working redesign prototype.
Pawllo sells natural pet food, treats, and meal toppers at Rs. 300-1,200. They compete against Drools, Pedigree, HUFT, and Supertails. The product is positioned as premium. The website is not.
A first-time visitor scrolls through six folds of brand storytelling before seeing a single product. The PDP buries ingredients below marketing copy. "Add to Cart" redirects users to the cart page. UPI and wallet payments exist, but they sit behind a Razorpay redirect and are never shown on the checkout screen itself.
The question was whether these problems were fixable gaps or structural issues baked into the platform, and what "better" would actually look like if built.
I audited every screen of the purchase journey on desktop and mobile: Homepage, product listing page (PLP), product detail page (PDP), Cart, and Checkout. Each screen was documented through first-person observation, tracking where friction appeared and what caused it.
I then benchmarked Pawllo against 10 Indian pet e-commerce competitors across a 27-point scoring matrix covering navigation, product discovery, information architecture, trust signals, and checkout flow.
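A matrix like that is easiest to keep honest when it is treated as plain data. The sketch below is a minimal TypeScript illustration of the idea, not the actual rubric: the criterion ids, the binary present/absent scoring, and the example scores are placeholders.

```typescript
// Illustrative model of a competitor scoring matrix. Criterion ids and the
// binary scoring scheme are assumptions; the real rubric has 27 criteria.
type Area = "navigation" | "discovery" | "information" | "trust" | "checkout";

type Criterion = { id: string; area: Area };

type Scorecard = Record<string, 0 | 1>; // criterion id -> absent (0) or present (1)

// Total score out of the number of criteria evaluated.
function totalScore(criteria: Criterion[], scores: Scorecard): number {
  return criteria.reduce((sum, c) => sum + (scores[c.id] ?? 0), 0);
}

// Hypothetical subset covering three of the gaps discussed below.
const criteria: Criterion[] = [
  { id: "on-site-search", area: "discovery" },
  { id: "plp-filters", area: "discovery" },
  { id: "visible-payment-methods", area: "checkout" },
];
const pawlloSubset: Scorecard = {
  "on-site-search": 0,
  "plp-filters": 0,
  "visible-payment-methods": 0,
};
console.log(totalScore(criteria, pawlloSubset)); // 0 of 3 on this subset
```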
Pawllo was missing 7 features that 80%+ of competitors had: search, filters, product reviews, breadcrumbs, trust badges on PDPs, on-page add-to-cart (ATC) confirmation, and visible payment methods. These are table-stakes gaps, not optimization opportunities.
The teardown produced 14 structured findings, each following a full chain: what is wrong, why it matters to the user, why it matters to the business, the problem category (fix / test / rethink), a recommendation, and metrics to track. Five P0, five P1, four P2.
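The chain is easiest to show as a record shape. This is a sketch of the structure each finding follows; the field names are shorthand for this write-up, not a formal schema.

```typescript
// Shape of one teardown finding, mirroring the chain described above.
// Field names are shorthand, not a formal schema.
interface Finding {
  id: string;
  screen: "Homepage" | "PLP" | "PDP" | "Cart" | "Checkout";
  whatIsWrong: string;
  whyItMattersToUser: string;
  whyItMattersToBusiness: string;
  category: "fix" | "test" | "rethink";
  priority: "P0" | "P1" | "P2";
  recommendation: string;
  metricsToTrack: string[];
}
```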
(● = offered, ○ = not offered)

| Feature | Pawllo | Supertails | PetChef | Heads Up For Tails | Pet Pantry |
|---|---|---|---|---|---|
| Clean Visual Design | ● | ○ | ● | ○ | ○ |
| Product Filtering | ○ | ● | ○ | ● | ● |
| Subscriptions | ○ | ● | ● | ● | ○ |
| Reviews & Ratings | ○ | ● | ● | ● | ● |
| Personalization | ○ | ● | ● | ● | ○ |
| Content & Education | ○ | ● | ○ | ● | ○ |
| Fresh Food Delivery | ○ | ○ | ● | ○ | ○ |
| Mobile App | ○ | ● | ○ | ● | ○ |
| Checkout Experience | ● | ● | ● | ● | ● |
The core prioritization decision was sorting 19 interventions by problem type, not just severity. A missing search bar is not a hypothesis. A broken CTA is not an experiment. Matching the intervention type to the problem type is what separates a structured teardown from a feature wishlist.
Tier 1 (fix): Broken promo CTA linking to Contact instead of Shop. ATC redirect pulling users out of browsing flow. Missing search and filters. These are bugs and gaps.
Tier 2 (test): PDP scroll desync, imagery gaps, review placement, payment visibility. Each structured as: we believe [change] will [outcome] because [evidence]; we will know when [metric moves]. One example is sketched below.
Tier 3 (rethink): Per-serving pricing, Subscribe & Save, WhatsApp vs. email for reorders. Each included what would disprove the hypothesis.
"8 things are broken, 6 have strong evidence, 5 are worth testing" builds founder trust differently than 19 items framed as "we think this might help."
I built a working redesign prototype of 4 core screens (Homepage, Shop, PDP, Cart/Checkout) in Next.js + TypeScript + Tailwind CSS + Framer Motion. Every design choice traced back to a specific finding.
Homepage: Trust signals moved to fold 1. "Shop Now" replaced the brand narrative hero. Products are visible without scrolling through six folds of storytelling.
Shop (PLP): Search, animal tabs, and category filters added. Users can find products by what they need, not by how Pawllo organized its catalog.
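A sketch of the filtering logic behind that screen, assuming a simple product shape (the animal values are placeholders; the food/treats/toppers split comes from the catalog described earlier):

```typescript
// Minimal sketch of the Shop page filtering: free-text search combined with
// animal tabs and category filters. The Product shape is an assumption.
type Product = {
  name: string;
  animal: "dog" | "cat";                   // placeholder animal tabs
  category: "food" | "treats" | "toppers";
};

function filterProducts(
  products: Product[],
  query: string,
  animal?: Product["animal"],
  category?: Product["category"],
): Product[] {
  const q = query.trim().toLowerCase();
  return products.filter(
    (p) =>
      (!animal || p.animal === animal) &&
      (!category || p.category === category) &&
      (!q || p.name.toLowerCase().includes(q)),
  );
}
```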
PDP: Correct information hierarchy, tabbed ingredient and nutrition sections, and customer reviews. An on-page ATC toast replaces the redirect that broke the browsing flow.
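The toast behavior is small enough to sketch in full. This is a simplified version of the pattern, assuming a client component with Framer Motion for the enter/exit animation; prop names and styling are illustrative, not the prototype's exact code.

```tsx
// Simplified on-page add-to-cart confirmation: a toast instead of a redirect.
// Prop names and Tailwind classes are illustrative.
"use client";
import { useState } from "react";
import { AnimatePresence, motion } from "framer-motion";

export function AddToCartButton({ onAdd }: { onAdd: () => void }) {
  const [showToast, setShowToast] = useState(false);

  const handleClick = () => {
    onAdd(); // add the item without leaving the PDP
    setShowToast(true);
    setTimeout(() => setShowToast(false), 2500);
  };

  return (
    <>
      <button onClick={handleClick} className="rounded bg-amber-600 px-4 py-2 text-white">
        Add to Cart
      </button>
      <AnimatePresence>
        {showToast && (
          <motion.div
            initial={{ opacity: 0, y: 16 }}
            animate={{ opacity: 1, y: 0 }}
            exit={{ opacity: 0, y: 16 }}
            className="fixed bottom-4 right-4 rounded bg-neutral-900 px-4 py-2 text-white"
          >
            Added to cart. Keep browsing.
          </motion.div>
        )}
      </AnimatePresence>
    </>
  );
}
```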
Cart/Checkout: All five payment methods are visible with icons and descriptions. The promo field no longer shows an error before anyone types. The COD vs. digital split becomes trackable.
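The promo field fix is mostly a question of when validation runs. A sketch, assuming a controlled input that only validates after the user has typed and left the field:

```tsx
// Promo field that stays silent until the user has actually entered a code.
// The validate prop and styling are assumptions.
"use client";
import { useState } from "react";

export function PromoCodeField({ validate }: { validate: (code: string) => boolean }) {
  const [code, setCode] = useState("");
  const [touched, setTouched] = useState(false);

  // No error on first render: require both input and a blur before judging.
  const showError = touched && code.length > 0 && !validate(code);

  return (
    <div>
      <input
        value={code}
        onChange={(e) => setCode(e.target.value)}
        onBlur={() => setTouched(true)}
        placeholder="Promo code"
        className="rounded border px-3 py-2"
      />
      {showError && <p className="text-sm text-red-600">This code isn't valid.</p>}
    </div>
  );
}
```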
12 of 14 findings were addressed. The two excluded (mobile load performance, redundant About pages) were out of scope for a static prototype.
This is a prototype, not a live product, so there are no conversion numbers. I built a measurement framework tied to each intervention tier. These are the baselines I would establish on day one of a real engagement.
| Metric | Tied to | Why it matters |
|---|---|---|
| ATC rate | Tier 1 fixes | Baseline for whether removing friction moves the needle |
| Items per order | PLP redesign | Tests whether better discovery increases basket size |
| AOV | Cross-sell / pricing | Revenue impact of Tier 2 + 3 experiments |
| PDP-to-ATC conversion | PDP information hierarchy | Isolates whether content restructuring drives action |
| Payment method distribution | Checkout visibility | COD vs. digital split reveals trust signal effectiveness |
| Checkout completion | Full funnel | Segmented by new vs. returning to separate acquisition from retention |
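None of the metrics above needs anything exotic to compute. A sketch of how two of them could be derived from raw analytics events, assuming an event schema of my own invention:

```typescript
// Deriving PDP-to-ATC conversion and payment method distribution from events.
// Event names and fields are assumptions, not an existing analytics schema.
type OrderPlaced = {
  type: "order_placed";
  paymentMethod: "upi" | "card" | "wallet" | "netbanking" | "cod";
};

type AnalyticsEvent =
  | { type: "pdp_view" }
  | { type: "add_to_cart" }
  | { type: "checkout_start" }
  | OrderPlaced;

function pdpToAtcRate(events: AnalyticsEvent[]): number {
  const views = events.filter((e) => e.type === "pdp_view").length;
  const adds = events.filter((e) => e.type === "add_to_cart").length;
  return views === 0 ? 0 : adds / views;
}

function paymentSplit(events: AnalyticsEvent[]): Record<OrderPlaced["paymentMethod"], number> {
  const split = { upi: 0, card: 0, wallet: 0, netbanking: 0, cod: 0 };
  for (const e of events) {
    if (e.type === "order_placed") split[e.paymentMethod] += 1;
  }
  return split;
}
```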
The biggest skill I built was distinguishing between problem types: recognizing when something is a bug to fix outright, a hypothesis worth an experiment, or a bet that needs a disproof condition, and matching the intervention accordingly.
I built a 5-stage funnel analysis with assumption-based drop-off estimates grounded in industry benchmarks. This is the area I am still developing depth in. The funnel work is directional, not precise, and I have been honest about confidence levels throughout.
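The funnel model itself is deliberately simple: assumed stage-to-stage rates applied to an assumed visitor count. The sketch below shows the mechanics only; the rates are placeholders, not Pawllo data and not the benchmark figures used in the analysis.

```typescript
// Directional funnel estimate: apply assumed step conversion rates to an
// assumed visitor count. All numbers here are placeholders for illustration.
const stages = ["Homepage", "PLP", "PDP", "Cart", "Checkout"] as const;

// Assumed share of users progressing from each stage to the next (placeholder).
const assumedStepRates = [0.5, 0.4, 0.3, 0.6];

function estimateFunnel(visitors: number): Array<{ stage: string; users: number }> {
  const out: Array<{ stage: string; users: number }> = [{ stage: stages[0], users: visitors }];
  let current = visitors;
  assumedStepRates.forEach((rate, i) => {
    current = Math.round(current * rate);
    out.push({ stage: stages[i + 1], users: current });
  });
  return out;
}

// e.g. estimateFunnel(10_000) -> Homepage 10000, PLP 5000, PDP 2000, Cart 600, Checkout 360
```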
Going from "here is what is wrong" to "here is what better looks like" forced trade-offs that pure analysis does not surface. Scoping, information hierarchy, and mobile-first constraints became concrete once I was writing code instead of recommendations.
I chose not to use fabricated checkout data anywhere in the analysis. Credibility matters more than completeness. Every estimate is labeled with its confidence level and sourced where possible.