Redesigning Merchant Support at Zomato

Tens of thousands of monthly support tickets were hiding a search problem: broken cuisine data was degrading discovery for every consumer on the platform.

Role: Product Generalist
Impact: 100% → <20% manual review
Cross-functional: ML/Data Science, Engineering, Design, Ops
Scope: Two workstreams: Cuisine ML + Image Moderation
Cuisine classification and merchant support pipeline: menu input → ML engine (OCR + scoring) → confidence-based routing → fewer tickets, better discovery

An ops problem that was actually a product problem

Zomato's restaurant partners raised tens of thousands of support tickets monthly. The top 10 issue types drove 60% of inflow, and 70% of those were information requests with knowable answers that still required a human agent.

The instinct was to treat this as an ops problem: too many tickets, too much agent cost. But one category told a different story.

Core reframe

Cuisine tag change requests were a meaningful share of monthly volume, and tags fed directly into search and discovery. A restaurant tagged "North Indian" that primarily serves Chinese food doesn't just generate a ticket. It degrades search relevance for every consumer filtering by cuisine. This was a product quality problem surfacing through the support channel.

Most tags were wrong, and the system made them that way

I audited 100 restaurant menus at random. The majority had incorrect cuisine classifications. Three root causes:

01. Partners gamed the system

Cuisine selection was manual and unvalidated. Restaurants picked trending cuisines for visibility regardless of what they served.

02. Taxonomy was too rigid

Most menus span multiple cuisines, but the system forced one primary and two secondary. A restaurant with 2 North Indian dishes, 7 Chinese, and 5 Indo-Italian would still surface in a North Indian search.

03. Inconsistent human review

Agents didn't have clear, codified standards. Some accepted whatever was requested, adding noise to already unreliable data.

A structural gap compounded the issue: of 150+ available cuisines, only 77 mapped to discovery logic. Partners could select cuisines with zero impact on visibility, generating confusion and more tickets.

Replacing self-classification with menu intelligence

The core decision: stop trusting partners to classify themselves and start reading the menu instead. I worked with data science to build a cuisine recommendation engine using OCR and menu structure logic to suggest correct cuisines based on what a restaurant actually served.

System flow: Menu OCR → cuisine scoring → trust-based routing
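The scoring step can be sketched as a dish-share tally: once OCR and menu-structure parsing have labeled each dish with a cuisine, rank cuisines by the fraction of the menu they cover. This is a minimal illustration, not Zomato's actual model; the function name and menu labels are hypothetical, using the example restaurant from the audit (2 North Indian dishes, 7 Chinese, 5 Indo-Italian).

```python
from collections import Counter

def cuisine_scores(dishes):
    """Score cuisines by their share of the menu.

    `dishes` maps dish name -> cuisine label (as tagged upstream by
    OCR + menu-structure parsing). Returns cuisines ordered by the
    fraction of the menu they cover.
    """
    counts = Counter(dishes.values())
    total = sum(counts.values())
    return {c: n / total for c, n in counts.most_common()}

# The restaurant from the audit: 2 North Indian, 7 Chinese, 5 Indo-Italian.
menu = (
    {f"ni_{i}": "North Indian" for i in range(2)}
    | {f"cn_{i}": "Chinese" for i in range(7)}
    | {f"it_{i}": "Indo-Italian" for i in range(5)}
)
scores = cuisine_scores(menu)
# Chinese covers half the menu; North Indian covers only 2 of 14 dishes,
# so this restaurant should not lead a North Indian search.
```

Ranking by menu share is what makes the system read the restaurant rather than trust its self-declared tags.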

Trust model: Confidence-based routing

Auto-approving bad classifications would trade one data quality problem for another. High-confidence matches were auto-approved; lower-confidence ones routed to human moderation. The 85% threshold was tuned through parameter iteration against a benchmark set. No elegant formula. We tested permutations until the numbers held.
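The routing rule itself is simple once the threshold is tuned. A minimal sketch, assuming a hypothetical `Suggestion` record and the 85% threshold described above; the real pipeline would also carry model metadata and audit logging.

```python
from dataclasses import dataclass

AUTO_APPROVE_THRESHOLD = 0.85  # tuned against a benchmark set, not derived from a formula

@dataclass
class Suggestion:
    restaurant_id: str
    cuisine: str
    confidence: float  # model score for the suggested cuisine

def route(s: Suggestion) -> str:
    """Trust-based routing: auto-approve only high-confidence matches;
    everything below the bar goes to a human moderator instead of
    straight into live discovery data."""
    if s.confidence >= AUTO_APPROVE_THRESHOLD:
        return "auto_approve"
    return "manual_review"
```

The point of the threshold is asymmetry of cost: a wrongly auto-approved tag pollutes search for every consumer, while a needlessly reviewed one only costs agent time.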

Tradeoff: Three cuisine slots cut to two

Partners lost a slot, reducing flexibility for genuinely multi-cuisine restaurants. But the audit showed the third slot mostly enabled gaming with irrelevant trending tags. Two slots forced an honest choice and gave cleaner signal for discovery.

Cleanup: Removed unmapped legacy tags

Tags with no mapping to search logic were eliminated. They generated tickets on their own with zero impact on visibility.

Cross-functional scope: data science on the scoring model, engineering and design on backend integration, operations on new moderation SOPs and agent retraining.

The prioritization fight I didn't win outright

Core product wanted to deprioritize cuisine tagging. The argument wasn't that it didn't matter, but that other things mattered more right now.

Negotiation: The compounding cost argument

Every month this didn't ship, the same ticket volume recurred and search quality continued to degrade silently. The outcome was a compromise: we kept the project on the roadmap but accepted a longer timeline. What I gave up: speed. What I preserved: the project's existence.

What changed

75% reduction in manual moderation effort
80% faster turnaround on change requests
Lower ticket reopen rate
Cleaner data feeding search & discovery

Fewer partners gaming classifications, lower ticket reopen rate, clearer feedback. The ops cost dropped, but more importantly, the data feeding search and discovery got significantly cleaner.

The other 35,000 tickets a month

Deep fix: Cuisine Classification

Built a new ML system from scratch. Required data science partnership, trust modeling, taxonomy redesign, and ops retraining. Months of work.

Breadth fix: Image Moderation

Found an existing internal ML model, got it integrated into the moderation pipeline, and designed a batch UI. Weeks of work. Different PM muscle.

Partners uploaded 35,000+ thumbnail images monthly with a 58% rejection rate and 15-hour turnaround. Agents reviewed one image at a time with free-text rejection reasons, making decisions inconsistent.

This one didn't need new ML. An auto-approval model already existed internally. I got it integrated into the moderation pipeline with a 90% confidence threshold and designed a batch UI with standardized rejection categories.
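The image flow is the same trust pattern applied in batch. A sketch under stated assumptions: the 90% threshold is from the case study, but the function name and rejection-category labels are illustrative, not Zomato's actual taxonomy.

```python
IMAGE_AUTO_APPROVE = 0.90  # confidence threshold from the integration

# Standardized rejection categories replacing free-text reasons
# (illustrative labels only).
REJECTION_REASONS = {"blurry", "watermark", "not_food", "wrong_orientation"}

def triage_batch(images):
    """Split a batch of (image_id, model_score) pairs into auto-approved
    images and a review queue, so agents only see the uncertain tail
    instead of every upload one at a time."""
    approved, review_queue = [], []
    for image_id, score in images:
        if score >= IMAGE_AUTO_APPROVE:
            approved.append(image_id)
        else:
            review_queue.append(image_id)
    return approved, review_queue
```

With a 58% rejection rate, even routing only the confident approvals automatically removes a large, steady slice of the agent queue.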

35K+ monthly images processed
60% drop in agent workload
90% auto-approval confidence threshold

The difference

The cuisine project was about building the right system. The image project was about finding the right system already in the building.

The argument I should have led with

I framed it wrong to stakeholders

I framed cuisine tagging as a support cost problem because that's where the visible pain was. The stronger argument was search relevance: bad tags degrade the experience for every consumer, not just the partners raising tickets. The lesson was about choosing which problem to put in front of stakeholders when the same project solves more than one.

ML Product Management · Data Quality · OCR / NLP · Trust & Safety Systems · Stakeholder Negotiation · Ops Redesign · Cross-functional Leadership · Taxonomy Design · Batch Moderation UX