When AI Gets It Wrong — A Heuristic Evaluation of Phia
Phia is an AI-powered shopping assistant, co-founded by Phoebe Gates and Sophia Kianni, that helps users find the best prices across 40,000+ retail and secondhand sites. Named one of TIME's Best Inventions of 2025, it has drawn over one million users since its April 2025 launch, and it sits at the intersection of fashion and technology, two spaces I care deeply about.
Its core feature, the "Should I Buy This?" button, promises to instantly tell you whether you're getting a good deal. I wanted to evaluate how well that promise held up in practice.
Using Nielsen's 10 Usability Heuristics, I walked through the core shopping flow across Phia's Chrome extension and web platform, supplementing my findings with real App Store and Chrome Web Store reviews. I identified 10 usability issues across both surfaces. Two stood out as the most damaging to user trust — and those became the focus of my redesign.
Two Fixes, One Principle: My Redesign Process
Fix 1: Pricing language that actually speaks fashion
Labels like "high/typical/fair" are borrowed from finance and travel tools; they don't match how fashion shoppers think or talk. I explored three component directions (a meter gauge, a stitched fabric label, and a pointed clothing tag) and landed on the tag: it was the most legible at small sizes, scaled cleanly across the extension and collapsed browser view, and signals fashion context through shape alone. I paired it with action-oriented language: "Don't Miss This," "Price is Fair," and "Keep Looking." I also redesigned the toolbar icon to change color based on the price verdict, giving users a passive signal before they even open the extension.
Fix 2: An AI that knows what platform it's on
Phia's chatbot confidently directs desktop users to an image search feature that exists only on mobile. I designed a camera-icon entry point within the desktop search bar, following conventions from Pinterest and Google Lens, and updated the chatbot's responses to be platform-aware.
Both fixes reflect the same principle: when AI is central to a product's promise, every detail either builds or breaks trust.