Vera

AI Solution for Retailers and Shoppers

As part of the redesign process, here is my version of the tagging tool.

Summary

Vera is a retail platform that helps shoppers discover new fashion trends and find inspiration at their preferred retail store. It uses artificial intelligence (AI) to provide style recommendations.

The platform consists of two main applications: an in-store kiosk and a tagging tool.

The tagging tool is the crucial component that reconciles fashion trends with store inventory. To do this, the company developed an in-house AI solution.

Like many projects, problems needed to be identified, solutions studied, and the product designed, developed, tested, and measured. This process was repeated until a satisfactory result was reached.

Is this AI solution bulletproof? If not, what is the missing key?

 Sector

Retail/Fashion Industry


Project Time

This was a six-month project, with activities spread out over that period.


My Role and Contributions

I was the product designer and researcher for the tagging tool, which is designed for backend support.


Task

I was tasked with redesigning the backend support application, which uses machine learning. This case study, however, focuses primarily on research methods.

Delimitation of this case study

Since the platform is not yet widely available, this case study focuses only on the research conducted for the tagging tool and exploratory solutions. I also added fictional content to maintain confidentiality.

The purpose of this case study is to walk you through the process that led to discovering the importance of information architecture (IA) in implementing an AI solution.

What is my design approach?

I tailored my approach around the service design method: starting with research, then ideation, and finally prototyping.

Research

  • Preparatory Research

  • Secondary Research

  • Improvised Performance Measurement

  • Creating Personas

Ideation

  • User Journey Map of Status Quo

  • User Journey Map of Future State

Prototyping

  • Rapid Prototyping

#1 Preparatory Research

According to This Is Service Design Doing (TiSDD), preparatory research is your personal preparation before you start your actual research.

In this activity, I started with quick, informal co-creative sessions with team members, colleagues, and some stakeholders. The purpose was to learn about the platform and its features: What was the inspiration? Who was it created for?

Some of the broader topics included:

  • What does shopping feel like today?

  • How do consumers use social media?

  • What technology is used in the market?

  • Who are the competitors?

Indirectly, I learned about the rationale of the retail stores that expressed interest in investing in this technology. As suspected, one of the driving forces was to increase sales by improving the customer’s in-store experience. The secondary objective was to leverage social media in this effort.

I also learned about the industry, its dynamics, key players, and interactions, all of which I considered in my next steps.

I then closed the preparatory research with an informal discussion and presentation to the team.

Lastly, it is important to note that during this activity, I had the opportunity to see the existing tagging tool and to interview the developers who conceptualized it. At that point, one of the main concerns I noticed was the unstructured information architecture.

#2 Secondary Research

To kick off the secondary research, I built on the findings from the preparatory research, such as the apparel market (industry), shoppers (key players), social media (interactions/dynamics), and in-store experience (interactions/dynamics), by creating an outline and questions:

  • Apparel Market

    • What is the global apparel market size?

    • What are the apparel market size projections for the US region?

    • Which apparel is in demand?

  • Consumer’s Behavior

    • What are their pre-purchase behaviors?

    • What are their post-purchase behaviors?

  • Social Media

    • How does social media influence consumer behavior?

    • What age demographic uses social media in purchasing decisions?

    • What social media platform is popular in making clothing choices?

  • In-store Experience

    • With the rise of mobile, does enhancing the in-store experience offer a good ROI?

    • If stores are not going away completely, what are ways to improve store resiliency?

    • How can the in-store shopping experience be made better?

I identified the sources and evaluated whether they were reliable.

To close out the activity, I created a summary and included a visual presentation.

Note that there was also research on the available technology, including machine learning training data, AI, and computer vision.

 
 
 

The results from the preparatory and secondary research were used as a reference in creating the workflow and interactions that could make or break the user interface (UI). Across both activities, I hypothesized that the lack of structure in the IA was a significant cause of inaccuracy in the machine learning training data. But this claim could not be substantiated without a quantitative study. How do I measure the effectiveness of the IA? I asked the following questions as a guide for the next method:

  • Is it clear?

  • Is it informative?

  • Is it usable?

  • Is it credible?

#3 Improvised Performance Measurement

Ideally, I wanted to conduct baseline testing to evaluate the performance of the tagging tool, specifically how users navigate the application. Unfortunately, there were many constraints, including the research budget and the geographic location of the people responsible for validating the images (also referred to as taggers).

With that, I created an improvised performance measurement. This is not exactly a standard user research method; rather, it is a mathematical approach to evaluating the accuracy of the data set.

What were the steps taken?

  1. I requested a complete data set of validated images from the developers. The data captured by the tagging tool was in JSON format and was converted to a CSV file. I then analyzed the accuracy of the data by comparing the labels recognized by the computer vision algorithm against the labels manually entered by taggers, using advanced spreadsheet techniques and basic descriptive statistics (mean, median, mode). A minimal sketch of this comparison appears after this list.

  2. I examined the web application, specifically the steps for validating and reporting images. I created tasks and scenarios to test the workflow and recorded my screen while doing so.
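Below is a minimal sketch of the comparison described in step 1, written as a script rather than a spreadsheet. The file name and field names (image_id, cv_label, tagger_label) are assumptions for illustration; the actual export followed the tagging tool’s own schema.

```python
import json
import csv
from collections import Counter

# Load the hypothetical JSON export of validated images.
with open("validated_images.json", encoding="utf-8") as f:
    records = json.load(f)

# Flatten the export to CSV so it can also be inspected in a spreadsheet.
with open("validated_images.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.DictWriter(f, fieldnames=["image_id", "cv_label", "tagger_label"])
    writer.writeheader()
    writer.writerows(
        {k: r[k] for k in ("image_id", "cv_label", "tagger_label")} for r in records
    )

# Agreement rate: how often the computer vision label matches the tagger's label.
matches = sum(1 for r in records if r["cv_label"] == r["tagger_label"])
print(f"Agreement: {matches / len(records):.1%}")

# The most frequent mismatched pairs point to non-standard labels and
# missing categories in the information architecture.
mismatches = Counter(
    (r["cv_label"], r["tagger_label"])
    for r in records
    if r["cv_label"] != r["tagger_label"]
)
for (cv_label, tagger_label), count in mismatches.most_common(10):
    print(f"CV said {cv_label!r}, tagger said {tagger_label!r}: {count} images")
```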

What were the high-level findings?

In the first activity, mismatches were observed. These mismatches were caused by non-standard labels, missing categories, and, most importantly, the lack of structure in the information architecture.

With regard to the second activity, the following were observed:

  • Non-standard, unconventional taxonomy and labels

  • Multi-step/multi-page

  • Unpredictable toggle behavior in Training Module and Social Task features

  • Color selection is limited

  • Cannot identify other types of apparel

  • Cannot detect layers of clothing

  • Session time-out is 1 hour

Based on the findings, what were the recommendations?

  • Design a better information architecture; consider using the existing Google taxonomy (see the sketch after this list)

  • Use fashion industry-standard terminology

  • Scraped images should be 640 x 480 pixels minimum

  • Initially, scrape images from the world wide web

  • Breadcrumbs/progress indicators/signposts

  • Zooming interface

  • Fashion guides

  • How-To/Help page

  • Auto-save session

  • Milestone submission

  • Visual cues and icons
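Following up on the first recommendation, here is a minimal sketch of how a standard apparel label set could be derived from the publicly available Google Product Taxonomy (its plain-text version, downloaded locally). The file path, the category prefix, and the choice to use the leaf category as the tag label are assumptions for illustration.

```python
# Build a standard apparel label set from a downloaded copy of the
# Google Product Taxonomy (plain-text version). The file path and the
# exact prefix are assumptions for illustration.
APPAREL_PREFIX = "Apparel & Accessories > Clothing"

def load_apparel_labels(path: str = "taxonomy.en-US.txt") -> dict[str, str]:
    labels: dict[str, str] = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#"):
                continue  # skip blank lines and the version header
            if line.startswith(APPAREL_PREFIX):
                # Use the leaf category as the tag label and keep the full
                # path as the breadcrumb shown to taggers.
                leaf = line.rsplit(" > ", 1)[-1]
                labels[leaf.lower()] = line
    return labels

if __name__ == "__main__":
    labels = load_apparel_labels()
    print(f"{len(labels)} apparel categories loaded")
    print(labels.get("skirts"))  # e.g. a full path ending in "Skirts"
```

Benchmarking against a public taxonomy like this would give taggers conventional labels and give the IA a ready-made hierarchy, rather than relying on terminology invented in-house.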

 

Any afterthought?

IA is important in creating an outstanding user experience. In AI, it is crucial to create a solid IA structure, clear labels, and a strategic use of categories. One of the most difficult challenges I observed was that neither the taggers nor the developers were fashion-forward. As a result, the taggers would guess what the images were.

On one occasion, a tagger classified a skort as shorts. Unfortunately, given the limited capabilities of the computer vision algorithm, it was auto-tagged as a skirt. But which is correct? How do you classify an object that could be either?

What about the terminology for lengths or types of necklines? And would it be a good approach to rely on terminology alone? Even if the taggers turned to external resources such as a Google search, the terminology used in the tagging tool was not standard.

Fortunately, Google’s taxonomy is public, and it would be useful to benchmark against it while creating specific user stories for outliers.
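To make the skort example concrete, here is a minimal sketch of a tag record that allows more than one candidate category instead of forcing a single class. The field names and the review rule are assumptions for illustration, not the tool’s actual schema.

```python
from __future__ import annotations

from dataclasses import dataclass, field

@dataclass
class TagRecord:
    """One image to be validated; field names are illustrative only."""
    image_id: str
    cv_suggestion: str                   # label proposed by the computer vision model
    candidate_labels: list[str] = field(default_factory=list)
    confirmed_label: str | None = None   # set only after a tagger or reviewer decides

    def needs_review(self) -> bool:
        # Ambiguous items (zero or several plausible categories) are escalated
        # to a fashion-aware reviewer instead of being silently auto-tagged.
        return self.confirmed_label is None and len(self.candidate_labels) != 1

# The skort example: the model auto-tagged a skirt, the tagger guessed shorts.
record = TagRecord(
    image_id="img_0042",
    cv_suggestion="Skirts",
    candidate_labels=["Skirts", "Shorts", "Skorts"],
)
print(record.needs_review())  # True -> route to review rather than guess
```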

Having said all that, I realized how critical IA is in AI. Efforts should be exhausted in improving it; otherwise, it defeats the overall experience.
