(l-r) Annarose our mentor, Andrew, Justice, Xiao and me.

Food delivery recommendation

Designing the food delivery restaurant recommendation API for Yelp, leveraging machine learning and exploring a conversational user interface (ConUI) as a solution.

I was fortunate to learn from and work with Andrew Jones, Justice Juraschek and Xiao Liang on this project. I was responsible for user research, ideating, sketching, prototyping, and taking notes during usability testing.


The prompt required us to come up with a conversational user interface for recommending restaurants that would leverage Yelp's massive dataset and could also be offered as an API to local delivery businesses. We were allowed to assume that the machine learning algorithm behind the recommendations works accurately, since Yelp already has a lot of data.

We began our research by interviewing people in Bloomington and learning more about their eating habits. We wanted to figure out the key deciding factors when people choose a restaurant for delivery.

  • We found that people who have lived in an area for more than one year tend not to use Yelp because they think they know enough about their area. However, these same people admitted to spending too much time figuring out what to order when using delivery services like BtownMenus.
  • Participants mentioned that a friend’s recommendation is usually the number one reason they try a new restaurant.
  • People enjoyed looking through Yelp for the photos and reviews.
  • Speed of delivery and cost are the key factors for delivery, while reviews of the food and ambiance are key for eating in.

Through our user research, we found that the core problem with delivery services is the time it takes people to choose a restaurant. People order delivery because it is convenient, yet choosing a restaurant on these services is far from convenient. Users were spending an average of 8-10 minutes on BtownMenus searching for what food they wanted. We found that the core value of ConUI is improved speed and efficiency: the interface resembles talking with a human, which should speed up interactions. Any solution implementing ConUI should revolve around making a process more efficient.


We storyboarded the process and focused on which part the design should target
Justice explains the motivation behind ordering food

We gained insights not only into how that user base found local restaurants they had never been to, but also into their decision-making process using Yelp's service. The main insights were:

  • Speed is key - People think they spend too much time looking for food
  • Friends (or people similar to them) are a powerful influence in the decision making process
  • Average price of the restaurant is a factor as most of our participants were college students

Through our secondary research, we discovered that there were actually three different branches and implementations of ConUI.

  • Speech-based ConUI. For example Siri, Amazon Echo, and Google’s Voice Assistant.
  • Chatbots. Text-based response bots. Facebook’s M and Slack’s Slackbot are examples of this.
  • Pseudo ConUI. Instead of allowing the user to type or say anything they want, Pseudo ConUI provides the user with a set of choices and gives controlled responses.
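To make the pseudo ConUI idea concrete, here is a minimal sketch of a single controlled-response turn. The prompt text and choice sets are made-up assumptions for illustration; the point is that the bot never has to parse free text, only replies it already knows about.

```python
# Pseudo ConUI sketch: each conversational state maps to a fixed set of
# quick replies, so the bot only handles responses it offered itself.
QUICK_REPLIES = {
    "start": ["Something cheap", "Mexican", "Trending", "Friends' picks"],
    "Mexican": ["Under $10", "Best rated", "Fastest delivery"],
}

def bot_turn(state):
    """Return the bot's prompt and the controlled responses the user may tap."""
    return {
        "prompt": "What are you in the mood for?",
        "quick_replies": QUICK_REPLIES.get(state, []),
    }

turn = bot_turn("start")
print(turn["quick_replies"])  # the user taps one of these instead of typing
```

Because every response is one of a known set, each turn is a single tap rather than typed input, which is what makes this branch the fastest of the three.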

We chose the pseudo ConUI because it was the fastest of the three. Moving forward, our goal became:

Swift, personalized re-discovery of local restaurants


We sketched out a lot of ideas, taking inspiration from existing chat applications such as WeChat, WhatsApp, GroupMe and Slack and their implementation of emojis.

Andrew sketches out ideas
I sketch on paper to see how the UI would look
Justice sketches out ideas as well

We decided on an idea where each recommendation comes with a wildcard rationale explaining why that restaurant was chosen. A friend's review would push a restaurant much higher because it is more likely to influence a user. Other rationales would include a deal at the restaurant or a review from a Yelper who has eaten at similar restaurants as the user.

The user can include multiple filters in one interaction. Once they have chosen a restaurant, they can tap it to see an expanded description, including more reviews, deals, or friends who have been to that restaurant.
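The filter-then-rank behavior above can be sketched as a tiny scoring function. This is an illustrative sketch, not Yelp's actual algorithm: the `Restaurant` fields, the weights, and the rationale strings are all assumptions chosen only to show a friend's review outranking other signals.

```python
# Sketch of rationale-based ranking: filter by the user's choices, then
# order by a score where a friend's review carries the largest boost.
from dataclasses import dataclass

@dataclass
class Restaurant:
    name: str
    cuisine: str
    price: int            # 1 = cheap ... 4 = expensive (assumed scale)
    rating: float         # average Yelp star rating
    friend_reviewed: bool = False
    has_deal: bool = False

def score(r):
    """Score a restaurant and pick the 'wildcard' rationale shown to the user."""
    s = r.rating
    rationale = "Highly rated on Yelp"
    if r.has_deal:
        s += 1.0
        rationale = "There is a deal at this restaurant"
    if r.friend_reviewed:   # a friend's review outweighs every other signal
        s += 2.0
        rationale = "A friend reviewed this place"
    return s, rationale

def recommend(restaurants, cuisine=None, max_price=None):
    """Apply the user's filters in one pass, then order what's left by score."""
    pool = [r for r in restaurants
            if (cuisine is None or r.cuisine == cuisine)
            and (max_price is None or r.price <= max_price)]
    return sorted(pool, key=lambda r: score(r)[0], reverse=True)

menu = [
    Restaurant("La Casa", "Mexican", 1, 4.0, friend_reviewed=True),
    Restaurant("Taco Stop", "Mexican", 1, 4.5),
    Restaurant("Sushi Bar", "Japanese", 3, 4.8),
]
# "Something cheap" + "Mexican" in one interaction:
results = recommend(menu, cuisine="Mexican", max_price=2)
print([(r.name, score(r)[1]) for r in results])
```

Here the friend-reviewed place ranks above a higher-rated one, which is the behavior the wildcard rationale is meant to explain to the user.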

Initial screen, list of recommendations and details of restaurant.
Paper prototype we used for testing


We picked two typical users from our target demographic and walked them through four scenarios using our paper prototype. I took notes while Xiao conducted the test.



Our design is a conversational UI that provides restaurant recommendations based on user inputs. The goal was to decrease the time it takes someone to find a place to order from by showing personalized results. Results would be suggested based on both the user's direct input and what the artificial intelligence had learned about that specific person's habits.

A user reaches this screen from the BtownMenus app. It immediately shows one or two recommendations based on places their friends have gone, places with deals, or places that are currently trending.
The user has chosen “something cheap” and “Mexican”. The design returns three to five results, ordered by places their friends have reviewed and places that Yelpers with similar taste have rated well. These rationales are provided in a wildcard underneath the reviews.
The user can input their own words if the quick choices do not have what they are looking for. While they are typing, autofill choices appear to lead the user to a choice more quickly.
When the user taps a restaurant card, it expands with more information, including pictures, deals and other details that didn't fit in the wildcard.
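The autofill behavior described above amounts to simple prefix matching over the quick choices. A rough sketch, where the suggestion vocabulary is a made-up assumption:

```python
# Autofill sketch: as the user types, suggest quick choices that start
# with the typed prefix, capped so the list stays scannable.
SUGGESTIONS = ["something cheap", "sushi", "salad", "sandwiches", "mexican", "trending"]

def autofill(prefix, limit=3):
    """Return up to `limit` quick choices starting with the typed prefix."""
    p = prefix.strip().lower()
    return [s for s in SUGGESTIONS if s.startswith(p)][:limit] if p else []

print(autofill("sa"))  # ['salad', 'sandwiches']
```

Even a couple of typed characters narrow the list to a tappable choice, which keeps free-text input close to the speed of the quick replies.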

Final Design

The proposed wireframes for how a user filters choices are shown below, made in Sketch.



This project was the second project of our Interaction Design Practice course. Through it, I learned how to work in a team and design as a group; the outcome was something none of us could have come up with individually. However, when presenting the design idea to the class and in the document, we made a lot of assumptions and were not clear. This lack of articulation meant we could not adequately convey all the decisions we had made. My biggest takeaway from this project was that presenting your design is a crucial part of communicating with stakeholders. The idea that is understood by everyone in the room is the idea that will be executed.