After understanding the business objectives and requirements, I conducted field research using the contextual inquiry method. I interviewed customer support agents and managers, observing how they performed their routine tasks in their own environment. I examined their user experience, listened in on customer phone calls, and timed how long it took an agent to find the notes they needed from a customer’s history. On average, it took 60 seconds to find the serial number of a product – a vital piece of information an agent needs when contacting a manufacturer on behalf of the customer.
I invited a colleague to help me interview users and take notes. During the interviews, the agents gave us valuable information and some great ideas on how to improve the system.
Watch what users do, not what they say
Many agents suggested that an autocomplete feature that predicted what they were typing could help them work faster.
The idea of an autocomplete feature was interesting; however, we learned that in practice it wouldn’t make the process any quicker because of the jargon the agents use. During our research, we found that agents often use technical acronyms and tend to shorten or combine words. They would even make deliberate spelling mistakes to write notes faster. During the testing stage, we realised that adding an autocomplete feature would bring the agents more frustration rather than solve the issue of slow note-taking.
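The mismatch is easy to illustrate. Below is a minimal sketch, assuming a simple prefix-matching autocomplete and a hypothetical vocabulary – agent shorthand like "rplcmnt" never matches a dictionary entry, so the suggestions add noise instead of speed:

```python
# Hypothetical dictionary an autocomplete might draw suggestions from.
VOCABULARY = ["customer", "delivery", "manufacturer", "replacement", "warranty"]

def suggest(prefix: str, vocab=VOCABULARY):
    """Return vocabulary words that start with the typed prefix."""
    return [w for w in vocab if w.startswith(prefix.lower())]

# Standard typing benefits from completion:
suggest("warr")     # -> ["warranty"]

# But the agents' deliberately shortened jargon never matches,
# so the popup only interrupts their typing flow:
suggest("rplcmnt")  # -> []
```

The sketch is illustrative only; the real system's vocabulary and matching logic were not part of our research notes.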
All messages are equal, but some messages are more equal than others
During my research, I realised that a note-highlighting feature named 'Mark as important' was being misused by managers who wanted to prioritise their own customer cases. More notes than necessary were marked as important, making it difficult to visually identify genuinely important notes at a glance. To help agents and managers with this issue, we decided to change the default colour coding – we highlighted agents’ notes in green and managers’ notes in yellow.
I noticed that the PC monitors the agents use have a 16:9 display aspect ratio. However, the UI had been designed for 4:3 screens, presenting the agents with a few issues. The user could see two white bars on the sides of the screen, also called 'pillarboxing'. A horizontal scrollbar would also appear when the user was looking at the communication history: part of each note was hidden, and users had to scroll horizontally to see the full message. This issue caused a delay in response times. By simply adapting the UI to the native aspect ratio of the agents’ screens, timings were significantly improved.
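The arithmetic behind the pillarboxing is straightforward: a 4:3 layout rendered at full height on a 16:9 monitor leaves unused bars on both sides. The resolution below is illustrative, not taken from the actual deployment:

```python
# A common 16:9 monitor resolution (illustrative values).
screen_w, screen_h = 1920, 1080

# A 4:3 layout scaled to the monitor's full height.
ui_w = screen_h * 4 // 3          # 1080 * 4/3 = 1440 px wide

# Width of each white side bar ("pillarboxing").
bar = (screen_w - ui_w) // 2      # (1920 - 1440) / 2 = 240 px per side

print(ui_w, bar)                  # 1440 240
```

Roughly a quarter of the horizontal screen space was going unused while, at the same time, the history pane forced horizontal scrolling.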
Users could see two types of notes in their history: automated system-generated notes and notes written by humans. There were several types of system-generated notes – logistics notifications, fraud detection and order-related notifications.
All notes were displayed in chronological order from oldest to newest, but because the notes were stacked on top of each other, the user had to scroll up and down to work out which notes had been written by another agent.
To reduce cognitive load, we decided to split system-generated notes and human-generated notes into two columns. Because the agents and managers read from left to right and the main focal point in the UI was the human-generated notes, I positioned those notes on the left and right-aligned the system-generated notes. Splitting system and human inputs allowed users to find information more quickly.
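Under the hood, the split is just a partition of the chronological note stream by author type, with each column keeping its own chronological order. A minimal sketch, using a hypothetical note structure rather than the real system's data model:

```python
# Hypothetical note records; the real schema was not part of this write-up.
notes = [
    {"source": "human",  "text": "Customer requested a replacement."},
    {"source": "system", "text": "Order dispatched to courier."},
    {"source": "human",  "text": "Called manufacturer on customer's behalf."},
]

# Partition the single chronological stream into the two columns;
# list comprehensions preserve the original oldest-to-newest order.
human_column  = [n["text"] for n in notes if n["source"] == "human"]
system_column = [n["text"] for n in notes if n["source"] == "system"]
```

The design choice here is that authorship is encoded by position (left versus right) rather than asking the reader to parse each note's header, which is what reduced the scanning effort.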
Human factors and ergonomics
We had an idea to add a search box to the system, allowing users to find notes quickly. We also thought that adding filters might help narrow down the search. During observation and usability testing, we realised that it was very difficult for an agent to perform this extra task mid-call: agents are required to respond instantly, and they also take notes during the conversation using notepads and tablets. It is tricky for an agent to search for something, make notes and speak simultaneously. The prototypes that had a search box and filters received negative feedback, and the error rate was much higher compared to the versions without these features.