Blog
AI driving the UI/UX revolution – Project Sophia
- January 17, 2024
- Posted by: William Dorrington
- Category: Beginner, Data Science Frontiers, Experience Design, Level, Machine Learning, Microsoft Technology, User Experience
It has been a topic of discussion that I have been vocal about for some time now, and the advent of Large Language Models (LLMs) and the ability to contextually interrogate data has just added additional weight to it. As artificial intelligence improves, especially LLMs, how we interact with our systems, apps, and processes needs to change – and this will be another significant factor that the AI revolution drives.
Large Language Model Advancements
As you might know, Large Language Models enable users to input “prompts” and interact with models that can contextually understand the input. The prompt goes through a process of converting the words into tokens and then into high-dimensional vectors, which are processed through mathematical models, feed-forward networks and various attention layers. This allows a contextual output to be generated from the input. These models are pre-trained on extensive datasets and can be further enhanced with a Retrieval-Augmented Generation (RAG) approach, which injects additional data for knowledge-intensive (KI) tasks. This augmentation allows us to generate contextual outputs grounded in our specific business processes.
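To make the RAG idea above concrete, here is a minimal sketch of the retrieve-then-augment loop. Everything in it is illustrative: a toy bag-of-words "embedding" stands in for the dense vectors a real model would use, and the sample documents are invented, but the shape (rank business data against the prompt, then inject the best matches ahead of it) is the RAG pattern described above.

```python
import math
import re
from collections import Counter

def embed(text):
    """Toy 'embedding': bag-of-words counts. Real systems use learned,
    high-dimensional dense vectors, as described in the article."""
    return Counter(re.findall(r"[a-z]+", text.lower()))

def cosine(a, b):
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(prompt, documents, k=2):
    """Rank documents by similarity to the prompt -- the 'R' in RAG."""
    q = embed(prompt)
    return sorted(documents, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

def augment(prompt, documents):
    """Inject retrieved context ahead of the user's prompt -- the 'A' in RAG.
    The combined text is what the model would actually generate from."""
    context = retrieve(prompt, documents)
    return "Context:\n" + "\n".join(context) + "\n\nUser prompt:\n" + prompt

# Hypothetical business data underpinning a marketing solution.
docs = [
    "Segment: Executive contacts in the Commercial industry, South East England.",
    "Campaign template: event invitation email with hands-on labs agenda.",
    "HR policy: annual leave carry-over rules.",
]
print(augment("Create an event segment for Commercial Industry Executives", docs))
```

The point of the sketch is the division of labour: the model stays general-purpose, while retrieval supplies the business context at request time instead of that context being hard-wired into the application.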
Traditional approaches to Digital Solutions
Many of our systems, applications and processes rely on the user and the interface to know what operations, processes and functions to perform. For example, if I am a Marketing Events Manager creating a campaign for an upcoming event, I would have to open ‘Dynamics 365 for Customer Insights & Journeys’, go to the navigation panel, select “Customer Segments”, then build a query for the segment I want, e.g. Executive-level contacts in the Commercial industry based in the South East of England (yes, you can leverage Copilot for automated query generation now). That generates a list of contacts meeting the criteria. I would then go across to “Campaigns” in the navigation menu, create an email campaign with the built-in designer, write the text content and get some appropriate images designed (once again, yes, you can leverage Copilot for these items). These manual interaction steps are necessary because we provide the context of our needs by initiating processes via buttons, views and forms that tell the system what we want.
Context is key!
From the above example, the keyword is context. Understanding the context of users’ needs when interacting with a digital solution has been a big part of any digital transformation project, with experienced design specialists not only gathering requirements but also applying human-focused design to ensure the right engagement around User Interface (UI) and Experience Design (XD) and, thus, adoption of these solutions. On top of the experience work, we have large teams of developers building or extending the solutions’ interfaces so that users can carry out the various processes they need to (e.g., creating segments, building emails, etc.). The system has had to have the context natively built in to allow appropriate interaction when viewing, modifying and creating data.
So what?
Now, let’s combine the two key elements of this article: [1] the contextual understanding of Large Language Models and [2] traditional, contextually infused and relatively static digital solutions. Most companies have large pools of data underpinning their CRM, HCM, ERP and similar solutions. As stated in both “Traditional approaches to Digital Solutions” and “Context is key!”, the context has had to be natively placed within the app for the system to process the user’s requirement, with the context driven in this static manner. However, as discussed in “Large Language Model Advancements”, data science has advanced to the point where we can mathematically and contextually understand data input, churn through large amounts of data, and contextually deliver output based on those two factors. This means we are not only reaching a pivot point with productivity-led generative AI infused into various processes, but also one where we can start reinventing how we approach and perceive UI and XD.
By leveraging large language models grounded on the underlying focal data, we should no longer need statically designed UIs. Users should be able to carry out their role by engaging and communicating with these advanced models, and thus interacting with their data. The future of digital solutions with an interface should be users engaging with a fluid canvas. Let’s continue with the example of a Marketing Events Coordinator. They should be able to go to a blank canvas and type what they wish to achieve into a free-text box, such as “AI event segment needed for Commercial Industry Executives in the South East. Generative AI with hands-on labs and a glitzy cocktail theme”. That request is then processed through an LLM sitting on top of all the contextual marketing and business data, allowing the canvas to surface a list of only Commercial Industry Executives in the South East and to generate an email campaign for an “Executive Generative AI roundtable”, before letting the user submit while the system writes all this information back to the database to recall later. A noddy “Canva” example is shown below (please excuse my poorly designed mock-up; I am far from a graphic artist, but I hope it paints some form of a picture in your mind’s eye). We are seeing this approach come into action from a data research perspective with the latest announcement of “Project Sophia” from Microsoft.
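Under the hood, a fluid canvas like this has to translate one free-text request into several structured actions against the business data. The sketch below shows that shape only: every name is hypothetical, and the keyword rules in `interpret_request` stand in for the LLM step, which in a real system would do the contextual parsing.

```python
from dataclasses import dataclass, field

@dataclass
class CanvasAction:
    """One structured operation the canvas surfaces from a free-text request."""
    kind: str                  # e.g. "segment_query" or "email_campaign"
    parameters: dict = field(default_factory=dict)

def interpret_request(prompt: str) -> list[CanvasAction]:
    """Stand-in for the LLM step: map a free-text request to canvas actions.
    Simple keyword rules play the model's role here, purely for illustration."""
    actions = []
    text = prompt.lower()
    if "segment" in text:
        actions.append(CanvasAction("segment_query", {
            "seniority": "Executive" if "executive" in text else None,
            "industry": "Commercial" if "commercial" in text else None,
            "region": "South East" if "south east" in text else None,
        }))
    if "event" in text or "campaign" in text:
        actions.append(CanvasAction("email_campaign", {"brief": prompt}))
    return actions

request = ("AI event segment needed for Commercial Industry Executives in the "
           "South East. Generative AI with hands-on labs and a glitzy cocktail theme")
for action in interpret_request(request):
    print(action.kind, action.parameters)
```

The design point is that the user supplies intent once, in plain language, and the system derives both the segment query and the campaign brief from it, rather than the user walking each static form in turn.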
The future beyond the future
With the above in mind, we must consider the current landscape of Business Applications and some of our approaches. With products such as Dynamics 365 and Power Apps being very static and UI-led, we are likely to see a pivot in the coming years. Now, I do believe static apps will be around for quite some time, so this is not an absolute death warrant in the short term, but, like all technology, I do expect to see deprecation as we improve, evolve and move towards a research/interaction canvas approach.
Next Steps for readers
To be ready for this future (and, indeed, the current state), we must get better at engaging and communicating with these models – check out our free “Prompt Engineering” course.