You’re Not Using ChatGPT Wrong

August 11, 2023

If you only read one thing about AI/LLMs this week, make it this: You’re Not Using ChatGPT Wrong from The AI Exchange

TL;DR:

• LLMs are incredibly flexible tools, and it’s easy to feel like we’re using them “wrong” because they come with no instructions and few constraints.
• Most people interact with ChatGPT in one specific way, even if another would serve them better — the flexibility can be intimidating!
• The article lays out four good chatbot use cases — The Coach, Super Search, AI Automation, and Autonomous Agent — which use the same technology and UI to create very different user experiences.

My hot take

Great UXes set great expectations

• A lighter article this week — this is a great example of practical advice that speaks to a real problem: Users need to know what to expect from a user interface, and LLMs are so flexible and malleable that people tend to superimpose their own expectations, rightly or wrongly.
• This isn’t inherently bad, just limiting — people are getting incredible results out of ChatGPT as is, but imagine if the UX did a better job of suggesting the right interaction model for a given problem. This is happening to a degree via “prompt engineering,” but wouldn’t it be better to simply know you were talking to “The Coach” and shift your expectations and mental model to match?
• We at Stride are thinking a lot about this — on a current client engagement, we are building LLM-powered UXes that are narrowly targeted at high-value use cases and that guide users to the right mental model as quickly as possible, so they can solve their problems without wading through personality tweaks and multiple attempts on the way to the right solution. The chat UX for LLMs is a major innovation — now we just need to figure out how to tune it!

Link to original post by Dan Mason on LinkedIn