If you read only one thing about AI/LLMs this week, make it this:

All the Hard Stuff Nobody Talks About when Building Products with LLMs

June 2, 2023

Dan Mason is back with another "if you only read one thing about #ai this week" recommendation. This week, it's All the Hard Stuff Nobody Talks About when Building Products with LLMs written by Phillip Carter, Principal Product Manager at Honeycomb.

TL;DR

- LLMs are not great at problems where “correctness” is important. If your use case requires LLM output to be compiled or sent straight into an API, buckle up! It won’t work half the time.
- Speed is an issue. In chat, it still seems magical to watch a computer typing back to you, but inside an application, the latency can be crippling. OpenAI's models and Anthropic's Claude are ahead, much faster than open models as of now.
- There are many legal and compliance concerns with integrating LLMs into your product. You may need to obtain user consent, explicitly allow LLM features to be disabled, etc.
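The first point above, that raw LLM output "won't work half the time" when piped straight into an API, is usually handled by validating the model's output before trusting it, and retrying on failure. Here is a minimal sketch of that pattern; the schema keys and the `generate` callable are hypothetical, not from Honeycomb's actual product.

```python
import json

# Hypothetical schema: suppose the LLM is asked to emit a JSON query
# for some downstream API. These required keys are an assumption for
# illustration, not a real API's contract.
REQUIRED_KEYS = {"dataset", "filters", "time_range"}


def validate_llm_output(raw: str):
    """Return the parsed query dict if it is well-formed, else None."""
    try:
        parsed = json.loads(raw)
    except json.JSONDecodeError:
        return None
    if not isinstance(parsed, dict) or not REQUIRED_KEYS <= parsed.keys():
        return None
    return parsed


def query_with_retries(generate, max_attempts=3):
    """Call `generate` (a stand-in for the LLM) until its output
    validates or the attempts run out, rather than trusting one shot."""
    for _ in range(max_attempts):
        candidate = validate_llm_output(generate())
        if candidate is not None:
            return candidate
    raise ValueError("LLM failed to produce a valid query")
```

For example, if the model returns garbage on the first attempt and a valid query on a later one, `query_with_retries` shields the API from the bad attempts:

```python
outputs = iter([
    "not json at all",
    '{"dataset": "logs"}',  # missing required keys
    '{"dataset": "logs", "filters": [], "time_range": "1h"}',
])
result = query_with_retries(lambda: next(outputs))
```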

What's Dan Mason's takeaway?

Focus on creating magical experiences, for your customers and your team.

ChatGPT needs no explanation; it's obvious how to use it. Some of this is Google paving the way with an "ask me anything" product, and some is chat being a natural human UX. Most products will struggle to make their LLM features feel half as natural, although there are exceptions.

Companies should focus on two things:
1) finding product use cases for LLMs that feel as magical and natural as ChatGPT, and
2) doubling down on the use cases their teams have already found in their everyday work.

Help your employees make more magic while they work by creating safe and secure LLM environments, and reimagining your teams and processes to maximize both your AI and human performance.

Link to original post by Dan Mason on LinkedIn