NN/g: First New UI Paradigm in 60 Years

June 23, 2023

If you read only one thing about AI/LLM this week, make it this: NN/g: First New UI Paradigm in 60 Years by Jakob Nielsen


- AI/LLMs fundamentally change the way we interact with computers; this is only the third time that has ever happened! The first two paradigms were batch processing (punch cards, 24-hour cycle times) and command-based interaction (we tell the computer what to do, step by step, to achieve the outcome we want).
- This third paradigm requires users not to make commands, but rather to construct outcome specifications — we tell the LLM what we want, and it delivers the desired output (or dies trying). This radically accelerates the creative process — what might have taken hundreds of commands to a computer now might only take one well-formed request to an LLM.
- This reverses the locus of control — the LLM is now in the driver’s seat, which can make it harder for human users to identify problems: if you don’t know how the LLM generated its response, you may not be able to spot errors or hallucinations. Chat is also not an ideal UX for many tasks — we may need hybrid experiences that blend the best of visual UX with chat.

Dan Mason's hot take

The time has finally come for outcomes over output!

- Product people have been agitating for years about the importance of outcomes over output — who cares how many story points you shipped if you don’t get the business outcome you want? Sometimes great outcomes can be achieved with no code at all, but if your incentives reward writing code, you get code whether you need it or not.
- With #LLMs, we ask for what we want, and we get it. Right now, we mostly ask for output — write me a blog post, draw me a picture, write me some code, etc.
- As we get more comfortable with #LLM co-pilots, we will need to blend our expertise with theirs — sometimes asking for specific outputs that drive the outcome we are working towards, and sometimes for outside-the-box ideas on how to reach a certain outcome. This is probably the most important skill for developing #AI fluency: understanding the capabilities of LLMs and how to use them to get the best possible outcome, NOT the prettiest-looking output.

Link to original post by Dan Mason on LinkedIn