Open Challenges in LLM Research

August 18, 2023

If you read only one thing about AI/LLMs this week, make it this: Open Challenges in LLM Research by Chip Huyen


• As exciting as LLMs are in their current state, active research is underway to improve them in fundamental ways, including user experience, support for different data modalities, operational footprint, and hardware requirements.
• Some of these current limitations, especially hallucinations and GPU availability, are holding up adoption of LLMs in the real world, so this research may help reduce current chokepoints.
• These are not purely technical problems: they have implications for UX design, organizational workflows and policies, and much more. The democratization of LLMs has created an explosion of interest, well beyond traditional academic research.

My hot take:

This is just the beginning!

• Last week’s article was light; this one is pretty heavy! Skimming is perfectly acceptable.
• It’s useful to reflect on how we got here. Essentially, researchers built bigger and bigger models, using largely traditional machine learning structures (with transformers being a major innovation), until interesting behaviors (and UXes) emerged. Now that we’ve produced a positive result, we can turn our attention to refining our approach.
• Improving core weaknesses in today’s LLMs (hallucinations, GPU dependence, high energy usage) and pairing that with new capabilities (multimodal input spanning images, audio, and text; new UXes; support for more human languages) will further accelerate the pace of LLM adoption.
• These improvements aren’t just going to come from academia. Real people in the real world are working on these problems, and breakthroughs can and will come from anywhere.

Link to original post by Dan Mason on LinkedIn