Squish Meets Structure: Designing with Language Models
Maggie points out how the sausage is made with LLMs:
Most of the data we used to train these models is a dump so enormous we could never review or scan it all. So it's like we have a huge grotesque monster, and we're just putting a surface layer of pleasantries on top of it. Like polite chat interfaces.
We’re trying to make an unpredictable, opaque system adhere to our rigid expectations of how computers behave. There is currently a mismatch between our old and new mental models for computer systems.
Ultimately, her assessment of the problem:
The problem here is we're using the same interface primitives to let users interact with a fundamentally different type of technology.