Discussion about this post

Tom Dietterich

Excellent essay. One comment and one question.

Comment: Safety engineers will always apply a waterfall approach, because they need to do careful scenario-based hazard analysis and mitigation. They learn instead by doing pilot studies, building prototypes, and testing them in safe settings (e.g., test ranges). Because LLM-based systems have a significant failure rate, any process that carries risk (e.g., inventory management) needs to be carefully designed and tested. An LLM is like an aircraft that randomly loses power for a few minutes out of every hour. It is possible to learn how to fly such a plane, but it requires great care. One strategy (which I saw at a recent IBM presentation) is to only take action based on code emitted by the LLM. Code can be sanity-checked using formal tools from programming languages before being executed.
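To make the code-gating idea above concrete, here is a minimal sketch in Python of one way such a check could look. The action name `adjust_inventory`, the allow-list, and the AST-based check are illustrative assumptions for this example, not the system shown in the IBM presentation: the model's output is treated as a small program, statically checked against an allow-list of permitted calls, and executed only if it passes.

```python
# Sketch: act only on code emitted by the LLM, after a static sanity check.
# This illustrates the pattern, not a real sandbox; exec still exposes Python builtins.
import ast

ALLOWED_CALLS = {"adjust_inventory", "print"}  # hypothetical permitted actions


def is_safe(source: str) -> bool:
    """Reject code that uses imports or calls anything outside the allow-list."""
    try:
        tree = ast.parse(source)
    except SyntaxError:
        return False
    for node in ast.walk(tree):
        if isinstance(node, (ast.Import, ast.ImportFrom)):
            return False
        if isinstance(node, ast.Call):
            func = node.func
            name = func.id if isinstance(func, ast.Name) else getattr(func, "attr", None)
            if name not in ALLOWED_CALLS:
                return False
    return True


def adjust_inventory(sku: str, delta: int) -> None:
    print(f"adjusting stock for {sku} by {delta}")


llm_output = 'adjust_inventory("SKU-42", -3)'  # stand-in for model-emitted code
if is_safe(llm_output):
    exec(llm_output, {"adjust_inventory": adjust_inventory, "print": print})
else:
    print("rejected: emitted code failed the static check")
```

The design choice here is deny-by-default: any call the check does not recognize is rejected, so a failure of imagination about what the model might emit leads to a refused action rather than an unintended one.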

Question: LLM-based systems provide new abstractions (such as "agents") and new communication mechanisms (such as agent-to-agent communication, natural language communication with customers, suppliers, etc.). Can you point to work that is studying how the modern corporation might be re-designed using these new abstraction and communication mechanisms? I wonder if a task-based approach might automate fine-grained aspects of the corporation and miss the opportunity to restructure. In the back of my head, I'm wondering what the AI equivalent is of the need to redesign manufacturing processes using electricity.

Kenny Fraser

I think a lot of this makes sense, but I have two quibbles. 1. Measuring and analysing causes sounds great, but it will be very hard to get clear answers for some time, so we need to take care not to introduce false certainty. 2. No amount of corporate planning will stop people experimenting, and in some cases those experiments will be the things that work. So again, we need to be careful not to stifle innovation with central planning.
