Questions

how does organizational theory change when organizations consist of humans and ai agents?
most organizational wisdom co-evolved with humans, who change slowly, so old advice has held up surprisingly well. agents are a new variable in the system - the principles that survive contact with mixed human-agent teams may look quite different from the ones we've inherited.

are larger models converging toward a platonic representationalism?
at smaller scale, the same concept expressed in different languages seems to live in separate, distributed parameters; as scale grows, those seem to collapse into shared representations. if so, what does that imply about meaning, language, and the relationship between symbols and the things they point at?
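
one way to make "collapse into shared representations" empirical is to embed translation-equivalent phrases at several model scales and track how similar the paired vectors get. a minimal sketch of the metric, with random stand-in vectors (everything below is hypothetical, not taken from any real model):

```python
import numpy as np

def mean_pairwise_cosine(xs, ys):
    """mean cosine similarity between aligned rows of two embedding
    matrices, e.g. the same concepts embedded in two languages."""
    xs = xs / np.linalg.norm(xs, axis=1, keepdims=True)
    ys = ys / np.linalg.norm(ys, axis=1, keepdims=True)
    return float(np.mean(np.sum(xs * ys, axis=1)))

# stand-in embeddings: in a real probe these would be a model's hidden
# states for translation-equivalent phrase pairs
rng = np.random.default_rng(0)
concepts_en = rng.normal(size=(100, 64))
# a noisy copy plays the role of "same concepts, other language"
concepts_fr = concepts_en + rng.normal(scale=2.0, size=(100, 64))

sim = mean_pairwise_cosine(concepts_en, concepts_fr)
```

if the platonic-representation story holds, this number should rise toward 1 as model scale grows; at small scale it should stay low.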

what's our best guess at predicting ai scaling?
this is one of the most consequential forecasting problems we have, and we still don't have a reliable method. scaling laws fit past trends well but tell us little about emergent capability thresholds or where compute returns begin to bend.
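
the fits themselves are simple: the usual scaling laws are power laws, a straight line in log-log space. a minimal sketch with made-up compute/loss points (all numbers hypothetical), which also shows what the method can't do, since a smooth fit says nothing about thresholds:

```python
import numpy as np

# hypothetical compute budgets (FLOPs) and observed losses -- made up
compute = np.array([1e18, 1e19, 1e20, 1e21, 1e22])
loss = np.array([3.2, 2.9, 2.65, 2.42, 2.2])

# fit L(C) ~ a * C^b via linear regression in log-log space
b, log_a = np.polyfit(np.log(compute), np.log(loss), 1)
a = np.exp(log_a)

# extrapolate one order of magnitude: the fit predicts the smooth
# trend only, never where a capability emerges or returns bend
pred = a * (1e23 ** b)
```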

what does a post-consumer economy look like?
if production becomes near-zero-marginal-cost in many domains and most demand is saturated or ai-mediated, what replaces consumption as the organizing principle of economic life? what are people for in that world?

if we don't develop continual learning, are we headed for semantic collapse?
language is a living thing, maintained by interaction. for the first time we share that interaction with a non-human interlocutor we subconsciously treat as human. since its weights are frozen, are we entering a mass-scale calcification of language, and what consequences does that have?