Meeting hype with context


Welcome to newsletter #5, on meetings in the age of AI

[Image: three people holding up pieces of paper, trying to avoid detection by AI. Thank you UKAI Projects for a great meeting about AI.]

One of the phrases I reach for when I write to people I care about who are going through a hard time is “words fail.” It’s a way of giving myself permission to try (and fail) to provide adequate comfort, relieving me of the burden of getting it exactly right while still doing what I know matters more, which is showing up. A dear friend and I practice this with each other in a different context: saying things to each other that might be hard to hear, while naming how much trust is reflected in not trying to get the delivery perfect. We know we will be ok and figure it out; we won't let the words get in the way of our intent.

“Words fail” is close to something I think the philosopher Ludwig Wittgenstein was getting at with his idea of the limits of language. Words can do so much, and yet there are things in life that words can’t hold, things that are felt and known differently, in our consciousness. Consciousness relates to experience, and experience happens both alone and in communion with others. One of those forms of communion is meetings, and meetings are the theme of this newsletter.

I'm sharing long excerpts from three pieces of writing below, as well as a shorter blog post about work and automation. I encourage you to clear some time to read them in full. Their crossover point is collective experience and shared decision-making. Decision-making continues to be a category of automation that warrants thick, bright lines around it, for many reasons, some of which are explored in these pieces. As you figure out if and how you use AI, running your uses through the lens of which decisions you are enabling it to influence and/or make continues to be super helpful.

First, on the general subject of meetings, is Elizabeth Ayer, who writes in her 2023 piece “Meetings are the work”:

“But what if, hear me out, what if the *only* work that matters in a knowledge economy happens when we are together? What if the reason we can’t seem to fix meetings is that we’re mischaracterizing ‘the work’ in the first place? My eventual goal is to challenge the trope that meetings aren’t work, but first we need to take a tour of some big ideas… […]

Most of us — and I certainly include myself in this — relax into existing work schemas. We’re at work, not in some epic battle to redefine truth. But again, what if… What if we put all our various knowledge tasks under the microscope: all the little choices we make to generate ideas, narrow choices, combine, reframe, highlight, focus, and decide? Under that lens, we can see that we are deciding “true enough to act on” all the time.

If we recognize the ubiquity of knowledge choices, we open up so many new possibilities for manifesting intentionality in our work, it’s hard to take them all in. We have a constant stream of options of what to prioritize and where to draw attention.”

Ayer begins with a thorough and explicit acknowledgement that meetings, as they are used today, are not well-loved, for many reasons. But her point is that while meetings seem like a banal increment of the workday, they hold immense power if used differently. They are a site that could be used to counteract many of the forces that long-standing managerial culture, neoliberal political culture, and recent tech culture seek to accelerate with more automation. Meetings won't get better and different without intent and proactive effort, but the site is available.

If we can see the banal meeting as a site of power in the context of decision-making, we can shift to leveraging it better, including by having better conversations about how we want to design and drive automation in our workplaces, rather than passively accepting it. One of the challenges with technology is organizing conversations about it so that privacy, safety, and transparency rank above simple efficacy. But efficacy and accuracy aren't enough; even in their absence, we can still feel the shift of work away from producing and into reviewing, being held accountable for work we didn't do. Not great, and not something to passively accept.

The broader questions we need to surface relate to the conditions of our relationship to our work and to each other. These are often simple questions that feel hard to table in this moment, because the order of the conversation is backwards. We're not starting with operations and then talking about AI, which would make sense. We're starting with AI, then talking about operations, which doesn't.

In his long read about AI being incorrectly blamed for the bombing of the Shajareh Tayyebeh primary school in Minab, in southern Iran, Kevin T. Baker begins by naming this phenomenon of backwardness:

"In 2019, the scholar Morgan Ames published The Charisma Machine, a study of how certain technologies draw attention, resources and attribution toward themselves and away from everything else. The usual framework for understanding this dynamic is “hype”, but hype only describes what boosters do, and it assigns critics a privileged debunking role that still leaves the technology at the centre of every argument. A charismatic technology shapes the whole field around it, the way a magnet organises iron filings. LLMs may be the most powerful instance of this type in history."

You are probably feeling the organizational effects of charismatic technology, and probably not for the first time. For now, moving slowly, forcing hype through context, and protecting collaborative spaces are extremely important frictions to introduce to contend with it. Talking to other people and making decisions about how to work together is not novel, which makes it hard to believe that this is part of the necessary response to the charisma machine: it feels too banal, too small, too simple. Yet this is precisely how beneficial new uses of this technology will come to pass, in the context of decent work, through figuring it out together, in ways grounded in sectoral knowledge and context. What happens in the absence of engaging with this issue is well told through industrial and military history. More from Baker here; it's a long excerpt, but necessary context, emphasis mine:

“In 1984, the historian David Noble showed that when the US military and American manufacturers automated their factory floors, they consistently chose systems that were slower and more expensive but which moved decision-making away from workers and into management. The point was not efficiency – it was frequently extremely wasteful – but control. A worker who understands what they are doing can exercise judgment the institution cannot govern. Move that understanding into the system, and the worker has nothing left to do but follow instructions. Alex Karp, the CEO of Palantir, describes exactly this achievement in his 2025 book, The Technological Republic. “Software is now at the helm,” he writes, with hardware “serving as the means by which the recommendations of AI are implemented in the world.” His model for what this should look like comes from nature: bee swarms and the murmurations of starlings. “There is no mediation of the information captured by the scouts once they return to the hive,” Karp writes. The starlings need no permission from above, they require “no weekly reports to middle management, no presentations to more senior leaders, no meetings or conference calls to prepare for other meetings”. This sounds liberating, even utopian. But the signal that passes without mediation is also the signal that nobody can question.

Karp thinks he is destroying bureaucracy. He is encoding it. The contempt for meetings and weekly reports and presentations to senior leaders; he treats these as the bureaucratic process itself. They are not. ... What Karp eliminated was the discretion the institution could never admit it depended on. What remains is a bureaucracy that can execute its rules but with no one left to interpret them. Bureaucracy encoded in software does not bend. It shatters.”

This all brings to mind Ursula Franklin's distinction between holistic and prescriptive technologies, explored here by Mandy Brown. The point is to think hard about what we automate, and why. There is an instinct among some managers today not to curtail people's agency, to let them work with the tools they want to work with. This seems aligned with the idea of holistic technology and worker agency, but when those very products are prescriptive by nature, the conversation gets messier. The kind of messy that should be animating the meetings we're having about how we want to work together. The kinds of meetings where we won't always say it right, and where we won't all agree. Power dynamics of many dimensions come into play in these conversations.

Finally, in his thoughtful piece about what “good” use of AI looks like, Marc De Pape writes:

"What is needed are competing visions of good to overcome the default of fast and cheap in order to avoid the worst possible outcomes of AI’s rapid expansion. What follows is the scaffolding of how we might think about “good” and AI. To do so, let’s talk about aviation. Specifically the history of autopilot.

As use and adoption of autopilot increased over the course of multiple decades there emerged a worry, after a number of accidents, that autopilot was contributing to the atrophying of pilot skills such that pilots struggled to fly when assistance failed. This became of sufficient concern that NASA researched the issue and published Examination of Automation-Induced Complacency and Individual Difference Variates:

Automation-induced complacency has been documented as a cause or contributing factor in many airplane accidents throughout the last two decades. It is surmised that the condition results when a crew is working in highly reliable automated environments in which they serve as supervisory controllers monitoring system states for occasional automation failures.

Simply put, pilots delegated much of the requisite judgement of flying to the automation tool turning them into monitors who were no longer as able to influence or direct the outcome of the flight as they once were. There have been a number of similar studies related to AI and this more recent study, but rather than get into the debate about whether one should or shouldn’t adopt AI, let’s look at how the aviation industry handled the risks associated with Automation-induced complacency as an analogous path forward — and through — a lot of the noise around AI today.

The aviation industry handled automation-induced complacency by recognizing that the best outcomes required a hybrid approach. Pilots are now encouraged to manually fly to maintain their skills (take-off and landing in particular), while still taking advantage of the benefits of autopilot (reduced cognitive load over the course of long flights) to help with the overall quality of the flight: safety, fuel efficiency, smoothness, punctuality, etc.

This automation in service of quality is an important lesson for AI. How aviation responded to the issue sits in stark contrast to how AI leaders have handled safety concerns: scolding the public for not adopting your solution without demonstrating the requisite amount of reflection to understand why there might be hesitation. If planes were falling out of the sky, would anyone scold the public for not flying more? Yet here we are."


In synthesis and closing, the point of sharing these pieces is to isolate and highlight the power of meetings and collaboration, and their relationship to protecting our power to make decisions and exert influence at many different scales. If we do not intentionally leverage and expand the places where collaboration can happen, individualistic notions of what productivity looks like will fill the vacuum with ever more automation. If we can begin to see the power and leverage that exists in each unit of coming together for a meeting, we can also apply it to designing and enforcing the conditions we want for what good work looks like, with and without AI. But it requires proactive and creative engagement, not passivity.

Meetings are a great place to talk about hard things, and also to take creative risks, ask questions, and explore some of the territory Elizabeth Ayer so carefully recounts in her piece. This is also a reason to consider pushing back against the normalization of transcripts and surveillance as a meeting default. To that end, I'm holding a free short session on etiquette around using AI note-takers on May 29th. It will not be recorded :)