Technology use with lightness

Image: a black and white photo of handwriting that lists 1. Lightness 2. Quickness 3. Exactitude 4. Visibility 5. Multiplicity 6. Consistency
Source: https://designopendata.wordpress.com/wp-content/uploads/2014/05/sixmemosforthenextmillennium_italocalvino.pdf

Welcome to newsletter 4

I am leaning into this newsletter not having a matchy-match existence and relationship to the prior three editions. Yet. Freedom!

Fall in love with the problem, they say.

This has been my method with AI, and other than a few months of exceptions over the last several years, it has served me well. If you have the luxury of being able to step far enough away from this moment to see the comedy, there is plenty. I would not have bet that ahistorical takes about consciousness from the tech sector would have driven me into the realm of Western philosophy, for example, but here we are. In love with the problem.

I’m finally finishing up reading Six Memos For The Next Millennium by Italo Calvino. The book is a series of lectures that were never delivered, sadly, because Calvino died prior to completing the last one and having a chance to share them in person. The lectures argue for five qualities in literature to carry us into the future: Lightness, Quickness, Exactitude, Visibility, and Multiplicity. His wife Esther wrote that the sixth memo was to be on Consistency.

The book is a study in tensions – art and science, intellectual rigor and creativity, and so on. I can't help but borrow this framing for thinking about technology use. He also writes about his interest in folk tales and fables, and the ways in which the oral traditions of storytelling have efficacy.

On that kind of efficacy, Goldilocks is a tale I reach for with AI when dealing with the mania and working out how to move through it with others. AI is not nothing, and it’s not everything – it’s somewhere in the middle. And “just right” is probably closer to the less excitable side. As we know, this kind of in-between, riddled with nuance, is not intellectually comfortable for a society built on binaries and hierarchy. And it seems that a great way to contend with something this overwrought is, per Calvino’s first recommendation, with lightness. One thing I love about Calvino's thinking on lightness is that it doesn't mean treating something flippantly or as unimportant. Quite the opposite.

To that end, I’m currently knee-deep in reviewing what is out there and offered to organizations in terms of AI guidance, falling in love with the shape of this specific problem. What is an organization supposed to be doing right now about the use of AI? I’m gathering some processes to share in draft, and will share them when I get there, but for now here are a few reflections based on what I’m seeing out in the world, in case this is on your mind or part of your life too.

Firstly, regulation will not resolve many of the AI-related issues that are currently surfacing in your workplace and organization. This means that it’s strategic to respond deliberately rather than let new technology define and entrench social and cultural norms simply because companies are putting new features in the products you’re using. Sometimes AI just shows up as a feature; it never hits procurement or purchasing as a traditional gating mechanism – one of the key findings shared in Open Contracting Partnership's "Buying AI – Tips and tools for public procurement" (a guide written for the public sector, but useful for any organization). If and when Canada has an AI law, it will support and expand a new wave of industrial compliance, consulting, and audits. There aren’t always rights or wrongs in play with culture, so it’s smart to be intentional and create your own direction here.

Secondly, there is no reason to ante up on self-surveillance by increasing your use of transcripts, automated note-taking, and other AI tools just because they’re there. I’m going to run a session about etiquette for the use of note-taking products sometime in the next few months. [edit: this session is now open for registration – it's happening Fri May 29 at 12.15 EDT – link to register]. It will consider and engage with the very real accessibility benefits that transcription and translation offer. It will also make you aware of how some people are using transcripts as raw material for other outputs, along with a number of other things to keep in mind. The focus will be on being a good host.

Thirdly, I see a lot of AI trainings start with something like: LLMs are this, deep learning is that, generative AI is this, algorithms are that, subset, data, machine learning, methods … mumbo jumbo to most people. I suggest a different approach, because when you begin these conversations with what amounts to a combined statistics/engineering/computer science lesson, you fall into the frame of the technical, which is really not the priority issue of the day.

Starting this way can also reduce people’s confidence because you’re introducing several unfamiliar terms at once. This technical background can be filled in along the way, once a bit more context is set. It’s not that being precise about how the tech works doesn’t matter (it does), BUT in our desire to be technically accurate, teaching and sharing in a linear sense, many of us lose the power of framing the topic as primarily political and cultural – a much wider frame and lens.

From scholar Dr. Lucy Suchman’s 2023 “The Uncontroversial Thingness of AI” (emphasis mine):

“AI as a floating signifier

…AI can be defined as a sign invested with social, political and economic capital and with performative effects that serve the interests of those with stakes in the field. Read as what anthropologist Claude Levi-Strauss (1987) named a floating signifier, ‘AI’ is a term that suggests a specific referent but works to escape definition in order to maximize its suggestive power. While interpretive flexibility is a feature of any technology, the thingness of AI works through a strategic vagueness that serves the interests of its promoters, as those who are uncertain about its referents (popular media commentators, policy makers and publics) are left to assume that others know what it is. This situation is exacerbated by the lures of anthropomorphism (for both developers and those encountering the technologies) and by the tendency towards circularity in standard definitions, for example, that AI is the field that aims to create computational systems capable of demonstrating human-like intelligence, or that machine learning is ‘a branch of artificial intelligence concerned with the construction of programs that learn from experience’ (Oxford Dictionary of Computer Science, cited in Broussard 2019: 91). Understood instead as a project in scaling up the classificatory regimes that enable datafication, both the signifier ‘AI’ and its associated technologies effect what philosopher of science Helen Verran has named a ‘hardening of the categories’ (Verran, 1998: 241), a fixing of the sign in place of attention to the fluidity of categorical reference and the situated practices of classification through which categories are put to work, for better and worse.”

So, fourth, approach AI as the political and cultural matter that it is – in the best of ways. In your organization and with your colleagues, speak in specific terms about the function. Not how the function is done (how the tech works), but what is happening to us in relationships, in decision-making, in relation to time, attention, authority, capacity, what decent work looks like, and so on (culture, organizational design, etc.). Talking about automation is often the easier way in.

Fifth, borrowing from a colleague and friend who works in social services – it can be helpful to look at AI use in your field in three specific buckets: the population level, the practitioner level, and the administrative level. For example, let’s say you are a therapist.

At the population level, people are using Claude and other generative AI products for mental health support. You can’t change this directly, but you certainly a) need to know it’s happening and b) need some kind of plan for if and how you are doing something about it within your profession and with those you serve. At the practitioner level, you have been offered a product that allows you to record and create transcripts of your therapy sessions. How are you approaching the questions that the use of this tool raises, from consent to compliance to consequences for how you practice? Finally, at the administrative level, you now have opportunities to create generative AI communications output for outreach, follow-up, etc. Does this solve a problem you currently have, or are you just going to change the shape of the work you are doing because you know it will all need to be carefully reviewed?

Sixth, there are REAMS of templates and guidelines and principles out there. For example, the Canadian Centre for Nonprofit Digital Resilience (CCNDR) has a list of more than 50 organizational resources and examples for AI use. But when you think about this issue, rather than start at the level of abstraction of “what are our rules going to be for all the AI things?”, flip the question on its head: “what are the one or two (or several) cases/potential cases of AI use that are surfacing in our organization?” Then talk about what the impacts and consequences of these uses already look like.

Do away with the idea that there is a perfect set of rules for all of this. Also be aware of the false sense of legitimacy you may be giving these tools by creating activity around them. The important thing is to spend time talking with each other and navigating the kinds of conversations that do not necessarily have right or wrong answers. On a team of nine there might already be wildly disparate opinions about if/how/when to use generative AI in the course of a workday. You need to commit to returning to these conversations again and again as time passes and you gain more experience.

Beyond that, in terms of putting principles into action, as Charley Johnson writes about in his latest newsletter, titled The Responsible AI cage:  “Principles documents are isomorphic outputs. They make organizations look like responsible AI organizations without necessarily making them behave like responsible AI organizations. And the field has organized its expectations accordingly — which means the social pressure runs toward producing the document, not toward asking whether the document is changing anything.”

The great thing is that these conversations about using technology together can go way broader than AI. Who loves the software they have to use in the course of a day? All of the products we use would benefit from a bit more investment of our time and power regarding if and how we want to use them. If we think more about how we want to bring the usage of these products to life, we can open up new kinds of work, and also begin to see that writing down rules often legitimizes things by focusing on the compliance side rather than the creative side. I wish Calvino were here to see what AI is doing to people and to our shared storytelling about what computers can do.

Seventh, and finally, it's to be expected that people are all over the place with their politics regarding the use of AI, and genAI in particular. Some are fully refusing to use it, some are reluctant or feel shame, some are all in. All of these experiences are working with and against each other to inform the decisions we make both individually and collectively. Holding the far end of arguments in political work doesn't always mean an expectation that the position will be adopted, but it certainly helps strengthen the conversations and outcomes. Being curious about each other’s opinions and preferences tends to get us to better places.

Before I go, please mark your calendars to join us on April 29, 2026 at 1.30 pm ET for a free webinar: “Boring Tiny Tools for Transit, a.k.a. Just Enough AI” – you can register here: https://us06web.zoom.us/webinar/register/WN_RBEZv7n4SJWhUGEUnL6WQA#/registration

PS: two suggestions to read and listen to…

Read:

James McKinney of Open Contracting Partnership wrote an excellent short blog post that addresses a question you might be wondering about if you are in charge of data for your organization: is there a point to spending time getting and keeping data in good order, using data standards, if AI can easily wade through the mess of badly structured and incomplete data?

The blog post is about procurement data but contains transferable lessons for the practice of keeping data in good shape writ large. The short answer is yes: it still matters to use data standards and keep good records, and James’ post walks through why – including where AI use makes sense (pattern-matching) and where it’s better not to use it (decision-making that must be defensible and rigorous). There’s more; it’s a great, succinct read.
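If you want the distinction in miniature, here is a rough Python sketch – mine, not James’s. The schema and field names are hypothetical (loosely echoing the shape of the Open Contracting Data Standard), but the point stands: a record that follows a shared standard can be checked deterministically, which is the kind of footing a defensible decision needs, while a messy record leaves you guessing – which is exactly where AI-style pattern-matching starts to look tempting.

```python
# A minimal, hypothetical sketch of why data standards still matter.
# The field names loosely echo the Open Contracting Data Standard,
# but this schema is invented for illustration only.

REQUIRED_FIELDS = {
    "ocid": str,       # contracting process identifier
    "buyer": str,      # purchasing organization
    "value": float,    # contract value
    "currency": str,   # ISO 4217 code, e.g. "CAD"
}

def validate(record: dict) -> list[str]:
    """Return a list of problems; an empty list means the record passes."""
    problems = []
    for field, expected_type in REQUIRED_FIELDS.items():
        if field not in record:
            problems.append(f"missing field: {field}")
        elif not isinstance(record[field], expected_type):
            problems.append(f"{field} should be a {expected_type.__name__}")
    return problems

# A standards-conforming record validates cleanly and deterministically...
good = {"ocid": "ocds-abc123", "buyer": "City of Toronto",
        "value": 50000.0, "currency": "CAD"}
# ...while a messy one surfaces exactly what is wrong, no guesswork required.
messy = {"buyer": "toronto??", "value": "50k"}

print(validate(good))   # []
print(validate(messy))  # ['missing field: ocid', 'value should be a float',
                        #  'missing field: currency']
```

Nothing clever is happening there, and that is the point: the check is boring, repeatable, and auditable, which is what you want underneath decisions that have to hold up to scrutiny.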

Listen:

Johnathan and Melissa Nightingale of Raw Signal Group talking about AI and work on an episode of The Atlantic’s show Galaxy Brain titled: “Is AI Going to Turn Us All Into Middle Managers?” In it, Johnathan and Melissa refuse to do away with nuance as they speak about how AI is impacting the world of work, and the tech sector specifically. They’re particularly careful to keep history close, sharing context about labour power in recent years, automation’s impacts to date, and the social and cultural components of a good workplace. They’re clear about what makes a good manager: one who creates an environment for a team to be effective. They also shared that the signs of a good manager are not a) that they are liked by their team or b) that the team is happy.

Around the 38-minute mark, Melissa describes an important piece of context about the impacts of generative AI use on a manager’s capacity to get better at their job. She talks about how most managers are living in days of meetings stacked one on top of the next from start to finish. What this means, in short (and do go listen to the point she makes in the full context of the show), is that if something goes wrong on an interpersonal level – a bad moment in a meeting, poor word choices in giving feedback, a not-great reaction to something an employee has done – there is no time available for a manager to sit with it, reflect on it, and figure out what to learn or adjust for next time.

This is bad for everyone involved. But as Melissa talks about, it’s particularly frustrating because people are good at this kind of learning, if we have the time to do it via self-reflection. But that time for self-reflection has been missing from the typical manager’s day for a long time already. Pushing more interpersonal interactions through generative AI means fewer direct and hard conversations, less interpersonal friction.

A time-strapped manager thinking they are doing a good thing with a gentler genAI email may actually offend the recipient further (they couldn’t be bothered to write this themselves and/or talk to me about it?) AND both parties miss out on the learning that comes from engaging with the situation head-on: properly, slowly, painfully maybe – but honestly.  As ever, the root problem is old (management roles = having meetings literally all day) and the use of generative AI adds new symptoms of our deep-rooted cult of efficiency.