2024i14, Sunday: Commonplace #2
"I'll just stick it here for later reference...". Photo by Kelly Sikkema on Unsplash.

Issue 2 of the ongoing online Commonplace booklet. An old friend and colleague was kind enough to say this stuff gave him food for further thought. I'm touched and delighted: part of the point of this is that sometimes I'm not sure what I think about stuff, and want to revisit it later. Storing stuff in public makes that easier to do. Thanks, Mr W: much appreciated.


We've been thinking about America's trust collapse all wrong: by Jedediah Britton-Purdy. This squares with the piece from Ian Leslie last time about the mistakes we choose to make, and his comment that he chose to trust more - even if that sometimes led to being screwed over.

Each of us can also develop practices to modulate our own balance of trust and skepticism, and gently push others to do the same. Every setting is different—family, religious community, volunteering, the workplace. My job is teaching, and although a liberal is not supposed to say this, too much of what happens in the classroom assumes that everyone there has the same broadly progressive politics, and that only a fool or a jerk would disagree. This is not just a failure to welcome those who don’t agree into the conversation, all but guaranteeing that they silently dig in to whatever they already believe. It also lets down students who think of themselves as progressives, who lose the tempering of inconvenient facts, countervailing arguments, the sheer social weight of disagreement, which requires the civic and political work of argument. No one’s faith, lived experience, or personal “truth” is exempt from the burdens of conversation. At its best, sustained conversation wins converts in both directions and, more important, may transform moral horror at someone disagreeing with you into trust that people who disagree can also listen, reflect, and do things together.


What I look for when I'm hiring: by Tony Tost. This rings a bell. When I was in Corporate Life, people would sometimes ask me in frustration why they didn't get promoted even though they did their job well. My answer was: it sucks, but doing your job well only proves you can do your job well. Getting up the ladder (and God I'm glad not to be on that ladder any more) usually required showing two things. One was that you could make things better, even a bit: faster, cheaper, more efficient, more effective, something. That always got you noticed. The other, though, was still more important: bosses want to hire people who make their impossible inbox slightly smaller. Tony seems to agree.

Sometimes I think ambitious young people err in thinking that if they communicate how much the job will help their own personal ambitions, that this’ll somehow help them get hired.
The trouble is — and I’m only speaking for myself here — I’m not going to make a hiring decision for one of these jobs based on who that job is going to help out the most. I mean, I’d like to help as many people as I can. But ultimately, I’m going to make a hiring decision based on my impression of who is ultimately going to make my job a little easier.
Why? Because I’m selfish. And eight months from now, when weather has shut down production for two days, and a hidden scheduling conflict means we’ve lost a key director at the last minute, and a key crew member has just been fired and must be replaced, I need to be able to look across the office at whoever is helping me handle this inevitable shitstorm and think to myself, with all sincerity, “well, thank God ______ is here.”

There's an implied deal here, which too many bosses and organisations don't respect: if you ease the burden on your manager even a little, they will be - in the pithy and scatological phrase an old colleague of mine once used - a shit umbrella. That is: someone who holds an umbrella above their team to make sure any shit cascading from upstairs stays off their people. (What happens under the umbrella can have its own messiness, to be sure; but equally it stays there.) The opposite is a manager who's a shit funnel, making sure that senior aggro is directed at members of their team while they avoid it themselves. These people are nightmares. And we've all seen too many of them.


The Post Office Horizon IT scandal and the presumption of the dependability of computer evidence: by James Christie. Another Post Office-related thing: a paper from 2020 that I read a while ago, but which has - rightly! - come round again. It points to the presumption built into English law (in part as a result of the repeal in 2000 of section 69 of the Police and Criminal Evidence Act 1984) that computers work correctly and that computer records are reliable unless the other side can prove otherwise. You can see how harmful this was in the Horizon trials without any great effort of thought. Apparently there's now a very, very, very belated push to put this back on the agenda. I'm not sure I'd want to go all the way back to the section 69 presumption - "don't trust computers unless you can prove they're right" - which carried its own blowback risks, although with LLMs proliferating and hallucinating (see below) I'm tempted. Perhaps the solution is not to have a presumption in either direction? Not sure. I'll have to keep thinking about it.

As this white paper made clear, complex systems can only work satisfactorily if humans are able to vary and adapt systems in response to the problems they face. To help and protect the humans, systems must also contain essential inefficiencies: redundancy, margins for error, buffers, backups, fallbacks. Accepting this reality introduces a contradiction that is fatal for any presumption that computers are reliable. Systems are reliable only if there is active, expert human involvement. Once that involvement is acknowledged, one must also accept that one is dealing with human frailty, not infallible machines. The systems themselves can no longer be presumed to be reliable, and any evidence coming from them must be justified and open to challenge in the same way as any other evidence from a human source.


To own the future, read Shakespeare: by Paul Ford. One of my razors is: if Paul Ford wrote it, I'll read it. (Same with Tim Harford and Helen Lewis. Far more often than not it's worth it.) He's a fantastic writer, perhaps best known for a magisterial piece of more than 30,000 words entitled "What is code?", which occupied pretty much all of a print issue of Bloomberg Businessweek back in 2015 and which was even better online. Just marvellous. This is much, much shorter, but just as good.

Wandering between these worlds, I began to realize I was that most horrifying of things: interdisciplinary. At a time when computers were still sequestered in labs, the idea that an English major should learn to code was seen as wasteful, bordering on abusive—like teaching a monkey to smoke. How could one construct programs when one was supposed to be deconstructing texts? Yet my heart told me: All disciplines are one! We should all be in the same giant building. Advisers counseled me to keep this exceptionally quiet. Choose a major, they said. Minor in something odd if you must. But why were we even here, then? Weren’t we all—ceramic engineers and women’s studies alike—rowing together into the noosphere? No, I was told. We are not. Go to your work-study job calling alumni for donations.

I'm definitely "that most horrifying of things" as well. I started young: my A-levels (all those decades ago) were double maths, French and German (which baffled everyone else studying double maths with me, all of whom were doing two sciences as well...). And this job, this wonderful job I do now, is the very definition of interdisciplinary: yes, we're advocates, and legal nerds, but we must needs suck up knowledge about everything under the sun if it's relevant to a case. We don't get to say: nah. English literature is dull. Not if our case is about plagiarism of fiction someone's written. Or to decry the worth of optics if the matter concerns a malfunctioning laser cutter.


How you change, when you learn something new: by James Fallows. James is another writer whose name alone is enough to make me read something. When I was studying Japanese many, many moons ago, he was based in Japan - and writing lovely reportage which helped me appreciate the place far better than I might otherwise have done. His writing on the 2016 Trump presidential campaign was superb: fair, rigorous, but not pretending to objectivity. Thank goodness. Here, he's writing about the ability to learn later in life. As someone who radically changed career in his 40s, and hopes to keep learning on the job for a couple of decades to come, I found it struck a chord. As did its coda - more literally, given that my dad suffered a stroke shortly after he retired (you'll see what I mean when you read it).

Learning something new, I think we shift at some point in our lives from when we're in our 20s, and everything's new and we're sort of figuring out things, and everything's a new adventure, and we're developing skills. Then by the time we're 50 or so, we assume that all the skills worth having we already have. And that the ones that we don't have probably aren't worth it.
And the things we can do, we kind of assume: Why can't everybody else do the same things I can? They must be slow.
And learning something new—especially sitting next to somebody who's probably much younger than you and can do all these things you can’t—it's a humbling experience. It makes you appreciate the range of skills that people have, the skills other people have that you don't, and the discipline that it took to master them.


There's a bunch of other stuff, but I'll save it for next time. Mid-week, probably. See you then.