2021v17, Monday: It's not just about you.

Why an apolitical workplace is a luxury only the comfortable can afford. And a cut-out-and-keep caustic guide to AI ethics.

Short thought: One of the more interesting “little firms that could” in the online services space has long been the outfit currently known as Basecamp. Its founder, Jason Fried, has been voluble - and thoughtful and interesting - about how to do good work remotely since long before the past year made that a necessity.

But now he and David Heinemeier Hansson, known as “DHH” (together the senior management of Basecamp), have solidly put their feet in it. I won’t rehearse the background in detail, because others have done it far better. The tl;dr version (and this is a really thin summary of a big story):

  • Basecamp employees - a sizeable chunk of the 60-odd staff base - started to work on diversity and inclusion issues. Management blessed this.
  • In the process, the fact that for many years the firm’s internal systems had hosted a list of “funny customer names” - many of which, inevitably, were those of people of colour - came in for understandable criticism.
  • Initially, management were onside with this criticism; indeed, they owned their part in the list’s maintenance over the years.
  • But then it got ugly. A number of staff saw the list in the context of ongoing institutional discrimination - not just or even not mainly at Basecamp, to be clear, but societally. Management (Jason and DHH) pushed back against what they seemed to see as an over-reaction.
  • Jason and DHH announced that political discussion was now off-limits. (They later amended this - albeit apparently without making clear that anything had changed - so that it applied only on Basecamp’s own chat and comms systems.) They also said they would withdraw benefits, instead simply paying staff their cash value, so as not to be “paternalist”.
  • This caused uproar. An all-staff meeting saw one senior and long-time executive play the “if you call this racism, you’re the racist”, “no such thing as white supremacy” card; he resigned shortly afterwards. As many as a third of the staff have now also taken redundancy.

This might seem like a tempest in a teacup. Small tech firm has row; news at 11.

But it’s not. Tech is still overwhelmingly white and overwhelmingly male, particularly at its senior levels. (It may not escape your notice that the Bar isn’t much better.) Which means its leadership often misses the key point: when you’re not rich and comfortable, when your life has included plenty of moments where you couldn’t expect things to go smoothly, when you don’t have much of a safety net, when large numbers of people at all levels of power get to mess you about just because they can, without you having much recourse, just about everything is political.

Healthcare is political, if its availability and quality vary depending on where you live and what you look like. (Don’t doubt this: I’ve seen healthcare professionals, who I’m certain would be genuinely horrified by conscious prejudice, treat Black women with breathtaking disdain compared with how they talk to people like me.) Pay is political. Work is political, because expectations and yardsticks vary unless we pay honest attention to how they’re generated and applied.

Put simply: cutting political and social issues out of the workplace is a luxury only comfortable people can afford. A luxury which exacerbates, rather than diminishes, the power imbalance built into workplaces by the sheer fact of people’s dependence on a paycheque. (This, by the way, is why in the UK and Europe we say people can’t consent to the use of their data in the workplace. If the alternative to consent is “find another job”, that isn’t free consent for anyone without a private income.)

For Jason and DHH to take this approach is to forget that the only people for whom politics doesn’t relate to business are those who get to dictate the terms of what goes and what doesn’t. The blindness appears to dismal effect in a post by DHH on “Basecamp’s new etiquette at work”:

Just don't bring it into the internal communication platforms we use for work, unless it directly relates to our business. I'm applying that same standard to myself, and Jason is too.

Well, that’s nice. Reminds me of Anatole France’s line about the majestic equality of the law, which forbids rich and poor alike to sleep under bridges. I wonder why.


Someone is right on the internet: On a somewhat related topic, issues of ethics in AI are big news, at least among geeks. Which is as it should be: the more AI or quasi-AI comes to control, dictate or direct our lives, the more concern we should have about whether the black boxes in question are exacerbating structural or other unfairness or inequality. It’s not good enough to excuse - for instance - algorithms that can’t recognise Black people with a shrug of “computer says no”. People make decisions, and they must be accountable.

(This, of course, is why Article 22 of the GDPR gives people the right not to be subject to significant decisions based on “solely automated processing, including profiling” - although it’s by no means impossible to get round this by inserting a human into the final stage of the process, or by making statutory arrangements to allow for it.)

Big Tech isn’t that comfortable about this, so it seems - as shown by Google’s removal (whether it was officially a sacking or not isn’t wholly clear, but it was effectively an ejection either way) of two senior women working on AI issues.

So MIT Technology Review’s caustic A-Z of how to talk about AI ethics is horrifically on the nose. A few examples will suffice, I hope, to encourage you to go and read it:

ethics principles - A set of truisms used to signal your good intentions. Keep it high-level. The vaguer the language, the better. See responsible AI.
human in the loop - Any person that is part of an AI system. Responsibilities range from faking the system’s capabilities to warding off accusations of automation.
privacy trade-off - The noble sacrifice of individual control over personal information for group benefits like AI-driven health-care advancements, which also happen to be highly profitable.

And the best one comes first:

accountability - The act of holding someone else responsible for the consequences when your AI system fails.

Ouch. But yes.