Until recently, I thought of racism very logically. Quite literally, I viewed racism as a predicate in classical logic, say R(x), evaluated as either true or false for each scenario x in a set of scenarios S.
As a result, statements like "I'm not racist, I have black friends" never registered as anywhere near racist to me, and it never made sense when people said they were. In truth, I was actually quite frustrated and angry when people took scenarios that didn't seem clearly racist and insisted they were. What did it mean for something to be racist anyway? How do you know what the person's intentions were? A non-racist could easily have said the same thing, so your accusation is unprovable, and also harmful.
Using the above logic, I refused to acknowledge police brutality against black people. Effectively, I placed an inordinately large burden of proof on any claim of racism.
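In code, the mental model I was running looked something like this. A deliberately crude sketch; the predicate and its fixed list are hypothetical, just to make the classical-logic framing concrete:

```python
# My old mental model: racism as a boolean predicate R over a set of scenarios S.
# Hypothetical sketch -- the fixed list is a stand-in for "obviously racist" cases.

OBVIOUSLY_RACIST = {"uses a slur", "explicitly advocates discrimination"}

def R(x: str) -> bool:
    """Classical logic: every scenario x is definitively racist or it isn't."""
    return x in OBVIOUSLY_RACIST

print(R("uses a slur"))                   # True
print(R("says 'I have black friends'"))   # False -- hence my old frustration
```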
It turns out I was wrong: they (they being those people I dismissively regarded as "SJWs") weren't trying to say that something in isolation was definitively racist or not. To them, it's very much a holistic, and sometimes probabilistic (more on this later), judgement: you have to look at what was said in the context of who the person is and what they could have meant. When something is said to be racist, it means something more like "this statement provides information that, taken in the context of the situation and the person (such as: is this person white?), would suggest the person holds racist views".
It's actually quite complicated and is akin to a sort of Bayesian statistical inference: with the new data we have (e.g. the person says "I have black friends"), we update our prior beliefs to infer the parameters of our model (e.g. a parameter R determining whether the person holds a particular racist view or not). So the "new data" is not by itself determining R (racism); rather, it represents a positive differential flow of information that helps us determine R.
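To make the analogy concrete, here's a minimal sketch of a single Bayesian update on a binary parameter R. Every number in it is invented purely for illustration:

```python
# Toy Bayesian update on a binary hypothesis R ("holds a particular racist view").
# All probabilities here are made up for illustration.

def posterior(prior_r, p_data_given_r, p_data_given_not_r):
    """Bayes' rule: P(R | data) from a prior and two likelihoods."""
    evidence = p_data_given_r * prior_r + p_data_given_not_r * (1.0 - prior_r)
    return p_data_given_r * prior_r / evidence

prior = 0.30          # belief about R before the statement, set by context
p_given_r = 0.60      # chance someone with R offers "I have black friends" defensively
p_given_not_r = 0.20  # chance someone without R says the same thing

print(posterior(prior, p_given_r, p_given_not_r))  # ~0.56: belief shifts, nothing is "proven"
```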
However, the way we actually present the result in language is entirely misleading. The language we use to call out racism doesn't accurately portray the complexity of the model; in fact, it further exacerbates a conflict of models. Saying that some element (of a situation, policy, or statement/action) is racist (e.g. saying "that was racist" in response to someone saying something... well, racist), which is a common turn of phrase nowadays, suggests that said element has some fixed quantity or quality of racism, as if racism were some kind of fungible property, like money.

That is: money can be added to other money in a context-free manner, because fungibility means "a dollar here is a dollar there". It doesn't really matter where you add the dollar; it increases the value of something by $1 no matter what pile of dollars you add it to*. Further, the fungibility property is suggestive of a linear system: an increase dx always results in the same increase dy, no matter which x you're at. And as I mentioned, that's not the framework in which the "woke folks" are operating: saying something (dx) is racist (dy) is not some universal property of that thing (dx), but rather an abbreviation for a much more complicated evaluation that depends on what the situation (x) is.
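Here's the difference in miniature (the scoring functions below are made up; only the shape of the contrast matters): under a linear model the same statement contributes the same amount regardless of context, while under a nonlinear model the contribution depends on where you start.

```python
# Fungibility as linearity: the same increment dx adds the same dy everywhere.
# Both "models" below are invented toys; only the contrast is the point.

def linear_increment(x, dx):
    f = lambda x: 2.0 * x          # linear "score" of a context
    return f(x + dx) - f(x)        # always 2*dx, independent of the context x

def contextual_increment(x, dx):
    f = lambda x: x ** 2           # nonlinear score: context changes everything
    return f(x + dx) - f(x)        # 2*x*dx + dx**2, depends on x

dx = 1.0
for x in (0.0, 5.0):
    print(x, linear_increment(x, dx), contextual_increment(x, dx))
# linear: 2.0 in both contexts; contextual: 1.0 at x=0 but 11.0 at x=5
```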
So as it turns out we're literally talking about linearity, and as it happens, the language the woke folks use just reinforces the linear model. It's a communication problem! And the only way to resolve a communication problem is true understanding, which can only be achieved by proper listening and going beyond the language. The language, as we mentioned, is flawed: it confirms the linear model.
The problem is, all of this is extremely difficult to understand. At least, it was for me! Someone who is not particularly invested in these issues (see: anyone who is not personally affected by racism on a day-to-day basis) has no reason to move from the simple, linear model to this complicated holistic model, especially when the language is not very helpful. It's much easier to stick with our old model, which tells us to disagree, perceive attacks, and accordingly take offense when "called out". And then we are surprised at the shouting match between two utterly divided sides.
Indeed, at the core of any communication problem is a listening problem. In the wake of George Floyd's killing, I think many of us have internally recognized the importance of just listening, and that it actually works. There's still a whole lot of the same old not listening, shouting, and bickering, but things are getting better in the sense that we are collectively building the vocabulary and concepts required to understand the complexities.
To further complicate matters, the detection of racist elements is entirely statistical; nothing is certain. For example, Trump saying something kind of suspicious and racist-sounding might get a pass once, but said enough times, we're pretty sure the dude is encoding racism. This is problematic because our entire culture, woke folks included, operates on the assumption of a "linear world" where making local judgements (i.e. evaluating individual statements, taking things out of context) is "good enough". This is, of course, in contrast to operating on a global/holistic level, taking things with context, appreciating nonlinearity, and embracing full complexity.
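This is exactly the repeated-update version of the earlier sketch: one borderline statement barely moves the posterior, but many of them compound. Again, all numbers are invented, and the updates assume each statement counts as an independent piece of evidence:

```python
# Sequential Bayesian updates: weak evidence compounds over repeated observations.
# Likelihoods are invented; each statement is treated as independent evidence.

def update(prior, p_given_r=0.5, p_given_not_r=0.3):
    evidence = p_given_r * prior + p_given_not_r * (1.0 - prior)
    return p_given_r * prior / evidence

belief = 0.10                     # weak prior suspicion
for n in range(1, 11):
    belief = update(belief)
    print(n, round(belief, 3))
# after 1 statement: ~0.16 (gets a pass); after 10: ~0.95 (pretty sure)
```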
A side note: linearity seems to be an especially American way of doing things. I wonder how much of it is rooted in the predominant American culture of pragmatism, where the world is simple and straight and things just are, no fuzzy convoluted stuff. It relates to a sort of engineering philosophy of modularity, where components do what components do, anywhere and anywhen. No weird coupling effects, no mess. And perhaps this is also why Americans are less "global-oriented" and sometimes insist on ignorance to the point of appearing to take pride in it. To continue with the geometric analogy, what might be happening is just an optimization, from an information-processing standpoint: there's no need to explore the whole manifold when you're assuming it's flat (linear), because local information (i.e. the gradient) yields everything you need to know about the rest of the manifold. In short, why explore the rest of a flat hill when you know exactly how high you're going to be at every point? Indeed, to Americans, maybe the rest of the world is basically just America (just shittier).
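The geometric point in one toy computation (my functions, not anyone's data): on a flat landscape, first-order extrapolation from a single point is exact everywhere; on a curved one, it drifts arbitrarily far from the truth.

```python
# First-order extrapolation: f(x) ~ f(x0) + f'(x0) * (x - x0).
# Exact when the landscape is flat (linear); wrong when it's curved.

def extrapolate(f, df, x0, x):
    return f(x0) + df(x0) * (x - x0)

flat   = lambda x: 3 * x + 1      # linear: the gradient at one point tells all
curved = lambda x: x ** 2         # nonlinear: local information is not enough

print(extrapolate(flat, lambda x: 3, x0=0, x=10), "vs actual", flat(10))        # 31 vs 31
print(extrapolate(curved, lambda x: 2 * x, x0=0, x=10), "vs actual", curved(10))  # 0 vs 100
```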
Furthermore, at the risk of sounding like I'm contradicting myself, it's not as if, just because we're doing statistics, nothing concrete can be said about racist statements. There are still enforceable "taboos", such as certain words and phrases that are said to be inherently racist.
But given that we just developed this whole idea of Bayesian statistics and context sensitivity, how could we possibly go back on it and justify calling things "racist" in a context-free manner? How can "things" be "racist", regardless of how or by whom or when they were said? Well, this is again a language problem -- "that's racist" is a pretty overloaded phrase. It's the "proliferation" argument at work: we should not say certain things simply because we don't want them to proliferate. To use a lateral example, using the word "gay" to describe something negatively isn't necessarily homophobic in the "statistical inference" sense, but we still call it out because we don't want that usage, and the negativity it attaches to the word, to proliferate.
Honestly, the whole thing is a little bit too complicated to express in the language of common discourse, which is why we struggle as a society today. The ironic thing is that most such things are simple to explain in the language of mathematics. It's an unfortunate reality, but usually what happens is that over time, the public learns, effectively recreating the equivalent structure in, literally, their own words. Sometimes the resulting new language and theory reflects the obvious mathematical structure; other times it's quite a bit different. More on these ideas later... maybe.
*Generally in finance though, if we go beyond dollar-speak, it turns out that financial decisions are not fungible either, even though we would like to think of them as such (it's a lot easier to calculate and optimize decisions when every aspect of a decision has a fixed cost). For example, we would like to think of purchasing a home as having some sort of fixed value, and one can indeed arrive at some kind of opportunity-cost number given some fixed time window like 5 years... but add in just a pinch of reality and you can ask questions like: if you didn't buy that home and invested the money instead, what then is the cost of the home after 5 years? What if you didn't invest it? Should we use that number? Or what if you "invested" in a lottery ticket? It becomes a little ridiculous if we go on, but the point is that it's not so simple, so when financial experts talk about the "opportunity cost" of a decision, or equivalently the "value" of the object being bought/sold, it's actually in a very particular context. It's important not to believe that a $ value applies universally to every financial situation and decision context.
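To illustrate (with entirely invented figures): the "opportunity cost" of the same down payment swings wildly depending on which counterfactual you assume.

```python
# Opportunity cost depends on the counterfactual. All figures invented.

down_payment = 100_000
years = 5

counterfactuals = {
    "cash under the mattress": 0.00,   # assumed annual return
    "index fund":              0.07,
    "lottery tickets":        -0.50,   # rough expected value
}

for name, r in counterfactuals.items():
    alt = down_payment * (1 + r) ** years
    print(f"{name}: the money would be worth ${alt:,.0f} after {years} years")
# Same decision, same dollars; the counterfactual value ranges from ~$3k to ~$140k.
```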