(For this entry, following the main topic of classifications by machines vs. humans, we consider classifications and their union with judgments, with an eye toward their role in life-altering decisions. It was inspired by a sermon given today by pastor Jim Thomas of The Village Chapel, Nashville, TN.)

In Genesis 25:29–34 we see Esau, firstborn son of Isaac, coming in from the fields famished. Finding his younger brother Jacob in possession of a hearty stew, Esau pleads for some. My paraphrase follows… Jacob replies, “First you have to give me your birthright.” “Whatever,” says Esau, “You can have it, just gimme some STEWWW!” …“And thus Esau sold his birthright for a mess of pottage.”

People are typically advised against making major, life-altering decisions while in a stressed state, examples of which are often drawn from the acronym HALT:

  • Hungry
  • Angry
  • Lonely
  • Tired

Sometimes HALT is extended to SHALT by adding “Sad”.

When we’re in these (S)HALT states, our brains operate by relying on quick inferences burned in via instinct or training. Sometimes this is called Type I reasoning. Type I includes the fight-or-flight response. While Type I is fast, it’s also prone to error, to oversimplification, and to operating on the basis of biases such as stereotypes and prejudices. Type I relies on only a tiny subset of the brain’s overall capacity, the part usually associated with the involuntary and regulatory systems of the body governed by the cerebellum and medulla, rather than the cerebrum with its higher-order reasoning capabilities and creativity. Thus trying to make important decisions (if they’re not immediate and life-threatening) while in a Type I state is inadvisable if waiting is possible. At a later time we may be more relaxed, content, and able to engage in so-called Type II reasoning, which can consider alternatives, question assumptions, perform planning and goal-alignment, display generosity, seek creative solutions, etc.

Machine Learning systems, other statistics-based models, and even rule-based symbolic AI systems, as sophisticated as they may currently be, are at best operating in a Type I capacity – to the extent that the analogy to the human brain holds, and we’re going to press it for now. In fact the analogy is the reason for this discussion, as AI/ML systems are increasingly placed in positions of serving as proxies for human reasoning, even for important, life-altering decisions. And as such, news stories appear daily with instances of ML systems displaying bias and unjustly employing stereotypes.
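To make the contrast concrete, here’s a minimal toy sketch in Python (every name and value is invented for illustration, not a model of any real system): a Type I decision as a single fast reflex lookup, versus a Type II decision that deliberates over alternatives against an explicit measure of long-term value.

```python
# Toy contrast between the two modes (all names and values hypothetical):
# "Type I" is a single fast, associative lookup; "Type II" deliberates,
# weighing every alternative against an explicit measure of long-term value.

from typing import Callable, Dict, List

def type_one_decision(stimulus: str, reflexes: Dict[str, str]) -> str:
    """Fast and automatic: one table lookup, no deliberation."""
    return reflexes.get(stimulus, "freeze")  # default reflex if nothing matches

def type_two_decision(options: List[str], value: Callable[[str], float]) -> str:
    """Slow and deliberative: compare all the alternatives before committing."""
    return max(options, key=value)

reflexes = {"smell of stew": "eat", "loud noise": "flee"}
print(type_one_decision("smell of stew", reflexes))  # 'eat' -- Esau's move

long_term_value = {"eat immediately": 1.0, "keep birthright, eat later": 10.0}
print(type_two_decision(list(long_term_value), lambda o: long_term_value[o]))
```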

So if humans are discouraged from making important decisions while in a Type I state, and machines are currently capable of only Type I… then why are machines entrusted with important decision-making responsibilities? This is not simply a matter of companies choosing to offer AI systems for speed and scale; governments do this too.

Government is a great place to look to further this discussion, because government bodies are chock full of humans making life-altering decisions (for others) on the basis of Type I reasoning – tired people implementing decisions based on procedures and rules. It’s called bureaucracy.[1] In this way, whether it is a human being following procedure or a machine following its instruction set, the result is quite similar. Companies have this too, often illustrated in the movies of Terry Gilliam (e.g. Brazil) as vast office complexes with desk after desk of office-drones. I don’t need to elaborate further, as you have probably either been seated at one of those drone-desks or been on the phone with someone who is.

A colleague who’s a professor of English sometimes works as a grader for the national Advanced Placement exam, for which graders are given an extensive set of rules in order to ensure consistency and “fairness.” When I raised the prospect of someday having essays graded by sophisticated NLP systems, he confessed, “What we do is so codified that we are literally just going down a checklist and following a set of rules. A monkey could do what we do.” The prospect of having this task – scoring essays for the hallowed AP English exam! – performed by machines instead of humans did not faze him in the slightest. Even given my penchant for techno-optimism, I was a little surprised, and a little saddened.
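His description suggests that the grading procedure, whoever executes it, reduces to something like the following sketch – a hypothetical rubric (the items and point values here are invented) applied as a pure checklist of yes/no rules:

```python
# Hypothetical rubric-based essay scoring: each rule is a yes/no check with a
# point value, applied in order -- a procedure a human grader (or a machine)
# can follow without exercising any Type II judgment.

RUBRIC = [
    ("has a clear thesis statement", 2),
    ("cites at least two passages from the text", 2),
    ("paragraphs follow a logical order", 1),
    ("fewer than five grammatical errors", 1),
]

def grade(essay_checks: dict) -> int:
    """Sum the points for every rubric item the essay satisfies."""
    return sum(points for item, points in RUBRIC if essay_checks.get(item, False))

checks = {
    "has a clear thesis statement": True,
    "cites at least two passages from the text": True,
    "paragraphs follow a logical order": False,
    "fewer than five grammatical errors": True,
}
print(grade(checks))  # 5 out of a possible 6
```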

The motivation for the present inquiry is what is different about having machines do things – specifically, perform classifications (judgments, grading, etc.) – that have heretofore been done by humans. If the system is already “soulless,” then what do we lose by having the “human cogs” in the bureaucratic machine replaced by machines?

This question of automation echoes concerns about factory manufacturing and the replacement of human assembly-line workers with robots. Like the automation of bureaucracy, automated manufacturing offers consistency (a rudimentary form of value alignment), speed, and scale. In some cases, the use of robots alleviates the risk of placing humans in hazardous conditions (e.g. welding, toxic environments). However, while the drudgery of most office jobs may not exactly contribute to flourishing mental health, it’s hard to imagine such conditions being deemed “hazardous” in most cases.

Well, we’re leaving out “cost saving,” of course. One might say this is the “real reason” we see so much automation. That, and the fact that many people actually prefer to deal with a machine at times rather than a person. I confess that I tend not to be one of them, given my easy irritation at the software design and user interface choices of many companies and government bodies. I find I am usually spamming the “talk to a human” button – if such a function is even offered! – because the issue I’m having does not fit the predefined categories of the bureaucracy’s phone menu and/or I’m having problems using the website. Even the latest NLP-powered chatbot helper-interfaces one finds are nothing more than natural language front ends to menus, as the sketch below illustrates. Whereas with a human being, you can explain your situation, and they can work with you. You’ve all experienced this; I’m saying nothing new, but I am pointing out what is different – we might even say what is lost – when we transition from “human cogs” to software (or neural network) “cogs.”
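Here’s a minimal sketch of that routing pattern (the categories and keyword table are entirely invented): free text gets mapped onto a fixed set of categories, and anything outside them falls through.

```python
# Hypothetical support bot (categories and keywords invented): despite the
# conversational veneer, it can only map free text onto a fixed menu.

MENU = {
    "billing":  ["invoice", "charge", "refund", "payment"],
    "shipping": ["delivery", "tracking", "package"],
    "account":  ["password", "login", "username"],
}

def route(message: str) -> str:
    text = message.lower()
    for category, keywords in MENU.items():
        if any(keyword in text for keyword in keywords):
            return f"Routing you to {category} support."
    # The failure mode described above: a situation outside the categories.
    return "Sorry, I didn't understand. Options are: billing, shipping, account."

print(route("I was double charged last month"))        # matches 'billing'
print(route("Your site rejects my disability claim"))  # falls through the menu
```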

The middle ground for companies seems to be hiring low-wage human workers from economically depressed parts of the globe to serve as remote phone support, but governments typically would not have such an option. I suspect that in the next ten years we will see machine systems making increasing forays into Type II reasoning categories (e.g., causality, planning, self-examination). I’m not sure how I feel about the prospect of pleading with a next-gen chatbot to offer me an exception because the rule shouldn’t apply in this case, or some such. ;-) But it might happen – or more likely such a system will be used to decide whether to kick the matter up to a real human.
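If such triage does arrive, the handoff might look something like this sketch (the threshold value and the stand-in classifier are assumptions for illustration, not any real product’s interface):

```python
# Hypothetical triage: automate only high-confidence decisions, and kick
# everything else up to a real human. Threshold and classifier are invented.

from typing import Callable, Tuple

CONFIDENCE_THRESHOLD = 0.9  # assumed cutoff; a real system would validate this

def handle_request(request: str,
                   classify: Callable[[str], Tuple[str, float]]) -> str:
    decision, confidence = classify(request)
    if confidence >= CONFIDENCE_THRESHOLD:
        return f"Automated decision: {decision}"
    return "Escalating to a human agent."  # the rules didn't clearly apply

def toy_classifier(request: str) -> Tuple[str, float]:
    """Stand-in model: confident on routine requests, unsure on exceptions."""
    return ("approve", 0.97) if "renewal" in request else ("deny", 0.55)

print(handle_request("standard license renewal", toy_classifier))  # automated
print(handle_request("hardship exception plea", toy_classifier))   # human
```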

Summary:

We began by talking about Jacob and Esau: Jacob, the creative, swindling deal-broker, and Esau, who quite literally “goes with his gut.” Then we talked about Type I and Type II reasoning, noting that machines currently can do Type I really well. The main question was: if humans are known to make numerous erroneous and unjust decisions in a Type I state, how is the use of machines justified? And the easy answers available seem to be a cop-out: the incentives of scale, speed, and lower cost. And this is not just “capitalism”; these incentives would still be drivers in a variety of socio-economic situations. One other answer came in the form of bureaucracy: Type I reasoning is already heavily used, just with humans doing it. We explored what’s different between a bureaucracy implemented via humans vs. machines, and found that what is lost is the humans’ ability to transcend, if not their authority in the organization, at least the rigid and deficient software designs imposed by vendors of bureaucratic IT systems. Predicting how the best of these systems will improve in coming years is hard, but given the prevalence of shoddy software in widespread use, I look forward to talking to Dinesh in Mumbai rather than Erica the Bank of America Chatbot for quite some time.

P.S.-

Someone will point out that the (S)HALT state is a consequence of humans being embodied, whereas AI systems are not embodied and thus suffer no such afflictions or degradations of their performance. But this is immaterial (pun… not really… intended) – what matters is not how humans arrive at the restricted state in which only Type I is employed (or available); rather, it is simply the observation that Type I can be dangerous for anything other than time-critical applications, together with the unavailability, for the foreseeable future, of any machine-based systems offering the more expansive & important faculties of Type II reasoning.

  1. Literally “government by the desk,” a term coined as a pejorative by the 18th-century French economist Jacques Claude Marie Vincent de Gournay, but which has since entered common usage.