
Update: FINAL SUBMITTED VERSION here: At Google Docs.

1. Musical Intro

I became interested in machine learning (ML) and its attendant conversations on “ethics and society” fairly late in my academic career. Thus I am no longer young and cool – e.g., no longer suitable for helping with the “youth ministry” at church. As a young convert in college,1 however, one of the things that aided my assimilation and promotion of a Christian worldview was the burgeoning commercial subculture of “Contemporary Christian Music” (CCM). [TODO: The CCM domain was largely conceived as a Christian alternative to the secular entertainment industry, a point we will revisit later.]

By the time I became a consumer on the CCM scene, much of this music had already become corporate and saccharine, with a massive network of radio stations across the land eager to cash in on the non-denominational Christian subculture trading in music, t-shirts, and even chain stores at “the mall.” Yet one star of the CCM airwaves was the sincere and extremely talented songwriter Steven Curtis Chapman, who penned pop songs, love songs, and rousing inspirational songs, all aligning with his motto, “to challenge and encourage.” He was the real deal. His first hit album was 1989’s “More to This Life,” which earned RIAA Gold status and featured the challenging song “Who Makes the Rules” (on which my Belmont University colleague James Elliot shares co-writing credit!), which expressed the concern:

“I guess the one thing that’s been bothering me the most is when I see us playing by the same rules that the world is using.”

The idea of “rules” merits exposition, as this essay will progress by drawing a series of parallels between Christian traditions and the history of Artificial Intelligence (AI) development, up to and including current conversations regarding bias, fairness and accountability.

2. Rule-Following

Christian traditions generally emphasize the superior role of salvation by faith in the grace purchased by Christ over the requirements of The Law handed down to Moses. The Law was a set of explicit rules for hundreds of various occasions, governing all areas of life for the Hebrew [or Jewish?] people. [TODO: say more!….]

AI emerged as a formal field of inquiry in the 1950s, most famously in the “2 month, 10 man” program at Dartmouth in the summer of ‘56. Although the emphasis on developing systems that learn has been a significant part of AI research since its inception – e.g., Arthur Samuel coined the term “Machine Learning” as early as 1959 – the products impacting businesses and consumers in the following few decades tended to involve the application of preprogrammed knowledge and procedures, which is to say that they were “rule-based.” Examples include the “expert systems” that proliferated in the 1980s and could make recommendations by asking the user a series of questions, and Joseph Weizenbaum’s “ELIZA” chatbot, which could carry on conversations by applying preprogrammed linguistic procedures in response to inputs.
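To make the rule-based flavor concrete, here is a minimal ELIZA-style sketch in Python. The patterns and canned responses are invented for illustration (they are not Weizenbaum’s actual script), but the mechanism – hand-written pattern-to-response rules – is the same:

```python
import re

# Preprogrammed rules: each pairs a hand-written pattern with a response template.
RULES = [
    (re.compile(r"\bI am (.+)", re.I), "Why do you say you are {0}?"),
    (re.compile(r"\bI feel (.+)", re.I), "How long have you felt {0}?"),
]

def respond(text):
    """Apply the first matching rule; fall back when no rule applies."""
    for pattern, template in RULES:
        m = pattern.search(text)
        if m:
            return template.format(m.group(1))
    # This fallback is where the "brittleness" shows: anything the
    # designers didn't anticipate gets the same canned non-answer.
    return "Please tell me more."

print(respond("I am worried about AI"))  # Why do you say you are worried about AI?
print(respond("The weather is nice"))    # Please tell me more.
```

Note how every competence the system has must be explicitly anticipated and written down in advance – a preview of the brittleness discussed below.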

Problems with Rule-Following

Rule-based systems suffer from defects. One is simply the multiplicity of rules required to deal with every possible eventuality. More specifically, the Christian and AI traditions each highlight particular difficulties with rule-based systems.

In the Christian tradition, Paul teaches that The Law failed to produce righteousness – even calling The Law “powerless” – and was instead responsible for generating condemnation and death as a result of sin. The Law serves as an instrument for the Accuser (cite). It was added so that the world would know its need for Christ: “For the law was our schoolmaster to lead us to Christ” [cite]. The prophet Jeremiah said instead that someday the Law would be “written on their hearts.” Jesus noted that the rule-based followers of the Law, the Pharisees, pursued the “metric” of rule-following but not the inner transformation that produces righteous behavior.

This is an instance of Goodhart’s Law: any metric will become the goal. It applies to AI systems as well. In AI, rule-based systems suffered from difficulties in knowledge acquisition, and were famously “brittle,” failing catastrophically when encountering phenomena outside the bounds of what was expected at the time of design.

3. Transition to “Inferred Rules”

3a. In Christianity

In Christianity, we are encouraged to follow the Holy Spirit as an alternative to the Law – or as Paul puts it, to follow the law of the Spirit rather than the law of sin and death. Doing so can empower us, to borrow a quote from the 12 Step tradition, to “intuitively know how to handle situations which used to baffle us.” This amounts to inferring the proper course of action in a holistic response to stimuli, based on who we are and who God is, and our relationship together. One challenge with this is “explainability”: it can be hard to quantify and articulate how this process occurs. In the words of another CCM title from the early ’90s:

It’s just a Spirit thing, It’s just a holy nudge,… It’s just a little hard to explain.

– “Spirit Thing,” Newsboys (1994)

3b. In AI: Machine Learning

A similar transition occurred in AI research from roughly the mid ’00s on, when the amount of data available (thanks to the internet) and the speed of computer hardware allowed ML methods to come into their own. Machine Learning has turned programming paradigms on their heads, as illustrated in the pair of diagrams in the Google Codelabs tutorial for TensorFlow [1]:

[Figures: “Traditional Programming” vs. “Machine Learning” diagrams from the TensorFlow Codelabs tutorial]

Here we see that with traditional programming, rules were essentially “inputs” for the system, whereas with machine learning, rules can be regarded as products. Thus with ML, rules are “made” in the sense of being manufactured, rather than made in the sense of being specified by fiat.
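The inversion can be sketched in a few lines of Python: instead of hand-coding the rule, we fit it from (data, answers) pairs. The toy task of learning y = 2x − 1 echoes the Codelabs tutorial [1], but the plain least-squares fit below is an illustrative stand-in, not the tutorial’s actual TensorFlow code:

```python
# Traditional programming: the rule is written by hand; data flows through it.
def handwritten_rule(x):
    return 2 * x - 1

# Machine learning: the rule is inferred from (data, answers) pairs.
xs = [-1, 0, 1, 2, 3, 4]
ys = [handwritten_rule(x) for x in xs]   # answers: -3, -1, 1, 3, 5, 7

# Ordinary least squares for a line: slope = cov(x, y) / var(x).
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
intercept = my - slope * mx

print("learned rule: slope", round(slope, 3), "intercept", round(intercept, 3))
# learned rule: slope 2.0 intercept -1.0
```

The learned slope and intercept *are* the manufactured rule: nobody typed “2x − 1” into the learning side of the program; it emerged from the examples.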

Since the rule-products are typically inferred via complicated ML models with millions of parameters, these systems suffer a deficit in the area of explainability. Much of the conversation about fairness in AI has been devoted to explainability and transparency, the idea of giving users sufficient knowledge to understand the mechanism driving models’ decisions. Apart from model complexity, it is even harder to tell how the rules came to be because the data and answers supplied to the models are themselves products of a lengthy worldwide supply chain [2], which may involve crowdsourced datasets, Amazon Mechanical Turkers, gig economy serfs, etc. Who has the means to enact these massive rule-manufacturing enterprises? Who labels the training data – and who even cares about the provenance of the training data? My recent experience in an online course on Natural Language Processing [3] suggests that such concerns are rarely at the forefront of instruction in ML development, yet these concerns have widespread cultural impact, particularly for minority segments of the population.

4. Cultural Dissonance

4a. ML Systems and Cultural Biases

As ML systems make inferences, they necessarily encode the biases present in the communities that make up the supply chain. This may take the form of making summary judgments about “positive” and “negative” sentiments expressed in tweets, yielding lower predicted scores for expressions of minority status in racial (Black or Hispanic) or religious (Jewish, Muslim) terms [4]. Or it may involve performing word associations (e.g., analogies) using “word vector embeddings” [5–6]. Or perhaps using historical data about (mostly male) hiring practices to predict current hiring outcomes [7].
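As a toy illustration of the analogy mechanism referenced in [5–6]: the 3-D “embedding” vectors below are invented for this sketch (real embeddings have hundreds of dimensions and are learned from large corpora), and they are deliberately rigged to reproduce the well-known “man : programmer :: woman : homemaker” result.

```python
import math

# Invented toy "embeddings" -- NOT from a real model.
emb = {
    "man":        [0.9, 0.1, 0.1],
    "woman":      [0.1, 0.9, 0.1],
    "programmer": [0.8, 0.2, 0.9],
    "homemaker":  [0.2, 0.8, 0.9],
}

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# Analogy arithmetic: "man is to programmer as woman is to ___?"
# query = woman - man + programmer; answer = nearest remaining word.
query = [w - m + p for m, w, p in zip(emb["man"], emb["woman"], emb["programmer"])]
best = max((word for word in emb if word != "programmer"),
           key=lambda word: cosine(query, emb[word]))
print(best)  # the rigged toy vectors make this "homemaker"
```

In real embeddings, associations like this arise not by rigging but from whatever regularities (and prejudices) the training corpus contains.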

As a recent Twitter dust-up between Turing Award winner Yann LeCun and AI fairness researcher Timnit Gebru illustrated [8], the perception that such biases are only the result of biased datasets is a strong one, and yet implicit biases are encoded at multiple stages of model development.

There are recent efforts to “de-bias” such models [cite], and yet I contend that this amounts simply to a “re-biasing” – depending on what one means by the word “bias.” As a massive literature survey by Microsoft recently showed, the meaning of “bias” is rarely clarified [9]. The de-biasing efforts operate in the “mathematical” sense of enforcing a symmetry or invariance of outputs with respect to changes of inputs (e.g., male -> female, black -> white). But “bias” also means, more generally, any of the implicit assumptions one carries into an enterprise, and enforcing symmetries is itself an execution of such a bias in the latter sense.
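The “mathematical” sense of de-biasing can be sketched as a swap-and-symmetrize operation. Everything below is hypothetical: the lexicon values and the choice of which terms to swap are invented for illustration, not drawn from any real system.

```python
# A hypothetical sentiment lexicon with a baked-in bias (all values invented).
LEXICON = {"great": 1.0, "terrible": -1.0, "muslim": -0.3, "christian": 0.0}

def score(text):
    """Toy sentiment score: sum of per-word lexicon values."""
    return sum(LEXICON.get(w, 0.0) for w in text.lower().split())

# "De-biasing" in the mathematical sense: enforce invariance of outputs
# under identity-term swaps (cf. male -> female, black -> white above).
SWAPS = {"muslim": "christian", "christian": "muslim"}

def debiased_score(text):
    swapped = " ".join(SWAPS.get(w, w) for w in text.lower().split())
    return (score(text) + score(swapped)) / 2  # symmetrize over the swap

print(round(score("great muslim neighbor"), 2),
      round(score("great christian neighbor"), 2))           # biased scores differ
print(round(debiased_score("great muslim neighbor"), 2),
      round(debiased_score("great christian neighbor"), 2))  # symmetrized: equal
```

Note that the contents of SWAPS are themselves a value judgment about which attributes ought to be interchangeable – which is precisely the “re-biasing” point: enforcing the symmetry is the execution of an assumption, not the absence of one.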

4b. Christians & Culture & Pluralism

mention MacIntyre’s After Virtue [10]… Inevitable conflicts? Sargeant [11]

My email to Yancey:

…probably few if any people will be interested in addressing such effects as they relate to Christians, given my observations as part of the AI R&D community and the “liberal” culture in Silicon Valley and academia, as well as drawing a few parallels with the entertainment industry. I can list a few anecdotes – such as signers of the Southern Baptists’ statement on AI Ethics not listing their company affiliations for fear of repercussions, or the episode of HBO’s “Silicon Valley” that deals with anti-Christian bias directly, or surveys of “closet conservatives” in tech companies

Examples: Content Moderation & Moral Quandaries

Note that content moderation (e.g., of hate speech) is a big topic right now.

using or even moral quandaries [12]. [TODO: mention Moral Machine or is that overkill?]

Labeling: A similar sentiment was expressed by fellow ’90s CCM artists DC Talk:

“To label wrong or right by the people’s sight2 is like going to a loser to ask advice.”

– “Socially Acceptable,” DC Talk (1992)

5. Cross-Pollination

5a. Christianity –> ML

  1. work against bias that hurts people. celebrate diversity. be ethical voices, cite Anaconda survey (??)
  2. One big area of concern: content moderation, in light of cultural dissonance. Note Facebook workers censoring Trump, worship leader getting blocked
  3. note Christians not included re. “representation” but can try to be. Note Black researchers have been doing great work in this area, e.g. [13]. Few Christian voices, notably sociologist George Yancey (missing reference).

5b. ML –> Christianity [TODO: Is this stuff even relevant to main thrust of article??]

  1. inferred rules can change:
    1. training/renewing our mind, beholding Christ, etc. virtues as habits (spiritual disciplines) via reading scripture, prayer meditation
    2. Spirit can reset our weights (transfer learning!) inner healing, “this is not that”
  2. unlike Amazon’s hiring algorithm, by faith we infer based on not our past history but ideally? record of Bible as dataset, we choose to infer based on a model of the world we want rather than the world we have?
  3. Inferred rules may not match our explicit sense of rules. Paul in Romans…7? James KA Smith essay on practice vs mere belief.

6. Closing Remarks

Maybe the Salt & Light angle? : People [like me] need to not become so techy & embedded in secular thinking that we have nothing to offer but a carnal Christian-culture-flavored version of the world’s methodologies. At the same time, forming our own CCM-like “CCAI” seems…[TODO stupid], although we do have AIandFaith, FaithTechHub, etc… (not to suck up, but… ;-) )

  1. I grew up in church but only came to personal faith in college. 

  2. I always misheard this lyric as “the people of sight,” i.e. those who live by sight rather than faith, meaning non-believers.