
Intent:

Intended for FaithTech’s Writing Contest, deadline August 31, 2020, 700–1800 words.

  • My closest “category” from their instructions: “2. Ethical Challenges Created by Emerging Technology:
    • What responsibility do Christians have to identify and address algorithmic bias in policing and legal proceedings?
    • What values are embedded in Artificial Intelligence, and how do they align with Christian virtues?”

Outline? (Note New Outline here)

The following, currently rambling, essay could benefit from some STRUCTURE. So let’s try to outline such a structure before editing the essay:

  1. Start with the Steven Curtis Chapman song, “Who Makes the Rules,” as a jumping-off point

    1. SCC’s question: Is it Jesus or the world that we let tell us how to live?
    2. And yet, the transition from law/rules in the Old Testament to Spirit in the New Testament (like “inference” in ML?). Newsboys song, “Spirit Thing”: “it’s just a little hard to explain”
    3. Can mention training the brain – “transformed by the renewing of the mind,” via reading scripture, prayer, meditation – beholding Christ, etc. [Save for Summary?]
  2. History of AI Development, esp. ’80s-now

    1. Expert systems in the ’80s (data & rules as inputs and labels as outputs)
    2. Transitioned to ML (inference) now (data & labels as inputs and inferences/rules as outputs). I will claim that this transition bears similarities to Law->Spirit in Christian tradition.
    3. Training on data determines the weights of the neural network. Decisions aren’t explainable. This can be problematic…for various reasons, e.g. GDPR. But human decisions aren’t always explainable, or at least the explanation given may not be provably ‘true’/reliable.
    4. And yet ML systems show “bias” (meaning what?). Societal biases – against gender, race, religion.
  3. Applications (/ Synthesis?)

    1. We want to love our neighbor. And be part of conversations and actions for removing abuse.
      1. Somewhere: Note that tech is often anti-Christian (cite Silicon Valley episode), and religion is rarely on the list of things to “check” re. diversity? If I include this, try not to sound too victim-y.
    2. Note that we have cultural conflicts. MacIntyre: After Virtue [1], no agreed-upon set of rules; and Whose Justice? Which Rationality? [NOTE: is MacIntyre so famous that we can just mention him without even citing him, i.e. would citing him be gratuitous?] One idea: perhaps derive rules from crowdsourcing (Moral Machine)? Likely doomed. So is it just a matter of power struggles, rights, law? Sargent: “AI Ethics: Send Money, Guns and Lawyers” [2]
    3. Different angle: inner healing. Encoding past experiences vs. faith for more, akin to Amazon’s model favoring men because of past experiences.
    4. Kind of relevant, maybe worth mentioning: We have some Christian-themed entertainment options. Will we see Christian-themed AI models? Presumably AIs used in ‘ministry’, e.g. chatbots, would display secular biases if they’re not retrained? Oooo. So, anti-Uighur and anti-Jewish biases have come up in models. Has anything actually been studied re. anti-Christian bias? Seems very likely.
  4. Summary / Issues with this essay.

    1. “So what is your POINT?” Do I have a single point? Does my essay need to have “a point” or can it just explore certain avenues related to a topic? I guess the point is “Who makes the dataset?” ;-) Or maybe the points are:
      1. there are powerful similarities between Christian tradition and AI development.
      2. questions about implications of AI should involve Christians.
      3. even if cultural values will inevitably be in conflict.
      4. …??? Profit.
      5. Perhaps I could use the example of content moderation on social media as a unifying theme.
    2. Also: Salt & light: People like me need to not become so techy & embedded in secular thinking that we have nothing to offer but a carnal Christian-culture-flavored version of the world’s methodologies.

ESSAY BEGINS HERE (~1500 words):

I am no longer young and cool, so no longer suitable for doing youth ministry. [TODO: cut this? does anyone care?] But back in my day, I listened exclusively to that new, subversive, and life-giving medium known as “Christian Rock,” which arguably, by the time I was listening in earnest, had already become corporate and [saccharine?], with a massive network of radio stations across the land eager to cash in on non-demoninational [sic] Christian subculture. One of the kings of the Contemporary Christian Music (CCM) airwaves back then was the extremely talented and sincere songwriter Steven Curtis Chapman (SCC), who penned pop songs, love songs, inspiring songs, …yea. His personal motto was “to challenge and encourage.” His first big hit (RIAA Gold) album was 1989’s “More to This Life,” which had some great songs on it. One of them was “Who Makes the Rules” (co-written with my Belmont colleague James Elliot!), which expressed the concern:

“I guess the one thing that’s been bothering me the most is when I see us playing by the same rules that the world is using.”

Around that time, in the 1980s, movers & shakers in the world of technology were exploring the development of “expert systems,” a form of AI in which the decision processes of experts were codified into a set of rules a computer program could follow, making this expertise available to a wide customer base. [I just used a successor to these systems, the rule-based software known as TurboTax, to finish my taxes.]

The idea was that some expert codifies his or her values, which then become “law” to the companies and customers who elect to use the system. And “law” is apt, as there were expert-system attempts to automate legal proceedings. For the most part, these failed spectacularly.
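
To make that style concrete, here is a toy sketch, invented purely for illustration (not from any real product, though the threshold is the 2020 single-filer standard deduction): the “expertise” is hand-coded as explicit rules, and data plus rules produce a label.

```python
# A toy expert system: the expert's decision process is hand-coded as
# explicit rules. Data and rules go in; a label comes out.
STANDARD_DEDUCTION = 12400  # 2020 single-filer figure

def deduction_advice(itemizable_expenses: float) -> str:
    """A rule as a (hypothetical) tax expert might encode it."""
    if itemizable_expenses > STANDARD_DEDUCTION:
        return "itemize"
    return "take the standard deduction"

print(deduction_advice(itemizable_expenses=9000.0))
# -> "take the standard deduction"
```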

TODO: finish this, or write some kind of transition?

Fast forward 30 years, and machine learning (ML) is taking over the world. One sees the basic methodology of many ML systems contrasted with previous methods, for example in the introductory documentation for the popular TensorFlow deep learning library, on the question of how to classify or label data: ‘It used to be that you would supply data and rules, and get out labels. Now with ML, we supply data and labels and get out rules’ [3].
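
A minimal sketch of that inversion, assuming TensorFlow is installed: the training pairs below happen to obey the rule y = 2x - 1, but that rule appears nowhere in the code; the one-neuron model infers it.

```python
# Data and labels in, "rules" out: a one-neuron network infers y = 2x - 1
# purely from example pairs (in the spirit of TensorFlow's intro tutorials).
import numpy as np
import tensorflow as tf

xs = np.array([-1.0, 0.0, 1.0, 2.0, 3.0, 4.0])   # data
ys = np.array([-3.0, -1.0, 1.0, 3.0, 5.0, 7.0])  # labels (follow y = 2x - 1)

model = tf.keras.Sequential([
    tf.keras.Input(shape=(1,)),
    tf.keras.layers.Dense(units=1),  # the learned "rule" lives in 2 weights
])
model.compile(optimizer="sgd", loss="mean_squared_error")
model.fit(xs.reshape(-1, 1), ys, epochs=500, verbose=0)

print(model.predict(np.array([[10.0]]), verbose=0))  # ~19.0, i.e. 2*10 - 1
```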

These rules are often opaque, relying on the machinations of neural network models containing millions of parameters; the decisions of such models are regarded as not “explainable,” which can be problematic when one wants to understand, for example, why one was denied a bank loan (e.g., were race or gender factors?).
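
As a hypothetical illustration of that gap, consider the same invented “loan” data fit two ways with scikit-learn: the linear model’s reasons can be read directly off its coefficients, while a small neural network smears the same decision across thousands of weights.

```python
# Hypothetical loan data: the linear model is inspectable, the network is not.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 3))  # invented features: income, debt, tenure
y = (X[:, 0] - X[:, 1] + 0.1 * rng.normal(size=2000) > 0).astype(int)

linear = LogisticRegression().fit(X, y)
# Coefficients map one-to-one onto the features: a human-readable "why".
print(dict(zip(["income", "debt", "tenure"], linear.coef_[0].round(2))))

mlp = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=500).fit(X, y)
# The same "why" is now distributed across thousands of weights.
print(sum(w.size for w in mlp.coefs_), "weights, none individually meaningful")
```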

The outputs of automatic classification models have been shown to exhibit the same sorts of biases that are present in society at large, such as differing sentiments toward groups depending on race, gender, or religion. [cite?] This is largely because the models are trained on datasets containing such biases, usually unintentionally. For example, Amazon’s hiring model was shown to favor men over women, largely because it was trained on a dataset going back many years, during which time the workforce was predominantly male.
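
A hypothetical sketch of how that happens: train a model on synthetic “hiring” data in which past decisions favored men, and the model, doing its job faithfully, learns the bias as just another rule.

```python
# Synthetic historical hiring data: past decisions depended on skill AND gender.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 5000
skill = rng.normal(size=n)
is_male = rng.integers(0, 2, size=n)
# The "labels" encode the old bias: being male boosted the odds of hiring.
hired = (skill + 0.8 * is_male + rng.normal(scale=0.5, size=n) > 0.8).astype(int)

model = LogisticRegression().fit(np.column_stack([skill, is_male]), hired)
print("learned weight on skill:  ", round(model.coef_[0][0], 2))
print("learned weight on is_male:", round(model.coef_[0][1], 2))  # > 0: bias kept
```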

A recent Twitter brouhaha between Turing Award winner Yann LeCun and AI ethics researcher Timnit Gebru was something to behold [4]. TODO: say more

Similarly, word embeddings, used in the modeling of language, have been shown to encode analogies that reflect societal prejudices, such as “computer programmer is to man as homemaker is to woman” [5–6].
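
One can probe for such associations directly. Here is a sketch assuming gensim is installed (the pre-trained Google News vectors are a large one-time download):

```python
# Probing a pre-trained embedding model for analogy-style associations.
import gensim.downloader as api

vectors = api.load("word2vec-google-news-300")  # large download on first use

# The classic probe: man : computer_programmer :: woman : ?
print(vectors.most_similar(positive=["computer_programmer", "woman"],
                           negative=["man"], topn=3))
```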

Increasingly, the major tech platforms – Facebook, Twitter, Reddit, and others – are under pressure to put greater effort into content moderation, the filtering of offensive, misleading, or dangerous speech. Given the scale of these platforms, they rely in part on automated detection methods, which classify posts and can automatically reject them. This has proven problematic: recently the worship leader [so and so] found his posts being rejected, and some Facebook moderators have gone on the record saying they automatically censor any pro-Trump content. Similarly, the question of what counts as “hate speech” can reflect ideological biases, as when one Christian’s sincere rejection of homosexuality on the basis of Biblical norms is regarded as hate speech by members of the LGBTQ+ community.

In order to better understand the methods and foibles of automated moderation systems, I recently enrolled in the first course of the Natural Language Processing (NLP) Specialization on Coursera [7]. One of the first assignments was sentiment classification of tweets – described by others as the “hello world” of NLP (i.e., the basic introductory program that all students learn to write). Such systems have been shown to encode negative biases like those described above. For this assignment, we downloaded a pre-made dataset of 5000 “positive” tweets and 5000 “negative” tweets. How this dataset was created, we were not told. Generally, positive words like “happy” and happy-face emojis were a dead giveaway for positive tweets, and words like “sad” or frowning emojis were clear indicators of negative tweets. (Let’s leave aside the question of whether “sad” is a negative emotion.) We could just as easily have been classifying the sentiment of movie reviews, such as the “fresh” or “rotten” summary labels on the site RottenTomatoes.com [8]. This binary classification problem is reductionistic, and yet for the purposes of determining policy, many decisions tend to be all-or-nothing and thus fit this binary reduction.
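
For reference, here is a minimal sketch of that “hello world,” using NLTK’s twitter_samples corpus (5000 positive and 5000 negative tweets, matching the assignment’s counts) with an off-the-shelf scikit-learn classifier in place of the course’s from-scratch implementation:

```python
# Tweet sentiment "hello world": bag-of-words features plus a linear model.
# Words like "happy" and ":)" become dead-giveaway columns in the matrix.
import nltk
from nltk.corpus import twitter_samples
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

nltk.download("twitter_samples", quiet=True)
pos = twitter_samples.strings("positive_tweets.json")
neg = twitter_samples.strings("negative_tweets.json")
texts = pos + neg
labels = [1] * len(pos) + [0] * len(neg)  # who decided these labels?

X_train, X_test, y_train, y_test = train_test_split(
    texts, labels, test_size=0.2, random_state=42)

vectorizer = CountVectorizer()
clf = LogisticRegression(max_iter=1000)
clf.fit(vectorizer.fit_transform(X_train), y_train)
print("test accuracy:", clf.score(vectorizer.transform(X_test), y_test))
```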

When I asked on the class forums, “Who decides whether a given tweet is positive or negative [when constructing the dataset]?” the answers I received were purely of a technical nature. The questions of representation, of diversity, and of mediating conflicting communities’ conceptions were absent. This sort of narrowly technical focus has been called out by many, such as Kate Crawford.

TODO: fill in somehow?

All that to say: in our current time frame, “Who Makes the Rules?” increasingly becomes “Who Makes the Dataset?” by which the rules are derived. These datasets form a crucial part of the supply chain of machine learning practitioners. They encode societal biases, and are corrected only when sufficient complaint is made. Anti-religious sentiment in models is already documented in the case of Uighur Muslims in China. Many technologists are atheist and even militantly anti-religious, including anti-Christian. One sees bigoted anti-religious statements in many areas of academia and industry, and these biases are likely going unchecked in the development of ML models. Last year on Reddit, a German researcher advertised his group’s project by posting a lengthy, profanity-laden denunciation of the Catholic church; it was surprising to see no one object to this. TODO: might want to avoid sounding like victimhood.

This focus on rules for scaling policies to large groups with minimal personal interaction need not be the only methodology. In the New Testament we read Paul saying that the law with its rules could never transform us, and that we need the Holy Spirit; that the Law was a schoolmaster prior to Christ, in whom we now have freedom. We read elsewhere (Jeremiah 31:33, echoed in Hebrews) that the law will become “written on their hearts” – i.e., to use computer science lingo, encoded into the weights of our neural networks. The result may not qualify as “explainable,” and yet it is life-giving: where the law brings death, the Spirit brings life. Echoing this, another CCM title from the same era as the SCC song was “Spirit Thing” by the Newsboys:

“It’s just a Spirit Thing, it’s like a holy nudge… it’s just a little hard to explain”

…TODO: and what about that?

TODO: finish the essay somehow. Not sure where to take it from here. So we’ll switch to Random Thoughts:

Random thoughts:

  • Should I re-title this “Who Makes the Dataset?” or “Who Labels the Dataset?” I say no, because the dataset is but one means by which we derive the rules. And perhaps the focus on rules is improper given later New Testament remarks?
  • Christians do not play by the same rules the world is using. We do not have the same dataset. Is this too concerned with ethics and society?
  • …And yet, even as we have concern for the world around us, are we not members of a different society?
  • What about inference on past data in our own lives, vs. the faith to believe God at his word? …and the values we want? Does this amount to faith in a different, better world, even on the part of secular ML designers?
  • Do I want to say anything about sensors and cat vision?
  • Mental renewal & inner healing have the capacity to rewrite the network weights in our brains, thus changing the derived rules.

References