Finally Started MacIntyre
Email:
C.,
Just wanted to note that I finally started reading Alasdair MacIntyre’s After Virtue [1], and it’s relevant, insofar as modern debates about moral categories use language that has been deprived of its original context.
And I like it – it’s not hard to read. :-) One thing that makes me sad is that it’s like a movie where you know you’ll have to see the sequel: his later books develop his ideas further. But thankfully he never (later) departs from the views in the original book, and he defends them again in the prologue to its third edition (2007). I’m also looking at an essay published this year by the philosopher Alan White, who argues that MacIntyre misread one of the figures he criticized (https://www.degruyter.com/view/journals/opphil/3/1/article-p161.xml), but… eh… I’m not sure it’s necessary.
One thing this makes me think about relates to Wittgenstein: family resemblances, words meaning how they’re used, and different communities & traditions adopting different meanings for the same words. And even then, words have meanings that are “local in scope”, in the sense that they can’t always be extrapolated beyond their intended range – an example being the absurdities one runs into when trying to extend “all” to mean everything in the universe.
…actually, this inspires me to keep writing a bit more…
THUS, machine learning classifiers trained on datasets that are contingent upon a particular community will necessarily produce nonsense, or at least conflicts, when extended beyond that community’s scope. On its own this statement is nearly trivial; it has been demonstrated over and over. My “synthesis” or “point” here, though, is that THE SAME THING happens with human beings and our use of language. Again, students of philosophy and the humanities will find this “obvious”. But the “interdisciplinary” claim, that ML systems and human language use are similar in this respect, is the important part.
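(To make that concrete, here’s a toy sketch, entirely my own invention with made-up two-community data: a sentiment classifier learns a community where “sick” means illness, then gets handed a sentence from a community where “sick” is praise.)

```python
# Toy sketch (my own, hypothetical data): a classifier learns one
# community's usage of "sick" and misfires when the same word shows up
# in another community where it carries the opposite valence.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Community A: "sick" means illness, i.e. negative.
train_texts = [
    "I feel sick and awful today",
    "that was a sick, disgusting mess",
    "what a wonderful, lovely day",
    "this is a great and happy result",
]
train_labels = ["negative", "negative", "positive", "positive"]

vec = CountVectorizer()
clf = MultinomialNB()
clf.fit(vec.fit_transform(train_texts), train_labels)

# Community B: "sick" is slang for impressive.
test_text = ["that trick was so sick, amazing"]
print(clf.predict(vec.transform(test_text)))  # likely 'negative': scope exceeded
```

The model has no way of knowing it has left its scope; it just keeps classifying.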
FURTHERMORE, if we were to, say, try to amass a gigantic dataset of (English) text, the various contexts (or communities) those texts come from (separated by time, geography, or specialized subculture) would all be AGGREGATED. Is there any reason to expect this aggregation to result in meaningful output?
I’d say no… and yet! The latest super-mega-gigantic language model from OpenAI, called GPT-3, is supposedly able to adapt to contextual cues instantly. So… I don’t know. At this point my brain gives out. I guess I’ll just keep reading!
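P.S. For reference, that “instant adaptation” looks roughly like the hypothetical few-shot prompt below. The format comes from the GPT-3 paper (“Language Models are Few-Shot Learners”, Brown et al. 2020); the wording is my own made-up example, not anything from OpenAI.

```python
# Hypothetical few-shot prompt in the GPT-3 style: the examples
# establish a local, community-specific sense of "sick", and the model
# is asked to continue the pattern in-context, with no retraining.
prompt = """\
In skater slang, label each sentence as praise or complaint.

Sentence: that kickflip was so sick
Label: praise

Sentence: my ankle hurts and I feel sick
Label: complaint

Sentence: the new ramp at the park is sick
Label:"""
# A large language model completing this prompt would, ideally,
# answer "praise": the surrounding examples supply the scope.
print(prompt)
```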