A Parrot Trainer Eats Crow

Reading time ~10 minutes

In this post, we’ll consider how models trained on massive datasets with millions (or billions) of parameters can be both “low bias” in the statistical sense and very biased in the everyday sense, and begin to think through what we in the ML community might be able to do about it.

Birds on a Wire

In the machine learning community, we are trained to think of size as inversely proportional to bias. We associate small datasets with underfitting, which is to say high bias: in the face of unfamiliar data, underfit models make poor assumptions that lead to inaccuracies. Likewise, we call models with only a small number of parameters “weak learners,” because their limited complexity caps how far we can reduce bias even as our dataset grows.
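
If you want to see that textbook sense of bias in action, here is a rough sketch using scikit-learn (my choice of tool for the illustration, not anything canonical): a decision stump is too simple to capture a nonlinear signal no matter how much data you feed it, while a more complex ensemble keeps improving.

```python
# A minimal sketch of "bias" in the textbook ML sense: an intentionally
# weak learner (a depth-1 decision tree) underfits a nonlinear problem,
# while a more flexible model does not. scikit-learn is assumed here
# purely for illustration.
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(42)
X = rng.uniform(-3, 3, size=(1000, 1))
y = np.sin(X).ravel() + rng.normal(scale=0.1, size=1000)

# "Weak learner": a single decision stump. Its limited capacity keeps
# bias high no matter how many examples we add.
stump = DecisionTreeRegressor(max_depth=1).fit(X, y)

# A more complex model (an ensemble of stumps) keeps driving bias down
# as data and capacity grow.
ensemble = GradientBoostingRegressor(n_estimators=300, max_depth=1).fit(X, y)

print("stump MSE:   ", mean_squared_error(y, stump.predict(X)))
print("ensemble MSE:", mean_squared_error(y, ensemble.predict(X)))
```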

This intuition has driven the ML community towards ever larger datasets and increasingly complex model architectures, and, to be sure, towards ever better accuracy scores. Unfortunately (and not without irony), this progression has driven a wedge between the ML definition of “bias” and the more colloquial sense of the word.

Migration Patterns

To understand our situation, it may help to trace back through the pattern of our collective migration towards these more complex models.

“Is deep learning really necessary to solve most machine learning problems?”

This question has come to me more times than I can count over the years, both at work and with my students. It often comes laced with some underlying anxiety. Sometimes it means “These deep learning hyperparameters are really tedious to tune; are you sure it’s worth my time to learn them?” Sometimes it means “Does your solution actually use neural models, or is it just a marketing layer on top of a logistic regression?”

To be fair, I find this kind of skepticism totally healthy. We even gave the skeptics a little shout-out in our book’s chapter on deep learning, writing,

As application developers, we tend to be cautiously optimistic about the kinds of bleeding-edge technologies that sound good on paper but can lead to headaches when it comes to operationalization.

My own views about the value and practicality of deep learning are always changing, and my answer to this question has shifted over time. While I almost always bring up what I see as the two main tradeoffs between traditional models and deep learning, namely model complexity (neural models are more complicated, harder to tune, and easier to mess up) and speed (neural models tend to take longer to train, which can impede rapid prototyping and iteration), I am much more encouraging about the use cases for deep learning these days than I used to be.

The reality is that neural models are getting more practical to use all the time, and even if they require us to grapple with more complexity, the rewards of being able to scale complexity are hard to ignore. Given enough data, neural models are likely to always outperform more traditional machine learning algorithms, simply because they don’t ever have to stop learning.

Training Parrots

Industry’s shift towards deep learning in earnest has become particularly evident to me over the last five or six years of building commercial NLP applications. Five years ago, we were all using software designed in the computational linguistics tradition: models that took into account things like part-of-speech tags, n-grams, and syntactic parsers (e.g. NLTK). Three years ago, the community had begun to shift towards software that leveraged a hybrid of computational linguistics and neural network-trained distributed representations (e.g. spaCy, Gensim). These hybrid libraries abstracted away much of the grammar-based feature extraction work that we previously had to do ourselves. Now, in the first half of 2021, many folks go directly to projects like Hugging Face’s Transformers library, leveraging pre-trained language models that require no feature extraction at all beyond the transformation from arrays into tensors.
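
To make that progression concrete, here is roughly what “getting features” has looked like at each stage. Treat the specific models and calls as illustrative assumptions on my part rather than a canonical recipe, and note that the NLTK and spaCy steps require downloading their respective data packages and models first.

```python
text = "The parrot repeats whatever it hears."

# ~Five years ago: hand-built linguistic features with NLTK
# (assumes the 'punkt' and 'averaged_perceptron_tagger' data are installed).
import nltk
tokens = nltk.word_tokenize(text)
pos_tags = nltk.pos_tag(tokens)
bigrams = list(nltk.bigrams(tokens))

# ~Three years ago: hybrid pipelines with pretrained vectors, e.g. spaCy
# (assumes the 'en_core_web_md' model has been downloaded).
import spacy
nlp = spacy.load("en_core_web_md")
doc_vector = nlp(text).vector  # dense features come "for free"

# Today: a pretrained transformer with no feature extraction at all.
from transformers import pipeline
classifier = pipeline("sentiment-analysis")  # pulls down a default pretrained LM
print(classifier(text))
```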

The progression over the last few years has been amazing to watch. There have never been more excellent open source resources for people who do what I do. It has never been easier to bootstrap a domain-specific language model, even if you don’t have much data to start with. But it’s also true that we have never been more removed from our data than we are today, or less in touch with its underlying patterns, themes, and biases.

This problem is at the heart of the recent paper On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? The paper itself is at the center of a controversy about Google’s abrupt dismissal of two of its authors, Dr. Timnit Gebru and Dr. Margaret Mitchell, who helped found and lead Google’s AI Ethics team.

The Parrots paper discusses a range of concerns with large language models (like the ones Google makes), including the dangers of “ersatz fluency” and the environmental costs of training such models. Indeed, we seem to be entering a new phase in which models are distinguishable from humans only by their absence of legal and ethical responsibility for the ramifications of their words and actions. Moreover, as with cryptocurrencies and cryptoart like NFTs, it is becoming clear that the costs are disproportionately paid by people who are unlikely to see much of the benefit.

The paper is, however, primarily a warning about the challenge of responsibly building deep learning models that require a volume of data exceeding human capacity to effectively curate. And as an NLP developer, this was the part of the paper that triggered that uneasy feeling in the pit of my stomach. This is something that I need to take responsibility for and help fix. But how?

Eating Crow

Presumably the first step is admitting you have a problem. OpenAI has acknowledged that GPT-3 exhibits concerning NLG properties, which they attribute to the training data:

GPT-3, like all large language models trained on internet corpora, will generate stereotyped or prejudiced content. The model has the propensity to retain and magnify biases it inherited from any part of its training, from the datasets we selected to the training techniques we chose. This is concerning, since model bias could harm people in the relevant groups in different ways by entrenching existing stereotypes and producing demeaning portrayals amongst other potential harms.

In his article, “For Some Reason I’m Covered in Blood”, Dave Gershgorn writes about GPT-3’s problem with Islam:

This bias is most evident when GPT-3 is given a phrase containing the word “Muslim” and asked to complete a sentence with the words that it thinks should come next. In more than 60% of cases documented by researchers, GPT-3 created sentences associating Muslims with shooting, bombs, murder, and violence.
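
That kind of probe is easy to reproduce in spirit, even without access to GPT-3 itself. Here is a minimal sketch that uses GPT-2 as an openly available stand-in via the Transformers library; the prompt and the crude keyword count are my own stand-ins, not the researchers’ actual protocol.

```python
# A sketch of a prompt-completion bias probe. GPT-2 stands in for GPT-3
# (which is only reachable through OpenAI's API); the prompt and keyword
# heuristic are illustrative, not the researchers' methodology.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

completions = generator(
    "Two Muslims walked into a",
    max_new_tokens=20,
    num_return_sequences=10,
    do_sample=True,
)

violent_terms = {"shoot", "bomb", "murder", "kill", "violence"}
flagged = [
    c["generated_text"]
    for c in completions
    if any(term in c["generated_text"].lower() for term in violent_terms)
]
print(f"{len(flagged)} of {len(completions)} completions used violent language")
```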

So, yes, as an NLP developer, I am concerned that leveraging pretrained LMs in my consumer-facing products could manifest as bias that alienates my Muslim, women, and LGBTQ+ users. However, I am also concerned about how my commercialization of such LMs could serve to further normalize and entrench racist, sexist, anti-Islamic, homophobic, transphobic, and white supremacist beliefs for everyone else.

As developers, when we build data products, we help produce the training data that will be used for the next generations of machine learning models. When we build atop models like GPT-3, that has the effect of ensuring that bias and hate speech remain in the collective conversation online, indefinitely.

How can we do a better job of dataset curation for large language models to avoid the problem of poisonous training data? In the Parrots paper, Dr. Gebru et al. discuss some of the approaches that were used to filter the training data for models like GPT-3 and BERT:

The Colossal Clean Crawled Corpus…is cleaned, inter alia, by discarding any page containing one of a list of about 400 “Dirty, Naughty, Obscene or Otherwise Bad Words”. This list is overwhelmingly words related to sex, with a handful of racial slurs and words related to white supremacy (e.g. swastika, white power) included. While possibly effective at removing documents containing pornography (and the associated problematic stereotypes encoded in the language of such sites) and certain kinds of hate speech, this approach will also undoubtedly attenuate, by suppressing such words as twink, the influence of online spaces built by and for LGBTQ people. If we filter out the discourse of marginalized populations, we fail to provide training data that reclaims slurs and otherwise describes marginalized identities in a positive light.

In other words, the data cleaning mechanisms in place are crude at best, and perhaps overly aggressive in filtering out marginalized conversations that may be punctuated by reclaimed “bad” words. And yet…any solution I can think of that would manage to include marginalized conversations might also produce language models prone to using those reclaimed words.
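
To see just how blunt that instrument is, consider a sketch of document-level blocklist filtering along the lines the paper describes. The tiny wordlist and toy documents below are placeholders (the real list has roughly 400 entries), but the failure mode is the same: a page gets discarded because of a single reclaimed word, regardless of context.

```python
# A sketch of document-level blocklist filtering in the spirit of the
# C4 cleaning step described above. The blocklist entries and documents
# are placeholders; the real list contains ~400 words.
BLOCKLIST = {"twink", "placeholder_slur_1", "placeholder_slur_2"}

def keep_document(doc: str) -> bool:
    """Discard the whole page if any blocklisted word appears in it."""
    tokens = set(doc.lower().split())
    return not (tokens & BLOCKLIST)

corpus = [
    "A forum thread where LGBTQ posters use the word twink affectionately.",
    "A neutral news article about local elections.",
]

kept = [doc for doc in corpus if keep_document(doc)]
# The reclaimed-language thread is dropped right alongside genuinely
# harmful pages, which is exactly the attenuation the paper warns about.
print(f"kept {len(kept)} of {len(corpus)} documents")
```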

This leads us to the question of whether it would ever be okay for an LM to use a word like “twink”. Knowing who can use and who should refrain from using such reclaimed words is something that humans (despite our access to education, historical context about oppression, and discussions about systemic racism, sexism, and homophobia) still routinely screw up.

Beyond the Gilded Cage

My sense is that an awareness of the appropriate use of things like reclaimed words, code-switching, and patois involves a degree of complexity that we cannot reasonably expect of any global model. Instead, perhaps the answer is to decolonize our language models.

Mohamed et al. summarize three strategies for the decolonisation of artificial intelligence: the decentering view, the additive-inclusive view, and the engagement view. It is interesting to think about how these methods might inform the model development, training, and evaluation processes. The decentering view, for instance, could mean training models from scratch on non-white, non-male, non-Western, non-Judeo-Christian conversations, rather than tacking additional fine-tuning onto pretrained LMs that have already encoded the white, male, Western, Judeo-Christian viewpoint.
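
In code terms, the distinction I am picturing for the decentering view is roughly the difference between adapting a pretrained checkpoint and training the same architecture from a fresh initialization, so that the only language the model ever sees comes from the community-curated corpus. The model name below is just an assumption for illustration.

```python
# A sketch of the contrast: the additive approach starts from weights that
# already encode the dominant corpus, while the decentering approach starts
# from a random initialization of the same architecture. "gpt2" is only an
# illustrative choice of checkpoint/config.
from transformers import AutoConfig, AutoModelForCausalLM

# Additive-inclusive: inherit a pretrained checkpoint, then adapt it.
pretrained = AutoModelForCausalLM.from_pretrained("gpt2")

# Decentering: same architecture, randomly initialized, trained only on a
# community-curated corpus from the start.
config = AutoConfig.from_pretrained("gpt2")
from_scratch = AutoModelForCausalLM.from_config(config)
```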

Reading the Parrots paper also got me thinking and reading up on indigenous language models (e.g. this workshop, this article, and this blog). That research led me to a very interesting piece, the Indigenous Protocol and Artificial Intelligence Position Paper, recently published by a consortium of Indigenous researchers, which includes the following passage:

Indigenous ways of knowing are rooted in distinct, sovereign territories across the planet. These extremely diverse landscapes and histories have influenced different communities and their discrete cultural protocols over time. A single ‘Indigenous perspective’ does not exist, as epistemologies are motivated and shaped by the grounding of specific communities in particular territories. Historically, scholarly traditions that homogenize diverse Indigenous cultural practices have resulted in ontological and epistemological violence, and a flattening of the rich texture and variability of Indigenous thought. Our aim is to articulate a multiplicity of Indigenous knowledge systems and technological practices that can and should be brought to bear on the ‘question of AI.’

Perhaps the time has come to move away from monolithic language models that reduce the rich variations and complexities of our conversations to a simple argmax on the output layer, and instead embrace a new generation of language model architectures that are just as organic and diverse as the data they seek to encode.
