Are we Luddites?
(A different perspective on my friend's writing.)
Are we Luddites? No.
On a definitional level, we are not Luddites, as we are not (a) alive in the 19th century and (b) English textile workers opposing the use of certain types of automated machinery. So we're not Luddites.
Societally, the word Luddite has come to be synonymous with those who vehemently oppose new technology; specifically, technologies with the potential to drastically upend the current way of life, as textile factories did in the 1800s. So by this quasi-consensus terminology, are 'we', the subset of the public who dodge generative AI tools, Luddites?
No, we are not Luddites.
The word Luddite is an interesting label. Historically, the Luddites 'lost'; the state sided with the factory owners and used military force against them (granted, the Luddites didn't exactly protest peacefully). As an example of how winners write history, today we see the Luddites as backwards: rather than remembering them as a group of protestors, worried about uncertain economic conditions, who were a little too violent in their means, we view them as vandalizers who resorted to destroying factories and who ultimately became a side note in the grand scheme of industrialization. Thus, we view the Luddites as misguided in their goals, primitive in their means, and futile in their efforts to quell the inevitability of effective technology.
As such, when AI companies/the news/techno-optimists brand "the other" - those who don't conform and take a break from AI/apps/phones/technology - as Luddites, they're signalling the righteousness of their viewpoint. There's a subtle undercurrent to their rhetoric implying that they are on the "right" side of history, compared to the doubters/non-users, who are "wrong". Furthermore, their rhetoric implies inevitabilism: in the same way that textile machines eventually came to dominate textile production in 19th-century England, those who brand others as Luddites imply that the all-powerful LLM will inevitably come to dominate our lives, and that resistance to these AI products is futile. From their standpoint, you're better off using these products (conforming) than being on the 'wrong' side of history.
But those who avoid technology know they're not Luddites¹. And labelling those who don't constantly use technology or generative models as Luddites doesn't exactly lead them to use more technology. As such, the move to call others Luddites is not really genuine; rather than trying to convince the actual so-called "Luddites" why their ways are wrong, the big players here are instead attempting to convince the average non-technical person that those technical people who avoid these cool new text output machines are weird. And they're strange. And they're not like you. Those people who run Linux? They're crazy. Don't talk to them. Don't be like them. Don't associate with them. Avoid the crazy people.
And thus the general public remains uninformed about LLMs.
So, in sum, we're not Luddites (and frankly shouldn't care if we are or aren't), and this whole Luddite thing is meant to frame us, those who opt out of technology, as weird people.
If you've got any opinions on this piece, send me an email. I enjoy synthesizing different viewpoints into my argument :)
¹ Interestingly enough, when the media talks about Luddites, they're not talking about those who leverage LLM data poisoning attacks to destroy LLMs; rather, their definition of Luddite seems to be someone who opts out of the machine. In any case, I would argue that those who use LLM text honeypots to poison LLM training data are still not Luddites; while there is most definitely adversarial intent in these actions, there is no direct "active" act of data poisoning. Metaphorically, if I set up a (literal) honeypot to defend my house from being robbed, it seems absurd that anyone who gets stuck in my honeypot while trying to rob me has grounds to sue me, especially since the thing I'm trapping in my metaphorical honeypot is not a human but a big collection of numbers and functions made by a for-profit company for profit.↩
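To make the "passive" distinction concrete, here is a minimal sketch of what such a text honeypot might look like. This is a hypothetical illustration, not any particular tool: the crawler user-agent substrings are assumptions (real training crawlers announce themselves in various ways), and the "gibberish" generator is deliberately trivial. The site owner takes no active step against any model; a trapped crawler simply ingests whatever filler the page happens to serve.

```python
import random

# Assumed User-Agent substrings for LLM-training crawlers; real
# crawler names vary and change over time.
CRAWLER_MARKERS = ("GPTBot", "CCBot", "Bytespider")

# A tiny vocabulary for generating worthless filler text.
WORDS = ["loom", "frame", "shuttle", "thread", "mill", "weave", "cloth"]

def gibberish(n_words: int, seed: int = 0) -> str:
    """Deterministic filler text served to trapped crawlers."""
    rng = random.Random(seed)
    return " ".join(rng.choice(WORDS) for _ in range(n_words))

def serve(user_agent: str, real_page: str) -> str:
    """Passive honeypot: humans get the real page, crawlers get filler."""
    if any(marker in user_agent for marker in CRAWLER_MARKERS):
        return gibberish(50)
    return real_page
```

The key design point matching the argument above: `serve` never reaches out to any model or dataset; it only decides what its own house serves to whoever knocks.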