Let’s call generative AI what it is: the biggest intellectual property heist in human history, dressed up as progress. It’s a system built on theft, trained on the life’s work of countless artists and writers who never gave permission. And this isn’t just about copyright; it’s about livelihood. This technology is a tool designed to rob creators of their ability to survive, a perfect engine for cultural destruction that feeds fascism and racism, and it’s being sold to us as the future.
First, the theft. These models weren’t created in a vacuum. They were fed a diet of millions upon millions of images and text scraped from the internet. That includes your photographs on Instagram, your stories on personal blogs, the art you posted to a portfolio site, the entire archives of museums, and the copyrighted work of living authors and artists. The tech giants behind this didn’t ask, they didn’t pay, they just took. They built their digital plantations on the stolen labor of creative people. Now, they’re selling the product back to us at a price that makes human labor impossible to compete with. It’s a parasitic system that devours culture and spits out a bland, soulless approximation, all while starving the very people who nourished it.
This is a direct assault on the livelihood of every artist and writer. For years, corporations have been trying to devalue creative labor, chipping away at pay and rights. This is their endgame. Why pay a graphic designer a fair wage for a unique piece of art when you can get a thousand derivative images from a machine for the price of a subscription? Why commission a writer for a thoughtful article when an AI can generate a passable knock-off in seconds? This isn’t just competition; it’s an existential threat. It’s a system designed to make creative careers unsustainable, to funnel all the value and money directly to the tech executives who own the machines. They are building a world where human expression is a hobby for the rich and a dead end for everyone else.
This is where it gets truly sinister. The very nature of this technology makes it a weapon for authoritarianism. A system designed to mimic and regurgitate existing patterns is inherently conservative. It doesn’t create; it averages. It finds the most common, the most dominant, the most statistically probable output and presents it as new. This is the death of innovation and the enemy of dissent. True art often challenges the status quo. It comes from the margins, from unique perspectives, from voices that refuse to conform. An AI trained on the majority opinion will never produce that. It will produce a sanitized, corporate-friendly version of reality, one that reinforces existing power structures instead of questioning them. It is the ultimate tool for a fascist regime that demands conformity and erases dissent.
Then there’s the racism. These models are trained on our data, and our data is filled with our biases. The internet is a reflection of our society, with all its beauty and all its ugliness. When you scrape it all into a model, you don’t just get the good parts. You bake the systemic racism right into the code. Ask an AI to generate an image of a “doctor” or a “CEO,” and it will overwhelmingly show you white men. Ask it for a “fast food worker” or a “criminal,” and you’ll see people of color. This isn’t a bug; it’s a feature of a system built on a stolen, biased history. It automates prejudice, giving it the false authority of a machine. It provides a digital shield for bigots, allowing them to say, “I’m not racist, the algorithm said so.” It’s the perfect tool for perpetuating systemic inequality under the guise of objective technology.
And this brings us to the most dangerous lie of all: that this technology is a path to truth. In reality, it’s a machine for generating bullshit. These models are not designed to be correct; they are designed to be plausible. They are sophisticated parrots, mimicking patterns in data without any understanding of fact, context, or consequence. They confidently invent sources, misstate historical events, and generate “facts” out of thin air. In an age already drowning in misinformation, we are now being sold a firehose of high-tech lies. When newsrooms, desperate to cut costs, start using AI to generate articles, we are signing the death warrant of journalism. We are replacing the hard, expensive, and vital work of human reporters with a machine that can’t tell the difference between a tragedy and a fairy tale. This isn’t just an error; it’s a catastrophe for a society that needs truth to survive.
The people pushing this technology will tell you it’s about democratizing creativity and information. That’s a lie. It’s about centralizing control and annihilating the ability of creators and thinkers to make a living. A handful of tech billionaires now own the means of cultural production and, increasingly, the “truth.” They decide what art looks like, what writing sounds like, and what gets reported as fact. They are building a future where human expression and objective reality are devalued and replaced by a cheap, infinitely scalable corporate product.
We are being sold a future where our unique voices, our struggles, our cultures, and our facts are just data points to be consumed and regurgitated by a machine for profit. This is a fight for the very essence of what it means to be human and for the right of humans to earn a living from their own God-given talents. To resist this is to fight for the preservation of authentic, messy, difficult, and beautiful human expression against a tide of sterile, stolen, and authoritarian digital conformity. We cannot let them turn our culture into their product and our passion into their profit. We cannot let them drown the truth in a sea of plausible-sounding lies.
“Ask an AI to generate an image of a “doctor” or a “CEO,” and it will overwhelmingly show you white men. Ask it for a “fast food worker” or a “criminal,” and you’ll see people of color. ”
So I decided to test this using ChatGPT. I got a white man for all four categories, doctor, CEO, fast food worker and criminal. So maybe AI isn’t so racist after all.
Then I asked the same of Google A.I., which generated around 30 pictures for each category.
There was a combination of different races in every category. If anything, blacks seemed to be overrepresented in the categories of CEO and doctor. For example, 9 of the 30 CEO pics appeared to be black CEOs and 13 of the 30 doctor pics appeared to be black doctors.
I will be blunt. I have watched you for a month now, and your “additions” to the conversations on Vanguard articles are a joke. You show up in every thread, drop some 6th grade contrarian nonsense that was spat out of a chatbot, and then act like you’ve made a point. You haven’t. You’ve just wasted everyone’s time.
So, I have a real question for you. Do you actually think that you’re doing something constructive here? Is this your life’s grand purpose: to troll the comment section of a local news website with bad-faith arguments? I’m genuinely trying to understand the mindset. Is this what you consider a productive use of your limited time on this earth?
What do you get out of this?
“What do you get out of this?”
I’m going to guess that the satisfaction arises from uncovering false claims – which is actually a form of public service.
Sort of like what media used to do.
In any case, I was just about to ask the same underlying question that Keith did, in regard to your claim.
The first image that comes to my “own” mind (let alone AI’s) when asked to picture a doctor is not a “white man.” I haven’t been to a white, male doctor (or dentist) in years.
Then again, I don’t spend much time trying to conjure up such images.
I guess he didn’t like that someone actually did a little digging on his claim.
So… nothing?
Not even a little introspection?
Much like your contributions, pretty pathetic.
I am done pointing out how weak your “arguments” are. For one thing, no one can recreate the AI test you performed without knowing which platform was used or the precise prompts, and your results come from a model that has been skewed by your own incessant and biased use, instead of from applying your brain to actual research.
Congratulations… you’re just ick, dude.
Final thought: Nobody takes you seriously, a grown-ass adult. Sit with that for as long as you need…
My A.I. command was simply “an image or pic of a CEO,” etc. No bias applied, and those were the results I got. Sorry if you can’t handle what ChatGPT and Google A.I. delivered.
MS: I disagree. KO is my favorite commenter and the most valuable commenter in keeping the Vanguard in check. He is also the most entertaining. Writing about his comments just because you don’t like them adds nothing to the conversation. I don’t believe in chanting, but: More KO! More KO! More KO!
I actually agree with this entire article. The problem is: it’s already too late, and there never was any way to stop it.
Even if the US put wise regulations on AI, and that is doubtful, eventually China and Russia and Iran, etc. would come up with AI just as powerful, and it would only be exponentially expanded into weapons systems. If the US lags in military AI, and China, for example, has two million drones, each capable of flying under the radar, evading anti-drone systems, and intelligently and independently locking onto individual targets without having to call back to base, China could take over anyplace without firing a shot, just by showing they can. This could be existential, and make creative and copyright issues seem trivial.
True, as far as it goes, Matt. But it doesn’t go far enough. Already AI is escaping control by the oligarchic tech bros who designed and fabricated it. That’s their greatest fear, and it’s our opportunity.
As far as rehashing nationalistic rivalries and fears, especially these days with China scaling up its military with autonomous drones, much less nuclear weapons, AGI will make national borders obsolete, indeed, ludicrous. It’s all bleeding over at a quickening clip.
So it’s about human cognitive dominance versus machine cognitive dominance with no person or group in control. The intelligent response is to end psychological divisions and let go of the illusion of control.
There’s a ghost in the machine, but we are the ghosts. And thought machines are just “accelerating” the loss of creativity, culture and community that a zombie society had produced before the first LLM emerged.
“The right of humans to earn a living from their own God-given talents” is still the language & paradigm of production and participation in the hyper-capitalist global economy.
There is no “them” with AGI, because there will be no control of it by anyone unless we understand thought and knowledge, and put them in their proper place now. We need to redefine what it means to be a human being.
As far as the truth, it died in America before Trump was elected with his firehose of lies flooding the psychosocial field.
Martin, I appreciate you taking the time to provide your thoughts. You’re clearly thinking about this on a very macro, almost cosmic level, and there are some interesting ideas in there about AGI and the future of human cognition.
But my intention with this piece was much more grounded and immediate. I wasn’t trying to map out the entire future of human consciousness in the age of AGI. I was trying to sound the alarm on the specific, tangible damage happening right now.
The micro focus in this article is on the theft of livelihoods, the automation of prejudice, and the way this technology is being used as a tool for control by the very powers you mention. The “right of humans to earn a living” isn’t just hyper-capitalist language to me; it’s a matter of survival for artists, writers, and thinkers who are seeing their work and their ability to survive being systematically devalued today.
I agree that we need to redefine what it means to be human, but I think we have to put out the fire that’s already in the kitchen before we can start redesigning the whole house. The immediate threats of theft, racism, and the erosion of truth are the battles I aimed to focus on right now. The bigger philosophical questions are crucial, but they’re a different fight, perhaps for a different article.
The house has already burnt down, Matt. “The immediate” is what’s killing those of us who haven’t been burned up or burned out.
Alan is right, “The problem is: it’s already too late,” though if he’s right that “there never was any way to stop it” we’re all cooked.
“The bigger philosophical questions” are the pressing ones. They aren’t a different fight.
As far as being paid for our work, I don’t know any writers who aren’t writing for clicks, with ads paying for their work. Are you paid for writing for The Vanguard? The Washington Post was just gutted today by Bezos.
I know not what Alan has said, as he is on my ignore list.
But, I could indeed be being too optimistic… 🤷♂️
I am not paid by the Vanguard, but I am when I freelance for other institutions.
In my area of expertise (horticulture), ai output is very inaccurate, often wildly so, and is not going to get any better. It’s already creating headaches for us as people come in looking for non-existent products, seek help for misdiagnosed problems, and embark on gardening practices that range from silly to nuts.
I can only imagine what health professionals are dealing with. For scientific questions, ai is really bad, and will not improve. The problem is, you have to be an expert in the field to know how bad it is. And the smug first-person authoritative narrative format makes it seem very credible. Imagine using it to develop public health guidelines, or regulate pesticide safety.
ai steals content, produces garbage output, costs us all money and is an environmental catastrophe. It may also prove to be the most costly mistake in business history and could easily cause a serious economic contraction when investors realize it will not live up to its overhyped promises. There are serious potential adverse consequences of letting this loose unregulated in many areas, national defense and security and public health being the most concerning.
Significant regulatory oversight is needed as soon as possible, and the largest firms need to be broken up via antitrust actions.
No one is relying upon AI for entirely accurate information.
But if they are, they shouldn’t even be allowed to vote. If anything, that’s an indictment of people, not technology.
AI sources include actual citations, and that’s where one can determine the validity of AI’s conclusions/summaries.
It will become better (more accurate) over time.
The concern regarding loss of jobs has been a theme ever since the invention of the wheel. It’s true, but that doesn’t mean the jobs themselves must be kept regardless of their underlying purpose. Otherwise, they’d still be manufacturing wagon wheels en masse.
AI overreliance is already a known phenomenon.
“In a domain-independent, incentivized, and interactive behavioral experiment, we find that the mere knowledge of advice being generated by an AI causes people to overrely on it, that is, to follow AI advice even when it contradicts available contextual information as well as their own assessment. Frequently, this overreliance leads not only to inefficient outcomes for the advisee, but also to undesired effects regarding third parties.” https://www.sciencedirect.com/science/article/pii/S0747563224002206
The presence of citations is only useful if you are an expert in the field.
Since it must make binary decisions even when that is inappropriate, cannot implement nuance, and is incapable of ranking scientific information for rigor, it will not get better.
“The presence of citations is only useful if you are an expert in the field.”
I’m pretty sure I can figure out which plants to install, without even using AI. For one thing, someone like you wouldn’t even carry them, if they weren’t suitable.
We all need to rely upon citations if we do our own taxes, perform our own handyman work, etc. But for sure, I wouldn’t rely on it for something important.
Perhaps I have more faith in humanity than you. Pretty sure that AI will become the butt of jokes on late-night TV and elsewhere, if it doesn’t improve.
I personally view it as a “first step” for anything important. (Commenting on a blog doesn’t necessarily qualify as important.)
I view it as a conglomeration of what one would find via search engines, but often with incorrect conclusions (which then require more “digging” to see if it’s correct).
Don,
You’re dead on. Thanks for laying this out from the trenches.
Your example from horticulture is the perfect illustration of the whole damn scam. The tech bros sell this thing as an oracle, but in the real world, it’s just a bullshit generator that’s already causing headaches. The most important point you made is that you have to be an expert to spot the garbage. For everyone else, that “smug first-person authoritative narrative” is incredibly convincing.
That’s the real danger. It’s not just that it’s wrong; it’s that it’s confidently wrong, dressed up as expertise. Your examples of using it for public health or pesticide safety are terrifying. We’re not just talking about bad gardening advice anymore; we’re talking about actual, physical harm.
You’re right. This isn’t a tool that needs refinement. It’s a faulty product being pushed by monopolies, and your call for regulatory oversight and antitrust action is the only one that makes any sense. It’s a house of cards, and it’s about to collapse.
I have a more nuanced view than Matt on this – not that I think he’s necessarily wrong.
First of all, I think the term AI is misplaced. It’s not artificial intelligence. There is no agency here. There is no consciousness. It’s basically a faster computer system.
Second, accuracy is a problem on a number of different levels. You have to check where it’s getting the information from – it’s not after all an oracle. You need subject matter expertise. You have to know what you’re looking at to be able to discern crap from accuracy – and obviously that’s going to limit its utility.
What I find helpful:
* Be able to see if I missed something in writing my piece
* Be able to find additional sources of information
* Be able to transcribe audio fairly quickly – the accuracy of the program I use is about 90 percent, but it saves hours of time for what I do
* Be able to “fact check” both my writing and other things I encounter – with a lot of nuances there.
* Take a huge document and pull out quotes and summarize key points – again with guardrails and being able to check
There are other things that are helpful as well. But the bottom line is that, like everything, it can be a valuable time-saving tool if used correctly, and things can go horribly wrong if you don’t. See the stories about the attorneys who have been disciplined for invented case law.
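To make the “guardrails and being able to check” point concrete, here is a minimal, purely illustrative sketch in Python of one such guardrail for the quote-pulling use case: nothing the model returns is trusted unless it appears verbatim in the source document. The ask_model function is a hypothetical placeholder, not any particular product’s API, and its canned return value just simulates a model that gets one quote right and invents another.

```python
# Illustrative sketch of a "pull quotes, then verify" guardrail.
# ask_model() is a hypothetical stand-in for whatever LLM or service is
# actually used; its canned return value is for demonstration only.

def ask_model(prompt: str) -> list[str]:
    # Placeholder for a real model call that would return candidate
    # quotes the model claims to have found in the document.
    return [
        "the accuracy of the program I use is about 90 percent",
        "a sentence the model simply invented",
    ]

def extract_verified_quotes(document: str) -> list[str]:
    """Ask the model for quotes, then keep only those that appear
    verbatim in the source document (the guardrail step)."""
    candidates = ask_model("Pull the key quotes from this document:\n" + document)
    return [quote for quote in candidates if quote in document]

if __name__ == "__main__":
    source = (
        "Be able to transcribe audio fairly quickly - "
        "the accuracy of the program I use is about 90 percent, "
        "but it saves hours of time for what I do."
    )
    for quote in extract_verified_quotes(source):
        print("verified:", quote)
```

The same pattern generalizes: let the tool draft, but keep a mechanical or human check between its output and anything that gets published.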
* Be able to see if I missed something in writing my piece
* Be able to find additional sources of information
* Be able to transcribe audio fairly quickly – the accuracy of the program I use is about 90 percent, but it saves hours of time for what I do
* Be able to “fact check” both my writing and other things I encounter – with a lot of nuances there.
* Take a huge document and pull out quotes and summarize key points – again with guardrails and being able to check
(That’s what the Vanguard’s commenters do – you’re welcome.)
“That’s what the Vanguard’s commenters do – you’re welcome.”
Not well, but sometimes. (You guys rarely catch the stuff that actually gets me in trouble ;))
David,
I appreciate the nuance you’re bringing here. You’re right to distinguish between the hype of “AI” and the reality of what these systems actually are, faster computers without any real agency. And I don’t disagree that there are specific, narrow applications where they can function as a time-saving tool, like the ones you listed. Transcription or summarizing a huge document is a far cry from what I’m talking about.
But that’s where I have to ask: at what cost, both literal and figurative? The energy consumption alone is staggering. It’s not a side effect; it’s a fundamental part of the scam. Is saving a few hours on transcription worth the environmental cost of powering these data centers? Is it worth draining reservoirs and burning through fossil fuels just so a corporation can automate a job it should be paying a human to do?
When you add that to the equation, the whole “useful tool” argument starts to collapse. My issue isn’t with the tool itself in a vacuum; it’s that the tech giants aren’t investing billions to create better transcription software. They’re building it to replace the very subject matter experts you correctly say are essential for discerning crap from accuracy. The utility for an individual like yourself is real, but the primary business model is built on eliminating experts altogether, all while burning a hole in the planet to do it.
It’s a monstrously inefficient trade-off. We’re trading the planet’s finite resources and the livelihoods of millions for a tool that, as you point out with the attorneys inventing case law, can go horribly wrong. It’s a losing proposition on every single level: ecologically, economically, and ethically. The energy bill alone makes it a catastrophic mistake.
“ The most important point you made is that you have to be an expert to spot the garbage.”
That’s not true. Discernment doesn’t depend on expertise. Indeed, it all but denies it. Expertise applies to given fields of knowledge and skill, not to seeing what is.
Seeing the truth in the moment has nothing to do with expertise, and expertise, like knowledge and experience, has to be set aside in order to see the truth.
Martin,
I think you’re making a distinction where there isn’t one, and it’s dangerous.
“Seeing what is” isn’t some mystical, context-free state of being. It requires knowledge. To use Don’s example, you can’t just “see the truth” about a plant disease without knowing anything about plants. Your “truth” would be a random guess. An expert, a horticulturist, can look at the same plant and actually see what’s wrong because their expertise gives them the context to interpret reality correctly.
Your argument that expertise must be “set aside” to see the truth is precisely what these AI con artists are counting on. They want everyone to believe their gut feeling is just as valid as a doctor’s diagnosis or a scientist’s research. It’s not. It’s a recipe for a world of confident morons, all “seeing their own truth” while the actual truth gets buried under a mountain of authoritative-sounding nonsense.
Expertise isn’t a prison; it’s a lens that brings reality into focus. To ask people to set it aside is to ask them to be willfully blind.
Matt,
Even without the pejorative reference to “some mystical, context-free state,” you’re way off on this one. You’re reacting from unexamined premises and societal conditioning regarding the primacy of knowledge, especially with regard to the cult of expertise.
It’s deeply mistaken to say, “expertise is a lens that brings reality into focus.” To continue with that way of non-seeing is to feed the very thing you decry in your column — the blind acceptance of authority. Also, what’s dangerous is to continue with the “us vs. them” dichotomy that infuses your worldview and work.
With regard to horticulture or some other field of knowledge, expertise matters. With respect to truth, expertise is an impediment. To stand on expertise where truth is concerned is to reinforce psychological authority and authoritarian structures, and deny truth.
Again, you’re trying to put out the fire after the house has already burned down. We have long had “a society of confident morons, all ‘seeing their own truth’ while the actual truth gets buried under a mountain of authoritative-sounding nonsense.”
You’re standing on two false premises: that human beings must have “lenses” to see the truth, when the lenses are our conditioning, which prevents us from seeing the truth; and that experts are necessary in order to perceive “the actual truth.”
You’re way out of your depth here. Please do what you rather brutally told Keith to do: pause and reflect on your societally conditioned assumptions about knowledge, expertise, reality and truth.
Martin,
I appreciate you taking the time to lay out your position. But I think you’re missing the point of my argument by turning it into a philosophical debate.
My column wasn’t about the ultimate nature of Truth. It was about a specific technology and its specific, tangible dangers. The argument is simple: an AI can generate a medical diagnosis that sounds authoritative but is completely wrong. A person without medical expertise has no way of knowing that. A doctor, an expert, does. In that specific, real-world scenario, expertise is not an impediment to truth; it’s the only thing standing between an everyday person and a dangerously false “truth.”
You’re talking about an ideal state of pure perception, and that’s a fine philosophical discussion. But we are not in an ideal state. We are on the cusp of a world where a flood of AI-generated nonsense is specifically designed to mimic the language of authority. In that world, dismissing the “cult of expertise” is not a path to freedom; it’s a recipe for chaos. It’s telling people they have the tools to spot a deepfake video when they don’t, or to evaluate a scientific paper when they can’t.
My premise is not that humans “must have lenses.” My premise is that this new technology is actively trying to blind us, and in that situation, the person who has spent a lifetime developing their “lens”, their expertise, is the only one who can see at all. This isn’t about reinforcing authority; it’s about survival in a world where the fakes are becoming indistinguishable from the real.
Please stay on topic.
Matt,
You’re still reacting and not reflecting on your deep and false assumptions. Setting up the strawman of “The ultimate nature of Truth” willfully misses the point, which is that we can and must have the discernment to see what is, without the authority of experts (truth with a small ‘t’).
Your repeated example of medical expertise is painfully ironic. As someone who nearly died last year because of a doctor’s malpractice, I can tell you that you couldn’t strike a more raw nerve. I sure as hell can’t get more “real world” than that. So please stop spewing nonsense.
Competent doctors are way ahead of you in this one anyway. Setting up the absurd dichotomy between “bad AI” and “expert doctors” is frankly rather juvenile. A good doctor is already using AI judiciously. If mine had, I wouldn’t have become septic and nearly died. His ego prevented him from listening to me and the evidence, which AI had correctly diagnosed. Is that what’s going on with you?
I’m not talking about “an ideal state of pure perception (as a) fine philosophical discussion.” I’m talking about seeing what is, which knowledge and expertise have nothing to do with when it comes to self knowing and seeing what is happening in society and to humanity.
You’re fighting a rear guard action against a battle that’s already been lost, in terms that no longer apply. It’s absurd to say we need experts because without them we’ll have a “recipe for chaos,” when chaos is precisely what we’re being confronted with.
It’s dangerously wrong and misleading to say: “the person who has spent a lifetime developing their ‘lens’, their expertise, is the only one who can see at all.” We need to stop bowing down to experts, who are as blind as anyone else where the human condition is concerned, and who are certainly not the ones who will lead us out of this chaos.
I’m saying each one of us can and must have discernment and direct perception, not to supersede experts in a specific field, but to ignite insight within ourselves and be whole human beings. Expertise, by definition, is a piece of knowledge, and you can’t see the whole by trying to assemble the pieces of knowledge through the high priests of experts.
You’re upholding the false authority of expertise over our own discernment in life, and thereby reinforcing authoritarianism.
Martin,
I’m sorry you had a bad experience with a doctor. But using your personal trauma as a debate tactic to shut down a conversation is manipulative, and I’m not going to play along.
Your story doesn’t prove your point; it proves mine. Your doctor was a bad expert, a failure of human ego and judgment. The solution to that is not to abandon the entire concept of medical expertise. The solution is a better expert, one who, as you say, uses tools like AI judiciously. You’re arguing for the abolition of pilots because one of them was drunk.
You keep trying to elevate this into a philosophical sermon about “direct perception” and “wholeness.” I’m talking about a specific technology that is being used to flood the zone with authoritative-looking lies. My argument, from the very beginning, is that in that coming world, expertise is the only firewall we have.
You’re telling people to rely on their innate “discernment.” I’m asking what happens when that discernment is being systematically and expertly deceived by a technology designed for that exact purpose. That is the question. It has nothing to do with my supposed ego or your past. It’s a practical question about a technological threat, and your philosophical hand-waving is a dodge.
Martin,
I’m going to be blunt. You are not an honest interlocutor.
An honest interlocutor engages with the argument that is actually being made. You have consistently refused to do that. Instead, you have reframed my points into strawmen, shifted the conversation from a practical technological issue to a vague philosophical sermon, and now you have used a personal trauma as a tool to derail the debate and question my character.
This is not a good-faith discussion. It is a performance of intellectual superiority designed to avoid the substance of my argument.
As such, you now unfortunately join the ranks of the other dishonest interlocutors I’ve encountered on here who are more interested in “winning” a conversation than in having one. The conversation is over.
There is a reason that A.I. can be “racist”. I am well aware of research done regarding bigotry against Jews and how it is propagated via A.I. This no doubt applies to how A.I. can propagate other forms of bigotry as ‘truth’, even more so if the digital misinformation is intentionally and strategically propagated.
If you doubted MS’s claim yesterday, read this article. It is very important to understand one of the dangers of A.I. and how this happened and is propagated by A.I.. This is so important I’d call it a must read (below link). It can also be heard on the “Ask Haviv Anything” Podcast/YouTube, Episode #65 – link to the YouTube is also in the link below.
https://whyevolutionistrue.com/2025/12/26/how-wikipedia-distorts-israel-and-jews-in-the-interests-of-the-sites-progressive-ideology/?utm_source=chatgpt.com
From your link:
“The subject is how Wikipedia, as well as reddit, have distorted the facts about Zionism and Israel by adopting a progressive, left-wing, and, yes, antisemitic stance.”
(I have also increasingly noticed a bias from Wikipedia, as well. But truth be told, I think/hope that I have enough wisdom to see through it.)
The best approach is to observe/think for oneself, and try to avoid seeing the world via your own “tribal” (political) membership. (That’s why I stopped viewing liberals vs. conservatives in this manner.)
The problem is, many people trust Wikipedia as we trusted Webster’s, and apparently A.I. is one of those people, and apparently those who run Wikipedia have a quite biased political agenda on pet issues.
Wikipedia is edited collectively by anyone with a password and username and therefore can be unreliable. I usually will exclude Wikipedia when I see it listed as a source.
For scientific information, particularly in my field, Wikipedia is outstanding.
That doesn’t completely surprise me, though I will say, when I taught, we did not allow students to cite Wikipedia just as when I was a student, we weren’t permitted to cite Encyclopedias.
“Wikipedia is edited collectively by anyone with a password and username”
Did you read the linked article? That’s what everyone thinks on how it works. On some rather important political issues, the reality is much more sinister.
Matt,
You keep digging your hole deeper. It’s dishonest and devious, not to mention cold-blooded, to say that I’m being “manipulative” by using “personal trauma as a debate tactic to shut down a conversation,” when you continue to arrogantly mischaracterize what I’m pointing out, and thereby try to shut down a conversation. (“Stay on [my] topic.”)
Are you being deliberately obtuse? One can only think so when you write something as wildly wrongheaded as “you’re abandoning the entire concept of medical expertise,” or the reductio ad absurdum that I’m “arguing for the abolition of pilots because one of them was drunk.”
Again, expertise has its place. I wouldn’t show up at an airport to fly the passenger plane, any more than you would show up at a hospital to perform a surgical operation. It’s a completely specious argument that willfully fails to grasp the insight, which is that expertise has its place, but when it is taken out of context, it impedes discernment – seeing the truth for oneself.
You think you’re on solid ground because you’re “talking about a specific technology that is being used to flood the zone with authoritative-looking lies.” Refusing to look any deeper or wider than the little pond you’re swimming in, you assert boilerplate leftist ideology about how ‘they’ have designed a technology for the exact purpose of systematically and expertly deceiving discernment. You provide no remedy, just a feel-good, half-baked attack on the tech powers that be.
Rather than examine and nurture discernment, you make the ridiculous and unworkable claim that “expertise is the only remedy we have.” But you’re not one of the vaunted experts; you’re a commentator. So, by your own logic, why should anyone listen to you? Also by your logic, everyone must either become an expert, or accept the idiocy that experts are “the only remedy we have.”
We don’t need to imagine “that coming world” of chaos. It’s already here. Expertise in specific fields has not been, is not, and never will be a remedy to the crisis that actually exists. Indeed, the idolization of experts as authorities has greatly contributed to the erosion of discernment in ordinary folks and citizens.
Your non-question question clearly indicates that you don’t believe ordinary folks can have discernment, but must rely on experts to see truth. That’s a very undemocratic view.
So what is the true remedy? The remedy is to encourage discernment by being discerning oneself, rather than outsourcing our discernment to experts.
“So, I have a real question for you. Do you actually think you’re doing something constructive here… by giving bad-faith arguments? I’m genuinely trying to understand the mindset. Is that what you consider a productive use of your limited time on earth? What do you get out of this?”