Response to Haley

Firstly, I want to clarify some foundations of my POV, because skipping that could easily lead to misunderstandings, and I see glimpses of points that I do agree with in your response, so I want to make sure I've established where I'm coming from.

1) When I'm talking about AI, I'm not advocating for or speaking about a specific form factor, product, or company. I'm not talking about ChatGPT or Grok (unless they're specifically mentioned), but rather the end result of the underlying foundational math, processing, and infrastructure that all modern AI is ultimately composed of. That includes all the positive examples like protein folding, medical advancements, and car safety systems, as well as all the negative examples such as state surveillance and dark pattern optimization. They all rest on the same handful of breakthroughs made over the past few decades, and in a general sense share a very similar structure. Therefore, for the sake of discussion, I am going to treat all current implementations of AI as a "package deal": individual capabilities cannot be separated out into different hypothetical realities when arguing for their merit or lack thereof.

Until very recently, AI was constrained to academia and had nowhere near the capital implications it has now, and I acknowledge and agree that the terrain changes when the incentive structures change, money comes into play, and technology is handed off from well-meaning nerds doing their PhDs to poorly and complexly incentivized corporations.

2) My default stance is one of optimism in humanity, with the understanding that there are some very misaligned people who are the exception and oftentimes (but not always) end up in positions of great decision-making power, but that the average person skews toward being morally net-good with agreeable intentions. This is a philosophical starting point for my beliefs, and I hold this POV along with the understanding that systems and incentives can make good people complicit in bad outcomes.

3) AI across the board is becoming cheaper and more environmentally friendly, pound for pound, even as usage and scale increase dramatically. I have no interest in misrepresenting the data or purposefully seeking out incorrect info, and I try hard not to rely on inaccurate information. I have bundled the sources I've collected over the past year at the bottom of this post. If there is any good data you can provide on what you've observed in regard to the scale of data center pollution or resource usage that I am missing, I would genuinely welcome it and will add it to my collection. I have no reason to hold onto false beliefs, and I want to be clear that I have no ego about it.

Having said that, my research leads me to overwhelmingly believe that reporting on water usage and contamination is vastly overstated, outdated, inaccurate, and misrepresented across the board. For the record, I do not feel this way about reporting on electricity usage, which seems to be covered much more accurately by media outlets, and which will clearly be a major hurdle requiring a ton of very involved and skilled planning to not go wrong.

I’ve speculated on why this mismatch in accuracy between water and electricity reporting exists, and while I don’t have hard proof, I lean toward the boring, uncontroversial opinion that media companies in general are incentivized toward dishonesty because of their strict reliance on ad revenue. I feel confident, but of course cannot prove, that articles overstating water usage outperform less relatable metrics like electricity or carbon output. This topic is hard for me not to push back on, but I am open to having my mind changed.

In the meantime and for the sake of discussion, I think that we should look beyond this specific issue so that we can address other topics as well.

4) I think the left is generally surrendering its seat at the table in protest rather than focusing on establishing strong policy. Politically, I see many thought leaders on the left claiming that AI is useless in the same breath as admonishing companies like Palantir for massively powerful surveillance practices. Both claims cannot be true at once, and I believe dismissing AI on one vector makes it harder to get a seat at the table policy-wise when the technology is not being faithfully considered from all angles. I go into this more later, but I wanted to establish this POV up front because it colors some of my statements throughout.

5) I really am not the brainworm put-AI-in-every-part-of-my-life person that you may think I am. I'm only assuming you might think that based on your initial response to the post I shared, but I can assure you that's not the case. I am an outspoken advocate of the technology, and I do find it genuinely exciting, as well as personally important for reasons I get into below, but I have very hard lines about the parts of my life that I let it touch. I am a very spiritual person, and I am finding that the more I use AI, the easier it becomes to see where my hard lines are. That being said, I have integrated AI into many parts of my life, though I have more boundaries than most people I know who use AI more casually. My tools and explorations have more to do with living holistically than with any notion of hyper-productivity.

I hope this establishes a baseline of where I'm coming from as an individual, and here are my direct responses to your messages:

1/2) On the false equivalency of agriculture vs. AI: My POV is that framing AI as elective only works if the conversation is constrained to technology like chatbots specifically, which is only a small fraction of what AI is. The same backbone also drives medical research, diagnostics, climate science, logistics, and other fields where applied machine learning can most directly deliver solutions at a global scale. This report is part of my climate sources list at the bottom of the post, but the International Energy Agency estimates that AI-led solutions could reduce global emissions across all industries by roughly 5% by 2035; since AI's own footprint accounts for about 0.5 of those percentage points, the net reduction comes out to around 4.5%. In practice, that means it will likely take far less than a decade for AI to offset its own environmental impact.
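To make that arithmetic explicit, here is the back-of-envelope version (this is my reading of the IEA figures, not the agency's own model):

```python
# Back-of-envelope check on the IEA figures cited above (my reading of them).
gross_reduction = 0.05   # ~5% cut in global emissions enabled by AI-led solutions by 2035
ai_footprint = 0.005     # ~0.5% of global emissions attributable to AI itself
net_reduction = gross_reduction - ai_footprint
print(f"Net reduction: {net_reduction:.1%}")  # -> Net reduction: 4.5%
```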

A major factor in my current stance toward AI development relates to my father's health issues, where I'm being blunt when I say that he would be dead now without a combination of AI-assisted patient advocacy and the drug trial acceleration for experimental treatments that was not possible before AI. I find myself in a unique position, because one of the experimental drugs in active Phase 2 trials for my dad's condition is the first drug in history where both the biological target and the therapeutic compound were discovered by generative AI. It is called Rentosertib. There is an argument some make that any amount of climate impact is unacceptable for AI, and so it must be shut down entirely. Because of my circumstances, I can assure anyone that it's not that simple when the implications of stalling progress are staring you or your family in the face.

I look at something like Waymo and the fact that it's verifiably already 80-90% safer than human driving. There are roughly 40,000 road deaths in the United States per year, which equates to a 9/11-sized domestic death toll every month. With numbers like that, there are obviously very important conversations that need to happen about the extent to which we are willing to use natural resources to prevent large-scale human loss in medicine, transportation, labor, and disaster response, among other things. I am confident that when saving lives is on the line, it becomes obvious that the solution is not all or nothing.
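For anyone who wants to check that comparison, the arithmetic is simple (2,977 is the commonly cited 9/11 victim count):

```python
# Sanity check on the "one 9/11 per month" comparison above.
annual_road_deaths = 40_000        # approximate US road deaths per year
deaths_per_month = annual_road_deaths / 12
sept_11_victims = 2_977            # commonly cited 9/11 victim count
print(f"{deaths_per_month:.0f} road deaths per month vs. {sept_11_victims} on 9/11")
# -> 3333 road deaths per month vs. 2977 on 9/11
```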

3) On the choice to not "cyborg out": I actually pretty much fully agree with you on this. I don't think the way that most people currently engage with AI is making their lives more fulfilling or simpler. Consumer-facing AI, as most people engage with it now, is a very crude tool that is still in its infancy, and I think the current usage patterns are much more explorative than they are helpful for most people. That being said, I have an unofficial side-hobby of getting friends and family set up with personalized AI setups, and the most common thing I hear is that they had no idea any of xyz was even possible. So I think there is a certain level of cultural lag happening right now, especially because of how fast the technology is moving, and I think this year and next are going to represent a turning point in public opinion, which already happened to a certain extent in December 2025 for the software development community. That being said, I don't blame anyone for being wary of technology. I have been interested in computers ever since I was a kid, and I think my disposition naturally aligns with being more open to exploring this technology. Even then, I personally find it important to maintain a sense of wonder for human ingenuity and technological progress.

On the tangent of Luddites, I found it really interesting that you brought that term up. There's a relatively recent book called Blood in the Machine that I've only read part of, but I was surprised to learn in the parts I did read that the Luddites were not as anti-technology as I thought; rather, they were against technology that took away their bargaining power in the labor market. Maybe that's a common understanding, but I didn't know it. With that framing in mind, I think the modern-day shorthand of Luddite to basically mean "technophobe" is probably damaging our cultural ability to distinguish between a technology's capabilities and the social arrangements around its deployment. I actually think the left is much better equipped than many other political coalitions to speak about the relationship between technology and labor. The only aspect I tend to take issue with is that the conversation seems to concentrate on reacting to maladaptive deployments of technology as opposed to proactively building out mutually beneficial ones. This is honestly pretty surprising to me, because open source software, artificial intelligence, encryption, and many of the other computing paradigms throughout history have their roots in pretty opinionated humanistic, holistic, and oftentimes radical politics.

I actually think the parallel between AI and the history of protesting nuclear energy is a good example here. I have an uncle who was very involved in leftist protests of nuclear energy in college and his twenties (the 70s and 80s). Looking back on it, it's hard to retroactively justify the left's historical opposition to nuclear energy deployment, and many iconic leftists such as Stewart Brand (founder of the Whole Earth Catalog) and Armond Cohen (who began his career fighting nuclear power in the 80s) have come full circle to the idea that protesting nuclear energy was one of the most consequential strategic errors the environmental movement ever made. Nuclear is the most energy-dense and lowest-carbon reliable power source that we know of. Countries that expanded their nuclear capability rather than shutting it down now have some of the cleanest power grids in the world, France being the prime example. The protests surrounding nuclear energy had a lot to do with anti-weapons-proliferation sentiment and looming dread around Cold War tensions, arguments that I am sympathetic to, despite it being pretty clear-cut in hindsight that it was not a skillful political posture. The critical mistake they made, which I am arguing we're starting to see happen again now, is that they looked past the benefits the technology could provide because they were not confident enough in the ability of policy to harness the risks. The self-fulfilling prophecy of this is that if you are only focused on how things can go wrong, they tend to just go wrong anyway, probably because no one is spending time trying to make productive use of the technology, and bad use cases proliferate regardless of protests due to the misallocated concentration of resources and power. At the time, you weren't really able to question whether being anti-nuclear was the correct position to have, and it put anyone with a more optimistic view of the technology in a position of being politically homeless. From a policy perspective, being politically homeless is a bit like chopping all of your limbs off and then trying to ride a scooter.

I've recently been curious about how the analogue world transitioned from being a distinct entity separate from the digital world, to its current state of being much less individuated. My opinion is that language has always been the most powerful and natural tool to communicate intention, individuality, and purpose, but over the last couple decades as the digital world took over, language lost some of that power because our communication became gated by structured platforms that control how you interface with, or even what you're allowed to see of, your own data.

We've now built a technology that reclaims language as the driving input for the digital world, meaning we can reclaim and interpret our own data, we can choose how to interface with it, we can avoid dark patterns entirely, we can forgo being advertised to, and we can keep our information private without losing the ability to glean insight or infer important things from it. Pretty much the entirety of our lives has been increasingly digitalized while the ways we can express and introspect digitally have been increasingly restricted, and now that forced corporate structure is disappearing entirely, which I find incredibly exciting. I have a friend who has been doing a lot of experimentation with Los Angeles Open Data, because LA has a massive amount of freely available data on almost every topic (housing, environment, education, community development, transportation, etc.): data.lacity.org

With the tools the average person has available right now, and access to near-infinite troves of open source data, there is a staggering number of untapped, genuinely transformative ideas that people can build to thoughtfully analyze or take action in whatever part of the world/community/society they care most about, all with minimal time, effort, and resources. If there is anything truly astounding to me about this technology as a whole, it's the rift between how enabled everyone is now versus how little people have caught on yet, likely due to how fast it's all happening. As an example, I used the above LA data to build a series of visualizations that I attached below this paragraph. They were built autonomously within five minutes while I was writing this post. They're rudimentary, but they were the result of a single sentence and five minutes of my computer sitting there. Imagine what could be done with a single eight-hour day and an actual productive thesis in mind beyond "Using the publicly accessible Los Angeles County open data, what are some examples of data sets we could compare and infer possible correlations between regarding native wildlife. Make seven of them."

Wildlife & Environment — Los Angeles County (built autonomously from LA Open Data)
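If you're curious what this looks like mechanically, here is a minimal sketch of pulling one of those datasets yourself. The dataset ID below is a hypothetical placeholder; data.lacity.org is a Socrata portal, so any real dataset ID from the site plugs into the same URL pattern.

```python
# Minimal sketch: fetch a dataset from LA's open data portal (Socrata SODA API).
# The dataset ID below is a hypothetical placeholder; browse data.lacity.org
# for real IDs (they appear in each dataset's API documentation).
import requests
import pandas as pd

DATASET_ID = "xxxx-xxxx"  # hypothetical placeholder
url = f"https://data.lacity.org/resource/{DATASET_ID}.json"

# SODA accepts simple query parameters; $limit caps how many rows come back.
rows = requests.get(url, params={"$limit": 5000}, timeout=30).json()
df = pd.DataFrame(rows)

print(df.shape)   # rows x columns pulled down
print(df.head())  # first few records, ready for plotting or joining
```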

I think we've lost sight of how much of the digital world we can reclaim, and how much of the analogue world we can understand more wholly, now that we have language as an interface to what was once an inaccessible, rigidly structured, corporatized digital world.

Overall, I think this point is actually one we align on, but we are coming at it from different angles. I 100% agree that I don't want to be forced into using or liking a particular technology. I think most of these tech companies are maladaptive and have incentives that are misaligned with general wellbeing. Where we begin to stray from each other is that I don't see AI as another layer of complicating digitalization so much as the introduction of an analogue, open source input (language) into something that was traditionally sanctioned by corporations and suddenly no longer is. There are many parts of my life that I don't let AI touch, my writing being an example, since I use writing to think, which is why this ballooned into 3000+ words. But the parts of my life that were previously entrenched in dark patterns, paywalls, excessive advertising, and bureaucracy were the first to go. I fully agree that there are many mundane, slow, and inefficient parts of life worth experiencing, and I find myself having more time to experience them in their fullness now that the toxic parts are taking up less of my time.

4) The altruism or lack thereof of AI leadership: I'd clarify up front that my position is that there are some genuinely well-meaning and incredibly thoughtful people involved in AI engineering. They aren't the people who end up on the news, but they do work at some of the big name-brand labs, and I do believe they are well-meaning. I would say there's almost a 100% probability that I share your disdain for the same people you have in mind, but this goes back to my original framing at the beginning: I don't think a small number of people doing harm with a technology should bar a larger number of people from doing good with it.

I think the destructive use cases like immigration enforcement are abhorrent and immoral. I also believe that their widespread usage is more the result of a political failure, since the government contracts being signed are a direct result of the major right-wing win in 2024, than an indictment of the underlying technology, which is also used for search and rescue, disaster response, food supply chain safety, and disease outbreak tracking.

That being said, I am a fan of the POSIWID ("the purpose of a system is what it does") rhetorical argument for untangling complicated conversations about maladaptive systems, and I think there's something to be said there. However, when we're looking at a technology like AI, the technology is not the system. The technology is the tool being wielded by the system. A scalpel in a surgeon's hand and a scalpel in an attacker's hand are the same sharp metal object with no opinion. That said, I think you could reasonably argue that the scaling properties of AI make the tool/wielder distinction less clean than it is with a physical object, and that's an argument I don't feel fully settled on. Regardless, that analogy points toward why I feel so frustrated by the lack of productive and creative political engagement on the left. If the only people at the table are the ones writing the contracts for Palantir, then POSIWID becomes a self-fulfilling prophecy: it's doing what it does because the people who decided to show up and play ball decided what it would do. I'm not going to sit here and act like I endorse Alex Karp or Palantir, but they are very directly enabled by the political and monetary incentives that surround them. In an alternate reality where they were being offered money by a leftist government to do altruistic things as opposed to propping up right-wing political ambitions, they'd take the money either way. This is verifiably true because Palantir already does this, maintaining long-term contracts with more neutral or altruistic organizations such as the National Center for Missing and Exploited Children and the National Institutes of Health. This corporate-mercenary dynamic squarely aligns with environmental scientist Donella Meadows's systems theory, which argues that structure determines outcomes more than the individual morality of the people inside the system. Put different people in the same structure and you get the same results.

5) "AI is doing harm now:" I'm not convinced that there's ever been a large scale milestone in technology that hasn't had to grapple with both the good and the massively bad. Satellites (communication networks / science vs surveillance), aviation (global transportation vs war machines), the printing press (information exchange vs propaganda), gene editing (curing genetic conditions vs eugenics), cars, pharmaceuticals, GPS, nuclear physics, the internet as a whole, all have major upsides that would be negligent to ignore, as well as major exploitable weaknesses that can be harnessed to the detriment of society.

I have friends who have died in the opioid crisis, and the pharmaceutical industry has been keeping my dad alive for the past two years. Problems become harder to solve when you have to consider the good with the bad, but I think that's the only place we can find real progress. At the moment, AI leaders are disproportionately wrapped up with right-wing leaders, since those are the only ones who pick up the phone. Look no further than Bluesky, where nearly every prominent voice in AI has been shunned off the platform. I think the left's refusal to put its thought leaders into the same room as AI leadership in good faith is a critical strategic mistake that reduces its ability to shape how this technology propagates through society. This isn't even theoretical. There are clear-cut examples of this dynamic at play, where left-leaning media appearances frame AI development as a cost with no benefit. Take this CNN interview with Reed Showalter, for example: "I have yet to see AI solve cancer, and I would love to see it right now. The consequences have largely been increased costs for electricity and water and a medium-term decrease in employment and wages for the people in both the district, the state, and the country." (source) These are not the words of someone who is thinking about the future applications and thoughtful deployment of a technology. They are the words of someone toeing the party line to win the seat in their district. Democrats are rather transparently making it politically dangerous for their own candidates to engage with AI leadership.

Meanwhile, AI industry money is flooding toward Republicans, who are offering these companies the chance to shape policy together. The outcome is that the rules of AI are being written by the right. It's worth noting that almost every prominent AI voice has a strong history of Democratic or left-leaning political donations. These are not people who are incapable of or unwilling to sympathize and build with the left.

I don't disagree that AI is being used harmfully, and it is not my intention to hand wave away actual concerns. I actually have many concerns and they range from very minor to entirely disastrous. They're just less in the camp of climate concerns, and more in the camp of weaponized autonomous drones and economically transitioning to a post-labor society, conversations in which it is imperative that the left take part, lest our entire future be determined by a single side of the political aisle.

I don't think there is any way to make the world a better home without actively reckoning with and extinguishing, to the extent we can, the negative use cases of a necessary technology while actively propping up the good ones. That second half is what I do not see the left doing. I can see it being disagreeable to call AI "necessary," and I think whether that label applies has a lot to do with personal views that are too deep-seated in any one person's POV to be worth debating. I can totally understand why someone very invested in XYZ visions of the future would argue for or against that classification, and I am perhaps on the opposite side of that classification from you.

I'm fairly confident that we are striving for a similarly aligned baseline reality: namely, one that is as free of human suffering as possible and gives people the freedom, space, time, health, and energy to pursue a fulfilling and happy life. With that said, please give me the benefit of the doubt that I am not trying to be antagonistic or inflammatory. I genuinely enjoy getting the opportunity to write out and think through my opinions, and I enjoy reading yours as well.


Climate Sources:

Scale & Proportion Data
How AI Compares
What's Driving Utility Costs?
Price & Grid Data
Misleading Media Coverage
Misunderstood Studies
The Actual Impact
Efficiency Gains Over Time
Water Recycling & Infrastructure Investment
How AI Is Helping Save Water
Baseline Data