What is the Anti-AI Stance?

The school year is officially underway, and we here at SIFDF are diving deeper into the world of artificial intelligence through an Artificial Intelligence Law and Policy course. One of the aspects we truly enjoy is sharing our perspectives on the rapidly evolving landscape of AI policy and, even more importantly, hearing from our peers.

One classmate we’ve had the pleasure of working with is Elle Brown. In late July, I came across an article she published titled “9 Reasons Why I Won’t Use AI—And Why You Probably Shouldn’t Either” (linked here). I highly encourage you to give it a read. Her piece introduced me to some compelling arguments against AI use, prompting me to reach out to Elle for a deeper conversation about her views and the insights behind her stance.

Below is our discussion.

Q: You frame AI as a potential threat to prioritizing people in a convenience-driven world. Can you explain the impact of this? Additionally, how do your personal experiences with community, collaboration, or mentorship shape your vision for a future that values human connection over AI-driven efficiency?

A: As someone who values open and honest communication more than anything else, I know that it can be challenging. Having deep relationships with people is not easy or convenient, but it is necessary for humanity. We are the only species that can act on more than just instinct, and I believe we have an obligation to do so. This means doing things that are not easy and not always natural. It means challenging ourselves to be uncomfortable and to be inconvenienced for a greater goal or reason. More than to survive, we must live. In order to create a community where we can truly live, we must be inconvenienced for the sake of others; we must do things that do not directly benefit us in the short term. We must pick up groceries for our roommates who are busy with work or school. We must go to our niece’s softball game, knowing that she’s just going to sit in the outfield and pick flowers. We must show that other people matter to us in order to form a community. If we do not form a community, we have lost a part of what it means to be human.

In relation to AI specifically, I see ways in which this community is threatened by convenience. Why would we work to form a deep relationship with someone when we can plug all our thoughts into a chatbot that can respond immediately and say exactly what we want to hear? Why would we put effort into forming a messy human community when we can have a perfect relationship with an AI system? Who needs a mentor when we have all of human knowledge at our fingertips? Why bother with humanity when AI has it all figured out? If we become driven only by convenience and forget the benefits that community, collaboration, and mentorship can bring, we will very quickly come to the realization that there is no need for imperfect humanity.

As a Christian, I recognize the inherent imperfection of humanity. By my religion, I know that we will never be able to live up to perfection and will always fall short. For me, this means giving grace for shortcomings. It means being forgiving and understanding when an imperfect creation cannot meet whatever arbitrary standards I have set. It means that when I fall short of goals, I will have a community ready to catch me because they know I will fail. In a community of imperfect humans who recognize their imperfections, there is so much more room for grace and improvement. I will not be left behind because of my imperfections but will be embraced because of them. If these communities are not built, if AI replaces them, what am I here for? What can I possibly do to stand against the all-powerful system? Why would someone catch me when they have no need for me?

Q: Your concern about AI’s lack of regulation and uncertain future is compelling. What form of regulations do you believe would be ideal for AI’s future?

A: I am no expert on regulations and how they play out in real time. I want to make that clear before I speak on them. My regulations are probably more idealistic than could actually work, but I’ll share my ideas anyway.

First and foremost, I believe that there need to be restrictions on what kinds of images an AI system can generate. Some laws and executive orders are being made about this, but I believe harsher restrictions on generating pornography need to be put in place. There are already numerous cases where people have created “revenge porn” of actual people in their lives, using an AI generator to make semi-realistic pornography of someone they know. This is, frankly, a disgusting abuse of AI that needs to be regulated. The generation of illegal materials, beyond just child and revenge pornography, has to be regulated as well. With how the systems work as is, it is very easy to have an AI system generate a crime strategy, which I see as a definite pitfall. A couple of creators have made videos where they are able to use an AI system’s own code against it to generate illegal plans and strategies after only a few prompts. While the AI systems might initially refuse, they are not coded well enough to keep up with human manipulation.

I believe the environmental impact of AI also needs to be regulated. With the sheer amount of potable water AI systems use to stay active, some areas have already had to cut back their personal water use to make up for the AI usage. Especially in rural areas that do not have systems in place to protect water supplies, AI regulations need to be made to protect water for human use, as humans (at least I think) need it more than AI data processing centers.

Q: With AI’s regulatory landscape being uncertain, how do you personally navigate the tension between embracing technological innovation and maintaining caution, and what advice would you give to peers who are facing the same struggle?

A: I have come across a great number of people who, when confronted about using AI for things, often begin their defense with “well, I only use it for x thing.” It’s hard to tell people that they shouldn’t use AI to make their outlines for them when it’s an offhanded comment and I don’t have time to get on my soapbox about everything. I will admit that I am often tempted to use AI for things. I see how it could make my life easier in some ways, but that doesn’t necessarily mean it makes my life better. I know that, especially for trial team tryouts, it would have been so easy to plug the cases into ChatGPT to come up with a theme, open, direct, cross, and close. I wouldn’t have to spend my summer thinking of this material; I could just have it generated in seconds for me. And wow, that is tempting. It is really hard to avoid convenience in all aspects of my life, but I made a promise to myself that I would. It can be hard knowing that something that takes me ten minutes could be done in seconds, but I know that learning the skill now will serve me better later. I know that the convenience I forgo now will benefit me in the long term and make me a better lawyer and a better person.

Q: Your article highlights the risk of complacency when using AI. How has your preference for seeking feedback from human peers or mentors shaped your growth as a law student, and why do you value their critical input over AI’s responses?

A: Human feedback is more important to me than anything AI will say because it is humans who are interacting with me every day. It is humans who will see the output of any feedback they give to me, so why would I care about what a machine says? If there is an associate at my summer clerkship who tells me they want something written in a certain way, I am going to write it that way regardless of what AI says about it, because AI is not reading it. AI is not going to court with me, so why would I care about its feedback when I can get feedback from the judge who is listening to my argument? My peers in law school also provide so much more than just academic feedback; they can watch movies with me, help me move out, send me funny reels, bring me food when I’m sick, and so much more. I’d rather spend extra time with them and build that relationship than just get quick feedback from an AI system.

Q: What personal values or principles guide your decision to avoid using AI tools in your daily life or studies? 

A: In my program in undergrad, we spent a lot of time discussing what makes a good life. While I don’t have a perfect answer for that question, I know that human connection and community are necessary. I value them so much more than academic or workplace success. I value them more than financial stability. While this is not the case for everyone, I believe it to be so critical that humans have relationships with other humans. To bring my faith back into it, we are meant to show others the love of Christ and what it means to belong to His church, and we cannot do that without forming relationships with other people.

Q: What specific concerns do you have about integrating AI tools into your studies or future profession, and how do you think these concerns might impact your work and the work of those around you?

A: I am concerned about people who become so overly reliant on AI tools for classwork that they forget how to do basic things themselves. I worry that people who take these shortcuts now won’t be able to use those shortcuts later and will have no idea what to do. With confidentiality concerns, they won’t be able to put actual client information into an AI system like they can with hypothetical cases. For personal reasons, I worry that people who are more willing to use AI are going to have more opportunities than I will. I worry that, because in the short term they are able to complete work faster than I can, they will get a promotion faster than I will. I worry that my own ethical standards will hold me back professionally.

Q: Have you observed others using AI in ways that conflict with your personal sense of authenticity or integrity, and how has that influenced your choice?

A: The first time I was really confronted with personally conflicting AI usage was during the Williams Trial Competition for 1Ls. I had multiple groups tell me to my face that they were using AI to write all their materials. As someone who spent a lot of time writing everything with her competition partner, I felt shafted. I was spending hours working on my own material while other groups had barely spent any time working on theirs. Instead of spending time working on LRW assignments, I had to spend time reading the case and practicing the material I wrote. Those other groups that used AI got to focus more on schoolwork instead of spending time on the competition. This kind of thing really conflicted with my own sense of integrity, but there was nothing I could do. My feelings after hearing about others’ AI use both challenged and solidified my views. While I could easily give in and have an easier time in the short term, I knew that I would not be able to live with myself if I just went the easy way.

Q: Is there anything we haven’t discussed regarding your decision to avoid using AI that you’d like to share, particularly insights or perspectives that further explain your stance?

A: I mentioned it a little earlier, but the environmental impact of AI is not something to be forgotten. The amount of potable water being used by these processing systems is causing droughts in some areas. People no longer have as much access to drinking water because people are using AI for everything. Even if my earlier points against AI are not convincing, the environmental impact of AI is hemorrhaging drinking water from actual real live people who need it to live. There are basically no regulations on AI companies that are buying land to build processing centers that dry up water in towns across America. Until AI companies begin to self-regulate or try to fix this issue, I personally cannot justify the use of AI for anything.

--

I’d like to sincerely thank Elle for taking the time to work with us and openly share her values and perspectives on AI. Once again, I encourage you to read her article to gain a deeper understanding of this important and often overlooked viewpoint. I also encourage all of us to share our views on this subject, so together we can help shape the future we want to see. I believe these important discussions, when approached with respect, have the potential to make a real difference.
