Keeping Up With… AI Ethics
This edition of Keeping Up With… was written by Brandi Hart.
Brandi Hart is Interdisciplinary Librarian at the University of Colorado Boulder, email: brandi.hart@colorado.edu.
Introduction
Artificial Intelligence (AI) has increasingly been integrated into nearly every facet of life, raising pressing ethical questions about how AI is designed and used and about its impacts on human society and the environment. While AI, the field of science dedicated to building systems that can match human performance on tasks a human can do,[1] has existed since the 1950s, the public release of the generative AI tool ChatGPT in November 2022, and the accelerated commercialization of AI since then, has brought AI to the forefront of public awareness and concern.
Given its ability to generate content previously created only through human intelligence, generative AI has also greatly impacted higher education, academic libraries, and academic research. This article therefore introduces some of the fundamental issues in AI Ethics that academic librarians must understand in order to inform their individual and institutional decisions on adopting AI, teaching with it, and setting policy, and to avoid causing ethical harm given AI's disruptive nature as an unprecedented technology.
What is AI Ethics?
AI Ethics addresses the ethical issues raised by AI: what moral norms should be encoded into AI to try to make it safe (alignment); what constitutes morally good and morally bad uses of AI at the societal level (governance, policy, law) and the individual level; and the ethical consequences and harms of AI, whether from its design or from how it is used. AI Ethics broadly addresses:
1. Ethical issues caused by features of AI (e.g., privacy issues, since AI depends on personal data for training)
2. Ethical issues caused by how humans choose to use AI (e.g., defining ethical standards for responsible AI use vs. humans using AI to harm one another)
3. The social and environmental impacts of AI use (e.g., using AI automation to replace human labor) [2]
Since this short article cannot address every issue in AI Ethics, I will focus on the ethical use of AI (not the ethical design of AI) and discuss one point from each category:
1. The ethical issues that arise from AI’s nature as an unprecedented form of agency; namely, the erosion of the fundamental principles of human dignity due to AI use
2. The ethical use cases of AI (the good, the bad, and the overuse) to guide AI usage
3. The social and environmental harm caused by irresponsible overuse of AI
(Unless specified otherwise, “AI” will refer to generative AI after the following section, given it is the predominant form of AI used in academic libraries.)
AI is Agency without Intelligence
AI encompasses a variety of systems, algorithms, and models that excel at tasks in domain-specific areas with clear goals and objectives. Yet AI only "mimics thought and reasoning";[3] hence, it is not intelligent. Indeed, AI is merely an unprecedented form of agency (the ability to act on, interact with, and manipulate the physical world) that is devoid of intelligence (the ability to think rationally). [4]
By design then, AI is incapable of understanding; it cannot discern right from wrong, truth from falsehood, reality from fabrication, and other concepts that require intelligence. In fact, AI just bypasses these altogether. [5] It is because of its very nature that AI often “hallucinates” or generates false information that it states as fact. Similarly, AI’s ability to automatically produce content quickly with “unprecedented personalization” and “predictive power”[6] makes us severely overestimate its abilities and overlook its flaws.
Human Agency, Intelligence, and Responsibility
AI's capability to subtly shape our thoughts and actions, "predetermined by underlying algorithms," is pervasive and covert, and it is leading many people to unknowingly give up parts of their autonomy. It is therefore essential that we protect the fundamental principles of human dignity: agency (what we can do); capability (what we can achieve); self-realization (who we can become);[7] and care (how we treat one another and our environment). The erosion of these principles not only leads to social and economic inequality but also threatens to limit the very things that make us human.
Unfortunately, this erosion is already underway: profit-driven companies, lured by the "efficiency" of AI, have chosen to replace many people's jobs with AI automation, devaluing their intelligence, experience, skills, and creativity and leading to unemployment and underemployment. On the individual level, consider how much one's personal identity, self-esteem, and life aspirations are tied to one's work, and how adversely this valuation of artificial agency over human agency affects people personally, socially, and economically.
Furthermore, AI often bypasses the need for human intelligence. A pencil and notebook are complementary to our cognition: they aid our ability to do long division, for example, and thereby improve our intelligence. ChatGPT, in contrast, is a competitive cognitive artifact: it doesn't so much complement our cognition as replace it. [8] AI limits how we form judgments, since much of the decision-making is done by algorithms and the resulting information is simply presented to us, which harms not only our critical thinking skills but also, importantly, our self-determination. In addition, AI amplifies social discrimination because of the biased data AI systems are trained on, furthering social inequality and prejudicial outcomes; this is not just a technical failure but a "profound ethical failure." [9]
The Good, the Bad, and the Overuse of AI
To better understand how to apply AI Ethics to our own use of AI, the ethical use cases of AI can be broken down as follows:
Good use of AI achieves a beneficial outcome for the betterment of humanity and/or the environment, such as using predictive AI in medical research and in the acceleration of pharmaceutical development (e.g., AlphaFold), or in monitoring air and water quality.
Bad use of AI is the intentional misuse of AI to achieve harmful outcomes, such as cybercriminals using AI to automate online identity theft, or despotic governments using AI to mass-surveil threatened populations or produce disinformation and propaganda.
Overuse of AI is pernicious and the most pervasive of the use cases. It is the use of AI for an ordinary task that a human can do perfectly well, with no substantial human objective or outcome accomplished. Consider the uses of AI in libraries (and society at large): using AI to summarize articles for research papers (or the proliferation of AI services built into databases for this purpose); using AI to answer research or reference questions; creating AI-generated images or other media ("AI slop"); and using AI productivity tools or virtual assistants to schedule your day or write your emails. Using AI in these ways achieves no common good for humanity or the planet; instead, it values one's convenience above all else. That is, we become complicit when we unnecessarily use AI tools for frivolous reasons.
We must use our human intelligence and care, not convenience, to guide the focused use of AI toward the complex issues humans face. Simply put, overuse of AI must be avoided to preserve human dignity and to limit the harm to the environment.
Environmental Impacts of AI Overuse
AI is energy-intensive at every stage of production, from the manufacture of specialized AI chips, the training of AI models, and the running and cooling of data centers, to our end use of it. Its high electricity demand furthers climate change: in the U.S., the predominant source of electricity powering AI data centers is the burning of fossil fuels (coal and natural gas), the primary drivers of climate change. [10] Unfortunately, U.S. AI companies now have less incentive to use renewable energy given the passage of the One Big Beautiful Bill, which rolled back the tax credits that incentivized clean energy such as wind and solar. [11] Electricity demand from AI data centers is projected to more than double by 2030, with the U.S. being the largest driver of increased electricity demand globally. In fact, within the next five years the U.S. is set to consume more electricity for AI data centers alone than for the production of all other energy-intensive goods (aluminum, steel, cement, chemicals, etc.) combined. [12]
To better understand how individual use of AI adds to this, here is a simple illustration: one average ChatGPT prompt (or similar AI service) uses as much electricity as running a lightbulb for 20 minutes, about 10 times as much electricity as a normal Google search. [13] Considering that hundreds of millions of people prompt ChatGPT multiple times each day, the energy costs are far from insignificant. In fact, a typical AI data center consumes as much electricity in a year as 100,000 households,[14] and as of November 2025 there are an estimated 4,165 AI data centers in the U.S., by far the most in the world (the second most is the UK with 499). [15] One can conclude, then, that focused, good use of AI is crucial to limiting the harm that AI's energy consumption inflicts on the planet.
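The arithmetic behind the lightbulb illustration can be checked with a short back-of-envelope sketch. The bulb wattage (9 W, a typical LED) and the one-billion-prompts-per-day volume are illustrative assumptions of mine, not figures from the cited sources; the 20-minute and 10x figures come from the illustration above.

```python
# Back-of-envelope check of the lightbulb illustration.
# Assumptions (not from the cited sources): a 9 W LED bulb and an
# illustrative volume of 1 billion prompts per day.

BULB_WATTS = 9            # typical LED bulb (assumption)
MINUTES_PER_PROMPT = 20   # "lightbulb for 20 minutes" per prompt

wh_per_prompt = BULB_WATTS * MINUTES_PER_PROMPT / 60   # watt-hours per prompt
wh_per_google_search = wh_per_prompt / 10              # "10 times" a search

print(f"~{wh_per_prompt:.1f} Wh per prompt")           # ~3.0 Wh
print(f"~{wh_per_google_search:.2f} Wh per search")    # ~0.30 Wh

# Scaled to the hypothetical 1 billion prompts per day:
daily_mwh = 1_000_000_000 * wh_per_prompt / 1_000_000  # megawatt-hours/day
print(f"~{daily_mwh:,.0f} MWh per day")                # ~3,000 MWh per day
```

Even under these rough assumptions, prompting at scale consumes thousands of megawatt-hours every day, which is why the individual decision to prompt or not to prompt aggregates into a real environmental cost.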
Conclusion
It is up to all of us to create a society that values human agency, care, and responsibility, for the benefit of each other and of the natural world, so that we have, and future generations inherit, a world that can support our well-being and aspirations. From an AI Ethics standpoint, then, academic librarians should always question their use of AI and, if it is not necessary or morally beneficial, opt out of using it, both to help stop the erosion of human dignity and to work toward environmental survivability.
Notes
[1] Stuart Russell and Peter Norvig. Artificial Intelligence: A Modern Approach, 4th ed. (Hoboken: Pearson Education, 2021), 1.
[2] Changwu Huang, et al., “An Overview of Artificial Intelligence Ethics,” IEEE Transactions on Artificial Intelligence 4, no. 4 (2023): 801.
[3] Luciano Floridi, The Ethics of Artificial Intelligence: Principles, Challenges, and Opportunities (New York: Oxford University Press, 2023), 12.
[4] Luciano Floridi, "The Ethics of Artificial Intelligence: Exacerbated Problems, Renewed Problems, Unprecedented Problems," American Philosophical Quarterly 61, no. 4 (2024): 302.
[5] Luciano Floridi, The Ethics of Artificial Intelligence, 27.
[6] Luciano Floridi, “The Ethics of Artificial Intelligence,” 306.
[7] Luciano Floridi, et al., "AI4People—An Ethical Framework for a Good AI Society: Opportunities, Risks, Principles, and Recommendations," Minds and Machines 28, no. 4 (2018): 690.
[8] David Krakauer, “Will A.I. Harm Us? Better to Ask How We’ll Reckon with Our Hybrid Nature,” Nautilus, September 5, 2016. https://nautil.us/will-ai-harm-us-better-to-ask-how-well-reckon-with-our-hybrid-nature-236098/.
[9] Luciano Floridi, “The Ethics of Artificial Intelligence,” 302.
[10] International Energy Agency (IEA), Energy and AI (2025), 86. https://www.iea.org/reports/energy-and-ai.
[11] Dara Kerr, “AI Brings Soaring Emissions for Google and Microsoft, a Major Contributor to Climate Change,” NPR, July 12, 2024. https://www.npr.org/2024/07/12/g-s1-9545/ai-brings-soaring-emissions-for-google-and-microsoft-a-major-contributor-to-climate-change.
[12] IEA, 14.
[13] Kerr.
[14] IEA, 38.
[15] Statista, "Leading countries by number of data centers as of November 2025," November 2025. https://www.statista.com/statistics/1228433/data-centers-worldwide-by-country/.