
Use Generative AI Effectively: AI ethics

Ethical considerations related to Generative AI

Generative AI (GenAI) raises a range of ethical concerns that can be organized into several key categories:

Intellectual Property and Legal Risks

Copyright Infringement: Generative AI tools may train on copyrighted materials, leading to potential violations of intellectual property rights and legal uncertainty about ownership of AI-generated works. For example, in 2023, The Atlantic reported that Meta trained its generative AI, in part, on Books3, a dataset containing more than 170,000 pirated, copyrighted books.

There are multiple ongoing lawsuits over the use of copyrighted material to train AI. AI companies claim that this use qualifies as fair use. Copyright holders argue that, because AI uses its training material to produce text, images, or sounds that are often closely related to the copyrighted originals, without attribution, this use exceeds the bounds of fair use.

Mitigating this ethical issue: Until these cases are decided, users should:

  • Use AI models that provide attribution for the material they create.
  • Include the phrase “Cite your sources” in AI prompts.
  • Be sure to use proper citation yourself.

Privacy and Data Security

Data Privacy Violations: Generative AI models are often trained on massive datasets that may include personal or sensitive information scraped from the internet, sometimes without consent. For example:

  • Meta has confirmed using publicly shared posts (including photos, captions, and comments) from Facebook and Instagram to train its generative AI models.
  • LinkedIn has stated that it uses members’ usage data and personal data to train its generative AI models. Users can opt out of having their data used for AI training, but opting out only affects future training, not past usage.
  • X has also begun using user content to train its AI models, and users can no longer opt out of this practice.

Furthermore, some AI platforms store users’ queries and prompts, including any PDFs or other files uploaded as input, and may use this information for model training or other purposes without explicit consent.

All of these examples raise concerns about users’ data privacy and control. They can lead to privacy breaches and make it difficult for individuals to have their data removed from models, which in turn raises compliance issues with privacy laws.

Sensitive Information Disclosure: AI-generated outputs can inadvertently reveal confidential or proprietary information, especially if the system was exposed to sensitive data during training or user interactions. This can happen, for example, when a crawler or scraper ingests archives of stolen or pirated content. If a malicious actor hacks into a proprietary site, downloads personal information and passwords, and then posts them online, a generative AI model may absorb that content during training. When a malicious actor does this deliberately to contaminate an AI’s training data, it is called data poisoning.

Mitigating these ethical issues: Until all generative AI platforms are transparent about where they obtain the data to train their models, users should:

  • Investigate how to “opt out” of having their input to AI models used for training.
  • Adjust privacy settings on social media accounts.
  • Protect against hacking by reviewing the privacy settings on their devices and using encryption when sending sensitive data.
  • Think twice about uploading materials to GenAI, especially those that are copyrighted.

Misinformation and Content Integrity

Unintentional misinformation produced by hallucinations: AI models sometimes produce plausible-sounding but factually incorrect information ("hallucinations"). In a real-world example, the legal case Mata v. Avianca, a New York attorney used ChatGPT to conduct his legal research. The federal judge overseeing the suit found that the attorney’s brief contained citations and quotes that were nonexistent: ChatGPT had made them up. Such hallucinations can mislead users and erode trust in digital content.

Disinformation and Deepfakes: Generative AI may have been trained on biased or false content and may therefore replicate it. This has led to real-world problems. For example, a 2023 analysis of more than 5,000 images created with the generative AI tool Stable Diffusion found that it amplifies both gender and racial stereotypes (Nicoletti & Bass, 2023).

Generative AI can also be used to intentionally create realistic but fake text, images, audio, or video, making it easier to spread misinformation, manipulate public opinion, or commit fraud.

Mitigating this ethical issue: Misinformation and disinformation are a persistent problem. To avoid falling victim to misinformation and disinformation, users should:

  • Verify the accuracy of all information, whether obtained from generative AI or the open internet, using reliable sources that cite evidence and empirical data.
  • When a source, whether generative AI or a website, provides a link as evidence for an argument, follow that link and judge for yourself whether the linked material is reliable and truly supports the claims it is cited for.
  • Don’t be satisfied with one or two sources. Consult multiple sources to seek a consensus of opinion and/or diverse voices.

Social and Workforce Impact

Workforce Displacement: The automation of tasks such as writing, coding, and content creation through generative AI can lead to job displacement, changes in job roles, and impacts on worker morale. When textbook creators, for example, use AI instead of human artists and actors to create images and videos for their e-books, what happens to those artists and actors? What if textbook creators use AI to write the textbook itself? What happens to the subject-matter experts who might have written the text? What about accuracy? Does the promise of reduced textbook costs justify these practices?

This raises the question: what is uniquely human in what humans create? What is the value of the human hand?

Labor Exploitation: Creating supervised training input for generative AI often relies on low-paid, short-term contract labor for data annotation and content moderation. When reviewing potential training data, these workers are sometimes exposed to harmful or traumatic material, such as graphic or abusive content.

Environmental Impact

Resource Consumption: Training and operating large generative AI models require significant computational resources, contributing to high energy usage and environmental costs. For example, training the GPT-3 model consumed an estimated 1,287 megawatt-hours (MWh) of electricity, roughly equivalent to the total energy an average American household uses over 120 years. The process also produced approximately 552 metric tons of CO2e (carbon dioxide equivalent).
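The household comparison above can be verified with quick arithmetic. Note that the household consumption figure below (roughly 10.7 MWh per year, based on U.S. Energy Information Administration averages) is an assumption, not a number from this article:

```python
# Sanity check of the GPT-3 training-energy comparison.
gpt3_training_mwh = 1_287       # estimated training energy, from the article
household_mwh_per_year = 10.7   # assumed average U.S. household usage (EIA estimate)

years = gpt3_training_mwh / household_mwh_per_year
print(f"{years:.0f} years")     # prints "120 years", matching the article's claim
```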

Manufacturing and E-Waste: The demand for high-performance computing hardware increases indirect environmental impacts due to manufacturing, transportation, and eventual electronic waste.


Brief list of sources (more sources are in the AI Ethics Deep Dive box)

Leffer, L. (2025, February 19). Your personal information is probably being used to train generative AI models. Scientific American. https://www.scientificamerican.com/article/your-personal-information-is-probably-being-used-to-train-generative-ai-models/

Thompson, A. (2022, March 2). Hacking poses risks for artificial intelligence. Center for Security and Emerging Technology. https://cset.georgetown.edu/article/hacking-poses-risks-for-artificial-intelligence/

Case tracker: Artificial intelligence, copyrights and class actions. (2025, May 6). BakerHostetler. https://www.bakerlaw.com/services/artificial-intelligence-ai/case-tracker-artificial-intelligence-copyrights-and-class-actions/

Nicoletti, L., & Bass, D. (2023, June 14). Humans are biased. Generative AI is even worse. Bloomberg Technology + Equality. https://www.bloomberg.com/graphics/2023-generative-ai-bias

Practical Lessons from the Attorney AI Missteps in Mata v. Avianca. (2023, August 8). Association of Corporate Counsel (ACC). https://www.acc.com/resource-library/practical-lessons-attorney-ai-missteps-mata-v-avianca

Mehta, S. (2024, July 4). How much energy do LLMs consume? Unveiling the power behind AI. ADaSci. https://adasci.org/how-much-energy-do-llms-consume-unveiling-the-power-behind-ai/

Explained: Generative AI’s environmental impact. (2025, January 17). Massachusetts Institute of Technology. https://news.mit.edu/2025/explained-generative-ai-environmental-impact-0117

AI Ethics Deep Dive

For a thorough bibliography on the ethical concerns raised by generative AI, see the articles included in:

Another resource for numerous articles related to AI ethics comes from Amherst College: