Navigating Ethical and Legal Challenges in AI Development for Government Employees

Recognize data privacy risks, avoid uploading personally identifiable information, understand copyright limitations of AI-generated content, and comply with ethical, legal, and regulatory standards for responsible AI use.

Understand the ethical, legal, and regulatory responsibilities that come with using AI tools, particularly regarding data privacy and copyright considerations. Learn how improper data handling and misuse of AI-generated content may carry legal consequences and ethical implications.

Key Insights

  • Protecting personally identifiable information (PII) and protected health information (PHI) is essential when using AI systems, as uploading such data may lead to breaches of privacy and legal liability.
  • AI-generated content cannot be copyrighted without significant human contribution, as current U.S. Copyright Office guidelines require human authorship for copyright protection.
  • Regulatory frameworks such as the EU's GDPR and state-level laws prioritize responsible AI use, emphasizing transparency, fairness, and data security while encouraging innovation within regulatory boundaries.

This lesson is a preview from our AI Fundamentals for Government Employees Course. Enroll in a course for detailed lessons, live instructor support, and project-based training.

Get ready to learn how to recognize data privacy and confidentiality requirements when using AI. We'll also summarize legal considerations related to AI-generated content and copyright, cover compliance with applicable AI laws and regulations, and identify ethical challenges such as bias, transparency, and accountability in AI systems. One important thing to know about data privacy and confidentiality is that if a model is learning from your data, that data can surface in responses generated for other users, which could leak private information.

There are steps you can take to minimize this risk, such as using a team license or turning off training (the “improve the model” option, as ChatGPT calls it, in your personalization settings). Still, everything on the internet is potentially hackable, and I think keeping that in mind is a good guiding light for using AI assistants. In fact, during ChatGPT's free-account onboarding, it says, “Don't share sensitive info. Chat history may be reviewed or used to improve our services,” and it invites us to read its terms and conditions and visit its help center.

So, this is something to keep in mind. Before you upload anything into a public instance of any one of these AI assistants, know that it could potentially be shared with others. Now, some AI models also have licensing agreements, so you should consider those terms, permissions, and usage restrictions before using them.

It's also important to ensure adherence to privacy policies and regulations. For example, the EU's GDPR applies to personal data, and some states, like California, have their own AI or data privacy laws that may affect how you interact with these tools. The thing I want you to remember here is that there are rules about the data that you put into these AI assistants.

Knowing that there are rules about the data we put into these AI systems, I want to focus on a couple of key terms. One is PII, which stands for personally identifiable information. We wouldn't want to upload anything into our AI assistant that could qualify as PII, such as full names, Social Security numbers, addresses, passport numbers, dates of birth, credit card numbers, email addresses, or IP addresses.

Even if you're uploading something that doesn't contain all of this information, maybe just an anonymized employee ID with a date of birth and a few other details, it's good practice to avoid uploading anything with any PII at all. Even a single piece of information, put together with other small pieces, could enable a bad actor to assemble the puzzle and eventually recover someone's home address or Social Security number. So a strong recommendation when interacting with AI assistants is not to upload any PII, and a simple scrubbing pass like the one sketched below can help catch the most obvious identifiers before anything leaves your machine.
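To make that concrete, here is a minimal, illustrative Python sketch of a pre-upload scrubbing step. The patterns and the redact helper are assumptions for illustration only; a regex pass like this catches obvious identifiers such as Social Security numbers, email addresses, and phone numbers, but it is not a substitute for your agency's approved redaction tools or data-handling policies.

    import re

    # Illustrative patterns only; real PII detection needs far broader coverage.
    PII_PATTERNS = {
        "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
        "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
        "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
    }

    def redact(text: str) -> str:
        # Replace each match with a labeled placeholder before sharing the text.
        for label, pattern in PII_PATTERNS.items():
            text = pattern.sub(f"[{label} REDACTED]", text)
        return text

    sample = "Reach Jane at jane.doe@example.gov or 555-867-5309; SSN 123-45-6789."
    print(redact(sample))
    # Reach Jane at [EMAIL REDACTED] or [PHONE REDACTED]; SSN [SSN REDACTED].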

A subset of PII is protected health information (PHI), which is protected under HIPAA: information about an employee's health status, the provision of their health care, payments for that care, or biometric data, for example. In fact, there have been lawsuits in which biometric information uploaded to AI systems got out; in at least one state case, I believe, Google ended up paying out millions of dollars because personally identifiable images were released to the public.

I point all of this out to say that if something includes personal information, the best practice is simply not to put it into your AI assistant at all. There are also rules about the generated content that you take out of one of these AI assistants. It's important to know that there are legal risks with AI-generated content, and, to be clear, I am not a lawyer.

Graduate School USA is not a law school, and this is not legal advice; you should consult your own legal counsel. That said, here are some things to consider about the output of your AI assistant.

There has been some back and forth in rulings about AI and what is or isn't copyrightable, but here's a simplified explanation of the current guidance from the U.S. Copyright Office: AI alone cannot hold a copyright. Human authorship is required.

So AI-assisted works can be copyrighted if a human has contributed significant original input. But if I simply went into ChatGPT and asked it to make me a masterpiece painting, that output alone could not be copyrighted. I, as a human, would have to interact with the generative AI tool in a significant, creative way to produce a work I could even think about copyrighting.

It's also useful to be aware that a great deal of guidance on the use of AI has been released in recent years. I've listed some of it on the slides, and while some of these executive orders and frameworks have since been overturned, many of them set the foundation, or at least the tone, for how AI is currently implemented and used across government organizations and agencies. We're not going to go through each one individually in this course, but in summary, they all share the same spirit: if you're using AI, be careful.

We've talked in this module about some of the ethical concerns and security risks of putting private information into an AI assistant, and much of this guidance echoes those concerns and sets the tone and overall direction for careful use of AI. The big takeaway from all of this guidance is that the government is prioritizing the accelerated adoption of AI technologies while ensuring that AI systems are developed and deployed responsibly, transparently, and ethically. Agencies are being encouraged to innovate and adopt AI more quickly, but they must still adhere to strict safeguards around fairness, security, and public trust.

The emphasis is on removing barriers to innovation without compromising the values of accountability, safety, and transparency.


Brian Simms

Brian Simms teaches for Graduate School USA in the area of Artificial Intelligence, helping federal agencies build the knowledge and skills needed to adopt AI responsibly and effectively. An AI educator and author, he focuses on practical, mission-driven applications of AI for government leaders, program managers, and technical professionals.
