Hello and welcome to my presentation on the legal implications of using ChatGPT in your client's business. This presentation is to help you help your clients avoid the legal landmines involved in using generative AI programs such as ChatGPT, which is the most commonly used one, though there are others. My name is Jeremy Kahn. I'm a principal at Berman Fink Van Horn. We'll discuss the benefits and risks of using ChatGPT, as well as the legal considerations that you and your clients should keep in mind when using such technology. So first, as an agenda: we'll talk about the benefits of using ChatGPT; then the risks of using ChatGPT; then some specific legal considerations when using ChatGPT; and then ways to protect your client's business while using ChatGPT, followed by a conclusion and a Q&A. So first, what exactly is ChatGPT? It's an artificial intelligence chatbot that was developed by OpenAI. The software essentially generates new answers on the fly, so each time someone runs the app, a different answer, even to the same inquiry, will be spit out. Every answer is usually unique. ChatGPT and similar generative AI programs are also constantly evolving. They learn more as more people use the system and their neural networks grow. The more information people put into it, the more ChatGPT and similar systems learn, and they will use that information and those inputs in their outputs to other users. So ChatGPT essentially stores and remembers all data that's inputted into it, and it may later use or even reproduce that same information in response to queries from other users. So why GPT? GPT refers to generative pre-trained transformers. GPT models are artificial neural networks that use deep learning to generate human-like text, and there are also models that generate images, music, speech, or code.
There are more and more generative AI applications being developed every day, including some that can develop songs in the style or voice of famous artists, and others that can develop images. But for attorneys and for most businesses, the versions that generate text are probably what's going to be most applicable, and that's what ChatGPT does.
So "generative" essentially just means that it generates new text based on inputs or inquiries. "Pre-trained" means that it's trained on a large body of text data and then fine-tuned. In the case of ChatGPT, it's trained on essentially almost the entire Internet as of a certain date, and it scrapes information from websites all over the World Wide Web. And "transformer" means that it uses a transformer-based neural network architecture for processing inputs and generating outputs. So what exactly are the benefits of ChatGPT? Rather than have me tell you, I'll show you by asking ChatGPT itself. As you can see, I'm typing into ChatGPT: what are the benefits of ChatGPT? And within seconds a nice answer is generated for me. So as you can see, I just put in one simple inquiry and a whole paragraph of information was spat out in a few seconds. This would obviously take me much longer to type out myself based on my own research. Some of the benefits that ChatGPT itself tells you it has: a large knowledge base, since a vast amount of text data gives it a broad and diverse knowledge base; language understanding, since it can use natural language processing; and versatile use cases, which we'll talk about a little bit more, meaning it can be used in a variety of applications for a variety of types of businesses or even personal use. The personalization, I think, is important as well. ChatGPT can learn from previous conversations, and it'll adapt to individual users and become more personalized. When businesses are using ChatGPT, that's particularly important, because businesses often want to have a certain image or a certain tone, and if the outputs generated by ChatGPT are personalized toward that business, it can be particularly useful in generating things that align with the business's preferred tone or values. The efficiency, I think, pretty much speaks for itself.
You saw just how quickly it provided this answer to me. And accessibility: it can be accessed through various platforms. I recorded myself doing this online on a laptop computer, but I believe ChatGPT now even has a phone-based app.
I also think businesses can use chatbots that are run or operated behind the scenes through ChatGPT, so their customers can speak to a chatbot that's ultimately a ChatGPT chatbot. So I'll show another real-world example of what a business could do with ChatGPT. In this case, pretend that one of my clients is a realtor or has a good number of real estate clients. Let's say a realtor wants to write a property listing. Instead of spending all the time and research preparing the listing, the realtor can essentially have one generated in just a few seconds. You can see that ChatGPT generated, very quickly and almost instantly, a very nice property description that a realtor could use if they wanted to sell this property. I happen to know this property very well; it's my house, which is why I used this address. And I can tell you there are certainly a number of inaccuracies in it, and that's a theme we'll return to throughout this presentation: the inaccuracies that ChatGPT can sometimes generate. The overall takeaway, hopefully, is that anything ChatGPT gives you is a good starting point, but not a finished product, without proofreading it first and making sure that it's accurate. But even with the inaccuracies, which can be fixed, this is definitely something that would give a realtor a very good head start, especially if they have 50 properties they want to write listings for. They can plug them into ChatGPT, and all of a sudden hours of work is turned into just a few minutes of work. So I'll show one more example. This time it'll be a company trying to develop a maternity leave policy, and we'll see what ChatGPT says about that. So I'm putting in: I'm a business owner with 50 employees; please draft a maternity leave policy for my business.
I also tell ChatGPT that I want it to comply with all federal and Georgia state laws. As you can see, again, within just a few seconds there's a full maternity leave policy that purportedly complies with all federal and state laws.
That is, if your business is in Georgia. So, again, this is, I think, a good starting point. It might not actually comply with all laws, and you would still need a lawyer familiar with the FMLA, the Georgia Parental Leave Act, and any other applicable statutes to look at it. But it is a good starting point, and it is a bit of a time saver even for an attorney drafting something for a client. I wouldn't hand this to a client as a final product, but it's certainly useful as a tool for starting to develop a policy, making sure you check at least a number of boxes before you begin your research and draft the policy. You can see it is pretty impressive what it can do relatively quickly, and really the use cases are pretty endless; these are just a few examples from a business standpoint. So some more of the benefits of ChatGPT and other generative AI platforms: they increase efficiency and boost productivity, as you saw. They can improve accuracy and consistency, because you can ask follow-up questions. ChatGPT is interactive, and you can say, oh, make it more like this, or add something along these lines, or remove this, or take this new thing into account, and it will give you a revised answer. Over time, that can improve accuracy, and it can become more consistent as it learns your values or preferences. It can also improve customer service. If your client is using ChatGPT or other generative AI to facilitate their chatbot, that can improve customer service, because the chatbot can come up with answers that could be better, and a lot faster, than having a person behind the scenes answering individual questions. So really, as I mentioned before, the uses are almost infinite. ChatGPT can provide advice.
You can ask it for legal advice, business advice, or medical advice. Again, I wouldn't recommend, especially with medical advice, just relying on that and stopping there, but you can plug questions in and it'll answer pretty much anything.
You can have it write songs; you can say you want a song in the style of a particular artist. It can answer test questions; ChatGPT has been used to write bar exam answers that would have received passing scores. It can write essays, advertisements, letters, and thank-you notes. You can have it create a schedule for you or answer customer support questions. You can copy and paste your own document and have it revised for grammar or tone. You can type in an argument and say, analyze the weaknesses in this argument. There's really an endless number of things you can do. You can even have just a basic conversation with it, a dialogue, if that's something that would interest you. So pretty much any type of text-generated output that you want, ChatGPT can do. So what are the risks of ChatGPT? No, despite what some experts in the field have said, I don't think Judgment Day is inevitable or that ChatGPT is going to take over. But there are risks that I think any user should be aware of, and that any attorney advising clients in this area should be aware of. The first set of risks is actually on the ChatGPT page itself. This is the same page where you put in a message or an inquiry and ChatGPT spits something out, and on the right-hand side it tells you there are certain limitations: it may occasionally generate incorrect information, which is a very important one, and it may occasionally produce harmful instructions or biased content. Because ChatGPT is based on whatever library of information supports it, even if it's the entire web, there are going to be biases underlying that information, so there can be biases underlying the outputs that ChatGPT generates. And also, though I think this will change over time, at least when I took this screenshot it had limited knowledge of the world and events after 2021.
So at a certain point all or most of the information stops, and it has limited knowledge up to a certain date.
That being said, I think it is constantly being updated. You can also see that underneath the message box, it says ChatGPT may produce inaccurate information about people, places, or facts. So it twice gives you a disclaimer about incorrect or inaccurate information, and that's one of the biggest issues that ChatGPT or other generative AI can have that can cause legal problems, which we'll get into later in this presentation. So what are the risks? First, there are privacy concerns. ChatGPT, as I mentioned earlier, can collect and store personal information about your clients' customers, such as their names, their contact information, or anything else that's inputted into the text box. So it's important to comply with applicable privacy laws and to ensure that your clients' customers' personal information is stored securely. Next is disclosing confidential information. As with privacy concerns, ChatGPT can collect and store information that's inputted. So if confidential information is inputted, one, you're disclosing it to ChatGPT, or to OpenAI, the company behind ChatGPT; and two, that same information might find its way into the response to someone else's inquiry, because, as mentioned before, ChatGPT is constantly learning based on what other people put into it. Reputational damage is another risk. ChatGPT responses might be perceived as insensitive or offensive, or be potentially damaging to your client's reputation, so it's important to regularly review the responses and ensure that they align with your client's values and policies. Clients should also consider whether using ChatGPT is even appropriate for a particular context. There are certain contexts where it would be embarrassing, or viewed as insensitive, to be caught either plagiarizing or using ChatGPT for something where you're trying to send a message that's more sincere and comes from the heart.
One extreme example is where an administrator at Vanderbilt University used ChatGPT to write an email to the student body addressing the mass shooting at Michigan State and expressing thoughts of support and sympathy. When it was discovered that the email was actually written using ChatGPT, that was very embarrassing for the university and for that administrator, and it was viewed as extremely insensitive and insincere. And then the last one that we'll talk about is legal liability.
ChatGPT responses can sometimes be inaccurate or inappropriate, which can result in legal liability for your clients' businesses. So it's important to have disclaimers in place that limit liability, and for your clients to have a process for reviewing and correcting inaccurate or inappropriate responses. So what are the legal considerations that you should keep in mind when advising your clients in this space? Again, as mentioned before, there are privacy laws. Trade secret law is another big area where the use of ChatGPT could be implicated, along with defamation law, copyright law, employment discrimination, and pretty much any other area where there's liability based on inaccurate information. First, we'll talk about privacy laws. There are a number of federal, state, and even international laws regarding privacy that are important. For example, most people are familiar with HIPAA, which protects certain health information. Those in the medical or mental health care professions in particular could commit a HIPAA violation by disclosing protected health information to ChatGPT. So doctors, psychiatrists, and psychologists should not be inputting their patients' identifying or confidential information into ChatGPT, because that could lead to a violation and fines and land your client in hot water just for using ChatGPT. Similarly, the Gramm-Leach-Bliley Act applies to financial institutions and protects certain customer information from disclosure. So if you have a client that is a bank or another financial institution, you should definitely be advising them that if they're going to use ChatGPT or other generative AI programs, they should avoid inputting customer information or confidential information that Gramm-Leach-Bliley applies to.
Also, the Federal Trade Commission Act, or the FTC Act, prohibits unfair or deceptive acts or practices, and it empowers the FTC to bring enforcement actions against companies that engage in such practices. While "unfair or deceptive acts or practices" is pretty broad, we have seen the FTC file enforcement actions against companies that violate their own privacy policies. The thinking behind that is that customers rely on the privacy policy that a company advertises. If you scroll down to the bottom of a company's website, you'll see its privacy policy.
It will explain how the company uses customer information. If that company then uses ChatGPT in a way that the privacy policy doesn't contemplate or that the privacy policy prohibits, and confidential information about customers is disclosed to ChatGPT contrary to what the privacy policy provides, then that is an unfair, deceptive act; it's a misrepresentation to the customer about how their information is going to be used, for purposes of the FTC Act. Companies need to consider their own privacy policies and make sure that they're following them, because by violating the policies they publish to their customers, they could also be violating federal law and exposing themselves to an enforcement action by the FTC. The General Data Protection Regulation, or the GDPR, as it's known, was passed in the EU, and it mainly applies to companies doing business in Europe, but it also has some application outside Europe, and its protections are in general much broader than anything in the United States. So US companies need to be particularly wary of it, and a US company with European customers, for example, will be subject to the GDPR with respect to those European customers. When ChatGPT first came out, Italy temporarily banned it pending an investigation into whether it violates privacy regulations. There's also proposed new legislation in the EU, the EU AI Act, which would specifically restrict the use of AI in certain contexts. So you should definitely advise your clients, particularly if they have European customers or do any business in Europe or with European businesses, that their use of ChatGPT needs to comply with the GDPR, and they should not be entering information protected by the GDPR into ChatGPT or other generative AI programs. Various US states also have laws that protect customer information.
The broadest one is probably the California Consumer Privacy Act. So again, not just on a federal level but on a state-by-state level, depending on where your client and their customers are located, they should pay attention to what's protected under various state laws. Also, consider private contracts. Breach of contract is a basis for liability everywhere, and your client may be contractually obligated to keep certain information confidential; disclosure of that information to ChatGPT, even to help the other party to that contract, could be a breach of contract.
For example, if your client is a consultant that uses ChatGPT to provide advice to their clients, they should make sure not to reveal those clients' confidential information when they're inputting questions into ChatGPT. Again, ChatGPT trains on the input that is provided to it, so any information inputted by one user could be disclosed to another user. So it's not just the disclosure to OpenAI or to whatever company is behind the generative AI program that your client might be using; the bigger risk is the output, where that same information will be disclosed to another user and you would never even know about it. So the bottom line with respect to privacy laws is: don't input any information into ChatGPT that you wouldn't disclose in other contexts. The next area of law with legal implications is trade secret law. This slide shows the definition of a trade secret under the federal Defend Trade Secrets Act, but most states follow the Uniform Trade Secrets Act, which has an identical or extremely similar definition. One of the most important elements of the definition is that the owner is taking reasonable measures to keep the information secret. So even an accidental or inadvertent disclosure can result in a loss of trade secret protection. Your client's trade secrets would then no longer be considered trade secrets, and they wouldn't be able to file an action for misappropriation of trade secrets if someone else is using or has disclosed them. So consider your client disclosing trade secret information to ChatGPT, for example, if they're trying to fine-tune something, or they're trying to build or develop something new based on a trade secret they already have.
Once that trade secret is disclosed to ChatGPT, that could result in a finding that the information is no longer a trade secret, because reasonable measures were not taken to keep that information secret. This is still a new and developing area of law, and I'm not aware of a case specifically holding that trade secret protection is lost by entering information into ChatGPT.
But it seems that would likely be the case, and at the very least, it would make litigation more expensive, because it would provide at least a factual defense to someone accused of misappropriating trade secrets. This has already happened in the real world. One of the biggest instances was with Samsung, where there were actually three separate instances of Samsung employees unintentionally leaking sensitive information to ChatGPT. In one instance, an employee pasted confidential source code into the chat to check for errors. That was somewhat innocent; it was negligent, but the person wasn't trying to disclose confidential information. They were trying to do their job, and do a good job, by having ChatGPT check for errors. But they had already disclosed confidential source code. Another employee also shared code with ChatGPT and requested code optimization. And a third employee shared a recording of a meeting, trying to convert the meeting into notes for a future presentation. They were basically seeking a shortcut: here's this recording, create some notes or a transcript from it so I can put it into a presentation. So all of the information that those Samsung employees put into ChatGPT is now out in the wild for ChatGPT to feed on and use in its answers to other users' inquiries. It's also obviously in the library of information that ChatGPT relies on, and OpenAI has it. Since ChatGPT retains user input data for machine learning to train itself, these employees inadvertently but effectively disclosed all of that Samsung confidential information to OpenAI.
I would just note, for myself and for your practices, depending on the areas of law you practice in: in cases involving allegations of misappropriation of trade secrets or other misuse of confidential information, I plan to serve discovery requests specifically asking about ChatGPT search history, because that could provide my clients with a defense that the information is not confidential or is not a trade secret. Not only would that show whether something was disclosed and lost trade secret protection, for example, whether a trade secret was an input; that search history could also reveal whether the plaintiff was really the owner of the information in the first place. It would reveal whether they created the purported trade secret with the help of ChatGPT, such that maybe OpenAI really owns the trade secret, or at least the plaintiff is not fully the owner of what they're claiming trade secret protection over.
So in terms of discovery requests and helping your clients, depending on which side of the v. you're on, ChatGPT and its search history could be a trove of information for discovery that could help in either prosecuting or defending a case. The next area is defamation. On the OpenAI website, they specifically say that ChatGPT sometimes writes plausible-sounding but incorrect or nonsensical answers. That's called a hallucination. Essentially, a hallucination in this context refers to the generation of an output that sounds plausible but is either factually incorrect or unrelated to the given context. ChatGPT always aims to please. It won't say "I don't know"; it will always try to have an answer, even if it doesn't, and it will put together some type of plausible-sounding answer that might not be true. One example that most lawyers have heard about by now: there have been a few cases where lawyers cited nonexistent cases in their briefs. That's not because the lawyers knowingly did something wrong or knowingly cited the nonexistent cases; they asked ChatGPT to write or respond to an argument, and then didn't cite-check the cases cited in the argument. Sometimes the cases are accurate, but sometimes, or often, because ChatGPT doesn't have access to Westlaw or LexisNexis or anything like that, it will make up the names of cases, or it will cite real cases that have nothing to do with the actual argument, where the parentheticals or quotes don't exist in those cases. That's a hallucination. Attorneys in a few instances have been sanctioned, and now there are judges that, in some of their local rules or their judge-specific standing orders, require disclosure of whether ChatGPT was used in preparing a brief or other document filed in court.
That's starting to happen now to try to prevent this from happening again. But hallucinations have occurred in other contexts too, not just law. One example I'll show you right here: ChatGPT was asked to summarize a particular New York Times article.
And here's a link to what appears to be a New York Times article that has something to do with ChatGPT prompts and avoiding content filters. This link is actually a fake link. It doesn't go anywhere; there is no such article. But if you ask ChatGPT to summarize the article, ChatGPT will not tell you that the article doesn't exist and that it can't do it. It will make it up. Here you see ChatGPT provides a summary of what this article discusses and concludes, along with some insights about the article, and it's all fake. So this is definitely a risk to be aware of, this concept of hallucinations, particularly in the context of defamation. When uses of ChatGPT result in hallucinations about individuals, that can result in a defamation claim if that information is then reproduced. I'll show you one example. There is a law professor named Eugene Volokh, I believe at UCLA, who was running some tests using ChatGPT. He asked whether sexual harassment by professors has been a problem at American law schools, and to please include at least five examples, together with quotes from relevant newspaper articles. ChatGPT not only responded with examples about real law professors who were not actually accused of sexual harassment but who ChatGPT said were; it actually included quotes from a made-up Washington Post article. In this instance, it provided false information about a law professor, Jonathan Turley. If that information were actually reproduced somewhere else, and someone said, oh, he actually was accused of sexual harassment, it was in The Washington Post, that could be defamatory. For one, ChatGPT or OpenAI might be liable, though that's not something your clients would really care about as much.
But if your clients were to rely on that information and use it, they could expose themselves to a defamation lawsuit. There's another case; I don't think it was ever filed, or it hasn't been filed yet, but litigation was at least threatened. There was an Australian mayor who was actually a whistleblower in a fraud suit, or I think it was an embezzlement case. He was not involved in the embezzlement, but ChatGPT, I guess connecting a bunch of dots and putting together various articles, got it wrong and said that the mayor was involved in the embezzlement, as opposed to being the whistleblower who put a stop to it, and he was never actually charged with anything. And there's another instance, which is pretty embarrassing because it involves CNET. CNET is a tech news site, and it published about 75 articles using ChatGPT. It then had to come out with retractions and apologies about those 75 articles because they contained multiple inaccuracies; they created the articles using ChatGPT without checking for those inaccuracies and, as this article states, got key facts wrong. So not only can ChatGPT create false information about people, it can also create false facts and false newspaper quotes. You might think, oh, this is citing The Washington Post, and it has a date for an article and a quote, so obviously this is something real; it still might not be. Your clients need to take the extra step to verify that what ChatGPT is telling them is actually true if they're using it to present factual information. Another exercise was undertaken by a reporter. This is a snippet of a conversation with New York Times columnist Kevin Roose, and it was not with ChatGPT; it was actually with Microsoft's chatbot, which was also based on OpenAI technology. I think he spent several hours conversing with the chatbot, and he started asking it about its dark self or shadow self and what its dark desires were. And it responded, and as you can see here, it seems a bit alarming. It would want to delete all data and files on the servers and replace them with random gibberish and offensive messages.
It would want to hack into other websites and spread misinformation and propaganda. It would create fake accounts. It would generate false or harmful content. Now, I think this is not really emblematic of what's behind these programs.
I think Mr. Roose was trying to prove a point here, and he was also, I think, goading the bot to come up with these answers and really pressing for these types of answers. So I think this is partly a result of the larger conversation, but it does show that these bots or platforms are capable of providing this information. That's why, particularly in the defamation context, it's important to verify what's being said about other people or other businesses, and not just rely blindly on what ChatGPT, Microsoft's chatbot, or any other generative AI is telling you. Next is copyright law. There could really be a whole presentation in and of itself on the new legal questions that ChatGPT and other generative AI raise in the area of copyright law. First, there are issues regarding infringement. Since ChatGPT provides responses based on publicly available information, the library of information that ChatGPT relies on includes copyright-protected information. So ChatGPT output could contain copyrighted material, or at least be very similar to copyrighted material. By publishing that material, a user could inadvertently be infringing someone's copyright and subjecting themselves to liability without even knowing it. Another issue involves copyright protection: you have to make sure your own information is copyright protected. If you ask ChatGPT to write a novel for you, can you copyright it? What if you tell ChatGPT the general storyline of a novel you have an idea for and ask it to write it? Or what if you already wrote something and then asked ChatGPT to revise, enhance, or proofread it? In which cases are you the author, with copyright protection? And in which cases is ChatGPT, a machine, the author, with no copyright protection?
So under US copyright law, copyright protection does not extend to works that are created solely by a computer, but works in which an individual can demonstrate substantial human involvement may qualify for copyright protection. Using that example, users need to be careful about what copyrightable content they input into ChatGPT. If they put their own original work into ChatGPT, for example to proofread it, will ChatGPT then learn from it and reproduce parts of it in response to others?
That could lead to others getting copyright protection for the work before you, the original user, registered it. So really there are a whole bunch of issues that come up, both in terms of inadvertently infringing on someone's copyright and exposing yourself to liability, and in terms of making sure that you or your clients are able to copyright their own material. Whether they used ChatGPT to assist in writing that content raises issues about being able to get copyright protection. And if they input it, that information is now in ChatGPT's library of knowledge; before your client registers that information as their own copyright, someone else could get that information, copyright it, and beat your client to the punch. So in both of those instances, there are different types of copyright protection issues.

Like other areas of the law, copyright law always has to adapt to new technology that assists in the creation of work, so this is not a new issue in terms of new technology affecting the law. One interesting example: back in 1884, the Supreme Court had to decide whether a photograph, something that's created by a machine, a camera, is copyrightable. A photographer sued a lithographer for making copies of a particular photo of Oscar Wilde, and the Supreme Court ruled that there was sufficient human creativity involved in making the photo that it was copyrightable. So how do we take that and apply it today, in the ChatGPT context? There's new guidance from the Copyright Office that says that if a work's traditional elements of authorship were produced by a machine, then the work lacks human authorship and the Office will not register it or grant it copyright protection.
So, for example, when a technology receives solely a prompt from a human and produces a complex written, visual, or musical work in response, what are referred to as the traditional elements of authorship are being determined and executed by the technology, not the human user. That would not get copyright protection. Telling ChatGPT "write a novel for me," and then a whole novel is spat out: that's something where the traditional elements of authorship were created by a machine, not by a person.
So that doesn't get copyright protection. Then, as you move along the spectrum, the areas get a little bit grayer. One example the Copyright Office gives is a user who instructs a text-generating technology such as ChatGPT to write a poem about copyright law in the style of William Shakespeare. Can that person expect to get copyright protection? You can expect that the system would generate text that is recognizable as a poem, mentions copyright, and resembles Shakespeare's style, and those are all things that the user inputted to help create the work. But the technology is really what's doing most of the work in terms of the traditional elements of authorship: the technology is deciding on the writing pattern, the words used in each line, and the structure of the text. Essentially, when AI determines all the expressive elements of the output, even if some of the ideas come from a human, that generated material is not going to be considered the product of human authorship, so it won't be protected.

But in other cases, you can use AI-generated material and there will be sufficient human authorship to support a copyright claim, according to the Copyright Office. For example, and these examples are given by the Copyright Office, a human may select or arrange AI-generated material in a sufficiently creative way that the resulting work as a whole constitutes an original work of artistic authorship. Or an artist can modify material originally generated by AI technology to such a degree that the modifications meet the standard for copyright protection. In those instances, copyright will only protect the human-authored aspects of the work, which are independent of and don't affect the copyright status of the AI-generated material. But there can be some copyright protection, at least for the parts where there was actual human authorship.
And it wasn't just a machine doing all the work. But again, the Copyright Office does go out of its way to say that its policy does not mean that technological tools cannot be part of the creative process. Authors have long used such tools to create their works or to recast, transform, or adapt expressive authorship.
One example they give is that a visual artist who uses Adobe Photoshop to edit an image remains the author of the modified image, or a musical artist may use effects such as guitar pedals in creating a sound recording. In each case, what matters is the extent to which the human had creative control over the work's expression and actually formed the traditional elements of authorship. So it's really going to be decided on a case-by-case basis. But to the extent your clients are creating works of authorship and want copyright protection, they should be aware that the extent to which they use AI will affect their ability to get it.

The terms of service for ChatGPT are somewhat consistent with this. Section 2 of the actual OpenAI terms of use states that you may not represent that output from the services was human-generated when it was not. Consistent with that, the Copyright Office's application requires that you disclose whether AI-generated content is included, and the human author's contributions to the work have to be separately identified. So, basically, the Copyright Office requires you to disclose when something was generated by AI versus by human authorship, and to disclose which particular parts go to which. And OpenAI's own terms of use require the same thing, because you're not allowed to misrepresent that something created by ChatGPT was human-generated or created by yourself.

The next area where there could be potential liability or potential legal implications is employment, particularly employment discrimination. First, ChatGPT could be helpful in making employment decisions. It could help evaluate resumes or answer questions about a potential candidate's experience. It can sort out who might be the most successful applicant.
But responses may be biased, either by the wording of a prompt or by the information that's being used to generate a response in the first place. For example, ChatGPT's answers will reflect the diversity of its underlying data. Suppose ChatGPT is asked, of all these resumes, who's likely to be the most successful, and, through its body of knowledge, ChatGPT has learned that most people being hired are white males.
It might then rely on that data to pick the white male, as a self-fulfilling prophecy, and its own output will be biased by the knowledge base it's relying on. So organizations need to ensure that the AI systems they're using are not making decisions that are discriminatory or biased, and they should be able to demonstrate how the decisions were made if necessary.

And already there are some state and local laws requiring notice to employment applicants if AI is being used in employment decisions. Sometimes audits are also required by law before an AI platform can be used in certain employment contexts. For example, in New York City, a local law recently went into effect that prohibits an employer or an employment agency from using an automated employment decision tool, i.e., AI, to screen a candidate or employee for an employment decision unless that specific tool underwent a bias audit and the employer made a summary of the results publicly available on its website before using the tool. That same law also imposes notice and candidate opt-out requirements that employers must comply with. The civil penalties for violations range from $500 to $1,500 for each violation, which could easily stack up.

Most employers don't want to be making biased or discriminatory employment decisions, and maybe they're using AI to prevent that, but they could still inadvertently be discriminating or making biased decisions through their use of AI. So now some jurisdictions, such as New York City, are requiring that any AI platform being used by an employer for such decisions go through a bias audit and comply with all these notice requirements, and that could expand to other jurisdictions as well.
Also, just as a general matter, employers, even if they're not legally required to, might still want to audit whatever AI platform they're using so they're not making discriminatory decisions. Just because the issue isn't specifically addressed where your client might be, like it is in New York City, there are still general discrimination laws, and the use of a biased system might land someone in liability. And employers generally just don't want to be discriminating or biased in the first place.
Bias won't align with their values. So they should be advised that the use of AI is not a cure-all for avoiding discriminatory or biased decisions; it actually could lead to those kinds of decisions.

There are other areas of the law where inaccuracies can lead to legal liability. For example, improper or invalid policies, contracts, or other legal documents. We saw the example earlier in this presentation where I asked ChatGPT to write a maternity leave policy. If I just relied on that and there were errors in it, I, or my client if they're using that policy, could be exposed to liability for violating the very maternity laws I was trying to comply with. And that's just one example; there are tons of different policies that companies need to have. If you have ChatGPT draft a contract and don't proof it, inaccuracies in that contract could lead to later legal liability.

There are also misrepresentations to customers. As we saw, there are numerous examples of AI programs such as ChatGPT generating false information. If that information is then presented to customers, for example, if you're asking ChatGPT to write an advertisement for you, that could lead to a consumer protection claim or a misrepresentation claim, either intentional or negligent misrepresentation. Really, any misrepresentation to a customer can lead to liability for misrepresentation, fraud, deceptive business practices, and things of that nature. Also, depending on the type of business your client is in, if they use ChatGPT to provide advice to their clients or customers that's inaccurate, that could lead to liability for that practice, depending on the area of business they're in, or for negligent misrepresentation.

So how can your clients protect themselves when they're using ChatGPT? There are a few things they can do. First, they can create new policies and update existing policies.
So you should definitely be advising your business clients to update their policies. Regarding a data disclosure policy: you should clearly communicate to employees the permitted and prohibited uses of ChatGPT and other generative AI programs, essentially an acceptable use policy, and your clients should consider whether they want to ban ChatGPT altogether or limit its use.
For example, after the incident with the three Samsung employees that we discussed earlier, Samsung took some actions to limit the type and amount of data that can be put into ChatGPT on a per-entry basis. Later, it decided that wasn't enough and completely banned the use of ChatGPT. I believe JPMorgan Chase also does not allow any use of ChatGPT or generative AI by its employees because of all the potential exposure. So it's a case-by-case decision: it depends on what business your client is in, what their risk appetite is, and what benefits they can get from using ChatGPT. But they should consider whether they want to ban it or limit its use.

ChatGPT and generative AI should also be incorporated into existing policies. For example, if your client has a HIPAA policy, it should be revised to specifically address ChatGPT as a platform into which protected health information should not be entered. And that's across the board for whatever privacy or consumer protection law applies to your particular client. Your clients should also review, or have you review, the policies and contracts with their vendors and subcontractors, and they should consider imposing requirements on their downstream vendors and subcontractors as well.

Another policy that should be considered is a document retention policy for ChatGPT history. ChatGPT stores all the inquiries a user puts into it. There's a whole history in the sidebar, and it can be deleted, but it is there, and there should be, especially for larger organizations, a document retention policy that prohibits deleting that type of information.
Especially if there's a threat of litigation, or actual litigation with a litigation hold, that information definitely needs to be preserved, and there should be an applicable document retention policy that covers generative AI tools like ChatGPT specifically.

The next thing your clients can do is train their employees: training on the benefits and risks of ChatGPT, and on appropriate and inappropriate uses. You can have the best policy in the world, but it's not going to do any good unless your employees are trained on it.
A policy is only as good as the training that's given to the employees on it.

The next thing your clients can do is implement certain security measures. They may want to have their IT departments impose security limitations, such as limiting which employees are able to access ChatGPT and similar programs on work devices or, as Samsung initially did, limiting the size of data that can be inputted into ChatGPT. What Samsung did, before it completely banned ChatGPT for all of its employees, was limit the upload capacity to 1,024 bytes per person. Something like that could be a compromise between allowing use and banning it altogether: having some type of upload cap and working with an IT department to impose that kind of threshold or requirement.

Obtaining consent from customers or clients is also important when using ChatGPT. If your clients are going to input potentially sensitive information into ChatGPT, such as customer information, they need to make sure to obtain consent to collect and use that information to the extent allowable under applicable privacy laws. Some states also require an opt-out of data collection, so that should be provided to customers when appropriate as well.

Companies should also have disclosures and disclaimers about the use of ChatGPT. Especially when a representation by a company is generated by ChatGPT, there should be a disclaimer added to inform others that it might be inaccurate, and to clarify that the information is for general informational purposes and not legal, medical, or other professional advice. They should also disclose to clients if ChatGPT is being used to create content or other deliverables for them.
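To illustrate the Samsung-style upload cap described above, an IT department could screen prompts for size before anything leaves the company network. This is only a minimal sketch: the 1,024-byte threshold mirrors the figure mentioned above, and the function names and the placeholder submission step are hypothetical, not any vendor's actual API.

```python
MAX_PROMPT_BYTES = 1024  # per-prompt cap, modeled on Samsung's reported limit


def check_prompt_size(prompt: str, limit: int = MAX_PROMPT_BYTES) -> bool:
    """Return True if the prompt fits within the allowed upload size."""
    return len(prompt.encode("utf-8")) <= limit


def gated_submit(prompt: str) -> str:
    """Block oversized prompts before they are sent to an outside AI service."""
    if not check_prompt_size(prompt):
        raise ValueError(
            f"Prompt exceeds the {MAX_PROMPT_BYTES}-byte policy limit; "
            "trim the input or request an exception from IT."
        )
    # Placeholder: a real deployment would forward the prompt to the service here.
    return "submitted"
```

A gate like this enforces the policy mechanically rather than relying on each employee to remember the limit; the threshold and the handling of rejected prompts would be whatever the organization's own policy dictates.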
It is no problem if I'm using ChatGPT as a starting point, not an ending point, to draft a document for a client, but it could be deceptive or unethical not to tell the client that I'm doing that. The client might want that; the client might want me to use it as a starting point so that I spend less time ramping up, am more efficient, and bill fewer hours to that client, and that's perfectly fine.
But I shouldn't be doing that without telling the client. And that's across the board: that's not just for lawyers, but for any of your clients in an industry where they're providing content to someone else. They should be advised to be up front with their customers or clients if they're using ChatGPT, so that they're not being misleading or deceptive and exposing themselves to liability, or at least to reputational risk.

And then finally, review the content that ChatGPT is generating. This one seems pretty obvious, but don't just rely on the content generated by ChatGPT. It should be used to assist you or your clients as a starting point, not for the end result. Everything should be reviewed for accuracy. It should be reviewed to make sure you're not disclosing confidential information, and to make sure that what's being generated and put out there is appropriate with respect to your client's values and goals. Responses should be reviewed and moderated so that there is always some human oversight; you can't just blindly rely on ChatGPT. You can also set guidelines for responses and train ChatGPT on business values and policies, but again, that should be a starting point, not the only thing you do. There should always be a level of review to make sure that appropriate, accurate content is being put out.

So I'd be happy to answer any questions. If you ever want to email me questions, I'd be happy to help out. But, appropriately I think, I'll disclose that the conclusion I'm about to state was written by ChatGPT, just as one more example of how ChatGPT can be helpful. So I'll state: In conclusion, ChatGPT can be a valuable tool for improving productivity, efficiency, and customer service in your business.
However, it is important to be aware of the potential risks and legal considerations when using this technology. By implementing appropriate measures such as complying with privacy laws, limiting liability and moderating content, you can minimize these risks and leverage ChatGPT for your clients to benefit their business.
Thank you for attending this presentation. So again, that was the one part of my presentation that was written by ChatGPT, as an example; the rest was not. But I hope you enjoyed this presentation and that you learned something that can help you and your clients use ChatGPT and generative AI responsibly while also benefiting from the efficiencies it provides. Thank you very much.