- [Scott] Hi there. My name is Scott Aurnou. Today we're here to talk about Deep Fakes: A Rising Threat to Cyber Security, Law, and Society. So on February 24th of 2022, you may have seen in the news, forces from Russia invaded Ukraine. That certainly got a lot of news coverage, and rightfully so, as the forces tried to advance in the east, the south, and the Kyiv area of Ukraine. But what some people noticed and some people might not have was that about three weeks later, on March 16th, a video of Ukrainian president Volodymyr Zelenskyy urging his troops to lay down their arms and surrender started appearing online. It spread around quite a bit. It looked pretty convincing at first glance, but on a closer look, you could see the shape of the head wasn't quite right. His neck didn't quite fit. His body seemed a little off. The lighting was a little funky. And that's because it was something called a deepfake. A deepfake is a video or audio clip produced with a type of AI technology so that it appears to show something that never actually happened. So in effect, what's happening there is you can create something that makes someone, quote, unquote, "do something that they didn't actually do." And if it's done well, better than that one, it should be somewhat convincing. So today, let's talk about a few parts of that. First, what are deepfakes, and how do they work? Second, what are their practical and potential uses? Third, what are the associated threats to lawyers, law firms, and organizations? And then finally, we'll talk a little bit about developing solutions, both technical and regulatory. It's pretty early on with this stuff. So there's a few things to cover. So first, what are the pertinent technical concepts here? In effect, what are deepfakes? A deepfake is a type of synthetic media, meaning invented. Now, media itself is usually broken into two categories, static and electronic. Static media might mean a physical newspaper that you hold and actually turn the pages, same thing with a book. Now, if you're reading that same book on a Kindle, that's electronic media. Synthetic media means it's something that's literally invented. So if you think of a character on TV that doesn't actually exist, like an animated character, that's synthetic. So how do deepfakes actually work? For starters, let's go through some basic definitions to get up to that. First off, a program, also referred to as software or an app, all the same thing, is just a set of instructions for a device or a computer, what have you, to follow. It can be the most simple program or a very, very complex video game. It's just a set of instructions. Now, an algorithm is a set of mathematical steps that a program will execute on a specified data set. So with this data, take steps 1, 2, 3, 4, 5, et cetera. Now, artificial intelligence, or AI, refers to systems or machines designed to mimic human intelligence to perform specified tasks and effectively improve themselves based on the information they collect. The more data they get in, the better they can do with their assigned task. Now, mind you, this is not Agent Smith from "The Matrix." It's not replicants from "Blade Runner." It's nothing that advanced. Think something more like a smart thermostat that gets a sense of when you tend to turn down the temperature each night, and starts learning to do it for you. That's a little bit more of what we're talking about.
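To make that thermostat idea a little more concrete, here's a minimal, hypothetical sketch of what "learning from the data it collects" can look like in code. Everything here, the class name, the numbers, and the simple averaging approach, is illustrative rather than any real product's logic.

```python
# Hypothetical sketch of a "learning" thermostat: it watches when you manually turn
# the heat down at night and, after a few observations, starts suggesting that
# schedule on its own. All names and values here are illustrative.
from statistics import mean

class LearningThermostat:
    def __init__(self):
        self.adjustment_minutes = []   # minute of the day when the user lowered the temp
        self.adjustment_temps = []     # the temperature the user chose each time

    def record_manual_adjustment(self, minute_of_day, target_temp):
        """Store one observation of the user's behavior (the data it 'collects')."""
        self.adjustment_minutes.append(minute_of_day)
        self.adjustment_temps.append(target_temp)

    def suggest_tonight(self):
        """Once it has enough data, predict tonight's setback time and temperature."""
        if len(self.adjustment_minutes) < 3:
            return None  # not enough data to learn from yet
        return mean(self.adjustment_minutes), mean(self.adjustment_temps)

t = LearningThermostat()
t.record_manual_adjustment(22 * 60 + 30, 65)   # 10:30 pm, 65 degrees
t.record_manual_adjustment(23 * 60, 66)        # 11:00 pm, 66 degrees
t.record_manual_adjustment(22 * 60 + 45, 64)   # 10:45 pm, 64 degrees
print(t.suggest_tonight())                     # about (1365, 65) -> roughly 10:45 pm at 65 degrees
```

The more nights it observes, the closer its suggestion tracks your actual habits, which is the whole "improves with the information it collects" idea in miniature.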
So machine learning, meanwhile, is a type of AI that is focused on building systems that effectively learn or improve their performance based on the data they consume. So effectively, that comes back to that smart thermostat. Now, deep learning is a type of machine learning in which algorithms are modeled to work the way the human brain does, aka a neural network, to learn from large amounts of data. Now, effectively, when you hear the name deepfake, what you're getting is deep learning fake, just a little shorter, that's all. Now, deepfake algorithms themselves are effectively trained on real sets of photos or audio or videos to produce realistic-looking false imagery. That's the whole idea here. Now, one thing that's really at the base of this is something called a generative adversarial network, or GAN, G-A-N. That's effectively two AI systems that are working in tandem to create the deepfake. Number one is called the generator. The generator learns to recognize a subject's face and voice and attempts to replicate it in various expressions. Number two is the discriminator. The discriminator compares the fake images created to the originals. If it can tell the difference, it rejects the deepfake. Essentially, one algorithm tries to create a convincing forgery while the other tries to detect it. Once the discriminator has been fooled, the deepfake is ready to go. Now, the idea is for the AI to gain sufficient expertise on the subject's appearance and mannerisms so that it can create a believable fake clip of the person saying and/or doing things they've never actually done. That's the whole point. Now, keep in mind, this type of technology isn't actually new. You've seen this. A lot of times, where you'll see it as a normal person is in film. An early example I would point to would be the film "The Crow" from 1994. That starred Brandon Lee, who was the son of Bruce Lee. And tragically, he was killed during filming by a stunt gone wrong. They had filmed the vast majority of the movie, and rather than trash it, they decided to honor his memory and finish it. So how would they do that? Because obviously he couldn't be in it. This became a somewhat involved process in those days, where, in effect, what they did was they took an actor who physically looked a lot like him in terms of physique, and they used early CGI technology to put his face on that actor. Now, one thing that really helped that film, if you haven't seen it, is that a lot of it takes place in rain and fog and dark, in Detroit. So that was a very early use. About a decade later, in 2004, you had the film "Sky Captain and the World of Tomorrow." One of the main characters in the film was played by Sir Laurence Olivier, who had died 15 years earlier. He was a major actor in the 1930s and '40s, and they worked him in for an extended cameo of sorts inside the film. A more recent example, which I guess still gets a fair amount of play on streaming services and cable TV and such, is the first Captain America movie, "Captain America: The First Avenger," from 2011. In that, Chris Evans plays Steve Rogers, Captain America. If you're not familiar with the character, one thing that happens with him is that early on, he's a physically very small and frail man, someone they might have referred to as a runt back in the day. This film is set in around 1941, 1942.
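Since that generator-versus-discriminator loop is the heart of how deepfakes get trained, here's a deliberately tiny sketch of a GAN, assuming the PyTorch library is available. It's a toy that learns to forge one-dimensional numbers rather than faces, and the network sizes and learning rates are arbitrary, but the adversarial back-and-forth is the same basic idea.

```python
# Toy GAN sketch (assumes PyTorch is installed). The generator learns to mimic
# "real" data (numbers drawn from a normal distribution) while the discriminator
# learns to tell real from fake. Faces work the same way, just with far bigger networks.
import torch
import torch.nn as nn

real_data = lambda n: torch.randn(n, 1) * 1.5 + 4.0   # the "real" distribution to imitate
noise = lambda n: torch.randn(n, 8)                    # random input the generator shapes into fakes

generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
discriminator = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

for step in range(2000):
    # 1. Train the discriminator: label real samples 1, generated samples 0.
    real = real_data(64)
    fake = generator(noise(64)).detach()
    d_loss = loss_fn(discriminator(real), torch.ones(64, 1)) + \
             loss_fn(discriminator(fake), torch.zeros(64, 1))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # 2. Train the generator: try to make the discriminator call its fakes "real."
    fake = generator(noise(64))
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()

print(generator(noise(1000)).mean().item())  # should drift toward ~4.0 as the forger improves
```

Real deepfake systems layer face detection, alignment, and much larger networks on top of this, but the forger-versus-detective training loop is the recognizable core.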
So back to Captain America: early on, Steve gets what's called a Super Soldier Serum, after which he goes from that small, skinny fellow into a large, broad-shouldered, muscular superhero. And that was actually Chris Evans. So how did they make him both parts? In effect, what they did was they had a smaller actor play the part of pre-suit, pre-serum Steve, and they used a technology very much like deepfakes to put Chris Evans's face on that actor. And then once he actually turned into Captain America, it was just him. Now, what's so different about this nowadays isn't so much that this type of technology exists. The new part is really the easy availability of this technology. So what are the potential uses? Deepfake technology can be used for a number of legitimate and not-so-legitimate purposes. On the legitimate end of things, it can be used for business, entertainment, parody and what you might call "what if" videos, the news media, and also marketing and advertising. Not-so-legitimate uses would be non-consensual pornography, disinformation, and social engineering. We'll talk about that a little bit and define it for you. Now, in business, one thing to start with is the difference between prerecorded versus real-time deepfakes. What am I talking about? The deepfakes where they're creating, say, a movie-type thing, that's generally something that's done in advance. Like the thing that was done with Volodymyr Zelenskyy, someone set that up beforehand. Regardless of the fact that they didn't do a great job of it, it wasn't done right on the spot. Real-time is a bit different. That's where someone has been scanned enough so that their likeness is basically superimposed on a person who's literally speaking and moving in front of you, such that it appears to be someone else. Now, what's different with this is it would be used for something like live communications, like, let's say, a Zoom meeting, where for all you know, you're talking to the person you think you're seeing. With this technology perfected, it could be someone else entirely, even to the extent of making their voice sound like someone else. Now, realistically, where would you use that in a sensible area? Well, in business it's actually helpful to clean up communications. So if you're having a conversation, it keeps the video a little clearer. It's easier to see people and understand them. It's a positive. Of course, what often happens is something like this is used in a business-practical sense, and then the genie gets out of the bottle, and the technology used to improve a business process gets out and about in the world, and people do other things with it, shall we say. Also, one thing that's really helpful here is it can be used to effectively make a user more, I guess you might say, presentable. So if you've, you know, spent the weekend up in the mountains, there's dirt in your hair, and you've got bedhead, and you need a shave, and you have a meeting, flip this on, and you have a clean, crisp suit, no bedhead, things are looking good. So in effect, this is sort of like when you're on a Zoom meeting, and someone turns off their video, and there's this really nice, neat-looking avatar of them in a suit. Try to imagine a version of that where, in effect, it's moving with you. So the avatar is your video. Your video appears to be, you know, sharp-dressed, ready to go. And that way, if you're, you know, rolling out of bed, still holding your coffee, people don't need to know that.
So one thing to realize, though, is that will only work to an extent. It's possible it may improve even more. But right now, and we'll discuss this in a little bit, there are certain things you can do with a real-time deepfake that become a little problematic in terms of certain types of movements, just because the system can't keep up. So in the world of entertainment, we talked about this briefly in terms of the early versions of this with CGI. Realistically, where this technology can really, really help out is it will reduce the need for things like re-shoots, and it cuts down on in-person filming requirements and the associated expenses. Obviously, re-shoots for films can cost millions and millions of dollars. And in terms of the technology itself, you can also look at it from the standpoint of potentially reducing or even eliminating the need for expensive costumes and prosthetics. A good example of this happened in 2014 in a film called "Edge of Tomorrow." It was actually in the news in late 2022 because of an apparent back-and-forth between Tom Cruise and Emily Blunt, the two stars of the film, who had disagreed over how to, I guess, express the main technology in the film, which were these exosuits that these near-future soldiers were wearing to fight off an alien invasion. Ms. Blunt was under the impression that, yeah, we can animate those, we're fine, and Mr. Cruise didn't go along with that. He insisted they have the actual suits built out. And if you've seen the movie, it actually worked out pretty well. So it's possible that had it been done, you know, several years later, that might have worked a bit better, but in those days, they went for a little more of a practical effect. Now, high-profile examples of this type of technology have been rightly viewed as technical marvels up until this point. Think of something like the planet Pandora, the Na'vi, and the various animals in the Avatar films, or the Gollum character from the Lord of the Rings trilogy. There are even entire specials all about the making of these. You see people wearing these, you know, bright green suits covered in sensors, and as they're moving around, they're making sure they animate everything accurately. Now, of course, these were produced with ludicrously expensive video editing equipment. However, deepfake technology is starting to catch up. It's not there yet, but it's getting better. An example I can give you is, in late 2020, there was, shall we say, a de-aged version of a certain Jedi knight character that appeared in "The Mandalorian" streaming series. Not long after the series actually aired, videos appeared online noting that convincing deepfake versions of the scene in question could be created without the need for the expensive video editing software. Some people were even of the opinion that the deepfake version actually looked more convincing than the produced version from the series. You can take a look for yourself, if you like. Those videos are online. Now, mind you, this is not just science fiction and fantasy, of course. This technology can be used to recreate historical clothing just as easily as it can, say, Iron Man's armored suit. So as for parody and "what if" videos, plenty of users have had fun with deepfake technology. One of my favorites is a very early one in the genre, where someone with quite a sense of humor took the opening scene from the old "Full House" sitcom and replaced everyone's faces with the actor Nick Offerman, including the daughters.
It's extremely funny if you're a fan of such things. There's also a popular channel on TikTok called @deeptomcruise, which consists entirely of Tom Cruise deepfake videos and nothing else. Also, if you're a fan of "Game of Thrones," there was a rather amusing apology video put together from the Jon Snow character, in which he effectively apologized for the rather horrid final season of the show. But there are also numerous examples in which different actors were placed in well-known roles. A couple of really cool ones I would point to, if you're a fan of such things: Lynda Carter, who played the character Wonder Woman in the 1970s, was placed into the 2017 "Wonder Woman" movie starring Gal Gadot. They put Lynda Carter in that role, and it honestly looks just like her. It's fantastically well done. Also, if you're a fan of Star Trek, there's something called "Star Trek: First Generation," which I'm sure is pretty freely available on YouTube. It puts William Shatner's and Leonard Nimoy's faces, those are the original actors from the 1960s Star Trek, onto the actors from the 2009 reboot, which were Chris Pine and, I believe, Zachary Quinto. So instead, it's the older actors in the newer scenes, and it looks pretty convincing, if you're a fan of such things, once again. So here we see an example of just what I was talking about. You can see this is a scene from the 2017 "Wonder Woman" movie, but that's not the star of the film, Gal Gadot. This is actually Lynda Carter, who played that character in the 1970s. And in effect, what was done here is, using the deepfake algorithm, enough pictures were taken of Lynda Carter to recreate her and effectively replace Gal Gadot's face with Lynda Carter's face, and you wind up with this, which, if you're looking, looks awfully seamless. It's really, really well done. And just so you don't think I overstated the comedy aspect of it, this is a little piece from the clip called "Full House of Mustaches." That's the one with Nick Offerman playing all of the characters, including, yes, the three daughters. Yeah. I don't really know what to add to this one, but enjoy. As for news media, I would point to an anchor at the South Korean station MBN named Kim Joo-ha. She is a regular anchor, and she's actually on camera the vast majority of the time. But the station decided to scan her in order to deploy a deepfake version of her for occasional use on breaking stories, even something as simple as traffic reports at, say, two in the morning. If there's a jam, they're not gonna roll her out of bed. Instead, they just employ the deepfake. Now, while this sounds kind of cute and simplistic, there's no question more widespread use of this approach is coming. It's inevitable, realistically. What does this mean in terms of televised news media? Your guess is as good as mine, but it wouldn't surprise me if there comes a point when news anchors have to work much shorter days because a lot of what they're doing is having their likeness used to report some rather basic stories. Marketing and advertising can have a little fun with this stuff, too. So once a star or influencer, et cetera, has been scanned, it allows for dynamic ad campaigns potentially tailored to different markets, different languages, you name it.
And effectively, what'll happen is you'll get these scalable, hyper-personalized messages or ads for a widely diverse potential audience or set of customers or what have you, 'cause they can switch it around any way they want once they've got the person scanned. An example of this would be David Beckham's 2019 malaria awareness ad, which was translated into nine languages. And of course, part of this is making sure that it looks like he's saying what you're hearing. So it sounds like him. It's in that language. They have lip-syncing software which works with this, and it looks just like he's saying it. Pretty cool, actually. Another interesting one: if you happen to be in St. Petersburg, Florida, the Dali Museum down there actually has an interactive hologram exhibit with the artist himself, Salvador Dali. He acts as a museum assistant of sorts, basically appearing inside a mirror, and he'll talk to you. And to do this, they trained the AI system, the deepfake, with thousands of hours of interviews with him so that it sounds like him, it looks like him, it's his mannerisms, and it's a funky thing if you're ever down there. One other thing that's a little timely is a thing called Messi Messages, in which Lionel Messi, the Argentine soccer player, was basically put in a Frito-Lay app which allowed fans to create customized messages from him to them, or to anyone else, in their own language. And again, it's a case of, they scanned him saying various things, moving certain ways, so that it looks like he's actually talking to you. Pretty cool. Now, what companies can also do is allow people to virtually try on a product before buying. So in effect, it'll scan you, and then once you're scanned in the system, it'll put things on you, anything from, you know, glasses to hats to shirts to ties, suits. You name it, they can do it. Now, of course, this all sounds great, but it's not all sunshine and rainbows, unfortunately. The proverbial elephant in the room is non-consensual deepfake pornography. This is distinct from what you might call standard revenge porn, which typically entails actual pictures, you know, a couple who'd been dating, and one of them shares pictures of the other without consent, often publicly. It's a pretty lousy thing to do. So, in 2019, a research company called Deeptrace put out a report that noted that deepfake pornography constituted 96% of all deepfake content they found online. Another company, in 2021, put out a report, which I believe was its third or fourth one, 'cause it started tracking in late 2018. Every year, it has found that 90 to 95% of the content it finds online is, again, deepfake porn. Now, unsurprisingly, and certainly unfortunately, it targets women the vast majority of the time. Initial deepfake pornography typically involved female celebrities. That was a result of the numerous photos needed to train the algorithm; with celebrities, of course, there are so many more of them from so many different angles. So that was what started early on. But where you once needed, say, 15,000 photos of a celebrity, the technology has improved, and a lot fewer photos are needed. So maybe you get by with 200 or 300 photos now, and now non-celebrity women are being targeted with these things, which, well, sucks, if you'll forgive me. There are also, on a related note, things called nudifying apps, another creepy little tool.
On that one, you basically feed the app a photo of someone in their clothing, and it estimates what their body looks like underneath it, unclothed. Lovely. So one other problem with this, as some people have gotten better and better with the technology: not only are folks producing a lot of this non-consensual pornography, there are actually people in some of the less savory parts of the internet offering this type of pornography as a service. So this would be revenge porn turbocharged, basically, where someone would hand over a lot of photos, and then this person online would, I guess, take some cash and produce non-consensual pornography. Nice and creepy. Now, of course, much, much worse than this would be child pornography, sometimes referred to as CSAM, child sexual abuse material. Everything about that is really repulsive. But one thing that has kind of been bouncing around my head a little bit, and I don't know if anyone necessarily has a solution for it, is the idea of actual versus synthetic children. With actual children, obviously, it's very easy to point to crimes there. With synthetic, I'm not really sure what the criminal theory is. I've been wondering about that, 'cause simple societal scorn might not be enough. But I digress, and you'll forgive me for all these happy topics. There's also a new twist. In mid-2022, a number of text-to-image generators began to appear. These let you create synthetic art. Popular ones are Stable Diffusion, DALL-E, Midjourney, a few others. Basically, you can type in sort of what you wanna see, and it'll just create something. One I saw last week that I thought was pretty cool was van Gogh's "Starry Night," where someone had taken the picture, and instead of using it as a landscape, which is the original aspect ratio, they turned it into a portrait, so, you know, longer on the vertical axis. And what they did at the bottom was basically turn it into a giant lake or bay or something, with rolling water reflecting the "Starry Night" sky. It was absolutely gorgeous. So it doesn't have to be bad, but, of course, it wasn't long before users realized those programs could also be used to create more convincing deepfake pornography. Yeah, folks can be wonderful. I digress. So here's another one. This is a scene from the "Star Trek: First Generation" clip. Certainly, if you're a fan of these shows, this is extremely well done. And the point is not so much to go, hey wow, that's a really good young Leonard Nimoy. The point is you see how well done this is. This is for fun. Try to imagine something like this used not for fun. That's where the potential danger in this technology lies. So another happy topic is disinformation. A natural concern with deepfakes is not being able to believe what you see. And sometimes that's with good reason. Certainly, the Zelenskyy surrender video we spoke about earlier is just one example of something that's way more widespread. You've got something referred to as sockpuppets, an odd little term there. That refers to fake accounts that are used to bolster a particular position, often on behalf of a nation state, sometimes a political group. Where you'll typically see that is online, where something that seems like some weird fringe position suddenly has thousands of accounts chiming in like, yeah, I hate that guy. And you know, to a normal person reading, it's like, wow, that's a lot of enthusiasm.
That's because it's fake enthusiasm. These are sockpuppets. They're generated to look like they're angry over something. Now, something related to this was a, I would say, "plot," in quotes, against Israeli prime minister Benjamin Netanyahu that was reported on Israeli TV, I wanna say about 2019. And what was funky about that one was people were interviewed on TV to support the story of this plot against him. The thing is, it turned out that the people were literally synthetic amalgams. So they weren't actual people. They were built using programs to look like people and then talk. And the funny thing is, you know, they got caught doing this, obviously. That's why we're aware of it. And the reaction was sort of like, oh, well, those just represented real people. Wow. And that kind of makes me wonder, you know, if we hit this point with disinformation where we're so comfortable with it that we don't even give it a second thought, well, wow. Anyway, so in terms of a law firm, how can a piece of disinformation, or a full campaign's worth of it, affect one of your clients, or even your firm itself, if it's targeted? 'Cause you know, there's always the issue that denying it is one thing, but denying it before it spreads all over the place and people believe it anyway is a real problem. So it can have an effect. Now, a related issue is what I would refer to as fake memories, which, I guess, is asking the essential question: can fake media create false memories? There was a book by Dr. Julia Shaw which came out in 2016 called "The Memory Illusion," which talked about a couple of studies. And her essential finding was that memory is surprisingly malleable. The studies in particular there touched upon taking old family photos from people, effectively altering them, and then showing them to them and seeing what the effect was. Two of the main ones that caught my attention were one showing Bugs Bunny at Disneyland, and if you're familiar with the old characters, Bugs Bunny would not be at Disneyland, he was a Warner Brothers character, as opposed to, say, Mickey Mouse or, I guess, Donald Duck, who would be at Disneyland, and another showing an actual picnic with the royal family on a recent trip to England. The funny thing is a fair number of the participants, once they were shown the pictures, started to remember them, which was very peculiar, because these things hadn't actually happened. So people could remember talking to and playing with and posing with Bugs Bunny while they were at Disneyland. Obviously, that never occurred. Same thing for the picnic with the royal family. A more recent study did this with films, where they swapped different actors into popular films to see what effect it would have on people. In this case, they used Will Smith in "The Matrix," which, if you're a big film nerd, is kind of interesting because he was actually originally offered the part of the star, Neo, and turned it down. But they also had Charlize Theron in "Captain Marvel." And what they found there was, again, a rather significant number of the participants later remembered them as the stars of those movies as opposed to the real actors, Keanu Reeves, et cetera. So it was pretty interesting. And what I would wonder there is, what effect can something like this possibly have upon witness recall? I suspect this is gonna be a bigger and bigger issue in litigation going forward. But like I say, this is still a pretty early technology, so we might have to see how this one shakes out.
So how do trickery and deceit fit into this? At the end of the day, lawyers unfortunately make excellent targets for data theft. It's an open secret that law firms are targeted constantly. Attackers often see law firms as a backdoor to strike at the firm's clients. Among criminal hackers, law firms are typically viewed as soft targets with valuable information. Attackers can use human nature to trick a target into giving up valuable information, allowing access to a restricted area, or transferring funds. And of course, attacks will come with threats or rewards and often seem urgent. Most people wanna be helpful and responsive, and attackers know this, and they'll take advantage of it. This type of attack is called social engineering, tricking you into giving up access or credentials or whatever else. And it does come in a number of different flavors. Now, these are scams and attacks that you, your colleagues, and anyone you know can encounter on any given day. There's even a term for something attackers tend to do frequently, which is deliberately targeting senior personnel. That's actually called whaling, i.e. targeting the big fish. So I mentioned it's using human nature to get access to information. One of the most basic versions is something called phishing, along with spearphishing. I guarantee you've seen these in email. So phishing is basically like a shotgun approach, a scattered email trying to trick you into something. It won't necessarily be personalized for you, but it'll say, like, you know, dear participant or whatever, your healthcare plan is about to end, or your Microsoft account is about to be shut down, something like that, so it gets you to panic and take action. Spearphishing is a particularized version of that, where the attacker will usually know a little bit about you and will ask something specific of you. Like, oh hey, Fred, and it'll reference a case you were just working on. Do you have this? Or, here's the latest report. And you'd click on it, open it, and you're being attacked. Now, how does an attack like this actually work? Like I say, it'll come into you, and it can come in various forms. Why do we mention this when talking about deepfakes? Because that's gonna be one of the new forms, and a nasty one. Because let's say something comes in. It's an email. You click on it. Let's say it's a voice recording that sounds like someone you know asking you to take an action. You know them. Oh no, there's a deadline. I've gotta do this. That's the idea behind it. And a related thing is called the BEC, or business email compromise, scam. The idea there is basically, it's wire fraud, getting you to typically send out information or funds directly to an attacker. We'll talk more about that in just a moment. And thanks to deepfakes, there's a related thing that's come up called BIC, business identity compromise. The idea there is that your attacker appears to be someone they're not. These things can also be used for straight-up blackmail. Let's say, for example, an attacker threatens to share obviously fake pornographic imagery of your child with their school or employer unless you do what they say, or pay them, whatever. Even if it's eventually revealed as a fake, your child is going to be publicly humiliated or worse. What do you do? Now, I know this sounds like some crazy sort of thing I might have made up, but that's not unrealistic. That sort of thing can happen. And keep in mind, these things can also be web-based.
It can be something where you're looking at a website, you click on something, you get a popup from a celebrity who asks you to do something. You're like, oh hey, it's so-and-so, I'll do this. Except, once again, it's a deepfake, so it looks like them, but it's not. A phone call can come in, and again, they can spoof not just the video but also the audio. That's what makes this tricky. Part of what this technology can also do is fool biometric identifiers and what's called two-factor authentication. Let's back that up for a second. What is two-factor authentication? When you try to log into a system, typically, you're using a password. And sometimes it'll ask you for something in addition to that. It might be a code. It might be something like a retinal scan or a fingerprint or something like that. If it's one of these so-called biometric identifiers, again, a facial scan, retinal scan, whatever, theoretically that can be fooled by deepfake software. Now, realistically, at this point, that's a little involved. It's a little bit too resource-intensive to be practical for now. Like I say, the systems are getting better, and unfortunately one of the big problems you run into as a law firm is sometimes you're holding valuable enough data that it's worth the effort to get through to you. And of course, facial recognition software is pretty common. I'm sure a lot of you listening in probably have that on your phone so you don't have to type in a password over and over again. You just look at your phone, and you're done. Now, one thing to realize with these types of attacks: they're always changing, they're dynamic. It's not one static type of attack. I can describe a few, but you might run into something totally dissimilar or something a lot like it, 'cause they keep changing. And of course, with the addition of deepfake technology, they only become trickier, more challenging, more varied. So I promised we'd talk about business email compromise scams, aka BEC scams. Again, this is wire transfer fraud involving fake vendors and senior firm or company personnel. Typically, it'll target employees who handle financial transactions. It can be a direct request from a senior official at your organization to wire out funds, or a vendor, quote, unquote, "updating" its wire transfer info. Of course, when you send it out, it doesn't go to the vendor. This type of attack tends to be very well-researched and will look and sound legitimate. Obviously, deepfakes make that even more so. Now, despite the name, BEC scams are also launched via phone or instant messenger. There really is no one set way they do it. Just whatever tricks you, that's what they'll try. BEC scams can include email spoofing. That's basically where an email appears to come from one place but actually comes from another. Think of it as like an email version of a deepfake. So it appears to be one thing but isn't. Fake websites and even full online conversations with scammers impersonating senior personnel or vendors, that's exactly what we're talking about here. If you've got someone who sounds just like the person and knows exactly what they would say, you really run into a problem. Attackers can use other social engineering attacks, your own website, news reports, and social media to get the information needed to make those BEC messages look genuine, correct employee titles, relevant business news, et cetera, not to mention any video files that can be used to add an actual video component so that looks accurate as well.
And of course, business identity compromise adds a new layer to it, as they can look and sound just like who they purport to be. Now, what can you do to avoid falling for one of these scams? It's always a great idea to assume that a call, email, et cetera is fake until you're given a convincing reason to think otherwise. Mind you, they're always gonna act like it's urgent and you've gotta do something right away. Be ready for that. It's critical to have secure procedures for any financial transactions in particular. You wanna have direct, what's called out-of-band, confirmation with any vendors, senior personnel, et cetera, who request wire transfers or changes to any financial routing information. What does out-of-band mean? Out-of-band means you're not using the same method they used to contact you. So if they're emailing you, you don't email them back. If it's a video call, you don't just respond like, hey, is that really you? No, no, no, no. You might have someone in your office like, hey, could you go call the office just to make sure I'm actually talking to who I think I'm talking to? That kind of thing. It may sound a little overboard, but that can save you some really big trouble. And make sure it's not just something you happen to do when you remember it, but an actual consistent procedure throughout your organization. Now, of course, if the worst does happen, you know, your organization has been targeted, don't hesitate to call both the financial institution and law enforcement right away. Generally, a good starting point would be the FBI or possibly the Secret Service. And again, secure, repeatable procedures to block this sort of stuff are what's really critical. So what are the cybersecurity concerns for law firms in particular? Well, deepfake technology is not in widespread use for cyber attacks just yet. That's because the technology is still evolving. It's not really, you know, settled as to where it is yet. And deepfake-based attacks can be time-consuming to set up. So again, it becomes the question of, is it worth the trouble to go for what you're holding? And in some cases, with a law firm, yeah, it is. But for the most part, most attacks are actually about money. So from the attacker's perspective, if they see something cheaper and easier that'll be just as profitable as breaking into your firm, they'll go with that. But again, with law firms, you run into the problem where you're sometimes gonna get these, I guess, higher-end or A-list attackers and more involved strikes because of what's in the network. It's not uncommon for firms to store potentially valuable non-public data on their networks, be it firm data or client data. As a result, sophisticated and often well-financed attackers can target firm data directly and use a law firm's network as a backdoor to get at otherwise secure client data. So if your client's pretty locked down, and attackers are having trouble getting in that way, yeah, they might come right after you as a way to get in. Now, of course, there are, once again, the BEC and BIC scams targeting senior personnel, vendors, clients, et cetera. Something you tend to hear about a lot in the news is ransomware, in which an attacker will lock up your system and charge a ransom to release it. But if you look at the numbers the FBI reports each year, BEC scams dwarf ransomware in terms of the amount lost. And mind you, those are only the reported numbers.
A lot of companies get hit with it and don't say anything, 'cause they don't want anyone to know they were hit. So we were talking a little bit about these BEC scams before, and just now, really. An audio version of that happened with an energy company in the United Kingdom, in which the CEO of that company was contacted by what sounded like the CEO of the parent company in Germany. The caller had the correct accent, sounded just like him, had the correct mannerisms, said the right things, and told him he had to send funds out to a Hungarian vendor right away. Yeah, I think you know where this is going. It was a fake, and they lost $243,000 as a result. Now, again, the underlying technology for this is easily available, and not just for deepfake video itself; there's also a lot of audio-faking technology you can get. It's just not hard to do this. And not to mock the CEO, I don't know anything bad about him, anything like that, but if that company had had a consistent procedure in place in which, well, it's a financial transaction, you have to call and check, and someone had called and checked, it wouldn't have happened. Another area that becomes a problem is video conferencing. Now, this can include lots of programs you're probably quite familiar with: Zoom, WebEx, Microsoft Teams, Slack, you name it. And this is where we come back to the idea of real-time deepfakes. So that same technology that can make us look more presentable on a video call can also be used by an attacker to look like someone they're not. Now, where this has already popped up in the news is fake remote job applicants. These are effectively new employees who, once they get hired, are granted access to a firm's network without having to actually break in. It's a great tactic. They don't have to trick their way into the network. They literally get "hired," quote, unquote, to work there. Now, initially this was noticed at tech companies, though it can certainly happen at any kind of firm. And if you're with a decent-sized law firm, no doubt you have a tech department too. So anyone who is remote, realistically, could potentially be one of these folks. And it became a bad enough thing that in June of 2022, the Internet Crime Complaint Center, which is sort of the FBI's public-facing cybercrime arm, actually put out a post specifically about this. Incidentally, if you wanna learn more about emerging scams in readable language, that's a great place to go. IC3 is the name of it. They usually put out a warning a month or so, and it's usually pretty relevant stuff. Now, one related thing, and this might sound like a goofy one, is deepfake test takers. These become a concern as exams increasingly go remote. So in effect, a real student will scan themselves, the test-taker will use that scan during the exam, and, poof, the test-taker passes, it looks like the real student was there, and if the camera doesn't recognize the difference, and most won't, that's a real problem. So it raises the question: did your new associate actually pass the bar? Now, realistically, don't panic just yet. We're probably not quite there technologically, but it's not so far away, and it's coming. Attackers also used a pretty clever version of this against a bunch of crypto industry folks, both executives and in-house counsel, to gain access to non-public information to make early trades.
And in effect, what they did was they created a fake CCO, effectively a hologram of him, to sit in on these meetings and just appear to be one of the gang. And they would listen in occasionally, say as little as possible, and just get the non-public information and go. And they did this for several months and made quite a killing off it. Now, are there signs you should be looking for, in terms of how you pick this out, or do you just hope it doesn't happen to you? Well, there are actually things that will look a little weird, especially with the real-time deepfakes. In particular, sudden movement. Sudden movement is a little funky for these systems, because what you're looking for there is something like a head turn. It could even be something like, believe it or not, a sneeze. Also, something like blurry background areas in a video image. Let's say you're talking to someone, and some things just do not look in focus. Or it could be something really weird, like you're interacting with someone on video, and you notice that, let's say, her earrings are kind of blending into her ear a little bit. That's weird, that doesn't happen with real people, but it can happen with this. Also, you'll sometimes get inconsistent lighting, because typically the system is trained on photos with one set of lighting, and then it's bringing that forward into the situation it's being plugged into, which often will have a different type of lighting. So it becomes an interesting thing, where, like, the shadows won't quite look right. You also might get something where, if you look closely, the eyes don't quite match, that kind of thing. It can look different in different kinds of lighting. Speaking of eyes, one thing that is kind of funky with this: blinking. In the very early days of deepfakes, they didn't blink at all. And that was a really easy way to pick them out. It's like, hey, yeah, that person didn't blink. Real people blink several times a minute. That's normal. These programs still haven't quite gotten it down. While they do blink now, it's atypical. Either there's way too much spacing in between blinks, or they're blinking a ton, like there's something in their eye. So it's just something to look for visually that's a little weird. But obviously, as the technology improves, and it's improving, these will all become less noticeable over time. But for now, there's some stuff. Of course, that's one end of it. The other side of it is, to train these things, they need the footage. And one thing attackers can do is target the footage that you might have stored on your network. So let's say, for example, you're using recorded depositions, presentations. Let's say you have meetings that you're recording for later reference. Maybe you've got trial practice footage, you've even got interviews. Whatever it is where people are seen on video speaking, if an attacker can get into that and steal it, they can use it to train the deepfake algorithm. And then, because they've gotten into your network, they can create convincing fakes of the people in that footage and, of course, use them in an attack against you. And from an attacker's perspective, the more varied the source footage, the better for them, because that means the final, produced deepfake will be a bit more versatile, as opposed to something that all comes from the same place. Now, if your firm does, in fact, retain such footage, you gotta make sure it's properly secured.
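To make that blinking signal a bit more concrete, here's a rough sketch of the kind of check a very simple detector might run over a video. OpenCV is used only to read frames; the eye_openness function is a made-up stand-in for a real facial-landmark model, and the "normal" blink range at the end is purely illustrative, not a validated threshold.

```python
# Rough sketch of a blink-rate sanity check on a video clip. OpenCV only reads
# the frames; eye_openness() is a hypothetical stand-in for a real landmark model.
import cv2

def eye_openness(frame) -> float:
    """Placeholder: return a 0.0-1.0 score of how open the eyes look in this frame.
    In practice you'd compute an eye aspect ratio from detected facial landmarks."""
    raise NotImplementedError("plug in a real landmark-based measurement here")

def blinks_per_minute(path: str, closed_threshold: float = 0.2) -> float:
    cap = cv2.VideoCapture(path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
    blinks, frames, eyes_were_open = 0, 0, True
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        frames += 1
        open_now = eye_openness(frame) > closed_threshold
        if eyes_were_open and not open_now:   # an open-to-closed transition counts as a blink
            blinks += 1
        eyes_were_open = open_now
    cap.release()
    minutes = frames / fps / 60.0
    return blinks / minutes if minutes else 0.0

# Illustrative use: real people tend to blink somewhere in the low-to-mid teens per
# minute, so a clip with almost no blinks, or constant fluttering, deserves a closer look.
# rate = blinks_per_minute("interview.mp4")
# suspicious = rate < 5 or rate > 40
```

Commercial detectors look at many signals at once, but this is the general shape of the idea: measure something a real face does involuntarily and flag clips that fall outside the human range.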
A couple of basic defensive technologies you might wanna look at, and obviously there are other courses here on Quimbee in which I talk about this stuff in a lot more detail, if you're curious. Encryption is a very valuable one. That's one in which an algorithm, yes, that dreaded word again, is applied to data to scramble it. And the idea is you can scramble or descramble it using what's called a key. And if you don't have the key, the data comes across as gibberish. So obviously, the idea is you don't want the key stored anywhere near the actual encrypted data. And the idea here is that if an attacker gets into your system and steals that data, what they're stealing is unusable garbage, at least in theory. Now, I mentioned the idea of two-factor authentication before. The idea behind two-factor authentication that makes it useful is that the factors generally come from two different sources. So let's say, for example, you type in a password, and then you have like a little key fob that gives you a code that you have to enter as well. The password is something you know. The key fob is something you have. The idea with biometrics like a retinal scan or fingerprint is that's something you are. The factors are coming from different sources. Because if someone gets into a database and steals passwords, okay, that's one factor. But if, in addition to that, they have to produce an accurate fingerprint or the code from the key fob, they still can't get into the system. And when they try, a well-designed system, and hopefully you have a well-designed system, will record that an improper entry was attempted and sort of give you a hint: okay, something's up, everyone change your passwords. So it's just a good way to let you know something's going on before the real problem starts. So what do you do to control this? I mean, realistically, you're addressing an evolving problem. You're gonna have to hit it from a few different angles, honestly: certainly technology. As lawyers, we're gonna be eyeing regulation. That's not gonna get it done by itself. And user awareness is critically important because, and it's a totally natural reaction, when you show most people what deepfakes can do, you're typically showing them something like the, you know, Lynda Carter "Wonder Woman" video, not the really scary stuff. So they'll look and say, "Wow, that's awesome." Then you have to explain it in context, so that it's like, okay, that is really cool, but try and imagine if this happens, you know, and walk people through how it can actually work. So in terms of preventative steps, from a technical standpoint, you wanna start off with network access controls, not to get way too into the weeds of the technology end of it. I imagine you've probably heard of firewalls. Firewalls are filters, effectively. Traffic that's coming in and out of a network will often run through a firewall, and different devices might have firewalls in front of them, and the firewalls will have what are called rule sets. And the rules will judge the data coming in, in terms of, like, well, we might allow you in, we might not allow you in, that kind of thing. If you'll forgive the quick digression, data itself is broken into little pieces called data packets. A way to think of it is, if you've ever seen the old movie "Willy Wonka and the Chocolate Factory," and if you haven't, I feel bad for you, it's awesome, there's a scene in there in which one of the kids in the movie is zapped by something called WonkaVision.
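A quick aside before finishing that analogy: since encryption came up a moment ago, here's a minimal sketch of the idea using the third-party Python cryptography package. The point to notice is that the key, not the scrambled data, is what needs guarding, which is why you keep the key stored well away from the data itself.

```python
# Minimal illustration of symmetric encryption (assumes the "cryptography" package:
# pip install cryptography). Without the key, the stored data reads as gibberish.
from cryptography.fernet import Fernet, InvalidToken

key = Fernet.generate_key()          # keep this somewhere far away from the data itself
f = Fernet(key)

ciphertext = f.encrypt(b"Deposition video index - privileged and confidential")
print(ciphertext[:40])               # what a thief sees: unreadable bytes

print(f.decrypt(ciphertext))         # with the key, the original comes right back

# With the wrong key, decryption simply fails instead of leaking anything.
try:
    Fernet(Fernet.generate_key()).decrypt(ciphertext)
except InvalidToken:
    print("wrong key: the data stays scrambled")
```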
So in that WonkaVision scene, the kid gets basically zapped, he disappears, and the other characters are looking up at the ceiling, where he's in a million pieces, and then he gets reassembled on the other side of the room. Now, that's a serious oversimplification, but to an extent, that's kind of what happens with data when you send it from place to place. It gets broken up into smaller little pieces that can actually travel on a network, and they'll run through different network devices and protective devices, something like the firewall, and then get reconstituted on the other side, assuming they don't get blocked by any of those security mechanisms. Now, with a firewall, one thing that's particularly helpful in terms of this type of technology would be something that's location-based. We talked about the idea of a fake job applicant. Let's say we have that fake job applicant, and according to their resume, they're calling you from Utah, which is awesome, except according to where their video connection is actually coming from, they're calling you from western Russia. That's a problem. And a firewall might be able to catch that. Now, there are ways in which people can fake their location, but there's a fair chance the firewall might be able to tell you something's wrong. There's also the developing technology of deepfake detectors. We'll talk a little bit more about how some of them work momentarily, but they basically look for things that are off. So where I'm talking about little signs you should keep an eye out for, they're looking for them in more detail than the naked eye might necessarily pick up. One thing in particular with these real-time fakes is a sort of challenge-response check you can do, a quick way to see if something doesn't quite line up. It's effectively a type of verification, like the out-of-band confirmation we talked about, where you're using a different contact method to go and check. With the challenge-response version, let's say, for example, you have a video interview, and you want the person to quickly wave their hand in front of their face. That sounds like an awfully weird thing to ask for, but if they're using a real-time deepfake, none of that technology can handle it, and it will turn into a big pixelated blur, and, poof, the cat's out of the bag. Now, on a related note, you wanna have security awareness training. This is paramount because, at the end of the day, deepfake technology is advancing in leaps and bounds. So you're gonna need more than just annual training. People need to understand what this is and what they might realistically run across. And the training should include, like I say, what to expect. You should mention the different types of social engineering attacks and security incidents, not just deepfakes, and it should cover how those work. 'Cause realistically, for most employees, that's how security tends to impact people directly. And of course, you wanna actually look at what deepfakes are and what attackers can do with them, especially as it relates to those social engineering attacks. Because again, for most employees, that's what you're realistically going to see. And then you wanna look for things. What makes a contact look suspicious?
What's a little off, you know? And then you wanna look at how to properly authenticate or verify that a contact is genuine. So I mentioned the idea of a challenge response, certainly a physical challenge like that hand wave. You can also have someone stand up and sit down in an interview. While that may sound ridiculous or weird, it's a great way to make sure it's not a deepfake you're talking to. Because again, none of the systems can actually do that yet. So if you're talking to someone, and you say, "Oh hey, can you stand up for a second and sit back down, just a requirement for our system," and they give you grief, that's a hint. You gotta make sure you insist on something like that, because attackers will often pretend to be in a hurry and act annoyed if they're, you know, quote, unquote, "bothered" with authenticating themselves, like, "Stop with this garbage, I've gotta move." Yeah, of course they're gonna say that. Because most people, again, wanna be helpful, and if an attacker's trying to trick you into just playing along, uh-uh, stand your ground. Now, one thing just to mention, on a related note, is something called role-based training. That's nothing too strange. It just means making sure that the training fits the actual role of the person you're training. So someone who's an attorney versus someone in a financial department versus someone in a technical department is going to have different risks to deal with from day to day. And they need to understand what they personally might see, what they personally might have as an issue to be ready for. It's just a better way to protect the firm, 'cause otherwise, you're giving people unnecessary training, which can often be rather confusing for folks. So what about efforts made to govern the underlying technology? Well, there have been a few. Some social media companies have made a few attempts. Twitter, in 2020, enacted a policy that prohibits users from sharing, quote, "synthetic or manipulated media that are likely to cause harm." And Reddit also updated its policies to ban content that, quote, "impersonates individuals or entities in a misleading or deceptive manner," unquote, while still permitting satire and parody. But, big but, at least one Reddit forum also has a list matching Hollywood actresses to pornographic actresses with corresponding body types to create more, quote, unquote, "accurate" deepfake pornography. So yeah, it sounds like they're doing the work, but there is some stuff that's definitely missing. Google has a general ban list, which it applies to most deepfakes, especially the type that is obviously not of a parody nature. Then you have the not-so-savory sites online, like a 4chan or an 8chan. Obviously, they don't ban any of this stuff. They couldn't care less. A site like Pornhub actually does theoretically ban this stuff. It's listed as non-consensual pornography, though they don't really put a lot of effort into actually enforcing it. One thing that can actually help in that regard, if you unfortunately have a client who's dealing with this, is an organization called Stop Non-Consensual Intimate Image Abuse. The website is stopncii.org, that's stop, S-T-O-P, then N-C-I-I dot org. And effectively, what they do is help people put together a case that they can present to one of these hosting services to have the imagery taken down. And they claim a 90% success rate, so it's certainly not a bad place to start.
Obviously, again, law enforcement is always a great place to go as well. But getting an image like that offline as soon as possible is certainly a good thing to do. I mentioned the deepfake detectors. Again, this is a technological approach. The most recent one, at the time of this recording, is one called FakeCatcher from Intel. It was released in mid to late 2022, and it claims 96% accuracy. Now, what's really, really cool with this one is how it works. What it's looking for is, believe it or not, blood flow patterns underneath the skin of the people in videos, something that, with the naked eye, we're just not gonna notice. It's looking at how the pixels over the skin change as blood flows, to make sure it looks consistent. Because if it doesn't, it catches that something's up, and, poof, they're done. And some of the others are pretty interesting. There's one from Stanford University that specifically zeroes in on lip-syncing software. And if it detects that, it, again, calls BS and shuts it down. There are also different ones from the Department of Defense, Facebook, Adobe, Google, an outfit called Chipzilla, and others. The idea is, of course, they wanna try and give people a leg up on catching this stuff before it's too late. So what about legal efforts to try to challenge some of this? Again, this is pretty nascent stuff at this point, 'cause the technology's fairly new, and it's advancing at a rather rapid pace. One thing you can go with is certainly an intellectual property approach. That would be for deepfakes using unauthorized IP. It's not always going to work, but that is one way to go about it. Certainly, a tort claim approach is another one to go with. You could have invasion of privacy, defamation, false light, or fraud claims against the deepfake creators. And that's of course assuming they can be identified, 'cause often people don't exactly put their names on some of the more unsavory stuff. And one other approach is to go with negligence against companies that fail to protect their customers from deepfake-induced account compromise. Okay, let's plain-English that. Let's say you're tricked into giving up some credentials to get into a system, a password, et cetera, by a deepfake, be it deepfake audio or whatever else, and the company that let that happen was supposed to have protections in place to stop it. Like, let's say, your bank is supposed to always confirm directly with you, and they didn't do it, and this person masqueraded as you and moved funds out. So you might be able to go after the bank with a negligence claim. Also, under certain regulations, there might be a regulatory right of action, like the California Consumer Privacy Act, for example. And also, depending on which state you're in, there are potential rights of action under some state constitutions. Now, another consideration with this, and I certainly don't mean to slam the idea of litigation by any means, but realistically, the speed of litigation versus the speed at which videos like this spread across the internet can really be problematic. You might have to take some emergency measures to attempt to slow it down, but at this point it's an uphill battle. So in terms of criminal liability, there are existing revenge pornography laws. That's great. However, they typically relate to actual images or videos of the victim being shared. So in many cases, they don't actually cover what's happening here.
There are a few states that have touched upon it, in particular Texas, Virginia, and California. Texas is one that's a little odd because it doesn't actually criminalize deepfake pornography. It criminalizes deepfake political content. So while that's something, it obviously has a very limited scope. Virginia and California are a little more direct. Also, this is being recorded in late 2022, and as this comes out, there's actually a law coming in that was hard fought by a bunch of folks who are really involved: in the UK, the Online Safety Bill is hopefully going to be amended as of early 2023. And that will criminalize the sending or sharing of deepfake pornography. However, it does not cover the creation of it, just actually disseminating it, sending it, sharing it, et cetera. And it proposes a criminal offense, I think with a minimum of two years, that prohibits sharing non-consensual intimate imagery as a whole. So the key there is that it includes deepfakes, but it doesn't create a specific law on deepfakes alone. It's sort of being rolled into what in effect is a revenge porn bill. One thing that is good, though, is that it does specifically define deepfakes for the first time in that law. And, also a big key, it removes the intent requirement. What's so critical about that is typically there'd be an intent requirement, meaning an intent to cause harm to or humiliate the victim. And what someone would often respond to that with would be something to the effect of, "Oh hey, I was just kiddin' around, ha-ha-ha," which of course is not at all funny to the victim. And with this, that's gone. You spread it around, you're subject to the law. So realistically, we're just at the start of this. This technology is going to get better, and the associated threats are going to become a lot worse. Technological solutions, regulations, and user awareness are going to have to work to keep up. Sorry, I don't mean to sound a little negative about this, but it's gonna require some actual effort here. It does seem like a lot of companies are up for the technical challenge. Regulations might be a little trickier. Hopefully, more states will come up to speed with it, and hopefully this can become more of a fun technology than a scary one. Thank you so much for your time.