Forget lies in meme form and "alternative facts": this video gave me the chills. From this Vanity Fair article: http://www.vanityfair.com/news/2017/01/fake-news-technology

Less than a month after Donald Trump was improbably elected the 45th president of the United States, a strange story began to make its way across social media. In the quaint days before Russia's dissemination of fake-news stories in the interest of facilitating Trump's victory became front-page news, a 28-year-old named Edgar Maddison Welch began reading about a pizzeria in Washington, D.C., that housed young children as sex slaves in a devilish operation masterminded by the recently vanquished Democratic candidate for president, Hillary Clinton. So Welch decided to drive the six or so hours up from his home in Salisbury, North Carolina, to Comet Ping Pong in northwest D.C., where he opened fire with an AR-15.

The Comet Ping Pong story, and the even more disturbing news of the Kremlin's role in our election, merely underscore fake news's rapid ascent from an amorphous notion to perhaps the most significant digital epidemic facing the media, government, and, at the risk of sounding mildly hysterical, democracy itself. One Pakistani military official, confused by a fake-news story, raised the prospect of a nuclear war with Israel. (Recall that Michael Flynn Jr., the son of Trump's national security adviser, shared the Comet Ping Pong story on Twitter.) Meanwhile, our current president spent virtually his entire campaign inventing or proliferating fabricated stories, such as his suggestion that Ted Cruz's father was involved in the plot to assassinate John F. Kennedy (he wasn't) and his pronouncement that violent crime was at an all-time high in the U.S. (crime rates, while rising slightly in the last year, are near a 20-year low). While all of these stories were fabricated in various ways, they shared one technological commonality: they were almost entirely text-based.
And that is about to change. At corporations and universities across the country, incipient technologies appear likely to soon obliterate the line between real and fake. Or, in the simplest of terms, advancements in audio and video technology are becoming so sophisticated that they will be able to replicate real news (real TV broadcasts, for instance, or radio interviews) in unprecedented, and truly undetectable, ways. One research paper published last year by professors at Stanford University and the University of Erlangen-Nuremberg demonstrated how technologists can record video of someone talking and then change the subject's facial expressions in real time. The professors' technology could take a news clip of, say, Vladimir Putin, and alter his expressions as the clip plays, in ways that are very hard to detect. In fact, in a video demonstrating the technology, the researchers show how they manipulated Putin's facial expressions and responses, among those of other people, too.

This is eerie, to say the least. But it's only one part of the future fake-news menace. Similar technologies have been in the works in universities and research labs for years, but they have never been able to pull off what computers can do today. Take, for example, "The Digital Emily Project," a study in which researchers created digital actors that could be used in lieu of real people. For years, the results were crude and easily detectable as digital re-creations. But technologies now used by Hollywood and the video-game industry have rendered digital avatars almost indistinguishable from real people. (Go and watch the latest Star Wars and see if you can tell which actors are real and which are computer-generated. I bet you can't.) You could imagine some political group using that technology to create a fake hidden-camera clip of President Trump telling Rex Tillerson that he plans to drop a nuclear bomb on China.
The velocity with which news clips spread across social media would also mean that the administration would have frightfully little time to respond before a fake-news story turned into an international crisis.

Audio advancements may be just as harrowing. At its annual developers' conference in November, Adobe showed off a new product that has been nicknamed "Photoshop for audio." The product lets users feed roughly 10 to 20 minutes of someone's voice into the application and then type words that are spoken back in that exact voice. The resultant voice, which is assembled from the person's phonemes (the distinct units of sound that distinguish one word from another in a given language), doesn't sound even remotely computer-generated or made up. It sounds real.

This sort of technology would make it possible to feed one of Trump's interviews or stump speeches into the application and then type sentences or paragraphs in his spoken voice. You could very easily imagine someone creating fake audio of Trump explaining how he dislikes Mike Pence, or how he lied about his taxes, or that he did indeed enjoy that alleged "golden shower" in the Russian hotel suite, then circulating that audio around the Internet as a comment overheard on a hot microphone. Worse, you could imagine a scenario in which someone uses Trump's voice to call another world leader and threaten some sort of violent action. And perhaps worst of all, as the quality of imitation gets better and better, it will become increasingly difficult to discern what is real behavior and what isn't. Perhaps the scariest part is that, one day soon, this sort of technology will spread beyond the academies and institutions to the point where you or I will be able to create fake digital clips as easily as regular people created fake-news stories during this election cycle.
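To make concrete how little magic the phoneme trick requires, here is a toy sketch of the concatenation idea. Everything in it is invented for illustration (Adobe has not published its pipeline, and the "phoneme units" here are plain sine tones rather than clips cut from real recordings); the point is only that once a speaker's sound units exist, "typing" new speech is lookup plus concatenation:

```python
import numpy as np

SAMPLE_RATE = 16000  # audio samples per second

def make_unit(freq_hz, dur_s=0.1):
    """Stand-in for one phoneme clip that would be cut from recorded speech."""
    t = np.linspace(0, dur_s, int(SAMPLE_RATE * dur_s), endpoint=False)
    return np.sin(2 * np.pi * freq_hz * t)

# A toy "voice bank": phoneme label -> audio unit for this speaker.
# A real system would harvest these from the 10 to 20 minutes of recordings.
voice_bank = {
    "HH": make_unit(220.0),
    "EH": make_unit(330.0),
    "L":  make_unit(262.0),
    "OW": make_unit(392.0),
}

def synthesize(phonemes, bank):
    """Concatenate the speaker's stored units to 'say' a word never recorded."""
    return np.concatenate([bank[p] for p in phonemes])

audio = synthesize(["HH", "EH", "L", "OW"], voice_bank)  # "hello"
print(len(audio) / SAMPLE_RATE)  # duration in seconds: 0.4
```

A production system would add crossfading between units and prosody modeling so the seams disappear; what should unsettle you is that the core mechanism is this simple.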
The technology out of Stanford that can manipulate a news clip in real time doesn't need an array of high-end computers like those used by Pixar; it needs only a news clip from YouTube and the standard webcam on your laptop.

In many ways, we're starting to see the beginning of this phenomenon. This week, an animated GIF floated around Twitter depicting Trump at his sparsely attended inauguration. In the short clip, Trump turns around and looks at his wife, Melania; the couple exchanges a few words; she smiles and laughs, and then, as he turns back, Melania's smile morphs into a sad, pained look. The problem is that, on social networks, no one knew if the clip was real or if it was being played in reverse, in which case Melania would have brightened when her husband turned around and looked at her. (As far as I could tell, the clip was real.)

Or take the controversy around the movie A Dog's Purpose, which has been boycotted by PETA and countless celebrities after TMZ published a clip of a German shepherd being forced into raging waters on the film's set. While the footage TMZ published was in fact real (and awful, on many levels), one of the film's producers came out to say that the clip from the movie's trailer that PETA was circulating to protest the film was misleading: the dog jumping into the treacherous waters, he said, was computer-generated. It's impossible to tell the difference.

If there's one thing we learned from this election cycle, it is that people create fake-news stories for a number of reasons. One of those is financial. Cameron Harris, a recent college graduate, told The New York Times that he made up stories on a fake-news site he created for $5. Harris, who concocted stories about voter fraud and Hillary Clinton, made as much as $1,000 an hour while millions of people clicked on his fabricated posts. "I spent the money on student loans, car payments, and rent," he bragged to the Times.
Some people create fake news simply to demonstrate how easy it is to mess with the mainstream media. Then there is the reality that other governments can weaponize fake news as an act of digital terror. After interfering with our election process (a point that even Trump now concedes), Putin's propaganda henchmen are now doing the same thing in Europe. Even with the relatively primitive technology currently at their disposal, it has been chillingly effective. It has become clear, after all, that most news consumers don't want to know whether what they are reading is real or fake; they just want to know that it supports their worldview. As Pew Research has noted, in today's media-saturated society, "liberals and conservatives inhabit different worlds."

What's even scarier is where these technologies and ossified worldviews will be by the 2018 midterm elections, or the presidential election in 2020. At that point, one suspects, there will be not only thousands of fake-news articles floating around the Internet but countless fake videos and fake audio clips, too. If you combine those technologies with a president who is known to lie about even the most trifling matters, we won't know what is real and what is fake any longer. If ever there was a time for the people creating technologies to keep in mind the impact of their creations, it's now.