Tag: deepfakes

Even while a thing is in the act of coming into existence, some part of it has already ceased to be

Cat next to laptop which has on-screen "System error"

💻 Zoom and gloom

🤖 ‘Machines set loose to slaughter’: the dangerous rise of military AI

📏 Wittgenstein’s Ruler: When Our Opinions Speak More About Us Instead of the Topic

🤨 Inside the strange new world of being a deepfake actor

🎡 Japanese Amusement Park Turns Ferris Wheel Into Wi-Fi Enabled Remote Workspace


Quotation-as-title from Marcus Aurelius. Image from top-linked post.

Technology is the name we give to stuff that doesn’t work properly yet

So said my namesake Douglas Adams. In fact, he said lots of wise things about technology, most of them too long to serve as a title.

I’m in a weird place, emotionally, at the moment, but sometimes this can be a good thing. Being taken out of your usual ‘autopilot’ can be a useful way to see things differently. So I’m going to take this opportunity to share three things that, to be honest, make me a bit concerned about the next few years…

Attempts to put microphones everywhere

Alexa-enabled EVERYTHING

In an article for Slate, Shannon Palus ranks all of Amazon’s new products by ‘creepiness’. The Echo Frames are, in her words:

A microphone that stays on your person all day and doesn’t look like anything resembling a microphone, nor follows any established social codes for wearable microphones? How is anyone around you supposed to have any idea that you are wearing a microphone?

Shannon Palus

When we’re not talking about weapons of mass destruction, it’s not the tech that concerns me, but the context in which the tech is used. As Palus points out, how are you going to be able to have a ‘quiet word’ with anyone wearing glasses ever again?

It’s not just Amazon, of course. Google and Facebook are at it, too.

Full-body deepfakes

Scary stuff

With the exception, perhaps, of populist politicians, I don’t think we’re ready for a post-truth society. Check out the video above, which shows Chinese technology that allows for ‘full body deepfakes’.

The video is embedded, along with a couple of others, in an article for Fast Company by DJ Pangburn, who also notes that AI is learning human body movements from videos. Not only will you be able to prank your friends by showing them a convincing video of your ability to do 100 pull-ups, but the fake news it engenders will mean we can’t trust anything any more.

Neuromarketing

If you clicked on the ‘super-secret link’ in Sunday’s newsletter, you will have come across STEALING UR FEELINGS which is nothing short of incredible. As powerful as it is in showing you the kind of data that organisations have on us, it’s the tip of the iceberg.

Kaveh Waddell, in an article for Axios, explains that brains are the last frontier for privacy:

“The sort of future we’re looking ahead toward is a world where our neural data — which we don’t even have access to — could be used” against us, says Tim Brown, a researcher at the University of Washington Center for Neurotechnology.

Kaveh Waddell

This would lead to ‘neuromarketing’, with advertisers knowing what triggers and influences you better than you know yourself. Also, it will no doubt be used for discriminatory purposes and, because it’s coming directly from your brainwaves, short of literally wearing a tinfoil hat, there’s nothing much you can do.


So there we are. Am I being too fearful here?

Friday ferretings

These things jumped out at me this week:

  • Deepfakes will influence the 2020 election—and our economy, and our prison system (Quartz) ⁠— “The problem doesn’t stop at the elections, however. Deepfakes can alter the very fabric of our economic and legal systems. Recently, we saw a deepfake video of Facebook CEO Mark Zuckerberg bragging about abusing data collected from users circulated on the internet. The creators of this video said it was produced to demonstrate the power of manipulation and had no malicious intent—yet it revealed how deceptively realistic deepfakes can be.”
  • The Slackification of the American Home (The Atlantic) — “Despite these tools’ utility in home life, it’s work where most people first become comfortable with them. ‘The membrane that divides work and family life is more porous than it’s ever been before,’ says Bruce Feiler, a dad and the author of The Secrets of Happy Families. ‘So it makes total sense that these systems built for team building, problem solving, productivity, and communication that were invented in the workplace are migrating to the family space’.”
  • You probably don’t know what your coworkers think of you. Here’s how to change that (Fast Company) — “[T]he higher you rise in an organization, the less likely you are to get an accurate picture of how other people view you. Most people want to be viewed favorably by others in a position of power. Once you move up to a supervisory role (or even higher), it is difficult to get people to give you a straight answer about their concerns.”
  • Sharing, Generosity and Gratitude (Cable Green, Creative Commons) — “David is home recovering and growing his liver back to full size. I will be at the Mayo Clinic through the end of July. After the Mayo surgeons skillfully transplanted ⅔ of David’s liver into me, he and I laughed about organ remixes, if he should receive attribution, and wished we’d have asked for a CC tattoo on my new liver.”
  • Flexibility as a key benefit of open (The Ed Techie) — “As I chatted to Dames and Lords and fiddled with my tie, I reflected on that what is needed for many of these future employment scenarios is flexibility. This comes in various forms, and people often talk about personalisation but it is more about institutional and opportunity flexibility that is important.”
  • Abolish Eton: Labour groups aim to strip elite schools of privileges (The Guardian) — “Private schools are anachronistic engines of privilege that simply have no place in the 21st century,” said Lewis. “We cannot claim to have an education system that is socially just when children in private schools continue to have 300% more spent on their education than children in state schools.”
  • I Can’t Stop Winning! (Pinboard blog) – “A one-person business is an exercise in long-term anxiety management, so I would say if you are already an anxious person, go ahead and start a business. You’re not going to feel any worse. You’ve already got the main skill set of staying up and worrying, so you might as well make some money.”
  • How To Be The Remote Employee That Proves The Stereotypes Aren’t True (Trello blog) — “I am a big fan of over-communicating in general, and I truly believe that this is a rule all remote employees should swear by.”
  • I Used Google Ads for Social Engineering. It Worked. (The New York Times) — “Ad campaigns that manipulate searchers’ behavior are frighteningly easy for anyone to run.”
  • Road-tripping with the Amazon Nomads (The Verge) — “To stock Amazon’s shelves, merchants travel the backroads of America in search of rare soap and coveted toys.”

Image from Guillermo Acuña fronts his remote Chilean retreat with large wooden staircase (Dezeen)

There’s no viagra for enlightenment

This quotation from the enigmatic Russell Brand seemed appropriate for the subject of today’s article: the impact of so-called ‘deepfakes’ on everything from porn to politics.

First, what exactly are ‘deepfakes’? Mark Wilson explains in an article for Fast Company:

In early 2018, [an anonymous Reddit user named Deepfakes] uploaded a machine learning model that could swap one person’s face for another face in any video. Within weeks, low-fi celebrity-swapped porn ran rampant across the web. Reddit soon banned Deepfakes, but the technology had already taken root across the web–and sometimes the quality was more convincing. Everyday people showed that they could do a better job adding Princess Leia’s face to The Force Awakens than the Hollywood special effects studio Industrial Light and Magic did. Deepfakes had suddenly made it possible for anyone to master complex machine learning; you just needed the time to collect enough photographs of a person to train the model. You dragged these images into a folder, and the tool handled the convincing forgery from there.

Mark Wilson

As you’d expect, deepfakes bring up huge ethical issues, as Jessica Lindsay reports for Metro. It’s a classic case of our laws not being able to keep up with what’s technologically possible:

With the advent of deepfake porn, the possibilities have expanded even further, with people who have never starred in adult films looking as though they’re doing sexual acts on camera.

Experts have warned that these videos enable all sorts of bad things to happen, from paedophilia to fabricated revenge porn.

[…]

This can be done to make a fake speech to misrepresent a politician’s views, or to create porn videos featuring people who did not star in them.

Jessica Lindsay

It’s not just video, either, with Google’s AI now able to translate speech from one language to another and keep the same voice. Karen Hao embeds examples in an article for MIT Technology Review demonstrating where this is all headed.

The results aren’t perfect, but you can sort of hear how Google’s translator was able to retain the voice and tone of the original speaker. It can do this because it converts audio input directly to audio output without any intermediary steps. In contrast, traditional translational systems convert audio into text, translate the text, and then resynthesize the audio, losing the characteristics of the original voice along the way.

Karen Hao
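The contrast Hao describes can be sketched in a few lines. This is a toy illustration only, not Google’s actual system: the “audio” is just a dictionary, and every function is a hypothetical stand-in. The point is structural: a cascaded pipeline throws the speaker’s voice away at the speech-to-text step, while a direct audio-to-audio model can carry it through.

```python
# Toy sketch (hypothetical functions): why a cascaded speech-translation
# pipeline loses the speaker's voice while a direct model can keep it.

def transcribe(audio):
    # Speech-to-text: the words survive, the voice does not.
    return audio["words"]

def translate(text):
    # Text-to-text translation (toy English -> French lookup).
    lookup = {"hello": "bonjour", "world": "monde"}
    return " ".join(lookup.get(w, w) for w in text.split())

def synthesize(text):
    # Text-to-speech: a generic synthetic voice is substituted.
    return {"words": text, "voice": "generic-tts"}

def cascaded_translate(audio):
    """Traditional pipeline: audio -> text -> translated text -> audio."""
    return synthesize(translate(transcribe(audio)))

def direct_translate(audio):
    """Direct model (as described above): audio in, audio out,
    so speaker characteristics can be carried through."""
    return {"words": translate(audio["words"]), "voice": audio["voice"]}

clip = {"words": "hello world", "voice": "alice"}
print(cascaded_translate(clip)["voice"])  # generic-tts
print(direct_translate(clip)["voice"])    # alice
```

In the cascaded version the original voice never reaches the output; in the direct version it does, which is exactly why the translated audio can sound like the original speaker.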

The impact on democracy could be quite shocking, with the ability to create video and audio that feels real but is actually completely fake.

However, as Mike Caulfield notes, the technology doesn’t even have to be that sophisticated to create something that can be used in a political attack.

There’s a video going around that purportedly shows Nancy Pelosi drunk or unwell, answering a question about Trump in a slow and slurred way. It turns out that it is slowed down, and that the original video shows her quite engaged and articulate.

[…]

In musical production there is a technique called double-tracking, and it’s not a perfect metaphor for what’s going on here but it’s instructive. In double tracking you record one part — a vocal or solo — and then you record that part again, with slight variations in timing and tone. Because the two tracks are close, they are perceived as a single track. Because they are different though, the track is “widened” feeling deeper, richer. The trick is for them to be different enough that it widens the track but similar enough that they blend.

Mike Caulfield
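Caulfield’s point is that no machine learning was needed for the Pelosi clip: the footage was simply played back more slowly. As a toy sketch (the numbers are made up), slowing video is nothing more than rescaling frame timestamps:

```python
# Slowing a video is just stretching its presentation timestamps.
# Toy example: four frames of 25 fps footage, replayed at 75% speed.

def slow_down(frame_times, speed):
    """Return new timestamps for playback at the given speed (e.g. 0.75)."""
    return [round(t / speed, 4) for t in frame_times]

original = [0.00, 0.04, 0.08, 0.12]   # four frames at 25 fps
slowed = slow_down(original, 0.75)    # replayed at 75% speed
print(slowed)  # [0.0, 0.0533, 0.1067, 0.16]
```

That a manipulation this trivial can pass for evidence of impairment is the worrying part; the sophistication of deepfakes is almost beside the point.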

This is where blockchain could actually be a useful technology. Caulfield often talks about the importance of ‘going back to the source’ — in other words, checking the provenance of what it is you’re reading, watching, or listening to. There’s potential here for checking that something is actually the original document/video/audio.
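A minimal sketch of what that provenance check could look like, using nothing more than a cryptographic hash: if a trusted registry (which is the role a blockchain might play) records the fingerprint of the original file, anyone can verify that a copy hasn’t been altered. The registry and file contents below are hypothetical.

```python
# Provenance via content hashing: an unaltered copy has the same
# SHA-256 fingerprint as the original; any edit changes it completely.
import hashlib

def fingerprint(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# Hypothetical trusted registry of original fingerprints
# (a blockchain could make this tamper-evident and public).
registry = {fingerprint(b"original video bytes")}

def is_original(data: bytes) -> bool:
    return fingerprint(data) in registry

print(is_original(b"original video bytes"))             # True
print(is_original(b"slowed-down, edited video bytes"))  # False
```

This doesn’t tell you whether the original was *true*, of course — only whether what you’re looking at matches what was originally published.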

Ultimately, however, people believe what they want to believe. If they want to believe Donald Trump is an idiot, they’ll read and share things showing him in a negative light. It doesn’t really matter if it’s true or not.


Also check out:

Friday fabrications

These things made me sit up and take notice:


Image via xkcd

The benefits of Artificial Intelligence

As an historian, I’m surprisingly bad at recalling facts and dates. However, I’d argue that the study of history is actually about the relationship between those facts and dates — which, let’s face it, so long as you’re in the right ballpark, you can always look up.

Understanding the relationship between things, I’d argue, is a demonstration of higher-order competence. This is described well by the SOLO Taxonomy, which I featured in my ebook on digital literacies:

SOLO Taxonomy

This is important, as it helps to explain two related concepts around which people often get confused: ‘artificial intelligence’ and ‘machine learning’. If you look at the diagram above, you can see that the ‘Extended Abstract’ of the SOLO taxonomy also includes the ‘Relational’ part. Similarly, the field of ‘artificial intelligence’ includes ‘machine learning’.

There are some examples of each in this WIRED article, but for the purposes of this post let’s just leave it there. Some of what I want to talk about here involves machine learning and some artificial intelligence. It’s all interesting and affects the future of tech in education and society.

If you’re a gamer, you’ll already be familiar with some of the benefits of AI. No longer are ‘CPU players’ dumb; they actually play a lot like human players. That means that, with no unfair advantages programmed in by the designers of the game, the AI can work out strategies to defeat opponents. The recent example of OpenAI Five beating the best players at a game called Dota 2, and then internet teams finding vulnerabilities in the system, is a fascinating battle of human versus machine:

“Beating OpenAI Five is a testament to human tenacity and skill. The human teams have been working together to get those wins. The way people win is to take advantage of every single weakness in Five—some coming from the few parts of Five that are scripted rather than learned—gradually build up resources, and most importantly, never engage Five in a fair fight.” OpenAI co-founder Greg Brockman told Motherboard.

Deepfakes are created via “a technique for human image synthesis based on artificial intelligence… that can depict a person or persons saying things or performing actions that never occurred in reality”. There’s plenty of porn, of course, but also politically-motivated videos claiming that people said things they never did.

There are benefits here, though, too. Recent AI research shows how, soon, it will be possible to replace any game character with one created from your own videos. In other words, you will be able to be in the game!

It only took a few short videos of each activity — fencing, dancing and tennis — to train the system. It was able to filter out other people and compensate for different camera angles. The research resembles Adobe’s “content-aware fill” that also uses AI to remove elements from video, like tourists or garbage cans. Other companies, like NVIDIA, have also built AI that can transform real-life video into virtual landscapes suitable for games.

It’s easy to be scared of all of this, fearful that it’s going to ravage our democratic institutions and cause a meltdown of civilisation. But, actually, the best way to ensure that it’s not used for those purposes is to try and understand it. To play with it. To experiment.

Algorithms have already been appointed to the boards of some companies and, if you think about it, there’s plenty of job roles where automated testing is entirely normal. I’m looking forward to a world where AI makes our lives a whole lot easier and friction-free.


Also check out:

  • AI generates non-stop stream of death metal (Engadget) — “The result isn’t entirely natural, if simply because it’s not limited by the constraints of the human body. There are no real pauses. However, it certainly sounds the part: you’ll find plenty of hyper-fast drums, guitar thrashing and guttural growling.”
  • How AI Will Turn Us All Into Filmmakers (WIRED) — “AI-assisted editing won’t make Oscar-worthy auteurs out of us. But amateur visual storytelling will probably explode in complexity.”
  • Experts Weigh in on Merits of AI in Education (THE Journal) — “AI systems are perfect for analyzing students’ progress, providing more practice where needed and moving on to new material when students are ready,” she stated. “This allows time with instructors to focus on more complex learning, including 21st-century skills.”