︎︎︎ About

Using 5G as a narrative framework, this design project explores future use cases and discusses the impact the technology could have on our society.

︎︎︎ Artifacts

  1. Autonomous Decisions
  2. Remote Work
  3. Fake Society
  4. Cultural Streaming
  5. Decentralized Health
  6. Contagion Mapping
  7. Digital Education
  8. Connectivity as Real Estate
  9. Virtual Shopping

︎︎︎ Final Reflections

We are in the middle of a connectivity shift, with 5G coverage expanding every day.

︎︎︎ Picture Library



Fake Society


What if the real world and synthetic reality were impossible to distinguish?


Face-swapping and deepfake technology is becoming increasingly lightweight and realistic. This next generation of social media filters allows you not just to “wear” the face of celebrities or your friends, but to make your words sound just like theirs. By combining video and audio, anyone can create convincing fake videos with matching audio in an instant.


Original picture
︎︎︎

Video input
︎︎︎

Video output
︎︎︎



︎︎︎Created with the First Order Motion Model

5G and Feasibility

The face recognition software running on our smartphones today has existed for several years, and it runs well even on older devices. Reaching the next level of realism in video manipulation, however, would require serious hardware upgrades. With 5G, that extra computing power could be provided through edge computing, which relocates the heavy computation to a 5G base station close to the user.

Deepfake technology relies on large amounts of video or picture reference material to train AI models. These models map facial properties and manipulate the original photo or video with a face of your choice. Given enough time and reference material, the results can be highly convincing, and spotting what is real and what is fake can be challenging, even if you know what to look for. The same is true for synthesized speech and voice generation software, which can be described as deepfake for audio instead of video.

We tested the deepfake model “First Order Motion Model for Image Animation”, made by Aliaksandr Siarohin, Stéphane Lathuilière, Sergey Tulyakov, Elisa Ricci and Nicu Sebe. Getting it running was reasonably straightforward, even without any prior knowledge.
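For readers curious about how such a test works in practice, the commands below sketch one way to run the model. This is a minimal sketch based on the authors’ public code repository; the script name and flags follow its README at the time of writing, while the checkpoint and media paths are placeholders you would substitute yourself.

```shell
# Sketch: animating a single source image with the motion of a driving video,
# using the authors' public repository. Paths marked path/to/... are
# placeholders; the pretrained checkpoint is distributed separately by the
# authors and must be downloaded first.
git clone https://github.com/AliaksandrSiarohin/first-order-model
cd first-order-model
pip install -r requirements.txt

python demo.py \
  --config config/vox-256.yaml \
  --checkpoint path/to/vox-cpk.pth.tar \
  --source_image path/to/source.png \
  --driving_video path/to/driving.mp4 \
  --relative --adapt_scale
```

The model transfers the head and facial motion of the driving video onto the still source image, which is why a short selfie video is enough to produce a convincing animation of someone else’s face.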



︎︎︎Using a selfie video as a dataset to train the deepfake

The 5G network could speed up the creation of generative media such as deepfakes and synthesized voices, and make it possible to create them directly on a mobile device.

Reflection

Our society has long been aware that images can be manipulated. The same is not yet true for video, and we are just starting to see the consequences. In 2018, the governor of São Paulo, João Doria, was accused of being unfaithful to his wife by taking part in an orgy. The incident was recorded on video, and although Doria appears to be present, he claimed the footage was fake. Nobody has been able to prove him wrong.

Video material has been used as evidence for several years in courts all over the world.

How would it impact society if a ten-year-old possessed the tools to create highly convincing video and audio material? What if digital media completely lost its credibility?


︎︎︎Posting the deepfake to TikTok. This visualization shows a deepfake of Queen Sonja of Norway.

There are many examples of this technology being used for entertainment, and its potential in the movie industry is becoming evident. Although we doubt that the general population would use tools like this to cause harm, the potential for serious harm to individuals, society or governments is immense. Facebook has reportedly banned deepfakes on its platform, as they alter and distort reality in ways that are hard for the average person to detect. But what happens when the quality of these synthetic pieces of reality is indistinguishable from actual reality? By the time new tools can spot a deepfake, the deepfakes themselves may have evolved past them. What if generative media was used to blackmail you or your family? How do you prove that a video of ‘you’ is fake if you have no alibi?

Should this be considered identity theft? Where do we draw the line between harmless fun and societal threats?

The questions we ask through this artifact involve the future of deepfakes, fake news and identity theft. Unlike the other artifacts we explore, this one has few desirable outcomes. It is a looming challenge, and large corporations like Microsoft and Google are working to combat deepfakes. Because of the potential for harm, several contributors to the evolution of deepfake AI have removed instructions for using it from the internet. Despite this, the information is still out there, and it is only a matter of time before someone makes it accessible and easy to use for the general public. As consumers, taking a critical approach to what we see, hear or read will become increasingly important.


Normalized through everyday use:
Deep fake ︎︎︎ Synthetic reality

Entertainment ︎︎︎ Movies, Social Media ︎︎︎ Sharing, Communication, Memes, Media ︎︎︎ Information, Debate, Criticism, Internet