︎︎︎ About

With 5G as a narrative and framework, this design project explores future use cases and discusses the impact 5G could have on our society.

︎︎︎ Artifacts

  1. Autonomous Decisions
  2. Remote Work
  3. Fake Society
  4. Cultural Streaming
  5. Decentralized Health
  6. Contagion Mapping
  7. Digital Education
  8. Connectivity as Real Estate
  9. Virtual Shopping

︎︎︎ Final Reflections

We are in the middle of a connectivity shift, with 5G coverage expanding every day.

︎︎︎ Picture Library



Artifacts

Our artifacts evidence a series of “what if” questions. These questions are based on our key findings, trends and characteristics, and address the overarching theme that each artifact explores.

We have placed the artifacts 2-5 years in the future, imagining that nationwide 5G has been implemented with close to 100% coverage. This fits well with the Gartner Hype Cycle, which expects 5G to reach the “plateau of productivity”, resulting in mainstream adoption. Because of this, we have evidenced the artifacts through devices most people are familiar with, such as smartphones, tablets and laptops, as they will probably still be around at the time.

We have chosen to zoom in on three of the artifacts, offering a broader level of reflection ︎︎︎


Autonomous Decisions

Exploring trust and interaction with autonomous systems.

︎︎︎ What if we embrace autonomous systems in decision-making roles?

Remote Work

Exploring physical presence, job markets and decentralization
︎︎︎ What if physical presence did not determine the jobs you could perform?

Fake Society

Exploring the future of deepfakes, fake news and identity theft
︎︎︎ What if the real world and synthetic reality were impossible to distinguish?

Autonomous Decisions


What if we embrace autonomous systems in decision-making roles?


Combining drones, artificial intelligence and football, the NFF Element system is distributed to clubs and communities around the country. The system allows football matches to be organized and conducted without a human referee, and initially serves as a way to play in the event that a physical referee is missing.




5G and Feasibility

The autonomous system behind the NFF Element relies on edge computing to transfer and process the data it uses to make decisions, mainly live video. Since the computation is located at the edge, the drone itself needs less computing hardware, freeing up space for other useful features such as a larger battery. Although similar systems could work with cabled connections and mounted cameras, this would limit their viability to urban areas or require expensive fiber installations. The drone's nimble nature, minimal setup and off-the-shelf availability are what keep costs down, enabling widespread adoption.
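The NFF Element is a fictional artifact, but the edge-offloading pattern it assumes can be sketched in code. In the sketch below everything specific (the endpoint URL, the response format, the confidence threshold) is hypothetical; the point is only that the drone compresses and forwards frames over 5G while the heavy inference runs on an edge server.

```python
# Minimal sketch of the assumed edge-offloading pattern: the drone only captures
# and compresses video, while decision-making runs on a hypothetical 5G edge node.
# The endpoint URL and JSON response format are invented for illustration.
import cv2        # pip install opencv-python
import requests   # pip install requests

EDGE_ENDPOINT = "http://edge.example.local:8080/referee/infer"  # hypothetical

def stream_decisions(camera_index: int = 0) -> None:
    cap = cv2.VideoCapture(camera_index)
    try:
        while cap.isOpened():
            ok, frame = cap.read()
            if not ok:
                break
            # Compress on the drone; per-frame inference stays at the edge.
            _, jpeg = cv2.imencode(".jpg", frame, [cv2.IMWRITE_JPEG_QUALITY, 80])
            resp = requests.post(
                EDGE_ENDPOINT,
                data=jpeg.tobytes(),
                headers={"Content-Type": "image/jpeg"},
                timeout=0.2,  # tight latency budget assumed to be realistic over 5G
            )
            decision = resp.json()  # e.g. {"call": "offside", "confidence": 0.91}
            if decision.get("confidence", 0.0) > 0.9:
                print("Referee call:", decision["call"])
    finally:
        cap.release()

if __name__ == "__main__":
    stream_decisions()
```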

We have seen AIs serve the role of a referee before, both in video games like FIFA and in scientific experiments. Goal-line technology and VAR (Video Assistant Referee) are already implemented in several international competitions, despite controversy and often heavy opposition. As a football match is tied to a limited physical space and governed by a defined ruleset, it is a good candidate for considering AI in the first place.


︎︎︎Accessing the NFF Element through a smartphone.


Reflection

However, there are many human factors that can be harder to judge. The core rules of football are the same for a team of seven-year-olds and for Liverpool FC. How would the system adapt to and handle different levels of physical contact, aggression and general behaviour? A human referee should know the rules well and have full authority, guiding the players when they make mistakes. Experiencing the referee as fair, impartial and understanding shapes how children behave towards referees later. How will these values translate to a digital system? How does an autonomous system compare to a referee with 30 years of experience, or one who is about to judge their first match?


How does an autonomous system handle a furious dad, who is willing to fight for his child's claim to a penalty?

What could be the consequences if the provider of such a system had commercial interests, like Nike or Adidas, and could profit directly from the data being generated? By merging the physical and digital worlds, it could enable opportunities like playing as a digital version of yourself in the newest FIFA or Football Manager game. However, the same data could also be used to offer targeted recommendations and advertising, as the Nike campaign below illustrates.

The questions we ask through this artifact involve feelings, trust and decisions. If we imagine this artifact being taken further, it quickly relates to more serious matters than football (although we acknowledge that football is more than just a game for a large number of people). In a future where autonomous systems are implemented in contexts like traffic, law enforcement, search and rescue and food delivery, these questions raise serious ethical concerns. Who is responsible when something goes wrong? Could 5G scramblers be used to shut down important societal functions? How do we avoid transferring human bias into the systems and services we create?





︎︎︎The team can use the footage the NFF Element records for their social media platforms or for analytics purposes.


︎︎︎Commercial on YouTube for the new FIFA2020. Players can use the data supplied by NFF Element to play with and against their idols, using their own skill set.


︎︎︎In this case, Nike has created a campaign: you upload your data to their website, the data is analyzed, and you get recommendations for shoes or other gear you could buy to increase your own performance.


︎︎︎ Concept map: Ruleset with human discretion ︎︎︎ Digital Decisions ︎︎︎ Ruleset

Sports ︎︎︎ Referee
Traffic ︎︎︎ Fines, Redirecting, Traffic lights
Search and rescue ︎︎︎ Urban, Mountain, Air-Sea
Law Enforcement ︎︎︎ Discovering, Deterring, Rehabilitating, Punishing

Remote Work


What if physical presence did not determine the jobs you could perform?


By 2050, it is projected that more than two-thirds of the world's population will live in urban areas. Expanding cities to meet the needs of the future requires extensive construction work in every part of the world. As a consequence, a new type of business model has appeared in the construction industry, enabling remote operations on a large scale.





5G and Feasibility


The remote operation of heavy machinery such as excavators, cranes and bulldozers relies on the 5G network's precision positioning, increased speeds, reduced latency and network slicing. This combination of characteristics allows for haptic feedback and real-time high-definition video transmission, with a secure and prioritized slice of the network dedicated to the construction business. Ericsson has previously done similar projects using 4G, but suggests that 5G is the future for this kind of evolution, as the 4G network is not suited for tasks where a few milliseconds of extra latency could be disastrous.
They also address the challenge of hiring skilled people for jobs in remote areas, as workers are often drawn to the cities, which is what this artifact explores. Much like the physical boundaries of the football field in the previous artifact, construction sites are usually restricted areas with many security precautions, making them a suitable starting point for implementing remote-controlled machinery, as opposed to open, accessible and crowded locations.


︎︎︎The communication between the operator and the customer. If the operator speaks a different language, the chat instantaneously translates the messages for both of them.


Reflection

When discussing remote or digital presence, especially tied to critical services or operations, several issues arise. The possibility of living where you want and performing a job wherever work is available is something many people would appreciate. However, this idea might also open up possibilities with less desirable effects. What if large companies recruited a workforce in low-cost countries on poor wages, and used them to undercut local actors on price? Would anyone care where the workers were located, or about their working conditions? How could you even be sure who was piloting the machinery in the first place?


When discussing this artifact with Joakim Formo, who has worked on the Ericsson 4G excavator project mentioned previously, we learned that a huge amount of time in current, physical excavator operation is spent waiting. One person could effectively operate ten excavators at once by connecting to another job while waiting for others to perform their tasks on the previous one. Would the number of machines controlled by a single operator be limited due to safety concerns, or would operators be offered piecework payment, encouraging simultaneous jobs? What does the remote construction industry look like from the customer's side?


The questions we ask through this artifact involve physical presence, job markets and decentralization. It can be perceived as the construction-industry equivalent of Econovenience services (Fiverr, Uber, Foodora, Voi), which might eventually expand to other industries like mining, forestry or shipping. What would this mean for future job markets, education and demographics? Would we see the disappearance of local providers and workers due to outsourcing? Or could the need for on-demand machinery create new jobs and opportunities locally?


︎︎︎ Concept map: Physical presence = illusion of trust? ︎︎︎ Remote Work ︎︎︎ Physical presence

Operator ︎︎︎ Excavator, Crane, Mining
Cargo transport ︎︎︎ Ocean, Air, Rail, Road
Transport of people ︎︎︎ Bus, Tram, Underground, Train, Boat, Plane





Fake Society


What if the real world and synthetic reality were impossible to distinguish?


Face-swapping and deepfake technology is becoming increasingly lightweight and realistic. This next generation of social media filters allows you not just to “wear” the face of famous people or your friends, but also to make your words sound just like theirs. By combining video and audio, anyone can create convincing fake videos with matching audio in an instant.


Original picture ︎︎︎ Video input ︎︎︎ Video output
︎︎︎Created with the First Order Motion Model

5G and Feasibility

The face recognition software running on our smartphones today has existed for several years already, and it runs well even on older devices. To reach the next level of realism in video manipulation, however, our smartphones would need some serious hardware upgrades. With 5G, that hardware upgrade becomes available through edge computing, relocating the computation to a 5G base station closer to the user. Deepfake technology relies on large amounts of video or picture reference material to train AI models.

These models map facial properties, and manipulate the original photo or video with a face of your choice. Given enough time and reference material, the results can be highly convincing, and spotting what is real and what's fake can be challenging, even if you know what to look for. The same is true for synthesized speech and voice generation software, which can be described as deepfake for audio instead of video.

We have tested the deepfake model “First Order Motion Model for Image Animation”, made by Aliaksandr Siarohin, Stéphane Lathuilière, Sergey Tulyakov, Elisa Ricci and Nicu Sebe. This was already reasonably straightforward, even without any prior knowledge.
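As a rough sketch of what that test involved, the snippet below follows the demo code published alongside the model (the first-order-model repository). The function names and arguments are based on our reading of that demo and may differ between versions; the file paths, checkpoint and config names are placeholders.

```python
# Sketch of animating a still source image with a driving selfie video using the
# "First Order Motion Model for Image Animation". Mirrors the authors' demo code
# (github.com/AliaksandrSiarohin/first-order-model); function names, arguments and
# checkpoint/config files are assumptions that may differ between repo versions.
import imageio
from skimage import img_as_ubyte
from skimage.transform import resize
from demo import load_checkpoints, make_animation  # from the first-order-model repo

# Placeholder inputs: a portrait photo and a short driving video of yourself.
source_image = resize(imageio.imread("source.png"), (256, 256))[..., :3]
driving_video = [resize(frame, (256, 256))[..., :3]
                 for frame in imageio.mimread("driving.mp4", memtest=False)]

# Pre-trained generator and keypoint detector (checkpoint downloaded separately).
generator, kp_detector = load_checkpoints(config_path="config/vox-256.yaml",
                                          checkpoint_path="vox-cpk.pth.tar")

# Transfer the motion (keypoint trajectories) of the driving video onto the source face.
predictions = make_animation(source_image, driving_video, generator, kp_detector,
                             relative=True)
imageio.mimsave("result.mp4", [img_as_ubyte(frame) for frame in predictions])
```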



︎︎︎Using a selfie video as the dataset to train the deepfake.

The 5G network could speed up the creation of generative media like deepfakes and synthetic voices, and allow them to be created from a mobile device.

Reflection

Our society has been aware of the possibilities of manipulating images for a long time. The same is not true when it comes to video, and we are just starting to see the evidence. In 2018, the governor of São Paulo, João Doria, was accused of being unfaithful to his wife by taking part in an orgy. The incident was recorded on video, and although Doria appears to be present, he claimed the video was fake. Nobody has been able to prove him wrong.

Video material has been used as evidence for several years in courts all over the world.

How would it impact society if a ten-year-old possessed the tools to create highly convincing video and audio material? What if digital media completely lost its credibility?


︎︎︎Posting the deepfake to TikTok. In this visualization, it is a deepfake of Queen Sonja of Norway.

There are many examples of this technology being used for entertainment purposes, and its potential in the movie industry is becoming evident. Although we doubt that the general population would use tools like this to cause harm, there is an immense potential to cause serious harm to individuals, society or governments. Facebook has apparently banned the use of deepfakes on its platform, as they alter and distort reality in ways that are hard for an average person to detect. However, what happens when these synthetic pieces of reality are indistinguishable from actual reality? By the time new tools are able to spot a deepfake, the quality of the deepfake could have evolved. What if generative media were used to blackmail you or your family? How do you prove that a video of ‘you’ is fake, if you have no alibi?

Should this be considered identity theft? Where do we draw the line between harmless fun and societal threats?

The questions we ask through this artifact involve the future of deepfakes, fake news and identity theft. Unlike the other artifacts we explore, this one has fewer desirable outcomes. It is a looming challenge, and big corporations like Microsoft and Google are trying to combat deepfakes. Because of the potential to do harm, several contributors to the evolution of deepfake AI have removed information about how to use it from the internet. Despite this, the information is still out there, and it is just a question of time before someone makes it accessible and easy to use for the general public. As consumers, having a critical approach to what we see, hear or read will be increasingly important.


︎︎︎ Concept map: Normalized through everyday use ︎︎︎ Deep fake ︎︎︎ Synthetic reality

Entertainment ︎︎︎ Movies
Social Media ︎︎︎ Sharing, Communication, Memes
Media ︎︎︎ Information, Debate, Criticism, Internet

Cultural Streaming


What if special moments were accessible and offered better viewing experiences?


Strimo is a platform for viewing and streaming events that generates dynamic viewing experiences. This artifact focuses on the types of events that normally don’t have the opportunity or capacity to be recorded and streamed, making the content available and more engaging for anyone. These events could be anything from smaller local concerts to school plays.



5G and Feasibility

With the next generation of smartphones supporting 5G, the speed, low latency and capacity it brings will be with us everywhere. Most new devices also record video in 4K or 8K, making it possible to zoom and crop while still offering good resolution. The video content is edited live, using the 5G network to let an AI act as the director. When multiple smartphones are used, the AI director can create a multi-camera production, adding production value to the content being streamed.



︎︎︎An automated editing function could create an engaging viewing experience.

Existing multi-camera software already synchronizes footage, and live-streaming via 4G is possible, although limited to lower resolutions. This artifact could make events attendable digitally, without needing dedicated technical staff or someone with the knowledge to set up a dedicated stream.

Reflection

This artifact could benefit small local performers, expanding their audience or enabling family and friends to tune in, without the viewing experience being impacted by lagging video or choppy sound. It would also work as an arena for bigger stages to expand their reach, offering a digital ticket to performances that would otherwise only be available to those living near or travelling to the venue.


︎︎︎Simultaneous streaming to multiple platforms. The automated editing can create personalized experiences, focusing on the parts or people you want to pay attention to.

What if the convenience of watching school plays remotely results in empty seats or even empty theaters? Since the event is streamed anyway, you could spend that extra hour at work and still see the play.



The questions we ask through this artifact involve the future of entertainment, events and digital participation. During the COVID-19 pandemic, we have seen the importance and reach of digital services, especially related to video and streaming. What if this crisis changes the way we stay connected in the future? The new Munch museum recently had 4.3 million digital visitors on one of its digital tours, which is around the same number of people as have visited the old Munch Museum in total since its opening in 1963. Reaching out to the whole world offers many possibilities. But does this mean that everything should be streamed, shared and available online? What if there is no room for an offline world in the future?