Cloud Gaming

From NMSL
Cloud gaming is a large, rapidly growing, multi-billion-dollar industry. Cloud gaming enables users to play games on thin clients such as tablets, smartphones, and smart TVs without worrying about processing power, memory size, or graphics card capabilities. It allows high-quality games to be played on virtually any device and anywhere, without the need for high-end gaming consoles or installing/updating software. This significantly increases the potential number of users and thus the market size. Most major IT companies offer cloud gaming services, such as Sony PlayStation Now, Google Stadia, Nvidia GeForce Now, and Amazon Tempo.
Cloud gaming essentially moves the game logic and rendering from the user’s device to the cloud. As a result, the entire game runs on the cloud and the rendered scenes are then streamed to users in real time. Rendering and streaming from the cloud, however, substantially increase the bandwidth required to serve gaming clients. Moreover, given the large scale and heterogeneity of clients, numerous streams need to be created and served from the cloud in real time, which creates a major challenge for cloud gaming providers. Thus, minimizing the resources needed to render, encode, customize, and deliver gaming streams to millions of users is an important problem. This problem gets more complex when we consider advanced and next-generation games, such as ultra-high-definition and immersive games, which are becoming popular.
In this project, we partner with AMD Canada with the goal of designing next-generation cloud gaming systems that optimize quality, bitrate, and delay, which will not only improve the quality of experience for both players and viewers, but will also reduce the cost and resource requirements for service providers.

__TOC__
== People ==

* [http://www.sfu.ca/~omossad/ Omar Mossad (PhD student)]
* Haseeb Ur Rehman (PhD student, University of Ottawa)
* Ammar Rashed (PhD student, University of Ottawa)
* Deniz Ugur (MSc student)
* Ghazaleh Bakhtiariazad (MSc student)
* Khaled Al Butainy (MSc student)
* Mohamed Hegazy (MSc student, graduated)
* [http://www.sfu.ca/~kdiab/ Khaled Diab (University Research Associate)]
* Ihab Amer (AMD Fellow)
* [https://www.site.uottawa.ca/~shervin/ Shervin Shirmohammadi (Co-PI), University of Ottawa]
* [http://www.cs.sfu.ca/~mhefeeda/ Mohamed Hefeeda (PI)]
  
'''DeepGame: Efficient Video Encoding for Cloud Gaming'''

Cloud gaming enables users to play games on virtually any device. This is achieved by offloading the game rendering and encoding to cloud datacenters. As game resolutions and frame rates increase, cloud gaming platforms face a major challenge in streaming high-quality games due to the high bandwidth and low latency requirements. We propose a new video encoding pipeline, called DeepGame, for cloud gaming platforms to reduce the bandwidth requirements with limited to no impact on the player quality of experience. DeepGame learns the player’s contextual interest in the game and the temporal correlation of that interest using a spatio-temporal deep neural network. It then encodes various areas in the video frames with different quality levels proportional to their contextual importance. DeepGame does not change the source code of the video encoder or the video game, and it does not require any additional hardware or software at the client side. We implemented DeepGame in an open-source cloud gaming platform and evaluated its performance using multiple popular games. We also conducted a subjective study with real players to demonstrate the potential gains achieved by DeepGame and its practicality. Our results show that DeepGame can reduce the bandwidth requirements by up to 36% compared to the baseline encoder, while maintaining the same level of perceived quality for players and running in real time.

[[File:DeepGame.png|thumb|center|700px|DeepGame is a process running between the game process and the video encoder process. The input to DeepGame is the raw game frames generated by the game process, and the output is the encoding parameters (QPs), which the video encoder uses to produce encoded frames to be transmitted to the client. During a game session, DeepGame understands the game context and optimizes the encoding parameters in real time. DeepGame consists of three stages: (i) Scene Analysis, (ii) ROI Prediction, and (iii) Encoding Parameters Calculation. These stages operate in parallel while forwarding the outputs from one stage to the next.]]
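The core idea of the Encoding Parameters Calculation stage — spending more bits on regions the player is looking at — can be sketched as a per-block QP map: lower QP (higher quality) inside predicted ROIs, higher QP in the background. This is a minimal illustration only, not DeepGame's actual code; the function name, block size, and QP offsets are all assumptions.

```python
def qp_map(frame_h, frame_w, rois, base_qp=30, roi_offset=-6, bg_offset=4, block=64):
    """Build a per-block QP map for one frame.

    rois: list of (x, y, w, h) boxes in pixel coordinates (e.g. from an
    ROI-prediction stage). Blocks are block x block pixels, e.g. 64x64
    HEVC CTUs. QPs are clamped to the valid H.264/HEVC range [0, 51].
    """
    rows = (frame_h + block - 1) // block
    cols = (frame_w + block - 1) // block
    clamp = lambda q: max(0, min(51, q))
    # Start with the (cheaper) background quality everywhere.
    qps = [[clamp(base_qp + bg_offset)] * cols for _ in range(rows)]
    # Overwrite blocks covered by any ROI with the higher quality (lower QP).
    for (x, y, w, h) in rois:
        for r in range(y // block, min(rows, (y + h - 1) // block + 1)):
            for c in range(x // block, min(cols, (x + w - 1) // block + 1)):
                qps[r][c] = clamp(base_qp + roi_offset)
    return qps

# Example: 1280x720 frame with one ROI around the player's crosshair.
m = qp_map(720, 1280, rois=[(600, 320, 120, 80)])
```

A real encoder would consume such a map through its per-block QP interface (e.g. an external QP file or API), and the offsets would be tuned so the total frame size stays within the bitrate budget.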
== Code and Datasets ==

* [https://github.com/omossad/DeepGame DeepGame: Efficient Video Encoding for Cloud Gaming]
* [https://github.com/mohamedhegazy/CAVE Content-aware Video Encoding for Cloud Gaming]


== Publications ==

* O. Mossad, K. Diab, I. Amer, and M. Hefeeda, [https://www2.cs.sfu.ca/~mhefeeda/Papers/mm21_deepGame.pdf DeepGame: Efficient Video Encoding for Cloud Gaming], In Proc. of ACM Multimedia Conference (MM'21), Chengdu, China, October 2021.

* M. Hegazy, K. Diab, M. Saeedi, B. Ivanovic, I. Amer, Y. Liu, G. Sines, and M. Hefeeda, [https://www2.cs.sfu.ca/~mhefeeda/Papers/mmsys19_cave.pdf Content-aware Video Encoding for Cloud Gaming]. In Proc. of ACM Multimedia Systems Conference (MMSys'19), Amherst, MA, June 2019. '''(received the Best Student Paper Award and the ACM Artifacts Evaluated and Functional badge)'''

Latest revision as of 20:53, 18 December 2023
