Private:3DV Remote Rendering

Here we describe the components of a 3D video remote rendering system for mobile devices that is based on cloud computing services. We also discuss the main design choices and challenges that need to be addressed in such a system.

The system will be composed of three main components:

  • Mobile receiver(s)
  • Adaptation proxy
  • View synthesis and rendering cloud service
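As a rough illustration, the sketch below models these three components as minimal Python classes. All class and method names (Viewpoint, RenderingCloudService.synthesize, AdaptationProxy.serve, MobileReceiver.request_view) are hypothetical placeholders for this note, not part of any existing implementation.

 from dataclasses import dataclass


 @dataclass
 class Viewpoint:
     """Virtual camera pose requested by the viewer (hypothetical fields)."""
     x: float
     y: float
     z: float
     yaw: float
     pitch: float


 class RenderingCloudService:
     """Cloud side: synthesizes the requested view from stored texture and depth data."""

     def synthesize(self, viewpoint: Viewpoint) -> bytes:
         raise NotImplementedError  # placeholder for DIBR-style view synthesis


 class AdaptationProxy:
     """Middle box: adapts compression and protection to the receiver's reported conditions."""

     def __init__(self, cloud: RenderingCloudService) -> None:
         self.cloud = cloud

     def serve(self, viewpoint: Viewpoint, bandwidth_kbps: float) -> bytes:
         frame = self.cloud.synthesize(viewpoint)
         # Placeholder: re-encode / protect `frame` according to bandwidth_kbps.
         return frame


 class MobileReceiver:
     """Client side: requests views, decodes and displays them, and sends feedback upstream."""

     def __init__(self, proxy: AdaptationProxy) -> None:
         self.proxy = proxy

     def request_view(self, viewpoint: Viewpoint, bandwidth_kbps: float) -> bytes:
         return self.proxy.serve(viewpoint, bandwidth_kbps)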

Transmission will be carried out via unicast over an unreliable wireless channel. A feedback channel between the receiver and the proxy will be necessary; it will carry information about the current/desired viewpoint, buffer status, and network conditions.
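For concreteness, one possible shape of a feedback report is sketched below; the field names and units are assumptions for illustration, not a defined protocol.

 from dataclasses import dataclass
 from typing import Tuple


 @dataclass
 class ReceiverFeedback:
     """One feedback report from the mobile receiver to the adaptation proxy
     (hypothetical field names and units)."""
     desired_viewpoint: Tuple[float, float, float, float, float]  # x, y, z, yaw, pitch
     buffer_occupancy_ms: int          # playout buffer level in milliseconds
     estimated_bandwidth_kbps: float   # receiver-side throughput estimate
     packet_loss_rate: float           # observed loss on the wireless channel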

Because wireless bandwidth is limited, we need efficient and adaptive compression of the transmitted views/layers. In addition, an unequal error protection (UEP) scheme will be required to cope with the unreliable nature of the wireless channel.
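As a toy illustration of UEP, the sketch below splits a fixed forward-error-correction (FEC) parity budget across layers in proportion to their importance. The layer names, weights, and the proportional allocation rule are assumptions made only for this example.

 def allocate_fec(total_parity_packets, layers):
     """Split a fixed parity budget across layers in proportion to importance.

     `layers` is a list of (name, importance_weight) pairs; more important
     layers (e.g. base texture, depth) receive proportionally more parity packets.
     """
     total_weight = sum(weight for _, weight in layers)
     allocation = {}
     remaining = total_parity_packets
     for i, (name, weight) in enumerate(layers):
         if i == len(layers) - 1:
             parity = remaining          # give whatever is left to the last layer
         else:
             parity = round(total_parity_packets * weight / total_weight)
             remaining -= parity
         allocation[name] = parity
     return allocation

 # Example: protect the base view and the depth map more heavily than enhancement data.
 print(allocate_fec(20, [("base_texture", 0.5), ("depth_map", 0.3), ("enhancement", 0.2)]))
 # -> {'base_texture': 10, 'depth_map': 6, 'enhancement': 4}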

Several design questions remain open:

  • What is the format of the stored video files?
  • How many views (and possibly depth maps) need to be sent to the receiver?
  • What compression format should be used for the texture images of the views?
  • What compression format is efficient for compressing the depth maps without degrading the quality of the synthesized views?
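These questions could eventually be captured as configuration parameters of the proxy and cloud service. The sketch below only enumerates the knobs; the default values shown are placeholders/candidates, not answers to the questions above.

 from dataclasses import dataclass


 @dataclass
 class StreamConfig:
     """Knobs corresponding to the open questions above; the default values
     shown are placeholders/candidates, not decisions."""
     num_views: int = 2                  # how many views are sent to the receiver
     send_depth_maps: bool = True        # whether depth maps accompany the texture views
     texture_codec: str = "H.264/AVC"    # candidate codec for the texture images
     depth_codec: str = "H.264/AVC"      # candidate codec for the depth maps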