3DV Remote Rendering

Here we describe the components of a 3D video remote rendering system for mobile devices based on cloud computing services. We also discuss the main design choices and challenges that need to be addressed in such a system.

The system will be composed of three main components:

  • Mobile receiver(s)
  • Adaptation proxy
  • View synthesis and rendering cloud service
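
As an illustration of how these components could interact, the sketch below walks one frame through the pipeline. All class and method names (CloudRenderer, AdaptationProxy, MobileReceiver, and their methods) are hypothetical placeholders, not an existing API.

  class CloudRenderer:
      """View synthesis and rendering cloud service: produces texture + depth."""
      def render(self, viewpoint: float) -> dict:
          # Placeholder: decode the stored 3D content and render the view(s)
          # and depth map(s) closest to the requested viewpoint.
          return {"viewpoint": viewpoint, "textures": [], "depth_maps": []}

  class AdaptationProxy:
      """Adapts the compressed views/layers to the receiver's reported conditions."""
      def adapt(self, rendered: dict, bandwidth_kbps: int) -> dict:
          # Placeholder: drop layers or requantize one view when bandwidth is low.
          rendered["target_rate_kbps"] = bandwidth_kbps
          return rendered

  class MobileReceiver:
      """Decodes the delivered views and synthesizes the view to display."""
      def present(self, stream: dict) -> None:
          print("displaying viewpoint", stream["viewpoint"],
                "at", stream["target_rate_kbps"], "kbps")

  # One delivery round, driven by the receiver's reported viewpoint and bandwidth.
  cloud, proxy, receiver = CloudRenderer(), AdaptationProxy(), MobileReceiver()
  receiver.present(proxy.adapt(cloud.render(viewpoint=0.5), bandwidth_kbps=800))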

Transmission is to be carried out via unicast over an unreliable wireless channel. A feedback channel will be necessary between the receiver and the proxy; it would carry information about the current/desired viewpoint, the receiver's buffer status, and the observed network conditions.
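
One way to picture this feedback channel is as a small periodic report from the receiver to the proxy. The field names below are illustrative assumptions, not a defined protocol.

  import json
  from dataclasses import dataclass, asdict

  @dataclass
  class ReceiverFeedback:
      """Periodic report from the mobile receiver to the adaptation proxy."""
      current_viewpoint: float   # viewpoint currently being displayed
      desired_viewpoint: float   # viewpoint the user is navigating toward
      buffer_level_ms: int       # playout buffer occupancy, in milliseconds
      est_bandwidth_kbps: int    # receiver-side throughput estimate
      loss_rate: float           # observed packet loss on the wireless channel

  def encode_feedback(fb: ReceiverFeedback) -> bytes:
      """Serialize the report for transmission on the feedback channel."""
      return json.dumps(asdict(fb)).encode("utf-8")

  # Example: the user is panning right while the playout buffer is draining.
  report = ReceiverFeedback(current_viewpoint=0.40, desired_viewpoint=0.55,
                            buffer_level_ms=600, est_bandwidth_kbps=750,
                            loss_rate=0.02)
  print(encode_feedback(report))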

Because of the limited wireless bandwidth, we need efficient and adaptive compression of transmitted views/layers. In addition, an unequal error protection (UEP) technique will be required to overcome the unreliable nature of the wireless channel.
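
A minimal sketch of the UEP idea, assuming a simple priority-weighted split of a fixed FEC budget; the component names, weights, and sizes are illustrative, not measured values.

  def allocate_fec(components: dict, overhead: float) -> dict:
      """Split a fixed FEC budget across stream components by priority weight.

      components maps a name to (size_bytes, priority_weight). Data whose loss
      hurts every synthesized view (e.g. depth maps, base texture layer) gets
      a higher weight and therefore more redundancy bytes.
      """
      total_weight = sum(weight for _, weight in components.values())
      budget = overhead * sum(size for size, _ in components.values())
      return {name: round(budget * weight / total_weight)
              for name, (size, weight) in components.items()}

  # Illustrative per-frame stream: protect depth and the base layer more
  # heavily than the texture enhancement layer.
  stream = {
      "texture_base":        (40_000, 3),
      "texture_enhancement": (60_000, 1),
      "depth_map":           (20_000, 4),
  }
  print(allocate_fec(stream, overhead=0.15))  # FEC bytes per component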

  • What is the format of the stored video files?
  • How many views (and possibly depth maps) need to be sent to the receiver?
    • two views
    • two views + two depth maps
    • one view + depth map
  • What compression format should be used to compress the texture images of the views?
  • What compression format is efficient for compressing the depth maps without affecting the quality of synthesized views?
    • Will MVC be suitable for depth maps?
  • How much will reducing the quality of one of the views (to save bandwidth) affect the synthesis process at the receiver side? (A simplified view synthesis sketch follows this list.)
    • Will the effect be significant given that the receiver's display size is small?
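
To make the synthesis step in the last question concrete, here is a highly simplified, depth-image-based-rendering style warp of a single view + depth map into a nearby virtual viewpoint. The pixel-shift model, the baseline parameter, and the toy image are assumptions for illustration only, not the actual rendering pipeline.

  import numpy as np

  def synthesize_view(texture: np.ndarray, depth: np.ndarray,
                      baseline_px: float) -> np.ndarray:
      """Forward-warp one texture into a nearby virtual view using its depth map.

      Each pixel shifts horizontally by a disparity proportional to inverse
      depth (closer pixels move more); a z-buffer resolves overlaps. Holes
      left by disocclusions stay zero; a real renderer would inpaint them.
      """
      h, w = depth.shape
      virtual = np.zeros_like(texture)
      zbuf = np.full((h, w), np.inf)
      disparity = np.rint(baseline_px / np.maximum(depth, 1e-3)).astype(int)
      for y in range(h):
          for x in range(w):
              xv = x + disparity[y, x]
              if 0 <= xv < w and depth[y, x] < zbuf[y, xv]:
                  zbuf[y, xv] = depth[y, x]
                  virtual[y, xv] = texture[y, x]
      return virtual

  # Toy 2x4 example: a near object (depth 1.0) on the left, background (4.0)
  # on the right; warping leaves holes at the pixels the near object vacated.
  tex = np.array([[10, 20, 30, 40],
                  [50, 60, 70, 80]], dtype=np.uint8)
  dep = np.array([[1.0, 1.0, 4.0, 4.0],
                  [1.0, 1.0, 4.0, 4.0]])
  print(synthesize_view(tex, dep, baseline_px=2.0))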