Content-Aware Adaptive Streaming

From NMSL
Revision as of 14:37, 25 March 2008

We are designing adaptive streaming algorithms based on the visual content of video streams. The goal is to adaptively transmit the most important frames to clients to yield the best perceived quality. Several ideas are being explored, including real-time and offline processing of video streams, summarization of sports videos, and adaptation of multi-layer scalable video streams.
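The core idea above can be sketched as follows, assuming per-frame significance scores are produced by an upstream content-analysis step. The function name and the scoring scheme are illustrative, not from any existing implementation.

```python
# Minimal sketch: transmit the most significant frames under a frame
# budget imposed by the available bandwidth. `significance` is an
# assumed per-frame score from upstream content analysis.

def select_frames(significance, budget):
    """Return indices of the `budget` most significant frames,
    in temporal order, so the client receives the best subset."""
    if budget >= len(significance):
        return list(range(len(significance)))
    # Rank frames by significance, keep the top `budget`.
    ranked = sorted(range(len(significance)),
                    key=lambda i: significance[i], reverse=True)
    return sorted(ranked[:budget])

# Example: 8 frames, bandwidth allows only 3.
scores = [0.1, 0.9, 0.2, 0.8, 0.3, 0.7, 0.05, 0.4]
print(select_frames(scores, 3))  # -> [1, 3, 5]
```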


People

  • Majid Bagheri (PhD Student)


Issues

  • How should the parameters for detecting shot boundaries be set?
    • Different parameter values produce different peaks in the frame-significance plots; which setting should be used?
  • Which distance metric should be used for computing distortion?
    • Pixel-based metrics such as MSE and SSIM are too sensitive to camera motion; they do not 'understand' the content.
  • How many key frames should each shot get? Based on shot length? Motion? Significance variation?
    • Given a target total number of key frames, how should it be distributed among the shots?
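Two of the questions above can be made concrete with a small sketch, under assumed simplifications: shot boundaries detected by thresholding a frame-to-frame distance signal (the threshold is exactly the parameter whose setting is at issue), and a key-frame budget distributed among shots in proportion to shot length. All names and the distance values are illustrative, not from any existing implementation.

```python
# Sketch of two open questions: (1) shot-boundary detection by
# thresholding, (2) proportional key-frame allocation among shots.

def shot_boundaries(frame_dists, threshold):
    """Mark a boundary wherever the distance between consecutive
    frames exceeds the threshold. frame_dists[i] is the distance
    (e.g., a histogram difference) between frames i and i+1."""
    return [i + 1 for i, d in enumerate(frame_dists) if d > threshold]

def allocate_key_frames(shot_lengths, total_key_frames):
    """Split a key-frame budget among shots proportionally to shot
    length, giving every shot at least one key frame.
    Assumes total_key_frames >= number of shots."""
    total_len = sum(shot_lengths)
    alloc = [max(1, round(total_key_frames * length / total_len))
             for length in shot_lengths]
    # Rounding may miss the budget; adjust, longest shots first.
    order = sorted(range(len(alloc)), key=lambda i: shot_lengths[i],
                   reverse=True)
    i = 0
    while sum(alloc) > total_key_frames:
        j = order[i % len(order)]
        if alloc[j] > 1:
            alloc[j] -= 1
        i += 1
    while sum(alloc) < total_key_frames:
        alloc[order[i % len(order)]] += 1
        i += 1
    return alloc

# Example: 7 frames (6 gaps); two large jumps in the distance signal.
dists = [0.1, 0.05, 0.8, 0.2, 0.9, 0.1]
print(shot_boundaries(dists, 0.5))              # -> [3, 5]
print(allocate_key_frames([300, 100, 100], 10)) # -> [6, 2, 2]
```

A lower threshold yields more (spurious) boundaries and a higher one misses gradual transitions, which is precisely why the parameter setting is an open issue.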


Discussion and Ideas


References and Links