Content-Aware Adaptive Streaming

Revision as of 10:30, 2 April 2008

We are designing adaptive streaming algorithms based on the visual content of video streams. The goal is to adaptively transmit the most important frames to clients so as to yield the best quality. Several ideas are being explored, including real-time and offline processing of video streams, summarization of sports videos, and adaptation of multi-layer scalable video streams.
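
As a rough sketch of the frame-selection idea (the concrete algorithms are still being explored, so the importance scores, frame sizes, and budget below are hypothetical), one could rank frames by importance per transmitted bit and send the subset that fits the bandwidth budget of each window:

<pre>
from dataclasses import dataclass

@dataclass
class Frame:
    index: int         # position in playback order
    size_bits: int     # encoded size of the frame
    importance: float  # content-based importance score (hypothetical, e.g. from motion or summarization analysis)

def select_frames(frames, budget_bits):
    """Greedily keep the frames with the best importance-per-bit ratio
    until the per-window transmission budget is exhausted."""
    chosen, spent = [], 0
    for f in sorted(frames, key=lambda f: f.importance / f.size_bits, reverse=True):
        if spent + f.size_bits <= budget_bits:
            chosen.append(f.index)
            spent += f.size_bits
    return sorted(chosen)  # restore playback order

# Toy example: a 10-frame window and a 48 kbit budget
scores = [0.2, 0.9, 0.1, 0.7, 0.3, 0.8, 0.2, 0.6, 0.4, 0.5]
window = [Frame(i, 8000, s) for i, s in enumerate(scores)]
print(select_frames(window, 48000))  # -> [1, 3, 5, 7, 8, 9]
</pre>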


People

  • Majid Bagheri (PhD Student)



Discussion and Ideas

* The feature space idea can be used to design a new quality metric: frames from the original and the scaled sequence are mapped to a lower-dimensional feature space and the distance between them is measured (a small sketch follows this list).

* Online Summary Generator (submit a video, get the summary)

* Investigate MPEG CLD (Color Layout Descriptor) and MAD features (see the example after this list)

* [[Private:Surveillance| Initial Results on Video Summarization (Login Required)]]

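A minimal sketch of the feature-space quality metric, assuming frames are available as grayscale numpy arrays and using a simple block-average projection as the "lower dimension" (the actual mapping, e.g. a learned or MPEG-7 based feature, is an open design choice):

<pre>
import numpy as np

def to_feature(frame, grid=8):
    """Map a grayscale frame to a low-dimensional vector by averaging it
    over a grid x grid block layout (a stand-in for a real feature mapping)."""
    h, w = frame.shape
    bh, bw = h // grid, w // grid
    blocks = frame[:bh * grid, :bw * grid].reshape(grid, bh, grid, bw)
    return blocks.mean(axis=(1, 3)).ravel()

def feature_distance(original_frame, scaled_frame):
    """Quality score: Euclidean distance between the two frames in feature space
    (smaller means the scaled frame stays closer to the original)."""
    return float(np.linalg.norm(to_feature(original_frame) - to_feature(scaled_frame)))

# Toy usage with synthetic frames of the same resolution
rng = np.random.default_rng(0)
orig = rng.random((240, 320))
scaled = orig + 0.05 * rng.standard_normal((240, 320))  # stand-in for a scaled/re-encoded frame
print(feature_distance(orig, scaled))
</pre>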

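For the CLD/MAD item, a small sketch of what such features could look like. This reads MAD as the mean absolute difference between consecutive frames (if it instead means a motion-activity descriptor, the computation would differ), and it approximates the color layout with an 8x8 grid of average RGB values; the real MPEG-7 CLD works on a YCbCr grid and additionally applies a DCT and quantization:

<pre>
import numpy as np

def mad(prev_frame, curr_frame):
    """Mean absolute difference between two consecutive grayscale frames:
    a cheap indicator of temporal activity / motion."""
    return float(np.mean(np.abs(curr_frame.astype(np.float64) - prev_frame.astype(np.float64))))

def color_layout(frame_rgb, grid=8):
    """Coarse color-layout feature: average RGB color over a grid x grid partition
    (rough stand-in for the DCT-based MPEG-7 Color Layout Descriptor)."""
    h, w, _ = frame_rgb.shape
    bh, bw = h // grid, w // grid
    blocks = frame_rgb[:bh * grid, :bw * grid].reshape(grid, bh, grid, bw, 3)
    return blocks.mean(axis=(1, 3)).ravel()  # 8 * 8 * 3 = 192 values

# Toy usage with synthetic frames
rng = np.random.default_rng(1)
f0 = rng.integers(0, 256, (240, 320), dtype=np.uint8)
f1 = rng.integers(0, 256, (240, 320), dtype=np.uint8)
print("MAD:", mad(f0, f1))
print("layout feature length:", color_layout(rng.integers(0, 256, (240, 320, 3), dtype=np.uint8)).size)
</pre>
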
References and Links