= Spring 2013 (RA) =
=== [Feb 7] 3D FVV over DASH ===
* Worked with Ahmed on surveying open source DASH tools and 3D video player tools for implementing the feature. Details are updated in the wiki page [http://nsl.cs.sfu.ca/wiki/index.php/Private:FTV here].
= Fall 2012 (RA) =
=== [Nov20] Cloud rewrite project ===
* Worked on addressing the MMSys review comments for the project and submitted the updated report to ICDCS'13. Report also available in svn [https://cs-nsl-svn.cs.surrey.sfu.ca/nsl-projects/browser/Cloud/avcRewriter/documents/techReps/icdcs13/ here].
=== [Oct29] Adaptable Video Caching ===
* I obtained the MediSyn source code from one of the authors and investigated it. However, it lacks some of the features, such as scalable video descriptions, that we need for creating a DASH benchmark. I also came across some papers describing important features, such as partial downloads, of popular VoD applications like YouTube. Consequently, I am preparing a workload incorporating these features and am currently investigating byte-range queries in HTTP streaming; a minimal sketch of such a request is shown below.
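Below is a minimal sketch of the kind of byte-range request being investigated. The segment URL and the byte range are illustrative assumptions; any HTTP/1.1 server that honours the Range header should answer with 206 Partial Content and only the requested slice of the segment.

 import urllib.request
 
 # Hypothetical DASH segment URL; the Range header asks for the first 64 KB only.
 SEGMENT_URL = "http://example.com/dash/bbb_seg_001.m4s"
 req = urllib.request.Request(SEGMENT_URL, headers={"Range": "bytes=0-65535"})
 with urllib.request.urlopen(req) as resp:
     # 206 Partial Content means the server honoured the byte-range query;
     # 200 means it ignored the Range header and returned the whole segment.
     print(resp.status, resp.headers.get("Content-Range"))
     print(len(resp.read()), "bytes received")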
=== [Oct12] Adaptable Video Caching ===
* I investigated caching policies for the general problem of caching adaptable, segmented media content at edge proxies. The term adaptable refers to content that is adapted to the user's requirements at the edge location; the adaptation can be a codec-specific operation such as H.264 SVC-to-AVC rewriting, more generic transcoding, or even video scaling/retargeting. Caching segments rather than whole videos makes more sense because entire video files are huge and often only a part of a video is viewed. There has been a lot of work on segmented video caching, but it becomes more relevant as DASH turns into the de-facto streaming protocol. Our caching scenario differs from previous caching problems because it needs to consider four characteristics of the system: miss penalty, segment size, segment class (SVC or AVC), and temporal correlation between segments. The performance metrics are byte hit rate, cost, and user-perceived latency. As LRU and LFU are the most popular caching policies, we should start by investigating how these two policies perform in our scenario, in particular how the aforementioned four parameters affect their performance. We can then design our own algorithm that improves performance by taking all four requirements into account.
* There are very few works on size-, class-, and temporal-correlation-aware caching. The cost in our caching scenario also differs from the usual caching problem, where higher latency is typically the penalty metric; in our case the penalty includes bandwidth and processing costs in addition to latency. I tried to analyze the problem mathematically, but with the large number of variables, reaching a closed-form expression for the caching objective proved difficult. One experimental approach is to divide the cache into two regions, one for SVC and one for AVC, and operate LRU/LFU in each region separately (a minimal simulation sketch of this two-region scheme appears after this list). I am currently looking for a good synthetic media-access trace generator to get some idea of this scheme's performance.
* I came across several papers referring to MediSyn from HP Labs, but I have not been able to find a download link for it. I am avoiding general web-access generators like ProWGen. The best option would be to collect our own traces by tracking YouTube usage on campus, but I do not know whether this is feasible. I am also trying to extend my simulator to cover this, but validation against an external system/trace would be better.
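The following is the minimal simulation sketch of the two-region idea referred to above. The class names, the segment tuple format, the toy trace, and the 50/50 capacity split are assumptions for illustration only; a real study would drive it with a proper trace and also account for miss penalty and temporal correlation.

 from collections import OrderedDict
 
 class LRURegion:
     """Byte-capacity LRU region mapping segment_id -> size in bytes."""
     def __init__(self, capacity):
         self.capacity = capacity
         self.used = 0
         self.items = OrderedDict()
 
     def access(self, seg_id, size):
         """Return True on a hit; on a miss, admit the segment and return False."""
         if seg_id in self.items:
             self.items.move_to_end(seg_id)                # refresh recency
             return True
         while self.used + size > self.capacity and self.items:
             _, evicted = self.items.popitem(last=False)   # evict LRU segment
             self.used -= evicted
         if size <= self.capacity:
             self.items[seg_id] = size
             self.used += size
         return False
 
 class TwoRegionCache:
     """Split the edge cache into separate LRU regions for SVC and AVC segments."""
     def __init__(self, total_bytes, svc_fraction=0.5):
         self.regions = {"SVC": LRURegion(int(total_bytes * svc_fraction)),
                         "AVC": LRURegion(int(total_bytes * (1 - svc_fraction)))}
 
     def access(self, seg_id, seg_class, size):
         return self.regions[seg_class].access(seg_id, size)
 
 # Toy trace of (segment_id, class, size in bytes) requests.
 trace = [("v1_s1", "SVC", 400000), ("v1_s1", "SVC", 400000),
          ("v2_s1", "AVC", 300000), ("v1_s1", "SVC", 400000)]
 cache = TwoRegionCache(total_bytes=1000000)
 hit_bytes = sum(s for sid, cls, s in trace if cache.access(sid, cls, s))
 print("byte hit rate:", hit_bytes / sum(s for _, _, s in trace))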
=== [Sep27] Proposed extensions to cloud-rewrite project ===
* Model the relationship between the GoP video parameters and the VM capability information to accurately estimate processing times. Currently this is done in an ad-hoc manner; we could perhaps apply a machine-learning / neural-network framework here, with the training phase done offline (a minimal regression sketch appears after this list). We note that this is different from computing the GoP-to-VM assignment, although a learning framework could be applied to that assignment as well.
* Derive the full mathematical model for the caching cost. The model currently uses simulation to find the cache-replacement probability values; we should try to derive them analytically. One challenge is that the sizes of media chunks differ, which is much harder to model than assuming they are identical; need to look into the cache-algorithm literature (a hedged sketch of an expected-cost expression also follows this list).
* We did not conduct any experiments to measure the latency performance of our solution. This is one area we can look at, but it will require a more elaborate experimental setup. It would be better to run these experiments with real VoD access traces rather than simulation.
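The following is a minimal sketch of the processing-time estimation idea above, fitting an ordinary least-squares model on hypothetical GoP features (frame count, GoP size in kilobits, VM core count). The feature set, the training numbers, and the linear form are assumptions for illustration; the real relationship may well be non-linear.

 import numpy as np
 
 # Hypothetical offline training data:
 # columns = [frames_in_gop, gop_size_kbits, vm_cores], target = rewrite time (s)
 X = np.array([[16, 120, 1], [16, 250, 1], [32, 480, 2], [32, 900, 4]], dtype=float)
 y = np.array([0.8, 1.5, 1.6, 1.9])
 
 # Fit t ~ w . x + b with ordinary least squares.
 A = np.hstack([X, np.ones((X.shape[0], 1))])
 w, *_ = np.linalg.lstsq(A, y, rcond=None)
 
 def estimate_time(frames, kbits, cores):
     return float(np.array([frames, kbits, cores, 1.0]) @ w)
 
 print("predicted rewrite time:", estimate_time(16, 200, 2), "s")

For the caching-cost model, one way to start (my assumption, not the report's actual formulation) is to write the long-run expected cost per request as a mix of hit and miss costs:

 \mathbb{E}[C] = \sum_i p_i \left[ h_i c_i^{hit} + (1 - h_i)\left( c_i^{bw} + c_i^{proc} + c_i^{lat} \right) \right]

where p_i is the request probability of segment i, h_i its (policy- and size-dependent) hit probability, and the miss cost combines the bandwidth, processing (rewriting), and latency penalties. The hard part, as noted above, is deriving h_i analytically for variable-size segments rather than estimating it by simulation.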
= Summer 2012 (RA) =
=== Research: Cloud Rewriter report [https://cs-nsl-svn.cs.surrey.sfu.ca/cssvn/nsl-members/sharangi/reports/cloudrewriter/cloud_rewriter.pdf pdf] ===
=== Course: CMPT894 (Directed Reading) report [https://cs-nsl-svn.cs.surrey.sfu.ca/cssvn/nsl-members/sharangi/reports/cmpt894/report/som_directed_reading_report.pdf pdf] ===
=== Jun 15 ===
* Implemented an m3u8-based dynamic HTTP streaming scheme utilising the rewriting functionality. Conceptually it works, and I can play video that has been processed through the rewriting workflow. Below is an example playlist file with two versions of a stream:
 ---bbb.m3u8---
 #EXTM3U
 #EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=149300
 'http://142.58.185.226:8080/m3u8/bbb_l11.m3u8'
 #EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=137700
 'http://142.58.185.226:8080/m3u8/bbb_l10.m3u8'
 
 ---bbb_l11.m3u8---
 #EXTM3U
 #EXT-X-TARGETDURATION:5
 #EXT-X-MEDIA-SEQUENCE:1
 #EXTINF:5
 'http://142.58.185.226:8080/AvcRewriter.php?file=bbb_l11_part_01.ts'
 #EXTINF:5
 'http://142.58.185.226:8080/AvcRewriter.php?file=bbb_l11_part_02.ts'
This can be played in VLC by selecting Media -> Open Network Stream -> http://142.58.185.226:8080/bbb.m3u8.
* Configuring C++ applications as the gateway did not work with any of the three servers (Apache, nginx, lighttpd) I tried, so I implemented a basic PHP application with Apache that calls the rewriter binaries on the server (a conceptual sketch of the per-chunk gateway logic is shown below). This solution works, but the call for each video chunk file takes some time to process, so the playout is not smooth.
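The gateway itself is the AvcRewriter.php script referenced in the playlist above; the following is only a conceptual sketch of the same per-chunk logic, written in Python for illustration. The rewriter binary name, its flags, and the paths are assumptions, not the actual implementation.

 import subprocess
 from http.server import BaseHTTPRequestHandler, HTTPServer
 from urllib.parse import urlparse, parse_qs
 
 class RewriteGateway(BaseHTTPRequestHandler):
     def do_GET(self):
         # e.g. GET /AvcRewriter?file=bbb_l11_part_01.ts
         chunk = parse_qs(urlparse(self.path).query).get("file", [""])[0]
         out_path = "/tmp/" + chunk
         # Hypothetical binary: extract the substream, rewrite to AVC, mux to TS.
         subprocess.run(["./avcRewriter", "--in", "media/" + chunk + ".svc",
                         "--out", out_path], check=True)
         with open(out_path, "rb") as f:
             body = f.read()
         self.send_response(200)
         self.send_header("Content-Type", "video/MP2T")
         self.send_header("Content-Length", str(len(body)))
         self.end_headers()
         self.wfile.write(body)
 
 if __name__ == "__main__":
     HTTPServer(("", 8080), RewriteGateway).serve_forever()

The per-request binary invocation is exactly what makes each chunk slow here; keeping a long-running rewriter process, or integrating an HTTP server directly into the rewriter application, would avoid the repeated start-up cost.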
* There are some components of the delay that we did not consider before:
** The SVC substream extraction process has to be added to the rewriter module to make the streaming adaptive. This adds some delay, as the extraction process needs to go through the entire file and cannot be done on the fly in conjunction with the rewriting.
** The second component is the muxing software (ffmpeg), which currently takes a lot of time, probably because it initializes many resources that are not required for our particular case. One solution is to integrate the muxing component into the rewriter module as well and build a combined substreamExtractor-Rewriter-Muxer binary.
** The delay due to PHP is probably not significant but I cannot isolate it with the current setup.
* Another challenge is the client software. Only some versions of VLC play the stream, and I am not able to observe whether they employ any bandwidth adaptation strategy. Another approach would be to add the extraction logic to an application server and write our own video player with adaptation logic.
=== Jun 10 ===
* I am trying to implement the connection between the web server and the C++ application, which is turning out to be more difficult than expected. I tried two low-footprint servers, nginx and lighttpd, but the mechanism does not work due to their asynchronous request processing. Currently trying with Apache.
* As per the MPEG-DASH standard, the content has to be created before the MPD file is composed, which rules out using MPEG-DASH with on-demand avcRewrite. One option is to use .m3u8 playlists, which have a simpler format with explicit URLs for the file chunks; in that case we have to mux the video into MPEG-2 TS.
* I made some modifications and obtained the following table for a larger number of layers. As expected, the total storage requirement for AVC simulcast becomes much higher than for SVC. One interesting observation is that for the lowest two layers (the first two rows) the SVC files are actually smaller than their AVC versions.
     LayerID  Resolution  Framerate  Bitrate  SVC(KB)  AVC(KB)
     =========================================================
           2     352x288     12.0000    28.20      446      483
           3     352x288     24.0000    31.10      487      559
           4     352x288      3.0000    60.50      938      726
           5     352x288      6.0000    65.10     1027      811
           6     352x288     12.0000    69.50     1116      903
           7     352x288     24.0000    76.00     1204     1004
           8     352x288      3.0000   115.80     1954     1269
           9     352x288      6.0000   126.00     2149     1428
          10     352x288     12.0000   134.40     2330     1580
          11     352x288     24.0000   145.80     2499     1728
     =========================================================
     Total AVC (simulcast):                             10491  (4.2x)
=== May 28 ===
* Created 1 min of AVC video chunks and encoded them into MP4 files.
* Storage calculations (bytes) for a 1-min CIF-sized BigBuckBunny video split into 5 s chunks:
** Size of the raw SVC files: 2565058
** Size of raw AVC files: 1776821(L3) + 902641(L2) + 438308(L1) =  3117770 (~21% extra compared to SVC)
** Audio and container overhead (playable L3 simulcast file minus raw L3 AVC video, assumed to be the same for SVC): 4366459 - 1776821 = 2589638
** Est. size of playable SVC content: 2565058 + 2589638 = 5154696
** Size of playable Simulcast content: 4366459 + 3580554 + 2535836 = 10482849 (~ 100% extra compared to SVC, but this will probably decrease with larger picture size or video length when the video data dominates the size).
** Audio is encoded in AAC stereo using NeroAAC for Linux and is the same for both the SVC and simulcast versions.
** Video encoded in 3 layer rewritable SVC with layer QPs: 36, 32, 28
** AVC files for simulcast are created from the SVC file by first extracting a substream and then rewriting it.
** ffmpeg and MP4Box are used for muxing the AVC and AAC files into an MP4 file with the following commands (note: this is currently the only command sequence that works; muxing the AVC file with the MP4Box tools alone does not work):
    $>ffmpeg -i ../avc/BBB_CIF_24fps_1min_GOP01.h264 -s 352x288 -r 24000/1001 -t 00:00:05 bbb_muxed_01.mp4
    $>MP4Box -add ../aac/bbb_01.mp4 bbb_muxed_01.mp4
  
* Currently Investigating:
** The JSVM layer extractor does not work with the rewriter. For example, I have a 3-layer rewritable SVC file: when I rewrite all 3 layers, the output is playable in VLC, but when I extract 2 layers and then rewrite, the output is no longer usable. VLC and ffmpeg do not report any difference compared to the 3-layer file, yet playback has severe audio synchronization and drift problems.
** Interface between the web server and the media application. It may be a better idea to integrate an HTTP server into the C++ media adaptation/rewriting application than to implement glue code between a standard HTTP server and the media application through PHP/CGI; the latter is slower.
* Waiting for: Kaushik's update on the DASH setup. Earlier sent him a set of mp4 files for creating DASH content and verifying the setup with it.
* Challenges: The JSVM encoder is very slow; it took me 3 days to encode 1 min of CIF video.
=== May 15 ===
* Currently working on creating chunked content from YUV files and composing the DASH MPD file.
* Verified that video chunks created by splitting the YUV file and then AVC-rewriting them can be played back in the VLC media player. Also verified that the rewrite works with layer dropping.
* Storage implications of splitting the YUV file instead of splitting the compressed file for parallel processing:
 Video: bridge_far.yuv, CIF (352x288), 30 fps, 900 frames (30 s)
 Encoding: 3-layer SVC then rewritten to AVC, GOP size 16, YUV split into 2 s chunks
 Size of the single compressed 30 s video = 723191 B = 706 KB (note: this contains only one IDR frame)
 Size of the 15 videos of 2 s each = 829674 B = 810 KB (~15% extra)
  
  
  
= Spring 2012 (TA + RA) =
Course: CMPT886 (Multicore Systems)
 
=== January ===
Jan 27:
* Conducted a survey of existing results on SVC-to-AVC rewriting. While it has been reported that the rewriting process is about 80% faster than the cascaded transcoding process on a CPU, there are few results on the feasibility of a streaming implementation of the technique. Sablatschan et al. have reported that real-time rewriting is not feasible for resolutions above 480x320 on a quad-core 3 GHz processor. This leads to two directions for exploration: (1) explore the JSVM code to find better ways of doing the rewriting on parallel hardware, and (2) use a GoP-distribution-based approach on the cloud to explore the possibility of improving parallel performance. (1) has been extensively studied in the context of parallel encoders and decoders; (2) has scalability issues because it needs to buffer the GoPs before distributing them.
** One idea is to distribute pictures instead of GoPs, which would increase the achievable degree of parallelism at the cost of increased picture-management overhead.
** Another idea is to use GoP-based parallel rewriting at the server, where the entire video is already available, so there is no need to buffer GoPs before distributing them (a minimal dispatch sketch appears after this list). Need to check whether this has already been done.
* Looked at some possibilities for implementing scalable video streaming. Not many streaming server applications support SVC; will look into DASH-related implementations.
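A minimal sketch of the GoP-distribution idea above, using a local process pool to rewrite GoPs in parallel. The rewrite_gop function, the binary it calls, and the file layout are assumptions for illustration; in the cloud setting each worker would be a VM rather than a local process.

 import subprocess
 from concurrent.futures import ProcessPoolExecutor
 
 def rewrite_gop(gop_path):
     """Rewrite one GoP; './avcRewriter' and its flags are hypothetical placeholders."""
     out_path = gop_path.replace(".svc", ".264")
     subprocess.run(["./avcRewriter", "--in", gop_path, "--out", out_path], check=True)
     return out_path
 
 def rewrite_video(gop_paths, workers=4):
     # With the whole video already on the server, all GoPs are known up front,
     # so no buffering is needed before dispatching them.
     with ProcessPoolExecutor(max_workers=workers) as pool:
         return list(pool.map(rewrite_gop, gop_paths))   # results keep display order
 
 if __name__ == "__main__":
     gops = ["gops/bbb_gop_%03d.svc" % i for i in range(8)]   # hypothetical chunks
     print(rewrite_video(gops))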
Jan 10:
* Started exploring the problem of video transcoding in the cloud
* Started exploring the feasibility of implementing a cloud testbed using OpenStack.
=== Other ===
* Courses: CMPT 886 (Multicore Systems)
* TA : CMPT 371 , CMPT 379
Previous work
= Spring 2011 (RA) =
=== April ===
* April 8
** Working on the simulator for hybrid uni/multicast experiments. Taking longer than expected, may miss the MM'11 deadline
=== March ===
* Mar 28:
** Updated tech-report on hybrid multicast-unicast [https://cs-nsl-svn.cs.surrey.sfu.ca/nsl-projects/browser/MobileVideo/WiMAX/documents/techReps/hybrid/hybrid.pdf link]
* Mar 7:
** Tech-report on hybrid multicast-unicast [https://cs-nsl-svn.cs.surrey.sfu.ca/nsl-projects/browser/MobileVideo/WiMAX/documents/techReps/hybrid/hybrid.pdf here]
** Working on the formulation of the mobile patching scheme. Derived expressions for the bandwidth requirement and energy consumption of the all-unicast and adaptive-patching schemes; still need to verify correctness analytically (a hedged sketch of one standard bandwidth formulation appears after this list).
** Working on numerical examples of the mobile patching scheme.
** Documented GENI-WiMAX project details.
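A hedged sketch of how the bandwidth side of this comparison is often set up in the patching literature; the Poisson-arrival assumption and the notation are mine, not necessarily the formulation in the tech report above. With request rate \lambda, video length L, and streaming rate r, all-unicast needs on average

 B_{unicast} = \lambda L r

(each request holds a full-length stream, by Little's law). With a patching threshold T, a full stream is started roughly every T seconds, and a client arriving t seconds after it receives a patch of average length T/2, so

 B_{patch}(T) \approx r \left( \frac{L}{T} + \frac{\lambda T}{2} \right),

which is minimized at T^{*} = \sqrt{2L/\lambda}, giving B_{patch} \approx r \sqrt{2 \lambda L}. The energy comparison would additionally have to account for how long each client's radio stays on under the two schemes, which is not sketched here.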
=== Feb ===
* Feb 28:
** Reviewed the existing literature and identified the main challenges in adapting video streaming schemes to wireless networks. Identified four major schemes that seem promising: Skyscraper Broadcasting, Hierarchical Stream Merging, Harmonic Patching, and Piggybacking.
** Explored Skyscraper Broadcasting scheme and found it to be unsuitable.
** Documentation of the survey.
  
* Feb 11: Meeting with Saleh
** Saleh to survey energy efficiency techniques in ad-hoc/sensor network domain and possible adaptation
** Som to survey Internet VoD results and look for possible adaptation in the wireless domain
** Meet on Feb 15 to discuss progress and decide on the first draft of one or two problems
** (If time) Saleh to look at WiMAX model in OPNET to see if it can be used for experiments
* Investigating wireless multimedia streaming in multicast/unicast mixed-mode networks
=== Jan ===
* Investigating cloud computing for video trans-coding, video mining and mobile video applications
* Survey report on WiMAX/LTE testbed design options [https://cs-nsl-svn.cs.surrey.sfu.ca/nsl-projects/browser/MobileVideo/WiMAX/documents/techReps/testbed/wimax_testbed_report.pdf here]
* Report on current status of DVB-H testbed and design of EPG feature [https://cs-nsl-svn.cs.surrey.sfu.ca/nsl-projects/browser/MobileVideo/DVB-H/documents/techReps/design/mobileTV.pdf here]
  
 
= Fall 2010 (RA) =
* '''Poster/Demo:''' Efficient Multiplexing for Mobile Video Streaming (CONNECT'10)