Private:progress-spenard

From NMSL
 
Latest revision as of 12:15, 14 March 2011

Spring 2011 (RA)

  • Courses:
    • CMPT 771: Internet Architectures and Protocols

March 14th, 2011

Had limited time this week due to the router assignment in CMPT 771.

  • Reviewed what OProfile is and how it works. Still a bit unsure about when it properly measures clock cycles. Need to dig deeper into the settings, such as the architecture. Having the proper metric is crucial.
  • Need to better understand Cheng's formula for mapping clock cycle counts to power consumption, and how the OProfile output ties into it (see the sketch after this list)
  • Abandoned the idea of V+D, as I cannot get one working with multiple views. Now trying to generate an appropriate multiview one.
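To keep the assumption explicit: the sketch below is NOT Cheng's actual formula (which I still need to confirm), just a first-order model that charges a fixed amount of energy per clock cycle.

    # Hypothetical sketch, not Cheng's actual formula: a first-order model
    # that maps a clock-cycle count to average power, assuming a fixed
    # energy cost per cycle.
    def average_power_watts(cycles, joules_per_cycle, duration_s):
        """Average power = total energy / elapsed time."""
        return cycles * joules_per_cycle / duration_s

    # Example: 2e9 cycles at an assumed 1 nJ/cycle over 10 s -> 0.2 W
    print(average_power_watts(2e9, 1e-9, 10.0))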


March 4th, 2011

Current report: https://cs-nsl-svn.cs.surrey.sfu.ca/cssvn/nsl-members/spenard/reports/March2011/problemStatement/problemStatement.pdf

Updates:

  • Problem Statement

February 28, 2011

  • JMVC Reference software now compiles
  • Can't encode many views; investigating how the configuration files work
  • Changes to the code can be found here: https://cs-nsl-svn.cs.surrey.sfu.ca/cssvn/nsl-members/spenard/reports/February2011/notesJMVC.txt

  • Microsoft Research videos installed: they are a series of bitmaps, including depths
  • Can generate MGP out of those sequences; investigating for YUV
  • Instructions on how to generate MPP: http://nsl.cs.sfu.ca/wiki/index.php/Private:Technical

  • Have to find a way to MUX a colour map and its depth together for my experiment (see the sketch below)
  • Have to find a way to properly represent many views together
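A minimal sketch of what I have in mind for the MUX step, assuming two planar YUV 4:2:0 inputs with identical dimensions and a simple side-by-side frame packing (the file names and dimensions in the usage line are hypothetical):

    # Hypothetical sketch: pack a colour view and its depth map side by
    # side into one YUV 4:2:0 stream. Assumes planar 4:2:0 inputs with
    # identical dimensions; the depth map is stored as YUV like the view.
    import sys

    def read_frame(f, w, h):
        size = w * h * 3 // 2          # Y plane + quarter-size U and V
        data = f.read(size)
        return data if len(data) == size else None

    def pack_side_by_side(a, b, w, h):
        out = bytearray()
        for row in range(h):           # Y plane: full-resolution rows
            out += a[row * w:(row + 1) * w]
            out += b[row * w:(row + 1) * w]
        cw, ch = w // 2, h // 2        # chroma planes: half resolution
        for plane in range(2):         # 0 = U, 1 = V
            off = w * h + plane * cw * ch
            for row in range(ch):
                out += a[off + row * cw:off + (row + 1) * cw]
                out += b[off + row * cw:off + (row + 1) * cw]
        return bytes(out)

    def mux(colour_path, depth_path, out_path, w, h):
        with open(colour_path, 'rb') as fc, open(depth_path, 'rb') as fd, \
             open(out_path, 'wb') as fo:
            while True:
                a, b = read_frame(fc, w, h), read_frame(fd, w, h)
                if a is None or b is None:
                    break
                fo.write(pack_side_by_side(a, b, w, h))

    if __name__ == '__main__':
        # usage (hypothetical): python mux_vd.py view.yuv depth.yuv out.yuv 640 480
        c, d, o, w, h = sys.argv[1:6]
        mux(c, d, o, int(w), int(h))

The output is a 2w x h YUV 4:2:0 sequence that any raw YUV player should accept, which keeps the view and its depth frame-aligned.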


February 23, 2011

Worked on CMPT 771 problem set #2 during the break.

At this moment, I have the JM software installed properly. I was also able to download the JMVC software, which is used to encode multiview video (and, I think, to a certain extent video plus depth, though some use JSVM for scalable video). The problem is that it won't build properly. I have to find out which libraries are missing on my machine, and I will make a list of what needs to be installed so we have documentation.

Was able to get 3D videos from the MOBILE3DTV project (got the Horse sequence from KUK Produktion), and am now able to play the YUV files of the view and the depth on my machine. This is good, as it provides a comparison point against the original version. All I need now is a proper encoder/decoder, and then I can start experimenting.

I am trying to understand OProfile, an open source system profiler for Linux. http://oprofile.sourceforge.net/news/

The sample report page has an example that is exactly what we need: it basically shows how many clock cycles each process consumed while the daemon was monitoring the system. Fixed the issues on my local machine; it all works well now! A rough samples-to-cycles conversion is sketched below.
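My current understanding of the sample-to-cycle conversion, which I still have to verify against the OProfile documentation: with event-based sampling, each recorded sample represents roughly one sampling interval's worth of the configured event.

    # Rough sketch based on my current understanding of OProfile's
    # event-based sampling (to be verified): with a cycle event such as
    # CPU_CLK_UNHALTED and a sampling count of N, each recorded sample
    # represents roughly N elapsed cycles for that process.
    def estimate_cycles(samples, sampling_count):
        return samples * sampling_count

    # Example: 12,500 samples at a count of 100,000 -> ~1.25e9 cycles
    print(estimate_cycles(12_500, 100_000))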


February 11, 2011

I have the JM reference software, and I think I have found how to get the JMVC (multiview) reference software. I need to see the differences between the two, and check whether there is a video plus depth version of such reference software.

February 10, 2011

My current report is available here: https://cs-nsl-svn.cs.surrey.sfu.ca/cssvn/nsl-members/spenard/reports/February2011/mobileClient/mobileClient.pdf

ToDo

Establish a relationship between the number of views being rendered by a video plus depth 3D video decoder and the energy consumption associated with that process. This could be established on an Android phone, using a process similar to the one I used for my CMPT 820 project. However, such a decoder might not be easily available.

Therefore, I will start doing this on a Linux machine. I will encode various video plus depth videos and render them locally using a video plus depth video decoder. Using some tools, I will be able to measure the number of clock cycles required for each of them. While this won't give me power consumption directly, it will give me a measure of how much processing power is required as the number of views varies. A serious advantage of this method is that it makes it possible to see whether a mobile device can deliver the required number of clock cycles. For example, if a video requires 1000 clock cycles over 10 seconds for 10 views, but the mobile device can only deliver 800, there is no point trying to play that many views on it. A sketch of this feasibility check follows.
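The check itself is simple; a sketch using the illustrative numbers from the paragraph above (not real measurements):

    # Feasibility check from the example above: playback only makes sense
    # when the device can supply the decoder's average cycle rate.
    def playback_feasible(required_cycles, duration_s, device_cycles_per_s):
        return required_cycles / duration_s <= device_cycles_per_s

    # 1000 cycles over 10 s needs 100 cycles/s; a device that can only
    # deliver 800 cycles over the same 10 s (80 cycles/s) cannot keep up.
    print(playback_feasible(1000, 10, 80))   # -> False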

Tools Required

  • Video Plus Depth Videos (found on the MOBILE3DTV project page)
  • Video Plus Depth Encoder
  • Video Plus Depth Decoder (probably comes with the encoder)
  • Profiling tools (Cheng used Juice, if I remember correctly; see if this could work, or find something else) and get familiar with them
  • MATLAB (available, and I know how to use it)


February 7, 2011

  • After investigating the problem proposed by Cheng, I realised that it does not match my research interests as I first anticipated. Therefore, I decided to come back to 3D, with a clear focus on an energy-aware mobile client
  • In order to understand the capabilities and limitations of Google Android, I read 100+ pages of "Hello, Android: Introducing Google's Mobile Development Platform" (http://www.amazon.ca/Hello-Android-Introducing-Development-Platform/dp/1934356565/ref=sr_1_1?s=books&ie=UTF8&qid=1297114273&sr=1-1)
  • Read more about background work done on mobile 3D clients.
  • Will work on a project proposal

Ideas

  • Focus on how to render video plus depth on Android, as it is the representation that gives the most flexibility. For now, maybe have two 2D video players on one screen, where we show one view on the left and its depth on the right. It would be a proof of concept (POC) showing that we can indeed demux that information on Android.
  • Focus on establishing a "framework" that could estimate how much energy each operation required for decoding over the air will consume. PowerTutor could be used, as I did in CMPT 820. However, we might want to talk to Arrvindh about using his more precise measuring tool, with which we could establish average values for some operations and include those in our framework.


January 31, 2011

My current progress report can be found at /nsl/students/spenard/projects/adhocTethering/documents/reports and is updated frequently. My references can be found at /nsl/students/spenard/projects/adhocTethering/references

  • Discussed with Cheng what needs to be done short term. I have started writing a related work section on SVN, and I am focusing on peer discovery in wireless ad hoc networks as my first priority. Some work seems to have been done in this area already, and I will continue searching for more related work in order to stimulate new ideas. I have found an interesting article about how mobile devices can keep a routing table of surrounding nodes, but the article is too generic, and I would like to find something more detailed.
  • As Cheng and I agreed, this part of the whole scheme seems to be the hardest, and it might not be possible to implement it on Android, despite the platform's promised "openness". We suspect the feature has been blocked for business reasons. One of his colleagues spent over two weeks trying to enable it, without success.
  • Was also busy with two assignments for CMPT 771; all done.
  • Will work tonight and tomorrow on finding which phone is best, and will start the ordering process with Jason so we can have the phones ASAP
  • This week, I will continue working on the related work section, finish what I have started with peer discovery, and move on to ad hoc routing, where we need to find a good protocol.

January 25, 2011

  • Read more about view synthesis for video plus depth, and tried to better understand my previous readings, as some of them contained a lot of information. Gathered a list of requirements for streaming 3D videos to mobile devices, which is available on the wiki.
  • Read about tethering for mobile devices; I am currently gathering requirements for a system where users could share and resell the unused data in their data plans. I am meeting Cheng over Skype on the 26th. Again, our ideas are available on the wiki.

January 18, 2011

  • Read a paper about the various types of coding used in 3D technologies. Surprisingly, H.264/MVC does not seem to add that much compression when you consider the significant increase in processing power required for decoding. On the other hand, video plus depth requires only a fraction of the bitrate of its colour counterpart (in the range of 10-20%) in order to achieve high quality (a quick worked example follows this list). Multiview video plus depth coding can be useful for estimating views instead of sending them. That process can, again, be computationally expensive, and estimation algorithms are still not perfect. However, the method has an interesting aspect: one could use multiview video plus depth to have a proxy store the video and estimate the specific number of views required by the device that needs to display it. The beauty of this is the flexibility when it comes to supporting a broad array of mobile devices.
  • In my readings, I found that MPEG-C Part 3 can be used for video plus depth, something that should be looked into a bit more. It gives some flexibility in encoding the colour video and the depth map independently. H.264 Auxiliary Picture Syntax, on the other hand, does not offer such flexibility.
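A quick worked example of the bitrate figure above (the 1000 kbps colour bitrate is my own made-up number; only the 10-20% range comes from the paper):

    # Illustrative only: total video-plus-depth bitrate when the depth
    # map costs 10-20% of the colour bitrate, as reported in the paper.
    colour_kbps = 1000                      # assumed colour-view bitrate
    for depth_fraction in (0.10, 0.20):
        total = colour_kbps * (1 + depth_fraction)
        print(f"depth at {depth_fraction:.0%}: total {total:.0f} kbps")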


January 11, 2011

  • Had a busy weekend with my birthday ;-) But I also worked on the presentation I gave about 3D videos during our weekly team meeting. During the same week, I got familiar with the concept of remote rendering. I read about previous experiments where it was used in a virtual reality environment, and how the concept was used to improve the current viewpoint selection process on mobile devices. The papers helped me understand the concept and the various ways a proxy server could be used, even though they were set in a free viewpoint context rather than a stereo video one. For example, a proxy can be used to render a video partially, or even completely, and GPUs could also be used if needed. However, since the problem being solved is different from the one I am focusing on, the aspect of computing the reference frames needed when a user changes the viewpoint was of little interest.

Fall 2010 (TA)

  • Courses:
    • CMPT 705: Design/Analysis Algorithms
    • CMPT 820: Multimedia Systems
  • TA - Full Time
    • CMPT 165: Introduction to the Internet and the World Wide Web
  • Worked on a literature survey of 3D technologies, trying to understand the current research areas with a focus on those related to the field of 3DTV on mobile devices. Also got familiar with a power measurement tool for Android, and learned how to apply it in the context of academic research.