Private:progress-hamza

Summer 2011 (RA)

  • Courses: None


May 31

  • Report: here (last updated: May 31)
  • While working on formulating the optimization that we discussed in the last meeting, I noticed that although the problem I'm trying to solve is an instance of the multiple-choice knapsack problem (MCKP), the number of classes is fixed and very small (only 4 streams). This means that, unlike Som's work, where a large number of streams was assumed (e.g. between 10 and 50 in his experiments), the problem of selecting the best substreams can in this case be solved in real time. Assuming each stream is encoded into 4 layers, we have 4^4 = 256 combinations, which can easily be enumerated.
  • While trying to pin down a concrete problem, I'm thinking of extending Som's work and utilizing client-driven multicast, where the client subscribes to the channels of the desired views. A number of 3D video streams are to be transmitted. Each stream has N views. The views are encoded using SVC into a number of layers. The i-th view of the streams is multiplexed over a single broadcast/multicast channel. The receiver tunes in to / joins channels i and i+2 to receive two reference views, which can be utilized to synthesize any view in between.
  • The problem, however, now seems more complicated, as it becomes an unusual variant of the knapsack problem: we have two knapsacks, but the items that go into each knapsack come from different classes, while there is only one joint objective function. I'm trying to figure out whether this resembles any known variant of the knapsack problem, but so far I have not been successful.
  • After thinking more about it, I decided that if there are only two views and the client is going to receive from only two broadcast/multicast channels, then it doesn't matter to which channel a substream is allocated, and we can relax any assignment restrictions. This leads to a multiple-choice multiple knapsack problem (MCMKP), which is still NP-hard.
  • One way in which the MCMKP can be tackled is by partitioning it (sub-optimally) into two sub-problems: a multiple-choice knapsack problem (MCKP) and a multiple knapsack problem (MKP). A similar approach was taken here (http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=4150841) for the problem of providing QoS support for prioritized, bandwidth-adaptive, and fair media streaming using a multimedia server cluster.
  • After selecting the optimal substreams by solving the MCKP over the aggregate capacity of the two channels, we need to assign each selected substream to one of the two channels in a way that minimizes bandwidth fragmentation. A brute-force sketch of this two-step approach is given after this list.
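
To make the two-step decomposition above concrete, here is a minimal C++ sketch (not the actual formulation): it enumerates the 4^4 = 256 layer combinations for the four streams, keeps the best combination that fits within the aggregate capacity of the two channels (the MCKP step), and then greedily assigns the selected substreams to the two channels (the MKP/assignment step). All rates, utilities, and capacities are hypothetical placeholders, and the greedy assignment is only a heuristic against fragmentation.

```cpp
// Minimal sketch: exhaustive MCKP over the aggregate capacity, followed by a
// greedy per-channel assignment. Rates (kbps), utilities, and capacities are
// hypothetical placeholders, not measured values.
#include <algorithm>
#include <array>
#include <cstdio>
#include <vector>

int main() {
    const int STREAMS = 4, LAYERS = 4;   // 2 texture views + 2 depth maps, 4 SVC layers each
    // rate[s][l]: cumulative rate of stream s truncated after layer l
    double rate[STREAMS][LAYERS] = {{200, 350, 500, 700}, {200, 350, 500, 700},
                                    { 60, 100, 150, 220}, { 60, 100, 150, 220}};
    // util[s][l]: utility of stream s at layer l (e.g. reduction in synthesized distortion)
    double util[STREAMS][LAYERS] = {{10, 14, 17, 19}, {10, 14, 17, 19},
                                    { 4,  6,  7,  8}, { 4,  6,  7,  8}};
    double cap[2] = {950, 950};                       // per-channel capacities
    double aggregate = cap[0] + cap[1];

    // MCKP step: enumerate the 4^4 = 256 layer choices and keep the best feasible one.
    std::array<int, STREAMS> best{};
    double bestUtil = -1;
    for (int c = 0; c < 256; ++c) {
        std::array<int, STREAMS> pick{};
        double r = 0, u = 0;
        for (int s = 0, code = c; s < STREAMS; ++s, code /= LAYERS) {
            pick[s] = code % LAYERS;
            r += rate[s][pick[s]];
            u += util[s][pick[s]];
        }
        if (r <= aggregate && u > bestUtil) { bestUtil = u; best = pick; }
    }

    // Assignment step: place larger substreams first on the channel with more
    // residual capacity, to limit bandwidth fragmentation (heuristic only).
    std::vector<int> order = {0, 1, 2, 3};
    std::sort(order.begin(), order.end(),
              [&](int a, int b) { return rate[a][best[a]] > rate[b][best[b]]; });
    double residual[2] = {cap[0], cap[1]};
    for (int s : order) {
        int ch = (residual[0] >= residual[1]) ? 0 : 1;
        if (rate[s][best[s]] > residual[ch]) {        // fragmentation: would need re-selection
            std::printf("stream %d (layers 0..%d) does not fit either channel\n", s, best[s]);
            continue;
        }
        residual[ch] -= rate[s][best[s]];
        std::printf("stream %d, layers 0..%d -> channel %d\n", s, best[s], ch);
    }
    std::printf("total utility = %.1f\n", bestUtil);
    return 0;
}
```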


May 19

  • Report: here
  • Found a more recent and simpler model for the distortion of synthesized views based on the distortions of the reference views and their depth maps; the general shape of such models is sketched after this list.
  • Formulating an optimization problem to select the best combination of substreams to transmit out of the two reference views and their corresponding depth maps in order to minimize the average distortion over all intermediate synthesized views while not exceeding the current channel capacity.
  • There has been relevant work in the context of joint bit allocation between texture videos and depth maps in 3D video coding. In addition, another model has been utilized in recent work on RD-optimized interactive streaming of multiview video. However, in that work, the authors assume the presence of multiple encodings of the views at the server side. Our work will attempt to utilize SVC to encode the views at various qualities/bitrates and extract the best substreams that maximize quality while satisfying the capacity constraint.
  • Could not find the power consumption characteristics of the wireless chipsets mentioned by the reviewer of the ToM paper. The companies do not reveal them in their datasheets and only advertise that the chips are ultra-low power. One article claims that the Broadcom BCM4326 and BCM4328 Wi-Fi chips enable a full-rate (54 Mbps) active receive power consumption of less than 270 mW, but no further details are given.
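
For reference, the models I have seen in this area typically approximate the synthesized-view distortion as a linear combination of the texture and depth distortions of the two reference views. The following is a sketch of that generic shape only; A, B, and C are sequence- and viewpoint-dependent constants obtained by fitting, and the exact model in the paper mentioned above may differ.

```latex
% Generic (assumed) linear model of synthesized-view distortion:
%   D_v     : distortion of the virtual (synthesized) view
%   D_{t,i} : texture distortion of reference view i
%   D_{d,i} : depth map distortion of reference view i
D_v \approx A\,(D_{t,1} + D_{t,2}) + B\,(D_{d,1} + D_{d,2}) + C
```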


May 5

  • Currently working on formulating the rate adaptation problem for 3D video streaming using SVC that I mentioned in the last meeting. I'm writing a report on the problem and working on a formal formulation as an optimization problem. Expecting to finish the formulation by the end of this week.
  • Discussed with Som his work on hybrid multicast/unicast systems, but we could not find a common ground for leveraging that work to solve the high bit rate problem of 3D videos. The main issue is that in such systems patching is used to recover the leading portion of the video stream that the multicast session has already passed. Transmitting depth streams over separate unicast channels or via patching, for example, does not apply here because the texture and depth streams are synchronized and utilized concurrently. Moreover, other than streaming different views over separate multicast channels (which has already been proposed in several papers), it is not clear to me how multicasting would enable an interactive free-viewpoint experience where the user is free to navigate to any desired viewpoint of the scene.


Spring 2011 (RA)

  • Courses: None


Apr 22

  • Downloaded and compiled the Insight Segmentation and Registration Toolkit (ITK) and the Visualization ToolKit (VTK). The two libraries are huge, and it took at least an hour to compile each. They use the CMake utility for configuring and building (even for creating new projects), and VTK provides a tutorial on how to use CMake with Eclipse. Managed to read DICOM slices and save them as a volume in the MetaImage format. Was also able to render the generated MetaImage as a volume using VTK. To take DICOM images as input and view them through VTK, we should first open the files and save them to a volume as shown in the Insight/Examples/IO/DicomSeriesReadImageWrite2.cxx example. Then we visualize the volume as shown in the InsightApplications/Auxiliary/vtk/itkReadITKImageShowVTK.cxx example. Some modifications to the latter source file were necessary to render a 3D volume.
  • In biomedicine, 3-D data are acquired by a multitude of imaging devices [magnetic resonance imaging (MRI), CT, 3-D microscopy, etc.]. In most cases, 3-D images are represented as a sequence of two-dimensional (2-D) parallel image slices. Three-dimensional visualization encompasses the theories, methods, and techniques that apply computer graphics, image processing, and human-computer interaction to transform the data resulting from scientific computing into graphics.
  • DICOM files consist of a header and a body of image data. The header contains standardized as well as free-form fields. The set of standardized fields is called the public DICOM dictionary. A single DICOM file can contain multiple frames, allowing storage of volumes or animations. Image data can be compressed using a large variety of standards, including JPEG (both lossy and lossless), LZW (Lempel-Ziv-Welch), and RLE (run-length encoding).
  • Going from slices to a surface model (e.g. a mesh) requires some work. The most important step is segmentation: one needs to isolate, on each slice, the tissue that will be used to create the 3D model. Generally, there are three main steps to generate a mesh from a series of DICOM slices (a minimal sketch of this pipeline is given after this list):
    • read DICOM image(s): vtkDICOMImageReader
    • extract isocontour to produce a mesh: vtkContourFilter
    • write mesh in STL file format: vtkSTLWriter
  • It seems progressive meshes may not be very appropriate for representing the objects in medical applications. The doctors need to slice the object and look at cross sections, whereas meshes only show the outer surface. The anatomical structure or the region of interest needs to be delineated and separated out so that it can be viewed individually. This process is known as image segmentation in the world of medical imaging. However, segmentation of organs or regions of interest from a single image is of little significance for volume rendering. What is more important is segmentation from 3D volumes (which are basically consecutive images stacked together); such techniques are known as volume segmentation. A good, yet probably outdated, survey of volume segmentation techniques is given by Lakare in this report. A more recent evaluation of four different 3D segmentation algorithms with respect to their performance on three different CT data sets is given by Bulu and Alpkocak here.
  • As mentioned by Lakare, segmentation in medical imaging is generally considered a very difficult problem. Many approaches for volume segmentation have been proposed in the literature. These vary widely depending on the specific application, imaging modality (CT, MRI, etc.), and other factors. For example, the segmentation of the lungs has different issues than the segmentation of the colon. The same algorithm that gives excellent results for one application might not even work for another. According to Lakare, at the time of writing of his report, there was no segmentation method that provided acceptable results for every type of medical dataset.
  • A somewhat old, yet still valid, tutorial on visualization using VTK was published in IEEE Computer Graphics and Applications in 2000. The vtkImageData object can be used to represent one-, two-, and three-dimensional image data. As a subclass of vtkDataSet, vtkImageData can be represented by a vtkActor and rendered with a vtkDataSetMapper. In 3D this data can be considered a volume. Alternatively, it can be represented by a vtkVolume and rendered with a subclass of vtkVolumeMapper. Since some subclasses of vtkVolumeMapper use geometric techniques to render the volume data, the distinction between volumes and actors arises mostly from the different terminology and parameters used in volumetric rendering as opposed to the underlying rendering method. VTK currently supports three types of volume rendering: ray tracing, 2D texture mapping, and a method that uses the VolumePro graphics board.
  • VTK can render using the OpenGL API or, more recently, Manta. The iPhone (and other devices such as Android phones) use OpenGL ES, which is essentially a subset of OpenGL targeted at embedded systems. A recent post (December 2010) on the VTK mailing list indicates that there is interest in writing/collaborating on a port of VTK's rendering to OpenGL ES.
  • A paper on mesh decimation using VTK can be found here.
  • MeshLab is an open source, portable, and extensible system for the processing and editing of unstructured 3D triangular meshes.
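
As a concrete illustration of the three-step DICOM-to-mesh pipeline listed above, here is a minimal C++ sketch using the VTK classes named there. The directory path and the iso-value are hypothetical; in practice the iso-value would be chosen for the tissue of interest, and the actual segmentation step is not shown.

```cpp
// Minimal sketch of the DICOM-slices-to-STL-mesh pipeline using VTK.
#include <vtkSmartPointer.h>
#include <vtkDICOMImageReader.h>
#include <vtkContourFilter.h>
#include <vtkSTLWriter.h>

int main() {
    // 1. Read a directory of DICOM slices into a 3D image volume.
    vtkSmartPointer<vtkDICOMImageReader> reader =
        vtkSmartPointer<vtkDICOMImageReader>::New();
    reader->SetDirectoryName("/path/to/dicom/series");   // hypothetical path

    // 2. Extract an iso-surface as a triangle mesh (e.g. bone in a CT scan).
    vtkSmartPointer<vtkContourFilter> contour =
        vtkSmartPointer<vtkContourFilter>::New();
    contour->SetInputConnection(reader->GetOutputPort());
    contour->SetValue(0, 400.0);                          // hypothetical iso-value

    // 3. Write the resulting mesh to an STL file.
    vtkSmartPointer<vtkSTLWriter> writer =
        vtkSmartPointer<vtkSTLWriter>::New();
    writer->SetInputConnection(contour->GetOutputPort());
    writer->SetFileName("segmented_surface.stl");
    writer->Write();
    return 0;
}
```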


Apr 15

  • Jang et al. proposed a real-time implementation of a multi-view image synthesis system based on lookup tables (LUTs). In their implementation, the sizes of the LUTs for rotation conversion and disparity are 1.1 MB and 900 bytes per viewpoint, respectively. The processing time to create the left and right images was 3.845 s before using the LUTs, which does not enable real-time synthesis; using the LUTs reduced it to 0.062 s.
  • Park et al. presented a depth-image-based rendering (DIBR) technique for 3DTV service over terrestrial digital multimedia broadcasting (T-DMB), the mobile TV standard adopted by Korea. They leverage the previously mentioned real-time view synthesis technique by Jang et al. to overcome the computational cost of generating the auto-stereoscopic image. Moreover, they propose a depth pre-processing method using two adaptive smoothing filters to minimize the number of holes resulting from disocclusion during the view synthesis process.
  • Gurler et al. presented a multi-core decoding architecture for multiview video encoded in MVC. Their proposal is based on the idea of decomposing the input N-view stream into M independently decodable sub-streams and decoding each sub-stream in a separate thread using multiple instances of the MVC decoder. However, to obtain such independently decodable sub-streams, the video must be encoded using special inter-view prediction schemes that depend on the number of cores.
  • As indicated by Yuan et al., the distortion of virtual views is influenced by four factors in 3DV systems:
    • compression of texture videos and depth maps
    • performance of the view synthesis algorithm
    • inherent inaccuracy of depth maps
    • whether the captured texture videos are well rectified
  • Trying to encode two-view texture and depth map streams using JMVC (the multiview reference encoder) to get an idea of how much overhead is incurred by transmitting an additional view along with depth maps when streaming a 3D video over wireless channels. Managed to compile the source and edit the configuration files, but still get errors when encoding. Looking more into the configuration file parameters.
  • Looked more into DICOM slices: they are simply parallel 2D cross-sections of an object. Using those slices, and knowing the inter-slice distance, medical imaging software can reconstruct the 3D representation. More recent versions of the DICOM standard allow packaging all the slices into one file, reducing overhead by eliminating redundant headers.


Apr 8

  • Gathered different thoughts from my readings in the Readings and Thoughts section of the 3D Video Remote Rendering and Adaptation System Wiki page.
  • Could not find any work on distributed view synthesis.
  • I went over the work done by Dr. Hamarneh's students. I read the publications and the report he sent. However, as far as I can see, it is implementation work for porting an existing open source medical image analysis toolkit to the iOS platform; there are no algorithms or theory involved. That said, one of their future goals is to facilitate reading, writing, and processing of 3D or higher-dimensional medical images on iOS (which only supports normal 2D image formats). Current visualization of such imagery on desktop machines is performed via the Visualization ToolKit (VTK), and another of their goals is to port this toolkit to iOS. Another possible tool that I found, also based on VTK, is Slicer, an open source software package for visualization and image analysis.
  • Based on my readings on progressive mesh streaming, it should be applicable in this context. However, I'm still not familiar with the standard formats and the encoding of such meshes (especially in medical image analysis and visualization applications). Generally, it seems that medical images have their own formats, such as the DICOM standard. Their initial thought is to transmit a number of what are known as DICOM slices to the receiver, which would then construct the 3D model from them. So this is still not very clear to me, nor is it clear whether 3D video technologies may play a role in this.


Mar 14

  • Report: here
  • Added more details on homographies in the report.
  • Implemented double warping and blending, as well as inverse warping, using the Armadillo C++ linear algebra library (a minimal sketch of the inverse warping step is given after this list).
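
A minimal sketch of the inverse warping step using Armadillo, under the assumption that a 3x3 homography H maps virtual-view pixel coordinates (for a given depth plane) back to the reference view. This is an illustration only, not the actual implementation; the identity homography in main() is just a placeholder.

```cpp
// Inverse warping sketch: map a virtual-view pixel back to reference-view
// coordinates through an assumed 3x3 homography, using Armadillo.
#include <armadillo>

// Map pixel (u, v) in the virtual view back to reference-view coordinates.
arma::vec inverse_warp(const arma::mat& H_virtual_to_ref, double u, double v) {
    arma::vec p(3);
    p(0) = u; p(1) = v; p(2) = 1.0;              // homogeneous pixel coordinates
    arma::vec q = H_virtual_to_ref * p;          // apply the 3x3 homography
    arma::vec out(2);
    out(0) = q(0) / q(2);                        // dehomogenize: this is where the
    out(1) = q(1) / q(2);                        // reference view should be sampled
    return out;
}

int main() {
    arma::mat H = arma::eye<arma::mat>(3, 3);    // identity homography as a placeholder
    arma::vec src = inverse_warp(H, 120.0, 64.0);
    src.print("reference-view coordinates:");
    return 0;
}
```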


Mar 7

  • Added a more detailed description of the view synthesis process.
  • Implemented the first phase of the process (forward warping) and the z-buffer competition resolution technique in C/C++, and tested it on the Breakdancers sequence from MSR (a simplified sketch of this phase is given after this list).
  • Working on profiling the code using OProfile to calculate the number of cycles required by the view synthesis process to derive preliminary estimates of power consumption.
  • Implementing double warping and a hole filling technique to get a sense of the final quality that can be obtained.
  • Understanding homography matrices and how they are used to speed up the synthesis process.
  • Working on deriving a formal analysis of the time complexity of the view synthesis process. The projection phase basically involves a number of matrix multiplications.
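
A simplified sketch of the forward warping phase with z-buffer competition resolution, using Armadillo for the matrix operations. It assumes per-pixel depth and camera matrices (inverse reference intrinsics, virtual intrinsics, and the relative pose R, t) relating the reference and virtual views; it is not the exact code used on the Breakdancers sequence and omits hole filling and blending.

```cpp
// Forward warping with z-buffer competition: each reference pixel is
// back-projected with its depth and re-projected into the virtual view;
// when several pixels land on the same target pixel, the nearest one wins.
#include <armadillo>
#include <limits>
#include <vector>

void forward_warp(const std::vector<unsigned char>& ref_luma,  // W*H reference pixels
                  const std::vector<double>& ref_depth,         // W*H per-pixel depths
                  int W, int H,
                  const arma::mat& K_ref_inv,                   // 3x3 inverse intrinsics (reference)
                  const arma::mat& K_virt,                      // 3x3 intrinsics (virtual view)
                  const arma::mat& R, const arma::vec& t,       // reference -> virtual pose
                  std::vector<unsigned char>& out_luma,
                  std::vector<double>& zbuf) {
    out_luma.assign(W * H, 0);
    zbuf.assign(W * H, std::numeric_limits<double>::infinity());
    for (int v = 0; v < H; ++v) {
        for (int u = 0; u < W; ++u) {
            double z = ref_depth[v * W + u];
            arma::vec p(3); p(0) = u; p(1) = v; p(2) = 1.0;
            arma::vec X = z * (K_ref_inv * p);        // back-project to 3D (reference frame)
            arma::vec q = K_virt * (R * X + t);       // project into the virtual view
            if (q(2) <= 0) continue;                  // behind the virtual camera
            int uu = int(q(0) / q(2) + 0.5), vv = int(q(1) / q(2) + 0.5);
            if (uu < 0 || uu >= W || vv < 0 || vv >= H) continue;
            if (q(2) < zbuf[vv * W + uu]) {           // z-buffer competition resolution
                zbuf[vv * W + uu] = q(2);
                out_luma[vv * W + uu] = ref_luma[v * W + u];
            }
        }
    }
}
```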


Feb 28

Feb 21

  • Familiarizing myself with JSVM and its tools and options.
  • Contacted the lab that developed the reference software for disparity estimation and view synthesis described in the MPEG technical reports. Still haven't received a reply.


Feb 14

  • Reading about SVC and how to perform bitstream extraction
  • Reading Cheng's paper on viewing time scalability and Som's IWQoS paper.
  • Reading a couple of papers on optimized substream extraction
  • Reading papers on modelling the synthesized view distortion in V+D 3D videos


Jan 24

  • Report: here
  • 3D Video Remote Rendering and Adaptation System
  • Market survey:
    • The mobile market seems to be shifting towards multicore processors. At CES 2011, at least two companies showcased new mobile phones (LG Optimus 2X and Motorola ATRIX 4G) based on the NVIDIA Tegra 2 dual-core ARM Cortex-A9 processor. This looks promising, as it may enable smoother graphics and may be useful for fast view synthesis on the mobile device. However, some evaluation of power consumption needs to be performed. The chip also includes an ultra-low power (ULP) GeForce GPU and is capable of decoding 1080p HD video. Demo Video
    • Tablets emerging in the market nowadays are using the Tegra 2 processor (e.g. Dell Streak 7 and Motorola XOOM)
    • Qualcomm Snapdragon, Samsung Orion (Video), and Texas Instruments OMAP4 are all dual-core processors expected in the first half of 2011.
    • Slides leaked this weekend from NVIDIA's presentation at the Mobile World Congress indicate that the company will ship a Tegra 2 3D processor this year, intended for use in mobile gadgets featuring a 3D screen. Although this is yet to be confirmed, devices such as LG's G-Slate, which is expected to have a glasses-free three-dimensional display and to ship around the same time, are expected to run on this processor. Moreover, an announcement of a Tegra 3 processor is expected in February.
    • The recent release of Gingerbread (Android 2.3) was accompanied by the release of NDK r5, which allows application lifecycle management and window management to be performed outside Java. This means an application can be written entirely in C/C++/ARM assembly code without the need to develop Java or JNI bindings.


Jan 17

  • Concentrating on view synthesis in 3D video systems; read two recent survey papers on the topic.
  • Reading about multiple view geometry to understand the warping process and the related terms from epipolar, trifocal, and projective geometry.
  • Understanding the commonly used pinhole camera model (its projection equation is given after this list).
  • Reading about stereo-based view synthesis.
  • Went over 3 papers on real-time view synthesis using GPUs.
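
For reference, the pinhole model projects a 3D world point onto the image plane through the intrinsic matrix K and the camera pose (R, t):

```latex
% Pinhole camera projection: a world point (X, Y, Z) maps to pixel (u, v);
% \lambda is the projective depth, K holds focal lengths, skew, and principal point.
\lambda \begin{pmatrix} u \\ v \\ 1 \end{pmatrix}
  = K \, [\, R \mid t \,]
    \begin{pmatrix} X \\ Y \\ Z \\ 1 \end{pmatrix},
\qquad
K = \begin{pmatrix} f_x & s & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{pmatrix}
```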


Jan 10

  • Exploring potential research directions in 3D videos, including: adaptive virtual view rendering in free-viewpoint video, view synthesis, and rate adaptation in 3D video streaming.
  • Investigating the potential of cloud computing as a platform for enabling remote rendering of 3D video for mobile devices.


Fall 2010 (RA)

  • Courses:
    • CMPT-765: Computer Communication Networks
  • Submissions:
    • Energy Saving in Multiplayer Mobile Games (TOM'11)
  • Publications:
    • Energy-Efficient Gaming on Mobile Devices using Dead Reckoning-based Power Management (NetGames'10)


Summer 2010 (DGS-GF)

  • Courses: None
  • Submissions:
    • Energy-Efficient Gaming on Mobile Devices using Dead Reckoning-based Power Management (NetGames'10)


Spring 2010 (TA)

  • Courses:
    • CMPT-705: Design and Analysis of Algorithms


Fall 2009 (RA)

  • Courses:
    • CMPT-771: Internet Architecture and Protocols
  • Submissions:
    • Efficient AS Path Computation and Its Application to Peer Matching (NSDI'10)


Summer 2009 (RA)

  • Submissions:
    • Efficient Peer Matching Algorithms (CoNEXT'09)


Spring 2009 (TA)

  • Courses:
    • CMPT-820: Multimedia Systems