Latest revision as of 09:01, 7 August 2021
This project addresses next-generation video systems, including VR (virtual reality), 360, multiview, and 3D videos. We address problems ranging from content generation, through adaptation to different platforms, to streaming to heterogeneous receivers.
People
- Ahmed Hamza (https://www.sfu.ca/~aah10/)
- Kiana Calagari
- Khaled Diab (https://www.sfu.ca/~kdiab/)
- Hamed Ahmadi
- Mohamed Hefeeda (https://www.cs.sfu.ca/~mhefeeda/)
2D to 3D Video Conversion
Widespread adoption of 3D displays is hindered by the lack of content that matches users' expectations. Producing 3D videos is far more costly and time-consuming than producing regular 2D videos, which makes 3D production rarely attempted, especially for live events such as soccer games. In this project, we develop a high-quality automated 2D-to-3D conversion method for soccer videos. Our method is data-driven, relying on a reference database of 3D videos. Our key insight is to use computer-generated depth maps from modern sports video games to create a synthetic 3D reference database.
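As a loose illustration of the data-driven idea (not the project's actual pipeline), the depth of a query frame can be estimated by blending the depth maps of its nearest neighbors in the synthetic database. The feature representation and inverse-distance weighting below are placeholder assumptions:

```python
import numpy as np

def depth_transfer(query_feat, db_feats, db_depths, k=3):
    """Estimate a depth map for a query frame by blending the depth maps
    of its k nearest neighbors in a reference database.

    query_feat: (d,) feature vector describing the query frame (assumed
                precomputed; the choice of features is an assumption here)
    db_feats:   (n, d) features of the database frames
    db_depths:  (n, h, w) computer-generated depth maps, e.g. rendered
                from a sports video game as described above
    """
    # Euclidean distance from the query to every database frame
    dists = np.linalg.norm(db_feats - query_feat, axis=1)
    nn = np.argsort(dists)[:k]            # indices of the k nearest frames
    # Inverse-distance weights (epsilon avoids division by zero)
    w = 1.0 / (dists[nn] + 1e-8)
    w /= w.sum()
    # Weighted blend of the neighbors' depth maps -> (h, w) depth estimate
    return np.tensordot(w, db_depths[nn], axes=1)
```

A real system would refine the blended depth (e.g. enforce edge alignment with the input frame) before rendering the second view.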
Immersive Content Generation from Standard 2D Videos
The aim of this project is to create compelling immersive videos suitable for VR (virtual reality) devices using only standard 2D videos. The work focuses on field sports such as soccer, hockey, and basketball. Currently, the only way to create immersive content is to use multiple cameras and 360 camera rigs. This means that, in addition to the broadcast cameras already present around the field, expensive infrastructure must be added and managed in order to shoot and generate immersive content. In this project, we propose a more favorable alternative: utilizing the content of the existing standard 2D cameras around the field to generate an immersive video.
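One simplified way to picture reusing a broadcast camera for immersive output is warping its frame into an equirectangular panorama, the format VR players consume. The toy grayscale sketch below assumes a pinhole camera with known yaw and field of view, uses nearest-neighbor sampling, and ignores lens distortion; it is an illustration, not the project's method:

```python
import numpy as np

def paste_view(pano, frame, yaw_deg, hfov_deg):
    """Paste one 2D camera frame into an equirectangular panorama canvas.

    pano:     (H, W) panorama, longitude spanning -180..180 degrees
    frame:    (h, w) grayscale frame from one broadcast camera
    yaw_deg:  horizontal direction the camera points at
    hfov_deg: horizontal field of view of the camera
    """
    H, W = pano.shape
    h, w = frame.shape
    # Pinhole focal length (pixels) from the horizontal field of view
    f = (w / 2) / np.tan(np.radians(hfov_deg) / 2)
    lon = (np.arange(W) / W) * 360.0 - 180.0       # panorama longitudes
    ang = np.radians(lon - yaw_deg)                # angle relative to camera axis
    vis = np.abs(ang) < np.radians(hfov_deg) / 2   # columns the camera sees
    # Map each visible panorama column to a source column in the frame
    x = np.clip((np.tan(ang[vis]) * f + w / 2).astype(int), 0, w - 1)
    # Crude vertical map: nearest frame row for each panorama row
    rows = np.clip((np.arange(H) * h) // H, 0, h - 1)
    pano[:, vis] = frame[rows][:, x]
    return pano
```

Calling this once per camera, each with its own yaw, fills different longitude bands of the same canvas; blending the overlaps is where the real difficulty lies.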
Adaptive Streaming of Free-viewpoint Videos
Free-viewpoint video (FVV) enables users to interact with the scene by navigating to different viewpoints. These videos are composed of multiple streams representing the captured scene and its geometry from different vantage points. In this project, we study the problem of adaptive FVV streaming. Rendering non-captured views at the client requires transmitting multiple views with associated depth-map streams, thereby increasing the network traffic requirements of such systems. Adding to the complexity of these systems is the fact that different component streams contribute differently to the quality of the final rendered view. We propose novel quality-aware rate adaptation methods for FVV streaming based on empirical and analytical virtual-view distortion models. We study the performance of these methods using a complete FVV streaming client implementation based on open-source libraries developed in our lab. We also study the challenges of streaming FVV content over next-generation mobile networks such as LTE-Advanced and 5G and propose optimized solutions to address them.
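A minimal sketch of the quality-aware rate adaptation idea: pick one representation per component stream (texture or depth) so that an estimated virtual-view distortion is minimized without exceeding the available bandwidth. The additive distortion model and the exhaustive search are simplifying assumptions, not the paper's actual models:

```python
import itertools

def select_rates(streams, bandwidth):
    """Choose one (bitrate, distortion) representation per component stream.

    streams:   list of lists of (bitrate_kbps, distortion) options,
               one inner list per texture or depth stream
    bandwidth: total budget in kbps
    Returns the chosen combination, or None if nothing fits the budget.
    """
    best, best_dist = None, float("inf")
    # Exhaustive search is fine for a handful of streams; a real player
    # would use a faster heuristic driven by its view-distortion model.
    for combo in itertools.product(*streams):
        rate = sum(r for r, _ in combo)
        dist = sum(d for _, d in combo)  # additive distortion (assumption)
        if rate <= bandwidth and dist < best_dist:
            best, best_dist = combo, dist
    return best
```

The key point the project makes is that `dist` should weight each stream by its actual contribution to the rendered view, which is exactly what the empirical and analytical distortion models estimate.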
MASH: Adaptive Streaming of Multiview Videos over HTTP
Multiview videos offer an unprecedented experience by allowing users to explore scenes from different angles and perspectives. Thus, such videos have been gaining substantial interest from major content providers such as Google and Facebook. Adaptive streaming of multiview videos is, however, challenging because of Internet dynamics and the diversity of users' interests and network conditions. To address this challenge, we propose a novel rate adaptation algorithm for multiview videos, called MASH. Streaming multiview videos is more user-centric than streaming single-view videos, because it heavily depends on how users interact with the different views. To efficiently support this interactivity, MASH constructs probabilistic view-switching models that capture the switching behavior of the user in the current session, as well as the aggregate switching behavior across all previous sessions of the same video. MASH then utilizes these models to dynamically assign relative importance to different views. Furthermore, MASH uses a new buffer-based approach to request video segments of various views at different qualities, such that the quality of the streamed videos is maximized while network bandwidth is not wasted. We have implemented a multiview video player and integrated MASH into it.
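The view-switching idea can be sketched as blending per-session and aggregate switching statistics into next-view probabilities, which then serve as per-view importance weights. The uniform fallback and the blending weight `alpha` below are illustrative assumptions, not MASH's exact formulation:

```python
def view_importance(session_counts, global_counts, alpha=0.5):
    """Blend current-session and aggregate view-switching statistics
    into a probability that each view is watched next.

    session_counts: dict view -> switches to that view in this session
    global_counts:  dict view -> switches to that view across all
                    previous sessions of the same video
    alpha:          weight of the current session vs. the aggregate model
    """
    views = set(session_counts) | set(global_counts)

    def normalize(counts):
        total = sum(counts.get(v, 0) for v in views)
        # Uniform fallback when no switches were observed (assumption)
        return {v: counts.get(v, 0) / total if total else 1 / len(views)
                for v in views}

    p_sess = normalize(session_counts)
    p_glob = normalize(global_counts)
    return {v: alpha * p_sess[v] + (1 - alpha) * p_glob[v] for v in views}
```

A rate adaptation loop would then spend more of the buffer and bandwidth budget on views with high importance, fetching likely-next views at higher quality than rarely visited ones.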