Design and Analysis of MEC- and Proactive Caching-Based 360° Mobile VR Video Streaming



Abstract:

Recently, 360-degree mobile virtual reality video (MVRV) has become increasingly popular because it can provide users with an immersive experience. However, MVRV is usually recorded in a high resolution and is sensitive to latency, which means that broadband, ultra-reliable, and low-latency communication is necessary to guarantee the users’ quality of experience. In this paper, we propose a mobile edge computing (MEC)-based 360-degree MVRV streaming scheme with field-of-view (FoV) prediction, which jointly considers video coding, proactive caching, computation offloading, and data transmission. To meet the requirement of stringent end-to-end (E2E) latency, the user’s viewpoint prediction is utilized to cache video data proactively, and computing tasks are partially offloaded to the MEC server. In addition, we propose an analytical model based on a diffusion process to study the packet transmission process of 360-degree MVRV in multihop wired/wireless networks and analyze the performance of the MEC-enabled scheme. The simulation results verify the accuracy of the analysis and the effectiveness of the proposed MVRV streaming scheme in reducing the E2E delay. Furthermore, the analytical framework sheds some light on the impacts of system parameters, e.g., FoV prediction accuracy and transmission rate, on the balance between computation delay and communication delay.
Published in: IEEE Transactions on Multimedia (Volume: 24)
Page(s): 1529 - 1544
Date of Publication: 19 March 2021


I. Introduction

Mobile virtual reality (VR) is expected to become an extremely popular application on 5G networks. It refers to the delivery of video and audio from a cloud server to a user’s terminal device via a multihop network, with storage and rendering performed by the cloud or edge server; this application has become practical with the aid of cloud computing technology and stable gigabit fiber networks [1]. A 360-degree video, also known as a three-degree-of-freedom (3-DoF) spherical video, can provide users with an immersive experience. Because 360-degree mobile virtual reality video (MVRV) combines the high-capacity requirement of enhanced mobile broadband (eMBB) services with the stringent latency and reliability requirements of ultra-reliable low-latency communication (URLLC) services, supporting this application still poses many technical difficulties [2], [3]. Current research in this area focuses not only on traditional approaches, such as increasing the transmission rate or decreasing the bandwidth requirement, but also on jointly utilizing the resources of caching, computation, and communication (3C) [4]–[6].
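The interplay between caching, computation, and communication can be made concrete with a toy delay model. The sketch below is illustrative only (it is not the paper's analytical model): it assumes a tile-based 360° stream where the FoV prediction accuracy acts as the probability that a requested tile was proactively cached at the MEC server, so a cache hit skips the cloud-to-edge backhaul fetch. All function names and parameter values are hypothetical placeholders.

```python
# Illustrative sketch (not the paper's model): expected end-to-end (E2E)
# delay for one 360-degree video tile under MEC-based proactive caching.

def expected_e2e_delay_ms(
    p_hit: float,          # FoV prediction accuracy = cache-hit probability
    tile_bits: float,      # tile size in bits
    r_wireless_bps: float, # edge-to-headset wireless rate
    r_backhaul_bps: float, # cloud-to-edge backhaul rate
    t_render_ms: float,    # per-tile rendering time offloaded to the MEC server
) -> float:
    """Expected delay: a cache hit skips the backhaul fetch; a miss pays it."""
    t_wireless = tile_bits / r_wireless_bps * 1e3   # wireless transmission, ms
    t_backhaul = tile_bits / r_backhaul_bps * 1e3   # backhaul fetch, ms
    t_hit = t_render_ms + t_wireless                # tile already at the edge
    t_miss = t_backhaul + t_render_ms + t_wireless  # tile fetched from the cloud
    return p_hit * t_hit + (1 - p_hit) * t_miss

# Higher FoV prediction accuracy lowers the expected E2E delay:
low_acc = expected_e2e_delay_ms(0.6, 2e6, 100e6, 50e6, 5.0)   # 41.0 ms
high_acc = expected_e2e_delay_ms(0.9, 2e6, 100e6, 50e6, 5.0)  # 29.0 ms
assert high_acc < low_acc
```

Even this crude model shows the tradeoff the paper studies: improving prediction accuracy or backhaul rate shrinks the communication component, while offloading more rendering to the MEC server shifts delay into the computation component.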

References

1. "Huawei iLab", 2020, [online] Available: https://www-file.huawei.com/-/media/corporate/pdf/ilab/2018/cloud_vr_solutions_wp_cn.pdf?source=corp_comm.
2. B. Soret, P. Mogensen, K. I. Pedersen and M. C. Aguayo-Torres, "Fundamental tradeoffs among reliability, latency and throughput in cellular networks", Proc. IEEE Globecom Workshops, pp. 1391-1396, Dec. 2014.
3. M. Zink, R. Sitaraman and K. Nahrstedt, "Scalable 360° video stream delivery: Challenges, solutions and opportunities", Proc. IEEE, vol. 107, no. 4, pp. 639-650, Apr. 2019.
4. M. Erol-Kantarci and S. Sukhmani, "Caching and computing at the edge for mobile augmented reality and virtual reality (AR/VR) in 5G", Springer, 2018.
5. Y. Sun, Z. Chen, M. Tao and H. Liu, "Communications, caching and computing for mobile virtual reality: Modeling and tradeoff", IEEE Trans. Commun., vol. 67, no. 11, pp. 7573-7586, Nov. 2019.
6. J. Chakareski, "VR/AR immersive communication: Caching, edge computing and transmission trade-offs", Proc. ACM Workshop Virtual Reality Augmented Reality Netw., pp. 36-41, 2017.
7. X. Ge, L. Pan, Q. Li, G. Mao and S. Tu, "Multipath cooperative communications networks for augmented and virtual reality transmission", IEEE Trans. Multimedia, vol. 19, no. 10, pp. 2345-2358, Oct. 2017.
8. J. Park, P. Popovski and O. Simeone, "Minimizing latency to support VR social interactions over wireless cellular systems via bandwidth allocation", IEEE Wireless Commun. Lett., vol. 7, no. 5, pp. 776-779, Oct. 2018.
9. M. S. Elbamby, C. Perfecto, M. Bennis and K. Doppler, "Toward low-latency and ultra-reliable virtual reality", IEEE Netw., vol. 32, no. 2, pp. 78-84, Mar. 2018.
10. Y. Chen et al., "Cooperative communications in ultra-wideband wireless body area networks: Channel modeling and system diversity analysis", IEEE J. Sel. Areas Commun., vol. 27, no. 1, pp. 5-16, Jan. 2009.
11. F. Qian, L. Ji, B. Han and V. Gopalakrishnan, "Optimizing 360 video delivery over cellular networks", Proc. 5th Workshop Things Cellular: Operations Appl. Challenges, pp. 1-6, 2016.
12. K. Lee et al., "Outatime: Using speculation to enable low-latency continuous interaction for mobile cloud gaming", Proc. Int. Conf. Mobile Syst., pp. 151-165, 2015.
13. M. Chen, W. Saad and C. Yin, "Virtual reality over wireless networks: Quality-of-service model and learning-based resource management", IEEE Trans. Commun., vol. 66, no. 11, pp. 5621-5635, 2018.
14. V. W. S. Wong, R. Schober, D. W. K. Ng and L.-C. Wang, Key Technologies for 5G Wireless Systems, Cambridge, U.K.: Cambridge Univ. Press, 2017.
15. X. Yang et al., "Communication-constrained mobile edge computing systems for wireless virtual reality: Scheduling and tradeoff", IEEE Access, vol. 6, pp. 16665-16677, Mar. 2018.
16. S. Sukhmani, M. Sadeghi, M. Erol-Kantarci and A. El Saddik, "Edge caching and computing in 5G for mobile AR/VR and tactile internet", IEEE MultiMedia, vol. 26, no. 1, pp. 21-30, Jan. 2019.
17. L. Zhang, Y. Xu, W. Huang, L. Yang, J. Sun and W. Zhang, "A MMT-based content classification scheme for VoD service", Proc. IEEE Int. Symp. Broadband Multimedia Syst. Broadcast, pp. 1-5, Jun. 2015.
18. J. Yang, Q. Yang, K. S. Kwak and R. R. Rao, "Power-delay tradeoff in wireless powered communication networks", IEEE Trans. Veh. Technol., vol. 66, no. 4, pp. 3280-3292, Apr. 2017.
19. T. T. Le, D. V. Nguyen and E. Ryu, "Computing offloading over mmWave for mobile VR: Make 360 video streaming alive", IEEE Access, vol. 6, pp. 66576-66589, Sep. 2018.
20. T.-Y. Huang, R. Johari, N. McKeown, M. Trunnell and M. Watson, "A buffer-based approach to rate adaptation: Evidence from a large video streaming service", Proc. ACM Conf. SIGCOMM, pp. 187-198, Aug. 2014.
21. C. Zhou, C. Lin and Z. Guo, "mDASH: A Markov decision-based rate adaptation approach for dynamic HTTP streaming", IEEE Trans. Multimedia, vol. 18, no. 4, pp. 738-751, Apr. 2016.
22. "MPEG-DASH (Dynamic Adaptive Streaming Over HTTP)", Aug. 2020, [online] Available: https://standards.iso.org/ittf/PubliclyAvailableStandards/MPEG-DASH_schema_files/.
23. T. H. Luan, L. X. Cai and X. Shen, "Impact of network dynamics on user’s video quality: Analytical framework and QoS provision", IEEE Trans. Multimedia, vol. 12, no. 1, pp. 64-78, Jan. 2010.
24. Y. Zhang, P. Zhao, K. Bian, Y. Liu, L. Song and X. Li, "DRL360: 360-degree video streaming with deep reinforcement learning", Proc. IEEE INFOCOM, pp. 1252-1260, Apr. 2019.
25. H. Pang, C. Zhang, F. Wang, J. Liu and L. Sun, "Towards low latency multi-viewpoint 360° interactive video: A multimodal deep reinforcement learning approach", Proc. IEEE INFOCOM Conf. Comput. Commun., pp. 991-999, Apr. 2019.
26. T. Judd, K. Ehinger, F. Durand and A. Torralba, "Learning to predict where humans look", Proc. IEEE Int. Conf. Comput. Vis., pp. 2106-2113, Sep. 2009.
27. S. Dodge and L. Karam, "Visual saliency prediction using a mixture of deep neural networks", IEEE Trans. Image Process., vol. 27, no. 8, pp. 4080-4090, Aug. 2018.
28. M. Kümmerer, L. Theis and M. Bethge, "Deep Gaze I: Boosting saliency prediction with feature maps trained on ImageNet", Proc. Int. Conf. Learn. Representations Workshop, pp. 1-12, 2015.
29. E. Vig, M. Dorr and D. Cox, "Large-scale optimization of hierarchical features for saliency prediction in natural images", Proc. IEEE Conf. Comput. Vis. Pattern Recognit., pp. 2798-2805, 2014.
30. L. Bazzani, H. Larochelle and L. Torresani, "Recurrent mixture density network for spatiotemporal visual attention", Proc. Int. Conf. Learn. Representations, pp. 1-17, 2017.
