Graduate Student: 周奕宇 (Yi-Yu Chou)
Thesis Title: 基於生成對抗網路且應用於AMOLED顯示器之限制功率曝光校正 / GAN-Based Power-Constrained Exposure Correction for AMOLED Displays
Advisor: 阮聖彰 (Shanq-Jang Ruan)
Committee Members: 阮聖彰 (Shanq-Jang Ruan), 林淵翔 (Yuan-Hsiang Lin), 蔡坤霖 (Kun-Lin Tsai), 白御廷 (Yu-Ting Pai)
Degree: Master
Department: Department of Electronic and Computer Engineering, College of Electrical Engineering and Computer Science
Publication Year: 2020
Academic Year: 108
Language: English
Pages: 78
Keywords: Power saving, Exposure correction, Active-matrix organic light-emitting diode (AMOLED) display, Adversarial learning
Active-matrix organic light-emitting diode (AMOLED) technology has become the mainstream display for smart devices in recent years. However, the AMOLED panel is one of the main power-consuming components of a smart device; displaying high-brightness content in particular incurs substantial power consumption. To correct over-exposed images captured under strong ambient light, this study proposes a power-constrained exposure-correction neural network based on deep learning. The network adopts a U-Net-based generator with a brightness-guidance mechanism and performs adversarial learning against global and local discriminators. To transform the distribution of over-exposed regions into that of normally exposed regions while ensuring the generated image incurs no additional power consumption on an AMOLED display, we further impose a power constraint on the generator that prevents it from producing brightness higher than the input image. Experimental results show that the proposed method effectively transforms over-exposed regions toward a normal-exposure distribution while constraining power. Specifically, on common benchmark datasets and at comparable power-saving rates, the proposed method enhances image saturation and contrast and delivers better image quality than existing over-exposure-correction power-saving techniques.
Active-Matrix Organic Light Emitting Diode (AMOLED) technology has become the mainstream display technology in recent years. However, an AMOLED display consumes considerable power when showing high-brightness content. To address this problem, an exposure correction mechanism is needed to remove high-brightness ambient light from the image. In this thesis, we propose a Power-Constrained Exposure Correction (PCEC) network based on a Generative Adversarial Network (GAN) architecture. The PCEC network uses a U-Net-based generator with brightness guidance and adopts a global-local discriminator architecture for adversarial learning. To transform the distribution of high-exposed regions into the distribution of normal-exposed regions while avoiding additional power consumption on AMOLED displays, we add a power constraint to the generator that restricts any increase in brightness. The experimental results show that the proposed method effectively corrects high-exposed regions while reducing power. At a similar power-saving rate, the proposed method enhances the saturation and contrast of the image and provides better visual quality than existing over-exposure-correction power-saving technologies.
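The power constraint described above can be illustrated with a minimal sketch. It rests on the assumption, stated in the abstract, that AMOLED power consumption grows with pixel brightness, so a corrected image should never be brighter than its input; the function name, the squared penalty, and the array shapes below are illustrative choices, not the loss actually defined in the thesis.

```python
import numpy as np

def power_constraint_penalty(generated: np.ndarray, source: np.ndarray) -> float:
    """Hypothetical power-constraint term: penalize only the pixels where
    the generated image is brighter than the input, since on an AMOLED
    display extra brightness means extra power.

    Both arrays hold intensities in [0, 1] and have the same shape.
    """
    # Brightness decreases cost no power, so they are clamped to zero.
    excess = np.maximum(generated - source, 0.0)
    return float(np.mean(excess ** 2))

src = np.array([[0.8, 0.6],
                [0.9, 0.4]])

# An output that only darkens pixels incurs zero penalty.
print(power_constraint_penalty(src * 0.9, src))            # 0.0
# An output that brightens pixels is penalized.
print(power_constraint_penalty(np.clip(src + 0.1, 0, 1), src))  # > 0
```

In a GAN training loop, a term of this kind would be added to the generator loss with a weighting factor, pulling the corrected image toward exposures that save rather than spend display power.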