
Graduate Student: 蘇俊榮 (Chun-Rong Su)
Thesis Title: Unsupervised Image Segmentation by Dual Morphological Operations and Peer-to-Peer Content-Based Image Retrieval Applications (運用對偶式形態運算之自動影像分割法及點對點網路影像檢索應用)
Advisor: 陳建中 (Jiann-Jone Chen)
Committee Members: 貝蘇章 (Soo-Chang Pei), 杭學鳴 (Hsueh-Ming Hang), 許新添 (Hsin-Teng Sheu), 鍾國亮 (Kuo-Liang Chung), 廖弘源 (Hong-Yuan Mark Liao), 賴尚宏 (Shang-Hong Lai)
Degree: Doctorate
Department: Department of Electrical Engineering, College of Electrical Engineering and Computer Science
Year of Publication: 2014
Academic Year of Graduation: 102
Language: English
Number of Pages: 125
Chinese Keywords: dual multiscale morphological reconstruction, multi-instance multi-feature, database image retrieval, peer-to-peer networks
English Keywords: dual multiscale morphological, p2p networks
  • This thesis studies a peer-to-peer content-based image retrieval (Peer-to-Peer Content-Based Image Retrieval, P2P-CBIR) system and its methods: (1) for the preprocessing stage of the P2P-CBIR system, we develop an automatic background-removal method; (2) for the network search-engine design, we propose scalable retrieval scope and progressive retrieval methods.
    The first part of this thesis discusses the automatic object-identification preprocessing unit; the second part investigates how content-based image retrieval operates over a peer-to-peer network architecture. For preprocessing, we propose a dual multiscale gray-level morphological open-and-close reconstruction algorithm (dual multiScalE Graylevel mOrphological open and close recoNstructions, SEGON), combined with the coverage ratio of foreground-object edge points, to separate the foreground objects from the background regions of an image. In addition, we use the object regions segmented by SEGON to build a background (BG) mesh image, which improves segmentation accuracy. Because the segmentation procedure is driven by the gray-level variation of each image's background, it is well suited to segmenting foreground objects in large-scale image databases. To evaluate segmentation accuracy, we compute the matching score between the segmented foreground-object regions and hand-labeled object regions. Compared with previously designed object-segmentation algorithms, the proposed method achieves higher segmentation accuracy. For retrieval, this thesis proposes a Multi-Instance with Multiple Features (MIMF) retrieval method that integrates the similarity ranks of multiple features through the correlation coefficients among query examples. Experimental results show that retrieval with the proposed SEGON segmentation as preprocessing achieves higher retrieval accuracy than retrieval without preprocessing.
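The SEGON preprocessing builds on gray-level opening and closing by reconstruction, a pair of dual morphological operations. A minimal pure-Python sketch of these duals follows (flat square structuring element; all function and variable names are illustrative, not taken from the thesis):

```python
def dilate(img, r=1):
    """Flat gray-level dilation: max over a (2r+1)x(2r+1) window, clipped at borders."""
    h, w = len(img), len(img[0])
    return [[max(img[yy][xx]
                 for yy in range(max(0, y - r), min(h, y + r + 1))
                 for xx in range(max(0, x - r), min(w, x + r + 1)))
             for x in range(w)] for y in range(h)]

def erode(img, r=1):
    """Flat gray-level erosion: min over a (2r+1)x(2r+1) window, clipped at borders."""
    h, w = len(img), len(img[0])
    return [[min(img[yy][xx]
                 for yy in range(max(0, y - r), min(h, y + r + 1))
                 for xx in range(max(0, x - r), min(w, x + r + 1)))
             for x in range(w)] for y in range(h)]

def reconstruct_by_dilation(marker, mask):
    """Iterate geodesic dilation (marker <- min(dilate(marker), mask)) to stability."""
    while True:
        nxt = [[min(d, m) for d, m in zip(dr, mr)]
               for dr, mr in zip(dilate(marker), mask)]
        if nxt == marker:
            return nxt
        marker = nxt

def reconstruct_by_erosion(marker, mask):
    """Dual: iterate geodesic erosion (marker <- max(erode(marker), mask)) to stability."""
    while True:
        nxt = [[max(e, m) for e, m in zip(er, mr)]
               for er, mr in zip(erode(marker), mask)]
        if nxt == marker:
            return nxt
        marker = nxt

def open_by_reconstruction(img, r=1):
    """Removes bright details smaller than the window while preserving
    the exact contours of objects that survive the erosion."""
    return reconstruct_by_dilation(erode(img, r), img)

def close_by_reconstruction(img, r=1):
    """Dual operation: fills dark details smaller than the window."""
    return reconstruct_by_erosion(dilate(img, r), img)
```

For example, a single bright pixel in a dark field is wiped out by opening by reconstruction, while a single dark pixel in a bright field is filled in by its dual; applying both at several window sizes is, roughly, how a multiscale scheme like SEGON probes background gray-level variation.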
    For the peer-to-peer image retrieval system (P2P-CBIR), we adopt a decentralized, unstructured P2P network topology to effectively explore the databases distributed over related peers. Based on past retrieval records, the search strategy can be flexibly controlled to cope with various network conditions, such as network outages or peers that frequently join and leave the network. For the search engine, this thesis applies the MIMF retrieval method to the P2P network, which effectively reduces network traffic while maintaining the original accuracy. We further propose a scalable retrieval-scope control for this P2P-CBIR system, together with a progressive filtering mechanism that screens out highly similar images layer by layer while similar images are being transmitted. To cope with the time-variant nature of peer databases, we propose a system reconfiguration method that updates the similarity-link parameters between peers, so that the network search engine performs retrieval with more accurate system parameters. To deliver the best performance, the system also provides an optimal configuration method for different numbers of online users. Simulation results show that the proposed P2P-CBIR system achieves higher accuracy than previous systems, and that accuracy is highest after link-topology reconfiguration. The optimized simulations further confirm that, for the same number of users, the system attains higher accuracy and lower network traffic.
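The scope-controlled search over an unstructured overlay can be pictured as a TTL-limited, threshold-filtered flood from the query peer. The sketch below is a minimal illustration under assumed data layouts (peer/image names and the dictionary structures are hypothetical, not the thesis's actual protocol):

```python
def ttl_flood_query(peer_images, links, source, ttl, threshold):
    """Flood a query from `source` up to `ttl` hops over an unstructured
    P2P overlay; each visited peer returns only images whose similarity
    to the query meets `threshold`. Returns image ids, most similar first.

    peer_images: peer -> list of (image_id, similarity_to_query)
    links:       peer -> list of neighbor peers
    """
    visited = {source}
    frontier = [source]
    hits = [(s, i) for i, s in peer_images[source] if s >= threshold]
    for _ in range(ttl):
        nxt = []
        for p in frontier:
            for q in links.get(p, ()):
                if q in visited:
                    continue          # never re-query a peer: saves traffic
                visited.add(q)
                nxt.append(q)
                hits += [(s, i) for i, s in peer_images[q] if s >= threshold]
        if not nxt:
            break                     # the reachable overlay is exhausted
        frontier = nxt
    hits.sort(reverse=True)           # rank by similarity, descending
    return [i for _, i in hits]
```

Raising the TTL widens the retrieval scope (more peers queried, more traffic); raising the threshold filters transmissions earlier, which is the trade-off the scalable-retrieval control adjusts.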


    In this thesis, we propose to perform content-based image retrieval (CBIR) on Internet-scale databases connected through peer-to-peer (P2P) networks, abbreviated as P2P-CBIR, which utilizes an intelligent preprocessing stage to identify object regions and provides a scalable retrieval function. For preprocessing, we propose a dual multiScalE Graylevel mOrphological open and close recoNstructions (SEGON) algorithm, which utilizes the edge coverage rate to segment the foreground (FG) object regions in an image. To improve FG object segmentation accuracy, a background (BG) gray-level variation mesh is built. SEGON was developed from a macroscopic perspective on image BG gray levels and is implemented through regular procedures, so it scales to large image databases. To evaluate segmentation accuracy, the probability of coherent segmentation labeling, i.e., the normalized probabilistic Rand index (PRI), between a computer-segmented image and a hand-labeled one is computed for comparison. Experiments show that the proposed object segmentation method outperforms others in PRI performance. The normalized correlation coefficient of features among query samples is computed to integrate the similarity ranks of different features, in order to perform multi-instance queries with multiple features (MIMF). Retrieval precision-recall (PR) and rank performances, with and without SEGON, were compared; performing SEGON-enabled CBIR on large-scale databases yields higher PR performance.
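For a single hand-labeled ground truth, the PRI comparison above reduces to the plain Rand index: the fraction of pixel pairs on which the two labelings agree. A small illustrative sketch (function and variable names are assumptions, and the O(n^2) pair loop is only for clarity on tiny label maps):

```python
def rand_index(seg_a, seg_b):
    """Fraction of pixel pairs labeled consistently by two segmentations:
    a pair agrees if both maps put its two pixels in the same region, or
    both put them in different regions (invariant to label permutations)."""
    a = [lab for row in seg_a for lab in row]   # flatten 2-D label maps
    b = [lab for row in seg_b for lab in row]
    n = len(a)
    agree = total = 0
    for i in range(n):
        for j in range(i + 1, n):
            total += 1
            if (a[i] == a[j]) == (b[i] == b[j]):
                agree += 1
    return agree / total
```

With several hand-labeled ground truths per image, averaging this pairwise agreement over all of them gives the probabilistic Rand index used for the segmentation comparisons.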
    For Internet-scale CBIR, a P2P-CBIR system is proposed that helps to effectively explore the large-scale image database distributed over connected peers. A decentralized, unstructured P2P network topology is adopted as a compromise with structured ones, and an informed rather than blind search approach enables flexible routing control when peers join or leave or the network fails. P2P-CBIR adopts MIMF to reduce average network traffic while maintaining high retrieval accuracy at the query peer. In addition, scalable retrieval control is developed on the P2P-CBIR framework, which can adapt the query scope and progressively refine accuracy during the retrieval process. We also propose to record instant local database characteristics of peers so that the P2P-CBIR system can update peer-linking information. By reconfiguring the system at regular intervals, we effectively reduce the trivial peer routing and retrieval operations caused by imprecise configurations. We further propose to optimally configure the P2P-CBIR system so that, for a given number of online users, it yields the highest recall rate. Experiments show that the average recall rate of the proposed P2P-CBIR method with reconfiguration is higher than that of the method without it, and the latter outperforms previous methods under the same retrieval scope, i.e., the same time-to-live (TTL) settings. Furthermore, simulations demonstrate that, with the optimal configuration, recall rates improve while the network traffic of each peer is reduced, for the same number of online users.
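Both parts of the thesis rely on MIMF fusion to merge similarity ranks across features and query instances. One way to sketch the idea: weight each feature by how consistently the query instances agree on it, then score database images by a weighted sum of per-feature similarities. The Pearson-based similarity, the non-negative clamp, and all names below are illustrative assumptions, not the thesis's exact formulation:

```python
def pearson(u, v):
    """Normalized correlation coefficient of two equal-length vectors."""
    n = len(u)
    mu, mv = sum(u) / n, sum(v) / n
    num = sum((a - mu) * (b - mv) for a, b in zip(u, v))
    du = sum((a - mu) ** 2 for a in u) ** 0.5
    dv = sum((b - mv) ** 2 for b in v) ** 0.5
    return num / (du * dv) if du and dv else 0.0

def mimf_rank(query_feats, db_feats):
    """query_feats: feature -> list of query-instance vectors
       db_feats:    feature -> {image_id: feature vector}
    Features on which the query instances correlate strongly get more
    weight; each image scores the weighted sum, over features, of its
    best similarity to any query instance."""
    weights = {}
    for f, insts in query_feats.items():
        pairs = [(i, j) for i in range(len(insts))
                 for j in range(i + 1, len(insts))]
        if pairs:
            w = sum(pearson(insts[i], insts[j]) for i, j in pairs) / len(pairs)
        else:
            w = 1.0
        weights[f] = max(0.0, w)      # clamp: ignore contradictory features
    scores = {}
    for f, imgs in db_feats.items():
        for img, vec in imgs.items():
            sim = max(pearson(vec, q) for q in query_feats[f])
            scores[img] = scores.get(img, 0.0) + weights[f] * sim
    return sorted(scores, key=scores.get, reverse=True)
```

In this toy scheme, a feature on which the query examples disagree (e.g., two queries with unrelated shapes) contributes nothing to the final ranking, so the ranking is driven by the features the examples share.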

    1 Introduction
      1.1 Image Feature Description
        1.1.1 Object-Based Feature Extraction
        1.1.2 Frame-Based Feature Extraction
      1.2 Image Segmentation
      1.3 CBIR on P2P Networks
        1.3.1 P2P Network Models
        1.3.2 P2P System Configuration
        1.3.3 P2P Information Retrieval
      1.4 Organization and Contribution of the Thesis
    2 Image Features and Retrieval
      2.1 Introduction
      2.2 Image Descriptors of MPEG-7
        2.2.1 Color Descriptors
        2.2.2 Texture Descriptors
        2.2.3 Shape Descriptors
      2.3 Multi-Instance Query with Multiple Features
    3 Object Segmentation of Database Images by Dual Multi-Scale Morphological Reconstructions
      3.1 Introduction
      3.2 Morphological Reconstruction Operation
      3.3 Background Graylevel Variation
      3.4 Object Region Segmentation
      3.5 False Accept Exclusion
      3.6 Simulation Study
        3.6.1 Object Segmentation Accuracy
        3.6.2 Image Retrieval with Shape Descriptor
        3.6.3 Image Retrieval with Color Feature
        3.6.4 Evaluation Criterion
        3.6.5 CBIR Performance
      3.7 Summary
    4 Content-Based Image Retrieval on Peer-to-Peer Networks
      4.1 Introduction
      4.2 P2P-CBIR System
        4.2.1 System Initial Setup
        4.2.2 Similarity Measure Between Two Peer Databases
        4.2.3 Evaluation Criteria
      4.3 P2P-CBIR Retrieval Control
        4.3.1 Fix the Number of Transmitted Images n_top
        4.3.2 Fix the Similarity Threshold T^I_{τi}
      4.4 Scalable Retrieval
        4.4.1 Progressive Retrieval Refinement
        4.4.2 System Reconfiguration
        4.4.3 Timing Analysis
        4.4.4 Optimal Configuration
        4.4.5 Bandwidth Loading
      4.5 Simulation Study
        4.5.1 P2P-CBIR Performance Analysis
        4.5.2 Subjective P2P-CBIR Performance Analysis
      4.6 Summary
    5 Conclusions and Future Research
      5.1 Contributions of the Thesis
      5.2 Future Research
    References
    Appendix A Examples of Stability Criteria
    Appendix B Flowchart of the SEGON Procedure
    Appendix C The Timing Analysis of Scalable Retrieval

