
Author: 李旭清 (Xu-Qing Li)
Thesis title: 基於文字與圖像結合之跨電商平台商品匹配深度學習模型 (Deep Learning-Based Model for Cross-Ecommerce Platform Product Matching Using Text and Images)
Advisor: 鍾聖倫 (Sheng-Luen Chung)
Committee members: 鍾聖倫 (Sheng-Luen Chung), 蘇順豐 (Shun-Feng Su), 陸敬互 (Ching-Hu Lu), 徐繼聖 (Gee-Sern Hsu), 陳冠宇 (Kuan-Yu Chen)
Degree: Master
Department: Department of Electrical Engineering, College of Electrical Engineering and Computer Science
Year of publication: 2024
Graduating academic year: 112 (2023-2024)
Language: Chinese
Number of pages: 62
Keywords (Chinese): 電商平台、實體匹配、深度學習、圖像與文字結合 (e-commerce platforms, entity matching, deep learning, combining images and text)
Keywords (English): E-commerce platforms, Product matching, Deep learning, Text and image integration


Abstract:
This study addresses the issue of product matching for consumers comparing prices or services across different e-commerce platforms. The diverse channels on e-commerce platforms, including original manufacturers, agents, and individual sellers, lead to variations in product listings. Additionally, differing naming conventions across platforms result in discrepancies in product names and images for the same items. This thesis proposes a deep learning-based model that leverages both text and images to match products across platforms. The model uses product names and images as entity mentions to improve matching accuracy while minimizing computational cost. Our method involves a two-stage deep learning network architecture: first, a Block network trained with a triplet loss filters out less similar products; then a Match network, trained as a Siamese network, classifies whether the remaining product mentions refer to the same item. We also developed a platform for collecting and annotating competitive products across e-commerce platforms, providing the positive and negative samples needed to train the two-stage model. Experimental results show that continuously fine-tuning the semantic encoder through this training significantly improves both the accuracy and the efficiency of product matching. The model remains effective even with limited computational resources, making it particularly suitable for e-commerce platforms that require rapid updates and adaptation to an ever-changing market. Additionally, we introduce a product comparison modeling and deployment framework that integrates the annotation, model building, online querying, and matching deployment functionalities required for competitive product matching in e-commerce.
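To make the two-stage design concrete, below is a minimal sketch of the text-only pipeline. It is not the thesis's implementation: it assumes PyTorch with Hugging Face transformers, uses the public bert-base-chinese checkpoint as a stand-in for the domain pre-trained eComBert encoder, and the class names (BlockEncoder, MatchClassifier), layer sizes, top-K value, and sample product titles are invented for illustration. Stage 1 embeds product mentions and keeps only the K nearest catalog candidates (blocking, trained with a triplet objective); stage 2 scores each surviving pair with a Siamese-style classifier (matching).

# Minimal, illustrative sketch of the two-stage text pipeline described in the
# abstract. Assumptions not taken from the thesis: PyTorch + Hugging Face
# transformers, "bert-base-chinese" standing in for the domain pre-trained
# eComBert encoder, and invented class names, layer sizes, and product titles.

import torch
import torch.nn as nn
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer

ENCODER_NAME = "bert-base-chinese"  # placeholder for the fine-tuned semantic encoder


class BlockEncoder(nn.Module):
    """Stage 1 (blocking): embed a product mention so that matching items land
    close together in vector space; trained with a triplet objective."""

    def __init__(self, name: str = ENCODER_NAME):
        super().__init__()
        self.backbone = AutoModel.from_pretrained(name)

    def forward(self, **tokens) -> torch.Tensor:
        hidden = self.backbone(**tokens).last_hidden_state
        mask = tokens["attention_mask"].unsqueeze(-1).float()
        pooled = (hidden * mask).sum(1) / mask.sum(1).clamp(min=1e-9)  # mean pooling
        return F.normalize(pooled, dim=-1)


class MatchClassifier(nn.Module):
    """Stage 2 (matching): Siamese-style head that decides whether a candidate
    pair surviving the blocking stage refers to the same product."""

    def __init__(self, dim: int = 768):
        super().__init__()
        self.head = nn.Sequential(nn.Linear(dim * 3, 256), nn.ReLU(), nn.Linear(256, 1))

    def forward(self, a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
        features = torch.cat([a, b, torch.abs(a - b)], dim=-1)  # SBERT-style pairing
        return self.head(features).squeeze(-1)  # logit: match / no match


def embed(encoder, tokenizer, titles):
    tokens = tokenizer(titles, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        return encoder(**tokens)


if __name__ == "__main__":
    tokenizer = AutoTokenizer.from_pretrained(ENCODER_NAME)
    blocker, matcher = BlockEncoder(), MatchClassifier()

    query = ["Apple AirPods Pro 2 真無線藍牙耳機"]            # mention from platform A
    catalog = ["AirPods Pro 第2代 無線耳機",                   # candidates from platform B
               "Galaxy Buds2 Pro 降噪耳機",
               "iPhone 15 MagSafe 保護殼"]

    q, c = embed(blocker, tokenizer, query), embed(blocker, tokenizer, catalog)

    # Stage-1 training objective (values shown for illustration only; real
    # training would backpropagate through the encoder using annotated
    # positive/negative pairs from the collection-and-annotation platform).
    triplet = nn.TripletMarginLoss(margin=0.5)(q, c[0:1], c[1:2])

    # Stage 1 at query time: keep only the top-K most similar catalog items.
    k = 2
    topk = torch.topk(q @ c.T, k=k, dim=-1).indices[0]

    # Stage 2: classify each surviving candidate pair with the Siamese head.
    scores = torch.sigmoid(matcher(q.expand(k, -1), c[topk]))
    print(float(triplet), {catalog[i]: float(s) for i, s in zip(topk.tolist(), scores)})

Restricting the more expensive pairwise classifier to the K candidates returned by the blocking stage is what keeps the comparison cost manageable as the catalog grows, which is the efficiency argument made in the abstract.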

Table of contents:
Abstract (Chinese) II; Abstract (English) III; Table of Contents IV; List of Figures VIII; List of Tables X
Chapter 1: Introduction 1
  1.1 Definition of the Entity Matching Problem 1
  1.2 Research Background 3
    1.2.1 Different Names for the Same Product 3
    1.2.2 Duplicate Product Listings 3
    1.2.3 Bundled (Set) Products 4
    1.2.4 Domain-Specific Pre-trained Models 4
    1.2.5 Application Scenarios for Product Matching 5
  1.3 Research Motivation and Contributions 6
  1.4 Thesis Organization 7
Chapter 2: Literature Review 8
  2.1 Entity Matching Notation 8
  2.2 Entity Matching 8
    2.2.1 Unstructured Data 9
  2.3 Deep Learning Methods 10
    2.3.1 Text Model 11
    2.3.2 Image Model 13
    2.3.3 Multimodal Model 15
Chapter 3: Training and Test Data 17
  3.1 Data Processing Pipeline for the Product Matching Task 17
    3.1.1 Model Pre-training Data 19
  3.2 Data Sources and Acquisition 20
  3.3 Product Matching Data Annotation 21
    3.3.1 Product Features 21
    3.3.2 Causes of Differences in Product Names 21
    3.3.3 Different Product Matching Tasks 24
    3.3.4 Annotation Method 25
    3.3.5 Annotation Tools 26
Chapter 4: Methodology 29
  4.1 E-commerce Domain Pre-trained Model: eComBert 29
  4.2 eComMatch Model Architecture 30
    4.2.1 Siamese Network (Matching Model) 31
    4.2.2 Triplet Network (Blocking Model) 32
    4.2.3 Advantages of the Two-Stage Design 33
    4.2.4 Image Processing Model 34
    4.2.5 Multimodal Model Architecture 34
    4.2.6 Loss Functions 35
  4.3 Positive and Negative Sampling Strategies 36
    4.3.1 Lexical Bottom-K Positive Samples 37
    4.3.2 Lexical Top-K Negative Samples 37
    4.3.3 Simple Random Negative Samples 38
  4.4 Categories of Training Samples 38
    4.4.1 B2C 39
    4.4.2 C2C 40
Chapter 5: Experiments and Discussion 42
  5.1 Evaluation Methods and Metrics for the Two-Stage Product Matching Model 42
    5.1.1 Evaluation of the Blocking Model 42
    5.1.2 Evaluation of the Matching Model 43
    5.1.3 Two-Stage Evaluation Method 44
  5.2 Experimental Results in the B2C Scenario 44
    5.2.1 B2C Two-Stage Experiments 45
    5.2.2 B2C Blocking Experiments 45
    5.2.3 B2C Matching Experiments 46
  5.3 Experimental Results in the C2C Scenario 46
    5.3.1 C2C Two-Stage Experiments 47
    5.3.2 C2C Blocking Experiments 47
    5.3.3 C2C Matching Experiments 48
    5.3.4 Analysis of Experimental Results 48
  5.4 Pre-trained Model Comparison Experiments 49
    5.4.1 Blocking Model: Pre-trained Model Comparison 49
    5.4.2 Matching Model: Pre-trained Model Comparison 49
    5.4.3 Analysis of Experimental Results 50
    5.4.4 Two-Stage Performance Experiments 50
    5.4.5 Two-Stage Comparison across Different K Values 50
    5.4.6 Two-Stage vs. Single-Stage Comparison 51
Chapter 6: Conclusion and Future Work 52
  6.1 Main Contributions 52
    6.1.1 Multimodal Product Matching Technique 52
    6.1.2 Competitive Product Collection and Annotation Platform 52
  6.2 Future Work 53
    6.2.1 Multimodal Data Fusion Strategies 53
    6.2.2 System Scalability 53
    6.2.3 Model Interpretability 53
    6.2.4 Data Annotation and Quality Control 54
  6.3 Conclusion 54
References 55
Appendix A: Chinese-English Glossary 58
Appendix B: Examination Committee Comments and Responses 60


Full-text release date: 2027/08/26 (off-campus access)
Full-text release date: 2027/08/26 (National Central Library: Networked Digital Library of Theses and Dissertations in Taiwan)