
Graduate student: 黃俊宏 (Chun-Hong Huang)
Thesis title: 有效率的平行頻繁樣式挖掘與建立語言學習物件樣式之方法
Efficient Parallel Frequent Pattern Mining and a Method for Establishing Language Learning Object Patterns
Advisor: 呂永和 (Yung-Ho Leu)
Committee members: 葉耀明 (Yao-Ming Yeh), 楊維寧 (Wei-Ning Yang), 陳雲岫 (Yun-Shiow Chen), 查士朝 (Shi-Cho Cha), 呂永和 (Yung-Ho Leu)
Degree: Doctoral
Department: Department of Information Management, School of Management
Year of publication: 2019
Graduation academic year: 107
Language: English
Number of pages: 80
Chinese keywords: conditional pattern collection, single-level parallel frequent itemset mining, multi-level partition parallel frequent itemset mining, cooperative learning, dynamic text objectization
English keywords: CPC, PFIM, SLPFIM, MLPFIM, EDOM
Hits: 243; Downloads: 0
  • In recent years, data mining and data processing have been important topics in the information industry. Big data and all kinds of text are widely used, and processing them consumes a great deal of time and storage space, so the efficiency of data processing and the effective use of data space have long been important research problems. This thesis consists of three studies, focusing on improving the efficiency of parallel frequent pattern mining, and on reducing the data space of object patterns in web texts while improving the efficiency of accessing them.
    In the first study, we present a parallel conditional pattern collection mining algorithm (CPC). The algorithm partitions the transaction database by frequent 1-itemsets into sub-databases called conditional pattern collections, which are then processed concurrently by different threads; each thread can run any existing FIM algorithm. Experiments show that, on a four-core personal computer, the CPC algorithm speeds up existing eclat, FP-growth, and Apriori implementations by 2 to 20 times. The CPC algorithm is also easy to implement, especially for datasets stored in a database.
    In the second study, we present a multi-level partition parallel frequent itemset mining algorithm (MLPFIM) that improves on the single-level partition algorithm (SLPFIM). MLPFIM further partitions the time-consuming 1-itemset sub-datasets into corresponding sub-sub-datasets, achieving load balancing and relieving the performance bottleneck of SLPFIM. Experiments show that MLPFIM achieves up to a 23-fold speedup over a single-threaded FIM algorithm and a 2.14-fold speedup over SLPFIM; its improvement is more pronounced on dense datasets than on sparse ones.
    The third study objectizes the patterns in web texts. We present an Efficient Dynamic Objectization Method (EDOM) that provides a more intuitive way to operate on web texts, offering pronunciation, meaning, and annotation for words; sentence objects additionally provide whole-sentence annotation, translation, and pronunciation, and the annotations support cooperative learning. We evaluated EDOM with a Technology Acceptance Model (TAM) questionnaire and a hands-on comparison. The respondents were highly satisfied with EDOM for reading comprehension and vocabulary learning; in the comparison, the ease of use of the EDOM gloss was significantly higher than that of Google Dictionary, and the ease of use and usefulness of EDOM Annotation were significantly higher than those of the Annotator. The EDOM algorithm saves the storage space of object patterns in web texts and reduces the extra network traffic caused by objectization, and it has been granted an invention patent of the Republic of China (Taiwan).


    In recent years, data mining and data processing have become important issues in the information industry. Big data and various kinds of text are widely used, yet processing them consumes considerable time and storage space, so the efficiency of data processing and the effective use of memory space have long been important research problems. This thesis consists of three articles, focusing on improving parallel frequent pattern mining and on reducing the data space and improving the access efficiency of object patterns in online texts.
    The first article proposes a parallel FIM algorithm for multi-core computer systems. Using LINQ queries, the proposed algorithm divides the transaction database into frequent-1-itemset-based subsets, called conditional pattern collections (CPCs). The conditional pattern collections are then processed concurrently by different threads of the computer system, each running an existing implementation of a FIM algorithm. The experimental results showed that, on a four-core Intel i7 personal computer, the proposed algorithm achieves 2-, 4-, and 20-fold speedups over existing implementations of the eclat, FP-growth, and Apriori algorithms, respectively. Moreover, the CPC algorithm is easy to implement, especially when the dataset is stored in a database.
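The partition-then-mine idea behind CPC can be sketched as follows. This is a minimal Python sketch under stated assumptions (the thesis implementation uses LINQ on .NET); names such as `cpc_mine` are illustrative, and the per-partition miner is a stub standing in for an existing FIM implementation.

```python
from collections import Counter
from concurrent.futures import ThreadPoolExecutor

def frequent_1_itemsets(transactions, min_support):
    counts = Counter(item for t in transactions for item in t)
    return {item for item, c in counts.items() if c >= min_support}

def conditional_pattern_collections(transactions, min_support):
    """Partition the database into one sub-database per frequent item.

    Items are ordered by descending support (as in FP-growth); the
    collection for item i holds, for every transaction containing i,
    the items that precede i in that order (i's prefix path).
    """
    freq = frequent_1_itemsets(transactions, min_support)
    counts = Counter(i for t in transactions for i in t if i in freq)
    order = {item: rank for rank, (item, _) in enumerate(counts.most_common())}
    collections = {item: [] for item in freq}
    for t in transactions:
        kept = sorted((i for i in t if i in freq), key=order.__getitem__)
        for pos, item in enumerate(kept):
            collections[item].append(kept[:pos])  # prefix path of `item`
    return collections

def mine_partition(item, sub_db, min_support):
    # Stub miner: only counts frequent 1-extensions of `item`; in the
    # thesis, each partition is handed to an existing FIM implementation.
    counts = Counter(i for t in sub_db for i in t)
    return {(i, item) for i, c in counts.items() if c >= min_support}

def cpc_mine(transactions, min_support, workers=4):
    parts = conditional_pattern_collections(transactions, min_support)
    with ThreadPoolExecutor(max_workers=workers) as pool:
        futures = [pool.submit(mine_partition, item, db, min_support)
                   for item, db in parts.items()]
    return set().union(*(f.result() for f in futures))
```

Because each collection is independent, the threads share no mutable state, which is what makes the scheme easy to parallelize.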
    The second article presents a parallel frequent itemset mining algorithm for a cluster of personal computers. To facilitate parallel mining, a prefix-path-based method decomposes a transactional dataset into its frequent 1-itemset sub-datasets; we call the parallel algorithm based on this decomposition the single-level parallel frequent itemset mining (SLPFIM) algorithm. To mitigate the bottleneck caused by time-consuming 1-itemset sub-datasets, we propose a multi-level parallel frequent itemset mining (MLPFIM) algorithm that further decomposes the time-consuming sub-datasets into their corresponding sub-sub-datasets. The finer granularity of the sub-sub-datasets improves load balancing in parallel frequent itemset mining. The experimental results showed that the SLPFIM achieved a maximum speedup of 11.9 times over the non-parallel FP-Growth algorithm, while the MLPFIM achieved a maximum speedup of 23.1 times; the MLPFIM also offered a maximum speedup of 2.14 times over the SLPFIM. The MLPFIM provided larger improvements on dense datasets than on sparse ones.
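The multi-level decomposition step can be illustrated with a small sketch. This is an assumption-laden Python illustration, not the thesis code: `estimate_cost` is a crude stand-in for the execution-time estimation described in Chapter 3, and the second-level split key (the last item of each prefix path) is one plausible way to form sub-sub-datasets.

```python
def estimate_cost(sub_db):
    # Crude workload proxy: total number of items in the prefix paths.
    # The thesis derives a real execution-time estimate per sub-dataset.
    return sum(len(t) for t in sub_db)

def multi_level_decompose(partitions, threshold):
    """Split any 1-itemset sub-dataset whose estimated cost exceeds
    `threshold` into sub-sub-datasets keyed by the last item of each
    prefix path, yielding finer-grained, better-balanced tasks."""
    tasks = []
    for item, sub_db in partitions.items():
        if estimate_cost(sub_db) <= threshold:
            tasks.append(((item,), sub_db))       # cheap: keep as one task
            continue
        second_level = {}
        for path in sub_db:
            if not path:                          # empty prefix: nothing to extend
                continue
            second_level.setdefault(path[-1], []).append(path[:-1])
        for second, sub_sub in second_level.items():
            tasks.append(((second, item), sub_sub))
    # Dispatching the largest tasks first (LPT order) helps load balancing.
    tasks.sort(key=lambda t: estimate_cost(t[1]), reverse=True)
    return tasks
```

Each task is then assigned to a thread on some cluster node; because the heavy sub-datasets have been split, no single task dominates the schedule.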
    The third article is aimed at objectizing the patterns in an online text. We propose an efficient dynamic text objectization method (EDOM) that offers a more intuitive way to access the pronunciation, meaning, and annotation functions of a word, phrase, or sentence in an online text. Additionally, EDOM supports cooperative learning among users.
    To evaluate public acceptance of the EDOM component, we administered a technology acceptance model (TAM) questionnaire on the use of the EDOM component and the comparison tools. According to the questionnaire, the respondents were highly satisfied with EDOM for reading comprehension and vocabulary learning. Regarding user experience, the ease of use of the EDOM Gloss function was significantly higher than that of Google Dictionary, and the ease of use and usefulness of the EDOM Annotation function were significantly higher than those of the Annotator. Besides, the EDOM component is easy to embed in teaching materials. The EDOM algorithm saves the storage space of object patterns in an online text and reduces the network traffic needed to transmit the objects. EDOM has been granted an invention patent by the Republic of China, Taiwan.
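The storage saving comes from objectizing the text dynamically at display time instead of storing pre-objectized markup: only the plain text is stored and transmitted, and the word and sentence objects are built when the page is rendered. A minimal Python sketch of that idea follows (the actual EDOM component runs in the browser; the class and function names here are hypothetical, not from the thesis).

```python
import re

class TextObject:
    """A word or sentence wrapped as an addressable object; glosses and
    annotations are attached on demand rather than stored with the text,
    which is what keeps the stored document small."""
    def __init__(self, kind, index, text):
        self.kind, self.index, self.text = kind, index, text
        self.annotations = []      # cooperative notes accumulate here

def objectize(text):
    # Dynamic objectization: split the plain stored text into sentence
    # objects, each holding its word objects. Done at display time, so
    # only the raw text needs to be stored or transmitted.
    sentences = []
    for s_idx, raw in enumerate(re.findall(r'[^.!?]+[.!?]?', text)):
        raw = raw.strip()
        if not raw:
            continue
        sent = TextObject('sentence', s_idx, raw)
        sent.words = [TextObject('word', w_idx, w)
                      for w_idx, w in enumerate(re.findall(r"[\w']+", raw))]
        sentences.append(sent)
    return sentences

doc = objectize("EDOM wraps words. Users annotate sentences.")
doc[0].words[1].annotations.append("gloss: to enclose")  # a shared note
```

A sentence object can carry whole-sentence annotations the same way, and serializing only `text` plus the annotation lists keeps the transmitted payload close to the size of the plain text.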

    Index of Contents
    論文摘要 II
    Abstract III
    Acknowledgements IV
    List of Figures IX
    List of Tables XI
    Chapter 1 Introduction 1
    1-1 Research motivation 1
    1-2 Problem definition 1
    1-3 Terms definition 2
    1-4 Research method 4
    Chapter 2 The Conditional Pattern Collection Algorithm for Parallel Frequent Itemset Mining on a Multi-Core Computer System 5
    2-1 Introduction 5
    2-2 Background 7
    1. FP-tree 7
    2. Language Integrated Query 8
    3. Storage format of transaction database 8
    2-3 Disk I/O Bottleneck 9
    2-4 The conditional pattern collection based algorithm 10
    1. Generating conditional pattern collections 10
    2. Task scheduling based on conditional pattern collections 14
    2-5 Experimental results 16
    1. Experimental setup 16
    2. Datasets 16
    3. Performance comparison 16
    4. Execution time of different steps in FIM 18
    5. Load Balancing 19
    2-6 Discussions 21
    1. Complexity reduction by divide-and-conquer 21
    2. Disk I/O bottleneck prevention by using multiple disks 22
    3. Load balancing by randomization 22
    Chapter 3 Multi-Level Dataset Decomposition for Parallel Frequent Itemset Mining on a Cluster of Personal Computers 23
    3-1 Introduction 23
    3-2 Related work 23
    3-3 The proposed algorithms 25
    1. Prefix-path based 1-itemset sub-dataset 25
    2. Execution time estimation of frequent 1-itemset sub-datasets 27
    3. System architecture and the frequent itemset mining algorithm 31
    3-4 Experimental setup and experimental results 33
    1. Experimental setup 34
    2. Experimental results 35
    Chapter 4 On the Development and Evaluation of the EDOM Web Reading Component 41
    4-1 Introduction 41
    4-2 Related Work 43
    1. Reading and vocabulary learning environment 43
    2. Web gloss in CAVL 45
    3. Web annotation function 46
    4-3 Discussion 46
    1. The existing Web gloss 46
    2. The existing Web annotation 48
    4-4 The EDOM algorithm and its features 49
    1. The EDOM data structure 49
    2. The EDOM features 51
    4-5 Evaluation 52
    Chapter 5 Conclusion 58
    References 60
    Published Works 67

    List of Figures
    Fig. 2-1 The FP-tree of the transaction database in Table 2 8
    Fig. 2-2 File format vs. Table format of a transaction database 9
    Fig. 2-3 Generating frequent 1-itemsets 11
    Fig. 2-4 Using LINQ queries to construct the reduced transaction Table 12
    Fig. 2-5 The link-path algorithm and the prefix path Table 13
    Fig. 2-6 An execution of the Link-path algorithm 14
    Fig. 2-7 LINQ query to generate conditional pattern collections 15
    Fig. 2-8 The Conditional Pattern Collections (CPCs) 15
    Fig. 2-9 Execution time of FP-Growth 17
    Fig. 2-10 Execution time of eclat 17
    Fig. 2-11 Performance on the accidents dataset 17
    Fig. 2-12 Disk I/O bottleneck 17
    Fig. 2-13 Apriori on T40I10D100K 18
    Fig. 2-14 Apriori on the accidents dataset 18
    Fig. 2-15 Load balancing by randomization 20
    Fig. 2-16 Performance improvement by randomization 21
    Fig. 3-1 Frequent 1-itemset sub-dataset decomposition of a transactional dataset 27
    Fig. 3-2 The logarithmic relationship between the order of a frequent 1-itemset and the 28
    Fig. 3-3 Execution time distribution on the Chess dataset 29
    Fig. 3-4 Execution time distribution on the Kosarak dataset 29
    Fig. 3-5 System architecture 31
    Fig. 3-6 The frequent itemset mining algorithm 33
    Fig. 3-7 Speed-ups of the MLPFIM and SLPFIM vs. non-parallel FP-Growth 35
    Fig. 3-8 The execution times of different algorithms 35
    Fig. 3-9 The execution times for different numbers of threads 36
    Fig. 3-10 The execution times of the algorithms for different numbers of threads 37
    Fig. 3-11 The execution times of the SLPFIM and MLPFIM on the Kosarak dataset 38
    Fig. 3-12 Speed-ups on the accidents dataset with support count set at 1700 40
    Fig. 4-1 Journey of a vocabulary item 41
    Fig. 4-2 Readlang 47
    Fig. 4-3 VoiceTube 48
    Fig. 4-4 Annotor.org 49
    Fig. 4-5 EDOM data structure 50
    Fig. 4-6 EDOM Operation 52
    Fig. 4-7 EDOM Gloss compared to Google Dic. 54
    Fig. 4-8 EDOM Annotation compared to Annotator 55

    List of Tables
    Table 1-1 An example of k-itemsets 2
    Table 2-1 A transaction database 8
    Table 2-2 Execution time with different numbers of disks 10
    Table 2-3 The transaction Table 10
    Table 2-4 Experimental environment 16
    Table 2-5 The Datasets 16
    Table 2-6 Execution time of different steps in FIM 19
    Table 2-7 Execution time summary 21
    Table 3-1 Execution time comparison of 1-itemset sub-datasets 30
    Table 3-2 Execution time analysis of the FP-Growth program on different datasets 31
    Table 3-3 Specification of the personal computers 34
    Table 3-4 The experimental datasets 34
    Table 3-5 Bottleneck analysis 37
    Table 3-6 Execution time analysis of a specific thread 39
    Table 3-7 The execution times of all the threads on the Kosarak dataset 39
    Table 4-1 Evaluation on the Ease of Use and Usefulness of EDOM Gloss 53
    Table 4-2 Evaluation on the Ease of Use and Usefulness of EDOM Annotations 55
    Table 4-3 Time of Trial Use of EDOM and Annotator+Google Dic. 56
    Table 4-4 Function comparison 56


    Full text available from 2024/01/30 (campus network).
    Full text not authorized for public release (off-campus network).
    Full text not authorized for public release (National Central Library: Taiwan thesis system).