
Graduate Student: Kuo-Hsiu Chen (陳國修)
Thesis Title: An Integrated Consistency-Based Framework for Learning Heterogeneous Fuzzy Information (一個可處理異質模糊資訊的整合式學習架構)
Advisor: Hahn-Ming Lee (李漢銘)
Committee Members: Yuh-Jye Lee (李育杰), Ching-Chi Hsu (許清琦), Cheng-Seen Ho (何正信), Jan-Ming Ho (何建明)
Degree: Master
Department: Department of Electronic and Computer Engineering, College of Electrical Engineering and Computer Science
Publication Year: 2005
Graduation Academic Year: 93 (2004-2005)
Language: English
Pages: 117
Keywords: Data Reduction, Data Mining, Lattice Theory, Fuzzy Set Theory, Machine Learning, Pattern Recognition, Similarity Measure
    Advances in digital technology and the prevalence of the Internet have caused massive amounts of information to grow at an astonishing rate. These data come from many different sources and serve different purposes, so they appear in disparate forms such as binary vectors, multidimensional numeric vectors, nominal attributes, rules, and fuzzy data. Handling these heterogeneous data types effectively, and extracting useful knowledge, trends, or anomalous rules from massive data, has become an important challenge of the information age. In recent years, knowledge discovery in databases (KDD) and data mining have applied statistical methods, machine learning, neural networks, fuzzy theory, genetic algorithms, and related techniques precisely in order to find useful knowledge in large volumes of data for decision analysis. Nevertheless, several deficiencies in the handling of heterogeneous data remain to be overcome. First, most current techniques can process only a single data type and cannot accept multiple heterogeneous types directly; heterogeneous data must be converted in advance, and the conversion often introduces distortion. Although fuzzy numbers can represent both numeric and linguistic data, in practice they are still converted into numbers for processing, and this style of computation is unsuitable for unordered nominal attributes. Second, traditional similarity measures are mostly designed for a single data type and cannot directly compare objects of mixed types. Third, the goal of data mining is to condense massive data into compact concepts, yet maintaining consistency with the original data during reduction usually requires extremely time-consuming verification, and most reduction techniques are likewise inapplicable to heterogeneous data types.
    This dissertation focuses on several key problems in learning from heterogeneous data: the representation of heterogeneous data and concepts, similarity measures for heterogeneous objects, and consistency-based learning of heterogeneous concepts. First, combining mathematical lattice theory with fuzzy set theory, this dissertation designs the fuzzy region and fuzzy exemplar representations, which can express any combination of data types, including binary values, crisp numbers, crisp intervals, nominal attributes, crisp rules, fuzzy numbers, fuzzy intervals, discrete fuzzy sets, and fuzzy rules. The same representation serves both the original heterogeneous data and the heterogeneous concepts or rules obtained by learning. Second, this dissertation proposes similarity measures applicable to heterogeneous data and concepts, including distance measures, directed distance measures, and inclusion measures. Traditional similarity measures for discrete fuzzy sets mostly compute the degree of intersection of two sets; if the sets do not intersect, the similarity is zero, a property that can reduce a system's generalization ability. The proposed measures extend the value difference metric (VDM), which compares two distinct nominal values probabilistically, to the comparison of heterogeneous fuzzy sets, so that a reasonable similarity can be obtained for any two objects in a heterogeneous feature space even when they share no intersection. Third, based on lattice theory, this dissertation proposes a learning procedure that maintains consistency efficiently: while reducing the sample size, it guarantees consistency with the original samples without performing time-consuming verification at every step. The procedure automatically forms suitable decision regions in a single pass over the data according to the distribution of heterogeneous samples in the feature space, and a hierarchical learning scheme progressively yields highly condensed heterogeneous concepts. This substantially lowers memory requirements and speeds up subsequent classification while still keeping the classification results consistent with the original data.
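The fuzzy-region idea described above can be made concrete with a small sketch. The following Python is an illustrative assumption, not the dissertation's actual formulation: it encodes crisp numbers, crisp intervals, and trapezoidal fuzzy intervals as maps from alpha level to interval, so the lattice join (the smallest region covering both operands) can be computed per alpha level. All names (`crisp`, `alpha_cut`, `join`, ...) are hypothetical, and the thesis's fuzzy regions additionally cover nominal attributes and discrete fuzzy sets.

```python
# Illustrative sketch (assumed encoding, hypothetical names): a "fuzzy
# region" over one numeric feature is a map from alpha level to the
# interval of values with at least that membership degree.

def crisp(x):
    # a crisp number is the degenerate interval [x, x] at full membership
    return {1.0: (x, x)}

def interval(lo, hi):
    # a crisp interval has full membership everywhere inside it
    return {1.0: (lo, hi)}

def fuzzy_interval(support_lo, core_lo, core_hi, support_hi):
    # a trapezoidal fuzzy interval, kept as two alpha levels
    return {0.0: (support_lo, support_hi), 1.0: (core_lo, core_hi)}

def alpha_cut(region, alpha):
    # interval of values whose membership is at least alpha
    # (approximated by the smallest stored level >= alpha)
    eligible = [lv for lv in region if lv >= alpha]
    return region[min(eligible)] if eligible else region[max(region)]

def join(a, b):
    # lattice join: per alpha level, the smallest interval
    # enclosing both regions' alpha-cuts
    out = {}
    for lv in sorted(set(a) | set(b)):
        (al, ah), (bl, bh) = alpha_cut(a, lv), alpha_cut(b, lv)
        out[lv] = (min(al, bl), max(ah, bh))
    return out
```

With this encoding, `join(crisp(3.0), interval(1.0, 2.0))` yields `{1.0: (1.0, 3.0)}`, the smallest region covering both operands, which is the kind of operation a learner can use to grow an exemplar over several training samples.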
    Experiments on real-world data show that the proposed framework can directly process various heterogeneous data and rules and can compute reasonable similarities for heterogeneous samples anywhere in the feature space. The condensed concepts it learns greatly reduce memory requirements, and the consistency of their classification results surpasses that of other learning methods. Moreover, because the reduction process needs no time-consuming verification to maintain consistency, learning is much faster. These results demonstrate that the proposed solution, which combines mathematical lattices with fuzzy set theory, can be applied successfully to heterogeneous data of different types and can rapidly distill compact knowledge from massive data for subsequent decision analysis.


    Advances in digital technology and the Internet have led to the growth of huge amounts of information at a phenomenal rate. Such information is gathered from various sources and applied to different applications, so it appears in disparate types such as binary vectors, multidimensional vectors of numbers, nominal attributes, rules, and fuzzy data. Dealing properly with heterogeneous data and discovering useful knowledge, trends, and anomalies in vast amounts of data is one of the grand challenges of the information age. In recent years, knowledge discovery in databases (KDD) and data mining, which merge statistics, machine learning, neural networks, fuzzy set theory, genetic algorithms, and related areas, have aimed at extracting useful knowledge from large collections of data. However, deficiencies remain in the treatment of disparate, heterogeneous information. First, most current approaches handle only a single data type rather than dealing with disparate types of data directly; heterogeneous data must be transformed into a single type in advance, and the transformation may cause distortion. Although fuzzy numbers can express both numeric and linguistic data, they are still treated numerically, which is improper for nominal data, since nominal attributes are usually discrete and unordered. Second, previously proposed similarity measures usually focus on a single data type and cannot be used directly to compare mixed types of information. Furthermore, data mining tries to find a reduced set of representatives, but most approaches employ exhaustive verification to maintain consistency with the original data, and most reduction methods cannot be applied to disparate heterogeneous data.
    This research focuses mainly on the critical issues in learning disparate heterogeneous data: the representation of heterogeneous data and concepts, similarity measures for heterogeneous objects, and consistency-based learning of heterogeneous concepts. First, this research combines mathematical lattice and fuzzy set theories to design a representation scheme of fuzzy regions and fuzzy exemplars. This representation can express any combination of disparate data types, including binary numbers, crisp numbers, crisp intervals, nominal attributes, crisp rules, fuzzy numbers, fuzzy intervals, discrete fuzzy sets, fuzzy rules, etc., and it can represent both the original heterogeneous data and the discovered heterogeneous concepts or rules. Next, this research proposes a new class of similarity measures applicable to heterogeneous data and concepts, including distance measures, directed distance measures, and inclusion measures. A problem with previous similarity measures for discrete fuzzy sets is that most of them calculate the intersection of two sets; if there is no intersection, the similarity is zero, a property that may decrease the generalization ability of a learner. The suggested similarity measures are based on the value difference metric (VDM), which compares nominal attributes using class-conditional probabilities, extended here to the comparison of heterogeneous fuzzy sets. In this way a reasonable similarity of any two objects in the discrete universe can be obtained, even if they share no intersection. Moreover, this research proposes an efficient consistent learning procedure based on mathematical lattice theory: while the sample size is being reduced, consistency with the original samples is ensured without time-consuming verification.
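The VDM idea in the paragraph above can be sketched as follows. This is a minimal illustration under assumed definitions (the function names and the mixture-style extension to discrete fuzzy sets are hypothetical, not the dissertation's exact measure): VDM scores two nominal values by how differently they distribute over the class labels, so even disjoint fuzzy sets over nominal values receive a graded, informative distance instead of a blanket zero similarity.

```python
from collections import Counter

def vdm(values, labels, a, b, q=2):
    """Value Difference Metric between two nominal values a and b:
    sum over classes c of |P(c|a) - P(c|b)|**q, estimated from data."""
    classes = set(labels)
    count_a = Counter(l for v, l in zip(values, labels) if v == a)
    count_b = Counter(l for v, l in zip(values, labels) if v == b)
    n_a = sum(count_a.values()) or 1   # guard against unseen values
    n_b = sum(count_b.values()) or 1
    return sum(abs(count_a[c] / n_a - count_b[c] / n_b) ** q for c in classes)

def fuzzy_vdm(values, labels, fa, fb, q=2):
    """Hypothetical extension to discrete fuzzy sets: fa and fb map
    nominal value -> membership degree, and the distance is the
    membership-weighted average of pairwise VDM distances."""
    za, zb = sum(fa.values()), sum(fb.values())
    return sum(
        (ma / za) * (mb / zb) * vdm(values, labels, va, vb, q)
        for va, ma in fa.items()
        for vb, mb in fb.items()
    )
```

With toy data where 'red' and 'blue' predict class 0 equally well, the disjoint fuzzy sets `{'red': 1.0}` and `{'blue': 1.0}` come out at distance 0.0, whereas an intersection-based measure would declare them maximally dissimilar.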
This learning procedure follows the distribution of heterogeneous data in the feature space and automatically forms suitable decision regions in a single pass through the data. Highly reduced heterogeneous concepts are gradually condensed by a hierarchical learning scheme. This not only reduces the memory requirement and raises the classification speed, but also sustains the consistency of the classification results with the original data.
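To make the learning idea concrete, here is a deliberately naive single-pass baseline (hypothetical names, not the dissertation's algorithm): each exemplar is a per-feature interval box with a class label, grown by the lattice join with the nearest same-class sample, and an expansion is accepted only if the grown box stays consistent with every sample seen so far. Note that this baseline performs exactly the exhaustive consistency scan that the proposed lattice-based procedure is designed to avoid.

```python
def learn_exemplars(samples):
    """Single-pass sketch: each exemplar is (lows, highs, label), i.e.
    the join (bounding box) of the points it has absorbed. A new sample
    expands the nearest same-class box only if the expanded box would
    not swallow any previously seen sample of another class."""
    exemplars = []  # list of (lows, highs, label)
    seen = []       # samples absorbed so far, for the consistency check
    for x, y in samples:
        seen.append((x, y))
        # find the nearest same-class exemplar (box distance per feature)
        best = None
        for i, (lo, hi, lab) in enumerate(exemplars):
            if lab != y:
                continue
            d = sum(max(l - v, 0, v - h) for v, l, h in zip(x, lo, hi))
            if best is None or d < best[0]:
                best = (d, i)
        if best is not None:
            _, i = best
            lo, hi, lab = exemplars[i]
            nlo = [min(l, v) for l, v in zip(lo, x)]
            nhi = [max(h, v) for h, v in zip(hi, x)]
            # exhaustive consistency check: the expansion must not
            # cover an opposite-class sample seen so far
            ok = all(
                not (lab != y2 and all(l <= v <= h
                                       for v, l, h in zip(x2, nlo, nhi)))
                for x2, y2 in seen
            )
            if ok:
                exemplars[i] = (nlo, nhi, lab)
                continue
        # otherwise start a new point exemplar
        exemplars.append((list(x), list(x), y))
    return exemplars
```

The check scans all absorbed samples on every expansion, so it costs O(n) per step; per the abstract, the thesis's contribution is guaranteeing the same consistency through lattice properties without this scan.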
    Experimental results on real-world data sets indicate that the proposed approach can directly process disparate types of data and rules and can calculate reasonable similarities for all heterogeneous samples in the feature space. Moreover, the condensed concepts obtained by the proposed learning scheme greatly reduce the memory requirement, and the consistency of their classification results is better than that of other learning methods. Since the reduction process does not go through exhaustive verification for consistency, learning is also greatly accelerated. These results suggest that the proposed approach, which combines mathematical lattice and fuzzy set theories, can be applied effectively to disparate types of heterogeneous data and can rapidly distill greatly condensed knowledge from overwhelming amounts of information for decision analysis.

    Chinese Abstract
    English Abstract
    Acknowledgement
    Contents
    List of Tables
    List of Figures
    Chapter 1 Introduction
      1.1 Motivations
      1.2 Concept Representation
      1.3 Similarity Measures
      1.4 Consistency Maintenance in Machine Learning
      1.5 Our Goal and Design
      1.6 Organization of this Dissertation
    Chapter 2 Background Knowledge
      2.1 Lattice Theory
      2.2 Fuzzy Set Theory
    Chapter 3 Heterogeneous Concept Representation
      3.1 Lattices and Fuzzy Sets
      3.2 The Lattice of Fuzzy Regions
      3.3 Fuzzy Exemplars for Heterogeneous Concepts
    Chapter 4 Heterogeneous Measures of Similarity
      4.1 Heterogeneous Distance Measures for Fuzzy Regions
      4.2 The Heterogeneous Hausdorff Separation between Fuzzy Regions
      4.3 The Heterogeneous Inclusion Measure between Fuzzy Regions
    Chapter 5 Consistent Learning with Fuzzy Exemplars
      5.1 Derivations of Fuzzy Exemplar Sets
      5.2 Consistency of Derived Fuzzy Exemplar Sets
      5.3 Fuzzy Exemplar Derivation Algorithm with Efficient Consistency Verification
      5.4 Fuzzy Exemplar Derivation Algorithm with Pruning
    Chapter 6 Experiments
      6.1 Experimental Evaluation on Fuzzy Datasets
      6.2 Experimental Evaluation on Real World Datasets
    Chapter 7 Conclusions and Future Research
      7.1 Conclusions
      7.2 Future Research
    References
    Appendix A A Multiclass Neural Network Classifier with Fuzzy Teaching Inputs
    Appendix B A Neural Network Classifier with Disjunctive Fuzzy Information
    Curriculum Vitae
    Publication List


    Full text not authorized for public release (campus network, off-campus network, or the National Digital Library of Theses and Dissertations, Taiwan).