CEN 559 Machine Learning, 2011-2012 Fall Term
DEPARTMENT of COMPUTER SCIENCE and INFORMATION TECHNOLOGIES
Dr. Abdülhamit Subaşı, email@example.com
- Office Hour: Open Door Policy
- Class Schedule: Monday 17:00-19:45
Course Objectives
- Present the key algorithms and theory that form the core of machine learning.
- Draw on concepts and results from many fields, including statistics, artificial intelligence, philosophy, information theory, biology, cognitive science, computational complexity, and control theory.
Textbooks
1. Du and Swamy, Neural Networks in a Softcomputing Framework, Springer-Verlag London Limited, 2006.
2. Sebe, Cohen, Garg and Huang, Machine Learning in Computer Vision, Springer, 2005.
3. Chow and Cho, Neural Networks and Computing, Imperial College Press, 2007.
4. Mitchell, T., Machine Learning, McGraw Hill, 1997.
5. T. Hastie, R. Tibshirani, J. Friedman, The Elements of Statistical Learning, Second Edition, Springer, 2008.
Brief Contents
- Introduction
- Concept Learning
- Decision Tree Learning
- Artificial Neural Networks
- Evaluating Hypotheses
- Bayesian Learning
- Computational Learning Theory
- Reinforcement Learning
Grading
- Midterm Examination: 25%
- Research & Presentation: 25%
- Final Examination: 50%
The research component requires a Word document of at least 15 pages, related PPT slides, and an in-class presentation.
Research Topics:
- Linear Methods for Classification
- Linear Regression
- Logistic Regression
- Linear Discriminant Analysis
- Perceptron
- Kernel Smoothing Methods – Ref5
- Kernel Density Estimation and Classification (Naive Bayes)
- Mixture Models for Density Estimation and Classification
- Radial Basis Function Networks – Ref1
- Basis Function Networks for Classification – Ref3
- Advanced Radial Basis Function Networks – Ref3
- Fundamentals of Machine Learning and Softcomputing – Ref1
- Neural Networks – Ref5
- Multilayer Perceptrons – Ref1
- Hopfield Networks and Boltzmann Machines – Ref1
- SVM – Ref5
- KNN – Ref5
- Competitive Learning and Clustering – Ref1
- Unsupervised Learning: k-means – Ref5
- Self-organizing Maps – Ref3
Research Topics (continued):
- Principal Component Analysis Networks (PCA, ICA) – Ref1
- Fuzzy Logic and Neurofuzzy Systems – Ref1
- Evolutionary Algorithms and Evolving Neural Networks (PSO) – Ref1
- Discussion and Outlook (SVM, CNN, WNN) – Ref1
- Decision Tree Learning – Duda & Hart
- Random Forest – Ref5
- Probabilistic Classifiers – Ref2
- Semi-supervised Learning – Ref2
- Maximum Likelihood Minimum Entropy HMM – Ref2
- Margin Distribution Optimization – Ref2
- Learning the Structure of Bayesian Network Classifiers – Ref2
- Office Activity Recognition – Ref2
- Model Assessment and Selection – Ref5
- Cross-Validation
- Bootstrap Methods
- Performance Measures: ROC, statistics
- WEKA Machine Learning Tool
- TANAGRA Machine Learning Tool
- ORANGE Machine Learning Tool
- NETICA Machine Learning Tool
- RapidMiner Machine Learning Tool
What is Machine Learning? Machine learning is the process by which a machine changes its structure, program, or data in response to external information in such a way that its expected future performance improves. Learning by machines can overlap with simpler processes, such as the addition of records to a database, but other cases are clear examples of what is called learning, such as a speech recognition program improving after hearing samples of a person's speech.
Components of a Learning Agent
- Curiosity Element – the problem generator; knows what the agent wants to achieve and takes risks (poses problems) to learn from
- Learning Element – changes future actions (the performance element) in accordance with the results from the performance analyzer
- Performance Element – chooses actions based on percepts
- Performance Analyzer – judges the effectiveness of each action and passes this information to the learning element
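The interaction between these four components can be sketched as a simple loop. This is only an illustrative sketch; the class and method names below are invented for this example, not taken from any standard library:

```python
import random

# Illustrative sketch of the four learning-agent components above.
# All class and method names here are assumptions made for this example.

class LearningAgent:
    def __init__(self):
        self.action_values = {}  # what the agent has learned so far

    def choose_action(self, percept, actions):
        """Performance element: pick the action currently believed best."""
        return max(actions, key=lambda a: self.action_values.get((percept, a), 0.0))

    def analyze(self, outcome):
        """Performance analyzer: judge the effectiveness of the last action."""
        return outcome  # here the outcome is already a numeric effectiveness score

    def learn(self, percept, action, feedback, rate=0.5):
        """Learning element: adjust future choices toward effective actions."""
        key = (percept, action)
        old = self.action_values.get(key, 0.0)
        self.action_values[key] = old + rate * (feedback - old)

    def generate_problem(self, actions):
        """Curiosity element: propose an action to try, possibly a risky one."""
        return random.choice(actions)

agent = LearningAgent()
agent.learn("obstacle ahead", "turn", agent.analyze(1.0))      # turning worked
agent.learn("obstacle ahead", "forward", agent.analyze(-1.0))  # going forward failed
best = agent.choose_action("obstacle ahead", ["turn", "forward"])
```

After the two learning steps, the performance element prefers "turn" whenever it perceives "obstacle ahead"; the curiosity element can still force the agent to revisit other actions.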
Why is machine learning important? Or, why not just program a computer with everything it needs to know in advance? Many programs and computer-controlled robots must be prepared to deal with situations their creators could not anticipate, such as game-playing programs, speech programs, electronic learning pets, and robotic explorers. These systems encounter a range of unpredictable inputs and thus benefit from being able to draw conclusions independently.
Relevance to AI
- Helps programs handle new situations based on the input and output from old ones
- Programs designed to adapt to humans will learn how to interact better
- Can save bulky programming and attempts to make a program foolproof
- Makes nearly all programs more dynamic and more powerful while improving the efficiency of programming
Approaches to Machine Learning
- Boolean logic and resolution
- Evolutionary machine learning – many candidate algorithms / neural networks are generated to solve a problem, and the best ones survive
- Statistical learning
- Unsupervised learning – the algorithm models structure in the input alone, knowing nothing about the expected results
- Supervised learning – the algorithm models the mapping from inputs to outputs, using examples labeled with the expected output
- Reinforcement learning – the algorithm learns from its own observations and reward signals rather than from labeled examples
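As a concrete illustration of supervised learning, the following minimal sketch trains a perceptron (one of the research topics listed earlier) from labeled examples of the AND function. The function names and the learning-rate and epoch values are illustrative choices, not from any specific textbook implementation:

```python
# Minimal sketch of supervised learning: a perceptron trained on
# labeled (input, expected-output) pairs for the AND function.

def train_perceptron(samples, labels, lr=0.1, epochs=20):
    """Learn weights and a bias from labeled training examples."""
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        for x, target in zip(samples, labels):
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = target - pred  # supervised: compare prediction to expected output
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

def predict(w, b, x):
    """Apply the learned linear threshold unit to a new input."""
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

# The AND function as labeled training data
X = [(0, 0), (0, 1), (1, 0), (1, 1)]
y = [0, 0, 0, 1]
w, b = train_perceptron(X, y)
```

Because AND is linearly separable, the perceptron learning rule converges to weights that classify all four examples correctly; an unsupervised method would receive only X, with no labels y to compare against.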
Current Machine Learning Research
Almost all areas of AI are developing machine learning, since it makes programs dynamic. Examples:
- Facial recognition – machines learn through many trials which objects are and aren't faces
- Language processing – machines learn the rules of English through examples; some AI chatterbots start with little linguistic knowledge but can be taught almost any language through extensive conversation with humans
Future of Machine Learning
- Gaming – opponents will be able to learn from the player's strategies and adapt to counter them
- Personalized gadgets – devices that adapt to their owner as the owner changes (gets older, develops different tastes, changes moods)
- Exploration – machines will be able to explore environments unsuitable for humans and quickly adapt to strange conditions
Problems in Machine Learning
Learning by Example:
- Noise in example classification
- Correct knowledge representation
Heuristic Learning:
- Incomplete knowledge base
- Continuous situations in which there is no absolute answer
Case-based Reasoning:
- Translating human knowledge into a computer representation
Problems in Machine Learning (continued)
Grammar–meaning pairs:
- New rules must be relearned a number of times to gain strength
Conceptual Clustering:
- Definitions can be very complicated
- Not much predictive power
Successes in Research
Aspects of daily life using machine learning:
- Optical character recognition
- Handwriting recognition
- Speech recognition
- Automated steering
- Assessing credit card risk
- Filtering news articles
- Refining information retrieval
- Data mining