Local Clustering Algorithm DISCOVIR (9-16-2002)

Presentation transcript:

Local Clustering Algorithm DISCOVIR

Current Situation: The image collection within a client is modeled as a single cluster.

Proposed Improvement: Multiple clusters exist in the image collection.

(Figure: groups of similar local clusters A, B, and C.)

Clustering Algorithm: Three clustering algorithms are proposed and tested. C is the set of cluster centers and X is the dataset. Goal of clustering: minimize the Error, i.e. the sum of squared distances from each point in X to its closest cluster center in C.
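A minimal NumPy sketch of this error, assuming the sum-of-squared-distances form stated above:

```python
import numpy as np

def sse_error(X, C):
    """Sum of squared distances from each point to its closest cluster center.

    X: (n, d) array of data points; C: (k, d) array of cluster centers.
    """
    # squared distance from every point to every center, shape (n, k)
    d2 = ((X[:, None, :] - C[None, :, :]) ** 2).sum(axis=2)
    # for each point keep only the closest center, then sum over all points
    return d2.min(axis=1).sum()
```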

Procedure:
1. Input the dataset X and the number of clusters k.
2. Randomly pick k points x_j from X and assign them as the initial set of cluster centers C.
3. Randomly pick a point from the dataset X.
4. Find the closest cluster center c and update it.
5. If the change in Error is below a threshold, or a set number of iterations has been reached, return the cluster centers; otherwise repeat from step 3.
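A sketch of this loop in Python; `update_rule` stands for one of the SM, CL, or RPCL updates described on the following slides, `max_steps`, `threshold`, and `seed` are illustrative parameters, and `sse_error` is the sketch above:

```python
import numpy as np

def local_clustering(X, k, update_rule, max_steps=1000, threshold=1e-6, seed=0):
    rng = np.random.default_rng(seed)
    # step 2: k points picked at random from X serve as the initial cluster centers
    C = X[rng.choice(len(X), size=k, replace=False)].astype(float)
    prev_error = sse_error(X, C)
    for t in range(max_steps):
        x = X[rng.integers(len(X))]                      # step 3: pick a random point
        i = int(np.argmin(((C - x) ** 2).sum(axis=1)))   # step 4: closest center wins
        update_rule(C, i, x, t, max_steps)               # move the winner (SM/CL/RPCL)
        error = sse_error(X, C)
        if abs(prev_error - error) < threshold:          # step 5: stop on small change
            break
        prev_error = error
    return C
```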

Shifting Mean (SM): Suppose x_j is picked and c_i is the closest cluster center. Let p be the number of times c_i has won, initially p = 1. Update c_i by shifting it toward x_j with step size 1/p, so that c_i tracks the running mean of the points it has won.
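A sketch of the SM update; the running-mean form c_i <- c_i + (x_j - c_i)/p, with p incremented on each win, is an assumed reading of the update rule:

```python
import numpy as np

def make_sm_update(k):
    """Shifting Mean: the winner becomes the running mean of the points it has won."""
    p = np.ones(k)  # win counts, initially 1 for every center

    def update(C, i, x, t, T):
        p[i] += 1
        C[i] += (x - C[i]) / p[i]  # shift the mean toward x by a step of 1/p

    return update
```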

Competitive Learning (CL): Update c_i by moving it toward x_j with a learning rate that decreases with t, the current number of iterations so far, relative to T, the total number of iterations intended to run. We choose the learning rate α to be 0.5, 0.3, or 0.1.
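A sketch of the CL update, assuming the standard rule c_i <- c_i + α_t (x_j - c_i); the linear decay of α_t from α down to 0 over the T iterations is an assumption suggested by the t and T on the slide:

```python
def make_cl_update(alpha=0.3):
    """Competitive Learning: move the winner toward x with a decaying learning rate."""
    def update(C, i, x, t, T):
        alpha_t = alpha * (1.0 - t / T)   # assumed schedule: decay from alpha to 0
        C[i] += alpha_t * (x - C[i])
    return update
```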

Illustration: the cluster center c_i and the picked point x_j.

Illustration: the winner c_i moves closer to x_j.

Rival Penalized Competitive Learning (RPCL): Suppose c_l is the second closest cluster center to x_j. Update the winner c_i toward x_j as in CL, and update the rival c_l away from x_j with a smaller de-learning rate β. We choose β = 0.05α.
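A sketch of the RPCL step under the same assumptions: the winner moves as in CL, while the rival (second closest center) is pushed away with de-learning rate β = 0.05·α:

```python
import numpy as np

def make_rpcl_update(alpha=0.3, beta_factor=0.05):
    """Rival Penalized Competitive Learning: reward the winner, penalize the rival."""
    def update(C, i, x, t, T):
        alpha_t = alpha * (1.0 - t / T)     # winner learning rate (assumed schedule)
        beta_t = beta_factor * alpha_t      # rival de-learning rate, beta = 0.05 * alpha
        d2 = ((C - x) ** 2).sum(axis=1)
        l = int(np.argsort(d2)[1])          # rival: second closest center to x
        C[i] += alpha_t * (x - C[i])        # winner moves closer
        C[l] -= beta_t * (x - C[l])         # rival moves away
    return update
```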

Illustration: the winner c_i moves closer to x_j, the rival c_l moves away, and the remaining centers are unchanged.

Final Steps: For each x_j, find the closest c_i and mark x_j as belonging to c_i. Calculate the error function. Carry out experiments by varying the number of iterations and the learning rate.
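The final assignment and one such experiment might look like the following, reusing the helper functions from the sketches above; the toy data and the choice k = 3 are placeholders:

```python
import numpy as np

def assign(X, C):
    """Label each point with the index of its closest cluster center."""
    d2 = ((X[:, None, :] - C[None, :, :]) ** 2).sum(axis=2)
    return d2.argmin(axis=1)

# Toy experiment: vary the learning rate and the number of iterations.
X = np.random.default_rng(1).normal(size=(300, 2))   # placeholder feature vectors
for alpha in (0.5, 0.3, 0.1):
    for steps in (400, 1000, 5000):
        C = local_clustering(X, k=3, update_rule=make_cl_update(alpha), max_steps=steps)
        labels = assign(X, C)
        print(alpha, steps, sse_error(X, C))
```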

Results: Error is compared for SM, CL, and RPCL under three settings: fixed learning rate 0.3 with varying numbers of iterations (400, 1000, 5000); fixed number of iterations with varying learning rates (0.5, 0.3, 0.1); and fixed learning rate 0.1 with varying numbers of iterations (400, 1000, 5000).

Screen Capture

Others: Another variation uses α_i, an initial learning rate, and α_f, a final learning rate. Interesting link for competitive learning: some competitive learning methods.
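One way this variation could be sketched, assuming the learning rate is interpolated from α_i down to α_f over the T iterations (the exact schedule, and the value of α_f, are assumptions):

```python
def make_cl_update_af(alpha_i=0.5, alpha_f=0.01):
    """CL variant with an initial and a final learning rate (assumed linear schedule)."""
    def update(C, i, x, t, T):
        alpha_t = alpha_i + (alpha_f - alpha_i) * (t / T)  # interpolate alpha_i -> alpha_f
        C[i] += alpha_t * (x - C[i])
    return update
```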