# Identifying Interesting Association Rules with Genetic Algorithms

Elnaz Delpisheh, York University, Department of Computer Science and Engineering, April 10, 2017

Data Mining
Too much data → data mining → association rules. I = {i1, i2, ..., in} is a set of items, and D = {t1, t2, ..., tn} is a transactional database in which each transaction ti is a nonempty subset of I. An association rule is of the form A⇒B, where A and B are itemsets, A ⊂ I, B ⊂ I, and A ∩ B = ∅; for example, {milk, eggs}⇒{bread}. The Apriori algorithm is the most widely used algorithm for association rule mining, although other algorithms exist, such as FP-growth.
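Support and confidence of a rule can be computed directly from the transactional database D. A minimal Python sketch, using a hypothetical five-transaction toy database (not from the slides):

```python
# Hypothetical toy transactional database D; each transaction is a set of items.
D = [
    {"milk", "eggs", "bread"},
    {"milk", "eggs"},
    {"milk", "bread"},
    {"eggs", "bread"},
    {"milk", "eggs", "bread"},
]

def support(itemset, transactions):
    """Fraction of transactions that contain every item in `itemset`."""
    itemset = set(itemset)
    return sum(itemset <= t for t in transactions) / len(transactions)

def confidence(antecedent, consequent, transactions):
    """supp(A ∪ B) / supp(A) for the rule A ⇒ B."""
    return (support(set(antecedent) | set(consequent), transactions)
            / support(antecedent, transactions))

print(support({"milk", "eggs"}, D))                 # supp({milk, eggs}) = 0.6
print(confidence({"milk", "eggs"}, {"bread"}, D))   # conf = 2/3 ≈ 0.667
```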

Apriori Algorithm

| TID  | List of item IDs |
|------|------------------|
| T100 | I1, I2, I3       |
| T200 | I2, I4           |
| T300 | I1, I2, I3, I5   |
| ...  | ...              |
| T900 | I1, I2, I3       |

Apriori Algorithm (Cont.)
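The level-wise candidate generation that Apriori performs over a table like the one above can be sketched as follows. This is a simplified illustration, not the slides' implementation; `min_support` is a hypothetical user-supplied threshold:

```python
from itertools import combinations

def apriori(transactions, min_support):
    """Level-wise Apriori sketch: frequent k-itemsets seed candidate (k+1)-itemsets."""
    n = len(transactions)
    def supp(itemset):
        return sum(itemset <= t for t in transactions) / n
    # Frequent 1-itemsets.
    items = sorted({i for t in transactions for i in t})
    frequent = [frozenset([i]) for i in items if supp(frozenset([i])) >= min_support]
    all_frequent = list(frequent)
    k = 2
    while frequent:
        # Join step: union pairs of frequent (k-1)-itemsets into k-candidates.
        candidates = {a | b for a in frequent for b in frequent if len(a | b) == k}
        # Prune step: every (k-1)-subset must be frequent; then test support.
        seen = set(all_frequent)
        frequent = [c for c in candidates
                    if all(frozenset(s) in seen for s in combinations(c, k - 1))
                    and supp(c) >= min_support]
        all_frequent.extend(frequent)
        k += 1
    return all_frequent

# Demo on the four transactions shown in the table (T400-T800 elided there).
D = [{"I1", "I2", "I3"}, {"I2", "I4"}, {"I1", "I2", "I3", "I5"}, {"I1", "I2", "I3"}]
print(apriori(D, min_support=0.5))
```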

Association rule mining
Too much data → data mining → association rules; but mining itself can produce too many association rules, which motivates measuring how interesting each rule is.

Interestingness criteria
Comprehensibility, conciseness, diversity, generality, novelty, utility, and others.

Interestingness measures
Subjective measures take both the data and the user's prior knowledge into account: comprehensibility, novelty, surprisingness, utility. Objective measures consider only the structure of an association rule: conciseness, diversity, generality, peculiarity. Example: support represents the generality of a rule; it counts the transactions that contain both A and B.

Drawbacks of objective measures
Database dependence: mining thresholds depend on knowledge of the database that users typically lack. One workaround is multiple database reanalysis, but this incurs a large number of disk I/Os, and some databases are simply too large; association rule mining must also confront exponential search spaces. Database independence: this approach does not require users to specify thresholds. Instead of generating an unknown number of interesting rules, as traditional models do, only the most interesting rules are extracted according to the interestingness measure defined by the fitness function.

Genetic algorithm-based learning (ARMGA)
1. Initialize the population.
2. Evaluate the individuals in the population.
3. Repeat until a stopping criterion is met:
   - Select individuals from the current population.
   - Recombine them to obtain new individuals.
   - Evaluate the new individuals.
   - Replace some or all individuals of the current population with the offspring.
4. Return the best individual seen so far.

Genetic algorithms for rule mining are usually divided into two groups according to how rules are encoded in the population of chromosomes: the Michigan approach, which many researchers have used but which becomes impractical when the number of rules is too large, and the Pittsburgh approach.
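The loop above can be sketched as a generic, elitist GA skeleton. The helper functions (`init_pop`, `select`, `crossover`, `mutate`) are hypothetical parameters, not definitions from the slides:

```python
def genetic_algorithm(init_pop, fitness, select, crossover, mutate,
                      max_generations=100):
    """Generic GA loop mirroring the steps above; helper signatures are assumed."""
    population = init_pop()
    best = max(population, key=fitness)          # best individual seen so far
    for _ in range(max_generations):
        parents = select(population)             # select from current population
        offspring = []
        for a, b in zip(parents[::2], parents[1::2]):
            offspring.extend(crossover(a, b))    # recombine into new individuals
        offspring = [mutate(c) for c in offspring]
        population = offspring                   # replace population by offspring
        best = max(population + [best], key=fitness)
    return best
```

Because `best` is carried across generations, the returned fitness never drops below that of the initial population's best individual.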

ARMGA Modeling
Given an association rule X⇒Y, ARMGA requires Conf(X⇒Y) > Supp(Y). The aim is to maximize this margin, since we are only interested in positive rules.

ARMGA Encoding
Given an association k-rule X⇒Y, where X, Y ⊂ I, I = {i1, i2, ..., in} is the set of items, and X ∩ Y = ∅; for example, {A1, ..., Aj}⇒{Aj+1, ..., Ak}. In the Michigan approach, each rule is encoded into an individual; in the Pittsburgh approach, a set of rules is encoded into a single chromosome.

ARMGA Encoding (Cont.)
The aforementioned encoding depends heavily on the length of the chromosome, so we use another type of encoding. Given a set of items {A, B, C, D, E, F}, the association rule ACF⇒B is encoded as 00A11B00C01D01E00F, where 00 marks an item in the antecedent, 11 marks an item in the consequent, and 01 or 10 marks an absent item.
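This 2-bits-per-item scheme can be sketched as an encode/decode pair. The helpers below are hypothetical illustrations assuming the fixed item ordering {A, ..., F} from the slide:

```python
ITEMS = ["A", "B", "C", "D", "E", "F"]   # fixed item ordering from the slide

def encode(antecedent, consequent, items=ITEMS):
    """Fixed-length chromosome: a 2-bit role code precedes each item symbol.
    00 = antecedent, 11 = consequent, 01 (or 10) = absent."""
    parts = []
    for item in items:
        if item in antecedent:
            parts.append("00" + item)
        elif item in consequent:
            parts.append("11" + item)
        else:
            parts.append("01" + item)    # 10 would also mean absent
    return "".join(parts)

def decode(chromosome, items=ITEMS):
    """Recover (antecedent, consequent) from a chromosome string."""
    antecedent, consequent = set(), set()
    for k, item in enumerate(items):
        bits = chromosome[3 * k : 3 * k + 2]
        if bits == "00":
            antecedent.add(item)
        elif bits == "11":
            consequent.add(item)
    return antecedent, consequent
```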

ARMGA Operators Select Crossover Mutation

ARMGA Operators-Select
Select(c, ps) acts as a filter on chromosome c, where ps is a pre-specified probability.
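The slide gives only the signature Select(c, ps). One plausible reading, sketched below as an assumption rather than the paper's actual definition, is a stochastic filter that passes only positively fit chromosomes with probability ps:

```python
import random

def select(c, ps, fitness):
    """Hypothetical filter: pass chromosome `c` with pre-specified probability
    `ps`, and only if its fitness is positive; otherwise reject (return None)."""
    if fitness(c) > 0 and random.random() < ps:
        return c
    return None
```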

ARMGA Operators-Crossover
This operation uses a two-point crossover strategy.
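A two-point crossover on string chromosomes can be sketched as follows (a generic illustration of the strategy, with hypothetical cut-point selection):

```python
import random

def two_point_crossover(p1, p2):
    """Swap the segment between two random cut points of two equal-length
    parent chromosomes, producing two offspring."""
    assert len(p1) == len(p2)
    i, j = sorted(random.sample(range(1, len(p1)), 2))  # two distinct cut points
    c1 = p1[:i] + p2[i:j] + p1[j:]
    c2 = p2[:i] + p1[i:j] + p2[j:]
    return c1, c2
```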

ARMGA Operators-Mutate

ARMGA Initialization

ARMGA Algorithm

Empirical studies and Evaluation
Implement the entire procedure in Visual C++, use WEKA to produce interesting association rules, and compare the results.
