**Sequence motifs, information content, logos, and HMMs**

Morten Nielsen, CBS, BioCentrum, DTU

**Outline**

- Multiple alignments and sequence motifs
- Weight matrices and the consensus sequence
- Sequence weighting
- Low (pseudo) counts
- Information content
- Sequence logos
- Mutual information
- An example from the real world
- HMMs and profile HMMs
- TMHMM (trans-membrane proteins)
- Gene finding
- Links to HMM packages

**Multiple alignment and sequence motifs**

- Core
- Consensus sequence
- Weight matrices
- Problems: sequence weighting, low counts

MLEFVVEADLPGIKA
MLEFVVEFALPGIKA
MLEFVVEFDLPGIAA
YLQDSDPDSFQD
---GSDTITLPCRMKQFINMWQE
---RNQEERLLADLMQNYDPNLR
YDPNLRPAERDSDVVNVSLK------
NVSLKLTLTNLISLNEREEA---
----EREEALTTNVWIEMQWCDYR
WCDYRLRWDPRDYEGLWVLR---
--LWVLRVPSTMVWRPDIVLEN
IVLENNVDGVFEVALYCNVL-
YCNVLVSPDGCIYWLPPAIF
PPAIFRSACSISVTYFPFDW----

The core columns (marked ********* on the slide) give the consensus sequence FVVEFDLPG.

**Sequence weighting 1 - Clustering**

The first three sequences of the alignment above (MLEFVVEADLPGIKA, MLEFVVEFALPGIKA, MLEFVVEFDLPGIAA) are homologous. Cluster such sequences and give each member of a cluster of n sequences the weight 1/n (here 1/3).

- Consensus sequence with clustering weights: YRQELDPLV
- Previous (unweighted) consensus: FVVEFDLPG

**Sequence weighting 2 (Henikoff & Henikoff)**

Waa' = 1/(r·s), where r is the number of different amino acids in a column and s is the number of occurrences of that amino acid. Normalize so that the Waa sum to 1 for each column; a sequence's weight is then the sum of its Waa over all columns.

Example, first column (F F F Y M L P L V Y W V Y F):

- F: r = 7 (F, Y, M, L, P, V, W), s = 4, so w' = 1/28, w = 0.055
- Y: s = 3, so w' = 1/21, w = 0.073
- M, P, W: s = 1, so w' = 1/7, w = 0.218
- L, V: s = 2, so w' = 1/14, w = 0.109

Resulting sequence weights:

FVVEADLPG 0.37
FVVEFALPG 0.43
FVVEFDLPG 0.32
YLQDSDPDS 0.59
MKQFINMWQ 0.90
LMQNYDPNL 0.68
PAERDSDVV 0.75
LKLTLTNLI 0.85
VWIEMQWCD 0.84
YRLRWDPRD 0.51
WRPDIVLEN 0.71
VLENNVDGV 0.59
YCNVLVSPD 0.71
FRSACSISV 0.75
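The Henikoff & Henikoff scheme above can be sketched in a few lines of Python, using the slide's 14 peptides. Note that the per-column contributions w' = 1/(r·s) already sum to 1 within each column, so the raw sequence weights below sum to the number of columns; the slide's printed weights (0.37, 0.43, ...) appear to use an additional rescaling, so the absolute values differ.

```python
from collections import Counter

# The 14 aligned 9-mers from the slide.
peptides = [
    "FVVEADLPG", "FVVEFALPG", "FVVEFDLPG", "YLQDSDPDS", "MKQFINMWQ",
    "LMQNYDPNL", "PAERDSDVV", "LKLTLTNLI", "VWIEMQWCD", "YRLRWDPRD",
    "WRPDIVLEN", "VLENNVDGV", "YCNVLVSPD", "FRSACSISV",
]

def henikoff_weights(seqs):
    """Position-based weights: each residue contributes w' = 1/(r*s) in its
    column, where r is the number of distinct residues in the column and s
    is the count of that residue; a sequence's weight sums its contributions."""
    weights = [0.0] * len(seqs)
    for col in range(len(seqs[0])):
        counts = Counter(s[col] for s in seqs)
        r = len(counts)  # distinct residues in this column
        for i, s in enumerate(seqs):
            weights[i] += 1.0 / (r * counts[s[col]])
    return weights

w = henikoff_weights(peptides)
# In column 1, F occurs s = 4 times among r = 7 distinct residues,
# so each F sequence picks up 1/28 from that column, as on the slide.
```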

**Low count correction**

- Limited number of data; poor sampling of sequence space.
- I is not found at position P1 (the first core position of the alignment shown earlier). Does this mean that I is forbidden there? No!
- Use the Blosum matrix to estimate a pseudo frequency of I.

**Low count correction using Blosum matrices**

- Every time, for instance, L or V is observed, I is also likely to occur.
- Estimate the low (pseudo) count correction from the Blosum62 substitution frequencies.
- As more data are included, the pseudo count correction becomes less important.

Example, with NL = 2 and NV = 2 observed among Neff = 12 counts:

fI = (NL·q(I|L) + NV·q(I|V)) / Neff = (2·0.1646 + 2·q(I|V)) / 12 ≈ 0.05

pI* = (Neff·pI + b·fI) / (Neff + b) = (12·0 + 10·0.05) / (12 + 10) ≈ 0.02
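The pseudo count calculation above can be sketched as follows. This is a minimal illustration: the conditional substitution probabilities q(I|L) and q(I|V) below reuse the slide's 0.1646 for both and are stand-ins, not verified Blosum62 entries.

```python
def pseudo_frequency(counts, cond_prob, neff):
    """g_b = sum_a (n_a / neff) * q(b|a): expected (pseudo) frequency of a
    residue b given observed residue counts and substitution probabilities."""
    return sum(n_a * cond_prob.get(a, 0.0) for a, n_a in counts.items()) / neff

def corrected_prob(p_obs, g, neff, beta):
    """Mix observed and pseudo frequencies: p* = (neff*p + beta*g) / (neff + beta)."""
    return (neff * p_obs + beta * g) / (neff + beta)

# Column with N_L = 2, N_V = 2 among N_eff = 12 counts.
# q(I|L) and q(I|V) are illustrative values, not real Blosum62 numbers.
q_I_given = {"L": 0.1646, "V": 0.1646}
counts = {"L": 2, "V": 2}

f_I = pseudo_frequency(counts, q_I_given, neff=12)   # pseudo frequency of I, ~0.05
p_I = corrected_prob(0.0, f_I, neff=12, beta=10)     # corrected probability, ~0.02
```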

**Information content**

- Conserved amino acid regions contain a high degree of information (high order = low entropy).
- Variable amino acid regions contain a low degree of information (low order = high entropy).

Shannon information:

D = log2(N) + Σi pi log2(pi)   (N = 20 for proteins, N = 4 for DNA)

- Conserved residue: pA = 1, pi = 0 for i ≠ A, so D = log2(N) (≈ 4.3 bits for proteins)
- Fully variable position: pA = pC = ... = 0.05, so D = 0
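The formula above can be computed directly; a small sketch covering the two extreme cases from the slide:

```python
import math

def information_content(probs, n=20):
    """Shannon information D = log2(N) + sum_i p_i * log2(p_i);
    the 0 * log2(0) terms are taken as 0."""
    return math.log2(n) + sum(p * math.log2(p) for p in probs if p > 0)

conserved = information_content([1.0])        # one residue, p = 1: log2(20), ~4.3 bits
variable = information_content([0.05] * 20)   # uniform column: 0 bits
```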

**Sequence logo**

- MHC class II logo built from 10 sequences.
- The height of a column equals D; the relative height of each letter equals its frequency pa.
- A highly useful tool for visualizing sequence motifs; high-information positions stand out.

**Frequency matrix**

[Frequency matrix (frequencies × 100); columns A R N D C Q E G H I L K M F P S T W Y V. The values were shown as an image and are not recoverable.]

**More on logos**

Kullback-Leibler information:

D = Σi pi log2(pi/qi), with qi the background frequency of amino acid i.

Shannon information is the special case qi = 1/N = 0.05:

D = Σi pi log2(pi) - Σi pi log2(1/N) = log2(N) + Σi pi log2(pi)

The background frequencies matter because, for instance, V/L/A are more frequent in proteins than C/H/W.

**Mutual information**

I(i,j) = Σaai Σaaj P(aai, aaj) · log[ P(aai, aaj) / (P(aai)·P(aaj)) ]

Nine peptides:

ALWGFFPVA
ILKEPVHGV
ILGFVFTLT
LLFGYPVYV
GLSPTVWLS
YMNGTMSQV
GILGFVFTL
WLSLLVPFV
FLPSDFFPS

Example, positions 1 and 6:

- P(G1) = 2/9 ≈ 0.22, P(V6) = 4/9 ≈ 0.44
- P(G1, V6) = 2/9 ≈ 0.22, while P(G1)·P(V6) = 8/81 ≈ 0.10
- log(0.22/0.10) > 0, so this pair contributes positive mutual information.
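The mutual information between two alignment columns can be sketched as follows, using the nine peptides from the slide. Log base 2 is an assumption here; the slide does not state the base.

```python
import math
from collections import Counter

peptides = ["ALWGFFPVA", "ILKEPVHGV", "ILGFVFTLT", "LLFGYPVYV", "GLSPTVWLS",
            "YMNGTMSQV", "GILGFVFTL", "WLSLLVPFV", "FLPSDFFPS"]

def mutual_information(seqs, i, j):
    """I(i,j) = sum_ab P(a_i, b_j) * log2[ P(a_i, b_j) / (P(a_i) P(b_j)) ]."""
    n = len(seqs)
    p_i = Counter(s[i] for s in seqs)                # marginal counts at column i
    p_j = Counter(s[j] for s in seqs)                # marginal counts at column j
    p_ij = Counter((s[i], s[j]) for s in seqs)       # joint counts
    # (c/n) / ((p_i/n)*(p_j/n)) simplifies to c*n / (p_i*p_j)
    return sum((c / n) * math.log2((c * n) / (p_i[a] * p_j[b]))
               for (a, b), c in p_ij.items())

mi_1_6 = mutual_information(peptides, 0, 5)   # positions 1 and 6 (0-based 0 and 5)
```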

**Mutual information (cont.)**

[Figure: mutual information computed from 313 binding peptides versus 313 random peptides.]

**Weight matrices**

- Estimate the amino acid frequencies pij from the alignment, including sequence weighting and pseudo counts.
- The weight matrix is then Wij = log(pij/qj), where i is a position in the motif, j is an amino acid, and qj is the background frequency of amino acid j.
- W is an L × 20 matrix, where L is the motif length.
- Score a sequence against the weight matrix by looking up and adding L values from the matrix.
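Building and scoring such a weight matrix can be sketched as follows. The alphabet, position frequencies, and background below are toy values chosen for illustration, not estimates from real data.

```python
import math

def make_weight_matrix(freqs, background):
    """W[i][aa] = log(p_ij / q_j) for each motif position i and amino acid aa."""
    return [{aa: math.log(p / background[aa]) for aa, p in col.items()}
            for col in freqs]

def score(seq, W):
    """Score an L-mer by looking up and adding the L matrix values."""
    return sum(W[i][aa] for i, aa in enumerate(seq))

# Toy 3-position motif over a reduced 4-letter alphabet (made-up frequencies).
q = {"A": 0.25, "L": 0.25, "K": 0.25, "G": 0.25}
freqs = [{"A": 0.70, "L": 0.10, "K": 0.10, "G": 0.10},
         {"A": 0.10, "L": 0.70, "K": 0.10, "G": 0.10},
         {"A": 0.10, "L": 0.10, "K": 0.70, "G": 0.10}]
W = make_weight_matrix(freqs, q)
# The consensus "ALK" scores above zero; a non-matching "GGG" scores below.
```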

**Example from real life**

- 10 peptides from the MHCpep database.
- They bind to the MHC complex and are relevant for immune system recognition.
- Estimate the sequence motif and weight matrix; evaluate on 528 peptides.

ALAKAAAAM
ALAKAAAAN
ALAKAAAAR
ALAKAAAAT
ALAKAAAAV
GMNERPILT
GILGFVFTM
TLNAWVKVV
KLNEPVLLL
AVVPFIVSV

**Example from real life (cont.)**

- Raw sequence counting (no sequence weighting, no pseudo counts): prediction accuracy 0.45.
- With sequence weighting: prediction accuracy 0.50.

**Example from real life (cont.)**

- With sequence weighting and pseudo counts: prediction accuracy 0.60.
- Motif estimated on all data (485 peptides): prediction accuracy 0.79.

**Hidden Markov Models**

- Weight matrices do not deal with insertions and deletions.
- In alignments, this is handled in an ad hoc manner by optimizing two gap penalties: one for opening a gap and one for extending it.
- An HMM is a natural framework in which insertions and deletions are dealt with explicitly.

**HMM (a simple example)**

Example from A. Krogh.

- The core of the alignment defines the number of match states in the HMM.
- Insertion and deletion statistics are derived from the non-core part of the alignment.

ACA---ATG
TCAACTATC
ACAC--AGC
AGA---ATC
ACCG--ATC

**HMM construction**

From the alignment above:

- The first match-state column contains A, T, A, A, A, giving emission probabilities P(A) = 0.8 and P(T) = 0.2.
- The gap region contains 5 inserted residues (A, 2×C, T, G), giving insert-state emissions P(A) = 0.2, P(C) = 0.4, P(G) = 0.2, P(T) = 0.2.
- Of the 5 transitions observed in the gap region (C out; G out; A-C, C-T, T out), 3 leave the insert state and 2 stay: out transition 3/5 = 0.6, stay transition 2/5 = 0.4.
- Example: P(ACA---ATG) = 0.8 × 1 × 0.8 × 1 × 0.8 × 0.4 × 1 × 0.8 × 1 × 0.2 = 3.3×10⁻²

[The HMM state diagram with the full per-state emission tables was shown as a figure and is not fully recoverable.]

**Align sequence to HMM**

- ACA---ATG: 0.8×1×0.8×1×0.8×0.4×1×0.8×1×0.2 = 3.3×10⁻²
- TCAACTATC: 0.2×1×0.8×1×0.8×0.6×0.2×0.4×0.4×0.4×0.2×0.6×1×1×0.8×1×0.8 = 0.0075×10⁻²
- ACAC--AGC: 1.2×10⁻²
- AGA---ATC: 3.3×10⁻²
- ACCG--ATC: 0.59×10⁻²
- Consensus: ACAC--ATC = 4.7×10⁻², ACA---ATC = 13.1×10⁻²
- Exceptional: TGCT--AGG = 0.0023×10⁻²

**Align sequence to HMM - Null model**

- The raw score depends strongly on the sequence length.
- The null model is a random model: a sequence of length L has probability 0.25^L.
- Log-odds score for a sequence S: log(P(S) / 0.25^L).

Log-odds scores:

- ACA---ATG = 4.9
- TCAACTATC = 3.0
- ACAC--AGC = 5.3
- AGA---ATC = 4.9
- ACCG--ATC = 4.6
- Consensus: ACAC--ATC = 6.7, ACA---ATC = 6.3
- Exceptional: TGCT--AGG = -0.97

Note that the exceptional sequence is the only one scoring below the null model.
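The numbers above for ACA---ATG can be reproduced in a few lines. This assumes the log-odds score uses a natural logarithm (which matches the reported 4.9) and a null model of 0.25 per nucleotide over the L = 6 emitted symbols ACAATG.

```python
import math

# Path probability for ACA---ATG: the product of transition and emission
# probabilities exactly as listed on the HMM construction slide.
p = 0.8 * 1 * 0.8 * 1 * 0.8 * 0.4 * 1 * 0.8 * 1 * 0.2   # = 3.3e-2

# Log-odds against the uniform null model: the sequence emits 6 nucleotides,
# each with probability 0.25 under the null.
log_odds = math.log(p / 0.25 ** 6)                       # ~4.9
```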

**HMMs and weight matrices**

- Note: in the case of ungapped alignments, HMMs reduce to simple weight matrices.
- It can still be useful to estimate such a weight matrix with an HMM tool package, which provides sequence weighting and pseudo counts.

**Profile HMMs**

Example alignment (insertions and deletions were marked in the original figure):

EM55_HUMAN WWQGRVEGSSKESAGLIPSPELQEWRVASMAQSAP--SEAPSCSPFGKKKK-YKDKYLAK
CSKP_HUMAN WWQGKLENSKNGTAGLIPSPELQEWRVACIAMEKTKQEQQASCTWFGKKKKQYKDKYLAK
KAPB_MOUSE PENLLIDHQGYIQVTDFGFAKRVKG
NRC2_NEUCR PENILLHQSGHIMLSDFDLSKQSDPGGKPTMIIGKNGTSTSSLPTIDTKSCIANF

EM55_HUMAN HSSIFDQLDVVSYEEVVRLPAFKRKTLVLIGASGVGRSHIKNALLSQNPEKFVYPVPYTT
CSKP_HUMAN HNAVFDQLDLVTYEEVVKLPAFKRKTLVLLGAHGVGRRHIKNTLITKHPDRFAYPIPHTT
KAPB_MOUSE RTWTLCGTPEYLAPEIILSKGYNKAVDWWALGVLIYEMAAGYPPFFADQPIQIYEKIVSG
NRC2_NEUCR RTNSFVGTEEYIAPEVIKGSGHTSAVDWWTLGILIYEMLYGTTPFKGKNRNATFANILRE

EM55_HUMAN RPPRKSEEDGKEYHFISTEEMTRNISANEFLEFGSYQGNMFGTKFETVHQIHKQNKIAIL
CSKP_HUMAN RPPKKDEENGKNYYFVSHDQMMQDISNNEYLEYGSHEDAMYGTKLETIRKIHEQGLIAIL
KAPB_MOUSE KVRFPSHF-----SSDLKDLLRNLLQVDLTKRFGNLKNGVSDIKTHKWFATTDWIAIYQR
NRC2_NEUCR DIPFPDHAGAPQISNLCKSLIRKLLIKDENRRLG-ARAGASDIKTHPFFRTTQWALI--R

EM55_HUMAN NNGVDETLKKLQEAFDQACSSPQWVPVSWVY
CSKP_HUMAN NNEIDETIRHLEEAVELVCTAPQWVPVSWVY
KAPB_MOUSE EKCGKEFCEF
NRC2_NEUCR ENAVDPFEEFNSVTLHHDGDEEYHSDAYEKR

**Profile HMMs (cont.)**

All M/D pairs must be visited once.

[Figure: profile HMM architecture with match, insert, and delete states.]

**TMHMM (trans-membrane HMM) (Sonnhammer, von Heijne, and Krogh)**

The HMM models the length distribution of trans-membrane helices explicitly. This illustrates the power of HMMs: such length modeling is difficult to achieve in a standard alignment.

**Combination of HMMs - Gene finding**

A gene finder chains sub-models: inter-genic region, region around the start codon, coding region, region around the stop codon, and back to the inter-genic region.

xxxxxxxxATGccc ccc cccTAAxxxxxxxx

(x: inter-genic region, ATG: start codon, ccc: coding region, TAA: stop codon)

**HMM packages**

- HMMER (http://hmmer.wustl.edu/): S.R. Eddy, WashU St. Louis. Freely available.
- SAM (http://www.cse.ucsc.edu/research/compbio/sam.html): R. Hughey, K. Karplus, A. Krogh, D. Haussler and others, UC Santa Cruz. Freely available to academia, nominal license fee for commercial users.
- META-MEME (http://metameme.sdsc.edu/): William Noble Grundy, UC San Diego. Freely available. Combines features of PSSM search and profile HMM search.
- NET-ID, HMMpro (http://www.netid.com/html/hmmpro.html): Freely available to academia, nominal license fee for commercial users. Allows HMM architecture construction.

**Simple HMMER command**

hmmbuild --gapmax 0.0 --fast A2.hmmer A2.fsa

Output:

hmmbuild - build a hidden Markov model from an alignment
HMMER 2.2g (August 2001)
Alignment file:                 A2.fsa
File format:                    a2m
Search algorithm configuration: Multiple domain (hmmls)
Model construction strategy:    Fast/ad hoc (gapmax 0.0)
Null model used:                (default)
Sequence weighting method:      G/S/C tree weights
Alignment:           #1
Number of sequences: 232
Number of columns:   9
Determining effective sequence number ... done. [192]
Weighting sequences heuristically ... done.
Constructing model architecture ... done.
Converting counts to probabilities ... done.
Setting model name, etc. ... done. [A2.fasta]
Constructed a profile HMM (length 9)
Average score:  bits
Minimum score:  bits
Maximum score:  bits
Std. deviation: bits

Excerpt of the input file A2.fsa:

>HLA-A Example_for_Ligand
SLLPAIVEL
YLLPAIVHI
TLWVDPYEV
SXPSGGXGV
GLVPFLVSV

**Weight matrix**

[Weight matrix; columns A R N D C Q E G H I L K M F P S T W Y V. The values were shown as an image and are not recoverable.]
