
Future directions in computer science research. John Hopcroft, Department of Computer Science, Cornell University. CINVESTAV-IPN, Dec 2, 2013.

Time of change. The information age is a revolution that is changing all aspects of our lives. Those individuals, institutions, and nations who recognize this change and position themselves for the future will benefit enormously.

Computer Science is changing. Early years: programming languages, compilers, operating systems, algorithms, databases. The emphasis was on making computers useful.

Computer Science is changing. The future years: tracking the flow of ideas in scientific literature, tracking the evolution of communities in social networks, extracting information from unstructured data sources, processing massive data sets and streams, extracting signals from noise, and dealing with high-dimensional data and dimension reduction. The field will become much more application oriented.

Computer Science is changing. Drivers of change: the merging of computing and communication, the wealth of data available in digital form, and networked devices and sensors.

Implications for Theoretical Computer Science: develop theory to support the new directions, and update computer science education.

Theory to support new directions: large graphs, spectral analysis, high dimensions and dimension reduction, clustering, collaborative filtering, extracting signal from noise, sparse vectors, learning theory.

Outline of talk: a short view of the future, then examples of a science base for large graphs and for high-dimensional space.

Sparse vectors. There are a number of situations where sparse vectors are important: tracking the flow of ideas in scientific literature, biological applications, signal processing.

Sparse vectors in biology (plants): the genotype is the internal code; the phenotype is the outward manifestation, the observables.

Digitization of medical records. Doctor: needs my entire medical record. Insurance company: needs my last doctor visit, not my entire medical record. Researcher: needs statistical information but no identifiable individual information. Relevant research: zero-knowledge proofs, differential privacy.

A zero-knowledge proof of a statement is a proof that the statement is true without providing you any other information.

Zero-knowledge proof: graph 3-colorability. The problem is NP-hard, so there is no polynomial-time algorithm unless P = NP.

Zero-knowledge proof.
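
To make the idea concrete, here is a minimal sketch (not from the slides) of one round of the classic commit-and-reveal zero-knowledge protocol for graph 3-coloring; the hash-based commitment, the example graph, and its coloring are assumptions made purely for illustration.

```python
import hashlib
import os
import random

def commit(value):
    """Commit to a small integer with a salted hash; return (commitment, opening)."""
    opening = os.urandom(16) + bytes([value])
    return hashlib.sha256(opening).digest(), opening

def open_commitment(commitment, opening):
    """Check the commitment and return the committed value."""
    assert hashlib.sha256(opening).digest() == commitment, "commitment does not open"
    return opening[-1]

# A small graph the prover knows how to 3-color; the coloring stays secret.
edges = [(0, 1), (1, 2), (2, 0), (2, 3)]
secret_coloring = {0: 0, 1: 1, 2: 2, 3: 0}

def one_round():
    # Prover: randomly rename the three colors, then commit to every vertex's color.
    perm = random.sample(range(3), 3)
    committed, openings = {}, {}
    for v, c in secret_coloring.items():
        committed[v], openings[v] = commit(perm[c])

    # Verifier: challenge a single edge chosen at random.
    u, w = random.choice(edges)

    # Prover opens only the two endpoints; the verifier checks the colors differ.
    return open_commitment(committed[u], openings[u]) != open_commitment(committed[w], openings[w])

# Many independent rounds drive a cheating prover's success probability toward zero,
# while each round reveals nothing beyond two distinct random color names.
print(all(one_round() for _ in range(100)))
```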

Digitization of medical records is not the only such system: cars and roads (GPS and privacy), supply chains, transportation systems.

In the past, sociologists could study groups of a few thousand individuals. Today, with social networks, we can study the interactions among hundreds of millions of individuals. One important question is how communities form and evolve.

Future work: consider communities with more external edges than internal edges; find small communities; track communities over time; develop appropriate definitions for communities; understand the structure of different types of social networks.

(Figure: our view of a community: me, family and friends, classmates, colleagues at Cornell, TCS.) More connections outside than inside.

What types of communities are there? How do communities evolve over time? Are all social networks similar?

Are the underlying graphs for social networks similar, or do we need different algorithms for different types of networks? G(1000, 1/2) and G(1000, 1/4) are similar; one is just denser than the other. G(2000, 1/2) and G(1000, 1/2) are similar; one is just larger than the other.

Two G(n,p) graphs are similar even though they have only 50% of their edges in common. What do we mean mathematically when we say two graphs are similar?
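
A quick way to see the 50% figure is to generate two G(n, 1/2) graphs independently and measure the overlap of their edge sets; the sketch below is an illustration added here, not part of the talk.

```python
import itertools
import random

def gnp_edges(n, p):
    """Edge set of an Erdős–Rényi G(n, p) random graph."""
    return {e for e in itertools.combinations(range(n), 2) if random.random() < p}

n = 1000
first, second = gnp_edges(n, 0.5), gnp_edges(n, 0.5)
overlap = len(first & second) / len(first)
print(f"fraction of the first graph's edges also present in the second: {overlap:.3f}")
# Prints a value close to 0.5, even though the two graphs are statistically alike.
```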

(Figures: papers and authors grouped in several ways: by field (Physics, Chemistry, Mathematics, Biology); by type of paper (Survey, Expository, Research); by author background (English-speaking authors, Asian authors, English as a second language, others); and by career stage (Established authors, Young authors).)

Discovering hidden structures in social networks

(Figure: one structure with random noise: the structure alone, after adding random noise, and after randomly permuting the vertices.)

(Figure: two structures in a graph: the dominant structure, randomly permuted; a hidden structure is then added and the graph is randomly permuted again.)

(Figure: another type of hidden structure: the same graph ordered by the dominant structure, randomly permuted, and ordered by the hidden structure.)

Science base. What do we mean by a science base? Large graphs; high-dimensional space.

Theory of Large Graphs. Large graphs with billions of vertices. The exact edges present are not critical. Invariant to small changes in definition. Must be able to prove basic theorems.

Erdős–Rényi G(n,p): n vertices; each of the C(n,2) potential edges is present independently with probability p. The vertex degree has a binomial distribution: Prob(degree = k) = C(n-1, k) p^k (1-p)^(n-1-k). (Figure: histogram of the number of vertices at each vertex degree.)
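
As a sanity check of the binomial formula (an addition, not part of the slides), one can sample a single G(n, p) graph and compare the empirical degree frequencies with C(n-1, k) p^k (1-p)^(n-1-k):

```python
import math
import random

n, p = 200, 0.05
degree = [0] * n
for i in range(n):
    for j in range(i + 1, n):
        if random.random() < p:          # each potential edge is present with probability p
            degree[i] += 1
            degree[j] += 1

for k in (0, 5, 10, 15, 20):
    empirical = sum(d == k for d in degree) / n
    binomial = math.comb(n - 1, k) * p**k * (1 - p) ** (n - 1 - k)
    print(f"degree {k:2d}: empirical {empirical:.3f}  binomial {binomial:.3f}")
```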

Generative models for graphs: vertices and edges are added at each unit of time, with a rule to determine where to place the edges: uniform probability, or preferential attachment, which gives rise to power-law degree distributions.

(Figure: number of vertices versus vertex degree.) Preferential attachment gives rise to the power-law degree distributions common in many graphs.
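
A minimal sketch (an illustration, not from the slides) of a preferential-attachment process with one new vertex and one new edge per step; the degree frequencies it prints fall off polynomially rather than exponentially.

```python
import random
from collections import Counter

def preferential_attachment(num_vertices):
    """Grow a graph one vertex and one edge at a time, attaching to an existing
    vertex chosen with probability proportional to its degree."""
    degree = Counter({0: 1, 1: 1})           # start from a single edge 0-1
    endpoints = [0, 1]                        # each vertex appears once per incident edge
    for v in range(2, num_vertices):
        target = random.choice(endpoints)     # degree-proportional choice
        degree[v] += 1
        degree[target] += 1
        endpoints += [v, target]
    return degree

degrees = preferential_attachment(100_000)
counts = Counter(degrees.values())
for k in (1, 2, 4, 8, 16, 32):
    print(f"fraction of vertices with degree {k:2d}: {counts[k] / len(degrees):.5f}")
```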

Protein interactions: 2730 proteins in the database, 3602 interactions between proteins (Science 1999 July 30; 285:). Only 899 proteins in components. Where are the 1851 missing proteins?

Protein interactions: 2730 proteins in the database, 3602 interactions between proteins (Science 1999 July 30; 285:).

Science Base for High-Dimensional Space.

High dimension is fundamentally different from two- or three-dimensional space.

High-dimensional data is inherently unstable: given n random points in d-dimensional space, essentially all of the n^2 pairwise distances are nearly equal.
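
The concentration of distances is easy to observe numerically; the following sketch (an addition, not from the slides) draws random points from a spherical Gaussian in high dimension and checks that all pairwise distances cluster around sqrt(2d).

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 200, 10_000
points = rng.standard_normal((n, d))          # n points from a spherical Gaussian

# Pairwise distances via ||x - y||^2 = ||x||^2 + ||y||^2 - 2 x.y
sq = (points ** 2).sum(axis=1)
dist_sq = sq[:, None] + sq[None, :] - 2 * points @ points.T
dists = np.sqrt(np.maximum(dist_sq, 0))[np.triu_indices(n, k=1)]

print(f"mean pairwise distance {dists.mean():.1f}   sqrt(2d) = {np.sqrt(2 * d):.1f}")
print(f"relative spread {dists.std() / dists.mean():.4f}")   # tiny: the distances are nearly equal
```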

High dimensions. Intuition from two and three dimensions is not valid in high dimensions: the volume of the unit cube is one in every dimension, while the volume of the unit sphere goes to zero.
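
The volume of the unit ball in d dimensions is pi^(d/2) / Gamma(d/2 + 1), which shrinks rapidly; a short check (an addition, not from the slides):

```python
import math

for d in (1, 2, 3, 5, 10, 20, 50, 100):
    ball = math.pi ** (d / 2) / math.gamma(d / 2 + 1)   # volume of the unit ball in d dimensions
    print(f"d = {d:3d}: unit-cube volume = 1, unit-ball volume = {ball:.3e}")
```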

(Figure: a Gaussian distribution; the probability mass is concentrated between the dotted lines.)

Gaussian in high dimensions.

Two Gaussians.

Distance between two random points from the same Gaussian: the points lie on a thin annulus of radius about sqrt(d), so approximate the Gaussian by a sphere of radius sqrt(d). The average distance between two points is then about sqrt(2d). (Place one point at the North Pole and pick the other at random; almost surely the second point will be near the equator.)

Expected distance between points from two Gaussians whose centers are separated by δ: approximately sqrt(2d + δ^2).

Can separate points from two Gaussians if δ is somewhat larger than d^(1/4): then the gap δ^2 between the squared inter-Gaussian and intra-Gaussian distances exceeds their fluctuations, which are of order sqrt(d).
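
A small simulation (an addition, not from the slides) of this separation condition: within-Gaussian and between-Gaussian distances are compared for centers δ apart, and the gap only becomes clear once δ is a modest multiple of d^(1/4).

```python
import numpy as np

rng = np.random.default_rng(1)
d, n = 2_000, 500

for delta in (1.0, d ** 0.25, 4 * d ** 0.25):
    a = rng.standard_normal((n, d))              # Gaussian centered at the origin
    b = rng.standard_normal((n, d))
    b[:, 0] += delta                             # second Gaussian shifted by delta along one axis
    within = np.linalg.norm(a[: n // 2] - a[n // 2:], axis=1)
    between = np.linalg.norm(a - b, axis=1)
    print(f"delta = {delta:6.2f}: within {within.mean():6.2f} +/- {within.std():.2f}, "
          f"between {between.mean():6.2f} +/- {between.std():.2f}")
```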

Why is there a unique sparse solution?

If there were two s-sparse solutions x1 and x2 to Ax = b, then x = x1 - x2 would be a 2s-sparse solution to Ax = 0. For there to be a 2s-sparse solution to the homogeneous equation Ax = 0, some set of 2s columns of A must be linearly dependent. Assume that the entries of A are zero-mean, unit-variance Gaussians.

Select the first column in a supposedly linearly dependent set of 2s columns; it would have to lie in the subspace spanned by the remaining columns. For a column of independent Gaussian entries this happens with probability zero, so almost surely no set of 2s columns is linearly dependent, and the sparse solution is unique.
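
The probability-zero argument can be checked numerically; the sketch below (an illustration, not from the slides) samples random sets of 2s columns from a Gaussian matrix and verifies that each sampled set has full column rank.

```python
import numpy as np

rng = np.random.default_rng(2)
m, n, s = 50, 200, 10                      # A is m x n; solutions of interest are s-sparse
A = rng.standard_normal((m, n))

for _ in range(1000):
    cols = rng.choice(n, size=2 * s, replace=False)
    assert np.linalg.matrix_rank(A[:, cols]) == 2 * s    # the 2s columns are independent

print("every sampled set of 2s columns of A was linearly independent")
```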

Dimension reduction: project the points onto the subspace containing the centers of the Gaussians, reducing the dimension from d to k, the number of Gaussians.

The centers retain their separation, while the average distance between points is reduced from about sqrt(2d) to about sqrt(2k).

Can separate the Gaussians provided δ is greater than some constant involving k and γ, independent of the dimension d.
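
A sketch of the dimension-reduction idea (an addition, not from the slides); since the centers are unknown in practice, the top-k singular subspace of the data is used here as a computable stand-in for the subspace containing the centers.

```python
import numpy as np

rng = np.random.default_rng(3)
d, k, n, delta = 1_000, 2, 500, 8.0

centers = np.zeros((k, d))
centers[1, 0] = delta                                # two centers delta apart
labels = rng.integers(0, k, size=2 * n)
data = centers[labels] + rng.standard_normal((2 * n, d))

# Project onto the span of the top-k right singular vectors of the data.
_, _, vt = np.linalg.svd(data, full_matrices=False)
projected = data @ vt[:k].T

for name, pts in (("original ", data), ("projected", projected)):
    zeros, ones = pts[labels == 0], pts[labels == 1]
    within = np.linalg.norm(zeros[:100] - zeros[100:200], axis=1).mean()
    between = np.linalg.norm(zeros[:100] - ones[:100], axis=1).mean()
    print(f"{name}: within-cluster distance {within:6.2f}, between-cluster distance {between:6.2f}")
```

After projection the within-cluster distances no longer grow with d, while the between-cluster distances stay roughly δ apart, which is what makes the separation condition independent of the dimension.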

We have just seen what a science base for high-dimensional data might look like. For what other areas do we need a science base?

Ranking is important: restaurants, movies, books, web pages; it is a multi-billion-dollar industry. Collaborative filtering: when a customer buys a product, what else is he or she likely to buy? Dimension reduction. Extracting information from large data sources. Social networks.
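
As a toy illustration of collaborative filtering (an addition, not from the slides, with made-up data), one can recommend the item whose purchase pattern is most similar, by cosine similarity, to the items a customer already bought:

```python
import numpy as np

# Rows are customers, columns are items; 1 means the customer bought the item (hypothetical data).
purchases = np.array([
    [1, 1, 0, 0, 1],
    [1, 1, 1, 0, 0],
    [0, 1, 1, 1, 0],
    [1, 0, 0, 1, 1],
], dtype=float)
items = ["bread", "butter", "jam", "tea", "coffee"]

cols = purchases / np.linalg.norm(purchases, axis=0)    # normalize each item column
similarity = cols.T @ cols                              # cosine similarity between items

customer = 0                                            # this customer bought bread, butter, coffee
scores = similarity[purchases[customer] == 1].sum(axis=0)
scores[purchases[customer] == 1] = -np.inf              # do not re-recommend owned items
print("recommend:", items[int(scores.argmax())])
```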

This is an exciting time for computer science. There is a wealth of data in digital format, information from sensors, and social networks to explore. It is important to develop the science base to support these activities.

Remember that institutions, nations, and individuals who position themselves for the future will benefit immensely. Thank You!