Counting Algorithms for Knapsack and Related Problems 1 Raghu Meka (UT Austin, work done at MSR, SVC) Parikshit Gopalan (Microsoft Research, SVC) Adam Klivans (UT Austin) Daniel Stefankovic (Univ. of Rochester) Santosh Vempala (Georgia Tech) Eric Vigoda (Georgia Tech)

Can we Count? 2  Count proper 4-colorings? 533,816,322,048! O(1)

Can we Count? 3  Count the number of solutions to a 2-SAT instance? Count the number of perfect matchings? Counting ~ Random Sampling: volume estimation, statistics, statistical physics. The above problems are #P-hard; #P ~ NP in the counting world.

Approximate Counting for #P 4  #P introduced by Valiant in 1979. Don't expect to solve #P-hard problems exactly. Duh. How about approximating? Want relative error: compute p such that (1 - eps) * N <= p <= (1 + eps) * N, where N is the true count.

Approximate Counting for #P 5  Approximate Counting ~ Random Sampling: Jerrum, Valiant, Vazirani 1986. This triggered counting through MCMC. Permanent/Matchings: Jerrum, Sinclair 1988; Jerrum, Sinclair, Vigoda 2001. Volume estimation: Dyer, Frieze, Kannan 1989; Lovász, Vempala 2003. Does counting require randomness?

Deterministic Approximate Counting for #P? 6  Derandomizing simple complexity classes is important. Ultimate goal: derandomize BPP. Primes is in P – Agrawal, Kayal, Saxena 2002. SL = L – Reingold 2005. Most previous work on approximate counting goes through sampling, so deterministic counting needs new techniques. Examples: Weitz 06, Bayati et al. 07. Efficiency?

Our Work 7  First deterministic approximate counting algorithm for Knapsack. Near-linear time sampling. Similar results for multi-dimensional knapsack and contingency tables. Efficient algorithm for learning functions of halfspaces with small error. Techniques of independent interest.

Knapsack 8  Given nonnegative weights w_1, ..., w_n and a capacity C, a knapsack solution is an x in {0,1}^n with w_1 x_1 + ... + w_n x_n <= C. The weights could be exponentially large. Applications: Optimization, Packing, Finance, Auctions
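As a toy-scale illustration of the counting problem, here is a brute-force count over all 2^n choices (an illustrative sketch; the function name and instance are mine, not the talk's):

```python
from itertools import product

def count_knapsack_bruteforce(weights, capacity):
    # Count x in {0,1}^n with w_1*x_1 + ... + w_n*x_n <= capacity.
    return sum(
        1
        for x in product((0, 1), repeat=len(weights))
        if sum(w * xi for w, xi in zip(weights, x)) <= capacity
    )

# Weights (1, 2, 3), capacity 3: the feasible subsets are
# {}, {1}, {2}, {3}, {1,2}, so the count is 5.
print(count_knapsack_bruteforce([1, 2, 3], 3))  # 5
```

Brute force takes 2^n time; the point of the talk is achieving this deterministically in polynomial time, up to a (1 + eps) relative error.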

Counting for Knapsack 9  Estimate the number of knapsack solutions.

Reference                   Complexity
Dyer et al. 1993            Randomized
Morris and Sinclair 1999    Randomized
Dyer 2003                   Randomized (dynamic programming)
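The dynamic-programming approach mentioned above can be sketched for small nonnegative integer weights. This pseudo-polynomial count is exact, but its running time scales with the capacity, which is why exponentially large weights defeat it (a sketch, not the paper's algorithm):

```python
def count_knapsack_dp(weights, capacity):
    # T[c] = number of subsets of the items processed so far
    # whose total weight is exactly c.
    T = [0] * (capacity + 1)
    T[0] = 1  # the empty subset
    for w in weights:
        # iterate downward so each item is counted at most once
        for c in range(capacity, w - 1, -1):
            T[c] += T[c - w]
    return sum(T)  # all subsets of total weight <= capacity

print(count_knapsack_dp([1, 2, 3], 3))  # 5, matching brute force
```

The table has capacity + 1 entries, so the time is O(n * capacity): polynomial in the numeric value of the capacity, exponential in its bit length.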

Counting for Knapsack 10  Thm: Deterministic approximate counting algorithm for knapsack, running in polynomial time. Efficient sampling: after a preprocessing phase, each sample takes time O(n).

Multi-Dimensional Knapsack 11  Given weight vectors w_1, ..., w_k and capacities c_1, ..., c_k, estimate the number of x in {0,1}^n satisfying w_j . x <= c_j for all j.

Multi-Dimensional Knapsack 12  Thm: Deterministic counting algorithm for k-dimensional knapsack, running in polynomial time for any fixed k. Near linear-time sampling after preprocessing. Previously: randomized analogues due to Morris and Sinclair, Dyer.
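For tiny instances the k-dimensional count can again be checked by brute force (an illustrative sketch; the encoding of W as a list of constraint rows and the function name are my own choices):

```python
from itertools import product

def count_multidim_knapsack(W, b):
    # Count x in {0,1}^n satisfying every constraint: W[j] . x <= b[j].
    n = len(W[0])
    return sum(
        1
        for x in product((0, 1), repeat=n)
        if all(sum(w * xi for w, xi in zip(row, x)) <= cap
               for row, cap in zip(W, b))
    )

# Two items, two constraints: only x = (1, 1) is infeasible.
print(count_multidim_knapsack([[1, 2], [2, 1]], [2, 2]))  # 3
```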

Counting Contingency Tables 13  Given row and column sums, count the nonnegative integer tables with those margins. Dyer: randomized poly. time when the number of rows is constant. This work: deterministic poly. time when the number of rows is constant.

          Right-handed  Left-handed  TOTAL
Males           43            9        52
Females         44            4        48
TOTAL           87           13       100

          Right-handed  Left-handed  TOTAL
Males            ?            ?        52
Females          ?            ?        48
TOTAL           87           13       100

Learning Results: Halfspaces 14  A halfspace: f(x) = sign(w_1 x_1 + ... + w_n x_n - theta). Applications: Perceptrons, Boosting, Support Vector Machines

Functions of Halfspaces 15  Examples: intersections of halfspaces, depth-2 neural networks.

Learning Functions of Halfspaces 16  Input: uniformly random examples and their labels. Output: a hypothesis agreeing with f on all but an eps fraction of inputs. Query algorithm to learn functions of k halfspaces. First algorithm even for an intersection of two halfspaces.

Main Technique: Approximation by Branching Programs 17  Explicitly construct a small-width branching program approximating the original one. Motivated by the monotone trick of Meka and Zuckerman.

Read Once Branching Programs 18  Layered directed graph with n layers and a bounded number of vertices per layer (the width). Edges go between consecutive layers and are labeled 0 or 1. Input: x in {0,1}^n, one bit read per layer. Output: label of the final vertex reached.

Counting for ROBPs 19  Can count the number of accepting inputs by dynamic programming, in time linear in the size of the program.
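The dynamic program on this slide can be sketched directly: sweep the layers once, propagating the number of input prefixes that reach each vertex. The encoding below, with each layer as a dict mapping a vertex to its 0- and 1-successors, is my own illustrative choice:

```python
from collections import defaultdict

def count_accepting(layers, start, accept):
    # layers[i][v] = (successor of v on bit 0, successor of v on bit 1).
    # paths[v] = number of bit strings read so far that end at v.
    paths = {start: 1}
    for layer in layers:  # one layer per input bit
        nxt = defaultdict(int)
        for v, cnt in paths.items():
            e0, e1 = layer[v]
            nxt[e0] += cnt
            nxt[e1] += cnt
        paths = dict(nxt)
    return paths.get(accept, 0)

# The knapsack instance with weights (1, 2) and capacity 2, encoded as an
# ROBP whose vertices record the partial sum; 3 of the 4 inputs are accepted.
layers = [
    {'sum0': ('sum0', 'sum1')},     # read x1 (weight 1)
    {'sum0': ('acc', 'acc'),        # 0 + 0 and 0 + 2 both fit
     'sum1': ('acc', 'rej')},       # 1 + 2 = 3 exceeds 2
]
print(count_accepting(layers, 'sum0', 'acc'))  # 3
```

Each vertex and edge is touched once, so the running time is linear in the program size; that is exactly why the width matters.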

Knapsack computable by ROBPs 20  Vertices in layer i correspond to partial sums of the first i weights. Can we use counting for ROBPs? No – the width is too large. Our observation: Yes – reduce the width by approximating.

Knapsack and Monotone ROBPs 21  Order the vertices in each layer by their partial sums; the acceptance probability is then monotone in the partial sum.

Approximating with Small Width 22  Intuition: we only need to know where the acceptance probability increases.

Approximating ROBP: Rounding 23  Say we know where the jumps occur. How about the edges? Round edges to the nearest surviving vertex. Approximation: the error factor per layer is 1 + eps/n, so the total error over n layers is at most (1 + eps/n)^n, roughly e^eps.

Computing an Approximating ROBP 24  Problem: finding the acceptance probabilities is itself another knapsack instance. Solution: build the ROBP backward, one layer at a time; when rounding layer i, the following layers are already known. Build the ROBP backwards with binary search.
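The two ingredients of the technique, backward acceptance counts and rounding to powers of (1 + delta), can be sketched on an explicit small program. This is illustrative only: it materializes the full-width program first, which is exactly what the real algorithm avoids by constructing the rounded layers backward with binary search. All names here are my own:

```python
import math

def backward_counts(prog, accept):
    # acc[i][v] = number of bit-string suffixes leading from vertex v
    # in layer i to the accepting vertex.
    n = len(prog)
    acc = [dict() for _ in range(n + 1)]
    final = {u for edges in prog[-1].values() for u in edges}
    for v in final:
        acc[n][v] = 1 if v == accept else 0
    for i in range(n - 1, -1, -1):  # sweep the layers backward
        for v, (e0, e1) in prog[i].items():
            acc[i][v] = acc[i + 1][e0] + acc[i + 1][e1]
    return acc

def rounded_width(acc, delta):
    # Vertices whose acceptance counts share a power-of-(1 + delta) bucket
    # can be merged at a (1 + delta) multiplicative cost per layer, so the
    # rounded program keeps one vertex per bucket: O(n / delta) width.
    width = 0
    for layer in acc:
        buckets = {-1 if c == 0 else int(math.log(c) / math.log(1 + delta))
                   for c in layer.values()}
        width = max(width, len(buckets))
    return width

# The knapsack program for weights (1, 2) and capacity 2 again.
prog = [
    {'sum0': ('sum0', 'sum1')},
    {'sum0': ('acc', 'acc'), 'sum1': ('acc', 'rej')},
]
acc = backward_counts(prog, 'acc')
print(acc[0]['sum0'])           # 3 accepted inputs, matching the forward DP
print(rounded_width(acc, 0.1))  # width after bucketing
```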

Thank You 25