1 Weird things that surprise academics trying to commercialize a static checking tool.
Andy Chou, Ben Chelf, Seth Hallem, Charles Henri-Gros, Bryan Fulton, Ted Unangst, Chris Zak (Coverity); Dawson Engler (Stanford)

2 A naïve view
- Initial market analysis:
  - "We handle Linux, BSD; we just need a pretty box!"
  - Not quite.
- First rule of static analysis: no check, no bug.
  - Two first-order examples we never would have guessed:
  - Problem 1: if you can't find the code, you can't check it.
  - Problem 2: if you can't compile the code, you can't check it.
- And then: how do you make money on a software tool?
  - "Tools. Huh. Tools are hard." (any VC in early 2000)

3 Myth: the C (or C++) language exists.
- Well, not really. The standard is not a compiler.
  - What exists: gcc-2.1.9-ac7-prepatch-alpha, xcc-i-did-not-understand-pages-4,33,208-242-of-the-standard.
  - Oh. And Microsoft. Conformance = competitive disadvantage. Do the math on how this deforms .c files.
  - Basic LALR law: what can be parsed will be written.
- Rule: static analysis must compile code to check it.
  - If you cannot (correctly) parse the "language," you cannot check it.
  - Common (mis)usage model: an "allegedly C" header file does some bizarre not-C thing and is included by all source files. The customer watches your compiler emit voluminous parse errors. (This is not impressive.)
  - Of course: it gets way worse with C++ (which we support).

4 Some bad examples to find in headers
- Banal. But these take more time than you can believe:

      unsigned x = 0xdead_beef;
      int foo(int a, int a);
      unsigned x @ "TEXT";
      __packed (...) struct foo { ... }   // unless "-packed"!
      void x;
      short x;
      int *y = &(int)x;
      End lines with "\r" rather than "\n"

- And, of course, asm:

      #pragma asm
      mov eax, eab
      #pragma end_asm

      __asm mov eax, eab       // newline = end
      __asm [ mov eax, eab ]   // "]" = end

      asm foo() { mov eax, eab; }

5 Microsoft example: precompiled headers
- Spec: "The compiler treats all code occurring before the .h file as precompiled. It skips to just beyond the #include directive associated with the .h file, uses the code contained in the .pch file, and then compiles all code after filename."
- Implication:

      I can put whatever I want here.  It doesn't have
      to compile.  If your compiler gives an error it sucks.
      #include

- It gets worse: on-the-fly header fabrication.

6 Solution: pre-preprocessing rewrite rules.
- Supply regular expressions to rewrite bad constructs:

      ppp_translate("/#pragma asm/#if 0/");
      ppp_translate("/#pragma end_asm/#endif/");

- Before and after:

      #pragma asm            #if 0
      ...              =>    ...
      #pragma end_asm        #endif

7 What this all means concretely.
- We use the Edison Design Group (EDG) frontend.
  - Pretty much everyone uses it. Been around since 1989.
  - Aggressive support for gcc, Microsoft, etc. (bug compatibility!)
- Still: Coverity is by far the largest source of EDG bugs:
  - 146 parsing test cases (i.e., we got burned)
  - 219 compiler command-line translation test cases (ibid.)
  - 163 places where the frontend is hacked ("#ifdef COVERITY")
- Still need a custom rewriter for many supported compilers (lines of code):

       205 hpux_compilers.c
       215 iar_compiler.c
       240 ti_compiler.c
       251 green_hills_compiler.c
       377 intel_compilers.c
       453 diab_compilers.c
       453 sun_compilers.c
       485 arm_compilers.c
       617 gnu_compilers.c
       748 microsoft_compilers.c
      1587 metrowerks_compilers.c
      ...

8 Academics don’t understand money. u “We’ll just charge per seat like everyone else” –Finish the story: “Company X buys three Purify seats, one for Asian, one for Europe and one for the US…” u Try #2: “we’ll charge per lines of code” –“That is a really stupid idea: (1) …, (2) …, … (n) …” –Actually works. I’m still in shock. Would recommend it. u Good feature for seller: –No seat games. Revenue grows with code size. Run on another code base = new sale. u Good feature for buyer: No seat-model problems –Buy once for project, then done. No per-seat or per- usage cost; no node lock problems; no problems adding, removing or renaming developers (or machines) –People actually seem to like this pitch.

9 Some experience.
- Surprise: sales guys are great.
  - Easy to evaluate. Modular.
- Company X buys the tool, then downsizes. Good or bad?
  - For sales, very good: X fires 110 people. They get jobs elsewhere and recommend Coverity. 4 closed deals.
- Large companies "want" to be honest.
  - Veritas: wants usage monitoring so they don't accidentally violate the license!
- What can you sell?
  - The user is not the same as the tool builder. Naïve. Inattentive. Cruel.
  - Makes it difficult to deploy anything sophisticated.
  - Example: statistical inference.
  - In some ways, shipped checkers lag well behind our research ones.

10 "No, your tool is broken: that's not a bug"
- "No, the loop will go through once!"

      for (i = 1; i < 0; i++) { ...dead code... }

      for (s = 0; s < n; s++) {
          ...
          switch (s) {
          case 0: assert(0); return;
          ...
          }
      }
      ...dead code...

- "No, && is 'or'!"

      void *foo(void *p, void *q) {
          if (!p && !q)
              return 0;
          ...

- "No, ANSI lets you write 1 past the end of the array!"

      unsigned p[4];
      p[4] = 1;

  - ("We'll have to agree to disagree." !!!!)

11 Coverity's commercial history
- 2000-2002: Breakthrough technology out of Stanford. Meta-level compilation checker ("Stanford Checker") detects 2000+ bugs in Linux.
- 2002: Company incorporated. Deluge of requests from companies wanting access to the new technology. First customer signs: Sanera Systems.
- 2003: 7 early-adopter customers, including VMware, Sun, Handspring. Coverity achieves profitability.
- 2004-05: Product growth and proliferation. Version 2.0 product released. Company quadruples. 70+ customers including Juniper, Synopsys, Oracle, Veritas, nVidia, palmOne. Self-funded.

12 A partial list of 70+ customers, by segment: business applications, government, OS, EDA, security, storage, open source, networking, embedded.

13 Summary
- Static analysis
  - Better at checking surface properties.
  - Big wins: don't run code; all paths; easy diagnosis.
  - Low incremental cost per line of code.
  - Can get results in an afternoon: much easier to commercialize.
  - 10-100x more bugs.
- Model checking
  - Better at checking code implications.
  - Major win over testing: explore all actions a state can do before going to the next.
  - Makes low-probability events as probable as high-probability ones.
  - Works very well when interleavings are massive and bugs are horrible.

14 Open Q: how to get the bugs that matter?
- Myth: all bugs matter and all will be fixed.
  - *FALSE*
  - Find 10 bugs, all get fixed. Find 10,000...
- Reality
  - All sites have many open bugs (observed by us & PREfix).
  - The myth lives because the state of the art is so bad at bug finding.
  - What users really want: the 5-10 that "really matter."
- General belief: bugs follow a 90/10 distribution.
  - Out of 1000 bugs, 100 account for most of the pain.
  - Fixing the other 900 wastes resources and may make things worse.
- How to find the worst? No one has a good answer to this.

15 Open Q: Do static tools really help?
- Dangers: opportunity cost; turning deterministic bugs into non-deterministic ones.

  [Figure: three diagrams of the overlap between "bugs found" and "bugs that mattered": "The hope," "The null hypothesis," "A possibility."]

16 Some cursory static analysis experiences
- Bugs are everywhere.
  - Initially we worried we'd have to resort to historical data...
  - 100 checks? You'll find bugs (if not, there's a bug in the analysis).
- Finding errors is often easy; saying why is hard.
  - Have to track and articulate all the reasons.
- Ease of inspection is *crucial*.
  - Extreme: don't report errors that are too hard.
- The advantage of checking human-level operations:
  - Easy for people? Easy for analysis. Hard for analysis? Hard for people.
- Soundness is not needed for good results.

17 Myth: more analysis is always better
- It does not always improve results, and can make them worse.
- The best error is:
  - Easy to diagnose.
  - A true error.
- The more analysis used, the worse the error is on both counts:
  - More analysis = the harder the error is to reason about, since the user has to manually emulate each analysis step.
  - As the number of steps increases, so does the chance that one went wrong. No analysis = no mistake.
- In practice:
  - Demote errors based on how much analysis they required.
  - Revert to weaker analysis to cherry-pick easy bugs.
  - Give up on errors that are too hard to diagnose.

18 Myth: Soundness is a virtue.
- Soundness: find all bugs of type X.
  - Not a bad thing. More bugs are good.
  - BUT: you can only do it if you check weak properties.
- What soundness really wants to be when it grows up:
  - Total correctness: find all bugs.
  - Most direct approximation: find as many bugs as possible.
- Opportunity cost:
  - Diminishing returns: the initial analysis finds most of the bugs.
  - Spend effort on whatever gets the next biggest set of bugs.
  - Easy experiment: compare bug counts for sound vs. unsound tools.
- End-to-end argument:
  - "It generally does not make much sense to reduce the residual error rate of one system component (property) much below that of the others."
