(no subject)

Date: 2010-02-14 10:39 pm (UTC)
From: [identity profile] baljemmett.livejournal.com
Very interesting; thanks for sharing!

Comparing the sort of issues Coverity finds in Wine¹ to those cppcheck finds underlines how right they are with regard to the confidence-denting nature of false positives, and the intricacies of parsing 'valid' code in the first place. That's pretty much the reason I've not even dared to run such a tool over the codebase I'm responsible for at work; the commercial Coverity tool could probably do a reasonable job, but anything that involves money is not something I can successfully argue for. A nuisance.

¹ From 'outside' observation; I only have one patch in, but I've lurked on wine-devel since 2003-ish, so I see the discussions!

(no subject)

Date: 2010-02-14 11:05 pm (UTC)
From: [identity profile] ewx.livejournal.com
Perhaps they should bill per checker rather than (or as well as) per LOC; if a given checker produces too many false positives on your code, you stop using it and don't pay for it. Normalizing the graph of bugs found against checkers enabled might help with the presentational difficulty exhibited in the figure 1/figure 2 graphs too.

(no subject)

Date: 2010-02-15 12:46 pm (UTC)
From: [personal profile] simont
It sounds from that article as if Coverity is considerably better than any static checker I've encountered in person. People have occasionally sent me logs derived from running static checkers against my free software (most often PuTTY, as you might expect), and generally what's happened is that they produce thousands and thousands of lines of yammering, and when I spot-check a few, they all turn out to be pointless knee-jerk reactions that don't even bother to check the context – a good example is "you've used strcpy, have you checked for buffer overruns?" when a cursory examination of the preceding code would show that indeed I did check for buffer overruns, and furthermore I did so correctly.

My opinion has therefore tended to be that such a tool might be useful if you were using it from day one of starting a new code base – the warnings would pop up one or five at a time as you added code, and if you made sure to fix them as soon as they showed up, you could write the whole program so that the checker gave it a clean bill of health. But applying it to any pre-existing body of code yields so many false positives that you'd lose the will to live long before investigating enough of them to find a real problem, and most likely miss the one real problem when it did show up because your eyes had glazed over.

(no subject)

Date: 2010-02-15 01:38 pm (UTC)
From: [identity profile] ewx.livejournal.com
Yes, Coverity's checker is seriously clever whole-program analysis, not the kind of glorified style-checker you often see described as “static analysis”. Perhaps you should try to get PuTTY into their open source scanning project (http://scan.coverity.com).

(no subject)

Date: 2010-02-15 01:50 pm (UTC)
From: [personal profile] simont
So how do they arrange seriously clever whole-program analysis when at the same time that article said they were willing to just ignore any chunk of code their parser framework couldn't make sense of? The two sound contradictory to me.

(no subject)

Date: 2010-02-15 01:51 pm (UTC)
From: [identity profile] ewx.livejournal.com
No idea, I'm afraid!
