1 Alexander Serebrenik, Serguei Roubtsov, and Mark van den Brand: Dn-based Design Quality Comparison of Industrial Java Applications

2 What makes a good software architecture? Among other things: it should be easy to build additions and to make changes. Why? Software evolves: an architecture that is not flexible enough causes early system decay, when significant changes become too costly. Maintenance typically accounts for 75% or more of the total software workload.

3 Goals of good architectural design: make software easier to change where we want to change it, and minimize the changes we are forced to make. A flexible design has more abstract classes and fewer dependencies between packages.

4 Abstractness/stability balance
Stable packages: do not depend upon classes outside the package; have many dependents; should be extensible via inheritance (abstract).
Instable packages: depend upon many classes outside the package; have no dependents; should not be extensible via inheritance (concrete).
Stability is related to the amount of work required to make a change [Martin, 2000].

5 What does balance mean? A good real-life package must be instable enough to be easily modified, yet generic enough to adapt to evolving requirements with no or only minimal modifications. Hence: contradictory criteria.

6 How to measure instability? Ca (afferent coupling) measures incoming dependencies; Ce (efferent coupling) measures outgoing dependencies. Instability = Ce / (Ce + Ca).
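
The instability formula can be sketched over a package dependency graph. This is an illustration only, not the slides' tooling: `deps` is a hypothetical map from each package to the set of packages it depends on (a simplification of the class-level dependencies a tool like JDepend actually counts).

```python
def instability(pkg, deps):
    """Instability = Ce / (Ce + Ca), where Ce counts outgoing
    dependencies of pkg and Ca counts incoming ones."""
    ce = len(deps.get(pkg, set()))                      # efferent coupling
    ca = sum(1 for p, ds in deps.items() if pkg in ds)  # afferent coupling
    return ce / (ce + ca) if ce + ca else 0.0           # isolated package

# Illustrative graph: "ui" depends on everything (maximally instable, I = 1),
# "util" depends on nothing but is widely used (maximally stable, I = 0).
deps = {"util": set(), "core": {"util"}, "ui": {"core", "util"}}
```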

7 Dn: distance from the main sequence [R. Martin, 1994]
Abstractness = #AbstractClasses / #Classes
Instability = Ce / (Ce + Ca)
Dn = |Abstractness + Instability − 1|
Packages near the main sequence (Dn ≈ 0) balance abstractness and instability; maximally concrete, stable packages fall into the "zone of pain", maximally abstract, instable ones into the "zone of uselessness".
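
Putting the formulas together, Dn per package is a one-liner; a sketch with illustrative counts (the function name and parameters are ours, not from the slides):

```python
def distance_from_main_sequence(n_abstract, n_classes, ce, ca):
    """Dn = |A + I - 1| with A = #AbstractClasses / #Classes and
    I = Ce / (Ce + Ca); 0 means the package sits on the main sequence."""
    abstractness = n_abstract / n_classes
    instability = ce / (ce + ca)
    return abs(abstractness + instability - 1.0)
```

A fully concrete but instable package and a fully abstract but stable one both score Dn = 0; a fully concrete *and* stable package scores Dn = 1, the zone of pain.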

8 Maintainability of software architecture in numbers

9 Instability = Ce / (Ce + Ca): but what does "depend" mean for Ce? The definitions of [Martin 1994], [Martin 2000] and [JDepend] differ. Still: how do we assess an entire architecture rather than individual packages?

10 Two flavors of architecture assessment:
Averages → industrial practice, benchmarking for Java OSS
Distributions → expectation for threshold-exceeding values

11 Benchmarks? 21 Java OSS projects from different domains (EJB frameworks, entertainment, web-app development tools, machine learning, code analysis, ...) and of different ages (2001-2008), each with ≥ 30 original packages. Development status: focus on Stable/Mature, but alpha, beta and inactive projects are also included.

12 Average Dn: the benchmark averages lie between 0.15 and 0.25 (on a 0.00-1.00 scale), within [μ − 2σ; μ + 2σ]. The Dresden OCL Toolkit scores 0.32, exceeding μ + 4σ. But the average is not the whole story!

13 How are the Dn-values distributed? Exponential distribution?

14 Exponential distribution? The exponential density f(x) = λe^(−λx) has support [0; ∞) rather than [0; 1]. Hence, we normalize it to g(x) = λe^(−λx) / (1 − e^(−λ)) on [0; 1], and use maximum-likelihood fitting to find λ.
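
The maximum-likelihood fit for the normalized (truncated) exponential can be sketched as follows. This is our reconstruction, not the authors' code: setting the derivative of the log-likelihood to zero gives a score equation solvable by bisection, assuming the sample mean of the Dn values is below 0.5 so a positive root exists.

```python
import math

def fit_lambda(dn_values, lo=1e-6, hi=100.0, tol=1e-9):
    """MLE of lambda for g(x) = lam*exp(-lam*x) / (1 - exp(-lam)) on [0, 1],
    found by bisection on the score equation (log-likelihood is concave)."""
    n, s = len(dn_values), sum(dn_values)

    def score(lam):
        # d/d(lam) of the log-likelihood: n/lam - sum(x) - n*e^-lam/(1-e^-lam)
        return n / lam - s - n * math.exp(-lam) / (1.0 - math.exp(-lam))

    a, b = lo, hi
    while b - a > tol:
        mid = 0.5 * (a + b)
        if score(mid) > 0.0:
            a = mid
        else:
            b = mid
    return 0.5 * (a + b)

# Illustrative Dn sample (not the paper's data); at the MLE, the model
# mean 1/lam - e^-lam/(1 - e^-lam) matches the sample mean.
sample = [0.05, 0.10, 0.15, 0.20, 0.30, 0.40]
lam = fit_lambda(sample)
```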

15 Benchmarking the Dn distributions: the higher λ, the sharper the peak, the thinner the tail and the smaller the average. Why is λ interesting?

16 Estimate excessively high values! How many packages exceed a threshold z? Under the fitted model this is P(Dn ≥ z), the tail area of g beyond z.

17 Example: in the Dresden OCL Toolkit, 23.7% of the packages have Dn ≥ 0.6, far more than the benchmark P(Dn ≥ 0.6) would lead us to expect.
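
Under the normalized exponential model, this tail probability has a closed form. A sketch (the λ used below is illustrative, not a fitted benchmark value):

```python
import math

def tail_prob(lam, z):
    """P(Dn >= z) under the truncated exponential model
    g(x) = lam*exp(-lam*x) / (1 - exp(-lam)) on [0, 1]."""
    return (math.exp(-lam * z) - math.exp(-lam)) / (1.0 - math.exp(-lam))
```

The expected fraction of packages beyond any threshold z then follows directly from the fitted λ, which is what makes λ interesting for assessment.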

18 Dresden OCL Toolkit: why? The project started in 1998, BUT we are looking at the Eclipse version: version 1.0 appeared in June 2008, version 1.1 in December 2008. It has yet to mature.

19 Can we compare proprietary systems using Dn? Case study: System A and System B both support loan and lease approval business processes, and both employ a three-tier enterprise architecture. System A uses the IBM WebSphere application server; System B uses a custom-made business logic layer implemented on the Tomcat web server. System A comprises 249 non-third-party packages, System B 284.

20 Average Dn: System A scores 0.186, within the benchmark range; System B scores 0.337, exceeding μ + 4σ.

21 What about the distributions? [Plot: percentage of packages beyond a given Dn threshold, for System A, System B and an average OSS project.]

22 Independent assessment: cyclic dependencies. Packages on a cycle (A → B → C → A) must be released and maintained together; the dependencies between packages must not form cycles [Martin, 2000]. JDepend reports the number of cyclic dependencies: System A has 1, System B has 23.
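
A minimal way to flag such cycles can be sketched as follows (our illustration, not JDepend's algorithm; `deps` again maps each package to the packages it depends on):

```python
def cyclic_packages(deps):
    """Return the packages that lie on a dependency cycle: a package
    is cyclic iff it can reach itself via one or more dependency edges."""
    def reaches_itself(start):
        seen, stack = set(), list(deps.get(start, []))
        while stack:  # depth-first search from start's direct dependencies
            p = stack.pop()
            if p == start:
                return True
            if p not in seen:
                seen.add(p)
                stack.extend(deps.get(p, []))
        return False
    return {p for p in deps if reaches_itself(p)}

# A -> B -> C -> A is a cycle; D depends on the cycle but is not on it.
deps = {"A": ["B"], "B": ["C"], "C": ["A"], "D": ["A"]}
```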

23 Layering: [Diagram: layered decomposition of System A vs. System B, with dependencies pointing upwards marked.]

24 Chidamber and Kemerer OO metrics (the lower the better):

         System A  System B  NASA avg  NASA low  NASA high
WMC      10.98     27.91     14.87     45.7      11.1
LCOM     224.25    2506.18   210.11    447.65    78.34

System A (white bars in the slide's histogram) has a larger percentage of low-WMC packages than System B (blue bars); the same holds for LCOM.

25 Conclusions: Java OSS benchmarks for the average Dn; g(x), a statistical model of the Dn distribution; expectations for threshold-exceeding values (applicable to other metrics as well); practical feasibility of Dn-based assessment of industrial applications.

