1 Using Social Network Analysis Methods for the Prediction of Faulty Components Gholamreza Safi

2 List of Contents
- Motivations
- Our Vision
- Goal Models
- Software Architectural Slices (SAS)
- Predicting Fault-prone Components
- Comparison with Related Works
- Conclusions

3 Motivations
- Finding errors as early as possible in the software life cycle is important
- Using dependency data and socio-technical analysis:
  - Considering dependencies between software elements
  - Considering interactions between developers during the life cycle
[Bird et al. 2009]

4 Our Vision
- Provide a facility for considering the concerns of roles other than the developers who participate in the development process
- Not directly, as socio-technical based approaches do
  - Complexity
- Some basis is needed to model concerns
  - Goal models and software architectures

5 Goal Models

6 Goal Models and Software Architectures
- Software Architecture (SA): a set of principal design decisions
- Goal models represent the different ways of satisfying a high-level goal
- They can have impacts on the SA
- Components and connectors are a common representation of an SA
- So we should show the impact of goal models on this representation of the SA

7 Software Architectural Slices (SAS)
- A Software Architectural Slice (SAS) is a part of a software architecture (a subset of interacting components and the related connectors) that provides the functionality of a leaf-level goal of a goal model graph
- An algorithm is designed to extract the SASs of a system, given the goal model and the entry points of the leaf-level goals in the SA (see the sketch below)
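As a rough illustration of the extraction step (not the thesis's actual algorithm), the sketch below assumes the architecture is available as a component dependency graph and that each leaf-level goal is mapped to an entry-point component; the slice is then the set of components reachable from that entry point. All edges and the `extract_sas` helper are illustrative assumptions.

```python
from collections import deque

# Hypothetical component-dependency graph: component -> components it uses.
# The edge set is an assumption for illustration, not the system's real connectors.
ARCHITECTURE = {
    "User Interface": ["User Manager"],
    "User Manager": ["User Data Interface"],
    "User Data Interface": [],
    "Timetable Manager": ["User Data Interface"],
}

# Assumed mapping of leaf-level goals to their entry-point components in the SA.
GOAL_ENTRY_POINTS = {
    "Send request for topic": "User Interface",
    "Decrypt received message": "User Manager",
}

def extract_sas(architecture, entry_point):
    """Return the architectural slice: every component reachable from the entry point."""
    slice_components, queue = set(), deque([entry_point])
    while queue:
        component = queue.popleft()
        if component in slice_components:
            continue
        slice_components.add(component)
        queue.extend(architecture.get(component, []))
    return slice_components

for goal, entry in GOAL_ENTRY_POINTS.items():
    print(goal, "->", sorted(extract_sas(ARCHITECTURE, entry)))
```

With these assumed edges, "Send request for topic" yields the same three components listed for it in the example on the next slide.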

8 Example of SAS

Leaf-level Goal in Goal Model              | Slice
Send request for topic                     | User Interface, User Manager, User Data Interface
Decrypt received message                   | User Manager
Send Requests for Interests                | User Interface, User Manager, User Data Interface
Send Request for time table                | User Interface, User Manager, User Data Interface, Time Table Manager
Choose schedule Automatically              | User Interface, User Manager, User Data Interface, Event Manager, Event Data Interface
Select Participants Explicitly             | User Interface, User Manager, User Data Interface
Collect Timetables by system from Agents   | User Interface, Agent Manager Interface

(Architecture diagram, by layer: Interface Layer contains User Interface; Business Logic contains User Manager, Timetable Manager, Event Manager, Agent Manager Interface; Data Layer contains User Data Interface, Event Data Interface.)

9 Predicting Fault-prone Components
- Social network analysis methods
- Metrics:
  - Connectivity metrics: individual nodes and their immediate neighbors
    - Degree
  - Centrality metrics: relations between non-immediate neighbor nodes in the network
    - Closeness
    - Betweenness
(A computation sketch follows below.)
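A sketch of how such metrics can be computed over the component graph, using networkx. The edge list below is a guess reconstructed from the layered diagram on the following slides, so the values need not match the slide's table exactly. Note also that the slide's "closeness" looks like the average shortest-path distance (sum of distances divided by n-1), which is the reciprocal of networkx's normalized closeness centrality, so it is computed directly here.

```python
import networkx as nx

# Assumed component-connector graph, reconstructed from the layered diagram;
# the exact edge set is a guess, so numbers may differ from the slide's table.
edges = [
    ("User Interface", "User Manager"),
    ("User Interface", "Timetable Manager"),
    ("User Interface", "Event Manager"),
    ("User Interface", "Agent Manager Interface"),
    ("User Manager", "User Data Interface"),
    ("Timetable Manager", "User Data Interface"),
    ("Timetable Manager", "Event Data Interface"),
    ("Event Manager", "Event Data Interface"),
    ("Agent Manager Interface", "Event Data Interface"),
]
G = nx.Graph(edges)
n = G.number_of_nodes()

degree = dict(G.degree())                                     # connectivity metric
betweenness = nx.betweenness_centrality(G, normalized=False)  # centrality metric

# The slide's "closeness" appears to be the average shortest-path distance to
# the other nodes (sum of distances / (n - 1)), so compute it directly.
closeness = {
    v: sum(nx.shortest_path_length(G, v).values()) / (n - 1) for v in G
}

for v in G:
    print(f"{v}: degree={degree[v]}, closeness={closeness[v]:.2f}, "
          f"betweenness={betweenness[v]:.2f}")
```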

10 Component Metrics

Components                | Degree | Closeness    | Betweenness
User Interface            | 4      | 8/6 = 1.33   | 6+1+1+1 = 9
User Manager              | 2      | 11/6 = 1.83  | 1/2+1/2+1/2 = 1.5
Timetable Manager         | 3      | 9/6 = 1.5    | 1/2+1/3+1+1/3+1/2+1/2 = 2.88
Event Manager             | 2      | 11/6 = 1.83  | 1/3+1/3 = 2/3
Agent Manager Interface   | 2      | 11/6 = 1.83  | 1/3+1/3 = 2/3
User Data Interface       | 2      | 14/6 = 2.33  | 0
Event Data Interface      | 3      | 12/6 = 2     | 0

(Same layered architecture diagram as slide 8: Interface Layer with User Interface; Business Logic with User Manager, Timetable Manager, Event Manager, Agent Manager Interface; Data Layer with User Data Interface, Event Data Interface.)

11 Aggregated Metrics based on SAS

Leaf-level Goal                            | Aggregated Degree | Aggregated Closeness | Aggregated Betweenness
Send request for topic                     | 8                 | 33/6                 | 10.5
Decrypt received message                   | 2                 | 11/6                 | 1.5
Send Requests for Interests                | 8                 | 33/6                 | 10.5
Send Request for time table                | 11                | 42/6 = 7             | 10.5 + 2.88 = 13.38
Choose schedule Automatically              | 13                | 58/6                 | 10.5 + 2/3 = 11.16
Select Participants Explicitly             | 8                 | 33/6                 | 10.5
Collect Timetables by system from Agents   | 6                 | 19/6                 | 9 + 2/3 = 9.66

Metrics for individual components are not very useful for test-related analysis, since they only provide information for unit-level testing. In a real computation, many components collaborate with each other to provide a service or satisfy a goal of the system, so a bug in one of them can have a bad impact on all of the other collaborators. (A summation sketch follows below.)
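The aggregation here appears to be a plain sum of the per-component metrics over the components of each slice; for example, the aggregated degree for "Send request for topic" is 4 + 2 + 2 = 8 and its aggregated betweenness is 9 + 1.5 + 0 = 10.5, matching the table. A minimal sketch of that summation, with the per-component values and slices hard-coded from slides 8 and 10:

```python
# Per-component metrics (degree, closeness, betweenness) from slide 10 and
# slices from slide 8, hard-coded here purely for illustration.
COMPONENT_METRICS = {
    "User Interface": (4, 8 / 6, 9.0),
    "User Manager": (2, 11 / 6, 1.5),
    "User Data Interface": (2, 14 / 6, 0.0),
    "Agent Manager Interface": (2, 11 / 6, 2 / 3),
}

SLICES = {
    "Send request for topic": ["User Interface", "User Manager", "User Data Interface"],
    "Collect Timetables by system from Agents": ["User Interface", "Agent Manager Interface"],
}

def aggregate(slice_components):
    """Sum each metric over all components that participate in the slice."""
    degree = sum(COMPONENT_METRICS[c][0] for c in slice_components)
    closeness = sum(COMPONENT_METRICS[c][1] for c in slice_components)
    betweenness = sum(COMPONENT_METRICS[c][2] for c in slice_components)
    return degree, closeness, betweenness

for goal, comps in SLICES.items():
    d, c, b = aggregate(comps)
    print(f"{goal}: degree={d}, closeness={c:.2f}, betweenness={b:.2f}")
```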

12 Logistic Regression

13 Logistic Regression and Architectural Slices
- We want to select the beta values for the three aggregated metrics
- After this, by using f(z), we can find the probability that the corresponding architectural slice encounters at least one error (see the sketch below)
- The process of making a logistic regression model ready for prediction contains two stages: training and validation
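A minimal sketch of the model described here, assuming the usual logistic form: z is a linear combination of the three aggregated metrics with beta coefficients, and f(z) = 1 / (1 + e^(-z)) gives the probability that the slice encounters at least one error. The beta values below are placeholders; the training stage on the next slide is what actually selects them.

```python
import math

def f(z):
    """Logistic function: maps any real z to a probability in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-z))

def fault_probability(degree, closeness, betweenness, betas):
    """P(the slice encounters at least one error) under the logistic model."""
    b0, b1, b2, b3 = betas
    z = b0 + b1 * degree + b2 * closeness + b3 * betweenness
    return f(z)

# Placeholder coefficients; real values come from the training stage.
example_betas = (-3.0, 0.2, 0.1, 0.05)
print(fault_probability(8, 33 / 6, 10.5, example_betas))
```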

14 How to train and validate?
- Consider a test suite and, based on the number of failed test cases, compute the probability of a slice being faulty (number of failed test cases for that slice / total number of test cases)
- Then, using the metrics, try to find beta values that make f(z) close to the computed probability (a training sketch follows below)
- Evaluate the model against actual data; validation measures help us determine the quality of our initial model
- The process of training and validation should be repeated until we reach a certain level of confidence in our model
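One way to realize this loop, sketched under assumptions: the failure ratio per slice is used as a soft target, and the betas are fitted by gradient descent on the cross-entropy between f(z) and that target. The test counts and metric values are made up, and the slides do not prescribe this particular fitting procedure.

```python
import numpy as np

# Hypothetical training data: aggregated (degree, closeness, betweenness) per slice.
X = np.array([[8.0, 5.50, 10.50],
              [2.0, 1.83, 1.50],
              [13.0, 9.67, 11.16],
              [6.0, 3.17, 9.66]])

failed = np.array([6, 1, 12, 4])   # failed test cases touching each slice (made up)
total = 20                         # total test cases in the suite (made up)
y = failed / total                 # probability of each slice being faulty

# Standardize the metrics so a plain gradient-descent step behaves well,
# then prepend an intercept column for beta_0.
Xs = (X - X.mean(axis=0)) / X.std(axis=0)
X1 = np.hstack([np.ones((len(Xs), 1)), Xs])
betas = np.zeros(X1.shape[1])

# Gradient descent on the cross-entropy between f(z) and the target probabilities:
# one simple way to "make f(z) close to the computed probability".
for _ in range(5000):
    p = 1.0 / (1.0 + np.exp(-X1 @ betas))
    betas -= 0.1 * (X1.T @ (p - y)) / len(y)

print("fitted betas:", betas)
print("predicted vs observed:", list(zip(np.round(1 / (1 + np.exp(-X1 @ betas)), 2), y)))
```

For the validation stage, held-out slices would be scored with the fitted betas and checked against the measures on the next slide.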

15 Measures for Validation
- Precision: ratio of (true positives) to (true positives + false positives)
  - True positives: the number of error-prone slices that are also determined to be error-prone by the model
  - False positives: slices that have no errors but are reported as having errors by the approach
- Recall: ratio of (true positives) to (true positives + false negatives)
  - False negatives: slices that are mistakenly considered error-free by the approach
- F score: (formula below)
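The F-score formula itself did not survive the transcript (it was presumably an image); assuming the slide intends the standard definitions, they are:

```latex
\mathrm{Precision} = \frac{TP}{TP + FP}, \qquad
\mathrm{Recall} = \frac{TP}{TP + FN}, \qquad
F = \frac{2 \cdot \mathrm{Precision} \cdot \mathrm{Recall}}{\mathrm{Precision} + \mathrm{Recall}}
```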

16 Related Works
- Zimmermann and Nagappan: use dependency data between different parts of the code
  - These kinds of techniques are accurate
  - Central components can have more errors
- Bird et al.: use a socio-technical network, considering the work of developers and the updates that they make to files
  - Similar to Meneely et al.
  - The main idea: a developer who participated in developing different files could have the same impact on those files
    - Make the same sort of faults

17 Comparison with Related Works
- Our approach has the benefits of dependency-data approaches
  - Dependency between SA components
  - Dependency between goals and the SA
- Goal models introduce some advantages compared to social-network-based approaches:
  - Those approaches only consider simple contributions of developers, such as updating a file
  - Goals and their relations show the concerns of stakeholders, so the impacts of different stakeholders are considered implicitly
  - Other aspects of a developer:
    - Lack of knowledge in using a specific technology
    - Strong experience of a developer in using a method or technology
- Augmenting our approach to consider developer interaction is also possible

18 Conclusion
- Introduced metrics based on dependencies between components of the software architecture
- Introduced aggregated metrics, using architectural slices, to show the impact of goal selection on error prediction
- Prediction using logistic regression
  - Training
  - Validation
- Compared to existing works:
  - We can consider roles other than developers
  - Different aspects of the contributions of developers
- Evaluations

19 References
T. Zimmermann and N. Nagappan, "Predicting Subsystem Failures using Dependency Graph Complexities," Proceedings of the 18th IEEE International Symposium on Software Reliability Engineering (ISSRE '07), Trollhattan, Sweden, 2007, pp. 227-236.
A. Meneely, L. Williams, W. Snipes, and J. Osborne, "Predicting Failures with Developer Networks and Social Network Analysis," Proceedings of the 16th ACM SIGSOFT International Symposium on Foundations of Software Engineering (SIGSOFT '08/FSE-16), Atlanta, Georgia, 2008, p. 13.
C. Bird, N. Nagappan, H. Gall, B. Murphy, and P. Devanbu, "Putting It All Together: Using Socio-technical Networks to Predict Failures," Proceedings of the 20th International Symposium on Software Reliability Engineering (ISSRE '09), Mysuru, Karnataka, India, 2009, pp. 109-119.
