
1 A Statistical Approach for Efficient Crawling of Rich Internet Applications M. E. Dincturk, S. Choudhary, G. v. Bochmann, G.-V. Jourdan, I. V. Onut Software Security Research Group (SSRG), University of Ottawa, Canada, in collaboration with IBM. Presentation given at the International Conference on Web Engineering (ICWE), Berlin, 2012

2 Software Security Research Group (SSRG), University of Ottawa & IBM Abstract  Modern web technologies like AJAX result in more responsive and usable web applications, sometimes called Rich Internet Applications (RIAs). Traditional crawling techniques are not sufficient for crawling RIAs. We present a new strategy for crawling RIAs. This new strategy is based on the concept of “Model-Based Crawling” introduced in [3] and uses statistics accumulated during the crawl to select what to explore next with a high probability of uncovering new information. The performance of our strategy is compared with our previous strategy, as well as with the classical Breadth-First and Depth-First strategies, on two real RIAs and two test RIAs. The results show that this new strategy is significantly better than the Breadth-First and Depth-First strategies (which are widely used to crawl RIAs), and outperforms our previous strategy while being much simpler to implement.

3 Software Security Research Group (SSRG), University of Ottawa & IBM SSRG Members University of Ottawa  Prof. Guy-Vincent Jourdan  Prof. Gregor v. Bochmann  Suryakant Choudhary (Master student)  Emre Dincturk (PhD student)  Khaled Ben Hafaiedh (PhD student)  Seyed M. Mir Taheri (PhD student)  Ali Moosavi (Master student) In collaboration with Research and Development, IBM® Rational® AppScan® Enterprise  Iosif Viorel Onut (PhD)

4 Software Security Research Group (SSRG), University of Ottawa & IBM Overview
 Background – The evolving Web – Why crawling
 RIA crawling – State identification – State equivalence
 Crawling strategies – Crawling objectives – Breadth-First and Depth-First – Model-based strategies
 Statistical approach – Probabilistic crawling strategy – Experimental results
 On-going work and conclusions

5 Software Security Research Group (SSRG), University of Ottawa & IBM The evolving Web
 Traditional Web – static HTML pages identified by a URL – HTML pages dynamically created by the server, identified by a URL with parameters
 Rich Internet Applications (Web-2) – pages contain executable code (e.g. JavaScript, Silverlight, Adobe Flex...) executed in response to user interactions or time-outs (so-called events); a script may change the displayed page (the “state” of the application changes) – with the same URL – AJAX: the script may interact asynchronously with the server to update the page.

6 Software Security Research Group (SSRG), University of Ottawa & IBM Why crawling
 Objective A: find all (or all “important”) pages – for content indexing – for security testing – for accessibility testing
 Objective B: find all links between pages – for ranking pages, e.g. Google ranking in search queries – for building a graph model of the application: pages (or application states) are nodes; links (or events) are edges between nodes

7 Software Security Research Group (SSRG), University of Ottawa & IBM Example web application and models  Example web site: http://www.eecs.uottawa.ca/~bochmann/ [Figure: two models of the site – a traditional model where pages with URLs (Bochmann, DSRG, Pub, Hobbies, Painter B) are connected by links such as “research group”, “publications” and “hobbies”, and a Web-2 model where states (no URL) are connected by events]

8 Software Security Research Group (SSRG), University of Ottawa & IBM IBM Rational AppScan Enterprise Edition Product overview IBM Security Solutions

9 Software Security Research Group (SSRG), University of Ottawa & IBM IBM Rational AppScan Suite – Comprehensive Application Vulnerability Management [Diagram: the AppScan products (AppScan Source, AppScan Build, AppScan Tester, AppScan Standard, AppScan Enterprise, AppScan Reporting Console, AppScan onDemand) mapped onto the application lifecycle – Requirements, Code, Build, QA, Pre-Prod, Production – covering security requirements definition before design & implementation, security testing built into the IDE and the build process, security/compliance testing incorporated into testing & remediation workflows, outsourced testing for security audits & production site monitoring, and overall oversight, control, policy and audits]

10 Software Security Research Group (SSRG), University of Ottawa & IBM AppScan Enterprise Edition capabilities
 Large scale application security testing – Client-server architecture designed to scale – Multiple users running multiple assessments – Centralized repository of all assessments – Scheduling and automation of assessments – REST-style API for automation and integrations
 Enterprise visibility of security risk – High-level dashboards – Detailed security issues reports, advisories and fix recommendations – Correlation of results discovered using dynamic and static analysis techniques – Over 40 compliance reports (e.g. PCI, GLBA, SOX)
 Governance & collaboration – User roles & access permissions – Test policies – Issue management – Defect tracking systems integration

11 Software Security Research Group (SSRG), University of Ottawa & IBM AppScan Enterprise Workflows
 Information Security – Schedule and automate assessments – Conduct assessments with AppScan Standard and AppScan Source and publish findings for remediation and trending – Tools: AppScan Standard Edition, AppScan Source Edition
 Build automation – Source code analysis for security issues as part of build verification – Publish findings for remediation and trending – Tools: AppScan Source for Automation, AppScan Standard Edition CLI
 Compliance Officers – Review compliance reports
 Management – Review most common security issues – View trends – Assess risk – Evaluate progress
 Development & QA – Conduct assessments – View assessment results – Remediate issues – Assign issue status

12 Software Security Research Group (SSRG), University of Ottawa & IBM 12 View detailed security issues reports  Security Issues Identified with Static Analysis  Security Issues Identified with Dynamic Analysis  Aggregated and correlated results  Remediation Tasks  Security Risk Assessment

13 Software Security Research Group (SSRG), University of Ottawa & IBM Traditional Web Crawling  An HTML page is a tree data structure, called the DOM. It includes information about what the browser displays, and events that can be activated by the user (for instance, clicking on certain displayed fields); each event contains a URL to be requested from the server through an HTTP request (a link to the next page). – The page returned by the server for a given URL depends, in general, on the server state and the values of cookies. – If we ignore server state and cookies, the displayed page is identified by its URL.

14 Software Security Research Group (SSRG), University of Ottawa & IBM Traditional web crawling algorithm
Given: – an initial seed URL – a domain (or list of domains) defining the limit of the web space to be explored
Crawler variables (of type “set of URLs”): – exploredURLs = empty – unexploredURLs = {seedURL}
Algorithm – While unexploredURLs is not empty: take a URL from unexploredURLs, add it to exploredURLs, request it from the server, analyse the returned page (according to the purpose of the crawl), extract the links in the page and add the corresponding URLs (if they are in the domain) to unexploredURLs.
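The loop above can be sketched in a few lines of Python. This is only an illustrative sketch, not the authors' implementation; `fetch_page` and `extract_links` are hypothetical helpers standing in for an HTTP client and an HTML parser.

```python
# Minimal sketch of the traditional crawling loop described above.
# fetch_page and extract_links are assumed helpers (HTTP client, HTML
# parser); only URLs inside the given domain are enqueued.
from urllib.parse import urlparse

def crawl(seed_url, domain, fetch_page, extract_links):
    explored_urls = set()
    unexplored_urls = {seed_url}
    while unexplored_urls:
        url = unexplored_urls.pop()
        explored_urls.add(url)
        page = fetch_page(url)        # request the page from the server
        for link in extract_links(page):
            if urlparse(link).netloc == domain and link not in explored_urls:
                unexplored_urls.add(link)
    return explored_urls
```

With a toy link graph standing in for a real server, the crawler visits exactly the in-domain pages reachable from the seed, fetching each URL only once.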

15 Software Security Research Group (SSRG), University of Ottawa & IBM RIA Crawling
 Difference from the traditional web – The HTML DOM structure returned by the server in response to a URL may contain scripts. – When an event triggers the execution of a script, the script may change the DOM structure, which may lead to a new display and a new set of enabled events – that is, a new state of the application.
 Crawling means: – finding all URLs that are part of the application, plus – for each URL, finding all states reached (from this “seed” URL) by the execution of any sequence of events
Important note: only the “seed” states are directly accessible by a URL.

16 Software Security Research Group (SSRG), University of Ottawa & IBM Difficulties for crawling RIAs
 State identification – A state cannot be identified by a URL. – Instead, we consider that the state is identified by the current DOM in the browser.
 Most links (events) do not contain a URL – An event included in the DOM may not explicitly identify the next state reached when this event is executed. – To determine the state reached by such an event, we have to execute that event. (In traditional crawling, the event contains the URL – the identification – of the next page reached; this is not the case for RIA crawling.)

17 Software Security Research Group (SSRG), University of Ottawa & IBM Important consequence  For a complete crawl (a crawl that ensures that all states of the application are found), the crawler has to execute all events in all states of the application – since for any of these events, we do not know, a priori, whether its execution in the current state will lead to a new state or not. – Note: In the case of traditional web crawling, it is not necessary to execute all events on all pages; it is sufficient to extract the URLs from these events, and get the page for each URL only once.

18 Software Security Research Group (SSRG), University of Ottawa & IBM Example
 Traditional web: the links “publications” in the pages “Bochmann” and “DSRG” have the same URL, so the page “Pub” will be retrieved only once.
 RIA: the events “publications” in the states “Bochmann” and “DSRG” have no URL; both events must be executed, and the crawler finds out that they both lead to the same state. [Figure: the two site models from slide 7]

19 Software Security Research Group (SSRG), University of Ottawa & IBM AJAX: asynchronous interactions with the server  We ignore the intermediate states in our current work, by simply waiting until a new stable state is reached after each user input.

20 Software Security Research Group (SSRG), University of Ottawa & IBM RIA: Need for DOM equivalence  A given page often contains information that changes frequently, e.g. advertising or time-of-day information. This information is usually of no importance for the purpose of crawling.  In the traditional web, the page identification (i.e. the URL) does not change when this information changes.  In RIAs, states are identified by their DOM. Therefore similar states with different advertising would be identified as different states (which leads to a too large state space). – We would like a state identifier that is independent of the unimportant changing information. – We introduce a DOM equivalence, and all states with equivalent DOMs have the same identifier.

21 Software Security Research Group (SSRG), University of Ottawa & IBM DOM equivalence  The DOM equivalence depends on the purpose of the crawl. – In the case of security testing, we are not interested in the textual content of the DOM; – however, it is important for content indexing.  The DOM equivalence relation is realized by a DOM reduction algorithm which produces (from a given DOM) a reduced canonical representation of the information that is considered relevant for the crawl.  If the reduced DOMs obtained from two given DOMs are the same, then the given DOMs are considered equivalent, that is, they represent the same application state (for this purpose of the crawl).

22 Software Security Research Group (SSRG), University of Ottawa & IBM Form of the state identifiers  The reduced DOM could be used as the state identifier – however, it is quite voluminous: we have to store the application model in memory during exploration, and each edge in the graph contains the identifiers of the current and next states.  Condensed state identifier: – A hash of the reduced DOM, used to check whether a state obtained after the execution of some event is a new state or a known one. – The crawler also stores for each state the list of events included in the DOM, and whether they have been executed or not; this is used to select the next event to be executed during the crawl.
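As an illustration (not the authors' implementation), one possible reduction keeps only the tag skeleton of the DOM, dropping all text, which matches the security-testing setting described above; hashing the reduced form yields the condensed state identifier:

```python
# Hypothetical sketch: reduce a DOM to its tag skeleton (dropping all
# text, as a security-testing crawl might), then hash the reduced form
# into a condensed state identifier.
import hashlib
from html.parser import HTMLParser

class TagSkeleton(HTMLParser):
    """Collects start and end tags only; text nodes are ignored."""
    def __init__(self):
        super().__init__()
        self.parts = []
    def handle_starttag(self, tag, attrs):
        self.parts.append(f"<{tag}>")
    def handle_endtag(self, tag):
        self.parts.append(f"</{tag}>")

def state_id(dom_html):
    parser = TagSkeleton()
    parser.feed(dom_html)
    reduced_dom = "".join(parser.parts)   # reduced canonical form
    return hashlib.sha1(reduced_dom.encode()).hexdigest()
```

Two DOMs that differ only in text (e.g. different advertisements) then map to the same identifier, while a structural change yields a different one.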

23 Software Security Research Group (SSRG), University of Ottawa & IBM Crawling Strategies for RIAs  Most work on crawling RIAs does not intend to build a complete model of the application.  Some consider standard strategies, such as Depth-First and Breadth-First, for building complete models.  We have developed more efficient strategies based on the assumed structure of the application (“model-based strategies”, see below).

24 Software Security Research Group (SSRG), University of Ottawa & IBM Example of crawling sequence (Depth-first strategy)
getURL(Bochmann); analyseDOM; execute(publications) and find new state Pub; analyseDOM; – go back – getURL(Bochmann); execute(research group) and find new state DSRG; analyseDOM; execute(publications) and find known state Pub; – go back – getURL(Bochmann); execute(hobbies) and find new state Hobbies; analyseDOM and find new URL PainterB; getURL(PainterB); analyseDOM.
Such a systematic approach will execute all events and eventually find all states. [Figure: the Web-2 model from slide 7]

25 Software Security Research Group (SSRG), University of Ottawa & IBM Resets  Each time there is a “go back” in the crawling sequence, the crawler has to go back to a seed URL (which takes more time than executing an event) and possibly execute several events in order to reach the desired state. – For instance, in the Breadth-First strategy, the crawler has to go back to the state DSRG in order to execute the event publications.  Resets are much more “expensive” (in terms of execution time) than event executions.  The number of resets should therefore be minimized. [Figure: the Web-2 model from slide 7]

26 Software Security Research Group (SSRG), University of Ottawa & IBM Disadvantages of standard strategies
 Breadth-First: – No long sequences of event executions – Very many resets
 Depth-First: – Advantage: long sequences of event executions – Disadvantage: when reaching a known state, the strategy takes a path back to a specific previous state for further event exploration. This path through known edges is often long and may involve a reset (overhead); going back to another state with non-executed events may be much more efficient.

27 Software Security Research Group (SSRG), University of Ottawa & IBM Comparing crawling strategies  Objectives – Complete crawl: Given enough time, the strategy terminates the crawl when all states of the application have been found. – Efficiency of finding states (“finding states fast”): If the crawl is terminated by the user before a complete crawl is attained, the number of discovered states should be as large as possible. For many applications, a complete crawl cannot be obtained within a reasonable length of time; therefore the second objective is very important.

28 Software Security Research Group (SSRG), University of Ottawa & IBM Comparing efficiency of finding states [Chart, log scale: number of states discovered vs. cost (number of event executions + reset cost) for the different strategies; 129 states in total.] This is for a specific application; such comparisons should be done for many different types of applications.

29 Software Security Research Group (SSRG), University of Ottawa & IBM Comparing efficiency of exploring all edges [Chart: number of edges explored vs. cost (number of event executions + reset cost) for the different strategies; 10364 edges in total.]

30 Software Security Research Group (SSRG), University of Ottawa & IBM Model-based Crawling Idea: –Meta-model: assumed structure of the application –Crawling strategy is optimized for the case that the application follows these assumptions –Crawling strategy must be able to deal with applications that do not satisfy these assumptions 30

31 Software Security Research Group (SSRG), University of Ottawa & IBM State and transition exploration phases  State exploration phase –finding all states assuming that the application follows the assumptions of the meta-model  Transition exploration phase –executing all remaining events in all known states (that have not been executed during the state exploration phase)  Order of execution –Start with state exploration; then transition exploration –If new states are discovered during transition phase, go back to the state exploration phase, etc. 31

32 Software Security Research Group (SSRG), University of Ottawa & IBM Three meta-models  Hypercube –The state reached by a sequence of events from the initial state is independent of the order of the events. –The enabled events at a state are those at the initial state minus those executed to reach that state.  Menu model  Probability model 32 Example: 4-dim. Hypercube

33 Software Security Research Group (SSRG), University of Ottawa & IBM Hypercube strategy The strategy is optimal if the model assumptions are satisfied.  State exploration: – The 2^n states can be covered by C(n, n/2) of the n! possible paths. E.g. for n = 4, the 16 states are covered by 6 paths out of 24.  Transition exploration: – C(n, n/2) * n/2 paths are sufficient.
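The C(n, n/2) figure is the width of the subset lattice: by Dilworth's theorem, the minimum number of chains (event sequences) covering all 2^n hypercube states equals that width. A quick check of the numbers quoted above:

```python
# The 2^n hypercube states can be covered by C(n, n//2) chains (paths),
# the width of the subset lattice (Dilworth's theorem); n! counts the
# maximal paths, i.e. the orderings of the n events.
from math import comb, factorial

def hypercube_counts(n):
    states = 2 ** n               # number of application states
    covering_paths = comb(n, n // 2)   # minimum chains covering them
    maximal_paths = factorial(n)  # all event orderings
    return states, covering_paths, maximal_paths
```

For n = 4 this gives 16 states, 6 covering paths and 24 maximal paths, matching the example on the slide.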

34 Software Security Research Group (SSRG), University of Ottawa & IBM Hypercube transition exploration Some of the transitions are already traversed during the state exploration phase.  Optimal transition exploration algorithm: –From the initial state, go to the closest state that has an unexecuted event. –Start executing unexecuted events as long as the next state has one. –When a state with no unexecuted event is reached, reset the application and start over from the initial state. 34

35 Software Security Research Group (SSRG), University of Ottawa & IBM Violations of Hypercube assumptions
 Unexpected Split: we expect to find a known state, and get a different state.
 Unexpected Merge: we expect to find a new state, and get a known state.
 Appearing Events: the new state has events we did not expect.
 Disappearing Events: some event expected for the new state is missing.

36 Software Security Research Group (SSRG), University of Ottawa & IBM Hypercube strategy: Handling violations  We adapt to the discovered model and do predictions based on the meta-model 36 Initial anticipated model Actual Model discovered so far (events e3, e4 appearing, event e1 disappearing) Updated anticipated model

37 Software Security Research Group (SSRG), University of Ottawa & IBM Initial experiments A prototype of the RIA crawling method was implemented, and some experiments on very simple test web sites (such as H4D) were performed.

38 Software Security Research Group (SSRG), University of Ottawa & IBM Initial experiments: Number of states discovered

Application   Total States   Commercial   Crawljax   Our Tool
H4D           16             5            11         16
NH1           8              4            8          8
NH2           13             3            6          13
NH3           24             3            14         24
PrevNext      9              3            3          9

39 Software Security Research Group (SSRG), University of Ottawa & IBM The “Menu” Meta-Model  Example web site: Ikebana-Ottawa (ikebanaottawa.ca)  Hypothesis: – There are two types of events: menu events and non-menu events. – The next state obtained by the execution of a menu event is independent of the state where the event was executed. [Figure: a simple example with states S1–S4 and menu events E1, E2]

40 Software Security Research Group (SSRG), University of Ottawa & IBM 40

41 Software Security Research Group (SSRG), University of Ottawa & IBM Menu strategy: state exploration From the current state, choose the next event according to the following event priority (based on the expected probability of finding a new state):
1. Globally non-executed events
2. Locally non-executed, but globally executed events: a. globally executed once (depending on the second execution, classify the event as menu or non-menu); b. non-menu events; c. menu events (we do not expect to find a new state); d. self-loop events (yet another class of events)
3. If all events have already been executed on the current page: find a “short” path to a page with an event of high priority
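The priority order above can be sketched as a ranking function (a lower rank is explored first). The event attributes used here are assumptions for illustration, not the tool's actual data structures:

```python
# Hypothetical sketch of the menu strategy's event priority.
# Lower rank = higher priority. 'kind' is the classification made
# after an event's second execution: "non-menu", "menu" or "self-loop".
def event_rank(global_executions, executed_locally, kind=None):
    if global_executions == 0:
        return 1                      # globally non-executed
    if not executed_locally:
        if global_executions == 1:
            return 2                  # next run classifies the event
        return {"non-menu": 3, "menu": 4, "self-loop": 5}[kind]
    return 6                          # all local events done: move on
```

A crawler would then pick, from the current state, the enabled event with the smallest rank, and fall back to a short path to another state when every local event ranks 6.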

42 Software Security Research Group (SSRG), University of Ottawa & IBM Menu strategy: finding a path to the next event  If all events have already been executed on the current page: find a “short” path to a page with an event of high priority.  Find the path on the current application model based on – executed edges – predicted edges: locally non-executed, but globally executed events are predicted to be of type menu. [Figure: a partial model showing executed and predicted edges]

43 Software Security Research Group (SSRG), University of Ottawa & IBM Probability strategy  As in the menu model, we use event priorities.  The priority of an event is based on statistical observations (made during the crawl of the application) of the number of new states discovered when executing the given event.  The strategy is based on the belief that an event which was often observed to lead to new states in the past is more likely to lead to new states in the future.

44 Software Security Research Group (SSRG), University of Ottawa & IBM Probability strategy: state exploration  The probability that a given event e finds a new state from the current state is estimated as P(e) = (S(e) + pS) / (N(e) + pN) – N(e) = number of executions of e – S(e) = number of new states found by e – Bayesian formula; with pS = 1 and pN = 2, the initial probability is 0.5  From the current state s, find a locally non-executed event e at a state s’ such that P(e) is high and the path from s to s’ is short – Note: the path from s to s’ is through events already executed – Question: how to find e and s’?
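The estimator is a one-liner; with the smoothing constants pS = 1 and pN = 2 from the slide, an event that has never been executed starts at probability 0.5 and the estimate is updated as observations accumulate:

```python
# P(e) = (S(e) + pS) / (N(e) + pN): the smoothed fraction of executions
# of event e that discovered a new state (pS = 1, pN = 2 as in the
# slides, giving a 0.5 prior for an event never executed before).
def new_state_probability(new_states_found, executions, p_s=1.0, p_n=2.0):
    return (new_states_found + p_s) / (executions + p_n)
```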

45 Software Security Research Group (SSRG), University of Ottawa & IBM Choosing an event to explore Def: P(s) = max(P(e)) over all non-executed events at state s. We want to explore an event that – has a high probability of discovering a new state, and – has a small distance (in terms of event executions) from the current state. Example: should we explore e1 or e2 next? Here e1 and e2 are the unexplored events with maximum probability in states s1 and s2, respectively, with P(e1) = P(s1) = 0.8 and P(e2) = P(s2) = 0.5. Note that although P(e1) is higher, the distance from the current state to e1 is also higher. [Figure: solid edges are already explored events, dashed ones are yet to be explored]

46 Software Security Research Group (SSRG), University of Ottawa & IBM Choosing an event to explore (2) It is not fair to compare P(e1) and P(e2) directly, since the distances (the numbers of event executions required to explore e1 and e2) are different. The number of events executed is the same in the following two cases: – option 1: explore e1 – option 2: explore e2 and, if a known state is reached, explore one more event The probabilities of finding a new state with these options are: – option 1: P = P(e1) – option 2: P = P(e2) + (1 – P(e2)) Pavg where Pavg is the probability of discovering a new state averaged over all known states (we do not know which state would be reached). [Figure: P(e1) = 0.8, P(e2) = 0.5, as on the previous slide]

47 Software Security Research Group (SSRG), University of Ottawa & IBM Choosing an event to explore (3) In general, two events e1 and e2 (where e1 requires k more steps to be reached) are compared by looking at – P(e1) – 1 – (1 – P(e2)) (1 – Pavg)^k; this value is 1 – (the probability of not discovering a state by exploring e2 and k more events) Using this comparison, we decide on a state s_chosen where the next event should be explored. We use an iterative search to find s_chosen: – Initialize s_chosen to the current state. – At iteration i, let s be the state with maximum probability at distance i from the current state; if s is preferable to s_chosen, update s_chosen to s.

48 Software Security Research Group (SSRG), University of Ottawa & IBM Choosing an event to explore (4) When do we stop the iteration? When it is not possible to find a better state than the current s_chosen. How do we know that it is not possible to find a better state? We know the maximum probability, Pbest, among all unexplored events. We can stop at a distance d from s_chosen if we have 1 – (1 – P(s_chosen))(1 – Pavg)^d ≥ Pbest. That is, if we cannot find a better state within d steps after the last value of s_chosen, then no other state can be better (since even the best event would not be preferable).
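The comparison and the stopping rule can be sketched together. The representation here is a hypothetical simplification: instead of the model graph, the search receives a list where entry i holds the best P(s) among states at distance i from the current state.

```python
# Sketch of the iterative search for s_chosen described above.
# p_by_distance[i] = max P(s) over states at distance i from the
# current state; p_avg and p_best are as defined on the slides.
def choose_state(p_by_distance, p_avg, p_best):
    def value(p, extra_steps):
        # 1 - (prob. of not discovering a state by exploring the nearer
        # event and then extra_steps more average events)
        return 1 - (1 - p) * (1 - p_avg) ** extra_steps

    chosen_dist, chosen_p = 0, p_by_distance[0]
    for i in range(1, len(p_by_distance)):
        d = i - chosen_dist
        # Stopping rule: once even P_best cannot beat s_chosen from
        # d steps away, no farther state can be preferable.
        if value(chosen_p, d) >= p_best:
            break
        # A farther state wins only if its raw probability beats the
        # value of the nearer choice plus d average explorations.
        if p_by_distance[i] > value(chosen_p, d):
            chosen_dist, chosen_p = i, p_by_distance[i]
    return chosen_dist, chosen_p
```

On the slide's example (P = 0.5 nearby, P = 0.8 one step farther), the farther event wins when Pavg = 0.5 but loses once Pavg is high enough that exploring cheap average events is the better bet.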

49 Software Security Research Group (SSRG), University of Ottawa & IBM Experiments  We did experiments with the different crawling strategies using the following web sites: –Periodic table (Local version: http://ssrg.eecs.uottawa.ca/periodic/) –Clipmarks (Local version: http://ssrg.eecs.uottawa.ca/clipmarks/) –TestRIA ( http://ssrg.eecs.uottawa.ca/TestRIA/ ) –Altoro Mutual (http://www.altoromutual.com/ ) 49

50 Software Security Research Group (SSRG), University of Ottawa & IBM State Discovery Efficiency – Periodic Table Plots are in logarithmic scale. Cost of reset for this application is 8. Cost = number of event executions + R * number of resets

51 Software Security Research Group (SSRG), University of Ottawa & IBM State Discovery Efficiency – Clipmarks Plots are in logarithmic scale. Cost of reset for this application is 18.

52 Software Security Research Group (SSRG), University of Ottawa & IBM State Discovery Efficiency – TestRIA Plots are in logarithmic scale. Cost of reset for this application is 2.

53 Software Security Research Group (SSRG), University of Ottawa & IBM State Discovery Efficiency – Altoro Mutual Plots are in logarithmic scale. Cost of reset for this application is 2.

54 Software Security Research Group (SSRG), University of Ottawa & IBM State Discovery Efficiency – Box Plot Plots are in logarithmic scale. Reset costs are Periodic Table: 8, Clipmarks: 18, TestRIA: 2, Altoro Mutual: 2

55 Software Security Research Group (SSRG), University of Ottawa & IBM Results: Transition exploration (cost of exploring all transitions) Cost for a complete crawl –Cost = number of event executions + R * number of resets R = 18 for the Clipmarks web site 55

56 Software Security Research Group (SSRG), University of Ottawa & IBM On-going work  Exploring regular page structures with widgets – reducing the exponential blow-up of combinations  Exploring the structure of mobile applications – applying similar crawling principles to the exploration of the behavior of mobile applets  Concurrent crawling – for increasing the performance of crawling, consider coordinated crawling by many crawlers running on different computers, e.g. in the cloud

57 Software Security Research Group (SSRG), University of Ottawa & IBM Conclusions  RIA crawling is quite different from traditional web crawling  Model-based strategies can improve the efficiency of crawling  We have developed prototypes of these crawling strategies, integrated with the IBM AppScan product

58 Software Security Research Group (SSRG), University of Ottawa & IBM References
Background:  Mesbah, A., van Deursen, A. and Lenselink, S., 2011. Crawling Ajax-based Web Applications through Dynamic Analysis of User Interface State Changes. ACM Transactions on the Web (TWEB), 6(1), a23.
Our papers:  Dincturk, M.E., Choudhary, S., Bochmann, G.v., Jourdan, G.-V. and Onut, I.V., A Statistical Approach for Efficient Crawling of Rich Internet Applications, in Proceedings of the 12th International Conference on Web Engineering (ICWE 2012), Berlin, Germany, July 2012. 8 pages. A longer version (15 pages) is also available.  Choudhary, S., Dincturk, M.E., Bochmann, G.v., Jourdan, G.-V., Onut, I.V. and Ionescu, P., Solving Some Modeling Challenges when Testing Rich Internet Applications for Security, in Third International Workshop on Security Testing (SECTEST 2012), Montreal, Canada, April 2012. 8 pages.  Benjamin, K., Bochmann, G.v., Dincturk, M.E., Jourdan, G.-V. and Onut, I.V., A Strategy for Efficient Crawling of Rich Internet Applications, in Proceedings of the 11th International Conference on Web Engineering (ICWE 2011), Paphos, Cyprus, July 2011. 15 pages.  Benjamin, K., Bochmann, G.v., Jourdan, G.-V. and Onut, I.V., Some Modeling Challenges when Testing Rich Internet Applications for Security, in First International Workshop on Modeling and Detection of Vulnerabilities (MDV 2010), Paris, France, April 2010. 8 pages.  Dincturk, M.E., Jourdan, G.-V., Bochmann, G.v. and Onut, I.V., A Model-Based Approach for Crawling Rich Internet Applications, submitted to a journal.

59 Software Security Research Group (SSRG), University of Ottawa & IBM Questions? Comments? These slides can be downloaded from http://????/RIAcrawlingProb.pptx (ssrg server was down)

