Identifying and Testing for Insecure Paths in Authentication Protocols: A Model-Based Approach
Jayaram K. R., Graduate Student, Computer Science, Purdue University
Aditya P. Mathur, Professor of Computer Science, Purdue University
SERC Showcase – Fall 2006, Ball State University, Muncie, IN, November 15-16, 2006
Research Objective
To evaluate the effectiveness of statechart-based test generation techniques in:
- Identifying insecure paths in cryptographic protocol implementations
- Identifying stress points in the protocol that may lead to security vulnerabilities
Importance
- Increased use of cryptographic protocols in critical systems, and their increasing complexity.
- Software errors are becoming common in protocol implementations, leading to serious security flaws.
- Verification only ensures correctness of the design.
- Need to understand the effectiveness of model-based testing approaches in security testing.
Our Approach - Modeling
Formalism – Statecharts [a component of UML]
- Concurrency – protocols involve multiple concurrent principals, and each principal may have multiple concurrent threads of computation.
- Abstraction – need to model arbitrary computations at varying levels of abstraction: encryption, certificate validation, etc.
- Memory – this is critical; the use of variables is unavoidable. Principals remember keys, nonces (a value used only once), etc.
Extended FSMs supporting all of the above are no different from statecharts! A sketch of the idea follows.
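To make this concrete, here is a minimal C sketch of an extended FSM for one principal. It is our illustration, not the authors' model; names such as principal_t and on_recv_cert are invented for the example. The point is that guards and actions read and write remembered variables (the nonce) — exactly the "memory" that plain FSMs lack.

/* Minimal sketch (ours, not the authors' model) of an extended FSM
   for one protocol principal: explicit control states plus "memory"
   -- variables such as nonces that guards and actions read and write. */
#include <stdio.h>
#include <stdlib.h>

typedef enum { C1, C2, C3, C4, C_FAIL } state_t;

typedef struct {
    state_t state;
    unsigned long clirandom;  /* nonce remembered across transitions */
} principal_t;

/* Transition on event "certificate received", guard isvalid(cert). */
static void on_recv_cert(principal_t *p, int cert_is_valid) {
    if (p->state == C1 && cert_is_valid) {
        p->clirandom = (unsigned long)rand(); /* action updates memory */
        p->state = C2;
    } else {
        p->state = C_FAIL;                    /* invalid certificate */
    }
}

int main(void) {
    principal_t client = { C1, 0 };
    on_recv_cert(&client, 1);
    printf("state=%d clirandom=%lu\n", client.state, client.clirandom);
    return 0;
}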
Example – TLS Client Handshake [RFC 2246]
Test Generation [1]: Enhanced Wp method [Chow78]
The Wp method results in state-space explosion for concurrent states (due to product construction). We reduce the number of tests generated by eliminating infeasible paths. Consider the following example.
Example: Concurrent State
A sample client-server protocol:

CLIENT (states C1–C4):
- C1: send request_cert(s); move to C2 on recv(s, cert) [isvalid(cert)], setting pubkey = getkey(cert).
- C2: clirandom = rand(); mess = encrypt(pubkey, clirandom); send(serv, mess); move to C3.
- C3: move to C4 on recv(serv, M) [M == encrypt(pubkey, servrandom)].
- C4: sesskey = f(clirandom, servrandom).

SERVER (states S1–S3):
- S1: servcert = getcert(); on receiving request_cert(s), send(client, servcert); move to S2.
- S2: move to S3 on recv(client, M) [M == encrypt(pubkey, clirandom)].
- S3: servrandom = rand(); msg = encrypt(privkey, servrandom); sesskey = f(clirandom, servrandom).

The client requests the server's certificate, checks whether the received certificate is valid, generates a random number, and sends it (encrypted under the server's public key) to the server. Both client and server compute the session key as a function of the two nonces: sesskey = f(clirandom, servrandom).
Product automaton states (client state; server state): C1;S1, C2;S1, C1;S2, C2;S2, C3;S2, C2;S3, C4;S2, C3;S3, C4;S3, C3;F, C4;F. On the slide, the states drawn in red are infeasible: those client/server state combinations cannot co-occur given the message exchanges the protocol requires. A code sketch of this pruning follows.
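A minimal sketch of the pruning idea, assuming simple message-ordering constraints (the constraints below are our invention; in the actual approach they come from the modeled protocol): a product state is dropped when one principal's progress presupposes a message the other principal cannot yet have sent.

/* Sketch (our illustration, not the authors' tool): enumerate product
   states (client x server) and prune pairs that are infeasible because
   of message ordering. */
#include <stdio.h>

/* Progress indices: client C1..C4 -> 1..4, server S1..S3 -> 1..3. */
static int feasible(int c, int s) {
    /* Assumed: reaching C2 requires the server's cert (server past S1). */
    if (c >= 2 && s < 2) return 0;
    /* Assumed: reaching S3 requires the client's nonce (client past C2). */
    if (s >= 3 && c < 3) return 0;
    return 1;
}

int main(void) {
    int kept = 0, pruned = 0;
    for (int c = 1; c <= 4; c++)
        for (int s = 1; s <= 3; s++)
            feasible(c, s) ? kept++ : pruned++;
    printf("kept=%d pruned=%d of %d product states\n", kept, pruned, 4 * 3);
    return 0;
}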
Test Generation [2]
- Since authentication involves a significant amount of communication, there usually are many infeasible paths that can be eliminated.
- Use the statechart as a tool to identify malicious inputs and stress points.
- Instantiate test traces using Boundary Value Analysis (BVA), including malicious inputs.
- Prune away traces whose variables are not under the tester's control; try to cover those paths through BVA, as sketched below.
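A minimal sketch of BVA instantiation for one trace parameter. The length field and its valid range [1, 16384] are hypothetical, chosen for illustration rather than taken from the TLS specification: values are picked at and around each boundary, plus out-of-range malicious inputs.

/* Sketch (our illustration): boundary-value analysis for a
   hypothetical protocol length field. */
#include <stdio.h>

#define MIN_LEN 1
#define MAX_LEN 16384 /* assumed valid range, illustrative only */

int main(void) {
    long bva[] = {
        MIN_LEN - 1, MIN_LEN, MIN_LEN + 1,  /* lower boundary */
        MAX_LEN - 1, MAX_LEN, MAX_LEN + 1,  /* upper boundary */
        -1, 0x7FFFFFFF                      /* malicious out-of-range inputs */
    };
    for (size_t i = 0; i < sizeof bva / sizeof bva[0]; i++)
        printf("test: record length = %ld\n", bva[i]);
    return 0;
}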
Test Generation [3] For TLS:
- Client: 13 states (handshake) × 4 states (transmit); Server: 15 states (handshake) × 4 states (transmit).
- Total: 13 × 4 × 15 × 4 = 3120 states in the product automaton, and at least 1600 paths.
- Most are infeasible (because of communication): we model all local computation on the client/server inside a state, and all communication as transitions.
- After elimination: 32 states, 41 paths, 58 test cases. The number of paths exceeds the number of states because of error conditions, all of which lead to the final state but in turn create many paths. Test cases are obtained by instantiating paths with BVA.
Test Application [Example]
GnuTLS certificate authority: xinu3.cs.purdue.edu
GnuTLS client: xinu5.cs.purdue.edu
GnuTLS server: xinu10.cs.purdue.edu
- xinu5 and xinu10 share a common file system, enabling us to easily collect coverage data.
- Code is instrumented during compilation to collect coverage metrics using Bullseye.
- In this setup, the client and server obtain certificates from a common certificate authority (CA).
Sample Condition/Decision coverage
We use the GNU implementation of TLS (GnuTLS) v1.4.1. Condition/decision coverage for three sample components:
- gnutls_handshake – 64%
- gnutls_transmit – 73%
- gnutls_algorithms – 73%
Examples of uncovered code: error checks for dynamic memory allocation, e.g.

ptr = malloc(sizeof(cert));
if (ptr == NULL) {
    gnutls_assert();
    return GNUTLS_MEMORY_ERROR;
}

An error in handling NULL pointers may lead to denial of service, so test enhancement based on coverage appears essential.
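One way such branches could be covered (our suggestion, not from the slides) is to inject allocation failures under test, for example through a wrapper the test harness can force to fail; test_malloc and fail_next_alloc are hypothetical names.

/* Sketch (our illustration): a fault-injecting allocator wrapper that
   lets a test force malloc() to fail, so NULL-handling branches like
   the one above can be exercised. */
#include <stdio.h>
#include <stdlib.h>

static int fail_next_alloc = 0;   /* set by the test harness */

static void *test_malloc(size_t n) {
    if (fail_next_alloc) { fail_next_alloc = 0; return NULL; }
    return malloc(n);
}

int main(void) {
    fail_next_alloc = 1;               /* simulate memory exhaustion */
    void *ptr = test_malloc(64);
    if (ptr == NULL) {                 /* the branch coverage missed */
        fprintf(stderr, "allocation failed: would return error code\n");
        return 1;
    }
    free(ptr);
    return 0;
}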
Examples…contd. An error here may lead to compromised sessions being stored on the server and later resumed:

ret = _gnutls_handshake_hash_add_recvd();
if (ret < 0) {
    gnutls_assert();
    _gnutls_handshake_header_buffer_clear(session);
    return ret;
}

Tests do not cover low-level decisions not explicitly modeled in the statechart. Examples are contracts between functions: function A may call function B, which is expected to return a value or -1 (SYSERR), and A may check for the return of SYSERR. Since this check isn't modeled in the statechart, it is not guaranteed to be covered. Such conditions will be covered only if the SYSERR corresponds to an error condition modeled in the statechart.
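A minimal sketch of the contract just described; all function names here are hypothetical. B returns a value or -1 (SYSERR), and A's check of that return exists only in the code, not in the statechart, so statechart-derived tests are not guaranteed to exercise it.

/* Sketch (our illustration) of an inter-function contract whose
   error check is invisible to the statechart model. */
#include <stdio.h>

#define SYSERR (-1)

static int read_record(int have_data) {   /* "function B" */
    return have_data ? 42 : SYSERR;
}

static int process(int have_data) {       /* "function A" */
    int v = read_record(have_data);
    if (v == SYSERR) {                     /* unmodeled low-level check */
        fprintf(stderr, "read failed\n");
        return SYSERR;
    }
    return v;
}

int main(void) {
    printf("%d\n", process(1));
    printf("%d\n", process(0));
    return 0;
}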
Research Plan – Short term
Adequacy Assessment
- Measure the adequacy of the generated tests using existing metrics based on control flow and data flow.
- Analyze whether uncovered code can lead to security vulnerabilities.
Adequacy Assessment using Mutation
- Use existing mutation techniques to assess adequacy.
- Research techniques to generate "security" mutants. At a high level, security mutants are mutants that nullify specific security requirements or introduce program vulnerabilities such as buffer overflows; a hedged example follows.
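A minimal sketch of what a security mutant could look like (our illustration; designing such mutation operators is precisely the proposed research): deleting a bounds check turns a safe copy into a potential buffer overflow, and a security-adequate test suite should kill this mutant.

/* Sketch (our illustration): a "security mutant" that deletes a
   bounds check, introducing a potential buffer overflow. */
#include <stdio.h>
#include <string.h>

#define BUF_SIZE 32

/* Original: rejects oversized input. */
static int store_token(char *dst, const char *src, size_t len) {
    if (len >= BUF_SIZE) return -1;   /* security-relevant guard */
    memcpy(dst, src, len);
    return 0;
}

/* Mutant: the guard is deleted -- overflow becomes possible. An
   adequate suite needs a test with len >= BUF_SIZE to kill it. */
static int store_token_mutant(char *dst, const char *src, size_t len) {
    memcpy(dst, src, len);            /* mutation: bounds check removed */
    return 0;
}

int main(void) {
    char buf[BUF_SIZE];
    printf("original, oversized: %d\n", store_token(buf, "x", BUF_SIZE));
    (void)store_token_mutant;         /* not called: invoking it with
                                         oversized input is the bug */
    return 0;
}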
Related Work
- Kirill Bogdanov. Automated Testing of Harel's Statecharts. PhD thesis, University of Sheffield, January 2000.
- Simon Burton. Automatic Generation of High Integrity Test Suites from Graphical Specifications. PhD thesis, University of York, March 2002.
- Jayaram K. R. and Aditya P. Mathur. Software Engineering for Secure Software – State of the Art: A Survey. SERC Tech Report SERC-TR-279, October 1, 2005.
Related Work
- Márcio Eduardo Delamaro, José Carlos Maldonado, Alberto Pasquini, Aditya P. Mathur. Interface Mutation Test Adequacy Criterion: An Empirical Evaluation. Empirical Software Engineering 6(2) (2001).
- Aditya P. Mathur, W. Eric Wong. An Empirical Comparison of Data Flow and Mutation-Based Test Adequacy Criteria. Software Testing, Verification and Reliability 4(1): 9-31 (1994).