Projects Overview - Andrea Forte
- Fast L3 handoff
- Passive DAD (pDAD)
- Cooperative Roaming (CR)
- Highly congested IEEE 802.11 networks – Measurements and Analysis
Fast L3 Handoff

We optimize the IP address acquisition time as follows:
1. Subnet discovery.
2. Check the cache for a valid IP.
3. TEMP_IP (cache miss): the client "picks" a candidate IP using particular heuristics.
4. SIP re-INVITE: the CN updates its session with the TEMP_IP.
5. Normal DHCP procedure to acquire the final IP.
6. SIP re-INVITE: the CN updates its session with the final IP.
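The six steps above can be sketched as a small simulation. Everything here is an illustrative assumption: the dict stands in for the subnet-keyed cache, `pick_temp_ip` is a stand-in for the candidate-IP heuristics, and the second call to it substitutes for the real DHCP exchange.

```python
# Minimal sketch of the Fast L3 handoff sequence (illustrative, not the
# actual implementation).

def pick_temp_ip(subnet, used):
    """Heuristic TEMP_IP selection: first host address not known to be in use."""
    for host in range(1, 255):
        ip = f"{subnet}.{host}"
        if ip not in used:
            return ip
    raise RuntimeError("subnet exhausted")

def fast_l3_handoff(subnet, cache, used, reinvite):
    # Steps 1-2: subnet already discovered; check the cache for a valid IP.
    temp_ip = cache.get(subnet) or pick_temp_ip(subnet, used)  # step 3: cache miss
    reinvite(temp_ip)                 # step 4: CN switches the session to TEMP_IP
    final_ip = pick_temp_ip(subnet, used | {temp_ip})  # step 5: stand-in for DHCP
    reinvite(final_ip)                # step 6: CN switches to the final IP
    return temp_ip, final_ip

session = []
print(fast_l3_handoff("10.0.1", {}, {"10.0.1.1"}, session.append))
# → ('10.0.1.2', '10.0.1.3')
```

The point of the TEMP_IP step is that media can resume (via the first re-INVITE) before the slower DHCP exchange completes.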
Fast L3 Handoff - Results
Passive DAD - Architecture

Components on the subnet: Address Usage Collector (AUC), DHCP server, Router/Relay Agent.
- The AUC builds a DUID:MAC pair table (from DHCP traffic only).
- The AUC builds an IP:MAC pair table (from broadcast and ARP traffic).
- The AUC sends a packet to the DHCP server (over a TCP connection, carrying IP, Client ID, and a flag) when:
  - a new IP:MAC pair is added to the table
  - a potential duplicate address has been detected
  - a potential unauthorized IP has been detected
- The DHCP server checks whether the pair is correct and records the IP address as in use. (DHCP has the final decision!)

IP:MAC table (broadcast/ARP traffic):

  IP    MAC    Expire
  IP1   MAC1   570
  IP2   MAC2   580
  IP3   MAC3   590

Client ID table (DHCP traffic):

  Client ID   MAC
  DUID1       MAC1
  DUID2       MAC2
  DUID3       MAC3
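The AUC's conflict check on the IP:MAC table can be sketched as follows. The table layout (IP mapped to a (MAC, expire) pair) and the `notify` callback to the DHCP server are assumptions for illustration; the real AUC/DHCP protocol is not shown.

```python
# Sketch of the AUC recording an IP:MAC binding seen in broadcast/ARP
# traffic. The DHCP server, notified via notify(), has the final decision.

def auc_observe(table, ip, mac, now, notify, ttl=600):
    entry = table.get(ip)
    if entry is None:
        table[ip] = (mac, now + ttl)
        notify("new-pair", ip, mac)              # new IP:MAC pair added
    elif entry[0] != mac and entry[1] > now:
        notify("potential-duplicate", ip, mac)   # conflicting, unexpired binding
    else:
        table[ip] = (mac, now + ttl)             # same MAC (or expired): refresh

events = []
table = {}
auc_observe(table, "10.0.0.5", "aa:bb", now=100, notify=lambda *e: events.append(e))
auc_observe(table, "10.0.0.5", "cc:dd", now=200, notify=lambda *e: events.append(e))
# events now holds one "new-pair" and one "potential-duplicate" notification
```

Because the AUC only listens to traffic the clients already send, the scheme adds no probing overhead on the wireless side; the authoritative check stays with the DHCP server.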
Cooperative Roaming (CR)
- Stations can cooperate and share information about the network (topology, services).
- Stations can cooperate and help each other in common tasks such as IP address acquisition.
- Stations can help each other during the authentication process without sharing sensitive information, maintaining privacy and security.
- Stations can also cooperate for application-layer mobility and load balancing.
CR – Results (1/2)
CR – Results (2/2)
Wireless measurements in highly congested networks
- IETF meeting in Dallas (IETF-65).
- Three days of measurements (~8 GB of data).
- 400-500 people in one room (plenary).
- IEEE 802.11a/b; multiple APs on the same channel.
- Analyses: congestion (throughput, retries, ARF), handoff (Apple vs. others), unusual behaviors (broadcast feedback), load balancing (number of clients vs. bandwidth).
Projects Overview - Kundan Singh
- P2P-SIP using an external DHT
- Thread and event models
- Conference server scalability
SIP-using-P2P

P2P-SIP using an external distributed hash table (DHT).

Data vs. service modes:
- Data: treat the DHT as data storage using put/get/remove.
- Service: join the DHT to provide registrar/presence service using join/leave/lookup.

Logical operations:
- Contact management: put(user id, signed contact).
- Cryptographic key storage: user certificates and private configurations.
- Presence: put(subscribee id, signed encrypted subscriber id); composition needs the service model.
- Offline message: put(recipient, signed encrypted message).
- NAT and firewall traversal: STUN and TURN server discovery needs the service model.

Proposed an XML-based data format.
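The "data" mode's contact management can be sketched on top of bare put/get primitives. The in-memory dict below stands in for the external DHT, and `sign`/`verify` are placeholders for the real certificate-based signature scheme; none of this is the actual SIPc/OpenDHT code.

```python
# Sketch of contact management as put(user id, signed contact) / get,
# with an in-memory dict standing in for the DHT.

def sign(value, key):          # placeholder for signing with the user's key
    return (value, f"sig-by-{key}")

def verify(signed, key):       # placeholder for signature verification
    value, sig = signed
    return value if sig == f"sig-by-{key}" else None

def register(dht, user_id, contact, key):
    # put(user id, signed contact); a user may have several contacts
    dht.setdefault(user_id, []).append(sign(contact, key))

def lookup(dht, user_id, key):
    # get(user id), keeping only contacts whose signature verifies
    return [c for s in dht.get(user_id, []) if (c := verify(s, key))]

dht = {}
register(dht, "alice@example.com", "sip:alice@10.1.2.3", key="alice")
print(lookup(dht, "alice@example.com", key="alice"))  # → ['sip:alice@10.1.2.3']
```

Signing the stored contact is what lets a caller trust a value fetched from untrusted DHT nodes, which is why the data mode needs no trusted registrar.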
SIP-using-P2P
- Implementation in SIPc, with the help of Xiaotao Wu.
- OpenDHT: trusted nodes, robust, fast enough (<1 s).
- Identity protection: certificate-based; SIP id == P2P id.
- Used for calls, IM, presence, offline messages, STUN server discovery, and name search.
- P2P clients are better than proxies: fewer DHT calls, and OpenDHT's fairness quota imposes a limit on proxies.
- Should this be made open source?
SIP proxy performance

Effect of software architecture and multi-processor hardware.

Calls/s for a stateless proxy (UDP, no DNS, 6 msg/call):
- Hardware: 1x Pentium IV 3 GHz, 1 GB, Linux (1xP); 4x Pentium 450 MHz, 512 MB, Linux (4xP); 1x UltraSPARC-IIi 300 MHz, 64 MB, Solaris (1xS); 2x UltraSPARC-II 300 MHz, 256 MB, Solaris (2xS).
- Architectures compared: event-based; thread per message; pool-thread per message (sipd); thread-pool; process-pool.

Calls/s for a stateful proxy (UDP, no DNS, 8 msg/call):
- Hardware: 1x Pentium IV 3 GHz, 1 GB, Linux (1xP); 4x Pentium 450 MHz, 512 MB, Linux (4xP); 1x UltraSPARC-IIi 360 MHz, 256 MB, Solaris 5.9 (1xS); 2x UltraSPARC-II 300 MHz, 256 MB, Solaris 5.8 (2xS).
- Architectures compared: event-based; thread per message; thread-pool (sipd); two-stage thread-pool.
- Better performance here, as this includes the mempool changes.

Software architecture further improves performance: S3P3 can support 16 million BHCA. Both Pentium and Sparc took approximately 2 MHz of CPU cycles per call/s on a single processor.
- Not much concurrency in stateful mode: needs more investigation.
- Should sipd use a two-stage thread-pool architecture?
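The thread-pool model from the comparison above can be sketched in a few lines: a fixed pool of workers drains a shared queue of messages, avoiding the per-message thread-creation cost of the thread-per-message design. This is only an illustration of the model, not sipd's actual implementation (and `process` is a made-up stand-in for SIP message handling).

```python
# Thread-pool message processing: N workers service a shared work queue.
from concurrent.futures import ThreadPoolExecutor

def process(msg):
    # stand-in for parsing and (statelessly) forwarding one SIP message
    return f"forwarded {msg}"

def run_pool(messages, workers=4):
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(process, messages))   # results in input order

print(run_pool(["INVITE-1", "ACK-1", "BYE-1"]))
# → ['forwarded INVITE-1', 'forwarded ACK-1', 'forwarded BYE-1']
```

A two-stage variant would split `process` into separate pipelined pools (e.g. parse, then forward), which is the design question raised above for sipd's stateful mode.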
SIP conference server performance

For G.711 audio mixing on a 3 GHz Pentium 4 with 1 GB of memory:
- About 480 participants in a single conference with one active speaker (CPU is the bottleneck).
- About 40 four-party conferences, each with one active speaker (CPU is the bottleneck).
- Memory usage: 20 kB/participant.
- Mixer delay: less than 20 ms.
- Increasing the packetization interval to 40 ms improves capacity to 700 participants, but also increases mixer delay.
- Both Pentium and Sparc take about 6 MHz/participant.
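The per-participant cost and the measured capacity above are mutually consistent, which is a quick sanity check worth making explicit: at roughly 6 MHz of CPU per participant, a 3 GHz processor saturates near 500 participants, in line with the measured ~480.

```python
# Consistency check of the figures above.
cpu_hz = 3_000_000_000        # 3 GHz Pentium 4
per_participant_hz = 6_000_000  # ~6 MHz per participant (measured)
print(cpu_hz // per_participant_hz)  # → 500, close to the measured ~480
```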
Cascaded conference server
- I measured the CPU usage for two cascaded servers: they support about 1000 participants in a single conference.
- The cascaded architecture scales to tens of thousands of participants.
- The SIP REFER message is used to create the cascade.
Projects Overview - Xiaotao Wu
- CUTE (Columbia University Telecommunication service Editor): a GUI-based service creation tool to help inexperienced users create services.
- Service learning and service management:
  - Service learning
  - Service risk management
  - Handling feature interactions
CUTE (Columbia University Telecommunication service Editor)
Survey on CUTE

Evaluating how likely end users are to create telecommunication services by themselves, and how useful and user-friendly CUTE is.
Service learning and service risks

Service learning:
- Causal relationship between call information and call decisions.
- Decision tree induction: the Incremental Tree Induction (ITI) algorithm.

Service risk management:
- Identify: loss of connection, privacy, money, attention.
- Analyze: possibility, impact, overall risk.
- Resolve: change communication methods, transfer, reduce overall risk; contingency plan.
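The idea of learning call decisions from call information can be illustrated with a toy one-level tree (a decision stump). The real system uses the Incremental Tree Induction algorithm; this batch stump learner and the made-up call log below are only illustrations of the input/output shape.

```python
# Toy induction of a call-decision rule from call information.
from collections import Counter

calls = [  # (call information, observed decision) - made-up examples
    ({"caller": "boss",    "time": "work"},  "accept"),
    ({"caller": "boss",    "time": "night"}, "accept"),
    ({"caller": "telemkt", "time": "work"},  "reject"),
    ({"caller": "telemkt", "time": "night"}, "reject"),
]

def learn_stump(examples):
    """Pick the attribute whose values best predict the decision."""
    def errors(attr):
        by_val = {}
        for info, decision in examples:
            by_val.setdefault(info[attr], []).append(decision)
        # misclassifications if each value predicts its majority decision
        return sum(len(ds) - Counter(ds).most_common(1)[0][1]
                   for ds in by_val.values())
    attr = min(examples[0][0], key=errors)
    by_val = {}
    for info, decision in examples:
        by_val.setdefault(info[attr], Counter())[decision] += 1
    return attr, {v: c.most_common(1)[0][0] for v, c in by_val.items()}

print(learn_stump(calls))
# → ('caller', {'boss': 'accept', 'telemkt': 'reject'})
```

An incremental algorithm like ITI updates such a tree as each new call arrives instead of re-inducing it from the full log, which is what makes per-call learning practical.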
Feature interaction handling

Tree merging: two call-handling scripts for an incoming call are merged into one.
- Script 1: if the time is between 10:00AM and 11:00AM and the address is hgs, forward to conf; otherwise accept.
- Script 2: if the time is between 10:00AM and 11:00AM and the address is hgs, forward to conf; otherwise reject.
- The merged script takes actions from both scripts.
- Simply setting precedence rules cannot work.
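The merging idea can be sketched by encoding each script as a function from call information to a set of actions and taking the union; the encoding (string times, action names) is purely illustrative.

```python
# Sketch of tree merging: the merged script takes the actions of both
# input scripts rather than letting one override the other.

def script_a(call):
    if "10:00" <= call["time"] < "11:00" and call["to"] == "hgs":
        return {"forward to conf"}
    return {"accept"}

def script_b(call):
    if "10:00" <= call["time"] < "11:00" and call["to"] == "hgs":
        return {"forward to conf"}
    return {"reject"}

def merge(*scripts):
    return lambda call: set().union(*(s(call) for s in scripts))

merged = merge(script_a, script_b)
print(sorted(merged({"time": "10:30", "to": "hgs"})))  # → ['forward to conf']
print(sorted(merged({"time": "12:00", "to": "hgs"})))  # → ['accept', 'reject']
```

The second call shows why precedence alone cannot work: outside the overlap, the scripts genuinely conflict (accept vs. reject), and picking one by precedence silently discards the other user's intent; the merged tree surfaces both actions so the conflict can be handled explicitly.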
Service management