1 Networking Lab - Life of a Packet | Nicolas Prost | September 2015 1

2 Networking Lab - Goals
From the theory to experimentation:
- network switching (level 2) in an OpenStack environment
- external world communication with DVR (network routing / NAT, level 3)
- network virtualization (underlay with VXLAN)
Several use cases to follow a ping packet:
- Use case 1: East-West, VM to VM in a single network on a single compute node
- Use case 2: East-West, VM to VM in a single network on two compute nodes
- Use case 3: North-South with Floating IP, VM to Internet (DVR / static NAT)
- Use case 4: East-West routing, VM to VM in two sub-networks on two compute nodes (DVR)
- Use case 5: North-South routing with SNAT, VM to Internet (dynamic NAT)
2

3 Main CLI on Compute node
Network namespaces: ip netns - process network namespace management (combine with ip, tcpdump, iptables)
Reference: http://docs.openstack.org/networking-guide/scenario_dvr_ovs.html
Libvirt - virtualization: virsh
Linux bridge: brctl show, iptables --list-rules, tcpdump
Open vSwitch:
- ovs-vsctl show - utility for querying and configuring OVS
- ovs-ofctl show - administer / configure OpenFlow switches
- ovs-appctl - utility for configuring running OVS daemons
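A typical first pass on a compute node chains these tools together. The sketch below is illustrative only (instance-0000005a is nwlab1's libvirt name from slide 12; device names will differ on your node):
# virsh list                  => running instances on this hypervisor
# brctl show                  => per-VM Linux bridges (qbrXXX) with their tap / qvb ports
# ovs-vsctl show              => br-int / br-tun / br-ex layout, qvo ports and VLAN tags
# ip netns                    => qrouter-* / fip-* namespaces hosted on this node
# virsh dumpxml instance-0000005a | grep -A 7 "<interface"   => the tap device and qbr bridge of one VM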

4 Use Case 1: VM to VM in single network on single compute node 4

5 Use Case 2: VM to VM in single network on two compute nodes 5

6 6

7 Use Case 4: East-West routing – VM on different computes / networks 7

8 8

9 Network Lab - Pre-requisites
- Having followed the theory
- Having done the previous Lab
- Get the Lab Guide PDF from the HTTP site
- Dashboard: https://192.168.24.31/ (admin / c7d9b0fe57df051ec6b76c2bb741ab0dfa81720d)
- A Tenant ID and a User ID
- A private network and a subnet
- 3 VMs (that you know how to access): 2 on the same compute node, the 3rd one on a different compute node, with a security group (ping and SSH allowed!), a keypair and a floating IP
- A router, connected to the external network
A CLI sketch of these prerequisites is shown below.
9
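If you prefer the CLI to the Dashboard, a minimal sketch of preparing these resources with the 2015-era clients is shown here. It assumes the names used later on slide 12 (nwlabsecgroup, nwlabkeypair, nwlabrouter, the "internal" subnet and ext-net); adapt them to your tenant:
# nova secgroup-create nwlabsecgroup "Network lab security group"
# nova secgroup-add-rule nwlabsecgroup icmp -1 -1 0.0.0.0/0      => allow ping
# nova secgroup-add-rule nwlabsecgroup tcp 22 22 0.0.0.0/0       => allow SSH
# nova keypair-add nwlabkeypair > nwlabkeypair.pem ; chmod 600 nwlabkeypair.pem
# neutron router-create nwlabrouter
# neutron router-interface-add nwlabrouter internal              => attach the private subnet
# neutron router-gateway-set nwlabrouter ext-net                 => connect the router to the external network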

10 Lab Environment (reminder)
Jump Host: RDP to 16.16.11.96 as userXYZ / *ETSSjun2015!*
Seed Host: SSH 10.2.1.230 as demopaq / P@ssw0rd (from Jump Host); run sudo -i to switch to the root user
Seed VM: ssh 192.168.24.2 (from Seed Host); source stackrc; nova list. Please do not stop the Seed VM! This would break the entire lab!
Undercloud: ssh heat-admin@192.168.24.6 (from Seed VM) # sudo -i # source stackrc # nova list
Overcloud: ssh heat-admin@192.168.24.31 (from Seed VM) # sudo -i # source stackrc # nova list
Compute Node: ssh heat-admin@192.168.24.xx (from Seed VM) # sudo -i
10

11 Collecting Information 11

12 Prepared environment
Tenant: networklab
Networks:
- ext-net – subnet: 192.168.25.0/24 (FIPs)
- nwlabprivate – subnet: internal – 192.168.200.0/24, with nwlabrouter (ID = c3be0f2e-88c7-445e-89aa-9c17b8d3761b)
Security group: nwlabsecgroup
KeyPair: nwlabkeypair
VMs:
- nwlab1 on Compute 9 (192.168.24.44) | instance-0000005a | bridge qbr3f3ebb06-dd | vNIC tap3f3ebb06-dd | IP 192.168.200.9, FIP 192.168.25.87 | MAC fa:16:3e:ee:5c:7f
- nwlab2 on Compute 9 (192.168.24.44) | instance-0000005d | bridge qbrfed20562-44 | vNIC tapfed20562-44 | IP 192.168.200.10 | MAC fa:16:3e:82:49:d1
- nwlab3 on Compute 8 (192.168.24.43) | instance-00000060 | bridge qbrd2bca12f-74 | vNIC tapd2bca12f-74 | IP 192.168.200.11 | MAC fa:16:3e:dd:ff:cf
12

13 Collecting Information on VMs
Get your project tenant ID (from Overcloud):
# keystone tenant-get 
e.g. 1598e8d4a5e64bed9880514a39a2e940
Find on which physical compute node each instance is running and what its local VM name is (from Overcloud):
# nova list --all-tenants 1 --tenant --fields name,OS-EXT-SRV-ATTR:host,OS-EXT-SRV-ATTR:instance_name
e.g. NetworkLabVM1 | overcloud-ce-novacompute1-novacompute1-qr52vumlc4in | instance-000001b6
Get the compute node IPs (from Overcloud):
# nova hypervisor-list
# nova hypervisor-show | grep host_ip
e.g. 192.168.24.35 (compute 0) and 192.168.24.36 (compute 1)
Log into the compute node and get the virtual NIC + bridge (from Seed VM):
# ssh heat-admin@
$ sudo -i
[# virsh list]
[# virsh dumpxml | grep "<nova:name" to check it is your VM]
# virsh dumpxml | grep -A 7 "<interface"
e.g. tap551d286a-e4 / qbr551d286a-e4
13
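An alternative for a single VM, shown as a sketch below, is to pull the host and libvirt instance name with nova show and then resolve the tap device on that compute node. It assumes the nwlab1 VM from slide 12 and admin credentials on the Overcloud; the <hypervisor-id> is a placeholder taken from nova hypervisor-list, and the e.g. lines are illustrative:
# nova show nwlab1 | grep -E "OS-EXT-SRV-ATTR:host|OS-EXT-SRV-ATTR:instance_name"
e.g. overcloud-ce-novacompute9-NovaCompute9-2fcag4clpflk / instance-0000005a
# nova hypervisor-list
# nova hypervisor-show <hypervisor-id> | grep host_ip
e.g. 192.168.24.44
(then, from the Seed VM)
# ssh heat-admin@192.168.24.44
$ sudo -i
# virsh dumpxml instance-0000005a | grep -A 7 "<interface"
e.g. tap3f3ebb06-dd on bridge qbr3f3ebb06-dd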

14 Overcloud Compute IP
+--------------------------------------+-----------------------------------------------------+--------+------------+-------------+------------------------+
| ID                                   | Name                                                | Status | Task State | Power State | Networks               |
+--------------------------------------+-----------------------------------------------------+--------+------------+-------------+------------------------+
| 914b9e90-af7e-48a1-8f2a-a9fdc607743c | overcloud-ce-controller-SwiftStorage0-xupnrgqv6byz  | ACTIVE | -          | Running     | ctlplane=192.168.24.34 |
| d13ded44-7f6a-47e5-a7d2-5ade062208a8 | overcloud-ce-controller-SwiftStorage1-3qxf35lkkagj  | ACTIVE | -          | Running     | ctlplane=192.168.24.33 |
| 6bc6e42a-ef3b-45ae-b445-662e3525914d | overcloud-ce-controller-controller0-6udsmj2xdjbi    | ACTIVE | -          | Running     | ctlplane=192.168.24.31 |
| 39266048-b727-4254-8b35-5f3dd2f4cd2f | overcloud-ce-controller-controller1-k3iiokbfjvey    | ACTIVE | -          | Running     | ctlplane=192.168.24.30 |
| e9e89f62-762b-496f-9318-01292d7a0c10 | overcloud-ce-controller-controller2-ssbsl5uulnmn    | ACTIVE | -          | Running     | ctlplane=192.168.24.32 |
| 189f1f0b-17ef-4526-824b-0cb66f2745f5 | overcloud-ce-novacompute0-NovaCompute0-mxdy3klm45np | ACTIVE | -          | Running     | ctlplane=192.168.24.35 |
| 7933d944-9914-4146-91ae-15541a3c9df7 | overcloud-ce-novacompute1-NovaCompute1-dcemqprercrx | ACTIVE | -          | Running     | ctlplane=192.168.24.36 |
| 5d71a273-9f42-432b-8838-473d9b6e75ac | overcloud-ce-novacompute2-NovaCompute2-6gzjf42rxtvf | ACTIVE | -          | Running     | ctlplane=192.168.24.37 |
| 34ae25e9-87cb-4fcd-9ef9-00f86fe88e25 | overcloud-ce-novacompute3-NovaCompute3-3yek7if6k3pm | ACTIVE | -          | Running     | ctlplane=192.168.24.38 |
| c7920407-b93c-410c-aa66-a2734f697dea | overcloud-ce-novacompute4-NovaCompute4-oc6xz72joshk | ACTIVE | -          | Running     | ctlplane=192.168.24.39 |
| 13463fb4-68f8-451f-8762-baac928763a1 | overcloud-ce-novacompute5-NovaCompute5-42mkfaniod5e | ACTIVE | -          | Running     | ctlplane=192.168.24.40 |
| a654ec46-2284-4c8e-8e57-9d6fe74b1517 | overcloud-ce-novacompute6-NovaCompute6-nknrdp3bxirp | ACTIVE | -          | Running     | ctlplane=192.168.24.41 |
| d89666e7-da13-4c0a-9321-d74ab3d3c692 | overcloud-ce-novacompute7-NovaCompute7-th2gxbphpvyj | ACTIVE | -          | Running     | ctlplane=192.168.24.42 |
| 9a91f928-164b-41b6-867e-b711643f6ae8 | overcloud-ce-novacompute8-NovaCompute8-hxkfrs7fmum5 | ACTIVE | -          | Running     | ctlplane=192.168.24.43 |
| b64e0d6d-9226-43d7-b793-5a76f15aa505 | overcloud-ce-novacompute9-NovaCompute9-2fcag4clpflk | ACTIVE | -          | Running     | ctlplane=192.168.24.44 |
+--------------------------------------+-----------------------------------------------------+--------+------------+-------------+------------------------+
14

15 Use Case 1 VM to VM in single network on single compute node 15

16 Use Case 1: VM to VM in single network on single compute node 16

17 Use Case 1: VM to VM in single network on single compute node
What you need (refer to the Cloud Lab for the How To): 2 VMs, on the same network and on the same compute node, with a Security Group allowing ping / SSH.
Tip: to ensure both VMs land on the same compute node, create your first VM and check which compute node hosts it, then create your second VM using the relevant Availability Zone (see the sketch below).
Scenario: connect to the first instance and initiate a ping to the second instance.
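A minimal sketch of the tip above, assuming the admin-only nova:<host> availability-zone syntax and the lab names from slide 12 (the image, flavor and net-id placeholders must be adapted):
# nova show nwlab1 | grep OS-EXT-SRV-ATTR:host
e.g. overcloud-ce-novacompute9-NovaCompute9-2fcag4clpflk
# nova boot --image <image> --flavor <flavor> --nic net-id=<nwlabprivate-id> \
    --security-groups nwlabsecgroup --key-name nwlabkeypair \
    --availability-zone nova:overcloud-ce-novacompute9-NovaCompute9-2fcag4clpflk nwlab2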

18 Use Case 1: VM to VM in single network on single compute node
VM0 (eth0): ping the second VM, then follow the packet on the compute node.
tcpdump icmp -e -i (the VM vNIC)
check Dst MAC: fa:16:3e:d5:14:0c
per-VM Linux Bridge (qbr) + iptables:
iptables --list-rules | grep 
neutron-openvswi-i551d286a-e => Input chain
neutron-openvswi-o551d286a-e => Output chain
iptables --list -v -n
0 0 RETURN icmp -- * * 0.0.0.0/0 0.0.0.0/0 => ICMP security rule (ingress)
7 1056 RETURN tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:22 => SSH security rule (ingress)
brctl show
tcpdump icmp -e -i 
Compute1 vSwitch Integration Bridge (br-int):
ovs-vsctl show | grep -A3 qvo
tag: 47 => tenants are locally isolated on L2 by assigning VLAN tags
ovs-ofctl show br-int | grep qvo
140 => qvo port Id used for OpenFlow rules
ovs-ofctl dump-flows br-int table=0 => match of Dst MAC is with rule forward NORMAL (we will do L2 forwarding)
ovs-appctl fdb/show br-int | grep  => packet switched to port 141 (dst MAC known)
Packet path: tap -> qvb (iptables) -> qvo (VLAN tag) -> br-int table 0 - Forward NORMAL
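As a concrete illustration (a sketch using nwlab1's tap3f3ebb06-dd port from slide 12; your port suffix and chain names will differ), the security-group chains and the traffic on the tap device can be checked like this:
# tcpdump -e -i tap3f3ebb06-dd icmp                    => echo requests leaving the VM, with the peer VM's dst MAC
# iptables --list-rules | grep 3f3ebb06-dd             => shows the neutron-openvswi-i3f3ebb06-d (ingress) and -o3f3ebb06-d (egress) chains
# iptables --list neutron-openvswi-i3f3ebb06-d -v -n   => RETURN rules for ICMP and tcp dpt:22 installed by the security group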

19 Use Case 1: VM to VM in single network on single compute node
ovs-ofctl show br-int | grep 141
qvo8f0d43bf-95 => the packet does not leave br-int, it goes to the local Linux bridge of the second VM
tcpdump icmp -e -i qvb
tcpdump icmp -e -i tap
==> Also test with a security group that does not allow ICMP
Packet path on the destination side: br-int (VLAN tag, forward) -> qvo -> qvb -> qbr (iptables) -> tap -> VM2 eth0

20 Use Case 2 VM to VM in single network on two compute nodes 20

21 Use Case 2: VM to VM in single network on two compute nodes 21

22 Use Case 2: VM to VM in single network on two compute nodes 22 http://docs.openstack.org/networking-guide/scenario_legacy_ovs.html

23 Use Case 2: VM to VM in single network on two compute nodes
What you need (refer to the Cloud Lab for the How To): 2 VMs, on the same network BUT on different compute nodes, with a Security Group allowing ping / SSH.
Tip: to ensure the VMs are on different compute nodes, create your first VM and check which compute node hosts it, then create your second VM using a different Availability Zone / host.
Scenario: connect to the first instance and initiate a ping to the second instance.

24 Use Case 2: VM to VM in single network on two compute nodes
VM0 (eth0): ping the remote VM, then follow the packet on the source compute node.
tcpdump icmp -e -i (the VM vNIC)
check Dst MAC: fa:16:3e:dd:ff:cf
per-VM Linux Bridge (qbr) + iptables:
iptables --list-rules | grep 
neutron-openvswi-i3f3ebb06-d => Input chain
neutron-openvswi-o3f3ebb06-d => Output chain
iptables --list -v -n
0 0 RETURN icmp -- * * 0.0.0.0/0 0.0.0.0/0 => ICMP security rule (ingress)
7 1056 RETURN tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:22 => SSH security rule (ingress)
brctl show
tcpdump icmp -e -i 
Compute1 vSwitch Integration Bridge (br-int):
ovs-vsctl show | grep -A3 qvo
tag: 2 => tenants are locally isolated on L2 by assigning VLAN tags
ovs-ofctl show br-int | grep qvo
13 => port Id used for OpenFlow rules
ovs-ofctl dump-flows br-int table=0 => match is with rule forward NORMAL (we will do L2 forwarding)
ovs-appctl fdb/show br-int | grep  => packet switched to port 6 (dst MAC known)
Packet path: tap -> qvb (iptables) -> qvo (VLAN tag) -> br-int table 0 - Forward NORMAL
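For instance, a sketch with the elided arguments filled in from slide 12's data (the learned port number will differ on your node, and port 6 is presumably br-int's patch-tun port since the destination VM lives on another compute node):
# ovs-vsctl show | grep -A3 qvo3f3ebb06-dd
e.g. tag: 2                                          => local VLAN of the tenant network
# ovs-appctl fdb/show br-int | grep fa:16:3e:dd:ff:cf
e.g.    6     2  fa:16:3e:dd:ff:cf    1              => port / VLAN / MAC / age: the frame heads towards br-tun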

25 Use Case 2: VM to VM in single network on two compute nodes
Compute1 Tunnel Bridge (br-tun)
On br-int (Compute1 Integration Bridge):
ovs-ofctl show br-int | grep patch
=> patch-tun: the destination MAC is not reachable locally on br-int, we need to go out of the compute node
On br-tun:
ovs-ofctl show br-tun | grep '('
1(patch-int): addr:f2:a9:2e:fd:d9:22 => patch-int port Id
ovs-ofctl dump-flows br-tun table=0
cookie=0x0, duration=173548.496s, table=0, n_packets=37963, n_bytes=13248284, idle_age=0, hard_age=65534, priority=1,in_port=1 actions=resubmit(,1)
ovs-ofctl dump-flows br-tun table=1
cookie=0x0, duration=173603.994s, table=1, n_packets=38004, n_bytes=13252670, idle_age=0, hard_age=65534, priority=0 actions=resubmit(,2)
ovs-ofctl dump-flows br-tun table=2
cookie=0x0, duration=173834.782s, table=2, n_packets=528, n_bytes=49526, idle_age=0, hard_age=65534, priority=0,dl_dst=00:00:00:00:00:00/01:00:00:00:00:00 actions=resubmit(,20)
ovs-ofctl dump-flows br-tun table=20 | grep 
cookie=0x0, duration=8076.520s, table=20, n_packets=509, n_bytes=49098, idle_age=0, priority=2,dl_vlan=2,dl_dst=fa:16:3e:dd:ff:cf actions=strip_vlan,set_tunnel:0x3ed,output:7
=> strip the VLAN tag, set VXLAN VNI 0x3ed (0x3ed hex = 1005 decimal) and send to port 7
ovs-ofctl show br-tun | grep '('
7(vxlan-c0a8182b): addr:8e:39:ac:11:c0:ea
ovs-vsctl show | grep -A2 
options: {df_default="false", in_key=flow, local_ip="192.168.24.44", out_key=flow, remote_ip="192.168.24.43"} => the remote_ip is Compute8's IP
br-tun tables: table 0 (from which port?), table 1 (routed?), table 2 (unicast?), table 20 (tunnel out: local VLAN -> VNI); connected to br-int via patch-tun / patch-int
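To double-check the VNI conversion mentioned above (purely illustrative), the shell can do the hex-to-decimal math:
$ printf '%d\n' 0x3ed
1005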

26 Use Case 2: VM to VM in single network on two compute nodes
On the underlay (compute node eth0):
tcpdump -e -i eth0 -c 100 | grep -B1 
09:16:56.583110 c4:34:6b:ae:d7:b8 (oui Unknown) > c4:34:6b:ae:28:50 (oui Unknown), ethertype IPv4 (0x0800), length 148: overcloud-ce-novacompute9-NovaCompute9-2fcag4clpflk.42717 > overcloud-ce-novacompute8-NovaCompute8-hxkfrs7fmum5.4789: VXLAN, flags [I] (0x08), vni 1005
=> the internal MAC and IP are not visible to the underlay
tcpdump -e -i eth0 -c 100 | grep -B1 
09:28:03.584266 IP overcloud-ce-novacompute9-NovaCompute9-2fcag4clpflk.42717 > overcloud-ce-novacompute8-NovaCompute8-hxkfrs7fmum5.4789: VXLAN, flags [I] (0x08), vni 1005
IP 192.168.200.9 > 192.168.200.11: ICMP echo request, id 6486, seq 1615, length 64
On the destination compute node, Tunnel Bridge (br-tun):
ovs-vsctl show
Port "vxlan-c0a8182c"
  Interface "vxlan-c0a8182c"
    type: vxlan
    options: {df_default="false", in_key=flow, local_ip="192.168.24.43", out_key=flow, remote_ip="192.168.24.44"}
ovs-ofctl show br-tun | grep '('
12(vxlan-c0a8182c): addr:e6:c3:36:83:61:a6 => the VXLAN packet is coming in from port 12
1(patch-int): addr:7a:45:57:ab:04:f4 => connects br-tun with br-int, where our VM is
Path: Compute1 br-tun (table 20, VLAN -> VNI) -> underlay (VXLAN, VNI 1005) -> Compute2 br-tun
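To capture only the tunnel traffic instead of grepping (a sketch; 4789 is the standard VXLAN UDP port already visible in the output above):
# tcpdump -e -n -c 10 -i eth0 udp port 4789    => only the VXLAN-encapsulated frames exchanged between the compute nodes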

27 Use Case 2: VM to VM in single network on two compute nodes
Compute2 Tunnel Bridge (br-tun): table 0 (from which port?), table 4 (add local VLAN based on VNI), table 9 (routed?), table 10 (learn, send to br-int)
ovs-ofctl dump-flows br-tun table=0
cookie=0x0, duration=9960.459s, table=0, n_packets=2465, n_bytes=240439, idle_age=0, priority=1,in_port=12 actions=resubmit(,4)
ovs-ofctl dump-flows br-tun table=4
cookie=0x0, duration=10215.592s, table=4, n_packets=2753, n_bytes=269001, idle_age=0, priority=1,tun_id=0x3ed actions=mod_vlan_vid:5,resubmit(,9) => set the local VLAN tag
ovs-ofctl dump-flows br-tun table=9
cookie=0x0, duration=176122.550s, table=9, n_packets=3149, n_bytes=301923, idle_age=0, hard_age=65534, priority=0 actions=resubmit(,10)
ovs-ofctl dump-flows br-tun table=10
cookie=0x0, duration=178689.832s, table=10, n_packets=3191, n_bytes=305983, idle_age=0, hard_age=65534, priority=1 actions=learn(table=20,hard_timeout=300,priority=1,NXM_OF_VLAN_TCI[0..11],NXM_OF_ETH_DST[]=NXM_OF_ETH_SRC[],load:0->NXM_OF_VLAN_TCI[],load:NXM_NX_TUN_ID[]->NXM_NX_TUN_ID[],output:NXM_OF_IN_PORT[]),output:1
=> learn into table 20, send to port 1 (patch-int)
27

28 Use Case 2: VM to VM in single network on two compute nodes
Compute2 vSwitch Integration Bridge (br-int):
ovs-vsctl show | grep -A1 'tag: '
tag: 5
Interface "qvod2bca12f-74"
ovs-ofctl show br-int | grep '('
8(patch-tun): addr:26:dc:b4:4f:df:91
19(qvod2bca12f-74): addr:ba:9b:58:5e:0f:7d => port Id is 19
ovs-ofctl dump-flows br-int table=0
cookie=0x0, duration=178960.748s, table=0, n_packets=50913, n_bytes=15060268, idle_age=0, hard_age=65534, priority=1 actions=NORMAL => match is with rule forward NORMAL
ovs-appctl fdb/show br-int | grep 
19 5 fa:16:3e:dd:ff:cf 0 => packet switched to port 19, which is the qvo port
per-VM Linux Bridge (qbr) + iptables:
brctl show
qbr0d4c2f0e-8b 8000.ba89713f6904 no qvb0d4c2f0e-8b tap0d4c2f0e-8b
tcpdump icmp -e -i (the VM vNIC)
virsh list
virsh dumpxml | grep "<nova:name" to check it is your VM
virsh dumpxml | grep -A 7 "<interface"
Packet path on Compute2: patch-tun -> br-int (VLAN, table 0 - forward NORMAL) -> qvo -> qvb (iptables) -> qbr -> tap -> VM eth0
28

29 Use Case 3 29

30 30

31 31 http://docs.openstack.org/networking-guide/scenario_dvr_ovs.html

32 Use Case 3: North-South with Floating IP, VM to Internet
What you need (refer to the Cloud Lab for the How To): 1 VM with a Floating IP attached to it, with a Security Group allowing ping / SSH (see the sketch below for the Floating IP association).
Scenario: start a ping from the VM to the outside world (www.hp.com = 15.201.49.153) and start chasing the packet.
Note: in this case Helion OpenStack will use its distributed routing and static NAT capability.
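A minimal sketch of attaching the Floating IP with the 2015-era CLI (it assumes the ext-net and nwlab1 names from slide 12; the Dashboard "Associate Floating IP" action is equivalent):
# neutron floatingip-create ext-net               => allocates e.g. 192.168.25.87
# nova floating-ip-associate nwlab1 192.168.25.87
# nova list | grep nwlab1                         => the VM now shows 192.168.200.9, 192.168.25.87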

33 Use Case 3: ping from the VM and follow it on Compute1
VM (eth0):
ping 15.201.49.155 (www.hp.com) - don't worry if it does not answer
virsh list
virsh dumpxml | grep "<nova:name" to check it is your VM
virsh dumpxml | grep -A 7 "<interface"
tcpdump icmp -e -i 
10:58:40.252780 fa:16:3e:ee:5c:7f (oui Unknown) > fa:16:3e:10:8a:e6 (oui Unknown), ethertype IPv4 (0x0800), length 98: 192.168.200.9 > 15.201.49.155: ICMP echo request, id 6517, seq 71, length 64
=> the packet is sent to the MAC of the default gateway, which is the DVR router port MAC
Compute1 vSwitch Integration Bridge (br-int):
ovs-vsctl show | grep -A3 
tag: 2 => tenants are locally isolated on L2 by assigning VLAN tags
ovs-ofctl show br-int
12(qr-e6f4ab72-5b): addr:00:00:00:00:00:00
13(qvo3f3ebb06-dd): addr:ca:70:14:31:ba:c3
=> 12 is the port Id used for OpenFlow rules
ovs-ofctl dump-flows br-int table=0
cookie=0x0, duration=180787.809s, table=0, n_packets=67245, n_bytes=16690680, idle_age=0, hard_age=65534, priority=1 actions=NORMAL => match is with rule forward NORMAL
ovs-appctl fdb/show br-int | grep 
12 2 fa:16:3e:10:8a:e6 33 => packet switched to router port 12 (= qr-e6f4ab72-5b)
Packet path: VM eth0 -> tap -> qvb (iptables) -> qvo (VLAN tag) -> br-int table 0 - forward NORMAL -> qr (router port)
33

34 Use Case 3: inside the router namespace (qrouter) on Compute1 - routing and static NAT
Get the router ID from the GUI: c3be0f2e-88c7-445e-89aa-9c17b8d3761b (see the CLI alternative below)
ip netns | grep c3be0f2e-88c7-445e-89aa-9c17b8d3761b
qrouter-c3be0f2e-88c7-445e-89aa-9c17b8d3761b
ip netns exec qrouter-c3be0f2e-88c7-445e-89aa-9c17b8d3761b ip a
2: rfp-c3be0f2e-8 inet 192.168.25.87/32 and 169.254.31.238/31
38: qr-e6f4ab72-5b inet 192.168.200.1/24
ip netns exec qrouter-c3be0f2e-88c7-445e-89aa-9c17b8d3761b ip rule list
32769: from 192.168.200.9 lookup 16
ip netns exec qrouter-c3be0f2e-88c7-445e-89aa-9c17b8d3761b ip route show table 16
default via 169.254.31.239 dev rfp-c3be0f2e-8
ip netns exec qrouter-c3be0f2e-88c7-445e-89aa-9c17b8d3761b iptables --table nat --list
target prot opt source        destination
SNAT   all  --  192.168.200.9 anywhere      to:192.168.25.87
DNAT   all  --  anywhere      192.168.25.87 to:192.168.200.9
ip netns exec qrouter-c3be0f2e-88c7-445e-89aa-9c17b8d3761b tcpdump icmp -e -l -i rfp-c3be0f2e-8
11:26:33.261025 b2:eb:f8:8c:0d:02 (oui Unknown) > c2:3b:9c:8f:b6:66 (oui Unknown), ethertype IPv4 (0x0800), length 98: 192.168.25.87 > 15.201.49.155: ICMP echo request, id 6517, seq 1744, length 64
=> SNAT done: the source IP has been translated (compare with a tcpdump on the qr port)
Packet path: qr (192.168.200.1) -> qrouter namespace (routing + static NAT) -> rfp
34
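If you prefer not to open the Dashboard, a sketch of getting the router ID from the CLI (run from the Overcloud, using the nwlabrouter name from slide 12):
# neutron router-list | grep nwlabrouter
e.g. | c3be0f2e-88c7-445e-89aa-9c17b8d3761b | nwlabrouter | ...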

35 Use Case 3: Floating IP namespace (fip) and External Bridge (br-ex) on Compute1
ip netns
fip-46059b8d-52a0-4934-86f2-e0364f119797
ip netns exec fip-46059b8d-52a0-4934-86f2-e0364f119797 ip a
2: fpr-c3be0f2e-8 inet 169.254.31.239/31
43: fg-86f4105d-89 inet 192.168.25.91/24
ip netns exec fip-46059b8d-52a0-4934-86f2-e0364f119797 ip route | grep fpr-c3be0f2e-8
169.254.31.238/31 dev fpr-c3be0f2e-8 proto kernel scope link src 169.254.31.239
192.168.25.87 via 169.254.31.238 dev fpr-c3be0f2e-8
ip netns exec fip-46059b8d-52a0-4934-86f2-e0364f119797 tcpdump icmp -e -l -i fg-86f4105d-89
11:38:37.267321 fa:16:3e:4f:af:aa (oui Unknown) > 78:48:59:38:41:e3 (oui Unknown), ethertype IPv4 (0x0800), length 98: 192.168.25.87 > 15.201.49.155: ICMP echo request, id 6517, seq 2468, length 64
versus, on the rfp side:
11:37:22.265723 b2:eb:f8:8c:0d:02 (oui Unknown) > c2:3b:9c:8f:b6:66 (oui Unknown), ethertype IPv4 (0x0800), length 98: 192.168.25.87 > 15.201.49.155: ICMP echo request, id 6517, seq 2393, length 64
Compute1 External Bridge (br-ex):
ovs-vsctl show | grep -A4 br-ex
Port "fg-86f4105d-89"
Port "vlan25"
ovs-ofctl show br-ex | grep '('
1(vlan25): addr:c4:34:6b:ae:d7:b8
ovs-ofctl dump-flows br-ex
cookie=0x0, duration=183526.882s, table=0, n_packets=20685, n_bytes=2211058, idle_age=1, hard_age=65534, priority=0 actions=NORMAL
ovs-appctl fdb/show br-ex
1 0 78:48:59:38:41:e3 4
Packet path: rfp -> fpr -> fip namespace -> fg -> br-ex (MAC switching) -> vlan25 (external VLAN 25)
35

36 Use Case 4 36

37 Use Case 4: East-West routing – VM on different computes / networks 37

38 Use Case 4: East-West routing – VM on different computes / networks 38 http://docs.openstack.org/networking-guide/scenario_dvr_ovs.html

39 Use Case 5 39

40 40

41 Conclusion 41

42 Reference
http://docs.openstack.org/openstack-ops/content/network_troubleshooting.html
http://docs.openstack.org/networking-guide/ incl. http://docs.openstack.org/networking-guide/deploy_scenario3a.html
http://docs.openstack.org/networking-guide/scenario_dvr_ovs.html
42

43 Annex 43

44 Main CLI on Compute node (annex)
Diagram components: Instance eth0 -> tap -> Linux Bridge (qbr) -> qvb/qvo -> Integration Bridge (br-int) -> patch ports -> Tunnel Bridge (br-tun) / External Bridge (br-ex); Distributed Router namespace (qrouter: qr, rfp), Floating IP namespace (fip: fpr, fg) -> underlay / Internet
KVM / Libvirt - virtualization: virsh
Linux bridge: brctl show, iptables --list-rules, tcpdump
Open vSwitch (openvswitch.org):
- ovs-vsctl show - utility for querying and configuring ovs-vswitchd
- ovs-ofctl show - administer OpenFlow switches
- ovs-appctl - utility for configuring running Open vSwitch daemons
Network namespaces: ip netns - process network namespace management (ip, tcpdump, iptables)

45 Legacy routing in Neutron (IP forwarding)
- Inter-subnet (east-west): traffic between VMs
- Floating IP (north-south): traffic between external and VM
- Default SNAT (north-south): traffic from VM to external
45

46 Network, subnet and port are the 3 core resources of Neutron.
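A small illustration of these three resources with the 2015-era neutron CLI (a sketch: it reuses the lab's nwlabprivate network and 192.168.200.0/24 subnet, and the nwlab-port1 name is hypothetical):
# neutron net-create nwlabprivate
# neutron subnet-create --name internal nwlabprivate 192.168.200.0/24
# neutron port-create --name nwlab-port1 nwlabprivate   => the port gets a MAC and an IP from the subnet
# neutron port-list | grep nwlab-port1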

47 DVR - Neutron plug-ins and agents
On the Compute Node / Hypervisor:
- L2 agent (OVS or Linux bridge) - configures the software bridges and applies the Security Group rules
- L3 agent (Linux network namespaces)
- Metadata agent (nova)
On the Network Node:
- L3 agent (Linux network namespaces) - centralized part
- DHCP agent
- Services: LBaaS, FWaaS (north -> south, in qr), VPNaaS
47
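To see which agents run on which node (a sketch; run from the Overcloud, and the example output is indicative only):
# neutron agent-list
e.g. an Open vSwitch agent and an L3 agent on each overcloud-ce-novacomputeN host, plus DHCP / metadata agents on the controller (network) nodes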

48 DVR – Distributed Routing
Avoids inter-subnet (east-west) traffic having to reach the network node: basically the router is duplicated on each compute node, and the same is done for Floating IPs. SNAT is still centralized.
Run ip netns to see the existing namespaces (see the sketch below):
- qrouter – one per tenant router; qr = internal network port, rfp = router-to-floating-IP port
- fip – one per compute node; fpr = floating-to-router port (internal 169.254.31.x addressing), fg = FIP gateway port with a public IP
- snat – on the network node; sg = SNAT gateway port
- qdhcp – on the network node
48
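For illustration, a sketch of what ip netns typically returns (the qrouter and fip UUIDs are the ones already seen in Use Case 3; the snat and qdhcp namespaces only appear on the network node, and <network-id> is a placeholder):
# ip netns        (on a compute node hosting a VM of the tenant)
qrouter-c3be0f2e-88c7-445e-89aa-9c17b8d3761b
fip-46059b8d-52a0-4934-86f2-e0364f119797
# ip netns        (on the network node)
snat-c3be0f2e-88c7-445e-89aa-9c17b8d3761b
qdhcp-<network-id>
qrouter-c3be0f2e-88c7-445e-89aa-9c17b8d3761b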

49 From the OpenStack Summit Vancouver DVR namespace presentation: network node and compute node (2 tenants) 49

