1
Introduction to Tornado Environments
Author: Chun-Yi Tsai
2
What is Tornado?
Tornado is an integrated environment for developing real-time and embedded applications:
VxWorks : the target embedded real-time operating system (RTOS)
Tornado : the host development utilities (IDE)
Latest version: Tornado 2.0 with C and C++ support (host OS: Solaris 2.5.1/2.6/2.7, Windows 95/98/NT, HP-UX 10), shipping with VxWorks 5.4
3
Typical Tornado Development Configuration
Boot parameters are set via a serial link. VxWorks itself is booted over Ethernet, which is much faster.
4
Development Tools
Tornado development tools:
Launch - launch the Tornado tools
WinSh - access the target interactively
CrossWind - source-level debugger
Browser - display system information
Project Facility - configure the application or VxWorks
WindView - analyze multitasking applications
Simulator - simulate a VxWorks target on the host OS
Tools are customizable with Tcl: add new functionality, customize the user interface.
Some target-resident tools are also available.
5
Tornado Directory Tree
/tornado
  host   - Tornado host-resident tools
  SETUP  - SETUP program
  share  - shared XDR (eXternal Data Representation) code
  target - VxWorks OS and BSPs
  docs   - on-line HTML documentation
6
PC: Host Software Configuration
The torVars.bat script, located in tornado\host\x86-win32\bin, sets the environment variables needed for command-line use of the tools. Here is an example torVars.bat:

set WIND_HOST_TYPE=x86-win32
set WIND_BASE=D:\Ttwo
set PATH=%WIND_BASE%\host\%WIND_HOST_TYPE%\bin;%PATH%
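Run the script in each new command prompt before invoking the command-line tools; a quick check that it took effect (the D:\Ttwo install path follows the example above):

C:\> D:\Ttwo\host\x86-win32\bin\torVars.bat
C:\> echo %WIND_BASE%
D:\Ttwo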
7
Boot ROM
The target's boot ROM code executes on power-up.
The boot ROM does not contain the VxWorks system. The boot ROM code:
Allows setting of the boot parameters.
Downloads VxWorks into target memory via the network (FTP).
Starts executing VxWorks.
8
Host-Target Interaction
Diagram: Tornado tools on the host connect to the target server (tgtsvr), whose back end talks to the WDB agent inside VxWorks on the target device.
9
Portability
HSP : Host Support Package
BSP : Board Support Package
10
Target Server
After booting a target, you must start a target server to access the target with the Tornado tools. The target server provides host-based management of the target resources needed by the development tools:
Communication with the debug agent on the target
Dynamic module loading and unloading
Host-resident symbol table for the target
Allocation of target memory for host tools
Cache of the target program text segment
Virtual I/O facilities
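The target server can also be started from the command line with the tgtsvr utility. A typical invocation might look like the line below; the address is a placeholder, and the exact flags should be verified against your Tornado documentation:

tgtsvr 192.168.1.10 -n IntelEvalDevice -V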
11
WDB Agent
The WDB agent acts on the target on behalf of the target server and the Tornado tools:
Reading or modifying memory
Setting or clearing breakpoints
Creating, starting, stopping, and deleting tasks
Calling functions
Gathering system object information
The agent is configurable:
Specify task, external, or dual debug mode
Select a communication strategy consistent with the target server back end
Set the amount of target memory reserved for the agent's use (see the configuration sketch below)
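As a sketch of how these options are selected, the agent is configured with macros like the following (names as found in a typical Tornado 2 configAll.h; the values shown here are illustrative assumptions, not defaults):

#define INCLUDE_WDB                          /* build the WDB agent in        */
#define WDB_COMM_TYPE   WDB_COMM_NETWORK     /* back end: network (WDB RPC)   */
#define WDB_MODE        WDB_MODE_DUAL        /* task, external, or dual mode  */
#define WDB_POOL_SIZE   0x10000              /* target memory reserved for WDB */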
12
Project Facility Terminology
Key project facility concepts:
Bootable Project (stable)
A project used to configure and build VxWorks images for a particular BSP. Application code may be statically linked into such a VxWorks image, and the application's start-up code may be specified (RAM or ROM version).
Downloadable Project (development)
A project used to manage and build application modules that can be downloaded and dynamically linked with a running VxWorks image. Allows "on the fly" development.
13
Real-Time System
14
Address Space
All tasks reside in a common address space:
Makes intertask communication fast and easy
Makes context switches faster (no need to save and restore virtual address contexts)
All tasks run in supervisor (privileged) mode:
No system call overhead (see the sketch below)
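Because tasks run in supervisor mode, kernel services are ordinary C function calls. A minimal sketch (useKernelDirectly is a hypothetical name; semBCreate, semTake, semGive, and semDelete are standard semLib routines):

#include <vxWorks.h>
#include <semLib.h>

void useKernelDirectly (void)
    {
    SEM_ID sem = semBCreate (SEM_Q_PRIORITY, SEM_FULL);

    /* no user/kernel mode switch: kernel services are plain function calls */
    semTake (sem, WAIT_FOREVER);
    semGive (sem);
    semDelete (sem);
    }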
15
Context
On a context switch, a task's context is saved in the task control block (TCB). A task's context includes:
A thread of execution, that is, the task's program counter
The CPU and FPU registers
A stack for dynamic variables and function calls
I/O assignments for standard input, output, and error
A delay timer
A timeslice timer
Kernel control structures
Signal handlers
Debugging and performance monitoring values
16
Kernel Operation
The kernel manages tasks, moving them from state to state (e.g., ready, pended, delayed, suspended) based on the kernel operations invoked.
17
Wind Task Scheduling - Preemptive Priority Scheduling
With a preemptive priority-based scheduler, each task has a priority (from 0, the highest, to 255, the lowest), and the kernel ensures that the CPU is allocated to the highest-priority task that is ready to run. When a higher-priority task becomes ready, it preempts the running task immediately (figure: priority preemption).
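A minimal sketch of priority preemption (worker and demoPreempt are hypothetical names; taskSpawn and taskDelay are standard taskLib routines):

#include <vxWorks.h>
#include <taskLib.h>
#include <stdio.h>

void worker (int id)
    {
    for (;;)
        {
        printf ("task %d running\n", id);
        taskDelay (60);                 /* sleep ~1 s at a 60 Hz system clock */
        }
    }

void demoPreempt (void)
    {
    /* lower number = higher priority: tHigh preempts tLow whenever it is ready */
    taskSpawn ("tLow",  200, 0, 4096, (FUNCPTR) worker, 1, 0,0,0,0,0,0,0,0,0);
    taskSpawn ("tHigh", 100, 0, 4096, (FUNCPTR) worker, 2, 0,0,0,0,0,0,0,0,0);
    }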
18
Wind Task Scheduling - Round-Robin Scheduling
Also known as time slicing:
Achieves fair allocation of the CPU among equal-priority tasks.
Enabled with the routine kernelTimeSlice(ticks); see the sketch below.
Each task keeps a run-time counter of its slice.
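A minimal sketch of enabling round-robin scheduling (enableRoundRobin is a hypothetical wrapper; kernelTimeSlice and sysClkRateGet are standard routines):

#include <vxWorks.h>
#include <kernelLib.h>
#include <sysLib.h>

void enableRoundRobin (void)
    {
    /* give each ready task at the same priority a half-second time slice */
    kernelTimeSlice (sysClkRateGet () / 2);
    }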
19
Configure Development Environment
Set up the FTP server on the NT host system.
Open a console terminal: 9600 baud, 8 data bits, 1 stop bit, no parity, no flow control.
Power up the IXP1200 target machine and set the boot parameters:

VxWorks System Boot
Copyright Wind River Systems, Inc.
CPU: Level One ixp1200eb - ARM IXP1200
Version: 5.4
BSP version: 1.2/1
Creation date: Aug
Press any key to stop auto-boot... 3
20
Configure Development Environment
Boot line commands:
p : print boot parameters
c : change boot parameters
@ : load the VxWorks image from the host over the FTP connection

[VxWorks Boot]: c
'.' = clear field; '-' = go to previous field; ^D = quit
boot device          : eeE
unit number          : 0
processor number     : 0
host name            : NPVS
file name            : c:\ixp1200\boardsupport\bin\vb\vxworks
inet on ethernet (e) :
host inet (h)        :
gateway inet (g)     :
user (u)             : ccl
ftp password (pw)    : ccl
flags (f)            : 0x8
target name (tn)     : IntelEvalDevice
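For reference, the same parameters can also be typed as a single boot line in the standard VxWorks form (the addresses remain elided here, as in the original):

eeE(0,0)NPVS:c:\ixp1200\boardsupport\bin\vb\vxworks e= h= u=ccl pw=ccl f=0x8 tn=IntelEvalDevice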
21
Boot VxWorks Image From Host
Initializing the WDB debug task
22
Start Tornado IDE
23
Tornado IDE - Target Server Setup
Choose Tools > Target Server > Configure: enter the target server name, select "connect via network", and fill in the target IP. Then launch the target server.
24
Connect to the target server
25
Target server connected successfully
26
Target Shell Launch
1. Select the target server.
2. Launch the target shell.
27
Test your functions in the shell
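For example, once a module has been downloaded, its functions can be called directly at the WindSh prompt (myAdd is a hypothetical function used only for illustration):

-> myAdd (3, 4)
value = 7 = 0x7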
28
Code Development
29
Download your code to the target
30
Task and Object Monitoring
31
Debugging
32
Code Tracing
33
Introduction to NPVS Design
Author: Chun-Yi Tsai
34
LVS-DR Review
Only one interface is needed on the virtual server.
The virtual server must have an interface physically linked to the real servers.
35
LVS Real Server Configuration Review
(The concrete addresses were scrubbed from the original; <RIP>, <VIP>, <netmask>, <broadcast>, and <network> below are placeholders for the real server IP, the virtual server IP, and the interface's netmask, broadcast, and network addresses.)
Step 1: ifconfig eth0 <RIP> netmask <netmask> broadcast <broadcast> up
Step 2: route add -net <network> netmask <netmask> dev eth0
Step 3: ifconfig lo:0 <VIP> netmask 255.255.255.255 broadcast <VIP> up
Step 4: route add -host <VIP> dev lo:0
Step 5: echo 1 > /proc/sys/net/ipv4/ip_forward
# Turn off the ARP-reply capability of lo:0 on all real servers:
Step 6: echo 1 > /proc/sys/net/ipv4/conf/all/hidden
Step 7: echo 1 > /proc/sys/net/ipv4/conf/lo/hidden
36
NPVS Development Framework
37
NPVS Overview
Cluster structure based on LVS's ipvsadm. Example (the addresses were scrubbed from the original; <VIP>, <RIP1>, and <RIP2> are placeholders):
ipvsadm -A -t <VIP>:80 -s wlc
ipvsadm -a -t <VIP>:80 -r <RIP1> -g -w 1
ipvsadm -a -t <VIP>:80 -r <RIP2> -g -w 2
Current redirecting type supported: Direct Routing
Current scheduling types supported: RR, WRR, LC, WLC
Persistence support, e.g., for FTP-data (port 20) connections
38
Packet Receiving Flow
39
Download the VxWorks Kernel
Run the following commands at the target shell command line:
cd "c:\IXP1200\VxWorks_Lib"    (or cd "c:\IXP1200\VxWorks_Lib\debug")
ld < VxWorks_gig.o
NetApp_GigInit
This initializes the WorkBench debugging task.
40
Nortel's Pseudo Ethernet Driver -- the pethPoller Task
NetApp_GigInit(), located in \IXP1200_RELEASE12\SA1_CORELIBS\APP_1200\NET_APP.CPP, also runs NetApp_Init(), which includes the microengine WorkBench debugging task and performs the following:
Loads the Ethernet driver and starts it.
Attaches the above devices to the IP protocol stack.
Spawns the pethPoller task to call pethRecv(), which receives packets.
pethRecv() (in \IXP1200_RELEASE12\SA1_CORELIBS\OCTALMAC_21440\PSUDEODRVEND.CPP) receives packets that are sent up from the microengines and need to be processed by the StrongARM.
41
Run the Microengine Driver
42
Latest Development State of NPVS
The microcode ("ucload buffer") and NetApp_GigInit are built into the VxWorks kernel together as a bootable VxWorks image; at boot time everything is ready except running the pseudo Ethernet driver.
A UI task (npvsadm) is spawned in the Tornado shell to configure the NPVS cluster structure, which contains the rules for load balancing.
Our own PethDrvInit(), modified and split out from NetApp_GigInit(), starts the pseudo Ethernet driver (the pethPoller task and pethRecv()).
In PethDrvInit(), we use the call userNetIfConfig("peth", 0, "xxx.xxx.xxx.xxx", "NPVS", 0xffffff00) to assign port 0 the given IP (the virtual server IP address).
When pethRecv() runs, each packet is intercepted by npvs_main_process(), the main process of NPVS. After processing, the packet is sent out by calling pethSend().
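As a rough illustration only (not the authors' actual code), a Direct Routing dispatch inside npvs_main_process() could look like the sketch below; struct real_server and npvs_schedule() are hypothetical names standing in for the NPVS connection hash table and scheduler, and the headers assume the BSD-style stack used by the pethRecv() excerpt on the next slides:

#include <string.h>
#include <netinet/in.h>
#include <netinet/in_systm.h>
#include <netinet/ip.h>
#include <netinet/if_ether.h>

struct real_server { unsigned char mac[6]; };             /* hypothetical */
extern struct real_server *npvs_schedule (struct ip *);   /* rr/wrr/lc/wlc pick */

void npvs_main_process (struct ether_header *ethdr)
    {
    struct ip *iphdr = (struct ip *) ((char *) ethdr + 14);
    struct real_server *rs = npvs_schedule (iphdr);

    if (rs != NULL)
        /* Direct Routing: rewrite only the L2 destination; the IP packet,
           still addressed to the VIP, then reaches the chosen real server */
        memcpy (ethdr->ether_dhost, rs->mac, sizeof rs->mac);
    }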
43
Developing Diagram
44
Packet Interception in pethRecv()
ethdr = (struct ether_header *) pMblk->mBlkHdr.mData;
switch (ntohs (ethdr->ether_type))
    {
    case ETHERTYPE_IP:                          /* this is an IP packet */
        iphdr = (struct ip *) (pMblk->mBlkHdr.mData + 14);
        switch (iphdr->ip_p)
            {
            case IPPROTO_TCP:
            case IPPROTO_UDP:
                /* process only packets that are neither broadcast nor multicast */
                if (((iphdr->ip_dst.s_addr >> 24) == 255) ||
                    ((iphdr->ip_dst.s_addr & 0x000000ff) == 224))
                    break;
                npvs_main_process (ethdr);      /* pass the packet to our main process */
                pethSend (pDrvCtrl, pMblk);     /* send the packet out */
                break;
            default:
                break;
            }
        break;
    default:
        break;
    }
45
Cluster Setting in Terminal
< NPVS Configuration >
(1) Create a new cluster
(2) Edit an existing cluster
(3) Delete an existing cluster
(4) Show an existing cluster
(5) Exit
Choice: 1
Virtual Server IP address =
Create a new cluster successfully!

Choice: 2
< Select Cluster >
(1) Cluster 1
Choice: 1
< Edit Cluster >
(1) Create a virtual service
(2) Edit a virtual service
(3) Delete a virtual service
(4) Exit
< New Virtual Service >
Transport Service (0:tcp 1:udp) = 0
Virtual Service port number = 80
Scheduling Type (0:rr 1:wrr 2:lc 3:wlc) = 3
Number of Real Servers = 2
Number 1 Real Server IP address =
redirect port = 80
redirect type (0:NAT 1:DR 2:TUN) = 1
scheduling weight = 1
Number 2 Real Server IP address =
scheduling weight = 2
46
NPVS Design Main Flow
47
Telnet Running Demo
48
FTP Running Demo
Supports persistent processing (e.g., "ls -la" and "get" work well).
49
HTTP Running Demo
50
Monitor Connection and Hashing Messages
51
Future Work
Maybe supported by microcode in the future:
  NAT and IP-IP tunnel redirecting
  Connection hashing table lookup
Failover handling between LB-to-LB and LB-to-RS
Cookie and content parsing support
3DNS support
Bandwidth control (e.g., DiffServ) on the path from real servers back to clients
Enhanced graphical user interface (GUI) support
Piranha - Red Hat High Availability Server