NIDays 2007 Worldwide Virtual Instrumentation Conference


2 NIDays 2007 Worldwide Virtual Instrumentation Conference
Evaluating New Technologies for Test and Measurement: PCI Express, Multicore Processing, and Microsoft Windows Vista NIDays 2007 Worldwide Virtual Instrumentation Conference Welcome to the “Evaluating New Technologies for Test and Measurement: PCI Express, Multicore Processing, and Microsoft Windows Vista” session.

3 Evaluating Test and Measurement Buses
[Chart: Approximate Latency (μs, decreasing/improving) versus Max Bandwidth (MBytes/s, 0.1 to 10,000, increasing/improving) for GPIB (488.1 and HS488), USB 1.1, Fast Ethernet, VME/VXI, IEEE 1394a, Hi-Speed USB, Gigabit Ethernet, PCI/PXI (32/33), and PCI Express (x4), grouped from Good to Better to Best] When considering the technical merits of alternative buses, bandwidth and latency are two of the most important bus characteristics. Bandwidth measures the rate at which data is sent across the bus, typically in MBytes/s, while latency measures the inherent delay in data transmission across the bus. By analogy, if we compare an instrumentation bus to a road, bandwidth corresponds to the width of the road and the speed of travel, while latency corresponds to the number of stoplights along the road. A bus with high bandwidth can transmit more data in a given period than a bus with low bandwidth. A bus with low (that is, good) latency introduces less delay between the time data is transmitted on one end and processed on the other end. Most users recognize the importance of bandwidth because it affects whether data can be sent as fast as it is acquired and how much onboard memory their instruments need. Latency, while less observable, has a direct impact on applications such as digital multimeter (DMM) measurements, switching, and instrument configuration, because it affects how quickly a command sent from one node on the bus, such as the PC controller, arrives at and is processed by another node, such as the instrument. It is essential to choose a bus with high bandwidth and low latency, such as PCI and PCI Express, to achieve optimal performance in your test and measurement systems.

4 Increasing Bus Bandwidth Opens New Applications
[Chart: application space by Number of Bits (8 to 24) and Sample Rate (1 MS/s to 100 GS/s) – instrument control, data acquisition, multichannel audio, high-resolution digitizers, high-speed imaging, and IF communications – overlaid with the reach of ISA, PCI, and PCI Express] As the bandwidth of PC buses increases, the range of applications that can be solved by PC-based test and measurement systems also increases, in both resolution (number of bits) and sample rate. With the ISA bus, instrument control via GPIB and low-speed, low-resolution data acquisition were possible. PCI made applications such as multichannel audio and high-resolution modular instruments, such as digitizers, possible. With PCI Express, high-speed, high-resolution applications such as imaging and IF communications, which require a great deal of bandwidth, can be solved with PC-based test and measurement systems.

5 PCI Express Overview Evolutionary version of PCI
Uses same software model as PCI, ensuring compatibility Inside every new PC and notebook today Low cost – built into PC chipsets Serial interconnect at 2.5 Gb/s PCI transactions are packetized and then serialized Low-voltage differential signaling, point-to-point, 8b/10b encoded Bandwidth is dedicated PER slot and in BOTH directions Multiple lanes can be grouped together to form links x1 (by 1) has bandwidth of 250 MB/s/direction x16 (by 16) has bandwidth of 4 GB/s/direction Scalable interconnect – chip-to-chip, backplane, or cabled Roadmap for longevity with Gen-2 clocking (5 Gb/s) PCI Express was introduced to improve upon the PCI bus platform. The most notable PCI Express advancement over PCI is its point-to-point bus topology. The shared bus used for PCI is replaced with a shared switch, which provides each device its own direct access to the bus. Unlike PCI, which divides bandwidth among all devices on the bus, PCI Express provides each device with its own dedicated data pipeline. Data is sent serially in packets through pairs of transmit and receive signals called lanes, which provide 250 MB/s of bandwidth per direction, per lane. Multiple lanes can be grouped together into x1 ("by-one"), x2, x4, x8, x12, x16, and x32 lane widths to increase the bandwidth available to the slot. PCI Express dramatically improves data bandwidth compared to PCI, minimizing the need for onboard memory and enabling faster data streaming. For instance, with a x16 slot, users can achieve up to 4 GB/s of dedicated bandwidth, as opposed to the 132 MB/s shared across all devices on the 32-bit, 33 MHz PCI bus.
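As a quick sanity check on the figures above, the sketch below (plain Python; the 2.5 Gb/s Gen-1 line rate and the 20% 8b/10b overhead come from this presentation, everything else is illustrative) reproduces the 250 MB/s per-lane and 4 GB/s x16 numbers.

```python
# Illustrative arithmetic only: derive per-link PCI Express Gen-1 bandwidth
# from the 2.5 Gb/s line rate and the 8b/10b encoding overhead quoted above.
LINE_RATE_GBPS = 2.5          # Gen-1 signaling rate per lane, per direction
ENCODING_EFFICIENCY = 8 / 10  # 8b/10b: 8 data bits carried in every 10 line bits

def lane_bandwidth_mb_per_s() -> float:
    """Usable bandwidth of one lane in one direction, in MB/s."""
    usable_gbps = LINE_RATE_GBPS * ENCODING_EFFICIENCY  # 2.0 Gb/s of data
    return usable_gbps * 1000 / 8                       # bits -> bytes: 250 MB/s

for width in (1, 2, 4, 8, 16):
    print(f"x{width:>2}: {width * lane_bandwidth_mb_per_s():6.0f} MB/s per direction")
# x 1:    250 MB/s ... x16:   4000 MB/s (the 4 GB/s figure quoted for x16 slots)
```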

6 Dedicated Bandwidth per Device
Unlike Ethernet and PCI, which divide bandwidth among all devices on the bus, PCI Express provides each device with its own dedicated data pipeline. Thus, as multiple PCI Express devices are added to a system, the total bus throughput scales linearly.
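A minimal sketch of this scaling claim, assuming the nominal figures used elsewhere in this presentation (132 MB/s for shared 32-bit/33 MHz PCI, 250 MB/s per direction for a dedicated x1 PCI Express link):

```python
# Compare aggregate throughput as devices are added: PCI splits one shared pool,
# while each x1 PCI Express device keeps its own dedicated 250 MB/s pipeline.
PCI_SHARED_MB_S = 132   # one pool shared by every device on the PCI bus
PCIE_X1_MB_S = 250      # dedicated per PCI Express device, per direction

for devices in (1, 2, 4, 8):
    pci_each = PCI_SHARED_MB_S / devices
    pcie_total = PCIE_X1_MB_S * devices
    print(f"{devices} device(s): PCI ~{pci_each:.0f} MB/s each ({PCI_SHARED_MB_S} total), "
          f"PCIe {PCIE_X1_MB_S} MB/s each ({pcie_total} total)")
```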

7 Software Layer PCI software-model compatible
100% OS and driver-level compatible PCI enumeration, configuration, and power management mechanisms Existing operating systems boot with no changes (including BIOS) PCI Express hierarchy mapped using PCI elements Host bridges P2P bridges All enumerated using the regular PCI device configuration space PCI capability pointer for PCI Express-specific extensions Software compatibility is of paramount importance for PCI Express. There are two facets of software compatibility – initialization (or enumeration) and run time. PCI has a robust initialization model where the operating system can discover all of the add-in hardware devices present and then allocate system resources, such as memory, I/O space and interrupts, to create an optimal system environment. The PCI configuration space and the programmability of I/O devices are key concepts that remain unchanged within the PCI Express architecture. In fact, all operating systems will be able to boot without modification on a PCI Express-based machine. The run-time software model used by PCI is a load-store, shared memory model, which is maintained within the PCI Express architecture to enable all existing software to execute unchanged. New software can also take advantage of some of the more advanced features of PCI Express, such as Advanced Switching.

8 Physical Layer
[Diagram: Device A and Device B connected by a x1 lane; each direction carries framed packets consisting of a frame symbol, sequence number, packet (request/data), and CRC, with the clock embedded in the data stream]
Point-to-point, differential interconnect with two endpoints Low-voltage signaling, AC coupled Two unidirectional links, no sideband signals Bit rate: 2.5 Gb/s/pin/direction and beyond Clocking: Embedded clock signaling using 8b/10b encoding Link widths (per direction): x1, x2, x4, x8, x12, x16, x32 Gen-2 (5 Gb/s) speed increase The fundamental PCI Express link consists of two low-voltage, AC-coupled, differential pairs of signals (a transmit pair and a receive pair) as shown above. The physical link signal uses a de-emphasis scheme to reduce intersymbol interference, thus improving data integrity. A data clock is embedded using the 8b/10b encoding scheme to achieve very high data rates (the encoding consumes 20% of the available bandwidth). The initial signaling frequency is 2.5 Gb/s/direction (Generation 1 signaling), and this is expected to increase in line with advances in silicon technology to 10 Gb/s/direction (the practical maximum for signals in copper). The physical layer transports packets between the link layers of two PCI Express agents.
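To make the framing sequence above concrete, here is a deliberately simplified, hypothetical Python sketch that wraps a transaction payload with a sequence number and a CRC. The real data link layer uses its own sequence-number field and a 32-bit LCRC with a specific polynomial, plus 8b/10b framing symbols on the wire; zlib.crc32 below is only a stand-in to show the structure.

```python
# Simplified illustration of link-layer framing: [sequence number | payload | CRC].
# zlib.crc32 is a stand-in for the real LCRC algorithm; nothing here is wire-accurate.
import struct
import zlib

def frame_packet(payload: bytes, sequence_number: int) -> bytes:
    body = struct.pack(">H", sequence_number & 0xFFFF) + payload
    lcrc = zlib.crc32(body) & 0xFFFFFFFF
    return body + struct.pack(">I", lcrc)

framed = frame_packet(b"memory read request", sequence_number=7)
print(framed.hex())
```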

9 PCI Express and PCI Slots on a Motherboard
The desktop motherboard shown above provides connectivity for both PCI Express and PCI slots. This is representative of the motherboards that are on the market and the chipsets that are shown on Intel’s public roadmap. The PC industry expects to continue providing PCI slots for some time, and it is also very easy to build systems with PCI Express-to-PCI bridges that will provide future support for existing PCI devices. There are two main reasons why PCI is expected to coexist with PCI Express for quite some time: (1) there is no driving software advantage to moving from PCI to PCI Express (unlike the Plug-and-Play advantages of the transition from ISA to PCI), and (2) there is an existing installed base of PCI modules that do not require the bandwidth of PCI Express.

10 PCI Express Cards
[Images: NI PCIe-GPIB instrument control card (x1), NI PCIe-1429 image acquisition card (x4), and a PCI Express graphics card (x16) – examples of different PCI Express link widths] Here is an image of several different PCI Express card types (link widths). The different connector sizes, from x1 (read “by one”) through x4 to x16, provide scalable bandwidth per device. Most PCI Express desktop computers provide a x16 slot for PCI Express graphics cards, and this slot provides 4 GB/s of bandwidth. x1 and x4 are the other two common link widths.

11 Up-Plugging and Down-Plugging
Up-plugging: Installing boards in higher-lane slots Allowed by PCI Express Example: Plugging a x4 module in a x8 slot Caveat: Motherboard vendors are only required to support a x1 data rate in this configuration Full-bandwidth support will be vendor specific Example: x16 slots may operate as a x1, even for x4 cards Down-plugging: Installing boards in lower-lane slots Physically prevented by the design of the slots and connectors for the desktop form factor Allowed in PXI Express and CompactPCI Express

12 ExpressCard – PCI Express for Laptops
Both x1 PCI Express and Hi-Speed USB signaling on host 34 mm and 54 mm form factors PXI embedded controllers include ExpressCard/34 slot The ExpressCard standard gives users a very easy way to add hardware to their systems. The primary market for ExpressCard modules is laptops and small PCs needing only limited expansion. The ExpressCard module can be plugged in or removed at almost any time without any tools (unlike traditional add-in boards for desktop computers). ExpressCard technology replaces conventional parallel buses for I/O devices with two high-speed serial interfaces – PCI Express and USB 2.0. Each slot of the ExpressCard host interface must include a x1 PCI Express link operating at the baseline 2.5 Gb/s data rate in each direction. The ExpressCard host interface must also accept the low, full, and high-speed USB data rates as defined by the USB 2.0 Specification. Providing both interfaces is a condition for being an ExpressCard-compliant host platform. An ExpressCard module can use one or both of the interfaces depending on the application requirements.

13 PCI Express Industry Adoption
First PCI Express desktops shipped mid 2004 First ExpressCard laptops shipped early 2005 PCI and PCI Express are side-by-side in all Intel/Dell roadmaps Primary consumer driver is graphics processing (gamers, video editing) PCI Express x16 replacing AGP PCI Express desktops began shipping in mid 2004, and ExpressCard laptops started shipping in early 2005. All of the roadmaps from Intel and Dell currently show PCI and PCI Express slots side-by-side for many years to come – this is good for technology continuity and will help you preserve your existing investment in PCI. The primary consumer driver for PCI Express is the graphics processing capability provided by the higher bandwidth.

14 National Instruments Shipping Products
NI PCIe-GPIB (x1) NI PCIe-6251 M Series (x1) NI PCIe-6259 M Series (x1) NI PCIe-1429 Camera Link (x4) NI PCIe-1430 Camera Link (x4) NI PCIe-8361 MXI-Express (x1) NI PCIe-8362 MXI-Express (x1) NI PCIe-8371 MXI-Express (x4) NI PCIe-8372 MXI-Express (x4) NI ExpressCard-8360 MXI-Express National Instruments offers a variety of PCI Express products for instrument control, data acquisition, image acquisition, and control of PXI and PXI Express systems.

15 PCI Express Advantages
Software compatibility with PCI High bandwidth (up to >4 GB/s) Scalable bandwidth Dedicated bandwidth per slot Low latency Peer-to-peer communication Internal and external operation Long life (20+ years in the mainstream market)

16 PXI Express – Integrating PCI Express into the PXI Backplane
Up to 6 GB/s backplane and 2 GB/s slot bandwidth Backward compatibility Complete software compatibility Hybrid slot definition – install modules with either PCI or PCI Express signaling in a single slot Enhanced synchronization capabilities 100 MHz differential clock, differential triggering The PXI Express backplane integrates PCI Express while still preserving compatibility with current PXI modules, so users benefit from increased bandwidth while maintaining backward compatibility with existing systems. PXI Express specifies hybrid slots to deliver signals for both PCI and PCI Express. With PCI Express electrical lines connecting the system slot controller to the hybrid slots of the backplane, PXI Express provides a high-bandwidth path from the controller to backplane slots. Using an inexpensive PCI Express-to-PCI bridge, PXI Express provides PCI signaling to all PXI and PXI Express slots to ensure compatibility with PXI modules on the backplane. With the ability to support up to a x16 PCI Express link in addition to a x8 link, the system controller slot provides a total of 6 GB/s of bandwidth to the PXI Express backplane, representing more than a 45X improvement in PXI backplane throughput.
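A back-of-the-envelope check of the “more than 45X” claim, using the per-direction Gen-1 figures assumed above (4 GB/s for a x16 link, 2 GB/s for a x8 link, 132 MB/s for shared 32-bit/33 MHz PCI):

```python
# Rough arithmetic only: controller-to-backplane bandwidth, PXI Express vs. PXI.
pci_backplane_mb_s = 132            # 32-bit/33 MHz PCI shared across the chassis
pxie_backplane_mb_s = 4000 + 2000   # x16 link + x8 link from the system slot
print(f"{pxie_backplane_mb_s / pci_backplane_mb_s:.0f}x improvement")  # ~45x
```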

17 PXI and Hybrid Slots Ensure Compatibility
The first PXI Express chassis will provide PXI peripheral slots. Additionally, by taking advantage of the available pins on the high-density PXI backplane, the PXI Express hybrid slots are capable of delivering signals for both PCI and PCI Express. In doing so, these PXI Express hybrid slots provide backward compatibility that is not available with desktop PC card-edge connectors, where a single slot cannot support both PCI and PCI Express signaling. Thus, the hybrid slot allows you to install a PXI module that uses PCI signaling or a future high-performance PXI Express module that uses PCI Express signaling. In addition to providing hardware compatibility through hybrid slots, PXI Express systems also provide software compatibility so that engineers can preserve their investment in existing software. PCI Express software compatibility is guaranteed through the PCI Special Interest Group (PCI-SIG), which includes companies such as Intel and Dell. Because PCI Express uses the same driver and OS model as PCI, the specification guarantees complete software compatibility between PCI-based systems, such as PXI, and PCI Express-based systems, such as PXI Express. As a result, neither vendors nor customers need to change driver or application software for PCI Express-based systems. By maintaining software compatibility between PCI and PCI Express technology, the specification drastically reduces the cost for vendors and integrators of inserting new PCI Express technology into existing test systems. With hardware compatibility provided by the hybrid slot, plus software compatibility, the cost of adding PXI Express technology is minimal.

18 PXI Slots

19 Hybrid Slots The hybrid slot combines the PXI Express peripheral slot and most functionality from the PXI-1 peripheral slot. This gives you the flexibility to put a PXI Express peripheral or a hybrid-slot-compatible PXI peripheral into the slot, depending on your system needs. Taking a look at the connectors, we can see how this slot can accept either module.

20 PXI Express Hybrid Slots
[Diagram: PXI Express hybrid slot and PXI slot connectors – power, trigger bus, star trigger, and CLK10 on the top connector; x8 PCIe (up to 2 GB/s), differential CLK100, and star triggers replacing the reserved pins and local bus; 32-bit/33 MHz PCI (132 MB/s per system) on the bottom connector] The hybrid slot combines the PXI Express peripheral slot and most functionality from the PXI-1 peripheral slot. This gives you the flexibility to put a PXI Express peripheral or a hybrid-slot-compatible PXI peripheral into the slot, depending on your system needs. Looking at the connectors, we can see how this slot accepts either module. The J1 connector from the PXI slot remains intact; it provides the PCI signaling to a PXI peripheral module. The top connector, known as XJ4, provides power, the PXI trigger bus, the PXI star trigger, and CLK10 functionality. In PXI slots, the bottom of the J2 connector contained some reserved pins and support for local bus triggering. Local bus triggering was not widely adopted and is unavailable in PXI Express hybrid slots; if a PXI Express chassis provides PXI slots, those slots retain the local bus for modules that require it. The new PXI Express hybrid slot replaces this portion of the connector with a PXI Express connector that provides up to x8 PCI Express signaling for up to 2 GB/s of bandwidth in each direction. In addition, the connector provides PXI Express timing and triggering signals.

21 Hybrid Slot Flexibility
PXI Express Peripheral Module 32-Bit CompactPCI Module Hybrid Slot Compatible PXI Module A hybrid slot can support a wide range of PXI modules including PXI Express peripherals, CompactPCI peripherals, and hybrid-slot-compatible PXI peripherals.

22 NI PXIe-1062Q Hybrid Chassis
Hybrid slot configuration – PXI slots: 2, 6, 7, 8; PXI or PXI Express (hybrid slots): 3, 5; PXI Express only: 4

23 PXI-8105 Dual-Core Embedded Controller
Industry’s highest-performance embedded controller Up to 100% higher performance for multithreaded apps 2.0 GHz dual-core Intel Core Duo processor T2500 Dual-channel 667 MHz DDR2 RAM Gigabit Ethernet ExpressCard/34 slot 4 Hi-Speed USB ports 60 GB SATA hard drive DVI-I video

24 NI PXI-1033 Chassis with Integrated MXI Express Controller
110 MB/s sustained throughput with MXI-Express remote control Rugged, compact package with slots for five peripheral modules Quiet acoustic noise emissions as low as 38 dBA Kit includes chassis with integrated controller, host card (PCI Express or ExpressCard), and cable

25 PXI Express Video Demo – NIWeek 2006 Keynote
This video demo shows the PXI Express keynote at NIWeek 2006. Click box to start video demo

26 What Is Multicore Processing?
Multicore processors contain two or more cores, or computing engines, in one physical processor Multicore processors simultaneously execute two or more computing tasks Why Multicore?  Because of power and performance issues, continuing to rely solely on increases in processor clock rates to improve performance is not feasible Multicore processing is generating considerable buzz within the PC industry, largely because both Intel and AMD have released initial versions of their own multicore processors. These first multicore processors contain two cores, or computing engines, located in one physical processor – hence the name dual-core processors. Processors with more than two cores also are on the horizon.

27 Multi-core Programming
“One Holy Grail of computer science research has been finding a way to let a compiler take care of parallelization.” – Richard Wirt, Intel Senior Fellow [Slide graphic: C vs. LabVIEW] Main point: You cannot benefit from multicore performance for free, because it requires parallelizing the code – which is very difficult. LabVIEW users can take advantage of it for free, since LabVIEW inherently parallelizes your code. While the multicore approach provides a way to continue Moore’s law, taking advantage of this increased processing power is not as easy as it used to be. Programmers cannot just continue to use their existing code as is and see a performance enhancement. To benefit from multicore, programmers need to write their code to target the different cores. This is a very difficult task, and as a result a lot of computer science research has gone into developing a compiler that can take care of the parallelization for the user. Without this type of compiler, programmers have to handle the parallelization on their own. Two cores are already difficult to program for, and with 80-core processors projected in the future, this will be impossible for a programmer to manage by hand. LabVIEW, however, already inherently handles parallelization. A single loop targets one core, but multiple parallel loops are automatically divided among the different cores. Thus a LabVIEW user can take advantage of multicore technology without having to change their code.
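To illustrate the “manual parallelization” burden described above in a text-based language, here is a hedged Python sketch (not LabVIEW, and not an NI API): the programmer must explicitly split the data and distribute the pieces across worker processes, one per core.

```python
# Manual parallelization: split the work by hand and farm it out to worker
# processes; nothing is parallelized automatically for the programmer.
from multiprocessing import Pool

def sum_of_squares(chunk):
    return sum(x * x for x in chunk)

if __name__ == "__main__":
    data = list(range(1_000_000))
    chunks = [data[i::4] for i in range(4)]            # split the work manually
    with Pool(processes=4) as pool:                    # one worker per core (assumed)
        partial_sums = pool.map(sum_of_squares, chunks)
    print(sum(partial_sums))
```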

28 Multicore vs. Multiprocessor vs. Hyperthreaded
Multiprocessor systems include two or more physical processors Multiprocessor systems duplicate computing resources that are often shared in multicore systems (front-side bus, etc.) Multiprocessor systems are, most often, higher cost than similar multicore systems (single processor, single processor socket, etc.) Hyperthreaded A hyperthreaded processor “acts like” two physical processors Certain resources are duplicated (register set, etc.), but the execution unit is shared Hyperthreaded systems include multiple logical processors The main difference between multicore systems and multiprocessor systems, which have been available for many years, is that multicore systems include a single physical processor that contains two or more cores, while multiprocessor systems include two or more physical processors. Multicore systems also share computing resources that are often duplicated in multiprocessor systems, such as the L2 cache and front-side bus. Multicore systems provide similar performance to multiprocessor systems, but often at a significantly lower cost, because a multicore processor does not cost as much as multiple equivalent individual processors and a motherboard with support for multiple processors (multiple processor sockets) is not required. A hyperthreaded processor “acts like” two physical processors. Certain resources are duplicated with hyperthreaded processors (register set, etc.), but the execution unit is shared, so true simultaneous execution does not take place. Hyperthreaded systems include multiple logical processors.
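One way to see the logical-versus-physical distinction on your own machine is the short check below; it assumes Python 3 with the third-party psutil package installed (os.cpu_count alone reports only logical processors).

```python
# On a hyperthreaded or multicore machine the logical processor count equals or
# exceeds the physical core count; comparing the two shows which you have.
import os
import psutil  # third-party: pip install psutil

print("logical processors:", os.cpu_count())
print("physical cores:    ", psutil.cpu_count(logical=False))
```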

29 Multitasking Multitasking environments (Windows XP, etc.) allow multiple applications to run at the same time With a multicore processor, these multiple applications can simultaneously execute on the processor cores Multicore systems, like multiprocessor systems, can simultaneously execute multiple computing tasks. This is advantageous in multitasking OSs, such as Windows XP, in which you simultaneously run multiple applications. Multitasking refers to the ability of the OS to quickly switch between tasks, giving the appearance of simultaneous execution of those tasks. When running on a multicore system, multitasking OSs can truly execute multiple tasks simultaneously, as opposed to only appearing to do so. For example, on a dual-core system, two applications – such as National Instruments LabVIEW and Microsoft Excel – each can access a separate processor core at the same time, thus improving overall performance for applications such as data logging.

30 Multithreading Multithreaded applications separate their tasks into independent threads A multicore processor can simultaneously execute these threads Multithreading extends the idea of multitasking into applications so you can subdivide specific operations within a single application into individual threads, each of which can run in parallel. Then, the OS can divide processing time not only among different applications, but also among each thread within an application. In a multithreaded NI LabVIEW program, an example application might be divided into three threads – a user interface thread, a data acquisition thread, and an analysis thread. You can assign a priority to each of these, and each operates independently. Thus, in multithreaded applications, multiple tasks can progress in parallel along with other applications that are running on the system. Applications that take advantage of multithreading provide numerous benefits, including more efficient CPU use, better system reliability, and improved performance on multicore systems.
More Efficient CPU Use: In many applications, you make synchronous calls to resources, such as instruments. Such calls often take a long time to complete. In a single-threaded application, a synchronous call effectively blocks, or prevents, any other task within the application from executing until the operation completes. Multithreading prevents this blocking. While the synchronous call runs on one thread, other parts of the program that do not depend on this call run on different threads. Execution of the application progresses instead of stalling until the synchronous call completes. In this way, a multithreaded application maximizes the efficiency of the CPU because it does not idle if any thread of the application is ready to run.
Better System Reliability: By separating an application into different execution threads, you can prevent secondary operations from adversely affecting those that are the most important. The most common example is the effect that the user interface can have on more time-critical operations. Many times, screen updates or responses to user events can decrease the execution speed of an application. By giving the user interface thread a lower priority than other more time-critical operations, you can ensure that the user interface operations do not prevent the CPU from executing more important operations, such as acquiring data or process control.
Improved Performance on Multicore Systems: One of the most compelling benefits of multithreading is that you can harness the full computing power of multicore systems. In a multithreaded application in which several threads are ready to run simultaneously, each core can run a different thread, and the application attains true parallel task execution. This not only enhances the previously discussed benefits of more efficient CPU use and better system reliability, but also directly increases performance.
The Graphical Programming Advantage: By definition, virtual instrumentation helps you take advantage of each innovation in the PC industry. Multicore processing is no different. When developing software that fully takes advantage of the computing power of multicore processors, you need a development tool that inherently provides parallelism. Because of their sequential nature, text-based programming languages, such as C and C++, require you to call functions to programmatically spawn and manage threads.
It is also often difficult to visualize how various sections of code run in parallel because of the sequential, line-by-line syntax of text-based languages. In contrast, graphical programming environments such as LabVIEW can easily represent parallel processes because data flow is inherently parallel. It is considerably easier to visualize the parallel execution of code in a graphical environment, in which two parallel execution paths of graphical code reside side by side. LabVIEW code is also inherently multithreaded. LabVIEW recognizes opportunities for multithreading in programs, and the execution system handles multithreading implementation and communications for you. For example, two independent loops running without any dependencies automatically execute in separate threads. When you execute LabVIEW code on a multicore system, the multiple threads run on the multiple processor cores without any intervention on your part.
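The following is a hypothetical sketch of the acquisition/analysis thread split described above, written with Python’s standard threading and queue modules rather than LabVIEW; the simulated “acquisition” and all names are illustrative, not an NI API.

```python
# Two cooperating threads: one produces blocks of (simulated) acquired data,
# the other consumes and analyzes them as they arrive via a shared queue.
import queue
import random
import threading
import time

samples = queue.Queue()

def acquire(n_blocks):
    """Acquisition thread: produce blocks of simulated data."""
    for _ in range(n_blocks):
        samples.put([random.random() for _ in range(1000)])
        time.sleep(0.01)          # stand-in for waiting on hardware
    samples.put(None)             # sentinel: acquisition finished

def analyze():
    """Analysis thread: reduce each block as it arrives."""
    while (block := samples.get()) is not None:
        print(f"block mean = {sum(block) / len(block):.3f}")

producer = threading.Thread(target=acquire, args=(5,))
consumer = threading.Thread(target=analyze)
producer.start(); consumer.start()
producer.join(); consumer.join()
```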

31 Multithreaded Application Executing on a Dual-Core Processor
Demo: Multithreaded Application Executing on a Dual-Core Processor. This demo highlights the performance improvement that multithreaded applications receive when executed on multicore processors.

32 PXI-8105 LabVIEW Benchmarks
[Chart: LabVIEW benchmark results – the PXI-8105 outperforms the PXI-8196 by up to 25% on single-threaded applications and up to 100% on multithreaded applications] One platform that readily embraces multicore processing is PXI – an open, multivendor, PC-based platform for test, measurement, and control. You can remotely control PXI systems from any dual-core desktop or laptop PC, and National Instruments has released the industry’s first embedded and rack-mount dual-core PXI controllers. The National Instruments PXI-8105 embedded controller employs the 2.0 GHz Intel Core Duo processor T2500, and the National Instruments PXI-8351 rack-mount controller includes the 3.0 GHz Intel Pentium D processor 830. Benchmarks in LabVIEW 8 demonstrate a performance improvement for single-threaded applications of up to 25 percent between the NI PXI-8105 and the single-core National Instruments PXI-8196 (2.0 GHz Intel Pentium M processor 760), which have equivalent processor clock rates. This improvement is a result of numerous enhancements in the processor and chipset between these two generations of Intel architectures. The performance improvement resulting from the fact that the PXI-8105 processor is dual core can be seen in the multithreaded application benchmarks, which demonstrate an improvement of up to 100 percent compared to the NI PXI-8196 embedded controller.

33 The Future of Multicore Processing
Architecture improvements to further reduce power and improve memory bandwidth Multiprocessor systems with multicore processors More processor cores Quad-core processors will be released in 2007

34 Microsoft Windows Vista Overview
Visualization and Search Security Changes .NET 3.0 API Vista x86 versus Vista x64 Vista Availability Vista System Requirements

35 Graphics and Visualization
Demo: Visualization on Vista Show the structure of Start Menu/file structure Show the Windows Sidebar Run the ‘Bouncing cube in an XY graph for NI Days’ and ‘Bessel function – Vibrating membrane for NI Day’ Show ‘Switch between windows’ feature (click on ‘Switch between windows’ and scroll) For the average user, the most noticeable change in Vista will likely be the eye-catching visuals that permeate this release. While this aspect of Vista is not as essential for most engineers and scientists, it makes for a more pleasant and intuitive user experience in many cases. Two of the biggest visual enhancements are Windows Aero and Windows Gadgets. Windows Aero (which stands for authentic, energetic, reflective, and open) is the new visual style in Vista designed to be cleaner, more powerful, and more visually appealing than previous versions of Windows. One example of the Windows Aero design is translucent edges found on all windows and user interfaces running under Vista. Another is the completely revamped Windows-Tab view, which makes it easier to find the open application for which you are looking. Another interesting feature is the Windows Sidebar, a new panel on either the right side (default) or the left side of the Windows desktop. The Sidebar is an engine for Desktop Gadgets, which are mini-applications you can use to control external applications and simultaneously display different information such as the system time and Internet-powered features such as weather feeds. Desktop Gadgets can run on either the Windows desktop or the Windows Sidebar.

36 Vista x86 versus Vista x64
[Diagram: on Vista x86 (32-bit), 32-bit applications execute in user mode and 32-bit services or drivers execute in kernel mode; on Vista x64 (64-bit), 32-bit applications run under WoW emulation in user mode alongside 64-bit applications, while kernel-mode services and drivers must be 64-bit; NI software shown for 2007 and after 2007]
Applications: Remain 32-bit Test for compatibility on Vista x64 Drivers: Kernel-mode components must be ported Digital signing required for kernel binaries Systems Software creating new tools to support this Driver installers must be refactored A single distribution can install either the 32-bit or the 64-bit kernel portions of a driver User-mode components will NOT be ported initially 32-bit WoW emulation mode in Vista x64 might pose challenges in complex cases As Vista x64 gains in popularity, all NI software needs to port to 64-bit (and ship both versions) Advantages for large data sets (DAQ, IMAQ) Vista x64 not likely to be prevalent until 3-4 GB RAM is common LabVIEW plans some work in Jupiter, but not to release until 2008 or later Separate executables for 32-bit and 64-bit Plans may change based on market demand Driver groups need to port their user-mode components Must consider the impact of having both 32-bit and 64-bit installed together
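As a small, hedged aside on the 32-bit/64-bit distinction above, the check below (standard-library Python only) reports whether the current process is 32-bit or 64-bit; it is purely illustrative and unrelated to any NI tooling.

```python
# A 32-bit process has 4-byte pointers, a 64-bit process has 8-byte pointers;
# on x64 Windows, 32-bit processes run under WoW64 emulation as described above.
import platform
import struct

print("process:", struct.calcsize("P") * 8, "bit")
print("machine:", platform.machine())
```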

37 Vista System Requirements
Minimum (XP-like experience): 1 GHz “modern” processor, 512 MB RAM, DirectX 9 video. Premium (“Aero” experience): 1 GB RAM, DirectX 9 video with 128 MB VRAM. These requirements are for the visualization capability of Aero. It may be worth mentioning here that there will be several different Vista packages, and not all will have Aero support. These are the “current” requirements from Microsoft and are subject to change: Intel’s Vista-compatible hardware list:

38 Vista-ready LabVIEW 8.2.1 released on Monday, April 9th
Executive Summary:
· NI is announcing the release of LabVIEW 8.2.1 on Monday, April 9th.
· It includes support for Windows Vista, bug fixes, and LabVIEW SignalExpress.
· LabVIEW 8.2.1 is a fully re-kitted version of LabVIEW that can be installed standalone on a new computer.
· SSP members will receive LabVIEW 8.2.1 shipments on or around April 9th, or can download LabVIEW 8.2.1 from ni.com beginning April 9th.
What is LabVIEW 8.2.1? LabVIEW 8.2.1 is an update to LabVIEW 8.2. In addition to regular bug fixes, this version includes additional features:
· Support for Windows Vista (in addition to Windows XP/2000).
· Updated NI hardware drivers with support for 32-bit and 64-bit Windows Vista.
· LabVIEW SignalExpress (Full and Pro for Windows). Customers should use their LabVIEW serial number to license LabVIEW SignalExpress.
· Scheduled maintenance release with over 100 bug fixes.
When will LabVIEW 8.2.1 be available? LabVIEW 8.2.1 (development environment and related add-ons) will be announced on April 9th, 2007, for all supported languages and operating systems.
Which LabVIEW modules and/or toolkits are being updated with LabVIEW 8.2.1? Nearly every LabVIEW module and a few toolkits are being updated. See the table at the top of this page for more information.
How can customers get LabVIEW 8.2.1?
· Customers with SSP have been shipped their updates free of charge. Customers should start receiving shipments on or around April 9th.
· Customers not on SSP can upgrade using the NI Upgrade Advisor starting April 9th.
· New customers purchasing LabVIEW after April 9th will receive LabVIEW 8.2.1 in the LabVIEW box.

