Windows File Systems and Improving Performance APCUG 1/5/2008 Greg Hayes Raxco Software firstname.lastname@example.org
Introduction Company Background –Raxco founded 1978 –Largest provider of OpenVMS system management software –Spun off Axent Technologies (now part of Symantec) in 1996 –First Windows product released in 1998
Introduction Company Background –Microsoft Gold Certified Partner –Microsoft ISV –Member of Microsoft Development Network –Software used by some of the largest companies and government organizations in the world
Introduction Speaker background –System manager for 10+ years –Raxco customer for 10 years –Started at Raxco in 1996 –Manager, Technical Solutions –Microsoft MVP 2003-2007 - Windows File Systems –Online communities –Speaking/Education
Topic Background Who remembers their first PC? PC hardware performance continues to rise –CPU clock speeds > 2 GHz –2 GB RAM –500 GB+ capacity hard drives Limiting factor is still disk I/O Anything that speeds up access to disk improves performance
Windows File Systems FAT16 FAT32 exFAT (new) NTFS
Windows File Systems FAT16/32 The File Allocation Table is created when you format a FAT drive and contains information about all of the files found on the drive and where they logically reside. There are actually 2 FATs per partition.
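The chained-cluster lookup that the File Allocation Table enables can be sketched as follows. This is a minimal Python illustration: the cluster numbers and the end-of-chain marker are made up for the example, not the real on-disk FAT16 encoding.

```python
# Sketch of how a FAT maps a file to its clusters: each FAT entry holds
# the number of the file's next cluster, with a sentinel marking the end.
EOC = -1  # illustrative end-of-chain marker (real FAT16 uses 0xFFF8-0xFFFF)

# fat[cluster] -> next cluster in that file's chain
fat = {2: 5, 5: 6, 6: EOC,   # file A occupies clusters 2 -> 5 -> 6
       3: 4, 4: EOC}         # file B occupies clusters 3 -> 4

def cluster_chain(first_cluster):
    """Walk the FAT from a file's first cluster to end-of-chain."""
    chain = []
    cluster = first_cluster
    while cluster != EOC:
        chain.append(cluster)
        cluster = fat[cluster]
    return chain

print(cluster_chain(2))  # [2, 5, 6]
```

Note that file A is fragmented (clusters 2, 5, 6 are not adjacent): the FAT still finds every piece, but each gap costs an extra head movement on a real drive.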
Windows File Systems – FAT16 Benefits: –Faster performance on smaller partitions - less than 4GB –Best Performance for Pagefile Drawbacks: –Little or no security –High rate of file system corruption –Limits on partition size (FAT16 – 4GB)
Windows File Systems – FAT16 FAT16 –4GB FAT16 partition has a cluster size of 64k. –Wastes space as even the smallest files take up a full cluster. –Maximum file size is 2GB –Files per volume 65,536
Windows File Systems – FAT32 FAT32 –Maximum file size is 4GB –Allows smaller cluster sizes to be used – resulting in less space “waste” –Maximum partition size is 32GB (as formatted by XP/Vista/2K3) –Up to 2TB can be written/read (XP/Vista/2K3) –Files per volume ~4,177,920
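The "wasted space" trade-off between FAT16's 64k clusters and FAT32's smaller ones can be quantified with a quick sketch; the file sizes below are hypothetical.

```python
# Slack-space sketch: every file consumes whole clusters, so the unused
# tail of each file's last cluster is wasted. Compare FAT16's 64 KB
# clusters on a 4 GB partition with FAT32-style 4 KB clusters.
def slack_bytes(file_sizes, cluster_size):
    """Bytes allocated beyond the data actually stored."""
    total = 0
    for size in file_sizes:
        clusters = -(-size // cluster_size)  # ceiling division
        total += clusters * cluster_size - size
    return total

files = [1_000, 15_000, 700_000]       # hypothetical file sizes in bytes
print(slack_bytes(files, 64 * 1024))   # FAT16, 64k clusters
print(slack_bytes(files, 4 * 1024))    # FAT32, 4k clusters
```

Even with just three files, the 64k-cluster layout wastes tens of kilobytes more than the 4k one; a partition full of small files multiplies that overhead.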
Windows File Systems – exFAT exFAT –“Extended FAT” –Designed for flash media – exchange media with PCs. –Free space bitmap for faster allocation –Maximum theoretical file size is 16EB –More than 1000 files per folder
Windows File Systems - NTFS Journaling/Transaction Based File System “Self Describing” File System Metadata - data that describes data
Windows File Systems - NTFS Benefits: –Better security –Better performance with large partitions – greater than 4GB –Better resistance to file system corruption Drawbacks: –Can’t natively boot to MSDOS and get access to partition
Windows File Systems - NTFS NTFS –Maximum file size is approx 16TB –Maximum partition size is 2TB – 256TB (depends on cluster size used) –Files per volume 4,294,967,295
Windows File Systems - NTFS NTFS –Disk Quotas (Win2K and newer) –File/Folder compression built into the file system –Encryption built into the file system (Win2K and newer) –Volume Shadow Copy (VSS – WinXP and newer) –Transactional NTFS (TxF – Vista and newer)
NTFS Metadata $MFT - Master File Table (0) –The $MFT contains at least 1 record for each file that exists on the partition. $MFTMirr – Master File Table Mirror (1) –Duplicate of the first 4 records of the $MFT –Used for redundancy/recoverability
NTFS Metadata $LogFile – NTFS Transaction Log (2) –Updates to the file system first get posted to the transaction log –Gives NTFS its “self-healing” abilities $Volume – Volume Information (3) –Volume Label/Version
NTFS Metadata $Bitmap – Cluster Bitmap (6) –Indicates whether a cluster is used or empty –Uses this to quickly find free space $Boot – Partition Boot Sector (7) –Location of the partition boot sector –Bootstrap loader code if bootable drive
NTFS Metadata $BadClus – Bad Cluster File (8) –Clusters the file system has reported as bad –Updated by CHKDSK $Secure – Security Descriptors (9) $Upcase – Upcase Table (10) –Translation of lowercase characters to their equivalent Unicode upper case characters
Anatomy of a $MFT Record Fixed-size records – 1K Attribute Records –$FILE_NAME – 8.3 file name, long file name and parent directory –$DATA – Extent information for the file, including the Logical Cluster Number (LCN) and Run Length of each piece; the $DATA record also contains the file size
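The extent information in a $DATA attribute can be modeled as a list of (LCN, run length) pairs. The sketch below uses illustrative values, not the real on-disk mapping-pairs encoding; it shows why a fragmented file needs more runs (and so more record space and more I/O) than a contiguous one of the same size.

```python
# Sketch of $DATA extent records: each run is (starting LCN, run length
# in clusters). A contiguous file has one run; a fragmented one, several.
def file_clusters(runs):
    """Expand (LCN, length) runs into the file's full cluster list."""
    clusters = []
    for lcn, length in runs:
        clusters.extend(range(lcn, lcn + length))
    return clusters

contiguous = [(1000, 8)]             # one run: 8 clusters at LCN 1000
fragmented = [(1000, 3), (5000, 5)]  # two runs: same size, two pieces

assert len(file_clusters(contiguous)) == len(file_clusters(fragmented))
print("runs needed:", len(contiguous), "vs", len(fragmented))
```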
MFT Reserved Zone Created when partition is formatted $MFT will “grow into” it as needed File system will keep it free if possible Defragmenters cannot use free space inside the MFT Reserved Zone (NT4/Win2K)
MFT Reserved Zone NT4 –Fixed Size –12.5% by default –Can be increased up to 50% via registry key
MFT Reserved Zone Windows 2003/2000/XP/Vista/2008 –Dynamically created every time partition is mounted –First record of $MFT to first non-free cluster - up to a default max of 12.5%
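The 2000/XP-style sizing rule above amounts to a min() of two quantities; the sketch below just restates that rule in code, with example volume sizes.

```python
# MFT Reserved Zone sketch (2000/XP/2003/Vista behavior per the slide):
# rebuilt at mount time, spanning from the first $MFT record to the first
# in-use cluster, capped at a default 12.5% of the volume. Values are
# examples, not taken from a real volume.
def reserved_zone_clusters(volume_clusters, gap_to_first_used, cap=0.125):
    """Zone = contiguous gap after $MFT, capped at 12.5% of the volume."""
    return min(gap_to_first_used, int(volume_clusters * cap))

print(reserved_zone_clusters(1_000_000, 200_000))  # hits the 12.5% cap
print(reserved_zone_clusters(1_000_000, 50_000))   # gap smaller than cap
```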
Logical vs Physical Clusters Logical Clusters –File system level –Every partition starts at logical cluster 0 –No idea of hard drive technology in use IDE SCSI RAIDx # platters or read/write heads
Logical vs Physical Clusters Physical Clusters –Hard drive level –Hard drive controller translates logical-to-physical and positions heads
Cluster Size and Performance Smaller clusters –less wasted space –Worse performance – especially large files Larger clusters –more wasted space –Better performance – especially large files
Conversion from FATx to NTFS NT4/Win2K –Results in 512byte clusters –Not “the best” for performance - especially with video/image applications WinXP/Vista –Will try to convert using 4k clusters. –Best for general file system performance.
OEM Vendors and Cluster Size IBM/Dell/HP –Provide software to install/configure Windows Server (HP SmartStart, Dell OpenManage, IBM ServerGuide) –Formats the system drive with 512-byte clusters –Absolute worst case for system drive performance –Only way to convert is a 3rd-party tool, with the server unavailable for an extended period
Cluster Size and Performance Larger cluster sizes result in better performance when reading large files (video, audio, images) The larger the cluster size, the more space can be “wasted” if you store small files on the partition With cluster sizes greater than 4k under NT4 and Win2K, you will NOT be able to defragment With cluster sizes greater than 4k, you will not be able to use NTFS compression
Biggest Cause of Poor File System Performance Fragmentation!
Fragmentation Causes What causes fragmentation? –Occurs when files are created, extended or deleted –Happens regardless of how much free space is available (After XP/SP2 installation – 944 files/2943 fragments) –More than one Logical I/O request has to be made to the hard drive controller to access a file
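The slide's own XP/SP2 data point makes the I/O cost concrete: each fragment needs its own logical I/O request, so the arithmetic below shows how many extra requests that fresh install implies. (The per-fragment-equals-one-request assumption is the simplification the slide itself uses.)

```python
# The slide's data point: a fresh XP/SP2 install left 944 files in 2943
# fragments. With one logical I/O request per fragment, reading every
# file once costs roughly 3x the requests contiguous files would need.
files, fragments = 944, 2943
extra_ios = fragments - files          # requests beyond the minimum
print(round(fragments / files, 2))     # average fragments (I/Os) per file
print(extra_ios)
```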
Fragmentation Impacts What does fragmentation do to my system? –Slows down access to files –Extra CPU/Memory/Disk resource usage –Some applications may not run –Slow system boot/shutdown –Audio/Video record/playback drops frames or “skips”
Measuring Impact of Fragmentation Measuring the performance loss in reading a fragmented file
What Can I Do About Fragmentation? You can’t stop fragmentation from happening (you can only slow it down)! What you CAN do is to defragment
Defragmenting - Results What does defragmenting do? –Locates logical pieces of a file and brings them together Faster to access file and takes less resources Improves read performance –Consolidates free space into larger pieces New files get created in 1 piece Improves write performance
Measuring Impact of Fragmentation Measuring the performance difference in reading a contiguous file
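A rough model shows where the measured difference comes from: every fragment after the first adds a seek plus rotational delay before the transfer can continue. The timing constants below are ballpark figures for a desktop drive of the era, assumed for illustration, not measurements from the talk.

```python
# Read-time model: transfer time plus one (seek + rotation) per fragment.
SEEK_MS = 9.0            # assumed average seek time
ROTATE_MS = 4.2          # assumed average rotational latency (7200 rpm)
TRANSFER_MS_PER_MB = 20.0  # assumed ~50 MB/s sustained transfer rate

def read_time_ms(size_mb, fragments):
    """Estimated time to read a file of size_mb split into `fragments`."""
    return size_mb * TRANSFER_MS_PER_MB + fragments * (SEEK_MS + ROTATE_MS)

print(read_time_ms(100, 1))    # contiguous 100 MB file
print(read_time_ms(100, 200))  # same file in 200 fragments
```

Under these assumptions the fragmented file takes more than twice as long to read, which is why defragmenting shows up directly in read benchmarks.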
Defragmenting - Issues to Consider Safety, Safety, Safety! –No loss of data –No corruption of data
Defragmenting - Issues to Consider Free Space –How much is enough? –Where is free space located? Inside MFT Reserved Zone Outside of MFT Reserved Zone –Consolidation of free space
Safety - Microsoft’s Defrag APIs Provided as part of the operating system Defragmenters do not actually move files Integrated with caching system and Memory Manager Performs all I/O synchronization Allows even files in use to be defragmented
Safety - Microsoft’s Defrag APIs Restrictions –Move Granularity –Free Space in MFT Reserved Zone –NTFS Cluster Size –Special OS files Pagefile Hibernate file NTFS Metadata
Built In Defragmenter Windows 2000/XP Very Basic –No scheduling (Win2K); hard to schedule (XP) –Multiple passes –Inability to defragment certain files –High free space requirement –Lack of effective free space consolidation –Resource intensive –Not designed for large drives
Built In Defragmenter Vista Very Basic –Limited scheduling options –Multiple passes –Inability to defragment certain files –High free space requirement –Incomplete free space consolidation –Not designed for large drives –Not easy to get fragmentation information
Advanced Defrag Technology Complete Defrag of All Files Free Space Consolidation Single Pass Defragmentation File Placement Strategy Free Space Requirement Minimal Resource Usage Large Drive Support Easy to Schedule and Manage OS Certification Robust/Easy Reporting
Defrag Completeness Data Files Directories System Files –Pagefile –Hibernate File –NTFS metadata
Free Space Consolidation Allows new files to be created contiguously –Maintains file system performance longer –Requires less frequent defrag passes
Free Space Consolidation Defragmenting files improves read performance Free space consolidation improves write performance
Single Pass Defragmentation As my dad always told me… –If you are going to do a job – do it right the first time After defrag has completed, you don’t have to “wonder” if you need to run again Fragmentation issue is solved – done!
File Placement Strategy When you play chess, do you have a strategy or do you just start moving pieces around? A good file placement strategy accomplishes several things: –Slows down re-fragmentation –Speeds up future defrag passes –Works with the operating system – not against
Free Space Requirement Built in defragmenter requires 10-20% usable free space (outside MFT Reserved Zone – NT4/Win2K) Most commercial defragmenters require about the same – 10-20% Free space consolidation allows you to only actually need about 3%
Resource Usage Run in the background Low Memory Usage Low CPU Usage
OS Certification Certified for Windows Independently tested to ensure that –Application “plays well” with the operating system –Works with Windows power management –Drivers written to MS standards (if applicable)
Windows Prefetch What is Prefetching? –Windows monitors system boots and application launches –Uses information gathered to perform driver load optimization (speeds up boot process) –Uses information gathered to speed up application launches
Windows Prefetch Windows stores this information in the \Windows\Prefetch folder –Layout.ini –Xxxx.pf Every 3 days, Windows will automatically perform a “partial” defrag of the files indicated in layout.ini
Windows Prefetch This process depends on a large enough piece of contiguous free space being available. If there isn’t, then it doesn’t happen!
Windows Prefetch Defragmenters - working WITH the operating system NOT against. Microsoft says: –Either respect where Windows places these files OR –Manage these files yourself
“Tweaking” - BootVis BootVis –One of the most misunderstood “tweaking” tools around –Available from Microsoft –Used to profile system boots Optimize driver loading for faster boots Calls built-in defragmenter to “optimize” boot files Also can be used to identify drivers that load slowly
“Tweaking” - BootVis Windows XP does this boot optimization over a period of boots (after 3 boots have been performed) BootVis does it “now” End result is the same
“Tweaking” – Disable Last Access HKLM\System\CurrentControlSet\Control\FileSystem\NtfsDisableLastAccessUpdate = 1 (or: fsutil behavior set disablelastaccess 1) Improves NTFS performance by 2% Last Access updating turned off by default in future versions of NTFS (2008 Server)
“Tweaking” – 8.3 Name Creation Win2k3/XP/Vista: fsutil.exe behavior set disable8dot3 1 Win2k: HKLM\System\CurrentControlSet\Control\FileSystem\NtfsDisable8dot3NameCreation = 1 Provides faster directory enumeration Legacy apps may no longer install/work.
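Why does 8.3 name creation slow directory enumeration? Every long-named file gets a second, generated short name that NTFS must also store and check for collisions. The sketch below shows the general truncate-and-tilde scheme in simplified form; real NTFS generation has more rules (hashed names after the first few collisions, among others), so treat this as an illustration only.

```python
# Simplified sketch of 8.3 short-name generation: uppercase, strip
# non-alphanumeric characters, truncate the base to 6 chars and the
# extension to 3, then append ~1, ~2, ... to avoid collisions.
def short_name(long_name, taken=()):
    """Generate an 8.3-style name not already in `taken` (simplified)."""
    base, _, ext = long_name.upper().rpartition(".")
    if not base:                       # no dot in the name
        base, ext = ext, ""
    base = "".join(c for c in base if c.isalnum())[:6]
    ext = "".join(c for c in ext if c.isalnum())[:3]
    for n in range(1, 10):
        candidate = f"{base}~{n}" + (f".{ext}" if ext else "")
        if candidate not in taken:
            return candidate

print(short_name("LongFileName.html"))  # LONGFI~1.HTM
```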
“Tweaking” – Purging Prefetch The belief is that a large number of entries in layout.ini slows down the system. In reality, Windows manages the Prefetch folder effectively on its own. The entries in layout.ini actually speed up system boots and application launches.
“Tweaking” – Disabling Vista’s SuperFetch The belief is that not having a large amount of free memory available is a bad thing. SuperFetch caches frequently accessed programs/data using available RAM – unused RAM is a wasted resource. SuperFetch survives system restarts; XP’s caching doesn’t.
Volume Shadow Copy (VSS) VSS and defragmentation –VSS tracks changes in 16k blocks, so the cluster size should be a multiple of 16k to keep defrag file moves from triggering copy-on-write Default cluster size is 4k because NTFS compression hasn’t been modified to support cluster sizes greater than 4k BitLocker (Vista) is also restricted to a 4k cluster size
DiskPar/DiskPart DiskPar – Win2k DiskPart – Win2k3 Want to avoid crossing track/stripe boundaries Align on 64k for best MS SQL performance Win2k8 – default is 64k when creating volumes Contact your storage vendor – e.g. EMC recommends 64k
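The alignment fix is just rounding the partition's starting offset up to the next 64k boundary, which is what diskpar/diskpart do. The sketch below uses the classic misaligned case (old tools started partitions at sector 63) as an example.

```python
# Partition-alignment sketch: round a starting offset up to the next
# 64 KB boundary so clusters don't straddle track/stripe boundaries.
ALIGN = 64 * 1024

def aligned_offset(offset_bytes, align=ALIGN):
    """Round a starting offset up to the next alignment boundary."""
    return -(-offset_bytes // align) * align  # ceiling to a multiple

# Classic misaligned case: partition starting at sector 63 (512B sectors).
print(aligned_offset(63 * 512))  # 32256 rounds up to 65536
print(aligned_offset(65536))     # already aligned: unchanged
```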
Disk Technology SATA – worse in high-write environments, however cheap RAID 1+0 – best for high read/write environments AND redundancy, however expensive More onboard cache – better performance
Performance Measuring Tools Windows Performance Monitor –Split I/O Count (fragmentation) –Disk Queue Length (<= 2 per spindle) hIOMon – www.hiomon.com –Device AND file-based metrics SQLIO – Microsoft –Stress-tests the I/O subsystem
Cluster Size Recommendations * You can’t use NTFS compression if the cluster size is greater than 4k
Conclusion To improve file system/drive performance –Use appropriate disk technology –Use the most appropriate file system –Use the most appropriate cluster size –Align on cluster boundaries –Make sure free space is consolidated –When you defragment, make sure that it is being done effectively.