이동윤 Consulting Services Microsoft Korea SQL Server 2000 OLTP best practices.


1

2 이동윤 Consulting Services Microsoft Korea SQL Server 2000 OLTP best practices

3 Target audience and technical scope: OLTP database design, SQL Server 2000 performance tuning

4 Knowledge needed to follow this topic — Level 300: Windows Performance Monitor, SQL Profiler, SQL Server query plans

5 Agenda: Overview; Design, Techniques and Best Practices; Performance Issues; Summary

6 Impact on Performance — The factors with the greatest impact on performance are application & database design and T-SQL; tuning by itself delivers less than is commonly expected. Performance monitoring compensates for what is missing in application design, development, and operations. (Chart: relative impact of App Design, DB Design, SQL, Hardware tuning, Perf Monitoring, workload changes; figures shown include 25%, 20%, 10%, 20%.)

7 OLTP Overview — A large volume of short, repetitive transactions: handles small amounts of data, high concurrency, predictable access patterns. Areas where OLTP characteristics must be considered: transaction design, database design, performance goals.

8 OLTP Goals — OLTP performance goals: fast transactions (set-based processing rather than cursors; minimal data access and locking via indexes); maximum use of CPU resources (plan reuse; suppress re-compilation); maximum use of I/O resources (avoid unnecessary joins; monitor with ::fn_virtualfilestats; transaction log (WRITELOG waits), data (IO_COMPLETION waits)).

9 Agenda: Overview; Design, Techniques and Best Practices; Performance Issues; Summary

10 Transactional Design — Applies to selects, inserts, updates, and deletes alike. Consistency and concurrency: locks are required for data consistency; unreasonable locks cause blocking; keeping transactions short is the key to concurrency.
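A minimal sketch of the short-transaction pattern this slide describes, using hypothetical dbo.Customers / dbo.Orders / dbo.OrderDetails tables (Orders assumed to have an identity key): do lookups and validation before the transaction, keep only the writes inside it, and commit immediately.
declare @CustomerId int, @OrderId int
set @CustomerId = 42
-- Validation and lookups happen OUTSIDE the transaction, so no locks are held yet
if not exists (select * from dbo.Customers where CustomerId = @CustomerId)
begin
    raiserror('Unknown customer', 16, 1)
    return
end
begin tran
    insert dbo.Orders (CustomerId, OrderDate) values (@CustomerId, getdate())
    select @OrderId = scope_identity()
    insert dbo.OrderDetails (OrderId, ProductId, Qty) values (@OrderId, 7, 1)
commit tran   -- commit immediately: no user interaction or extra reads while locks are held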

11 Database Design 1: Normalization — A database design approach: the process of applying increasingly restrictive design rules; normalizing to third normal form (3NF) is typical; removes redundant data and yields multiple related tables with small rows. Benefits: smaller rows mean more rows fit on a data page, so searching, sorting, and creating indexes are faster; good schema flexibility makes schema changes and maintenance easier. Drawback: more joins.

12 Database Design 2: Denormalization — A selective relaxation of the normalization rules: reflects the characteristics of the individual application; apply only as an exception when there is a performance issue. Benefits: fewer joins; fewer foreign keys per table; fewer indexes, saving disk space; derived values can be pre-computed and stored when the data is written so no computation is needed at select time (weigh the update cost against the select cost); the number of tables may shrink. Drawbacks: reads get faster but updates can get slower; often tied to a specific application, so schema flexibility suffers when the application changes; table row size usually grows; in some cases select queries become simpler while updates become more complex because data consistency must be guaranteed.

13 Database Design 3: Choosing between normalization and denormalization — A heavily normalized data model makes multi-table joins unavoidable; joins often require additional work tables or tempdb usage; high-concurrency performance objective: reduce joins. A question to ask: if frequently requested information always requires joining six tables, suspect over-normalization. Tradeoffs: flexibility vs. performance. Denormalization issues: tradeoff of update cost vs. select cost — few updates (I/U/D) and many selects favor denormalization; many updates (I/U/D) and few selects favor normalization. Common denormalization targets: 1. allowing redundant data or adding computed columns, 2. adding indexed views, 3. merging or consolidating tables.
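As an illustration of target 2 above (an indexed view), a hedged sketch with hypothetical Orders/OrderDetails tables: the aggregate join is materialized once, so selects skip the join at the cost of extra maintenance on every insert/update/delete. SQL Server 2000 indexed views also require specific SET options (e.g. ANSI_NULLS and QUOTED_IDENTIFIER ON) and non-nullable SUM arguments.
create view dbo.vOrderTotals
with schemabinding
as
select o.OrderId,
       sum(d.Qty * d.UnitPrice) as OrderTotal,   -- assumes Qty and UnitPrice are declared NOT NULL
       count_big(*)             as DetailRows    -- COUNT_BIG(*) is required when the view has GROUP BY
from dbo.Orders o
join dbo.OrderDetails d on d.OrderId = o.OrderId
group by o.OrderId
go
-- The unique clustered index is what actually materializes (denormalizes) the result
create unique clustered index IX_vOrderTotals on dbo.vOrderTotals (OrderId)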

14 Index Design — Index issues: an alternative to table scans; OLTP systems generally carry fewer indexes than DSS/reporting systems; trade-off of index usage vs. maintenance costs. OLTP indexes must be designed around an accurate prediction of the workload and its characteristics.

15 Indexes (diagram): Clustered vs. Nonclustered

16 Index Types — Clustered: the table's data is sorted in the order of the index columns; the bottom level of the index is the actual data; the index row length includes the data row; non-dense — page linkage information is kept for the first row of each data page. Nonclustered: the sort order exists only in the index; the bottom (leaf) level stores the location of the data row — the clustered index key if a clustered index exists, otherwise the RID; the row length includes the nonclustered index columns plus the row locator; dense — every row is present at the leaf level; covering index — when a query can be answered from the index columns alone, without touching the data pages, it is called a "covered query".
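A small sketch of the covering-index idea, with a hypothetical dbo.Orders table: if the nonclustered index contains every column the query touches, the data pages are never read (SQL Server 2000 has no INCLUDE columns, so the key columns themselves must cover the query).
-- Clustered index (and PK) on OrderId; a nonclustered index that will cover the query below
create table dbo.Orders (
    OrderId    int      not null primary key clustered,
    CustomerId int      not null,
    OrderDate  datetime not null,
    Amount     money    not null
)
create nonclustered index IX_Orders_Cust on dbo.Orders (CustomerId, OrderDate)
-- "Covered query": CustomerId and OrderDate both live in IX_Orders_Cust,
-- so the plan is an index seek with no bookmark lookup to the data pages
select CustomerId, OrderDate
from dbo.Orders
where CustomerId = 42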

17 Agenda: Overview; Design, Techniques and Best Practices; Performance Issues; Summary

18 Indexes and Performance — I/O perspective: an alternative to table scans. Maintenance cost vs. benefit: maintained on delete, insert, and update; clustered indexes — page splits; nonclustered indexes — heap tables and forwarded records. Randomization: indexes can "randomize" insert/update/delete activity (examples: Name or PhoneNum) — this avoids hot spots (blocking) but can cause page splits. Ascending keys: cause hot spots (e.g. blocking); row-level locking can prevent the blocking but may reduce performance.

19 Index Recommendations — When a table has nonclustered (N/C) indexes, avoid a composite clustered index key: every nonclustered index carries the clustered index key (primary key) to locate data rows. Eliminate high-volume bookmark lookups (N/C). Clustered index benefits: eliminates large-volume lookups (avoids bookmark lookups); partial (range) scans; a scan accesses the entire data row. Nonclustered index benefits: query covering; can also serve simply as a sort order.

20 What drags performance down? Queuing — multiple types of queues. Bad configuration — hardware & software. Bad queries & design — badly written, poorly designed. Poor indexing — not relevant to the workload, or missing altogether. Inappropriate optimizer plans.

21 Shared Resource Limit Scalability Database shared resources –Database performance is limited by maximum Transaction Log throughput, only ONE possible transaction log per database! –Can be resolved by partitioning over multiple databases Server shared resources –TEMPDB –Memory Only data cache can live in AWE, rest restricted to lower 2-3GB of address space –Can be resolved by partitioning over multiple instances Machine/node shared resources –CPU and networking –Can be resolved by partitioning over multiple servers

22 Rules of Scalability — Database scalability is limited by the maximum throughput of the transaction log: disk I/O. Instance scalability is limited by shared "process"-level resources: memory. Server scalability is limited by shared "server"/"machine"-level resources: CPU (incl. L1 & L2 cache), network bandwidth.

23 Disk I/O Determine I/O pattern –Write Transaction Log (~100% sequential) Lazy Writer (random) –Read Random Sequential Establish disk I/O baseline or SLA outside SQL Server, using: –SQLIOStress or IOMeter (Intel, public domain) Special cases: –Transaction log –tempdb contention (see KB 328551)

24 I/O Bottlenecks 1 I/O bottlenecks are typically easy to find Be very careful with the transaction log Beware of write cost on RAID5: –In RAID 5 each write has to logically read old data + old parity (to compute parity) and write new data and new parity –Each RAID5 write = 2 READS + 2 WRITES ! However: Disk guys work real hard to optimize this –Recent bulk load tests showed >50% degradation comparing RAID 0+1 vs. RAID 5

25 I/O Bottlenecks 2 — Size the disk subsystem for the I/O throughput required, not the size of the DB. E.g. 1 TB data / 72 GB per drive = 14 drives: will 14 drives provide sufficient I/O throughput? You may need more, smaller drives. Random (OLTP) vs. sequential (reporting) IO/sec. Cache on the controller — tuned for % read or % write. Consider all workloads: OLTP (typically random I/Os); batch (may be random or sequential depending on the type of job).

26 Optimizing for the Log — Profile the log disk: what is the maximum number of writes/second the disk can sustain? Dedicate the log disk to logging only — this keeps the disk heads writing sequentially, minimizing seeks. Be careful with an unprotected write-back cache: if power is lost, the entire database can be corrupted, not just the last few transactions.

27 ::fn_virtualfilestats — ::fn_virtualfilestats (dbid, [fileId | -1]) –Provides breakdown of physical I/O by file, SQL Server I/O only! –Look for IoStallMS, average stall = IoStallMS / (reads + writes) Compare SQL I/O to Performance Monitor –PhysicalDisk: Disk Reads/sec and Disk Writes/sec –PhysicalDisk: Average Disk sec/Read and Average Disk sec/Write –PhysicalDisk: Disk Read Bytes/sec and Disk Write Bytes/sec
-- DbId = -1 == all databases
-- FileId = -1 == all files
declare @dbid int
select @dbid = db_id('pubs')
select DbId, FileId, TimeStamp, NumberReads, NumberWrites, BytesRead, BytesWritten, IoStallMS,
       Avg_Stall = IoStallMS / (NumberReads + NumberWrites)
from ::fn_virtualfilestats(@dbid, -1)

28 fn_virtualfilestats — extended example across all databases and files
declare @TotalIO bigint, @TotalBytes bigint, @TotalStall bigint
select @TotalIO    = sum(NumberReads + NumberWrites),
       @TotalBytes = sum(BytesRead + BytesWritten),
       @TotalStall = sum(IoStallMS)
from ::fn_virtualfilestats(-1, -1)
select [DbName] = db_name([DbId]), [DbId], [FileId],
       [NumberReads], [NumberWrites], [BytesRead], [BytesWritten], [IoStallMS],
       [TotalIO]       = (NumberReads + NumberWrites),
       [TotalBytes]    = (BytesRead + BytesWritten),
       [AvgStallPerIO] = [IoStallMS] / ([NumberReads] + [NumberWrites] + 1),
       [AvgBytesPerIO] = (BytesRead + BytesWritten) / (NumberReads + NumberWrites),
       [%IO]    = cast(100 * (NumberReads + NumberWrites) / @TotalIO as numeric(20, 2)),
       [%Bytes] = cast(100 * (BytesRead + BytesWritten) / @TotalBytes as numeric(20, 2)),
       [%Stall] = cast(100 * IoStallMS / @TotalStall as numeric(20, 2))
from ::fn_virtualfilestats(-1, -1)

29 dbcc showfilestats Syntax: –dbcc showfilestats( ) Shows file information of files in current database Returns –Fileid – ID of file (see sysfiles) –FileGroup – ID of filegroup file belongs to –TotalExtents – Total number of extents allocated on file –UsedExtents – Number of extents in use –Name – Logical name of file –FileName – Physical name and full path of file
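A quick usage sketch (pubs used only as a placeholder database): run the command in the database of interest and convert extents to MB, since 1 extent = 8 pages = 64 KB.
use pubs        -- any database of interest
go
dbcc showfilestats
-- allocated MB = TotalExtents * 64 / 1024.0, used MB = UsedExtents * 64 / 1024.0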

30 Monitoring I/O Performance (PerfMon counters)
Disk Reads/sec & Disk Writes/sec — Number of I/Os being issued against a particular disk. This number varies with the size of the I/Os issued. Practical limit of 100-140/sec per spindle; consult your hardware vendor for a more accurate estimate.
Average Disk sec/Read & Average Disk sec/Write — Measure of disk latency. Lower values are better, but this varies with the size of the I/Os issued and the workload characteristics. Numbers also vary across storage configurations (SAN cache size/utilization can impact this greatly). Values higher than normal often indicate sustained disk queues. On well-tuned OLTP systems deployed on high-performance SANs, ideal values are < 2 ms for log and 4-10 ms for data; DSS (decision support system) workloads may show higher latencies. Sustained values > 100 ms can indicate I/O problems, though this depends on the workload characteristics and hardware used — consider it in combination with what is normal for your particular system.
Average Disk Bytes/Read & Average Disk Bytes/Write — Size of the I/Os being issued; impacts disk latency, and large I/O sizes may result in slightly higher latency. When used with SQLIO this value should correspond to the I/O size issued during the test. When monitoring SQL Server, this tells you the average size of the I/Os SQL Server is issuing to fill query requests.
Average Disk Queue Length — The general rule of thumb is <= 2 per spindle, but this is hard to measure due to the virtualization of storage in most SAN environments. Look for higher-than-average disk queue length together with higher-than-average disk latencies; that combination can indicate the SAN's cache is being over-utilized.
Disk Read Bytes/sec & Disk Write Bytes/sec — Measure of the total throughput for a particular disk or LUN.

31 Blocking — Blocking between sessions arises from a combination of locking and waiting on resources. Dump information on the blocker and the blocked victims: look at sysprocesses and syslockinfo; sp_blockinfo lists the locking chain. Lock waits: dbcc traceon (3605, 3604, -1); dbcc lock(StallReportThreshold, 200) — dumps all lock waits longer than 200 ms; a wait is only reported when the lock is granted or fails, so this does not help detect a blocking situation in progress, it can only reveal the cause of blocking afterwards.
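A minimal sketch of the sysprocesses approach mentioned above (sp_blockinfo is a helper script shipped with the deck, not a built-in procedure): list the sessions currently blocked, then the sessions at the head of each chain.
-- Sessions currently blocked, and what they are waiting on
select spid, blocked as blocking_spid, waittime, lastwaittype, waitresource, cmd, loginame
from master..sysprocesses
where blocked <> 0
-- Head blockers: sessions that block others but are not blocked themselves
select spid, status, cmd, loginame, open_tran, hostname, program_name
from master..sysprocesses p
where blocked = 0
  and exists (select * from master..sysprocesses b where b.blocked = p.spid)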

32 Query for long waits — To display: locks held > 10 seconds; data page I/O waits > 1 second; blocked network I/O > 10 seconds; log I/O > .5 seconds
select spid, waittime / 1000.0 as [Wait Time (s)], lastwaittype, waitresource
from master..sysprocesses
where spid > 50
  and (((waittype between 1 and 32) and waittime > 10000)
    or ((waittype between 1056 and 1061) and waittime > 1000)
    or (waittype = 2048 and waittime > 10000)
    or (waittype = 129 and waittime > 500))

33 Finding long transactions — Use DBCC OPENTRAN, or query sysprocesses & syslockinfo. Example:
SELECT spid, cmd, status, loginame, open_tran,
       datediff(s, last_batch, getdate()) AS [WaitTime (s)]
FROM master..sysprocesses p
WHERE open_tran > 0
  and spid > 50
  and datediff(s, last_batch, getdate()) > 30
  and exists (select * from master..syslockinfo l where req_spid = p.spid and rsc_type <> 2)
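Usage sketch for the DBCC OPENTRAN option (pubs is only a placeholder database name): it reports the SPID, start time, and name of the oldest active transaction in that database, which is usually the one holding up log truncation or blocking others.
dbcc opentran ('pubs')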

34 3 rd Party Performance Tools Veritas (formerly Precise) InDepth for SQL Server –Excellent tool for identifying Resource bottlenecks Resources consumed by statement Waits by statement Performance history

35 Resource problems: IO or Memory Pressure?
Waits — if you have high: 1. IO_Completion 2. Async_IO_Completion 3. PageIOLatch_x 4. PageLatch_x
Queues — correlate with: 1. SQL Buffer Mgr — Avg Page Life Expectancy (seconds), Checkpoint pages/sec, Lazywrites/sec; 2. Physical Disk — Avg disk sec/read, Avg disk sec/write, disk queues
Explanation — these waits may indicate IO or memory issues: high avg disk seconds indicates an IO issue; HOWEVER, low average page life indicates memory pressure, e.g. cache flushing

36 Or design problems… IO or DB Design?
Waits — if you have high: 1. IO_Completion 2. Async_IO_Completion 3. Writelog
Queues — correlate with: 1. SQL Buffer Mgr — Avg Page Life Expectancy (seconds), Checkpoint pages/sec, Lazywrites/sec; 2. Physical Disk — Avg disk sec/read, Avg disk sec/write, disk queues
Explanation — these waits can also indicate a database design issue: 1. if Profiler shows scans started, reads, writes; 2. if showplans show many table scans, clustered index range scans, nonclustered index range scans, sorts

37 Network bottlenecks Typical issues include: –Exceeding capabilities of NIC’s Check perfmon [SQLServer:SQL Statistics:Batch Requests / Sec] –>3000/sec max on 100Mb hardware –Exceeding network bandwidth Check perfmon [Network Interface:Bytes Total/Sec] / [Network Interface:Current Bandwidth] –>.6 is pushing it

38 Language vs. RPC Events — The server has two distinct and optimized code paths; the goal is to use the correct one! Language event: any SQL statement that is not a (stored) procedure call. RPC event: a stored procedure call made with the {call} syntax. The problem with language events: generic code that executes procedures via a language event (for example OSQL, Query Analyzer, etc.) goes through the SQL Server parser — extra parsing to figure out what is in the string — and creates ad hoc query plans for the string (in addition to the stored procedure plans).

39 Comparing Protocols

40 API - Recent benchmark lessons OLTP Benchmark lessons –Trading floor 65K trades per second on a 4-way hyper-threaded box –Row length and data types Every byte counts, use correct types –Sometimes big perf gains from best practices Packet size and batch size –Perf of ‘Bind’ on client proportional to batch size »For large batches, avoid ODBC Parameter binding with ? ODBC {Call Proc} better than execute proc syntax –{call dbo.qi ('M01', 'M01.0407040000000002')} –exec dbo.qi @v1='M01', @v2='M01.0407040000000002' –adds ADHOC query plans due to SQL string parsing Net gain from 6,000 TPS to ~40,000 TPS

41 API – unofficial single proc test NoBind provides biggest increase in OLTP perf with large batches RPC events e.g. {Call} syntax eliminates adhoc query plans –no parsing of SQL text

42 Results Handling / Round Trips — Always fetch all result sets completely, in one pass! Un-fetched results and result sets can cause concurrency issues on the server. Un-fetched results and result sets cause an attention signal to be sent to the server to cancel the pending stream. SET NOCOUNT ON — eliminates unnecessary round trips, such as sending an empty result set for INSERT, UPDATE, and DELETE.
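A small sketch of the SET NOCOUNT ON recommendation inside a hypothetical procedure (procedure, table, and LastModified column names are all assumed): the "(n row(s) affected)" messages for the DML statements are suppressed, saving one round trip per statement.
create procedure dbo.usp_AddOrderDetail      -- hypothetical procedure and tables
    @OrderId int, @ProductId int, @Qty int
as
set nocount on     -- suppress the "(n row(s) affected)" messages for the statements below
insert dbo.OrderDetails (OrderId, ProductId, Qty)
values (@OrderId, @ProductId, @Qty)
update dbo.Orders
set LastModified = getdate()                 -- LastModified is an assumed column
where OrderId = @OrderId
return 0
go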

43 Concurrency/CPU Issues: Cached Objects Master..Syscacheobjects –Procedure or batch name –Set options for plans –Ref counts, Use counts –Compiled plan Single copy (serial and parallel) Re-entrant and re-usable Re-comps place **lock** on compile plan – Executable plan Data structure for user context, not re-entrant Look for plan reuse: usecounts > 1 Plan re-use of –Procs, Triggers, Views –Defaults, Check constraints, rules –adhoc SQL, sp_executesql
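A sketch of the plan-reuse check the slide describes: look in master..syscacheobjects at how often each compiled plan has been reused (usecounts > 1 means reuse; many near-identical ad hoc entries with usecounts = 1 suggests no reuse).
select cacheobjtype, objtype, usecounts, refcounts, dbid, sql
from master..syscacheobjects
where cacheobjtype = 'Compiled Plan'
order by usecounts desc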

44 Cached objects & plan re-use SQL Batch requests/sec –Compare to initial SQL Compilations/sec SQL Compilations/sec –Includes initial compiles AND re-compiles –Eliminate re-compilations to get initial compiles –Look for identical SQL statements with low usecounts in syscacheobjects SQL Re-compilations/sec –Just re-compiles –Check profiler for sp:recompile event to identify SQL statement.

45 Drilling into CPU — Plan compilation & requests — Perfmon: SQLServer:SQL Statistics — Batch requests/sec {>1000s/sec: the server is busy}, SQL Compilations/sec {>10/sec could be a problem}, SQL Recompilations/sec {OLTP should avoid high recompiles}. The ratio of compiles to requests is important: compiles - recompiles = initial compiles; plan re-use = (batch requests - initial compiles) / batch requests (compared with batch requests, low initial compiles indicates plan re-use). When recompiles occur: schema state changes — schema modifications, etc.; a previously parallelized plan needs to run serially; statistics were recomputed; rows-changed threshold — sysindexes.rowmodctr.
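The same counters can be read in T-SQL from master..sysperfinfo (SQL Server 2000); a hedged sketch — note these are cumulative totals since instance start, so sample twice and take the difference to get per-second rates.
select rtrim(counter_name) as counter_name, cntr_value
from master..sysperfinfo
where counter_name in ('Batch Requests/sec', 'SQL Compilations/sec', 'SQL Re-Compilations/sec')
-- initial compiles = SQL Compilations/sec - SQL Re-Compilations/sec
-- plan re-use      = (Batch Requests/sec - initial compiles) / Batch Requests/sec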

46 Plan re-use vs. CPU usage — CPU is used for plan determination. OLTP is characterized by a high volume of identical small transactions, so plan re-use is desirable (see usecounts in master..syscacheobjects). Stored procedure estimates are based on the initial parameter values: in most OLTP scenarios re-use is the right answer, but when result-set sizes vary widely, re-use can actually hurt performance.

47 Plan estimation & re-use issues — SET STATISTICS PROFILE ON shows estimates vs. actuals; look for huge differences (examples: overestimates 100x the actuals, underestimates at 1% of the actuals). Plan selection is based on the estimates. Overestimation: fixed-cost operators (hash) are favored; extreme cases can improve with a LOOP JOIN hint or EXECUTE P1 WITH RECOMPILE. Underestimation: variable-cost operators (e.g. nested loops) are favored; extreme cases can improve with a HASH option.
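A sketch of the estimate-vs-actual check and the hint workarounds above (P1 is the procedure name used on the slide; any suspect statement works):
set statistics profile on
go
exec dbo.P1            -- compare the Rows (actual) and EstimateRows columns in the output
go
set statistics profile off
go
-- For extreme misestimates, the workarounds on the slide look like:
-- exec dbo.P1 with recompile
-- select ... option (loop join)      -- when fixed-cost (hash) plans are being over-chosen
-- select ... option (hash join)      -- when nested loops are chosen on a bad underestimate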

48 Plan estimation & Re-use issues:

49 Profiler events Plan re-use (or lack of) –Compare batch requests to SQL compiles/sec IO –Reads and writes Recompilation Cache hit, insert, miss, remove Index usage (or lack of) Object access

50 Profiler events for query plans The Profiler events that track cache management include: –SP:CacheMiss (event ID 34 in Profiler) –SP:CacheInsert (event ID 35 in Profiler) –SP:CacheRemove (event ID 36 in Profiler) –SP:Recompile (event ID 37 in Profiler) –SP:CacheHit (event ID 38 in Profiler) SP:Starting lists stored procedure execution SP:StmtStarting will show corresponding SQL statement –Example: sequence is SP:StmtStarting SP:CacheMiss (no plan found) SP:CacheInsert (plan created) –Watch out: Heavy profiler use will affect performance ! Add Eventsubclass data column to display recompilation reason

51 Concurrency/CPU: Recompilation — Plan determination is CPU intensive: a recompile is only worthwhile when the benefit of the new plan exceeds the CPU cost of compiling it. Profiler lists recompile events and statements; the EventSubClass data column gives the reason. Locks on system tables: re-compiling stored procedure plans serializes other users under high concurrency, because a lock is placed on the single compile plan. Re-compilation is triggered by: rows-changed thresholds (rowmodctr); DDL placement, schema changes; coding practices & temp tables (P1 & P2).
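The rows-changed threshold mentioned above can be inspected in sysindexes.rowmodctr (SQL Server 2000); a minimal sketch, run in the database of interest:
select object_name(id) as table_name, rowcnt, rowmodctr
from sysindexes
where indid in (0, 1)                              -- the heap (0) or clustered index (1) entry per table
  and objectproperty(id, 'IsUserTable') = 1
order by rowmodctr desc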

52 EventSubClass: reasons a recompilation occurs — Schema, bindings, or permissions changed between compile and execute; statistics changed; an object not resolved at compile time was checked at run time; a SET option changed within the batch; a temp table's schema, bindings, or permissions changed; a remote rowset's schema, bindings, or permissions changed.

53 Concurrency: Helpful Lock scripts
sp_blockinfo – lists the locking chain
dbcc traceon (3605, 3604)
go
dbcc lock(StallReportThreshold, 200) – dumps all lock waits longer than 200 ms to the SQL errorlog

54 Useful Performance Counters
Memory: Page Faults/sec; Pages/sec
Physical Disk: Avg. Disk Queue Length; Avg. Disk sec/Transfer; Avg. Disk sec/Read; Avg. Disk sec/Write; Current Disk Queue Length
Processor: % Processor Time
System: Processor Queue Length
SS Access Methods: Forwarded Records/sec; Full Scans/sec; Index Searches/sec; Page Splits/sec; Range Scans/sec; Table Lock Escalations/sec
SS Buffer Manager: Checkpoint Pages/sec; Lazy Writes/sec; Page Life Expectancy
SS Databases: Log Flush Wait Time; Log Flush Waits/sec
SS General Statistics: User Connections
SS Latches: Average Latch Wait Time (ms); Latch Waits/sec; Total Latch Wait Time (ms)
SS Locks: Average Wait Time (ms); Lock Requests/sec; Lock Wait Time (ms); Lock Waits/sec
SS Memory Manager: Memory Grants Pending
SS SQL Statistics: Auto-Param Attempts/sec; Batch Requests/sec; Safe Auto-Params/sec; SQL Compilations/sec; SQL Re-Compilations/sec

55 Agenda: Overview; Design, Techniques and Best Practices; Performance Issues; Summary

56 OLTP Summary: Lessons learned — Challenge: scheduling a mixed workload evenly across schedulers. Database log able to handle 60,000+ database tx/sec. Real-time reporting while loading data — indexes have both benefits and drawbacks. OLTP general goal: limit recompiles — see "SQL Server 2000 Recompilation" at msdn.microsoft.com. Multiple database logs for scalability. Read-only queries.

57 OLTP Summary: Gotchas — Database design that takes the workload into account: indexes, denormalization, transactions. Maximizing resources: plan re-use — normally desirable for OLTP; recompilation — generally try to avoid with OLTP; set-based operations are more efficient than cursors; suppress parallel queries when there are many concurrent users — sp_configure 'max degree of parallelism', 1 -- turns it off; check for good query plans — set statistics profile on; good data access — see the benchmark lessons.

58 OLTP Summary — OLTP applications require the appropriate: database design — index usage, transaction usage (high concurrency — must minimize blocking); application design — use coding techniques for plan re-use, minimize recompiles; API — maximize performance with the most efficient calls; access methods — efficient query plans for OLTP.

59 References — "Inside SQL Server 2000" by Kalen Delaney; "SQL Server 2000 Performance Tuning" by Whalen, Garcia, DeLuca, Thompson; "SQL Server 2000 Recompilation" at http://msdn.microsoft.com; "SQL Server 2000 Performance Tuning with Waits & Queues", SQL Magazine (January 2004).

60 Recommended books: Microsoft Press — advanced information for IT professionals. For the latest technical books, see www.microsoft.com/learning/books/

61 Recommended books: reference books for IT professionals — available at major bookstores in Korea and from online bookstores.

62

