Oracle Documentation Oracle DBA Course (9i, 10g, 11g) Lecture 1: Oracle Architectural Components.


1 Oracle DBA Course (9i, 10g, 11g) Lecture 1: Oracle Architectural Components

2 Oracle Documentation

3 Objectives of the DBA course
Identify the various components of the Oracle architecture. Learn how to:
• start and shut down an Oracle database,
• create an operational database,
• manage Oracle control files, redo log files, database files, tablespaces, segments, and extents,
• manage users, privileges, and resources,
• use Globalization Support features.

4 Objectives of Lecture 1 Overview of Oracle server architecture, identifying main components. Presentation of English and Polish Oracle server terminology.

5 The Oracle Server (diagram: users connect through an application/network server to the Oracle server)

6 Oracle server The Oracle server consists of:
Oracle database - physical files that store data Oracle instance - memory structures and Oracle processes used to access data (from the physical database files) There are several files, processes, and memory structures in an Oracle server; not all of them are used when processing a SQL statement. Some are used to improve the performance of the database, ensure that the database can be recovered in the event of a software or hardware error, or perform other tasks necessary to maintain the database.

7 Oracle Database An Oracle database consists of operating system files, known as database files, that provide the actual physical storage for database information. The database files are used to ensure that the data is kept consistent and can be recovered in the event of a failure of the instance. Usually there is only one Oracle instance accessing a database. In the Real Application Cluster configuration multiple instances access a single database.

8 Oracle Database Physical Storage Structures
The database consists of control files, data files, and redo log files (control and redo log files are multiplexed). Associated files: parameter file, password file, archived log files, plus backup files, trace and alert files, audit files, …

9 Other Key Physical Structures
Parameter file, password file, archived log files.
• The parameter file defines the characteristics of an Oracle instance. For example, it contains parameters that size some of the memory structures in the SGA.
• The password file authenticates users privileged to start up and shut down an Oracle instance.
• Archived redo log files are offline copies of the redo log files that may be necessary to recover from media failures.

10 Connecting to a Database
(Diagram: the user process on the client connects to a server process on the server; a connection is established and a session is created with the Oracle server.) The client machine communicates the request to the server machine by using Oracle Net drivers. The Oracle Net listener on the server detects the request and, after verifying the username and password, starts a dedicated server process on behalf of the user process.

11 User Process Runs on the client machine
Is spawned when a tool or an application is invoked
Runs the tool or application (SQL*Plus, Oracle Enterprise Manager, Oracle Forms)
Generates calls to the Oracle server

12 Connection Connection is a communication pathway between a user process and an Oracle Database instance. A communication pathway is established using: available interprocess communication mechanisms (on a computer that runs both the user process and Oracle Database) or network software (when different computers run the database application and Oracle Database, and communicate through a network).

13 Session Session is a specific connection of a user to an Oracle Database instance through a user process. For example, when a user starts SQL*Plus, the user must provide a valid user name and password, and then a session is established for that user. A session lasts from the time the user connects until the time the user disconnects or exits the database application. Multiple sessions can be created and exist concurrently for a single Oracle Database user using the same user name. For example, a user with the user name/password of SCOTT/TIGER can connect to the same Oracle Database instance several times.
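The sessions open for a given user can be observed by querying the V$SESSION dynamic performance view. A sketch, assuming a session with the privilege to read V$SESSION (for example, one with the DBA role); the username SCOTT comes from the example above:

```sql
-- List all sessions currently open for user SCOTT;
-- several rows appear if SCOTT is connected more than once.
SELECT sid, serial#, status, program, logon_time
FROM   v$session
WHERE  username = 'SCOTT';
```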

14 Server Process Runs on the server machine (host)
Services a single user process in the dedicated server configuration Uses an exclusive Program Global Area (PGA) Processes calls generated by the client Returns results to the client Oracle Database creates server processes to handle the requests of user processes connected to the instance. In some situations when the application and Oracle Database operate on the same computer, it is possible to combine the user process and corresponding server process into a single process to reduce system overhead. However, when the application and Oracle Database operate on different computers, a user process always communicates with Oracle Database through a separate server process. Server processes (or the server portion of combined user/server processes) created on behalf of each user's application can perform one or more of the following: Parse and run SQL statements issued through the application Read necessary data blocks from datafiles on disk into the shared database buffers of the SGA, if the blocks are not already present in the SGA Return results in such a way that the application can process the information

15 Oracle Instance An Oracle instance:
• Is a means to access an Oracle database
• Always opens one and only one database
• Consists of the System Global Area (SGA) and the background processes
An Oracle instance is the combination of the background processes and memory structures. Every time an instance is started, a System Global Area (SGA) is allocated and Oracle background processes are started. Background processes perform functions on behalf of the invoking process. They consolidate functions that would otherwise be handled by multiple Oracle programs running for each user. The background processes perform input/output (I/O) and monitor other Oracle processes to provide increased parallelism for better performance and reliability.

16 Oracle Memory Structures
(Diagram of the Oracle memory structures; the Streams pool was added in 10g and the result cache in 11g.)

17 System Global Area (SGA)
Used to store database information that is shared by database processes. It contains data and control information for the Oracle server and is allocated in the virtual memory of the computer where Oracle resides. The SQL*Plus command SHOW SGA lists its components:
SHOW SGA
Total System Global Area … bytes
Fixed Size … bytes
Variable Size … bytes
Database Buffers … bytes
Redo Buffers … bytes

18 Dynamic SGA Dynamic SGA implements an infrastructure that allows the SGA configuration to change without shutting down the instance: the sizes of the database buffer cache, shared pool, and large pool can be changed while the instance is running. Conceivably, they could initially be underconfigured and would grow and shrink depending upon their respective workloads, up to a maximum of SGA_MAX_SIZE.
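In 10g and later, the current sizes of the dynamically resizable SGA components can be inspected through the V$SGA_DYNAMIC_COMPONENTS view. A sketch, run from a privileged session:

```sql
-- Show each resizable SGA component, its current size, and the granule size
SELECT component,
       current_size / 1024 / 1024 AS current_mb,
       min_size / 1024 / 1024     AS min_mb,
       granule_size               AS granule_bytes
FROM   v$sga_dynamic_components;
```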

19 Sizing the SGA The size of the SGA is determined by several initialization parameters, including: DB_CACHE_SIZE: The size of the default cache of database blocks. LOG_BUFFER: The number of bytes allocated for the redo log buffer cache. SHARED_POOL_SIZE: The size in bytes of the area devoted to shared SQL and PL/SQL. LARGE_POOL_SIZE: The size of the large pool; the default is zero.

20 System Global Area (SGA)
SGA memory is allocated and tracked in granules by the SGA components. A granule is a unit of contiguous virtual memory allocation. The size of a granule depends on the parameter SGA_MAX_SIZE:
– 4 MB if the estimated SGA size is < 128 MB
– 16 MB otherwise
At instance startup, the Oracle server allocates granule entries, one for each granule. As startup continues, each component acquires as many granules as it requires. The minimum SGA configuration is three granules: one granule for the fixed SGA (which includes the redo buffers), one granule for the buffer cache, and one granule for the shared pool.
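Rather than inferring the granule size from SGA_MAX_SIZE, it can be read directly; in 10g and later the V$SGAINFO view exposes it. A sketch from a privileged session:

```sql
-- Report the granule size used by this instance
SELECT name, bytes
FROM   v$sgainfo
WHERE  name = 'Granule Size';
```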

21 The Shared Pool
The shared pool consists of the library cache and the data dictionary cache. Its size is defined by SHARED_POOL_SIZE. The library cache contains statement text, parsed code, and an execution plan. The data dictionary cache contains table and column definitions and privileges.

22 Resizing The Shared Pool can be dynamically resized using ALTER SYSTEM SET. After performance analysis, its size can be adjusted (but the total SGA size cannot exceed SGA_MAX_SIZE): ALTER SYSTEM SET SHARED_POOL_SIZE = 64M;

23 Library Cache
Stores information about the most recently used SQL and PL/SQL statements, which enables the sharing of commonly used statements. Consists of two structures: a shared SQL area and a shared PL/SQL area. Its size is determined by the shared pool sizing.
The library cache size is based on the sizing defined for the shared pool. Memory is allocated when a statement is parsed or a program unit is called. If the size of the shared pool is too small, statements are continually reloaded into the library cache, which affects performance. The library cache is managed by a least recently used (LRU) algorithm. As the cache fills, less recently used execution paths and parse trees are removed from the library cache to make room for the new entries. If the SQL or PL/SQL statements are not reused, they eventually are aged out.

24 The library cache consists of two structures:
• Shared SQL: The Shared SQL stores and shares the execution plan and parse tree for SQL statements run against the database. The second time that an identical SQL statement is run, it is able to take advantage of the parse information available in the shared SQL to expedite its execution. To ensure that SQL statements use a shared SQL area whenever possible, the text, schema, and bind variables must be exactly the same. • Shared PL/SQL: The shared PL/SQL area stores and shares the most recently executed PL/SQL statements. Parsed and compiled program units and procedures (functions, packages, and triggers) are stored in this area.

25 Data Dictionary Cache It is a collection of the most recently used definitions in the database. It includes information about database files, tables, indexes, columns, users, privileges, and other database objects. During the parse phase, the server process looks at the data dictionary for information to resolve object names and validate access. Caching data dictionary information in memory improves response time on queries. Size is determined by the shared pool sizing. Information about the database (user account data, data file names, segment names, extent locations, table descriptions, and user privileges) is stored in the data dictionary tables. When this information is needed by the database, the data dictionary tables are read, and the data that is returned is stored in the data dictionary cache.
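A rough measure of how well the data dictionary cache is performing can be taken from the V$ROWCACHE view; a high ratio of misses to gets suggests the shared pool is undersized. A sketch from a privileged session:

```sql
-- Overall data dictionary cache miss ratio (lower is better)
SELECT SUM(getmisses) / SUM(gets) AS dict_cache_miss_ratio
FROM   v$rowcache;
```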

26 Sizing the Data Dictionary
The overall size depends on the size of the shared pool and is managed internally by the database. If the data dictionary cache is too small, then the database has to query the data dictionary tables repeatedly for information it needs. These queries are called recursive calls and are slower than queries that are answered from the data dictionary cache.

27 Database Buffer Cache Stores the most recently used blocks that have been retrieved from the data files. DB_BLOCK_SIZE determines the primary block size. When a query is processed, the Oracle server process looks in the database buffer cache for any blocks it needs. If the block is not found in the database buffer cache, the server process reads the block from the data file and places a copy in the database buffer cache. Because subsequent requests for the same block may find the block in memory, the requests may not require physical reads. The Oracle server uses a least recently used algorithm to age out buffers that have not been accessed recently to make room for new blocks in the database buffer cache.

28 Multiple Block Sizes An Oracle database can be created with a standard block size and up to four non-standard block sizes. Non-standard block sizes can have any power-of-two value between 2 KB and 32 KB.
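To use a non-standard block size, the corresponding buffer sub-cache must be configured before the tablespace is created. A sketch; the tablespace name, file path, and sizes are illustrative, not from the original:

```sql
-- Allocate a cache for 16 KB blocks, then create a tablespace that uses them
ALTER SYSTEM SET DB_16K_CACHE_SIZE = 32M;

CREATE TABLESPACE ts_16k
  DATAFILE '/u01/oradata/orcl/ts_16k01.dbf' SIZE 100M
  BLOCKSIZE 16K;
```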

29 Database Buffer Cache Consists of three independent sub-caches:
DEFAULT Buffer Pool - DB_CACHE_SIZE
KEEP Buffer Pool - DB_KEEP_CACHE_SIZE
RECYCLE Buffer Pool - DB_RECYCLE_CACHE_SIZE
The database buffer cache can be dynamically resized to grow or shrink using ALTER SYSTEM: ALTER SYSTEM SET DB_CACHE_SIZE = 96M;
Three parameters define the sizes of the buffer caches: DB_CACHE_SIZE: Sizes the default buffer cache only; it always exists and cannot be set to zero. DB_KEEP_CACHE_SIZE: Sizes the keep buffer cache, which is used to retain blocks in memory that are likely to be reused. DB_RECYCLE_CACHE_SIZE: Sizes the recycle buffer cache, which is used to eliminate blocks from memory that have little chance of being reused.

30 Buffer Cache Advisory Parameter
DB_CACHE_ADVICE can be set to gather statistics for predicting behavior with different cache sizes (values: OFF, READY, ON). The information provided by these statistics can help the DBA size the buffer cache optimally for a given workload. The buffer cache advisory information is collected and displayed through the V$DB_CACHE_ADVICE view.
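With DB_CACHE_ADVICE set to ON, the projected effect of candidate cache sizes can be read from the advisory view. A sketch from a privileged session:

```sql
-- Estimated physical reads for candidate default-cache sizes
SELECT size_for_estimate AS cache_mb,
       estd_physical_read_factor,
       estd_physical_reads
FROM   v$db_cache_advice
WHERE  name = 'DEFAULT'
  AND  advice_status = 'ON';
```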

31 Types of buffers
Dirty - changed and not yet written to disk.
Cold - not recently used, according to the least recently used (LRU) algorithm.
Free - contains no data or is free to be overwritten.
Pinned - currently being accessed or explicitly retained for future use (e.g. the KEEP buffer pool).
When a buffer in the database buffer cache is modified, it is marked dirty. A cold buffer is a buffer that has not been recently used according to the least recently used (LRU) algorithm. The DBWn process writes cold, dirty buffers to disk so that user processes are able to find cold, clean buffers that can be used to read new blocks into the cache. As buffers are dirtied by user processes, the number of free buffers diminishes. If the number of free buffers drops too low, user processes that must read blocks from disk into the cache are not able to find free buffers. DBWn manages the buffer cache so that user processes can always find free buffers. By writing cold, dirty buffers to disk, DBWn improves the performance of finding free buffers while keeping recently used buffers resident in memory. For example, blocks that are part of frequently accessed small tables or indexes are kept in the cache so that they do not need to be read in again from disk. The LRU algorithm keeps more frequently accessed blocks in the buffer cache so that when a buffer is written to disk, it is unlikely to contain data that will be useful soon.

32 Two lists of buffers Write list (dirty buffer list) - buffers that are modified and need to be written to the disk. Least recently used list (LRU) - free buffers, pinned buffers and the dirty buffers that have not yet been moved to the Write List. The most recently accessed blocks are always in the front (MRU end); the least recently accessed blocks, in the end (LRU end). The buffers in the cache are organized in two lists: the write list and the least recently used (LRU) list. The write list holds dirty buffers, which contain data that has been modified but has not yet been written to disk. The LRU list holds free buffers, pinned buffers, and dirty buffers that have not yet been moved to the write list. Free buffers do not contain any useful data and are available for use. Pinned buffers are currently being accessed. When an Oracle process accesses a buffer, the process moves the buffer to the most recently used (MRU) end of the LRU list. As more buffers are continually moved to the MRU end of the LRU list, dirty buffers age toward the LRU end of the LRU list. The first time an Oracle user process requires a particular piece of data, it searches for the data in the database buffer cache. If the process finds the data already in the cache (a cache hit), it can read the data directly from memory. If the process cannot find the data in the cache (a cache miss), it must copy the data block from a datafile on disk into a buffer in the cache before accessing the data. Accessing data through a cache hit is faster than data access through a cache miss. Before reading a data block into the cache, the process must first find a free buffer. The process searches the LRU list, starting at the least recently used end of the list. The process searches either until it finds a free buffer or until it has searched the threshold limit of buffers. If the user process finds a dirty buffer as it searches the LRU list, it moves that buffer to the write list and continues to search. 
When the process finds a free buffer, it reads the data block from disk into the buffer and moves the buffer to the MRU end of the LRU list. If an Oracle user process searches the threshold limit of buffers without finding a free buffer, the process stops searching the LRU list and signals the DBW0 background process to write some of the dirty buffers to disk. When the user process is performing a full table scan, it reads the blocks of the table into buffers and puts them on the LRU end (instead of the MRU end) of the LRU list. This is because a fully scanned table usually is needed only briefly, so the blocks should be moved out quickly to leave more frequently used blocks in the cache. You can control this default behavior of blocks involved in table scans on a table-by-table basis. To specify that blocks of the table are to be placed at the MRU end of the list during a full table scan, use the CACHE clause when creating or altering a table or cluster. You can specify this behavior for small lookup tables or large static historical tables to avoid I/O on subsequent accesses of the table.
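The CACHE clause described above is applied when a table is created or altered. A sketch; the table name is illustrative:

```sql
-- Keep blocks of a small lookup table at the MRU end during full table scans
ALTER TABLE country_codes CACHE;

-- Restore the default behavior (scanned blocks go to the LRU end)
ALTER TABLE country_codes NOCACHE;
```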

33 LRU list The least-used blocks are thrown out of the list when new blocks are accessed and added to the MRU end of the list. With one exception – when a full table is scanned – the blocks are written to the LRU end of the list. When an Oracle process accesses a buffer, it moves the buffer to the MRU end of the list so that the most frequently accessed data is available in the buffers.

34 LRU list When an Oracle process requests data, it searches for the data in the buffer cache; if it finds the data, this is a cache hit. If not (a cache miss), the data must be copied from disk into a buffer. The server process searches (from the LRU end) until it finds a free buffer or until it has searched a threshold limit of buffers. It moves encountered dirty buffers to the write list. If it does not find a free buffer, DBWn writes some buffers from the write list to disk, making free buffers available for the server process.

35 Redo Log Buffer Cache Size defined by LOG_BUFFER
Records changes made through the instance Used sequentially Circular buffer

36 Redo Log Buffer Cache The redo log buffer cache records all changes made to the database data blocks. Its primary purpose is recovery. Changes recorded within are called redo entries. Redo entries contain information to reconstruct or redo changes.

37 Result cache (11g) The result cache buffers query results. If a query is run that already has results in the result cache, the database returns results from the result cache instead of rerunning the query. This speeds up the execution of frequently run queries. The database automatically invalidates a cached result whenever a transaction modifies the data or metadata of any of the database objects used to construct that cached result.

38 Result cache (11g) Users can annotate a query or query fragment with a result cache hint to indicate that results are to be stored in the SQL query result cache. You can set the RESULT_CACHE_MODE initialization parameter to control whether the SQL query result cache is used for all queries (when possible), or only for queries that are annotated.
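With the default RESULT_CACHE_MODE = MANUAL, a query opts in to the result cache through the RESULT_CACHE hint. A sketch; the table and column names are illustrative:

```sql
-- The first execution computes and caches the result; later identical
-- executions are served from the result cache until DML on SALES
-- invalidates the cached entry.
SELECT /*+ RESULT_CACHE */ prod_id, SUM(amount)
FROM   sales
GROUP  BY prod_id;
```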

39 PL/SQL Function Result Cache
A PL/SQL function may depend on several parameters. In some cases, the SQL statements in the function body access data (for example, the catalog of wares in a shopping application) that changes very infrequently compared to the frequency of calling the function. The look-up key for the cache is the combination of actual arguments with which the function is invoked. When a particular invocation of the result-cached function is a cache hit, then the function body is not executed; instead, the cached value is returned immediately.
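Declaring a result-cached function in 11g is a matter of adding the RESULT_CACHE clause to its specification. A sketch; the table and logic are illustrative, and the RELIES_ON clause is optional (it was deprecated in 11.2, where dependencies are tracked automatically):

```sql
-- Cache lookups of a rarely changing price list, keyed by item id
CREATE OR REPLACE FUNCTION get_price (p_item_id NUMBER)
  RETURN NUMBER
  RESULT_CACHE RELIES_ON (price_list)
IS
  v_price NUMBER;
BEGIN
  SELECT price INTO v_price
  FROM   price_list
  WHERE  item_id = p_item_id;
  RETURN v_price;
END;
/
```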

40 Large Pool The large pool is an optional area of memory in the SGA configured only in a shared server environment. It relieves the burden placed on the shared pool. This configured memory area is used for session memory (UGA), I/O slaves, and backup and restore operations. Unlike the shared pool, the large pool does not use an LRU list. Sized by LARGE_POOL_SIZE ALTER SYSTEM SET LARGE_POOL_SIZE = 64M;

41 Java Pool The Java pool services the parsing requirements for Java commands. It is required if you install and use Java. It is sized by the JAVA_POOL_SIZE parameter. In Oracle9i, the default size of the Java pool is 24M. The Java pool memory is used for all session-specific Java code and data within the Java Virtual Machine (JVM).

42 Streams Pool (10g) Oracle Streams enables information sharing. Each unit of shared information is called a message, and messages are shared in a stream. A stream can propagate information within a database or from one database to another, routing specified information to specified destinations. Streams are used for distributed databases, applications, and data warehouses. The size of the Streams pool starts at zero and grows dynamically as needed.

43 Program Global Area (PGA)
The PGA is a memory region that contains data and control information for a single server process or a single background process. It is not shared. It contains:
• Sort area (SORT_AREA_SIZE; the sort area can grow depending on the need)
• Session information
• Cursor state
• Stack space
The PGA is allocated when a process is created and deallocated when the process is terminated.

44 PGA The total memory used by all individual PGAs is known as the total instance PGA memory, and the collection of individual PGAs is referred to as the total instance PGA, or just instance PGA. With Oracle Enterprise Manager Database Control, you set the size of the instance PGA, not individual PGAs.

45 Summary

46 Oracle Processes + Management Monitor MMON in 11g

49 Database Writer (DBWn, n=0,…,19)
(Diagram: DBWR writes from the database buffer cache in the SGA to the data files; control files and redo log files shown on disk.) Although one database writer process (DBW0) is adequate for most systems, you can configure additional processes (DBW1 through DBW9 and DBWa through DBWj) to improve write performance if your system modifies data heavily. These additional DBWn processes are not useful on uniprocessor systems.

50 The server process records changes to rollback and data blocks in the buffer cache. Database Writer (DBWn) writes the cold, dirty buffers from the database buffer cache to the data files. It ensures that a sufficient number of free buffers — buffers that can be overwritten when server processes need to read in blocks from the data files —are available in the database buffer cache. Database performance is improved because server processes make changes only in the buffer cache. When a buffer in the database buffer cache is modified, it is marked dirty. A cold buffer is a buffer that has not been recently used according to the least recently used (LRU) algorithm. The DBWn process writes cold, dirty buffers to disk so that user processes are able to find cold, clean buffers that can be used to read new blocks into the cache. As buffers are dirtied by user processes, the number of free buffers diminishes. If the number of free buffers drops too low, user processes that must read blocks from disk into the cache are not able to find free buffers. DBWn manages the buffer cache so that user processes can always find free buffers. By writing cold, dirty buffers to disk, DBWn improves the performance of finding free buffers while keeping recently used buffers resident in memory. For example, blocks that are part of frequently accessed small tables or indexes are kept in the cache so that they do not need to be read in again from disk. The LRU algorithm keeps more frequently accessed blocks in the buffer cache so that when a buffer is written to disk, it is unlikely to contain data that will be useful soon.

51 DBWn defers writing to the data files until one of the following events occurs:
• Checkpoint
• The number of dirty buffers reaches a threshold value
• A process scans a specified number of blocks when searching for free buffers and cannot find any
• Placing a tablespace in read-only or offline mode
• Dropping or truncating a table
• ALTER TABLESPACE tablespace_name BEGIN BACKUP;

52 Log Writer (LGWR) (diagram: LGWR writes from the redo log buffer in the SGA to the redo log files; data files and control files shown on disk)

53 LGWR performs sequential writes from the redo log buffer cache to the redo log file in the following situations: • When a transaction commits • When the redo log buffer cache is one-third full • When there is more than a megabyte of change records in the redo log buffer cache • Before DBWn writes modified blocks in the database buffer cache to the data files • Every 3 seconds. Because the redo is needed for recovery, LGWR confirms the commit only after the redo is written to disk.

54 COMMIT Processing
(Diagram: steps 1-4 of commit processing involving the user process, the server process, the redo log buffer in the SGA, LGWR, and the redo log files.)
SCN - System Change Number - represents the state of data after the COMMIT - assigned to data blocks.

55 System Monitor (SMON) If the Oracle instance fails, any information in the SGA that has not been written to disk is lost. For example, the failure of the operating system causes an instance failure. After the loss of the instance, the background process SMON automatically performs instance recovery when the database is reopened.

56 1. Rolling forward to recover data that has not been recorded in the data files but that has been recorded in the online redo log. This data has not been written to disk because of the loss of the SGA during instance failure. During this process, SMON reads the redo log files and applies the changes recorded in the redo log to the data blocks. Because all committed transactions have been written to the redo logs, this process completely recovers these transactions. 2. Opening the database so that users can log on. Any data that is not locked by unrecovered transactions is immediately available. 3. Rolling back uncommitted transactions. They are rolled back by SMON or by the individual server processes as they access locked data.

57 Process Monitor (PMON)
The background process PMON cleans up after failed processes by: • Rolling back the user’s current transaction • Releasing all currently held table or row locks • Freeing other resources currently reserved by the user

58 Checkpoint Event During an event called a checkpoint, the Oracle background process DBWn writes all the modified database buffers in the SGA, including both committed and uncommitted data, to the data files. This helps to reduce the time required for instance recovery. Checkpoints are implemented for the following reasons: • Checkpoints ensure that data blocks in memory that change frequently are written to data files regularly. Because of the least recently used algorithm of DBWn, a data block that changes frequently might never qualify as the least recently used block and thus might never be written to disk if checkpoints did not occur. • Because all database changes up to the checkpoint have been recorded in the data files, redo log entries before the checkpoint no longer need to be applied to the data files if instance recovery is required. Therefore, checkpoints are useful because they can expedite instance recovery.

59 Checkpoint Process (CKPT)
• Writes the checkpoint number into the data file headers
• Writes the checkpoint number, log sequence number, archived log names, and system change numbers into the control file
CKPT itself does not write data blocks to disk (DBWn does) or redo blocks to the online redo logs (LGWR does).
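A checkpoint can also be forced manually, for example before a planned shutdown or a benchmark run. A sketch from a privileged session:

```sql
-- Force DBWn to write all dirty buffers and CKPT to update file headers
ALTER SYSTEM CHECKPOINT;
```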

60 Archiver (ARCn) Optional background process which automatically archives online redo logs when ARCHIVELOG mode is set Preserves the record of all changes made to the database

61 ARCn is crucial to recovering a database after the loss of a disk
As online redo log files fill, the Oracle server begins writing to the next online redo log file. The process of switching from one redo log to another is called a log switch. The ARCn process initiates backing up, or archiving, of the filled log group at every log switch. It automatically archives the online redo log before the log can be reused, so that all of the changes made to the database are preserved. This enables the DBA to recover the database to the point of failure, even if a disk drive is damaged. If you anticipate a heavy workload for archiving, such as during bulk loading of data, you can increase the maximum number of archiver processes with the LOG_ARCHIVE_MAX_PROCESSES initialization parameter. The ALTER SYSTEM statement can change the value of this parameter dynamically to increase or decrease the number of ARCn processes.
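Since LOG_ARCHIVE_MAX_PROCESSES is dynamic, the archiver count can be raised for a bulk load and lowered afterwards. A sketch; the values are illustrative:

```sql
-- Allow up to four archiver processes during heavy redo generation
ALTER SYSTEM SET LOG_ARCHIVE_MAX_PROCESSES = 4;

-- Scale back once the load is finished
ALTER SYSTEM SET LOG_ARCHIVE_MAX_PROCESSES = 2;
```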

62 NOARCHIVELOG mode The online redo log files are overwritten each time a log switch occurs. LGWR does not overwrite a redo log group until the checkpoint for that group is complete. This ensures that committed data can be recovered if there is an instance crash. During the instance crash, only the SGA is lost. There is no loss of disks, only memory. For example, an operating system crash causes an instance crash.

63 Dispatcher (Dnnn) Dispatchers are part of the shared server architecture. They minimize resource needs by handling multiple connections to the database using a limited number of server processes. When a user makes a request, the dispatcher places the request on the request queue.

64 Shared Server (Snnn) Shared server serves multiple user processes.
When the instance starts, Oracle starts a fixed number of server processes working in a round-robin fashion to serve requests from the user processes - placed by the dispatcher into the request queue. The results are put into the response queue, to be taken by the dispatcher.

65 Job Queue Processes Job queue processes are used for batch processing. They run user jobs. They can be viewed as a scheduler service that can be used to schedule jobs as PL/SQL statements or procedures on an Oracle Database instance. Given a start date and an interval, the job queue processes try to run the job at the next occurrence of the interval.

66 Logical storage structures
An Oracle database is a group of tablespaces; at a minimum it contains SYSTEM (in 10g also SYSAUX).
• A tablespace may consist of one or more segments.
• A segment is made up of extents.
• An extent is made up of logically contiguous blocks allocated in one chunk.
• A block is the smallest unit for read and write operations - a multiple of the OS block size.
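The tablespace / segment / extent hierarchy described above can be examined through the data dictionary. A sketch listing the segments owned by user SCOTT (the username is illustrative); requires access to the DBA_SEGMENTS view:

```sql
-- Segments, their types, and the space they occupy, per tablespace
SELECT tablespace_name, segment_name, segment_type,
       extents, blocks, bytes
FROM   dba_segments
WHERE  owner = 'SCOTT'
ORDER  BY tablespace_name, segment_name;
```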

67 Logical/physical/recovery-related structures of the database
This diagram shows the logical, physical, and recovery-related structures of the database, and the relationships between them. Dotted horizontal lines divide the image into three sections. The top section shows logical structures, the middle section shows physical structures, and the bottom section shows recovery-related structures in the Flash Recovery Area. The logical structures are all tablespaces. Each tablespace points to a datafile or tempfile, which are physical structures. Other physical structures include a control file, the online redo log files, a server parameter file, and a password file. The flash recovery area contains the archived redo log files, which are copies of redo log files after they are filled.

68 Types of segments Data segments - store the table (or cluster) data.
Index segments - store the index data. Temporary segments - temporary work area e.g. for sorting, executing SQL statements. Undo (rollback) segments - store undo information. When you roll back changes made to the database, undo records in the undo tablespace are used to undo the changes.

69 Undo (Rollback) Segment
(Diagram: a DML statement changes a table; the old image of the data goes to the undo segment, while the new image is placed in the table.)

70 Review of physical storage structures
Data files - contain all the database data. Each data file is associated with one and only one tablespace. Redo log files - record changes made to data. There must be at least 2 redo log files because they are used in a circular fashion. If a failure prevents a database change from being written to a data file, the changes can be obtained from the redo log files. Multiplexing of redo logs - maintaining multiple copies of redo log files (preferably on distinct disks); such a set of copies is called a redo log group. Control files - contain basic information about the physical structure of the database, such as the database name and the locations of every data file and redo log file (usually multiplexed).
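The physical files reviewed above are listed in dynamic performance views that are available even when the database is only mounted. A sketch from a privileged session:

```sql
-- Data files, online redo log members, and control file copies
SELECT name FROM v$datafile;
SELECT group#, member FROM v$logfile;
SELECT name FROM v$controlfile;
```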

71 Summary
(Diagram: user processes connect to server processes with their PGAs; the instance consists of the SGA with the shared pool plus the DBWR and LGWR processes; the database consists of data files, control files, and redo log files, alongside the parameter file, password file, and archived log files.)

