REDUCING REDO GENERATION TIPS
While Backing Up
As mentioned in the redo generation and recoverability section, user-managed backups can generate a lot of redo. The best way to eliminate this problem is to use RMAN. RMAN does not need to write entire blocks to the redo log because it knows when a block is being copied. If you must use user-managed backups, the following steps reduce redo generation:
- Do not back up all the tablespaces in one go. That puts every tablespace in backup mode for longer than necessary and therefore generates extra redo for longer than necessary.
- Automate the backup so that each tablespace, especially a busy one, spends as little time as possible in backup mode.
- Back up a tablespace at a time when it is least busy in terms of DML.
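The steps above can be sketched as follows for a single tablespace; the tablespace name and file paths are hypothetical:

```sql
-- Put only one tablespace in backup mode at a time, during a quiet period.
ALTER TABLESPACE users BEGIN BACKUP;

-- Copy the tablespace's data files with an OS utility, for example:
-- host cp /u01/oradata/orcl/users01.dbf /backup/users01.dbf

-- End backup mode as soon as the copy finishes, so the extra redo
-- (full block images on first change) is generated only briefly.
ALTER TABLESPACE users END BACKUP;
```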
Bulk Inserts
By bulk we mean a large percentage compared to the existing data. To reduce redo generation during a bulk data load, disable the indexes before the load (even a direct load into a table that has indexes generates redo for the index maintenance) and then rebuild them afterwards, as follows:
- Alter index index_name unusable ; # Do this for every index
- Alter session set skip_unusable_indexes=true ; (*)
- Insert into table_name select …
- Alter index index_name rebuild;
(*) skip_unusable_indexes became an instance initialization parameter in 10g, and it defaults to TRUE. Before 10g, skip_unusable_indexes had to be set at the session level or the user would get an error. It is good practice to set it in the session, regardless of the database version, when performing the steps above.
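Putting the steps above together, a minimal sketch; the table and index names are hypothetical:

```sql
-- Mark every index on the target table unusable so the load
-- does not maintain them (index maintenance generates redo).
ALTER INDEX big_table_idx1 UNUSABLE;

-- Allow DML despite unusable indexes (required before 10g;
-- defaults to TRUE from 10g, but setting it explicitly is safe).
ALTER SESSION SET skip_unusable_indexes = TRUE;

-- Perform the bulk load.
INSERT INTO big_table SELECT * FROM staging_table;
COMMIT;

-- Rebuild each index once, after the load.
ALTER INDEX big_table_idx1 REBUILD;
```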
Bulk Delete
1. Create new_table with logging.
2. Insert into new_table the records you want to keep from current_table.
3. Create the indexes on new_table. (*)
4. Create constraints, grants, etc.
5. Drop current_table.
6. Rename new_table to current_table.
(*) If the remaining data is very small, or there are many dependencies on the table (views, procedures, functions, etc.), the following steps can be used instead of steps 3-6 above:
3. Disable constraints on current_table.
4. Truncate current_table.
5. Insert into current_table select * from new_table.
6. Commit.
7. Enable constraints.
8. Drop table new_table.
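The main (CTAS) method can be sketched as follows; the table, index, and filter column names are hypothetical:

```sql
-- Keep only the rows you want; this generates redo for far fewer
-- rows than deleting the bulk of the table would.
CREATE TABLE new_table AS
  SELECT * FROM current_table WHERE keep_flag = 'Y';

-- Recreate indexes, constraints, and grants on new_table, e.g.:
CREATE INDEX new_table_idx1 ON new_table (id);

-- Swap the tables.
DROP TABLE current_table;
RENAME new_table TO current_table;
```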
Bulk Update
Use one of the following two methods if indexed columns are going to be affected by the update, because mass-updating indexes is more expensive than rebuilding them. If only a small portion of the data is updated, use this method:
- Disable constraints.
- Alter index index_name unusable ;
- Alter session set skip_unusable_indexes=true ;
- update the table.
- commit;
- alter index index_name rebuild ;
- Enable constraints.
If the update affects a large portion of the data, use this method instead:
- Create new_table as select (updating statement)
- create indexes on the new_table,
- create grants, constraints etc on the new_table
- Drop current_table.
- Rename the new_table to current_table.
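As a sketch, the update can be folded into the CTAS itself; the column names and CASE expression are hypothetical:

```sql
-- Apply the mass update while copying the rows, so the old table's
-- indexes are never mass-updated and generate no redo.
CREATE TABLE new_table AS
  SELECT id,
         CASE WHEN status = 'OLD' THEN 'NEW' ELSE status END AS status,
         created_date
  FROM current_table;

-- Recreate indexes, grants, and constraints on new_table, then:
DROP TABLE current_table;
RENAME new_table TO current_table;
```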
Using Partitioning
Table and index partitioning are very useful in reducing redo generation because they divide a table into smaller, more manageable units. You can use partitioning when you know which partitions the data will be inserted into, deleted from, or updated in. Redo generation is reduced because only the affected local index partitions need to be rebuilt, not the entire index. Note that global indexes must still be rebuilt in their entirety. The examples below assume local indexes.
Alter index index_name unusable;
becomes
Alter index index_name partition partition_name unusable;
and
Alter index index_name rebuild;
becomes
Alter index index_name rebuild partition partition_name;
You can use the alter table exchange partition command to quickly swap the data you loaded into a staging table into the partition, so the command:
- Insert into current_table select * from new_table is replaced by
- Alter table current_table exchange partition partition_name with new_table …
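A sketch of the exchange, assuming a staging table new_table with the same column layout as current_table and a hypothetical partition name:

```sql
-- Load new rows into a standalone staging table first.
INSERT INTO new_table SELECT * FROM source_table;
COMMIT;

-- Swap the staging table's segment into the target partition.
-- This is a data dictionary operation, so it generates almost no redo.
ALTER TABLE current_table
  EXCHANGE PARTITION data200504 WITH TABLE new_table
  INCLUDING INDEXES WITHOUT VALIDATION;
```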
Partitioning is also very useful for archiving off old historical data. The table holding historical data is created with range partitions on a date column. When the data becomes old enough to remove, the partition is dropped. This feature is so useful that Oracle added a new type of range partitioning in Oracle 11g, called interval partitioning, to handle exactly this situation. Here is an example that works from Oracle 8i to 11g:
Create table hist_data (
  sample_date date,
  …
)
Partition by range (sample_date) (
  partition data200503 values less than (to_date('04/01/2005','mm/dd/yyyy')) tablespace ts200503,
  partition data200504 values less than (to_date('05/01/2005','mm/dd/yyyy')) tablespace ts200504,
  …
);
A year down the line we want to delete all the data before April 2005. All we need to do is an "alter table hist_data drop partition data200503". This is much more efficient and produces far less redo than "delete from hist_data where sample_date < to_date('04/01/2005', 'mm/dd/yyyy')".
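For completeness, a sketch of the 11g interval-partitioned equivalent, where Oracle creates the monthly partitions automatically as data arrives:

```sql
CREATE TABLE hist_data (
  sample_date DATE
  -- other columns here
)
PARTITION BY RANGE (sample_date)
INTERVAL (NUMTOYMINTERVAL(1, 'MONTH'))
(
  -- Only the first partition is declared; later ones appear on demand.
  PARTITION data200503
    VALUES LESS THAN (TO_DATE('04/01/2005', 'MM/DD/YYYY'))
);
```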
Tips For Developers
The points discussed here are a sample of how to reduce the workload on your database:
Run the DML in as few SQL statements as you can. This reduces the generation of undo and block header updates, and therefore reduces redo generation.
That’s how it should be done:
SQL> set autotrace on statistics
SQL> insert into test select rownum from dba_objects;
93244 rows created.
Statistics
—————————————————————
… 912326 redo size
… 93244 rows processed
That’s how it should NOT be done:
SQL> set autotrace on statistics
SQL> declare
2 cursor c1 is
3 select rownum r from dba_objects;
4 begin
5 for v in c1
6 loop
7 insert into test values( v.r) ;
8 end loop ;
9 end;
10 /

PL/SQL procedure successfully completed.
Statistics
—————————————————————
… 16112247 redo size
Do not commit more than you need. Every commit forces Oracle to do internal updates, which produce redo. I ran the PL/SQL code above with a COMMIT inserted after line 7; the redo generated was 28,642,216. I also ran the script with a single commit at the end, followed by a "select * from test" statement to force cleanout of the committed blocks; the redo generated was 13,216,188. You can see that frequent commits while inserting the same amount of data produced far more redo. By reducing commits you also reduce the strain on the LGWR process.
Set the sequence cache correctly. This is important if the system generates a lot of numbers from Oracle sequences. Oracle keeps the next sequence number in the SGA, but it also stores the starting value of the next batch of cached numbers in the data dictionary, according to the sequence's cache setting, in case the database crashes. Each call to nextval updates the value in the SGA; when that value reaches the one stored in the data dictionary, the dictionary is updated, producing redo. If the sequence cache is small, the data dictionary is updated more often. This is illustrated by the following test:
create sequence seq2 cache 2 ; /* The data dictionary is updated every second nextval */
create sequence seq20 cache 20;
create sequence seq1000 cache 1000;
I created 3 identical tables test2, test20 and test1000. They all have a number column. I inserted into test2 from seq2, into test20 from seq20 and into test1000 from seq1000. Example:
SQL> insert into test20 select seq20.nextval from dba_objects ;
68764 rows created.
The table below shows the relation between the redo generated and the sequence cache size:
Cache   Redo
2       21,246,082
20      2,916,868
1000    899,719
Setting the cache parameter to a high value has only one cost: the unused cached numbers are lost if the database is restarted, leaving a gap in the sequence. Set it to a higher value if the application accesses the sequence a lot. It is wrong to believe that setting the cache to 1000 means the SGA will hold 1,000 numbers for the sequence; only one number is stored per sequence, so do not worry about setting the cache as high as you need.
In the next part, I will talk about how to reduce redo generation using NOLOGGING.