Change column from number to varchar2 using DBMS_REDEF

>> Friday, 24 June 2016


This example does a to_char, but the same approach works with to_number if you want to go from a varchar2 to a number.

STEP 1 :
Add disk space required

This needs to be enough to duplicate the size of the table and its indexes; it is temporary storage for the period of the change only.
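A quick way to get a rough figure for how much space the table and its indexes currently use is a sketch like this against dba_segments (adjust the owner and table name; LOB segments would need adding if you have them):

select sum(bytes)/1024/1024 as mb
from dba_segments
where owner = 'MYSCHEMA'
and (segment_name = 'MYTABLE'
     or segment_name in (select index_name
                         from dba_indexes
                         where owner = 'MYSCHEMA'
                         and table_name = 'MYTABLE'));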


STEP 2:
Redo

The redefinition is going to churn through redo, so make sure you have enough redo logs and that they are large enough, or you will hit a bottleneck that slows the whole thing down. If your database is in archivelog mode then also make sure you have a cunning plan for dealing with the archived logs too.
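To get a feel for your current log sizes and switch rate before you start, something along these lines is a reasonable sketch:

select group#, bytes/1024/1024 as mb, status from v$log;

select trunc(first_time) as day, count(*) as log_switches
from v$log_history
group by trunc(first_time)
order by 1;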


STEP 3:
Think about a backout plan

I took an export of my table at the start because I had the luxury to do so.  I love a belt and braces approach to change.
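For example, something along these lines (the dump and log file names are just illustrative; setting up the MY_DPUMP_DIR directory is covered in the expdp post further down, and expdp will prompt for the password):

expdp myuser \
DIRECTORY=MY_DPUMP_DIR \
DUMPFILE=mytable_before_redef.dmp \
LOGFILE=mytable_before_redef.log \
TABLES=myschema.mytable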


STEP 4 :
Count the original table - always nice to be able to prove you haven't lost rows when you finish!

select count (*) from myschema.mytable;
Record the result.


STEP 5:
Note what you have already

select column_id, column_name, data_type, data_length, data_precision, nullable, last_analyzed  from dba_tab_columns
where table_name = 'MYTABLE'
order by column_id;

1 ORDERNUM              NUMBER        22 8 N dd/mm/yyyy hh:mm:ss
2 COLUMN_TO_BE_CHANGED  NUMBER        22 3 N dd/mm/yyyy hh:mm:ss
3 LOAD                  TIMESTAMP(6)  11   N dd/mm/yyyy hh:mm:ss


STEP 6 :
create an interim table

Create myschema.INT_mytable the same as the existing table, but with COLUMN_TO_BE_CHANGED as a varchar2 instead of a number (there is no need to create the indexes, but I did include the constraints):

CREATE TABLE myschema.INT_mytable
(
  ORDERNUM              NUMBER(8)         NOT NULL,
  COLUMN_TO_BE_CHANGED  VARCHAR2(3 CHAR)  NOT NULL,
  LOAD                  TIMESTAMP(6)      NOT NULL
)
TABLESPACE mytablespace
-- other clauses as per your original table definition
;

STEP 7:
check it

select column_id, column_name, data_type, data_length, data_precision, nullable, last_analyzed
from dba_tab_columns
where table_name = 'MYTABLE'
and column_id =2;
2 COLUMN_TO_BE_CHANGED NUMBER 22 3 N dd/mm/yyyy hh:mm:ss

select column_id, column_name, data_type, data_length, data_precision, nullable, last_analyzed
from dba_tab_columns
where table_name = 'INT_MYTABLE'
and column_id =2;
2 COLUMN_TO_BE_CHANGED VARCHAR2 12 N


STEP 8
check redef will work

begin
DBMS_REDEFINITION.can_redef_table('MYSCHEMA', 'MYTABLE');
end;
/
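can_redef_table also takes an options_flag; since I use rowid-based redefinition in step 9 rather than a primary key, the check can be run that way too, for example:

begin
DBMS_REDEFINITION.can_redef_table(
   uname        => 'MYSCHEMA',
   tname        => 'MYTABLE',
   options_flag => DBMS_REDEFINITION.CONS_USE_ROWID);
end;
/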

STEP 9
Start the redefinition

I parallelised this based on the CPU count (cat /proc/cpuinfo | grep processor | wc -l) multiplied by 4.

set echo on;
spool redef.txt
ALTER SESSION FORCE PARALLEL DML PARALLEL 32;
ALTER SESSION FORCE PARALLEL QUERY PARALLEL 32;
ALTER SESSION ENABLE RESUMABLE;
set timing on;
begin
DBMS_REDEFINITION.START_REDEF_TABLE(
   uname       => 'MYSCHEMA',
   orig_table  => 'MYTABLE',
   int_table   => 'INT_MYTABLE',
   col_mapping => 'ORDERNUM ORDERNUM,
                   to_char(COLUMN_TO_BE_CHANGED) COLUMN_TO_BE_CHANGED,
                   LOAD LOAD',
   options_flag => DBMS_REDEFINITION.CONS_USE_ROWID);
end;
/
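While the initial copy is running you can keep an eye on it from another session; a rough sketch using v$session_longops:

select sid, opname, sofar, totalwork,
       round(sofar/totalwork*100, 1) as pct_done, time_remaining
from v$session_longops
where totalwork > 0
and sofar < totalwork;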


STEP 10:
Decide what else you want to keep as part of the redefinition; you need to think about this for yourself based on what is relevant to your table.

set serveroutput on;
DECLARE
  error_count pls_integer := 0;
BEGIN
  DBMS_REDEFINITION.COPY_TABLE_DEPENDENTS(
    uname            => 'MYSCHEMA',
    orig_table       => 'MYTABLE',
    int_table        => 'INT_MYTABLE',
    copy_indexes     => DBMS_REDEFINITION.cons_orig_params,
    copy_triggers    => FALSE,
    copy_constraints => FALSE,
    copy_privileges  => TRUE,
    ignore_errors    => FALSE,
    copy_statistics  => TRUE,
    num_errors       => error_count);
  DBMS_OUTPUT.PUT_LINE('errors := ' || error_count);
END;
/

errors := 0
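If a lot of DML has hit the original table since step 9, you can optionally resynchronise the interim table before the finish so the final swap has less to catch up on. This wasn't needed in my run, but a sketch would be:

begin
DBMS_REDEFINITION.sync_interim_table(
   uname      => 'MYSCHEMA',
   orig_table => 'MYTABLE',
   int_table  => 'INT_MYTABLE');
end;
/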


STEP 11:
finish the redefinition

begin
  DBMS_REDEFINITION.finish_redef_table(
    uname      => 'MYSCHEMA',      
    orig_table => 'MYTABLE',
    int_table  => 'INT_MYTABLE');  
 end;
/



STEP 12:
check counts and do checks



select count (*) from MYSCHEMA.MYTABLE;
same result as STEP 4


select column_id, column_name, data_type, data_length, data_precision, nullable, last_analyzed  from dba_tab_columns
where table_name = 'MYTABLE'
order by column_id;


1 ORDERNUM              NUMBER        22 8 N dd/mm/yyyy hh:mm:ss
2 COLUMN_TO_BE_CHANGED  VARCHAR2      12   N dd/mm/yyyy hh:mm:ss
3 LOAD                  TIMESTAMP(6)  11   N dd/mm/yyyy hh:mm:ss



select column_id, column_name, data_type, data_length, data_precision, nullable, last_analyzed
from dba_tab_columns
where table_name = 'MYTABLE'
and column_id =2;
2 COLUMN_TO_BE_CHANGED VARCHAR2 12 N 21/05/2016 18:00:32


check parallel on the table and indexes - all the same as the original
synonyms still there
grants still there
constraints and indexes still there
(some example queries for these checks are below)
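Some example queries I'd use for those checks (adjust the owner and table name as needed):

select table_name, degree from dba_tables
where owner = 'MYSCHEMA' and table_name = 'MYTABLE';

select index_name, degree, status from dba_indexes
where owner = 'MYSCHEMA' and table_name = 'MYTABLE';

select owner, synonym_name from dba_synonyms
where table_owner = 'MYSCHEMA' and table_name = 'MYTABLE';

select grantee, privilege from dba_tab_privs
where owner = 'MYSCHEMA' and table_name = 'MYTABLE';

select constraint_name, constraint_type, status from dba_constraints
where owner = 'MYSCHEMA' and table_name = 'MYTABLE';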


STEP 13:
drop the interim table

DROP TABLE MYSCHEMA.INT_MYTABLE;


NOTE:

It took a while for me to get my syntax spot on. If you get a failure after the redefinition has started, you need to abort the process before you can restart it:


begin
  DBMS_REDEFINITION.abort_redef_table(
    uname      => 'MYSCHEMA',
    orig_table => 'MYTABLE',
    int_table  => 'INT_MYTABLE');
end;
/








EXPDP using QUERY with to_timestamp or to_date

>> Friday, 20 May 2016

For expdp with select queries using to_timestamp or to_date I've had a bit of a game getting the apostrophes to play happy bunny. Within a parfile I was getting nowhere, so back to my easy friend ksh (run against an 11.2 database):


#!/bin/ksh

#set the environment
export PATH=$PATH:/usr/local/bin
export ORACLE_SID=MYSID
export ORAENV_ASK=NO
. oraenv
unset ORAENV_ASK

export NLS_LANG="English_United Kingdom.AL32UTF8"

#piping the password in is good practice so it is not visible when doing a ps -ef | grep expdp
PASSWORD=myuserpassword
echo $PASSWORD | $ORACLE_HOME/bin/expdp myuser \
DIRECTORY=MY_DPUMP_DIR \
CONTENT=ALL \
COMPRESSION=ALL \
DUMPFILE=mydumpfilename.dmp \
LOGFILE=mylogfilename.log \
EXCLUDE=STATISTICS \
tables=schemaowner.table_name \
QUERY=\"where column_name \> to_timestamp\(\'2015-11-10 00:00:00:000000\',\'YYYY-MM-DD HH24:MI:SS:FF\'\)\"means the parameter is simply ignored




Notes:

Whilst out of habit I set the NLS_LANG parameter, it is not really needed for expdp any more - see the Utilities manual for further advice.

DIRECTORY: remember for datapump you need to define the directory in oracle:

Create directory MY_DPUMP_DIR as '/myfilesystem/mydirectory';
grant read, write on directory MY_DPUMP_DIR to myuser;
select * from dba_directories;

My other parameters like CONTENT, COMPRESSION, EXCLUDE are there for reasons relevant to what I was doing at the time. You need to think about what you need for the type of file you require, where it is going to be imported to and what is actually needed there.

http://docs.oracle.com/cd/B28359_01/server.111/b28319/dp_export.htm#i1006293

And finally, watch your spaces. I found that leaving spaces around the equals sign like this:

VERSION = 11.1.0 \

caused the parameter to be ignored
whereas:

VERSION=11.1.0 \

it was taken into account.



How to write messages to the Oracle alert log and custom trace files - dbms_system.ksdwrt

>> Sunday, 24 April 2016


I've been finding this useful on occasion because I have plenty of jobs reading and reporting on alert log errors, so it's saved me having to think of another way to yell "hey DBA" :

exec dbms_system.ksdwrt(1, 'This message goes to trace file in the udump location');

exec dbms_system.ksdwrt(2, 'This message goes to the alert log');

exec dbms_system.ksdwrt(3, 'This message goes to the alert log and trace file in the udump location');


example:

exec dbms_system.ksdwrt(2, 'ORA-09999 hey you wake up and investigate this');


You can put a wrapper around the call and grant execute on the function:

create or replace function write_alert_log(log_or_trace in number, text_message in varchar2) return number
is
begin
  SYS.DBMS_SYSTEM.KSDWRT(log_or_trace, text_message);
  return 0;
exception
  when others then
    return 1;
end;
/
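A quick example of how the wrapper might then be used (the grantee name here is just illustrative):

grant execute on write_alert_log to mymonitoringuser;

declare
  rc number;
begin
  rc := write_alert_log(2, 'ORA-09999 hey you wake up and investigate this');
end;
/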



