Network Backup Solution - www.topbits.com/network-backup.html
Network backup is any backup system in which the data to be backed up traverses the network to reach the backup media. Network backup typically requires a client-server software architecture: the backup server resides on a centralized server, and a backup client resides on every system to be backed up. Advanced network backup systems can also manage backup media that are connected to the backup server over the network. Network backup systems are much more scalable and manageable than local backup systems in which a tape drive is attached to each computer. Because network backup systems can back up so many computers, they normally use tape autochangers for greater storage capacity.
Free network backup software includes Amanda, Bacula, and BackupPC. Commercial network backup software includes HP OpenView Storage Data Protector, Veritas NetBackup, and EMC Legato Networker.
Sunday, April 25, 2010
Friday, April 23, 2010
RMAN Tablespace Point in Time Recovery or TSPITR
One detail that often gets overlooked when performing TSPITR on a large database is that the auxiliary instance is a duplicate of all UNDO tablespaces, the SYSTEM tablespace, and the TEMP tablespace. If these are gigabytes in size, your auxiliary database will need a great deal of available disk space, not to mention the hours it will take to build these structures.
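For reference, a TSPITR of a hypothetical USERS tablespace looks like the following in RMAN (the auxiliary destination path is a placeholder; this is where those duplicated SYSTEM/UNDO/TEMP structures get built):

```
RMAN> recover tablespace users
  until time "to_date('23-APR-2010 09:00:00','DD-MON-YYYY HH24:MI:SS')"
  auxiliary destination '/u01/aux';
```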
Retrieve Java source from its binary class
Java can be stored in an Oracle database as binary class data only, without the source. Retrieving the source from a Java class is not as straightforward as you might think, but it is possible with the few steps outlined below.
As you can see below, the Java class has no source.
Create table with BLOB column
select * from lob_class;
commit;
drop table lob_class;
CREATE TABLE lob_class
( lob_id NUMBER(3),
java_clip BLOB DEFAULT empty_blob(),
java_clob CLOB DEFAULT NULL);
Copy binary source to BLOB column
set serveroutput on
declare
blob1 BLOB;
temp_blob BLOB;
temp_clob CLOB;
dest_offset NUMBER := 1;
src_offset NUMBER := 1;
amount INTEGER := dbms_lob.lobmaxsize;
blob_csid NUMBER := dbms_lob.default_csid;
lang_ctx INTEGER := dbms_lob.default_lang_ctx;
warning INTEGER;
begin
dbms_lob.createtemporary(blob1,true, dbms_lob.session);
dbms_java.export_class('com/budco/csc/feed/DataLoader',blob1);
insert into lob_class (lob_id, java_clip) values (1,blob1);
dbms_lob.freetemporary(blob1);
end;
/
Copy binary BLOB column to OS file
Decompile source file with open source java decompiler
DECLARE
clob_locator BLOB;
charbuf VARCHAR2(20);
read_offset INTEGER;
read_amount INTEGER;
vblob BLOB;
vstart NUMBER := 1;
bytelen NUMBER := 32000;
len NUMBER;
my_vr RAW(32000);
x NUMBER;
l_output utl_file.file_type;
BEGIN
l_output := utl_file.fopen('DPUMP_EXP_DIR',
'blob_csc.class','wb', 32760);
vstart := 1;
bytelen := 32000;
SELECT dbms_lob.getlength(java_clip)
INTO len
FROM lob_class
WHERE lob_id = 1;
x := len;
SELECT java_clip
INTO vblob
FROM lob_class
WHERE lob_id = 1;
-- single write if small enough
IF len < 32760 THEN
utl_file.put_raw(l_output, dbms_lob.substr(vblob, len, 1));
utl_file.fflush(l_output);
ELSE -- write in pieces
vstart := 1;
WHILE vstart < len AND bytelen > 0
LOOP
dbms_lob.read(vblob, bytelen, vstart, my_vr);
utl_file.put_raw(l_output, my_vr);
utl_file.fflush(l_output);
vstart := vstart + bytelen;
x := x - bytelen;
IF x < 32000 THEN
bytelen := x;
END IF;
END LOOP;
END IF;
-- close the file once, after all writes; closing inside the loop
-- would cut the output off after the first 32K chunk
utl_file.fclose(l_output);
END;
/
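Before feeding the dumped file to a decompiler, it is worth a quick sanity check: every valid Java class file begins with the magic bytes 0xCAFEBABE. A small Python sketch (the /tmp paths are just for illustration):

```python
import struct

def is_java_class(path):
    """Return True if the file starts with the Java class-file magic number."""
    with open(path, "rb") as f:
        magic = f.read(4)
    return magic == b"\xca\xfe\xba\xbe"

# Build a minimal fake header (magic + minor/major version) and check it
with open("/tmp/demo.class", "wb") as f:
    f.write(struct.pack(">I", 0xCAFEBABE) + b"\x00\x00\x00\x32")

print(is_java_class("/tmp/demo.class"))  # True
```

If this returns False for your extracted file, the BLOB-to-file step above likely truncated or corrupted the output.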
Thursday, April 22, 2010
Private and Public Clouds
Public cloud computing still has obvious drawbacks, such as security control. Most IT managers intend to keep their clouds private while maintaining control over security concerns such as regulatory compliance and legal and contractual issues. The key concept behind cloud computing is virtualization, and Oracle VM software is a good place to start.
Wednesday, April 21, 2010
Automatic Storage Management
ASM Disk Stats
set lines 200
col unbalanced for a10
col name for a7
col "allo size" for 9,999,999
select
name
, group_number
, block_size
, allocation_unit_size "allo size"
, state,total_mb
, free_mb
, usable_file_mb
, offline_disks
, unbalanced
from v$asm_diskgroup_stat;
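As a sanity check on the numbers this query returns: for a normal-redundancy disk group, Oracle computes usable_file_mb roughly as the free space minus the space held back to re-mirror after losing the largest failure group, divided by two. A toy calculation with illustrative sizes:

```python
# Toy numbers for a normal-redundancy ASM disk group (illustrative only)
free_mb = 204800               # raw free space reported in v$asm_diskgroup_stat
largest_failgroup_mb = 102400  # raw size of the largest failure group

# Space reserved so data can be re-mirrored if the largest failure group dies
required_mirror_free_mb = largest_failgroup_mb

# Divide by 2 because every file extent is mirrored
usable_file_mb = (free_mb - required_mirror_free_mb) // 2
print(usable_file_mb)  # 51200
```

This is why usable_file_mb can be far smaller than free_mb, and can even go negative on a nearly full group.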
Scheduler: Find Jobs
Find enabled and/or running scheduler jobs
col owner for a8
set lines 200
col next_run_date for a45
col repeat_interval for a65
col LAST_START_DATE for a55
col start_date for a55
select owner, job_name
, repeat_interval
--, start_date
--, substr(last_run_duration,11,30) last_run_dur
, next_run_date
--, last_start_date
-- select owner, job_name, repeat_interval, next_run_date, LAST_START_DATE
-- select owner, job_name, next_run_date
--, last_start_date
, enabled
--,job_priority
from dba_scheduler_jobs
-- where enabled = 'TRUE'
where state = 'RUNNING'
-- and next_run_date < sysdate
-- and next_run_date not like '%AMERICA%'
-- where owner in ('ECOMDEV','ECOMLIVE','ECOMSTG')
-- where owner in (select username from dba_users
-- where profile = 'DA_UNLIMITED')
-- and job_name in ('FULL_DATABASE_STATS','STALE_DATABASE_STATS')
-- and next_run_date not like '%EASTER%'
and next_run_date not like '%AMERICA%'
-- and job_name = 'GRANT_MISSING_GRANTS'
order by 5,1,2;
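Once a runaway job shows up in this listing, it can be stopped and disabled from SQL*Plus with DBMS_SCHEDULER (the owner and job name here are placeholders):

```
exec dbms_scheduler.stop_job('SCOTT.FULL_DATABASE_STATS');
exec dbms_scheduler.disable('SCOTT.FULL_DATABASE_STATS');
```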
Listener Log Mining
create or replace directory listener_log_dir
as '/opt/oracle/product/10.2.0/db_2/network/log';
drop table listener_log;
create table listener_log
(
log_date date,
connect_data varchar2(300),
protocol_data varchar2(300),
command varchar2(20),
service_name varchar2(19),
return_code number(10)
)
organization external (
type oracle_loader
default directory DATA_PUMP_DIR
access parameters
(
records delimited by newline
nobadfile
nologfile
nodiscardfile
fields terminated by "*" lrtrim
missing field values are null
(
log_date char(30) date_format
date mask "DD-MON-YYYY HH24:MI:SS",
connect_data,
protocol_data,
command,
service_name,
return_code
)
)
location ('listener.log.crash.20100311')
)
reject limit unlimited
/
create or replace function extract_value
-- Used for Listener Log Monitoring
(
p_in varchar2,
p_param in varchar2
)
return varchar2
as
l_begin number(3);
l_end number(3);
l_val varchar2(2000);
begin
l_begin := instr (upper(p_in), '('||p_param||'=');
l_begin := instr (upper(p_in), '=', l_begin);
l_end := instr (upper(p_in), ')', l_begin);
l_val := substr (p_in, l_begin+1, l_end - l_begin - 1);
return l_val;
end;
/
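The same keyword extraction can be prototyped outside the database; this Python sketch mirrors extract_value's instr/substr logic against a made-up listener-log CONNECT_DATA string:

```python
def extract_value(line, param):
    """Mimic the PL/SQL extract_value: pull the value of (PARAM=value) from a log line."""
    u = line.upper()
    begin = u.find("(" + param.upper() + "=")
    if begin == -1:
        return None            # parameter not present in this line
    begin = u.find("=", begin)  # position of '=' after the parameter name
    end = u.find(")", begin)    # closing parenthesis of this (PARAM=value) pair
    return line[begin + 1:end]

# Hypothetical sample line in the listener-log CONNECT_DATA format
sample = "(CONNECT_DATA=(SERVICE_NAME=orcl)(CID=(PROGRAM=sqlplus)(HOST=app01)(USER=scott)))"
print(extract_value(sample, "HOST"))  # app01
print(extract_value(sample, "USER"))  # scott
```

Like the PL/SQL original, this returns whatever sits between '=' and the next ')', so nested values containing parentheses would need more careful parsing.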
Count per hour
select to_char(log_date,'mm/dd/yy hh24')||':00' logdate,
count(*)
from listener_log
-- where extract_value (connect_data, 'PROGRAM') like ('%ECOMETRY%')
group by to_char(log_date,'mm/dd/yy hh24')
order by 1 asc
/
Count per minute
select to_char(log_date,'mm/dd/yy hh24:mi') logdate,
count(*)
from listener_log
where extract_value (connect_data, 'PROGRAM') like ('%ECOMETRY%')
group by to_char(log_date,'mm/dd/yy hh24:mi')
order by 1 asc
/
Gather all info from listener log
col host_ip for a15
col l_user format a20
col service format a30
col program for a43
col instance for a8
col logdate format a20
col host format a20
col RC format 99999
col service_name for a19
col port_num for a10
col host for a18
select to_char(log_date,'mm/dd/yy hh24:mi:ss') logdate,
extract_value (connect_data,'HOST') host,
extract_value (connect_data,'USER') l_user,
extract_value (connect_data,'PROGRAM') program,
extract_value (connect_data,'SERVICE_NAME') service_name,
extract_value (connect_data,'INSTANCE_NAME') instance,
extract_value (protocol_data,'HOST') host_ip,
extract_value (protocol_data,'PORT') port_num,
return_code RC
from listener_log
where return_code <> 0
and to_char(log_date,'mm/dd/yy hh24:mi:ss') < '03/09/10 05:00:00'
--where extract_value (protocol_data,'HOST') = '172.16.5.190'
--and extract_value (connect_data,'INSTANCE_NAME') like '%xxx%'
--or extract_value (connect_data,'SERVICE_NAME') like '%xxx%'
--where extract_value (connect_data,'HOST') is not null
--and extract_value (connect_data,'USER') not in ('oracle')
--where extract_value (connect_data,'USER') like 'rfhandheld%'
--where extract_value (connect_data, 'PROGRAM') like ('%ECOMETRY%') or extract_value (connect_data, 'PROGRAM') like ('%w3wp.exe%')
--and extract_value (connect_data,'HOST') = 'xxxx-45'
order by 1 asc
/
Relocate a Service
Relocate the test_dss service from instance test1 to test2
srvctl relocate service -d test -s test_dss -i test1 -t test2
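To verify the move, check which instance is now offering the service (same database and service names as above):

```
srvctl status service -d test -s test_dss
```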
RMAN Backup Listing
col status heading 'Status' for a10
col input_type heading 'Type' for a9
col output_bytes_display heading 'Size' for a10
col time_taken_display heading 'Time|Taken' for a10
col output_bytes_per_sec_display heading 'Output|Rate|Per Sec' for a10
col input_bytes_display heading 'Input|Size' for a10
set line 300 pagesize 200 echo off termout on
set verify off feedback off
SELECT
substr(db_name,1,5) "DB",
status,
input_type,
TO_CHAR(start_time,'mm/dd/yy hh:mi:ss am') "Start Time",
TO_CHAR(end_time,'mm/dd/yy hh:mi:ss am') "End Time",
output_bytes_display,
time_taken_display,
output_bytes_per_sec_display,
input_bytes_display
FROM
rcat.rc_rman_backup_job_details
where db_name in ('ORCL')
--and start_time > (sysdate-100)
--and input_type <> 'ARCHIVELOG'
ORDER BY
db_name,
start_time;
Archived Log Details
View Archived Log Details
col status for a5
select thread#, sequence#, first_time, first_change#, next_time
,next_change#
,completion_time, name
--,archived,applied
--,deleted, status
--,(blocks*512)
from v$archived_log
where completion_time > sysdate - 5
--and status = 'A'
and deleted = 'NO'
order by sequence#, thread#;
View Log History Details (similar to above)
select thread#, sequence#, first_change#, first_time, next_change#
from v$log_history
where first_time > sysdate - 1/24 /* 1 hour */
order by sequence#;
RMAN Scripts
Global Script - Full Backup
create global script full_backup
{
backup incremental level 0 tag 'FULL' database;
backup archivelog all delete all input tag 'Archive';
crosscheck backupset;
crosscheck copy;
delete noprompt obsolete;
}
Global Script - Incremental Backup
create global script inc_backup
{
backup incremental level 1 tag 'INC' database;
backup archivelog all delete all input tag 'Archive';
crosscheck backupset;
crosscheck copy;
delete noprompt obsolete;
}
Other Commands
run { execute global script full_backup; }
print global script full_backup;
list global script names;
delete global script full_backup;
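Global scripts are stored in the recovery catalog, so RMAN must be started with a catalog connection before any of these commands will work (the catalog connect string here is a placeholder):

```
rman target / catalog rcat@rcatdb
```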