Tuesday, July 14, 2020

Snapshots with Exadata Cloud

Currently, the OCI cloud tooling does not support cloning or snapshots; however, you can do these manually. Below are a couple of examples of using SQL to create PDB snapshots on the Exadata Cloud Service or Exadata Cloud at Customer services.


Example #1 -- Creating PDB Snapshot using Full Test Masters
The scenario: create a test master as a full copy of production; developers then create snapshots from the test master.

show pdbs

/* Create Full test master PDB - note this would usually be a remote clone*/
create pluggable database prod1tm1 from prod1 keystore identified by "WELcome__2019";

show pdbs;

/* Need to open Test Master read write before opening it read only */
alter pluggable database prod1tm1 open instances=all;
alter pluggable database prod1tm1 close immediate instances=all;
alter pluggable database prod1tm1 open read only instances=all;  /* read only test master pdb */

/* Create  First PDB snapshot from Test Master */

create pluggable database tm1snap1 from prod1tm1 tempfile reuse create_file_dest='+SPRC1' snapshot copy keystore identified by "WELcome__2019";

alter pluggable database tm1snap1 open instances=all;
alter session set container=tm1snap1;

create table panda as select * from dba_users;
select * from panda;

/* Create second snapshot from same test master */
create pluggable database tm1snap2 from prod1tm1 tempfile reuse create_file_dest='+SPRC1' snapshot copy keystore identified by "WELcome__2019";
alter pluggable database tm1snap2 open instances=all;

alter session set container=cdb$root;
show pdbs

/* query shows parent of snapshots */
column name format a20
select CON_ID, NAME, OPEN_MODE, SNAPSHOT_PARENT_CON_ID from v$pdbs;

alter pluggable database tm1snap1 close immediate instances=all;
alter pluggable database tm1snap2 close immediate instances=all;
alter pluggable database prod1tm1 close immediate instances=all;
drop pluggable database tm1snap1 including datafiles;
drop pluggable database tm1snap2 including datafiles;
drop pluggable database prod1tm1 including datafiles;



Example #2 -- Creating PDB Snapshots using Sparse Test Masters
The scenario: create one test master as a full copy of production, refresh it each night from production using GoldenGate (GG), and let developers create snapshots from the test masters. A new copy of production is provided each night for the developers to create snaps from: a full test master for Monday, then sparse test masters Tuesday through Friday.
Note: The same name is used for the test master each night so that the GG configuration doesn't have to change.

/* create full copy of test master from prod, usually this would be a remote clone */
create pluggable database prod2tm from prod2 keystore identified by "WELcome__2019";
alter pluggable database prod2tm open  instances=all;

/* setup GG , catch up test master with production */

/* stop GG  when it is time to create a new test master*/

alter pluggable database prod2tm close immediate instances=all;
alter pluggable database prod2tm unplug into '/home/oracle/snapshot/prod2tm_monday.xml' encrypt using "zzz";
drop pluggable database prod2tm keep datafiles;

/* create full copy of  test master*/
create pluggable database prod2tm_monday using '/home/oracle/snapshot/prod2tm_monday.xml' nocopy keystore identified by "WELcome__2019" decrypt using "zzz";

/* open full copy of test master read only,    need to open read/write first then read only */
alter pluggable database prod2tm_monday open instances=all;
alter pluggable database prod2tm_monday close immediate instances=all;
alter pluggable database prod2tm_monday  open read only instances=all;

/* create next days test master as a sparse */
create pluggable database prod2tm from prod2tm_monday tempfile reuse create_file_dest='+SPRC1' snapshot copy keystore identified by "WELcome__2019";
alter pluggable database prod2tm open  instances=all;

/* sync test master to prod*/
/* Restart GG */

/* developer creates sparse pdb to use */
create pluggable database prod2tm_monday_greg from prod2tm_monday tempfile reuse create_file_dest='+SPRC1' snapshot copy keystore identified by "WELcome__2019";
alter pluggable database prod2tm_monday_greg open instances=all;

show pdbs
column name format a20
select CON_ID, NAME, OPEN_MODE, SNAPSHOT_PARENT_CON_ID from v$pdbs;

/* repeat process for each day's  sparse test master */

alter pluggable database prod2tm close immediate instances=all;
alter pluggable database prod2tm_monday close immediate instances=all;
alter pluggable database prod2tm_monday_greg close immediate instances=all;
drop pluggable database prod2tm including datafiles;
drop pluggable database prod2tm_monday including datafiles;
drop pluggable database prod2tm_monday_greg including datafiles;

rm /home/oracle/snapshot/*.xml


ASM

A related note: ASM disk group access control can be enabled as follows, connected as SYSASM (shown here for the DATAC1 disk group):

sqlplus / as sysasm
ALTER DISKGROUP DATAC1 SET ATTRIBUTE 'ACCESS_CONTROL.ENABLED' = 'TRUE';

Saturday, July 11, 2020

Querying External Data with Autonomous Database

Below is an example of querying an external text file with a comma delimiter on OCI object storage using an Autonomous Database. This example is from the documentation, which had a couple of typos that are corrected here.

I am using an ADB on Dedicated Exadata Infrastructure.

channels.txt file:
1,Direct Sales,Direct
2,Tele Sales,Direct
3,Catalog,Indirect
4,Internet,Indirect
5,Partners,Others


BEGIN
  DBMS_CREDENTIAL.CREATE_CREDENTIAL(
    credential_name => 'DEF_CRED_NAME',
    username => 'your OCI user name here',
    password => 'your token here'
  );
END;
/

select owner, credential_name, username, enabled from dba_credentials;

BEGIN
   DBMS_CLOUD.CREATE_EXTERNAL_TABLE(
    table_name =>'CHANNELS_EXT',
    credential_name =>'DEF_CRED_NAME',
    file_uri_list =>'https://objectstorage.us-ashburn-1.oraclecloud.com/n/youtenancynamehere/b/sampledata/o/channels.txt',
    format => json_object('delimiter' value ','),
    column_list => 'CHANNEL_ID NUMBER,
    CHANNEL_DESC VARCHAR2(20),
    CHANNEL_CLASS VARCHAR2(20)' );
END;
/

SELECT count(*) FROM channels_ext;
select * from channels_ext;
desc channels_ext;
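If the table creates successfully but the query fails, DBMS_CLOUD can check the source file against the table definition. A quick sketch, using the table name from the example above:

/* validate the external table against the file on object storage */
BEGIN
  DBMS_CLOUD.VALIDATE_EXTERNAL_TABLE(table_name => 'CHANNELS_EXT');
END;
/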

Thursday, July 9, 2020

Creating an ACFS filesystem on Exadata Cloud Service

Below is an example of how to create an ACFS filesystem on an Exadata Cloud Service quarter rack. This example creates a 2TB ACFS filesystem in the RECOC1 disk group, mounted at /scratch.



[opc@exacsphx-xyzn1 ~]$ sudo -s
[root@exacsphx-xyzn1 opc]# su - grid
Last login: Thu Oct 17 09:38:15 EDT 2019
[grid@exacsphx-xyzn1 ~]$ asmcmd
ASMCMD> volcreate -G RECOC1 -s 2000G upload_vol
ASMCMD> volinfo -G RECOC1 upload_vol
Diskgroup Name: RECOC1

         Volume Name: UPLOAD_VOL
         Volume Device: /dev/asm/upload_vol-58
         State: ENABLED
         Size (MB): 2048000
         Resize Unit (MB): 64
         Redundancy: HIGH
         Stripe Columns: 8
         Stripe Width (K): 1024
         Usage:
         Mountpath:

ASMCMD> exit
[grid@exacsphx-xyzn1 ~]$ logout

[root@exacsphx-xyzn1 opc]# /sbin/mkfs -t acfs /dev/asm/upload_vol-58
mkfs.acfs: version                   = 19.0.0.0.0
mkfs.acfs: on-disk version           = 46.0
mkfs.acfs: volume                    = /dev/asm/upload_vol-58
mkfs.acfs: volume size               = 2147483648000  (   1.95 TB )
mkfs.acfs: Format complete.
[root@exacsphx-xyzn1 opc]# /u01/app/19.0.0.0/grid/bin/srvctl add filesystem -d /dev/asm/upload_vol-58 -g RECOC1 -v upload_vol -m /scratch
[root@exacsphx-xyzn1 opc]# /u01/app/19.0.0.0/grid/bin/srvctl start filesystem -d /dev/asm/upload_vol-58

[root@exacsphx-xyzn1 opc]# df -h
Filesystem                    Size  Used Avail Use% Mounted on
devtmpfs                      355G     0  355G   0% /dev
tmpfs                         709G  1.5G  707G   1% /dev/shm
tmpfs                         355G  3.0M  355G   1% /run
tmpfs                         355G     0  355G   0% /sys/fs/cgroup
/dev/mapper/VGExaDb-LVDbSys1   24G   11G   12G  47% /
/dev/xvda1                    488M   82M  381M  18% /boot
/dev/xvdi                     1.1T   69G  959G   7% /u02
/dev/mapper/VGExaDb-LVDbOra1   20G  7.4G   12G  40% /u01
/dev/xvdb                      50G  9.3G   38G  20% /u01/app/19.0.0.0/grid
/dev/asm/acfsvol01-58         800G   39G  762G   5% /acfs01
tmpfs                          71G     0   71G   0% /run/user/2000
tmpfs                          71G     0   71G   0% /run/user/0
/dev/asm/upload_vol-58        2.0T  4.6G  2.0T   1% /scratch
[root@exacsphx-xyzn1 opc]#

Tuesday, July 7, 2020

Autonomous Database with HammerDB

Summary of the basic steps to configure HammerDB with Oracle's Autonomous Database, either shared or dedicated.
  1. Download and install the Oracle Database Client on the client machine that is hosting HammerDB
  2. Optionally (recommended practice), patch the client to the latest version; today that is 19.7
  3. Set the environment variable ORACLE_HOME
  4. Download the wallet for the database you will be using with HammerDB
  5. Unzip the wallet, modify sqlnet.ora, and set TNS_ADMIN to the wallet directory
  6. Modify the oracle.xml file in your hammerdb/config directory
  7. Verify the above settings
My Environment:
  • Windows jump server
  • Linux compute VM to host HammerDB
  • ADB on Dedicated Quarter Rack
  • ADB on Shared Infrastructure
  • all on same VCN
Steps:

#1   Download and install the latest Oracle Database Client, which currently is 19.3, onto your Linux compute VM.



#2   Skipping this optional patching step.

#3   Set the environment variable ORACLE_HOME to your client installation.
       i.e.   export ORACLE_HOME=/home/opc/app/opc/product/19.0.0/client_1

#4   Download the wallet for the database you will be using onto your Linux compute VM

#5  Unzip the wallet onto your Linux compute VM and set TNS_ADMIN to this directory
        export TNS_ADMIN=/home/opc/wallet_phxhammer1


Also modify the sqlnet.ora in the wallet so that the wallet location points to this directory.
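The sqlnet.ora change is typically just the wallet location. A sketch, assuming the wallet was unzipped to the directory above:

WALLET_LOCATION = (SOURCE = (METHOD = file) (METHOD_DATA = (DIRECTORY="/home/opc/wallet_phxhammer1")))
SSL_SERVER_DN_MATCH=yes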


#6  Either modify the oracle.xml file in the config directory within the HammerDB directory or use the GUI. Make the following changes in oracle.xml:
  • system_user = admin
  • system_password = your_admin_password
  • instance = one of the defined services, i.e. phxhammer1_tp
  • tpcc_user = tpcc
  • tpcc_pass = password for the tpcc user
  • tpcc_def_tab = DATA
  • tpcc_def_temp = TEMP
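A sketch of what the relevant portion of oracle.xml might look like with the settings above (element names are assumed to match the option names; the service and passwords are placeholders):

<system_user>admin</system_user>
<system_password>your_admin_password</system_password>
<instance>phxhammer1_tp</instance>
<tpcc_user>tpcc</tpcc_user>
<tpcc_pass>your_tpcc_password</tpcc_pass>
<tpcc_def_tab>DATA</tpcc_def_tab>
<tpcc_def_temp>TEMP</tpcc_def_temp>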
#7  Start HammerDB and verify the connection.
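One way to verify the settings before starting HammerDB is a quick sqlplus connection over the same service (the service name here is the hypothetical one from the wallet example above):

$ export TNS_ADMIN=/home/opc/wallet_phxhammer1
$ $ORACLE_HOME/bin/sqlplus admin@phxhammer1_tp

If that connects, start HammerDB and run the schema build.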