always remember

Nothing is foolproof to a sufficiently talented fool... Make something
idiot proof, and the world will simply make a bigger idiot.

OGG-00730 – No minimum Supplemental Logging is enabled

This issue was encountered whilst shipping an Oracle 12c schema to an MSSQL Server 2014 instance using OGG 12.3.

During Change Data Capture configuration, and while setting up and starting the EXTRACT process, you may find that your EXTRACT abends with:

OGG-00730  No minimum supplemental logging is enabled.

There are two reasons this may occur. The first is that you genuinely don’t have any supplemental logging enabled… The second is a documented Oracle bug, in which the GoldenGate process detects the presence of LOG DATA but reports on it incorrectly. Both scenarios are explained below.

CHECK TO SEE IF DATABASE LEVEL SUPPLEMENTAL LOGGING IS ENABLED OR NOT:

SQL> SELECT force_logging, supplemental_log_data_min FROM v$database;

FORCE_LOGGING             SUPPLEME
------------------------- --------
NO                        NO

SQL>

In this case, minimum supplemental logging really isn’t enabled, so OGG is correct. We can enable it with:

SQL> ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;
Database altered.
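
To confirm the change has taken effect, re-run the earlier check; SUPPLEMENTAL_LOG_DATA_MIN should now report YES (it is also commonly recommended to switch a logfile afterwards so the change is reflected in the redo stream):

SQL> SELECT force_logging, supplemental_log_data_min FROM v$database;

FORCE_LOGGING             SUPPLEME
------------------------- --------
NO                        YES

SQL> ALTER SYSTEM SWITCH LOGFILE;

System altered.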

Read On… ->

dave / July 27, 2018 / Uncategorized

OGG-01194 – Oracle Golden Gate CHARSET mismatch

When entertaining the loathsome idea of shipping an established Oracle data set to MSSQL (SQL Server 2014, Oracle 12c, and OGG 12.3 in this case), you may run into an issue that presents itself in the following form in your EXTRACT report:

WARNING OGG-01194
EXTRACT task RINI9001 abended : Conversion from character set UTF-8 of source column <COLUMN_NAME> to character set windows-1252 of target column <COLUMN_NAME> failed because the source column contains a character 'ef 81 8a' at offset 123 that is not available in the target character set.

THE PROBLEM:

Essentially, the issue is that you are asking your REPLICAT process to convert Unicode data into a CHARSET in which that Unicode character doesn’t exist. This is the default behaviour of REPLICAT: it will always try to convert source data charsets to the target machine’s native charset.

RESOLUTION:

This can be controlled with the “SOURCECHARSET” parameter in your REPLICAT task param file, specifically “SOURCECHARSET PASSTHRU”. Using this parameter forces REPLICAT to import the source data as-is rather than trying to convert it to the native charset of the target machine.
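
For context, here is a minimal sketch of where the parameter might sit in a REPLICAT param file; the DSN and schema mapping below are placeholders rather than anything from the environment described above:

-- Minimal REPLICAT param file sketch; DSN and MAP values are placeholders
REPLICAT rini9001
TARGETDB ogg_mssql_dsn
SOURCECHARSET PASSTHRU
MAP SRC_SCHEMA.*, TARGET dbo.*;

Bear in mind that PASSTHRU skips conversion entirely, so only use it when you are confident the target columns can store the unconverted source data.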

More information on SOURCECHARSET can be found in the Oracle GoldenGate reference documentation.

dave / July 26, 2018 / Uncategorized

Monitor Pending Connections – Zen/Zevenet Load Balancers

In my working environment we use, rather extensively, ZenLB (or, as they are now known, Zevenet) load balancers. In production systems, the back-ends of an infrastructure, or the “real servers” behind the load balancers, can sometimes become unresponsive for whatever reason. A typical case I see quite often is clustered MS Exchange Client Access Servers behind a load-balanced pool: IIS may lock up on one or more CASs, causing the incoming client connections to be held at LB level as “pending”.

This is fine, but in my experience, once the Zevenet LB racks up 1,500+ pending connections on one of its farms, it quickly exhausts its available memory.

The following check is called by the Nagios NRPE agent installed locally on the LB (it’s just Debian 8, after all):

#!/bin/bash
#
# ZenLB Pending/Established Connection Tracking v1.0 - Dave Byrne
#
# Counts conntrack entries for the HTTP/HTTPS farms (dport 80/443).
hour=$(date +%H)
pending=$(grep SYN_SENT /proc/net/nf_conntrack | grep -cE 'dport=(443|80)( |$)')
established=$(grep ESTABLISHED /proc/net/nf_conntrack | grep -cE 'dport=(443|80)( |$)')

if [ "$pending" -gt 5 ]; then
    # Too many half-open connections queued at the LB
    printf "CRITICAL - Pending connections above threshold! Pending: %s -- Established: %s\n" "$pending" "$established"
    exit 2
elif [ "$established" -eq 0 ] && [ "$hour" -ge 8 ] && [ "$hour" -le 23 ]; then
    # No traffic at all during the day is almost certainly a problem
    printf "CRITICAL - No established connections! Pending: %s -- Established: %s\n" "$pending" "$established"
    exit 2
else
    printf "OK - Pending connections at acceptable level. Pending: %s -- Established: %s\n" "$pending" "$established"
    exit 0
fi

The check will go CRITICAL if pending connections across ANY of the farms go above 5. It will also go CRITICAL if the number of established connections drops to 0 (probably bad), but I have limited that to a set time frame, as I appreciate that there may well be 0 established connections at 4am!
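
For reference, wiring the script into NRPE on the LB is just a one-line command definition (the command name and plugin path below are examples; use wherever you drop the script):

# /etc/nagios/nrpe.cfg (or a drop-in under nrpe.d/) - name and path are examples
command[check_lb_connections]=/usr/lib/nagios/plugins/check_lb_connections.sh

Restart nagios-nrpe-server afterwards and point a matching check_nrpe service definition at it from the Nagios side.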

-Dave

dave / August 21, 2017 / Uncategorized

How To Upgrade PostgreSQL From 9.3 to 9.4 (In-Place)

To make use of the JSONB features introduced in 9.4, you will need to upgrade your existing PgSQL 9.3 cluster to 9.4+. Below, I cover the basics of performing an in-place upgrade.

1. Add the PostgreSQL repo to apt:

echo "deb http://apt.postgresql.org/pub/repos/apt/ utopic-pgdg main" > /etc/apt/sources.list.d/pgdg.list

 

2. Install the repo’s key:

wget -q -O - https://www.postgresql.org/media/keys/ACCC4CF8.asc | sudo apt-key add -
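
If you just want the gist of what typically follows on a Debian/Ubuntu box (assuming the default 'main' cluster; adjust names to suit), it is roughly:

sudo apt-get update
sudo apt-get install postgresql-9.4

# Installing 9.4 creates a fresh, empty 9.4/main cluster on another port;
# drop it, then migrate the existing 9.3/main cluster in place.
sudo pg_dropcluster --stop 9.4 main
sudo pg_upgradecluster 9.3 main

# Once happy with the result, the old cluster can be removed:
# sudo pg_dropcluster 9.3 main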

Read On… ->

dave / April 7, 2016 / Uncategorized

How To Setup Binary Replication Between 2 PostgreSQL 9.4 Hosts (Hot-Standby)

Utilising a master/slave (hot-standby) setup to provide a resilience layer at database level can be easy. The following assumes you have 2 PgSQL hosts at 10.10.50.1 and 10.10.50.2, both running Ubuntu 14.04 LTS and PostgreSQL 9.4 (9.4.5).

1. On the master 10.10.50.1, edit the following in postgresql.conf:

listen_addresses = '*'
wal_level = hot_standby
max_wal_senders = 3

listen_addresses can also be scoped down to one or more server-bound IP addresses, for added security/best practice.
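
As a rough sketch of the pieces that typically complete a 9.4 hot-standby pair (the replication role name, password, and paths below are illustrative only):

# master - create a replication role (in psql) and allow the standby in pg_hba.conf:
#   CREATE ROLE replicator WITH REPLICATION LOGIN PASSWORD 'secret';
#   host  replication  replicator  10.10.50.2/32  md5

# standby - with PostgreSQL stopped and an empty data directory, clone the master
# (run as the postgres user), then configure recovery:
pg_basebackup -h 10.10.50.1 -U replicator -D /var/lib/postgresql/9.4/main -X stream -P

# postgresql.conf (standby):  hot_standby = on
# recovery.conf (standby):
#   standby_mode = 'on'
#   primary_conninfo = 'host=10.10.50.1 user=replicator password=secret'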

Read On… ->

dave / April 7, 2016 / Uncategorized