Posts

How I Use RMAN’s Auxiliary Destination to Recover Tables Safely

If you’ve ever had to recover a single table in Oracle without touching the rest of your database, you know it can feel like performing surgery on a live patient. That’s where RMAN’s auxiliary destination comes to the rescue.

What Is the Auxiliary Destination?
In simple terms, it’s a temporary workspace where Oracle can safely restore and recover a table before it goes back into the main database. Think of it as a sandbox: everything happens there first, away from the production environment, so nothing breaks if something goes wrong. When I first started using it, I was worried about messing up live data. But once I understood the concept, it made table-level recovery so much smoother.

Why You Need It
You can’t just yank a table out of a backup and drop it into a running database, especially if users are working on it. RMAN needs a safe place to do its magic:
It restores the table from backups.
It applies redo logs to bring the table to the exact point-in-time you nee...
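For context, the whole operation is driven by a single RMAN command; here is a minimal sketch, assuming a hypothetical HR.EMPLOYEES table, an example point in time, and /u01/app/oracle/aux as the auxiliary workspace:

# Minimal sketch with hypothetical names and paths: recover HR.EMPLOYEES inside
# the auxiliary instance, then bring it back under a new name so the live table
# is never touched.
RECOVER TABLE HR.EMPLOYEES
  UNTIL TIME "TO_DATE('2024-01-15 09:00:00','YYYY-MM-DD HH24:MI:SS')"
  AUXILIARY DESTINATION '/u01/app/oracle/aux'
  REMAP TABLE 'HR'.'EMPLOYEES':'EMPLOYEES_RECOVERED';

RMAN spins up the auxiliary instance under that destination, restores the needed tablespaces there, recovers them to the requested time, and imports the table back via Data Pump; once it finishes, the auxiliary files are cleaned up automatically.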

Recovering a Data Sync Issue in Oracle GoldenGate

Recently, I faced a synchronization issue in one of our Oracle GoldenGate (OGG) environments. The setup was designed to extract table data from an Oracle database and replicate it into a Microsoft SQL Server target. After creating a new extract group using the begin now option, I noticed that a portion of the data had not synced correctly. Recovery was required to ensure the target database was fully consistent. Here’s the step-by-step process I followed to resolve the issue πŸ‘‡

🧰 Step 1: Stop Existing Processes
I began by stopping the currently running extract and replicat groups to prevent any further replication while investigating the issue.

⚙️ Step 2: Create a New Extract Group
A new extract group was created on the source Oracle database, specifying a precise timestamp to capture all transactions from the point where data loss began.

πŸ“ Step 3: Update the Parameter File
The new extract group’s parameter file was updated with the required environment settings, dat...
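To make Steps 1 and 2 concrete, the GGSCI commands involved look roughly like this; the group names (EXT1, REP1, EXTNEW), the trail path, and the timestamp are placeholders, not the actual environment, and the replicat is stopped from the target-side GGSCI:

GGSCI> STOP EXTRACT EXT1
GGSCI> STOP REPLICAT REP1
GGSCI> ADD EXTRACT EXTNEW, TRANLOG, BEGIN 2024-01-15 09:00:00
GGSCI> ADD EXTTRAIL ./dirdat/na, EXTRACT EXTNEW

TRANLOG here assumes classic capture; an integrated extract on a newer Oracle source would also need a REGISTER EXTRACT EXTNEW DATABASE step before the ADD.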

Fixing System Slowness Caused by Excessive Inactive Sessions

Recently, I faced an interesting performance issue in one of our Oracle 10g databases running on IBM AIX. The system had become noticeably slow, and user operations were taking longer than usual. When I started investigating, I discovered that the database was holding a large number of inactive (idle) sessions, most of them left open by the application layer. These idle sessions were quietly consuming system resources, leading to overall slowness.

⚙️ The Immediate Fix
As a temporary measure, I manually killed the inactive sessions to free up resources. This immediately improved system responsiveness, but it was only a short-term solution. I needed a more permanent fix to ensure that idle sessions wouldn’t keep piling up in the future.

🧠 The Long-Term Solution: Profile Configuration
To automate the management of idle sessions, I decided to create a new Oracle profile with an idle time limit of 30 minutes.

🏗️ Step 1: Create a New Profile
CREATE PROFILE IDLE_30MIN L...
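A minimal sketch of that profile setup, with APP_USER standing in for the real application account, looks like this:

-- Profile that disconnects sessions idle for more than 30 minutes.
CREATE PROFILE IDLE_30MIN LIMIT IDLE_TIME 30;

-- IDLE_TIME is only enforced when resource limits are switched on.
ALTER SYSTEM SET RESOURCE_LIMIT = TRUE;

-- Attach the profile to the account whose sessions keep piling up.
ALTER USER APP_USER PROFILE IDLE_30MIN;

Once the limit kicks in, Oracle marks the offending session as SNIPED rather than removing it outright, so it is worth keeping an eye on v$session afterwards to confirm the leftover processes are cleaned up.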

Solving a Data Redaction Mystery — When Data Was Masked Only in the Application!

Recently, I encountered an interesting challenge while working with Oracle Data Redaction. A database user had been granted the following privilege:

GRANT EXEMPT REDACTION POLICY TO APP_SCHEMA;

Naturally, I expected that the redacted data would now appear unmasked everywhere. But to my surprise, the data was clearly visible in SQL Developer, yet still masked inside the application. πŸ€”

πŸ” Investigating the Behavior
At first glance, this seemed like a strange inconsistency. If the user APP_SCHEMA already had the EXEMPT REDACTION POLICY privilege, why was the data still masked in the application? After checking with the client, they mentioned an important detail: “Our application does not connect directly as APP_SCHEMA — it connects through a connection pool using another database user.” πŸ’‘ That one line changed everything.

🧠 Understanding What Was Really Happening
Oracle Data Redaction works per database session, not by the logical application user. Here’s w...
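The rest of the explanation hinges on which session user the policy actually sees, so a quick way to confirm it from a pooled connection, plus the corresponding fix, looks like this (POOL_USER is a hypothetical name for the connection-pool account):

-- Run from a connection borrowed out of the pool: this is the user redaction evaluates.
SELECT SYS_CONTEXT('USERENV', 'SESSION_USER') AS connected_user FROM dual;

-- If the pool logs in as POOL_USER, the exemption has to be granted to that account.
-- Note: this unmasks data for every session of POOL_USER, so grant it deliberately.
GRANT EXEMPT REDACTION POLICY TO POOL_USER;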

Resolving ORA-10635 and ORA-39171 Errors During BLOB Tablespace Maintenance

Recently, I came across an interesting situation while performing a space management task on one of our databases. The challenge involved a table that contained BLOB data distributed across multiple tablespaces. What seemed like a simple shrink operation ended up turning into a deeper learning experience that I’d like to share.

🧩 Background
In this environment, the main table BINARY_ARCHIVE was stored in the MAIN_DATA tablespace. Its BLOB segment was placed separately in the ATTACHMENTS_DATA tablespace to improve performance and manage large object storage efficiently. Additionally, another table called BINARY_OBJECT_LINK maintained a foreign key reference to the BLOB data and resided in the LINK_DATA tablespace. When I attempted to reclaim space from the BLOB segment using the SHRINK SPACE command, Oracle returned the following error.

⚠️ The Error
ORA-10635: Invalid segment or tablespace type
10635. 00000 - "Invalid segment or tablespace type"
*Ca...
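For reference, the kind of statement that triggers this error is a LOB shrink along these lines (ATTACHMENT_BLOB is a placeholder for the actual BLOB column name):

-- Try to reclaim unused space inside the LOB segment; this raises ORA-10635
-- when the segment or its tablespace type does not support SHRINK SPACE
-- (for example, a tablespace that is not ASSM-managed).
ALTER TABLE BINARY_ARCHIVE MODIFY LOB (ATTACHMENT_BLOB) (SHRINK SPACE);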

My Journey Upgrading MySQL 5.7 to 8.4 – and the Foreign Key Surprise

I recently had a project where I needed to upgrade a database from MySQL 5.7 to MySQL 8.4. As with most MySQL upgrades, I had two options in front of me:

Do an in-place upgrade, swapping the binaries and running mysql_upgrade.
Go with a logical upgrade, using mysqldump to export everything and then import it into the new version.

I decided to take the logical path. It felt safer for this use case, since it gives you a fresh environment and avoids any messy leftovers from the old version. So far, so good. I dumped my 5.7 database and began the import into MySQL 8.4. That’s when things went sideways.

The Error That Stopped Everything
The import was moving along nicely until I hit this:

ERROR 6125 (HY000): Failed to add the foreign key constraint. Missing unique key for constraint 'my_foriegn_FK1' in the referenced table 'my_parent_table'.

My first thought was: “Wait… what? This schema has been running in production for years without a p...
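The error itself points at the parent table: MySQL expects the referenced columns to be backed by a suitable key. A sketch of the usual fix, with placeholder column and index names mirroring the message above, is:

-- Give the referenced column in the parent table a unique key so the
-- restored foreign key has something standard to point at.
ALTER TABLE my_parent_table ADD UNIQUE KEY uq_parent_ref (ref_col);

MySQL 8.4 also introduces a restrict_fk_on_non_standard_key option that governs whether such foreign keys are accepted at all; relaxing it can unblock an import, but adding the proper unique key is the cleaner long-term fix.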

Resolving "Access Denied for user 'xxxx'@'xxx.xx.xxx.x' (Using password: YES)" in MySQL

When managing a MySQL database, it's common to encounter the "Access Denied for user 'xxxx'@'xxx.xx.xxx.x' (Using password: YES)" error. This issue typically arises when a MySQL user’s host does not have the necessary permissions to access the database from a remote machine.

Understanding the Error
This error is triggered when the MySQL server refuses a connection attempt from a user due to host restrictions. The error message usually looks like this:

ERROR 1045 (28000): Access denied for user 'xxxx'@'xxx.xx.xxx.x' (using password: YES)

If you encounter this issue, it could be because the MySQL user is restricted to accessing the database only from specific IP addresses. To resolve this issue, you can modify the MySQL user’s host settings, allowing access from different remote machines. Here's a step-by-step guide on how to change the MySQL user host.

Solution:
1. Log into the MySQL console ...
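The remaining steps boil down to checking the account's current host and widening it; a sketch with placeholder account names and addresses:

-- See which host pattern the account is currently restricted to.
SELECT user, host FROM mysql.user WHERE user = 'app_user';

-- Let the same account connect from the whole application subnet.
-- (RENAME USER takes effect immediately; no FLUSH PRIVILEGES needed.)
RENAME USER 'app_user'@'10.10.1.5' TO 'app_user'@'10.10.1.%';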

PURGE BINARY LOGS IN MYSQL

Managing the binary logs in MySQL is a critical aspect of database administration, especially for those who oversee replication or large-scale data modifications. The PURGE BINARY LOGS statement is an essential tool that helps maintain the health of your MySQL server by removing unnecessary binary log files, freeing up disk space, and ensuring smooth operations.

What are Binary Logs?
Binary logs in MySQL are files that store information about data modifications, such as INSERT, UPDATE, DELETE, and even structural changes like CREATE TABLE. These logs are vital for replication and data recovery, but they can accumulate over time, leading to performance issues or excessive disk usage. The PURGE BINARY LOGS command helps manage these files efficiently.

The PURGE BINARY LOGS Statement
The PURGE BINARY LOGS statement allows you to delete old binary logs that are no longer needed. It offers two options:

TO 'log_name': This option deletes all binary logs up to,...
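The statement's two documented forms (TO and BEFORE) are short enough to show in full; the log file name and timestamp below are placeholders:

-- List the binary logs currently on disk before purging anything.
SHOW BINARY LOGS;

-- TO form: delete every log file up to, but not including, the named one.
PURGE BINARY LOGS TO 'binlog.000125';

-- BEFORE form: delete log files written before the given datetime.
PURGE BINARY LOGS BEFORE '2024-01-15 00:00:00';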