Wednesday, February 4, 2026

OCI Load Balancer Health Check Failing Despite Correct Security Rules? Here’s How to Fix It

 If you’re managing Oracle Cloud Infrastructure (OCI), you might have encountered a frustrating issue: your OCI Load Balancer backend health check fails even though your security lists, route tables, and firewall settings are correct.

In this blog, we’ll break down why this happens and provide a step-by-step solution.


Understanding the OCI Load Balancer Health Check

OCI Load Balancers periodically check the health of backend servers by sending requests to a specific port and protocol (TCP or HTTP). If the response is not as expected, the backend is marked unhealthy, and traffic is not routed to it.

Common symptoms of failing health checks:

  • Status: Connection failed

  • Status: Status code mismatch

  • Backend remains Critical in the OCI console

Even if all your network rules are correct, the LB may still mark the backend as unhealthy due to application-level issues.
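Before troubleshooting the backend itself, it helps to confirm exactly what the health check is configured to expect (port, protocol, URL path, and return code). One way to do that is with the OCI CLI; the load balancer OCID and backend set name below are placeholders:

```shell
# Show the health checker configuration for a backend set
# (replace the OCID and backend set name with your own)
oci lb health-checker get \
  --load-balancer-id <load_balancer_ocid> \
  --backend-set-name <backend_set_name>
```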


Case Study: Health Check Failing Despite Correct Security Settings

Here’s an example scenario:

  • Backend VM private IP: **.**.**.**

  • OCI Load Balancer health check node IP: **.**.**.**

  • Security lists and NSGs are correctly configured to allow traffic from the LB subnet

  • Firewall on the VM is disabled

Yet, the LB health check reports:

Critical – Connection failed

Step 1: Check if the backend application is listening

Run:

ss -lntp | grep :80
  • If nothing is listening on the configured port, the health check will fail

  • In our case, starting Apache (httpd) fixed the “Connection failed” issue:

sudo systemctl start httpd
sudo systemctl enable httpd

Step 2: Check the HTTP response code

After starting the web server, the health check may still fail with:

Status code mismatch

Run a test from the backend VM:

curl -i http://10.24.139.43/
  • In our example, the response was:

HTTP/1.1 403 Forbidden
  • OCI expects HTTP 200 OK by default. A 403 indicates Apache cannot serve the requested page, even if the server is running.
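To see only the status code the backend returns (the same target as the curl test above), curl's write-out format is handy:

```shell
# Print just the HTTP status code returned by the backend;
# anything other than the expected code (200 by default) fails the check
curl -s -o /dev/null -w "%{http_code}\n" http://10.24.139.43/
```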


Step 3: Fix Apache configuration and permissions

  1. Ensure Apache allows access to the DocumentRoot:

<Directory "/var/www/html">
    Options Indexes FollowSymLinks
    AllowOverride None
    Require all granted
</Directory>
  2. Create a simple index file to return HTTP 200:

sudo bash -c 'echo "<html><body><h1>OK</h1></body></html>" > /var/www/html/index.html'
sudo chmod 644 /var/www/html/index.html
sudo chown apache:apache /var/www/html/index.html
sudo systemctl restart httpd
  3. Test again:

curl -i http://10.24.139.43/

Output should be:

HTTP/1.1 200 OK

<html><body><h1>OK</h1></body></html>

Step 4: Verify LB health check

  • OCI Load Balancer will now mark the backend Healthy within 30–60 seconds

  • Traffic through the LB will work correctly for end users


Key Takeaways

  1. Security rules alone do not guarantee a healthy backend. Always check the application layer.

  2. 403 Forbidden or 404 Not Found responses will cause health check failures.

  3. Ensure the backend serves HTTP 200 OK on the health check path.

  4. Always test using curl or nc to simulate LB requests.
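As a quick sketch of takeaway 4, a TCP health check can be simulated with nc and an HTTP health check with curl, run from a host that is allowed to reach the backend:

```shell
# TCP-level probe: is the port reachable at all?
nc -zv 10.24.139.43 80

# HTTP-level probe: what status code does the LB actually see?
curl -i http://10.24.139.43/
```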


Conclusion

If your OCI Load Balancer health check is failing despite correct network settings, don’t panic. Most likely, the issue is at the application level — either the server is not listening, or the HTTP response is not 200 OK.

By ensuring your backend web server is running, the correct permissions are set, and a valid index page is served, your LB will pass the health checks, and traffic will flow smoothly.

Friday, January 23, 2026

Step-by-Step Guide: OCI DevOps & Resource Manager Terraform Infrastructure Provisioning

 

Introduction

Infrastructure provisioning on Oracle Cloud Infrastructure (OCI) can be automated with Infrastructure as Code (IaC) using OCI DevOps, OCI Resource Manager, and Terraform — enabling CI/CD-driven deployments across environments.

In this blog, we’ll walk through a real-world, high-level plan for provisioning OCI infrastructure using OCI DevOps build pipelines integrated with OCI Resource Manager (Plan & Apply).

High-Level Architecture

The overall workflow looks like this:

  1. OCI DevOps Code Repository stores Terraform and pipeline artifacts

  2. OCI DevOps Build Pipeline is triggered on code changes

  3. Build Pipeline invokes OCI Resource Manager

  4. Resource Manager runs Terraform Plan and Apply

  5. Infrastructure is provisioned automatically




Step 1: Create OCI DevOps Code Repository

Start by creating a Code Repository inside your OCI DevOps Project. This repository will store:

  • build_spec.yaml

  • Terraform configuration files

Once created, clone the repository using Cloud Shell:
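A typical clone command looks like the following; the exact URL comes from the repository’s Clone dialog in the OCI Console, so the URL below is only illustrative:

```shell
# Illustrative clone URL; copy the real one from the Console's Clone dialog
git clone https://devops.scmservice.us-ashburn-1.oci.oraclecloud.com/namespaces/<namespace>/projects/<project>/repositories/devops-repository
```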

Authenticate using your OCI username and Auth Token.

Initially the repository is blank; the Terraform code will be pushed to it from Cloud Shell.



Step 2: Create OCI DevOps Build Pipeline

Next, create a Build Pipeline in OCI DevOps. This pipeline will:

  • Read Terraform artifacts

  • Trigger OCI Resource Manager operations

You don’t need to configure all stages immediately; the pipeline will be connected later using triggers.




Step 3: Prepare Repository Structure

Organize your repository with a clean structure:

devops-repository/
├── build_spec.yaml
└── terraform/
    └── resource_manager.tf

At this point, the files exist locally in Cloud Shell but not yet in the OCI DevOps repository.


Step 4: Upload Artifacts and Push to Repository

Add the Terraform and build specification files, then push them to the repository:

git add .
git commit -m "updated artifacts first time"
git push -u origin main

The push prompts for your OCI username and auth token.

Note: build_spec.yaml must be present in the root folder of the DevOps repository so that the build pipeline can read it.


This ensures the build pipeline always pulls the latest Terraform configuration.

Step 5: Upload Terraform Artifacts to Object Storage

In this setup, OCI Resource Manager sources its Terraform configuration from OCI Object Storage. (Alternatively, Resource Manager can be invoked directly from the build pipeline.)

  1. Create an Object Storage bucket

  2. Upload the Terraform artifacts (resource_manager.tf files)
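Both steps can be done with the OCI CLI. The bucket name ORM_STACK matches the variable used when creating the stack in the next step; the compartment OCID is a placeholder:

```shell
# Create the bucket that will hold the Terraform configuration
oci os bucket create --compartment-id <compartment_ocid> --name ORM_STACK

# Upload the Terraform configuration file
oci os object put --bucket-name ORM_STACK --file terraform/resource_manager.tf
```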



Step 6: Create OCI Resource Manager Stack (CLI)

In this scenario, the stack is not created from the OCI Console; instead, create it with the OCI CLI:

export compartment_id=<compartment_ocid>
export config_source_bucket_name=ORM_STACK
export config_source_namespace=<namespace>
export config_source_region=us-ashburn-1
export stack_display_name=ORM-STACK
export terraform_version=1.1.x

oci resource-manager stack create-from-object-storage \
--display-name $stack_display_name \
--compartment-id $compartment_id \
--config-source-bucket-name $config_source_bucket_name \
--config-source-namespace $config_source_namespace \
--config-source-region $config_source_region \
--terraform-version $terraform_version

Successful output returns the Stack OCID.
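With the stack OCID in hand, you can also trigger Plan and Apply jobs directly from the CLI, which is essentially what the build pipeline will do later (the stack OCID below is a placeholder):

```shell
# Run a plan job and review its output
oci resource-manager job create-plan-job --stack-id <stack_ocid>

# Apply the configuration (AUTO_APPROVED skips the manual plan review)
oci resource-manager job create-apply-job \
  --stack-id <stack_ocid> \
  --execution-plan-strategy AUTO_APPROVED
```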




Step 7: Update build_spec.yaml

Update the build_spec.yaml file to reference the Resource Manager Stack OCID. This file defines:

  • Build stages

  • Resource Manager Plan

  • Resource Manager Apply

This allows OCI DevOps to orchestrate Terraform execution automatically.
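A minimal build_spec.yaml for this flow might look like the sketch below. This is an illustrative fragment, not the exact file used in this setup; the STACK_OCID value is a placeholder you replace with the OCID returned in Step 6:

```yaml
version: 0.1
component: build
timeoutInSeconds: 6000
shell: bash
env:
  variables:
    STACK_OCID: "<stack_ocid>"   # placeholder: OCID from Step 6
steps:
  - type: Command
    name: "Terraform Plan via Resource Manager"
    command: |
      oci resource-manager job create-plan-job --stack-id ${STACK_OCID}
  - type: Command
    name: "Terraform Apply via Resource Manager"
    command: |
      oci resource-manager job create-apply-job --stack-id ${STACK_OCID} --execution-plan-strategy AUTO_APPROVED
```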


Step 8: Create Build Pipeline Trigger

Create a trigger that connects:

  • Code Repository (main branch)

  • Build Pipeline

Now, every git push automatically triggers infrastructure provisioning.




Step 9: Commit and Trigger the Pipeline

Make final updates and push changes:

git add .
git commit -m "updated resource manager stack"
git push

Under Resource Manager, we can see that both the Plan and Apply jobs were triggered.



Note: In this setup, the Terraform state file is managed internally by Resource Manager.

Thus, every commit pushed to the DevOps repository triggers the Resource Manager stack to create (or update) resources in OCI.

Benefits of This Approach

  • Fully automated infrastructure provisioning

  • Terraform state managed securely by OCI

  • CI/CD driven infrastructure changes

  • Repeatable, auditable deployments

  • Reduced manual errors


Conclusion

By integrating OCI DevOps, OCI Resource Manager, and Terraform, you can achieve a powerful Infrastructure as Code (IaC) pipeline on Oracle Cloud. This setup is ideal for enterprises looking to standardize cloud provisioning with governance, automation, and scalability.


Saturday, January 10, 2026

Using Oracle SQLcl MCP Server with Oracle 19c: A Step-by-Step Guide for NLP-Based Database Queries

 

Introduction

With the rapid evolution of AI, databases are no longer limited to traditional SQL-only interactions. Oracle has taken a major step forward by introducing MCP (Model Context Protocol) support in SQLcl, allowing AI tools like Claude Desktop to interact directly with Oracle databases using natural language.

In this blog, I’ll walk you through a hands-on, end-to-end setup of Oracle SQLcl MCP Server with an on-prem / OCI-hosted Oracle 19c database, and show how conversational AI can query enterprise databases securely.

This guide is ideal for Oracle DBAs, Cloud Architects, and AI-curious professionals who want to explore NLP-driven database access.


Image source: https://blogs.oracle.com/database/introducing-mcp-server-for-oracle-database


Architecture Overview

AI Client (Claude Desktop)
⬇️ MCP Protocol
SQLcl MCP Server (Local Machine)
⬇️ JDBC
Oracle Database 19c (OCI / On-Prem)

The AI never connects to the database directly. SQLcl acts as a secure MCP bridge, translating natural language into database operations.


Prerequisites

Before starting, ensure you have:

  • Oracle Database 19c (On-Prem or OCI Compute VM)

  • Windows laptop or desktop

  • Internet access to download tools

  • Basic Oracle SQL knowledge


Step 1: Install JDK 17 (Required for SQLcl)

Oracle SQLcl requires Java 17.

  • Download JDK 17 for Windows from Oracle

  • Install using the .exe

  • Set JAVA_HOME and update PATH

Verify:

java -version

Step 2: Install Oracle SQLcl

  • Download SQLcl from Oracle

  • Unzip it to a directory (example):

    C:\AI\sqlcli

SQLcl is portable—no installer required.


Step 3: Install Claude Desktop

Claude Desktop will act as the AI MCP client.

  • Download Claude Desktop

  • Install and launch once

  • Close it before MCP configuration


Step 4: Prepare Oracle Database 19c

Verify PDBs

show pdbs;

Ensure your PDB (e.g., ORCLPDB) is in READ WRITE mode.

Listener and Network Setup

  • Ensure port 1521 is open

  • Disable firewall (lab use only):

systemctl stop firewalld
systemctl disable firewalld
  • Confirm connectivity from Windows:

Test-NetConnection <DB_PUBLIC_IP> -Port 1521

Step 5: Create SQLcl Connection

Launch SQLcl:

sql /nolog

Create and save a connection:

conn -save oracle19c_mcptest -savepwd system/password@<IP>:1521/ORCLPDB

Validate:

CONNMGR test oracle19c_mcptest

Step 6: Start SQLcl MCP Server

sql -mcp -name oracle19c_mcptest

You should see:

MCP Server started successfully

This process must remain running.


Step 7: Configure Claude Desktop for MCP

Edit Claude configuration file:

{
  "mcpServers": {
    "oracle19c": {
      "command": "C:/AI/sqlcli/sqlcl/sqlcl/bin/sql.exe",
      "args": ["-mcp", "-name", "oracle19c_mcptest"]
    }
  }
}

Restart Claude Desktop and allow MCP access when prompted.


Step 8: Follow Least Privilege (Best Practice)

Instead of SYSTEM, create an application user:

CREATE USER app_user IDENTIFIED BY password;
GRANT CREATE SESSION, CREATE TABLE TO app_user;

Create sample data:

CREATE TABLE sales_orders (...);
INSERT INTO sales_orders VALUES (...);
COMMIT;

Create a separate SQLcl MCP connection for this user.
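Mirroring the connection created in Step 5, the saved MCP connection for the least-privilege user could look like this (the connection name app_user_mcp is illustrative):

```shell
sql /nolog
# Inside SQLcl: save a named connection with its password for app_user
conn -save app_user_mcp -savepwd app_user/password@<IP>:1521/ORCLPDB
# Then start the MCP server against this connection:
# sql -mcp -name app_user_mcp
```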

This ensures:

  • AI only sees approved schemas

  • SYS/SYSTEM access is avoided


Step 9: Test NLP Queries via Claude

Now the magic ✨

Ask Claude a natural-language question, for example: "Show me all rows in the sales_orders table."

Claude:

  • Understands intent

  • Calls SQLcl MCP

  • Executes SQL

  • Returns results

No SQL typing required.



Security Considerations

  • SQLcl connections are local-only

  • Credentials are stored in the user profile

  • Secure the profile with OS file permissions

  • Use separate, least-privilege DB users

  • Optional: store credentials in an Oracle Wallet

AI never gets raw database access.


Why This Matters

This setup demonstrates:

  • Conversational AI for ad-hoc querying

  • AI + Oracle DB without exposing credentials

  • Perfect for DBAs, Support, and Architects


Final Thoughts

Oracle SQLcl MCP Server bridges the gap between enterprise databases and modern AI—securely, locally, and powerfully.

If you’re running Oracle 19c today, you can already start experimenting with conversational data access.