Wednesday, February 18, 2026

Complete CI/CD Pipeline Setup on Oracle Cloud Infrastructure (OCI) with Oracle Kubernetes Engine (OKE) – Blue/Green Deployment Strategy

 Building a modern CI/CD pipeline is essential for delivering applications faster, more safely, and with minimal downtime. In this blog, I’ll walk you through a complete end-to-end implementation of a CI/CD pipeline in OCI DevOps: integrating GitHub, Container Registry, and OKE, and implementing a Blue/Green deployment strategy.

This setup ensures:

  • Automated build and deployment

  • Zero-downtime releases

  • Easy rollback mechanism

  • Secure secret management

Let’s dive in.

Architecture Overview

We will configure:

  • GitHub repository (source code)

  • OCI Vault & Secrets

  • OCI Container Registry

  • OCI DevOps Project

  • Build Pipeline

  • Deployment Pipeline

  • OKE Cluster

  • NGINX Ingress Controller

  • Blue/Green namespaces (ns-blue & ns-green)


Step 1: Secure GitHub Token in OCI Vault

Instead of hardcoding secrets:

  1. Create a Vault

  2. Create a Master Encryption Key

  3. Store the GitHub PAT token as a Secret

  4. Reference the secret in DevOps pipeline

This ensures enterprise-grade security for repository mirroring.

After the vault is created, create the encryption key inside it.


Now reference this key when creating the secret.






The secret content is the GitHub PAT (Personal Access Token).
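
For reference, the same secret can also be created from the CLI (a sketch; the OCIDs are placeholders, and the PAT must be base64-encoded for this command):

echo -n '<github_pat>' | base64 > pat.b64
oci vault secret create-base64 \
  --compartment-id <compartment_ocid> \
  --vault-id <vault_ocid> \
  --key-id <key_ocid> \
  --secret-name github-pat \
  --secret-content-content "$(cat pat.b64)"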

Step 2: Create an OKE Cluster

Create an OKE cluster from OCI Console.

After creation, access it from Cloud Shell.
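
If kubectl is not configured yet, generate a kubeconfig first (a sketch using the OCI CLI, which is preconfigured in Cloud Shell; the cluster OCID and region are placeholders):

oci ce cluster create-kubeconfig \
  --cluster-id <cluster_ocid> \
  --file $HOME/.kube/config \
  --region <region> \
  --token-version 2.0.0

Then verify the worker nodes: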

kubectl get nodes

Your Kubernetes cluster is now ready for deployments.


3 worker nodes are up and running


Step 3: Create Container & Artifact Registry

  • Create Container Registry → Stores Docker images

  • Create Artifact Registry → Stores Kubernetes manifest files (YAML)

These artifacts will be referenced inside the pipeline.


Container Registry:



Artifact Registry





Step 4: Mirror GitHub Repository in OCI DevOps

Inside DevOps Project:

  1. Click Mirror Repository

  2. Provide GitHub credentials (via Vault secret which stores the GitHub PAT)

  3. Wait for sync

After a few minutes, your source code will be reflected in OCI.

Once the mirror is created, the repositories from your GitHub account are listed, and shortly afterwards the files themselves are displayed.



Create the artifacts inside the DevOps project; these reference the container images and the OKE manifest files.

Click Add.

Create the OKE manifest artifact (choose the Artifact Registry created earlier).




Step 5: Create Build Pipeline

Important: Ensure build_spec.yaml is present in the root of the repository.

Build Pipeline Flow:

  • Fetch Source Code

  • Build Docker Image

  • Push Image to Container Registry

  • Export Image Artifact

  • Trigger Deployment Pipeline
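
A minimal build_spec.yaml implementing this flow might look like the following (a sketch: the registry path, image name, and output artifact name are assumptions to adapt to your tenancy, and OCI_BUILD_RUN_ID is assumed available as a predefined build variable):

version: 0.1
component: build
timeoutInSeconds: 1800
shell: bash
env:
  exportedVariables:
    - IMAGE_TAG
steps:
  - type: Command
    name: "Build and tag Docker image"
    command: |
      # Tag the image uniquely per build run
      IMAGE_TAG=build-${OCI_BUILD_RUN_ID}
      docker build -t <region>.ocir.io/<tenancy-namespace>/myapp:${IMAGE_TAG} .
outputArtifacts:
  - name: app-image
    type: DOCKER_IMAGE
    location: <region>.ocir.io/<tenancy-namespace>/myapp:${IMAGE_TAG}

The subsequent Deliver artifacts stage then maps app-image to the Container Registry artifact created earlier and pushes the image.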




Add the next stage, Deliver artifacts, to hand the built image and manifest over to the deployment pipeline.

Click +.







Step 6: Create OKE Deployment Environment

Create an environment pointing to:

  • Your OKE Cluster

  • Target Namespace

  • Kubernetes Manifest (oci-oke-deployment.yaml)
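
The referenced manifest typically parameterizes the image so the same file can be deployed to either namespace. A sketch of oci-oke-deployment.yaml (the app name, port, and the ${IMAGE_TAG} parameter name are assumptions; OCI DevOps substitutes ${...} pipeline parameters in the manifest at deploy time):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 2
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          # Image tag is injected from the deployment pipeline parameter
          image: <region>.ocir.io/<tenancy-namespace>/myapp:${IMAGE_TAG}
          ports:
            - containerPort: 80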




Step 7: Create Deployment Pipeline

Click Next.

Select the environment created earlier.

Finally, add a Trigger Deployment stage at the end of the build pipeline so that a successful build automatically starts the deployment pipeline.








Conclusion (End of Part 1)

By completing the steps above, we have successfully:

  • Secured GitHub credentials using OCI Vault

  • Created an OKE cluster

  • Configured OCI Container & Artifact Registry

  • Set up OCI DevOps Project

  • Created a Build Pipeline

  • Configured a Deployment Environment

  • Implemented Blue/Green namespaces (ns-blue & ns-green; see the sketch below)
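
The namespaces themselves are a one-time kubectl step (names match the architecture above):

kubectl create namespace ns-blue
kubectl create namespace ns-green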

At this stage, the CI/CD foundation is fully ready.


However, traffic is not yet exposed externally. The application is deployed inside the cluster, but we still need:

  • An Ingress Controller

  • Load Balancer configuration

  • Traffic routing between Blue and Green

  • Manual approval-based traffic shift

  • Rollback mechanism

These critical production-grade components will be covered in the next blog.


What’s Coming in the Next Post

In the next post, we will cover:

  • Setting up NGINX Ingress Controller on OKE

  • Configuring LoadBalancer service

  • Executing build pipeline runs

  • Deploying to Green namespace

  • Traffic shifting to Blue namespace

  • Rollback strategy in OCI DevOps

This is where the real power of OCI DevOps Blue/Green deployment becomes visible.


Thanks for reading.



Tuesday, February 10, 2026

OCI Generative AI Authentication Explained: Dynamic Groups vs Tenancy API Keys vs New GenAI API Keys

 

Introduction

When working with OCI Generative AI, one of the most common questions engineers struggle with is:

"How exactly should I authenticate when calling GenAI services?"

Should you use:

  • Dynamic Groups with Instance or Resource Principals?

  • The global OCI tenancy API key?

  • Or the newly launched API Keys for OCI Generative AI?

Oracle’s documentation covers each option individually, but the practical differences — especially when agents are involved — are often unclear. This confusion frequently leads to authorization errors like “Authorization failed or requested resource not found”, even when IAM policies appear correct.

In this article, we’ll break down when to use each authentication method, what they can and cannot do, and how to choose the right one for your OCI Generative AI workload.


The Three Authentication Options in OCI Generative AI



1. Dynamic Groups with Instance / Resource Principals

This is the recommended enterprise approach for OCI-native workloads.

How it works

  • Your OCI resource (Compute VM, OKE Pod, Function, etc.) is added to a Dynamic Group

  • IAM policies grant that dynamic group access to Generative AI services

  • Authentication happens automatically via Instance Principal or Resource Principal

Example policy:

Allow dynamic-group GENAI to manage genai-agent-family in tenancy
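
A matching rule for the dynamic group, plus a quick way to verify instance-principal auth from the VM, might look like this (a sketch; the compartment OCID is a placeholder, and the Object Storage call is just a cheap auth probe, since any OCI CLI command accepts --auth instance_principal):

# Dynamic group matching rule: all instances in one compartment
ALL {instance.compartment.id = 'ocid1.compartment.oc1..<unique_id>'}

# From the VM: confirm instance-principal auth works end to end
oci os ns get --auth instance_principal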

What this is used for

  • OCI Compute → Generative AI

  • OCI Functions → Generative AI

  • OKE workloads → Generative AI

Supported GenAI features

  • Model inference

  • Embeddings

  • GenAI Agent Runtime (agents, RAG, tools)

  • Object Storage integration

  • Enterprise IAM governance

This is mandatory when calling:

agent-runtime.generativeai.<region>.oci.oraclecloud.com

2. Global OCI Tenancy API Keys

These are the classic OCI user API keys.

How they work

  • API key is generated for an IAM user

  • Requests are signed using OCI’s request signing mechanism

  • Typically used from laptops, CI/CD, or external systems

Downsides

  • Long-lived credentials

  • Manual key rotation

  • Not ideal for workloads running inside OCI

Supported GenAI features

  • Model inference

  • Limited agent support

  • Not recommended for production agents

Oracle generally discourages this approach for GenAI workloads running on OCI resources.


3. Newly Launched OCI Generative AI API Keys

In early 2025, Oracle introduced service-specific API keys for Generative AI:

🔗 Official announcement:
https://docs.oracle.com/en-us/iaas/releasenotes/generative-ai/api-keys.htm

These are not OCI tenancy API keys.

What makes them different

  • Scoped only to OCI Generative AI

  • Simple Bearer-token authentication

  • Similar in concept to OpenAI or Anthropic API keys

  • No OCI request signing required

Supported endpoints

https://inference.generativeai.<region>.oci.oraclecloud.com

Supported GenAI features

  • Chat completion

  • Text generation

  • Embeddings
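
A minimal inference call with one of these keys might look as follows (a sketch: the request path and payload shape are assumptions based on the public GenAI inference API, and $GENAI_API_KEY, the region, and the OCIDs are placeholders; check the release notes linked above for the exact format):

curl -X POST "https://inference.generativeai.<region>.oci.oraclecloud.com/20231130/actions/chat" \
  -H "Authorization: Bearer $GENAI_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
        "compartmentId": "ocid1.compartment.oc1..<unique_id>",
        "servingMode": { "servingType": "ON_DEMAND", "modelId": "<model_ocid>" },
        "chatRequest": {
          "apiFormat": "GENERIC",
          "messages": [ { "role": "USER", "content": [ { "type": "TEXT", "text": "Hello" } ] } ]
        }
      }'

Note the simple Bearer header; no OCI request signing is involved.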

Critical limitation

GenAI Agent Runtime is NOT supported

API keys cannot authenticate against:

agent-runtime.generativeai.<region>.oci.oraclecloud.com

This means:

  • No agents

  • No RAG

  • No tools

  • No memory

  • No OCI service integrations

Oracle Generative AI has two distinct service planes:

Plane                  Purpose               Auth supported
Inference Plane        Direct model calls    API Keys, IAM
Agent Runtime Plane    Agents, tools, RAG    IAM only

The new API keys work only for inference, while agents are treated as first-class OCI resources, governed by IAM.


Which One Should You Use?

Use Dynamic Groups + Instance Principal if:

  • You are calling GenAI Agents

  • Your workload runs inside OCI

  • You need RAG, tools, Object Storage access

  • You want enterprise-grade security

Use GenAI API Keys if:

  • You only need model inference

  • You are calling from outside OCI

  • You want quick experimentation

  • You do NOT need agents

Avoid Global Tenancy API Keys when possible

They still work, but they are rarely the best option for modern GenAI workloads. Because these long-lived keys end up stored on your resources, such as inside a VM, they pose a security risk.


Final Thoughts

OCI Generative AI gives multiple authentication choices — but they are not interchangeable.

The new GenAI API keys simplify access to hosted models, while dynamic groups and instance principals remain the only supported path for GenAI Agents.

Understanding this distinction upfront can save hours of debugging IAM policies and mysterious 404 authorization errors.

Wednesday, February 4, 2026

OCI Load Balancer Health Check Failing Despite Correct Security Rules? Here’s How to Fix It

 If you’re managing Oracle Cloud Infrastructure (OCI), you might have encountered a frustrating issue: your OCI Load Balancer backend health check fails even though your security lists, route tables, and firewall settings are correct.

In this blog, we’ll break down why this happens and provide a step-by-step solution.


Understanding the OCI Load Balancer Health Check

OCI Load Balancers periodically check the health of backend servers by sending requests to a specific port and protocol (TCP or HTTP). If the response is not as expected, the backend is marked unhealthy, and traffic is not routed to it.

Common symptoms of failing health checks:

  • Status: Connection failed

  • Status: Status code mismatch

  • Backend remains Critical in the OCI console

Even if all your network rules are correct, the LB may still mark the backend as unhealthy due to application-level issues.


Case Study: Health Check Failing Despite Correct Security Settings

Here’s an example scenario:

  • Backend VM private IP: **.**.**.**

  • OCI Load Balancer health check node IP: **.**.**.**

  • Security lists and NSGs are correctly configured to allow traffic from the LB subnet

  • Firewall on the VM is disabled

Yet, the LB health check reports:

Critical – Connection failed

Step 1: Check if the backend application is listening

Run:

ss -lntp | grep :80
  • If nothing is listening on the configured port, the health check will fail

  • In our case, starting Apache (httpd) fixed the “Connection failed” issue:

sudo systemctl start httpd
sudo systemctl enable httpd

Step 2: Check the HTTP response code

After starting the web server, the health check may still fail with:

Status code mismatch

Run a test from the backend VM:

curl -i http://10.24.139.43/
  • In our example, the response was:

HTTP/1.1 403 Forbidden
  • OCI expects HTTP 200 OK by default. A 403 indicates Apache cannot serve the requested page, even if the server is running.


Step 3: Fix Apache configuration and permissions

  1. Ensure Apache allows access to the DocumentRoot:

<Directory "/var/www/html">
    Options Indexes FollowSymLinks
    AllowOverride None
    Require all granted
</Directory>
  2. Create a simple index file to return HTTP 200:

sudo bash -c 'echo "<html><body><h1>OK</h1></body></html>" > /var/www/html/index.html'
sudo chmod 644 /var/www/html/index.html
sudo chown apache:apache /var/www/html/index.html
sudo systemctl restart httpd
  3. Test again:

curl -i http://10.24.139.43/

Output should be:

HTTP/1.1 200 OK

<html><body><h1>OK</h1></body></html>

Step 4: Verify LB health check

  • OCI Load Balancer will now mark the backend Healthy within 30–60 seconds

  • Traffic through the LB will work correctly for end users


Key Takeaways

  1. Security rules alone do not guarantee a healthy backend. Always check the application layer.

  2. 403 Forbidden or 404 Not Found responses will cause health check failures.

  3. Ensure the backend serves HTTP 200 OK on the health check path.

  4. Always test using curl or nc to simulate LB requests.
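
For example, from a host that can reach the backend subnet (the IP matches the earlier example):

nc -zv 10.24.139.43 80        # TCP-level check, mimics a TCP health check
curl -i http://10.24.139.43/  # HTTP check, mimics an HTTP health check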


Conclusion

If your OCI Load Balancer health check is failing despite correct network settings, don’t panic. Most likely, the issue is at the application level — either the server is not listening, or the HTTP response is not 200 OK.

By ensuring your backend web server is running, the correct permissions are set, and a valid index page is served, your LB will pass the health checks, and traffic will flow smoothly.

Friday, January 23, 2026

Step-by-Step Guide: OCI DevOps & Resource Manager Terraform Infrastructure Provisioning

 

Introduction

Infrastructure provisioning on Oracle Cloud Infrastructure (OCI) can be automated with Infrastructure as Code (IaC) using OCI DevOps, OCI Resource Manager, and Terraform — enabling CI/CD-driven deployments across environments.

In this blog, we’ll walk through a real-world, high-level plan for provisioning OCI infrastructure using OCI DevOps build pipelines integrated with OCI Resource Manager (Plan & Apply).

High-Level Architecture

The overall workflow looks like this:

  1. OCI DevOps Code Repository stores Terraform and pipeline artifacts

  2. OCI DevOps Build Pipeline is triggered on code changes

  3. Build Pipeline invokes OCI Resource Manager

  4. Resource Manager runs Terraform Plan and Apply

  5. Infrastructure is provisioned automatically




Step 1: Create OCI DevOps Code Repository

Start by creating a Code Repository inside your OCI DevOps Project. This repository will store:

  • build_spec.yaml

  • Terraform configuration files

Once created, clone the repository using Cloud Shell:
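
The HTTPS clone URL is shown on the repository details page (the URL below is a placeholder):

git clone <repository_https_url>
cd devops-repository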

Authenticate using your OCI username and Auth Token.

Initially the repository is blank; the Terraform code will be pushed to it from Cloud Shell.



Step 2: Create OCI DevOps Build Pipeline

Next, create a Build Pipeline in OCI DevOps. This pipeline will:

  • Read Terraform artifacts

  • Trigger OCI Resource Manager operations

You don’t need to configure all stages immediately; the pipeline will be connected later using triggers.




Step 3: Prepare Repository Structure

Organize your repository with a clean structure:

devops-repository/
├── build_spec.yaml
└── terraform/
    └── resource_manager.tf

At this point, the files exist locally in Cloud Shell but not yet in the OCI DevOps repository.


Step 4: Upload Artifacts and Push to Repository

Add the Terraform and build specification files, then push them to the repository:

git add .
git commit -m "updated artifacts first time"
git push -u origin main

The push will prompt for your OCI username and auth token.

Note: build_spec.yaml must be present in the root folder of the DevOps repository so that the build pipeline can read it.


This ensures the build pipeline always pulls the latest Terraform configuration.

Step 5: Upload Terraform Artifacts to Object Storage

In this setup, the OCI Resource Manager stack sources its Terraform configuration from OCI Object Storage. (Alternatively, Resource Manager can be called directly from the build pipeline.)

  1. Create an Object Storage bucket

  2. Upload the Terraform artifacts (resource_manager.tf files)



Step 6: Create OCI Resource Manager Stack (CLI)

In this scenario, the stack cannot be created from the OCI Console, so we use the OCI CLI instead:

export compartment_id=<compartment_ocid>
export config_source_bucket_name=ORM_STACK
export config_source_namespace=<namespace>
export config_source_region=us-ashburn-1
export stack_display_name=ORM-STACK
export terraform_version=1.1.x

oci resource-manager stack create-from-object-storage \
--display-name $stack_display_name \
--compartment-id $compartment_id \
--config-source-bucket-name $config_source_bucket_name \
--config-source-namespace $config_source_namespace \
--config-source-region $config_source_region \
--terraform-version $terraform_version

Successful output returns the Stack OCID.




Step 7: Update build_spec.yaml

Update the build_spec.yaml file to reference the Resource Manager Stack OCID. This file defines:

  • Build stages

  • Resource Manager Plan

  • Resource Manager Apply

This allows OCI DevOps to orchestrate Terraform execution automatically.
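
A sketch of what the Plan and Apply invocations can look like inside build_spec.yaml (assumptions: the stack OCID is hardcoded as a placeholder, and auto-approval is acceptable for the apply in this lab setup):

version: 0.1
component: build
timeoutInSeconds: 3600
shell: bash
steps:
  - type: Command
    name: "Resource Manager Plan"
    command: |
      # Kick off a plan job against the stack
      oci resource-manager job create-plan-job --stack-id <stack_ocid>
  - type: Command
    name: "Resource Manager Apply"
    command: |
      # Apply with auto-approval (lab use; gate this in production)
      oci resource-manager job create-apply-job \
        --stack-id <stack_ocid> \
        --execution-plan-strategy AUTO_APPROVED

In practice you would also poll the plan job state (oci resource-manager job get) before triggering the apply.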


Step 8: Create Build Pipeline Trigger

Create a trigger that connects:

  • Code Repository (main branch)

  • Build Pipeline

Now, every git push automatically triggers infrastructure provisioning.




Step 9: Commit and Trigger the Pipeline

Make final updates and push changes:

git add .
git commit -m "updated resource manager stack"
git push









Under Resource Manager, we can see that both the plan and apply jobs were triggered.



Note: in this setup, the Terraform state file is managed internally by Resource Manager.

Thus, every commit pushed to the DevOps repository triggers the Resource Manager stack to create resources in OCI.

Benefits of This Approach

  • Fully automated infrastructure provisioning

  • Terraform state managed securely by OCI

  • CI/CD driven infrastructure changes

  • Repeatable, auditable deployments

  • Reduced manual errors


Conclusion

By integrating OCI DevOps, OCI Resource Manager, and Terraform, you can achieve a powerful Infrastructure as Code (IaC) pipeline on Oracle Cloud. This setup is ideal for enterprises looking to standardize cloud provisioning with governance, automation, and scalability.


Saturday, January 10, 2026

Using Oracle SQLcl MCP Server with Oracle 19c: A Step-by-Step Guide for NLP-Based Database Queries

 

Introduction

With the rapid evolution of AI, databases are no longer limited to traditional SQL-only interactions. Oracle has taken a major step forward by introducing MCP (Model Context Protocol) support in SQLcl, allowing AI tools like Claude Desktop to interact directly with Oracle databases using natural language.

In this blog, I’ll walk you through a hands-on, end-to-end setup of Oracle SQLcl MCP Server with an on-prem / OCI-hosted Oracle 19c database, and show how conversational AI can query enterprise databases securely.

This guide is ideal for Oracle DBAs, Cloud Architects, and AI-curious professionals who want to explore NLP-driven database access.


Image source: https://blogs.oracle.com/database/introducing-mcp-server-for-oracle-database


Architecture Overview

AI Client (Claude Desktop)
⬇️ MCP Protocol
SQLcl MCP Server (Local Machine)
⬇️ JDBC
Oracle Database 19c (OCI / On-Prem)

The AI never connects to the database directly. SQLcl acts as a secure MCP bridge, translating natural language into database operations.


Prerequisites

Before starting, ensure you have:

  • Oracle Database 19c (On-Prem or OCI Compute VM)

  • Windows laptop or desktop

  • Internet access to download tools

  • Basic Oracle SQL knowledge


Step 1: Install JDK 17 (Required for SQLcl)

Oracle SQLcl requires Java 17.

  • Download JDK 17 for Windows from Oracle

  • Install using the .exe

  • Set JAVA_HOME and update PATH
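
On Windows this can be done from a command prompt (a sketch; the JDK install path is an assumption, and setx changes take effect in new shells):

setx JAVA_HOME "C:\Program Files\Java\jdk-17"
setx PATH "%PATH%;%JAVA_HOME%\bin"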

Verify:

java -version

Step 2: Install Oracle SQLcl

  • Download SQLcl from Oracle

  • Unzip it to a directory (example):

    C:\AI\sqlcli

SQLcl is portable—no installer required.


Step 3: Install Claude Desktop

Claude Desktop will act as the AI MCP client.

  • Download Claude Desktop

  • Install and launch once

  • Close it before MCP configuration


Step 4: Prepare Oracle Database 19c

Verify PDBs

show pdbs;

Ensure your PDB (e.g., ORCLPDB) is in READ WRITE mode.

Listener and Network Setup

  • Ensure port 1521 is open

  • Disable firewall (lab use only):

systemctl stop firewalld
systemctl disable firewalld
  • Confirm connectivity from Windows:

Test-NetConnection <DB_PUBLIC_IP> -Port 1521

Step 5: Create SQLcl Connection

Launch SQLcl:

sql /nolog

Create and save a connection:

conn -save oracle19c_mcptest -savepwd system/password@<IP>:1521/ORCLPDB

Validate:

CONNMGR test oracle19c_mcptest

Step 6: Start SQLcl MCP Server

sql -mcp -name oracle19c_mcptest

You should see:

MCP Server started successfully

This process must remain running.


Step 7: Configure Claude Desktop for MCP

Edit Claude configuration file:

{
  "mcpServers": {
    "oracle19c": {
      "command": "C:/AI/sqlcli/sqlcl/sqlcl/bin/sql.exe",
      "args": ["-mcp", "-name", "oracle19c_mcptest"]
    }
  }
}

Restart Claude Desktop and allow MCP access when prompted.


Step 8: Follow Least Privilege (Best Practice)

Instead of SYSTEM, create an application user:

CREATE USER app_user IDENTIFIED BY password;
GRANT CREATE SESSION, CREATE TABLE TO app_user;

Create sample data:

CREATE TABLE sales_orders (...);
INSERT INTO sales_orders VALUES (...);
COMMIT;
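
If you want a concrete toy schema to test with (purely illustrative; column names and values are made up):

CREATE TABLE sales_orders (
  order_id  NUMBER PRIMARY KEY,
  customer  VARCHAR2(50),
  amount    NUMBER
);
INSERT INTO sales_orders VALUES (1, 'Acme Corp', 2500);
INSERT INTO sales_orders VALUES (2, 'Globex', 1200);
COMMIT;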

Create a separate SQLcl MCP connection for this user.

This ensures:

  • AI only sees approved schemas

  • SYS/SYSTEM access is avoided


Step 9: Test NLP Queries via Claude

Now the magic ✨

Ask Claude:
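
(For example: "Show me the total order amount per customer from the sales_orders table." Any plain-English question works.)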



Claude:

  • Understands intent

  • Calls SQLcl MCP

  • Executes SQL

  • Returns results

No SQL typing required.



Security Considerations

✔ SQLcl connections are local-only
✔ Credentials stored in user profile
✔ Secure with OS file permissions
✔ Use separate DB users
✔ Optional: Oracle Wallet for credentials

AI never gets raw database access.


Why This Matters

This setup demonstrates:

  • Conversational AI for ad-hoc querying

  • AI + Oracle DB without exposing credentials

  • Perfect for DBAs, Support, and Architects


Final Thoughts

Oracle SQLcl MCP Server bridges the gap between enterprise databases and modern AI—securely, locally, and powerfully.

If you’re running Oracle 19c today, you can already start experimenting with conversational data access.