oracle – RedStack
https://redstack.dev
Microservices, Cloud Native and AI – Fri, 06 Mar 2026 02:01:34 +0000
Exploring some new Helidon features – Data Repositories and SE Declarative
https://redstack.dev/2026/03/05/exploring-some-new-helidon-features-data-repositories-and-se-declarative/
Fri, 06 Mar 2026 01:58:29 +0000

With Helidon 4.4.0 right around the corner, I’ve been spending some time playing with the latest milestone release (4.4.0-M2). If you haven’t been following along, Helidon 4 was a major milestone because it was the first framework built from the ground up on Java 21 virtual threads.
Now, with 4.4, we are seeing some really cool incubating features becoming more stable including Helidon Data Repositories, Helidon SE Declarative and Helidon AI. I am working on a sample project called Helidon-Eats to show how these work together. In this installment, I am looking at the first two.
The “No-Magic” Power of Helidon SE Declarative
If you’ve used Helidon MP (MicroProfile), or if you are more familiar with Spring Boot’s Inversion of Control approach (like me), then you’re used to the convenience of dependency injection and annotations. Helidon SE, on the other hand, has always focused on transparency and avoiding “magic.” While that’s great for performance, it usually meant writing more boilerplate code to register routes and manage services manually.
Helidon SE Declarative changes that. It gives you an annotation-driven model like MP, but here is the trick: it does everything at build-time. Using Java annotation processors, Helidon generates service descriptors during compilation. This means you get the clean, injectable code you want, but without the runtime reflection overhead that slows down startup and eats memory. Benchmarks have even shown performance gains of up to +295% over traditional reflection-based models on modern JDKs (see this article).
Now, to be completely fair, I will say that I am not completely sold on the build-time piece yet. For example, the @GenerateBinding annotation (if I am not mistaken) causes an ApplicationBinding class to be generated at build time, and that lives in your target/classes directory. I found during refactoring that you have to be careful to mvn clean each time to make sure it stays in sync with your code; just doing a mvn compile or package could get you into trouble. And I am not sure I am happy with it not being checked into the source code repository. But, I’ll withhold judgment until I have worked with it a bit more!
Simplifying Persistence with Helidon Data
The Helidon Data Repository is another big addition. It’s a high-level abstraction that acts as a compile-time alternative to heavy runtime ORM frameworks. Instead of writing JDBC code, you define a Java interface, and Helidon’s annotation processor generates the implementation for you.
It supports standard patterns like CrudRepository and PageableRepository, which I used in this project to handle the recipe collection. The framework can even derive queries directly from your method names (like Spring Data does) – so a method like findById is automatically turned into the correct SQL at build-time.
The Backend: Oracle AI Database 26ai
For this sample, I’m using Oracle AI Database 26ai Free. I sourced some public domain recipe data from Kaggle that comes as a line-by-line JSON file (LDJSON).
Normally, if you want to store hierarchical JSON in relational tables, you have to write complex mapping logic in your application. But I wanted to try a more novel approach using JSON Relational Duality Views (DV).
Duality Views are a game-changer because they decouple how data is stored from how it is accessed. My data is stored in three normalized tables (RECIPE, INGREDIENT, and DIRECTION) which ensures ACID consistency and no data duplication. The database can surface this data to applications as a single, hierarchical JSON document. I am not using that feature in this post, but I will in the future!
GraphQL-Based View Creation and Loading
One of the coolest parts of Oracle AI Database 26ai is that you can define these views using a GraphQL-based syntax. The database engine automatically figures out the joins based on the foreign key relationships.
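To give you a feel for it, here is a sketch of what such a view can look like for this schema (the column names and nested shape are illustrative, not my exact code):

```sql
-- Sketch only: column names and the nested layout are illustrative.
-- The @insert/@update annotations mark the view (and child tables) updatable.
create or replace json relational duality view recipe_dv as
  recipe @insert @update
  {
    _id         : id
    recipeTitle : title
    ingredients : ingredient @insert @update
      [ { ingredientId : id
          text         : ingredient_text } ]
    directions  : direction @insert @update
      [ { directionId : id
          text        : direction_text } ]
  };
```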
Isn’t that just the cleanest piece of SQL that deals with JSON that you’ve ever seen?
Because the view is “updatable” (@insert, @update), I used it to actually load the data. Instead of a complex ETL process, my startup script just reads the LDJSON file line-by-line and does a simple SQL insert directly into the view. The database engine takes that single JSON object and automatically decomposes it into rows for the three underlying tables.
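As a sketch, assuming a duality view named recipe_dv, the load step is essentially just this, once per line of the file:

```sql
-- One LDJSON line = one JSON document; inserting it into the view
-- decomposes it into rows of the RECIPE, INGREDIENT and DIRECTION tables.
insert into recipe_dv (data) values (:jsonLine);
```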
Modeling the Service
On the Java side, I modeled the Recipe entity to handle the parent-child relationships using standard @OneToMany collections.
One detail I want to highlight is the use of @JsonbTransient. When you build a REST API, you often have internal metadata like database primary keys or sort ordinals that you don’t want messing up the JSON that the end user gets to see. By annotating those fields with @JsonbTransient, they are excluded from the final JSON response. This keeps the API response clean and focused only on the recipe data.
@Entity
@Table(name = "RECIPE")
public class Recipe {
@Id
@Column(name = "ID")
private Long recipeId;
private String recipeTitle;
@JsonbTransient
private Long internalId; // Hidden from the API
@OneToMany(mappedBy = "recipe")
private List<Ingredient> ingredients;
//...
}
In the repository object, you can use the method naming conventions to automatically create queries (like Spring Data) and you can also write your own JPQL (not SQL) queries, as I did in this case (also like Spring Data):
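As a rough sketch of the shape (the annotation and base-interface names here are from memory of the incubating Helidon Data API and may differ slightly in 4.4):

```java
// Sketch only: annotation spellings are illustrative, not the exact API.
@Data.Repository
public interface RecipeRepository extends Data.CrudRepository<Recipe, Long> {

    // Derived query: the implementation is generated from the method name
    List<Recipe> findByCategory(String category);

    // Hand-written JPQL (not SQL), much like Spring Data's @Query
    @Data.Query("SELECT r FROM Recipe r WHERE r.recipeTitle LIKE :pattern")
    List<Recipe> searchByTitle(String pattern);
}
```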
The configuration is handled in the application.yaml, where I point Helidon to the Oracle instance using syntax that again is very reminiscent of what I’d do in Spring Boot.
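A rough sketch of the kind of thing I mean (the property keys and connection details here are illustrative, not the exact schema):

```yaml
# Illustrative only – the exact keys come from the Helidon Data config schema
data:
  data-source:
    url: jdbc:oracle:thin:@localhost:1521/FREEPDB1
    username: helidon_eats
    password: ${DB_PASSWORD}
```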
With Declarative SE, injecting the repository into my endpoint is simple. I just use @Service.Inject on the constructor, which allows me to keep my fields private final.
@Service.Singleton
@Http.Path("/recipe")
public class RecipeEndpoint {
private final RecipeRepository repository;
@Service.Inject
public RecipeEndpoint(RecipeRepository repository) {
this.repository = repository;
}
@Http.GET
@Http.Path("/{id}")
public Optional<Recipe> getRecipe(Long id) {
return repository.findById(id);
}
}
Finally, the Main class uses @Service.GenerateBinding. This tells the annotation processor to generate the “wiring” code that starts the server and initializes the service registry without needing to scan the classpath at runtime.
@Service.GenerateBinding
public class Main {
public static void main(String[] args) {
LogConfig.configureRuntime();
ServiceRegistryManager.start(ApplicationBinding.create());
}
}
In this context, the “service registry” is something in Helidon that keeps track of the services in the application and handles injection and so on. It’s a lot like the way Spring Boot scans for beans and wires/injects them where needed.
The Result
When you hit the service, you get a clean, well structured JSON response that masks all the complexity of the underlying three-table relational join.
Example Response for http://localhost:8080/recipe/22387:
{
"category": "Appetizers And Snacks",
"description": "I came up with this rhubarb salsa while trying to figure out what to do with an over-abundance of rhubarb...",
"directions": [ ... ],
"ingredients": [
"2 cups thinly sliced rhubarb",
"1 small red onion, coarsely chopped",
"3 roma (plum) tomatoes, finely diced"
],
"recipeId": 22387,
"recipeTitle": "Tangy Rhubarb Salsa",
"subcategory": "Salsa"
}
Wrap Up
Helidon 4.4 is making the SE flavor feel a lot more like a high-productivity framework without sacrificing performance. By shifting the data transformation logic to the database with Duality Views and using build-time code generation for injection, we can build services that are both incredibly fast and easy to maintain.
Now, you may have noticed that I said “like Spring” a lot in this post – and that’s because of two things – I do happen to use Spring a lot more than I use Helidon, and I like it. So I am very happy that Helidon is looking more like Spring, it makes it a lot easier to switch between the two, and I think it lowers the barrier to entry for people who are coming from the Spring world.
Grab the code from the Helidon-Eats repo and let me know what you think – and stay tuned for the next steps as I explore Helidon AI!
Custom vector distance functions in Oracle (using JavaScript)
https://redstack.dev/2025/10/20/custom-vector-distance-functions-and-hybrid-vector-search-in-oracle-using-javascript/
Mon, 20 Oct 2025 16:00:42 +0000

In case you missed it, Oracle Database 26ai was announced last week at Oracle AI World, with a heap of new AI features and capabilities like hybrid vector search, MCP server support, acceleration with NVIDIA and much more – check the link for details.
Of course, I wanted to check it out, and I was thinking about what to do first. I remembered this LinkedIn post from Anders Swanson about implementing custom vector distance functions in Oracle using the new JavaScript capabilities, and I thought that could be something interesting to do, so I am going to show you how to implement and use Jaccard distance for dense vector embeddings for similarity searches.
Now, this is a slightly contrived example, because I am more interested in showing you how to add a custom metric than in the actual metric itself. I chose Jaccard because the actual implementation is pretty compact.
Now, Oracle does already include Jaccard distance, but only for the BINARY data type, which is where Jaccard is mostly used. But there is a version that can be used for continuous/real-valued vectors as well (this version is for dense vectors), and that is what we will implement.
This is the formula for Jaccard similarity for continuous vectors. This is also known as the Tanimoto coefficient. It is the intersection divided by the union (or zero if the union is zero):
To get the Jaccard distance, we just subtract the Jaccard similarity from one.
Before we start, let’s look at a two-dimensional example to get a feel for how it works. Of course, the real vectors created by embedding models have many more dimensions, but it is hard for us to visualize more than two or three dimensions without also introducing techniques like dimensionality reduction and projection.
Here we have two vectors A [5 8] and B [7 4]:
The union is calculated using the max values, as you see in the formula above; in the figure it is visualized by the pink-shaded 7×8 area. The intersection is calculated with the min values, visualized by the green 5×4 area.
So in this example, the intersection is min(5,7) + min(8,4) = 5 + 4 = 9 and the union is max(5,7) + max(8,4) = 7 + 8 = 15, giving a Jaccard similarity of 9 / 15 = 0.6
And so the Jaccard distance is 1 – 0.6 = 0.4
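You can sanity-check that arithmetic with a few lines of standalone Python (just a reference sketch, not the database implementation we are about to write):

```python
def jaccard_distance(a, b):
    """Jaccard (Tanimoto) distance for continuous vectors."""
    if len(a) != len(b):
        raise ValueError("Vectors must have same length")
    # Sum the element-wise mins (intersection) and maxes (union)
    intersection = sum(min(x, y) for x, y in zip(a, b))
    union = sum(max(x, y) for x, y in zip(a, b))
    if union == 0:
        return 0.0  # convention for two all-zero vectors
    return 1 - intersection / union

print(jaccard_distance([5, 8], [7, 4]))  # prints 0.4 (similarity 9/15 = 0.6)
```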
Ok, now that we have some intuition about how this distance metric works, let’s implement it in Oracle.
Start up an Oracle Database
First, let’s fire up Oracle Database Free 26ai in a container:
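If you want to follow along, the command looks something like this (the password is a placeholder, and check the registry for the current tag):

```shell
docker run -d --name db26ai \
  -p 1521:1521 \
  -e ORACLE_PWD=Welcome12345 \
  container-registry.oracle.com/database/free:latest
```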
This will pull the latest image, which at the time of writing is 26ai (version tag 23.26.0.0). You can check the logs to see when startup is complete; you’ll see a message “DATABASE IS READY TO USE”:
docker logs -f db26ai
Let’s create a user called vector with the necessary privileges:
docker exec -i db26ai sqlplus sys/Welcome12345@localhost:1521/FREEPDB1 as sysdba <<EOF
alter session set container=FREEPDB1;
create user vector identified by vector;
grant connect, resource, unlimited tablespace, create credential, create procedure, create mle, create any index to vector;
commit;
EOF
Now you can connect with your favorite client. I am going to use Oracle SQL Developer for VS Code. See the link for install instructions.
Implement the custom distance function
Open up an SQL Worksheet, or run this in your tool of choice:
create or replace function jaccard_distance("a" vector, "b" vector)
return binary_double
deterministic parallel_enable
as mle language javascript pure {{
// check the vectors are the same length
if (a.length !== b.length) {
throw new Error('Vectors must have same length');
}
let intersection = 0;
let union = 0;
for (let i = 0; i < a.length; i++) {
intersection += Math.min(a[i], b[i]);
union += Math.max(a[i], b[i]);
}
// handle the case where union is zero (all-zero vectors)
if (union === 0) {
return 0;
}
const similarity = intersection / union;
return 1 - similarity;
}};
/
Let’s walk through this. First, you see that we are creating a function called jaccard_distance which accepts two vectors (a and b) as input and returns a binary_double. This function signature is required for distance functions. Next we must include the deterministic keyword, and we have also included the parallel_enable keyword so that this function could be used with HNSW vector indexes. For the purposes of this example, you can just ignore those or assume that they are just needed as part of the function signature.
Next you see that we mention this will be an MLE function written in JavaScript, and we added the pure keyword to let the database know that this is a pure function – meaning it has no side effects, it will not update any data, and its output will always be the same for a given set of inputs (i.e., that it is memoizable).
Then we have the actual implementation of the function. First, we check that the vectors have the same length (i.e., the same number of dimensions) which is required for this calculation to be applicable.
Then we work through the vectors and collect the minimums and maximums to calculate the intersection and the union.
Next, we check if the union is zero, and if so we return zero to handle that special case. And finally, we calculate the similarity, then subtract it from one to get the distance and return that.
Using our custom distance function
Great, so let’s test our function. We can start by creating a table t1 to store some vectors:
create table t1 (
id number,
v vector(2, float32)
);
And let’s add a couple of vectors, including the one we saw in the example above [5 8]:
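Something along these lines will do it (I am assuming the second vector is the [7 4] from the worked example, so we know what distances to expect):

```sql
insert into t1 values (1, '[5, 8]');
insert into t1 values (2, '[7, 4]');
commit;

-- distance from [5, 8] to each row: row 1 should be 0, row 2 should be 0.4
select id, jaccard_distance(v, to_vector('[5, 8]')) as dist from t1;
```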
Create a new Jupyter Notebook using File > New File… then choose Jupyter Notebook as the type of file, and save your new file as jaccard.ipynb.
First, we need to set up the Python runtime environment. Click on the Select Kernel button (it’s on the top right). Select Python Environment then Create Python Environment. Select the option to create a Venv (Virtual Environment) and choose your Python interpreter. I recommend using at least Python 3.11. This will download all the necessary files and will take a minute or two.
Now, let’s install the libraries we will need – enter this into a cell and run it:
%pip install oracledb sentence-transformers
Now, connect to the same Oracle database (again, enter this into a cell and run it):
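The connection looks something like this, assuming the vector user and default port mapping from the container we started earlier:

```python
import oracledb

# Connect as the "vector" user created above; localhost:1521/FREEPDB1
# matches the container's default port mapping and PDB name.
connection = oracledb.connect(
    user="vector",
    password="vector",
    dsn="localhost:1521/FREEPDB1",
)
print(connection.version)
```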
Let’s create a table to hold 1024 dimension vectors that we will create with the mxbai-embed-large-v1 embedding model. Back in your SQL Worksheet, run this statement:
create table t2 (
id number,
v vector(1024, float32)
);
Ok, now let’s create some embeddings. Back in your notebook, create a new cell with this code:
import oracledb
from sentence_transformers import SentenceTransformer
# Initialize the embedding model
print("Loading embedding model...")
model = SentenceTransformer('mixedbread-ai/mxbai-embed-large-v1')
# Your text data
texts = [
"The quick brown fox jumps over the lazy dog",
"Machine learning is a subset of artificial intelligence",
"Oracle Database 23ai supports vector embeddings",
"Python is a popular programming language",
"Embeddings capture semantic meaning of text"
]
# Generate embeddings
print("Generating embeddings...")
embeddings = model.encode(texts)
Let’s discuss what we are doing in this code. First, we are going to download the embedding model using the SentenceTransformer. Then, we define a few simple texts that we can use for this example and use the embedding model to create the vector embeddings for those texts.
If you want to see what the embeddings look like, just enter “embeddings” in a cell and run it. In the output you can see the shape is 5 (rows) with 1024 dimensions and the type is float32.
Now, let’s insert the embeddings into our new table t2:
import array
cursor = connection.cursor()
# Insert data
for i in range(len(embeddings)):
cursor.execute("""
INSERT INTO t2 (id, v)
VALUES (:1, :2)
""", [i, array.array('f', embeddings[i].tolist())])
connection.commit()
print(f"Successfully inserted {len(texts)} records")
You can take a look at the vectors using this simple query (back in your SQL Worksheet), which will show the ids and their embeddings:
select * from t2;
And, now let’s try our distance function with these vectors. Back in your notebook, run this cell. I’ve included the built-in cosine distance as well, just for comparison purposes:
query = array.array('f', model.encode("Antarctica is the driest continent").tolist())
cursor = connection.cursor()
cursor.execute("""
select
id,
jaccard_distance(v, :1),
vector_distance(v, :2, cosine)
from t2
order by id
""", [query, query])
for row in cursor:
print(f"id: {row[0]} has jaccard distance: {row[1]} and cosine distance: {row[2]}")
cursor.close()
Your output will look something like this:
id: 0 has jaccard distance: 2.0163214889484307 and cosine distance: 0.7859490566650003
id: 1 has jaccard distance: 2.0118706751976925 and cosine distance: 0.6952327173906239
id: 2 has jaccard distance: 2.0152858933816775 and cosine distance: 0.717824211314015
id: 3 has jaccard distance: 2.0216149035530537 and cosine distance: 0.6455277387099003
id: 4 has jaccard distance: 2.0132575761281766 and cosine distance: 0.6962028121886988
Well, there you go! We implemented and used a custom vector distance function. Enjoy!
Getting started with Oracle Vector Store support in LangChain
https://redstack.dev/2025/05/16/getting-started-with-oracle-vector-store-support-in-langchain/
Fri, 16 May 2025 22:00:53 +0000

In this post, I would like to show you the basics of how to use the Oracle Vector Store support in LangChain. I am using Visual Studio Code with the Python and Jupyter extensions from Microsoft installed. I will show more detailed usage in future posts!
Prefer to watch a video? Check it out here:
To get started, create a new project in Visual Studio Code, and then create a new Jupyter Notebook using File > New File… then choose Jupyter Notebook as the type of file, and save your new file as getting_started.ipynb.
First, we need to set up the Python runtime environment. Click on the Select Kernel button (it’s on the top right). Select Python Environment then Create Python Environment. Select the option to create a Venv (Virtual Environment) and choose your Python interpreter. I recommend using at least Python 3.11. This will download all the necessary files and will take a minute or two.
In this example, we will use OpenAI for our chat model. You’ll need to get an API Key from OpenAI, which you can do by logging into https://platform.openai.com/settings/organization/api-keys and creating a key. Of course you could use a different model, including a self-hosted model so that you don’t have to send your data outside your organization. I’ll cover that in future posts, stay tuned!
In the first cell, check that the type is Python and enter this code:
%pip install -qU "langchain[openai]"
Press Shift+Enter or click on the Run icon to run this code block. This will also take a minute or so to install the LangChain library for OpenAI.
Now create a second cell and paste in this code:
import getpass
import os
if not os.environ.get("OPENAI_API_KEY"):
os.environ["OPENAI_API_KEY"] = getpass.getpass("Enter API key for OpenAI: ")
from langchain.chat_models import init_chat_model
model = init_chat_model("gpt-4o-mini", model_provider="openai")
Run this block, and when it prompts you for your key, paste in the key; it will start with something like sk-proj and have a long string of mostly letters and numbers after that. This will save your key in the environment so that you don’t have to keep entering it each time.
Now, we are ready to talk to the LLM! Let’s try a simple prompt. Create a new cell and enter this code:
model.invoke("Hello, world!")
Run this cell and observe the output. It should look something like this:
Great, now we are ready to connect to a vector store. If you don’t already have one, start up an instance of Oracle Database 23ai in a container on your machine. Run this command in a terminal window (not the notebook):
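The command looks something like this (the password here matches the one mentioned below; check the registry for the current tag):

```shell
docker run -d --name db23ai \
  -p 1521:1521 \
  -e ORACLE_PWD=Welcome12345 \
  container-registry.oracle.com/database/free:latest
```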
This will start up an Oracle Database 23ai Free instance in a container. It will have a PDB called FREEPDB1 and the password for PDBADMIN (and SYS and SYSTEM) will be Welcome12345.
Now, run the following command to create an Oracle user with appropriate permissions to create a vector store:
docker exec -i db23ai sqlplus sys/Welcome12345@localhost:1521/FREEPDB1 as sysdba <<EOF
alter session set container=FREEPDB1;
create user vector identified by vector;
grant connect, resource, unlimited tablespace, create credential, create procedure, create any index to vector;
commit;
EOF
Let’s connect to the database! First we’ll install the oracledb library. Create a new cell and enter this code:
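The install cell is just a pip line; I am including the LangChain packages here as well since the next cell imports them:

```python
%pip install -qU oracledb langchain_community langchain_core langchain_huggingface
```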
Now, import the things we will need by creating a new cell with this code and running it:
from langchain_community.vectorstores import oraclevs
from langchain_community.vectorstores.oraclevs import OracleVS
from langchain_community.vectorstores.utils import DistanceStrategy
from langchain_core.documents import Document
from langchain_huggingface import HuggingFaceEmbeddings
We are going to need some documents to load into the vector store, so let’s define some to use for an example. In real life, you’d probably want to use your own non-public documents to load a vector store if you were building a chatbot or using retrieval augmented generation. Create and run a new cell with this code:
documents_json_list = [
{
"id": "moby_dick_2701_P1",
"text": "Queequeg was a native of Rokovoko, an island far away to the West and South. It is not down in any map; true places never are.",
"link": "https://www.gutenberg.org/cache/epub/2701/pg2701-images.html#link2HCH0012",
},
{
"id": "moby_dick_2701_P2",
"text": "It was not a great while after the affair of the pipe, that one morning shortly after breakfast, Ahab, as was his wont, ascended the cabin-gangway to the deck. There most sea-captains usually walk at that hour, as country gentlemen, after the same meal, take a few turns in the garden.",
"link": "https://www.gutenberg.org/cache/epub/2701/pg2701-images.html#link2HCH0036",
},
{
"id": "moby_dick_2701_P3",
"text": "Now, from the South and West the Pequod was drawing nigh to Formosa and the Bashee Isles, between which lies one of the tropical outlets from the China waters into the Pacific. And so Starbuck found Ahab with a general chart of the oriental archipelagoes spread before him; and another separate one representing the long eastern coasts of the Japanese islands—Niphon, Matsmai, and Sikoke. ",
"link": "https://www.gutenberg.org/cache/epub/2701/pg2701-images.html#link2HCH0109",
},
]
Now, let’s load them into a LangChain documents list with some metadata. Create and run a cell with this code:
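Putting that together looks roughly like this (the embedding model here is illustrative – any 768-dimension sentence-transformers model matches the VECTOR(768, FLOAT32) column you’ll see in a moment):

```python
import oracledb

# Connect as the "vector" user created above
connection = oracledb.connect(
    user="vector", password="vector", dsn="localhost:1521/FREEPDB1"
)

# Wrap each JSON entry in a LangChain Document with its metadata
documents = [
    Document(
        page_content=doc["text"],
        metadata={"id": doc["id"], "link": doc["link"]},
    )
    for doc in documents_json_list
]

# all-mpnet-base-v2 is one model that produces 768-dimension embeddings
embeddings = HuggingFaceEmbeddings(model_name="sentence-transformers/all-mpnet-base-v2")

# Create the DOCUMENTS_COSINE table and load the embedded documents
vector_store = OracleVS.from_documents(
    documents,
    embeddings,
    client=connection,
    table_name="DOCUMENTS_COSINE",
    distance_strategy=DistanceStrategy.COSINE,
)
```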
Let’s have a look in the database and see what was created. Run this code in your terminal:
docker exec -i db23ai sqlplus vector/vector@localhost:1521/FREEPDB1 <<EOF
select table_name from user_tables;
describe documents_cosine;
column id format a20;
column text format a30;
column metadata format a30;
column embedding format a30;
set linesize 150;
select * from documents_cosine;
EOF
You should see output similar to this:
SQL>
TABLE_NAME
--------------------------------------------------------------------------------
DOCUMENTS_COSINE
SQL> Name Null? Type
----------------------------------------- -------- ----------------------------
ID NOT NULL RAW(16)
TEXT CLOB
METADATA JSON
EMBEDDING VECTOR(768, FLOAT32)
SQL> SQL> SQL> SQL> SQL> SQL>
ID TEXT METADATA EMBEDDING
-------------------- ------------------------------ ------------------------------ ------------------------------
957B602A0B55C487 Now, from the South and West t {"id":"moby_dick_2701_P3","lin [9.29364376E-003,-5.70030287E-
he Pequod was drawing nigh to k":"https://www.gutenberg.org/ 002,-4.62282933E-002,-1.599499
Formosa and the Bash cache/epub/2701/pg27 58E-002,
A8A71597D56432FD Queequeg was a native of Rokov {"id":"moby_dick_2701_P1","lin [4.28722538E-002,-8.80071707E-
oko, an island far away to the k":"https://www.gutenberg.org/ 003,3.56001826E-003,6.765306E-
West and South. It cache/epub/2701/pg27 003,
E7675836CF07A695 It was not a great while after {"id":"moby_dick_2701_P2","lin [1.06763924E-002,3.91203648E-0
the affair of the pipe, that k":"https://www.gutenberg.org/ 04,-1.01576066E-002,-3.5316135
one morning shortly cache/epub/2701/pg27 7E-002,
Now, let’s do a vector similarity search. Create and run a cell with this code:
query = 'Where is Rokovoko?'
print(vector_store.similarity_search(query, 1))
query2 = 'What does Ahab like to do after breakfast?'
print(vector_store.similarity_search(query2, 1))
This will find the one (1) nearest match in each case. You should get an answer like this:
[Document(metadata={'id': 'moby_dick_2701_P1', 'link': 'https://www.gutenberg.org/cache/epub/2701/pg2701-images.html#link2HCH0012'}, page_content='Queequeg was a native of Rokovoko, an island far away to the West and South. It is not down in any map; true places never are.')]
[Document(metadata={'id': 'moby_dick_2701_P2', 'link': 'https://www.gutenberg.org/cache/epub/2701/pg2701-images.html#link2HCH0036'}, page_content='It was not a great while after the affair of the pipe, that one morning shortly after breakfast, Ahab, as was his wont, ascended the cabin-gangway to the deck. There most sea-captains usually walk at that hour, as country gentlemen, after the same meal, take a few turns in the garden.')]
Well, there you go, that’s the most basic example of creating a vector store, loading some documents into it and doing a simple similarity search. Stay tuned to learn about more advanced features!
Running Oracle Autonomous Database in a container
https://redstack.dev/2025/04/25/running-oracle-autonomous-database-in-a-container/
Fri, 25 Apr 2025 14:51:04 +0000

Did you know that you can easily run Oracle Autonomous Database in a container on your local machine? This is great for development. It’s totally free, and you don’t even need to authenticate to pull the image. It also includes Oracle REST Data Services, APEX, Database Actions and the MongoDB API, so you get a nice built-in browser-based UI to work with your database. The free version does have a 20GB limit on database size, but for development purposes, that’s fine.
To start up a database, you can use this command, just replace the “xxxxxx”s with proper passwords. Note that the volume is needed so data will be persisted across container restarts, if you leave that out, you’ll get a new empty database every time you restart the container:
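The command looks something like this (the image tag, ports and volume name are illustrative – check the registry page for the current ones):

```shell
docker run -d --name adb-free \
  -p 1521:1522 -p 8443:8443 \
  -e WORKLOAD_TYPE=ATP \
  -e WALLET_PASSWORD=xxxxxx \
  -e ADMIN_PASSWORD=xxxxxx \
  -v adb_container_volume:/u01/data \
  container-registry.oracle.com/database/adb-free:latest-23ai
```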
Here’s what the Database Actions login page looks like, you can log in with the user “admin” and the password you specified:
When you sign in, you will see the launchpad, from where you can access various tools:
For example, you could open the SQL tool and try executing a statement:
You may also want to connect to your database using other tools like Oracle SQL Developer (which is a Visual Studio Code extension) or SQLcl (which is a command line tool), or from a program. To do this, you will probably want to grab the wallet, read on!
Connecting to the database
If you want to use mTLS, you can get the wallet by copying it from the image using this command, just provide the desired destination path in the last argument:
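That copy is just a docker cp out of the running container, something like this (the container name and wallet path are illustrative):

```shell
docker cp adb-free:/u01/app/oracle/wallets/tls_wallet /path/to/wallet
```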
Note that the address will be ‘localhost’ in the tnsnames.ora, so you will need to update that if necessary.
To use the wallet, set your TNS_ADMIN environment variable:
export TNS_ADMIN=/path/to/wallet
The following TNS aliases are provided, for mTLS:
myatp_medium
myatp_high
myatp_low
myatp_tp
myatp_tpurgent
And for TLS:
myatp_medium_tls
myatp_high_tls
myatp_low_tls
myatp_tp_tls
myatp_tpurgent_tls
Here’s an example of connecting with SQLcl:
$ TNS_ADMIN=/path/to/wallet sql admin/xxxxxx@myatp_high
SQLcl: Release 24.1 Production on Fri Apr 25 10:41:46 2025
Copyright (c) 1982, 2025, Oracle. All rights reserved.
Last Successful login time: Fri Apr 25 2025 10:41:48 -04:00
Connected to:
Oracle Database 23ai Enterprise Edition Release 23.0.0.0.0 - Production
Version 23.6.0.24.11
SQL> select sysdate;
SYSDATE
____________
25-APR-25
Here’s an example of connecting from SQL Developer. When you create the connection, just choose the location of the wallet (tnsnames.ora file) and it will let you select the TNS name to connect to:
Enjoy!
How to read the content of a JMS message using PL/SQL
https://redstack.dev/2023/05/26/how-to-read-the-content-of-a-jms-message-using-pl-sql/
Fri, 26 May 2023 17:50:57 +0000

This is just a short post – but all the details are in this post from Rob Van Wijk.
Today I wanted to read the contents of a JMS Text Message sitting in a queue. I wrote a Spring Boot microservice that sends a message, and I have not written the one that receives and processes the message yet, so I wanted to look at the message on the queue to check it was correct.
So I went and did a good old “select user_data from deposits_qt” and stared at the answer: “Object”. Hmmm, not what I wanted.
After a quick bit of Googling, I found Rob’s post which told me exactly what I needed to know. Yay! Thanks Rob!
Then I changed my query to this:
select qt.user_data.text_vc from account.deposits_qt qt;
And I got exactly what I needed:
{"accountId":2,"amount":200}
Fantastic! Thanks a lot Rob!
Start up an Oracle Database in Kubernetes with Oracle REST Data Services and Database Actions in no time at all!
https://redstack.dev/2023/05/01/start-up-an-oracle-database-in-kubernetes-with-oracle-rest-data-services-and-database-actions-in-no-time-at-all/
Mon, 01 May 2023 19:33:11 +0000

Hi everyone! Today I want to show you how easy it is to get an instance of Oracle up and running in Kubernetes, with Oracle REST Data Services and Database Actions, using the Oracle Database Operator for Kubernetes.
Let’s assume you have a Kubernetes cluster running and you have configured kubectl access to the cluster.
The first step is to install Cert Manager, which is a pre-requisite for the Oracle Database Operator:
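For example, something like the following (the version number and URLs are illustrative – check the cert-manager releases page and the Oracle Database Operator README for the current ones):

```shell
# Install cert-manager (version shown is an example; use the latest release)
kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.11.0/cert-manager.yaml

# Then install the Oracle Database Operator itself
kubectl apply -f https://raw.githubusercontent.com/oracle/oracle-database-operator/main/oracle-database-operator.yaml
```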
That will start up pretty quickly too, and you can check with this command:
kubectl -n oracle-database-operator-system get pods
Let’s create a Single Instance Database. The Oracle Database Operator will let you create other types of databases too, including sharded and multitenant databases, and to manage cloud database instances like Autonomous Database and Database Cloud Service. But today, I’m going to stick with a simple single instance.
Here’s the Kubernetes YAML file to describe the database we want, I called this sidb.yaml:
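A minimal sketch of what sidb.yaml can look like, based on the operator's published sample manifests – the field names and the secret names here are placeholders, so check the Oracle Database Operator documentation for the exact schema for your version:

```yaml
# sidb.yaml – sketch only; verify field names against your operator version
apiVersion: database.oracle.com/v1alpha1
kind: SingleInstanceDatabase
metadata:
  name: sidb-sample
spec:
  sid: ORCL1
  pdbName: orclpdb1
  edition: enterprise
  adminPassword:
    secretName: db-admin-secret      # placeholder secret name
  image:
    pullFrom: container-registry.oracle.com/database/enterprise:latest
    pullSecrets: oracle-container-registry-secret
  persistence:
    size: 100Gi
    storageClass: "oci-bv"
    accessMode: "ReadWriteOnce"
  replicas: 1
```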
If you have not already done so, head over to Oracle Container Registry, go to the Database group, and accept the license agreement for the Enterprise option. You’ll also want to create a Kubernetes secret with your credentials so the cluster can pull the image:
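A sketch of creating that pull secret – the secret name is a placeholder, and the username/password are your own Oracle account credentials:

```shell
kubectl create secret docker-registry oracle-container-registry-secret \
  --docker-server=container-registry.oracle.com \
  --docker-username=<your-oracle-account-email> \
  --docker-password=<your-password>
```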
You will want to change the storageClass to match your cluster. I am using Oracle Container Engine for Kubernetes in this example, so I used the “oci-bv” storage class. If you are using a different flavor of Kubernetes you should check what storage classes are available and use one of them.
This YAML describes a database with the SID ORCL1 and a PDB called orclpdb1. It will get the password for sys, pdbadmin, etc., from a Kubernetes secret – so let’s create that:
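For example – the secret name and the key (`oracle_pwd`) follow the operator's samples as I recall them, so treat both as placeholders and check the operator documentation:

```shell
kubectl create secret generic db-admin-secret \
  --from-literal=oracle_pwd=Welcome12345
```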
Now we can create the database by applying that YAML file to our cluster:
kubectl apply -f sidb.yaml
It will take a few minutes to start up fully – it has to pull the image (which took 3m30s on my cluster, for the “enterprise” image which is the biggest one), create the database instance the first time (mine took 8m), and apply any patches that are required (just over 1m for me). Subsequent startups will be much faster of course (I stopped it by scaling to zero replicas, then started it again by scaling back to one replica, and it reached ready/healthy status in about 90s). For reference, my cluster had two nodes, each with one OCPU and 16 GB of RAM. You can check on the progress with this command:
kubectl get singleinstancedatabases -o wide -w
As the database starts up, you will see the connection string and other fields populate in the output.
Now, let’s add Oracle REST Data Services. Here’s a Kubernetes YAML file that describes what we want, I called this ords.yaml:
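A sketch of what ords.yaml can look like – again the field and secret names are placeholders based on the operator's samples, so verify against the Oracle Database Operator documentation:

```yaml
# ords.yaml – sketch only; verify field names against your operator version
apiVersion: database.oracle.com/v1alpha1
kind: OracleRestDataService
metadata:
  name: ords-sample
spec:
  databaseRef: "sidb-sample"         # name of the SingleInstanceDatabase resource
  ordsPassword:
    secretName: db-admin-secret      # placeholder secret name
  adminPassword:
    secretName: db-admin-secret
```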
You can apply that to your cluster with this command:
kubectl apply -f ords.yaml
And we can check on progress with this command:
kubectl get oraclerestdataservice -w
As it becomes ready, you will see the URLs for the Database API REST endpoint and for Database Actions. Mine took about 2m to reach ready/healthy status.
If your nodes are on a private network, the quickest way to access the REST APIs and Database Actions is to use a port forward. You can get the name of the ORDS pod and start a port forwarding session with commands like this:
kubectl get pods
kubectl port-forward pod/ords-sample-g4wc7 8443
Now you can hit the Database API REST endpoint with curl:
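For example, against the port-forward session started above – the `-k` flag skips certificate verification since ORDS is using a self-signed certificate here:

```shell
# List the Database API metadata catalog through the forwarded port
curl -k https://localhost:8443/ords/_/db-api/stable/metadata-catalog/
```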
On the login page, enter ORCLPDB1 for the PDB Name and mark as the user. Then on the password page enter Welcome12345, and you are good to go!
While we are at it, let’s also get SQLcl access to the database.
Again, we can use port forwarding to access the database from outside the cluster:
kubectl port-forward svc/sidb-sample 1521 &
And then connect from SQLcl (if you have not checked out SQLcl yet, you should, it’s got cool features like command line completion and history):
sql mark/Welcome12345@//localhost:1521/orclpdb1
SQLcl: Release 22.2 Production on Mon May 01 14:32:57 2023
Copyright (c) 1982, 2023, Oracle. All rights reserved.
Last Successful login time: Mon May 01 2023 14:32:56 -04:00
Connected to:
Oracle Database 21c Enterprise Edition Release 21.0.0.0.0 - Production
Version 21.3.0.0.0
SQL> select * from dual;
DUMMY
________
X
SQL>
There you go! That was super quick and easy! Enjoy!
New 23c version of Kafka-compatible Java APIs for Transactional Event Queues published
https://redstack.dev/2023/05/01/new-23c-version-of-kafka-compatible-java-apis-for-transactional-event-queues-published/
Mon, 01 May 2023 15:59:34 +0000
We just published the new 23c version of the Kafka-compatible Java APIs for Transactional Event Queues in Maven Central, and I wanted to show you how to use them! If you are not familiar with these APIs – they basically allow you to use the standard Kafka Java API with Transactional Event Queues acting as the Kafka broker. The only things you need to change are the broker address, and you need to use the Oracle versions of KafkaProducer and KafkaConsumer – other than that, your existing Kafka Java code should just work!
Let’s build a Kafka producer and consumer using the updated Kafka-compatible APIs.
Prepare the database
The first thing we want to do is start up the Oracle 23c Free Database. This is very easy to do in a container using a command like this:
docker run --name free23c -d -p 1521:1521 -e ORACLE_PWD=Welcome12345 container-registry.oracle.com/database/free:latest
This will pull the image and start up the database with a listener on port 1521. It will also create a pluggable database called “FREEPDB1” and set the admin passwords to the one you specified in this command.
You can tail the logs to see when the database is ready to use:
docker logs -f free23c
(look for this message...)
#########################
DATABASE IS READY TO USE!
#########################
Also, grab the IP address of the container, we’ll need that to connect to the database:
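One way to do that is with docker inspect:

```shell
# Print the container's IP address on the default bridge network
docker inspect free23c --format '{{ .NetworkSettings.IPAddress }}'
```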
To set up the necessary permissions, you’ll need to connect to the database with a client. If you don’t have one already, I’d recommend trying the new SQLcl CLI which you can download here. Start it up and connect to the database like this (note that your IP address and password may be different):
sql sys/Welcome12345@//172.17.0.2:1521/freepdb1 as sysdba
SQLcl: Release 22.2 Production on Tue Apr 11 12:36:24 2023
Copyright (c) 1982, 2023, Oracle. All rights reserved.
Connected to:
Oracle Database 23c Free, Release 23.0.0.0.0 - Developer-Release
Version 23.2.0.0.0
SQL>
Now, run these commands to create a user called “mark” and give it the necessary privileges:
SQL> create user mark identified by Welcome12345;
User MARK created.
SQL> grant resource, connect, unlimited tablespace to mark;
Grant succeeded.
SQL> grant execute on dbms_aq to mark;
Grant succeeded.
SQL> grant execute on dbms_aqadm to mark;
Grant succeeded.
SQL> grant execute on dbms_aqin to mark;
Grant succeeded.
SQL> grant execute on dbms_aqjms_internal to mark;
Grant succeeded.
SQL> grant execute on dbms_teqk to mark;
Grant succeeded.
SQL> grant execute on DBMS_RESOURCE_MANAGER to mark;
Grant succeeded.
SQL> grant select_catalog_role to mark;
Grant succeeded.
SQL> grant select on sys.aq$_queue_shards to mark;
Grant succeeded.
SQL> grant select on user_queue_partition_assignment_table to mark;
Grant succeeded.
SQL> exec dbms_teqk.AQ$_GRANT_PRIV_FOR_REPL('MARK');
PL/SQL procedure successfully completed.
SQL> commit;
Commit complete.
SQL> quit;
Now reconnect as the new mark user and run this PL/SQL block to create the topic and a consumer group:
begin
-- Creates a topic named TEQ with 5 partitions and 7 days of retention time
dbms_teqk.aq$_create_kafka_topic('TEQ', 5);
-- Creates a Consumer Group CG1 for Topic TEQ
dbms_aqadm.add_subscriber('TEQ', subscriber => sys.aq$_agent('CG1', null, null));
end;
/
You should note that the dbms_teqk package is likely to be renamed in the GA release of Oracle Database 23c, but for the Oracle Database 23c Free – Developer Release you can use it.
Ok, we are ready to start on our Java code!
Create a Java project
Let’s create a Maven POM file (pom.xml) and add the dependencies we need for this application. I’ve also included some profiles to make it easy to run the two main entry points we will create – the producer and the consumer. Here’s the content for the pom.xml. Note that I have excluded the osdt_core and osdt_cert transitive dependencies, since we are not using a wallet or SSL in this example, so we do not need those libraries:
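A sketch of such a POM. The okafka coordinates below (com.oracle.database.messaging:okafka) are from Maven Central, but the version number and the exec-plugin wiring are illustrative – verify both before use:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
  <modelVersion>4.0.0</modelVersion>
  <groupId>com.example</groupId>
  <artifactId>okafka-demo</artifactId>
  <version>1.0-SNAPSHOT</version>
  <properties>
    <maven.compiler.source>17</maven.compiler.source>
    <maven.compiler.target>17</maven.compiler.target>
  </properties>
  <dependencies>
    <dependency>
      <groupId>com.oracle.database.messaging</groupId>
      <artifactId>okafka</artifactId>
      <version>23.2.0.0</version>
      <exclusions>
        <!-- no wallet or SSL in this example, so the OSDT libraries are not needed -->
        <exclusion>
          <groupId>com.oracle.database.security</groupId>
          <artifactId>osdt_core</artifactId>
        </exclusion>
        <exclusion>
          <groupId>com.oracle.database.security</groupId>
          <artifactId>osdt_cert</artifactId>
        </exclusion>
      </exclusions>
    </dependency>
  </dependencies>
  <profiles>
    <!-- "mvn exec:exec -P consumer" runs the consumer entry point -->
    <profile>
      <id>consumer</id>
      <build>
        <plugins>
          <plugin>
            <groupId>org.codehaus.mojo</groupId>
            <artifactId>exec-maven-plugin</artifactId>
            <version>3.1.0</version>
            <configuration>
              <executable>java</executable>
              <arguments>
                <argument>-classpath</argument>
                <classpath/>
                <argument>com.example.SimpleConsumerOKafka</argument>
              </arguments>
            </configuration>
          </plugin>
        </plugins>
      </build>
    </profile>
    <!-- "mvn exec:exec -P producer" runs the producer entry point -->
    <profile>
      <id>producer</id>
      <build>
        <plugins>
          <plugin>
            <groupId>org.codehaus.mojo</groupId>
            <artifactId>exec-maven-plugin</artifactId>
            <version>3.1.0</version>
            <configuration>
              <executable>java</executable>
              <arguments>
                <argument>-classpath</argument>
                <classpath/>
                <argument>com.example.SimpleProducerOKafka</argument>
              </arguments>
            </configuration>
          </plugin>
        </plugins>
      </build>
    </profile>
  </profiles>
</project>
```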
This is a pretty straightforward POM. I just set the project’s coordinates, declared my one dependency, and then created the two profiles so I can run the code easily.
Next, we are going to need a file called ojdbc.properties in the same directory as the POM with this content:
user=mark
password=Welcome12345
The KafkaProducer and KafkaConsumer will use this to connect to the database.
Create the consumer
Ok, now let’s create our consumer. In a directory called src/main/java/com/example, create a new Java file called SimpleConsumerOKafka.java with the following content:
package com.example;
import java.util.Properties;
import java.time.Duration;
import java.util.Arrays;
import org.oracle.okafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.common.header.Header;
import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.clients.consumer.ConsumerRecord;
public class SimpleConsumerOKafka {
public static void main(String[] args) {
// set the required properties
Properties props = new Properties();
props.put("bootstrap.servers", "172.17.0.2:1521");
props.put("group.id" , "CG1");
props.put("enable.auto.commit","false");
props.put("max.poll.records", 100);
props.put("key.deserializer",
"org.apache.kafka.common.serialization.StringDeserializer");
props.put("value.deserializer",
"org.apache.kafka.common.serialization.StringDeserializer");
props.put("oracle.service.name", "freepdb1");
props.put("oracle.net.tns_admin", ".");
props.put("security.protocol","PLAINTEXT");
// create the consumer
Consumer<String , String> consumer = new KafkaConsumer<String, String>(props);
consumer.subscribe(Arrays.asList("TEQ"));
int expectedMsgCnt = 4000;
int msgCnt = 0;
long startTime = 0;
// consume messages
try {
startTime = System.currentTimeMillis();
while(true) {
try {
ConsumerRecords <String, String> records =
consumer.poll(Duration.ofMillis(10_000));
for (ConsumerRecord<String, String> record : records) {
System.out.printf("partition = %d, offset = %d, key = %s, value = %s\n ",
record.partition(), record.offset(), record.key(), record.value());
for(Header h: record.headers()) {
System.out.println("Header: " + h.toString());
}
}
// commit the records we received
if (records != null && records.count() > 0) {
msgCnt += records.count();
System.out.println("Committing records " + records.count());
try {
consumer.commitSync();
} catch(Exception e) {
System.out.println("Exception in commit " + e.getMessage());
continue;
}
// if we got all the messages we expected, then exit
if (msgCnt >= expectedMsgCnt ) {
System.out.println("Received " + msgCnt + ". Expected " +
expectedMsgCnt +". Exiting Now.");
break;
}
} else {
System.out.println("No records fetched. Retrying...");
Thread.sleep(1000);
}
} catch(Exception e) {
System.out.println("Inner Exception " + e.getMessage());
throw e;
}
}
} catch(Exception e) {
System.out.println("Exception from consumer " + e);
e.printStackTrace();
} finally {
long runDuration = System.currentTimeMillis() - startTime;
System.out.println("Application closing Consumer. Run duration " +
runDuration + " ms");
consumer.close();
}
}
}
Let’s walk through this code together.
The first thing we do is prepare the properties for the KafkaConsumer. This is fairly standard, though notice that the bootstrap.servers property contains the address of your database listener:
Then, we add some Oracle-specific properties – oracle.service.name is the name of the service we are connecting to, in our case this is freepdb1; oracle.net.tns_admin needs to point to the directory where we put our ojdbc.properties file; and security.protocol controls whether we are using SSL, or not, as in this case:
With that done, we can create the KafkaConsumer and subscribe to a topic. Note that we use the Oracle version of KafkaConsumer, which is basically just a wrapper that understands those extra Oracle-specific properties:
The rest of the code is standard Kafka code that polls for records, prints out any it finds, commits them, and then loops until it has received the number of records it expected and then exits.
Run the consumer
We can build and run the consumer with this command:
mvn exec:exec -P consumer
It will connect to the database and start polling for records; of course, there won’t be any yet because we have not created the producer. It should output a message like this about every ten seconds:
No records fetched. Retrying...
Let’s write that producer!
Create the producer
In a directory called src/main/java/com/example, create a new Java file called SimpleProducerOKafka.java with the following content:
package com.example;
import org.oracle.okafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;
import org.apache.kafka.common.header.internals.RecordHeader;
import java.util.Properties;
import java.util.concurrent.Future;
public class SimpleProducerOKafka {
public static void main(String[] args) {
long startTime = 0;
try {
// set the required properties
Properties props = new Properties();
props.put("bootstrap.servers", "172.17.0.2:1521");
props.put("key.serializer",
"org.apache.kafka.common.serialization.StringSerializer");
props.put("value.serializer",
"org.apache.kafka.common.serialization.StringSerializer");
props.put("batch.size", "5000");
props.put("linger.ms","500");
props.put("oracle.service.name", "freepdb1");
props.put("oracle.net.tns_admin", ".");
props.put("security.protocol","PLAINTEXT");
// create the producer
Producer<String, String> producer = new KafkaProducer<String, String>(props);
Future<RecordMetadata> lastFuture = null;
int msgCnt = 4000;
startTime = System.currentTimeMillis();
// send the messages
for (int i = 0; i < msgCnt; i++) {
RecordHeader rH1 = new RecordHeader("CLIENT_ID", "FIRST_CLIENT".getBytes());
RecordHeader rH2 = new RecordHeader("REPLY_TO", "TOPIC_M5".getBytes());
ProducerRecord<String, String> producerRecord =
new ProducerRecord<String, String>(
"TEQ", String.valueOf(i), "Test message "+ i
);
producerRecord.headers().add(rH1).add(rH2);
lastFuture = producer.send(producerRecord);
}
// wait for the last one to finish
lastFuture.get();
// print summary
long runTime = System.currentTimeMillis() - startTime;
System.out.println("Produced "+ msgCnt +" messages in " + runTime + "ms.");
producer.close();
}
catch(Exception e) {
System.out.println("Caught exception: " + e );
e.printStackTrace();
}
}
}
This code is quite similar to the consumer. We first set up the Kafka properties, including the Oracle-specific ones. Then we create a KafkaProducer, again using the Oracle version which understands those extra properties. After that we just loop and produce the desired number of records.
Make sure your consumer is still running (or restart it) and then build and run the producer with this command:
mvn exec:exec -P producer
When you do this, it will run for a short time and then print a message like this to let you know it is done:
Produced 4000 messages in 1955ms.
Now take a look at the output in the consumer window. You should see quite a lot of output there. Here’s a short snippet from the end:
partition = 0, offset = 23047, key = 3998, value = Test message 3998
Header: RecordHeader(key = CLIENT_ID, value = [70, 73, 82, 83, 84, 95, 67, 76, 73, 69, 78, 84])
Header: RecordHeader(key = REPLY_TO, value = [84, 79, 80, 73, 67, 95, 77, 53])
Committing records 27
Received 4000. Expected 4000. Exiting Now.
Application closing Consumer. Run duration 510201 ms
It prints out a message for each record it finds, including the partition ID, the offset, and the key and value. It then prints out the headers. You will also see commit messages, and at the end it prints out how many records it found and how long it was running for. I left mine running while I got the producer ready to go, so it shows a fairly long duration. But you can run it again and start the producer immediately after it, and you will see a much shorter run duration.
Well, there you go! That’s a Kafka producer and consumer using the new updated 23c version of the Kafka-compatible Java API for Transactional Event Queues. Stay tuned for more!
Spring Boot Starters for Oracle updated
https://redstack.dev/2023/04/24/spring-boot-starters-for-oracle-updated/
Mon, 24 Apr 2023 19:11:00 +0000
Hi everyone. We have just published some updates to the Spring Boot Starters for Oracle Database – we added a starter for UCP (Universal Connection Pool) for Spring 3.0.2. This makes it easy to access the Oracle Database from a Spring Boot application – just two steps!
Add a dependency to your Maven POM file (or equivalent)
Here’s the dependency to add:
<dependency>
<groupId>com.oracle.database.spring</groupId>
<artifactId>oracle-spring-boot-starter-ucp</artifactId>
<version>3.0.2</version> <!-- or 2.7.7 for Spring Boot 2.x -->
<type>pom</type>
</dependency>
Add the datasource properties to your Spring Boot application.yaml
Here’s an example, assuming you are also using Spring Data JPA:
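A sketch of that application.yaml – the URL, username, password and pool sizes are placeholders, and the `spring.datasource.oracleucp` property names follow the starter's documentation as I recall it, so verify them there:

```yaml
spring:
  datasource:
    url: jdbc:oracle:thin:@//localhost:1521/pdb1   # placeholder connection details
    username: customer
    password: Welcome123
    driver-class-name: oracle.jdbc.OracleDriver
    type: oracle.ucp.jdbc.PoolDataSource
    oracleucp:
      connection-factory-class-name: oracle.jdbc.pool.OracleDataSource
      connection-pool-name: CustomerConnectionPool
      initial-pool-size: 5
      min-pool-size: 5
      max-pool-size: 20
  jpa:
    hibernate:
      ddl-auto: update
```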
We are working to add more Spring Boot Starters for Oracle Database to make it even easier to use, and to make sure we cover all the versions you need! Stay tuned for more updates!
Oracle Backend for Spring Boot (and Parse Platform) introductory video published!
https://redstack.dev/2023/04/20/oracle-backend-for-spring-boot-and-parse-platform-introductory-video-plublished/
Thu, 20 Apr 2023 13:03:20 +0000
We just published a short YouTube video that introduces the Oracle Backend for Spring Boot (and Parse Platform), which makes it super easy to develop, run and manage Spring Boot microservices and mobile applications leveraging all the power of Oracle’s converged database.
I hope you can check it out!
Big news today – Oracle Database 23c Free—Developer Release just released!
https://redstack.dev/2023/04/03/big-news-today-oracle-database-23c-free-developer-release-just-released/
Mon, 03 Apr 2023 23:11:12 +0000
Hi everyone! Big news today, just announced at Oracle CloudWorld in Singapore!
The new Oracle Database 23c Free – Developer Release is now available.
Oracle Database 23c Free – Developer Release is the first release of the next-generation Oracle Database, allowing developers a head-start on building applications with innovative 23c features that simplify development of modern data-driven apps. The entire feature set of Oracle Database 23c is planned to be generally available within the next 12 months.
It has heaps of new developer-focused features and it’s completely free! And easy to download and use!
My two favorite features are:
the new JSON Relational Duality Views which allow you to create a JSON document representation from a number of existing tables, and they are read/write! So you can use JSON in your applications and have the underlying data stored in relational tables. Of course you can store it in JSON too if you want to!
JavaScript Stored Procedures, or as I like to think of them – in-database microservices which can scale to zero, with fast startup and scaling, and resource management to prevent noisy neighbors!
I look forward to writing posts about those, and some other exciting new features, really soon.
I’m speaking at Level Up 2023
https://redstack.dev/2023/02/17/im-speaking-at-level-up-2023/
Fri, 17 Feb 2023 20:03:00 +0000
Hi! I am going to be speaking at the Level Up 2023 event at Oracle Redwood Shores in March. I will be talking about our new Developer Previews for both Oracle Backend for Spring Boot and Oracle Backend for Parse Platform, and running a hands-on lab where we will use those to build a “Cloud Banking” application in Spring Boot, complete with web and mobile front end user interfaces. In the lab we’ll explore topics like service discovery, external configuration, workflow, API management, fault tolerance and observability.
If you’re in the Bay Area and you’d like to attend in person – or if you’d like to attend from anywhere digitally – you can find more information and register here:
A first Spring Boot microservice with Oracle
https://redstack.dev/2023/02/03/a-first-spring-boot-microservice-with-oracle/
Fri, 03 Feb 2023 18:40:08 +0000
In this post, I want to walk through creating a first simple Spring Boot microservice using Oracle. If you want to follow along, see this earlier post about setting up a development environment.
I want to create a “customer” microservice that I can use to create/register customers, and to get customer details. I want the customer information to be stored in my Oracle database. I am going to create a dedicated schema for this microservice, where it will keep its data. I could create a separate pluggable database, but that seems a little excessive given the simplicity of this service.
So my “customer” data will have the following attributes:
Customer ID
First name
Surname
Email address
My service will have endpoints to:
Create a customer
List all customers
Get a customer by ID
I am going to use Spring Boot 3.0.0 with Java 17 and Maven. Spring Boot 3.0.0 was just released (when I started writing this post) and has support for GraalVM native images and better observability and tracing.
Create the project
Let’s start by creating a project. If you set up your development environment like mine, with Visual Studio Code and the Spring Extension Pack, you can type Ctrl+Shift+P to bring up the actions and type in “Spring Init” to find the “Spring Initializr: Create a Maven project” action, then hit enter.
It will now ask you a series of questions. Here’s how I set up my project:
Spring Boot Version = 3.0.0
Language = Java
Group ID = com.redstack
Artifact ID = customer
Packaging = JAR
Java version = 17
Dependencies:
Spring Web
Spring Data JPA
Oracle Driver
After that, it will ask you which directory to create the project in. Once you answer all the questions, it will create the project for you and then give you the option to open it (in a new Visual Studio Code window.)
Note: If you prefer, you can go to the Spring Initializr website instead and answer the same questions there instead. It will then generate the project and give you a zip file to download. If you choose this option, just unzip the file and open it in Visual Studio Code.
Whichever approach you take, you should end up with a project open in Code that looks a lot like this:
I like to trim out a few things that we don’t really need. I tend to delete the “.mvn” directory, the “mvnw” and “mvnw.cmd” files and the “HELP.md” file. Now is also a great time to create a git repository for this code. I like to add/commit all of these remaining files and keep that as my starting point.
Explore the generated code
Here’s the Maven POM (pom.xml) that was generated:
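A sketch of what the Spring Initializr generates for these choices – the ojdbc artifact and Lombok entry reflect what the text below describes, but treat the exact versions as illustrative:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
  <modelVersion>4.0.0</modelVersion>
  <parent>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-parent</artifactId>
    <version>3.0.0</version>
    <relativePath/>
  </parent>
  <groupId>com.redstack</groupId>
  <artifactId>customer</artifactId>
  <version>0.0.1-SNAPSHOT</version>
  <name>customer</name>
  <properties>
    <java.version>17</java.version>
  </properties>
  <dependencies>
    <dependency>
      <groupId>org.springframework.boot</groupId>
      <artifactId>spring-boot-starter-web</artifactId>
    </dependency>
    <dependency>
      <groupId>org.springframework.boot</groupId>
      <artifactId>spring-boot-starter-data-jpa</artifactId>
    </dependency>
    <dependency>
      <groupId>com.oracle.database.jdbc</groupId>
      <artifactId>ojdbc8</artifactId>
      <scope>runtime</scope>
    </dependency>
    <dependency>
      <groupId>org.projectlombok</groupId>
      <artifactId>lombok</artifactId>
      <optional>true</optional>
    </dependency>
  </dependencies>
  <build>
    <plugins>
      <plugin>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-maven-plugin</artifactId>
      </plugin>
    </plugins>
  </build>
</project>
```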
There’s a few things to note here. The parent is the standard spring-boot-starter-parent and this will bring in a bunch of useful defaults for us. The dependencies list contains the items we chose in the Spring Initializr (as expected) and finally, note the build section has the spring-boot-maven-plugin included. This will let us build and run the Spring Boot application easily from maven (with “mvn spring-boot:run“).
Lombok offers various annotations aimed at replacing Java code that is well known for being boilerplate, repetitive, or tedious to write. We’ll use it to avoid writing getters, setters, constructors and builders.
And here is the main CustomerApplication Java class file:
package com.redstack.customer;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
@SpringBootApplication
public class CustomerApplication {
public static void main(String[] args) {
SpringApplication.run(CustomerApplication.class, args);
}
}
Nothing much to see here. Notice it has the @SpringBootApplication annotation.
Define the Customer Entity
Let’s go ahead and define our data model now. Since we are using JPA, we define our data model using a POJO. Create a Customer.java file in src/main/java/com/redstack/customer with this content:
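A sketch of that Customer.java, matching the annotations described below – the sequence generator name is a placeholder:

```java
package com.redstack.customer;

import jakarta.persistence.Entity;
import jakarta.persistence.GeneratedValue;
import jakarta.persistence.GenerationType;
import jakarta.persistence.Id;
import jakarta.persistence.SequenceGenerator;
import lombok.AllArgsConstructor;
import lombok.Builder;
import lombok.Data;
import lombok.NoArgsConstructor;

@Entity
@Data
@Builder
@AllArgsConstructor
@NoArgsConstructor
public class Customer {
    @Id
    // create a database sequence and populate the id from it
    @SequenceGenerator(name = "customer_id_sequence", sequenceName = "customer_id_sequence")
    @GeneratedValue(strategy = GenerationType.SEQUENCE, generator = "customer_id_sequence")
    private Integer id;
    private String firstName;
    private String lastName;
    private String email;
}
```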
Starting from the bottom, we see the definition of the four fields that we wanted for our Customer entity – ID, first and last names, and email address.
The id field has some annotations on it. First it has @Id which identifies it as the key. Then we have a @SequenceGenerator annotation, which tells JPA that we want to create a “sequence” in the database and gives it a name. A sequence is a database object from which multiple users may generate unique integers. The last annotation, @GeneratedValue tells JPA that this field should be populated from that sequence.
The class also has some annotations on it. It has the JPA @Entity annotation which tells JPA that this is an entity that we want to store in the database. The other annotations are Lombok annotations to save us writing a bunch of boilerplate code. @Data generates getters for all fields, a useful toString method, and hashCode and equals implementations that check all non-transient fields. It will also generate setters for all non-final fields, as well as a constructor. @Builder generates some nice APIs to create instances of our object – we’ll see how we use it later on. And @AllArgsConstructor and @NoArgsConstructor generate pretty much what their names suggest they do.
Set up the Spring Boot Application Properties
Ok, next let’s set up the JPA configuration in the Spring Boot Application Properties. You will find a file called application.properties in src/main/resources. This file can be in either the “properties” format, or in YAML. I personally prefer to use YAML, so I renamed that file to application.yaml and here is the content:
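A sketch of that application.yaml, assembled from the settings described below – the connection details match the SQLcl example later in the post, but adjust them for your own database:

```yaml
server:
  port: 8080
spring:
  application:
    name: customer
  datasource:
    url: jdbc:oracle:thin:@//172.17.0.2:1521/pdb1
    username: customer
    password: Welcome123
    driver-class-name: oracle.jdbc.OracleDriver
  jpa:
    hibernate:
      ddl-auto: update
    properties:
      hibernate:
        dialect: org.hibernate.dialect.OracleDialect
        format_sql: true
    show-sql: true
```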
Let’s look at what we have here. First, we set the port to 8080 and the application’s name to “customer”. If you prefer the properties format, these first two settings would look like this:
server.port=8080
spring.application.name=customer
After that we set up the data source. You provide the JDBC URL for your Oracle Database, the username and password, and the JDBC driver class, as shown. Note that the user will need to actually exist. You can create the user in the database by running these statements as an admin user:
create user customer identified by Welcome123;
grant connect, resource to customer;
alter user customer quota unlimited on users;
commit;
The final section of config we see here is the JPA configuration, where we need to declare which “dialect” we are using – this identifies what kind of SQL should be generated, in our case Oracle. The format-sql and show-sql settings are just there to make the SQL statements we see in the logs easier to read.
The ddl-auto setting is interesting. Here’s a good article that explains the possible values and what they do. We’ve used update in this example, which “instructs Hibernate to update the database schema by comparing the existing schema with the entity mappings and generate the appropriate schema migration scripts.” That’s a reasonable choice for this scenario, but you should be aware that there are probably better choices in some cases. For example, if you are actively developing the entity and making changes to it, create-drop might be better for you. And if the database objects already exist and you just want to use them, then none might be the best choice – we’ll talk more about this in a future post!
Create the JPA Repository Class
Next, let’s create the JPA Repository class which we can use to save, retrieve and delete entities in/from the database. Create a file called CustomerRepository.java in src/main/java/com/redstack/customer with this content:
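A minimal sketch of that repository – with Spring Data JPA, extending JpaRepository is all that is needed, since Spring generates the implementation at runtime:

```java
package com.redstack.customer;

import org.springframework.data.jpa.repository.JpaRepository;

// Spring Data JPA provides save, findAll, findById, delete, etc. for us
public interface CustomerRepository extends JpaRepository<Customer, Integer> {
}
```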
Ok, that takes care of our JPA work. Now, let’s get started on our services.
Create the Customer Service
Let’s start with a service to register (create) a new customer. We can start by defining the input data that we expect. Let’s create a CustomerRegistrationRequest.java in the same directory with this content:
package com.redstack.customer;
public record CustomerRegistrationRequest(
String firstName,
String lastName,
String email) {
}
Notice that we did not include the ID, because we are going to get that from the database sequence. So we just need the client/caller to give us the remaining three fields.
Next, we can create our controller. Create a new file called CustomerController.java in the same directory with this content:
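A sketch of that CustomerController.java, assembled from the description below:

```java
package com.redstack.customer;

import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.ResponseBody;
import org.springframework.web.bind.annotation.RestController;

@RestController
@RequestMapping("api/v1/customers")
public record CustomerController(CustomerService customerService) {

    // handle POST requests; the request body is the JSON registration data
    @PostMapping
    @ResponseBody
    public void registerCustomer(@RequestBody CustomerRegistrationRequest request) {
        customerService.registerCustomer(request);
    }
}
```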
So here we used a Java record to define the controller, and we ask Spring to inject the CustomerService for us. Obviously, we have not created that yet – we’ll get to it in a minute! The record has two annotations – @RestController tells Spring to expose a REST API for this record, and @RequestMapping lets us set up the URL path for this controller. Since we set the port to 8080 earlier, and assuming we just run this on our development machine for now, this REST API will have a URL of http://localhost:8080/api/v1/customers.
Next we can define the handlers. Here we have just the first one, to handle HTTP POST requests; we will add others later. Our registerCustomer method will be exposed as the handler for POST requests because we gave it the @PostMapping annotation, and it will be able to return an HTTP response with a status code and body because we gave it the @ResponseBody annotation. This method accepts the CustomerRegistrationRequest that we defined earlier. Notice that we add the @RequestBody annotation to that method argument. This tells Spring that the data will be provided by the caller as JSON in the HTTP request body (as opposed to being in a query parameter or header, etc.). And this handler simply calls the registerCustomer method in the service and passes through the data.
So, it’s time to write the service! Create a new file called CustomerService.java in the same directory with this content:
package com.redstack.customer;
import org.springframework.stereotype.Service;
@Service
public record CustomerService(CustomerRepository repository) {
public void registerCustomer(CustomerRegistrationRequest req) {
Customer customer = Customer.builder()
.firstName(req.firstName())
.lastName(req.lastName())
.email(req.email())
.build();
repository.saveAndFlush(customer);
}
}
Again, we are using a Java record for the service. Records are immutable data classes that require only the type and name of each field. The equals, hashCode, and toString methods, as well as the private, final fields and a public constructor, are generated by the Java compiler. You can also include static variables and methods in records. I’m using them here to save a bunch of boilerplate code that I do not want to write.
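That compiler-generated behavior is easy to see in a tiny, standalone example (the RegistrationRequest record here is just an illustration, not part of the service):

```java
public class RecordDemo {
    // The compiler generates the constructor, accessors, equals, hashCode and toString
    record RegistrationRequest(String firstName, String lastName, String email) { }

    public static void main(String[] args) {
        var a = new RegistrationRequest("Jane", "Doe", "jane@example.com");
        var b = new RegistrationRequest("Jane", "Doe", "jane@example.com");
        System.out.println(a.firstName());   // generated accessor: Jane
        System.out.println(a.equals(b));     // generated value-based equals: true
        System.out.println(a);               // generated toString
    }
}
```
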
We put the @Service annotation on the record to tell Spring that this is a service. In the record arguments, we have Spring inject an instance of our CustomerRepository which we will need to talk to the database.
For now, we just need one method in our service, registerCustomer(). We’ll add more later. This method also accepts the CustomerRegistrationRequest, and the first thing we do with it is create a new Customer entity object. Notice that we are using the builder that we auto-generated with Lombok – we never wrote any code to create this builder! Yay! Then, all we need to do is use our JPA repository’s saveAndFlush() method to save that customer in the database. saveAndFlush() performs the INSERT and flushes it to the database immediately, rather than waiting for the persistence context to flush later.
Time to test the application!
Let’s start up our service and test it! Before we start, you might want to connect to your database and satisfy yourself that there is no CUSTOMER table there:
sql customer/Welcome123@//172.17.0.2:1521/pdb1
SQL> select table_name from user_tables;
no rows selected
To run the service, run this Maven command:
mvn spring-boot:run
This will compile the code and then run the service. You will see a bunch of log messages appear. Around the middle you should see something like this:
2023-02-03T11:15:37.827-05:00 INFO 8488 --- [ main] SQL dialect : HHH000400: Using dialect: org.hibernate.dialect.OracleDialect
Hibernate: create global temporary table HTE_customer(id number(10,0), email varchar2(255 char), first_name varchar2(255 char), last_name varchar2(255 char), rn_ number(10,0) not null, primary key (rn_)) on commit delete rows
Hibernate: create table customer (id number(10,0) not null, email varchar2(255 char), first_name varchar2(255 char), last_name varchar2(255 char), primary key (id))
There’s the SQL that it ran to create the CUSTOMER table for us! If you’d like to, you can check in the database with this statement:
SQL> describe customer;
Name Null? Type
_____________ ___________ _____________________
ID NOT NULL NUMBER(10)
EMAIL VARCHAR2(255 CHAR)
FIRST_NAME VARCHAR2(255 CHAR)
LAST_NAME VARCHAR2(255 CHAR)
You can also take a look at the sequence if you would like to:
Now, let’s invoke the service to test it! We can invoke the service using cURL, we need to do a POST, set the Content-Type header and provide the data in JSON format:
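The request can be made with cURL along these lines (the field values here are examples only – use whatever test data you like):

```shell
# POST a new customer as JSON; -i prints the response status line and headers
curl -i -X POST http://localhost:8080/api/v1/customers \
  -H 'Content-Type: application/json' \
  -d '{"firstName": "Mark", "lastName": "Nelson", "email": "test@example.com"}'
```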
The “-i” tells cURL to print out the response. You can see that we got an HTTP 201 (Created), i.e., success!
Now we see the new record in the database, as expected:
SQL> select * from customer ;
ID EMAIL FIRST_NAME LAST_NAME
_____ ________________ _____________ ____________
1 [email protected] Mark Nelson
Great, that is working the way we wanted, so we can create customers and have them stored in the database. Now let’s add some endpoints to query customers from the database.
Add a “get all customers” endpoint
The first endpoint we want to add will allow us to get a list of all customers. To do this, let’s add this new method to our controller:
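A sketch of what this method could look like (the annotation attributes and the injected service field name are assumptions on my part, not the exact original listing):

```java
// "Get all customers" handler -- a sketch based on the description
// in this post; annotation values and field names are assumptions.
@GetMapping(produces = MediaType.APPLICATION_JSON_VALUE)
@ResponseStatus(HttpStatus.OK)
public List<Customer> getAllCustomers() {
    // delegate straight through to the service layer
    return service.getAllCustomers();
}
```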
Here we have a getAllCustomers() method that simply calls the corresponding method in the service (we’ll write that in a moment) and returns the results. Of course, we have some annotations too. The @GetMapping tells Spring Boot that this method will be exposed as an HTTP GET handler. The produces attribute defines the response body’s Content-Type, in this case “application/json“. The @ResponseStatus sets the HTTP status code.
Here’s the method we need to add to our CustomerService. Notice it just uses a built-in method on the repository to get the data – it’s very simple:
// add this import
import java.util.List;
// ...
public List<Customer> getAllCustomers() {
return repository.findAll();
}
With those changes in place, we can restart the service and call this new GET endpoint like this:
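For example, with cURL (assuming the service is still listening on port 8080):

```shell
# GET all customers; expect an HTTP 200 and a JSON array in the body
curl -i http://localhost:8080/api/v1/customers
```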
You might like to do a few more POSTs and another GET to observe what happens.
Add a “get customer by ID” endpoint
Let’s add the final endpoint that we wanted in our service. We want to be able to get a specific customer using the ID. Here’s the code to add to the controller:
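A sketch of what this handler could look like (the service field name and exact annotation attributes are assumptions):

```java
// "Get customer by ID" handler -- a sketch; names are assumptions.
@GetMapping(path = "/{id}", produces = MediaType.APPLICATION_JSON_VALUE)
public ResponseEntity<Customer> getCustomerById(@PathVariable("id") Integer id) {
    return service.getCustomer(id)
            .map(ResponseEntity::ok)                    // found: HTTP 200 with the customer as the body
            .orElse(ResponseEntity.notFound().build()); // empty: HTTP 404, no body
}
```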
Here we see some differences to the previous endpoint implementation. This one is a little more sophisticated. First, we have added a path to the @GetMapping annotation to add a positional parameter to the end of the path, so this endpoint will be /api/v1/customers/{id}. In the method arguments we have a @PathVariable annotation to grab that {id} from the path and use it as an argument to our method.
Also, notice that the method returns ResponseEntity<Customer>. This gives us some more control over the response, and allows us to set different HTTP status codes (and if we wanted to we could also control the headers, body, etc.) based on our own business logic.
Inside this method we call our service’s (soon to be written) getCustomer(id) method which returns an Optional<Customer>. Then we check if the Optional actually contains a Customer, indicating that a customer entity/record was found for the specified id, and if so we return it along with an HTTP 200 (OK). If the Optional is empty, then return an HTTP 404 (not found).
Here’s the new method to add to the service:
// add this import
import java.util.Optional;
// ...
public Optional<Customer> getCustomer(Integer id) {
return repository.findById(id);
}
This one is fairly simple – we are just calling a standard built-in method on the JPA repository class to get the data.
Now we can restart the application, and test the new endpoint by asking for customers that we know exist, and do not exist to observe the different outcomes:
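For example (the IDs here are examples – use one you created and one you did not):

```shell
# a customer that exists -- expect HTTP 200 and a single JSON object
curl -i http://localhost:8080/api/v1/customers/1

# a customer that does not exist -- expect HTTP 404 and an empty body
curl -i http://localhost:8080/api/v1/customers/999
```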
Notice the HTTP status codes are different in each case. Also, notice that the JSON returned when a customer is found is just one JSON object {…} not a list [{…}, … ,{…}] as in the get all customers endpoint.
Conclusion
Well there you have it – we have completed our simple customer microservice built using Spring Boot and Oracle Database. I hope you followed along and built it too, and enjoyed learning a bit about Spring Boot and Oracle! Stay tuned for more posts on this topic, each covering a slightly more advanced topic than the last. See you soon!
Development environment setup for Spring Boot with Oracle
https://redstack.dev/2022/12/08/development-environment-setup-for-spring-boot-with-oracle/
Thu, 08 Dec 2022 14:29:27 +0000
Hi again! I am starting a series of posts about writing Spring Boot microservice applications with the Oracle Database. I plan to cover topics like database access, messaging, external configuration, service discovery, fault tolerance, workflow, observability and so on. But before I get started, I wanted to document how I set up my development environment.
Personally, I work on Windows 11 with the Windows Subsystem for Linux and Ubuntu 20.04. Of course you can adjust these instructions to work on macOS or Linux.
$ java -version
java version "17.0.3" 2022-04-19 LTS
Java(TM) SE Runtime Environment (build 17.0.3+8-LTS-111)
Java HotSpot(TM) 64-Bit Server VM (build 17.0.3+8-LTS-111, mixed mode, sharing)
Great! Now, let’s move on to build automation.
Maven
You can use Maven or Gradle to build Spring Boot projects, and when you generate a new project from Spring Initializr (more on that later) it will give you a choice of these two. Personally, I prefer Maven, so that’s what I document here. If you prefer Gradle, I’m pretty sure you’ll already know how to set it up.
Visual Studio Code
These days I find I am using Visual Studio Code for most of my coding. It’s free, lightweight, has a lot of plugins, and is well supported. Of course, you can use a different IDE if you prefer.
Another great feature of Visual Studio Code that I really like is the support for “remote coding.” This lets you run Visual Studio Code itself on Windows while it connects to a remote Linux machine where the actual code is stored, built, run, etc. This could be an SSH connection, or it could be a connection to a WSL2 “VM” on your machine. The latter option is what I do most often. So I get a nice, friendly, well-behaved native desktop application, but I am coding on Linux. Kind of the best of both worlds!
You can download Visual Studio Code from its website and install it.
I use a few extensions (plugins) that you will probably want to get too! These add support for the languages and frameworks and give you things like completion and syntax checking and so on:
You can install these by opening the extensions tab (Ctrl-Shift-X) and using the search bar at the top to find and install them.
Containers and Kubernetes
Since our microservices applications are almost certainly going to end up running in Kubernetes, it’s a good idea to have a local test environment. I like to use “docker compose” for initial testing locally and then move to Kubernetes later.
I use Rancher Desktop for both containers and Kubernetes on my laptop. There are other options if you prefer to use something different.
Oracle Database
And last, but not least, you will need the Oracle Database container image so we can run a local database to test against. If you don’t already have it, you will need to go to Oracle Container Registry first, and navigate to “Database,” then “Enterprise,” and accept the license agreement, then pull the image with these commands:
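The commands look something like this. Treat the tag and the ORACLE_PWD variable as assumptions – check the image’s page on Oracle Container Registry for the exact image path, available tags, and supported environment variables:

```shell
# log in, pull the database image, and start a container from it
docker login container-registry.oracle.com
docker pull container-registry.oracle.com/database/enterprise:latest
docker run -d --name oracle-db -p 1521:1521 \
  -e ORACLE_PWD=Welcome123 \
  container-registry.oracle.com/database/enterprise:latest
```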
The first time you start it up, it will create a database instance for you. This takes a few minutes; you can watch the logs to see when it is done:
docker logs -f oracle-db
You will see this message in the logs when it is ready:
#########################
DATABASE IS READY TO USE!
#########################
You can then stop and start the database container as needed – you won’t need to wait for it to create the database instance each time, it will stop and start in just a second or two.
docker stop oracle-db
docker start oracle-db
You are going to want to grab its IP address for later on. You can do that with this command:
docker inspect oracle-db | grep IPAddress
This container image has SQL*Plus in it, and you can use that as a database command line client, but I prefer the new Oracle SQLcl which is a lot nicer – it has completion and arrow key navigation and lots of other cool new features. Here’s a permalink for the latest version: https://download.oracle.com/otn_software/java/sqldeveloper/sqlcl-latest.zip
You can just unzip this and add it to your path too, like Maven and Java.
You can connect to the database using SQLcl like this (use the IP address you got above):
sql sys/Welcome123@//172.17.0.2:1521/pdb1 as sysdba
Well, that’s about everything we need! In the next post we’ll get started building a Spring Boot microservice!
Oracle REST Data Services 22.3 brings new REST APIs for Transactional Event Queueing
https://redstack.dev/2022/10/31/oracle-rest-data-services-22-3-brings-new-rest-apis-for-transactional-event-queueing/
Mon, 31 Oct 2022 16:01:11 +0000
Oracle REST Data Services 22.3 was released a couple of weeks ago, and it is now available on Oracle Autonomous Database as well! This release has a slew of new REST APIs for Oracle Transactional Event Queueing (or TxEventQ). If you have not heard of it, TxEventQ is essentially a new, faster implementation of Oracle Advanced Queueing, which has been in the database for around twenty years.
Many of these new REST APIs are very similar to the Kafka REST APIs, since TxEventQ provides Kafka compatibility as one of its features.
In this post, I want to show you how to use a few of the APIs, and then I’ll give you an idea of what kinds of APIs are available and where to find more information.
The first thing to do is to grab an Autonomous Database instance. It’s available in the Always Free tier, so you can try this for free! If you are not familiar with creating one, and accessing SQL and so on – check out this free LiveLab for details.
Make sure you get a 21c database – you may need to toggle the “Always Free” option to see 21c. The APIs described in this post are supported in Oracle Database 21c (and later).
When you get into your SQL worksheet, grab the URL from the browser, it will be something like this:
Now, chop off the end of the URL and replace it with the base URL for the TxEventQ REST APIs, and save that in an environment variable to save us some typing!
And let’s create another environment variable with the authentication details. You can encode them using base64 like this, assuming your userid is admin and your password is your_password:
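For example, using the standard base64 utility (the credentials here are placeholders – substitute your real ADMIN password):

```shell
# base64-encode user:password for use in an HTTP Basic Authorization header
export AUTH=$(printf 'admin:your_password' | base64)
echo "$AUTH"   # YWRtaW46eW91cl9wYXNzd29yZA==
```

You can then pass it to cURL with a header like -H "Authorization: Basic $AUTH".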
Great, that will save us from typing those each time!
Create a topic
Let’s start with by creating a topic. We are going to need to know the database name for this – you can find that by running this query in your SQL worksheet:
select sys_context('userenv','db_name') from dual
You’ll need to put that database name into the URL below after “clusters” and before “topics”, in this example my database name is “XYZABC_RED1“:
In the body we specified the name of the topic (“mark1” in this case) and how many partitions we want the topic to have. When you run this request, you’ll see output something like this:
Now let’s create a consumer group, here’s the request, notice the topic name is in the body, and the name of the consumer group is the last part of the URL (“sub1” in this case):
Publish messages
You can put multiple records in the body to put more than one message on the topic.
Consume messages
Now, let’s consume the messages off that topic with our consumer sub1. Here’s the request – notice the topic name is in the body, and the consumer name is in the URL after “consumers”:
Great, hopefully that gives you a feel for how these REST APIs for TxEventQ work!
But wait, there’s more!
Of course there are a lot more APIs available than the few I have shown you so far. They all follow a fairly similar pattern, let’s take a look at a list of what’s available:
Topics APIs
Create topic, optionally with partition count
List topics
Get a topic
Create a consumer group
Publish message(s)
List topics in a specific cluster
Get a topic in a specific cluster
Delete a topic
Partitions APIs
List partitions in a topic
Get details of one partition in a topic
Get partition message offsets
List partitions in a topic in a cluster
Get details of one partition in a topic in a cluster
Or if you prefer, you can open the OpenAPI specification on your database instance. The URL will be something like this, and you can search the output for “teq” to find the APIs:
I hope you enjoyed this quick introduction to the new REST APIs for Transactional Event Queueing! Of course, this is available in any Oracle database, not just Autonomous Database. If you want to use Oracle REST Data Services with your own database, you might find this post about installing a standalone version interesting too!
Getting started with the new observability exporter for Oracle database
https://redstack.dev/2022/09/27/getting-started-with-the-new-observability-exporter-for-oracle-database/
Tue, 27 Sep 2022 16:05:41 +0000
My colleague Paul Parkinson recently published our new unified observability exporter for Oracle Database on GitHub; you can read about it here. I wanted to start playing around with it to see what we can do with it.
In this post I will start with a really simple example that just gets the exporter up and running and collects a few simple metrics from the database into Prometheus. In subsequent posts, I’ll go further and look at dashboards in Grafana, and also cover the logging and tracing capabilities! But you have to start somewhere, right?
First thing we need is a database of course! I just fired one up in a container like this:
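The run command is the same shape as in my earlier environment-setup post – treat ORACLE_PWD and the tag as assumptions and check the image documentation for the exact variables:

```shell
# start a local Oracle Database in a container
docker run -d --name oracle-db -p 1521:1521 \
  -e ORACLE_PWD=Welcome123 \
  container-registry.oracle.com/database/enterprise:latest
```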
If you have not used this image before, you will first need to go to Oracle Container Registry at https://container-registry.oracle.com, log in, navigate to the Database category and then the “enterprise” image, and accept the license agreement. You will also need to log in with your docker client so you can pull the image:
docker login container-registry.oracle.com
# this will prompt you for your username and password
The image will take a short time to pull the first time, and the first startup will actually create the database instance, and that takes a few minutes too. You can watch the logs to see when the database is ready:
docker logs -f oracle-db
You only need to have these delays the first time you start the image. After that you can stop and start the container as needed and it will retain the data and startup very quickly.
# to stop the container:
docker stop oracle-db
# to start the container:
docker start oracle-db
Ok, so now we have a database available. Let’s connect to it and create some data to play with. You can use your favorite client – there’s SQL*Plus in that image if you don’t have anything else available. You can start it and connect to the database like this:
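For example, running SQL*Plus from inside the database container (the password and PDB name follow the examples used elsewhere in this series – adjust to your setup):

```shell
# open a SQL*Plus session inside the oracle-db container
docker exec -it oracle-db sqlplus sys/Welcome123@//localhost:1521/pdb1 as sysdba
```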
If you don’t have those installed and you don’t want to – you can skip this step and just grab a pre-built container image from Oracle Container Registry:
If you do it this way, make sure to use the full name later when we start the exporter, not the short version!
Now we need to create a configuration file and define our metrics. I called mine mark-metrics.toml and here’s the content:
[[metric]]
context = "customers"
request = "SELECT count(*) as num_custs FROM customer"
metricsdesc = { num_custs = "Number of customers." }
[[metric]]
context = "system"
request = "select count(*) as session_count from v$session where username is not null and type = 'USER' and con_id = sys_context('userenv','con_id')"
metricsdesc = { session_count = "Current session count." }
[[metric]]
context = "system"
request = "select count(*) as active_sessions from v$session where username is not null and type = 'USER' and status = 'ACTIVE' and con_id = sys_context('userenv','con_id')"
metricsdesc = { active_sessions = "Active sessions." }
[[metric]]
context = "system"
request = "select (c.session_count - a.active_sessions) as inactive_sessions from (select count(*) as session_count from v$session where username is not null and type = 'USER' and con_id = sys_context('userenv','con_id')) c, (select count(*) as active_sessions from v$session where username is not null and type = 'USER' and status = 'ACTIVE' and con_id = sys_context('userenv','con_id')) a"
metricsdesc = { inactive_sessions = "Inactive sessions." }
[[metric]]
context = "system"
request = "select b.session_count as blocked_sessions from (select count(*) as session_count from v$session where username is not null and type = 'USER' and blocking_session_status = 'VALID' and con_id = sys_context('userenv','con_id')) b"
metricsdesc = { blocked_sessions = "Blocked sessions." }
I defined five metrics in this file. Each metric starts with the [[metric]] heading and can have several fields. You can see more information in the documentation here. In the spirit of keeping this first post simple, I just created basic metrics with no labels or anything fancy.
Let’s take a close look at the first metric, here it is again:
[[metric]]
context = "customers"
request = "SELECT count(*) as num_custs FROM customer"
metricsdesc = { num_custs = "Number of customers." }
It is in the context (or group) called customers. The metric itself is called num_custs. You can see how we use the metricsdesc to create a human-readable documentation/description for the metric. And the metric itself is defined with an SQL statement. Wow! That’s pretty cool, right? That means that anything I can write an SQL statement to get from the database can be exported as a metric! In this one I just count the number of entries in that customer table we just created.
The other four metrics are some simple queries that get the number of sessions in the database as well as how many are active, inactive and blocked. These are all in the system context. You can define whatever contexts you like.
When you later look at a metric in Prometheus its name will be something like this:
oracledb_customers_num_custs
Notice how the context (customers) and the metric name (num_custs) are in there.
Ok, now that we have defined our metrics, we can start up the exporter. Let’s run it in another container, alongside the database. We can start it like this:
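A sketch of the run command – the image name, mount path, and metrics-file variable here are assumptions, so check the exporter’s README for the exact names; the DATA_SOURCE_NAME value must point at your database container:

```shell
# run the observability exporter next to the database
# (image name, mount path, and variable names are assumptions)
docker run -d --name exporter -p 9161:9161 \
  -v "$(pwd)/mark-metrics.toml:/metrics/mark-metrics.toml" \
  -e DEFAULT_METRICS=/metrics/mark-metrics.toml \
  -e DATA_SOURCE_NAME='system/Welcome123@172.17.0.2:1521/pdb1' \
  observability-exporter:latest
```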
There are a couple of things to note here. First, I am providing the configuration file we just created using the -v mount. This will give the exporter access to the metrics definitions. Second, we need to tell it how to connect to the database. You’ll need to get the IP address of the database container using this command:
docker inspect oracle-db | grep IPAddress
Yours will probably be different to mine, so you’ll need to update the value of DATA_SOURCE_NAME to match your environment. And finally, a reminder – if you pulled the pre-built image down from Oracle Container Registry, you’ll need to use the fully qualified name on the last line.
Once this container starts up, grab its IP address too, we’ll need that in a minute:
docker inspect exporter | grep IPAddress
The exporter should start right up, and assuming we got the address right and no typos, it should be working and we can get metrics like this:
$ curl localhost:9161/metrics
# HELP oracledb_system_inactive_sessions Inactive sessions.
# TYPE oracledb_system_inactive_sessions gauge
oracledb_system_inactive_sessions 1.0
# HELP oracledb_up Whether the Oracle database server is up.
# TYPE oracledb_up gauge
oracledb_up 1.0
# HELP oracledb_system_blocked_sessions Blocked sessions.
# TYPE oracledb_system_blocked_sessions gauge
oracledb_system_blocked_sessions 0.0
# HELP oracledb_customers_num_custs Number of customers.
# TYPE oracledb_customers_num_custs gauge
oracledb_customers_num_custs 2.0
# HELP oracledb_system_active_sessions Active sessions.
# TYPE oracledb_system_active_sessions gauge
oracledb_system_active_sessions 1.0
# HELP oracledb_system_session_count Current session count.
# TYPE oracledb_system_session_count gauge
oracledb_system_session_count 2.0
If you don’t see this, check the container logs to see what the error was:
docker logs exporter
Assuming everything is working now, let’s start up Prometheus and configure it to scrape these metrics.
First, let’s create a configuration file called prometheus.yml with this content:
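Something like this works (use the exporter container’s IP address you grabbed earlier as the target; the job name is arbitrary):

```yaml
global:
  scrape_interval: 10s    # how often Prometheus scrapes the exporter

scrape_configs:
  - job_name: oracle-exporter
    static_configs:
      # the exporter container's IP address and port
      - targets: ['172.17.0.3:9161']
```

You can then run Prometheus in a container with this file mounted over its default configuration.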
The user interface looks like this, and you can type into that search field to find a metric. If you start typing “num_custs” it should find our metric. Then hit enter, or click on the Execute button to see the value of the metric. It might take up to 10 seconds for data to be available, since we configured the scrape interval as 10 seconds in our configuration file. You should see something like this – yours will probably say 1, not 2:
If you go insert some more records into that table and then check again, you’ll see the value is updated. You can also click on the Graph tab to view that as a time series graph. Try adding and removing records to see what happens. Remember to wait a little while between each update so that new metrics are collected.
You can also try the other metrics we created! So there we go, that’s covered the very basic starting steps of defining some metrics, running the exporter and scraping the metrics into Prometheus! Stay tuned for some follow up posts where I will build dashboards in Grafana, and also look at exporting logs and distributed tracing!
Bonus info: If you use WSL2 like I do, you might see a warning on the Prometheus web user interface about clock skew. If you do, you can fix that by updating the time in WSL like this:
sudo hwclock -s
Some big updates for the Python Oracle library (cx_Oracle)
https://redstack.dev/2022/05/26/some-big-updates-for-the-python-oracle-library-cx_oracle/
Thu, 26 May 2022 13:46:58 +0000
There are some really interesting updates for the open source Python Oracle library (known as cx_Oracle, but changing its name is part of this) – check it out here:
Oracle WebCenter Spaces Task Flow Customization
https://redstack.dev/2010/01/15/oracle-webcenter-spaces-task-flow-customization/
Fri, 15 Jan 2010 06:12:22 +0000
I’ve posted a blog about how to customize Oracle WebCenter task flows in your own custom application.
Some of us might also wonder how to do this kind of customization in Spaces.
There are some differences between a custom application and Spaces.
Here are some tips:
1. Make WebCenter Spaces Task Flow Libraries Available to Your Application
To customize the WebCenter Spaces task flows, you need to include the WebCenter Spaces View JAR file in your customization project.
The WebCenter View jar (webcenter-app-view.jar) is packaged and shipped with WebCenter Spaces, and is located here: FMW_HOME/lib/java/internal/oracle.webcenter.spaces/11.1.1.0.0/webcenter-app-view.jar
FMW_HOME is the installed home of the WebCenter Spaces install.
WebCenter Service task flows, such as the Discussion Forum and Mail task flows, are part of the WebCenter Extension and are available in the Task Flows folder in the WebCenter Services Catalog, located in the Resource Palette in JDeveloper.
To set up JDeveloper for WebCenter Spaces task flow customization:
(1) In the Application Navigator, right-click the ViewController project for your WebCenter application and choose Project Properties.
(2) In the Project Properties dialog, select Libraries & Classpath and click Add JAR /Directory….
(3) Navigate to the Oracle WebCenter Spaces installation directory and select webcenter-app-view.jar.
(4) Click Select.
(5) Click OK.
(6) Save your files.
(7) In the Application Navigator, next to the Projects drop-down list, click the Navigator Display Options icon and select Show Libraries.
(8) Save all your files.
2. Applying Seeded Customizations to Deployed WebCenter Applications
In the previous section, we customized WebCenter task flows for different scenarios. The output of these exercises is the generated MDS customization. The customizations show up as .xml.xml or .jspx.xml files in the View project of the application under the “libraryCustomization” package.
These customization documents are essentially instructions for MDS to apply delta on top of the base document that is shipped to show the customized behavior at runtime. Once you create the customizations, you must apply them to the deployed WebCenter application.
If those customized task flows are used in your own custom WebCenter application, the customizations in MDS will be deployed to the WebCenter container along with the custom WebCenter application, so you can skip this step. But for Spaces, we have to apply those customizations to the WebCenter Spaces application separately.
To see customizations performed on Oracle WebCenter task flows in JDeveloper at runtime, you must import these customizations to the MDS repository of the deployed WebCenter application. Since this step updates the runtime WebCenter application metadata repository, you should first back up the MDS schema before performing this step. Oracle also advises that you test your customizations in a development or stage environment first.
These steps cover how to apply the seeded customizations to a deployed instance of Oracle WebCenter Spaces. The same steps can be followed to deploy customizations to a custom WebCenter application repository as well.
To import design time Oracle WebCenter task flow seeded customizations to the MDS repository:
(1) While your application is open in JDeveloper, rebuild the ViewController project by right-clicking ViewController and choosing Rebuild ViewController.jpr.
(2) Next, create a JAR profile and package the library customizations under the ViewController project in the JAR archive. In the Application Navigator, these customizations display under ADFLibraryCustomization package. On your file system, the customizations are located in the libraryCustomization folder under ViewController. Right-click the Model project and select New.
(3) In the New Gallery, select Deployment Profiles, choose JAR File from the Items list, and click OK to generate the JAR profile.
(4) Create a name for the JAR file, such as SampleWebCenterApplicationCustomizations.
(5) In the Edit JAR Deployment Profile Properties dialog, click Filters on the left, select all the customization documents in the ADFLibraryCustomization package, then click OK.
(6) Copy this JAR file, which now contains the customizations, to the machine where the WebCenter application is deployed. Note: Maintain a backup of this file so that you always have a version of the customizations for this application.
(7) Extract the JAR to a temporary directory on the machine, for example /tmp/wc-cust.
(8) Use the MDS WLST import command to import the task flow customizations to the application’s MDS repository.
The following is an example of the command to import customizations.
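For illustration, an importMetadata call might look like this – the application and server names here (‘webcenter’, ‘WC_Spaces’) and the connection details are assumptions, so use the values from your own environment:

```
# connect WLST to the Admin Server, then import the customizations
wls:/offline> connect('weblogic', 'your_password', 't3://adminhost:7001')
wls:/mydomain/serverConfig> importMetadata(application='webcenter',
    server='WC_Spaces', fromLocation='/tmp/wc-cust', docs='/**')
```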