Category Archives: Oracle

ILM – is it possible to mix ADO policies for compression and storage?

Never thought I would write much about Information Lifecycle Management, as I am actually a developer and not a DBA. I think it is indeed a topic mostly relevant for DBAs. But it is generally a good thing if developers and DBAs have a deep understanding of each other's jobs, isn't it? We give an overview of the ILM features in our training "12c New Features for Developers", and I'm one of the instructors for that course. That's why this is already my third post about the topic, just to clarify some questions that are not so obviously documented.

After looking at ADO conditions for storage tiering policies and at user-defined PL/SQL conditions in previous posts, I was curious whether it is possible to mix storage and compression policies for the same segment. Wouldn't it make sense, for example, to move the segment to a low-cost tablespace and compress the data within the same action? I have sometimes heard the opposite statement. But it is very simple to test it instead of just trusting it. Let's try it. Continue reading
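The two policy types are added with separate ALTER TABLE ... ILM ADD POLICY statements. A minimal sketch, assuming a table SALES and a tablespace LOW_COST_TS (both hypothetical names):

-- Hypothetical names: compression tiering policy driven by Heat Map data
ALTER TABLE sales ILM ADD POLICY
  ROW STORE COMPRESS ADVANCED SEGMENT
  AFTER 30 DAYS OF NO MODIFICATION;

-- Hypothetical names: storage tiering policy moving the segment to a cheaper tablespace
ALTER TABLE sales ILM ADD POLICY
  TIER TO low_cost_ts;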

Custom ILM ADO policy conditions using PL/SQL

In my previous post about ILM storage tiering I wrote about the ADO conditions that we can provide for storage tiering policies when moving segments to a read-only tablespace. Another option to make storage tiering "conditional" is to provide a custom condition as a PL/SQL function returning a BOOLEAN. How does it work? Can we use Heat Map information? Continue reading
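A minimal sketch of such a custom condition, with hypothetical names: the function receives the object number of the segment being evaluated, returns a BOOLEAN, and is referenced in the policy with the ON clause.

-- Hypothetical function: ADO passes in the object number of the segment
CREATE OR REPLACE FUNCTION my_tiering_condition (objn NUMBER)
RETURN BOOLEAN AS
BEGIN
  -- custom logic goes here, e.g. a lookup in a control table or in Heat Map views
  RETURN TRUE;
END my_tiering_condition;
/

-- Hypothetical table and tablespace names
ALTER TABLE sales ILM ADD POLICY
  TIER TO low_cost_ts ON my_tiering_condition;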

ILM storage tiering

Although not really a new term, Information Lifecycle Management has been enriched in 12c with two new features: Heat Map and ADO (Automatic Data Optimization). The latter allows us to define policies on data segments (tables, partitions), which consist of conditions and actions. The conditions are generally checked against the information gathered by the Heat Map, and the defined actions are then taken: compress the data (compression tiering) or move the data (storage tiering). I said "generally" because, on the one hand, one can define custom conditions (PL/SQL functions) that do not necessarily use the Heat Map and, on the other hand, because of the rather confusing situation around conditions with storage tiering. Continue reading
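As a quick orientation, a minimal sketch (parameter and view names as documented for 12c): Heat Map tracking is switched on at instance level, and the gathered segment-level access information can then be queried.

-- Enable Heat Map tracking (instance-level parameter)
ALTER SYSTEM SET HEAT_MAP = ON;

-- Segment-level access information gathered by the Heat Map
SELECT object_name, segment_write_time, segment_read_time, full_scan
FROM   user_heat_map_segment
ORDER  BY object_name;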

Debugging SCD2

This post is again about Slowly Changing Dimensions Type 2, but it focuses on another problem. If you need to validate the versioning mechanism, how can you do this? Or, in other words, having several versions of the same data (identified by the natural key), how do you check which fields have changed from version to version? Working with systems like Siebel CRM, which have tables with 500+ columns, this capability proved really useful.
Of course you can write some PL/SQL code and iterate through the columns to compare their values. But I prefer "pure SQL" solutions – let's see how this can be done. Continue reading
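One possible "pure SQL" building block, sketched here with a hypothetical CUSTOMERS_HIST table (and assuming the compared columns share a data type, as UNPIVOT requires): unpivot the versions into name/value pairs and compare each value with its predecessor per column.

WITH unpivoted AS (
  SELECT cust_id, valid_from, col_name, col_value
  FROM   customers_hist
  UNPIVOT INCLUDE NULLS (col_value FOR col_name IN (first_name, last_name, city))
), compared AS (
  SELECT u.*,
         LAG(col_value) OVER (PARTITION BY cust_id, col_name ORDER BY valid_from) AS old_value
  FROM   unpivoted u
)
SELECT cust_id, valid_from, col_name, old_value, col_value AS new_value
FROM   compared
WHERE  DECODE(col_value, old_value, 0, 1) = 1;  -- DECODE treats two NULLs as equal
-- Note: the first version of each key reports all non-NULL columns as "new"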

Data Historization II

In the previous post I showed how to use a combination of UNION ALL and GROUP BY for Slowly Changing Dimension Type 2 data historization. Since then I have run some tests to compare the performance of this approach with common methods in various situations. Continue reading

How to simplify data historization?

Maintaining data historization is a very common but time-consuming task in a data warehouse environment. You face it while loading a historized Core layer (also known as Enterprise Data Warehouse or Foundation Layer), Data Vault models, Slowly Changing Dimensions, etc. The common techniques involve outer joins and some kind of change detection. This change detection must handle NULL values correctly and is possibly the trickiest part. A very good overview by Dani Schnider can be found in his blog: Delta Detection in Oracle SQL
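To make the NULL handling concrete, here is a minimal sketch of the classic outer-join delta detection with hypothetical stage and core tables; DECODE is used because, unlike a plain "=" comparison, it treats two NULLs as equal.

-- Hypothetical names: find staged rows that are new or differ from the current core version
SELECT s.cust_id, s.first_name, s.city
FROM   stg_customers s
LEFT   JOIN core_customers c
       ON  c.cust_id  = s.cust_id
       AND c.valid_to IS NULL               -- current version only
WHERE  c.cust_id IS NULL                    -- new natural key
   OR  DECODE(s.first_name, c.first_name, 0, 1) = 1
   OR  DECODE(s.city,       c.city,       0, 1) = 1;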

But, on the other hand, SQL offers standard functionality with exactly the desired behaviour: GROUP BY or partitioning with analytic functions. Can it be used for this task? Does it make sense? What would the ETL process look like? Can we further speed up the task using partition exchange, and when does that make sense? I'll look at this in the next few posts. Continue reading
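One way such a GROUP BY based detection can look – a sketch with hypothetical tables, not necessarily the exact variant discussed in the posts: rows are combined with UNION ALL, a staged row that has an identical current core row ends up in a group of two and is filtered out, and GROUP BY compares NULLs as equal by design.

SELECT cust_id, first_name, city
FROM  (SELECT cust_id, first_name, city, 1 AS src FROM stg_customers
       UNION ALL
       SELECT cust_id, first_name, city, 0 AS src
       FROM   core_customers
       WHERE  valid_to IS NULL)             -- current versions only
GROUP  BY cust_id, first_name, city
HAVING COUNT(*) = 1 AND MAX(src) = 1;       -- present only in the stage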

MERGE and ORA-30926 – Part 2 or differences in write consistency between update and merge

In the first part we saw how Oracle actually executes the MERGE statement three times if it hits ORA-30926. To understand what's going on, we should refresh the concept behind so-called "update restarts", sometimes also called "mini-rollbacks". This concept is very well explained by Tom Kyte: Part I, Part II and Part III. If you are not familiar with this "write consistency" concept, I strongly encourage you to follow these links. Continue reading
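The effect can be observed with the kind of instrumentation Tom Kyte uses in those articles – a sketch with hypothetical names: a row-level trigger counts how often rows are processed, and after a restart the counter exceeds the number of rows the statement actually updated.

CREATE TABLE t (id NUMBER PRIMARY KEY, val NUMBER);

CREATE OR REPLACE PACKAGE restart_counter AS
  g_fired NUMBER := 0;
END restart_counter;
/

CREATE OR REPLACE TRIGGER t_bur
BEFORE UPDATE ON t
FOR EACH ROW
BEGIN
  restart_counter.g_fired := restart_counter.g_fired + 1;
END;
/

-- Run the UPDATE while a second session modifies the filtered column,
-- then compare restart_counter.g_fired with the statement's SQL%ROWCOUNT.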

MERGE and ORA-30926

If you've ever used MERGE, then it is quite probable that you are familiar with ORA-30926 – unable to get a stable set of rows in the source tables. You have probably also noticed that when you hit this error, your query takes longer. Three times longer, in fact, because the whole statement is executed three times, as the following tests show. Continue reading
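A minimal reproduction is easy to build – a sketch with hypothetical tables, where the source delivers two rows for the same target row:

CREATE TABLE target_t (id NUMBER PRIMARY KEY, val VARCHAR2(10));
CREATE TABLE source_t (id NUMBER, val VARCHAR2(10));

INSERT INTO target_t VALUES (1, 'old');
INSERT INTO source_t VALUES (1, 'new_1');
INSERT INTO source_t VALUES (1, 'new_2');
COMMIT;

MERGE INTO target_t t
USING source_t s
ON (t.id = s.id)
WHEN MATCHED THEN UPDATE SET t.val = s.val;
-- ORA-30926: unable to get a stable set of rows in the source tables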

Merging temporal intervals using MATCH_RECOGNIZE

Some time ago Philipp Salvisberg posted several thoughts about joining and merging temporal intervals. Recently I was looking for examples of using the new MATCH_RECOGNIZE clause introduced in 12c and found that these queries can be significantly simplified with it.
Continue reading
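A minimal sketch of the idea, with a hypothetical PRICE_INTERVALS table: adjacent or overlapping validity intervals per key are collapsed into one row per match.

SELECT *
FROM   price_intervals
MATCH_RECOGNIZE (
  PARTITION BY product_id
  ORDER BY valid_from
  MEASURES FIRST(valid_from) AS merged_from,
           MAX(valid_to)     AS merged_to
  ONE ROW PER MATCH
  PATTERN (strt cont*)
  DEFINE cont AS valid_from <= PREV(valid_to)   -- interval continues or overlaps
);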