Category Archives: Trivadis

Online Statistics Gathering for ETL – Part 3

While preparing my session on Online Statistics Gathering for ETL for the DOAG conference, I noticed some points that I hadn't covered in the previous two blog posts. The first point shows a problem that may arise if indexes are involved. The second one is about partition exchange load and completes the topic of partitioned tables started in part 2. And since no blog post on Oracle products is complete nowadays without mentioning the cloud, the third point is about the Autonomous Data Warehouse Cloud Service and its Online Statistics Gathering improvements. Continue reading
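Just to sketch the index point (with made-up table and index names, not the example from the post): a direct-path load into an empty table triggers online statistics gathering for the table and its columns, but index statistics are typically not gathered along the way and still need a separate DBMS_STATS call.

CREATE TABLE t_stage (id NUMBER, val VARCHAR2(30));
CREATE INDEX t_stage_idx ON t_stage (id);

-- direct-path insert into the empty table gathers table/column statistics online
INSERT /*+ APPEND */ INTO t_stage
SELECT level, 'value ' || level FROM dual CONNECT BY level <= 100000;
COMMIT;

-- table statistics are there ...
SELECT num_rows, last_analyzed FROM user_tab_statistics WHERE table_name = 'T_STAGE';
-- ... index statistics are usually still missing
SELECT num_rows, last_analyzed FROM user_ind_statistics WHERE index_name = 'T_STAGE_IDX';

-- one way to close the gap
EXEC dbms_stats.gather_index_stats(user, 'T_STAGE_IDX')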

Polymorphic Table Functions (PTF) – Tinkering with Rowsets

While writing the second "basics" post on PTF, I discovered that there were many more details worth mentioning than would be acceptable for a "basics" post; they would have blown it up anyway 😉 So I decided to separate those tests and findings into this (more deep-dive) post. Continue reading

Polymorphic Table Functions (PTF), Part 1 – Basics

I have already posted some examples on Polymorphic Table Functions in Oracle 18c over the last months. I quickly realized how difficult it is to explain a completely new feature using advanced examples, and wanted to write a series of posts starting from the very basics. Now that Germany's Oracle User Group (DOAG) has accepted my presentation on PTF for their annual conference, it is time to do it.

Continue reading

Conditional Logic in SQL

A few days ago Sven Weller published an excellent post about writing conditional logic in SQL, and it reminded me of an example I used in one of my presentations. It is also about conditional logic in SQL, but covers some perhaps less obvious use cases in the context of function calls and sorting. Continue reading
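As a teaser, here is a minimal sketch of the kind of conditional sorting I have in mind (table and column names are made up): a CASE expression in the ORDER BY first separates open and closed orders and then sorts each group by a different key; the same pattern also lets you skip an expensive function call when a condition is not met.

SELECT order_id, status, order_date, amount
FROM   orders
ORDER  BY CASE status WHEN 'OPEN' THEN 0 ELSE 1 END,
          CASE WHEN status = 'OPEN' THEN order_date END,
          CASE WHEN status <> 'OPEN' THEN amount END DESC;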

Polymorphic Table Functions Example – Transposing Columns To Rows

Hey, Oracle 18c is now available in the cloud and for engineered systems! For more than a week now, actually. That also means you can play with it at LiveSQL. And of course you can try polymorphic table functions (PTF)! At least, that's what I did this weekend 😉 And here is my first working example. Continue reading

ILM – is it possible to mix ADO policies for compression and storage?

I never thought I would write much about Information Lifecycle Management, as I am actually a developer and not a DBA. I think it is indeed a topic mostly relevant for DBAs. But it is generally a good thing if developers and DBAs have a deep understanding of each other's jobs, isn't it? We give an overview of the ILM features in our training "12c New Features for Developers", and I'm one of the course instructors for it. That's why this is by now my third post about it, just to clarify some questions that are not so obviously documented.

After looking at ADO conditions for storage tiering policies and at user-defined PL/SQL conditions in previous posts, I was curious whether it is possible to mix storage and compression policies for the same segment. Wouldn't it make sense, for example, to move the segment to a low-cost tablespace and compress the data in the same action as well? I've sometimes heard the opposite statement. But it is very simple to test it rather than just trust it. Let's try. Continue reading
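To make the question concrete, here is a minimal sketch of the combination being tested (table and tablespace names are made up): one ADO policy for segment-level compression and one for storage tiering on the same table.

ALTER TABLE sales_hist ILM ADD POLICY
  ROW STORE COMPRESS ADVANCED SEGMENT
  AFTER 90 DAYS OF NO MODIFICATION;

ALTER TABLE sales_hist ILM ADD POLICY
  TIER TO low_cost_ts;

-- check what has been registered for the table
SELECT * FROM user_ilmdatamovementpolicies;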

Debugging SCD2

This post is again about Slowly Changing Dimensions Type 2, but focuses on another problem. If you need to validate the versioning mechanism, how can you do this? Or, in other words, having several versions of the same data (identified by the natural key), how can you check which fields have changed from version to version? Working with systems like Siebel CRM, which have tables with 500+ columns, this capability proved really useful.
Of course you can write some PL/SQL code and iterate through the columns to compare their values. But I'm a fan of "pure SQL" solutions, so let's see how this can be done. Continue reading
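Here is a minimal sketch of the direction such a pure SQL check can take (table and column names are made up, not the Siebel example): LAG gives access to the previous version of each natural key, and DECODE compares the values NULL-safely, so the changed columns can be listed per version.

SELECT cust_id,
       valid_from,
       CASE WHEN DECODE(name,   LAG(name)   OVER (PARTITION BY cust_id ORDER BY valid_from), 0, 1) = 1 THEN 'NAME '   END ||
       CASE WHEN DECODE(city,   LAG(city)   OVER (PARTITION BY cust_id ORDER BY valid_from), 0, 1) = 1 THEN 'CITY '   END ||
       CASE WHEN DECODE(status, LAG(status) OVER (PARTITION BY cust_id ORDER BY valid_from), 0, 1) = 1 THEN 'STATUS ' END
         AS changed_columns
FROM   customers_hist
ORDER  BY cust_id, valid_from;

Note that the first version of each key reports its non-NULL columns as changed, because LAG returns NULL there.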

How to simplify data historization?

Maintaining data historization is a very common but time-consuming task in a data warehouse environment. You face it while loading a historized Core Layer (also known as Enterprise Data Warehouse or Foundation Layer), Data Vault models, Slowly Changing Dimensions, etc. The common techniques involve outer joins and some kind of change detection. This change detection must handle NULL values correctly, which is possibly the trickiest part. A very good overview by Dani Schnider can be found in his blog post: Delta Detection in Oracle SQL

But, on the other hand, SQL offers standard functionality with exactly the desired behaviour: GROUP BY, or partitioning with analytic functions. Can it be used for this task? Does it make sense? And what would the ETL process look like? Can we further speed up the task using partition exchange, and when does that make sense? I'll look at this in the next few posts. Continue reading
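As a first taste of the idea (table and column names are made up): GROUP BY already treats NULLs as equal, so stacking the new delivery on top of the current versions and grouping over all payload columns finds the differences without any NVL or DECODE juggling.

SELECT cust_id, name, city, status
FROM  (SELECT cust_id, name, city, status FROM stage_customers      -- new delivery
       UNION ALL
       SELECT cust_id, name, city, status FROM dim_customers        -- current versions only
       WHERE  valid_to IS NULL)
GROUP  BY cust_id, name, city, status
HAVING COUNT(*) = 1;

In a real ETL process you would still need a marker column to tell the new version of a changed key apart from the old one; how that fits together with analytic functions and partition exchange is part of what the next posts will look at.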