{"id":871,"date":"2018-10-29T17:48:54","date_gmt":"2018-10-29T15:48:54","guid":{"rendered":"http:\/\/blog.sqlora.com\/en\/?p=871"},"modified":"2018-11-05T16:40:15","modified_gmt":"2018-11-05T14:40:15","slug":"polymorphic-table-functions-ptf-part-2-more-basics-with-some-deep-dive","status":"publish","type":"post","link":"https:\/\/blog.sqlora.com\/en\/polymorphic-table-functions-ptf-part-2-more-basics-with-some-deep-dive\/","title":{"rendered":"Polymorphic Table Functions (PTF) , Part 2 \u2013 More Basics With Some Deep-Dive"},"content":{"rendered":"<p>In the <a href=\"https:\/\/blog.sqlora.com\/en\/polymorphic-table-functions-ptf-part-1-basics\/\" rel=\"noopener\" target=\"_blank\">first part<\/a> of PTF series we went through a very basic example removing some columns and adding a new column with a constant value. Starting from the same example we&#8217;ll do something more meaningful in the second part. How about concatenating the values of the removed columns as CSV in a new column? <!--more--><\/p>\n<p>First, let&#8217;s extend the table we were using in Part 1 just to have some more columns to test with:<\/p>\n<pre class=\"brush: sql; title: ; notranslate\" title=\"\">\r\nDROP TABLE t;\r\n\r\nCREATE TABLE t (A NUMBER, B NUMBER, C NUMBER, V VARCHAR2(10));\r\n\r\nINSERT INTO t VALUES (1,2,3, 'ONE');\r\nINSERT INTO t VALUES (4,5,6, 'TWO');\r\nINSERT INTO t VALUES (7,8,9, 'THREE');\r\n\r\nCOMMIT;\r\n<\/pre>\n<p><em>Listing 1: Setup<\/em><\/p>\n<p>Back to the task. How should the PTF look like? What parameters do we need? Let&#8217;s make the function to accept a list of columns to stay. All remaining columns will be concatenated, so that we don&#8217;t need a separate list for them. Additionally we need a new column name. We can define a default column name and make this parameter optional though.  
<\/p>\n<pre class=\"brush: sql; title: ; notranslate\" title=\"\">\r\nCREATE OR REPLACE PACKAGE my_ptf_package AS\r\n \r\nFUNCTION describe (tab IN OUT dbms_tf.table_t\r\n                 , cols2stay IN dbms_tf.columns_t \r\n                 , new_col_name IN DBMS_ID DEFAULT 'AGG_COL')\r\n         RETURN dbms_tf.describe_t; \r\n\r\nPROCEDURE fetch_rows (new_col_name IN DBMS_ID DEFAULT 'AGG_COL');\r\n           \r\nEND my_ptf_package;\r\n\/  \r\n<\/pre>\n<p><em>Listing 2: package specification<\/em><\/p>\n<p>The implementation of the DESCRIBE method looks very similar to that from <a href=\"https:\/\/blog.sqlora.com\/en\/polymorphic-table-functions-ptf-part-1-basics\/\" rel=\"noopener\" target=\"_blank\">Part 1<\/a>. We just changed the semantic of the columns parameter.<\/p>\n<pre class=\"brush: sql; title: ; notranslate\" title=\"\">\r\nCREATE OR REPLACE PACKAGE BODY my_ptf_package AS\r\n \r\nFUNCTION describe (tab IN OUT dbms_tf.table_t\r\n                 , cols2stay IN dbms_tf.columns_t\r\n                 , new_col_name IN DBMS_ID DEFAULT 'AGG_COL') \r\n         RETURN dbms_tf.describe_t IS\r\nBEGIN\r\n  FOR I IN 1 .. 
tab.COLUMN.COUNT LOOP  \r\n     IF NOT tab.COLUMN(i).description.name MEMBER OF cols2stay THEN\r\n          tab.column(i).pass_through := false;\r\n          tab.column(i).for_read := true;\r\n     END IF;\r\n  END LOOP;\r\n   \r\n  RETURN dbms_tf.describe_t( \r\n           new_columns =&gt; \r\n               dbms_tf.columns_new_t( \r\n                  1 =&gt; dbms_tf.column_metadata_t(\r\n                           name =&gt; new_col_name,  \r\n                           TYPE =&gt; dbms_tf.type_varchar2)));\r\nEND; \r\n\r\n...\r\n\r\n<\/pre>\n<p><em>Listing 3: DESCRIBE method <\/em><\/p>\n<p><strong>FETCH_ROWS<\/strong><\/p>\n<p>Now let&#8217;s look at the implementation of <strong>FETCH_ROWS.<\/strong> As we already know from <a href=\"https:\/\/blog.sqlora.com\/en\/polymorphic-table-functions-ptf-part-1-basics\/\" rel=\"noopener\" target=\"_blank\">Part 1<\/a>, FETCH_ROWS can be called multiple times, but we don&#8217;t need to worry about this. Just fetch, process and output. How do we do this? What tools do we have? Well, there are two methods; both are somewhat complicated in their own way, but a combination of the two looks better. Grab some popcorn: this will take some time. We&#8217;ll look at both pure methods first and then at their combination, looking at some other little things like tracing along the way.<\/p>\n<p><em><strong>Working with row sets <\/strong><\/em><\/p>\n<p>The implementation of FETCH_ROWS is shown in Listing 4. We are working with the table type <strong>ROW_SET_T<\/strong> here, using variables of this type both for fetching the data into (<strong>inp_rs<\/strong>) and as an output structure (<strong>out_rs<\/strong>). For clarity, I&#8217;ll include the picture we&#8217;ve seen in the first part again:<\/p>\n<p><a href=\"https:\/\/blog.sqlora.com\/en\/wp-content\/uploads\/sites\/2\/2018\/04\/PTF_ROWSET2.png\"><img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/blog.sqlora.com\/en\/wp-content\/uploads\/sites\/2\/2018\/04\/PTF_ROWSET2.png\" alt=\"\" width=\"982\" height=\"578\" class=\"alignnone size-full wp-image-684\" srcset=\"https:\/\/blog.sqlora.com\/en\/wp-content\/uploads\/sites\/2\/2018\/04\/PTF_ROWSET2.png 982w, https:\/\/blog.sqlora.com\/en\/wp-content\/uploads\/sites\/2\/2018\/04\/PTF_ROWSET2-300x177.png 300w, https:\/\/blog.sqlora.com\/en\/wp-content\/uploads\/sites\/2\/2018\/04\/PTF_ROWSET2-768x452.png 768w, https:\/\/blog.sqlora.com\/en\/wp-content\/uploads\/sites\/2\/2018\/04\/PTF_ROWSET2-624x367.png 624w\" sizes=\"auto, (max-width: 982px) 100vw, 982px\" \/><\/a><\/p>\n<p>To fetch the values of the columns of interest we use the procedure <strong><em>dbms_tf.get_row_set<\/em><\/strong> (line 7). It fetches only the columns marked as FOR_READ into a data structure of the type shown above, <strong><em>dbms_tf.row_set_t<\/em><\/strong>, also returning the row and column count.<\/p>\n<p>Now, our task is to loop through rows (line 8) and columns (line 10) and produce the desired output. We are using <strong><em>dbms_tf.col_to_char<\/em><\/strong> for that (line 13). This function converts the column value to its string representation and expects a column data collection &#8211; <em><strong>inp_rs(c)<\/strong><\/em> in our case &#8211; and the row number as parameters.
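<\/p>\n<p>In code terms, the structure in the picture corresponds roughly to the following sketch (simplified from memory; the exact declarations are in the DBMS_TF package specification):<\/p>\n<pre class=\"brush: sql; title: ; notranslate\" title=\"\">\r\n-- simplified sketch of the DBMS_TF types, not the literal spec\r\nTYPE column_data_t IS RECORD (\r\n   description  column_metadata_t,  -- column name, type, ...\r\n   tab_varchar2 tab_varchar2_t,     -- the values, if the column is VARCHAR2\r\n   tab_number   tab_number_t,       -- the values, if the column is NUMBER\r\n   ...);                            -- one collection per supported datatype\r\n\r\nTYPE row_set_t IS TABLE OF column_data_t INDEX BY PLS_INTEGER;\r\n\r\n-- hence rowset(c).tab_number(r) addresses column c, row r\r\n<\/pre>\n<p>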
<\/p>\n<p>We are assigning concatenated values directly to the elements of VARCHAR2 collection of the output row set <strong>out_rs.<\/strong> Because we have only one column to output, we can just use <strong>out_rs(1).<\/strong> I find this kind of referencing nested structured types and arrays to be rather complicated.<\/p>\n<p>After the loop we use <strong>dbms_tf.put_row_set<\/strong> to pass the output row set back to the database (to the main query).<\/p>\n<pre class=\"brush: sql; title: ; notranslate\" title=\"\">\r\nPROCEDURE fetch_rows(new_col_name IN DBMS_ID DEFAULT 'AGG_COL')  IS\r\n inp_rs dbms_tf.row_set_t; \r\n colcnt PLS_INTEGER;\r\n rowcnt PLS_INTEGER;\r\n out_rs dbms_tf.row_set_t; \r\nBEGIN \r\n    dbms_tf.get_row_set(inp_rs, rowcnt, colcnt);\r\n    FOR r IN 1..rowcnt LOOP\r\n      out_rs(1).tab_varchar2(r) := '';\r\n      FOR c IN 1..colcnt LOOP\r\n        out_rs(1).tab_varchar2(r) := \r\n                            out_rs(1).tab_varchar2(r)||\r\n                            ';'||DBMS_TF.COL_TO_CHAR(inp_rs(c), r);\r\n      END LOOP;\r\n      out_rs(1).tab_varchar2(r) := \r\n                            ltrim (out_rs(1).tab_varchar2(r) ,';'); \r\n    END LOOP;\r\n   dbms_tf.put_row_set(out_rs); \r\nEND;\r\n...\r\n\r\nSQL&gt; SELECT * FROM my_ptf(t, COLUMNS(A));\r\n\r\n         A AGG_COL             \r\n---------- --------------------\r\n         1 2;3;&quot;ONE&quot;           \r\n         4 5;6;&quot;TWO&quot;           \r\n         7 8;9;&quot;THREE&quot;    \r\n\r\n\r\n<\/pre>\n<p><em>Listing 4: FETCH_ROWS procedure with get_row_set\/put_row_set<\/em><\/p>\n<p>We have two row sets here: input and output. Why two of them? What is the difference? Could we only use one? Basically, no, we can&#8217;t. The columns in the input row set are those, that we have defined as FOR_READ in the DESCRIBE method. They are called GET-columns in context of FETCH_ROWS. 
The columns in the output row set are in turn those that we have returned as NEW_COLUMNS from the DESCRIBE method. They are called PUT-columns here.<\/p>\n<p>Having played with PTF&#8217;s for a while now, I have often found that after some change the PTF just stopped working, leaving the result (new columns) empty. Most of the time the reason for this was careless handling of row sets. I&#8217;ve made some tests to better understand the interrelationships and <a href=\"https:\/\/blog.sqlora.com\/en\/polymorphic-table-functions-ptf-tinkering-with-rowsets\/\" rel=\"noopener\" target=\"_blank\">put them into a separate post<\/a> to avoid blowing up this post. <\/p>\n<p>The main findings: <\/p>\n<ul>\n<li>the output rowset holds only PUT-columns <\/li>\n<li>the order of columns is as defined in DESCRIBE (new columns); column names are not taken into account<\/li>\n<li>always populate row sets to be dense. They are not allowed to be sparse.<\/li>\n<li>adding columns to the rowset has no effect at all if they were not declared in DESCRIBE <\/li>\n<\/ul>\n<p><em><strong>Working with individual columns<\/strong><\/em><\/p>\n<p>Besides working with whole row sets, DBMS_TF also provides procedures to fetch and output the values of particular columns: <em><strong>get_col<\/strong><\/em> and <em><strong>put_col.<\/strong><\/em> As parameters they expect a ColumnId and a collection &#8211; to fetch data into, or holding the new data, respectively. What the hell is this ColumnId? It is the column number in the order created in the DESCRIBE method. The PTF gets the table parameter containing column metadata and thus also the column order. We then mark some or all of these columns as FOR_READ. This gives the order of GET-columns, which is relevant for <em><strong>get_col.<\/strong><\/em>  We also define new columns, which are part of the return datatype DESCRIBE_T. This in turn gives the order of PUT-columns to use with <em><strong>put_col.<\/strong><\/em> Remember, they are not the same. The same ColumnId is likely to mean different columns in the GET and PUT rowsets.<\/p>\n<p>Caution! There is a Copy&#038;Paste mistake in the documentation of <em><strong>put_col:<\/strong><\/em><br \/>\n<em>This procedure is used to put the read column values in the collection of scalar type.<br \/>\nThe column numbers are in the get column order as created in DESCRIBE method of PTF. <\/em><br \/>\nThese two sentences are just misleading. They should rather read something like this:<br \/>\n<em>This procedure is used to put the collection of scalar type back to the query. The column numbers are in the <strong>put column order<\/strong> as created in DESCRIBE method of PTF.<\/em><\/p>\n<p>The rough plan is to fetch the read-columns, process them in a loop and pass the result back. But using <em><strong>get_col<\/strong><\/em> to fetch doesn&#8217;t return the number of rows like <em><strong>get_row_set<\/strong><\/em> does. So, what can we do? There is a record type <strong>ENV_T<\/strong>, which can be filled with <em><strong>dbms_tf.get_env<\/strong><\/em> and contains metadata about the PTF&#8217;s execution state, including the number of rows, the get and put column lists and so on. <\/p>\n<p>We&#8217;ll turn some tracing on to see what the content of the ENV record is. The package DBMS_TF provides several overloaded TRACE procedures: one acts just like dbms_output.put_line, whereas the others accept various structured types as parameters and pretty-print their content. We\u2019ll use the latter kind to output the content of the ENV_T record.<\/p>\n<p>With ENV_T as a starting point, we can now implement the loop through the get columns and use their values to produce the result.
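<\/p>\n<p>By the way, the plain-text TRACE overload can be sprinkled anywhere into FETCH_ROWS, just like dbms_output.put_line. A minimal sketch (the exact overload signatures can be found in the DBMS_TF package specification):<\/p>\n<pre class=\"brush: sql; title: ; notranslate\" title=\"\">\r\n-- sketch: assumes a variable env of type dbms_tf.env_t;\r\n-- GET_ENV is only meaningful while the PTF is executing\r\nenv := dbms_tf.get_env();\r\ndbms_tf.trace('fetch_rows: row_count='||env.row_count); -- free text\r\ndbms_tf.trace(env);  -- pretty-prints the whole ENV_T record\r\n<\/pre>\n<p>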
<\/p>\n<pre class=\"brush: sql; title: ; notranslate\" title=\"\">\r\n\r\n...\r\n\r\nPROCEDURE fetch_rows(new_col_name IN DBMS_ID DEFAULT 'AGG_COL')  IS\r\n data_col dbms_tf.column_data_t;\r\n agg_col  dbms_tf.tab_varchar2_t; \r\n env    dbms_tf.env_t := dbms_tf.get_env();  \r\nBEGIN \r\n    dbms_tf.trace(env);\r\n    FOR r IN 1..env.row_count LOOP\r\n        agg_col(r) := '';\r\n    END LOOP;\r\n    FOR i IN 1..env.get_columns.count LOOP\r\n      IF env.get_columns(i).type = dbms_tf.type_varchar2 THEN \r\n         dbms_tf.get_col(i, data_col.tab_varchar2);\r\n         data_col.description.type := dbms_tf.type_varchar2;\r\n      ELSIF  env.get_columns(i).type = dbms_tf.type_number THEN \r\n         dbms_tf.get_col(i, data_col.tab_number);\r\n         data_col.description.type := dbms_tf.type_number;\r\n      END IF;\r\n      FOR r IN 1..env.row_count LOOP\r\n        agg_col(r) := agg_col(r)||';'||\r\n                                 DBMS_TF.COL_TO_CHAR(data_col, r);\r\n      END LOOP;\r\n    END LOOP;\r\n    FOR r IN 1..env.row_count LOOP\r\n       agg_col(r) := ltrim (agg_col(r) ,';'); \r\n    END LOOP;    \r\n   \r\n   dbms_tf.put_col(1,agg_col); \r\nEND;\r\n\r\n...\r\n\r\nSQL&gt; SELECT * FROM my_ptf(t, COLUMNS(A));\r\n\r\n         A AGG_COL             \r\n---------- --------------------\r\n         1 2;3;&quot;ONE&quot;           \r\n         4 5;6;&quot;TWO&quot;           \r\n         7 8;9;&quot;THREE&quot;     \r\n\r\n\r\n\r\nCurrent Parallel Environment:\r\n....Instance id:..........65535\r\n....Session id:...........65535\r\n....Slave Server Grouup:..65535\r\n....Slave Set number:.....65535\r\n....No of Sibling Slaves:.-1\r\n....Global Slave Number:..-1\r\n....No of local Slaves:...-1\r\n....Local Slave No:.......-1\r\n\r\n....Get Columns:\r\n\r\n........get column&#x5B;1] = \r\n__________Type:...............NUMBER\r\n__________Max Length:.........22\r\n__________Name:...............&quot;B&quot;\r\n__________Name 
Length:........3\r\n__________Precision:..........0\r\n__________Scale:..............-127\r\n\r\n\r\n........get column&#x5B;2] = \r\n__________Type:...............NUMBER\r\n__________Max Length:.........22\r\n__________Name:...............&quot;C&quot;\r\n__________Name Length:........3\r\n__________Precision:..........0\r\n__________Scale:..............-127\r\n\r\n\r\n........get column&#x5B;3] = \r\n__________Type:...............VARCHAR2\r\n__________Max Length:.........10\r\n__________Name:...............&quot;V&quot;\r\n__________Name Length:........3\r\n__________Charset Id:.........AL32UTF8\r\n__________Collation:..........USING_NLS_COMP\r\n\r\n\r\n....Put Columns:\r\n\r\n........put column&#x5B;1] = \r\n__________Type:...............VARCHAR2\r\n__________Max Length:.........4000\r\n__________Name:...............&quot;AGG_COL&quot;\r\n__________Name Length:........9\r\n__________Charset Id:.........AL32UTF8\r\n__________Collation:..........USING_NLS_COMP\r\n\r\n....Referenced Columns:\r\n\r\n........ column&#x5B;1] = &quot;AGG_COL&quot; referenced\r\n\r\n.... This is User query\r\n\r\n....Numbers of rows in this row set: 3\r\n\r\n<\/pre>\n<p><em>Listing 5: FETCH_ROWS procedure with get_col\/put_col<\/em><\/p>\n<p>As you can see, there is more to do: you have to know the column&#8217;s datatype to choose the right collection; otherwise you&#8217;ll get <em>ORA-62559: Data types mismatch (NUMBER<<==>>VARCHAR) for polymorphic table function<\/em>. We are also doing the loops and concatenation in a different way: instead of concatenating all columns within the row loop, we are now looping through the columns, concatenating just one column for all rows at a time.<\/p>\n<p><em><strong>Combining the two approaches<\/strong><\/em><\/p>\n<p>With the first approach, populating the output was somewhat inconvenient; with the second one, fetching the incoming data was. Considering that in most situations you will probably have more input than output columns, it is a good idea to combine both approaches: we just use <strong><em>dbms_tf.get_row_set<\/em><\/strong> to fetch the FOR_READ columns and <em><strong>dbms_tf.put_col<\/strong><\/em> for output.<\/p>\n<pre class=\"brush: sql; title: ; notranslate\" title=\"\">\r\n\r\n...\r\nPROCEDURE fetch_rows IS\r\n  rowset dbms_tf.row_set_t; \r\n  colcnt PLS_INTEGER;\r\n  rowcnt PLS_INTEGER;\r\n  agg_col dbms_tf.tab_varchar2_t; \r\nBEGIN \r\n    dbms_tf.get_row_set(rowset, rowcnt, colcnt);\r\n    FOR r IN 1..rowcnt LOOP\r\n      agg_col(r) := '';\r\n      FOR c IN 1..colcnt LOOP\r\n        agg_col(r) := agg_col(r)||';'\r\n                     ||DBMS_TF.COL_TO_CHAR(rowset(c), r);\r\n      END LOOP;\r\n      agg_col(r) := ltrim (agg_col(r) ,';'); \r\n    END LOOP;\r\n    dbms_tf.put_col(1, agg_col); \r\nEND;\r\n\r\n...\r\n\r\nSQL&gt; SELECT * FROM my_ptf(t, COLUMNS(A));\r\n\r\n         A AGG_COL\r\n---------- ----------------\r\n         1 2;3;&quot;ONE&quot;\r\n         4 5;6;&quot;TWO&quot;\r\n         7 8;9;&quot;THREE&quot;\r\n\r\nSQL&gt; SELECT * FROM my_ptf(t, COLUMNS(A,B));\r\n\r\n         A          B AGG_COL\r\n---------- ---------- ---------------\r\n         1          2 3;&quot;ONE&quot;\r\n         4          5 6;&quot;TWO&quot;\r\n         7          8 9;&quot;THREE&quot;\r\n\r\nSQL&gt; SELECT * FROM my_ptf(scott.emp, COLUMNS(empno));\r\n\r\n     EMPNO AGG_COL\r\n---------- --------------------------------------------------\r\n      7369 &quot;SMITH&quot;;&quot;CLERK&quot;;7902;&quot;17-DEC-80&quot;;800;;20\r\n      7499 &quot;ALLEN&quot;;&quot;SALESMAN&quot;;7698;&quot;20-FEB-81&quot;;1600;300;30\r\n      7521 &quot;WARD&quot;;&quot;SALESMAN&quot;;7698;&quot;22-FEB-81&quot;;1250;500;30\r\n\r\n<\/pre>\n<p><em>Listing 6: FETCH_ROWS procedure with get_row_set\/put_col<\/em><\/p>\n<p>Simple, isn&#8217;t it? No headaches with fetching and collections of the proper datatype, no need for the environment record, just one collection for output, whose population is simple too.
 And you can see the polymorphism in action: the same PTF returns a different output shape depending on its parameters and the input table.<\/p>\n<p><strong>JSON<\/strong><\/p>\n<p>Now what if we wanted a JSON document instead of CSV? Nothing easier than that! The function <strong>DBMS_TF.ROW_TO_CHAR<\/strong> does exactly that: it converts a row of the rowset to its string representation. The third parameter allows us to choose the desired format, dbms_tf.format_json in our case. I&#8217;m kidding \ud83d\ude09 JSON is the only supported format at the moment. But it would be nice to have CSV as well, wouldn&#8217;t it? <\/p>\n<pre class=\"brush: sql; title: ; notranslate\" title=\"\">\r\n...\r\nPROCEDURE fetch_rows IS\r\n  rowset dbms_tf.row_set_t; \r\n  rowcnt PLS_INTEGER;\r\n  json_col dbms_tf.tab_varchar2_t; \r\nBEGIN \r\n    dbms_tf.get_row_set(rowset, rowcnt);\r\n    FOR r IN 1..rowcnt LOOP\r\n      json_col(r) := DBMS_TF.ROW_TO_CHAR(rowset, r, \r\n                                   DBMS_TF.FORMAT_JSON);\r\n    END LOOP;\r\n    dbms_tf.put_col(1, json_col); \r\nEND;\r\n...\r\n\r\nSQL&gt; SELECT * FROM split_json(t, COLUMNS(A));\r\n\r\n         A JSON_COL\r\n---------- -------------------------------------\r\n         1 {&quot;B&quot;:2, &quot;C&quot;:3, &quot;V&quot;:&quot;ONE&quot;}\r\n         4 {&quot;B&quot;:5, &quot;C&quot;:6, &quot;V&quot;:&quot;TWO&quot;}\r\n         7 {&quot;B&quot;:8, &quot;C&quot;:9, &quot;V&quot;:&quot;THREE&quot;}\r\n\r\n<\/pre>\n<p><em>Listing 7: FETCH_ROWS generating JSON output<\/em><\/p>\n<p><strong>What&#8217;s next?<\/strong><\/p>\n<p>We have now seen how processing in FETCH_ROWS can be organized and how to produce new values using the input data (FOR_READ columns). But so far we&#8217;ve only covered the simplest cases of the row-semantic PTF. All needed columns were part of the same row and we produced just as many rows as the input table had.<\/p>\n<p>In the next section we will cover row replication and see how to produce fewer or more rows than the input data flow has.<\/p>\n<div class=\"crp-list-container\"><h3 class=\"crp-list-title\">Related Posts<\/h3><ul class=\"crp-list\"><li class=\"crp-list-item crp-list-item-image-none\"><div class=\"crp-list-item-title\"><a href=\"https:\/\/blog.sqlora.com\/en\/dynamic-pivot-with-polymorphic-table-function\/\" target=\"_blank\">Dynamic Pivot with Polymorphic Table Function?<\/a><\/div><\/li><li class=\"crp-list-item crp-list-item-image-none\"><div class=\"crp-list-item-title\"><a href=\"https:\/\/blog.sqlora.com\/en\/polymorphic-table-functions-ptf-part-4-table-semantic-ptf\/\" target=\"_blank\">Polymorphic Table Functions (PTF), Part 4 - Table Semantic PTF<\/a><\/div><\/li><li class=\"crp-list-item crp-list-item-image-none\"><div class=\"crp-list-item-title\"><a href=\"https:\/\/blog.sqlora.com\/en\/polymorphic-table-functions-ptf-part-3-row-replication\/\" target=\"_blank\">Polymorphic Table Functions (PTF) , Part 3 \u2013 Row Replication<\/a><\/div><\/li><li class=\"crp-list-item crp-list-item-image-none\"><div class=\"crp-list-item-title\"><a href=\"https:\/\/blog.sqlora.com\/en\/polymorphic-table-functions-ptf-tinkering-with-rowsets\/\" target=\"_blank\">Polymorphic Table Functions (PTF) - Tinkering with Rowsets<\/a><\/div><\/li><li class=\"crp-list-item crp-list-item-image-none\"><div class=\"crp-list-item-title\"><a href=\"https:\/\/blog.sqlora.com\/en\/polymorphic-table-functions-ptf-part-1-basics\/\" target=\"_blank\">Polymorphic Table Functions (PTF) ,  Part 1 - Basics<\/a><\/div><\/li><li class=\"crp-list-item crp-list-item-image-none\"><div class=\"crp-list-item-title\"><a href=\"https:\/\/blog.sqlora.com\/en\/polymorphic-table-functions-example-not-transposing-rows-to-columns\/\" target=\"_blank\">Polymorphic Table Functions Example \u2013 (NOT) Transposing Rows to Columns<\/a><\/div><\/li><li class=\"crp-list-item crp-list-item-image-none\"><div class=\"crp-list-item-title\"><a href=\"https:\/\/blog.sqlora.com\/en\/polymorphic-table-functions-example-transposing-columns-to-rows\/\" target=\"_blank\">Polymorphic Table Functions Example - Transposing Columns To Rows<\/a><\/div><\/li><li class=\"crp-list-item crp-list-item-image-none\"><div class=\"crp-list-item-title\"><a href=\"https:\/\/blog.sqlora.com\/en\/polymorphic-table-functions\/\" target=\"_blank\">Polymorphic Table Functions<\/a><\/div><\/li><\/ul><\/div>\n","protected":false},"excerpt":{"rendered":"<p>In the first part of the PTF series we went through a very basic example, removing some columns and adding a new column with a constant value. Starting from the same example, we&#8217;ll do something more meaningful in this second part. 
How about concatenating the values of the removed columns as CSV in a new column?<\/p>\n","protected":false},"author":1,"featured_media":684,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[66,1,2,9,67,3],"tags":[74,84,72,70],"class_list":["post-871","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-18c","category-general","category-oracle","category-plsql","category-ptf","category-sql","tag-18c","tag-fetch_rows","tag-polymorphic-table-function","tag-ptf"],"_links":{"self":[{"href":"https:\/\/blog.sqlora.com\/en\/wp-json\/wp\/v2\/posts\/871","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/blog.sqlora.com\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/blog.sqlora.com\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/blog.sqlora.com\/en\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/blog.sqlora.com\/en\/wp-json\/wp\/v2\/comments?post=871"}],"version-history":[{"count":44,"href":"https:\/\/blog.sqlora.com\/en\/wp-json\/wp\/v2\/posts\/871\/revisions"}],"predecessor-version":[{"id":984,"href":"https:\/\/blog.sqlora.com\/en\/wp-json\/wp\/v2\/posts\/871\/revisions\/984"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/blog.sqlora.com\/en\/wp-json\/wp\/v2\/media\/684"}],"wp:attachment":[{"href":"https:\/\/blog.sqlora.com\/en\/wp-json\/wp\/v2\/media?parent=871"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/blog.sqlora.com\/en\/wp-json\/wp\/v2\/categories?post=871"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/blog.sqlora.com\/en\/wp-json\/wp\/v2\/tags?post=871"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}