Tom, nicely done, and not even a need to dip into the analytic functions! One small thing though....
The total I need to show (the column you labeled TODATE) needs to be the total revenue per customer irrespective of time. In other words, this sum needs to be _not_ the sum of Q4_AGO, Q3_AGO, Q2_AGO and LAST_Q_AGO; rather, all revenue ever generated for that customer. As such, I'm running into trouble, because then wouldn't failing to restrict t1.date_claimed by data from recent_qtrs result in a Cartesian join?
As always, Tom, your clear help is much appreciated. Thank you.
Art
The query would simply be:

ops$tkyte@ORA817DEV.US.ORACLE.COM> select cust_no,
         sum(case when date_claimed between start_date1 and end_date1
                  then revenue else 0 end) q4_ago,
         sum(case when date_claimed between start_date2 and end_date2
                  then revenue else 0 end) q3_ago,
         sum(case when date_claimed between start_date3 and end_date3
                  then revenue else 0 end) q2_ago,
         sum(case when date_claimed between start_date4 and end_date4
                  then revenue else 0 end) last_q_ago,
         sum(revenue) todate
    from t1, recent_qtrs
   group by cust_no
  /

and that is it. full table scan, add up the 4 qtrs -- add ALL revenue in the last column and group by cust_no.
SQL> set verify off

-- simple example of dynamically setting column name
-- grab sysdate into a variable called The_Date
SQL> column My_Date new_val The_Date noprint
SQL> select sysdate My_Date from dual;

-- Use this variable as a column heading in the next query
SQL> column revenue format 99999999 heading &The_Date
SQL> select revenue from t1 where rownum < 2;

01-MAY-02
---------
     2244

-- so extending this to the example...
SQL> column start_date1 new_val v_q4_ago noprint
SQL> column start_date2 new_val v_q3_ago noprint
SQL> column start_date3 new_val v_q2_ago noprint
SQL> column start_date4 new_val v_q1_ago noprint
SQL> select to_char(start_date1, 'Q')||'Q'||to_char(start_date1, 'YY') start_date1,
            to_char(start_date2, 'Q')||'Q'||to_char(start_date2, 'YY') start_date2,
            to_char(start_date3, 'Q')||'Q'||to_char(start_date3, 'YY') start_date3,
            to_char(start_date4, 'Q')||'Q'||to_char(start_date4, 'YY') start_date4
       from recent_qtrs;

SQL> column q4_ago heading &v_q4_ago
SQL> column q3_ago heading &v_q3_ago
SQL> column q2_ago heading &v_q2_ago
SQL> column last_q_ago heading &v_q1_ago
SQL> select cust_no,
            sum(case when date_claimed between start_date1 and end_date1
                     then revenue else 0 end) q4_ago,
            sum(case when date_claimed between start_date2 and end_date2
                     then revenue else 0 end) q3_ago,
            sum(case when date_claimed between start_date3 and end_date3
                     then revenue else 0 end) q2_ago,
            sum(case when date_claimed between start_date4 and end_date4
                     then revenue else 0 end) last_q_ago,
            sum(revenue) todate
       from t1, recent_qtrs
      group by cust_no;

   CUST_NO       2Q01       3Q01       4Q01       1Q02     TODATE
---------- ---------- ---------- ---------- ---------- ----------
         1    2645215    2591672    2955139    2561960   32161036
         2    2523765    2683104    2554243    2748321   31985326
         3    2722713    2729052    2636726    2676797   32197917
         4    2636179    2591297    2767145    2379181   32139968

SQL>
Hi Tom,
I read your Expert 1-to-1 and understand the pivot template in the book. But I have no idea how to use a self join or a subquery as part of the partition clause.
I have a table weekly_sale(dept_no, transact_date, sales)
all the transact_date values are Saturday dates.
I'd like to get a report like
dept_no transact_date ly_transact_date sales ly_sales
------ ----------- ----------- ---- ------
where ly_transact_date is the same fiscal week of last year (e.g. week 52). I have a table called bn_period (transact_date, year, month, date_of_wk, fiscal_yr, fiscal_wk)
ly_transact_date is obtained by the following self join or subquery:
select p.transact_date
from bn_period p, bn_period p1
where p.fiscal_yr||p.fiscal_wk||p.dow = p1.fiscal_yr-1 ||p1.fiscal_wk||6
and trunc(p1.transact_date) = some_transAct_date
or sub-query:
select transact_date as ly_from_date
from bn_period
where fiscal_yr||fiscal_wk||dow =(
select fiscal_yr-1||fiscal_wk||6
from bn_period
where transact_date = 'some_transact_date'
);
By the way, what is the performance preference regarding the self join and the subquery?
Thanks!
Steve
<b>time to move on to the chapter on analytic functions! no self join, LAG() and LEAD() rock and roll...</b>

ops$tkyte@ORA920> create table weekly_sale( deptno int, transact_date date, sales number );
Table created.

ops$tkyte@ORA920> insert into weekly_sale values( 10, to_date( '05-jan-2002' ), 1000 );
1 row created.

ops$tkyte@ORA920> insert into weekly_sale values( 10, to_date( '04-jan-2003' ), 1100 );
1 row created.

ops$tkyte@ORA920> begin
        for x in ( select * from weekly_sale )
        loop
            for i in 1 .. 50
            loop
                insert into weekly_sale values( 10, x.transact_date+7*i, x.sales+i );
            end loop;
        end loop;
     end;
     /
PL/SQL procedure successfully completed.

ops$tkyte@ORA920> select *
       from (
       select deptno,
              transact_date,
              lag(transact_date) over (partition by week order by transact_date) ly_trans_date,
              sales,
              lag(sales) over (partition by week order by transact_date) ly_sales
         from ( select deptno,
                       transact_date,
                       to_char(transact_date,'IW') week,
                       sales
                  from weekly_sale
                 where transact_date >= add_months(trunc(sysdate,'Y'),-12)
                   and transact_date <  add_months(trunc(sysdate,'Y'), 12)
              )
            )
      where ly_trans_date is not null
      /

    DEPTNO TRANSACT_ LY_TRANS_      SALES   LY_SALES
---------- --------- --------- ---------- ----------
        10 04-JAN-03 05-JAN-02       1100       1000
        10 11-JAN-03 12-JAN-02       1101       1001
        10 18-JAN-03 19-JAN-02       1102       1002
        10 25-JAN-03 26-JAN-02       1103       1003
        10 01-FEB-03 02-FEB-02       1104       1004
        10 08-FEB-03 09-FEB-02       1105       1005

<b>use whatever function you want to compute "fiscal_wk"</b>
Hi tom,
Many thanks for your answer!
Before I got your solution, I came up with the following SQL. I also use fiscal_wk as the partition key. (In the real world I need to add more fields and tables.)
select merch_div_num,
dept_num,
dept_name,
subject_code,
subject_desc,
transact_date,
max(decode(rn,1, sale_qty,0)) ty_sale_qty,
max(decode(rn,1, sale_amt,0)) ty_sale_amt,
max(decode(rn,1, sale_ext_amt,0)) ty_sale_ext_amt,
max(decode(rn,2, sale_qty,0)) ly_sale_qty,
max(decode(rn,2, sale_amt,0)) ly_sale_amt,
max(decode(rn,2, sale_ext_amt,0)) ly_sale_ext_amt
From
(select w.merch_div_num merch_div_num,
i.dept_num,
d.dept_name,
i.subject_code,
s.subject_desc,
w.transact_date,
sum(w.sale_qty) as sale_qty,
sum(w.sale_amt) as sale_amt,
sum(w.sale_qty * i.retail_amt) as sale_ext_amt,
p.fiscal_wk,
row_number() over (partition by w.merch_div_num, i.dept_num, d.dept_name, i.subject_code, s.subject_desc,p.fiscal_wk
order by p.fiscal_yr desc ) rn
from
bn_wkly_sales_item w,
bn_item i,
bn_period p,
bn_department d,
bn_subject s
where w.sysid = i.sysid
and i.dept_num = d.dept_num
and w.transact_date = p.transact_date
and i.subject_code = s.subject_code
group by w.merch_div_num, i.dept_num, d.dept_name, i.subject_code, s.subject_desc,w.transact_date, p.fiscal_wk,p.fiscal_yr
) t
group by t.merch_div_num, t.dept_num, t.dept_name, t.subject_code, t.subject_desc,t.transact_date
But the problem is that when ty_sale_qty and ty_sale_amt have data, ly_sale_qty and ly_sale_amt are 0, and vice versa.
Could you tell me what is the problem?
I'll try your way because your solution is much simpler than mine.
Thanks!
Steve
Hi Tom,
Thank you for your great solution. I was a little bit confused about how analytic functions work. In the following SQL, the clause 'partition by week order by transact_date' will group data with the same week:
transact_date sales ...
--------- ----
05-JAN-02 1000
04-JAN-03 1100
....
How does LAG() pick up the previous one instead of the next one? And if I use 'partition by week order by transact_date desc', the result is wrong.
Can you explain to me?
Thanks
Steve
select *
from (
select deptno,
transact_date,
lag(transact_date) over (partition by week order by transact_date)
ly_trans_date,
sales,
lag(sales) over (partition by week order by transact_date) ly_sales
from ( select deptno,
transact_date,
to_char(transact_date,'IW') week,
sales
from weekly_sale
where transact_date >= add_months(trunc(sysdate,'Y'),-12)
and transact_date < add_months(trunc(sysdate,'Y'), 12)
)
)
where ly_trans_date is not null
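For what it's worth, a minimal way to see why the ordering matters, using the weekly_sale table from the example above: within each IW week there are at most two rows (last year's Saturday and this year's Saturday), and LAG() looks exactly one row back in the ORDER BY direction. Ascending order therefore pairs the current-year row with last year's row, while descending order flips the pairing (the older row then "sees" the newer one).

-- a sketch only: both orderings side by side for dept 10
select transact_date,
       lag(transact_date) over (partition by to_char(transact_date,'IW')
                                order by transact_date)      prev_when_asc,
       lag(transact_date) over (partition by to_char(transact_date,'IW')
                                order by transact_date desc) prev_when_desc
  from weekly_sale
 where deptno = 10;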
Hi
the query in your answer is great, I just wonder if it's possible to make it dynamic: for example, instead of 4 quarters I want the first 5 months :-?
Do we have to use a temporary table for that, or some user-defined datatype?
I need every item_code, the total for a specific month if it shipped, 0 if the item wasn't shipped, and the month.
This select sta-t gives me items which were shipped.
select i.item_code,sum(a.itm_shipped) as shipped, to_char(b.ship_date,'mon/yy') as monthly
from inventory i,order_items a, orders b
where a.order_id=b.order_id and
i.account_id=b.account_id and
i.inventory_id=a.inventory_id(+)
and ship_date >= to_date('1/1/2004', 'mm/dd/yy')
and ship_date <= to_date('3/1/2004', 'mm/dd/yy')
group by i.item_code,to_char(b.ship_date,'mon/yy');
item_code shipped month
A-100 34 jan/04
A-101 23 jan/04
A-101 33 feb/04
A-234 66 feb/04
Items A-222, A-123 weren't shipped and they are not in this query.
I have to show:
item_code jan/04 feb/04
A-100 34 0
A-101 23 33
A-123 0 0
A-222 0 0
A-234 0 66
thanks.
sta-t? did we lose some bytes there? anyway....
you have half way outer joins. Anytime you see the construct:
where t1.c = t2.c1(+)
and t2.c2 = <anything, literal, another column>
you know you either
a) made a horrible mistake
b) are just making the optimizer not able to do its job.
You have:
where a.order_id=b.order_id
and i.inventory_id=a.inventory_id(+)
well, b.order_id could never ever be equal to a.order_id if we "made up a".
So, it seems you want to join orders to order_items, sum up the itm_shipped for all inventory_ids in the period in question and then outer from inventory to that result.
select i.item_code, sum(x.shipped), x.monthly
from inventory i,
(select a.inventory_id,
a.itm_shipped shipped,
trunc(b.ship_date,'mm') monthly
from order_items a,
orders b
where a.order_id = b.order_id
and b.ship_date between to_date('1/1/2004', 'mm/dd/yy')
and to_date('3/1/2004', 'mm/dd/yy')) X
where i.inventory_id = x.inventory_id(+)
group by i.item_code, x.monthly;
(not run, just coded off the cuff, please understand the concept -- join order_items to orders to get inventory_id, shipped, and the month -- then outerjoin that to inventory and aggregate as needed -- and verify we got it right)
Tom, thanks a lot. It works great.
But I need, I suppose, a pivot:
item_code jan/04 feb/04
A-100 34 0
A-101 23 33
A-123 0 0
A-222 0 0
A-234 0 66
ok, so pivot it then?
select i.item_code,
sum(decode(x.monthly, jan, x.shipped )),
sum(decode(x.monthly, feb, x.shipped )),
sum(decode(x.monthly, mar, x.shipped ))
from inventory i,
(select a.inventory_id,
a.itm_shipped shipped,
trunc(b.ship_date,'mm') monthly
from order_items a,
orders b
where a.order_id = b.order_id
and b.ship_date between to_date('1/1/2004', 'mm/dd/yy')
and to_date('3/1/2004', 'mm/dd/yy')) X
where i.inventory_id = x.inventory_id(+)
group by i.item_code;
I do not know the dates ahead of time; let's say they are fromDate and ToDate.
Thanks.
then you do not know the number of columns and pivoting will be impossible in SQL, you'll have to do it in the client OR once you are given the to/from -- dynamically GENERATE the sql.
SQL wants to have "n" columns where N is a constant. If you don't know the to/from, you cannot pivot in this case (not until you do know anyway)
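As a sketch of what "dynamically GENERATE the sql" could look like once the from/to dates are known (gen_shipped_pivot, p_from and p_to are invented names; the inner query is the one coded above; untested):

create or replace function gen_shipped_pivot( p_from in date, p_to in date )
  return sys_refcursor
as
  l_sql varchar2(32767);
  l_rc  sys_refcursor;
  l_mon date := trunc(p_from,'mm');
begin
  l_sql := 'select i.item_code';
  -- one generated sum(decode(...)) column per month in the range,
  -- headed like "jan/04"
  while l_mon <= trunc(p_to,'mm')
  loop
    l_sql := l_sql || ', nvl(sum(decode(x.monthly, date ''' ||
             to_char(l_mon,'yyyy-mm-dd') ||
             ''', x.shipped)),0) "' || to_char(l_mon,'mon/yy') || '"';
    l_mon := add_months(l_mon,1);
  end loop;
  l_sql := l_sql || ' from inventory i,
          (select a.inventory_id, a.itm_shipped shipped,
                  trunc(b.ship_date,''mm'') monthly
             from order_items a, orders b
            where a.order_id = b.order_id
              and b.ship_date between :f and :t) x
        where i.inventory_id = x.inventory_id(+)
        group by i.item_code';
  open l_rc for l_sql using p_from, p_to;
  return l_rc;
end;
/

The client would then just fetch from the returned ref cursor; only the generated column list changes from call to call.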
Hello Tom!
For your consideration (v9.2.0.4.0 - 64bit):
create table j_test1
(col_a1 varchar2(5),
col_a2 varchar2(5),
col_a3 varchar2(5),
gimme varchar2(5));
insert into j_test1 values ('X',null,null,'Foo1');
insert into j_test1 values ('X','Y',null,'Foo2');
insert into j_test1 values ('X','Y','Z','Foo3');
insert into j_test1 values (null,'Y',null,'Foo4');
insert into j_test1 values (null,'Y','Z','Foo5');
insert into j_test1 values (null,null,'Z','Foo6');
insert into j_test1 values (null,null,null,'Foo7');
----------------------------------------------------
So my task is to create a function such that:
if (parm1 = col_a1 and
parm2 = col_a2 and
parm3 = col_a3), return Foo3
elsif
(parm1 = col_a1 and
parm2 = col_a2), return Foo2
elsif
(parm1 = col_a2 and -- notice the flip!
parm2 = col_a1), return Foo2
elsif
(parm1 = col_a1), return Foo1
elsif
(parm2 = col_a2), return Foo4
elsif
(parm3 = col_a3), return Foo6
else return Foo7
end if;
Note a few gotcha's:
1) Since the users sometimes confuse col_a1 and col_a2,
try finding a match via a parameter flip for the
third matching criteria
2) According to theory (ha), Foo5 should never occur.
Q: Is this "analytic"ical (a single SQL query),
or a "function" sort of thing (w/CASE)?
I've tried the former (love learning analytics),
bugged my teammates, but we can't figure out a
"single query" that would work.
Well, I'm stubborn, of course, and can't quit
on this until I've read your opinion/advice.
Many thanks!
- J
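One possible shape for a "single query" version, purely as a sketch: score each row by which branch of the if/elsif it satisfies and keep the best score. Here :p1, :p2 and :p3 stand for the three parameters; ties between rows that satisfy the same branch, and the fall-through to Foo7 when nothing matches, would still need a decision (for example an NVL in the caller or a UNION ALL of a default row).

select gimme
  from (
        select gimme, rnk
          from ( select gimme,
                        case when col_a1 = :p1 and col_a2 = :p2 and col_a3 = :p3 then 1
                             when col_a1 = :p1 and col_a2 = :p2                  then 2
                             when col_a2 = :p1 and col_a1 = :p2                  then 3  -- the "flip"
                             when col_a1 = :p1                                   then 4
                             when col_a2 = :p2                                   then 5
                             when col_a3 = :p3                                   then 6
                        end rnk
                   from j_test1 )
         where rnk is not null
         order by rnk
       )
 where rownum = 1;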
Here is the scenario:
How would I display multiple time periods in a single report? This seems like it would be a common request from
reporting users.
REQUIREMENT: Build a report that shows Sales by Account by Zip by Vendor by Channel by Product for a given month, with reference to the Average Sales for a given range of months and the selected variance range. A sample report would look like this:
Account Zip Vendor Channel Product Month Sales Avg Sales from 01/04-10/04 Variance
------- ----- ------ ------- ------- ----- ------ ---------------------------- --------
ACCT 1 12345 V1 CH1 P1 11/04 $2,000 $2,500 -20%
ACCT 2 12346 V2 CH2 P2 11/04 $1,000 $1,150 +15%
ACCT 3 12347 V3 CH3 P3 11/04 $0 $3,000 -100%
The user wants to compare sales for a selected month to the average sales for a selected range of months i.e. Current Month Sales compared to average Sales 01/2004 thru 03/2004. So
1) The first month is dynamic (Current Month, Last Month, Some Month a year
ago, etc.)
2) The consecutive range of months used to calculate the average sales is dynamic. It could coincide with quarter, semester, trimester, etc. but could also be a random 3 consecutive months at any point historically. The user does not want a fixed months solution i.e. Last 3 months, last 6 months, last 12 months, etc.
Database is Oracle 9i and the data warehouse is a typical star schema with Sales being the central fact table and the remaining fields belonging to individual dimensions with a 1:n join on the fact.
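A sketch of the shape such a query could take; all object and column names below (sales_fact, sale_month, sales_amt and so on) are invented since the actual star schema isn't posted, and sale_month is assumed to be stored as the first day of the month. The report month and the averaging range arrive as binds, the aggregation happens once, and the variance falls out of the two measures:

select account_id, zip_code, vendor_id, channel_id, product_id,
       month_sales,
       avg_sales,
       round( 100 * (month_sales - avg_sales) / nullif(avg_sales,0) ) variance_pct
  from (
        select account_id, zip_code, vendor_id, channel_id, product_id,
               sum( case when sale_month = :rpt_month
                         then sales_amt else 0 end )                    month_sales,
               sum( case when sale_month between :avg_from and :avg_to
                         then sales_amt else 0 end )
                 / ( months_between(:avg_to, :avg_from) + 1 )           avg_sales
          from sales_fact
         where sale_month = :rpt_month
            or sale_month between :avg_from and :avg_to
         group by account_id, zip_code, vendor_id, channel_id, product_id
       );

Because the report month and the range are just three bind values, any consecutive run of months works without changing the SQL.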
Hi Tom,
Thanks for the wonderful answers that you give. I have a query in which I have to group my sales data weekly, going back from sysdate (as sysdate-7). Say today is the 28th of September; if I have to see sales data for two weeks then it should group the sales data for
15-sep-2006 to 21-sep-2006
22-sep-2006 to 28-sep-2006
I tried to do this through analytic functions but failed to partition the data weekly going back from today.
why would you do that with analytics? you want to aggregate, it appears, and analytic functions do NOT aggregate.

ops$tkyte%ORA10GR2> select min(x), max(x), count(*), grp
      from (
    select x, trunc((trunc(sysdate)-x)/7) grp
      from t)
     group by grp
     order by 1;

MIN(X)    MAX(X)      COUNT(*)        GRP
--------- --------- ---------- ----------
26-AUG-06 01-SEP-06          7          4
02-SEP-06 08-SEP-06          7          3
09-SEP-06 15-SEP-06          7          2
16-SEP-06 22-SEP-06          7          1
23-SEP-06 29-SEP-06          7          0

just divide.
Tom,
I think I have read about every posting in here about analytics and I've tried to combine many of the functions in an attempt to solve this problem, but to no avail.
I have a table that has the following:
SELL_DATE PRIOR_MTD
--------- ---------
03-OCT-06 01-OCT-05
04-OCT-06 03-OCT-05
05-OCT-06 04-OCT-05
06-OCT-06 05-OCT-05
07-OCT-06 06-OCT-05
08-OCT-06 06-OCT-05
My result set needs to show a row for every date within the current and prior Month To Date range.
Example: result set
Sell Date Current MTD Prior MTD Dates
2006-10-03 2006-10-03 2005-10-01
2006-10-04 2006-10-03 2005-10-01
2006-10-04 2006-10-04 2005-10-02
2006-10-04 2005-10-03
2006-10-05 2006-10-03 2005-10-01
2006-10-05 2006-10-04 2005-10-02
2006-10-05 2006-10-05 2005-10-03
2006-10-05 2005-10-04
2006-10-06 2006-10-03 2005-10-01
2006-10-06 2006-10-04 2005-10-02
2006-10-06 2006-10-05 2005-10-03
2006-10-06 2006-10-06 2005-10-04
2006-10-06 2005-10-05
2006-10-07 2006-10-03 2005-10-01
2006-10-07 2006-10-04 2005-10-02
2006-10-07 2006-10-05 2005-10-03
2006-10-07 2006-10-06 2005-10-04
2006-10-07 2006-10-07 2005-10-05
2006-10-07 2005-10-06
2006-10-08 2006-10-03 2005-10-01
2006-10-08 2006-10-04 2005-10-02
2006-10-08 2006-10-05 2005-10-03
2006-10-08 2006-10-06 2005-10-04
2006-10-08 2006-10-07 2005-10-05
2006-10-08 2006-10-08 2005-10-06
2006-10-09 2006-10-03 2005-10-01
2006-10-09 2006-10-04 2005-10-02
2006-10-09 2006-10-05 2005-10-03
2006-10-09 2006-10-06 2005-10-04
2006-10-09 2006-10-07 2005-10-05
2006-10-09 2006-10-08 2005-10-06
2006-10-09 2006-10-09 2005-10-07
Looking at the rows for October 4th, the current days belonging in that MTD are the 3rd & 4th. The comparable prior year MTD had days of October 1st - 3rd. The 2nd was not in the table, so that had to be derived, hence the null column value for current MTD. If either current or prior has more values, then the opposite column value will be NULL.
I was very close to resolving it, but was not able to group the dates. My current result set looks like:
SELL_DT CURRENT_M PRIOR_MTD
--------- --------- ---------
03-OCT-06 03-OCT-06
03-OCT-06 01-OCT-05
04-OCT-06 03-OCT-06
04-OCT-06 04-OCT-06
04-OCT-06 01-OCT-05
04-OCT-06 02-OCT-05
04-OCT-06 03-OCT-05
05-OCT-06 03-OCT-06
05-OCT-06 04-OCT-06
05-OCT-06 05-OCT-06
05-OCT-06 01-OCT-05
05-OCT-06 02-OCT-05
05-OCT-06 03-OCT-05
05-OCT-06 04-OCT-05
...
using this SQL:
select sell_dt, current_mtd, prior_mtd from (
select sell_dt, null as current_mtd, pt+r-1 as prior_mtd
from t2, (select distinct sell_dt, pt, pt2, pt2-pt+1 as num_days
from (select a.selling_dt as sell_dt,
b.selling_dt as curr_mtd,
b.PRIOR_YR_ACTUAL_SELLING_DT as prior_mtd,
min(b.prior_yr_actual_selling_dt) over (partition by a.selling_dt order by a.selling_dt) pt,
max(b.prior_yr_actual_selling_dt) over (partition by a.selling_dt order by a.selling_dt) pt2
from selling_day_matrx_info a, selling_day_matrx_info b
where a.mth_start_dt = '01-oct-2006'
and b.mth_start_dt='01-oct-2006'
and b.selling_dt <= a.selling_dt --and a.selling_dt = '04-oct-2006'
order by 1,2,3)
) t1
where t2.r <= t1.num_days
union all
select a.selling_dt as sell_dt , b.selling_dt as current_mtd, null as prior_mtd
from selling_day_matrx_info a, selling_day_matrx_info b
where a.mth_start_dt = '01-oct-2006'
and b.mth_start_dt='01-oct-2006'
and b.selling_dt <= a.selling_dt
order by 1,2)
where table t2 is a table of sequential numbers.
no creates....
no insert intos....
no lookie......
not promising an answer, just saying I cannot do a thing with what you've supplied.
sounds like you need to outer join to a set of dates
with dates as
(select START_DATE + level-1
from dual
connect by level <= NUMBER_OF_DAYS_NEEDED)
select ..
from dates left join .....
and maybe "ignore nulls" with last_value() (new in 10g, but you didn't say anything about a version) to carry down the last non-null value over the dates.
please take an attempt at this yourself using the above hints.... OOW this week, catch as catch can, if I cannot answer a "review" really fast, I have to skip it and move on...
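A rough sketch of the pattern being hinted at, using the column names from the posted query; the start date and day count are placeholders, it is untested, and IGNORE NULLS assumes 10g or later:

with dates as (
  select date '2006-10-01' + level - 1 dt
    from dual
 connect by level <= 31
)
select d.dt                                         sell_dt,
       t.selling_dt                                 current_mtd,
       last_value(t.prior_yr_actual_selling_dt ignore nulls)
         over (order by d.dt)                       prior_mtd_carried
  from dates d
  left join selling_day_matrx_info t
    on t.selling_dt = d.dt
 order by d.dt;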
Hi Tom. I'm trying to pivot the data in a dynamic table setup that would be queried via an update-able view.
This way the users can create dynamic 'tables' and store data in them, and retrieve/edit the data by using statements as simple as 'select * from mytable' (actually a view).
The table schemas:
create table DYNAMICTABLES(
  PKEY      NUMBER not null,
  TABLENAME VARCHAR2(35) not null);

create table DYNAMICCOLUMNS(
  PKEY       NUMBER not null,
  TABLENUM   NUMBER not null,
  COLUMNNAME VARCHAR2(35) not null,
  COLUMNTYPE VARCHAR2(35) not null);

create table DYNAMICDATA(
  PKEY         NUMBER not null,
  TABLENUM     NUMBER not null,
  COLUMNNUM    NUMBER not null,
  RECORDNUM    NUMBER not null,
  VALUENONBLOB VARCHAR2(4000),
  VALUECLOB    CLOB,
  VALUEBLOB    BLOB);
I want to dynamically create the view in code by reading the values in the linked DYNAMICTABLES and DYNAMICCOLUMNS tables. So far what I've tried are simplistic self joins, but I think that would have performance problems. Example:
CREATE OR REPLACE VIEW MYTABLE
  (column1, colnum1, column2, colnum2, column3, colnum3,
   column4, colnum4, recordnum, tablename, tablenum)
AS
SELECT D1.VALUENONBLOB, D1.COLUMNNUM,
       D2.VALUENONBLOB, D2.COLUMNNUM,
       D3.VALUECLOB,    D3.COLUMNNUM,
       D4.VALUEBLOB,    D4.COLUMNNUM,
       D1.RECORDNUM, T.TABLENAME, D1.TABLENUM
  FROM dynamictables t,
       DYNAMICDATA D1
       JOIN DYNAMICDATA D2 ON D2.RECORDNUM = D1.RECORDNUM
       JOIN DYNAMICDATA D3 ON D3.RECORDNUM = D1.RECORDNUM
       JOIN DYNAMICDATA D4 ON D4.RECORDNUM = D1.RECORDNUM
 WHERE D1.VALUENONBLOB IS NOT NULL
   AND D2.VALUENONBLOB IS NOT NULL
   AND D3.VALUECLOB IS NOT NULL
   AND D4.VALUEBLOB IS NOT NULL
   AND D1.COLUMNNUM = 1
   AND D2.COLUMNNUM = 2
   AND D3.COLUMNNUM = 3
   AND D4.COLUMNNUM = 4
   AND ((T.PKEY = D1.TABLENUM) OR (T.PKEY = D2.TABLENUM)
     OR (T.PKEY = D3.TABLENUM) OR (T.PKEY = D4.TABLENUM));
Is there a pivot table approach that could do what I'm trying to do here?
Thanks.
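For the non-LOB columns, the classic pivot trick -- MAX() over DECODE(), grouped by record -- may get closer than the N-way self join. A sketch against the posted tables (the view name is made up); note that MAX() cannot be applied to the CLOB/BLOB columns, so those would still need their own joins, and an aggregate view is not updatable without an INSTEAD OF trigger:

create or replace view mytable_pivoted as
select t.tablename,
       d.tablenum,
       d.recordnum,
       max(decode(d.columnnum, 1, d.valuenonblob)) column1,
       max(decode(d.columnnum, 2, d.valuenonblob)) column2,
       max(decode(d.columnnum, 3, d.valuenonblob)) column3,
       max(decode(d.columnnum, 4, d.valuenonblob)) column4
  from dynamicdata d, dynamictables t
 where t.pkey = d.tablenum
 group by t.tablename, d.tablenum, d.recordnum;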
I was trying to use your following query for my warehouse application to get previous year values
select *
  from (
select deptno,
       transact_date,
       lag(transact_date) over (partition by week order by transact_date) ly_trans_date,
       sales,
       lag(sales) over (partition by week order by transact_date) ly_sales
  from ( select deptno,
                transact_date,
                to_char(transact_date,'IW') week,
                sales
           from weekly_sale
          where transact_date >= add_months(trunc(sysdate,'Y'),-12)
            and transact_date <  add_months(trunc(sysdate,'Y'), 12)
       )
       )
 where ly_trans_date is not null
Version: Oracle 11g R2
My application is OLAP, a warehouse database. How can I make this query give quick results in a warehouse environment? My fact table contains 300 million records! It took 15 minutes to give results, which is not acceptable to the business. I thought of MVs, but this is not a scenario of aggregation and the size of the MV would be larger than the table.
Please suggest.
Thanks
how big is 300,000,000 records - it could be 300MB to 300PB (or more!)
do you have the IO bandwidth to retrieve that much data in a shorter period of time (I'm assuming we'll be scanning)
have you partitioned the data so we only have to look at a year's worth - not everything?
come on - give us *something* to go on here.
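To make the partitioning suggestion concrete, a sketch only (table name and boundary invented; interval partitioning assumes the 11gR2 you mentioned): range-partition the fact table on the date column so the two-year predicate in the query prunes to a couple of partitions instead of touching all 300 million rows.

create table weekly_sale_part (
  deptno        number,
  transact_date date,
  sales         number
)
partition by range (transact_date)
interval (numtoyminterval(1,'YEAR'))
( partition p_old values less than (to_date('01-jan-2010','dd-mon-yyyy')) );

-- with this in place, a predicate such as
--   transact_date >= add_months(trunc(sysdate,'Y'),-12)
-- lets the optimizer prune to just the last two yearly partitions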