Comments on Oracle OLAP: Tuning Guidance for OLAP 10g
Blog author: Brian Macdonald

ASQLBarista - 2010-06-01:

About Discoverer and OLAP - I seem to remember that in 10g, opening the first OLAP report in Discoverer (both Plus and Viewer) could take some time, because the initial load of the first worksheet has to attach the AW and validate the AW metadata.

I would recommend reviewing the following:

1) Check the number of objects in your AW - the more objects the AW contains, the longer validation will take when the AW is attached for the first time.

2) Make sure the selection scripts you have created for your reports are not overly complicated. In 10g you could improve the performance of many reports by deleting all the selection scripts, saving the report, and then defining new selection scripts that return the correct list of dimension members. If you "prototype" your reports, building up the scripts over a number of attempts, this can create very large selection scripts in which many of the steps are not actually needed. Creating new reports and defining the steps in one go could help.

In general, 11g will be much faster, and the issue of reading all the AW metadata during the first attach has been much improved.

Hope this helps.

Anonymous - 2010-05-31:

Hi Keith,
I've been working with Oracle Express OLAP 6.3.4 for a long time, and recently we started developing in 10g. Here began our fight with performance issues - particularly when accessing OLAP cubes using Oracle Discoverer.

It takes a long time (more than 2 minutes) to open OLAP workpages in Discoverer. We contacted Oracle on this issue but have had no feasible answer so far. They directed us to the next release, Oracle 11g, where, as they said, a lot of work has been done on performance improvement.

Our environment is:
Oracle Database 10g Enterprise Edition Release 10.2.0.3.0,
OracleBI Discoverer 10g (10.1.2.3),
Analytic Workspace Manager 10.2.0.3

There are many articles on the web about Discoverer performance, but no solid solution so far.

At this point I would like to ask whether you have had experience with Discoverer and, if not, whether you could suggest some sources with solutions for front-end reporting.

Thanks,
Oleg

Anonymous - 2008-07-22:

Hi Keith,

Increasing the sortbuffersize fixed a "string index out of range" error I was receiving when trying to sort a large dimension. Thanks for the informative article.

Mike

Keith Laker - 2008-04-25:

Hi Piyush,

Thanks for the feedback. These are interesting observations and I will update my "internal technical paper" with your feedback. Here are my thoughts:

1) Global composites
The main use for global composites is to help tune the SQL access layer where users want to suppress NA rows.
The composite provides a single densified object that can be used to control looping within the OLAP_TABLE function.

No data is held against the composite; it should be viewed simply as an index. This index is updated during the data loading phase of a cube refresh but has no impact on the aggregation phase.

My view is "never do more work than you have to"; therefore, if you do not need a global composite, do not implement it. Maintaining one might have only a minimal impact on load times, but I would expect it to have some impact, especially as the size of the composite grows. For very small incremental updates (say, loading fewer than 50 million rows) the impact might be minimal, but for large data sets (for example, during a data take-up phase) it could matter. You also have to bear in mind that there is a fixed limit on the size of a composite object - one of the reasons for using partitioning to create smaller composites - and while this is much larger than in the old Express days, there is still a limit. I am not sure what that limit is, although it is an extremely high figure.

2) Order of dimensions
This is more a theory I have about aggregation performance, and it is not well tested yet. However, I understand from development that when a cube has a consistent aggregation method (such as SUM) and compression is used, the engine automatically determines the most optimal aggregation process. Where you have mixed aggregation methods (SUM and MAX, for example), it should be possible to achieve improved aggregation performance by changing the order of the dimensions as listed on the Rules tab - making sure you still get the correct result!

3) Partitioning
I have also seen ORA-600 issues when loading data. For this reason I tried to make clear the difference between data take-up and incremental loads.
The volume of data generated during a data take-up can be enormous, and with the commit phase only occurring at the end, this can cause space problems within the TEMP tablespace.

For incremental loads I always try to break things down into very small chunks and then slowly increase the size of those incremental chunks to determine an optimal load size.

The interesting part here is the impact of partitioning and the size of the default partition vs. the size of the other partitions. I am wondering whether, in your case, the default partition is getting too large. I had a similar problem with another customer and resolved it by flattening the partition dimension (maybe something for another post). I can give you more details if you need them; email me: keith.laker@oracle.com

4) Tablespaces
I never like to use auto-extend because I think you lose control over your environment. It is always wise to monitor resource usage rather than letting the server just grab whatever it needs. But that is a personal preference.

Keith

Anonymous - 2008-04-24:

Hi Keith,

I would first like to thank you for this wonderful article. It is a brilliant attempt to summarize the OLAP tuning considerations and the ensuing efforts. I have been working on many aspects of OLAP cubes and have implemented the fine-tuning recommendations quite often; however, your article propelled me to go even further with my efforts. I would like to seek your opinion on the following points (as mentioned in your article), going by my observations through my OLAP journey. Please note that for all the points that follow, my discussion is limited to Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 and AWM v10.2.0.3A.

1. 
Global Composites:

When it comes to cube build time or query performance, I have not been able to reproduce a situation in which the inclusion or exclusion of global composites had any noticeable effect. I have experimented with cubes involving 8-12 dimensions and more than a dozen measures but never saw any difference in cube build/load or query performance from using or not using global composites. This makes me wonder whether the usage or non-usage of global composites should even be a candidate for fine-tuning efforts.

2. Order of dimensions in the implementation tab and rules tab:

I experimented with a 9-dimension cube by creating it in two different ways:

a. The order of dimensions in the implementation tab and rules tab ran from the most populous dimension to the one with the fewest members.

b. The order of dimensions in the implementation tab and rules tab ran from the dimension with the fewest members to the one with the most.

In either case, there was no appreciable difference in cube load time or query performance. Using both methods, I was able to load about 1.5 million records per minute into my cube, and in both cases the compression option was used.

3. Partitioning the cube on the "Month" level of the "TIME" dimension:

I really would like to follow this recommendation, but owing to the data volume of my OLAP system (about 1.5 million records per day), I guess it becomes too much for the OLAP engine to load and aggregate at all the levels I need it to. I have been running into ORA-600 issues with this particular experiment, which is being attributed to a bug. That being a different story altogether, I would rather wait for the bug to be fixed than form any opinion on this. For now, I work with partitioning at the "week" level.

4. 
Temp Tablespace is not set to Autoextend / Unlimited (temp storage settings):

This recommendation actually surprised me a bit, because from my interaction with Oracle Support on the SR related to point 3 above, I came to understand that Oracle OLAP demands that the temp tablespace be set to Autoextend and Unlimited. However, I tend to agree with your recommendation: as long as we are in a position to monitor our OLAP system's temp usage and allocate (or rather pre-allocate) enough space based on its usage trends, it may not be necessary to have the temp tablespace set to Autoextend and Unlimited. This is actually how we manage the temp storage settings in our environment.

Well, these are just some of the points that often come to my mind while working with Oracle OLAP, and sometimes there seems to be a major conflict between the way things are recommended to work and the way they actually work. I do understand that a lot may depend on the particular data model and application scenario at hand; however, I am seeking your valuable guidance so as to reinforce my efforts of "tweaking" and "twisting" the Oracle OLAP option.

Once again, thanks a lot for this valuable article.

Best Regards,
Piyush
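[Editor's note] Keith's incremental-load advice in the thread above - break the load into very small chunks, commit each chunk, and slowly increase the chunk size to find an optimal load size - can be sketched generically. This is a minimal illustrative Python sketch, not Oracle-specific code: the `load_chunk` callback and the geometric growth factor are assumptions standing in for the real cube load step and tuning schedule.

```python
import time

def find_optimal_chunk_size(rows, load_chunk, start_size=10_000,
                            growth=2, max_size=1_000_000):
    """Load `rows` incrementally, starting with small chunks and growing
    the chunk size each pass; track which size gave the best throughput.
    `load_chunk` stands in for one load-and-commit step against the cube."""
    best_size, best_rate = start_size, 0.0
    size, offset = start_size, 0
    while offset < len(rows) and size <= max_size:
        chunk = rows[offset:offset + size]
        t0 = time.perf_counter()
        load_chunk(chunk)                     # load + commit one chunk
        elapsed = max(time.perf_counter() - t0, 1e-9)
        rate = len(chunk) / elapsed           # rows per second for this size
        if rate > best_rate:
            best_size, best_rate = size, rate
        offset += len(chunk)
        size *= growth                        # slowly increase the chunk size
    return best_size
```

All rows are still loaded exactly once; the function only records which chunk size performed best, so the result can seed the chunk size for subsequent incremental refreshes.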