tag:blogger.com,1999:blog-38200314715245037312024-02-24T23:29:38.383-08:00Oracle OLAPThe most powerful, open Analytic EngineBrian Macdonaldhttp://www.blogger.com/profile/18408740222558531436noreply@blogger.comBlogger17125tag:blogger.com,1999:blog-3820031471524503731.post-18728120056467217762014-03-04T12:57:00.002-08:002014-03-04T13:00:42.916-08:00The OLAP Extension is now available in SQL Developer 4.0<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhJbhVcy8__pd4EUgzWTOw5ZvV00L2SnuEjGEnM7D9uEtu3rIkqTqHgWQhVY8I3X2fFju9_an2pjJ9muc5ZjwwhJNIJ2esbFPHQSfN62-D7MUcq80z8TpHsR0OBKGrmONtEirtIx2S2wUY/s1600/sqldev.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhJbhVcy8__pd4EUgzWTOw5ZvV00L2SnuEjGEnM7D9uEtu3rIkqTqHgWQhVY8I3X2fFju9_an2pjJ9muc5ZjwwhJNIJ2esbFPHQSfN62-D7MUcq80z8TpHsR0OBKGrmONtEirtIx2S2wUY/s1600/sqldev.png" height="307" width="640" /></a></div>
<div class="separator" style="clear: both; text-align: center;">
</div>
<br />
<div class="MsoNormal" style="line-height: normal; mso-margin-bottom-alt: auto; mso-margin-top-alt: auto;">
<span style="font-family: &quot;Times New Roman&quot;,&quot;serif&quot;; font-size: 12.0pt; mso-fareast-font-family: &quot;Times New Roman&quot;;">The OLAP Extension is now available in SQL Developer 4.0.<br />
<br />
See </span><a href="http://www.oracle.com/technetwork/developer-tools/sql-developer/downloads/sqldev-releasenotes-v4-1925251.html" target="_blank"><span style="color: blue; font-family: &quot;Times New Roman&quot;,&quot;serif&quot;; font-size: 12.0pt; mso-fareast-font-family: &quot;Times New Roman&quot;;">the SQL Developer 4.0 release notes</span></a><span style="font-family: &quot;Times New Roman&quot;,&quot;serif&quot;; font-size: 12.0pt; mso-fareast-font-family: &quot;Times New Roman&quot;;"> for the details; the OLAP functionality is described toward the bottom of that page.</span></div>
<div class="MsoNormal" style="line-height: normal; mso-margin-bottom-alt: auto; mso-margin-top-alt: auto;">
<span style="font-family: &quot;Times New Roman&quot;,&quot;serif&quot;; font-size: 12.0pt; mso-fareast-font-family: &quot;Times New Roman&quot;;">You will still need AWM 12.1.0.1.0 to:</span></div>
<ul type="disc">
<li class="MsoNormal" style="line-height: normal; mso-list: l0 level1 lfo1; mso-margin-bottom-alt: auto; mso-margin-top-alt: auto; tab-stops: list .5in;"><span style="font-family: &quot;Times New Roman&quot;,&quot;serif&quot;; font-size: 12.0pt; mso-fareast-font-family: &quot;Times New Roman&quot;;">Manage and enable cube and dimension MVs.</span></li>
<li class="MsoNormal" style="line-height: normal; mso-list: l0 level1 lfo1; mso-margin-bottom-alt: auto; mso-margin-top-alt: auto; tab-stops: list .5in;"><span style="font-family: &quot;Times New Roman&quot;,&quot;serif&quot;; font-size: 12.0pt; mso-fareast-font-family: &quot;Times New Roman&quot;;">Manage data security.</span></li>
<li class="MsoNormal" style="line-height: normal; mso-list: l0 level1 lfo1; mso-margin-bottom-alt: auto; mso-margin-top-alt: auto; tab-stops: list .5in;"><span style="font-family: &quot;Times New Roman&quot;,&quot;serif&quot;; font-size: 12.0pt; mso-fareast-font-family: &quot;Times New Roman&quot;;">Create and edit nested measure folders (i.e., measure folders that are children of other measure folders).</span></li>
<li class="MsoNormal" style="line-height: normal; mso-list: l0 level1 lfo1; mso-margin-bottom-alt: auto; mso-margin-top-alt: auto; tab-stops: list .5in;"><span style="font-family: &quot;Times New Roman&quot;,&quot;serif&quot;; font-size: 12.0pt; mso-fareast-font-family: &quot;Times New Roman&quot;;">Create and edit Maintenance Scripts.</span></li>
<li class="MsoNormal" style="line-height: normal; mso-list: l0 level1 lfo1; mso-margin-bottom-alt: auto; mso-margin-top-alt: auto; tab-stops: list .5in;"><span style="font-family: &quot;Times New Roman&quot;,&quot;serif&quot;; font-size: 12.0pt; mso-fareast-font-family: &quot;Times New Roman&quot;;">Manage multilingual support for OLAP metadata objects.</span></li>
<li class="MsoNormal" style="line-height: normal; mso-list: l0 level1 lfo1; mso-margin-bottom-alt: auto; mso-margin-top-alt: auto; tab-stops: list .5in;"><span style="font-family: &quot;Times New Roman&quot;,&quot;serif&quot;; font-size: 12.0pt; mso-fareast-font-family: &quot;Times New Roman&quot;;">Use the OBIEE plugin or the Data Validation plugin.</span></li>
</ul>
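AWM is still required for the tasks above, but basic inspection of the OLAP objects in a schema can be done straight from the SQL Developer worksheet. A sketch (the data dictionary views shown are the standard OLAP catalog views; verify the exact column names against your release, and note that cube MV registration depends on how the cube was enabled):

```sql
-- OLAP cubes and the analytic workspace each is stored in.
SELECT cube_name, aw_name FROM user_cubes;

-- OLAP dimensions and their analytic workspaces.
SELECT dimension_name, aw_name FROM user_cube_dimensions;

-- Cube materialized views are registered under names prefixed with CB$.
SELECT mview_name FROM user_mviews WHERE mview_name LIKE 'CB$%';
```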
<div class="MsoNormal" style="line-height: normal; mso-margin-bottom-alt: auto; mso-margin-top-alt: auto;">
<span style="font-family: "Times New Roman","serif"; font-size: 12.0pt; mso-fareast-font-family: "Times New Roman";">What is new or improved:</span></div>
<ul type="disc">
<li class="MsoNormal" style="line-height: normal; mso-list: l1 level1 lfo2; mso-margin-bottom-alt: auto; mso-margin-top-alt: auto; tab-stops: list .5in;"><span style="font-family: &quot;Times New Roman&quot;,&quot;serif&quot;; font-size: 12.0pt; mso-fareast-font-family: &quot;Times New Roman&quot;;">New Calculation Expression editor for calculated measures. This allows the user to easily nest different types of calculated measures. For instance, a user can now create a Moving Total of a Prior Period as a single calculated measure; in AWM, the user would have had to create a Prior Period measure first and then create a Moving Total calculated measure that referred to it. The new Calculation Expression editor also displays hypertext helper templates when the user selects the OLAP API syntax in the editor.</span></li>
<li class="MsoNormal" style="line-height: normal; mso-list: l1 level1 lfo2; mso-margin-bottom-alt: auto; mso-margin-top-alt: auto; tab-stops: list .5in;"><span style="font-family: &quot;Times New Roman&quot;,&quot;serif&quot;; font-size: 12.0pt; mso-fareast-font-family: &quot;Times New Roman&quot;;">Support for OLAP DML command execution in the SQL Worksheet. Simply prefix OLAP DML commands with a '~' and then select the Execute button to run them. The output of the command appears in the DBMS Output window if it is open, or in the Script Output window if the user has executed 'set serveroutput on' before running the DML command.</span></li>
<li class="MsoNormal" style="line-height: normal; mso-list: l1 level1 lfo2; mso-margin-bottom-alt: auto; mso-margin-top-alt: auto; tab-stops: list .5in;"><span style="font-family: "Times New Roman","serif"; font-size: 12.0pt; mso-fareast-font-family: "Times New Roman";">Improved OLAP DML Program Editor integrated within the
SQL Developer framework.</span></li>
<li class="MsoNormal" style="line-height: normal; mso-list: l1 level1 lfo2; mso-margin-bottom-alt: auto; mso-margin-top-alt: auto; tab-stops: list .5in;"><span style="font-family: "Times New Roman","serif"; font-size: 12.0pt; mso-fareast-font-family: "Times New Roman";">New diagnostic reports in the SQL Developer Report
navigator.</span></li>
<li class="MsoNormal" style="line-height: normal; mso-list: l1 level1 lfo2; mso-margin-bottom-alt: auto; mso-margin-top-alt: auto; tab-stops: list .5in;"><span style="font-family: &quot;Times New Roman&quot;,&quot;serif&quot;; font-size: 12.0pt; mso-fareast-font-family: &quot;Times New Roman&quot;;">Ability to create a fact view with a measure dimension (i.e., a &quot;pivot cube&quot;). This functionality is accessible from the SQL Developer Tools-OLAP menu.</span></li>
<li class="MsoNormal" style="line-height: normal; mso-list: l1 level1 lfo2; mso-margin-bottom-alt: auto; mso-margin-top-alt: auto; tab-stops: list .5in;"><span style="font-family: &quot;Times New Roman&quot;,&quot;serif&quot;; font-size: 12.0pt; mso-fareast-font-family: &quot;Times New Roman&quot;;">Cube scripts have been renamed to Build Specifications and are now accessible within the Create/Edit Cube dialog. The Build Specification editor is similar in functionality to the Calculation Expression editor.</span></li>
</ul>
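As a quick sketch of the new '~' support in the SQL Worksheet (the workspace name GLOBAL and the TIME and SALES objects below are hypothetical placeholders; the script assumes an attachable analytic workspace with those objects exists in your schema):

```sql
-- Run once so DML output lands in the Script Output window.
set serveroutput on

-- Lines prefixed with '~' go to the OLAP DML engine, not the SQL parser.
~aw attach global ro          -- attach a hypothetical analytic workspace read-only
~listnames dimension          -- list the dimensions defined in the workspace
~limit time to first 5        -- restrict the (hypothetical) TIME dimension
~report sales                 -- report the (hypothetical) SALES measure
~aw detach global
```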
Christopher Kearneyhttp://www.blogger.com/profile/17722642593898599800noreply@blogger.com0tag:blogger.com,1999:blog-3820031471524503731.post-15670455951937343732010-02-10T03:02:00.000-08:002010-02-10T03:14:46.934-08:00Using MindMapping to view OLAP hierarchies...I came across this interesting article by accident. One thing I have always wanted within AWM is a way of viewing a hierarchy or series of hierarchies. In the old days of Express I wrote various Express programs, EIS code (who remembers Express EIS?), and Express Objects extensions that would list out a hierarchy. However, this article by Robert Brooke takes this idea way beyond anything I have seen because it uses an open source MindMapping tool. Take a look at the series of articles Robert has written:<div><div> <blockquote><a href="http://ofaworld.wordpress.com/2010/02/09/ofa-mindmapping/">http://ofaworld.wordpress.com/2010/02/09/ofa-mindmapping/</a><br /><a href="http://ofaworld.wordpress.com/2009/06/10/sample-olap-dml-code-gplcode/">http://ofaworld.wordpress.com/2009/06/10/sample-olap-dml-code-gplcode/</a><br /><a href="http://ofaworld.wordpress.com/2009/12/01/oracle-olap-mindmapping/">http://ofaworld.wordpress.com/2009/12/01/oracle-olap-mindmapping/</a><br /></blockquote> Now that is cool! </div><div><br /></div><div>Now I am thinking: if we could just add the MindMapping GUI into AWM and somehow allow people to build a hierarchy visually using the MindMapping GUI, that would be something. This would give AWM a very powerful way of designing a hierarchy in a live/interactive manner. One for the OLAP PM team - <i>I think</i>. 
In the short term, I think it might be possible to use the AWM extension API to add the MindMapping GUI to the AWM menus. This would simply launch the MindMapping tool outside of AWM, but that might be sufficient for the moment.<br /><div><br /></div><div>Nice one Robert!</div></div></div>ASQLBaristahttp://www.blogger.com/profile/13350994132294695189noreply@blogger.com0tag:blogger.com,1999:blog-3820031471524503731.post-74838880975850393632009-02-05T16:36:00.000-08:002009-02-06T02:05:20.721-08:00Oracle OLAP Newsletter - February 2009The latest Oracle OLAP newsletter, February 2009, has been posted onto OTN and is available by clicking <a href="http://www.oracle.com/technology/products/bi/olap/olapref/newsletter/oracleolapnewsletter_feb09.html">here</a>.<br /><br />The customer feature this time is R.L. Polk, who have used 11g OLAP to simplify their delivery of aggregate data through the use of cube organised materialised views. This is a fantastic case study which captures the true value of this functionality (note the dramatic improvements in both build and query times), and of having Oracle OLAP embedded in the Oracle Database.<br /><br />The highlights of the Product Update section this time are the release of <a href="http://download.oracle.com/otndocs/products/warehouse/awm11.1.0.7.0B.zip">the latest version of AWM 11g</a> (11.1.0.7B), and also a new version of the <a href="http://www.oracle.com/technology/products/bi/spreadsheet_addin/index.html">BI Spreadsheet Add-in</a> (10.1.2.3.0.1 - enough digits?!) which now includes support for Excel 2007.Stuart Bunbyhttp://www.blogger.com/profile/10781347144821555643noreply@blogger.com0tag:blogger.com,1999:blog-3820031471524503731.post-11829367427313529342008-12-29T11:26:00.000-08:002008-12-29T11:36:57.267-08:00Now Available! 
Two new Oracle OLAP DemonstrationsTwo new Oracle OLAP demonstrations have been added to the <a href="http://www.oracle.com/technology/products/bi/olap/index.html">Oracle OLAP product page on OTN</a>:<br /><br /><a href="http://www.oracle.com/technology/products/bi/olap/11g/demos/olap_sql_demo.html">Fast Answers to Tough Questions Using Simple SQL</a> :- Oracle OLAP is a world class analytic engine embedded in the Oracle Database. OLAP cubes and dimensions are easily accessible through a star model. Using very simple SQL, Oracle OLAP delivers fast answers to tough analytic questions. This demonstration shows how to query OLAP cubes using several tools, including Oracle Business Intelligence Enterprise Edition, Application Express and SQL Developer.<br /><br /><a href="http://www.oracle.com/technology/products/bi/olap/11g/demos/olap_cube_mvs_demo.html">Transparently Improving Query Performance with Oracle OLAP Cube MVs</a> :- Oracle OLAP cubes may also be deployed as materialized views. Summary queries written to base fact tables can transparently leverage the fast query performance delivered by Oracle OLAP - without any changes to the application's query. The Oracle Optimizer automatically rewrites queries to cubes when appropriate. This demonstration shows how Oracle Business Intelligence Enterprise Edition seamlessly benefits from this capability. 
The demonstration then provides an "under the covers" view of how this improvement is achieved.Stuart Bunbyhttp://www.blogger.com/profile/10781347144821555643noreply@blogger.com0tag:blogger.com,1999:blog-3820031471524503731.post-79671629845327902532008-12-21T09:22:00.000-08:002008-12-21T13:52:23.212-08:00Get hands-on with 11g OLAP<p class="MsoNormal">You may have already noticed but over the past couple of weeks some new 11g OLAP training material has been published on the <a href="http://www.oracle.com/technology/products/bi/olap/index.html">OTN OLAP home page</a>.</p><p class="MsoNormal">Two new tutorials have been added to the popular <a href="http://www.oracle.com/technology/obe/start/index.html">Oracle By Example (OBE)</a> series.<br /></p><p class="MsoNormal">The first is titled <a href="http://www.oracle.com/technology/obe/olap_cube/buildicubes.htm">'Building OLAP 11g Cubes'</a> and covers using Analytic Workspace Manager (AWM) 11g to build and load an OLAP cube.</p><p class="MsoNormal">The second is titled <a href="http://www.oracle.com/technology/obe/olap_cube/querycubes.htm">'Querying OLAP 11g Cubes'</a> and is a guide to querying a cube via SQL, both directly using OLAP Cube Views, and indirectly using Cube Materialized Views.<br /></p><p class="MsoNormal">Supporting both of the tutorials is a new <a href="http://www.oracle.com/technology/products/bi/olap/11g/samples/schemas/readme.html">sample schema</a> which gives you the opportunity to get hands-on and experiment in your own environment. 
Remember that patch level 11.1.0.7 is required, and always check the <a href="http://www.oracle.com/technology/products/bi/olap/collateral/olap_certification.html">recommended release</a> details for your chosen operating system.</p>Stuart Bunbyhttp://www.blogger.com/profile/10781347144821555643noreply@blogger.com0tag:blogger.com,1999:blog-3820031471524503731.post-62501789249891909372008-11-25T01:07:00.000-08:002008-11-25T14:03:25.817-08:00New 11g OLAP tutorial posted onto OTNA new tutorial has been added to OTN.<br /><br />The tutorial is aimed at newcomers to Oracle OLAP and is <a href="http://www.oracle.com/technology/products/bi/olap/collateral/create_11g_olap_cube.html">a guide to creating and populating an 11g OLAP cube.</a><br /><br />This is perfect for people who are looking for a gentle introduction to using the Analytic Workspace Manager OLAP administration tool and who want to understand the basic steps in building an 11g OLAP cube.Stuart Bunbyhttp://www.blogger.com/profile/10781347144821555643noreply@blogger.com0tag:blogger.com,1999:blog-3820031471524503731.post-28064977607311350842008-10-06T02:46:00.000-07:002008-10-10T04:10:59.631-07:00Analytic Workspace Manager 11.1.0.7A released to MetalinkThe 11.1.0.7A version of AWM has been released to Metalink as patch <a href="https://updates.oracle.com/ARULink/PatchDetails/process_form?patch_num=7420490">7420490</a><br /><br />It includes important fixes and new features, including:<br /><ul><li>the ability to add multiple languages to a single analytic workspace</li><li>individual aggregation definitions may now be defined for each measure of a cube</li><li>the Create Dimension user interface has been modified to allow levels of the dimension to be created at the same time as the dimension</li><li>the functionality of dimension and cube mapping has been enhanced to allow the application to refresh the definitions of database objects interactively to reflect the current state of database schema tables</li></ul><br />To 
take advantage of all the new fixes and features, the <a href="https://updates.oracle.com/ARULink/PatchDetails/process_form?patch_num=6890831">Oracle Database 11g Release 11.1.0.7.0 Server Patch</a> must be installed as well. This is currently only available for Linux 32-bit & Linux 64-bit, but other ports are likely to be available soon.Stuart Bunbyhttp://www.blogger.com/profile/10781347144821555643noreply@blogger.com0tag:blogger.com,1999:blog-3820031471524503731.post-25148580317597388922008-06-02T03:34:00.000-07:002008-12-11T15:25:17.957-08:00Best Practice Tips : SQL Access to Oracle DB Multidimensional AW Cubes (#2)<div>One of the most useful features introduced with Oracle Database OLAP is the ability for the powerful multidimensional calculation engine and the performance benefits of true multidimensional storage in the Analytic Workspace (AW), to be accessed and leveraged by simple SQL queries.<br /><br />This single feature dramatically increases the reach and applicability of multidimensional OLAP – to a vast range of BI query and reporting tools, and SQL-based custom applications that can now benefit from the superior performance, scalability and functionality of a first class multidimensional server, but combined within the Oracle Database with all the other advantages that derive from that. </div><br /><div><br />This post is the <strong>second</strong> in a series that I will use to share some general best practice tips to get the most out of this feature, so that you can deliver even better solutions to your business end-users:<br /><br /><strong>Best Practice Tip #2: </strong><strong>General AW Object Naming Conventions for dimensions, levels, hierarchies and attributes…</strong>(Oracle Database 10g and 11g)<br /><br />The following advice will result in much easier to understand and use relational views over your AW. It makes the implementation much cleaner to visualise, and easier for other users to understand what they are looking at. 
It also saves a lot of typing for developers who are writing their own SQL queries!<br /><br />The objective is to ensure that the generated column names in your views are easy to read, and also to avoid the possibility that generated column names may get truncated to fit within the limits for a column name in Oracle Database (when that happens your views get really ugly really quickly). Finally, it has the additional desirable side effect of making it easier, and therefore quicker, to do the mappings in AWM because the screens are less cluttered with long-winded object names!<br /><br /><em><span style="font-size:85%;">Note: this advice applies both to Oracle Database 10g OLAP (eg views created by the AWM10g View Generator Plug-in) and to Oracle Database 11g OLAP, where views are auto-generated (eg when creating your Standard Form AW via AWM11g).</span></em><br /><br /><strong>Here is the idea:</strong> </div><ul><li>Keep the names used for dimensions, levels, hierarchies, and attributes as short as possible, while still meaningful of course.
</li><li>If possible (simply for readability in the resulting relational view and column names), avoid the use of the "_" char especially for dimension, hierarchy, level and attribute names.</li><li><div align="left">If possible <em>(also recommended if Oracle OLAP API clients such as OracleBI Spreadsheet Add-in , OracleBI Discoverer Plus OLAP and OracleBI Beans will be used on the same AW),</em> create the AW in its own schema.<br /></div></li></ul><p align="left"><img id="BLOGGER_PHOTO_ID_5207301947241906770" style="DISPLAY: block; MARGIN: 0px auto 10px; CURSOR: hand; TEXT-ALIGN: center" alt="" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgl_2HfC-sbZD0b-9LMiHRPacOQWAlpR9Sq5CYJEdyImDL2vDy1Es7IceDxVGJ38OYmVb32BUCYdb2zuERb4K-yfwUa_FwmUGQDjH4m8EUg5ZLcUAdV1TenfOdfmfw3fEvexTPvON9sqG5t/s400/ObjectNameLengths.jpg" border="0" /></p><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgwMK-gQgGq-by-qSf6TbXyMoBS9CwVP3vmU0kz8TggWPH0Wj2Y7gb9qUTIpobodYaopNaeiyxhSLAnLWJmrhCFRvFN5GnTVwmssIZ2h3wx7C9Tk-aBJjqpYDjjQnHB9IeF3IISjkIcxqsO/s1600-h/ObjectNameLengths.jpg"></a>Don't be seduced into thinking it is a good idea to put "DIM" in the name of everything that is a dimension, or "ATT" into the name of all the attributes. You don't need to do this. The AW knows what objects are what, and you can very simply query the AW if you need, for example, to find out the names of all the Dimensions in an AW. <em>(Another topic for another day is to walk thru all the Data Dictionary stuff that helps with this). 
</em><br /><p align="left">In other words: If you have a Product Dimension, it is self-evidently a dimension, so clogging up its name with "_DIM" or "_DIMENSION" is just extra wear and tear on your keyboard!</p><br /><p><strong>Example:</strong></p><br /><p>To illustrate the impact this advice can have, here are two Product Dimensions, which apart from the fact one follows best practice advice and one does not, are identical (example is from Oracle Database 11g AW) <em><span style="font-size:85%;">(you can click on the picture to see it full size):</span></em><br /><br /><strong>First</strong> – two ways I could have created my Product Dimension:<br /></p><br /><p><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj2TcRVxzFkhiBnuGh5hRJM8aoHDGT7dWeIeQG_sA9CzrtVbnUwdL0k9vi1yn7E2GGYqLNMsdnY1sEfdeZp1qwiel2n1WfGeaBYke8ejuG3pqA_wGzUseSHoiDyOAHpTpOJqgjczeo-KGql/s1600-h/DimExamples_AWM.jpg"><img id="BLOGGER_PHOTO_ID_5207299048138981906" style="DISPLAY: block; MARGIN: 0px auto 10px; CURSOR: hand; TEXT-ALIGN: center" alt="" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj2TcRVxzFkhiBnuGh5hRJM8aoHDGT7dWeIeQG_sA9CzrtVbnUwdL0k9vi1yn7E2GGYqLNMsdnY1sEfdeZp1qwiel2n1WfGeaBYke8ejuG3pqA_wGzUseSHoiDyOAHpTpOJqgjczeo-KGql/s400/DimExamples_AWM.jpg" border="0" /></a><br /><br /><strong>Second</strong> – what the resulting dimension views for the Main hierarchy would look like in each case:<br /><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgynGFRPiL68gBy6jhAgrQkph_OHSmiLFPVQa-9O3kzvuU-SN1kGPmCuiI8V6HIsHIFHhYFaSUXK3MsqofCoq-FZebredp17qV3XPTD_N-PxcaxdmlkMkV2T_iM326xb5f3P_qEyMgK9Du2/s1600-h/DimExamples_Views.jpg"><img id="BLOGGER_PHOTO_ID_5207299202757804578" style="DISPLAY: block; MARGIN: 0px auto 10px; CURSOR: hand; TEXT-ALIGN: center" alt="" 
src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgynGFRPiL68gBy6jhAgrQkph_OHSmiLFPVQa-9O3kzvuU-SN1kGPmCuiI8V6HIsHIFHhYFaSUXK3MsqofCoq-FZebredp17qV3XPTD_N-PxcaxdmlkMkV2T_iM326xb5f3P_qEyMgK9Du2/s400/DimExamples_Views.jpg" border="0" /></a><br /><br /><strong>Third</strong> – how much harder it is to read and write the SQL to query the AW’s dimension as a result:<br /><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhjlTOmuYvkFmB54mnj4W4qeOgEePGWVuCtFoaOXTQRsnDu-sXkpylHX9vzHafyZjkK2ghDdlHwexykjJAjMgQmZ2YNPbFENbExRERLQVYZWnzjLaWQdQT3fOdegYBcFQczpJQMrR-eeP_C/s1600-h/DimExamples_SQL.jpg"><img id="BLOGGER_PHOTO_ID_5207299842707931714" style="DISPLAY: block; MARGIN: 0px auto 10px; CURSOR: hand; TEXT-ALIGN: center" alt="" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhjlTOmuYvkFmB54mnj4W4qeOgEePGWVuCtFoaOXTQRsnDu-sXkpylHX9vzHafyZjkK2ghDdlHwexykjJAjMgQmZ2YNPbFENbExRERLQVYZWnzjLaWQdQT3fOdegYBcFQczpJQMrR-eeP_C/s400/DimExamples_SQL.jpg" border="0" /></a><br />Which of these functionally identical examples is easier to read, easier to understand and easier to query?</p><p><strong>I rest my case</strong>. Giving a bit of thought to the way you build your AW before you build it nearly always pays dividends later. 
</p>Kevin Lancasterhttp://www.blogger.com/profile/06742628997065141834noreply@blogger.com0tag:blogger.com,1999:blog-3820031471524503731.post-87376031701255277812008-05-31T11:48:00.000-07:002008-12-11T15:25:18.095-08:00Best Practice Tips : SQL Access to Oracle DB Multidimensional AW Cubes (#1)One of the most useful features introduced with Oracle Database OLAP is the ability for the powerful multidimensional calculation engine and the performance benefits of true multidimensional storage in the Analytic Workspace (AW), to be accessed and leveraged by simple SQL queries.<br /><br />This single feature dramatically increases the reach and applicability of multidimensional OLAP – to a vast range of BI query and reporting tools, and SQL-based custom applications – BI and operational – that can now benefit from the superior performance, scalability and functionality of a first class multidimensional server, but combined within the Oracle Database with all the other advantages that derive from that. Bottom line: if you have a tool or application that can (a) connect to an Oracle Database instance, and (b) fire simple SQL at that Database, then you can get benefit from the AWs in that tool or application.<br /><br />This post is the <strong>first of a series</strong> that I will use to share some general best practice tips to get the most out of this feature, so that you can deliver even better solutions to your business end-users.<br /><br />If any of you have tips and advice of your own that we can share, please contact us – we’ll be happy to publish your good ideas and experience with this feature of Oracle Database OLAP.<br /><br />Anyway. Enough pre-amble. Let’s get on with it. Here goes:<br /><br /><strong>Best Practice Tip #1: Creating your views</strong> (Oracle Database 10g and 11g)<br /><br />Basically the first tip in the series boils down to two things:<br /><br />1) Always build your AWs to Oracle Database OLAP ‘Standard Form’. 
This is what happens if you build them with AWM, OWB (10g-only at the time of this post, but support for 11g target AWs is due in OWB very soon), or the supplied AW API if you need to programmatically build and maintain your AW.<br />2) Use the free-ware &#8220;View Generator&#8221; plug-in for AWM10g to build your 10g views, and leverage the <strong>automatically generated</strong> views in <strong>11g,</strong> unless you have a very good reason not to.<br /><br />Together, if you follow this advice you will save a lot of time on your project, and also increase your ability to support the application going forward. And it will be a lot easier for others (such as Oracle Support, or your local friendly Oracle OLAP Consultant) to help you if you have any problems.<br /><br /><strong>More detail:</strong><br /><br />In <strong>Oracle Database 10g</strong>, there is nothing to stop you coding your own views using the SQL OLAP_TABLE() function. And, if you have an entirely custom built AW this is pretty much your only option. However, if you have developed your AW to Oracle&#8217;s OLAP Standard Form specification you can save yourself the time by using a handy dandy little plug-in for AWM10g.
The plug-in is free shareware for AWM10gR2 & can be downloaded from <a href="http://www.oracle.com/technology/products/bi/olap/viewGenerator_1_0_2.zip">here</a>, with the associated ReadMe <a href="http://www.oracle.com/technology/products/bi/olap/ViewGenerator.html">here</a>.<br /><br />The plug-in steps you thru a simple wizard within AWM, allowing you to choose which measures etc you need, and then creates the views for you (storing the biggest lump of syntax – the &#8216;limitmap&#8217; parameter which describes which AW objects show up in what columns in your view – inside the AW itself, in a multi-line text variable/measure).<br /><br />In <strong>Oracle Database 11g</strong>, while <span style="font-family:courier new;">OLAP_TABLE()</span> is still available for you to use if you like (and sometimes it is perfect for your needs as it has lots of very clever hooks by which you can trigger various OLAP actions whenever a user selects from the view), for most cases the new <span style="font-family:courier new;">CUBE_TABLE()</span> function added in Database 11g is much easier and therefore recommended.<br /><br /><span style="font-family:courier new;">CUBE_TABLE()</span> views are what AWM11g automatically creates for you when defining the objects inside the AW.
Assuming you have a valid Standard Form 11g Database AW, such as you might build in AWM11g, <span style="font-family:courier new;">CUBE_TABLE()</span> is much, much easier to use than <span style="font-family:courier new;">OLAP_TABLE().</span><br /><br />For example, the entire syntax required to create a Dimension View, for a specified hierarchy of that Dimension in an AW (not that I even have to type any of this in, as the AWM tool does it automatically for me) is as follows:<br /><br /><span style="font-family:courier new;">CREATE OR REPLACE FORCE VIEW MYDIM_MYHIER_VIEW AS<br />SELECT * </span><br /><span style="font-family:courier new;">FROM TABLE( CUBE_TABLE('MYSCHEMA.MYDIM;MYHIER') );</span><br /><br />How easy is that?!<br /><br />All you need to know about your AW is the name of the Hierarchy (MYHIER), Dimension (MYDIM) and schema that the AW is built in (MYSCHEMA). All the object mappings that you have to tell <span style="font-family:courier new;">OLAP_TABLE</span> about, in the limitmap parameter, are automatically done as a result of improvements in Database 11g’s Data Dictionary (which is now fully aware of the details of the contents of the AW).<br /><br />Here (below) is what an example Product Dimension looks like in AWM11g, and the resulting View:<br /><br /><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgvGRK4rNiVEIu3dvlLDj2X3mleqYJQl7qKQnktUXDRSg7gwlSajbZiZB-92fLHlqShlcxODXHyn-01Rd1Gvemv0GBCL8hKVEkhQWZTVOvAsN0HEGWlUcx6qB9wakemQBOAViLnMzX8WpoP/s1600-h/PROD_and_View.jpg"><img style="display:block; margin:0px auto 10px; text-align:center;cursor:pointer; cursor:hand;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgvGRK4rNiVEIu3dvlLDj2X3mleqYJQl7qKQnktUXDRSg7gwlSajbZiZB-92fLHlqShlcxODXHyn-01Rd1Gvemv0GBCL8hKVEkhQWZTVOvAsN0HEGWlUcx6qB9wakemQBOAViLnMzX8WpoP/s400/PROD_and_View.jpg" border="0" alt=""id="BLOGGER_PHOTO_ID_5207295788258804210" /></a><br /><br /><p><span style="font-size:85%;"><em>Note that the OLAP 
Option only allows one Dimension or Cube (and therefore Dimension View, or Cube View) of a given name in each SCHEMA. For this reason, it is our recommendation that each AW be built in its own schema if possible. This will allow you, if you ever need to, to have a PROD dimension or SALES Cube in more than one unrelated AW. This tip will be included again in an upcoming post on best practice AW design practices and naming conventions.</em></span></p>Kevin Lancasterhttp://www.blogger.com/profile/06742628997065141834noreply@blogger.com2tag:blogger.com,1999:blog-3820031471524503731.post-46563947517296167492008-04-21T22:00:00.000-07:002008-12-11T15:25:24.150-08:00Tuning Guidance for OLAP 10gMy assumption with this posting is: you are familiar with all the basic OLAP terms such as dimensions, levels, hierarchies, attributes, measures, cubes, etc. If this is not the case then go to the Oracle Wiki and check out these links:<br /><br /><a href="http://wiki.oracle.com/page/Oracle+Olap+Option">http://wiki.oracle.com/page/Oracle+Olap+Option</a><br /><ul><li><a href="http://wiki.oracle.com/page/Oracle+Olap+Terminology">Terminology</a> - Key Concepts and Terms</li><li><a href="http://wiki.oracle.com/page/Oracle+OLAP+How+To">Oracle OLAP How To</a> - Performance Tuning and more ...</li><li><a href="http://wiki.oracle.com/page/OLAP+option+-+DBA+Sample+Scripts">Script Samples</a> - for DBAs managing the OLAP option.</li><li><a href="http://wiki.oracle.com/page/OLAP+option+-+Did+You+Know%3F">Did You Know?</a> - that in the Oracle OLAP option you can.</li><li><a href="http://wiki.oracle.com/page/OLAP+option+-+Diagnostic+Techniques">Diagnostic Techniques</a> - for those using the OLAP option.</li><li><a href="http://wiki.oracle.com/page/OLAP+option+-+RAC+%26+GRID">RAC & GRID</a> - for those using the OLAP option with RAC or GRID</li></ul><br />Most people, when they approach OLAP for the first time, create a data model that either takes too long to build or too long to query.
The “too long to query” is usually the first problem to arise, and in trying to solve this issue people create the second problem, “too long to build”. There is a balance that needs to be achieved when designing OLAP data models. That balance is between pre-solving every level across all dimensions, which increases build time, and providing users with fast query performance. Most people assume there is a direct relationship between the number of levels that are pre-solved and query performance. As one goes up so does the other: pre-solve more levels and query performance improves. Therefore, the answer to poor query performance is to pre-solve all levels across all dimensions, correct? Yes and no. Most systems do not have an infinite window for building cubes. Fortunately, using the Oracle OLAP Option it is possible to balance the amount of time taken to build a cube and still ensure excellent query performance. How is this achieved?<br /><br />Oracle OLAP is the most powerful and scalable OLAP server on the market. Because OLAP is inside the database it inherits all the native scalability, security and performance of the Oracle database, and it is because the database is so fast and scalable that there is a tendency to ignore certain design principles when building an OLAP data model. If the original design and methodology is sound then tuning is very quick and easy to manage. But there is no silver bullet to make OLAP go faster; as one of our OLAP gurus states: there is no “_OLAP_TURBO_MODE=YES” setting for the init.ora.<br /><br />What follows is a series of recommendations and observations based on my experience on various OLAP projects to help optimize OLAP builds.
This is not the authoritative guide to tuning OLAP data models; just my thoughts.<br /><br />When asked to tune an existing OLAP data model I break the work up in to five sections:<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiQMUG3ygIx5jb7vWL41NNfQhkIQEBV-kpQcLK_QnfCIYnJCa5fa5Ac-FQRykMNA-Y5tk-GCHtbKVsikASUWMq9x1CFnYuHckMGZl2rw4cCHWg1x-_G7AcOcVw6rrtgTIZlpF2HNzKhNX8/s1600-h/Slides+for+Keith.001.png"><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiQMUG3ygIx5jb7vWL41NNfQhkIQEBV-kpQcLK_QnfCIYnJCa5fa5Ac-FQRykMNA-Y5tk-GCHtbKVsikASUWMq9x1CFnYuHckMGZl2rw4cCHWg1x-_G7AcOcVw6rrtgTIZlpF2HNzKhNX8/s400/Slides+for+Keith.001.png" alt="" id="BLOGGER_PHOTO_ID_5191650848715385202" border="0" /></a><br /><br />Tuning a data load process needs to be done in a step-by-step process. Trying to rush things and changing too many settings at once can simply create more problems than it solves. It is also important to start at the beginning with the hardware and lastly look at the database instance itself. Most DBAs will be tempted to rip open the init.ora file and start tweaking parameters in the hope of making the build run faster.<br /><br />However, the area that is likely to have the biggest impact is refining (or possibly even changing) the implementation of the logical model. But when making changes that improve the build performance you should also check the impact on query performance to ensure the amount of time taken to return a query is still within acceptable limits.<br /><br />Below are the steps I use when I am asked to analyse the build performance on an OLAP schema. But before you start a tuning exercise, I would recommend reading the <span style="font-weight: bold;font-size:85%;" >2-Day Performance Tuning Guide</span> that is now part of the database documentation suite. 
It provides a lot of useful information. It is available as an <a href="http://www.oracle.com/pls/db111/to_toc?pathname=server.111/b28275/toc.htm">HTML</a> document and a <a href="http://www.oracle.com/pls/db111/to_pdf?pathname=server.111/b28275.pdf">PDF</a> document. The PDF document can be downloaded and stored on your laptop/memory stick etc for easy reference.<br /><br /><br /><span style="font-size:100%;"><span style="font-weight: bold;">Part 1 - Analysis of Hardware</span></span><br />In any situation the first challenge in a tuning exercise is to ensure the foundation for the whole solution is solid. This tends to be the biggest challenge because it can involve working with a number of hardware and software vendors. Trying to make sure your environment is based on an adequate configuration can be time consuming and risky, and will probably end in a compromise between performance, scalability, manageability, reliability, and naturally price.<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj3ky3Cg5s8EaARZlx5HcFaTCEi8QVRVnBo0j9nAQJfBsoCicJX5B0KD5HbeQ6odHfj367RFungZvcGc77O-Its6J2HGY6gnfpcPm5sFRrz7O6eu5NRQd_epzOLeSrhnxO8_T8goPNe2ns/s1600-h/Slides+for+Keith.002.png"><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj3ky3Cg5s8EaARZlx5HcFaTCEi8QVRVnBo0j9nAQJfBsoCicJX5B0KD5HbeQ6odHfj367RFungZvcGc77O-Its6J2HGY6gnfpcPm5sFRrz7O6eu5NRQd_epzOLeSrhnxO8_T8goPNe2ns/s400/Slides+for+Keith.002.png" alt="" id="BLOGGER_PHOTO_ID_5191651535910152578" border="0" /></a><br /><br />Configurations can be difficult to analyse.
This analysis typically tends to degenerate into each vendor in the hardware stack blaming the other vendor and/or the database.<br /><br /><span style="font-weight: bold;font-size:85%;" >Step 1 – Check Patches</span><br />When analysing an existing environment make sure all the latest firmware, drivers and O/S patches have been applied. Refer to the Oracle database installation guide, Metalink, and the hardware vendors web sites for more details.<br /><ul><li><a href="http://www.oracle.com/technology/documentation/index.html">Oracle Documentation Portal</a> </li><li><a href="http://metalink.oracle.com/">Oracle Support Portal</a></li></ul><br /><span style="font-weight: bold;font-size:85%;" >Step 2 – Determine Workload</span><br />In a good environment you should be expecting to load about 1 million rows per minute via OLAP. This is the benchmark. Check the XML_LOAD_LOG table from previous builds to determine if this is being achieved. Here is a log from a data load for the Common Schema AW based on a relatively simple view that joins two fact tables together to load three measures. Approximately 900,000 records are loaded in 57 seconds.<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEivhCryipL__kHD2rgUS8auH0axZ2yPggLGnO8Jb-vplyVZfaEeTc3tRPMie9wOC8uWkkNWm-K7QfaLMoMcejvmPTBkEIRkJv-RM5ZuBGhISSheEEFj9FwFXvWGR8l0Y25baIyXurZTAhQ/s1600-h/image1.PNG"><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEivhCryipL__kHD2rgUS8auH0axZ2yPggLGnO8Jb-vplyVZfaEeTc3tRPMie9wOC8uWkkNWm-K7QfaLMoMcejvmPTBkEIRkJv-RM5ZuBGhISSheEEFj9FwFXvWGR8l0Y25baIyXurZTAhQ/s400/image1.PNG" alt="" id="BLOGGER_PHOTO_ID_5191652472213023122" border="0" /></a><br /><br />In this case, we could conclude this is a reasonable starting point to begin the next phase of the tuning exercise. 
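The workload check above can be sketched as a query against XML_LOAD_LOG. The column and message formats below are assumptions for illustration (check them against your own load log), but the arithmetic is the point: 900,000 rows in 57 seconds is roughly 947,000 rows per minute, close to the 1 million rows per minute benchmark.

```sql
-- Sketch only: the XML_LOAD_LOG column names and message text here are
-- assumed, not taken from a specific release - verify against your own
-- log table before relying on this.
-- The goal is to pull timestamps and row counts from recent builds so
-- you can compute rows per minute (e.g. 900,000 rows / 57 s ~ 947,000/min).
SELECT xml_loadid,
       xml_date,
       xml_message
FROM   xml_load_log
WHERE  xml_message LIKE '%rows%'
ORDER  BY xml_loadid, xml_date;
```

If the computed rate is well below 1 million rows per minute for a simple table or view source, start looking for I/O problems or inefficient source SQL, as described above.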
However, don’t forget that the performance initially listed in XML_LOAD_LOG could be influenced by a number of factors, but if the data source is a table or a very simple view, then 1 million rows a minute should be achievable. Anything less tends to indicate some sort of I/O issue, or possibly the use of inefficient SQL to extract data from the source. The ADDM analysis of I/O performance partially depends on a single argument, DBIO_EXPECTED, that describes the expected performance of the I/O subsystem. The value of DBIO_EXPECTED is the average time it takes to read a single database block in microseconds. Oracle uses the default value of 10 milliseconds, which is an appropriate value for most modern hard drives. If your hardware is significantly different, such as very old hardware or very fast RAM disks, consider using a different value. To determine the correct setting for the DBIO_EXPECTED parameter, perform the following steps:<br /><ol><li>Measure the average read time of a single database block read for your hardware. Note that this measurement is for random I/O, which includes seek time if you use standard hard drives. Typical values for hard drives are between 5000 and 20000 microseconds.</li><li>Set the value one time for all subsequent ADDM executions. For example, if the measured value is 8000 microseconds, you should execute the following command as the SYS user:<br /><span style="font-family:courier new;">EXECUTE DBMS_ADVISOR.SET_DEFAULT_TASK_PARAMETER( 'ADDM', 'DBIO_EXPECTED', 8000);</span></li></ol>Also review the Performance Tuning Guide, <a href="http://download.oracle.com/docs/cd/B28359_01/server.111/b28274/iodesign.htm#i20394">Chapter 8 : I/O Configuration and Design</a>.
Specifically, review these two sections:<br /><ul><li><a href="http://download.oracle.com/docs/cd/B28359_01/server.111/b28274/#CHDBBADG">Prerequisites for I/O Calibration</a></li><li><a href="http://download.oracle.com/docs/cd/B28359_01/server.111/b28274/#CHDGBECA">Running I/O Calibration</a></li></ul><span style="font-style: italic;">Parallel vs Serial Processing</span><br />As part of this step some consideration needs to be given to parallel vs serial processing. I find most people will start by running a build in serial mode and then assume that if it takes X amount of time to process in serial mode, running in parallel mode will naturally take X divided by the number of parallel jobs. Of course this is true up to a point. There is a definite tipping point in parallel processing where the law of diminishing returns sets in very quickly. As a starting point, if I am going to process a load in parallel I will start by using a job queue = “No. of CPUs-1”. This is usually a good starting point and, depending on where the bottlenecks start appearing (CPU waits vs I/O waits), I may increase or decrease this figure during testing.<br /><br />Parallel processing is a very useful tool for improving performance, but you need to use partitioned cubes, and the data being loaded must map across multiple partition keys, to get parallel processing. As before, this is not a silver bullet that will simply make everything run faster. It needs to be used carefully.<br /><br /><br /><span style="font-weight: bold;font-size:85%;" >Step 3 – Determine Best Reference Configurations</span><br />What can be useful is to work from a set of known configurations designed to provide a stated level of performance. Oracle has worked with a number of hardware vendors to provide documented configurations for data warehouse solutions.
These configurations can be used as benchmarks and/or recommendations for your environment.<br /><ul><li><a href="http://www.oracle.com/solutions/business_intelligence/emc.html">Reference Configurations for Dell and EMC</a></li><li><a href="http://www.oracle.com/solutions/business_intelligence/hp.html">Reference Configurations for HP</a></li><li><a href="http://www.oracle.com/solutions/business_intelligence/ibm.html">Reference Configurations for IBM</a></li><li><a href="http://www.oracle.com/solutions/business_intelligence/sun.html">Reference Configurations for Sun</a></li></ul>Each configuration combines software, hardware, storage, I/O and networking into an optimized environment for different scales of customer data warehouse requirements. Using extensive customer experience and technical knowledge, Oracle and its hardware partners have developed configurations for data warehouses with varying raw data sizes, concurrent user populations and workload complexity. With reference configurations suited to different profiles, customers can select the one that best suits their business and price/performance requirements. And since they're built on scalable, modular components, these reference configurations enable customers to aggressively pursue incremental data warehouse growth.<br /><br />One of the key questions for the performance tuning exercise is: are you just tuning the model based on today’s data volumes, or should the exercise look to maximise performance for future load volumes? This is a very tricky area to manage and difficult to plan and test, which is why having a referenceable configuration is so important. Why?
Because the reference configurations provide a clear upgrade path and levels of performance are certified along that upgrade path.<br /><br /><span style="font-weight: bold;font-size:85%;" >Step 4 – Match/Compare/Contrast with Existing Configurations</span><br />In reality, altering your hardware configuration is unlikely to be your first option: changing a configuration is likely to be a costly and time-consuming exercise. However, it should not be ignored. If you have followed and tuned your data model based on the following recommendations and the load and aggregation phase is still too long, then a full hardware review may in fact be needed and upgrades may need to be purchased. Hopefully, the net result of this whole exercise is to provide some sort of cost/benefit report to outline expected performance improvements based on additional hardware costs.<br /><br /><br /><span style="font-size:100%;"><span style="font-weight: bold;">Part 2 - Analysis of Dimensions</span></span><br />The second stage is to review the logical model for the dimensions. This stage is largely to confirm the dimensions are correctly implemented and the source data is of good quality. Most of this analysis is really just making sure there are no big issues within the various dimensions, and possibly making small changes based on experience from various projects. But it is important to ensure the dimensions are of “good quality” before moving on to review the cubes – there is no point building a house on a sand bank and then wondering why it gets washed away (if that makes sense?).
Good foundations are needed.<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjuUMjoDcruso-tYIwz6jz4VxzV2mG8A12qRY1_RZ1hfzV1sDOjaD1tWtybJXcWXsqB7lacjbiCmfzegccDzsJ4UxR1Sb3CCquNAUKaMmgZ7JXF7aS99OtWUGsTmNynJhhSiM8j3lDEo8E/s1600-h/Slides+for+Keith.003.png"><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjuUMjoDcruso-tYIwz6jz4VxzV2mG8A12qRY1_RZ1hfzV1sDOjaD1tWtybJXcWXsqB7lacjbiCmfzegccDzsJ4UxR1Sb3CCquNAUKaMmgZ7JXF7aS99OtWUGsTmNynJhhSiM8j3lDEo8E/s400/Slides+for+Keith.003.png" alt="" id="BLOGGER_PHOTO_ID_5191654082825759138" border="0" /></a><br /><br /><span style="font-size:85%;"><span style="font-weight: bold;">Step 1 - Analysis of Attributes</span></span><br />Many customers implement dimensions when they only really need attributes. This usually happens when they are migrating from a legacy OLAP server to Oracle OLAP. We have a customer at the moment that has an existing OLAP data model in a legacy OLAP server based on 60 dimensions. Reviewing the queries the users make against the data model it became clear that many of these dimensions were in fact simple attributes within Oracle OLAP. This can have a significant impact on the design of related cubes and the whole loading process, since fewer dimensions within a cube will improve both load and aggregation times.<br /><br />Can Oracle OLAP support extremely large dimensional models? Yes it can. The engine will support up to 256 dimensions within a single cube and within the AW you can have as many dimensions as you need. The key point here is: each cube can have its own dimensionality. 
Oracle OLAP does not implement hyper-cubes where every cube has to share the same dimensionality – one of the key benefits of Oracle OLAP over other legacy OLAP engines is that it does support cubes of different dimensionality.<br /><br />There is an excellent customer case study in the December 07 OLAP Newsletter that examines how one customer managed a very large data model based on lots of dimensions. The <a href="http://www.oracle.com/technology/products/bi/olap/olapref/newsletter/oracleolapnewsletter_dec07.html">E.ON</a> data model contains multiple cubes (subject areas), each with between 6 and 12 dimensions. The cubes are updated weekly with many millions of rows loaded and aggregated, with about 1 million rows updated in the cube per minute. For more information, read the complete review by clicking <a href="http://www.oracle.com/technology/products/bi/olap/olapref/newsletter/oracleolapnewsletter_dec07.html">here</a>. There are many other customers with even bigger models.<br /><br />From a UI perspective you need to think very carefully about the number of dimensions within a cube. Many UI studies have shown business users find it increasingly difficult to interpret the results from a dataset where there are more than nine dimensions. Although, as the above case shows, it is possible for some users to interact with larger, more complex models providing the information is presented in a usable format, it is worth spending some time clarifying the exact dimensionality of each cube.<br /><br />If you think about a typical crosstab layout, a nine dimensional cube results in one row edge dimension, one column edge dimension and seven page edge dimensions plus the measure dimension. That is a huge amount of information to absorb and, in my opinion, makes constructing queries very difficult. Another issue that frequently occurs as the number of dimensions increases is the game of “hunt-the-data”.
Even with only nine dimensions in a cube it is likely the data set will be extremely sparse, and drilling down only one or two levels across a couple of dimensions can result in crosstabs with little or no data. Some BI tools try to mask this problem by providing NA and/or zero row filters. The net result is usually a “no rows returned” message appearing in the body of the report at regular intervals.<br /><br />My main recommendation is: check the number of dimensions in your model and, for the sake of your users, try to keep the number within each cube down to something intelligible – approximately nine, for example. This is not a hard and fast rule, just a recommendation, but do read the E.ON case study as well. If you are presented with a data model, do not be afraid to challenge the dimensionality of the cubes within the model. Make sure all the dimensions within a cube are really required because I can guarantee some are simply basic attributes.<br /><br /><span style="font-size:85%;"><span style="font-weight: bold;">Step 2 - Analysis of Level Keys</span></span><br />There is not much to do here except make sure you use surrogate keys unless you are certain the dimension keys are unique across all levels. This is not always the case, and using a surrogate key is a good way to ensure your hierarchies are correctly populated. OLAP creates a surrogate key by prefixing the original source key with the level identifier. Therefore, from a storage perspective it makes sense to make the level key as short as possible. For example, don’t create a level identifier such as “PRODUCT_SUB_CATEGORY_SKU_IDENTIFIER”. There is a limit of 30 characters for level names. In practice I have seen issues with both data loading and aggregation where very large dimension keys (i.e. 
greater than 400 characters) have been created.<br /><br />In practice I recommend using simple level identifiers such as L1, L2 and L3, although this does make writing SQL statements via the SQL views a little more challenging: the level identifier is used in the column name along with the dimension name, and it is not exactly obvious what each column contains when the columns are called PRODUCT_L1, PRODUCT_L2 etc.<br /><br /><span style="font-style: italic;">Surrogate vs Natural Keys</span><br />The use of surrogate keys is an interesting area. On some projects it has been found that build performance increased by not using surrogate keys. This does make sense, since the source data for the cube has to be reformatted at load time to ensure the key is valid. In some cases the amount of time required to manipulate the incoming key values may be minimal. In other cases it has had a significant impact on load performance – the “1 million rows a minute” benchmark was not achieved and reverting to natural keys did improve load times. If the data source can be guaranteed to provide unique keys across all levels it is probably worth switching to natural keys. But be warned – you cannot switch between surrogate and natural keys if the cube already contains data.<br /><br /><span style="font-size:85%;"><span style="font-weight: bold;">Step 3 - Quantitative Analysis of Members</span></span><br />This is an important step as it will allow us to determine which levels to pre-aggregate within the cube. In most cases the default skip level approach to pre-solving levels within a cube is a reasonable starting point. 
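As a concrete illustration of the kind of profiling this step requires, a query along the following lines collects the member count at each level and the average number of children per parent. The DIM_PRODUCTS table and its level columns are hypothetical stand-ins for your own dimension source table:

```sql
-- Member counts per level of a hypothetical product dimension table.
SELECT COUNT(DISTINCT category_id)    AS l1_members,
       COUNT(DISTINCT subcategory_id) AS l2_members,
       COUNT(DISTINCT sku_id)         AS l3_members
FROM   dim_products;

-- Average number of children (SKUs) per parent (sub-category).
SELECT AVG(child_count) AS avg_children_per_parent
FROM  (SELECT subcategory_id,
              COUNT(DISTINCT sku_id) AS child_count
       FROM   dim_products
       GROUP  BY subcategory_id);
```

Repeating the second query for each pair of adjacent levels gives the full fan-out profile for the dimension.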
But it is possible to design a much better model by analysing the number of members at each level and the average number of children for each level.<br /><br />Let’s look at two real customer examples:<br /><br /><span style="font-style: italic;">Dimension 1</span><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiJbt3THB9D_lvNC7m0XxS0Si92RLLvwc89dguRfqg-T9OUR55-YG5f05YZICcpHCET23HPqOPBxFsh0_5wkAxjl_9RN4fNuGfmSrzTqyQdNLObNmtia-wSBAsoE52cPucFvAg8bI4IgDg/s1600-h/image2.PNG"><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiJbt3THB9D_lvNC7m0XxS0Si92RLLvwc89dguRfqg-T9OUR55-YG5f05YZICcpHCET23HPqOPBxFsh0_5wkAxjl_9RN4fNuGfmSrzTqyQdNLObNmtia-wSBAsoE52cPucFvAg8bI4IgDg/s400/image2.PNG" alt="" id="BLOGGER_PHOTO_ID_5191658425037695410" border="0" /></a><br /><br />A change as simple as this could have a huge impact on the amount of time taken to aggregate a cube.<br /><br />However, in some cases the OLAP Compression feature can be useful, allowing you to pre-compute additional lower levels within a hierarchy for little or no additional cost, because the sparsity of the data allows higher levels to be compressed out of the cube. If you have a situation where there is almost a 1:1 relationship between a level and the next level down in the hierarchy it would make sense to pre-compute that level, since the compression feature will compress out the redundant data. 
For example:<br /><br /><span style="font-style: italic;">Dimension 2</span><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEihYZWjokDC5xeImvdgzAWc7_2xPFXwHn3WmwtymukVUIBysGe9PLYtLih6FqJr5D47VijYjIEO_AyutaU8JKcSZY3SEJGoKMsUsvuZwr5nkbAEM4GiSNFDd61fwNFuGgdYySpkQN5kXao/s1600-h/image2b.PNG"><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEihYZWjokDC5xeImvdgzAWc7_2xPFXwHn3WmwtymukVUIBysGe9PLYtLih6FqJr5D47VijYjIEO_AyutaU8JKcSZY3SEJGoKMsUsvuZwr5nkbAEM4GiSNFDd61fwNFuGgdYySpkQN5kXao/s400/image2b.PNG" alt="" id="BLOGGER_PHOTO_ID_5191659052102920658" border="0" /></a><br /><br />In this example the hierarchy is relatively flat and the number of children returned at each level varies quite a lot. But at the lowest levels, there is likely to be a large number of instances where a parent only has a single child and in these situations the compress feature can compress out the repeated values. 
Therefore, it might make sense to solve levels L5 and L4.<br /><br /><span style="font-style: italic;">Dimension 3</span><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgdJnKM8D13b6ZCQDB1zzCjHt8AHSxmvTHhqviethZzmGh-ja_ict8iYGg8SnljaYNBNSKFZpdnlMQA3hzKHGCUVNK4Sl-2Ky4NJZg5toxUR_QkdgqRs7p8lwx1Yb-0y8Cb82C0pVoFfbk/s1600-h/image2c.PNG"><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgdJnKM8D13b6ZCQDB1zzCjHt8AHSxmvTHhqviethZzmGh-ja_ict8iYGg8SnljaYNBNSKFZpdnlMQA3hzKHGCUVNK4Sl-2Ky4NJZg5toxUR_QkdgqRs7p8lwx1Yb-0y8Cb82C0pVoFfbk/s400/image2c.PNG" alt="" id="BLOGGER_PHOTO_ID_5191658983383443906" border="0" /></a><br /><br />In this example the hierarchy here shows the normal pyramid approach and is definitely bottom heavy. But the upper levels contain relatively few members and drilling typically returns very few members. The default skip level approach for this dimension may in fact be pre-solving too many levels. In practice it may take 2 or 3 builds to determine which are the best levels to pre-solve, with a good starting point being:<br /><ul><li>Run 1: L7, L5, L2</li><li>Run 2: L7, L6, L1</li><li>Run 3: L7, L4, L2</li></ul>This dimension shows that it may be necessary to schedule multiple runs to test these various scenarios. Again we need to consider the impact of using compression, which allows OLAP to solve additional levels very cheaply.<br /><br /><span style="font-size:85%;"><span style="font-weight: bold;">Step 4a - Hierarchy Validation</span></span><br />Always, always check your hierarchies are functioning correctly. This involves using the Data Viewer feature within AWM. 
You should make sure the dimension is drillable and that selecting each level in turn returns the correct result-set.<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgiIR3U_k_OG8ZsuEMhVicn8bbYQZ2nu6Uj9XU8w8czV-yfX1WF12ninfer0pArpmvKxDJ3wQEGZ0cDc2WBzHErLGoG46kMELn27UR_yrq1-pN9VHVJclYLty8cRSxD7IdR6QkFiCgm-OU/s1600-h/image4.PNG"><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgiIR3U_k_OG8ZsuEMhVicn8bbYQZ2nu6Uj9XU8w8czV-yfX1WF12ninfer0pArpmvKxDJ3wQEGZ0cDc2WBzHErLGoG46kMELn27UR_yrq1-pN9VHVJclYLty8cRSxD7IdR6QkFiCgm-OU/s400/image4.PNG" alt="" id="BLOGGER_PHOTO_ID_5191659717822851570" border="0" /></a><br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEibdPeq2JuSmuxFi658eXJ9cXPWv_t9Nl8WL6DVun14OGWAWkFsJBJHxeMYBeB9lgl4Ojo1Myb0Lsgd9K0lniy2xUopX_0rOabgPt3VzOAYyGqdKIF4NPJhqci5LFuZpcrWkDCvbuux5QY/s1600-h/image5.PNG"><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEibdPeq2JuSmuxFi658eXJ9cXPWv_t9Nl8WL6DVun14OGWAWkFsJBJHxeMYBeB9lgl4Ojo1Myb0Lsgd9K0lniy2xUopX_0rOabgPt3VzOAYyGqdKIF4NPJhqci5LFuZpcrWkDCvbuux5QY/s400/image5.PNG" alt="" id="BLOGGER_PHOTO_ID_5191659653398342114" border="0" /></a><br /><br />A better approach is actually to make the database do the work, but this requires some additional SQL commands to be executed against the source tables. Ideally, try and create a relational dimension over the source table(s). Normally, the relational dimension object is used within query rewrite, which in this case we are not really concerned with for 10gR2 (in 11g the story is quite different as a cube can be registered as a materialised view and used for query-rewrite). 
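For reference, a relational dimension for the PRODUCTS example used below might be declared as follows. The level and column names are illustrative only; map them to your own source table:

```sql
-- Declares the 1:n level relationships the PRODUCTS hierarchy must obey
-- (hypothetical column names on the DIM_PRODUCTS source table).
CREATE DIMENSION products
  LEVEL sku         IS (dim_products.sku_id)
  LEVEL subcategory IS (dim_products.subcategory_id)
  LEVEL category    IS (dim_products.category_id)
  HIERARCHY prod_rollup (
    sku         CHILD OF
    subcategory CHILD OF
    category
  );
```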
But this does allow us to use the dbms_dimension.validate_dimension procedure, which verifies that the relationships specified in a dimension are valid. The rowid for any row that is found to be invalid will be stored in the table DIMENSION_EXCEPTIONS in the user's schema. The procedure looks like this:<br /><br /><span style="font-family:courier new;"> DBMS_DIMENSION.VALIDATE_DIMENSION (</span><br /><span style="font-family:courier new;"> dimension IN VARCHAR2,</span><br /><span style="font-family:courier new;"> incremental IN BOOLEAN := TRUE,</span><br /><span style="font-family:courier new;"> check_nulls IN BOOLEAN := FALSE,</span><br /><span style="font-family:courier new;"> statement_id IN VARCHAR2 := NULL );</span><br /><br />Note that before running the VALIDATE_DIMENSION procedure, you need to create a local table, DIMENSION_EXCEPTIONS, by running the provided script utldim.sql. If the VALIDATE_DIMENSION procedure encounters any errors, they are placed in this table. Querying this table identifies the exceptions that were found, using a simple SQL statement such as this:<br /><br /><span style="font-family:courier new;"> SELECT * FROM dimension_exceptions</span><br /><span style="font-family:courier new;"> WHERE statement_id = 'Product Validation';</span><br /><br /><br />However, rather than query this table directly, it may be better to use the rowid of each invalid row to retrieve the actual source row that generated the error. In the following example, the dimension PRODUCTS validates a table called DIM_PRODUCTS. 
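Using the signature above, a validation run for the PRODUCTS dimension might look like this (a full rather than incremental check; the statement_id matches the one used when querying DIMENSION_EXCEPTIONS):

```sql
BEGIN
  DBMS_DIMENSION.VALIDATE_DIMENSION(
    dimension    => 'PRODUCTS',            -- dimension to validate
    incremental  => FALSE,                 -- check every row, not just new ones
    check_nulls  => TRUE,                  -- also verify level columns are not null
    statement_id => 'Product Validation'); -- tag attached to any exception rows
END;
/
```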
To find any rows responsible for the errors, simply link back to the source table using the rowid column to extract the row(s) causing the problem, as in the following:<br /><br /><span style="font-family:courier new;"> SELECT * FROM DIM_PRODUCTS</span><br /><span style="font-family:courier new;"> WHERE rowid IN (SELECT bad_rowid</span><br /><span style="font-family:courier new;"> FROM dimension_exceptions</span><br /><span style="font-family:courier new;"> WHERE statement_id = 'Product Validation');</span><br /><br /><span style="font-size:85%;"><span style="font-weight: bold;">Step 4b - Hierarchy Order</span></span><br />The order of hierarchies within a dimension can have a significant impact on query performance. When solving levels at run-time the OLAP engine will use the last hierarchy in the list as the aggregation path. Consider this example using a time dimension:<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgr5QlvQtvQRsENQr19484EhCgzcj6_qNJ4RiJeMrK7uLfWO9NLbrU_7n9CAqA-jzfeqYVQriBzBYErdz8v5-KYjhFy216teR_BtuUPRlw3jLnqg8RYBz9TFXR6U48yddG7iNKq7Igd1PY/s1600-h/image5a.PNG"><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgr5QlvQtvQRsENQr19484EhCgzcj6_qNJ4RiJeMrK7uLfWO9NLbrU_7n9CAqA-jzfeqYVQriBzBYErdz8v5-KYjhFy216teR_BtuUPRlw3jLnqg8RYBz9TFXR6U48yddG7iNKq7Igd1PY/s400/image5a.PNG" alt="" id="BLOGGER_PHOTO_ID_5191660722845198850" border="0" /></a><br /><br />Let’s assume we pre-compute the levels Month and Quarter, but decide not to pre-compute the Year level because the main hierarchy used during queries is the Julian Year-Quarter-Month-Day hierarchy and, therefore, the total for each Year will be derived from adding up just 4 values. 
In fact, the aggregation engine looks at all the hierarchies to find the lowest common level across all hierarchies, which in this case is Day. It then selects the last hierarchy in the list containing the level Day, in this case the Week hierarchy. Therefore, the value for each dimension member at the Year level will be the result of adding up 365/366 values and not simply 4 Quarter values.<br /><br />The obvious question is why? The answer is to ensure backward compatibility with the Express ROLLUP command from which the AGGREGATE command is derived. When the Aggregate command was introduced one of our requirements was that it produced numbers that matched those of Rollup; thus, in cases where an aggregate node was declared in multiple hierarchies, we always produced numbers based on the LAST definition of the node, because that would be the number that matched the procedural approach taken by Rollup. Because of this behaviour, an alternative approach to hierarchy ordering might be as follows:<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhRXn_30dygSoJclWMdanUsmcCMF9hKgoFi_7Lyog8gFkrfCluvAGUkrEAm9chJ9iVNeJcmqcUTkUUGMZ09y6lCI-6h5Mm1-ky5ISDrdxqDOsEqc11HcKyMo5l7eETprVmpf8VSonksgpE/s1600-h/image5b.PNG"><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhRXn_30dygSoJclWMdanUsmcCMF9hKgoFi_7Lyog8gFkrfCluvAGUkrEAm9chJ9iVNeJcmqcUTkUUGMZ09y6lCI-6h5Mm1-ky5ISDrdxqDOsEqc11HcKyMo5l7eETprVmpf8VSonksgpE/s400/image5b.PNG" alt="" id="BLOGGER_PHOTO_ID_5191660971953302034" border="0" /></a><br />Now the run-time aggregation for Year will be derived from the level Quarter, which has been pre-computed, and the result will be returned much faster.<br /><br /><span style="font-size:85%;"><span style="font-weight: bold;">Step 5 - Check the Data Quality</span></span><br />This last step is 
probably the most important, especially as OLAP-style projects tend to be scheduled once all the ETL has been completed. But you should never take the quality of the source data for granted. Ideally, use the Data Quality option of Warehouse Builder (a costed option for OWB) to analyse the source data for each dimension and make sure the data is of a reasonable quality. Things to check are:<br /><ul><li>Consistent data type</li><li>Number of distinct values</li><li>Min and Max values</li><li>Domain members</li><li>Number of members not present in the fact table</li></ul>OLAP stores all members as data type text. Even if there are inconsistent data types within the source data, everything gets converted to text. This can mask some issues where unusual dimension members are included in the source data, such as –9999 or XXXX. In many cases the data owners are completely unaware these values exist, or, worse still, they are included to allow the data to balance correctly and are used as journal buckets. It may not be possible to remove these values but it is important to know they exist, and equally important to clarify whether they are in fact needed.<br /><br />The last check is particularly interesting if you are using that dimension as a partition key. If you are creating lots and lots of empty partitions that will never contain data, should those members even be loaded? In a recent project I identified a dimension that contained over 300,000 leaf node members, but in the main fact table there was only data for 50% of those members. The obvious question is: why load 150,000 plus members if you are never going to post data to them?<br /><br /><br /><span style="font-weight: bold;">Part 3 - Analysis of Cubes</span><br />The next stage is to review the data model for each cube in turn. 
It is in this area that the biggest impacts on load time are likely to be achieved.<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjzS98WhQjbGAtdWIEEjE-5OZjozpl22cNOIj0b3gvxz4ZI4oeujBokOh5uQYeQLReXZi3zBopC_0_7QVSyyGRMBo4JnDTItv6gcSvCLtVHuya2YdT4ZtgOM0OBVh6MHsDqJU8xTHLEB3o/s1600-h/Slides+for+Keith.004.png"><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjzS98WhQjbGAtdWIEEjE-5OZjozpl22cNOIj0b3gvxz4ZI4oeujBokOh5uQYeQLReXZi3zBopC_0_7QVSyyGRMBo4JnDTItv6gcSvCLtVHuya2YdT4ZtgOM0OBVh6MHsDqJU8xTHLEB3o/s400/Slides+for+Keith.004.png" alt="" id="BLOGGER_PHOTO_ID_5191661869601466914" border="0" /></a><br /><span style="font-size:85%;"><span style="font-weight: bold;">Step 1 - Analysis of Storage Model</span></span><br />It is important to assign an efficient storage model to a cube, as this will have a significant impact on both the load and aggregation times.<br /><br /><ul><li>Make sure compression is enabled.</li><li>Data type should be either DECIMAL or INTEGER.</li><ul><li>Warning: do not use NUMBER, as this uses approximately 3.5 times the storage compared to DECIMAL, but it is the default. Number requires 22 bytes and Decimal requires 8 bytes (see OLAP Application Developer's Guide, 10.2.0.3, Chapter 7 Aggregating Data).</li></ul></ul>Try not to use Global Composites. There is little need to use this feature, except in very special cases where you need to optimise the retrieval of rows via SQL access and you want to report only non-NA and/or non-zero rows. Note – if you are using compression it is not possible to use the “Global Composites” feature, even though in AWM 10gR2 the option box remains enabled after you select compression. (In 11g there are database events you can use to optimise the retrieval of non-NA/zero rows. 
See the posting by Bud Endress on the OLAP Blog: <a href="http://oracleolap.blogspot.com/2008/04/attribute-reporting-on-cube-using-sql.html">Attribute Reporting on the Cube using SQL</a>)<br /><br /><br /><span style="font-size:85%;"><span style="font-weight: bold;">Step 2 - Analysis of Sparsity Model</span></span><br />Management of sparsity within a cube is critical. First, the order of the dimensions is very important. When using compression, which should always be enabled, dimensions should be ordered with the dimension with the fewest members first and the dimension with the most members last. The most common question is: Should time be dense or sparse?<br /><br />Answer – it depends. This is where you need a deep understanding of the source data, and the data quality features in OWB can really help in this type of situation. In some models time works best dense and in other models time works best when it is sparse. This is especially true when time is used as the partition dimension. Therefore, you need to plan for testing these different scenarios.<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhsA1vT8ovCtrrE6gjNWpJ8Ki3eC_3P2MYUznJHd6uNbGfU90WInFEvJwYazY_b0Ru0f0bikSy7GbGJXSBh9OuHE3yRKmy10saEEpI9V8nmRKa6WX2MpiVfUJmllsn1fHw6tmwx0Ztb1qE/s1600-h/Image6.PNG"><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhsA1vT8ovCtrrE6gjNWpJ8Ki3eC_3P2MYUznJHd6uNbGfU90WInFEvJwYazY_b0Ru0f0bikSy7GbGJXSBh9OuHE3yRKmy10saEEpI9V8nmRKa6WX2MpiVfUJmllsn1fHw6tmwx0Ztb1qE/s400/Image6.PNG" alt="" id="BLOGGER_PHOTO_ID_5191662509551594034" border="0" /></a><br /><br />There is a Sparsity Advisor package in the database, which analyses the source data in relational tables and recommends a storage method. 
The recommendations may include the definition of a composite and partitioning of the data variable. The Sparsity Advisor consists of these procedures and functions:<br /><ul><li>SPARSITY_ADVICE_TABLE Procedure</li><li>ADD_DIMENSION_SOURCE Procedure</li><li>ADVISE_SPARSITY Procedure</li><li>ADVISE_DIMENSIONALITY Function</li><li>ADVISE_DIMENSIONALITY Procedure</li></ul>The Sparsity Advisor also provides a public table type for storing information about the dimensions of the facts being analyzed. I have to say this is not the friendliest package ever shipped with the database, but it can be useful in some situations. To use the Sparsity Advisor you need to follow these five steps:<br /><ol><li>Call SPARSITY_ADVICE_TABLE to create a table for storing the evaluation of the Sparsity Advisor.</li><li>Call ADD_DIMENSION_SOURCE for each dimension related by one or more columns to the fact table being evaluated. The information that you provide about these dimensions is stored in a DBMS_AW$_DIMENSION_SOURCES_T variable.</li><li>Call ADVISE_SPARSITY to evaluate the fact table. Its recommendations are stored in the table created by SPARSITY_ADVICE_TABLE. You can use these recommendations to make your own judgements about defining variables in your analytic workspace, or you can continue with the following step.</li><li>Call the ADVISE_DIMENSIONALITY procedure to get the OLAP DML object definitions for the recommended composite, partitioning, and variable definitions, or</li><li>Use the ADVISE_DIMENSIONALITY function to get the OLAP DML object definition for the recommended composite and the dimension order for the variable definitions for a specific partition.</li></ol><br /><br />The OLAP Reference manual provides an example script for the GLOBAL demo schema to analyse the relational fact table. 
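Pulling the five steps together, a heavily abridged sketch of a Sparsity Advisor session looks something like the following. The fact and dimension table names are hypothetical, and the argument lists are simplified – check the DBMS_AW chapter of the PL/SQL Packages reference for the exact signatures in your release:

```sql
-- Step 1: create the advice table (default name AW_SPARSITY_ADVICE).
BEGIN
  DBMS_AW.SPARSITY_ADVICE_TABLE();
END;
/

-- Steps 2 and 3: register each dimension column of the fact table,
-- then ask the advisor to evaluate the fact table itself.
DECLARE
  dimsources dbms_aw$_dimension_sources_t;
BEGIN
  DBMS_AW.ADD_DIMENSION_SOURCE('time',    'month_id',   dimsources, 'time_dim');
  DBMS_AW.ADD_DIMENSION_SOURCE('product', 'product_id', dimsources, 'product_dim');
  DBMS_AW.ADVISE_SPARSITY('units_fact', dimsources, 'AW_SPARSITY_ADVICE',
                          DBMS_AW.ADVICE_DEFAULT);
END;
/

-- Steps 4 and 5: review the recommendations (and, if required, call
-- ADVISE_DIMENSIONALITY for the generated OLAP DML object definitions).
SELECT * FROM aw_sparsity_advice;
```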
The amount of information required does seem a little excessive given that most of it could be extracted from the various metadata layers – maybe some bright person will create a wrapper around this to simplify the whole process.<br /><br />On the whole I still find the majority of models work best with everything sparse, and so far I have only found a few cases where load and aggregation times improved when time was marked dense. But as with all tuning exercises, it is always worth trying different options, as there is no “fits-all” tuning solution with OLAP.<br /><br /><span style="font-size:85%;"><span style="font-weight: bold;">Step 3 - Analysis of Partition Model</span></span><br />Partitioning is managed at both the logical and physical levels. At the logical level, it is possible to partition a cube using a specific level to split the cube into multiple chunks. At the physical level, it is possible to partition the actual AW$ table and associated indexes that form the AW.<br /><br /><span style="font-size:85%;"><span style="font-weight: bold;">Logical Partitioning</span></span><br />Always start by using partitioning. Why? Because partitioning allows the cube to be broken down into smaller segments – much like relational table partitioning. This can help improve the aggregation phase of a build because the engine is able to load more related data into memory during processing. It also allows you to use the parallel update features of Oracle OLAP during a build. But there are some things to consider when setting up partitioning. When using partitioning you should:<br /><br /><ul><li>Try to select a dimension that has balanced partitions, such as Time</li><li>Try to select a dimension level that is not too volatile; this is one of the reasons for electing to use a dimension such as time.</li><li>Select the Level based on the information collected during Step 3 of the analysis of dimensions. 
In 10g, the levels above the partition key are solved at run time (this is resolved in 11g) so select the level for the partition key carefully.</li><ul><li>When selecting the partition key, consider the impact this will have on the default partition, which contains all the levels above the partition level. For example, partitioning on a level such as Day might generate nice small partitions, but the default partition will contain all the other members such as Week, Month, Quarter and Year, making the default partition very large.</li></ul></ul>It might be necessary to experiment with different partition keys to get the right balance between stored and run-time aggregation. For example, if you partition using a Time dimension then Month is usually a good level to select as the key, since each year only needs to aggregate 12 members to return a total; but if you have 30 years of data and most reports start at the year level displaying all 30 years, the run-time performance might not be acceptable. In this case the level Quarter or even Year might be a better option.<br /><br /><span style="font-style: italic;">Parallel vs Serial Processing</span><br />Logical partitioning is required for cubes where you want to enable parallel processing. But be warned: running a job in parallel may not improve processing times. In fact, using too many parallel processes can have the opposite effect. But used wisely, parallel processing can drastically improve processing times provided the server is not already CPU bound. As a starting point I always begin testing by setting the value MaxJobQueues to “No. of CPUs-1” in the XML file for the definition of the build. In some cases even this might be too high, and reducing this figure can actually improve processing times. 
Tuning AW parallel processing is exactly the same as tuning relational parallel processing – you need to determine where the point of diminishing returns sets in, which can be a combination of:<br /><br /><ul><li>CPU loading</li><li>I/O bandwidth</li><li>Cube design</li></ul><br />Do not assume that throwing parallel resources at a performance issue will resolve the whole problem; managed carefully, parallel processing can provide a significant improvement in performance.<br /><br /><br /><span style="font-size:85%;"><span style="font-weight: bold;">Physical Partitioning</span></span><br />The aim of relational (physical) partitioning is to allow you to control the tablespace for each partition, thus distributing the load across multiple disks; to spread data across a variety of disk types (see the information on ILM on OTN); and to enhance query performance, since it is possible to direct specific queries to a smaller subset of data.<br /><br />Some, but not all, of this applies to an AW. From a tablespace perspective it is probably easier to use ASM to manage and distribute the storage of an AW across multiple disks, as opposed to creating a partitioned AW$ table spread across multiple tablespaces. The reason the AW$ table is partitioned is to optimise the LOB performance. Each partition has its own LOB index, which manages its storage, and a separate slave process can update each partition.<br /><br />The relational table that acts as a container for the AW, AW$xxxxx, can be partitioned to break the AW into more physical chunks, which can reduce contention for locks on the relational objects (AW$ table and related indexes) during parallel data loading jobs. By default each AW is created using a range partition key of gen# and 8 subpartitions. The DDL below is from a default AW created via AWM. 
Note the clauses to manage the partition and sub-partitions:<br /><ul><li>PARTITION BY RANGE ("GEN#") </li><li>SUBPARTITION BY HASH ("PS#","EXTNUM") </li><li>SUBPARTITIONS 8</li></ul><br /><span style="font-size:85%;"><span style="font-family:courier new;"> CREATE TABLE "BI_OLAP"."AW$SH_AW" </span><br /><span style="font-family:courier new;"> ("PS#" NUMBER(10,0), </span><br /><span style="font-family:courier new;"> "GEN#" NUMBER(10,0), </span><br /><span style="font-family:courier new;"> "EXTNUM" NUMBER(8,0), </span><br /><span style="font-family:courier new;"> "AWLOB" BLOB, </span><br /><span style="font-family:courier new;"> "OBJNAME" VARCHAR2(256 BYTE), </span><br /><span style="font-family:courier new;"> "PARTNAME" VARCHAR2(256 BYTE)) </span><br /><span style="font-family:courier new;">PCTFREE 10 PCTUSED 40 INITRANS 4 MAXTRANS 255 </span><br /><span style="font-family:courier new;"> STORAGE(</span><br /><span style="font-family:courier new;"> BUFFER_POOL DEFAULT)</span><br /><span style="font-family:courier new;"> TABLESPACE "BI_OLAP" </span><br /><span style="font-family:courier new;"> LOB ("AWLOB") STORE AS (</span><br /><span style="font-family:courier new;"> DISABLE STORAGE IN ROW CHUNK 8192 PCTVERSION 0</span><br /><span style="font-family:courier new;"> CACHE </span><br /><span style="font-family:courier new;"> STORAGE(</span><br /><span style="font-family:courier new;"> BUFFER_POOL DEFAULT)) </span><br /><span style="font-family:courier new;"> PARTITION BY RANGE ("GEN#") </span><br /><span style="font-family:courier new;"> SUBPARTITION BY HASH ("PS#","EXTNUM") </span><br /><span style="font-family:courier new;"> SUBPARTITIONS 8</span><br /><span style="font-family:courier new;"> (PARTITION "PTN1" VALUES LESS THAN (1) </span><br /><span style="font-family:courier new;">PCTFREE 10 PCTUSED 40 INITRANS 4 MAXTRANS 255 </span><br /><span style="font-family:courier new;"> STORAGE(</span><br /><span style="font-family:courier new;"> BUFFER_POOL 
DEFAULT)</span><br /><span style="font-family:courier new;"> TABLESPACE "BI_OLAP" </span><br /><span style="font-family:courier new;"> LOB ("AWLOB") STORE AS (</span><br /><span style="font-family:courier new;"> DISABLE STORAGE IN ROW CHUNK 8192 PCTVERSION 0</span><br /><span style="font-family:courier new;"> CACHE READS LOGGING </span><br /><span style="font-family:courier new;"> STORAGE(</span><br /><span style="font-family:courier new;"> BUFFER_POOL DEFAULT)) </span><br /><span style="font-family:courier new;"> ( SUBPARTITION "SYS_SUBP16109" </span><br /><span style="font-family:courier new;"> LOB ("AWLOB") STORE AS (</span><br /><span style="font-family:courier new;"> TABLESPACE "BI_OLAP" ) </span><br /><span style="font-family:courier new;"> TABLESPACE "BI_OLAP", </span><br /><span style="font-family:courier new;"> SUBPARTITION "SYS_SUBP16110" </span><br /><span style="font-family:courier new;"> LOB ("AWLOB") STORE AS (</span><br /><span style="font-family:courier new;"> TABLESPACE "BI_OLAP" ) </span><br /><span style="font-family:courier new;"> TABLESPACE "BI_OLAP", </span><br /><span style="font-family:courier new;"> SUBPARTITION "SYS_SUBP16111" </span><br /><span style="font-family:courier new;"> LOB ("AWLOB") STORE AS (</span><br /><span style="font-family:courier new;"> TABLESPACE "BI_OLAP" ) </span><br /><span style="font-family:courier new;"> TABLESPACE "BI_OLAP", </span><br /><span style="font-family:courier new;"> SUBPARTITION "SYS_SUBP16112" </span><br /><span style="font-family:courier new;"> LOB ("AWLOB") STORE AS (</span><br /><span style="font-family:courier new;"> TABLESPACE "BI_OLAP" ) </span><br /><span style="font-family:courier new;"> TABLESPACE "BI_OLAP", </span><br /><span style="font-family:courier new;"> SUBPARTITION "SYS_SUBP16113" </span><br /><span style="font-family:courier new;"> LOB ("AWLOB") STORE AS (</span><br /><span style="font-family:courier new;"> TABLESPACE "BI_OLAP" ) </span><br /><span style="font-family:courier new;"> 
TABLESPACE "BI_OLAP", </span><br /><span style="font-family:courier new;"> SUBPARTITION "SYS_SUBP16114" </span><br /><span style="font-family:courier new;"> LOB ("AWLOB") STORE AS (</span><br /><span style="font-family:courier new;"> TABLESPACE "BI_OLAP" ) </span><br /><span style="font-family:courier new;"> TABLESPACE "BI_OLAP", </span><br /><span style="font-family:courier new;"> SUBPARTITION "SYS_SUBP16115" </span><br /><span style="font-family:courier new;"> LOB ("AWLOB") STORE AS (</span><br /><span style="font-family:courier new;"> TABLESPACE "BI_OLAP" ) </span><br /><span style="font-family:courier new;"> TABLESPACE "BI_OLAP", </span><br /><span style="font-family:courier new;"> SUBPARTITION "SYS_SUBP16116" </span><br /><span style="font-family:courier new;"> LOB ("AWLOB") STORE AS (</span><br /><span style="font-family:courier new;"> TABLESPACE "BI_OLAP" ) </span><br /><span style="font-family:courier new;"> TABLESPACE "BI_OLAP") , </span><br /><span style="font-family:courier new;"> PARTITION "PTNN" VALUES LESS THAN (MAXVALUE) </span><br /><span style="font-family:courier new;">PCTFREE 10 PCTUSED 40 INITRANS 4 MAXTRANS 255 </span><br /><span style="font-family:courier new;"> STORAGE(</span><br /><span style="font-family:courier new;"> BUFFER_POOL DEFAULT)</span><br /><span style="font-family:courier new;"> TABLESPACE "BI_OLAP" </span><br /><span style="font-family:courier new;"> LOB ("AWLOB") STORE AS (</span><br /><span style="font-family:courier new;"> DISABLE STORAGE IN ROW CHUNK 8192 PCTVERSION 0</span><br /><span style="font-family:courier new;"> CACHE </span><br /><span style="font-family:courier new;"> STORAGE(</span><br /><span style="font-family:courier new;"> BUFFER_POOL DEFAULT)) </span><br /><span style="font-family:courier new;"> ( SUBPARTITION "SYS_SUBP16117" </span><br /><span style="font-family:courier new;"> LOB ("AWLOB") STORE AS (</span><br /><span style="font-family:courier new;"> TABLESPACE "BI_OLAP" ) </span><br /><span 
style="font-family:courier new;"> TABLESPACE "BI_OLAP", </span><br /><span style="font-family:courier new;"> SUBPARTITION "SYS_SUBP16118" </span><br /><span style="font-family:courier new;"> LOB ("AWLOB") STORE AS (</span><br /><span style="font-family:courier new;"> TABLESPACE "BI_OLAP" ) </span><br /><span style="font-family:courier new;"> TABLESPACE "BI_OLAP", </span><br /><span style="font-family:courier new;"> SUBPARTITION "SYS_SUBP16119" </span><br /><span style="font-family:courier new;"> LOB ("AWLOB") STORE AS (</span><br /><span style="font-family:courier new;"> TABLESPACE "BI_OLAP" ) </span><br /><span style="font-family:courier new;"> TABLESPACE "BI_OLAP", </span><br /><span style="font-family:courier new;"> SUBPARTITION "SYS_SUBP16120" </span><br /><span style="font-family:courier new;"> LOB ("AWLOB") STORE AS (</span><br /><span style="font-family:courier new;"> TABLESPACE "BI_OLAP" ) </span><br /><span style="font-family:courier new;"> TABLESPACE "BI_OLAP", </span><br /><span style="font-family:courier new;"> SUBPARTITION "SYS_SUBP16121" </span><br /><span style="font-family:courier new;"> LOB ("AWLOB") STORE AS (</span><br /><span style="font-family:courier new;"> TABLESPACE "BI_OLAP" ) </span><br /><span style="font-family:courier new;"> TABLESPACE "BI_OLAP", </span><br /><span style="font-family:courier new;"> SUBPARTITION "SYS_SUBP16122" </span><br /><span style="font-family:courier new;"> LOB ("AWLOB") STORE AS (</span><br /><span style="font-family:courier new;"> TABLESPACE "BI_OLAP" ) </span><br /><span style="font-family:courier new;"> TABLESPACE "BI_OLAP", </span><br /><span style="font-family:courier new;"> SUBPARTITION "SYS_SUBP16123" </span><br /><span style="font-family:courier new;"> LOB ("AWLOB") STORE AS (</span><br /><span style="font-family:courier new;"> TABLESPACE "BI_OLAP" ) </span><br /><span style="font-family:courier new;"> TABLESPACE "BI_OLAP", </span><br /><span style="font-family:courier new;"> SUBPARTITION "SYS_SUBP16124" 
</span><br /><span style="font-family:courier new;"> LOB ("AWLOB") STORE AS (</span><br /><span style="font-family:courier new;"> TABLESPACE "BI_OLAP" ) </span><br /><span style="font-family:courier new;"> TABLESPACE "BI_OLAP") ) ;</span><br /></span><br />The best overall approach here is to ensure you have the correct number of sub-partitions to reduce contention during updates. For example, if you have a cube with three years of data partitioned at the month level, it would be sensible to add an additional 36 subpartitions to the AW$ table to spread the load and reduce contention during parallel updates. You can add more sub-partitions quickly and easily as follows.<br /><br /><span style="font-size:85%;"><span style="font-family:courier new;">alter table aw$test modify partition ptnn add subpartition ptnn_009 update indexes;</span><br /><span style="font-family:courier new;">alter table aw$test modify partition ptnn add subpartition ptnn_010 update indexes;</span><br /></span><br />Therefore, I recommend adding additional subpartitions at the physical level to match the number of logical partitions within the cube.<br /><br />It is possible to go to the next level (if you really feel it is necessary) and directly manage the DDL used to create the AW; there are a number of commands that allow you to control the default tablespace and the number of partitions. 
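If you need many additional subpartitions (for example the 36 suggested above), the ALTER TABLE statements can be generated with a short PL/SQL loop rather than typed by hand. This is only a sketch, assuming the AW table is called AW$TEST and the PTNN_nnn naming convention shown above:<br /><br /><span style="font-size:85%;"><span style="font-family:courier new;">begin</span><br /><span style="font-family:courier new;">&nbsp;&nbsp;for i in 11..46 loop&nbsp;&nbsp;-- 36 extra subpartitions: ptnn_011 .. ptnn_046</span><br /><span style="font-family:courier new;">&nbsp;&nbsp;&nbsp;&nbsp;execute immediate 'alter table aw$test modify partition ptnn ' ||</span><br /><span style="font-family:courier new;">&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;'add subpartition ptnn_' || to_char(i, 'FM000') || ' update indexes';</span><br /><span style="font-family:courier new;">&nbsp;&nbsp;end loop;</span><br /><span style="font-family:courier new;">end;</span><br /><span style="font-family:courier new;">/</span><br /></span><br />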
You can increase the number of sub-partitions within each gen# partition using the ‘aw create’ command, as shown here:<br /><br /><span style="font-size:85%;"><span style="font-family:courier new;">exec dbms_aw.execute('aw create <span style="font-style: italic;">owner.aw_name</span> partitions <span style="font-style: italic;">N</span> segmentsize <span style="font-style: italic;">N</span> K|M|G');</span><br /></span><br />Note the keyword “partitions” actually refers to the number of subpartitions.<br />It is also possible to define a target tablespace for the AW via the DBMS_AW.AW_ATTACH procedure:<br /><br /><span style="font-size:85%;"><span style="font-family:courier new;"> DBMS_AW.AW_ATTACH ( </span><br /><span style="font-family:courier new;"> awname IN VARCHAR2,</span><br /><span style="font-family:courier new;"> forwrite IN BOOLEAN DEFAULT FALSE,</span><br /><span style="font-family:courier new;"> createaw IN BOOLEAN DEFAULT FALSE,</span><br /><span style="font-family:courier new;"> attargs IN VARCHAR2 DEFAULT NULL,</span><br /><span style="font-family:courier new;"> tablespace IN VARCHAR2 DEFAULT NULL);</span><br /></span><br />For example, the following SQL statement creates the AW GLOBAL_PROGRAMS as the last user-owned analytic workspace in tablespace USERS:<br /><br /><span style="font-size:85%;">SQL> EXECUTE DBMS_AW.AW_ATTACH('global_programs', true, true, 'last', 'USERS');<br /></span><br />AWM 10gR2 (10.2.0.03A) also allows you to define the tablespace when you create the AW, but the 
tablespace name is not included in the XML definition of the AW.<br />If you think you need to get right down to the base DDL level to control the allocation of tablespaces used by the AW, then you will need to manually define the AW$ table. The easiest method is to create another AW$ table using the DDL from the original AW$ table and modifying it to create your own placement statements for the tablespaces. To get the DDL for an AW (table and index) you can either use SQLDeveloper or use the DBMS_METADATA package as follows:<br /><br /><span style="font-size:85%;"> set heading off;<br />set echo off;<br />set pages 999;<br />set long 90000;<br />spool aw_ddl.sql<br />select dbms_metadata.get_ddl('TABLE','AW$SH_AW','SH_OLAP') from dual;<br />select dbms_metadata.get_ddl('INDEX','SH_AW_I$','SH_OLAP') from dual;<br />spool off;<br /></span><br />These statements show the exact DDL used to generate the AW$ table and its associated index. Once you have the DDL you can then modify the tablespace statements for each sub-partition to spread the loading across different tablespaces and hence data files. But it is much easier to use ASM to manage all this for you.<br /><br />There is one major issue with manually creating an AW – the standard form metadata is not automatically added to the AW and there is no documented process for achieving this. The only reliable solution I have found is to first create the AW via AWM and then export the empty AW to an EIF file. This EIF file will then contain the standard form metadata objects. Once you have deleted and re-created the AW based on your specific tablespace and subpartition requirements, the standard form metadata can be added by importing the EIF file. 
Not the prettiest of solutions but it works – at least with 10gR2.<br /><br /><br /><span style="font-size:85%;"><span style="font-weight: bold;">Step 4 - Analysis of Aggregation Model</span></span><br /><span style="font-style: italic;">The Summarize To Tab</span><br />This is where the biggest improvements to build time are likely to be uncovered. The “Summarize To” tab allows you to select the levels to pre-solve. Based on the analysis of the number of members and children at each level it should be possible to tune the levels to pre-solve only the most important levels.<br /><br />This step will require lots of testing and many builds to finally arrive at the best mix of levels.<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgRF39N6yag0G6qFx3Xn1K0Z7P2Z8INcZ0eodhQitirfiXgkj0TwJ7zC17kxsz7gFQY1gtUf69OHgnj7q-ho0x8TsWTtjJbSx0MkDYhkvXBgXt5KC61vIUzUerm1op3Pr6XRJ7J3ovd1MA/s1600-h/Image13.JPG"><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgRF39N6yag0G6qFx3Xn1K0Z7P2Z8INcZ0eodhQitirfiXgkj0TwJ7zC17kxsz7gFQY1gtUf69OHgnj7q-ho0x8TsWTtjJbSx0MkDYhkvXBgXt5KC61vIUzUerm1op3Pr6XRJ7J3ovd1MA/s400/Image13.JPG" alt="" id="BLOGGER_PHOTO_ID_5191987490420546434" border="0" /></a><br /><br /><span style="font-style: italic;">The Rules Tab</span><br />If you are using the same aggregation method across all dimensions, such as SUM, the aggregation engine will optimise the processing order for the dimensions by solving them in the reverse order from highest cardinality to lowest cardinality. Despite this I always manually order the dimensions myself anyway on the Rules Tab.<br /><br />Where you are using different aggregation methods across the various dimensions it is important to ensure the dimensions are in the correct order to return the desired result. 
If you change the order to improve aggregation performance where different aggregation methods are used, check the results returned are still correct. Getting the wrong answer very quickly is not a good result.<br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgvBKNlXvTQPD5f6hHL9EVnYMjcErWJmgyev58x6F42f6hWkOeA_Rg0zZIu3c-NYMjXeERprq_2-XzMzZTomd7GVDF3BGFwpfhxGVySYCCSzRoWbYs3PO4YZK8JnRNhKq_FMbvZFxPkF4k/s1600-h/Image14.JPG"><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgvBKNlXvTQPD5f6hHL9EVnYMjcErWJmgyev58x6F42f6hWkOeA_Rg0zZIu3c-NYMjXeERprq_2-XzMzZTomd7GVDF3BGFwpfhxGVySYCCSzRoWbYs3PO4YZK8JnRNhKq_FMbvZFxPkF4k/s400/Image14.JPG" alt="" id="BLOGGER_PHOTO_ID_5191988074536098706" border="0" /></a><br /><br /><span style="font-size:85%;"><span style="font-weight: bold;">Step 5 - Analysis of Data Quality</span></span><br />This is another area that can have a huge impact on load times. There are three key things to consider:<br /><ul><li>Number of NA cells</li><li>Number of Zero cells</li><li>Sparsity patterns</li></ul>Quite often I see situations where hundreds of thousands of either NA or zero values are loaded into a cube and then aggregated. In a recent customer situation, over 40% of the data being loaded was either NA or zero. Removing just those records from the data load saved a huge amount of time both in loading and aggregating that data set. Now in some cases it may in fact be necessary to load a zero balance because the value “0” does actually mean something and having a cell appear as NULL in a report does not convey the same meaning. If this is the case, there are much better ways of managing zero balances than loading and aggregating those balances up across all the various hierarchies to return a value of 0. 
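One straightforward way to keep those records out of the load is to filter them in the view that feeds the cube mapping. The sketch below is illustrative only; the fact table SALES_FACT and measure column AMOUNT are hypothetical names, not taken from the example above:<br /><br /><span style="font-size:85%;"><span style="font-family:courier new;">CREATE OR REPLACE VIEW vw_sales_fact_load AS</span><br /><span style="font-family:courier new;">SELECT time_id, prod_id, cust_id, amount</span><br /><span style="font-family:courier new;">FROM&nbsp;&nbsp;&nbsp;sales_fact</span><br /><span style="font-family:courier new;">WHERE&nbsp;&nbsp;amount IS NOT NULL</span><br /><span style="font-family:courier new;">AND&nbsp;&nbsp;&nbsp;&nbsp;amount != 0;&nbsp;&nbsp;-- drop zero balances as well as NULLs</span><br /></span><br />Remapping the cube to a view like this is usually the least intrusive way to apply the filter.<br />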
My recommendation is to remove all zero and NA/null rows from the source fact table.<br /><br />Where there is a need to show a zero balance, create a separate cube, load only the zero balances into that cube, and do not aggregate the data. Use a formula to glue the non-zero balance data to the zero balance data, such as:<br /><br /><span style="font-size:85%;"><span style="font-family:courier new;">Nafill(CUBE1_NON_NA_DATA, CUBE2_ZERO_BALANCE_DATA)</span></span><br /><br />This will significantly improve the performance of the main cube since the aggregation engine only has to deal with real balances.<br /><br />Sparsity patterns are important when you have a cube that contains a large number of measures all sourced from the same fact table. In another situation, a customer had designed two cubes with about 30 measures in one cube and two measures in the other cube. The source fact table contained 75 million rows. The data load was taking about ten hours for just three years of data. Looking at the data and executing various SQL counts to determine the number of NULL cells and zero cells for each measure, it was clear there were five different sparsity patterns within the fact table.<br /><br />By breaking the single cube into five different cubes, creating views over the base fact table to only return the relevant columns for each cube, and removing all NA and zero values, the amount of data being loaded each month declined to the values shown below:<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhEOu8nkKGx3I6Su9vEZMFDJHsI8-7WFztHUi0l2Ji7GRB3uLLzFKSm2wAjPCIsbAGtSgUYZCCBOCAjeLoKHWbs4lRyFBohn1sfN9ON8As8B9ZKZEZYSwYVambQn9AEanNxx07o0Mi8n1Q/s1600-h/Image15.JPG"><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer;" 
src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhEOu8nkKGx3I6Su9vEZMFDJHsI8-7WFztHUi0l2Ji7GRB3uLLzFKSm2wAjPCIsbAGtSgUYZCCBOCAjeLoKHWbs4lRyFBohn1sfN9ON8As8B9ZKZEZYSwYVambQn9AEanNxx07o0Mi8n1Q/s400/Image15.JPG" alt="" id="BLOGGER_PHOTO_ID_5191988791795637154" border="0" /></a><br /><br />This change combined with changes to the selection of levels pre-aggregated reduced the build and aggregation time by over 50% with little impact on query performance.<br /><br />It is critical to fully understand the source data and how it is stored. As the number of measures within a cube increases it is likely that the number of times an NA or Zero value appears will also increase. Breaking a large cube up into smaller more focused chunks in this type of scenario can provide significant benefits.<br /><br /><span style="font-size:100%;"><span style="font-weight: bold;">Part 4 - Analysis of Source Schema Queries</span></span><br />When loading data into a cube from a relational source schema you should be able to achieve about 1 million rows updated in the cube per minute. If you are not seeing that level of throughput from the source table/view, you need to look at:<br /><ul><li>Hardware issues</li><li>Cube design issues</li><li>Query design issues</li></ul>The first two issues have already been covered. 
This area aims to review the tuning of the query fetching the data from the relational source table/view into the cube.<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiY_P0y2yWTA9ilBRUkbt1W30xU4usOnvSPiXU3drBIehyphenhyphenQaxabBVGBH4MnfoITN_iqBQNfw2fQg-74-dvSMxvuqn8nba7oPN9YFE-s6V63O5jKLaATSx-cpeSeS5Q9XwXkQMt-yUYseuI/s1600-h/Slides+for+Keith.005.png"><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiY_P0y2yWTA9ilBRUkbt1W30xU4usOnvSPiXU3drBIehyphenhyphenQaxabBVGBH4MnfoITN_iqBQNfw2fQg-74-dvSMxvuqn8nba7oPN9YFE-s6V63O5jKLaATSx-cpeSeS5Q9XwXkQMt-yUYseuI/s400/Slides+for+Keith.005.png" alt="" id="BLOGGER_PHOTO_ID_5191989049493674930" border="0" /></a><br /><br />Tuning the queries used to load dimension members and data into cubes can be very important. When either a data load or dimension load is executed, a program is created containing the SQL to fetch the data from the relational table. It is important to make sure the SQL being executed is as efficient as possible. By using views as the source for your mappings it is relatively easy to add additional hints to ensure the correct execution path is used. Note - with 11g this can cause problems if the cube is to be exposed as a materialised view. For query re-writes to function, the cube must use the underlying fact table that is part of the end-user query.<br /><br /><span style="font-size:85%;"><span style="font-weight: bold;">Step 1 - Analysing SQL Statements</span></span><br />To optimise the SQL executed during a load you can use either, or both, of the following approaches:<br /><ul><li>Enterprise Manager – via Tuning Packs</li><li>Manual analysis</li></ul>If you are comfortable using PL/SQL and understand a little about OLAP DML you can follow the manual approach. 
However, I expect most people will use Enterprise Manager as it makes the whole process so simple. Note, though, that the Tuning Pack is a costed option for EM, so check your license agreement before you start using the Enterprise Manager approach.<br /><br /><span style="font-style: italic;">Enterprise Manager</span><br />Enterprise Manager can be used to monitor the results from a SQL statement. The Performance Tab provides the environment for tuning SQL statements as well as monitoring the operation of the whole instance. The easiest way to find the SQL statement used by the data load process is to search for a SELECT statement against the view/table used in the mapping. The SQL can quite often be found in the “Duplicate SQL” report at the bottom of the Top Activity page:<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjvT-izIkYHB3bRW7nlu4Otfy3_cCX638VYHUD_f0LUXL7TAnKnCL7FDRpmgYdhz4DAA4jYyj143sRE3oQ7Di_XqTMfHRQQGZuYCpZNdi3SR6UH9xCmA3_pO-xtmrouUGAK9ZFqK3Lrqhk/s1600-h/Image9b.PNG"><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjvT-izIkYHB3bRW7nlu4Otfy3_cCX638VYHUD_f0LUXL7TAnKnCL7FDRpmgYdhz4DAA4jYyj143sRE3oQ7Di_XqTMfHRQQGZuYCpZNdi3SR6UH9xCmA3_pO-xtmrouUGAK9ZFqK3Lrqhk/s400/Image9b.PNG" alt="" id="BLOGGER_PHOTO_ID_5191989560594783170" border="0" /></a><br /><br />Once you have found the SQL statement, clicking on it in the table will present a complete analysis of that statement and allow you to schedule the SQL Tuning Advisor. The output from the Advisor includes recommendations for improving the efficiency of that statement. 
Below is the analysis of the resources used to execute the product dimension SQL statement:<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiz-kW8QQvt7b49F348uBZoKmKzPxKf-79csMq6_fMY6qKniqr4dwWjf-L13v-UYI_3MWJ6wd6YHovGgIMDpVQO5k0V7LINFx7DOl6s6xWKlQxRx7XgEr1QgR5npnA1KhzPDZymP_AEsIo/s1600-h/Image9a.PNG"><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiz-kW8QQvt7b49F348uBZoKmKzPxKf-79csMq6_fMY6qKniqr4dwWjf-L13v-UYI_3MWJ6wd6YHovGgIMDpVQO5k0V7LINFx7DOl6s6xWKlQxRx7XgEr1QgR5npnA1KhzPDZymP_AEsIo/s400/Image9a.PNG" alt="" id="BLOGGER_PHOTO_ID_5191989771048180690" border="0" /></a><br /><br /><span style="font-style: italic;">Scheduling the Advisor</span><br />Scheduling the advisor to analyse your SQL statement is very simple. Click on the button in the top right corner of the SQL Details screen. This will launch the SQL Advisor where you need to provide:<br /><ul><li>A description for the job</li><li>Set the scope to either limited or comprehensive (there are on screen notes to help you make this decision)</li><li>Time and date to run the Advisor, since it might not be possible to run the advisor immediately.</li></ul>Once the Advisor has completed its review, it is possible to look at the recommendations that have been generated. 
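If you prefer to work from the command line, the same advisor can be driven through the DBMS_SQLTUNE package (also covered by the Tuning Pack licence). This is a sketch only; the SQL_ID and task name are placeholders you would substitute after finding the load statement:<br /><br /><span style="font-size:85%;"><span style="font-family:courier new;">declare</span><br /><span style="font-family:courier new;">&nbsp;&nbsp;l_task varchar2(64);</span><br /><span style="font-family:courier new;">begin</span><br /><span style="font-family:courier new;">&nbsp;&nbsp;l_task := dbms_sqltune.create_tuning_task(</span><br /><span style="font-family:courier new;">&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;sql_id&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;=> '0abc123def456',&nbsp;&nbsp;-- placeholder SQL_ID</span><br /><span style="font-family:courier new;">&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;scope&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;=> 'COMPREHENSIVE',</span><br /><span style="font-family:courier new;">&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;time_limit => 300,</span><br /><span style="font-family:courier new;">&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;task_name&nbsp;&nbsp;=> 'olap_load_tune');</span><br /><span style="font-family:courier new;">&nbsp;&nbsp;dbms_sqltune.execute_tuning_task(task_name => l_task);</span><br /><span style="font-family:courier new;">end;</span><br /><span style="font-family:courier new;">/</span><br /><span style="font-family:courier new;">select dbms_sqltune.report_tuning_task('olap_load_tune') from dual;</span><br /></span><br />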
After you have implemented the recommendations it is then possible to view the explain plan for your query:<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiQbcQw62axdNo4f7SX18HxACdjGa7EaE-qSqgvhzz9LPOATKF4fvVMOH16YQtYjymvXrXBX10q6YZaEVqhm93InEp1QhdeMNpXPJ3V6hSW9AEQEMO74JMDniPWuSUuGdqBArFgkmBsdoM/s1600-h/Image9d.PNG"><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiQbcQw62axdNo4f7SX18HxACdjGa7EaE-qSqgvhzz9LPOATKF4fvVMOH16YQtYjymvXrXBX10q6YZaEVqhm93InEp1QhdeMNpXPJ3V6hSW9AEQEMO74JMDniPWuSUuGdqBArFgkmBsdoM/s400/Image9d.PNG" alt="" id="BLOGGER_PHOTO_ID_5191990063105956834" border="0" /></a><br /><span style="font-size:85%;"><span style="font-weight: bold;">Note: These features are costed extensions to the Enterprise Manager console and cannot be used on a production system unless your customer has bought these extensions.</span><br /></span><br /><span style="font-style: italic;">Manual Tuning</span><br />So how do you capture the SQL being executed during a data load? In 10g, during a load process an OLAP DML program is created called '___XML_LOAD_TEMPPRG'.<br />This program contains the code used during the build process and it is relatively easy to capture this code either via Enterprise Manager or manually at the end of the build.<br />(For a data load in 11g, look at the CUBE_BUILD_LOG’s “output” column. The table is in the AW’s schema).<br /><br />To manually capture the program code (in 10g), create your own job to manually execute a data load. For example below is a job to refresh the members in the dimension Products. 
Note the first three lines and last four lines that control the dumping of the program code so we can capture the SQL.<br /><br /><br /><span style="font-size:85%;"><span style="font-family:courier new;">set serveroutput on</span><br /><span style="font-family:courier new;">exec dbms_aw.execute('aw attach SH_AW rw first');</span><br /><span style="font-family:courier new;">exec dbms_aw.execute('cda BI_DIR');</span><br /></span><br /><span style="font-style: italic;">call SQL file to refresh cube<br /><br /></span><span style="font-size:85%;"><span style="font-family:courier new;">exec dbms_aw.execute('outfile loader.txt');</span><br /><span style="font-family:courier new;">exec dbms_aw.execute('DSC ___XML_LOAD_TEMPPRG')</span><br /><span style="font-family:courier new;">exec dbms_aw.execute('outfile eof');</span><br /><span style="font-family:courier new;">exec dbms_aw.execute('aw detach SH_AW');</span><br /></span><br />The resulting program looks like this, with the SQL code highlighted in bold:<br /><span style="font-size:85%;"><br /><span style="font-family:courier new;">DEFINE ___XML_LOAD_TEMPPRG PROGRAM INTEGER</span><br /><span style="font-family:courier new;">PROGRAM</span><br /><span style="font-family:courier new;">variable _errortext text</span><br /><span style="font-family:courier new;">trap on HADERROR noprint</span><br /><span style="font-family:courier new;">sql declare c1 cursor for -</span><br /><span style="font-family:courier new;">select SH.VW_PRODUCTS_DIM.PROD_ID, -</span><br /><span style="font-family:courier new;">SH.VW_PRODUCTS_DIM.PROD_DESC, -</span><br /><span style="font-family:courier new;">SH.VW_PRODUCTS_DIM.PROD_DESC, -</span><br /><span style="font-family:courier new;">SH.VW_PRODUCTS_DIM.PROD_PACK_SIZE, -</span><br /><span style="font-family:courier new;">SH.VW_PRODUCTS_DIM.PROD_WEIGHT_CLASS, -</span><br /><span style="font-family:courier new;">SH.VW_PRODUCTS_DIM.PROD_UNIT_OF_MEASURE, -</span><br /><span style="font-family:courier 
new;">SH.VW_PRODUCTS_DIM.SUPPLIER_ID -</span><br /><span style="font-family:courier new;">from SH.VW_PRODUCTS_DIM -</span><br /><span style="font-family:courier new;">where -</span><br /><span style="font-family:courier new;">(SH.VW_PRODUCTS_DIM.PROD_ID IS NOT NULL)</span><br /><span style="font-family:courier new;">sql open c1</span><br /><span style="font-family:courier new;">if sqlcode ne 0</span><br /><span style="font-family:courier new;">then do</span><br /><span style="font-family:courier new;"> _errortext = SQLERRM</span><br /><span style="font-family:courier new;"> goto HADERROR</span><br /><span style="font-family:courier new;"> doend</span><br /><span style="font-family:courier new;">sql import c1 into :MATCHSKIPERR SH_OLAP.SH_AW!PRODUCTS_PRODUCT_SURR -</span><br /><span style="font-family:courier new;">:SH_OLAP.SH_AW!PRODUCTS_LONG_DESCRIPTION(SH_OLAP.SH_AW!ALL_LANGUAGES 'AMERICAN') -</span><br /><span style="font-family:courier new;">:SH_OLAP.SH_AW!PRODUCTS_SHORT_DESCRIPTION(SH_OLAP.SH_AW!ALL_LANGUAGES 'AMERICAN') -</span><br /><span style="font-family:courier new;">:SH_OLAP.SH_AW!PRODUCTS_PACK_SIZE -</span><br /><span style="font-family:courier new;">:SH_OLAP.SH_AW!PRODUCTS_WEIGHT_CLASS -</span><br /><span style="font-family:courier new;">:SH_OLAP.SH_AW!PRODUCTS_UNIT_OF_MEASURE -</span><br /><span style="font-family:courier new;">:SH_OLAP.SH_AW!PRODUCTS_SUPPLIER_ID</span><br /><span style="font-family:courier new;">if sqlcode lt 0</span><br /><span style="font-family:courier new;">then do</span><br /><span style="font-family:courier new;"> _errortext = SQLERRM</span><br /><span style="font-family:courier new;"> goto HADERROR</span><br /><span style="font-family:courier new;"> doend</span><br /><span style="font-family:courier new;">sql close c1</span><br /><span style="font-family:courier new;">sql cleanup</span><br /><span style="font-family:courier new;">return 0</span><br /><span style="font-family:courier new;">HADERROR:</span><br /><span 
style="font-family:courier new;">trap on NOERR1 noprint</span><br /><span style="font-family:courier new;">sql close c1</span><br /><span style="font-family:courier new;">NOERR1:</span><br /><span style="font-family:courier new;">trap off</span><br /><span style="font-family:courier new;">sql cleanup</span><br /><span style="font-family:courier new;">call __xml_handle_error(_errortext)</span><br /><span style="font-family:courier new;">END</span><br /></span><br />Once you have the statement you can use SQLDeveloper’s explain plan feature to determine the execution plan. When dealing with cubes, it is likely the source fact table will be partitioned; therefore, you need to ensure partition elimination is occurring correctly.<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhQ3zqq-ZGOiw1bOmhcMjhlDOAnCf8_pSbUN_Gqx1NO0OLUnR9zcy9GpKDLu9FO3cbT_ehxA9QbjE9K-BZvxYR7OX-FivT6MSimIJX-hnADJmC6mVfY1z6-0x8ez84aV3RETlucWgEs23A/s1600-h/Image10.PNG"><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhQ3zqq-ZGOiw1bOmhcMjhlDOAnCf8_pSbUN_Gqx1NO0OLUnR9zcy9GpKDLu9FO3cbT_ehxA9QbjE9K-BZvxYR7OX-FivT6MSimIJX-hnADJmC6mVfY1z6-0x8ez84aV3RETlucWgEs23A/s400/Image10.PNG" alt="" id="BLOGGER_PHOTO_ID_5191991166912551922" border="0" /></a><br /><br />If additional hints need to be added to make the query more efficient, these can be added to the view definition. This approach does not automatically generate recommendations so you will need to have a solid grasp of SQL tuning to ensure your query is based on the most optimal execution plan.<br /><br /><span style="font-size:85%;"><span style="font-weight: bold;">Step 2 – Managing Sort Resources</span></span><br />Sorting the source data is quite important for both dimensions and facts. 
By default, OLAP sorts dimensions alphabetically in ascending order based on the long description. Therefore, it makes sense for the relational source to provide the data in the required order, especially for the dimension loads.<br /><br />Optimising cube loads requires making sure the sorting is based on the same order as the dimensions are listed within the partitioned composites. This will be the same order as shown on the implementation details tab.<br /><br />OLAP load operations are sort intensive. You may need to increase the sort_area_size setting within the database to help ensure the various sorting operations during a load are performed in memory rather than on disk. The default setting is 262,144. As part of a load process you can increase the amount of sort memory available as follows:<br /><br /><span style="font-size:85%;"><span style="font-family:courier new;">exec DBMS_AW.EXECUTE('SortBufferSize=10485760');</span></span><br /><br />Executing this command before starting a data load will increase the amount of resources allocated to memory sorts, in this case providing approximately 10MB of memory. 
To permanently set the SortBufferSize to 10MB, issue the following commands:<br /><br /><span style="font-size:85%;">exec DBMS_AW.EXECUTE('aw attach my_aw_name rwx');<br />exec DBMS_AW.EXECUTE('SortBufferSize=10485760');<br />exec DBMS_AW.EXECUTE('update');<br />exec DBMS_AW.EXECUTE('commit');<br />exec DBMS_AW.EXECUTE('aw detach my_aw_name');<br /></span><br />Or you can simply set the option before executing the XML job definition:<br /><br /><span style="font-size:85%;"><span style="font-family:courier new;">set serveroutput on</span><br /><span style="font-family:courier new;">exec dbms_aw.shutdown;</span><br /><span style="font-family:courier new;">exec dbms_aw.startup;</span><br /><span style="font-family:courier new;">exec dbms_aw.execute('aw attach SH_AW rw first');</span><br /><span style="font-family:courier new;">exec DBMS_AW.EXECUTE('SortBufferSize=10485760');</span><br /><br /></span><span style="font-style: italic;font-family:courier new;font-size:85%;" >call SQL file to refresh cube</span><span style="font-size:85%;"><br /><br /><span style="font-family:courier new;">exec DBMS_AW.EXECUTE('SortBufferSize=262144');</span><br /><span style="font-family:courier new;">exec dbms_aw.execute('update;commit');</span><br /><span style="font-family:courier new;">exec dbms_aw.execute('aw detach SH_AW');</span><br /><span style="font-family:courier new;">exec dbms_aw.shutdown;</span><br /><span style="font-family:courier new;">exec dbms_session.free_unused_user_memory;</span><br /></span><br />For more information on this subject area refer to the next section on monitoring system resources.<br /><br /><br /><span style="font-weight: bold;">Part 5 - Analysis of the Database</span><br />There are a number of areas that are important when tuning a data load process; the areas outlined in this section are really just going to tweak the performance and may or may not result in significant performance improvements. 
But this area can provide the “icing on the cake” in terms of extracting every last ounce of performance.<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEin33JRugaP6L9ZoLgljx7mCfEyg4qfLsydnwZHs8uwU0aMAnx6jPUfM8Lu71QT_EYBtBMG30PQfEvEx0c36v4XktxoKRSVrkinChj9vtg7vSv61mW_WmlUbzF1-YxJ7j0mp0xwn06JOGg/s1600-h/Slides+for+Keith.006.png"><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEin33JRugaP6L9ZoLgljx7mCfEyg4qfLsydnwZHs8uwU0aMAnx6jPUfM8Lu71QT_EYBtBMG30PQfEvEx0c36v4XktxoKRSVrkinChj9vtg7vSv61mW_WmlUbzF1-YxJ7j0mp0xwn06JOGg/s400/Slides+for+Keith.006.png" alt="" id="BLOGGER_PHOTO_ID_5191991686603594754" border="0" /></a><br /><br /><span style="font-size:85%;"><span style="font-weight: bold;">Step 1a – Relational Storage Settings</span></span><br />Make sure logging is switched off on the tablespace used to store the AW. Since the AW does not support redo, there is no point in generating it. Make sure you have enough space within the tablespace before you start a build. A lot of time can be consumed extending the tablespace if you are not careful.<br /><br />If you are using Data Guard, it will not be possible to switch off redo. The alternative is to increase the redo log size to between 100M and 500M, modify the LOG_BUFFER parameter to 10M (for example) to allow for more efficient index LOB creation, and try to move TEMP, UNDO and the redo logs to the fastest disks.<br /><br /><span style="font-size:85%;"><span style="font-weight: bold;">Step 1b – AW Storage Settings</span></span><br />If the cubes within an AW contain a large number of partitions, then performance can be improved by adding additional physical partitions to AWs. 
The AW should be well modelled and logically partitioned, and then also physically partitioned, as this improves update performance by reducing index LOB contention. For example, if the main data cube contains 36 months of data and is logically partitioned by month in the AW, then the physical partitioning of the AW should match the number of logical partitions. To override the default of eight partitions it is necessary to manually define the AW and set the required number of partitions as shown here:<br /><br /><span style="font-size:85%;"><span style="font-family:courier new;">SQL> exec dbms_aw.execute('aw create scott.product_AW partitions 36');</span></span><br /><br />However, this approach does create some additional complications regarding the creation of standard form metadata. This metadata is required to make the AW visible to AWM and other OLAP aware tools. In the vast majority of cases it will be necessary to create a standard form metadata compliant AW. See Part 3 Analysis of Cube Model, Step 3 – Partitioning for more information.<br /><br /><br /><span style="font-size:85%;"><span style="font-weight: bold;">Step 2 - Temp Storage Settings</span></span><br />Pre-allocating space within the temp tablespace prior to running a build can yield some performance improvements. When pre-allocating space make sure the temp tablespace is not set to auto-extend and the correct (most efficient) uniform extent size is used. The procedure below will pre-allocate TEMP tablespace; alter the loop counter in the for i in 1..n statement as required. This example will pre-allocate approximately 1.5GB of TEMP tablespace. Make sure your default temporary tablespace/group is not set to auto-extend unlimited. 
It should be fixed to the required size.<br /><br /><span style="font-size:85%;"><span style="font-family:courier new;">create or replace procedure preallocate_temp as</span><br /><span style="font-family:courier new;">amount integer := 26;</span><br /><span style="font-family:courier new;">buffer varchar2(26) := 'XXXXXXXXXXXXXXXXXXXXXXXXXX';</span><br /><span style="font-family:courier new;">done boolean := false;</span><br /><span style="font-family:courier new;">out_of_temp exception;</span><br /><span style="font-family:courier new;">position integer := 10240;</span><br /><span style="font-family:courier new;">pragma exception_init(out_of_temp,-01652);</span><br /><span style="font-family:courier new;">tmppre clob;</span><br /><span style="font-family:courier new;">begin</span><br /><span style="font-family:courier new;">dbms_lob.createtemporary(tmppre, true, dbms_lob.session);</span><br /><span style="font-family:courier new;">dbms_lob.open(tmppre, dbms_lob.lob_readwrite);</span><br /><span style="font-family:courier new;">for i in 1..130400</span><br /><span style="font-family:courier new;">loop</span><br /><span style="font-family:courier new;">if (done = true) then</span><br /><span style="font-family:courier new;">dbms_lob.close(tmppre);</span><br /><span style="font-family:courier new;">dbms_lob.freetemporary(tmppre);</span><br /><span style="font-family:courier new;">return;</span><br /><span style="font-family:courier new;">end if;</span><br /><span style="font-family:courier new;">begin</span><br /><span style="font-family:courier new;">dbms_lob.write(tmppre, amount, position, buffer);</span><br /><span style="font-family:courier new;">exception when out_of_temp then done := true;</span><br /><span style="font-family:courier new;">end;</span><br /><span style="font-family:courier new;">position := position + amount + 10240;</span><br /><span style="font-family:courier new;">end loop;</span><br /><span style="font-family:courier new;">dbms_lob.close(tmppre);</span><br /><span style="font-family:courier new;">dbms_lob.freetemporary(tmppre);</span><br /><span style="font-family:courier new;">end;</span><br /><span style="font-family:courier new;">/</span><br /><br /><span style="font-family:courier new;">conn prealltemp/oracle</span><br /><span style="font-family:courier new;">exec preallocate_temp;</span><br /><span style="font-family:courier new;">disc;</span><br /></span><br /><br /><span style="font-size:85%;"><span style="font-weight: bold;">Step 3 - ADDM Report</span></span><br />ADDM (Automatic Database Diagnostic Monitor) is a self-diagnostic engine built into the Oracle Database kernel, which automatically detects and diagnoses common performance problems, including:<br /><ul><li>Hardware issues related to excessive I/O</li><li>CPU bottlenecks</li><li>Connection management issues</li><li>Excessive parsing</li><li>Concurrency issues, such as contention for locks</li><li>PGA, buffer-cache, and log-buffer-sizing issues</li><li>Issues specific to Oracle Real Application Clusters (RAC) deployments, such as global cache hot blocks and objects and interconnect latency issues</li></ul>An ADDM analysis is performed after each AWR snapshot (every hour by default). The results are saved in the database, which can then be viewed using either Oracle Enterprise Manager or SQL*Plus. For tuning OLAP data loads, ADDM is always a good place to start. In addition to diagnosing performance problems, ADDM recommends possible solutions. 
When appropriate, ADDM recommends multiple solutions, which can include:<br /><br /><ul><li>Hardware changes</li><ul><li>Adding CPUs or changing the I/O subsystem configuration</li></ul><li>Database configuration</li><ul><li>Changing initialization parameter settings</li></ul><li>Schema changes</li><ul><li>Hash partitioning a table or index, or using automatic segment-space management (ASSM)</li></ul><li>Application changes</li><ul><li>Using the cache option for sequences or using bind variables</li></ul><li>Using other advisors</li><ul><li>Running the SQL Tuning Advisor on high-load SQL statements or running the Segment Advisor on hot objects</li></ul></ul><br />ADDM's benefits apply beyond production systems; even on development and test systems, ADDM can provide an early warning of potential performance problems. Typically the results from an ADDM snapshot are viewed via various interactive pages within Enterprise Manager, as shown below:<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEioQfdcDWUI9LnJKhv5wP4MBZ_9y1AyEIShF1OHeHAMsIp_Ma7LE2bqwAFqUm5W-6PoS8ElLhLVdSe_kXIhjKjYSxyPXh2dF6hsBFHM3JXLC9NWMB7zwmN1h7kel3pBBN2_Np6cI9X2bfw/s1600-h/Image7.PNG"><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEioQfdcDWUI9LnJKhv5wP4MBZ_9y1AyEIShF1OHeHAMsIp_Ma7LE2bqwAFqUm5W-6PoS8ElLhLVdSe_kXIhjKjYSxyPXh2dF6hsBFHM3JXLC9NWMB7zwmN1h7kel3pBBN2_Np6cI9X2bfw/s400/Image7.PNG" alt="" id="BLOGGER_PHOTO_ID_5191992128985226258" border="0" /></a><br /><br />Alternatively you can access ADDM reports using the SQL*Plus command line by calling the new DBMS_ADVISOR built-in package. 
For example, here's how to use the command line to create an ADDM report quickly (based on the most recent snapshot):<br /><br /><span style="font-size:85%;"><span style="font-family:courier new;">set long 1000000</span><br /><span style="font-family:courier new;">set pagesize 50000</span><br /><span style="font-family:courier new;">column get_clob format a80</span><br /><span style="font-family:courier new;">select dbms_advisor.get_task_report(</span><br /><span style="font-family:courier new;">task_name, 'TEXT', 'ALL') </span><br /><span style="font-family:courier new;">as ADDM_report</span><br /><span style="font-family:courier new;">from dba_advisor_tasks</span><br /><span style="font-family:courier new;"> where task_id=(</span><br /><span style="font-family:courier new;"> select max(t.task_id)</span><br /><span style="font-family:courier new;"> from dba_advisor_tasks t, dba_advisor_log l</span><br /><span style="font-family:courier new;"> where t.task_id = l.task_id</span><br /><span style="font-family:courier new;"> and t.advisor_name='ADDM'</span><br /><span style="font-family:courier new;"> and l.status= 'COMPLETED');</span><br /></span><br />The ‘ALL’ parameter generates additional information about the meaning of some of the elements in the report. The most interesting section of the report relates to the "Findings" for each issue. 
This outlines the impact of the identified problem as a percentage of DB time, which correlates with the expected benefit, based on the assumption the problem described by the finding will be solved if the recommended action is taken.<br /><br />In the example below the recommendation is to adjust the sga_target value in the parameter file:<br /><br /><span style="font-size:78%;"><span style="font-family:courier new;">FINDING 3: 5.2% impact (147 seconds)</span><br /><span style="font-family:courier new;">---------------------------------------</span><br /><span style="font-family:courier new;">The buffer cache was undersized causing significant additional read I/O.</span><br /><span style="font-family:courier new;">RECOMMENDATION 1: DB Configuration, 5.2% benefit (147 seconds)</span><br /><span style="font-family:courier new;">ACTION: Increase SGA target size by increasing the value of parameter "sga_target" by 24 M.</span><br /><span style="font-family:courier new;">SYMPTOMS THAT LED TO THE FINDING:</span><br /><span style="font-family:courier new;">Wait class "User I/O" was consuming significant database time. 
(5.3% impact [150 seconds])</span><br /><span style="font-family:courier new;">...</span></span><br /><br />For more information on this feature, refer to the Oracle® Database 2 Day + Performance Tuning Guide 10g Release 2 (10.2).<br /><br />For the HTML version, click <a href="http://www.oracle.com/pls/db102/to_toc?pathname=server.102%2Fb14211%2Ftoc.htm&remark=portal+%28Getting+Started%29">here</a> for 10gR2 and <a href="http://www.oracle.com/pls/db111/to_toc?pathname=server.111/b28274/toc.htm">here</a> for 11g.<br />For the PDF version, click <a href="http://www.oracle.com/pls/db102/to_pdf?pathname=server.102%2Fb14211.pdf&remark=portal+%28Getting+Started%29">here</a> for 10gR2 and <a href="http://www.oracle.com/pls/db111/to_pdf?pathname=server.111/b28274.pdf">here</a> for 11g.<br /><br /><br /><span style="font-size:85%;"><span style="font-weight: bold;">Step 4 - Dynamic Performance Views</span></span><br />Each Oracle database instance maintains a set of virtual tables that record current database activity and store data about the instance. These tables are called the V$ tables. They are also referred to as the dynamic performance tables, because they store information relating to the operation of the instance. Views of the V$ tables are sometimes called fixed views because they cannot be altered or removed by the database administrator. The V$ tables collect data on internal disk structures and memory structures. They are continuously updated while the database is in use. The SYS user owns the V$ tables. In addition, any user granted SELECT_CATALOG_ROLE can access the tables. The system creates views from these tables and creates public synonyms for the views. The views are also owned by SYS, but the DBA can grant access to them to a wider range of users.<br /><br />Among these are tables that collect data on OLAP operations. 
The names of the OLAP V$ views begin with V$AW:<br /><ul><li>V$AW_AGGREGATE_OP</li><ul><li>Lists the aggregation operators available in the OLAP DML.</li></ul><li>V$AW_ALLOCATE_OP</li><ul><li>Lists the allocation operators available in the OLAP DML.</li></ul><li>V$AW_CALC</li><ul><li>Collects information about the use of cache space and the status of dynamic aggregation.</li></ul><li>V$AW_LONGOPS</li><ul><li>Collects status information about SQL fetches.</li></ul><li>V$AW_OLAP</li><ul><li>Collects information about the status of active analytic workspaces.</li></ul><li>V$AW_SESSION_INFO</li><ul><li>Collects information about each active session.</li></ul></ul>For tuning the two most important views from this list are:<br /><br /><span style="font-style: italic;">V$AW_CALC</span><br />This reports on the effectiveness of various caches used by Oracle OLAP and the status of processing by the AGGREGATE function. Oracle OLAP uses the following caches:<br /><br /><ul><li>Aggregate cache: An internal cache used by the aggregation subsystem during querying. It stores the children of a given dimension member, such as Q1-04, Q2-04, Q3-04, and Q4-04 as the children of 2004.</li><li>Session cache: Oracle OLAP maintains a cache for each session for storing the results of calculations. When the session ends, the contents of the cache are discarded.</li><li>Page pool: A cache allocated from the User Global Area (UGA), which Oracle OLAP maintains for the session. The page pool is associated with a particular session and caches records from all the analytic workspaces attached in that session. If the page pool becomes too full, then Oracle OLAP writes some of the pages to the database cache. When an UPDATE command is issued in the OLAP DML, the changed pages associated with that analytic workspace are written to the permanent LOB, using temporary segments as the staging area for streaming the data to disk. 
The size of the page pool is controlled by the OLAP_PAGE_POOL_SIZE initialization parameter.</li><li>Database cache: The larger cache maintained by the Oracle RDBMS for the database instance.</li></ul><br />Because OLAP queries tend to be iterative, the same data is typically queried repeatedly during a session. The caches provide much faster access to data that has already been calculated during a session than would be possible if the data had to be recalculated for each query.<br /><br />The more effective the caches are, the better the response time experienced by users. An ineffective cache (that is, one with few hits and many misses) probably indicates that the data is not being stored optimally for the way it is being viewed. To improve runtime performance, you may need to reorder the dimensions of the variables (that is, change the order of fastest to slowest varying dimensions).<br /><br /><br /><span style="font-style: italic;">V$AW_LONGOPS</span><br />This view will identify the OLAP DML command (SQL IMPORT, SQL FETCH, or SQL EXECUTE) that is actively fetching data from relational tables. The view will state the current operation based on one of the following values:<br /><ul><li>EXECUTING. The command has begun executing.</li><li>FETCHING. Data is being fetched into the analytic workspace.</li><li>FINISHED. The command has finished executing. 
This status appears very briefly before the record disappears from the table.</li></ul>Other information returned includes: the number of rows already inserted, updated, or deleted and the time the command started executing.<br />For more information refer to the Oracle OLAP Option User's Guide, Section 7 Administering Oracle OLAP – <a href="http://download.oracle.com/docs/cd/B28359_01/olap.111/b28124/admin.htm#sthref479">Dynamic Performance Views</a>.<br /><br /><br /><span style="font-size:85%;"><span style="font-weight: bold;">Step 5 - Init.Ora Parameters</span><br /></span>Checking that the RDBMS parameters are appropriately tuned for your OLAP environment is relatively easy. Fortunately in 10g the majority of init.ora parameters are managed dynamically; however, a few parameters may need to be changed:<br /><br /><span style="font-style: italic;">SORTBUFFERSIZE</span><br />This should be increased since OLAP AWs use this parameter instead of SORT_AREA_SIZE. To increase it for an AW, do the following:<br /><br /><span style="font-size:85%;"><span style="font-family:courier new;">exec DBMS_AW.EXECUTE('aw attach SCOTT.MYAW rwx');</span><br /><span style="font-family:courier new;">exec DBMS_AW.EXECUTE('shw sortbuffersize');</span><br /><span style="font-family:courier new;">262,144</span><br /><span style="font-family:courier new;">exec DBMS_AW.EXECUTE('SortBufferSize=10485760');</span><br /><span style="font-family:courier new;">exec DBMS_AW.EXECUTE('shw sortbuffersize');</span><br /><span style="font-family:courier new;">10,485,760</span><br /><span style="font-family:courier new;">exec DBMS_AW.EXECUTE('aw detach SCOTT.MYAW');</span><br /></span><br /><span style="font-style: italic;">OLAP_PAGE_POOL_SIZE</span><br />This should be set to 0 or unset so that auto dynamic page pool is on and is managed by the database (will be set to 50% of PGA size). 
However, if you have over 8Gb of memory available then you should set the parameter manually; a good value for data loading is 256MB, and for multiple users querying concurrently, 64MB.Keith Lakerhttp://www.blogger.com/profile/01039869313455611230noreply@blogger.com5tag:blogger.com,1999:blog-3820031471524503731.post-29183148074305080842008-03-27T07:30:00.000-07:002008-12-11T15:25:29.775-08:00Creating A Calculated Measure Cube - IOGALFF<p style="font-style: italic;" class="MsoNormal"><span lang="EN-GB">(Firstly, this information relates only to OLAP 10gR2.<span style=""> </span>There are a number of changes within OLAP11gR1 and the following scenario has not been tested with 11g)</span></p> When you venture into most supermarkets today, somewhere on one of the many aisles there will be a BOGOF – buy one get one free. Well how about an OLAP offer: “IOGALFF” – install one get at least fourteen free. I admit it is not quite as catchy as the original, but the benefits are huge and it has the result of making life so much easier.<br /><br />One of the more interesting challenges of using Analytic Workspace Manager (and also OWB) relates to managing all the calculated measures within a cube. Every measure should really be accompanied by a standard set of calculated measures such as:<br /><br /><ul><li>Current Period</li><li>Last Year</li><li>Last Year %</li><li>Prior Period</li><li>Prior Period %</li><li>Year to Date</li><li>Year to Date Last Year</li><li>Year to Date Last Year %</li><li>Quarter to Date</li><li>Quarter to Date Last Year</li><li>Quarter to Date Last Year %</li><li>3-Month Moving Average</li><li>6-Month Moving Average</li><li>12-Month Moving Average</li><li>Dimension “A” Share of All Members</li><li>Dimension “A” Share of Parent</li></ul><br /><br />It is the addition of these types of measures that adds real value to your BI application. 
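The bookkeeping this list implies can be sketched as a simple tally: 14 time-series calculations per base measure, plus 2 share calculations per non-time dimension per base measure (a quick illustrative helper of my own, not from the original post):<br /><br />

```python
# Each base measure needs 14 standard time-series calculations, plus
# 2 share calculations (share of all members, share of parent) for
# every non-time dimension of the cube.
def calculated_measure_count(base_measures, non_time_dimensions):
    return base_measures * 14 + base_measures * non_time_dimensions * 2

# e.g. 5 base measures and 3 non-time dimensions
print(calculated_measure_count(5, 3))  # 100
```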
As I have stated many times before: business users are not interested in looking at data from base measures, typically they are more interested in trends based on the prior period or prior year, share of revenue, moving averages and so on. This adds a lot of work for the cube designer as they need to add at least 14 calculated measures for each base measure as well as two additional calculated share measures for each dimension associated with each cube.<br /><br />That adds up to a lot of measures, and manually creating them all is likely to take a considerable amount of time. If we take the command schema sample that is shipped with BI10g, this contains 5 base measures (costs, quantity, revenue, margin, price) with 4 dimensions (channel, geography, product, time), which translates to<br /><br /><span style="font-weight: bold;font-size:85%;" ><span style="font-family:courier new;"> 5 * 14 calculated measures</span><br /><span style="font-family:courier new;"> + </span><br /><span style="font-family:courier new;"> 5 * 3 (dimensions - channel, geography, product) * 2 calculated share measures</span><br /><br /><span style="font-family:courier new;"> = 100 calculated measures.</span><br /><br /></span>As I said, it is very easy to generate a lot of calculated measures.<br /><br /><br />So is there a more intelligent way of managing calculated measures? What is the quickest way to add all these additional calculations to a cube? One way is to use the Excel Accelerator to add the calculated measures to your AW. It is relatively easy to use and allows you to use normal Excel tools such as cut & paste to quickly create all the calculations you could ever need. You can download the Excel Accelerator from the OLAP OTN Home Page, or by clicking <a href="http://www.oracle.com/technology/products/bi/olap/index.html">here</a>. 
Look for these links:<br /><br /><a href="http://download.oracle.com/otn/java/olap/SpreadsheetCalcs_10203.zip">Creating OLAP Calculations using Excel</a><br /><a href="http://www.oracle.com/technology/products/bi/olap/OLAP_SpreadsheetCalcs.html">Creating OLAP Calculations using Excel Readme</a><br /><br />During a recent POC, we used a more interesting approach that takes the use of custom calculations to a new level. In many ways this goes back to the various techniques we used when creating Express databases for use with Sales Analyzer to make life easier for both administrators and users. The approach is quite simple – use an additional dimension to define the type of calculation you want to execute and link this to a DML program to return the required results. The calculated measures listed above are converted into dimension members, which provides another way to slice, dice and pivot the data. As you can see here, the dimension member that identifies the type of calculation is in the row edge with the measure in the column edge. 
In this way, only one calculation is required which is used across all the available measures.<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj2XQH0z5nEcFGNUZDSOOnMtoIvg0HxhYkDL9M7dtxvbw1-nqviIov5cQuzGT9LQ-f6HB2bzgriQ1DmiiAUByD1DspH1YA8ZeR6aK9mWzfGSyZWt58GlL4gipD3Yt9UlhUZdBRLZ1vEhV8/s1600-h/moz-screenshot-17.jpg"><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj2XQH0z5nEcFGNUZDSOOnMtoIvg0HxhYkDL9M7dtxvbw1-nqviIov5cQuzGT9LQ-f6HB2bzgriQ1DmiiAUByD1DspH1YA8ZeR6aK9mWzfGSyZWt58GlL4gipD3Yt9UlhUZdBRLZ1vEhV8/s400/moz-screenshot-17.jpg" alt="" id="BLOGGER_PHOTO_ID_5182076332047329618" border="0" /></a><br /><br />What this means is: by installing one simple (well, everything is relative) calculated measure you can get up to fourteen additional calculations for free. Hence the acronym “IOGALFF” – install one get at least fourteen free.<br /><br />How is all this managed? There are five basic steps to creating this type of reporting solution:<br /><ol><li>Create a new dimension to control the types of calculations returned</li><li>Add some additional attributes to the time dimension to help manage some of the time series arguments for specific calculations</li><li>Create an OLAP DML program to return the data</li><li>Create a new cube that includes the new calculation type dimension and add a calculated measure that calls the OLAP DML program to return the required data<br /></li><li>Create a SQL View over the cube</li></ol><br /><span style="font-weight: bold;">Step 1 – Creating the new Time View dimension</span><br /><br />I have called the dimension containing the list of calculated members, Time View (mainly because the majority of the calculations are time based, but maybe a better name would be "<span style="font-style: italic;">measure view</span>"?). 
The Time View dimension has one level and one hierarchy with three attributes:<br /><ul><li>Long Label</li><li>Short Label</li><li>Default sort order</li></ul><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiksgR-dODUFOAvevAR9hS4H8zLbA-vC3iuWNgDGZPioupVeMzFXpxl4cHbRlohc4qOszE7Sj1fCgenxa7lRwUzZz4TTWVqg107GEo1BYOGzbKx9c2UzurcXNuy-i-RuhA72G2CUyyEoGY/s1600-h/Image1.JPG"><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiksgR-dODUFOAvevAR9hS4H8zLbA-vC3iuWNgDGZPioupVeMzFXpxl4cHbRlohc4qOszE7Sj1fCgenxa7lRwUzZz4TTWVqg107GEo1BYOGzbKx9c2UzurcXNuy-i-RuhA72G2CUyyEoGY/s400/Image1.JPG" alt="" id="BLOGGER_PHOTO_ID_5182076164543605058" border="0" /></a><br /><br />The source data looks like this:<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhnLFUBNaPqwB1-iGWBEhU8XMIhP_bCYf751nIv5bsaZbrHK-pBUbRDqlK0eTPIj3iv-h6SS6uO5Uj4YoXusblbAWGFNkuJEBXhehfKpQ2CBqMpfeZYIzyehEoILv_mTGTVwXEJrYxsvS0/s1600-h/Image2.JPG"><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhnLFUBNaPqwB1-iGWBEhU8XMIhP_bCYf751nIv5bsaZbrHK-pBUbRDqlK0eTPIj3iv-h6SS6uO5Uj4YoXusblbAWGFNkuJEBXhehfKpQ2CBqMpfeZYIzyehEoILv_mTGTVwXEJrYxsvS0/s400/Image2.JPG" alt="" id="BLOGGER_PHOTO_ID_5182076813083666786" border="0" /></a><br /><br />With the source table definition as follows:<br /><span style="font-size:85%;"><br /><span style="font-family:courier new;">CREATE TABLE TIME_VIEW </span><br /><span style="font-family:courier new;"> (TIME_VIEW_ID VARCHAR2(2 BYTE), </span><br /><span style="font-family:courier new;"> TIME_VIEW_DESC VARCHAR2(35 BYTE), </span><br /><span style="font-family:courier new;"> SORT_ORDER NUMBER(*,0)</span><span 
style="font-family:courier new;">)</span><span style="font-family:courier new;">;</span><br /></span><br />And the view definition as follows:<br /><span style="font-size:85%;"><br /><span style="font-family:courier new;">CREATE OR REPLACE FORCE VIEW VW_TIME_VIEW AS </span><br /><span style="font-family:courier new;">SELECT<br />TIME_VIEW_ID<br />, TIME_VIEW_DESC<br />, SORT_ORDER<br />FROM TIME_VIEW<br />ORDER BY SORT_ORDER;</span><br /></span><br />To populate the base table the following commands are used:<br /><span style="font-size:85%;"><br /><span style="font-family:courier new;">INSERT INTO TIME_VIEW VALUES ('1', 'Current Period', '1');</span><br /><span style="font-family:courier new;">INSERT INTO TIME_VIEW VALUES ('2', 'Last Year', '2');</span><br /><span style="font-family:courier new;">INSERT INTO TIME_VIEW VALUES ('3', 'Last Year %', '3');</span><br /><span style="font-family:courier new;">INSERT INTO TIME_VIEW VALUES ('4', 'Prior Period', '4');</span><br /><span style="font-family:courier new;">INSERT INTO TIME_VIEW VALUES ('5', 'Prior Period %', '5');</span><br /><span style="font-family:courier new;">INSERT INTO TIME_VIEW VALUES ('6', 'Year To Date', '6');</span><br /><span style="font-family:courier new;">INSERT INTO TIME_VIEW VALUES ('7', 'Year To Date Last Year', '7');</span><br /><span style="font-family:courier new;">INSERT INTO TIME_VIEW VALUES ('8', 'Year To Date Last Year %', '8');</span><br /><span style="font-family:courier new;">INSERT INTO TIME_VIEW VALUES ('9', 'Quarter To Date', '9');</span><br /><span style="font-family:courier new;">INSERT INTO TIME_VIEW VALUES ('10', 'Quarter To Date Last Year', '10');</span><br /><span style="font-family:courier new;">INSERT INTO TIME_VIEW VALUES ('11', 'Quarter To Date Last Year %', '11');</span><br /><span style="font-family:courier new;">INSERT INTO TIME_VIEW VALUES ('12', '3-Month Moving Average', '12');</span><br /><span style="font-family:courier new;">INSERT INTO TIME_VIEW VALUES ('13', '6-Month Moving Average', '13');</span><br /><span style="font-family:courier new;">INSERT INTO TIME_VIEW VALUES ('14', '12-Month Moving Average', '14');</span></span><br /><br />After this, the next set of values depends on the dimensions within your data model. For my model, I have three additional dimensions that I want to analyze: Channel, Product and Customer. For each of these dimensions I want to see the % share of each member in relation to the value for all members (i.e. the top level) and the share based on the parent value. To enable these calculations I add the following lines to the TIME_VIEW table:<br /><br /><span style="font-size:85%;"><span style="font-family:courier new;">INSERT INTO TIME_VIEW VALUES ('15', 'Product Share of All Products', '15');</span><br /><span style="font-family:courier new;">INSERT INTO TIME_VIEW VALUES ('16', 'Product Share of Parent', '16');</span><br /><span style="font-family:courier new;">INSERT INTO TIME_VIEW VALUES ('17', 'Channel Share of All Channels', '17');</span><br /><span style="font-family:courier new;">INSERT INTO TIME_VIEW VALUES ('18', 'Channel Share of Parent', '18');</span><br /><span style="font-family:courier new;">INSERT INTO TIME_VIEW VALUES ('19', 'Customer Share of All Customers', '19');</span><br /><span style="font-family:courier new;">INSERT INTO TIME_VIEW VALUES ('20', 'Customer Share of Parent', '20');</span></span><br /><br />(Obviously, these additional share calculations are not really related to time, so using the name Time View is sort of confusing and in hindsight the term <span style="font-style: italic;">Measure View</span> would have been a better name for the dimension). 
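The share rows follow a simple per-dimension pattern, so rather than typing each INSERT by hand they can be generated and spooled to a script. A minimal sketch (the generate-and-spool idea is my own suggestion, not from the original post; IDs continue from 15 as above):<br /><br />

```python
# Generate the two share rows (share of all members, share of parent)
# for each non-time dimension of the cube, continuing the IDs from 15.
def share_rows(dimensions, start_id=15):
    rows = []
    next_id = start_id
    for dim in dimensions:
        for desc in (dim + " Share of All " + dim + "s",
                     dim + " Share of Parent"):
            rows.append("INSERT INTO TIME_VIEW VALUES ('%d', '%s', '%d');"
                        % (next_id, desc, next_id))
            next_id += 1
    return rows

for stmt in share_rows(["Product", "Channel", "Customer"]):
    print(stmt)
```

Spooling this output to a .sql file and running it in SQL*Plus produces exactly the six INSERT statements shown above.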
Once the table has been defined, it can be mapped in AWM to create the dimension mapping (in this case I am using a view over the source table).<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjbuSxTmle0tk3J45KfZXWfhQkAIPbdlupmddRB-JGNKbYT5zCqPSpJeJgJUJwgCNpxTn2shTZgVfLSgiSdkb2njNUsyAV_SZzPBawVnCxpHgyO-gSqjukh8Uc95JFLFes1eQ2c_ZNil-0/s1600-h/Image3.JPG"><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjbuSxTmle0tk3J45KfZXWfhQkAIPbdlupmddRB-JGNKbYT5zCqPSpJeJgJUJwgCNpxTn2shTZgVfLSgiSdkb2njNUsyAV_SZzPBawVnCxpHgyO-gSqjukh8Uc95JFLFes1eQ2c_ZNil-0/s400/Image3.JPG" alt="" id="BLOGGER_PHOTO_ID_5182076971997456754" border="0" /></a><br /><br />The last part of this step is to then load the members into the dimension using the dimension data load wizard or via the SQL command line using a load script. The result should look like this when you view the members using the Dimension Data Viewer:<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj9rNh-7G-hwr0SDFtbOKqlDaR57Z4TQqTyhWdiHqyr37N3ZzV88dm_SDM64zH2aX40XR1TBpeqPlNvr_1SJ7lERt3f3JT_9OxNpYibMqAo7N4jHUPsi5bMwvYtqcaBPgvlRbTdyJD4sDk/s1600-h/Image3a.JPG"><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj9rNh-7G-hwr0SDFtbOKqlDaR57Z4TQqTyhWdiHqyr37N3ZzV88dm_SDM64zH2aX40XR1TBpeqPlNvr_1SJ7lERt3f3JT_9OxNpYibMqAo7N4jHUPsi5bMwvYtqcaBPgvlRbTdyJD4sDk/s400/Image3a.JPG" alt="" id="BLOGGER_PHOTO_ID_5182077113731377538" border="0" /></a><br /><br />As can be seen here, there are fourteen base calculations that can be applied to any model, plus the additional share calculations that are based on the dimensions within the source cube. 
Hence the title “IOGALFF” – install one get fourteen free. In reality it is more like: install one and as many calculations as you like or need, but that translates to “IOGASMAYNORL” which does not really roll off the tongue.<br /><br /><span style="font-weight: bold;">Step 2 – Updating the Time Dimension with new Attributes.</span><br /><br />To make the program that delivers all these calculated measures as simple as possible, a number of attributes are added to the time dimension:<br /><ul><li>Lag Prior Year</li><ul><li>At the Year level this equates to 1</li></ul><ul><li>At the Quarter level this equates to 4</li></ul><ul><li>At the Month level this equates to 12</li></ul><ul><li>At the Day level it is the number of days in the year (365 or 366)</li></ul><li>Parent Quarter</li><ul><li>This is the quarter for each time period and is used as the reset trigger for the cumulative totals</li></ul><li>Parent Year</li><ul><li>This is the year for each time period and is used as the reset trigger for the cumulative totals</li></ul></ul><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj6AuUNjbh57uy9RMUR_Ao4oYE4585ASCNMTkhH2JKarss3tY5oza4sgR662GRAYe89wcXVP2HnJe-74WaD6w99EVomS8A6I6oHDHV6zFGyGp9xncsF_9sg4l30yEqQFP_fLItLS8TxJYw/s1600-h/Image4.JPG"><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj6AuUNjbh57uy9RMUR_Ao4oYE4585ASCNMTkhH2JKarss3tY5oza4sgR662GRAYe89wcXVP2HnJe-74WaD6w99EVomS8A6I6oHDHV6zFGyGp9xncsF_9sg4l30yEqQFP_fLItLS8TxJYw/s400/Image4.JPG" alt="" id="BLOGGER_PHOTO_ID_5182078101573855634" border="0" /></a><br /><br />The information for the two parent attributes is taken from existing columns within the source data, and the Lag Prior Year is a simple hard-coded value apart from the day level, where the timespan value for the year is used to 
cope with leap years. Therefore, all the information for these three attributes should be readily available within your source data, especially if you are using the OWB Time Wizard to create your time dimension. Obviously using a view over the source table for the mapping within AWM makes adding this information relatively trivial.<br /><br /><span style="font-weight: bold;">Step 3 – Create the OLAP DML Calculation Program.</span><br /><br />The new OLAP DML program, called TIME_VIEW_PRG (again feel free to change the name of the program if you want), takes two arguments:<br /><ul><li>The cube.measure_name for the base measure, in this case SALES_M1, which is the Revenue measure in the SALES cube.</li><li>The identifier for the calculated measure within the REPORT_CUBE cube, which in this case is M1_PRG (this is explained in more detail in the next section)<br /></li></ul>The program code is as follows; those of you familiar with OLAP DML will find it relatively easy to understand:<br /><br /><span style="font-size:85%;"><span style="font-family:courier new;">DEFINE TIME_VIEW_PRG PROGRAM DECIMAL</span><br /><span style="font-family:courier new;">argument T_MEASURE text " Full measure name including cube name</span><br /><span style="font-family:courier new;">argument T_MEASURE_ID text " Measure name</span><br /><br /><span style="font-family:courier new;">variable T_FORM text " Name of the formula</span><br /><span style="font-family:courier new;">variable T_TIME_VIEW text " Value of TIME_VIEW dimension</span><br /><span style="font-family:courier new;">variable D_RETURN decimal " Return value</span><br /><br /><span style="font-family:courier new;">trap on ALLDONE noprint</span><br /><br /><span style="font-family:courier new;">T_TIME_VIEW = TIME_VIEW</span><br /><span style="font-family:courier new;">T_FORM = joinchars('REPORT_CUBE_', T_MEASURE_ID)</span><br /><br /><span style="font-family:courier new;">switch T_TIME_VIEW</span><br /><span 
style="font-family:courier new;">do</span><br /><span style="font-family:courier new;"> case 'VL1_1':</span><br /><span style="font-family:courier new;"> D_RETURN = &T_MEASURE</span><br /><span style="font-family:courier new;"> break</span><br /><span style="font-family:courier new;"> case 'VL1_2': "LY</span><br /><span style="font-family:courier new;"> D_RETURN = lag(&T_FORM(TIME_VIEW 'VL1_1'), TIMES_LAG_PRIOR_YEAR, TIMES, LEVELREL TIMES_LEVELREL)</span><br /><span style="font-family:courier new;"> break</span><br /><span style="font-family:courier new;"> case 'VL1_3': "LY%</span><br /><span style="font-family:courier new;"> D_RETURN = lagpct(&T_FORM(TIME_VIEW 'VL1_1'), TIMES_LAG_PRIOR_YEAR, TIMES, LEVELREL TIMES_LEVELREL)</span><br /><span style="font-family:courier new;"> break</span><br /><span style="font-family:courier new;"> case 'VL1_4': "PP</span><br /><span style="font-family:courier new;"> D_RETURN = lag(&T_FORM(TIME_VIEW 'VL1_1'), 1, TIMES, LEVELREL TIMES_LEVELREL)</span><br /><span style="font-family:courier new;"> break</span><br /><span style="font-family:courier new;"> case 'VL1_5': "PP%</span><br /><span style="font-family:courier new;"> D_RETURN = lagpct(&T_FORM(TIME_VIEW 'VL1_1'), 1, TIMES, LEVELREL TIMES_LEVELREL)</span><br /><span style="font-family:courier new;"> break</span><br /><span style="font-family:courier new;"> case 'VL1_6': "YTD</span><br /><span style="font-family:courier new;"> if TIMES_LEVELREL EQ 'YEAR'</span><br /><span style="font-family:courier new;"> then D_RETURN = &T_FORM(TIME_VIEW 'VL1_1')</span><br /><span style="font-family:courier new;"> else D_RETURN = cumsum(&T_FORM(TIME_VIEW 'VL1_1'), TIMES, TIMES_PARENT_YEAR)</span><br /><span style="font-family:courier new;"> break</span><br /><span style="font-family:courier new;"> case 'VL1_7': "YTD LY</span><br /><span style="font-family:courier new;"> if TIMES_LEVELREL EQ 'YEAR'</span><br /><span style="font-family:courier new;"> then D_RETURN = lag(&T_FORM(TIME_VIEW 'VL1_1'), 
TIMES_LAG_PRIOR_YEAR, TIMES, LEVELREL TIMES_LEVELREL)</span><br /><span style="font-family:courier new;"> else D_RETURN = cumsum(lag(&T_FORM(TIME_VIEW 'VL1_1'), TIMES_LAG_PRIOR_YEAR, TIMES, LEVELREL TIMES_LEVELREL), TIMES, TIMES_PARENT_YEAR)</span><br /><span style="font-family:courier new;"> break</span><br /><span style="font-family:courier new;"> case 'VL1_8': "YTD LY%</span><br /><span style="font-family:courier new;"> D_RETURN = (&T_FORM(TIME_VIEW 'VL1_6')-cumsum(lag(&T_FORM(TIME_VIEW 'VL1_1'), TIMES_LAG_PRIOR_YEAR, TIMES, LEVELREL TIMES_LEVELREL), TIMES, TIMES_PARENT_YEAR))/cumsum(lag(&T_FORM(TIME_VIEW 'VL1_1'), TIMES_LAG_PRIOR_YEAR, TIMES, LEVELREL TIMES_LEVELREL), TIMES, TIMES_PARENT_YEAR)</span><br /><span style="font-family:courier new;"> break</span><br /><span style="font-family:courier new;"> case 'VL1_9': "QTD</span><br /><span style="font-family:courier new;"> if TIMES_LEVELREL EQ 'YEAR'</span><br /><span style="font-family:courier new;"> then D_RETURN = NA</span><br /><span style="font-family:courier new;"> else if TIMES_LEVELREL EQ 'QUARTER'</span><br /><span style="font-family:courier new;"> then D_RETURN = &T_FORM(TIME_VIEW 'VL1_1')</span><br /><span style="font-family:courier new;"> else if TIMES_LEVELREL EQ 'MONTH'</span><br /><span style="font-family:courier new;"> then D_RETURN = cumsum(&T_FORM(TIME_VIEW 'VL1_1'), TIMES, TIMES_PARENT_QUARTER)</span><br /><span style="font-family:courier new;"> else if TIMES_LEVELREL EQ 'DAY'</span><br /><span style="font-family:courier new;"> then D_RETURN = cumsum(&T_FORM(TIME_VIEW 'VL1_1'), TIMES, TIMES_PARENT_QUARTER)</span><br /><span style="font-family:courier new;"> else D_RETURN = NA</span><br /><span style="font-family:courier new;"> break</span><br /><span style="font-family:courier new;"> case 'VL1_10': "QTD LY</span><br /><span style="font-family:courier new;"> if TIMES_LEVELREL EQ 'YEAR'</span><br /><span style="font-family:courier new;"> then D_RETURN = NA</span><br /><span 
style="font-family:courier new;">else if TIMES_LEVELREL EQ 'QUARTER'</span><br /><span style="font-family:courier new;"> then D_RETURN = cumsum(lag(&T_FORM(TIME_VIEW 'VL1_1'), TIMES_LAG_PRIOR_YEAR, TIMES, LEVELREL TIMES_LEVELREL), TIMES, TIMES_PARENT_QUARTER)</span><br /><span style="font-family:courier new;"> else if TIMES_LEVELREL EQ 'MONTH'</span><br /><span style="font-family:courier new;"> then D_RETURN = cumsum(lag(&T_FORM(TIME_VIEW 'VL1_1'), TIMES_LAG_PRIOR_YEAR, TIMES, LEVELREL TIMES_LEVELREL), TIMES, TIMES_PARENT_QUARTER)</span><br /><span style="font-family:courier new;"> else if TIMES_LEVELREL EQ 'DAY'</span><br /><span style="font-family:courier new;"> then D_RETURN = cumsum(lag(&T_FORM(TIME_VIEW 'VL1_1'), TIMES_LAG_PRIOR_YEAR, TIMES, LEVELREL TIMES_LEVELREL), TIMES, TIMES_PARENT_QUARTER)</span><br /><span style="font-family:courier new;"> break</span><br /><span style="font-family:courier new;"> case 'VL1_11': "QTD LY%</span><br /><span style="font-family:courier new;"> if TIMES_LEVELREL EQ 'YEAR'</span><br /><span style="font-family:courier new;"> then D_RETURN = NA</span><br /><span style="font-family:courier new;">else if TIMES_LEVELREL EQ 'QUARTER'</span><br /><span style="font-family:courier new;"> then D_RETURN = (&T_FORM(TIME_VIEW 'VL1_9')-cumsum(lag(&T_FORM(TIME_VIEW 'VL1_1'), TIMES_LAG_PRIOR_YEAR, TIMES, LEVELREL TIMES_LEVELREL), TIMES, TIMES_PARENT_QUARTER))/cumsum(lag(&T_FORM(TIME_VIEW 'VL1_1'), TIMES_LAG_PRIOR_YEAR, TIMES, LEVELREL TIMES_LEVELREL), TIMES, TIMES_PARENT_QUARTER)</span><br /><span style="font-family:courier new;"> else if TIMES_LEVELREL EQ 'MONTH'</span><br /><span style="font-family:courier new;"> then D_RETURN = (&T_FORM(TIME_VIEW 'VL1_9')-cumsum(lag(&T_FORM(TIME_VIEW 'VL1_1'), TIMES_LAG_PRIOR_YEAR, TIMES, LEVELREL TIMES_LEVELREL), TIMES, TIMES_PARENT_QUARTER))/cumsum(lag(&T_FORM(TIME_VIEW 'VL1_1'), TIMES_LAG_PRIOR_YEAR, TIMES, LEVELREL TIMES_LEVELREL), TIMES, TIMES_PARENT_QUARTER)</span><br /><span 
style="font-family:courier new;"> else if TIMES_LEVELREL EQ 'DAY'</span><br /><span style="font-family:courier new;"> then D_RETURN = (&T_FORM(TIME_VIEW 'VL1_9')-cumsum(lag(&T_FORM(TIME_VIEW 'VL1_1'), TIMES_LAG_PRIOR_YEAR, TIMES, LEVELREL TIMES_LEVELREL), TIMES, TIMES_PARENT_QUARTER))/cumsum(lag(&T_FORM(TIME_VIEW 'VL1_1'), TIMES_LAG_PRIOR_YEAR, TIMES, LEVELREL TIMES_LEVELREL), TIMES, TIMES_PARENT_QUARTER)</span><br /><span style="font-family:courier new;"> break</span><br /><span style="font-family:courier new;"> case 'VL1_12': "3MMA</span><br /><span style="font-family:courier new;"> if TIMES_LEVELREL eq 'MONTH'</span><br /><span style="font-family:courier new;"> then D_RETURN = MOVINGAVERAGE(&T_FORM(TIME_VIEW 'VL1_1'), -2, 0, 1, TIMES, LEVELREL TIMES_LEVELREL)</span><br /><span style="font-family:courier new;"> else D_RETURN = NA</span><br /><span style="font-family:courier new;"> break</span><br /><span style="font-family:courier new;"> case 'VL1_13': "6MMA</span><br /><span style="font-family:courier new;"> if TIMES_LEVELREL eq 'MONTH'</span><br /><span style="font-family:courier new;"> then D_RETURN = MOVINGAVERAGE(&T_FORM(TIME_VIEW 'VL1_1'), -5, 0, 1, TIMES, LEVELREL TIMES_LEVELREL)</span><br /><span style="font-family:courier new;"> else D_RETURN = NA</span><br /><span style="font-family:courier new;"> break</span><br /><span style="font-family:courier new;"> case 'VL1_14': "12MMA</span><br /><span style="font-family:courier new;"> if TIMES_LEVELREL eq 'MONTH'</span><br /><span style="font-family:courier new;"> then D_RETURN = MOVINGAVERAGE(&T_FORM(TIME_VIEW 'VL1_1'), -11, 0, 1, TIMES, LEVELREL TIMES_LEVELREL)</span><br /><span style="font-family:courier new;"> else D_RETURN = NA</span><br /><span style="font-family:courier new;"> break</span><br /><span style="font-family:courier new;"> case 'VL1_15': "Product Share of All Products</span><br /><span style="font-family:courier new;"> D_RETURN = &T_FORM(TIME_VIEW 'VL1_1')/&T_FORM(TIME_VIEW 'VL1_1', PRODUCTS 
limit(PRODUCTS to TOPANCESTORS using PRODUCTS_PARENTREL))</span><br /><span style="font-family:courier new;"> break</span><br /><span style="font-family:courier new;"> case 'VL1_16': "Product Share of Parent</span><br /><span style="font-family:courier new;"> D_RETURN = &T_FORM(TIME_VIEW 'VL1_1')/&T_FORM(TIME_VIEW 'VL1_1', PRODUCTS PRODUCTS_PARENTREL)</span><br /><span style="font-family:courier new;"> break</span><br /><span style="font-family:courier new;"> case 'VL1_17': "Customers Share of All Customers</span><br /><span style="font-family:courier new;"> D_RETURN = &T_FORM(TIME_VIEW 'VL1_1')/&T_FORM(TIME_VIEW 'VL1_1', CUSTOMERS limit(CUSTOMERS to TOPANCESTORS using CUSTOMERS_PARENTREL))</span><br /><span style="font-family:courier new;"> break</span><br /><span style="font-family:courier new;"> case 'VL1_18': "Customers Share of Parent</span><br /><span style="font-family:courier new;"> D_RETURN = &T_FORM(TIME_VIEW 'VL1_1')/&T_FORM(TIME_VIEW 'VL1_1', CUSTOMERS CUSTOMERS_PARENTREL)</span><br /><span style="font-family:courier new;"> break</span><br /><span style="font-family:courier new;"> case 'VL1_19': "Channels Share of All Channels</span><br /><span style="font-family:courier new;"> D_RETURN = &T_FORM(TIME_VIEW 'VL1_1')/&T_FORM(TIME_VIEW 'VL1_1', CHANNELS limit(CHANNELS to TOPANCESTORS using CHANNELS_PARENTREL))</span><br /><span style="font-family:courier new;"> break</span><br /><span style="font-family:courier new;"> case 'VL1_20': "Channels Share of Parent</span><br /><span style="font-family:courier new;"> D_RETURN = &T_FORM(TIME_VIEW 'VL1_1')/&T_FORM(TIME_VIEW 'VL1_1', CHANNELS CHANNELS_PARENTREL)</span><br /><span style="font-family:courier new;"> break</span><br /><span style="font-family:courier new;">doend</span><br /><br /><br /><span style="font-family:courier new;">ALLDONE:</span><br /><span style="font-family:courier new;">return D_RETURN</span><br /><span style="font-family:courier new;">END</span><br /></span><br />In simple terms, the program 
checks to see which member of the dimension TIME_VIEW is being requested and then executes the correct OLAP function to return the required data. At the moment this has only been tested on a Julian calendar hierarchy, but I am doing some more testing over the next few weeks for other types of hierarchies. Some of the measures are dependent on the level within the Time dimension, so you may need to change references to specific level names, etc., if you want to reuse this program code. For example:<br /><span style="font-size:85%;"><br /><span style="font-family:courier new;"> if TIMES_LEVELREL EQ 'YEAR'</span></span><br /><br />My time dimension has four levels: Year, Quarter, Month, and Day. In some cases it is necessary to change the processing depending on the level, for example:<br /><br /><span style=";font-family:courier new;font-size:85%;" > if TIMES_LEVELREL EQ 'YEAR'<br />then D_RETURN = NA<br />else if TIMES_LEVELREL EQ 'QUARTER'<br />then D_RETURN = &T_FORM(TIME_VIEW 'VL1_1')<br />else if TIMES_LEVELREL EQ 'MONTH'<br />then D_RETURN = cumsum(&T_FORM(TIME_VIEW 'VL1_1'), TIMES, TIMES_PARENT_QUARTER)<br />else if TIMES_LEVELREL EQ 'DAY'<br />then D_RETURN = cumsum(&T_FORM(TIME_VIEW 'VL1_1'), TIMES, TIMES_PARENT_QUARTER)<br />else D_RETURN = NA<br /></span><br /><br />In this code extract you can see the reference to one of the additional time attributes we added in the previous step, PARENT_QUARTER, which is used by the program. All the references within the program are based on Standard Form naming conventions. Therefore, the full standard name for this attribute is TIMES_PARENT_QUARTER since the dimension is called TIMES and the attribute is called PARENT_QUARTER.<br /><br />For simplicity I created the program in the AW containing the cubes, which is not exactly ideal because if you delete the AW to rebuild it you will lose the program code. 
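As an aside, the cumsum-with-reset behaviour that the QTD and YTD measures rely on can be sketched outside the AW. In this Python sketch the month names, revenue values, and the qtd helper are all illustrative:

```python
# Sketch of cumsum(..., TIMES, TIMES_PARENT_QUARTER): a running total
# over the time members that restarts whenever the parent quarter
# changes. Months, quarters, and revenue values are illustrative.
months = ["JAN", "FEB", "MAR", "APR", "MAY"]
parent_quarter = {"JAN": "Q1", "FEB": "Q1", "MAR": "Q1",
                  "APR": "Q2", "MAY": "Q2"}
revenue = {"JAN": 10, "FEB": 20, "MAR": 30, "APR": 40, "MAY": 50}

def qtd(measure, order, reset_rel):
    totals, running, current = {}, 0, None
    for t in order:
        if reset_rel[t] != current:   # new parent quarter -> reset
            running, current = 0, reset_rel[t]
        running += measure[t]
        totals[t] = running
    return totals

print(qtd(revenue, months, parent_quarter))
# {'JAN': 10, 'FEB': 30, 'MAR': 60, 'APR': 40, 'MAY': 90}
```

Swapping the parent-quarter relation for the parent-year relation gives the YTD behaviour, which is exactly what the PARENT_YEAR attribute is for.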
Alternatively, you can just create a new AW and add the program to the new AW, then modify the ONATTACH program in your data AW to automatically attach the program AW.<br /><br />Calculations for dimension members VL1_15 to VL1_20 are provided as examples and would need to be changed to match the dimensions within your own AWs. For each dimension you will need to create two new dimension members within the TIME_VIEW dimension to return the following share calculations:<br /><ul><li>Share based on the top-level total, i.e. all members</li><li>Share based on parent.</li></ul>As the code uses Standard Form notation, the only thing you should need to change is to replace the phrase “Channels” with your own dimension name:<br /><br /><span style="font-size:85%;"><span style="font-family:courier new;"> case 'VL1_19': "Channels Share of All Channels</span><br /><span style="font-family:courier new;"> D_RETURN = &T_FORM(TIME_VIEW 'VL1_1')/&T_FORM(TIME_VIEW 'VL1_1', CHANNELS limit(CHANNELS to TOPANCESTORS using CHANNELS_PARENTREL))</span><br /><span style="font-family:courier new;"> break</span><br /><span style="font-family:courier new;"> case 'VL1_20': "Channels Share of Parent</span><br /><span style="font-family:courier new;"> D_RETURN = &T_FORM(TIME_VIEW 'VL1_1')/&T_FORM(TIME_VIEW 'VL1_1', CHANNELS CHANNELS_PARENTREL)</span><br /><span style="font-family:courier new;"> break</span><br /></span><br /><br /><span style="font-weight: bold;">Step 4 – Creating the New Cube.</span><br /><br />In my demo schema I have a cube called SALES and it contains two measures: Revenue (called M1) and Quantity (called M2). The dimensionality for this cube is Time, Channel, Product, and Customer.<br /><br />To provide all the time series and comparative calculations I create a new cube that has the same dimensionality as the SALES cube, plus one additional dimension: the Time View dimension. 
This reporting cube contains no stored measures, only a custom calculated measure for each measure in the SALES cube. Each calculated measure returns the 14 time calculations and 6 share calculations for its base measure, controlled by the dimension TIME_VIEW.<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg4F-ky1V55QzyppVZUzYMgvL7Rp9XO5TYToRSg8eMA4lx9mkYDOrkXQugwTbk5b5wikCMMKeAMPctoV4AW38Ymmi3y8MlprU3EiTVZGAjFmRa9DWyCYJD_BjwoB0swwWkfTHav9NkPO7A/s1600-h/Image5.JPG"><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg4F-ky1V55QzyppVZUzYMgvL7Rp9XO5TYToRSg8eMA4lx9mkYDOrkXQugwTbk5b5wikCMMKeAMPctoV4AW38Ymmi3y8MlprU3EiTVZGAjFmRa9DWyCYJD_BjwoB0swwWkfTHav9NkPO7A/s400/Image5.JPG" alt="" id="BLOGGER_PHOTO_ID_5182079463078488482" border="0" /></a><br /><br />The implementation for the cube is largely irrelevant since no stored measures will be present within the cube, only calculated measures. 
Therefore, you can either accept the default settings or de-select all the various options.<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhfCFbc-OgRq6jXdQaiDTKw7jnYL_aumZk5WleVUqE2yLdLn-KmT7TPfAAAfas0ptLyc5weIsT4DG8p-pwFUb4RVlvL-BtAOhn96JBjB8atnAvmmLRB2pa7HCqbGO6SzHDRDH0xvJLSNXA/s1600-h/Image5a.JPG"><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhfCFbc-OgRq6jXdQaiDTKw7jnYL_aumZk5WleVUqE2yLdLn-KmT7TPfAAAfas0ptLyc5weIsT4DG8p-pwFUb4RVlvL-BtAOhn96JBjB8atnAvmmLRB2pa7HCqbGO6SzHDRDH0xvJLSNXA/s400/Image5a.JPG" alt="" id="BLOGGER_PHOTO_ID_5182080098733648306" border="0" /></a><br /><br />Adding the calculated measures to the cube requires the use of a custom measure XML template. To create a calculated measure to support the measure “Revenue” from the SALES cube, you can either use an XML template (email me and I can send you a blank template) or use the Excel Calculation Utility, which can be downloaded from the OLAP Home Page on OTN, to install the two custom calculations.<br /><br />Once you have added these calculated measures to the REPORT_CUBE, the tree in AWM should look like this:<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjQUJzHqPBEiFL-vTqVDgx3WTFByRHShB8gxbt7OIKYAwozP_TXWNJpHIaI3wxOngTXK84bP5xiLttw1rsOZEosFDeErrb9lp-ipxEtBPpucvICb12UoplKzT_Pv-J2-psKT9FT5GCcpOA/s1600-h/Image7.JPG"><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjQUJzHqPBEiFL-vTqVDgx3WTFByRHShB8gxbt7OIKYAwozP_TXWNJpHIaI3wxOngTXK84bP5xiLttw1rsOZEosFDeErrb9lp-ipxEtBPpucvICb12UoplKzT_Pv-J2-psKT9FT5GCcpOA/s400/Image7.JPG" alt="" 
id="BLOGGER_PHOTO_ID_5182081537547692482" border="0" /></a><br /><br />By selecting each of the calculated measures, the details of each calculation will be displayed within the right-hand panel of AWM. For the first calculated measure, linked to the Revenue measure in the Sales cube, the panel looks like this:<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjIvakMOTrmEdJMDBBRMNxK1MsPkCzsaFojkvMSvsaynD6Ll52yP2qq883GqVP5dfQe26a08oFmwbgnBu4IeDEfJyo-tze-qKxNGpjU1Q7uM_QOqNjw4GEO9rHrQTCCsVsUUyrbEL_Q5UE/s1600-h/Image8.JPG"><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjIvakMOTrmEdJMDBBRMNxK1MsPkCzsaFojkvMSvsaynD6Ll52yP2qq883GqVP5dfQe26a08oFmwbgnBu4IeDEfJyo-tze-qKxNGpjU1Q7uM_QOqNjw4GEO9rHrQTCCsVsUUyrbEL_Q5UE/s400/Image8.JPG" alt="" id="BLOGGER_PHOTO_ID_5182081705051417042" border="0" /></a><br /><br />For the second calculated measure, linked to the Quantity measure in the Sales cube, the panel looks like this:<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgDj1CB9FGk1axQfHkdBtr8Jdu_PGP5wllAF238sbB2pUeq53ApoScYdCj4t_XZwNrYIOaursxNRmnbN2Z6OE3yJCU4Z1JkE5wyxgF7eMXForWyC1VrOZpsofadkcgeYLEhN4Y8Z1owo7U/s1600-h/Image9.JPG"><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgDj1CB9FGk1axQfHkdBtr8Jdu_PGP5wllAF238sbB2pUeq53ApoScYdCj4t_XZwNrYIOaursxNRmnbN2Z6OE3yJCU4Z1JkE5wyxgF7eMXForWyC1VrOZpsofadkcgeYLEhN4Y8Z1owo7U/s400/Image9.JPG" alt="" id="BLOGGER_PHOTO_ID_5182081803835664866" border="0" /></a><br /><br />The key part is the line showing the “Expression” (in the XML Template this is the attribute ExpressionText); this defines the call to an OLAP DML program called TIME_VIEW_PRG, passing 
the two required arguments of<br /><br /><span style="font-size:85%;"><span style="font-family:courier new;"> ExpressionText="TIME_VIEW_PRG('SALES_M1', 'M1_PRG')"</span></span><br /><ul><li>The cube.measure_name for the base measure, in this case SALES_M1, which is the Revenue measure in the SALES cube.</li><li>The identifier for the calculated measure within the REPORT_CUBE cube.</li></ul>At this point you may or may not have the correct formula/expression text. When you load calculations via the XML interface, OLAP tries to be a bit too clever: it attempts to convert any physical name it can match to the equivalent Standard Form name. In most cases this is fine, but in this case we need to refer directly to the physical objects. Modifying the expressions is relatively easy if you follow these steps. Using AWM, open the OLAP Worksheet and then:<br /><ol style="font-family:courier new;"><li><span style="font-size:85%;">CNS REPORT_CUBE_M1_PRG</span></li><li><span style="font-size:85%;">EQ </span><span style="font-size:85%;">TIME_VIEW_PRG('SALES_M1', 'M1_PRG')</span></li><li><span style="font-size:85%;">UPDATE</span></li><li><span style="font-size:85%;">COMMIT</span></li><li><span style="font-size:85%;">CNS REPORT_CUBE_M2_PRG</span></li><li><span style="font-size:85%;">EQ </span><span style="font-size:85%;">TIME_VIEW_PRG('SALES_M2', 'M2_PRG')</span></li><li><span style="font-size:85%;">UPDATE</span></li><li><span style="font-size:85%;">COMMIT</span></li></ol><br /><br />If you want to do this via the PL/SQL interface then you can do something like this:<br /><span style="font-size:85%;"><br /><span style="font-family:courier new;">EXEC DBMS_AW.EXECUTE('AW ATTACH AW_NAME RW FIRST')</span><br /><span style="font-family:courier new;">EXEC DBMS_AW.EXECUTE('CNS REPORT_CUBE_M1_PRG')</span><br /><span style="font-family:courier new;">EXEC DBMS_AW.EXECUTE('EQ TIME_VIEW_PRG(''SALES_M1'', ''M1_PRG'')')</span><br /><span style="font-family:courier new;">EXEC 
DBMS_AW.EXECUTE('UPDATE')</span><br /><span style="font-family:courier new;">EXEC DBMS_AW.EXECUTE('COMMIT')</span><br /><span style="font-family:courier new;">EXEC DBMS_AW.EXECUTE('CNS REPORT_CUBE_M2_PRG')</span><br /><span style="font-family:courier new;">EXEC DBMS_AW.EXECUTE('EQ TIME_VIEW_PRG(''SALES_M2'', ''M2_PRG'')')</span><br /><span style="font-family:courier new;">EXEC DBMS_AW.EXECUTE('UPDATE')</span><br /><span style="font-family:courier new;">EXEC DBMS_AW.EXECUTE('COMMIT')</span><br /><span style="font-family:courier new;">EXEC DBMS_AW.EXECUTE('AW DETACH AW_NAME') </span></span><br /><br /><br /><span style="font-weight: bold;">Step 4a – Viewing the Data.</span><br /><br />Once the calculated measures have been added to the new cube, REPORT_CUBE, the AWM Data Viewer can be launched to check the results:<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhfl3HAHgsKALlKPQWQKJdVrZQhxCXInHgn_2TmDsrQhfkqpqPC3dfilQB6f-9LRhjC1PRCZYfsREPQ6ArlyC6o5NFc1Ky6xJlYgKabdYmRukGRDotJBieIDqzA6OkabV7GoKJvinW_h7E/s1600-h/Image10A.JPG"><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhfl3HAHgsKALlKPQWQKJdVrZQhxCXInHgn_2TmDsrQhfkqpqPC3dfilQB6f-9LRhjC1PRCZYfsREPQ6ArlyC6o5NFc1Ky6xJlYgKabdYmRukGRDotJBieIDqzA6OkabV7GoKJvinW_h7E/s400/Image10A.JPG" alt="" id="BLOGGER_PHOTO_ID_5182082710073764338" border="0" /></a><br /><br />In the Query Wizard both sets of measures (stored measures and the new calculated measures) are available for selection. 
To see the new calculations that are now available we can select both of the Report View measures as shown here.<br /><br />Once the measures have been selected, the dimension selector will allow us to pick the calculations we want to display from the list of twenty (there are fourteen base calculations that can be applied to any model, plus the additional share calculations that are based on the dimensions within the source cube, in this case six additional share calculations, making twenty calculated measures in total).<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj1WCpEdYAI5P7Hr6qPj7XfknvxbT1xRsG3o7JMKDuXXIcKuspX5QboT1AwHtEeHqMo2NazStLw9WomomctxHEHBWSpdgDB9JlQk8dyPpxOoGlllSYnSAgeGvm6HWftCb09pQaEJI15LO0/s1600-h/Image10B.JPG"><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj1WCpEdYAI5P7Hr6qPj7XfknvxbT1xRsG3o7JMKDuXXIcKuspX5QboT1AwHtEeHqMo2NazStLw9WomomctxHEHBWSpdgDB9JlQk8dyPpxOoGlllSYnSAgeGvm6HWftCb09pQaEJI15LO0/s400/Image10B.JPG" alt="" id="BLOGGER_PHOTO_ID_5182082933412063746" border="0" /></a><br /><br />The Dimension Viewer shows all the available calculations for this demo schema and the final report is shown below.<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjru_W5zJgZJbk8gqj-rvgMNqj2BjY36G5tspbI59juuTJWzIHZ4Sa5BFmjg3A1lLMTgCfiq8alHw_XgetShaJC4hByxEwspykE7kgbqUnP-kL90rkWKmJrbVnTQe751oR-XztlqZQ5ZUA/s1600-h/Image11.JPG"><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjru_W5zJgZJbk8gqj-rvgMNqj2BjY36G5tspbI59juuTJWzIHZ4Sa5BFmjg3A1lLMTgCfiq8alHw_XgetShaJC4hByxEwspykE7kgbqUnP-kL90rkWKmJrbVnTQe751oR-XztlqZQ5ZUA/s400/Image11.JPG" alt="" 
id="BLOGGER_PHOTO_ID_5182083109505722898" border="0" /></a><br /><br /><br /><span style="font-weight: bold;">Step 5 – Making the Data Available via SQL.</span><br />Is there any real point to this post except to show how clever Oracle OLAP can be? In my opinion, yes, since this technique can be extremely useful when you need to make an OLAP cube visible to SQL-based tools. Analytic Workspace Manager 10g has a relational view generator, available as a plugin, which makes generating the SQL views to support your OLAP cubes a very quick and easy process. But we know users don’t just want base measures; they want lots and lots of calculations as well. This matters when the time comes to create SQL views, as the limit on the number of columns within a view is 1000. This may have been increased in 10g/11g, but even so, navigating a cube with that many columns is not easy.<br /><br />The OLAP View Generator plugin for AWM10gR2 can be downloaded from <a href="http://www.oracle.com/technology/products/bi/olap/viewGenerator_1_0_2.zip">here</a>, and the associated readme is <a href="http://www.oracle.com/technology/products/bi/olap/ViewGenerator.html">here</a>.<br /><br />Using this model, the calculations simply resolve to another dimension, which translates to one additional column in the view as opposed to at least 14 (in this case 20) additional columns per source measure. 
Therefore, this approach makes exposing the cube much easier to manage, as can be seen here:<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiMnmCrOosZygpoywmF5nN9sITJ2sk2EAmphTO6yVDqev5cITT-a6cVUh9BJlfvrOPGlMZ-X94gznUC3PcYfDHvP6gDlyzTlHdFAR3bZY-UKna3QK2b0uEawm9ckVfXB13bkjq_dXRjgeI/s1600-h/Image12.JPG"><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiMnmCrOosZygpoywmF5nN9sITJ2sk2EAmphTO6yVDqev5cITT-a6cVUh9BJlfvrOPGlMZ-X94gznUC3PcYfDHvP6gDlyzTlHdFAR3bZY-UKna3QK2b0uEawm9ckVfXB13bkjq_dXRjgeI/s400/Image12.JPG" alt="" id="BLOGGER_PHOTO_ID_5182083315664153122" border="0" /></a><br /><br />As we can see here, all the calculations are contained within the dimension Time View, which simply gets exposed like any other dimension.<br /><br />Et voilà, install one measure and you can automatically generate fourteen or more additional measures, which are guaranteed to bring a smile to the face of any business user.Keith Lakerhttp://www.blogger.com/profile/01039869313455611230noreply@blogger.com2tag:blogger.com,1999:blog-3820031471524503731.post-20778631645952167902008-03-18T07:30:00.000-07:002008-12-11T15:25:31.819-08:00Monitoring OLAP BuildsRecently I was working on a project where the customer’s server was in our South Africa office and I was running various build configurations testing data loading performance. In the past when I have been monitoring build times I always used SQLDeveloper’s excellent auto refresh facility. 
Firstly, it is worth downloading the “Scripts for OLAP DBAs” created by Jameson White (who also contributes to this blog and is very active on the Oracle OLAP Wiki).<br /><br /><a href="http://www.oracle.com/technology/products/bi/olap/OLAP_DBA_scripts.ZIP">http://www.oracle.com/technology/products/bi/olap/OLAP_DBA_scripts.ZIP</a><br /><br />These scripts are extremely valuable during tuning exercises. Jameson also provided me with two additional scripts for monitoring the XML_LOAD_LOG table, which holds all the build messages generated during a data load process. In the SQLDeveloper tree below you can see all these scripts<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgq6WsAtDM3S-onRBD6MDZHmOG-pvv4zLQrvEJNs89GWZNo0yejetvUQvP-LXGJiTXPoEjmpPW0pDOIvJIgNhm6tjuD2Sf96sV1p6DguH_HHc1bfeqx-AbHjvcVkoGccmlRu9afL256Wmg/s1600-h/Image1.JPG"><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgq6WsAtDM3S-onRBD6MDZHmOG-pvv4zLQrvEJNs89GWZNo0yejetvUQvP-LXGJiTXPoEjmpPW0pDOIvJIgNhm6tjuD2Sf96sV1p6DguH_HHc1bfeqx-AbHjvcVkoGccmlRu9afL256Wmg/s400/Image1.JPG" alt="" id="BLOGGER_PHOTO_ID_5178749389882231250" border="0" /></a><br /><br />(Note, the OWB team has also provided a set of predefined scripts/reports as well). 
Here is the main report for the XML_LOAD_LOG table, which reports on the whole table:<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj30M6Z17T-fbeERAbLcdVHOQQO1kb0FM0isPsKKk18dtW2YCofHUQx9dO_oK3ovJiLQXtx4lL3fnSwUj0nzqjmQccpo6lkwT02z17CsnmKRsu0HjYKDcpwaGeV8SQ46lTUscf5uqAsNz4/s1600-h/Image2.JPG"><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj30M6Z17T-fbeERAbLcdVHOQQO1kb0FM0isPsKKk18dtW2YCofHUQx9dO_oK3ovJiLQXtx4lL3fnSwUj0nzqjmQccpo6lkwT02z17CsnmKRsu0HjYKDcpwaGeV8SQ46lTUscf5uqAsNz4/s400/Image2.JPG" alt="" id="BLOGGER_PHOTO_ID_5178751361272220194" border="0" /></a><br /><br />The code for this report is here (note the format mask 'HH24:MI:SS' — MI, not MM, which is the month mask):<br /><br /><span style="font-family:courier new;">select XML_LOADID as "Load ID"</span><br /><span style="font-family:courier new;">, XML_RECORDID as "Record ID"</span><br /><span style="font-family:courier new;">, XML_AW as "AW"</span><br /><span style="font-family:courier new;">, XML_DATE as "Date"</span><br /><span style="font-family:courier new;">, TO_CHAR(XML_DATE, 'HH24:MI:SS') as "Actual Time"</span><br /><span style="font-family:courier new;">, substr(XML_MESSAGE, 1, 9) as "Message Time"</span><br /><span style="font-family:courier new;">, substr(XML_MESSAGE, 9) as "Message"</span><br /><span style="font-family:courier new;">from olapsys.xml_load_log order by 1 desc, 2 desc</span><br /><br />The other report allows you to focus on a single job, which is passed to the report as a parameter:<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi_GpspX9hyphenhyphen_cKt91lB5h3k0RtSzEiRYlsohAqtYJUyiqcfAhZ2rqabbniFXJmJIQPrj6l4P1uCAdYCOulVWuqpd4SBrGa1QwG0b7dpp83qe61mWWuz8MjLb3NO54u0IoElvvadmmor00E/s1600-h/Image3.JPG"><img style="margin: 0px auto 10px; 
display: block; text-align: center; cursor: pointer;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi_GpspX9hyphenhyphen_cKt91lB5h3k0RtSzEiRYlsohAqtYJUyiqcfAhZ2rqabbniFXJmJIQPrj6l4P1uCAdYCOulVWuqpd4SBrGa1QwG0b7dpp83qe61mWWuz8MjLb3NO54u0IoElvvadmmor00E/s400/Image3.JPG" alt="" id="BLOGGER_PHOTO_ID_5178751623265225266" border="0" /></a><br /><br />and the code for this report is here:<br /><br /><span style="font-family:courier new;">select XML_LOADID as "Load ID"</span><br /><span style="font-family:courier new;">, XML_RECORDID as "Record ID"</span><br /><span style="font-family:courier new;">, XML_AW as "AW"</span><br /><span style="font-family:courier new;">, XML_DATE as "Date"</span><br /><span style="font-family:courier new;">, TO_CHAR(XML_DATE, 'HH24:MI:SS') as "Actual Time"</span><br /><span style="font-family:courier new;">, substr(XML_MESSAGE, 1, 9) as "Message Time"</span><br /><span style="font-family:courier new;">, substr(XML_MESSAGE, 9) as "Message"</span><br /><span style="font-family:courier new;">from olapsys.xml_load_log </span><br /><span style="font-family:courier new;">where XML_LOADID = :i_LoadId </span><br /><span style="font-family:courier new;">order by 1 desc, 2 desc</span><br /><br />Once you have installed the reports into SQLDeveloper you can then use the auto refresh feature to keep each report up to date. 
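If you are not sure which value to supply for the :i_LoadId bind variable, a quick query against the same table returns the most recent load ID. This is just a trivial helper I find useful, not part of Jameson's scripts:

```sql
-- Return the most recent load ID recorded in XML_LOAD_LOG,
-- which can then be supplied as the :i_LoadId bind variable.
select max(xml_loadid) as latest_load_id
from olapsys.xml_load_log;
```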
SQLDeveloper lets you set the refresh rate to 5, 10, 15, 20, 25, 30, 60, or 120 seconds.<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhsaRcjQJWJVDG2JfsigW0-PMwjRxoQBT_QaiwtqMU-rfVTxqf8ulfjLRpXYAjxwF329R4JLF8AIOYIVGt20tUxbbBr6prXuEqtIVyfbO76n3GpABqTQuyMGdGJr6iqbuZi4e7ma0cDsy0/s1600-h/Image4.JPG"><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhsaRcjQJWJVDG2JfsigW0-PMwjRxoQBT_QaiwtqMU-rfVTxqf8ulfjLRpXYAjxwF329R4JLF8AIOYIVGt20tUxbbBr6prXuEqtIVyfbO76n3GpABqTQuyMGdGJr6iqbuZi4e7ma0cDsy0/s400/Image4.JPG" alt="" id="BLOGGER_PHOTO_ID_5178751756409211458" border="0" /></a><br /><br />This works well if you have a good connection to your remote server and, most importantly, can stay connected during the build process. Unfortunately, my connection to South Africa was very slow, and sometimes it was necessary to run a job in background mode and just disconnect and walk away, which creates the problem of knowing when the build has actually completed.<br /><br />To resolve this particular issue I created some utilities. Using the utl_smtp package that is part of the database, I created a routine that scans the XML_LOAD_LOG table and, once a build is complete, sends me an email.<br /><br />For ease of use I created three procedures to monitor:<br /><ul><li>Dimension builds</li><li>Cube builds</li><li>AW builds</li></ul>Depending on what is being monitored, the title and body of the email change accordingly. 
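For anyone who wants to build something similar, the sending routine itself is straightforward. Here is a minimal sketch of a utl_smtp send — not my exact code, and the mail host and addresses are placeholders you must replace with your own:

```sql
-- Minimal sketch of sending a notification email with UTL_SMTP.
-- The mail host and addresses below are placeholders.
create or replace procedure send_mail (
  p_to    in varchar2,
  p_title in varchar2,
  p_body  in varchar2
) as
  mailhost varchar2(30) := 'mail.example.com';
  msg_from varchar2(50) := 'olap.monitor@example.com';
  conn     utl_smtp.connection;
begin
  conn := utl_smtp.open_connection(mailhost, 25);
  utl_smtp.helo(conn, mailhost);
  utl_smtp.mail(conn, msg_from);
  utl_smtp.rcpt(conn, p_to);
  -- a blank line separates the headers from the message body
  utl_smtp.data(conn,
    'Subject: ' || p_title || utl_tcp.crlf || utl_tcp.crlf || p_body);
  utl_smtp.quit(conn);
end;
/
```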
For example, when monitoring a dimension build:<br /><br />The email that is sent has the title:<br /><br /><span style="font-family:courier new;">Data Load for PRODUCTS finished at 15:43:41</span><br /><br />And the message body can contain either just three simple lines:<br /><br /><span style="font-size:78%;"><span style="font-family:courier new;">07-MAR-08 14:03:20 Started Loading Dimension Members for PRODUCTS.DIMENSION (1 out of 1 Dimensions).</span><br /><span style="font-family:courier new;">07-MAR-08 14:03:20 Finished Loading Members for PRODUCTS.DIMENSION. Added: 0. No Longer Present: 0.</span><br /><span style="font-family:courier new;">Total Time 00:00:00</span><br /></span><br /><br />Or the body of the message can contain the complete XML_LOAD_LOG for that job, for example:<br /><br /><span style="font-size:78%;"><span style="font-family:courier new;">07-MAR-08 14:03:33 Completed Build(Refresh) of SH_OLAP.SH_AW Analytic Workspace.</span><br /><span style="font-family:courier new;">07-MAR-08 14:03:22 Finished Updating Partitions.</span><br /><span style="font-family:courier new;">07-MAR-08 14:03:21 Started Updating Partitions.</span><br /><span style="font-family:courier new;">07-MAR-08 14:03:21 Finished Loading Dimensions.</span><br /><span style="font-family:courier new;">07-MAR-08 14:03:21 Finished Loading Attributes.</span><br /><span style="font-family:courier new;">07-MAR-08 14:03:21 Finished Loading Attributes for PRODUCTS.DIMENSION. 
6 attribute(s) LONG_DESCRIPTION, PACK_SIZE, SHORT_DESCRIPTION, SUPPLIER_ID, UNIT_OF_MEASURE, WEIGHT_CLASS Processed.</span><br /><span style="font-family:courier new;">07-MAR-08 14:03:21 Started Loading Attributes for PRODUCTS.DIMENSION (1 out of 1 Dimensions).</span><br /><span style="font-family:courier new;">07-MAR-08 14:03:21 Started Loading Attributes.</span><br /><span style="font-family:courier new;">07-MAR-08 14:03:21 Finished Loading Hierarchies.</span><br /><span style="font-family:courier new;">07-MAR-08 14:03:21 Finished Loading Hierarchies for PRODUCTS.DIMENSION. 1 hierarchy(s) STANDARD Processed.</span><br /><span style="font-family:courier new;">07-MAR-08 14:03:20 Started Loading Hierarchies for PRODUCTS.DIMENSION (1 out of 1 Dimensions).</span><br /><span style="font-family:courier new;">07-MAR-08 14:03:20 Started Loading Hierarchies.</span><br /><span style="font-family:courier new;">07-MAR-08 14:03:20 Finished Loading Dimension Members.</span><br /><span style="font-family:courier new;">07-MAR-08 14:03:20 Finished Loading Members for PRODUCTS.DIMENSION. Added: 0. No Longer Present: 0.</span><br /><span style="font-family:courier new;">07-MAR-08 14:03:20 Started Loading Dimension Members for PRODUCTS.DIMENSION (1 out of 1 Dimensions).</span><br /><span style="font-family:courier new;">07-MAR-08 14:03:20 Started Loading Dimension Members.</span><br /><span style="font-family:courier new;">07-MAR-08 14:03:19 Started Loading Dimensions.</span><br /><span style="font-family:courier new;">07-MAR-08 14:03:19 Attached AW SH_OLAP.SH_AW in RW Mode.</span><br /><span style="font-family:courier new;">07-MAR-08 14:03:19 Started Build(Refresh) of SH_OLAP.SH_AW Analytic Workspace.</span><br /><span style="font-family:courier new;">Total Time 00:00:14</span><br /></span><br />The code will monitor both foreground and background jobs but for the background jobs access to the scheduler is required. 
For both types of job, access to the DBMS_LOCK.SLEEP function is required so that the code can loop continuously while the job is processing. On each pass, DBMS_LOCK.SLEEP puts the monitoring process to sleep for 60 seconds before it checks again whether the specified load has completed (if anyone knows of a better way to do this, please let me know).<br /><br /><span style="font-weight: bold;">Overview of the Code</span><br />The monitoring code is split into two packages with associated procedures:<br /><ul><li>AW_Monitor</li><ul><li>Dim_Build</li></ul><ul><li>Cube_Build</li></ul><ul><li>Aw_Build </li></ul><ul><li>Send_Complete_Log </li></ul><ul><li>Send_Mail</li></ul><li>Monitor_Sched_Process</li><ul><li>Create_Job</li></ul><ul><li>Drop_Job</li></ul></ul><span style="font-weight: bold;">Dim_Build</span><br />This procedure monitors the build process for a dimension, looking for the string 'Finished Loading Members for ' to determine if the build has completed.<br /><span style="font-weight: bold;"><br />Cube_Build</span><br />This procedure monitors the build process for a cube, looking for the string 'Finished Auto Solve for Measure' to determine if the build has completed.<br /><span style="font-weight: bold;"><br />Aw_Build </span><br />This procedure monitors the build process for an AW, looking for the string 'Completed Build(Refresh) of ' to determine if the build has completed.<br /><span style="font-weight: bold;"><br />Send_Complete_Log</span><br />Emails the complete log file for a build.<br /><span style="font-weight: bold;"><br />Send_Mail</span><br />This procedure sends an email containing just the Start and End messages from the build being monitored.<br /><span style="font-weight: bold;"><br />Create_Job</span><br />This procedure creates a new job within DBMS_SCHEDULER but does not enable the job.<br /><span style="font-weight: bold;"><br />Drop_Job</span><br />This removes the job from DBMS_SCHEDULER.<br /><br /><br /><span style="font-weight: bold;">What to Monitor?</span><br />There 
are three monitoring options:<br /><ul><li>Dimension</li><li>Cube</li><li>AW</li></ul>All this really does is determine the search string used against the XML_MESSAGE column. If you want to know when a specific dimension has completed its refresh, use the DIM_BUILD procedure. If you want to know when a specific cube has completed its refresh, use the CUBE_BUILD procedure. The final procedure, AW_BUILD, monitors the refresh of the AW, which can be useful if you are refreshing lots of cubes and/or dimensions within a single job.<br /><br /><span style="font-weight: bold;">How to Monitor a Foreground Job</span><br />Monitoring a foreground job is relatively easy. If you want to run a job that maintains a dimension called PRODUCTS, and then have an email sent once the refresh of the dimension has completed, you would use the DIM_BUILD procedure. The parameters for each procedure are much the same. You need to provide:<br /><ul><li>Schema name</li><li>AW Name</li><li>Object name (dimension name or cube name)</li><li>Report Type (Summary or Full)</li><li>Job Name (if the monitor process is being scheduled)</li></ul>So the command would be as follows:<br /><span style="font-size:85%;"><span style="font-family:courier new;"><br />EXEC AW_MONITOR.DIM_BUILD('SH_OLAP', 'SH_AW', 'PRODUCTS', 'SUMMARY', null);</span></span><br /><br /><span style="font-weight: bold;">How to Monitor a Background Job</span><br />It is important to schedule the monitoring of XML_LOAD_LOG to start after the AW job has started. 
Therefore, you need to set the time passed to the CREATE_JOB procedure to a point in time after the BuildDate details in the AW XML script. CREATE_JOB takes the following parameters:<br /><ul><li>Job Name</li><li>Script to run</li><li>Date and Time to run</li><li>Job Description</li></ul>So the command would be as follows:<br /><br /><span style="font-size:85%;"><span style="font-family:courier new;">exec monitor_sched_process.create_job('MONITOR_PROD_1','aw_monitor.dim_build(''SH_OLAP'',''SH_AW'', ''PRODUCTS'', ''SUMMARY'', ''MONITOR_PROD_1'')', '07-MAR-2008 15:46:00', 'Starts the monitor of PRODUCTS dimension build');</span></span><br /><br />You can review the job details via the Scheduler Jobs page in Enterprise Manager. Here you can see an AW build process is scheduled to run at 3:45 and the monitoring job, ‘MONITOR_PROD_1’, is scheduled to run at 3:46.<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgjML1ZwQ0LZcXpVSACe-KGuYSiRRfvVZmFLHKFxL4VwCqT1YpAmZUHLziy2WTGx-ex3n9JIdACmKjYI0BSQcWFEKns2a0-yi_eyO23MQ8klR8OuTqJVmKwdMr_i_bo5pKwYqreCxjn_q4/s1600-h/Image5.JPG"><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgjML1ZwQ0LZcXpVSACe-KGuYSiRRfvVZmFLHKFxL4VwCqT1YpAmZUHLziy2WTGx-ex3n9JIdACmKjYI0BSQcWFEKns2a0-yi_eyO23MQ8klR8OuTqJVmKwdMr_i_bo5pKwYqreCxjn_q4/s400/Image5.JPG" alt="" id="BLOGGER_PHOTO_ID_5178754599677561426" border="0" /></a><br /><br />You can use the features in Enterprise Manager to halt the job at any point in time via the delete button. Once the job itself has completed, i.e. 
the email is sent, the job is stopped and removed from the job queue.<br /><br /><span style="font-weight: bold;">Possible Code Changes</span><br />Before running the code you may need to change the recipient, from, and mail server details in the AW_MONITOR package. Each of the monitoring procedures has a call to the SEND_MAIL procedure that supplies the “To” address for the email. This would need to be changed unless you want to send all your emails to me.<br /><br /><span style="font-size:85%;"><span style="font-family:courier new;">send_mail('keith.laker@oracle.com', v_title, v_body);</span></span><br /><br />The procedure SEND_MAIL has the following lines that need to be changed:<br /><br /><span style="font-size:85%;"><span style="font-family:courier new;"> msg_from VARCHAR2(50) := 'keith.laker@oracle.com';</span><br /><span style="font-family:courier new;"> mailhost VARCHAR2(30) := 'mail.oracle.com';</span><br /></span><br /><span style="font-weight: bold;"><br />The Code</span><br />A note of caution: I am not a brilliant PL/SQL coder; therefore, I am sure most of the code I have created can be improved. I am not going to post all the code here, as I suspect it will cause problems. 
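To give a flavour of the approach, though, the heart of each monitoring procedure is just a polling loop over XML_LOAD_LOG. This is a simplified sketch only, not the exact code — the real procedures also build the email title and body before calling SEND_MAIL:

```sql
-- Simplified sketch of the polling loop used by the monitoring
-- procedures: check XML_LOAD_LOG for the completion message,
-- sleeping 60 seconds between checks via DBMS_LOCK.SLEEP.
declare
  v_done pls_integer := 0;
begin
  loop
    select count(*)
      into v_done
      from olapsys.xml_load_log
     where xml_message like '%Finished Loading Members for PRODUCTS.DIMENSION%';
    exit when v_done > 0;
    dbms_lock.sleep(60);  -- wait one minute before checking again
  end loop;
  -- the build has finished, so send the notification email here
end;
/
```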
At the moment I cannot find a convenient location to host the Zip file containing the two PL/SQL packages; therefore, if you want the code send me an email (<a href="mailto:keith.laker@oracle.com">keith.laker@oracle.com</a>) and I will send you the zip file.<br /><br />There are two basic packages:<br /><ul><li>AW_MONITOR</li><li>MONITOR_SCHED_PROCESS</li></ul>The AW_MONITOR package contains the following procedures:<br /><ul><li>Dim_Build - This procedure monitors the build process for a dimension, looking for the string 'Finished Loading Members for ' to determine if the build has completed.</li><li>Cube_Build - This procedure monitors the build process for a cube, looking for the string 'Finished Auto Solve for Measure' to determine if the build has completed.</li><li>Aw_Build - This procedure monitors the build process for an AW, looking for the string 'Completed Build(Refresh) of ' to determine if the build has completed.</li><li>Send_Complete_Log - Emails the complete log file for a build</li><li>Send_Mail - This procedure sends an email containing just the Start and End messages from the build being monitored</li></ul>Example code:<br />Monitoring a build for a specific dimension. The parameters are schema, AW Name, Dimension Name, email type:<br /><span style="font-size:85%;"><span style="font-family: courier new;">exec aw_monitor.dim_build('SH_OLAP', 'SH_AW', 'PRODUCTS', 'SUMMARY');</span><br /></span><br />Monitoring a build for a specific cube. The parameters are schema, AW Name, Cube Name, email type:<br /><span style="font-size:85%;"><span style="font-family: courier new;">exec aw_monitor.cube_build('SH_OLAP', 'SH_AW', 'SALES', 'SUMMARY');</span><br /></span><br />Monitoring a build for a specific AW. 
The parameters are schema, AW Name, email type:<br /><span style="font-size:85%;"><span style="font-family: courier new;">exec aw_monitor.aw_build('SH_OLAP', 'SH_AW', 'SUMMARY');</span><br /></span><br />The above procedures all finish by sending an email to the specified recipients, where the body of the email can be either a summary of the build (just the start and end times) or the complete build log for the dimension, cube or AW.<br /><br /><span style="font-size:85%;"><span style="font-family: courier new;">exec send_mail('keith.laker@oracle.com', 'Data Load for PRODUCT finished at 12:00pm', 'Start at...., Finished at.....');</span><br /></span><br /><span style="font-size:85%;"><span style="font-family: courier new;">exec send_complete_log(795, 'keith.laker@oracle.com', 'Data Load for PRODUCT finished at 12:00pm');</span><br /></span><br />The result is an email delivered to your Inbox:<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhuO-2I0uu3gsKimjVDXguocwik5P-FHHUHIXnymcxMfqxzgDyHHTOM-3uzbP1qFHhMlDB91PiaGc4S6tRIa5fZt_pglsCotf2T0NoI9ZFMzvhSKdYrt7wthAKjT1UagWyOgLgiTzexCcA/s1600-h/Image6.JPG"><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhuO-2I0uu3gsKimjVDXguocwik5P-FHHUHIXnymcxMfqxzgDyHHTOM-3uzbP1qFHhMlDB91PiaGc4S6tRIa5fZt_pglsCotf2T0NoI9ZFMzvhSKdYrt7wthAKjT1UagWyOgLgiTzexCcA/s400/Image6.JPG" alt="" id="BLOGGER_PHOTO_ID_5179013594795444834" border="0" /></a>Then depending on the parameter that controls the body of the email, the body will contain either the complete log from XML_LOAD_LOG table:<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" 
href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg-LF2Tzn4yJPpRsp8o9z6Gw9JvuU3WfSB6xGSHzhxjzN4-q0-ufl8kktOoAx7jp3KlvE2j5UB61PkW5r8aEPSgIwu0dGVoiEL0yz5yWTZMr_k17sVAUiWazZY3ESnD_oNrmYVlen7xBds/s1600-h/Image7.JPG"><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg-LF2Tzn4yJPpRsp8o9z6Gw9JvuU3WfSB6xGSHzhxjzN4-q0-ufl8kktOoAx7jp3KlvE2j5UB61PkW5r8aEPSgIwu0dGVoiEL0yz5yWTZMr_k17sVAUiWazZY3ESnD_oNrmYVlen7xBds/s400/Image7.JPG" alt="" id="BLOGGER_PHOTO_ID_5179013676399823474" border="0" /></a>or simply a summary containing the start and end times for the build process:<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj9UMOtMLDlMAnFqHMVuGpn1E6V7COBlHjXUNVoIq-IQhLpPfmck4y70Kv6vFEWRaxByS75Jv9EEfx0M8sNG1HJo89wRJVQGJE2uzn7-9QIRNP0Pwrutc7T9RdbX19AMZUbElZYTRO7l3Q/s1600-h/Image8.JPG"><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj9UMOtMLDlMAnFqHMVuGpn1E6V7COBlHjXUNVoIq-IQhLpPfmck4y70Kv6vFEWRaxByS75Jv9EEfx0M8sNG1HJo89wRJVQGJE2uzn7-9QIRNP0Pwrutc7T9RdbX19AMZUbElZYTRO7l3Q/s400/Image8.JPG" alt="" id="BLOGGER_PHOTO_ID_5179014574047988370" border="0" /></a><br />The MONITOR_SCHED_PROCESS package contains the following procedures:<br /><ul><li>create_job - This procedure creates a new job within DBMS_SCHEDULER but does not enable the job</li><li>drop_job - This removes the job from DBMS_SCHEDULER</li></ul><br />Example code:<br /><br /><span style="font-size:78%;"><span style="font-family: courier new;">exec monitor_sched_process.create_job('AW_DIM_MONITOR','aw_monitor.dim_build(''SH_OLAP'',''SH_AW'', ''PRODUCTS'', ''SUMMARY'', ''AW_DIM_MONITOR'')', '23-JAN-2008 12:15:00', 'Starts the monitor of the PRODUCTS dimension build');</span><br /><br /><span style="font-family: courier 
new;">exec monitor_sched_process.drop_job('AW_DIM_MONITOR');</span><br /><br /></span>Keith Lakerhttp://www.blogger.com/profile/01039869313455611230noreply@blogger.com0tag:blogger.com,1999:blog-3820031471524503731.post-35031989649570317672008-03-07T21:00:00.000-08:002008-12-11T15:25:37.789-08:00OLAP Workshop 7 : Creating Calculated MeasuresMost data warehouses contain a lot of data but, conversely, very little information. Many DW teams are content to publish basic facts to their user communities and then leave those communities to fend for themselves in turning data into information. For example, it is not uncommon to see basic measures such as sales, costs, and stock in many data warehouse schemas, but in reality these types of measures are of little use in themselves. Business users are not interested per se in the value of sales today, and this can be seen in the general press as well when reporting key trading periods such as Thanksgiving in the US and Christmas in Europe. The business and financial communities are both more interested in sales compared to the same time last year, the rate of growth of sales, and sales compared to forecast. In other words, most people are actually interested in calculated measures such as ratios and percentages derived from the base data.<br /><br />Therefore, the key question for many data warehouse teams is how to create and manage these types of calculations.<br /><br />In this workshop we will explore some of the calculated measures that can be quickly and easily created in your analytic workspace (AW) to enrich its analytic content for end users.<br /><br />One of the powerful features of the Oracle OLAP technology is the ability to efficiently and easily create business calculations. Oracle OLAP contains a powerful calculation engine that allows you to extend the analytic content of your AW by adding into it some useful business calculations as calculated measures. 
Some of these business calculations are simple and some are a lot more involved. However, none are complex from an end-user perspective, although many of them are challenging to traditional relational-only databases. This is especially true when the calculations are numerous, and when many of the queries are ad hoc and unpredictable in nature.<br /><br />Calculated measures are, as the name suggests, calculated from other measures available in the AW. They are implemented as formulas in the AW; that is, their definition is saved, but no calculated data is stored. The calculations happen at run time when a query requires it. Calculated measures are derived from the contents of other measures, including stored measures as well as measures that are calculated at run time. The calculated measures that you define in the AW are indistinguishable to end users from the stored measures into which data has been loaded and stored in the AW. All measures, according to the dimensional model presented to the end user, are identical. This promotes ease of use by end users.<br /><br />There is generally a trade-off between precomputing and storing measures in the AW versus calculating them at query time. However, Oracle AWs are very efficient at keeping query performance fast, even when there are many calculated measures that are resolved dynamically. It is not uncommon for Oracle OLAP customers to implement multidimensional cubes with many hundreds or even thousands of calculated measures and key performance indicators (KPIs), which are calculated at query time from a relatively small number of physically stored and aggregated (or partially aggregated) measures. 
It is a striking characteristic of AWs in the Oracle database that query performance generally remains consistent even as data volumes and calculation complexity increase.<br /><br />So how do calculated measures work, and what happens when the dimensionality of the source measures does not match exactly?<br /><br />In the example below, a measure called Revenue is a calculated measure based on two other measures: quantity and price. The calculation itself is simple: quantity × price. Notice that the resulting dimensionality of Revenue is inherited from the two measures involved in the calculation. When you use measures with different dimensionality in a calculation, the result always contains the superset of the dimensions of the base measures. The multidimensional data model handles this automatically. You do not have to worry about the possibility that different measures in your AW have different shapes or dimensionality. You specify the calculation rule in the wizard, and the engine automatically resolves the dimensionality. One obvious requirement is that at least one dimension must be in common for the result to make sense.<br /><br />In this example, Quantity is dimensioned by Time, Product, and Customer but Unit Price is not dimensioned by Customer. When Oracle OLAP is asked to calculate quantity × price, it uses its knowledge of the dimensional model to automatically handle the calculation of Revenue for all customers, even though there is no separate price stored for each customer. If there is not a separate price for each customer, then there must be a single price for all customers. Price does not vary by customer. 
As Oracle OLAP performs the calculation quantity × price, it applies the appropriate price for the particular product and time dimension intersections being calculated.<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhFciYQNkZbL_v9xUjvPCIlJLM_Pc159Yrr_e8mjZMjmhx-42gve0uAqKhuIr9qzPgIVnGFF0AoFtzE49sG4L8kv9LqR_0TWuJDLwV4cmEw7lYy0nUY9VQu4Fr7cYsPbzyPEB9pwDnW90g/s1600-h/image1.JPG"><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhFciYQNkZbL_v9xUjvPCIlJLM_Pc159Yrr_e8mjZMjmhx-42gve0uAqKhuIr9qzPgIVnGFF0AoFtzE49sG4L8kv9LqR_0TWuJDLwV4cmEw7lYy0nUY9VQu4Fr7cYsPbzyPEB9pwDnW90g/s400/image1.JPG" alt="" id="BLOGGER_PHOTO_ID_5174956731896400034" border="0" /></a><br />There are two methods for creating a calculated measure:<br /><ul><li>Wizard and template method</li><li>Free format</li></ul><span style="font-size:130%;"><span style="font-weight: bold;">Using the Calculation Wizard</span></span><br />By default both AWM and OWB provide a calculation wizard to help define the most common types of business calculations. 
There are four categories of calculations:<br /><ul><li>Basic</li><li>Advanced</li><li>Prior/Future comparison</li><li>Time Frame</li></ul>The image below shows the wizard screen and the list of templates within each category (Note there are some changes with AWM 11g).<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiXZ6cEskPYcO7kIXH7SMJ3jSGA1trjb1PYr3vDAqyGypqd80LLd5GeBZL0C1WqkOPoBNuCoiUMreu9fDG6POQfsdnoWxrc1NbZhEoZafgs6WP7UzpLZRjvpdM6tV63U8Ktr8JaZcVm-K4/s1600-h/image2.JPG"><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiXZ6cEskPYcO7kIXH7SMJ3jSGA1trjb1PYr3vDAqyGypqd80LLd5GeBZL0C1WqkOPoBNuCoiUMreu9fDG6POQfsdnoWxrc1NbZhEoZafgs6WP7UzpLZRjvpdM6tV63U8Ktr8JaZcVm-K4/s400/image2.JPG" alt="" id="BLOGGER_PHOTO_ID_5174957629544564914" border="0" /></a><br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjSVPENpqryLWAD2D1ZCos1QrNxBogOop-q2Cf6_AhhvsDOF_x3bcnLPZDpQu4p8li8yeIjT5ongM8gF9h5kp4ydV6Ds07xb-tpp96DHtxphf3KOGsYUxWnbxegF4Z-ig2iL505UzqATMs/s1600-h/image2a.JPG"><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjSVPENpqryLWAD2D1ZCos1QrNxBogOop-q2Cf6_AhhvsDOF_x3bcnLPZDpQu4p8li8yeIjT5ongM8gF9h5kp4ydV6Ds07xb-tpp96DHtxphf3KOGsYUxWnbxegF4Z-ig2iL505UzqATMs/s400/image2a.JPG" alt="" id="BLOGGER_PHOTO_ID_5174957724033845442" border="0" /></a><br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjEoMiz7fGExiMtpeQhWyv5mz_33JszFOYdLEzSMHi4dujW0KL4KEWFh2uy-7qGWE_EMILn1qLQM2ncOzSd3kZfm2-WlzN6oU8Af3gyHUZ74v-WLtE_kALYOc2sIMF5rmM9I66Gw26xExg/s1600-h/image2b.JPG"><img style="margin: 0px auto 10px; 
display: block; text-align: center; cursor: pointer;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjEoMiz7fGExiMtpeQhWyv5mz_33JszFOYdLEzSMHi4dujW0KL4KEWFh2uy-7qGWE_EMILn1qLQM2ncOzSd3kZfm2-WlzN6oU8Af3gyHUZ74v-WLtE_kALYOc2sIMF5rmM9I66Gw26xExg/s400/image2b.JPG" alt="" id="BLOGGER_PHOTO_ID_5174957809933191378" border="0" /></a><br /><span style="font-weight: bold;">Creating a Share Calculation</span><br />The Share template prompts you for the components you need to specify the calculation:<br />Share Of: A measure or calculated measure that is dimensioned by the Product dimension (in this example)<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjYqyWMHDu1-nQtCnGDMLNvAe2IxQ40_N7msv5rfLDO84P3yBA1tvFbZsLBWZX2RweT3YBiEatlJ3DzI6C_N9kcG1yLrPvPmcucGNPObFu93cvwbfz4MCQ483Mpk6BUL3IU_fi-GBJmWb0/s1600-h/image2c.JPG"><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjYqyWMHDu1-nQtCnGDMLNvAe2IxQ40_N7msv5rfLDO84P3yBA1tvFbZsLBWZX2RweT3YBiEatlJ3DzI6C_N9kcG1yLrPvPmcucGNPObFu93cvwbfz4MCQ483Mpk6BUL3IU_fi-GBJmWb0/s400/image2c.JPG" alt="" id="BLOGGER_PHOTO_ID_5174957891537570018" border="0" /></a><br /><ul><li>For: The dimension for which the share is to be calculated</li><li>In: The hierarchy to be used while calculating the share for the selected dimension</li><li>As a Percent of: The dimension member to be used as a baseline to calculate the share. Select one of the following choices: </li><ul><li>Total: Specifies that the baseline consists of the total of all items on the level that is associated with the current member (that is, the item for which the share is being calculated). 
This option is disabled for a dimension that has no hierarchies.</li></ul><ul><li>Parent: Specifies that the baseline consists of the total on the level of the parent for the current member (that is, the item for which the share is being calculated). This option is disabled for a dimension that has no hierarchies. </li></ul><ul><li>Level: Specifies that the baseline consists of the total of a level to be specified. Choosing this item requires the selection of a value in the associated drop-down list. This list displays the names of levels from the selected hierarchy for the selected dimension that are available for calculating the share. This option is disabled for a dimension that has no hierarchies. </li></ul><ul><li>Member: Specifies that the baseline consists of the total for a dimension member to be specified. Choosing this item requires the selection of a value in the associated drop-down list. This list displays the names of the dimension members that are available for calculating the share. This type of calculation applies to measures only.</li></ul></ul>Note that although the most common use of this template is to express the share as a “% of Total” or as a “% of Parent” in the chosen hierarchy, a specific member can be used as the baseline of the calculation. 
This is useful if you want to compare members of the dimension in question to a specific benchmark or model member, such as an established market-leading product, flagship store, or key competitor.<br /><br />As this example shows, you may need to create several calculated measures from the same share template to provide different results, as shown below:<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjNQi54rUBBNR1kN_Kd2vppGGX_xSqQyZE9IrpoRJSW8t1c5GrxnxzfBHFTwAetpnQ78bShsV8VHcGbw3tDnUpFV3MUqXnxHnzHxOkETVc9oMsvwI9xdDvUx1nJ_SjxxS5fjF6LY4fBKLc/s1600-h/Image14.JPG"><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjNQi54rUBBNR1kN_Kd2vppGGX_xSqQyZE9IrpoRJSW8t1c5GrxnxzfBHFTwAetpnQ78bShsV8VHcGbw3tDnUpFV3MUqXnxHnzHxOkETVc9oMsvwI9xdDvUx1nJ_SjxxS5fjF6LY4fBKLc/s400/Image14.JPG" alt="" id="BLOGGER_PHOTO_ID_5174971841591347570" border="0" /></a><br />This report shows the Budget Profit base measure and three Share calculations: Share of Product Total, Share of Product Level, and Share of a Product Member (Hardware). Note how the “% of Total,” “% of Level,” and “% of HW” (hardware) category measures behave differently as the user drills down the product hierarchy. Also note that the third share measure on the report is base-lined to a specific product: the Hardware category. The report shows how Hardware compares to the other categories: Electronics, Peripherals and Accessories, Photo, and Software/Other.<br /><br /><span style="font-weight: bold;">Creating a % Difference from Prior Period Calculation</span><br />Using the “Percent Difference from Prior Period” calculated measure template, you can create a calculated measure that is useful for indicating growth or decline of a business over time. 
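Under the covers, the "Percent Difference from Prior Period" template generates simple lag arithmetic. As a hedged illustration (the monthly figures are invented, and this is not the OLAP engine's implementation), the computation looks like this in Python:

```python
# Sketch of "percent difference from prior period": compare each value
# with the value `offset` periods earlier; None where no prior exists.
def pct_diff_prior(series, offset=1):
    result = []
    for i, value in enumerate(series):
        if i < offset:
            result.append(None)  # no prior period to compare against
        else:
            prior = series[i - offset]
            result.append((value - prior) / prior * 100)
    return result

monthly_sales = [100.0, 110.0, 99.0]
print([None if p is None else round(p, 6) for p in pct_diff_prior(monthly_sales)])
# [None, 10.0, -10.0]
```

With monthly data, `offset=12` corresponds to the "Year ago" choice and `offset=1` to "Period ago"; the level-aware behavior described below (months compared only to months, quarters to quarters) is handled by the engine, not shown here.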
This calculation template is found in the Prior/Future Time Period calculation type folder. This template accepts input for the following items to calculate the percentage difference from a prior period:<br /><ul><li>Measure: Select a measure or a dimension member for which you want to calculate the percentage difference from the prior period. </li><li>Over: If there is more than one time dimension, then a box appears to enable the selection of the proper Time dimension. Otherwise, the default Time dimension is used. </li><li>In: Select the hierarchy for the specified dimension. </li><li>From: Choose one of the following items to indicate the previous time period that the comparison is to be based on: </li><ul><li>Year ago: Use if your measure is to compare performance with the same time period from the previous year</li></ul><ul><li>Period ago: Use if your measure is to compare performance with the previous period at the same level in the Time hierarchy</li></ul><ul><li>Number of periods or years ago: Use if your measure is to calculate a comparison with a time period of a specified number (entered in the number box) of periods ago, at a particular level (such as Year, Quarter, or Month)</li></ul></ul><br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjIUnoDGN0GW41KtCt15n1qiv3xUksGT1lePL_AW8qIKeYgAd_GBPJ8CbfQ3FKLbjbOg_ZJvvTrX2A59N6Jkv71UUmHPa77PDYbso0OglzQi8uVz7JMbNfkOiNYcb_C4m2GpCGA1wSqq_8/s1600-h/Image15.JPG"><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjIUnoDGN0GW41KtCt15n1qiv3xUksGT1lePL_AW8qIKeYgAd_GBPJ8CbfQ3FKLbjbOg_ZJvvTrX2A59N6Jkv71UUmHPa77PDYbso0OglzQi8uVz7JMbNfkOiNYcb_C4m2GpCGA1wSqq_8/s400/Image15.JPG" alt="" id="BLOGGER_PHOTO_ID_5174972545965984146" border="0" /></a><br /><br />In a report this would look like this:<br /><br /><a onblur="try 
{parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhgkhGF5S1ph2x7AAKGTu72koog4XBDlHn1GLJdFoJ6wUDQEcZAIbM0wgmnr7KmDC7ZeanC-I78MQHhoG9cXjJoLqPOyT7CFojoCqf6cMpiwU4wg-at1EZeoI0SwWC9GilsFwwFoVDQrDM/s1600-h/Image16.JPG"><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhgkhGF5S1ph2x7AAKGTu72koog4XBDlHn1GLJdFoJ6wUDQEcZAIbM0wgmnr7KmDC7ZeanC-I78MQHhoG9cXjJoLqPOyT7CFojoCqf6cMpiwU4wg-at1EZeoI0SwWC9GilsFwwFoVDQrDM/s400/Image16.JPG" alt="" id="BLOGGER_PHOTO_ID_5174972421411932546" border="0" /></a><br /><br />This report contains calculations of a number of alternative percentage differences from prior periods. All the measures automatically handle the situation in which the user needs to drill down into the time dimension and look at time periods at different levels. A single calculated measure in the AW can be used at any level of time, by any query tool, including SQL tools.<br /><br />Note the following:<br /><ul><li>The Last Year calculation works at all levels of time, and compares each time period with the same time period 1 year ago.</li><li>The Last Period calculation works at all levels of time, and compares each time period with the previous period at the same level.</li><li>The 3 Months Ago calculation works at the appropriate levels of time (in this case, Month and Quarter because a quarter is made up of three months), and compares each time period with the same time period 3 months ago (which is equivalent to one quarter ago). </li><li>Similar calculations can be easily generated for Costs, Quantity, Profit, and Budget measures.</li></ul><br /><span style="font-weight: bold;">Creating a Moving Average Calculation</span><br />The Moving Average calculated measure template enables you to create moving averages over any of the measures in your AW. 
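As a rough sketch of the rolling arithmetic such a template produces (illustrative Python only; it assumes partial windows at the start of the series are averaged over the available periods, which the engine may handle differently):

```python
# Moving average over the current period plus the previous n periods,
# mirroring the template's "Include Previous" input.
def moving_average(series, include_previous):
    result = []
    for i in range(len(series)):
        start = max(0, i - include_previous)
        window = series[start:i + 1]  # shorter window at the start of the series
        result.append(sum(window) / len(window))
    return result

sales = [10.0, 20.0, 30.0, 40.0]
# Three-period window (current month plus two previous):
print(moving_average(sales, include_previous=2))  # [10.0, 15.0, 20.0, 30.0]
```

Moving totals, maximums, and minimums replace the average with a sum, max, or min over the same window.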
Moving averages are very useful when you analyze volatile data, because they smooth out the peaks and troughs and enable you to more easily visualize the trend in data. In the Moving Average template, you are asked to provide the following input:<br /><ul><li>Measure: Select the measure for which you want to calculate a moving average. </li><li>Over Time In: If there is more than one time dimension, then a box appears to enable the selection of the proper time dimension. Otherwise, the default time dimension is used. In identifies the hierarchy for the specified dimension. </li><li>Include Previous: Enter the number of periods to be used for the calculation. </li><li>An example of this calculation is as follows:</li><li>Moving average of sales for the last three months = (Jan sales + Feb sales + March sales) / 3</li></ul>Note: Similar pages are used for Moving Totals, Moving Maximums, and Moving Minimums.<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh6lYvmRCO3NoF0IR4bA6q34ViiyeMVXsm8aaSL1n3059UaHT0xEs7NSmT4dBE14u0wSFublVObUZ4hTHliQ-clEl1xeZKkLSG8Tf5yFpEeRCSFS3xkyJjS6J9HLI7_LYcu-q9HBOb7dCE/s1600-h/Image17.JPG"><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh6lYvmRCO3NoF0IR4bA6q34ViiyeMVXsm8aaSL1n3059UaHT0xEs7NSmT4dBE14u0wSFublVObUZ4hTHliQ-clEl1xeZKkLSG8Tf5yFpEeRCSFS3xkyJjS6J9HLI7_LYcu-q9HBOb7dCE/s400/Image17.JPG" alt="" id="BLOGGER_PHOTO_ID_5174973263225522594" border="0" /></a><br /><br />Below is a combination graph showing how moving averages can be a useful way of smoothing out volatile data, thus enabling you to see the trends in data more easily.<br />One line is a moving six-month average, and the other line is a three-month average.<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" 
href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiMCBRxIjzSjr5Lm-he9fhvGikxeTFTWugAa30uJW-oItQ-Lu0HmmioaYGDODhDEPpi8lzjjeE7VXe3tX7JH_04yjQ3saaLWJI6Xd-JKmexQKO5hTPqtVrggFUG9Wpnd_OgqSu6HVf9ufM/s1600-h/Image18.JPG"><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiMCBRxIjzSjr5Lm-he9fhvGikxeTFTWugAa30uJW-oItQ-Lu0HmmioaYGDODhDEPpi8lzjjeE7VXe3tX7JH_04yjQ3saaLWJI6Xd-JKmexQKO5hTPqtVrggFUG9Wpnd_OgqSu6HVf9ufM/s400/Image18.JPG" alt="" id="BLOGGER_PHOTO_ID_5174973340534933938" border="0" /></a><br /><br /><br /><span style="font-weight: bold;">Modifying a Calculated Measure</span><br />Existing calculated measures can be edited from within AWM 10g. The descriptions and the calculation details can be changed. To change a calculated measure, click the calculated measure in the Model view. You see the general information displayed on the right. You can:<br /><ol><li>Make changes to labels and description. You can change the labels and description, but not the name.</li><li>Click the Launch Calculation Editor button to change the details of the calculated measure. 
You can change the details, but not the type, of the measure.</li></ol><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiPqFLlOHZIlDDiI98rSVOiudX8ish7hnDuYjdFF6wO-KdNCD60u2setL2RJoOi_8kSTEKdRbuOBZveQF_4qdGhbBTIiuUNY8alLXJCWxMQJw0ya0TpBxtGp0H7062ttFz0EujRDsUb8jE/s1600-h/Image19.JPG"><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiPqFLlOHZIlDDiI98rSVOiudX8ish7hnDuYjdFF6wO-KdNCD60u2setL2RJoOi_8kSTEKdRbuOBZveQF_4qdGhbBTIiuUNY8alLXJCWxMQJw0ya0TpBxtGp0H7062ttFz0EujRDsUb8jE/s400/Image19.JPG" alt="" id="BLOGGER_PHOTO_ID_5174973649772579266" border="0" /></a><br /><br /><br /><span style="font-weight: bold;">Managing Calculated Measures</span><br />My recommendation is always to create your calculated measures in a separate cube. This helps insulate you from changes to the physical implementation of your base cubes. For example, if you want to change the storage definition of a cube, AWM forces you to delete the cube; if that cube contains calculated measures, they are also deleted and need to be recreated. By keeping your calculated measures in a separate cube, you can delete and rebuild a cube without impacting the calculated measures, assuming of course you do not change the dimensionality.<br /><br />There are other ways to resolve this issue (deleting a cube but keeping the calculated measures):<br /><ul><li>Save the calculated measure to an XML template</li><li><span style="font-style: italic;">Hack</span> the XML definition for the cube</li></ul>Saving the calculated measures to an XML file is always a good idea since this creates a backup of the definition. 
However, you can save only one measure at a time. That is fine if you create the XML template when you define the calculated measure, but less convenient if a cube already contains many calculated measures and you then decide to save them all to XML templates.<br /><br />Hacking the XML is not something I would normally recommend; however, it is possible to move calculated measures from one template file to another using a plain text editor such as Notepad. Again, assuming you do not change the dimensionality (some measures may refer to specific dimensions, levels, or hierarchies), you can cut and paste the XML. The calculated measures are all defined in the last-but-one block of the XML definition, using the tag “DerivedMeasure”. Simply copy all the DerivedMeasure blocks to your new cube XML template and reload that template to restore all your calculated measures. This works for 10gR2 but has not been tested with 11g.<br /><br /><br /><span style="font-weight: bold;font-size:130%;" >Creating Custom Calculated Measures</span><br />The Oracle OLAP Option has a very powerful calculation engine that supports a huge library of functions:<br /><ul><li>Numeric Functions</li><li>Time Series Functions</li><li>Text Functions</li><li>Financial Functions</li><li>Statistical Functions</li><li>Date and Time Functions</li><li>Aggregation Functions</li><li>Data Type Conversion Functions</li></ul>Any of these functions can be used to create a custom calculated measure. 
For more information on these functions, refer to the Oracle OLAP DML Reference <span style="color: rgb(255, 0, 0); font-weight: bold;">10g</span> Release 2 documentation:<br /><br /><a href="http://download.oracle.com/docs/cd/B19306_01/olap.102/b14346/toc.htm">http://download.oracle.com/docs/cd/B19306_01/olap.102/b14346/toc.htm</a><br /><br />and for <span style="color: rgb(255, 0, 0); font-weight: bold;">11g</span>:<br /><br /><a href="http://download.oracle.com/docs/cd/B28359_01/olap.111/b28126/toc.htm">http://download.oracle.com/docs/cd/B28359_01/olap.111/b28126/toc.htm</a><br /><br />Here is a very simple example of how to create a custom calculated measure. Let's create a measure that shows the percent variance of a measure, sales revenue, relative to the prior period. To do this we use the LAGPCT function, which returns the percentage difference between the current value of a dimensioned variable or expression and its value at a specified offset earlier along a dimension.<br /><br />The syntax for the function is:<br /><ul><li>LAGPCT(variable, n, [dimension], [STATUS|NOSTATUS|limit-clause] )</li></ul>Where:<br /><ul><li>Variable - A variable or expression that is dimensioned by dimension.</li><li>‘n’ - The offset (that is, the number of dimension values) to lag. LAGPCT uses this value to determine the number of values that LAGPCT should go back in dimension to retrieve the value of variable. Typically, n is a positive INTEGER that indicates the number of time periods (or dimension values) before the current one. When you specify a negative value for n, it indicates the number of time periods after the current one. In this case, LAGPCT compares the current value of the time series with a subsequent value.</li><li>Dimension - The dimension along which the lag occurs. 
While this can be any dimension, it is typically a hierarchical time dimension of type TEXT that is limited to a single level (for example, the month or year level) or a dimension with a type of DAY, WEEK, MONTH, QUARTER, or YEAR. When variable has a dimension with a type of DAY, WEEK, MONTH, QUARTER, or YEAR and you want LAGPCT to use that dimension, you can omit the dimension argument.</li><li>Status can be one of the following:</li><ul><li>STATUS - Specifies that LAGPCT should use the current status list (that is, only the dimension values currently in status, in their current status order) when computing the lag.</li></ul><ul><li>NOSTATUS - (Default) Specifies that LAGPCT should use the default status (that is, a list of all the dimension values in their original order) when computing the lag.</li></ul><ul><li>limit-clause - Specifies that LAGPCT should use the default status limited by limit-clause when computing the lag. You can use any valid LIMIT clause (see the entry for the LIMIT command for further information). 
To specify that LAGPCT should use the current status limited by limit-clause when computing the lag, specify a LIMIT function for limit-clause.</li></ul></ul>Based on this syntax, the format of our function would be as follows:<br /><ul style="font-family: courier new;"><li>lagpct(sales_revenue, 1, TIME, LEVELREL TIME_LEVELREL)</li></ul>Here, sales_revenue is the variable; we offset by one period to get the prior period; the dimension for the lag is TIME; and the limit clause is based on the standard form level object TIME_LEVELREL, which ensures that the correct prior period is selected based on the level of the dimension member, so months are compared only to months, quarters only to quarters, and so on.<br /><br /><span style="font-weight: bold;">How do you install a Custom Calculation?</span><br />There are two ways to add a custom calculation to a cube:<br /><ul><li>Special XML Template</li><li>Excel Utility</li></ul>It is possible to use an XML template file to define a custom calculated measure. As we noted earlier, in the XML definition of a cube containing a calculated measure, the tag “DerivedMeasure” denotes a calculated measure. 
The template needs to have the following fields:<br /><ul><li>Name</li><li>LongName</li><li>ShortName</li><li>PluralName</li><li>Id</li><li>DataType</li><li>IsInternal</li><li>UseGlobalIndex</li><li>ForceCalc</li><li>ForceOrder</li><li>SparseType</li><li>AutoSolve</li><li>IsValid</li><li>ExpressionText</li></ul>So for our example the following entries would be required:<br /><br /><ul><li>Name="SR_PPV_PCT" </li><li>LongName="Sales Revenue Prior Period % Variance" </li><li>ShortName="Sales Rev Prior Period % Var" </li><li>PluralName="Sales Revenue Prior Period % Variance"</li><li>Id="SALES.SR_PPV_PCT.MEASURE"</li><li>DataType="Decimal" </li><li>isInternal="false" </li><li>UseGlobalIndex="false" </li><li>ForceCalc="false" </li><li>ForceOrder="false" </li><li>SparseType="STANDARD" </li><li>AutoSolve="DEFAULT" </li><li>IsValid="true" </li><li>ExpressionText="lagpct(sales_revenue, 1, TIME, LEVELREL TIME_LEVELREL)"/></li></ul><br />Note the following:<br /><br /><ul><li>Name must match the “custom_calculated_measure_name” value in the Id tag.</li><li>Id is derived as follows:</li><ul><li>Cube_name.custom_calculated_measure_name.MEASURE</li></ul><li>ExpressionText can refer either to the AWM Object View names or to the physical objects from the Model View. It can be more efficient to refer directly to the stored variables rather than to the standard form objects, since the latter involve an additional layer of processing that is not always necessary. But start by referring to the standard form objects and check query performance before pointing directly to the base storage objects.</li></ul>Fortunately, there is a much easier way to install a custom calculated measure. On OTN there is an Excel utility that can help. 
See the link on the OLAP OTN Home Page, “Creating OLAP Calculations using Excel”:<br /><br /><a href="http://download.oracle.com/otn/java/olap/SpreadsheetCalcs_10203.zip">http://download.oracle.com/otn/java/olap/SpreadsheetCalcs_10203.zip<br /></a><br />Follow the instructions in the readme file and then open the spreadsheet included in the zip. This utility can be used to install both custom and standard calculations (those generated by the Calculated Measure Wizard), which makes installing calculations into a cube a quick and simple exercise. However, you do need to understand how the underlying functions are implemented, as some of the templates require you to provide inputs such as “offset”, “start”, “stop”, and “step”. The example worksheet provided includes examples for each type of template, which makes it much easier to understand the values these templates require. Using Excel is also a good way to back up all your calculation definitions, and it makes it very easy to install the calculations into different environments, such as test, training, QA, and production.<br /><br />To use this utility, follow these steps:<br /><br />Step 1: Define your connection<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEilDmFX83_lOijDpwkqL0AySV6GpFkADk2WhGR8Yd7M7boJHLrf0dQXq_1jlsYhxIAYnTi2eSihOudLYI1dvasXgzmXEPQ_6vds0eNw_WwX80orQVCB5tMY4hYS4K6nTQBiXTmz2seBHB8/s1600-h/Image6.JPG"><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEilDmFX83_lOijDpwkqL0AySV6GpFkADk2WhGR8Yd7M7boJHLrf0dQXq_1jlsYhxIAYnTi2eSihOudLYI1dvasXgzmXEPQ_6vds0eNw_WwX80orQVCB5tMY4hYS4K6nTQBiXTmz2seBHB8/s400/Image6.JPG" alt="" id="BLOGGER_PHOTO_ID_5174959923057101042" border="0" /></a><br /><br />This should match the details you set in AWM to connect to your analytic workspace.<br /><br />Step 
2: Select an AW<br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhgiYWJANkrPfkBxAQo4F1xF5P9DZojpXkRdTS5-YVeNyD9Dt3K7-LEkhslkSBlfsEzUrQ0-DFYqbjTTEThOfu1ox_4SLh_mtWWqvePOxRfAZ8AkzSEqLBxZ3onBCYuvIPCD_VlcUWIMCA/s1600-h/Image7.JPG"><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhgiYWJANkrPfkBxAQo4F1xF5P9DZojpXkRdTS5-YVeNyD9Dt3K7-LEkhslkSBlfsEzUrQ0-DFYqbjTTEThOfu1ox_4SLh_mtWWqvePOxRfAZ8AkzSEqLBxZ3onBCYuvIPCD_VlcUWIMCA/s400/Image7.JPG" alt="" id="BLOGGER_PHOTO_ID_5174960163575269634" border="0" /></a><br /><br />Once the connection is established the next stage is to select an AW. Each user is not limited to owning and/or using just one AW. In most implementations an OLAP user may have access to multiple AWs. Therefore, it is important to select the required AW before creating any calculations. A pulldown list of available AWs is provided just below the “Connection Details” button.<br /><br />Step 3: Defining the calculation type<br />This utility will allow you to create both custom and pre-defined calculations. 
The column headed “Calculation Type” can be toggled between two values:<br /><ul><li>Template</li><li>Equation</li></ul><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjMDt9dIfofInftztlnWcivAo4JHHJVuhHni3lfzI5GOYmW26FHLqIVk0a1AcnZs6HJ0YunHNsz2YZ354eLWBxk1TqVzdBpWJ_zpOBCNWw86hDJ-86MED-xIdAtOT21tWwEHL2A3Fi64pA/s1600-h/Image8.JPG"><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjMDt9dIfofInftztlnWcivAo4JHHJVuhHni3lfzI5GOYmW26FHLqIVk0a1AcnZs6HJ0YunHNsz2YZ354eLWBxk1TqVzdBpWJ_zpOBCNWw86hDJ-86MED-xIdAtOT21tWwEHL2A3Fi64pA/s400/Image8.JPG" alt="" id="BLOGGER_PHOTO_ID_5174960532942457106" border="0" /></a><br /><br />The “Template” option installs one of the calculations from the AWM Calc Wizard, and the “Equation” option allows you to define a free-format equation.<br /><br />Step 4: Basic details.<br />Each measure needs a name (the physical storage name for the measure, so it cannot contain spaces or certain characters such as %, $, or £), a long label, and a short label, and it must be assigned to a specific cube.<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi_fxKDhyKMTcHOcvgalCxkRNCJO1nfUbLLuL2rSo7e1-J19DOjjH6vmg2q4W4FSAIL5hRfCzBRsbYp5_7fOHW-2zmhxEngu9Vh9qPN_zHZU5AxuWB6ve0c3H3v0wTdGC0Xf2ilYcQAhuk/s1600-h/Image9.JPG"><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi_fxKDhyKMTcHOcvgalCxkRNCJO1nfUbLLuL2rSo7e1-J19DOjjH6vmg2q4W4FSAIL5hRfCzBRsbYp5_7fOHW-2zmhxEngu9Vh9qPN_zHZU5AxuWB6ve0c3H3v0wTdGC0Xf2ilYcQAhuk/s400/Image9.JPG" alt="" id="BLOGGER_PHOTO_ID_5174960747690821922" border="0" /></a><br /><br />A pulldown list can be used to select the target cube. 
The next column to the right allows you to assign the measure to a measure folder.<br /><br />Step 4a: Template Calculations<br />If you are defining a calculation based on a template, a pulldown list of available templates is provided in the column marked “Calculation Template”.<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj1SHqRdtmokbIHHmxYhu7ALs0OqIf_oumj_R0vBf09C759HPngDESE9Xr0_4Jja2_-6z7F7Qor8WJTS9TpcbrnRNEXRYFSpBcK4_4vJCgThlYbkYmxLlSOCbQ0ykZ6l4PK7Bhio1CjIhA/s1600-h/Image10.JPG"><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj1SHqRdtmokbIHHmxYhu7ALs0OqIf_oumj_R0vBf09C759HPngDESE9Xr0_4Jja2_-6z7F7Qor8WJTS9TpcbrnRNEXRYFSpBcK4_4vJCgThlYbkYmxLlSOCbQ0ykZ6l4PK7Bhio1CjIhA/s400/Image10.JPG" alt="" id="BLOGGER_PHOTO_ID_5174960898014677298" border="0" /></a><br /><br />At this point it is a good idea to refer to the sample worksheet, as it shows you how to complete the additional columns to the right that manage the arguments for the templates:<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi8fXJFMIcUL4PKDg3mbo1R2o5R4SVAG3NSjV9THGzyEzlPb0MJSPHf7NZ3yu4RLTNd89YJ5XHVhyphenhyphenK7sgZIfwCquGx4D8Ru1xH05pkQ2QDwCruPD7PwvM1sHIv3cZdE2b3W_sI6t0mm3r0/s1600-h/Image11.JPG"><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi8fXJFMIcUL4PKDg3mbo1R2o5R4SVAG3NSjV9THGzyEzlPb0MJSPHf7NZ3yu4RLTNd89YJ5XHVhyphenhyphenK7sgZIfwCquGx4D8Ru1xH05pkQ2QDwCruPD7PwvM1sHIv3cZdE2b3W_sI6t0mm3r0/s400/Image11.JPG" alt="" id="BLOGGER_PHOTO_ID_5174961082698271042" border="0" /></a><br />The inputs are:<br /><ul><li>Base measure – a pulldown list of all available measures is provided</li><li>Dimension – the 
base dimension, which for most of the templates tends to be Time (pulldown list is available)</li><li>Hierarchy – the main hierarchy from the time dimension (pulldown list is available)</li><li>Level – the target level from the time dimension (pulldown list is available)</li><li>Other numeric arguments determined by the type of template</li></ul><br />All this information is taken from the Calculation Wizard so if you want to check your inputs simply run the calculation wizard in AWM and note the inputs for the specific template.<br /><br />Step 4b: Equation Templates:<br />To create a custom calculation set the calculation type to “Equation” and then in the “Free Form Equation” column enter the formula using either the standard form object names or the physical storage object names. The equations can be one of three basic data types:<br /><ul><li>Decimal</li><li>Integer</li><li>Text</li></ul>In the example below (taken from the sample spreadsheet) two calculated measures are defined, one decimal and one text:<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhExgTT8O0Ul3ghw8Pms3sJ1a8W2_0gt4BiawxqQSAnLG4vKJBv6Qz4nAkx2HmSjwOOFwz4Bjr8-ZVILmvBlIfqxghyphenhyphenjBcFNMZ7YLFHxsQAOuDwpb20KF9Jq295kzHiQ9QasKD5lJ_cSIY/s1600-h/Image12.JPG"><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhExgTT8O0Ul3ghw8Pms3sJ1a8W2_0gt4BiawxqQSAnLG4vKJBv6Qz4nAkx2HmSjwOOFwz4Bjr8-ZVILmvBlIfqxghyphenhyphenjBcFNMZ7YLFHxsQAOuDwpb20KF9Jq295kzHiQ9QasKD5lJ_cSIY/s400/Image12.JPG" alt="" id="BLOGGER_PHOTO_ID_5174961512195000658" border="0" /></a><br /><br />Measure name = <span style="font-size:85%;"><span style="font-family:courier new;">PROFIT</span></span><br />Equation = <span style="font-size:85%;"><span style="font-family:courier new;">SALES.SALES.MEASURE - SALES.COST.MEASURE </span></span><br 
/>Data Type = <span style="font-size:85%;"><span style="font-family:courier new;">DECIMAL</span></span><br /><br />Measure Name = <span style="font-size:85%;"><span style="font-family:courier new;">HOW_IS_MARGIN </span></span><br />Equation = <span style="font-size:85%;"><span style="font-family:courier new;">If SALES.PROFIT.MEASURE/SALES.SALES.MEASURE gt .2 then 'GROOVY' else if SALES.PROFIT.MEASURE/SALES.SALES.MEASURE lt .1 then 'YIPES' else 'WHATEVER'</span></span><br />Data Type = <span style="font-size:85%;"><span style="font-family:courier new;">TEXT</span></span><br /><br />Step 5: Installing the Calculated Measures<br />Once all your calculations are defined, simply click the “Define Calculations” button.<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiy8K4yFc6CcBURw-kz4nRqok5so3lsXKyq_FVCkNdUX35Pjsvxa6_HqlpncWsHQ_iON9nXhP9LQlv5VvVWi1CzT4lUpDq9r7M9Wqs9LrWuSEtv8cGnWrG11vnoPF1VnpcfwiCif4KxE60/s1600-h/Image13.JPG"><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiy8K4yFc6CcBURw-kz4nRqok5so3lsXKyq_FVCkNdUX35Pjsvxa6_HqlpncWsHQ_iON9nXhP9LQlv5VvVWi1CzT4lUpDq9r7M9Wqs9LrWuSEtv8cGnWrG11vnoPF1VnpcfwiCif4KxE60/s400/Image13.JPG" alt="" id="BLOGGER_PHOTO_ID_5174962366893492578" border="0" /></a><br /><br />This launches a command window where the OLAP AW XML Java API is used to load the calculated measures defined in the worksheet into the target AW. During the installation process, feedback is written to the command window. Any errors will be visible in this window and must be resolved before trying the installation process again. 
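As an aside, the HOW_IS_MARGIN equation shown in Step 4b is ordinary conditional logic. Rendered in Python purely for illustration (the thresholds come from the sample spreadsheet):

```python
# Python rendering of the sample HOW_IS_MARGIN text measure:
# margin ratio above 20% -> 'GROOVY', below 10% -> 'YIPES', else 'WHATEVER'.
def how_is_margin(profit, sales):
    ratio = profit / sales
    if ratio > 0.2:
        return 'GROOVY'
    if ratio < 0.1:
        return 'YIPES'
    return 'WHATEVER'

print(how_is_margin(30.0, 100.0))  # GROOVY
print(how_is_margin(5.0, 100.0))   # YIPES
print(how_is_margin(15.0, 100.0))  # WHATEVER
```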
This utility will overwrite an existing calculated measure, so updating an AW is quick and easy: there is no need to first delete any existing calculated measures.<br /><br />Once the measures have been deployed, I recommend starting AWM and checking that all the calculated measures were correctly installed and do in fact return data. Sometimes it is easier to do this via the OLAP Worksheet, especially when checking the data, since you can limit the various dimensions to a nice small subset of the data.<br /><br />It is possible for a calculated measure to be installed and visible in AWM but not physically present, which seems a little odd. This usually implies an issue with the naming convention: the name allowed the object to be added to the metadata catalog, but the physical name generated an error for some reason. The easiest solution is to delete the calculated measure using AWM and try again after checking the name in the Excel worksheet.Keith Lakerhttp://www.blogger.com/profile/01039869313455611230noreply@blogger.com1tag:blogger.com,1999:blog-3820031471524503731.post-85292902404008954232008-01-24T05:42:00.000-08:002008-12-11T15:25:39.651-08:00OLAP Workshop 6 : Advanced Cube DesignIn the previous workshop we looked at creating a cube while letting AWM manage the other features. In most cases these default settings provide good load and query performance. Certainly, for the data model that supports the 10g common schema, the default settings do a great job and make life much easier. Consequently, you can design and build the analytic workspace using the data sourced from the SH schema in about 15 minutes.<br /><br />In some cases you may need to move beyond the default settings, and in the next few sections we will look at the other tabs that are part of the Cube wizard. These tabs control sparsity, compression and partitioning features, aggregation rules, and summarization strategies. 
The tabs and the features they control will be explained in the following order:<br /><ul><li>Implementation Details</li><li>Rules</li><li>Summarize To</li><li>Cache</li></ul><br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj8i76rrA1eChxDE4AQRlWRChc-SMyk6Dpnh0_iIRplc1tCDauhpMcQ26rVxrnd4KQZKXqUH6_D2a_G0Hu9T1OekuOqlUqQ11tvBrQRZSyL0uNVj_Vdzo1q0AND3D9apbwi1sUNl5hWEPg/s1600-h/Image1.JPG"><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj8i76rrA1eChxDE4AQRlWRChc-SMyk6Dpnh0_iIRplc1tCDauhpMcQ26rVxrnd4KQZKXqUH6_D2a_G0Hu9T1OekuOqlUqQ11tvBrQRZSyL0uNVj_Vdzo1q0AND3D9apbwi1sUNl5hWEPg/s400/Image1.JPG" alt="" id="BLOGGER_PHOTO_ID_5159043973661869346" border="0" /></a><br /><br />But before proceeding, there is one important thing you should always do before starting to load data into a cube (or a dimension) – review the data in as much detail as possible. Data quality is a subject that most companies don’t even consider when building cubes, and most consultants just take the data given to them and load it without question.<br /><br />In any project I would allocate 10-30% of the time to looking at the data. The information gained at this stage will provide huge benefits later when you need to determine sparsity patterns (explained later). On a recent customer project I was asked to tune a cube to improve load and aggregation times. When we started to review the data we noticed some very, very large numbers in one of the measures. After a lot of analysis we determined that the ETL process computing the figure into the fact table had a mistake. Unfortunately both the developers and business users failed to identify this error. 
To compound the problem, the data formed a key business metric.<br /><br />Therefore, NEVER EVER start loading data until you have checked the quality. Ideally you should use the data quality features of Warehouse Builder, which can significantly speed up this process. There are a number of presentations relating to data quality on the Warehouse Builder OTN home page.<br /><br /><span style="font-size:130%;"><span style="font-weight: bold;">Implementation Details Tab</span><br /></span>Most of the advanced options for tuning your multidimensional model are found on the Implementation Details tabbed page of the Create Cube wizard. As shown below the Implementation Details tabbed page contains four important tuning features of Oracle OLAP. The correct use of these features ensures that your analytic workspace is very efficient and is implemented in an optimal way.<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiJllmTTl7R2IuvND7SbVnpYaI8xpdtwqICjhWRh4ss7f5L9UjCYFIfPXc_R-oAAJ68Y5rngrWtZdG5Z7wgdLvJZUFs33CWQ3KHBTqr7BgkDvwrr1bDkQPv9L4ubZRMChsI3Kn8O8AXe5s/s1600-h/Image2.JPG"><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiJllmTTl7R2IuvND7SbVnpYaI8xpdtwqICjhWRh4ss7f5L9UjCYFIfPXc_R-oAAJ68Y5rngrWtZdG5Z7wgdLvJZUFs33CWQ3KHBTqr7BgkDvwrr1bDkQPv9L4ubZRMChsI3Kn8O8AXe5s/s400/Image2.JPG" alt="" id="BLOGGER_PHOTO_ID_5159044162640430386" border="0" /></a><br /><br />1.<span style="font-weight: bold;">Sparsity</span>: AWM 10g, by default, applies the common best practice in deciding which of your dimensions should be marked as “sparse” when you create a cube. 
Sparsity refers to the natural phenomenon evident in all multidimensional data to some degree: Not all the cells in the logical cube (the total possible combinations of all the dimension members for each dimension of the cube) will ever contain data. It is very common for a relatively small percentage of the possible combinations to actually store data. By understanding the sparsity of the data you expect to load into your AW, you can tune how it handles that sparsity and improve the performance of data loading and aggregation and reduce the disk-storage requirements for the populated AW.<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg8AM3CSv9NCsEZPgmIXQANcs715sspqDHzzpSxWP6rTk0tZ3Tfiros22z9cI3-tN67P9oIBiqG_RaVuF9mSiGx0ZqLRgIPw_DdDcctW8b5c8xX2Lax0U8dLYbnIBIs3TCEeYnJWSefKZo/s1600-h/Image3.JPG"><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg8AM3CSv9NCsEZPgmIXQANcs715sspqDHzzpSxWP6rTk0tZ3Tfiros22z9cI3-tN67P9oIBiqG_RaVuF9mSiGx0ZqLRgIPw_DdDcctW8b5c8xX2Lax0U8dLYbnIBIs3TCEeYnJWSefKZo/s400/Image3.JPG" alt="" id="BLOGGER_PHOTO_ID_5159046580707018114" border="0" /></a><br /><br />After you understand which dimensions are sparse and which are not, their order can be important. When there are a large number of empty cells in a cube, the cube is said to be “sparse.” For example, if you are a manufacturer of consumer-packaged goods, you do not sell one or more of every single product you make to every customer, every day, through every sales channel. Different customers buy different products, at different time intervals, and each customer probably has a preferred channel. 
Different products may display different sparsity patterns: Ice creams and cold drinks tend to sell faster in the summer, whereas warm arctic coats are more popular in the winter (particularly in cold locations).<br /><br />When using multidimensional technology, pay attention to sparsity so that you can design cubes efficiently. The effect of sparsity in data (combined with a badly designed cube) can be tremendous growth in disk usage and a corresponding increase in the time taken to update and recalculate data in the cube. Inefficient sparsity control in any multidimensional data store can result in many empty cells actually being physically stored on disk. This is less of a concern with relational technology, because it is rare to store a completely null row in a table.<br />Oracle OLAP automatically deals with sparsity up to a point. But you, as a cube builder, can provide Oracle OLAP with the information that you know about your data (and information that Oracle OLAP needs to know) so that it can deal with that data extremely efficiently.<br /><br />Cube designers express sparsity in percentage terms. Data is said to be 5% dense (or 95% sparse) if only 5% of the possible cells in a multidimensional measure or cube actually contain data. In many cases, data is very sparse, especially sales and marketing data. Only very aggregated data with a fairly small number of dimensions is typically dense enough that you need not consider sparsity.<br /><br />Sparsity tends to increase with the number of dimensions and with the number of levels and hierarchies in each dimension. As you add dimensions to the definition of a cube, the number of possible cell combinations can increase exponentially. Also, the granularity of data affects sparsity: low-level, detailed data is much more sparse than aggregate data, while highly aggregated data is typically dense. Particular combinations of dimensions typically have different sparsity from others. 
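<br /><br />The 5%-dense arithmetic above is easy to check with a toy cube. The sketch below is purely illustrative (Python standing in for the AW, with made-up member counts and cell counts):

```python
# Toy illustration of cube density: populated cells vs. all possible
# dimension-member combinations. All figures are invented for the example.
products = ["P%d" % i for i in range(100)]   # 100 products
customers = ["C%d" % i for i in range(50)]   # 50 customers
days = ["D%d" % i for i in range(365)]       # 365 days

possible = len(products) * len(customers) * len(days)  # 1,825,000 cells

# Pretend only 91,250 combinations actually hold a sales value.
populated = 91_250

density = populated / possible
print(f"density = {density:.1%}, sparsity = {1 - density:.1%}")
# → density = 5.0%, sparsity = 95.0%
```

The same ratio, computed from a simple row count on the fact table divided by the product of the dimension sizes, is usually enough to tell you which dimensions to consider marking sparse.<br /><br />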
For example, Time dimensions and Line dimensions are often more dense than dimensions such as Product, Customer, and Channel. This is because combinations of customers and products are sparser than combinations of customers and time, or of products and time. For this reason, AWM 10g asks you to confirm which of the dimensions for your data are sparse and which ones are dense.<br /><br />In most cases I would recommend making all dimensions sparse. However, there are some additional considerations. The most important is the use of partitioning, and we will look at this in one of the following sections. Sometimes, you may need to build a cube with different sparsity settings to determine the most efficient combination. In some cases making Time dense will generate a highly efficient cube, and in other situations it will massively extend the time taken to load and aggregate data. The best method is to use an iterative development approach, but as with any tuning exercise, be careful not to change too many settings at once, as it becomes difficult to interpret the results.<br /><br />A very common mistake I see with many customers is that they insist on loading a zero balance into a measure. This is quite pointless, since a zero balance does not affect the overall total. It can be important to differentiate between an NA row and a zero row, but for 99.9% of analysis it is possible to infer one from the other. Therefore, when loading data into a cube, add an additional filter to remove zero and NA rows, since this will provide huge savings in load and aggregation times. 
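<br /><br />The zero/NA filter recommended above amounts to a single predicate on the source query. A minimal sketch (plain Python with invented rows; the commented SQL shows the kind of predicate you would add to the source view):

```python
# Hypothetical fact rows: (product, customer, day, sales). The filter
# drops rows that contribute nothing to any aggregate total.
fact_rows = [
    ("P1", "C1", "2008-12-01", 150.0),
    ("P1", "C2", "2008-12-01", 0.0),   # zero balance: no effect on totals
    ("P2", "C1", "2008-12-02", None),  # NA row: nothing worth loading
    ("P2", "C2", "2008-12-02", 75.0),
]

loadable = [r for r in fact_rows if r[3] not in (None, 0.0)]
print(len(loadable))  # → 2

# The equivalent filter on a relational source would be something like:
#   WHERE sales IS NOT NULL AND sales <> 0
```

On a fact table where a large share of rows are zero or NA, this one predicate shrinks the load set before the AW ever sees it.<br /><br />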
I was working on a project recently where a fact table contained 75 million rows of data and 65% of those rows contained 0 or NA.<br /><br />2.<span style="font-weight: bold;">Dimension order</span>: It is possible to improve the build and aggregation performance of your AW by tuning the order in which the dimensions are specified in your cube.<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi2CDP94uDf5d-BYBj0cuaie69Z5HUTdl6tNeErxItgjcSOdRCvgScv6cOA7j4kZK43vIdmNv0Xd9Rb2eZZxGO1YM8ePVYf6cX9wE60upbFKJrSfc_3PZopPu-Iz30OREBA5c2wbXcDb6g/s1600-h/Image4.JPG"><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi2CDP94uDf5d-BYBj0cuaie69Z5HUTdl6tNeErxItgjcSOdRCvgScv6cOA7j4kZK43vIdmNv0Xd9Rb2eZZxGO1YM8ePVYf6cX9wE60upbFKJrSfc_3PZopPu-Iz30OREBA5c2wbXcDb6g/s400/Image4.JPG" alt="" id="BLOGGER_PHOTO_ID_5159044867015066962" border="0" /></a><br /><br />When using the compression feature (discussed below), it is usually best to have a relatively small, dense dimension (such as Time) first in the list, followed by a group of all the sparse dimensions. Furthermore, it is generally the best practice to list the sparse dimensions in order of their size: from the one with the fewest members to the one with the most.<br /><br /><span style="font-size:85%;"><span style="font-weight: bold;">Note 1</span>: Sparsity and dimension order are generally considered at the same time, which is why these choices are grouped together in the AWM 10g user interface.<br /></span><br />My recommendation is to try building your cube with Time marked sparse and then try with Time marked dense. The effect on load times varies according to the nature of the source data. I recently worked on a project where we marked all the dimensions as sparse and loaded a trial data set in 4 hours. 
By making Time dense, the same dataset loaded in 1 hour. Therefore, it pays to understand your data. But, most importantly, don’t assume you will get the data model right first time.<br /><br /><br />3.<span style="font-weight: bold;">Compressed cubes and Global Composites</span>: Version 10g of Oracle OLAP provides a new, internationally patented technology for the AW, which is exposed via a simple check box in AWM.<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiNWLwInQPNMbHWUw_Wdv8eWxSdBxQKAMxXe0MvAP8x2iHqWX-PThszLMo-ZdzNNIJgK-h4o3GZObHGtXROOsXwlmq-3OWEUSQXNTMGkZvuPhCuyJe_jrT-Cnph0nWyaQopd9uBLudlKIw/s1600-h/Image5.JPG"><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiNWLwInQPNMbHWUw_Wdv8eWxSdBxQKAMxXe0MvAP8x2iHqWX-PThszLMo-ZdzNNIJgK-h4o3GZObHGtXROOsXwlmq-3OWEUSQXNTMGkZvuPhCuyJe_jrT-Cnph0nWyaQopd9uBLudlKIw/s400/Image5.JPG" alt="" id="BLOGGER_PHOTO_ID_5159045167662777698" border="0" /></a><br /><br />This is an extremely powerful data storage and aggregation algorithm optimized for sparse data. It is a new technology that is often dramatically faster than any previous OLAP server technology when aggregating sparse multidimensional data. The use of this feature can improve aggregation performance by a factor of 5 to 50. At the same time, query performance can improve, and disk storage is often also dramatically reduced. This feature is ideal for large volumes of sparse data but not suitable for all cubes (especially dense cubes).<br /><br />If the “Use Compression” option is selected, then additional efficiency can often (but not always) be achieved by marking all dimensions (including Time) as sparse, especially for sparse data where there is known seasonality in the data, and especially if your AW is also partitioned on Time. 
But see my previous notes regarding this subject.<br /><br />As we use this feature on more and more projects, it is becoming clear that just about every cube will benefit from compression. Now there are some exceptions, such as cubes where you plan to use an application to write data back directly into the cube, but such situations are easily managed by posting the updated data to a relational table and using the normal data load procedures to import and aggregate the data.<br /><br />Note: Dimension order is unimportant when using compression. The multidimensional engine automatically determines how best to physically order the data after it is loaded.<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg09-ZVyN1RtYDJbFv1fv0BmzTszrM69iYrnw8D05taWpuXPm3HmWWj9IZ2Fu8lRxEMT5p9o4Ujk3E3mqUNXYnVrd6yJDh0SZwSyMjHgEStXaZE85bJcffchojSOpsWGPK6_LXpLbzU9Cw/s1600-h/Image6.JPG"><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg09-ZVyN1RtYDJbFv1fv0BmzTszrM69iYrnw8D05taWpuXPm3HmWWj9IZ2Fu8lRxEMT5p9o4Ujk3E3mqUNXYnVrd6yJDh0SZwSyMjHgEStXaZE85bJcffchojSOpsWGPK6_LXpLbzU9Cw/s400/Image6.JPG" alt="" id="BLOGGER_PHOTO_ID_5159046791160415634" border="0" /></a><br /><br />A composite is an analytic workspace object used to manage sparsity. It maintains a list of all the sparse dimension-value combinations for which there is data. By ignoring the sparse “empty” combinations in the underlying physical storage, the composite reduces the disk space required for sparse data. When data is added to a measure dimensioned by a composite, the AW automatically maintains the composite with any new values.<br /><br />A “global” composite is simply a single composite for all data in a cube. 
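<br /><br />Conceptually, a composite is just an index over the non-empty combinations. This toy sketch (a Python dict standing in for the AW’s internal structure; all names invented) shows why empty combinations cost nothing:

```python
# Conceptual model of a composite: only the sparse dimension-member
# tuples that actually hold data are recorded; the many empty
# combinations are simply absent and occupy no storage.
composite = {}  # (product, customer, channel) -> stored cell value

def store(product, customer, channel, value):
    # Writing a cell automatically maintains the composite.
    composite[(product, customer, channel)] = value

store("Ice Cream", "Acme Stores", "Direct", 120.0)
store("Arctic Coat", "Polar Outfitters", "Catalog", 45.0)

# 100 products x 50 customers x 3 channels = 15,000 possible tuples,
# but only the 2 populated ones are physically stored.
print(len(composite))  # → 2
```

The real composite also drives aggregation: the engine loops over the recorded combinations only, never over the empty space.<br /><br />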
Depending on the Compression and Partitioning choices you make, the behaviour of AWM will vary.<br /><br />When would you opt to create Global Composites? The answer is very rarely. It can be beneficial to select this option in the case of a non-compressed cube that is partitioned. But as stated above, it is probably best to use compression on just about every cube you create, so you should probably leave the option unselected.<br /><br /><br />4.<span style="font-weight: bold;">Partitioned cubes</span>: You can partition your cube along any level in any hierarchy for a dimension. This is another way of improving the build and aggregation performance of your AW, especially if your computer has multiple CPUs. Oracle Database 10g (and thus the OLAP option) can run on single-CPU computers, large multi-CPU computers, and (with Real Application Clusters and Grid technology) clusters of computers that can be harnessed together and used as if they are one large computer. Oracle OLAP is, therefore, perhaps the most scalable OLAP server available.<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjzRpp9bJfFdTUQfkQbugTGkPAw9pFBLtb3RYrINA7-VEfjNgtE0g0nMkqcEkw3GXyDqP4ouQi1DGnhq3bx4VTymsSmwLERJ3WHtFbYI9uB1M5zDf2btkDcu7tcanpmSwSiHMOO0yncrRk/s1600-h/Image7.JPG"><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjzRpp9bJfFdTUQfkQbugTGkPAw9pFBLtb3RYrINA7-VEfjNgtE0g0nMkqcEkw3GXyDqP4ouQi1DGnhq3bx4VTymsSmwLERJ3WHtFbYI9uB1M5zDf2btkDcu7tcanpmSwSiHMOO0yncrRk/s400/Image7.JPG" alt="" id="BLOGGER_PHOTO_ID_5159047035973551522" border="0" /></a><br /><br />Using partitioning does have certain knock-on consequences in 10g, but these are resolved in 11g. 
In 10g, when you look at the “Summarize To” tab (this will be explained later), the levels above the partition key cannot be pre-aggregated and have to be solved at query time. Therefore, it is critical to select an appropriate level as the partition key so that query performance is maintained. Let us consider the example of the Time dimension:<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhPcm5XnO8R49kXp4BC36j8BSC8V4qzBygtgnplqNyy0J-T1jJxgrkS44uLi6r0GfhFDbOjiclsGeQoGzkOxzEsEB__7WWXszLube8TLFk5ZFwwl5-A420SuUWWTEGrqPi7qMkriuPmNDM/s1600-h/Image8.JPG"><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhPcm5XnO8R49kXp4BC36j8BSC8V4qzBygtgnplqNyy0J-T1jJxgrkS44uLi6r0GfhFDbOjiclsGeQoGzkOxzEsEB__7WWXszLube8TLFk5ZFwwl5-A420SuUWWTEGrqPi7qMkriuPmNDM/s400/Image8.JPG" alt="" id="BLOGGER_PHOTO_ID_5159047216362177970" border="0" /></a><br /><br />If we use Day as the partition key, each individual partition will be small, which should improve load times and aggregation times. But when a user creates a query based on yearly data, 365 values have to be aggregated at run time for each cell being referenced within the query. Depending on the hardware, this might or might not provide acceptable query performance.<br /><br />If we use month as the partition key, each individual partition will still be relatively small, and load times and aggregation times should still be acceptable. Each partition will hold between 28 and 31 days’ worth of data, and in this case it would be prudent to make Time sparse within the model. 
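<br /><br />The query-time cost of the partition-key choice is simple calendar arithmetic. A rough sketch (counts only; real performance also depends on hardware and caching):

```python
# When levels above the partition key cannot be pre-summarized (the 10g
# behaviour described above), a yearly cell must be rolled up at query
# time from the highest level that IS summarized inside each partition.
leaf_values_per_year = {
    "Day partition key": 365,   # 365 daily values summed per yearly cell
    "Month partition key": 12,  # 12 monthly values summed per yearly cell
}

ratio = (leaf_values_per_year["Day partition key"]
         / leaf_values_per_year["Month partition key"])
print(f"Month partitioning does roughly {ratio:.0f}x less work per yearly cell")
```

Multiply that per-cell cost by the number of cells in a typical report and the choice of partition key quickly dominates perceived query performance.<br /><br />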
However, when a user creates a query based on yearly data, only 12 values have to be aggregated at run time for each cell being referenced within the query.<br /><br />Partitioning has a big impact on two key areas:<br /><ul><li>Partial Aggregation</li><li>Parallel Processing</li></ul>Partial Aggregation – the Oracle OLAP option supports incremental updates to a cube (as we will see in a later workshop). This allows the engine to aggregate data for just those members where data has been loaded, which means the aggregation process can work with a substantially reduced set of data. For example, if we are loading data for Dec 2008, then for the time dimension only the members Q4 2008 and 2008 are impacted by any data loaded.<br /><br />Parallel Processing – By partitioning a cube, it is possible to solve it in parallel, assuming data is being loaded into more than one partition. This brings us to an important point. Most customers will typically partition their cubes by time. Of course, if you only load data for one month at a time and use month as the partition key, then parallel processing is not going to occur, which may or may not be a good thing.<br /><br /><span style="font-weight: bold;font-size:130%;">Rules Tab</span><br />On the Rules tabbed page, you identify aggregation rules for the cube (this is also available within each individual measure). You have many different kinds of aggregations available. This is one of the most powerful features of Oracle OLAP, enabling different dimensions to be independently calculated using different aggregation methods (or not using aggregation at all). In effect, a different aggregation method can be applied to each dimension within a cube. The engine itself is also capable of supporting dimension-member-level aggregation plans through the use of MODELS. However, at this point in time Analytic Workspace Manager 10g does not support this feature. 
But AWM 11g will support the ability to create dimension member aggregation plans in the form of custom aggregates.<br /><br />In the image below, the aggregation method of SUM is used across all dimensions.<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhPWBBd1gS2b_a_UjX5TqvvN-LAo75mJrQOuCkK39tKTvuy6IRr2hqGX0EkcAx4NO9Tve-Kfc4im9SYmcDV7ItocSeuoP3WYKQbT5m-2XvxBgcTZ8IsmMzBBO4Suf4ZYiTJuS-OoKP1UlM/s1600-h/Image9.JPG"><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhPWBBd1gS2b_a_UjX5TqvvN-LAo75mJrQOuCkK39tKTvuy6IRr2hqGX0EkcAx4NO9Tve-Kfc4im9SYmcDV7ItocSeuoP3WYKQbT5m-2XvxBgcTZ8IsmMzBBO4Suf4ZYiTJuS-OoKP1UlM/s400/Image9.JPG" alt="" id="BLOGGER_PHOTO_ID_5159047443995444674" border="0" /></a><br /><br />However, as we will see later, different aggregation methods are available. For example, if you have costs and price data, you may want to see this data averaged over time, answering such business questions as “What is the average cost over 12 months?” or “What is the average price over 2 years?”<br /><br /><span style="font-weight: bold;font-size:85%;">Aggregation Methods</span><br />It is common to set the aggregation rules only once for all measures contained in a cube. When you define a cube, you identify an aggregation method, and any measures that you create within the cube automatically receive the aggregation methods for that cube. 
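<br /><br />The “sum the quantities, average the prices” idea above can be sketched in a few lines (toy monthly figures, invented for the example; the OLAP engine applies such rules inside the AW):

```python
from statistics import mean

# Toy monthly cells for one product.
sales = {"Jan": 100.0, "Feb": 120.0, "Mar": 140.0}
price = {"Jan": 10.0, "Feb": 12.0, "Mar": 14.0}

# Rolling up the Time dimension to Q1 with a different method per measure:
q1_sales = sum(sales.values())   # SUM: additive measures roll up by adding
q1_price = mean(price.values())  # AVERAGE: "what is the average price?"

print(q1_sales, q1_price)  # → 360.0 12.0
```

Summing the prices (36.0) would be meaningless, which is exactly why the engine lets each measure and dimension carry its own method.<br /><br />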
This is the default behaviour, and it is one of the benefits of using a cube: By setting up aggregation rules and sparsity handling for all the measures once at the “cube” level, you save time and reduce the scope for errors or inconsistencies.<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgkBSXnCJD8E-wJC8QKZFVMNlLlXrfFbfDtB1cZCfgsuvoWCzvn7y391ewxb50dZ8v1mLdKnsaYtiB7Jfid4xIwuMrsBoSi-TrLTcAu-J_6133ZC28ASOjFZg-FedVcSwldxGxMnweiLUk/s1600-h/Image11.JPG"><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgkBSXnCJD8E-wJC8QKZFVMNlLlXrfFbfDtB1cZCfgsuvoWCzvn7y391ewxb50dZ8v1mLdKnsaYtiB7Jfid4xIwuMrsBoSi-TrLTcAu-J_6133ZC28ASOjFZg-FedVcSwldxGxMnweiLUk/s400/Image11.JPG" alt="" id="BLOGGER_PHOTO_ID_5159048028110996962" border="0" /></a><br /><br />The default for aggregation used by AWM is the SUM method (simple additive aggregation) for each dimension. However, you do not have to aggregate data. Some measures have no meaning at aggregate levels of certain dimensions. In such cases, you can specify that the data is non-additive and should not be aggregated over those dimensions at all. Choosing the non-additive aggregation method means that when you view the data in the analytic workspace, you find data only at the leaf levels of the dimensions for which you selected that method.<br /><br /><span style="font-weight: bold;font-size:85%;">Understanding Aggregation</span><br />AWM allows you to set aggregation rules for each dimension independently for your cubes and measures. That is, each dimension, if required, can use a different mathematical method of generating data for the parent and ancestors.<br /><br />Here are some examples of different aggregation methods:<br /><ul><li>SUM simply adds up the values of the measure for each child value to compute the value for the parent. 
This is the default (and most common) behaviour.</li><li>AVERAGE calculates the average of the values of the measure for each child value to provide the value for the parent.</li><li>LAST takes the last non-NA (Null) value of the child members and uses that as the value for the parent.</li></ul><br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgLTLw-bx6GJ8xp_iWIaDOu7lHocYv7z-R16T4aVktyrnGMCg5ym6_8hzlQqkrj3JwSYGZ4U8xS5DwQ5muNPsSeMIbgAgzgehaCzspAvSsPoYsdhqOaschNkXNFpA3RK4u5q0KRMUe5Kds/s1600-h/Image10.JPG"><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgLTLw-bx6GJ8xp_iWIaDOu7lHocYv7z-R16T4aVktyrnGMCg5ym6_8hzlQqkrj3JwSYGZ4U8xS5DwQ5muNPsSeMIbgAgzgehaCzspAvSsPoYsdhqOaschNkXNFpA3RK4u5q0KRMUe5Kds/s400/Image10.JPG" alt="" id="BLOGGER_PHOTO_ID_5159050042450658898" border="0" /></a><br /><br />Sales quantities and revenues are usually aggregated over all dimensions using the SUM method, whereas inventory or headcount measures commonly require a different method (such as LAST) on the Time dimension and SUM for the other dimensions. More advanced aggregation methods, such as weighted average, are useful when aggregating measures such as Prices (weighted by Sales revenue).<br /><br /><span style="font-weight: bold;font-size:85%;">Different Aggregation for Individual Measures</span><br />However, you are not limited to specifying that all measures of a cube have the same aggregation method. When adding measures to the cube, you can specify a different aggregation method, and accept the defaults of all the other measure settings.<br /><br />For example, it is not uncommon for a single cube to contain measures such as Sales Revenue, Sales Quantity, Order Quantity, and Stock/Inventory Quantity. 
All these measures will aggregate using the SUM method over all dimensions, except for the Stock/Inventory measure. This requires a LAST method on the Time dimension (and SUM on all the others). Using the Rules tab for the Stock measure, you can override the default aggregation method for Time and set it to LAST, while retaining all the other default settings from the cube.<br /><br /><span style="font-size:85%;">Note: The ability to override cube settings for individual measures is not supported in compressed cubes. If you use compression, and one of your measures requires a different aggregation method, you need to create it in a separate cube.</span><br /><br /><span style="font-weight: bold;font-size:85%;">Aggregation Operators</span><br />There are a number of different aggregation operators available to you for summarizing data in your AW. The following is a brief description of each of the operators.<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh5fjBRgQGrbB0srYDM_wjVwJuGOS16GIyIuve1n6TG7UoSSd_MMVuQ89JAFHJX6YJCIfQOxh2Gn_7qKaYkVc2bJMfo66Vt_DC1HBBjV7009JMxPbf-46XaJLoeJma5jqTarS0bu_DybVI/s1600-h/Image12.JPG"><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh5fjBRgQGrbB0srYDM_wjVwJuGOS16GIyIuve1n6TG7UoSSd_MMVuQ89JAFHJX6YJCIfQOxh2Gn_7qKaYkVc2bJMfo66Vt_DC1HBBjV7009JMxPbf-46XaJLoeJma5jqTarS0bu_DybVI/s400/Image12.JPG" alt="" id="BLOGGER_PHOTO_ID_5159048363118446066" border="0" /></a><br /><br /><ul><li><span style="font-weight: bold;">Average</span>: Adds data values, and then divides the sum by the number of data values that are added together</li><li><span style="font-size:85%;"><span style="font-weight: bold;">Hierarchical Average</span></span>: Adds data values, and then divides the sum by the number of children in the dimension hierarchy. 
Unlike Average, which counts only non-NA children, Hierarchical Average counts all the logical children of a parent, regardless of whether each child does or does not have a value.</li><li><span style="font-size:85%;"><span style="font-weight: bold;">Hierarchical Weighted Average</span></span>: Multiplies non-NA child data values by their corresponding weight values, and then divides the result by the sum of the weight values. Unlike Weighted Average, Hierarchical Weighted Average includes weight values in the denominator sum even when the corresponding child values are NA. You identify the weight object in the Based On field.</li><li><span style="font-size:85%;"><span style="font-weight: bold;">Weighted Average</span></span>: Multiplies each data value by a weight factor, adds the data values, and then divides that result by the sum of the weight factors. You identify the weight object in the Based On field.</li><li><span style="font-size:85%;"><span style="font-weight: bold;">First Non-NA Data Value</span></span>: The first real data value</li><li><span style="font-size:85%;"><span style="font-weight: bold;">Hierarchical First Member</span></span>: The first data value in the hierarchy, even when that value is NA</li><li><span style="font-size:85%;"><span style="font-weight: bold;">Hierarchical Weighted First</span></span>: The first data value in the hierarchy multiplied by its corresponding weight value, even when that value is NA. You identify the weight object in the Based On field.</li><li><span style="font-size:85%;"><span style="font-weight: bold;">Weighted First</span></span>: The first non-NA data value multiplied by its corresponding weight value. 
You identify the weight object in the Based On field.</li><li><span style="font-weight: bold;font-size:85%;">Last Non-NA Data Value</span>: The last real data value</li><li><span style="font-size:85%;"><span style="font-weight: bold;">Hierarchical Last Member</span></span>: The last data value in the hierarchy, even when that value is NA</li><li><span style="font-size:85%;"><span style="font-weight: bold;">Hierarchical Weighted Last</span></span>: The last data value in the hierarchy multiplied by its corresponding weight value, even when that value is NA. You identify the weight object in the Based On field.</li><li><span style="font-size:85%;"><span style="font-weight: bold;">Weighted Last</span></span>: The last non-NA data value multiplied by its corresponding weight value. You identify the weight object in the Based On field.</li><li><span style="font-size:85%;"><span style="font-weight: bold;">Maximum</span></span>: The largest data value among the children of each parent</li><li><span style="font-size:85%;"><span style="font-weight: bold;">Minimum</span></span>: The smallest data value among the children of each parent</li><li><span style="font-size:85%;"><span style="font-weight: bold;">Non-additive (Do Not Summarize)</span></span>: Do not aggregate any data for this dimension. Use this keyword only in an operator variable. It has no effect otherwise.</li><li><span style="font-size:85%;"><span style="font-weight: bold;">Sum</span></span>: Adds data values (default)</li><li><span style="font-weight: bold;font-size:85%;">Scaled Sum</span>: Adds the value of a weight object to each data value, and then adds the data values. You identify the weight object in the Based On field.</li><li><span style="font-size:85%;"><span style="font-weight: bold;">Weighted Sum</span></span>: Multiplies each data value by a weight factor, and then adds the data values. 
You identify the weight object in the Based On field.</li></ul><span style="font-size:85%;"><span style="font-weight: bold;">Aggregating Across Multiple Hierarchies</span></span><br />Most dimensions within real world models will have multiple hierarchies. In the image below, there are two separate hierarchies on the Time dimension.<br /><br />On the Aggregation Rules tabbed page, when creating a cube, you can specify which hierarchy or hierarchies should be used for aggregation for that cube’s measures. You should select one or more hierarchies for each dimension being aggregated. If you omit a hierarchy, then no aggregate values are stored in it; they are always calculated in response to a query.<br /><br />Because this may reduce query performance, generally you should omit a hierarchy only if it is seldom used. The default behaviour of AWM 10g is to select all hierarchies.<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhlgJl2FIUy2wCnpLpNsegui9hhmKOsrgDT59QtLxxxqyxwIG0x-tiEaRmIUkvO3sp5YlHRmffFj-eKMD3dV7qQbWBfJf4Pb8H-pbl7Tlw6SOuVJnefLrx9fKGpRxHTVhRmrsn9rf2Pgp8/s1600-h/Image14.JPG"><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhlgJl2FIUy2wCnpLpNsegui9hhmKOsrgDT59QtLxxxqyxwIG0x-tiEaRmIUkvO3sp5YlHRmffFj-eKMD3dV7qQbWBfJf4Pb8H-pbl7Tlw6SOuVJnefLrx9fKGpRxHTVhRmrsn9rf2Pgp8/s400/Image14.JPG" alt="" id="BLOGGER_PHOTO_ID_5159048646586287618" border="0" /></a><br /><br /><span style="font-size:85%;"><span style="font-weight: bold;">Aggregating Measures with Data Coming in at Different Levels</span></span><br />There are other occasions where careful selection of the hierarchies to use in aggregation is important, especially for measure data that arrives into the AW at different levels of aggregation.<br /><br />Suppose you have an AW that contains Budget and Actuals cubes 
for the purposes of variance analysis. The leaf level for Actuals is the Day level, but Budgets are set at the Monthly level. Initially, you created a single Time hierarchy in which Year is the highest level and Day is the lowest level:<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiXkisHatRw1lRUK46KSU2JASSxXYG8ijy50t7WR5gasuEmDM3cNz3A6zjAlgudmEqQ3M-fgyU0z_S9o-Qmg8pscCeX-ZuSyqCS2SJFFlCA4Pj8LUIqJLkPpgLugJuX0FZQppGL-kaFWww/s1600-h/Image8.JPG"><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiXkisHatRw1lRUK46KSU2JASSxXYG8ijy50t7WR5gasuEmDM3cNz3A6zjAlgudmEqQ3M-fgyU0z_S9o-Qmg8pscCeX-ZuSyqCS2SJFFlCA4Pj8LUIqJLkPpgLugJuX0FZQppGL-kaFWww/s400/Image8.JPG" alt="" id="BLOGGER_PHOTO_ID_5159048955823932962" border="0" /></a><br /><br />This is perfect for the aggregation hierarchy for the Actuals measures. However, there is an issue with the Budgets measure. If data is loaded at the Month level, but this hierarchy is used for the aggregation of Budgets, then aggregation may begin at the Day level. 
All the empty cells for Budget at the Day level would be interpreted as zeros for the purposes of aggregating the data, resulting in new monthly totals being calculated as zero.<br /><br />To handle this situation, a recommended approach is to create a second hierarchy that stops at the Month level specifically for the purposes of aggregating Budgets:<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhsjwEe9IwKpBhUmaxuAsrnQF409yAG8_E4ch87Lt541cJcewkpqXM4LTs5kFw41Hj0X3kQiCrW9sb9Uo4_q8a7NTA0YUIrdmNNQCLhyphenhyphenD-G1mUPz6ux1sokQUfOEhn8lxIfwI7PHUzGhQ4/s1600-h/Image15.JPG"><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhsjwEe9IwKpBhUmaxuAsrnQF409yAG8_E4ch87Lt541cJcewkpqXM4LTs5kFw41Hj0X3kQiCrW9sb9Uo4_q8a7NTA0YUIrdmNNQCLhyphenhyphenD-G1mUPz6ux1sokQUfOEhn8lxIfwI7PHUzGhQ4/s400/Image15.JPG" alt="" id="BLOGGER_PHOTO_ID_5159048878514521618" border="0" /></a><br /><br />You must deselect the hierarchy containing the Day level on the Aggregation Rules tab for the cube or measure in question. Use the Day-level hierarchy for the Actuals measures only. The Day-level hierarchy is the primary or default hierarchy for end users because it enables drilling down to the Day level, and Budgets are available at Month, Quarter, and Year, exactly as required.<br /><br /><span style="font-size:130%;"><span style="font-weight: bold;">Summarize To Tab</span><br /></span>Within all OLAP models you will need to balance the desire to aggregate absolutely everything against the time taken to load data into a cube and then aggregate it. In general, the less you choose to pre-summarize when loading data into the AW, the higher the load placed on run-time queries. 
In this scenario, queries are likely to be a little slower and the load on the server at query time greater, because each user query asks the server to perform more calculations. Pre-calculated summaries are instantly available for retrieval and are generally faster to query.<br /><br />However, it does not necessarily follow that full aggregation across all levels of all dimensions yields the best query performance. In many cases, partial summarization strategies can provide optimal build and aggregation performance with little noticeable impact on query performance.<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiF4e5nXzMKTKCb9V_Gg_Orjc0Tyauierek9d4wvDYY1XmdXU6EVWxgok_lEn7qkFZrtPtzZ-OXnm6nv_mRhZduQHbj99uwHwEbYCMzWoRRRt9Gwlg3AbS9LH1ljmo-M7wyKnMmWPJmr_w/s1600-h/Image13.JPG"><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiF4e5nXzMKTKCb9V_Gg_Orjc0Tyauierek9d4wvDYY1XmdXU6EVWxgok_lEn7qkFZrtPtzZ-OXnm6nv_mRhZduQHbj99uwHwEbYCMzWoRRRt9Gwlg3AbS9LH1ljmo-M7wyKnMmWPJmr_w/s400/Image13.JPG" alt="" id="BLOGGER_PHOTO_ID_5159049265061578290" border="0" /></a><br /><br />Many experienced OLAP cube builders make the following recommendations regarding summarization strategies:<br /><ul><li>Large dimensions, and those with many deep levels and/or hierarchies, are typically the most “expensive” to aggregate over. They are also likely to be one of the sparse dimensions in the cube definition. For such dimensions, a common guideline is to summarize using a “skip-level” approach, that is, to precalculate every other level in the hierarchy. This generally gives reasonably good results and a solid basis for further tuning, if required.</li><li>If there is a small, dense dimension (such as Time) as the first dimension on the list of dimensions for a cube, then it is often a good strategy to leave that dimension to summarize completely on the fly at run time, especially if a large number of sparse data-level combinations have been computed.</li></ul>AWM generally defaults to settings that reflect this advice, and the defaults provide a good starting point for tuning a build; you can adjust the settings if you need to. Be warned, however, that adding more levels to be pre-summarized will require additional storage space.<br /><br />When you build and test your AWs, it is a good idea to include time in your project plan to experiment with different summarization strategies. Estimating in advance the exact storage requirements and aggregation times of a multidimensional cube (especially a highly dimensional, sparse one) is extremely difficult, so some tuning after the data is properly understood often improves the performance of builds and aggregations.<br /><br />You can use a database package to help you plan your summarisation strategy. Two procedures in the DBMS_AW package can provide help and guidance:<br /><ul><li>The SPARSITY_ADVICE_TABLE procedure creates a table for storing the advice generated by the ADVISE_SPARSITY procedure</li><li>The ADVISE_SPARSITY procedure runs a series of queries against your data and makes recommendations about what data to pre-summarize and what to leave for dynamic aggregation. 
The 11g release of Analytic Workspace Manager leverages this database feature and makes recommendations directly inside the tool.<br /></li></ul><br /><br /><span style="font-size:130%;"><span style="font-weight: bold;">Cache Tab</span></span><br />Caching improves run-time performance in sessions that repeatedly access the same data, which is typical in data analysis. Caching temporarily saves calculated values in a session so that you can access them repeatedly without recalculating them each time. You have two options:<br /><ul><li>Cache run-time aggregations using session cache: This is the default behaviour. This option ensures that any run-time aggregations completed during a session are cached for the remainder of the session, improving query performance as the session progresses. This setting is ideal for most OLAP applications, namely those that allow read-only analysis where the underlying data does not change during a session. </li><li>Do not cache run-time aggregations: Select this option if the cube will be subject to what-if analysis, where it is important that previously calculated summarizations are not reused.</li></ul><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjOtFebwR9Rn-fllxILhr5PI5yWeYkkd0UkEaF3FxCAUZo_HA39CZ_SGYKQKKnZx-F276nbqMxBTPW1zo8SNZnqXdBAGVbpuWFzqa3r6R8m0KxCaCzSPA-_OzbaJGf25Xi0YrluY6aMjXw/s1600-h/Image16.JPG"><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjOtFebwR9Rn-fllxILhr5PI5yWeYkkd0UkEaF3FxCAUZo_HA39CZ_SGYKQKKnZx-F276nbqMxBTPW1zo8SNZnqXdBAGVbpuWFzqa3r6R8m0KxCaCzSPA-_OzbaJGf25Xi0YrluY6aMjXw/s400/Image16.JPG" alt="" id="BLOGGER_PHOTO_ID_5159049522759616066" border="0" /></a><br />In the next workshop we will review how to quickly and easily load data into a cube and 
then review some best practices for loading data within a production environment.Keith Lakerhttp://www.blogger.com/profile/01039869313455611230noreply@blogger.com0tag:blogger.com,1999:blog-3820031471524503731.post-77310489395376042342008-01-21T03:19:00.000-08:002008-12-11T15:25:43.549-08:00OLAP Workshop 5 : Building CubesIn the last series of Workshops, we started to look at building the dimensions to support our data model. Each dimension contained levels and a hierarchy. The purpose of a hierarchy is to provide the relationships for summarization of measures in the cube and to make navigating multiple levels of data easy and intuitive for the end user. The next stage is to start building cubes.<br /><br /><span style="font-size:130%;">Creating Cubes<br /></span><span style="font-weight: bold;"><br />What Are Cubes?</span><br />Cubes are containers of measures (facts). They simply provide a convenient way of collecting up measures with the same dimensions. Therefore, all measures in a cube are candidates for being processed together at all stages: data loading, aggregation, and storage. 
Cubes are only visible to the cube builder (end users only see the measures they contain) and simplify the setup and maintenance of measures in AWM.<br /><br /><span style="font-weight: bold;">Creating Cubes</span><br />To create a cube, right-click the Cubes node in the navigator, and then select the Create Cube option.<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjMupeE18xa2OxsNDvI3uge3fxEZaOhzfr5NI33vHVYK0Pg6VClrt1PA7a1emkBr5CclQtDUkI3yqu-_UnEogwFlmAC0Q15Dsvo35RYXUMl04RsPj44SB2nh4c7GCTOt0zrpS8xfZqN17M/s1600-h/image2.JPG"><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjMupeE18xa2OxsNDvI3uge3fxEZaOhzfr5NI33vHVYK0Pg6VClrt1PA7a1emkBr5CclQtDUkI3yqu-_UnEogwFlmAC0Q15Dsvo35RYXUMl04RsPj44SB2nh4c7GCTOt0zrpS8xfZqN17M/s400/image2.JPG" alt="" id="BLOGGER_PHOTO_ID_5157888807543877298" border="0" /></a><br /><br /><span style="font-weight: bold;">Note:</span> You can also create a cube from a cube template if you have a template available.<br /><br />The Create Cube window appears, as shown below:<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhKtuP6qyHLsVcZHDbt8arfIWCj6HTMrAIQkLRT-KFUQQU5Uml1ijWrn-X3uYV4irl5mOQFPc10QUCh0vbQgClS7487TAoIC1IBwAf_H6Olb5m4H2rBTwL7x8vHjnq-FE3I_FJi_F-7RBM/s1600-h/image3.JPG"><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhKtuP6qyHLsVcZHDbt8arfIWCj6HTMrAIQkLRT-KFUQQU5Uml1ijWrn-X3uYV4irl5mOQFPc10QUCh0vbQgClS7487TAoIC1IBwAf_H6Olb5m4H2rBTwL7x8vHjnq-FE3I_FJi_F-7RBM/s400/image3.JPG" alt="" id="BLOGGER_PHOTO_ID_5157890160458575554" border="0" /></a><br /><br />The Create Cube wizard provides a tabbed page interface that enables you to specify the 
logical model and processing options for a cube. The best way to use this wizard is to always work from left to right across the various tabs.<br /><br /><span style="font-style: italic;">General Tabbed Page</span><br />On the General tabbed page, enter the basic information about the cube:<br /><ul><li>Give the cube a distinct name and provide the short and long label descriptions. Note – the name of the cube cannot be changed once the cube has been created.</li><li>Identify the dimensionality of the cube by using the arrow keys to move dimensions from the Available Dimension list to the Selected Dimension list. After you define the dimensionality, all measures that you create based on this cube will have the same dimensionality. Note – the dimensionality of the cube cannot be changed once the cube has been created.</li></ul>Remember that Oracle OLAP supports cubes of different dimensionality. Therefore, you do not need to select all the dimensions listed in the panel marked ‘Available Dimension’.<br /><br />The tick box “Use Default Aggregation Plan for Cube Aggregation” allows you to shortcut the process of creating measures by applying the settings defined at the cube level to all measures within the cube. As we will see later, defining a measure is an almost identical process to defining a cube.<br /><br /><span style="font-style: italic;">Translations Tabbed Page </span><br />This page enables you to provide long and short descriptions for the cube in each language that the AW supports. Although there are other tabs within the cube wizard, at this point it is possible to ignore all the other tabs and allow AWM to default all the other settings.<br /><br /><span style="font-weight: bold;">Adding Measures to a Cube</span><br />Base measures store the facts collected about your business. Dimensions logically organize the edges of a measure, and the body of the measure contains data values. 
Each measure belongs to a particular cube, and by default all the settings for a measure (such as dimensions) are inherited from the cube.<br /><br />To create a measure, right-click the Measures node in the navigator, and then select the Create Measure option.<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi4zwPALm_V4osfxDftPAiHTWCj20iYcCABow9EcRZ8L_AGVAjOzHo8MwIOowepL8u_OsQUNRoGav9yqKlANuErjSBO9tQvwJTYeBfbJj-hr5fFMaiaOQUiiYJJJs6E5cOc83vEtUrz-Zo/s1600-h/image4.JPG"><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi4zwPALm_V4osfxDftPAiHTWCj20iYcCABow9EcRZ8L_AGVAjOzHo8MwIOowepL8u_OsQUNRoGav9yqKlANuErjSBO9tQvwJTYeBfbJj-hr5fFMaiaOQUiiYJJJs6E5cOc83vEtUrz-Zo/s400/image4.JPG" alt="" id="BLOGGER_PHOTO_ID_5157890448221384402" border="0" /></a><br /><br />This will then launch the wizard to create the measure:<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgpxilGhdeqpST2pAZukxSMbmy2QZLKNA-ZuoG8fUJ4CbgWpyWsXpFn5-FToaa4Y9obBqxw7vZ-udO63ATsFZkUm58cWaRhOGY9O_ZxDDn-Dg4divcvW49ZSQQCvrppe6k_YOzB-m2joJI/s1600-h/image5.JPG"><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgpxilGhdeqpST2pAZukxSMbmy2QZLKNA-ZuoG8fUJ4CbgWpyWsXpFn5-FToaa4Y9obBqxw7vZ-udO63ATsFZkUm58cWaRhOGY9O_ZxDDn-Dg4divcvW49ZSQQCvrppe6k_YOzB-m2joJI/s400/image5.JPG" alt="" id="BLOGGER_PHOTO_ID_5157890585660337890" border="0" /></a><br /><br /><br /><span style="font-style: italic;">General Tabbed Page</span><br />On the General tabbed page, you create a name and add label information. Long labels are used by most OLAP clients for display. 
If you do not specify a value for the long label, then it defaults to the measure’s name. Once the measure is defined, you cannot change its name. If you delete a measure, all the data associated with it is lost.<br /><br /><span style="font-style: italic;">Other Tabbed Pages</span><br />The Translations tabbed page enables you to provide long and short descriptions for the measure in each language that the AW supports. The other tabbed pages (Implementation Details, Rules, and so on) enable the selection of certain measure-specific processing options beyond the settings that are applied by the definition of the cube. These tabbed pages are examined in the following workshop.<br /><br />At this point it is possible to simply create the measure and allow AWM to default all the other settings.<br /><br /><span style="font-weight: bold;">Loading Data into a Cube</span><br />After creating logical objects, you can map them to relational data sources in the Oracle database. Afterward, you can load data into your analytic workspace by using the Maintain Analytic Workspace Wizard.<br /><br /><span style="font-style: italic;">Step 1 – Mapping Data Sources</span><br />To map your measures to a data source, perform these steps:<br /><br />1. In the navigator, choose Mappings for the cube that contains the measure that you want to map. A list of schemas appears. 
Find the schema to which you want to map your measure, and then click the + button.<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEie3zaylmIt16giC-E-CA0r4VsLxCAlEXXXC_ihiZsLu9o3hTgDcY44KpEG7-FTVCEEIJEBXww7eGrTNtGAILV6Y_kBApa_cg86dxluAx-TSQLzKy6hMWClCSoyhqUk-SdWU_bgIAspkE0/s1600-h/image6.JPG"><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEie3zaylmIt16giC-E-CA0r4VsLxCAlEXXXC_ihiZsLu9o3hTgDcY44KpEG7-FTVCEEIJEBXww7eGrTNtGAILV6Y_kBApa_cg86dxluAx-TSQLzKy6hMWClCSoyhqUk-SdWU_bgIAspkE0/s400/image6.JPG" alt="" id="BLOGGER_PHOTO_ID_5157890796113735410" border="0" /></a><br /><br />2. Select either Tables or Views, depending on what you are mapping to.<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjlGyVrRr6l-PqEGWsAi3jSxcJtcr_VEXZW0hZuPitFTdVr5g2sGBIzwDVsk9O1YIlQJmjX5KKJEf8fDf5B1Vb6bQ6rSkzg3yRdqFEPR_Qti30h8RXCiOayha4J_iAmVJ6yjqWWHOTSws0/s1600-h/image7.JPG"><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjlGyVrRr6l-PqEGWsAi3jSxcJtcr_VEXZW0hZuPitFTdVr5g2sGBIzwDVsk9O1YIlQJmjX5KKJEf8fDf5B1Vb6bQ6rSkzg3yRdqFEPR_Qti30h8RXCiOayha4J_iAmVJ6yjqWWHOTSws0/s400/image7.JPG" alt="" id="BLOGGER_PHOTO_ID_5157891332984647458" border="0" /></a><br /><br />3. Find the table or view name and double-click, or drag it to the mapping canvas. 
When on the canvas, the structure of the table is visible.<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh3_xfF7PJh_GlPwB-0ye26hurSRCmKUhwWzZ3BWRPmOgxS3hATNNwGsZW972yP4XrJXlYgk9bbfQl1FE-Na5-cGLNg3ayz_6XHQsOxqRxbBunv7d1ZOsQuQtawYAITdJ24qi_X6_j3xGM/s1600-h/image8.JPG"><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh3_xfF7PJh_GlPwB-0ye26hurSRCmKUhwWzZ3BWRPmOgxS3hATNNwGsZW972yP4XrJXlYgk9bbfQl1FE-Na5-cGLNg3ayz_6XHQsOxqRxbBunv7d1ZOsQuQtawYAITdJ24qi_X6_j3xGM/s400/image8.JPG" alt="" id="BLOGGER_PHOTO_ID_5157891152596021010" border="0" /></a><br /><br /><br /><span style="font-weight: bold;">Note:</span> If you want to see the data in the table or view, right-click the name of the table or view, and then select the View Data option.<br /><br />My recommendation is never to map directly to a fact table. Always use a view, as this allows you to fine-tune the load process. For example, by using a view you can choose to load a single time period, which is useful when you are managing some of the more advanced settings and are using an iterative development approach. As you will see later, using a view can make the data take-on stage (i.e. the initial build of the cube) easier to plan and manage.<br /><br /><br />4. Drag the cursor from the column name in the relational source to the destination object name in the measure. 
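As a sketch of the view-based mapping recommended in the note above, the source view for a Budgets cube might look like the following. Every table, view, and column name here is hypothetical; substitute your own fact and dimension objects.

```sql
-- Hypothetical names throughout. A view over the fact table lets you
-- restrict the initial load to a single time period; widening the load
-- later only requires changing the view's WHERE clause, not the cube
-- mapping itself.
CREATE OR REPLACE VIEW budget_fact_v AS
SELECT f.month_id,
       f.product_id,
       f.channel_id,
       f.budget_amount
FROM   budget_fact f
WHERE  f.month_id = '2008-01';  -- load one period while iterating
```

In AWM you would then map the cube to BUDGET_FACT_V rather than BUDGET_FACT, so that subsequent loads can be controlled entirely from the view definition.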
The image below shows a completed mapping.<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg_hqEzzw1KdcN9XOvupDSlSRqFy1NQ1dm-Vnh_Z4NqppTWeJTF3fH1pgdEib65n-vmC4-UYb2N8FAt2iM7yMDZNMukWyRSGETVzCVPkQNH_3oFpdAyyJG1yv4v8OphsDW93YOSmlcBGC0/s1600-h/image9.JPG"><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg_hqEzzw1KdcN9XOvupDSlSRqFy1NQ1dm-Vnh_Z4NqppTWeJTF3fH1pgdEib65n-vmC4-UYb2N8FAt2iM7yMDZNMukWyRSGETVzCVPkQNH_3oFpdAyyJG1yv4v8OphsDW93YOSmlcBGC0/s400/image9.JPG" alt="" id="BLOGGER_PHOTO_ID_5157891040926871298" border="0" /></a><br /><br /><br />Note: The mapping canvas enables you to map the contents of the source data to any level of dimensions. Here, because Budgets are set by product and by channel for each month, map them to the Month, Product and Channel levels. The next lesson offers advice on techniques for managing situations where source data for different cubes and measures is loaded at different leaf levels of detail.<br /><br /><span style="font-style: italic;">Step 2 - Loading Data into the Cube</span><br />AWM contains a data maintenance wizard to help you create a job to load data into your cubes. The job both loads and aggregates the data within the cube as a single job. 
You can load:<br /><ul><li>All mapped objects in the analytic workspace</li><li>All mapped measures in a cube including the dimensions</li><li>All mapped measures in a cube excluding the dimensions</li><li>Individually mapped measures</li></ul>To load data, right-click the desired object name into which you are loading data, and then select the Maintain option.<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEga7bMS6ks0Ztwuh3nyssMwV-51UUle7z3fdN7TPvbqfYeEv0ND34_OvWvFWekk78ipk36xnO7wePtElSYtoCe4OSrYyeAxIDtx7k32yAEytAZXqkH1q2iFW4MfPKZyfmG6jbOnVRS6V9M/s1600-h/image10.JPG"><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEga7bMS6ks0Ztwuh3nyssMwV-51UUle7z3fdN7TPvbqfYeEv0ND34_OvWvFWekk78ipk36xnO7wePtElSYtoCe4OSrYyeAxIDtx7k32yAEytAZXqkH1q2iFW4MfPKZyfmG6jbOnVRS6V9M/s400/image10.JPG" alt="" id="BLOGGER_PHOTO_ID_5157891655107194674" border="0" /></a><br />In this screenshot, the Budgets cube is maintained. 
This results in the loading of data for all the dimensions that organize the cube and all the mapped measures associated with the cube.<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgccwd3qe4ttjrO6qAyVhrUAJZ0LXlG1He03Gomtq8l9MXgXvpprl5prNeK1vS2iRVqUkgrfm5zl2m9QXsEkQO_b6zodnw5bckte7UJcNrXAf4e3-btqHRtyQTzDbHKJ6rsc9ktsmNGQSs/s1600-h/image11.JPG"><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgccwd3qe4ttjrO6qAyVhrUAJZ0LXlG1He03Gomtq8l9MXgXvpprl5prNeK1vS2iRVqUkgrfm5zl2m9QXsEkQO_b6zodnw5bckte7UJcNrXAf4e3-btqHRtyQTzDbHKJ6rsc9ktsmNGQSs/s400/image11.JPG" alt="" id="BLOGGER_PHOTO_ID_5157891908510265154" border="0" /></a><br /><br /><br />The Maintenance Wizard takes you through a set of steps to load data from the mapped relational objects to the multidimensional objects in the AW.<br /><br />In Step 1 of the wizard, you identify the objects for which data is to be loaded. If you choose cubes, all the measures for the cube are selected. Alternatively, you can choose a specific measure of a cube. After a measure or cube is selected, the associated dimensions are automatically selected as well. AWM, by default, selects the related dimensions for the cube. This is because AWM is dimensionally aware, and knows that the dimensions must exist and be populated in order for measures to be loaded (the dimensions organize the measures physically, not just logically, in an AW, so they must be maintained before the measure data can be loaded).<br /><br /><span style="font-weight: bold;">Note</span> – My personal preference is not to maintain dimensions at the same time as processing the cube. This goes back to the old days of Express Server when it was best practice to load dimensions first and then load data as a separate job. 
The reason for this two-step process was to ensure efficient storage of a measure. With the OLAP Option I am not sure whether this should still be considered best practice, but old habits die hard.<br /><br />From this screen it is possible to simply click the “Finish” button and the job will run immediately. Alternatively, you can step through the two other screens to set some additional processing options:<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgrV_S73tODY72aWZg5HcFXcEslqUQ0FcOoLYWo7YXhRwORViJcscvflDrum5cDoexiDiq4VNdaA6T_k0MvhfmuwFhlO-bOFrFekc9nn4R9L2z_eiAqmTaXMQOP9FxoCyne6oRY81e18j4/s1600-h/image11.JPG"><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgrV_S73tODY72aWZg5HcFXcEslqUQ0FcOoLYWo7YXhRwORViJcscvflDrum5cDoexiDiq4VNdaA6T_k0MvhfmuwFhlO-bOFrFekc9nn4R9L2z_eiAqmTaXMQOP9FxoCyne6oRY81e18j4/s400/image12.JPG" alt="" id="BLOGGER_PHOTO_ID_5157892161913335634" border="0" /></a><br /><br />Step 2 allows you to determine how previously loaded data, as well as new data, should be managed. 
For the moment, simply ignore this screen, all will be explained in the next workshop.<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhpg6dMNcVyRMIC4Ix8KPMhLpIy0cnJLK8g4jlYsVDTUqkP3Bv15iU1znUnKq5RQdWJvV74VBcNbeqWT1QXmcOru630_DeP8VEMOTada0caXy6fdYfN-lz8qNSnCRgplr7JnpauigBd9_w/s1600-h/image13.JPG"><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhpg6dMNcVyRMIC4Ix8KPMhLpIy0cnJLK8g4jlYsVDTUqkP3Bv15iU1znUnKq5RQdWJvV74VBcNbeqWT1QXmcOru630_DeP8VEMOTada0caXy6fdYfN-lz8qNSnCRgplr7JnpauigBd9_w/s400/image13.JPG" alt="" id="BLOGGER_PHOTO_ID_5157892462561046370" border="0" /></a><br /><br /><br />Step 3 allows you to determine when to run the job. For the moment simply use the default option to run the job immediately. Again, the other options will be explained in the next workshop.<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgzkcb91l1AmlouQvVchhengG8EH8K4YmhTXW5g-LUYyH-aLtCWUiTqiFrU9amvYH3XIHHrRiStmcnIuIguZK6qB6E6CY6FMJBrgwXAdy-L-JtTJuUneAFBg9zxOkvNY4V1byHW6NVNib0/s1600-h/image12.JPG"><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgzkcb91l1AmlouQvVchhengG8EH8K4YmhTXW5g-LUYyH-aLtCWUiTqiFrU9amvYH3XIHHrRiStmcnIuIguZK6qB6E6CY6FMJBrgwXAdy-L-JtTJuUneAFBg9zxOkvNY4V1byHW6NVNib0/s400/image12.JPG" alt="" id="BLOGGER_PHOTO_ID_5157892754618822514" border="0" /></a><br /><br />After the loading of data is completed, you can view the report which is shown below (this is the 10g report, the 11g report provides a lot more detail):<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" 
href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh53tYl7Te8tjTtK4gUCvsCx3pNiEe0Aozb82zG08KZURfc4pdxpVU77HJtFSYTWwGj9D61sUotv4HRWvP8jkdaBTIPg4RHkaiLhjy_cWfau-GbyuqZpqVe3LpgDxN5EDE0mtpk-Fx8FSM/s1600-h/image14.JPG"><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh53tYl7Te8tjTtK4gUCvsCx3pNiEe0Aozb82zG08KZURfc4pdxpVU77HJtFSYTWwGj9D61sUotv4HRWvP8jkdaBTIPg4RHkaiLhjy_cWfau-GbyuqZpqVe3LpgDxN5EDE0mtpk-Fx8FSM/s400/image14.JPG" alt="" id="BLOGGER_PHOTO_ID_5157892939302416258" border="0" /></a><br /><br />After successful completion, the data in your AW is ready to be analyzed.<br /><br /><span style="font-weight: bold;">Note</span> - All the maintenance logging goes into the XML_LOAD_LOG table (for 10g, with 11g there have been some changes which will be explained in a later post), which belongs to the OLAPSYS user. This table can be reviewed later, if required. There is a lot of information in this log, but some of it can be hidden. Always make sure ALL your records were correctly loaded. The log file will tell you if any were rejected, but unfortunately it will not tell you why or which records. The usual reasons are:<br /><ul><li>missing dimension members</li><li>invalid data due to data type errors</li></ul><br /><span style="font-size:130%;">Viewing the Results<br /></span>After data is loaded, you can preview it by using the Data Viewer. 
To see the data, right-click the name of the measure or cube that you want to view, and then select View Data from the submenu.<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhpy9yviPgHkxKjQp42rs9kGsmS0R-xWjeSiAk3-uYHh2Pox1TwG-ECbd0euTF56v-o9hebYU01W9Rf254zzrfsCCl4S-Sp9ytU1u0XRAiJq4smSdR77CFHL127IHvdGT1gz7tX7e0mbpg/s1600-h/image15.JPG"><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhpy9yviPgHkxKjQp42rs9kGsmS0R-xWjeSiAk3-uYHh2Pox1TwG-ECbd0euTF56v-o9hebYU01W9Rf254zzrfsCCl4S-Sp9ytU1u0XRAiJq4smSdR77CFHL127IHvdGT1gz7tX7e0mbpg/s400/image15.JPG" alt="" id="BLOGGER_PHOTO_ID_5157893351619276690" border="0" /></a><br />A tabular report appears. If you view a cube, all measures in the cube are displayed. In the Data Viewer, you can:<br /><ul><li>Drill up or down on the dimension values</li><li>Pivot or rotate the view of the data by dragging the edges (rows, columns, and pages) to new positions</li><li>Use the query builder to slice and dice the data<br /></li></ul>This basic crosstab control is used extensively in Oracle Business Intelligence tools, including OracleBI Beans, Discoverer Plus OLAP, and administrative tools such as AWM and Oracle Warehouse Builder. Also, third-party tools and applications sold by Oracle partners that use the OracleBI Beans technology use this same user interface.<br /><br /><span style="font-weight: bold;">Note</span> - When you are developing an analytic workspace always check your data after it has loaded. Do not just assume the data is correct. 
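One simple sanity check, sketched here with purely illustrative table and column names, is to total the source fact table and compare the figures with the corresponding aggregate cells shown in the Data Viewer:

```sql
-- Illustrative names. Totals by month from the source fact table...
SELECT t.month_code,
       SUM(f.budget_amount) AS source_total
FROM   budget_fact f
       JOIN time_dim t ON t.time_id = f.time_id
GROUP  BY t.month_code;

-- ...and the grand total, which should match the top-level cell
-- (every dimension at its highest level) for the measure in the
-- Data Viewer.
SELECT SUM(budget_amount) AS source_grand_total
FROM   budget_fact;
```

If the figures disagree, check the load log for rejected records before investigating anything else.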
It is always good practice to go back to the fact table and make sure the totals from the source data match the totals in the OLAP cube.<br /><br />In end-user tools and applications, more functionality (such as formatting, colour coding, and cell actions) is enabled in Discoverer Plus OLAP, as you see in the lesson titled “Building Analytical Reports with OracleBI Discoverer Plus OLAP.”<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEicURJY1Svrg16JwcycHApRGzy9GH785u3PqeYc5dwA_CBSCpqYHKWmu4eqDpdj16MZYtCNNAbfrQsMGLXgEvkwcfwY3wB6k0_U-l6D5L-YMf-in2uu2Q1CibTcnwLEnI-a1mPJxfGrkUE/s1600-h/image16.JPG"><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEicURJY1Svrg16JwcycHApRGzy9GH785u3PqeYc5dwA_CBSCpqYHKWmu4eqDpdj16MZYtCNNAbfrQsMGLXgEvkwcfwY3wB6k0_U-l6D5L-YMf-in2uu2Q1CibTcnwLEnI-a1mPJxfGrkUE/s400/image16.JPG" alt="" id="BLOGGER_PHOTO_ID_5157893592137445298" border="0" /></a><br /><br /><span style="font-weight: bold;">Note: </span>From the File menu within the Data Viewer, or from the Query Builder tool, you can access the Oracle OLAP Query Builder. 
This query wizard is used throughout Oracle Business Intelligence tools.<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjX9Hvdvbt7BA7cTZhC2jR7CInrO-vzcQyenZVQGlTpHdupgjreoJ-gIoTNJpg8Y_GGwReiJItoA02PVKffP3hVcLIeu58iOcAtZzu_IdZXiY5AMpO7hmdVjTGne8SFn-rEwyAS8QD3Eog/s1600-h/image17.JPG"><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjX9Hvdvbt7BA7cTZhC2jR7CInrO-vzcQyenZVQGlTpHdupgjreoJ-gIoTNJpg8Y_GGwReiJItoA02PVKffP3hVcLIeu58iOcAtZzu_IdZXiY5AMpO7hmdVjTGne8SFn-rEwyAS8QD3Eog/s400/image17.JPG" alt="" id="BLOGGER_PHOTO_ID_5157893506238099362" border="0" /></a><br /><br />As shown in the image above, you drill down on data to the lowest levels of detail by clicking the arrow icon to the left of the dimension value. Notice that the measure appears to the user as fully aggregated at all level combinations of all dimensions. This is an important feature of the Oracle OLAP dimensional model. All data is presented to the end user as if it is already aggregated and calculated, even if some or all of the data being displayed is being calculated on the fly.<br /><br />For example, some of the budget data has been pre-aggregated during the maintenance task, and some of it is being calculated dynamically. The end user cannot tell the difference, and does not need to know. The AW contains the data and the calculation logic and presents the results that the user needs. From the technical perspective, not even the query behind this crosstab needs to know whether the measure cells being requested are pre-computed or not. The query simply requests these cells from the database, and the AW engine performs any calculations required at query time.<br /><br />In some cases you may need to move beyond the default settings described in this workshop. 
Therefore, in the next workshop we will look at the other tabs that are part of the Cube wizard. These tabs control sparsity, compression and partitioning features, aggregation rules, and summarization strategies.Keith Lakerhttp://www.blogger.com/profile/01039869313455611230noreply@blogger.com0tag:blogger.com,1999:blog-3820031471524503731.post-68480039371941281562008-01-14T08:43:00.000-08:002008-12-11T15:25:49.708-08:00OLAP Workshop 4 : Managing Different Types of HierarchiesIn the previous posting we started to look at building our first analytic workspace using Analytic Workspace Manager. At this stage don’t forget that we can also use Warehouse Builder to perform the same tasks and in many cases, especially on large-scale projects, this will be the product of choice for designing, building and maintaining your analytic workspaces.<br /><br />At the end of the last workshop we had defined a simple time dimension and examined the various components that make up a dimension:<br /><ul><li>Levels</li><li>Hierarchies</li><li>Attributes</li></ul>In this workshop we are going to look in more detail at the types of hierarchies that you might need to design and map within your environments.<br /><br />Most dimensions will have at least one hierarchy, but Oracle OLAP also supports completely flat dimensions where no hierarchy exists. This is rare, but it does occur in some cases; for these dimensions it is always wise to have an “All Members” level, as this will allow business users to pivot the dimension out of their query by selecting that top level. 
Otherwise their queries will always be pinned to a single dimension member within the page dimension.<br /><br />A hierarchy defines a set of parentage relationships between all or some of a dimension's members:<br /><ul><li>Used for rollups of data.</li><li>Used for end-user navigation; e.g., drill-down.</li></ul>While multiple hierarchies are supported, each member can have only one parent within each hierarchy. Let's look at some basic examples:<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiUdAjrJcwXUEzxgP6AMy9_4FRKMzNr2TdL2xrMwKWzSKBOJAozCGEJLE4XWqQ-4mklc80PpP23LbvxjdyRoe2Sz9742pzfBsfBfMsn057zkq1tXme7SzRI5RV-bQHPUiHTE8sopYTWexY/s1600-h/Image1.JPG"><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiUdAjrJcwXUEzxgP6AMy9_4FRKMzNr2TdL2xrMwKWzSKBOJAozCGEJLE4XWqQ-4mklc80PpP23LbvxjdyRoe2Sz9742pzfBsfBfMsn057zkq1tXme7SzRI5RV-bQHPUiHTE8sopYTWexY/s400/Image1.JPG" alt="" id="BLOGGER_PHOTO_ID_5151293661922569474" border="0" /></a><br />In the first image we have a traditional level-based hierarchy where each child has a parent at the next level up in the hierarchy, although the number of children at each node may, and usually does, differ between nodes. The second image shows another type of level-based hierarchy that is sometimes referred to as a “Skip Level” hierarchy. This is where a leaf node links to a higher-level parent above its next most obvious level. The Oracle database can support skip-level relationships within relational hierarchies; however, this is limited to skipping to only one specific level. 
Oracle OLAP is able to support skip-levels across multiple levels, as seen here:<br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjCyMAhzjZeW5rmQZS_2UDhyphenhyphen82gnMhM4CCwXMvnTZoe4fnUKh7jERTMceUxq2N0-73AsF6npXEmmUJrUsbO0A6HD5Ri4ncnqk_iSAjvYOG5mHXm1N60aCpOwMrXcLysfRXcylJCvnsVOx4/s1600-h/Image1b.JPG"><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjCyMAhzjZeW5rmQZS_2UDhyphenhyphen82gnMhM4CCwXMvnTZoe4fnUKh7jERTMceUxq2N0-73AsF6npXEmmUJrUsbO0A6HD5Ri4ncnqk_iSAjvYOG5mHXm1N60aCpOwMrXcLysfRXcylJCvnsVOx4/s400/Image1b.JPG" alt="" id="BLOGGER_PHOTO_ID_5151293966865247506" border="0" /></a><br />Oracle OLAP is able to manage these types of relationships quickly and easily because all types of hierarchies are effectively stored as parent-child relationships. A derivation of the skip-level hierarchy is the “Ragged” hierarchy. This is where leaf-nodes are located at different levels within the hierarchy. 
Obviously this can have an impact on the data loading and aggregation plans; however, Oracle OLAP is more than capable of handling this type of scenario in just the same way as any other level-based hierarchy.<br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEis5fYmJcbpfq_ruMeev9gcQWdALwxn18-d5JyU4XKbk4Lv3_zQWr5uVD4qCK97AjUmYfdrFevzEGSqWhlPrAmGtlb8owqu2LQZ1wM0URRFFEPEe1fguW50hhEHX4xIe29Lc6XJUnhKlhI/s1600-h/Image1a.JPG"><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEis5fYmJcbpfq_ruMeev9gcQWdALwxn18-d5JyU4XKbk4Lv3_zQWr5uVD4qCK97AjUmYfdrFevzEGSqWhlPrAmGtlb8owqu2LQZ1wM0URRFFEPEe1fguW50hhEHX4xIe29Lc6XJUnhKlhI/s400/Image1a.JPG" alt="" id="BLOGGER_PHOTO_ID_5151294263217990946" border="0" /></a><br />Of course you can combine some of these structures to create more complicated relationships such as a “Ragged-Skip” level hierarchy. These more complex structures are also supported.<br /><br />The last type of hierarchy shown above is a simple flat hierarchy, which, as explained earlier, may or may not be an ideal type of dimension to model depending on how your business users plan to build queries. In all these cases, the hierarchy is defined based on levels.<br /><br />One type of hierarchy not shown, but which is supported by Oracle OLAP, is the “Value” based hierarchy, of which the typical Employee/HR table is the most common example. This type of hierarchy contains no levels and is dealt with as a pure parent-child relationship. In this case the level names are converted into attributes to help business users define their queries.<br /><br />Across all these types of hierarchies there are some simple rules that need to be followed. It is recommended that you create at least one top level on each of your hierarchies. 
Note that some types of dimensions, such as time, may require multiple top levels, such as Years.<br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjyzFDubpCR81kWNVOV7XmPY4egzIVVF5MYYbCrDwQdqrLy9vSd7NtZnig7D9TPgTAju5weEolXl8MrK_s-nYevBVfIPi_sXodrAGQgS1SUoLJJSWVWR3-WTNYFaYIzCg_Fitzr3RdVhsA/s1600-h/Image2.JPG"><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjyzFDubpCR81kWNVOV7XmPY4egzIVVF5MYYbCrDwQdqrLy9vSd7NtZnig7D9TPgTAju5weEolXl8MrK_s-nYevBVfIPi_sXodrAGQgS1SUoLJJSWVWR3-WTNYFaYIzCg_Fitzr3RdVhsA/s400/Image2.JPG" alt="" id="BLOGGER_PHOTO_ID_5151294589635505458" border="0" /></a><br />What you cannot do is have a child owned by multiple parents within the same hierarchy, as shown below. In this case, you would need to create two separate hierarchies to manage the relationships separately.<br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgDppDVbxSzD6d8cM8XdGCiaFEM2t_kQXsxAuE6lNpgmWukkm-vuPOKpHuuEKT2aL4AyLmw1v9Oz5lLfF8ByrK71xk9avgtCgNN_8vBfOdbyUbYY-kYqS3eliVWi8Y9OjRZ1Dj9NxoPWXQ/s1600-h/Image3.JPG"><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgDppDVbxSzD6d8cM8XdGCiaFEM2t_kQXsxAuE6lNpgmWukkm-vuPOKpHuuEKT2aL4AyLmw1v9Oz5lLfF8ByrK71xk9avgtCgNN_8vBfOdbyUbYY-kYqS3eliVWi8Y9OjRZ1Dj9NxoPWXQ/s400/Image3.JPG" alt="" id="BLOGGER_PHOTO_ID_5151294761434197314" border="0" /></a><br />The interesting point here is that the basic design of the dimension and its related levels, hierarchies and attributes is largely consistent across all these different types of structures. The only real difference is between level-based and value-based relationships, where value-based dimensions do not contain levels. 
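Since all of these hierarchy types are effectively stored as parent-child relationships, the rollup behaviour can be sketched in a few lines of plain Python (member names are invented for illustration; this is not OLAP DML):

```python
# Invented geography members: every hierarchy type above reduces to the
# same child -> parent pairs inside the analytic workspace.
PARENT = {
    "Boston": "Massachusetts",   # regular level-based link
    "Massachusetts": "USA",
    "Guam": "USA",               # skip-level: leaf linked to a higher level
    "USA": "All Geographies",    # top level
}

def ancestors(member):
    """Walk the parent pointers up to the root."""
    chain = []
    while member in PARENT:
        member = PARENT[member]
        chain.append(member)
    return chain

def rollup(leaf_values):
    """Aggregate leaf measures into every ancestor, whatever its level."""
    totals = dict(leaf_values)
    for leaf, value in leaf_values.items():
        for ancestor in ancestors(leaf):
            totals[ancestor] = totals.get(ancestor, 0) + value
    return totals
```

Note how "Guam", which skips the intermediate level entirely, aggregates exactly like any other leaf: only parent pointers are ever followed.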
Fortunately, the dimension loading routines manage these types of dimension structures transparently.<br /><br />The next step, having defined our dimensions and their associated hierarchies, is to map the source data to the actual dimension itself. To help with this process, and to accommodate some of these more complex relationships, the AWM Mapping Editor allows for three types of source data:<br /><ul><li>Star format source table</li><li>Snowflake collection of source tables</li><li>Other</li></ul>The last option allows you to use just about any type of relational schema design as a source in the mapping editor.<br /><br /><span style="font-weight: bold;font-size:180%;" ><span style="font-size:130%;">The Mapping Editor</span><br /></span><br />The mapping editor is comprised of four main areas:<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjPnHCy_sQJGzU7AEQ8r3s0Fe7K1dQB20rU5IsOciq4wYfeKJq2BTGQ549EPL6RP8Z6cfzU3U0C30QMP_hyphenhyphen7xjmz0rZYxvCwtQfVxAbCw401sU6PAOBTcviVOpNQJ4wCPwznnn2QZK_hyphenhyphenc/s1600-h/image11.JPG"><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjPnHCy_sQJGzU7AEQ8r3s0Fe7K1dQB20rU5IsOciq4wYfeKJq2BTGQ549EPL6RP8Z6cfzU3U0C30QMP_hyphenhyphen7xjmz0rZYxvCwtQfVxAbCw401sU6PAOBTcviVOpNQJ4wCPwznnn2QZK_hyphenhyphenc/s400/image11.JPG" alt="" id="BLOGGER_PHOTO_ID_5155257023383580130" border="0" /></a><br /><br /><ul><li><span style="color: rgb(255, 0, 0); font-weight: bold;">1</span> – The mapping editor is launched from the main navigator. 
There is a mapping editor for each dimension and each cube.</li><li><span style="font-weight: bold; color: rgb(255, 0, 0);">2</span> – Schema List: lists the available tables, views and synonyms where the owner of the AW has been granted SELECT privilege.</li><li><span style="font-weight: bold; color: rgb(255, 0, 0);">3 </span>– The Mapping Canvas: dragging tables, views and/or synonyms onto the mapping canvas makes them available for use within a mapping.</li><li><span style="color: rgb(255, 0, 0); font-weight: bold;">4</span> – Mapping Control: controls the type of layout, which includes:</li><ul><li>Star schema</li></ul><ul><li>Snowflake schema</li></ul><ul><li>Other</li></ul></ul><br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjgJKWEjCnE6bW9B95BIfbyBE2Xq5TbEpTjJ1fULROKfz0mS9TyEnfgToNrPBS4hG97duZHw24AJyIIBkA8TepzT0O2niA0GmpsmyTjgveqW8SeiMh51yeW000MRpEpeJpsdijEe47zcck/s1600-h/image10.JPG"><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjgJKWEjCnE6bW9B95BIfbyBE2Xq5TbEpTjJ1fULROKfz0mS9TyEnfgToNrPBS4hG97duZHw24AJyIIBkA8TepzT0O2niA0GmpsmyTjgveqW8SeiMh51yeW000MRpEpeJpsdijEe47zcck/s400/image10.JPG" alt="" id="BLOGGER_PHOTO_ID_5155256933189266898" border="0" /></a><br /><br /><br /><br />In the following sections we will look at how to use the mapping editor to manage different schema layouts to model different types of dimensions and hierarchies.<br /><br /><span style="font-size:130%;"><span style="font-weight: bold;">Types of Dimension Source Tables/Views</span><br /></span>Firstly, a quick best-practice tip. Personally, I always find it useful to map to views rather than directly to tables. This provides more control over the data passed into the data loader (which can be useful for testing), especially when trying to perform incremental updates from a fact table. 
But we will look at this in more detail when reviewing processes for designing and building cubes.<br /><br /><span style="font-weight: bold;">Star</span><br />A star schema provides one table or view with columns containing member IDs representing all levels of a hierarchy for each dimension. Each row in the table specifies a branch in the hierarchy. Additional columns identify additional attributes for each level, such as long and short descriptions. In the case of a time dimension, additional attributes will be required to provide information on end date and time span for each level.<br /><br />Where a hierarchy is unbalanced and contains skip-levels, or is ragged, or is a combination of both, some rows may contain blank entries in specific columns.<br /><br />OLAP dimension member IDs must be unique within a level, which is normal in relational models, but they may also need to be unique across levels as well. In fact, most people forget or try to ignore this requirement and often hit problems later when loading data into their cubes. OLAP stores dimension members as a single continuous list of IDs. If your source keys are not unique across levels then you must take the option of generating surrogate keys, as stated in the previous workshop.<br /><br />Enabling the surrogate key option prefixes the level name to the member ID, which should then guarantee uniqueness. However, this is only possible with level-based hierarchies. 
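As a rough illustration of the uniqueness problem and the surrogate-key fix (plain Python with invented data; the real behaviour happens inside the AW build):

```python
# Months and quarters are both numbered from 1, so natural keys collide
# once OLAP stores all members in one continuous list.
months   = ["1", "2", "3"]
quarters = ["1", "2"]

natural = months + quarters
assert len(set(natural)) < len(natural)  # '1' and '2' appear at both levels

def surrogate(level_name, key):
    """Prefix the level name to the source key, as the surrogate option does."""
    return f"{level_name}_{key}"

surrogates = ([surrogate("MONTH", m) for m in months]
              + [surrogate("QUARTER", q) for q in quarters])
# surrogates == ['MONTH_1', 'MONTH_2', 'MONTH_3', 'QUARTER_1', 'QUARTER_2']
```

With the level name folded into each key, the combined list is unique across levels.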
If your dimension requires a value-based hierarchy you must use natural keys.<br /><br />In summary:<br /><ul><li>Natural keys:</li><ul><li>Created in the AW as is from the source table or view (except that numeric and date values become text).</li></ul><ul><li>If the source table had months 1, 2, 3 then the AW dimension values would be '1', '2', '3'.</li></ul><li>Surrogate keys:</li><ul><li>The level name is prefixed to the source table or view ID value.</li><li>If the source table had months 1, 2, 3 then the AW dimension values would be 'MONTH_1', 'MONTH_2', 'MONTH_3'.</li></ul></ul><span style="font-weight: bold;">Mapping a Star Based Schema</span><br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhWsbejMpQbORLUEUX1bUwtsD8OpUaHa-DrKniumbX2_KrwZaoA3zqWMFj6BNUWaI_QEL3OJRB1ZxKJnJFnAIUJdjgMWUjGDvNVvUdZa9tUp8lTAfO6n0bXt4OS_EvFc6gVvLp9FD6HQzM/s1600-h/Image4.JPG"><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhWsbejMpQbORLUEUX1bUwtsD8OpUaHa-DrKniumbX2_KrwZaoA3zqWMFj6BNUWaI_QEL3OJRB1ZxKJnJFnAIUJdjgMWUjGDvNVvUdZa9tUp8lTAfO6n0bXt4OS_EvFc6gVvLp9FD6HQzM/s400/Image4.JPG" alt="" id="BLOGGER_PHOTO_ID_5151295616132689234" border="0" /></a><br />The steps for using a star schema are:<br /><ul><li>Natural or surrogate keys allowed.</li><ul><li>Must use surrogate keys if dimension values are not unique across levels.</li></ul><li>Define levels and a level-based hierarchy.</li><li>In the mapping editor choose Star Schema as the Type of Dimension Table(s).</li></ul><span style="font-weight: bold;">Dimension Objects used in the Mapping</span><br />The Mapping Editor allows mapping from the source table to the member and attributes at each level. Each attribute is shown as a separate entry in the dimension object in the editor. 
The editor will not allow mappings from more than one column to each element, although AWM 11g removes the restriction by allowing simple transformations to be performed during the data loading process.<br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjZaFWYRV0OctbTTTzyo_yB0dUup3EBYzZ7Qnc6tIkrjl8Dg9dixUaswIa7CmRcTYfI210CRTWsBUWPUl-c2J5OuMJn-XbM0ofuLFGHSgxemPKOL4-WY69-BK4YaJWIZLtQEkwHBqlK9eo/s1600-h/Image6.JPG"><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjZaFWYRV0OctbTTTzyo_yB0dUup3EBYzZ7Qnc6tIkrjl8Dg9dixUaswIa7CmRcTYfI210CRTWsBUWPUl-c2J5OuMJn-XbM0ofuLFGHSgxemPKOL4-WY69-BK4YaJWIZLtQEkwHBqlK9eo/s400/Image6.JPG" alt="" id="BLOGGER_PHOTO_ID_5151296002679745890" border="0" /></a>Here is an example of a completed mapping for the Product dimension. Note the long and short description attributes share the same source (so it is possible to map a source column to multiple target attributes).<br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhkBODXQef2Ej6pI9a1ozT5A82PKy4LXVtVPizIqyqTnsgTw-ZOKmBnYJBkhZF8GZWuor3GkEjmIZRsrM1hiROybhZL2B3MDciVpuzDvmjVPLw4-pc2tzLRWMl63jJdrq24HN_DcUg7BkY/s1600-h/Image5.JPG"><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhkBODXQef2Ej6pI9a1ozT5A82PKy4LXVtVPizIqyqTnsgTw-ZOKmBnYJBkhZF8GZWuor3GkEjmIZRsrM1hiROybhZL2B3MDciVpuzDvmjVPLw4-pc2tzLRWMl63jJdrq24HN_DcUg7BkY/s400/Image5.JPG" alt="" id="BLOGGER_PHOTO_ID_5151296097169026418" border="0" /></a><br />Some query tools will differentiate between long and short descriptions. 
For example, both Discoverer and the OLAP Spreadsheet Addin for Excel will use short descriptions for dimensions used as column headers and long descriptions when the dimension is used in the page or row edge.<br /><br />If you do not provide a long and/or short description, the data loader will default to using the dimension key to populate these attributes.<br /><br /><span class="Apple-style-span" style="font-weight: bold;">Snowflake</span><br />A snowflake schema provides separate tables or views for each level of a hierarchy. Each row in the table specifies a level in the hierarchy with an additional column to link to each parent across the various hierarchies. The same basic requirements apply as for star schemas in terms of uniqueness.<div><br /><span style="font-weight: bold;">Mapping a Snowflake Based Schema<br /><br /></span><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhepSxp_jDCqWWZj1aT40G1fqvmAZmYNEwGYEPf6xXaMCPCifs6biXJ1ojcLQYxgTn6FCiVr1rGee3byHQ1lqn9VAje8qYVHfFC8AFVyGuQQy0K-2da4z7dsq5vGDaLIIukV-pmd-d2u94/s1600-h/Image7.JPG"><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhepSxp_jDCqWWZj1aT40G1fqvmAZmYNEwGYEPf6xXaMCPCifs6biXJ1ojcLQYxgTn6FCiVr1rGee3byHQ1lqn9VAje8qYVHfFC8AFVyGuQQy0K-2da4z7dsq5vGDaLIIukV-pmd-d2u94/s400/Image7.JPG" alt="" id="BLOGGER_PHOTO_ID_5151296810133597570" border="0" /></a><br />The steps for using a snowflake schema are:<br /><ul><li>Natural or surrogate keys allowed.</li><ul><li>Must use surrogate keys if dimension values are not unique across levels.</li></ul><li>Define levels and a level-based hierarchy.</li><li>Choose Snowflake schema as the Type of Dimension Table(s).</li></ul><br /><span style="font-weight: bold;">Dimension Objects used in the Mapping</span><br />The mapping editor has to be switched to “Snowflake” mode using the pulldown 
selection dialog at the top of the editor. The mapping canvas will then change to allow you to map each member, its parent and associated attributes at each level.</div><div><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjXHBiu3mNQocNTzLoJi0cMYRszYdFE7VzW0T-y0IyTBpTDwJjlAVt3ZyAJXjYpqwEsbI4JqalgCRj3IECsb4TBV-WlJLZnoSj06cGSpGALFEmNcc755CDccRw3sK8fSV8OydLzVXzKW18/s1600-h/Image9.JPG"><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjXHBiu3mNQocNTzLoJi0cMYRszYdFE7VzW0T-y0IyTBpTDwJjlAVt3ZyAJXjYpqwEsbI4JqalgCRj3IECsb4TBV-WlJLZnoSj06cGSpGALFEmNcc755CDccRw3sK8fSV8OydLzVXzKW18/s400/Image9.JPG" alt="" id="BLOGGER_PHOTO_ID_5151297308349803922" border="0" /></a><br />As with the Star schema mapping process, the snowflake mapping editor will not allow mappings from more than one column to each element i.e., map from a single source table or view per level. 
Here is a completed snowflake mapping:<br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhm6MM61L2ZEbWE2Vj-dwTnEXx0rpoWRBPgoJeJ8FE_n-mQIQ5zRLiVrb1gNyJZ3eXlnNLtzC-j10ZMkzekqXspfxS4QnsqCGn9DQ3amy8S5dPAF20BD9k2s8a_Ks9yR0HlWgDnL4u2nnQ/s1600-h/Image8.JPG"><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhm6MM61L2ZEbWE2Vj-dwTnEXx0rpoWRBPgoJeJ8FE_n-mQIQ5zRLiVrb1gNyJZ3eXlnNLtzC-j10ZMkzekqXspfxS4QnsqCGn9DQ3amy8S5dPAF20BD9k2s8a_Ks9yR0HlWgDnL4u2nnQ/s400/Image8.JPG" alt="" id="BLOGGER_PHOTO_ID_5151297493033397666" border="0" /></a><br /><br /><span class="Apple-style-span" style="font-weight: bold;">Collection of Tables</span><br />The basic snowflake schema can be taken a stage further by moving the various attributes, such as descriptions, to separate tables. This follows a more 3NF approach to data storage and, although it looks more complicated, it can easily be managed within AWM's mapping editor.<br /><br /><br /><span style="font-weight: bold;">Mapping a Collection Based Schema</span><br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiMYUVoMzKa5GaU_lpe_tzCVMRti0ia3QN2aP3_KG7ldL2lQzJ_3aodEPSQqF8Wlu3bcMuX0HOFcnIiZjqCyFUtEdaGqPCaoYtnk5rdzJHFu1IR-lyKgFi7CNMjnwZjceJIrbc58CJHuas/s1600-h/image12.JPG"><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiMYUVoMzKa5GaU_lpe_tzCVMRti0ia3QN2aP3_KG7ldL2lQzJ_3aodEPSQqF8Wlu3bcMuX0HOFcnIiZjqCyFUtEdaGqPCaoYtnk5rdzJHFu1IR-lyKgFi7CNMjnwZjceJIrbc58CJHuas/s400/image12.JPG" alt="" id="BLOGGER_PHOTO_ID_5155280319286193650" border="0" /></a><br /><br />In this format, natural or surrogate keys can still be used within the dimension. 
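To picture what the loader ultimately needs from such a collection of tables, here is a hedged Python sketch (all table, column and member names are invented) that joins a level table to a separate description table, falling back to the key where a description is missing, as the data loader does:

```python
# Hypothetical 3NF "collection of tables" source: the level table carries
# only keys and parentage, while descriptions sit in a separate table.
product_level = [
    {"id": "P1", "parent": "C1"},
    {"id": "P2", "parent": "C1"},
]
long_descs = {"P1": "Widget"}  # P2 deliberately has no description row

def flatten(level_rows, descs):
    """Join member, parent and attribute values into star-style rows,
    defaulting a missing description to the key itself."""
    return [
        {"member": r["id"], "parent": r["parent"],
         "long_desc": descs.get(r["id"], r["id"])}
        for r in level_rows
    ]
```

However the attributes are physically spread out, each level still resolves to the same member, parent and attribute triple.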
To map a collection of tables as described above, the mapping editor needs to be switched to “Other” mode.<br /><br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgs65aiKIBLF509V4LfYQQdsLT8JzHSmW01qmbdAQekYrTm4CyVWnfx1Rtm0pzrOS5Ej-iK-91KhS9VPzrSbxIZuSRWjIS9qmnqRIeT-cPIv2NY3bTGvdaZU7r-Afa_Ar8BTVebAR7rzIg/s1600-h/image13.JPG"><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgs65aiKIBLF509V4LfYQQdsLT8JzHSmW01qmbdAQekYrTm4CyVWnfx1Rtm0pzrOS5Ej-iK-91KhS9VPzrSbxIZuSRWjIS9qmnqRIeT-cPIv2NY3bTGvdaZU7r-Afa_Ar8BTVebAR7rzIg/s400/image13.JPG" alt="" id="BLOGGER_PHOTO_ID_5155280409480506882" border="0" /></a><br /><br /><br />When mapping the tables to the dimension, the normal rules still apply. The Mapping Editor only allows mapping to member and parent for dimension values, and member and value for attributes, at each level. It will not allow mappings from more than one column to each element, but you can map from an arbitrary set of source tables and/or views which have member and value columns.<br /><br /><br /><br /><br /><span style="font-weight: bold;">Value Based (Parent Child)</span><br />This is probably the simplest type of relationship to manage from a mapping perspective. Likely sources for this type of mapping are other AWs, where the source data is from an OLAP-enabled SQL view, or another multi-dimensional engine.<br /><br />The source for this type of relationship is normally a two-column table that provides the key and the parent for each child. 
Other columns are used to provide additional attributes.<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgk4LV3emDvLcv9qQww1o2qydik9j-QMh3RhJneIdIGH2bDCsEC308LWeDI84ZI8RGq0v-pPMzd4d1sXN9ajhRZnCWubG8roF_AMbOZ5k0cEJzy4X2qkw9u-DeTs5-nDe_cg_1l9qGI4_U/s1600-h/image14.JPG"><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgk4LV3emDvLcv9qQww1o2qydik9j-QMh3RhJneIdIGH2bDCsEC308LWeDI84ZI8RGq0v-pPMzd4d1sXN9ajhRZnCWubG8roF_AMbOZ5k0cEJzy4X2qkw9u-DeTs5-nDe_cg_1l9qGI4_U/s400/image14.JPG" alt="" id="BLOGGER_PHOTO_ID_5155280491084885522" border="0" /></a><br /><br />In this case, natural keys must be used to define the dimension, since there are no level identifiers that can be used to construct the surrogate key. In this scenario it is possible to use any of the mapping editor options (star, snowflake, or other) to construct the mapping.<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgPSedTicDKYT1Aws3_goOyt5obblIvJulDaVWTk-rGb3Y6oIQhWQYXogIsCjNdooi1ox90oizGVZ4jPs02iLrLyFSZaehJJn5yTy8K7IS21Eca7wwOdJJ9hg_DnJt2BRXhN4GMJS0lHR4/s1600-h/image15.JPG"><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgPSedTicDKYT1Aws3_goOyt5obblIvJulDaVWTk-rGb3Y6oIQhWQYXogIsCjNdooi1ox90oizGVZ4jPs02iLrLyFSZaehJJn5yTy8K7IS21Eca7wwOdJJ9hg_DnJt2BRXhN4GMJS0lHR4/s400/image15.JPG" alt="" id="BLOGGER_PHOTO_ID_5155280551214427682" border="0" /></a><br /><br />There are a few things to remember when designing a parent-child/value-based hierarchy. Firstly, there are no levels; therefore, certain calculations, such as share, are not possible. 
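For instance, given the two-column parent-child source just described, a member's depth in the tree can be derived by walking to the root. A purely illustrative Python sketch (invented employee data; not how the loader actually works):

```python
# A hypothetical two-column (child, parent) employee source with no levels.
EMP_PARENT = {
    "CEO": None,          # root of the tree
    "VP Sales": "CEO",
    "Rep 1": "VP Sales",
    "Rep 2": "VP Sales",
}

def depth(member):
    """Distance from the root: a level-like value that can be derived
    even though a value-based hierarchy has no real levels."""
    d = 0
    while EMP_PARENT[member] is not None:
        member = EMP_PARENT[member]
        d += 1
    return d
```

An attribute derived like this can stand in for the missing level identifier when business users build selections.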
A parent-child hierarchy cannot be used in the partition statement of a cube because there is no level identifier to act as the partition key.<br /><br />However, it is possible to provide a pseudo level identifier by creating a level attribute. This allows users to create selections using the attribute in the normal way. In some cases, a value-based hierarchy may be the only way to manage an unbalanced hierarchy (where not all branches have the same number of dimension members).<br /><br /><span style="font-weight: bold;">Flat List</span><br />Another version of the parent-child/value-based hierarchy is the flat-list dimension. In this scenario, the dimension has no hierarchies and is simply a flat list of dimension members. Personally, I would not recommend building this type of dimension, simply because there is no top level. This makes it difficult for business users to pivot the dimension out of the query, since they have to pin the dimension to a specific member when it is hidden. This can make the query process more complicated for business users to understand.<br /><br />In most cases I would suggest that a flat-list dimension where no top level is possible is a likely candidate for migration to a series of measures within a cube. This is something you should seriously consider before creating a flat-list dimension.<br /><br />The dimension itself can have a hierarchy based on a single level. This provides the flexibility to use either surrogate or natural keys. If the dimension is designed with no levels and no hierarchy, then only natural keys are available.<br /><br /><br /><span style="font-weight: bold;">Skip, Ragged and Ragged-Skip Level Hierarchies</span><br />Ragged is a special form of skip. The diagram below shows the various scenarios that can be found in many dimensions. 
It is highly likely that at least one dimension in a data model will have one or all of these scenarios.<br /><br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjmaYlM6XIjABRpviX_79JvJV7eKg0oSVCcsUE-ztKPOakVB60lDHc7KUIfBFC_QSt7cbC_EzUCCPBKzABlSZGj5nZy8gvIIUt7H-9fWurgIJ9ydsc8o05tN8CPcU7iwyR2BuujhTON8cA/s1600-h/image16.JPG"><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjmaYlM6XIjABRpviX_79JvJV7eKg0oSVCcsUE-ztKPOakVB60lDHc7KUIfBFC_QSt7cbC_EzUCCPBKzABlSZGj5nZy8gvIIUt7H-9fWurgIJ9ydsc8o05tN8CPcU7iwyR2BuujhTON8cA/s400/image16.JPG" alt="" id="BLOGGER_PHOTO_ID_5155284347965517362" border="0" /></a><br />The question is: how can such a structure be represented within a relational table?<br /><br />Skip<br />In an across-format structure, one or more columns are left blank within a specific row where a skip level occurs. However, within a skip level there is a common leaf node that denotes the lowest level of the hierarchy. From the leaf node to the top level, certain columns that relate to parents of the leaf node are left blank. 
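The way such blank columns resolve into parent-child links can be sketched as follows (hypothetical Python with invented level and member names; the OLAP engine does this internally):

```python
# Hypothetical star-format rows, one column per level, top level first;
# None marks a blank (skipped) column in that branch.
LEVELS = ["total", "region", "country", "city"]
rows = [
    {"total": "World", "region": "EMEA", "country": "UK", "city": "London"},
    {"total": "World", "region": None, "country": None, "city": "Monaco"},
]

def parent_pairs(row):
    """Collapse the blanks and pair each member with its nearest
    non-blank ancestor -- the effective parent-child links."""
    filled = [row[level] for level in LEVELS if row[level] is not None]
    return list(zip(filled[1:], filled))  # (child, parent) pairs
```

In the second row, "Monaco" skips straight past the blank region and country columns to its nearest filled ancestor.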
As shown below (in this case the ID columns are not shown, but follow the same pattern):<br /><br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjexuEwFzUvGms9l10wKeAoLcwOH8mC3WN_VbFcnwO3IqWdLA8D4aZsQ0xzn58mwjMA6jG3b1CZG2DnxYZ9jEBdUh5KVIptti53DSAQ-lObemB-m3NADcbTcK38Qwa5Kvp7RnOepIL-i28/s1600-h/image17.JPG"><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjexuEwFzUvGms9l10wKeAoLcwOH8mC3WN_VbFcnwO3IqWdLA8D4aZsQ0xzn58mwjMA6jG3b1CZG2DnxYZ9jEBdUh5KVIptti53DSAQ-lObemB-m3NADcbTcK38Qwa5Kvp7RnOepIL-i28/s400/image17.JPG" alt="" id="BLOGGER_PHOTO_ID_5155284648613228098" border="0" /></a><br />This type of layout is difficult, if not impossible, for most SQL-based query tools to manage. However, recent additions to the SQL language have allowed skip-level hierarchies to be partially managed using normal query methods, although it is only possible to skip one level within a single hierarchy. Fortunately, OLAP does not enforce this constraint.<br /><br />To map this type of hierarchy, use a normal star schema approach. The OLAP engine will manage the complexity of the relationships for you.<br /><br />Ragged<br />For a ragged hierarchy, the leaf node will occur at any or all intervening levels within a hierarchy. 
Again, null values will appear in certain columns within each row.<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjzJ5eTP_DZnbd9IMP0KYyy8I4Ms25RkUMMfcjpCbof2w91ezwLMfqNc0UK1lAXyNSIxw65rY7kkWiPcw0Obgkl0iBTff4wz3PxAjc6TK4g12ImyrJj1UR16DMpT7D-K1gypedVmHWLnWQ/s1600-h/image18.JPG"><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjzJ5eTP_DZnbd9IMP0KYyy8I4Ms25RkUMMfcjpCbof2w91ezwLMfqNc0UK1lAXyNSIxw65rY7kkWiPcw0Obgkl0iBTff4wz3PxAjc6TK4g12ImyrJj1UR16DMpT7D-K1gypedVmHWLnWQ/s400/image18.JPG" alt="" id="BLOGGER_PHOTO_ID_5155284863361592914" border="0" /></a><br /><br />When defining a ragged hierarchy within a dimension, use natural keys and create a level-based hierarchy (or hierarchies). Within the dimension-mapping editor, map the source table as a star. For the cube mapping, however, the fact table requires a little more work: it is necessary to map the key for the ragged dimension to all levels in the dimension that have leaf values (or, to be safe, map to all levels). 
This is shown below:<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjuhLTFCwjeCeRlF2c6jS1ziuSTa1khXR6B8WhGugJf0LlexUGssmej545FAb0qt7WdBBDPpAmxlj7xLtRjL4XIeX3IXHj2jTGcP3gEfV2w6HkN5Q0FgA5JsS6koyZL518cmVOG_Scakm0/s1600-h/image19.JPG"><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjuhLTFCwjeCeRlF2c6jS1ziuSTa1khXR6B8WhGugJf0LlexUGssmej545FAb0qt7WdBBDPpAmxlj7xLtRjL4XIeX3IXHj2jTGcP3gEfV2w6HkN5Q0FgA5JsS6koyZL518cmVOG_Scakm0/s400/image19.JPG" alt="" id="BLOGGER_PHOTO_ID_5155289347307449970" border="0" /></a><br /><br /><br />Ragged Skip Level<br />In this scenario, looking at the image at the start of this section, we can see the leaves are not always at the lowest level; there are some intervening nulls. However, this is simply a combination of the two types of hierarchies we have already reviewed. The source table would look something like this:<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi422m8dlHFBfA7C3SUwq_oaBOgUgwBnndVVt9ormFYrCHbKHYXbKZ7ReA0NoNZVClXzB8uzDcOuRt7Ua1x7br35XPC5pFm4ivw_vOH9LXVbMNEbyd3BoMzNa3p0F1DJvjaAMoD-_xP12c/s1600-h/image20.JPG"><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi422m8dlHFBfA7C3SUwq_oaBOgUgwBnndVVt9ormFYrCHbKHYXbKZ7ReA0NoNZVClXzB8uzDcOuRt7Ua1x7br35XPC5pFm4ivw_vOH9LXVbMNEbyd3BoMzNa3p0F1DJvjaAMoD-_xP12c/s400/image20.JPG" alt="" id="BLOGGER_PHOTO_ID_5155289828343787154" border="0" /></a><br /><br />In this scenario the same rules apply as before:<br /><ul><li>Use natural keys and level-based hierarchy(ies).</li><li>Map as a star.</li><li>When mapping the fact table, map its key to all levels in the dimension which have leaf values. 
</li></ul><br />In the next posting in this series we will consider how to design and create cubes.<br /><br /><br /></div>Keith Lakerhttp://www.blogger.com/profile/01039869313455611230noreply@blogger.com1tag:blogger.com,1999:blog-3820031471524503731.post-63473982498000988512007-12-31T02:46:00.000-08:002008-12-11T15:26:00.519-08:00AWM Connection MethodsConnecting to a database using Analytic Workspace Manager (AWM) seems to cause some interesting postings on OTN. Why? Mainly because AWM provides two different connection methods, and each method has its own requirements:<br /><ul><li>JDBC - uses the normal host:port:sid connection format. I suspect this is how most people connect, since it is the way AWM is typically demonstrated.</li><li>TNS - uses either the full TNS protocol string or references a TNS entry in the TNSNAMES.ORA file.</li></ul>So let's look at these methods in a bit more detail:<br /><br /><span style="font-size:130%;">Creating a JDBC Connection</span><br />This is the easiest method to use, since AWM is configured out of the box for JDBC connections. Connecting to a database using JDBC is very straightforward. 
After launching AWM, right-click on the "Database" node and select "Add Database to tree", as shown here:<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgcbksvvx4-hA76xGDrjyLrCiljybF3ELWU0IlxsmZTIPoYsaMRBRDg2x0-xdYrHHQJYI5YGnXFnDTLfsl-lmsp9BAFz7tU-eLUT7R1sj41kqW7tH6d7mF_gfhT5XW9XZRS8pOsBCaMoXE/s1600-h/image5.JPG"><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgcbksvvx4-hA76xGDrjyLrCiljybF3ELWU0IlxsmZTIPoYsaMRBRDg2x0-xdYrHHQJYI5YGnXFnDTLfsl-lmsp9BAFz7tU-eLUT7R1sj41kqW7tH6d7mF_gfhT5XW9XZRS8pOsBCaMoXE/s400/image5.JPG" alt="" id="BLOGGER_PHOTO_ID_5150096736141552274" border="0" /></a><br />The connection dialog prompts you to enter a descriptive label and the connection information. For a JDBC connection this is simply the hostname, the port for the database listener and the database SID. 
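These three values are the components of the standard Oracle JDBC thin connect string. As a small illustration of how they fit together (the host, port and SID below are hypothetical):

```python
def jdbc_thin_url(host: str, port: int, sid: str) -> str:
    """Assemble the classic Oracle JDBC thin URL from the three values
    the AWM dialog asks for: hostname, listener port and database SID."""
    return f"jdbc:oracle:thin:@{host}:{port}:{sid}"

# Hypothetical example values:
print(jdbc_thin_url("myhost.example.com", 1521, "orcl"))
# -> jdbc:oracle:thin:@myhost.example.com:1521:orcl
```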
These values are entered in the dialog shown here:<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgt6UvRgW9zv5giuIi9Eyrg2CvWv5a0DHsajRYfrsqSfAA27zzjTpsXofcURBjmgtlcxKKn_8C_MdJzk7Ol_9RpiUR7KVp_TnjRuA1jBlS9R7iUQkUu-2ZkeWA1Jzsuh_3pR8coJE3NY8k/s1600-h/image4.JPG"><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgt6UvRgW9zv5giuIi9Eyrg2CvWv5a0DHsajRYfrsqSfAA27zzjTpsXofcURBjmgtlcxKKn_8C_MdJzk7Ol_9RpiUR7KVp_TnjRuA1jBlS9R7iUQkUu-2ZkeWA1Jzsuh_3pR8coJE3NY8k/s400/image4.JPG" alt="" id="BLOGGER_PHOTO_ID_5150096920825146018" border="0" /></a><br /><br />Once you have supplied this information the database will be added to the tree; you can then connect to your chosen database instance by providing a user name and password, as shown here:<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh26Cqyf28jggdS_C98v7h6BzDzXu7pVPbJVVFdwPxsyi1AbwhMHLiYrVyIadcrUAR62cVTzkB0u4BP4wEJ8NIr2wh0VhZCGPO05JkSifmfOhnTRUi_VUtOhwYoSKfVU_NgP_n1NWpk6ss/s1600-h/image6.JPG"><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh26Cqyf28jggdS_C98v7h6BzDzXu7pVPbJVVFdwPxsyi1AbwhMHLiYrVyIadcrUAR62cVTzkB0u4BP4wEJ8NIr2wh0VhZCGPO05JkSifmfOhnTRUi_VUtOhwYoSKfVU_NgP_n1NWpk6ss/s400/image6.JPG" alt="" id="BLOGGER_PHOTO_ID_5150098793430887090" border="0" /></a><br />The alternative method is to use a TNS entry, and this method always seems to cause errors. Typical errors are:<br /><ul><li>AWM simply aborts with no error message or warning</li><li>OLAPI exception error stating: Unable to resolve type "SYS.SQLOLAPIEXCEPTION"</li><li>An unexpected exception has been detected in native code outside the VM....... 
Library=D:\Oracle\product\10.2.0\db_1\BIN\ocijdbc10.dll</li></ul><span style="font-size:130%;">Creating a TNS connection</span><br /><br />First, we need to change the way AWM is typically launched.<br /><br /><span style="font-weight: bold;font-size:100%;" >Trapping errors with AWM</span><br />To get diagnostic information and trap any errors not shown in the AWM GUI, I always recommend using the AWMC.EXE file. This launches a DOS command window that can be used to track error messages. With the 10.2.0.3A version of AWM there are some instances where the GUI will simply crash or hang with no visible error messages. For example, if you try to use a TNS connection method, when AWM connects to the database instance and tries to retrieve the list of available AWs it can simply abort with no warning, and the DOS command window disappears. To resolve this I created a batch file called AWM.BAT which launches AWM by calling awmc.exe. Running this from a command-line window allows me to see all the relevant error messages.<br /><br /><span style="font-weight: bold;">Using a TNS connection</span><br />Connecting via the TNS method requires some additional configuration steps that might not be totally clear. The main problem appears to be the lack of any error messages if you make a mistake. If you get the basic connection string wrong, AWM will give you a reasonable error message that points you in the right direction ("TNS:could not resolve the connect identifier specified...."). However, as we all probably have lots of different Oracle products installed on our desktops/laptops, AWM is able to find, without any prompting, some of the files it needs to make a TNS connection, and this is what causes the problem.<br /><br />So which files does AWM need to make a TNS connection?<br /><br />It needs a database client installation, which provides the SQLNet layer and the DLLs necessary to support a SQLNet connection. 
This is the point at which AWM can go wrong and simply crash without warning.<br /><br />To make a TNS connection you can either reference one of the entries in the TNSNAMES.ORA file or paste the full TNS connection string, such as:<br /><br />(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=klaker-uk.uk.oracle.com)(PORT=1521))(CONNECT_DATA=(SERVER=DEDICATED)(SERVICE_NAME=beans)))<br /><br />into the connection dialog box instead of the JDBC connection string. If you want to reference an entry in the TNSNAMES.ORA file, then, assuming you have multiple Oracle homes, I would recommend setting the TNS_ADMIN environment variable so you know which TNSNAMES.ORA is being used. If you do not specify this environment variable, the ORACLE_HOME environment variable will be used to locate the TNSNAMES.ORA file. Therefore, you need to make sure you have at least the ORACLE_HOME environment variable set before you start AWM.<br /><br />Using the above batch file to run AWM, I added some additional environment variable statements as follows:<br /><br /><span style="font-family:courier new;">set TNS_ADMIN=D:\oracle\product\10.2.0.1\db_1\NETWORK\ADMIN</span><br /><span style="font-family:courier new;">set PATH=D:\oracle\awm\awm\jre\bin;D:\oracle\product\10.2.0.1\db_1\bin;</span><br /><span style="font-family:courier new;">set CLASSPATH=D:\oracle\awm\awm\jre\bin</span><br /><span style="font-family:courier new;">set ORACLE_HOME=D:\oracle\product\10.2.0.1\db_1</span><br /><span style="font-family:courier new;">call awmc.exe</span><br /><br />In this case I have referenced my 10gR2 database installation. This, however, does cause an error when AWM tries to return a list of available AWs for my TNS connection. 
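The TNSNAMES.ORA lookup order just described (TNS_ADMIN first, otherwise ORACLE_HOME) can be sketched in a few lines of Python; this mirrors the behaviour described above, with the usual NETWORK\ADMIN directory convention, and the paths are purely illustrative:

```python
import os

def tnsnames_location(env):
    """Mimic the lookup order described above: if TNS_ADMIN is set, look
    there; otherwise fall back to ORACLE_HOME's network/admin directory;
    if neither variable is set there is nowhere to look."""
    if env.get("TNS_ADMIN"):
        return os.path.join(env["TNS_ADMIN"], "tnsnames.ora")
    if env.get("ORACLE_HOME"):
        return os.path.join(env["ORACLE_HOME"], "network", "admin", "tnsnames.ora")
    return None

# e.g. tnsnames_location(os.environ) shows which file your session would use.
```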
An error log is now created that contains the following information:<br /><span style=";font-family:courier new;font-size:85%;" ><br />An unexpected exception has been detected in native code outside the VM.<br />Unexpected Signal : EXCEPTION_ACCESS_VIOLATION (0xc0000005) occurred at PC=0x61D35968<br />Function=xaolog+0x6338<br />Library=D:\oracle\product\10.2.0\db_1\bin\OraClient10.Dll<br /><br />Current Java thread:<br /> at oracle.jdbc.driver.T2CStatement.t2cParseExecuteDescribe(Native Method)<br /> at oracle.jdbc.driver.T2CPreparedStatement.executeForDescribe(T2CPreparedStatement.java:518)<br /> at oracle.jdbc.driver.OracleStatement.executeMaybeDescribe(OracleStatement.java:1030)<br /> at oracle.jdbc.driver.OracleStatement.doExecuteWithTimeout(OracleStatement.java:1123)<br /> at oracle.jdbc.driver.OraclePreparedStatement.executeInternal(OraclePreparedStatement.java:3284)<br /> at oracle.jdbc.driver.OraclePreparedStatement.executeQuery(OraclePreparedStatement.java:3328)<br /> - locked <0x1002eaf8> (a oracle.jdbc.driver.T2CPreparedStatement)<br /> - locked <0x102312c8> (a oracle.jdbc.driver.T2CConnection)<br /> at oracle.olap.awm.util.jdbc.SQLWrapper.execute(SQLWrapper.java:184)<br /> at oracle.olap.awm.util.jdbc.SQLWrapper.execute(SQLWrapper.java:62)<br /> at oracle.olap.awm.businessobject.aw.WorkspaceBO.getWorkspacesOwnedBySchemaInStandardForm(WorkspaceBO.java:200)<br /> at oracle.olap.awm.navigator.node.WorkspaceFolderNode.getChildren(WorkspaceFolderNode.java:111)<br /> at oracle.olap.awm.navigator.node.BaseNodeModel.refreshData(BaseNodeModel.java:74)<br /> at oracle.olap.awm.navigator.node.BaseNodeModel.dTreeItemExpanding(BaseNodeModel.java:221)<br /> at oracle.bali.ewt.dTree.DTreeDeferredParent.__fireExpansionChanging(Unknown Source)<br /> at oracle.bali.ewt.dTree.DTreeDeferredParent.setExpanded(Unknown Source)<br /> at oracle.olap.awm.navigator.node.BaseNode.expandHelper(BaseNode.java:2185)<br /> - locked <0x100159f8> (a java.lang.Object)<br /> at 
oracle.olap.awm.navigator.node.BaseNode.access$400(BaseNode.java:109)<br /> at oracle.olap.awm.navigator.node.BaseNode$ExpansionThread.run(BaseNode.java:2135)<br /><br />Dynamic libraries:<br />0x00400000 - 0x0041B000 D:\oracle\awm\awm\bin\awmc.exe<br />0x7C900000 - 0x7C9B0000 C:\WINDOWS\system32\ntdll.dll<br />0x7C800000 - 0x7C8F5000 C:\WINDOWS\system32\kernel32.dll<br />0x7E410000 - 0x7E4A0000 C:\WINDOWS\system32\USER32.dll<br />0x77F10000 - 0x77F57000 C:\WINDOWS\system32\GDI32.dll<br />0x76390000 - 0x763AD000 C:\WINDOWS\system32\IMM32.DLL<br />0x77DD0000 - 0x77E6B000 C:\WINDOWS\system32\ADVAPI32.dll<br />0x77E70000 - 0x77F01000 C:\WINDOWS\system32\RPCRT4.dll<br />0x629C0000 - 0x629C9000 C:\WINDOWS\system32\LPK.DLL<br />0x74D90000 - 0x74DFB000 C:\WINDOWS\system32\USP10.dll<br />0x77C10000 - 0x77C68000 C:\WINDOWS\system32\msvcrt.dll<br />0x08000000 - 0x08138000 D:\oracle\awm\awm\jre\bin\client\jvm.dll<br />0x76B40000 - 0x76B6D000 C:\WINDOWS\system32\WINMM.dll<br />0x10000000 - 0x10007000 D:\oracle\awm\awm\jre\bin\hpi.dll<br />0x00A20000 - 0x00A2E000 D:\oracle\awm\awm\jre\bin\verify.dll<br />0x00A30000 - 0x00A49000 D:\oracle\awm\awm\jre\bin\java.dll<br />0x00A50000 - 0x00A5D000 D:\oracle\awm\awm\jre\bin\zip.dll<br />0x03D70000 - 0x03E7F000 D:\oracle\awm\awm\jre\bin\awt.dll<br />0x73000000 - 0x73026000 C:\WINDOWS\system32\WINSPOOL.DRV<br />0x774E0000 - 0x7761D000 C:\WINDOWS\system32\ole32.dll<br />0x5AD70000 - 0x5ADA8000 C:\WINDOWS\system32\uxtheme.dll<br />0x03E80000 - 0x03ED0000 D:\oracle\awm\awm\jre\bin\fontmanager.dll<br />0x755C0000 - 0x755EE000 C:\WINDOWS\system32\msctfime.ime<br />0x038C0000 - 0x038DE000 D:\oracle\awm\awm\jre\bin\jpeg.dll<br />0x62F00000 - 0x62F13000 D:\oracle\product\10.2.0\db_1\BIN\ocijdbc10.dll<br />0x045D0000 - 0x04629000 D:\oracle\product\10.2.0\db_1\BIN\OCI.dll<br />0x7C340000 - 0x7C396000 C:\WINDOWS\system32\MSVCR71.dll<br />0x76BF0000 - 0x76BFB000 C:\WINDOWS\system32\PSAPI.DLL<br />0x61C20000 - 0x61E76000 
D:\oracle\product\10.2.0\db_1\bin\OraClient10.Dll<br />0x60870000 - 0x60957000 D:\oracle\product\10.2.0\db_1\bin\oracore10.dll<br />0x60A80000 - 0x60B4B000 D:\oracle\product\10.2.0\db_1\bin\oranls10.dll<br />0x63690000 - 0x636A8000 D:\oracle\product\10.2.0\db_1\bin\oraunls10.dll<br />0x60EB0000 - 0x60EB7000 D:\oracle\product\10.2.0\db_1\bin\orauts.dll<br />0x71AB0000 - 0x71AC7000 C:\WINDOWS\system32\WS2_32.dll<br />0x71AA0000 - 0x71AA8000 C:\WINDOWS\system32\WS2HELP.dll<br />0x636B0000 - 0x636B6000 D:\oracle\product\10.2.0\db_1\bin\oravsn10.dll<br />0x60FA0000 - 0x61093000 D:\oracle\product\10.2.0\db_1\bin\oracommon10.dll<br />0x60300000 - 0x6086C000 D:\oracle\product\10.2.0\db_1\bin\orageneric10.dll<br />0x63430000 - 0x63457000 D:\oracle\product\10.2.0\db_1\bin\orasnls10.dll<br />0x63750000 - 0x638C6000 D:\oracle\product\10.2.0\db_1\bin\oraxml10.dll<br />0x04640000 - 0x04651000 C:\WINDOWS\system32\MSVCIRT.dll<br />0x60960000 - 0x60A73000 D:\oracle\product\10.2.0\db_1\bin\oran10.dll<br />0x62740000 - 0x6277E000 D:\oracle\product\10.2.0\db_1\bin\oranl10.dll<br />0x62790000 - 0x627A7000 D:\oracle\product\10.2.0\db_1\bin\oranldap10.dll<br />0x627F0000 - 0x628FC000 D:\oracle\product\10.2.0\db_1\bin\orannzsbb10.dll<br />0x62530000 - 0x62583000 D:\oracle\product\10.2.0\db_1\bin\oraldapclnt10.dll<br />0x62670000 - 0x6268B000 D:\oracle\product\10.2.0\db_1\bin\orancrypt10.dll<br />0x71AD0000 - 0x71AD9000 C:\WINDOWS\system32\WSOCK32.dll<br />0x77120000 - 0x771AB000 C:\WINDOWS\system32\OLEAUT32.dll<br />0x62920000 - 0x6296D000 D:\oracle\product\10.2.0\db_1\bin\oranro10.dll<br />0x626B0000 - 0x626B7000 D:\oracle\product\10.2.0\db_1\bin\oranhost10.dll<br />0x62660000 - 0x62666000 D:\oracle\product\10.2.0\db_1\bin\orancds10.dll<br />0x04660000 - 0x04668000 D:\oracle\product\10.2.0\db_1\bin\orantns10.dll<br />0x04670000 - 0x049D6000 D:\oracle\product\10.2.0\db_1\bin\orapls10.dll<br />0x049E0000 - 0x049E9000 D:\oracle\product\10.2.0\db_1\bin\oraslax10.dll<br />0x63080000 - 
0x63284000 D:\oracle\product\10.2.0\db_1\bin\oraplp10.dll<br />0x61ED0000 - 0x61F6A000 D:\oracle\product\10.2.0\db_1\bin\orahasgen10.dll<br />0x62AB0000 - 0x62B1F000 D:\oracle\product\10.2.0\db_1\bin\oraocr10.dll<br />0x62B20000 - 0x62B66000 D:\oracle\product\10.2.0\db_1\bin\oraocrb10.dll<br />0x5B860000 - 0x5B8B4000 C:\WINDOWS\system32\NETAPI32.dll<br />0x62980000 - 0x62990000 D:\oracle\product\10.2.0\db_1\bin\orantcp10.dll<br />0x63520000 - 0x635BA000 D:\oracle\product\10.2.0\db_1\bin\orasql10.dll<br />0x77FE0000 - 0x77FF1000 C:\WINDOWS\system32\Secur32.dll<br />0x71A50000 - 0x71A8F000 C:\WINDOWS\System32\mswsock.dll<br />0x76F20000 - 0x76F47000 C:\WINDOWS\system32\DNSAPI.dll<br />0x76FB0000 - 0x76FB8000 C:\WINDOWS\System32\winrnr.dll<br />0x76F60000 - 0x76F8C000 C:\WINDOWS\system32\WLDAP32.dll<br />0x751D0000 - 0x751EE000 C:\WINDOWS\system32\wshbth.dll<br />0x77920000 - 0x77A13000 C:\WINDOWS\system32\SETUPAPI.dll<br />0x04CF0000 - 0x04D15000 C:\Program Files\Bonjour\mdnsNSP.dll<br />0x76D60000 - 0x76D79000 C:\WINDOWS\system32\Iphlpapi.dll<br />0x76FC0000 - 0x76FC6000 C:\WINDOWS\system32\rasadhlp.dll<br />0x662B0000 - 0x66308000 C:\WINDOWS\system32\hnetcfg.dll<br />0x71A90000 - 0x71A98000 C:\WINDOWS\System32\wshtcpip.dll<br />0x71F80000 - 0x71F84000 C:\WINDOWS\system32\security.dll<br />0x77C70000 - 0x77C93000 C:\WINDOWS\system32\msv1_0.dll<br />0x76C90000 - 0x76CB8000 C:\WINDOWS\system32\imagehlp.dll<br />0x59A60000 - 0x59B01000 C:\WINDOWS\system32\DBGHELP.dll<br />0x77C00000 - 0x77C08000 C:\WINDOWS\system32\VERSION.dll<br /><br />Heap at VM Abort:<br />Heap<br />def new generation total 2176K, used 226K [0x10010000, 0x10260000, 0x12770000)<br />eden space 1984K, 6% used [0x10010000, 0x100313e0, 0x10200000)<br />from space 192K, 48% used [0x10230000, 0x102475c0, 0x10260000)<br />to space 192K, 0% used [0x10200000, 0x10200000, 0x10230000)<br />tenured generation total 27488K, used 21294K [0x12770000, 0x14248000, 0x30010000)<br />the space 27488K, 77% used 
[0x12770000, 0x13c3b850, 0x13c3ba00, 0x14248000)<br />compacting perm gen total 15616K, used 15595K [0x30010000, 0x30f50000, 0x34010000)<br />the space 15616K, 99% used [0x30010000, 0x30f4ad60, 0x30f4ae00, 0x30f50000)<br /><br />Local Time = Mon Dec 31 09:52:15 2007<br />Elapsed Time = 17<br />#<br /># The exception above was detected in native code outside the VM<br />#<br /># Java VM: Java HotSpot(TM) Client VM (1.4.2_03-b02 mixed mode)<br />#<br /></span><br />Notice the error is with the <span style="font-weight: bold;">OraClient10.dll</span> file. Doing a search across all my Oracle software installations I found multiple copies of this file, with different file sizes. The file in the database home/bin directory was 2348Kb. The file in my OWB10gR2 directory was 1877Kb. Switching the batch file to point to my OWB home directory to use that OraClient10.dll file resolved the connection problem:<br /><br /><span style="font-family:courier new;">set TNS_ADMIN=D:\oracle\OWB10gHome\NETWORK\ADMIN</span><br /><span style="font-family:courier new;">set PATH=D:\oracle\awm\awm\jre\bin;D:\oracle\OWB10gHome\bin;</span><br /><span style="font-family:courier new;">set CLASSPATH=D:\oracle\awm\awm\jre\bin</span><br /><span style="font-family:courier new;">set ORACLE_HOME=D:\oracle\OWB10gHome</span><br /><span style="font-family:courier new;">call awmc.exe</span><br /><br />Therefore, it would appear that the latest database version (10.2.0.3) of the OraClient10.dll file is somehow incompatible with the latest version of AWM10.2.0.3A. 
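The search across installations for duplicate copies of the DLL can be scripted. Here is a rough Python sketch (the example path in the comment is illustrative; point it at whatever Oracle homes exist on your machine):

```python
import os

def find_dll_copies(search_roots, dll_name):
    """Walk each directory tree and record every copy of dll_name with its
    size in bytes, so version mismatches between Oracle homes stand out.
    Matching is case-insensitive (the file appears as both OraClient10.dll
    and OraClient10.Dll in the log above)."""
    copies = {}
    for root in search_roots:
        for dirpath, _subdirs, files in os.walk(root):
            for name in files:
                if name.lower() == dll_name.lower():
                    path = os.path.join(dirpath, name)
                    copies[path] = os.path.getsize(path)
    return copies

# e.g. find_dll_copies([r"D:\oracle"], "OraClient10.dll")
```

Two copies with different sizes, as found here, is the hint that the wrong one may be on the PATH.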
I am not sure why, but I have logged a bug to try to resolve this.<br /><br />To summarize, if you want to define a database connection in AWM based on a TNS connection name or TNS string, do the following:<br /><br />1) Make sure you have a database client installation (or equivalent, such as OWB) that provides SQLNet<br />2) Create a batch file to run AWM<br />3) Add the following environment variables to your batch file:<br /><ul><li>TNS_ADMIN to point to your TNSNAMES.ORA file</li><ul><li>or enter the TNS connect string in full ((DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=............................)))</li></ul><li>ORACLE_HOME to point to your database client installation or equivalent.</li></ul>In addition, to avoid other possible conflicts I also set the following:<br /><ul><li>CLASSPATH - limited to just AWM</li><li>PATH - limited to just the ORACLE_HOME and AWM</li></ul>With all this in place, everything should work as normal, and if you do get an error it should be recorded in the DOS command window, which will not be closed if you launch AWM directly from a command prompt.Keith Lakerhttp://www.blogger.com/profile/01039869313455611230noreply@blogger.com0