Increasing efficiency of cube processing

Last Post 02 Aug 2004 12:21 PM by Vinay Dharmik. 2 Replies.
Shill_SQL
New Member
23 Jun 2004 06:45 AM
We have two OLAP cubes. I have added a new dimension to give users an alternative and more efficient way to view the data. As "normal", I added the new dimension to both cubes and created the storage design by accepting the default values.

Since adding the new dimension to both cubes, the total time taken to refresh them has rocketed from 1 hour 45 minutes to over 2 hours 30 minutes. The fact table for one of the cubes has approximately 70 million records in it, and a gradual increase in processing time is to be expected since we add to this table every day, but a 45-minute jump is alarming.

The only thing I can think of is that adding the dimension and accepting the default storage settings has created the problem. I have pushed the reprocessing DTS task back as far as I can, but this only postpones the problem rather than resolving it. Is there any way to significantly improve the cube reprocessing time, without affecting how the data within the cubes is viewed and without reverting to deleting the new dimension?

Would appreciate any advice.
mimadon
New Member
29 Jun 2004 05:53 AM
If you do not have enough time in your update window to refresh the cubes, you might consider using incremental updates instead.

You'll need to isolate 'new' fact table rows (via a datetime-based view or separate fact table) and use that as the fact table input to the incremental update.

The datetime-based view will only work if the granularity of the cube's time dimension and the cube's update frequency would result in 'new' cells for all incoming fact table rows (based on the change in time).

A separate 'incoming' fact table representing the changes since the last update is the most common approach. It allows the incremental update either to update existing cells (adding the 'incoming' fact table cell values to them) or to add new cells to the cube where the 'incoming' cell does not already exist.
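Conceptually, the merge behaviour described above works like the sketch below. This is only an illustration of the idea (Analysis Services performs this internally; the dictionary-of-cells model and the function name are my own, not part of any real API):

```python
# Illustrative sketch of an incremental update: fold 'incoming' fact rows
# into existing cube cells. A cell is keyed by its dimension member
# coordinates; the value is a measure total.

def incremental_update(cube_cells, incoming_rows):
    """Merge incoming fact rows into the existing set of cube cells.

    cube_cells:    dict mapping coordinate tuples -> measure totals
    incoming_rows: iterable of (coordinates, value) pairs drawn from the
                   'incoming' fact table
    """
    for coords, value in incoming_rows:
        if coords in cube_cells:
            cube_cells[coords] += value   # add to an existing cell
        else:
            cube_cells[coords] = value    # create a new cell
    return cube_cells

# Hypothetical data: one existing cell, two incoming rows.
cells = {("2004-Q2", "UK"): 100.0}
incoming = [(("2004-Q2", "UK"), 25.0),   # updates the existing cell
            (("2004-Q3", "UK"), 40.0)]   # creates a new cell
incremental_update(cells, incoming)
```

The key point is that only the incoming rows are touched, which is why an incremental update is so much cheaper than refreshing the whole cube.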

Hope that helps....
Vinay Dharmik
New Member
02 Aug 2004 12:21 PM
1) Record size also affects processing time.
2) The maximum number of possible aggregations in a cube is the product of the number of levels in each of its dimensions. If you add levels or dimensions to a cube, the number of possible aggregations multiplies accordingly.
For example: the Time dimension has four levels, Customers has five levels, and Products has five levels. Thus, the maximum number of aggregations is 100 (5 customer levels x 5 product levels x 4 time levels). Now, if you add, say, a Day level to the Time dimension, the maximum number of aggregations increases to 125 (5 x 5 x 5). If you then add two more dimensions to the cube, each with three levels, the maximum number of aggregations increases to 1,125 (5 x 5 x 5 x 3 x 3).
3) Member properties also affect the processing time of dimensions, and hence of the cube.