pre-calculated aggregations

    Hi there,

    I always had the impression that Palo does most of the work in advance, i.e. that a request for an aggregation only delivers a pre-calculated result instead of starting to sum at query time. Is that not the case?

    Do I have to use a trick, or do I need to apply rules? The cache size in palo.ini is set to 200M.
    Would a set of rules be able to fill the cache so my users get a more "interactive" user experience?

    Thanks in advance,
    Mario
    Another trick is to do partial aggregation of your source data via the ETL server.

    I do this for large cubes, e.g. 10 years of sales data, where loading millions of individual invoice lines doesn't make sense. I sum the data by month via SQL before loading.
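
    For example, something roughly along these lines (table and column names are just placeholders for your own schema, and the exact date function depends on your database):

        SELECT
            CustomerID,
            ProductID,
            -- collapse individual invoice lines into one row per month
            DATE_TRUNC('month', InvoiceDate) AS InvoiceMonth,
            SUM(Quantity)   AS Quantity,
            SUM(LineAmount) AS Revenue
        FROM InvoiceLines
        GROUP BY CustomerID, ProductID, DATE_TRUNC('month', InvoiceDate);

    The monthly rows are then loaded into the cube instead of the raw invoice lines.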

    You have to ask the question: what is the acceptable level of drill-down for analysis?
    Hi blabj,

    the level of drill-down is determined by the individual parties and their demands. Usually one party does not require the same depth as the others.

    But: I have a workflow of about 40 steps, some of which are relevant for invoicing, some only needed when clarification becomes necessary. I wanted to avoid drill-through. Every step in the workflow represents a manual process that has to be recorded and aggregated to measure data quality, which leads to a high ratio of manual interaction.

    There are 10 dimensions and 7.8 million filled cells, with a high degree of sparsity.

    So your suggestion leads towards having a second cube that keeps pre-aggregated values, either drawn out of the primary cube or computed in SQL (... count(*) ... group by ...) from the original source? True, a reasonable approach.
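
    Just as a sketch of the SQL variant, with invented table and column names (the real grouping would follow whatever drill-down level is agreed on):

        SELECT
            WorkflowStep,
            DepartmentID,
            -- one pre-aggregated count per step instead of one row per case
            COUNT(*) AS StepCount
        FROM WorkflowLog
        GROUP BY WorkflowStep, DepartmentID;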

    But initially I really assumed that the aggregations would be held in memory!

    Did anybody try to force certain aggregations via a sum rule and thereby keep them pre-calculated instead of having them eventually thrown out of the cache?

    Thanks,
    Mario