Talking about performance

    Hi everybody,

    I'm about to implement a project for a call center customer (we have a lot of data to load into the OLAP database), and I'd like to ask for and share some tips about OLAP architecture, performance, etc.

    Well, about the sources:

    8 databases from transactional systems, holding data on time, sales, agents, and so on.

    About the current OLAP architecture:

    1 main cube per system or kind of data (1 cube for sales, 1 cube for time data, ...);
    "throwing" the details to the DrillThrough functionality.

    Note: At the moment I'm following the one-main-cube-per-kind-of-data approach (to deliver the data for each report), starting with the sales cube, which today has 10,006,407,290,880 cells. I'm afraid about performance, because this number could still grow.

    How much memory, CPU, and disk do you recommend?

    Has anyone already done something like this?
    Best regards,
    Matheus Luz
    Mobile: 55 11 97053 - 7945
    Site: dimat.com.br
    E-mail: matheus@dimat.com.br

    Hi Matheus,

    I have many cubes with more cells than yours, and it works even when using "palosetdataif"; I get response times of 2-5 seconds.
    But I have a server with 129 GB of RAM.
    You can also reduce your cube by using attributes for the data that are not very important.

    Hi Matheus,

    the number of cells in the cube is unimportant. The 10^13 you have in your cube is nothing unusual. What matters is the sparsity of the data - how many base cells you actually load into the cube; I doubt it will be more than a few million. The other aspect is how general your cube rules will be and how complex the calculations they perform are.
    Last week I was helping with performance issues in cubes with 10^27 cells.
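    As a rough illustration (the numbers here are assumed, not taken from Matheus's model): a cube with 8 dimensions of 40 elements each already spans 40^8 ≈ 6.6 * 10^12 potential cells, but if only 5 million base cells are loaded, the fill rate is roughly 5 * 10^6 / (6.6 * 10^12) ≈ 0.0001 %, and only those 5 million loaded cells actually cost memory and load time.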

    Jiri
    Hmm.....

    Jiri, thank you for your answer.

    Well, this is almost a DW project, haha.

    At the moment I don't want to put rules on the cubes, just plain aggregations and some parallel hierarchies, and then create "sub cubes" loaded with the values from the main cubes, something like this:

    Source:
    Main_Cube_Sales_All_Costumers;

    Targets:
    Sub_Cube_Sales_For_Costumer_A;
    Sub_Cube_Sales_For_Costumer_B;
    Sub_Cube_Sales_For_Costumer_C;

    Sub_Cube_Agents_Time_For_Site_A;
    Sub_Cube_Agents_Time_For_Site_B;
    Sub_Cube_Agents_Time_For_Site_C;

    I think these sub cubes can deliver quicker and more specific views than the main cubes (which have a lot of "null" coordinates in many cells);

    I know this approach is a kind of duplication, but with it I can build a view with 8 dimensions instead of 13/16/18 dimensions;

    Does anyone know a better way? Note: a cube that is not generic enough may not support all reporting needs.
    Best regards,
    Matheus Luz
    Mobile: 55 11 97053 - 7945
    Site: dimat.com.br
    E-mail: matheus@dimat.com.br

    Your approach of creating more cubes with fewer dimensions is good practice.
    These cubes are usually connected via rules if needed. Only in case of performance problems do these individual cubes contain redundant data from other cubes, to eliminate the rules.
    If you want to go this way from the beginning and load data directly into multiple cubes, that's OK.
    Cubes with aggregations only, 8 dimensions, and fewer than a few hundred million loaded cells are the simplest scenario, working well on 8-core CPUs in seconds.
    There is no reason to be worried.
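    For illustration, a minimal cross-cube rule sketch for one of the sub cubes (the cube names come from the thread; the server/database path and the dimension and element names are assumptions):

        [] = PALO.DATA("localhost/CallCenter", "Main_Cube_Sales_All_Costumers", !'Period', !'Product', "Costumer_A")

    Defined in Sub_Cube_Sales_For_Costumer_A, the !'Dimension' markers take the coordinates of the cell currently being calculated, so every sub cube cell reads the matching main cube cell with the customer fixed to "Costumer_A".
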
    Hi everyone,

    interesting - what is your experience regarding browsers?

    We have some performance issues and have found that Chrome is the browser with the fastest response time - actually twice as fast as Firefox.

    Our server has 256 GB of RAM, which is about 8 times the size of our database.


    With this setup, opening a report takes approx. 13 seconds in Firefox and about 6 seconds in Chrome.
    In our reports, from my point of view, we are not doing anything fancy: just cell access with PALO.DATAC, some comboboxes with variables, some macros loading default values into the variables at the start, and of course some diagrams. The reports are approx. 300 kB in size.
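    For reference, a typical PALO.DATAC cell access of the kind described here looks like this (a sketch - the database, cube, and element names are placeholders, and $B$1/$B$2 stand for the cells driven by the comboboxes):

        =PALO.DATAC("localhost/Sales", "Orders", $B$1, $B$2, "Revenue")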

    Is this normal?

    We are also seeing delays when scrolling - is there any way to overcome this by adjusting the browser setup?


    best regards

    Sven
    Hi SvenAndersen,

    We had a problem like this in the past, I think with Jedox 5.0.

    Which Jedox version are you using?

    Do your spreadsheets use DynaRanges? (Are they crossed with other DynaRanges? Do they build a big/complex hierarchy?)

    How many web reports and users are accessing this?

    Do you have business rules running?
    Best regards,
    Matheus Luz
    Mobile: 55 11 97053 - 7945
    Site: dimat.com.br
    E-mail: matheus@dimat.com.br

    Hi Matheus,

    I'm trying to use PALO.DATAV, but it returns the value in a 1x1 array - or so it seems when I press F9 on the function.

    I use the values in a diagram, so is there a function that can convert the value in the array into a "normal" value?
    The values are calculated correctly and are also shown in the cell, but when you look a little deeper you see that it is an array, e.g. ={0.503205399677794},

    and this is no good for a diagram.
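    A workaround sketch (the server, cube, and coordinates below are placeholders for the actual PALO.DATAV call): wrapping the function in INDEX pulls the single value out of the 1x1 array:

        =INDEX(PALO.DATAV("localhost/Sales", "Orders", $B$1, $B$2, "Revenue"), 1, 1)

    For a single numeric value, wrapping the call in SUM() instead also collapses the array to a plain number that a diagram can use.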

    Sven