Resources requested from Palo


  • About Alea: good to know. Do you have figures about the quantitative influence of this effect? It does not surprise me, as this kind of implementation is the first that comes to mind, even though there may be potential to optimize it ...

    The "keep everything in memory" strategy is clear; I only doubt that changing a value has to trigger a global recalculation. A saner algorithm would update an aggregation node only when its child elements report a change. The change propagates up through the consolidation levels, so only the subtrees that are really affected by the change get updated. When the sales values of a German branch office change, those in France do not need an update.
  • I'm sorry, I have no figures on how performance differs between good and bad cube designs.

    If you change a value, say in your German branch, then you also change a value in articles, cities, ... in every dimension, with all their consolidation functions. That is a lot of functions to recalculate.

    And in order not to recalculate the French office, you have to exclude all the values that are not affected, which costs server power too. ;)
  • "In order not to recalculate the French office, you have to exclude all the values that are not affected, which costs server power too."
    No, with this algorithm that exclusion costs nothing extra. The French office is simply never informed about changes in Germany :]

    Well, let me explain it more clearly: the structure of each dimension is a tree (or several trees; this is possible, but then every one of them is treated separately), and I am only focusing on one of them. It is clear that this applies to each dimension accordingly, but as they are defined as orthogonal I can talk about them independently.
    A change of a value at one of the elements only propagates along this tree up to the root (well, my tree hangs upside down ;) )
    Let's take the demo database, as everybody has it:
    If I change data referenced by the element "Germany" in the dimension "Region", then I update the value in that element and send an update event one level up the tree. This means "West" has to do the same: update its value by summing all Western European countries and send an update event one level up the tree. The same happens to "Europe", but no further update message is sent as there is no parent element.
    Nevertheless, the sum for "East" never has to be re-evaluated, because there is no side effect (otherwise the model would have to be changed); it is only queried for its current value by "Europe" in order to build the sum.

    (There might even be a further optimisation: not calculating and caching a value until it is actually queried, but that is another story.)

    Just imagine a dimension forking into two subtrees at each level, eight levels deep: with the algorithm above, a change at a leaf element (no splashing) would require only 8 summations along the path to the root instead of a complete recalculation of all 511 nodes. Quite a saving, I think.
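    In Python, the idea looks roughly like this (an illustrative sketch only, not Palo's actual implementation; all class and function names are invented):

    class Node:
        def __init__(self, name, value=0.0, children=None):
            self.name = name
            self.parent = None
            self.children = children or []
            for child in self.children:
                child.parent = self
            # Leaves hold base values; consolidated nodes cache the sum of their children.
            self.value = value if not self.children else sum(c.value for c in self.children)

        def set_value(self, new_value):
            """Change a leaf value and propagate the update along the path to the root."""
            self.value = new_value
            node = self.parent
            while node is not None:
                # Siblings are only read for their cached values, never recalculated.
                node.value = sum(c.value for c in node.children)
                node = node.parent

    # The "Region" example: updating Germany touches only "West" and "Europe";
    # the "East" subtree is never recalculated.
    germany, france = Node("Germany", 10), Node("France", 20)
    west = Node("West", children=[germany, france])
    east = Node("East", children=[Node("Poland", 5), Node("Czech Rep.", 7)])
    europe = Node("Europe", children=[west, east])
    germany.set_value(15)
    print(west.value, east.value, europe.value)  # 35 12 47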


  • Originally posted by ANP
    Yes, but in your first thread you wrote that you use a laptop, without a server.

    Normally you have a server and a client, so perhaps you could test on a server-client system to see what speed your cube would get on proper hardware. Or is your plan to use Palo only on laptops and single-user PCs?

    For speed enhancement there are two tips:

    1. Smallest dimensions first, biggest last, as you can see in the test from Irbis.
    2. As much RAM as possible on the machine doing the calculation, normally the server. OLAP engines are memory-based. If the memory is too small, the computer has to swap to disk, and that costs a factor of 1000!!!

    So if you plan a server, 4 GB are better than 2 GB.

    Perhaps this helps.

    PS: Perhaps you could install Palo on two computers, one as server and one as client.
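
    To put tip 2 into rough numbers, a back-of-the-envelope sketch in Python (the per-cell byte cost below is only an assumption, not a measured Palo figure; real overhead will be higher):

    def estimate_cube_ram_mb(dimension_sizes, fill_ratio,
                             bytes_per_value=8, bytes_per_coord=4):
        """Very rough RAM estimate for the filled cells of a sparse cube."""
        total_cells = 1
        for size in dimension_sizes:
            total_cells *= size
        filled_cells = total_cells * fill_ratio
        # Assumed cost per stored cell: one double plus one coordinate id per dimension.
        bytes_per_cell = bytes_per_value + bytes_per_coord * len(dimension_sizes)
        return filled_cells * bytes_per_cell / (1024 ** 2)

    # Example: a 6-dimensional cube where only 0.01 % of the cells are filled.
    print(estimate_cube_ram_mb([12, 4, 100, 500, 2000, 50], fill_ratio=0.0001))  # ~732 MB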


    I plan to use Palo only on my laptop.
    Specs of my laptop: RAM 1 GB, CPU 2.0 GHz, swap file 2.5 GB.

    Thank you for the tips. ;)
    Alberto M. Vitulano
  • Originally posted by ANP
    I have two questions:

    Are you using the Palo 1.5 final release?
    If you compare Palo with other software, are these OLAP systems installed on a separate server?

    We use Alea and we have a dual Opteron server with 2 GB. For testing Palo we are using a PC with a P4 3.0 GHz and 1 GB, with no separate server.

    So comparing a client (PC) + server (dual-processor machine) setup with Palo client and server on the same machine is not fair.


    1. I've installed Palo 1.5

    2. All the OLAP systems are installed on the same machine (server & client).

    3. I've compared all the OLAP systems (Palo, TM1, etc.) on the same machine, in the same hardware environment.
    Alberto M. Vitulano
  • RE: Resources requested from Palo

    Originally posted by h_decker
    Originally posted by amvitulano
    Hi h_decker,
    I'm not referring only to data import/export, but to retrieving data from the cube as well. Often, when I have to read over 10,000 cells from the cube, it takes 2-3 minutes (if the CPU doesn't lock up first).
    My concern is the delay I run into every time I explore a data cube with Palo rather than with another OLAP tool.
    My question is: will a new, faster engine be developed that lets us read/write data in the cube within a few seconds, like other multidimensional OLAP tools?
    Thank you in advance.

    :)


    Hi,
    As they say in the roadmap, it is planned to give PALO 2.0 another performance boost.

    Greetings from Cologne
    Holger


    Thank you for the news. :)
    Now I'm going to run further tests, in order to evaluate the response time of the cubes.
    As soon as possible, I'll post my results.
    Alberto M. Vitulano


  • RE: Resources requested from Palo

    Hello amvitulano,

    Originally posted by amvitulano
    ..
    I've tried other OLAP applications (like PowerOLAP, Applix TM1, etc.) and I've encountered no problem like this. With these OLAP applications (not open source) I can quickly handle thousands of values, whilst with Palo I have to wait a long time (sometimes 20-30 minutes).
    Is that normal?


    Could you please send us your data so we can trace that? Thanks!

    Regards,
    Stephanie