max elements in a dimension

  • RE: max elements in a dimension

    Hi there :)
    I ran some tests and enlarged the 'product' dimension of the sales cube by an additional 5,000 test products with VBA (when creating 10,000 it became very slow, so I stopped). I could then build a browser view (paste) on the sales cube, enter data for the new products, and the totals were calculated very quickly ... so far so good!
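
    A minimal sketch of such a loop is below; PALO.EADD, its argument order, and the server/database and dimension names are all assumptions here, so check the add-in's function reference before reusing it:

        ' Minimal sketch: bulk-create numeric test elements in the
        ' product dimension via the Palo Excel add-in, driven from VBA.
        Sub AddTestProducts()
            Dim i As Long
            For i = 1 To 5000
                ' Assumed arguments: server/database, dimension, element
                ' type ("N" = numeric base element), name, parent, weight.
                Application.Run "PALO.EADD", "localhost/Demo", "Products", _
                    "N", "TestProduct " & i, "All Products", 1
            Next i
        End Sub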

    8o
    But: after stopping Palo I couldn't restart it (no connection to localhost). The cube file grew to 755 MB! (Or do I just have to wait much longer until the load is complete?)

    Any solutions, maybe in release 1.0?
  • RE: max elements in a dimension

    Bear in mind that modifications to the cubes are not written back to the database immediately; initially they exist only in RAM. Changes are journaled in the log, and on the next restart of the database they are actually written to the database.

    Allow Palo some time to do all that and have a mug of coffee ;), then try again. If the problem persists, let me know!

    Holger
    Jedox QA
  • RE: max elements in a dimension

    :D Hi there
    You encouraged me to wait, and indeed after 10 minutes the DB was loaded and updated. The following restarts take (only) about 3 minutes.

    Maybe a point for improvement: most of the data points in the cube are empty (null, not even 0), yet the cube file is as big as if every data point held a value. That costs a lot of hard-disk space (not so problematic) but also a lot of load time.
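
    A rough calculation shows how expensive that dense layout is. Assuming 8 bytes per stored value (an assumption, not a documented figure):

        755 MB / 8 bytes ≈ 100 million cells,

    i.e. roughly 20,000 combinations of the remaining dimensions for each of the 5,000 products, materialized whether the cell holds a value or not.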
  • Hi Palo team

    I made the same tests with TP2 (one dimension with 5,000 elements):
    Good news: the time to load is reduced to only a few seconds.
    Bad news: the DB still needs about 730 MB (one dimension with 5,000 elements),
    and enlarging a dimension to 5,000 elements with VBA is even slower than with TP1.

    Will it be possible to build bigger cubes (for example a sales cube with 20,000 products and 10,000 customers), which is not an unrealistic spec?
    This is an essential question that a technical preview should answer, and the
    answer so far is probably 'no'.
    A comparable cube (mostly empty) in TM1 or Alea needs about 1-5 MB, and with Palo TP2 I suspect I can't even create a single dimension with 20,000 elements.
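
    A rough calculation shows why a dense layout cannot reach that spec (8 bytes per cell and a 12-month time dimension are assumptions):

        20,000 products x 10,000 customers x 12 months x 8 bytes ≈ 19 GB

    before any further dimensions, whereas a sparse engine such as TM1 or Alea only pays for the cells that are actually filled.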

    What is your thinking/proposal for the next steps concerning the DB architecture?
  • I have made a test with a cube which has 5 dimensions:

    - measures (only 1 value for the time being)
    - products (4 values, one of them used most)
    - months (13 values)
    - kind of client (100 values), with a 4-level hierarchy
    - clients and client commercial responsibilities (26,000 values), with a 4-level hierarchy.

    The cube is very sparse, since a client belongs to only one kind ... (260,000 non-empty cells).
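
    For scale, multiplying the dimension sizes above gives

        1 x 4 x 13 x 100 x 26,000 = 135,200,000 potential cells,

    of which 260,000 are filled, i.e. a density of roughly 0.2%.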

    The total database file space is 180 MB.

    A standard XP desktop PC (one year old) without much memory (I do not know exactly how much, since it is not a PC I use often, and a new PALO load is still running, so I do not dare touch it ...).

    The loading (both dimensions and cube) is not optimized, since it uses only one file (Excel API from VBA), and it took several hours (over 5, but less than the whole night ...). Loading the cube alone (without updating the dimensions) took 2 hours.
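
    One Excel-side tweak that usually helps such VBA-driven loads, independent of PALO itself (a generic sketch, not the macro actually used here):

        ' Suspend Excel's recalculation and screen redraws while a VBA
        ' loop pushes many values, then restore the previous settings.
        Sub RunLoadFast()
            Application.ScreenUpdating = False
            Application.Calculation = xlCalculationManual
            On Error GoTo CleanUp    ' always restore settings, even on error

            ' ... the actual load loop writing elements / cell values ...

        CleanUp:
            Application.Calculation = xlCalculationAutomatic
            Application.ScreenUpdating = True
        End Sub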

    The response time to access cube data can be up to 1 minute (from Excel, the PHP demo script, or the RCP client) when changing the element selection in the large dimension, but can be as quick as 1 second with a 'small' view when changing the element selection for a small dimension. Sometimes most of the time is spent in some initialization performed before the paste-view window opens (not in getting the data later).

    The same data were loaded into a local Microsoft OLAP cube in about 15 minutes, and accessing the data from Excel takes 10 seconds the first time and less than 1 second afterwards for almost any change.

    Can we hope for an improvement in PALO, or is PALO clearly not made for that kind of use?

    I must also say that I tried using another Microsoft Excel add-in (more like PALO, with more flexibility to load data into individual Excel cells), but after 30 minutes of understanding nothing, I stopped trying to test the product.

    Perhaps a future solution in PALO could be a four-dimensional, non-sparse cube with several hierarchies in the same client dimension (responsibility and kind of client), plus the possibility of getting a virtual cube (of five dimensions) by crossing two different hierarchies of the same dimension. That is the way EXPRESS worked (now Oracle OLAP, but I use a very old personal PC version), with 'relations' instead of hierarchies. And that was very efficient.
    LJ

  • I use Alea with 13 dimensions.
    The biggest dimensions have
    12,000
    2,000
    1,500
    100

    That's 12,000 x 2,000 x 1,500 x 100 = 3,600,000,000,000 possible cells from those four dimensions alone. To recalculate a very big Excel sheet with 24,000 cells, my system, a 2.4 GHz Pentium IV with 1 GB memory, needs 20 minutes.

    Palo will be faster, I hope. Or am I wrong?
  • Palo and large MOLAP Databases

    Hi,

    For very fast MOLAP display I use MS Analysis Services 2000/2005 on top of Oracle DWH or Access databases (with fact tables of more than 40,000,000 records, more than 15 dimensions with 5 hierarchies of 10,000 members, and time tables filled with data from more than 20 years for each date).

    It is easy to use and very fast! Even with the default PivotTable Services in Excel you can build your own free-format reports, or use one of the best web-based server clients: ReportPortal.

    Now the difference with PALO: PALO is more for forecasting and supporting large data in Excel, while MS Analysis Services is only for reporting.

    Different ideas for different solutions:

    reportportal.com
    palo.net
    quantrix.com
    gmsbv.nl

    Regards, Marco
  • RE: Palo and large MOLAP Databases

    In my experience MS Analysis Services can be slow when you have calculated members in the cube. I don't know if this is because it lacks a concept such as the feeders/accelerators of TM1/Alea, which mark the cells a rule can populate so the engine can skip genuinely empty cells during consolidation.
    I've seen cubes in Analysis Services take over 40x longer to calculate in AS than in TM1, and if you are writing back etc. these values never really get cached, so this performance is very important in a planning application.