Palo CE Crash


  • Palo CE Crash

    Hello community,

    For a few weeks we have been running a test setup with Palo CE 3.2 on SLES 11 64-bit. The database (/opt/jedox/ps/Data) is currently 12 GB; the server should be adequately sized.
    Unfortunately, the Palo process keeps crashing, and I could not find any hint of a problem in any of the log files. In palo.ini I set verbose to debug, also without success.
    Is there a way to raise the log level further, or a place where the crashes are logged?

    Many thanks
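
    When a process dies without writing anything to its own logs, a kernel core dump is usually the next place to look. A minimal sketch for enabling one on SLES 11 is below; the binary path is an assumption based on the install prefix mentioned above, not taken from the Palo documentation:

    ```shell
    # Allow core dumps in the shell that starts the palo service
    ulimit -c unlimited

    # Show where the kernel currently writes core files
    cat /proc/sys/kernel/core_pattern

    # To collect cores in a fixed location instead (run as root):
    #   echo '/var/tmp/core.%e.%p' > /proc/sys/kernel/core_pattern
    # After the next crash, a debugger can inspect the core, e.g.
    # (hypothetical binary path):
    #   gdb /opt/jedox/ps/bin/palo /var/tmp/core.palo.<pid>
    ```

    Note that `ulimit -c` only affects processes started from that shell, so it belongs in the same init script that launches the service.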

  • Sorry, my bad.

    Our Palo system runs on a 24-core Opteron with 64 GB of RAM. We plan to use it as a Hyper-V cluster node in the future, but for now it is our best machine for testing.
    Over the last few weeks the Palo service has crashed repeatedly without any error reported in the logs. My question was where to find information about why the process crashed. The other processes, such as core, httpd, and tomcat, are not affected. I tried setting the log level in palo.ini to debug, but the log simply stops without any notice.

    The database is about 12 GB, which should not be a problem; the Palo process consumes only about 700 MB of RAM.

    I will now try to compile the 64-bit version.
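
    After the build, it is worth confirming that the resulting binary really is 64-bit. The path below is an assumption based on the install prefix mentioned earlier:

    ```shell
    # Check the architecture of the freshly built binary
    # (hypothetical path; adjust to your install prefix)
    file /opt/jedox/ps/bin/palo
    # A 64-bit build reports something like:
    #   ELF 64-bit LSB executable, x86-64 ...
    ```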




  • Hopefully solved.

    I compiled the 64-bit version, and all of our jobs ran fine tonight; the largest job finished twice as fast.

    I also added the kernel tuning parameters we use on our mail server to the startup script. This may have helped and might be of interest:

    # Linux 2.6 tuning script

    # max open files
    echo 131072 > /proc/sys/fs/file-max

    # kernel threads
    echo 131072 > /proc/sys/kernel/threads-max

    # socket buffers
    echo 65536 > /proc/sys/net/core/wmem_default
    echo 1048576 > /proc/sys/net/core/wmem_max
    echo 65536 > /proc/sys/net/core/rmem_default
    echo 1048576 > /proc/sys/net/core/rmem_max

    # netdev backlog
    echo 4096 > /proc/sys/net/core/netdev_max_backlog

    # socket buckets
    echo 131072 > /proc/sys/net/ipv4/tcp_max_tw_buckets
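
    These echo commands only last until the next reboot. The same values can be made persistent via /etc/sysctl.conf; the equivalent key names are sketched below (copied from the script above, not a Palo-specific recommendation):

    ```
    # /etc/sysctl.conf equivalents of the tuning script;
    # apply with `sysctl -p` after editing (requires root)
    fs.file-max = 131072
    kernel.threads-max = 131072
    net.core.wmem_default = 65536
    net.core.wmem_max = 1048576
    net.core.rmem_default = 65536
    net.core.rmem_max = 1048576
    net.core.netdev_max_backlog = 4096
    net.ipv4.tcp_max_tw_buckets = 131072
    ```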