GLASSFISH-18705

GlassFish increasing CPU usage / interrupting http-thread-pool after Broken pipe / GlassFish 3.1.2 will die when compression is enabled.

    Details

    • Type: Bug
    • Status: Resolved
    • Priority: Blocker
    • Resolution: Fixed
    • Affects Version/s: 3.1.2
    • Fix Version/s: 3.1.2
    • Component/s: grizzly-kernel
    • Labels:
      None
    • Environment:

      Glassfish 3.1.2
      Oracle JDK 1.6.0_32

      Description

      During regular runs in production, CPU usage starts growing 12 to 24 hours after deployment. CPU usage goes up in step with the cores in the system: here, with 6 cores, first 1 core is fully used, then 2, and so on, until all cores are at 100% and the system eventually dies. The growth is level-based, meaning first one thread spins, then the next, etc.

      Prior to the CPU burn of a thread, the following log entry appears:

      Timestamp: 09.05.2012 16:21:59.726
      Log Level: SEVERE
      Logger: javax.enterprise.system.std.com.sun.enterprise.server.logging
      Name-Value Pairs: _ThreadID=118;_ThreadName=Thread-2;
      Record Number: 328
      Message ID:
      Complete Message:

      java.io.IOException: Broken pipe
          at sun.nio.ch.FileDispatcher.write0(Native Method)
          at sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:29)
          at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:69)
          at sun.nio.ch.IOUtil.write(IOUtil.java:40)
          at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:336)
          at com.sun.grizzly.util.OutputWriter.flushChannel(OutputWriter.java:108)
          at com.sun.grizzly.util.OutputWriter.flushChannel(OutputWriter.java:76)
          at com.sun.grizzly.http.SocketChannelOutputBuffer.flushChannel(SocketChannelOutputBuffer.java:417)
          at com.sun.grizzly.http.SocketChannelOutputBuffer.flushBuffer(SocketChannelOutputBuffer.java:489)
          at com.sun.grizzly.http.SocketChannelOutputBuffer.realWriteBytes(SocketChannelOutputBuffer.java:373)
          at com.sun.grizzly.tcp.http11.InternalOutputBuffer$OutputStreamOutputBuffer.doWrite(InternalOutputBuffer.java:894)
          at com.sun.grizzly.tcp.http11.filters.ChunkedOutputFilter.doWrite(ChunkedOutputFilter.java:167)
          at com.sun.grizzly.tcp.http11.filters.GzipOutputFilter$FakeOutputStream.write(GzipOutputFilter.java:223)
          at java.util.zip.GZIPOutputStream.finish(GZIPOutputStream.java:95)
          at java.util.zip.DeflaterOutputStream.close(DeflaterOutputStream.java:140)
          at com.sun.grizzly.tcp.http11.filters.GzipOutputFilter.recycle(GzipOutputFilter.java:186)
          at com.sun.grizzly.http.SocketChannelOutputBuffer.recycle(SocketChannelOutputBuffer.java:510)
          at com.sun.grizzly.http.ProcessorTask.finishResponse(ProcessorTask.java:838)
          at com.sun.grizzly.http.ProcessorTask.postResponse(ProcessorTask.java:771)
          at com.sun.grizzly.http.ProcessorTask.doProcess(ProcessorTask.java:747)
          at com.sun.grizzly.http.ProcessorTask.process(ProcessorTask.java:1045)
          at com.sun.grizzly.http.DefaultProtocolFilter.execute(DefaultProtocolFilter.java:228)
          at com.sun.grizzly.DefaultProtocolChain.executeProtocolFilter(DefaultProtocolChain.java:137)
          at com.sun.grizzly.DefaultProtocolChain.execute(DefaultProtocolChain.java:104)
          at com.sun.grizzly.DefaultProtocolChain.execute(DefaultProtocolChain.java:90)
          at com.sun.grizzly.http.HttpProtocolChain.execute(HttpProtocolChain.java:79)
          at com.sun.grizzly.ProtocolChainContextTask.doCall(ProtocolChainContextTask.java:54)
          at com.sun.grizzly.SelectionKeyContextTask.call(SelectionKeyContextTask.java:59)
          at com.sun.grizzly.ContextTask.run(ContextTask.java:71)
          at com.sun.grizzly.util.AbstractThreadPool$Worker.doWork(AbstractThreadPool.java:532)
          at com.sun.grizzly.util.AbstractThreadPool$Worker.run(AbstractThreadPool.java:513)
          at java.lang.Thread.run(Thread.java:662)
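      The trigger condition, a client disconnecting while the server is still writing the response, can be reproduced in isolation with plain sockets. This is a minimal, hedged sketch (not GlassFish/Grizzly code) showing why a write after a client abort ends in an IOException such as "Broken pipe" or "Connection reset":

```java
import java.io.IOException;
import java.io.OutputStream;
import java.net.ServerSocket;
import java.net.Socket;

public class BrokenPipeDemo {
    public static void main(String[] args) throws Exception {
        try (ServerSocket server = new ServerSocket(0)) {
            // A client connects and immediately disconnects, like a browser
            // aborting a download mid-response.
            Socket client = new Socket("127.0.0.1", server.getLocalPort());
            Socket accepted = server.accept();
            client.close();

            // The server keeps writing its (chunked, compressed) response.
            // After the peer resets the connection, a write fails with an
            // IOException; a robust server must treat this as routine.
            try (OutputStream out = accepted.getOutputStream()) {
                byte[] chunk = new byte[8192];
                for (int i = 0; i < 1000; i++) {
                    out.write(chunk);
                    out.flush();
                }
                System.out.println("no error");
            } catch (IOException expected) {
                System.out.println("IOException: write after client abort");
            }
        }
    }
}
```

      The bug report is not about the exception itself (which is normal) but about what happens afterwards: the recycle path leaks and a thread starts spinning.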

      Soon after this, the log starts getting polluted by:
      332 WARNING GRIZZLY0023: Interrupting idle Thread: http-thread-pool-8080(50).(details) com.sun.grizzly.config.GrizzlyServiceListener 09.05.2012 16:41:29.370 _ThreadID=122;_ThreadName=Thread-2;
      331 WARNING GRIZZLY0023: Interrupting idle Thread: http-thread-pool-8080(50).(details) com.sun.grizzly.config.GrizzlyServiceListener 09.05.2012 16:41:28.370 _ThreadID=122;_ThreadName=Thread-2;
      330 WARNING GRIZZLY0023: Interrupting idle Thread: http-thread-pool-8080(50).(details) com.sun.grizzly.config.GrizzlyServiceListener 09.05.2012 16:41:27.377 _ThreadID=122;_ThreadName=Thread-2;

      where each message is just:

      Timestamp: 09.05.2012 16:41:29.370
      Log Level: WARNING
      Logger: com.sun.grizzly.config.GrizzlyServiceListener
      Name-Value Pairs: _ThreadID=122;_ThreadName=Thread-2;
      Record Number: 332
      Message ID:
      Complete Message:

      GRIZZLY0023: Interrupting idle Thread: http-thread-pool-8080(50).

      The problem is hitting more people:
      http://gregorbowie.wordpress.com/2012/05/03/java-glassfish-v3-high-cpu-and-memory-usage-locked-threads-death/
      http://home.java.net/forum/topic/glassfish/glassfish/glassfish-312-ubuntu-1004-lts-exceptions-serverlog-interrupting-idle-thread-http-thread-pool-808

      Besides this report, I kindly ask for a workaround, as the current situation forces me to restart the production server every 20-24 hours;

      The behaviour is independent of the application that is deployed. Some days ago we saw exactly the same behaviour with the admin-thread-pool
      instead of the regular http-thread-pool;

      Attachments:
      1. jstack.log.gz (5 kB, KBachl)
      2. jvm.log.gz (5 kB, KBachl)
      3. BusyWaitingOnGlassfish.png (107 kB)

        Activity

        KBachl added a comment -

        Follow up:

        currently the system is at 66% CPU; every broken-pipe IOException pins one CPU core from then on. I urgently request any hint on how to overcome/bypass this, as I can't support GlassFish 3.1.2 much longer;

        Besides:

        java.io.IOException: Broken pipe (the same stack trace as in the description above)

        the following also showed up:

        GRIZZLY0051: ProcessorTask exception.
        java.lang.NullPointerException
            at com.sun.grizzly.ssl.SSLOutputBuffer.flushChannel(SSLOutputBuffer.java:123)
            at com.sun.grizzly.http.SocketChannelOutputBuffer.flushBuffer(SocketChannelOutputBuffer.java:489)
            at com.sun.grizzly.http.SocketChannelOutputBuffer.realWriteBytes(SocketChannelOutputBuffer.java:373)
            at com.sun.grizzly.tcp.http11.InternalOutputBuffer$OutputStreamOutputBuffer.doWrite(InternalOutputBuffer.java:894)
            at com.sun.grizzly.tcp.http11.filters.ChunkedOutputFilter.doWrite(ChunkedOutputFilter.java:165)
            at com.sun.grizzly.tcp.http11.filters.GzipOutputFilter$FakeOutputStream.write(GzipOutputFilter.java:223)
            at java.util.zip.GZIPOutputStream.finish(GZIPOutputStream.java:95)
            at java.util.zip.DeflaterOutputStream.close(DeflaterOutputStream.java:140)
            at com.sun.grizzly.tcp.http11.filters.GzipOutputFilter.recycle(GzipOutputFilter.java:186)
            at com.sun.grizzly.http.SocketChannelOutputBuffer.recycle(SocketChannelOutputBuffer.java:510)
            at com.sun.grizzly.ssl.SSLOutputBuffer.recycle(SSLOutputBuffer.java:208)
            at com.sun.grizzly.http.ProcessorTask.finishResponse(ProcessorTask.java:838)
            at com.sun.grizzly.http.ProcessorTask.postResponse(ProcessorTask.java:771)
            at com.sun.grizzly.http.ProcessorTask.doProcess(ProcessorTask.java:747)
            at com.sun.grizzly.http.ProcessorTask.process(ProcessorTask.java:1045)
            at com.sun.grizzly.http.DefaultProtocolFilter.execute(DefaultProtocolFilter.java:228)
            at com.sun.grizzly.DefaultProtocolChain.executeProtocolFilter(DefaultProtocolChain.java:137)
            at com.sun.grizzly.DefaultProtocolChain.execute(DefaultProtocolChain.java:104)
            at com.sun.grizzly.DefaultProtocolChain.execute(DefaultProtocolChain.java:90)
            at com.sun.grizzly.http.HttpProtocolChain.execute(HttpProtocolChain.java:79)
            at com.sun.grizzly.ProtocolChainContextTask.doCall(ProtocolChainContextTask.java:54)
            at com.sun.grizzly.SelectionKeyContextTask.call(SelectionKeyContextTask.java:59)
            at com.sun.grizzly.ContextTask.run(ContextTask.java:71)
            at com.sun.grizzly.util.AbstractThreadPool$Worker.doWork(AbstractThreadPool.java:532)
            at com.sun.grizzly.util.AbstractThreadPool$Worker.run(AbstractThreadPool.java:513)
            at java.lang.Thread.run(Thread.java:662)

        Ryan Lubke added a comment -

        Sorry for the delay. Investigating now.

        Ryan Lubke added a comment -

        Would it be possible for you to attach a thread dump from when the server is exhibiting issues?

        KBachl added a comment -

        Hello Ryan,

        sorry for the late answer; somehow my mail notification about updates didn't show up. The server has now been running for over 30 hours without problems; this is the first time it has lasted that long in production. The only thing I changed since the last run (when it died after about 12 hours) was to turn off GZip compression in GlassFish. Tonight I'll deploy a new version of our apps that have built-in GZIP based on an EHCache filter.

        What command do I have to use to take a thread dump of GlassFish in case the problem comes up again?

        Either way, GlassFish needs to react more "maturely" to client aborts and/or client I/O errors, as these are not uncommon on the internet. It has to be made sure that literally "nothing" ever stalls/blocks a thread so hard that the system could go down...

        Ryan Lubke added a comment -

        Interesting. That gives us a little more info to go on.

        As far as gathering the thread dump, see [1].

        Or if you prefer a tool over the command line, you can use VisualVM.

        Finally, in response to your final comment about better I/O failure handling, I don't disagree.

        [1] https://blogs.oracle.com/foo/entry/how_to_get_thread_dumps
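        For completeness: a thread dump is typically captured with `jstack <pid>` or `kill -3 <pid>` against the server JVM (GlassFish also ships `asadmin generate-jvm-report --type=thread`). The same information is available in-process; a minimal sketch, not GlassFish-specific:

```java
import java.util.Map;

public class ThreadDump {
    public static void main(String[] args) {
        // Print a stack trace for every live thread in this JVM --
        // the in-process equivalent of `jstack <pid>` / `kill -3 <pid>`.
        for (Map.Entry<Thread, StackTraceElement[]> e
                : Thread.getAllStackTraces().entrySet()) {
            Thread t = e.getKey();
            System.out.printf("\"%s\" id=%d state=%s%n",
                    t.getName(), t.getId(), t.getState());
            for (StackTraceElement frame : e.getValue()) {
                System.out.println("    at " + frame);
            }
        }
    }
}
```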

        KBachl added a comment -

        Hello Ryan,

        yesterday I restarted the server and put the regular optimized VM settings back (I had also suspected these in the beginning, as well as the thread-pool sizes; with a higher number, e.g. 100, it died faster than with a lower one like 5);
        Currently it has been up for about 23 hours and is performing better than ever. Surprisingly, the gzip done by the EHCache filter performs better than the gzip GlassFish did before, at least judging from the response times and CPU cycles measured by New Relic (although the numbers are so surprisingly low that it could also be a measurement-based difference);

        No problems so far; http-thread-pool is set to min/max 25/25, and the special VM options are:
        -XX:+UseConcMarkSweepGC
        -XX:+CMSClassUnloadingEnabled
        -server
        -XX:MaxPermSize=384m
        -XX:LargePageSizeInBytes=256m

        After all the investigation, I assume the following might be worth digging into; at least I believe it's what is happening under the hood:

        GlassFish gets a request from a user and replies with, say, 100 kB of HTML. It checks the MIME type, compresses the response, and sends it out to the client via one of the thread pools, not in one piece but chunked.
        If another thread in the pool is free, the write might be handed over there, and that is where the broken pipe hit, while the first thread still wants to write out; so those two threads block each other and finally one spins over...

        I'll keep you informed how the server behaves. Is there any plan to release a 3.1.3 or 3.2? (I saw 3.2 was announced a while ago, but its last builds are quite old...)

        Ryan Lubke added a comment -

        I've made changes to resolve the SSLOutputBuffer NPE and improve the broken pipe handling.

        I don't suppose you've been able to replicate this issue in a non-production setting (I haven't yet)? If so, would you be willing to test new jars?

        KBachl added a comment -

        Hi Ryan,

        as GlassFish doesn't check whether content is already gzipped, I can't test it easily in my current production setup (currently our apps all have a gzip filter installed);
        But I'm setting up an internal beta server next week (a long-running one) that can be used to test this; the environment will be the same as production.

        So you can attach the needed jars to replace; BTW, the server has now been up for 4 days without any problem so far...
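        The missing check described here ("GlassFish doesn't check if content is already gzipped") could look roughly like the helper below. This is a hypothetical sketch, not GlassFish code; the method name and MIME heuristics are assumptions for illustration:

```java
public class CompressionGuard {
    // Hypothetical helper: skip server-side gzip when the response is
    // already encoded upstream, e.g. by an application-level gzip filter
    // such as the EHCache filter mentioned in this issue.
    static boolean shouldCompress(String contentEncoding, String contentType) {
        if (contentEncoding != null
                && contentEncoding.toLowerCase().contains("gzip")) {
            return false; // already compressed upstream: never double-gzip
        }
        // Only compress compressible MIME types (images etc. gain nothing).
        return contentType != null && (contentType.startsWith("text/")
                || contentType.contains("javascript")
                || contentType.contains("json"));
    }

    public static void main(String[] args) {
        System.out.println(shouldCompress("gzip", "text/html")); // false
        System.out.println(shouldCompress(null, "text/html"));   // true
        System.out.println(shouldCompress(null, "image/png"));   // false
    }
}
```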

        Ryan Lubke added a comment -

        I've attached patch.jar, which includes updated grizzly http and utils jars. Back up the existing jars in GF_HOME/modules and drop these in.

        Ryan Lubke added a comment - edited

        > PS: can you give me insight into the changes you did to these jars?

        1. Fixed the NPE in the SSL processing that was reported.
        2. Fixed an issue with the GZIP compression where a resource wouldn't
        be properly cleaned up if an exception was thrown when trying to recycle
        the compression artifact.

        So, I reviewed the jstack output. I have a question:

        Thread 9970: (state = IN_NATIVE)

        • sun.nio.ch.EPollArrayWrapper.epollWait(long, int, long, int) @bci=0 (Compiled frame; information may be imprecise)
        • sun.nio.ch.EPollArrayWrapper.poll(long) @bci=18, line=210 (Compiled frame)
        • sun.nio.ch.EPollSelectorImpl.doSelect(long) @bci=28, line=65 (Compiled frame)
        • sun.nio.ch.SelectorImpl.lockAndDoSelect(long) @bci=37, line=69 (Compiled frame)
        • sun.nio.ch.SelectorImpl.select(long) @bci=30, line=80 (Compiled frame)
        • org.elasticsearch.common.netty.channel.socket.nio.SelectorUtil.select(java.nio.channels.Selector) @bci=4, line=33 (Compiled frame)
        • org.elasticsearch.common.netty.channel.socket.nio.NioWorker.run() @bci=57, line=157 (Compiled frame)
        • org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run() @bci=55, line=102 (Interpreted frame)
        • org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run() @bci=14, line=42 (Interpreted frame)
        • java.util.concurrent.ThreadPoolExecutor$Worker.runTask(java.lang.Runnable) @bci=59, line=886 (Compiled frame)
        • java.util.concurrent.ThreadPoolExecutor$Worker.run() @bci=28, line=908 (Compiled frame)
        • java.lang.Thread.run() @bci=11, line=662 (Interpreted frame)

        What do you have on the server that uses this code? None of the Grizzly threads appear to be doing any work. At the time of the dump, they are all parked waiting for tasks. However, there are lots of these threads in IN_NATIVE state polling.

        A quick Google search turned up several similar issues: http://elasticsearch-users.115913.n3.nabble.com/Netty-threads-eats-all-of-CPU-td2907488.html

        Within that thread, a suggested workaround is to set the network.tcp.blocking setting to true. I'm not familiar at all with Netty, so I can't say exactly how one would do that (a system property, perhaps?).
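        If that workaround applies, the usual place for such a setting would be the node's elasticsearch.yml. This placement is an assumption based on how elasticsearch of that era was configured, not something confirmed in the linked thread:

```yaml
# Hypothetical workaround from the linked thread: force the transport
# into blocking (OIO) mode instead of NIO epoll. Verify this key against
# your elasticsearch version before relying on it.
network.tcp.blocking: true
```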

        Ryan Lubke added a comment -

        Also, can you confirm the version of the linux kernel you're using?

        KBachl added a comment -

        Hello,

        we do, in fact, use elasticsearch, and it uses Netty; so sorry for the false alarm. Like I said, I didn't analyse the trace (I only had my iPhone and 5 minutes of access to a laptop), but the symptoms were very similar, though not as strong as expected (only 1 CPU was locked, no second one over the weekend);
        The linux system is:
        Linux 2.6.32-28-server #55-Ubuntu SMP Mon Jan 10 23:57:16 UTC 2011 x86_64 GNU/Linux
        running on VMWare 5 (6CPU/ 8GB RAM);

        I'll test the patch in the upcoming days; I need to alter our applications first. It would be the second patch our GF 3.1.2 is running (the first one was for the corrected POST of binary data)

        Ryan Lubke added a comment -

        Hello,

        Just checking back. Have you been able to get the patches in place?

        KBachl added a comment -

        not yet, sorry (we currently have some public holidays in Germany, and this year they usually fall on a Thursday or Tuesday, so the weeks have only 3 working days; our beta server didn't make it in time)

        but I plan to test it on our production system in the upcoming days, especially as I'd like to clear the SSL error;

        KBachl added a comment -

        just wanted to notify you that the patch will be put in place tonight on our production system;

        KBachl added a comment -

        Patch is in place;
        Note if anyone else is trying it: you need to run
        rm -r ./osgi-cache
        rm -r ./generated
        in your domain dirs, or the server may not come up as expected (it came up once, but repeated attempts didn't work);

        KBachl added a comment -

        So far the patch works (note: gzipping on the server side is not yet activated; it will be in the next 1-2 days). One stalled request occurred last night; the stack trace is:

        273796 ms
        [Stopped monitoring after 273 seconds] Request took more than 6 times that of stall threshold [45000] ms

        Thread Name:http-thread-pool-8080(25)
        ID:123
        Time:Sat Jun 02 00:28:03 CEST 2012
        State:RUNNABLE
        Priority:5

        sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
        sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:210)
        sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:65)
        sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:69)
        sun.nio.ch.SelectorImpl.select(SelectorImpl.java:80)
        com.sun.grizzly.util.Utils.readWithTemporarySelector(Utils.java:159)
        com.sun.grizzly.util.InputReader.doClearRead(InputReader.java:322)
        com.sun.grizzly.util.InputReader.doRead(InputReader.java:283)
        com.sun.grizzly.util.InputReader.read(InputReader.java:206)
        com.sun.grizzly.tcp.http11.InternalInputBuffer.fill(InternalInputBuffer.java:836)
        com.sun.grizzly.tcp.http11.InternalInputBuffer$InputStreamInputBuffer.doRead(InternalInputBuffer.java:871)
        com.sun.grizzly.tcp.http11.filters.IdentityInputFilter.doRead(IdentityInputFilter.java:157)
        com.sun.grizzly.tcp.http11.InternalInputBuffer.doRead(InternalInputBuffer.java:791)
        com.sun.grizzly.tcp.Request.doRead(Request.java:501)
        org.apache.catalina.connector.InputBuffer.realReadBytes(InputBuffer.java:336)
        com.sun.grizzly.util.buf.ByteChunk.substract(ByteChunk.java:436)
        org.apache.catalina.connector.InputBuffer.read(InputBuffer.java:357)
        org.apache.catalina.connector.CoyoteInputStream.read(CoyoteInputStream.java:265)
        org.apache.catalina.connector.Request.readPostBody(Request.java:3231)
        org.apache.catalina.connector.Request.getPostBody(Request.java:3214)
        com.sun.enterprise.web.pwc.connector.coyote.PwcCoyoteRequest.getPostBody(PwcCoyoteRequest.java:343)
        org.apache.catalina.connector.Request.parseRequestParameters(Request.java:3183)
        org.apache.catalina.connector.Request.getParameter(Request.java:1242)
        org.apache.catalina.connector.RequestFacade.getParameter(RequestFacade.java:422)
        javax.servlet.ServletRequestWrapper.getParameter(ServletRequestWrapper.java:184)
        org.apache.wicket.protocol.http.servlet.ServletWebRequest.getParameter(ServletWebRequest.java:146)
        org.apache.wicket.protocol.http.request.WebRequestCodingStrategy.decode(WebRequestCodingStrategy.java:209)
        org.apache.wicket.Request.getRequestParameters(Request.java:183)
        org.apache.wicket.RequestCycle.step(RequestCycle.java:1339)
        org.apache.wicket.RequestCycle.steps(RequestCycle.java:1465)
        org.apache.wicket.RequestCycle.request(RequestCycle.java:545)
        org.apache.wicket.protocol.http.WicketFilter.doGet(WicketFilter.java:486)
        org.apache.wicket.protocol.http.WicketFilter.doFilter(WicketFilter.java:319)
        org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:256)
        org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:217)
        FF.FOO.cache.CachingFilter.doFilter(CachingFilter.java:22)
        org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:256)
        org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:217)
        net.sf.ehcache.constructs.web.filter.GzipFilter.doFilter(GzipFilter.java:95)
        net.sf.ehcache.constructs.web.filter.Filter.doFilter(Filter.java:86)
        org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:256)
        org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:217)
        org.apache.shiro.web.servlet.AbstractShiroFilter.executeChain(AbstractShiroFilter.java:449)
        org.apache.shiro.web.servlet.AbstractShiroFilter$1.call(AbstractShiroFilter.java:365)
        org.apache.shiro.subject.support.SubjectCallable.doCall(SubjectCallable.java:90)
        org.apache.shiro.subject.support.SubjectCallable.call(SubjectCallable.java:83)
        org.apache.shiro.subject.support.DelegatingSubject.execute(DelegatingSubject.java:380)
        org.apache.shiro.web.servlet.AbstractShiroFilter.doFilterInternal(AbstractShiroFilter.java:362)
        org.apache.shiro.web.servlet.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:125)
        org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:256)
        org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:217)
        org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:279)
        org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:175)
        org.apache.catalina.core.StandardPipeline.doInvoke(StandardPipeline.java:655)
        org.apache.catalina.core.StandardPipeline.invoke(StandardPipeline.java:595)
        org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:161)
        org.apache.catalina.connector.CoyoteAdapter.doService(CoyoteAdapter.java:331)
        org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:231)
        com.sun.enterprise.v3.services.impl.ContainerMapper$AdapterCallable.call(ContainerMapper.java:317)
        com.sun.enterprise.v3.services.impl.ContainerMapper.service(ContainerMapper.java:195)
        com.sun.grizzly.http.ProcessorTask.invokeAdapter(ProcessorTask.java:849)
        com.sun.grizzly.http.ProcessorTask.doProcess(ProcessorTask.java:746)
        com.sun.grizzly.http.ProcessorTask.process(ProcessorTask.java:1045)
        com.sun.grizzly.http.DefaultProtocolFilter.execute(DefaultProtocolFilter.java:228)
        com.sun.grizzly.DefaultProtocolChain.executeProtocolFilter(DefaultProtocolChain.java:137)
        com.sun.grizzly.DefaultProtocolChain.execute(DefaultProtocolChain.java:104)
        com.sun.grizzly.DefaultProtocolChain.execute(DefaultProtocolChain.java:90)
        com.sun.grizzly.http.HttpProtocolChain.execute(HttpProtocolChain.java:79)
        com.sun.grizzly.ProtocolChainContextTask.doCall(ProtocolChainContextTask.java:54)
        com.sun.grizzly.SelectionKeyContextTask.call(SelectionKeyContextTask.java:59)
        com.sun.grizzly.ContextTask.run(ContextTask.java:71)
        com.sun.grizzly.util.AbstractThreadPool$Worker.doWork(AbstractThreadPool.java:532)
        com.sun.grizzly.util.AbstractThreadPool$Worker.run(AbstractThreadPool.java:513)
        java.lang.Thread.run(Thread.java:662)

        The log file is clean at that time; however, a few minutes earlier we had:

        02.06.2012 00:02:45.290
        Log Level: WARNING
        Logger: com.sun.grizzly.config.GrizzlyServiceListener
        Name-Value Pairs: _ThreadID=94;_ThreadName=Thread-5;
        Record Number: 1185
        Message ID:
        Complete Message: GRIZZLY0023: Interrupting idle Thread: http-thread-pool-8080(13).

        The above message occurred 2 times during the last 2 days the patch has been in place. It's good to see that the server is not going down and/or eating CPU; however, it's not good that the thread stalls.

        KBachl added a comment -

        Update: The server was just restarted and new apps put in place, so the gzipping is now again done by the GlassFish server. I'll keep you informed how the next days go.

        At least it can be stated that the new patch hasn't had any negative effects so far; whether it cures the problem we initially had will be shown in the next 7 days.

        fabiosf added a comment -

        Hi, Ryan,

        Do you know if your patches are compatible with GlassFish 3.1.1? I have exactly the same logs/behavior on our server as described by this ticket, but we still use GlassFish 3.1.1.

        Thank you.

        Ryan Lubke added a comment -

        I can create a bundle later today and associate it here.

        Ryan Lubke added a comment -

        Patch for 3.1.1.

        Ryan Lubke added a comment -

        Patch for 3.1.1 attached. Same drill as with 3.1.2: back up the existing jars, drop these in, and per KBachl's report:

        rm -r ./osgi-cache
        rm -r ./generated

        KBachl added a comment -

        Short update after 2.5 days: the server is working well so far, with no special logs besides the one reported above. Will continue to watch for the next 7 days.

        Ryan Lubke added a comment -

        Thanks for the updates!

        Ryan Lubke added a comment -

        Updated patch for GlassFish 3.1.1

        Ryan Lubke added a comment -

        @fabiosf: Updated the jars for the 3.1.1 patch. Please use the new bundle instead.

        KBachl added a comment -

        Update after 5 days: still all well. It seems the bug is squashed with this patch... will continue reporting.

        Ryan Lubke added a comment -

        Good to hear (so far). On a related note, you mentioned a performance improvement when using a compression filter vs GF's compression mechanism. If you have some numbers to share, it might be worth logging an issue to track that.

        Ryan Lubke added a comment -

        I'm assuming no news is good news?

        fabiosf added a comment -

        I cannot tell yet. This week we moved our production server from GlassFish 3.1.1 to 3.1.2 because we had previous patches applied to the same files on our GF 3.1.1 that would have been lost if we had used your patch.

        Since the 3.1.2 version corrected those previous problems, we will now test your patch on 3.1.2 as well. Next week we will give you an answer.

        Thanks.

        KBachl added a comment -

        So, a month is over and my long-term comment is: the patch is great, no problems so far, all works as expected.

        I also couldn't verify the performance differences. In fact, after much testing and fiddling, it seems they were not caused by GlassFish but rather by our VMware layer underneath, which was not well aligned to its LUN. After fixing this, the noted performance difference was under 0.1%, so it's not enough to blame anything.

        Ryan, can you give us some news about planning for GlassFish 3.1.3 or something like that?

        fabiosf added a comment -

        In our case, the problem persists. In fact, the diagnosis has changed, but the prognosis is the same.
        Described at the end of this message are the three logs ([1], [2] and [3]) directly associated with the bug, which has become a little rarer but still occurs.

        The behaviour of the patched GlassFish 3.1.2 when it enters 'busy waiting' is detailed next:

        (a) Sometimes the exception [1] is logged in 'server.log', but GlassFish sometimes continues to run fine with no problem.
        (b) Sometimes after [1] occurs, log [3] starts to be written thousands of times to server.log, but apparently with no effect on GlassFish's normal behaviour.
        (c) However, sometimes after [1], the exception [2] is thrown and logged. Generally [2] is logged immediately after [1], but sometimes only after a few minutes. When [2] is logged, one core enters busy waiting.
        (d) The process in (c) may repeat itself after some hours, but sometimes even after several minutes, which means our application may go down at any time and we must restart the server, usually every day or more often.

        It is worth noting that the described behaviors (a-c) are different from GlassFish 3.1.1 without the patch. With no patch on GF 3.1.1, the 'busy waiting' begins immediately after [3].

        #######
        [LOG 1]
        #######
        [#|2012-06-25T17:43:44.472-0300|WARNING|glassfish3.1.2|javax.enterprise.system.container.web.com.sun.enterprise.web|_ThreadID=22;_ThreadName=Thread-2;|org.apache.catalina.core.StandardHostValve@6ac45da6: Exception processing ErrorPage[errorCode=404, location=/erro404.jsf]
        org.apache.catalina.connector.ClientAbortException: java.io.IOException: Connection reset by peer
        at org.apache.catalina.connector.OutputBuffer.doFlush(OutputBuffer.java:384)
        at org.apache.catalina.connector.OutputBuffer.flush(OutputBuffer.java:353)
        at org.apache.catalina.connector.Response.flushBuffer(Response.java:643)
        at org.apache.catalina.connector.ResponseFacade.flushBuffer(ResponseFacade.java:328)
        at org.apache.catalina.core.StandardHostValve.dispatchToErrorPage(StandardHostValve.java:671)
        at org.apache.catalina.core.StandardHostValve.status(StandardHostValve.java:357)
        at org.apache.catalina.core.StandardHostValve.postInvoke(StandardHostValve.java:221)
        at org.apache.catalina.connector.CoyoteAdapter.doService(CoyoteAdapter.java:333)
        at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:231)
        at com.sun.enterprise.v3.services.impl.ContainerMapper$AdapterCallable.call(ContainerMapper.java:317)
        at com.sun.enterprise.v3.services.impl.ContainerMapper.service(ContainerMapper.java:195)
        at com.sun.grizzly.http.ProcessorTask.invokeAdapter(ProcessorTask.java:849)
        at com.sun.grizzly.http.ProcessorTask.doProcess(ProcessorTask.java:746)
        at com.sun.grizzly.http.ProcessorTask.process(ProcessorTask.java:1045)
        at com.sun.grizzly.http.DefaultProtocolFilter.execute(DefaultProtocolFilter.java:228)
        at com.sun.grizzly.DefaultProtocolChain.executeProtocolFilter(DefaultProtocolChain.java:137)
        at com.sun.grizzly.DefaultProtocolChain.execute(DefaultProtocolChain.java:104)
        at com.sun.grizzly.DefaultProtocolChain.execute(DefaultProtocolChain.java:90)
        at com.sun.grizzly.http.HttpProtocolChain.execute(HttpProtocolChain.java:79)
        at com.sun.grizzly.ProtocolChainContextTask.doCall(ProtocolChainContextTask.java:54)
        at com.sun.grizzly.SelectionKeyContextTask.call(SelectionKeyContextTask.java:59)
        at com.sun.grizzly.ContextTask.run(ContextTask.java:71)
        at com.sun.grizzly.util.AbstractThreadPool$Worker.doWork(AbstractThreadPool.java:532)
        at com.sun.grizzly.util.AbstractThreadPool$Worker.run(AbstractThreadPool.java:513)
        at java.lang.Thread.run(Thread.java:662)
        Caused by: java.io.IOException: Connection reset by peer
        at sun.nio.ch.FileDispatcher.write0(Native Method)
        at sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:29)
        at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:69)
        at sun.nio.ch.IOUtil.write(IOUtil.java:40)
        at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:334)
        at com.sun.grizzly.util.OutputWriter.flushChannel(OutputWriter.java:108)
        at com.sun.grizzly.util.OutputWriter.flushChannel(OutputWriter.java:76)
        at com.sun.grizzly.http.SocketChannelOutputBuffer.flushChannel(SocketChannelOutputBuffer.java:417)
        at com.sun.grizzly.http.SocketChannelOutputBuffer.flushBuffer(SocketChannelOutputBuffer.java:489)
        at com.sun.grizzly.http.SocketChannelOutputBuffer.flush(SocketChannelOutputBuffer.java:467)
        at com.sun.grizzly.http.ProcessorTask.action(ProcessorTask.java:1276)
        at com.sun.grizzly.tcp.Response.action(Response.java:268)
        at org.apache.catalina.connector.OutputBuffer.doFlush(OutputBuffer.java:380)
        at org.apache.catalina.connector.OutputBuffer.flush(OutputBuffer.java:353)
        at org.apache.catalina.connector.CoyoteWriter.flush(CoyoteWriter.java:153)
        at org.apache.catalina.core.ApplicationDispatcherForward.closeResponse(ApplicationDispatcherForward.java:407)
        at org.apache.catalina.core.ApplicationDispatcherForward.commit(ApplicationDispatcherForward.java:154)
        at org.apache.catalina.core.ApplicationDispatcher.dispatch(ApplicationDispatcher.java:358)
        at org.apache.catalina.core.StandardHostValve.custom(StandardHostValve.java:472)
        at org.apache.catalina.core.StandardHostValve.dispatchToErrorPage(StandardHostValve.java:669)
        ... 20 more

        #]

        #######
        [LOG 2]
        #######
        [#|2012-06-25T17:43:44.729-0300|SEVERE|glassfish3.1.2|javax.enterprise.system.std.com.sun.enterprise.server.logging|_ThreadID=22;_ThreadName=Thread-2;|java.io.IOException: Broken pipe
        at sun.nio.ch.FileDispatcher.write0(Native Method)
        at sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:29)
        at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:69)
        at sun.nio.ch.IOUtil.write(IOUtil.java:40)
        at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:334)
        at com.sun.grizzly.util.OutputWriter.flushChannel(OutputWriter.java:108)
        at com.sun.grizzly.util.OutputWriter.flushChannel(OutputWriter.java:76)
        at com.sun.grizzly.http.SocketChannelOutputBuffer.flushChannel(SocketChannelOutputBuffer.java:417)
        at com.sun.grizzly.http.SocketChannelOutputBuffer.flushBuffer(SocketChannelOutputBuffer.java:489)
        at com.sun.grizzly.http.SocketChannelOutputBuffer.realWriteBytes(SocketChannelOutputBuffer.java:373)
        at com.sun.grizzly.tcp.http11.InternalOutputBuffer$OutputStreamOutputBuffer.doWrite(InternalOutputBuffer.java:894)
        at com.sun.grizzly.tcp.http11.filters.ChunkedOutputFilter.doWrite(ChunkedOutputFilter.java:167)
        at com.sun.grizzly.tcp.http11.filters.GzipOutputFilter$FakeOutputStream.write(GzipOutputFilter.java:223)
        at java.util.zip.GZIPOutputStream.finish(GZIPOutputStream.java:95)
        at java.util.zip.DeflaterOutputStream.close(DeflaterOutputStream.java:140)
        at com.sun.grizzly.tcp.http11.filters.GzipOutputFilter.recycle(GzipOutputFilter.java:186)
        at com.sun.grizzly.http.SocketChannelOutputBuffer.recycle(SocketChannelOutputBuffer.java:510)
        at com.sun.grizzly.http.ProcessorTask.finishResponse(ProcessorTask.java:838)
        at com.sun.grizzly.http.ProcessorTask.postResponse(ProcessorTask.java:771)
        at com.sun.grizzly.http.ProcessorTask.doProcess(ProcessorTask.java:747)
        at com.sun.grizzly.http.ProcessorTask.process(ProcessorTask.java:1045)
        at com.sun.grizzly.http.DefaultProtocolFilter.execute(DefaultProtocolFilter.java:228)
        at com.sun.grizzly.DefaultProtocolChain.executeProtocolFilter(DefaultProtocolChain.java:137)
        at com.sun.grizzly.DefaultProtocolChain.execute(DefaultProtocolChain.java:104)
        at com.sun.grizzly.DefaultProtocolChain.execute(DefaultProtocolChain.java:90)
        at com.sun.grizzly.http.HttpProtocolChain.execute(HttpProtocolChain.java:79)
        at com.sun.grizzly.ProtocolChainContextTask.doCall(ProtocolChainContextTask.java:54)
        at com.sun.grizzly.SelectionKeyContextTask.call(SelectionKeyContextTask.java:59)
        at com.sun.grizzly.ContextTask.run(ContextTask.java:71)
        at com.sun.grizzly.util.AbstractThreadPool$Worker.doWork(AbstractThreadPool.java:532)
        at com.sun.grizzly.util.AbstractThreadPool$Worker.run(AbstractThreadPool.java:513)
        at java.lang.Thread.run(Thread.java:662)

        #]

        #######
        [LOG 3]
        #######
        [#|2012-06-26T10:01:15.053-0300|WARNING|glassfish3.1.2|com.sun.grizzly.config.GrizzlyServiceListener|_ThreadID=18;_ThreadName=Thread-2;|GRIZZLY0023: Interrupting idle Thread: http-thread-pool-8080(2).|#]
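
        [Editor's note] The recycle-time failure in LOG 2 can be reproduced in isolation. This is a minimal standalone sketch (not GlassFish code; the class and helper names are hypothetical) showing why GzipOutputFilter.recycle() throws after the client disconnects: GZIPOutputStream.close() calls finish(), which must still write the gzip trailer to the underlying stream, so the IOException surfaces during what looks like harmless cleanup.

        ```java
        import java.io.BufferedOutputStream;
        import java.io.IOException;
        import java.io.OutputStream;
        import java.util.zip.GZIPOutputStream;

        public class GzipRecycleDemo {

            // Stand-in for a socket whose peer has already dropped the connection.
            static class BrokenPipeStream extends OutputStream {
                @Override
                public void write(int b) throws IOException {
                    throw new IOException("Broken pipe");
                }
            }

            // Returns true if closing the gzip stream throws, mirroring the
            // "Broken pipe ... GzipOutputFilter.recycle" traces above.
            static boolean closeThrowsOnBrokenPipe() {
                try {
                    // Buffer so the gzip header written by the constructor does
                    // not hit the broken stream immediately; the failure is then
                    // deferred until the trailer is flushed on close().
                    GZIPOutputStream gz = new GZIPOutputStream(
                            new BufferedOutputStream(new BrokenPipeStream(), 8192));
                    gz.write("response body".getBytes());
                    gz.close(); // finish() + flush -> first real socket write -> fails
                    return false;
                } catch (IOException expected) {
                    return true;
                }
            }

            public static void main(String[] args) {
                System.out.println("close threw: " + closeThrowsOnBrokenPipe());
            }
        }
        ```

        The point of the sketch is only the mechanism: the exception does not come from the request handling itself but from the deferred trailer write, which is why a seemingly finished response can still blow up in recycle().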

com.sun.grizzly.http.DefaultProtocolFilter.execute(DefaultProtocolFilter.java:228) at com.sun.grizzly.DefaultProtocolChain.executeProtocolFilter(DefaultProtocolChain.java:137) at com.sun.grizzly.DefaultProtocolChain.execute(DefaultProtocolChain.java:104) at com.sun.grizzly.DefaultProtocolChain.execute(DefaultProtocolChain.java:90) at com.sun.grizzly.http.HttpProtocolChain.execute(HttpProtocolChain.java:79) at com.sun.grizzly.ProtocolChainContextTask.doCall(ProtocolChainContextTask.java:54) at com.sun.grizzly.SelectionKeyContextTask.call(SelectionKeyContextTask.java:59) at com.sun.grizzly.ContextTask.run(ContextTask.java:71) at com.sun.grizzly.util.AbstractThreadPool$Worker.doWork(AbstractThreadPool.java:532) at com.sun.grizzly.util.AbstractThreadPool$Worker.run(AbstractThreadPool.java:513) at java.lang.Thread.run(Thread.java:662) #] ####### [LOG 3] ####### [#|2012-06-26T10:01:15.053-0300|WARNING|glassfish3.1.2|com.sun.grizzly.config.GrizzlyServiceListener|_ThreadID=18;_ThreadName=Thread-2;|GRIZZLY0023: Interrupting idle Thread: http-thread-pool-8080(2).|#]
        fabiosf added a comment -

        Chart of CPU usage and busy waiting observed by using Glassfish on our production server.

        Ryan Lubke added a comment -

        @KBachl: I don't have any details on 3.1.3 at this point in time. However, if you have a support contract, I'm sure you can get a more official patch made available.

        @fabiosf: At this point I'm going to open a new issue for yours: while the end result was the same as the original issue, the cause is obviously different. I'd much rather track specific problems under their own issues than have an "umbrella" issue. I'll take care of opening the issue and make sure you're cc'd.

        Ryan Lubke added a comment -

        @fabiosf I've logged http://java.net/jira/browse/GLASSFISH-18849. Please add yourself as a participant, as I'm not able to do so.

        Ryan Lubke added a comment -

        Actually disregard the other issue.

        Ryan Lubke added a comment -

        @fabiosf Looking at the stacktrace for #2, compression is enabled, but the stacktrace doesn't line up with the patch - it matches the pre-patch code. Can you double check you have the patch properly applied to 3.1.2?

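        Since the original report ties the crash to HTTP compression being enabled, a temporary workaround while verifying the patch is to turn compression off. A minimal sketch using asadmin dotted names; the listener name (http-listener-1) and config name (server-config) are assumptions for a stock install:

        ```shell
        # Inspect whether compression is enabled on the default HTTP listener
        # (listener and config names are assumptions; adjust for your setup):
        asadmin get 'configs.config.server-config.network-config.protocols.protocol.http-listener-1.http.compression'

        # Disable compression until a patched build is in place:
        asadmin set 'configs.config.server-config.network-config.protocols.protocol.http-listener-1.http.compression=off'
        ```

        Restart the domain (or the listener) afterwards so the change takes effect.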
        fabiosf added a comment -

        Hi, Ryan.

        I added myself as a participant on GLASSFISH-18849 and answered your question over there directly.

        Thank you

        fabiosf added a comment -

        Since it was asked to disregard the other issue, I'll post here what I wrote there:

        Hi, Ryan

        I checked as you requested.
        I took the md5sums of the two files involved in the patch, grizzly-utils.jar and grizzly-http.jar, on our server, and they match the md5sums of the same jars contained within "patch.jar".

        Please, take a look:

        md5sum /data/glassfish3/glassfish/modules/grizzly-utils.jar /data/glassfish3/glassfish/modules/grizzly-http.jar
        947b09d1da3993495fb4f060da90e7ca /data/glassfish3/glassfish/modules/grizzly-utils.jar
        247d5769ce5378a1169d786c5611763e /data/glassfish3/glassfish/modules/grizzly-http.jar

        Just to contrast, the original files, from the original GF_3.1.2, have the following md5sum:
        31663ed99c679106beb2970fa68cb904 grizzly-utils.jar
        fa6e7cb99f38e8c2208f02ac54b559e9 grizzly-http.jar

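        The manual md5sum comparison above can be scripted so the check is repeatable. A minimal sketch; the paths and the `same_md5` helper name are illustrative, and the hash is the one posted above:

        ```shell
        #!/bin/sh
        # same_md5 FILE EXPECTED_MD5 -> exit 0 iff FILE's md5 matches EXPECTED_MD5.
        same_md5() {
            actual=$(md5sum "$1" 2>/dev/null | awk '{print $1}')
            [ "$actual" = "$2" ]
        }

        # Example (path and hash as posted above; adjust for your install):
        if same_md5 /data/glassfish3/glassfish/modules/grizzly-http.jar \
                    247d5769ce5378a1169d786c5611763e; then
            echo "grizzly-http.jar matches the patched jar"
        else
            echo "grizzly-http.jar does NOT match - patch may not be applied"
        fi
        ```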
        Ryan Lubke added a comment -

        @fabiosf

        Hey there - sorry for the delay. When you installed the patch jars, did you clear the OSGi cache, etc., per the previous comments?

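        For reference, the patch-application steps discussed in this thread (stop the domain, replace the jars, clear the caches, restart) can be collected into one sketch. All paths and the domain name are assumptions for a default install:

        ```shell
        #!/bin/sh
        # Sketch of the patch procedure discussed in this thread.
        # GF_HOME, DOMAIN, and the patch/ directory are assumptions.
        GF_HOME=/data/glassfish3/glassfish
        DOMAIN=domain1

        "$GF_HOME/bin/asadmin" stop-domain "$DOMAIN"

        # Replace the patched Grizzly jars (extracted from patch.jar beforehand):
        cp patch/grizzly-utils.jar patch/grizzly-http.jar "$GF_HOME/modules/"

        # Clear the OSGi cache and generated artifacts so the new bundles are picked up:
        rm -rf "$GF_HOME/domains/$DOMAIN/osgi-cache" \
               "$GF_HOME/domains/$DOMAIN/generated"

        "$GF_HOME/bin/asadmin" start-domain "$DOMAIN"
        ```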
        fabiosf added a comment -

        Hi, Ryan.

        No, I forgot that part. Sorry for my error.

        I've done this now and I'm trying to force the error.
        I discovered that it occurs only when a wrong URL redirects the user to our 404 error page (and even then only sometimes).

        In my manual tests I couldn't force the busy waiting anymore, which is a good indication. I will wait a few days and let you know the results, ok?

        Thank you very much.

        Ryan Lubke added a comment -

        Sounds good. I'll be watching for updates. Thanks!

        obfischer added a comment -

        Hi,

        will this patch be part of the GlassFish distribution? As far as I can see, there is no fix version set. Does that mean the 3.1.2 distribution will be updated?

        Ryan Lubke added a comment -

        @obfischer There should be a patch release for 3.1.2 (support contract assumed). I'm uncertain when the next open source release will happen.

        Ryan Lubke added a comment -

        Not sure why this was closed. Still waiting for additional verification from fabiosf.

        Ryan Lubke added a comment -

        @fabiosf I'm assuming no news is good news, yes?

        fabiosf added a comment -

        Hi, Ryan.

        Correct. For the first time in three months, our GlassFish server hasn't crashed within a day or half a day. Our application has been running fine for the last 5 days.

        The only problems we found were many WARN logs in server.log (the ones that once preceded the busy waiting); they no longer harm the server and just bore us a little.

        Ryan Lubke added a comment -

        Good to hear. Might be worth logging a separate issue for the WARNING messages - seems we should look into the verbosity there.

        fabiosf added a comment -

        Sure.
        And thank you very much for the fix.

        Ryan Lubke added a comment -

        Marking this fixed in 3.1.2 but this is in fact fixed in 3.1.2.2 (which will be released soon(TM)).

        Thanks to all for working with us to resolve this one.

        KBachl added a comment -

        @Ryan: you write it is fixed in 3.1.2.2, while Sathyan Catari from @oracle.com writes that only these are fixed:
        Grizzly: http://java.net/jira/browse/GRIZZLY-1284 http://java.net/jira/browse/GRIZZLY-1254 http://java.net/jira/browse/GRIZZLY-1267 http://java.net/jira/browse/GRIZZLY-1276
        GlassFish: http://java.net/jira/browse/GLASSFISH-18446 http://java.net/jira/browse/GLASSFISH-18546 http://java.net/jira/browse/GLASSFISH-18327 http://java.net/jira/browse/GLASSFISH-18444
        Metro/JAX-WS: http://java.net/jira/browse/WSIT-1619 http://java.net/jira/browse/JAX_WS-1059
        plus these from JClinigan: [1] http://netbeans.org/bugzilla/show_bug.cgi?id=211962 [2] http://java.net/jira/browse/JAX_WS-1059 [3] http://java.net/jira/browse/GLASSFISH-18446
        So for now I can only warn plagued users against upgrading to 3.1.2.2, as it may not fix things. Can you please comment on this?
        Ryan Lubke added a comment -

        It's in there (1.9.50). A more official list of fixed issues will be forthcoming from the community managers.

        Stay tuned.

        ftz2012 added a comment -

        Hi, after upgrading from 3.1.1 to 3.1.2.2 we also get the same exceptions, and the server also becomes unresponsive after a while.
        See the IOException and idle-thread messages below.

        Could you please confirm that the patch is in 3.1.2.2?

        Thank you.

        [#|2012-08-13T15:08:39.902+0200|WARNING|glassfish3.1.2|javax.enterprise.system.container.web.com.sun.enterprise.web|_ThreadID=19;_ThreadName=Thread-2;|StandardWrapperValve[]: PWC1406: Servlet.service() threw exception
        java.io.IOException: Broken pipe
        at sun.nio.ch.FileDispatcher.write0(Native Method)
        at sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:29)
        at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:69)
        at sun.nio.ch.IOUtil.write(IOUtil.java:40)
        at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:336)
        at com.sun.grizzly.util.OutputWriter.flushChannel(OutputWriter.java:108)
        at com.sun.grizzly.util.OutputWriter.flushChannel(OutputWriter.java:76)
        at com.sun.grizzly.http.SocketChannelOutputBuffer.flushChannel(SocketChannelOutputBuffer.java:426)
        at com.sun.grizzly.http.SocketChannelOutputBuffer.flushBuffer(SocketChannelOutputBuffer.java:498)
        at com.sun.grizzly.http.SocketChannelOutputBuffer.flush(SocketChannelOutputBuffer.java:476)
        at com.sun.grizzly.http.ProcessorTask.action(ProcessorTask.java:1287)
        at com.sun.grizzly.tcp.Response.action(Response.java:268)
        at org.apache.catalina.connector.OutputBuffer.doFlush(OutputBuffer.java:380)
        at org.apache.catalina.connector.OutputBuffer.flush(OutputBuffer.java:353)
        at org.apache.catalina.connector.CoyoteOutputStream.flush(CoyoteOutputStream.java:175)
        at com.ctc.wstx.io.UTF8Writer.flush(UTF8Writer.java:103)
        at com.ctc.wstx.sw.BufferingXmlWriter.flush(BufferingXmlWriter.java:225)
        at com.ctc.wstx.sw.BufferingXmlWriter.close(BufferingXmlWriter.java:198)
        at com.ctc.wstx.sw.BaseStreamWriter._finishDocument(BaseStreamWriter.java:1406)
        at com.ctc.wstx.sw.BaseStreamWriter.close(BaseStreamWriter.java:249)
        at com.sun.xml.ws.util.xml.XMLStreamWriterFilter.close(XMLStreamWriterFilter.java:67)
        at com.sun.xml.ws.server.SDDocumentImpl.writeTo(SDDocumentImpl.java:297)
        at com.sun.xml.ws.transport.http.HttpAdapter.publishWSDL(HttpAdapter.java:700)
        at com.sun.xml.ws.transport.http.HttpAdapter.handleGet(HttpAdapter.java:280)
        at com.sun.xml.ws.transport.http.HttpAdapter.handle(HttpAdapter.java:254)
        at com.sun.xml.ws.transport.http.servlet.ServletAdapter.publishWSDL(ServletAdapter.java:252)
        at org.glassfish.webservices.JAXWSServlet.doGet(JAXWSServlet.java:206)
        at javax.servlet.http.HttpServlet.service(HttpServlet.java:668)
        at javax.servlet.http.HttpServlet.service(HttpServlet.java:770)
        at org.apache.catalina.core.StandardWrapper.service(StandardWrapper.java:1550)
        at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:281)
        at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:175)
        at org.apache.catalina.core.StandardPipeline.doInvoke(StandardPipeline.java:655)
        at org.apache.catalina.core.StandardPipeline.invoke(StandardPipeline.java:595)
        at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:161)
        at org.apache.catalina.connector.CoyoteAdapter.doService(CoyoteAdapter.java:331)
        at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:231)
        at com.sun.enterprise.v3.services.impl.ContainerMapper$AdapterCallable.call(ContainerMapper.java:317)
        at com.sun.enterprise.v3.services.impl.ContainerMapper.service(ContainerMapper.java:195)
        at com.sun.grizzly.http.ProcessorTask.invokeAdapter(ProcessorTask.java:860)
        at com.sun.grizzly.http.ProcessorTask.doProcess(ProcessorTask.java:757)
        at com.sun.grizzly.http.ProcessorTask.process(ProcessorTask.java:1056)
        at com.sun.grizzly.http.DefaultProtocolFilter.execute(DefaultProtocolFilter.java:229)
        at com.sun.grizzly.DefaultProtocolChain.executeProtocolFilter(DefaultProtocolChain.java:137)
        at com.sun.grizzly.DefaultProtocolChain.execute(DefaultProtocolChain.java:104)
        at com.sun.grizzly.DefaultProtocolChain.execute(DefaultProtocolChain.java:90)
        at com.sun.grizzly.http.HttpProtocolChain.execute(HttpProtocolChain.java:79)
        at com.sun.grizzly.ProtocolChainContextTask.doCall(ProtocolChainContextTask.java:54)
        at com.sun.grizzly.SelectionKeyContextTask.call(SelectionKeyContextTask.java:59)
        at com.sun.grizzly.ContextTask.run(ContextTask.java:71)
        at com.sun.grizzly.util.AbstractThreadPool$Worker.doWork(AbstractThreadPool.java:532)
        at com.sun.grizzly.util.AbstractThreadPool$Worker.run(AbstractThreadPool.java:513)
        at java.lang.Thread.run(Thread.java:662)

        [#|2012-08-09T03:30:50.823+0200|WARNING|glassfish3.1.2|com.sun.grizzly.config.GrizzlyServiceListener|_ThreadID=118;_ThreadName=Thread-2;|GRIZZLY0023: Interrupting idle Thread: http-thread-pool-8080(273).|#]

        oleksiys added a comment -

        @ftz2012: it might not be related at all. Please take a thread dump when the server stops responding. It might be related to slow DB access or similar.

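        A thread dump of the sort oleksiys suggests can be taken with jstack (shipped with the JDK) or by sending SIGQUIT. A minimal sketch, assuming the GlassFish JVM runs under the current user and was started via the usual bootstrap class:

        ```shell
        # Find the GlassFish JVM pid (the bootstrap main class is the usual marker):
        PID=$(pgrep -f com.sun.enterprise.glassfish.bootstrap.ASMain | head -n1)

        # Dump all thread stacks to a timestamped file:
        jstack "$PID" > "threads-$(date +%Y%m%d-%H%M%S).txt"

        # Alternative without jstack: SIGQUIT writes the dump to the server's stdout log.
        kill -3 "$PID"
        ```

        Take two or three dumps a few seconds apart; a thread pinned in the same stack across dumps is the busy-waiting (or blocked) one.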
        ftz2012 added a comment -

        @oleksiys: thanks, I looked at the thread dump and you are probably right. I will play with the idle-thread timeout settings.

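        The idle-thread timeout behind the GRIZZLY0023 "Interrupting idle Thread" message can be tuned on the thread pool. A sketch using asadmin dotted names; the pool name (http-thread-pool) and the attribute path are assumptions for a stock 3.1.2 config, so verify them with `asadmin get` first:

        ```shell
        # Inspect the current idle-thread timeout for the default HTTP thread pool
        # (pool name and attribute path are assumptions; verify against your config):
        asadmin get 'configs.config.server-config.thread-pools.thread-pool.http-thread-pool.idle-thread-timeout-seconds'

        # Raise it so long-running requests (e.g. slow DB calls) are not interrupted:
        asadmin set 'configs.config.server-config.thread-pools.thread-pool.http-thread-pool.idle-thread-timeout-seconds=1800'
        ```

        Note this only masks slow requests; the underlying slow operation (DB access, etc.) still needs investigating.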
        fabiosf added a comment -

        Dear ftz2012, How are you?

        Did you find any solution to your problem? We switched to 3.1.2.2 almost one year ago, but we still have a problem similar to the one described in this thread.
        The patch mitigated the original problem. However, we still see many broken-pipe exceptions in server.log, and we have to restart our server at least once every 3 days (before the patch it was twice a day). Maybe threads that never get killed eventually make GlassFish unresponsive after a while.

        Have you any clue about this?

        Thanks

        Exxenmann added a comment - - edited

        Hey everyone,

        "affected version: 3.1.2
        fixed version: 3.1.2"

        is pretty uninformative

        We are still having the Problem using:
        GlassFish Server Open Source Edition 3.1.2.2 (build 5)
        and
        java version "1.6.0_43"
        Java(TM) SE Runtime Environment (build 1.6.0_43-b01)
        Java HotSpot(TM) 64-Bit Server VM (build 20.14-b01, mixed mode)

        Which is the patched Open Source GlassFish Server build, and where can I get it?

        Thanks to all of you for your investigations!

        seanespn added a comment -

        I don't know if the GF team ever created a release with this fix in it.

        But you can download the patch jar attached to this ticket and apply it yourself fairly easily - it worked for us.

        Exxenmann added a comment -

        When I do:
        rm -rf generated
        rm -rf osgi-cache

        and then put the patch jars into glassfish3/modules,

        I get the following on the console, and GlassFish doesn't start:

        Waiting for domain to start ..Error starting domain.
        The server exited prematurely with exit code 1.
        Before it died, it produced the following output:

        Launching GlassFish on Felix platform
        Completed shutdown of GlassFish runtime
        Exception in thread "main" java.lang.reflect.InvocationTargetException
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:597)
        at com.sun.enterprise.glassfish.bootstrap.GlassFishMain.main(GlassFishMain.java:97)
        at com.sun.enterprise.glassfish.bootstrap.ASMain.main(ASMain.java:55)
        Caused by: com.sun.enterprise.module.ResolveError: Failed to start Bundle Id [29] State [INSTALLED] [org.glassfish.main.core.kernel(Kernel Classes):3.1.2.1-SNAPSHOT]
        at org.jvnet.hk2.osgiadapter.OSGiModuleImpl.start(OSGiModuleImpl.java:177)
        at org.jvnet.hk2.osgiadapter.OSGiModuleImpl$2$1$1.loadClass(OSGiModuleImpl.java:344)
        at com.sun.hk2.component.LazyInhabitant.loadClass(LazyInhabitant.java:124)
        at com.sun.hk2.component.LazyInhabitant.type(LazyInhabitant.java:99)
        at org.jvnet.hk2.component.Habitat$SelfListener.inhabitantIndexChanged(Habitat.java:1142)
        at org.jvnet.hk2.component.Habitat$3.inhabitantChanged(Habitat.java:601)
        at org.jvnet.hk2.component.Habitat$4.run(Habitat.java:642)
        at org.jvnet.hk2.component.SameThreadExecutor.execute(SameThreadExecutor.java:80)
        at org.jvnet.hk2.component.Habitat.doNotify(Habitat.java:632)
        at org.jvnet.hk2.component.Habitat.notify(Habitat.java:619)
        at org.jvnet.hk2.component.Habitat.notify(Habitat.java:605)
        at org.jvnet.hk2.component.Habitat.addIndex(Habitat.java:431)
        at org.jvnet.hk2.component.Habitat.addIndex(Habitat.java:424)
        at com.sun.hk2.component.InhabitantsParser.addIndex(InhabitantsParser.java:243)
        at com.sun.hk2.component.InhabitantsParser.add(InhabitantsParser.java:222)
        at com.sun.hk2.component.InhabitantsParser.parse(InhabitantsParser.java:174)
        at org.jvnet.hk2.osgiadapter.OSGiModuleImpl.parseInhabitants(OSGiModuleImpl.java:376)
        at org.jvnet.hk2.osgiadapter.OSGiModulesRegistryImpl.parseInhabitants(OSGiModulesRegistryImpl.java:321)
        at com.sun.enterprise.module.common_impl.AbstractModulesRegistryImpl.createHabitat(AbstractModulesRegistryImpl.java:156)
        at com.sun.enterprise.module.bootstrap.Main.createHabitat(Main.java:459)
        at org.jvnet.hk2.osgiadapter.HK2Main.createHabitat(HK2Main.java:96)
        at com.sun.enterprise.glassfish.bootstrap.osgi.EmbeddedOSGiGlassFishRuntime.newGlassFish(EmbeddedOSGiGlassFishRuntime.java:89)
        at com.sun.enterprise.glassfish.bootstrap.GlassFishMain$Launcher.launch(GlassFishMain.java:113)
        ... 6 more
        Caused by: org.osgi.framework.BundleException: Unresolved constraint in bundle org.glassfish.main.core.kernel [29]: Unable to resolve 29.0: missing requirement [29.0] osgi.wiring.package; (&(osgi.wiring.package=com.sun.enterprise.admin.util)(version>=3.1.0)) [caused by: Unable to resolve 120.0: missing requirement [120.0] osgi.wiring.package; (&(osgi.wiring.package=com.sun.enterprise.config.serverbeans)(version>=3.1.0)) [caused by: Unable to resolve 38.0: missing requirement [38.0] osgi.wiring.package; (&(osgi.wiring.package=com.sun.grizzly.config.dom)(version>=1.9.0)) [caused by: Unable to resolve 130.0: missing requirement [130.0] osgi.wiring.package; (&(osgi.wiring.package=com.sun.grizzly)(version>=1.9.50)) [caused by: Unable to resolve 4.0: missing requirement [4.0] osgi.wiring.package; (&(osgi.wiring.package=com.sun.grizzly.tcp)(version>=1.9.50))]]]]
        at org.apache.felix.framework.Felix.resolveBundleRevision(Felix.java:3826)
        at org.apache.felix.framework.Felix.startBundle(Felix.java:1868)
        at org.apache.felix.framework.BundleImpl.start(BundleImpl.java:944)
        at org.jvnet.hk2.osgiadapter.OSGiModuleImpl.start(OSGiModuleImpl.java:169)
        ... 28 more

        Command start-domain failed.
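The root cause at the bottom of the chain above is an OSGi resolution failure: Felix cannot wire the org.glassfish.main.core.kernel bundle because, transitively, no installed bundle exports com.sun.grizzly.tcp at version 1.9.50 or higher. As an illustrative sketch (not the actual GlassFish manifests), such a requirement comes from an Import-Package header in the consuming bundle's MANIFEST.MF, and it is satisfiable only if some other installed bundle has a matching Export-Package header:

```
Import-Package: com.sun.grizzly.tcp;version="1.9.50"
Export-Package: com.sun.grizzly.tcp;version="1.9.50"
```

In practice this error usually means the grizzly jars present in glassfish3/modules are missing or older than 1.9.50, so the resolver cannot satisfy the version constraint.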

        Exxenmann added a comment:

        When I do:

            rm -rf generated
            rm -rf osgi-cache

        and then put them into glassfish3/modules, I get the following on the console and GlassFish doesn't start:

            Waiting for domain to start ..Error starting domain.
            The server exited prematurely with exit code 1.
            Before it died, it produced the following output:
            Launching GlassFish on Felix platform
            Completed shutdown of GlassFish runtime
            Exception in thread "main" java.lang.reflect.InvocationTargetException
            ...
            Caused by: org.osgi.framework.BundleException: Unresolved constraint in bundle org.glassfish.main.core.kernel [29]
            (full stack trace above)
            ...
            Command start-domain failed.
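The steps the commenter describes (clear the OSGi bundle cache and generated artifacts, then restart) can be sketched as a shell sequence. This is an ops sketch, not part of the original report; the install root, domain name, and directory layout are assumptions based on a default GlassFish 3.1.2 installation, so adjust them for your environment:

```shell
# Assumed install root and domain name; adjust for your environment.
GF_HOME=${GF_HOME:-/opt/glassfish3}
DOMAIN_DIR="$GF_HOME/glassfish/domains/domain1"

"$GF_HOME/bin/asadmin" stop-domain domain1

# Remove the Felix bundle cache so every bundle in glassfish3/modules
# is re-resolved on the next start, and drop generated artifacts.
rm -rf "$DOMAIN_DIR/osgi-cache"
rm -rf "$DOMAIN_DIR/generated"

"$GF_HOME/bin/asadmin" start-domain domain1
```

If start-domain still fails with the "Unresolved constraint" error shown above, the jars that were copied into glassfish3/modules do not export the package versions the kernel bundle requires, and the original jars should be restored.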

          People

          • Assignee: Ryan Lubke
          • Reporter: KBachl
          • Votes: 10
          • Watchers: 11

          Dates

          • Created:
            Updated:
            Resolved: