GLASSFISH-20950: Glassfish doesn't start due to deadlock

    Details

    • Type: Bug
    • Status: Closed
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: 4.1
    • Fix Version/s: None
    • Component/s: logging
    • Labels: None

      Description

      A fresh GF 4.0.1 (trunk) installation fails to start due to a deadlock.

      Java stack information for the threads listed above:
      ===================================================
      "RunLevelControllerThread-1389425844252":
      	at java.util.logging.LogManager.drainLoggerRefQueueBounded(LogManager.java:811)
      	- waiting to lock <0x000000011013d4e0> (a java.util.logging.LogManager)
      	at java.util.logging.LogManager$LoggerContext.addLocalLogger(LogManager.java:511)
      	- locked <0x000000011013d148> (a java.util.logging.LogManager$LoggerContext)
      	at java.util.logging.LogManager.addLogger(LogManager.java:848)
      	at java.util.logging.LogManager.demandLogger(LogManager.java:405)
      	at java.util.logging.Logger.demandLogger(Logger.java:317)
      	at java.util.logging.Logger.getLogger(Logger.java:361)
      	at com.sun.logging.LogDomains.getLogger(LogDomains.java:337)
      	- locked <0x00000001062408d8> (a java.lang.Class for com.sun.logging.LogDomains)
      	at org.glassfish.kernel.javaee.MEJBNamingObjectProxy.<clinit>(MEJBNamingObjectProxy.java:79)
      	at org.glassfish.kernel.javaee.MEJBService.postConstruct(MEJBService.java:85)
      	at org.jvnet.hk2.internal.ClazzCreator.postConstructMe(ClazzCreator.java:378)
      	at org.jvnet.hk2.internal.ClazzCreator.create(ClazzCreator.java:426)
      	at org.jvnet.hk2.internal.SystemDescriptor.create(SystemDescriptor.java:456)
      	at org.glassfish.hk2.runlevel.internal.AsyncRunLevelContext.findOrCreate(AsyncRunLevelContext.java:225)
      	at org.glassfish.hk2.runlevel.RunLevelContext.findOrCreate(RunLevelContext.java:82)
      	at org.jvnet.hk2.internal.Utilities.createService(Utilities.java:2395)
      	at org.jvnet.hk2.internal.ServiceHandleImpl.getService(ServiceHandleImpl.java:98)
      	- locked <0x0000000106240be8> (a java.lang.Object)
      	at org.jvnet.hk2.internal.ServiceHandleImpl.getService(ServiceHandleImpl.java:87)
      	at org.glassfish.hk2.runlevel.internal.CurrentTaskFuture$QueueRunner.oneJob(CurrentTaskFuture.java:1162)
      	at org.glassfish.hk2.runlevel.internal.CurrentTaskFuture$QueueRunner.run(CurrentTaskFuture.java:1147)
      	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
      	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
      	at java.lang.Thread.run(Thread.java:722)
      "RunLevelControllerThread-1389425844244":
      	at java.util.logging.LogManager$LoggerContext.findLogger(LogManager.java:489)
      	- waiting to lock <0x000000011013d148> (a java.util.logging.LogManager$LoggerContext)
      	at java.util.logging.LogManager.setLevelsOnExistingLoggers(LogManager.java:1356)
      	- locked <0x000000011013d4e0> (a java.util.logging.LogManager)
      	at java.util.logging.LogManager.readConfiguration(LogManager.java:1115)
      	at java.util.logging.LogManager.readConfiguration(LogManager.java:988)
      	at com.sun.enterprise.server.logging.LogManagerService.postConstruct(LogManagerService.java:288)
      	at org.jvnet.hk2.internal.ClazzCreator.postConstructMe(ClazzCreator.java:378)
      	at org.jvnet.hk2.internal.ClazzCreator.create(ClazzCreator.java:426)
      	at org.jvnet.hk2.internal.SystemDescriptor.create(SystemDescriptor.java:456)
      	at org.glassfish.hk2.runlevel.internal.AsyncRunLevelContext.findOrCreate(AsyncRunLevelContext.java:225)
      	at org.glassfish.hk2.runlevel.RunLevelContext.findOrCreate(RunLevelContext.java:82)
      	at org.jvnet.hk2.internal.Utilities.createService(Utilities.java:2395)
      	at org.jvnet.hk2.internal.ServiceHandleImpl.getService(ServiceHandleImpl.java:98)
      	- locked <0x00000001062410d8> (a java.lang.Object)
      	at org.jvnet.hk2.internal.ServiceHandleImpl.getService(ServiceHandleImpl.java:87)
      	at org.glassfish.hk2.runlevel.internal.CurrentTaskFuture$QueueRunner.oneJob(CurrentTaskFuture.java:1162)
      	at org.glassfish.hk2.runlevel.internal.CurrentTaskFuture$QueueRunner.run(CurrentTaskFuture.java:1147)
      	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
      	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
      	at java.lang.Thread.run(Thread.java:722)
      
      Found 1 deadlock.
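      
      For reference, the lock-ordering conflict in the dump can be reproduced in isolation. The sketch below is not taken from GlassFish (class and logger names are invented): one thread creates loggers, which on JDK 7 takes the LoggerContext lock and then waits for the LogManager lock in drainLoggerRefQueueBounded, while the other thread re-reads the logging configuration, which takes the LogManager lock and then waits for the LoggerContext lock in setLevelsOnExistingLoggers. On an affected JDK such as 1.7.0_17 this can deadlock intermittently, as in the report; on a fixed JDK it should run to completion.
      
      import java.io.IOException;
      import java.util.logging.LogManager;
      import java.util.logging.Logger;
      
      public class LogManagerDeadlockSketch {
          public static void main(String[] args) throws InterruptedException {
              // Mirrors MEJBNamingObjectProxy.<clinit>: create many loggers.
              Thread creator = new Thread(new Runnable() {
                  public void run() {
                      for (int i = 0; i < 100000; i++) {
                          Logger.getLogger("sketch.logger." + i);
                      }
                  }
              });
              // Mirrors LogManagerService.postConstruct: re-read the configuration.
              Thread reader = new Thread(new Runnable() {
                  public void run() {
                      try {
                          for (int i = 0; i < 1000; i++) {
                              LogManager.getLogManager().readConfiguration();
                          }
                      } catch (IOException e) {
                          e.printStackTrace();
                      }
                  }
              });
              creator.start();
              reader.start();
              creator.join();
              reader.join();
              System.out.println("Completed without deadlock on this JDK.");
          }
      }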
      


          Activity

          oleksiys added a comment -

          Clarification: the issue is intermittent; I hit it once out of 20+ runs.

          Joe Di Pol added a comment -

          This may be causing some of the hangs we are seeing in large-scale cluster testing. See GLASSFISH-20929.

          Joe Di Pol added a comment -

          The reporter was using JDK 1.7.0_17, which is known to have deadlock issues in LogManager. Some more info from the JDK team:

          "There have been actually two deadlocks
          reported with findLogger / drainLoggerRefQueueBounded.

          https://bugs.openjdk.java.net/browse/JDK-8010939
          which I think is what you are observing, and that one is
          marked fixed in 7u25 b11.

          Later and more recently another deadlock was reported
          but with a slightly different trace. I never managed to
          reproduced it - but saw its theoretical possibility in
          7 and 8.
          This is https://bugs.openjdk.java.net/browse/JDK-8027670
          which as been fixed by https://bugs.openjdk.java.net/browse/JDK-8029281
          but in JDK 8 b120 only (not in JDK 7).

          By looking at your stack trace and the fix made for
          https://bugs.openjdk.java.net/browse/JDK-8010939, and
          the code that is currently in 7 - I think the issue
          you are observing is the one that has been fixed in 7u25. "
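
          Since the first fix shipped only in 7u25 (and GlassFish was later verified against 7u60, see below), a deployment could flag affected runtimes at startup. The following is a hypothetical guard, not part of GlassFish or this report (the class name is invented), that warns when the JVM is a JDK 7 update earlier than u25:

          public class AffectedJdkCheck {
              public static void main(String[] args) {
                  String version = System.getProperty("java.version"); // e.g. "1.7.0_17"
                  if (version.startsWith("1.7.0_")) {
                      // Strip any suffix such as "-b02" before parsing the update number.
                      String update = version.substring("1.7.0_".length()).split("-")[0];
                      try {
                          if (Integer.parseInt(update) < 25) {
                              System.err.println("Warning: JDK " + version
                                      + " has known LogManager deadlocks (JDK-8010939);"
                                      + " upgrade to 7u25 or later.");
                          }
                      } catch (NumberFormatException e) {
                          // Unrecognized version string; skip the check.
                      }
                  }
              }
          }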

          Joe Di Pol added a comment -

          We have run with a private JDK 7 build with the logging deadlocks fixed. That seems to have resolved the deadlock issue in other tests. The fix will be in JDK 7u60 and should address this issue.

          Joe Di Pol added a comment -

          We have verified this is fixed in JDK 7u60.


            People

            • Assignee: Joe Di Pol
            • Reporter: oleksiys
            • Votes: 0
            • Watchers: 0
