glassfish / GLASSFISH-3814

Horrible memory leak when serving large PDFs

    Details

    • Type: Bug
    • Status: Resolved
    • Priority: Major
    • Resolution: Duplicate
    • Affects Version/s: 9.1pe
    • Fix Version/s: 9.1peur1
    • Component/s: web_container
    • Labels:
      None
    • Environment:

      Operating System: All
      Platform: Macintosh

    • Issuezilla Id:
      3814

      Description

      There is a nasty memory leak when serving large PDFs from GFv2.

      Starting Sun Java System Application Server 9.1 (build b58g-fcs) (latest download from GF homepage).

      I am trying to simply view a 139MB PDF file from Glassfish.

      Bone stock default GF install on Mac OS 10.4, JDK 1.5

      Create a simple webapp from NetBeans, with the generated index.jsp. Add in your large PDF file.

      Start Glassfish, and deploy your war.

      Launch JConsole, connect to GF, and select the Memory pane.

      On my system, after a quick "Perform GC" click, the JVM is around 29MB.

      Go to a browser and try to download the PDF file:

      http://localhost:8080/example/big.pdf

      Observe in JConsole the memory usage shoot up from 29MB to ~190MB.
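      For a scriptable reproduction, a small throwaway client along these lines can stand in for the
      browser; the class name is invented here and is not part of the original report.

      import java.io.InputStream;
      import java.net.URL;

      // Hypothetical test client: streams the PDF from GlassFish and discards the bytes,
      // so the client itself uses almost no heap while the server handles the transfer.
      public class BigPdfDownload {
          public static void main(String[] args) throws Exception {
              URL url = new URL("http://localhost:8080/example/big.pdf");
              byte[] buf = new byte[8192];
              long total = 0;
              InputStream in = url.openStream();
              try {
                  int n;
                  while ((n = in.read(buf)) > -1) {
                      total += n; // count only, keep nothing in memory
                  }
              } finally {
                  in.close();
              }
              System.out.println("Downloaded " + total + " bytes");
          }
      }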

      Observe in Glassfish Log File:

      [#|2007-10-27T21:43:08.952-0700|SEVERE|sun-appserver9.1|javax.enterprise.system.container.web|
      _ThreadID=15;_ThreadName=httpSSLWorkerThread-8080-0;_RequestID=e09a1e67-dbbb-45e3-
      be7b-dfbb06936407;|StandardWrapperValve[default]: PWC1406: Servlet.service() for servlet default
      threw exception
      java.lang.OutOfMemoryError: Java heap space

      #]

      [#|2007-10-27T21:43:09.404-0700|WARNING|sun-appserver9.1|javax.enterprise.system.stream.err|
      _ThreadID=15;_ThreadName=httpSSLWorkerThread-8080-0;_RequestID=e09a1e67-dbbb-45e3-
      be7b-dfbb06936407;|java.lang.OutOfMemoryError: Java heap space

      #]

      Click "Perform GC" on JConsole, and observe the JVM Heap size go down by only a couple of MB (say
      from 190MB to 188MB).

      Try and download the PDF again from the browser, and observe the JVM memory surge up again, and
      again hit OutOfMemory, and again not be able to GC any of the memory.

      Go ahead and stop GF, it's effectively dead now anyway.

      I have observed similar behavior on Solaris installs, but with earlier builds (GF Release Candidates).

      I have not tried this with other large resources (say a large ZIP).

      This is a real show stopper for us.

        Activity

        whartung added a comment -

        I dug a bit deeper, playing with the debugger. This is definitely something "down deep" in Grizzly. My
        suspicion is that it's allocating some kind of buffer (duh), and when it gets the OOM exception, that's
        where it "loses" the memory and the whole thing goes to pot.

        I tried a simple servlet:

        response.setContentType("application/pdf");
        OutputStream o = response.getOutputStream();

        FileInputStream is = new FileInputStream("/tmp/biggo.pdf");
        byte[] buf = new byte[8192];

        int l;
        while ((l = is.read(buf)) > -1) {
            o.write(buf, 0, l);
        }

        o.close();
        is.close();

        And the same error happens, so it's not some kind of internal resource caching thing.

        I haven't run tests as to when the file is big enough to cause the OOM to happen. No doubt this
        threshold may well be based on heap size, and it's clear this doesn't happen with smaller files (or we
        would be hearing from someone besides me).
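        For reference, a self-contained version of that test servlet looks roughly like the sketch below.
        The class name is invented, the read loop re-reads on every pass, and the Content-Length header is
        set up front as a possible mitigation only; nothing in this report confirms whether it avoids the
        buffer growth described above.

        import java.io.File;
        import java.io.FileInputStream;
        import java.io.IOException;
        import java.io.InputStream;
        import java.io.OutputStream;
        import javax.servlet.http.HttpServlet;
        import javax.servlet.http.HttpServletRequest;
        import javax.servlet.http.HttpServletResponse;

        // Hypothetical servlet, not taken from the original report.
        public class BigPdfServlet extends HttpServlet {
            protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                    throws IOException {
                File file = new File("/tmp/biggo.pdf");
                resp.setContentType("application/pdf");
                // Declaring the length is only a guess at a workaround; it is not
                // verified against the Grizzly behavior discussed in this issue.
                resp.setContentLength((int) file.length());

                InputStream in = new FileInputStream(file);
                OutputStream out = resp.getOutputStream();
                try {
                    byte[] buf = new byte[8192];
                    int n;
                    while ((n = in.read(buf)) > -1) {
                        out.write(buf, 0, n);
                    }
                } finally {
                    in.close();
                }
            }
        }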

        jfarcand added a comment -

        Embarrassing that no one from our testing team found that.

            *** This issue has been marked as a duplicate of 3683 ***

          People

          • Assignee:
            jluehe
          • Reporter:
            whartung
          • Votes:
            0
          • Watchers:
            0
