[GLASSFISH-21005] GlassFish stops responding to http connections if client aborts downloads (grizzly or http service i suppose) Created: 08/Mar/14  Updated: 13/Mar/14

Status: Open
Project: glassfish
Component/s: grizzly-kernel
Affects Version/s: 4.0
Fix Version/s: None

Type: Bug Priority: Blocker
Reporter: pranahata Assignee: oleksiys
Resolution: Unresolved Votes: 2
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified
Environment:

CentOS 64-bit, Java 7u51



 Description   

To reproduce the issue:

Create a Servlet that serves a file.

From a different box, run the following a few hundred times:

import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;

// Open a connection, read only the headers, then abort the download.
URL url = new URL("http://lala:8080/myapp/myfileservingservlet");
HttpURLConnection connection = (HttpURLConnection) url.openConnection();
connection.setDoInput(true);
connection.setRequestMethod("GET");
InputStream is = connection.getInputStream();
connection.getHeaderField("Content-Type");
connection.disconnect();
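For completeness, a driver that repeats the request is sketched below; the helper name abortDownload and the iteration count are illustrative assumptions, not part of the original report:

// Hypothetical driver: repeats the abort-after-headers request.
for (int i = 0; i < 500; i++) {
    try {
        abortDownload(); // the snippet above, wrapped in a method throwing IOException
    } catch (java.io.IOException e) {
        // client-side failures are expected here and ignored
    }
}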

The GlassFish log shows a fair few of these:

Caused by: java.io.IOException: Connection closed
at org.glassfish.grizzly.asyncqueue.TaskQueue.onClose(TaskQueue.java:298) ~[nucleus-grizzly-all.jar:na]
at org.glassfish.grizzly.nio.AbstractNIOAsyncQueueWriter.onClose(AbstractNIOAsyncQueueWriter.java:613) ~[nucleus-grizzly-all.jar:na]

And also a fair few of these

java.io.IOException: Broken pipe
at sun.nio.ch.FileDispatcherImpl.write0(Native Method)
at sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:47)
at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:93)
at sun.nio.ch.IOUtil.write(IOUtil.java:51)
at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:487)
at org.glassfish.grizzly.nio.transport.TCPNIOTransport.flushByteBuffer(TCPNIOTransport.java:1252)

Trying to get anything else from the server shows:

Caused by: org.apache.http.NoHttpResponseException: The target server failed to respond

Note that it is easier to reproduce if the request goes over the internet; it doesn't happen as frequently when using localhost. Not sure about another box on the local network.

I have also seen the same errors in the logs when launching Java Web Start applications via JnlpDownloadServlet. To my understanding, the JWS client does a similar thing: it requests the HTTP headers of a jar but then aborts the download if the jar is in the JWS cache and hasn't been modified.

Setting as blocker, as it can make the server unusable and one cannot control how HTTP clients interact with the server.



 Comments   
Comment by oleksiys [ 12/Mar/14 ]

Can you please try this patch [1] and check if it's still reproducible?

Thanks.

[1] https://dl.dropboxusercontent.com/u/7319744/nucleus-grizzly-all.jar

Comment by pranahata [ 13/Mar/14 ]

Oleksiys,

I am quite busy at the moment, and the test case is pretty simple. Were you able to reproduce the error without the patch?





[GLASSFISH-21228] glassfish web IO hangs on html5 video requests(mp4) from client Created: 09/Oct/14  Updated: 15/Oct/14

Status: Open
Project: glassfish
Component/s: web_container
Affects Version/s: 3.1.2_b22
Fix Version/s: None

Type: Bug Priority: Blocker
Reporter: pradyut Assignee: oleksiys
Resolution: Unresolved Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified
Environment:

GlassFish running on any environment; client is Google Chrome



 Description   

This is related to Grizzly.

Caused by: java.io.IOException: Client is busy or timed out
at com.sun.grizzly.util.OutputWriter.flushChannel(OutputWriter.java:129)
at com.sun.grizzly.util.OutputWriter.flushChannel(OutputWriter.java:76)


 Comments   
Comment by pradyut [ 09/Oct/14 ]

Any further requests to the server fail, and the proxy server shows that the server failed to respond to the GET request.
Shutting down the client browser (Chrome) resumes service from the server.

The logs show:

org.apache.catalina.connector.ClientAbortException: java.io.IOException: Client is busy or timed out
at org.apache.catalina.connector.OutputBuffer.realWriteBytes(OutputBuffer.java:430)
at com.sun.grizzly.util.buf.ByteChunk.flushBuffer(ByteChunk.java:458)
at com.sun.grizzly.util.buf.ByteChunk.append(ByteChunk.java:380)
at org.apache.catalina.connector.OutputBuffer.writeBytes(OutputBuffer.java:455)
at org.apache.catalina.connector.OutputBuffer.write(OutputBuffer.java:442)
at org.apache.catalina.connector.CoyoteOutputStream.write(CoyoteOutputStream.java:160)
at org.apache.catalina.servlets.DefaultServlet.copyRange(DefaultServlet.java:2365)
at org.apache.catalina.servlets.DefaultServlet.copy(DefaultServlet.java:2100)
at org.apache.catalina.servlets.DefaultServlet.serveResource(DefaultServlet.java:1090)
at org.apache.catalina.servlets.DefaultServlet.doGet(DefaultServlet.java:466)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:668)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:770)
at org.apache.catalina.core.StandardWrapper.service(StandardWrapper.java:1550)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:343)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:217)
at newa.StoreImage.doFilter(StoreImage.java:100)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:256)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:217)
at newa.CEFilter.doFilter(CEFilter.java:193)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:256)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:217)
at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:279)
at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:175)
at org.apache.catalina.core.StandardPipeline.doInvoke(StandardPipeline.java:655)
at org.apache.catalina.core.StandardPipeline.invoke(StandardPipeline.java:595)
at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:161)
at org.apache.catalina.connector.CoyoteAdapter.doService(CoyoteAdapter.java:331)
at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:231)
at com.sun.enterprise.v3.services.impl.ContainerMapper$AdapterCallable.call(ContainerMapper.java:317)
at com.sun.enterprise.v3.services.impl.ContainerMapper.service(ContainerMapper.java:195)
at com.sun.grizzly.http.ProcessorTask.invokeAdapter(ProcessorTask.java:860)
at com.sun.grizzly.http.ProcessorTask.doProcess(ProcessorTask.java:757)
at com.sun.grizzly.http.ProcessorTask.process(ProcessorTask.java:1056)
at com.sun.grizzly.http.DefaultProtocolFilter.execute(DefaultProtocolFilter.java:229)
at com.sun.grizzly.DefaultProtocolChain.executeProtocolFilter(DefaultProtocolChain.java:137)
at com.sun.grizzly.DefaultProtocolChain.execute(DefaultProtocolChain.java:104)
at com.sun.grizzly.DefaultProtocolChain.execute(DefaultProtocolChain.java:90)
at com.sun.grizzly.http.HttpProtocolChain.execute(HttpProtocolChain.java:79)
at com.sun.grizzly.ProtocolChainContextTask.doCall(ProtocolChainContextTask.java:54)
at com.sun.grizzly.SelectionKeyContextTask.call(SelectionKeyContextTask.java:59)
at com.sun.grizzly.ContextTask.run(ContextTask.java:71)
at com.sun.grizzly.util.AbstractThreadPool$Worker.doWork(AbstractThreadPool.java:532)
at com.sun.grizzly.util.AbstractThreadPool$Worker.run(AbstractThreadPool.java:513)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.io.IOException: Client is busy or timed out
at com.sun.grizzly.util.OutputWriter.flushChannel(OutputWriter.java:129)
at com.sun.grizzly.util.OutputWriter.flushChannel(OutputWriter.java:76)
at com.sun.grizzly.http.SocketChannelOutputBuffer.flushChannel(SocketChannelOutputBuffer.java:426)
at com.sun.grizzly.http.SocketChannelOutputBuffer.flushBuffer(SocketChannelOutputBuffer.java:498)
at com.sun.grizzly.http.SocketChannelOutputBuffer.realWriteBytes0(SocketChannelOutputBuffer.java:382)
at com.sun.grizzly.http.SocketChannelOutputBuffer.realWriteBytes(SocketChannelOutputBuffer.java:364)
at com.sun.grizzly.tcp.http11.InternalOutputBuffer$OutputStreamOutputBuffer.doWrite(InternalOutputBuffer.java:894)
at com.sun.grizzly.tcp.http11.filters.IdentityOutputFilter.doWrite(IdentityOutputFilter.java:159)
at com.sun.grizzly.tcp.http11.InternalOutputBuffer.doWrite(InternalOutputBuffer.java:661)
at com.sun.grizzly.tcp.Response.doWrite(Response.java:685)
at org.apache.catalina.connector.OutputBuffer.realWriteBytes(OutputBuffer.java:425)
... 43 more

#]


Comment by oleksiys [ 15/Oct/14 ]

Can you pls. clarify: after hitting the problem, is the server unable to respond to a simple GET from the same browser? What about other browsers/clients, are they still ok?
And everything comes back to normal after you restart the client/browser, not the server, right?





[GLASSFISH-21278] SessionScoped object sharing between different sessions when Async Servlet 3.0 involved in execution path Created: 22/Dec/14  Updated: 10/Feb/15

Status: Open
Project: glassfish
Component/s: web_container, web_socket
Affects Version/s: 3.1.2.2
Fix Version/s: None

Type: Bug Priority: Blocker
Reporter: moonysun Assignee: oleksiys
Resolution: Unresolved Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified
Environment:

JDK 1.7.0_62



 Description   

We've noticed a bug related to Session "sharing" between threads when using the Servlet 3.0 async implementation in GlassFish.
If a few users log in to the application, use an async servlet, and log out, a @javax.enterprise.context.SessionScoped object gets reused between the frontend threads.

Find attached a test case that easily reproduces the errors, with the following configuration at the application server domain:

server-config -> Web Container -> (Tab) Manager Properties -> Max Sessions: 2
server-config -> Thread Pools -> http-thread-pool -> Max Thread Pool Size : 6, Min Thread Pool Size: 6
server-config -> Network Config -> Network Listeners -> http-listener-1 -> (Tab) HTTP -> Comet Support: true, Websocket support: true

The code is commented to make changes easy: the problem STOPS happening when asyncSupported=false is set on both the AuditFilter and the AsyncDownloadProxy servlet (the AsyncDownloadProxy servlet has comments indicating where to change the code to support synchronous mode only, marked by "//COMMENT FOR SYNC TESTING").
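For reference, here is a minimal sketch of how that flag looks in Servlet 3.0 annotations; the class body below is illustrative, not the actual test-case code:

import java.io.IOException;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.annotation.WebFilter;

// asyncSupported must be true on every filter/servlet in an async
// execution path; setting it to false here forces synchronous mode,
// which is the change that stops the reported problem.
@WebFilter(urlPatterns = "/*", asyncSupported = true)
public class AuditFilter implements Filter {
    public void init(FilterConfig config) throws ServletException {}

    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        chain.doFilter(req, res); // audit logic omitted
    }

    public void destroy() {}
}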

The problem shows in the domain's server.log file in the following forms:

1. Looking for the string "REUSING INSTANCES", as in this example:

[#|2014-11-17T17:16:19.624+0000|INFO|glassfish3.1.2|javax.enterprise.system.std.com.sun.enterprise.server.logging|_ThreadID=108;_ThreadName=Thread-2;|AuditFilter: ************** REUSING INSTANCES!!!!!|#]

2. Checking for this exception in the logs:
[#|2014-11-17T17:16:21.059+0000|SEVERE|glassfish3.1.2|com.sun.grizzly.config.GrizzlyServiceListener|_ThreadID=108;_ThreadName=Thread-2;|java.lang.NullPointerException
at com.sun.grizzly.arp.AsyncProcessorTask.obtainAndSetThreadAttachment(AsyncProcessorTask.java:233)
at com.sun.grizzly.arp.AsyncProcessorTask.enableTimeout(AsyncProcessorTask.java:227)
at com.sun.grizzly.arp.AsyncProcessorTask.doTask(AsyncProcessorTask.java:131)
at com.sun.grizzly.http.TaskBase.run(TaskBase.java:193)
at com.sun.grizzly.http.TaskBase.execute(TaskBase.java:175)
at com.sun.grizzly.arp.DefaultAsyncHandler.handle(DefaultAsyncHandler.java:145)
at com.sun.grizzly.arp.AsyncProtocolFilter.execute(AsyncProtocolFilter.java:210)
at com.sun.grizzly.DefaultProtocolChain.executeProtocolFilter(DefaultProtocolChain.java:137)
at com.sun.grizzly.DefaultProtocolChain.execute(DefaultProtocolChain.java:104)
at com.sun.grizzly.DefaultProtocolChain.execute(DefaultProtocolChain.java:90)
at com.sun.grizzly.http.HttpProtocolChain.execute(HttpProtocolChain.java:79)
at com.sun.grizzly.ProtocolChainContextTask.doCall(ProtocolChainContextTask.java:54)
at com.sun.grizzly.SelectionKeyContextTask.call(SelectionKeyContextTask.java:59)
at com.sun.grizzly.ContextTask.run(ContextTask.java:71)
at com.sun.grizzly.util.AbstractThreadPool$Worker.doWork(AbstractThreadPool.java:532)
at com.sun.grizzly.util.AbstractThreadPool$Worker.run(AbstractThreadPool.java:513)
at java.lang.Thread.run(Thread.java:745)

#]

[#|2014-11-17T17:16:21.060+0000|INFO|glassfish3.1.2|com.sun.grizzly.config.GrizzlyServiceListener|_ThreadID=108;_ThreadName=Thread-2;|Processor exception
java.lang.RuntimeException: java.lang.NullPointerException
at com.sun.grizzly.arp.AsyncProcessorTask.doTask(AsyncProcessorTask.java:144)
at com.sun.grizzly.http.TaskBase.run(TaskBase.java:193)
at com.sun.grizzly.http.TaskBase.execute(TaskBase.java:175)
at com.sun.grizzly.arp.DefaultAsyncHandler.handle(DefaultAsyncHandler.java:145)
at com.sun.grizzly.arp.AsyncProtocolFilter.execute(AsyncProtocolFilter.java:210)
at com.sun.grizzly.DefaultProtocolChain.executeProtocolFilter(DefaultProtocolChain.java:137)
at com.sun.grizzly.DefaultProtocolChain.execute(DefaultProtocolChain.java:104)
at com.sun.grizzly.DefaultProtocolChain.execute(DefaultProtocolChain.java:90)
at com.sun.grizzly.http.HttpProtocolChain.execute(HttpProtocolChain.java:79)
at com.sun.grizzly.ProtocolChainContextTask.doCall(ProtocolChainContextTask.java:54)
at com.sun.grizzly.SelectionKeyContextTask.call(SelectionKeyContextTask.java:59)
at com.sun.grizzly.ContextTask.run(ContextTask.java:71)
at com.sun.grizzly.util.AbstractThreadPool$Worker.doWork(AbstractThreadPool.java:532)
at com.sun.grizzly.util.AbstractThreadPool$Worker.run(AbstractThreadPool.java:513)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.NullPointerException
at com.sun.grizzly.arp.AsyncProcessorTask.obtainAndSetThreadAttachment(AsyncProcessorTask.java:233)
at com.sun.grizzly.arp.AsyncProcessorTask.enableTimeout(AsyncProcessorTask.java:227)
at com.sun.grizzly.arp.AsyncProcessorTask.doTask(AsyncProcessorTask.java:131)
... 14 more

#]

We've already tried the following strategies:
1. Update the grizzly-*.jar files in the "modules" directory (don't forget to clean the osgi-cache from the domain dir after this process) to the latest minor release, 1.9.60.
2. Update only grizzly-websockets.jar with the fix indicated at https://java.net/jira/browse/GRIZZLY-1289.

Neither of them resolved the problem effectively.

Moreover, the NullPointerException is confirmed in this block:

case AsyncTask.FINISH:
    enableTimeout();

    asyncExecutor.finishExecute();
    asyncExecutor.getAsyncHandler().returnTask(this);
    return;

caused by the following line returning "null":

final SelectionKey selectionKey = asyncExecutor.getProcessorTask().getSelectionKey();

And I think that when the NPE occurs, the code

asyncExecutor.finishExecute();

never runs, so the "session" is left associated with the thread, which causes trouble on subsequent requests.
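A hedged sketch of the guard this reasoning suggests; this is not the actual Grizzly fix, only an illustration of making finishExecute() run unconditionally:

case AsyncTask.FINISH:
    try {
        enableTimeout();
    } finally {
        // runs even if enableTimeout() throws, so the session/thread
        // association is always torn down
        asyncExecutor.finishExecute();
        asyncExecutor.getAsyncHandler().returnTask(this);
    }
    return;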

The most similar report we found is on StackOverflow:
http://stackoverflow.com/questions/25017334/weld-gf4-sessionscoped-sometimes-wrong-bean
which describes the same problem very accurately, although with GlassFish 4 instead.



 Comments   
Comment by moonysun [ 22/Dec/14 ]

I also have example code to show this (the code that is referenced by my initial comment) but I can't seem to find any way to upload it to the issue...

Comment by oleksiys [ 22/Dec/14 ]

can you pls. try to apply this patch first
https://dl.dropboxusercontent.com/u/7319744/grizzly-gf3122-patch.zip
?

If you can still reproduce the problem, then we'll need a test case. Unfortunately you can't upload it here.

Comment by moonysun [ 23/Dec/14 ]

Bug is still very much reproducible even with the patch provided.

During this last test, some exceptions showed up in the logs:

[#|2014-12-23T01:49:13.524+0000|SEVERE|glassfish3.1.2|javax.enterprise.system.container.web.com.sun.enterprise.web|_ThreadID=83;_ThreadName=Thread-2;|WebModule[/session-share-bug]PWC1322: Error invoking requestDestroyed method on ServletRequestListener org.jboss.weld.servlet.WeldListener
java.lang.IllegalArgumentException: null source
at java.util.EventObject.<init>(EventObject.java:56)
at javax.servlet.ServletRequestEvent.<init>(ServletRequestEvent.java:77)
at javax.servlet.ServletRequestAttributeEvent.<init>(ServletRequestAttributeEvent.java:83)
at org.apache.catalina.connector.Request.removeAttribute(Request.java:1625)
at org.apache.catalina.connector.RequestFacade.removeAttribute(RequestFacade.java:581)
at org.jboss.weld.context.http.HttpConversationContextImpl.removeRequestAttribute(HttpConversationContextImpl.java:32)
at org.jboss.weld.context.http.HttpConversationContextImpl.removeRequestAttribute(HttpConversationContextImpl.java:12)
at org.jboss.weld.context.AbstractConversationContext.dissociate(AbstractConversationContext.java:163)
at org.jboss.weld.servlet.WeldListener.requestDestroyed(WeldListener.java:117)
at org.apache.catalina.core.StandardContext.fireRequestDestroyedEvent(StandardContext.java:4648)
at org.apache.catalina.core.StandardHostValve.postInvoke(StandardHostValve.java:242)
at org.apache.catalina.connector.CoyoteAdapter.doService(CoyoteAdapter.java:333)
at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:231)
at com.sun.enterprise.v3.services.impl.ContainerMapper$AdapterCallable.call(ContainerMapper.java:317)
at com.sun.enterprise.v3.services.impl.ContainerMapper.service(ContainerMapper.java:195)
at com.sun.grizzly.http.ProcessorTask.invokeAdapter(ProcessorTask.java:860)
at com.sun.grizzly.http.ProcessorTask.doProcess(ProcessorTask.java:757)
at com.sun.grizzly.http.ProcessorTask.process(ProcessorTask.java:1042)
at com.sun.grizzly.http.DefaultProtocolFilter.execute(DefaultProtocolFilter.java:229)
at com.sun.grizzly.DefaultProtocolChain.executeProtocolFilter(DefaultProtocolChain.java:137)
at com.sun.grizzly.DefaultProtocolChain.execute(DefaultProtocolChain.java:104)
at com.sun.grizzly.DefaultProtocolChain.execute(DefaultProtocolChain.java:90)
at com.sun.grizzly.http.HttpProtocolChain.execute(HttpProtocolChain.java:79)
at com.sun.grizzly.ProtocolChainContextTask.doCall(ProtocolChainContextTask.java:54)
at com.sun.grizzly.SelectionKeyContextTask.call(SelectionKeyContextTask.java:59)
at com.sun.grizzly.ContextTask.run(ContextTask.java:71)
at com.sun.grizzly.util.AbstractThreadPool$Worker.doWork(AbstractThreadPool.java:532)
at com.sun.grizzly.util.AbstractThreadPool$Worker.run(AbstractThreadPool.java:513)
at java.lang.Thread.run(Thread.java:745)

#]

and

[#|2014-12-23T01:49:14.772+0000|SEVERE|glassfish3.1.2|javax.enterprise.system.container.web.com.sun.enterprise.web|_ThreadID=85;_ThreadName=Thread-2;|WebModule[/session-share-bug]PWC1322: Error invoking requestDestroyed method on ServletRequestListener org.jboss.weld.servlet.WeldListener
java.lang.NullPointerException
at org.apache.catalina.connector.Request.doGetSession(Request.java:2845)
at org.apache.catalina.connector.Request.getSession(Request.java:2583)
at org.apache.catalina.connector.RequestFacade.getSession(RequestFacade.java:920)
at org.jboss.weld.context.http.HttpConversationContextImpl.setSessionAttribute(HttpConversationContextImpl.java:16)
at org.jboss.weld.context.http.HttpConversationContextImpl.setSessionAttribute(HttpConversationContextImpl.java:12)
at org.jboss.weld.context.AbstractConversationContext.dissociate(AbstractConversationContext.java:157)
at org.jboss.weld.servlet.WeldListener.requestDestroyed(WeldListener.java:117)
at org.apache.catalina.core.StandardContext.fireRequestDestroyedEvent(StandardContext.java:4648)
at org.apache.catalina.core.StandardHostValve.postInvoke(StandardHostValve.java:242)
at org.apache.catalina.connector.CoyoteAdapter.doService(CoyoteAdapter.java:333)
at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:231)
at com.sun.enterprise.v3.services.impl.ContainerMapper$AdapterCallable.call(ContainerMapper.java:317)
at com.sun.enterprise.v3.services.impl.ContainerMapper.service(ContainerMapper.java:195)
at com.sun.grizzly.http.ProcessorTask.invokeAdapter(ProcessorTask.java:860)
at com.sun.grizzly.http.ProcessorTask.doProcess(ProcessorTask.java:757)
at com.sun.grizzly.http.ProcessorTask.process(ProcessorTask.java:1042)
at com.sun.grizzly.http.DefaultProtocolFilter.execute(DefaultProtocolFilter.java:229)
at com.sun.grizzly.DefaultProtocolChain.executeProtocolFilter(DefaultProtocolChain.java:137)
at com.sun.grizzly.DefaultProtocolChain.execute(DefaultProtocolChain.java:104)
at com.sun.grizzly.DefaultProtocolChain.execute(DefaultProtocolChain.java:90)
at com.sun.grizzly.http.HttpProtocolChain.execute(HttpProtocolChain.java:79)
at com.sun.grizzly.ProtocolChainContextTask.doCall(ProtocolChainContextTask.java:54)
at com.sun.grizzly.SelectionKeyContextTask.call(SelectionKeyContextTask.java:59)
at com.sun.grizzly.ContextTask.run(ContextTask.java:71)
at com.sun.grizzly.util.AbstractThreadPool$Worker.doWork(AbstractThreadPool.java:532)
at com.sun.grizzly.util.AbstractThreadPool$Worker.run(AbstractThreadPool.java:513)
at java.lang.Thread.run(Thread.java:745)
#]

and even more:

[#|2014-12-23T01:49:13.973+0000|SEVERE|glassfish3.1.2|javax.enterprise.system.container.web.com.sun.enterprise.web|_ThreadID=82;_ThreadName=Thread-2;|WebModule[/session-share-bug]PWC1322: Error invoking requestDestroyed method on ServletRequestListener org.jboss.weld.servlet.WeldListener
java.lang.NullPointerException
at org.apache.catalina.connector.Request.removeAttribute(Request.java:1621)
at org.apache.catalina.connector.RequestFacade.removeAttribute(RequestFacade.java:581)
at org.jboss.weld.context.http.HttpConversationContextImpl.removeRequestAttribute(HttpConversationContextImpl.java:32)
at org.jboss.weld.context.http.HttpConversationContextImpl.removeRequestAttribute(HttpConversationContextImpl.java:12)
at org.jboss.weld.context.AbstractConversationContext.dissociate(AbstractConversationContext.java:163)
at org.jboss.weld.servlet.WeldListener.requestDestroyed(WeldListener.java:117)
at org.apache.catalina.core.StandardContext.fireRequestDestroyedEvent(StandardContext.java:4648)
at org.apache.catalina.core.StandardHostValve.postInvoke(StandardHostValve.java:242)
at org.apache.catalina.connector.CoyoteAdapter.doService(CoyoteAdapter.java:333)
at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:231)
at com.sun.enterprise.v3.services.impl.ContainerMapper$AdapterCallable.call(ContainerMapper.java:317)
at com.sun.enterprise.v3.services.impl.ContainerMapper.service(ContainerMapper.java:195)
at com.sun.grizzly.http.ProcessorTask.invokeAdapter(ProcessorTask.java:860)
at com.sun.grizzly.http.ProcessorTask.doProcess(ProcessorTask.java:757)
at com.sun.grizzly.http.ProcessorTask.process(ProcessorTask.java:1042)
at com.sun.grizzly.http.DefaultProtocolFilter.execute(DefaultProtocolFilter.java:229)
at com.sun.grizzly.DefaultProtocolChain.executeProtocolFilter(DefaultProtocolChain.java:137)
at com.sun.grizzly.DefaultProtocolChain.execute(DefaultProtocolChain.java:104)
at com.sun.grizzly.DefaultProtocolChain.execute(DefaultProtocolChain.java:90)
at com.sun.grizzly.http.HttpProtocolChain.execute(HttpProtocolChain.java:79)
at com.sun.grizzly.ProtocolChainContextTask.doCall(ProtocolChainContextTask.java:54)
at com.sun.grizzly.SelectionKeyContextTask.call(SelectionKeyContextTask.java:59)
at com.sun.grizzly.ContextTask.run(ContextTask.java:71)
at com.sun.grizzly.util.AbstractThreadPool$Worker.doWork(AbstractThreadPool.java:532)
at com.sun.grizzly.util.AbstractThreadPool$Worker.run(AbstractThreadPool.java:513)
at java.lang.Thread.run(Thread.java:745)

#]

Regarding the test case, at this web address:

https://www.dropbox.com/sh/stk0cgehvch4ecd/AABI-mPTo9722U5PiIxMNFfCa?dl=0

You will find three files:
session-share-bug.war (deploy to glassfish)
session-share-bug.zip (maven project source code)
TestCDILogout.jmx (jmeter test plan to show the problem)

To reproduce the issue:
Use Apache JMeter to run the TestCDILogout.jmx test plan

and look for messages in the server.log such as:

REUSING

Comment by oleksiys [ 24/Dec/14 ]

Just wanted to confirm: are you able to reproduce the problem with only one JMeter user thread?
How long do you normally wait?

Comment by jpereira@linkare.com [ 24/Dec/14 ]

I am able to reproduce it with 2 user threads. But with 10 users it takes about 1 min (sometimes less) to get a few errors like the one I mentioned (finding REUSING in the logs is considered an error, as you can understand from the code for obvious reasons).

I can also do it using two browsers, each with two tabs/windows, hitting refresh in a quick but random order on the two servlet URLs... so you don't actually need much load to run into it...

Comment by oleksiys [ 25/Dec/14 ]

Pls. try this patch

https://dl.dropboxusercontent.com/u/7319744/grizzly-gf3122-patch.zip

Comment by moonysun [ 25/Dec/14 ]

Nope. Still nothing changes... It even looks like the problem reproduced faster... (but that might be my new laptop being faster).

Oh, and by the way, I tested again with a single thread in JMeter and the problem still shows.

Comment by oleksiys [ 25/Dec/14 ]

can you pls. attach the stacktrace?

Comment by moonysun [ 25/Dec/14 ]

The problem shows up even when there is no stack trace.
I hope you understand the severity of sharing session objects between the sessions of different users.

Sometimes, this NPE also shows up, but it is quite rare:

[#|2014-12-25T20:06:08.337+0000|SEVERE|glassfish3.1.2|javax.enterprise.system.container.web.com.sun.enterprise.web|_ThreadID=94;_ThreadName=Thread-2;|WebModule[/session-share-bug]PWC1322: Error invoking requestDestroyed method on ServletRequestListener org.jboss.weld.servlet.WeldListener
java.lang.NullPointerException
at org.apache.catalina.connector.Request.doGetSession(Request.java:2845)
at org.apache.catalina.connector.Request.getSession(Request.java:2583)
at org.apache.catalina.connector.RequestFacade.getSession(RequestFacade.java:920)
at org.jboss.weld.context.http.HttpConversationContextImpl.getSessionAttribute(HttpConversationContextImpl.java:23)
at org.jboss.weld.context.http.HttpConversationContextImpl.getSessionAttribute(HttpConversationContextImpl.java:12)
at org.jboss.weld.context.AbstractConversationContext.dissociate(AbstractConversationContext.java:156)
at org.jboss.weld.servlet.WeldListener.requestDestroyed(WeldListener.java:117)
at org.apache.catalina.core.StandardContext.fireRequestDestroyedEvent(StandardContext.java:4648)
at org.apache.catalina.core.StandardHostValve.postInvoke(StandardHostValve.java:242)
at org.apache.catalina.connector.CoyoteAdapter.doService(CoyoteAdapter.java:333)
at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:231)
at com.sun.enterprise.v3.services.impl.ContainerMapper$AdapterCallable.call(ContainerMapper.java:317)
at com.sun.enterprise.v3.services.impl.ContainerMapper.service(ContainerMapper.java:195)
at com.sun.grizzly.http.ProcessorTask.invokeAdapter(ProcessorTask.java:860)
at com.sun.grizzly.http.ProcessorTask.doProcess(ProcessorTask.java:757)
at com.sun.grizzly.http.ProcessorTask.process(ProcessorTask.java:1042)
at com.sun.grizzly.http.DefaultProtocolFilter.execute(DefaultProtocolFilter.java:229)
at com.sun.grizzly.DefaultProtocolChain.executeProtocolFilter(DefaultProtocolChain.java:137)
at com.sun.grizzly.DefaultProtocolChain.execute(DefaultProtocolChain.java:104)
at com.sun.grizzly.DefaultProtocolChain.execute(DefaultProtocolChain.java:90)
at com.sun.grizzly.http.HttpProtocolChain.execute(HttpProtocolChain.java:79)
at com.sun.grizzly.ProtocolChainContextTask.doCall(ProtocolChainContextTask.java:54)
at com.sun.grizzly.SelectionKeyContextTask.call(SelectionKeyContextTask.java:59)
at com.sun.grizzly.ContextTask.run(ContextTask.java:71)
at com.sun.grizzly.util.AbstractThreadPool$Worker.doWork(AbstractThreadPool.java:532)
at com.sun.grizzly.util.AbstractThreadPool$Worker.run(AbstractThreadPool.java:513)
at java.lang.Thread.run(Thread.java:745)

#]
Comment by oleksiys [ 25/Dec/14 ]

Do you see NPE similar to the one you reported earlier:

_ThreadName=Thread-2;|java.lang.NullPointerException
at com.sun.grizzly.arp.AsyncProcessorTask.obtainAndSetThreadAttachment(AsyncProcessorTask.java:233)
at com.sun.grizzly.arp.AsyncProcessorTask.enableTimeout(AsyncProcessorTask.java:227)
at com.sun.grizzly.arp.AsyncProcessorTask.doTask(AsyncProcessorTask.java:131)
.....

?

Comment by jpereira@linkare.com [ 25/Dec/14 ]

This time I didn't. But before, it only showed up sometimes, so there is some uncertainty in the observation.

Even so, the main issue of session sharing is still present.

Comment by oleksiys [ 26/Dec/14 ]

I think the Grizzly part is fixed; the rest IMO is related to the web container.
It (the web container) must not execute the chain's postInvoke in the service() thread if the Servlet request was suspended.

Comment by jpereira@linkare.com [ 26/Dec/14 ]

Ok. So I will test your last patch more thoroughly and report if I can find the NPE or some other bugs related to Grizzly... Meanwhile, have you seen my initial comment stating that an error in the enableTimeout() method call would never allow finishExecute() to be reached?

One other thing... what were the code changes in your last patch?

Comment by oleksiys [ 26/Dec/14 ]

Right, the enableTimeout() race is fixed:
https://java.net/jira/browse/GRIZZLY-1729

Comment by moonysun [ 26/Dec/14 ]

So now, as far as I can tell, the only exceptions that remain are:

[#|2014-12-26T22:13:28.230+0000|SEVERE|glassfish3.1.2|javax.enterprise.system.container.web.com.sun.enterprise.web|_ThreadID=75;_ThreadName=Thread-2;|WebModule[/session-share-bug]PWC1322: Error invoking requestDestroyed method on ServletRequestListener org.jboss.weld.servlet.WeldListener
java.lang.NullPointerException
at org.apache.catalina.connector.Request.removeAttribute(Request.java:1621)
at org.apache.catalina.connector.RequestFacade.removeAttribute(RequestFacade.java:581)
at org.jboss.weld.context.http.HttpConversationContextImpl.removeRequestAttribute(HttpConversationContextImpl.java:32)
at org.jboss.weld.context.http.HttpConversationContextImpl.removeRequestAttribute(HttpConversationContextImpl.java:12)
at org.jboss.weld.context.AbstractConversationContext.dissociate(AbstractConversationContext.java:163)
at org.jboss.weld.servlet.WeldListener.requestDestroyed(WeldListener.java:117)
at org.apache.catalina.core.StandardContext.fireRequestDestroyedEvent(StandardContext.java:4648)
at org.apache.catalina.core.StandardHostValve.postInvoke(StandardHostValve.java:242)
at org.apache.catalina.connector.CoyoteAdapter.doService(CoyoteAdapter.java:333)
at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:231)
at com.sun.enterprise.v3.services.impl.ContainerMapper$AdapterCallable.call(ContainerMapper.java:317)
at com.sun.enterprise.v3.services.impl.ContainerMapper.service(ContainerMapper.java:195)
at com.sun.grizzly.http.ProcessorTask.invokeAdapter(ProcessorTask.java:860)
at com.sun.grizzly.http.ProcessorTask.doProcess(ProcessorTask.java:757)
at com.sun.grizzly.http.ProcessorTask.process(ProcessorTask.java:1042)
at com.sun.grizzly.http.DefaultProtocolFilter.execute(DefaultProtocolFilter.java:229)
at com.sun.grizzly.DefaultProtocolChain.executeProtocolFilter(DefaultProtocolChain.java:137)
at com.sun.grizzly.DefaultProtocolChain.execute(DefaultProtocolChain.java:104)
at com.sun.grizzly.DefaultProtocolChain.execute(DefaultProtocolChain.java:90)
at com.sun.grizzly.http.HttpProtocolChain.execute(HttpProtocolChain.java:79)
at com.sun.grizzly.ProtocolChainContextTask.doCall(ProtocolChainContextTask.java:54)
at com.sun.grizzly.SelectionKeyContextTask.call(SelectionKeyContextTask.java:59)
at com.sun.grizzly.ContextTask.run(ContextTask.java:71)
at com.sun.grizzly.util.AbstractThreadPool$Worker.doWork(AbstractThreadPool.java:532)
at com.sun.grizzly.util.AbstractThreadPool$Worker.run(AbstractThreadPool.java:513)
at java.lang.Thread.run(Thread.java:745)

#]

-------------------------

[#|2014-12-26T22:13:28.235+0000|SEVERE|glassfish3.1.2|org.apache.catalina.connector.CoyoteAdapter|_ThreadID=75;_ThreadName=Thread-2;|PWC3989: An exception or error occurred in the container during the request processing
java.lang.NullPointerException
at org.apache.catalina.connector.Request.setAttribute(Request.java:1689)
at org.apache.catalina.connector.RequestFacade.setAttribute(RequestFacade.java:570)
at org.apache.catalina.core.StandardContext.fireRequestDestroyedEvent(StandardContext.java:4654)
at org.apache.catalina.core.StandardHostValve.postInvoke(StandardHostValve.java:242)
at org.apache.catalina.connector.CoyoteAdapter.doService(CoyoteAdapter.java:333)
at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:231)
at com.sun.enterprise.v3.services.impl.ContainerMapper$AdapterCallable.call(ContainerMapper.java:317)
at com.sun.enterprise.v3.services.impl.ContainerMapper.service(ContainerMapper.java:195)
at com.sun.grizzly.http.ProcessorTask.invokeAdapter(ProcessorTask.java:860)
at com.sun.grizzly.http.ProcessorTask.doProcess(ProcessorTask.java:757)
at com.sun.grizzly.http.ProcessorTask.process(ProcessorTask.java:1042)
at com.sun.grizzly.http.DefaultProtocolFilter.execute(DefaultProtocolFilter.java:229)
at com.sun.grizzly.DefaultProtocolChain.executeProtocolFilter(DefaultProtocolChain.java:137)
at com.sun.grizzly.DefaultProtocolChain.execute(DefaultProtocolChain.java:104)
at com.sun.grizzly.DefaultProtocolChain.execute(DefaultProtocolChain.java:90)
at com.sun.grizzly.http.HttpProtocolChain.execute(HttpProtocolChain.java:79)
at com.sun.grizzly.ProtocolChainContextTask.doCall(ProtocolChainContextTask.java:54)
at com.sun.grizzly.SelectionKeyContextTask.call(SelectionKeyContextTask.java:59)
at com.sun.grizzly.ContextTask.run(ContextTask.java:71)
at com.sun.grizzly.util.AbstractThreadPool$Worker.doWork(AbstractThreadPool.java:532)
at com.sun.grizzly.util.AbstractThreadPool$Worker.run(AbstractThreadPool.java:513)
at java.lang.Thread.run(Thread.java:745)

#]

------------------------------------

[#|2014-12-26T22:13:32.325+0000|SEVERE|glassfish3.1.2|javax.enterprise.system.container.web.com.sun.enterprise.web|_ThreadID=79;_ThreadName=Thread-2;|WebModule[/session-share-bug]PWC1322: Error invoking requestDestroyed method on ServletRequestListener org.jboss.weld.servlet.WeldListener
java.lang.NullPointerException
at org.jboss.weld.context.http.HttpConversationContextImpl.getSessionAttribute(HttpConversationContextImpl.java:24)
at org.jboss.weld.context.http.HttpConversationContextImpl.getSessionAttribute(HttpConversationContextImpl.java:12)
at org.jboss.weld.context.AbstractConversationContext.dissociate(AbstractConversationContext.java:159)
at org.jboss.weld.servlet.WeldListener.requestDestroyed(WeldListener.java:117)
at org.apache.catalina.core.StandardContext.fireRequestDestroyedEvent(StandardContext.java:4648)
at org.apache.catalina.core.StandardHostValve.postInvoke(StandardHostValve.java:242)
at org.apache.catalina.connector.CoyoteAdapter.doService(CoyoteAdapter.java:333)
at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:231)
at com.sun.enterprise.v3.services.impl.ContainerMapper$AdapterCallable.call(ContainerMapper.java:317)
at com.sun.enterprise.v3.services.impl.ContainerMapper.service(ContainerMapper.java:195)
at com.sun.grizzly.http.ProcessorTask.invokeAdapter(ProcessorTask.java:860)
at com.sun.grizzly.http.ProcessorTask.doProcess(ProcessorTask.java:757)
at com.sun.grizzly.http.ProcessorTask.process(ProcessorTask.java:1042)
at com.sun.grizzly.http.DefaultProtocolFilter.execute(DefaultProtocolFilter.java:229)
at com.sun.grizzly.DefaultProtocolChain.executeProtocolFilter(DefaultProtocolChain.java:137)
at com.sun.grizzly.DefaultProtocolChain.execute(DefaultProtocolChain.java:104)
at com.sun.grizzly.DefaultProtocolChain.execute(DefaultProtocolChain.java:90)
at com.sun.grizzly.http.HttpProtocolChain.execute(HttpProtocolChain.java:79)
at com.sun.grizzly.ProtocolChainContextTask.doCall(ProtocolChainContextTask.java:54)
at com.sun.grizzly.SelectionKeyContextTask.call(SelectionKeyContextTask.java:59)
at com.sun.grizzly.ContextTask.run(ContextTask.java:71)
at com.sun.grizzly.util.AbstractThreadPool$Worker.doWork(AbstractThreadPool.java:532)
at com.sun.grizzly.util.AbstractThreadPool$Worker.run(AbstractThreadPool.java:513)
at java.lang.Thread.run(Thread.java:745)

#]

--------------------------------------------

[#|2014-12-26T22:13:39.775+0000|SEVERE|glassfish3.1.2|javax.enterprise.system.container.web.com.sun.enterprise.web|_ThreadID=77;_ThreadName=Thread-2;|WebModule[/session-share-bug]PWC1322: Error invoking requestDestroyed method on ServletRequestListener org.jboss.weld.servlet.WeldListener
java.lang.NullPointerException
at org.apache.catalina.connector.Request.doGetSession(Request.java:2845)
at org.apache.catalina.connector.Request.getSession(Request.java:2583)
at org.apache.catalina.connector.RequestFacade.getSession(RequestFacade.java:920)
at org.jboss.weld.context.http.HttpConversationContextImpl.getSessionAttribute(HttpConversationContextImpl.java:23)
at org.jboss.weld.context.http.HttpConversationContextImpl.getSessionAttribute(HttpConversationContextImpl.java:12)
at org.jboss.weld.context.AbstractConversationContext.dissociate(AbstractConversationContext.java:156)
at org.jboss.weld.servlet.WeldListener.requestDestroyed(WeldListener.java:117)
at org.apache.catalina.core.StandardContext.fireRequestDestroyedEvent(StandardContext.java:4648)
at org.apache.catalina.core.StandardHostValve.postInvoke(StandardHostValve.java:242)
at org.apache.catalina.connector.CoyoteAdapter.doService(CoyoteAdapter.java:333)
at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:231)
at com.sun.enterprise.v3.services.impl.ContainerMapper$AdapterCallable.call(ContainerMapper.java:317)
at com.sun.enterprise.v3.services.impl.ContainerMapper.service(ContainerMapper.java:195)
at com.sun.grizzly.http.ProcessorTask.invokeAdapter(ProcessorTask.java:860)
at com.sun.grizzly.http.ProcessorTask.doProcess(ProcessorTask.java:757)
at com.sun.grizzly.http.ProcessorTask.process(ProcessorTask.java:1042)
at com.sun.grizzly.http.DefaultProtocolFilter.execute(DefaultProtocolFilter.java:229)
at com.sun.grizzly.DefaultProtocolChain.executeProtocolFilter(DefaultProtocolChain.java:137)
at com.sun.grizzly.DefaultProtocolChain.execute(DefaultProtocolChain.java:104)
at com.sun.grizzly.DefaultProtocolChain.execute(DefaultProtocolChain.java:90)
at com.sun.grizzly.http.HttpProtocolChain.execute(HttpProtocolChain.java:79)
at com.sun.grizzly.ProtocolChainContextTask.doCall(ProtocolChainContextTask.java:54)
at com.sun.grizzly.SelectionKeyContextTask.call(SelectionKeyContextTask.java:59)
at com.sun.grizzly.ContextTask.run(ContextTask.java:71)
at com.sun.grizzly.util.AbstractThreadPool$Worker.doWork(AbstractThreadPool.java:532)
at com.sun.grizzly.util.AbstractThreadPool$Worker.run(AbstractThreadPool.java:513)
at java.lang.Thread.run(Thread.java:745)

#]

------------------------------------------

[#|2014-12-26T22:13:46.649+0000|WARNING|glassfish3.1.2|javax.enterprise.system.container.web.com.sun.enterprise.web|_ThreadID=78;_ThreadName=Thread-2;|StandardWrapperValve[AsyncDownloadProxy]: PWC1406: Servlet.service() for servlet AsyncDownloadProxy threw exception
java.lang.IllegalStateException: Missing wrapper
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:226)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:217)
at pt.segsocial.ii.session.AuditFilter.doFilter(AuditFilter.java:63)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:256)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:217)
at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:279)
at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:175)
at org.apache.catalina.core.StandardPipeline.doInvoke(StandardPipeline.java:655)
at org.apache.catalina.core.StandardPipeline.invoke(StandardPipeline.java:595)
at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:161)
at org.apache.catalina.connector.CoyoteAdapter.doService(CoyoteAdapter.java:331)
at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:231)
at com.sun.enterprise.v3.services.impl.ContainerMapper$AdapterCallable.call(ContainerMapper.java:317)
at com.sun.enterprise.v3.services.impl.ContainerMapper.service(ContainerMapper.java:195)
at com.sun.grizzly.http.ProcessorTask.invokeAdapter(ProcessorTask.java:860)
at com.sun.grizzly.http.ProcessorTask.doProcess(ProcessorTask.java:757)
at com.sun.grizzly.http.ProcessorTask.process(ProcessorTask.java:1042)
at com.sun.grizzly.http.DefaultProtocolFilter.execute(DefaultProtocolFilter.java:229)
at com.sun.grizzly.DefaultProtocolChain.executeProtocolFilter(DefaultProtocolChain.java:137)
at com.sun.grizzly.DefaultProtocolChain.execute(DefaultProtocolChain.java:104)
at com.sun.grizzly.DefaultProtocolChain.execute(DefaultProtocolChain.java:90)
at com.sun.grizzly.http.HttpProtocolChain.execute(HttpProtocolChain.java:79)
at com.sun.grizzly.ProtocolChainContextTask.doCall(ProtocolChainContextTask.java:54)
at com.sun.grizzly.SelectionKeyContextTask.call(SelectionKeyContextTask.java:59)
at com.sun.grizzly.ContextTask.run(ContextTask.java:71)
at com.sun.grizzly.util.AbstractThreadPool$Worker.doWork(AbstractThreadPool.java:532)
at com.sun.grizzly.util.AbstractThreadPool$Worker.run(AbstractThreadPool.java:513)
at java.lang.Thread.run(Thread.java:745)

#]

---------------------------------------------------------

And the main problem still remains, as reported in the initial statement.

Comment by moonysun [ 26/Dec/14 ]

Meanwhile, sorry for bothering, but the error still seems to be present, judging by inspection of the code... I'll explain my reasoning.

In the following lines, in the doTask method of class AsyncProcessorTask:

case AsyncTask.FINISH:
    enableTimeout(); // if this method fails for some reason, finishExecute() will never get called...

    asyncExecutor.finishExecute(); // this might never get called at all, if enableTimeout() above fails
    asyncExecutor.getAsyncHandler().returnTask(this);
    return;

On the other hand, in the method enableTimeout(), we have:

private void enableTimeout() {
    final ThreadAttachment attachment = obtainAndSetThreadAttachment(); // check this method for possible exceptions...
    attachment.setIdleTimeoutDelay(SelectionKeyAttachment.UNSET_TIMEOUT); // not prone to exceptions unless "attachment" is null
    attachment.setTimeout(System.currentTimeMillis()); // not prone to exceptions unless "attachment" is null
}

And in the method obtainAndSetThreadAttachment() we have:

private ThreadAttachment obtainAndSetThreadAttachment() {
    final ProcessorTask processorTask = asyncExecutor.getProcessorTask();
    final SelectionKey selectionKey = processorTask.getSelectionKey();

    // those two previous lines were previously written as
    // final SelectionKey selectionKey = asyncExecutor.getProcessorTask().getSelectionKey();
    // so the NPE at asyncExecutor.getProcessorTask() might still occur nonetheless...

    Object attachment = selectionKey.attachment();
    if (attachment == null || !(attachment instanceof ThreadAttachment)) {
        attachment = obtainThreadAttachment();
        selectionKey.attach(attachment);
    }

    return (ThreadAttachment) attachment;
}

(check my comments in the code snippets above)...

-------------------------

Nonetheless, I made a more comprehensive run (100 virtual users, download -> logout) and I didn't find the original NPE in the logs anymore...

Comment by jpereira@linkare.com [ 10/Feb/15 ]

Bump





[GLASSFISH-21010] Under normal traffic, Glassfish crashes after disconnecting the network connection from client PC Created: 19/Mar/14  Updated: 19/Sep/14

Status: Open
Project: glassfish
Component/s: web_socket
Affects Version/s: 4.1
Fix Version/s: None

Type: Bug Priority: Critical
Reporter: khanhdo Assignee: oleksiys
Resolution: Unresolved Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified
Environment:

GlassFish (version 4.0.1-b04-03_13_2014) installed on Windows 7.
A WebSocket application is deployed.


Issue Links:
Dependency
depends on GRIZZLY-1670 No event is called on HttpHandler's W... Resolved

 Description   

While the GlassFish server is working under normal traffic (throughput = 80 kbps), disconnect the network cable from a client. From then on, the GlassFish server becomes unstable, and shortly afterwards it does not respond to any request from other client sessions.

Steps to reproduce the issue:
Test PC A-------> Glassfish Server <---------Test PC B
(Ethernet network: 1 Gbps)

1. From test PCs A and B, open web browser sessions to connect to the testing site.
2. From these test PCs, perform actions that pull data from the server until the traffic throughput at the server reaches and stays at nearly 80 kbps.
3. Disconnect the network cable at test PC A.
4. Monitor the reaction of the browser sessions on PCs A and B, and also on the server.

Actual result:
1. The site is loaded successfully at both test PCs A and B. Browser sessions are connected to the WebSocket server endpoint and can send/receive WebSocket data.
2. The traffic throughput reaches nearly 80 kbps at the server; no problem occurs.
3. - No data is transmitted to PC A because the cable is disconnected.

  • PC B still receives data for a short time (10 seconds), then receives no more data.
  • The GlassFish server does not log any exception. For the next 1-2 minutes GlassFish still accepts HTTP connections, but later on it does not respond to any HTTP request. It seems GlassFish has crashed (ping to the server is still ok).

Note:
1. This issue does not occur when closing the browser session. It only occurs when disconnecting the physical cable or disabling the network adapter on the test PC.
2. Under low traffic (< 80 kbps), this issue does not occur either.



 Comments   
Comment by oleksiys [ 20/Mar/14 ]

It would be great if you could take a thread dump snapshot when GF stops responding (using jstack, for example).

Can you please create a testcase for us (maven project)?

thank you!

Comment by khanhdo [ 20/Mar/14 ]

Hello Oleksiys,

Thanks for your comment.

Enclosed are the thread dump taken with jstack and also the exception output. The test case as a Maven project will be provided soon.

Best Regards,
Khanh Do

------------
1. Thread dump link to download:

https://docs.google.com/file/d/0B3mBNH6XLa-PMWlDMjVpWHdfelU/edit?pli=1

------------
2. Exceptions:

WARNING: WriteListener.onError
java.io.IOException: Connection closed
at org.glassfish.grizzly.asyncqueue.TaskQueue.onClose(TaskQueue.java:291)
at org.glassfish.grizzly.nio.AbstractNIOAsyncQueueWriter.onClose(AbstractNIOAsyncQueueWriter.java:576)
at org.glassfish.grizzly.nio.transport.TCPNIOTransport.closeConnection(TCPNIOTransport.java:400)
at org.glassfish.grizzly.nio.NIOConnection$3.run(NIOConnection.java:485)
at org.glassfish.grizzly.nio.DefaultSelectorHandler.execute(DefaultSelectorHandler.java:234)
at org.glassfish.grizzly.nio.NIOConnection.close0(NIOConnection.java:479)
at org.glassfish.grizzly.nio.transport.TCPNIOConnection.close0(TCPNIOConnection.java:292)
at org.glassfish.grizzly.nio.transport.TCPNIOAsyncQueueWriter.writeCompositeRecord(TCPNIOAsyncQueueWriter.java:169)
at org.glassfish.grizzly.nio.transport.TCPNIOAsyncQueueWriter.write0(TCPNIOAsyncQueueWriter.java:87)
at org.glassfish.grizzly.nio.AbstractNIOAsyncQueueWriter.processAsync(AbstractNIOAsyncQueueWriter.java:423)
at org.glassfish.grizzly.filterchain.DefaultFilterChain.process(DefaultFilterChain.java:111)
at org.glassfish.grizzly.ProcessorExecutor.execute(ProcessorExecutor.java:77)
at org.glassfish.grizzly.nio.transport.TCPNIOTransport.fireIOEvent(TCPNIOTransport.java:550)
at org.glassfish.grizzly.strategies.AbstractIOStrategy.fireIOEvent(AbstractIOStrategy.java:112)
at org.glassfish.grizzly.strategies.WorkerThreadIOStrategy.run0(WorkerThreadIOStrategy.java:117)
at org.glassfish.grizzly.strategies.WorkerThreadIOStrategy.executeIoEvent(WorkerThreadIOStrategy.java:103)
at org.glassfish.grizzly.strategies.AbstractIOStrategy.executeIoEvent(AbstractIOStrategy.java:89)
at org.glassfish.grizzly.nio.SelectorRunner.iterateKeyEvents(SelectorRunner.java:412)
at org.glassfish.grizzly.nio.SelectorRunner.iterateKeys(SelectorRunner.java:381)
at org.glassfish.grizzly.nio.SelectorRunner.doSelect(SelectorRunner.java:345)
at org.glassfish.grizzly.nio.SelectorRunner.run(SelectorRunner.java:276)
at org.glassfish.grizzly.threadpool.AbstractThreadPool$Worker.doWork(AbstractThreadPool.java:565)
at org.glassfish.grizzly.threadpool.AbstractThreadPool$Worker.run(AbstractThreadPool.java:545)
at java.lang.Thread.run(Thread.java:744)
Caused by: java.io.IOException: An established connection was aborted by the software in your host machine
at sun.nio.ch.SocketDispatcher.write0(Native Method)
at sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:51)
at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:93)
at sun.nio.ch.IOUtil.write(IOUtil.java:51)
at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:487)
at org.glassfish.grizzly.nio.transport.TCPNIOUtils.flushByteBuffer(TCPNIOUtils.java:149)
at org.glassfish.grizzly.nio.transport.TCPNIOAsyncQueueWriter.writeCompositeRecord(TCPNIOAsyncQueueWriter.java:158)
------------

Comment by oleksiys [ 20/Mar/14 ]

Hi,

thank you for the info.
The exception and the thread dump look fine, I can't see any sign of the problem.

Is it possible for you to create a testcase for us (maven based would be great)?

Thank you!

Comment by khanhdo [ 20/Mar/14 ]

Hi Oleksiys,

Thanks for your information.

Regarding the Maven test case, I just created a WebSocket program to reproduce this issue, which can be cloned from the following Git link:

https://github.com/khanhdoth/GLASSFISH-21010

Best Regards,
Khanh Do

Comment by oleksiys [ 20/Mar/14 ]

perfect, thank you very much!
I'll let you know once I have more info.

Comment by oleksiys [ 27/Mar/14 ]

Hi,

unfortunately I can't reproduce the problem.
Can you pls. give me more details: when you say "but later on, it does not respond to any http request", do you mean a simple "http://localhost:8080" won't work?
Can you pls. try to connect to:

1) http://localhost:4848
2) telnet localhost 8080

Thank you!

Comment by khanhdo [ 03/Apr/14 ]

Hi Oleksiys,

Sorry for the late reply. Yes, I mean only the http and websocket services at http://localhost:8080 won't work; 4848 is still ok.

In my original WebSocket application, the root cause is that for every WebSocket message I spawn a thread to send it (in order to send the message as fast as possible without blocking the IO). Normally, a thread is fired and finishes shortly after the send succeeds. But when the network cable is disconnected, GlassFish still keeps the session open for a long time. As a result, the sending threads remain alive during this time. The situation becomes worse as more threads are created to send other ongoing messages.

Here, the problem I mentioned ("but later on, it does not respond to any http request") occurs. Since more and more threads are created, this may exceed the capacity of the GlassFish server.

The small provided test application does not cover exactly what happens in my WebSocket application, sorry. It only shows that the sending thread is blocked when the network cable is disconnected. This leaves the WebSocket server endpoint blocked and unable to send messages. It is somehow similar to the situation I meet in my app.

So I don't know whether the root cause, "the sending thread is blocked when the network cable is disconnected", is a bug or just normal behavior.

What is your idea?

Currently I use a ThreadPoolExecutor to send the messages. It caps the number of threads that can be used for sending, which prevents the problem.
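A minimal sketch of that approach, assuming session and msg come from the WebSocket endpoint; the pool sizes and queue bound below are illustrative assumptions:

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

// Bounded sender pool: at most 10 threads and 100 queued sends, so a
// dead connection cannot pile up an unbounded number of blocked senders.
ExecutorService senders = new ThreadPoolExecutor(
        2, 10, 60L, TimeUnit.SECONDS,
        new ArrayBlockingQueue<Runnable>(100),
        new ThreadPoolExecutor.CallerRunsPolicy());

senders.execute(new Runnable() {
    public void run() {
        try {
            session.getBasicRemote().sendObject(msg); // may block; the pool caps the damage
        } catch (Exception e) {
            // a send to a dead peer fails here instead of exhausting server threads
        }
    }
});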

Thanks & Best Regards,
Khanh Do

Comment by Pavel Bucek [ 03/Apr/14 ]

Hi khando,

can you please try to replace

session.getBasicRemote().sendText(msg);

with

session.getAsyncRemote().sendText(msg);

?

It returns a Future instance, but as long as you don't call Future#get() on it, your thread should not remain blocked after the disconnect happens.
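
For illustration, the non-blocking variant is just this (a sketch; session and msg as in your app):

// Async send returns immediately; the actual write happens asynchronously.
java.util.concurrent.Future<Void> f = session.getAsyncRemote().sendText(msg);
// Do NOT call f.get() here, or the calling thread may block again on a dead connection.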

Comment by khanhdo [ 04/Apr/14 ]

Hi Pavel,

Thanks for your suggestion. And here's the results after the cable disconnection:

1. session.getBasicRemote().sendText(msg) -> thread is blocked
2. session.getAsyncRemote().sendText(msg) -> thread is not blocked
3. session.getBasicRemote().sendObject(msg) -> thread is blocked
4. session.getAsyncRemote().sendObject(msg) -> thread is blocked

It's very strange to me that in case 4 the thread is blocked. Is this a bug?

Unfortunately, I'm using sendObject() instead of sendText() to send the data... therefore, the threads are still blocked after disconnection.

Comment by Pavel Bucek [ 04/Apr/14 ]

Regarding case 4 - can you please post a stack trace of the blocked thread(s)? (I don't see any reason why it should be blocked, from the code perspective.)





[GLASSFISH-18803] no response for a chunked package Created: 13/Jun/12  Updated: 14/Dec/12

Status: Open
Project: glassfish
Component/s: grizzly-kernel
Affects Version/s: 3.1.2_b23
Fix Version/s: None

Type: Bug Priority: Critical
Reporter: yavuzs Assignee: oleksiys
Resolution: Unresolved Votes: 0
Labels: None
Remaining Estimate: 1 day
Time Spent: Not Specified
Original Estimate: 1 day
Environment:

SunOs, x86, 64bit


Attachments: File seac808012.cap    
Tags: http, http-listener

 Description   

There is a server that sends us chunked requests (by POST); our application on glassfish takes these packages and processes them. But somehow, some of the packages are not processed by the application, and there is no log activity about them either. Yet when we capture the network activity, we can see that the request has reached the server.
We tried the "-Dcom.sun.enterprise.web.connector.grizzly.enableSnoop=true" parameter; even with it, there is no log for the lost request.
Here is a lost request:
Host: 192.168.100.10:49205
Transfer-Encoding: chunked

185
<?xml version='1.0' encoding='UTF-8'?>
<env:Envelope xmlns:env='http://www.w3.org/2003/05/soap-envelope' xmlns:seadac='http://www.schange.com/2004/SeaDAC/triggers'>
<env:Header><seadac:transaction transaction-id='{20321632-B758-442a-ADFE-6E4607343AE3}' origination-time='2012-06-12T08:29:04Z' env:role='http://www.w3.org/2003/05/soap-envelope/ultimateReceiver'/></env:Header>
<env:Body>
80
<seadac:subscription-notification subscription-id='{1942B6B8-158F-4279-B9E9-0429671BEC67}' current-request='3' next-request='4'>
c7
<seadac:hierarchy-changed hierarchy-uid='539401'><seadac:node-added hierarchy-uid='539710' asset-uid='528673'/></seadac:hierarchy-changed></seadac:subscription-notification></env:Body></env:Envelope>
0

you can find the capture attached.
Frames 7824 and 7825 are the same request that the server sends to all subscribed clients. In this example there are two clients subscribed: the gf application and the tomcat application. As you can see, at frame 7824 tomcat responded to the request, but at frame 7825 gf did not respond.
regards,



 Comments   
Comment by yavuzs [ 19/Jun/12 ]

any response?

Comment by oleksiys [ 14/Dec/12 ]

sorry for the late response.
is it still the case w/ Glassfish 3.1.2.2?

thanks.

Comment by yavuzs [ 14/Dec/12 ]

the version is : GlassFish Server Open Source Edition 3.1.2 (build 23)

Comment by oleksiys [ 14/Dec/12 ]

yes, I understand, but there is a newer version (patch release) where this issue might have been fixed.
are you able to reproduce the issue consistently, or does it happen sporadically?

Comment by yavuzs [ 14/Dec/12 ]

It happens sporadically. We haven't tried the new patch yet.





[GLASSFISH-21211] Grizzly NIO selectors running postponed tasks indefinitely causing high CPU usage Created: 19/Sep/14  Updated: 22/Oct/14

Status: Open
Project: glassfish
Component/s: grizzly-kernel
Affects Version/s: 4.1_b10
Fix Version/s: None

Type: Bug Priority: Critical
Reporter: afcarv Assignee: oleksiys
Resolution: Unresolved Votes: 1
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified
Environment:

Glassfish 4.1_b13, JDK 1.7u67, Linux CentOS 3.10.48 x64



 Description   

I have an nginx http server load balancing a Glassfish cluster composed of 2 instances in different machines. These instances serve a Jersey application composed of several RESTful webservices.

After some time running, I noticed a great increase in CPU load on one of the instances, even though client request throughput remained the same. Looking at the server threads, I see that two of them are constantly running and are the cause of this increase.

Checking a heap dump I see that both of these threads are running selectors with postponed TCPNIOConnection tasks, which in turn do not seem to be processed correctly.

After turning on Grizzly logs I see lots of entries like the one below:

[2014-09-19T14:07:42.393+0000] [glassfish 4.1] [FINEST] [] [org.glassfish.grizzly.ProcessorExecutor] [tid: _ThreadID=105 _ThreadName=http-listener-1-kernel(8) SelectorRunner] [timeMillis: 1411135662393] [levelValue: 300] [CLASSNAME: org.glassfish.grizzly.ProcessorExecutor] [METHODNAME: execute] [[
executing connection (TCPNIOConnection{localSocketAddress={/10.248.232.5:28080}, peerSocketAddress={/10.248.233.91:42864}}). IOEvent=WRITE processor=org.glassfish.grizzly.filterchain.DefaultFilterChain@3d4f0b0b]]

Where the first IP address is the Glassfish instance and the second the nginx load balancer.

Checking the nginx machine I see no open sockets with the listed number.

My theory is that nginx closed the socket for some reason (a timeout?) and now Glassfish is unable to connect to it. The task will keep running and consuming resources until a server restart (as selectors with postponed tasks call the non-blocking selectNow() method).

Is this a bug or is my configuration not supported?

Thank you!



 Comments   
Comment by oleksiys [ 22/Sep/14 ]

I added more logging to investigate the problem.
Can you pls. apply the patched jar [1] and enable FINEST logging for org.glassfish.grizzly.nio.DefaultSelectorHandler
After you reproduce the problem - pls. share the server.log* files.

Thank you!
[1] https://dl.dropboxusercontent.com/u/7319744/glassfish-21211/nucleus-grizzly-all.jar

Comment by afcarv [ 22/Sep/14 ]

I have done as requested and restarted the cluster but it may take a while for the issue to manifest itself as I'm not sure what toggles it. Once it does, though, I'll get the log files.

Thanks!

Comment by oleksiys [ 23/Sep/14 ]

Thank you!

Comment by afcarv [ 24/Sep/14 ]

Ok, the event happened again and another thread is looping. Unfortunately due to the heavy log rotation, I could not get the beginning of it. Server log can be downloaded below but it's mostly composed of the same entries.

https://dl.dropboxusercontent.com/u/106573849/server.zip

Let me know how you want to proceed, thanks!

Comment by oleksiys [ 26/Sep/14 ]

Can you pls. apply this patch
https://dl.dropboxusercontent.com/u/7319744/glassfish-21211/nucleus-grizzly-all.jar

it has to give us more stacktraces logged, thank you.

Comment by afcarv [ 26/Sep/14 ]

Patch applied. Do I keep the same logger settings as before? Will post log files once it happens again, thanks!

Comment by oleksiys [ 27/Sep/14 ]

Yep, same logger settings. Thank you!

Comment by afcarv [ 27/Sep/14 ]

Happened again with one more thread. Logged some more info now, though I still couldn't get the beginning of it.

https://dl.dropboxusercontent.com/u/106573849/server-27-09.zip

Thanks

Comment by oleksiys [ 30/Sep/14 ]

pls. try this patch [1], let me know if you still see the problem.

Thank you!

[1] https://dl.dropboxusercontent.com/u/7319744/glassfish-21211/nucleus-grizzly-all.jar

Comment by afcarv [ 01/Oct/14 ]

Ok, applied the patch - will let it run for a few days and let you know if it happens again.

Thanks!

Comment by afcarv [ 02/Oct/14 ]

Looks like it's still happening - got another thread looping. Is logging still in place? Thanks

Comment by lcmoreno [ 02/Oct/14 ]

Not sure if this is the same problem I'm having, but it seems like it is. I found on Stackoverflow another user reporting this high cpu usage the same way I experienced: http://stackoverflow.com/questions/25922809/glassfish-4-grizzly-threads-heavy-cpu-usage

It looks like it happens when this sun.nio.ch.SelectorImpl.selectNow is called. When I have threads using just sun.nio.ch.SelectorImpl.select I don't have the high cpu usage problem.

Comment by oleksiys [ 03/Oct/14 ]

Can you pls. apply this patch [1].
Please enable FINEST logging for "org.glassfish.grizzly.nio.AbstractNIOAsyncQueueWriter". The logging you changed earlier could be disabled.
Now you'll see even more logging messages

Thank you!

[1] https://dl.dropboxusercontent.com/u/7319744/glassfish-21211/nucleus-grizzly-all.jar

Comment by afcarv [ 06/Oct/14 ]

Patch applied, will let you know once I have the new data, thanks!

@lcmoreno

Likely the same issue, another user reported it at http://stackoverflow.com/questions/26107101/glassfish-4-1-high-cpu-usage

Comment by dirkroets [ 06/Oct/14 ]

We've also been experiencing this issue since we upgraded to Glassfish 4.1b13 last week. We're running single (non clustered) Glassfish instances behind nginx proxies.

I've applied the latest patch and enabled logging on one of the dev servers in the office. Will post the logs here when it reoccurs.

Comment by oleksiys [ 06/Oct/14 ]

thank you guys. pls. post the log files once you experience the problem again.

Comment by dirkroets [ 06/Oct/14 ]

Log files have been uploaded to dropbox - https://dl.dropboxusercontent.com/u/46663766/logs.zip

As far as I can see all the entries in the first log file in the sequence - server.log_2014-10-06T22-12-38 - were logged before the utilisation went up. Then around 2014-10-06T22:13:16.417+0200 in the log file - server.log_2014-10-06T22-13-16 - the problem starts appearing.

Comment by oleksiys [ 07/Oct/14 ]

I have a suspect now, can you pls. run this patch [1], it should clarify if I'm right or not.

Thank you!

[1] https://dl.dropboxusercontent.com/u/7319744/glassfish-4.1/nucleus-grizzly-all.jar

Comment by dirkroets [ 07/Oct/14 ]

Patch has been loaded on one of our dev servers. Will monitor throughout the day and post the logs!

Comment by dirkroets [ 07/Oct/14 ]

Ok, log files have been uploaded to dropbox again - https://dl.dropboxusercontent.com/u/46663766/logs-2.zip

The first signs of the issue I can see are in server.log_2014-10-07T10-22-36. It took a while for me to realise that the issue had reappeared, so there are plenty more log files, but they just seem to repeat the same entry.

Comment by oleksiys [ 08/Oct/14 ]

Ok, please try this patch [1], hopefully it will resolve the problem.

Thank you.

[1] https://dl.dropboxusercontent.com/u/7319744/glassfish-21211/nucleus-grizzly-all.jar

Comment by dirkroets [ 08/Oct/14 ]

Thanks!

I've loaded the patch onto one of the servers. Will monitor and report back.

Comment by afcarv [ 08/Oct/14 ]

I've applied this latest patch as well, will let you know if it happens again - thanks

Comment by dirkroets [ 09/Oct/14 ]

Yesterday I applied the patch to one of our dev servers and to one of our production servers. The dev server is still running smoothly, but the issue has reappeared on the production server. It is possible that the dev server just doesn't see enough traffic for the issue to have been triggered again. In the past the dev server has also sometimes run smoothly for a day without the issue appearing.

I can turn on logging on the production servers, but unfortunately restarts have to be done between 22:00 and 06:00 CAT if I have to patch again. For the moment I'm going to ask someone in my team to try and reproduce on the dev server.

Comment by oleksiys [ 09/Oct/14 ]

@dirkroets yes, pls. turn on FINEST logging on:
"org.glassfish.grizzly.nio.AbstractNIOAsyncQueueWriter"
"org.glassfish.grizzly.nio.TCPNIOAsyncQueueWriter"

Comment by dirkroets [ 10/Oct/14 ]

Here are the server log files for a server running the latest patch.

We've now succeeded in reproducing this issue "at will" on one of our dev servers. The setup is a very basic nginx proxy that forwards http requests to a GF4.1 server running a simple web application with at least one page that takes longer than 1s to process. The nginx configuration for the hostname being forwarded looks as follows:

server {
  listen          80;
  server_name     gf.localdomain;
  location / {
    proxy_pass              http://192.168.200.7:8080;
    proxy_set_header        X-Real-IP $remote_addr;
    proxy_set_header        X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header        X-Forwarded-Host $host;
    proxy_set_header        X-Forwarded-Server $host;
    proxy_set_header        Host $http_host;
    proxy_connect_timeout         1s;
    proxy_send_timeout         1s;
    proxy_read_timeout         1s;
    client_max_body_size    20M;
  }
}

By setting the connect, send and read timeouts to 1 second each, nginx is guaranteed to treat every request for a page that takes longer than 1 second to respond as a timeout. With this configuration we can trigger the issue consistently by requesting a simple page from a servlet that has Thread.sleep(2000); a sketch of such a servlet is below.
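
For reference, the delayed servlet is essentially this (a minimal sketch; the class name and URL pattern are illustrative):

// Minimal sketch of a slow servlet that exceeds nginx's 1s proxy timeouts.
@javax.servlet.annotation.WebServlet("/slow")
public class SlowServlet extends javax.servlet.http.HttpServlet {
    @Override
    protected void doGet(javax.servlet.http.HttpServletRequest req,
                         javax.servlet.http.HttpServletResponse resp)
            throws java.io.IOException {
        try {
            Thread.sleep(2000); // longer than the 1s proxy_read_timeout above
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        resp.getWriter().println("done");
    }
}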

https://dl.dropboxusercontent.com/u/46663766/logs-3.zip

EDIT: It turns out that it is not quite as simple to reproduce this issue as I thought. This morning we were able to easily reproduce on the test server, but after a restart of the test GF server and the nginx proxy server we cannot reproduce again with the simple Thread.sleep servlet. If we do manage to consistently reproduce it I'll post the web app here.

Comment by afcarv [ 10/Oct/14 ]

@dirkroets

Out of curiosity, which nginx version are you using? Have you tried to set proxy_ignore_client_abort to check if it solves or mitigates the issue? I think I may try that to see what happens. Thanks

Comment by dirkroets [ 10/Oct/14 ]

@afcarv , we're running nginx 1.4.6 and 1.6 in front of the two servers where this issue most frequently occurs. We have not tried proxy_ignore_client_abort yet. I will test that tonight and report back.

Comment by oleksiys [ 11/Oct/14 ]

Unfortunately this log doesn't help much, because I don't see the moment when it started failing; it looks like some part is missing?
Anyway I've updated the patch to add logs [1].

One more update, I was inaccurate with one logging domain, the correct ones are:
"org.glassfish.grizzly.nio.AbstractNIOAsyncQueueWriter"
"org.glassfish.grizzly.nio.transport.TCPNIOAsyncQueueWriter"

Thank you!

[1] https://dl.dropboxusercontent.com/u/7319744/glassfish-21211/nucleus-grizzly-all.jar

Comment by dirkroets [ 13/Oct/14 ]

I've loaded the new patch and fixed the logging. Will upload the logs as soon as this reappears!

Comment by dirkroets [ 13/Oct/14 ]

Ok, that was quite quick. I've uploaded the latest logs to dropbox and provided the link below this comment.

The issue was still triggered using an nginx configuration without proxy_ignore_client_abort. I've now added the directive to nginx to see if it still gets triggered.

[1] https://dl.dropboxusercontent.com/u/46663766/logs-4.zip

Comment by dirkroets [ 13/Oct/14 ]

We've been able to reproduce this issue again with proxy_ignore_client_abort set for all locations except for locations used exclusively by websockets.

[1] https://dl.dropboxusercontent.com/u/46663766/logs-5.zip

Comment by oleksiys [ 13/Oct/14 ]

Thank you!
One more attempt to fix it [1]. If it doesn't work, please attach the logs.

[1] https://dl.dropboxusercontent.com/u/7319744/glassfish-4.1/nucleus-grizzly-all.jar

Comment by dirkroets [ 14/Oct/14 ]

Thanks! I've loaded the patch onto one of our dev servers. Will try to trigger the issue and report back.

Comment by dirkroets [ 15/Oct/14 ]

Just a quick update...

We've not been able to trigger this issue again on the dev server since I loaded the patch. It is still a bit inconclusive though, because there were other times when we could also not reliably reproduce the issue on the dev server within 24 hours. We'll keep putting some effort into this on the dev server today and tomorrow, and if everything seems good I'll give this a try on one of our production servers from Thursday night.

Comment by oleksiys [ 15/Oct/14 ]

thank you for the update!

Comment by afcarv [ 15/Oct/14 ]

I've applied the new patch as well, will post results later. Thanks!

Comment by dirkroets [ 16/Oct/14 ]

Everything is still looking good on the dev server!

I'm going to try the patch on one of our production servers in a couple of hours from now. Although we've not been able to directly trigger the issue on our production servers in the past, the issue has shown up consistently within a few minutes of starting up any of the production servers, presumably because of the higher volumes of traffic. I should therefore be able to say tomorrow with quite a lot of certainty whether the latest patch has solved the problem for us.

Comment by oleksiys [ 16/Oct/14 ]

sounds good! Thank you for the update!

Comment by dirkroets [ 17/Oct/14 ]

After having closely monitored all the servers where we've loaded the latest patch I'm happy to report that the issue hasn't reappeared on any of these servers. We are seeing a great improvement in CPU utilisation as well as a great improvement in the number of requests timing out between our nginx proxies and app servers.

The bug is fixed as far as I'm concerned. Thanks, @oleksiys !

Comment by afcarv [ 17/Oct/14 ]

Looking good to me as well – no issues so far; will leave it running throughout the weekend and check again on Monday. Thanks Oleksiy!

Comment by dirkroets [ 20/Oct/14 ]

We've now deployed the patch to all our production servers and the issue hasn't reappeared on any of those servers.

Comment by afcarv [ 20/Oct/14 ]

Yup, no sign of the issue yet - pretty sure this has been fixed. Thanks!

Comment by oleksiys [ 20/Oct/14 ]

grizzly issue
https://java.net/jira/browse/GRIZZLY-1712

Comment by afcarv [ 22/Oct/14 ]

By the way - previously, given enough time, instances would crash with an exception like the one below:

[2014-10-22T02:42:30.987+0000] [glassfish 4.1] [SEVERE] [] [org.glassfish.grizzly.nio.SelectorRunner] [tid: _ThreadID=98 _ThreadName=http-listener-1-kernel(4) SelectorRunner] [timeMillis: 1413945750987] [levelValue: 1000] [[
doSelect exception
java.util.concurrent.RejectedExecutionException: The thread pool's task queue is full, limit: 4096
at org.glassfish.grizzly.threadpool.AbstractThreadPool.onTaskQueueOverflow(AbstractThreadPool.java:466)
at org.glassfish.grizzly.threadpool.QueueLimitedThreadPool.execute(QueueLimitedThreadPool.java:81)
at org.glassfish.grizzly.threadpool.GrizzlyExecutorService.execute(GrizzlyExecutorService.java:161)
at org.glassfish.grizzly.strategies.WorkerThreadIOStrategy.executeIoEvent(WorkerThreadIOStrategy.java:100)
at org.glassfish.grizzly.strategies.AbstractIOStrategy.executeIoEvent(AbstractIOStrategy.java:89)
at org.glassfish.grizzly.nio.SelectorRunner.iterateKeyEvents(SelectorRunner.java:414)
at org.glassfish.grizzly.nio.SelectorRunner.iterateKeys(SelectorRunner.java:383)
at org.glassfish.grizzly.nio.SelectorRunner.doSelect(SelectorRunner.java:347)
at org.glassfish.grizzly.nio.SelectorRunner.run(SelectorRunner.java:278)
at org.glassfish.grizzly.threadpool.AbstractThreadPool$Worker.doWork(AbstractThreadPool.java:565)
at org.glassfish.grizzly.threadpool.AbstractThreadPool$Worker.run(AbstractThreadPool.java:545)
at java.lang.Thread.run(Thread.java:745)
]]

Thought this was due to the pile of TCPNIOConnection tasks that kept accumulating, so I didn't mind it too much, but it looks like this is still happening, even after the patch, so I'm not sure if it's related to this issue now. Haven't tried to change max queue size yet.

Comment by oleksiys [ 22/Oct/14 ]

I think it may not be related to the fixed issue.
You can open a new issue and we can take a look at it.

Comment by afcarv [ 22/Oct/14 ]

Ok, thanks for the clarification - will try to get some more details first.





[GLASSFISH-21246] Grizzly thread pool waiting on LinkedTransferQueue hangs application Created: 03/Nov/14  Updated: 10/Nov/14

Status: Open
Project: glassfish
Component/s: grizzly-kernel
Affects Version/s: 4.1_b10
Fix Version/s: None

Type: Bug Priority: Critical
Reporter: afcarv Assignee: oleksiys
Resolution: Unresolved Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified
Environment:

Glassfish 4.1_b13, JDK 1.7u67, Linux CentOS 3.10.48 x64



 Description   

Elaborating on the issue I posted on GLASSFISH-21211.

After some time running, http-listener-1 thread pool stops responding. Checking the logs, I see multiple messages like the one below:

[2014-11-03T18:50:47.114+0000] [glassfish 4.1] [SEVERE] [] [org.glassfish.grizzly.nio.SelectorRunner] [tid: _ThreadID=60 _ThreadName=http-listener-1-kernel(1) SelectorRunner] [timeMillis: 1415040647114] [levelValue: 1000] [[
  doSelect exception
java.util.concurrent.RejectedExecutionException: The thread pool's task queue is full, limit: 4096
        at org.glassfish.grizzly.threadpool.AbstractThreadPool.onTaskQueueOverflow(AbstractThreadPool.java:464)
        at org.glassfish.grizzly.threadpool.QueueLimitedThreadPool.execute(QueueLimitedThreadPool.java:81)
        at org.glassfish.grizzly.threadpool.GrizzlyExecutorService.execute(GrizzlyExecutorService.java:161)
        at org.glassfish.grizzly.strategies.WorkerThreadIOStrategy.executeIoEvent(WorkerThreadIOStrategy.java:100)
        at org.glassfish.grizzly.strategies.AbstractIOStrategy.executeIoEvent(AbstractIOStrategy.java:89)
        at org.glassfish.grizzly.nio.SelectorRunner.iterateKeyEvents(SelectorRunner.java:414)
        at org.glassfish.grizzly.nio.SelectorRunner.iterateKeys(SelectorRunner.java:383)
        at org.glassfish.grizzly.nio.SelectorRunner.doSelect(SelectorRunner.java:347)
        at org.glassfish.grizzly.nio.SelectorRunner.run(SelectorRunner.java:278)
        at org.glassfish.grizzly.threadpool.AbstractThreadPool$Worker.doWork(AbstractThreadPool.java:565)
        at org.glassfish.grizzly.threadpool.AbstractThreadPool$Worker.run(AbstractThreadPool.java:545)
        at java.lang.Thread.run(Thread.java:744)
]]

I then took a thread dump which shows all threads in http-listener-1 in a state like the below:

"http-listener-1(31)" daemon prio=10 tid=0x00007ff19422b000 nid=0x6c61 waiting on condition [0x00007ff1f6eed000]
   java.lang.Thread.State: WAITING (parking)
        at sun.misc.Unsafe.park(Native Method)
        - parking to wait for  <0x00000006036fd360> (a java.util.concurrent.LinkedTransferQueue)
        at java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
        at java.util.concurrent.LinkedTransferQueue.awaitMatch(LinkedTransferQueue.java:735)
        at java.util.concurrent.LinkedTransferQueue.xfer(LinkedTransferQueue.java:644)
        at java.util.concurrent.LinkedTransferQueue.take(LinkedTransferQueue.java:1137)
        at org.glassfish.grizzly.threadpool.FixedThreadPool$BasicWorker.getTask(FixedThreadPool.java:105)
        at org.glassfish.grizzly.threadpool.AbstractThreadPool$Worker.doWork(AbstractThreadPool.java:557)
        at org.glassfish.grizzly.threadpool.AbstractThreadPool$Worker.run(AbstractThreadPool.java:545)
        at java.lang.Thread.run(Thread.java:744)

They seem to be waiting on the LinkedTransferQueue take() method, which appears not to be returning (waiting for some item to arrive in the queue?). I also took a memory dump that I can look into if someone points me in the right direction.

Thought this could be related to GRIZZLY-1711 but after applying the latest version (2.3.17-SNAPSHOT) this behavior is still happening.

Let me know if I should get more data - thanks!



 Comments   
Comment by davidwinters1980 [ 03/Nov/14 ]

This appears to be occurring as a result of the http thread pool reaching its limit of 4096.

As per the default domain.xml setting:

<thread-pools>
<thread-pool name="admin-thread-pool" max-thread-pool-size="50" max-queue-size="256"></thread-pool>
<thread-pool name="http-thread-pool" max-queue-size="4096"></thread-pool>
<thread-pool name="thread-pool-1" max-thread-pool-size="200"/>
</thread-pools>

You could configure this to -1 or unlimited, but this will likely lead to other issues such as out-of-memory errors.

It would appear as though tasks are being added to this queue faster than they can be processed. I would investigate why this is occurring. If you could take a number of thread dumps over a period of time that capture this issue occurring, that would be useful.

Furthermore, it may be worth increasing the number of your http request processing threads, as this will allow more tasks to be handled concurrently off the http thread task queue, so it may not reach the 4096 limit as quickly. What is the current value of the number of http request processing threads?

The issue you have here is very dependent on the number of http requests being pushed to the http task pool and how quickly these requests are handled, so appropriate tuning of the http parameters may resolve this issue (see the example below). If this fails, then looking at the state of all threads running over a period of time should give you a hint as to why requests are taking so long to process.
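
For illustration, those values can be changed with asadmin's set command (a sketch; the dotted names follow the domain.xml structure above, and 64 is an assumed target value to adapt to your load):

asadmin set configs.config.server-config.thread-pools.thread-pool.http-thread-pool.max-thread-pool-size=64
asadmin set configs.config.server-config.thread-pools.thread-pool.http-thread-pool.max-queue-size=4096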

Comment by afcarv [ 03/Nov/14 ]

I don't think it's a configuration issue as the environment is able to handle much heavier loads - it may be due to specific and temporary environmental reasons (network latency, database slowdown, etc) but you'd think it would resume work as soon as this is normalized. This does not happen (waited for some hours to be sure), even if the consumer is stopped and there's no inbound traffic anymore - the only way to bring it back online is with a server/cluster restart. I did notice a lot of connections in the CLOSE_WAIT state in the server, though.

Haven't tried to raise the queue limit or remove it yet, but I suspect it would only delay the occurrence or eventually reach an OOM error.

I see no obvious reason for this looking at the thread pool dump, but will post it later for analysis. Thanks!

Comment by afcarv [ 04/Nov/14 ]

Please find attached the full thread dump and the server log file. The dump was taken about 30 minutes after the issue manifested itself - you can see all threads in the http-listener-1-kernel pool reach the queue limit and stop responding. http-thread-pool contains 32 threads; there are 8 acceptor threads.

Interestingly, http-listener-2 (SSL) continues to respond normally. http-listener-1 hung and required a server restart. The configuration is a 2-instance cluster with an nginx load balancer in front; there are Jersey applications deployed serving RESTful webservices. The configuration can handle about 5x the average load on the server - no increased throughput was observed leading up to the issue, which persisted after stopping all inbound traffic.

I will schedule a cron job to take a thread dump every 5 minutes to check on any odd behavior leading to the issue - thanks!

Thread dump: https://www.dropbox.com/s/qejdzhwejlv1zzp/threadump.txt?dl=0
Server log: https://www.dropbox.com/s/rbu24cep9y25aol/server.log?dl=0

Comment by afcarv [ 04/Nov/14 ]

Some more info I got from the memory dump -

I see lots of instances of TCPNIOConnection (1500+) in what appears to be a closing state; as the latest snapshot was applied, I can see these with a CloseReason of "LOCALLY". Might this explain the connections in the CLOSE_WAIT state I saw? Is it possible that nginx closed the socket before a FIN packet could be sent from the server, and now the server is not able to end the connection properly? Not sure if this could be just a consequence of some other issue, though.

Comment by oleksiys [ 06/Nov/14 ]

Which Grizzly version is used in 4.1_b10?

Thank you.

Comment by afcarv [ 06/Nov/14 ]

We're using b13 (couldn't find the tag for it so I selected the closest), but I believe it's the same version - 2.3.15.

Currently, 2.3.17-SNAPSHOT is applied.

By the way - we found out that what triggers this behavior is DB performance degradation, causing requests to queue up and eventually reach the limit. No additional info on why the queue isn't being drained, though.

Comment by davidwinters1980 [ 06/Nov/14 ]

Could you take a number of thread dumps, say every 5 minutes, so that we can compare the different thread dump files leading up to the hang? Also, could you send us the contents of the <transports> element, which should contain the different tcp parameters used by Grizzly.

<transports>
<transport byte-buffer-type="HEAP" display-configuration="true" name="tcp" .....></transport>
</transports>

Useful to know that db degradation issues are causing the request queue to fill up. It might be useful to get some debug grizzly logging on here: org.glassfish.grizzly.level=FINEST (see the example below).
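
For illustration, that level can be enabled with asadmin's set-log-levels command (it applies to the whole org.glassfish.grizzly package):

asadmin set-log-levels org.glassfish.grizzly=FINEST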

Thanks.

Comment by afcarv [ 10/Nov/14 ]

I had some monitoring in place but we've been focusing on fixing the db degradation so the issue won't be triggered - so far we've been successful, so it looks unlikely I'll be able to collect more data in the production system.

What I'm going to try next is to create a sample application and configuration that can recreate the issue - it doesn't look too hard; just a sample RESTful webservice that performs a db query with an artificial delay in the response, a small db connection pool, and a sample JMeter test script with http requests that time out before the app has a chance to respond. That should replicate the conditions pretty closely (I don't know if nginx in front plays a role in this or not, so I will try without it first); a sketch of the resource is below.
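
Something along these lines (a sketch; the path and the fixed delay are illustrative - the real version would run a slow db query against the small connection pool):

// Sketch of the delayed REST resource for the reproducer.
@javax.ws.rs.Path("/slow")
public class SlowResource {
    @javax.ws.rs.GET
    @javax.ws.rs.Produces("text/plain")
    public String slow() throws InterruptedException {
        Thread.sleep(5000); // longer than the JMeter client timeout
        return "done";
    }
}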

Will report on any news, thanks!

Comment by oleksiys [ 10/Nov/14 ]

pls. try to patch GFv4.1 with this patch [1].
I need server.log output message(s) like:
"The queue is overflowed. queuePermits=......."

Thank you.

[1] https://dl.dropboxusercontent.com/u/7319744/glassfish-21246/nucleus-grizzly-all.jar





[GLASSFISH-21202] Glassfish/Grizzly Memory Leak in Glassfish 4? Created: 16/Sep/14  Updated: 12/Dec/14

Status: Open
Project: glassfish
Component/s: grizzly-kernel
Affects Version/s: None
Fix Version/s: None

Type: Bug Priority: Critical
Reporter: talk2umakanth Assignee: oleksiys
Resolution: Unresolved Votes: 2
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified
Environment:

GlassFish Server Open Source Edition 4.0 (build 89)
Red Hat Enterprise Linux Server release 5.9 Beta (Tikanga)
Linux 2.6.18-339.el5 #1 SMP Mon Aug 27 15:42:17 EDT 2012 x86_64 x86_64 x86_64 GNU/Linux



 Description   

We're facing very frequent out-of-memory problems in our Glassfish installation of an enterprise web application.

The application is running in a cluster of 18 JVMs. The initial heap size was 1 GB, which resulted in the JVMs going down almost every hour; we have increased the heap size to 1.5 GB, which makes the JVMs crash every 3-4 hours.

The heap dump does not show any application objects at all whenever the JVMs crash, and shows the same objects from Glassfish/Grizzly consuming the majority of the memory every time. The object below is found to consume more than 65% of the heap in most of the heap dumps.

762,277,600 bytes (52.71 %) of Java heap is used by 5,632,379 instances of org/glassfish/grizzly/memory/HeapMemoryManager$TrimmableHeapBuffer

The following error is also seen in the logs frequently (which is due to a malformed URL having && in the request query string?). Can this be a potential cause of the problem? Running a 12-hour load test using a similar URL does not produce out-of-memory errors in non-production systems.

GRIZZLY0155: Invalid chunk starting at byte [387] and ending at byte [387] with a value of [null] ignored]]

The error is not reproducible in non-production environment but heap dumps from production can be provided if required.

  • Are there any known memory leaks in Glassfish 4.0 (Build 89)/Grizzly 2.3.1?
  • What does the object HeapMemoryManager$TrimmableHeapBuffer consist of? Is there any way it can be avoided if there are no known/feasible fixes for Glassfish or Grizzly?

Additional Information (may not be relevant)

  • Each JVM is fronted by an Apache 2.4.9 HTTP Server using mod_jk to communicate with glassfish, which is in turn balanced by a Netscale load balancer.


 Comments   
Comment by oleksiys [ 18/Sep/14 ]

Heap dump tree could help us understand the problem.

Thank you.

Comment by talk2umakanth [ 18/Sep/14 ]

You can see the heap tree in the image: http://i.stack.imgur.com/tJEQV.png

All the heap dumps show the same objects leaking, as shown in the image.

Comment by oleksiys [ 18/Sep/14 ]

Can you pls. apply the following patch (2 jars) [1] and [2] and enable the FINER logging level for org.glassfish.grizzly.http.io.InputBuffer.
When/if it fails, pls. attach the thread dump and logs.

Thank you.

[1] https://dl.dropboxusercontent.com/u/7319744/glassfish-4.0/nucleus-grizzly-all.jar
[2] https://dl.dropboxusercontent.com/u/7319744/glassfish-4.0/glassfish-grizzly-extra-all.jar

Comment by anshul.kumar [ 18/Sep/14 ]

Thanks oleksiys. Does this patch contain any fixes as well, or is it just to gather additional logging information? We just want to make sure before promoting it to the production system. It has been tested successfully in a non-production environment.

Comment by oleksiys [ 19/Sep/14 ]

It does have different glassfish4 fixes, not just logging.

Comment by anshul.kumar [ 19/Sep/14 ]

We applied this patch on one node containing 3 JVM instances of the application and the instances are still crashing after applying the patch.

The crashdumps including JVM Log files, thread dumps and the images of heap and permgen space leading up to out of memory of the instances can be downloaded from this link: https://drive.google.com/file/d/0B52890ypA1_bYlR4Snh5bUxLMzg/edit?usp=sharing

Note: The JVM time zone is one hour behind the timestamps that are visible in the images of heap size and permgen size. Another observation is that the JVM crashes just when it tries to resize the permgen space. Max permgen space for the JVM is set to 512 MB. Can this be part of the problem, and should we try setting the min and max perm size to the same value to avoid the permgen resize?
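
For reference, pinning permgen so it never resizes would use the standard HotSpot (JDK 7) options, e.g.:

-XX:PermSize=512m -XX:MaxPermSize=512m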

Comment by oleksiys [ 22/Sep/14 ]

Thank you for the info.
Can you pls. apply this patch [1] and also enable FINEST logging for
org.glassfish.grizzly.http.util.Parameters?

Thank you!

[1] https://dl.dropboxusercontent.com/u/7319744/glassfish-21202/nucleus-grizzly-all.jar

Comment by anshul.kumar [ 22/Sep/14 ]

Thank you for the information.

Do you want to keep the logging level of org.glassfish.grizzly.http.io.InputBuffer at FINER as well or we should switch that logger off now?

Comment by oleksiys [ 23/Sep/14 ]

please keep it as well.

thank you.

Comment by anshul.kumar [ 23/Sep/14 ]

The crash dumps of two instance crashes from today after applying the updated jar file can be downloaded from here:
https://drive.google.com/file/d/0B52890ypA1_bbEFpLS1zZkxpbzg/edit?usp=sharing

(Thread dumps are also included in the zip file). If you need more data, please let me know.

Thanks

Comment by anshul.kumar [ 25/Sep/14 ]

Is there anything that could be found in the latest crash dumps? Should we collect any other data, as the instances are still crashing every 2-3 hours?

Comment by oleksiys [ 25/Sep/14 ]

The logs look fine; the only piece of information missing (from the very latest log) is thread http-listener-3(44). From the thread dump I see this thread is constantly reading data, but I don't see any logging activity from this thread in the server.log* files.
For now it looks like the problem might be caused by a huge HTTP POST request which represents form parameters.

Can you pls. share the raw heap dump file so I can examine it locally?

Thank you!

Comment by anshul.kumar [ 26/Sep/14 ]

Thank you! A crashdump including the server.log* files, thread dumps and heapdump from the latest crash can be downloaded from here: https://drive.google.com/file/d/0B52890ypA1_bVzBvaGtjTHNYTE0/edit?usp=sharing

Please let me know if you need any additional data. I am going to reduce the maximum post size for the glassfish instances to 1-2 MB and will enable the option to print the request size in HTTP Server (apache) to see if there was a big POST request leading up to the crash.

Comment by anshul.kumar [ 26/Sep/14 ]

I enabled the request size directive in HTTP Server. The maximum size of GET or POST data is 171 KB leading up to the latest crash. Is that POST size big enough for the JVM to consume 1.5 GB of memory, and should I try reducing this size?

You can see the log of requests for the latest crash by downloading the spreadsheet from here: https://drive.google.com/file/d/0B52890ypA1_beDZReldlWV9MLTA/edit?usp=sharing

A common pattern I have noticed is that there is one request that always takes more than 50 minutes to process with a return code of 502 leading up to the crash. The size of this request is very small (131 bytes). This request can be seen starting at 26-Sep-14 10:26:34, row number 26721 in the spreadsheet. The request completed eventually with a return code of 502 when the instance was killed and restarted. I hope this might give some extra details about the problem.

When the instance is healthy, the same request takes only 1-2 seconds.

Comment by oleksiys [ 27/Sep/14 ]

I think I found the problem, can you pls. try this patch (2 files).

https://dl.dropboxusercontent.com/u/7319744/glassfish-21202/nucleus-grizzly-all.jar
https://dl.dropboxusercontent.com/u/7319744/glassfish-21202/glassfish-grizzly-extra-all.jar

Pls. let me know if it works for you.

Thanks!

Comment by anshul.kumar [ 27/Sep/14 ]

This is great news, Alexis. Do we need to enable any logging after this patch? I had switched off the loggers after some time as they create a huge log backup within hours.

Comment by oleksiys [ 27/Sep/14 ]

No, you don't need any extra logging now, it should just work.

Comment by anshul.kumar [ 28/Sep/14 ]

So far so good. I patched one node with three instances on it and the instances have been very stable (never used more than 550 MB of heap), while the other instances that have not been patched are still crashing. I will update on Monday evening, as we do not have traffic over weekends comparable to weekdays.

Comment by anshul.kumar [ 05/Oct/14 ]

Alexis - I have slowly rolled out this patch to our entire environment and the JVM instances have been very stable following the patch. The patch has resolved the problem permanently and this issue can be closed.

Very curious to know why we were not able to recreate the problem in our pre-prod application using load tests though.

Thank you for caring.

Comment by oleksiys [ 06/Oct/14 ]

I'm glad the fix works.
I assume your testing env. also has Apache and GF backend communicating via AJP protocol.
Try to reproduce the following usecase: send an HTTP POST message to Apache which contains the entire HTTP header (with Content-Length) and half of the POST payload, then close the HTTP connection... Something like:

$ telnet localhost 80
POST /xxxxx HTTP/1.1
Host: localhost:80
Content-Length: 100
Content-Type: application/x-www-form-urlencoded

aaa=bbb&^C^C^C
Comment by smillidge-c2b2 [ 06/Oct/14 ]

Is there a Grizzly JIRA I can track for this if the fix is in Grizzly?

Comment by oleksiys [ 06/Oct/14 ]

I just created one here
https://java.net/jira/browse/GRIZZLY-1709

Comment by anshul.kumar [ 07/Oct/14 ]

Will this fix be included in any of the future Glassfish 4.1 release?

Comment by oleksiys [ 07/Oct/14 ]

Future - yes, but 4.1 has been released already. I'll prepare a patch for it separately.

Comment by kallie82 [ 09/Oct/14 ]

Has a patch been prepared perhaps? We are running into what seems to be the same issue on GlassFish Server Open Source Edition 4.1 (build 13).
Thanks!

Comment by anshul.kumar [ 20/Oct/14 ]

Alexis - can you please provide a patch for GF 4.1 for this issue as well? I have a few pending upgrades to the latest glassfish version.

Comment by oleksiys [ 12/Dec/14 ]

here is the patch for 4.1
https://dl.dropboxusercontent.com/u/7319744/glassfish-4.1/glassfishv41-patch.zip





[GLASSFISH-6371] Be able to use EJB3 using a http tunnel Created: 30/Sep/08  Updated: 01/May/13

Status: Open
Project: glassfish
Component/s: grizzly-kernel
Affects Version/s: V3
Fix Version/s: future release

Type: New Feature Priority: Major
Reporter: batmat Assignee: oleksiys
Resolution: Unresolved Votes: 1
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified
Environment:

Operating System: All
Platform: All


Issuezilla Id: 6,371

 Description   

Hi,

After having considered migration to EJB3, we realize that GF doesn't have an
"http-invoker" feature.

In fact, we'd like to be able to tunnel all our calls to the server through an http one.

Ideally, the client would need nothing other than http to (1) resolve the remote ejb reference and (2) call it. To sum up, it would be great if GF had the corresponding feature of JBoss (e.g. see http://wiki.jboss.org/wiki/Accessing_EJB3s_over_HTTP_HTTPS or http://wiki.jboss.org/wiki/EJBOverHTTPWithUnifiedInvoker).

At least, we'd be interested in knowing how we could inject, say, our own transport classes (integrating the Spring http-invoker), and we'd also be glad to know whether there's any chance it gets into GF if we provide a patch for it.

Cheers.
Thanks for your work, GF is already a masterpiece.



 Comments   
Comment by km [ 01/Oct/08 ]

Looks like this is an interesting feature request. Thanks for filing it as such.
If I understand it correctly, you need some standard servlets that will tunnel
EJB calls via HTTP. This is useful for all the applications that have EJB
components, but don't have Web components that front end those EJB components.

IMO, JBoss makes you do too many configuration changes for this to be achieved.

Comment by batmat [ 01/Oct/08 ]

> IMO, JBoss makes you do too many configuration changes for this to be achieved.
I totally, fully, wholeheartedly agree. This configuration is a nightmare.

What I would dream of is being able to configure a glassfish out of the box using, say, a particular setup-firewallconstrained[cluster].xml and be done (providing the right jndi property for access).

More generally, for me, this feature request is about being able to reduce the number of needed ports, to simplify a firewall configuration. And more importantly, to eliminate the dynamic ports / be able to specify fixed ones. Kind of like the UnifiedInvoker as it exists in JBoss.

If you think it'd be a good thing to file another FR and link it to this one,
please just let me know.
Thanks.

Comment by Bill Shannon [ 01/Oct/08 ]

I know it's not exactly what you asked for, but given that you're separating
the client and server by a firewall, implying that they're more loosely
coupled, have you considered using JAX-WS to expose your EJBs as web services?
Yes, it's a somewhat different programming model, but it might be a better
match for your environment.

Comment by batmat [ 01/Oct/08 ]

Hi Shannon,
Well, at least, Sun coworkers are coherent. See the discussion with Ken Saks
and his proposal here:
https://glassfish.dev.java.net/servlets/ReadMsg?list=ejb&msgNo=527 .

To sum up, no. We're not considering WS programming for many reasons I describe
in the given thread.
Cheers.

Comment by Ken Cavanaugh [ 02/Oct/08 ]

One of the features we have been discussing for GFv3 is port unification for all protocols.
This would mean that HTTP, WS, and IIOP could all be handled over the same port.
Does this solve the requirement, or do you need more than just the same port?
We do occasionally see requests to allow tunneling of RMI-IIOP requests over HTTP,
but so far this has not been sufficiently common to consider implementing it.

Of course, you can also look at invoking EJBs directly from the HTTP path,
as others have discussed.

Comment by batmat [ 02/Oct/08 ]

> This would mean that HTTP, WS, and IIOP could all be handled over the same port. Does this solve the requirement, or do you need more than just the same port?

That's not exactly the same thing, but it would definitely be a damned good thing to convince people here about Java EE servers, and more precisely about the GF part.

So, I guess this could be sufficient.

> We do occasionally see requests to allow tunneling of RMI-IIOP requests over HTTP, but so far this has not been sufficiently common to consider implementing it.
Do you see it as an interesting/acceptable request but don't have time to implement it, or is it just not going to be accepted even if we try to provide a patch that would suit you?

Cheers

Comment by batmat [ 27/Nov/08 ]

> One of the features we have been discussing for GFv3 is port unification for
all protocols. This would mean that HTTP, WS, and IIOP could all be handled over
the same port.

That feature would be really interesting. Isn't it this one:
https://glassfish.dev.java.net/issues/show_bug.cgi?id=4322.

Maybe the current issue should be put as related?

Cheers.

Comment by Tom Mueller [ 23/Jun/10 ]

Kedar no longer on project.

Comment by Tom Mueller [ 06/Mar/12 ]

Bulk update to change fix version to "not determined" for all issues still open but with a fix version for a released version.

Comment by jthoennes [ 26/Mar/12 ]

This is related to the port unification feature that I have also been looking for for a long time now.
See GLASSFISH-4073: Move CORBA transport to Grizzly (https://java.net/jira/browse/GLASSFISH-4073).

It would be great if we could access the ORB for JNDI and EJB access using the Grizzly port unification framework (port finder and port filter). See http://antwerkz.com/blog/port-unification-in-glassfish-3-part-1.

batmat, did you find any workaround so far?

Comment by Tom Mueller [ 04/Jan/13 ]

Moving to EJB component.

Comment by marina vatkina [ 04/Jan/13 ]

You can use EJBs as JAX-RS endpoints. Assigning to the JAX-RS for further evaluation.

Comment by kumara [ 24/Apr/13 ]

Well, programming in JAX-RS was already suggested as a solution, and in that case there is nothing the server has to do; the application needs to change.

The most concrete comment about a solution was the use of port unification to ensure that IIOP traffic is on the same port as http. While this is not exactly an http tunnel, it might be enough for some scenarios.

I am going to request that the grizzly-kernel team add a pointer to the port unification documentation so that anyone referring to this issue has a handy reference on how to configure IIOP traffic on the same port as http.

If port unification is not sufficient, given the availability of the JAX-RS programming model and the better integration between EJB and JAX-RS, the most prudent option is to change the application.

After adding the document pointer, please assign this back to the orb sub-component.

Comment by oleksiys [ 24/Apr/13 ]

Is Corba (IIOP) working on top of Grizzly?
If not then port unification will not work.

Comment by jthoennes [ 25/Apr/13 ]

A related issue is also whether OpenMQ would work using port unification. I think the new OpenMQ 5.0 is based on Grizzly, isn't it?

Comment by oleksiys [ 01/May/13 ]

yes, MQ should work.





[GLASSFISH-11031] ContainerMapper should better handle the empty string root path Created: 13/Nov/09  Updated: 13/Feb/13

Status: Open
Project: glassfish
Component/s: grizzly-kernel
Affects Version/s: 3.1
Fix Version/s: future release

Type: Bug Priority: Major
Reporter: jfarcand Assignee: oleksiys
Resolution: Unresolved Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified
Environment:

Operating System: All
Platform: Macintosh


Issuezilla Id: 11,031
Tags: 3_1-exclude

 Description   

See https://glassfish.dev.java.net/issues/show_bug.cgi?id=10601

The new emptyString support in Servlet 3.0 breaks the ContainerMapper mapping
algorithm. We patched the Mapper to execute in compatible mode for the 3.0
release, but we must fix it properly in ContainerMapper for the 3.0 release.



 Comments   
Comment by jfarcand [ 13/Nov/09 ]

Exclude for that release.

Comment by jfarcand [ 01/Dec/09 ]

Alexey is now the owner

Comment by oleksiys [ 11/Oct/10 ]

target for ms7

Comment by oleksiys [ 11/Nov/10 ]

the fix may cause regression in admin gui, SCF is not the best time to make this change.
moving to 3.2





[GLASSFISH-14128] purge ThreadLocals from Threads that are reinserted into thread pools Created: 21/Oct/10  Updated: 02/May/11

Status: Open
Project: glassfish
Component/s: performance
Affects Version/s: 3.1
Fix Version/s: None

Type: Bug Priority: Major
Reporter: Tom Mueller Assignee: oleksiys
Resolution: Unresolved Votes: 1
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified
Environment:

Operating System: All
Platform: All


Issuezilla Id: 14,128
Tags: 3_1-exclude

 Description   

See this blog entry for what can happen when a ThreadLocal is left around in a thread that is reinserted into a thread pool:

http://weblogs.java.net/blog/jjviana/archive/2010/06/09/dealing-glassfish-301-memory-leak-or-threadlocal-thread-pool-bad-ide

A defensive measure against this error would be to insert code into GlassFish that would remove ThreadLocals from a thread when it is returned to a thread pool. The code to do this is in the blog entry; a sketch of the idea follows.
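
A minimal sketch of that defensive measure (an assumption: it relies on the HotSpot-private Thread field "threadLocals", so it is JVM-specific and unsupported):

import java.lang.reflect.Field;

public final class ThreadLocalPurger {
    // Reflectively reset a worker thread's ThreadLocal map before the thread
    // re-enters the pool. The field name is HotSpot-private, hence the hedge.
    public static void purge(Thread thread) {
        try {
            Field f = Thread.class.getDeclaredField("threadLocals");
            f.setAccessible(true);
            f.set(thread, null); // a fresh map is created lazily on next use
        } catch (Exception e) {
            // different Thread field layout on this JVM; skip purging
        }
    }
}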



 Comments   
Comment by Tom Mueller [ 21/Oct/10 ]

Two issues that seem to be related to this:

issue 6082 - Deploying generates out of memory errors
issue 7075 - Memory leak when deploying/undeploying application through CLI

The underlying issue of the leak should always be fixed. But fixing this issue
could provide a warning when there is a threadlocal that is left within a thread.

Comment by Scott Oaks [ 22/Oct/10 ]

The leak in question is com.sun.enterprise.security.authorize.HandlerData;
reassigning to code owner.

Comment by Shing Wai Chan [ 22/Oct/10 ]

The ThreadLocal of HandlerData is kept by javax.security.jacc.PolicyContext and
com.sun.enterprise.security.authorize.PolicyContextHandlerImpl.

Assign to security team.

Comment by Tom Mueller [ 22/Oct/10 ]

I think you have missed the point of this issue.

The goal of this issue is NOT to find a particular ThreadLocal that is being
left around and clear it. This issue is NOT about a particular leak. That is
what issues 6082 and 7075 are about. If you know of other cases, write separate
issues for those.

The purpose of this issue is to insert some GENERIC ThreadLocal processing that
would operate with every thread that is reinserted into a thread pool to clear
any thread locals that are left around. To see what that processing would look
like, read the blog entry in the description.

Putting this back in the performance subcomponent because this is about
generally improving performance in the area of memory management.

Comment by kumarjayanti [ 20/Nov/10 ]

I made fixes to cleanup the ThreadLocal held in PolicyContext in Web/EJb and WebServices paths.

Comment by oleksiys [ 23/Dec/10 ]

Just wanted to make sure why this bug was assigned to me.
Do you want the Grizzly ThreadPool to clean up thread locals before returning a thread to the pool?

Comment by oleksiys [ 05/Jan/11 ]

IMO, ThreadLocal cleanup is more the responsibility of the modules which use them.

If we are talking about cleaning up all thread locals when a thread is returned to the thread pool, IMO it's not a good approach and may have perf.-related consequences. For example, Grizzly uses thread locals to maintain a (limited) cache of objects which have a complex structure, so it's expensive to create them anew all the time.

Cleaning up just specific thread-local objects is not good either, because it may also cause perf. issues (String comparison is not a cheap operation).

So in general, IMO, it's not a good idea to implement this logic as default thread pool behavior. We may think about adding some hooks to the thread-pool configuration element, where it would be possible to intercept thread obtain/return events to run some custom pre/post processing, but it's too late to add this for 3.1.

Comment by Tom Mueller [ 05/Jan/11 ]

I agree with you that this should not be fixed for 3.1.

However, I disagree that ThreadLocal cleanup should be exclusively the responsibility of the modules which use them.

The behavior of a ThreadPool should be such that when a thread is obtained from the pool, the behavior of the thread should be exactly the same as if the thread was created anew. However, if the thread pool doesn't ensure that there are no ThreadLocal objects within the thread, then that behavior is not provided.

Comment by oleksiys [ 05/Jan/11 ]

IMO we should separate 2 usecases (maybe more):

1) ThreadLocals which are used to drive the application logic. For these ThreadLocals I'd agree that they should be cleaned each time a thread goes back to a pool.

2) ThreadLocals used as an object cache. These thread locals are a pretty cheap way (if used carefully) to maintain an object cache. Access to such a cache is not synchronized -> it's fast and cheap. For objects with a complex internal structure (usually some contexts) it's very useful to have such a cache of limited size (1-2 objects per thread). It could even be weakref'ed to avoid leaks. These thread locals shouldn't be cleaned, because that can lead to perf. regressions.

That's why I'm coming back to my original idea that, IMO, cleanup logic is not something a thread pool implementation should have by default.

Comment by Tim Quinn [ 05/Jan/11 ]

Keep in mind that Java provides no supported, published way to access all of a thread's ThreadLocals generically.

The blog entry referenced in an earlier comment uses introspection to get to a package-visible field of the Thread class.

I agree that, ideally, the thread pool would clean the threads. With no supported way for the pool to do that, we'll need to decide whether to place that responsibility on the module or use an unsupported technique from the thread pool.

Comment by Bill Shannon [ 05/Jan/11 ]

How does a module that creates a thread local know that the thread is being reused
so that it can clean up the thread local? Seems like only the thread pool knows
this.

I agree that it's gross that we would have to look inside the Thread object for a
private field, but it's not clear there's a better and equally reliable approach.
If the thread pool implementation knows that certain thread locals don't need to
be cleaned up, it could skip them. So the Grizzly thread pool could know about the
Grizzly thread locals and skip them if they're safe. We could even come up with a
way to annotate thread local classes that are safe to preserve between different uses
of a thread.

Comment by oleksiys [ 06/Jan/11 ]

I agree, in some cases the module which added the thread local doesn't know about its destiny and cannot do the cleanup at the proper time.

If we rely on the Grizzly thread pool to do the cleanup, maybe it would be worth creating a general RecyclableThread interface [1], which would be implemented by Grizzly threads, plus a utility class + method [2].

The custom code which is interested in adding a thread local and removing it when the thread is about to be recycled would then look like [3]. This way we don't force Grizzly to iterate over all thread locals and perform comparisons, and we don't use reflection.

[1]
interface RecyclableThread {
    void addOnRecycleTask(Runnable task);
    void removeOnRecycleTask(Runnable task);
}

[2]
public class RecyclableThreadUtils {
    public static void addOnRecycleTask(final Runnable task) {
        final Thread t = Thread.currentThread();
        if (t instanceof RecyclableThread) {
            ((RecyclableThread) t).addOnRecycleTask(task);
        }
    }
}

[3]
private final ThreadLocal<String> t = new ThreadLocal<String>();

public void doSomething() {
    t.set("Hello");

    RecyclableThreadUtils.addOnRecycleTask(new Runnable() {
        public void run() {
            t.remove();
        }
    });
}

Comment by Harald Wellmann [ 02/May/11 ]

I've just finished removing memory leaks in my web app deployed on Tomcat, which has some useful diagnostic and clean up features. Looking for similar features in Glassfish, I came across this issue.

In short, if an application fails to remove ThreadLocals, Tomcat does the cleanup when the application is undeployed:
http://wiki.apache.org/tomcat/MemoryLeakProtection

So if Tomcat can do it, I see no reason why Glassfish can't.

It's hard enough to keep one's own code clean, but it's bad luck when you depend on third-party libs with improper ThreadLocal usage.
See JAXB-831 for an example.





[GLASSFISH-17098] Move network-configuration CLI commands to nucleus under "grizzly-glue" module Created: 25/Jul/11  Updated: 02/Dec/11

Status: Open
Project: glassfish
Component/s: grizzly-kernel
Affects Version/s: 4.0
Fix Version/s: None

Type: Task Priority: Major
Reporter: oleksiys Assignee: oleksiys
Resolution: Unresolved Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified

Tags: 3_1_x-exclude, nucleus-cleanup

 Description   

subj.






[GLASSFISH-20618] Create source bundle for nucleus-grizzly-all Created: 10/Jun/13  Updated: 10/Jun/13

Status: Open
Project: glassfish
Component/s: grizzly-kernel
Affects Version/s: None
Fix Version/s: None

Type: Bug Priority: Major
Reporter: jjsnyder83 Assignee: oleksiys
Resolution: Unresolved Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified


 Description   

Per the summary.



 Comments   
Comment by jjsnyder83 [ 10/Jun/13 ]

Please create the source bundle for nucleus-grizzly-all.





[GLASSFISH-21136] Log filled with TimeoutException on TCPNIOTransportFilter.handleRead Created: 18/Jul/14  Updated: 21/Jul/14

Status: Open
Project: glassfish
Component/s: grizzly-kernel
Affects Version/s: 4.1_b08
Fix Version/s: None

Type: Bug Priority: Major
Reporter: gcruscoe Assignee: oleksiys
Resolution: Unresolved Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified
Environment:

Linux, Java 8u11


Tags: fishcat

 Description   

Not sure what the cause is, because the stack trace does not link to any of our code. Could be related to code running in EJB timers, but not sure.

The stack trace I get is this:

2014-07-18 14:50:26.466 - :: ERROR [http-listener-2(6)] javax.enterprise.web.core:287 - An exception or error occurred in the container during the request processing
java.io.IOException: java.util.concurrent.TimeoutException
at org.glassfish.grizzly.nio.transport.TCPNIOTransportFilter.handleRead(TCPNIOTransportFilter.java:90) ~[nucleus-grizzly-all.jar:na]
at org.glassfish.grizzly.filterchain.TransportFilter.handleRead(TransportFilter.java:173) ~[nucleus-grizzly-all.jar:na]
at org.glassfish.grizzly.ssl.SSLBaseFilter$SSLTransportFilterWrapper.handleRead(SSLBaseFilter.java:999) ~[nucleus-grizzly-all.jar:na]
at org.glassfish.grizzly.filterchain.ExecutorResolver$9.execute(ExecutorResolver.java:119) [nucleus-grizzly-all.jar:na]
at org.glassfish.grizzly.filterchain.DefaultFilterChain.executeFilter(DefaultFilterChain.java:284) [nucleus-grizzly-all.jar:na]
at org.glassfish.grizzly.filterchain.DefaultFilterChain.executeChainPart(DefaultFilterChain.java:201) [nucleus-grizzly-all.jar:na]
at org.glassfish.grizzly.filterchain.DefaultFilterChain.read(DefaultFilterChain.java:351) [nucleus-grizzly-all.jar:na]
at org.glassfish.grizzly.filterchain.FilterChainContext.read(FilterChainContext.java:695) ~[nucleus-grizzly-all.jar:na]
at org.glassfish.grizzly.http.io.InputBuffer.blockingRead(InputBuffer.java:1119) ~[nucleus-grizzly-all.jar:na]
at org.glassfish.grizzly.http.server.io.ServerInputBuffer.blockingRead(ServerInputBuffer.java:95) ~[nucleus-grizzly-all.jar:na]
at org.glassfish.grizzly.http.io.InputBuffer.fill(InputBuffer.java:1143) ~[nucleus-grizzly-all.jar:na]
at org.glassfish.grizzly.http.io.InputBuffer.read(InputBuffer.java:353) ~[nucleus-grizzly-all.jar:na]
at org.apache.catalina.connector.InputBuffer.read(InputBuffer.java:267) ~[web-core.jar:na]
at org.apache.catalina.connector.CoyoteInputStream.read(CoyoteInputStream.java:234) ~[web-core.jar:na]
at org.apache.catalina.authenticator.FormAuthenticator.saveRequest(FormAuthenticator.java:614) ~[web-core.jar:na]
at org.apache.catalina.authenticator.FormAuthenticator.authenticate(FormAuthenticator.java:250) ~[web-core.jar:na]
at com.sun.web.security.RealmAdapter.invokeAuthenticateDelegate(RealmAdapter.java:1524) ~[websecurity.jar:na]
at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:606) ~[web-core.jar:na]
at org.apache.catalina.core.StandardPipeline.doInvoke(StandardPipeline.java:702) ~[web-core.jar:na]
at org.apache.catalina.core.StandardPipeline.invoke(StandardPipeline.java:673) ~[web-core.jar:na]
at com.sun.enterprise.web.WebPipeline.invoke(WebPipeline.java:99) ~[web-glue.jar:na]
at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:174) ~[web-core.jar:na]
at org.apache.catalina.core.StandardPipeline.doInvoke(StandardPipeline.java:734) ~[web-core.jar:na]
at org.apache.catalina.core.StandardPipeline.invoke(StandardPipeline.java:673) ~[web-core.jar:na]
at org.apache.catalina.connector.CoyoteAdapter.doService(CoyoteAdapter.java:412) ~[web-core.jar:na]
at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:282) ~[web-core.jar:na]
at com.sun.enterprise.v3.services.impl.ContainerMapper$HttpHandlerCallable.call(ContainerMapper.java:459) [kernel.jar:na]
at com.sun.enterprise.v3.services.impl.ContainerMapper.service(ContainerMapper.java:167) [kernel.jar:na]
at org.glassfish.grizzly.http.server.HttpHandler.runService(HttpHandler.java:201) [nucleus-grizzly-all.jar:na]
at org.glassfish.grizzly.http.server.HttpHandler.doHandle(HttpHandler.java:175) [nucleus-grizzly-all.jar:na]
at org.glassfish.grizzly.http.server.HttpServerFilter.handleRead(HttpServerFilter.java:235) [nucleus-grizzly-all.jar:na]
at org.glassfish.grizzly.filterchain.ExecutorResolver$9.execute(ExecutorResolver.java:119) [nucleus-grizzly-all.jar:na]
at org.glassfish.grizzly.filterchain.DefaultFilterChain.executeFilter(DefaultFilterChain.java:284) [nucleus-grizzly-all.jar:na]
at org.glassfish.grizzly.filterchain.DefaultFilterChain.executeChainPart(DefaultFilterChain.java:201) [nucleus-grizzly-all.jar:na]
at org.glassfish.grizzly.filterchain.DefaultFilterChain.execute(DefaultFilterChain.java:133) [nucleus-grizzly-all.jar:na]
at org.glassfish.grizzly.filterchain.DefaultFilterChain.process(DefaultFilterChain.java:112) [nucleus-grizzly-all.jar:na]
at org.glassfish.grizzly.ProcessorExecutor.execute(ProcessorExecutor.java:77) [nucleus-grizzly-all.jar:na]
at org.glassfish.grizzly.nio.transport.TCPNIOTransport.fireIOEvent(TCPNIOTransport.java:561) [nucleus-grizzly-all.jar:na]
at org.glassfish.grizzly.strategies.AbstractIOStrategy.fireIOEvent(AbstractIOStrategy.java:112) [nucleus-grizzly-all.jar:na]
at org.glassfish.grizzly.strategies.WorkerThreadIOStrategy.run0(WorkerThreadIOStrategy.java:117) [nucleus-grizzly-all.jar:na]
at org.glassfish.grizzly.strategies.WorkerThreadIOStrategy.access$100(WorkerThreadIOStrategy.java:56) [nucleus-grizzly-all.jar:na]
at org.glassfish.grizzly.strategies.WorkerThreadIOStrategy$WorkerThreadRunnable.run(WorkerThreadIOStrategy.java:137) [nucleus-grizzly-all.jar:na]
at org.glassfish.grizzly.threadpool.AbstractThreadPool$Worker.doWork(AbstractThreadPool.java:565) [nucleus-grizzly-all.jar:na]
at org.glassfish.grizzly.threadpool.AbstractThreadPool$Worker.run(AbstractThreadPool.java:545) [nucleus-grizzly-all.jar:na]
at java.lang.Thread.run(Thread.java:744) [na:1.7.0_45]
Caused by: java.util.concurrent.TimeoutException: null
at org.glassfish.grizzly.nio.tmpselectors.TemporarySelectorReader.read(TemporarySelectorReader.java:126) ~[nucleus-grizzly-all.jar:na]
at org.glassfish.grizzly.nio.tmpselectors.TemporarySelectorReader.read(TemporarySelectorReader.java:75) ~[nucleus-grizzly-all.jar:na]
at org.glassfish.grizzly.AbstractReader.read(AbstractReader.java:72) ~[nucleus-grizzly-all.jar:na]
at org.glassfish.grizzly.nio.transport.TCPNIOTransportFilter.handleRead(TCPNIOTransportFilter.java:77) ~[nucleus-grizzly-all.jar:na]



 Comments   
Comment by oleksiys [ 18/Jul/14 ]

Looks like it happens during form authentication, when the server expects more information from the browser but the browser doesn't send it. The read attempt fails after the read-timeout expires.
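The failure mode described above can be reproduced without form auth by a raw client that advertises a request body and then never sends it; a minimal sketch (host, port and path are assumptions):

import java.io.OutputStream;
import java.net.Socket;

public class StalledPost {
    public static void main(String[] args) throws Exception {
        Socket s = new Socket("localhost", 8080);          // host/port assumed
        try {
            OutputStream out = s.getOutputStream();
            // promise an 11-byte body, then send nothing: the server's
            // blocking read eventually fails with the read timeout
            out.write(("POST /myapp/login HTTP/1.1\r\n"
                    + "Host: localhost\r\n"
                    + "Content-Type: application/x-www-form-urlencoded\r\n"
                    + "Content-Length: 11\r\n"
                    + "\r\n").getBytes("ISO-8859-1"));
            out.flush();
            Thread.sleep(60000);                           // hold the socket open
        } finally {
            s.close();
        }
    }
}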

Comment by gcruscoe [ 18/Jul/14 ]

This occurs while there is no user interaction on the web page, but the page does make ajax calls and is connected via Atmosphere using websockets. This also occurs at startup (although less frequently, I believe) when there are no web browsers pointing at the app server.

The application has several war applications as well as a main ear application. The wars make web service calls to the EJB-level web services (all on the same app server). So when the user is using the application, there is a web session to the war and the war makes web service calls.

I am wondering if a web page without a session was trying to make ajax calls – although this doesn't explain the same stack trace on startup. Anyway, I hope some of this helps.

I'll update if I can come up with more examples of what makes this happen.

Comment by oleksiys [ 18/Jul/14 ]

Pls. share the exception you see during startup, because the one you posted is related to form authentication.

Comment by gcruscoe [ 21/Jul/14 ]

Okay, the exception during startup actually had the same cause. It is the Atmosphere connection from a web browser – I had not found and closed all tabs before restarting the app server. Now I do not get this on startup.

The session and SSO / realm are most certainly expired – I do not know if that would contribute to this issue. I will work on figuring out more about this and post back when I have more. Hopefully it is benign, but I have not seen these in the logs on GF 3.1.2.2, so it is a bit troubling.





[GLASSFISH-21189] Frequent network dropouts Created: 10/Sep/14  Updated: 19/Sep/14

Status: Open
Project: glassfish
Component/s: grizzly-kernel
Affects Version/s: 4.0
Fix Version/s: None

Type: Bug Priority: Major
Reporter: pranahata Assignee: oleksiys
Resolution: Unresolved Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified
Environment:

JDK 8 u 5, 64 bit CentOS



 Description   

We have a GlassFish server hosted in a datacenter, running a Java EE application whose desktop client is deployed via Java Web Start.

When the JWS client downloads the jars, we get very frequent network dropouts, mainly with jar files > 3 MB, where the JWS client reports "EOF: Unexpected end of ZLIB input stream" and the server reports the following exception.

java.io.IOException: java.lang.InterruptedException
at org.glassfish.grizzly.utils.Exceptions.makeIOException(Exceptions.java:81)
at org.glassfish.grizzly.http.io.OutputBuffer.blockAfterWriteIfNeeded(OutputBuffer.java:958)
at org.glassfish.grizzly.http.io.OutputBuffer.write(OutputBuffer.java:682)
at org.apache.catalina.connector.OutputBuffer.writeBytes(OutputBuffer.java:355)
at org.apache.catalina.connector.OutputBuffer.write(OutputBuffer.java:342)
at org.apache.catalina.connector.CoyoteOutputStream.write(CoyoteOutputStream.java:161)
at org.apache.commons.io.IOUtils.copyLarge(IOUtils.java:1793)
at org.apache.commons.io.IOUtils.copyLarge(IOUtils.java:1769)
at org.apache.commons.io.IOUtils.copy(IOUtils.java:1744)
at jnlp.sample.servlet.DownloadResponse$FileDownloadResponse.sendRespond(DownloadResponse.java:257)
at jnlp.sample.servlet.JnlpDownloadServlet.handleRequest(JnlpDownloadServlet.java:187)
at jnlp.sample.servlet.JnlpDownloadServlet.doGet(JnlpDownloadServlet.java:120)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:687)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:790)
at org.apache.catalina.core.StandardWrapper.service(

Clients connecting over a wireless internet connection get these errors much more frequently.

The client application talks to the server via http.

If I do a ping -t to the server, the number of lost packets is not that big, and sometimes there is no packet loss at all when this exception happens.

Dropouts seem to last no longer than one second, yet the number of failed downloads or calls is large enough to cause too many errors in the application.

If this is not a bug, I would appreciate it if someone could tell me whether there is any setting we can configure on GlassFish to let http clients continue reading after a dropout, or to change the timeouts, retries, or any other configuration to make the communication more robust.



 Comments   
Comment by oleksiys [ 19/Sep/14 ]

pls. try to disable request-timeout on http listener for example for http-listener-1:

$asadmin set configs.config.server-config.network-config.protocols.protocol.http-listener-1.http.request-timeout-seconds=-1
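Independently of the server-side setting, a JWS-style client can also be made more robust against dropouts by resuming with an HTTP Range request instead of restarting the download; a rough client-side sketch (it assumes the serving servlet honors Range, which JnlpDownloadServlet may not):

import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public class ResumingDownload {
    static void download(URL url, File target) throws IOException {
        long have = target.exists() ? target.length() : 0;
        HttpURLConnection c = (HttpURLConnection) url.openConnection();
        if (have > 0) {
            c.setRequestProperty("Range", "bytes=" + have + "-");  // resume point
        }
        InputStream in = c.getInputStream();
        OutputStream out = new FileOutputStream(target, have > 0); // append on resume
        try {
            byte[] buf = new byte[8192];
            for (int n; (n = in.read(buf)) != -1; ) {
                out.write(buf, 0, n);
            }
        } finally {
            out.close();
            in.close();
        }
    }
}

A real implementation would check for a 206 (Partial Content) response before appending, and retry download() on IOException.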




[GLASSFISH-21061] keepAlive-timeout happened between "qin" and "qout", the connection will be broken Created: 05/May/14  Updated: 08/Aug/14

Status: Open
Project: glassfish
Component/s: grizzly-kernel
Affects Version/s: 3.1.2
Fix Version/s: None

Type: Bug Priority: Major
Reporter: houtang Assignee: oleksiys
Resolution: Unresolved Votes: 0
Labels: None
Remaining Estimate: 4 days
Time Spent: Not Specified
Original Estimate: 4 days
Environment:

all


Tags: NullPointerException, conn, disc, grizzly

 Description   

When I try to access a web application through IE6 on GlassFish 3.1.2.2, the following phenomenon occurs:
If a keepAlive-timeout happens between "qin" and "qout", the connection is broken. You can see from ※1 in the printed log that the connection has been broken.
BTW, when we access through IE6 after the first connection was broken, another connection is created (※2) and the response returns normally. However, when we access the web application through IE8 after the first connection was broken, the second connection can't be created, so the log at ※2 is not printed in this situation; the connection is broken here and the right result can't be returned.

log of ie6
-----------------------------------
"23/Apr/2014:12:05:22.485" "20(Grizzly-kernel-thread(1))" "conn" "193.168.154.174:2955"
"23/Apr/2014:12:05:25.090" "20(Grizzly-kernel-thread(1))" "qin" "193.168.154.174:2955"
"23/Apr/2014:12:05:51.064" "20(Grizzly-kernel-thread(1))" "disc" "193.168.154.174:2955" ※1
"23/Apr/2014:12:08:59.668" "81(http-thread-pool-28292(4))" "qout" "null"
"23/Apr/2014:12:09:01.587" "20(Grizzly-kernel-thread(1))" "conn" "193.168.154.174:4045" ※2
"23/Apr/2014:12:09:01.587" "20(Grizzly-kernel-thread(1))" "qin" "193.168.154.174:4045"
"23/Apr/2014:12:09:01.587" "83(http-thread-pool-28292(5))" "qout" "193.168.154.174:4045"
"23/Apr/2014:12:09:01.587" "83(http-thread-pool-28292(5))" "recv" "GET /ServletSession/jsp/ServletSessionOut.jsp HTTP/1.1"
"23/Apr/2014:12:09:01.587" "83(http-thread-pool-28292(5))" "call"
"23/Apr/2014:12:09:01.587" "83(http-thread-pool-28292(5))" "rtn"
"23/Apr/2014:12:09:01.587" "83(http-thread-pool-28292(5))" "send" "200" "193.168.154.174:4045"
"23/Apr/2014:12:09:01.587" "20(Grizzly-kernel-thread(1))" "k-wt" "193.168.154.174:4045"
"23/Apr/2014:12:09:11.727" "20(Grizzly-kernel-thread(1))" "disc" "193.168.154.174:4045"
-----------------------------------

log of ie8
-----------------------------------
"23/Apr/2014:12:38:49.004" "20(Grizzly-kernel-thread(1))" "conn" "10.167.157.142:1580"
"23/Apr/2014:12:38:57.194" "20(Grizzly-kernel-thread(1))" "qin" "10.167.157.142:1580"
"23/Apr/2014:12:39:12.654" "20(Grizzly-kernel-thread(1))" "disc" "10.167.157.142:1580"
"23/Apr/2014:12:39:24.572" "107(http-thread-pool-28292(13))" "qout" "null"
-----------------------------------

Given all of the above, the question is whether this phenomenon is an IE bug or a GlassFish internal issue.

I think the connection shouldn't be broken when the keepAlive-timeout happens between "qin" and "qout".

In addition, when the above phenomenon happens on GlassFish 2.1.1, the following log is printed to
the server.log; the content of the log is the same as in issue GLASSFISH-20622.

server.log
-----------------------------------
java.lang.NullPointerException
at org.apache.coyote.tomcat5.OutputBuffer.addSessionCookieWithJvmRoute(OutputBuffer.java:689)
at org.apache.coyote.tomcat5.OutputBuffer.doFlush(OutputBuffer.java:370)
at org.apache.coyote.tomcat5.OutputBuffer.close(OutputBuffer.java:336)
at org.apache.coyote.tomcat5.CoyoteResponse.finishResponse(CoyoteResponse.java:594)
at org.apache.coyote.tomcat5.CoyoteAdapter.afterService(CoyoteAdapter.java:354)
at com.sun.enterprise.web.connector.grizzly.DefaultProcessorTask.postResponse(DefaultProcessorTask.java:625)
at com.sun.enterprise.web.connector.grizzly.DefaultProcessorTask.doProcess(DefaultProcessorTask.java:612)
at com.sun.enterprise.web.connector.grizzly.DefaultProcessorTask.process(DefaultProcessorTask.java:889)
at com.sun.enterprise.web.connector.grizzly.DefaultReadTask.executeProcessorTask(DefaultReadTask.java:341)
at com.sun.enterprise.web.connector.grizzly.DefaultReadTask.doTask(DefaultReadTask.java:263)
at com.sun.enterprise.web.connector.grizzly.DefaultReadTask.doTask(DefaultReadTask.java:214)
at com.sun.enterprise.web.connector.grizzly.TaskBase.run(TaskBase.java:267)
at com.sun.enterprise.web.connector.grizzly.WorkerThreadImpl.run(WorkerThreadImpl.java:118)
-----------------------------------



 Comments   
Comment by oleksiys [ 23/May/14 ]

When do you print the "qin" log? I see it happens in the kernel thread, but where exactly?

Thank you.

Comment by houtang [ 06/Jun/14 ]

==============Here is the background===============
Below is the logic trace when using glassfish3.1.2.
Basically it is similar to glassfish2.1, with two threads.
First, thread <Grizzly-kernel-thread> built the connection "conn" (marked with ★1), then it went to "qin" (marked with ★2);
after that, another thread <http-thread-pool> was woken up and executed the sequence "qout" -> "recv" -> "send" (marked with ▲1, ▲2, ▲3).
Then, after 15 seconds (keepalivetimeout), the connection was disconnected, "disc" (marked with ★3).

glassfish3.1.2
-------------------------------------------------
#=com.sun.grizzly
$=com.sun.enterprise.v3.services.impl.monitor
@=com.sun.grizzly.util
*=com.sun.grizzly.http
&=com.sun.enterprise.v3.services.impl

<Grizzly-kernel-thread>
#SelectorHandlerRunner.doSelect(SelectorHandler, NIOContext) line: 201
#SelectorHandlerRunner.handleSelectedKey(SelectionKey, SelectorHandler, NIOContext) line: 302
$MonitorableSelectorHandler.acceptWithoutRegistration(SelectionKey) ★1 conn
#SelectorHandlerRunner.handleSelectedKey(SelectionKey, SelectorHandler, NIOContext) line: 381
#NIOContext.execute(ContextTask)
@SyncThreadPool.execute(Runnable) line: 189 ★2 qin
#SelectorHandlerRunner.doSelect(SelectorHandler, NIOContext) line: 212
#TCPSelectorHandler.postSelect(Context) line: 534
#SelectorThreadKeyHandler.cancel(SelectionKey) line: 84
#BaseSelectionKeyHandler.doAfterKeyCancel(SelectionKey) line: 234 ★3 disc

<http-thread-pool>
HttpWorkerThread(Thread).run() line: 722
@SyncThreadPool$SyncThreadWorker(AbstractThreadPool$Worker).run() line: 520
@SyncThreadPool$SyncThreadWorker(AbstractThreadPool$Worker).doWork() line: 536 ▲1 qout
@SyncThreadPool$SyncThreadWorker(AbstractThreadPool$Worker).doWork() line: 542
#ProtocolChainContextTask(ContextTask).run() line: 71
*HttpProtocolChain(DefaultProtocolChain).execute(Context) line: 90
*DefaultProtocolFilter.execute(Context) line: 231
*ProcessorTask.process(InputStream, OutputStream) line: 1118
*ProcessorTask.doProcess() line: 752
*ProcessorTask.doProcess() line: 793
*ProcessorTask.postResponse() line: 817
*ProcessorTask.finishResponse() line: 827
&ContainerMapper.afterService(Request, Response) line: 487
*ProcessorTask.action(ActionCode, Object) line: 1156
*ProcessorTask.prepareResponse() line: 1833 ▲3 send
-------------------------------------------------

While I was debugging these programs, I found that if the keepalivetimeout (15 seconds) is up, it can "disc" (★3) directly after "qout" (▲1),
and it can also "disc" (★3) after "recv" (▲2).
==============Background ends===============

My first question: is it correct that after "qout" or "recv" the connection can still be disconnected when the keepalivetimeout (15 sec) is up?

And when I was reproducing the situation, after "recv" the connection was disconnected (the keepalive timeout was up); IE6 could build a second connection, finish the request
and return "200". The strange thing is that when I use IE8, there is no second connection!! I was quite confused. My second question is: what makes IE6 and IE8 different?
BTW, I got the same result in both glassfish3.1.2 and glassfish2.1.

Below are the trace.log details:
1)IE6
trace.log

"15/May/2014:19:22:47.193" "42(Grizzly-kernel-thread(1))" "conn" "193.168.154.174:1086" // build the first connection
"15/May/2014:19:22:47.194" "42(Grizzly-kernel-thread(1))" "qin" "193.168.154.174:1086"
"15/May/2014:19:22:47.194" "78(http-thread-pool-28292(4))" "qout" "193.168.154.174:1086"
"15/May/2014:19:23:14.646" "78(http-thread-pool-28292(4))" "recv" "GET /HelloServlet HTTP/1.1"
"15/May/2014:19:23:17.667" "42(Grizzly-kernel-thread(1))" "disc" "193.168.154.174:1086" // after "recv" connection is disconnected
"15/May/2014:19:23:23.715" "42(Grizzly-kernel-thread(1))" "conn" "193.168.154.174:1087" // build the second connection
"15/May/2014:19:23:23.716" "42(Grizzly-kernel-thread(1))" "qin" "193.168.154.174:1087"
"15/May/2014:19:23:23.716" "349(http-thread-pool-28292(11))" "qout" "193.168.154.174:1087"
"15/May/2014:19:23:23.716" "349(http-thread-pool-28292(11))" "recv" "GET /HelloServlet HTTP/1.1"
"15/May/2014:19:23:23.716" "349(http-thread-pool-28292(11))" "send" "301" "193.168.154.174:1087"
"15/May/2014:19:23:23.717" "78(http-thread-pool-28292(4))" "send" "301" "null" // the first connection return "null"
"15/May/2014:19:23:23.717" "42(Grizzly-kernel-thread(1))" "k-wt" "193.168.154.174:1087"
"15/May/2014:19:23:23.717" "42(Grizzly-kernel-thread(1))" "qin" "193.168.154.174:1087"
"15/May/2014:19:23:23.717" "74(http-thread-pool-28292(3))" "qout" "193.168.154.174:1087"
"15/May/2014:19:23:23.718" "74(http-thread-pool-28292(3))" "recv" "GET /HelloServlet/ HTTP/1.1"
"15/May/2014:19:23:23.718" "74(http-thread-pool-28292(3))" "call"
"15/May/2014:19:23:23.719" "74(http-thread-pool-28292(3))" "rtn"
"15/May/2014:19:23:23.719" "74(http-thread-pool-28292(3))" "send" "200" "193.168.154.174:1087" // the second connection sucesses
"15/May/2014:19:23:23.719" "42(Grizzly-kernel-thread(1))" "k-wt" "193.168.154.174:1087"

2)IE8
trace.log

"15/May/2014:19:34:03.339" "42(Grizzly-kernel-thread(1))" "conn" "10.167.157.75:51986" // build the first connection
"15/May/2014:19:34:03.339" "42(Grizzly-kernel-thread(1))" "qin" "10.167.157.75:51986"
"15/May/2014:19:34:03.339" "340(http-thread-pool-28292(9))" "qout" "10.167.157.75:51986"
"15/May/2014:19:34:44.796" "340(http-thread-pool-28292(9))" "recv" "GET /HelloServlet HTTP/1.1"
"15/May/2014:19:34:47.472" "42(Grizzly-kernel-thread(1))" "disc" "10.167.157.75:51986" // after "recv" connection is disconnected
"15/May/2014:19:34:50.884" "340(http-thread-pool-28292(9))" "send" "301" "null" // the first connection return "null" and there is no second connection!!!

Thank you

Comment by oleksiys [ 11/Jun/14 ]

The keep-alive timeout should be applied only in cases when the connection is idle (waiting for the next request to come),
and the 15 seconds starts counting once the request/response is processed.
Are you sure it's the keep-alive timeout we're talking about?

Thank you.
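For reference, the keep-alive timeout discussed here corresponds to the http timeout-seconds attribute, so it can be inspected or changed with asadmin, e.g. (listener name assumed):

asadmin get configs.config.server-config.network-config.protocols.protocol.http-listener-1.http.timeout-seconds
asadmin set configs.config.server-config.network-config.protocols.protocol.http-listener-1.http.timeout-seconds=15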

Comment by houtang [ 11/Jun/14 ]

Thank you for your reply!

I have a query that needs your confirmation:
Basically, one request/response processing cycle has the steps below (1~9):
1.conn
2.qin
3.qout
4.recv
5.call
6.rtn
7.send
8.k-wt
9.disc

10.conn
11.qin
12.qout
13.recv
14.call
15.rtn
16.send
17.k-wt
18.disc

Do you mean the 15 seconds starts counting at 8 and lasts from 8 to 9,
or starts counting at 9 and lasts from 9 to 10 ("conn" of the second request)?
My opinion is that the 15 seconds starts counting at 1 and lasts from 1 to 9 (from the source logic);
do you agree?

In this case I have the following situation: the request is being processed and is not yet
completed (for example, in step "recv"), and at this moment the connection is disconnected.

conn (15 sec starts counting) -> qin -> qout -> recv -> disc (15 sec ends)

I hope you get my point.
Pls let me know if there is any misunderstanding.

Thank you

Comment by oleksiys [ 11/Jun/14 ]

Hi,

Just to be clear, we're talking about HTTP keep-alive, which means the HTTP (TCP) connection is not closed between requests. That's probably what you meant in your description, but I just wanted to make sure we're on the same page... so at steps 9. and 18. no real HTTP (TCP) connection termination happens.

Coming back to the steps you listed: I confirm that the keep-alive timeout is ticking between 9. and 10.

If you observe the timeout during a read/write, it must be a different timeout then.
First of all, I'd recommend switching to GlassFish 3.1.2.2 and checking whether the issue is still there.

Thank you.

Comment by houtang [ 16/Jun/14 ]

Hi oleksiys,

>Coming back to the steps you listed: I confirm that the keep-alive timeout is ticking between 9. and 10.
>If you observe the timeout during a read/write, it must be a different timeout then.
OK, it seems we are talking about different timeouts.

Here is part of the source code from GlassFish (Grizzly); the keep-alive timeout I was talking about is
the "idleLimit" below in com.sun.grizzly.http.SelectorThreadKeyHandler.java.

com.sun.grizzly.http.SelectorThreadKeyHandler.java
------------------------
public void expire(Iterator<SelectionKey> iterator) {
    if (timeout == SelectionKeyAttachment.UNLIMITED_TIMEOUT) {
        return;
    }
    ......
    if (expire >= 0) {
        long idleLimit = getIdleLimit(attachment);

        if (idleLimit >= 0 && currentTime - expire >= idleLimit &&
                (!(attachment instanceof SelectionKeyAttachment) ||
                 ((SelectionKeyAttachment) attachment).timedOut(key))) {
            // preventing further idle timeout detection for same key;
            // since we don't directly cancel the key anymore, we can't
            // rely on key.isValid detection
            addExpirationStamp(key, SelectionKeyAttachment.UNLIMITED_TIMEOUT);
            cancel(key); // ●1
        }
    }
}

------------------------

I have debugged this several times and I am pretty sure this timeout happens between read/write.
Can you try to debug it from your side?

I have also listed part of the source code for "qin", "qout", "recv" and "send", marked with ★ below.
Here are the debug steps:
1. Put breakpoints at ●1, ●2 and ★recv.
2. When a request comes, the <http-thread-pool> thread will stop at ●2; wait for 30 seconds (the default timeout), then the <Grizzly-kernel-thread> thread will come up and stop at ●1.
3. Keep debugging <http-thread-pool>; after stepping over ★recv, go to <Grizzly-kernel-thread> and step over ●1.
4. Disconnect the debugger.

<Grizzly-kernel-thread>:
com.sun.grizzly.util.SyncThreadPool#execute(Runnable)
------------------------
public void execute(Runnable task) {
    ......
    if (((maxQueuedTasks < 0) || (workerQueueSize < maxQueuedTasks))
            && (workQueue.offer(task))) {
        onTaskQueued(task); // ★qin
    }
    ......
}
------------------------

com.sun.grizzly.BaseSelectionKeyHandler.java
------------------------
protected void doAfterKeyCancel(SelectionKey key) {
    try {
        if (selectorHandler != null)
            selectorHandler.closeChannel(key.channel());
        else
            closeChannel(key.channel());
    } finally {
        notifyLocallyClose(key); // ★disc
        Object attachment = key.attach(null);
        if (attachment instanceof SelectionKeyAttachment)
            ((SelectionKeyAttachment) attachment).release(key);
    }
}
------------------------

<http-thread-pool>:
com.sun.grizzly.util.AbstractThreadPool.java
------------------------
protected void doWork() {
    ......
    AbstractThreadPool.onTaskDequeued(r); // ★qout
    Throwable error = null;
    try {
        beforeExecute(t_, r);
        r.run(); // ●2
        onTaskCompletedEvent(r);
        ......
    }
}

------------------------

com.sun.grizzly.http.ProcessorTask.java
------------------------
public boolean parseRequest() throws Exception {
    try {
        inputBuffer.parseRequestLine(); // ★recv
    } catch (BufferOverflowException boe) {
        if (logger.isLoggable(Level.SEVERE)) {
            logger.log(Level.SEVERE,
                LogMessages.SEVERE_GRIZZLY_HTTP_PROCESSOR_TASK_REQUEST_URI_TOO_LARGE_ERROR(), boe);
        }
        ......
    }
}

protected void prepareResponse() {
    ......
    sendHeaders(); // ★send
}

------------------------

Thank you

Comment by houtang [ 08/Aug/14 ]

Hi oleksiys,

How are you doing?
It's been almost 2 months since your last response.
Is there any progress? Or do you still have questions?
I would be glad to receive your reply!

Thanks





[GLASSFISH-19582] Move AccessLog support on Grizzly level instead of WebContainer Created: 24/Jan/13  Updated: 24/Jan/13

Status: Open
Project: glassfish
Component/s: grizzly-kernel
Affects Version/s: 4.0_b70
Fix Version/s: future release

Type: New Feature Priority: Major
Reporter: oleksiys Assignee: oleksiys
Resolution: Unresolved Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified

Issue Links:
Dependency
blocks GLASSFISH-18244 Web Service calls are not logged on s... Open

 Description   

Some HTTP-based services (WebServices/EJB, appclient) that use Grizzly HttpServer directly also require access log support.






[GLASSFISH-19444] Glassfish grizzly thread consuming large amounts of CPU time in NIO poll operation Created: 14/Dec/12  Updated: 16/Nov/13

Status: Open
Project: glassfish
Component/s: grizzly-kernel
Affects Version/s: 3.1.2.2
Fix Version/s: None

Type: Bug Priority: Major
Reporter: jfowler2 Assignee: oleksiys
Resolution: Unresolved Votes: 3
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified
Environment:

JDK version 1.6.0_37, Windows Server 2008 R2 Enterprise


Attachments: Java Archive File grizzly-framework-1.9.50.jar    

 Description   

We recently noticed an issue where our GlassFish server, after running successfully for several hours, would suddenly peg one of the CPUs at 100%. Our application becomes unresponsive during this time. After restarting, the problem eventually happens again (usually after several hours).

I ran this command to see what the threads were doing:

asadmin generate-jvm-report --type=thread

In the resulting output, one thread looked highly suspicious (consuming orders of magnitude more CPU time than any other thread):

Thread Execution Information:

Thread "Grizzly-kernel-thread(1)" thread-id: 27 thread-state: RUNNABLE Running in native
at: sun.nio.ch.WindowsSelectorImpl$SubSelector.poll0(Native Method)
at: sun.nio.ch.WindowsSelectorImpl$SubSelector.poll(WindowsSelectorImpl.java:273)
at: sun.nio.ch.WindowsSelectorImpl$SubSelector.access$400(WindowsSelectorImpl.java:255)
at: sun.nio.ch.WindowsSelectorImpl.doSelect(WindowsSelectorImpl.java:136)
at: sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:69)
at: sun.nio.ch.SelectorImpl.select(SelectorImpl.java:80)
at: com.sun.grizzly.TCPSelectorHandler.select(TCPSelectorHandler.java:513)
at: com.sun.grizzly.SelectorHandlerRunner.doSelect(SelectorHandlerRunner.java:190)
at: com.sun.grizzly.SelectorHandlerRunner.run(SelectorHandlerRunner.java:132)
at: java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at: java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at: java.lang.Thread.run(Thread.java:662)

Thread Synchronization Statistics:

Number of times this thread was blocked (to enter/reenter a Monitor): 4,520

Number of times this thread waited for a notification (i.e. it was in WAITING or TIMED_WAITING state): 0

Total CPU time for this thread: 2,753 seconds 703,125,000 nanoseconds.

User-level CPU time for this thread: 2,753 seconds 703,125,000 nanoseconds.

Object Monitors currently held or requested by this thread: []

Ownable Synchronizers (e.g. ReentrantLock and ReentrantReadWriteLock) held by this thread: []



 Comments   
Comment by oleksiys [ 17/Dec/12 ]

attaching a patch which enables the selector-spin workaround for all operating systems.
Can you pls. apply this patch on GF 3.1.2.2 (rename this file to grizzly-framework.jar and copy it over the existing file w/ the same name in the glassfishv3/glassfish/modules folder)?
Pls. let me know if you still see this issue w/ the patch applied; if yes, pls. check and attach the GF server.log file.

Thanks.
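The rename-and-copy step as a shell one-liner, assuming the attachment was downloaded into the current directory and GF is installed under glassfishv3 (stop the domain first):

cp grizzly-framework-1.9.50.jar glassfishv3/glassfish/modules/grizzly-framework.jar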

Comment by Stefan_Chossy [ 18/Mar/13 ]

Hi jfowler2,

did that patch fix your problems? We are facing pretty much the same issue but the patch didn't help any.

Comment by badbob.nsk [ 15/Nov/13 ]

We have an issue with the same symptoms.
Does this patch work?

Comment by oleksiys [ 16/Nov/13 ]

is it Glassfish 3.1.2.2 on Windows?

Comment by badbob.nsk [ 16/Nov/13 ]

It is glassfish 3.1.2.2 on SLES11.





[GLASSFISH-17934] Support SNI (server name indication) Created: 08/Dec/11  Updated: 11/Mar/14

Status: Reopened
Project: glassfish
Component/s: grizzly-kernel
Affects Version/s: None
Fix Version/s: None

Type: New Feature Priority: Major
Reporter: foal Assignee: oleksiys
Resolution: Unresolved Votes: 1
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified


 Description   

Support SNI (server name indication) to allow multiple SSL certificates on a single IP/port.



 Comments   
Comment by Ryan Lubke [ 28/Feb/13 ]

If SNI support is desired, I would recommend switching to JDK 7 [1].

[1] http://docs.oracle.com/javase/7/docs/technotes/guides/security/enhancements-7.html
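For illustration, the explicit client-side SNI API that JSSE later exposed looks like the sketch below (javax.net.ssl.SNIHostName is a JDK 8 addition; on JDK 7 the client already sends SNI implicitly when the SSLSocket is created with a host name):

import java.util.Collections;
import javax.net.ssl.SNIHostName;
import javax.net.ssl.SNIServerName;
import javax.net.ssl.SSLParameters;
import javax.net.ssl.SSLSocket;
import javax.net.ssl.SSLSocketFactory;

public class SniClient {
    public static void main(String[] args) throws Exception {
        SSLSocketFactory f = (SSLSocketFactory) SSLSocketFactory.getDefault();
        SSLSocket sock = (SSLSocket) f.createSocket("example.com", 443);
        try {
            SSLParameters params = sock.getSSLParameters();
            // explicitly set the server_name extension for the handshake
            params.setServerNames(Collections.<SNIServerName>singletonList(
                    new SNIHostName("example.com")));
            sock.setSSLParameters(params);
            sock.startHandshake();
            System.out.println(sock.getSession().getPeerPrincipal());
        } finally {
            sock.close();
        }
    }
}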

Comment by oleksiys [ 11/Mar/14 ]

Implemented in Grizzly [1]; we can think about how to promote it to GlassFish.

[1] https://java.net/jira/browse/GRIZZLY-1661





[GLASSFISH-21238] Response code 400 on a DELETE request with a body Created: 21/Oct/14  Updated: 21/Oct/14

Status: Open
Project: glassfish
Component/s: grizzly-kernel
Affects Version/s: 4.1
Fix Version/s: None

Type: Bug Priority: Major
Reporter: ganichev Assignee: oleksiys
Resolution: Unresolved Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified
Environment:

Same behavior on linux x64, windows 7 x64. jdk1.8.0_20



 Description   

When a browser sends a request with the DELETE method and a body, GlassFish replies with response code 400. No corresponding filters or servlets are called.

When a DELETE request contains no body, the servlet's methods are called as usual.

GlassFish 4.0 does not have this problem.

The sample request is:

DELETE /struct/StructureElement/3751?_dc=1413865817878 HTTP/1.1
Host: localhost:8080
Connection: keep-alive
Content-Length: 11
Origin: http://localhost:8080
X-Requested-With: XMLHttpRequest
User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/38.0.2125.101 Safari/537.36
Content-Type: application/json
Accept: */*
Referer: http://localhost:8080/struct-client/
Accept-Encoding: gzip,deflate,sdch
Accept-Language: en-US,en;q=0.8,ru;q=0.6
Cookie: BPMSESSIONID=8aMBU61U7xn1RFSObP1slBhkvMCC; JSESSIONID=d116d19d27d9a381eef12b5acacd; treeForm_tree-hi=treeForm:tree:resources:JDBC:connectionPoolResources:ypool

{"id":3751}





[GLASSFISH-21009] The behavior of --timeout-seconds is not in line with the document Created: 19/Mar/14  Updated: 30/Oct/14

Status: Open
Project: glassfish
Component/s: web_container
Affects Version/s: 4.0
Fix Version/s: None

Type: Bug Priority: Major
Reporter: xj Assignee: oleksiys
Resolution: Unresolved Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified

Tags: 4_0_1-approved, licbug

 Description   

One licensee found contradictory behavior in the Java EE 7 RI when 0 is set for
--timeout-seconds on the asadmin create-http command.

There is such a description in GlassFish Reference Manual.

GlassFish Server Open Source Edition
Reference Manual Release 4.0
Page 1-77.
https://glassfish.java.net/docs/4.0/reference-manual.pdf

--timeout-seconds
The maximum time in seconds for which a keep alive connection is kept
open. A value of 0 or less means keep alive connections are kept open
indefinitely. The default is 30.

However, in the Java EE 7 RI the connection is terminated immediately if
0 is set for --timeout-seconds, i.e. a literal timeout of 0 seconds is used.

It seems the RI implementation is inconsistent with the documentation.

To reproduce.

1. Set following configurations
asadmin start-domain
asadmin create-protocol --securityenabled=false http-1
asadmin create-http --timeout-seconds 0 --default-virtual-server server http-1
asadmin create-network-listener --listenerport 7272 --protocol http-1 --enabled=true http-listener-3

The following config is set.
[domain.xml]
<protocol name="http-1">
<http timeout-seconds="0" default-virtual-server="server">
<file-cache></file-cache>
</http>
</protocol>

2. Connect the server
telnet 127.0.0.1 7272

You can see the connection is terminated immediately.



 Comments   
Comment by timmaher [ 30/Oct/14 ]

It gets even worse if you set the web session-timeout to 0 using the 4848 admin console.
Previously there was no timeout entry in domain.xml when the timeout was set to 1800.
Saving 0 in the admin console results in the following:
<web-container>
<session-config>
<session-manager>
<manager-properties></manager-properties>
<store-properties></store-properties>
</session-manager>
<session-properties timeout-in-seconds="0"></session-properties>
</session-config>
</web-container>
This is guaranteed to bomb all connections, including the admin console, with a 500.
The only repair is to go into domain.xml and reset the value to 1800 (I haven't tried removing it instead).





[GLASSFISH-20842] X-Forwarded-Proto not honored by glassfish 4.0 Created: 02/Oct/13  Updated: 26/Nov/14

Status: Open
Project: glassfish
Component/s: web_container
Affects Version/s: 4.0
Fix Version/s: future release

Type: Bug Priority: Major
Reporter: marcos_sakamoto Assignee: oleksiys
Resolution: Unresolved Votes: 7
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified
Environment:

Linux and Windows


Issue Links:
Relates
relates to GLASSFISH-18685 X-Forwarded-Proto not honored by glas... Resolved
Tags: ssl-offload

 Description   

I'm using an NginX server as an SSL-terminating proxy and set the following HTTP headers:

Host: <NginX host>
X-Forwarded-Proto: https

In Glassfish I set the following property:

asadmin set server.network-config.protocols.protocol.http-listener-1.http.scheme-mapping=X-Forwarded-Proto

Glassfish ignores the X-Forwarded-Proto and uses HTTP to build HttpServletRequest.getRequestURL() or evaluate HttpServletRequest.getScheme() and HttpServletRequest.isSecure()

I've tested on both GlassFish 3.1.2.2 and GlassFish 4.0; on GlassFish 3.1.2.2 it works like a charm!
But it's not working on GlassFish 4.0! Seems like a regression? (Similar issue: https://java.net/jira/browse/GLASSFISH-18685)
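For context, the proxy side of such a setup typically looks like the nginx snippet below (the upstream name is an assumption; only the two headers from the description matter here):

location / {
    proxy_pass http://glassfish_backend;          # upstream name assumed
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-Proto https;     # tells GF the original scheme
}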



 Comments   
Comment by frederickkaempfer [ 06/Oct/14 ]

The issue still exists on Glassfish 4.1. Probably the patch in GLASSFISH-18685 was never ported to the 4.x branch.

Comment by juliohm1978 [ 06/Oct/14 ]

I can't express how disappointing that is. We were waiting for 4.1, hoping this would be fixed in the official release.

Comment by smillidge-c2b2 [ 07/Oct/14 ]

If you have the patch available, I can see if we can get it into a Payara build and upstream it into a later GlassFish.

Comment by notabenem [ 25/Nov/14 ]

I really need this patched in GF 4.1 / Payara. After a long search, I have finally found the patch needed (hopefully).
It was committed against Grizzly 1.9.50 and likely backported to 1.9.46 (based on the zips attached to GLASSFISH-18685).

The ticket in question with the corresponding commits is the following: GRIZZLY-1281
Commits: 6b35876, 42089bc and 58ce8c4
Note, it was committed against 1.9.50. Apparently (according to GIT), these fixes were not applied to branch 2.2.x (and thus neither to 2.3.x)

PS: I created a patch (created from those commits), but can't upload it here.

Comment by smillidge-c2b2 [ 25/Nov/14 ]

I've created this to see if we can help out and test this https://github.com/payara/Payara/issues/75

Comment by notabenem [ 26/Nov/14 ]

One more thing: I noticed the fix(es) for GRIZZLY-1281 were also included in the main trunk. Commits: e54698c and ed7de1e.
According to the git log, these commits made their way to the 'master' branch, but never into the '2.3.x' branch from which the GlassFish 4.1 branch was forked.
BTW: I'm not sure how the 2.3.x branch was kept up to date. The first commit in this branch after 8/6/2012 (commit e54698c) is from 3/6/2013 (a year later, commit 09d8c82), and even that was not a squashed commit of all changes from master. Strange. It seems like a branch completely isolated from master.

Comment by smillidge-c2b2 [ 26/Nov/14 ]

I would also suggest you raise the issue within the Grizzly project as the fix needs to go into Grizzly rather than GlassFish.

Comment by notabenem [ 26/Nov/14 ]

Check out the Payara issue linked above for the pull-request to have this solved. I also have a patched nucleus-grizzly-all.jar, but can't attach it here.
The related GRIZZLY issue is GRIZZLY-1720





[GLASSFISH-21269] Cancelled POST via AJP triggers busy wait by http-listener-1 thread Created: 11/Dec/14  Updated: 16/Dec/14

Status: Open
Project: glassfish
Component/s: grizzly-kernel
Affects Version/s: 4.1
Fix Version/s: None

Type: Bug Priority: Major
Reporter: fuzzyBSc2 Assignee: oleksiys
Resolution: Unresolved Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified
Environment:

Ubuntu 12.04.4, Chrome browser with SPDY, apache 2.2.22-1ubuntu1.4, mod_jk 1:1.2.32-1, JAX-RS-based servlet with async-supported set to true



 Description   

An http worker thread goes into a busy conversation with Apache's mod_jk. Both the mod_jk thread and the http-listener-1 pool thread sit at high CPU (together around 100% on my single-CPU test setup), and the listener appears never to return to the thread pool.

The conversation is as follows (captured using strace on the high-CPU GlassFish thread):
glassfish -> mod_jk: 0x41 0x42 0x00 0x03 0x06 0x1f 0xfa (Send me up to 8186 bytes of the body)
mod_jk -> glassfish: 0x12 0x34 0x00 0x00 (end of body)
This sequence of messages repeats very rapidly, suggesting that the GlassFish side is not handling the end-of-body message from mod_jk.
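Decoded against AJP13 framing (container-to-server packets start with the magic 0x41 0x42 "AB", server-to-container packets with 0x12 0x34, each followed by a two-byte payload length), the captured exchange reads:

glassfish -> mod_jk:
  0x41 0x42   magic "AB"
  0x00 0x03   payload length = 3
  0x06        GET_BODY_CHUNK
  0x1f 0xfa   requested length = 8186 bytes

mod_jk -> glassfish:
  0x12 0x34   magic
  0x00 0x00   payload length = 0 (empty body chunk = end of request body)

So mod_jk keeps answering "no more body" and the container keeps asking for more, which matches the busy loop analyzed below.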

The request is to an asynchronous JAX-RS method via the @Suspended annotation on an AsyncResponse parameter. The body of the message is JSON and is being deserialised in the container into another method parameter.

I think the following is occurring:

  • The browser sends a POST request with a (in this case) 39260-byte body, but cancels it. This closes the SPDY stream and causes mod_spdy to report an end of file to mod_jk internally within Apache.
  • Meanwhile GlassFish begins to process the request. It has not read the whole request at this time. It invokes Jackson to read the request, which in turn reads from the InputStream that AjpHandlerFilter provides (see AjpHandlerFilter.handleRead in the stack trace).
  • The filter chain then appears to mishandle the AJP end-of-file message and instead continues to request more data. I didn't see AjpHandlerFilter in the read stack trace I captured, so maybe it isn't intercepting correctly.

Restarting either Apache or GlassFish resolves the problem. This bug may be related to https://java.net/jira/browse/GLASSFISH-21202, which seems to have a similar trigger but results in memory growth rather than the high CPU and thread exhaustion I am seeing.

I have included stack traces for the write and read phases of the conversation as captured by jstack. I have also included strace output for the beginning of the problem, and to show how it recovers once Apache is restarted:
Thread 12324: (state = IN_NATIVE)

  • sun.nio.ch.FileDispatcherImpl.write0(java.io.FileDescriptor, long, int) @bci=0 (Compiled frame; information may be imprecise)
  • sun.nio.ch.SocketDispatcher.write(java.io.FileDescriptor, long, int) @bci=4, line=47 (Compiled frame)
  • sun.nio.ch.IOUtil.writeFromNativeBuffer(java.io.FileDescriptor, java.nio.ByteBuffer, long, sun.nio.ch.NativeDispatcher) @bci=114, line=93 (Compiled frame)
  • sun.nio.ch.IOUtil.write(java.io.FileDescriptor, java.nio.ByteBuffer, long, sun.nio.ch.NativeDispatcher) @bci=12, line=51 (Compiled frame)
  • sun.nio.ch.SocketChannelImpl.write(java.nio.ByteBuffer) @bci=206, line=487 (Compiled frame)
  • org.glassfish.grizzly.nio.transport.TCPNIOUtils.flushByteBuffer(java.nio.channels.SocketChannel, java.nio.ByteBuffer) @bci=2, line=149 (Compiled frame)
  • org.glassfish.grizzly.nio.transport.TCPNIOUtils.writeSimpleBuffer(org.glassfish.grizzly.nio.transport.TCPNIOConnection, org.glassfish.grizzly.Buffer) @bci=130, line=133 (Compil
    ed frame)
  • org.glassfish.grizzly.nio.transport.TCPNIOTransport.write(org.glassfish.grizzly.nio.transport.TCPNIOConnection, org.glassfish.grizzly.asyncqueue.WritableMessage, org.glassfish.
    grizzly.WriteResult) @bci=53, line=680 (Compiled frame)
  • org.glassfish.grizzly.nio.transport.TCPNIOTemporarySelectorWriter.writeNow0(org.glassfish.grizzly.nio.NIOConnection, java.net.SocketAddress, org.glassfish.grizzly.asyncqueue.WritableMessage, org.glassfish.grizzly.WriteResult) @bci=14, line=65 (Compiled frame)
  • org.glassfish.grizzly.nio.tmpselectors.TemporarySelectorWriter.write0(org.glassfish.grizzly.nio.NIOConnection, java.net.SocketAddress, org.glassfish.grizzly.asyncqueue.WritableMessage, org.glassfish.grizzly.WriteResult, long, java.util.concurrent.TimeUnit) @bci=50, line=202 (Compiled frame)
  • org.glassfish.grizzly.nio.tmpselectors.TemporarySelectorWriter.write(org.glassfish.grizzly.Connection, java.net.SocketAddress, org.glassfish.grizzly.asyncqueue.WritableMessage, org.glassfish.grizzly.CompletionHandler, org.glassfish.grizzly.asyncqueue.PushBackHandler, long, java.util.concurrent.TimeUnit) @bci=71, line=153 (Compiled frame)
  • org.glassfish.grizzly.nio.tmpselectors.TemporarySelectorWriter.write(org.glassfish.grizzly.Connection, java.net.SocketAddress, org.glassfish.grizzly.asyncqueue.WritableMessage, org.glassfish.grizzly.CompletionHandler, org.glassfish.grizzly.asyncqueue.MessageCloner) @bci=19, line=78 (Compiled frame)
  • org.glassfish.grizzly.nio.tmpselectors.TemporarySelectorWriter.write(org.glassfish.grizzly.Connection, java.lang.Object, org.glassfish.grizzly.asyncqueue.WritableMessage, org.glassfish.grizzly.CompletionHandler, org.glassfish.grizzly.asyncqueue.MessageCloner) @bci=11, line=58 (Compiled frame)
  • org.glassfish.grizzly.AbstractWriter.write(org.glassfish.grizzly.Connection, java.lang.Object, org.glassfish.grizzly.asyncqueue.WritableMessage, org.glassfish.grizzly.CompletionHandler) @bci=10, line=105 (Compiled frame)
  • org.glassfish.grizzly.nio.transport.TCPNIOTransportFilter.handleWrite(org.glassfish.grizzly.filterchain.FilterChainContext) @bci=104, line=129 (Compiled frame)
  • org.glassfish.grizzly.filterchain.TransportFilter.handleWrite(org.glassfish.grizzly.filterchain.FilterChainContext) @bci=20, line=191 (Compiled frame)
  • org.glassfish.grizzly.filterchain.ExecutorResolver$8.execute(org.glassfish.grizzly.filterchain.Filter, org.glassfish.grizzly.filterchain.FilterChainContext) @bci=2, line=111 (Compiled frame)
  • org.glassfish.grizzly.filterchain.DefaultFilterChain.executeFilter(org.glassfish.grizzly.filterchain.FilterExecutor, org.glassfish.grizzly.filterchain.Filter, org.glassfish.grizzly.filterchain.FilterChainContext) @bci=38, line=284 (Compiled frame)
  • org.glassfish.grizzly.filterchain.DefaultFilterChain.executeChainPart(org.glassfish.grizzly.filterchain.FilterChainContext, org.glassfish.grizzly.filterchain.FilterExecutor, int, int, org.glassfish.grizzly.filterchain.DefaultFilterChain$FiltersState) @bci=48, line=201 (Compiled frame)
  • org.glassfish.grizzly.filterchain.DefaultFilterChain.execute(org.glassfish.grizzly.filterchain.FilterChainContext) @bci=50, line=133 (Compiled frame)
  • org.glassfish.grizzly.filterchain.DefaultFilterChain.process(org.glassfish.grizzly.Context) @bci=103, line=112 (Compiled frame)
  • org.glassfish.grizzly.ProcessorExecutor.execute(org.glassfish.grizzly.Context) @bci=53, line=77 (Compiled frame)
  • org.glassfish.grizzly.filterchain.FilterChainContext.write(java.lang.Object, java.lang.Object, org.glassfish.grizzly.CompletionHandler, org.glassfish.grizzly.asyncqueue.PushBackHandler, org.glassfish.grizzly.asyncqueue.MessageCloner, boolean) @bci=135, line=848 (Compiled frame)
  • org.glassfish.grizzly.filterchain.FilterChainContext.write(java.lang.Object) @bci=13, line=702 (Compiled frame)
  • org.glassfish.grizzly.http.ajp.AjpHandlerFilter.sendMoreDataRequestIfNeeded(org.glassfish.grizzly.filterchain.FilterChainContext) @bci=114, line=504 (Compiled frame)
  • org.glassfish.grizzly.http.ajp.AjpHandlerFilter.handleRead(org.glassfish.grizzly.filterchain.FilterChainContext) @bci=18, line=197 (Compiled frame)
  • org.glassfish.grizzly.filterchain.ExecutorResolver$9.execute(org.glassfish.grizzly.filterchain.Filter, org.glassfish.grizzly.filterchain.FilterChainContext) @bci=2, line=119 (Compiled frame)
  • org.glassfish.grizzly.filterchain.DefaultFilterChain.executeFilter(org.glassfish.grizzly.filterchain.FilterExecutor, org.glassfish.grizzly.filterchain.Filter, org.glassfish.grizzly.filterchain.FilterChainContext) @bci=38, line=284 (Compiled frame)
  • org.glassfish.grizzly.filterchain.DefaultFilterChain.executeChainPart(org.glassfish.grizzly.filterchain.FilterChainContext, org.glassfish.grizzly.filterchain.FilterExecutor, int, int, org.glassfish.grizzly.filterchain.DefaultFilterChain$FiltersState) @bci=48, line=201 (Compiled frame)
  • org.glassfish.grizzly.filterchain.DefaultFilterChain.read(org.glassfish.grizzly.filterchain.FilterChainContext) @bci=83, line=351 (Compiled frame)
  • org.glassfish.grizzly.filterchain.FilterChainContext.read() @bci=65, line=695 (Compiled frame)
  • org.glassfish.grizzly.http.io.InputBuffer.blockingRead() @bci=4, line=1119 (Compiled frame)
  • org.glassfish.grizzly.http.server.io.ServerInputBuffer.blockingRead() @bci=18, line=95 (Compiled frame)
  • org.glassfish.grizzly.http.io.InputBuffer.fill(int) @bci=23, line=1143 (Compiled frame)
  • org.glassfish.grizzly.http.io.InputBuffer.read(byte[], int, int) @bci=74, line=353 (Interpreted frame)
  • org.apache.catalina.connector.InputBuffer.read(byte[], int, int) @bci=33, line=267 (Interpreted frame)
  • org.apache.catalina.connector.CoyoteInputStream.read(byte[], int, int) @bci=97, line=270 (Interpreted frame)
  • org.glassfish.jersey.message.internal.EntityInputStream.read(byte[], int, int) @bci=7, line=101 (Interpreted frame)
  • com.fasterxml.jackson.core.json.UTF8StreamJsonParser.loadMore() @bci=48, line=178 (Interpreted frame)
  • com.fasterxml.jackson.core.json.UTF8StreamJsonParser.parseEscapedName(int[], int, int, int, int) @bci=268, line=1749 (Interpreted frame)
  • com.fasterxml.jackson.core.json.UTF8StreamJsonParser.slowParseName() @bci=64, line=1654 (Interpreted frame)
  • com.fasterxml.jackson.core.json.UTF8StreamJsonParser._parseName(int) @bci=78, line=1499 (Compiled frame)
  • com.fasterxml.jackson.core.json.UTF8StreamJsonParser.nextToken() @bci=255, line=700 (Compiled frame)
  • com.fasterxml.jackson.databind.deser.std.MapDeserializer._readAndBindStringMap(com.fasterxml.jackson.core.JsonParser, com.fasterxml.jackson.databind.DeserializationContext, java.util.Map) @bci=133, line=416 (Compiled frame)
  • com.fasterxml.jackson.databind.deser.std.MapDeserializer.deserialize(com.fasterxml.jackson.core.JsonParser, com.fasterxml.jackson.databind.DeserializationContext) @bci=143, line=312 (Interpreted frame)
  • com.fasterxml.jackson.databind.deser.std.MapDeserializer.deserialize(com.fasterxml.jackson.core.JsonParser, com.fasterxml.jackson.databind.DeserializationContext) @bci=3, line=26 (Interpreted frame)
  • com.fasterxml.jackson.databind.deser.SettableBeanProperty.deserialize(com.fasterxml.jackson.core.JsonParser, com.fasterxml.jackson.databind.DeserializationContext) @bci=59, line=525 (Interpreted frame)
  • com.fasterxml.jackson.databind.deser.impl.FieldProperty.deserializeAndSet(com.fasterxml.jackson.core.JsonParser, com.fasterxml.jackson.databind.DeserializationContext, java.lang.Object) @bci=5, line=106 (Interpreted frame)
  • com.fasterxml.jackson.databind.deser.BeanDeserializer.vanillaDeserialize(com.fasterxml.jackson.core.JsonParser, com.fasterxml.jackson.databind.DeserializationContext, com.fasterxml.jackson.core.JsonToken) @bci=53, line=242 (Compiled frame)
  • com.fasterxml.jackson.databind.deser.BeanDeserializer.deserialize(com.fasterxml.jackson.core.JsonParser, com.fasterxml.jackson.databind.DeserializationContext) @bci=26, line=118 (Compiled frame)
  • com.fasterxml.jackson.databind.ObjectReader._bind(com.fasterxml.jackson.core.JsonParser, java.lang.Object) @bci=128, line=1233 (Interpreted frame)
  • com.fasterxml.jackson.databind.ObjectReader.readValue(com.fasterxml.jackson.core.JsonParser) @bci=6, line=677 (Interpreted frame)
  • com.fasterxml.jackson.jaxrs.base.ProviderBase.readFrom(java.lang.Class, java.lang.reflect.Type, java.lang.annotation.Annotation[], javax.ws.rs.core.MediaType, javax.ws.rs.core.MultivaluedMap, java.io.InputStream) @bci=204, line=777 (Interpreted frame)
  • org.glassfish.jersey.message.internal.ReaderInterceptorExecutor$TerminalReaderInterceptor.invokeReadFrom(javax.ws.rs.ext.ReaderInterceptorContext, javax.ws.rs.ext.MessageBodyReader, org.glassfish.jersey.message.internal.EntityInputStream) @bci=61, line=251 (Interpreted frame)
  • org.glassfish.jersey.message.internal.ReaderInterceptorExecutor$TerminalReaderInterceptor.aroundReadFrom(javax.ws.rs.ext.ReaderInterceptorContext) @bci=292, line=229 (Interpreted frame)
  • org.glassfish.jersey.message.internal.ReaderInterceptorExecutor.proceed() @bci=46, line=149 (Interpreted frame)
  • org.glassfish.jersey.server.internal.MappableExceptionWrapperInterceptor.aroundReadFrom(javax.ws.rs.ext.ReaderInterceptorContext) @bci=1, line=73 (Interpreted frame)
  • org.glassfish.jersey.message.internal.ReaderInterceptorExecutor.proceed() @bci=46, line=149 (Interpreted frame)
  • org.glassfish.jersey.message.internal.MessageBodyFactory.readFrom(java.lang.Class, java.lang.reflect.Type, java.lang.annotation.Annotation[], javax.ws.rs.core.MediaType, javax.ws.rs.core.MultivaluedMap, org.glassfish.jersey.internal.PropertiesDelegate, java.io.InputStream, java.lang.Iterable, boolean) @bci=48, line=1124 (Interpreted frame)
  • org.glassfish.jersey.message.internal.InboundMessageContext.readEntity(java.lang.Class, java.lang.reflect.Type, java.lang.annotation.Annotation[], org.glassfish.jersey.internal.PropertiesDelegate) @bci=116, line=851 (Interpreted frame)
  • org.glassfish.jersey.server.ContainerRequest.readEntity(java.lang.Class, java.lang.reflect.Type, java.lang.annotation.Annotation[]) @bci=8, line=270 (Interpreted frame)
  • org.glassfish.jersey.server.internal.inject.EntityParamValueFactoryProvider$EntityValueFactory.provide() @bci=60, line=96 (Interpreted frame)
  • org.glassfish.jersey.server.spi.internal.ParameterValueHelper.getParameterValues(java.util.List) @bci=46, line=81 (Interpreted frame)
  • org.glassfish.jersey.server.model.internal.JavaResourceMethodDispatcherProvider$AbstractMethodParamInvoker.getParamValues() @bci=4, line=121 (Interpreted frame)
  • org.glassfish.jersey.server.model.internal.JavaResourceMethodDispatcherProvider$VoidOutInvoker.doDispatch(java.lang.Object, javax.ws.rs.core.Request) @bci=3, line=136 (Interpreted frame)
  • org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher.dispatch(java.lang.Object, org.glassfish.jersey.server.ContainerRequest) @bci=5, line=104 (Interpreted frame)
  • org.glassfish.jersey.server.model.ResourceMethodInvoker.invoke(org.glassfish.jersey.server.internal.process.RequestProcessingContext, java.lang.Object) @bci=28, line=387 (Interpreted frame)
  • org.glassfish.jersey.server.model.ResourceMethodInvoker.apply(org.glassfish.jersey.server.internal.process.RequestProcessingContext) @bci=97, line=331 (Interpreted frame)
  • org.glassfish.jersey.server.model.ResourceMethodInvoker.apply(java.lang.Object) @bci=5, line=103 (Interpreted frame)
  • org.glassfish.jersey.server.ServerRuntime$1.run() @bci=57, line=271 (Interpreted frame)
  • org.glassfish.jersey.internal.Errors$1.call() @bci=4, line=271 (Interpreted frame)
  • org.glassfish.jersey.internal.Errors$1.call() @bci=1, line=267 (Interpreted frame)
  • org.glassfish.jersey.internal.Errors.process(java.util.concurrent.Callable, boolean) @bci=36, line=315 (Interpreted frame)
  • org.glassfish.jersey.internal.Errors.process(org.glassfish.jersey.internal.util.Producer, boolean) @bci=2, line=297 (Interpreted frame)
  • org.glassfish.jersey.internal.Errors.process(java.lang.Runnable) @bci=9, line=267 (Interpreted frame)
  • org.glassfish.jersey.process.internal.RequestScope.runInScope(org.glassfish.jersey.process.internal.RequestScope$Instance, java.lang.Runnable) @bci=23, line=297 (Interpreted frame)
  • org.glassfish.jersey.server.ServerRuntime.process(org.glassfish.jersey.server.ContainerRequest) @bci=164, line=254 (Interpreted frame)
  • org.glassfish.jersey.server.ApplicationHandler.handle(org.glassfish.jersey.server.ContainerRequest) @bci=5, line=1028 (Interpreted frame)
  • org.glassfish.jersey.servlet.WebComponent.service(java.net.URI, java.net.URI, javax.servlet.http.HttpServletRequest, javax.servlet.http.HttpServletResponse) @bci=119, line=372 (Interpreted frame)
  • org.glassfish.jersey.servlet.ServletContainer.service(java.net.URI, java.net.URI, javax.servlet.http.HttpServletRequest, javax.servlet.http.HttpServletResponse) @bci=9, line=381 (Interpreted frame)
  • org.glassfish.jersey.servlet.ServletContainer.service(javax.servlet.http.HttpServletRequest, javax.servlet.http.HttpServletResponse) @bci=362, line=344 (Interpreted frame)
  • org.glassfish.jersey.servlet.ServletContainer.service(javax.servlet.ServletRequest, javax.servlet.ServletResponse) @bci=39, line=221 (Interpreted frame)
  • org.apache.catalina.core.StandardWrapper.service(javax.servlet.ServletRequest, javax.servlet.ServletResponse, javax.servlet.Servlet) @bci=122, line=1682 (Interpreted frame)
  • org.apache.catalina.core.StandardWrapperValve.invoke(org.apache.catalina.Request, org.apache.catalina.Response) @bci=694, line=318 (Interpreted frame)
  • org.apache.catalina.core.StandardContextValve.invoke(org.apache.catalina.Request, org.apache.catalina.Response) @bci=74, line=160 (Interpreted frame)
  • org.apache.catalina.core.StandardPipeline.doInvoke(org.apache.catalina.Request, org.apache.catalina.Response, boolean) @bci=168, line=734 (Interpreted frame)
  • org.apache.catalina.core.StandardPipeline.invoke(org.apache.catalina.Request, org.apache.catalina.Response) @bci=4, line=673 (Interpreted frame)
  • com.sun.enterprise.web.WebPipeline.invoke(org.apache.catalina.Request, org.apache.catalina.Response) @bci=99, line=99 (Interpreted frame)
  • org.apache.catalina.core.StandardHostValve.invoke(org.apache.catalina.Request, org.apache.catalina.Response) @bci=44, line=174 (Interpreted frame)
  • org.apache.catalina.connector.CoyoteAdapter.doService(org.glassfish.grizzly.http.server.Request, org.apache.catalina.connector.Request, org.glassfish.grizzly.http.server.Response, org.apache.catalina.connector.Response, boolean) @bci=425, line=415 (Interpreted frame)
  • org.apache.catalina.connector.CoyoteAdapter.service(org.glassfish.grizzly.http.server.Request, org.glassfish.grizzly.http.server.Response) @bci=183, line=282 (Interpreted frame)
  • com.sun.enterprise.v3.services.impl.ContainerMapper$HttpHandlerCallable.call() @bci=12, line=459 (Interpreted frame)
  • com.sun.enterprise.v3.services.impl.ContainerMapper.service(org.glassfish.grizzly.http.server.Request, org.glassfish.grizzly.http.server.Response) @bci=15, line=167 (Interpreted frame)
  • org.glassfish.grizzly.http.server.HttpHandler.runService(org.glassfish.grizzly.http.server.Request, org.glassfish.grizzly.http.server.Response) @bci=48, line=201 (Interpreted frame)
  • org.glassfish.grizzly.http.server.HttpHandler.doHandle(org.glassfish.grizzly.http.server.Request, org.glassfish.grizzly.http.server.Response) @bci=131, line=175 (Interpreted frame)
  • org.glassfish.grizzly.http.server.HttpServerFilter.handleRead(org.glassfish.grizzly.filterchain.FilterChainContext) @bci=348, line=235 (Interpreted frame)
  • org.glassfish.grizzly.filterchain.ExecutorResolver$9.execute(org.glassfish.grizzly.filterchain.Filter, org.glassfish.grizzly.filterchain.FilterChainContext) @bci=2, line=119 (Interpreted frame)
  • org.glassfish.grizzly.filterchain.DefaultFilterChain.executeFilter(org.glassfish.grizzly.filterchain.FilterExecutor, org.glassfish.grizzly.filterchain.Filter, org.glassfish.grizzly.filterchain.FilterChainContext) @bci=38, line=284 (Compiled frame)
  • org.glassfish.grizzly.filterchain.DefaultFilterChain.executeChainPart(org.glassfish.grizzly.filterchain.FilterChainContext, org.glassfish.grizzly.filterchain.FilterExecutor, int, int, org.glassfish.grizzly.filterchain.DefaultFilterChain$FiltersState) @bci=48, line=201 (Compiled frame)
  • org.glassfish.grizzly.filterchain.DefaultFilterChain.execute(org.glassfish.grizzly.filterchain.FilterChainContext) @bci=50, line=133 (Interpreted frame)
  • org.glassfish.grizzly.filterchain.DefaultFilterChain.process(org.glassfish.grizzly.Context) @bci=103, line=112 (Interpreted frame)
  • org.glassfish.grizzly.ProcessorExecutor.execute(org.glassfish.grizzly.Context) @bci=53, line=77 (Interpreted frame)
  • org.glassfish.grizzly.nio.transport.TCPNIOTransport.fireIOEvent(org.glassfish.grizzly.IOEvent, org.glassfish.grizzly.Connection, org.glassfish.grizzly.IOEventLifeCycleListener) @bci=69, line=561 (Interpreted frame)
  • org.glassfish.grizzly.strategies.AbstractIOStrategy.fireIOEvent(org.glassfish.grizzly.Connection, org.glassfish.grizzly.IOEvent, org.glassfish.grizzly.IOEventLifeCycleListener, java.util.logging.Logger) @bci=9, line=112 (Interpreted frame)
  • org.glassfish.grizzly.strategies.WorkerThreadIOStrategy.run0(org.glassfish.grizzly.Connection, org.glassfish.grizzly.IOEvent, org.glassfish.grizzly.IOEventLifeCycleListener) @bci=6, line=117 (Interpreted frame)
  • org.glassfish.grizzly.strategies.WorkerThreadIOStrategy.access$100(org.glassfish.grizzly.Connection, org.glassfish.grizzly.IOEvent, org.glassfish.grizzly.IOEventLifeCycleListener) @bci=3, line=56 (Interpreted frame)
  • org.glassfish.grizzly.strategies.WorkerThreadIOStrategy$WorkerThreadRunnable.run() @bci=12, line=137 (Interpreted frame)
  • org.glassfish.grizzly.threadpool.AbstractThreadPool$Worker.doWork() @bci=47, line=565 (Interpreted frame)
  • org.glassfish.grizzly.threadpool.AbstractThreadPool$Worker.run() @bci=9, line=545 (Interpreted frame)
  • java.lang.Thread.run() @bci=11, line=745 (Interpreted frame)

The read phase of the cycle looks like this:
Thread 12324: (state = IN_NATIVE)

  • sun.nio.ch.FileDispatcherImpl.read0(java.io.FileDescriptor, long, int) @bci=0 (Compiled frame; information may be imprecise)
  • sun.nio.ch.SocketDispatcher.read(java.io.FileDescriptor, long, int) @bci=4, line=39 (Compiled frame)
  • sun.nio.ch.IOUtil.readIntoNativeBuffer(java.io.FileDescriptor, java.nio.ByteBuffer, long, sun.nio.ch.NativeDispatcher) @bci=114, line=223 (Compiled frame)
  • sun.nio.ch.IOUtil.read(java.io.FileDescriptor, java.nio.ByteBuffer, long, sun.nio.ch.NativeDispatcher) @bci=29, line=192 (Compiled frame)
  • sun.nio.ch.SocketChannelImpl.read(java.nio.ByteBuffer) @bci=234, line=379 (Compiled frame)
  • org.glassfish.grizzly.nio.transport.TCPNIOUtils.readSimpleByteBuffer(org.glassfish.grizzly.nio.transport.TCPNIOConnection, java.nio.ByteBuffer) @bci=10, line=344 (Compiled frame)
  • org.glassfish.grizzly.nio.transport.TCPNIOUtils.allocateAndReadBuffer(org.glassfish.grizzly.nio.transport.TCPNIOConnection) @bci=55, line=237 (Compiled frame)
  • org.glassfish.grizzly.nio.transport.TCPNIOTransport.read(org.glassfish.grizzly.Connection, org.glassfish.grizzly.Buffer) @bci=22, line=618 (Compiled frame)
  • org.glassfish.grizzly.nio.transport.TCPNIOTemporarySelectorReader.readNow0(org.glassfish.grizzly.nio.NIOConnection, org.glassfish.grizzly.Buffer, org.glassfish.grizzly.ReadResult) @bci=25, line=65 (Compiled frame)
  • org.glassfish.grizzly.nio.tmpselectors.TemporarySelectorReader.read0(org.glassfish.grizzly.nio.NIOConnection, org.glassfish.grizzly.ReadResult, org.glassfish.grizzly.Buffer, long, java.util.concurrent.TimeUnit) @bci=39, line=171 (Compiled frame)
  • org.glassfish.grizzly.nio.tmpselectors.TemporarySelectorReader.read0(org.glassfish.grizzly.nio.NIOConnection, org.glassfish.grizzly.Interceptor, org.glassfish.grizzly.ReadResult, org.glassfish.grizzly.Buffer, long, java.util.concurrent.TimeUnit) @bci=20, line=141 (Compiled frame)
  • org.glassfish.grizzly.nio.tmpselectors.TemporarySelectorReader.read(org.glassfish.grizzly.Connection, org.glassfish.grizzly.Buffer, org.glassfish.grizzly.CompletionHandler, org.glassfish.grizzly.Interceptor, long, java.util.concurrent.TimeUnit) @bci=52, line=113 (Compiled frame)
  • org.glassfish.grizzly.nio.tmpselectors.TemporarySelectorReader.read(org.glassfish.grizzly.Connection, org.glassfish.grizzly.Buffer, org.glassfish.grizzly.CompletionHandler, org.glassfish.grizzly.Interceptor) @bci=18, line=75 (Compiled frame)
  • org.glassfish.grizzly.AbstractReader.read(org.glassfish.grizzly.Connection, org.glassfish.grizzly.Buffer) @bci=12, line=72 (Compiled frame)
  • org.glassfish.grizzly.nio.transport.TCPNIOTransportFilter.handleRead(org.glassfish.grizzly.filterchain.FilterChainContext) @bci=57, line=77 (Compiled frame)
  • org.glassfish.grizzly.filterchain.TransportFilter.handleRead(org.glassfish.grizzly.filterchain.FilterChainContext) @bci=20, line=173 (Compiled frame)
  • org.glassfish.grizzly.filterchain.ExecutorResolver$9.execute(org.glassfish.grizzly.filterchain.Filter, org.glassfish.grizzly.filterchain.FilterChainContext) @bci=2, line=119 (Compiled frame)
  • org.glassfish.grizzly.filterchain.DefaultFilterChain.executeFilter(org.glassfish.grizzly.filterchain.FilterExecutor, org.glassfish.grizzly.filterchain.Filter, org.glassfish.grizzly.filterchain.FilterChainContext) @bci=38, line=284 (Compiled frame)
  • org.glassfish.grizzly.filterchain.DefaultFilterChain.executeChainPart(org.glassfish.grizzly.filterchain.FilterChainContext, org.glassfish.grizzly.filterchain.FilterExecutor, int, int, org.glassfish.grizzly.filterchain.DefaultFilterChain$FiltersState) @bci=48, line=201 (Compiled frame)
  • org.glassfish.grizzly.filterchain.DefaultFilterChain.read(org.glassfish.grizzly.filterchain.FilterChainContext) @bci=83, line=351 (Compiled frame)
  • org.glassfish.grizzly.filterchain.FilterChainContext.read() @bci=65, line=695 (Compiled frame)
  • org.glassfish.grizzly.http.io.InputBuffer.blockingRead() @bci=4, line=1119 (Compiled frame)
  • org.glassfish.grizzly.http.server.io.ServerInputBuffer.blockingRead() @bci=18, line=95 (Compiled frame)
  • org.glassfish.grizzly.http.io.InputBuffer.fill(int) @bci=23, line=1143 (Compiled frame)
  • org.glassfish.grizzly.http.io.InputBuffer.read(byte[], int, int) @bci=74, line=353 (Interpreted frame)
  • org.apache.catalina.connector.InputBuffer.read(byte[], int, int) @bci=33, line=267 (Interpreted frame)
  • org.apache.catalina.connector.CoyoteInputStream.read(byte[], int, int) @bci=97, line=270 (Interpreted frame)
  • org.glassfish.jersey.message.internal.EntityInputStream.read(byte[], int, int) @bci=7, line=101 (Interpreted frame)
  • com.fasterxml.jackson.core.json.UTF8StreamJsonParser.loadMore() @bci=48, line=178 (Interpreted frame)
  • com.fasterxml.jackson.core.json.UTF8StreamJsonParser.parseEscapedName(int[], int, int, int, int) @bci=268, line=1749 (Interpreted frame)
  • com.fasterxml.jackson.core.json.UTF8StreamJsonParser.slowParseName() @bci=64, line=1654 (Interpreted frame)
  • com.fasterxml.jackson.core.json.UTF8StreamJsonParser._parseName(int) @bci=78, line=1499 (Compiled frame)
  • com.fasterxml.jackson.core.json.UTF8StreamJsonParser.nextToken() @bci=255, line=700 (Compiled frame)
  • com.fasterxml.jackson.databind.deser.std.MapDeserializer._readAndBindStringMap(com.fasterxml.jackson.core.JsonParser, com.fasterxml.jackson.databind.DeserializationContext, java.util.Map) @bci=133, line=416 (Compiled frame)
  • com.fasterxml.jackson.databind.deser.std.MapDeserializer.deserialize(com.fasterxml.jackson.core.JsonParser, com.fasterxml.jackson.databind.DeserializationContext) @bci=143, line=312 (Interpreted frame)
  • com.fasterxml.jackson.databind.deser.std.MapDeserializer.deserialize(com.fasterxml.jackson.core.JsonParser, com.fasterxml.jackson.databind.DeserializationContext) @bci=3, line=26 (Interpreted frame)
  • com.fasterxml.jackson.databind.deser.SettableBeanProperty.deserialize(com.fasterxml.jackson.core.JsonParser, com.fasterxml.jackson.databind.DeserializationContext) @bci=59, line=525 (Interpreted frame)
  • com.fasterxml.jackson.databind.deser.impl.FieldProperty.deserializeAndSet(com.fasterxml.jackson.core.JsonParser, com.fasterxml.jackson.databind.DeserializationContext, java.lang.Object) @bci=5, line=106 (Interpreted frame)
  • com.fasterxml.jackson.databind.deser.BeanDeserializer.vanillaDeserialize(com.fasterxml.jackson.core.JsonParser, com.fasterxml.jackson.databind.DeserializationContext, com.fasterxml.jackson.core.JsonToken) @bci=53, line=242 (Compiled frame)
  • com.fasterxml.jackson.databind.deser.BeanDeserializer.deserialize(com.fasterxml.jackson.core.JsonParser, com.fasterxml.jackson.databind.DeserializationContext) @bci=26, line=118 (Compiled frame)
  • com.fasterxml.jackson.databind.ObjectReader._bind(com.fasterxml.jackson.core.JsonParser, java.lang.Object) @bci=128, line=1233 (Interpreted frame)
  • com.fasterxml.jackson.databind.ObjectReader.readValue(com.fasterxml.jackson.core.JsonParser) @bci=6, line=677 (Interpreted frame)
  • com.fasterxml.jackson.jaxrs.base.ProviderBase.readFrom(java.lang.Class, java.lang.reflect.Type, java.lang.annotation.Annotation[], javax.ws.rs.core.MediaType, javax.ws.rs.core.MultivaluedMap, java.io.InputStream) @bci=204, line=777 (Interpreted frame)
  • org.glassfish.jersey.message.internal.ReaderInterceptorExecutor$TerminalReaderInterceptor.invokeReadFrom(javax.ws.rs.ext.ReaderInterceptorContext, javax.ws.rs.ext.MessageBodyReader, org.glassfish.jersey.message.internal.EntityInputStream) @bci=61, line=251 (Interpreted frame)
  • org.glassfish.jersey.message.internal.ReaderInterceptorExecutor$TerminalReaderInterceptor.aroundReadFrom(javax.ws.rs.ext.ReaderInterceptorContext) @bci=292, line=229 (Interpreted frame)
  • org.glassfish.jersey.message.internal.ReaderInterceptorExecutor.proceed() @bci=46, line=149 (Interpreted frame)
  • org.glassfish.jersey.server.internal.MappableExceptionWrapperInterceptor.aroundReadFrom(javax.ws.rs.ext.ReaderInterceptorContext) @bci=1, line=73 (Interpreted frame)
  • org.glassfish.jersey.message.internal.ReaderInterceptorExecutor.proceed() @bci=46, line=149 (Interpreted frame)
  • org.glassfish.jersey.message.internal.MessageBodyFactory.readFrom(java.lang.Class, java.lang.reflect.Type, java.lang.annotation.Annotation[], javax.ws.rs.core.MediaType, javax.ws.rs.core.MultivaluedMap, org.glassfish.jersey.internal.PropertiesDelegate, java.io.InputStream, java.lang.Iterable, boolean) @bci=48, line=1124 (Interpreted frame)
  • org.glassfish.jersey.message.internal.InboundMessageContext.readEntity(java.lang.Class, java.lang.reflect.Type, java.lang.annotation.Annotation[], org.glassfish.jersey.internal.PropertiesDelegate) @bci=116, line=851 (Interpreted frame)
  • org.glassfish.jersey.server.ContainerRequest.readEntity(java.lang.Class, java.lang.reflect.Type, java.lang.annotation.Annotation[]) @bci=8, line=270 (Interpreted frame)
  • org.glassfish.jersey.server.internal.inject.EntityParamValueFactoryProvider$EntityValueFactory.provide() @bci=60, line=96 (Interpreted frame)
  • org.glassfish.jersey.server.spi.internal.ParameterValueHelper.getParameterValues(java.util.List) @bci=46, line=81 (Interpreted frame)
  • org.glassfish.jersey.server.model.internal.JavaResourceMethodDispatcherProvider$AbstractMethodParamInvoker.getParamValues() @bci=4, line=121 (Interpreted frame)
  • org.glassfish.jersey.server.model.internal.JavaResourceMethodDispatcherProvider$VoidOutInvoker.doDispatch(java.lang.Object, javax.ws.rs.core.Request) @bci=3, line=136 (Interpreted frame)
  • org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher.dispatch(java.lang.Object, org.glassfish.jersey.server.ContainerRequest) @bci=5, line=104 (Interpreted frame)
  • org.glassfish.jersey.server.model.ResourceMethodInvoker.invoke(org.glassfish.jersey.server.internal.process.RequestProcessingContext, java.lang.Object) @bci=28, line=387 (Interpreted frame)
  • org.glassfish.jersey.server.model.ResourceMethodInvoker.apply(org.glassfish.jersey.server.internal.process.RequestProcessingContext) @bci=97, line=331 (Interpreted frame)
  • org.glassfish.jersey.server.model.ResourceMethodInvoker.apply(java.lang.Object) @bci=5, line=103 (Interpreted frame)
  • org.glassfish.jersey.server.ServerRuntime$1.run() @bci=57, line=271 (Interpreted frame)
  • org.glassfish.jersey.internal.Errors$1.call() @bci=4, line=271 (Interpreted frame)
  • org.glassfish.jersey.internal.Errors$1.call() @bci=1, line=267 (Interpreted frame)
  • org.glassfish.jersey.internal.Errors.process(java.util.concurrent.Callable, boolean) @bci=36, line=315 (Interpreted frame)
  • org.glassfish.jersey.internal.Errors.process(org.glassfish.jersey.internal.util.Producer, boolean) @bci=2, line=297 (Interpreted frame)
  • org.glassfish.jersey.internal.Errors.process(java.lang.Runnable) @bci=9, line=267 (Interpreted frame)
  • org.glassfish.jersey.process.internal.RequestScope.runInScope(org.glassfish.jersey.process.internal.RequestScope$Instance, java.lang.Runnable) @bci=23, line=297 (Interpreted frame)
  • org.glassfish.jersey.server.ServerRuntime.process(org.glassfish.jersey.server.ContainerRequest) @bci=164, line=254 (Interpreted frame)
  • org.glassfish.jersey.server.ApplicationHandler.handle(org.glassfish.jersey.server.ContainerRequest) @bci=5, line=1028 (Interpreted frame)
  • org.glassfish.jersey.servlet.WebComponent.service(java.net.URI, java.net.URI, javax.servlet.http.HttpServletRequest, javax.servlet.http.HttpServletResponse) @bci=119, line=372 (Interpreted frame)
  • org.glassfish.jersey.servlet.ServletContainer.service(java.net.URI, java.net.URI, javax.servlet.http.HttpServletRequest, javax.servlet.http.HttpServletResponse) @bci=9, line=381 (Interpreted frame)
  • org.glassfish.jersey.servlet.ServletContainer.service(javax.servlet.http.HttpServletRequest, javax.servlet.http.HttpServletResponse) @bci=362, line=344 (Interpreted frame)
  • org.glassfish.jersey.servlet.ServletContainer.service(javax.servlet.ServletRequest, javax.servlet.ServletResponse) @bci=39, line=221 (Interpreted frame)
  • org.apache.catalina.core.StandardWrapper.service(javax.servlet.ServletRequest, javax.servlet.ServletResponse, javax.servlet.Servlet) @bci=122, line=1682 (Interpreted frame)
  • org.apache.catalina.core.StandardWrapperValve.invoke(org.apache.catalina.Request, org.apache.catalina.Response) @bci=694, line=318 (Interpreted frame)
  • org.apache.catalina.core.StandardContextValve.invoke(org.apache.catalina.Request, org.apache.catalina.Response) @bci=74, line=160 (Interpreted frame)
  • org.apache.catalina.core.StandardPipeline.doInvoke(org.apache.catalina.Request, org.apache.catalina.Response, boolean) @bci=168, line=734 (Interpreted frame)
  • org.apache.catalina.core.StandardPipeline.invoke(org.apache.catalina.Request, org.apache.catalina.Response) @bci=4, line=673 (Interpreted frame)
  • com.sun.enterprise.web.WebPipeline.invoke(org.apache.catalina.Request, org.apache.catalina.Response) @bci=99, line=99 (Interpreted frame)
  • org.apache.catalina.core.StandardHostValve.invoke(org.apache.catalina.Request, org.apache.catalina.Response) @bci=44, line=174 (Interpreted frame)
  • org.apache.catalina.connector.CoyoteAdapter.doService(org.glassfish.grizzly.http.server.Request, org.apache.catalina.connector.Request, org.glassfish.grizzly.http.server.Response, org.apache.catalina.connector.Response, boolean) @bci=425, line=415 (Interpreted frame)
  • org.apache.catalina.connector.CoyoteAdapter.service(org.glassfish.grizzly.http.server.Request, org.glassfish.grizzly.http.server.Response) @bci=183, line=282 (Interpreted frame)
  • com.sun.enterprise.v3.services.impl.ContainerMapper$HttpHandlerCallable.call() @bci=12, line=459 (Interpreted frame)
  • com.sun.enterprise.v3.services.impl.ContainerMapper.service(org.glassfish.grizzly.http.server.Request, org.glassfish.grizzly.http.server.Response) @bci=15, line=167 (Interpreted frame)
  • org.glassfish.grizzly.http.server.HttpHandler.runService(org.glassfish.grizzly.http.server.Request, org.glassfish.grizzly.http.server.Response) @bci=48, line=201 (Interpreted frame)
  • org.glassfish.grizzly.http.server.HttpHandler.doHandle(org.glassfish.grizzly.http.server.Request, org.glassfish.grizzly.http.server.Response) @bci=131, line=175 (Interpreted frame)
  • org.glassfish.grizzly.http.server.HttpServerFilter.handleRead(org.glassfish.grizzly.filterchain.FilterChainContext) @bci=348, line=235 (Interpreted frame)
  • org.glassfish.grizzly.filterchain.ExecutorResolver$9.execute(org.glassfish.grizzly.filterchain.Filter, org.glassfish.grizzly.filterchain.FilterChainContext) @bci=2, line=119 (Interpreted frame)
  • org.glassfish.grizzly.filterchain.DefaultFilterChain.executeFilter(org.glassfish.grizzly.filterchain.FilterExecutor, org.glassfish.grizzly.filterchain.Filter, org.glassfish.grizzly.filterchain.FilterChainContext) @bci=38, line=284 (Compiled frame)
  • org.glassfish.grizzly.filterchain.DefaultFilterChain.executeChainPart(org.glassfish.grizzly.filterchain.FilterChainContext, org.glassfish.grizzly.filterchain.FilterExecutor, int, int, org.glassfish.grizzly.filterchain.DefaultFilterChain$FiltersState) @bci=48, line=201 (Compiled frame)
  • org.glassfish.grizzly.filterchain.DefaultFilterChain.execute(org.glassfish.grizzly.filterchain.FilterChainContext) @bci=50, line=133 (Interpreted frame)
  • org.glassfish.grizzly.filterchain.DefaultFilterChain.process(org.glassfish.grizzly.Context) @bci=103, line=112 (Interpreted frame)
  • org.glassfish.grizzly.ProcessorExecutor.execute(org.glassfish.grizzly.Context) @bci=53, line=77 (Interpreted frame)
  • org.glassfish.grizzly.nio.transport.TCPNIOTransport.fireIOEvent(org.glassfish.grizzly.IOEvent, org.glassfish.grizzly.Connection, org.glassfish.grizzly.IOEventLifeCycleListener) @bci=69, line=561 (Interpreted frame)
  • org.glassfish.grizzly.strategies.AbstractIOStrategy.fireIOEvent(org.glassfish.grizzly.Connection, org.glassfish.grizzly.IOEvent, org.glassfish.grizzly.IOEventLifeCycleListener, java.util.logging.Logger) @bci=9, line=112 (Interpreted frame)
  • org.glassfish.grizzly.strategies.WorkerThreadIOStrategy.run0(org.glassfish.grizzly.Connection, org.glassfish.grizzly.IOEvent, org.glassfish.grizzly.IOEventLifeCycleListener) @bci=6, line=117 (Interpreted frame)
  • org.glassfish.grizzly.strategies.WorkerThreadIOStrategy.access$100(org.glassfish.grizzly.Connection, org.glassfish.grizzly.IOEvent, org.glassfish.grizzly.IOEventLifeCycleListener) @bci=3, line=56 (Interpreted frame)
  • org.glassfish.grizzly.strategies.WorkerThreadIOStrategy$WorkerThreadRunnable.run() @bci=12, line=137 (Interpreted frame)
  • org.glassfish.grizzly.threadpool.AbstractThreadPool$Worker.doWork() @bci=47, line=565 (Interpreted frame)
  • org.glassfish.grizzly.threadpool.AbstractThreadPool$Worker.run() @bci=9, line=545 (Interpreted frame)
  • java.lang.Thread.run() @bci=11, line=745 (Interpreted frame)

strace output on the glassfish http-listener-1 thread:
futex(0x7fc2b8219454, FUTEX_WAKE_OP_PRIVATE, 1, 1, 0x7fc2b8219450, {FUTEX_OP_SET, 0, FUTEX_OP_CMP_GT, 1}) = 1
futex(0x7fc2b8219428, FUTEX_WAKE_PRIVATE, 1) = 1
read(480, "\0224\4a\2\4\0\10HTTP/1.1\0\0\34/base/resourc"..., 242337) = 8027
read(484, "\0224\4[\2\4\0\10HTTP/1.1\0\0\34/base/resourc"..., 242337) = 9311
write(484, "AB\0\3\6\37\372", 7) = 7
read(484, "\0224\0\0", 242337) = 4
write(484, "AB\0\3\6\37\372", 7) = 7
read(484, "\0224\0\0", 242337) = 4
write(484, "AB\0\3\6\37\372", 7) = 7
read(484, "\0224\0\0", 242337) = 4
write(484, "AB\0\3\6\37\372", 7) = 7
read(484, "\0224\0\0", 242337) = 4
.... many more repeats until I restart apache ...
futex(0x7fc2b8219454, FUTEX_WAKE_OP_PRIVATE, 1, 1, 0x7fc2b8219450, {FUTEX_OP_SET, 0, FUTEX_OP_CMP_GT, 1}) = 1
futex(0x7fc2b8219428, FUTEX_WAKE_PRIVATE, 1) = 1
read(480, "\0224\4a\2\4\0\10HTTP/1.1\0\0\34/base/resourc"..., 242337) = 8027
read(484, "\0224\4[\2\4\0\10HTTP/1.1\0\0\34/base/resourc"..., 242337) = 9311
write(484, "AB\0\3\6\37\372", 7) = 7
read(484, "\0224\0\0", 242337) = 4
write(484, "AB\0\3\6\37\372", 7) = 7
read(484, "\0224\0\0", 242337) = 4
write(484, "AB\0\3\6\37\372", 7) = 7
read(484, "\0224\0\0", 242337) = 4
write(484, "AB\0\3\6\37\372", 7) = 7
read(484, "\0224\0\0", 242337) = 4



 Comments   
Comment by fuzzyBSc2 [ 11/Dec/14 ]

Oops, sorry. The recovery portion of the strace when apache is restarted should be this:
read(484, 0x7fc261e29ff0, 242337) = -1 EAGAIN (Resource temporarily unavailable)
epoll_ctl(488, EPOLL_CTL_ADD, 484, {EPOLLIN, {u32=484, u64=9290625383554613732}}) = 0
epoll_ctl(488, EPOLL_CTL_MOD, 484, {EPOLLIN, {u32=484, u64=9290625383554613732}}) = 0
epoll_wait(488, {{EPOLLIN|EPOLLERR|EPOLLHUP, {u32=484, u64=9290625383554613732}}}, 4096, 30000) = 1
read(484, 0x7fc261e29ff0, 242337) = -1 ECONNRESET (Connection reset by peer)
write(441, "\1", 1) = 1
epoll_ctl(488, EPOLL_CTL_DEL, 484, {0, {u32=484, u64=484}}) = -1 ENOENT (No such file or directory)
close(484) = 0
epoll_wait(488, {}, 4096, 0) = 0
write(2, "[#|2014-12-09T15:42:00.719+1000|"..., 8192) = -1 EPIPE (Broken pipe)
--- SIGPIPE (Broken pipe) @ 0 (0) ---
write(2, "(DefaultFilterChain.java:133)\n\ta"..., 955) = -1 EPIPE (Broken pipe)
--- SIGPIPE (Broken pipe) @ 0 (0) ---

Comment by fuzzyBSc2 [ 11/Dec/14 ]

Note: I have tried applying the patches from GLASSFISH-21202 to see if they resolve this problem, but currently they appear to have been prepared for glassfish 4.0 only, not 4.1.

Comment by oleksiys [ 12/Dec/14 ]

Please try this patch:
https://dl.dropboxusercontent.com/u/7319744/glassfish-4.1/glassfishv41-patch.zip

Comment by fuzzyBSc2 [ 15/Dec/14 ]

Thanks,

I have applied this patch and retested. I have not been able to reproduce the problem with this patch in place.

Comment by fuzzyBSc2 [ 16/Dec/14 ]

If you don't mind, just a procedural question: Is there any way I could trace this binary patch to a change in the source tree? Part of the open source policy of the company I work for is that I need to show how I would build the software from source if required.

Comment by oleksiys [ 16/Dec/14 ]

You can track it on GitHub:
https://github.com/GrizzlyNIO/grizzly-mirror/tree/glassfish41





[GLASSFISH-21312] java.io.IOException: java.util.concurrent.TimeoutException comes up from time to time Created: 23/Feb/15  Updated: 25/Feb/15

Status: Open
Project: glassfish
Component/s: web_container
Affects Version/s: 4.1
Fix Version/s: None

Type: Bug Priority: Major
Reporter: nabizamani Assignee: oleksiys
Resolution: Unresolved Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified
Environment:

Ubuntu 14.04 LTS Server x64, java 1.8.0_31 + JCE Unlimited Strength, GlassFish Server Open Source Edition 4.1 (build 13)



 Description   

For a client I migrated from GlassFish 3.1.2.2 to the latest GlassFish 4.1, and now I suddenly see java.util.concurrent.TimeoutException entries (see logs below). I can't tell where they come from. I checked all the log files from GF 3.1.2.2, and there I can't find even a single such entry...

[2015-02-23T11:06:43.377+0100] [glassfish 4.1] [WARNING] [] [javax.enterprise.web] [tid: _ThreadID=27 _ThreadName=http-listener-1(2)] [timeMillis: 1424686003377] [levelValue: 900] [[
StandardWrapperValve[default]: Servlet.service() for servlet default threw exception
java.io.IOException: java.util.concurrent.TimeoutException
at org.glassfish.grizzly.http.io.OutputBuffer.blockAfterWriteIfNeeded(OutputBuffer.java:973)
at org.glassfish.grizzly.http.io.OutputBuffer.write(OutputBuffer.java:686)
at org.apache.catalina.connector.OutputBuffer.writeBytes(OutputBuffer.java:355)
at org.apache.catalina.connector.OutputBuffer.write(OutputBuffer.java:342)
at org.apache.catalina.connector.CoyoteOutputStream.write(CoyoteOutputStream.java:161)
at org.apache.catalina.servlets.DefaultServlet.copy(DefaultServlet.java:2199)
at org.apache.catalina.servlets.DefaultServlet.serveResource(DefaultServlet.java:1085)
at org.apache.catalina.servlets.DefaultServlet.doGet(DefaultServlet.java:568)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:687)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:790)
at org.apache.catalina.core.StandardWrapper.service(StandardWrapper.java:1682)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:344)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:214)
at org.apache.struts2.dispatcher.ng.filter.StrutsPrepareAndExecuteFilter.doFilter(StrutsPrepareAndExecuteFilter.java:96)
at org.apache.struts2.dispatcher.ng.filter.StrutsPrepareAndExecuteFilter.doFilter(StrutsPrepareAndExecuteFilter.java:96)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:256)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:214)
at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:316)
at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:160)
at org.apache.catalina.core.StandardPipeline.doInvoke(StandardPipeline.java:734)
at org.apache.catalina.core.StandardPipeline.invoke(StandardPipeline.java:673)
at com.sun.enterprise.web.WebPipeline.invoke(WebPipeline.java:99)
at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:174)
at org.apache.catalina.connector.CoyoteAdapter.doService(CoyoteAdapter.java:415)
at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:282)
at com.sun.enterprise.v3.services.impl.ContainerMapper$HttpHandlerCallable.call(ContainerMapper.java:459)
at com.sun.enterprise.v3.services.impl.ContainerMapper.service(ContainerMapper.java:167)
at org.glassfish.grizzly.http.server.HttpHandler.runService(HttpHandler.java:201)
at org.glassfish.grizzly.http.server.HttpHandler.doHandle(HttpHandler.java:175)
at org.glassfish.grizzly.http.server.HttpServerFilter.handleRead(HttpServerFilter.java:235)
at org.glassfish.grizzly.filterchain.ExecutorResolver$9.execute(ExecutorResolver.java:119)
at org.glassfish.grizzly.filterchain.DefaultFilterChain.executeFilter(DefaultFilterChain.java:284)
at org.glassfish.grizzly.filterchain.DefaultFilterChain.executeChainPart(DefaultFilterChain.java:201)
at org.glassfish.grizzly.filterchain.DefaultFilterChain.execute(DefaultFilterChain.java:133)
at org.glassfish.grizzly.filterchain.DefaultFilterChain.process(DefaultFilterChain.java:112)
at org.glassfish.grizzly.ProcessorExecutor.execute(ProcessorExecutor.java:77)
at org.glassfish.grizzly.nio.transport.TCPNIOTransport.fireIOEvent(TCPNIOTransport.java:561)
at org.glassfish.grizzly.strategies.AbstractIOStrategy.fireIOEvent(AbstractIOStrategy.java:112)
at org.glassfish.grizzly.strategies.WorkerThreadIOStrategy.run0(WorkerThreadIOStrategy.java:117)
at org.glassfish.grizzly.strategies.WorkerThreadIOStrategy.access$100(WorkerThreadIOStrategy.java:56)
at org.glassfish.grizzly.strategies.WorkerThreadIOStrategy$WorkerThreadRunnable.run(WorkerThreadIOStrategy.java:137)
at org.glassfish.grizzly.threadpool.AbstractThreadPool$Worker.doWork(AbstractThreadPool.java:565)
at org.glassfish.grizzly.threadpool.AbstractThreadPool$Worker.run(AbstractThreadPool.java:545)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.util.concurrent.TimeoutException
at org.glassfish.grizzly.impl.SafeFutureImpl$Sync.innerGet(SafeFutureImpl.java:357)
at org.glassfish.grizzly.impl.SafeFutureImpl.get(SafeFutureImpl.java:264)
at org.glassfish.grizzly.http.io.OutputBuffer.blockAfterWriteIfNeeded(OutputBuffer.java:962)
... 42 more
]]



 Comments   
Comment by oleksiys [ 23/Feb/15 ]

This exception appears when the server can't write data to a client; most probably the client or the network can't keep up with the server's pace.
You're right that in 3.1.2 we threw a different exception, but the logic is the same.

Do you see a particular problem related to this Exception?
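
To make the failure mode concrete, here is a minimal, hypothetical reproduction sketch (host, port, and path are placeholders, not taken from this report): a plain Java client that sends a request and then deliberately stops reading the response. Once the TCP window and the server's write queue fill up, the blocking write on the server side is left waiting until its timeout expires.

import java.io.OutputStream;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

public class SlowClient {
    public static void main(String[] args) throws Exception {
        // Point this at any large static resource on the server under test.
        try (Socket socket = new Socket("localhost", 8080)) {
            OutputStream out = socket.getOutputStream();
            out.write(("GET /some/large/file HTTP/1.1\r\n"
                     + "Host: localhost\r\n\r\n").getBytes(StandardCharsets.US_ASCII));
            out.flush();
            // Never read the response: the client's receive buffer fills, the
            // server's send buffer backs up, and the server's blocking write
            // stalls until its write timeout fires.
            Thread.sleep(10 * 60 * 1000L);
        }
    }
}

Whether this surfaces as java.util.concurrent.TimeoutException or as a broken pipe presumably depends on whether the server's write timeout or the client's disconnect comes first.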

Comment by nabizamani [ 23/Feb/15 ]

As of now I don't see any real problem other than an annoying amount of logs.
I also have some other new log entries which we never had in GlassFish 3.1.2.2:

  • java.lang.IllegalStateException: getOutputStream() has already been called for this response
  • java.lang.IllegalStateException: Unknown JCDI-enabled managed bean org.apache.struts2.views.jsp.TextTag@ee5597c of class class org.apache.struts2.views.jsp.TextTag

I believe the latter one occurs during a restart of the server, or perhaps while undeploying or deploying.
I will monitor this over the next days and let you know about my findings.

Comment by oleksiys [ 23/Feb/15 ]

Well, I think we can reduce the logging level for this exception.
Regarding:

java.lang.IllegalStateException: getOutputStream() has already been called for this response

It looks like you call response.getWriter() after response.getOutputStream() was already called. You can't combine byte- and character-based writes on the same response, which is why you see the exception.
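
A minimal sketch of that conflict (a hypothetical servlet, not code from this report): once getOutputStream() has been called on a response, a later getWriter() on the same response fails with exactly the IllegalStateException quoted above.

import java.io.IOException;
import java.nio.charset.StandardCharsets;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class MixedOutputServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
        // Byte-based output claims the response first...
        resp.getOutputStream().write("bytes first".getBytes(StandardCharsets.UTF_8));
        // ...so the Servlet spec requires this call to throw
        // java.lang.IllegalStateException: getOutputStream() has already been called.
        resp.getWriter().println("then characters");
    }
}

The same rule applies across dispatches: if the servlet that failed already obtained the output stream, a forwarded or included error page that needs the writer will hit the same exception.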

Comment by nabizamani [ 23/Feb/15 ]

Yes, I figured that. However, we did not change anything in the jsp that seems to produce this exception. Like I said, I will monitor and analyze things and post my findings here...

Comment by nabizamani [ 24/Feb/15 ]

I think I have some findings:

I somehow get java.io.IOException: java.util.concurrent.TimeoutException.
I cannot really reproduce it; it just happens from time to time...
Because of the exception, my error page defined in web.xml gets called:

web.xml (just the relevant part):

<!-- catch all errors -->
<error-page>
    <location>/WEB-INF/jsp/error/error.jsp</location>
</error-page>

/WEB-INF/jsp/error/error.jsp :

<%@page isErrorPage="true" session="false"
%><%@ page import="org.apache.log4j.Logger"
%><%@ page trimDirectiveWhitespaces="true" 
%><%
response.setStatus(response.getStatus());   //or some other code
%><html>
    <head>
        <title>Error <%= response.getStatus() %></title>
    </head>
    <body>
        <h2 style="color:red;">Error <%= response.getStatus() %></h2>
    </body>
</html>
<%
Logger log = Logger.getLogger("error");
String uri = (String) request.getAttribute("javax.servlet.error.request_uri");
String info = "(ip='"+request.getRemoteAddr()+"' x-forwarded-for='"+request.getHeader("x-forwarded-for")+"')";
String qs = request.getQueryString();
if (qs != null)
    log.error("HTTP Status "+response.getStatus() +" for request '"+uri+"?"+qs+"' "+info);
else
    log.error("HTTP Status "+response.getStatus() +" for request '"+uri+"' "+info);
%>

So the error.jsp is called, and right after that I get this error in the log file:
java.lang.IllegalStateException: getOutputStream() has already been called for this response

I guess that is because GlassFish has either already sent "some" response or because the connection is closed (i.e. by the peer?). What exactly does the timeout mean? What kind of timeout are we talking about?

However, I still get the log entries I have in the error.jsp. This is an example log entry:
2015-02-24 17:27:44,985 ERROR error._jspService:76 - HTTP Status 200 for request '/assets/plugins/bootstrap/css/bootstrap.css' (ip='41.216.47.46' x-forwarded-for='null')]]

As you can see, the HTTP status code is 200 (and that is correct, because the file is on the server), yet the runtime still threw an error, I guess because the connection was closed or something similar. I believe the issue here is that a TimeoutException is thrown (perhaps because the connection was closed for some reason) and passed all the way up. Do you know if there is some change in the code somewhere here between 3.1.2.2 and 4.1? I can also see some "java.io.IOException: Broken pipe" exceptions in the logs (I believe this is related somehow).
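
One way to quiet the secondary IllegalStateException, sketched here as a plain servlet error handler under the assumption that logging is the only thing the error page must still do (the TimeoutException itself is a separate matter): check response.isCommitted() and skip rendering when part of the original response has already gone out.

import java.io.IOException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class ErrorLoggingServlet extends HttpServlet {
    @Override
    protected void service(HttpServletRequest req, HttpServletResponse resp) throws IOException {
        String uri = (String) req.getAttribute("javax.servlet.error.request_uri");
        // Logging never touches the response, so it is always safe.
        System.err.println("HTTP Status " + resp.getStatus() + " for request '" + uri + "'");
        if (resp.isCommitted()) {
            // The aborted download already committed (and partly sent) the
            // response; writing a body now would only raise more exceptions.
            return;
        }
        resp.getWriter().println("<html><body><h2>Error " + resp.getStatus()
                + "</h2></body></html>");
    }
}

Note this is only a sketch: if the failing servlet used getOutputStream(), even an uncommitted response may still refuse getWriter(), so a real handler might prefer the output stream for the fallback body.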

A log file with some relevant parts is attached.

Comment by nabizamani [ 24/Feb/15 ]

Sorry, I can't attach files... So here are some logs:

[2015-02-24T17:27:44.984+0100] [glassfish 4.1] [WARNING] [] [javax.enterprise.web] [tid: _ThreadID=36 _ThreadName=http-listener-2(2)] [timeMillis: 1424795264984] [levelValue: 900] [[
StandardWrapperValve[default]: Servlet.service() for servlet default threw exception
java.io.IOException: java.util.concurrent.TimeoutException
at org.glassfish.grizzly.http.io.OutputBuffer.blockAfterWriteIfNeeded(OutputBuffer.java:973)
at org.glassfish.grizzly.http.io.OutputBuffer.write(OutputBuffer.java:686)
at org.apache.catalina.connector.OutputBuffer.writeBytes(OutputBuffer.java:355)
at org.apache.catalina.connector.OutputBuffer.write(OutputBuffer.java:342)
at org.apache.catalina.connector.CoyoteOutputStream.write(CoyoteOutputStream.java:161)
at org.apache.catalina.servlets.DefaultServlet.copy(DefaultServlet.java:2199)
at org.apache.catalina.servlets.DefaultServlet.serveResource(DefaultServlet.java:1085)
at org.apache.catalina.servlets.DefaultServlet.doGet(DefaultServlet.java:568)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:687)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:790)
at org.apache.catalina.core.StandardWrapper.service(StandardWrapper.java:1682)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:344)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:214)
at org.apache.struts2.dispatcher.ng.filter.StrutsPrepareAndExecuteFilter.doFilter(StrutsPrepareAndExecuteFilter.java:96)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:256)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:214)
at com.web.filter.ForceHttpsFilter.doFilter(ForceHttpsFilter.java:51)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:256)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:214)
at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:316)
at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:160)
at org.apache.catalina.core.StandardPipeline.doInvoke(StandardPipeline.java:734)
at org.apache.catalina.core.StandardPipeline.invoke(StandardPipeline.java:673)
at com.sun.enterprise.web.WebPipeline.invoke(WebPipeline.java:99)
at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:174)
at org.apache.catalina.connector.CoyoteAdapter.doService(CoyoteAdapter.java:415)
at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:282)
at com.sun.enterprise.v3.services.impl.ContainerMapper$HttpHandlerCallable.call(ContainerMapper.java:459)
at com.sun.enterprise.v3.services.impl.ContainerMapper.service(ContainerMapper.java:167)
at org.glassfish.grizzly.http.server.HttpHandler.runService(HttpHandler.java:201)
at org.glassfish.grizzly.http.server.HttpHandler.doHandle(HttpHandler.java:175)
at org.glassfish.grizzly.http.server.HttpServerFilter.handleRead(HttpServerFilter.java:235)
at org.glassfish.grizzly.filterchain.ExecutorResolver$9.execute(ExecutorResolver.java:119)
at org.glassfish.grizzly.filterchain.DefaultFilterChain.executeFilter(DefaultFilterChain.java:284)
at org.glassfish.grizzly.filterchain.DefaultFilterChain.executeChainPart(DefaultFilterChain.java:201)
at org.glassfish.grizzly.filterchain.DefaultFilterChain.execute(DefaultFilterChain.java:133)
at org.glassfish.grizzly.filterchain.DefaultFilterChain.process(DefaultFilterChain.java:112)
at org.glassfish.grizzly.ProcessorExecutor.execute(ProcessorExecutor.java:77)
at org.glassfish.grizzly.nio.transport.TCPNIOTransport.fireIOEvent(TCPNIOTransport.java:561)
at org.glassfish.grizzly.strategies.AbstractIOStrategy.fireIOEvent(AbstractIOStrategy.java:112)
at org.glassfish.grizzly.strategies.WorkerThreadIOStrategy.run0(WorkerThreadIOStrategy.java:117)
at org.glassfish.grizzly.strategies.WorkerThreadIOStrategy.access$100(WorkerThreadIOStrategy.java:56)
at org.glassfish.grizzly.strategies.WorkerThreadIOStrategy$WorkerThreadRunnable.run(WorkerThreadIOStrategy.java:137)
at org.glassfish.grizzly.threadpool.AbstractThreadPool$Worker.doWork(AbstractThreadPool.java:565)
at org.glassfish.grizzly.threadpool.AbstractThreadPool$Worker.run(AbstractThreadPool.java:545)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.util.concurrent.TimeoutException
at org.glassfish.grizzly.impl.SafeFutureImpl$Sync.innerGet(SafeFutureImpl.java:357)
at org.glassfish.grizzly.impl.SafeFutureImpl.get(SafeFutureImpl.java:264)
at org.glassfish.grizzly.http.io.OutputBuffer.blockAfterWriteIfNeeded(OutputBuffer.java:962)
... 45 more
]]

[2015-02-24T17:27:44.986+0100] [glassfish 4.1] [INFO] [] [] [tid: _ThreadID=36 _ThreadName=Thread-8] [timeMillis: 1424795264986] [levelValue: 800] [[
2015-02-24 17:27:44,985 ERROR error._jspService:76 - HTTP Status 200 for request '/assets/plugins/bootstrap/css/bootstrap.css' (ip='41.216.47.46' x-forwarded-for='null')]]

[2015-02-24T17:27:44.986+0100] [glassfish 4.1] [WARNING] [] [javax.enterprise.web.core] [tid: _ThreadID=36 _ThreadName=http-listener-2(2)] [timeMillis: 1424795264986] [levelValue: 900] [[
Servlet.service() for servlet jsp threw exception
java.lang.IllegalStateException: getOutputStream() has already been called for this response
at org.apache.catalina.connector.Response.getWriter(Response.java:777)
at org.apache.catalina.connector.ResponseFacade.getWriter(ResponseFacade.java:224)
at javax.servlet.ServletResponseWrapper.getWriter(ServletResponseWrapper.java:152)
at org.apache.jasper.runtime.JspWriterImpl.initOut(JspWriterImpl.java:195)
at org.apache.jasper.runtime.JspWriterImpl.flushBuffer(JspWriterImpl.java:188)
at org.apache.jasper.runtime.PageContextImpl.release(PageContextImpl.java:240)
at org.apache.jasper.runtime.JspFactoryImpl.internalReleasePageContext(JspFactoryImpl.java:185)
at org.apache.jasper.runtime.JspFactoryImpl.releasePageContext(JspFactoryImpl.java:137)
at org.apache.jsp.WEB_002dINF.jsp.error.error_jsp._jspService(error_jsp.java:88)
at org.apache.jasper.runtime.HttpJspBase.service(HttpJspBase.java:111)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:790)
at org.apache.jasper.servlet.JspServletWrapper.service(JspServletWrapper.java:411)
at org.apache.jasper.servlet.JspServlet.serviceJspFile(JspServlet.java:473)
at org.apache.jasper.servlet.JspServlet.service(JspServlet.java:377)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:790)
at org.apache.catalina.core.StandardWrapper.service(StandardWrapper.java:1682)
at org.apache.catalina.core.ApplicationDispatcher.doInvoke(ApplicationDispatcher.java:875)
at org.apache.catalina.core.ApplicationDispatcher.invoke(ApplicationDispatcher.java:739)
at org.apache.catalina.core.ApplicationDispatcher.doInclude(ApplicationDispatcher.java:695)
at org.apache.catalina.core.ApplicationDispatcher.include(ApplicationDispatcher.java:626)
at org.apache.catalina.core.StandardHostValve.custom(StandardHostValve.java:500)
at org.apache.catalina.core.StandardHostValve.dispatchToErrorPage(StandardHostValve.java:699)
at org.apache.catalina.core.StandardHostValve.throwable(StandardHostValve.java:309)
at org.apache.catalina.core.StandardHostValve.postInvoke(StandardHostValve.java:232)
at org.apache.catalina.connector.CoyoteAdapter.doService(CoyoteAdapter.java:417)
at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:282)
at com.sun.enterprise.v3.services.impl.ContainerMapper$HttpHandlerCallable.call(ContainerMapper.java:459)
at com.sun.enterprise.v3.services.impl.ContainerMapper.service(ContainerMapper.java:167)
at org.glassfish.grizzly.http.server.HttpHandler.runService(HttpHandler.java:201)
at org.glassfish.grizzly.http.server.HttpHandler.doHandle(HttpHandler.java:175)
at org.glassfish.grizzly.http.server.HttpServerFilter.handleRead(HttpServerFilter.java:235)
at org.glassfish.grizzly.filterchain.ExecutorResolver$9.execute(ExecutorResolver.java:119)
at org.glassfish.grizzly.filterchain.DefaultFilterChain.executeFilter(DefaultFilterChain.java:284)
at org.glassfish.grizzly.filterchain.DefaultFilterChain.executeChainPart(DefaultFilterChain.java:201)
at org.glassfish.grizzly.filterchain.DefaultFilterChain.execute(DefaultFilterChain.java:133)
at org.glassfish.grizzly.filterchain.DefaultFilterChain.process(DefaultFilterChain.java:112)
at org.glassfish.grizzly.ProcessorExecutor.execute(ProcessorExecutor.java:77)
at org.glassfish.grizzly.nio.transport.TCPNIOTransport.fireIOEvent(TCPNIOTransport.java:561)
at org.glassfish.grizzly.strategies.AbstractIOStrategy.fireIOEvent(AbstractIOStrategy.java:112)
at org.glassfish.grizzly.strategies.WorkerThreadIOStrategy.run0(WorkerThreadIOStrategy.java:117)
at org.glassfish.grizzly.strategies.WorkerThreadIOStrategy.access$100(WorkerThreadIOStrategy.java:56)
at org.glassfish.grizzly.strategies.WorkerThreadIOStrategy$WorkerThreadRunnable.run(WorkerThreadIOStrategy.java:137)
at org.glassfish.grizzly.threadpool.AbstractThreadPool$Worker.doWork(AbstractThreadPool.java:565)
at org.glassfish.grizzly.threadpool.AbstractThreadPool$Worker.run(AbstractThreadPool.java:545)
at java.lang.Thread.run(Thread.java:745)
]]

[2015-02-24T17:27:44.987+0100] [glassfish 4.1] [WARNING] [] [javax.enterprise.web] [tid: _ThreadID=36 _ThreadName=http-listener-2(2)] [timeMillis: 1424795264987] [levelValue: 900] [[
org.apache.catalina.core.StandardHostValve@6d45b284: Exception Processing ErrorPage[errorCode=0, location=/WEB-INF/jsp/error/error.jsp]
java.lang.IllegalStateException: getOutputStream() has already been called for this response
at org.apache.catalina.connector.Response.getWriter(Response.java:777)
at org.apache.catalina.connector.ResponseFacade.getWriter(ResponseFacade.java:224)
at javax.servlet.ServletResponseWrapper.getWriter(ServletResponseWrapper.java:152)
at org.apache.jasper.runtime.JspWriterImpl.initOut(JspWriterImpl.java:195)
at org.apache.jasper.runtime.JspWriterImpl.flushBuffer(JspWriterImpl.java:188)
at org.apache.jasper.runtime.PageContextImpl.release(PageContextImpl.java:240)
at org.apache.jasper.runtime.JspFactoryImpl.internalReleasePageContext(JspFactoryImpl.java:185)
at org.apache.jasper.runtime.JspFactoryImpl.releasePageContext(JspFactoryImpl.java:137)
at org.apache.jsp.WEB_002dINF.jsp.error.error_jsp._jspService(error_jsp.java:88)
at org.apache.jasper.runtime.HttpJspBase.service(HttpJspBase.java:111)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:790)
at org.apache.jasper.servlet.JspServletWrapper.service(JspServletWrapper.java:411)
at org.apache.jasper.servlet.JspServlet.serviceJspFile(JspServlet.java:473)
at org.apache.jasper.servlet.JspServlet.service(JspServlet.java:377)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:790)
at org.apache.catalina.core.StandardWrapper.service(StandardWrapper.java:1682)
at org.apache.catalina.core.ApplicationDispatcher.doInvoke(ApplicationDispatcher.java:875)
at org.apache.catalina.core.ApplicationDispatcher.invoke(ApplicationDispatcher.java:739)
at org.apache.catalina.core.ApplicationDispatcher.doInclude(ApplicationDispatcher.java:695)
at org.apache.catalina.core.ApplicationDispatcher.include(ApplicationDispatcher.java:626)
at org.apache.catalina.core.StandardHostValve.custom(StandardHostValve.java:500)
at org.apache.catalina.core.StandardHostValve.dispatchToErrorPage(StandardHostValve.java:699)
at org.apache.catalina.core.StandardHostValve.throwable(StandardHostValve.java:309)
at org.apache.catalina.core.StandardHostValve.postInvoke(StandardHostValve.java:232)
at org.apache.catalina.connector.CoyoteAdapter.doService(CoyoteAdapter.java:417)
at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:282)
at com.sun.enterprise.v3.services.impl.ContainerMapper$HttpHandlerCallable.call(ContainerMapper.java:459)
at com.sun.enterprise.v3.services.impl.ContainerMapper.service(ContainerMapper.java:167)
at org.glassfish.grizzly.http.server.HttpHandler.runService(HttpHandler.java:201)
at org.glassfish.grizzly.http.server.HttpHandler.doHandle(HttpHandler.java:175)
at org.glassfish.grizzly.http.server.HttpServerFilter.handleRead(HttpServerFilter.java:235)
at org.glassfish.grizzly.filterchain.ExecutorResolver$9.execute(ExecutorResolver.java:119)
at org.glassfish.grizzly.filterchain.DefaultFilterChain.executeFilter(DefaultFilterChain.java:284)
at org.glassfish.grizzly.filterchain.DefaultFilterChain.executeChainPart(DefaultFilterChain.java:201)
at org.glassfish.grizzly.filterchain.DefaultFilterChain.execute(DefaultFilterChain.java:133)
at org.glassfish.grizzly.filterchain.DefaultFilterChain.process(DefaultFilterChain.java:112)
at org.glassfish.grizzly.ProcessorExecutor.execute(ProcessorExecutor.java:77)
at org.glassfish.grizzly.nio.transport.TCPNIOTransport.fireIOEvent(TCPNIOTransport.java:561)
at org.glassfish.grizzly.strategies.AbstractIOStrategy.fireIOEvent(AbstractIOStrategy.java:112)
at org.glassfish.grizzly.strategies.WorkerThreadIOStrategy.run0(WorkerThreadIOStrategy.java:117)
at org.glassfish.grizzly.strategies.WorkerThreadIOStrategy.access$100(WorkerThreadIOStrategy.java:56)
at org.glassfish.grizzly.strategies.WorkerThreadIOStrategy$WorkerThreadRunnable.run(WorkerThreadIOStrategy.java:137)
at org.glassfish.grizzly.threadpool.AbstractThreadPool$Worker.doWork(AbstractThreadPool.java:565)
at org.glassfish.grizzly.threadpool.AbstractThreadPool$Worker.run(AbstractThreadPool.java:545)
at java.lang.Thread.run(Thread.java:745)
]]

[2015-02-24T18:03:01.398+0100] [glassfish 4.1] [INFO] [] [] [tid: _ThreadID=38 _ThreadName=Thread-8] [timeMillis: 1424797381398] [levelValue: 800] [[
2015-02-24 18:03:01,397 ERROR error._jspService:76 - HTTP Status 404 for request '/robots.txt' (ip='77.66.121.243' x-forwarded-for='null')]]

[2015-02-24T18:12:38.665+0100] [glassfish 4.1] [WARNING] [] [javax.enterprise.web] [tid: _ThreadID=39 _ThreadName=http-listener-2(5)] [timeMillis: 1424797958665] [levelValue: 900] [[
StandardWrapperValve[default]: Servlet.service() for servlet default threw exception
java.io.IOException: Broken pipe
at sun.nio.ch.FileDispatcherImpl.write0(Native Method)
at sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:47)
at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:93)
at sun.nio.ch.IOUtil.write(IOUtil.java:51)
at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:470)
at org.glassfish.grizzly.nio.transport.TCPNIOUtils.flushByteBuffer(TCPNIOUtils.java:149)
at org.glassfish.grizzly.nio.transport.TCPNIOUtils.writeSimpleBuffer(TCPNIOUtils.java:133)
at org.glassfish.grizzly.nio.transport.TCPNIOAsyncQueueWriter.write0(TCPNIOAsyncQueueWriter.java:109)
at org.glassfish.grizzly.nio.transport.TCPNIOAsyncQueueWriter.write0(TCPNIOAsyncQueueWriter.java:91)
at org.glassfish.grizzly.nio.AbstractNIOAsyncQueueWriter.write(AbstractNIOAsyncQueueWriter.java:261)
at org.glassfish.grizzly.nio.AbstractNIOAsyncQueueWriter.write(AbstractNIOAsyncQueueWriter.java:170)
at org.glassfish.grizzly.nio.AbstractNIOAsyncQueueWriter.write(AbstractNIOAsyncQueueWriter.java:70)
at org.glassfish.grizzly.nio.transport.TCPNIOTransportFilter.handleWrite(TCPNIOTransportFilter.java:126)
at org.glassfish.grizzly.filterchain.TransportFilter.handleWrite(TransportFilter.java:191)
at org.glassfish.grizzly.ssl.SSLBaseFilter$SSLTransportFilterWrapper.handleWrite(SSLBaseFilter.java:1004)
at org.glassfish.grizzly.filterchain.ExecutorResolver$8.execute(ExecutorResolver.java:111)
at org.glassfish.grizzly.filterchain.DefaultFilterChain.executeFilter(DefaultFilterChain.java:284)
at org.glassfish.grizzly.filterchain.DefaultFilterChain.executeChainPart(DefaultFilterChain.java:201)
at org.glassfish.grizzly.filterchain.DefaultFilterChain.execute(DefaultFilterChain.java:133)
at org.glassfish.grizzly.filterchain.DefaultFilterChain.process(DefaultFilterChain.java:112)
at org.glassfish.grizzly.ProcessorExecutor.execute(ProcessorExecutor.java:77)
at org.glassfish.grizzly.filterchain.FilterChainContext.write(FilterChainContext.java:848)
at org.glassfish.grizzly.ssl.SSLBaseFilter.handleWrite(SSLBaseFilter.java:332)
at org.glassfish.grizzly.filterchain.ExecutorResolver$8.execute(ExecutorResolver.java:111)
at org.glassfish.grizzly.filterchain.DefaultFilterChain.executeFilter(DefaultFilterChain.java:284)
at org.glassfish.grizzly.filterchain.DefaultFilterChain.executeChainPart(DefaultFilterChain.java:201)
at org.glassfish.grizzly.filterchain.DefaultFilterChain.execute(DefaultFilterChain.java:133)
at org.glassfish.grizzly.filterchain.DefaultFilterChain.process(DefaultFilterChain.java:112)
at org.glassfish.grizzly.ProcessorExecutor.execute(ProcessorExecutor.java:77)
at org.glassfish.grizzly.filterchain.FilterChainContext.write(FilterChainContext.java:848)
at org.glassfish.grizzly.filterchain.FilterChainContext.write(FilterChainContext.java:817)
at org.glassfish.grizzly.http.io.OutputBuffer.flushBuffer(OutputBuffer.java:1024)
at org.glassfish.grizzly.http.io.OutputBuffer.write(OutputBuffer.java:680)
at org.apache.catalina.connector.OutputBuffer.writeBytes(OutputBuffer.java:355)
at org.apache.catalina.connector.OutputBuffer.write(OutputBuffer.java:342)
at org.apache.catalina.connector.CoyoteOutputStream.write(CoyoteOutputStream.java:161)
at org.apache.catalina.servlets.DefaultServlet.copyRange(DefaultServlet.java:2477)
at org.apache.catalina.servlets.DefaultServlet.copy(DefaultServlet.java:2212)
at org.apache.catalina.servlets.DefaultServlet.serveResource(DefaultServlet.java:1085)
at org.apache.catalina.servlets.DefaultServlet.doGet(DefaultServlet.java:568)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:687)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:790)
at org.apache.catalina.core.StandardWrapper.service(StandardWrapper.java:1682)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:344)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:214)
at org.apache.struts2.dispatcher.ng.filter.StrutsPrepareAndExecuteFilter.doFilter(StrutsPrepareAndExecuteFilter.java:96)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:256)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:214)
at com.web.filter.ForceHttpsFilter.doFilter(ForceHttpsFilter.java:51)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:256)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:214)
at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:316)
at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:160)
at org.apache.catalina.core.StandardPipeline.doInvoke(StandardPipeline.java:734)
at org.apache.catalina.core.StandardPipeline.invoke(StandardPipeline.java:673)
at com.sun.enterprise.web.WebPipeline.invoke(WebPipeline.java:99)
at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:174)
at org.apache.catalina.connector.CoyoteAdapter.doService(CoyoteAdapter.java:415)
at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:282)
at com.sun.enterprise.v3.services.impl.ContainerMapper$HttpHandlerCallable.call(ContainerMapper.java:459)
at com.sun.enterprise.v3.services.impl.ContainerMapper.service(ContainerMapper.java:167)
at org.glassfish.grizzly.http.server.HttpHandler.runService(HttpHandler.java:201)
at org.glassfish.grizzly.http.server.HttpHandler.doHandle(HttpHandler.java:175)
at org.glassfish.grizzly.http.server.HttpServerFilter.handleRead(HttpServerFilter.java:235)
at org.glassfish.grizzly.filterchain.ExecutorResolver$9.execute(ExecutorResolver.java:119)
at org.glassfish.grizzly.filterchain.DefaultFilterChain.executeFilter(DefaultFilterChain.java:284)
at org.glassfish.grizzly.filterchain.DefaultFilterChain.executeChainPart(DefaultFilterChain.java:201)
at org.glassfish.grizzly.filterchain.DefaultFilterChain.execute(DefaultFilterChain.java:133)
at org.glassfish.grizzly.filterchain.DefaultFilterChain.process(DefaultFilterChain.java:112)
at org.glassfish.grizzly.ProcessorExecutor.execute(ProcessorExecutor.java:77)
at org.glassfish.grizzly.nio.transport.TCPNIOTransport.fireIOEvent(TCPNIOTransport.java:561)
at org.glassfish.grizzly.strategies.AbstractIOStrategy.fireIOEvent(AbstractIOStrategy.java:112)
at org.glassfish.grizzly.strategies.WorkerThreadIOStrategy.run0(WorkerThreadIOStrategy.java:117)
at org.glassfish.grizzly.strategies.WorkerThreadIOStrategy.access$100(WorkerThreadIOStrategy.java:56)
at org.glassfish.grizzly.strategies.WorkerThreadIOStrategy$WorkerThreadRunnable.run(WorkerThreadIOStrategy.java:137)
at org.glassfish.grizzly.threadpool.AbstractThreadPool$Worker.doWork(AbstractThreadPool.java:565)
at org.glassfish.grizzly.threadpool.AbstractThreadPool$Worker.run(AbstractThreadPool.java:545)
at java.lang.Thread.run(Thread.java:745)
]]

[2015-02-24T18:12:38.666+0100] [glassfish 4.1] [INFO] [] [] [tid: _ThreadID=39 _ThreadName=Thread-8] [timeMillis: 1424797958666] [levelValue: 800] [[
2015-02-24 18:12:38,666 ERROR error._jspService:76 - HTTP Status 200 for request '/download/img/prifile.jpg' (ip='23.229.97.36' x-forwarded-for='null')]]

[2015-02-24T18:12:38.666+0100] [glassfish 4.1] [WARNING] [] [javax.enterprise.web.core] [tid: _ThreadID=39 _ThreadName=http-listener-2(5)] [timeMillis: 1424797958666] [levelValue: 900] [[
Servlet.service() for servlet jsp threw exception
java.lang.IllegalStateException: getOutputStream() has already been called for this response
at org.apache.catalina.connector.Response.getWriter(Response.java:777)
at org.apache.catalina.connector.ResponseFacade.getWriter(ResponseFacade.java:224)
at javax.servlet.ServletResponseWrapper.getWriter(ServletResponseWrapper.java:152)
at org.apache.jasper.runtime.JspWriterImpl.initOut(JspWriterImpl.java:195)
at org.apache.jasper.runtime.JspWriterImpl.flushBuffer(JspWriterImpl.java:188)
at org.apache.jasper.runtime.PageContextImpl.release(PageContextImpl.java:240)
at org.apache.jasper.runtime.JspFactoryImpl.internalReleasePageContext(JspFactoryImpl.java:185)
at org.apache.jasper.runtime.JspFactoryImpl.releasePageContext(JspFactoryImpl.java:137)
at org.apache.jsp.WEB_002dINF.jsp.error.error_jsp._jspService(error_jsp.java:88)
at org.apache.jasper.runtime.HttpJspBase.service(HttpJspBase.java:111)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:790)
at org.apache.jasper.servlet.JspServletWrapper.service(JspServletWrapper.java:411)
at org.apache.jasper.servlet.JspServlet.serviceJspFile(JspServlet.java:473)
at org.apache.jasper.servlet.JspServlet.service(JspServlet.java:377)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:790)
at org.apache.catalina.core.StandardWrapper.service(StandardWrapper.java:1682)
at org.apache.catalina.core.ApplicationDispatcher.doInvoke(ApplicationDispatcher.java:875)
at org.apache.catalina.core.ApplicationDispatcher.invoke(ApplicationDispatcher.java:739)
at org.apache.catalina.core.ApplicationDispatcher.doInclude(ApplicationDispatcher.java:695)
at org.apache.catalina.core.ApplicationDispatcher.include(ApplicationDispatcher.java:626)
at org.apache.catalina.core.StandardHostValve.custom(StandardHostValve.java:500)
at org.apache.catalina.core.StandardHostValve.dispatchToErrorPage(StandardHostValve.java:699)
at org.apache.catalina.core.StandardHostValve.throwable(StandardHostValve.java:309)
at org.apache.catalina.core.StandardHostValve.postInvoke(StandardHostValve.java:232)
at org.apache.catalina.connector.CoyoteAdapter.doService(CoyoteAdapter.java:417)
at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:282)
at com.sun.enterprise.v3.services.impl.ContainerMapper$HttpHandlerCallable.call(ContainerMapper.java:459)
at com.sun.enterprise.v3.services.impl.ContainerMapper.service(ContainerMapper.java:167)
at org.glassfish.grizzly.http.server.HttpHandler.runService(HttpHandler.java:201)
at org.glassfish.grizzly.http.server.HttpHandler.doHandle(HttpHandler.java:175)
at org.glassfish.grizzly.http.server.HttpServerFilter.handleRead(HttpServerFilter.java:235)
at org.glassfish.grizzly.filterchain.ExecutorResolver$9.execute(ExecutorResolver.java:119)
at org.glassfish.grizzly.filterchain.DefaultFilterChain.executeFilter(DefaultFilterChain.java:284)
at org.glassfish.grizzly.filterchain.DefaultFilterChain.executeChainPart(DefaultFilterChain.java:201)
at org.glassfish.grizzly.filterchain.DefaultFilterChain.execute(DefaultFilterChain.java:133)
at org.glassfish.grizzly.filterchain.DefaultFilterChain.process(DefaultFilterChain.java:112)
at org.glassfish.grizzly.ProcessorExecutor.execute(ProcessorExecutor.java:77)
at org.glassfish.grizzly.nio.transport.TCPNIOTransport.fireIOEvent(TCPNIOTransport.java:561)
at org.glassfish.grizzly.strategies.AbstractIOStrategy.fireIOEvent(AbstractIOStrategy.java:112)
at org.glassfish.grizzly.strategies.WorkerThreadIOStrategy.run0(WorkerThreadIOStrategy.java:117)
at org.glassfish.grizzly.strategies.WorkerThreadIOStrategy.access$100(WorkerThreadIOStrategy.java:56)
at org.glassfish.grizzly.strategies.WorkerThreadIOStrategy$WorkerThreadRunnable.run(WorkerThreadIOStrategy.java:137)
at org.glassfish.grizzly.threadpool.AbstractThreadPool$Worker.doWork(AbstractThreadPool.java:565)
at org.glassfish.grizzly.threadpool.AbstractThreadPool$Worker.run(AbstractThreadPool.java:545)
at java.lang.Thread.run(Thread.java:745)
]]

[2015-02-24T18:12:38.667+0100] [glassfish 4.1] [WARNING] [] [javax.enterprise.web] [tid: _ThreadID=39 _ThreadName=http-listener-2(5)] [timeMillis: 1424797958667] [levelValue: 900] [[
org.apache.catalina.core.StandardHostValve@6d45b284: Exception Processing ErrorPage[errorCode=0, location=/WEB-INF/jsp/error/error.jsp]
java.lang.IllegalStateException: getOutputStream() has already been called for this response
at org.apache.catalina.connector.Response.getWriter(Response.java:777)
at org.apache.catalina.connector.ResponseFacade.getWriter(ResponseFacade.java:224)
at javax.servlet.ServletResponseWrapper.getWriter(ServletResponseWrapper.java:152)
at org.apache.jasper.runtime.JspWriterImpl.initOut(JspWriterImpl.java:195)
at org.apache.jasper.runtime.JspWriterImpl.flushBuffer(JspWriterImpl.java:188)
at org.apache.jasper.runtime.PageContextImpl.release(PageContextImpl.java:240)
at org.apache.jasper.runtime.JspFactoryImpl.internalReleasePageContext(JspFactoryImpl.java:185)
at org.apache.jasper.runtime.JspFactoryImpl.releasePageContext(JspFactoryImpl.java:137)
at org.apache.jsp.WEB_002dINF.jsp.error.error_jsp._jspService(error_jsp.java:88)
at org.apache.jasper.runtime.HttpJspBase.service(HttpJspBase.java:111)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:790)
at org.apache.jasper.servlet.JspServletWrapper.service(JspServletWrapper.java:411)
at org.apache.jasper.servlet.JspServlet.serviceJspFile(JspServlet.java:473)
at org.apache.jasper.servlet.JspServlet.service(JspServlet.java:377)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:790)
at org.apache.catalina.core.StandardWrapper.service(StandardWrapper.java:1682)
at org.apache.catalina.core.ApplicationDispatcher.doInvoke(ApplicationDispatcher.java:875)
at org.apache.catalina.core.ApplicationDispatcher.invoke(ApplicationDispatcher.java:739)
at org.apache.catalina.core.ApplicationDispatcher.doInclude(ApplicationDispatcher.java:695)
at org.apache.catalina.core.ApplicationDispatcher.include(ApplicationDispatcher.java:626)
at org.apache.catalina.core.StandardHostValve.custom(StandardHostValve.java:500)
at org.apache.catalina.core.StandardHostValve.dispatchToErrorPage(StandardHostValve.java:699)
at org.apache.catalina.core.StandardHostValve.throwable(StandardHostValve.java:309)
at org.apache.catalina.core.StandardHostValve.postInvoke(StandardHostValve.java:232)
at org.apache.catalina.connector.CoyoteAdapter.doService(CoyoteAdapter.java:417)
at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:282)
at com.sun.enterprise.v3.services.impl.ContainerMapper$HttpHandlerCallable.call(ContainerMapper.java:459)
at com.sun.enterprise.v3.services.impl.ContainerMapper.service(ContainerMapper.java:167)
at org.glassfish.grizzly.http.server.HttpHandler.runService(HttpHandler.java:201)
at org.glassfish.grizzly.http.server.HttpHandler.doHandle(HttpHandler.java:175)
at org.glassfish.grizzly.http.server.HttpServerFilter.handleRead(HttpServerFilter.java:235)
at org.glassfish.grizzly.filterchain.ExecutorResolver$9.execute(ExecutorResolver.java:119)
at org.glassfish.grizzly.filterchain.DefaultFilterChain.executeFilter(DefaultFilterChain.java:284)
at org.glassfish.grizzly.filterchain.DefaultFilterChain.executeChainPart(DefaultFilterChain.java:201)
at org.glassfish.grizzly.filterchain.DefaultFilterChain.execute(DefaultFilterChain.java:133)
at org.glassfish.grizzly.filterchain.DefaultFilterChain.process(DefaultFilterChain.java:112)
at org.glassfish.grizzly.ProcessorExecutor.execute(ProcessorExecutor.java:77)
at org.glassfish.grizzly.nio.transport.TCPNIOTransport.fireIOEvent(TCPNIOTransport.java:561)
at org.glassfish.grizzly.strategies.AbstractIOStrategy.fireIOEvent(AbstractIOStrategy.java:112)
at org.glassfish.grizzly.strategies.WorkerThreadIOStrategy.run0(WorkerThreadIOStrategy.java:117)
at org.glassfish.grizzly.strategies.WorkerThreadIOStrategy.access$100(WorkerThreadIOStrategy.java:56)
at org.glassfish.grizzly.strategies.WorkerThreadIOStrategy$WorkerThreadRunnable.run(WorkerThreadIOStrategy.java:137)
at org.glassfish.grizzly.threadpool.AbstractThreadPool$Worker.doWork(AbstractThreadPool.java:565)
at org.glassfish.grizzly.threadpool.AbstractThreadPool$Worker.run(AbstractThreadPool.java:545)
at java.lang.Thread.run(Thread.java:745)
]]

[2015-02-24T18:49:08.494+0100] [glassfish 4.1] [INFO] [] [] [tid: _ThreadID=30 _ThreadName=Thread-8] [timeMillis: 1424800148494] [levelValue: 800] [[
2015-02-24 18:49:08,493 ERROR error._jspService:76 - HTTP Status 404 for request '/WEB-INF/tiles.xml' (ip='202.46.62.153' x-forwarded-for='null')]]
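For reference, the IllegalStateException in the traces above is the Servlet API rule that a response exposes either getOutputStream() or getWriter(), never both on the same response. A minimal sketch that triggers the same failure (a hypothetical servlet, not the application's actual code):

import java.io.IOException;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class MixedOutputServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        // The download path writes raw bytes first...
        resp.getOutputStream().write(new byte[]{1, 2, 3});
        // ...so any later getWriter() on the same response - which is exactly
        // what the JSP error page does - throws IllegalStateException.
        resp.getWriter().println("error page output");
    }
}

That is why dispatching to error.jsp after the download servlet has already opened the output stream fails with the trace above.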

Comment by nabizamani [ 24/Feb/15 ]

I can confirm the assumptions I made above, and I can reproduce it now:

1. I assume you have access to a remote GlassFish server - NOT ON YOUR OWN MACHINE!
2. Deploy some large file onto that server, which you download via browser in the next step - large enough that you have time to disable the network connection (i.e. wifi) on your local machine before the download finishes.
3. Use your browser to download that large file after you have deployed it.
4. Once the download has started, abruptly disable your network connection (on your local machine) before the download finishes - that simulates a lost network connection.
5. Now check the log files; they will be pretty close to what you can see in my logs.

That means:
1. From time to time, people accessing our machine simply lose their connection while requesting/downloading some content (that is not unusual).
2. In GF 4.1 these events are raised/logged as exceptions; it was not like this in GF 3.1.2.2.
3. This can cause unwanted behavior, see my explanations in the previous comments. Was there some related change in the Java EE 7 specs?

I don't think dispatching to the error-page defined in web.xml is a good decision when a client has simply lost network connectivity (which is what throws the exception). I am not even sure it's worth logging at all. Others might have different opinions...

Can you reproduce the issue? What do you think?
I think the caller of the Grizzly API should swallow these exceptions, unless there was a change in the Java EE specs...
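
A minimal sketch of that workaround at the application level, assuming a plain file-serving servlet (class and resource names are illustrative, not the reporter's code): catch the IOException from the copy loop so the container never sees it and never dispatches to the error page.

import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class DownloadServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        resp.setContentType("application/octet-stream");
        try (InputStream in = getServletContext()
                .getResourceAsStream("/WEB-INF/files/large.bin");
             OutputStream out = resp.getOutputStream()) {
            byte[] buf = new byte[8192];
            for (int n; (n = in.read(buf)) != -1; ) {
                out.write(buf, 0, n);
            }
        } catch (IOException clientAborted) {
            // Client went away mid-download (broken pipe / connection reset).
            // Swallow it: rethrowing makes the container log a warning and try
            // to render the error page on an already-committed response.
        }
    }
}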

Comment by oleksiys [ 25/Feb/15 ]

I changed the error message in Grizzly to be more understandable: the TimeoutException will be hidden and the user will see only an IOException with a better description.
Otherwise I'll redirect this issue to the web container, since it's the web container that logs the exception and tries to serve the error page.





[GLASSFISH-7109] [JDK ISSUE] GlassFish performance degradation caused by invoking setSoLinger/setReuseAddress Created: 30/Jan/09  Updated: 06/Mar/12

Status: Open
Project: glassfish
Component/s: grizzly-kernel
Affects Version/s: v2.1
Fix Version/s: not determined

Type: Bug Priority: Minor
Reporter: jfarcand Assignee: oleksiys
Resolution: Unresolved Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified
Environment:

Operating System: All
Platform: All
URL: http://bugs.sun.com/view_bug.do?bug_id=6799574


Issuezilla Id: 7,109
Status Whiteboard:

V2.1.1_exclude


 Description   

[Umbrella bug for JDK 5/6/7 issue 6799574]

Grizzly suffers performance degradation when setSoLinger and setReuseAddress
start throwing the following exception:

[#|2009-01-26T00:33:56.325-0800|WARNING|sun-appserver9.1|javax.enterprise.system.container.web|_ThreadID=17;_ThreadName=SelectorReaderThread-8084;_RequestID=11ae0030-c392-4217-8408-cfa7efe0a879;|setSoLinger
exception
java.net.SocketException: Invalid argument
at sun.nio.ch.Net.setIntOption0(Native Method)
at sun.nio.ch.Net.setSocketOption(Net.java:261)
at sun.nio.ch.SocketChannelImpl.setOption(SocketChannelImpl.java:166)
at sun.nio.ch.SocketAdaptor.setIntOption(SocketAdaptor.java:296)
at sun.nio.ch.SocketAdaptor.setSoLinger(SocketAdaptor.java:331)
at com.sun.enterprise.web.connector.grizzly.SelectorThread.setSocketOptions(SelectorThread.java:1893)
at com.sun.enterprise.web.connector.grizzly.SelectorReadThread.registerNewChannels(SelectorReadThread.java:93)
at com.sun.enterprise.web.connector.grizzly.SelectorReadThread.startEndpoint(SelectorReadThread.java:121)
at com.sun.enterprise.web.connector.grizzly.SelectorThread.run(SelectorThread.java:1223)

#]

[#|2009-01-26T00:33:56.327-0800|WARNING|sun-appserver9.1|javax.enterprise.system.container.web|_ThreadID=17;_ThreadName=SelectorReaderThread-8084;_RequestID=11ae0030-c392-4217-8408-cfa7efe0a879;|setReuseAddress
exception
java.net.SocketException: Invalid argument
at sun.nio.ch.Net.setIntOption0(Native Method)
at sun.nio.ch.Net.setSocketOption(Net.java:261)
at sun.nio.ch.SocketChannelImpl.setOption(SocketChannelImpl.java:166)
at sun.nio.ch.SocketAdaptor.setBooleanOption(SocketAdaptor.java:286)
at sun.nio.ch.SocketAdaptor.setReuseAddress(SocketAdaptor.java:399)
at com.sun.enterprise.web.connector.grizzly.SelectorThread.setSocketOptions(SelectorThread.java:1910)
at com.sun.enterprise.web.connector.grizzly.SelectorReadThread.registerNewChannels(SelectorReadThread.java:93)
at com.sun.enterprise.web.connector.grizzly.SelectorReadThread.startEndpoint(SelectorReadThread.java:121)
at com.sun.enterprise.web.connector.grizzly.SelectorThread.run(SelectorThread.java:1223)

#]

This has been discussed here:

https://glassfish.dev.java.net/servlets/ReadMsg?listName=users&msgNo=26597

One user reported: "...that these errors are harmless; however, as you can see
below, the throughput of my application was reduced by 50% in the minutes
surrounding the spurt of errors:"

minute   requests   setSoLinger/setReuseAddress exceptions
------   --------   --------------------------------------
14:23       7620    0
14:24      10063    0
14:25       9714    0
14:26       8847    0
14:28       7370    0
14:29       9787    0
14:30       9104    0
14:31       8171    0
14:32       4066    15 errors in two groups: 8 @ 14:32:33 and 7 @ 14:32:58
14:33       6908    0
14:34      10463    0
14:35       9870    0
14:36       8236    0
14:37       8685    0
14:38       8098    0

My application constantly serves 200-350 requests per second - and has been
doing so for 1 week now. This is the only incident in which any errors have
been thrown since the application began operation.



 Comments   
Comment by jfarcand [ 30/Jan/09 ]

Trying to find a workaround. Disabling setSoLinger (setting it to false) causes
some performance issues, but at least the exception is not reproducible.
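
The defensive pattern implied by that workaround, sketched against the standard java.net.Socket API (the wrapper class is hypothetical, not Grizzly's actual SelectorThread code):

import java.net.Socket;
import java.net.SocketException;

final class SocketOptions {
    static void applyQuietly(Socket socket, boolean linger, int lingerSeconds) {
        try {
            socket.setSoLinger(linger, lingerSeconds);
        } catch (SocketException e) {
            // JDK bug 6799574: "Invalid argument" can be thrown spuriously;
            // swallow it rather than aborting connection registration.
        }
        try {
            socket.setReuseAddress(true);
        } catch (SocketException e) {
            // same defensive treatment
        }
    }
}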

Comment by jfarcand [ 15/Apr/09 ]

JDK issue

http://monaco.sfbay.sun.com/detail.jsf?cr=6799574

Re-assign to docs.

Comment by Paul Davies [ 13/May/09 ]

Description of this issue added to the Release Notes. Review pending.

Comment by kumara [ 01/Sep/09 ]

Changing version from 9.1.1 to v2.1 to reflect new name/version.

Comment by Paul Davies [ 30/Sep/09 ]

To be added to the v3 Release Notes

Comment by jagadesh [ 15/Oct/09 ]

Will not be fixed for V2.1.1

Comment by Paul Davies [ 22/Oct/09 ]

Reassigned to grisdal.

Comment by Gail Risdal [ 08/Dec/09 ]

Added to v3 Release Notes.

Comment by Gail Risdal [ 09/Dec/09 ]

Closed prematurely. Reopened to enable code fix.

Comment by Gail Risdal [ 09/Dec/09 ]

Reassigned to enable code fix.

Comment by Paul Davies [ 26/May/11 ]

This issue was noted as required in the Release Notes. I have changed the subcomponent to enable engineering to resolve or close this bug as necessary.

Comment by Tom Mueller [ 06/Mar/12 ]

Bulk update to change fix version to "not determined" for all issues still open but with a fix version for a released version.





[GLASSFISH-15803] GUI console and instance are not available after http-listener-1.port is changed into 4848 Created: 02/Feb/11  Updated: 02/Dec/11

Status: Open
Project: glassfish
Component/s: grizzly-kernel
Affects Version/s: 3.1_b38
Fix Version/s: 4.0

Type: Bug Priority: Minor
Reporter: tetyanac Assignee: oleksiys
Resolution: Unresolved Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified
Environment:

Windows



 Description   

I changed the setting configs.config.server-config.network-config.network-listeners.network-listener.http-listener-1.port=4848 via the GUI admin console, and after that neither the GUI console nor the GF instance is available. Why is it possible to change the port to one that is already occupied by another process?



 Comments   
Comment by Anissa Lam [ 02/Feb/11 ]

Enforcement against assigning a conflicting port number will need to happen at the backend.
Assigning to 'admin'.
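
For illustration, the backend check described here could be as simple as probing the requested port before committing the change; this sketch uses the standard JDK ServerSocket, and the helper name is hypothetical, not actual GlassFish admin code:

import java.io.IOException;
import java.net.ServerSocket;

final class PortValidator {
    static boolean isPortAvailable(int port) {
        try (ServerSocket probe = new ServerSocket(port)) {
            return true;  // nothing else is bound to this port right now
        } catch (IOException alreadyBound) {
            return false; // e.g. 4848 is already held by the admin listener
        }
    }
}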

Comment by Tom Mueller [ 18/Feb/11 ]

Validation of network listener config changes happens within Grizzly.
Assigning to the grizzly category.

Comment by oleksiys [ 18/Feb/11 ]

reassigning to Justin.





[GLASSFISH-20955] Building grizzly-config with JDK 1.8 fails Created: 15/Jan/14  Updated: 19/Sep/14

Status: Open
Project: glassfish
Component/s: grizzly-kernel
Affects Version/s: 4.1
Fix Version/s: None

Type: Bug Priority: Minor
Reporter: Ryan Lubke Assignee: oleksiys
Resolution: Unresolved Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified

Tags: jdk8

 Description   

Compiling nucleus/grizzly/config using JDK 1.8.0b121 fails:

Failed to execute goal org.glassfish.build:command-security-maven-plugin:1.0.6:check (default-check) on project grizzly-config: Error loading config beans: java.lang.RuntimeException: org.apache.maven.plugin.MojoExecutionException: Error analyzing java/lang/Object: IllegalArgumentException -> [Help 1]






[GLASSFISH-19372] ClientAbort Exception Default Servlet Created: 27/Nov/12  Updated: 27/Nov/12

Status: Open
Project: glassfish
Component/s: grizzly-kernel
Affects Version/s: 3.1.2_b05
Fix Version/s: None

Type: Bug Priority: Minor
Reporter: peppeme Assignee: oleksiys
Resolution: Unresolved Votes: 1
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified
Environment:

linux centos


Tags: brokenpipe, clientabortexception, grizzly

 Description   

I can't reproduce the issue, but I get many, many ClientAbortExceptions.
It seems to be a client-side communication interruption, but
the server.log fills up with these and becomes unreadable.

org.apache.catalina.connector.ClientAbortException: java.io.IOException: Broken pipe
at org.apache.catalina.connector.OutputBuffer.realWriteBytes(OutputBuffer.java:430)
at com.sun.grizzly.util.buf.ByteChunk.flushBuffer(ByteChunk.java:458)
at com.sun.grizzly.util.buf.ByteChunk.append(ByteChunk.java:380)
at org.apache.catalina.connector.OutputBuffer.writeBytes(OutputBuffer.java:455)
at org.apache.catalina.connector.OutputBuffer.write(OutputBuffer.java:442)
at org.apache.catalina.connector.CoyoteOutputStream.write(CoyoteOutputStream.java:160)
at org.apache.catalina.servlets.DefaultServlet.copy(DefaultServlet.java:2010)
at org.apache.catalina.servlets.DefaultServlet.serveResource(DefaultServlet.java:1040)
at org.apache.catalina.servlets.DefaultServlet.doGet(DefaultServlet.java:466)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:668)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:770)
at org.apache.catalina.core.StandardWrapper.service(StandardWrapper.java:1550)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:343)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:217)
at filter.AdwFilter.doFilter(AdwFilter.java:206)
...
...
Caused by: java.io.IOException: Broken pipe
at sun.nio.ch.FileDispatcherImpl.write0(Native Method)
at sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:47)
at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:89)
at sun.nio.ch.IOUtil.write(IOUtil.java:60)
at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:450)
at com.sun.grizzly.util.OutputWriter.flushChannel(OutputWriter.java:108)
at com.sun.grizzly.util.OutputWriter.flushChannel(OutputWriter.java:76)
at
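
Until the logging itself is changed, one possible workaround (a sketch assuming java.util.logging, which backs server.log; the class name is illustrative) is a log Filter that drops records whose attached throwable is a client abort:

import java.util.logging.Filter;
import java.util.logging.LogRecord;

public class ClientAbortLogFilter implements Filter {
    @Override
    public boolean isLoggable(LogRecord record) {
        // Walk the cause chain of the throwable attached to this log record.
        for (Throwable t = record.getThrown(); t != null; t = t.getCause()) {
            if ("org.apache.catalina.connector.ClientAbortException"
                    .equals(t.getClass().getName())) {
                return false; // suppress broken-pipe noise
            }
        }
        return true; // everything else is logged as usual
    }
}

Installed via Logger.getLogger(...).setFilter(new ClientAbortLogFilter()) on the relevant logger, this keeps the broken-pipe noise out of server.log while leaving other warnings intact.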






[GLASSFISH-18257] On URI decode exception the access log is not used Created: 26/Jan/12  Updated: 26/Jan/12

Status: Open
Project: glassfish
Component/s: grizzly-kernel
Affects Version/s: 3.1_b43
Fix Version/s: None

Type: Bug Priority: Minor
Reporter: benjamin_m Assignee: oleksiys
Resolution: Unresolved Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified
Environment:

Linux x86_64


Tags: grizzly, logging

 Description   

When Grizzly throws an "Invalid URI character encoding" exception, the URI is part of the stack trace, but the HTTP request info isn't saved to the access log.
This is a problem when the request URI makes it obvious that the requester is probing for an exploit/vulnerability.
Without an access log entry, there is no way to see the IP/hostname of the requester and identify the source of the attack attempt.
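
To illustrate the failure mode (this is not Grizzly's code; the helper and variable names are hypothetical): an invalid percent-escape throws during decoding, and unless the catch block records the peer address, the requester's identity is lost.

import java.io.UnsupportedEncodingException;
import java.net.URLDecoder;
import java.util.logging.Logger;

final class UriDecodeSketch {
    private static final Logger LOG = Logger.getLogger("http.access");

    static String decodeOrLog(String rawUri, String remoteAddr)
            throws UnsupportedEncodingException {
        try {
            return URLDecoder.decode(rawUri, "UTF-8");
        } catch (IllegalArgumentException badEscape) {
            // The behavior requested in this report: record who sent the bad
            // URI before giving up, instead of losing the requester entirely.
            LOG.warning("Invalid URI encoding from " + remoteAddr + ": " + rawUri);
            return null;
        }
    }
}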



 Comments   
Comment by oleksiys [ 26/Jan/12 ]

Can you please check whether it's still the case in the latest promoted GlassFish 3.1.2:
http://dlc.sun.com.edgesuite.net/glassfish/3.1.2/promoted/glassfish-3.1.2-b19.zip

Thanks.

Comment by benjamin_m [ 26/Jan/12 ]

After trying to replicate it in a VM with the suggested build, a similar error is not thrown.
To be specific, here is the stack trace of the URI decoding issue that is not being logged in the access log.

[#|2012-01-26T07:50:40.472+0100|WARNING|glassfish3.1|com.sun.grizzly.config.GrizzlyServiceListener|_ThreadID=23;_ThreadName=Thread-1;|Internal Server error: /../../../../../../../../boot.ini
java.io.IOException: Invalid URI character encoding
at com.sun.grizzly.util.http.HttpRequestURIDecoder.decode(HttpRequestURIDecoder.java:101)
at com.sun.enterprise.v3.services.impl.ContainerMapper.service(ContainerMapper.java:185)
at com.sun.grizzly.http.ProcessorTask.invokeAdapter(ProcessorTask.java:822)
at com.sun.grizzly.http.ProcessorTask.doProcess(ProcessorTask.java:719)
at com.sun.grizzly.http.ProcessorTask.process(ProcessorTask.java:1013)
at com.sun.grizzly.http.DefaultProtocolFilter.execute(DefaultProtocolFilter.java:225)
at com.sun.grizzly.DefaultProtocolChain.executeProtocolFilter(DefaultProtocolChain.java:137)
at com.sun.grizzly.DefaultProtocolChain.execute(DefaultProtocolChain.java:104)
at com.sun.grizzly.DefaultProtocolChain.execute(DefaultProtocolChain.java:90)
at com.sun.grizzly.http.HttpProtocolChain.execute(HttpProtocolChain.java:79)
at com.sun.grizzly.ProtocolChainContextTask.doCall(ProtocolChainContextTask.java:54)
at com.sun.grizzly.SelectionKeyContextTask.call(SelectionKeyContextTask.java:59)
at com.sun.grizzly.ContextTask.run(ContextTask.java:71)
at com.sun.grizzly.util.AbstractThreadPool$Worker.doWork(AbstractThreadPool.java:532)
at com.sun.grizzly.util.AbstractThreadPool$Worker.run(AbstractThreadPool.java:513)
at java.lang.Thread.run(Thread.java:619)

Comment by oleksiys [ 26/Jan/12 ]

Just to make sure: you didn't see any record in access.log related to the corrupted request?

Comment by benjamin_m [ 26/Jan/12 ]

Exactly.
Because the exception is thrown during URI decoding, Grizzly gives up at that point and nothing is written to the access log.
That becomes very problematic when you have a rogue client trying some exploit, as is the case here.

Comment by oleksiys [ 26/Jan/12 ]

I agree, just wanted to make sure this is true for the latest 3.1.2 build.





Generated at Fri Mar 06 00:49:58 UTC 2015 using JIRA 6.2.3#6260-sha1:63ef1d6dac3f4f4d7db4c1effd405ba38ccdc558.