Issue Details

Key: GLASSFISH-18725
Type: Bug
Status: Closed
Resolution: Fixed
Priority: Major
Assignee: Ryan Lubke
Reporter: amitagarwal
Votes: 0
Watchers: 0
Project: glassfish

[PERF] Servlet Performance Regression when writing string data

Created: 13/May/12 09:15 AM   Updated: 03/Dec/12 11:09 PM   Resolved: 03/Jul/12 06:01 PM
Component/s: grizzly-kernel
Affects Version/s: 4.0_b36
Fix Version/s: 4.0_b43

Time Tracking:
Not Specified

Issue Links:
Dependency
 

Tags: PSRBUG
Participants: amitagarwal, oleksiys, Ryan Lubke, Scott Oaks and Shing Wai Chan


Description

We have recently extended our performance benchmark suite with web container atomics benchmarks. These benchmarks show a regression of around 12% to 18% across the atomics tests. More details will follow.



Ryan Lubke added a comment - 03/Jul/12 06:00 PM

Ok, closing this particular regression out.


Scott Oaks added a comment - 03/Jul/12 05:55 PM

The call path is now through encodeArrayLoop as expected, and we've made up a little bit of ground on the regression.


Ryan Lubke added a comment - 12/Jun/12 05:02 AM

Integrated Grizzly 2.2.10 (r54548).

Should be available in this week's nightly build. Please let us know of any changes in performance.


oleksiys added a comment - 31/May/12 03:13 PM - edited

Hi Scott,

in Grizzly 1.9.x's C2BConverter I see this code:

public void convert(char c[], int off, int len) throws IOException {
        // Wrap the caller's char[] directly; the resulting CharBuffer is array-backed.
        CharBuffer cb = CharBuffer.wrap(c, off, len);
        // Encode straight into the chunk's backing byte array.
        byte[] barr = bb.getBuffer();
        int boff = bb.getEnd();
        ByteBuffer tmp = ByteBuffer.wrap(barr, boff, barr.length - boff);
        CoderResult cr = encoder.encode(cb, tmp, true);
        bb.setEnd(tmp.position());
        // On overflow, flush the chunk if it cannot grow, then retry the encode.
        while (cr == CoderResult.OVERFLOW) {
            if (!bb.canGrow()) {
                bb.flushBuffer();
            }
            boff = bb.getEnd();
            barr = bb.getBuffer();
            tmp = ByteBuffer.wrap(barr, boff, barr.length - boff);
            cr = encoder.encode(cb, tmp, true);
            bb.setEnd(tmp.position());
        }
        if (cr != CoderResult.UNDERFLOW) {
            throw new IOException("Encoding error");
        }
    }

which is similar to what we have in the 2.0's OutputBuffer.

Maybe I'm missing something?

Thanks.


Scott Oaks added a comment - 21/May/12 06:16 PM

There are multiple regressions in the web container atomics benchmarks. In this bug, we will focus on the issue that affects simple servlet writing.

For servlets, there is a regression in the way strings are written to the servlet output stream. Previously, a call to servletOutputStream.println() ended up in com.sun.grizzly.util.buf.C2BConverter, which got the character array from the string and passed it to the encoder. Because that buffer has a backing array, encoding it is simply a matter of iterating through the array.

Now, in the org.glassfish.grizzly.http.server.io.OutputBuffer.flushCharsToBuf() method, we wrap the string in a CharBuffer and pass that to the character encoder. Because that CharBuffer does not have a backing array, the encoder uses its encodeBufferLoop() method, which results in many calls to buffer.get() rather than a simple iteration over the array. This causes a significant performance penalty.
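
As an illustration of the difference described above, here is a minimal standalone sketch (this is not the GlassFish or Grizzly code; the class name is invented for the example). CharBuffer.wrap(char[]) produces an array-backed buffer, while CharBuffer.wrap(String) produces one with no backing array, which is what steers the JDK's CharsetEncoder between its internal encodeArrayLoop() and encodeBufferLoop() paths:

import java.nio.ByteBuffer;
import java.nio.CharBuffer;
import java.nio.charset.CharsetEncoder;
import java.nio.charset.StandardCharsets;

public class EncoderPathSketch {
    public static void main(String[] args) {
        String s = "hello from a servlet";
        CharsetEncoder encoder = StandardCharsets.UTF_8.newEncoder();
        ByteBuffer out = ByteBuffer.allocate(64); // heap buffer, hasArray() == true

        // 1.9.x-style source: wrap a char[]; hasArray() == true, so the
        // encoder can take its array-iteration fast path.
        CharBuffer arrayBacked = CharBuffer.wrap(s.toCharArray());
        System.out.println("char[]-wrapped hasArray(): " + arrayBacked.hasArray()); // true
        encoder.encode(arrayBacked, out, true);

        // flushCharsToBuf()-style source: wrap the String itself; the resulting
        // buffer has no backing array, so the encoder falls back to the
        // per-character buffer loop (many get() calls).
        CharBuffer stringBacked = CharBuffer.wrap(s);
        System.out.println("String-wrapped hasArray(): " + stringBacked.hasArray()); // false
        encoder.reset();
        out.clear();
        encoder.encode(stringBacked, out, true);
    }
}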

However, this code path is not used by the JSP tests (JSP writers pass char[] data by the time it reaches the OutputBuffer class). The JSP paths make up the bulk of the web container atomics benchmarks, hence the change to the subject of this bug (a separate bug will be filed for whatever is regressing in the JSP path).


Shing Wai Chan added a comment - 18/May/12 10:14 PM

Assigning to the Grizzly team for further investigation.


Scott Oaks added a comment - 18/May/12 10:00 PM

This appears to be (at least partly) because of a huge increase in the time spent in the character encoder. In Grizzly 1.9 with GlassFish 3.1.2, encoding was done into an array-backed byte buffer; now we are encoding into a direct ByteBuffer. Because there is no backing array, looping through the character encoding takes much longer.
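
For reference, a small standalone sketch (not the actual Grizzly code) of the destination side of the same effect: a heap ByteBuffer has a backing array, a direct ByteBuffer does not, and only the former lets the JDK encoder iterate arrays directly:

import java.nio.ByteBuffer;
import java.nio.CharBuffer;
import java.nio.charset.CharsetEncoder;
import java.nio.charset.StandardCharsets;

public class DirectBufferEncodeSketch {
    public static void main(String[] args) {
        CharBuffer src = CharBuffer.wrap("some response data".toCharArray());
        CharsetEncoder encoder = StandardCharsets.UTF_8.newEncoder();

        // Array-backed destination (the GlassFish 3.1.2-era situation in this sketch).
        ByteBuffer heapDst = ByteBuffer.allocate(128);
        System.out.println("heap dst hasArray(): " + heapDst.hasArray());     // true
        encoder.encode(src, heapDst, true);

        // Direct destination: no backing array, so the encoder has to move
        // bytes through individual put() calls instead of filling an array.
        ByteBuffer directDst = ByteBuffer.allocateDirect(128);
        System.out.println("direct dst hasArray(): " + directDst.hasArray()); // false
        encoder.reset();
        src.rewind();
        encoder.encode(src, directDst, true);
    }
}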