[GLASSFISH-16355] startup and footprint of larger size application deployment to 3.x Created: 14/Apr/11  Updated: 06/Mar/12

Status: Open
Project: glassfish
Component/s: performance
Affects Version/s: 3.1
Fix Version/s: not determined

Type: Bug Priority: Critical
Reporter: Nazrul Assignee: Scott Oaks
Resolution: Unresolved Votes: 6
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified

Issue Links:
Dependency
depends on GLASSFISH-16543 Performance regression in JavaEE (ejb... Open
depends on GLASSFISH-17044 [PERF] gmbal objects consuming large ... Open
depends on GLASSFISH-16540 it takes a long time to bootstrap EJB ... Resolved
depends on GLASSFISH-16460 Performance regression in server startup Resolved
depends on GLASSFISH-16747 Excessive memory requirements in EJB app Resolved
depends on GRIZZLY-1144 SocketChannelOutputBuffer consumes to... Resolved
depends on GLASSFISH-17914 Don't initialize StatsProviderRegistr... Closed
Tags: 3_1-next, 3_1_1-scrubbed, 3_1_2-exclude

 Description   

Refer to this blog post for a description of the problem: http://ktschmidt.blogspot.com/2011/04/is-glassfish-v3-slower-and-bigger.html

Scott Oaks confirmed that the startup time issue is valid.

Also refer to this forum thread: http://forums.java.net/node/798503



 Comments   
Comment by Hong Zhang [ 15/Apr/11 ]

Assigning the umbrella issue to Tom. Deployment could have a sub-issue under the umbrella issue.

Comment by Tom Mueller [ 15/Apr/11 ]

Not sure why this is an admin issue. Assigning it to performance.

Comment by Nazrul [ 15/Apr/11 ]

Adding 3_1-next tag. We need a fix for this during 3.1.1.

Comment by Tim Quinn [ 09/Jun/11 ]

Linking the apparent MQ start-up regression to this more-or-less umbrella issue.

Comment by Scott Oaks [ 13/Jul/11 ]

The remaining difference in heap usage after startup is attributable to retained gmbal-related references.

Comment by Scott Oaks [ 22/Nov/11 ]

We are tracking a new set of tests for this in 3.1.2.

In this set of tests (which includes one large app with multiple jars and wars, including EJBs, JSPs, MDBs, etc., and one smaller web app), the heap after startup with the apps deployed consumes 41.2MB in 2.1.1 and 59.7MB in 3.1.2. This is before an ORB is started, and hence does not include the gmbal-related references. [So the earlier comment attributing everything to gmbal is in error.] There is no load generated in this test, so lazily-initialized things will benefit the test, which may or may not be a good thing (but it follows the scenario in the posting that drives this bug).

Where does that 18.5MB come from? Here is the short answer:
Additional classes: 4MB
HK2: 4MB
Felix: 5MB
Grizzly: 3MB
Stats Provider: 2MB

In a scenario like this where a significant part of the EE modules are loaded, one place we lose out is in the infrastructure for modularization. In simple terms of classes loaded, 3.1.2 is loading 11% more classes (10K vs 9K), and the class objects themselves consume 50% more heap (12M vs 8M). That is a reflection of the added features as well as the added modularization, of course.

Then there is the memory consumed by instances of the classes. The single instance of org.jvnet.hk2.component.Habitat consumes 1.6MB of heap. However, there are other habitats (subclasses) as well, and they consume at least another 1.2MB of heap (for their MultiMaps) plus a significant amount of memory for the LazyInhabitant objects. The total consumed by HK2 is in excess of 3.9MB.

Instances of Felix classes consume at least 4MB of heap (not including the classes held by the Felix ModuleClassLoader). The big amounts of memory there are held by Felix ModuleImpls (again not including the inner classloader objects); memory here is consumed by CapabilityImpl, RequirementImpl, and ResolverState. I realize there is overlap between some of those classes, but the 4MB calculation in the tool will have removed that overlap, and in particular the 1.2MB of heap consumed by ResolverState appears independent of the CapabilityImpl/RequirementImpl. So without understanding the code better, I can only say that heap usage is between 4 and 5.2MB (or bigger).

Grizzly processor tasks consume 3.1MB more heap:
In 2.1.1, the three processor task queues consume 2.25MB of heap.
In 3.1.2, there are five processor task queues consuming 5.3MB of heap.
This is in a default-configured domain.

Stats Provider Registry consumes 2MB of heap

Comment by Scott Oaks [ 23/Nov/11 ]

The extra classes also contribute to the regression in the time to restart the server: they cause a few expansions of the perm gen as it fills up.

In 2.1.1, a server restart with the EJB apps deployed goes through one resizing of permgen on my laptop; in 3.1.2, there are three or four. If we increase the initial size of the perm gen (keeping the max size at 192m), we can improve server restart in this scenario by 11%. But that will affect the footprint of other, smaller deployments, so some discussion of the trade-offs here needs to occur.
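For anyone experimenting with that trade-off, this is one way to raise the initial perm gen size on a domain (a sketch only; the 64m value is illustrative, not a recommendation, and note that asadmin requires the colon to be escaped):

asadmin create-jvm-options "-XX\:PermSize=64m"
asadmin restart-domain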

Comment by Joe Di Pol [ 18/Jan/12 ]

We won't be making any more progress on this for 3.1.2, so I'm excluding it from the release. We did get a gmbal fix into Metro that helps WS applications, but not EJB. The ORB fix has proven more difficult (see the linked gmbal bug).

Comment by Tom Mueller [ 06/Mar/12 ]

Bulk update to change fix version to "not determined" for all issues still open but with a fix version for a released version.





[GLASSFISH-7585] [Performance] 11 seconds to load the Update Center page Created: 10/Apr/09  Updated: 06/Mar/12

Status: Open
Project: glassfish
Component/s: performance
Affects Version/s: V3
Fix Version/s: not determined

Type: Improvement Priority: Major
Reporter: sonali_rajashree_suchreet Assignee: Scott Oaks
Resolution: Unresolved Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified
Environment:

Operating System: All
Platform: All


Issuezilla Id: 7,585

 Description   

The server took 11 seconds to load the Update Center page after clicking the
"Installed Components" button under Common Tasks.

Steps:
1. Under Common Tasks, click on "Installed Components"

It takes about 11 seconds for the page to load.

This was at 15:36 on April 10th.



 Comments   
Comment by Anissa Lam [ 10/Apr/09 ]

This may be due to a slow network connection.

Comment by sonali_rajashree_suchreet [ 10/Apr/09 ]

The network connection was very good when I tested. Changing the Issue type to
"ENHANCEMENT"

Thanks

Comment by Tom Mueller [ 06/Mar/12 ]

Bulk update to change fix version to "not determined" for all issues still open but with a fix version for a released version.





[GLASSFISH-17044] [PERF] gmbal objects consuming large part of heap Created: 13/Jul/11  Updated: 03/Dec/12

Status: Open
Project: glassfish
Component/s: monitoring
Affects Version/s: 3.1, 3.1.1
Fix Version/s: None

Type: Bug Priority: Major
Reporter: Scott Oaks Assignee: Scott Oaks
Resolution: Unresolved Votes: 1
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified

Attachments: JPEG File heap-dump-gmbal-api-only.jpg     JPEG File HeapDump-gmbal-classes.jpg     File MyEjb.ear    
Issue Links:
Dependency
depends on GLASSFISH_CORBA-5 [PERF] gmbal objects consuming large ... In Progress
depends on METRO-17 [PERF] gmbal objects consuming large ... Resolved
blocks GLASSFISH-16355 startup and footprint of larger size ... Open
Tags: 3_1-next, 3_1_1-scrubbed, 3_1_2-exclude, PSRBUG

 Description   

The gmbal-related classes added in 3.x have contributed significantly to the heap usage regression for larger apps between 2.x and 3.x. In fact, now that other issues (notably 16747) have been fixed, these classes constitute almost all of the remaining regression. In a standard domain with specj deployed, the gmbal classes retain some 33MB of heap space (and the entire consumed heap space after startup is only 130MB).



 Comments   
Comment by scatari [ 26/Jul/11 ]

Targeting to be fixed in the patch release post 3.1.1.

Comment by Tom Mueller [ 18/Aug/11 ]

Scott, can you please provide details on how to recreate this issue?

Is monitoring turned on during the test?
Do you see a relatively significant amount of memory consumed by gmbal objects with a smaller application?
How did you determine the values that are quoted in the description?

Comment by Scott Oaks [ 18/Aug/11 ]

The numbers I quoted come from examining the heap dump taken after the domain has started (but not accessed); the 33MB is the size of the memory retained by the 5,619 org.glassfish.gmbal.typelib.DeclarationFactory$EvaluatedClassDeclarationImpl objects.

Monitoring options are out-of-the-box settings; the only changes to the domain are to add the necessary JDBC and JMS resources for the app (which in this case is specjappserver). My understanding from Ken is that although there is a way to disable gmbal monitoring, the necessary code is not implemented at the glassfish level (it means using a different gmbal factory to get a no-op gmbal manager). Allowing that might be a good option.

We have only observed this on ejb-related deployments, not on web-only deployments. I'll have to see if we can get measurements from other apps.

Comment by Jennifer Chou [ 14/Oct/11 ]
  • If monitoring is disabled, setting mbean-enabled=false will make no difference.
  • If it is the ManagedObjectManagerFactory that is causing problems, there are only two places it is referenced: monitoring and web services. Since monitoring is disabled, it will not go through the code path that references ManagedObjectManagerFactory.
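(For reference, that flag is normally flipped with asadmin; the dotted name below is my recollection of its usual 3.x location, so treat it as an assumption:)

asadmin set server.monitoring-service.mbean-enabled=false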

I tried to reproduce the issue, but was unsuccessful.

1. Download glassfish 3.1.1 open source edition
2. asadmin start-domain
3. jconsole <gf pid>
4. MBeans > com.sun.management > Operations
a. enter 'heap.dump.out' in p0
b. click dumpHeap
5. Open 'heap.dump.out' in NetBeans.

I couldn't find any gmbal classes listed under Classes. I searched for 'DeclarationFactory' and 'gmbal'.

What am I missing? Do I need to have specj deployed?
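(An equivalent way to capture the same dump without jconsole, for anyone retrying these steps, is the JDK's jmap tool — a sketch, with <gf pid> being the server process id as above:)

jmap -dump:live,format=b,file=heap.dump.out <gf pid>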

Comment by Scott Oaks [ 14/Oct/11 ]

You don't need specj per se, but you need some application with EJBs deployed.

Comment by Jennifer Chou [ 14/Oct/11 ]

After deploying the attached simple EJB app (with a stateless session bean), the gmbal classes can be seen in the attached screenshot of the heap dump list.

Comment by Jennifer Chou [ 14/Oct/11 ]

After replacing gmbal.jar with gmbal-api-only.jar, the size and number of instances are greatly reduced. See attached screenshot - heap-dump-gmbal-api-only.

gmbal-api-only.jar is downloaded from http://download.java.net/maven/2/org/glassfish/gmbal/gmbal-api-only/3.1.0-b001/gmbal-api-only-3.1.0-b001.jar

Comment by Jennifer Chou [ 14/Oct/11 ]

From Scott:

The gmbal instances are all held by the org.glassfish.gmbal.impl.ManagedObjectManagerImpl object that is held in the ORB.

There is a factory that produces a "null" managed object manager impl instead of that ManagedObjectManagerImpl, so if we could arrange for the ORB to use that factory when we don't want the overhead of gmbal, that would solve the issue.
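A minimal sketch of that idea, assuming the createStandalone()/createNOOP() methods exposed by gmbal's ManagedObjectManagerFactory (exactly where the ORB would make this choice is internal to the ORB, and the domain name below is illustrative):

import org.glassfish.gmbal.ManagedObjectManager;
import org.glassfish.gmbal.ManagedObjectManagerFactory;

public class OrbGmbalSelection {
    // Hypothetical selection point: return the no-op manager when gmbal
    // monitoring is unwanted, so nothing retains the gmbal type metadata.
    static ManagedObjectManager createManager(boolean monitoringWanted) {
        return monitoringWanted
                ? ManagedObjectManagerFactory.createStandalone("orb") // full ManagedObjectManagerImpl
                : ManagedObjectManagerFactory.createNOOP();           // "null" impl, near-zero retention
    }
}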

Comment by Jennifer Chou [ 28/Oct/11 ]

The fix should be in ORB to defer the gmbal API calls until there is a JMX client connection.

http://java.net/jira/browse/GLASSFISH_CORBA-5

Comment by Jennifer Chou [ 28/Oct/11 ]

The fix should be in metro and WebServicesContainer to defer the gmbal API call until there is a JMX client connection.

http://java.net/jira/browse/METRO-17

Comment by Jennifer Chou [ 28/Dec/11 ]

Transferring to Scott Oaks. This is an umbrella bug to track the two issues in the ORB and Metro.

Comment by Joe Di Pol [ 18/Jan/12 ]

We've done all we plan on doing for 3.1.2 (see the linked Metro bug). The ORB fix will have to wait for a subsequent release.





[GLASSFISH-16543] Performance regression in JavaEE (ejb) deployment Created: 04/May/11  Updated: 18/Jan/12

Status: Open
Project: glassfish
Component/s: performance
Affects Version/s: 3.1.1_b02
Fix Version/s: None

Type: Bug Priority: Major
Reporter: amitagarwal Assignee: Scott Oaks
Resolution: Unresolved Votes: 1
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified

Issue Links:
Dependency
depends on GLASSFISH-17013 Significant apparent regression in OR... Open
depends on MQ-99 Performance regression in MQ start-up... Open
blocks GLASSFISH-16355 startup and footprint of larger size ... Open
Tags: 3_1-next, 3_1_2-exclude

 Description   

The JavaEE developer benchmark shows that EJB deployment has regressed significantly from the 2.1.1 release. We are observing around a 230% regression on 3.1.1 builds compared to the last 2.1.1 build, b31g.



 Comments   
Comment by Tim Quinn [ 09/Jun/11 ]

Linking the apparent MQ start-up regression to this umbrella issue for the EJB start-up and deployment regression.

Comment by Tim Quinn [ 11/Jul/11 ]

Linking to the ORB start-up regression issue

Comment by scatari [ 26/Jul/11 ]

Targeting to be evaluated further and resolved in the patch release post 3.1.1.

Comment by Joe Di Pol [ 18/Jan/12 ]

We were unable to get the ORB gmbal fix into 3.1.2 (we did get a Metro fix in, but that is not reflected in the EJB benchmark). Deferring from 3.1.2.





[GLASSFISH-17368] GlassFish 3.1.1 startup is twice as slow as 3.1 Created: 28/Sep/11  Updated: 28/Sep/11

Status: Open
Project: glassfish
Component/s: performance
Affects Version/s: 3.1.1
Fix Version/s: None

Type: Improvement Priority: Major
Reporter: bitec Assignee: Scott Oaks
Resolution: Unresolved Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified

Tags: performance, webstart

 Description   

I updated from GF 3.1 to GF 3.1.1 and now see that the startup time of my web application has nearly doubled. Starting the same application on GF 3.1 and GF 3.1.1 gives the following average startup times:

GF 3.1: 28 seconds

GF 3.1.1: 52 seconds

Tested several times; the numbers are pretty constant.

Just for information: my web app is based on JSF + EJB + CDI + Hibernate + PrimeFaces + Spring. I don't see changes in Hibernate (I'm using the one provided as the add-on, and it hasn't changed), so perhaps one of the first three technologies is responsible for the slower initialization.

This is a copy of the forum question: http://www.java.net/forum/topic/glassfish/glassfish/gf-311-slow-startup-web-applications






[GLASSFISH-20825] Slow performance over RMI-IIOP Created: 26/Sep/13  Updated: 27/Sep/13

Status: Open
Project: glassfish
Component/s: performance
Affects Version/s: 3.1.2
Fix Version/s: None

Type: Bug Priority: Major
Reporter: lanthale Assignee: Scott Oaks
Resolution: Unresolved Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified
Environment:

Debian Squeeze with JDK 7u40, 8GB RAM for JVM (via Xmx)



 Description   

I am transferring a large byte array (10 MB) to my standalone application client via RMI-IIOP over SSL.

1. Client calls a method on the stateless EJB and gets a byte array back.
2. Server fetches the data via the EntityManager from the database.
3. Measure the time until the data arrives at the client.

All measurements were taken locally (client and server running on the same machine, in the same JVM).

Test with 10 MB:

  • Using GlassFish: 20 sec
  • Using another app server that uses plain RMI: 2.5 sec

Test with 128 MB (RMIIO used for streaming because the files are too big for memory):

  • GlassFish: 4 minutes
  • Other app server: 0.5 minutes

This is a big difference, and it should therefore be solvable.
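A minimal sketch of the client side of this measurement (all names here — the remote interface, the bean, and the JNDI path — are hypothetical stand-ins, since the reporter's code is not attached):

import javax.ejb.Remote;
import javax.naming.InitialContext;

@Remote
interface DataFetcher {
    byte[] fetch(int size); // server reads the bytes from the DB via EntityManager
}

public class TransferClient {
    public static void main(String[] args) throws Exception {
        InitialContext ctx = new InitialContext();
        // Remote EJB lookup; the call below travels over RMI-IIOP.
        DataFetcher fetcher = (DataFetcher) ctx.lookup("java:global/myapp/DataFetcherBean");
        long start = System.nanoTime();
        byte[] data = fetcher.fetch(10 * 1024 * 1024); // ~10 MB payload
        System.out.println("Received " + data.length + " bytes in "
                + (System.nanoTime() - start) / 1_000_000 + " ms");
    }
}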



 Comments   
Comment by lanthale [ 27/Sep/13 ]

I must correct that: for the big file the transfer time was 1 minute, not 0.5 minutes.

I did some further tests:

  • Connecting directly to the DB via JDBC, without any app server: 128MB in 0.16 minutes
  • The same file through GlassFish: 1.5 minutes

That means 10 times slower than a direct connection to the database. I had expected a 3x slowdown, but not 10x.

I have not found a way to attach a file to the issue after creation; otherwise I would attach the RMIIO code used to test the transfer.





[GLASSFISH-18724] [PERF] Trade2 benchmark has regressed by 12% Created: 13/May/12  Updated: 19/Sep/14

Status: Open
Project: glassfish
Component/s: performance
Affects Version/s: 4.0_b36
Fix Version/s: 4.1

Type: Bug Priority: Major
Reporter: amitagarwal Assignee: Scott Oaks
Resolution: Unresolved Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified

Issue Links:
Dependency
depends on GLASSFISH-18723 [PERF]Excessive thread contention in ... Open
depends on GLASSFISH-18986 [PERF] Failed ClassLoading consuming ... Resolved
depends on GLASSFISH-18725 [PERF] Servlet Performance Regression... Closed
depends on GLASSFISH-18754 [PERF] JSP Cookie Handling performanc... Closed
Tags: PSRBUG

 Description   

The Trade2 benchmark has regressed by 12% for quite some time.






[GLASSFISH-4949] print(char) does too much memory allocation Created: 29/Apr/08  Updated: 06/Mar/12

Status: Open
Project: glassfish
Component/s: performance
Affects Version/s: V3
Fix Version/s: not determined

Type: Bug Priority: Minor
Reporter: kohlerm Assignee: Scott Oaks
Resolution: Unresolved Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified
Environment:

Operating System: All
Platform: All


Issuezilla Id: 4,949
Status Whiteboard:

gfv3-prelude-included

Tags: tp2-exclude

 Description   

Hi all,
I just ran the following Servlet on the latest GlassFish V3 (April 21)

protected void doGet(HttpServletRequest request, HttpServletResponse response)
        throws ServletException, IOException {
    ServletOutputStream out = response.getOutputStream();
    response.setContentType("text/html");
    response.setHeader("pragma", "no-cache");
    String sizeStr = request.getParameter("size");
    int size = 0;
    if (sizeStr != null)
        size = Integer.parseInt(sizeStr);
    for (int i = 0; i < size; i++)
        out.print((char) (255 * Math.random()));
}

With a response of 100000 bytes (configurable via the servlet's size parameter),
20MB are allocated.

Profiling indicates that
org.apache.coyote.tomcat5.CoyoteOutputStream.print(java.lang.String)
calls

com.sun.grizzly.util.buf.C2BConverter.convert(java.lang.String)

which in turn calls

java.nio.ByteBuffer.wrap(byte[], int, int) 100000 times, as well as
java.nio.CharBuffer.wrap(char[], int, int) 100000 times.

The response time is 3 times slower than when using write() on the OutputStream,
which allocates almost nothing except a DirectBuffer.

Regards,
Markus



 Comments   
Comment by kumara [ 30/Apr/08 ]

Exclude from the list being tracked for TP2 release.

Comment by kumara [ 19/Aug/08 ]

Add gfv3-prelude-include to status whiteboard

Comment by Scott Oaks [ 02/Sep/08 ]

First, it's not correct to compare calling print(c) vs write(c). The former
must encode the 16-bit character as a series of bytes; the latter can simply
write out the raw data. So the performance of printing characters vs writing
binary data will always be drastically different.

We have in the past explored various ways to get the fastest encoding out of
the JDK, and I think the current implementation is still optimal. We have filed
issues against the JDK for its memory use for encoding/decoding strings and are
tracking those. The JDK isn't well-designed for encoding single characters at a
time.

I'm leaving this open for now so we can track the JDK issue, but at present,
this is the best we can do.
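(For completeness, a sketch of the bulk-conversion workaround implied by the comments above — plain Servlet API, nothing GlassFish-specific — which pays the encoding cost once per response rather than once per character:)

import java.io.IOException;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class BulkWriteServlet extends HttpServlet {
    protected void doGet(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        response.setContentType("text/html");
        int size = Integer.parseInt(request.getParameter("size")); // assumes ?size= is present
        char[] buf = new char[size];
        for (int i = 0; i < size; i++)
            buf[i] = (char) (255 * Math.random());
        response.getWriter().write(buf); // one bulk char-to-byte conversion
    }
}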

Comment by kumara [ 03/Sep/08 ]

v3 defect tracking

Comment by kohlerm [ 03/Sep/08 ]

Hi,
Unfortunately, I don't have the profiling snapshot available anymore, but I seem
to remember that at least one byte[] was allocated for each char.
I know that write is supposed to be faster than print because of the additional
encoding that print has to do, but a factor of 3 still seems too high to me.

And what really hurts is that 20MB are temporarily allocated just
to return 10000 bytes.

Regards,
Markus

Comment by Tom Mueller [ 06/Mar/12 ]

Bulk update to change fix version to "not determined" for all issues still open but with a fix version for a released version.




