[GLASSFISH-16147] Domain was created with master password. start-instance failed Created: 03/Mar/11  Updated: 02/May/11

Status: Open
Project: glassfish
Component/s: admin
Affects Version/s: None
Fix Version/s: None

Type: Improvement Priority: Major
Reporter: easarina Assignee: carlavmott
Resolution: Unresolved Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified

Issue Links:
blocks GLASSFISH-16146 Umbrella bug. Created a domain with ... Open
blocks GLASSFISH-16149 Created a domain with a master passwo... Open


Executed the steps described in the umbrella bug 16146.

Then executed start-instance:
asadmin --passwordfile ./password.txt --port 9999 --host localhost --user admin start-instance in9
remote failure: Could not start instance in9 on node localhost-domain12 (localhost).

Command failed on node localhost-domain12 (localhost): CLI801 Instance is already synchronized
Command start-local-instance failed.

The Master Password is required to start the domain. No console, no prompting possible. You should either create the domain with --savemasterpassword=true or provide a password file with the --passwordfile option.

To complete this operation run the following command locally on host localhost from the GlassFish install location /opt/glassfish3:

asadmin start-local-instance --node localhost-domain12 --sync normal in9
Command start-instance failed.

The password.txt file had the following content:


Comment by Tom Mueller [ 03/Mar/11 ]

It looks as though the password file is not passed through the SSH invocation of start-local-instance. Is that expected?

Comment by Joe Di Pol [ 03/Mar/11 ]

Correct. The current implementation never copies the master password over the network. So if you change the master password on the domain to something other than the default you must use a master password file on the instances in order for start-instance (and therefore start-cluster) to work.

Comment by Bhakti Mehta [ 04/Mar/11 ]

I don't think this is a bug but expected behaviour. Since savemasterpassword=true was not passed when create-local-instance was called, there is no master password file. You can either create the local instance with --savemasterpassword=true or run the change-master-password command for the instances with --savemasterpassword=true. For more info, see this blog from Carla: http://weblogs.java.net/blog/carlavmott/archive/2011/03/02/glassfish-31-using-master-password-and-managing-instances

Comment by easarina [ 04/Mar/11 ]

I don't agree. First, create-instance doesn't have a savemasterpassword option, i.e. if a user created an instance using the create-instance command, he cannot start it using the start-instance command. (The blog is not available now.) But I believe that if the master password was passed in a password file, it should be taken from there without any other preconditions.

Comment by Nazrul [ 02/May/11 ]

GlassFish 3.1 is behaving as expected. We never send the master password over the wire. If the user sets a master password, then 3.1 offers the following options:

1) Use create-local-instance --savemasterpassword option to save the master password locally
2) Use change-master-password --savemasterpassword option to save the master password locally
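For example, using the node and instance names from the report above (a sketch of the two options, not a verified transcript; exact option syntax may vary by release):

```sh
# Option 1: save the master password locally when the instance is created
asadmin create-local-instance --node localhost-domain12 --savemasterpassword=true in9

# Option 2: for an existing installation, save the master password on the node
asadmin change-master-password --savemasterpassword=true localhost-domain12
```

After either command, the instance host has a local copy of the master password, so start-instance (and start-cluster) can proceed without prompting.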

Converting these 3 issues to Improvement to investigate if we can make life any easier without compromising security.

[GLASSFISH-20560] upgrade doesn't handle JDBCRealm and PamRealm package change Created: 20/May/13  Updated: 20/Dec/16

Status: Open
Project: glassfish
Component/s: upgrade_tool
Affects Version/s: 4.0_dev
Fix Version/s: 4.1.1

Type: Bug Priority: Major
Reporter: Tom Mueller Assignee: carlavmott
Resolution: Unresolved Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified


In GlassFish 4.0, the Java packages for the JDBCRealm and PamRealm classes changed from:




The upgrade code doesn't handle this change.
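The original package names were lost from this report, but the missing upgrade step amounts to a classname rewrite wherever domain.xml references the realm classes. A minimal sketch, with entirely hypothetical old/new package prefixes standing in for the elided ones:

```java
import java.util.Map;

public class RealmClassnameUpgrade {
    // Hypothetical old -> new mappings; the real package prefixes were
    // elided in the report above and would come from the 4.0 sources.
    static final Map<String, String> RENAMES = Map.of(
        "com.example.old.realm.JDBCRealm", "com.example.security.realm.JDBCRealm",
        "com.example.old.realm.PamRealm",  "com.example.security.realm.PamRealm");

    // Return the upgraded classname, leaving unrelated realms untouched.
    static String upgrade(String classname) {
        return RENAMES.getOrDefault(classname, classname);
    }
}
```

The upgrade tool would apply such a mapping to each auth-realm classname attribute it encounters while transforming the old domain.xml.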

[GLASSFISH-13021] simultaneous start-instance commands fail Created: 18/Aug/10  Updated: 02/Dec/11

Status: Open
Project: glassfish
Component/s: admin
Affects Version/s: 3.1
Fix Version/s: 4.0

Type: Bug Priority: Minor
Reporter: Tom Mueller Assignee: carlavmott
Resolution: Unresolved Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified

Operating System: All
Platform: All

Issue Links:
blocks GLASSFISH-4357 SYNC-005: Scale the cluster size to a... Resolved
Issuezilla Id: 13,021


If multiple start-instance commands that use SSH are executed in parallel, most (or all) of them fail to start
the instance. After setting up a cluster with multiple instances using an SSH node, the following command
reproduces the problem:

for i in 1 2 3 4 5 6 7 8 9 10; do asadmin start-instance i1-1-$i </dev/null >logs/1/i1-1-$i.log 2>&1 & done;
time wait; grep -i fail logs/1/*

Some commands output the following:

remote failure: Timed out waiting for i1-1-1 to start.

Command start-instance failed.

Others produce this:

java.net.SocketException: Unexpected end of file from server
Command start-instance failed.

The server.log file contains the following messages:
[#|2010-08-18T10:47:33.985-0700|INFO|glassfish3.1|null|_ThreadID=14;_ThreadName=Thread-1;|CLI802 Synchronization
failed for directory config|#]

[#|2010-08-18T10:47:33.985-0700|INFO|glassfish3.1|null|_ThreadID=14;_ThreadName=Thread-1;|Command start-local-
instance failed.|#]

NOTE: this problem DOES NOT occur if start-local-instance is used. Also, this problem does not occur if start-
instance is used where the node for the instance is a config node (rather than an SSH node).

This problem DOES occur for both instances that are on the same host as the DAS and for instances that are on a
host different from the DAS.

Comment by Tom Mueller [ 18/Aug/10 ]

Update on this (determined by turning on the log messages):

For the case where the instance has an SSH node, but the node is local to the
DAS, the start-instance code is correctly determining that the node is local and
it is invoking start-local-instance without using SSH. But the failure is still
there. So this problem is not strictly limited to the SSH case. However, I have
yet to see the failure when using a config node rather than an SSH node.

In summary:
start-local-instance - never fails
start-instance on local config nodes - never fails
start-instance on local SSH nodes - fails
start-instance on remote SSH nodes - fails

Comment by Tom Mueller [ 18/Aug/10 ]

Another update...

It turns out that start-instance with local config nodes does fail also. So at
least the behavior is consistent - start-instance is failing with all types of
instances that I've tried. It is start-local-instance that doesn't fail when
started directly.

Comment by Tom Mueller [ 18/Aug/10 ]

This problem is partially due to the default size (5) of the thread pool that is used for servicing admin
requests. If 5 or more start-instance commands are run simultaneously, there are no threads to process
the _synchronize-files requests that are generated by starting the instances.
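The starvation mechanism above can be reproduced in miniature: each outer request submits a nested request to the same fixed-size pool and blocks on its result, so once the outer requests occupy every thread, the nested ones can never run. This is a sketch with an invented 2-thread pool and names (the real admin pool default was 5, serving start-instance and _synchronize-files):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.*;

public class PoolStarvation {
    // Simulates the admin thread-pool starvation: each "start-instance"
    // request submits a nested "_synchronize-files" request to the same
    // fixed-size pool and blocks waiting for its result.
    static String demo(int concurrentStarts) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(2);
        CountDownLatch allRunning = new CountDownLatch(concurrentStarts);
        Callable<String> startInstance = () -> {
            allRunning.countDown();
            allRunning.await(); // wait until every outer request holds a thread
            // nested request on the same pool; give up after one second
            return pool.submit(() -> "synchronized").get(1, TimeUnit.SECONDS);
        };
        try {
            List<Future<String>> started = new ArrayList<>();
            for (int i = 0; i < concurrentStarts; i++) {
                started.add(pool.submit(startInstance));
            }
            for (Future<String> f : started) {
                f.get(); // throws ExecutionException if a nested request starved
            }
            return "synchronized";
        } catch (ExecutionException starved) {
            return "starved"; // no free thread ever ran the nested request
        } finally {
            pool.shutdownNow();
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println("1 concurrent start:  " + demo(1)); // a free thread serves the nested request
        System.out.println("2 concurrent starts: " + demo(2)); // both threads blocked; nested requests starve
    }
}
```

With one outer request, the second pool thread services the nested request; with two, both threads are blocked and every nested request times out, mirroring the failed _synchronize-files calls above.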

If the thread pool size is increased, say to 200, then a different error results:

Exception while processing command: org.jvnet.hk2.component.ComponentException: problem initializing -
cycle detected involving: com.sun.hk2.component.SingletonInhabitant@2cec33
Closest matching local and remote command(s):
Command start-instance failed.

Comment by Tom Mueller [ 19/Aug/10 ]

I haven't been able to reproduce the initialization problem that was reported in
the last comment. So let's use this issue to focus on what to do about the small
default value (5) for the admin thread pool and whether start-instance needs to be
aware of the thread pool size.

Comment by carlavmott [ 14/Sep/10 ]

I'm still trying to find a way to get the number of threads currently in use from
grizzly. What I have found is that there are only probe listeners that may
provide this info. I'm still working with the grizzly folks.

Comment by carlavmott [ 27/Sep/10 ]

I have checked with the grizzly team and there is no API for finding the number
of busy or available threads. We also talked about using the probes but think
that will not work. At this point I don't think there is anything I can do at
the command level. Waiting for feedback on this topic from Jerome.

Comment by carlavmott [ 05/Oct/10 ]

This bug is being downgraded because it happens less often now that the thread
pool size is larger and the start-cluster command can start instances. I'm
including all notes from the discussions here so we don't lose the work that has
been done. The wiki page is located at:


The important notes from the wiki follow:

The following proposal will alleviate the problem and allow commands like
start-instance to run in parallel successfully regardless of the number of
instances the user is trying to start in parallel. We specifically don't want an
unbounded thread pool, as that could be a security risk. Therefore, we want to
release the grizzly thread so it doesn't wait for a long-running command to
complete, while still waiting for the initial command to complete before
returning execution. This is done by using a custom thread pool in the
AdminAdapter code.

  • Add a new annotation for commands like start-instance. The new annotation
    is called @UseThreadPool(name="pool-name"). A new thread pool will be declared
    within the "server-config".
  • AdminAdapter will access this new thread pool.
  • The original thread that was servicing the command will have the grizzly
    context and will call grizzlyResponse.suspend() to signal to grizzly that the
    thread can be returned to the thread pool.
  • A new thread from the thread pool will be used to execute the command.
  • When processing is complete, the new thread calls resume on grizzly using
    grizzlyResponse.resume().
  • AdminAdapter code is still responsible for building the Action Report with
    the results of the command.

Here is some pseudo-code based on what Alexey sent:

static class AdminAdapter extends GrizzlyAdapter {

    public void service(final GrizzlyRequest grizzlyRequest,
                        final GrizzlyResponse grizzlyResponse) {

        // get the command, check if it is annotated with @UseThreadPool
        if (annotated) {
            grizzlyResponse.suspend(); // Suspend the response here
            threadpool = get the thread pool named in the annotation

            threadpool.execute(new Runnable() { // Run the task in a separate thread

                public void run() {
                    try {
                        doCommand(....); // run the command the same way it is
                                         // normally run, but in a different thread
                        // write the response (same code that is at the end of
                        // AdminAdapter.service())
                    } catch (IOException e) {
                    } catch (InterruptedException e) {
                    } finally {
                        grizzlyResponse.resume(); // finish the HTTP request processing
                    }
                }
            });

            return; // return from service(); this releases the thread to be
                    // used for another request, but doesn't finish the response
        }
    }
}
[GLASSFISH-13967] list-nodes: add target operand? Created: 13/Oct/10  Updated: 05/Apr/11

Status: Open
Project: glassfish
Component/s: admin
Affects Version/s: 3.1
Fix Version/s: future release

Type: Improvement Priority: Minor
Reporter: Tom Mueller Assignee: carlavmott
Resolution: Unresolved Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified

Operating System: All
Platform: All

Issue Links:
blocks GLASSFISH-14654 document list-nodes target operand Open
blocks GLASSFISH-14170 list-nodes man page: add target operand? Closed
GLASSFISH-14654 document list-nodes target operand Sub-task Open Paul Davies  
Issuezilla Id: 13,967


These are comments from the ASArch review of asadmin commands that are new to 3.1
on 10/13/2010.

Consider adding a target operand.

For example, with an instance, list the node that hosts that instance.
For a cluster, list all nodes that host instances in the cluster.
For a node, list just that one node.

(This might be an RFE).
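The proposed target semantics can be sketched as a lookup over the domain topology. Everything here (method name, data structures, sample names) is invented for illustration; it is not the actual list-nodes implementation:

```java
import java.util.*;

public class ListNodesTarget {
    // Resolve a target operand to the node names it implies, per the
    // proposal: an instance -> its host node, a cluster -> every node
    // hosting one of its instances, a node -> just that node.
    static List<String> resolveNodes(String target,
                                     Map<String, String> instanceToNode,
                                     Map<String, List<String>> clusterToInstances,
                                     Set<String> allNodes) {
        if (allNodes.contains(target)) {                // a node: list just that one
            return List.of(target);
        }
        if (instanceToNode.containsKey(target)) {       // an instance: its host node
            return List.of(instanceToNode.get(target));
        }
        if (clusterToInstances.containsKey(target)) {   // a cluster: all hosting nodes
            Set<String> nodes = new LinkedHashSet<>();
            for (String inst : clusterToInstances.get(target)) {
                nodes.add(instanceToNode.get(inst));
            }
            return new ArrayList<>(nodes);
        }
        throw new IllegalArgumentException("unknown target: " + target);
    }
}
```

A cluster spread over two nodes would thus list both nodes once each, while an instance target lists only its own host.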

Comment by carlavmott [ 20/Oct/10 ]

Starting to investigate this issue

Comment by Paul Davies [ 23/Oct/10 ]

Fix affects Docs: Added pauldavies to CC list

Comment by Tom Mueller [ 05/Apr/11 ]

A fix for this issue was initially identified for possible inclusion in the 3.2 release, but after further 3.2 planning, the feature or improvement did not make the cut. This issue is being targeted for a future release. If, based on a reevaluation, it is targeted for 3.2, then update the "fix version" again.

Generated at Thu Mar 23 12:54:27 UTC 2017 using JIRA 6.2.3#6260-sha1:63ef1d6dac3f4f4d7db4c1effd405ba38ccdc558.