Bug 6356 - PartitionAnalyzer transaction boundary on partitioned chunk (and batchlet) steps needs clarification
Status: NEW
Product: jbatch
Classification: Unclassified
Component: SPEC
Hardware: PC Windows
Importance: P5 critical
Assigned To: ScottKurz
Reported: 2014-08-26 07:40 UTC by fantarama
Modified: 2015-12-10 14:06 UTC
CC: 4 users

Description fantarama 2014-08-26 07:40:25 UTC
In section 11.7, actions 6-18 define a transaction that lives for the entire duration of all partitions, but in a long-running scenario this exposes the runtime to a transaction timeout exception.

My test case (with the JBeret implementation) is a partitioned chunk step with 8 partitions. All partitions finish in about 30 minutes, but the transaction timeout occurs after 5 minutes, so the runtime throws an exception when it tries to commit (action 18) because the transaction is no longer alive.

Should the transaction begin/commit instead be executed around each analyzeCollectorData call?
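To make the reported failure mode concrete, here is a minimal, self-contained simulation in plain Java (no batch runtime; the class and method names are invented for illustration, and the timings mirror the numbers above: 8 partitions finishing over roughly 30 minutes against a 5-minute transaction timeout):

```java
public class AnalyzerTxSimulation {

    /** Fake transaction: commit fails if it has been open longer than its timeout. */
    static final class Tx {
        final long begunAtMin;
        final long timeoutMin;
        Tx(long begunAtMin, long timeoutMin) {
            this.begunAtMin = begunAtMin;
            this.timeoutMin = timeoutMin;
        }
        boolean commit(long nowMin) {
            return nowMin - begunAtMin <= timeoutMin;
        }
    }

    // Spec as written (11.7 actions 6-18): one transaction spans all partitions,
    // committing at action 18 only after the last partition has reported in.
    static boolean singleTranSucceeds(long[] collectorArrivalMin, long timeoutMin) {
        Tx tx = new Tx(0, timeoutMin);
        long last = 0;
        for (long t : collectorArrivalMin) last = Math.max(last, t);
        return tx.commit(last);
    }

    // The alternative raised above: begin/commit around each analyzeCollectorData call.
    static boolean perCallTransSucceed(long[] collectorArrivalMin, long timeoutMin) {
        for (long t : collectorArrivalMin) {
            Tx tx = new Tx(t, timeoutMin);   // begin just before the call
            if (!tx.commit(t)) return false; // commit immediately after it
        }
        return true;
    }

    public static void main(String[] args) {
        // 8 partitions finishing over ~30 minutes, 5-minute transaction timeout.
        long[] arrivals = {4, 8, 11, 15, 19, 22, 26, 30};
        System.out.println(singleTranSucceeds(arrivals, 5)); // the long-lived tran times out
        System.out.println(perCallTransSucceed(arrivals, 5)); // short per-call trans all commit
    }
}
```

The single transaction fails because it is still open 30 minutes after it began, while each per-call transaction commits within its timeout.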
Comment 1 BrentDouglas 2014-08-30 12:34:33 UTC
I don't think your proposal really sits well with the way collectors work.

The spec allows a couple of ways to deal with your issue. The simplest and most portable way is to increase the transaction timeout for your step:

<step id="HalfHourStep">
    <properties>
        <property name="javax.transaction.global.timeout" value="1800"/>
    </properties>
    ...
</step>

This will increase the timeout for each chunk though.

A less portable way is to ask the JBeret guys to add a property that sets the transaction timeout for this transaction individually, so you would end up with something like:

<step id="HalfHourStep">
    <properties>
        <property name="javax.transaction.global.timeout" value="300"/>
        <property name="org.jberet.transaction.timeout.collector" value="1800"/>
    </properties>
    ...
</step>
Comment 2 fantarama 2014-08-30 16:54:07 UTC
The workaround is clear, but I reported this as a bug because I would like to understand why the spec defines this approach.

Transactions are suited for short atomic operations. The chunk-oriented step is a powerful way to handle this aspect, and I agree that it is the developer's responsibility to handle the transaction timeout, but only for the single chunk, not the full step process. In a chunk step I can manage the item count to make sure a transaction never runs longer than a fixed time.
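For instance, a chunk's transaction can be bounded with the standard JSL item-count and time-limit attributes, so no single chunk transaction outlives the timeout (a hypothetical fragment; the artifact names and values are illustrative):

```xml
<step id="boundedStep">
    <chunk item-count="100" time-limit="60">
        <reader ref="itemReader"/>
        <processor ref="itemProcessor"/>
        <writer ref="itemWriter"/>
    </chunk>
</step>
```

Here a checkpoint (and commit) happens after at most 100 items or 60 seconds, whichever comes first, keeping each chunk transaction well under a 5-minute timeout.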

My step works on a data set of unfixed size, anywhere from 1,000 items to over a million. I can't predict the entire step duration, and I shouldn't have to: chunks support checkpoints, but what happens if the analyzer throws an exception? Does it roll back an hour of work?

I think it is wrong to have a transaction that lives for the entire step lifecycle, because developers cannot handle this pattern.
Comment 3 BrentDouglas 2014-08-30 19:29:05 UTC
I don't really see setting the transaction timeout to 30m when you need to run a 30m transaction as a workaround. A more correct way to deal with it that I wrote before would be something like:

<step id="HalfHourStep">
    <properties>
        <property name="javax.transaction.global.timeout" value="1800"/>
    </properties>
    <checkpoint-algorithm ref="fiveMinutesAndTenUnits"/>
    <chunk checkpoint-policy="custom">


import java.util.concurrent.TimeUnit;

final class FiveMinutesAndTenUnitsCheckpointAlgorithm implements CheckpointAlgorithm {

    private static final int UNITS = 10; // what you would have set the 'units' attribute to

    int current;

    public int checkpointTimeout() throws Exception {
        return (int) TimeUnit.SECONDS.toMillis(5);
    }

    public void beginCheckpoint() throws Exception {
        current = 0;
    }

    public boolean isReadyToCheckpoint() throws Exception {
        if (current > UNITS) {
            throw new IllegalStateException();
        }
        return UNITS == ++current;
    }

    public void endCheckpoint() throws Exception {
    }
}
I just noticed a post from Cheng on the forum about accessing a DB in an analyzer or collector, which I am assuming is the underlying issue you are describing here. The spec provides a straightforward way to persist data across restarts: the StepContext#setPersistentUserData(Serializable) method. This gets updated at the end of every step regardless of its outcome (section 11.7 #21), and it will then be loaded when you retry the failed step to complete the unfinished partitions.
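The restart pattern described here can be sketched with a plain-Java stand-in (FakeStepContext and PartitionProgress are invented names for illustration; only the get/setPersistentUserData shape mirrors the real StepContext API):

```java
import java.io.Serializable;
import java.util.HashSet;
import java.util.Set;

// Stand-in for the two StepContext methods the pattern relies on.
class FakeStepContext {
    private Serializable persistentUserData;
    Serializable getPersistentUserData() { return persistentUserData; }
    void setPersistentUserData(Serializable d) { persistentUserData = d; }
}

// Analyzer-side pattern: record each completed partition in persistent user
// data. Since the spec persists it at step end regardless of outcome
// (11.7 #21), a restart can read it back and skip finished partitions.
class PartitionProgress {

    @SuppressWarnings("unchecked")
    static Set<Integer> completed(FakeStepContext ctx) {
        Serializable d = ctx.getPersistentUserData();
        return d == null ? new HashSet<>() : (Set<Integer>) d;
    }

    static void markDone(FakeStepContext ctx, int partition) {
        Set<Integer> done = completed(ctx);
        done.add(partition);
        ctx.setPersistentUserData((Serializable) done);
    }
}
```

On restart, the analyzer (or mapper) would call completed() and only re-run the partitions not in the set.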

If you leave the transaction timeout at 5m for a half-hour step, you will obviously have to ensure the step's 'start-limit' attribute is either not set (unlimited) or set to at least 6.
Comment 4 BrentDouglas 2014-08-30 19:38:52 UTC
I accidentally posted that before I was finished and can't work out how to edit the comment. The #checkpointTimeout() method should just return 5 as it is in seconds not millis and the checkpoint-algorithm should be in the unterminated chunk element. It should also say 'than I wrote before' on the second line rather than 'that I wrote before'.
Comment 5 fantarama 2014-08-30 21:38:35 UTC
I understand your solution, but maybe I wasn't able to explain the issue.

The chunk works inside a transaction, and I agree that I have to size the timeout correctly, but the chunk is not the step; it is a part of it. The spec defines transaction begin/commit only for the chunk, not the step. As I wrote, I can't predict how long the entire step will take; I can only design the business logic inside the chunk to take as little time as possible, and in my case a single chunk takes only a few seconds to finish.

A non-partitioned step doesn't have this issue because there is no thread that begins/commits a transaction for the entire step duration.
But the partitioned one does! The thread that waits for all partitions to end, and that calls the partition analyzer methods, works inside a single transaction.
The question is the same: why? Why do I need a transaction whose duration no one can predict? I have millions of items to process; my single chunk never takes more than a minute, but processing the whole data set is a very long process whose duration depends on too many variables.

If a single chunk takes 70s instead of 30s, that changes the total time considerably.

Also, even if in the analyzer I simply write to a log (no DB access), I'm exposed to the transaction timeout. I really just want to understand the reason for this design: why does a regular step work well while a partitioned one doesn't? I need partitions to save time, but in fact I can't use them because I would have to know how long the entire step will take.
Comment 6 ScottKurz 2014-09-03 14:15:20 UTC
fantarama, thanks for raising this issue.

I agree there's definitely a spec clarification needed (if not a change/fix).  
The RI seems wrong here as well.

My first reaction is to agree with you, and to argue the spec intention was that a new tran is started around each call to the analyzer.   I.e. each analyzer call to either analyzeCollectorData or analyzeStatus is done within a global tran, and if the tran rolls back then PartitionReducer.rollbackPartitionedStep is called.  (We'd call this a bug in laying out the outline flow chart then.)

I think the behavior becomes coherent with that approach, but I need to think it through.

For what it's worth, I don't see that the RI uses a global tran at any point when calling the analyzer, which is clearly wrong.

The timeout becomes a more pressing concern for partitioned chunks, but for the sake of symmetry I think the same comment would apply to partitioned batchlets, don't you?   That would leave us knowing that the analyzer methods always run in a global tran.

Also we should note this is a potentially disruptive change. It could break any analyzer already using the RI, say, depending on the lack of a global tran.
Comment 7 ScottKurz 2014-10-30 20:08:44 UTC
Picking up this thread again... we have a case where the spec defines a behavior that I agree is undesirable, the RI implements a different behavior, and the TCK is silent.  And switching between a global tran or not can break apps.

Since there's not an easy way out, I'd like to back up to square one.  

I don't see why a transaction would be so commonly needed around the analyzer calls that the spec should insist that one is present.   The analyzer itself could start one if it really needed one.
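As a sketch of that alternative: if the runtime started no transaction, an analyzer that needed one could demarcate it itself, per call. SimpleTx below is a hypothetical stand-in for javax.transaction.UserTransaction (in a container it would come from JNDI at java:comp/UserTransaction); only the demarcation pattern is the point:

```java
import java.io.Serializable;

// Hypothetical stand-in for javax.transaction.UserTransaction.
interface SimpleTx {
    void begin() throws Exception;
    void commit() throws Exception;
    void rollback() throws Exception;
}

// An analyzer that manages its own transaction boundary: begin just before
// its work, commit right after, roll back if the work throws. No
// runtime-managed global tran spanning the whole step is needed.
class SelfManagedAnalyzer {
    private final SimpleTx tx;
    int commits;
    int rollbacks;

    SelfManagedAnalyzer(SimpleTx tx) { this.tx = tx; }

    public void analyzeCollectorData(Serializable data) throws Exception {
        tx.begin();
        try {
            // ... persist or aggregate the collector payload here ...
            tx.commit();
            commits++;
        } catch (Exception e) {
            tx.rollback();
            rollbacks++;
            throw e;
        }
    }
}
```

Each call holds a transaction only for its own short duration, so the step's total runtime never matters to the timeout.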

Now, it sounds like JBeret is already providing a global tran surrounding the analyzer calls.    

Note that with the RI not starting global trans in the analyzer at all currently, we already have a case today where portability across Glassfish & JBeret is potentially an issue.

So let me pose the question to Cheng.   What would you think about changing the behavior of JBeret here to not surround the analyzer calls with a global tran?  Do you agree this would be preferable?  
Yet another alternative to consider would be a single global tran around each analyzer call (this roughly doubles the total # of trans, since there's basically one per chunk tran). This would be less disruptive to any app running in JBeret, though it could still break certain apps. Still, I wonder whether the overhead of the tran is worth starting it all the time.

I'm trying to especially incorporate the JBeret perspective here since I don't want to punish them for actually implementing what the spec said all along. Of course this is a public list, so don't take that to exclude other people commenting.

At some point we'd need to update the spec too, but I thought we could continue the conversation before then here.

Comment 8 ScottKurz 2014-10-30 22:02:48 UTC
On second thought, my thoughts were incomplete, as I failed to consider

Need to think a bit more here..
Comment 9 cf126330 2014-11-01 02:10:05 UTC
Between the two options, I feel that no transaction around analyzers is the cleaner choice. I personally think this is the right move despite the risk of incompatibility. I will also check with WildFly and JBeret users for any concerns and objections.
Comment 10 Mobe 2015-12-10 14:06:31 UTC
I also think there should be no transactions around analyzer calls.

Once there is no longer a single transaction around analyzer calls, I do not see any reason to retain transactions around calls to PartitionReducer.rollbackPartitionedStep or PartitionReducer.beforePartitionedStepCompletion.