ejb-spec / EJB_SPEC-9

Introduction of @MaxConcurrency annotation

    Details

    • Type: New Feature
    • Status: Open
    • Priority: Major
    • Resolution: Unresolved
    • Affects Version/s: 3.2
    • Fix Version/s: Future version
    • Labels:
      None

      Description

      Currently, proprietary max-pool-size settings are used to control the maximum concurrency of an application and thus to prevent overloading the application server.

      We could abstract the intended behavior into an annotation:

      @Target({ElementType.METHOD, ElementType.TYPE})
      @Retention(RetentionPolicy.RUNTIME)
      public @interface MaxConcurrency {
          int value() default 30;
      }

      Alternatively, we could provide a more generic annotation, like:

      @Target({ElementType.METHOD, ElementType.TYPE})
      @Retention(RetentionPolicy.RUNTIME)
      public @interface Concurrency {
          int max() default 30;
      }

      to support future extensions.

      This functionality should be available for EJB 3.2 and CDI.
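      As a sketch of how a container might consume the proposed annotation, the following standalone program declares @MaxConcurrency as proposed above and resolves the effective limit via reflection. The override rule (method-level value wins over type-level) and the OrderService example class are assumptions for illustration, not part of the proposal:

```java
import java.lang.annotation.*;
import java.lang.reflect.Method;

public class MaxConcurrencyDemo {

    // The annotation exactly as proposed in this issue.
    @Target({ElementType.METHOD, ElementType.TYPE})
    @Retention(RetentionPolicy.RUNTIME)
    public @interface MaxConcurrency {
        int value() default 30;
    }

    // Hypothetical bean using the annotation at both levels.
    @MaxConcurrency(10)
    public static class OrderService {
        @MaxConcurrency(5)
        public void processOrder() { }
    }

    // Assumed resolution order: method-level overrides type-level,
    // falling back to the annotation's default of 30.
    public static int effectiveLimit(Method m) {
        MaxConcurrency onMethod = m.getAnnotation(MaxConcurrency.class);
        if (onMethod != null) return onMethod.value();
        MaxConcurrency onType = m.getDeclaringClass().getAnnotation(MaxConcurrency.class);
        return onType != null ? onType.value() : 30;
    }

    public static void main(String[] args) throws Exception {
        Method m = OrderService.class.getMethod("processOrder");
        System.out.println(effectiveLimit(m)); // prints 5: the method-level value wins
    }
}
```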

        Activity

        dblevins added a comment -

        Hi All! Been following this excellent thread – very interesting.

        Let me recap the goals (someone jump in and clarify if I missed a point):

        • some way to express thread pool size for executors associated with @Asynchronous calls
        • some clarification on what happens if thread pools are full and @Asynchronous calls cannot be added to the async queue

        These are good goals. We may need to find alternate ways to achieve them than what has been proposed.

        The @MaxConcurrency concept is compelling and I had proposed something similar on Adam's blog years ago. When it comes to @Asynchronous calls there is a fundamental problem with allowing the pool size to be specified on a per-method basis. If there are 100 @Asynchronous methods and each sets its pool size, that essentially requires 100 ThreadPoolExecutor instances. This could be thousands of threads. I don't think a thread pool per method is the way to go.

        We're currently struggling with this in OpenEJB as we have one ThreadPoolExecutor per application, but really even that's not great. Ultimately, the number of cores is the largest indicator of what the thread pool size should be. The question of where and how you set that has a direct bearing on how many pools you really want (as few as possible).

        In terms of what the behavior is when an @Asynchronous queue is full, that is currently unspecified. I think OpenEJB will throw an EJBException after making your thread wait 30 seconds for an opening. That duration is configurable, as is the actual Queue implementation – some queues have a size, some queues like SynchronousQueue are entirely wait-based and leverage only the thread pool size.

        As I say, very interesting topic! I'm not exactly sure what the right answers are, but I'd love to hear thoughts with the above "guts" in mind. Always good to think "how would I implement this?"
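        The wait-then-reject behavior described above can be sketched with a plain ThreadPoolExecutor over a bounded queue. The BoundedAsyncPool class, the generic RejectedExecutionException, and the short timeout below are illustrative assumptions; OpenEJB's actual implementation has its own configuration and throws EJBException:

```java
import java.util.concurrent.*;

public class BoundedAsyncPool {
    private final ThreadPoolExecutor pool;
    private final ArrayBlockingQueue<Runnable> queue;
    private final long offerTimeoutMillis;

    public BoundedAsyncPool(int threads, int queueSize, long offerTimeoutMillis) {
        this.queue = new ArrayBlockingQueue<>(queueSize);
        this.pool = new ThreadPoolExecutor(threads, threads, 0L, TimeUnit.MILLISECONDS, queue);
        this.offerTimeoutMillis = offerTimeoutMillis;
        pool.prestartAllCoreThreads(); // workers pull tasks straight from the queue
    }

    // Wait up to the timeout for an opening instead of rejecting immediately.
    public void submit(Runnable task) throws InterruptedException {
        if (!queue.offer(task, offerTimeoutMillis, TimeUnit.MILLISECONDS)) {
            throw new RejectedExecutionException("no opening within " + offerTimeoutMillis + " ms");
        }
    }

    public void shutdownNow() {
        pool.shutdownNow();
    }

    public static void main(String[] args) throws Exception {
        BoundedAsyncPool p = new BoundedAsyncPool(1, 1, 100); // 1 worker, queue of 1, 100 ms wait
        Runnable slow = () -> { try { Thread.sleep(500); } catch (InterruptedException e) { } };
        p.submit(slow); // taken by the worker
        p.submit(slow); // sits in the queue
        try {
            p.submit(slow); // queue full and the worker is busy: the timed wait runs out
        } catch (RejectedExecutionException e) {
            System.out.println("rejected: " + e.getMessage());
        }
        p.shutdownNow();
    }
}
```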

        arjan tijms added a comment -

        "You already have the possibility to wait for an asynchronous method. Just return a Future<Void>."

        That's correct, and this is the recommended way to wait for such methods from say an HTTP request processing thread.

        However, when doing this kind of wait from within a thread that is itself one of the threads from the pool that handles asynchronous methods, you run the risk of getting into a deadlock.

        To illustrate this, assume we have a thread pool with two threads, t1 and t2. We have a bean B1 that within an asynchronous method M1 calls asynchronous method M2 on bean B2 three times and waits for it. We also have a task queue Q that holds the not yet executed tasks.

        After an initial call to M1, the situation may look as follows:

        thread    execution stack
        t1           M1 (blocked, waiting)
        t2           M2 (executing)
        
        queue     content
        Q            M2, M2
        

        Before the other two calls to M2 are done, another (asynchronous) call to M1 comes in, making the situation as follows:

        thread    execution stack
        t1           M1 (blocked, waiting)
        t2           M2 (executing)
        
        queue     content
        Q            M2, M2, M1
        

        When M2 is done executing, the system selects a new task from the queue. Suppose this happens to be M1. The situation is now as follows:

        thread    execution stack
        t1           M1 (blocked, waiting)
        t2           M1 (blocked, waiting)
        
        queue     content
        Q            M2, M2, M2, M2, M2
        

        The system is now in a deadlock.

        Both threads are waiting for an M2 to get done, but because they are waiting they are occupying a thread from the pool and thus no M2 can ever be executed.

        Strict queue ordering would not really solve this problem. The queue could be empty when M1 started to run initially, and then just before it calls M2 asynchronously the second call to M1 could happen, which would again deadlock the system.

        The example is not really contrived, as I have unfortunately encountered this problem a couple of times in the wild when investigating a Java EE application that 'mysteriously' froze.
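        A minimal standalone reproduction of this starvation, assuming a plain single-threaded ExecutorService in place of the container's @Asynchronous pool (a timed get() stands in for the untimed wait so the demo terminates instead of hanging):

```java
import java.util.concurrent.*;

public class AsyncSelfDeadlock {
    static String demo() throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(1); // a single worker, like a saturated pool
        try {
            Future<String> outer = pool.submit(() -> {               // plays the role of M1
                Future<String> inner = pool.submit(() -> "M2 done"); // plays M2, queued behind M1
                try {
                    // M1 now waits inside the pool's only thread, so M2 can never start
                    return inner.get(200, TimeUnit.MILLISECONDS);
                } catch (TimeoutException starved) {
                    return "starved"; // an untimed get() here would hang forever
                }
            });
            return outer.get();
        } finally {
            pool.shutdownNow();
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(demo()); // prints "starved"
    }
}
```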

        A join() like in fork/join prevents the deadlock by using the waiting thread to run other queued tasks. The last example would then look like the following:

        thread    execution stack
        t1          M2
                    join
                    M1
        t2          M2
                    join
                    M1
        
        queue     content
        Q            M2, M2, M2
        
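        The join-helps behavior described above can be seen with the JDK's own ForkJoinPool: the same M1-waits-for-M2 nesting completes even on a pool with a single worker, because join() runs the queued subtask in the waiting thread. This is a sketch of the mechanism, not something an EJB container does today:

```java
import java.util.concurrent.*;

public class ForkJoinNoDeadlock {
    // Plays the role of M1: forks an M2 subtask and joins it.
    static class Outer extends RecursiveTask<String> {
        @Override
        protected String compute() {
            RecursiveTask<String> inner = new RecursiveTask<String>() { // plays M2
                @Override
                protected String compute() { return "M2 done"; }
            };
            inner.fork();        // enqueue M2
            return inner.join(); // join() executes M2 in this same worker thread
        }
    }

    public static void main(String[] args) {
        ForkJoinPool pool = new ForkJoinPool(1); // a single worker is enough
        System.out.println(pool.invoke(new Outer())); // prints "M2 done" instead of deadlocking
        pool.shutdown();
    }
}
```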

        A second thread pool would also solve the problem, since no matter how full the queue gets and how slow things become, an M1 would not block an M2 from making progress.

        Suppose we now have two thread pools, each with only one thread: t1 belonging to pool 1, and t2 belonging to pool 2. The last example would then look like this:

        thread    execution stack
        t1           M1 (blocked, waiting)
        t2           M2
        
        queue     content
        Q1          M1
        Q2          M2, M2
        
        Darious3 added a comment -

        Any progress here?

        marina vatkina added a comment -

        Unfortunately not.

        To move forward, somebody needs to write a very clear proposal that addresses all concerns expressed in this issue.

        Darious3 added a comment -

        If I look at the calendar then I see it's already November. If Java EE 7 is to be released around April 2013, then I guess there really isn't a lot of time left to do anything, is there?

        As an external observer of the EJB spec, I have to say that it looks like not much is happening and a lot of issues fail to move forward because of a lack of response. I might be totally wrong, and maybe the EG is doing a lot of work that I simply missed. But to a community member like me it just looks like very few people are actually involved or interested in really working on the spec. If I compare this to say JSF or CDI, some issues there don't make it because the people working on those specs have a lot of other issues to work on, but with EJB it looks like there's just almost nobody around.

        Again, maybe I'm totally wrong here and not seeing the whole picture.


          People

          • Assignee:
            marina vatkina
            Reporter:
            abien
          • Votes:
            8
            Watchers:
            5

            Dates

            • Created:
              Updated: