Last updated April 28, 2014 22:11, by timwatson
= Meeting Minutes =
(Older meetings are in the [[Meeting Minutes - Archive]])
* Apr 23
** Attendees: Paul, Ed, Joshua, Tim
** Paul updated the project's example code around injecting a managed connection; see EventScenarios.
** A couple of issues with the sample:
*** Events are not tied to a specific provider/connection; the listeners must discriminate whether an event is of interest to them.
*** Does this expose a security issue, in that listeners will see events and connections that are not meant for them? Does CDI provide any support here (access checking, security managers, etc.)? Does the container provide any help here?
*** Question for CDI: can you set the listening period programmatically? I.e., we may not want to allow CDI to create listeners dynamically (as CDI managed beans) and may require this be done explicitly. Does CDI have an API to control when a CDI managed bean can be created?
** Started discussion of scope: state scope and expiration of stale data.
* Apr 9
** Attendees: Paul, Ed, Tim
** Reviewed the process flow in EventScenarios, from getting a Provider instance to being able to perform state management operations.
** We discussed how the JSR 350 consumer would use this and at what level they would likely interact with the system; most likely the user would need an instance injected at the Connection level.
** We need to rework the example code in EventScenarios to show the decisions made to use CDI for Provider querying and the eventing model; this will help validate object model changes.
** Two assumptions were made in the original design: 1) the system should be highly dynamic, allowing users to query for and use customized providers, and 2) it should support SE usage. We feel these may not be correct; users would likely want to use preconfigured named resources (Connections) and would be in an EE context. We need to confirm this.
** No meeting next week; will resume on April 23.
* Apr 2
** Attendees: Paul, Antoine, Ed, Joshua, Werner, Tim
** Reviewed updates to Antoine's example code; it now runs via: mvn clean test -Pweld-1.x
** Example of a portable extension.
** The API needs to drill into connections and how to get from querying a provider to being able to do some state management via a connection; there is/should be a layer between the provider and the state connection.
** Named connections; injection of a connection?
** CDI: if JSR 350 is targeted to Java EE 7, then CDI will be 1.1.
** Werner mentioned that JSR 362, Portlet 3.0, may have an unfulfilled need for data persistence that 350 could possibly fill.
* Mar 26
** Attendees: Paul, Antoine, Joshua, Tim
** Provider query use case sent to the EG mailing list.
** Antoine provided an example of using CDI to support the Provider query.
* Mar 5
** Attendees: Joshua, Werner, Ed, Antoine, Paul, Tim
** Discussed the capability discovery use case.
** How to configure a capability.
** How can CDI play a role here: using CDI as a mechanism to query providers.
** Three variations when querying: capabilities that are required, capabilities that are preferred but not required, and capabilities that are not wanted.
** We will also look at CDI as a possible implementation of the eventing model.
* Feb 26
** Meeting canceled; will resume next Wednesday.
* Feb 19
** Attendees: Joshua, Werner, Tim
** Werner will be attending JavaLand March 26; he may be able to get some speakers like Pete Muir and Anatole (JSR 354) to join the hangout.
** Some continuing discussion of CDI; need to get a working sample to Antoine.
* Feb 12
** No meeting due to technical (operator) difficulties.
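The listener-discrimination concern raised above (events are not tied to a specific provider/connection, so every listener sees every event) can be sketched in plain Java. This is an illustrative analogue of CDI-style broadcast dispatch, not JSR-350 or CDI API; every name below is invented for the sketch:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Hypothetical event carrying the connection name so listeners can discriminate.
class StateEvent {
    final String connectionName;
    final String key;
    StateEvent(String connectionName, String key) {
        this.connectionName = connectionName;
        this.key = key;
    }
}

// Broadcast dispatch: every observer sees every event, mirroring the concern
// that the filter must live in the observer, not in the delivery mechanism.
class EventBus {
    private final List<Consumer<StateEvent>> observers = new ArrayList<>();
    void observe(Consumer<StateEvent> observer) { observers.add(observer); }
    void fire(StateEvent e) { observers.forEach(o -> o.accept(e)); }
}

public class EventDemo {
    // Count events seen for one connection; the observer itself filters.
    static int countFor(String connection, StateEvent... events) {
        EventBus bus = new EventBus();
        int[] count = {0};
        bus.observe(e -> { if (e.connectionName.equals(connection)) count[0]++; });
        for (StateEvent e : events) bus.fire(e);
        return count[0];
    }

    public static void main(String[] args) {
        int n = countFor("orders-db",
                new StateEvent("orders-db", "po-1"),
                new StateEvent("audit-log", "entry-9"),
                new StateEvent("orders-db", "po-2"));
        System.out.println(n); // prints 2: the audit-log event was ignored
    }
}
```

In real CDI the same shape appears as a bean method annotated with @Observes, and the security question above remains: the container delivers the event to any matching observer.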
* Feb 5
** Attendees: Mitch, Joshua, Werner, Ed, Paul, Antoine, Tim
** More eventing discussion:
*** Three scopes of the eventing system: 1) the platform provider (JBoss, GF, etc.), 2) the Provider (pluggable services like Coherence), and 3) the client, or application, that consumes state and uses the provider.
*** The app will create its own event classes; the Provider will fire them.
*** Discussed how we might take advantage of CDI eventing with Antoine.
**** '''AI:''' Investigate CDI portable extensions.
**** '''AI:''' Need to get Antoine a description of expected system usage and an example Provider implementation.
**** Question: use of CDI involves a lot of dynamic proxying; is this a performance concern?
** Previous '''AIs''':
*** Look at tracking TODOs in JIRA.
*** Verify the EDR spec can be delivered as Javadoc only.
*** Need to drill into and iron out configuration.
** Paul will be out for the next two meetings.
* Jan 29
** Attendees: Mitch, Joshua, Werner, Ed, Paul, Tim
** We need to get to EDR quickly:
*** EDR is comparable to an alpha: something people will be able to play with to get a reasonable sense of the value and potential of this JSR, and also to validate the API design.
*** The API still has TODOs; these should be cleared as much as possible, and the more substantial items should be tracked in JIRA. It is OK to release the EDR with some TODOs still in the API, though.
** What will the spec include at EDR (full spec, whitepaper, executive summary, etc.)?
*** The group feels the spec can be delivered on the JCP as Javadoc only; significant detailed content can be included in the overview and package.html files.
*** Ed will check with Heather to see if this is acceptable.
*** We will also provide a reviewer guide; it will come in 3 parts/documents (user/client, provider, and platform).
*** A sample implementation and usages of JSR-350 will be provided and linked to from the reviewer guide.
** Will start tracking EDR prerequisites via a checklist; Tim will add this to the project wiki. Please add to and update the list:
*** [[EDRChecklist]]
** Discussed configuration:
*** We should include a lot of examples in the EDR.
*** How does a provider get configured declaratively? Do we need to include deployment descriptor support?
*** What does the configuration lifecycle look like? What happens to config from undeploy to redeploy of a provider, and who is responsible for supporting this?
*** Mitch has a diagram showing configuration processing; see StateManagementFlow.odg in the Git repo.
*** Possible configuration store?
*** Can we get the provider configuration using a JNDI lookup, similar to a JDBC data source?
*** There was a mention of a configuration JSR; is that 147? In any case we cannot have a dependency on that in the samples.
** Continued discussion of the event model:
*** Paul will see if the CDI lead can attend the EG meeting next week and help evaluate how we can use CDI eventing here.
*** Paul will take a stab at API changes to support CDI events.
*** One question is how an upper stack gets access to the annotations parsed/processed by the CDI runtime.
** Moving forward, Mitch is available to consult for this project.
* Jan 22
** Attendees: Mitch, Ed, Paul, Tim
** Introduced Paul and Tim.
** Continued discussion of the event model.
** Discussed the semantics of the before/after event callbacks, and what you could do against the event object itself and the connection that caused it.
** We feel we have fleshed out the event model enough that we really should begin to CDI-ify it.
** Paul to talk with the CDI maintenance release lead to see if he can consult with us on making this CDI-friendly.
* Jan 15
** Attendees: Mitch, Ed, Werner, Josh, Tim
** Introduced Tim Watson as the new spec lead.
** Went over the roadmap; want EDR by end of Q1 2014.
** Reviewed event model work from the last meeting (before the holidays).
** Mitch to add an example to show the workings of the current event model.
** The goal is to get events fleshed out and then reimagine them using the CDI event model.
* Dec 18
** Attendees: Mitch, Josh, Paul
** Made a quick tweak to Externalizer to use ObjectInput/ObjectOutput instead of InputStream/OutputStream.
** Talked over the 'event notification straw-man' interfaces Mitch put in last week.
*** We agreed that for now we'll write interfaces using the 'delegation event model' pattern, and later incorporate CDI support (hopefully giving customers the option to use either approach to receive events).
*** We based this decision on the fact that not everyone uses CDI, and JSR 107 in particular defines a delegation event model with no use of CDI.
*** We believe that CDI support is important, but we aren't sure if we can *require* clients/providers to use it. This needs further investigation.
** Identified expiry and activation/passivation events as possibly being separate from the CRUD events.
*** Maybe we need an expiry-specific EventCapability to indicate that the client wants, and the provider supports, the expiry-specific events.
*** Maybe we need an activation/passivation-specific EventCapability (and a capability that says a provider can activate/passivate objects).
** We might consider removing the EventType enum and making event-type-specific subinterfaces of StateEvent.
*** Allows us to customize the information tracked per event type.
*** Requires separate event methods on the listener, or a filter object specified at listener registration, etc.
** We'll meet again after the first of the year (2014)!
* Dec 11
** Attendees: Mitch, Paul, Ed, Werner
** Josh was at a customer site and couldn't join today.
** Discussed the Externalizer work Mitch put in last week.
*** Would like to use CDI, if possible, to do the registration of the ExternalizerFactory.
*** This is in addition to the use of ServiceLoader for registration.
*** Paul to investigate how we can do this.
*** Mitch will merge the externalization branch to the master branch.
*** Paul can do his work in the master branch or a separate branch (his choice).
** Discussed event notification.
*** We should investigate using CDI events instead of inventing our own event framework.
*** Mitch to put together a 'straw man' proposal for this, and we can then bring Oracle's CDI expert onto the call to help us understand how CDI can help here.
*** We'll define a number of 'standard' event types (create, access, delete, update, activate, passivate, expire, expire_idle) and pre/post flavors of each.
*** We'll define an EventCapability to represent the capability of a provider to take subscriptions for events and to deliver events to subscribers.
*** Thus, subscriptions for events will be managed at the connection level. We should also be able to filter events by state object type (key + value type).
** Event notification led to a discussion of expiration of a state object.
*** Will define an ExpirationCapability and an IdleTimeoutCapability that will allow the definition of a per-type default timeout. The ExpirationCapability defines a maximum object lifetime, regardless of use, and the IdleTimeoutCapability defines an idle lifetime that is shorter than or equal to the max lifetime.
*** We also found the need for per-object capability configuration. We'll define a StateContainerCapability interface that extends Capability.
*** Then we'll create StateContainerExpirationCapability/StateContainerIdleTimeoutCapability to allow clients to specify per-object max lifetime and idle timeout.
* Dec 4
** Attendees: Mitch, Werner, Paul, Josh
** The JSR-350 renewal ballot was *approved*. We're good to go for another year if needed (though we hope to reach EDR early in 2014 anyway).
** Discussed the 'Serializer' changes Mitch put in last week.
*** Josh wanted to clarify that no particular serialization scheme is required by State Management; rather, we recommend the use of the Java serialization framework and allow clients to plug in their own.
*** Paul recommended we change the name to Externalizer to more closely parallel serialization that lives outside the object being serialized.
*** We designed a way to register Externalizers more 'naturally', and to do this separately from the code that actually manages the state objects. We came up with the idea of an ExternalizerFactory registered as a service, loaded via ServiceLoader. We retained the explicit registration methods directly on ExternalizerCapability for convenience.
** Next week:
*** Discuss the externalizer changes.
*** Discuss how to support 'events' (notifications from provider to client about state object changes).
* Nov 26
** Attendees: Mitch, Werner, Paul, Josh
** Discussed concerns raised by some EC renewal ballot voters about why this JSR should exist.
*** In summary: most non-trivial code, regardless of product, must deal with state management. JSR-350 provides the 'language' needed for clients to express their requirements for state management, and for providers to meet them. This allows clients and providers to be developed independently, such that clients can be matched with prebuilt providers instead of hand-rolling state management solutions as we do today.
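The ServiceLoader-based ExternalizerFactory registration described above might look roughly like the following. The interface shapes are guesses for illustration, not the actual JSR-350 API; in a real deployment each factory implementation would also be listed in a META-INF/services provider-configuration file, which this standalone sketch omits:

```java
import java.io.IOException;
import java.io.ObjectInput;
import java.io.ObjectOutput;
import java.util.ServiceLoader;

// Hypothetical externalizer contract, per the ObjectInput/ObjectOutput tweak above.
interface Externalizer<T> {
    void writeObject(T value, ObjectOutput out) throws IOException;
    T readObject(ObjectInput in) throws IOException, ClassNotFoundException;
}

// Hypothetical factory registered as a service and discovered via ServiceLoader.
interface ExternalizerFactory {
    // Return an Externalizer for the given type, or null if unsupported.
    <T> Externalizer<T> getExternalizer(Class<T> type);
}

public class ExternalizerRegistry {
    // Walk the factories found on the classpath; no factories are registered
    // in this standalone sketch, so the loop body never runs here.
    public static <T> Externalizer<T> find(Class<T> type) {
        for (ExternalizerFactory f : ServiceLoader.load(ExternalizerFactory.class)) {
            Externalizer<T> ext = f.getExternalizer(type);
            if (ext != null) return ext;
        }
        return null; // caller falls back to default Java serialization
    }

    public static void main(String[] args) {
        // No META-INF/services entry exists here, so nothing is discovered.
        System.out.println(find(String.class)); // prints null
    }
}
```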
** Discussed whether JSR-350 should address serialization directly or leave it as a separate item that client/provider must negotiate privately.
*** Gave the example that two prominent patterns exist in serialization frameworks today: serialization logic inside the object itself, and serialization logic in a separate serializer.
*** JSR-350 can codify this pattern and allow the '''client''' to implement the serializer it wants to use.
*** Discussed making this a capability so that neither client nor provider is required to use it.
*** Mitch made a first pass at defining the capability for this (CustomSerializerCapability).
*** We'll review next week.
* Nov 20
** Attendees: Mitch, Paul
** Josh had work commitments and could not attend, but sent thoughts on marshalling via email.
** Discussed the various approaches to marshalling:
*** JBoss has a marshalling framework. Paul wrote up some slides and will post them on this wiki.
*** Coherence has a custom marshalling format. Mitch to summarize this on the wiki.
*** Other marshalling/serialization frameworks exist. Mitch to summarize.
*** Briefly discussed alternate keys. NoSQL often provides for this (a generation function is passed over the data value to derive the key value).
* Nov 13
** Attendees: Mitch, Paul, Josh, Ed
** Discussed the ConcurrentBatch changes by Paul.
** Brought Ed up to speed on Batch/ConcurrentBatch.
** Decided we need a pluggable BatchRetryStrategy interface. Paul will put it in; review next week.
** Started to discuss the features we think we need to address before EDR. Mitch suggested we address marshalling and alternate key generation, use, and management.
* Nov 6
** No meeting
* Oct 30
* Oct 23
* Oct 16
** Attendees: Mitch, Paul, Josh, Werner
** Discussed Paul's changes to the Batch design. He combined the ideas of Batch and BatchOperations and removed the explicit batch.begin()/end() methods.
** We discussed the merits of this approach.
*** On the plus side, it allows your code to work the same with BatchCapability and ConcurrentBatchCapability.
*** On the negative side, it means that all batch operations need to be encapsulated in a Batch implementation and code block. This seemed like a pretty mild inconvenience for the power it offers.
** We also added a DiscardBatchException that can be thrown from the batch.execute method.
*** This exception indicates that the batch logic has 'given up' and the batch should be discarded.
*** This one exception serves to prevent retry in ConcurrentBatch cases and allows the batch to communicate to the caller that the batch failed for an internal reason.
*** We also had the batch.execute method throw StateException to indicate that some error happened while trying to apply the batch changes at the provider.
** With these changes, we feel like we've got a system that effectively manages concurrency without the need to account for a specific concurrency strategy.
** We discussed what it means to have a provider that doesn't support (or isn't configured to use) batch capabilities.
*** When edits are applied is not specified; it may happen as late as (but not later than) StateMap.put() or StateContainer.setValue().
*** With batch, there are no atomicity guarantees. If the batch fails and is discarded, the app would need to compensate to change state values back to their original values.
*** When not using ConcurrentMapCapability or ConcurrentBatchCapability/etc., the expected behavior at the provider is to allow any value to be put, even if it overwrites a value that another process/thread put. No specific exceptions are thus needed to indicate such a condition (because the provider won't catch them).
** Paul said he would work on the Javadoc to make these expectations clear.
** Werner also mentioned that we may want to use SE 8 features like lambdas in our interfaces, or ask a JUG to add this support, in order to get the community excited about the use of State Management.
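A minimal sketch of the batch.execute contract described above: DiscardBatchException stops retries immediately, while StateException signals a provider-side failure that a caller (or a ConcurrentBatch implementation) may retry. The exception and interface names come from the minutes, but every signature here is assumed, not the real JSR-350 API:

```java
// Hypothetical provider-side failure while applying the batch.
class StateException extends Exception {
    StateException(String msg) { super(msg); }
}

// Thrown by batch logic to say "give up, do not retry".
class DiscardBatchException extends Exception {
    DiscardBatchException(String msg) { super(msg); }
}

// Hypothetical Batch shape: all operations live inside execute().
interface Batch {
    void execute() throws DiscardBatchException, StateException;
}

public class BatchDemo {
    // Retry on StateException up to maxRetries extra attempts; stop at once
    // on DiscardBatchException. Returns the successful attempt number,
    // 0 if the batch gave up, or -1 if retries were exhausted.
    static int run(Batch batch, int maxRetries) {
        for (int attempt = 1; ; attempt++) {
            try {
                batch.execute();
                return attempt;                      // applied successfully
            } catch (DiscardBatchException e) {
                return 0;                            // batch gave up: no retry
            } catch (StateException e) {
                if (attempt > maxRetries) return -1; // retries exhausted
            }
        }
    }

    public static void main(String[] args) {
        int[] failures = {2};                        // fail the first two attempts
        int attempts = run(() -> {
            if (failures[0]-- > 0) throw new StateException("conflict at provider");
        }, 5);
        System.out.println(attempts);                // prints 3
    }
}
```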
** Next week: discuss the retry configuration of ConcurrentBatchCapability, and discuss possibly using the same 'execute' semantics from Batch for local transactions.
* Oct 7
** Attendees: Mitch, Paul, Josh, Werner
** Discussed the concurrency APIs added in the mitch_concurrency branch.
** Simplified the BatchOperations interface to include no-arg execute/prepareForRetry methods.
** Experts will independently work with the new design this week and report back next week with their findings.
* Oct 2
** Attendees: Mitch, Paul, Josh, Werner
** Continued discussion of concurrency control.
** Decided that since optimistic vs. pessimistic didn't imply any functional interface, we should have a single capability to indicate that any concurrency control is supported/required.
** The single capability (ConcurrencyCapability) has a parameter indicating the 'strategy' for concurrency control.
** Began discussing what mechanism to surface to allow app developers to respond to the needs of the concurrency control strategy in use.
** For example, in optimistic concurrency, edits made to state may fail during apply (e.g. batch.end()) and would then need to be retried after taking into account any new state information that caused the conflict in the first place.
** How does the app developer account for this need to retry? Is there some specific exception a provider should throw when conflicts are detected?
** Paul suggested that perhaps we should tie the idea of retry to the batch concept. Perhaps a batch of operations can be encapsulated in some way. Then, if the batch fails to be applied in the provider, the provider itself might manage retrying the batch operations by handing the batch the new/updated state and rerunning the operations that make up the batch.
** Problems with this approach include: how would a provider know when to retry and when not to? For example, if a key needed by the batch were deleted, not just simply updated, would it be appropriate to keep retrying the batch?
** Thought: perhaps we could define a RetryableBatch interface that allows the app developer to provide the batch operations and also to participate in the decision-making process when determining whether the batch can be retried or not (e.g. RetryableBatch.isRetryable()).
* Sep 25
** Attendees: Mitch, Paul
** Missed last week because Paul and Mitch both had conflicts.
** Discussed concurrency control and how various vendors deal with it.
** Found strong evidence that there are two major flavors: optimistic and pessimistic.
** Vendor support of these two strategies varies, and appears not to be tied to any functional interface.
** Defined capability interfaces for optimistic/pessimistic.
** Vendor support for ConcurrentMap is quite common, so we decided to make a capability for that ConcurrentMap support as well.
** Began discussions on making support of StateMap an optional capability.
** Opened a discussion about what type of data we expect to be exposed via JSR 350.
*** Existing data? New data?
*** Mitch voted for JSR 350 being used primarily to store/manipulate new data; providers will use their back-end implementations to accommodate new data as defined via JSR 350 (e.g. a DB provider will create new tables to manage the data passing through JSR 350).
** Next week we should discuss the newly added capabilities and the 'old/new data' item above.
* Sep 11
** Attendees: Mitch, Paul, Josh, Werner
** Began talking about concurrency control.
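The RetryableBatch idea floated above, where the app both supplies the operations and participates in the retry decision, might be sketched like this. All names besides those mentioned in the minutes, and all signatures, are hypothetical:

```java
// Hypothetical failure type; keyDeleted is an illustrative detail letting the
// app distinguish "key was deleted" from "key was merely updated".
class StateException extends Exception {
    final boolean keyDeleted;
    StateException(boolean keyDeleted) { this.keyDeleted = keyDeleted; }
}

// Sketch of RetryableBatch: the provider consults isRetryable() after a failure.
interface RetryableBatch {
    void execute() throws StateException;
    boolean isRetryable(StateException cause);
}

public class RetryDemo {
    // Provider-side retry loop: rerun the batch after a conflict, but only
    // while the batch itself agrees that retrying makes sense.
    static boolean apply(RetryableBatch batch, int maxAttempts) {
        for (int i = 0; i < maxAttempts; i++) {
            try { batch.execute(); return true; }
            catch (StateException e) {
                if (!batch.isRetryable(e)) return false; // app vetoed the retry
            }
        }
        return false; // attempts exhausted
    }

    public static void main(String[] args) {
        RetryableBatch doomed = new RetryableBatch() {
            public void execute() throws StateException {
                throw new StateException(true);  // the key we need was deleted
            }
            public boolean isRetryable(StateException cause) {
                return !cause.keyDeleted;        // pointless to retry a deleted key
            }
        };
        System.out.println(apply(doomed, 5));    // prints false: retry vetoed
    }
}
```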
** Identified optimistic and pessimistic concurrency as the two main strategies, with explicit/implicit flavors within those categories.
** Began discussing a ConcurrentCapability that would return a ConcurrentMap that could be used for optimistic concurrency.
** Began writing up a use case example with ConcurrentMap and optimistic concurrency.
** Next week we should complete this example, *and* see if we can make use of ConcurrentMap for *both* optimistic and pessimistic concurrency, avoiding the need for a separate interface for the two cases.
* Sep 4
** Attendees: Mitch, Paul, Josh
** Walked through some of Josh's questions and concerns:
*** Timing and lifecycle of providers. How do we know providers are available? Answer: TBD, but probably the platform's responsibility.
*** Security? Answer: probably the platform's responsibility, to avoid security APIs in our spec. We'll revisit when we do the RI.
*** Marshalling of key/value? TBD: we did talk about this in earlier iterations of the API, but need to revisit.
** From there we talked about the current Connection/StateMap APIs. Decided that the Key class and the methods for key management on Connection weren't used.
** Then talked about the approach to exceptions. We probably want to revisit this and have specific exceptions for common failure cases.
* Aug 28
** Attendees: Mitch, Paul, Josh Dettinger (new Expert from IBM for JSR-350)
** Welcome, Josh!
** Gave Josh a brief introduction to JSR-350.
** Josh works on IBM's distributed caching product.
** Went through Mitch's changes for the week (EditSession, shared GroupedOperationCapability).
** We decided that since an edit session and a transaction are independent, their class hierarchies should be too.
** We made LocalTransactionCapability and EditSessionCapability independent classes (no shared ancestor).
** We renamed EditSessionCapability back to BatchCapability and factored a Batch interface out. So you begin a batch on BatchCapability and end a batch with Batch.end().
** The same semantics were put in place for LocalTransactionCapability/LocalTransaction.
** Mitch to take the group's edits and merge/push to master.
** Josh to look over the spec and source this week.
* Aug 21
** Attendees: Mitch, Paul, Werner
** Went through Paul's changes related to data semantics. A few tweaks here and there, but we had general agreement.
** This week Mitch will take Paul's changes into account in his branch, and we'll review Mitch's branch next week.
* Aug 14
** Attendees: Mitch, Paul, Werner
** Discussed data semantics.
** Originally, we said we'll treat a value in StateContainer as a disconnected copy.
** Originally said StateContainer is Closeable.
** Originally said closing a container flushes the value to the provider.
** What about references within a value to other values?
*** Assume that a value is standalone?
*** If not, the provider might need to define an extra capability for managing references across values.
*** Maybe reference handling is the job of the marshalling layer.
*** Example: PO1 and PO1-item1 are manipulated separately via StateContainer. Then PO1 is closed. How do edits to PO1-item1 get applied?
*** Maybe we have a 'unit of work' that tracks open state containers, and could give a marshaller the info needed to close/marshal related items.
** See JSR 323 for possible pointers on data handling and marshalling.
** Upon further discussion, we're thinking of creating two separate capabilities:
*** SessionCapability - manages a session spanning zero or more operations against zero or more state values. Closing a session flushes all edits to all state containers touched during the session. There are no atomicity guarantees. Question: should a provider flush StateContainer objects in the order they were touched/edited?
*** LocalTransactionCapability/XATransactionCapability - manages a transaction spanning zero or more operations. Question: should a transaction commit automatically end any active session, and a tx rollback discard any active session?
** HOMEWORK: decide if there should be any implied or explicit relationship between transaction and session.
* Aug 7
** Attendees: Mitch, Werner
** Discussed JSR 362 and tie-ins to JSR 350.
** Discussed that JSR 350 might be one facet of an overall improved full-lifecycle management story in EE 8.
*** Basically, managing a complex layered application today is difficult because the components of that app tie into its environment by storing persistent data, log messages, configuration overrides, etc. With each component potentially storing things in different locations using different technologies, it becomes difficult to move or reprovision (or even back up) the application.
*** JSR 350 could help standardize how application components store persistent state, making the migration of the state for an application an exercise in migrating StateConnection instances (a use case we haven't considered yet, but should).
*** Improved logging frameworks might be a good idea for EE 8 (java.util.logging is not widely adopted).
*** Future configuration efforts/JSRs might make the migration of configuration easier.
** Mitch to actually begin a dialog with the JSR 362 (Portlet) folks this week (didn't get to it last week).
* Jul 31
** Attendees: Mitch, Paul, Werner
** We discussed that JSR 362 (Portlet State) might use JSR 350 in its RI to manage the actual state data for PortletState. Mitch to talk with Oracle reps on that JSR.
** We walked through StateConnection, StateContainer, StateMap, etc. to give an overview of their purpose and use.
*** The spec is out of date and needs updating to match what we have in the API today.
*** The spec needs to talk about state data semantics:
**** Is the value a local disconnected copy?
**** Is it a live/remote reference?
**** What are the persistence boundaries of the value, and how can the app understand/control them?
** We then wrote a simple scenario method that used our configured connection to manipulate a PurchaseOrder object.
** We fine-tuned how the app developer gets hold of the LocalTransactionManager instance from a connection. This looks good and usable.
** We discussed the need to define the mechanism(s) for marshalling. Who does what in order to get a PurchaseOrder marshalled and unmarshalled?
** We began discussion on data semantics (local vs. remote, persistence boundaries, concurrency control, etc.).
** We'll continue the data semantics discussion next time, followed by marshalling (possibly some number of weeks from now).
* Jul 24
** Code review of the API so far.
** Pushed code to the Git repository.
* Jul 17
** We met, but I didn't capture minutes.
* Jul 10
** Attendees: Mitch, Paul, Werner
** We went over pseudo-code representing the scenario steps we've developed over the last couple of weeks.
** The code Paul put together looks good, and we decided to use it as the basis for our go-forward API.
** Mitch will clean up the current svn repository to allow Paul to check in his initial code.
* Jul 3
** Attendees: Mitch, Paul, Werner
** Continued fleshing out our scenario from last time.
** We got to the point of actually instantiating a StateConnection and described how that works from either path (unknown provider and ProviderQuery, or known provider name).
** Paul suggested the EG members should each take the scenario steps and render them as a prototype scenario/API, and then compare the resulting code in the next meeting.
** I'm thinking I'll create a top-level package called state.mitch and flesh out the parts of the API required to write the scenario code.
* Jun 26
** Attendees: Mitch, Paul
** We constructed a scenario involving an application developer working to port an existing standalone JDBC-based app to be a cloud-based app, in a public cloud, that uses State Management for its state instead of JDBC.
** We walked through the flow diagram we've constructed and fleshed out the details of each step, writing them down in the spec document's 'Use Cases' section.
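The PurchaseOrder scenario walked through above might render roughly as follows. StateConnection and StateContainer are names from the minutes, but the in-memory 'provider' and every method signature here are invented purely for illustration, not the JSR-350 API:

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative domain object from the scenario.
class PurchaseOrder {
    String status = "OPEN";
}

// Hypothetical container: edits flush to the provider no later than setValue().
interface StateContainer<T> {
    T getValue();
    void setValue(T value);
}

// Hypothetical connection backed by an in-memory map standing in for a provider.
class StateConnection {
    private final Map<String, Object> store = new HashMap<>();

    @SuppressWarnings("unchecked")
    <T> StateContainer<T> getContainer(String key, Class<T> type) {
        return new StateContainer<T>() {
            public T getValue() { return (T) store.get(key); }
            public void setValue(T value) { store.put(key, value); }
        };
    }
}

public class ScenarioDemo {
    // Store a new purchase order, then re-read and update it via the container.
    static String process(StateConnection conn) {
        StateContainer<PurchaseOrder> po = conn.getContainer("po-1", PurchaseOrder.class);
        po.setValue(new PurchaseOrder());
        PurchaseOrder current = po.getValue();
        current.status = "SHIPPED";
        po.setValue(current);
        return po.getValue().status;
    }

    public static void main(String[] args) {
        System.out.println(process(new StateConnection())); // prints SHIPPED
    }
}
```

Whether getValue() hands back a disconnected copy or a live reference is exactly the data-semantics question the minutes leave open; this sketch happens to behave like a live reference.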
** Next week we'll pick up and try to finish this first scenario.
* Jun 19
** Attendees: Mitch, Paul
** Need to address security enforcement. Perhaps have the spec say that the platform must secure the StateConnection, but not dedicate any API to this specifically.
** Talked about the configuration ID. Leaning toward the app owning the ID and requiring that it be globally unique. Need more discussion on how to get the app a globally unique namespace; that namespace would then qualify all other IDs used between the app and JSR 350.
** Suggested that the 'platform implementor' (the party that implements the JSR-350 SPIs) will need to provide a 'default state management provider' that can be used to store StateConnection configuration using the configuration ID discussed above.
*** NOTE: StateConnection configuration is connection-level config, like a database URL, user/password, etc., and is *NOT* deeper storage configuration like a database schema would be.
*** This led to a discussion of 'dynamic provisioning' of resources needed to support a StateConnection. An analogy would be how JPA allows the definition of an EntityManagerFactory where the provider's database schema can be dynamically created based on name/value metadata given at the JPA level.
*** Some providers may support dynamic provisioning and some may not. This suggests we may want a DynamicProvisioningCapability as a standard capability in the JSR 350 spec.
* Jun 12
** Attendees: Mitch, Paul, Werner
** Went over Paul's flowchart updates.
*** The goal of the flowchart is to define a 'roadmap' for how an application uses JSR 350.
*** Paul's slide 2 represents the top-level runtime flow, and slide 3 represents 'design time' for a new configuration that can be used at runtime.
*** Discussed the slide 2 'Request StateConnection via configuration identifier' node.
**** This node implies that the app either knows about a prior config ID or generates a new one.
**** We tabled this discussion for now, but I wonder if we want providers to control these IDs, not the app.
*** Jumped into slide 3, which spawned a couple of different discussions, summarized below.
** Discussed what a capability is:
*** A capability has a name and an optional functional interface.
*** Some capabilities might require configuration.
*** We spent a good deal of time discussing configuration of capabilities.
**** An example is the transactional capability.
**** TransactionalCapability is the capability name we'll use for now (not sure how we'll represent a capability in the API yet, but maybe as a class named TransactionalCapability).
**** TransactionalCapability defines a functional interface, call it TransactionalStateConnection, that contains rollback/commit methods the app can use.
**** If the app requests TransactionalCapability, it will be matched, during capability query processing, with providers that provide the TransactionalCapability.
**** We discussed details of the query process; I'll summarize them below in a separate bullet.
**** Let's assume a given provider, 'ProviderX', is matched to the app's capability query.
**** It is possible that TransactionalCapability may define or require extra configuration in order to behave as the app intends at runtime.
***** A TransactionIsolation parameter seems like a likely candidate for a 'standard' configuration parameter that might be defined with the TransactionalCapability itself.
***** But other tuning parameters, maybe specific to ProviderX, seem likely too. For example, ProviderX may have a configurable concurrency strategy for locking (e.g. optimistic with collision detection vs. pessimistic with locks).
***** How can ProviderX make the app aware of this custom configuration? Should JSR 350 define a mechanism (e.g. JavaBeans) by which the provider should expose its custom config to the app? Or should we just not try, and tell application writers that they will need to consult with the provider (via documentation, etc.) out of band in order to achieve any custom configuration?
***** Honestly, I'd like to investigate a standard way to expose provider configuration, because I think leaving it to out-of-band negotiations between app and provider makes the use of JSR 350 in a cloud much more difficult and really prevents a nice 'IDE-based' configuration experience regardless of provider. But I'm open to discussions on this.
** We also discussed how the capability query will work.
*** We could accept a list of capabilities and return a list of matching providers, allowing the app to iterate the list, interrogate each provider for a list of any optional/nice-to-have capabilities it offers, and then ultimately pick the provider it wants, by name.
*** Or, we could accept a 'filter' object of some kind, along with a list of capabilities, that the query logic invokes against any provider that matches the required capabilities. The filter would evaluate any nice-to-have capabilities for the provider, the provider's name, etc., and calculate a 'rank' for the provider. The query logic would then pick, on behalf of the app, the provider that was assigned the best (I assume lowest) rank. I personally like this second approach because it allows us to better hide the concept of a 'provider' behind specific interfaces like the Filter, not exposing the full provider to the app directly.
** We also agreed we'll begin meeting weekly to get this JSR moving quickly.
** Next week: continue discussion on capability config and query mechanics.
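The second, filter/rank flavor of the capability query could be sketched like this, assuming (as the minutes do) that the lowest rank wins. None of these types are actual JSR-350 API; they are invented to make the two-step match-then-rank idea concrete:

```java
import java.util.Comparator;
import java.util.List;
import java.util.Optional;
import java.util.Set;

// Hypothetical read-only view of a provider exposed to the filter,
// hiding the full provider object from the app.
interface ProviderInfo {
    String name();
    Set<String> capabilities();
}

// Hypothetical filter: smaller rank is better.
interface ProviderFilter {
    int rank(ProviderInfo provider);
}

public class QueryDemo {
    // Keep only providers offering every required capability, then let the
    // filter rank the survivors and pick the best one on the app's behalf.
    static Optional<ProviderInfo> query(List<ProviderInfo> providers,
                                        Set<String> required,
                                        ProviderFilter filter) {
        return providers.stream()
                .filter(p -> p.capabilities().containsAll(required))
                .min(Comparator.comparingInt(filter::rank));
    }

    record Provider(String name, Set<String> capabilities) implements ProviderInfo {}

    public static void main(String[] args) {
        List<ProviderInfo> providers = List.<ProviderInfo>of(
                new Provider("ProviderX", Set.of("Transactional", "Batch")),
                new Provider("ProviderY", Set.of("Transactional", "Batch", "ConcurrentMap")));
        // Required: Transactional. Nice-to-have ConcurrentMap lowers the rank.
        Optional<ProviderInfo> best = query(providers, Set.of("Transactional"),
                p -> p.capabilities().contains("ConcurrentMap") ? 0 : 1);
        System.out.println(best.get().name()); // prints ProviderY
    }
}
```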