[JPA_SPEC-80] A standard way to obtain custom SQLSTATE issued by batches, triggers and stored procedures Created: 21/May/14  Updated: 21/May/14

Status: Open
Project: jpa-spec
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: New Feature Priority: Major
Reporter: mkarg Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified


 Description   

Virtually any RDBMS allows defining custom SQLSTATEs. This becomes useful when writing batches, triggers and stored procedures, as the business logic inside those SQL programs can stop processing and inform the caller of business-level exceptions (like "Account limit exceeded." encoded as SQLSTATE 'ATM13' when a user invokes "CALL ATM_withdraw(5000.00)" in a banking application). Sophisticated front ends (hence, calling applications) might want to differentiate between several custom SQLSTATEs (e.g. SQLSTATE 'ATM13' versus 'ATM27', which might indicate a completely different business-level reason to stop processing of the SQL program).

JDBC defines an unambiguous way to obtain these custom SQLSTATEs via SQLException.getSQLState(). Unfortunately, JPA does not, but instead forces product-specific workarounds. While some standard outcomes are provided as specialized PersistenceExceptions (e.g. EntityExistsException, EntityNotFoundException, etc.), there is no special exception for "custom causes". Also, it is not clearly stated that EACH compliant entity manager MUST attach the root-cause SQLException to the PersistenceException. While EclipseLink does provide this SQLException, it is not the direct cause of the PersistenceException. It even seems to be valid for a compliant entity manager not to provide the causing SQLException at all.

As a solution I could imagine two alternatives:

(A) Define in the JPA spec that in case an entity manager is using JDBC to execute SQL statements, any thrown SQLExceptions MUST be returned DIRECTLY by PersistenceException.getCause().

(B) Define in the JPA spec that in case a database backend operation results in an unknown SQLSTATE, the returned PersistenceException MUST be an instance of the new class UnknownSQLState, and the SQLSTATE can be obtained by invoking UnknownSQLState.getSQLState().

While (A) is expected to be rather simple for ORM vendors to implement, it is in fact (B) that would provide the most unambiguous and simple solution for the application programmer.
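A minimal sketch of what alternative (B) could look like. The class name UnknownSQLState and its getSQLState() accessor are taken from the proposal above; extending RuntimeException is only a stand-in for javax.persistence.PersistenceException so the sketch compiles without the JPA API on the classpath:

```java
// Sketch of alternative (B): a dedicated exception type carrying the custom
// SQLSTATE. RuntimeException stands in for javax.persistence.PersistenceException
// so that this compiles without the JPA API on the classpath.
public class UnknownSQLState extends RuntimeException {

    private final String sqlState;

    public UnknownSQLState(String message, String sqlState) {
        super(message);
        this.sqlState = sqlState;
    }

    /** Returns the custom SQLSTATE raised by the batch, trigger or stored procedure. */
    public String getSQLState() {
        return sqlState;
    }

    public static void main(String[] args) {
        UnknownSQLState e = new UnknownSQLState("Account limit exceeded.", "ATM13");
        System.out.println(e.getSQLState());
    }
}
```

A front end could then catch UnknownSQLState and branch on 'ATM13' versus 'ATM27' directly, instead of digging through provider-specific cause chains.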






[JPA_SPEC-75] @Index.columnList should be an array Created: 11/Mar/14  Updated: 11/Mar/14

Status: Open
Project: jpa-spec
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Improvement Priority: Major
Reporter: roxton Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified


 Description   

In the JPA 2.1 specification, @Index has a String columnList property, and the specification presents this as a comma-delimited list. I propose that the specification be modified to make this property of type String[], both as a less surprising syntax and as an easier leap from Hibernate syntax.
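For illustration, here is the kind of string parsing that the current comma-delimited form pushes onto providers and tooling, and which a String[] property would make unnecessary (the column names below are made up):

```java
import java.util.Arrays;

public class ColumnListParsing {

    // What any consumer of @Index(columnList = "...") must do today:
    // split the single string on commas and trim each entry, keeping
    // optional per-column ASC/DESC suffixes intact.
    public static String[] parseColumnList(String columnList) {
        return Arrays.stream(columnList.split(","))
                     .map(String::trim)
                     .toArray(String[]::new);
    }

    public static void main(String[] args) {
        // With a String[] property, this array could be written directly
        // in the annotation instead of being encoded in one string.
        String[] columns = parseColumnList("lastname ASC, firstname ASC");
        System.out.println(Arrays.toString(columns));
    }
}
```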






[JPA_SPEC-79] EntityManager.createStoredFunctionQuery -- Using return values instead of result sets Created: 19/May/14  Updated: 19/May/14

Status: Open
Project: jpa-spec
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: New Feature Priority: Major
Reporter: mkarg Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified


 Description   

JPA 2.1 provides a facility to call stored procedures, which greatly improves performance in many situations.

Unfortunately, it does not yet provide the same feature richness as JDBC. One particular missing feature is the ability to read the result of a stored FUNCTION: the technical way to read it is defined by the JDBC API, but not by JPA!

In JDBC, a return value can simply be queried by reading the OUT parameter at index 0, which is valid for all escaped functions when using the syntax {?=call myfunc(...)}. In JPA, providing a leading question mark leads to syntax errors. While a native query using "SELECT myfunc(...)" certainly does work, it leads to unwanted performance overhead and code clutter, as it creates a CURSOR and wraps it in a JDBC ResultSet, possibly inducing additional network roundtrips to get that CURSOR's description and first row. JDBC avoids this by simply requesting the sole function result value as a side effect of the call, which is available in the same roundtrip.

Hence, to spare overhead, improve performance and reduce application code size, it would be really great if JPA learned to deal with stored FUNCTION result values, just as JDBC has been able to for many years.

A proposed syntax would be:

StoredFunctionQuery q = em.createStoredFunctionQuery("MyFunc(...)");
q.registerOutputParameter(0, Integer.class, OUT);
q.execute();
int result = q.getParameter(0); // Returns the value of '?=' in JDBC






[JPA_SPEC-78] TupleTransformer Created: 15/Apr/14  Updated: 13/Aug/14

Status: Open
Project: jpa-spec
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: New Feature Priority: Major
Reporter: c.beikov Assignee: Unassigned
Resolution: Unresolved Votes: 2
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified


 Description   

The constructor syntax in JPQL is limited by the fact that the class must be visible to the classloader of the persistence provider. Also, one might want to apply a custom transformation strategy based on metadata that does not use constructors but factories, builders or setters. To overcome these limitations I propose the addition of a TupleTransformer interface which can be implemented by a user to provide custom strategies.

TupleTransformer.java
public interface TupleTransformer<X> {
  List<X> transform(List<Tuple> tuples);
  X transform(Tuple tuple);
}

and an addition to Query and TypedQuery:

Query.java
public interface Query {
  // other methods
  Query setTupleTransformer(TupleTransformer<?> tupleTransformer);
}
TypedQuery.java
public interface TypedQuery<X> extends Query {
  // other methods
  <Y> TypedQuery<Y> setTupleTransformer(TupleTransformer<Y> tupleTransformer);
}

For reference see the ResultTransformer of Hibernate: http://docs.jboss.org/hibernate/orm/4.3/javadocs/org/hibernate/transform/ResultTransformer.html
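To illustrate how the proposed interface might be used, here is a self-contained sketch: Tuple is replaced by a minimal stand-in, the list variant is expressed as a default method for brevity, and the PersonView DTO and its factory are made up:

```java
import java.util.ArrayList;
import java.util.List;

public class TupleTransformerDemo {

    // Minimal stand-in for javax.persistence.Tuple.
    interface Tuple {
        Object get(int index);
    }

    // The interface proposed above; the list overload is a default method
    // here so the interface stays functional and lambda-friendly.
    interface TupleTransformer<X> {
        X transform(Tuple tuple);

        default List<X> transform(List<Tuple> tuples) {
            List<X> result = new ArrayList<>();
            for (Tuple t : tuples) {
                result.add(transform(t));
            }
            return result;
        }
    }

    // Made-up DTO built through a factory method rather than a constructor.
    static final class PersonView {
        final String name;
        final int age;

        private PersonView(String name, int age) {
            this.name = name;
            this.age = age;
        }

        static PersonView of(String name, int age) {
            return new PersonView(name, age);
        }
    }

    static String demo() {
        // Factory-based strategy: no constructor needs to be visible
        // to the persistence provider's classloader.
        TupleTransformer<PersonView> transformer =
                t -> PersonView.of((String) t.get(0), (Integer) t.get(1));

        Tuple row = index -> index == 0 ? "Alice" : 30;
        PersonView view = transformer.transform(row);
        return view.name + ":" + view.age;
    }

    public static void main(String[] args) {
        System.out.println(demo());
    }
}
```

Because PersonView is created through a factory method, this sidesteps exactly the classloader/constructor limitation of the JPQL constructor syntax described above.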



 Comments   
Comment by c.beikov [ 13/Aug/14 ]

EclipseLink provides something similar (org.eclipse.persistence.queries.QueryRedirector) which can be used to achieve the same as Hibernate's ResultTransformer.
If I understood it right, OpenJPA supports this with org.apache.openjpa.kernel.exps.AggregateListener, so every major JPA provider already has some form of this feature. Time to standardize!





[JPA_SPEC-71] Add subquery(EntityType) to javax.persistence.criteria.CriteriaQuery Created: 27/Jan/14  Updated: 27/Jan/14

Status: Open
Project: jpa-spec
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Improvement Priority: Major
Reporter: koehn Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified

Tags: entitytype, subquery

 Description   

If using dynamic entities not defined by classes (as is the case when using EntityMode.MAP), there's no way to create a subquery. This is because javax.persistence.criteria.CriteriaQuery.subquery() takes an entity Class as an argument, and unlike javax.persistence.criteria.CriteriaQuery.from() there's no overloaded method to subquery by EntityType. This severely limits what can be done using entities mapped with EntityType for which there is no Java class.






[JPA_SPEC-70] Allow two phase bootstrap approach to creating the EntityManagerFactory Created: 09/Jan/14  Updated: 09/Jan/14

Status: Open
Project: jpa-spec
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Improvement Priority: Major
Reporter: smarlow Assignee: Unassigned
Resolution: Unresolved Votes: 5
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified


 Description   

Introduce a standard two phase persistence unit bootstrap and require that certain services are not accessed until the second phase. This will improve how JPA/CDI + JPA/@DataSourceDefinition work together.

In a perfect world, the first persistence unit bootstrap phase should not use the datasource or the CDI bean manager (instead wait for the second phase). Class enhancing/rewriting should also occur in the first phase (before application classes have been read).

This will help avoid use of the CDI bean manager too early (which otherwise can cause application entity classes to be loaded before they are enhanced/rewritten).

This will help avoid use of a datasource that is being deployed with the application (@DataSourceDefinition) but may not be available for use yet.

Also see discussion at https://java.net/projects/jpa-spec/lists/jsr338-experts/archive/2013-06/message/0.



 Comments   
Comment by arjan tijms [ 09/Jan/14 ]

+1!

This will also help or even be the solution for JAVAEE_SPEC-30

One question though; the two phases are thus needed for @DataSourceDefinition, but what about data sources that are defined in web.xml, ejb-jar.xml or application.xml?

Can't they be scanned and processed first without requiring the two phases?

Comment by smarlow [ 09/Jan/14 ]

One question though; the two phases are thus needed for @DataSourceDefinition, but what about data sources that are defined in web.xml, ejb-jar.xml or application.xml?

Data sources that are defined in web.xml, ejb-jar.xml or application.xml should also benefit from the change to bootstrap the persistence unit via two phases.

Can't they be scanned and processed first without requiring the two phases?

It will vary between application server implementations, when the data sources are available. On application servers that can start the data source earlier, supporting the two-phase pu bootstrap will also improve CDI injection in entity listeners (creating the bean manager will not prevent entity class enhancement).





[JPA_SPEC-69] Lift restriction limiting select clause to single-valued expression Created: 07/Jan/14  Updated: 01/Dec/14

Status: Open
Project: jpa-spec
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: New Feature Priority: Major
Reporter: arjan tijms Assignee: Unassigned
Resolution: Unresolved Votes: 2
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified


 Description   

Section 4.8 of the JPA spec defines that a select expression is limited to a single-valued path expression.

Since there doesn't seem to be a clear reason for this restriction, I would like to request that it be removed. This will make it easier to e.g. construct a DTO from an entity where a collection attribute needs to be included.

See also: https://java.net/projects/jpa-spec/lists/users/archive/2014-01/message/0



 Comments   
Comment by c.beikov [ 13/Aug/14 ]

This is fixed in JPA 2.1 isn't it?

Comment by kithouna [ 01/Dec/14 ]

This is fixed in JPA 2.1 isn't it?

No, not fixed. Still open!





Support for more automatic values besides @GeneratedId (JPA_SPEC-36)

[JPA_SPEC-51] @PrePersist / @PreUpdate / @PostPersist / @PostUpdate Created: 23/Feb/13  Updated: 23/Feb/13

Status: Open
Project: jpa-spec
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Sub-task Priority: Major
Reporter: mkarg Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified


 Description   

New annotations "@PrePersist" / "@PreUpdate" / "@PostPersist" / "@PostUpdate" are used to define the event at which the values of the above annotations are to be injected.

@PrePersist will be the most typical use case.

Example:

@PrePersist @CurrentUser String createdBy; // Injects who created this record.
@PrePersist @CurrentTimestamp Date createdOn; // Injects when this record was created.
@PreUpdate @CurrentUser String lastUpdatedBy; // Injects who last updated this record.
@PreUpdate @CurrentTimestamp Date lastUpdatedOn; // Injects when this record was last updated.






[JPA_SPEC-57] Support schema drop on restart Created: 11/May/13  Updated: 11/May/13

Status: Open
Project: jpa-spec
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: New Feature Priority: Major
Reporter: reza_rahman Assignee: Unassigned
Resolution: Unresolved Votes: 2
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified


 Description   

javax.persistence.schema-generation.database.action=drop-and-create only drops the schema on undeploy/deploy. For rapid/agile/iterative development via IDEs, developers very frequently start and stop the app/server without necessarily triggering a deploy/undeploy cycle, hence not performing a drop. In these scenarios, developers would want a schema drop to happen on app/server stop and start. Hibernate, for example, supports this feature.

Such a feature could be supported via an additional property like javax.persistence.schema-generation.database.drop-on-restart=true, that is defaulted to false.



 Comments   
Comment by reza_rahman [ 11/May/13 ]

Do let me know if anything needs to be explained further - I am happy to help.

Please note that these are purely my personal views and certainly not of Oracle's as a company.

Comment by Mitesh Meswani [ 11/May/13 ]

javax.persistence.schema-generation.database.action=drop-and-create only drops the schema on deploy.

There might be scenarios where we want to clean up the database on undeploy. It would be nice to have an option where EMF.close() triggers a drop.





[JPA_SPEC-58] script generation : same table name, different schema are merged Created: 15/May/13  Updated: 15/May/13

Status: Open
Project: jpa-spec
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Bug Priority: Major
Reporter: laps Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified
Environment:

Windows : XP SP3
Glassfish : Oracle GlassFish Server 3.1.2.2 (build 5)
Eclipse : Indigo Build id: 20110615-0604
JRE : jdk1.6.0_33


Tags: create, ddl, eclipselink, generate, script, table

 Description   

Write two entities with the same table name but different schemas:

@Entity
@Table(name="tablename", schema="schema1")
public class MyEntitySchema1 {
}

@Entity
@Table(name="tablename", schema="schema2")
public class MyEntitySchema2 {
}

Configure persistence.xml like this:
...
<provider>org.eclipse.persistence.jpa.PersistenceProvider</provider>
<exclude-unlisted-classes>false</exclude-unlisted-classes>
<properties>
<property name="eclipselink.ddl-generation" value="create-tables"/>
<property name="eclipselink.ddl-generation.output-mode" value="sql-script"/>
<property name="eclipselink.create-ddl-jdbc-file-name" value="create.sql"/>
<property name="eclipselink.drop-ddl-jdbc-file-name" value="drop.sql"/>
...
</properties>
...

And then check the generated script create.sql:
CREATE TABLE schema2.tablename (...)
ALTER TABLE schema2.tablename ADD CONSTRAINT ...

Nothing about schema1.tablename in the script file.

If I change one thing (e.g. update a letter of tablename from lowercase to uppercase), I get both tables generated. (Those tables already exist, so this is not a solution for me.)






[JPA_SPEC-54] Clarify behavior of RESOURCE_LOCAL EMF instantiated by container Created: 12/Apr/13  Updated: 12/Apr/13

Status: Open
Project: jpa-spec
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Task Priority: Major
Reporter: Mitesh Meswani Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified


 Description   

It is possible to inject/lookup an EMF that is derived from a RESOURCE_LOCAL pu. For such EMFs, we need to clarify the behavior for the following:

1. Should we disallow specifying jta-data-source?

2. If the persistence.xml specifies both non-jta-datasource and javax.persistence.jdbc.* properties, which one should take precedence?

3. If the persistence.xml specifies no database connection information, should we default non-jta-datasource?

4. If the persistence.xml just specifies javax.persistence.jdbc.* properties, what should be the behavior? Should we still default non-jta-datasource?






[JPA_SPEC-61] Retrieving primary key of lazy relationship Created: 18/Jun/13  Updated: 15/Apr/14

Status: Open
Project: jpa-spec
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Improvement Priority: Major
Reporter: mkarg Assignee: Unassigned
Resolution: Unresolved Votes: 2
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified
Environment:

JPA 2.0



 Description   

My server passes entities to clients as XML representations, simply by marshalling them using JAXB. This works well, but there is one case where it fails: LAZY relationships. As I just want to include a URL for each reference (so clients can decide on their own whether to load it in turn), I need access to the primary key. But, as it is a LAZY reference, the attribute itself is NULL, and if I use the getter(), then the complete entity is instantiated, which I do not need and do not want. So it would be really great if there were a solution which makes JAXB and JPA work more closely together, like an API to get the primary keys of LAZY relationships or, even better, an automatic integration.



 Comments   
Comment by mkeith [ 18/Jun/13 ]

Markus,

With JPA 2.1 you could create an entity fetch graph for the related object and only load the PK. You would effectively have an unloaded object (except that it would have the PK) with its remaining state set to lazily load. There are some differences between this and having the whole object be lazily loaded since if the object ends up getting triggered then, depending upon the implementation, each of the attributes may end up getting loaded separately, but it could be configured to be fairly close I think. Of course, this would be a bit of a pain if you needed to do this for every lazy loaded attribute, but the advantage would be that it would also work pretty well for collection-oriented relationships.

-Mike

Comment by mkarg [ 19/Jun/13 ]

Unfortunately didn't find a tutorial on this on the web.

Currently I am doing this...

Foo f = em.find(Foo.class, myFooPK);
jaxbMarshaller.marshal(f);

...to get the root. Can you provide an example how to use that "fetch graph" thing to get the PKs of f's LAZY relations?

Thanks!
-Markus

Comment by mkeith [ 19/Jun/13 ]

From your first comment it sounded like you already had something (e.g. an XMLAdapter or some such thing?) on the relationship attribute to control marshalling a URL instead of the target Bar related object. Assuming that is the case (you need to stop the JAXB marshaller from continuing to traverse the Bar object) then you just need to access the PK of the Bar, right? If this is your usecase then you could do the following:

Define a named entity graph for Bar:

@Entity
public class Bar {
    @Id long id;

    public long getId() { return id; }

    ...
}

@XmlRootElement
@Entity
public class Foo {
    ...
    @XmlJavaTypeAdapter(MyBarToUrlAdapter.class)
    @OneToOne Bar bar;
    ...
}

and then to read the Foo instance, you would do:

Map<String,Object> props = new HashMap<String,Object>();
props.put("javax.persistence.fetchgraph", em.createEntityGraph(Bar.class));
Foo f = em.find(Foo.class, myFooPK, props);
...

From within your adapter, or whatever you use to convert a Bar to a URL, you could get the id of the bar by calling getId() and the rest of the object will likely not be loaded (likely, because lazy does not imply not loaded). If this doesn't describe your problem then we can take this discussion offline.

Comment by arjan tijms [ 22/Jun/13 ]

It would be great if there was a way to just grab the PK of any entity, regardless of how it was loaded.

After all, we know the PK must be there as JPA uses it to lazy load the associated entity. It's just that the default mapping that we normally do with JPA doesn't give us access to this PK.

Maybe PersistenceUnitUtil would be an ideal place to put a utility method that can do this. It already contains functionality that's in the same category.

E.g.

Given

@Entity
public class Foo {

    @Id
    Long id;

    @OneToOne(fetch = LAZY)
    Bar bar;

    // + getters/setters
}

@Entity
public class Bar {

    @Id
    Long id;

    // + getters/setters
}

We could then grab the PK of Bar given an instance of Foo and an EntityManager em, as follows:

PersistenceUnitUtil persistenceUnitUtil = em.getEntityManagerFactory().getPersistenceUnitUtil();

Long barId;
if (persistenceUnitUtil.isLoaded(foo, "bar")) {
    barId = foo.getBar().getId();
} else {
    barId = persistenceUnitUtil.getId(foo, "bar");
}

Note that the if/else is just for the example here. PersistenceUnitUtil#getId should be able to grab the Id of a given relation independent of its loaded status.

An alternative would be to explicitly map an extra instance field in Foo to the FK column in the table to which Foo is mapped, and then use a proprietary "read only" annotation, e.g.

@Entity
public class Foo {

    @Id
    Long id;

    @OneToOne(fetch = LAZY)
    Bar bar;

    @Column(name = "bar_id")
    @ReadOnly // made-up proprietary annotation, but many providers have something like this
    Long barId;

    // + getters/setters
}

I've seen this workaround actually being used in practice, but it's not so nice of course.

Comment by c.beikov [ 15/Apr/14 ]

@arjan: Which persistence provider are you using? In Hibernate this is no problem, and I also don't see why other persistence providers would load the entity just because you access the id. I don't know if the spec says anything about the behavior in that case, but it would definitely be nice for portability reasons to have a defined behavior.





[JPA_SPEC-76] Allow specification for null handling in order-by expressions (JPQL and Criteria API) Created: 17/Mar/14  Updated: 13/Aug/14

Status: Open
Project: jpa-spec
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Improvement Priority: Major
Reporter: Oliver Gierke Assignee: Unassigned
Resolution: Unresolved Votes: 5
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified


 Description   

§4.9 of the specification explicitly states:

SQL rules for the ordering of null values apply: that is, all null values must appear before all non-null values in the ordering or all null values must appear after all non-null values in the ordering, but it is not specified which.

However, pretty much all of the important JPA providers support a nulls first/nulls last clause. So while it is already possible to define the strategy, it would be cool if one could reliably use it on top of JPA.
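The two orderings in question correspond to java.util.Comparator's nullsFirst/nullsLast decorators; the following runnable snippet illustrates the semantics that a standardized NULLS FIRST / NULLS LAST option would pin down:

```java
import java.util.Arrays;
import java.util.Comparator;
import java.util.List;
import java.util.stream.Collectors;

public class NullOrderingDemo {

    static List<String> sorted(List<String> values, Comparator<String> order) {
        return values.stream().sorted(order).collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<String> names = Arrays.asList("mueller", null, "adams");

        // NULLS FIRST: all null values precede all non-null values.
        System.out.println(sorted(names, Comparator.nullsFirst(Comparator.naturalOrder())));

        // NULLS LAST: all null values follow all non-null values.
        System.out.println(sorted(names, Comparator.nullsLast(Comparator.naturalOrder())));
    }
}
```

Which of the two a database picks when neither is specified is exactly the vendor-dependent default the issue complains about.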



 Comments   
Comment by c.beikov [ 13/Aug/14 ]

Also note that this is important for database exchangeability. If I don't specify nulls first or nulls last, every query that uses order by is non-portable between databases, since databases have different defaults.





[JPA_SPEC-74] Obtaining @Version value Created: 17/Feb/14  Updated: 08/Dec/15

Status: Open
Project: jpa-spec
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Improvement Priority: Major
Reporter: mkarg Assignee: Unassigned
Resolution: Unresolved Votes: 4
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified


 Description   

I want to suggest that a future release of the JPA API provide a means to get the value of an entity's version attribute.

Example: em.getVersion(myEntity);
Justification: Generic frameworks like JAX-RS or Servlets might have an interest in getting the value of the version attribute without "knowing" which attribute is "the version attribute" for any particular entity. For example, JAX-RS or a Servlet may want to send an ETag header to a client for any entity, but the Servlet does not "know" the class of that entity, e.g. when class name and primary key value are provided by an HTTP request. In that case, a solution would be em.getVersion(myEntity), which simply returns the value of the version attribute.



 Comments   
Comment by mkarg [ 17/Feb/14 ]

An improvement on top would be em.isCached(Class<?>, id, version), which returns true or false depending on whether the EM has an entity with that primary key and version in its cache.

Justification: To answer conditional HTTP requests it would be great to query the cache for a particular version. If the cache does not contain that version, EM shall NOT load from disk, but answer to JAX-RS or Servlet that this entity is not there.

Comment by pbenedict [ 23/Nov/15 ]

I think this is more of a design issue than a needed JPA enhancement. If you want to generate ETag values off any entity, for example, you should create integration logic that knows how to do the transformation. I don't understand why you want to avoid having to know the version attribute; it's not poor design to expose it. Just go ahead and create a universal interface that all your versioned entities can implement:

public interface Versioned {
    int getVersion();
}

PS: Caching is supported now by JPA 2.0 @Cacheable; this should alleviate your concern about always materializing an entity.

Comment by mkarg [ 24/Nov/15 ]

By design, I do not want to add a superfluous interface: JPA already knows the attribute, so it makes no sense to add another interface, as your compiler cannot guarantee that it accesses the same attribute as the annotation uses (a risk of hard-to-track bugs!). Also, the attribute might be invisible to the accessing package on purpose, and adding an interface enforces "public" visibility. You would simply be duplicating existing technology for the sake of not adding a needed JPA feature.

Also my proposal will work with ANY entity – even entities that the caller has no source code of, hence, cannot add the proposed interface!

@Cacheable actually does not solve the described use case. I do want to load from disk exactly in case the ETag is not matching. @Cacheable(false) prevents this.

Comment by pbenedict [ 24/Nov/15 ]

That is an interesting requirement. I've always been in control of my entities so I haven't encountered your dilemma. If the version isn't being exposed to you in a third-party entity, there may be a good design reason for that (or not). Someone from the EG should opine. I think the question boils down to this: For any given entity, is the value contained in a @Version attribute considered to be intrinsically public? If so, then expose it through the EM.

Comment by mkarg [ 24/Nov/15 ]

Yes, in fact our application provides services upon any "unknown" set of entities provided by third parties: We accept JPA QL and we return entities, without caring about the entity classes, their interfaces, or their source. Hence we need to be able to handle their version without deeper knowledge, and must go solely through a JPA API.

BTW, we do not see that the @Version member implicitly has to be public, as we do not want to access the member. We only want to either ask the EM to expose the version value, or even better, simply ask whether a known version still is current or not.

Comment by neilstockton [ 08/Dec/15 ]

This requirement seems reasonable to me, and is already available in the JDO API. In JDO you can also have a surrogate version (i.e. no field in the class, but the version managed by the persistence solution). That would also be a useful requirement for JPA (FWIW).





[JPA_SPEC-55] Add variant of EntityManager#merge that modifies argument instead of returning new instance Created: 22/Apr/13  Updated: 14/Jan/15

Status: Open
Project: jpa-spec
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: New Feature Priority: Major
Reporter: arjan tijms Assignee: Unassigned
Resolution: Unresolved Votes: 2
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified


 Description   

The EntityManager's persist method modifies its argument (the entity to be persisted) with generated data such as the Id.

The merge method can also be used to persist a new entity, but has the additional benefit that it can also update the entity or any of its relations in the DB. merge, however, does not modify its argument but returns a new instance in which the generated data such as the Id is set.

For some use cases it is necessary to have the original entity instance modified, as the persist method does, but with the semantics of the merge method.

For instance, in a JSF application an existing (persistent but detached) entity is passed into an action method to add a new entity into e.g. a @OneToMany relation. When the action method returns, rendering of the page begins, which still has a reference to the original entity. Depending on the backing bean and template structure of the page in question, it might not be trivial to have the action method replace this original reference that was passed into it, and the kind of update that persist does would be far more convenient.

For this I would like to request adding a variant of EntityManager#merge that modifies its argument instead of returning a new instance.

Alternatively, a refresh method that accepts detached entities could also work for the above-mentioned use case. Yet another alternative might be an attach method that makes its argument attached, overwriting fields in that argument if an entity with the same identity happened to be present in the persistence context, but, contrary to merge, not updating any data in the database.
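A toy model of the identity problem described above; no JPA is involved, and mergeLikeCopy merely stands in for EntityManager.merge returning a managed copy:

```java
public class MergeIdentityDemo {

    public static final class Entity {
        public Long id;
        public String name;
    }

    // Stand-in for EntityManager.merge: returns a NEW instance carrying the
    // generated id, leaving the argument untouched (merge's current contract).
    public static Entity mergeLikeCopy(Entity detached) {
        Entity managed = new Entity();
        managed.id = 42L;              // pretend the database generated this id
        managed.name = detached.name;
        return managed;
    }

    public static void main(String[] args) {
        Entity original = new Entity();
        original.name = "new entity";

        Entity merged = mergeLikeCopy(original);

        // The page being rendered still holds 'original', whose id was
        // never set; an in-place merge variant would avoid this.
        System.out.println("merged id:   " + merged.id);
        System.out.println("original id: " + original.id);
    }
}
```

The view layer's reference ('original') never receives the generated id, which is precisely why an in-place variant is requested.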



 Comments   
Comment by ymajoros [ 14/Jan/15 ]

What if you end up having multiple instances referring to the same row in database? They would all end up in PC??

If you do this in JSF, you really need to use the returned entity. If you can't do that, you have an architecture problem anyway.

persist() returns the same entity because it's new, and this is guaranteed to be unique. You can't do that with updates.

Just curious: I don't really get your description of an attach() method that wouldn't update the database. Why would you want to have it managed by PC, but not updating the database? This is quite contrary to the basic ideas of JPA. Or am I missing something in your explanations?





[JPA_SPEC-63] JPA next should support Java 8 Date and Time types Created: 11/Aug/13  Updated: 02/Feb/16

Status: Open
Project: jpa-spec
Component/s: None
Affects Version/s: None
Fix Version/s: 2.2

Type: Improvement Priority: Major
Reporter: Nick Williams Assignee: Unassigned
Resolution: Unresolved Votes: 55
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified

Tags: date, date-time, java8, jsr-310, temporal, time

 Description   

Currently, JPA temporal fields are supported for the following data types: java.util.Date, java.util.Calendar, java.sql.Date, java.sql.Time, and java.sql.Timestamp. java.sql.Date properties are always mapped to the JDBC methods getDate and setDate, and it is an error to specify the @javax.persistence.Temporal annotation for these types. The same is true of Time (mapped to getTime and setTime) and Timestamp (mapped to getTimestamp and setTimestamp). Properties of type java.util.Date and Calendar must be annotated with @Temporal to specify the javax.persistence.TemporalType enum indicating which JDBC methods should be used for those properties.

Some vendors support other temporal types, such as Joda Time, but this is non-standard and should probably remain so since Joda Time isn't guaranteed to stay around (and, in fact, is likely to start ramping down with the release of Java 8).

JSR-310 as part of Java 8 specifies a new Date & Time API in the java.time package and sub-packages that supplants java.util.Date, Calendar, java.sql.Date, Time, Timestamp, and Joda Time. It is based on the Joda Time API, but with enhancements and certain redesigns that the Joda Time founder/creator has said make it superior to Joda Time.

JPA's existing rules for the currently-supported temporal types should remain largely unchanged. However, the specification should be extended to specify support for JSR-310. These are the proposed new rules I believe should be present in the JPA.next specification:

  • Properties of type java.time.Duration are treated as @javax.persistence.Basic fields. They automatically map to:
    • DURATION fields if the database vendor supports duration types;
    • DECIMAL-type fields storing the seconds before the decimal point and the nanoseconds after the decimal point;
    • INTEGER-type fields storing the seconds; and,
    • CHAR/VARCHAR-type fields storing the value in its ISO-8601 format (Duration#toString() and Duration#parse(CharSequence)).
  • Properties of type java.time.Period are treated as @Basic fields. They automatically map to:
    • PERIOD or DURATION fields if the database vendor supports period or duration types;
    • DECIMAL-type fields storing the seconds before the decimal point and the nanoseconds after the decimal point;
    • INTEGER-type fields storing the seconds; and,
    • CHAR/VARCHAR-type fields storing the value in its ISO-8601 format (Period#toString() and Period#parse(CharSequence)).
  • Properties of type java.time.Year are treated as @Basic fields. They automatically map to:
    • YEAR fields if the database vendor supports year types; and,
    • INTEGER/CHAR/VARCHAR-type fields storing the literal number/string value.
  • Properties of enum type java.time.Month are treated as special-case enum fields.
    • If the database field is a MONTH field (assuming the database vendor supports such types), it maps to this field.
    • If @javax.persistence.Enumerated is not present and the database field is an INTEGER-type field, it maps as the month number (NOT the ordinal) using int Month#getValue() and Month Month#of(int).
    • Otherwise, it falls back to standard enum mapping rules.
    • It is an error to annotate a Month property with @Enumerated if the database field is of type MONTH.
  • Properties of enum type java.time.DayOfWeek are treated as special-case enum fields.
    • If the database field is a DAY_OF_WEEK field (assuming the database vendor supports such types), it maps to this field.
    • If @Enumerated is not present and the database field is an INTEGER-type field, it maps as the day number (NOT the ordinal) using int DayOfWeek#getValue() and DayOfWeek DayOfWeek#of(int).
    • Otherwise, it falls back to standard enum mapping rules.
    • It is an error to annotate a DayOfWeek property with @Enumerated if the database field is of type DAY_OF_WEEK.
  • Properties of type java.time.YearMonth are treated as @Basic fields.
    • By default, they automatically map to:
      • YEARMONTH fields if the database vendor supports year-month types;
      • DATE and DATETIME fields storing the lowest day number that the database vendor supports and zero-time if applicable; and,
      • CHAR/VARCHAR-type fields storing the value in its ISO-8601 format (YearMonth#toString() and YearMonth#parse(CharSequence)).
    • The new @javax.persistence.YearMonthColumns annotation can map a YearMonth property to two database fields. A property annotated with this overrides the default mapping behavior. It is an error to mark properties of any other type with this annotation. The required @javax.persistence.Column-typed year attribute specifies the column that the year is stored in while the required @Column-typed month attribute specifies the column that the month is stored in. The year column follows the same default mapping rules as for Year types and the month column as for the Month enum. It is an error to specify @Column and @YearMonthColumns on the same property.
  • Properties of type java.time.MonthDay are treated as @Basic fields.
    • By default they automatically map to:
      • MONTHDAY fields if the database vendor supports month-day types;
      • DATE and DATETIME fields storing the lowest year number that the database vendor supports and zero-time if applicable; and,
      • CHAR/VARCHAR-type fields storing the value in its ISO-8601 format (MonthDay#toString() and MonthDay#parse(CharSequence)).
    • The new @javax.persistence.MonthDayColumns annotation can map a MonthDay property to two database fields. A property annotated with this overrides the default mapping behavior. It is an error to mark properties of any other type with this annotation. The required @Column-typed month attribute specifies the column that the month is stored in while the required @Column-typed day attribute specifies the column that the day is stored in. The month column follows the same default mapping rules as for the Month enum and the day column automatically maps to INTEGER/CHAR/VARCHAR-type fields. It is an error to specify @Column and @MonthDayColumns on the same property.
  • Properties of type java.time.ZoneId are treated as @Basic fields. They automatically map to:
    • TIMEZONE fields if the database vendor supports time zone types (they never map to offset fields); and,
    • CHAR/VARCHAR-type fields storing the value in its ISO-8601 format (ZoneId#toString() and ZoneId#of(String)).
  • Properties of type java.time.ZoneOffset are treated as @Basic fields. They automatically map to:
    • OFFSET fields if the database vendor supports offset types (they never map to time zone fields); and,
    • CHAR/VARCHAR-type fields storing the value in its ISO-8601 format (ZoneOffset#toString() and ZoneOffset#of(String)).
  • Properties of types java.time.Instant, java.time.LocalDate, java.time.LocalTime, java.time.LocalDateTime, java.time.OffsetTime, java.time.OffsetDateTime, and java.time.ZonedDateTime are treated as temporal @Basic types that are mapped using the following rules:
    • LocalDate always maps as a date-only value. It is an error to mark a LocalDate property with the @Temporal annotation.
    • LocalTime and OffsetTime always map as time-only values. It is an error to mark a LocalTime or OffsetTime property with the @Temporal annotation.
    • Instant, LocalDateTime, OffsetDateTime, and ZonedDateTime map as timestamp values by default. You may mark a property of one of these types with @Temporal to specify a different strategy for persisting that property.
    • The new @javax.persistence.TemporalIncludeTimeZone annotation indicates that the offset in the OffsetTime or OffsetDateTime property or the time zone in the ZonedDateTime or Calendar property will be persisted with the value. Otherwise (if this is absent) the value is converted to the database server offset or time zone for persistence.
    • The new @javax.persistence.TemporalTimeZoneColumn(@Column value) annotation indicates a different column in which the time zone value is stored. It implies @TemporalIncludeTimeZone. It is required if @TemporalIncludeTimeZone is present but the database vendor does not support storing the time zone with the field data type. It is also required if @TemporalIncludeTimeZone is present but the JDBC driver in use is less than version 4.2 (a JDBC 4.2 driver is necessary to persist time zones and offsets with time/date-time values). The persistence rules for this column are the same as for ZoneId and ZoneOffset properties.
    • Properties of these types invoke the following special handling for JDBC driver versions before and after 4.2.
      • A JDBC driver is considered version 4.2 or better if java.sql.Driver#getMajorVersion() returns a number greater than 4, or it returns 4 and Driver#getMinorVersion() returns a number greater than 1. In the absence of a testable Driver instance, implementations may assume that the driver version is less than 4.2 if PreparedStatement#setObject(int, Object, SQLType) throws a SQLFeatureNotSupportedException.
      • If the JDBC driver is version 4.2 or newer, these seven types are persisted and retrieved as follows:
        • They are persisted with PreparedStatement#setObject(int, Object, SQLType) and retrieved with ResultSet#getObject(int, Class<?>) or ResultSet#getObject(String, Class<?>).
        • Time-only properties or TemporalType.TIME properties use a java.sql.SQLType of java.sql.JDBCType.TIME in the absence of @TemporalIncludeTimeZone or presence of @TemporalTimeZoneColumn. They use JDBCType.TIME_WITH_TIMEZONE in the presence of @TemporalIncludeTimeZone and absence of @TemporalTimeZoneColumn.
        • Date-only properties or TemporalType.DATE properties use a SQLType of JDBCType.DATE.
        • Date-and-time properties use a SQLType of JDBCType.TIMESTAMP in the absence of @TemporalIncludeTimeZone or presence of @TemporalTimeZoneColumn. They use JDBCType.TIMESTAMP_WITH_TIMEZONE in the presence of @TemporalIncludeTimeZone and absence of @TemporalTimeZoneColumn.
      • If the JDBC driver is version 4.1 or older, these seven types are persisted and retrieved as follows:
        • Time-only properties or TemporalType.TIME properties are automatically converted to and from Time and use the traditional setTime and getTime methods.
        • Date-only properties or TemporalType.DATE properties are automatically converted to and from java.sql.Date and use the traditional setDate and getDate methods.
        • Date-and-time properties are automatically converted to and from Timestamp and use the traditional setTimestamp and getTimestamp methods.
        • @TemporalTimeZoneColumn is required if @TemporalIncludeTimeZone is present.
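The ISO-8601 string forms that the proposed CHAR/VARCHAR fallbacks rely on can be exercised with plain java.time. A minimal sketch (the class name is illustrative):

```java
import java.time.Duration;
import java.time.MonthDay;
import java.time.Period;
import java.time.YearMonth;
import java.time.ZoneOffset;

public class Iso8601RoundTrip {
    public static void main(String[] args) {
        // Duration round-trips through its ISO-8601 toString()/parse() pair
        Duration d = Duration.ofSeconds(90, 500_000_000);
        System.out.println(d);                                       // PT1M30.5S
        System.out.println(Duration.parse(d.toString()).equals(d)); // true

        // Period is date-based (years/months/days), not seconds-based
        Period p = Period.of(1, 2, 3);
        System.out.println(p);                                       // P1Y2M3D

        // YearMonth, MonthDay and ZoneOffset also have parseable string forms
        System.out.println(YearMonth.of(2014, 5));                   // 2014-05
        System.out.println(MonthDay.of(12, 31));                     // --12-31
        System.out.println(ZoneOffset.ofHoursMinutes(5, 30));        // +05:30
    }
}
```

These round-trips are what would make a pure string mapping lossless for such columns.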


 Comments   
Comment by Nick Williams [ 11/Aug/13 ]

To be clear, by "JPA.next" I mean JPA 2.2 unless 3.0 is next and there isn't going to be a 2.2. "Whatever is going to be in Java EE 8."

Comment by Nick Williams [ 11/Aug/13 ]

A few additional notes:

  • The reason for specifying the JDBC < 4.2 vs JDBC ≥ 4.2 behavior is that, even today, some JDBC driver vendors have still not fully implemented JDBC 4.0 (Java 6), let alone JDBC 4.1 (Java 7). Unfortunately and terribly, it could be 5 or even 10 years before all driver vendors have JDBC 4.2 drivers. Therefore, JPA vendors should support both mechanisms (since the JDBC 4.2 mechanisms allow saving with timezones, which JDBC 4.1 does not).
  • It is an error if the @TemporalIncludeTimeZone or @TemporalTimeZoneColumn annotations are present on properties of any type other than Calendar, OffsetTime, OffsetDateTime, and ZonedDateTime.
  • A Calendar, OffsetTime, OffsetDateTime, or ZonedDateTime must be converted from its time zone/offset to the database server's time zone/offset on persistence if and only if neither @TemporalIncludeTimeZone nor @TemporalTimeZoneColumn are present on the property. If either of those are present, time zone/offset conversion is not necessary because the time zone/offset will be saved with the time (either using setObject or a second column). Upon retrieval from the database, neither the date/time value nor the time zone/offset value should ever be altered. It should be accepted as it comes back from the database, whether stored together in the same column or separately in two columns.
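The version test described in the proposal (major version greater than 4, or equal to 4 with minor version greater than 1) can be sketched as a standalone predicate; the class and method names here are hypothetical, not part of any spec:

```java
public class JdbcVersionCheck {
    // Mirrors the rule above: Driver#getMajorVersion() > 4, or
    // == 4 with Driver#getMinorVersion() > 1
    static boolean isAtLeastJdbc42(int majorVersion, int minorVersion) {
        return majorVersion > 4 || (majorVersion == 4 && minorVersion > 1);
    }

    public static void main(String[] args) {
        System.out.println(isAtLeastJdbc42(4, 2)); // true
        System.out.println(isAtLeastJdbc42(4, 1)); // false
        System.out.println(isAtLeastJdbc42(5, 0)); // true
    }
}
```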
Comment by Nick Williams [ 12/Aug/13 ]

Another note:

  • In addition to the int, Integer, short, Short, long, Long, and Timestamp types currently supported for @javax.persistence.Version-annotated properties, properties of type Instant may also be @Version-annotated.
Comment by mister__m [ 20/Feb/14 ]

YearMonth and MonthDay columns can also be mapped to INTEGER columns.

Comment by reza_rahman [ 25/Mar/14 ]

I must say this is an excellent initial analysis. For folks interested, the official Java Tutorial now has a nice trail on the Java SE 8 Date/Time API: http://docs.oracle.com/javase/tutorial/datetime/index.html. Details on JDBC 4.2 here: http://docs.oracle.com/javase/8/docs/technotes/guides/jdbc/jdbc_42.html.

Comment by braghest [ 13/Apr/14 ]

Don't ISO-8601-formatted CHAR/VARCHAR-type fields violate first normal form?

Comment by tomdcc [ 02/Jun/14 ]

Some databases such as PostgreSQL support storing time intervals [1], so the Duration and Period types should be allowed to map to such types if the underlying database supports them.

[1] http://www.postgresql.org/docs/9.3/static/datatype-datetime.html#DATATYPE-INTERVAL-INPUT

I believe that interval is an ANSI SQL standard type

Comment by perceptron8 [ 14/Jan/15 ]

Properties of type java.time.MonthDay are treated as @Basic fields.
By default they automatically map to:

  • ...
  • DATE and DATETIME fields storing the lowest year number that the database vendor supports and zero-time if applicable; and,
  • ...

This year must also be a leap year. Sadly, Date.valueOf(MonthDay.of(1, 1).atYear(0)) becomes "0001-01-01", so it can't be 0.
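The leap-year requirement can be demonstrated with plain java.time: MonthDay#atYear silently clamps to the last valid day of the month, so a non-leap anchor year would corrupt Feb 29 values. A small sketch (class name illustrative):

```java
import java.time.MonthDay;

public class MonthDayLeapYear {
    public static void main(String[] args) {
        MonthDay feb29 = MonthDay.of(2, 29);
        // atYear() clamps the day-of-month if it is invalid for the year
        System.out.println(feb29.atYear(2016)); // 2016-02-29
        System.out.println(feb29.atYear(2015)); // 2015-02-28
    }
}
```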

Comment by mkarg [ 26/Mar/15 ]

While I indeed support this feature request due to its common purpose, there actually is no real need for it anymore, thanks to the acceptance of the adapter API proposal filed by me in https://java.net/jira/browse/JPA_SPEC-35: You can just write a simple adapter that does the type conversion at runtime. Or are you aiming at schema creation instead of just type conversion?

Comment by ymajoros [ 27/Mar/15 ]

Yeah, but I think it would be a good idea to mention out-of-the-box support in the spec, which providers can implement this way if they want. Otherwise, we'll basically end up having to package boilerplate code every time we use java.time.* classes with JPA.

Comment by mkarg [ 27/Mar/15 ]

While that is absolutely correct, the technical answer is a bit more complex: What is the final predicate that makes a data type eligible for inclusion in the set of mandatory type mappings?

One could say that predicate is "being essential" or "being of common use", but who defines what "essential" or "common use" means? See, for some applications, support for java.awt.Image and java.net.URL might be much more essential than support for LocalDate or ZonedDateTime. On the other hand, other applications might be full of LocalDate but never use Instant. So where exactly to make the cut? This becomes particularly complex when looking at the sheer number of types found in the JRE, and it is obvious there has to be a cut somewhere. Even JavaFX, which is bundled with the JRE, still does not support Instant in v8, so why should JPA? And looking at the current progress of Project Jigsaw, possibly the qualifying predicate might simply be "all types in a particular Jigsaw module"?

Anyways, it is not up to me to decide. I do support your request, and would love to see support for rather all Java Time API types, particularly Instant and Duration, and your request has prominent supporters, for example Java Champion Arun Gupta, as I learned recently. But I doubt the final answer will be as simple and satisfying as we would love to have it.

Maybe it would be better to simply set up another JSR, like "Common Data Type Conversions for the Java Platform", which provides many more mappings than just date and time, and which would not be bound to JPA but could also be used by JAXB, JAX-RS, and possibly more APIs that deal with the problem of transforming "<A> to <B>"? Having such a vehicle would really reduce boilerplate a lot.

Comment by braghest [ 28/Mar/15 ]

there actually is no real need for it anymore, thanks to the acceptance of the adapter API proposal

I'm not sure. Firstly, the RI currently explodes with a ClassCastException when trying to write an attribute converter mapping a java.util.Calendar database value. Secondly, the spec would have to say that when mapping a java.util.Calendar, the time zone of the value returned from the database is the time zone of the value in the database, instead of the time zone of the Java virtual machine (like java.sql.Date). JDBC only allows accessing the time zone of a value using the Java 8 Date and Time API.
If you currently want to access the time zone of a database value, you need to use vendor-specific extensions.

Comment by Lukas Jungmann [ 14/Sep/15 ]

will try to address this in 2.2

Comment by neilstockton [ 02/Feb/16 ]

The description for Period is wrong in that it implies that this type stores seconds+nanos, whilst it actually is YEAR+MONTH+DAY ("A date-based amount of time in the ISO-8601 calendar system"). Consequently it is not possible to store it in DECIMAL, and likely not in INTEGER either.
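The distinction can be checked directly against the API; a minimal sketch (class name illustrative):

```java
import java.time.Duration;
import java.time.Period;

public class PeriodVsDuration {
    public static void main(String[] args) {
        // Period carries years/months/days only; there is no seconds component
        Period p = Period.parse("P1Y2M3D");
        System.out.println(p.getYears() + " " + p.getMonths() + " " + p.getDays()); // 1 2 3

        // Duration is the time-based amount, with seconds and nanoseconds
        Duration d = Duration.parse("PT1H30M");
        System.out.println(d.getSeconds()); // 5400
    }
}
```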





[JPA_SPEC-72] Allow @PersistenceContext to be used on parameters to enable constructor injection of EntityManagers Created: 07/Feb/14  Updated: 25/Mar/14

Status: Open
Project: jpa-spec
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Improvement Priority: Major
Reporter: Oliver Gierke Assignee: Unassigned
Resolution: Unresolved Votes: 4
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified


 Description   

Currently it's not possible to inject EntityManagers into constructors, as @PersistenceContext is defined not to be allowed on parameters. It would be cool to enable this, as users could then design application components using constructor injection only.



 Comments   
Comment by arjan tijms [ 25/Mar/14 ]

Wouldn't this automatically be possible when those contexts can be injected with @Inject?





[JPA_SPEC-62] Standard name and value for "read-only" query hint Created: 22/Jun/13  Updated: 11/Dec/15

Status: Open
Project: jpa-spec
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Improvement Priority: Major
Reporter: mkarg Assignee: Unassigned
Resolution: Unresolved Votes: 4
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified


 Description   

Relational databases typically benefit from knowing whether a transaction will potentially modify any information (so locks are needed) or will only execute read-only queries (so no locks are needed). For similar reasons, EclipseLink (and hopefully other JPA implementations, too) knows query hints for "read-only".

Unfortunately, using such vendor-specific hints induces the problem that a portable application must know all these hints for all JPA implementations (or there will be no performance gain for the unknown ones). This is not smart from the view of an ISV.

Hence I want to propose that the next maintenance release of the JPA specification defines a unique name and value to enable the read-only query mode independently of the actual JPA implementation.

Proposal: A compliant implementation which has a read-only query mode MUST enable this read-only query mode when the "javax.persistence.readonly" hint is provided with a value of "true".
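Under that proposal, the hint could be attached to a named query in orm.xml like any other query hint; a sketch, where the hint name is the proposal (not current spec) and the query and entity name are made up for illustration:

```xml
<!-- "javax.persistence.readonly" is the proposed hint name; "Account" is
     a hypothetical entity used only for this example. -->
<named-query name="Account.findAll">
    <query>SELECT a FROM Account a</query>
    <hint name="javax.persistence.readonly" value="true"/>
</named-query>
```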



 Comments   
Comment by mkarg [ 09/Dec/15 ]

I would really appreciate it if someone from the JPA spec team could at least comment on this more than two-year-old proposal.

Comment by Lukas Jungmann [ 11/Dec/15 ]

this makes sense to me. Should check if there are other useful/commonly used hints to be defined by the spec.

Comment by pbenedict [ 11/Dec/15 ]

For clarity's sake, EclipseLink's "read-only" hint regards how it manages the first-level cache during a query; it's not about making the transaction read-only.

Comment by mkarg [ 11/Dec/15 ]

A general "read-only" JPA property should simply allow the application programmer to tell JPA that the result of the query will never get updated by the current transaction. Whatever conclusions a JPA implementation draws from this is completely up to the particular JPA implementation. If EclipseLink simply uses this internally for its own cache purposes, this is a valid use. Other implementations might additionally or instead use this flag to send a "FOR READ ONLY" hint to the JDBC driver so the database can relax locking, etc.





[JPA_SPEC-56] @Convert annotation's converter property should be Class<? extends AttributeConverter>, not Class (unsafe) Created: 04/May/13  Updated: 10/Dec/15

Status: Open
Project: jpa-spec
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Bug Priority: Major
Reporter: Nick Williams Assignee: Unassigned
Resolution: Unresolved Votes: 2
Labels: None
Remaining Estimate: 1 hour
Time Spent: Not Specified
Original Estimate: 1 hour


 Description   

Currently, the converter property for the @Convert annotation is declared as follows:

Class converter() default void.class;

However, along with just generally being unsafe ("safe" use would be Class<?>), this does not properly restrict the set of classes that can be specified. My understanding is that this MUST be a class that implements javax.persistence.AttributeConverter. Therefore, the converter property should be specified like so:

Class<? extends AttributeConverter> converter() default void.class;

With this change, the developer will know at compile time if he has specified an incorrect class. Without this change, the developer will not know until he gets a runtime error, which is seriously less desirable.



 Comments   
Comment by neilstockton [ 17/May/15 ]

You can't do that.

Class<? extends AttributeConverter> converter() default void.class;
would not compile. "void.class" is not castable to the generic form (but is to the non-generic form, hence probably why they did it).

Comment by Xavier Dury [ 10/Dec/15 ]

Well, you can always do something like this:

public @interface Convert {

  interface NoConversionAttributeConverter extends AttributeConverter<Object, Object> {}

  Class<? extends AttributeConverter<?, ?>> converter() default NoConversionAttributeConverter.class;
  ...
}




[JPA_SPEC-73] Parameterized AttributeConverter and/or AttributeConverter metadata access Created: 07/Feb/14  Updated: 23/Nov/15

Status: Open
Project: jpa-spec
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Improvement Priority: Major
Reporter: frenchc Assignee: Unassigned
Resolution: Unresolved Votes: 9
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified


 Description   

It would be good if we were able to parameterize AttributeConverter instances and/or have access to property metadata within an AttributeConverter. This way we would be able to reuse AttributeConverter code instead of building similar ones, or even supply further required context information. An example:

Quite often we have to deal with ancient DB schemas, and just recently we started to migrate a large Cobol application to Java. We are unable to change the content of the database, and because of Cobol we have to support fixed-width string columns containing left-padded data. A 5-digit fixed-width column will represent 1 as '00001' and 100 as '00100'. Nothing we can change here. Unless I am mistaken, I have to create a dedicated converter for every required fixed length, ending up with a LeadingZeroTwoDigitAttributeConverter, LeadingZeroFiveDigitAttributeConverter and so on.
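The conversion logic those per-width converters would duplicate is just zero-padding on the way out and plain parsing on the way in; a sketch, where the class and method names are hypothetical and a parameterized converter could carry the width as configuration instead:

```java
public class LeadingZeroCodec {
    // Entity attribute -> fixed-width database column
    static String toDbColumn(int value, int width) {
        return String.format("%0" + width + "d", value);
    }

    // Fixed-width database column -> entity attribute
    static int toEntityAttribute(String column) {
        return Integer.parseInt(column); // parseInt ignores leading zeros
    }

    public static void main(String[] args) {
        System.out.println(toDbColumn(1, 5));           // 00001
        System.out.println(toDbColumn(100, 5));         // 00100
        System.out.println(toEntityAttribute("00100")); // 100
    }
}
```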

Right now it seems the JPA spec does not define whether there is one converter instance per persistent property or just a global one (which would be OK by current spec requirements). I would propose that parameterized converter instances are bound to property fields in order to support parameter evaluation during JPA provider startup.

I believe we need access to both basic attribute information and optionally supplied converter attributes. The following example would assume access to the leading pad char, the length attribute and whether the attribute is nullable or not. (In my current project I have to store an empty string for null values in the database.)

@Convert(converter = LeadingDigitAttributeConverter.class, metaData="padChar=0" )
@Column(name = "ACOLUM", length=5, nullable = false)

If you believe there is no need to support something like that: unfortunately we are in the middle of migrating quite a few very, very old applications to Java, and we can't change that stuff. And there is more to come.



 Comments   
Comment by c.beikov [ 15/Apr/14 ]

I agree that metadata is needed, but I guess it would be easier to just let the converter instance know about the metamodel Attribute instance or something similar.
You can define your own annotations that can be used for configuration purposes of the converter. Through the metamodel you can get your hands on those values.
I propose that you can either inject that instance or, with Java 8 default methods around, introduce a new default method "void init(javax.persistence.metamodel.Attribute)" in the AttributeConverter.

Comment by frenchc [ 15/Apr/14 ]

Sounds good to me.

Comment by tomdcc [ 30/May/14 ]

We have need of this as well - we'd like to convert enum attributes to specific string representations in the database, and with the current spec we have to create a converter per enum, rather than e.g. having the enum classes implement an interface to make the required string available and use a single converter.

Hibernate has parameters that you can pass to their converter types which is a workaround for this, but we're not using Hibernate for this project and in any case it's pretty ugly to have to do that for every column.

Making the metamodel attribute available to the converter would be perfect, as it could then grab the attribute type. The nice thing about that approach, too, is that if someone wants to pass extra info in to the converter that isn't available in the normal JPA model, they can create a custom annotation and the converter can call attribute.getJavaMember() and look for annotations, and the info is right with all the other metadata for the attribute.

Comment by isk0001y [ 23/Dec/14 ]

Such a parameterized AttributeConverter may also be of help when one is creating converter for hundreds of enums.
In our project we have approx. 260 enums, which implement a simple interface.

I can use the AttributeConverter to persist any enum implementing that interface by just calling "getFoo()"; assuming getFoo() will return a basic type like String, the direction to the database is problem-free.
However, since I cannot parameterize the converter, and I have no access to the property type in the entity the converter is about to be applied to, I cannot reversely find out the enum class whose foo I had persisted.
This ends with me creating over 260 converters alongside 260 enums.

Both EclipseLink and Hibernate provide solutions for this. EclipseLink allows me to use its "Converter" infrastructure to create converters with a special "initialize" method that allows me to access the entity property being converted. Hibernate allows me to create user-defined types like "UserType", where the @Type annotation takes an array of @Parameters to configure the converter. Both techniques result in me creating only ONE converter, but any entity class is then dependent on the concrete JPA provider through imports. This cannot be what you guys want.

If I must stick to one JPA provider anyway, then I can skip using JPA entirely and stay incompatible.
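The shared reverse lookup that a single generic converter would need can be sketched in plain Java; the interface and enum below are hypothetical stand-ins for the ~260 enums described above:

```java
public class EnumFooLookup {
    interface HasFoo {
        String getFoo();
    }

    enum Color implements HasFoo {
        RED("r"), GREEN("g");
        private final String foo;
        Color(String foo) { this.foo = foo; }
        public String getFoo() { return foo; }
    }

    // One generic method serves every enum implementing HasFoo, but it
    // still needs the concrete enum class -- exactly the information a
    // non-parameterized AttributeConverter cannot obtain
    static <E extends Enum<E> & HasFoo> E fromFoo(Class<E> type, String foo) {
        for (E constant : type.getEnumConstants()) {
            if (constant.getFoo().equals(foo)) {
                return constant;
            }
        }
        throw new IllegalArgumentException("No constant with foo=" + foo);
    }

    public static void main(String[] args) {
        System.out.println(fromFoo(Color.class, "g")); // GREEN
    }
}
```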

Comment by uk.wildcat [ 28/Aug/15 ]

For the slow, can someone provide an example of the workaround that c.beikov is outlining here?

You can define your own annotations that can be used for configuration purposes of the converter. Through the metamodel you can get your hands on those values.

Comment by c.beikov [ 28/Aug/15 ]

You just define your own annotation type like

public @interface MyAnnotation {
String padChar();
}

and use it on your field or getter

@MyAnnotation(padChar = "0")
@Convert(converter = LeadingDigitAttributeConverter.class)
private String myField;

then when you have access to the javax.persistence.metamodel.Attribute you can get access to the member and, via reflection, access the MyAnnotation instance.
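The reflection step can be sketched self-contained in plain Java; note the annotation needs RUNTIME retention (not shown in the snippet above) to be visible via reflection, and the class and field names here are illustrative:

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;

public class AnnotationLookup {
    // RUNTIME retention is required for reflective access
    @Retention(RetentionPolicy.RUNTIME)
    @interface MyAnnotation {
        String padChar();
    }

    static class MyEntity {
        @MyAnnotation(padChar = "0")
        private String myField;
    }

    // What a converter could do once handed the attribute's Member:
    // look up its own configuration annotation on the field
    static String readPadChar() {
        try {
            return MyEntity.class.getDeclaredField("myField")
                    .getAnnotation(MyAnnotation.class).padChar();
        } catch (NoSuchFieldException e) {
            throw new IllegalStateException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(readPadChar()); // 0
    }
}
```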

Comment by pbenedict [ 23/Nov/15 ]

I think this is easily possible with some design enhancement. Food for thought:

1) Once converters are injectable (JPA_SPEC-109), that means they will have a controllable lifecycle. Converters that are parameterized obviously cannot be singletons because they require customization per instance.

2) Enhance @Converter to allow an array of parameterized key/value pairs. The key represents a setter method on the converter instance.

// setPadChar must exist on LeadingDigitAttributeConverter
@Convert(converter = LeadingDigitAttributeConverter.class, parameters=@ConverterParameter(name="padChar", value="0"))
private String myField;

3) By new rule of the spec, any converter that has "parameters" is a non-singleton.





[JPA_SPEC-81] @Version Support for Temporal Types Created: 22/May/14  Updated: 22/May/14

Status: Open
Project: jpa-spec
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Improvement Priority: Minor
Reporter: shelleyb Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified


 Description   

The JPA 2.1 specification currently indicates that Timestamp is the only supported temporal type for @Version properties:

The following types are supported for version properties: int, Integer, short, Short, long, Long, Timestamp.

I'd propose that additional temporal types are supported as well:

java.util.Date, java.util.Calendar, java.sql.Date, java.sql.Time, java.sql.Timestamp



 Comments   
Comment by shelleyb [ 22/May/14 ]

For reference, Hibernate already seems to support this, and as such, we had initially overlooked this JPA limitation and are already using @Version java.util.Calendar in our entities; I have observed this usage elsewhere as well:

https://docs.jboss.org/hibernate/orm/4.3/manual/en-US/html/ch05.html#mapping-declaration-timestamp





[JPA_SPEC-64] EntityGraph API has unspecified List/Map getters Created: 27/Aug/13  Updated: 27/Aug/13

Status: Open
Project: jpa-spec
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Bug Priority: Minor
Reporter: mkeith Assignee: Unassigned
Resolution: Unresolved Votes: 2
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified


 Description   

The List/Map getter methods (e.g. getAttributeNodes()) on EntityGraph, AttributeNode, and Subgraph do not specify whether the List/Map returned is mutable or a copy. They should be specified as returning the actual collections so the collections can be mutated. If it is a copy, then the ability to mutate an existing named entity graph using createEntityGraph(String) is quite limited. There would be no way to remove an attribute node or a subgraph, or for that matter even add a subgraph for an existing attribute node.

The alternative to returning the actual collections is to add methods to the API to enable the additional mutating operations.






[JPA_SPEC-65] Need another property to make lazy loading of attributes easy with entity graphs Created: 31/Aug/13  Updated: 31/Aug/13

Status: Open
Project: jpa-spec
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Bug Priority: Minor
Reporter: mkeith Assignee: Unassigned
Resolution: Unresolved Votes: 1
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified


 Description   

There are currently two properties to use with entity graphs:

javax.persistence.fetchgraph -

This property accepts an entity graph to act as a complete override of all the attributes for the type. Attributes are dictated to be eager if included or lazy if excluded from the graph, regardless of how they are mapped.

javax.persistence.loadgraph -

This property offers a selective eager approach. One can simply add the attributes that one wants to be eagerly loaded and the rest are left as they are statically mapped.

The missing property would be something to allow more convenient selective lazy overriding without having to declare the entire attribute set for the type (as required by fetchgraph). So something like:

javax.persistence.lazygraph -

This property would offer a selective lazy approach. One would be able to add the attributes that one wants to be lazily loaded to the graph, with the rest being left as they are statically mapped.
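A hedged sketch of how the three hint keys would line up. The lazygraph key is the proposal, not an existing property, and the GraphHints class and its helper are illustrative inventions, not part of any API:

```java
import java.util.HashMap;
import java.util.Map;

// The two spec-defined entity-graph hint keys, plus the proposed third one.
class GraphHints {
    // Existing: complete override -- listed attributes eager, all others lazy.
    static final String FETCH_GRAPH = "javax.persistence.fetchgraph";
    // Existing: selective eager -- listed attributes eager, rest as mapped.
    static final String LOAD_GRAPH = "javax.persistence.loadgraph";
    // Proposed (hypothetical): selective lazy -- listed attributes lazy, rest as mapped.
    static final String LAZY_GRAPH = "javax.persistence.lazygraph";

    // Build the hints map that would be passed to e.g. em.find(Order.class, id, hints).
    static Map<String, Object> hints(String key, Object entityGraph) {
        Map<String, Object> hints = new HashMap<>();
        hints.put(key, entityGraph);
        return hints;
    }
}
```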






[JPA_SPEC-59] Clarify namespaces of type aliases and named parameters Created: 11/Jun/13  Updated: 11/Jun/13

Status: Open
Project: jpa-spec
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Improvement Priority: Minor
Reporter: Matthew Adams Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified
Environment:

JPA 2.1


Tags: jpa, jpql, parameter

 Description   

JPA 2.1 Section 3.10.12, "Named Parameters", does not explicitly require that the namespaces of type aliases and named parameters be distinct. Some implementations fail and some succeed on JPQL of the following form:
SELECT x FROM Thing x WHERE x.foobar = :x

The specification should be explicit as to whether type aliases and named parameters share the same namespace.






[JPA_SPEC-68] update and/or validate javax.persistence.schema-generation.database.action Created: 12/Nov/13  Updated: 12/Nov/13

Status: Open
Project: jpa-spec
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Improvement Priority: Minor
Reporter: steveschols Assignee: Unassigned
Resolution: Unresolved Votes: 3
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified

Tags: schema-generation, update, validate

 Description   

I think that "update" and "validate" schema-generation.database.action values might come in handy, especially for production environments that will be running Java EE 7 in the coming future.

Of course you don't want your database to be accidentally dropped if you still have "create-drop" enabled, and a new action won't solve that.
But without an "update" action you still have to resort to solutions like Google Flyway or DbMaintain to update an existing database schema.

Are there plans to incorporate new actions like "update" or "validate", the way Hibernate supports them?
Or are they left out by design?
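For comparison, the JPA 2.1 specification only defines the values none, create, drop-and-create and drop for this property. The fragment below shows the requested, hypothetical "update" value (mirroring Hibernate's hbm2ddl.auto options), which is not spec-defined today:

```xml
<!-- persistence.xml fragment; "update" is NOT a spec-defined value today -->
<property name="javax.persistence.schema-generation.database.action"
          value="update"/>
```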






[JPA_SPEC-137] API improvements - pass List to where Created: 26/Oct/16  Updated: 26/Oct/16

Status: Open
Project: jpa-spec
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Bug Priority: Minor
Reporter: ymajoros Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified

Issue Links:
Cloners
clones JPA_SPEC-44 API improvements Closed

 Description   

Quite happy to see JPA 2.1 is near...

I couldn't seem to subscribe to the mailing list, so here are some comments.

A minor question/suggestion, maybe already discussed?

Why do we have this:

CriteriaQuery<T> where(Predicate... restrictions);

But not, additionally, this:

CriteriaQuery<T> where(List<Predicate> restrictions);

While there is:

CriteriaQuery<T> groupBy(Expression<?>... grouping);
and
CriteriaQuery<T> groupBy(List<Expression<?>> grouping);

And List parameters for having, orderBy, ...



 Comments   
Comment by ymajoros [ 26/Oct/16 ]

Just cloned this issue, because:

  • JPA 2.1 is done but this would still improve the API
  • JPA 2.2 is in the air
  • As I understand it, Java 8 will be required in JPA 2.2, so it's now quite trivial to add a few default methods to the relevant interfaces (CriteriaQuery, CriteriaBuilder).

Why do we need this?

I typically have a bunch of optional search criteria, which I transform into predicates:

List<Predicate> predicates = new ArrayList<>();

if (namePrefix != null) {
   Predicate namePredicate = ...
   predicates.add(namePredicate);
}

// boiler-plate: the List has to be converted to an array first
Predicate[] predicateArray = predicates.toArray(new Predicate[0]);
query.where(predicateArray);

I'd like to just be able to do this:

 query.where(predicates);

Same for CriteriaBuilder::and (and ::or, etc.), which only accept arrays. I suggest adding a Collection<T> parameter to them.

This makes the Criteria API really dynamic (arrays aren't).
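Since Java 8 default methods are on the table, the List overload could even be added without breaking existing providers. A minimal sketch, using stand-in interfaces rather than the real javax.persistence.criteria types:

```java
import java.util.List;

// Stand-ins for the real Criteria API types, to show the delegation idea only.
interface Predicate {
}

interface CriteriaQuery<T> {
    // The existing varargs method that providers already implement.
    CriteriaQuery<T> where(Predicate... restrictions);

    // Proposed addition: a default method delegating to the varargs form,
    // so providers would need no changes once Java 8 is the baseline.
    default CriteriaQuery<T> where(List<Predicate> restrictions) {
        return where(restrictions.toArray(new Predicate[0]));
    }
}
```

The same default-method trick would work for CriteriaBuilder's and/or with a Collection<Predicate> parameter.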





[JPA_SPEC-77] EntityManager(Factory) should implement AutoCloseable Created: 13/Apr/14  Updated: 02/Dec/16

Status: Open
Project: jpa-spec
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Improvement Priority: Minor
Reporter: braghest Assignee: Unassigned
Resolution: Unresolved Votes: 7
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified

Tags: autocloseable, java_7

 Description   

EntityManager and EntityManagerFactory have #close() methods but do not implement AutoCloseable. Implementing AutoCloseable would allow them to be used in a Java 7 try-with-resources statement.
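A minimal sketch of what the change would enable. EntityManager below is a stand-in interface, not the real javax.persistence type; since close() already throws no checked exception in JPA, the override can simply narrow AutoCloseable's throws clause:

```java
// If the spec made EntityManager extend AutoCloseable, its existing close()
// would satisfy the contract and try-with-resources would just work.
interface EntityManager extends AutoCloseable {
    @Override
    void close(); // narrowed: no checked exception, matching today's signature
}

class Demo {
    static boolean closed = false;

    // Toy factory method standing in for EntityManagerFactory.createEntityManager().
    static EntityManager openEntityManager() {
        return () -> { closed = true; };
    }

    static void run() {
        // Desired usage: close() is invoked automatically when the block exits.
        try (EntityManager em = openEntityManager()) {
            // ... work with em ...
        }
    }
}
```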



 Comments   
Comment by flutter [ 21/May/15 ]

This would mean dropping Java 6 support, right?

Comment by braghest [ 22/May/15 ]

That would mean dropping Java 6 support, but then again Java EE 7 requires Java SE 7.

Comment by neilstockton [ 28/May/15 ]

It would mean dropping Java 1.6, yes. But then Java 1.6 and Java 1.7 are BOTH end of life now.
By the time the next version of JPA happens (who knows when that is), Java 1.8+ should be the baseline, hence no reason why this issue can't be included.
+1

Comment by jemiller1 [ 31/Aug/16 ]

Come on guys. It's 2016 and Java 8 has been out for over a year. Is this ever going to get implemented? It seems obvious to me that this needs to happen. I don't know what to say about the whole Java standards process other than that it's extremely slow. Coming from a .NET environment, working with Java is a huge step backwards. I'm amazed that things as simple as this don't just work. And, as has already been pointed out, Java 6 and 7 are already EOL.

Comment by s.grinovero [ 02/Dec/16 ]

For the record, we have had this in Hibernate for a while; initially we implemented java.io.Closeable, which only requires Java 1.5. Since java.io.Closeable extends AutoCloseable as of Java 7, people on Java >= 7 could already use the try-with-resources pattern.





Generated at Sat Dec 10 19:12:15 UTC 2016 using JIRA 6.2.3#6260-sha1:63ef1d6dac3f4f4d7db4c1effd405ba38ccdc558.