Question
Guys, I have a very strange problem.
I am setting up some endpoints in my application, and I have an endpoint like this:
@Path("/ioconfiguration")
public class IOConfigurationEndPoint {
@EJB
private static IOConfigurationDAO ioConfigurationDAO;
@GET
@Produces(MediaType.APPLICATION_JSON)
public Response getAllIoConfigurations() {
ioConfigurationDAO = new IOConfigurationDAO();
ioConfigurationDAO.init();
List<IOConfiguration> list = ioConfigurationDAO.findAllIOConfiguration();
ioConfigurationDAO.destroy();
return Response.status(Response.Status.OK).entity(list).build();
}
}
The idea is that I need to fetch all the "IO Configuration" information from the table. I have 32 rows in a table called "IO_CONFIGURATION", and the POJO for this entity looks like this:
@Entity
@Indexed
@Table(name = "IO_CONFIGURATION",
       indexes = {@Index(columnList = "CHANNEL_NAME", name = "CHANNEL_NAME")})
public class IOConfiguration implements Serializable {

    private static final long serialVersionUID = 7542743172221933818L;

    @Id
    @GenericGenerator(name = "IOConfiguration", strategy = "uuid")
    @GeneratedValue(generator = "IOConfiguration")
    @Column(name = "IO_CONFIGURATION_ID")
    private String ioConfigurationId;

    @Field(analyze = Analyze.NO)
    @Column(name = "CHANNEL_NAME")
    private String channelName;

    @Column(name = "NAME")
    private String name;

    @Column(name = "CONVERTION_TYPE")
    private String conversionType;

    @Column(name = "M_INFO")
    private Double mInfo;

    @Column(name = "B_INFO")
    private Double bInfo;

    @Column(name = "VOLTAGE_DIVIDE")
    private String voltageDivide;

    @Column(name = "SAMPLE_RANGE")
    private String sampleRange;

    @Column(name = "SAMPEL_PERIOD")
    private Integer samplePeriod;

    @Column(name = "STORE_ROW")
    private Boolean storeRow;

    @Column(name = "STORE_CONVERTED")
    private Boolean storeConverted;

    @Column(name = "DEFAULT_GRAPH")
    private String defaultGraph;

    @Column(name = "TITLE")
    private String title;

    @Column(name = "UNITS")
    private String units;

    @Column(name = "RANGE_LOWERBOUND")
    private Integer rangeLowerbound;

    @Column(name = "RANGE_UPPERBOUND")
    private Integer rangeUpperbound;

    @OneToMany(mappedBy = "ioConfiguration", fetch = FetchType.EAGER)
    private List<Alert> alerts;

    @OneToMany(mappedBy = "ioConfiguration", fetch = FetchType.EAGER)
    private List<DataSeriesMeta> dataSeriesMeta;

    @OneToMany(mappedBy = "ioConfiguration", fetch = FetchType.LAZY)
    private List<NodeData> nodeData;

    @Column(name = "CODE")
    private String code;

    @Column(name = "ACTIVE")
    private Boolean active;

    ...
}
And here is how I insert the rows:
private void init() {
    ioConfigurationDAO = new IOConfigurationDAO();
    ioConfigurationDAO.init();
    property = new AigatewayProperty();
    for (int i = 1; i <= property.MAX_PORT_NUM; ++i) {
        ioConfigurationDAO.getManager().getTransaction().begin();
        ioConfigurationDAO.createIOConfiguration(i);
        ioConfigurationDAO.getManager().getTransaction().commit();
    }
    List<IOConfiguration> list = ioConfigurationDAO.findAllIOConfiguration();
    System.out.println(list);
    ioConfigurationDAO.destroy();
}
And I can see the rows being inserted into my database from the cqlsh console.
All the services I have written for my DAO, like insert, delete, and modify, work perfectly, so I suppose there is no problem with the connection between WildFly and my Cassandra database.
But queries don't work as expected when I use HQL.
For the endpoint I mentioned above, this is the method I am trying to call:
@SuppressWarnings("unchecked")
public List<IOConfiguration> findAllIOConfiguration() {
    Query query = this.getManager().createNativeQuery("select * from \"IO_CONFIGURATION\"");
    // Query query = this.getManager().createQuery("from IOConfiguration");
    return query.getResultList();
}
If I use createNativeQuery as in the first line, the endpoint works perfectly and RESTEasy returns the expected list of rows.
But if I use createQuery as in the commented-out second line, the endpoint does not work and gives me an empty list.
Here is my persistence.xml for reference:
<?xml version="1.0"?>
<persistence xmlns="http://java.sun.com/xml/ns/persistence"
             xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
             xsi:schemaLocation="http://java.sun.com/xml/ns/persistence http://java.sun.com/xml/ns/persistence/persistence_2_0.xsd"
             version="2.0">
    <persistence-unit name="JPAService">
        <!-- Use the Hibernate OGM provider: configuration will be transparent -->
        <provider>org.hibernate.ogm.jpa.HibernateOgmPersistence</provider>
        <class>com.sensorhound.aigateway.domain.Alert</class>
        <class>com.sensorhound.aigateway.domain.DataSeriesMeta</class>
        <class>com.sensorhound.aigateway.domain.IOConfiguration</class>
        <class>com.sensorhound.aigateway.domain.NodeData</class>
        <properties>
            <property name="hibernate.transaction.jta.platform" value="JBossAS" />
            <property name="jboss.as.jpa.providerModule" value="org.hibernate:5.0" />
            <property name="hibernate.ogm.datastore.provider" value="cassandra_experimental"/>
            <property name="hibernate.ogm.datastore.host" value="127.0.0.1:9042"/>
            <property name="hibernate.ogm.datastore.database" value="dev"/>
        </properties>
    </persistence-unit>
</persistence>
I don't know what the reason is. It's very strange; can someone explain this to me?
Thanks
Edit:
I am not able to upload a screenshot of my database connection test results because I don't have enough Stack Overflow reputation, so here is what I can share about my database. I am using Cassandra, with a keyspace called 'dev' that holds all my tables. This is the result I get in the terminal after typing "describe dev":
CREATE KEYSPACE dev WITH replication = {'class': 'SimpleStrategy', 'replication_factor': '1'} AND durable_writes = true;
CREATE TABLE dev."DATA_SERIES_META" ( "DATA_SERIES_META_ID" text PRIMARY KEY, "B_INFO" double, "CHANNEL_NAME" text, "CONVERTION_TYPE" text, "IO_CONFIGURATION" text, "LAST_SAMPLE_TIME" text, "M_INFO" double, "NAME" text, "SAMPEL_PERIOD" int, "SAMPLE_RANGE" text, "START_TIME" text, "STORE_CONVERTED" boolean, "STORE_ROW" boolean, "TOTAL_SAMPLE" bigint, "VOLTAGE_DIVIDE" text ) WITH bloom_filter_fp_chance = 0.01 AND caching = {'keys': 'ALL', 'rows_per_partition': 'NONE'} AND comment = '' AND compaction = {'class': 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy', 'max_threshold': '32', 'min_threshold': '4'} AND compression = {'chunk_length_in_kb': '64', 'class': 'org.apache.cassandra.io.compress.LZ4Compressor'} AND crc_check_chance = 1.0 AND dclocal_read_repair_chance = 0.1 AND default_time_to_live = 0 AND gc_grace_seconds = 864000 AND max_index_interval = 2048 AND memtable_flush_period_in_ms = 0 AND min_index_interval = 128 AND read_repair_chance = 0.0 AND speculative_retry = '99PERCENTILE'; CREATE INDEX DATA_SERIES_META_IO_CONFIGURATION ON dev."DATA_SERIES_META" ("IO_CONFIGURATION");
CREATE TABLE dev."NODE_DATA" ( "NODEDATA_ID" text PRIMARY KEY, "IO_CONFIGURATION" text, "TIME" bigint, "VALUE" double ) WITH bloom_filter_fp_chance = 0.01 AND caching = {'keys': 'ALL', 'rows_per_partition': 'NONE'} AND comment = '' AND compaction = {'class': 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy', 'max_threshold': '32', 'min_threshold': '4'} AND compression = {'chunk_length_in_kb': '64', 'class': 'org.apache.cassandra.io.compress.LZ4Compressor'} AND crc_check_chance = 1.0 AND dclocal_read_repair_chance = 0.1 AND default_time_to_live = 0 AND gc_grace_seconds = 864000 AND max_index_interval = 2048 AND memtable_flush_period_in_ms = 0 AND min_index_interval = 128 AND read_repair_chance = 0.0 AND speculative_retry = '99PERCENTILE'; CREATE INDEX NODE_DATA_IO_CONFIGURATION ON dev."NODE_DATA" ("IO_CONFIGURATION");
CREATE TABLE dev."IO_CONFIGURATION" ( "IO_CONFIGURATION_ID" text PRIMARY KEY, "ACTIVE" boolean, "B_INFO" double, "CHANNEL_NAME" text, "CODE" text, "CONVERSION_TYPE" text, "DEFAULT_GRAPH" text, "M_INFO" double, "NAME" text, "RANGE_LOWERBOUND" int, "RANGE_UPPERBOUND" int, "SAMPLE_PERIOD" int, "SAMPLE_RANGE" text, "STORE_CONVERTED" boolean, "STORE_ROW" boolean, "TITLE" text, "UNIT" text, "VOLTAGE_DIVIDE" text ) WITH bloom_filter_fp_chance = 0.01 AND caching = {'keys': 'ALL', 'rows_per_partition': 'NONE'} AND comment = '' AND compaction = {'class': 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy', 'max_threshold': '32', 'min_threshold': '4'} AND compression = {'chunk_length_in_kb': '64', 'class': 'org.apache.cassandra.io.compress.LZ4Compressor'} AND crc_check_chance = 1.0 AND dclocal_read_repair_chance = 0.1 AND default_time_to_live = 0 AND gc_grace_seconds = 864000 AND max_index_interval = 2048 AND memtable_flush_period_in_ms = 0 AND min_index_interval = 128 AND read_repair_chance = 0.0 AND speculative_retry = '99PERCENTILE'; CREATE INDEX IO_CONFIGURATION_CHANNEL_NAME ON dev."IO_CONFIGURATION" ("CHANNEL_NAME");
CREATE TABLE dev."ALERT" ( "ALERT_ID" text PRIMARY KEY, "IO_CONFIGURATION" text, "OPERATOR" text, "VALUE" double ) WITH bloom_filter_fp_chance = 0.01 AND caching = {'keys': 'ALL', 'rows_per_partition': 'NONE'} AND comment = '' AND compaction = {'class': 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy', 'max_threshold': '32', 'min_threshold': '4'} AND compression = {'chunk_length_in_kb': '64', 'class': 'org.apache.cassandra.io.compress.LZ4Compressor'} AND crc_check_chance = 1.0 AND dclocal_read_repair_chance = 0.1 AND default_time_to_live = 0 AND gc_grace_seconds = 864000 AND max_index_interval = 2048 AND memtable_flush_period_in_ms = 0 AND min_index_interval = 128 AND read_repair_chance = 0.0 AND speculative_retry = '99PERCENTILE'; CREATE INDEX ALERT_IO_CONFIGURATION ON dev."ALERT" ("IO_CONFIGURATION");
So probably the problem is not that I have a duplicate table.
And here is the code that inserts a row into the "IO_CONFIGURATION" table:
public IOConfiguration createIOConfiguration(Integer i) {
    if (i <= 0) {
        throw new IllegalArgumentException(
                "Invalid configuration channel found when trying to create IO configuration");
    } else if (i.equals(null)) {
        throw new NullPointerException(
                "Configuration channel is null when trying to create IO configuration");
    }
    IOConfiguration emp = new IOConfiguration();
    emp.setChannelName(ChannelName.fromInt(i));
    this.getManager().persist(emp);
    return emp;
}
I cannot share a screenshot of the result, but I am sure that after calling this method a row is inserted into the "IO_CONFIGURATION" table.
For now I am just using createNativeQuery in this project. Selecting a single row is fine, since I can pass a POJO class along with the query and it works perfectly.
But selecting a list of rows from a table with this native query is frustrating. I have to manually cast the result objects to my POJO, because when I do specify a POJO class with the query, like this:
Query query = ioConfigurationDAO.getManager()
        .createNativeQuery("select * from \"IO_CONFIGURATION\"", IOConfiguration.class);
it only gives me the first row in the database, and I have no idea what is going on.
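So for now I fall back to the untyped query and map the rows by hand, roughly like the sketch below. This is only a sketch: it assumes the untyped native query returns Object[] rows in the selected column order, and the setter names are assumed to follow the usual JavaBean pattern for the entity above.

@SuppressWarnings("unchecked")
public List<IOConfiguration> findAllIOConfigurationManually() {
    // Untyped native query: each result row comes back as an Object[] (assumption).
    List<Object[]> rows = this.getManager()
            .createNativeQuery("select \"IO_CONFIGURATION_ID\", \"CHANNEL_NAME\", \"NAME\" from \"IO_CONFIGURATION\"")
            .getResultList();
    List<IOConfiguration> result = new ArrayList<>(rows.size());
    for (Object[] row : rows) {
        IOConfiguration conf = new IOConfiguration();
        conf.setIoConfigurationId((String) row[0]); // setter names assumed
        conf.setChannelName((String) row[1]);
        conf.setName((String) row[2]);
        // ... remaining columns mapped the same way
        result.add(conf);
    }
    return result;
}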
If you need more information, I am happy to share as much as I can.
Thanks!
Answer 1:
To execute queries with Cassandra and Hibernate OGM you need Hibernate Search at the moment.
This paragraph of the documentation should give you all the details: https://docs.jboss.org/hibernate/stable/ogm/reference/en-US/html_single/index.html#ogm-query-using-hibernate-search
In short, you need to add the annotation org.hibernate.search.annotations.Indexed
on the entity class and org.hibernate.search.annotations.Field
on the fields you want to search.
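A minimal sketch of where the two annotations go (the entity and field names here are only illustrative):

import javax.persistence.Entity;
import javax.persistence.Id;
import org.hibernate.search.annotations.Analyze;
import org.hibernate.search.annotations.Field;
import org.hibernate.search.annotations.Indexed;

@Entity
@Indexed                         // registers the entity with Hibernate Search
public class IOConfiguration {

    @Id
    private String ioConfigurationId;

    @Field(analyze = Analyze.NO) // indexed property, so queries can filter on it
    private String channelName;

    // remaining fields, getters and setters omitted
}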
EDIT: I didn't realize you are already using Hibernate Search, so I'll extend the answer.
This means queries will only find the entities that are indexed. If your DB already contains some entities, you need to index them before you can find them.
The first time you start the application you will need to run something like:
FullTextSession session = Search.getFullTextSession( openSession() );
session.createIndexer( IOConfiguration.class ).startAndWait();
This will make sure the existing entities are added to the index. Check the Hibernate Search documentation for more details about using the mass indexer.
You only need to do this when you want to re-create or refresh the index; when you execute CRUD operations through Hibernate OGM the index is updated automatically.
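Since your DAO exposes a JPA EntityManager (your getManager() method) rather than a Hibernate Session, an equivalent sketch through the JPA API would look roughly like this; treat it as an assumption-based sketch, not something tested against your setup:

import org.hibernate.search.jpa.FullTextEntityManager;
import org.hibernate.search.jpa.Search;

public void rebuildIoConfigurationIndex() throws InterruptedException {
    // Wrap the existing EntityManager and rebuild the index from the rows already in Cassandra.
    FullTextEntityManager ftem = Search.getFullTextEntityManager(ioConfigurationDAO.getManager());
    ftem.createIndexer(IOConfiguration.class).startAndWait();
}

Once the index has been rebuilt, the commented-out createQuery("from IOConfiguration") variant in findAllIOConfiguration should start returning the existing rows, and later inserts should keep the index up to date automatically, as noted above.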
Source: https://stackoverflow.com/questions/42658592/createquery-not-working-but-createnativequery-works