Spring Data GemFire DiskStore


Question


I need to persist the data in a Region to disk using Spring Data GemFire.

Using the config below (Locator and Server are started using Gfsh):

@EnablePdx
@ClientCacheApplication
@EnableDiskStore(name = "disk_store")
@EnableClusterConfiguration(useHttp = true)
@EnableEntityDefinedRegions(basePackages = "xxx.entity")
@EnableGemfireRepositories(basePackages = "xxx.repository")
public class GeodeClientConfiguration {

}

The application properties are below:

spring.data.gemfire.disk.store.name=disk_store
spring.data.gemfire.disk.store.directory.location=C:\\apache-geode-1.9.0\\diskstore

The above config creates a DiskStore (once the code to store the data is run). The issue is that once the server is stopped, the disk store gets deleted.

Looked at the documentation and examples by John Blum to no avail.

I also tried to create the DiskStore using Gfsh, but ended up with multiple DiskStores and no data in the disk store created in Gfsh.

Any idea what I might be missing?

Thanks


Answer 1:


Even with your Java configuration above, your arrangement is still a bit unclear/ambiguous to me. However, let's start with what we know.

First, it is clear from your Java configuration above that you are creating a Spring ClientCache application that connects to, and sends/receives data to/from, a standalone GemFire cluster.

You also state that you are starting the Locator and Server(s) using Gfsh. All fine so far.

However, you have...

1) Annotated your client application class with @EnableEntityDefinedRegions (which is fine) without specifying an alternate data policy using the clientRegionShortcut attribute. By default, the clientRegionShortcut is set to PROXY (see here), which means your client application keeps NO local state (a sketch after this list illustrates the clientRegionShortcut attribute).

2) Then, you define a DiskStore (i.e. "disk_store") on the client with the @EnableDiskStore annotation, which is probably not what you want given there is currently NO local state kept on the client Regions.
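For illustration of the clientRegionShortcut attribute mentioned in #1, here is a minimal sketch of a client configuration that actually keeps local state (this is NOT the recommended fix, which follows further below; the EntityType class is a placeholder, and ClientRegionShortcut comes from the org.apache.geode.cache.client package). Note that even with local state, a client-side DiskStore is only used by Regions that are persistent or overflow to disk; CACHING_PROXY by itself keeps its copy in memory:

@EnablePdx
@ClientCacheApplication
@EnableDiskStore(name = "disk_store")
// CACHING_PROXY keeps a local, in-memory copy of entries on the client in addition
// to forwarding operations to the server; the default, PROXY, keeps nothing locally.
@EnableEntityDefinedRegions(basePackageClasses = EntityType.class,
    clientRegionShortcut = ClientRegionShortcut.CACHING_PROXY)
public class LocalStateGeodeClientConfiguration {

}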

NOTE: @EnableClusterConfiguration does not push configuration meta-data for DiskStores up to the server from the client. Currently, it only pushes Region and Index configuration meta-data up to the servers, as defined on the client.

Otherwise, the rest of the Spring (for GemFire) configuration (using Annotations) seems just fine.

NOTE: Also keep in mind that the @EnableClusterConfiguration annotation is careful not to stomp on existing Regions on the server-side. If the same Regions by name already exist, then the server will not apply the definition sent by the client when declaring the @EnableClusterConfiguration annotation (i.e. the annotation will not "nuke-and-pave"). That is by design, primarily to protect against data loss.

NOTE: I also recommend that you use the type-safe alternative to the basePackages attribute, namely basePackageClasses, in both the @EnableEntityDefinedRegions and @EnableGemfireRepositories annotations. It can refer to 1 or more classes, but each class is only used to determine the package from which to begin the scan. For example, given the entity classes example.app.customers.model.Customer and example.app.products.model.Product, you can declare @EnableEntityDefinedRegions(basePackageClasses = { Customer.class, Product.class }), and SDG will use the package declarations of those classes to begin the scan for entity classes (sub-packages included). You do not need to list all (or multiple) classes from a package; 1 per (top-level) package from which you want to scan will suffice. It is good to limit the scan.

So, in your case, you probably want to do the following:

On the client:

@EnablePdx
@ClientCacheApplication
@EnableClusterConfiguration(useHttp = true)
@EnableEntityDefinedRegions(basePackageClasses = EntityType.class)
@EnableGemfireRepositories(basePackageClasses = RepositoryType.class)
public class GeodeClientConfiguration {

}

And then, on the server, to "persist" data, you want to create "PERSISTENT" Regions. You can accomplish this in 1 of 2 ways:

1) First, you can configure the client, using the @EnableClusterConfiguration annotation, to tell the server to create a "PERSISTENT" Region when it creates the matching Region (by name) defined on the client. By default, the client-side @EnableClusterConfiguration annotation tells the server to create a non-persistent PARTITION Region (see here). So, you would change the @EnableClusterConfiguration annotation in your client configuration to:

@ClientCacheApplication
@EnableClusterConfiguration(useHttp = true, serverRegionShortcut = RegionShortcut.PARTITION_PERSISTENT)
...
class GeodeClientConfiguration { ... }

You may use any of the non-LOCAL, "PERSISTENT" RegionShortcut (data policy) types (see here), primarily the PARTITION_PERSISTENT* and REPLICATE_PERSISTENT* variants.

Then, when the client pushes the Region configuration meta-data to the server, the server will create a Region with the same name and designated (data policy) type (as defined by the @EnableClusterConfiguration annotation's serverRegionShortcut attribute).

Again, keep in mind that if the Region already exists, the server will not re-create it. If you want the client to (re-)create the Region on (each application) restart, you need to destroy the Region using Gfsh first (see the destroy region example after option 2 below).

2) Alternatively, you can use Gfsh to create the Region, using:

gfsh> create region --name=Example --type=PARTITION_PERSISTENT
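As a side note to the point above about existing Regions: if you want the client to re-create the Region on a subsequent restart, you would first destroy the existing server-side Region in Gfsh (assuming the Region is named "Example"):

gfsh> destroy region --name=Example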

Finally, when it comes to the DiskStore: since your client Regions keep NO local state (and even if they did, you probably want the data "persisted" server-side instead), if you do nothing more than declare the server-side Region(s) with a "PERSISTENT" data policy, using 1 of the 2 methods above, then GemFire, by default, writes the data to the "DEFAULT" DiskStore.
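As a sanity check, you can see which DiskStores the servers currently define with Gfsh:

gfsh> list disk-stores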

If you want to associate a "specific" DiskStore (by name) with the Region (e.g. "Example"), then you must first create the DiskStore using Gfsh:

gfsh> create disk-store --name=disk_store ...

See here.
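A fuller version of that command might look like the following, reusing the directory from your properties above (the path is just an example and must be valid on the server host):

gfsh> create disk-store --name=disk_store --dir=C:\apache-geode-1.9.0\diskstore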

And then, create the Region with Gfsh:

gfsh> create region --name=Example --type=PARTITION_PERSISTENT --disk-store=disk_store ...

See here.

The DiskStore is used both to "persist" data as well as overflow data to disk if you have Eviction configured to "OVERFLOW_TO_DISK" (see here).
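For example, an overflow-style Region tied to the same DiskStore could be created with Gfsh like this (a sketch only; PARTITION_OVERFLOW evicts least-recently-used entries to disk as the heap fills, rather than persisting everything):

gfsh> create region --name=Example --type=PARTITION_OVERFLOW --disk-store=disk_store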

Everything from #2 (creating the Region) onward, including the DiskStore creation, is all server-side.

Anyway, I hope all of this makes sense and helps.

If you have additional questions or problems, feel free to follow up in the comments.

Thanks.



Source: https://stackoverflow.com/questions/56479867/spring-data-gemfire-diskstore
