I have some data that is sampled at a very high rate (on the order of hundreds of times per second). This results in sequences that are huge (~90,000 samples) on average.
When you have very long sequences, RNNs can face the problems of vanishing gradients and exploding gradients.
There are methods to deal with this. But first it helps to understand why they are needed: backpropagation through time becomes very difficult over long sequences precisely because of the problems mentioned above.
The introduction of the LSTM reduced this problem by a very large margin, but with sequences this long you can still run into it.
So one way is clipping the gradients. That means you set an upper bound on the gradients (for example, on their global norm). Refer to this Stack Overflow question.
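To make the idea concrete, here is a minimal sketch of clipping by global norm in plain numpy (frameworks have built-ins for this, e.g. `torch.nn.utils.clip_grad_norm_` in PyTorch; the function name and the example gradient values below are just for illustration):

```python
import numpy as np

def clip_by_global_norm(grads, max_norm):
    """Rescale a list of gradient arrays so that their combined
    L2 norm does not exceed max_norm (the upper bound)."""
    total_norm = np.sqrt(sum(np.sum(g ** 2) for g in grads))
    if total_norm > max_norm:
        scale = max_norm / total_norm
        grads = [g * scale for g in grads]
    return grads

# Toy gradients that have "exploded": global norm = sqrt(9+16+144) = 13
grads = [np.array([3.0, 4.0]), np.array([12.0])]
clipped = clip_by_global_norm(grads, max_norm=1.0)
```

After clipping, the direction of the update is preserved; only its magnitude is capped.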
Then, as for the question you asked:
What are some methods to effectively 'chunk' these sequences?
One way is truncated backpropagation through time (truncated BPTT). There are a number of ways to implement it. The simple idea is to take the full sequence and only backpropagate gradients for a given number of time steps from each selected time block, so the sequence is still processed in a continuous way.
Here is the best article I found explaining these truncated BPTT methods; it is very accessible. Refer to Styles of Truncated Backpropagation.
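The processing pattern can be sketched as follows. This is a toy numpy example (the RNN sizes and truncation length `k` are made up for illustration): the forward pass runs continuously over the whole sequence, but the hidden state would be detached at each chunk boundary so gradients only flow back `k` steps (in PyTorch that is `h.detach()`, in TensorFlow `tf.stop_gradient`):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy vanilla RNN parameters (hypothetical sizes for illustration).
Wx = rng.normal(size=(1, 8)) * 0.1   # input -> hidden weights
Wh = rng.normal(size=(8, 8)) * 0.1   # hidden -> hidden weights

def rnn_forward(x, h):
    """Run the RNN over sequence x, starting from hidden state h."""
    for x_t in x:
        h = np.tanh(x_t @ Wx + h @ Wh)
    return h

seq = rng.normal(size=(90, 1))  # stand-in for the ~90,000-sample sequence
k = 10                          # truncation length (time-block size)

# Truncated BPTT loop: the hidden state carries across chunks, so the
# forward pass is identical to processing the full sequence at once,
# but the backward pass (done per chunk in a real framework) would
# stop at each chunk boundary because h is detached there.
h = np.zeros(8)
for start in range(0, len(seq), k):
    chunk = seq[start:start + k]
    h = rnn_forward(chunk, h)
    # ... compute loss and backpropagate k steps here, then
    # detach h before the next chunk (h = h.detach() in PyTorch).

h_full = rnn_forward(seq, np.zeros(8))  # same result, single pass
```

Note that truncation changes only which gradients are computed, not the forward pass: `h` and `h_full` are identical.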
This post is from some time ago, but I thought I would chime in here. For this specific problem that you are working on (a one-dimensional continuous-valued signal with locality, compositionality, and stationarity), I would highly recommend a convolutional neural network (CNN) approach, as opposed to using an LSTM.
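One reason a CNN works well here is that strided 1-D convolutions shrink the sequence geometrically. A rough numpy sketch (the kernel, stride, and layer count are arbitrary choices for illustration, not a tuned architecture):

```python
import numpy as np

def conv1d(x, kernel, stride):
    """Valid 1-D convolution with a stride (no padding)."""
    k = len(kernel)
    out_len = (len(x) - k) // stride + 1
    return np.array([np.dot(x[i * stride:i * stride + k], kernel)
                     for i in range(out_len)])

x = np.random.default_rng(1).normal(size=90_000)
kernel = np.ones(9) / 9  # a simple smoothing filter as a stand-in

# Each stride-4 layer shrinks the sequence ~4x; four such layers take
# the ~90,000-sample signal down to a few hundred time steps, a length
# that any downstream head (dense, or even an LSTM) handles easily.
for _ in range(4):
    x = conv1d(x, kernel, stride=4)
```

In a real model each layer would have learned filters and multiple channels, but the length arithmetic is the same.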
Three years later, we have what seems to be the start of solutions for this type of problem: sparse transformers.
See
https://arxiv.org/abs/1904.10509
https://openai.com/blog/sparse-transformer/