According to the documentation of scipy.signal.resample, the speed should vary according to the length of the input:

As noted, resample uses FFT transformations, which can be very slow if the number of input samples is large and prime; see scipy.fftpack.fft.
The docstring, somewhat misleadingly, tells only part of the story. The resampling process consists of an FFT (at the input size), zero-padding or truncation of the spectrum, and an inverse FFT (at the output size). So an inconvenient output size will slow it down just as much as an inconvenient input size will.
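To make the output-size dependence visible, one can time resample with an FFT-friendly versus a prime output length. The lengths below are arbitrary illustrations, and how dramatic the slowdown is depends on the FFT backend of your scipy version:

```python
import time

import numpy as np
from scipy.signal import resample

x = np.random.rand(2 ** 16)   # input length 65536 = 2**16, FFT-friendly

t0 = time.time()
a = resample(x, 2 ** 15)      # output length 32768, also FFT-friendly
print('fast output size:', time.time() - t0)

t0 = time.time()
b = resample(x, 32749)        # 32749 is prime, so the inverse FFT is the costly step
print('prime output size:', time.time() - t0)
```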
Cris Luengo suggested using direct interpolation in the spatial domain, which should be faster here. For example, scipy.ndimage.zoom does this (cubic-spline interpolation by default):
import time
import numpy as np
from scipy.ndimage import zoom

y = np.random.rand(262144, 2)    # 2-D input; the second dimension is an assumption
t0 = time.time()
zoom(y, (220435. / 262144., 1))  # maybe with prefilter=False ? up to you
print(time.time() - t0)          # about 200 times faster than resample
Not the same output as resample (a different method after all), but for smooth data (unlike random input used here) they should be close.
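For instance, on a smooth band-limited signal the two methods agree closely (the sine and the sizes below are just for illustration; the small residual comes from the slightly different sample grids and from the spline's boundary handling):

```python
import numpy as np
from scipy.signal import resample
from scipy.ndimage import zoom

t = np.arange(4096) / 4096
y = np.sin(2 * np.pi * 5 * t)    # smooth, band-limited input

a = resample(y, 2048)            # Fourier-domain resampling
b = zoom(y, 2048 / 4096)         # cubic-spline interpolation
print(np.max(np.abs(a - b)))     # small, though not zero
```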
Just to add that the output size matters for upsampling only. For downsampling the process is: FFT -> multiply -> iFFT -> downsample. So in downsampling, the FFT/iFFT cost has nothing to do with the output size, only the input size.
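The downsampling pipeline the comment describes can be sketched like this. It is a simplified illustration of the idea, not scipy's actual implementation; the function name and the ideal low-pass mask are my own:

```python
import numpy as np

def downsample_fft(x, factor):
    """Comment's pipeline: FFT -> low-pass multiply -> iFFT (all at the
    input size) -> decimate. A sketch, not scipy's code."""
    n = len(x)
    X = np.fft.fft(x)              # FFT at the *input* size
    cutoff = n // (2 * factor)
    mask = np.zeros(n)
    mask[:cutoff] = 1.0            # keep positive frequencies below the new Nyquist
    mask[-cutoff:] = 1.0           # and the matching negative frequencies
    y = np.fft.ifft(X * mask).real # iFFT, still at the input size
    return y[::factor]             # decimation happens after the transforms

x = np.sin(2 * np.pi * 3 * np.arange(1024) / 1024)
print(downsample_fft(x, 4).shape)  # (256,)
```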