I have the following code, which is used to deconvolve a signal. It works very well, within my error limit, as long as I divide my final result by a very large factor (11000).
width = 83.66;
x = linspace(-400,400,1000);
a2 = 1.205e+004;
al = 1.778e+005;
b1 = 94.88;
c1 = 224.3;
d = 4.077;
measured = al*exp(-((abs((x-b1)./c1).^d))) + a2;
rect = @(x) 0.5*(sign(x+0.5) - sign(x-0.5));
rt = rect(x/83.66);
signal = conv(rt,measured,'same');
check = (1/11000)*conv(signal,rt,'same');
Here is what I have: measured represents the signal I was given, signal is what I am trying to find, and check is there to verify that if I convolve my slit with the signal I found, I get measured back. If you run exactly what I have, you will see that check and measured are off by that factor of roughly 11000 I threw up there.
Does anyone have any suggestions? My thought is that either the slit height is not exactly 1, or that conv will not actually deconvolve the signal the way I am asking it to. (deconv only gives me one point, so I used conv instead.)
I think you misunderstand what conv (and therefore probably also deconv) is doing.
A discrete convolution is simply a sum. In fact, you can expand it with a couple of explicit loops as a sum of products of elements of the measured and rt vectors, as in the sketch below.
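To make that concrete, here is a minimal sketch of the sum that conv(rt,measured) computes, written with explicit loops (cfull is just a local name for illustration):

% Explicit-loop version of the full discrete convolution of rt and measured.
na = numel(rt);
nb = numel(measured);
cfull = zeros(1, na + nb - 1);
for n = 1:numel(cfull)
    % only index combinations where both vectors are defined
    for k = max(1, n - nb + 1):min(na, n)
        % each output sample is a sum of products of the two vectors
        cfull(n) = cfull(n) + rt(k) * measured(n - k + 1);
    end
end
% cfull now matches conv(rt, measured) to within roundoff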
Note that sum(rt) is not 1. Were rt scaled to sum to 1, then conv would preserve the scaling of your original vector. So, note how the scalings pass through here.
sum(rt)
ans = 104

sum(measured)
ans = 1.0231e+08

signal = conv(rt,measured);
sum(signal)
ans = 1.0640e+10

sum(signal)/sum(rt)
ans = 1.0231e+08
See that this next version does preserve the scaling of your vector:
signal = conv(rt/sum(rt),measured);
sum(signal)
ans = 1.0231e+08
Now, as it turns out, you are using the 'same' option for conv. This introduces an edge effect, since it truncates part of the convolution result, so it ends up losing just a bit:
signal = conv(rt/sum(rt),measured,'same');
sum(signal)
ans = 1.0187e+08
The idea is that conv will preserve the scaling of your signal as long as the kernel is scaled to sum to 1, AND there are no losses due to truncation of the edges. Of course convolution as an integral also has a similar property.
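If you want a quick sanity check of that claim (toy vectors here, not your data): with the 'full' output the sums simply multiply, so a kernel that sums to 1 changes nothing, while 'same' throws away the tails and loses a little.

a = [1 2 3 4];
k = [0.2 0.5 0.3];          % kernel already sums to 1
sum(conv(a,k))              % 10, equal to sum(a)*sum(k)
sum(conv(a,k,'same'))       % about 8.6, less because the tails are cut off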
By the way, where did that quoted factor of roughly 11000 come from?
sum(rt)^2
ans = 10816
Might be coincidence. Or not. Think about it.
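If you want to check it numerically (this assumes the variables from the code in the question are still in your workspace): convolving twice with the unnormalized rt multiplies the overall scale by roughly sum(rt)^2, before any edge losses from 'same'.

% Two passes with the unnormalized rt scale everything by about sum(rt)^2.
twice = conv(conv(rt, measured, 'same'), rt, 'same');
max(twice) / max(measured)     % lands in the neighborhood of sum(rt)^2 = 10816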