I have a specific kinematic linkage as part of a more complex machine and I need to compute some physical parameters that are very hard (more like impossible) to measure with proper accuracy using the instruments at my disposal.
[kinematics]
At first look it is a simple 1-degree-of-freedom arm (black) which can rotate around the x axis. It has a weight that forces it upward until it hits either the mechanical endpoint (angle a0) or some tube (blue) with radius r0. The arm rotation center is at y0. The tube can be moved to any height y(t).
[usage]
This is used to measure the radius of a tube for further processing. The radius can be computed (by basic goniometry), which leads to the equation at the bottom of the image. The constants a0,y0,z0 are very hard to measure (it is inside complex machinery), so the measurement accuracy is at best 0.1 mm for distances and 0.1 deg for angles, and even that is questionable.
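The equation itself is in the image; reconstructed from the error term used in the calibration code further below, the per-measurement relation appears to be

r0 = cos(a(t)+a0)*(y(t)-y0) - sin(a(t)+a0)*z0

where a(t)+a0 is the absolute arm angle and a0,y0,z0 are the green mechanical constants (this is my reconstruction from the code, not the original figure).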
[calibration]
So I decided to try to compute these parameters from a set of measurements done by the machine itself (auto-calibration). I have a calibration tube with known radius r0. All green parameters can be handled as constants. Now I position the tube along the y axis to cover as many arm angles as I can; sadly the range is only about 20 degrees (for the current machine setup). I record the measured a(t) for each preset y(t) as an n-point dataset. This gives me a system of n transcendental equations. From this I try/guess "all" possibilities of a0,y0,z0, remembering the best solution (the one closest to r0).
[approximation of a0,y0,z0]
The approximation is based on this class of mine:
//---------------------------------------------------------------------------
class approx
{
public:
    double a,aa;        // current argument, best argument found so far
    double a0,a1,da;    // current search range <a0,a1> and step
    double *e,e0;       // pointer to externally computed deviation, best deviation so far
    int i,n;            // current recursion, number of recursions
    bool done,stop;     // search finished, current range finished
    approx()            { a=0.0; aa=0.0; a0=0.0; a1=1.0; da=0.1; e=NULL; e0=-1.0; i=0; n=5; done=true; }
    approx(approx& a)   { *this=a; }
    ~approx()           {}
    approx* operator = (const approx *a) { *this=*a; return this; }
    //approx* operator = (const approx &a) { ...copy... return this; }
    void init(double _a0,double _a1,double _da,int _n,double *_e)
        {
        if (_a0<=_a1) { a0=_a0; a1=_a1; }               // make sure a0<=a1
        else          { a0=_a1; a1=_a0; }
        da=fabs(_da);
        n =_n ;
        e =_e ;
        e0=-1.0;                                        // no deviation stored yet
        i=0; a=a0; aa=a0;
        done=false; stop=false;
        }
    void step()
        {
        if ((e0<0.0)||(e0>*e)) { e0=*e; aa=a; }         // better solution
        if (stop)                                       // range finished -> increase accuracy
            {
            i++; if (i>=n) { done=true; a=aa; return; } // final solution
            a0=aa-fabs(da);                             // new range centered on best solution
            a1=aa+fabs(da);
            a=a0; da*=0.1;                              // 10x finer step
            a0+=da; a1-=da;
            stop=false;
            }
        else{
            a+=da; if (a>a1) { a=a1; stop=true; }       // next point
            }
        }
};
//---------------------------------------------------------------------------
It searches the full range of a single variable with some initial step and finds the minimum-deviation point. After that it narrows the range and step to the close neighborhood of this point and recursively increases accuracy.
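Here is a minimal usage sketch of the class (my own illustrative example, not part of the machine code): find x in <0,3> minimizing |x*x-2|, which should converge to sqrt(2) ~ 1.41421:

//---------------------------------------------------------------------------
approx ax; double e;
for (ax.init(0.0,3.0,0.1,6,&e);!ax.done;ax.step())  // range <0,3>, initial step 0.1, 6 recursions
 e=fabs((ax.a*ax.a)-2.0);                           // e must be filled before the next step() call
// here ax.a ~ 1.41421 (best found argument) and ax.e0 holds its deviation
//---------------------------------------------------------------------------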
The solution itself looks like this:
// (global) input data
#define _irc_calib_n  100                 // max number of measured points
#define _irc_approx_n 5                   // number of approximation recursions
int    irc_calib_ix;                      // number of measured points
double irc_calib_y[_irc_calib_n];         // y(t) [mm]
double irc_calib_a[_irc_calib_n];         // a(t) [rad]
double irc_calib_r;                       // calibration tube radius + arm radius [mm]
// approximation (needs <math.h> for M_PI,sin,cos,fabs)
int ix=0;
double e,a,deg=M_PI/180.0;
const double pi=M_PI,pi2=2.0*M_PI;
approx aa,ay,az;
//            min        max       step     recursions    ErrorOfSolutionVariable
for (aa.init(-90.0*deg,+90.0*deg,10.0*deg,_irc_approx_n,&e);!aa.done;aa.step())
 for (ay.init(  0.0    ,200.0    ,10.0    ,_irc_approx_n,&e);!ay.done;ay.step())
  for (az.init( 50.0    ,400.0    ,10.0    ,_irc_approx_n,&e);!az.done;az.step())
   {
   for (e=0.0,ix=0;ix<_irc_calib_n;ix++)            // test all measured points (e is cumulative error)
    {
    a=irc_calib_a[ix]+aa.a;                         // absolute arm angle for this a0 candidate
    if (a> pi) a-=pi2;                              // normalize to (-pi,+pi)
    if (a<-pi) a+=pi2;
    if (fabs(a)>0.5*pi) { e=100.0; break; }         // ignore too far angles (penalize this candidate)
    e+=fabs(+(cos(a)*(irc_calib_y[ix]-ay.a))        // |computed radius - calibration radius|
            -(sin(a)*(az.a))
            -(irc_calib_r));
    }
   }
// here aa.a,ay.a,az.a hold the resulting a0,y0,z0
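A rough cost estimate of this nested search (my own back-of-the-envelope arithmetic from the ranges and steps above): the first level visits about 19 samples for a0 (180°/10°+1), 21 for y0 and 36 for z0, and each refined level adds roughly 19 more per variable, so with 5 levels that is on the order of 95*97*112, i.e. roughly a million (a0,y0,z0) candidates, each evaluated over up to 100 points, so around 100 million sin/cos residual evaluations. That is consistent with the run times of tens of seconds mentioned in [notes] below.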
This leads to a solution close to the measured values, but inside the simulation the result is still not accurate enough: the error ranges from 0.1 mm to 0.5 mm depending on the number of points and the angle range. If I measure z0 properly and skip its approximation, the precision is boosted significantly, leaving y0 without error (in simulation) and a0 with an error around 0.3 degrees.
Q1: How can I further improve the accuracy of the solution?
I cannot increase the angular range. The number of points is best around 100; more points means better accuracy, but above 150 the result is unstable (for some radii it is completely off) and I have absolutely no clue why. Increasing the recursion count above 6 has not much effect. Could weighting the deviations according to the angular distance from 0 degrees help? Sadly the a(t) range does not necessarily include 0 degrees. The desired accuracy is 0.01 mm for y0,z0 and 0.01 degrees for a0.
Q2: Is there something I have missed, like wrongly nested approximations, some math simplification, or a different approach altogether?
[notes]
The angle must be in the form a(t)+a0 because it is measured by an IRC with SW reset (16000 steps/round); it is reset while in the a0 position. I do not count vibrations and calibration tube eccentricity; they are taken care of already, and my first goal is to make this work in simulation without them. The tube y(t) can be positioned at will and the a(t) measurement can be done at will.
Right now the calibration process scans points along the y axis (moving from a0 down). Computation with 6 recursions takes around 35 seconds (so be patient); 5 recursions take around 22 seconds.
[edit1] here is how the simulation is done
approx aa; double e;
// search the arm angle for which the radius equation holds (arm touching the tube at height y(t))
for (aa.init(-90.0*deg,+90.0*deg,10.0*deg,6,&e);!aa.done;aa.step())
 e=fabs(+(cos(aa.a)*(y(t)-y0))       // |computed radius - calibration radius| at angle aa.a
        -(sin(aa.a)*(z0))
        -(irc_calib_r));
if (aa.a<a0) aa.a=a0;                // clamp to the mechanical endpoint a0
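In the simulation I match the IRC accuracy via the recursion count (see [edit2] below). An alternative sketch that would model the 16000 steps/round resolution from [notes] explicitly, by rounding the found angle before storing it as a(t), could look like this (just my illustration, not what the code above does):

//---------------------------------------------------------------------------
const double irc_step=2.0*M_PI/16000.0;               // IRC resolution: 360/16000 = 0.0225 deg per step [rad]
double a_t=floor(((aa.a-a0)/irc_step)+0.5)*irc_step;  // angle relative to the a0 reset, rounded to whole IRC steps
//---------------------------------------------------------------------------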
[edit2] some values
I just realized that I had only 4 recursions in the simulation code; to match the input IRC accuracy there must be 6 recursions. After changing it (also in the previous edit) here are some results:
           | a0 [deg] | y0 [mm]  | z0 [mm]  |
simulated  |  -7.4510 | 191.2590 | 225.9000 |
z0 known   |  -7.4441 | 191.1433 | 225.9000 |
z0 unknown |  -7.6340 | 191.8074 | 225.4971 |
So with z0 measured the accuracy is almost in the desired range, but with z0 unknown the error is still ~10 times bigger than needed. Increasing the simulation accuracy above 6 recursions has no effect, and it also makes no sense because the real input data will not be more accurate either.
Here are the simulated/measured points for testing with the above simulated settings:
ix a [deg] y [mm]
0 -0.2475 +105.7231
1 -0.4500 +104.9231
2 -0.6525 +104.1231
3 -0.8550 +103.3231
4 -1.0575 +102.5231
5 -1.2600 +101.7231
6 -1.4625 +100.9231
7 -1.6650 +100.1231
8 -1.8675 +99.3231
9 -2.0700 +98.5231
10 -2.2725 +97.7231
11 -2.4750 +96.9231
12 -2.6775 +96.1231
13 -2.8575 +95.3077
14 -3.0600 +94.5154
15 -3.2625 +93.7231
16 -3.4650 +92.9308
17 -3.6675 +92.1385
18 -3.8700 +91.3462
19 -4.0725 +90.5538
20 -4.2750 +89.7615
21 -4.4877 +88.9692
22 -4.6575 +88.1769
23 -4.8825 +87.3615
24 -5.0850 +86.5154
25 -5.2650 +85.7000
26 -5.4675 +84.9077
27 -5.6700 +84.1154
28 -5.8725 +83.3231
29 -6.0750 +82.5308
30 -6.2775 +81.7000
31 -6.5025 +80.8462
32 -6.6825 +80.0462
33 -6.8850 +79.2538
34 -7.0875 +78.4615
35 -7.2900 +77.6538
36 -7.5159 +76.7692
37 -7.6725 +75.9769
38 -7.8750 +75.1846
39 -8.1049 +74.3692
40 -8.2800 +73.5000
41 -8.4825 +72.7077
42 -8.6850 +71.9154
43 -8.9100 +71.0308
44 -9.0900 +70.2231
45 -9.2925 +69.4308
46 -9.5175 +68.5462
47 -9.6975 +67.7462
48 -9.9000 +66.9462
49 -10.1025 +66.0615
50 -10.3148 +65.2692
51 -10.4850 +64.3769
52 -10.6875 +63.5846
53 -10.9125 +62.7462
54 -11.0925 +61.9077
55 -11.2950 +61.0846
56 -11.4975 +60.2231
57 -11.7000 +59.3923
58 -11.9025 +58.5308
59 -12.1288 +57.6692
60 -12.3075 +56.8385
61 -12.5100 +55.9462
62 -12.7125 +55.1538
63 -12.9150 +54.2615
64 -13.1175 +53.4000
65 -13.2975 +52.5769
66 -13.5000 +51.6846
67 -13.7025 +50.7923
68 -13.9050 +50.0000
69 -14.1075 +49.1077
70 -14.3100 +48.2154
71 -14.5350 +47.3615
72 -14.7150 +46.5308
73 -14.9175 +45.6385
74 -15.1200 +44.7462
75 -15.3225 +43.8538
76 -15.5250 +42.9615
77 -15.7490 +42.0692
78 -15.9075 +41.2769
79 -16.1100 +40.3846
80 -16.3125 +39.4923
81 -16.5150 +38.6000
82 -16.7175 +37.7077
83 -16.9200 +36.8154
84 -17.1225 +35.9231
85 -17.3250 +34.9308
86 -17.5275 +34.0385
87 -17.7300 +33.1462
88 -17.9325 +32.2538
89 -18.1350 +31.3615
90 -18.3405 +30.4692
91 -18.5175 +29.4769
92 -18.7200 +28.5846
93 -18.9225 +27.6923
94 -19.1250 +26.8000
95 -19.3275 +25.8077
96 -19.5300 +24.9154
97 -19.7325 +23.9231
98 -19.9350 +23.0308
99 -20.1375 +22.1385
[edit3] progress update
some clarification for @Ben
how it works
The colored equation under the first image gives you the radius r0. It is made from 2 joined 90-degree triangles (basic trigonometry).
red stuff: y(t) is the motor position and is known; a(t) is the IRC state, also known.
green stuff: a0,y0,z0 are mechanical dimensions and are known, but not precisely, so I measure many a(t) for different positions of y(t) with the known calibration tube r0 and compute a0,y0,z0 with higher precision from them.
further accuracy improvement
I actually managed to get it more precise by measuring y1 = y0 + z0*cos(a0) from a special calibration movement, with precision around 0.03 mm or better. It is the height of the intersection between the arm in the a0 position and the tube's y movement axis. It is measured and interpolated from the situation when the arm makes first contact as the tube comes from up to down, but the real position must be recomputed using the tube radius and a0 ... because the contact point is not on this axis ... (unless r0=0.0). This also eliminates one approximation loop from the calibration, because y1,a0,z0 are dependent and can be computed from each other (see the short sketch after the table below). Also, removing double aliasing from the IRC measurement, caused by the discontinuous manner of measuring the a(t),y(t) positions, helped a lot to increase precision and computation stability (on the real machine). I cannot reliably assess accuracy right now because, by analyzing many measured cycles, I found some mechanical problems on the machine, so I am waiting until it is repaired. Anyway, the calibration vs. simulation accuracy for r0=80.03 mm, accounting for both approaches and _irc_calib_n=30, is now:
      computed       simulated      |delta|
a0=   -6.915840 ;   -6.916710    +0.000870 deg
y0= +186.009765 ; +186.012822    +0.003057 mm
y1= +158.342452 ; +158.342187    +0.000264 mm
z0= +228.102470 ; +228.100000    +0.002470 mm
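As mentioned above, y1, a0 and z0 are dependent; a minimal sketch of how the extra y1 measurement removes one approximation loop, using the relation y1 = y0 + z0*cos(a0) exactly as stated (variable names are mine):

//---------------------------------------------------------------------------
double z0=(y1-y0)/cos(a0);   // once y1 is measured, z0 follows from the a0,y0 candidates,
                             // so the az approximation loop is no longer needed
//---------------------------------------------------------------------------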
The bigger the calibration r0, the less accurate the result (due to the more limited a(t) range). This is with all of a0,y0,(y1),z0 computed; nothing is measured directly or known. This is already acceptable, but as I wrote before it needs to be checked on the machine once it is ready. Just to be complete, here is how the simulated measurements look now:
[edit4] see How approximation search works
[answer]
If I understand this correctly, you're trying to infer (but not measure) the radius r0 of the tube from measurements of y and a.
Applying the usual error propagation to your formula for r0, one obtains (an estimate for) the error of the resulting r0. In the limit of small angles (applicable here, since a(t) is limited to about 20 degrees), using the small-angle approximation for the trigonometric functions, this gives roughly
dr0^2 ~= dy^2 + z0^2 (pi*da/180)^2
Thus, in the case of r0 much smaller than z0, the relative error on r0 is always much larger than the relative errors of y and z0*sin(a). This is already clear from your graph: the measured quantities depend only weakly on r0.
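For a rough plug-in of the numbers quoted in the question (assuming dy ~ 0.1 mm, da ~ 0.1 deg and z0 ~ 226 mm):

dr0 ~= sqrt(0.1^2 + (226*0.1*pi/180)^2) ~= sqrt(0.01 + 0.16) ~= 0.4 mm

which is in line with the 0.1 mm to 0.5 mm scatter reported above.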
In other words, this is not a clever way to determine the radius r0. There is not much you can do about this fundamental limitation (except increasing the range of the angle a). Making many measurements (the usual method to beat down noise/errors) presumably won't help, because these measurements aren't independent of each other due to the internal workings of your machine. So the only real help would be more accurate measurements.
To analyse the situation, I recommend making plots/figures of, say, the inferred r0 as a function of y, or of y as a function of a for fixed r0.
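A minimal sketch of such a plot dump (my own illustration; it reuses the residual term from the question's calibration code and prints plain columns for an external plotting tool, the function name dump_r0 is mine):

//---------------------------------------------------------------------------
#include <math.h>
#include <stdio.h>
// print the inferred r0 for each measured point at given a0,y0,z0 estimates;
// plotting r0 against y shows how weakly the data constrains r0
void dump_r0(double a0,double y0,double z0,int n,const double *a,const double *y)
 {
 for (int i=0;i<n;i++)
  {
  double ai=a[i]+a0;                          // absolute arm angle [rad]
  double r0=(cos(ai)*(y[i]-y0))-(sin(ai)*z0); // same term as in the calibration loop
  printf("%f %f\n",y[i],r0);                  // columns: y r0
  }
 }
//---------------------------------------------------------------------------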
Source: https://stackoverflow.com/questions/29166819/increasing-accuracy-of-solution-of-transcendental-equation