Consider the following minimal code:

#include <boost/type_traits.hpp>

template<typename ptr_t>
struct TData
{
    typedef typename boost::remove_extent<ptr_t>::type value_type;
    ptr_t data;

    value_type & operator [] ( size_t id ) { return data[id]; }
    operator ptr_t & () { return data; }
};

int main( int argc, char ** argv )
{
    TData<float[100][100]> t;
    t[1][1] = 5;
    return 0;
}

My compiler (GCC) rejects the assignment t[1][1] = 5; with "ISO C++ says that these are ambiguous, even though the worst conversion for the first is better than the worst conversion for the second", while other compilers accept the code. Why is the call ambiguous, and why does the error go away when I change the signature of operator[] (or cast the indices)?
Overload resolution is a headache. But since you stumbled on a fix (eliminating the conversion of the index operand to operator[]) which is too specific to the example (literals are of type int, but most variables you'll be using aren't), maybe you can generalize it:
template< typename IT>
typename boost::enable_if< typename boost::is_integral< IT >::type, value_type & >::type
operator [] ( IT id ) { return data[id]; }
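For reference, here is how that generalized operator would slot into the struct from the question (just a sketch, assuming the usual Boost headers; I haven't pushed it through every compiler mentioned here):

#include <boost/type_traits.hpp>
#include <boost/utility/enable_if.hpp>

template<typename ptr_t>
struct TData
{
    typedef typename boost::remove_extent<ptr_t>::type value_type;
    ptr_t data;

    // Accept any integral index type; IT deduces to the argument's own type,
    // so the index needs no conversion and this overload should win outright.
    template<typename IT>
    typename boost::enable_if< boost::is_integral<IT>, value_type & >::type
    operator [] ( IT id ) { return data[id]; }

    operator ptr_t & () { return data; }
};

int main()
{
    TData<float[100][100]> t;
    t[1][1] = 5;   // IT = int: exact match on the index
    return 0;
}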
Unfortunately I can't test this, because GCC 4.2.1 and 4.5 accept your example without complaint under -pedantic, which really raises the question of whether it's a compiler bug or not.
Also, once I eliminated the Boost dependency, it passed Comeau.
It seems to me that with

t[1][1] = 5;

the compiler has to choose between

value_type & operator [] ( size_t id ) { return data[id]; }

which would match if the int literal were converted to size_t, or

operator ptr_t & () { return data; }

followed by normal array indexing, in which case the type of the index matches exactly.
As to the error, it seems that GCC, as a compiler extension, would like to choose the first overload for you, and you are compiling with the -pedantic and/or -Werror flag, which forces it to stick to the word of the standard.
(I'm not in a -pedantic mood, so no quotes from the standard, especially on this topic.)
I don't know what the exact answer is, but...

Because of this operator:

operator ptr_t & () { return data; }

there is already a built-in [] operator (array subscripting) that can be applied to the converted operand. So we have two [] operators, the built-in one and the one you defined; both accept the index you pass, which is probably why the call is considered ambiguous.
EDIT: this should work as you intended:
template<typename ptr_t>
struct TData
{
    ptr_t data;

    operator ptr_t & () { return data; }
};
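A quick check of the stripped-down version (a sketch reusing the struct just above):

int main()
{
    TData<float[100][100]> t;
    // With no member operator[], the only route for t[1] is the conversion
    // to float (&)[100][100] followed by the built-in subscript operator,
    // so there is nothing left for overload resolution to be ambiguous about.
    t[1][1] = 5;
    return 0;
}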
I have tried to show the two candidates for the expression t[1][1] below. These are both of equal rank (CONVERSION). Hence the ambiguity.
I think the catch here is that the built-in [] operator, as per 13.6/13, is defined as

T& operator[](T*, ptrdiff_t);

On my system ptrdiff_t is defined as 'int' (does that explain the x64 behavior?).
template<typename ptr_t>
struct TData
{
    typedef typename boost::remove_extent<ptr_t>::type value_type;
    ptr_t data;

    value_type & operator [] ( size_t id ) { return data[id]; }
    operator ptr_t & () { return data; }
};

typedef float (&ATYPE) [100][100];

int main( int argc, char ** argv )
{
    TData<float[100][100]> t;

    t[size_t(1)][size_t(1)] = 5; // note the cast: this works now, no ambiguity, as operator[] is preferred over the built-in operator
    t[1][1] = 5;                 // error, as per the logic given below for Candidate 1 and Candidate 2

    // Candidate 1 (CONVERSION rank)
    // User defined conversion from 'TData' to float array
    (t.operator[](1))[1] = 5;

    // Candidate 2 (CONVERSION rank)
    // User defined conversion from 'TData' to ATYPE
    (t.operator ATYPE())[1][1] = 6;

    return 0;
}
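Regarding the "on my system ptrdiff_t is defined as 'int'" remark, here is a trivial sketch (not part of the answer's code) to check what ptrdiff_t and size_t actually are on a given platform, e.g. on x64:

#include <cstddef>
#include <iostream>

int main()
{
    std::cout << "sizeof(int)            = " << sizeof(int) << '\n'
              << "sizeof(std::ptrdiff_t) = " << sizeof(std::ptrdiff_t) << '\n'
              << "sizeof(std::size_t)    = " << sizeof(std::size_t) << '\n';
    return 0;
}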
EDIT:
Here is what I think:
For Candidate 1 (operator []), the conversion sequence S1 is: user defined conversion -> standard conversion (int to size_t).
For Candidate 2, the conversion sequence S2 is: user defined conversion -> int to ptrdiff_t (for the first argument) -> int to ptrdiff_t (for the second argument).
The conversion sequence S1 is a subset of S2 and is supposed to be better. But here is the catch...
The quotes below from the Standard should help.
§13.3.3.2/3 states: "Standard conversion sequence S1 is a better conversion sequence than standard conversion sequence S2 if — S1 is a proper subsequence of S2 (comparing the conversion sequences in the canonical form defined by 13.3.3.1.1, excluding any Lvalue Transformation; the identity conversion sequence is considered to be a subsequence of any non-identity conversion sequence) or, if not that..."
§13.3.3.2 also states: "User-defined conversion sequence U1 is a better conversion sequence than another user-defined conversion sequence U2 if they contain the same user-defined conversion function or constructor and if the second standard conversion sequence of U1 is better than the second standard conversion sequence of U2."
Here the first part of the condition, "if they contain the same user-defined conversion function or constructor", does not hold. So even if the second part, "if the second standard conversion sequence of U1 is better than the second standard conversion sequence of U2", were to hold, neither S1 nor S2 is preferred over the other.
That's why GCC gives its puzzling error message: "ISO C++ says that these are ambiguous, even though the worst conversion for the first is better than the worst conversion for the second".
This explains the ambiguity quite well, IMHO.
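To see what the "same user-defined conversion function" clause buys you, contrast this with a case where both candidates do go through the same conversion function (a small sketch; the names are mine):

struct A { operator int() const { return 0; } };

void g(int)  {}
void g(long) {}

int main()
{
    A a;
    g(a);   // not ambiguous: both candidates use A::operator int, so their second
            // standard conversion sequences are compared, and the identity
            // conversion (to int) beats the int -> long conversion
    return 0;
}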
It's actually quite straightforward. For t[1], overload resolution has these candidates:
Candidate 1 (builtin: 13.6/13) (T being some arbitrary object type):
(T*, ptrdiff_t)
Candidate 2 (your operator)
(TData<float[100][100]>&, something unsigned)
The argument list is given by 13.3.1.2/6:
The set of candidate functions for overload resolution is the union of the member candidates, the non-member candidates, and the built-in candidates. The argument list contains all of the operands of the operator.
(TData<float[100][100]>, int)
You see that the first argument matches the first parameter of Candidate 2 exactly. But it needs a user defined conversion for the first parameter of Candidate 1. So for the first parameter, the second candidate wins.
You also see that the outcome of the second position depends. Let's make some assumptions and see what we get:
- ptrdiff_t is int: The first candidate wins, because it has an exact match, while the second candidate requires an integral conversion.
- ptrdiff_t is long: Neither candidate wins, because both require an integral conversion.

Now, 13.3.3/1 says:
Let ICSi(F) denote the implicit conversion sequence that converts the i-th argument in the list to the type of the i-th parameter of viable function F.
A viable function F1 is defined to be a better function than another viable function F2 if for all arguments i, ICSi(F1) is not a worse conversion sequence than ICSi(F2), and then ... for some argument j, ICSj(F1) is a better conversion sequence than ICSj(F2), or, if not that ...
For our first assumption, we don't get an overall winner, because Candidate 2 wins for the first parameter, and Candidate 1 wins for the second parameter. I call it the criss-cross. For our second assumption, the Candidate 2 wins overall, because neither parameter had a worse conversion, but the first parameter had a better conversion.
For the first assumption, it does not matter that the integral conversion (int to unsigned) in the second parameter is less of an evil than the user defined conversion of the other candidate in the first parameter. In the criss-cross, rules are crude.
That last point might still confuse you, because of all the fuss around it, so let's make an example:
void f(int, int) { }
void f(long, char) { }
int main() { f(0, 'a'); }
This gives you the same confusing GCC warning (which, I remember, was actually confusing the hell out of me when I first received it some years ago), because 0 converts to long worse than 'a' converts to int, yet you get an ambiguity, because you are in a criss-cross situation.
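And, much as the size_t cast in the TData example does, you can break the criss-cross by making one candidate at least as good in every position (a sketch with the same two overloads):

void f(int, int) { }
void f(long, char) { }

int main()
{
    f(0L, 'a');                   // calls f(long, char): (exact, exact) beats (conversion, promotion)
    f(0, static_cast<int>('a'));  // calls f(int, int):   (exact, exact) beats (conversion, conversion)
    return 0;
}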
With the expression:

t[1][1] = 5;

the compiler must focus on the left-hand side to determine what goes there, so the = 5 is ignored until the left-hand side is resolved. That leaves us with the expression t[1][1], which represents two operations, the second operating on the result of the first, so the compiler only needs to consider the first part of the expression: t[1]. The actual types involved are (TData&)[(int)].
The call does not exactly match any function, as operator[] for TData is defined as taking a size_t argument, so to be able to use it the compiler would have to convert 1 from int to size_t with an implicit conversion. That is the first choice. Now, another possible path is applying the user defined conversion to convert TData<float[100][100]> into float[100][100].
The int to size_t conversion is an integral conversion and is ranked as Conversion in Table 9 of the standard, as is the user defined conversion from TData<float[100][100]> to float[100][100] according to §13.3.3.1.2/4. The conversion from float [100][100]& to float (*)[100] is ranked as Exact Match in Table 9. The compiler is not allowed to choose between those two conversion sequences.
Q1: Not all compilers adhere to the standard in the same way. It is quite common to find that in some specific cases a compiler behaves differently from the others. In this case, the g++ implementors decided to whine about the standard not allowing the compiler to choose, while the Intel implementors probably just silently applied their preferred conversion.
Q2: When you change the signature of the user defined operator[], the argument exactly matches the passed-in type. t[1] is a perfect match for t.operator[](1) with no conversions whatsoever, so the compiler must follow that path.
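In other words, something like this (a sketch of that change as I understand it; I'm assuming the new signature simply takes the index as int, so adjust to whatever you actually used):

#include <boost/type_traits.hpp>

template<typename ptr_t>
struct TData
{
    typedef typename boost::remove_extent<ptr_t>::type value_type;
    ptr_t data;

    // Index taken as int: for t[1] this candidate is (exact, exact), while the
    // built-in candidate still needs the user defined conversion on t, so the
    // member wins in every position and the call is no longer ambiguous.
    value_type & operator [] ( int id ) { return data[id]; }
    operator ptr_t & () { return data; }
};

int main()
{
    TData<float[100][100]> t;
    t[1][1] = 5;
    return 0;
}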