The IEEE 754 standard defines the square root of negative zero as negative zero. This choice is easy enough to rationalize, but other choices, such as defining sqrt(-0.0) as +0.0, seem defensible as well. Why was -0.0 chosen?
The official floating-point standard from 1985 (IEEE Std 754-1985) defined sqrt(-0.0) = -0.0.
The 2008 revision of the same standard added a definition of the pow function. Under this definition, the result of pow(x, y) can carry a negative sign only if y is an odd integer. Hence pow(-0.0, 3.0) = -0.0, while pow(-0.0, 0.5) = +0.0. By 2008 it was too late to change the definition of sqrt(-0.0), and therefore we have the unfortunate situation that the two functions give different results for what is mathematically the same operation.
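A minimal C sketch to observe the discrepancy on an IEEE 754 machine (all functions here are standard C99 from math.h; compile with -lm, and the exact text printed for a negative zero may vary by C library):

```c
#include <math.h>
#include <stdio.h>

/* signbit() is nonzero exactly when the sign bit is set, even for zeros */
static void show(const char *expr, double x) {
    printf("%-16s = %g (sign bit: %d)\n", expr, x, signbit(x) ? 1 : 0);
}

int main(void) {
    show("sqrt(-0.0)",     sqrt(-0.0));      /* -0: IEEE 754-1985 */
    show("pow(-0.0, 3.0)", pow(-0.0, 3.0));  /* -0: exponent is an odd integer */
    show("pow(-0.0, 0.5)", pow(-0.0, 0.5));  /* +0: exponent is not an odd integer */
    return 0;
}
```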
The sign of zero generally doesn't matter, since +0.0 and -0.0 compare as equal. But it matters when you divide by it: 1/sqrt(-0.0) gives -INF, while pow(-0.0, -0.5) gives +INF.
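A short sketch of this, again using only standard C (division by a signed zero raises the divide-by-zero flag and yields an infinity with the matching sign):

```c
#include <math.h>
#include <stdio.h>

int main(void) {
    printf("-0.0 == 0.0     : %d\n", -0.0 == 0.0);       /* 1: the zeros compare equal */
    printf("1.0/sqrt(-0.0)  : %g\n", 1.0 / sqrt(-0.0));  /* -inf: sign of -0.0 survives */
    printf("pow(-0.0, -0.5) : %g\n", pow(-0.0, -0.5));   /* +inf: per IEEE 754-2008 pow */
    return 0;
}
```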
The decision of 1985 was probably just an observation of the status quo. The Intel 8087 math coprocessor from 1980 had sqrt implemented in hardware, and it gave sqrt(-0.0) = -0.0. Today, all PC processors have sqrt implemented in hardware, so it would be very difficult to change the standard. The problem is not important enough to be worth having two different sqrt functions that differ only for negative zero. I don't know anything about the history prior to 1980. If anybody can trace the history further back, please post a comment here.
The only mathematically reasonable result is 0. The question is then whether it should be +0 or -0. For most computations it makes no difference at all, but there are some specific complex expressions for which the result makes more sense under the -0 convention. The exact details are outside the scope of this site, but that's the gist of it.
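One sketch of the kind of complex expression in question, assuming a C11 compiler with Annex G complex support (CMPLX is the C11 macro for building a complex value without losing the zero's sign): the sign of a zero imaginary part selects which side of csqrt's branch cut you are on, so preserving the sign bit through intermediate results changes the answer.

```c
#include <complex.h>
#include <stdio.h>

int main(void) {
    /* The branch cut of csqrt runs along the negative real axis;
       the sign of the (zero) imaginary part picks the side of the cut. */
    double complex above = csqrt(CMPLX(-1.0, +0.0)); /* approach from above: 0 + 1i */
    double complex below = csqrt(CMPLX(-1.0, -0.0)); /* approach from below: 0 - 1i */
    printf("csqrt(-1 + 0i) = %g %+gi\n", creal(above), cimag(above));
    printf("csqrt(-1 - 0i) = %g %+gi\n", creal(below), cimag(below));
    return 0;
}
```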
I may explain some more when I'm not on vacation, if someone else doesn't beat me to it.