Dependent types are often advertised as a way to assert that a program is correct with respect to a specification. So, for example, you are asked to write code that sorts a list: you write down a specification saying that the output must be sorted, and the type checker verifies that your implementation satisfies it. But how do you know that the specification itself says what you actually meant?
This is a problem with any specification language (even English), not just dependent types. Your own post is a good example: it contains an informal specification of a "sort function" that only requires the result to be sorted, which is not what you want (\xs -> [] would qualify). See e.g. this post from Twan van Laarhoven's blog.
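To make the \xs -> [] point concrete, here is a minimal sketch in plain Haskell (property-style rather than dependently typed; isSorted, badSort, isPermutationOf and sortSpec are names invented for illustration). The "output is sorted" clause alone is satisfied by the function that throws its input away, so a usable specification also has to demand that the output is a permutation of the input:

    import Data.List (delete)

    -- The under-specification: "the result is sorted".
    isSorted :: Ord a => [a] -> Bool
    isSorted xs = and (zipWith (<=) xs (drop 1 xs))

    -- The degenerate \xs -> [] satisfies it for every input...
    badSort :: [a] -> [a]
    badSort _ = []

    -- ...so the specification must also relate the output to the input.
    isPermutationOf :: Eq a => [a] -> [a] -> Bool
    isPermutationOf []     [] = True
    isPermutationOf (x:xs) ys = x `elem` ys && isPermutationOf xs (delete x ys)
    isPermutationOf _      _  = False

    -- Full correctness claim for a candidate sort function f.
    sortSpec :: Ord a => ([a] -> [a]) -> [a] -> Bool
    sortSpec f xs = isSorted (f xs) && (f xs `isPermutationOf` xs)

In a dependently typed language the same two clauses would sit in the sort function's type, so the type checker rather than a test runner would enforce them; the risk of writing down only the first clause is exactly the same.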
Suppose your function is not a top-level one, but is used by somebody else as part of some module which also has a correctness proof. That proof must use the correctness proof of your function, and if yours is wrong, the module will not compile. The module itself can still contain mistakes, but those are no longer your problem.
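A rough sketch of this "downstream code demands the evidence" idea, again in plain Haskell since a faithful version needs real dependent types. Everything here (SortEvidence, certifySorted, dedupeSorted) is an invented analogue: the evidence is produced by a runtime check, whereas with dependent types it would be a proof the compiler verifies; the point is only that client code is written against the evidence, not the bare function.

    -- Abstract evidence type; in a real module its constructor would not
    -- be exported, so certifySorted is the only way to obtain a value.
    newtype SortEvidence = SortEvidence ([Int] -> [Int])

    -- Stand-in for discharging the proof obligation (a runtime spot check
    -- here; a compile-time proof in a dependently typed language).
    certifySorted :: ([Int] -> [Int]) -> Maybe SortEvidence
    certifySorted f
      | all sortedAfter [[], [3,1,2], [2,2,1]] = Just (SortEvidence f)
      | otherwise                              = Nothing
      where
        sortedAfter xs = let ys = f xs in and (zipWith (<=) ys (drop 1 ys))

    -- Downstream code can only be called with the evidence in hand; if you
    -- cannot produce a SortEvidence, this client cannot be used at all.
    dedupeSorted :: SortEvidence -> [Int] -> [Int]
    dedupeSorted (SortEvidence f) xs = squash (f xs)
      where
        squash (a:b:rest) | a == b    = squash (b:rest)
                          | otherwise = a : squash (b:rest)
        squash short                  = short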
How do you prove that math is correct? That is, how do you prove that integer addition is the correct way to count apples, or how do you prove that real addition is the correct way to add weights? There is always an interface between the formal / mathematical and the informal / real. It requires skill and mathematical / physical taste to find the appropriate formalism for solving a particular problem. Formal methods won't eliminate that.
The value of formal methods is twofold:
You're not going to know if your program is correct, unless you know what properties it actually satisfies. Before you know if your sort routine is "correct", you first have to know what it actually does. Any procedure for finding that out is going to be a formal method (even unit testing!), so people who reject "formal methods" are really just limiting themselves to a tiny subset of the available methods.
Even when you know how to find out what a program actually does, people make mistakes in their mathematical reasoning (we are not rational creatures, whatever certain ideologies may claim); so it's helpful to have a machine check us. This is the same reason we use unit tests --- it's nice to run a desk check and make sure the program does what we want; but having the computer do the check and tell us whether the result is correct helps prevent mistakes. Letting the computer check our proofs about the behavior of the program is helpful for exactly the same reason.
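As a small illustration of both points, here is what "even a unit test is a formal method" and "let the machine do the checking" look like in Haskell. The property uses the QuickCheck library (an assumption about the reader's toolchain); unitExample and prop_sortedOutput are invented names:

    import Data.List (sort)
    import Test.QuickCheck (quickCheck)

    -- A single unit test: one formal, machine-checkable claim about sort.
    unitExample :: Bool
    unitExample = sort [3, 1, 2] == [1, 2, 3 :: Int]

    -- A property: a formal claim over every input QuickCheck generates.
    prop_sortedOutput :: [Int] -> Bool
    prop_sortedOutput xs = and (zipWith (<=) ys (drop 1 ys))
      where ys = sort xs

    main :: IO ()
    main = do
      print unitExample          -- the machine, not a desk check, says True
      quickCheck prop_sortedOutput

Neither claim is the whole truth about sort, but each one is checked mechanically, which is precisely the value being described: the computer catches the slips our own reasoning lets through.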
This is the static, type-system version of the question: how do you tell that your tests are correct?
The only answer I can honestly give is: yes, the more complex and unwieldy your specification, the more likely you are to have made a mistake. You can slip up writing something in a type-theoretic formalism just as easily as you can when formalizing the description of your program as an executable function.
The hope is that your specification is simple and small enough to judge by examination, while your implementation of that might be far larger. It helps that, once you have some "seed" ideas formalized, you can show that the ideas derived from these are correct. From that point of view, the more readily you can mechanically and provably derive parts of your specification from simpler parts, and ultimately derive your implementation from your specification, the more likely you are to get a correct implementation.
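A small sketch of that "small spec, larger implementation" shape in Haskell (specSort, implSort and prop_agreesWithSpec are invented names; here the agreement is stated as a testable property, whereas in a dependently typed setting it would be proved once and for all):

    -- Specification: insertion sort, small enough to judge correct by eye.
    specSort :: Ord a => [a] -> [a]
    specSort = foldr insert []
      where
        insert x []                 = [x]
        insert x (y:ys) | x <= y    = x : y : ys
                        | otherwise = y : insert x ys

    -- Implementation: merge sort, faster but harder to eyeball.
    implSort :: Ord a => [a] -> [a]
    implSort []  = []
    implSort [x] = [x]
    implSort xs  = merge (implSort front) (implSort back)
      where
        (front, back) = splitAt (length xs `div` 2) xs
        merge [] bs = bs
        merge as [] = as
        merge (a:as) (b:bs)
          | a <= b    = a : merge as (b:bs)
          | otherwise = b : merge (a:as) bs

    -- The claim to prove (or, failing that, to test): the big
    -- implementation agrees with the small, inspectable specification.
    prop_agreesWithSpec :: [Int] -> Bool
    prop_agreesWithSpec xs = implSort xs == specSort xs

Whether the agreement is established by proof or by tests, the judgement call is confined to the few lines of specSort.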
But it can be unclear how to formalize something, which has the effect that either you might make a mistake in translating your ideas into the formalism – you might think you proved one thing, when actually you proved another – or you might find yourself doing type theory research in order to formalize an idea.
Coming late to the party, but AFAICT no one has yet mentioned another important aspect: in the context of program verification, having a bug in the spec is not always too terrible, because you can use the code to check the spec.
IOW, the proof doesn't say "the code is right", but "the code and the spec are mutually consistent". So, in order for a bug in the spec to go unnoticed, it has to be one of:
As someone else pointed out: the problem is the same for tests.
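To illustrate the "use the code to check the spec" direction with a deliberately broken spec (a hypothetical slip, with invented names; assumes the QuickCheck library): if the spec accidentally says "non-increasing" instead of "non-decreasing", checking it against a correct sort exposes the mistake immediately.

    import Data.List (sort)
    import Test.QuickCheck (quickCheck)

    -- A spec written backwards by mistake: "the output is non-increasing".
    buggySpec :: [Int] -> Bool
    buggySpec xs = and (zipWith (>=) ys (drop 1 ys))
      where ys = sort xs   -- the (correct) code being checked against it

    main :: IO ()
    main = quickCheck buggySpec   -- fails on e.g. [1,2]: the code has
                                  -- flagged a bug in the spec

Mutual consistency cuts both ways: a wrong spec and a right program disagree just as loudly as a right spec and a wrong program.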
I think it's the other way around: a well-typed program can't prove nonsense (assuming the system is consistent), while specifications can be inconsistent or just silly. So it's not "how to make sure this piece of code reflects my platonic ideas?", but rather "how to make sure my ideas meaningfully project onto a well-founded plane of pure syntactic rules?". How to make sure the bird you see is a mockingbird [for some supplied notion of mockingbirdness]? Well, study birds and raise your chances of being right. But as always with humans, you can't be 100% sure.
Type theory is a way to mitigate the imperfection of the human mind by introducing formal rules, machine-checked proofs (there is a very relevant paper on this) and other machinery that lets us focus and thus simplify problems a lot (as Brouwer said: "Mathematics is nothing more, nothing less, than the exact part of our thinking"), but you can't expect any tool to make your thoughts "right", because there is just no uniform notion of rightness. IOW, there is no way to formally connect the informal and the formal: being informal is like being inside the IO monad; there is no escape.
So it's not "does this syntax reflects my very precise semantics?", but rather "can I attach my raw semantics to this strongly structured syntax?". Programs are proper material objects, while ideas are cumbersome approximations, that can become proper material objects only by convention. So we form some basis using conventions, and then we just trust it, because it's much more sensible to trust to a small subset of all your numerous ideas than to all of them.