Dependent types are often advertised as a way to assert that a program is correct with respect to a specification: you state the specification as a type, and any program inhabiting that type is guaranteed to satisfy it. So, for example, if you are asked to write a sort routine, you can give it a type that says the output is an ordered permutation of the input.
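To make that concrete, here is a minimal sketch in Lean 4 of a type whose index carries part of a specification. `Vec` is a hand-rolled length-indexed list, written here for illustration, not a claim about any particular library:

```lean
-- A list whose length is tracked in its type.
inductive Vec (α : Type) : Nat → Type where
  | nil  : Vec α 0
  | cons : α → Vec α n → Vec α (n + 1)

-- The return type *is* a (partial) specification: append must produce a
-- vector whose length is the sum of the input lengths. The index is
-- written `m + n` so each branch typechecks by definitional equality.
def Vec.append : Vec α n → Vec α m → Vec α (m + n)
  | .nil,       ys => ys
  | .cons x xs, ys => .cons x (xs.append ys)
```

If the definition dropped or duplicated an element, the length in the type would no longer match and the compiler would reject it. Note this only checks the properties stated in the type; "is a permutation" would need a richer index.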
How do you prove that math is correct? That is, how do you prove that integer addition is the correct way to count apples, or that real addition is the correct way to add weights? There is always an interface between the formal/mathematical and the informal/real, and it takes skill and mathematical (or physical) taste to find the right formalism for a given problem. Formal methods won't eliminate that.
The value of formal methods is twofold:
You're not going to know whether your program is correct unless you know what properties it actually satisfies. Before you can decide whether your sort routine is "correct", you first have to know what it actually does. Any procedure for finding that out is a formal method (even unit testing!), so people who reject "formal methods" are really just limiting themselves to a tiny subset of the available methods.
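Even a unit test pins down part of that specification. A minimal sketch in Python of what "correct" means for a sort (the function names here are illustrative):

```python
from collections import Counter

def is_sorted(xs):
    # Ordered: each element is <= its successor.
    return all(a <= b for a, b in zip(xs, xs[1:]))

def satisfies_sort_spec(inp, out):
    # "Correct" for a sort is two properties, not one:
    # the output is ordered, AND it is a permutation of the input.
    return is_sorted(out) and Counter(out) == Counter(inp)

# A routine that merely returns an ordered list is not a sort:
assert satisfies_sort_spec([3, 1, 2], [1, 2, 3])
assert not satisfies_sort_spec([3, 1, 2], [1, 1, 1])  # ordered, but wrong elements
```

Writing this check forces you to articulate the property; forgetting the permutation half is exactly the kind of incomplete specification the point above warns about.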
Even when you know how to find out what a program actually does, people make mistakes in their mathematical reasoning (we are not rational creatures, whatever certain ideologies may claim), so it's helpful to have a machine check our work. This is the same reason we use unit tests --- it's nice to run a desk check and make sure the program does what we want, but having the computer do the check and tell us whether the result is correct helps prevent mistakes. Letting the computer check our proofs about a program's behavior is helpful for exactly the same reason.
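In a proof assistant, that machine check looks like the following (a deliberately tiny Lean 4 example): the tool replays the reasoning, and a flawed proof is rejected at compile time, just as a failing unit test would flag a flawed desk check.

```lean
-- Claim: reversing a list preserves its length.
-- `simp` asks Lean to verify this from known lemmas; if the claim
-- (or the proof) were wrong, elaboration would fail.
example (xs : List Nat) : xs.reverse.length = xs.length := by
  simp
```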