According to Wikipedia:
Computer scientists consider a language "type-safe" if it does not allow operations or conversions that violate the rules of the type system.
in general, large-scale / complex systems need type checking, first at compile time (static) and then at run time (dynamic). this is not academia, but rather a simple, common-sense rule of thumb, like "the compiler is your friend". beyond the runtime performance implications, there are other major implications, as follows:
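a tiny illustration of the static vs dynamic distinction, sketched in Python (the `add` function is hypothetical): with type annotations, a static checker such as mypy flags the bad call before the program ever runs; without static checking, the same mistake only surfaces as a TypeError at run time, possibly in production.

```python
def add(a: int, b: int) -> int:
    # static view: a checker like mypy or pyright rejects add(1, "2")
    # before execution, because "2" is not an int
    return a + b

# dynamic view: the interpreter only complains when this line actually runs
try:
    add(1, "2")  # type: ignore[arg-type]
except TypeError as e:
    print("caught at run time:", e)
```

note that plain CPython ignores the annotations entirely; the static guarantee only exists if a checker is actually part of the build.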
the 3 axes of scalability are:
the only way to do safe refactoring is to have everything fully tested (use test-driven development, or at least unit testing with decent coverage testing as well; this is not qa, this is development/r&d). whatever is not covered will break, and systems like that are garbage rather than engineering artifacts.
now let's say we have a simple function, sum, returning the sum of two numbers. one can imagine writing unit tests for this function, based on the fact that both parameter types and the return type are known. we are not talking about function templates, which boil down to the trivial example. now try to write a simple unit test for the same function sum where both parameters and the return type can literally be of any kind: integers, floats, strings and/or any other user-defined type with the plus operator overloaded/implemented. how do you write such a simple test case?!? how complex does the test case need to be in order to cover every possible scenario?
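to make the point concrete, here is a minimal sketch in Python (the names `sum_`, `Money` and the test cases are hypothetical): with no type constraints on the parameters, the test suite has to enumerate type combinations one by one, and the enumeration is never complete.

```python
import unittest

def sum_(a, b):
    # untyped: a and b can be anything that happens to support "+"
    return a + b

class Money:
    # a hypothetical user-defined type overloading "+"
    def __init__(self, cents):
        self.cents = cents
    def __add__(self, other):
        return Money(self.cents + other.cents)
    def __eq__(self, other):
        return isinstance(other, Money) and self.cents == other.cents

class TestSum(unittest.TestCase):
    # every supported type combination needs its own case...
    def test_ints(self):
        self.assertEqual(sum_(2, 3), 5)
    def test_floats(self):
        self.assertAlmostEqual(sum_(0.1, 0.2), 0.3)
    def test_strings(self):
        self.assertEqual(sum_("foo", "bar"), "foobar")
    def test_user_defined(self):
        self.assertEqual(sum_(Money(100), Money(50)), Money(150))
    # ...and the mixed cases are still open questions: sum_(1, "x")
    # raises TypeError, sum_(1, 2.0) silently promotes to float, and
    # any new type defining __add__ elsewhere in the codebase never
    # shows up in this file at all.
```

run with `python -m unittest` to execute the suite. with static types, one signature would have bounded this whole space; without them, "full coverage" of sum_ is open-ended by construction.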
complexity means cost. without proper unit testing and test coverage there is no safe way to do any refactoring, so the product is maintenance garbage, if not immediately visible then clearly in the long term, because performing any refactoring blind would be like driving a car without a driver's license, drunk as a skunk and, of course, without insurance.
go figure! :-)