I did some testing with floating-point calculations to minimize precision loss, and I stumbled across a phenomenon I want to show here and hopefully get an explanation for.
SQL Server uses the smallest possible data type for a decimal constant.
When you run this script:
SELECT SQL_VARIANT_PROPERTY(1.0, 'BaseType')   -- numeric
SELECT SQL_VARIANT_PROPERTY(1.0, 'Precision')  -- 2
SELECT SQL_VARIANT_PROPERTY(1.0, 'Scale')      -- 1
SELECT SQL_VARIANT_PROPERTY(1.0, 'TotalBytes') -- storage size in bytes
you'll see that SQL Server implicitly uses a NUMERIC(2, 1) data type.
The division by 60.0 (itself a NUMERIC(3, 1) constant) converts the result to NUMERIC(8, 6). For a division e1 / e2 with types (p1, s1) and (p2, s2), SQL Server derives the result precision as p1 - s1 + s2 + max(6, s1 + p2 + 1) and the result scale as max(6, s1 + p2 + 1); here that gives precision = 2 - 1 + 1 + max(6, 1 + 3 + 1) = 8 and scale = max(6, 5) = 6.
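You can verify that derivation the same way (assuming the dividend is the 1.0 literal from above; the original question's exact expression isn't shown here):
SELECT SQL_VARIANT_PROPERTY(1.0 / 60.0, 'BaseType')  -- numeric
SELECT SQL_VARIANT_PROPERTY(1.0 / 60.0, 'Precision') -- 8
SELECT SQL_VARIANT_PROPERTY(1.0 / 60.0, 'Scale')     -- 6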
The final calculation converts the result to NUMERIC(17, 10).
Edit
Taken from SQL Server Books Online, Data Type Conversion:
In Transact-SQL statements, a constant with a decimal point is automatically converted into a numeric data value, using the minimum precision and scale necessary. For example, the constant 12.345 is converted into a numeric value with a precision of 5 and a scale of 3.
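You can confirm that example with the same check:
SELECT SQL_VARIANT_PROPERTY(12.345, 'BaseType')  -- numeric
SELECT SQL_VARIANT_PROPERTY(12.345, 'Precision') -- 5
SELECT SQL_VARIANT_PROPERTY(12.345, 'Scale')     -- 3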