floating-point

Custom locale configuration for float conversion

99封情书 submitted on 2021-02-07 05:24:17
Question: I need to convert a string in the format "1.234.345,00" to the float value 1234345.00. One way is to use repeated str.replace:

x = "1.234.345,00"
res = float(x.replace('.', '').replace(',', '.'))
print(res, type(res))
1234345.0 <class 'float'>

However, this feels manual and does not generalise. This heavily upvoted answer suggests using the locale library, but my default locale doesn't have the same conventions as my input string. I then discovered a way to extract the characters used in
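A minimal sketch of the locale-based approach the question alludes to, assuming a locale whose conventions match the input string ('.' as thousands separator, ',' as decimal separator); the locale name de_DE.UTF-8 is an assumption and must be installed on the system:

import locale

# Assumed locale name; pick one whose numeric conventions match the input string.
locale.setlocale(locale.LC_NUMERIC, 'de_DE.UTF-8')

x = "1.234.345,00"
res = locale.atof(x)   # parses according to the active LC_NUMERIC conventions
print(res, type(res))  # 1234345.0 <class 'float'>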

How to test float results with doctest?

大兔子大兔子 submitted on 2021-02-07 05:01:08
Question: I'm developing a program that performs some floating-point calculations. Is there any way to test my functions (which return floats) with doctests?

Answer 1: Sure, just format the floats with a reasonable format, based on what precision you expect them to exhibit -- e.g., if you expect accuracy to 2 digits after the decimal point, you could use:

'''
Rest of your docstring and then...
>>> '%.2f' % funcreturningfloat()
'123.45'
'''

Answer 2: The documentation has a suggestion Floating
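A self-contained sketch of the formatting approach from Answer 1; the function and the expected value are made up for illustration:

import doctest

def one_third(x):
    """Return x divided by 3.

    Formatting the result keeps the doctest stable despite extra float digits:

    >>> '%.2f' % one_third(1.0)
    '0.33'
    """
    return x / 3

if __name__ == '__main__':
    doctest.testmod()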

What is the arithmetic mean of an empty sequence?

梦想与她 submitted on 2021-02-07 04:41:53
Question: Disclaimer: no, I didn't find any obvious answer, contrary to what I expected! When looking for code examples for the arithmetic mean, the first several examples I can turn up via Google seem to be defined such that the empty sequence yields a mean of 0.0 (e.g. here and here...). Looking at Wikipedia, however, the arithmetic mean is defined as A = (1/n) * Σ_{i=1..n} a_i, so an empty sequence would yield 0.0 / 0 -- possibly NaN in the general case. So if I write a
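For reference, Python's standard library takes a third position and simply refuses to define the value; a minimal check of that documented behaviour:

import statistics

try:
    statistics.mean([])
except statistics.StatisticsError as exc:
    # statistics.mean raises rather than returning 0.0 or NaN for an empty sequence
    print("empty sequence:", exc)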

Converting IEEE 754 from bit stream into float in JavaScript

给你一囗甜甜゛ submitted on 2021-02-06 13:57:03
Question: I have serialized a 32-bit floating-point number using the Go function math.Float32bits, which returns the IEEE 754 binary representation of the number. This number is then serialized as a 32-bit integer and read into JavaScript as a byte array. For example, here is an actual number: float: 2.8088086, as byte array: 40 33 c3 85, as hex: 0x4033c385. There is a demo converter that displays the same numbers. I need to get that same floating-point number back from the byte array in
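The question targets JavaScript, but the reinterpretation itself is language-agnostic; a small Python sketch (for comparison only) reproduces the numbers quoted above:

import struct

# The bytes from the question, in the order given (big-endian): 40 33 c3 85
raw = bytes.fromhex("4033c385")

# Reinterpret the 4 bytes as a big-endian IEEE 754 single-precision float
value, = struct.unpack(">f", raw)
print(round(value, 7))  # 2.8088086, the float32 value from the question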

In Lua, what are #INF and #IND?

梦想的初衷 submitted on 2021-02-06 09:14:29
Question: I'm fairly new to Lua. While testing I discovered #INF / #IND. However, I can't find a good reference that explains them. What are #INF, #IND, and similar values (such as their negatives), and how do you generate and use them?

Answer 1: #INF is infinity, #IND is NaN. Give it a test:

print(1/0)
print(0/0)

Output on my Windows machine:

1.#INF
-1.#IND

As there's no standard representation for these in ANSI C, you may get different results, for instance:

inf
-nan

Answer 2: Expanding on @YuHao's already good answer. Lua does
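Since the question is Lua-specific, the snippet below is only a Python comparison sketch showing a portable way to produce and detect the same special values (note that 1/0 raises an exception in Python instead of returning infinity):

import math

inf = float("inf")   # the value Lua prints as 1.#INF on the Windows build above
nan = float("nan")   # the value behind -1.#IND / nan

print(math.isinf(inf), math.isnan(nan))  # True True
print(nan == nan)                        # False: NaN never compares equal to itself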

When to use a Float

大兔子大兔子 submitted on 2021-02-05 20:42:13
Question: Years ago I learned the hard way about precision problems with floats, so I quit using them. However, I still run into code using floats, and it makes me cringe because I know some of the calculations will be inaccurate. So, when is it appropriate to use a float? EDIT: For context, I don't think I've come across a program where the accuracy of a number isn't important, but I would be interested in hearing examples.

Answer 1: Short answer: You only have to use a float when you know exactly what you
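A minimal Python illustration of the class of precision problem the question alludes to (the specific numbers are just an example, not from the question):

print(0.1 + 0.2)         # 0.30000000000000004: binary floats cannot represent 0.1 exactly
print(0.1 + 0.2 == 0.3)  # False

from decimal import Decimal
print(Decimal("0.1") + Decimal("0.2"))  # 0.3: decimal arithmetic avoids this particular error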

Java: explicit type cast for floating-point division

微笑、不失礼 submitted on 2021-02-05 11:40:27
Question: I am not sure if I am using an explicit type cast for the floating-point division in option 4 (Division), and I need a little help understanding what floating-point division is. I must use integers to store the 2 operands and a double to store the result. You must use an explicit type cast for the floating-point division in option 4. Also use a switch statement to process the menu choices. After each computation

import java.util.Scanner;

public class SimpleCalculator {
    //-------------------------------
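The assignment is in Java, but the core idea (convert one integer operand to floating point before dividing, so the division itself is carried out in floating point) can be sketched in Python for comparison; the operand values are made up:

op1, op2 = 7, 2  # hypothetical integer operands

print(op1 // op2)        # 3   -> truncating integer division
print(float(op1) / op2)  # 3.5 -> converting an operand first parallels Java's (double) cast

# Note: in Python 3, op1 / op2 is already floating-point division; the explicit
# float() conversion is shown only to mirror the Java requirement.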