Table of Contents for Programming Languages: a survey

integer arithmetic

floating point arithmetic

fixed point arithmetic

http://www.open-std.org/JTC1/SC22/WG14/www/docs/n1169.pdf

decimal arithmetic

interval arithmetic: https://www.semipublic.comp-arch.net/wiki/Interval_arithmetic

gotchas:

" Personally, I think JS needs an integer type. Especially when you see people start getting into bignum stuff in JS, it gets silly fast, as first you can only store so many bits before floating point loses precision, but even then, even if you have an "integer", JS will find funny ways to shoot you in the foot. For example:

x | 0 === x

does not hold true for all integers in JS. " -- https://news.ycombinator.com/item?id=7366781 (todo: this is a floating point problem, isn't it?)
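The floating-point root of this gotcha can be sketched in Python, whose float is the same 64-bit IEEE 754 double that JS numbers are. The `to_int32` helper below is a rough illustrative model of JS's ToInt32 coercion, not a real library function:

```python
# All JS numbers are IEEE 754 doubles, so integers are only exact up to 2**53.
# Python's float is the same 64-bit double, so the collapse is reproducible:
x = 2**53
assert float(x) == float(x + 1)   # 2**53 and 2**53 + 1 become the same double

# JS bitwise ops additionally truncate to 32-bit ints (ToInt32), so even
# exactly-representable integers fail `x | 0 === x` once they exceed 2**31 - 1.
def to_int32(x):
    """Rough model of JS ToInt32 (illustrative helper, not a stdlib function)."""
    x = int(x) & 0xFFFFFFFF
    return x - 2**32 if x >= 2**31 else x

assert to_int32(5) == 5
assert to_int32(2**31) == -(2**31)   # so (2**31 | 0) !== 2**31 in JS
```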

"

haberman:

I'm not sure what that has to do with it. Even if you are ok with the idea that integers and "decimal numbers" are different, it's still confusing that 0.1 + 0.2 != 0.3.

It's confusing because it is very difficult to look at a decimal number and know whether it can be represented exactly as base-2 floating point. It's especially confusing because you get no feedback about it! Here is a Ruby session:

irb(main):001:0> 0.1
=> 0.1
irb(main):002:0> 0.2
=> 0.2
irb(main):003:0> 0.3
=> 0.3
irb(main):004:0> 0.4
=> 0.4
irb(main):005:0> 0.5
=> 0.5

They all look exact, right? Well actually only 0.5 is (IIRC?). All the others are actually approximations internally.

StefanKarpinski:

The problem here is that Ruby is lying to you – if you enter 0.3 as a literal, you will not get that value.

haberman:

I would challenge you to name any software system that reliably shows you the precise value of a floating point number.

I couldn't find any (like not a single one), so I wrote a program myself to do it: http://blog.reverberate.org/2012/11/dumpfp-tool-to-inspect-f...

StefanKarpinski:

Python, JavaScript (V8), Julia, Java.

haberman:

The precise value of double(0.1) is 0.1000000000000000055511151231257827021181583404541015625. That is precise, not an approximation.

If you know of a program in any of these languages that will print this value for "0.1" using built-in functionality, please let me know because I would love to know about it.

Likewise the precise value of double(1e50) is 100000000000000007629769841091887003294964970946560. Anything else is an approximation of its true value.

In another message you said that what's really important is that the string representation uniquely identifies the precise value. While that will help you reconstruct the value later, it does not help you understand why 0.1 + 0.2 != 0.3.

masida:

Python does it:

>>> from decimal import Decimal
>>> Decimal(0.1)
Decimal('0.1000000000000000055511151231257827021181583404541015625')
>>> Decimal(1e50)
Decimal('100000000000000007629769841091887003294964970946560')

haberman:

Great to know, thanks!

PaulAJ:

Haskell will tell you that

toRational 0.1 = 3602879701896397 % 36028797018963968
toRational 1e50 = 100000000000000007629769841091887003294964970946560 % 1

Both of those values are precise, although I admit the first isn't really useful for human beings.

StefanKarpinski:

The easiest way to see this exact value in Julia is to convert the Float64 value 0.1 to BigFloat and then print that:

julia> big(0.1)
1.000000000000000055511151231257827021181583404541015625e-01

julia> big(1e50)
1.0000000000000000762976984109188700329496497094656e+50

Note that this is exactly why you don't normally want to construct BigFloats this way. Instead you want to do this:

julia> BigFloat("0.1")
1.000000000000000000000000000000000000000000000000000000000000000000000000000002e-01

julia> BigFloat("1e50")
1e+50

StefanKarpinski:

It helps because 0.1 + 0.2 produces 0.30000000000000004 for 64-bit floats – so at least you can see that this value isn't the same as 0.3. In Ruby you just get two values that print the same yet aren't equal, which is way more confusing. I agree that printing the minimal number of digits required for reconstruction does not help with explaining why 0.1, 0.2 and 0.3 in 64-bit floats aren't the real values 1/10, 2/10 and 3/10.

StefanKarpinski:

We may be speaking across each other here. Ruby is lying in the sense that there are multiple distinct float values that it will print as 0.3 – in particular, confusion ensues when two values look the same but are unequal. These other languages print each distinct float value differently, using just enough decimal digits to reconstruct the exact binary value. Ruby doesn't give you enough digits to reconstruct the value you have. Nobody actually prints the full correct value because it's fifty digits long and is completely redundant given that you know you're dealing with a 64-bit float.

kzrdude:

You're right.

Python 3.3.5rc1 (default, Feb 23 2014, 17:44:29)
>>> 0.3 == 0.2 + 0.1
False
>>> 0.2 + 0.1
0.30000000000000004
>>> 0.3
0.3
"

here's the Julia implementation of rationals: https://github.com/JuliaLang/julia/blob/master/base/rational.jl

doc: http://julia.readthedocs.org/en/latest/manual/complex-and-rational-numbers/#man-rational-numbers

see also http://julia.readthedocs.org/en/latest/manual/constructors/#case-study-rational and https://github.com/JuliaLang/julia/blob/master/doc/manual/conversion-and-promotion.rst for explanation of the rational implementation
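As a rough sketch of the "reduced pair of integers" approach those implementations use (the class name `Rat` is made up for illustration; Python's `fractions.Fraction` is the production version):

```python
from math import gcd

class Rat:
    """Minimal rational type: a reduced pair of integers (sketch only)."""
    def __init__(self, num, den):
        if den == 0:
            raise ZeroDivisionError("zero denominator")
        if den < 0:                      # keep the sign in the numerator
            num, den = -num, -den
        g = gcd(num, den)                # always store in lowest terms
        self.num, self.den = num // g, den // g

    def __add__(self, other):
        # a/b + c/d = (a*d + c*b) / (b*d), reduced by the constructor
        return Rat(self.num * other.den + other.num * self.den,
                   self.den * other.den)

    def __eq__(self, other):
        return (self.num, self.den) == (other.num, other.den)

    def __repr__(self):
        return f"{self.num}//{self.den}"

# 1/10 + 2/10 == 3/10 exactly -- no floating-point surprise
assert Rat(1, 10) + Rat(2, 10) == Rat(3, 10)
```

Reducing in the constructor (as Julia's rational.jl also does) keeps every stored value canonical, so equality is a plain pair comparison.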

"... I personally think that the way to handle floating-point confusion is better user education. However, if you really want a decimal standard, then, as I mentioned above, there already is one that is part of the IEEE 754 standard. Not only do there exist hardware implementations, but there are also high-quality software implementations.

A better approach to making things more intuitive in all bases, not just base 10, is using rational numbers. The natural way is to use reduced pairs of integers, but this is unfortunately quite prone to overflow. You can improve that by using reduced ratios of – guess what – floating point numbers. " -- Stefan Karpinski, https://news.ycombinator.com/item?id=7367147
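Both points can be sketched in Python with the standard fractions.Fraction as the rational type (the growth loop is an arbitrary illustration of the overflow/blowup concern, not taken from the quote):

```python
from fractions import Fraction

# Exact rational arithmetic: no representation error, so the classic
# 0.1 + 0.2 != 0.3 surprise disappears.
assert Fraction(1, 10) + Fraction(2, 10) == Fraction(3, 10)

# Constructing a Fraction from a float exposes the double's true value --
# the same reduced ratio Haskell's toRational reports:
print(Fraction(0.1))   # 3602879701896397/36028797018963968

# The downside: the reduced integers can grow without bound under
# repeated arithmetic (here, a few iterations of x <- x^2 + 1/7):
x = Fraction(1, 3)
for _ in range(10):
    x = x * x + Fraction(1, 7)
print(len(str(x.denominator)))   # well over 100 digits already
```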

literals

addition, subtraction

multiplication, division

bitwise OR, AND, XOR

exponentiation, log, sqrt often library fns

often the symbol - is used in two ways: a - b for subtraction, but -b for the negation of b (e.g. if b is 3, then -b is -3)

this irregularity can be vexing. e.g. in Haskell, the expression "abs -3" is parsed as the subtraction "abs - 3" (a type error), so you have to write "abs (-3)". some languages use a different character for one of subtraction or negation, e.g. Standard ML uses ~ (tilde) for unary negation.
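Python keeps the shared symbol but resolves the ambiguity with precedence rules, which yields its own gotcha: unary minus binds looser than exponentiation. A quick illustration:

```python
# Unary minus binds *looser* than **, so -3 ** 2 means -(3 ** 2).
assert -3 ** 2 == -9
assert (-3) ** 2 == 9

# Ordinary negation and subtraction coexist without trouble:
b = 3
assert -b == -3          # negation
assert 10 - b == 7       # subtraction
assert abs(-3) == 3      # no Haskell-style parse problem here
```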

infix

promotion

traditional precedence

There is some disagreement in mathematics over whether 0^0 should be 1 or undefined. IEEE 754-2008 defines three different exponentiation functions: pow, pown (integer exponents), and powr (real exponents). pow gives 0^0 = 1, pown gives 0^0 = 1, and powr gives 0^0 = NaN.

In many languages and platforms (e.g. C, Python, Java, Octave, C++, .NET), 0^0 returns 1. (cite https://en.wikipedia.org/wiki/Exponentiation#Programming_languages )
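A quick check in Python; the powr sketch at the end is a hypothetical illustration of the IEEE powr definition (exp(y·log x)), not a built-in:

```python
import math

# Both integer and float exponentiation follow the pow convention: 0^0 = 1.
assert 0 ** 0 == 1
assert math.pow(0.0, 0.0) == 1.0

def powr(x, y):
    """Illustrative sketch of IEEE powr as exp(y * log(x)); Python's math.log
    raises ValueError at x = 0 rather than producing NaN."""
    return math.exp(y * math.log(x))

assert math.isclose(powr(2.0, 3.0), 8.0)
```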

if your integers are stored as fixed-size fields in memory, an arithmetic operation can produce a result that is too large to fit in that field.

at least six ways to deal with this:

- don't have a set size. arbitrary-precision arithmetic. disadvantage: slower (and theoretically unbounded memory usage)
- return a null value or sentinel value
- raise an error
- truncated (wraparound) result (if everything is a single byte, 200 + 200 = 144, i.e. arithmetic mod 256)
- saturated result (if everything is a single byte, 200 + 200 = 255)
- double-width full result (not for exponentiation) (200 + 200 = 400, but the inputs are one byte and the output is two bytes)
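Four of these strategies can be sketched for one-byte addition in Python (the helper names are made up for illustration; they are not a real fixed-width integer library):

```python
def add_wrap(a, b):
    """Truncated result: arithmetic mod 256 (like C's unsigned char)."""
    return (a + b) & 0xFF

def add_saturate(a, b):
    """Saturated result: clamp at the maximum representable value."""
    return min(a + b, 255)

def add_checked(a, b):
    """Raise an error on overflow instead of returning a wrong value."""
    s = a + b
    if s > 255:
        raise OverflowError("sum does not fit in one byte")
    return s

def add_widen(a, b):
    """Double-width result: one-byte inputs, two-byte output always fits."""
    assert 0 <= a <= 255 and 0 <= b <= 255
    return a + b            # at most 510, well within 16 bits

assert add_wrap(200, 200) == 144
assert add_saturate(200, 200) == 255
assert add_widen(200, 200) == 400
```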

See also the implementation of arithmetic.