Fixed-precision numbers are desirable because they match how precision works in reality. The problem is that it is difficult to know the needed precision ahead of time. So ideally, we would like seamless support at the programming language level. In the source code, numbers are just numbers, and all arithmetic is written in the same notation. By default, a floating-point type will do. However, we can tune the program to finer precision by updating a variable's type: integers would have a precision (scale) of 1 or coarser, while some variables might have a precision of 1e-100. Even binary precision could be adopted.
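To make the idea concrete, here is a minimal Python sketch of what per-variable precision might look like. The `Fixed` class, its `from_str` helper, and the variable names are my own illustrative inventions under the assumptions above, not an existing library or a finished design.

```python
from dataclasses import dataclass

# Hypothetical sketch: a fixed-point value whose precision (scale) travels
# with the value. scale is the exponent of the smallest representable step,
# so scale=0 means a step of 1 (an integer) and scale=-100 means 1e-100.
@dataclass(frozen=True)
class Fixed:
    units: int   # value measured in steps of 10**scale
    scale: int   # exponent of the step size

    @classmethod
    def from_str(cls, text: str, scale: int) -> "Fixed":
        # Parse a decimal literal into an exact count of steps.
        whole, _, frac = text.partition(".")
        digits = int(whole + frac)
        return cls(digits * 10 ** (-scale - len(frac)), scale)

    def __repr__(self) -> str:
        return f"{self.units * 10.0 ** self.scale} (scale=1e{self.scale})"

# What the source might look like: same notation, different declared precision.
price = Fixed.from_str("19.99", scale=-2)    # two decimal places
count = Fixed.from_str("3", scale=0)         # integer: precision of 1
tiny  = Fixed.from_str("0.5", scale=-100)    # very fine precision
```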
Now, this is quite complex to support. Instead of a handful of number types -- int32, int64, float, double, and their unsigned variants -- we essentially need to support an unlimited number of types. The compiler needs to emit different arithmetic routines based on the operands' data types.
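A rough sketch of what one such routine might involve, continuing the hypothetical `Fixed` type above: mixing operands of different scales forces a rescaling step, and a compiler specializing on types would need a distinct routine (or specialization) for every pair of operand scales. `fixed_add` is an illustrative name, not part of any real compiler or library.

```python
def fixed_add(a: Fixed, b: Fixed) -> Fixed:
    # The result takes the finer (more negative) scale of the two operands.
    scale = min(a.scale, b.scale)
    a_units = a.units * 10 ** (a.scale - scale)   # rescale a to the result scale
    b_units = b.units * 10 ** (b.scale - scale)   # rescale b to the result scale
    return Fixed(a_units + b_units, scale)

# Mixed-scale usage: an integer-scale count plus a two-decimal-place price.
total = fixed_add(price, count)   # exact result at scale 1e-2, i.e. 22.99
```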
But I think it is workable, and it is probably worthwhile.