Replies: 11 comments
-
Explicit typing for integers and doubles doesn't sound very interesting; usually providing an integer to a function which takes a float is fine, and vice versa if the float is representable as an integer. Lua will always use floats for division and exponentiation. There is a compatibility issue with integers as they are implemented in Lua 5.3: integer operations will wrap around when overflowing, instead of promoting to a float, so an expression like `math.maxinteger + 1` yields `math.mininteger` rather than a larger float. There is also a potentially problematic change which isn't really a backwards compatibility issue: floats with no fractional part will get a `.0` suffix when converted to a string.
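To make the wrap-around point concrete, here is a small standalone C illustration (not Luau or Lua source, just the underlying machine behavior the comment refers to): 64-bit integer arithmetic wraps at 2^63, while double arithmetic keeps an approximate value instead.

```c
/* Illustration only: how 64-bit integer arithmetic wraps on overflow
 * while double arithmetic does not. */
#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>

int main(void) {
    int64_t imax = INT64_MAX;                   /* 9223372036854775807 */

    /* Signed overflow is undefined behavior in C, so the wrap is done in
     * unsigned arithmetic; on typical platforms the result matches what
     * wrapping 64-bit integer semantics (as in Lua 5.3) would produce. */
    int64_t wrapped = (int64_t)((uint64_t)imax + 1u);

    /* Double arithmetic (today's Luau behavior): no wrap, the result is
     * simply rounded once integers exceed 2^53. */
    double promoted = (double)imax + 1.0;

    printf("wrapped : %" PRId64 "\n", wrapped); /* -9223372036854775808 */
    printf("double  : %.0f\n", promoted);       /*  9223372036854775808 */
    return 0;
}
```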
-
I see. So, implementing this would also involve avoiding these compatibility issues.
This is usually the case; however, the concern is around being able to handle higher precision numbers, with up to 64 bits. This is especially helpful in algorithms which merge multiple bytes into a single integer for speed, and it generally applies to data processing. Sometimes you actually need the full range of precision, because, well, sometimes you need to work with big numbers. There is also the application of being able to combine this with native bitwise operators.
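For context on the byte-merging pattern mentioned above, a minimal C sketch of the idea (the helper name is hypothetical, not from any particular library): several bytes are read as a single 64-bit value instead of one byte at a time.

```c
/* Load 8 bytes as one 64-bit integer; a common trick in hashing and
 * parsing code where per-byte processing is too slow. */
#include <stdint.h>
#include <string.h>

static uint64_t load_u64_le(const unsigned char *p) {
    uint64_t v;
    memcpy(&v, p, sizeof v); /* compiles to a single load on most targets */
    return v;                /* assumes a little-endian host for simplicity */
}
```

A double can only hold integers up to 2^53 losslessly, so a value loaded this way cannot round-trip through today's Luau number type, which is the precision concern the rest of this thread revolves around.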
-
See the Luau compatibility page: It breaks compatibility, as it changes the behavior of numbers in a non-trivial way.
-
Some of the reasoning on the compatibility page irks me, because there are ways to address compatibility. The problem with integers doesn't seem to be that they break compatibility; the problem seems to be that Luau will not consider a syntax that doesn't break compatibility. I'm interested in the reasoning for why new syntax is considered a problem rather than a useful way to expand the language to make it more appropriate for the environment it's designed for (gameplay programming).
-
It has more to do with the behavioral changes it will cause in existing programs. Native bitwise operations are already covered by the bit32 fastcalls, which are basically exactly as fast as native operators would be.
Well, it is. It changes the behavior of existing programs. The only reason the vector type was added was because it acted basically the same as the existing Vector3 userdata and was also created in the same way, and even that was only a minor breaking change. Native integers would be way bigger: Luau would have to make a special syntax for integers, add a different type, add support for them to all the bit32 functions, etc. All that effort to try not to break compatibility, and then it would break anyway, because integers would suddenly not qualify as numbers since they'd be separate types... I dunno, this sounds like a mess waiting to happen.
-
The primary concern that a native integer type would address is not having enough precision in code. Doubles take up 64 bits, but you can only represent integers losslessly up to 2^53. I don't think that adding any new syntax is reasonable; I think that the most reasonable approach would have to be implementing this in a way which doesn't break backwards compatibility, which is easier said than done.

Compromise?

Because directly backporting Lua 5.3's implementation of a native integer type isn't feasible in any real way, I think a good potential compromise is to allow for absolute bare minimum support. In any case where integers are used in a way which can break compatibility, we can just convert to a double and use the double logic. This allows for different incompatibilities to be evaluated at different times, or not at all, and it lets both backwards and forwards compatibility be preserved, with minimal performance cost. For example:
Conversion from 64-bit signed/unsigned integers to doubles is extremely fast on most hardware, as it only requires overwriting a section of the integer data with some constant data for the sign and exponent. So most of the performance impact should come down to how quickly we can handle branch cases between numeric types, which should already be overhead that exists in theory, since operations between doubles and non-double primitives (e.g. a double and a string) need to be handled already.
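One way to picture this compromise at the VM level is sketched below. Everything here (NumValue, num_add) is invented for illustration under the assumption of a tagged integer/double value; it is not Luau's TValue or any real implementation. Operations stay in integer form only while both operands are integers and the result fits; anything else silently takes the existing double path.

```c
/* Hypothetical sketch of the "fall back to double" compromise. */
#include <stdint.h>
#include <stdbool.h>

typedef struct {
    bool is_int;              /* tag: integer or double representation */
    union {
        int64_t i;
        double  d;
    } v;
} NumValue;                   /* invented name, not Luau's TValue */

static double num_as_double(NumValue n) {
    /* int64 -> double conversion; a single instruction on most hardware */
    return n.is_int ? (double)n.v.i : n.v.d;
}

static NumValue num_add(NumValue a, NumValue b) {
    NumValue r;
    int64_t sum;
    /* __builtin_add_overflow is a GCC/Clang intrinsic; a portable build
     * would need explicit range checks instead. */
    if (a.is_int && b.is_int && !__builtin_add_overflow(a.v.i, b.v.i, &sum)) {
        r.is_int = true;      /* result stayed in integer range */
        r.v.i = sum;
    } else {
        /* Mixed operands or overflow: use today's double semantics so
         * existing programs observe the same results they do now. */
        r.is_int = false;
        r.v.d = num_as_double(a) + num_as_double(b);
    }
    return r;
}
```

The tag check in num_add is exactly the extra branching between numeric types that the performance discussion elsewhere in this thread is about.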
-
It could also be possible to move from 64-bit doubles to 80-bit long doubles, which would still fit into the 96 bits of space in a TValue. However, this would immediately move into the realm of non-portable code, since the size and precision of long double depend on the compiler and target ABI.
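To illustrate the portability concern, a quick C probe shows that long double's size and precision are whatever the compiler and ABI choose:

```c
/* Print the storage size and mantissa width of long double on the
 * current toolchain. */
#include <stdio.h>
#include <float.h>

int main(void) {
    printf("sizeof(long double) = %zu bytes\n", sizeof(long double));
    printf("mantissa bits       = %d\n", LDBL_MANT_DIG);
    return 0;
}
```

Typical results are 16 bytes with 64 mantissa bits on x86-64 Linux, 8 bytes with 53 bits under MSVC, and 16 bytes with 113 bits on AArch64 Linux, so code that assumes an 80-bit format cannot be relied on across the platforms Luau targets.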
-
Having different semantics for overflow is important if an integer type is to be added; this is probably the main source of incompatibility.
-
@Halalaluyafail3 How are scripts that stringify numbers broken?
-
I am saying that if a script relies on the format of a default conversion (tostring or implicit conversions) from numbers to strings, it is broken, as the format used by default conversions from numbers to strings is left unspecified. If a script uses default conversions from numbers to strings but does not rely on the format of the generated strings, then it is not broken.
-
I'm going to convert this issue to a discussion - our current stance on integers is described on the compatibility page, and boils down to this feature having non-obvious compatibility, complexity and performance tradeoffs. No decision we ever make is final, but there are good reasons for us not to venture into integer support for now. This may be reevaluated after we implement the JIT and a few other projects.

The compat issues are well summarized here. The major one is overflow handling, which breaks the numeric tower. I think there are some other smaller issues, beyond just tostring, that are going to be difficult to evaluate before building the implementation out; for example, a bunch of C APIs today work with 32-bit integers, and I think 5.3 changes that to 64-bit, which can result in behavioral changes and/or difficult-to-debug bugs.

Let's say we solve the overflow by automatically checking for it after arithmetic operations, and do something with the other smaller issues. There are still many issues that remain.

One is performance, where integers are very useful in some cases but harmful in others. Because we don't have strict types affecting runtime (and even if we did, whether we should separate integers and numbers at the type level is unclear), a host of operations become more ambiguous than before with respect to expected types, and that requires extra branching in the interpreter to resolve.

Performance is further complicated by many non-obvious factors. One that's applicable to 5.3 is that integers are 64-bit on all platforms, even ones that don't natively support 64-bit operations. This is the correct choice because it equalizes behavior, but it comes at a performance cost on 32-bit platforms, which we still support in both the desktop and mobile space and which aren't going away any time soon. One that's applicable to a backwards-compatible extension of 5.3's overflow handling is that it's very non-trivial to perform overflow-checked math portably and efficiently in C, and even reasonably efficient implementations still add overhead to all basic operations like + and *.

One is complexity, where integers by themselves add some amount of non-trivial code - every single place that dealt with numbers in the VM now needs to handle two types - and they also naturally invite a further increase in complexity by adding 6 more operators, with associated bytecode, metamethods, etc. This is all non-trivial, as it increases the interpreter code size (which affects performance), increases syntactic complexity, and so on.

One is the interaction between the runtime and the type system, which has two unsatisfactory answers here. Today we're pretty evenly matched between the types the VM exposes and the types the type checker understands, and that's likely to continue. It's not obvious whether it's better to separate integers from doubles at the type level - after all, semantically the operations can be different, and this can help code generation - or not. Both choices have upsides and downsides.

Overall, if we include overflow handling and keep a single numeric type externally, we effectively go from "numbers are 64-bit doubles that can represent integers precisely up to 2^53" to "numbers are 64-bit doubles that can represent integers precisely up to 2^63 in some cases". I say "in some cases" because, if you leave integer space for floating-point space, can you ever go back? Does math.floor(3.4) return an integer or a double? What about math.floor(2^54 + 0.5)?
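As a side note on the overflow-checked math mentioned above: in portable C, without compiler intrinsics such as GCC/Clang's __builtin_add_overflow, every checked addition ends up looking something like this sketch (illustrative, not Luau code):

```c
/* Portable overflow-checked addition: the check must happen before the
 * add (signed overflow is undefined behavior in C), and every
 * interpreter-level `+` pays for the extra comparisons and the branch. */
#include <stdint.h>
#include <stdbool.h>

static bool checked_add_i64(int64_t a, int64_t b, int64_t *out) {
    if ((b > 0 && a > INT64_MAX - b) ||
        (b < 0 && a < INT64_MIN - b)) {
        return false;   /* overflow: the VM would have to promote,
                           wrap, or raise an error here */
    }
    *out = a + b;
    return true;
}
```

Checked multiplication is worse still, since the portable pre-checks involve divisions or widening to a larger type.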
The idea that I'm personally much more excited about is implicit integers. This is used by LuaJIT and various JS runtimes, and there are ways for us to implement this behind the scenes without changing the semantics of user programs - we'd automatically downgrade the representation of a given value from a 64-bit double to a 32-bit integer when we have a proof that the integer is going to stay within the 31-bit range. This can cover many of the cases where today we lose a bit of performance on a single number type. Maybe we can do this dynamically when using a JIT. Maybe it doesn't work out.

To summarize this overly long explanation: today, 64-bit integers are in the design space where the feature has a significant cost in various areas of the language and runtime, and the cost isn't obviously justified by the language becoming simpler, more user friendly, or faster. I don't think the use cases around packed math are as motivating - yes, there are cool things you can do with 64-bit integers, and if they were free we'd happily incorporate them, but on balance it doesn't seem like an obvious win. Accelerating array indexing is where I see the most value in performance here, but implicit integers are a possible competitor.

Maybe we look at this problem again in a year or two, when we have a JIT, and try implicit integers and decide otherwise - but the compatibility note was written two years ago and all of the reasoning from that time still applies today.
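To make the implicit-integer idea above a bit more tangible, here is a hedged sketch of the kind of representability check such a scheme relies on; the function name and the exact bound are illustrative, and this is not LuaJIT's or Luau's actual code.

```c
/* Decide whether a double can be stored as a small integer with no
 * observable change in behavior. */
#include <stdint.h>
#include <stdbool.h>

static bool narrow_to_int31(double d, int32_t *out) {
    /* Range check first: converting an out-of-range (or NaN) double to an
     * integer type is undefined behavior in C. The comparison is written
     * so that NaN fails it as well. */
    if (!(d >= -1073741824.0 && d < 1073741824.0)) {  /* [-2^30, 2^30) */
        return false;
    }
    int32_t i = (int32_t)d;
    if ((double)i != d) {
        return false;   /* fractional part present: keep the double */
    }
    /* Note: this sketch narrows -0.0 to integer 0; a real implementation
     * would likely keep -0.0 as a double to preserve its sign in string
     * formatting. */
    *out = i;
    return true;
}
```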
-
This is a feature in Lua 5.3 and beyond, which breaks the `number` type internally into two new types, `integer` and `float`. Currently, Luau numbers are doubles, which offer 53 bits of lossless integer precision. Having the capability for 64-bit computation in Luau would extend to a wide variety of algorithms, improving performance in cases where they are required.

Pros:

Cons:
- Division semantics become ambiguous (`3/4` must be a double to be represented; is `4/2`, or `10/5`, represented as a float? How can this be done performantly?)