Generic Integers V2: It's Time #3686
Conversation
…p size/alignment to a multiple of 64 bits.
Fix some nits
…eric integers since that's not an issue any more
This reverts commit 25f85cc105cb04b4e87debf46f4547240c122ae4.
As much as I dislike 👍 from me

Even if we should probably leave them out of the initial RFC for complexity reasons, I would just cheat with floats, as they rely on system libraries and hardware instructions way more than regular integers. By that, I mean that I'd allow

Are you proposing delaying the discussion or the implementation? My understanding is that with a release early 2025, Rust 2024 will be done by mid November, which is only 2 months away, and it seems quite unlikely this RFC would be accepted and implementation ready to start by then, so I see no conflict with regard to starting on the implementation... ... but I could understand a focus on the edition for the next 2 months, and thus less bandwidth available for discussing RFCs.

The problem with this approach is that any "cheating" becomes permanently stabilised, and thus, it's worth putting in some thought for the design. This isn't to say that Plus, monomorphisation-time errors were actually one of the big downsides to the original RFC, and I suspect that people haven't really changed their thoughts since then. Effectively, while it's okay to allow some edge-case monomorphisation-time errors like this RFC includes (for example, asking for One potential solution that was proposed for unifying

And it would support all float types, forever, and there would be no invalid values for

As stated: yes, RFCs take time to discuss and implement and it's very reasonable to expect people to focus on the 2024 edition for now. However, that doesn't mean that we can't discuss this now, especially since there are bound to be things that were missed that would be good to point out.
> In general, operations on `u<N>` and `i<N>` should work the same as they do for existing integer types, although the compiler may need to special-case `N = 0` and `N = 1` if they're not supported by the backend.
>
> When stored, `u<N>` should always zero-extend to the size of the type and `i<N>` should always sign-extend. This means that any padding bits for `u<N>` can be expected to be zero, but padding bits for `i<N>` may be either all-zero or all-one depending on the sign.
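The zero-/sign-extension rule quoted above can be modeled in today's Rust for small widths. This is a sketch only: the helper names and the `u8`/`i8` backing storage are invented for illustration and are not part of the RFC.

```rust
// Model an N-bit value (N <= 8) stored in a full byte, following the
// quoted rule: unsigned values zero-extend, signed values sign-extend.

/// Store an N-bit unsigned value: the upper (8 - N) bits become zero.
fn store_unsigned(n: u32, x: u8) -> u8 {
    x & (((1u16 << n) - 1) as u8)
}

/// Store an N-bit signed value: the upper (8 - N) bits copy the sign bit
/// (shift left to drop them, then arithmetic-shift right to replicate it).
fn store_signed(n: u32, x: i8) -> i8 {
    (x << (8 - n)) >> (8 - n)
}

fn main() {
    // A 7-bit unsigned value: the one padding bit is always zero,
    // so out-of-range input bits are masked away.
    assert_eq!(store_unsigned(7, 0b0111_1111), 0b0111_1111);
    assert_eq!(store_unsigned(7, 0xFF), 0x7F);

    // The 7-bit signed value -1 (all seven low bits set): the padding
    // bit matches the sign, so the stored byte is all-ones, i.e. -1i8.
    assert_eq!(store_signed(7, 127i8), -1i8);
    // A non-negative 7-bit value stays unchanged.
    assert_eq!(store_signed(7, 63i8), 63i8);
}
```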
Please clarify this to say what exactly happens when I transmute e.g. `255u8` to `u<7>` (and similar for `i<N>`). I assume it is UB, i.e., the validity invariant of these types says that the remaining bits are zero-extended / sign-extended, but the RFC should make that explicit.

Note that calling this "padding" might be confusing since "padding" in structs is uninitialized, but here padding would be defined to always have very specific values. (That would, e.g., allow it to be used as a niche for enum optimizations.)
Yeah, I'm not quite sure what a better name is; it's the same as `rustc_layout_scalar_valid_range`, which is UB if the bits are invalid.
I guess that since this is the reference description, calling them niche bits would be more appropriate? Would that feel reasonable?
No. Niche bits are an implementation detail of the enum layout algorithm, and mostly not stable nor documented.
Just describe what the valid representations of values of these types are, i.e., what should go into this section about these types.
> The compiler should be allowed to restrict `N` even further, maybe even as low as `u16::MAX`, due to other restrictions that may apply. For example, the LLVM backend currently only allows integers with widths up to `u<23>::MAX` (not a typo; 23, not 32). On 16-bit targets, using `usize` further restricts these integers to `u16::MAX` bits.
>
> While `N` could be a `u32` instead of `usize`, keeping it at `usize` makes things slightly more natural when converting bits to array lengths and other length-generics, and these quite high cutoff points are seen as acceptable. In particular, this helps using `N` for an array index until [`generic_const_exprs`] is stabilized.
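The `usize`-versus-`u32` ergonomics point can be seen with today's const generics: a `usize` parameter flows straight into an array length, while a `u32` parameter would need a cast inside the length expression, which stable Rust does not yet accept. A sketch using a plain byte buffer as a stand-in (the function name is invented for illustration):

```rust
// With `const N: usize`, N can be used directly as an array length.
fn zeroed_buffer<const N: usize>() -> [u8; N] {
    [0u8; N]
}

// With `const N: u32`, writing `[u8; N as usize]` is rejected on stable
// Rust today, because a length expression depending on a generic
// parameter requires the unstable `generic_const_exprs` feature.

fn main() {
    let buf = zeroed_buffer::<16>();
    assert_eq!(buf.len(), 16);
}
```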
You mean "using N for an array length", I assume?
Yes.
> As an example, someone might end up using `u<7>` for a percent since it allows fewer extraneous values (`101..=127`) than `u<8>` (`101..=255`), although this actually just overcomplicates the code for little benefit, and may even make the performance worse.
>
> Overall, things have changed dramatically since [the last time this RFC was submitted][#2581]. Back then, const generics weren't even implemented in the compiler yet, but now, they're used throughout the Rust ecosystem. Additionally, it's clear that LLVM definitely supports generic integers to a reasonable extent, and languages like [Zig] and even [C][`_BitInt`] have implemented them. A lot of people think it's time to start considering them for real.
I wouldn't say Zig has generic integers, it seems like they have arbitrarily-sized integers. Or is it possible to write code that is generic over the integer size?
Well, actually you can:

```zig
const std = @import("std");

fn U(comptime bits: u16) type {
    return @Type(std.builtin.Type{
        .Int = std.builtin.Type.Int{
            .signedness = std.builtin.Signedness.unsigned,
            .bits = bits,
        },
    });
}

pub fn main() !void {
    const a: U(2) = 1;
    const b: U(2) = 3;
    // const c: U(2) = 5; // error: type 'u2' cannot represent integer value '5'
    const d = std.math.maxInt(U(147));
    std.debug.print("a={}, b={}, d={}", .{ a, b, d });
    // a=1, b=3, d=178405961588244985132285746181186892047843327
}
```
I guess that example is satisfactory enough, @RalfJung? Not really sure if it's worth the effort to clarify explicitly.
Ah, neat.
C and LLVM only have concrete-width integers though, I think?
I mean, C doesn't have generic anything, so, I guess you're right. Not 100% sure the distinction is worth it.
Clang adds `_BitInt` to C++ as an extension, and the number of bits can be generic: `template <size_t N> void example(_BitInt(N) a);` will deduce `N`, but it only works on the actual `_BitInt` types, not just any signed integer type.
I love this! One point that is touched upon here is aliases for I think that'd be super valuable to have. Rust already has a lot of symbols, and being able to not use the angle brackets makes sure that the code is much calmer to look upon. It's also not the first explicit syntax sugar since an Having the aliases also allows for this while keeping everything consistent:

```rust
fn foo<const N: usize>(my_num: u<N>) { ... }

foo(123); // What is the bit width? u32 by default?
foo(123u7); // Fixed it
```
I agree with you, just didn't want to require them for the initial RFC, since I wanted to keep it simple. Ideally, the language will support
When the last RFC was postponed, the stated reason was waiting for pure library solutions to emerge and letting the experience with those inform the design. I don't really see much of this in the current RFC, so here's a bunch of questions about it. It would also be great if some non-obvious design aspects of the RFC (such as limits on `N`, whether and how post-monomorphization errors work, padding, alignment, etc.) could be justified with experience from such libraries.
> This was the main proposal last time this RFC rolled around, and as we've seen, it hasn't really worked.
>
> Crates like [`u`], [`bounded-integer`], and [`intx`] exist, but they come with their own host of problems:
As far as I can tell, `bounded-integer` and `intx` only provide subsets of the native types up to `{i,u}128`, not arbitrarily large fixed-size integers. The `u` crate seems to be about something else entirely, did you mean to link something different there?

So where are the libraries that even try to do what this RFC proposes: arbitrary number of bits, driven by const generics? I've searched and found ruint, which appears relevant.
That definitely seems like a good option to add to the list. I had trouble finding them, so, I appreciate it.
I'd appreciate a mention of https://crates.io/crates/arbitrary-int, which is (I think) the closest in design to this RFC.
> Crates like [`u`], [`bounded-integer`], and [`intx`] exist, but they come with their own host of problems:
>
> * None of these libraries can easily unify with the existing `uN` and `iN` types.
A const-generic library type can't provide this and also can't support literals. But what problems exactly does that cause in practice? Which aspects can be handled well with existing language features and which ones really need language support?
The RFC already mentions how being able to provide a small number of generic impls that cover all integer types has an extremely large benefit over being forced to use macros to implement for all of them individually. You cannot do this without language support.
So this bullet point is "only" about impls like `impl<const BITS: usize> Foo for some_library::Int<BITS> { ... }`, not implementing anything for the primitive integer types? Could `From` impls and some form of delegation (#3530) also help with this?
Not really, and this is mentioned in the RFC also. That's 5 impls for unsigned, 5 impls for signed that could just be 2 impls, whether you have delegation or not. Even for simple traits, like `Default`, you're incentivised to use a macro just because it becomes so cumbersome.
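A minimal sketch of that cumbersomeness in today's Rust (the `BitWidth` trait and the macro name are invented for illustration): a macro has to stamp out one impl per primitive, where a single const-generic impl over `u<N>` would suffice under this RFC.

```rust
trait BitWidth {
    fn width() -> u32;
}

// Today: a macro stamping out one impl per unsigned primitive.
macro_rules! impl_bit_width {
    ($($t:ty),*) => {
        $(impl BitWidth for $t {
            fn width() -> u32 { <$t>::BITS }
        })*
    };
}

impl_bit_width!(u8, u16, u32, u64, u128);

// Under this RFC, the five impls above could collapse into one
// (not valid Rust today, shown as a comment):
//
//     impl<const N: usize> BitWidth for u<N> {
//         fn width() -> u32 { N as u32 }
//     }

fn main() {
    assert_eq!(<u16 as BitWidth>::width(), 16);
}
```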
`arbitrary-int` provides a unification somewhat using its `Number` trait. It's somewhat rudimentary but I am working on improving it.
Reading this again, the Number trait fulfills a somewhat different role though. It allows writing generic code against any Number (be it an arbitrary-int or a native int), but it does not expose the bits itself - which can be a plus or a minus, depending on what you're building.
> Crates like [`u`], [`bounded-integer`], and [`intx`] exist, but they come with their own host of problems:
>
> * None of these libraries can easily unify with the existing `uN` and `iN` types.
> * Generally, they require a lot of unsafe code to work.
What kind of unsafe code, and for what purposes? And is that sufficient reason to extend the language? Usually, if it's something that can be hidden behind a safe abstraction once and for all, then it seems secondary whether that unsafety lives on crates.io, in sysroot crates, or in the functional correctness of the compiler backend.
Generally, the unsafe code is stuff similar to the `bounded-integer` crate, where integers are represented using enums and transmuted from primitives. The casting to primitives is safe, but not the transmuting.
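A minimal sketch of that enum-and-transmute pattern, assuming a tiny 2-bit type rather than the hundreds of macro-generated variants real crates use (the `U2` type and its methods are invented for illustration):

```rust
// A 2-bit unsigned integer represented as a fieldless enum, so the
// compiler sees the values 4..=255 as a niche for layout optimization.
#[derive(Clone, Copy, Debug, PartialEq)]
#[repr(u8)]
enum U2 {
    V0 = 0,
    V1 = 1,
    V2 = 2,
    V3 = 3,
}

impl U2 {
    fn new(x: u8) -> Option<U2> {
        if x < 4 {
            // SAFETY: x is in 0..=3, which covers every declared variant.
            Some(unsafe { std::mem::transmute::<u8, U2>(x) })
        } else {
            None
        }
    }

    fn get(self) -> u8 {
        self as u8 // casting back to the primitive is safe
    }
}

fn main() {
    assert_eq!(U2::new(3).map(U2::get), Some(3));
    assert_eq!(U2::new(4), None);
    // The niche lets Option<U2> fit in a single byte.
    assert_eq!(std::mem::size_of::<Option<U2>>(), 1);
}
```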
Is that really all? Because that seems trivial to encapsulate without affecting the API, and likely to be solved by any future feature that makes it easier to opt into niche optimizations (e.g., pattern types).
Yeah, it's easy to encapsulate, but I think it's worth mentioning that unsafe code is involved as a negative because it means many code bases will be more apprehensive to use it.
You are right that it could easily be improved, though, with more compiler features. I just can't imagine it ever being on par with the performance of a compiler-supported version, both at runtime and compile time.
arbitrary-int works without unsafe code (with the exception of the optional function `new_unchecked`, which skips the bounds check).
> * None of these libraries can easily unify with the existing `uN` and `iN` types.
> * Generally, they require a lot of unsafe code to work.
> * These representations tend to be slower and less-optimized than compiler-generated versions.
Do we have any data on what's slower and why? Are there any lower-stakes ways to fix these performance issues by, for example, adding/stabilizing suitable helper functions (like rust-lang/rust#85532) or adding more peephole optimizations in MIR and/or LLVM?
Main source of slowdown is from using enums to take advantage of niche optimisations; having an enum with a large number of variants to represent this niche is pretty slow to compile, even though most of the resulting code ends up as no-ops after optimisations.
I definitely should mention that I meant slow to compile here, not slow to run. Any library solution can be made fast to run, but will generally suffer in compile time when these features are effectively already supported by the compiler backends, mostly for free.
Is there any compile time issue when not trying to provide niches? Out of the potential use cases the RFC lists, only a couple seem to really care about niche optimizations. In particular, I don't expect that it typically matters for integers larger than 128 bits. (But again, surveying the real ecosystem would help!) If so, the compile time problem for crates like bounded-integer could be addressed more directly by stabilizing a proper way to directly opt into niches instead of having to abuse enums. And that would help with any bounds, while this RFC (without future possibilities) would not.
Well, I would expect some negative compile-time impact from repeatedly monomorphizing code that's const-generics over bit width or bounds. But that's sort of inherent in having lots of code that is generic in this way, so it's no worse for third party libraries than for something built-in.
That's very fair; I agree that we should have an ability to opt into niches regardless. I guess that my reasoning here is pretty lackluster because I felt that the other reasons to have this feature were strong enough that this argument wasn't worth arguing, although you're right that I should actually put a proper argument for it.
From what I've seen, of the use cases for generic integers:

1. Generalising primitives
2. Between-primitives integer types (like `u<7>` and `u<48>`)
3. Larger-than-primitives integer types

For 1, basically no library solution can work, so, that's off the table. For 2, which is mostly the subject of discussion here, you're right that it could probably be improved a lot with existing support. And for 3, most people just don't find the need to make generalised code for their use cases, and just explicitly implement, say, `u256` themselves with the few operations they need.

The main argument IMHO is that we can effectively knock out all three of these use cases easily with generic integers supported by the language, and they would be efficient and optimized by the compiler. We can definitely whittle down the issues with 2 and 3 as we add more support, but the point is that we don't need to if we add in generic integers.
Although, I really need to solidify this argument, because folks like you aren't 100% convinced, and I think that the feedback has been pretty valuable.
Yeah, I appreciate that you're trying to tackle a lot of different problems with a unifying mechanism. I focus on each problem separately because I want to tease out how much value the unifying mechanism adds for each of them, compared to smaller, more incremental additions that may be useful and/or necessary in any case. Only when that's done I feel like I can form an opinion on whether this relatively large feature seems worth it overall.
> * None of these libraries can easily unify with the existing `uN` and `iN` types.
> * Generally, they require a lot of unsafe code to work.
> * These representations tend to be slower and less-optimized than compiler-generated versions.
> * They still require you to generalise integer types with macros instead of const generics.
I'm not sure I understand the problem here. If a library provides `struct Int<const BITS: usize>(...);`, then code using this library shouldn't need macros to interact with it (except, perhaps, as a workaround for current gaps in const generics). The library itself would have a bunch of impls relating its types to the language primitives, which may be generated with macros. But that doesn't seem like such a drastic problem, if it's constrained to the innards of one library, or a few competing libraries.
I'm not sure I understand your argument. No matter what, a library solution cannot be both generic and unify with the standard library types. I don't see a path forward that would allow, for example, some library `Uint<N>` type to allow `Uint<8>` being an alias for `u8` while also supporting arbitrary `Uint<N>`. Even with specialisation, I can't imagine a sound subset of specialisation allowing this to work.

Like, sure, a set of libraries can choose to only use these types instead of the primitives, circumventing the problem. But most people will want to implement their traits for primitives for interoperability.
This overlaps a bit with the bullet point about unification, but I do think it depends a lot on what one is doing. For example, the num-traits crate defines traits that it needs to implement for the primitive types. On the other hand, any code that's currently written against the traits from num-traits may be happy with a third party library that provides `Int<N>` and `Uint<N>` and implements the relevant traits for them. And for something like bit fields, you may not need much generalization over primitive types at all: in the `MipsInstruction` example, you probably want some widening and narrowing conversions, but only with respect to `u32` specifically.

It's hard to form an opinion about how common these scenarios are (and whether there are other nuances) without having a corpus of "real" code to look at. Experience reports (including negative ones) with crates like num-traits and bounded-integer may be more useful than discussing it in the abstract.
Two things that came to mind:
So, I agree that this was one of the reasons, but it's worth reiterating that also, at that time, const generics weren't even stable. We had no idea what the larger ecosystem would choose to do with them, considering how many people were waiting for stabilisation to really start using them. (We had an idea of what was possible, but not what would feel most ergonomic for APIs, etc.) So, I personally felt that the library solution idea was mostly due to the fact that we didn't really know what libraries would do with const generics. And, overwhelmingly, there hasn't been much interest in it for what I believe to be the most compelling use case: generalising APIs without using macros, which right now cannot really be done without language support.
> * `From` and `TryFrom` implementations (requires const-generic bounds)
> * `from_*e_bytes` and `to_*e_bytes` methods (requires [`generic_const_exprs`])
>
> Currently, the LLVM backend already supports generic integers (you can refer to `iN` and `uN` as much as you want), although other backends may need additional code to work with generic integers.
One thing to emphasize here: getting `u128` to work was a huge endeavour, and bigger ones will be even harder for things like division -- even for 128-bit it calls out to a specific symbol for that.

Embarrassingly-parallel things like `BitAnd` or `count_ones` are really easy to support for bigger widths, but other things might be extremely difficult, so it might be worth exploring what it would look like to allow those only for `N ≤ 128` or something, initially.
> One thing to emphasize here: getting `u128` to work was a huge endeavour, and bigger ones will be even harder for things like division -- even for 128-bit it calls out to a specific symbol for that.

LLVM has a pass specifically for expanding large divisions into a loop that doesn't use a libcall, so that shouldn't really be an issue, though libcalls can still be added if you want something faster: llvm/llvm-project@3e39b27
As part of clang gaining support for `_BitInt(N)` where `N > 128`, basically all the work to make it work has already been done in LLVM. Div/Rem was the last missing piece, and that was added in 2022.
Clang still limits `_BitInt(N)` to `N <= 128` on quite a few targets: https://gcc.godbolt.org/z/8P3sMjavs
I think that's merely because they haven't got around to defining the ABI, but it all works afaict: https://llvm.godbolt.org/z/88K3ox7bh
Also worth mentioning that having `N > 128` be a post-monomorphisation error was seen as one of the biggest downsides to the previous RFC, and that this would cause more of a headache than just trying to make it work in general. [citation needed]
That's a good start. However, as long as Clang isn't shipping it, the people working on Clang aren't discovering and fixing any bugs specific to those platforms. The div/rem lowering happens in LLVM IR so it's hopefully pretty target-independent, but most other operations are still legalized later in the backends. That includes the operations the div/rem lowering relies on, but also any other LLVM intrinsics that the standard library uses or may want to use in the future.
I'm kind of relying a lot on the fact that even though not everyone is using `_BitInt(N)` right now, by the time we actually would be stabilising this RFC, LLVM would be a lot more robust in that regard. Kind of a role reversal from what happened with 128-bit integers: back then, we were really pushing LLVM to have better support, and C benefited from that, but now, C pushing LLVM to have better support will benefit Rust instead.
As you say, this can be revisited later, but note that there's no guarantee that Clang will ever support `_BitInt(129)` or larger on any particular target. The C standard only requires `BITINT_MAXWIDTH >= ULLONG_WIDTH`. If some target keeps it at 128 for long enough, it could become entrenched enough that nobody wants to risk increasing it (e.g., imagine people putting stuff like `char bits[BITINT_MAXWIDTH / 8];` in some headers).
I actually had no idea that was how the standard worked, but I shouldn't really be surprised, considering how it's C. :/
Hey, I'm the author of https://crates.io/crates/arbitrary-int . It seems like this proposal has some overlap with what I've built as a crate, so I can talk a bit about the hurdles I've run into. Arbitrary-int has a generic type It also provides types to shorten the name. For example It also provides a In general, implementing this as a crate worked pretty well, but there are some downsides:
> [#2581]: https://github.com/rust-lang/rfcs/pull/2581
> [Zig]: https://ziglang.org/documentation/master/#Primitive-Types
>
> # Rationale and alternatives
To me, the biggest reason to go this way is the coherence possibilities. I'd propose something like:

> ## Coherence
>
> One problem with other ways of doing this is that anything trait-based will run afoul of coherence in user code.
>
> For example, if I tried to `impl<T> MyTrait for T where T: UnsignedInteger`, then it takes extra coherence logic -- which doesn't yet exist -- to also allow implementing `MyTrait` for other things. And this is worse if you want blankets for both `T: SignedInteger` and `T: UnsignedInteger` -- which would need something like mutually-exclusive traits or similar.
>
> When user code does
>
> ```rust
> impl<const N: u32> MyTrait for u<N> { … }
> impl<const N: u32> MyTrait for i<N> { … }
> ```
>
> those are already-distinct types to coherence, no different from implementing a trait for both `Vec<T>` and `VecDeque<T>`.
I think this would likely go in the motivation section rather than the rationale section, but I agree with you that this is a good argument to mention. Will have to ponder where exactly it fits in the RFC.
Also, due to my design decision to base everything on simple types (no arrays), the maximum number of bits supported is `u127`.

I hadn't actually read the code yet, but I'm actually a bit curious why the max number of bits is 127 instead of 128. This feels like a weird restriction.

It is 128 bits actually.

By the way, I love this RFC! While arbitrary-int (as well as ux) provide the unusually-sized ints like u48 etc, having a built-in solution will feel more natural and allows treating numbers in a much more unified fashion, which I'm looking forward to.
> That's a lot better. Now, as you'll notice, we still have to cover the types `usize` and `isize` separately; that's because they're still separate from the `u<N>` and `i<N>` types. If you think about it, this has always been the case before generic integers; for example, on a 64-bit system, `u64` is not the same as `usize`.
It can certainly come after this RFC, but this could be made much more ergonomic by adding the following APIs:
```rust
impl usize {
    fn to_bits(self) -> u<Self::BITS>;
    fn from_bits(bits: u<Self::BITS>) -> Self;
}

impl isize {
    fn to_bits(self) -> i<Self::BITS>;
    fn from_bits(bits: i<Self::BITS>) -> Self;
}
```
So this is definitely not a problem.
Although now I reälize that won't work, because `usize::BITS` is a `u32`. But then again, it might be helpful to have `::BITS_USIZE` constants anyway.
This also won't work until generic const args are stable, since associated consts aren't allowed in const generics at the moment.
`usize` and `isize` could have an associated type alias for the equivalent `u<N>`/`i<N>`, and `to_bits`/`from_bits` could reference that type alias.
> This also won't work until generic const args are stable, since associated consts aren't allowed in const generics at the moment.

no, it works fine since there are no generics in the const expression. what doesn't work is `struct S<T: Tr>(S2<{ T::ASSOC_CONST }>);`
> Although now I reälize that won't work, because `usize::BITS` is a `u32`. But then again, it might be helpful to have `::BITS_USIZE` constants anyway.

I still think that these should be `u<const BITS: u32>` because of this. Once things like that are allowed, I want `u<{FOO.ilog2_ceil()}>` to just work, not need casts.

(We shouldn't make things worse forever for a minor mostly-irrelevant convenience today, since `u<const BITS: usize>` still doesn't even fix `to_ne_bytes` and such.)
(I changed my mind; I had just forgotten all the justification for choosing `usize` over `u32`, which is why I felt amenable to changing it at the time.)
most of why I wanted `usize` as the bit width parameter type is that it's the same as arrays, and that trying to share a const generic between a `u32` bit width and an array size is basically impossible until we get casting in const generic expressions. e.g.:
```rust
pub const fn to_binary<const N: usize>(v: u<N>, buf: &mut [u8; N]) -> &str {
    let mut i = 0;
    while i < N {
        buf[i] = if (v >> (N - i - 1)) & 1 == 1 { b'1' } else { b'0' };
        i += 1;
    }
    str::from_utf8(buf).unwrap()
}
```
How often do you need an array with one element per bit? If the array length has any other relationship with the number of bits (e.g., adding +2 for a `0b` prefix, or the aforementioned `to_le_bytes` and friends), then you still need const generic expressions. And casts are possibly less problematic than most other operations, which can fail and therefore imply more post-mono errors or need some solution for propagating bounds like "N > 0".
replied here
(replying to #3686 (comment) here so it's less likely to get lost when that thread gets resolved)
when doing SIMD with bitmasks -- all the time. In particular this is a big motivation for using |
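As a rough illustration of the kind of bitmask arithmetic being discussed -- sketched here on a plain `u8` standing in for an 8-lane mask, since `core::simd` is still unstable (this is an analogy, not the actual `Mask` API):

```rust
fn main() {
    // Typical things done with a SIMD bitmask once it lands in an integer:
    // counting selected lanes and finding the first selected lane.
    // A generic `u<N>` would let this stay generic over the lane count.
    let mask: u8 = 0b0010_0110; // lanes 1, 2, and 5 selected

    assert_eq!(mask.count_ones(), 3); // number of selected lanes
    assert_eq!(mask.trailing_zeros(), 1); // index of the first selected lane
}
```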
I am not super familiar with core::simd, so please correct me if I misunderstood something. I understand that the From looking at the current implementation it seems that As long as masks are a separate type with platform-dependent representation, it seems to me that the interaction with |
Honestly, if the arguments for I do think that it would be worthwhile to choose one over the other if there actually were a lot of cases where it were valuable, and SIMD does feel compelling enough to be valuable, but honestly, it's kind of tied at this point. I personally think that choosing Like realistically, if we were choosing an appropriate size for the count of bits, we should have chosen |
Using u16 would have the advantage that conversion to u32 and to usize is always lossless while the opposite direction isn’t. Wild idea: is there anything forcing the |
Nevermind that last bit, the lang item might not have to pick a type for N, but all the impl blocks involving those types have to pick a type for their const generic parameter, so |
I mean, it would be nice if you could allow I don't think it'd be a good idea to delay deciding what the type of the parameter should be in this case, although we might end up getting the above anyway before this feature is actually implemented, since it's currently one of the project goals; see rust-lang/rust-project-goals#100 |
You mean carve it out as something that doesn't even need any of the implementation challenges to be solved because you could mostly ignore the cast completely? Might work, might cause horrible problems down the line, I don't know. But I'm already working under the assumption that "MVP generic const expressions" is a prerequisite for generic integers to be stabilized and widely used. Starting from a smaller type doesn't buy you anything in terms of language design or type system interactions, but it's a nicer model for users (well, for me at least) because it means far fewer places where my "could this truncate?" spidey sense tingles.
That's mostly because we only got around to setting it to bitmasks on AVX512; other targets that have bitmasks are at least RISC-V V and ARM SVE. I fully expect
I'm not, actually, which is why I felt comfortable restarting this RFC at this point in time. Back at the original RFC's time, we were in a similar situation with regard to const generics in general that we are right now with regard to these other const generic features: they were definitely coming, we had plans for them, and we abstractly knew what they would look like, but they weren't complete yet. The difference is that we genuinely cannot implement generic integers without const generics, whereas we totally can have generic integers without generic const expressions. Yes, it's likely that a lot of these things will be mostly resolved by the time an implementation exists, but even if they were delayed for a year, they didn't live up to what we hoped they'd be, or they take a very long time to become stable, I think that generic integers could still exist and be useful. For example, if we allowed most integer methods and operations via the generic type but still made So, I think that it's better to consider the feature from the perspective that these features don't exist, so we don't hype up potentially unrealistic fantasies of what we'll be able to do with them. And I think that even without them, this is still incredibly useful of a proposal, and it's not incompatible with the improvements we can make later with these features. |
Adding additional platforms that use an integer representation for masks internally doesn't remove the (common and important!) ones that prefer the vector representation. It also doesn't make mask<->integer punning in user code any better for performance portability -- even on SVE and RVV, you want the generated code to stay in predicate/mask land as much as possible. I also have some doubts about whether integers are really the best representation of masks in RVV and predicates in SVE, considering ISA design, calling conventions, the vendor-defined C intrinsics, and what little I know about existing uarchs. But I don't want to come off as telling you and the others working on portable SIMD how to do your job and this isn't the right venue for a deeper discussion in any case. |
portable-simd's only vector representation for masks is not bitmasks, but instead full integer elements; bitmasks currently are just struct wrappers around an integer (and that seems unlikely to change, though operations may change to use more intrinsics). We rely on llvm optimizations to translate that to operations on mask/predicate registers.
I don't get why you're explaining that in response to what I've written. Let's try a different angle. Can you point at some concrete code snippets where I've tried to guess at where exactly you're trying to go with that connection but it feels a bit like we've been talking past each other. Put differently, I would distinguish between two aspects:
I'm aware of the existence |
Unrelated to the above, but another thing that occurred to me as a benefit of this system is exhaustive matching. In a recent project I have quite a few places where I'm doing bitmasking, then matching on the result, where there always has to be a wildcard arm
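A minimal stable-Rust illustration of that pain point (the flag values and names here are made up for the example):

```rust
fn main() {
    let flags: u8 = 0b10;

    // After masking to two bits the value can only be 0..=3, but the
    // compiler can't see that, so a wildcard arm is still required today.
    // If the masked result had type `u<2>`, this match would be exhaustive
    // without the `_` arm.
    let name = match flags & 0b11 {
        0 => "none",
        1 => "read",
        2 => "write",
        3 => "read+write",
        _ => unreachable!(), // provably dead, yet mandatory
    };

    assert_eq!(name, "write");
}
```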
That’s a really good point, that I think even applies to current types. Let’s say I have a random number generator that gives me a u64, and I want to split that into two u32s. For explicitness’ sake, I will usually bitmask both the original number and the down-shifted one with u32::MAX before as-casting to u32. Clippy of course still complains and so I need to allow some truncation lint. What would be great, especially for generic ints, would be to have a method
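Here's what that idiom looks like today on stable Rust (the constant is arbitrary); the explicit masks are exactly the kind of thing that still trips the truncation lints mentioned above:

```rust
fn main() {
    // Splitting a 64-bit value into two 32-bit halves. The mask before the
    // `as u32` cast is redundant (the cast truncates anyway) but is often
    // written for explicitness -- and clippy still flags the cast itself.
    let r: u64 = 0x1122_3344_5566_7788;
    let lo = (r & u64::from(u32::MAX)) as u32;
    let hi = (r >> 32) as u32;

    assert_eq!(lo, 0x5566_7788);
    assert_eq!(hi, 0x1122_3344);
}
```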
So, at least that part will hopefully be covered by the Sure, this doesn't cover cases where the bit fields are discontinuous, but it covers most of them. |
yes they are bounded, but also critical for getting arbitrary length
One example is when counting the number of |
Those both seem like things that it would make sense to have methods for directly on Mask. |
Yeah, that's part of the problem: either things are popular enough to warrant a method on This issue on adding something like also, AVX512 has |
If the end goal is to not have any bounds, then that's a fair point. There are other possible solutions (e.g., enough "generic const expressions" to make
I know these use cases well, but as mentioned I would prefer not to write them that way in portable SIMD because there are much more efficient ways to implement them on most non-x86 targets. Sometimes even the most efficient spelling is not worth it (e.g., SwissTable/Hashbrown switches between 128-bit SSE2, 64-bit NEON, and integer-based SWAR depending on the target, mostly because of the wildly varying cost of the necessary mask operations). As far as possible, a portable API should avoid guiding people into performance portability cliffs, and provide more abstract operations whenever feasible. This was also a theme in the Webassembly discussion you linked. As you say, there's always a long tail of creative uses that the mask abstraction can't cover completely, so I would never argue for not having conversions between masks and integers at all. But I think in the context of "using core::simd instead of core::arch, and abstracting over lane count" they're so niche that nicer APIs for that specific combination come very low on the list of priorities. Removing

**Aside about AVX-512 `kadd*`**
I would love to hear why the architects added this one. I googled a bit and found virtually no mention of it and only one potential use. But that article also says that the instruction is too slow to be worthwhile on the CPUs considered and simply doing the mask->GPR->mask round-trip that |
yes, but I also think that translating from a reasonable portable implementation to whatever weirdness your particular cpu architecture requires should be mostly llvm's problem to handle -- since that's how almost all of portable-simd currently operates and that allows portable-simd's users to easily write actually portable simd and still get good performance without having to spend weeks researching how every different target does stuff its own special way. iirc so far the only exceptions to portable-simd's leaving it up to llvm are switching between bitmasks/fullmasks, implementing dynamic swizzle (since llvm just plain doesn't have a non-arch-specific operation for that), and working around aarch64 backend bugs for integer division. so, in summary, I think a programmer writing |
I don't disagree but I also don't think it conflicts with what I said, considering that LLVM does not (and might never) 100% live up to that goal. Providing higher-level operations on As I said before, getting rid of the bounds on |
## Documentation decluttering
Having generic impls would drastically reduce the noise in the "implementations" section of rustdoc. For example, the number of implementations for `Add` for integer types really drowns out the fact that it's also implemented for strings and `std::time` types, which is useful to know too.
```suggestion
Having generic impls would drastically reduce the noise in the "implementations" section of rustdoc. For example, the number of implementations for `Add` for integer types really drowns out the fact that it's also [implemented for strings](https://doc.rust-lang.org/stable/std/ops/trait.Add.html#impl-Add%3C%26str%3E-for-String) and `std::time` types, which is useful to know too.
```
### Summary

Adds the builtin types `u<N>` and `i<N>`, allowing integers with an arbitrary size in bits.

Rendered
### Details
This is a follow-up to #2581, which was previously postponed. A lot has happened since then, and there has been general support for this change from a lot of different people. It's time.
There are a few key differences from the previous RFC, but I trust that you can read.
### Thanks
Thank you to everyone who responded to the pre-RFC on Internals with feedback.