
IDEs and proc-macros #11014

Closed
Veykril opened this issue Dec 14, 2021 · 33 comments
Labels
A-macro macro expansion A-proc-macro proc macro C-Architecture Big architectural things which we need to figure up-front (or suggestions for rewrites :0) ) E-unknown It's unclear if the issue is E-hard or E-easy without digging in

Comments

@Veykril
Member

Veykril commented Dec 14, 2021

Edit: I've written my current thoughts on the matter down in a blog post: https://veykril.github.io/posts/ide-proc-macros/#conclusion

This issue is meant as a place to collect information and ideas about the current completion (or rather, general IDE support) dilemma we have with proc-macros.

Current State of Things

When typing inside an attributed item (or a proc-macro invocation), the user will inevitably create some form of syntax error at some point. This currently causes those proc-macros to just bail out, emitting a compile_error! invocation that r-a happily replaces the entire item with. The downside is that while typing in such an item, all IDE features momentarily stop working, resulting in incorrect or outright missing completions and flickering syntax highlighting (when semantic highlighting is in use).
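For illustration, here is a minimal sketch (not taken from any particular crate) of the common syn-based pattern that produces this behaviour; on a parse error the macro returns only a compile_error! invocation, so the item it was attached to effectively disappears from r-a's view:

use proc_macro::TokenStream;
use quote::quote;

#[proc_macro_attribute]
pub fn my_attr(_args: TokenStream, input: TokenStream) -> TokenStream {
    match syn::parse::<syn::ItemFn>(input) {
        // Happy path: transform the item and emit the result.
        Ok(item) => quote!(#item).into(),
        // Error path: the original item is dropped entirely and replaced by a
        // compile_error! invocation, which is all r-a gets to see.
        Err(err) => err.to_compile_error().into(),
    }
}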

First Attempt at Solving This

Our first idea for solving this was to nudge proc-macro authors towards doing more fallible expansions: instead of only emitting a compile_error! on unexpected input, they should try to produce output as close as possible to what the macro would expand to in the happy case, alongside the compile_error!.
This turned out not to be as good an idea as we first imagined.
It goes against syn's design, as the crate is built to return early on errors, and, as it turns out (I wasn't aware of this prior), attribute and derive macros have the invariant/contract[1] that they should always receive TokenStreams that parse to valid Rust syntax.
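For reference, the "fallible expansion" idea would have looked roughly like this (a sketch, not a prescribed API): the macro still reports its error but also re-emits a best-effort expansion, here simply the unmodified input, so the item doesn't vanish:

use proc_macro::TokenStream;
use quote::quote;

#[proc_macro_attribute]
pub fn my_attr(_args: TokenStream, input: TokenStream) -> TokenStream {
    let fallback = input.clone();
    match syn::parse::<syn::ItemFn>(input) {
        Ok(item) => quote!(#item).into(),
        Err(err) => {
            // Emit the compile_error! *and* the original input, so IDE
            // features keep working on the item while it is being typed.
            let mut out = TokenStream::from(err.to_compile_error());
            out.extend(fallback);
            out
        }
    }
}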

Quoting @dtolnay's comment[1], which raises a lot of good points on the matter:

Passing syntactically invalid input to attribute macros, whether in the context of rustc or rust-analyzer, is not a good plan because doing the recovery, emitting diagnostics about the recovery, and then running macros on the original unrecovered input means that in general you're forcing macros to reimplement their own recovery independent and inconsistent with rustc/rust-analyzer — so rustc/rust-analyzer will diagnose what it thinks you meant, and the macro will diagnose what it thinks, and probably they won't align, resulting in dissonant user-facing messages. The correct behavior in my opinion is for rustc/rust-analyzer to perform its normal high-effort syntax recovery the same as for nodes that are not macro input, report diagnostics on that recovery as normal, then pass the recovered input for the attribute to proceed on.

Recovery in this context would involve rust-analyzer snipping out the nearest syntactically invalid syntax tree node and swapping in whatever syntactically valid placeholder/sentinel it wants in its place. It can then run the macro which will expand successfully, then provide autocompletion and other functionality to the programmer based on the position where it finds its sentinel in the expanded output, and the original snipped out syntax.

While the user is typing inside of attribute macro input this approach will give high quality results for the IDE in vastly more cases than rust-analyzer's current behavior, without trying to force changes for invalid macro input into all the attribute macros in the ecosystem.

Possible Solutions

So with this, we have a few options at hand currently:

  • Nudge proc-macro authors to "fix" their macros.
    For attributes and derives (both of which expect Rust syntax) this does seem like a bad idea after all, considering the points dtolnay has raised.
    For function-like proc-macros, on the other hand, no Rust syntax is being passed, and the same goes for macros that error out on semantic problems such as expecting certain identifiers; in these cases we should still nudge proc-macro authors to make their macros recoverable, as there is nothing r-a can do there.
    This also has the problem that we would define an implicit interface[2], with r-a relying on exactly this behaviour from proc-macros, running the danger that if we ever wanted to change our mind on this we would cause ecosystem churn.
  • Analyze the macro expansion output, and if it merely expands to a compile_error!, keep the original item. It is uncertain whether this is feasible in the first place, and it sounds like a bad trade-off in general: many macros that change their input item would not reflect that change properly here, since they still fail to expand.
    This still violates the proc-macro contract.
  • Fix up the input nodes in a heuristic manner, replacing them with what we expect they should be for completions, or snipping them out. This would ideally give the best results if properly implemented, but it is also the most difficult to get right.
  • Do nothing. Obviously a choice, and obviously not a wise one.
    This still violates the proc-macro contract.

Fixing up the input nodes seems like the best approach to me personally. To reiterate, that would mean the following (see the sketch below):

  • For attribute and derive macros, we fix up the input item's syntax errors as best as we can.
  • We still ask proc-macro authors to implement recovery strategies for when their macros receive unexpected but syntactically valid input, as IDEs fundamentally need the macro's help here.
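To make this concrete, here is a hypothetical illustration of such a fixup (the placeholder name is illustrative, not a committed design): r-a patches the broken input before handing it to the macro and maps positions back afterwards.

// What the user actually typed (mid-edit, syntactically invalid):
#[some_attr]
fn foo() {
    let x = ;            // missing expression
}

// What r-a could hand to the proc-macro instead (syntactically valid):
#[some_attr]
fn foo() {
    let x = __ra_fixup;  // sentinel expression, stripped back out of the
                         // expansion before completions are computed
}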

Now, we aren't the only IDE for Rust: IntelliJ currently seems to hardcode a bunch of popular attributes in order to special-case (ignore) them[3]. This obviously also weighs into the decision, and we should agree on what to do here so that people do not have to write specific adjustments for each IDE (that would be a truly nightmarish scenario). cc @vlad20012

Assuming we go with the approach of fixing up/snipping out syntactically invalid syntax nodes, the question remains how to do this reliably in a way that enables completion to work as well as, if not better than, it would with proc-macros trying to recover from everything, as our initial plan envisioned. This is the main question we have to resolve next.

On a side note, regarding completions with proc-macros in general, there is a potentially interesting trick we could try to make use of to get proc-macros to help IDEs guide completions, outlined here[4]. This is not relevant to fixing up syntax nodes, though.

Footnotes

  1. https://github.com/rust-analyzer/rust-analyzer/issues/10468#issuecomment-975480478

  2. https://github.com/rust-analyzer/rust-analyzer/issues/10468#issuecomment-975315065

  3. https://github.com/rust-analyzer/rust-analyzer/issues/10468#issuecomment-975436728

  4. https://github.com/rust-analyzer/rust-analyzer/issues/7402#issuecomment-770196608

@Veykril Veykril added E-unknown It's unclear if the issue is E-hard or E-easy without digging in C-Architecture Big architectural things which we need to figure up-front (or suggestions for rewrites :0) ) A-macro macro expansion labels Dec 14, 2021
@matklad
Member

matklad commented Dec 14, 2021

The question remains how to do this reliably in a way that enables completion to work as good

An interesting bit here is that, today, rust-analyzer doesn't actually know what a valid syntax tree looks like. We never encode, in a reliable manner, which parts of a tree are mandatory and which are optional. This is something ungrammar can potentially help with, as it is intended to encode valid tree structure. It doesn't, however, encode what is valid input at the token level.

@flodiebold
Member

flodiebold commented Dec 14, 2021

I'm not convinced that there's a 'contract' that attribute/derive proc macros can't receive invalid syntax. On the contrary, the fact that rustc passed invalid syntax to these macros for several versions, even if that was unintentional, means that there isn't such a contract. This, plus the fact that letting proc macros handle the invalid syntax is the only way to get 100% correct behavior for some of them, means to me that we should at least have a way to let proc macros opt in to receiving invalid syntax and handling it themselves.

(Also, I still think that having a full Rust parser in every proc macro is a bad idea. Whenever Rust introduces some new syntax, these proc macros will break until their parser is updated. This could be fixed by having the compiler provide parsing functions, at the expense of hugely increasing the proc macro API interface. In that case, we could also provide a proper error-resilient parser to the proc macros.)

@jgarvin

jgarvin commented Dec 30, 2021

If there were a way for macros to express "I am not going to edit the token stream, only add around it, and I expect the stream to be normal Rust code (as opposed to some arbitrary DSL)", would that case be much easier for rust-analyzer to do good completion for? As I understand it, a lot of the tricky cases come from the possibility that the macro may do arbitrary edits, but a lot of (most?) macros (both declarative and proc, in my experience) just generate extra code to surround existing valid Rust code: either generating extra items after a struct, or inserting extra control flow around a block, without actually changing the starting struct or block contents, or only changing them in a very limited way (e.g. removing #[blah] markers on struct fields, because those are just indicators telling it to generate some extra field-specific code). This, combined with macro hygiene, seems like it would let rust-analyzer know that it's valid to offer any completion option that would make sense if the struct/block were present without being a macro input. But I'm no expert; maybe there are still trivial examples where completion would be ambiguous?

@flodiebold
Member

If there were a way for macros to express "I am not going edit the token stream, only add around it, and I expect the stream to be normal rust code (as opposed to some arbitrary DSL)", would that case be much easier for rust-analyzer to do good completion for?

Macros that do just that already work very well, so such an annotation wouldn't really give us anything.

@Hades32

Hades32 commented Jan 8, 2022

As an FYI for everyone reaching this after having their whole code (e.g. of a Cloudflare worker) marked red with "unexpected token" by rust-analyzer:
The easiest way to work around this is to comment out the attribute (e.g. #[event(fetch)]) while editing and uncomment it when you're done.


@memoryruins
Contributor

memoryruins commented Jan 21, 2022

@twitchax

it makes me wonder if there could be a rust-analyzer option to opt-in to ignoring certain macros

With #11193, it is now possible to specify macros to be replaced by dummy expanders https://rust-analyzer.github.io/thisweek/2022/01/10/changelog-111.html.


If desired, you can also disable the different kinds of proc-macros altogether for now (https://rust-analyzer.github.io/manual.html#configuration), as well as specific diagnostics (not for everyone, but this can be a reasonable option for some).


@danielhenrymantilla

My two cents on the matter:

  • Point 0: derives always add stuff to the already given input, so these are out of the "equation" here; we'll just consider function-like and attribute macros.
  1. Ideally (as in, the best mid-to-long-term solution), the proc-macro API would be improved a bit so that macros are allowed to return a Result. This would allow rustc itself (or the IDE impersonating it) / the proc-macro server side to automatically apply the "emit the original input anyway, since we're going to bail out eventually" logic.

    Incidentally, being able to use ? in proc-macro logic would be a boon. No more parse_macro_input!-like macros and whatnot, just good old ?-handling.

    • Aside: here is the kind of wrapping I'd find useful to be done automatically, an example: it's easy to tweak such logic to "clone" the input beforehand, and re-emit it in the |err| …-handling part.
  2. As of now, this behavior can be polyfilled by detecting invocations of compile_error! for attribute macros (see the sketch after this list).

    • Granted, "detecting a compile_error! invocation" is indeed hacky / prone to false positives (an example). Hence the previous point for a proper approach. But in practice it will work 99.9% of the time, if not more.

    There is only one scenario where this could be worse than the current status quo, and it's statistically negligible, imho: a proc-macro attribute expecting code that is semantically invalid albeit syntactically valid, such as cxx's unsafe extern blocks, free functions with self receivers, and a few other rare quirks. Even in that case, users would still be able to see that the proc-macro attribute failed, as the root cause of their subsequent "invalid semantics" error.

    Hence why I think the "detect compile_error! invocations in the output and, in that case, re-emit the input as-is" approach (done by r-a, not by each proc-macro implementation nor by syn/quote) is currently the best way to palliate the status quo.

  3. Regarding function-like macros, there is nothing that can really be done as of now, but I think that's okay to begin with; if we could improve the situation with proc-macros, that would already be a big win.

    That being said, I'd be curious to see how the naïve approach of also applying here the "in case of compile_error!, have the server re-emit the given input, verbatim/funneled-through-an-identity-macro" logic would feel from a user's point of view.

    If anything, an optional knob could be offered for users to experiment with that, and then decide based on how often people go and reach for it.
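For concreteness, a rough sketch of what that server-side polyfill could look like (the helper names here are hypothetical; this is not existing r-a or proc_macro API):

use proc_macro2::{TokenStream, TokenTree};

// Heuristic check: does the expansion mention `compile_error` anywhere?
// (Prone to false positives, as noted above.)
fn contains_compile_error(ts: &TokenStream) -> bool {
    ts.clone().into_iter().any(|tt| match tt {
        TokenTree::Ident(ident) => ident == "compile_error",
        TokenTree::Group(group) => contains_compile_error(&group.stream()),
        _ => false,
    })
}

// Expand, but fall back to re-emitting the original input (plus the error)
// when the macro appears to have bailed out.
fn expand_with_fallback(
    expand: impl Fn(TokenStream) -> TokenStream,
    input: TokenStream,
) -> TokenStream {
    let output = expand(input.clone());
    if contains_compile_error(&output) {
        let mut combined = input;
        combined.extend(output); // keep the diagnostic around too
        combined
    } else {
        output
    }
}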


Some context: I'm chiming in because I've been seeing, in the #dev-tools channel of the community Discord, that around 80 to 90% of the questions right now are about not getting completion inside proc-macro attributes. So, even though I'm not much of a r-a user myself (for personal "caveman" reasons, i.e. not wanting to rely too much on tools; nothing against r-a, it's quite a wonderful tool), I have to admit that this feels like an important issue, one which currently seems to be undermining mainly proc-macro attributes themselves (and maybe r-a itself a bit as well? I may be wrong).


Finally, even though I don't have much knowledge of r-a internals, if my proposal sounds legitimate/plausible, I may try to go and implement it myself, should a lack of person-power otherwise be a deterrent to trying it.

@flodiebold
Member

Sorry but no, allowing proc macros to return results is exactly the wrong direction to take. The "emit the original input" approach is just a workaround, and not a particularly good one.

All proc-macro attributes do some kind of transformation to their input, and that transformation has some relevant semantic effect. The original input is not what the user actually intends to give to the compiler, and using it would be wrong. The only reason it kind of works in a lot of cases is that we don't yet have a lot of checks that would make this more obvious.

For example, take the tokio::main macro. Just passing through the input would mean you have an async main function, which is not allowed. If rust-analyzer actually checked for this, with the original input approach that would mean that while you're typing, you get flickering "main cannot be async" errors.
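(For illustration, a simplified sketch, not the exact generated code: #[tokio::main] roughly rewrites the async fn into a synchronous main that drives the body on a runtime, which is why re-emitting the original async fn main would be wrong.)

fn main() {
    tokio::runtime::Builder::new_multi_thread()
        .enable_all()
        .build()
        .expect("failed to build runtime")
        .block_on(async {
            // ... the original body of the user's `async fn main` ...
        })
}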

Something similar goes for the async_trait macro; in fact you might also get type checking errors since we're unlikely to lower async trait functions to something that makes sense.

Macros like Rocket's get don't transform the item at all AFAIK, but generate some other items, so if they're not expanded you'll get flickering errors in other places. And so on.

@HKalbasi
Member

There is a syn-mid crate that is worth mentioning in this thread. It doesn't parse function bodies and only allows proc-macros to include bodies as-is in the result. That's enough for most of the attribute macros I'm aware of, and it would solve this problem, plus save some compile time, among other benefits.

It doesn't yet work for every use case that it could. If we want to go with the route of fixing the ecosystem, making syn-mid perfect and suggesting it to proc-macro authors is an option.


@HKalbasi
Member

syn-mid itself depends on syn (feature = derive) and shares some code with it, so it's fine alongside serde, which uses syn (feature = derive) as well. But if both syn (feature = full) and syn-mid are in the dependency tree, it makes compile times worse. syn-mid is relatively small, so we'd need to measure whether the difference is significant.

@jgarvin

jgarvin commented Jan 28, 2022

It seems inherent to the problem that any solution is going to involve either imperfect heuristics (like dtolnay's suggestion mentioned in the OP) or macros carrying annotations the analyzer has to trust, communicating the macro's intent (like the ticket linked in the OP for macros to expose a grammar). Does anybody think there is a third way?

The only other alternative that comes to mind is really still just a heuristic: empirically track how expanded code changes in response to changes in the input, concluding things like "when this input token changes, in practice these output tokens change". In theory RA could call a proc-macro many times with many different minor edits to try to understand its behavior, but this seems very, very hard.

There's precedent in the Lisp world for macro<->tool communication; e.g. in Emacs Lisp you can annotate your macro to specify how code passed into it should be indented, because even though Lisp is homoiconic there are different conventions for loops/branches/declarations.

@vlad20012
Member

vlad20012 commented Jan 28, 2022

It looks like IntelliJ Rust (when completing inside an item under an attr macro) always shows completions as a union of the completions inside the macro expansion and those inside the item as if there were no attr macro 🤔

@danielhenrymantilla

danielhenrymantilla commented Jan 29, 2022

The "emit the original input" approach is just a workaround, and not a particularly good one.

All proc macro attribute do some kind of transformation to their input, which has some relevant semantic effect. The original input is not what the user actually intends to give to the compiler, and using it would be wrong.

See

a proc-macro attribute expecting semantically invalid code albeit syntactically valid one; such as cxx's unsafe extern block, free functions with self receivers, and a few other rare extra quirks. And even in that case, they'd still be able to see that the proc-macro attribute failed, as the root cause of their subsequent "invalid semantics" error.

So, granted, I did not think of #[some_crate::main] async fn main as a potentially more frequent instance of this situation. But from the looks of it, it seems that IDE users would rather get auto-completion, even at the price of getting, on top of the #[tokio::main] syntax error, an extraneous "main can't be async" error, than get nothing at all.

And FWIW, the way tokio::main currently handles being IDE-friendly is by doing precisely this: Playground — no IDEs involved, and yet we do get a main can't be async error.


That being said, I agree that -> Result for proc-macros, alone, wouldn't be enough for a proper mid-to-long-term solution; maybe I was too ambitious in mentioning that. I'd rather just have proc-macros not be called at all on input that even #[cfg(FALSE)] can't handle, to begin with. So I recant that part.

But I do stand by the importance of featuring a palliative workaround, sooner rather than later, and regarding that, having the proc-macro server/driver re-emit the input in case of an error is the least bad option out there.

@Follpvosten

If there were a way for macros to express "I am not going edit the token stream, only add around it, and I expect the stream to be normal rust code (as opposed to some arbitrary DSL)", would that case be much easier for rust-analyzer to do good completion for?

Macros that do just that already work very well, so such an annotation wouldn't really give us anything.

Sorry to reply to this so late, but this is completely untrue for the current attribute macro implementation. Even the most trivial cases of these, like async-trait or route macros for various web frameworks (which don't explicitly support it, which btw is something async-trait has declared it never will), just completely fail to provide any sort of integration as soon as the syntax doesn't parse. Adding an option to just treat the macro's input as its output in rust-analyzer would absolutely solve that, because that's how it worked before (as far as I understood), and it was never an issue for these cases.

@flodiebold
Member

async-trait etc. don't just add around the token stream; they try to parse it and replace it with a compile error if they fail. Also, even disregarding that, async-trait does change the token stream quite a bit -- it has to, to make it actually valid Rust.

For why just treating the macro input as its output is not a solution, this has already been discussed above.

@Follpvosten

Sorry, I guess what I really want to say is: If I disable attribute macro expansion, I encounter zero issues with any of the attribute macros I'm using. Enabling this (admittedly experimental, but enabled by default) rust-analyzer feature just breaks all of them.

@jgarvin

jgarvin commented Feb 7, 2022

@flodiebold Wait a second. When I described having a designation for macros that only add to existing code, and you said those already work fine, I assumed you still included macros that have to parse what they were passed. If that's not the case, then we are miscommunicating.

By macros that "only add code" I meant the sort of thing derive macro implementations typically do, where they iterate struct fields and generate one method for each field (e.g. getters or reflection methods). Iterating the struct fields still requires using something like syn to parse them. A macro that emits extra code without doing any parsing of what it is passed is nearly useless. The distinction I was trying to make is that such macros emit their input tokens as-is, only adding new tokens rather than editing what they were given, and this still includes macros that require parsing (see the sketch below).
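As an illustration of the kind of macro being described (a sketch, not any particular crate's implementation): a derive that parses its input with syn but only ever emits additional code, never a modified copy of the item.

use proc_macro::TokenStream;
use quote::quote;
use syn::{parse_macro_input, Data, DeriveInput, Fields};

#[proc_macro_derive(Getters)]
pub fn derive_getters(input: TokenStream) -> TokenStream {
    let input = parse_macro_input!(input as DeriveInput);
    let name = &input.ident;

    // Only named-field structs are handled in this sketch.
    let fields = match &input.data {
        Data::Struct(s) => match &s.fields {
            Fields::Named(named) => &named.named,
            _ => return TokenStream::new(),
        },
        _ => return TokenStream::new(),
    };

    // One getter per field; the struct itself is left untouched, since a
    // derive can only ever add new items next to its input anyway.
    let getters = fields.iter().map(|f| {
        let ident = f.ident.as_ref().unwrap();
        let ty = &f.ty;
        quote! { pub fn #ident(&self) -> &#ty { &self.#ident } }
    });

    quote! { impl #name { #(#getters)* } }.into()
}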

@flodiebold
Member

@jgarvin It doesn't matter whether the macro parses the input or not; what matters is whether it passes through the input even in error cases. If it does, it will work as well as we can make any macro work (i.e. not perfectly, but very well). If it does not at least pass through the input in error cases, I would not interpret that as "not editing the token stream", but that would be quite easy to fix in the macro -- not much harder than adding an annotation. This is the easiest kind of macro to make work, so adding an annotation to handle this case would give us almost nothing.

@jgarvin

jgarvin commented Feb 7, 2022

Are there any examples where rust-analyzer falling back to assuming tokens are passed through unedited on proc-macro failure would make completion worse? Since the intention of many macros is to pass through the original tokens, it still seems strictly better to have this heuristic in one place (the analyzer) than to count on the entire ecosystem updating to an alternative that doesn't exist yet.

@flodiebold
Member

It might not make completion worse, but it will lead to all kinds of other weird effects, especially when we have more diagnostics, as explained above.

it still seems strictly better to have this heuristic in 1 place (analyzer) than counting on the entire ecosystem to update to an alternative that doesn't exist yet

This is a false dichotomy. We can implement a workaround that will actually work correctly most of the time, even if it's not the full solution: Fixing up the input nodes that we pass to attribute macros.

@vlad20012
Member

vlad20012 commented Feb 8, 2022

@flodiebold If most users only complain about completion, you could implement the fallback only for completion, without all the other weird effects such as false-positive diagnostics. I.e. most of the analysis in RA would see just the compile_error there, but the completion code would see the original TT. This is how it works in IntelliJ Rust now. But I'm not sure how this fits into the RA architecture.

bors bot added a commit that referenced this issue Feb 12, 2022
11444: feat: Fix up syntax errors in attribute macro inputs to make completion work more often r=flodiebold a=flodiebold

This implements the "fix up syntax nodes" workaround mentioned in #11014. It isn't much more than a proof of concept; I have only implemented a few cases, but it already helps quite a bit.

Some notes:
 - I'm not super happy about how much the fixup procedure needs to interact with the syntax node -> token tree conversion code (e.g. needing to share the token map). This could maybe be simplified with some refactoring of that code.
 - It would maybe be nice to have the fixup procedure reuse or share information with the parser, though I'm not really sure how much that would actually help.

Co-authored-by: Florian Diebold <[email protected]>
@trevyn

trevyn commented Feb 22, 2022

Is there a way to tell, inside of a proc-macro, when it is being expanded by the r-a host, versus by cargo/rustc during a normal check/build?

EDIT:

std::env::current_exe().unwrap().file_stem().unwrap() == "rust-analyzer"

appears to work.

I'm having to special-case this because some of my macros have side-effects that aren't really meant to be run on code that the user knows is incomplete.

In general, it does seem useful to be able to somehow distinguish between "this may be incomplete code, don't bother me with too many errors" and "ok, it's ready now, show me some errors".
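A sketch of that kind of guard (the executable-name check is a heuristic, not a supported API, so treat it as an assumption that may break across setups):

// Best-effort check for whether this expansion is happening inside
// rust-analyzer's proc-macro server rather than a real rustc/cargo build.
fn expanding_under_rust_analyzer() -> bool {
    std::env::current_exe()
        .ok()
        .and_then(|exe| {
            exe.file_stem()
                .map(|stem| stem.to_string_lossy().contains("rust-analyzer"))
        })
        .unwrap_or(false)
}

// Usage inside the macro: skip side effects such as writing generated files.
// if expanding_under_rust_analyzer() { /* skip table/migration generation */ }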

@flodiebold
Member

@trevyn Can you give some examples of such side-effects and errors? In general I don't think it's feasible to distinguish between those situations, although I guess it would make some sense to distinguish between "live diagnostics in editor" and "user-triggered build".

@trevyn

trevyn commented Feb 22, 2022

Sure, my turbosql crate generates SQL tables and migrations directly from a derive macro, e.g.:

#[derive(Turbosql, Default)]
struct Person {
    rowid: Option<i64>,
    name: Option<String>,
}

has the side effect of updating a generated file in the end-user project that describes and ultimately creates a SQLite Person table.

When rust-analyzer proc-macros are enabled, apparently the macro gets called for every keystroke, so if I edit the name of the struct to PersonTwo, I end up with Person, PersonT, PersonTw, and PersonTwo tables.

With rust-analyzer proc-macros disabled, the macro would run on an explicit save/check action only, yielding the original Person and edited PersonTwo tables only, as desired.

I just updated it with the above snippet so that side-effects are suppressed when the current_exe() is rust-analyzer, and it seems to work as expected.

I'm also experimenting with doing a similar thing for a code-defined RPC API in turbocharger:

#[turbocharger::backend]
pub async fn get_person(rowid: i64) -> Result<Person, turbosql::Error> {
    turbosql::select!(Person "WHERE rowid = ?", rowid)
}

which has a similar side effect of collecting function signatures into a generated file for auditing.

I think the problematic errors I ran into earlier and wanted to suppress were coming from race conditions that the frequent macro calls were revealing; with the side-effects suppressed things seem much better behaved.

@samsieber

How accurate does the autocomplete have to be?

I think most proc-macros (at least those it would be reasonable to want autocomplete for) could express a superset of their valid inputs (i.e. a grammar accepting all valid inputs plus some invalid ones) as a tree-sitter grammar that has access to the Rust tree-sitter grammar, e.g. being able to say "this part right here would be a Rust expression".

If someone wrote a crate that took a tree-sitter grammar and used it to emit some basic pre-flight checks for proc-macro crates in a standard way, then RA could detect macros that use that crate and load up the proc-macro's grammar to perform better auto-complete.

TL;DR - if we give proc-macros a semi-standard way (one that's friendly to RA and autocomplete) to pre-validate their input, then RA could detect those and give better completion.

@flodiebold
Member

@Veykril now that we have the syntax fixup and it seems to actually be working pretty well, should we keep this issue open?

@Veykril
Member Author

Veykril commented Jun 21, 2022

Yes I think we can close this 👍

@safinaskar

@Veykril, please reopen. I can still reproduce this. Here is my code:

#[tokio::main]
async fn main() {
    {
}

And here is Cargo.toml:

[package]
name = "tok"
version = "0.1.0"
edition = "2024"

[dependencies]
tokio = { version = "1.42.0", features = ["full"] }
tracing = { version = "0.1.41", features = ["attributes"] }

rustc version is rustc 1.85.0-nightly (acabb5248 2024-12-04). I use VS Code, which reports Version 0.3.2204, Server Version 1.85.0-nightly (acabb52 2024-12-04) for rust-analyzer.

In the code above, the whole item is highlighted in red. But if I comment out #[tokio::main], then only the mismatched brackets are highlighted. So the user experience with proc-macros is still worse than without them. The same thing applies to #[tracing::instrument], so this issue is not specific to tokio::main.

@safinaskar

@Veykril: rust-analyzer.procMacro.ignored (#11193) didn't help either! Everything is still red. The only thing that did work for me is this (#15528):

#[cfg_attr(not(rust_analyzer), tracing::instrument)]
fn f() {
  {
}

@Veykril
Member Author

Veykril commented Dec 5, 2024

That is #18244
