I was researching the StyleCop.Analyzers code as a reference for implementing my own custom analyzer, and something is not obvious to me.
Is there a reason that complex token-level processing is used in the spacing analyzers instead of simply parsing the raw text lines? For example, the analysis code in the SA1028CodeMustNotContainTrailingWhitespace class might look like the following:
```csharp
SourceText sourceText = context.Tree.GetText(context.CancellationToken);
foreach (TextLine line in sourceText.Lines)
{
    if (line.Span.IsEmpty)
        continue;

    string text = line.ToString();
    int wsStart = FindTrailingWhitespace(text);
    if (wsStart >= 0)
    {
        Location diagnosticLocation = Location.Create(context.Tree, TextSpan.FromBounds(line.Start + wsStart, line.End));
        // TODO: report diagnostic for the location above
    }
}
```
This seems simpler and more reliable than the current implementation.
We don't examine text inside disabled text (`#if false`) or within verbatim string literals. Also, many of the spacing rules have special exceptions depending on where you are in the syntax tree (see #1191 for example).
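To make the distinction concrete, here is a hypothetical snippet (not from the repository) showing two places where a raw line scan would report trailing whitespace that the token-level analyzer deliberately ignores:

```csharp
// Trailing spaces inside a verbatim string literal are part of the
// string's value, so flagging (or "fixing") them would change program
// behavior. A raw text-line scan cannot tell it is inside a string.
string banner = @"header   
body";

#if false
int unused = 0;   // trailing spaces inside disabled text
#endif
```

Walking the syntax tree lets the analyzer see that the first trailing run is inside a `StringLiteralToken` and the second is inside `DisabledTextTrivia`, context that is simply not available when scanning `SourceText.Lines` directly.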