Join implicit concatenated strings when they fit on a line #13663

Merged (27 commits) on Oct 24, 2024

Conversation

@MichaReiser (Member) commented Oct 7, 2024

Summary

Implements #9457 for f-strings, regular strings, and byte literals. The implementation doesn't handle raw strings or triple-quoted strings, similar to Black's string-processing preview style.

Here's an example from the ecosystem check. The formatter now joins the implicit concatenated f-string into a single string literal.

             raise TypeError(
-                "Expected str or callable for parameter 'probe', "
-                f"not '{method.__class__.__name__}'"
+                f"Expected str or callable for parameter 'probe', not '{method.__class__.__name__}'"
             )
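
For context, here's a minimal sketch of what does and doesn't get joined under the new style (hypothetical input, not taken from the ecosystem check); raw and triple-quoted strings are left alone:

# Joined when the result fits on one line
message = (
    "Expected str or callable, "
    "got something else instead"
)
# becomes: message = "Expected str or callable, got something else instead"

# Left as-is: raw strings and triple-quoted strings aren't handled
pattern = (
    r"\d+"
    r"\s*"
)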

What makes this change "complicated" is that Ruff already has a lot of special-casing for strings:

  • Comments at the end of an assignment where the value is a string get inlined:
    aaaaaaaaaaaaa = (
         "long string literal"  # comment
    )
    
    # instead  of 
    aaaaaaaaaaaaa = (
         "long string literal"
    )  # comment
  • Docstring formatting trims leading and trailing whitespace (see the sketch below)
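
    For example, a hypothetical sketch of the trimming (made-up function, illustrative only):

    def example():
        """   Summary line surrounded by whitespace.   """

    # is formatted to

    def example():
        """Summary line surrounded by whitespace."""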

In addition, there are many other string-related preview styles, and this implementation must be compatible with the current stable formatting and any other combination of preview styles. Specifically, the implementation must support the stable and preview f-string formatting.

assert formatting

This new style requires a few smaller changes in how expressions containing strings are formatted to mitigate instabilities. However, there's one large change that I'm undecided about whether we should move forward with.

The existing assertion formatting is prone to unstable formatting when the right side is an implicit concatenated string that can be joined. The easiest fix was to change the formatting to use Parenthesize::IfBreaksParenthesized, which we use in other places where we format multiple expressions with the maybe_parenthesize layout. However, it has the effect that assertions now break from right to left: we always parenthesize the right side before we start splitting the left. This impacts the formatting of almost all assert statements that are longer than the configured line length and whose right side is a string, because we used to split the assertion and we now parenthesize the message instead.

Here are two examples from the ecosystem checks:

     with conn.read_ctx() as cursor:  # make sure it wrote in the DB but not the last one
-        assert cursor.execute("SELECT b from a").fetchall() == [
-            (1,),
-            (2,),
-            (3,),
-            (4,),
-        ], "other greenlet should write to the DB"  # noqa: E501
+        assert cursor.execute("SELECT b from a").fetchall() == [(1,), (2,), (3,), (4,)], (
+            "other greenlet should write to the DB"
+        )  # noqa: E501
                         new_id = f"{id}{suffix}{id_suffixes[id]}"
                     resolved_ids[index] = new_id
                     id_suffixes[id] += 1
-        assert len(resolved_ids) == len(
-            set(resolved_ids)
-        ), f"Internal error: {resolved_ids=}"
+        assert len(resolved_ids) == len(set(resolved_ids)), (
+            f"Internal error: {resolved_ids=}"
+        )
         return resolved_ids
 
     def _resolve_ids(self) -> Iterable[str]:

You can see how the formatter now keeps the assertion flat (left side) and parenthesizes the message instead. I do find that this overall improves readability and is also in line with what we do in assignments: we parenthesize the value to avoid splitting the target.
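
For comparison, a hypothetical assignment sketch of that behavior (made-up names): the value gets parenthesized rather than splitting the target:

some_config.options[option_name] = (
    "a message that is too long to fit on the assignment line as written"
)

# rather than splitting the target:
some_config.options[
    option_name
] = "a message that is too long to fit on the assignment line as written"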

I have three concerns with the style change:

1. Black compatibility/Upgrade diff

IMO, this is the most important one. Our assert formatting has never been compatible with Black, and this has come up in the past (#13386, #8331, #8388), but there's no clear consensus from users on which style they prefer. This PR deviates our implementation further from Black's style: it will lead to larger diffs when migrating from Black, and it also leads to large diffs when upgrading to the new style.

2. More parentheses

Black overall tries to reduce the number of parentheses. For example, Black prefers

if aaaaaaaaaaaaaa + bbbbbbbbbbbbbbb / len([
    122222,
    33333333333333,
    44444444444444,
    55444,
]):
    ...

over

if (
    aaaaaaaaaaaaaa
    + bbbbbbbbbbbbbbb
    / len([122222, 33333333333333, 44444444444444, 55444])
):
    ...

when the left-most or right-most expression has its own parentheses (and some more rules). Black also applies this to assert formatting (-: Black, +: Ruff).

-            assert isinstance(
-                document, dict
-            ), "document should be of type `dict[str,Any]`. But found: `{}`".format(
-                type(document)
+            assert isinstance(document, dict), (
+                "document should be of type `dict[str,Any]`. But found: `{}`".format(
+                    type(document)
+                )
             )
-        assert isinstance(
-            input, dict
-        ), "The input to RunnablePassthrough.assign() must be a dict."
+        assert isinstance(input, dict), (
+            "The input to RunnablePassthrough.assign() must be a dict."
+        )
 

You can see how Black uses the parentheses of the call expression over introducing new parentheses to break the right.

IMO, this makes assert statements harder to read because it is unclear where the assertion ends and the message starts. However, it helps reduce the overall number of parentheses.

3. Assertions that end in parentheses

There's one case where the new formatting is arguably worse:

# Black

assert graph.artifacts == [
    GraphArtifact(
        id=expected_top_level_artifact.id,
        created=expected_top_level_artifact.created,
        key=expected_top_level_artifact.key,
        type=expected_top_level_artifact.type,
        data=(
            expected_top_level_artifact.data
            if expected_top_level_artifact.type == "progress"
            else None
        ),
        is_latest=True,
    )
], "Expected artifacts associated with the flow run but not with a task to be included at the roof of the graph."

# Ruff 
assert graph.artifacts == [
    GraphArtifact(
        id=expected_top_level_artifact.id,
        created=expected_top_level_artifact.created,
        key=expected_top_level_artifact.key,
        type=expected_top_level_artifact.type,
        data=expected_top_level_artifact.data
        if expected_top_level_artifact.type == "progress"
        else None,
        is_latest=True,
    )
], (
    "Expected artifacts associated with the flow run but not with a task to be included at the roof of the graph."
)

In this example, parenthesizing the message is useless because it doesn't gain any space for the message: the indent requires as much space as the ], ( does.

The reason I went with this change (and am seeking feedback now) is that:

  • It's an extremely easy fix for the instability
  • We break the right side over the left in other places, so it's arguably more consistent for Ruff
  • I do find that it improves formatting in many cases

I do think it would be possible to implement something closer to today's style by using best_fit, similar to the assignment formatting. It might not even be that much work (a day or two). So I'm happy to tackle this if we conclude that we want to stay closer to Black's style, reduce the upgrade diff, or are concerned about the "bad" cases.

TODOs

  • Review for instabilities when an implicit concatenated string is the right side of an assignment where the target or the annotation is splittable.
  • Review for instabilities with the hug expressions style
  • Review implicit concatenated string formatting in maybe_parenthesize_expression changes similar to assert stmt (e.g. YieldExpr)
  • Support the f-string formatting preview style (Big)
  • Test with QuoteStyle preserve
  • In doc-string positions
     def in_docstring_position():
         "diffent '" 'quote "are fine"                '
  • Handle strings in ExprStmt positions (there's no group)
  • Review changes to larger repositories and the ecosystem results
  • (stretch) Remove parentheses when collapsing f-strings https://play.ruff.rs/1355c0b0-f452-445a-8ffc-6285e1b0885e
  • Tests for can_omit_optional_parentheses changes.

Review

  • I would appreciate feedback on the assertion style changes.
  • Look out for more string edge cases that need handling

@AlexWaygood I don't expect you to review the code changes but I highly value your input on the assertion formatting changes.

Test Plan

I added plenty of new integration tests, reviewed the ecosystem changes, and tested the formatting on some larger projects.

Non-truncated ecosystem changes: https://gist.github.com/MichaReiser/c82d6132e577d9e7ee9ae85543904099

@MichaReiser added the formatter (Related to the formatter) and preview (Related to preview mode features) labels on Oct 7, 2024
github-actions bot (Contributor) commented Oct 7, 2024

ruff-ecosystem results

Linter (stable)

✅ ecosystem check detected no linter changes.

Linter (preview)

✅ ecosystem check detected no linter changes.

Formatter (stable)

✅ ecosystem check detected no format changes.

Formatter (preview)

ℹ️ ecosystem check detected format changes. (+2228 -2609 lines in 575 files in 38 projects; 16 projects unchanged)

DisnakeDev/disnake (+1 -2 lines across 1 file)

ruff format --preview

disnake/player.py~L550

             fallback = cls._probe_codec_fallback
         else:
             raise TypeError(
-                "Expected str or callable for parameter 'probe', "
-                f"not '{method.__class__.__name__}'"
+                f"Expected str or callable for parameter 'probe', not '{method.__class__.__name__}'"
             )
 
         codec = bitrate = None

RasaHQ/rasa (+59 -73 lines across 32 files)

ruff format --preview

rasa/cli/utils.py~L278

     # Check if a valid setting for `max_history` was given
     if isinstance(max_history, int) and max_history < 1:
         raise argparse.ArgumentTypeError(
-            f"The value of `--max-history {max_history}` " f"is not a positive integer."
+            f"The value of `--max-history {max_history}` is not a positive integer."
         )
 
     return validator.verify_story_structure(

rasa/cli/x.py~L165

         attempts -= 1
 
     rasa.shared.utils.cli.print_error_and_exit(
-        "Could not fetch runtime config from server at '{}'. " "Exiting.".format(
+        "Could not fetch runtime config from server at '{}'. Exiting.".format(
             config_endpoint
         )
     )

rasa/core/actions/action.py~L322

         if message is None:
             if not self.silent_fail:
                 logger.error(
-                    "Couldn't create message for response '{}'." "".format(
+                    "Couldn't create message for response '{}'.".format(
                         self.utter_action
                     )
                 )

rasa/core/actions/action.py~L470

         else:
             if not self.silent_fail:
                 logger.error(
-                    "Couldn't create message for response action '{}'." "".format(
+                    "Couldn't create message for response action '{}'.".format(
                         self.action_name
                     )
                 )

rasa/core/channels/console.py~L194

     exit_text = INTENT_MESSAGE_PREFIX + "stop"
 
     rasa.shared.utils.cli.print_success(
-        "Bot loaded. Type a message and press enter " "(use '{}' to exit): ".format(
+        "Bot loaded. Type a message and press enter (use '{}' to exit): ".format(
             exit_text
         )
     )

rasa/core/channels/telegram.py~L97

                     reply_markup.add(KeyboardButton(button["title"]))
         else:
             logger.error(
-                "Trying to send text with buttons for unknown " "button type {}".format(
+                "Trying to send text with buttons for unknown button type {}".format(
                     button_type
                 )
             )

rasa/core/nlg/callback.py~L81

         body = nlg_request_format(utter_action, tracker, output_channel, **kwargs)
 
         logger.debug(
-            "Requesting NLG for {} from {}." "The request body is {}." "".format(
+            "Requesting NLG for {} from {}.The request body is {}.".format(
                 utter_action, self.nlg_endpoint.url, json.dumps(body)
             )
         )

rasa/core/policies/policy.py~L250

         max_training_samples = kwargs.get("max_training_samples")
         if max_training_samples is not None:
             logger.debug(
-                "Limit training data to {} training samples." "".format(
+                "Limit training data to {} training samples.".format(
                     max_training_samples
                 )
             )

rasa/core/policies/ted_policy.py~L835

             # take the last prediction in the sequence
             similarities = outputs["similarities"][:, -1, :]
         else:
-            raise TypeError(
-                "model output for `similarities` " "should be a numpy array"
-            )
+            raise TypeError("model output for `similarities` should be a numpy array")
         if isinstance(outputs["scores"], np.ndarray):
             confidences = outputs["scores"][:, -1, :]
         else:

rasa/core/policies/unexpected_intent_policy.py~L612

         if isinstance(output["similarities"], np.ndarray):
             sequence_similarities = output["similarities"][:, -1, :]
         else:
-            raise TypeError(
-                "model output for `similarities` " "should be a numpy array"
-            )
+            raise TypeError("model output for `similarities` should be a numpy array")
 
         # Check for unlikely intent
         last_user_uttered_event = tracker.get_last_event_for(UserUttered)

rasa/core/test.py~L772

         ):
             story_dump = YAMLStoryWriter().dumps(partial_tracker.as_story().story_steps)
             error_msg = (
-                f"Model predicted a wrong action. Failed Story: " f"\n\n{story_dump}"
+                f"Model predicted a wrong action. Failed Story: \n\n{story_dump}"
             )
             raise WrongPredictionException(error_msg)
     elif prev_action_unlikely_intent:

rasa/core/train.py~L34

             for policy_config in policy_configs:
                 config_name = os.path.splitext(os.path.basename(policy_config))[0]
                 logging.info(
-                    "Starting to train {} round {}/{}" " with {}% exclusion" "".format(
+                    "Starting to train {} round {}/{} with {}% exclusion".format(
                         config_name, current_run, len(exclusion_percentages), percentage
                     )
                 )

rasa/core/training/interactive.py~L723

     # export training data and quit
     questions = questionary.form(
         export_stories=questionary.text(
-            message="Export stories to (if file exists, this "
-            "will append the stories)",
+            message="Export stories to (if file exists, this will append the stories)",
             default=PATHS["stories"],
             validate=io_utils.file_type_validator(
                 rasa.shared.data.YAML_FILE_EXTENSIONS,

rasa/core/training/interactive.py~L738

             default=PATHS["nlu"],
             validate=io_utils.file_type_validator(
                 list(rasa.shared.data.TRAINING_DATA_EXTENSIONS),
-                "Please provide a valid export path for the NLU data, "
-                "e.g. 'nlu.yml'.",
+                "Please provide a valid export path for the NLU data, e.g. 'nlu.yml'.",
             ),
         ),
         export_domain=questionary.text(
-            message="Export domain file to (if file exists, this "
-            "will be overwritten)",
+            message="Export domain file to (if file exists, this will be overwritten)",
             default=PATHS["domain"],
             validate=io_utils.file_type_validator(
                 rasa.shared.data.YAML_FILE_EXTENSIONS,

rasa/core/utils.py~L41

     """
     if use_syslog:
         formatter = logging.Formatter(
-            "%(asctime)s [%(levelname)-5.5s] [%(process)d]" " %(message)s"
+            "%(asctime)s [%(levelname)-5.5s] [%(process)d] %(message)s"
         )
         socktype = SOCK_STREAM if syslog_protocol == TCP_PROTOCOL else SOCK_DGRAM
         syslog_handler = logging.handlers.SysLogHandler(

rasa/core/utils.py~L73

     """
     if hot_idx >= length:
         raise ValueError(
-            "Can't create one hot. Index '{}' is out " "of range (length '{}')".format(
+            "Can't create one hot. Index '{}' is out of range (length '{}')".format(
                 hot_idx, length
             )
         )

rasa/nlu/featurizers/sparse_featurizer/count_vectors_featurizer.py~L166

                 )
             if self.stop_words is not None:
                 logger.warning(
-                    "Analyzer is set to character, "
-                    "provided stop words will be ignored."
+                    "Analyzer is set to character, provided stop words will be ignored."
                 )
             if self.max_ngram == 1:
                 logger.warning(

rasa/server.py~L289

         raise ErrorResponse(
             HTTPStatus.BAD_REQUEST,
             "BadRequest",
-            "Invalid parameter value for 'include_events'. "
-            "Should be one of {}".format(enum_values),
+            "Invalid parameter value for 'include_events'. Should be one of {}".format(
+                enum_values
+            ),
             {"parameter": "include_events", "in": "query"},
         )
 

rasa/shared/core/domain.py~L198

             domain = cls.from_directory(path)
         else:
             raise InvalidDomain(
-                "Failed to load domain specification from '{}'. "
-                "File not found!".format(os.path.abspath(path))
+                "Failed to load domain specification from '{}'. File not found!".format(
+                    os.path.abspath(path)
+                )
             )
 
         return domain

rasa/shared/core/events.py~L1961

 
     def __str__(self) -> Text:
         """Returns text representation of event."""
-        return (
-            "ActionExecutionRejected("
-            "action: {}, policy: {}, confidence: {})"
-            "".format(self.action_name, self.policy, self.confidence)
+        return "ActionExecutionRejected(action: {}, policy: {}, confidence: {})".format(
+            self.action_name, self.policy, self.confidence
         )
 
     def __hash__(self) -> int:

rasa/shared/core/generator.py~L401

 
             if num_active_trackers:
                 logger.debug(
-                    "Starting {} ... (with {} trackers)" "".format(
+                    "Starting {} ... (with {} trackers)".format(
                         phase_name, num_active_trackers
                     )
                 )

rasa/shared/core/generator.py~L517

                     phase = 0
                 else:
                     logger.debug(
-                        "Found {} unused checkpoints " "in current phase." "".format(
+                        "Found {} unused checkpoints in current phase.".format(
                             len(unused_checkpoints)
                         )
                     )
                     logger.debug(
-                        "Found {} active trackers " "for these checkpoints." "".format(
+                        "Found {} active trackers for these checkpoints.".format(
                             num_active_trackers
                         )
                     )

rasa/shared/core/generator.py~L553

                 augmented_trackers, self.config.max_number_of_augmented_trackers
             )
             logger.debug(
-                "Subsampled to {} augmented training trackers." "".format(
+                "Subsampled to {} augmented training trackers.".format(
                     len(augmented_trackers)
                 )
             )

rasa/shared/core/trackers.py~L634

         """
         if not isinstance(dialogue, Dialogue):
             raise ValueError(
-                f"story {dialogue} is not of type Dialogue. "
-                f"Have you deserialized it?"
+                f"story {dialogue} is not of type Dialogue. Have you deserialized it?"
             )
 
         self._reset()

rasa/shared/core/training_data/story_reader/story_reader.py~L83

         )
         if parsed_events is None:
             raise StoryParseError(
-                "Unknown event '{}'. It is Neither an event " "nor an action).".format(
+                "Unknown event '{}'. It is Neither an event nor an action).".format(
                     event_name
                 )
             )

rasa/shared/core/training_data/story_reader/yaml_story_reader.py~L334

 
         if not self.domain:
             logger.debug(
-                "Skipped validating if intent is in domain as domain " "is `None`."
+                "Skipped validating if intent is in domain as domain is `None`."
             )
             return
 

rasa/shared/nlu/training_data/formats/dialogflow.py~L34

 
         if fformat not in {DIALOGFLOW_INTENT, DIALOGFLOW_ENTITIES}:
             raise ValueError(
-                "fformat must be either {}, or {}" "".format(
+                "fformat must be either {}, or {}".format(
                     DIALOGFLOW_INTENT, DIALOGFLOW_ENTITIES
                 )
             )

rasa/shared/utils/io.py~L127

             return f.read()
     except FileNotFoundError:
         raise FileNotFoundException(
-            f"Failed to read file, " f"'{os.path.abspath(filename)}' does not exist."
+            f"Failed to read file, '{os.path.abspath(filename)}' does not exist."
         )
     except UnicodeDecodeError:
         raise FileIOException(

rasa/shared/utils/io.py~L157

     """
     if not isinstance(path, str):
         raise ValueError(
-            f"`resource_name` must be a string type. " f"Got `{type(path)}` instead"
+            f"`resource_name` must be a string type. Got `{type(path)}` instead"
         )
 
     if os.path.isfile(path):

rasa/shared/utils/io.py~L443

             )
     except FileNotFoundError:
         raise FileNotFoundException(
-            f"Failed to read file, " f"'{os.path.abspath(file_path)}' does not exist."
+            f"Failed to read file, '{os.path.abspath(file_path)}' does not exist."
         )
 
 

rasa/utils/common.py~L308

         access_logger.addHandler(file_handler)
     if use_syslog:
         formatter = logging.Formatter(
-            "%(asctime)s [%(levelname)-5.5s] [%(process)d]" " %(message)s"
+            "%(asctime)s [%(levelname)-5.5s] [%(process)d] %(message)s"
         )
         socktype = SOCK_STREAM if syslog_protocol == TCP_PROTOCOL else SOCK_DGRAM
         syslog_handler = logging.handlers.SysLogHandler(

rasa/utils/endpoints.py~L33

         return EndpointConfig.from_dict(content[endpoint_type])
     except FileNotFoundError:
         logger.error(
-            "Failed to read endpoint configuration " "from {}. No such file.".format(
+            "Failed to read endpoint configuration from {}. No such file.".format(
                 os.path.abspath(filename)
             )
         )

tests/engine/recipes/test_default_recipe.py~L98

         (
             "data/test_config/config_pretrained_embeddings_mitie.yml",
             "data/graph_schemas/config_pretrained_embeddings_mitie_train_schema.yml",
-            "data/graph_schemas/"
-            "config_pretrained_embeddings_mitie_predict_schema.yml",
+            "data/graph_schemas/config_pretrained_embeddings_mitie_predict_schema.yml",
             TrainingType.BOTH,
             False,
         ),

tests/graph_components/validators/test_default_recipe_validator.py~L758

     if should_warn:
         with pytest.warns(
             UserWarning,
-            match=(f"'{RulePolicy.__name__}' is not " "included in the model's "),
+            match=(f"'{RulePolicy.__name__}' is not included in the model's "),
         ) as records:
             validator.validate(importer)
     else:

tests/graph_components/validators/test_default_recipe_validator.py~L859

     num_duplicates: bool,
     priority: int,
 ):
-    assert (
-        len(policy_types) >= priority + num_duplicates
-    ), f"This tests needs at least {priority + num_duplicates} many types."
+    assert len(policy_types) >= priority + num_duplicates, (
+        f"This tests needs at least {priority + num_duplicates} many types."
+    )
 
     # start with a schema where node i has priority i
     nodes = {

tests/graph_components/validators/test_default_recipe_validator.py~L968

     with pytest.warns(
         UserWarning,
         match=(
-            "Found rule-based training data but no policy "
-            "supporting rule-based data."
+            "Found rule-based training data but no policy supporting rule-based data."
         ),
     ):
         validator.validate(importer)

tests/nlu/featurizers/test_count_vectors_featurizer.py~L773

 
 
 @pytest.mark.parametrize(
-    "initial_train_text, additional_train_text, " "use_shared_vocab",
+    "initial_train_text, additional_train_text, use_shared_vocab",
     [("am I the coolest person?", "no", True), ("rasa rasa", "sara sara", False)],
 )
 def test_use_shared_vocab_exception(

tests/nlu/featurizers/test_regex_featurizer.py~L44

 
 
 @pytest.mark.parametrize(
-    "sentence, expected_sequence_features, expected_sentence_features,"
-    "labeled_tokens",
+    "sentence, expected_sequence_features, expected_sentence_features,labeled_tokens",
     [
         (
             "hey how are you today",

tests/nlu/featurizers/test_regex_featurizer.py~L219

 
 
 @pytest.mark.parametrize(
-    "sentence, expected_sequence_features, expected_sentence_features, "
-    "labeled_tokens",
+    "sentence, expected_sequence_features, expected_sentence_features, labeled_tokens",
     [
         (
             "lemonade and mapo tofu",

tests/nlu/featurizers/test_regex_featurizer.py~L383

 
 
 @pytest.mark.parametrize(
-    "sentence, expected_sequence_features, expected_sentence_features,"
-    "case_sensitive",
+    "sentence, expected_sequence_features, expected_sentence_features,case_sensitive",
     [
         ("Hey How are you today", [0.0, 0.0, 0.0], [0.0, 0.0, 0.0], True),
         ("Hey How are you today", [0.0, 1.0, 0.0], [0.0, 1.0, 0.0], False),

tests/nlu/featurizers/test_spacy_featurizer.py~L131

         vecs = ftr._features_for_doc(doc)
         vecs_capitalized = ftr._features_for_doc(doc_capitalized)
 
-        assert np.allclose(
-            vecs, vecs_capitalized, atol=1e-5
-        ), "Vectors are unequal for texts '{}' and '{}'".format(
-            e.get(TEXT), e.get(TEXT).capitalize()
+        assert np.allclose(vecs, vecs_capitalized, atol=1e-5), (
+            "Vectors are unequal for texts '{}' and '{}'".format(
+                e.get(TEXT), e.get(TEXT).capitalize()
+            )
         )
 
 

tests/nlu/test_train.py~L151

             #   publicly available anymore
             #   (see https://github.com/RasaHQ/rasa/issues/6806)
             continue
-        assert (
-            cls.__name__ in all_components
-        ), "`all_components` template is missing component."
+        assert cls.__name__ in all_components, (
+            "`all_components` template is missing component."
+        )
 
 
 @pytest.mark.timeout(600, func_only=True)

tests/shared/core/training_data/test_graph.py~L10

     for n in sorted_nodes:
         deps = incoming_edges.get(n, [])
         # checks that all incoming edges are from nodes we have already visited
-        assert all([
-            d in visited or (d, n) in removed_edges for d in deps
-        ]), "Found an incoming edge from a node that wasn't visited yet!"
+        assert all([d in visited or (d, n) in removed_edges for d in deps]), (
+            "Found an incoming edge from a node that wasn't visited yet!"
+        )
         visited.add(n)
 
 

Snowflake-Labs/snowcli (+17 -16 lines across 5 files)

ruff format --preview

src/snowflake/cli/_plugins/nativeapp/artifacts.py~L248

     def __init__(self, *, project_root: Path, deploy_root: Path):
         # If a relative path ends up here, it's a bug in the app and can lead to other
         # subtle bugs as paths would be resolved relative to the current working directory.
-        assert (
-            project_root.is_absolute()
-        ), f"Project root {project_root} must be an absolute path."
-        assert (
-            deploy_root.is_absolute()
-        ), f"Deploy root {deploy_root} must be an absolute path."
+        assert project_root.is_absolute(), (
+            f"Project root {project_root} must be an absolute path."
+        )
+        assert deploy_root.is_absolute(), (
+            f"Deploy root {deploy_root} must be an absolute path."
+        )
 
         self._project_root: Path = resolve_without_follow(project_root)
         self._deploy_root: Path = resolve_without_follow(deploy_root)

tests/api/utils/test_templating_functions.py~L309

     "input_value, expected_output",
     [
         ("test_value", "test_value"),
-        (" T'EST_Va l.u-e" "", "TEST_Value"),
+        (" T'EST_Va l.u-e", "TEST_Value"),
         ("", "_"),
         ('""', "_"),
         ("_val.ue", "_value"),

tests_e2e/test_installation.py~L66

     assert "Initialized the new project in" in output
     for file in files_to_check:
         expected_generated_file = project_path / file
-        assert expected_generated_file.exists(), f"[{expected_generated_file}] does not exist. It should be generated from templates directory."
+        assert expected_generated_file.exists(), (
+            f"[{expected_generated_file}] does not exist. It should be generated from templates directory."
+        )
 
 
 @pytest.mark.e2e

tests_integration/spcs/testing_utils/spcs_services_utils.py~L223

 
         describe_result = self._execute_describe(service_name)
         assert describe_result.exit_code == 0, describe_result.output
-        assert (
-            new_container_name not in describe_result.json[0]["spec"]
-        ), f"Container name '{new_container_name}' found in output of DESCRIBE SERVICE before spec has been updated. This is unexpected."
+        assert new_container_name not in describe_result.json[0]["spec"], (
+            f"Container name '{new_container_name}' found in output of DESCRIBE SERVICE before spec has been updated. This is unexpected."
+        )
 
         spec_path = f"{self._setup.test_root_path}/spcs/spec/spec_upgrade.yml"
         upgrade_result = self._setup.runner.invoke_with_connection_json([

tests_integration/spcs/testing_utils/spcs_services_utils.py~L244

         describe_result = self._execute_describe(service_name)
         assert describe_result.exit_code == 0, describe_result.output
         # do not assert direct equality because the spec field in output of DESCRIBE SERVICE has some extra info
-        assert (
-            new_container_name in describe_result.json[0]["spec"]
-        ), f"Container name '{new_container_name}' from spec_upgrade.yml not found in output of DESCRIBE SERVICE."
+        assert new_container_name in describe_result.json[0]["spec"], (
+            f"Container name '{new_container_name}' from spec_upgrade.yml not found in output of DESCRIBE SERVICE."
+        )
 
     def list_endpoints_should_show_endpoint(self, service_name: str):
         result = self._setup.runner.invoke_with_connection_json([

tests_integration/test_snowpark.py~L622

                 "type": "function",
             },
             {
-                "object": f"{database}.{different_schema}.schema_function(name "
-                "string)",
+                "object": f"{database}.{different_schema}.schema_function(name string)",
                 "status": "created",
                 "type": "function",
             },

aiven/aiven-client (+2 -2 lines across 1 file)

ruff format --preview

aiven/client/cli.py~L868

                         types = spec["type"]
                         if isinstance(types, str) and types == "null":
                             print(
-                                "  {title}{description}\n" "     => --remove-option {name}".format(
+                                "  {title}{description}\n     => --remove-option {name}".format(
                                     name=name,
                                     title=spec["title"],
                                     description=description,

aiven/client/cli.py~L879

                                 types = [types]
                             type_str = " or ".join(t for t in types if t != "null")
                             print(
-                                "  {title}{description}\n" "     => -c {name}=<{type}>  {default}".format(
+                                "  {title}{description}\n     => -c {name}=<{type}>  {default}".format(
                                     name=name,
                                     type=type_str,
                                     default=default_desc,

PlasmaPy/PlasmaPy (+49 -62 lines across 8 files)

ruff format --preview

src/plasmapy/diagnostics/charged_particle_radiography/synthetic_radiography.py~L1052

 
         if not self._has_run:
             raise RuntimeError(
-                "The simulation must be run before a results "
-                "dictionary can be created."
+                "The simulation must be run before a results dictionary can be created."
             )
 
         # Determine locations of points in the detector plane using unit

src/plasmapy/particles/_parsing.py~L328

             element = element_info
         else:
             raise InvalidParticleError(
-                f"The string '{element_info}' does not correspond to "
-                f"a valid element."
+                f"The string '{element_info}' does not correspond to a valid element."
             )
         return element
 

src/plasmapy/particles/_parsing.py~L352

 
             if isotope not in _isotopes.data_about_isotopes:
                 raise InvalidParticleError(
-                    f"The string '{isotope}' does not correspond to "
-                    f"a valid isotope."
+                    f"The string '{isotope}' does not correspond to a valid isotope."
                 )
 
         else:

src/plasmapy/particles/_parsing.py~L434

             )
         else:
             warnings.warn(
-                "Redundant charge information for particle "
-                f"'{argument}' with Z = {Z}.",
+                f"Redundant charge information for particle '{argument}' with Z = {Z}.",
                 ParticleWarning,
             )
 

src/plasmapy/particles/_parsing.py~L535

             )
         else:
             warnings.warn(
-                "Redundant charge information for particle "
-                f"'{argument}' with Z = {Z}.",
+                f"Redundant charge information for particle '{argument}' with Z = {Z}.",
                 ParticleWarning,
             )
 

src/plasmapy/particles/atomic.py~L586

             isotopes_list = known_isotopes_for_element(element)
         except InvalidElementError as ex:
             raise InvalidElementError(
-                "known_isotopes is unable to get "
-                f"isotopes from an input of: {argument}"
+                f"known_isotopes is unable to get isotopes from an input of: {argument}"
             ) from ex
         except InvalidParticleError as ex:
             raise InvalidParticleError("Invalid particle in known_isotopes.") from ex

src/plasmapy/particles/ionization_state.py~L98

 
     def __repr__(self) -> str:
         return (
-            f"IonicLevel({self.ionic_symbol!r}, "
-            f"ionic_fraction={self.ionic_fraction})"
+            f"IonicLevel({self.ionic_symbol!r}, ionic_fraction={self.ionic_fraction})"
         )
 
     @property

src/plasmapy/particles/ionization_state.py~L477

 
             if len(fractions) != self.atomic_number + 1:
                 raise ParticleError(
-                    "The length of ionic_fractions must be "
-                    f"{self.atomic_number + 1}."
+                    f"The length of ionic_fractions must be {self.atomic_number + 1}."
                 )
 
             if isinstance(fractions, u.Quantity):

src/plasmapy/particles/ionization_state_collection.py~L321

         all_nans = np.all(np.isnan(new_fractions))
         if not all_nans and (new_fractions.min() < 0 or new_fractions.max() > 1):
             raise ValueError(
-                f"{errmsg} because the new ionic fractions are not "
-                f"all between 0 and 1."
+                f"{errmsg} because the new ionic fractions are not all between 0 and 1."
             )
 
         normalized = np.isclose(np.sum(new_fractions), 1, rtol=self.tol)

tests/diagnostics/test_thomson.py~L191

     alpha, wavelength, Skw = single_species_collective_spectrum
 
     # Check that alpha is correct
-    assert np.isclose(
-        alpha, 1.801, atol=0.01
-    ), f"Collective case alpha returns {alpha} instead of expected 1.801"
+    assert np.isclose(alpha, 1.801, atol=0.01), (
+        f"Collective case alpha returns {alpha} instead of expected 1.801"
+    )
 
     i_width = width_at_value(wavelength.value, Skw.value, 2e-13)
     e_width = width_at_value(wavelength.value, Skw.value, 0.2e-13)
 
     # Check that the widths of the ion and electron features match expectations
     assert np.isclose(i_width, 0.1599, 1e-3), (
-        "Collective case ion feature "
-        f"width is {i_width}"
-        "instead of expected 0.1599"
+        f"Collective case ion feature width is {i_width}instead of expected 0.1599"
     )
 
     assert np.isclose(e_width, 17.7899, 1e-3), (

tests/diagnostics/test_thomson.py~L421

 
     # Check width
     assert np.isclose(width, 0.17, 1e-2), (
-        f"Multiple ion species case spectrum width is {width} instead of "
-        "expected 0.17"
+        f"Multiple ion species case spectrum width is {width} instead of expected 0.17"
     )
 
     # Check max value
     assert np.isclose(max_skw, 6e-12, 1e-11), (
-        f"Multiple ion species case spectrum max is {max_skw} instead of "
-        "expected 6e-12"
+        f"Multiple ion species case spectrum max is {max_skw} instead of expected 6e-12"
     )
 
     # Check max peak location

tests/diagnostics/test_thomson.py~L492

     alpha, wavelength, Skw = single_species_non_collective_spectrum
 
     # Check that alpha is correct
-    assert np.isclose(
-        alpha, 0.05707, atol=0.01
-    ), f"Non-collective case alpha returns {alpha} instead of expected 0.05707"
+    assert np.isclose(alpha, 0.05707, atol=0.01), (
+        f"Non-collective case alpha returns {alpha} instead of expected 0.05707"
+    )
 
     e_width = width_at_value(wavelength.value, Skw.value, 0.2e-13)
 

tests/formulary/collisions/test_collisions_frequencies.py~L781

             self.True_electrons, methodVal.si.value, rtol=1e-1, atol=0.0
         )
         errStr = (
-            f"Collision frequency should be {self.True_electrons} and "
-            f"not {methodVal}."
+            f"Collision frequency should be {self.True_electrons} and not {methodVal}."
         )
         assert testTrue, errStr
 

tests/formulary/collisions/test_collisions_frequencies.py~L803

             self.True_protons, methodVal.si.value, rtol=1e-1, atol=0.0
         )
         errStr = (
-            f"Collision frequency should be {self.True_protons} and "
-            f"not {methodVal}."
+            f"Collision frequency should be {self.True_protons} and not {methodVal}."
         )
         assert testTrue, errStr
 

tests/particles/test_ionization_collection.py~L34

 
     for element, abundance_from_abundances in abundances.items():
         abundance_from_log_abundances = 10 ** log_abundances[element]
-        assert np.isclose(
-            abundance_from_abundances, abundance_from_log_abundances
-        ), "Mismatch between abundances and log_abundances."
+        assert np.isclose(abundance_from_abundances, abundance_from_log_abundances), (
+            "Mismatch between abundances and log_abundances."
+        )
 
 
 def has_attribute(attribute, tests_dict):

tests/particles/test_ionization_collection.py~L142

         assert (
             a == a  # noqa: PLR0124
         ), "IonizationStateCollection doesn't equal itself."
-        assert (
-            a == b
-        ), "IonizationStateCollection instance does not equal identical instance."
+        assert a == b, (
+            "IonizationStateCollection instance does not equal identical instance."
+        )
 
     @pytest.mark.parametrize(
         "test_name",

tests/particles/test_ionization_collection.py~L464

         )
 
     def test_kappa_defaults_to_inf(self) -> None:
-        assert np.isinf(
-            self.instance.kappa
-        ), "kappa does not default to a value of inf."
+        assert np.isinf(self.instance.kappa), (
+            "kappa does not default to a value of inf."
+        )
 
     @pytest.mark.parametrize(
         "uninitialized_attribute", ["number_densities", "ionic_fractions"]

tests/particles/test_ionization_collection.py~L474

     def test_attribute_defaults_to_dict_of_nans(self, uninitialized_attribute) -> None:
         command = f"self.instance.{uninitialized_attribute}"
         default_value = eval(command)  # noqa: S307
-        assert (
-            list(default_value.keys()) == self.elements
-        ), "Incorrect base particle keys."
+        assert list(default_value.keys()) == self.elements, (
+            "Incorrect base particle keys."
+        )
         for element in self.elements:
-            assert (
-                len(default_value[element]) == atomic_number(element) + 1
-            ), f"Incorrect number of ionization levels for {element}."
-            assert np.all(
-                np.isnan(default_value[element])
-            ), f"The values do not default to an array of nans for {element}."
+            assert len(default_value[element]) == atomic_number(element) + 1, (
+                f"Incorrect number of ionization levels for {element}."
+            )
+            assert np.all(np.isnan(default_value[element])), (
+                f"The values do not default to an array of nans for {element}."
+            )
 
     @pytest.mark.parametrize(
         "uninitialized_attribute", ["abundances", "log_abundances"]

tests/particles/test_ionization_collection.py~L539

         self.new_fractions = [0.3, 0.7]
         self.instance["H"] = self.new_fractions
         resulting_fractions = self.instance.ionic_fractions["H"]
-        assert np.allclose(
-            self.new_fractions, resulting_fractions
-        ), "Ionic fractions for H not set using __setitem__."
+        assert np.allclose(self.new_fractions, resulting_fractions), (
+            "Ionic fractions for H not set using __setitem__."
+        )
         assert "He" in self.instance.ionic_fractions, (
             "He is missing in ionic_fractions after __setitem__ was "
             "used to set H ionic fractions."

tests/particles/test_ionization_collection.py~L687

 
         assert u.quantity.allclose(
             self.instance.ionic_fractions[element], valid_ionic_fractions
-        ), "Item assignment of valid number densities did not yield correct ionic fractions."
+        ), (
+            "Item assignment of valid number densities did not yield correct ionic fractions."
+        )
 
         assert u.quantity.allclose(
             self.instance.number_densities[element], valid_number_densities
-        ), "Item assignment of valid number densities did not yield correct number densities."
+        ), (
+            "Item assignment of valid number densities did not yield correct number densities."
+        )
 
     def test_resetting_invalid_densities(self) -> None:
         """

tests/particles/test_ionization_collection.py~L782

         number_densities = self.instances[test_key].number_densities
         for base_particle in self.instances[test_key].base_particles:
             assert not np.any(np.isnan(number_densities[base_particle])), (
-                f"Test {test_key} should have number densities "
-                f"defined, but doesn't."
+                f"Test {test_key} should have number densities defined, but doesn't."
             )
 
     @pytest.mark.parametrize("test_key", ["no_ndens3", "no_ndens4", "no_ndens5"])

tests/particles/test_ionization_collection.py~L791

         number_densities = self.instances[test_key].number_densities
         for base_particle in self.instances[test_key].base_particles:
             assert np.all(np.isnan(number_densities[base_particle])), (
-                f"Test {test_key} should not have number densities "
-                f"defined, but does."
+                f"Test {test_key} should not have number densities defined, but does."
             )
 
     @pytest.mark.parametrize(

apache/airflow (+146 -209 lines across 48 files)

ruff format --preview

airflow/jobs/scheduler_job_runner.py~L506

 
                         if current_task_concurrency >= task_concurrency_limit:
                             self.log.info(
-                                "Not executing %s since the task concurrency for"
-                                " this task has been reached.",
+                                "Not executing %s since the task concurrency for this task has been reached.",
                                 task_instance,
                             )
                             starved_tasks.add((task_instance.dag_id, task_instance.task_id))

airflow/www/views.py~L4657

             if skipped:
                 skipped_repr = ", ".join(repr(k) for k in sorted(skipped))
                 flash(
-                    f"The variables with these keys: {skipped_repr} were skipped "
-                    "because they already exists",
+                    f"The variables with these keys: {skipped_repr} were skipped because they already exists",
                     "warning",
                 )
             self.update_redirect()

dev/breeze/src/airflow_breeze/commands/setup_commands.py~L77

 )
 @setup.command(
     name="self-upgrade",
-    help="Self upgrade Breeze. By default it re-installs Breeze "
-    f"from {get_installation_airflow_sources()}."
+    help=f"Self upgrade Breeze. By default it re-installs Breeze from {get_installation_airflow_sources()}."
     if not generating_command_images()
     else "Self upgrade Breeze.",
 )

dev/breeze/src/airflow_breeze/commands/setup_commands.py~L174

     get_console().print(f"[info]Used Airflow sources : {get_used_airflow_sources()}[/]\n")
     if get_verbose():
         get_console().print(
-            f"[info]Installation sources config hash : "
-            f"{get_installation_sources_config_metadata_hash()}[/]"
+            f"[info]Installation sources config hash : {get_installation_sources_config_metadata_hash()}[/]"
         )
         get_console().print(
             f"[info]Used sources config hash         : {get_used_sources_setup_metadata_hash()}[/]"

dev/breeze/src/airflow_breeze/utils/docker_command_utils.py~L180

     )
     if response.returncode != 0:
         get_console().print(
-            "[error]Docker is not running.[/]\n"
-            "[warning]Please make sure Docker is installed and running.[/]"
+            "[error]Docker is not running.[/]\n[warning]Please make sure Docker is installed and running.[/]"
         )
         sys.exit(1)
 

dev/breeze/tests/test_selective_checks.py~L90

                     assert received_value == expected_value, f"Correct value for {expected_key!r}"
                 else:
                     print(
-                        f"\n[red]ERROR: The key '{expected_key}' missing but "
-                        f"it is expected. Expected value:"
+                        f"\n[red]ERROR: The key '{expected_key}' missing but it is expected. Expected value:"
                     )
                     print_in_color(expected_value)
                     print_in_color("\nOutput received:")

dev/breeze/tests/test_selective_checks.py~L494

                     "providers/tests/http/file.py",
                 ),
                 {
-                    "affected-providers-list-as-string": "amazon apache.livy "
-                    "dbt.cloud dingding discord http",
+                    "affected-providers-list-as-string": "amazon apache.livy dbt.cloud dingding discord http",
                     "all-python-versions": "['3.9']",
                     "all-python-versions-list-as-string": "3.9",
                     "python-versions": "['3.9']",

docker_tests/test_prod_image.py~L99

         packages_installed = set(d["package_name"] for d in providers)
         assert len(packages_installed) != 0
 
-        assert (
-            packages_to_install == packages_installed
-        ), f"List of expected installed packages and image content mismatch. Check {package_file} file."
+        assert packages_to_install == packages_installed, (
+            f"List of expected installed packages and image content mismatch. Check {package_file} file."
+        )
 
     def test_pip_dependencies_conflict(self, default_docker_image):
         try:

performance/src/performance_dags/performance_dag/performance_dag_utils.py~L597

     if env_name in MANDATORY_performance_DAG_VARIABLES:
         if env_name not in performance_dag_conf:
             raise ValueError(
-                f"Mandatory environment variable '{env_name}' "
-                f"is missing from performance dag configuration."
+                f"Mandatory environment variable '{env_name}' is missing from performance dag configuration."
             )
         return performance_dag_conf[env_name]
 

providers/src/airflow/providers/amazon/aws/hooks/batch_client.py~L417

                 )
         else:
             raise AirflowException(
-                f"AWS Batch job ({job_id}) description error: exceeded status_retries "
-                f"({self.status_retries})"
+                f"AWS Batch job ({job_id}) description error: exceeded status_retries ({self.status_retries})"
             )
 
     @staticmethod

providers/src/airflow/providers/amazon/aws/operators/sagemaker.py~L172

         """Raise exception if resource type is not 'model' or 'job'."""
         if resource_type not in ("model", "job"):
             raise AirflowException(
-                "Argument resource_type accepts only 'model' and 'job'. "
-                f"Provided value: '{resource_type}'."
+                f"Argument resource_type accepts only 'model' and 'job'. Provided value: '{resource_type}'."
             )
 
     def _check_if_job_exists(self, job_name: str, describe_func: Callable[[str], Any]) -> bool:

providers/src/airflow/providers/amazon/aws/operators/sagemaker.py~L553

                 self.operation = "update"
                 sagemaker_operation = self.hook.update_endpoint
                 self.log.warning(
-                    "cannot create already existing endpoint %s, "
-                    "updating it with the given config instead",
+                    "cannot create already existing endpoint %s, updating it with the given config instead",
                     endpoint_info["EndpointName"],
                 )
                 if "Tags" in endpoint_info:

providers/src/airflow/providers/celery/executors/default_celery.py~L123

         DEFAULT_CELERY_CONFIG["broker_use_ssl"] = broker_use_ssl
 except AirflowConfigException:
     raise AirflowException(
-        "AirflowConfigException: SSL_ACTIVE is True, "
-        "please ensure SSL_KEY, "
-        "SSL_CERT and SSL_CACERT are set"
+        "AirflowConfigException: SSL_ACTIVE is True, please ensure SSL_KEY, SSL_CERT and SSL_CACERT are set"
     )
 except Exception as e:
     raise AirflowException(

providers/src/airflow/providers/common/sql/operators/sql.py~L995

                     test_results[metric] = self.ignore_zero
 
             self.log.info(
-                (
-                    "Current metric for %s: %s\n"
-                    "Past metric for %s: %s\n"
-                    "Ratio for %s: %s\n"
-                    "Threshold: %s\n"
-                ),
+                ("Current metric for %s: %s\nPast metric for %s: %s\nRatio for %s: %s\nThreshold: %s\n"),
                 metric,
                 cur,
                 metric,

providers/src/airflow/providers/docker/decorators/docker.py~L141

         serializer = serializer or "pickle"
         if serializer not in _SERIALIZERS:
             msg = (
-                f"Unsupported serializer {serializer!r}. "
-                f"Expected one of {', '.join(map(repr, _SERIALIZERS))}"
+                f"Unsupported serializer {serializer!r}. Expected one of {', '.join(map(repr, _SERIALIZERS))}"
             )
             raise AirflowException(msg)
 

providers/src/airflow/providers/google/cloud/hooks/bigquery.py~L3722

                 test_results[metric] = float(ratios[metric]) < threshold
 
             self.log.info(
-                (
-                    "Current metric for %s: %s\n"
-                    "Past metric for %s: %s\n"
-                    "Ratio for %s: %s\n"
-                    "Threshold: %s\n"
-                ),
+                ("Current metric for %s: %s\nPast metric for %s: %s\nRatio for %s: %s\nThreshold: %s\n"),
                 metric,
                 cur,
                 metric,

providers/tests/airbyte/triggers/test_airbyte.py~L210

         actual = await generator.asend(None)
         expected = TriggerEvent({
             "status": "error",
-            "message": f"Job run {self.JOB_ID} has not reached a terminal status "
-            f"after {end_time} seconds.",
+            "message": f"Job run {self.JOB_ID} has not reached a terminal status after {end_time} seconds.",
             "job_id": self.JOB_ID,
         })
         assert expected == actual

providers/tests/amazon/aws/secrets/test_systems_manager.py~L180

         mock_ssm_client.assert_called_once_with(service_name="ssm", use_ssl=False)
 
     @mock.patch(
-        "airflow.providers.amazon.aws.secrets.systems_manager."
-        "SystemsManagerParameterStoreBackend._get_secret"
+        "airflow.providers.amazon.aws.secrets.systems_manager.SystemsManagerParameterStoreBackend._get_secret"
     )
     def test_connection_prefix_none_value(self, mock_get_secret):
         """

providers/tests/amazon/aws/secrets/test_systems_manager.py~L197

         mock_get_secret.assert_not_called()
 
     @mock.patch(
-        "airflow.providers.amazon.aws.secrets.systems_manager."
-        "SystemsManagerParameterStoreBackend._get_secret"
+        "airflow.providers.amazon.aws.secrets.systems_manager.SystemsManagerParameterStoreBackend._get_secret"
     )
     def test_variable_prefix_none_value(self, mock_get_secret):
         """

providers/tests/amazon/aws/secrets/test_systems_manager.py~L214

         mock_get_secret.assert_not_called()
 
     @mock.patch(
-        "airflow.providers.amazon.aws.secrets.systems_manager."
-        "SystemsManagerParameterStoreBackend._get_secret"
+        "airflow.providers.amazon.aws.secrets.systems_manager.SystemsManagerParameterStoreBackend._get_secret"
     )
     def test_config_prefix_none_value(self, mock_get_secret):
         """

providers/tests/apache/hive/transfers/test_s3_to_hive.py~L170

 
     def test__match_headers(self):
         self.kwargs["field_dict"] = {"Sno": "BIGINT", "Some,Text": "STRING"}
-        assert S3ToHiveOperator(**self.kwargs)._match_headers([
-            "Sno",
-            "Some,Text",
-        ]), "Header row doesn't match expected value"
+        assert S3ToHiveOperator(**self.kwargs)._match_headers(["Sno", "Some,Text"]), (
+            "Header row doesn't match expected value"
+        )
         # Testing with different column order
-        assert not S3ToHiveOperator(**self.kwargs)._match_headers([
-            "Some,Text",
-            "Sno",
-        ]), "Header row doesn't match expected value"
+        assert not S3ToHiveOperator(**self.kwargs)._match_headers(["Some,Text", "Sno"]), (
+            "Header row doesn't match expected value"
+        )
         # Testing with extra column in header
-        assert not S3ToHiveOperator(**self.kwargs)._match_headers([
-            "Sno",
-            "Some,Text",
-            "ExtraColumn",
-        ]), "Header row doesn't match expected value"
+        assert not S3ToHiveOperator(**self.kwargs)._match_headers(["Sno", "Some,Text", "ExtraColumn"]), (
+            "Header row doesn't match expected value"
+        )
 
     def test__delete_top_row_and_compress(self):
         s32hive = S3ToHiveOperator(**self.kwargs)

providers/tests/databricks/operators/test_databricks.py~L1983

 
         with pytest.raises(TaskDeferred) as exec_info:
             operator.monitor_databricks_job()
-        assert isinstance(
-            exec_info.value.trigger, DatabricksExecutionTrigger
-        ), "Trigger is not a DatabricksExecutionTrigger"
+        assert isinstance(exec_info.value.trigger, DatabricksExecutionTrigger), (
+            "Trigger is not a DatabricksExecutionTrigger"
+        )
         assert exec_info.value.method_name == "execute_complete"
 
     @mock.patch("airflow.providers.databricks.operators.databricks.DatabricksHook")

providers/tests/dbt/cloud/triggers/test_dbt.py~L213

         actual = await generator.asend(None)
         expected = TriggerEvent({
             "status": "error",
-            "message": f"Job run {self.RUN_ID} has not reached a terminal status "
-            f"after {end_time} seconds.",
+            "message": f"Job run {self.RUN_ID} has not reached a terminal status after {end_time} seconds.",
             "run_id": self.RUN_ID,
         })
         assert expected == actual

providers/tests/docker/operators/test_docker.py~L468

         operator = DockerOperator(
             private_environment={"PRIVATE": "MESSAGE"}, image=TEST_IMAGE, task_id="unittest"
         )
-        assert operator._private_environment == {
-            "PRIVATE": "MESSAGE"
-        }, "To keep this private, it must be an underscored attribute."
+        assert operator._private_environment == {"PRIVATE": "MESSAGE"}, (
+            "To keep this private, it must be an underscored attribute."
+        )
 
     @mock.patch("airflow.providers.docker.operators.docker.StringIO")
     def test_environment_overrides_env_file(self, stringio_mock):

https://github.com/apache/airflow/blob/4d54cda4... *[Comment body truncated]*

@MichaReiser MichaReiser force-pushed the micha/format-implicit-concatenated-strings branch 3 times, most recently from c0565b9 to c7380d4 on October 16, 2024 08:48
@MichaReiser
Member Author

I like what I'm seeing in the diff. I don't like seeing another instability, ugh

@MichaReiser MichaReiser force-pushed the micha/format-implicit-concatenated-strings branch 6 times, most recently from cf7b995 to 54175dc on October 16, 2024 13:35
@MichaReiser
Member Author

MichaReiser commented Oct 16, 2024

This one seems unfortunate

         # Written on 06.06.2022 and can be removed after a couple of months if everything goes well
         assert (
             ZERO <= used_amount <= self._acquisitions_heap[0].acquisition_event.remaining_amount
-        ), f"Used amount must be in the interval [0, {self._acquisitions_heap[0].acquisition_event.remaining_amount}] but it was {used_amount} for {asset}"  # noqa: E501
+        ), (
+            f"Used amount must be in the interval [0, {self._acquisitions_heap[0].acquisition_event.remaining_amount}] but it was {used_amount} for {asset}"
+        )  # noqa: E501
 
         self._acquisitions_heap[0].acquisition_event.remaining_amount -= used_amount
         if self._acquisitions_heap[0].acquisition_event.remaining_amount == ZERO:

But I think it is related to f-string formatting

@MichaReiser MichaReiser force-pushed the micha/format-implicit-concatenated-strings branch 4 times, most recently from 690b548 to 845d8ac on October 18, 2024 13:48
@MichaReiser MichaReiser force-pushed the micha/format-implicit-concatenated-strings branch from c49f8c7 to 0602554 on October 21, 2024 12:15
@MichaReiser MichaReiser changed the base branch from main to micha/refactor-normalize October 21, 2024 12:17
Base automatically changed from micha/refactor-normalize to main October 21, 2024 19:23
@MichaReiser MichaReiser force-pushed the micha/format-implicit-concatenated-strings branch from 0602554 to 4310a02 on October 22, 2024 11:56
@MichaReiser MichaReiser changed the base branch from main to micha/f-string-alternate-quotes October 22, 2024 11:56
@MichaReiser
Member Author

This is mostly done. I plan to do some ecosystem testing tomorrow and do a final self-review before publishing.

@MichaReiser MichaReiser force-pushed the micha/format-implicit-concatenated-strings branch from 92ba8cd to c26a5f5 on October 22, 2024 16:12
Base automatically changed from micha/f-string-alternate-quotes to main October 23, 2024 05:57
@MichaReiser MichaReiser force-pushed the micha/format-implicit-concatenated-strings branch from c26a5f5 to f3256b8 on October 23, 2024 07:01
Comment on lines +432 to +438
// Can-omit layout is relevant for `"abcd".call`. We don't want to add unnecessary
// parentheses in this case.
if can_omit_optional_parentheses(expression, f.context()) {
optional_parentheses(&unparenthesized).fmt(f)
} else {
parenthesize_if_expands(&unparenthesized).fmt(f)
}
Member Author

This is the main change. All other changes replace `expression.format().with_options(Parentheses::Never)` with `unparenthesized`.

@AlexWaygood
Member

@AlexWaygood I don't expect you to review the code changes but I highly value your input on the assertion formatting changes.

Overall I find these changes, especially the assertion formatting, to be a significant improvement to readability.

I tried running this branch on some repos I have cloned locally, and a lot of the changes were related to this, but I found all of them to be improvements. I think there will undoubtedly be some specific patterns where formatting is likely to regress (such as item 3 in your PR description, assertions that end in parentheses), but I'm quite strongly in favour of this change overall. I also think the formatting for that specific case would be quite easy for a user to fix manually by splitting the string over multiple lines (but I don't think that's something that Ruff should try to do for the user):

assert graph.artifacts == [
    GraphArtifact(
        id=expected_top_level_artifact.id,
        created=expected_top_level_artifact.created,
        key=expected_top_level_artifact.key,
        type=expected_top_level_artifact.type,
        data=expected_top_level_artifact.data
        if expected_top_level_artifact.type == "progress"
        else None,
        is_latest=True,
    )
], (
    "Expected artifacts associated with the flow run "
    "but not with a task to be included at the roof of the graph."
)

Member

@charliermarsh charliermarsh left a comment

This is excellent. The code is so clear and easy to read (especially for the formatter). Well done with the comments, clarity, etc.

// because the value is no longer an implicit concatenated string.
// ```python
// ____aaa = "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaabbbbbbbbbbbbbbbbbbbbbbbbbbbvvvvvvvvvvvvvvv" # c
// ```
Member

(Very helpful.)

@charliermarsh
Member

I think the assert formatting is probably an improvement, though I hear your concerns. This one stuck out to me a bit:

-            assert isinstance(
-                document, dict
-            ), "document should be of type `dict[str,Any]`. But found: `{}`".format(
-                type(document)
+            assert isinstance(document, dict), (
+                "document should be of type `dict[str,Any]`. But found: `{}`".format(
+                    type(document)
+                )
             )

But it's actually not an increase in total lines, and the condition and message are more clearly separated than before.

Member

@dhruvmanila dhruvmanila left a comment

This is really thorough! Thanks for taking the time to add comments throughout the code and especially in tests. It was really easy for me to look at the formatted output directly in the tests because the comments described the original source code and what to expect.

Regarding the assert formatting, I too think that it improves readability, as it's easier to mentally separate the condition from the message.

crates/ruff_python_formatter/src/statement/stmt_assign.rs (outdated review comment, resolved)
@MichaReiser
Member Author

Okay. Let's merge this as is and revisit the assert formatting decision depending on the preview style feedback

@MichaReiser MichaReiser merged commit 73ee72b into main Oct 24, 2024
20 checks passed
@MichaReiser MichaReiser deleted the micha/format-implicit-concatenated-strings branch October 24, 2024 09:52
lmaotrigine added a commit to lmaotrigine/ruff that referenced this pull request Oct 26, 2024
* [red-knot] binary arithmetic on instances (#13800)

Co-authored-by: Alex Waygood <[email protected]>

* [red-knot] Fix edge case for binary-expression inference where the lhs and rhs are the exact same type (#13823)

## Summary

This fixes an edge case that @carljm and I missed when implementing
https://github.com/astral-sh/ruff/pull/13800. Namely, if the left-hand
operand is the _exact same type_ as the right-hand operand, the
reflected dunder on the right-hand operand is never tried:

```pycon
>>> class Foo:
...     def __radd__(self, other):
...         return 42
...         
>>> Foo() + Foo()
Traceback (most recent call last):
  File "<python-input-1>", line 1, in <module>
    Foo() + Foo()
    ~~~~~~^~~~~~~
TypeError: unsupported operand type(s) for +: 'Foo' and 'Foo'
```

This edge case _is_ covered in Brett's blog at
https://snarky.ca/unravelling-binary-arithmetic-operations-in-python/,
but I missed it amongst all the other subtleties of this algorithm. The
motivations and history behind it were discussed in
https://mail.python.org/archives/list/[email protected]/thread/7NZUCODEAPQFMRFXYRMGJXDSIS3WJYIV/
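
For contrast, a minimal illustration of the case where the reflected method *is* tried first, namely when the right-hand operand's type is a proper subclass of the left-hand operand's type and overrides the reflected dunder (standard Python semantics, shown here only as an illustration):

```py
class Base:
    def __add__(self, other):
        return "Base.__add__"

class Derived(Base):
    def __radd__(self, other):
        return "Derived.__radd__"

# Because `Derived` is a proper subclass of `Base` and defines `__radd__`,
# the reflected method wins over `Base.__add__`:
print(Base() + Derived())  # Derived.__radd__
```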

## Test Plan

I added an mdtest for this cornercase.

* [red-knot] Enhancing Diagnostics for Compare Expression Inference (#13819)

## Summary

- Refactored comparison type inference functions in `infer.rs`: Changed
the return type from `Option` to `Result` to lay the groundwork for
providing more detailed diagnostics.
- Updated diagnostic messages.

This is a small step toward improving diagnostics in the future.

Please refer to #13787

## Test Plan

mdtest included!

---------

Co-authored-by: Carl Meyer <[email protected]>

* [python_ast] Make the iter_mut functions public (#13542)

* [red-knot] Implement more types in binary and unary expressions (#13803)

Implemented some points from
https://github.com/astral-sh/ruff/issues/12701

- Handle `Unknown` and `Any` in unary operations
- Handle booleans in binary operations
- Handle instances in unary operations
- Consider division by `False` to be division by zero (see the sketch below)
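
A brief sketch of the kind of code these points cover (mdtest-style; the inferred types are my expectation, not output copied from this PR):

```py
def f(flag: bool) -> None:
    # Booleans act as 0/1 in arithmetic, so constant folding can produce a
    # precise literal here (expected: Literal[2]).
    reveal_type(True + True)

    # Dividing by `False` is dividing by zero and should now be reported.
    1 / False

    # Unary operators applied to instances (here a `bool`) are handled too.
    reveal_type(-flag)
```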

---------

Co-authored-by: Carl Meyer <[email protected]>
Co-authored-by: Alex Waygood <[email protected]>

* Update BREAKING_CHANGES.md for Ruff 0.7 (#13828)

* Bump MSRV to Rust 1.80 (#13826)

* Update Rust crate pep440_rs to 0.7.1 (#13654)

Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
Co-authored-by: Micha Reiser <[email protected]>

* [red-knot] Cleanup generated names of mdtest tests (#13831)

Co-authored-by: Alex Waygood <[email protected]>
Co-authored-by: Micha Reiser <[email protected]>

* Simplify iteration idioms (#13834)

Remove unnecessary uses of `.as_ref()`, `.iter()`, `&**` and similar, mostly in situations when iterating over variables. Many of these changes are only possible following #13826, when we bumped our MSRV to 1.80: several useful implementations on `&Box<[T]>` were only stabilised in Rust 1.80. Some of these changes we could have done earlier, however.

* Modernize build scripts (#13837)

Use the modern `cargo::KEY=VALUE` syntax that was stabilised in MSRV 1.77, rather than the deprecated `cargo:KEY=VALUE` syntax.

* Update dependency mdformat to v0.7.18 (#13843)

* Update dependency ruff to v0.7.0 (#13847)

* Update Rust crate libc to v0.2.161 (#13840)

* Update Rust crate anyhow to v1.0.90 (#13839)

* Update Rust crate proc-macro2 to v1.0.88 (#13841)

* Update Rust crate syn to v2.0.82 (#13842)

* Update Rust crate fern to 0.7.0 (#13844)

* Update Rust crate serde_json to v1.0.132 (#13848)

* Update Rust crate uuid to v1.11.0 (#13845)

* Update dependency tomli_w to v1.1.0 (#13849)

This PR contains the following updates:

| Package | Change |
|---|---|
| [tomli_w](https://redirect.github.com/hukkin/tomli-w) ([changelog](https://redirect.github.com/hukkin/tomli-w/blob/master/CHANGELOG.md)) | `==1.0.0` -> `==1.1.0` |

---

### Release Notes

<details>
<summary>hukkin/tomli-w (tomli_w)</summary>

###
[`v1.1.0`](https://redirect.github.com/hukkin/tomli-w/blob/HEAD/CHANGELOG.md#110)

[Compare
Source](https://redirect.github.com/hukkin/tomli-w/compare/1.0.0...1.1.0)

-   Removed
    -   Support for Python 3.7 and 3.8
-   Added
- Accept generic `collections.abc.Mapping`, not just `dict`, as input.
Thank you [Watal M. Iwasaki](https://redirect.github.com/heavywatal) for
the
        [PR](https://redirect.github.com/hukkin/tomli-w/pull/46).
- `indent` keyword argument for customizing indent width of arrays.
Thank you [Wim Jeantine-Glenn](https://redirect.github.com/wimglenn) for
the
        [PR](https://redirect.github.com/hukkin/tomli-w/pull/49).
-   Type annotations
- Type annotate `dump` function's output stream object as
`typing.IO[bytes]` (previously `typing.BinaryIO`)

</details>


Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>

* Update pre-commit dependencies (#13850)

This PR contains the following updates:

| Package | Type | Update | Change |
|---|---|---|---|
| [abravalheri/validate-pyproject](https://redirect.github.com/abravalheri/validate-pyproject) | repository | minor | `v0.20.2` -> `v0.21` |
| [astral-sh/ruff-pre-commit](https://redirect.github.com/astral-sh/ruff-pre-commit) | repository | minor | `v0.6.9` -> `v0.7.0` |
| [crate-ci/typos](https://redirect.github.com/crate-ci/typos) | repository | minor | `v1.25.0` -> `v1.26.0` |
| [executablebooks/mdformat](https://redirect.github.com/executablebooks/mdformat) | repository | patch | `0.7.17` -> `0.7.18` |

Note: The `pre-commit` manager in Renovate is not supported by the
`pre-commit` maintainers or community. Please do not report any problems
there, instead [create a Discussion in the Renovate
repository](https://redirect.github.com/renovatebot/renovate/discussions/new)
if you have any questions.

---

### Release Notes

<details>
<summary>abravalheri/validate-pyproject
(abravalheri/validate-pyproject)</summary>

###
[`v0.21`](https://redirect.github.com/abravalheri/validate-pyproject/releases/tag/v0.21)

[Compare
Source](https://redirect.github.com/abravalheri/validate-pyproject/compare/v0.20.2...v0.21)

#### What's Changed

- Added support PEP 735 by
[@&#8203;henryiii](https://redirect.github.com/henryiii) in
[https://github.com/abravalheri/validate-pyproject/pull/208](https://redirect.github.com/abravalheri/validate-pyproject/pull/208)
- Added support PEP 639 by
[@&#8203;henryiii](https://redirect.github.com/henryiii) in
[https://github.com/abravalheri/validate-pyproject/pull/210](https://redirect.github.com/abravalheri/validate-pyproject/pull/210)
- Renamed testing extra to test by
[@&#8203;henryiii](https://redirect.github.com/henryiii) in
[https://github.com/abravalheri/validate-pyproject/pull/212](https://redirect.github.com/abravalheri/validate-pyproject/pull/212)
-   General updates in CI setup

**Full Changelog**:
https://github.com/abravalheri/validate-pyproject/compare/v0.20.2...v0.21

</details>

<details>
<summary>astral-sh/ruff-pre-commit (astral-sh/ruff-pre-commit)</summary>

###
[`v0.7.0`](https://redirect.github.com/astral-sh/ruff-pre-commit/releases/tag/v0.7.0)

[Compare
Source](https://redirect.github.com/astral-sh/ruff-pre-commit/compare/v0.6.9...v0.7.0)

See: https://github.com/astral-sh/ruff/releases/tag/0.7.0

</details>

<details>
<summary>crate-ci/typos (crate-ci/typos)</summary>

###
[`v1.26.0`](https://redirect.github.com/crate-ci/typos/releases/tag/v1.26.0)

[Compare
Source](https://redirect.github.com/crate-ci/typos/compare/v1.25.0...v1.26.0)

#### \[1.26.0] - 2024-10-07

##### Compatibility

-   *(pre-commit)* Requires 3.2+

##### Fixes

- *(pre-commit)* Resolve deprecations in 4.0 about deprecated stage
names

</details>

<details>
<summary>executablebooks/mdformat (executablebooks/mdformat)</summary>

###
[`v0.7.18`](https://redirect.github.com/executablebooks/mdformat/compare/0.7.17...0.7.18)

[Compare
Source](https://redirect.github.com/executablebooks/mdformat/compare/0.7.17...0.7.18)

</details>


Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>

* Update NPM Development dependencies (#13851)

This PR contains the following updates:

| Package | Change |
|---|---|
| [@cloudflare/workers-types](https://redirect.github.com/cloudflare/workerd) | `4.20241004.0` -> `4.20241018.0` |
| [@types/react-dom](https://redirect.github.com/DefinitelyTyped/DefinitelyTyped/tree/master/types/react-dom) | `18.3.0` -> `18.3.1` |
| [@typescript-eslint/eslint-plugin](https://typescript-eslint.io/packages/eslint-plugin) | `8.8.0` -> `8.10.0` |
| [@typescript-eslint/parser](https://typescript-eslint.io/packages/parser) | `8.8.0` -> `8.10.0` |
| [eslint-plugin-react-hooks](https://react.dev/) | `^4.6.0` -> `^5.0.0` |
| [miniflare](https://redirect.github.com/cloudflare/workers-sdk/tree/main/packages/miniflare#readme) | `3.20240925.0` -> `3.20241011.0` |
| [tailwindcss](https://tailwindcss.com) | `3.4.13` -> `3.4.14` |
| [typescript](https://www.typescriptlang.org/) | `5.6.2` -> `5.6.3` |
| [vite](https://vite.dev) | `5.4.8` -> `5.4.9` |
| [wrangler](https://redirect.github.com/cloudflare/workers-sdk) | `3.80.0` -> `3.81.0` |

---

### Release Notes

<details>
<summary>cloudflare/workerd (@&#8203;cloudflare/workers-types)</summary>

###
[`v4.20241018.0`](https://redirect.github.com/cloudflare/workerd/compare/caeb4e0d9e8a7ecbef208e8c54c27bae7e412f7b...fa7168988f89ec72e218a0112be4f6f0229c2d6b)

[Compare
Source](https://redirect.github.com/cloudflare/workerd/compare/caeb4e0d9e8a7ecbef208e8c54c27bae7e412f7b...fa7168988f89ec72e218a0112be4f6f0229c2d6b)

###
[`v4.20241011.0`](https://redirect.github.com/cloudflare/workerd/compare/76198481858fce538e4efa2783c3844e38149227...caeb4e0d9e8a7ecbef208e8c54c27bae7e412f7b)

[Compare
Source](https://redirect.github.com/cloudflare/workerd/compare/76198481858fce538e4efa2783c3844e38149227...caeb4e0d9e8a7ecbef208e8c54c27bae7e412f7b)

</details>

<details>
<summary>typescript-eslint/typescript-eslint
(@&#8203;typescript-eslint/eslint-plugin)</summary>

###
[`v8.10.0`](https://redirect.github.com/typescript-eslint/typescript-eslint/blob/HEAD/packages/eslint-plugin/CHANGELOG.md#8100-2024-10-17)

[Compare
Source](https://redirect.github.com/typescript-eslint/typescript-eslint/compare/v8.9.0...v8.10.0)

##### 🚀 Features

- support TypeScript 5.6
([#&#8203;9972](https://redirect.github.com/typescript-eslint/typescript-eslint/pull/9972))

##### ❤️  Thank You

-   Josh Goldberg ✨

You can read about our [versioning
strategy](https://main--typescript-eslint.netlify.app/users/versioning)
and
[releases](https://main--typescript-eslint.netlify.app/users/releases)
on our website.

###
[`v8.9.0`](https://redirect.github.com/typescript-eslint/typescript-eslint/blob/HEAD/packages/eslint-plugin/CHANGELOG.md#890-2024-10-14)

[Compare
Source](https://redirect.github.com/typescript-eslint/typescript-eslint/compare/v8.8.1...v8.9.0)

##### 🩹 Fixes

- **eslint-plugin:** \[no-unnecessary-type-parameters] cannot assume
variables are either type or value

- **scope-manager:** \[no-use-before-define] do not treat nested
namespace aliases as variable references

- **eslint-plugin:** \[return-await] sync the behavior with
await-thenable

- **eslint-plugin:** \[prefer-literal-enum-member] report a different
error message when `allowBitwiseExpressions` is enabled

-   **eslint-plugin:** \[no-loop-func] sync from upstream base rule

- **eslint-plugin:** \[no-unused-vars] never report the naming of an
enum member

-   **eslint-plugin:** correct use-at-your-own-risk type definitions

-   **eslint-plugin:** handle unions in await...for

##### ❤️  Thank You

-   Abraham Guo
-   Anna Bocharova
-   Arya Emami
-   auvred
-   Joshua Chen
-   Kirk Waiblinger
-   Lotfi Meklati
-   mdm317
-   Ronen Amiel
-   Sukka
-   YeonJuan

You can read about our [versioning
strategy](https://main--typescript-eslint.netlify.app/users/versioning)
and
[releases](https://main--typescript-eslint.netlify.app/users/releases)
on our website.

###
[`v8.8.1`](https://redirect.github.com/typescript-eslint/typescript-eslint/blob/HEAD/packages/eslint-plugin/CHANGELOG.md#881-2024-10-07)

[Compare
Source](https://redirect.github.com/typescript-eslint/typescript-eslint/compare/v8.8.0...v8.8.1)

##### 🩹 Fixes

- **eslint-plugin:** stop warning on
[@&#8203;ts-nocheck](https://redirect.github.com/ts-nocheck) comments
which aren't at the beginning of the file

##### ❤️  Thank You

-   Brad Zacher
-   Ronen Amiel
-   WhitePiano

You can read about our [versioning
strategy](https://main--typescript-eslint.netlify.app/users/versioning)
and
[releases](https://main--typescript-eslint.netlify.app/users/releases)
on our website.

</details>

<details>
<summary>typescript-eslint/typescript-eslint
(@&#8203;typescript-eslint/parser)</summary>

###
[`v8.10.0`](https://redirect.github.com/typescript-eslint/typescript-eslint/blob/HEAD/packages/parser/CHANGELOG.md#8100-2024-10-17)

[Compare
Source](https://redirect.github.com/typescript-eslint/typescript-eslint/compare/v8.9.0...v8.10.0)

##### 🚀 Features

- support TypeScript 5.6
([#&#8203;9972](https://redirect.github.com/typescript-eslint/typescript-eslint/pull/9972))

##### ❤️  Thank You

-   Josh Goldberg ✨

You can read about our [versioning
strategy](https://main--typescript-eslint.netlify.app/users/versioning)
and
[releases](https://main--typescript-eslint.netlify.app/users/releases)
on our website.

###
[`v8.9.0`](https://redirect.github.com/typescript-eslint/typescript-eslint/blob/HEAD/packages/parser/CHANGELOG.md#890-2024-10-14)

[Compare
Source](https://redirect.github.com/typescript-eslint/typescript-eslint/compare/v8.8.1...v8.9.0)

This was a version bump only for parser to align it with other projects,
there were no code changes.

You can read about our [versioning
strategy](https://main--typescript-eslint.netlify.app/users/versioning)
and
[releases](https://main--typescript-eslint.netlify.app/users/releases)
on our website.

###
[`v8.8.1`](https://redirect.github.com/typescript-eslint/typescript-eslint/blob/HEAD/packages/parser/CHANGELOG.md#881-2024-10-07)

[Compare
Source](https://redirect.github.com/typescript-eslint/typescript-eslint/compare/v8.8.0...v8.8.1)

This was a version bump only for parser to align it with other projects,
there were no code changes.

You can read about our [versioning
strategy](https://main--typescript-eslint.netlify.app/users/versioning)
and
[releases](https://main--typescript-eslint.netlify.app/users/releases)
on our website.

</details>

<details>
<summary>facebook/react (eslint-plugin-react-hooks)</summary>

###
[`v5.0.0`](https://redirect.github.com/facebook/react/blob/HEAD/packages/eslint-plugin-react-hooks/CHANGELOG.md#500)

[Compare
Source](https://redirect.github.com/facebook/react/compare/a87edf62d7d69705ddbcec9a24f0780b3db7535f...eslint-plugin-react-hooks@5.0.0)

- **New Violations:** Component names now need to start with an
uppercase letter instead of a non-lowercase letter. This means `_Button`
or `_component` are no longer valid.
([@&#8203;kassens](https://redirect.github.com/kassens)) in
[#&#8203;25162](https://redirect.github.com/facebook/react/pull/25162)

<!---->

- Consider dispatch from `useActionState` stable.
([@&#8203;eps1lon](https://redirect.github.com/eps1lon) in
[#&#8203;29665](https://redirect.github.com/facebook/react/pull/29665))
- Add support for ESLint v9.
([@&#8203;eps1lon](https://redirect.github.com/eps1lon) in
[#&#8203;28773](https://redirect.github.com/facebook/react/pull/28773))
- Accept `as` expression in callback.
([@&#8203;StyleShit](https://redirect.github.com/StyleShit) in
[#&#8203;28202](https://redirect.github.com/facebook/react/pull/28202))
- Accept `as` expressions in deps array.
([@&#8203;StyleShit](https://redirect.github.com/StyleShit) in
[#&#8203;28189](https://redirect.github.com/facebook/react/pull/28189))
- Treat `React.use()` the same as `use()`.
([@&#8203;kassens](https://redirect.github.com/kassens) in
[#&#8203;27769](https://redirect.github.com/facebook/react/pull/27769))
- Move `use()` lint to non-experimental.
([@&#8203;kassens](https://redirect.github.com/kassens) in
[#&#8203;27768](https://redirect.github.com/facebook/react/pull/27768))
- Support Flow `as` expressions.
([@&#8203;cpojer](https://redirect.github.com/cpojer) in
[#&#8203;27590](https://redirect.github.com/facebook/react/pull/27590))
- Allow `useEffect(fn, undefined)`.
([@&#8203;kassens](https://redirect.github.com/kassens) in
[#&#8203;27525](https://redirect.github.com/facebook/react/pull/27525))
- Disallow hooks in async functions.
([@&#8203;acdlite](https://redirect.github.com/acdlite) in
[#&#8203;27045](https://redirect.github.com/facebook/react/pull/27045))
- Rename experimental `useEvent` to `useEffectEvent`.
([@&#8203;sebmarkbage](https://redirect.github.com/sebmarkbage) in
[#&#8203;25881](https://redirect.github.com/facebook/react/pull/25881))
- Lint for presence of `useEvent` functions in dependency lists.
([@&#8203;poteto](https://redirect.github.com/poteto) in
[#&#8203;25512](https://redirect.github.com/facebook/react/pull/25512))
- Check `useEvent` references instead.
([@&#8203;poteto](https://redirect.github.com/poteto) in
[#&#8203;25319](https://redirect.github.com/facebook/react/pull/25319))
- Update `RulesOfHooks` with `useEvent` rules.
([@&#8203;poteto](https://redirect.github.com/poteto) in
[#&#8203;25285](https://redirect.github.com/facebook/react/pull/25285))

</details>

<details>
<summary>cloudflare/workers-sdk (miniflare)</summary>

###
[`v3.20241011.0`](https://redirect.github.com/cloudflare/workers-sdk/blob/HEAD/packages/miniflare/CHANGELOG.md#3202410110)

[Compare
Source](https://redirect.github.com/cloudflare/workers-sdk/compare/[email protected]@3.20241011.0)

##### Patch Changes

-
[#&#8203;6961](https://redirect.github.com/cloudflare/workers-sdk/pull/6961)
[`5761020`](https://redirect.github.com/cloudflare/workers-sdk/commit/5761020cb41270ce872ad6c555b263597949c06d)
Thanks
[@&#8203;dependabot](https://redirect.github.com/apps/dependabot)! -
chore: update dependencies of "miniflare" package

    The following dependency versions have been updated:

    | Dependency                | From          | To            |
    | ------------------------- | ------------- | ------------- |
    | workerd                   | 1.20241004.0  | 1.20241011.1  |
    | [@&#8203;cloudflare/workers-types](https://redirect.github.com/cloudflare/workers-types) | ^4.20241004.0 | ^4.20241011.0 |

-
[#&#8203;6943](https://redirect.github.com/cloudflare/workers-sdk/pull/6943)
[`7859a04`](https://redirect.github.com/cloudflare/workers-sdk/commit/7859a04bcd4b2f1cafe67c371bd236acaf7a2d91)
Thanks [@&#8203;sdnts](https://redirect.github.com/sdnts)! - fix: local
queues now respect consumer max delays and retry delays properly

###
[`v3.20241004.0`](https://redirect.github.com/cloudflare/workers-sdk/blob/HEAD/packages/miniflare/CHANGELOG.md#3202410040)

[Compare
Source](https://redirect.github.com/cloudflare/workers-sdk/compare/[email protected]@3.20241004.0)

##### Patch Changes

-
[#&#8203;6949](https://redirect.github.com/cloudflare/workers-sdk/pull/6949)
[`c863183`](https://redirect.github.com/cloudflare/workers-sdk/commit/c86318354f1a6c0f5c096d6b2a884de740552a19)
Thanks
[@&#8203;dependabot](https://redirect.github.com/apps/dependabot)! -
chore: update dependencies of "miniflare" package

    The following dependency versions have been updated:

    | Dependency                | From          | To            |
    | ------------------------- | ------------- | ------------- |
    | workerd                   | 1.20240925.0  | 1.20241004.0  |
    | [@&#8203;cloudflare/workers-types](https://redirect.github.com/cloudflare/workers-types) | ^4.20240925.0 | ^4.20241004.0 |

###
[`v3.20240925.1`](https://redirect.github.com/cloudflare/workers-sdk/blob/HEAD/packages/miniflare/CHANGELOG.md#3202409251)

[Compare
Source](https://redirect.github.com/cloudflare/workers-sdk/compare/[email protected]@3.20240925.1)

##### Patch Changes

-
[#&#8203;6835](https://redirect.github.com/cloudflare/workers-sdk/pull/6835)
[`5c50949`](https://redirect.github.com/cloudflare/workers-sdk/commit/5c509494807a1c0418be83c47a459ec80126848e)
Thanks [@&#8203;emily-shen](https://redirect.github.com/emily-shen)! -
fix: rename asset plugin options slightly to match wrangler.toml better

    Renamed `path` -> `directory`, `bindingName` -> `binding`.

</details>

<details>
<summary>tailwindlabs/tailwindcss (tailwindcss)</summary>

###
[`v3.4.14`](https://redirect.github.com/tailwindlabs/tailwindcss/releases/tag/v3.4.14)

[Compare
Source](https://redirect.github.com/tailwindlabs/tailwindcss/compare/v3.4.13...v3.4.14)

##### Fixed

- Don't set `display: none` on elements that use `hidden="until-found"`
([#&#8203;14625](https://redirect.github.com/tailwindlabs/tailwindcss/pull/14625))

</details>

<details>
<summary>microsoft/TypeScript (typescript)</summary>

###
[`v5.6.3`](https://redirect.github.com/microsoft/TypeScript/compare/v5.6.2...d48a5cf89a62a62d6c6ed53ffa18f070d9458b85)

[Compare
Source](https://redirect.github.com/microsoft/TypeScript/compare/v5.6.2...v5.6.3)

</details>

<details>
<summary>vitejs/vite (vite)</summary>

###
[`v5.4.9`](https://redirect.github.com/vitejs/vite/releases/tag/v5.4.9)

[Compare
Source](https://redirect.github.com/vitejs/vite/compare/v5.4.8...v5.4.9)

Please refer to
[CHANGELOG.md](https://redirect.github.com/vitejs/vite/blob/v5.4.9/packages/vite/CHANGELOG.md)
for details.

</details>

<details>
<summary>cloudflare/workers-sdk (wrangler)</summary>

###
[`v3.81.0`](https://redirect.github.com/cloudflare/workers-sdk/blob/HEAD/packages/wrangler/CHANGELOG.md#3810)

[Compare
Source](https://redirect.github.com/cloudflare/workers-sdk/compare/[email protected]@3.81.0)

##### Minor Changes

-
[#&#8203;6990](https://redirect.github.com/cloudflare/workers-sdk/pull/6990)
[`586c253`](https://redirect.github.com/cloudflare/workers-sdk/commit/586c253f7de36360cab275cb1ebf9a2373fd4f4c)
Thanks
[@&#8203;courtney-sims](https://redirect.github.com/courtney-sims)! -
feat: Adds new detailed pages deployment output type

##### Patch Changes

-
[#&#8203;6963](https://redirect.github.com/cloudflare/workers-sdk/pull/6963)
[`a5ac45d`](https://redirect.github.com/cloudflare/workers-sdk/commit/a5ac45d7d5aa7a6b82de18a8cf14e6eabdd22e9e)
Thanks [@&#8203;RamIdeas](https://redirect.github.com/RamIdeas)! - fix:
make `wrangler dev --remote` respect wrangler.toml's `account_id`
property.

This was a regression in the `--x-dev-env` flow recently turned on by
default.

-
[#&#8203;6996](https://redirect.github.com/cloudflare/workers-sdk/pull/6996)
[`b8ab809`](https://redirect.github.com/cloudflare/workers-sdk/commit/b8ab8093b9011b5d7d47bcd31fa69cefa6c8fe2a)
Thanks [@&#8203;emily-shen](https://redirect.github.com/emily-shen)! -
fix: improve error messaging when accidentally using Workers commands in
Pages project

If we detect a Workers command used with a Pages project (i.e.
wrangler.toml contains `pages_output_build_dir`), error with Pages
version of command rather than "missing entry-point" etc.

###
[`v3.80.5`](https://redirect.github.com/cloudflare/workers-sdk/blob/HEAD/packages/wrangler/CHANGELOG.md#3805)

[Compare
Source](https://redirect.github.com/cloudflare/workers-sdk/compare/[email protected]@3.80.5)

##### Patch Changes

- Updated dependencies
\[[`5761020`](https://redirect.github.com/cloudflare/workers-sdk/commit/5761020cb41270ce872ad6c555b263597949c06d),
[`7859a04`](https://redirect.github.com/cloudflare/workers-sdk/commit/7859a04bcd4b2f1cafe67c371bd236acaf7a2d91)]:
    -   [email protected]

###
[`v3.80.4`](https://redirect.github.com/cloudflare/workers-sdk/blob/HEAD/packages/wrangler/CHANGELOG.md#3804)

[Compare
Source](https://redirect.github.com/cloudflare/workers-sdk/compare/[email protected]@3.80.4)

##### Patch Changes

-
[#&#8203;6937](https://redirect.github.com/cloudflare/workers-sdk/pull/6937)
[`51aedd4`](https://redirect.github.com/cloudflare/workers-sdk/commit/51aedd4333cce9ffa4f6834cdf19e22148dab7e9)
Thanks [@&#8203;lrapoport-cf](https://redirect.github.com/lrapoport-cf)!
- fix: show help when kv commands are run without parameters

- Updated dependencies
\[[`c863183`](https://redirect.github.com/cloudflare/workers-sdk/commit/c86318354f1a6c0f5c096d6b2a884de740552a19),
[`fd43068`](https://redirect.github.com/cloudflare/workers-sdk/commit/fd430687ec1431be6c3af1b7420278b636c36e59)]:
    -   [email protected]
    -   @cloudflare/[email protected]

###
[`v3.80.3`](https://redirect.github.com/cloudflare/workers-sdk/blob/HEAD/packages/wrangler/CHANGELOG.md#3803)

[Compare
Source](https://redirect.github.com/cloudflare/workers-sdk/compare/[email protected]@3.80.3)

##### Patch Changes

-
[#&#8203;6927](https://redirect.github.com/cloudflare/workers-sdk/pull/6927)
[`2af75ed`](https://redirect.github.com/cloudflare/workers-sdk/commit/2af75edb3c0722c04793c74f46aa099f4a3f27a9)
Thanks [@&#8203;emily-shen](https://redirect.github.com/emily-shen)! -
fix: respect `CLOUDFLARE_ACCOUNT_ID` with `wrangler pages project`
commands

Fixes
[#&#8203;4947](https://redirect.github.com/cloudflare/workers-sdk/issues/4947)

-
[#&#8203;6894](https://redirect.github.com/cloudflare/workers-sdk/pull/6894)
[`eaf71b8`](https://redirect.github.com/cloudflare/workers-sdk/commit/eaf71b86cc5650cffb54c942704ce3dd1b5ed6a7)
Thanks
[@&#8203;petebacondarwin](https://redirect.github.com/petebacondarwin)!
- fix: improve the rendering of build errors when bundling

-
[#&#8203;6920](https://redirect.github.com/cloudflare/workers-sdk/pull/6920)
[`2e64968`](https://redirect.github.com/cloudflare/workers-sdk/commit/2e649686c259c639701a62e754c53448cb694dfc)
Thanks [@&#8203;vicb](https://redirect.github.com/vicb)! - chore: update
unenv dependency version

Pulls in [feat(node/net): implement Server
mock](https://redirect.github.com/unjs/unenv/pull/316).

-
[#&#8203;6932](https://redirect.github.com/cloudflare/workers-sdk/pull/6932)
[`4c6aad0`](https://redirect.github.com/cloudflare/workers-sdk/commit/4c6aad05b919a56484d13e4a49b861dcafbc0a2c)
Thanks [@&#8203;vicb](https://redirect.github.com/vicb)! - fix: allow
`require`ing unenv aliased packages

    Before this PR `require`ing packages aliased in unenv would fail.
    That's because `require` would load the mjs file.

This PR wraps the mjs file in a virtual ES module to allow
`require`ing it.

###
[`v3.80.2`](https://redirect.github.com/cloudflare/workers-sdk/blob/HEAD/packages/wrangler/CHANGELOG.md#3802)

[Compare
Source](https://redirect.github.com/cloudflare/workers-sdk/compare/[email protected]@3.80.2)

##### Patch Changes

-
[#&#8203;6923](https://redirect.github.com/cloudflare/workers-sdk/pull/6923)
[`1320f20`](https://redirect.github.com/cloudflare/workers-sdk/commit/1320f20b38d7b4623fe21d38118bdc9fb8514a99)
Thanks [@&#8203;andyjessop](https://redirect.github.com/andyjessop)! -
chore: adds eslint-disable for ESLint error on empty typescript
interface in workers-configuration.d.ts

###
[`v3.80.1`](https://redirect.github.com/cloudflare/workers-sdk/blob/HEAD/packages/wrangler/CHANGELOG.md#3801)

[Compare
Source](https://redirect.github.com/cloudflare/workers-sdk/compare/[email protected]@3.80.1)

##### Patch Changes

-
[#&#8203;6908](https://redirect.github.com/cloudflare/workers-sdk/pull/6908)
[`d696850`](https://redirect.github.com/cloudflare/workers-sdk/commit/d6968507b7eab36abdc4d6c2ffe183788857d08c)
Thanks [@&#8203;penalosa](https://redirect.github.com/penalosa)! - fix:
debounce restarting worker on assets dir file changes when `--x-dev-env`
is enabled.

-
[#&#8203;6902](https://redirect.github.com/cloudflare/workers-sdk/pull/6902)
[`dc92af2`](https://redirect.github.com/cloudflare/workers-sdk/commit/dc92af28c572e3f7a03b84afd53f10a40ee2a5f8)
Thanks
[@&#8203;threepointone](https://redirect.github.com/threepointone)! -
fix: enable esbuild's keepNames: true to set .name on functions/classes

-
[#&#8203;6909](https://redirect.github.com/cloudflare/workers-sdk/pull/6909)
[`82180a7`](https://redirect.github.com/cloudflare/workers-sdk/commit/82180a7a7680028f2ea24ae8b1c8479d39627826)
Thanks [@&#8203;penalosa](https://redirect.github.com/penalosa)! - fix:
Various fixes for logging in `--x-dev-env`, primarily to ensure the
hotkeys don't wipe useful output and are cleaned up correctly

-
[#&#8203;6903](https://redirect.github.com/cloudflare/workers-sdk/pull/6903)
[`54924a4`](https://redirect.github.com/cloudflare/workers-sdk/commit/54924a430354c0e427770ee4289217660141c72e)
Thanks
[@&#8203;petebacondarwin](https://redirect.github.com/petebacondarwin)!
- fix: ensure that `alias` config gets passed through to the bundler
when using new `--x-dev-env`

Fixes
[#&#8203;6898](https://redirect.github.com/cloudflare/workers-sdk/issues/6898)

-
[#&#8203;6911](https://redirect.github.com/cloudflare/workers-sdk/pull/6911)
[`30b7328`](https://redirect.github.com/cloudflare/workers-sdk/commit/30b7328073c86ff9adebd594015bca6844da7163)
Thanks [@&#8203;emily-shen](https://redirect.github.com/emily-shen)! -
fix: infer experimentalJsonConfig from file extension

Fixes
[#&#8203;5768](https://redirect.github.com/cloudflare/workers-sdk/issues/5768)
- issue with vitest and Pages projects with wrangler.toml

- Updated dependencies
\[[`5c50949`](https://redirect.github.com/cloudflare/workers-sdk/commit/5c509494807a1c0418be83c47a459ec80126848e)]:
    -   [email protected]

</details>


Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>

* [`pylint`] - restrict `iteration-over-set` to only work on sets of literals (`PLC0208`) (#13731)
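
As I read this change, the rule now only fires when every element of the set literal is itself a literal, because only then is the suggested rewrite to a tuple guaranteed to be equivalent (a hedged sketch, not taken from the rule's documentation):

```py
x, y = 1, 2

for letter in {"a", "b", "c"}:  # still flagged: all elements are literals
    print(letter)

for value in {x, y}:  # no longer flagged: `x` and `y` could compare equal, so
    print(value)      # the set may deduplicate and a tuple would behave differently
```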

* [red-knot] Consistently rename BoolLiteral => BooleanLiteral (#13856)

## Summary

- Consistent naming: `BoolLiteral` => `BooleanLiteral` (it's mainly the
`Ty::BoolLiteral` variant that was renamed)

  I've tripped over this a few times now, so I thought I'd smooth it out.
- Add a new test case for `Literal[True] <: bool`, as suggested here:
https://github.com/astral-sh/ruff/pull/13781#discussion_r1804922827

* [red-knot] handle unions on the LHS of is_subtype_of (#13857)

## Summary

Just a drive-by change that occurred to me while I was looking at
`Type::is_subtype_of`: the existing pattern for unions on the *right
hand side*:
```rs
            (ty, Type::Union(union)) => union
                .elements(db)
                .iter()
                .any(|&elem_ty| ty.is_subtype_of(db, elem_ty)),
```
is not (generally) correct if the *left hand side* is a union.

## Test Plan

Added new test cases for `is_subtype_of` and `!is_subtype_of`

* Speed up mdtests (#13832)

Co-authored-by: Alex Waygood <[email protected]>

* formatter: Introduce `QuoteMetadata` (#13858)

* [red-knot] Improve chained comparisons handling (#13825)

## Summary

A small fix for comparisons of multiple comparators.
Instead of comparing each comparator to the leftmost item, we should
compare it to the closest item on the left.
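
This mirrors how Python itself evaluates chained comparisons: each operand is compared to its immediate left neighbour, never to the leftmost one.

```py
# `a < b < c` is evaluated as `(a < b) and (b < c)` (with `b` evaluated once),
# so the final comparison involves `b` and `c`, not `a` and `c`.
print(1 < 2 < 3)  # True:  (1 < 2) and (2 < 3)
print(1 < 3 < 2)  # False: (1 < 3) and (3 < 2)
```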

While implementing this, I noticed that we don’t yet narrow Yoda
comparisons (e.g., `True is x`), so I didn’t change that behavior in
this PR.

## Test Plan

Added some mdtests 🎉

* Speedup mdtest parser (#13835)

* Update cloudflare/wrangler-action action to v3.9.0 (#13846)

Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>

* [red-knot] Support for not-equal narrowing (#13749)

Add type narrowing for `!=` expression as stated in
#13694.

###  Test Plan

Add tests in new md format.

---------

Co-authored-by: David Peter <[email protected]>

* [red-knot] Report line numbers in mdtest relative to the markdown file, not the test snippet (#13804)

Co-authored-by: Alex Waygood <[email protected]>
Co-authored-by: Micha Reiser <[email protected]>
Co-authored-by: Carl Meyer <[email protected]>

* [red-knot] is_subtype_of: treat literals as subtype of 'object' (#13876)

Add the following subtype relations:
- `BooleanLiteral <: object`
- `IntLiteral <: object`
- `StringLiteral <: object`
- `LiteralString <: object`
- `BytesLiteral <: object`

Added a test case for `bool <: int`.

## Test Plan

New unit tests.

* ci(docker): incorporate docker release enhancements from uv (#13274)

## Summary

This PR updates `ruff` to match `uv` updated [docker releases
approach](https://github.com/astral-sh/uv/blob/main/.github/workflows/build-docker.yml).
It's a combined PR with changes from these PR's
* https://github.com/astral-sh/uv/pull/6053
* https://github.com/astral-sh/uv/pull/6556
* https://github.com/astral-sh/uv/pull/6734
* https://github.com/astral-sh/uv/pull/7568

Summary of changes / features

1. This change would publish an additional tag that includes only `major.minor`.

    For a release with `x.y.z`, this would publish the tags:

    * ghcr.io/astral-sh/ruff:latest
    * ghcr.io/astral-sh/ruff:x.y.z
    * ghcr.io/astral-sh/ruff:x.y

2. Parallelizes multi-platform builds using multiple workers (hence the
new docker-build / docker-publish jobs), which cuts docker releases time
in half.

3. This PR introduces additional images with the ruff binaries from
scratch for both amd64/arm64 and makes the mapping easy to configure by
generating the Dockerfile on the fly. This approach focuses on
minimizing CI time by taking advantage of dedicating a worker per
mapping (20-30s~ per job). For example, on release `x.y.z`, this will
publish the following image tags with format
`ghcr.io/astral-sh/ruff:{tag}` with manifests for both amd64/arm64. This
also include `x.y` tags for each respective additional tag. Note, this
version does not include the python based images, unlike `uv`.

* From **scratch**: `latest`, `x.y.z`, `x.y` (currently being published)
* From **alpine:3.20**: `alpine`, `alpine3.20`, `x.y.z-alpine`,
`x.y.z-alpine3.20`
* From **debian:bookworm-slim**: `debian-slim`, `bookworm-slim`,
`x.y.z-debian-slim`, `x.y.z-bookworm-slim`
* From **buildpack-deps:bookworm**: `debian`, `bookworm`,
`x.y.z-debian`, `x.y.z-bookworm`

4. This PR also fixes `org.opencontainers.image.version` for all tags
(including the one from `scratch`) to contain the right release version
instead of branch name `main` (current behavior).

    ```
    > docker inspect ghcr.io/astral-sh/ruff:0.6.4 | jq -r '.[0].Config.Labels'
    {
      ...
      "org.opencontainers.image.version": "main"
    }
    ```

Closes https://github.com/astral-sh/ruff/issues/13481

## Test Plan

Approach mimics `uv` with almost no changes so risk is low but I still
tested the full workflow.

* I have a working CI release pipeline on my fork run
https://github.com/samypr100/ruff/actions/runs/10966657733
* The resulting images were published to
https://github.com/samypr100/ruff/pkgs/container/ruff

* Fix `D204`'s documentation to correctly mention the conventions when it is enabled (#13867)

* [red-knot] Treat empty intersection as 'object', fix intersection simplification (#13880)

## Summary

- Properly treat the empty intersection as being of type `object`.
- Consequently, change the simplification method to explicitly add
`Never` to the positive side of the intersection when collapsing a type
such as `int & str` to `Never`, as opposed to just clearing both the
positive and the negative side.
- Minor code improvement in `bindings_ty`: use `peekable()` to check
whether the iterator over constraints is empty, instead of handling
first and subsequent elements separately.

fixes #13870

## Test Plan

- New unit tests for `IntersectionBuilder` to make sure the empty
intersection represents `object`.
- Markdown-based regression test for the original issue in #13870

* [red-knot] rename {Class,Module,Function} => {Class,Module,Function}Literal (#13873)

## Summary

* Rename `Type::Class` => `Type::ClassLiteral`
* Rename `Type::Function` => `Type::FunctionLiteral`
* Do not rename `Type::Module`
* Remove `*Literal` suffixes in `display::LiteralTypeKind` variants, as
per clippy suggestion
* Get rid of `Type::is_class()` in favor of `is_subtype_of(…, 'type')`;
modify `is_subtype_of` to support this.
* Add new `Type::is_xyz()` methods and use them instead of matching on
`Type` variants.

closes #13863 

## Test Plan

New `is_subtype_of_class_literals` unit test.

---------

Co-authored-by: Alex Waygood <[email protected]>

* Alternate quotes for strings inside f-strings in preview (#13860)

* [red-knot] Use track_caller for expect_ methods (#13884)

## Summary

A minor quality-of-life improvement: add the
[`#[track_caller]`](https://doc.rust-lang.org/reference/attributes/codegen.html#the-track_caller-attribute)
attribute to `Type::expect_xyz()` methods and some `TypeInference` methods so that the panic location
is reported one level higher up in the stack trace.

Before: the location is reported inside the `Type::expect_class_literal()`
method, which is not very useful.
```
thread 'types::infer::tests::deferred_annotation_builtin' panicked at crates/red_knot_python_semantic/src/types.rs:304:14:
Expected a Type::ClassLiteral variant
```

After: the location is reported at the `Type::expect_class_literal()` call site,
where the error was actually made.
```
thread 'types::infer::tests::deferred_annotation_builtin' panicked at crates/red_knot_python_semantic/src/types/infer.rs:4302:14:
Expected a Type::ClassLiteral variant
```

## Test Plan

Called `expect_class_literal()` on something that's not a
`Type::ClassLiteral` and saw that the error was reported at the call
site.

* [`flake8-type-checking`] Support auto-quoting when annotations contain quotes (#11811)

## Summary

This PR updates the fix generation logic for auto-quoting an annotation
to generate an edit even when there's a quote character present.

The logic uses the visitor pattern, maintaining its state about where it
is and generating the string value one node at a time. This can be
considered a specialized form of `Generator`. The state to maintain is
whether we're currently inside a `typing.Literal` or
`typing.Annotated`, because the string values in those types should not be
un-quoted, i.e., `Generic[Literal["int"]]` should become
`"Generic[Literal['int']]"`; the quotes inside the `Literal` should be
preserved.
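
A rough before/after illustration of the new fix (the typing-only import
and function below are made up for the example):

```py
from typing import TYPE_CHECKING, Literal

if TYPE_CHECKING:
    from collections.abc import Mapping  # typing-only import

# Before this change, an annotation that already contained quotes, such as
#
#     def lookup(registry: Mapping[Literal["a", "b"], int]) -> None: ...
#
# produced no fix. Now the whole annotation is quoted, while the strings inside
# `Literal` keep their value and switch to single quotes so the result stays valid:
def lookup(registry: "Mapping[Literal['a', 'b'], int]") -> None: ...
```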

Fixes: https://github.com/astral-sh/ruff/issues/9137

## Test Plan

Add various test cases to validate this change and validate the snapshots.
There are no ecosystem changes to go through.

---------

Signed-off-by: Shaygan <[email protected]>
Co-authored-by: Dhruv Manilawala <[email protected]>

* Fix stale syntax errors in playground (#13888)

* Fix E221 and E222 to flag missing or extra whitespace around `==` operator (#13890)

* [red-knot] Type narrowing for `isinstance` checks (#13894)

## Summary

Add type narrowing for `isinstance(object, classinfo)` [1] checks:
```py
x = 1 if flag else "a"

if isinstance(x, int):
    reveal_type(x)  # revealed: Literal[1]
```

closes #13893

[1] https://docs.python.org/3/library/functions.html#isinstance

## Test Plan

New Markdown-based tests in `narrow/isinstance.md`.

---------

Co-authored-by: Alex Waygood <[email protected]>

* Remove "default" remark from `ruff check` (#13900)

## Summary

`ruff check` has not been the default in a long time. However, the help
message and code comment still designate it as the default. The remark
should have been removed in the deprecation PR #10169.

## Test Plan

Not tested.

* Use referential equality in `traversal` helper methods (#13895)

* Join implicit concatenated strings when they fit on a line (#13663)

* [red-knot] Infer subscript expression types for bytes literals (#13901)

## Summary

Infer subscript expression types for bytes literals:
```py
b = b"\x00abc\xff"

reveal_type(b[0])  # revealed: Literal[b"\x00"]
reveal_type(b[1])  # revealed: Literal[b"a"]
reveal_type(b[-1])  # revealed: Literal[b"\xff"]
reveal_type(b[-2])  # revealed: Literal[b"c"]

reveal_type(b[False])  # revealed: Literal[b"\x00"]
reveal_type(b[True])  # revealed: Literal[b"a"]
```


part of #13689
(https://github.com/astral-sh/ruff/issues/13689#issuecomment-2404285064)

## Test Plan

- New Markdown-based tests (see `mdtest/subscript/bytes.md`)
- Added missing test for `string_literal[bool_literal]`

* [red-knot] Format mdtest Python snippets more concisely (#13905)

* Fix preview style name in `can_omit_parentheses` to `is_f_string_formatting_enabled` (#13907)

* Fix `normalize` arguments when `fstring_formatting` is disabled (#13910)

* Bump version to 0.7.1 (#13913)

* Enable nursery rules: 'redundant_clone', 'debug_assert_with_mut_call', and 'unused_peekable' (#13920)

* [red-knot] knot benchmark: fix `--knot-path` arg (#13923)

## Summary

Previously, this would fail with

```
AttributeError: 'str' object has no attribute 'is_file'
```

if I tried to use the `--knot-path` option. I wish we had a type checker
for Python*.
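
The likely shape of the problem and of the fix (a sketch, not the actual
benchmark script): the `argparse` value arrived as a plain `str`, while
later code calls `Path` methods such as `.is_file()` on it. Converting
at parse time avoids the error:

```py
import argparse
from pathlib import Path

parser = argparse.ArgumentParser()
# `type=Path` converts the raw string, so later `.is_file()` calls succeed.
parser.add_argument("--knot-path", type=Path)

args = parser.parse_args(["--knot-path", "~/.cargo-target/release/red_knot"])
print(args.knot_path.is_file())  # `knot_path` is a `Path`, not a `str`
```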

## Test Plan

```sh
uv run benchmark --knot-path ~/.cargo-target/release/red_knot
```

\* to be fair, this would probably require special handling for
`argparse` in the type checker.

* [red-knot] Infer `Todo`, not `Unknown`, for PEP-604 unions in annotations (#13908)

* [red-knot] Remove lint-phase (#13922)

Co-authored-by: Alex Waygood <[email protected]>

* Docs: Add GitLab CI/CD to integrations. (#13915)

* [red-knot] Type narrow in else clause (#13918)

## Summary

Add support for type narrowing in `elif` and `else` scopes as part of
#13694.
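
An illustrative snippet in the style of the existing mdtests above (not
taken verbatim from the new tests):

```py
x = 1 if flag else "a"

if isinstance(x, int):
    reveal_type(x)  # revealed: Literal[1]
else:
    reveal_type(x)  # revealed: Literal["a"]
```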

## Test Plan

- mdtest
- builder unit test for union negation.

---------

Co-authored-by: Carl Meyer <[email protected]>

---------

Signed-off-by: Shaygan <[email protected]>
Co-authored-by: Carl Meyer <[email protected]>
Co-authored-by: Alex Waygood <[email protected]>
Co-authored-by: cake-monotone <[email protected]>
Co-authored-by: Neil Mitchell <[email protected]>
Co-authored-by: Shaygan Hooshyari <[email protected]>
Co-authored-by: Micha Reiser <[email protected]>
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
Co-authored-by: Aditya Pratap Singh <[email protected]>
Co-authored-by: Steve C <[email protected]>
Co-authored-by: David Peter <[email protected]>
Co-authored-by: TomerBin <[email protected]>
Co-authored-by: Alex <[email protected]>
Co-authored-by: David Peter <[email protected]>
Co-authored-by: aditya pillai <[email protected]>
Co-authored-by: Carl Meyer <[email protected]>
Co-authored-by: samypr100 <[email protected]>
Co-authored-by: Dhruv Manilawala <[email protected]>
Co-authored-by: Mihai Capotă <[email protected]>
Co-authored-by: Jonas Vacek <[email protected]>