[SPARK-25299] Yet another attempt to integrate API with scheduler #559

Closed
wants to merge 5 commits into from

Conversation


@yifeih yifeih commented May 24, 2019

No description provided.


@mccheah mccheah left a comment


I'll look more thoroughly later, just wanted to get the general philosophy of this approach. This looks sound to me - @squito should give another opinion. This isn't comprehensive - e.g. you can't use this to continuously add new locations for a given block after it's been committed once, but I think it goes a long way to solving the numerous contradictions we had with our other tries at this.

        incrementEpoch()
      case None =>
        throw new SparkException("unregisterMapOutput called for nonexistent shuffle ID")
    }
  }

  def removeMapAtLocation(shuffleLoc: ShuffleLocation): Unit = {
    shuffleStatuses.valuesIterator.foreach { mapStatuses =>
      if (shuffleLocationComponents.isDefined) {

Can just do shuffleLocationComponents.foreach
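For illustration, the suggested idiom would look roughly like the sketch below (the closure body is assumed; only the shuffleLocationComponents Option and shuffleLoc come from the snippet above):

  // Iterate over the Option directly instead of checking isDefined first.
  shuffleLocationComponents.foreach { components =>
    // ... use `components` to decide which map outputs at shuffleLoc to drop ...
  }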

_shuffleDriverComponents.initializeApplication().asScala.foreach {
_shuffleDataIo = maybeIO.head
maybeIO.head.driver()
.initializeApplication().asScala.foreach {

Indentation seems weird

@@ -81,16 +82,17 @@ case object Resubmitted extends TaskFailedReason {
*/
@DeveloperApi
case class FetchFailed(
-    bmAddress: BlockManagerId, // Note that bmAddress can be null
+    shuffleLocation: ShuffleLocation, // Note that bmAddress can be null

Let's make this shuffleLocation an Option as well.

@yifeih

yifeih commented May 24, 2019

This is pretty similar to #548 except that we move the port/host logic behind the API. Thanks to @mccheah for the idea!

This gives us a cleaner API, allowing the plugin to define the logic for whether to remove the MapStatus or not based on the implementation and structure of ShuffleLocation and MapShuffleLocations.

We avoid putting the shouldRemoveMapOutputOnLostBlock() logic inside of MapShuffleLocations directly so that MapShuffleLocations stays an immutable object (instead of something with mutable state, which would further complicate the plugin implementation). Furthermore, placing it in ShuffleLocationComponents allows extensibility for cases where the immutable MapShuffleLocations object might not have all the information necessary to decide whether to remove a MapStatus. The individual file server case is one example: suppose you had the following in your MapOutputTracker:

  • Map task 1
      - Block 1: [hostA, hostB]
      - Block 2: [hostA, hostB]
  • Map task 2
      - Block 1: [hostA, hostC]
      - Block 2: [hostA, hostC]

If we get a FetchFailed([hostA, hostB]), shouldRemoveMapOutputOnLostBlock() wouldn't remove the MapStatus associated with map task 2, because both of its blocks can still be retrieved from hostC. Suppose the scheduler later gets a FetchFailure([hostC, hostD]). It again doesn't mark map task 2's MapStatus as invalid, because the MapShuffleLocations object itself doesn't know that hostA has already been invalidated. In other words, the MapShuffleLocations object on its own doesn't have enough state to know whether a given lost location should result in deleting that MapStatus entry.

By moving the logic into a separate entity like ShuffleLocationComponents, that object can keep bookkeeping about which hosts have been blacklisted, allowing it to apply additional filtering when deciding whether a MapStatus should be removed.
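For illustration only, here is a minimal sketch of what that bookkeeping could look like. The class and method names come from this discussion, but the parameters and the hostsIn() helper are assumptions, not the PR's actual API:

  import scala.collection.mutable

  // Hypothetical sketch: remember hosts that have produced fetch failures so that a
  // later failure can take earlier losses into account.
  class FileServerShuffleLocationComponents extends ShuffleLocationComponents {
    private val lostHosts = mutable.Set.empty[String]

    override def shouldRemoveMapOutputOnLostBlock(
        lostLocation: ShuffleLocation,                 // assumed parameter
        blockLocation: ShuffleLocation): Boolean = {   // assumed parameter
      lostHosts ++= hostsIn(lostLocation)
      // Only ask for the MapStatus to be removed once every host holding the block is gone.
      hostsIn(blockLocation).forall(lostHosts.contains)
    }

    // Assumed helper: the host names encoded in a (possibly multi-host) ShuffleLocation.
    private def hostsIn(location: ShuffleLocation): Seq[String] = ???
  }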

Furthermore, we were hesitant to place ShuffleLocationComponents inside of ShuffleDriverComponents because of the order of initialization within Spark. Currently, the SparkContext first initializes the SparkEnv and then calls ShuffleDriverComponents#initializeApplication(). This ordering makes sense because some implementations, such as the K8s individual file server implementation, might need to retrieve modules from SparkEnv (https://github.com/mccheah/spark/pull/6/files#diff-1a614b54924c44438c4301263685e461R42). However, the MapOutputTracker is initialized inside the SparkEnv, and we'd ideally want to pass the map status removal logic to the MapOutputTracker. If we placed that logic inside of ShuffleDriverComponents, we would either need to mandate that SparkEnv is never used inside of ShuffleDriverComponents and initialize it before SparkEnv, or keep initializing it afterwards and pass it to the MapOutputTracker later through a MapOutputTracker.initialize() method.

@mccheah

mccheah commented May 25, 2019

Think @ifilonenko will want to look at this too

  def removeMapOutput(mapId: Int, reduceId: Int, shuffleLoc: ShuffleLocation)
    : Unit = synchronized {
    if (mapStatuses(mapId) != null && mapStatuses(mapId).mapShuffleLocations != null &&
        mapStatuses(mapId).mapShuffleLocations.getLocationForBlock(reduceId) == shuffleLoc) {
      _numAvailableOutputs -= 1
      mapStatuses(mapId) = null

I think I understand what's going on with removeMapAtLocation, but I'm confused by this part. When there is a fetch failure from host X for map M from reduce R, the DAGScheduler removes shuffle outputs in two different ways (no idea why) -- (1) it uses the path you're changing below in removeMapAtLocation to remove all map output on host X; I see how you're moving that logic into the ShuffleLocationComponent. (2) Here, it removes map M from host X. Before this change, it removed the entire map output of map M. You're changing it to take in the reduce R as a parameter, but still removing all map output of map M (mapStatuses(mapId) = null), which doesn't seem consistent.

Or is there some special logic in the extra condition mapStatuses(mapId).mapShuffleLocations.getLocationForBlock(reduceId) == shuffleLoc? I guess I don't understand when it would be true or false -- seems like it should always be true. Can one ShuffleLocation really represent multiple locations? I'd still expect them to be equal here, as the mapOutputTracker would have stored multiple locations, and the fetch failure would also send back multiple locations, right?

(above discussion ignores the host / executor distinction for external shuffle service in current implementation just to keep things "simple")

yifeih (Author) replied:

On the first point, about this function taking a reduce parameter R: it uses R to validate that the ShuffleLocation does indeed exist in the MapStatus for mapper M. I was doing this to mimic the previous behavior, where you check that the bmAddress exists in the MapStatus before removing it, although I wasn't exactly sure why one would need the check (could it be that we might have received an obsolete FetchFailed?). However, you're right that I can replace the mapStatuses(mapId).mapShuffleLocations.getLocationForBlock(reduceId) == shuffleLoc logic with something that calls into the logic inside ShuffleLocationComponents, which would be more consistent.

On the note about ShuffleLocation representing multiple locations: yup, that's right. I tried to code up what that would look like in DAGSchedulerFileServerSuite. A fetch failure would then send back a single ShuffleLocation that encodes multiple host/port combos.
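For illustration, a hypothetical multi-host location could look like the sketch below (the class name and field are assumptions, not code from this PR):

  // Hypothetical sketch: one ShuffleLocation that encodes several host/port replicas.
  case class MultiHostShuffleLocation(hostPorts: Seq[(String, Int)]) extends ShuffleLocation

With a case class like this, equality is structural over hostPorts, so whether the order of the combos matters depends on how they are stored (a Seq compares in order, a Set would not) -- which is essentially the question raised in the next reply.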

Reviewer reply:

Can an obsolete FetchFailed be a problem? I would leave the check.

Does order in the multiple host/port combos matter in this equality check?

complete(taskSets(0), Seq((Success, mapStatus), (Success, mapStatus)))

// the 2nd ResultTask failed. This removes the first executor, but the
// other task is still intact because it was uploaded to the remove dfs

I think you mean "This removes the map output from the first task, but the map output from the second task is still intact because it was uploaded to the remote resilient datastore".

But -- how do you know that it was uploaded successfully? The executor might have been lost before that upload succeeded too, right?

yifeih (Author) replied:

Yeah, you're right, it definitely could have been lost too, but it's assuming the best-case scenario right now... I'm wondering whether there should be logic inside of ShuffleLocationComponents to ask the dfs whether the other MapStatus blocks exist or not. Otherwise, the driver is completely in the dark when it comes to answering that question, and it would either 1) assume the best case (that the blocks do exist) and sacrifice an attemptId increment to retrying the first mapper that definitely did fail, or 2) assume the worst case and recalculate everything on that executor, which might do a lot more work than necessary. Neither 1 nor 2 seems ideal.

Reviewer reply:

I think you pointed out the problem with (1) on another PR or doc somewhere -- if it's really gone, then you spend a lot of time retrying to discover all of the blocks which have been lost, and it also might put you over the retry limit for just one failure. This isn't that unlikely a scenario -- we're already talking about dealing with some sort of failure here, so it doesn't seem unreasonable that the node was slow to do the async backups before that (e.g. because it was overloaded or had some hardware which makes it limp along, etc.).

Reviewer reply:

In the async backup-to-dfs case, I thought the shuffle fetcher can make no assumption that the backup write was successful. The best way to know whether a block exists is to do simultaneous fetches on all host:port combinations in the shuffle location until one returns a result. I don't think it is necessary for the driver to have a heartbeat check on whether the dfs contains the MapStatus block. The usual scenario would be that you read from local and only use the dfs as backup. W.r.t. the cases you listed, for no blocks to exist at all, your local blocks would have to be missing and the backup would have to have failed (if local is missing, it probably wasn't immediate... maybe a cleanup script runs after backup completes, or the node fails after a long time), so I think you can assume there is a catastrophic failure on the dfs side if the backup is missing the block, which I would assume would kick off case 2.

I agree that, in reference to @squito's question, it is safe to say that this is testing the case where the write was successful.

    val actualShuffleLocations = mapOutputTracker.shuffleStatuses(shuffleId).mapStatuses
      .map(mapStatus => {
        if (mapStatus == null) {
          return null

Not sure what this is supposed to be doing -- the method's return type is Unit, so this is strange. Is this supposed to be fail(s"missing map status for $shuffleId")? Or should the map above be a flatMap and this gets ignored?

yifeih (Author) replied:

It's mapping all the mapStatuses to their corresponding mapShuffleLocations, but a mapStatus could be null since the shuffle writer plugin doesn't necessarily have to return a MapShuffleLocations upon completion of the write. It maps the nulls to null so I can assert that the nulls exist in the mapOutputTracker.

Reviewer reply:

I'm still confused. Don't you see a warning message from the compiler like

warning: enclosing method assertMapOutputTrackerContains has result type Unit: return value discarded
             return null

?

If you've got a mapStatuses of Array(someMapStatus, null, anotherMapStatus), what do you want this method to do? Is it supposed to skip the assert below entirely? (I think that is what it is doing.) Do you want it to check the non-null members (e.g. so you'd only expect it to pass if the set had two elements, not three)?

Or do you want it to check the mapShuffleLocations, unless it's null, in which case keep it null? E.g. the check would pass if set == Array(someShuffleLocations, null, anotherShuffleLocations). If it's this, then I think you want

map { mapStatus =>
  if (mapStatus == null) {
    null
  } else {
    mapStatus.mapShuffleLocations
  }
}

or even map { mapStatus => Option(mapStatus).map(_.mapShuffleLocations).getOrElse(null)}

}
}

class FileServerShuffleLocationComponents extends ShuffleLocationComponents {

General comment -- one thing these added tests are missing is cases involving multiple shuffles. In particular, this is important for understanding what you do with shuffle data that is not involved in the current shuffle, e.g. rddA --> (groupBy userId) --> rddB --> (groupBy date) --> rddC --> (result). If you get a fetch failure when you're trying to compute rddC, what do you assume about the shuffle output generated from the first shuffle on the same host?

I don't think these tests are actually really going through the removeMapAtLocations code you added because they are only touching data of the active shuffle (or maybe it is going through the code, but not in a very interesting way because the data has already been removed)

yifeih (Author) replied:

I think they are going through the removeMapAtLocations codepath because that's the only way you're going to remove other mapStatuses (mapOutputTracker.unregisterMapOutput() only removes the mapId that failed, and the third test here asserts that two MapStatuses are removed from a single fetch failure). However, you are right that the test cases don't deal with multiple shuffleIds.

assertMapOutputTrackerContains(shuffleId, Seq(null, null))

scheduler.resubmitFailedStages()
complete(taskSets(2), Seq((Success, mapStatus1), (Success, mapStatus2)))

Want to make sure I understand what is going on here correctly, especially with storing each (map, reduce) shuffle block in its own location, as opposed to just storing the entire map output together. When these tasks re-run, they're going to regenerate the entire map output, even though only one reduce is actually missing. So now the shuffle server needs to deal with getting another version of the output for a (map, reduce) block which was fine before. I think, to be consistent with the other ways Spark handles non-deterministic output (e.g. from a repartition()), it needs to take all of the latest output and throw away whatever it had before.

Does that sound right? I guess it's OK as long as it's clearly documented, but it does seem like an extra headache for implementations.

yifeih (Author) replied:

Yea, that's how we've been thinking about it too. It's true that when a single reduce is missing, the entire map output will need to be recomputed.

Because of this "overwriting" of still-existing data, we were talking about adding an extra "attemptId" to the API (http://apache-spark-developers-list.1001551.n3.nabble.com/DISCUSS-SPARK-25299-SPIP-Shuffle-storage-API-td27209.html) to ensure that we don't read half-overwritten data.

Also, I'm not sure why it would be an extra headache for the implementations? Doesn't the scheduler already handle invalidating the previously-computed data in the case of non-deterministic outputs? In the case of deterministic outputs, a successful reducer task that completed before a FetchFailed should still be fine and doesn't need to be recomputed, and the scheduler handles that right now, right?

Reviewer reply:

Yeah, I guess I shouldn't say it's a headache for implementations, that is an exaggeration -- it just seems like an extra thing to keep straight and a bit more confusing, to me anyway.


// Perform map task
val mapStatus1 = makeFileServerMapStatus("hostA", "hostB", "hostC", "hostD")
val mapStatus2 = makeFileServerMapStatus("hostA", "hostB", "hostC", "hostE")

Super-nit: the behavior would be a little more obvious, to me anyway, if this took two tuples as arguments instead, e.g. makeFileServerMapStatus(("hostA", "hostB"), ("hostC", "hostD")). On first read I thought there were 4 replicas and I was quite confused.
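That is, a hypothetical helper signature along these lines (names assumed, sketch only):

  // Two (host, host) replica pairs, one per block, instead of four flat host arguments.
  def makeFileServerMapStatus(block1Hosts: (String, String), block2Hosts: (String, String)): MapStatus = ???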

@@ -848,9 +848,9 @@ private[spark] class TaskSetManager(
}
isZombie = true

-    if (fetchFailed.bmAddress != null) {
+    if (fetchFailed.maybeBmAddr != null && fetchFailed.maybeBmAddr.isDefined) {

@ifilonenko ifilonenko May 29, 2019


Why not .foreach() here as well?
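That is, assuming maybeBmAddr is an Option[BlockManagerId], something like this sketch:

  fetchFailed.maybeBmAddr.foreach { bmAddress =>
    // ... existing blacklisting / unregistration logic for bmAddress ...
  }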

@yifeih yifeih closed this Aug 6, 2019
bulldozer-bot bot pushed a commit that referenced this pull request Nov 7, 2019