[Bug]: Possibly unnecessary prefetch during GroupIntoBatches #26395
Thanks for reporting this. This if block has been there from the first place when ..., assuming there was some reason for that.

Have you tested that this is caused by the pointed code path? It would be nice if there were some benchmark data to share.
"assuming there was some reason for that" "Have you tested that this is caused by the pointed code path? Would be nice if there is some benchmark data to share with" Also I don't think my pipeline result can be considered good enough for benchmarking regarding consistency and straightforwardness. ... but basically what I did was that I ran the pipelines on the same kafka streams with different configs concurrently. So theoretically the "shuffled" data amount should be equal. It wasn't. Meanwhile everything else (read data from kafka, written data into bq, etc) were essentially identical. My original goal wasn't to test this, but to actually optimize the pipelines by using non-default config values as the default seemed rather unoptimized based on the GCS storage usage pattern, and average batch size, and sometimes the pipeline couldn't even scale down due to the estimated backlog size (when cpu was clearly available) In order to do that I was playing with these configs:
When I increased them - as expected - the behaviour, cpu utilization, storage usage pattern, etc changed in a way that corresponds with having bigger batches, but I noticed increased costs due to the "processed data amount". So it was obvious for some reason it handles data differently, so I started checking the code what it might be, and this seems like the only thing that could cause this. For example one run like I mentioned:
Pipeline2:
|
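For orientation, the sketch below shows the public GroupIntoBatches knobs that control batch size and buffering latency; the values are placeholders and are not the settings used in the pipelines above.

```java
import org.apache.beam.sdk.transforms.GroupIntoBatches;
import org.apache.beam.sdk.values.KV;
import org.apache.beam.sdk.values.PCollection;
import org.joda.time.Duration;

class BatchingKnobsSketch {
  // Placeholder numbers only; not the configuration values discussed in this issue.
  static PCollection<KV<String, Iterable<String>>> batchPerKey(
      PCollection<KV<String, String>> keyedInput) {
    return keyedInput.apply(
        "BatchPerKey",
        GroupIntoBatches.<String, String>ofSize(500_000)              // max elements per batch
            .withMaxBufferingDuration(Duration.standardSeconds(60))); // flush at most this late
  }
}
```

The WithShardedKey variant mentioned in the issue description is obtained via .withShardedKey() on the same transform.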
…ntoBatches by completely removing it
@Abacn
…ntoBatches by using an experimental flag
Thanks for sharing the benchmark results! Let me help find people with experience to review.
Also, thanks for being persistent. I am also looking into it in the meantime, and the benchmark result looks impressive. To my understanding, there is at least no harm in adding a parameter to the GroupIntoBatches PTransform specifically, something like GroupIntoBatches.disablePreFetch.
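To make that suggestion concrete: nothing like disablePreFetch exists on GroupIntoBatches today, but the kind of per-transform opt-out being floated would look roughly like the hypothetical sketch below (all names here are invented for illustration, not Beam APIs).

```java
// Hypothetical illustration only: NOT an existing Beam API. It just shows the shape of
// the proposed opt-out parameter, carried alongside the existing batch size.
final class HypotheticalGroupIntoBatchesConfig {
  private final long batchSize;
  private final boolean prefetchEnabled;

  private HypotheticalGroupIntoBatchesConfig(long batchSize, boolean prefetchEnabled) {
    this.batchSize = batchSize;
    this.prefetchEnabled = prefetchEnabled;
  }

  static HypotheticalGroupIntoBatchesConfig ofSize(long batchSize) {
    return new HypotheticalGroupIntoBatchesConfig(batchSize, true); // prefetch stays the default
  }

  // The knob floated above: let a caller opt out of the state prefetch per transform.
  HypotheticalGroupIntoBatchesConfig disablePreFetch() {
    return new HypotheticalGroupIntoBatchesConfig(batchSize, false);
  }

  boolean prefetchEnabled() {
    return prefetchEnabled;
  }
}
```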
Before I created the PRs I was thinking about which option to choose, and I decided against having a specific method on the GroupIntoBatches transform itself.
So to finally sum it up: either stateful processing needs it and we need it everywhere, or it doesn't and we don't need it at all. I saw no reason not to, so I would have just removed it completely, but I also created the experimental-flag version, just so we can release/use it without potentially breaking a release.
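For readers unfamiliar with Beam's experiments mechanism, the sketch below shows roughly how such an opt-in flag is consumed. ExperimentalOptions.hasExperiment is the real API, but the experiment name here is a placeholder and not necessarily the one used in the linked PR.

```java
import org.apache.beam.sdk.options.ExperimentalOptions;
import org.apache.beam.sdk.options.PipelineOptions;
import org.apache.beam.sdk.options.PipelineOptionsFactory;

// Sketch of an experiment-gated behavior change: absent the flag, the current default
// (prefetch enabled) is kept, so existing pipelines are unaffected.
class PrefetchExperimentSketch {
  static boolean prefetchDisabled(PipelineOptions options) {
    // Placeholder experiment name, not necessarily the one used in the actual PR.
    return ExperimentalOptions.hasExperiment(options, "disable_group_into_batches_prefetch");
  }

  public static void main(String[] args) {
    PipelineOptions options =
        PipelineOptionsFactory.fromArgs("--experiments=disable_group_into_batches_prefetch")
            .create();
    System.out.println("prefetch disabled? " + prefetchDisabled(options));
  }
}
```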
Yeah, thanks, got it. An experimental flag sounds good in terms of not changing the behavior of the default config. I am also trying to run some tests this weekend.
CC: @kennknowles (from the file history), who may have insight.
Update: I tested #26618 with a test pipeline and found no noticeable change with the switch on/off. Basically I am generating a steady stream of 100k elements per second, each of 1 kB, so it is 100 MB/s of throughput. Pipeline options provided: ... plus the disable-prefetch experiments option. Both jobs are drained after 15 min.

Both settings have a similar ratio (3 kB per element, which is 3 times the actual data size); however, disabling the prefetch shows a significant regression in throughput and causes a surging backlog:

[Backlog screenshot: prefetch enabled]

@nbali Have you been able to test your changed code with a pipeline? To test modified java-core code with your pipeline, one can build both sdks-java-core and the Dataflow worker jar (for runner v1) with something like ...
and use the compiled jar as the dependency of java core, and pass the shadow worker jar to Dataflow as the pipeline option above.

Test pipeline:

```java
import java.util.Objects;
import java.util.Random;
import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.io.GenerateSequence;
import org.apache.beam.sdk.options.PipelineOptions;
import org.apache.beam.sdk.options.PipelineOptionsFactory;
import org.apache.beam.sdk.transforms.DoFn;
import org.apache.beam.sdk.transforms.GroupIntoBatches;
import org.apache.beam.sdk.transforms.MapElements;
import org.apache.beam.sdk.transforms.ParDo;
import org.apache.beam.sdk.transforms.SimpleFunction;
import org.apache.beam.sdk.transforms.windowing.FixedWindows;
import org.apache.beam.sdk.transforms.windowing.Window;
import org.apache.beam.sdk.values.KV;
import org.joda.time.Duration;

public class GroupIntoBatchesTest {

  public static void main(String[] argv) {
    PipelineOptions options =
        PipelineOptionsFactory.fromArgs(argv).withValidation().as(PipelineOptions.class);
    Pipeline p = Pipeline.create(options);

    p.apply(GenerateSequence.from(0).withRate(100000, Duration.standardSeconds(1)))
        .apply(Window.<Long>into(FixedWindows.of(Duration.standardSeconds(10))))
        .apply(MapElements.via(new MapToKVFn(1000)))
        .apply(GroupIntoBatches.ofSize(50000))
        .apply(
            ParDo.of(
                new DoFn<KV<Integer, Iterable<byte[]>>, Void>() {
                  @ProcessElement
                  public void process(ProcessContext ctx) {
                    System.out.println(
                        "grouped key:" + Objects.requireNonNull(ctx.element()).getKey());
                  }
                }));
    p.run();
  }

  static class MapToKVFn extends SimpleFunction<Long, KV<Integer, byte[]>> {
    private transient Random rd;
    private final int valueSize;

    public MapToKVFn(int valueSize) {
      this.valueSize = valueSize;
    }

    @Override
    public KV<Integer, byte[]> apply(Long input) {
      if (rd == null) {
        rd = new Random();
      }
      byte[] data = new byte[valueSize];
      rd.nextBytes(data);
      return KV.of(Long.hashCode(input) % 10, data);
    }
  }
}
```
I haven't had the time yet (I will try to make time in the upcoming days), but your example has a theoretical maximum of 100k elements/sec, executed for 900 s, while it's only 28M and 54M. That means it was 30-60k/sec. The hashing splits it into 10 keys, so a single key received about 3-6k msg/sec. The windowing essentially closed every batch after 10 s. So a window for a key received 30-60k elements, which indicates a totally different amount of prefetching (even if both are using the master version), yet the data amount is the same. Seems odd to me.

Do you have the input/output stats of the GIB transform? Also, over a minute of delay in both cases with 10 s windowing? It seems CPU-limited. I mean, isn't this essentially single-threaded?
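For reference, a quick re-derivation of those numbers, assuming the 28M and 54M figures are the total element counts of the two 15-minute jobs:

```java
// Re-deriving the figures above (assumption: 28M / 54M are total element counts of the
// two 15-minute runs described earlier).
public class ThroughputEstimate {
  public static void main(String[] args) {
    long seconds = 15 * 60;                          // both jobs drained after 15 min
    long theoreticalMax = 100_000L * seconds;        // 90M elements at the configured rate
    for (long observed : new long[] {28_000_000L, 54_000_000L}) {
      long perSecond = observed / seconds;           // ~31k and 60k elements/sec actually seen
      long perKeyPerSecond = perSecond / 10;         // hashed across 10 keys
      long perKeyPerWindow = perKeyPerSecond * 10;   // 10 s fixed windows bound each batch
      System.out.printf(
          "observed=%d -> per key: %d/s, %d per 10s window (theoretical max %d total)%n",
          observed, perKeyPerSecond, perKeyPerWindow, theoreticalMax);
    }
  }
}
```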
@Abacn First of all, I'm not saying your tests are invalid in any way, but I also did mine and saw huge differences. I didn't mention it originally as I thought it was irrelevant and should happen with every ...
Lines 686 to 689 in bb044f4
This is what I use on the BQ.Write:

```java
.withMethod(BigQueryIO.Write.Method.FILE_LOADS)
.withTriggeringFrequency(...)
.withAutoSharding()
```

For the benchmark I removed some irrelevant parts from my pipeline, but kept the core logic of Kafka.Read -> BigQuery.Write. I first deployed a flex template, then launched the template with different configurations to start consuming. About the configurations:
That gives me four scenarios. The source is a Kafka stream with 12 partitions, about 1k elements/sec and 1 MB/sec for every partition. Reaching the prefetchSize (500'000 / 5) typically takes almost 2 minutes, so if we trigger the batches more often we never call the "prefetch" code. The 2 MB limit most likely triggers it immediately, within a few seconds, so we should never "prefetch". The 100 MB limit most likely triggers 1 prefetch (100 MB / 1 kB ~ 100'000). After 22 min of runtime:
As you can see, there are huge differences in the results, but there are also differences compared to your test pipeline in the code itself; for example, you don't use ...

For the next run I increased the 100 MB option to 256 MB to trigger prefetching more (and also switched to ...). Drained after 40 min:
I can only guess that the reason you couldn't see it is that you fire too quickly and it might still be cached? I would modify the rate from 100'000/sec to 10'000/sec, for example (so with the key creation, it will be 1'000/sec for every key on average). When I have time again, I will try with a modified runner.
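For context, here is a minimal sketch of the Kafka.Read -> BigQuery.Write shape with the BQ.Write settings quoted above. It is not the reporter's pipeline: the brokers, topic, table, row conversion, and triggering frequency are all placeholders.

```java
import com.google.api.services.bigquery.model.TableRow;
import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
import org.apache.beam.sdk.io.kafka.KafkaIO;
import org.apache.beam.sdk.options.PipelineOptionsFactory;
import org.apache.beam.sdk.transforms.MapElements;
import org.apache.beam.sdk.transforms.SimpleFunction;
import org.apache.beam.sdk.transforms.Values;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.joda.time.Duration;

public class KafkaToBqSketch {
  public static void main(String[] args) {
    Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());

    p.apply("ReadKafka", KafkaIO.<String, String>read()
            .withBootstrapServers("broker:9092")                  // placeholder
            .withTopic("input-topic")                             // placeholder
            .withKeyDeserializer(StringDeserializer.class)
            .withValueDeserializer(StringDeserializer.class)
            .withoutMetadata())
        .apply(Values.<String>create())
        .apply("ToTableRow", MapElements.via(new SimpleFunction<String, TableRow>() {
          @Override
          public TableRow apply(String value) {
            return new TableRow().set("payload", value);          // placeholder schema
          }
        }))
        .apply("WriteBq", BigQueryIO.writeTableRows()
            .to("project:dataset.table")                          // placeholder
            .withMethod(BigQueryIO.Write.Method.FILE_LOADS)
            .withTriggeringFrequency(Duration.standardMinutes(5)) // placeholder frequency
            .withAutoSharding()
            .withCreateDisposition(BigQueryIO.Write.CreateDisposition.CREATE_NEVER)
            .withWriteDisposition(BigQueryIO.Write.WriteDisposition.WRITE_APPEND));

    p.run();
  }
}
```

With FILE_LOADS plus a triggering frequency, autosharding (or a fixed number of file shards) is what lets the sink batch rows per destination, which is where the batching behavior discussed above comes into play.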
Quick update: runtime 11 h, same as the previous example except that the message size has been reduced to ~1/10, but I process every message 10 times, so it is still 1 MB/sec for every partition, but 10k elements/sec.
@Abacn Tested with the modified ...
@Abacn I see how that would cause a similar pattern in the unexpected shuffled data amount, and I also get how that change fixes that issue, but what I can't see is where we use an ...

EDIT:
What happened?
As I was inspecting why the Dataflow "Total streaming data processed" metric was so much higher (2-4× depending on the pipeline) than the actual data being processed, given that the pipeline only has one shuffle/gbk/gib/etc. transform, I stumbled upon something that I couldn't justify being there:
beam/sdks/java/core/src/main/java/org/apache/beam/sdk/transforms/GroupIntoBatches.java
Lines 566 to 569 in f549fd3
I might be missing some context here, but I'm not even sure what benefit this provides for any generic GroupIntoBatches transform, and it seems to be especially useless if called on a GroupIntoBatches.WithShardedKey (when all the data the batch contains already comes from the same worker). I would appreciate any insight before I submit a PR to change this.
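For readers without the linked code open: the referenced lines live in GroupIntoBatches' stateful DoFn and periodically call readLater() on the buffered state so the runner starts fetching it before the batch is flushed. Below is a simplified sketch of that pattern; it is an approximation, not the actual Beam source, and the constants only mirror the batch-size/5 prefetch frequency mentioned earlier in the thread.

```java
import java.util.ArrayList;
import java.util.List;
import org.apache.beam.sdk.coders.StringUtf8Coder;
import org.apache.beam.sdk.coders.VarLongCoder;
import org.apache.beam.sdk.state.BagState;
import org.apache.beam.sdk.state.StateSpec;
import org.apache.beam.sdk.state.StateSpecs;
import org.apache.beam.sdk.state.ValueState;
import org.apache.beam.sdk.transforms.DoFn;
import org.apache.beam.sdk.values.KV;

// Approximate sketch of the prefetch pattern in question, not the real GroupIntoBatches code:
// every PREFETCH_FREQUENCY-th element, readLater() hints the runner to start reading the
// whole buffered batch ahead of the eventual flush.
class PrefetchingBatchFn extends DoFn<KV<String, String>, KV<String, Iterable<String>>> {
  private static final long BATCH_SIZE = 500_000;                       // placeholder
  private static final long PREFETCH_FREQUENCY = Math.max(BATCH_SIZE / 5, 1);

  @StateId("batch")
  private final StateSpec<BagState<String>> batchSpec = StateSpecs.bag(StringUtf8Coder.of());

  @StateId("count")
  private final StateSpec<ValueState<Long>> countSpec = StateSpecs.value(VarLongCoder.of());

  @ProcessElement
  public void process(
      ProcessContext ctx,
      @StateId("batch") BagState<String> batch,
      @StateId("count") ValueState<Long> count) {
    batch.add(ctx.element().getValue());
    long num = (count.read() == null ? 0 : count.read()) + 1;
    count.write(num);

    if (num % PREFETCH_FREQUENCY == 0) {
      // The prefetch this issue is about: ask the runner to fetch the buffered
      // elements early, which is what shows up as extra "streaming data processed".
      batch.readLater();
    }
    if (num >= BATCH_SIZE) {
      List<String> elements = new ArrayList<>();
      batch.read().forEach(elements::add);
      ctx.output(KV.of(ctx.element().getKey(), elements));
      batch.clear();
      count.clear();
    }
  }
}
```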
Issue Priority
Priority: 3 (minor)
Issue Components