The parallel xz decoder is fast, but it does not pay much attention to memory usage. It simply uses a block cache that stores on the order of number-of-cores blocks of uncompressed data. For very high compression ratios, large block sizes, or even anomalously high compression ratios as might be crafted by attackers, this can lead to out-of-memory errors. Ratarmount's priorities are currently on performance, not security, so beware when, e.g., exposing it as a service, and run it in a memory-bound VM or similar! However, if this matters not only for security but also for legitimate use cases (such as limited memory combined with large but "normal" compression ratios and block sizes chosen too large, e.g., by `xz -T`), then it should be guarded against.
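To get a feeling for the numbers, here is a rough back-of-the-envelope estimate. The core count and block size below are illustrative assumptions, not measurements; `xz -9 -T0` chooses blocks of roughly three times the 64 MiB LZMA2 dictionary by default, i.e., about 192 MiB of uncompressed data per block.

```python
import os

# Back-of-the-envelope worst case for the block cache: it holds on the
# order of one uncompressed block per core.  The block size below is an
# illustrative assumption (xz -9 -T0 defaults to ~3x the 64 MiB dictionary).
cores = os.cpu_count() or 1
block_size = 192 * 1024**2  # bytes of uncompressed data per block
print(f"Worst-case cache footprint: ~{cores * block_size / 1024**3:.1f} GiB")
```

On a 16-core machine this already comes out to roughly 3 GiB of uncompressed data held in memory, before accounting for any anomalous compression ratios.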
One check for this has already been implemented: the parallel xz decoder is not used, even when requested, if the xz file has only a single block, because parallel decoding would not speed anything up and would only require more memory.
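A minimal sketch of how such a check could look, assuming the `xz` command-line tool is available and that, per xz(1), the third column of the `file` line in `--list --robot` output is the total block count. The function names are made up for illustration and are not the decoder's actual interface.

```python
import subprocess


def count_xz_blocks(path: str) -> int:
    """Return the total number of blocks in an xz file (via the xz CLI)."""
    output = subprocess.run(
        ["xz", "--list", "--robot", path],
        capture_output=True, text=True, check=True,
    ).stdout
    for line in output.splitlines():
        fields = line.split("\t")
        # Assumption: the "file" line's third column is the total block count.
        if fields[0] == "file":
            return int(fields[2])
    return 1  # be conservative if the output could not be parsed


def use_parallel_xz_decoder(path: str, parallelization: int) -> bool:
    """Decide whether the parallel decoder is worth its block cache."""
    if parallelization <= 1:
        return False
    # A single-block file cannot be decoded in parallel anyway, so the
    # block cache would only cost memory without any speedup.
    return count_xz_blocks(path) > 1
```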
Additionally, the block cache could stop decoding after a cut-off size and then simply decode the block on the fly when it is requested instead of caching the uncompressed data.
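A minimal sketch of what such a cut-off could look like, assuming a simple LRU cache keyed by block index. The class name, parameters, and default sizes are hypothetical, not the decoder's actual interface.

```python
from collections import OrderedDict
from typing import Callable


class BoundedBlockCache:
    """LRU block cache with a total byte budget and a per-block cut-off (sketch)."""

    def __init__(self, max_bytes=512 * 1024**2, cutoff_bytes=128 * 1024**2):
        self.max_bytes = max_bytes          # budget for all cached blocks together
        self.cutoff_bytes = cutoff_bytes    # blocks larger than this are never cached
        self._blocks = OrderedDict()        # block index -> uncompressed bytes
        self._total = 0

    def get(self, block_index: int, decode: Callable[[int], bytes]) -> bytes:
        if block_index in self._blocks:
            self._blocks.move_to_end(block_index)  # mark as most recently used
            return self._blocks[block_index]

        data = decode(block_index)  # decode on the fly
        if len(data) <= self.cutoff_bytes:
            self._blocks[block_index] = data
            self._total += len(data)
            # Evict least recently used blocks until we are back under budget.
            while self._total > self.max_bytes and self._blocks:
                _, evicted = self._blocks.popitem(last=False)
                self._total -= len(evicted)
        return data
```

Anything above `cutoff_bytes` is returned to the caller but never stored, so a single anomalously large block cannot blow up the cache on its own.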