Describe the bug
Since commit 718f00ff6fe42db7e6ba09a7f7992b3e85283f77, ZSTD_decompress can return ZSTD_error_corruption_detected instead of ZSTD_error_dstSize_tooSmall for some inputs, depending on the destination buffer size.
To Reproduce
I have a reproducer, but I can't share it because it contains PII :(
I'll be happy to try any patch.
#include <stdio.h>
#include <stddef.h>
#include <stdlib.h>
#include <stdint.h>
#include <zstd.h>
#include <zstd_errors.h>
#define NELEMS(x) (sizeof(x) / sizeof((x)[0]))
int main(int argc, const char** argv)
{
    // Unable to begin ZSTD decompression (in buffer is 1670 bytes, out buffer is 10020 bytes): Corrupted block detected
    unsigned char buf1_bin[] = {
        <can't share that>
    };
    size_t ret;
    char *decompressed = malloc(10*1024*1024);

    ret = ZSTD_decompress(decompressed, 3340, buf1_bin, NELEMS(buf1_bin));
    printf("%s\n", ZSTD_getErrorName(ret));
    ret = ZSTD_decompress(decompressed, 10020, buf1_bin, NELEMS(buf1_bin));
    printf("%s\n", ZSTD_getErrorName(ret));
    return ZSTD_getErrorCode(ret) != ZSTD_error_dstSize_tooSmall;
}
git bisect run sh -c 'git clean -xfd && make lib && gcc ../bugzstd/bugzstd.c -o bugzstd -Ilib/ lib/libzstd.a && ./bugzstd'
Bisecting between v1.4.4 and v1.4.5 pointed to 718f00ff6fe42db7e6ba09a7f7992b3e85283f77.
This impacts librdkafka in https://github.com/edenhill/librdkafka/issues/2943
When ZSTD_getFrameContentSize returns ZSTD_CONTENTSIZE_UNKNOWN, librdkafka triples the dst buffer size until the decompression succeeds.
Also, librdkafka uses streaming compression but non-streaming decompression:
https://github.com/edenhill/librdkafka/blob/master/src/rdkafka_zstd.c
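The tripling strategy described above can be sketched as follows. This is a simplified sketch, not librdkafka's actual code: `try_decompress` is a hypothetical stub standing in for ZSTD_decompress, and NEEDED is an arbitrary "true" decompressed size, so only the retry loop itself is illustrated:

```c
#include <stddef.h>

/* Hypothetical stub standing in for ZSTD_decompress: "succeeds"
 * (returns the decompressed size) once dst_cap is large enough,
 * otherwise reports the equivalent of ZSTD_error_dstSize_tooSmall. */
#define NEEDED 10020
static size_t try_decompress(size_t dst_cap, int *too_small)
{
    *too_small = dst_cap < NEEDED;
    return *too_small ? 0 : NEEDED;
}

/* Sketch of the retry strategy: when the frame content size is
 * unknown, start with a guess and triple the destination buffer
 * until the call stops failing with "destination buffer too small". */
size_t decompress_with_retry(size_t initial_cap, size_t *final_cap)
{
    size_t cap = initial_cap;
    for (;;) {
        int too_small;
        size_t n = try_decompress(cap, &too_small);
        if (!too_small) {
            *final_cap = cap;
            return n;
        }
        cap *= 3; /* grow and retry */
    }
}
```

This loop only terminates if a too-small destination reliably yields ZSTD_error_dstSize_tooSmall; when the library reports ZSTD_error_corruption_detected instead (the regression above), a caller following this strategy concludes the data is corrupt and gives up.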
Expected behavior
Both sizes return "Destination buffer is too small"
Actual Behavior
After 718f00ff6fe42db7e6ba09a7f7992b3e85283f77, the second call returns "Corrupted block detected"
We will absolutely find and fix this issue. We don't currently have extensive tests ensuring that ZSTD_error_dstSize_tooSmall is always returned when the data is valid but the output buffer is too small, so that is something we will have to add.
The action items are:
tests/fuzzer that generates compressed blocks and then decompresses them with an output buffer that is too small, and ensures the result is ZSTD_error_dstSize_tooSmall.

I experience the same error after upgrading gozstd from zstd v1.4.4 to v1.4.5.
Below are samples of compressed blocks, which cannot be decompressed with zstd v1.4.5 when the destination buffer is too small:
block len=162
block data (hex)=28B52FFD603701C50400F404000002E74EA2E80000000300BF0300000002000003021E000200A813031482060205017F1209090000000400000305060200DB12000000000100000303081E20060003061E4A8E001801000003031E220039012AE34066201B585B91805C02463300A064BB840B6A040A03400257806EC0770138E05ED00DF80ED80C1C0274081406DE4010A00F150698B05DA038F0005A59893508B0
block len=133
block data (hex)=28B52FFD608801DD0300C4030000029DCD1DC0000000046C4909000000040000030618000102363D0303020022C606000000030300351C0F04C0DB05013F4907003C3D09040C6D051C0098AB10A01EF0DD8273C0770BCE01DF0150001D028505A500DDC0774B86D1A01EF82E15005EE50116C00AA00F150698B05DA038F0005A59893508B0
block len=140
block data (hex)=28B52FFD6088011504001404000002E74EA2E80000000300BF0900000004000003061E000100B4F603000000020000030202A8130C03030301545604006B4D060285940801B899060904FCE2121D006016A01BF82E7D853820840BEA01DF2D38077CB7E01CF0DD8273E0BB5400441817C0BDA043A030000276C16AA10F150698B05DA038F0005A59893508B0
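To replay these samples, the hex strings first have to be turned back into raw bytes before they can be passed to ZSTD_decompress as in the reproducer above. A small helper (not from zstd; it assumes even-length, well-formed hex input) could be:

```c
#include <stddef.h>
#include <string.h>

/* Decode an even-length hex string (e.g. "28B52FFD...") into out.
 * Returns the number of bytes written, or 0 on malformed input.
 * Assumes out has room for strlen(hex)/2 bytes. */
size_t hex_to_bytes(const char *hex, unsigned char *out)
{
    size_t n = strlen(hex);
    if (n == 0 || n % 2 != 0) return 0;
    for (size_t i = 0; i < n; i += 2) {
        unsigned byte = 0;
        for (int j = 0; j < 2; j++) {
            char c = hex[i + j];
            byte <<= 4;
            if (c >= '0' && c <= '9')      byte |= (unsigned)(c - '0');
            else if (c >= 'A' && c <= 'F') byte |= (unsigned)(c - 'A' + 10);
            else if (c >= 'a' && c <= 'f') byte |= (unsigned)(c - 'a' + 10);
            else return 0;
        }
        out[i / 2] = (unsigned char)byte;
    }
    return n / 2;
}
```

The decoded bytes can then stand in for buf1_bin in the reproducer; note that the first four bytes of each sample, 28 B5 2F FD, are the zstd frame magic number.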
Any plan to release 1.4.6 ?
v1.4.6 is an internal release. It might be published some time in the future, but that's immaterial.
v1.4.7 is the next public release.
It is expected to happen during this coming December.