bin/magento catalog:images:resize
While trying to figure out why it takes over 12 hours to run the command `bin/magento catalog:images:resize` on a very beefy server with a Magento CE 2.1.2 shop with about 6500 products and 8500 images, I found a couple of performance problems in the code:
Even if you don't use `Magento/blank` or `Magento/luma` on the frontend, you still get images resized for those themes, and they take a lot of time to generate, so this is not good.

The second problem is the image type (thumbnail, small_image, image, ...). All the other parameters (width, height, keep frame, transparency, quality, ...) used to generate a unique file are correct, I think. But I don't think the distinction on image type is important, but maybe I'm missing something? `'image_type' => $this->getDestinationSubdir(),`
I ran a very small benchmark using a test shop with 3 products and 5 images in total. Only the `Magento/blank` and `Magento/luma` themes are installed, and only `Magento/luma` is active in the frontend:
Combined result of both optimizations:
I think this is a significant enough improvement for you guys to at least consider optimizing this :)
Thanks!
Tracking internally in one ticket as MAGETWO-60316
What is the status of this? It is a BIG issue.
One shop is now short of space, as we had a test and a production site that were identical.
200GB was NOT enough: the image directory alone was using between 50 and 60 GB x2 (dev and prod), and the rest of the space on the server was used by logs, cache, etc.
At first, after the migration from Magento 1 to 2, I did not check the resources used by the images, but when the shop almost went down I found out this was the source of the problems.
When 200GB is not enough to run a testing and a production site with about 6000 products, then Magento is just not doing this right.
This is a small update with some new observations from testing Magento CE 2.1.6. Watch out: my initial post was about the `develop` branch, and it looks like the `develop` branch is now a lot different from Magento 2.1.6 for this kind of functionality. Anyway, this is an update specifically for version 2.1.6.
In my initial post, for the second problem, I suggested removing the `image_type` from `Magento\Catalog\Model\Product\Image::getMiscParams`, but while testing Magento CE 2.1.6 it turned out that this isn't the correct location to remove it from. In Magento CE 2.1.6, it's better to remove it from `Magento\Catalog\Model\Product\Image\ParamsBuilder::build`.
And another update: instead of removing `image_type` altogether, we should hardcode it to a certain value which is already in use today, like `thumbnail`; this will avoid having to run `bin/magento catalog:images:resize` yet again. Otherwise it would change the hash used in the directory structure again, and all the product images would disappear from the product listing pages again, just like they did after the upgrade to Magento 2.1.6.
And yet another update. Manipulating `image_type` in `Magento\Catalog\Model\Product\Image\ParamsBuilder::build`, which I mentioned above, isn't the right solution, as `image_type` is used in `Magento\Catalog\Model\View\Asset\Image::getPlaceHolder` to figure out which placeholder to use.

My new proposition is to change this in `Magento\Catalog\Model\View\Asset\Image::getMiscPath` as follows:
diff --git a/app/code/Magento/Catalog/Model/View/Asset/Image.php b/app/code/Magento/Catalog/Model/View/Asset/Image.php
index 05f7044cbf1..0fd6690224d 100755
--- a/app/code/Magento/Catalog/Model/View/Asset/Image.php
+++ b/app/code/Magento/Catalog/Model/View/Asset/Image.php
@@ -194,7 +194,20 @@ class Image implements LocalInterface
*/
private function getMiscPath()
{
- return $this->encryptor->hash(implode('_', $this->miscParams), Encryptor::HASH_VERSION_MD5);
+ $miscParams = $this->miscParams;
+
+ // since 'image_type' has no influence on how an image is manipulated (the resulting files are binary identical if all the other params match),
+ // it makes no sense to include it in the hash calculation
+ // the best solution would be to remove it, but to avoid introducing new backwards incompatible hashes being generated here,
+ // we decided to hardcode 'image_type' to one which was already in use before: 'thumbnail'
+ // if we simply removed it, we would have to re-run `bin/magento catalog:images:resize` to again generate all new images, and that's just very annoying
+
+ if (isset($miscParams['image_type']))
+ {
+ $miscParams['image_type'] = 'thumbnail';
+ }
+
+ return $this->encryptor->hash(implode('_', $miscParams), Encryptor::HASH_VERSION_MD5);
}
/**
This is for Magento 2.1.6; I haven't checked the latest `develop` branch code to see if this also applies there.
@hostep, thank you for your report.
The issue is already fixed in 2.2.0.
@magento-engcom-team, out of curiosity: any chance you can point me to some commits which fix this? I've searched the commit messages for MAGETWO-60316 but can't seem to find anything.
@magento-engcom-team: I just retested this on a clean 2.2.0 installation, and can't see any difference in the results, so this issue is definitely not fixed, please reopen :)
Squeeze in a couple of new lines of code on line 63 of the ProductImageCache model:
// if ($theme->getThemeTitle() == 'Magento Luma') continue;
echo "executing '{$theme->getThemeTitle()}'\n";
$ rm -R pub/media/catalog/product/cache/*
$ php bin/magento catalog:images:resize
executing 'Magento Blank'
executing 'Magento Luma'
.
Product images resized successfully
$ find pub/media/catalog/product/cache -type f | wc -l
35
Now change the newly introduced code and uncomment the first line, so only the Magento Blank theme is processed:
$ rm -R pub/media/catalog/product/cache/*
$ php bin/magento catalog:images:resize
executing 'Magento Blank'
.
Product images resized successfully
$ find pub/media/catalog/product/cache -type f | wc -l
29
Fewer images are produced when generating them for only a single theme, which is what should happen by default, since Magento Luma isn't in use anywhere.
$ php bin/magento catalog:images:resize
executing 'Magento Blank'
.
Product images resized successfully
$ find pub/media/catalog/product/cache -type f | wc -l
29
$ find pub/media/catalog/product/cache -type f -exec shasum {} \; | sort
0cc30892e82612b64bc8a69a0ede6e977773341a pub/media/catalog/product/cache/926507dc7f93631a094422215b778fe0/s/c/screen_shot_2017-09-30_at_11.43.47.png
0cc30892e82612b64bc8a69a0ede6e977773341a pub/media/catalog/product/cache/afad95d7734d2fa6d0a8ba78597182b7/s/c/screen_shot_2017-09-30_at_11.43.47.png
0cc30892e82612b64bc8a69a0ede6e977773341a pub/media/catalog/product/cache/c687aa7517cf01e65c009f6943c2b1e9/s/c/screen_shot_2017-09-30_at_11.43.47.png
27734c04683faaf56fbab8694783ac85f49af19c pub/media/catalog/product/cache/cdec6e528c16187a547aea54d9e1d6ee/s/c/screen_shot_2017-09-30_at_11.43.47.png
27734c04683faaf56fbab8694783ac85f49af19c pub/media/catalog/product/cache/dca4079c45c8bedb9968e3d3d4e45631/s/c/screen_shot_2017-09-30_at_11.43.47.png
387c943f6bf316cadbc7f777c25360a936b86358 pub/media/catalog/product/cache/806d6fa663c29d159ca59727157b4a59/s/c/screen_shot_2017-09-30_at_11.43.47.png
4166ad59674e5e41ebe0d7b321d45749dcd2d717 pub/media/catalog/product/cache/2f067dfaa2eefc9cc6820ffd207e9866/s/c/screen_shot_2017-09-30_at_11.43.47.png
4166ad59674e5e41ebe0d7b321d45749dcd2d717 pub/media/catalog/product/cache/3cf5799449660ed39031217945ace72a/s/c/screen_shot_2017-09-30_at_11.43.47.png
43b15e0154462edc6ca4385c843592bc7ceeb296 pub/media/catalog/product/cache/15dc7e9ba1a6bafcd505d927c7fcfa03/s/c/screen_shot_2017-09-30_at_11.43.47.png
43b15e0154462edc6ca4385c843592bc7ceeb296 pub/media/catalog/product/cache/2b4546e5ba001f3aea4287545d649df0/s/c/screen_shot_2017-09-30_at_11.43.47.png
4a205dba28e0a569c55197f04d2eb7602be07da4 pub/media/catalog/product/cache/914b1ba9268f8c1d0e58a8e7ce614488/s/c/screen_shot_2017-09-30_at_11.43.47.png
597873debbacf71bd76fab47d5c85af492753f46 pub/media/catalog/product/cache/0f831c1845fc143d00d6d1ebc49f446a/s/c/screen_shot_2017-09-30_at_11.43.47.png
60e6e629c454e7747ddef89101cc5d601dfeb924 pub/media/catalog/product/cache/633177f689f3c479eab7d48212fd720b/s/c/screen_shot_2017-09-30_at_11.43.47.png
7e1122d6679e7af404873917104af678a4ecabbb pub/media/catalog/product/cache/9b0529d63c590f29ded60308ccd979ee/s/c/screen_shot_2017-09-30_at_11.43.47.png
86c6e5ee0dbb48c5f5b4f54b91fb8f74eb608139 pub/media/catalog/product/cache/a2d2345650965cd6042e53fd7d716674/s/c/screen_shot_2017-09-30_at_11.43.47.png
873706d62738fc28e5ad408a3dee8906f0e218a7 pub/media/catalog/product/cache/81ea8665b1d657e2313096e2818a187e/s/c/screen_shot_2017-09-30_at_11.43.47.png
973b0ad14825f1dcf2cd3cb3faff5d33def81510 pub/media/catalog/product/cache/f073062f50e48eb0f0998593e568d857/s/c/screen_shot_2017-09-30_at_11.43.47.png
98f583c4f2a63ac96d3e0a6467905ecfeda9ef8c pub/media/catalog/product/cache/fd09478435d4f3d9e62d28584118149d/s/c/screen_shot_2017-09-30_at_11.43.47.png
98f583c4f2a63ac96d3e0a6467905ecfeda9ef8c pub/media/catalog/product/cache/fd4c882ce4b945a790b629f572e4ef93/s/c/screen_shot_2017-09-30_at_11.43.47.png
9a424621957c6949cbb316067eebd0691e71821e pub/media/catalog/product/cache/6633e7fcc9a7e88021adbe9a2450a512/s/c/screen_shot_2017-09-30_at_11.43.47.png
9cc63a227559842bfbd478a3f32ca5f6a496def8 pub/media/catalog/product/cache/75eed2686e01eb22cb4050b2f40ddf97/s/c/screen_shot_2017-09-30_at_11.43.47.png
ab9b1014c3905a7718ce99c7dcf31020a725849f pub/media/catalog/product/cache/8a4e709a70e03bf31b178a318a79cf0e/s/c/screen_shot_2017-09-30_at_11.43.47.png
ab9b1014c3905a7718ce99c7dcf31020a725849f pub/media/catalog/product/cache/ee4ee1fe1bbe32e9b93a354df94c32e2/s/c/screen_shot_2017-09-30_at_11.43.47.png
b3efa42d422f39088d1eeb137b114edd138ed8a3 pub/media/catalog/product/cache/3f695f7dd477cbb47cd99d2622d93108/s/c/screen_shot_2017-09-30_at_11.43.47.png
e361be9aefc08e9d924b07b7e6d5c67126e89984 pub/media/catalog/product/cache/ccf7793e39f95beba8c329ba40e7df07/s/c/screen_shot_2017-09-30_at_11.43.47.png
e361be9aefc08e9d924b07b7e6d5c67126e89984 pub/media/catalog/product/cache/f4a2bc458ca2feecb5750446998dc347/s/c/screen_shot_2017-09-30_at_11.43.47.png
e69edb0a25381abe75d0150da41de13b1c4bebe7 pub/media/catalog/product/cache/2f5bcdd08b6b861f73e29326ee14ef04/s/c/screen_shot_2017-09-30_at_11.43.47.png
e69edb0a25381abe75d0150da41de13b1c4bebe7 pub/media/catalog/product/cache/f485795eb4b45ff97c82d72651274f10/s/c/screen_shot_2017-09-30_at_11.43.47.png
f960c5eb18d957cfeb550b0bed7ae74d35c89155 pub/media/catalog/product/cache/3bb5001b99d4c204f1708e92b30dda97/s/c/screen_shot_2017-09-30_at_11.43.47.png
You can see a bunch of duplicated hashes, which isn't desirable, since those duplicated images waste a lot of disk space and it takes longer to generate all of those files.
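For anyone who wants to quantify the duplication instead of eyeballing the listing, here is a small self-contained sketch of the same idea (using coreutils' `sha1sum` rather than `shasum`, and a throwaway directory seeded with one duplicate so it runs anywhere):

```shell
# Count distinct file contents that occur more than once under a directory.
tmp=$(mktemp -d)
mkdir -p "$tmp/a" "$tmp/b"
printf 'same-bytes' > "$tmp/a/img.png"
printf 'same-bytes' > "$tmp/b/img.png"   # binary-identical duplicate at another path
printf 'unique'     > "$tmp/a/other.png"

# Hash every file, keep only the hash column, and count hashes seen more than once.
dupes=$(find "$tmp" -type f -exec sha1sum {} \; | awk '{print $1}' | sort | uniq -d | wc -l)
echo "duplicated content hashes: $dupes"
rm -rf "$tmp"
```

Pointed at `pub/media/catalog/product/cache` instead of the temp directory, the same pipeline reports how many resized variants exist more than once.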
@magento-engcom-team: can you review my comment above, I still think this ticket needs to be re-opened. Thanks!
@hostep Thank you for the investigation.
Issue reopened for further research.
@hostep, thank you for your report.
We've created internal ticket(s) MAGETWO-80606 to track progress on the issue.
Additional information and research on the issue: https://github.com/magento/magento2/issues/8469
Any way to work around this? Should there be a way to block automatic generation of cache images until the `catalog:images:resize` command is run manually?
Please make this command able to execute asynchronously, since it lends itself perfectly to parallelism (it uses high CPU on a single thread, has low IO, and has a very long running time).
@j0um and others following this thread: according to the release notes of Magento 2.2.6, there were some optimizations made to this command.
I haven't found the time yet to look into them, but they sound promising:
The `catalog:images:resize` command execution time has been reduced by up to 90% in the release. However, this improvement necessitates these additional steps after upgrading your Magento instance to 2.2.6:
- Remove `pub/media/catalog/product/cache`. (Removing this folder frees up space.)
- Run `bin/magento catalog:images:resize` to generate a new image cache. (This step is necessary because we've changed the path to cached images and must remove the previously cached images.)
@hostep We have upgraded to 2.2.6 and we are currently in the process of "resizing" our images. The current ETA is 12 hours for 124262 entries, which is about the same as 2.2.5. It leads me to believe these optimisations are context specific. :(
I should also mention we're running the script on a beastly machine, and it is a shame to see only one core being 100% occupied by the process while the IO is being underutilized.
@j0um: thanks for the feedback, very good to know!
I dug in a little bit and found the commit where all the magic is supposed to be happening: https://github.com/magento/magento2/commit/de83f82842d
I think the 2nd point from my initial post was tackled, since the `image_type` (thumbnail, small_image, ...) is no longer used in calculating the hash, so in theory no duplicated resized files should be created anymore, which should already make the command run many times faster.
It's a bit sad to see that they haven't hardcoded the `image_type` to a previously existing value, so the hash wouldn't have changed and you wouldn't have had to run the command again to generate new images; this could have been prevented had they read this thread more carefully.
Instead of `unset($miscParams['image_type']);`, this would have been better: `$miscParams['image_type'] = 'thumbnail';` on this line.
(Disclaimer: I quickly went through the code, nothing tested yet, might be wrong about all this)
Anyway, as for your suggestion to parallelise this process: that's a very good suggestion. You might want to create a new issue for that, maybe over at https://github.com/magento/community-features/issues, otherwise they might close it, claiming it is not a bug but a feature request.
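To illustrate the kind of parallelism being asked for, here is a minimal sketch. This is not Magento's actual resize pipeline: the resize step is stubbed with an `echo`, and the worker count (4) is arbitrary; a real run would invoke the image processor at that point.

```shell
# Fan a list of source images out to 4 worker processes with xargs -P.
tmp=$(mktemp -d)
touch "$tmp/a.jpg" "$tmp/b.jpg" "$tmp/c.jpg" "$tmp/d.jpg"

# -print0 / -0 keeps filenames with spaces intact; -n1 gives one file per worker.
out=$(find "$tmp" -name '*.jpg' -print0 \
  | xargs -0 -n1 -P4 sh -c 'echo "resizing $0"')   # $0 receives the filename; stub for the real resize
echo "$out"
rm -rf "$tmp"
```

The output order is nondeterministic, which is exactly the point: four workers run independently instead of one core doing all the CPU-bound work.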
The `pub/media/catalog/product/cache` directory is quite large on some servers. Will these commits work on Magento 2.2.5? Or should I update to 2.2.6 if that fixes it?
On Magento 2.2.5 there are a lot of duplicates (28 or so per product with only 1 image).
Check for yourself like this:
apt-get install fdupes -y
fdupes -r /var/www/vhosts/yoursite/httpdocs/pub/media/catalog/product/cache
I should note, this is not from running the `bin/magento catalog:images:resize` command; it just seems to happen over time after deleting the cache directory.
Workaround: replace duplicates with hardlinks. *shrugs* Use at your own risk (and note that this one-liner breaks on filenames containing spaces).
cd yoursite/pub/media/catalog/product/cache
fdupes -r -1 . | while read -r line; do master=""; for file in $line; do if [ -z "$master" ]; then master="$file"; else ln -f "$master" "$file"; fi; done; done
4000+ products and the HDD is full...
In reading through the 2.2.6 release notes, it's made clear that M2 has moved forward with its image generation tool. My only question is: will on-the-fly image generation still be supported? A 90% performance increase is unimportant when the operation takes days (and has personally never completed, ever).
Have the concerns raised in this ticket been addressed? To sum it up, M2 moved forward with their image generation tool, realized it worked terribly, and then walked it back and reverted to on-the-fly image generation again, which is superior.
Any updates on if this is fixed? Magento 2.3 is out now.
Edit: Nope, still running into this issue of duplicate images in cache.
For reference to anyone coming by, I'll explain the current state of this command based on my latest experience.

I'm using Magento 2.2.7.

The output of the `catalog:images:resize` command is much better than it used to be. Additionally, it's clearly not resizing the same images over and over again. These are about the only positive things I can say about it, though.
Here are some of my current numbers:
- 428233 product images in my store.
- 2.7 images per second, based on my store. I waited until 323 images hit exactly 2 minutes, so 323/120 = 2.7 images per second.
- 428233 / 2.7 = 158604 s; 158604 / 60 = 2643 min; 2643 / 60 ≈ 44 hours.

In other words, it will take 44 hours for this tool to resize all the images in my store for the first time. That's truly devastating performance. At the very least, this is the first time I've discovered an approximate amount of time to run this whole thing, given the two pros mentioned earlier, so my expectations are set (extremely low). Of course, I actually have to run this completely through to see how resilient it is, or whether it can even finish.
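The back-of-the-envelope arithmetic above can be reproduced with a one-liner (the image count and throughput are this store's numbers, of course):

```shell
# 428233 images at ~2.7 images/second, converted to whole hours.
eta=$(awk 'BEGIN { printf "%.0f hours", 428233 / 2.7 / 3600 }')
echo "$eta"   # 44 hours
```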
Other numbers:
More numbers again:
I am sure there is a way to bring that number of thumbnails per product down, but I've not discovered it, yet.
I should mention that these images are being generated on a 512GB SSD, so the IOPS is quite high.
Does anyone have suggestions to make this better? In its current state, it's clearly anything but an acceptable way of resizing catalog images.
Edit:
I'd like to note that in both `developer` and `production` deploy modes, images are still generated on the fly if they don't exist, which is good. The release notes for 2.1.7 and then 2.2.6 both used to state that on-the-fly image generation was removed in favour of using the command-line tool to generate images, but it seems the release notes were edited to remove this information. Anyway, as long as on-the-fly image generation continues to be a thing, I won't care that much. It is immensely inconvenient that Magento 2 decided to change the paths to all resized images, though. That means I will, no matter what, have to run the command at least once.
Before leaving work yesterday I decided to see how long this operation would truly take. As is tradition, Magento2 failed catastrophically after 45 minutes without further details--providing a shallow message about a fart, when really it should be giving me all the gory details about its crap-filled underwear.
Before I could do that, though, it failed multiple times about a `swatch_image.jpg` path, which derailed me for a while, until I deleted the files under the path and suddenly it was able to progress. At that point I thought the troubles were over, detached my terminal multiplexer and ventured home. The next morning I awoke, feeling positive that I would see immense progress and possibly a disk that ran out of space (which would still be progress!), but instead I learned that the operation failed a mere 45 minutes after leaving, gracing me with its ever-useful error message, "Unsupported image format," which I've referenced before. Why not--at the very least--just log it and continue? WHY do you need to fail catastrophically and die?
This is so comically bad, it deserves its own Jimmy Fallon Thank You Notes bit:
@magento-engcom-team
> Before I could do that, though, it failed multiple times about a `swatch_image.jpg` path, which derailed me for a while, until I deleted the files under the path and suddenly it was able to progress. At that point I thought the troubles were over, detached my terminal multiplexer and ventured home. The next morning I awoke, feeling positive that I would see immense progress and possibly a disk that ran out of space (which would still be progress!), but instead I learned that the operation failed a mere 45 minutes after leaving, gracing me with its ever-useful error message, "Unsupported image format," which I've referenced before. Why not--_at the very least_--just log it and continue? _WHY_ do you need to fail catastrophically and die?
Having run into this before: anyone needing to lint images before resizing can use ImageMagick's `identify` utility to make sure the process doesn't abruptly fail due to one bad image.
Find malformed/invalid images recursively with ImageMagick's `identify` utility from the current working directory. Each scanned file is printed; only files where `identify` exits with a non-zero status get logged.
find -D rates . -type f \( -name '*.gif' -o -name '*.png' -o -name '*.jpg' -o -name '*.jpeg' \) -print -exec bash -c 'identify "$1" &> /dev/null || echo "$1" >> invalid-imgs.log' none {} \;
EDIT: to ignore the cache directory, use (note the `-not -path`):
find -D rates * -type f \( -name '*.gif' -o -name '*.png' -o -name '*.jpg' -o -name '*.jpeg' \) -not -path "*cache/*" -print -exec bash -c 'identify "$1" &> /dev/null || echo "$1" >> invalid-imgs.log' none {} \;
A quick suggestion (I apologize if it's redundant):
Add the theme name as an argument to resize images for that theme only.
Similarly, add the attribute (small, thumbnail, ...) as an argument to resize images specific to that attribute only (globally or theme-specific).
I have not yet tested the resize command, but I believe this will help performance, as it won't resize for all the themes and all the attributes.
Thanks,
RT
This is still a persistent issue on M2.3.3. I have a multi-store view with 7239 unique images and it's estimated 19 hours before the command completes. I'm watching duplicate images be generated.
After upgrading from 2.2.7 -> 2.3.3 all images have to be regenerated due to a change in the way the image hash is generated. This isn't realistic to do when we upgrade a production environment.
EDIT: I redeployed the MCloud environment and the current estimate is 5.9 hours... still way too long for an image set this small.
@0x15f If you're on Magento Cloud, you can try using Fastly image optimization.
> @0x15f If you're on Magento Cloud, you can try using Fastly image optimization.

That works for the Magento Cloud environment. Thanks!
@hostep Questions for you:

> The command should only produce images for the themes which are currently active in the frontend

(an expected result in your initial post) Do you know if Magento 2.3.3 still generates images for all themes when running the `bin/magento catalog:images:resize` command? If so, is there a solution you're using to prevent this (such as applying your closed PR #8142 as a composer patch)?

> The command shouldn't produce multiple files which are binary exactly the same

(an expected result in your initial post) This issue appears to be fixed (as I believe has been stated earlier in this issue). To verify, I did this: I have a Magento 2.3.3 Open Source site with ~10 products, each with one image. I ran `bin/magento catalog:images:resize`. I then ran your `find pub/media/catalog/product/cache -type f -exec shasum {} \; | sort` command, and all of the values were unique.
erikhansen:01:49 PM:/server/sites/example.test/pub/media/catalog (develop *+) $ find product/cache -type f -exec shasum {} \; | sort
002b2d16bf089d7b3e0ceeccc455a86b99518569 product/cache/a04afc3e976ed6149b98e612cd847386/1/0/108621.jpg
0070d89d33e84d607316035055964cfea66d5d25 product/cache/15a69d31a3bb49055dcbb63a1b2f050b/1/0/106092.jpg
009b7dee5acc9705040617ba783914b3633e4cd9 product/cache/e6a0a7bdd5afd91c4137e6e70e2940d5/1/0/105198.jpg
01d6b820bd04c5f8fcecfa3099b93972e6d4d894 product/cache/42c45adaacb663ac3a62438e6239dd4d/1/1/110741.jpg
038a457f472a377924c70951ca7ac4b0dc0c5d70 product/cache/014b93be21b914e12928cc25894b9ace/1/1/110741.jpg
03a1f7f01712a5e4408f81edc8e4075cf02e10cb product/cache/904c139ed8e5652575bede09b34216f5/1/1/110096.jpg
0440c720d8a286afa92ab7a2879813f79161e9bf product/cache/e6a0a7bdd5afd91c4137e6e70e2940d5/1/0/108715.jpg
Hi @erikhansen
We had PR https://github.com/magento/magento2/pull/8142 implemented on one of our bigger shops as a composer patch, but that was on Magento 2.1.x
We are currently in the process of upgrading that particular shop to 2.3.x and the image resizing part hasn't been looked into yet, so currently I can't say with much confidence how Magento 2.3.x is doing regarding this issue.
I'll try to remember looking into it one of the coming days/weeks.
If somebody else has more experience with whether Magento 2.3.x still resizes images for unused themes, feel free to share!
@hostep Thanks for the details.
I applied #8142 as a patch on a small 2.3.3 site with 10 products (each with a single image). The number of images in `pub/media/catalog/product/cache` was exactly the same before vs. after applying the patch. I deleted `pub/media/catalog/product/cache` and ran `bin/magento regenerate:product:url` between tests. So my expectation is that the patch doesn't have any effect on 2.3.3. But I'm not certain I tested things correctly, so I'd love to hear from someone else about this.
@0x15f
> After upgrading from 2.2.7 -> 2.3.3 all images have to be regenerated due to a change in the way the image hash is generated. This isn't realistic to do when we upgrade a production environment.
Are you making this up? I don't need another reason to rant. I'm not even over them changing the image hash format from M1 to M2. You're saying that starting in 2.3.3 I now need to host a third image cache (of 191GB) of the same images? It's like they hate search engines.
Unfortunately, no joke. The change to the hashes was added in 2.3.0. Regenerating the image caches took 5+ days for one of our customers with high-res raw images...
We resorted to hardcoding the few hashes in our customers' instances now, just in case. :(
@0x15f
> After upgrading from 2.2.7 -> 2.3.3 all images have to be regenerated due to a change in the way the image hash is generated. This isn't realistic to do when we upgrade a production environment.

> Are you making this up? I don't need another reason to rant. I'm not even over them changing the image hash format from M1 to M2. You saying starting in M233 I now need to host a third image cache (of 191GB) of the same images? It's like they hate search engines.
My images broke right after I upgraded and I was told they needed to be re-generated and it solved the issue. I saw something somewhere about the hash method changing.
@heldchen I'm not seeing any mention of that in the 2.3.0 Release Notes, or in the 2.3.3 Release Notes, @0x15f.
I'm genuinely curious to know if this is a thing, as my team just upgraded to 2.2.10, and we have also been working on a 2.3.3 release for February. We haven't come across this in our testing.
that's because there was no mention about this change. welcome to the wonderful world of magento upgrades...
@heldchen I mean, I wouldn't be surprised, but to their credit they've been pretty alright about their release notes, and I say this as someone who takes his crapping-on-Magento very seriously. I know they did this sometime in the 2.2.x releases, where they legitimately changed how images get created, and then only later announced it as an addendum, catching much flak.
Hi @engcom-Echo. Thank you for working on this issue.
Looks like this issue is already verified and confirmed. But if you want to validate it one more time, please go through the following instructions:

- [ ] 1. Add the `Component: XXXXX` label(s) to the ticket, indicating the components it may be related to.
- [ ] 2. Verify that the issue is reproducible on the `2.4-develop` branch.
  - Add the comment `@magento give me 2.4-develop instance` to deploy a test instance on Magento infrastructure.
  - If the issue is reproducible on the `2.4-develop` branch, please add the label `Reproduced on 2.4.x`.
  - If the issue is not reproducible, add a comment that the issue is not reproducible, close the issue, and _stop the verification process here_!
- [ ] 3. If the issue is not relevant or is not reproducible any more, feel free to close it.
Hi, @hostep. Thank you for your report.
I tried to reproduce this on the `2.4-develop` branch and a 2.3.3 Composer installation using your steps.
Unfortunately, this issue is not reproducible.
Please feel free to comment, reopen, or create a new ticket according to the Issue reporting guidelines if you are still facing this issue on the latest `2.4-develop` branch. Thank you for your collaboration.
It looks like the issues brought up in here have been resolved more or less in Magento 2.3.x, so that's good, but I found another huge problem, for which I've created https://github.com/magento/magento2/issues/26796
I use a multistore setup: 13 websites, ~20,000 products, 31,760 images, 2 additional themes.
The product images folder is ~3GB; the cache images folder is ~218GB. Is that even legal?!
For each new image there are 230+ new cache images.
The cache folder seems somewhat pointless if you are using Cloudflare or another CDN.
Can we just disable this product image cache feature and have requests serve a resized image, but cache that response with an ETag based on the original source file's checksum?
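A minimal sketch of that checksum-keyed idea. Assumptions: `resize_cached` is a hypothetical helper (not part of Magento), and the actual resize is stubbed with `cp`; a real implementation would invoke an image processor such as ImageMagick and emit the key as the response's ETag header.

```shell
# Derive the cache path from the source file's checksum plus the geometry, so
# identical source bytes always map to the same derivative and get reused.
resize_cached() {
  src=$1; geom=$2
  key=$(sha1sum "$src" | awk '{print $1}')
  out="$CACHE_DIR/${key}_${geom}.jpg"
  if [ ! -f "$out" ]; then
    # real code would resize here (e.g. ImageMagick); stubbed as a copy
    cp "$src" "$out"
  fi
  printf '%s\n' "$out"
}

CACHE_DIR=$(mktemp -d)
printf 'fake-image-bytes' > "$CACHE_DIR/product.jpg"
first=$(resize_cached "$CACHE_DIR/product.jpg" 240x300)
second=$(resize_cached "$CACHE_DIR/product.jpg" 240x300)   # second call is a cache hit
[ "$first" = "$second" ] && echo "cache hit"
```

Because the key depends only on the source bytes and the geometry, re-uploading an identical image never grows the cache, which is the property the hash-based directory names in this thread keep breaking.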