Core: VarnishPurger cannot handle a lot of entities

Created on 12 Apr 2018 · 24 comments · Source: api-platform/core

Based on this commit: https://github.com/Simperfit/api-perf/commit/4201cc28c4ddb7c493df31a095d8ad71b8fe1297#diff-c828d4117439a2216f86c406d522c79c

To reproduce it:

docker-compose pull
docker-compose up -d
docker-compose exec php bin/console d:f:l

Results in:


In RequestException.php line 113:

  Client error: `BAN http://cache-proxy` resulted in a `400 Bad Request` response  

bug

All 24 comments

Would #1776 help? :smile:

Any news on this issue?

As a workaround, you can do that: https://github.com/api-platform/demo/pull/24/commits/0bc68491cf2859538f6749218a227095ec0ec852#diff-628212751f227bfce20484adbd0d4191R1

Maybe we can enable the Varnish subsystem only for the prod env in the default skeleton too?

I'd again argue that there's a need for the developer to verify that the caching (especially the invalidation) by reverse proxy works as expected in the dev env. Surprises are bad.

And it's a bug we need to fix anyway.

The only way to "fix" it is to skip the listener in those specific batch cases. It's up to the developer to purge Varnish another way (probably a full purge) in those cases. But dev commands such as the Doctrine Migrations ones must work.

A more efficient cache invalidation method should help. But yes, of course we could never guarantee that there will not be a timeout if a certain higher limit is still hit. That will be up to the developer to improvise. My point of disagreement is mainly on disabling it out of the box in the dev env.

We may at least disable it in CLI + dev to allow such commands to work.

Proposal: if in debug mode + CLI, log a warning but disable it.
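
A minimal sketch of that proposal, assuming a small decorator around the existing purger; the class name, the constructor wiring (%kernel.debug% as the boolean), and registering it via service decoration are assumptions, not the project's actual implementation:

<?php

declare(strict_types=1);

namespace App\HttpCache;

use ApiPlatform\Core\HttpCache\PurgerInterface;
use Psr\Log\LoggerInterface;

/**
 * Hypothetical decorator: in debug mode on the CLI, log a warning and skip
 * the purge instead of sending BAN requests to Varnish.
 */
final class CliAwarePurger implements PurgerInterface
{
    private $decorated;
    private $logger;
    private $debug;

    public function __construct(PurgerInterface $decorated, LoggerInterface $logger, bool $debug)
    {
        $this->decorated = $decorated;
        $this->logger = $logger;
        $this->debug = $debug; // wire %kernel.debug% here
    }

    public function purge(array $iris)
    {
        if ($this->debug && 'cli' === \PHP_SAPI) {
            $this->logger->warning('Skipping cache invalidation in debug/CLI context.', ['iris' => $iris]);

            return;
        }

        $this->decorated->purge($iris);
    }
}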

I'm also having this issue trying to import data with Doctrine from a command (which I need to do in production too).
Do we have a workaround?
For now I've simply reduced my bulk size to avoid this.

I also have the issue in a prod environment when invalidating an object with a lot of relations, so it's definitely not only related to dev commands

I'm having this issue too when I change my profile picture on my API hosted on GKE. Is there a workaround?

Hello,
I managed to fix the problem temporarily by modifying the VarnishPurger class like below:

<?php

/*
 * This file is part of the API Platform project.
 *
 * (c) Kévin Dunglas <[email protected]>
 *
 * For the full copyright and license information, please view the LICENSE
 * file that was distributed with this source code.
 */

declare(strict_types=1);

namespace ApiPlatform\Core\HttpCache;

use GuzzleHttp\ClientInterface;

/**
 * Purges Varnish.
 *
 * @author Kévin Dunglas <[email protected]>
 *
 * @experimental
 */
final class VarnishPurger implements PurgerInterface
{
    private $clients;

    /**
     * @param ClientInterface[] $clients
     */
    public function __construct(array $clients)
    {
        $this->clients = $clients;
    }

    /**
     * {@inheritdoc}
     */
    public function purge(array $iris)
    {
        if (!$iris) {
            return;
        }

        // Create one regex fragment per IRI
        $parts = array_map(function ($iri) {
            return sprintf('(^|\,)%s($|\,)', preg_quote($iri));
        }, $iris);

        // Chunk the fragments so each BAN request carries a short regex,
        // keeping the header within Varnish's request limits
        $parts = array_chunk($parts, 10);
        foreach ($parts as $part) {
            $regex = \count($part) > 1 ? sprintf('(%s)', implode(')|(', $part)) : array_shift($part);
            foreach ($this->clients as $client) {
                $client->request('BAN', '', ['headers' => ['ApiPlatform-Ban-Regex' => $regex]]);
            }
        }
    }
}

But I do not know how to extend that class, as it is declared final; for now I have modified the file directly in vendor/api-platform/core/src/HttpCache/VarnishPurger.php.
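
Since the class is final, an alternative to patching vendor is to decorate the purger service with a thin wrapper that chunks the IRIs and delegates each chunk to the built-in purger, which then builds a short regex per chunk. A sketch, assuming the api_platform.http_cache.purger.varnish service id and Symfony service decoration (both to be verified against the installed version); the class name is hypothetical:

<?php

declare(strict_types=1);

namespace App\HttpCache;

use ApiPlatform\Core\HttpCache\PurgerInterface;

/**
 * Hypothetical decorator: splits the IRI list into small chunks so the inner
 * VarnishPurger sends several short BAN requests instead of one huge header.
 *
 * Possible wiring (services.yaml, assumed service id):
 *   App\HttpCache\ChunkedPurger:
 *     decorates: 'api_platform.http_cache.purger.varnish'
 *     arguments: ['@App\HttpCache\ChunkedPurger.inner']
 */
final class ChunkedPurger implements PurgerInterface
{
    private $decorated;
    private $chunkSize;

    public function __construct(PurgerInterface $decorated, int $chunkSize = 10)
    {
        $this->decorated = $decorated;
        $this->chunkSize = $chunkSize;
    }

    public function purge(array $iris)
    {
        foreach (array_chunk($iris, $this->chunkSize) as $chunk) {
            $this->decorated->purge($chunk);
        }
    }
}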

The fix from @applizem above worked for executing fixtures, but the admin part stopped responding for me :/

I'd really like to encourage making smaller, more frequent requests if you can afford to. More granular requests could also help with cache hits, as you'd reduce specialized requests.

For me this is happening when I trigger fixtures via docker-compose exec php bin/console doctrine:fixtures:load

To prevent the problem when loading fixtures, you can enable the cache invalidation mechanism only for the prod environment. It's what we've done in the demo: https://github.com/api-platform/demo/blob/master/api/config/packages/prod/api_platform.yaml#L7-L11
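
For reference, a sketch of the same idea as PHP configuration (mirroring the linked YAML); the http_cache invalidation option names and the VARNISH_URL environment variable are assumptions to check against the installed version:

<?php
// config/packages/prod/api_platform.php (sketch): enable cache invalidation,
// and therefore the VarnishPurger, only in the prod environment.

declare(strict_types=1);

use Symfony\Component\DependencyInjection\Loader\Configurator\ContainerConfigurator;

return static function (ContainerConfigurator $container): void {
    $container->extension('api_platform', [
        'http_cache' => [
            'invalidation' => [
                'enabled' => true,
                'varnish_urls' => ['%env(VARNISH_URL)%'],
            ],
        ],
    ]);
};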

I'd again argue that there's a need for the developer to verify that the caching (especially the invalidation) by reverse proxy works as expected in the dev env. Surprises are bad.

I don't agree. We are talking about Symfony environments here.
The dev and test environments are already very different from the prod one, because most caches are disabled and debug tools are loaded. Proper (manual and e2e) testing must be done using the prod environment, even locally.

To prevent the problem when loading fixtures, you can enable the cache invalidation mechanism only for the prod environment. It's what we've done in the demo

@dunglas Should we ship that as the default then?

I'd again argue that there's a need for the developer to verify that the caching (especially the invalidation) by reverse proxy works as expected in the dev env. Surprises are bad.

I don't agree. We are talking about Symfony environments here.
The dev and test environments are already very different from the prod one, because most caches are disabled and debug tools are loaded.

Going by that logic, perhaps we shouldn't ship the varnish ("cache-proxy") container in our Docker Compose setup?

@dunglas Should we ship that as the default then?

Yes I think so.

Going by that logic, perhaps we shouldn't ship the varnish ("cache-proxy") container in our Docker Compose setup?

You mean that we need a docker-compose.prod.yaml file? :D Maybe we will end up with that then...

You mean that we need a docker-compose.prod.yaml file? :D Maybe we will end up with that then...

I have already worked on that, if you'd accept it. :wink: I could open a PR.

Let's do that then

3843
