Velero: RFE: option to delete & recreate objects that already exist when restoring

Created on 1 May 2018 · 36 Comments · Source: vmware-tanzu/velero

I set up Ark 0.8.1 to make backups of my cluster, and afterwards I tested the restore just to make sure that ark restore works. I got some warnings and errors, so I'm wondering if they are expected or if I'm doing something wrong.

This is a warning; I'm not sure why it's failing. I'd expect Ark to replace this resource even if it already exists. Maybe an Ark flag to force the restore could solve this issue.

kube-system:  not restored: configmaps "cert-manager-controller" already exists and is different from backed up version.

This is an error:

error restoring /tmp/318897152/resources/pods/namespaces/kube-system/kube-scheduler-ip-10-50-84-142.eu-west-1.compute.internal.json: Pod "kube-scheduler-ip-10-50-84-142.eu-west-1.compute.internal" is invalid: metadata.annotations[kubernetes.io/config.mirror]: Invalid value: "d5ec5961f20e838394c13c9314b9d39d": must set spec.nodeName if mirror pod annotation is set

Full ark restore output

Giancarlos-MBPro:.ssh grubio$ ark restore describe logging-multiple-hostnames-20180501104707
Name:         logging-multiple-hostnames-20180501104707
Namespace:    heptio-ark
Labels:       <none>
Annotations:  <none>

Backup:  logging-multiple-hostnames

Namespaces:
  Included:  *
  Excluded:  <none>

Resources:
  Included:        *
  Excluded:        nodes, events, events.events.k8s.io
  Cluster-scoped:  auto

Namespace mappings:  <none>

Label selector:  <none>

Restore PVs:  auto

Phase:  Completed

Validation errors:  <none>

Warnings:
  Ark:        <none>
  Cluster:  not restored: persistentvolumes "pvc-138f24f1-431c-11e8-ac10-02c73bc75f00" already exists and is different from backed up version.
            not restored: persistentvolumes "pvc-13b0f8f2-431c-11e8-ac10-02c73bc75f00" already exists and is different from backed up version.
            not restored: persistentvolumes "pvc-13d14da2-431c-11e8-ac10-02c73bc75f00" already exists and is different from backed up version.
            not restored: persistentvolumes "pvc-13f6562d-431c-11e8-ac10-02c73bc75f00" already exists and is different from backed up version.
            not restored: persistentvolumes "pvc-37a6990b-430e-11e8-ac10-02c73bc75f00" already exists and is different from backed up version.
            not restored: persistentvolumes "pvc-37c27b62-430e-11e8-ac10-02c73bc75f00" already exists and is different from backed up version.
            not restored: persistentvolumes "pvc-37c9b935-430e-11e8-ac10-02c73bc75f00" already exists and is different from backed up version.
            not restored: persistentvolumes "pvc-6c54e367-430e-11e8-ac10-02c73bc75f00" already exists and is different from backed up version.
  Namespaces:
    default:      not restored: services "kubernetes" already exists and is different from backed up version.
    ingress:      not restored: configmaps "intern-intern" already exists and is different from backed up version.
                  not restored: services "ingress-nginx-ingress-intern-controller-metrics" already exists and is different from backed up version.
                  not restored: services "ingress-nginx-ingress-intern-controller-stats" already exists and is different from backed up version.
                  not restored: services "ingress-nginx-ingress-intern-controller" already exists and is different from backed up version.
                  not restored: services "ingress-nginx-ingress-intern-default-backend" already exists and is different from backed up version.
                  not restored: services "ingress-oauth-proxy" already exists and is different from backed up version.
    kube-system:  not restored: configmaps "cert-manager-controller" already exists and is different from backed up version.
                  not restored: configmaps "ingress-shim-controller" already exists and is different from backed up version.
                  not restored: configmaps "monitoring.v69" already exists and is different from backed up version.
                  not restored: endpoints "kube-controller-manager" already exists and is different from backed up version.
                  not restored: endpoints "kube-scheduler" already exists and is different from backed up version.
                  not restored: jobs.batch "kube-system-cert-manager-cronjob-1524473820" already exists and is different from backed up version.
                  not restored: jobs.batch "kube-system-cert-manager-cronjob-1524473880" already exists and is different from backed up version.
                  not restored: jobs.batch "kube-system-cert-manager-cronjob-1524473940" already exists and is different from backed up version.
                  not restored: jobs.batch "kube-system-cert-manager-cronjob-1524488340" already exists and is different from backed up version.
                  not restored: jobs.batch "kube-system-cert-manager-job" already exists and is different from backed up version.
                  not restored: services "heapster" already exists and is different from backed up version.
                  not restored: services "kube-dns" already exists and is different from backed up version.
                  not restored: services "kube-system-kubernetes-dashboard" already exists and is different from backed up version.
                  not restored: services "tiller-deploy" already exists and is different from backed up version.
    logging:      not restored: configmaps "intern-logging-intern-logging" already exists and is different from backed up version.
                  not restored: services "cerebro-logging-cluster" already exists and is different from backed up version.
                  not restored: services "elasticsearch-discovery-logging-cluster" already exists and is different from backed up version.
                  not restored: services "elasticsearch-logging-cluster" already exists and is different from backed up version.
                  not restored: services "es-data-svc-logging-cluster" already exists and is different from backed up version.
                  not restored: services "kibana-logging-cluster" already exists and is different from backed up version.
                  not restored: services "logging-nginx-ingressintern-controller-metrics" already exists and is different from backed up version.
                  not restored: services "logging-nginx-ingressintern-controller-stats" already exists and is different from backed up version.
                  not restored: services "logging-nginx-ingressintern-controller" already exists and is different from backed up version.
                  not restored: services "logging-nginx-ingressintern-default-backend" already exists and is different from backed up version.
    monitoring:   not restored: configmaps "monitoring-kube-prometheus" already exists and is different from backed up version.
                  not restored: endpoints "alertmanager-operated" already exists and is different from backed up version.
                  not restored: endpoints "prometheus-operated" already exists and is different from backed up version.
                  not restored: services "monitoring-prometheus-pushgateway" already exists and is different from backed up version.

Errors:
  Ark:        <none>
  Cluster:    <none>
  Namespaces:
    kube-system:  error restoring /tmp/318897152/resources/pods/namespaces/kube-system/etcd-server-events-ip-10-50-105-102.eu-west-1.compute.internal.json: Pod "etcd-server-events-ip-10-50-105-102.eu-west-1.compute.internal" is invalid: metadata.annotations[kubernetes.io/config.mirror]: Invalid value: "69f1831d34b8a772e16fe4b53dfde156": must set spec.nodeName if mirror pod annotation is set
                  error restoring /tmp/318897152/resources/pods/namespaces/kube-system/etcd-server-events-ip-10-50-79-139.eu-west-1.compute.internal.json: Pod "etcd-server-events-ip-10-50-79-139.eu-west-1.compute.internal" is invalid: metadata.annotations[kubernetes.io/config.mirror]: Invalid value: "2f971a1dcd6eb045c364011a4cd3eb0b": must set spec.nodeName if mirror pod annotation is set
                  error restoring /tmp/318897152/resources/pods/namespaces/kube-system/etcd-server-events-ip-10-50-84-142.eu-west-1.compute.internal.json: Pod "etcd-server-events-ip-10-50-84-142.eu-west-1.compute.internal" is invalid: metadata.annotations[kubernetes.io/config.mirror]: Invalid value: "a78c3a37fa41e2979affd20e9b8e0111": must set spec.nodeName if mirror pod annotation is set
                  error restoring /tmp/318897152/resources/pods/namespaces/kube-system/etcd-server-ip-10-50-105-102.eu-west-1.compute.internal.json: Pod "etcd-server-ip-10-50-105-102.eu-west-1.compute.internal" is invalid: metadata.annotations[kubernetes.io/config.mirror]: Invalid value: "1e7be17cb58e298472eb0bcf5529d4ca": must set spec.nodeName if mirror pod annotation is set
                  error restoring /tmp/318897152/resources/pods/namespaces/kube-system/etcd-server-ip-10-50-79-139.eu-west-1.compute.internal.json: Pod "etcd-server-ip-10-50-79-139.eu-west-1.compute.internal" is invalid: metadata.annotations[kubernetes.io/config.mirror]: Invalid value: "7b2a70d4cf5b688ab13ddbe564ef527e": must set spec.nodeName if mirror pod annotation is set
                  error restoring /tmp/318897152/resources/pods/namespaces/kube-system/etcd-server-ip-10-50-84-142.eu-west-1.compute.internal.json: Pod "etcd-server-ip-10-50-84-142.eu-west-1.compute.internal" is invalid: metadata.annotations[kubernetes.io/config.mirror]: Invalid value: "0e92292cb0f619d5a229297600d7bb97": must set spec.nodeName if mirror pod annotation is set
                  error restoring /tmp/318897152/resources/pods/namespaces/kube-system/kube-apiserver-ip-10-50-105-102.eu-west-1.compute.internal.json: Pod "kube-apiserver-ip-10-50-105-102.eu-west-1.compute.internal" is invalid: metadata.annotations[kubernetes.io/config.mirror]: Invalid value: "d454a354dcb2cb12783fa49f2386b6ba": must set spec.nodeName if mirror pod annotation is set
                  error restoring /tmp/318897152/resources/pods/namespaces/kube-system/kube-apiserver-ip-10-50-79-139.eu-west-1.compute.internal.json: Pod "kube-apiserver-ip-10-50-79-139.eu-west-1.compute.internal" is invalid: metadata.annotations[kubernetes.io/config.mirror]: Invalid value: "d454a354dcb2cb12783fa49f2386b6ba": must set spec.nodeName if mirror pod annotation is set
                  error restoring /tmp/318897152/resources/pods/namespaces/kube-system/kube-apiserver-ip-10-50-84-142.eu-west-1.compute.internal.json: Pod "kube-apiserver-ip-10-50-84-142.eu-west-1.compute.internal" is invalid: metadata.annotations[kubernetes.io/config.mirror]: Invalid value: "d454a354dcb2cb12783fa49f2386b6ba": must set spec.nodeName if mirror pod annotation is set
                  error restoring /tmp/318897152/resources/pods/namespaces/kube-system/kube-controller-manager-ip-10-50-105-102.eu-west-1.compute.internal.json: Pod "kube-controller-manager-ip-10-50-105-102.eu-west-1.compute.internal" is invalid: metadata.annotations[kubernetes.io/config.mirror]: Invalid value: "1526b1178ede071d84be82486333151e": must set spec.nodeName if mirror pod annotation is set
                  error restoring /tmp/318897152/resources/pods/namespaces/kube-system/kube-controller-manager-ip-10-50-79-139.eu-west-1.compute.internal.json: Pod "kube-controller-manager-ip-10-50-79-139.eu-west-1.compute.internal" is invalid: metadata.annotations[kubernetes.io/config.mirror]: Invalid value: "1526b1178ede071d84be82486333151e": must set spec.nodeName if mirror pod annotation is set
                  error restoring /tmp/318897152/resources/pods/namespaces/kube-system/kube-controller-manager-ip-10-50-84-142.eu-west-1.compute.internal.json: Pod "kube-controller-manager-ip-10-50-84-142.eu-west-1.compute.internal" is invalid: metadata.annotations[kubernetes.io/config.mirror]: Invalid value: "1526b1178ede071d84be82486333151e": must set spec.nodeName if mirror pod annotation is set
                  error restoring /tmp/318897152/resources/pods/namespaces/kube-system/kube-proxy-ip-10-50-103-41.eu-west-1.compute.internal.json: Pod "kube-proxy-ip-10-50-103-41.eu-west-1.compute.internal" is invalid: metadata.annotations[kubernetes.io/config.mirror]: Invalid value: "5963c325107b331ab635aad75b94927b": must set spec.nodeName if mirror pod annotation is set
                  error restoring /tmp/318897152/resources/pods/namespaces/kube-system/kube-proxy-ip-10-50-105-102.eu-west-1.compute.internal.json: Pod "kube-proxy-ip-10-50-105-102.eu-west-1.compute.internal" is invalid: metadata.annotations[kubernetes.io/config.mirror]: Invalid value: "bac2cc1636847764a0815d26720c8cd7": must set spec.nodeName if mirror pod annotation is set
                  error restoring /tmp/318897152/resources/pods/namespaces/kube-system/kube-proxy-ip-10-50-107-213.eu-west-1.compute.internal.json: Pod "kube-proxy-ip-10-50-107-213.eu-west-1.compute.internal" is invalid: metadata.annotations[kubernetes.io/config.mirror]: Invalid value: "377aa0ca81598973093dac679d794bba": must set spec.nodeName if mirror pod annotation is set
                  error restoring /tmp/318897152/resources/pods/namespaces/kube-system/kube-proxy-ip-10-50-68-173.eu-west-1.compute.internal.json: Pod "kube-proxy-ip-10-50-68-173.eu-west-1.compute.internal" is invalid: metadata.annotations[kubernetes.io/config.mirror]: Invalid value: "4b96cd34114ce182fb895b5851df1076": must set spec.nodeName if mirror pod annotation is set
                  error restoring /tmp/318897152/resources/pods/namespaces/kube-system/kube-proxy-ip-10-50-71-34.eu-west-1.compute.internal.json: Pod "kube-proxy-ip-10-50-71-34.eu-west-1.compute.internal" is invalid: metadata.annotations[kubernetes.io/config.mirror]: Invalid value: "f97f3000e965824d1fbf2f5e271c5dcb": must set spec.nodeName if mirror pod annotation is set
                  error restoring /tmp/318897152/resources/pods/namespaces/kube-system/kube-proxy-ip-10-50-79-139.eu-west-1.compute.internal.json: Pod "kube-proxy-ip-10-50-79-139.eu-west-1.compute.internal" is invalid: metadata.annotations[kubernetes.io/config.mirror]: Invalid value: "5092b3704cad1cae1ba58baa1f89c044": must set spec.nodeName if mirror pod annotation is set
                  error restoring /tmp/318897152/resources/pods/namespaces/kube-system/kube-proxy-ip-10-50-81-61.eu-west-1.compute.internal.json: Pod "kube-proxy-ip-10-50-81-61.eu-west-1.compute.internal" is invalid: metadata.annotations[kubernetes.io/config.mirror]: Invalid value: "41e604c2a05ff59d4ca71eae2650b77b": must set spec.nodeName if mirror pod annotation is set
                  error restoring /tmp/318897152/resources/pods/namespaces/kube-system/kube-proxy-ip-10-50-82-127.eu-west-1.compute.internal.json: Pod "kube-proxy-ip-10-50-82-127.eu-west-1.compute.internal" is invalid: metadata.annotations[kubernetes.io/config.mirror]: Invalid value: "8ad729bc65359d65c67211a9c8cad910": must set spec.nodeName if mirror pod annotation is set
                  error restoring /tmp/318897152/resources/pods/namespaces/kube-system/kube-proxy-ip-10-50-84-142.eu-west-1.compute.internal.json: Pod "kube-proxy-ip-10-50-84-142.eu-west-1.compute.internal" is invalid: metadata.annotations[kubernetes.io/config.mirror]: Invalid value: "92a7e3e865f9d8fefcc21e84377b4f40": must set spec.nodeName if mirror pod annotation is set
                  error restoring /tmp/318897152/resources/pods/namespaces/kube-system/kube-scheduler-ip-10-50-105-102.eu-west-1.compute.internal.json: Pod "kube-scheduler-ip-10-50-105-102.eu-west-1.compute.internal" is invalid: metadata.annotations[kubernetes.io/config.mirror]: Invalid value: "d5ec5961f20e838394c13c9314b9d39d": must set spec.nodeName if mirror pod annotation is set
                  error restoring /tmp/318897152/resources/pods/namespaces/kube-system/kube-scheduler-ip-10-50-79-139.eu-west-1.compute.internal.json: Pod "kube-scheduler-ip-10-50-79-139.eu-west-1.compute.internal" is invalid: metadata.annotations[kubernetes.io/config.mirror]: Invalid value: "d5ec5961f20e838394c13c9314b9d39d": must set spec.nodeName if mirror pod annotation is set
                  error restoring /tmp/318897152/resources/pods/namespaces/kube-system/kube-scheduler-ip-10-50-84-142.eu-west-1.compute.internal.json: Pod "kube-scheduler-ip-10-50-84-142.eu-west-1.compute.internal" is invalid: metadata.annotations[kubernetes.io/config.mirror]: Invalid value: "d5ec5961f20e838394c13c9314b9d39d": must set spec.nodeName if mirror pod annotation is set
Giancarlos-MBPro:.ssh grubio$ 
Labels: Enhancement, User Needs, Product Reviewed Q2 2021


All 36 comments

Hi @gianrubio

kube-system: not restored: configmaps "cert-manager-controller" already exists and is different from backed up version.

This type of message is a warning and it indicates that there is an item with the same name that already exists in the cluster. Ark examined the backed up copy and compared it to the in-cluster copy, and there were differences, so Ark records a warning so you're aware that it wasn't able to restore the item.

error restoring /tmp/318897152/resources/pods/namespaces/kube-system/kube-scheduler-ip-10-50-84-142.eu-west-1.compute.internal.json: Pod "kube-scheduler-ip-10-50-84-142.eu-west-1.compute.internal" is invalid: metadata.annotations[kubernetes.io/config.mirror]: Invalid value: "d5ec5961f20e838394c13c9314b9d39d": must set spec.nodeName if mirror pod annotation is set

This is #428

This type of message is a warning and it indicates that there is an item with the same name that already exists in the cluster. Ark examined the backed up copy and compared it to the in-cluster copy, and there were differences, so Ark records a warning so you're aware that it wasn't able to restore the item.

Does it make sense to not restore the object even if it's not the same? How does Ark compare the objects?

Does it make sense to not restore the object even if it's not the same?

I'm not sure what you mean?

How does ark compare the object?

Ark clears out fields that would differ such as .metadata.uid and then checks for equality using reflect.DeepEqual().

@gianrubio for the warnings such as kube-system: not restored: configmaps "cert-manager-controller" already exists and is different from backed up version., do you believe the items are identical and that Ark is not comparing them correctly?

Is there anything else you need for this issue, or would it be ok to close it?

@gianrubio for the warnings such as kube-system: not restored: configmaps "cert-manager-controller" already exists and is different from backed up version., do you believe the items are identical and that Ark is not comparing them correctly?

The items are probably not equal but I'd expect ark to replace them.

I think we'd need to provide a control for that behavior and let the user doing the restore decide if Ark should delete & recreate or no-op.

Should we repurpose this issue as "RFE: option to delete & recreate objects that already exist when restoring"?

Yes, that was my point, maybe a flag like --force could solve this behaviour, WDYT?

cc @jbeda

I'm thinking maybe something like --conflict-strategy with options replace (delete what's in the cluster and create what's in the backup), preserve (keep what's in the cluster and record a warning as we're doing now). (All names TBD)

The proposal sounds good; I have only one thought. Deleting the objects before applying will reschedule all of them, and doing that in a big cluster can cause issues. I'd rather delete only the objects that have failed to apply, with a big warning on deleting volumes and PVCs.

Yes, the flow would be

  1. Try to create the object
  2. If it failed because the item of the same name already exists

    1. If the item in the backup and the item in the cluster are the same, no-op

    2. Otherwise, check the conflict strategy and proceed with delete/create or logging a warning

I also don't think we'd ever want to delete a PV or PVC. We have another issue open for cloning preexisting PVs into a cluster (#192). We'll need to make sure we special case things like PVs/PVCs here.

User story:

As a cluster operator, I want to use Ark as a mechanism to keep two clusters in sync. This might be Prod A and Prod B, or alternatively every night mirror Production to Staging so that we have a fresh environment for testing/staging.

For stateless apps, this sounds like a healthy feature for us to add. I agree with Andy that we _probably_ don't want to delete PV/PVC by default.

That said, if the use-case is mirroring Production to Staging, I don't want to keep around my old staging PV/PVCs. Perhaps we need another CLI flag for PV/PVC specifically? --conflict-strategy-volumes?

@heptio/ark-team I'd like to propose resolving this as part of v0.11.0. We should discuss during the v0.11.0 planning meeting what might have to get pushed back to let this in.

Adding Needs Product label.

@rosskukulinski can we talk about this soon?

@ncdc sure! Maybe Tuesday?

Sounds good.

@rosskukulinski I know we've gone back and forth on this a number of times. Is this actually a priority to solve and something we need to do as part of v0.11?

@skriss I spoke with our CRE team and this issue is not a customer priority. Feel free to push out of v0.11 - and maybe the P1 label can come off

Thanks @rosskukulinski. Moved to New Issues with a 1.x milestone and removed the P1 label.

It would be really great if this feature came in v0.11.0. We actually want the resource to be applied from the backup during restore even if it is already present in the cluster.

We've run into this issue as well: we expected that Velero would restore a pod to its backed-up state, even if the pod still exists. Do I gather correctly that when one wants to restore a pod to its backed-up state, one has to first delete the pod in question (or the deployment, if there is one managing the pod), and then do a restore?

In case this is already documented, would you point me to the docs please? If it's not documented, it wasn't obvious, at least for us. Many examples one finds online simulate disaster by deleting the example namespace and restoring afterwards, but we did not expect that deletion is actually necessary.

@valentin-krasontovitsch yes - in the current state, the resources would need to not exist in your cluster before being restorable. We are still interested in having the option to delete/recreate or update objects during restore to match up the restored state, but we haven't been able to get to working on it yet. And I'm afraid the documentation around this is probably less than clear. If you have time, we'd love a PR to the docs to add some context! We'll keep this issue open for continuing to track the feature request.

I also would love to have this option. My plan is to sync a replica cluster from time to time, but the restore only works the first time, because any changes on the primary cluster cause the replica restore to warn about them and simply not restore those objects at all.

not restored: services "foo.bar" already exists and is different from backed up version.

I guess a workaround would be deleting the previously restored namespace before the new restore, which is not a good approach.

Do you folks have any plans of working on it? Maybe I can help with something.

@vsantos I don't know that we have a solid plan of action yet. So far, Velero's approach has been to never delete any resources, so as to guarantee that we don't delete anything by accident.

The workaround you suggest is, essentially, what Velero would do when done at the namespace level. Are you saying you don't think it's a good approach when done by the user, or when done by Velero?

Steve has some comments on a related, though different issue. Velero's selection logic means that we'll have to take some time to think carefully about implementing this cloning solution so that it meets all these cases. I suspect we'll have to introduce some parameters to allow users to specify how they might want restores to behave.

Some questions:

  • Would an acceptable interim workaround be to delete the pre-existing PV and PVC before restoring from velero backup?
  • Can deletion be limited to the PVC and PV only (i.e. don't delete the entire namespace)?

@archmangler yes, you could first manually delete the objects that you want to restore. It's possible you'd run into issues where you couldn't delete the PV/PVC because they were being used by a pod, though.

@skriss is there an easy way to find the diff between the current state and the desired restore, to get a list of items that need to be deleted before applying the restore, based on the conversation above?

@debianmaster we don't have any kind of built-in diff, no. It would obviously be easiest if you could get away with deleting the entire namespace before restoring it, but that may not be feasible depending on your use case.

Perhaps you could use something like https://github.com/weaveworks/kubediff to diff the contents of the backup tarball vs. the in-cluster config?

@skriss thanks for your response. In a single namespace I might have selected workloads with specific labels, so that might not be suitable for my case, as you mentioned.

I will explore the kubediff option.

cc @dinesh

We should add a flag that is set to false by default, but when enabled instructs Velero to override existing pods during restore. Thoughts?

@michmike What about other objects, besides pods?

@nrb yes, this should apply for all objects. my bad

Some complications to think through:

  • pods/other objects owned by controllers (they'll be recreated if we delete them in prep for a restore)
  • PVCs/PVs that are in use by a pod (deletes will be disallowed due to the PVC/PV in-use protection finalizer)

PVCs/PVs that are in use by a pod (deletes will be disallowed due to the PVC/PV in-use protection finalizer)

Really any finalizer will be an issue; we can look for finalizer labels and log them.

There's also cascading deletes - if we delete an object, it may cause many other objects that reference it to be deleted. Pods are an easy example, but Custom Resources make this more tricky, as we wouldn't be able to find all references to the current object unless we saw all objects in the cluster.

As I write this out, it seems like a case where 2 pass restores could help a lot. One pass where we see what needs to be restored and what might reference it, then the second pass to actually manipulate the cluster.

i like the idea of a 2 pass restore @nrb (and the second pass can be best effort, while the first pass behaves just like restores do today)
