Hello.
I created a robot account for Jenkins CI.
Since then, there have been no docker login failures (regarding #10832).
Now there's an intermittent problem with docker push.
Here's the Jenkins log.
[2020-02-24T16:25:53.746Z] + docker push harbor.company.net/project/container
[2020-02-24T16:25:53.746Z] The push refers to repository [harbor.company.net/project/container]
[2020-02-24T16:25:53.746Z] 06c4036205f0: Preparing
[2020-02-24T16:25:53.746Z] 318c7183ebf4: Preparing
[2020-02-24T16:25:53.746Z] 89a4929bfb9c: Preparing
[2020-02-24T16:25:53.746Z] a210241fb7b0: Preparing
[2020-02-24T16:25:53.746Z] 307652cfaa21: Preparing
[2020-02-24T16:25:53.746Z] 4e6f1765500c: Preparing
[2020-02-24T16:25:53.746Z] 849a812dfffd: Preparing
[2020-02-24T16:25:53.746Z] 5559ed7737ee: Preparing
[2020-02-24T16:25:53.746Z] 550aefd43d00: Preparing
[2020-02-24T16:25:53.746Z] 27e45ca143e1: Preparing
[2020-02-24T16:25:53.746Z] dd7d5adb4579: Preparing
[2020-02-24T16:25:53.746Z] 4e6f1765500c: Waiting
[2020-02-24T16:25:53.746Z] 849a812dfffd: Waiting
[2020-02-24T16:25:53.746Z] 27e45ca143e1: Waiting
[2020-02-24T16:25:53.746Z] dd7d5adb4579: Waiting
[2020-02-24T16:25:53.746Z] 5559ed7737ee: Waiting
[2020-02-24T16:25:53.746Z] 550aefd43d00: Waiting
[2020-02-24T16:25:53.746Z] 307652cfaa21: Retrying in 5 seconds
[2020-02-24T16:25:53.746Z] 06c4036205f0: Retrying in 5 seconds
[2020-02-24T16:25:54.003Z] 318c7183ebf4: Retrying in 5 seconds
[2020-02-24T16:25:54.005Z] a210241fb7b0: Retrying in 5 seconds
[2020-02-24T16:25:54.005Z] 89a4929bfb9c: Retrying in 5 seconds
[2020-02-24T16:25:54.936Z] 307652cfaa21: Retrying in 4 seconds
[2020-02-24T16:25:54.936Z] 06c4036205f0: Retrying in 4 seconds
[2020-02-24T16:25:54.936Z] 318c7183ebf4: Retrying in 4 seconds
[2020-02-24T16:25:54.936Z] a210241fb7b0: Retrying in 4 seconds
[2020-02-24T16:25:54.936Z] 89a4929bfb9c: Retrying in 4 seconds
[2020-02-24T16:25:55.867Z] 307652cfaa21: Retrying in 3 seconds
[2020-02-24T16:25:55.868Z] 06c4036205f0: Retrying in 3 seconds
[2020-02-24T16:25:55.868Z] 318c7183ebf4: Retrying in 3 seconds
[2020-02-24T16:25:55.868Z] a210241fb7b0: Retrying in 3 seconds
[2020-02-24T16:25:55.868Z] 89a4929bfb9c: Retrying in 3 seconds
[2020-02-24T16:25:56.799Z] 307652cfaa21: Retrying in 2 seconds
[2020-02-24T16:25:56.800Z] 06c4036205f0: Retrying in 2 seconds
[2020-02-24T16:25:56.800Z] 318c7183ebf4: Retrying in 2 seconds
[2020-02-24T16:25:56.800Z] a210241fb7b0: Retrying in 2 seconds
[2020-02-24T16:25:56.800Z] 89a4929bfb9c: Retrying in 2 seconds
[2020-02-24T16:25:58.172Z] 06c4036205f0: Retrying in 1 second
[2020-02-24T16:25:58.173Z] 307652cfaa21: Retrying in 1 second
[2020-02-24T16:25:58.173Z] 318c7183ebf4: Retrying in 1 second
[2020-02-24T16:25:58.173Z] a210241fb7b0: Retrying in 1 second
[2020-02-24T16:25:58.173Z] 89a4929bfb9c: Retrying in 1 second
[2020-02-24T16:25:59.106Z] 06c4036205f0: Retrying in 10 seconds
[2020-02-24T16:25:59.106Z] 307652cfaa21: Retrying in 10 seconds
[2020-02-24T16:25:59.107Z] 318c7183ebf4: Retrying in 10 seconds
[2020-02-24T16:25:59.107Z] a210241fb7b0: Retrying in 10 seconds
[2020-02-24T16:25:59.107Z] 89a4929bfb9c: Retrying in 10 seconds
[2020-02-24T16:26:00.036Z] 06c4036205f0: Retrying in 9 seconds
[2020-02-24T16:26:00.037Z] 307652cfaa21: Retrying in 9 seconds
[2020-02-24T16:26:00.037Z] 318c7183ebf4: Retrying in 9 seconds
[2020-02-24T16:26:00.037Z] a210241fb7b0: Retrying in 9 seconds
[2020-02-24T16:26:00.037Z] 89a4929bfb9c: Retrying in 9 seconds
[2020-02-24T16:26:00.968Z] 06c4036205f0: Retrying in 8 seconds
[2020-02-24T16:26:00.968Z] 307652cfaa21: Retrying in 8 seconds
[2020-02-24T16:26:00.968Z] 318c7183ebf4: Retrying in 8 seconds
[2020-02-24T16:26:00.968Z] a210241fb7b0: Retrying in 8 seconds
[2020-02-24T16:26:00.968Z] 89a4929bfb9c: Retrying in 8 seconds
[2020-02-24T16:26:01.899Z] 06c4036205f0: Retrying in 7 seconds
[2020-02-24T16:26:01.900Z] 307652cfaa21: Retrying in 7 seconds
[2020-02-24T16:26:01.900Z] 318c7183ebf4: Retrying in 7 seconds
[2020-02-24T16:26:01.900Z] a210241fb7b0: Retrying in 7 seconds
[2020-02-24T16:26:01.900Z] 89a4929bfb9c: Retrying in 7 seconds
[2020-02-24T16:26:02.832Z] 06c4036205f0: Retrying in 6 seconds
[2020-02-24T16:26:02.832Z] 307652cfaa21: Retrying in 6 seconds
[2020-02-24T16:26:02.833Z] 318c7183ebf4: Retrying in 6 seconds
[2020-02-24T16:26:02.833Z] a210241fb7b0: Retrying in 6 seconds
[2020-02-24T16:26:02.833Z] 89a4929bfb9c: Retrying in 6 seconds
[2020-02-24T16:26:03.764Z] 06c4036205f0: Retrying in 5 seconds
[2020-02-24T16:26:03.764Z] 307652cfaa21: Retrying in 5 seconds
[2020-02-24T16:26:04.021Z] 318c7183ebf4: Retrying in 5 seconds
[2020-02-24T16:26:04.022Z] a210241fb7b0: Retrying in 5 seconds
[2020-02-24T16:26:04.022Z] 89a4929bfb9c: Retrying in 5 seconds
[2020-02-24T16:26:04.952Z] 06c4036205f0: Retrying in 4 seconds
[2020-02-24T16:26:04.952Z] 307652cfaa21: Retrying in 4 seconds
[2020-02-24T16:26:04.952Z] 318c7183ebf4: Retrying in 4 seconds
[2020-02-24T16:26:04.952Z] a210241fb7b0: Retrying in 4 seconds
[2020-02-24T16:26:04.952Z] 89a4929bfb9c: Retrying in 4 seconds
[2020-02-24T16:26:05.883Z] 06c4036205f0: Retrying in 3 seconds
[2020-02-24T16:26:05.884Z] 307652cfaa21: Retrying in 3 seconds
[2020-02-24T16:26:05.884Z] 318c7183ebf4: Retrying in 3 seconds
[2020-02-24T16:26:05.884Z] a210241fb7b0: Retrying in 3 seconds
[2020-02-24T16:26:05.884Z] 89a4929bfb9c: Retrying in 3 seconds
[2020-02-24T16:26:06.816Z] 06c4036205f0: Retrying in 2 seconds
[2020-02-24T16:26:06.817Z] 307652cfaa21: Retrying in 2 seconds
[2020-02-24T16:26:06.817Z] 318c7183ebf4: Retrying in 2 seconds
[2020-02-24T16:26:06.817Z] a210241fb7b0: Retrying in 2 seconds
[2020-02-24T16:26:06.817Z] 89a4929bfb9c: Retrying in 2 seconds
[2020-02-24T16:26:08.187Z] 06c4036205f0: Retrying in 1 second
[2020-02-24T16:26:08.188Z] 307652cfaa21: Retrying in 1 second
[2020-02-24T16:26:08.188Z] 318c7183ebf4: Retrying in 1 second
[2020-02-24T16:26:08.188Z] a210241fb7b0: Retrying in 1 second
[2020-02-24T16:26:08.188Z] 89a4929bfb9c: Retrying in 1 second
[2020-02-24T16:26:09.119Z] unauthorized: authentication required
docker login succeeded before the push.
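For reference, the login step in the pipeline is roughly the sketch below; the robot name and secret variable are placeholders, not my actual values. One pitfall worth noting: robot account names contain a `$`, which the shell will expand unless the name is single-quoted.

```shell
# Hypothetical robot name and credential variable; adjust to your setup.
# Single-quote the user name so the shell does not expand "$jenkins".
echo "$ROBOT_SECRET" | docker login harbor.company.net -u 'robot$jenkins' --password-stdin
docker push harbor.company.net/project/container
```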
Here's the harbor-clair log.
{"Event":"could not download layer","Level":"warning","Location":"driver.go:130","Time":"2020-02-24 16:25:51.633354","error":"Get http://harbor-prod-harbor-core/v2/project/container/blobs/sha256:c2d332d9c1e7675bcb9dfae323b7dab73a1cd195602b3c4425c31f113991e846: dial tcp 10.96.61.194:80: connect: connection refused"}
{"Event":"failed to extract data from path","Level":"error","Location":"worker.go:122","Time":"2020-02-24 16:25:51.633479","error":"could not find layer","layer":"02332fa37e93f287f81e7e574674e5fa0a989f7356f81394b60f74f4a371e6e4","path":"http://harbor-prod-harbor-core/v2/project/container/blobs/sha256:c2d332d9c1e7675bcb9dfae323b7dab73a1cd195602b3c4425c31f113991e846"}
Here's the harbor-core log. Please ignore the timestamps; I've pasted only some of the errors here.
2020-02-25T06:55:49Z [ERROR] [/core/main.go:280]: Failed to parse SYNC_QUOTA: strconv.ParseBool: parsing "": invalid syntax
2020-02-25T06:55:50Z [ERROR] [/core/filter/security.go:244]: Failed to verify secret: failed to verify the secret: user does not exist, name: robotaccount
2020-02-25T06:56:27Z [ERROR] [/core/service/notifications/registry/handler.go:174]: registry notification: trigger scan when pushing automatically: error: 10409(conflict) : scan controller: scan : cause: error: 10409(conflict) : a previous scan process is Running
<snip>
2020/02/25 09:02:41 [D] [server.go:2774] | 127.0.0.1| 404 | 18.855152ms| match| HEAD /v2/project/container/blobs/sha256:b34f6cab83075934a1c8fa259d0543773a5f807e2822ef00c79ece4b3cfe8b8b r:/v2/*
2020-02-25T09:02:41Z [ERROR] [/core/filter/security.go:244]: Failed to verify secret: failed to verify the secret: user does not exist, name: robotaccount
<snip>
2020-02-25T09:03:17Z [ERROR] [/core/filter/security.go:244]: Failed to verify secret: failed to verify the secret: user does not exist, name: robotaccount
I think harbor-core restarted at that time. I have the "scan on push" Clair option enabled, and at that moment tens of images were being pushed to the same project, so Clair's load spiked suddenly. That may have increased core's load until core restarted or hung.
What is the problem?
Versions:
Thanks,
It seems there is some quota-related error. Could you help figure it out? @wy65701436 @heww
I turned off the "Automatically scan images on push" option yesterday, and since then there have been no errors. I will keep tracking it for a couple of days.
This may be a problem depending on your network situation. Do you have Harbor installed in a k8s environment?
@catchups Yes, I installed it on k8s.
I guessed scanning was the root cause, but it isn't.
There was a core pod restart. I can't find out why, but I think there's a performance problem here...
The scan is not the root cause. How do you deploy Harbor?
Using the helm chart.
My values.yaml is below.
expose:
  type: ingress
  tls:
    enabled: true
    secretName: "ingress-tls"
  ingress:
    hosts:
      core: harbor.company.net
externalURL: https://harbor.company.net
persistence:
  enabled: true
  persistentVolumeClaim:
    registry:
      storageClass: "nfs-client-prod"
      size: 20Gi
    chartmuseum:
      storageClass: "nfs-client-prod"
      size: 10Gi
    jobservice:
      storageClass: "nfs-client-prod"
      size: 2Gi
    database:
      storageClass: "nfs-client-prod"
      size: 2Gi
    redis:
      storageClass: "nfs-client-prod"
      size: 2Gi
logLevel: info
harborAdminPassword: "1qaz"
proxy:
  httpProxy: http://proxy:port
  httpsProxy: http://proxy:port
  noProxy: 127.0.0.1,localhost,*.company.net
  components:
    - clair
core:
  nodeSelector: {}
chartmuseum:
  enabled: true
clair:
  enabled: true
  nodeSelector: {}
notary:
  enabled: false
database:
  internal:
    password: "1qaz"
  maxIdleConns: 0
  maxOpenConns: 0
We're also facing this issue with Harbor version 1.10.0. We've had no issues earlier but suddenly we started seeing this issue on all our push requests. This happened 3 days ago when our containers restarted due to a server reboot.
Our Harbor logs say:
May 4 09:48:12 172.18.0.1 core[6055]: 2020-05-04T07:48:12Z [ERROR] [/core/filter/security.go:244]: Failed to verify secret: failed to verify the secret: user does not exist, name: robot$CICD
But that robot account definitely exists. This happens to accounts that were created before our problems started, and after. The problem also occurs when using our CLI Secrets. All containers appear to be up and running.
Notary logs the following:
May 4 10:13:47 172.18.0.1 notary-server[6055]: {"go.version":"go1.13.4","http.request.host":"FOOBAR:port","http.request.id":"FOOBAR","http.request.method":"GET","http.request.remoteaddr":"FOOBAR:PORT","http.request.uri":"FOOBAR.json","http.request.useragent":"Go-http-client/1.1","level":"info","msg":"metadata not found: You have requested metadata that does not exist.: No record found","time":"2020-05-04T08:13:47Z"}
We've tried to restart all containers again, but it doesn't solve the issue.
We've tried to turn on and off the image scanning, it doesn't solve it either.
Met the same issue: unable to push images.
K8s installation.
Harbor version 1.10.2
2020-07-23T01:37:41Z [ERROR] [/core/filter/security.go:244]: Failed to verify secret: failed to verify the secret: user does not exist, name: robot$ci-cd
Hi @Hokwang, @Mierpo, @liuyuewen91, have you re-deployed/upgraded the helm chart? If the private key is replaced, tokens issued by the previous deployment cannot be decoded.
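One way to check for this, assuming a helm deployment (the namespace, release, and secret names below are illustrative, not from this thread), is to see whether the core secret holding the signing key was regenerated more recently than your robot accounts were created:

```shell
# Illustrative names: adjust the namespace and release name to your deployment.
# If the secret's creationTimestamp is newer than your robot accounts,
# previously issued tokens would no longer verify.
kubectl -n harbor get secret my-release-harbor-core \
  -o jsonpath='{.metadata.creationTimestamp}'
# After confirming the key is consistent, restart core so it picks it up:
kubectl -n harbor rollout restart deployment my-release-harbor-core
```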
We are having these issues as well, in addition to seeing this error when attempting a docker pull.
core 2020-08-11T17:39:29Z [ERROR] [/server/middleware/security/oidc_cli.go:54][requestID="80181cde-987d-4f65-bbc5-76b982d60896"]: failed to verify secret: failed to verify the secret: user does not exist, name: robot$temp.
When trying to test locally, I cannot log in from the CLI as the robot user and get
Error response from daemon: Get https://<registry-name>/v2/: unauthorized: authentication required
If the private key is replaced, the token issued by the previous version cannot be decoded.
What private key? @wy65701436
I am running into the same issue, and I have deployed/upgraded Harbor using the Harbor helm chart.
We are having these issues as well, in addition to seeing this error when attempting a docker pull.
core 2020-08-11T17:39:29Z [ERROR] [/server/middleware/security/oidc_cli.go:54][requestID="80181cde-987d-4f65-bbc5-76b982d60896"]: failed to verify secret: failed to verify the secret: user does not exist, name: robot$temp.
When trying to test locally I cannot login from the CLI as the robot user and get
Error response from daemon: Get https://<registry-name>/v2/: unauthorized: authentication required
We found that the time on our worker nodes was out of sync. Once it was synced, our login issue was resolved, even though we still see the error failed to verify secret: failed to verify the secret: user does not exist, name: <robot user>.
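For anyone else checking this: the robot credential is a signed, time-bounded token, so clock skew between the nodes and core can make a freshly issued secret fail verification. A quick way to spot skew across nodes (chrony is shown as one assumed option; the tooling varies by distro):

```shell
# Print each node's idea of "now" as epoch seconds and compare across nodes;
# any difference beyond a few seconds is suspect.
date -u +%s
# If chrony is in use (an assumption; ntpd/systemd-timesyncd are alternatives):
chronyc tracking 2>/dev/null | grep -i 'system time' || true
```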
Has anyone solved the "user does not exist" issue for robot user accounts? It is blocking me as well. I create a new robot account, try to docker login, and it just fails (the logs show the same thing):
failed to verify secret: user does not exist, name: robot@foobar
We are on version 2.0.1, I believe.
I have the same problem; I installed it in k8s.
Push output:
The push refers to repository [core.harbor.staging.xfreeapp.com/test/ng]
2367050c34dd: Retrying in 6 seconds
2c8583333eb3: Retrying in 6 seconds
e2a648dc6400: Retrying in 6 seconds
93e19e6dd56b: Retrying in 6 seconds
ace0eda3e3be: Retrying in 6 seconds
The registry log shows:
kubectl logs harbor-2-harbor-registry-7df5f74984-z47rw -c registry
time="2020-12-15T05:08:26.839799948Z" level=error msg="response completed with error" auth.user.name="harbor_registry_user" err.code=unknown err.detail="filesystem: mkdir /storage/docker: permission denied" err.message="unknown error" go.version=go1.14.7 http.request.host=core.harbor.staging.xfreeapp.com http.request.id=3b99b0ae-a3f4-4f7f-89d4-fa0e7514ec22 http.request.method=POST http.request.remoteaddr=192.168.9.211 http.request.uri="/v2/test/ng/blobs/uploads/" http.request.useragent="docker/19.03.13 go/go1.13.15 git-commit/4484c46d9d kernel/3.10.0-1062.el7.x86_64 os/linux arch/amd64 UpstreamClient(Docker-Client/19.03.13 \(linux\))" http.response.contenttype="application/json; charset=utf-8" http.response.duration=178.896895ms http.response.status=500 http.response.written=155 vars.name="test/ng"
Because I'm using a local filesystem store, it looks like the registry doesn't have permission to write to it.
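For the filesystem-backend case above: the registry process in the official Harbor images runs as a non-root user (UID 10000 in the images I've seen; verify against your image), so the storage root backing the volume must be writable by that UID or mkdir /storage/docker fails exactly as in the log. A sketch of the fix on the host backing the PV (the path is a placeholder):

```shell
# Run on the NFS server / host backing the registry PV; path is illustrative.
STORAGE_ROOT=/exports/harbor-registry
# Give the registry's UID ownership so it can create /storage/docker:
sudo chown -R 10000:10000 "$STORAGE_ROOT"
sudo chmod -R u+rwX "$STORAGE_ROOT"
```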