8.21.0
Dec 04 14:34:43 dev postgres[13944]: 2017-12-04 14:34:43.670 PST [31247] ERROR: duplicate key value violates unique constraint "sentry_eventuser_project_id_1a96************_uniq"
Dec 04 14:34:43 dev postgres[13944]: 2017-12-04 14:34:43.670 PST [31247] DETAIL: Key (project_id, hash)=(6, 902e****************************) already exists.
Dec 04 14:34:43 dev postgres[13944]: 2017-12-04 14:34:43.670 PST [31247] STATEMENT: INSERT INTO "sentry_eventuser" ("project_id", "hash", "ident", "email", "username", "name", "ip_address", "date_added") VALUES (6, '902e****************************', NULL, NULL, NULL, NULL, '158.106.203.154', '2017-12-04 22:34:43.668629+00:00') RETURNING "sentry_eventuser"."id"
Dec 04 14:34:43 dev postgres[13944]: 2017-12-04 14:34:43.707 PST [31247] ERROR: duplicate key value violates unique constraint "sentry_environmentproject_project_id_2925************_uniq"
Dec 04 14:34:43 dev postgres[13944]: 2017-12-04 14:34:43.707 PST [31247] DETAIL: Key (project_id, environment_id)=(6, 3) already exists.
Dec 04 14:34:43 dev postgres[13944]: 2017-12-04 14:34:43.707 PST [31247] STATEMENT: INSERT INTO "sentry_environmentproject" ("project_id", "environment_id") VALUES (6, 3) RETURNING "sentry_environmentproject"."id"
Dec 04 14:34:43 dev postgres[13944]: 2017-12-04 14:34:43.721 PST [31247] ERROR: duplicate key value violates unique constraint "sentry_grouprelease_group_id_46ba************_uniq"
Dec 04 14:34:43 dev postgres[13944]: 2017-12-04 14:34:43.721 PST [31247] DETAIL: Key (group_id, release_id, environment)=(26, 13, ) already exists.
Dec 04 14:34:43 dev postgres[13944]: 2017-12-04 14:34:43.721 PST [31247] STATEMENT: INSERT INTO "sentry_grouprelease" ("project_id", "group_id", "release_id", "environment", "first_seen", "last_seen") VALUES (6, 26, 13, '', '2017-12-04 22:34:43+00:00', '2017-12-04 22:34:43+00:00') RETURNING "sentry_grouprelease"."id"
Dec 04 14:34:43 dev postgres[13944]: 2017-12-04 14:34:43.875 PST [31249] ERROR: duplicate key value violates unique constraint "sentry_organizationonboar_organization_id_47e9************_uniq"
Dec 04 14:34:43 dev postgres[13944]: 2017-12-04 14:34:43.875 PST [31249] DETAIL: Key (organization_id, task)=(3, 6) already exists.
Dec 04 14:34:43 dev postgres[13944]: 2017-12-04 14:34:43.875 PST [31249] STATEMENT: INSERT INTO "sentry_organizationonboardingtask" ("organization_id", "user_id", "task", "status", "date_completed", "project_id", "data") VALUES (3, NULL, 6, 1, '2017-12-04 22:34:43.872772+00:00', 6, '{}') RETURNING "sentry_organizationonboardingtask"."id"
I've been running Sentry for over a year on my home server, migrating the database whenever updates come out - in fact, you can see my comments about minor issues in the past on the arch user package's page. However, sometimes, when some client _somewhere_ sends events to my Sentry server, I get the above error in my logs.
If necessary, I can set up a little proxy trap to capture requests to get more insight into exactly which request is triggering this.
Let me know if anything else could be helpful: anything specific in the configuration? Database schema dump/etc.? Let me know :smile:
This isn't really a bug; we handle this error gracefully within the application. It's easier to just attempt the INSERT every time instead of doing a SELECT + INSERT to test whether the row exists first, and we simply ignore the error.
So this is working as intended and there's nothing wrong here. You can mitigate some of this by configuring caching, and we'll cache things for a bit so we don't retry all the time.
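The insert-and-ignore pattern described above can be sketched roughly like this (a minimal illustration using Python's built-in sqlite3 in place of Postgres; the table and helper names are hypothetical, not Sentry's actual code):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE environmentproject ("
    "  id INTEGER PRIMARY KEY,"
    "  project_id INTEGER, environment_id INTEGER,"
    "  UNIQUE (project_id, environment_id))"
)

def ensure_row(project_id, environment_id):
    # Attempt the INSERT unconditionally; if the unique constraint
    # fires, the row already exists and the error is swallowed.
    # On Postgres this failed attempt is what shows up in the server log.
    try:
        conn.execute(
            "INSERT INTO environmentproject (project_id, environment_id)"
            " VALUES (?, ?)",
            (project_id, environment_id),
        )
    except sqlite3.IntegrityError:
        pass  # row already present; working as intended

ensure_row(3, 1)
ensure_row(3, 1)  # duplicate: constraint violation is ignored
count = conn.execute("SELECT COUNT(*) FROM environmentproject").fetchone()[0]
```

This avoids a round-trip SELECT on the common path and is race-free, at the cost of one logged error per duplicate attempt.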
Hi, @mattrobenolt
I think this implementation causes too many unnecessary updates on sentry_environmentproject_id_seq. Is that what you expected?
Yeah, I think it will bump the sequence. I'd have to check to be sure, but is this an issue? It shouldn't be.
I think the bumping itself is not an issue (a bigint probably has enough space). But updating the sequence is a write operation, and high-frequency writes aren't good, right?
By the way, I don't know what happens if the sequence overflows.
If you overflow a 64 bit integer, lemme know. :) If you're concerned, configure a cache so it stops slamming this as often.
If you overflow a 64 bit integer, lemme know.
:laughing:
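For scale, "bigint has enough space" can be put in numbers with a quick back-of-the-envelope calculation (the bump rate below is an illustrative assumption, well above the roughly two-per-second rate reported in this thread):

```python
# A Postgres bigserial sequence tops out at 2**63 - 1.
max_seq = 2**63 - 1
rate_per_second = 1000  # assume an aggressive 1000 sequence bumps/second
seconds = max_seq / rate_per_second
years = seconds / (365 * 24 * 3600)
# Even at that rate, exhausting the sequence takes on the order of
# hundreds of millions of years.
```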
I am using Redis for the cache, but the duplicate key value violates log message still appears almost twice per second.
How do I configure the cache correctly?
(Yes, I'm receiving too many events; that's another problem.)
My sentry.conf.py:
```python
# A primary cache is required for things such as processing events
SENTRY_CACHE = 'sentry.cache.redis.RedisCache'
```
Yeah, it's a bit confusing, but you also need to configure CACHES. I'm not sure off the top of my head where the docs are for that. It's this: https://github.com/getsentry/sentry/blob/master/src/sentry/data/config/sentry.conf.py.default#L38-L55
Oh! :open_mouth:
Sentry currently utilizes two separate mechanisms. While CACHES is not a
requirement, it will optimize several high throughput patterns.
I've installed django-redis and configured CACHES:

```python
CACHES = {
    'default': {
        'BACKEND': 'django_redis.cache.RedisCache',
        'LOCATION': 'redis://127.0.0.1:6379/1',
        'OPTIONS': {
            'CLIENT_CLASS': 'django_redis.client.DefaultClient'
        },
        'KEY_PREFIX': 'cache'
    }
}
```
Then, redis-cli monitor shows cache activity, but the duplicate key value violates log messages still appear at a high rate.
2018-06-08 18:31:10 JST user=sentry, db=sentry, remote=127.0.0.1(37532), pid=13685, xid=803490604 ERROR: duplicate key value violates unique constraint "sentry_environmentproject_project_id_29250c1307d3722b_uniq"
2018-06-08 18:31:10 JST user=sentry, db=sentry, remote=127.0.0.1(37532), pid=13685, xid=803490604 DETAIL: Key (project_id, environment_id)=(3, 1) already exists.
2018-06-08 18:31:10 JST user=sentry, db=sentry, remote=127.0.0.1(37532), pid=13685, xid=803490604 STATEMENT: INSERT INTO "sentry_environmentproject" ("project_id", "environment_id") VALUES (3, 1) RETURNING "sentry_environmentproject"."id"
:confused:
Thanks,
They're not going to entirely go away.
But either way, not sure what to tell you, this isn't going to cause you any issues besides some log noise. So it's not anything to be concerned with.
Thanks, I don't care about that anymore.
It would seem that you could instead use INSERT ... ON CONFLICT DO UPDATE (upsert). Is there any reason why this isn't implemented?
Because we support versions of Postgres that don't have that feature. Specifically the version we run in production ourselves cannot do any ON CONFLICT.
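For reference, on Postgres 9.5+ the same effect can be had server-side with ON CONFLICT DO NOTHING, which turns a duplicate INSERT into a silent no-op so nothing reaches the error log. A minimal sketch, using Python's sqlite3 as a stand-in (SQLite 3.24+ mirrors the Postgres UPSERT syntax); the table name is illustrative:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE environmentproject ("
    "  project_id INTEGER, environment_id INTEGER,"
    "  UNIQUE (project_id, environment_id))"
)

# ON CONFLICT DO NOTHING: the duplicate insert is skipped without
# raising any error, so nothing gets written to the server log.
sql = (
    "INSERT INTO environmentproject (project_id, environment_id)"
    " VALUES (?, ?)"
    " ON CONFLICT (project_id, environment_id) DO NOTHING"
)
conn.execute(sql, (3, 1))
conn.execute(sql, (3, 1))  # duplicate: silently skipped, no exception
count = conn.execute("SELECT COUNT(*) FROM environmentproject").fetchone()[0]
```

The trade-off, as noted above, is that this syntax requires Postgres 9.5 or newer.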
They're not going to entirely go away.
But either way, not sure what to tell you, this isn't going to cause you any issues besides some log noise. So it's not anything to be concerned with.
Hi, would those logs take up disk space?
All logging uses disk space, yeah.
So, will this be fixed in future versions?
Fix what? Please read the comments. This is working as intended.
With a busy Sentry instance your Postgres logs fill up with errors (at least 3 lines for each event), so this can become a problem of log files eating disk space.
One (maybe not elegant) solution is to alter the Postgres role so it doesn't log those messages: alter user sentry set log_min_messages to 'log';
I agree with @michallipka; my Postgres log had 294721 lines yesterday, full of these errors.
Actually, 99% of my postgresql log is now filled with these "Task failed successfully" messages.
This approach is.... kind of lame.
PRs accepted to add support for ON CONFLICT DO NOTHING in the ORM.
Hi, @mattrobenolt
Could you tell me which PR you meant? I couldn't find it.
There isn't one. I was suggesting anyone else could open one to fix this issue if they feel passionately about it.