Radarr: "Database is locked" error messages after import

Created on 18 Mar 2017  ·  40 Comments  ·  Source: Radarr/Radarr

Description:

After a movie is downloaded, Radarr successfully imports it, renames it, and moves it to the final location. It notifies the Plex server and Kodi successfully. However, every once in a while, it gives the error message "couldn't import movie, database is locked". The activity tab then shows the movie download as still active with 0 seconds left.

Restarting radarr does nothing.

If I go to the movie details page and click on "Update movie info and scan disk", it finds the local file and marks it as downloaded; the history tab shows it as downloaded as well. However, the activity tab still shows the download progress as active.

If I delete the movie entry in radarr (but not the files) and re-add, the activity tab still shows the download as active even though sabnzbd shows it as successfully downloaded.

The only way to get it off of the activity tab is to hit "Remove from download client", which is something I'd rather not do because it deletes the history from sabnzbd.

It seems like Radarr is unable to record in its database that the SABnzbd download has finished.

Radarr Version:

0.2.0.535
Mono version 4.2.1
linuxserver docker with ubuntu xenial base

Logs:
Here's the log with the database locked message
http://pastebin.com/y2GEfdJ2

And when I remove the movie entry from radarr I keep getting the following error message on repeat until I re-add the movie:
```
17-3-18 16:49:29.9|Error|DownloadMonitoringService|Couldn't process tracked download Rogue.One.2016.1080p.BluRay.x264-SPARKS English

[v0.2.0.535] NzbDrone.Core.Datastore.ModelNotFoundException: Movie with ID 21 does not exist
at NzbDrone.Core.Datastore.BasicRepository`1[TModel].Get (Int32 id) [0x0009d] in C:\projects\radarr-usby1\src\NzbDrone.Core\Datastore\BasicRepository.cs:77
at NzbDrone.Core.Tv.MovieService.GetMovie (Int32 movieId) [0x00000] in C:\projects\radarr-usby1\src\NzbDrone.Core\Tv\MovieService.cs:135
at NzbDrone.Core.Download.CompletedDownloadService.Process (NzbDrone.Core.Download.TrackedDownloads.TrackedDownload trackedDownload, Boolean ignoreWarnings) [0x00100] in C:\projects\radarr-usby1\src\NzbDrone.Core\Download\CompletedDownloadService.cs:96
at NzbDrone.Core.Download.TrackedDownloads.DownloadMonitoringService.ProcessClientItems (IDownloadClient downloadClient, NzbDrone.Core.Download.DownloadClientItem downloadItem) [0x00042] in C:\projects\radarr-usby1\src\NzbDrone.Core\Download\TrackedDownloads\DownloadMonitoringService.cs:128
```

Labels: bug, cannot reproduce

All 40 comments

Got the same random error with a locked DB on CentOS 7.3 (1611), and I have to update the library to find the imported file (the downloaded NZB gets deleted from NZBGet). So it's just the import into the DB, and then removing the activity entry, that's still needed.

Version 0.2.0.535
Mono Version 4.8.0 (Stable 4.8.0.495/e4a3cf3 Wed Feb 22 18:07:20 UTC 2017)

Kinda irritating, but it's still the development branch, so some issues are expected..

I'm wondering if it's a race condition where two processes try to write to the database at the same time at the end of the rename: one locks it while writing and the other errors out.

Still having this issue about 20% of the time. Any acknowledgement or suggestions from the team?

From this morning:
```
17-5-3 09:16:48.8|Warn|ImportApprovedMovie|Couldn't import movie /mnt/cache/.apps/sabnzbd/complete/Movies/Mine.2016.720p.Bluray.x264-KYR/mine.2016.720p.bluray.x264-kyr.mkv

[v0.2.0.654] System.Data.SQLite.SQLiteException (0x80004005): database is locked
database is locked
at System.Data.SQLite.SQLite3.Step (System.Data.SQLite.SQLiteStatement stmt) [0x00088] in <61a20cde294d4a3eb43b9d9f6284613b>:0
at System.Data.SQLite.SQLiteDataReader.NextResult () [0x0016b] in <61a20cde294d4a3eb43b9d9f6284613b>:0
at System.Data.SQLite.SQLiteDataReader..ctor (System.Data.SQLite.SQLiteCommand cmd, System.Data.CommandBehavior behave) [0x00090] in <61a20cde294d4a3eb43b9d9f6284613b>:0
at (wrapper remoting-invoke-with-check) System.Data.SQLite.SQLiteDataReader:.ctor (System.Data.SQLite.SQLiteCommand,System.Data.CommandBehavior)
at System.Data.SQLite.SQLiteCommand.ExecuteReader (System.Data.CommandBehavior behavior) [0x0000c] in <61a20cde294d4a3eb43b9d9f6284613b>:0
at System.Data.SQLite.SQLiteCommand.ExecuteScalar (System.Data.CommandBehavior behavior) [0x00006] in <61a20cde294d4a3eb43b9d9f6284613b>:0
at System.Data.SQLite.SQLiteCommand.ExecuteScalar () [0x00006] in <61a20cde294d4a3eb43b9d9f6284613b>:0
at Marr.Data.QGen.InsertQueryBuilder`1[T].Execute () [0x00046] in C:\projects\radarr-usby1\src\Marr.Data\QGen\InsertQueryBuilder.cs:140
at Marr.Data.DataMapper.Insert[T] (T entity) [0x0005d] in C:\projects\radarr-usby1\src\Marr.Data\DataMapper.cs:728
at NzbDrone.Core.Datastore.BasicRepository`1[TModel].Insert (TModel model) [0x0002d] in C:\projects\radarr-usby1\src\NzbDrone.Core\Datastore\BasicRepository.cs:111
at NzbDrone.Core.MediaFiles.MediaFileService.Add (NzbDrone.Core.MediaFiles.MovieFile episodeFile) [0x00000] in C:\projects\radarr-usby1\src\NzbDrone.Core\MediaFiles\MediaFileService.cs:148
at NzbDrone.Core.MediaFiles.EpisodeImport.ImportApprovedMovie.Import (System.Collections.Generic.List`1[T] decisions, System.Boolean newDownload, NzbDrone.Core.Download.DownloadClientItem downloadClientItem, NzbDrone.Core.MediaFiles.EpisodeImport.ImportMode importMode) [0x002a2] in C:\projects\radarr-usby1\src\NzbDrone.Core\MediaFiles\EpisodeImport\ImportApprovedMovie.cs:115

17-5-3 09:17:41.2|Error|TaskExtensions|Task Error

[v0.2.0.654] System.Data.SQLite.SQLiteException (0x80004005): database is locked
database is locked
at System.Data.SQLite.SQLite3.Step (System.Data.SQLite.SQLiteStatement stmt) [0x00088] in <61a20cde294d4a3eb43b9d9f6284613b>:0
at System.Data.SQLite.SQLiteDataReader.NextResult () [0x0016b] in <61a20cde294d4a3eb43b9d9f6284613b>:0
at System.Data.SQLite.SQLiteDataReader..ctor (System.Data.SQLite.SQLiteCommand cmd, System.Data.CommandBehavior behave) [0x00090] in <61a20cde294d4a3eb43b9d9f6284613b>:0
at (wrapper remoting-invoke-with-check) System.Data.SQLite.SQLiteDataReader:.ctor (System.Data.SQLite.SQLiteCommand,System.Data.CommandBehavior)
at System.Data.SQLite.SQLiteCommand.ExecuteReader (System.Data.CommandBehavior behavior) [0x0000c] in <61a20cde294d4a3eb43b9d9f6284613b>:0
at System.Data.SQLite.SQLiteCommand.ExecuteScalar (System.Data.CommandBehavior behavior) [0x00006] in <61a20cde294d4a3eb43b9d9f6284613b>:0
at System.Data.SQLite.SQLiteCommand.ExecuteScalar () [0x00006] in <61a20cde294d4a3eb43b9d9f6284613b>:0
at Marr.Data.QGen.InsertQueryBuilder`1[T].Execute () [0x00046] in C:\projects\radarr-usby1\src\Marr.Data\QGen\InsertQueryBuilder.cs:140
at Marr.Data.DataMapper.Insert[T] (T entity) [0x0005d] in C:\projects\radarr-usby1\src\Marr.Data\DataMapper.cs:728
at NzbDrone.Core.Datastore.BasicRepository`1[TModel].Insert (TModel model) [0x0002d] in C:\projects\radarr-usby1\src\NzbDrone.Core\Datastore\BasicRepository.cs:111
at NzbDrone.Core.Messaging.Commands.CommandQueueManager.Push[TCommand] (TCommand command, NzbDrone.Core.Messaging.Commands.CommandPriority priority, NzbDrone.Core.Messaging.Commands.CommandTrigger trigger) [0x0013d] in C:\projects\radarr-usby1\src\NzbDrone.Core\Messaging\Commands\CommandQueueManager.cs:82
at (wrapper dynamic-method) System.Object:CallSite.Target (System.Runtime.CompilerServices.Closure,System.Runtime.CompilerServices.CallSite,NzbDrone.Core.Messaging.Commands.CommandQueueManager,object,NzbDrone.Core.Messaging.Commands.CommandPriority,NzbDrone.Core.Messaging.Commands.CommandTrigger)
at System.Dynamic.UpdateDelegates.UpdateAndExecute4[T0,T1,T2,T3,TRet] (System.Runtime.CompilerServices.CallSite site, T0 arg0, T1 arg1, T2 arg2, T3 arg3) [0x0003e] in <2392cff65f724abaaed9de072f62bc4a>:0
at (wrapper delegate-invoke) System.Func`6[System.Runtime.CompilerServices.CallSite,NzbDrone.Core.Messaging.Commands.CommandQueueManager,System.Object,NzbDrone.Core.Messaging.Commands.CommandPriority,NzbDrone.Core.Messaging.Commands.CommandTrigger,System.Object]:invoke_TResult_T1_T2_T3_T4_T5 (System.Runtime.CompilerServices.CallSite,NzbDrone.Core.Messaging.Commands.CommandQueueManager,object,NzbDrone.Core.Messaging.Commands.CommandPriority,NzbDrone.Core.Messaging.Commands.CommandTrigger)
at (wrapper dynamic-method) System.Object:CallSite.Target (System.Runtime.CompilerServices.Closure,System.Runtime.CompilerServices.CallSite,NzbDrone.Core.Messaging.Commands.CommandQueueManager,object,NzbDrone.Core.Messaging.Commands.CommandPriority,NzbDrone.Core.Messaging.Commands.CommandTrigger)
at NzbDrone.Core.Messaging.Commands.CommandQueueManager.Push (System.String commandName, System.Nullable`1[T] lastExecutionTime, NzbDrone.Core.Messaging.Commands.CommandPriority priority, NzbDrone.Core.Messaging.Commands.CommandTrigger trigger) [0x000b7] in C:\projects\radarr-usby1\src\NzbDrone.Core\Messaging\Commands\CommandQueueManager.cs:95
at NzbDrone.Core.Jobs.Scheduler.ExecuteCommands () [0x00043] in C:\projects\radarr-usby1\src\NzbDrone.Core\Jobs\Scheduler.cs:42
at System.Threading.Tasks.Task.InnerInvoke () [0x00012] in :0
at System.Threading.Tasks.Task.Execute () [0x00016] in :0
```

No idea what could be causing this issue other than another program having the database or its directory open, or something like that. Can you make sure no other programs are using the DB?

Do you mean the radarr database? Nothing else, really. It runs in docker so radarr is the only process running in that container. The only thing that is integrated with radarr is the android nzb360 app, which I believe uses the api so no direct access to the database.

Do you think it could be two separate Radarr processes trying to access the database? I am using the renamer, which successfully renames and moves the files just prior to the error message above, and the process that fails is the one that marks the download as completed. Perhaps sometimes there is a slight overlap? Is there any way to introduce a slight delay before that last step of marking the movie completed in the database?

@galli-leo I just had a new revelation.

When the database locked error occurs, I get the same error in sonarr as well, which is running in a separate docker container on the same machine.

Here's a log from radarr having the issue: https://pastebin.com/ZPX3scjM
Please note that the error messages happen between 12pm and 12:05pm today

Here's a log from sonarr: https://pastebin.com/ScihjXB4
Note that the issue occurs at 12:03pm today

My server is Unraid (a Slackware-based NAS) running various Docker containers including Radarr and Sonarr (both linuxserver versions). Both containers are based on the latest images with Mono 5.

Not sure what it means, I hope it gives you some clue. Sonarr and radarr are both running in their separate containers and do not share a database or anything. But with docker, there is some sharing of the host machine's resources.

@bjornstromberg are you perhaps running sonarr as well on the same machine?

PS. I went back and checked and every time radarr has the database locked error, sonarr had it at the same time, too.

Seems like an issue with Docker. Maybe it locks the volumes? Idk.

Well, I concur with galli... I'd look at the Docker & Unraid environment first. (I'll happily chalk it up to 'external cause'. And I'd hardly call it the Radarr devs being 'baffled', just saying.)

Some observations though:

17-5-31 12:04:31.0|Info|RssSyncService|Starting RSS Sync
17-5-31 12:03:37.9|Error|TaskExtensions|Task Error

Time going backward? That's not good.

I'm also wondering why you haven't switched to Trace level logs, correlated with OS log files and that kind of stuff.
That Sonarr log snippet is way too short, missing a ton of context.
The 12pm correlation is interesting, but it's just one data point. Does it happen each day? How often? Data, data, data...

Finally, I hope you're not mounting those docker volumes on any kind of network filesystem. Coz that's a recipe for disaster with sqlite.

Btw, SQLite does retry a couple of times (30 sec IIRC), so if the db locked message gets logged _with_ a stack trace, then it's not just a one-off incident.

:-) Baffled may be a bit over-dramatic. No offense meant.

I'll switch both to trace level logs prior to adding a new movie on radarr and keep repeating until it happens again. Thanks for the suggestion.

The Sonarr log snippet is all it contained. Above and below that are the regular RSS syncs that fire off every half hour. To my knowledge, nothing was being processed in Sonarr at that time (and there were no active downloads).

The issue is intermittent. Most Radarr downloads and processing complete successfully. Every once in a while, I get the error. When that happens, the files are already renamed and moved over to the final location, but no notifications are sent to Plex or Emby, and Radarr still shows the SABnzbd download under the activity tab as active. The only solution is to have Radarr remove the download from the client, and then refresh the library so it picks up the downloaded and renamed files.

In terms of frequency, it really depends on the download activity. According to the logs, it happened twice in the last week (out of a total of 4 downloads, but usually it's less frequent than that). Both times, Sonarr also displayed database locked errors at about the same time. The Sonarr logs do not show any other errors apart from the two associated with the Radarr processing.

The mount points are not on a network, just regular native filesystem mounts.

Can you recheck the old Sonarr log and verify that the timestamp indeed went backward? Coz that's highly suspicious.

Here's a longer one. I pulled it straight from the "Files" view Sonarr.Txt: https://pastebin.com/rwub2ZVY

Keep an eye on it.

Thanks, will post a trace level log when I have it.

@aptalca yes, I've also got Sonarr running concurrently with Radarr, no Docker stuff though;
it's a plain Linux box with ZFS volumes.

So my setup is that they both have different home directories they work inside, and they do not share databases.
I'm still running Mono 4.8 so other people get to iron out the new issues that come with a major version bump..

It's just a bump in the road, as a click on refresh from disk solves the problem when it occurs.

After thorough testing, I noticed that file transfers (copy/write on the same disk) over 10GB tend to lock up my server temporarily (until the transfer is completed), and during that time Sonarr and Radarr are unable to write to disk. I am using a btrfs pool of 4 SSD drives. What is strange is that the write error occurs even after the file transfer is completed (for a brief time); perhaps it is cached in RAM but still being written to disk while presented to the OS as completed. Perhaps Radarr could introduce a delay before post-processing, since @bjornstromberg is also having this issue with a different filesystem (ZFS)? I believe CouchPotato had a delay for that.

In any case, I'm closing this issue. I will continue troubleshooting the btrfs pool and file transfers.

Thanks

Final update.

I can confirm that this problem was due to btrfs. For some reason, with btrfs, disk I/O goes through the roof during unrar, repair, or copy actions, and with high disk I/O both Sonarr and Radarr have issues accessing their SQLite databases.

A btrfs RAID 0 config (2 or more SSD drives) causes really high disk I/O and blocks disk access for an extended period of time, leading to the errors listed above (plus others like crashing VMs, etc.).

A single btrfs drive still has high disk I/O, but not for long enough to cause serious issues, just "database locked" messages in the logs.

After formatting the drive to XFS (same SSD drive), all these issues completely went away. No more errors logged.

@aptalca Are your Sqlite databases being served off of your NAS? If so, are you using NFS to mount the volumes they are on?

I'm not understanding how the filesystem matters if it's being served over NFS... ? Thanks!

@cjbottaro
Oh, no. Don't put SQLite on NFS or remote mounts; it doesn't like that at all. The SQLite DBs are all on a local filesystem (bind mounts in Docker).
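
To make that layout concrete, here's a minimal docker-compose sketch under the same assumption; the image tag, host paths, and mappings are illustrative, the point is simply that /config stays on a local disk and only the media share may live on the NAS:

```yaml
version: "3"
services:
  radarr:
    image: linuxserver/radarr        # any Radarr image works the same way here
    ports:
      - "7878:7878"                  # Radarr's default web UI port
    volumes:
      # SQLite database and config: keep these on a LOCAL filesystem (bind mount),
      # never on an NFS/CIFS share.
      - /opt/appdata/radarr:/config
      # Media files are fine on the NAS; Radarr only moves/renames files here.
      - /mnt/nas/movies:/movies
    restart: unless-stopped
```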

Yeah. That's unfortunate. It makes running this stuff in a Kubernetes cluster difficult.

@cjbottaro I looked into k8s and was very confused that there was no out-of-the-box solution for syncing volumes. Most of the guides I find online either use k8s with a hosted MySQL like Google Cloud SQL, or they keep the database on a single node with bind-mounted volumes on the host. Some just use stateless node or nginx containers and don't even bother with volumes; some use NFS for regular (non-DB) data.

The only k8s storage driver that advertises sql compatibility is a paid enterprise one: https://portworx.com/basic-guide-kubernetes-storage/

Yeah I think it's just a tiny use case. Kubernetes' list of natively supported persistent volume types includes NFS: https://kubernetes.io/docs/concepts/storage/persistent-volumes/#types-of-persistent-volumes but I guess it's "not their problem" if an application just won't run properly using it.

Unfortunately, NFS is really the only option for home clusters. I certainly can't mount an EBS volume on my home machines... :/

I worked around the problem by putting all my SQLite databases on normal bind mounts, then having a Kubernetes cronjob that rsyncs them to my NAS every hour.

I also have an init container to sync the other way (from NAS to the bind mount location) if the bind mount location is empty, for example when bringing up a new cluster.

It works fine, but it is a lot of extra work. It would be so nice if these applications could be configured to use PostgreSQL or MySQL, but the apps are meant to run standalone, plus the target audience has no idea about this stuff... :/
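
For anyone curious what that workaround could look like in practice, here is a minimal sketch of the hourly backup CronJob; this is not the poster's actual manifest, and the image, node name, and NAS paths are placeholders. The matching init container would just run the same rsync in the opposite direction when /config is empty.

```yaml
# Hourly copy of the locally bind-mounted Radarr config (SQLite DB included) to the NAS.
apiVersion: batch/v1
kind: CronJob
metadata:
  name: radarr-config-backup
spec:
  schedule: "0 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          nodeName: node1                       # pin to the node holding the bind mount
          restartPolicy: OnFailure
          containers:
            - name: rsync
              image: instrumentisto/rsync-ssh   # any image that ships rsync will do
              command: ["rsync", "-a", "--delete", "/config/", "/backup/radarr/"]
              volumeMounts:
                - { name: config, mountPath: /config, readOnly: true }
                - { name: backup, mountPath: /backup }
          volumes:
            - name: config
              hostPath:
                path: /srv/radarr/config        # local directory Radarr actually runs from
            - name: backup
              nfs:
                server: nas.local               # placeholder NAS address
                path: /volume1/backups          # placeholder NFS export
```

Note that rsync can catch a live SQLite file mid-write, so a copy taken with sqlite3's `.backup` command (or with Radarr briefly stopped) is safer if you intend to restore from these.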

Off topic, but it would be really cool to make a home media platform based on Docker and/or Kubernetes where the end user would just have to have Docker/Kubernetes installed and from there, they can install any component they want with simple one line commands:

helm install CoolProjectName/radarr
helm install CoolProjectName/sonarr
helm install CoolProjectName/plex
helm install CoolProjectName/emby

@cjbottaro Have you heard of Unraid? It's a NAS platform that has basically that for the Docker side of things (no k8s). It's more of a GUI-driven setup, but Docker-based for the 'apps'.

I seem to be hitting this issue quite a bit on my home Docker swarm. Everything runs off of NFS and I consistently get this error when adding new media:

[v0.2.0.1459] System.Data.SQLite.SQLiteException (0x80004005): database is locked database is locked.

Sadly it sounds like there doesn't seem to be a reasonable workaround just yet.

I'm having this issue and I've spent more than 4 hours trying to fix it. It drives me nuts! At the moment I'm mounting my NFS share using NFSv4. Still no luck with NFS remote mounts.

You must not put the database on a network drive, or else it'll get corrupted.

Haha, I am the most recent person to join the club! I don't know how else to configure this for a 3-node k8s cluster, though... apart from NFS.

The simple answer is you can't. Radarr isn't designed to be clustered, and it's not clear what the point of that would be. Don't overcomplicate it :)

You answered the question adequately enough when you said it’s not
possible. There’s plenty of reasons why someone might want redundancy,
leave the snark out.

On Wed, Mar 11, 2020 at 9:28 AM ta264 notifications@github.com wrote:

The simple answer is you can't. Radarr isn't designed to be clustered, and
it's not clear what the point of that would be. Don't overcomplicate it :)


No one is trying to cluster anything. The root of the problem is that people want to use NAS for Radarr's configuration dirs/files, which is completely orthogonal to Kubernetes, containers, etc.

I think that's a very reasonable thing to do. Most people using Radarr have NAS to store media, it would be nice to be able to store Radarr's configuration and internal metadata there as well.

If you want to store its application data on the NAS, you'll need to run it on the NAS. This is very unlikely to change; the effort/reward just isn't worth it for us.

I've bumped into this issue on a 3-node Kubernetes cluster backed by NFS storage. Is there a way to configure another DB engine for Radarr (e.g., MySQL, Postgres) instead of SQLite?

No. And do not try to run a cluster of Radarr instances with the same database, it will break.

Thanks. It was running with only 1 replica, but I fixed it by adding a hostPath volume for Radarr on a fixed node.

Changes in the deployment config, for anyone looking for a solution:

```yaml
...
containers:
  - name: radarr
    ...
    volumeMounts:
      - name: config
        mountPath: "/config"
...
volumes:
  - name: config
    hostPath:
      path: /srv/radarr/config
      type: DirectoryOrCreate
...
nodeName: node1
```

Now the error is gone and the web UI is responsive again.

Be warned that if you are accessing the database via NFS it will get corrupted eventually.

Thanks, but this hostPath directory is on-node storage (the VM filesystem), not a dynamically provisioned NFS PV as it was before. Thus, if the node goes down, Radarr stops working. IMO this is still better than an unusably slow web UI with the DB stored on NFS (which would eventually corrupt it); it works fine now.

@cjbottaro would you be so kind to share your setup for the initContainer + cronjob? I'm facing the same issue and this looks like one of the better options I've seen so far 👍

This appears to be the root of my issue too. My k3s cluster uses NFS as the storage backend via the external NFS provisioner, and after starting Radarr v3 fresh I get the database locked error.

I don't get this with any of my other apps, it's just Radarr that locks up.

Why the hell are you commenting on a 4!!!! Year old closed issue man?

Why the hell are you commenting on a 4!!!! Year old closed issue man?

The discussion is ongoing and the problem isn't resolved.

Simply commenting an effective "me too", with nothing productive to add, on a stale 4-year-old GitHub issue about a version that is EOL is, quite frankly and bluntly, not the smartest idea.

Let alone one tagged as could not reproduce.

Similarly, the database lock issue is not really an issue. It simply means the database can't keep up with the I/O; it doesn't cause any actual problems.

Comments and replies like yours that add no beneficial information are why certain GHI get locked.

With that said, I do appreciate you clearly using the search function :)

Edit:
Keeping your database on an NFS share WILL result in corruption one day. It is all but guaranteed; I'd suggest daily backups/snapshots.
