Users of RStudio Connect may encounter an error when attempting to access the IDE, with an error message stating "database is locked". The error message looks something like this:
This error typically occurs when the server or its file system is under substantial load and the database cannot keep up with the volume of requests.
There are a number of possible causes for this issue. The first, and most common, is high CPU utilization: processes may be running in the background, consuming CPU without any indication that they have started or are running. The issue can also be caused by a large number of reads and writes to the database that fully utilize the compute capacity of your hardware.
Migrating Database Provider
The first, and most strongly recommended, solution is to replace the default SQLite database with a PostgreSQL database. RStudio Connect ships with an SQLite database as standard; while this does the job, it is prone to service degradation under heavy load.
Moving database transactions to a separate, dedicated PostgreSQL host will substantially improve product performance, and you are unlikely to see the error message recur. We have a comprehensive guide on migrating from the default SQLite database to Postgres here:
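Once a PostgreSQL server is provisioned, the switch itself is a small configuration change. A minimal sketch, assuming the default configuration path and an illustrative connection URL (the host, user, and database name are placeholders; adjust them for your environment):

```ini
; /etc/rstudio-connect/rstudio-connect.gcfg

[Database]
Provider = "Postgres"

[Postgres]
; Illustrative URL: point this at your dedicated PostgreSQL host.
URL = "postgres://connect_user@db-host.example.com/connect"
```

Restart the rstudio-connect service after editing the configuration so the new database provider takes effect; the migration guide linked above covers moving your existing data across.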
Load balancing across multiple hosts
The next way to resolve this issue is to place a proxy in front of a cluster of multiple RStudio Connect hosts. This avoids the database-locking errors, but does not address the underlying problem that a single host might be over-utilized. If you find that you are asking the host to process more work than it can handle, the next best answer is to shift to a cluster of hosts behind a proxy.
Please note that this solution still requires a separate Postgres database that all nodes in the cluster can connect to.
We have a thorough guide on moving to a high availability infrastructure setup with a load balancer which can be found here:
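As a rough sketch of what the proxy layer might look like, here is a hypothetical nginx configuration for two Connect nodes (the host names are placeholders; Connect listens on port 3939 by default, and sticky sessions are required so each user stays on one node):

```nginx
# Hypothetical load-balancer config; connect-node-1/2 are placeholder host names.
upstream rsconnect {
    ip_hash;                      # sticky sessions: keep each client on one node
    server connect-node-1:3939;
    server connect-node-2:3939;
}

server {
    listen 80;
    location / {
        proxy_pass http://rsconnect;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```

Consult the high-availability guide linked above for the full set of requirements, including shared storage between nodes.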
Clearing up compute capacity
Another solution is to SSH into your server and check whether any processes occupying compute resources can be removed. You can take a snapshot of running processes, sorted by CPU usage, with the command below:
top -b -n 1 -o %CPU | head -n 17
From here, you can identify the process ID (PID) occupying the most compute resources and terminate it by running:
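A minimal, self-contained sketch of terminating a process by PID; here a background `sleep` stands in for the runaway process, and in practice you would substitute the PID reported by top:

```shell
# Stand-in example: spawn a dummy process, then terminate it by PID.
sleep 300 &                    # stand-in for the runaway process
PID=$!
kill "$PID"                    # SIGTERM first: lets the process exit cleanly
wait "$PID" 2>/dev/null || true
# If a process ignores SIGTERM, escalate with `kill -9 <PID>` as a last resort.
```

Prefer plain `kill` (SIGTERM) over `kill -9` (SIGKILL) where possible, as SIGKILL gives the process no chance to clean up after itself.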
This solution, however, is more of a band-aid fix, and it is only a matter of time before the issue reappears. It is best to resolve the issue for the long term by either running a proxy or migrating to an external database; this final solution will only get you unstuck for a short while.
If you still have issues after completing the steps above, you can always lodge a support ticket, and our friendly, incredibly knowledgeable staff will assist with any issues you may be having. You can submit a ticket here: