Hi,
I am having a problem moving data between measurements. I want to copy all data from the "mysql" measurement to the "mysql_old" measurement. I tried the following command:
> select * into "mysql_old" from "mysql" group by *
ERR: timeout
But it returns ERR: timeout.
I am using InfluxDB version 1.5.2.
The configuration I am using is:
reporting-disabled = false
bind-address = ":8088"
[meta]
dir = "/var/lib/influxdb/meta"
retention-autocreate = true
logging-enabled = true
[data]
dir = "/var/lib/influxdb/data"
wal-dir = "/var/lib/influxdb/wal"
query-log-enabled = false
cache-max-memory-size = 1073741824
cache-snapshot-memory-size = 26214400
cache-snapshot-write-cold-duration = "10m0s"
compact-full-write-cold-duration = "4h0m0s"
max-series-per-database = 1000000
max-values-per-tag = 1000000
trace-logging-enabled = false
[coordinator]
write-timeout = "10s"
max-concurrent-queries = 0
query-timeout = "0s"
log-queries-after = "0s"
max-select-point = 0
max-select-series = 0
max-select-buckets = 0
[retention]
enabled = true
check-interval = "30m0s"
[shard-precreation]
enabled = true
check-interval = "10m0s"
advance-period = "30m0s"
[admin]
enabled = true
bind-address = ":8083"
https-enabled = false
https-certificate = "/etc/ssl/influxdb.pem"
[monitor]
store-enabled = true
store-database = "_internal"
store-interval = "10s"
[subscriber]
enabled = true
http-timeout = "30s"
insecure-skip-verify = false
ca-certs = ""
write-concurrency = 40
write-buffer-size = 1000
[http]
enabled = true
bind-address = ":8086"
auth-enabled = true
log-enabled = true
write-tracing = false
pprof-enabled = true
https-enabled = false
https-certificate = "/etc/ssl/influxdb.pem"
https-private-key = ""
max-row-limit = 10000
max-connection-limit = 0
shared-secret = ""
realm = "InfluxDB"
unix-socket-enabled = false
bind-socket = "/var/run/influxdb.sock"
[[graphite]]
enabled = false
bind-address = ":2003"
database = "graphite"
retention-policy = ""
protocol = "tcp"
batch-size = 5000
batch-pending = 10
batch-timeout = "1s"
consistency-level = "one"
separator = "."
udp-read-buffer = 0
[[collectd]]
enabled = false
bind-address = ":25826"
database = "collectd"
retention-policy = ""
batch-size = 5000
batch-pending = 10
batch-timeout = "10s"
read-buffer = 0
typesdb = "/usr/share/collectd/types.db"
[[opentsdb]]
enabled = false
bind-address = ":4242"
database = "opentsdb"
retention-policy = ""
consistency-level = "one"
tls-enabled = false
certificate = "/etc/ssl/influxdb.pem"
batch-size = 1000
batch-pending = 5
batch-timeout = "1s"
log-point-errors = true
[[udp]]
enabled = true
bind-address = ":8089"
database = "udp"
retention-policy = ""
batch-size = 5000
batch-pending = 10
read-buffer = 0
batch-timeout = "1s"
precision = ""
[continuous_queries]
log-enabled = true
enabled = true
run-interval = "1s"
Thanks,
How much data are you dealing with? You may need to break up the SELECT INTO with some time ranges (from and to)...and run it multiple times.
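For example, a bounded copy might look something like this (the timestamps are placeholders; adjust them to your data):
> SELECT * INTO "mysql_old" FROM "mysql" WHERE time >= '2018-04-01T00:00:00Z' AND time < '2018-04-08T00:00:00Z' GROUP BY *
Repeat with consecutive time windows until the whole range is covered. The GROUP BY * matters, because without it SELECT INTO writes tags as fields.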
I have the same issue here. It does not seem to be related to the length of the time range but to the amount of data. I use the restore function to import a backup into a new temporary database, and then I use the SELECT * INTO command to move the data from the temporary database into an existing database. The command exits with 'ERR: timeout' after about 30 minutes. In this case the backup is about 1 gigabyte; smaller backups are not a problem. Splitting the import into smaller "packages" seems to work.
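For reference, a rough sketch of that workflow, assuming a 1.5-style portable backup (database names, retention policies, and the backup path are placeholders; add -username/-password to the influx call if auth is enabled):
influxd restore -portable -db mydb -newdb restore_tmp /path/to/backup
influx -execute 'SELECT * INTO "mydb"."autogen"."mysql" FROM "restore_tmp"."autogen"."mysql" GROUP BY *'
The fully qualified "db"."rp"."measurement" form is what lets SELECT INTO copy across databases.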
I realize this is an old thread, but I'm having the same issue and have dug through the docs to no avail. The database I'm trying to restore is about 21 GB with just over two years of data. When testing, I can import four days (time > now() - 4d) before I receive the timeout error. Is there a more efficient way of accomplishing this, or is there a way to increase the timeout duration that I'm simply overlooking? Going four days at a time (about 20-30 minutes each) will take a completely impractical amount of time to complete the restore.
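It may or may not help here, but for reference the timeout-related knobs in the 1.x config live under [coordinator]; in the config posted above, query-timeout = "0s" already means queries are unbounded, so the error is probably not coming from that setting:
[coordinator]
write-timeout = "10s"    # how long a write waits before a timeout error is returned
query-timeout = "0s"     # 0s disables the query timeout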
Unfortunately, I was only able to get this working by splitting the import into six-hour packages. I restore the backup into a separate database and then run the SELECT * INTO command for one six-hour window at a time, looping through the whole time range. I know this is nasty, but I run an automated script to do it and have had no issues since.
That sucks. Would you mind sharing the script, or at least the command you were using to home in on a short time frame? I'm struggling with the syntax of the WHERE argument.
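For what it's worth, a minimal sketch of the kind of loop described above might look like this (database names, measurement, credentials, and the time range are all placeholders):
#!/bin/bash
# Copy the "mysql" measurement from a temporary restore database into the
# target database in six-hour windows. All names and dates below are placeholders.
START=$(date -u -d '2018-01-01 00:00:00' +%s)
END=$(date -u -d '2018-02-01 00:00:00' +%s)
STEP=$((6 * 3600))
t=$START
while [ "$t" -lt "$END" ]; do
  next=$((t + STEP))
  influx -database restore_tmp -username admin -password secret -execute \
    "SELECT * INTO \"mydb\".\"autogen\".\"mysql\" FROM \"mysql\" WHERE time >= ${t}s AND time < ${next}s GROUP BY *"
  t=$next
done
InfluxQL accepts epoch-second timestamps with an s suffix, which makes it easy to step through the range from a shell loop.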