Google-cloud-python: datastore: very slow queries, single record by key

Created on 24 Feb 2018 · 15 comments · Source: googleapis/google-cloud-python

Ubuntu 16.04.3 LTS
Python 3.6.3
google-cloud-datastore>=1.5.0

repro code:

from google.cloud import datastore
from time import time
start = time() * 1000
ds = datastore.Client('project-name')
key = ds.key('TestPerf', 'c10d6806-e3d4-4694-a8be-38f481817c')
entity = ds.get(key)
end = time() * 1000
print("duration : ", (end - start))

We are just starting out with Google Cloud Platform and chose Datastore to store our data because it seemed the simplest option; however, we are noticing very slow query performance. Our most common query is by primary key (the code above) or an equality filter on one of the fields.

We are getting roughly 200-300 ms on the above query, whether the database contains a single entity or many. The queries are run from either a Compute Engine instance or Kubernetes clusters (both show the same performance).

We are not sure what to do from here, short of switching to a different DB option on the GCP platform. Any help would be greatly appreciated. Thanks!

Labels: question, datastore, performance, awaiting information

All 15 comments

Hi @natasha-aleksandrova. It seems you're creating a new datastore client every time. This has quite a bit of overhead. Can you move the datastore client creation outside of your timing loop and see what you get?
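For illustration, a minimal timing helper (hypothetical, not part of the client library) that measures only the call under test, averaged over several iterations:

```python
from time import perf_counter

def avg_ms(fn, n=10):
    """Average wall-clock milliseconds over n calls to fn()."""
    start = perf_counter()
    for _ in range(n):
        fn()
    return (perf_counter() - start) * 1000.0 / n

# Usage (assumes `ds` and `key` are already constructed, outside the timing):
#   print(avg_ms(lambda: ds.get(key)))
```

This keeps client construction, credential loading, and the like out of the measurement entirely.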

There is no loop in the above code.

Moving the client creation before the `start` variable makes virtually no difference.

Hrm, on my end:

In [9]: %timeit ds.get(ds.key('Faces', 'flex_and_vision.jpg'))
10 loops, best of 3: 67.5 ms per loop

Can you tell me a bit more about your environment?

Certainly, could you tell me what specifically you're looking for?

By the way, 67 ms to look up a record by key doesn't seem like good performance compared with comparable databases (PostgreSQL, Cassandra, etc.).

natasha.aleksandrova@dev-box-2 ~/$ venv/bin/python3.6 get_ds_test.py 
duration :  177.80712890625
duration :  260.043701171875
duration :  147.9111328125
duration :  182.68212890625
duration :  144.354736328125
duration :  174.015869140625
duration :  156.070556640625
natasha.aleksandrova@dev-box-2 ~/$ venv/bin/python3.6 get_ds_test.py 
duration :  312.77734375
duration :  48.459716796875
duration :  57.712890625
duration :  165.5234375
duration :  156.344970703125
duration :  80.740234375
duration :  45.97998046875

The above is from running the script with multiple get queries after creating the client first.

It seems pretty inconsistent. We are using Datastore behind REST endpoints that create a client and then do a single query to return the data. It seems the first query is always slow.

Like I said earlier, ~50 ms still seems a little slow.
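One common way to avoid paying the construction cost (and the slow first call) on every request is to build the client once per process and reuse it across requests. A sketch of the pattern, where `make_client` is a stand-in for `datastore.Client('project-name')`:

```python
_client = None

def make_client():
    # Stand-in for the expensive construction; in a real handler this
    # would be: from google.cloud import datastore; datastore.Client(...)
    return object()

def get_client():
    """Return a process-wide client, constructing it at most once.

    Not thread-safe as written; wrap the check in a lock if request
    handlers run in multiple threads.
    """
    global _client
    if _client is None:
        _client = make_client()
    return _client
```

With this shape, only the very first request in a process pays the setup cost; later requests reuse the same client and its warm connection.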

@dmcgrath could you let me know if this is within the expected latency for datastore?

I am happy to hear that someone is raising this point. We experience the slowness with the Java client. First of all datastore is slower than cloud sql. Secondly there are fluctuations between 50ms and a couple of hundred ms . This forces us to use intermediate caches instead of querying the datastore directly.

PM for Datastore here. Some quick insights.

1) Our system spans many servers across multiple data centers (it's synchronously replicated across regions). To get the best performance, we have a series of caches, for things like permission checking, that will be cold until you send us non-trivial traffic that hits the various servers and warms those caches. When benchmarking, longer tests will give more reliable indications of actual production performance (more below).

2) Yes, for individual entity reads we will be slower than Cloud SQL. We trade some latency for much greater durability and availability guarantees, along with other properties like scaling down to zero. We will scale better and more consistently, so queries with the same results will have the same latency whether you have 1 GiB with 10 reads/second or 1 PiB with 10 million reads/second.

3) Yes, 50 ms would be considered slow for a single-entity read. I modified the above test (still not a great test) and ran it in Cloud Shell, getting numbers more closely aligned with expected performance:

Code:

from google.cloud import datastore
from time import time
ds = datastore.Client('project-id-goes-here')

for y in xrange(15):
    start = time() * 1000
    for x in xrange(100):
        key = ds.key('TestPerf', 'c10d6806-e3d4-4694-a8be-38f481817c')
        entity = ds.get(key)
    end = time() * 1000
    print("avg duration (n=100) : ", (end - start)/100)

Output:

(venv) [email protected]:?:~/timings$ python time.py
('avg duration (n=100) : ', 25.29804931640625)
('avg duration (n=100) : ', 21.823388671875)
('avg duration (n=100) : ', 20.97033935546875)
('avg duration (n=100) : ', 21.27089111328125)
('avg duration (n=100) : ', 20.589169921875)
('avg duration (n=100) : ', 18.09662109375)
('avg duration (n=100) : ', 17.6180712890625)
('avg duration (n=100) : ', 16.16829833984375)
('avg duration (n=100) : ', 19.7347607421875)
('avg duration (n=100) : ', 16.29345947265625)
('avg duration (n=100) : ', 18.2619384765625)
('avg duration (n=100) : ', 14.91742919921875)
('avg duration (n=100) : ', 16.11412841796875)
('avg duration (n=100) : ', 13.5105908203125)
('avg duration (n=100) : ', 15.86523193359375)
(venv) [email protected]:?:~/timings$

Thank you for information.

Your results look much better than mine. What could account for such a drastic difference?
An average of ~20 ms is much better and something we could work with (although, coming from PostgreSQL with 1-2 ms queries by primary key, it is still a little high).

Also, a little bit about our use case. We are working on REST endpoints and using Datastore to store our core entities like User, Driver, Company etc. They are fairly simple data structures, and most commonly our endpoints will do a single query by the Key to get the entity. So the code snippet I provided in the issue I reported is pretty much the basis of the code, init DS client, then do a get by Key.

Given our use case, would SQL be more suitable for us? Or anything we can do to optimize Datastore for us?

Thanks!
Natasha

I'm going to go ahead and close this, but by all means please continue discussing. If there's an actionable issue for this library, we can re-open or start a new issue.

Thank you @dmcgrath for giving a thorough answer here. :)

@natasha-aleksandrova I would recommend running the version of the test I posted, then comparing the numbers against my results. I strongly suspect you'll see they then match.

These are results:

avg duration (n=100) :  90.1637060546875
avg duration (n=100) :  93.5896826171875
avg duration (n=100) :  69.44139892578124
avg duration (n=100) :  67.1531201171875
avg duration (n=100) :  60.3359521484375
avg duration (n=100) :  63.8241650390625
avg duration (n=100) :  58.40623779296875
avg duration (n=100) :  62.62049560546875
avg duration (n=100) :  57.0997607421875
avg duration (n=100) :  63.35859619140625
avg duration (n=100) :  61.26636474609375
avg duration (n=100) :  51.29640625
avg duration (n=100) :  50.8125390625
avg duration (n=100) :  53.75461181640625
avg duration (n=100) :  48.53342041015625

Just to close on this, it sounds like:

  • because we are running in west and the Datastore is in central, there is about 50 ms of extra latency
  • the cold-cache aspect is most likely affecting our performance
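Worth noting for anyone landing here: when an endpoint needs several entities at once, batching the lookups amortizes that per-call round trip. google-cloud-datastore exposes `Client.get_multi(keys)`; a hedged sketch of chunking a key list before calling it (the 1000-key chunk size is an assumption, check the current service limits):

```python
def chunked(items, size=1000):
    """Yield successive slices of items, each at most size long."""
    for i in range(0, len(items), size):
        yield items[i:i + size]

# Usage (assuming ds is a datastore.Client and keys is a list of keys):
#   entities = []
#   for batch in chunked(keys):
#       entities.extend(ds.get_multi(batch))
```

One batched call of N keys pays the network latency once, instead of N times for N individual `ds.get` calls.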

@natasha-aleksandrova What I learned from using Datastore is to use Google Cloud SQL if I don't need to store a huge amount of data or handle a massive number of queries per second. Cloud SQL is just awesome, plus you are not locked into a proprietary solution that you cannot change afterwards. Bonus: you can use great third-party libraries like SQLAlchemy.
If I had one gigantic table, I would use either Cassandra or Bigtable directly. Otherwise you will have to set up caching layers (Redis, Memcached, ...) in the middle, because Datastore performance will be nondeterministic and queries will be slow.
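The intermediate-cache idea mentioned above can start as simply as an in-process TTL cache in front of keyed reads; a minimal sketch (memory-only, expired entries evicted lazily on read):

```python
import time

class TTLCache:
    """Tiny in-memory cache with a per-entry time-to-live."""

    def __init__(self, ttl_seconds=60):
        self.ttl = ttl_seconds
        self._store = {}

    def get(self, key):
        item = self._store.get(key)
        if item is None:
            return None
        value, expires_at = item
        if time.monotonic() > expires_at:
            del self._store[key]  # lazily evict the expired entry
            return None
        return value

    def put(self, key, value):
        self._store[key] = (value, time.monotonic() + self.ttl)
```

A real deployment would more likely reach for Redis or Memcached as mentioned, but the read-through shape is the same: check the cache first, fall back to `ds.get(key)` on a miss, then `put` the result.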

@david-gang thanks for sharing your insights! It is certainly helpful, and we are looking at trying out Cloud SQL.
