Azure-docs: Query exceeded the maximum allowed memory usage of 40 MB. Please consider adding more filters to reduce the query response size

Created on 16 Oct 2018  ·  47 Comments  ·  Source: MicrosoftDocs/azure-docs

I am using the Cosmos DB Mongo API for aggregation. Our documents are large (though still well within the 2 MB limit, maybe 200 KB), but when I group the data, Cosmos DB throws the error "Query exceeded the maximum allowed memory usage of 40 MB. Please consider adding more filters to reduce the query response size".
I have tried the aggregate option "allowDiskUse": true, but it didn't work.


Pri2 cosmos-dsvc cxp product-question triaged

Most helpful comment

Happy birthday to you, happy birthday to you, happy birthday 40 mb issue, happy birthday to you!

All 47 comments

@kamaljoshi911 Thank you for the detailed feedback. We are actively investigating and will get back to you soon.

I am having the same issue.

Similar situation, but my documents are much smaller in size. I am also grouping the data. "allowDiskUse" did not help.

@kamaljoshi911 and @ccnoecker Could you please paste the typical query that causes this? How many documents are there in the collection(s) it touches? Your detailed feedback will be super helpful here. Thank you.

There are currently 587,239 documents in the collection, but there will be many more in the future. I am using refactored code to do this, and it worked with no issues on a local MongoDB instance; I had never seen this error before switching to Azure Cosmos DB.

I am using Python to make the query:

db.logs.aggregate([
        {"$group":{
            "_id":{"player_name":"$player_name",
                "player_retailer":"$player_retailer",
                "player_country_code":"$player_country_code",
                "player_function_aggregated":"$player_function_aggregated",
                "retailer_country":"$retailer_country",
                "retailer_name":"$retailer_name",
                "retailer_city":"$retailer_city",
                "item_id":"$item_id",
                "item_name":"$item_name",
                "item_type":"$type",
                "item_context":"$context"},
            "playback_count":{"$sum":"1"},
            "avg_duration":{"$avg":"$duration"}
        }
        },
        {"$project":{
            "playback_count":"$playback_count",
            "avg_duration":{"$trunc":"$avg_duration"}
        }
        },
        {"$out":"playback_count_by_player"}
    ])

@ccnoecker Can you please let us know what your Cosmos DB instance name is? You can send that to AzCommunity at Microsoft com, if you do not wish to post the information here. I have the product team looking into this. We have a possible workaround but want to take a look at your specific environment, if that is okay with you. Thank you!

@kamaljoshi911 If you can share your queries and your instance name, I can get you some feedback as well. Thank you.

@ccnoecker Thank you for the additional details you have provided. This information has been provided to the product team.

@kamaljoshi911 @ccnoecker Azure Cosmos DB has a 40 MB memory limit in place on queries while the product team works out an issue with aggregations. The limit is intended to ensure that no one query chews up all the memory on a cluster. I am hoping that the product team can provide some workarounds for the use case of @ccnoecker. If there are other examples that need to be looked at, please provide:

  • Instance Name
  • Example Query causing the 40MB threshold to be invoked

Thank you,
Mike

We will now proceed to close this thread. If there are further questions regarding this matter, please comment and we will gladly continue the discussion.

For the 40MB issue in general, we understand the problem and are working on a fix. As a workaround, customers can try 1) reducing the fields used from each document and 2) reducing the overall number of documents covered by the query.
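
Applied to a $group pipeline like the one posted earlier in this thread, those two workarounds amount to adding a $match and a $project ahead of the $group. A minimal Python sketch; the collection and field names here (timestamp, player_name, duration) are illustrative, not a real schema:

```python
# Sketch of the two suggested workarounds applied to an aggregation pipeline.
# Field names are hypothetical placeholders.

def build_pipeline(since):
    return [
        # Workaround 2: shrink the number of documents covered by the
        # query by filtering as early as possible.
        {"$match": {"timestamp": {"$gte": since}}},
        # Workaround 1: shrink each document to only the fields the
        # grouping actually needs.
        {"$project": {"player_name": 1, "duration": 1, "_id": 0}},
        {"$group": {
            "_id": "$player_name",
            "playback_count": {"$sum": 1},
            "avg_duration": {"$avg": "$duration"},
        }},
    ]

pipeline = build_pipeline("2018-10-01")
stages = [next(iter(s)) for s in pipeline]
print(stages)  # ['$match', '$project', '$group']
```

The key point is ordering: the $match and $project run before the $group, so the grouping stage holds far less data in memory.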

Hi,
I have the same issue. I am executing
db.getCollection('collectionName').aggregate([ { $limit : 1 }])

But Cosmos DB is returning "Query exceeded the maximum allowed memory usage of 40 MB", even though I am already filtering (well, I only want to get the first element).

Hi,

I am having the same problem with
bigin.documents.azure.com

db.getCollection('labels').aggregate([
    {"$match": {
        "category": "bg:itemView",
        "application": ObjectId("5bd6adb506a2f64866ddc7d2"),
        "timestampDate": {"$gt": ISODate("2018-10-31T15:00:00.000Z"), "$lt": ISODate("2018-11-18T15:00:00.000Z")}
    }},
    {"$project": {"product": 1, "user": 1, "visit": 1}},
    {"$group": {"_id": {"product": "$product", "user": "$user", "visit": "$visit"}}},
    {"$group": {"_id": {"product": "$_id.product", "user": "$_id.user"}, "count": {"$sum": 1}}},
    {"$match": {"count": {"$gt": 1}}},
    {"$group": {
        "_id": "$_id.product",
        "count": {"$sum": 1},
        "views": {"$sum": "$count"},
        "users": {"$push": {"id": "$_id.user", "count": "$count"}}
    }},
    {"$project": {"users": 1, "count": 1, "views": 1, "average": {"$divide": ["$views", "$count"]}}},
    {"$match": {"average": {"$gt": 3.5}}}
])

Any updates?

So as I read this, at the moment, aggregations are NOT possible?
This command, db.getCollection('collectionName').aggregate([ { $limit : 1 }]), from @bernardo5304 gives you a clue that there is a larger underlying problem.

This just put a stop (show stopper) to our migration plans.

I have the same issue with the following query:

db.getCollection("Telemetry").distinct("DeviceId")

I have 80,000 documents and there are 2 distinct values for DeviceId at the moment.

I did not specify any indexing, so, as far as I understand it, all my fields should be indexed.

Same problem here, using the MongoDB API with Cosmos DB is starting to look like death by a thousand paper cuts

I have escalated to the Product Group with regard to each of your individual comments @bernardo5304 @zxshinxz @gerhardcit @Peter-B- @lferrao. I am looking to have feedback shortly.

We are actively working on improving the aggregation framework, and post-GA we will remove this limit. Thanks.

I thought Cosmos had already reached GA? The current limit is being hit just doing a simple date query (get last 3 days of documents), making cosmos useless for us at the moment. Off to MongoDB Atlas.

Thanks for your feedback. We are working on addressing the above case you've encountered. Please connect with us at [email protected] and we can help you with your use-case. Thanks.

This is still an issue. I can't begin to understand how hard it is to make all these PaaS services work and play nicely with multiple tenants, but if aggregation does not work, this is not really a Mongo-compliant API, and a lot of the use cases for Cosmos start to fall apart. Is there any update on when this functionality will be available again? Selecting fewer than 100K rows in an aggregation query causes this problem; short of running micro-batches to process anything, I have limited options if I use Cosmos.
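
The micro-batching idea mentioned here can be made concrete: split the query's range into windows and run one aggregation per window, so each query stays under the memory cap. A hedged Python sketch; the `created` timestamp field is hypothetical:

```python
from datetime import datetime, timedelta

def match_windows(start, end, step=timedelta(days=1)):
    """Yield $match stages covering [start, end) in half-open windows,
    so each aggregation only touches one slice of the collection."""
    cur = start
    while cur < end:
        nxt = min(cur + step, end)
        # "created" is a hypothetical timestamp field.
        yield {"$match": {"created": {"$gte": cur, "$lt": nxt}}}
        cur = nxt

windows = list(match_windows(datetime(2018, 11, 1), datetime(2018, 11, 4)))
print(len(windows))  # 3
```

Each generated $match stage would be prepended to the aggregation pipeline and run as its own query; the per-window partial results are then combined client-side.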

@zpappa, as mentioned above - we are working on it. In the meantime, please connect with us at [email protected] and we try to help and unblock you. Thanks!

@rimman thanks for the timely response. I will reach out soon.

I am having the same issue using the following query

aggregate([
    { $unwind: "$Projections.key_discovery.KeysWithProjectedTypes" },
    {
        $group: {
            _id: "$Projections.key_discovery.KeysWithProjectedTypes.Key",
            dataTypes: {
                $addToSet: "$Projections.key_discovery.KeysWithProjectedTypes.StoredDataTypes"
            },
            count: { $sum: 1 }
        }
    },
    {
        $project: {
            KeyName: "$_id",
            DataTypes: "$dataTypes",
            Count: "$count",
            _id: false
        }
    }
])

This is a serious blocking issue for our development, what can I do to fix this?

We ended up installing mongo on a tiny VM, a fraction of the cost and more performant (and it works with aggregation). CosmosDB looks great, it just isn't there yet.

Did you take a look at MongoDb Atlas? You can get a managed instance there. This is where we ended up for now.

CosmosDB looks great, it just isn't there yet.
True words.

Unfortunate reality, but outside of smaller application use cases, metadata stores and the like, I can't justify using Cosmos for anything "big". There are definitely a lot of limitations and issues I've discovered; request rate throttling is one of them, and there's another pagination issue with aggregation (which, to MS's credit, they are actively working on). It's hard not to understand their point of view, though: they don't want dangerous aggregation queries running away, sucking up provisioned compute and memory and affecting other tenants. One of the many challenges with any cloud product.

I am facing the same issue with aggregation; the total number of records in the collection is just 4,612,109. Can anyone please tell me when this will be resolved?

@haresh1288: please send us your query and the error message to askcosmosmongoapi[at]microsoft[dot]com so that we can see if there is any mitigation.

Overview of Problem Statement

We are planning to use the Cosmos DB MongoDB API for our IoT telemetry data. The requirement is to join two collections (somehow it's not possible to use a denormalized schema, due to functional requirements, hence at least two collections are required to get the desired data). So we are planning to leverage the MongoDB aggregation pipeline (with $lookup).

[Problem]

I saw that the MongoDB API has some features in preview mode, like 'Aggregation Pipeline', 'MongoDB 3.4 wire protocol (version 5)' and 'Per-document TTL'. I need to use aggregation and probably the TTL functionality.

My question is: if I "Enable" the preview features and deploy my application in PRODUCTION, is there any impact when these preview features reach GA, and what is the solution to the 40 MB limit of the aggregation pipeline?

@haresh1288 we are constantly working to make improvements on the Cosmos DB's API for MongoDB. In order for us to effectively see whether there would be any implications to your use case using our currently previewed features, it'd be more effective for us to have a detailed conversation around your queries. Could you send an email with this description + your queries so that we can help you out?

You can e-mail us at : askcosmosmongoapi[at]microsoft[dot]com

Hi,

Below is my sample document,

{
    "_id" : ObjectId("5cb994c7077cca3e1f5f603a"),
    "Type" : "E",
    "EventID" : "2695",
    "partitionId" : 1,
    "Time" : "02,05,06",
    "EventClass" : "9003",
    "UnitID" : "2",
    "Date" : "04-03-2019",
    "Message" : "sending show_error(0) to jssh failed.",
    "Timestamp" : "1551683106",
    "Record" : NumberLong(1555666119562)
}

db.events.aggregate([
    { $lookup: { from: "device", localField: "UnitID", foreignField: "UnitID", as: "device_docs" } },
    { $match: { "partitionId": 2, "UnitID": "2", "Record": NumberLong(1555668270717) } },
    { $limit: 5 }
], { allowDiskUse: true })

I am trying to evaluate the Cosmos MongoDB API for my customer's first production release at the end of this year; if this will not work, I have to choose a different approach. My device collection has attributes which might get updated by the user, hence I have to use $lookup, and the above is just a sample query. I need group-by and almost all aggregation functions to get the desired result.
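
One common mitigation for $lookup pipelines like the one above is to place the $match stage first, so only the filtered events documents are joined against device. A Python sketch of the reordered pipeline, reusing the field names from the sample query (note that NumberLong(...) in the shell is just a plain int from Python):

```python
# Reordered version of the sample pipeline: filtering before the
# $lookup means only matching "events" documents are joined, which
# keeps the working set (and memory use) small. The $match fields all
# belong to the base collection, so the reordering is safe here.
pipeline = [
    {"$match": {
        "partitionId": 2,
        "UnitID": "2",
        "Record": 1555668270717,  # shell NumberLong(...) as an int
    }},
    {"$lookup": {
        "from": "device",
        "localField": "UnitID",
        "foreignField": "UnitID",
        "as": "device_docs",
    }},
    {"$limit": 5},
]
stages = [next(iter(s)) for s in pipeline]
print(stages)  # ['$match', '$lookup', '$limit']
```

This reordering only works when the $match refers solely to fields of the base collection; a filter on the joined `device_docs` array must stay after the $lookup.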

@haresh1288, save yourself a ton of time, pain and frustration, and sign up with MongoDB Atlas hosted on Azure. All your Mongo problems will go away and you still have the best of both worlds: apps running on Azure (we have about 100 of them) talking to your real Mongo DB on the same network. It works like a dream.

Hi Haresh,

The aggregation pipeline feature should GA this summer. With that update, along with improved performance most cases resulting in the 40 MB issue will no longer encounter that error. Depending on your query, we could maybe help you sooner - if you are interested, please share samples of the .aggregate queries you are performing.

Regards,
Jason

I'm still having the same issue...

Any update on this? I'm facing the same 40mb limit error.

Did not GA? Still an issue.

Happy birthday to you, happy birthday to you, happy birthday 40 mb issue, happy birthday to you!

Any solution? :'(

@alejuanito This has been resolved. If you deploy version 3.6 of the MongoDB API for Cosmos DB, you will not experience this issue. If you currently have a 3.2 deployment, please send your Azure Subscription ID to AzCommunity and I will have your instance upgraded. If you already have an Azure Support Plan, please open a support request and send me the support request ID. Another option is to export the data, deploy a v3.6 instance and then import the data (mongoimport).

Hi @Mike-Ubezzi-MSFT
I was experiencing this problem and asked Microsoft Support to upgrade to version 3.6. I have just tried after being upgraded, and it's still happening. Is anyone else having this problem with v3.6?
Regards

Yes. I created a new 3.6 CosmosDb, ingested some data and tried an aggregation with group by. The issue is still there.

@diego-palla Can you please provide the support request ID that was issued to request this upgrade and @Peter-B-, can you send me your subscription ID. Please send this to AzCommunity and I will let the product group know this is occurring. Thank you for bringing this to my attention.

Hi Mike, support request ID is 120031725001124.
Thanks for your help

This is being investigated, but please ensure you are using the 3.6 endpoint (*.mongo.cosmos.azure.com) and not the Mongo 3.2 endpoint (*.documents.azure.com).
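
If in doubt which endpoint a connection string targets, the host suffix tells you. A small Python helper as a sketch (the account name "myaccount" is a placeholder):

```python
from urllib.parse import urlparse

def endpoint_version(conn_str):
    """Classify a Cosmos DB Mongo connection string by its host suffix:
    '3.6' for *.mongo.cosmos.azure.com, '3.2' for *.documents.azure.com,
    and None for anything else."""
    host = urlparse(conn_str).hostname or ""
    if host.endswith(".mongo.cosmos.azure.com"):
        return "3.6"
    if host.endswith(".documents.azure.com"):
        return "3.2"
    return None

# "myaccount" is a placeholder account name, not a real instance.
print(endpoint_version(
    "mongodb://myaccount:key@myaccount.mongo.cosmos.azure.com:10255/?ssl=true"
))  # 3.6
```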

@Mike-Ubezzi-MSFT Thanks for the hint. That might indeed have been my mistake.
Unfortunately, I already deleted my Test-CosmosDb.
I am kind of busy at the moment, but I will give it another try and come back to you.

This is being investigated, but please ensure you are using the 3.6 endpoint (*.mongo.cosmos.azure.com) and not the Mongo 3.2 endpoint (*.documents.azure.com).

Great!! I didn't know about this endpoint change, but once updated, it works.

Thanks a lot for your help!
