Suppose the current minimum throughput is 30,000 RU/s for a container with unlimited storage capacity. Is the minimum throughput decreased automatically when the volume of data (actual throughput) decreases? If not, how can we decrease the current minimum throughput (to 20,000, for example) when we know the volume of data (actual throughput) has decreased? Thanks.
@kazshi Thank you for your interest in Azure products and services. We are investigating and will get back to you soon.
@kazshi The throughput isn't automatically decreased if your data size shrinks. You have to change the throughput limit to the value that you need; you can update the throughput by using the Azure portal or programmatically by using one of the supported SDKs. You are billed for the provisioned throughput, not based on actual usage.
If you are using the portal, you can change the throughput limit under the Scale & Settings blade.
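If you prefer to do it programmatically, here is a minimal sketch, assuming the azure-cosmos Python SDK (the endpoint, key, database/container names, and target RU/s are placeholders):

```python
# Minimal sketch: update provisioned throughput with the azure-cosmos Python SDK.
# The endpoint, key, database, container, and target RU/s below are placeholders.
from azure.cosmos import CosmosClient

client = CosmosClient("https://<your-account>.documents.azure.com:443/", credential="<primary-key>")
container = client.get_database_client("<database>").get_container_client("<container>")

# Read the currently provisioned throughput.
current = container.get_throughput()
print("Currently provisioned RU/s:", current.offer_throughput)

# Lower the provisioned throughput; this call is rejected if the value is
# below the minimum currently allowed for the container.
container.replace_throughput(20000)
```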
@SnehaGunda Thanks for your comment. Sorry for my lack of explanation. Actually the original question is from my customer and their current throughput value is 31900 and it's lower limit. They cannot reduce the throughput value from the Portal or using program because of a large number of partitions, and they're trying to reduce data size to decrease the current throughput value (change the lower limit). Would the number of partitions be decreased by reducing the data size? How can they change lower limit of the throughput value in this case? Thanks.
Hi @SnehaGunda , do you have any information of how to change the lower limit for the case? Thanks.
Yes, I'm seeing this same issue on the SQL API side. If you scale up your database above a certain threshold, usually 10k RU/s, you can no longer come back down to 400 RU/s; you get locked into a higher tier of some sort. This is very frustrating, since I wanted to create an auto-scaler function that scales up the collection every day for a big bulk import and then scales back down after the import is finished, roughly like the sketch below.
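Something along these lines, as a rough sketch assuming the azure-cosmos Python SDK (the endpoint, key, and RU/s values are placeholders, and the import step is just a comment):

```python
# Rough sketch of a daily scale-up / scale-down helper (azure-cosmos Python SDK assumed;
# account details and RU/s values are placeholders).
from azure.cosmos import CosmosClient
from azure.cosmos.exceptions import CosmosHttpResponseError

client = CosmosClient("https://<your-account>.documents.azure.com:443/", credential="<primary-key>")
container = client.get_database_client("<database>").get_container_client("<container>")

def set_throughput(ru: int) -> None:
    try:
        container.replace_throughput(ru)
        print(f"Throughput set to {ru} RU/s")
    except CosmosHttpResponseError as err:
        # This is where the scale-down fails once the minimum has crept above the target.
        print(f"Could not set {ru} RU/s: {err.message}")

set_throughput(10000)   # before the big bulk import
# ... run the bulk import here ...
set_throughput(400)     # after the import; rejected if 400 is below the current minimum
```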
Also, there is no info on why the minimum RU/s limit is set to what it is, which is the crux of this issue. According to this article, you should have a minimum limit of 400 RU/s with up to 4 collections, an additional 100 RU/s minimum per collection over 4, and an additional 40 RU/s minimum per GB of storage. By that math, since I have 5 collections and 43 GB of data, my minimum should be 2,220 RU/s, but my minimum is 3,200, which seems totally arbitrary!
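Spelling out that arithmetic (my own collection count and storage plugged into the formula quoted above):

```python
# Minimum RU/s per the formula quoted above: 400 base, plus 100 per collection over 4,
# plus 40 per GB of storage. The inputs are my account's numbers.
collections = 5
storage_gb = 43

minimum_ru = 400 + max(0, collections - 4) * 100 + 40 * storage_gb
print(minimum_ru)  # 2220, yet the portal reports a 3,200 RU/s minimum
```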
@kazshi @ijabit I have reached out to the product group to get an accurate answer. Apologies for the delay. Thank you!
@kazshi and @ijabit Sorry about the delay in getting back on this.
We will be providing a new response header, "x-ms-cosmos-min-throughput", which will be available soon. It represents the minimum throughput value an offer can be scaled down to. Once you have this header, you should be able to scale the RU limit down to, but not lower than, that value. The minimum throughput value is based on the following three properties:
o The maximum throughput that you have ever provisioned on the container or the database.
o The amount of data stored in the container or the database.
o If you are using database-level/shared throughput, the number of containers within the database.
@ijabit Due to the above factors, you can't always scale the RU value back down to 400. Your calculation should include all three properties above; there will be a formula for calculating the minimum value, which should give a better idea.
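As a rough sketch of how the new header could be used once it's available (this assumes the azure-cosmos Python SDK; reading the header through last_response_headers is an SDK implementation detail that may differ across versions):

```python
# Rough sketch: read the minimum-throughput header after an offer read, then scale down
# no lower than that value. Account details are placeholders, and last_response_headers
# is an SDK implementation detail.
from azure.cosmos import CosmosClient

client = CosmosClient("https://<your-account>.documents.azure.com:443/", credential="<primary-key>")
container = client.get_database_client("<database>").get_container_client("<container>")

container.get_throughput()  # any offer read should carry the header in its response
headers = container.client_connection.last_response_headers
min_ru = headers.get("x-ms-cosmos-min-throughput")
print("Lowest RU/s this container can be scaled down to:", min_ru)

if min_ru:
    container.replace_throughput(int(min_ru))
```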
@kazshi "Would the number of partitions be decreased by reducing the data size?" - The number of partitions varies based on the data size and throughput. You don't have to worry about the number of partitions if you choose the right partition key. The factors to consider when choosing a partition key are described here: https://docs.microsoft.com/en-us/azure/cosmos-db/partitioning-overview#choose-partitionkey
@Mike-Ubezzi-MSFT In case you have any other update from the product team, please share it as well.
@SnehaGunda Thank you for the info. Do you have an ETA for when the new header will be implemented? It would also be great if you could provide documentation about the new header when it's available, so that I can pass it along to my customer. Thanks.
> Max throughput that you ever increase on the container or the database.
That is the really frustrating thing; it makes it impossible to scale up for a large load without getting locked into a higher rate. I thought that was one of the big selling points, that you can scale up and down as demand requires! Nothing in the docs says that once you scale up you get locked in. Very disappointing...
@SnehaGunda Thanks for your valuable input. I have assigned this issue to you for further clarification.
@kazshi I don't have an ETA as of now; we will announce it with blog posts and doc updates. Yes, we are working on documentation for this feature, and we will be updating the main conceptual and REST API docs. We will notify you when the docs are available.
@ijabit There might be a different way to solve your problem that I'm not aware of. Can you please send an email to the [email protected] alias? A broader audience can look at your questions there, and our team can guide you in the right direction.
I also dropped an email to our product group; @kirillg will respond to this issue soon.
@SnehaGunda were you able to get a response?
@ijabit No, I sent a follow-up email to Kirill. Can you also please reach out to the [email protected] alias? It's monitored by a broader audience, so you're likely to get a faster response.
@SnehaGunda I've sent a message to that email. Thanks!
@ijabit Unfortunately, I can't find your email to the askcosmosdb alias. I am sure you sent it, because another PM saw the email; I may have deleted it by mistake. Did you get your questions addressed?
@snehagunda The subject contains I110793756, if that helps. Karthik Raman asked me for clarification, which I replied to. I haven't heard back from anyone since then. Thanks.
@ijabit I pinged Karthik to look into this issue; please send a follow-up email from your end too :-). We will track it through the alias and close the issue here.
We will now proceed to close this thread. If there are further questions regarding this matter, please comment and we will gladly continue the discussion, but the specific issue will be addressed via the email communication that is in progress.
@SnehaGunda Was "x-ms-cosmos-min-throughput" implemented? Thanks.
@Mike-Ubezzi-MSFT @SnehaGunda Could you please reopen this thread so that I can track the "x-ms-cosmos-min-throughput" implementation? I'll need to provide information about it to my customer. Thanks.
@kazshi The best course of action is to reach out to askcosmosdb at microsoft.com, where you will be issued a tracking ID, as previously advised.
@Mike-Ubezzi-MSFT I've not heard anything back in over a month. Not sure if that is the "best course of action".
Please send your Subscription ID to AzCommunity and I will send you instructions to have a support request filed. If you do have an Azure support plan, please go ahead and open a ticket with support. It is the best way to get this escalated. Thank you.
@Mike-Ubezzi-MSFT Actually, the original issue came from a support request. I provided the information to the customer (per the following page, confirmed with Ask Azure Cosmos DB) that there's no way to decrease the minimum RU, and the SR case has already been closed. Thanks for your support.
Provision throughput on Azure Cosmos containers and databases | Microsoft Docs
https://docs.microsoft.com/en-us/azure/cosmos-db/set-throughput#update-throughput-on-a-database-or-a-container
It doesn't make sense that you can scale a container up with unlimited maximum throughput but can't scale it back down to the original minimum. There should, at the very least, be a warning about this in the documentation (and in the portal, for that matter), along with clear details on which thresholds will end up changing the minimum possible throughput.
This is a joke, isn't it?
I ran some performance tests against a Cosmos DB app, and now I can't scale down and will pay 60 dollars per month for an instance I'm not using? Great job!