Tell us about your request
Now that we have tags on ECS tasks and services (from https://github.com/aws/containers-roadmap/issues/15), it would be nice if Cost Explorer could let us slice and dice our EC2 costs by those tags.
Which service(s) is this request for?
ECS
Tell us about the problem you're trying to solve. What are you trying to do, and why is it hard?
The majority of our EC2 spend is on ECS clusters. We're running around 30 applications on each cluster, so it would be really useful to be able to allocate that spend back to our teams easily within Cost Explorer.

We could probably figure out some manual process each month to compute our usage allocation by either vCPU or RAM and then subdivide our EC2 bill accordingly, but that feels like a lot of work that would be pretty easy to get wrong.
It seems like AWS should have all of the information it needs to do these calculations for us (maybe with a hint from us to indicate whether the cluster capacity is CPU- or memory-constrained).
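For what it's worth, here is a minimal sketch (in Python) of the proportional split I have in mind. Everything in it is hypothetical: the `TaskUsage` records, the team tag values, and the numbers. It just divides a period's EC2 cost across teams in proportion to reserved vCPU-hours or memory-GB-hours.

```python
# Minimal sketch of the manual allocation described above; all inputs are
# hypothetical. The split is purely proportional to the chosen reservation
# dimension (reserved vCPU-hours or reserved memory-GB-hours) per team tag.
from collections import defaultdict
from dataclasses import dataclass


@dataclass
class TaskUsage:
    team: str               # value of the cost-allocation tag on the task
    vcpu_hours: float       # reserved vCPU x hours the task ran
    memory_gb_hours: float  # reserved memory (GB) x hours the task ran


def allocate_ec2_cost(total_cost: float, tasks: list[TaskUsage],
                      dimension: str = "memory") -> dict[str, float]:
    """Split total_cost across teams in proportion to reserved usage."""
    key = "memory_gb_hours" if dimension == "memory" else "vcpu_hours"
    usage_by_team: dict[str, float] = defaultdict(float)
    for task in tasks:
        usage_by_team[task.team] += getattr(task, key)
    total_usage = sum(usage_by_team.values())
    if total_usage == 0:
        return {}
    return {team: total_cost * usage / total_usage
            for team, usage in usage_by_team.items()}


if __name__ == "__main__":
    tasks = [  # one made-up month of usage for two teams
        TaskUsage(team="payments", vcpu_hours=1440, memory_gb_hours=2880),
        TaskUsage(team="search", vcpu_hours=720, memory_gb_hours=5760),
    ]
    print(allocate_ec2_cost(10_000.0, tasks, dimension="memory"))
```

The "hint" about whether the cluster is CPU- or memory-constrained would just pick the `dimension` here.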
Are you currently working around this issue?
We don't currently allocate EC2 costs on an application or team basis. If we need an estimate, we'll take a proxy like RDS spend and allocate EC2 in the same ratio as a best effort.
Note that data transfer allocation is going to be crucial here as well.
Here's an old article covering how to do this manually (before tagging and usage data were available): https://aws.amazon.com/blogs/compute/measuring-service-chargeback-in-amazon-ecs/
You can see how it might be a bit overwhelming to try to do manually.
What if some instances are CPU-constrained and some are memory-constrained? What if some instances are not fully utilized, or are not running tasks at all? How would you like those costs allocated?
You'd almost certainly need to have those go to a designated cost center; companies can then decide how to handle that bucket as a special case.
The "uneven constraints" problem is trickier. If it helps, this doesn't need to be accurate to the penny; being roughly 80% right is good enough for most use cases.
Having an “unused” dollar cost like Corey mentioned would be ideal (similar to the catch-all “no tag key” category when grouping spend by tag). It would be a really interesting “efficiency” metric for your cluster, and might make it easier to figure out whether you could save money by moving to Fargate, or to find better auto-scaling strategies.
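Roughly, something like this is what I mean by the unused bucket; a minimal sketch where the instance cost, capacity, and reserved memory-hours are all made-up inputs:

```python
# Sketch of splitting one instance's cost into an allocated part and an
# "unused" part that goes to a designated cost center. All numbers are
# made up; the ratio doubles as a rough cluster-efficiency metric.


def split_used_vs_unused(instance_cost: float,
                         capacity_gb_hours: float,
                         reserved_gb_hours: float) -> tuple[float, float]:
    """Return (allocated_cost, unused_cost) for one instance."""
    utilization = min(reserved_gb_hours / capacity_gb_hours, 1.0)
    allocated = instance_cost * utilization
    return allocated, instance_cost - allocated


if __name__ == "__main__":
    # One 16 GB instance for a 720-hour month, with 10 GB reserved on average.
    allocated, unused = split_used_vs_unused(
        instance_cost=140.0,
        capacity_gb_hours=16 * 720,
        reserved_gb_hours=10 * 720,
    )
    efficiency = allocated / (allocated + unused)
    print(f"allocated=${allocated:.2f} unused=${unused:.2f} "
          f"efficiency={efficiency:.0%}")
```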
I’m not sure instances could figure out for themselves whether they are memory- or CPU-constrained (this feels awfully similar to some of the discussions on the autoscaling thread, by the way).
In our case (right now, at least), our cluster is memory-constrained across the board.
The other option, I guess, might be to calculate two cost metrics, one for memory and one for CPU, and let the user choose which one they’re interested in using.
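A tiny sketch of that option with made-up figures: compute both a CPU-weighted and a memory-weighted split, and let whoever reads the report pick whichever matches how their cluster is actually constrained.

```python
# Sketch of the "two cost metrics" option; all figures are hypothetical.


def allocate(total_cost: float, usage_by_team: dict[str, float]) -> dict[str, float]:
    """Split total_cost proportionally to the given usage numbers."""
    total = sum(usage_by_team.values())
    return {team: total_cost * usage / total
            for team, usage in usage_by_team.items()}


if __name__ == "__main__":
    total_cost = 10_000.0
    vcpu_hours = {"payments": 1440.0, "search": 720.0}
    memory_gb_hours = {"payments": 2880.0, "search": 5760.0}

    # The two views can disagree substantially, which is exactly why a
    # single blended number would be hard for AWS to pick on our behalf.
    print("CPU-weighted:   ", allocate(total_cost, vcpu_hours))
    print("memory-weighted:", allocate(total_cost, memory_gb_hours))
```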
Any idea whether this feature request has been accepted and work has started?
I've needed this for a long time; an update would be helpful.
For now, we build a report from the billing S3 bucket of vCPU-hours per service, though it is difficult for management to understand.
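In case it helps anyone trying the same workaround, here is a very rough sketch assuming the billing bucket holds a Cost and Usage Report (CUR) CSV export and that tasks carry a `Service` cost-allocation tag. The file name, the tag key, and the filter on usage types containing "vCPU-Hours" are assumptions for illustration, not details from the report described above.

```python
# Rough sketch: aggregate vCPU-hours per service tag from a CUR CSV export.
# The file name, tag key, and usage-type filter are assumptions.
import pandas as pd

CUR_FILE = "cur-export.csv"               # hypothetical local copy of the CUR
TAG_COLUMN = "resourceTags/user:Service"  # hypothetical cost-allocation tag key

df = pd.read_csv(CUR_FILE, low_memory=False)

# Keep only line items whose usage type is measured in vCPU-hours.
vcpu_rows = df[df["lineItem/UsageType"].str.contains("vCPU-Hours", na=False)]

# Sum vCPU-hours per service tag; untagged usage lands under "(untagged)".
report = (vcpu_rows
          .assign(service=vcpu_rows[TAG_COLUMN].fillna("(untagged)"))
          .groupby("service")["lineItem/UsageAmount"]
          .sum()
          .sort_values(ascending=False))

print(report)
```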