I'd like to be able to create small containers in terms of memory + CPU so that I can make more use of the 2 CPU and 1Gb memory limits on OpenShift Online.
However if I ever try to use low limits like:
```yaml
resources:
  limits:
    memory: '200Mi'
    cpu: '200m'
```
I get errors like this:
```
minimum cpu usage per Pod is 29m, but request is 23m., minimum memory usage per Pod is 150Mi, but request is 125829120., minimum memory usage per Container is 150Mi, but request is 120Mi., minimum cpu usage per Container is 29m, but request is 23m
```
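For context, minimums like these are typically enforced by a `LimitRange` object in the project. A sketch of what such a `LimitRange` might look like — the object name is hypothetical, and the 150Mi/29m values are taken from the error message above, not from the actual Online configuration:

```yaml
# Hypothetical LimitRange that would produce the per-Pod and per-Container
# minimums quoted in the error above; the name is an assumption.
apiVersion: v1
kind: LimitRange
metadata:
  name: resource-limits
spec:
  limits:
    - type: Pod
      min:
        cpu: 29m
        memory: 150Mi
    - type: Container
      min:
        cpu: 29m
        memory: 150Mi
```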
Right now (rounding up a little) these are about the smallest I've been able to get:
```yaml
resources:
  limits:
    memory: '250Mi'
    cpu: '250m'
```
But I'd like to run more small containers. e.g. to demo FaaS I'd like to have lots of little containers running; I'd be happy with about 50MB for each, really. e.g. here's the view from the OpenShift Online console of their resource usage:

so it's using 260MB of memory and barely any CPU, but my quota is already gone.
Is there a chance we could specify smaller memory & CPU limits?

@jstrachan The quota for Online and other resource restrictions were defined keeping various factors in mind. Can you please share your use case with me via email and we can discuss how best to support it.
FWIW, I have the same issue.
I need 4 containers + 1 init container; the minimum of 250MB per container kills it. We effectively can't deploy more than 4 containers in OpenShift Online, even if we don't need the full 1GB of memory.
Any news on this? I have the same problem here. I need to deploy a lot of low-memory, low-traffic containers; some of them average around 5MB of memory in production.
I would love to stick with OpenShift Online, since it's the most beautiful container platform I've come across. But the 4-containers-per-gig limit kills it.
We are in the process of reducing the minimum pod/container memory requirements. Currently, the target is setting the minimum memory to 200Mi (should be done in a few days) and we will continue to test/experiment and reduce it further to 100Mi.
Issues go stale after 90d of inactivity.
Mark the issue as fresh by commenting /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
Exclude this issue from closing by commenting /lifecycle frozen.
If this issue is safe to close now please do so with /close.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh by commenting /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
Exclude this issue from closing by commenting /lifecycle frozen.
If this issue is safe to close now please do so with /close.
/lifecycle rotten
/remove-lifecycle stale
We just reduced the min pod/container memory in OpenShift Online Pro to 100Mi. I'll keep you posted on developments here around reducing limits in Starter.
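With the minimum in Pro now at 100Mi, a stanza like the following should be accepted — this is a sketch, assuming the CPU minimum also permits values in this range:

```yaml
# Example limits under the new 100Mi minimum in OpenShift Online Pro;
# the 100m CPU figure is an assumption, not a documented minimum.
resources:
  limits:
    memory: '100Mi'
    cpu: '100m'
```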
/remove-lifecycle rotten
Rotten issues close after 30d of inactivity.
Reopen the issue by commenting /reopen.
Mark the issue as fresh by commenting /remove-lifecycle rotten.
Exclude this issue from closing again by commenting /lifecycle frozen.
/close
@openshift-bot: Closing this issue.
In response to this:
Rotten issues close after 30d of inactivity.
Reopen the issue by commenting /reopen.
Mark the issue as fresh by commenting /remove-lifecycle rotten.
Exclude this issue from closing again by commenting /lifecycle frozen.
/close
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.