Origin: permission denied in pods, using import docker-compose

Created on 17 May 2016 · 4 comments · Source: openshift/origin

I am trying the new import docker-compose feature and am running into several issues, such as permission denied inside the container. The same docker-compose file works fine with docker-compose itself.

Version
$ openshift version
openshift v1.3.0-alpha.0-559-g14d77ab-dirty
kubernetes v1.3.0-alpha.1-331-g0522e63
etcd 2.3.0
Steps To Reproduce

Here is the compose file I am using:

version: "2"

services:
  mariadb:
    image: centos/mariadb
    ports:
      - "3306"
    environment:
      MYSQL_ROOT_PASSWORD: wordpress
      MYSQL_DATABASE: wordpress
      MYSQL_PASSWORD: wordpress
      MYSQL_USER: wordpress

  wordpress:
    image: wordpress
    ports:
      - "8080:80"
    depends_on:
      - mariadb
    restart: always
    environment:
      WORDPRESS_DB_HOST: mariadb
      WORDPRESS_DB_NAME: wordpress
      WORDPRESS_DB_PASSWORD: wordpress
      WORDPRESS_DB_USER: wordpress
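
For context, the file above was fed to the experimental compose importer. The invocation below is only a sketch of how that is typically done in this alpha build; the project name and the exact flag spelling are assumptions, not taken from the report:

$ oc new-project compose --config admin.kubeconfig
$ oc import docker-compose -f docker-compose.yml --config admin.kubeconfig   # experimental command; flags may differ in this build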

Pods:

$ oc get pods --config admin.kubeconfig
NAME                READY     STATUS             RESTARTS   AGE
mariadb-1-2j22m     1/1       Running            0          31m
wordpress-1-hwhsg   0/1       CrashLoopBackOff   10         30m

wordpress pod logs

$ oc logs wordpress-1-hwhsg --config admin.kubeconfig | tail -10
tar: ./wp-includes/meta.php: Cannot open: No such file or directory
tar: ./wp-includes: Cannot mkdir: Permission denied
tar: ./wp-includes/ms-blogs.php: Cannot open: No such file or directory
tar: ./wp-includes: Cannot mkdir: Permission denied
tar: ./wp-includes/ms-default-constants.php: Cannot open: No such file or directory
tar: ./wp-includes: Cannot mkdir: Permission denied
tar: ./wp-includes/ms-default-filters.php: Cannot open: No such file or directory
tar: ./wp-includes: Cannot mkdir: Permission denied
tar: ./wp-includes/ms-deprecated.php: Cannot open: No such file or directory
tar: ./wp-includes: Cannot mkdir: Permission denied

The complete log: http://paste.fedoraproject.org/367599/49740014/ or https://gist.github.com/surajssd/6b2634be2cd90daca0910ac34cf39f2c

Additional Information

Other logs:

$ oc logs mariadb-1-2j22m --config admin.kubeconfig
Running mysql_install_db ...
Installing MariaDB/MySQL system tables in '/var/lib/mysql' ...
160517 14:28:02 [Note] /usr/libexec/mysqld (mysqld 5.5.47-MariaDB) starting as process 35 ...
OK
Filling help tables...
160517 14:28:03 [Note] /usr/libexec/mysqld (mysqld 5.5.47-MariaDB) starting as process 43 ...
OK
To start mysqld at boot time you have to copy
support-files/mysql.server to the right place for your system
PLEASE REMEMBER TO SET A PASSWORD FOR THE MariaDB root USER !
To do so, start the server, then issue the following commands:
'/usr/bin/mysqladmin' -u root password 'new-password'
'/usr/bin/mysqladmin' -u root -h mariadb-1-2j22m password 'new-password'
Alternatively you can run:
'/usr/bin/mysql_secure_installation'
which will also give you the option of removing the test
databases and anonymous user created by default.  This is
strongly recommended for production servers.
See the MariaDB Knowledgebase at http://mariadb.com/kb or the
MySQL manual for more instructions.
You can start the MariaDB daemon with:
cd '/usr' ; /usr/bin/mysqld_safe --datadir='/var/lib/mysql'
You can test the MariaDB daemon with mysql-test-run.pl
cd '/usr/mysql-test' ; perl mysql-test-run.pl
Please report any problems at http://mariadb.org/jira
The latest information about MariaDB is available at http://mariadb.org/.
You can find additional information about the MySQL part at:
http://dev.mysql.com
Support MariaDB development by buying support/new features from MariaDB
Corporation Ab. You can contact us about this at [email protected].
Alternatively consider joining our community based development effort:
http://mariadb.com/kb/en/contributing-to-the-mariadb-project/
Finished mysql_install_db
160517 14:28:03 mysqld_safe Logging to '/var/log/mariadb/mariadb.log'.
160517 14:28:03 mysqld_safe Starting mysqld daemon with databases from /var/lib/mysql
Labels: component/composition, lifecycle/stale, priority/P2

All 4 comments

This is another compose file I tried, which hit similar errors:

[vagrant@localhost compose]$ cat docker-compose.yml 
version: "2"
networks:
  mynet:
services:
  db:
    container_name: "db"
    image: postgres
    networks:
      - mynet
    ports:
       - "5432:5432"
    environment:
      - POSTGRES_USER=ticketmonster
      - POSTGRES_PASSWORD=ticketmonster-docker
  modcluster:
    container_name: "modcluster"
    networks:
      - mynet
    image: karm/mod_cluster-master-dockerhub
    environment:
      - MODCLUSTER_NET=192. 172. 10. 179. 213.
      - MODCLUSTER_PORT=80
    ports:
       - "80:80"
  wildfly:
    image: rafabene/wildfly-ticketmonster-ha
    #build: ../Dockerfiles/ticketmonster-ha/
    networks:
      - mynet

Pods

[vagrant@localhost compose]$ oc get pods --config ~/admin.kubeconfig 
NAME                 READY     STATUS             RESTARTS   AGE
db-1-deploy          1/1       Running            0          10m
db-1-tamye           0/1       CrashLoopBackOff   6          9m
modcluster-1-fpdy3   0/1       CrashLoopBackOff   6          9m
wildfly-1-0g9n2      0/1       CrashLoopBackOff   6          9m

Pod-wise logs:
PostgreSQL db pod:

[vagrant@localhost compose]$ oc logs db-1-tamye --config ~/admin.kubeconfig 
chmod: changing permissions of ‘/var/lib/postgresql/data’: Operation not permitted
[vagrant@localhost compose]$ oc logs modcluster-1-fpdy3 --config ~/admin.kubeconfig 
Starting httpd with mod_cluster
===============================
MODCLUSTER_PORT            80
MODCLUSTER_ADVERTISE       On
MODCLUSTER_ADVERTISE_GROUP 224.0.1.105:23364
MODCLUSTER_NET             192. 172. 10. 179. 213.
MODCLUSTER_MANAGER_NET     192. 172. 10. 179. 213.
Creating /opt/httpd-build/conf/extra/mod_cluster.conf configuration file:
/docker-entrypoint.sh: line 29: /opt/httpd-build/conf/extra/mod_cluster.conf: Permission denied

Wildfly pod:

[vagrant@localhost compose]$ oc logs wildfly-1-0g9n2 --config ~/admin.kubeconfig 
=========================================================================
  JBoss Bootstrap Environment
  JBOSS_HOME: /opt/jboss/wildfly
  JAVA: /usr/lib/jvm/java/bin/java
  JAVA_OPTS:  -server -Xms64m -Xmx512m -XX:MetaspaceSize=96M -XX:MaxMetaspaceSize=256m -Djava.net.preferIPv4Stack=true -Djboss.modules.system.pkgs=org.jboss.byteman -Djava.awt.headless=true
=========================================================================
java.lang.IllegalArgumentException: Failed to instantiate class "org.jboss.logmanager.handlers.PeriodicRotatingFileHandler" for handler "FILE"
    at org.jboss.logmanager.config.AbstractPropertyConfiguration$ConstructAction.validate(AbstractPropertyConfiguration.java:116)
    at org.jboss.logmanager.config.LogContextConfigurationImpl.doPrepare(LogContextConfigurationImpl.java:335)
    at org.jboss.logmanager.config.LogContextConfigurationImpl.prepare(LogContextConfigurationImpl.java:288)
    at org.jboss.logmanager.config.LogContextConfigurationImpl.commit(LogContextConfigurationImpl.java:297)
    at org.jboss.logmanager.PropertyConfigurator.configure(PropertyConfigurator.java:546)
    at org.jboss.logmanager.PropertyConfigurator.configure(PropertyConfigurator.java:97)
    at org.jboss.logmanager.LogManager.readConfiguration(LogManager.java:514)
    at org.jboss.logmanager.LogManager.readConfiguration(LogManager.java:476)
    at java.util.logging.LogManager$3.run(LogManager.java:399)
    at java.util.logging.LogManager$3.run(LogManager.java:396)
    at java.security.AccessController.doPrivileged(Native Method)
    at java.util.logging.LogManager.readPrimordialConfiguration(LogManager.java:396)
    at java.util.logging.LogManager.access$800(LogManager.java:145)
    at java.util.logging.LogManager$2.run(LogManager.java:345)
    at java.security.AccessController.doPrivileged(Native Method)
    at java.util.logging.LogManager.ensureLogManagerInitialized(LogManager.java:338)
    at java.util.logging.LogManager.getLogManager(LogManager.java:378)
    at org.jboss.modules.Main.main(Main.java:482)
Caused by: java.lang.reflect.InvocationTargetException
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:422)
    at org.jboss.logmanager.config.AbstractPropertyConfiguration$ConstructAction.validate(AbstractPropertyConfiguration.java:114)
    ... 17 more
Caused by: java.io.FileNotFoundException: /opt/jboss/wildfly/standalone/log/server.log (Permission denied)
    at java.io.FileOutputStream.open0(Native Method)
    at java.io.FileOutputStream.open(FileOutputStream.java:270)
    at java.io.FileOutputStream.<init>(FileOutputStream.java:213)
    at org.jboss.logmanager.handlers.FileHandler.setFile(FileHandler.java:151)
    at org.jboss.logmanager.handlers.PeriodicRotatingFileHandler.setFile(PeriodicRotatingFileHandler.java:102)
    at org.jboss.logmanager.handlers.FileHandler.setFileName(FileHandler.java:189)
    at org.jboss.logmanager.handlers.FileHandler.<init>(FileHandler.java:119)
    at org.jboss.logmanager.handlers.PeriodicRotatingFileHandler.<init>(PeriodicRotatingFileHandler.java:70)
    ... 22 more
java.util.concurrent.ExecutionException: Operation failed
    at org.jboss.threads.AsyncFutureTask.operationFailed(AsyncFutureTask.java:74)
    at org.jboss.threads.AsyncFutureTask.get(AsyncFutureTask.java:268)
    at org.jboss.as.server.Main.main(Main.java:103)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:497)
    at org.jboss.modules.Module.run(Module.java:329)
    at org.jboss.modules.Main.main(Main.java:507)
Caused by: org.jboss.msc.service.StartException in service jboss.as: Failed to start service
    at org.jboss.msc.service.ServiceControllerImpl$StartTask.run(ServiceControllerImpl.java:1904)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.IllegalStateException: WFLYDR0006: Directory /opt/jboss/wildfly/standalone/data/content is not writable
    at org.jboss.as.repository.ContentRepository$Factory$ContentRepositoryImpl.<init>(ContentRepository.java:188)
    at org.jboss.as.repository.ContentRepository$Factory.addService(ContentRepository.java:154)
    at org.jboss.as.server.ApplicationServerService.start(ApplicationServerService.java:146)
    at org.jboss.msc.service.ServiceControllerImpl$StartTask.startService(ServiceControllerImpl.java:1948)
    at org.jboss.msc.service.ServiceControllerImpl$StartTask.run(ServiceControllerImpl.java:1881)
    ... 3 more

If you run oc status, you should see warnings about this that instruct you what to do (you may have to pass oc status -v).
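
A minimal example of what that looks like, assuming the same admin kubeconfig used elsewhere in this thread:

$ oc status -v --config ~/admin.kubeconfig   # verbose output includes suggestions for pods whose images want to run as root under the restricted SCC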

@smarterclayton thanks. After some time of CrashLoopBackOff, running oc status -v showed that the processes in the containers were not allowed to run as the super user, which is why they were failing, so I had to do

oadm policy add-scc-to-user anyuid -n compose -z default --config ~/admin.kubeconfig

After that, the containers that were failing with permission denied now work, but I don't know if this is the right thing to do.
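
A few commands that can confirm the grant took effect. This is a sketch of one possible verification flow, not something from the thread; the pod name is reused from the original report:

$ oc get scc --config ~/admin.kubeconfig                                        # 'anyuid' should be listed among the SCCs
$ oc describe scc anyuid --config ~/admin.kubeconfig | grep -i 'run as user'    # anyuid uses the RunAsAny strategy
$ oc delete pod wordpress-1-hwhsg --config ~/admin.kubeconfig                   # SCCs are applied at admission, so let the deployment recreate the pod
$ oc get pods --config ~/admin.kubeconfig                                       # find the new pod name
$ oc get pod <new-pod> -o yaml --config ~/admin.kubeconfig | grep openshift.io/scc   # the annotation should now read anyuid

Note that granting anyuid to the default service account lets every pod in the compose project run as root, which is why it fixes these images but is a fairly broad relaxation of the restricted SCC.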

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.

/lifecycle stale
