Gitea: Feature Request: Move INTERNAL_TOKEN value out of app.ini

Created on 20 Dec 2017 · 44 comments · Source: go-gitea/gitea

  • Gitea version (or commit ref): latest
  • Git version: latest
  • Operating system: Linux
  • Database (use [x]):

    • [ ] PostgreSQL

    • [ ] MySQL

    • [ ] MSSQL

    • [ ] SQLite

  • Can you reproduce the bug at https://try.gitea.io:

    • [ ] Yes (provide example URL)

    • [ ] No

    • [x] Not relevant

  • Log gist:

Description

I'd like to request that the INTERNAL_TOKEN value, which is currently written into app.ini during Gitea startup, be moved to a separate file/config from the user-defined app.ini. It should probably be placed in a lock file, pid file, separate Gitea-system-controlled config file, etc.

The reason for this is that a lot of organizations use automation tools such as Puppet, SaltStack, Ansible, etc. to deploy configurations to servers. If the app.ini (user-defined config file) is being managed by one of these, it will overwrite the existing app.ini that includes the INTERNAL_TOKEN. This is less than ideal, and obviously breaks the functioning of Gitea.

References:

kind/refactor, reviewed/confirmed

Most helpful comment

@Nodraak There's already a PR: https://github.com/kubernetes/charts/pull/3408 but we're blocked on this issue :+1:

All 44 comments

Same problem using Kubernetes and ConfigMaps; config files should not be used to save any kind of state.

Any chance of fixing this in the near future? It's annoying to use idempotent config management with this...

If you provide a pull request, chances are high that it will be merged.

:+1: will have a look

@xoxys @strk

Any updates? We are unfortunately blocked on this and unable to add Gitea as a Helm chart to Kubernetes, since Gitea tries to write to app.ini... Even though app.ini is only used for configuration, INTERNAL_TOKEN is being updated within it.

See errors: https://storage.googleapis.com/kubernetes-jenkins/pr-logs/pull/charts/3408/pull-charts-e2e/7056/build-log.txt

Also our Helm chart PR here: https://github.com/kubernetes/charts/pull/3408

ping @lafriks

Specifically:

I0326 19:50:47.406] ---Logs from container gitea in pod gitea-7056-1-gitea-5f94847576-krj5q:---
I0326 19:50:47.658] Generating /data/ssh/ssh_host_ed25519_key...
I0326 19:50:47.659] chown: /data/gitea/conf/app.ini: Read-only file system
I0326 19:50:47.659] Mar 26 19:32:40 syslogd started: BusyBox v1.26.2
I0326 19:50:47.659] Generating /data/ssh/ssh_host_rsa_key...
I0326 19:50:47.659] Generating /data/ssh/ssh_host_dsa_key...
I0326 19:50:47.660] Generating /data/ssh/ssh_host_ecdsa_key...
I0326 19:50:47.660] /etc/ssh/sshd_config line 32: Deprecated option UsePrivilegeSeparation
I0326 19:50:47.660] Mar 26 19:32:40 sshd[14]: Server listening on :: port 22.
I0326 19:50:47.660] Mar 26 19:32:40 sshd[14]: Server listening on 0.0.0.0 port 22.
I0326 19:50:47.661] 2018/03/26 19:32:40 [...s/setting/setting.go:924 NewContext()] [E] Error saving generated JWT Secret to custom config: open /data/gitea/conf/app.ini: read-only file system
I0326 19:50:47.661] chown: /data/gitea/conf/app.ini: Read-only file system
I0326 19:50:47.661] 2018/03/26 19:32:41 [...s/setting/setting.go:924 NewContext()] [E] Error saving generated JWT Secret to custom config: open /data/gitea/conf/app.ini: read-only file system
I0326 19:50:47.662] chown: /data/gitea/conf/app.ini: Read-only file system
I0326 19:50:47.662] 2018/03/26 19:32:42 [...s/setting/setting.go:924 NewContext()] [E] Error saving generated JWT Secret to custom config: open /data/gitea/conf/app.ini: read-only file system
I0326 19:50:47.662] chown: /data/gitea/conf/app.ini: Read-only file system

Which is causing the issue (Gitea is trying to write to app.ini when it shouldn't..)

I guess any update would be mentioned here.
Your best bet is providing a pull request

@cdrage you can pre-generate the token and other values using the gitea CLI `generate` command that was added in 1.4.0 (see https://docs.gitea.io/en-us/command-line/)
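
For illustration, a minimal sketch of that approach (subcommand names as documented for the 1.4.x CLI; how you feed the values into app.ini depends on your tooling):

```sh
# Pre-generate the secrets once, instead of letting the server write them at startup.
INTERNAL_TOKEN="$(gitea generate secret INTERNAL_TOKEN)"
SECRET_KEY="$(gitea generate secret SECRET_KEY)"

# Feed these values into the app.ini template managed by Puppet/Ansible/Helm,
# so the file never has to change at runtime.
echo "INTERNAL_TOKEN = ${INTERNAL_TOKEN}"
echo "SECRET_KEY     = ${SECRET_KEY}"
```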

@lafriks Yup, I saw that from your comments on the other issues. Yes, that helps with configuring and setting up your .ini file. However, this issue is about what Gitea should do when INTERNAL_TOKEN isn't set.

The problem is that app.ini, or whatever configuration file you use, should be immutable when deploying (through Ansible, a read-only file system, etc.).

My thinking:

If INTERNAL_TOKEN does not exist, the token should be automatically generated and held internally rather than being written back to the configuration file.

Reference: https://github.com/go-gitea/gitea/blob/96c268c0fcc22604103f67821d66fef39944e80b/modules/setting/setting.go#L924
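
A minimal sketch of that idea (not Gitea's actual code; the helper name and token format are made up for illustration):

```go
package main

import (
	"crypto/rand"
	"encoding/base64"
	"fmt"
)

// loadInternalToken returns the configured token if present, otherwise it
// generates a random one that lives only in process memory. Nothing is
// written back to app.ini.
func loadInternalToken(configured string) (string, error) {
	if configured != "" {
		return configured, nil
	}
	buf := make([]byte, 32)
	if _, err := rand.Read(buf); err != nil {
		return "", err
	}
	return base64.RawURLEncoding.EncodeToString(buf), nil
}

func main() {
	token, err := loadInternalToken("") // empty: simulate INTERNAL_TOKEN missing from app.ini
	if err != nil {
		panic(err)
	}
	fmt.Println("generated in-memory token:", token)
}
```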

I'll most likely push a PR later this week once I have enough time!

Hi @cdrage! I am very much interested in deploying Gitea with a k8s Chart. If you need any help (testing, review, code, ...), don't hesitate to ping me :)

@Nodraak There's already a PR: https://github.com/kubernetes/charts/pull/3408 but we're blocked on this issue :+1:

Maybe put it in a file, like a pid file? @cdrage the INTERNAL_TOKEN will be shared by the gitea web and gitea hook processes, so I don't think keeping it only in memory is enough. Any ideas?

@lunny Yeah, anything else would be preferable; a pid file would probably do. The code at https://github.com/go-gitea/gitea/blob/96c268c0fcc22604103f67821d66fef39944e80b/modules/setting/setting.go#L924 shows that INTERNAL_TOKEN is actually the only variable that is written into the configuration; nothing else modifies the configuration file after launching.

Alternatively (as a hack, for now) we could set the internal token as part of the ConfigMap within Kubernetes (before launching). But then again, it would be nice to have it automatically generated instead.

Anyways! I'll try to push a PR this/next week.

TL;DR: for internal gitea calls, why not call the function directly, instead of using an HTTP API?


Reading this thread again, I have some trouble understanding the purpose of INTERNAL_TOKEN.

Correct me if I'm wrong, but from what I have understood:

I've played a little with the code and managed to substitute the calls (replacing HTTP requests with function calls) and remove all references to INTERNAL_TOKEN. I have not tested extensively, but all the tests pass.

I won't submit a PR yet, but please have a look at my two commits (the most relevant is https://github.com/Nodraak/gitea/commit/c20afe9e0a571e4b5cacd645ccdacbb52068da2a "Substitute calls"), and if they properly solve this issue, I can open a PR. I am not aware of all of Gitea's trade-offs, so I probably missed something.


@cdrage: I did see that you planned to work on a PR, sorry for "stealing" your work. I just tried to naively implement my idea :)

@Nodraak what you have removed was made specifically so that the ssh process would not need to create a database connection that updates data, as this totally breaks the sqlite3 db backend (sqlite can only have a single connection that updates data).

Ahah, so that was what I missed. Actually, I'm stupid, I should have run a git blame and dug into the git history. Anyway.

So, if I understand correctly, concurrent write access and timeouts are handled by the HTTP protocol (I guess gitea has only one webserver, so there can be only one access to the DB), instead of directly by the database. I don't see why this is a better solution; IMO it adds a lot of complexity (due to indirect calls). Moreover, SQLite supports concurrent write access: the transaction will retry until the DB is unlocked or the timeout expires. By default the timeout is 0, so any concurrent write will immediately return Error: database is locked, but with _busy_timeout one can set an appropriate timeout, for instance a few seconds (the _busy_timeout parameter is passed when opening the connection).
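
To illustrate the _busy_timeout point, a sketch using the mattn/go-sqlite3 driver (this is not how Gitea/xorm opens its connection):

```go
package main

import (
	"database/sql"
	"log"

	_ "github.com/mattn/go-sqlite3"
)

func main() {
	// With a 5-second busy timeout in the DSN, a writer waits for the lock
	// instead of failing immediately with "database is locked".
	db, err := sql.Open("sqlite3", "file:gitea.db?_busy_timeout=5000")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	if _, err := db.Exec("CREATE TABLE IF NOT EXISTS demo (id INTEGER PRIMARY KEY)"); err != nil {
		log.Fatal(err)
	}
}
```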

@Nodraak internal calls will most probably increase if we want to support clustering of some kind.

As for sqlite locking, it is a bit more difficult because we use xorm, and there is also a Go layer rather than direct C calls.

There are two benefits to this change compared to having both gitea web and gitea hook connect to mysql or sqlite directly. One is that it resolves the sqlite shared-write problem. The other is that SSH could be deployed on a different machine from the web server in the future.

Did anyone else happen to start work on this recently? This is still blocking us from deploying Gitea to Kubernetes: https://github.com/kubernetes/charts/pull/3408

@lafriks @Nodraak

Hi

Sadly I could not take a look. For now my priorities have changed, and I can't say whether I will have time to work on this issue in the future.

We could create a single file at data/internal/lock to share between multiple processes?
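
A rough sketch of that shared-file idea (path and helper name are illustrative, not Gitea's):

```go
package main

import (
	"crypto/rand"
	"encoding/base64"
	"fmt"
	"os"
	"path/filepath"
)

// sharedInternalToken reads the token from a file under the data directory if
// it exists; otherwise the first process generates it and writes it there, so
// the web and hook processes share the same value without touching app.ini.
func sharedInternalToken(dataDir string) (string, error) {
	path := filepath.Join(dataDir, "internal", "token")
	if b, err := os.ReadFile(path); err == nil {
		return string(b), nil
	}
	buf := make([]byte, 32)
	if _, err := rand.Read(buf); err != nil {
		return "", err
	}
	token := base64.RawURLEncoding.EncodeToString(buf)
	if err := os.MkdirAll(filepath.Dir(path), 0o700); err != nil {
		return "", err
	}
	// 0600 so only the gitea user can read the secret.
	if err := os.WriteFile(path, []byte(token), 0o600); err != nil {
		return "", err
	}
	return token, nil
}

func main() {
	token, err := sharedInternalToken("data")
	if err != nil {
		panic(err)
	}
	fmt.Println(token)
}
```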

@cdrage Is this indeed blocking? There have been others who have deployed Gitea to kubernetes

While I certainly have been able to work around the INTERNAL_TOKEN issue myself in Puppet and elsewhere in automation, the workarounds are less than ideal. It makes life difficult to manage Gitea settings while Gitea is modifying the settings file that I'm laying down through automation. The applications tend to fight each other. While this may not be critical, it is definitely a fairly large hiccup.

Thank you to anyone who has the time to look into this and place a PR! You would be my savior, that's for sure. ;-)

@techknowlogick Yes. You're unable to create a ConfigMap for the app.ini settings. If you go by the links you posted, it's using volumes for configuration management, rather than the better way of using ConfigMaps (more Kubernetes-esque).

@minoru7 What was your work-around?

@cdrage Well, for Puppet, I have the configuration file laid down only if it doesn't already exist. That's fine for a new system, but it defeats the purpose of Puppet and continuous configuration. If I make a change to the configuration, which happens on occasion, I have to manually delete the app.ini for it to take effect. As for Kubernetes, I haven't had enough time yet to start digging into that, so unfortunately I don't have any help for you there, sorry.
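
That workaround roughly corresponds to something like this in Puppet (a sketch; the path, source, and ownership are placeholders, not taken from this thread):

```puppet
file { '/etc/gitea/app.ini':
  ensure  => file,
  source  => 'puppet:///modules/gitea/app.ini',
  replace => false,  # only lay the file down if it does not already exist
  owner   => 'git',
  group   => 'git',
  mode    => '0640',
}
```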

@minoru7 Thanks!

That helps, since it's the exact same issue as when deploying to Kubernetes. Since app.ini should be immutable, it's difficult to have continuous configuration with INTERNAL_TOKEN making changes to the app.ini file.

So essentially it's also blocking any Puppet deployment too.

Could INTERNAL_TOKEN be set when first set up?

@lunny I believe, last I remember, that it was linked to the daemon or somesuch. The INTERNAL_TOKEN value changes upon each restart of the service. So that's why my original request mentioned maybe adding this to a pid file or something instead.

The only place the token is changed is https://github.com/go-gitea/gitea/blob/master/modules/setting/setting.go#L918, when INTERNAL_TOKEN is empty. If it is changed on each restart, that may be a bug.

Yeah. The problem is that on each restart, a new token is generated. If you decide to set one yourself at start-up (providing INTERNAL_TOKEN in app.ini), the service runs into an error trying to modify app.ini despite the token already being set (see my comments above for the log).

OK. I will investigate it.

So, as far as I can see, the code should only generate this token if it _does not_ exist. Is this still an issue?

I'm using version 1.3.2. Last time I attempted it, I generated a token and inserted it into the ini file. When I restarted the service it overwrote my InternalToken with a newly generated one, which then threw Puppet into overwriting it, which restarts the service, which Puppet then overwrites again. It was a bad situation. Not only that, but if I'm rolling out a few of these servers, I want to be able to automate that Token generation in that case. Or otherwise, the best scenario would be to take the Token out of the config file that Puppet would need to manage. I have not attempted a newer version of Gitea, so maybe you guys have sorted out my original issue? I'll give it a try soon when time permits.

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs during the next 2 weeks. Thank you for your contributions.

@stale nope plz

Yeah, this is still a major blocker getting Gitea on Kubernetes.

@cdrage I followed your helm chart to deploy gitea on Kubernetes 1.11.6, added INTERNAL_TOKEN in configmap.yaml, and it works well. So can we add internalToken to the gitea chart's values.yaml first? Once this feature is completed, we can change it.

Below is my configmap.yaml:

    [security]
    INTERNAL_TOKEN = abcdef123456
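
For context, inside a full ConfigMap manifest that snippet would sit roughly like this (the name gitea-config is hypothetical; the token value is the placeholder from the comment above):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: gitea-config
data:
  app.ini: |
    [security]
    INTERNAL_TOKEN = abcdef123456
```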

@pytimer The problem is that INTERNAL_TOKEN is the only value that isn't hard-coded / doesn't stay fixed for Gitea. It's not "Kubernetes-like" to have to modify configmap.yaml afterwards. That's why we should move INTERNAL_TOKEN out of app.ini and somewhere else.

I think the real problem is that the internal token should not change, and I can't find why it would in the code.
PR https://github.com/go-gitea/gitea/pull/3531 should have introduced all the needed generators to provide a valid and stable configuration.

In fact I like @sapk's idea.
