Test-infra: podutils: add a timestamp wrapper for build log?

Created on 8 Nov 2018 · 23 comments · Source: kubernetes/test-infra

I can see this is useful for debugging, and it is missing from bootstrap. Thoughts?

/area prow
cc @cjwagner @stevekuznetsov @fejta @BenTheElder

area/prow kind/feature priority/important-longterm

All 23 comments

I would be somewhat surprised if this didn't already exist as a polished tool.

Thoughts? I think this is also a big difference between bootstrap and podutils.

You could append timestamps to newlines in a different implementation of the writer here:

https://github.com/kubernetes/test-infra/blob/master/prow/entrypoint/run.go#L89

However, if we want to get rid of entrypoint and the wrapper in general maybe we want a different approach?

It's probably fine; nobody has complained yet.

Check things like this first, maybe :^) https://unix.stackexchange.com/a/26797

Also, for k8s, ginkgo has a wrapper, so now we actually have double timestamps, which looks a bit odd...

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

We have been waiting for this feature in order to migrate to podutils (https://github.com/knative/test-infra/issues/265), and we are now at a point where we have to migrate. The timestamp would be very helpful to us. As for @krzyzacy's concern about double timestamps, maybe there could be a boolean somewhere to make this optional?
/remove-lifecycle stale

The idea is solid; the implementation should not be too hard. Do you think you could try a pull request to add it?

Sounds good, I'll try to put up a pull request

@chaodaiG - have you begun the implementation? If not, I can pick this up.

/assign

@clarketm, it's not a top priority for us anymore; feel free to pick it up.

@krzyzacy @stevekuznetsov
Do we want this to be configurable via an entrypoint option? Also, should this be enabled by default?

Do we want this to be configurable via an entrypoint option? Also, should this be enabled by default?

Since this is technically a breaking change, it should be optional. Additionally, if something is already printing timestamps, we'd want to avoid printing them twice.
In practice it shouldn't break anyone, though (hopefully no one is parsing logs...), and this would be generally useful, so it might be appropriate to have it default to enabled?

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

/reopen
/remove-lifecycle rotten

It looks like this issue hasn't been fixed.

@chizhg: You can't reopen an issue/PR unless you authored it or you are a collaborator.

In response to this:

/reopen
/remove-lifecycle rotten

It looks like this issue hasn't been fixed.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

Can this be re-opened?

Not sure if this was known by everyone here since it wasn't called out - kubectl logs --timestamps already exists, so it's possible we can do this without any sort of wrapper.

The uploaded logs are not collected via kubectl. The entrypoint wrapper binary writes subprocess output to disk, which is then uploaded by the sidecar.

This could probably be a podutils option in any case, but you may have to escalate out of band or implement this yourself if you need it.

On Wed, Jul 22, 2020, 18:55 John Howard notifications@github.com wrote:

Can this be re-opened?

Not sure if this was known by everyone here since it wasn't called out - kubectl logs --timestamps already exists, so it's possible we can do this without any sort of wrapper.

