Linkerd2: Intermittent 502 status code

Created on 7 May 2019 · 42 comments · Source: linkerd/linkerd2

Bug Report

What is the issue?

We have a RESTful application where the client receives intermittent 502 status codes, but the application itself logs a 201. If we disable linkerd2, we are unable to reproduce this issue.

The basic traffic flow is as follows (all supposedly HTTP/1.1):
Client -> Ambassador(envoy) -(via linkerd2)-> App
(see additional context for diagram)

This only happens for one very __particular route__, which also calls a cluster-external HTTP service; no other routes are affected so far!

How can it be reproduced?

We tried to reproduce this with an artificial setup, using ambassador as the ingress and httpbin as the application, both meshed with linkerd2. However, this was unsuccessful: we were unable to reproduce the issue outside our production deployments or with other routes.

Logs, error output, etc

In the linkerd sidecar attached to ambassador, the following error pops up whenever the route fails:

[figo-ambassador-586c797dc-p9pt8 linkerd-proxy] WARN [  1861.009733s] proxy={server=out listen=127.0.0.1:4140 remote=10.7.73.113:44428} linkerd2_proxy::proxy::http::orig_proto unknown l5d-orig-proto header value: "-"
[figo-ambassador-586c797dc-p9pt8 linkerd-proxy] WARN [  1861.009760s] proxy={server=out listen=127.0.0.1:4140 remote=10.7.73.113:44428} hyper::proto::h1::role response with HTTP2 version coerced to HTTP/1.1
[figo-ambassador-586c797dc-p9pt8 linkerd-proxy] ERR! [  1864.515657s] proxy={server=out listen=127.0.0.1:4140 remote=10.7.73.113:44428} linkerd2_proxy::app::errors unexpected error: http2 general error: protocol error: unspecific protocol error detected
[figo-ambassador-586c797dc-7s9x6 linkerd-proxy] ERR! [  1833.975088s] proxy={server=out listen=127.0.0.1:4140 remote=10.7.69.131:57912} linkerd2_proxy::app::errors unexpected error: http2 general error: protocol error: unspecific protocol error detected

(The warnings were caused by a previous successful call)

We increased the log level via config.linkerd.io/proxy-log-level: trace; the resulting trace log is here:
https://gist.github.com/trevex/ca0791aad3402137ed551b251970d329
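
For reference, a minimal sketch of where an annotation like this goes (the deployment and image names here are made up); it sits on the pod template so the injected proxy picks it up:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: prod-api                                   # hypothetical name
spec:
  template:
    metadata:
      annotations:
        config.linkerd.io/proxy-log-level: trace   # raises the sidecar's log level
    spec:
      containers:
      - name: app
        image: example/prod-api:latest             # placeholder image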

linkerd check output

kubernetes-api
--------------
√ can initialize the client
√ can query the Kubernetes API

kubernetes-version
------------------
√ is running the minimum Kubernetes API version
√ is running the minimum kubectl version

linkerd-existence
-----------------
√ control plane namespace exists
√ controller pod is running
√ can initialize the client
√ can query the control plane API

linkerd-api
-----------
√ control plane pods are ready
√ control plane self-check
√ [kubernetes] control plane can talk to Kubernetes
√ [prometheus] control plane can talk to Prometheus
√ no invalid service profiles

linkerd-version
---------------
√ can determine the latest version
√ cli is up-to-date

control-plane-version
---------------------
√ control plane is up-to-date
√ control plane and cli versions match

Status check results are √

Environment

  • Kubernetes Version: 1.14.1
  • Cluster Environment: bare-metal
  • Host OS: ContainerLinux
  • Linkerd version: stable-2.3.0
  • CNI: Cilium 1.4.4
  • DNS: CoreDNS 1.5.0

Possible solution

Additional context

Diagram from Slack:
https://files.slack.com/files-pri/T0JV2DX9R-FJA61H9CH/ambassador-linkerd2.png

Please let me know if I can provide more information :)

area/proxy needs/repro priority/P0

All 42 comments

I took a look at the logs you posted and I noticed these messages:

TRCE [   116.240740s] proxy={client=out dst=10.7.73.102:8000 proto=Http2} h2::frame::headers hpack decoding error; err=InvalidStatusCode
DBUG [   116.240743s] proxy={client=out dst=10.7.73.102:8000 proto=Http2} h2::codec::framed_read connection error PROTOCOL_ERROR -- failed HPACK decoding; err=Hpack(InvalidStatusCode)

It looks like the Ambassador proxy may be seeing a malformed HTTP/2 frame...it might be worth looking at the traffic between that proxy and 10.7.73.102 using a tool like Wireshark.
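
For example, something along these lines captured from the Ambassador pod's network namespace could be opened in Wireshark (the interface and file name are just placeholders; port 8000 is taken from the logs above):

tcpdump -i any -w ambassador-to-app.pcap host 10.7.73.102 and port 8000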

I'm not sure Ambassador has anything to do with that, @hawkw -- it at least appears that's a message on an outbound client (not a server, from Ambassador). What is running at 10.7.73.102? Is that a kubernetes Service IP? A pod IP? Is linkerd injected on that pod?

Yeah, didn't mean to imply the problem was on the Ambassador side of that proxy --- I just meant the proxy injected into the Ambassador pod (which is the only one we have logs for thus far)

@trevex can maybe answer what 10.7.73.102 is/was. What I do know is that when we disabled linkerd2 and uninjected it from the pods, everything worked fine.

@christianhuening @trevex I'm guessing that pod is the prodAPI pod IP... Can you share proxy logs from a prodAPI pod during this sort of error?

For context: the proxy uses HTTP/2 when communicating with other proxies... this looks like the other pod is somehow responding with a malformed set of headers! So this is a great find, we'll definitely want to dig in more here.

As a workaround, I'd bet that you can install/upgrade the control plane with --disable-h2-upgrade to avoid hitting this error. But we'll definitely want to get a clearer repro for this!
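
Roughly, a sketch of what I mean, assuming you (re)install the control plane with the flag; adjust to however you normally install:

:; linkerd install --disable-h2-upgrade |kubectl apply -f -
# then roll the injected pods so they pick up the new proxy settings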

@olix0r Yes, you are correct: 10.7.73.102 is a pod IP from one of the application API pods.

I will collect the logs of the related linkerd-proxy as well.

API linkerd-proxy log:
https://gist.github.com/trevex/01a5dc9fdf658bac2576c7e236da2176
Ambassador linkerd-proxy log:
https://gist.github.com/trevex/0b5892b9b35dd667396dae2323a09026

The 502 error happened at least 3 times during that period.

I tried to reproduce this in a test case: https://github.com/trevex/linkerd-test
The first test just uses httpbin, while for the second I implemented a simple go test application.

The test application differs between the versions I tested: 0.2.0 just calls an external service, while 0.3.0 added a timeout.

Furthermore, I was informed that our application sets duplicate headers, so I tried to incorporate this into the example. Interestingly, the application always sets x-figo-rid to - first, and at least the dash popped up in the logs as well.

However, I was not able to reproduce this issue reliably. Across all test runs I reproduced it just once, with the test-2-testapp setup, and I am unable to verify whether it was the same issue because the log level was _warn_.

@olix0r How does Linkerd2 handle duplicate headers?
From RFC 2616:
(https://www.w3.org/Protocols/rfc2616/rfc2616-sec4.html#sec4.2)

Multiple message-header fields with the same field-name MAY be present in a message if and only if the entire field-value for that header field is defined as a comma-separated list [i.e., #(values)]. It MUST be possible to combine the multiple header fields into one "field-name: field-value" pair, without changing the semantics of the message, by appending each subsequent field-value to the first, each separated by a comma. The order in which header fields with the same field-name are received is therefore significant to the interpretation of the combined field value, and thus a proxy MUST NOT change the order of these field values when a message is forwarded.
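
To make the duplicate-header scenario concrete, here is a hypothetical Go sketch (not the production handler) that emits the same header twice, similar to the x-figo-rid behaviour described above; per the quoted RFC text, a downstream hop may combine the two values into one comma-separated field:

package main

import (
	"fmt"
	"net/http"
	"net/http/httptest"
)

func main() {
	// Handler that sets the same header twice, placeholder value first.
	h := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		w.Header().Add("x-figo-rid", "-")      // dash placeholder set first
		w.Header().Add("x-figo-rid", "abc123") // made-up request id added later
		w.WriteHeader(http.StatusCreated)
	})

	rec := httptest.NewRecorder()
	h.ServeHTTP(rec, httptest.NewRequest(http.MethodGet, "/", nil))

	// Both values survive as separate fields, in insertion order.
	fmt.Println(rec.Result().Header.Values("x-figo-rid")) // [- abc123]
}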

Just saw that Ambassador 0.61.0 (just released) has this in their release notes:

Previously add_request_headers and add_response_headers could only append to a set of existing headers. Now, these two annotations can overwrite existing headers if requested. Thanks to Vyshakh P.

Thanks for the updated info, folks! We'll look over those logs and let you know what we find.

@christianhuening

@olix0r How does Linkerd2 handle duplicate headers?

It should handle them properly: the underlying http parser we use has been tested pretty exhaustively, but @seanmonstar can probably confirm.

That said, I'm curious if Ambassador's change helps you at all.

@olix0r Updating to ambassador 0.61.0 didn't help. It only works if we disable Linkerd2 in both the ambassador and the prodAPI pods.

I'm sorry, I tried looking through all the logs posted, and I couldn't find the mentioned warnings or errors. I'm especially interested in the InvalidStatusCode error, which says a received :status header isn't a decimal number between 100 and 599. Was the mention of that in a comment not actually part of this issue?

We found either a fix or a workaround:
Setting ALPN protocols to h2,http/1.1 resolves the intermittent 502s.
Described here:
https://www.getambassador.io/reference/core/tls/#alpn_protocols
https://www.envoyproxy.io/docs/envoy/v1.5.0/api-v1/cluster_manager/cluster_ssl

I am curious, however, why this is necessary. If I understand this correctly, ambassador (envoy) will now upgrade traffic to http2 itself, and this resolves the issues. The question remains why it would break with http/1.1.

EDIT:
The 502s are still occurring. As they are flaky, we were just lucky not to encounter them for a minute... So setting the aforementioned option does not help!

@trevex to help narrow down the issue, can you try installing/upgrading your control plane with --disable-h2-upgrade? This will prevent the proxy from automatically using h2 between proxy nodes. I'm not confident that this will eliminate the error, but, since you've seen some logs related to this feature, it would be helpful to test this.

@trevex @christianhuening edge-19.5.3 includes a fix to our underlying HTTP/2 implementation that we believe may have caused the class of errors you observed. We'd love to know if the new edge release exhibits the same behavior.

:; curl -sL https://run.linkerd.io/install-edge |sh -
...
:; linkerd upgrade |kubectl apply -f -
...
# and then roll injected pods to get the new proxy

@olix0r thanks! Will take a look.

@christianhuening @trevex this ended up being an Ambassador bug right?

ps. check out the new stable-2.3.1, there are some good stability fixes in there =)

@grampelberg There were two issues. The one we looked into with @olix0r at KubeCon was Ambassador related. This one, I think, we could trace back to Linkerd, and the HTTP2 fixes in 2.3.1 should fix it. We weren't able to give it a shot yet, though.

@trevex @christianhuening We're eager to make sure we resolve (or at least understand) this issue before the stable-2.4.0 release. Definitely let us know if you see anything like this on stable-2.3.2, which, as you know, includes a number of proxy fixes.

@olix0r we rolled 2.3.2 out on our dev cluster. We'll test it ASAP and get back to you, hopefully tomorrow.

@olix0r we're seeing the same issue - I've opened a ticket as I wasn't sure how they are triaged. We've just tested with 2.3.2 and still seeing the same issues.

@calinah @christianhuening @trevex We have another h2 fix which (1) fixes one case where the proxy could erroneously emit 502s and (2) adds better diagnostics to logs when the h2 server emits errors. These changes will be in the upcoming edge-19.6.3.

If you'd like to test this change in the meantime, I've pushed a proxy image that you can test against by setting the pod spec annotations (or setting the proper linkerd inject flags):

config.linkerd.io/proxy-version: ver-protocol-error-fix-0
config.linkerd.io/proxy-log-level: warn,linkerd2_proxy=info,h2=debug

I'm very curious if this branch fixes your behavior or at least outputs better error descriptions...

Hey @olix0r, I have tested the fix on 2 different clusters. On one of them we had linkerd-edge-19.5.3 installed and on the other we had linkerd-stable-2.3.0 (from a few days ago when we were testing pretty much all versions to check if the issue is version specific).
So, on the cluster that was running linkerd-edge-19.5.3, applying the fix worked straight away. I ran the test 3x and it worked.
On the other cluster, first of all, adding the annotation config.linkerd.io/proxy-version: ver-protocol-error-fix-0 to podAnnotations did not trigger a recreation of the pods for the change to be picked up. Running kubectl get deploy -o yaml | linkerd inject --proxy-log-level=warn,linkerd2_proxy=info,h2=debug --proxy-version=ver-protocol-error-fix-0 - | kubectl apply -f - also didn't recreate the pods, so I had to downgrade to edge-19.5.3 first and then apply the specified proxy version. This is unrelated to the issue, but mentioning it in case it's something you may want to explore.
Now, once I had applied the changes, I ran the test about 5 times on this cluster and I get the following errors:

err: {
      "code": 1,
      "metadata": {
        "_internal_repr": {
          ":status": [
            "502"
          ],
          "content-length": [
            "0"
          ],
          "date": [
            "Tue, 18 Jun 2019 16:30:43 GMT"
          ]
        }
      },
      "details": "Received http2 header with status: 502"

and

err: {
      "code": 4,
      "metadata": {
        "_internal_repr": {}
      },
      "details": "Deadline Exceeded"
    }

Not the news I was hoping to give you. I'll continue the troubleshooting tomorrow on that particular cluster and come back to you.

@calinah do you see any PROTOCOL ERROR log messages in the linkerd logs? If so, can you share them?

It turns out that we had not actually fixed https://github.com/linkerd/linkerd2/issues/2942 in yesterday's build; but we have confirmed the fix in ver-protocol-error-fix-1.

@calinah your injection issues are surprising, but we should probably track that issue separately.

Hey @olix0r been testing quite a bit today. Now getting the following error:

DBUG [  1634.172440s] proxy={client=in dst=10.24.18.9:9090 proto=Http2} h2::codec::framed_read received; frame=Headers { stream_id: StreamId(1), flags: (0x5: END_HEADERS | END_STREAM) }
DBUG [  1634.172477s] proxy={client=in dst=10.24.18.9:9090 proto=Http2} h2::proto::peer connection error PROTOCOL_ERROR -- cannot open stream StreamId(1) - not server initiated;
DBUG [  1634.172486s] proxy={client=in dst=10.24.18.9:9090 proto=Http2} h2::proto::connection Connection::poll; connection error=PROTOCOL_ERROR
DBUG [  1634.172501s] proxy={client=in dst=10.24.18.9:9090 proto=Http2} h2::codec::framed_write send; frame=GoAway { last_stream_id: StreamId(0), error_code: PROTOCOL_ERROR }
DBUG [  1634.172516s] proxy={client=in dst=10.24.18.9:9090 proto=Http2} h2::proto::connection Connection::poll; connection error=PROTOCOL_ERROR

Let me know what else I can do to help with troubleshooting.

Tested with ver-protocol-error-fix-1

Very peculiar... Do the logs show a line like h2::codec::framed_write send; frame=Reset { stream_id: StreamId(1) (and probably other stream IDs) before that connection error log?

@calinah thanks for testing! We currently don't get to do it, sorry guys!

@seanmonstar we get the following linkerd-proxy logs in the service meant to be processing the request:

DBUG [  1115.066010s] proxy={client=out dst=10.24.18.32:9090 proto=Http2} h2::codec::framed_write send; frame=Headers { stream_id: StreamId(139), flags: (0x4: END_HEADERS) }
DBUG [  1115.066069s] proxy={client=out dst=10.24.18.32:9090 proto=Http2} h2::codec::framed_write send; frame=Data { stream_id: StreamId(139), flags: (0x1: END_STREAM) }
DBUG [  1115.376894s] proxy={server=out listen=127.0.0.1:4140 remote=10.24.22.196:60630} h2::codec::framed_read received; frame=Reset { stream_id: StreamId(129), error_code: CANCEL }
DBUG [  1115.377369s] proxy={client=out dst=10.24.18.32:9090 proto=Http2} h2::codec::framed_write send; frame=Reset { stream_id: StreamId(129), error_code: CANCEL }
DBUG [  1115.378797s] proxy={client=in dst=10.24.22.196:9090 proto=Http2} h2::codec::framed_read received; frame=Headers { stream_id: StreamId(65), flags: (0x5: END_HEADERS | END_STREAM) }
DBUG [  1115.378919s] proxy={client=in dst=10.24.22.196:9090 proto=Http2} h2::proto::peer connection error PROTOCOL_ERROR -- cannot open stream StreamId(65) - not server initiated;
DBUG [  1115.378953s] proxy={client=in dst=10.24.22.196:9090 proto=Http2} h2::proto::connection Connection::poll; connection error=PROTOCOL_ERROR
DBUG [  1115.379012s] proxy={client=in dst=10.24.22.196:9090 proto=Http2} h2::codec::framed_write send; frame=GoAway { last_stream_id: StreamId(0), error_code: PROTOCOL_ERROR }
DBUG [  1115.379045s] proxy={client=in dst=10.24.22.196:9090 proto=Http2} h2::proto::connection Connection::poll; connection error=PROTOCOL_ERROR
ERR! [  1115.379368s] proxy={server=in listen=0.0.0.0:4143 remote=10.24.22.192:44078} linkerd2_proxy::app::errors unexpected error: http2 error: protocol error: unspecific protocol error detected
ERR! [  1115.379500s] proxy={server=in listen=0.0.0.0:4143 remote=10.24.22.192:44078} linkerd2_proxy::app::errors unexpected error: http2 error: protocol error: unspecific protocol error detected
DBUG [  1115.380204s] proxy={server=in listen=0.0.0.0:4143 remote=10.24.22.192:44078} h2::codec::framed_write send; frame=Headers { stream_id: StreamId(773), flags: (0x5: END_HEADERS | END_STREAM) }
DBUG [  1115.380395s] proxy={server=in listen=0.0.0.0:4143 remote=10.24.22.192:44078} h2::codec::framed_write send; frame=Headers { stream_id: StreamId(775), flags: (0x5: END_HEADERS | END_STREAM) }
DBUG [  1115.392795s] proxy={server=out listen=127.0.0.1:4140 remote=10.24.22.196:60630} h2::codec::framed_read received; frame=Reset { stream_id: StreamId(131), error_code: CANCEL }
DBUG [  1115.392981s] proxy={client=out dst=10.24.18.32:9090 proto=Http2} h2::codec::framed_write send; frame=Reset { stream_id: StreamId(131), error_code: CANCEL }
DBUG [  1115.409381s] proxy={server=out listen=127.0.0.1:4140 remote=10.24.22.196:60630} h2::codec::framed_read received; frame=Reset { stream_id: StreamId(133), error_code: CANCEL }
DBUG [  1115.409573s] proxy={client=out dst=10.24.18.32:9090 proto=Http2} h2::codec::framed_write send; frame=Reset { stream_id: StreamId(133), error_code: CANCEL }
DBUG [  1115.420777s] proxy={server=out listen=127.0.0.1:4140 remote=10.24.22.196:60630} h2::codec::framed_read received; frame=Reset { stream_id: StreamId(135), error_code: CANCEL }
DBUG [  1115.420989s] proxy={client=out dst=10.24.18.32:9090 proto=Http2} h2::codec::framed_write send; frame=Reset { stream_id: StreamId(135), error_code: CANCEL }
DBUG [  1145.060019s] proxy={server=out listen=127.0.0.1:4140 remote=10.24.22.196:60630} h2::codec::framed_read received; frame=Reset { stream_id: StreamId(137), error_code: CANCEL }
DBUG [  1145.060374s] proxy={client=out dst=10.24.18.32:9090 proto=Http2} h2::codec::framed_write send; frame=Reset { stream_id: StreamId(137), error_code: CANCEL }
DBUG [  1145.064915s] proxy={server=out listen=127.0.0.1:4140 remote=10.24.22.196:60630} h2::codec::framed_read received; frame=Reset { stream_id: StreamId(139), error_code: CANCEL }
DBUG [  1145.065128s] proxy={client=out dst=10.24.18.32:9090 proto=Http2} h2::codec::framed_write send; frame=Reset { stream_id: StreamId(139), error_code: CANCEL }
DBUG [  1202.339043s] proxy={server=out listen=127.0.0.1:4140 remote=10.24.22.196:35510} h2::codec::framed_read received; frame=Ping { ack: false, payload: [0, 0, 0, 0, 0, 0, 0, 17] }
DBUG [  1202.339094s] proxy={server=out listen=127.0.0.1:4140 remote=10.24.22.196:35510} h2::codec::framed_write send; frame=Ping { ack: true, payload: [0, 0, 0, 0, 0, 0, 0, 17] }

and these are linkerd-proxy logs from the service that initiated the chain of requests:

DBUG [  1637.763397s] proxy={client=out dst=10.24.22.196:9090 proto=Http2} h2::codec::framed_write send; frame=Headers { stream_id: StreamId(657), flags: (0x4: END_HEADERS) }
DBUG [  1637.763442s] proxy={client=out dst=10.24.22.196:9090 proto=Http2} h2::codec::framed_write send; frame=Data { stream_id: StreamId(657), flags: (0x1: END_STREAM) }
DBUG [  1637.763573s] proxy={client=out dst=10.24.22.196:9090 proto=Http2} h2::codec::framed_write send; frame=Headers { stream_id: StreamId(659), flags: (0x4: END_HEADERS) }
DBUG [  1637.763613s] proxy={client=out dst=10.24.22.196:9090 proto=Http2} h2::codec::framed_write send; frame=Data { stream_id: StreamId(659), flags: (0x1: END_STREAM) }
DBUG [  1667.758764s] proxy={server=out listen=127.0.0.1:4140 remote=10.24.22.192:60800} h2::codec::framed_read received; frame=Reset { stream_id: StreamId(653), error_code: CANCEL }
DBUG [  1667.758820s] proxy={server=out listen=127.0.0.1:4140 remote=10.24.22.192:60800} h2::codec::framed_read received; frame=Ping { ack: false, payload: [0, 0, 0, 0, 0, 0, 0, 40] }
DBUG [  1667.758845s] proxy={server=out listen=127.0.0.1:4140 remote=10.24.22.192:60800} h2::codec::framed_write send; frame=Ping { ack: true, payload: [0, 0, 0, 0, 0, 0, 0, 40] }
DBUG [  1667.758854s] proxy={server=out listen=127.0.0.1:4140 remote=10.24.22.192:60800} h2::codec::framed_read received; frame=Reset { stream_id: StreamId(655), error_code: CANCEL }
DBUG [  1667.758863s] proxy={server=out listen=127.0.0.1:4140 remote=10.24.22.192:60800} h2::codec::framed_read received; frame=Reset { stream_id: StreamId(657), error_code: CANCEL }
DBUG [  1667.758868s] proxy={server=out listen=127.0.0.1:4140 remote=10.24.22.192:60800} h2::codec::framed_read received; frame=Reset { stream_id: StreamId(659), error_code: CANCEL }
DBUG [  1667.759089s] proxy={client=out dst=10.24.22.196:9090 proto=Http2} h2::codec::framed_write send; frame=Reset { stream_id: StreamId(653), error_code: CANCEL }
DBUG [  1667.759107s] proxy={client=out dst=10.24.22.196:9090 proto=Http2} h2::codec::framed_write send; frame=Reset { stream_id: StreamId(655), error_code: CANCEL }
DBUG [  1667.759112s] proxy={client=out dst=10.24.22.196:9090 proto=Http2} h2::codec::framed_write send; frame=Reset { stream_id: StreamId(657), error_code: CANCEL }
DBUG [  1667.759116s] proxy={client=out dst=10.24.22.196:9090 proto=Http2} h2::codec::framed_write send; frame=Reset { stream_id: StreamId(659), error_code: CANCEL }
DBUG [  1667.766185s] proxy={server=out listen=127.0.0.1:4140 remote=10.24.22.192:60800} h2::codec::framed_read received; frame=Headers { stream_id: StreamId(661), flags: (0x4: END_HEADERS) }
DBUG [  1667.766255s] proxy={server=out listen=127.0.0.1:4140 remote=10.24.22.192:60800} h2::codec::framed_read received; frame=WindowUpdate { stream_id: StreamId(661), size_increment: 5 }
DBUG [  1667.766269s] proxy={server=out listen=127.0.0.1:4140 remote=10.24.22.192:60800} h2::codec::framed_read received; frame=Data { stream_id: StreamId(661), flags: (0x1: END_STREAM) }
DBUG [  1667.766350s] proxy={server=out listen=127.0.0.1:4140 remote=10.24.22.192:60800} h2::codec::framed_read received; frame=Headers { stream_id: StreamId(663), flags: (0x4: END_HEADERS) }

Let me know if you need more details.

These logs helped me isolate the why in the h2 library (thanks!), now I just need to figure out why the proxy is triggering this.

Would you happen to know if your applications frequently stop using a gRPC call before it's finished? The h2 library logs are telling me that one side of the http2 stream is no longer desired before that side has finished, so it tells the other peer via a RST_STREAM with error code CANCEL. (The connection error comes from the h2 library only keeping track of a limited number of reset streams, to prevent wasting memory, details schmetails...)
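
For illustration, a hypothetical Go sketch (not taken from the applications involved here) of a client abandoning an in-flight request when its context deadline fires; the same thing happens when a gRPC call's context is cancelled, since gRPC runs over HTTP/2. The URL and timeout are made up:

package main

import (
	"context"
	"fmt"
	"net/http"
	"time"
)

func main() {
	// Give up on the call after a short deadline.
	ctx, cancel := context.WithTimeout(context.Background(), 100*time.Millisecond)
	defer cancel()

	req, err := http.NewRequestWithContext(ctx, http.MethodGet, "http://prod-api:9090/slow", nil)
	if err != nil {
		panic(err)
	}

	// If the deadline fires before the server answers, the client stops
	// wanting the response. When the underlying connection is HTTP/2 (as it
	// is between linkerd proxies), this abandonment is signalled to the peer
	// as RST_STREAM with error code CANCEL; over HTTP/1.1 the connection is
	// simply closed instead.
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		fmt.Println("request abandoned:", err) // typically "context deadline exceeded"
		return
	}
	resp.Body.Close()
}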

hey @seanmonstar, the application is designed to persist gRPC connections... so I don't know under what circumstances gRPC would get stopped. Basically I can't tell you for sure. Is there a test I can run to confirm whether that's the case?

I'm not trying to imply a bug in the applications, just trying to understand the cause of the library thinking the streams are no longer wanted. It could be something like a gRPC method is called, and then before getting the response, it is canceled. Or it could be a bug in our h2 library. I'll do some more testing, thanks for the info!

@trevex @christianhuening We just released edge-19.7.1 with what we believe is the fix. Can you let us know if this addressed the issue?

@trevex @christianhuening (bumping for edge-19.7.2)

Just to give a heads up: I had updated to 19.7.4 and now to 2.4.0. We will now test what it does to the 502 issues.

@christianhuening any luck?

we're running 2.4.0 in prod now as it didn't break anything. However the specific scenario was relatively hard to repro and right now we're busy with migrations. Will keep pushing ;-)

Testing with 2.4.0 looks good so far: no intermittent 502s. We are going to watch this for a couple of days, but it looks promising :+1:

Thanks a lot guys! The effort you put into it is much appreciated!

Still no issues, let's close this :+1:

@trevex @christianhuening thanks for all of the helpful feedback! don't hesitate to let us know if anything else crops up.
