@sarge , I have the following tests:
package forms

import (
	"io/ioutil"
	"testing"
	"time"

	"github.com/caddyserver/caddy/v2/caddytest"
)

func TestPlugin(t *testing.T) {
	tester := caddytest.NewTester(t)
	baseURL := "https://127.0.0.1:3443"
	configFile := "assets/conf/Caddyfile.json"
	configContent, err := ioutil.ReadFile(configFile)
	if err != nil {
		t.Fatalf("Failed to load configuration file %s: %s", configFile, err)
	}
	rawConfig := string(configContent)
	tester.InitServer(rawConfig, "json")
	tester.AssertGetResponse(baseURL+"/version", 200, "1.0.0")
	time.Sleep(1 * time.Millisecond)
}

func TestLdapConfig(t *testing.T) {
	tester := caddytest.NewTester(t)
	baseURL := "https://127.0.0.1:3443"
	configFile := "assets/conf/ldap/Caddyfile.json"
	configContent, err := ioutil.ReadFile(configFile)
	if err != nil {
		t.Fatalf("Failed to load configuration file %s: %s", configFile, err)
	}
	rawConfig := string(configContent)
	tester.InitServer(rawConfig, "json")
	tester.AssertGetResponse(baseURL+"/version", 200, "1.0.0")
	time.Sleep(1 * time.Millisecond)
}
When the test suite reaches TestLdapConfig, it looks like the first instance is still running.
--- PASS: TestPlugin (5.92s)
=== RUN TestLdapConfig
{"level":"info","ts":1591130556.064496,"logger":"admin","msg":"stopped previous server"}
TestLdapConfig: caddytest.go:154: unable to contact caddy server. Post "http://localhost:2019/load": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
TestLdapConfig: caddytest.go:98: failed to load config: Post "http://localhost:2019/load": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
2020/06/02 16:42:42 caddytest: redirecting the dialer from 127.0.0.1:3443 to 127.0.0.1:3443
--- FAIL: TestLdapConfig (7.00s)
FAIL
Btw, I disabled the admin endpoint in both config files:
{
  "admin": {
    "disabled": true
  },
  "logging": {
    "logs": {
      "default": {
        "level": "DEBUG"
      }
    }
  },
  ...
Should it use t.Errorf rather than t.Fatalf? Fatal will quit when it happens; Error will just mark the current test as failed and continue to the next one.
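Purely as an illustration of that difference (a standalone sketch using the same imports as the tests above, not taken from the caddytest suite):

func TestFatalVsError(t *testing.T) {
	cfg, err := ioutil.ReadFile("assets/conf/Caddyfile.json")
	if err != nil {
		// Fatalf fails the test and stops this test function immediately.
		t.Fatalf("cannot continue without a config: %v", err)
	}
	if len(cfg) == 0 {
		// Errorf marks the test as failed but execution continues,
		// so any assertions further down still run.
		t.Errorf("config file is empty")
	}
	// ...later assertions still run after Errorf, but not after Fatalf
}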
Or maybe you're saying the first server is still running at the end of the test? Maybe it needs a defer tester.ShutdownServer() or something (pseudocode)
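In other words, the proposal would look something like this (ShutdownServer is hypothetical; no such method exists in caddytest today):

func TestPlugin(t *testing.T) {
	tester := caddytest.NewTester(t)
	// Hypothetical: tear down whatever the tester started when this test
	// ends, so the next test's InitServer starts from a clean slate.
	defer tester.ShutdownServer()

	// ...load the config and run assertions as in the tests above...
}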
@francislavoie , @sarge, @mholt , I am going to create a pull request to implement defer tester.ShutdownServer(). Please let me know if you object.
@greenpau are you still working on this?
@francislavoie, no. I think @sarge was working on it.
Sorry @greenpau @francislavoie I did not realise this was on me. There isn't really a server to shutdown, has a listener that is running in the testing process.
There isn't really a server to shutdown, has a listener that is running in the testing process.
@sarge , the issue here is:
How do you overcome this? One obvious solution is to use different ports between test configs ...
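A rough sketch of that idea, with port numbers chosen arbitrarily (whether caddytest is happy with arbitrary ports is an assumption on my part):

func TestPluginOn9080(t *testing.T) {
	tester := caddytest.NewTester(t)
	tester.InitServer(`
	{
		http_port 9080
		https_port 9443
	}
	localhost:9080 {
		respond /version 200 {
			body "1.0.0"
		}
	}
	`, "caddyfile")
	tester.AssertGetResponse("http://localhost:9080/version", 200, "1.0.0")
}

func TestLdapConfigOn9081(t *testing.T) {
	tester := caddytest.NewTester(t)
	// Different ports, so even if the previous test's listener lingers,
	// this server does not have to fight it for the same port.
	tester.InitServer(`
	{
		http_port 9081
		https_port 9444
	}
	localhost:9081 {
		respond /version 200 {
			body "1.0.0"
		}
	}
	`, "caddyfile")
	tester.AssertGetResponse("http://localhost:9081/version", 200, "1.0.0")
}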
@sarge , here is another test case where shutting down the server becomes relevant.
Let's say I want to test Caddyfile processing. See https://github.com/greenpau/caddy-request-debug/blob/70af061ef32353829a62b46f8f46fc9c7ff9a44b/caddyfile_test.go#L19-L22
First, I may try loading different configs ... that would be different servers.
Then, what if I want to test for the server not coming up, i.e. failing? That is, I provide invalid options and want to verify that the server fails to start.
@greenpau Sorry I haven't gotten to this yet.
All of the tests run different configurations, so they are always loading fresh configurations.
Here is an example of an expected failure.
https://github.com/caddyserver/caddy/blob/00e6b77fe4eb4a692649970bf1724de22b15bba9/caddytest/integration/caddyfile_test.go#L59
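For reference, the pattern in that test looks roughly like this (recalled from the caddytest integration tests; the exact config and expected-error text here are approximate):

func TestExpectedConfigLoadFailure(t *testing.T) {
	// Loading this config is expected to fail because the same site
	// address is defined twice; the helper asserts on the load error.
	caddytest.AssertLoadError(t, `
	localhost:9080 {
	}
	localhost:9080 {
	}
	`, "caddyfile", "duplicate site address") // substring of the expected error; exact wording may differ
}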
There are other tests in that file where each configuration is reloaded at the start of the test. They will run in any order. They will not run well in parallel, because a port cannot be shared.
Are you able to create a failing test based on https://github.com/caddyserver/caddy/blob/00e6b77fe4eb4a692649970bf1724de22b15bba9/caddytest/integration/caddyfile_test.go?
@sarge , I can run the tester in separate tests. The issue is when I try using the tester multiple times in the same test.
@greenpau ah I had not picked up from your examples that running InitServer twice inside a test was the issue.
I'm super keen to help you move past this issue.
Here are a couple of scenarios that call InitServer in the same test. I expected this to work, as the mental model really is that Caddy supports patching the config while the server remains running. Cool feature tbh, and it helps a bunch with keeping uptimes high.
func TestCombined(t *testing.T) {
	// arrange
	tester := caddytest.NewTester(t)
	tester.InitServer(`
	{
		http_port 9080
		https_port 9443
	}
	localhost:9080 {
		respond /version 200 {
			body "hello from localhost"
		}
	}
	`, "caddyfile")

	// act and assert
	tester.AssertGetResponse("http://localhost:9080/version", 200, "hello from localhost")

	// arrange
	// tester := caddytest.NewTester(t)
	tester.InitServer(`
	{
		http_port 9080
		https_port 9443
	}
	localhost:9080 {
		redir / http://localhost:9080/hello 301
		respond /hello 200 {
			body "hello from localhost"
		}
	}
	`, "caddyfile")

	// act and assert
	tester.AssertRedirect("http://localhost:9080/", "http://localhost:9080/hello", 301)
}
And this one creates a new instance of the tester:
func TestCombinedNewInstance(t *testing.T) {
	// arrange
	tester := caddytest.NewTester(t)
	tester.InitServer(`
	{
		http_port 9080
		https_port 9443
	}
	localhost:9080 {
		respond /version 200 {
			body "hello from localhost"
		}
	}
	`, "caddyfile")

	// act and assert
	tester.AssertGetResponse("http://localhost:9080/version", 200, "hello from localhost")

	// arrange
	tester2 := caddytest.NewTester(t)
	tester2.InitServer(`
	{
		http_port 9080
		https_port 9443
	}
	localhost:9080 {
		redir / http://localhost:9080/hello 301
		respond /hello 200 {
			body "hello from localhost"
		}
	}
	`, "caddyfile")

	// act and assert
	tester2.AssertRedirect("http://localhost:9080/", "http://localhost:9080/hello", 301)
}
The idea of shutting down the server isn't really a thing; I'll try and show you why.
Here it runs inside the testing process, calling caddycmd.Main().
https://github.com/caddyserver/caddy/blob/9859ab8148435e64accd8d66e67db29fb5cbc9e7/caddytest/caddytest.go#L196-L208
I guess we could call stop, but I am not sure that we should, or that it would help. There might be some state inside the plugin that is not being cleaned up between tests. If that is the case, I would think that is a bug in the plugin, as Caddy expects to keep running with configuration being swapped in and out. There definitely is some internal Caddy state that is not reset on loading a new config; the TLS cert management caches are one example that lives for the duration of the process. So some parts of Caddy might behave differently on the first run vs subsequent runs. Calling stop is not likely to be able to reset any and all internal state; that is a limitation of the decision to make Caddy run inside the Go process.
We could also fail faster if a config fails, but I wonder if that is a bit misleading too. It might be valid to test uploading an invalid config, then loading a valid one. Arguably we should return an error from InitServer and let the test code make a decision.
https://github.com/caddyserver/caddy/blob/9859ab8148435e64accd8d66e67db29fb5cbc9e7/caddytest/caddytest.go#L99
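For example, if InitServer returned an error (a hypothetical change; the current method linked above does not return one), test code could decide for itself what a failed load means:

func TestInvalidThenValidConfig(t *testing.T) {
	tester := caddytest.NewTester(t)

	// Intentionally broken config: we *expect* this load to fail.
	if err := tester.InitServer(`this is not a valid caddyfile`, "caddyfile"); err == nil {
		t.Error("expected the invalid config to be rejected")
	}

	// Then load a known-good config and carry on testing against it.
	if err := tester.InitServer(`
	{
		http_port 9080
	}
	localhost:9080 {
		respond /version 200 {
			body "1.0.0"
		}
	}
	`, "caddyfile"); err != nil {
		t.Fatalf("failed to load valid config: %v", err)
	}

	tester.AssertGetResponse("http://localhost:9080/version", 200, "1.0.0")
}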
The idea of shutting down the server isn't really a thing; I'll try and show you why.
@sarge , thank you for the explanation! Let's keep the status quo here. I will try learning from your examples. 👍