As discussed in [1], the apiserver receives requests not only from clients (the e2e test in this case) but also from other Kubernetes components. So even if a client sends a single request, the apiserver can receive multiple requests in that sequence.
Today the apicoverage tool inspects only the e2e test log, so from that viewpoint it would be good to have another parser for the apiserver-side log as well.
/cc @timothysc
Perhaps a couple of coverage metrics would be useful to users in the end, too?
@stevekuznetsov
Perhaps a couple of coverage metrics would be useful to users in the end, too?
Yeah, some metrics could be useful.
What kind of metrics do you want to check?
I'd like to gather ideas for moving forward.
The obvious choice is API coverage segmented by client, so the e2e client, each controller, the kubelet, etc. It's unclear exactly how useful this may or may not be for this effort, though.
@stevekuznetsov
Thanks for your comment, I got it.
I think we will want to measure API coverage from more viewpoints, so these metrics will be helpful for us.
https://github.com/kubernetes/kubernetes/pull/62535 added apiserver log output like:
I0413 12:10:56.612005 1 wrap.go:42] PUT /apis/apiregistration.k8s.io/v1/apiservices/v1.apps/status: (1.671974ms) 200 [[kube-apiserver/v1.11.0 (linux/amd64) kubernetes/7297c1c] 127.0.0.1:44356]
I0413 12:10:56.661734 1 wrap.go:42] PATCH /api/v1/nodes/e2e-1439-8af947-minion-group-pn5z/status: (338.229µs) 403 [[node-problem-detector/v1.4.0 (linux/amd64) kubernetes/$Format] 35.227.115.243:47004]
I0413 12:10:56.811298 1 wrap.go:42] POST /apis/certificates.k8s.io/v1beta1/certificatesigningrequests: (510.314µs) 403 [[kubelet/v1.11.0 (linux/amd64) kubernetes/7297c1c] 35.227.115.243:47012]
I0413 12:10:56.870816 1 wrap.go:42] POST /apis/certificates.k8s.io/v1beta1/certificatesigningrequests: (511.191µs) 403 [[kubelet/v1.11.0 (linux/amd64) kubernetes/7297c1c]
So it is a good time to add this parser.
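As a rough sketch of what such a parser could look like (assuming the wrap.go line format shown above; the struct and function names are illustrative only, not part of the existing tool):

```go
package main

import (
	"fmt"
	"regexp"
)

// apiRequest holds the fields we care about from one wrap.go log line.
type apiRequest struct {
	Method    string
	Path      string
	Status    string
	UserAgent string
}

// wrapLine matches lines like the ones above:
// ... wrap.go:42] PUT /apis/.../status: (1.671974ms) 200 [[kube-apiserver/v1.11.0 (...)] 127.0.0.1:44356]
var wrapLine = regexp.MustCompile(`wrap\.go:\d+\] (\w+) (\S+): \([^)]*\) (\d{3}) \[\[([^ \]]+)`)

// parseLine extracts the HTTP method, URL path, status code and the client
// (user agent) from one apiserver log line.
func parseLine(line string) (apiRequest, bool) {
	m := wrapLine.FindStringSubmatch(line)
	if m == nil {
		return apiRequest{}, false
	}
	return apiRequest{Method: m[1], Path: m[2], Status: m[3], UserAgent: m[4]}, true
}

func main() {
	line := `I0413 12:10:56.612005 1 wrap.go:42] PUT /apis/apiregistration.k8s.io/v1/apiservices/v1.apps/status: (1.671974ms) 200 [[kube-apiserver/v1.11.0 (linux/amd64) kubernetes/7297c1c] 127.0.0.1:44356]`
	if req, ok := parseLine(line); ok {
		fmt.Printf("%s %s -> %s by %s\n", req.Method, req.Path, req.Status, req.UserAgent)
	}
}
```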
One point: do we want to include error cases (HTTP 40x) in the coverage calculation as well, or not?
Related to client metrics, we would be able to see something like:
$ grep wrap.go kube-apiserver.log | awk -F "[" '{print $3}' | awk '{print $1}' | sort | uniq
cluster-proportional-autoscaler/v1.6.5
curl/7.38.0]
curl/7.58.0]
dashboard/v1.8.3]
e2e.test/v0.0.0
eventer/v0.0.0
event-exporter/v0.0.0
glbc/v0.0.0
Go-http-client/2.0]
heapster/v0.0.0
kube-apiserver/v1.11.0
kube-controller-manager/v1.11.0
kubectl/v1.11.0
kubectl/v1.9.3
kube-dns/1.14.9
kubelet/v1.11.0
kube-probe/1.11+]
kube-proxy/v1.11.0
kube-scheduler/v1.11.0
metrics-server/v0.0.0
node-problem-detector/v1.4.0
pod_nanny/v0.0.0
rescheduler/v0.0.0
So do we also want to track the version part? Or is it fine to merge them into a single metric without version info?
One point: do we want to include error cases (HTTP 40x) in the coverage calculation as well, or not?
Yes, tests may expect to get 40x responses from the server when checking negative conditions.
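One possible policy here (just a sketch, nothing decided for the tool): count a 40x hit toward coverage, since the negative test did exercise the endpoint, but keep it in a separate bucket so reports can tell it apart from 2xx coverage.

```go
package main

import "fmt"

// countsAsCoverage sketches one possible policy: a 40x response still
// exercises the endpoint (negative tests expect it), so count it, but keep
// it in a separate bucket so a report can distinguish it from 2xx coverage.
func countsAsCoverage(status int) (covered bool, bucket string) {
	switch {
	case status >= 200 && status < 400:
		return true, "success"
	case status >= 400 && status < 500:
		return true, "client-error"
	default:
		return false, "other"
	}
}

func main() {
	for _, s := range []int{200, 403, 500} {
		covered, bucket := countsAsCoverage(s)
		fmt.Printf("%d covered=%v bucket=%s\n", s, covered, bucket)
	}
}
```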
So do we also want to track the version part? Or is it fine to merge them into a single metric without version info?
Interesting to see kubectl/v1.9.3 in there. Can we allow a flag so that a user of the tool can choose whether to coalesce versions? @timstclair do you have opinions here?
Hi @stevekuznetsov
Thanks for your comment.
Can we allow a flag so that a user of the tool can choose whether to coalesce versions?
That is a nice idea. OK, I will try doing that after the current apiserver log parser is merged into master.
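For reference, a rough sketch of how the coalescing option could behave; the --coalesce-versions flag name and the clientKey helper are hypothetical and just for illustration, the real option would be decided in the PR:

```go
package main

import (
	"flag"
	"fmt"
	"strings"
)

// coalesceVersions is a hypothetical flag; the real option name would be
// decided when the feature is added to the tool.
var coalesceVersions = flag.Bool("coalesce-versions", false,
	"merge user agents that differ only by version into one metric")

// clientKey reduces a user agent such as "kubectl/v1.9.3" to "kubectl"
// when version coalescing is enabled, and returns it unchanged otherwise.
func clientKey(userAgent string) string {
	if !*coalesceVersions {
		return userAgent
	}
	if i := strings.Index(userAgent, "/"); i >= 0 {
		return userAgent[:i]
	}
	return userAgent
}

func main() {
	flag.Parse()
	fmt.Println(clientKey("kubectl/v1.9.3"))  // "kubectl" with --coalesce-versions, unchanged otherwise
	fmt.Println(clientKey("kubectl/v1.11.0")) // merges with the line above when coalescing
}
```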