I am opening this issue to discuss potential improvements to the development experience of Kibana. The topics I am bringing up here are mostly specific to the development toolchain as I have a bit of experience with the current field of JS project tools.
From Babel to webpack, TypeScript, Jest, files in scripts, grunt files, dotfiles, etc., the configuration of tools is spread across many files in various locations across the repo. It takes a fair amount of source diving to connect all the pieces and get a clear understanding of how a build works.
With our very custom build setup and monorepo management structure, we deviate from patterns and setups that the community is used to, which raises the barrier to entry for external contributors. We could possibly re-evaluate tools like lerna to make yarn workspaces more consistent or organized.
Things like build, test, and lint times could always be improved. 😄
There are many pieces of our toolchain that could be used outside this repo to make development consistent and easier. If non-application-specific configuration were extracted, we could re-use the tooling across more repositories and expose it as tooling for other projects and plugin authors to consume. By co-mingling the application's concerns with those of the tools, we make it so these must be versioned and shipped together when this isn't necessarily a requirement.
The invitation is open for anyone to add their ideas around improvements or more challenges to the development experience. Adding @tylersmalley and @tsouza for their insight and direction on this.
Pinging @elastic/kibana-operations
Under the "community deviation" section, you could put linting rules in with this. Adopting as many of the defaults of the Airbnb and Prettier linting settings could eliminate a lot of deviation and customization of rules.
@skaapgif @restrry If you notice any other potential improvements to the developer experience, please comment here
I will add more points later; right now the main problem for me is understanding how to run any kind of test (unit, *_integration, functional, etc.) and in which environment it is executed (jest, mocha, karma, etc.)
My biggest pain point is exactly the same as @restrry's.
On a bad day my workflow looks like this:
To improve this, I'd like to be able to:
The build system is hard to understand: kibana/tsconfig.json and managing its include/exclude file lists is rather complex. It makes sense to standardise on some TypeScript compiler options, but perhaps these should be put into a kibana/tsconfig.defaults.json file from which other files can inherit. That way each tsconfig file has full control of its included/excluded files.
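A sketch of how that inheritance could look (the specific compiler options are illustrative):

// kibana/tsconfig.defaults.json: shared compiler options only, no file lists
{
  "compilerOptions": {
    "target": "es5",
    "strict": true,
    "sourceMap": true
  }
}

// a project's tsconfig.json inherits the options but owns its own file lists
{
  "extends": "../../tsconfig.defaults.json",
  "include": ["public/**/*.ts", "server/**/*.ts"],
  "exclude": ["**/__tests__/**"]
}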
@crob611 @peterschretlen @lizozom @emmacunningham @flash1293 @wylieconlon @aaronjcaldwell If you notice any other potential improvements to the developer experience, please comment here
Talking to some of the downstream teams (ML, APM, CodeSearch, etc.), two things were mentioned several times: 1) the cycle time on a PR, and 2) the difficulty of figuring out how to run tests.
I think particularly for solutions, the monorepo format feels like a burden because many of their changes feel specific/isolated, and they'd like to "just run the APM tests", for example.
It uses several "task runners" (I use the term broadly here): grunt, package.json scripts and files in the scripts/* folder. This means it's not always obvious how I perform a specific step in the build process or what the dependencies between steps are.
I've attempted to address this through my GitHub Checks PR: https://github.com/elastic/kibana/pull/34673
Each check corresponds to a CI task and the check states the command and its arguments.
(Yes, it would be better to have more consistency but perhaps this helps.)
but it would be even easier if the test suite had as output a command I could copy and paste that would target all the failed test cases.
@skaapgif - The command you might run locally is different from the command run in CI. That said, I do think communicating such a thing is fairly simple. Might be worthy of more conversation.
@mattkime https://github.com/elastic/kibana/pull/34673 definitely solves my biggest developer pain at the moment 🎉 If the CI gave me one command I could copy and paste that would only target the failed tests, instead of showing me the failed test suite command, that would reduce the cycle time even further, but I would consider that a luxury, not a necessity.
It uses several "task runners" (I use the term broadly here): grunt, package.json scripts and files in the scripts/* folder. This means it's not always obvious how I perform a specific step in the build process or what the dependencies between steps are.
This comment was specifically related to making changes to the build system as opposed to understanding the results of the CI system. I had to read through the source code of several build scripts to understand how (and which files) we copy to the build directory, how we modify the typescript config files, what flags the typescript compiler is run with etc. Not impossible, but much harder to wrap my head around than a high-level abstraction like:
const gulp = require('gulp');
const typescript = require('gulp-typescript');
const tscConfig = require('./tsconfig.json');

// compile TypeScript sources with the options from tsconfig.json
gulp.task('ts', function () {
  return gulp
    .src('scripts/**/*.ts')
    .pipe(typescript(tscConfig.compilerOptions))
    .pipe(gulp.dest('wwwroot'));
});
(of course our build system wouldn't be that simple even if it was all gulp tasks, but I think the high-level stream + plugin abstractions would make it easier to follow)
If we are talking about solutions, I've got quite a bit of experience wrangling separate tooling into organized chunks via Neutrino. It may be a helpful tool to employ to solve some of these issues, though granted, not all of them.
I think some other important parts of the development process are code reviews and source-diving. There have been numerous case studies and guidelines that recommend generous whitespace to improve readability and retention, both in design and in reading code. I think opting for more generous and automated whitespace changes could make for a more positive and efficient process. I also understand that some of these rules could be quite subjective.
I used to codify changes like these in ESLint (example here enforces newlines between conditionals, declarations, and modules):
'padding-line-between-statements': [
  'error',
  {
    blankLine: 'always',
    prev: ['const', 'let', 'var'],
    next: '*',
  },
  {
    blankLine: 'never',
    prev: ['const', 'let', 'var'],
    next: ['const', 'let', 'var'],
  },
  {
    blankLine: 'always',
    prev: ['cjs-import'],
    next: '*',
  },
  {
    blankLine: 'always',
    prev: ['import'],
    next: '*',
  },
  {
    blankLine: 'always',
    prev: '*',
    next: ['cjs-export'],
  },
  {
    blankLine: 'always',
    prev: '*',
    next: ['export'],
  },
  {
    blankLine: 'never',
    prev: ['import'],
    next: ['import'],
  },
  {
    blankLine: 'never',
    prev: ['cjs-import'],
    next: ['cjs-import'],
  },
  {
    blankLine: 'any',
    prev: ['export'],
    next: ['export'],
  },
  {
    blankLine: 'any',
    prev: ['cjs-export'],
    next: ['cjs-export'],
  },
  { blankLine: 'always', prev: 'multiline-block-like', next: '*' },
  {
    blankLine: 'always',
    prev: '*',
    next: ['if', 'do', 'for', 'switch', 'try', 'while'],
  },
  { blankLine: 'always', prev: '*', next: 'return' },
],
I like to think of whitespace as a controller for context switching into a decision tree. Whenever code needs to branch or change context, whitespace can be a good indicator for your brain to also switch context while reading.
import a from 'a';
import b from 'b';
import c from 'c';

const alpha = 'alpha';
const beta = 'beta';
const gamma = 'gamma';

if (alpha === beta) {
  return doSomething();
} else if (beta === gamma) {
  const value = await somethingElse();

  return `delta ${value}`;
} else {
  throw new Error();
}

return 'anything';
The same code, annotated with the context each whitespace boundary separates:

// module context
import a from 'a';
import b from 'b';
import c from 'c';

// declaration context
const alpha = 'alpha';
const beta = 'beta';
const gamma = 'gamma';

// conditional context
if (alpha === beta) {
  return doSomething();
} else if (beta === gamma) {
  const value = await somethingElse();

  // return context
  return `delta ${value}`;
} else {
  // error context
  throw new Error();
}

// return context
return 'anything';
There are many places in Kibana where this seems to be followed by convention, but it is certainly not consistent.
Thoughts on standardizing on some sort of whitespace automation to improve readability and retention?
@eliperelman I absolutely agree and the way you described whitespace as a tool for isolating contexts is how I think of it too. The analogy I like to use is that grouping logic together using whitespace is like breaking up long-form text into paragraphs. The same way each paragraph communicates an intention or idea, so does a group of logic communicate an intention within a context.
When I'm writing code, this helps me structure the contexts in a way that makes sense. And as a reader, it helps me scan the code's contexts and focus on the ones I'm interested in.
Is it possible to run a subset of functional tests? It is currently our pain point that we need to change the test config file in order to do that.
Is it possible to run a subset of functional tests? It is currently our pain point that we need to change the test config file in order to do that.
You can try to use --grep or resort to describe.only/it.only.
Is it possible to run a subset of functional tests? It is currently our pain point that we need to change the test config file in order to do that.
You can try to use --grep or resort to describe.only/it.only.
Are you sure that works? Is the parameter following --grep the test name or a directory name? Could you give me an example so I can verify? So far it has never worked for us.
Another piece of feedback: it is very hard to write mocha tests (under __tests__) for a plugin with TypeScript, because the entire project is configured with the global Jest types, and that makes us use a lot of ts-ignore in mocha tests.
I have a PR here: https://github.com/elastic/kibana/pull/34754 to address some of our problems, but it might not be a universal solution.
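For reference, one way to keep the global Jest types out of mocha tests is a nested tsconfig that narrows the ambient types for the test folder; a sketch (the paths are hypothetical):

// tsconfig.json inside the plugin's mocha test folder (hypothetical path)
{
  "extends": "../../../tsconfig.json",
  "compilerOptions": {
    // pull in only mocha's globals here instead of the repo-wide jest types
    "types": ["node", "mocha"]
  },
  "include": ["**/*.ts"]
}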
@zfy0701 see https://github.com/elastic/kibana/pull/34986 for examples of how to grep for the different test runners. Please add a comment there if those examples aren't clear or don't work for you.
wow, that's super clean, and finally I got this working
We need to document the Functional Test Runner, so developers don't have to dig into the code to figure out how to use it.
Improve the Functional Test Runner. For example, the Functional Test Runner requires a hard-coded "configuration" for some of the permutation tests, so trying to test the various combinations becomes really challenging from the coding perspective, and it can drastically increase the time it takes for CI to run.
We need to set tests up as much as possible using the Kibana API, the SavedObjectsClient, the ES client, and esArchiver instead of using the UI (see the sketch after this list).
Developers should know clearly what QA covers and does not cover.
Developers should write automated functional tests for their features.
Try not to make functional tests very fine-grained.
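On the point about seeding state through services instead of the UI, a sketch in the Functional Test Runner's service style (the archive and app names are illustrative):

export default function ({ getService, getPageObjects }) {
  const esArchiver = getService('esArchiver');
  const PageObjects = getPageObjects(['common']);

  describe('my feature', () => {
    before(async () => {
      // seed fixture data directly instead of clicking through the UI
      await esArchiver.load('my_feature_fixtures');
      await PageObjects.common.navigateToApp('kibana');
    });

    after(async () => {
      await esArchiver.unload('my_feature_fixtures');
    });

    it('renders the seeded data', async () => {
      // UI assertions go here
    });
  });
}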
To add to my previous comment: using --grep definitely works, but it's still very error prone if we want to use it more frequently. What we did, like Canvas, is put some scripts in place, e.g.
but it would be nice if we could have some universal test entry point like node scripts/$testType --plugin $pluginName
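A sketch of what such an entry point could look like (a hypothetical script, not something that exists today):

// scripts/test.js (hypothetical): one entry point for every test type
const { spawn } = require('child_process');

// map each test type to the script that currently runs it
const runners = {
  jest: 'scripts/jest.js',
  mocha: 'scripts/mocha.js',
  functional: 'scripts/functional_test_runner.js',
};

const [, , testType, ...rest] = process.argv;

if (!runners[testType]) {
  console.error(`unknown test type: ${testType}`);
  process.exit(1);
}

// forward the remaining flags (e.g. --plugin my_plugin) to the underlying runner
spawn(process.execPath, [runners[testType], ...rest], { stdio: 'inherit' });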
one more suggestion: I'd really like the backport process to be automatic. I filed another issue, https://github.com/elastic/kibana/issues/41321, to describe the details there
additional pain points:
- kibana-ci status is red because of an X-Pack Chrome Functional tests / Group 3 failure, although the X-Pack Chrome Functional tests / Group 3 group is shown in a pending status and I cannot re-run it alone; I have to re-run all the checks.
- kbn es snapshot spends a lot of time downloading a new Elasticsearch version daily. Is there a way to invalidate the cache less frequently, given that according to our docs "These snapshots are built on a nightly basis which expires after a couple of weeks"? https://github.com/restrry/kibana/blob/8b5aa217572509c7efc9907e7cbafb43c4d31813/CONTRIBUTING.md#nightly-snapshot-recommended
- oss and x-pack tests are run separately. It's a bit counterintuitive when some scripts work from the root for both projects (type_check, e.g.). Probably addressed by https://github.com/elastic/kibana/issues/41129 and https://github.com/elastic/kibana/issues/41130.
- package.json has to be updated in both the oss and x-pack folders, even though x-pack always exists in the context of kibana and will reuse packages from the parent folder.

@mattkime can address the points about the Github reporter. But as of right now, re-running a specific test won't take less time. We're working on possibly utilizing pipelines to address this.
Is there a way to invalidate the cache less frequently
I have an issue I opened when creating kbn-es, https://github.com/elastic/kibana/issues/17419, but so far there hasn't been much support for it. I added a comment there mentioning it being raised here.
These snapshots are built on a nightly basis which expires after a couple of weeks
The snapshots expire, meaning they are no longer available, since we utilize TTLs on the S3 bucket. That comment is really only relevant when attempting to run kbn-es from an old branch, say 6.7. Additionally, having something like https://github.com/elastic/kibana/issues/17419 would alleviate that if you already had that branch cached.
oss and x-pack tests are run separately. It's a bit counterintuitive when some scripts work from the root for both projects (type_check, e.g.). Probably addressed by #41129, #41130
I agree. Prior to 6.3, X-Pack was a separate repository, and we still need to work on making a more cohesive, unified experience now that this is no longer the case. However, we still have to make sure that people can run/develop only OSS if necessary.
More requests for functional testing:
@tylersmalley, @brianseeders, @spalger and I have been discussing how the functional tests can be improved to address:
Improve the Functional Test Runner. For example, the Functional Test Runner requires a hard-coded "configuration" for some of the permutation tests, so trying to test the various combinations becomes really challenging from the coding perspective, and it can drastically increase the time it takes for CI to run.
and
Reduce overall number of FTR configs by eliminating most common needs for new configuration permutations (licensing, TLS)
Spencer will be working on a proof-of-concept for a solution that he has in mind.
Not sure if this is the right place to ask, but the thing I am missing (or maybe I just don't know it exists) is a publicly exposed kibana endpoint built from a pull-request branch.
If we could provide a URL to the kibana which we built for running CI tests on a branch, that would:
yarn start (build differences). These led to tedious troubleshooting, as building the code locally in the same way as it is built on CI is not the first thing we tried. But having an endpoint exposed from CI would help us find these kinds of issues faster.

The point @Dosant mentioned also came up a couple of times during our team syncs. There is a high demand for having preview builds on the PR automatically, for the listed reasons. Do you think it is feasible to achieve that in the mid-term?
@tylersmalley Anything we can do to address Mikhail's above feedback?
All packages within the /packages folder have to duplicate their development setup. Could we provide a utility package consolidating the dev dependencies list and providing a set of dev scripts (build, linting, etc.)?
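One shape this could take is a react-scripts-style package that owns the tool versions; a sketch (the package name and versions are hypothetical):

packages/kbn-dev-scripts/package.json (hypothetical):
{
  "name": "@kbn/dev-scripts",
  "version": "1.0.0",
  "bin": { "kbn-dev": "./bin/kbn-dev.js" },
  "dependencies": {
    "typescript": "^3.5.3",
    "eslint": "^5.16.0",
    "jest": "^24.8.0"
  }
}

Each package under /packages would then declare a single devDependency on @kbn/dev-scripts and run kbn-dev build, kbn-dev lint, etc., instead of maintaining its own copies of the tooling.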
Regarding the @kbn/optimizer perf complaints: I added a plugin analyzing the time spent in different loaders. It looks like sass-loader is the bottleneck at the moment o_O
Results for a random plugin:
sass-loader took 5 mins, 20.91 secs
module count = 4
modules with no loaders took 3 mins, 3.15 secs <<< plain js from node_modules
module count = 151
babel-loader took 2 mins, 41.88 secs
module count = 145
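For the curious, a minimal sketch of how a loader-timing plugin like that can be written against webpack's compilation hooks (illustrative; not the exact plugin used for the numbers above):

// webpack plugin that aggregates module build time per loader chain
class LoaderTimingPlugin {
  apply(compiler) {
    const totals = new Map(); // loader chain -> { time, count }
    const started = new WeakMap();

    compiler.hooks.compilation.tap('LoaderTimingPlugin', (compilation) => {
      compilation.hooks.buildModule.tap('LoaderTimingPlugin', (module) => {
        started.set(module, Date.now());
      });

      compilation.hooks.succeedModule.tap('LoaderTimingPlugin', (module) => {
        if (!started.has(module)) return;

        // attribute the module's build time to its loader chain, or to
        // "no loaders" for plain js pulled straight from node_modules
        const key = (module.loaders || []).map((l) => l.loader).join(' -> ') || 'no loaders';
        const entry = totals.get(key) || { time: 0, count: 0 };

        entry.time += Date.now() - started.get(module);
        entry.count += 1;
        totals.set(key, entry);
      });
    });

    compiler.hooks.done.tap('LoaderTimingPlugin', () => {
      for (const [key, { time, count }] of totals) {
        console.log(`${key} took ${(time / 1000).toFixed(2)} secs (module count = ${count})`);
      }
    });
  }
}

module.exports = { LoaderTimingPlugin };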
yeah, I think it makes sense to focus on improving the caching strategy, but we might squeeze out a couple of minutes by tuning the build pipeline (at least for development)
Is there an issue in the Kibana repo to tune optimizer perf? @elastic/kibana-operations
While debugging x-pack tests I noticed React component testing is extremely slow.
For example, it takes ~5 mins to complete https://github.com/elastic/kibana/blob/master/x-pack/plugins/cross_cluster_replication/public/__jest__/client_integration/auto_follow_pattern_list.test.js
Thanks for mentioning this, @restrry. Would it help if we excluded our client integration tests from the test:jest script and created a separate script for them?
@cjcenizal there is a dedicated jest runner for integration tests, x-pack/scripts/jest_integration. It had been disabled for some time, but it's available again.
@rudolf do you think it should be possible to apply the logic from https://github.com/elastic/kibana/pull/34986 to write a single script that runs any type of test 🤔?