Hello...
I've been using this repo for a while now and have always wondered whether there is a reason why the install and validate scripts are placed in separate provisioner blocks in the Packer files.
For example, we have:
{
  "type": "powershell",
  "scripts": [
    "{{ template_dir }}/scripts/Installers/Install-7zip.ps1"
  ]
},
{
  "type": "powershell",
  "scripts": [
    "{{ template_dir }}/scripts/Installers/Validate-7zip.ps1"
  ]
}
separate from each other. I cannot see a reason for this, and I wonder whether the following would be a better layout for future maintenance:
{
  "type": "powershell",
  "scripts": [
    "{{ template_dir }}/scripts/Installers/Install-7zip.ps1",
    "{{ template_dir }}/scripts/Installers/Validate-7zip.ps1"
  ]
}
I suggest this because I'd want to know immediately if an install had failed, rather than continuing through hours of further installs only for the process to fail at a validation stage much later in the build.
Hi @jmos5156! It's better to run validation scripts after all the software is installed, to make sure one tool didn't break the others in terms of PATH, environment variables, etc. We've seen many times that a tool worked just fine right after its installation but was broken by the end of the build.
Just an idea: we could run the tests twice. It's not a big deal to run them twice because they are pretty quick.
cc @alepauly: switching to Pester for image testing could help a lot with this.
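To sketch the "run tests twice" idea against the current JSON layout: each validation script would still run straight after its install, and then again in one final provisioner block at the end of the build. The paths are the ones already used in this template; the final block is the hypothetical addition.

```json
[
  {
    "type": "powershell",
    "scripts": [
      "{{ template_dir }}/scripts/Installers/Install-7zip.ps1",
      "{{ template_dir }}/scripts/Installers/Validate-7zip.ps1"
    ]
  },
  {
    "type": "powershell",
    "scripts": [
      "{{ template_dir }}/scripts/Installers/Validate-7zip.ps1",
      "{{ template_dir }}/scripts/Installers/Validate-PowershellCore.ps1"
    ]
  }
]
```

The first run catches a broken install within minutes; the second run, at the end, catches tools broken later by other installs.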
I'd agree with the Pester testing, as this is something I've done before with PowerShell DSC. Running it at the end of image creation can help confirm that installs haven't affected one another.
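For illustration, a minimal Pester sketch of the kind of end-of-build check being discussed (the file name and the choice of tools to check are hypothetical, not part of this repo):

```powershell
# Hypothetical Image.Tests.ps1 -- run with Invoke-Pester at the end of the build
Describe "Basic tools still work at the end of the build" {
    It "7-Zip is on PATH" {
        Get-Command 7z -ErrorAction SilentlyContinue | Should -Not -BeNullOrEmpty
    }
    It "PowerShell Core is on PATH" {
        Get-Command pwsh -ErrorAction SilentlyContinue | Should -Not -BeNullOrEmpty
    }
}
```

Because these checks resolve commands through PATH rather than hard-coded install paths, they would catch exactly the "a later install broke an earlier tool" failures mentioned above.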
However, going back to @miketimofeev's point:
I feel that an install should be atomic: it either works or it doesn't. All validation of new software should be conducted straight after the install to verify that PATH etc. are where they should be. Otherwise, how do you focus on where the problem installs are? Waiting five hours for the Packer process to fail on the validation of software installed in the first hour is time wasted, no? If validation fails right after the install, we can log it, which makes triage much faster.
Furthermore, for what I need, I do not need all the packages on offer. Several versions of Python, etc., are not needed and delay the build and add further points of failure. I make this point in https://github.com/actions/virtual-environments/issues/507#issuecomment-610605219, where I suggest that the Packer file could give people more control over what gets installed, and which versions, without affecting the install scripts. I strip entire swathes out of the Packer file for my needs, and in doing so I have hit issues with dependencies along the way, hence why I ask about organizing installs, validations, and basic software together.
There are certain installs that I feel should be grouped together, or considered the 'basics', i.e. 7-Zip, Notepad++, Azure CLI, PowerShell, NPM. These are pieces of software that other installs depend upon and that should be available on all agents. I'd personally add them in the initialize phase and list them as follows:
{
  "type": "powershell",
  "environment_vars": [
    "ImageVersion={{user `image_version`}}"
  ],
  "scripts": [
    "{{ template_dir }}/scripts/Installers/Windows2019/Initialize-VM.ps1",
    "{{ template_dir }}/scripts/Installers/Update-DotnetTLS.ps1",
    "{{ template_dir }}/scripts/Installers/Validate-DotnetTLS.ps1",
    "{{ template_dir }}/scripts/Installers/Install-PowershellCore.ps1",
    "{{ template_dir }}/scripts/Installers/Validate-PowershellCore.ps1",
    "{{ template_dir }}/scripts/Installers/Install-7zip.ps1",
    "{{ template_dir }}/scripts/Installers/Validate-7zip.ps1"
  ]
}
The suggestion I make above is more about organizing the installs in _a modular way_, so that anyone can contribute from their hours of testing without affecting upstream or downstream installs, and, as with my other post (see above), about giving users of this repo more control and flexibility to get more successful builds.
Thank you for your suggestions. Both the testing approach and the modular structure make sense to us.
I can't share an ETA for now, but we have a feature on our backlog to rework and improve this.
I am closing this issue for now, but feel free to share any additional thoughts/proposals.