Virtual-environments: Reproducing the Windows 2019 image

Created on 5 Mar 2020 · 32 comments · Source: actions/virtual-environments

Describe the bug
For various reasons we are generating our own copy of the Windows Server 2019 image. We have found the process to be very flaky and our build of the image often fails at seemingly random points (20 minutes, 50 minutes, several hours in). A rerun of the build will go past the point of the previous failure with no changes needed.

Are you experiencing this behaviour as well or does it run well for you with no random failures? Our current best approach is running several isolated packer builds simultaneously and hoping one finishes with no errors!

Area for Triage:
Deployment/Release

Question, Bug, or Feature?:
Question

Virtual environments affected

  • [ ] macOS 10.15
  • [ ] Ubuntu 16.04 LTS
  • [ ] Ubuntu 18.04 LTS
  • [ ] Windows Server 2016 R2
  • [x] Windows Server 2019

Expected behavior
Windows 2019 packer build runs correctly on a repeatable basis

Actual behavior
Random failures whilst building the Windows 2019 image.

Labels: Windows, community feature, question

Most helpful comment

@elduddz, Hi, This is a known choco issue - https://github.com/chocolatey/choco/issues/1521.

Only a few years old 😏

All 32 comments

Hi @MarcDenman, thanks for reaching out! What rate of flakiness are you seeing, and are there any specific types of errors? We do see occasional failures (roughly 1 in 6 builds), mostly due to network issues or third-party package sources not responding. Also, are you running the builds off a cloud VM, a local box, or a server? What type of network connection does it have?

In short, yes. I've experienced the same and more. There are several issues already discussed in this repo, from WinRM timeouts and download failures to validation checks not working and installs not completing. I'd say it's around 1/4 of all builds I do.

There are a couple of suggestions of issues with Azure, but even so, this repo offers you the full bells-and-whistles Windows 2019 agent build, which just seems too flaky to run for several hours, at cost, only for it to fail. That's too much investment IMHO.

I've run this in most European Azure locations with varying levels of success.
Firstly, I have trimmed down the vast number of installs to only what I need. I don't need several Python versions, for example. I carefully review the validation steps, as it annoys me that the process can fail simply because of an error in the validation itself. I don't like this, as it has branched me away from this repo, and I would prefer to remain closer to the changes seen here.

Also, watch out for dependencies between packages. This has caught me out several times.

The windows updates and virus scan will add hours and are a real killer to this process.

I have resorted to running several builds each tackling a single install just to test the code. If these work individually, I will attempt a full build, but there's no guarantee it will work.

I carefully review the validation steps as it annoys me that the process can fail simply because of an error in the validation process.

@jmos5156 - Agreed! These are frustrating to say the least. To help reduce these types of failures we (finally) were able to put in place a sort of "CI" to verify all PRs prior to merging into master.

One option to increase the likelihood of the build succeeding is to build off the latest release branch. Release branches indicate that the scripts have built in production; however, since package dependencies might have moved since then, this path is not foolproof.

If it helps at all, we've had the most luck building in the CUS region of Azure.
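
For anyone unfamiliar with the release branches, a minimal sketch of pinning a local clone to one before running packer; the branch name below is only illustrative, so take the actual name from the repository's branches/releases list:

    git fetch origin
    git checkout win19/20200331   # illustrative release branch name - use the latest one available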

@alepauly Thanks for the quick response. That is very useful to know. We are currently at around a 35% failure rate. We are running it on both an Azure VM in UK South and a server in our data centre while we work out which is most reliable.

We are also orchestrating via Azure Pipelines (private agents), but that doesn't seem to be causing a problem (apart from timeouts, but we have fixed that!).

It is very useful to know about CUS; we will try that tomorrow and see if it helps. Is there anything we can do to help make it more reliable?

I have added some errors below. A lot of the errors we have had are network related, though we have had a few builds fail at the same step (installing PowerShell Core).

Errors

    vhd: The install of mingw was NOT successful.
    vhd: Error while running 'C:\ProgramData\chocolatey\lib\mingw\tools\chocolateyinstall.ps1'.
    vhd:  See log for details.
    vhd:
    vhd: Chocolatey installed 0/1 packages. 1 packages failed.
    vhd:  See the log for details (C:\ProgramData\chocolatey\logs\chocolatey.log).
    vhd:
    vhd: Failures
    vhd:  - mingw (exited 404) - Error while running 'C:\ProgramData\chocolatey\lib\mingw\tools\chocolateyinstall.ps1'
    vhd: Downloading azure-cli
    vhd:   from 'https://azurecliprod.blob.core.windows.net/msi/azure-cli-2.1.0.msi'
    vhd: Progress: 93% - Saving 46.69 MB of 50.02 MBChocolatey timed out waiting for the command to finish. The timeout
    vhd:  specified (or the default value) was '2700' seconds. Perhaps try a
    vhd:  higher `--execution-timeout`? See `choco -h` for details.
    vhd: The install of azure-cli was NOT successful.
    vhd: Error while running 'C:\ProgramData\chocolatey\lib\azure-cli\tools\chocolateyInstall.ps1'.
    vhd:  See log for details.
    vhd:
    vhd: Chocolatey installed 0/1 packages. 1 packages failed.
    vhd:  See the log for details (C:\ProgramData\chocolatey\logs\chocolatey.log).
    vhd:
    vhd: Failures
    vhd:  - azure-cli (exited -1) - Error while running 'C:\ProgramData\chocolatey\lib\azure-cli\tools\chocolateyInstall.ps1'.
==> vhd: Provisioning with Powershell...
==> vhd: Provisioning with powershell script: ********\virtual-environments\images\win/scripts/Installers/Install-PowershellCore.ps1
    vhd: VERBOSE: About to download package from
    vhd: 'https://github.com/PowerShell/PowerShell/releases/download/v7.0.0/PowerShell-7.0.0-win-x64.msi'
==> vhd: Quiet install failed, please rerun install without -Quiet switch or ensure you have administrator rights
==> vhd: At line:393 char:21
==> vhd: + ...             throw "Quiet install failed, please rerun install without ...
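
On the azure-cli and mingw timeouts above, the log itself points at Chocolatey's execution timeout. A hedged sketch of bumping it in a fork of the install scripts (the 7200-second value is just an example, not what the upstream scripts use):

    # Raise the per-command timeout for a single slow package (example value only)
    choco install azure-cli -y --execution-timeout=7200
    # Or raise the default for every install on the build VM
    choco config set commandExecutionTimeoutSeconds 7200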

@jmos5156 Good to know it isn't just us, thank you. We have started trimming out the parts we don't want as well, but similarly, we don't want to fork too far away from this repo. The dependencies between packages are a very good point. I would have completely missed that.

@MarcDenman @jmos5156 definitely not just you two (and us). I've heard from others with similar problems. I'm hoping we can soon spend some time figuring out ways to address the problems we are running into while generating the image. We have ideas, it's just a matter of finding time ⌛

I don't suppose it would be possible to get a share of the image created by yourselves via a Shared Image Gallery (SIG), or even the VHD?

I know this chat has now been closed, but @alepauly, if there is anything I can help with, in any small part, shout out.
FYI, I ran two builds overnight. Both failed: one on a chocolatey download for Azure CLI, the second waiting for WinRM. Both in West Europe. This is frustrating to watch. On the WinRM side, I actually wait for the Packer output where it says waiting for WinRM. If I see it not continuing for, say, 5-7 minutes (usually it happens within a minute or two), I run these commands via the Run Command option in the VM settings in the portal...

$Cert = New-SelfSignedCertificate -CertstoreLocation Cert:\LocalMachine\My -DnsName "$env:COMPUTERNAME"
Remove-Item -Path WSMan:\Localhost\listener\listener* -Recurse
New-Item -Path WSMan:\LocalHost\Listener -Transport HTTPS -Address * -CertificateThumbPrint $Cert.Thumbprint -Force
Stop-Service winrm
Start-Service winrm
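
If you end up applying that fix often, a hedged sketch of pushing the same commands to the temporary packer VM with the Az PowerShell module instead of clicking through the portal; the resource group and VM names are placeholders, and it assumes the Az.Compute module plus an authenticated Az session:

$fix = @'
$Cert = New-SelfSignedCertificate -CertstoreLocation Cert:\LocalMachine\My -DnsName "$env:COMPUTERNAME"
Remove-Item -Path WSMan:\Localhost\listener\listener* -Recurse
New-Item -Path WSMan:\LocalHost\Listener -Transport HTTPS -Address * -CertificateThumbPrint $Cert.Thumbprint -Force
Restart-Service winrm
'@
Set-Content -Path .\fix-winrm.ps1 -Value $fix
# 'packer-tmp-rg' and 'pkrvm0example' are placeholders for the temporary resource group / VM that packer creates
Invoke-AzVMRunCommand -ResourceGroupName 'packer-tmp-rg' -VMName 'pkrvm0example' -CommandId 'RunPowerShellScript' -ScriptPath .\fix-winrm.ps1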

FYI, we are a heavy user of Desired State Configuration (DSC). We use it for pretty much everything, from installing and configuring AD, SQL and web servers to installing 3rd-party products, and it is vastly more reliable. If it wasn't for the fact that packer wraps up the deployment, image creation and clean-up all in one, I'd invest my time in refactoring this in DSC. That would obviously break me away from the community effort, which I'd rather avoid, but it would assure me of a completed build at the end.

@jmos5156 I'll reopen so we can continue to document the issues affecting reliably reproducing the Win2019 image and help each other find ways to improve it.

For your WinRM problem, we never see that now. We used to a while back, but stopped seeing it when we changed our network configuration. For context, the machines we use to run the builds are Azure VMs themselves, and we found that a combination of network reliability issues and policies would often get in the way of WinRM establishing a connection.

To solve this we changed the configuration so that Packer would no longer attempt to create its own vnet and instead use the vnet that we supply. That vnet is configured as a peer of the vnet where the machine running the build is, so they effectively live in the same network and there is no need to open ports to the public internet or register a public IP. For this you'll want to include the following two params in your packer call:

                -var "private_virtual_network_with_public_ip=false"
                -var "virtual_network_name=<your vnet name>" 

Note that we had to peer the vnets because of internal configs but in your case you might be able to get away with simply using the same vnet for both. Hope this helps!
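
For reference, a hedged sketch of how those two parameters slot into a full packer invocation; the template path and the credential variables are placeholders for whatever your fork already passes:

    packer build `
        -var "client_id=$env:ARM_CLIENT_ID" `
        -var "client_secret=$env:ARM_CLIENT_SECRET" `
        -var "private_virtual_network_with_public_ip=false" `
        -var "virtual_network_name=my-build-vnet" `
        .\images\win\Windows2019-Azure.json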

So if I understand correctly, you are using self-hosted agents to run the packer image build from one of two peered vnets, with packer creating the image in the other. This means WinRM is working 'internally' and is therefore more reliable.
If correct - I see.
Unfortunately, I don't have this setup and use Azure-hosted agents to do the heavy lifting.

We attempted this build on the Azure Pipelines pool one time, but didn't know that after 6 hours the build gets cancelled 🤣

I have had 2 parallel builds running on tin we own in the same building as us. From there I was able to run 2 builds, one to UK South and the other to Central US, to see which wins. The UK South one crashed out after getting the tool cache; the Central US one completed.

I was wondering if the validation could call back to the installer on error to have another go.

My UKS build falls over on the PowerShell Core download, which uses an aka.ms link. I converted this to a chocolatey install and added a bit more defence around it (see below).

The problem with packer is that after the build fails you get nothing useful: the image might be corrupted, and in the case of a managed disk, I am not sure you get anything usable.

function Install-Choco {
    [CmdletBinding()]
    param(
        [Parameter(Mandatory)]
        [string]$install,
        [int]$retries = 5
    )

    begin { }
    process {
        $condition = $false
        $count = 0
        do {
            # Run the install in a child powershell so a hard failure doesn't kill this script
            Write-Output "running: powershell choco install $install -y"
            powershell choco install $install -y

            # Verify the package shows up in the local package list before declaring success
            $installed = powershell choco list -lo $install --all
            $match = (($installed -match "^$install.*").Length -ne 0)
            if ($match) {
                Write-Output "package installed: $install"
                $condition = $true
            }
            else {
                $count++
                if ($count -eq $retries) {
                    Write-Error "Could not install $install after $count attempts"
                    exit 1
                }
            }
        } while ($condition -eq $false)
    }
    end { }
}

This doesn't handle the more complex choco installs yet, and if another package's name contains this one as a substring it would give a false positive, but it deals with the basic install -y setup OK.

Doing this and using choco for PowerShell Core actually helped that element install successfully.
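
For example, with the helper above the PowerShell Core step reduces to a call like this (powershell-core is the public chocolatey.org package id; adjust if your feed differs):

    Install-Choco -install "powershell-core" -retries 5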

Both CUS and UKS failed after hitting the WiX install. Not had this before. Using prerelease 20200308 as a base.

azure-arm: Chocolatey v0.10.15
    azure-arm: Installing the following packages:
    azure-arm: wixtoolset
    azure-arm: By installing you accept licenses for the packages.
    azure-arm: wixtoolset not installed. An error occurred during installation:
    azure-arm:  The operation has timed out
    azure-arm: wixtoolset package files install completed. Performing other installation steps.
    azure-arm: The install of wixtoolset was NOT successful.
    azure-arm: wixtoolset not installed. An error occurred during installation:
    azure-arm:  The operation has timed out
    azure-arm:
    azure-arm: Chocolatey installed 0/1 packages. 1 packages failed.
    azure-arm:  See the log for details (C:\ProgramData\chocolatey\logs\chocolatey.log).
    azure-arm:
    azure-arm: Failures
    azure-arm:  - wixtoolset (exited 1) - wixtoolset not installed. An error occurred during installation:
    azure-arm:  The operation has timed out

Adding the function call mentioned previously and trying again.

Just a reiteration of the intermittent nature of the internet; this was a new extension added for the 20200308.0 release:

_work\45\s\virtual-environments\images\win/scripts/Installers/Windows2019/Install-AnalysisExtenstion.ps1
    azure-arm: Downloading Microsoft.DataTools.AnalysisServices.vsix extension
==> azure-arm: Exception calling "DownloadFile" with "2" argument(s): "The remote server returned an error: (429)."
==> azure-arm: At C:\Windows\Temp\script-5e6732ed-391b-0775-6ef9-1c4750aefbc9.ps1:10 char:1
    azure-arm: Installing Microsoft.DataTools.AnalysisServices.vsix extension
==> azure-arm: + (New-Object System.Net.WebClient).DownloadFile($extensionUrl, $extens ...
==> azure-arm: + ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
==> azure-arm:     + CategoryInfo          : NotSpecified: (:) [], MethodInvocationException
==> azure-arm:     + FullyQualifiedErrorId : WebException
==> azure-arm:
    azure-arm: Unsuccessful exit code returned by the installation process: 2001.
==> azure-arm: Provisioning step had errors: Running the cleanup provisioner, if present...
==> azure-arm: 
==> azure-arm: Cleanup requested, deleting resource group ...
==> azure-arm: Resource group has been deleted.
Build 'azure-arm' errored: Script exited with non-zero exit status: 1.Allowed exit codes are: [0]

Another fail in UKS; my CUS build is still going OK, it just hit the run-antivirus step :100:

_work\47\s\virtual-environments\images\win/scripts/Installers/Install-AzureCli.ps1
    azure-arm: Chocolatey v0.10.15
    azure-arm: Installing the following packages:
    azure-arm: azure-cli
    azure-arm: By installing you accept licenses for the packages.
    azure-arm: Progress: Downloading azure-cli 2.2.0... 100%
    azure-arm:
    azure-arm: azure-cli v2.2.0 [Approved]
    azure-arm: azure-cli package files install completed. Performing other installation steps.
    azure-arm: Downloading azure-cli
    azure-arm:   from 'https://azurecliprod.blob.core.windows.net/msi/azure-cli-2.2.0.msi'
    azure-arm: Progress: 84% - Saving 42.63 MB of 50.25 MBChocolatey timed out waiting for the command to finish. The timeout
    azure-arm:  specified (or the default value) was '2700' seconds. Perhaps try a
    azure-arm:  higher `--execution-timeout`? See `choco -h` for details.
    azure-arm: The install of azure-cli was NOT successful.

It is going to be impossible to say it is just one thing. I keep coming back to more defence around the process; if a couple of retries doesn't fix the problem, then we have a bigger problem.
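
In that spirit, a hedged sketch of the kind of defence that would also cover the raw WebClient downloads (like the vsix 429 earlier in the thread); the function and parameter names are made up for illustration:

function Invoke-DownloadWithRetry {
    [CmdletBinding()]
    param(
        [Parameter(Mandatory)][string]$Url,
        [Parameter(Mandatory)][string]$Destination,
        [int]$Retries = 5,
        [int]$DelaySeconds = 30
    )
    for ($i = 1; $i -le $Retries; $i++) {
        try {
            # Same download mechanism the installer scripts use, wrapped in a retry loop
            (New-Object System.Net.WebClient).DownloadFile($Url, $Destination)
            return
        }
        catch {
            Write-Warning "Download attempt $i of $Retries failed: $($_.Exception.Message)"
            if ($i -eq $Retries) { throw }
            Start-Sleep -Seconds $DelaySeconds
        }
    }
}

# e.g. Invoke-DownloadWithRetry -Url 'https://example.com/tool.vsix' -Destination "$env:TEMP\tool.vsix"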

I am also trying to build this out in UK South and see the same unpredictable failures.

==> azure-arm: Provisioning with powershell script: /home/packer/_work/2/s/packer/azure/windows_2019_devops_build_agent/scripts/Installers/Install-MysqlCli.ps1
    azure-arm: Downloading vcredist_x64.exe...
    azure-arm: Starting Install vcredist_x64.exe...
    azure-arm: Installation successful

Then hung until I killed after 3 hours

    azure-arm: mingw v8.1.0 [Approved]
    azure-arm: mingw package files install completed. Performing other installation steps.
    azure-arm: Attempt to get headers for https://sourceforge.net/projects/mingw-w64/files/Toolchains%20targetting%20Win64/Personal%20Builds/mingw-builds/8.1.0/threads-posix/seh/x86_64-8.1.0-release-posix-seh-rt_v6-rev0.7z/download failed.
    azure-arm:   The remote file either doesn't exist, is unauthorized, or is forbidden for url 'https://sourceforge.net/projects/mingw-w64/files/Toolchains%20targetting%20Win64/Personal%20Builds/mingw-builds/8.1.0/threads-posix/seh/x86_64-8.1.0-release-posix-seh-rt_v6-rev0.7z/download'. Exception calling "GetResponse" with "0" argument(s): "Unable to connect to the remote server"
    azure-arm: Downloading mingw 64 bit
    azure-arm:   from 'https://sourceforge.net/projects/mingw-w64/files/Toolchains%20targetting%20Win64/Personal%20Builds/mingw-builds/8.1.0/threads-posix/seh/x86_64-8.1.0-release-posix-seh-rt_v6-rev0.7z/download'
    azure-arm: ERROR: The remote file either doesn't exist, is unauthorized, or is forbidden for url 'https://sourceforge.net/projects/mingw-w64/files/Toolchains%20targetting%20Win64/Personal%20Builds/mingw-builds/8.1.0/threads-posix/seh/x86_64-8.1.0-release-posix-seh-rt_v6-rev0.7z/download'. Exception calling "GetResponse" with "0" argument(s): "The operation has timed out"
    azure-arm: This package is likely not broken for licensed users - see https://chocolatey.org/docs/features-private-cdn.
    azure-arm: The install of mingw was NOT successful.
    azure-arm: Error while running 'C:\ProgramData\chocolatey\lib\mingw\tools\chocolateyinstall.ps1'.
    azure-arm:  See log for details.
    azure-arm:
    azure-arm: Chocolatey installed 0/1 packages. 1 packages failed.
    azure-arm:  See the log for details (C:\ProgramData\chocolatey\logs\chocolatey.log).
    azure-arm:
    azure-arm: Failures
    azure-arm:  - mingw (exited 404) - Error while running 'C:\ProgramData\chocolatey\lib\mingw\tools\chocolateyinstall.ps1'.
    azure-arm:  See log for details.
==> azure-arm: INFO: Could not find files for the given pattern(s).
==> azure-arm: Join-Path : Cannot bind argument to parameter 'Path' because it is null.

and....

==> azure-arm: Provisioning with powershell script: /home/packer/_work/2/s/packer/azure/windows_2019_devops_build_agent/scripts/Installers/Install-ContainersFeature.ps1
    azure-arm: Install Containers feature
    azure-arm:
    azure-arm: Success Restart Needed Exit Code      Feature Result
    azure-arm: ------- -------------- ---------      --------------
    azure-arm: True    Yes            SuccessRest... {Containers}
    azure-arm: WARNING: You must restart this server to finish the installation process.
    azure-arm: Skipping installation of Hyper-V feature
==> azure-arm: Restarting Machine
==> azure-arm: Waiting for machine to restart...
==> azure-arm: A system shutdown is in progress.(1115)
==> azure-arm: Timeout waiting for machine to restart.
==> azure-arm: Provisioning step had errors: Running the cleanup provisioner, if present...
==> azure-arm: 

I'm now on attempt six

Attempt six

==> azure-arm: Provisioning with powershell script: /home/packer/_work/2/s/packer/azure/windows_2019_devops_build_agent/scripts/Installers/Update-AndroidSDK.ps1
    azure-arm:
    azure-arm: SUCCESS: Specified value was saved.
    azure-arm:
    azure-arm: SUCCESS: Specified value was saved.
    azure-arm:
    azure-arm: SUCCESS: Specified value was saved.
    azure-arm:
    azure-arm: ERROR: JAVA_HOME is not set and no 'java' command could be found in your PATH.
    azure-arm:
    azure-arm: Please set the JAVA_HOME variable in your environment to match the
    azure-arm: location of your Java installation.
==> azure-arm: Provisioning step had errors: Running the cleanup provisioner, if present...
==> azure-arm: 
==> azure-arm: Cleanup requested, deleting resource group ...
==> azure-arm: Resource group has been deleted.

@a8ree have you tried Central US?

@elduddz I eventually got it to run (once) from UK South. I'm going to see if I can get some consistency with Central US as you suggest - then replicate to UK South!

@maxim-lobanov @a8ree @alepauly @miketimofeev

does anyone know anyone at hashicorp?

I have raised it with HashiCorp:
https://github.com/hashicorp/packer/issues/8972#issuecomment-606459263

maybe we can get some pleases?

Hi all.
I have only managed to get back to this feature now and can see that things are progressing. I do see a number of changes in the latest win19/20200331 release branch and was wondering where this project is heading.

I know that stability is the main issue of this thread. I have encountered my fair share of packer deploy issues (hence having to drop this for a bit) but hoped someone would be able to give me direction as to what the next steps should be.

I see, for example, that through the single packer file we can turn off antivirus scanning via a simple env variable that is read by the PowerShell script. This seems true also of the Go versions. Is this to be the new format, allowing users of this repo to selectively choose the products they wish to install?

If the above is true (which to me seems like a great idea) and allows for more granular testing of individual scripts, will there be an agreed standard for the env variable formats, e.g.

"go_install" : "false"
"go_versions": "1.9, 1.10, 1.11, 1.12, 1.13, 1.14",
"go_default": "1.14"

"python_install": "true",
"python_versions" : "2.7, 3.5, 3.6, 3.7, 3.8",
"python_deafult" : "3.8",

"php_install": "true",
"php_versions" : "7.0, 7.1, 7.2, 7.3, 7.4, 8,0",
"php_default" : "8.0"

"docker_install" : "false",
"vs2019_install : "true",
"run_scan_antivirus": "false"

This way we can test each script with a higher degree of accuracy without having to invest time and money in fruitless deploys.
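
A minimal sketch of what the per-script guard could look like, assuming the variable naming from the example above (my suggestion, not anything the repo defines today):

# Hypothetical guard at the top of an installer script, driven by an environment
# variable passed in from the packer template; the name is illustrative only.
if ($env:docker_install -eq 'false') {
    Write-Output 'docker_install is false - skipping Docker installation'
    return
}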

Also, where can we keep up to date with the stability changes being made? I can't see what changes are being made without going through all the PRs.

I have to say that the 20200331 release completed successfully in 5 hours. Not sure if it is a one-off; will see how the next one goes.

That's brilliant news and I guess that was the entire build?
Did you make no changes to the files yourself?
What region did you deploy into?
@maxim-lobanov @a8ree @alepauly @miketimofeev would you guys be able to give me an overview of what other changes you have planned for this project?

I have been running back to back deploys in West Europe over the last two days and every time I have to manually intervene to run

$Cert = New-SelfSignedCertificate -CertstoreLocation Cert:\LocalMachine\My -DnsName "$env:COMPUTERNAME"
Remove-Item -Path WSMan:\Localhost\listener\listener* -Recurse
New-Item -Path WSMan:\LocalHost\Listener -Transport HTTPS -Address * -CertificateThumbPrint $Cert.Thumbprint -Force
Stop-Service winrm
Start-Service winrm

via the portal; otherwise, packer fails to communicate with the deployed VM. Does anyone have a concrete resolution to this issue that works both manually and via DevOps?

Hello everyone,
Recently, we have done a lot of PRs / improvements to increase the stability of Windows image generation.
(Thank you to @elduddz for his contribution.) 🚀

It would be great if you could sync your repository with the latest master and share some feedback on whether your pass rate gets better or not.

@maxim-lobanov Thanks, I am not sure what is going on with this.

I have tested in UKS and Central US; the CUS build was fine, but the UKS one returned this message:

==> azure-arm: Provisioning with powershell script: E:\VSTSUnifiedAgents\HTFSPRDUAG05-Eric-A01\_work\92\s\virtual-environments\images\win/scripts/Installers/Validate-NSIS.ps1
==> azure-arm: Get-Command : The term 'makensis' is not recognized as the name of a cmdlet, function, script file, or operable
==> azure-arm: program. Check the spelling of the name, or if a path was included, verify that the path is correct and try again.
==> azure-arm: At C:\Windows\Temp\script-5eb2a213-359c-2c6a-3dc5-432063def61a.ps1:8 char:5
    azure-arm: Nullsoft Install System (NSIS) is not installed
==> azure-arm: + if (Get-Command -Name makensis)
==> azure-arm: +     ~~~~~~~~~~~~~~~~~~~~~~~~~~
==> azure-arm:     + CategoryInfo          : ObjectNotFound: (makensis:String) [Get-Command], CommandNotFoundException
==> azure-arm:     + FullyQualifiedErrorId : CommandNotFoundException,Microsoft.PowerShell.Commands.GetCommandCommand

Looking at the install step:

==> azure-arm: Provisioning with Powershell...
==> azure-arm: Provisioning with powershell script: E:\VSTSUnifiedAgents\HTFSPRDUAG05-Eric-A01\_work\92\s\virtual-environments\images\win/scripts/Installers/Install-NSIS.ps1
    azure-arm: Running [#1]: choco install nsis -y
    azure-arm: Chocolatey v0.10.15
    azure-arm: Installing the following packages:
    azure-arm: nsis
    azure-arm: By installing you accept licenses for the packages.
    azure-arm: Progress: Downloading nsis.install 3.5.0.20200106... 100%
    azure-arm: Progress: Downloading nsis 3.5.0.20200106... 100%
    azure-arm:
    azure-arm: nsis.install v3.5.0.20200106 [Approved]
    azure-arm: nsis.install package files install completed. Performing other installation steps.
    azure-arm: Downloading nsis.install
    azure-arm:   from 'https://astuteinternet.dl.sourceforge.net/project/nsis/NSIS%203/3.05/nsis-3.05-setup.exe'
    azure-arm: Progress: 52% - Saving 783.76 KB of 1.46 MBERROR: The remote file either doesn't exist, is unauthorized, or is forbidden for url 'https://astuteinternet.dl.sourceforge.net/project/nsis/NSIS%203/3.05/nsis-3.05-setup.exe'. Exception calling "Read" with "3" argument(s): "Received an unexpected EOF or 0 bytes from the transport stream."
    azure-arm: This package is likely not broken for licensed users - see https://chocolatey.org/docs/features-private-cdn.
    azure-arm: The install of nsis.install was NOT successful.
    azure-arm: Error while running 'C:\ProgramData\chocolatey\lib\nsis.install\tools\chocolateyInstall.ps1'.
    azure-arm:  See log for details.
    azure-arm:
    azure-arm: nsis v3.5.0.20200106 [Approved]
    azure-arm: nsis package files install completed. Performing other installation steps.
    azure-arm:  The install of nsis was successful.
    azure-arm:   Software install location not explicitly set, could be in package or
    azure-arm:   default install location if installer.
    azure-arm:
    azure-arm: Chocolatey installed 1/2 packages. 1 packages failed.
    azure-arm:  See the log for details (C:\ProgramData\chocolatey\logs\chocolatey.log).

So it looks like Choco-Install has detected that nsis is installed, but the validation cannot run one of its commands? I am wondering if looking for the program in choco list isn't 100% proof; is it possible to catch the error?
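
One hedged way to tighten the helper, given that the log above shows the nsis meta package reporting success while nsis.install failed underneath it, is to probe for the command the validator actually needs before treating the install as done. makensis is just the NSIS example here; in practice the command name would be parameterised per package.

# Sketch: don't trust 'choco list' alone - confirm the tool is on PATH after the install.
# Within the same session you may need chocolatey's refreshenv / Update-SessionEnvironment
# first, so that newly added PATH entries are visible.
if (-not (Get-Command -Name 'makensis' -ErrorAction SilentlyContinue)) {
    Write-Warning "nsis reported as installed by choco, but makensis is not on PATH; counting this attempt as failed"
}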

@elduddz, Hi, This is a known choco issue - https://github.com/chocolatey/choco/issues/1521.

Only a few years old 😏

@elduddz Did you get a solution for the 6 hour hosted agent limit? I have hit the same issue now: https://github.com/actions/virtual-environments/issues/1105

It would be really strange if it was not possible to use the hosted agents to build an image of the hosted agent because it takes too long! Especially as this is the recommendation from Microsoft if you need more space or speed or other requirements....

I'm afraid @gregpakes you have hit one of the hard constraints of the hosted agents, even on a paid-for version (I think it is in the pipeline docs).

We went to our own agents.

:-( thanks

Hello everyone,
Recently, we have done the following things:

  • Added retry logic where possible in Windows image generation
  • Improved Windows tests so they run twice (right after tool installation and again at the end of image generation). This catches broken software earlier and avoids wasting build minutes.

In our CI, we see a pass rate of about 90-95%, so Windows image generation looks pretty stable for us.
If you still experience issues with unstable builds, feel free to create new issues and provide details.
