I just completed creating my first Azure Function App, complete with five separate PowerShell functions (four triggered by HTTP/REST, one triggered when messages are added to an Azure Storage Queue). While working on this I ran into several design issues that raised many questions I hope to get answered. For items that you agree are issues with Azure Function Apps, I will log separate issues so that they can be tracked and properly fixed. For any items that are not issues and are simply due to a misunderstanding on my part, I would really appreciate it if you could highlight what I am missing.
When it comes to PowerShell modules, I noticed that the original Azure Function Apps design was to simply search for all .dll, .psd1 and .psm1 files in a modules subfolder under an Azure function, and that about 5 months ago that was extended to recursively search for .dll, .psd1 and .psm1 files. Both the original design and this extended design are seriously flawed, a significant step backwards for module support in PowerShell, and should be reconsidered.
The current design has the following flaws:
- It ignores $env:PSModulePath and all of the value that brings to PowerShell modules.

With the design as it is today, it appears that you are reinventing the wheel with the thinking of a PowerShell 1.0 scripter. This design needs to be discarded in favor of a proper one.
Azure Function Apps should automatically update $env:PSModulePath in the runspace used to invoke an Azure Function that is based on PowerShell such that it includes two additional folder paths: one for all functions within an Azure Function App, and one for each PowerShell function within an Azure Function App. The folder path specific to a PowerShell function should take precedence (be defined first in $env:PSModulePath) over the folder path for all functions. This would allow for command overrides. The current design (to recursively search a modules folder for specific files) should be discarded in favor of this, so that Azure Function Apps properly leverage PowerShell's built-in module discovery mechanism ($env:PSModulePath).
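To make that recommendation concrete, here is a sketch of what the runtime could do before invoking a PowerShell function. The folder paths are hypothetical; the actual well-known locations would be chosen by the Azure Functions team:

```powershell
# Hypothetical paths; the function-specific folder is listed first so
# its modules take precedence over (can override) the shared ones.
$functionModules = "${env:HOME}\site\wwwroot\MyFunctionName\Modules"
$sharedModules   = "${env:HOME}\site\wwwroot\Modules"
$env:PSModulePath = "${functionModules};${sharedModules};${env:PSModulePath}"

# With that in place, PowerShell's normal module discovery and
# autoloading just work: no explicit Import-Module call is required.
```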
There should also be a well-known, pre-created folder for the shared modules location that users can copy modules into, at least using the Microsoft Azure Storage Explorer, but eventually via the Azure Function Apps UI.
An added benefit to this design, which is well worth pursuing, is that you could have an "Add to Azure Function App" button next to modules in the PowerShell Gallery. This would be very similar to the "Add to Azure Automation" button that exists in the PowerShell Gallery today. It would allow you to install modules into Azure Function Apps, making them available to all PowerShell functions or only to a specific PowerShell function.
You can work around this issue today by manually creating a modules folder with the Microsoft Azure Storage Explorer. I created one in the /data folder for my Azure Function App. Once that is created, place any modules you want to be able to load, including their dependencies, inside of that modules folder as is (don't extract the contents, just copy over the root module folders). Once those are in place, add a function similar to the following to any PowerShell function in your Azure Function App where you want to use those modules, and manually load the module with Import-ModuleEx:
```powershell
function Import-ModuleEx {
    [CmdletBinding()]
    [OutputType([System.Void])]
    param(
        [Parameter(Position=0, Mandatory=$true)]
        [ValidateNotNullOrEmpty()]
        [System.String]
        $Name
    )
    try {
        # Temporarily add the shared modules folder to $env:PSModulePath
        $oldPSModulePath = $env:PSModulePath
        $modulePathEntries = @($env:PSModulePath -split ';')
        $localModulesFolder = "${env:HOME}\data\Modules"
        if ($modulePathEntries -notcontains $localModulesFolder) {
            $env:PSModulePath += ";${localModulesFolder}"
        }
        Import-Module -Name $Name -Verbose
    } finally {
        # Restore the original module path
        $env:PSModulePath = $oldPSModulePath
    }
}
```
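With that helper in place, loading a module you copied into the shared folder looks like this (the module name is just an example):

```powershell
# Loads MyTeamUtilities (and its dependencies) from ${env:HOME}\data\Modules
Import-ModuleEx -Name 'MyTeamUtilities'
```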
Realistically, the modifications of $env:PSModulePath here should not be necessary, nor should I have to manually invoke Import-Module (nor my Import-ModuleEx wrapper). I should simply be able to invoke a command and let PowerShell's autoloading do the rest for me. That is how modules have worked since PowerShell 2.0 was released in 2009.
If this is not properly recognized as a design issue, I would appreciate someone explaining to me why the wheel needs to be reinvented here for Azure Function Apps.
Within my Azure Function App, I have five separate PowerShell functions. Three of those functions use the same set of credentials to connect to a managed resource, and the same csv data to do something with that resource. The current interface allows me to upload files I have on disk, or add new files directly in the UI, however referencing those files seems to be a challenge, for the following reasons:
- The $pwd variable and the [Environment]::CurrentDirectory property both report the location as C:\Windows\System32.
- The $env:HOME environment variable refers to a folder that is three levels higher than the folder where the run.ps1 file that defines your function body is located.
- The $PSScriptRoot common variable is $null in the scope of the function when it is executed, because of how that function is executed (as a dynamically created and invoked script block instead of as a PowerShell file).
- $MyInvocation cannot be used to identify the path to the script file either.

The only way it seems I can reference files I upload for a specific function is by using the entire path, which I figured out is ${env:HOME}/site/wwwroot/_MyAzureFunctionName_/_MyFileName_.
To further complicate things, there does not seem to be a way to upload files such that they are accessible to multiple Azure functions from the Azure Function App console. Instead I need to choose a file location (which I did: I'm using /data/Files, right next to my /data/Modules folder), and then use the Azure Storage Explorer to upload the shared files I want to reference from multiple functions into that folder. Then, I can reference my files using the path ${env:HOME}/data/Files/_MyFileName_.
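For example, reading a shared CSV from that folder inside any function's run.ps1 looks like this (file names are illustrative):

```powershell
# Shared data uploaded via Azure Storage Explorer to /data/Files
$csvPath = "${env:HOME}/data/Files/resources.csv"
$records = Import-Csv -Path $csvPath

# A per-function file must be referenced by its full wwwroot path
$configPath = "${env:HOME}/site/wwwroot/MyFunctionName/config.json"
```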
This all needs to be much easier. There really should be a well-known folder for files shared across all functions in an Azure Function App, and a well-known folder for files specific to each individual function.
You can create these manually, using the folder paths I shared or your own, but the issue with this workaround is that it is manual and may differ from one Azure Function App to another.
Azure Automation, and SMA before it, both have support for credential management in a secure fashion. Azure Function Apps need something similar so that credentials can be managed in a secure way and not visible via plain text viewing of files or scripts on a screen.
This one is pretty straightforward -- you just need an encrypted store to put credentials into, and a PowerShell command that can be used to get them back out, unencrypted, so that they can be used in a function inside of an Azure Function App.
Right now I'm storing my credentials unencrypted in separate json files so that they at least won't appear on screen if I am demonstrating my functionality to someone else.
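As a rough sketch, the unencrypted-JSON workaround described above looks like this (the file name and property names are illustrative):

```powershell
# NOT secure: the password sits in plain text on disk. This only keeps
# the credential off the screen during demos.
$raw = Get-Content -Raw -Path "${env:HOME}/data/Files/credential.json" |
    ConvertFrom-Json
$password = ConvertTo-SecureString -String $raw.Password -AsPlainText -Force
$credential = New-Object System.Management.Automation.PSCredential (
    $raw.UserName, $password)
```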
Some PowerShell commands have verbose output if requested, and you can use the Write-Verbose command to write something to the verbose stream. This is very useful when troubleshooting, and it distinguishes between output data and verbose messaging. There doesn't seem to be a way to enable verbose output in Azure Functions, and worse, even if you manually set it in a script by setting $VerbosePreference = 'Continue', you don't get any verbose messages in the logs or output stream.
Verbose output is important -- there should be an option to turn on verbose output for each function without modifying that function's code, and Azure Function Apps should most certainly show verbose output when PowerShell is configured to show it.
Workaround: none.
Like the Verbose output issue, Warning output is ignored/discarded. This is a big limitation that should be fixed.
Support warning output -- it's there for a reason!
Workaround: none.
Azure Function Apps stray from the default error handling behaviour that exists in PowerShell by treating all errors, including non-terminating errors, as terminating errors. This behavioural change is concerning: Azure Function Apps run PowerShell in a way that differs from how PowerShell itself runs. I understand the need to block certain things for security reasons -- that makes complete sense; however, I don't feel changing core PowerShell behaviour in Azure Function Apps is the right thing to do, because it makes every script, function, or module out there potentially hit or miss -- if it runs in native PowerShell, it may not run in Azure Function Apps for the wrong reasons. Conversely, users who are less familiar with PowerShell and learn how PowerShell works from Azure Function Apps will eventually discover that PowerShell doesn't work that way anywhere except in Azure Function Apps.
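A minimal illustration of the difference, runnable in any standard PowerShell session:

```powershell
# Get-Item raises a non-terminating error for the missing path, so in
# standard PowerShell execution continues and the next line still runs.
Get-Item -Path 'C:\DoesNotExist' -ErrorAction Continue
Write-Output 'Still running'   # reached in native PowerShell

# Under the behaviour described above, the same non-terminating error
# is treated as terminating and 'Still running' is never reached.
```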
Fix this so that Azure Function Apps' use of PowerShell is consistent with how PowerShell itself works.
Workaround: none. You can only use Write-Output to capture text in a benign way in the logs, and Out-File to output data to the caller of your function.
I believe all of these unnecessary differences should be pushed out of the product in favour of consistency with the core PowerShell language, except where it must behave differently for security reasons.
Thanks for listening if you made it this far.
Regarding the shared files, I've found there are currently a few scenarios that work. I'd especially discourage using the Azure Storage Explorer for anything other than diagnostics.
1. $PsHome/www/...
2. Using $storageCtx, or saving a new connection string in app settings.

Perhaps in the case of (2) the runtime could hydrate $storageCtx for the user already, since I know it tries to keep the runtime hot.
@KirkMunro, @jdkang Thank you for trying out PowerShell in Azure Functions at this experimental stage. More importantly, thank you for your detailed and constructive feedback. We will add your suggestions to our list of requirements for consideration in our future work for PowerShell support.
@tohling My pleasure. I have one more important thought on this topic to share while it is top of mind.
Right now it seems that Azure Functions written in PowerShell have to return data via the Out-File cmdlet, putting all output data into a file referenced by the $res variable, and that log messages are written by either using the Write-Output command or simply not capturing output. This is also a questionable design, for the following reasons:
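For reference, the output pattern described above looks roughly like this inside a function's run.ps1, where $res is the output binding path the runtime provides (the JSON shape is just an example):

```powershell
# Build the response payload and write it to the file bound to $res.
$body = @{ status = 'ok'; count = 3 } | ConvertTo-Json
$body | Out-File -Encoding Ascii -FilePath $res

# Anything sent to the output stream ends up in the log instead:
Write-Output 'This line appears in the function log, not the response.'
```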
I think I can get all of this (minus the PowerShell v5-specific content which requires a backend upgrade that I cannot do) working using proxy functions and by changing my Azure Functions to use a specific framework/format. It really should be designed from the ground up to work this way though.
FYI You can see verbose/warning output in the main log stream by redirecting it to standard out using the 4>&1 and 3>&1 at the end of your command. You can also redirect it to a file log of course, e.g. 4>>verbose.log
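For example (the command name here is a placeholder):

```powershell
# Merge the verbose (4) and warning (3) streams into standard output
# so they show up in the main log stream.
Invoke-MyFunctionBody 4>&1 3>&1

# Or append the verbose stream to a log file instead.
Invoke-MyFunctionBody 4>>"${env:HOME}/data/verbose.log"
```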
+1 for sharing information across functions. Being able to reference $PSScriptRoot would definitely be handy.