Runtime: Support single-file distribution

Created on 5 Oct 2018  ·  225 Comments  ·  Source: dotnet/runtime

This issue tracks progress on the .NET Core 3.0 single-file distribution feature.
Here's the design doc and staging plan for the feature.

area-Single-File

Most helpful comment

First start is expected to be much slower - it extracts the app onto the disk - so a lot of IO.

This should never have happened; you make the user experience horrible BY DESIGN. Horrible choice; that is how you make users hate the tech the developer is using for them.

As mentioned by @Safirion we are working on the next improvement for single-file which should run most of the managed code from the .exe directly (no extract to disk). Can't promise a release train yet though.

Why release this officially now if it's going to change soon? It should be marked as preview/experimental.

In my opinion this is a waste of time and resources. Focus on AOT compilation and tree-shaking, put all your resources there, and stop with the hacks.

All 225 comments

Out of interest, how does this initiative compare to CoreRT? They seem like similar efforts?

Is it related to 'possibly native user code', i.e. will this still allow code to be JIT-compiled, not just AOT?

Also, I assume that the runtime components ('Native code (runtime, host, native portions of the framework..') will be the ones from the CoreCLR repo?

You're asking great questions, but since this is still early in design, I don't have great answers yet.

Out of interest, how does this initiative compare to CoreRT? They seem like similar efforts?

There would likely be somewhat similar outcomes (a single file), but the design may have different performance characteristics or features that do/don't work. For example, a possible design could be to essentially concatenate all of the files in a .NET Core self-contained application into a single file. That's 10s of MB and might start more slowly, but on the other hand, it would allow the full capabilities of CoreCLR, including loading plugins, reflection emit and advanced diagnostics. CoreRT could be considered the other end of the spectrum -- it's single-digit MB and has a very fast startup time, but by not having a JIT, it can't load plugins or use reflection emit and build time is slower than most .NET devs are used to. It currently has a few other limitations that could get better over time, but might not be better by .NET Core 3.0 (possibly requiring annotations for reflection, missing some interop scenarios, limited diagnostics on Linux). There are also ideas somewhere between the two. If folks have tradeoffs they'd like to make/avoid, we'd be curious to hear about them.

Is it related to 'possibly native user code', i.e. will this still allow code to be JIT-compiled, not just AOT?

By "native user code," I meant that your app might have some C++ native code (either written by you or a 3rd-party component). There might be limits on what we can do with that code -- if it's compiled into a .dll, the only way to run it is off of disk; if it's a .lib, it might be possible to link it in, but that brings in other complications.

Also, I assume that the runtime components ('Native code (runtime, host, native portions of the framework..') will be the ones from the CoreCLR repo?

Based on everything above, we'll figure out which repos are involved. "Native portions of the framework" would include CoreFX native files like ClrCompression and the Unix PAL.

A single-file distribution in this manner, even if it has slightly slower startup time, can be invaluable for ease of deployment. I would much rather have the full power available than be forced to give up some of it.

Some scenarios that are of interest to us. How would this work in terms of cross platform?
I assume we'll have a separate "file" per platform?

With regards to native code, how would I be able to choose different native components based on the platform?

Some scenarios that are of interest to us. How would this work in terms of cross platform?
I assume we'll have a separate "file" per platform?
With regards to native code, how would I be able to choose different native components based on the platform?

@ayende, I'm quoting from @morganbr comment:

a possible design could be to essentially concatenate all of the files in a .NET Core self-contained application into a single file.

The current cross-platform story for self-contained applications is creating a deployment package per platform that you'd like to target, because you ship the application with the runtime, which is platform-specific.

@morganbr I appreciate you taking the time to provide such a detailed answer.

I'll be interested to see where the design goes, this is a really interesting initiative

I have a few questions for folks who'd like to use single-file. Your answers will help us narrow our options:

  1. What kind of app would you be likely to use it with? (e.g. WPF on Windows? ASP.NET in a Linux Docker container? Something else?)
  2. Does your app include (non-.NET) C++/native code?
  3. Would your app load plugins or other external dlls that you didn't originally include in your app build?
  4. Are you willing to rebuild and redistribute your app to incorporate security fixes?
  5. Would you use it if your app started 200-500 ms more slowly? What about 5 seconds?
  6. What's the largest size you'd consider acceptable for your app? 5 MB? 10? 20? 50? 75? 100?
  7. Would you accept a longer release build time to optimize size and/or startup time? What's the longest you'd accept? 15 seconds? 30 seconds? 1 minute? 5 minutes?
  8. Would you be willing to do extra work if it would cut the size of your app in half?

  1. Console/UI app on all platforms.
  2. Maybe as a third party component.
  3. Possibly yes.
  4. Yes, especially if there is a simple ClickOnce-like system.
  5. Some initial slowdown can be tolerated. Can point 3 help with that?
  6. Depends on assets. Hello world should have size on the order of MB.
  7. Doesn't matter if it is just production.
  8. Like whitelisting reflection stuff? Yes.

@morganbr, do you think that these questions are better asked to a broader audience; i.e., broader than the people who know about this GitHub issue?

For example, a possible design could be to essentially concatenate all of the files in a .NET Core self-contained application into a single file.

Looking at compressing it; or using a compressed file system in the file?

@tpetrina, thanks! Point 3 covers a couple of design angles:

  1. Tree shaking doesn't go well with loading plugins that the tree shaker hasn't seen, since it could eliminate code the plugin relies on.
  2. CoreRT doesn't currently have a way to load plugins.

Point 5 is more about whether we'd optimize for size or startup time (and how much).
Point 8: yes, I was mostly thinking about reflection stuff.

@TheBlueSky, we've contacted other folks as well, but it helps to get input from the passionate folks in the GitHub community.

@benaadams, compression is on the table, but I'm currently thinking of it as orthogonal to the overall design. Light experimentation suggests zipping may get about 50% size reduction at the cost of several seconds of startup time (and build time). To me, that's a radical enough trade-off that if we do it, it should be optional.

@morganbr several seconds of startup time when using compression? I find that hard to believe when considering that UPX claims decompression speeds of

~10 MB/sec on an ancient Pentium 133, ~200 MB/sec on an Athlon XP 2000+.

@morganbr, for me the answers are:

1) Service (console app running Kestrel, basically). Running as Windows Service / Linux Daemon or in docker.
2) Yes
3) Yes, typically managed assemblies loaded using Assembly.LoadFrom. These are provided by the end user.
4) Yes, that is expected. In fact, we already bundle the entire framework anyway, so no change from that perspective.
5) As a service, we don't care that much for the startup time. 5 seconds would be reasonable.
6) 75MB is probably the limit. A lot depends on the actual compressed size, since all packages are delivered compressed.
7) For release builds, longer (even much longer) build times are acceptable.
8) Yes, absolutely. Size doesn't matter that much, but smaller is better.

Something that I didn't see mentioned and is very important is the debuggability of this.
I hope that this isn't going to mangle stack traces, and we would want to be able to include pdb files or some sort of debugging symbols.

About compression, take into account the fact that in nearly all cases, the actual delivery mechanism is already compressed.
For example, nuget packages.
Users are also pretty well versed in unzipping things, so that isn't much of an issue.
I think you can do compression on the side.
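For context, the plugin scenario described in answer 3 typically looks something like this on .NET Core (a minimal sketch; the plugin path and class name are hypothetical):

```csharp
using System;
using System.IO;
using System.Reflection;
using System.Runtime.Loader;

// Minimal sketch of the plugin scenario from answer 3: loading a managed
// assembly from a user-supplied path at runtime. "plugins/MyPlugin.dll"
// is a hypothetical path.
class PluginLoader
{
    public static Assembly LoadPlugin(string path)
    {
        // AssemblyLoadContext.Default resolves the plugin's dependencies
        // against the host app; a custom AssemblyLoadContext would isolate
        // (and allow unloading of) the plugin instead.
        return AssemblyLoadContext.Default.LoadFromAssemblyPath(Path.GetFullPath(path));
    }
}

// Usage: var asm = PluginLoader.LoadPlugin("plugins/MyPlugin.dll");
```

This is the pattern a single-file design would need to keep working: a path to a managed dll on disk, resolved at runtime, outside the original publish set.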

Thanks, @ayende! You're right that I should have called out debuggability. I think there are only a few minor ways debugging could be affected:

  1. It might not be possible to use Edit and Continue on a single-file (due to needing a way to rebuild and reload the original assembly)
  2. The single-file build might produce a PDB or some other files that are required for debugging beyond those that came with your assemblies.
  3. If CoreRT is used, it may have some debugging features that get filled in over time (especially on Linux/Mac).

When you say "include pdb files", do you want those _inside_ the single file or just the ability to generate them and hang onto them in case you need to debug the single-file build?

1) Not an issue for us. E&C is not relevant here since this is likely to be only used for actual deployment, not day to day.
2) Ideally, we have a single file for everything, including the PDBs, not one file and a set of pdbs on the side. There is already the embedded PDB option, if that would work, it would be great.
3) When talking about debug, I'm talking more about production time rather than attaching a debugger live. More specifically, stack trace information including file & line numbers, being able to resolve symbols when reading dump, etc.
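For reference, the embedded PDB option mentioned above is an existing compiler/MSBuild setting; enabling it in the project file looks like this:

```xml
<PropertyGroup>
  <!-- Embed the PDB into the produced assembly instead of emitting a side file -->
  <DebugType>embedded</DebugType>
</PropertyGroup>
```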

  1. Mainly services but some UI
  2. Some do, but this wouldn't be urgent
  3. Yes
  4. Yes
  5. A few seconds is ok
  6. Doesn't matter to us. Sum of dll size is fine
  7. Ideally not
  8. Size is not of primary importance for us

Another question for us is whether you'd be able to do this for individual components too (perhaps even staged)? E.g. we have library dlls that use lots of dependencies. If we could package those, it would save a lot of pain with version management etc. If these in turn could be packaged into an exe, that would be even nicer.

  1. Services and some UI.
  2. Not at the moment.
  3. Yes. Ideally plugins that could be loaded from a folder and reloaded at runtime.
  4. Yes
  5. Not a problem so long as we aren't pushing 10-15+.
  6. Sum of DLL size, or similar.
  7. Yes. For a production build, time isn't really a problem so long as debug/testing builds build reasonably quickly.
  8. Depends, but the option would be handy.

  1. Service and UI.
  2. Sometimes.
  3. Yes, usually.
  4. Yes.
  5. It is best to be less than 5 seconds.
  6. The UI is less than 5 seconds, Service doesn't matter.
  7. The build time is not important, and the optimization effect is the most important.
  8. Yes.

@tpetrina @ayende @bencyoung @Kosyne @expcat you responded yes to question 3 ("Would your app load plugins or other external dlls that you didn't originally include in your app build?") - can you tell us more about your use case?

The main selling point of a single file distribution is that there is only one file to distribute. If your app has plugins in separate files, what value would you be getting from a single file distribution that has multiple files anyway? Why is "app.exe+plugin1.dll+plugin2.dll" better than "app.exe+coreclr.dll+clrjit.dll+...+plugin1.dll+plugin2.dll"?

app.exe + 300+ dlls - which is the current state today is really awkward.
app.exe + 1-5 dlls which are usually defined by the user themselves is much easier.

Our scenario is that we allow certain extensions by the user, so we would typically only deploy a single exe and the user may add additional functionality as needed.

It isn't so much that we _plan_ to do that, but we want to _be able_ to do that if the need arise.

@ayende Agreed, same with us.

Also, if we could do this at the dll level, then we could package dependencies inside our assemblies so they didn't conflict with client assemblies. I.e., by choosing a version of Newtonsoft.Json you are currently defining it for all programs, plugins, and third-party assemblies in the same folder, but if you could embed it, then third parties would have flexibility and version compatibility would increase.

Agree with @ayende .

Thanks, everyone for your answers! Based on the number of folks who will either use native code or need to load plugins, we think the most compatible approach we can manage is the right place to start. To do that, we'll go with a "pack and extract" approach.

This will be tooling that essentially embeds all of the application and .NET's files as resources into an extractor executable. When the executable runs, it will extract all of those files into a temporary directory and then run as though the app were published as a non-single file application. It won't start out with compression, but we could potentially add it in the future if warranted.

The trickiest detail of this plan is where to extract files to. We need to account for several scenarios:

  • First launch -- the app just needs to extract to somewhere on disk
  • Subsequent launches -- to avoid paying the cost of extraction (likely several seconds) on every launch, it would be preferable to have the extraction location be deterministic and allow the second launch to use files extracted by the first launch.
  • Upgrade -- If a new version of the application is launched, it shouldn't use the files extracted by an old version. (The reverse is also true; people may want to run multiple version side-by-side). That suggests that the deterministic path should be based on the contents of the application.
  • Uninstall -- Users should be able to find the extracted directories to delete them if desired.
  • Fault-tolerance -- If a first launch fails after partially extracting its contents, a second launch should redo the extraction
  • Running elevated -- Processes run as admin should run only from locations writable solely by admins, to prevent low-integrity processes from tampering with them.
  • Running non-elevated -- Processes run without admin privileges should run from user-writable locations

I think we can account for all of those by constructing a path that incorporates:

  1. A well-known base directory (e.g. %LOCALAPPDATA%\dotnetApps on Windows and user profile locations on other OSes)
  2. A separate subdirectory for elevated
  3. Application identity (maybe just the exe name)
  4. A version identifier. The version number is probably useful, but insufficient since it also needs to incorporate exact dependency versions. A per-build guid or hash might be appropriate.

Together, that might look something like c:\users\username\AppData\Local\dotnetApps\elevated\MyCoolApp\1.0.0.0_abc123\MyCoolApp.dll
(Where the app is named MyCoolApp, its version number is 1.0.0.0 and its hash/guid is abc123 and it was launched elevated).
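As a rough illustration (not a committed design; the "dotnetApps" base name, the elevated/user split, and the use of a truncated content hash are assumptions), the path construction could look like:

```csharp
using System;
using System.IO;
using System.Security.Cryptography;

// Illustrative sketch of the deterministic extraction path described above.
// The directory names and the truncated SHA-256 of the bundle as the
// per-build identifier are assumptions, not a committed design.
static class ExtractionPath
{
    public static string For(string appName, string version, byte[] bundleBytes, bool elevated)
    {
        string baseDir = Path.Combine(
            Environment.GetFolderPath(Environment.SpecialFolder.LocalApplicationData),
            "dotnetApps");

        // Hashing the bundle contents means a rebuilt app (even with the same
        // version number but different dependency versions) extracts to a
        // fresh directory, enabling side-by-side runs and safe upgrades.
        string hash;
        using (var sha = SHA256.Create())
            hash = BitConverter.ToString(sha.ComputeHash(bundleBytes))
                               .Replace("-", "").Substring(0, 8).ToLowerInvariant();

        return Path.Combine(baseDir, elevated ? "elevated" : "user",
                            appName, $"{version}_{hash}");
    }
}
```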

There will also be work required to embed files into the extractor. On Windows, we can simply use native resources, but Linux and Mac may need custom work.

Finally, this may also need adjustments in the host (to find extracted files) and diagnostics (to find the DAC or other files).

CC @swaroop-sridhar @jeffschwMSFT @vitek-karas

I feel like this cure is worse than the disease. If we have to deal with external directories (different across OS's), updating, uninstalling and the like, that flies in the face of my reason for desiring this feature in the first place (keeping everything simple, portable, self contained and clean).

If it absolutely has to be this way, for my project, I'd much prefer a single main executable and the unpacked files to live in a directory alongside that executable, or possibly the ability to decide where that directory goes.

That's just me though, I'm curious to hear from others as well.

I have to agree here; using a different directory can cause many exciting problems - e.g. you place a config file alongside the exe and that config file is not picked up because the "real" directory is somewhere else.
Disk space could be a problem too, as could random file locks due to access policies, etc.
I would like to use this feature, but not if it adds a host of failure modes which are impossible to detect beforehand.

Agreed with @Kosyne - The proposed initial solution seems to simply automate an "installer" of sorts. If that was the limit of the problem we're trying to solve with a single exec then I think we'd have all simply performed that automation ourselves.

The key goal of the single exec proposal should be to be able to run an executable on an unmanaged system. Who knows if it even has write access to any chosen destination "install" directory? It should certainly not leave artefacts of itself after launch either (not by default).

As a small modification to the existing proposal to satisfy the above: Could we not unpack into memory and run from there?

Agree with the rest of the comments. Unzipping to another location is something that is _already_ available.
We can have a self-extracting zip which runs the extracted files fairly easily. That doesn't answer a lot of the concerns this is meant to answer and is just another name for installation.

The location of the file is important. For example, in our case, that would mean:

  • Finding the config file (which we generate on the fly if not there and let the user customize)
  • Finding / creating data files, which is usually relative to the source exe.
  • The PID / name of the process should match, to ensure proper monitoring / support.

One of our users needs to run our software from a DVD; how would that work on a system that may not actually _have_ an HD to run on?

I agree that it would be better to do everything in memory. And the concern about the startup time isn't that big, I would be fine paying this for every restart, or manually doing a step to alleviate that if needed.

Another issue here is the actual size. If this is just (effectively) an installer, that means that we are talking about file sizes for a reasonable app in the 100s of MB, no?

It seems that building the proposed solution does not require (many if any) CLR changes. Users can already build a solution like that. There is no point in adding this to CoreCLR. Especially, since the use case for this is fairly narrow and specific.

@GSPP This seems like basically something that I can do today with 7z-Extra, I agree that if this is the case, it would be better to _not_ have it at all.

Sorry I'm late to this party, I got here after following a link posted in a duplicate ticket that I was tracking.
After reading the latest comments here, I'm sorry to see that you're considering packing and extracting. This seems like overkill; why not start with the ability to deploy SFAs for basic console apps? It seems to me that it should be possible to create a rudimentary console app with some network, IO, and some external dependencies (NuGet, etc.) that sits in a single file.
I guess what I'm trying to say is that instead of gathering the requirements of everyone, gather the requirements of no one and instead start small with a first iteration that makes sense for everyone and yields results quickly.

This seems like basically something that I can do today with 7z-Extra

You are right that a number of programming-environment-agnostic solutions to address this problem exist today. Another example out of many: https://www.boxedapp.com/exe_bundle.html

The added value here would be integration into the dotnet tooling so that even non-expert users can do it easily. Expert users can do this today by stitching existing tools together as you have pointed out.

Personally, I agree with you that it is not clear that we are making the right choice here. We had a lot of discussion about this within the core team.

Putting an intern on it and coming out with a global tool (that is recommended, but not supported) would do just as well, and can be fairly easy to install as well.

Effectively, we are talking about dotnet publish-single-file, and that would do everything required behind the scenes.

I don't see anything that is actually required by the framework or the runtime to support this scenario, and by making this something that is explicitly outside the framework you are going to allow users a lot more freedom about how to modify this.
No "need" to go through a PR (with all the associated ceremony, backward compat, security, etc.) as you would if you wanted to make a change to the framework.
You just fork a common sideline project and use that.

Note that as much as I would like this feature, I would rather not have something in (which means that it is _always_ going to be in) that can be done just as well from the outside.

I want to ask a higher level question: What is the main motivation for customers desiring single-file distribution?
Is it primarily:

  1. Packaging? If so, regardless of whether the solution is inbox or a third-party tool, what characteristics are most important?
    a) Startup time (beyond the first run)
    b) Ability to run in non-writable environments
    c) Not leaving behind files after the run
    d) Not having to run an installer
  2. Performance?
    a) Speed (static linking of native code, avoiding multiple library loads, cross-module optimizations, etc.)
    b) Code size (need only one certificate, tree shaking, etc.)
  3. Any others?

I'm a bit concerned that this feature is being read by some as not so important, or at least not a good goal to include in the core feature set here. I'd like to reiterate that I think it would be immensely powerful for any application that acts as a service or sub-module of a larger application, one that you may not even be the developer of. I don't want to have to pass on dependency requirements that can only be resolved with installations (auto or otherwise), or post-execution artefacts on disk that might need additional security elevation by the user, etc.
Right now, .NET just isn't a good choice for this niche problem. But it could be (And should be IMO).

The compiled executable must:

  • Package all relevant dependencies
  • Have no execution artefacts (No disk writes)

Re @swaroop-sridhar I daresay every application will have its own unique order of performance needs and so I'd imagine the best approach after tackling the core solution is to pick the low hanging fruit and go from there.

@swaroop-sridhar For me, this is about packaging and ease of use for the end user.
No need to deal with installation of system wide stuff, just click and run.

This is important because we allow our software to be embedded, and a single file addon is a lot easier to manage.

@strich The point about embedding is a good one. We are commonly used as a component in a micro service architecture, and reducing the deployment overhead will make that easier.

The problem isn't whether or not this is an important feature. The issue is whether the proposed solution (essentially zipping things, at this point) is required to be _in the core_.
Stuff that is in the core has a much higher standard for changes. Having this as an external tool would be better, because that is easier to modify and extend.

To be honest, I would much rather see a better option altogether. For example, it is possible to load dlls from memory instead of files (requires some work, but possible).
If that happens, you can run the _entire_ thing purely from memory, with unpacking being done to pure memory and no disk hits.

That is something that should go in the core, because it will very likely require modifications to the runtime to enable that.
And that would be valuable in and of itself. And not something that we can currently do.

So a good example is to look at the experience of Go tools like Hashicorp's Consul. A single exe file that you can drop onto any machine and run. No installers, no copying folders around, no hunting for config files in lists of hundreds of files, just a really nice end-user experience.

For us I'm not sure the in-memory approach would work, as we'd also like this to work for plugin dlls as well (so dropping a plugin would also be a single file rather than all its dependencies), but any progress would be good. We've looked at Fody.Costura and that works well for some stuff, but we've had issues with .NET Core for that.

So a good example is to look at the experience of Go tools like Hashicorp's Consul

Eh, for tools like consul the ideal solution would be corert, not this self-extract improvisation.

@mikedn Why's that? I don't see what AOT or JIT compilation has to do with deployment method?

I want to second @strich's words: single-file deployments would be a breath of fresh air for microservice architecture deployments, as well as for any console app that is, or at least starts its life as, a small tool with command-line switches.

Why's that? I don't see what AOT or JIT compilation has to do with deployment method?

Because it gives you exactly what you want - a single exe file that you can drop onto any machine (well, any machine having a suitable OS, it's a native file after all) and run. It also tends to use less resources, which for agents like consul is a good thing. It is pretty much the equivalent of what Go gives you, more than a self extract solution.

@mikedn I guess, but

1) It doesn't really exist yet in a production form (as far as I know)!
2) We use a lot of dynamic features (IL generation, arbitrary reflection)
3) We still want to be able to add plugins (again ideally compacted)

See as this issue was about asking people what they want, we're only giving our opinion! We don't really want to have to switch to a different runtime model just to get this benefit. To me they're orthogonal concerns

I don't see what AOT or JIT compilation has to do with deployment method?

Without a JIT, it's easier to get things like the debugging story good enough. The JIT part makes the problem harder which is why you won't find it in Go. This is engineering, so you either throw more engineers at the harder problem and live with the new complexity, or scope it down elsewhere. The self-extractor is about scoping it down because the number of engineers with the necessary skills is limited.

People with projects that are more like Go projects (no JIT requirements) might be pretty happy with CoreRT, if they're fine with the "experimental" label on it. It's pretty easy to try these days. It uses the same garbage collector, code generator, CoreFX, and most of CoreLib as the full CoreCLR, and produces small-ish executables (single digit megabytes) that are self-contained.

It doesn't really exist yet in a production form (as far as I know)!

Yes, my comment was mostly targeted at MS people :grin:. They have all these parallel, related, on-going projects/ideas (corert, illinker) and now they add one more, this self extract thing that, as many already pointed out, it's a bit of a "meh, we can do that ourselves" kind of thing. And it comes with downsides as well, such as extracting files to a "hidden" directory.

We use a lot of dynamic features (IL generation, arbitrary reflection)

That's something that the community as a whole might want to give a second thought. Sure, it's useful to be able to do that, but it kind of conflicts with other desires like single-file deployment. You can still get single-file deployment, but that tends to come at a cost - a rather large file. If you're OK with that then that's perfectly fine. But in my experience, the larger the file gets, the less useful the "single file" aspect becomes.

@MichalStrehovsky sure, there are different options. However for us we can't use experimental (convincing it was time to move to .NET Core was hard enough) and I don't think extracting to a temp folder will work in our case either. However worst case is we carry on as we are and don't use this feature.

It is something we would like though if it went the way we'd want it to :)

@mikedn I agree. Multiple parallel solutions are even more confusing. I think our ideal solution would be some kind of super ILLinker/weaver approach but I'm happy to let this play out and see where we end up.

I'm really really excited about this functionality landing, but TBH I'm equally unexcited about the initial proposal that you posted @morganbr. My answers to the list of questions you posed is similar to what others posted (so I think there is a common desired set of capabilities), but IMHO the proposed 'unpack to disk' solution is not at all what I'd hope to see implemented and as others said would almost be worse than the 'disease'. @jkotas

I agree with @strich and @ayende.
The compiled executable must:

  • Package all relevant dependencies
  • Have no execution artifacts (No disk writes)

Loading .dlls from memory instead of disk may not be easy, but that's the kind of capabilities that IMO would be worth the deep expertise of MSFT low-level devs (and then leveraging in CoreClr) vs the above proposal which could just be implemented as an external tool (and already has, see https://github.com/dgiagio/warp). If this was achieved, I wonder how much time difference there would be between first and subsequent runs? For inspiration/example, I think Linux 3.17+ has memfd_create which can be used by dlopen.
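For illustration only, the memfd_create idea could be sketched from .NET via P/Invoke like this (speculative, not a supported pattern; the constants and library names are taken from the Linux man pages, and error handling is omitted):

```csharp
using System;
using System.Runtime.InteropServices;

// Speculative sketch: create an anonymous in-memory file (Linux 3.17+),
// write a native library image into it, and dlopen() it through
// /proc/self/fd. Purely illustrative; not a supported loading mechanism.
static class InMemoryNativeLoader
{
    [DllImport("libc", SetLastError = true)]
    private static extern int memfd_create(string name, uint flags);

    [DllImport("libc", SetLastError = true)]
    private static extern IntPtr write(int fd, byte[] buf, UIntPtr count);

    [DllImport("libdl.so.2")]
    private static extern IntPtr dlopen(string path, int flags);

    private const uint MFD_CLOEXEC = 0x0001; // from <sys/memfd.h>
    private const int RTLD_NOW = 0x0002;     // from <dlfcn.h>

    public static IntPtr Load(byte[] libraryImage)
    {
        int fd = memfd_create("bundled-lib", MFD_CLOEXEC);
        write(fd, libraryImage, (UIntPtr)libraryImage.Length);
        // dlopen can load the anonymous file via its /proc path.
        return dlopen($"/proc/self/fd/{fd}", RTLD_NOW);
    }
}
```

Note this carries exactly the diagnosability and security concerns raised later in the thread for MemoryModule-style loading.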

On another topic, I'm wondering if the requirement to support plugins is over-indexing the design proposal? Would it be worth making this functionality opt-in, so that only the people that need this capability incur the potential penalties (lack of tree shaking etc) and everyone else (significant majority?) would get reduced deployable size, perf benefits(?)

Stepping back @swaroop-sridhar @MichalStrehovsky, I can see two broad use cases that might have different-enough goals/desires to make it hard to accommodate everyone with one solution:

  • CLI tooling ideally wants fast-run every time, small distributable; maybe better fit for CoreRT?
  • microservice can probably tolerate longer first-run (ideally <1s), larger distributable, in exchange for richer functionality like JIT and dynamic code features.

I hope this braindump makes some sense, I'm not trying to be a jerk, but just provide feedback because I'm very interested in this topic. Thanks for all your hard work, and soliciting community input! :)

About plugins.
Basically, the only thing that I would like is _not_ being blocked on Assembly.LoadFrom or LoadLibrary calls.
I don't need anything else and can do the rest on my own.

@ayende Can you please explain in a bit more detail what you mean by "being blocked on LoadFrom and such"?

For example, some of the suggestions for this included CoreRT, which meant that we (probably) wouldn't be able to just load a managed dll.
But as long as I can provide a path to a managed dll and get an assembly or call LoadLibrary on a native dll, I'm fine with this being the plugin mechanism.

I'm saying this to make it clear that plugin scenarios are not something that needs to be specially _considered_, but rather something that should not be _blocked_.

I spent some time digging into the code and at first glance, it seems like it should be _possible_ to (speaking about Windows only to make things simple):

  • Pack the entire CoreCLR into an exec (let's say it is as embedded resource or something like that).
  • Enumerate the native dlls, call something like MemoryModule (https://github.com/fancycode/MemoryModule) to load them into memory
  • Invoke the runtime and load an assembly from memory.

I'm going to assume that this is _not_ as simple as that. For example, ICLRRuntimeHost2::ExecuteAssembly doesn't provide any way to give it a buffer, only a file on disk.
That makes it the first (of what I'm sure will be many) show-stoppers to actually getting it working.

I'm pretty sure there is a lot of code that refers to related artifacts as files on disk and would fail, but this is the kind of thing I mean when I say that I want a single-file exe, and why that kind of solution needs to be done in CoreCLR and not externally (as in the zip example).

call something like MemoryModule

The binaries loaded using MemoryModule are non-diagnosable (e.g. none of the regular debuggers, profilers, etc. will work on them), and MemoryModule skips all built-in OS security measures (e.g. antiviruses, etc.). It is not something we could ever ship as a supported solution.

I'm going to assume that this is not as simple as that.

Right, there would be new APIs needed (both managed and unmanaged) to support execution from a non-expanded single file bundle. The existing programs and libraries would not "just work".

Could it be a reasonable ask for the anti-malware scan interface to be implemented by Windows engineers on MemoryModule, like it recently was for all .NET assemblies, including those loaded from memory?

https://twitter.com/WDSecurity/status/1047380732031762432?s=19

EDIT: Hmm MemoryModule sounded like something built into the OS for loading native modules but apparently it's not, that's probably enough reason to disregard my suggestion above


Thanks for all the feedback.
We'll post an updated plan for supporting single-file distribution shortly.

@NinoFloris see dotnet/coreclr#21370

I realize I'm late to this discussion so my apologies if this doesn't help.

To reiterate what @mikedn was saying, it can be confusing to have all these parallel solutions, and it raises the question of where Microsoft's focus will lie in the future. The fact that the brilliant work done on CoreRT is still labelled experimental, without an official roadmap, and with similar functionality being implemented in CoreCLR (CPAOT, now single-file publish), makes it hard to make business decisions based on the technology. I wouldn't want to pick the wrong horse again (Silverlight...).

Why not stick with the brilliant runtime in CoreRT and focus the work on incorporating a JIT into CoreRT (and expanding on the interpreter work until that's ready) for those who want full framework capabilities while still having native code built for them? Wouldn't it be possible, when JIT abilities are wanted, to keep the metadata and JIT inside of a CoreRT natively compiled executable?
Wouldn't this be the best of all worlds? E.g. you'd have the smallest executables and the fastest startup time, the limitations of AOT code in CoreRT could be lifted by having a JIT in place, and the metadata and IL code would be kept for those who want to retain full framework functionality.
IMO the focus should be on the 'best possible' solution instead of on short-term functionality which might distract from long-term goals.

Maybe I'm mistaken, but I think having single-file publish as part of CoreCLR would further distract customers away from CoreRT, which in many cases would be a potentially better solution for them. E.g. if this gets branded as "now you can publish your .NET code as a single file", yet it produces huge files, suffers in startup performance, etc., then I can see people start complaining about this "bloated .NET runtime" yet again. Meanwhile, .NET actually already has a brilliant solution for exactly that in CoreRT, which merely lacks the official support / official product commitment to go all the way.

Perhaps my concern is mostly that I'd like to see less parallel products (again quoting @mikedn) and more emphasis on deeply committing to, extending and supporting those that are here (including sharing roadmaps).

less parallel products

We have just one product: .NET Core. CoreRT is not a separate product and never will be. This is about adding options to this one product. Here are a few examples of options we have today: workstation GC and server GC; self-contained app publish vs. framework-dependent app publish.

@jkotas I understand, and I completely agree that adding options is usually great. If you replace my use of the word "products" with "options/solutions", though, I still stand by my concern of not fully knowing whether our product's commitment to CoreRT, as an option/solution inside of the .NET Core overall product, is something we can safely bet on being supported in many years to come. Imagine what it would be like if APIs made unpredictable breaking changes over time and were suddenly deprecated without you knowing whether or not you could trust an API to stay there. To a non-MS entity, the tools/options that are part of the product feel the same way.

We're dependent on several capabilities of CoreRT that we wouldn't be able to get (at least currently) with CoreCLR: the ability to protect our executables with PACE and strip lots and lots of metadata, IL, etc., easy static linking, easier understanding of the underlying platform, the prospect of building obfuscation directly into the compiler, and so on. Basically, I feel like yours and Michal's and others' work on CoreRT should be given higher priority, as IMO it is one of the best things that has happened in the .NET ecosystem since it was first designed. Anytime there's talk of a new tool/option which internally could be looked at as in competition with CoreRT, instead of the CoreRT option being completed and extended, it feels to me like the wrong allocation of resources.

Sorry, I didn't mean to drag the discussion out; I just wanted to make sure you understood how much the CoreRT work is appreciated, and that some of us out here believe it should be the future of the platform.
I'll refrain from polluting this discussion further. Wishing all the best for the teams and looking forward to whatever you come up with; I'm sure it will be great.

@christianscheuer Your feedback on this issue and on other CoreRT-related issues has been very valuable. You are not polluting this discussion at all.

there's talk of a new tool/option which then internally could be looked at as in competition with CoreRT

Among the core team, we have talks about a number of different, ambitious, or crazier options. I do not consider the "unzip to disk" approach a significant competitor to CoreRT full AOT. Each targets a different segment of .NET users/apps.

Our product management team who compiles data from many different channels (including github issues or face-to-face customer conversations) suggested that the biggest bang for the buck is a solution like what is proposed here. We are still validating that we have all the data on this right and that it will have the desired outcome.

I cannot promise you when the CoreRT tech will ship as part of the supported .NET Core product at this point. I do know that it has unique strengths and I am sure we are going to ship a supported solution like that for the segment of important .NET users who benefit from it eventually.

Single file distribution makes sense if:

  • It does not impact startup time (else only the tiniest of applications will be able to use it).
  • It acts as a VFS (virtual file system): it allows you to access assemblies and resources as if they were on disk, in the application folder.
  • It does not impact the "patchability" of the app, i.e., Microsoft should still be able to patch critical assemblies even if they are embedded, by providing overrides somewhere else, at the system level.

Assemblies as embedded resources, extracted and copied to a temp folder is not a needed solution from Microsoft. It can easily be implemented by any customer who needs it.

A "true" single file solution is needed and would provide much value.

@jkotas your comments ring true to me. I would have a hard time convincing my company's .NET devs (LOB/backoffice/integrations) to switch to CoreRT because it would be too much of a leap for them (especially given the Experimental label), but they can see the value in a relatively simple (to use) solution for Core like this thread is discussing. Thanks!

@popcatalin81 I actually would strongly disagree with your points here.

Startup time is not meaningful for many applications. I'll gladly trade off a 50% - 100% increase in startup time of any long running service for the purpose of reducing the operational complexity of deploying it.

Even if we add 30 seconds to the startup time of a service, that isn't really meaningful over the long run for services.
For user facing applications, that will likely be a killer, but until 3.0 is out, pretty much all CoreCLR apps are either console or services.

Note that CoreCLR already has a mode in which you can bundle the framework with your application (self-contained deployment).
In our case (RavenDB), we _rely_ on that for several purposes. First, the ability to choose our own framework regardless of whatever our users are using. This is a _major_ benefit for us, since it means that we don't need to use the globally installed framework and are thus not tied to whatever the admin decided the framework would be (we used to have to account for running on .NET 4.0 sometimes, even years after 4.5 was out, for example, and that was a pain).
Another important aspect is that we can run with _our own_ fork of the framework. This is very useful if we run into things that we must change.

As part of that, we take ownership of the patch cycles and do not want anyone else messing with that.

I don't _mind_ a VFS, but I don't think it would be required. And I'm perfectly fine with having to go through a dedicated API / jump through some hoops to get to the assemblies/native dlls/resources that are bundled in this manner.

I was pointed towards this issue by @briacht two days ago, so I am REALLY late in the discussion.
Just want to add my 2-cents!

First my answers to the questions of @morganbr:

  1. Windows app, mixing WPF and WinForms.
  2. Currently, no
  3. Yes, using Assembly.Load in different variations
  4. Yes
  5. It depends whether this is a penalty at every startup. Yes if it's a few milliseconds, but 5 seconds: no. If it's a one-time cost, 5 seconds is no problem.
  6. 50MB is already quite large compared to <2MB... but if this would mean the dotnet runtime is included... OK. Maybe I can provide a download with and without.
  7. Release time does not directly matter, but it should not upset the CI provider by using a lot of CPU for a long time period.
  8. YES!

The application I'm talking about is Greenshot; the current released version is pretty much still targeting .NET Framework 2.0 (yes, really) but works without issues on the newest version (good backwards compatibility)! The download, including the installer but without the .NET Framework, is around 1.7MB. The directory from which this is started contains 1 .exe and 4 .dlls; there are also 11 add-ons in a sub-directory.

I was reading through the previous comments, and a lot of them sound like people want functionality I would use an installer for. This does make some sense, as there is currently no platform-independent solution.

I was discussing the current topic with some people at Microsoft back in August, my main focus was talking about linking everything together to reduce the size and complexity of the application for deployment and also reduce the assembly loading which takes a LOT of time during the startup of my application.

With the next version of Greenshot, which targets .NET Framework 4.7.1 and dotnet core 3.0 side-by-side, the resulting download for dotnet core 3.0 currently has around 103 dlls (!!!), one exe and 15 add-ons. The total size is around 37MB, and the biggest file is MahApps.Metro.IconPacks.dll (12MB!!), which contains most of the icons we use -- yet what we use is probably around 4% of the file. It's the same with many of the other dlls; I would assume that, in general, even 50% code usage is a lot!

If the user doesn't want to install the application, they just download a .zip and have to find the executable among around 118 files... not really a nice experience! The size is around 35MB (17x) bigger, so where is the benefit for the user?

In the months before dotnet core 3.0 was a thing, I was using Fody.Costura to do pretty much what was described here: packing during build and unpacking at startup! It even made a single executable without extracting the files possible, by hacking a bit. But it also caused quite a lot of issues, so this is not my preferred solution.

I am hoping for a more or less standard solution of which I can say: it just works ™
Maybe it makes sense to define a standard folder structure, so we can place the executable, readme, license files etc. in the root and have different "known" directories for different files (dotnet core, dependencies etc). This might make it easier for tools to work with, and offer functionality where everyone can decide what they want to use and what not. Sometimes even just running zip on the directory solves some issues.

Having a solution similar to Fody.Costura is definitely something I could imagine using to simplify the deployment of Greenshot, but it doesn't directly reduce the size and loading time. So I hope there will be some time spent here too!

@ayende Please allow me to disagree with you as well :)

While it's true services, web applications and in general server based long-running apps are not impacted by a one-time startup cost, at the same time these applications are not the ones to see most benefits from single file distribution model. ( Raven is an exception, not the norm ;) )

In my view, single-file distribution is primarily targeted at applications and utilities aimed at end users. Users who will download, copy, and run such applications. And in the near future, when .NET Core 3.0 offers support for UI libraries, this distribution model will become quite common. However, I must emphasize, this model won't be suitable for all apps. Major apps will still use an installer.

Now the question is, this model of embedded assemblies copied to temp folders, will it work well if it becomes highly popular?

In my view these are some potential issues for the current model:

  • Dozens of apps create thousands of temporary files with duplicated assemblies. There is potential to create a mess here. (Unnoticed by users, but still a mess)
  • No compression, no linker. This might create a disadvantage for small utility apps compared to FDD or even zipping.
  • It might actually hinder server deployments instead of making them easier, the reason being: post deploy configuration with token replacements. (Where do you go to find the temp folder to overwrite config files? Do you give the service write access to its own folder to create the config file there? That's a security no-no)

I'm not here to say this model is suboptimal for all kinds of apps. I've successfully implemented it and used it myself in the past for the full .NET Framework, and it's a good fit for certain apps.

I just think the optimal implementation is the version where a linker mergers all assemblies into a single file.

@popcatalin81 A world with a single view point is a poor one. And I'm not arrogant enough to think that my deployment model is the sole one out there.

The case that I have in mind is the download-and-double-click scenario.
Right now, we have to unzip and click on a shell script to run, because going into a directory with 100s of files and finding the right one is a chore.

About config in temp files: I would be very much against that. Config files and other user-visible stuff _must_ reside right next to the actual file that is being used, not hidden somewhere.
We've already seen how bad that was with IsolatedStorage files when users needed them.

In previous versions of RavenDB, we ilmerged a lot of stuff to have a simpler directory layout and reduce the number of visible files, and that made a serious impact on how usable our software was.

Will this feature use ILMerge? If not, why not? I would like to educate myself if there is a pitfall.

I don't like the proposed "pack and extract" approach, especially the part where it would extract files to disk. It's simple, but it has too many issues. It should all work "in memory" without extracting to disk.

I think it should be something like corert native compiler or Costura.Fody.

Optionally it should also attempt to reduce binary size (remove unused references, dead IL).

re

In the months before dotnet core 3.0 was a thing, I was using Fody.Costura to do pretty much that what was described here. Packing during build and unpacking at startup!

note that costura doesn't (by default) "unpack to disk". it extracts (in memory) the embedded assemblies from resources and loads them via bytes

Thanks for the context @SimonCropp we are exploring both options of unpack to disk and loading from a stream. Covering the loading from a stream for pure IL may be possible in our first iteration.

@jeffschwMSFT btw the feature set of seamlessly merging assemblies (and optional tree shaking) is something i would love to see as a first class citizen. costura is one of my more popular projects, but also one that takes significant time to support. so if/when MS comes out with an alternative (even as a beta), let me know and i will spend some time reviewing it and add some "there is a MS alternative" to the costura readme and nuget description

while unlikely, if u need me to release any of the code in costura under a diff license just let me know.

Thanks @SimonCropp as we start to integrate the mono/illinker into the .NET Core tool chain we will take you up on your offer of reviewing.

note that costura doesn't (by default) "unpack to disk". it extracts (in memory) the embedded assemblies from resources and loads them via bytes

Even better, I believe it loads on-demand using the AppDomain.AssemblyResolve event, so the startup shouldn't be impacted too much since it will only load assemblies (into memory) whenever needed.
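For readers unfamiliar with the pattern being discussed, a rough sketch of the resolve-from-embedded-resources approach looks like this. The resource naming scheme (`EmbeddedAssemblies.*`) is illustrative only; Costura's actual implementation differs and also supports compression:

```csharp
using System;
using System.IO;
using System.Reflection;

static class EmbeddedAssemblyLoader
{
    // Call once at startup, before any type from an embedded assembly is touched.
    public static void Install()
    {
        AppDomain.CurrentDomain.AssemblyResolve += (sender, args) =>
        {
            // Map the requested assembly name to an embedded resource name.
            string name = new AssemblyName(args.Name).Name + ".dll";
            using (Stream s = Assembly.GetExecutingAssembly()
                .GetManifestResourceStream("EmbeddedAssemblies." + name))
            {
                if (s == null)
                    return null; // not embedded; fall back to normal probing

                using (var ms = new MemoryStream())
                {
                    s.CopyTo(ms);
                    // Loaded from memory; nothing is written to disk.
                    return Assembly.Load(ms.ToArray());
                }
            }
        };
    }
}
```

Because the `AssemblyResolve` event only fires when a normal load fails, assemblies are pulled into memory on demand, which is why startup isn't hit for assemblies that are never used.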

Late to the conversation here. Apologies if this has already been covered.

Can we assume the artefact here will be deterministic, i.e., that the same build will always produce the same binary, byte-for-byte? Being able to distribute a binary with a canonical, accompanying checksum would be helpful too.

@morganbr There's a lot of reading necessary here. Would it be possible to log the decisions made thus far?

@edblackburn we are pulling together an updated design based on feedback. @swaroop-sridhar is driving the design now.

Will this feature use ILMerge? If not, why not? I would like to educate myself if there is a pitfall.

@simplejackcoder ILMerge combines the IL from many assemblies into one, but in the process, loses the original identity of the merged assemblies. So, things like using reflection based on original assembly names will fail.

The goal of this effort is not to merge assemblies, but to package them together while maintaining assembly identity. Further, we need to package not just the assemblies, but also native binaries, configuration files, possibly the runtime, and any other dependencies required by the app.

Merging IL is a useful feature for apps that can adapt to it -- it's just a different feature from this issue.

A design review is posted at https://github.com/dotnet/designs/pull/52
Let's continue the discussion on the PR henceforth.

I have a few questions for folks who'd like to use single-file. Your answers will help us narrow our options:

  1. What kind of app would you be likely to use it with? (e.g. WPF on Windows? ASP.NET in a Linux Docker container? Something else?)
  2. Does your app include (non-.NET) C++/native code?
  3. Would your app load plugins or other external dlls that you didn't originally include in your app build?
  4. Are you willing to rebuild and redistribute your app to incorporate security fixes?
  5. Would you use it if your app started 200-500 ms more slowly? What about 5 seconds?
  6. What's the largest size you'd consider acceptable for your app? 5 MB? 10? 20? 50? 75? 100?
  7. Would you accept a longer release build time to optimize size and/or startup time? What's the longest you'd accept? 15 seconds? 30 seconds? 1 minute? 5 minutes?
  8. Would you be willing to do extra work if it would cut the size of your app in half?
  1. Our #1 use case by far is console apps that are utilities. We'd like to do a publish for Windows or Linux and specify --single-file to get one executable. For Windows we'd expect this to end in .exe; for Linux, it would be an executable binary.
  2. Not usually, but it could via a NuGet package unbeknownst to us
  3. Not initially, but that would be a great future feature
  4. Yes
  5. Yes
  6. Whatever works
  7. Yes, if it optimized the resulting size of the executable. Ideally, maybe with a flag to set the optimization level, e.g. --optimizing-level=5
  8. Yes, up to a point

Thanks @BlitzkriegSoftware.
I think the design proposed here covers your use-case. Please take a look at the design -- we welcome your feedback.

I have a few questions for folks who'd like to use single-file. Your answers will help us narrow our options:

  1. What kind of app would you be likely to use it with? (e.g. WPF on Windows? ASP.NET in a Linux Docker container? Something else?)
  2. Does your app include (non-.NET) C++/native code?
  3. Would your app load plugins or other external dlls that you didn't originally include in your app build?
  4. Are you willing to rebuild and redistribute your app to incorporate security fixes?
  5. Would you use it if your app started 200-500 ms more slowly? What about 5 seconds?
  6. What's the largest size you'd consider acceptable for your app? 5 MB? 10? 20? 50? 75? 100?
  7. Would you accept a longer release build time to optimize size and/or startup time? What's the longest you'd accept? 15 seconds? 30 seconds? 1 minute? 5 minutes?
  8. Would you be willing to do extra work if it would cut the size of your app in half?
  1. WPF .Core 3.0 App
  2. Maybe in a nuget package
  3. No
  4. Yes
  5. Yes for 200-500ms max
  6. Not important, as long as it's not 2 or 3 times bigger than the original publish folder size
  7. It's for production, so not really important
  8. Yes, but having the possibility to not do that may be useful if the work concerns a third-party library

The feedback you received was largely about development before 2019, which was usually web services, console applications, and other services launched in the background. But now with .NET Core 3.0, we need solutions for WPF and UI apps. .NET Framework 4.8 will not support .NET Standard 2.1, and we need a solution to replace ILMerge, which we've used until now for WPF applications, as we have to upgrade our apps from the .NET Framework (a dead framework) to .NET Core.

A self-extracting package is clearly not a good solution as explained in other comments before for this kind of programs.

  1. Mostly CLI apps. Small Windows services and the occasional WPF app are interesting too. For web stuff, single file distribution seems basically irrelevant. For this reason, the rest of this is mostly Windows-oriented; our Linux .NET Core interest is around the web stuff.
  2. Sometimes. When it does, it's almost always for compression libraries. If the native stuff didn't get packed in, it wouldn't be the end of the world. A good solution for pure managed stuff that required deployment of a native dependency as a separate dll would still have value.
  3. No, not in cases where we care about single file.
  4. Yes.
  5. 200ms is roughly the limit before things get annoying for a CLI to start. > 1 second and we'd almost certainly stick to .NET Framework (ILMerge + ngen) or look at other languages that compile to a native binary with any necessary runtime statically linked in. In a lot of cases we're talking about stuff that was implemented in C# because an ilmerged, ngen-ed C# app actually starts reasonably quickly. Some of this stuff is, complexity-wise, reasonable to implement in a scripting language, but those tend to have significant startup costs, especially if you need to pull in a bunch of modules.
  6. Depends. Over 10MB for hello world would be a tough sell. Also, huge executables tend to have non-trivial startup costs. Even if the OS is theoretically capable of starting execution without reading the entire thing, at least on Windows, there's almost always something that wants to get a hash for some security-ish purpose first. Speaking of which, this unpack-a-bunch-of-binaries-to-temp thing is going to drive AV (Defender absolutely included) nuts, in terms of resource utilization frantically scanning if nothing else.
  7. Sure. I'd be happy to add a minute or more to a release build if it meant getting something that was a reasonable size and started execution quickly.
  8. Definitely.

Reading over the proposal, it doesn't address any scenario that we have, so we wouldn't use it. There might be someone that it solves a problem for, but it's not us.

It seems like there are lots of relevant things for scenarios like ours being worked on that just haven't come together into a toolchain that is actually usable/likely to continue to work. Maybe this is just ignorance on my part, but I try to pay attention and it's not clear to me. Between ILLink, Microsoft.Packaging.Tools.Trimming, CoreRT, .NET Native and whatever happens in this issue, it seems like there should be some way to get a single tree-shaken, quick starting (possibly AoT?), reasonably sized binary out of .NET Core (and not for UWP).

  1. Single-file distribution would be super useful for https://github.com/AvaloniaUI/Avalonia (Windows, Linux, macOS) and WPF (Windows) apps. Having a single exe makes autoupdates easy, for example, and in general it's more convenient to have a single file compared to a hundred dlls.
  2. Yes (in the sense that it references prebuild native libs such as sqlite and skia)
  3. No
  4. Yes
  5. 200ms is probably fine. Having 5s startup time for a GUI app is not fine.
  6. Doesn't really matter
  7. Doesn't really matter. We could build single file only for production scenarios.
  8. Yes

Please don't go with pack & extract approach. We could already achieve this by using archive tools.

Please don't go with pack & extract approach. We could already achieve this by using archive tools.

I can't agree more. There are already several tools available that do this.

Thanks for the feedback @Safirion, @mattpwhite, @x2bool, @thegreatco.
From these, I see a range of usage scenarios (console apps, WPF, GUI apps, etc), but the overall requirements seem similar:

  • Focus on execution/startup time rather than build time.
  • Ability to rebuild apps for further releases/patches
  • Ability to run directly from the bundle (especially for pure-managed components).

I believe these are the requirements we've tried to address in the design doc.

Please don't go with pack & extract approach

Yes, as explained in this staging doc the goal is to progressively minimize the dependence on file extraction.

Reading over the proposal, it doesn't address any scenario that we have and that we wouldn't use it.

@mattpwhite, I wanted to clarify whether you think this solution doesn't work for you based on:

  • The self-extraction based approach proposed earlier in this work-item, or
  • The design noted in this doc
    I believe the latter document tries to address the scenario you've detailed. Thanks.

It seems like there are lots of relevant things for scenarios like ours being worked on that just haven't come together into a toolchain that is actually usable/likely to continue to work.

@mattpwhite, there is ongoing work to integrate ILLink (tree-shaking), crossgen (generating ready-to-run native code), and single-file bundling aspects into msbuild/dotnet CLI in a streamlined fashion for .NET Core 3. Once this is done, these tools can be used easily (e.g. by setting certain msbuild properties).

All of these tools come with tradeoffs. For example, crossgen generates native code ahead of time, thus helping startup, but tends to increase the size of the binaries. Using the single-file option, apps can be published to a single file, but this may slow down startup, because native binaries are spilled to the disk, etc.
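For reference, this integration did ship as msbuild properties in the .NET Core 3.0 SDK. A project opting into the combination described above looks roughly like this (exact behavior and property availability depend on the SDK version):

```xml
<PropertyGroup>
  <RuntimeIdentifier>win-x64</RuntimeIdentifier>
  <!-- Bundle the app, its dependencies, and the runtime into one executable -->
  <PublishSingleFile>true</PublishSingleFile>
  <!-- ILLink-based tree shaking to reduce size -->
  <PublishTrimmed>true</PublishTrimmed>
  <!-- crossgen (ready-to-run) ahead-of-time compilation to help startup -->
  <PublishReadyToRun>true</PublishReadyToRun>
</PropertyGroup>
```

Each property can be enabled independently, reflecting the tradeoffs noted above.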

@swaroop-sridhar, thanks for taking the time to respond.

wanted to clarify whether you think this solution doesn't work for your based on

It seemed like extraction is still in the picture for anything self-contained, and the status of crossgen (or anything else to cut down on JIT time at launch) wasn't clear to me. Reading the staging doc that the current design links to, it looked like the stuff that would make this more interesting to us was further out. Given that, my thinking was that we'd continue to be stuck using the .NET Framework for these niches for some time to come. I may have misunderstood.

All of these tools, do come with tradeoffs. For example, crossgen generates native code ahead of time thus helping startup, but tends to increase the size of the binaries.

Yep, understood that some of these are double edged swords. With the way the JIT has traditionally worked, ngen has always been a win for CLI/GUI apps. It's possible that some combination of tiered compilation and the fact that loading the runtime itself isn't necessarily amortized by lots of other .NET Framework processes running on the same box makes that less clear cut on Core.

@swaroop-sridhar the big concern I have with the currently accepted proposal is this step

Remaining 216 files will be extracted to disk at startup.

Aside from the fact that "Hello World" seems to require 216 files to run, extracting them to disk in some location makes removing a simple command line application hard. When people delete a tool that didn't come via an installer, they don't expect to need to hunt down other files to completely remove it.

IMHO, we should be targeting a publish experience similar to that of golang. While it would require crossgen to be in place to fully match the golang experience (and one could argue that a 1:1 match is not a _good_ thing, as we'd lose out on the JIT and tiered compilation), a good portion can be achieved without it. I have used many tools over the years to achieve a similar experience, and the most effective was embedded resources with a hook on the assembly loader (ILMerge was finicky and Costura.Fody didn't exist yet). What is described in the proposal doesn't even fully replicate that functionality. .NET Core has a really strong future, but its use for developing command-line tools is limited so long as we need to lug around hundreds of dependencies or spit them out somewhere on disk with some hand-wavy bootstrap code.

My #1 use case is command-line utilities; one file, no extraction would be my preference. But I'd point out that a command switch could offer a choice between all-in-one and all-in-one self-extracting. In the self-extracting case, it should include an undo that cleans up the unpacked files. In the first case, removing the utility means deleting one file, which makes it suitable to reside in a folder of general utilities, whereas in the second case, the install should have its own folder so as to make cleaning up easier and avoid scenarios where the uninstall removes shared files. Just some thoughts. As I said, the one-file utility (no install needed) is my primary use case, as I make utilities for projects (as a tool-smith) frequently. But I can see a use case for both scenarios. And they are different.

I have a few questions for folks who'd like to use single-file. Your answers will help us narrow our options:

  1. What kind of app would you be likely to use it with? (e.g. WPF on Windows? ASP.NET in a Linux Docker container? Something else?)
  2. Does your app include (non-.NET) C++/native code?
  3. Would your app load plugins or other external dlls that you didn't originally include in your app build?
  4. Are you willing to rebuild and redistribute your app to incorporate security fixes?
  5. Would you use it if your app started 200-500 ms more slowly? What about 5 seconds?
  6. What's the largest size you'd consider acceptable for your app? 5 MB? 10? 20? 50? 75? 100?
  7. Would you accept a longer release build time to optimize size and/or startup time? What's the longest you'd accept? 15 seconds? 30 seconds? 1 minute? 5 minutes?
  8. Would you be willing to do extra work if it would cut the size of your app in half?
  1. CLI (Windows, Linux, MacOS), WPF and WinForms.
  2. Sometimes.
  3. Yes, many times, and I think the scenario where a plugin can re-use a shared library from the main application should be addressed!
  4. Yes.
  5. 200-500ms are ok!
  6. For a simple CLI application without many dependencies, 10MB is enough (compressed)!
  7. Yes, definitely! Build time (especially for release builds!) does not matter! The compression and/or the switch to publishing a single file should be optional!
  8. Yes e.g. CoreRT!

IMHO, we should be targeting a publish experience similar to that of golang.

@thegreatco: Yes, full compilation to native code is a good option for building single-file apps. Languages like Go are built with ahead-of-time (AOT) compilation as their primary model -- which comes with certain limitations on dynamic language features achieved via JIT compilation.

CoreRT is the .NET Core solution for AOT-compiled apps. However, until CoreRT is available, we're trying to achieve single-file apps via other means in this issue.

Aside from the fact that "Hello World" seems to require 216 files to run,

There are a few options available to reduce the number of files required for HelloWorld:

  • Build a framework-dependent app, which only needs 3-4 files
  • If we need to build self-contained apps, use the ILLinker to reduce the number of file dependencies

extracting them to disk in some location makes removing a simple command line application hard.

  • As the development of single-file feature moves to further stages, the number of files spilled to disk will decrease, and will ideally be zero for most apps. But in the initial stages, several files (those containing native code) will have to be spilled.
  • The location of files spilled to disk is always deterministic and configurable. So, the spilled files can be cleaned up with a straightforward script. But I agree that this is not ideal.

Thanks for the clarification @mattpwhite.

Yes, until later stages of the feature development, self-contained apps will see files extracted out to the disk. You may be able to use some of the techniques I wrote in the above comment to ameliorate the situation for your apps.

Thanks for your response @BlitzkriegSoftware.

But I'd point out that giving a choice as part of the command switches for all-in-one vs. all-in-one-self-extracting

Whether an app published as a single-file requires extraction depends on the contents of the app (ex: whether it contains native files). The bundler design provides some configurations to decide whether all files should be extracted. We can add a property to say that the app must not use extraction to files at run time.

<PropertyGroup>
    <SingleFileExtract>Never | Always | AsNeeded</SingleFileExtract>
</PropertyGroup>

The understanding here is: when compiling with SingleFileExtract=Never, if the app contains files that can only be handled via extraction (native binaries), then publishing to a single file will fail.

In the self-extracting case it should include an undo that cleans up the unpacked files.

Sounds reasonable. The exact location of extraction (if any) is deterministic and configurable. So, we can have a script to remove an app and all of its extracted dependencies.
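As a sketch of what such a cleanup script could look like (the app name and the default temp-extraction layout here are assumptions, based on the extraction paths reported later in this thread):

```shell
# Hypothetical cleanup sketch: remove the files a single-file app extracted at startup.
# Assumed default extraction layout: $TMPDIR/.net/<AppName> (Windows: %TEMP%\.net\<AppName>).
APP_NAME="MyApp"   # hypothetical app name
rm -rf "${TMPDIR:-/tmp}/.net/${APP_NAME}"
```

If DOTNET_BUNDLE_EXTRACT_BASE_DIR was used, the script would point at that directory instead.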

Thanks for your response @DoCode. Your scenario is interesting in that it uses plugins.
If the Plugins themselves have some dependencies (which are not shared with the main app), would you need/prefer to publish the plugin as a single-file as well (with dependencies embedded)? Thanks.

Yes @swaroop-sridhar, that's the goal. We think that in this case the plugins should have their dependencies embedded. Only the shared dependencies should stay outside, with the single-file main app.

Today, @madskristensen shared a good clarification with Visual Studio and the popular Newtonsoft.Json NuGet package.

We find ourselves in a similar situation sometimes!

Thanks @DoCode. We plan to address single-file plugins once single-file apps are implemented and stable. The design doc talks about plugins here.

Thanks for the continued engagement on this feature @swaroop-sridhar, it is much appreciated. Having an undo command will be great, especially for the initial stages.

  1. What kind of app would you be likely to use it with? (e.g. WPF on Windows? ASP.NET in a Linux Docker container? Something else?)
    -- Command line utilities (console apps)
  2. Does your app include (non-.NET) C++/native code?
    -- No
  3. Would your app load plugins or other external dlls that you didn't originally include in your app build?
    -- Not usually
  4. Are you willing to rebuild and redistribute your app to incorporate security fixes?
    -- Yes
  5. Would you use it if your app started 200-500 ms more slowly? What about 5 seconds?
    -- < 1 Second would be my preference
  6. What's the largest size you'd consider acceptable for your app? 5 MB? 10? 20? 50? 75? 100?
    -- Size is not important
  7. Would you accept a longer release build time to optimize size and/or startup time? What's the longest you'd accept? 15 seconds? 30 seconds? 1 minute? 5 minutes?
    -- 30 seconds
  8. Would you be willing to do extra work if it would cut the size of your app in half?
    -- Sure if it can be scripted or configured

1 - What kind of app would you be likely to use it with? (e.g. WPF on Windows? ASP.NET in a Linux Docker container? Something else?)

  • Games
  • Command line tools

2 - Does your app include (non-.NET) C++/native code?

Yes

3 - Would your app load plugins or other external dlls that you didn't originally include in your app build?

Yes, but with known symbols already, i don't rely on reflection since it is slow

4 - Are you willing to rebuild and redistribute your app to incorporate security fixes?

Yes

5 - Would you use it if your app started 200-500 ms more slowly? What about 5 seconds?

No, it is already a bit slow

6 - What's the largest size you'd consider acceptable for your app? 5 MB? 10? 20? 50? 75? 100?

Depends on the project, but for a simple command line tool, i don't expect it to be larger than single digit mbs

7 - Would you accept a longer release build time to optimize size and/or startup time? What's the longest you'd accept? 15 seconds? 30 seconds? 1 minute? 5 minutes?

As long as it doesn't impact development/iteration time, i'm fine with longer release build time
But remember, .NET Core apps already take longer to build than traditional .NET Framework apps

That tendency toward slower software every year is NOT acceptable; we have better hardware, there's no reason for that slowness

8 - Would you be willing to do extra work if it would cut the size of your app in half?

YES!

I really hope to have AOT compilation with CoreRT working someday so we could have natively compiled applications like golang and rust and other AOT compiled langs. Similarly to how Unity3D does it with IL2CPP.

@morganbr Here is some input from me and folks that I work with. I use a lot of languages, but primarily C#, Go, and Python.

What kind of app would you be likely to use it with? (e.g. WPF on Windows? ASP.NET in a Linux Docker container? Something else?)

ASP.NET Core and Console programs, mostly on Linux, potentially some Windows workloads, depending on whether ML.NET meets our needs.

Does your app include (non-.NET) C++/native code?

Unlikely at this point.

Would your app load plugins or other external dlls that you didn't originally include in your app build?

It depends. Currently we can pull all of our dependencies in as packages, but I could see a situation where we pay for a commercial library, which may mean using external DLLs.

Are you willing to rebuild and redistribute your app to incorporate security fixes?

Absolutely. This is a top priority for us.

Would you use it if your app started 200-500 ms more slowly? What about 5 seconds?

I would say that anything less than 10 seconds is more than fine by me. Startup time generally doesn't matter so long as it's idempotent and predictable - i.e. libraries are loaded in the same order, variables are initialised in the same order, etc.

What's the largest size you'd consider acceptable for your app? 5 MB? 10? 20? 50? 75? 100?

Smaller is better, mostly for distribution purposes. Distribution could be user-downloaded binaries, container images with the binary, etc.

Would you accept a longer release build time to optimize size and/or startup time?

Sure, so long as it's faster than C/C++ and Java, it should be agreeable.

What's the longest you'd accept? 15 seconds? 30 seconds? 1 minute? 5 minutes?

Likely 30-90 seconds would be ideal, but anything under 3 minutes is likely agreeable. Over 3 minutes for small/medium projects (1k-500k LOC) would be a bit much. 5 minutes for very large projects (1M LOC) could be acceptable.

Would you be willing to do extra work if it would cut the size of your app in half?

It depends. If the extra work is adding boilerplate code that can be templatized, that's likely fine. If it's unclear, not straightforward work, potentially not.

Thanks for the response @mxplusb, @dark2201, @RUSshy, @BlitzkriegSoftware

I believe the current design addresses your scenarios to the best possible extent in the absence of pure AOT compilation. If you have any concerns wrt the design, please feel free to bring it up.

With respect to having lower startup time and compilation time:
Framework-dependent apps will have considerably lower startup time (at least in the initial stages of the feature bring-up). Compilation time is also shorter (because fewer files need to be embedded).
I'll get back to you with startup-time, compilation-throughput, and file-size measurements as soon as the first iteration of the feature development is complete.

@swaroop-sridhar What is the timeline for that?

@ayende The first iteration is under active development, I think we'll have the results in a few weeks.

@swaroop-sridhar

I believe the current design addresses your scenarios to the best possible extent in the absence of pure AOT compilation. If you have any concerns wrt the design, please feel free to bring it up.

Run: HelloWorld.exe

The bundled app and configuration files are processed directly from the bundle.
Remaining 216 files will be extracted to disk at startup.

What if I need to run from a read-only fs (common for IoT), or if the exe is running inside a read-only container? Why isn't there an option to run directly from the exe as well?

@etherealjoy The example shown in this section is the expected output for a self-contained HelloWorld as of Stage 2 of developing this feature (as mentioned in this section). At this stage, the only apps that can run directly from the EXE are framework-dependent pure-managed apps.

If we are in Stage 4 or Stage 5, the above self-contained app can execute directly from the bundle.

What kind of app would you be likely to use it with? (e.g. WPF on Windows? ASP.NET in a Linux Docker container? Something else?)

Basically all of above. Console utilities, WPF, asp.net, games

Does your app include (non-.NET) C++/native code?

Yes. Sqlite is a good example.

Would your app load plugins or other external dlls that you didn't originally include in your app build?

Probably no.

Are you willing to rebuild and redistribute your app to incorporate security fixes?

Yes.

Would you use it if your app started 200-500 ms more slowly? What about 5 seconds?

.NET Core is already a bit too slow. But a couple of seconds' increase is acceptable. 5 seconds -- definitely not.

What's the largest size you'd consider acceptable for your app? 5 MB? 10? 20? 50? 75? 100?

Doesn't matter nowadays

Would you accept a longer release build time to optimize size and/or startup time? What's the longest you'd accept? 15 seconds? 30 seconds? 1 minute? 5 minutes?

Up to 5 minutes feels acceptable. 15-20 minute release builds for UWP are driving everyone mad.

Would you be willing to do extra work if it would cut the size of your app in half?

Sure.

Thanks @mjr27

The stage 1 implementation is now complete, and is available as of 3.0.100-preview5-011568.

Apps can be published to a single file by setting the PublishSingleFile property to true as explained here.

In this version, all embedded files are extracted out to disk on first run, and reused in subsequent runs, as explained here.
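For reference, the same settings can be placed in the project file instead of being passed on the command line; a minimal sketch (the RID here is just an example):

```xml
<PropertyGroup>
  <RuntimeIdentifier>win-x64</RuntimeIdentifier>
  <PublishSingleFile>true</PublishSingleFile>
</PropertyGroup>
```

With this in place, a plain dotnet publish produces the single-file output.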

Compilation Throughput
There is no significant difference in time whether files are written to the publish directory as individual files or as a single file.

File Size and Startup
The following table shows the performance for a couple of apps built as single-file.

  • Console: A hello world app. The time reported is the time to run the app, measured via clock ticks.
  • WPF app: msix-catalog app. The time reported is time to startup to the initial screen, measured via stopwatch.
  • Fw: Framework-dependent builds
  • Self: Self-contained builds.
    All runs were timed with anti-virus checks turned off, in order to minimize the impact of external factors.

Measurement | Console Fw | Console Self | WPF Fw | WPF Self
-- | -- | -- | -- | --
File Size (MB) | 0.32 | 66.5 | 20.69 | 118.9
Normal Run (sec) | 0.123 | 0.127 | 3.32 | 3.24
Single-exe first run (sec) | 0.127 | 0.565 | 3.67 | 4.14
Single-exe subsequent runs (sec) | 0.124 | 0.128 | 3.30 | 3.29

I'll keep updating the numbers as implementation proceeds into subsequent stages of development.

Excellent news. Thank you

Will all stages mentioned in the staging doc be finished for 3.0 RTM? Or will we have to wait until after 3.0 for them to be implemented?

As mentioned here, .NET Core 3.0 is expected to implement Stage 2, so that framework-dependent pure-MSIL apps can run directly from the bundle. The remaining stages will be considered for future releases.

As mentioned here, .NET Core 3.0 is expected to implement Stage 2, so that framework-dependent pure-MSIL apps can run directly from the bundle. The remaining stages will be considered for future releases.

:( was hoping it could make it to RTM.

@cup I thought you could do it with dotnet publish -o out /p:PublishSingleFile=true /p:RuntimeIdentifier=win-x64. I just tested by creating a vanilla console app and running that command. It worked successfully.

You can go even further by bundling in the .pdb file with dotnet publish -o out /p:PublishSingleFile=true /p:RuntimeIdentifier=win-x64 /p:IncludeSymbolsInSingleFile=true

Thanks @seesharprun for the detailed notes. Yes @cup, the recommended method for command-line single-file publishing is to set the PublishSingleFile property from the command line.

@etherealjoy the coreclr runtime repos will be mostly locked down wrt feature work in a month or so. I don't think there's enough time for all the stages. I'm hoping that the subsequent stages will come in the next release.

I got this working today, but it appears that altering the extraction base dir is not working. According to this section, I should be able to add an application switch in runtimeconfig called ExtractBaseDir to alter the location the application is extracted to.

I have a runtimeconfig.template.json that looks like this

{
    "configProperties" : {
        "ExtractBaseDir": "C:\\"
    },
    "ExtractBaseDir": "C:\\"   
}

The produced app.runtimeconfig.json looks like this:

{
  "runtimeOptions": {
    "configProperties": {
      "ExtractBaseDir": "C:\\"
    },
    "ExtractBaseDir": "C:\\"
  }
}

When running the produced exe, though, it still extracts to the temp folder.

I am assuming this feature is just not ready in Preview 5, but I wanted to make sure, since it's not clear where "ExtractBaseDir" is supposed to go in runtimeconfig.json and there are no examples.

@igloo15 Thanks for bringing this up.

In the current preview, only the environment variable DOTNET_BUNDLE_EXTRACT_BASE_DIR is enabled. This is because in the current stage of implementation all files -- including runtimeconfig.json -- are extracted out to disk before they are processed.

I'm working on adding support for ExtractBaseDir in the next preview.
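For example, on Linux/macOS the variable can be set before launching the app (the directory path here is illustrative):

```shell
# Redirect bundle extraction to a custom base directory (variable name as above).
export DOTNET_BUNDLE_EXTRACT_BASE_DIR="$HOME/.net-bundle-cache"
# ./MyApp   # the app would now extract under this directory
```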

Is there some guidance on how to make this work with ASP.NET Core projects? Would the recommended approach be to use reflection to find where my System types are loading from, and set my IHostingEnvironment ContentRootPath and WebRootPath relative to that directory?

@keithwill You can use AppContext.BaseDirectory to get the location where the app.dll and all other files are extracted to disk. So, you can use this as the ContentRootPath.
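For instance, a minimal sketch of wiring that into a 3.0-style generic host (assuming the usual Startup template class; illustrative, not the only way to do it):

```csharp
using Microsoft.AspNetCore.Hosting;
using Microsoft.Extensions.Hosting;

public class Program
{
    public static void Main(string[] args) => CreateHostBuilder(args).Build().Run();

    public static IHostBuilder CreateHostBuilder(string[] args) =>
        Host.CreateDefaultBuilder(args)
            // AppContext.BaseDirectory is the extraction directory for single-file apps.
            .UseContentRoot(System.AppContext.BaseDirectory)
            .ConfigureWebHostDefaults(webBuilder => webBuilder.UseStartup<Startup>());
}
```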

@igloo15: I'm curious if you could share what your scenario is wrt to using this option.

  • Are you trying to use it for a debugging or production-deployment scenario?
  • Would the environment variable suffice in this case, or do you require the runtimeconfig.json setting?

I'm curious to hear from other developers too, if there are specific use-cases for this setting.

@swaroop-sridhar

I have a rather strange situation. My software is stored on a network share and then the network share is linked to a random windows vm and run directly from the network share. The windows vm that is spun up to run the software from network share can't have anything modified or changed on it. Logs, Config files, etc all have to be created on the network share.

Strangely this setup allows the overall system to layout multiple pieces of software in a folder and then clone that folder structure for different configurations/versions. Then when a specific version has to run the managing software spins up vms and maps drives on the vms to specific version network share and runs the software.

1.) Its is a production deployment scenario
2.) Environment variables would be difficult since the computer running the software is constantly different and being destroyed/remade

@cup I really don't agree. This topic is about implementing the design doc, and the design doc specifically says ExtractBaseDir should work for this feature.

If this issue is not about implementing the design doc, then what is it about?

Thanks for the explanation @igloo15.

I ask about the configuration option, because there is a cost to parsing the configuration files early in the AppHost code. Currently the AppHost doesn't link with the code for parsing the json files (they are used by hostfxr/hostpolicy instead). Adding this feature at present would make the AppHost a bit bigger/complex. However once the implementation proceeds to further stages (where all hosting and runtime code are linked together), this is no longer an issue.

@igloo15 the design doc is an important part of it, but the core of this issue is the feature itself. ExtractBaseDir is an optional parameter, and I would rather see the core feature land in the current release than get bogged down with optional parameters.

@cup, I asked the question here, because I got the attention of interested developers in this thread.
While I don't think the question is off-topic, I do understand your concern about the length of this thread. I'll create a linked issue next time. Thanks.

@cup @swaroop-sridhar

A couple of things. First, this setting only matters for Stage 1 of the staging document. All the other stages (2-5) are about running the dlls from inside the bundle without extracting, so a setting to determine the extraction base dir is not really useful for stages 2-5. As .NET Core 3.0 intends to deliver Stage 1, I would expect this feature to be there, since that's the only time it would be useful.

Secondly, how are we differentiating this method from other methods like dotnet-wrap? For me, the main advantage of this method is that we are building this process into the build chain, closer to where code is being compiled. Additionally, this is a feature that is geared to .NET and thus understands the nuances of .NET Core applications, like their configs, build process, etc. As such, I would hope that we focus on features that differentiate this tool from other tools that are not geared specifically to .NET Core. These would be features like configuring the extraction location, which dotnet-wrap cannot do.

As far as how it's implemented, I am not sure, but I would think that rather than reading ExtractBaseDir from the config at runtime, we could bake it into the bundle exe at build time. I have not seen the code that makes this bundle process work, but in my mind it generates a native exe with all the dlls etc. inside it. Couldn't we, during generation of the exe, read the bundled app's runtimeconfig file, pull out ExtractBaseDir, and set it as a property on the native bundle exe?

Then it's not reading an embedded runtimeconfig at runtime, but instead leveraging an internal property to determine the extraction location. That should potentially be much faster.

@igloo15: Extracting contents of the bundle to disk may be useful for certain apps beyond Stage 2.
For example, if the app has custom native binaries in the bundle, they'll need to be extracted out until Stage 5. The app may also want to bundle content files of unknown type to be extracted to disk at runtime.

I agree that the utility of this setting reduces as we move to further stages of development. However, once we add the option, we'll likely have to keep it in all subsequent versions for compatibility. Therefore, it needs careful consideration now.

I agree that making the extraction location a build-time setting (ex: msbuild-property) rather than a runtimeconfig option is easier and more efficient wrt implementation. Thanks.

Hi There, I have a couple questions on this topic:

I would like to use the Single-file Distribution feature for packaging .NET Framework 4.7.2 (for example) applications into a single executable. Is that the intention of this feature or is it only meant to support .NET Core applications?

My second question is related to how the packaging is done. When the executable is created would the target frameworks runtime get bundled into the executable? Or must it be pre-installed on the target machine which will be running the application?

@gaviriar I don't think it supports .NET Framework applications. And yes it can bundle .NET Core runtime into executable (so called "self contained" publish).

@tomrus88 thanks for the response. That is a true shame regarding .NET Framework applications. Is there any method you would recommend that could be used to bundle the .NET Framework runtime into an executable?

I see a list of tools in the related work section of the document. Is there any one in particular you could recommend for the use case proposed above?

@gaviriar .NET Framework is part of Windows OS. There is no supported way to bundle it into an executable.

@gaviriar The need to bundle the runtime is one of the major reasons to switch to .NET Core.

@jnm2 and @jkotas thanks for the clarification, noob here in the .NET world. I agree, I would love to switch to .NET Core. However, I am facing a case where I have to interact with a legacy library targeting .NET Framework. In such a case, it is my understanding that I cannot switch my app to .NET Core if I need to interact with this library. Or is there an alternative approach such that my app is .NET Core but can still interact with the legacy .NET Framework library?

It depends on the library. A lot of libraries that target .NET Framework work fine on .NET Core as well. Have you tried to use the library in .NET Core? If you have verified that it does not work, you cannot switch until you have a solution for it.

I have indeed tried it, but this does not work as we are talking about libraries which are windows specific. I guess I have no other option than to stick with targeting .NET Framework and miss out on the .NET Core fun as you say :/

@gaviriar This is off topic, but have you tried the Windows Compatibility NuGet package? It provides Windows APIs for .NET Core, for exactly the purpose you are describing: Microsoft.Windows.Compatibility

@gaviriar: A few more clarifications:

When the executable is created would the target frameworks runtime get bundled into the executable? Or must it be pre-installed on the target machine which will be running the application?

This depends on the build. Both framework-dependent (runtime is installed on the target) and self-contained (runtime is packaged with the app) apps can be published as a single file -- but only .NET Core 3.0 apps.

I see a list of tools in the related work section of the document. Is there any in particular which could be reccomended for my the use case proposed above?

For .NET Framework, the best solution I can suggest is for the app to handle the file extraction itself. For example: embed the dependencies as managed resources in the app binary, and then explicitly extract the resources on startup. You can also use the bundle library for bundling and extraction (because the library is built against netstandard), but using managed resources is a better solution.
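A rough sketch of that embedded-resources approach for a .NET Framework app (the MyApp namespace and the resource naming scheme are hypothetical; real projects must match whatever names the build embeds):

```csharp
using System;
using System.Reflection;

static class Program
{
    static void Main(string[] args)
    {
        // Resolve dependencies from embedded resources instead of loading them from disk.
        AppDomain.CurrentDomain.AssemblyResolve += (sender, e) =>
        {
            var name = new AssemblyName(e.Name).Name;
            // Hypothetical embedded-resource naming scheme: "MyApp.Libs.<AssemblyName>.dll"
            var resourceName = "MyApp.Libs." + name + ".dll";
            using (var stream = Assembly.GetExecutingAssembly().GetManifestResourceStream(resourceName))
            {
                if (stream == null) return null; // not one of ours; let normal probing continue
                var bytes = new byte[stream.Length];
                stream.Read(bytes, 0, bytes.Length);
                return Assembly.Load(bytes);
            }
        };
        // ... application entry point continues here ...
    }
}
```

Note the handler must be registered before any code path touches types from the embedded assemblies.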

@igloo15 and @swaroop-sridhar I appreciate these inputs considering they are off topic. Although I hope someone else finds this discussion useful when reading someday.

I will evaluate the options you have shared with me and let you know what turns out to be the best approach for my use case.

Thanks!

Quick update!

I have tested out this simple Hello World application. I can successfully build a single file executable. However as you can see from the project configuration, it is meant to be self-contained. But when I try to run this executable on a fresh Windows 7 installation without any .NET core runtime installed I see the following error:

Failed to load the DLL from [C:\Users\vagrant\AppData\Local\Temp\.net\HelloWorld
\muasjfkf.kyn\hostfxr.dll], HRESULT: 0x80070057
The library hostfxr.dll was found, but loading it from C:\Users\vagrant\AppData\
Local\Temp\.net\HelloWorld\muasjfkf.kyn\hostfxr.dll failed
  - Installing .NET Core prerequisites might help resolve this problem.
     https://go.microsoft.com/fwlink/?linkid=798306

What I can see is that publishing the app with dotnet publish /p:PublishSingleFile=true or /p:PublishSingleFile=false does not change the size of the executable. Is this expected behaviour?

Steps to Reproduce

  1. Publish the project to generate the HelloWorld.exe

  2. Copy the executable to a Windows 7 installation w/o any .NET Core runtime installed

  3. Run the HelloWorld.exe

@gaviriar I tried out the example you've published, and it worked fine for me.
That is, it built a self-contained single-exe for HelloWorld that runs OK.

What I can see is that publishing the app with dotnet publish /p:PublishSingleFile=true or /p:PublishSingleFile=false does not change the size of the executable. Is this expected behaviour?

This is definitely not expected. The single-file app should be about 70MB (for the debug build indicated on your gitlab page). Otherwise, it is a normal apphost, which is expected to fail exactly as you've noted above. Are you publishing from a clean directory?

One other note about your project file -- you have one of the properties incorrectly written as IncludeSymbolsInFile instead of IncludeSymbolsInSingleFile. But this has no connection to the problem you've reported.

@gaviriar what is your target runtime identifier? If you don't set it correctly, it won't bundle the correct hostfxr.dll. It's possible the hostfxr.dll requires a specific VC++ redistributable that does not exist on your Windows 7 machine. You can check whether the dll at the given path exists, and if it does, try loading it in the Dependency Walker tool to see whether a dll it depends on is missing.

Merging IL: Tools like ILMerge combine the IL from many assemblies into one, but lose assembly identity in the process. This is not a goal for the single-file feature.

lose assembly identity

this is not a problem.
I'm always against using zip/unzip or other bundle solutions for the single-exe self-contained feature,
because they will cause the next round of issues:
how do we reduce the single exe file size, and why is the exe file so big when it's only a hello world?

The problem is always there and needs to be addressed thoroughly.
In fact, an IL-Merge-like approach should have been used from the outset.

In fact, people are not really so obsessed with a single EXE.
One exe plus one or two dlls is also OK, just not 400 dlls.
People don't want to ship code that will never execute to the server's disk and have it take up the server's memory.
People just want to reduce the cost of deployment (disk, memory, bandwidth, etc.).

I removed the 3.0 milestone, so that this issue tracks the implementation of the single-file feature through the stages described in this document, which is beyond the scope of 3.0 release.

@sgf, I agree that there's work needed to reduce the file size. The size-reduction opportunities are of two forms; they are not specifically tied to single-exe:
1) Reduce the amount of content that needs to be published. For example:
* Use ILLinker-like tools to trim unnecessary binaries, parts of assemblies, etc.
This work is separately tracked by issues such as dotnet/coreclr#24092 and mono/linker#607
* We can consider other measures, such as a "reduced capability" mode where JITting is not supported, to reduce the size of the runtime.
2) Add a compression feature to single-file publishing.

The above features can work together to reduce the size of single-file (and non-single-file) apps.
For example, for a HelloWorld console app:
Normal publish size = 67.4 MB
Trimmed size = 27 MB
Trimmed, single file, compressed (via prototype) = 12 MB
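As a sketch, the trimming and single-file options mentioned above can be combined in the project file (property names as exposed by the 3.0 preview SDKs; the RID is just an example):

```xml
<PropertyGroup>
  <PublishSingleFile>true</PublishSingleFile>
  <PublishTrimmed>true</PublishTrimmed>
  <RuntimeIdentifier>linux-x64</RuntimeIdentifier>
</PropertyGroup>
```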

@swaroop-sridhar Is there any way to get the path of your single file exe within your application?
Using Assembly.GetExecutingAssembly().Location will output the extraction dir under \AppData\Local\Temp\.net.

@chris3713 The best way to access the exe location currently is by P/Invoking native APIs. For example: GetModuleFileNameW(NULL, <buffer>, <len>)
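Concretely, such a P/Invoke could look like this sketch (Windows-only; buffer-size handling simplified for illustration):

```csharp
using System;
using System.Runtime.InteropServices;
using System.Text;

static class ExePath
{
    [DllImport("kernel32.dll", CharSet = CharSet.Unicode, SetLastError = true)]
    static extern uint GetModuleFileNameW(IntPtr hModule, StringBuilder lpFilename, uint nSize);

    // Returns the path of the running single-file exe (not the extraction dir).
    public static string Get()
    {
        var buffer = new StringBuilder(1024);
        GetModuleFileNameW(IntPtr.Zero, buffer, (uint)buffer.Capacity);
        return buffer.ToString();
    }
}
```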

@chris3713 The best way to access the exe location currently is by P/Invoking native APIs. For example: GetModuleFileNameW(NULL, <buffer>, <len>)

Thanks, I ended up using Process.GetCurrentProcess().MainModule.FileName.

Works fine with an ASP.NET Core worker / web API published as a Windows service with PublishSingleFile=true.

When I register the service and run it via sc create testwk binPath="[...]/MyAspNetCoreWorker.exe" and then sc start testwk, I get this success message:

[...]\bin\publish\x64>sc start testwk

SERVICE_NAME: testwk
        TYPE               : 10  WIN32_OWN_PROCESS
        STATE              : 2  START_PENDING
                                (NOT_STOPPABLE, NOT_PAUSABLE, IGNORES_SHUTDOWN)
        WIN32_EXIT_CODE    : 0  (0x0)
        SERVICE_EXIT_CODE  : 0  (0x0)
        CHECKPOINT         : 0x0
        WAIT_HINT          : 0x7d0
        PID                : 5060
        FLAGS              :

[...]\bin\publish\x64>sc stop testwk

SERVICE_NAME: testwk
        TYPE               : 10  WIN32_OWN_PROCESS
        STATE              : 3  STOP_PENDING
                                (STOPPABLE, NOT_PAUSABLE, ACCEPTS_SHUTDOWN)
        WIN32_EXIT_CODE    : 0  (0x0)
        SERVICE_EXIT_CODE  : 0  (0x0)
        CHECKPOINT         : 0x0
        WAIT_HINT          : 0x0

I can't wait to see Stage 2 with running from the bundle 😀
By the way, is Stage 2 still planned to be part of .NET Core 3.0?

Thank you for your work, publishing a service in .NET Core is now really simple 👍

CoreRT: (2.5 MB)


No zip hack, no hidden cost, the engine contains only what is needed, and unused features are disabled using csproj config (reflection)

This is what dotnet needs; it competes with Go and even surpasses it both in terms of size and perf

For anyone interested, check the repo:

https://github.com/dotnet/corert

There are samples for MonoGame:

https://github.com/dotnet/corert/tree/master/samples/MonoGame

None of the proposed solutions can achieve this result. CoreRT is truly the best; MS is making a huge mistake by not focusing on it

I have tested this feature on macOS with dotnet-sdk-3.0.100-preview8-013656-osx-x64.tar.gz. If we switch the configuration from Debug to Release or vice versa and build again, it adds 10 MB to the executable size. Recompiling with the old configuration does not recover this increase in size unless we delete the bin directory.

mkdir /tmp/dotnet-preview8
curl -s https://download.visualstudio.microsoft.com/download/pr/a974d0a6-d03a-41c1-9dfd-f5884655fd33/cf9d659401cca08c3c55374b3cb8b629/dotnet-sdk-3.0.100-preview8-013656-osx-x64.tar.gz | tar -xvz -C /tmp/dotnet-preview8
export PATH=/tmp/dotnet-preview8:$PATH

dotnet new console -o /tmp/myApp
pushd /tmp/myApp

# publish with Debug
dotnet publish /p:PublishSingleFile=true,RuntimeIdentifier=osx-x64,Configuration=Debug -o bin/a/b/c/d

bin/a/b/c/d/myApp
# outputs: Hello World!

du -sh bin/a/b/c/d
# outputs  70M  bin/a/b/c/d

# publish with Release
dotnet publish /p:PublishSingleFile=true,RuntimeIdentifier=osx-x64,Configuration=Release -o bin/a/b/c/d

du -sh bin/a/b/c/d
# outputs:  80M bin/a/b/c/d

# publish with Debug again
dotnet publish /p:PublishSingleFile=true,RuntimeIdentifier=osx-x64,Configuration=Debug -o bin/a/b/c/d
# still outputs:  80M   bin/a/b/c/d

Even without changing build configuration, if we invoke Rebuild target subsequently (/t:Rebuild) after the initial build, the output size increases.

Thanks @am11 for taking a look. Publish retaining old binaries is a known issue (cc @nguerrera).

Since .NET Core 3 Preview 8, I can't launch my single-file published app.
In the Event Viewer I have this error:

(screenshot of the Event Viewer error)

I'm on Windows Server 2019

Edit: This error occurred after updating the .NET Core SDK from Preview 7 to Preview 8. After rebooting the server, I can launch my application correctly again.

@Safirion we made a change to the underlying internal format around these previews. It is possible that you straddled output from one preview and built with another. There are no breaks between Preview 8 and 9.

On Linux the default extraction path seems to be /var/tmp/.net. On AWS Lambda the /var path is not writable, so creating that folder fails. To get out-of-the-box runnability without setting DOTNET_BUNDLE_EXTRACT_BASE_DIR, I propose changing this to /tmp/.net. Or perhaps there should be multiple attempts at common locations (/var/tmp, /tmp, same directory as the binary, etc.)

@stevemk14ebr thanks for reporting that issue. Can you file a new issue to track that work? https://github.com/dotnet/core-setup

@jeffschwMSFT The same error was thrown when upgrading from .NET Core P9 to RC1.

Description: A .NET Core application failed.
Application: Setup.exe
Path: C:\Users\Administrateur\Desktop\Setup.exe
Message: Failure processing application bundle.
Failed to determine location for extracting embedded files
A fatal error was encountered. Could not extract contents of the bundle

@Safirion thanks for reporting this issue. Can you create a separate issue to repro this? https://github.com/dotnet/core-setup
cc @swaroop-sridhar

@Safirion can you please send repro instructions? Is this failure deterministic?
Please file an issue in core-setup repo, as @jeffschwMSFT suggested above.

I looked at this, and it would only happen if the app is running in an environment where the temporary directory is not accessible. For example, if I do this:

set "TMP=wrong_path"
myapp.exe

It fails with

Failure processing application bundle.
Failed to determine location for extracting embedded files
A fatal error was encountered. Could not extract contents of the bundle

Note that the bundle looks for the temporary directory using the GetTempPath Win32 API, which uses the env. variable TMP, followed by TEMP and so on. If the first one with a value contains an invalid path, it will fail like this.

You can fix this by either making sure that the temp path is set correctly, or you can explicitly set DOTNET_BUNDLE_EXTRACT_BASE_DIR to a location where you want the extraction to happen.
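For example (Windows cmd; both paths are illustrative):

```shell
rem Either repair the temp variables...
set TMP=C:\Users\me\AppData\Local\Temp

rem ...or pin the extraction location explicitly:
set DOTNET_BUNDLE_EXTRACT_BASE_DIR=D:\MyAppCache
myapp.exe
```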

@Safirion can you please send repro instructions? Is this failure deterministic?
Please file an issue in core-setup repo, as @jeffschwMSFT suggested above.

I'm not able to reproduce this issue after updating .NET Core to RC1 and restarting my server. (I tried to uninstall RC1, install Preview 9 again, and upgrade again to RC1, but my app launched fine this time.)

This error happened when going from SDK 3.0 P7 to SDK 3.0 P8, and from SDK 3.0 P9 to SDK 3.0 RC1.
In both cases, a simple restart of the server fixed the issue.

I was using a .NET Core Windows Service (a Worker using the C:\Windows\Temp\.net folder for extraction, since it's launched by SYSTEM) and a .NET Core WPF app (launched by a user startup session script) when I upgraded to RC1. Maybe this error is caused by the .NET Core worker running during the .NET Core SDK install, I don't know...

Note that I install the SDK and not the runtime because I have to deploy the 3 .NET Core runtimes (ASP.NET Core, Desktop, and base) at the same time, and https://dot.net does not give us a "full runtime" installer (I haven't even found a Desktop Runtime installer, so no choice)...

Update: nevermind, found the documentation.

Hey there, thanks for your hard work!

How are config files (App.config, for example) supposed to be distributed? I've just built a console app on RHEL 7.6 with the exact command sequence:

dotnet restore -r win-x64 --configfile Nuget.config
dotnet build Solution.sln --no-restore -c Release -r win-x64
dotnet publish --no-build --self-contained -c Release -r win-x64 /p:PublishSingleFile=true -o ./publish ./SomeDir/SomeProj.csproj

where SomeProj.csproj has <PublishTrimmed>true</PublishTrimmed> enabled, and got 2 files as the result: SomeProj.exe and SomeProj.pdb, but not SomeProj.config

@catlion to have the default behavior for *.config files, they should be excluded from the bundle. By default, all non-pdb files are included in the bundle.
The following will exclude the config from the single file:

<ItemGroup>
    <Content Update="*.config">
      <CopyToPublishDirectory>PreserveNewest</CopyToPublishDirectory>
      <ExcludeFromSingleFile>true</ExcludeFromSingleFile>
    </Content>
  </ItemGroup>

https://github.com/dotnet/designs/blob/master/accepted/single-file/design.md#build-system-interface

@morganbr

What kind of app would you be likely to use it with? (e.g. WPF on Windows? ASP.NET in a Linux Docker container? Something else?)

Headless and WPF apps for Windows

Does your app include (non-.NET) C++/native code?

Yes, we P/Invoke to a variety of native dlls, some third-party items we have no control over, some internal from other teams.

Would your app load plugins or other external dlls that you didn't originally include in your app build?

Yes but we use CompilerServices to compile them from source.

Are you willing to rebuild and redistribute your app to incorporate security fixes?

Yes

Would you use it if your app started 200-500 ms more slowly? What about 5 seconds?

Yes, our app is generally left open for long periods of time.

Would you accept a longer release build time to optimize size and/or startup time? What's the longest you'd accept? 15 seconds? 30 seconds? 1 minute? 5 minutes?

Yes, we can wait multiple minutes.

Would you be willing to do extra work if it would cut the size of your app in half?

Yes

Thanks @john-cullen

Let me also provide some answers for @morganbr :) I don't know if you still read these :)

  1. What kind of app would you be likely to use it with? (e.g. WPF on Windows? ASP.NET in a Linux Docker container? Something else?)

Text adventures (Console apps) to be run on different platforms (Linux, MacOS, Windows)

  1. Does your app include (non-.NET) C++/native code?

No.

  1. Would your app load plugins or other external dlls that you didn't originally include in your app build?

Not currently, but it would be great in the long run to support certain scenarios.

  1. Are you willing to rebuild and redistribute your app to incorporate security fixes?

Yes.

  1. Would you use it if your app started 200-500 ms more slowly? What about 5 seconds?

Yes.
(It would be great to provide some text output or message box to notify the user something is happening)

  1. What's the largest size you'd consider acceptable for your app? 5 MB? 10? 20? 50? 75? 100?

10-20 MB.

  1. Would you accept a longer release build time to optimize size and/or startup time? What's the longest you'd accept? 15 seconds? 30 seconds? 1 minute? 5 minutes?

1+ minutes is ok.

  1. Would you be willing to do extra work if it would cut the size of your app in half?

Possibly.

Thanks for the response, @lenardg.

This issue tracks progress on the .NET Core 3.0 single-file distribution feature.
Here's the design doc and staging plan for the feature.

This is a very useful feature to ease deployment and reduce size.
A few improvement thoughts on the design, especially for enterprise deployments where the environment is tightly controlled via group policies.

  1. Is it possible to optionally compress while packing the single executable?
    i.e. add wrap-like logic, so we do not need other tools' integration.
    Compression would produce an even smaller exe.

  2. When launching the exe, it expands its contents to a temp folder. For our product, on certain user sites, the client machines are tightly controlled and they do not allow executables to be launched from the temp folder or even the %userprofile% locations.

Is it possible to specify/control where to extract, or maybe 'in-place extract' into a ".\extract" or similar subfolder?
This allows group-policy compliance as well as the ability to use the single-exe feature.

  3. Before packing, is it possible to sign the files in the folder before it packs them?
    We can sign the single exe, but the extracted files are not signed, and hence at certain other client locations execution is not allowed unless the binaries are signed.

Just wanted to let you know, I saw the same issue as @Safirion today. I have a .NET Core 3.0 app targeting Ubuntu, and went to run it on my Synology server. It wouldn't extract - it kept failing with the error "A fatal error was encountered. Could not extract contents of the bundle". I rebuilt several times, but got the same thing. After rebooting the host, it worked fine again.

It would be useful to understand if there's a file lock or something here, and if there are any tips on how to debug it.

@RajeshAKumar dotnet/core-setup#7940

I am not sure how that link helps.
It just talks about the temp extraction; I have listed 3 points above.

And I gave you the solution for one of them. I have no idea about the other 2

And I gave you the solution for one of them. I have no idea about the other 2

The temp extract does not work for us, hence that is not the solution. My request was the ability to control where it extracts, or maybe to extract in place into a subfolder.
The IT admins at many of our clients' computers do not allow executables to run from temp folders or from the user profile.
Please re-read the post https://github.com/dotnet/coreclr/issues/20287#issuecomment-542497711

@RajeshAKumar: You can set the DOTNET_BUNDLE_EXTRACT_BASE_DIR to control the base path where the host extracts the contents of the bundle.
Please see more details here: https://github.com/dotnet/designs/blob/master/accepted/single-file/extract.md#extraction-location

@RajeshAKumar: Compression of the bundled single-exe is a feature under consideration for .net 5: https://github.com/dotnet/designs/blob/master/accepted/single-file/design.md#compression

For now, you'll need to use other utilities to compress the generated single-exe.

@RajeshAKumar, regarding signing, you can sign all the files/binaries that get published and bundled into the single-exe. When the host extracts the embedded components to disk, the individual files will themselves be signed. You can also sign the built single-file itself, as you've mentioned.

Does that meet your requirements?

@Webreaper If you can repro the problem, can you please run the app with COREHOST_TRACE turned on (set the COREHOST_TRACE environment variable to 1) and share the generated log? Thanks.

@swaroop-sridhar
This is a very cool feature. Do you have a link as to when different stages will be available in which dotnet release?

@RajeshAKumar: You can set the DOTNET_BUNDLE_EXTRACT_BASE_DIR to control the base path where the host extracts the contents of the bundle.
Please see more details here: https://github.com/dotnet/designs/blob/master/accepted/single-file/extract.md#extraction-location

Thank you swaroop-sridhar

@RajeshAKumar, regarding signing, you can sign all the files/binaries that get published and bundled into the single-exe. When the host extracts the embedded components to disk, the individual files will themselves be signed. You can also sign the built single-file itself, as you've mentioned.

Does that meet your requirements?

I am trying to figure out how to separate the 'compile' and publish parts.
I use the following command currently, so it does not give me the ability to sign.
Set publishargs=-c Release /p:PublishSingleFile=True /p:PublishTrimmed=True /p:PublishReadyToRun=false
dotnet publish -r win-x64 -o bin\Output\Win64 %publishargs%

Since the above compiles as well as trims and packages, I'm not sure how to separate them.
Ideally, to use your approach, I would have to compile, then sign, then finally publish.

@Webreaper If you can repro the problem, can you please run the app with COREHOST_TRACE turned on (set the COREHOST_TRACE environment variable to 1) and share the generated log? Thanks.

Thanks. It's gone away now because I rebooted, but it's useful to have this noted in case it happens again!

@RajeshAKumar you'll need to tie the signing to a target that runs just before bundling.
Since you're using PublishReadyToRun and PublishTrimmed you cannot use the standard AfterBuild or BeforePublish targets.

You can add a target that runs just before BundlePublishDirectory and performs the signing.

@swaroop-sridhar
This is a very cool feature. Do you have a link as to when different stages will be available in which dotnet release?

Thanks @etherealjoy. The work for single-file apps running directly from the bundle is in progress, targeting the .NET 5 release to my best understanding.

@RajeshAKumar you'll need to tie the signing to a target that runs just before bundling.
Since you're using PublishReadyToRun and PublishTrimmed you cannot use the standard AfterBuild or BeforePublish targets.

You can add a target that runs just before BundlePublishDirectory and performs the signing.

Thank you.
The link you gave looks like MSBuild tasks.
How do I create a handler for "BundlePublishDirectory"? Is this in Studio/Project props/Build Events, or do I have to create something from scratch?

@RajeshAKumar in this case, you'll need something more fine-grained than pre-build/post-build events.
So, I think you should edit the project file and add something like:

<Target Name="Sign" BeforeTargets="BundlePublishDirectory">
    ... 
</Target>
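Fleshing that out, a hedged sketch: the signtool arguments, the $(SigningCert)/$(SigningPassword) properties, and the FilesToBundle item name are assumptions here; check the SDK's bundling targets for the exact item that BundlePublishDirectory consumes:

```xml
<Target Name="SignBeforeBundle" BeforeTargets="BundlePublishDirectory">
  <!-- Sign each binary about to be bundled; certificate details are placeholders. -->
  <Exec Command="signtool sign /f $(SigningCert) /p $(SigningPassword) &quot;%(FilesToBundle.Identity)&quot;"
        Condition="'%(FilesToBundle.Extension)' == '.dll' Or '%(FilesToBundle.Extension)' == '.exe'" />
</Target>
```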

What's the expected behavior of the dotnet pack command with this? Let's say we have a single-exe artifact pushed to a local NuGet repository. If I try to install it with Chocolatey, it tries to restore all the NuGet deps listed in the project file. That was expected previously, but I'm in doubt whether this behavior is correct for SFD.

Is it possible to add a progress bar or loading indicator during extraction of a single-file self-contained WPF application? It can take some time with nothing visibly happening.
A basic self-contained WPF app is more than 80 MB, and extraction can take more than 5 seconds. It's not very user friendly, and I have received complaints from my end users.

Edit: Is there any way to clean up old versions automatically at launch?

@Safirion I can't see how that could be in scope for this feature. If you require that, you would be best served by making your own tiny app that shows a splash screen and launches the real app, and then have the real app stop the splash-screen program when it starts.

@ProfessionalNihilist I think you didn't understand my point.
An empty self-contained WPF app uses 80 MB of disk storage when compiled. You can't have a smaller WPF app than 80 MB without compiling it as a framework-dependent app 😉
The problem is the time taken to extract the included framework before the app can even launch. So it has to be handled by .NET Core, and it is totally related to this feature.

Maybe add the ability to have a PNG that would be shown while the app is decompressed?

@Safirion @ayende the lack of "UI feedback" during startup is tracked here: https://github.com/dotnet/core-setup/issues/7250

What's the expected behavior of the dotnet pack command with this? Let's say we have a single-exe artifact pushed to a local NuGet repository. If I try to install it with Chocolatey, it tries to restore all the NuGet deps listed in the project file. That was expected previously, but I'm in doubt whether this behavior is correct for SFD.

@catlion the PublishSingleFile property is only supported by the dotnet publish command, so it has no impact on dotnet pack. Is there a motivation to use both single-file and packaging?

Edit : Any way to cleanup old version automaticaly at launch ?

@Safirion in the current release, the cleanup is manual; the host doesn't attempt to remove the extracted files, because they could potentially be reused in future runs.

Ok, I will make my own cleaner 😉
Thank you for your reply.
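Such a cleaner could look roughly like this sketch. It assumes the default per-app extraction layout (per-version subdirectories under an app-named folder) and that directories other than the one currently in use are safe to delete; verify both assumptions against the extraction design doc before using:

```csharp
using System;
using System.IO;

static class ExtractionCleaner
{
    // Deletes stale extraction directories for this app, keeping the one
    // currently in use. basePath, appName, and currentDir are illustrative.
    public static void CleanOld(string basePath, string appName, string currentDir)
    {
        var appRoot = Path.Combine(basePath, ".net", appName);
        if (!Directory.Exists(appRoot)) return;

        foreach (var dir in Directory.GetDirectories(appRoot))
        {
            if (string.Equals(dir, currentDir, StringComparison.OrdinalIgnoreCase))
                continue; // never delete the extraction we're running from

            try { Directory.Delete(dir, recursive: true); }
            catch (IOException) { /* in use by another instance; skip */ }
            catch (UnauthorizedAccessException) { /* no rights; skip */ }
        }
    }
}
```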

@cup that is nothing special about Mono, that's just a standard compilation; now add a NuGet reference and that solution isn't a single file anymore. Mono has mkbundle, though, to achieve single files.

@Suchiman are you sure? My example above produces a single 3 KB file, while the dotnet minimum size seems to be 27 MB:

https://github.com/dotnet/coreclr/issues/24397#issuecomment-502217519

@cup yes

C:\Users\Robin> type .\Program.cs
using System;
class Program {
   static void Main() {
      Console.WriteLine("sunday monday");
   }
}
C:\Users\Robin> C:\Windows\Microsoft.NET\Framework\v4.0.30319\csc.exe .\Program.cs
Microsoft (R) Visual C# Compiler version 4.8.3752.0
for C# 5
Copyright (C) Microsoft Corporation. All rights reserved.

This compiler is provided as part of the Microsoft (R) .NET Framework, but only supports language versions up to C# 5, which is no longer the latest version. For compilers that support newer versions of the C# programming language, see http://go.microsoft.com/fwlink/?LinkID=533240

C:\Users\Robin> .\Program.exe
sunday monday
C:\Users\Robin> dir .\Program.exe


    Verzeichnis: C:\Users\Robin


Mode                LastWriteTime         Length Name
----                -------------         ------ ----
-a----       10.11.2019     18:36           3584 Program.exe

@Suchiman are you sure? My example above produces a single 3 KB file. While dotnet minimum size seems to be 27 MB:

#24397 (comment)

Your 3 KB file only works because mono is already installed. The whole point of .NET Core single-file distribution is that you don't need the CLR installed; it's a standalone single exe including the runtime.

@Webreaper actually I believe it was only compiled with mcs from Mono; since he didn't write mono sun-mon.exe, it likely ran on .NET Framework.

But .NET Core supports the runtime-preinstalled scenario as well, aka framework-dependent deployment. It's still not quite single-file deployment in that case, though, since there are a few additional files for .NET Core such as .deps.json and .runtimeconfig.json

@chris3713 The best way for accessing the exe location currently is by PInvoke-ing to native APIs. For example:GetModuleFileNameW(Null, <buffer>, <len>)

It seems Environment.CurrentDirectory is a better solution, though I haven't tried both approaches on something other than Windows yet.

EDIT: Nope. That path is subject to change at different entry points in the application. No good.

On a slightly related note, I found this regression in the single-file publishing of Blazor apps in the latest preview of VS for Mac: https://github.com/aspnet/AspNetCore/issues/17079 - I've reported it under AspNeCore/Blazor, but it may be that this is more relevant for the coreclr group - not sure. Will leave it for you guys to move around!

@Suchiman careful, that compiler has problems:

https://github.com/dotnet/roslyn/issues/39856

@cup except that using the file path I've named, you're using the old C# 5 compiler written in C++; that is not Roslyn, and they'll probably close that issue for that reason. But Roslyn can do the same thing, just a different path...

On a slightly related note, I found this regression in the single-file publishing of Blazor apps in the latest preview of VS for Mac: aspnet/AspNetCore#17079 - I've reported it under AspNeCore/Blazor, but it may be that this is more relevant for the coreclr group - not sure. Will leave it for you guys to move around!

@Webreaper Thanks for reporting the issue; that looks like an ASP.NET issue regarding static assets, so that's the right place to file it.

* Moving post from other issue to here. Original post: https://github.com/dotnet/coreclr/issues/27528 *

@swaroop-sridhar ,

The startup time of .NET Core single-file WPF apps is a lot slower than the original ILMerge-ed WPF application built on .NET 4.7. Is this to be expected, or will this improve in the future?

Builds come from my ImageOptimizer: https://github.com/devedse/DeveImageOptimizerWPF/releases

| Type | Estimated First Startup time | Estimated second startup time | Size | Download link |
| -- | -- | -- | -- | -- |
| .NET 4.7.0 + ILMerge | ~3 sec | ~1 sec | 39.3 MB | LINK |
| dotnet publish -r win-x64 -c Release --self-contained=false /p:PublishSingleFile=true | ~10 sec | ~3 sec | 49 MB | |
| dotnet publish -r win-x64 -c Release /p:PublishSingleFile=true | ~19 sec | ~2 sec | 201 MB | |
| dotnet publish -r win-x64 -c Release /p:PublishSingleFile=true /p:PublishTrimmed=true | ~15 sec | ~3 sec | 136 MB | LINK |
| dotnet publish -r win-x64 -c Release | ~2.5 sec | ~1.5 sec | 223 KB for exe (+400 MB in dlls) | |

@devedse, to make sure, is the "second startup" the average of several runs (other than the first)?
I'm curious, but lacking any explanation for why the `/p:PublishSingleFile=true /p:PublishTrimmed=true` run should be slower than the `/p:PublishSingleFile=true` run.

So, before investigating, I want to make sure the numbers in the "second startup" column are stable and that the difference in startup is reproducible.

Also, this issue is about single-file plugins, can you please move the perf discussion to a new issue, or to dotnet/coreclr#20287? Thanks.

@swaroop-sridhar , in response to your question about it being the average:
It's a bit hard for me to time this very accurately, so the timing was done by counting while the application was starting and then trying it a few times to see if there was a significant difference in startup time. If you're aware of a better method, you can easily reproduce it by building my solution: https://github.com/devedse/DeveImageOptimizerWPF

My main question is why it takes longer for a bundled (single-file) application to start in comparison to an unbundled .exe file.

I may be wrong here, but it makes sense to me since there is overhead with a single file. Essentially you have an app that is starting another app, while the ILMerge output starts directly. ILMerge only merged referenced dlls into the exe; it did not wrap the whole thing in another layer, which is what is currently being done with PublishSingleFile.

@devedse The single file is essentially extracting, checking checksums, etc. before starting the dotnet run.
I guess that is why it takes that time.
The extraction is "cached", so on the next run there is no IO overhead.

@RajeshAKumar, hmm, is extracting really the way to go in this scenario? Wouldn't it be better to go the ILMerge way and actually merge the DLLs into one single bundle?

Especially for bigger .exe files, you're also incurring the disk-space cost of storing all files twice.

@devedse We are all waiting for next stages of this feature (Run from Bundle) but for now, it's the only solution. 😉

https://github.com/dotnet/designs/blob/master/accepted/single-file/staging.md

this is what you get for using JIT on the desktop: slow startup. Looks like only Apple understood this

(Mostly repeating what was already stated):
First start is expected to be much slower - it extracts the app onto the disk - so a lot of IO. Second and subsequent starts should be almost identical to the non-single-file version of the app. In our internal measurements we didn't see a difference.

How to measure: We used tracing (ETW on Windows) - there are events when the process starts and then there are runtime events which can be used for this - it's not exactly easy though.

As mentioned by @Safirion we are working on the next improvement for single-file which should run most of the managed code from the .exe directly (no extract to disk). Can't promise a release train yet though.

JIT: All of the framework should be precompiled with ReadyToRun (CoreFX, WPF), so at startup only the application code should be JITed. It's not perfect, but it should make a big difference. Given the ~1-2 second startup times, I think it is already doing that in all of the tests.

Thanks all, I wasn't aware of the next steps that are planned. This clarifies it.

First start is expected to be much slower - it extracts the app onto the disk - so a lot of IO.

This should have never happened, you make the user experience horrible BY DESIGN, horrible choice, that is how you make users hate the tech the developer is using for them

As mentioned by @Safirion we are working on the next improvement for single-file which should run most of the managed code from the .exe directly (no extract to disk). Can't promise a release train yet though.

Why release this officially now if it's gonna change soon? It should be marked as preview/experimental

in my opinion this is a waste of time and resources; focus on AOT compilation and tree-shaking, put all your resources there, stop with hacks

@RUSshy Why so much hate? If you don't want the startup delay when you first launch then don't use single-file deployment.

I find the startup is significantly less than 10s, and since it's only the first time you run it, it's no problem at all. I'm deploying a server-side webapp, which means in most cases it's going to start up once and then run for days/weeks, so the initial extraction is negligible in the scheme of things - so I'd much prefer this as a stop-gap until there's a single compiled image, because it just makes deployment far easier than copying hundreds of DLLs around the place.

+1. In my case, we have a build docker generating a single exe, and a separate docker to run the app (using a regular Alpine docker image without dotnet). After the build step, we hot-load the runtime container once and docker-commit the layer. Subsequently, we have not observed any performance regression compared to a framework-dependent deployment. Once the load-from-bundle mechanism is implemented and shipped, we will remove the intermediate hot-loading step.

@vitek-karas, is there an issue tracking "load runtime assets from bundle" feature? interested in understanding what kind of impediments are there. :)

@am11 We're currently putting together the detailed plan. You can look at the prototype which has been done in https://github.com/dotnet/coreclr/tree/single-exe. The real implementation will probably not be too different (obviously better factoring and so on, but the core idea seems to be sound).

@Webreaper For web apps it isn't a problem at all, but .NET Core 3 is recommended for WPF/WinForms development now, and sharing a desktop application .exe lost among hundreds of .dlls is not an option, so I totally understand the frustration related to the first stage of this feature.
;)

And no user today waits 10 seconds (or more than 3 seconds) before re-clicking an exe. The fact that there is no loading indicator is the second big issue of this feature. Unfortunately, it seems that the loading indicator will not be part of .NET Core 3.1, so users will have to be patient...

Desktop developers are really waiting for Stage 2, and I hope it will be part of .NET 5, because right now desktop development in .NET Core is a really bad experience for end users.

@RUSshy Why so much hate? If you don't want the startup delay when you first launch then don't use single-file deployment.

this is not hate, this is constructive and honest feedback. I care about C# and .NET, I use both every day, and I don't want it to be replaced by Go or something else

just recently:
https://old.reddit.com/r/golang/comments/e1xri3/choosing_go_at_american_express/
https://old.reddit.com/r/golang/comments/ds2z51/the_stripe_cli_is_now_available_and_its_written/

negative feedback is as helpful as positive feedback, but if you take it as "hate" then I can't help you

.NET community is way too silent, passive and biased; disruption is the only way to go

.NET is objectively the best platform for most applications these days. I wish more people realized that.

The war stories I hear from other platforms such as Java, Go, Rust, Node, ... are frankly disturbing. These platforms are productivity killers.

in my opinion this is waste of time and ressources, focus on AOT compilation and tree-shaking, put all your ressources there, stop with hacks

I agree. .Net has a great type system. Too many times it gets circumvented using reflection. Tooling needs to focus on AOT but also on minimizing reflection. https://github.com/dotnet/corert/issues/7835#issuecomment-545087715 would be a very good start. The billion dollar mistake of nulls is being mitigated now; the same should be done with reflection with a setting or marker for reflection-free code (or otherwise linker or corert compatible code).

Eliminating reflection would be awesome. So much suffering could be avoided if reflection was banned. My most recent horror story was discovering I couldn't even move code around, because the framework used (the Service Fabric SDK) found it prudent to tie the serialized bytes to the assembly name of the serializer implementation, with no override possible.

Any progress towards discouraging reflection would be progress.

Btw, I was looking for a way to merge assemblies to reduce bundle size and load times and allow whole-program optimization. I gather this issue isn't really targeted at that.

Edit: Since this post gathered some reactions, just to clarify: I believe metaprogramming should happen at design time, where the code is the data and it's under my control.

Types I like to use to enforce invariants that I can trust. Runtime reflection breaks that trust.

Hence I'd like runtime reflection to be replaced with design-time metaprogramming, where it could also be more powerful, overlapping with use cases such as analyzers, quick fixes and refactoring.

Tagging subscribers to this area: @swaroop-sridhar
Notify danmosemsft if you want to be subscribed.

Single-file feature design is in this document: https://github.com/dotnet/designs/blob/master/accepted/2020/single-file/design.md
Tracking progress of single-file apps in this issue: https://github.com/dotnet/runtime/issues/36590.
Thanks to everyone for your feedback and suggestions in this issue.
