Hey All.
I work on video games, and for a long time Git has been unusable for us because most of our assets in video game development are large binary files. With file locking and LFS working amazingly well, I've started experimenting with moving projects over to Git as our primary version control workflow.
One issue we are having is that file locks persist after pushing to a remote repository. This behavior is fairly unintuitive and runs counter to other version control systems such as Perforce, which automatically unlocks files after submitting them.
An option to unlock files automatically after pushing them would be really nice to have, and would make the workflow much easier.
Thanks!
https://github.com/git-lfs/git-lfs/pull/666
"automatically unlocking (& making read-only) after push might be difficult, since there is no post-push hook. "
Also funny Pull Request number XD
I assume this is why it's not been done?
Original design states:
+5. File is unlocked if:
@RoyAwesome thanks for opening this. Clearly I'm not a game dev, so any insights into workflows that game devs use are very welcome.
There are a couple of reasons we didn't implement this:
@RoyAwesome: Would you be willing to describe your workflow at a high level? Does everyone push to master? Do you use feature branches? Do you have multiple branches for different stages of development (devel, release, hotfix, etc)?
Hey @technoweenie,
I'm a game developer over at inXile Entertainment, and @RoyAwesome is a game developer at Puny Human Games. The typical workflow for UE4 uses Perforce. We would love to use Git, but UE4 has a hard requirement to lock files, since its visual scripting language, called Blueprint, is saved as a binary file that is indistinguishable from the engine's other binary files (a Blueprint and an animation are both .uasset files).
Without such an option, the Git UE4 plugin locks the .uasset files you change as you save them, so it's easy to end up with 10-20 locked files over the course of a working day. In extreme cases (think crunch time), a handful of people may change the same .uasset file in a single day, and each needs to take and release its lock in turn. Requiring developers to run a command for every file the editor locks means a large part of the day is spent figuring out why something is locked and getting it unlocked (and even if you do have the power to remove someone's lock, you shouldn't if they are actively working on the file).
On your first point: adding this as some sort of option, even per tracked asset, would be huge for us game developers without hurting anyone who doesn't use the option.
On your second point: what if we just unlock them early? I know that's not typically the sane thing to do, but in our case, as game developers, I think it is, since most if not all of the assets affected by this will be in LFS.
Some sort of experimental option would go a long way for UE4 support.
Regards,
Michael Brune
Thanks for the feedback. I'm open to experimenting in LFS, but it'd probably be done in a separate command. I'm thinking something like:
$ git push origin master
$ git lfs locks unlock-all master
Unlocking early introduces a possible race condition. Here's a high-level description of a Git push with LFS:

1. `git push` starts.
2. The pre-push hook runs, which calls `git-lfs pre-push` to upload the LFS objects.
3. The actual Git push happens.

So if the final step, the actual Git push, fails, then the Git repository doesn't have your changes. That means someone can then lock files on the server's master commit (which doesn't include your changes) and start their changes.
One way a Git push can fail (not counting network/server issues) is that someone pushes a small change to master while your push is running. Maybe while your push is uploading objects or auto-unlocking locks, another user pushes up a tiny text file change. Then once your pre-push is finished, the Git push fails because master changed. I call this "push sniping" :)
This isn't an issue if the locks are intact until after a successful push. You could then run:
$ git pull origin master
$ git push origin master
That would download the user's update, merge your changes, and push again. Any LFS objects uploaded in the first push can be skipped, so it should go faster.
While an unlock-all might be useful in some cases, it wouldn't be in most, since we would only want to unlock the files we pushed. Perhaps both an unlock-all and an unlock-last-pushed-files command?
Additionally, the unlock command should at least accept multiple lock IDs, so it would be like
git lfs unlock -i 2901 2902 2903 2904 2905 2906
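Until multiple-ID support exists, a rough sketch of the equivalent with today's single-ID command is a simple loop (the IDs here are the example values from above):

```shell
# Hypothetical stand-in for "git lfs unlock -i 2901 2902 ...":
# call the existing single-ID command once per lock ID.
for id in 2901 2902 2903 2904 2905 2906; do
    git lfs unlock --id="$id"
done
```

The obvious downside, discussed further below, is one server round-trip per lock instead of a single batched request.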
Lastly, I think at least having the option of auto-unlocking after push, even given the risks of collision, would be really useful. I would like to test it and see how many collisions a small game studio would run into.
I think a better unlock-all command could work on a commit range.
$ git lfs locks unlock-all BEFORE...AFTER
This could be run with the before and after commit SHAs after merging a branch into master. LFS could scan the range of commits for lockable files, and unlock them all. Actually getting the right commits won't be the most fun thing, but could be automated by some tool like GH Desktop, SourceTree, Unity, etc.
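The scan itself could be sketched in shell today: list the files changed in the range and keep only those marked `lockable` in `.gitattributes` (here `BEFORE` and `AFTER` are placeholder SHAs; `unlock-all` itself does not exist yet):

```shell
# Sketch: files changed between BEFORE and AFTER that are marked
# "lockable" in .gitattributes (BEFORE/AFTER are placeholder SHAs).
git diff --name-only BEFORE AFTER |
    git check-attr --stdin lockable |
    awk -F': ' '$3 == "set" { print $1 }'
```

Each of these paths could then be matched against the server's lock list to find the IDs to unlock.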
Unlocking with multiple IDs or paths is a good idea too :+1:
I support this thread completely. File unlocking should definitely happen automatically on push. It would be interesting to see how/if this can be done through https://github.com/SRombauts/UE4GitPlugin and SourceTree (which I use for Git).
Hey guys, it's been a few months now. I wanted to loop back around on this and see what sort of action I could take to speed up some sort of solution. Is there a way we can get an "unlock all files from last commit" button? I would also love a way to automatically unlock on push, but as a workaround I think an "unlock all files from last commit" button could be nearly as efficient.
Here are some things that can be done with LFS:
- `git lfs locks unlock-all BEFORE...AFTER` to unlock files in a range of commits.

Unfortunately, it's not possible for LFS to add functionality after a successful Git push. Git would have to be taught a new post-push hook. No work outside of the discussion in this issue has taken place for the above. Contributions are very welcome here :)
Anything else will require specific integration with tools, and those requests should go to the teams working on those tools. We don't currently have anyone on the LFS core team that also works on any other Git tools. You don't even have to wait for LFS to implement the above. Someone could experiment with this functionality:
1. `git diff --name-only {NEWER_SHA} {OLDER_SHA}` to get a list of changed files.
2. `git lfs locks --json` output.
3. `git lfs unlock --id={id}` for each file that shows up in 1 and 2.

Would you think it's possible to use the post-receive hook instead of a post-commit or pre-push hook?
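A rough shell sketch of those three steps; to avoid needing a JSON parser it uses the plain `git lfs locks` output instead of `--json`, assuming the format `path<TAB>owner<TAB>ID:n` (check your version's output before relying on this):

```shell
#!/bin/sh
# Sketch of the three steps above. Assumes "git lfs locks" prints
# lines of the form: path<TAB>owner<TAB>ID:n
newer=$1
older=$2

# Step 1: files changed between the two commits.
changed=$(git diff --name-only "$newer" "$older")

# Step 2: current locks, reduced to "path<TAB>id" pairs.
git lfs locks | awk -F'\t' '{ sub(/^ID:/, "", $3); print $1 "\t" $3 }' |
while IFS="$(printf '\t')" read -r path id; do
    # Step 3: unlock each lock whose path appears in the diff.
    if printf '%s\n' "$changed" | grep -qxF "$path"; then
        git lfs unlock --id="$id"
    fi
done
```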
Sure, but that's a server side change that can't be implemented in this repository. There are a lot more technical challenges to solve, once we've decided on the correct behavior. Experimenting with this client side (in the LFS client or tools like UE) is a good first step.
Well, according to the documentation, it doesn't close the connection until that is received. So can't you just assume that when the connection is closed, the files you just pushed are unlocked? You could use that hook server-side to unlock them with authority.
(and if I understand correctly this is the git lfs client repo and this would need to be done over on the server repo, right?)
Bumping this thread. While those of us using Git are likely sophisticated enough to script this ourselves, as mentioned above, the point is to improve the interface of Git LFS with the goal of increasing its adoption in the "binary asset" management industry (which is only growing). The fact that large teams working on games, or on other pipelines involving lots of binary art and assets, are turning down Git solely because of file-locking limitations is severely disappointing. I was very pleased to find UE4 and the excellent updated Git-LFS-2 plugin at https://github.com/SRombauts/UE4GitPlugin. My whole team breathed a sigh of relief when we realized we could continue to use Git as we moved to UE4-based development (although we use the Git hosting at visualstudio.com over GitHub, since it supports server-side LFS locking and doesn't cost a dime). This is probably the last "bug" we need fixed to achieve a sane workflow around UE4 and Git. +1 for unlock-all with a commit range.
@mhaimes just to make it easier for people here is my unlockall.sh
git lfs locks | grep -oh '[0-9][0-9][0-9][0-9]' | while read -r line; do git lfs unlock --id="$line"; done
I've only spent maybe ten seconds on this command; it's still dog slow, and it simply unlocks every locked file instead of doing something smarter like only those in the last commit. But in reality I would only expect that sort of polish from a feature of a proper source control application, not from a script written as a stop-gap.
That said, I am in the exact same boat as you. This is literally the last thing Git LFS needs for a good working UE4 source control setup. While Perforce asks for $2,520 yearly for a team of 6, Git LFS could easily be less than that. Although the runaround on this issue is strong: make a script, script it, and it probably took longer to write the script than it will actually save me. https://xkcd.com/1205/
@MJBrune
Well, you can at least rest easy knowing that you're saving me and my team time in addition to your own. Surely in total that's longer than it took you to write that, although sometimes those bash one-liners end up an endless rabbit hole.. ⌚️💰❤️
P.S. Don't know if you read my post but we have found visualstudio.com's git hosting to work just as well as github's server-side LFS implementation, the only noticeable difference being it's completely free. At first we tried to set up gitlab+lfs on a local machine, but it seems they haven't implemented the server-side locking API yet - or at least, haven't rolled it out to the free-edition customers. But M$'s hosting has been more than adequate - no quotas, no caps, no fees. That tip is my return payment for your one-liner.
Step 1) Fix Git LFS + Locking unreal workflow.
Step 2) ?
Step 3) Profit.
It seems like there are two things going on here:
Git LFS should ideally be able to support unlocking locked files that have been successfully pushed to the default branch of a repository, and
Git LFS should support multiple filepaths and/or IDs to unlock with the git-lfs-unlock(1) program at once.
Here are some thoughts about each:
Well, according to the documentation, it doesn't close the connection until that is received. So can't you just assume that when the connection is closed, the files you just pushed are unlocked? You could use that hook server-side to unlock them with authority.
I don't think that this would provide the level of guarantee that one is looking for in this type of functionality, since the connection associated with a push might be closed after the remote _rejected_ the push. In this case, it would _not_ be safe to release the locks.
for _, id := range os.Args[1:] {
	if _, err := unlock(id); err != nil {
		return err
	}
}
This has two problems:
It opens up N connections, where only one should be required, and
If you requested to unlock a lock that is not owned by you (without --force), then _part_ of the locks that you requested to release will be released, and the others will remain locked.
I think that something like this needs a coordination mechanism, wherein the range of locks are unlocked if and only if _all_ of the locks are able to be granted/released at once. For example:
Client gathers and sends the list of all locks that it would like to unlock.
The server accepts/rejects the list wholesale.
Though currently, this would have to be "hacked" together using the verification API. I think something like this change would be better suited to the next breaking version of the locking API, to encode this behavior into one request.
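For illustration only, an all-or-nothing request body for such a batched unlock might look like the following; this endpoint shape and these field names are hypothetical, and nothing like it exists in the current locking API:

```json
{
  "ids": ["2901", "2902", "2903"],
  "force": false,
  "atomic": true
}
```

With `atomic: true`, the server would either release every listed lock or reject the whole list, which is exactly the wholesale accept/reject behavior described above.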
OK, well, let's stick to this issue's original problem, which is needing to unlock after a successful push. Issue 2 is more of a stop-gap solution to issue 1. Are you saying that git-lfs isn't planned to have this ability? If not, is there a development path to that ability?
There are sort of two things going on in this thread. One is the git-lfs developers trying to design this system properly in a future-proof way, and the other is your user base desperately needing a sane workflow. Like I said, some of us can script around the issue for now, but considering the choices for UE4 collaboration seem to basically be Subversion, Perforce, and Git LFS, the world desperately needs a good Git setup. I hadn't even heard the word Subversion in ten years; the original SVN developers themselves came out long ago encouraging people to switch to distributed VCS tools. And with Perforce being heavyweight and way too pricey for small, talented teams, there is a real need for some Git LFS love. The UE4 user base is already huge and growing fast.
Honestly, y'all git-lfs devs should apply for an Unreal Dev Grant and use the money to develop some collaborative git lfs features - I'd bet they'd fund you.
@ttaylorr one thing I thought of is simply checking whether the commit succeeded on a closed connection. Basically, go find the last commit ID: did it match the last one we unlocked? If not, grab all the locked files in that commit for that user and unlock them. Is this not possible?
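A client-side sketch of that check might look like the following; note that the "last one we unlocked" SHA is a value the tooling would have to record itself after each unlock pass, since git-lfs does not store it:

```shell
#!/bin/sh
# Sketch: compare the remote master tip against the last commit we
# unlocked for; if it moved, list the files changed since then.
# $1 = SHA recorded by the tooling after the previous unlock pass.
last_unlocked=$1
head=$(git ls-remote origin refs/heads/master | cut -f1)
if [ "$head" != "$last_unlocked" ]; then
    # Candidate files to unlock (filtering by lock owner is left out).
    git diff --name-only "$last_unlocked" "$head"
fi
```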
Are you saying that git-lfs isn't planned to have this ability? If not, is there a development path to that ability?
Indeed; I am saying that Git LFS _can't_ have this ability because of the limitations on the information that Git sends us currently. In terms of forging a path ahead, this would require upstream changes in Git.
@ttaylorr one thing I thought of is simply checking whether the commit succeeded on a closed connection. Basically, go find the last commit ID: did it match the last one we unlocked? If not, grab all the locked files in that commit for that user and unlock them. Is this not possible?
This is an interesting suggestion. I am wondering if it's possible, since I'm not sure what the behavior should be if the ref moves _again_ between closing the connection and checking for the latest commit on the remote. What do you think would be the best way to handle this case?
@ttaylorr If I am not mistaken, we could have a post-receive hook on the remote server. DigitalOcean's help page says: "This is run on the remote when pushing after the all refs have been updated. It does not take parameters, but receives info through stdin in the form of "\
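For illustration, a server-side post-receive hook along those lines might look like the sketch below. It assumes the server can run `git lfs unlock` with authority over everyone's locks, which the stock client/server split does not provide today; treat it as a thought experiment rather than a working recipe.

```shell
#!/bin/sh
# Hypothetical post-receive hook: for each updated ref, release locks
# on the files changed in the pushed range. post-receive receives
# "<old-sha> <new-sha> <refname>" lines on stdin.
while read -r oldrev newrev refname; do
    [ "$refname" = "refs/heads/master" ] || continue
    git diff --name-only "$oldrev" "$newrev" |
    while read -r path; do
        # --force because the server is releasing other users' locks.
        git lfs unlock --force "$path" || true
    done
done
```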
@mhaimes Not sure if you had any luck with your file locking. I am a game developer and we are looking at migrating all of our source files from Perforce to Git LFS due to its support for file locking, although we use Unity instead of UE4. However, we have the same issue as you regarding unlocking a file.
We haven't yet settled on a new Git repo option, but it's likely going to be GitHub or Git for TFS. Unlike your suggestion of unlocking files automatically, I'd rather it be a manual process, as you often commit or push binary files but wish to continue working on them.
Plus, wouldn't you want the unlock to only happen once the file gets back into your mainline branch, i.e. once a pull request is merged in? That way the correct binary files and source code stay in sync.
I have tried a few scenarios for Git LFS file handling and LFS file locking.
LFS handling of *.rtf files (BIP templates)
Steps taken:
• The "Allow LFS" option was unchecked in Bitbucket and .gitattributes was removed from the branch.
• .rtf files were committed and pushed to the Bitbucket branch, where they are now normal files. (This made the repo size increase.)
• The "Allow LFS" option was checked and ".rtf" files were tracked.
• A .gitattributes file was created to track ".rtf" files, and new .rtf files were added to the branch and pushed to the repository.
• While pushing the .rtf files, even existing .rtf files got committed and pushed, so existing files are handled with LFS along with new files.
Problems identified for LFS files:
Issues for existing files handled with LFS:
• Handling existing .rtf files with LFS does not decrease the repository size; it stays the same, since the file history is already present in Bitbucket.
• Only rewriting the entire history of the files will decrease the size of the repo.
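If rewriting history is acceptable, one way to do it, assuming a reasonably recent Git LFS client, is the built-in migrate command:

```shell
# Rewrite all refs so existing .rtf files are stored in LFS.
# WARNING: this rewrites commit history; collaborators must re-clone.
git lfs migrate import --include="*.rtf" --everything
```

Whether the hosted repository actually shrinks afterwards also depends on the server garbage-collecting the old objects.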
LFS file locking/unlocking:
• Locking a file by one user ensures that another user cannot push changes to that file. (Verified.)
• The file can be unlocked by another user using the force option of the "git lfs unlock" command.
• To edit the unlocked file again, the user has to clone the branch/repo again. (Checked with scenarios in Git.)
Could you please suggest how to implement LFS locking/unlocking so that, after pushing to Bitbucket, the file is editable again?
Expecting your valuable reply! Thanks!
@sarvanguru It is known that unless you rewrite history, the size of the repository doesn't decrease, which is just a fact of life with Git. As for the file unlocking, it's certainly possible for us to mark the file read-write after using git lfs unlock --force, but since that's a separate issue, it would be great if you could open a new issue for that.
@bk2204 Thanks for your reply!
Yes, I would like to know the considerations for LFS file locking: the file should be read/write after unlocking with --force, but I can't seem to achieve that. Could you please provide the steps to make files writable after unlocking with --force? I have described a few scenarios I tried above.
Expecting your valuable reply!
@sarvanguru As mentioned, could you open a new issue for that? That's distinct from this issue, and it would be good to track it independently.
@bk2204 Please let me know where I can raise a new request for LFS file locking help.
@sarvanguru Feel free to open a new issue requesting automatic unlocking after --force at https://github.com/git-lfs/git-lfs/issues/new.
@bk2204 Unless I'm an idiot and reading this wrong, couldn't we use this post-receive hook to handle automated unlocking of files when they're pushed to a remote?
Yes, if your server supports post-receive hooks, you can use that to perform the unlock.
Are you opposed to baking that in as a configurable option for lfs as a whole?
We're open to hearing proposals for this as part of the protocol in a separate PR. We would need to preserve the existing option and provide an opt-in behavior for it, though.
Just so we're on the same page, you're suggesting I open a new issue?
Let me clarify. I think this feature is fine as an opt-in feature and I'm fine with this issue being used to track it.
However, nobody has proposed a design on how this should work via the API and such an option requires server-side support. If you want to propose such a design, please open an issue with the proposal tag and the contents suggested in the contributing documentation. Otherwise, we'll consider writing up a proposal in due course once we think about how it should work.
@bk2204 Awesome! Thanks for the clarification, I'll start working on a proposal after talking to @RoyAwesome a bit more