Core: ownCloud server synchronisation (mirrored/redundant serving) [$275]

Created on 15 Jan 2013 · 69 comments · Source: owncloud/core

It says on Wikipedia that synchronization between different ownCloud servers is being worked on and not supported as of now. I am actually surprised this isn't implemented yet, as this feature would justify the term cloud (imo). Any status on that?

In my case I have two different LANs linked via VPN. However, bandwidth between them is horrible, so in each office runs a NAS that rsyncs every n minutes. Now, this is working for two servers but is not satisfying in terms of revision control and adding more servers in the future. Lately we have been testing csync2 (http://oss.linbit.com/csync2/), which keeps track of file changes (SQLite) and allows multiple hosts; nevertheless, it hasn't got that 'Dropbox feeling', because you have to be logged in to the network via VPN in order to remotely access your files.
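
For reference, a minimal sketch of such a csync2 setup (hostnames, key path and data directory are illustrative assumptions, not taken from this thread; csync2 must also be listening on each host, e.g. via xinetd):

    # /etc/csync2.cfg -- minimal two-NAS group (names are made up)
    group office_nas
    {
        host nas-office1 nas-office2;
        key /etc/csync2.key_office;   # pre-shared key, generated with: csync2 -k
        include /srv/shared;
        exclude *.tmp;
        auto younger;                 # on conflict, keep the more recently modified copy
    }

    # cron entry on each host: check and push changes every 5 minutes
    */5 * * * * root csync2 -x

The `auto younger` directive is what gives csync2 a defined conflict policy, which plain rsync lacks.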

I would not be here if I were happy with the current configuration, so I am hoping to find some people to help work out what it would take to implement such a feature, so that ownCloud servers (a)sync between each other to provide a redundant ownCloud service for you and your friends.

Labels: bounty, enhancement, STALE

All 69 comments

This is indeed a feature that we want to have in the future. Unfortunately it's not there yet. But everybody is welcome to contribute.

Hey karlitschek!
Do you know when this feature will be implemented? It would be really great and finally make this a full Dropbox alternative!!

Cheers
delusho

PS: Thanks for all the great work so far!

@delusho Unfortunately no one stepped up so far and implemented this.

How much work do you think it would be to do this?

Surely this is already available.

Use a database for settings and a clustered filesystem such as GFS on RHEL on the backend. If you are doing a large installation then there is probably a single sign on service such as Active Directory or Directory Server for authentication.

Kind regards
Xander

Hey Xander,

I do not really understand, and I think I cannot change the filesystem on my ownCloud server. I think it would be great if you could copy/paste some sort of secret key (composed of password and address) from one server config into the other to establish server syncing.

If I knew how to do it, I would do it myself. However, I can be a tester. It would be great!

Same here. Any progress on this?

That's a good case for a bounty! I have just donated $5 and encourage everybody to do the same. This is for sure a very complex feature but also what most owncloud users would love to have.

https://www.bountysource.com/issues/905996

Good point, I added a bounty too.

Are the files stored in a database? If so, can the database be moved to a MongoDB or CouchDB database and then have the database do the replication for us?

Depending on the setup, distributed HTTP serving might be of interest for mirrored servers. This could significantly speed up access to two servers when each has its own uplink. It bundles the uplinks of several home servers. Thus it reduces the uplink bottleneck, possibly making home servers as fast as servers hosted in a data center.
Note: AFAICS distributed HTTP is dependent on this feature, but there should be no need to program the feature in a specific way to later allow distributed HTTP to be added.

This is definitely a good feature for ownCloud. I also put $5 at https://www.bountysource.com/issues/905996 ; I hope everybody who wants this feature sends a little donation.

What's the status on this?

this will be extremely helpful for our company too.
we have several locations and we want a lot of documentation synced between OC servers so users access it over the LAN only.

is this still on the table?

So this is dead then, no dev interested? Would it not be possible to implement something like the desktop client as a server-to-server sync app?

In the meantime, this may assist those with the need to sync between various clouds: https://multcloud.com

Though by far not a sync, one way to at least get the files from the source OC to the target OC initially is to add the source OC as external storage on the target OC and then utilize the move/copy app (https://apps.owncloud.com/content/show.php/Files+move?content=150271) to copy the files from the source OC to the target OC.

Another downside of this method, aside from being a manual transfer rather than an automated sync, is that the modification time from the source OC is not maintained at the target OC...

Wondering whether perhaps this Sync API (https://www.getsync.com/api) could be useful and developed into a server-side sync app for syncing between various OC instances?

As a follow-up to the previous: the move/copy app (https://apps.owncloud.com/content/show.php/Files+move?content=150271) did not pan out in connection with server-side encryption, just producing gibberish on move/copy.

I hope this feature will be implemented some time soon :)

I think this already exists.
Obviously you can just use a web cluster as the front end and a MySQL cluster as the back end.
Even for the files, it's not that hard to write a program, or use something that already exists, to keep the data folder updated with the other server.

Synchronising files between oC servers in a cluster is something that should be handled at the filesystem level, by something like Ceph or Gluster FS. If ownCloud were to synchronise files, it adds another level of complexity (which can easily go wrong), completely different end points (we can't use existing WebDAV since that touches the database), and mtimes will be different for all files between the servers. Also, there's always the issue of file update conflicts. All of this is already solved by the above mentioned filesystems.
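
For readers who want to go that route, here is a rough sketch of a two-node replicated GlusterFS volume backing the ownCloud data directory (hostnames and brick paths are assumptions for illustration):

    # Run once on either node, after `gluster peer probe <other-node>`:
    gluster volume create oc-data replica 2 \
        node1:/bricks/oc-data node2:/bricks/oc-data
    gluster volume start oc-data

    # On every ownCloud web server, mount the replicated volume as the data directory:
    mount -t glusterfs node1:/oc-data /var/www/owncloud/data

Because the filesystem does the replication, ownCloud itself stays completely unaware of the second node.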

@karlitschek @DeepDiver1975 Can we close this?

I think there's more than just file synchronization: server configuration, users, credentials, database.

and discarding something just because of its potential complexity doesn't seem to be a solid reason.
if OC aims for corporate use it needs to achieve some sort of HA/redundancy feature. for example, replicating only a subset of shares to a specialized department server.

@muzzol Server configuration must be transferred manually, since before being configured ownCloud would have no idea what the other servers are. All others are stored in the database, which all the servers in the cluster connect to, so that data is already shared.

Of course you need to do a minimal first configuration; as in any other HA system you have to join the cluster/manager/master machine. But this is just a one-time step, and from that point you just manage one instance that controls everything.
I'm not quite sure why I'm explaining this; are you sure you understand the request in the first place?
This is very common in corporate (meaning big deployment) scenarios.

Also, Ceph and GlusterFS recommend high-speed connections between the different nodes that make up the HA cluster.

A potential use case of synced owncloud servers is to have a server on the lan which makes it possible to quickly sync file revisions and always have an up to date onsite backup. This owncloud instance would then slowly sync to an offsite location over a slow uplink connection. The same story applies to this location, users there would be able to quickly sync with the local server and slowly their files are synced to the other server.

I'm just trying to make clear that Ceph or Gluster FS is probably not a viable solution in all cases and network environments.

This owncloud instance would then slowly sync to an offsite location over a slow uplink connection.

Theoretically there is GlusterFS geo-replication for such a task; however, I guess it will not play well for ownCloud synchronization with a remote server (like @Jip-Hop, I assume a slow uplink and delays here), since it will most likely lag behind the database synchronization process (obviously, because transferring files to the remote server takes more time than database synchronization).
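
For completeness, this is roughly how a GlusterFS geo-replication session is created (volume and host names are illustrative); note that it is asynchronous and one-way, which is exactly why it can lag behind a synchronously replicated database:

    # Master volume at the main site, slave volume at the remote site (names made up)
    gluster volume geo-replication oc-data remotehost::oc-data-slave create push-pem
    gluster volume geo-replication oc-data remotehost::oc-data-slave start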

@Xenopathic thoughts ?

@RussianNeuroMancer My thoughts are that ownCloud will never be able to do file replication between servers as well as existing solutions such as Ceph, Gluster FS or a variety of other filesystems and synchronisation solutions. The same applies for database replication, where solutions such as the Galera cluster are far more effective than any PHP-based system has the potential to be. Performing replication across servers is therefore out of scope for ownCloud.

But clearly I don't understand the request in the first place, as having PHP do file and database replication to other servers is apparently normal for corporate environments.

Is there any update on this feature? It would be really great to have it!

My use case is quite simple. I have two oC installations: one at home (on a NAS) and one at work (on my desktop PC), giving me full access to both of them. Now I would like to sync both installations to have access to my home data at work and vice versa. The best case would be if portable devices could recognise which network they are connected to and choose the appropriate oC server to connect to. But it would be enough if every portable device had a fixed oC server to connect to, as long as both of them are synchronised. Also, I do not need continuous sync; it would be OK if sync were run on a schedule using e.g. cron.

Cupora: what you want can be done with clustering tech; that's pretty much out of scope for ownCloud. But for non-expert users it could be interesting, of course - and if somebody wants to work on it, they can. That is why this issue is still open.

The best use case I can think of for syncing actual data between servers is backup. That, too, would be a layer violation, in the sense that there are excellent backup tools out there. But there is also a project which might implement this: https://github.com/pbek/ownbackup/issues/3 Feel free to support him ;-)

I disagree: precisely because of the nature of this project, syncing should be done internally and not with any external tool.

pure data is what you back up (images, movies, documents), but every other aspect of replication should be done by OC itself.

@muzzol it isn't so much about 'external tool' as 'underlying platform'. ownCloud runs on a database and linux distribution with filesystem etc - these come with backup and replication capabilities and using those is the best performing and most robust solution. For companies, nothing else is an option as they want to use one backup solution and one replication solution for all the apps they have, and ownCloud is usually just one of those apps.

However, it requires quite some knowledge about the underlying platform, which is why it is nice there are solutions like ownBackup for home users.

I'd love to have a OC server to OC server sync because of this situation:

  • OC server 1 at hosting provider (reachable from the Internet)
  • OC server 2 at home (behind not always-on slooooooow DSL line)

OC server at home should be able to sync when it's online (or scheduled) some content from the OC server at hosting provider.

Result: fast access to OC server 1 when on the road, very fast access to OC server 2 when at home.

Please make this happen :-)
Cheers,
Gebhard

Nice design. Pretty much the concept of memory :D

rsync?

I'd really like to have this as a feature in OC because IMHO it fits into the concept if we are talking about distributed architecture.

Of course for a temp. workaround this may be a candidate :-)

as I said before, any network project with wide-use goals needs an HA feature.

sure, some dirty workarounds can work for a few people, but we're not talking about just syncing the data but all user and configuration information.

even with data-only synchronization you need some mechanism to resolve conflicts (same file updated on more than one pool) and reconciliation (which one do you keep/discard).

you can take a look at the unison manual, for example: http://www.cis.upenn.edu/~bcpierce/unison/download/releases/stable/unison-manual.html#conflicts
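
As an illustration of such a mechanism, unison lets you state the reconciliation policy explicitly; a hedged example, assuming two pools at made-up paths:

    # Two-way sync between a local pool and a remote one over SSH.
    # -batch: run non-interactively; -prefer newer: on conflict keep the newer replica.
    unison /srv/oc-pool ssh://other-nas//srv/oc-pool -batch -prefer newer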

I would like to point out that different users have different needs. One group needs just an ownCloud app that behaves like the ownCloud desktop client, syncing files between a remote server and a local server. The other group needs a solution with complete server replication (all files, user accounts, lock state, etc.).

I can understand why developers recommend Ceph or GlusterFS to the second group of users, and it does fit some scenarios. But in fact such a solution doesn't fit all multiple-node deployment scenarios. For example, it doesn't fit a main office with a fast Internet connection and branches with slow Internet connections but with local servers that could host local ownCloud nodes syncing branch data with the nodes in the main office. Not all data should be synced in this case: branches don't need the data of all the other branches, they need just their own data and some data from the main office. Local nodes at the branches are also a requirement, due to the slow connection to the nodes at the main office.

But recommending that the first group of users deploy Ceph or GlusterFS is, IMO, a failure to understand the needs of the users who request this feature.

Well, users do have different needs, yes, but I would also say what muzzol stated above:
"as I said before, any network project with wide-use goals needs an HA feature."

Is Ceph good for syncing over the WAN? Personally I would not prefer to use Ceph for that purpose.
Maybe GlusterFS...

I could pay some money for this feature, if it is ever implemented.

For my asynchronous usage scenario I would not use a cluster filesystem or a low-level sync tool ... here only an application-level sync makes sense IMHO, because there will be trouble (slow / canceled transfers) and this has to be taken care of by the application, because this is the point where all the information is ... just my 2 ct again :)

As was said: for companies, ownCloud does HA just fine, see https://doc.owncloud.org/server/8.2/admin_manual/installation/deployment_recommendations.html

For home users, yes, this would be nice to have but then somebody has to feel like writing it. See https://www.bountysource.com/issues/905996 - there's somebody willing to write it, provided we bring together Eur 3000. IMZ is being awesome here, by the way, this wouldn't make for a good salary but I guess he could buy a new laptop ;-)

I would also really like to have this option. Just donated $20.

For my usage scenario, I would like to see following working:

  1. OC Server at home with ALL data and SLOW DSL line (always on)
  2. OC Server at provider with very LIMITED DISK SPACE but GREAT NETWORK BANDWIDTH

On the road, every user should connect primarily to the 2nd OC server at the provider to sync all files that need syncing; but when necessary, the 2nd OC server has to fetch the file from the 1st OC server (the 2nd OC server doesn't always have all files because of the limited space).

Usage scenario:

  • Would really love to see this issue made a reality, as I have a somewhat unreliable dedicated server (uptime-wise, as they are able to shut it off whenever they want if there's an issue), and would like to be able to keep my home-hosted server in sync as a backup server for my users.
  • Ideally I would like to be able to mask them so that the user connects to one IP and then gets redirected to the first IP, which, if it fails, redirects to the second, though that would likely require a 100% uptime third server.

The first step in the plan is for the ability to even sync the servers, the rest is just cake on top!

This is a really old thread (3 yrs). There are many use cases that can likely be solved by newer features like federation or syncing to multiple accounts/servers with the client. What other use cases currently don't have a solution?

The core issues of the original asks and the referenced asks aren't addressed.

They're more along the lines of server syncing.

(so that if a server goes down, you can use the backup server, and when the other server comes back up, it autosyncs back the missed content and any new users/changes, etc.)

Most important would be not needing to use a special type of filesystem just for this, as that's not practical...

The secret key idea (or something similar that would allow them to sync easily) that @delusho suggested is still top of my mind, though that is likely much harder to implement than it sounds.

I haven't found a real good workaround for my scenario yet (syncing two OC servers using a client installation is unsatisfying).
Situation:

  • OC server 1 at hosting provider (reachable from the Internet)
  • OC server 2 at home (behind a not always-on slooooooow dial-up line)

OC server at home should be able to sync when it's online (or scheduled) some content from the OC server at hosting provider.

Result: fast access to OC server 1 when on the road, very fast access to OC server 2 when at home.

replication between servers (file level, permissions, database, etc.)
client supports failover: if one of the servers goes down, it automatically uploads to the one that is still up.

Though it just hit me:
is GlusterFS an alternative together with Galera Cluster?

@Kyrluckechuck @jonathanselea This is high availability (HA) and because ownCloud sits very cleanly within the application layer this is already fully supported and comparatively simple to implement. To achieve HA each layer needs to be highly available. For web servers this is simple because PHP applications are stateless. You need to have the same code on all web servers and you need a load balancer. You need sticky sessions unless you can have the PHP sessions stored centrally like in a Redis cache (in which case that layer needs to be HA as well - Redis Sentinel is your friend there).
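
As a concrete illustration of the central-session variant (the Redis host name is an assumption, and the phpredis extension is required), the PHP and ownCloud sides would look roughly like this:

    ; php.ini -- keep PHP sessions in a central Redis instead of using sticky sessions
    session.save_handler = redis
    session.save_path = "tcp://redis.internal:6379"

    // owncloud config.php -- point ownCloud's cache and locking at the same Redis
    'memcache.local'   => '\OC\Memcache\Redis',
    'memcache.locking' => '\OC\Memcache\Redis',
    'redis' => ['host' => 'redis.internal', 'port' => 6379],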

At the DB layer, Galera is the best approach in my opinion. If we wanted ownCloud to handle this task, it would require the config file to have an array of database connection details and every single write (UPDATE, INSERT, DELETE, etc) query would need to loop through that array. What would happen if a given database was offline? ownCloud would possibly sit there until a timeout expires or some other condition was met. Now how does that missing write ever get back into that database when it comes online? ownCloud, as a PHP application, is stateless. So the only way for it to detect that a given database was out of sync would be to compare every single row in every table every time any single operation is performed. And then having determined the differences, which database is correct? ownCloud has only existed for a few nanoseconds? Should it add rows missing from one database or delete them in the other? It has no idea which one is the database of record.

Enter Galera. ownCloud can read and write from any database node and Galera handles getting any write queries to all the other nodes. If a node fails, is rebooted, has MySQL restarted for a config change, Galera makes sure that when it's back up it gets in sync with the other nodes. All automatically. Add redundant load balancers between the application servers and database servers and you can have the application servers point to a vIP, reboot database servers after kernel updates, and so on with no application downtime. A pretty simple architecture.
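
For reference, the per-node Galera configuration is small; a minimal sketch, with made-up node addresses and cluster name:

    # /etc/mysql/conf.d/galera.cnf -- one node of a three-node cluster (addresses illustrative)
    [mysqld]
    binlog_format            = ROW        # Galera requires row-based replication
    default_storage_engine   = InnoDB
    innodb_autoinc_lock_mode = 2
    wsrep_on                 = ON
    wsrep_provider           = /usr/lib/galera/libgalera_smm.so
    wsrep_cluster_name       = oc_cluster
    wsrep_cluster_address    = gcomm://10.0.0.11,10.0.0.12,10.0.0.13
    wsrep_node_address       = 10.0.0.11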

Very similar problems exist at the file system level except the data is much, much, much bigger so slow connections are even more susceptible to problems. GlusterFS offers block level de-duplication and compression to minimize the amount of data that needs to be transferred. I believe Ceph offers similar features but I've not researched it. A far, far more elegant solution than trying to have PHP write every file to a local directory on every web server. It doesn't have the "database of record" issue that you have with MySQL because it can look at the filecache to determine which files should be deleted or copied when a web server comes online. But that all needs to be checked at every instantiation also. Every file needs to be checked on all web servers every single time the application comes to life because it's stateless, and then any conflicts need to be resolved and files transferred between web servers. Then you can get a css file, now onto that first js file. Let's check all the files again because PHP applications are goldfish.

The filesystem stuff doesn't necessarily need to be done by GlusterFS or Ceph. The only requirement is that all application servers are accessing the same file storage. They could be mounting a SAN volume or a file share on another highly available storage solution which is what we do. We have an SMB share mounted on all our web servers for ownCloud's local storage. We also archive apache and ownCloud logs there and a few other things related to this application. So long as users directories and upload caches are available to all servers (so that the web server that gets the last chunk of an upload has access to all the other chunks received by other web servers) then ownCloud will work just fine. Because it is wholly within the application layer.
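
For what it's worth, the shared-SMB approach boils down to one fstab line on every web server (server, share name and credentials file are illustrative):

    # /etc/fstab -- the same share mounted on every ownCloud web server
    //fileserver/oc-data  /var/www/owncloud/data  cifs  credentials=/etc/oc-smb.cred,uid=www-data,gid=www-data,_netdev  0  0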

@gebhard73 There is no solution I can come up with to running a server system of any kind over a dial up line, let alone an intermittent one. If you think accessing a few web pages or uploading/downloading the individual files you want is slow, imagine syncing the entire data directory and database over that slow connection. Just comparing all files in the data directory for a system of reasonable size would take a day or more.

But there is a solution for you to have immediate access to all your files when you're behind that dial up line. Sync your content to your laptop when you're connected to a faster internet connection. And share api calls will be simple enough to do over dial up so you can still manage shares to other people or create public links directly from your desktop. Or you can leave the sync client paused while on the dial up line and start it up again when on a faster connection and have your computer sync up all changes automatically.

@Kyrluckechuck @jonathanselea This is high availability (HA) and because ownCloud sits very
cleanly within the application layer this is already fully supported and comparatively simple to
implement...

just the time you took to describe it somewhat contradicts your own sentence.

maybe at an enterprise level with lots of resources and infrastructure you can consider current solutions 'simple', but that applies to 99% of server software.
what the majority of people are asking for here is a low-to-mid tier solution that requires near-zero effort to install and maintain, even if it is an approach that leaves some cases uncovered.

@muzzol I couldn't agree more.

What @scolebrook wrote is a great solution, no doubt about it! But, it's not a practical solution for anyone but enterprises or those with enterprise-grade hardware/setups & the know-how!

I understand this isn't a huge priority, as it's not a huge gain for how much work it may take (?), but it's a very practical ask for a lot of users who don't have such setups and would need an ownCloud module (or an extension that has been given the thumbs up from the main devs) to solve the problem at hand.

Well, I agree, for an IT specialist the sync is easy to set up. But for most of us, the ownCloud installation is already a challenge ;)

What I do is use BitTorrent Sync to sync files between 2 NAS and 3 laptop/desktop computers. That allows me to always access the files I need locally and fast (say you save a large file in the office: it syncs to the home NAS while I commute home, and once I open my home PC it's copied from the home NAS in a few seconds).

Then I have an ownCloud instance installed on the office NAS for the cases where I need to access files on the road or from mobile or...

I know this is a dirty workaround and I would love to have the files only on the servers instead of syncing local files, but this method has worked for me for years. It's actually super to know your files are synced (and backed up!!!) even on the road (when online). It's also brilliant for sharing files when employees have access to certain folders on the NAS.

This kind of easy-to-install sync is what I'm really missing in ownCloud.

Or maybe somebody can write a great tutorial?

I fully agree with the comments above - there may be wonderful clustering solutions out there which fit an enterprise situation (with focus on reliability, speed, consistency, HA, ...). But IMHO what we are really missing is a simple way to sync between OC instances independent of their location: a way to sync files at the application level (push or pull) without high speed or real time ... just on a best-effort basis when the counterpart is reachable. Something like a small OC client plugin for a server.
Just my 2 ct.
(I know how to set up and operate enterprise software, but this IMHO doesn't make sense for the SoHo scenarios we're talking about here.)

@gebhard73: +1 something like an "add/connect server" button would be absolutely fantastic.

IMHO you can already set up different servers and use BitTorrent Sync to sync the data today, but not user access, etc. So if you have a few users, one could set up 2-3 servers manually and sync data with BitTorrent Sync. But that would be essentially 3 independent OC installations and not a smart cluster (or whatever that would be called).

In my personal opinion, OC has only limited use if there is no sync between different locations. It is not convenient to switch on your PC in a different physical location and wait for it to connect to OC and start downloading new files via the internet. AFAIK, OC also does not support P2P: so everything has to be downloaded (and uploaded!) from the one and only server/NAS, while you might have 2 or more laptops/PCs with the data available in the same LAN.
In this kind of scenario BitTorrent Sync is amazing (btw: I am not in any way affiliated with them). Using 1-2 "buffer servers" (= BT Sync clients on NAS which are always on) keeps everything synced. Very smooth, super easy to set up.

OC has other great features BitTorrent Sync does not have, and being able to replicate the sync process with OC, too, would be truly awesome.

@maddhin
How well does BTsync work with two OC installations?
Adding files into OC "the wrong way" also requires some extra steps.

@jonathanselea: well, somebody has to test this properly. I'm only using one OC installation and - frankly - do not access the files through OC very often. I think there is some lag, as files only appear in OC once OC has run a file scan (or something like that), but other than that it seems to work. I'm actually not using the OC client for sync (for the reasons above: too slow).

I think I just pointed BT Sync at the files subfolder. One needs to do a bit of fiddling with the permissions.
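
Concretely, the fiddling amounts to making the synced files owned by the web server user and then telling ownCloud to rescan, since files added behind its back are not in the filecache yet (user name and paths here are illustrative):

    # Debian/Ubuntu web server user is www-data; adjust for your distribution
    chown -R www-data:www-data /var/www/owncloud/data/alice/files

    # Make ownCloud pick up files that were added outside of WebDAV/the sync clients
    sudo -u www-data php /var/www/owncloud/occ files:scan alice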

The whole question is rather philosophical: BT Sync is no cloud per se, but OC is. BT Sync is old school, if you like. But on a practical level, syncing the files you need locally fast beats the cloud. It's efficient, as the time it needs to sync a remote server/client passes during the commute or simply while you are still working. When home/at the office, you get the file from the local server at LAN speed and can continue working on your 200MB presentation or whatever (AFAIK BT Sync also only syncs the part of the file that changed, not the complete file!).
Having your own cloud is great (privacy, etc.) but if my work efficiency goes down, I'm conservative ;)

Well, I was wondering if OC allowed any kind of duplication out of the box. Looks like it doesn't. However, arguing against it is nonsense. Having to learn and set up heavy external tools in 2016 is a painful waste of time for small companies like ours. Period.
(Still, OC is an awesome project =) )

Any news here on this feature-request?

Server Replication vs LAN Sync vs Scaling Hosting

I have pretty much the same scenario as @gebhard73: slow internet connection at home + road warrior. However, I think LAN Sync may be the thing most people reading this thread are looking for. Thus I'd like to describe the 3 scenarios regarding sync, since I think this is not yet clear to everybody:

LAN Sync

The idea behind LAN Sync is a peer-to-peer sync similar to BitTorrent Sync. Each client can sync the data directly to another client, without having to go to the master server. This not only takes load off the server and saves internet bandwidth, it even allows offline sync between two clients.

Dropbox has a really strong implementation of LAN Sync, since it takes a lot of load off their servers. And it works really well and fast on LANs. This is why I love Dropbox and avoided ownCloud so far!

Compared to BitTorrent Sync, LAN Sync has the advantage of a master as a single point of truth for data and rights, and as a directly accessible master server if no peer is available.

E.g. LAN Sync scenario:

  • Primary ownCloud server accessible from the internet (e.g. cloud) (single point of truth, the one we connect to)
  • Local "client-server" which runs on a NAS. This is not really a server, though! It's just one client, like every other, just running continuously in the intranet LAN. It syncs all (or a subset of) the data from the primary master. It can in addition share the data on a network drive for local access (e.g. in a company)
  • Road warriors sync to the nearest client that has the data

There is an open issue on LAN Sync which somehow seems to have died :( I think this is really the feature we all need.

Scaling owncloud across multiple machines.

This is described at Scaling Multiple Machines. In my mind it is interesting if you need really fault-tolerant, highly available, scalable hosting of ownCloud for large-scale enterprises. However, in most cases this means a single point of truth at one location, and it is too complicated for a lot of users, especially small and medium businesses.

Multi Master to Master Sync

Considering LAN Sync, master-to-master sync in my mind is only a high-availability issue. Master-to-master sync in my mind is the feature small and medium businesses need for their availability, since it is simpler to set up than scalable hosting, with each ownCloud server having simple local storage. This gives the following use cases:

  • primary server and a backup server (failover)
  • multi-location servers, with each being a replica (and backup) for the other.
  • cloud server + local replica (probably better and more simply solved with LAN Sync in this case)

However, I suppose both servers would have to be in the cloud, though.

I use csync2 to keep several folders in sync across a few hosts, and then mount those folders as "external storage." It seems to work OK, but it is not 100% optimal.
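
For anyone reproducing this, the external-storage mount can also be created from the command line; a sketch assuming a csync2-replicated folder at a made-up path (the exact occ syntax may differ between OC versions):

    # Mount a locally replicated folder as "Local" external storage (OC 9+ style syntax)
    sudo -u www-data php occ files_external:create /Shared local null::null -c datadir=/srv/csync-share
    sudo -u www-data php occ files_external:list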

I was thinking about a different workaround: how about using hooks to call rsync or rsnapshot to sync the files between multiple hosts? Does anybody have ideas in this direction?
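
In the absence of real hook support, the crudest version of this workaround would be a cron-driven one-way rsync plus a rescan on the target; a sketch with made-up hosts and paths (note it handles neither conflicts nor the database, so it only fits a read-only mirror):

    # /etc/cron.d/oc-mirror -- one-way mirror every 15 minutes (illustrative only)
    */15 * * * * www-data rsync -az --delete /var/www/owncloud/data/ backup-host:/var/www/owncloud/data/ && ssh backup-host 'php /var/www/owncloud/occ files:scan --all'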

You may want to check into csync2 if you want to handle synchronization at the filesystem level. It's designed to synchronize a cluster with an arbitrary number of nodes.
https://github.com/LINBIT/csync2

@SpiraMirabilis Thanks for the reference, but I haven't checked in detail to figure out whether this can solve a case where the main server faces the internet with a static public IP and another local server has a dynamic public IP behind NAT. A VPN might work here, but I will have to test the setup (which I really doubt is the best solution).

You can use a hostname, so that should not even be an issue.

host reachability shouldn't be ownCloud's job. most multi-node solutions perform a "check connection" test during installation, and after that it is the admin's job to maintain a good network setup.

How can I top-up the bounty?

Bountysource decided to update their Terms of Service:

2.13 Bounty Time-Out.
If no Solution is accepted within two years after a Bounty is posted, then the Bounty will be withdrawn and the amount posted for the Bounty will be retained by Bountysource. For Bounties posted before June 30, 2018, the Backer may redeploy their Bounty to a new Issue by contacting [email protected] before July 1, 2020. If the Backer does not redeploy their Bounty by the deadline, the Bounty will be withdrawn and the amount posted for the Bounty will be retained by Bountysource.

the Bounty will be withdrawn and the amount posted for the Bounty will be retained by Bountysource.

so bountysource will get all the money? Suxxers - time to leave then ....
