I was all excited when I heard about multimaster replication for Vault, but it appears that the multiple Vault servers use the same, single database server. It sounds like Replicator is designed more for having a Vault "farm" at one location than for having a replicated Vault at multiple sites.
We'd still have a single point of failure with the database. If the site with the database becomes unavailable, all of the sites lose access to Vault. Would it be feasible to use some type of SQL replication method (transactional, mirroring, log shipping, DoubleTake, Availl, etc.) in order to have a redundant database (even if it's only a passive hot stand-by)?
When one of the Vault web servers is unable to reach the database server for a period of time, and changes (check-ins) are made at other sites during the outage, how does it true up the missing/changed files in its local file store with the new records in the database once it reconnects?
Also, is it at all possible to use merge replication with Vault? I realize that I'd have to assign different identity ranges to each replicated server to prevent replication conflicts, and that the merge setup will add Timestamp and RowGUID columns, but do you foresee any problems with this configuration? If we can use Replicator to replicate the file store in real time (multimaster), and we can use merge replication to replicate the database in real time (also multimaster), it seems that we could get a much more robust multi-site Vault topology. In theory, the file store and the database would stay in sync, we'd get multimaster replication, and each site could share a single Vault with the rest of the sites (they currently have to go over the WAN if the project is at another site).
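To make the identity-range idea concrete, here's a minimal sketch of how non-overlapping per-site ranges prevent identity collisions. The class, site names, and range values are all hypothetical illustrations, not actual merge replication configuration; in SQL Server, the ranges would be assigned to each subscriber by the merge agent.

```python
# Sketch of the identity-range scheme used to avoid merge conflicts:
# each site hands out identity values from its own non-overlapping block,
# so rows inserted independently at different sites can never collide.

class SiteIdAllocator:
    """Hypothetical per-site identity allocator (stand-in for the ranges
    merge replication assigns to each subscriber)."""

    def __init__(self, range_start, range_size):
        self.next_id = range_start
        self.range_end = range_start + range_size

    def new_id(self):
        # A real subscriber would request a fresh range when this one runs out.
        if self.next_id >= self.range_end:
            raise RuntimeError("identity range exhausted; assign a new range")
        value = self.next_id
        self.next_id += 1
        return value

# Hypothetical ranges: site A gets 1-9999, site B gets 10000-19999.
site_a = SiteIdAllocator(1, 9999)
site_b = SiteIdAllocator(10000, 10000)

ids_a = [site_a.new_id() for _ in range(3)]
ids_b = [site_b.new_id() for _ in range(3)]
print(ids_a)  # [1, 2, 3]
print(ids_b)  # [10000, 10001, 10002]
```

The point is simply that inserts made at disconnected sites stay conflict-free on the identity column, so the merge agent only has to reconcile genuine data changes when the sites resynchronize.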
Thank you very much for the quick response and the level of detail already provided.
Bill Daly
www.millerlegg.com