Discussion:
GFS2 support for EMC storage SRDF?
yu song
2011-12-12 03:29:39 UTC
Permalink
Hi GFS2 gurus,

I am planning to set up two 2-node clusters in our environment, using EMC storage to build clustered shared filesystems (GFS2).

PROD: 2 nodes (cluster 1)

DR: 2 nodes (cluster 2)

as shown below:

PROD
Cluster 1 shared LUNs for PROD (node1, node2)
· 1 x 100G = Tier 1 (R1)
· 1 x 200G = Tier 2 (R1)
· 1 x 200G = Tier 3 (R1)

DR
Cluster 2 shared LUNs for DR (node1, node2)
· 1 x 100G = Tier 1 (R2)
· 1 x 200G = Tier 2 (R2)
· 1 x 200G = Tier 3 (R2)


My question is: does GFS2 support SRDF? Looking at the KB on the Red Hat site, it only says that GFS2 does not support asynchronous or active/passive array-based replication, but it seems that this does not apply to SRDF.

If anyone has done this before, I would appreciate any ideas.

cheers!

Yu
Fajar A. Nugraha
2011-12-12 04:03:10 UTC
Permalink
My question is: does GFS2 support SRDF? Looking at the KB on the Red Hat site, it only says that GFS2 does not support asynchronous or active/passive array-based replication, but it seems that this does not apply to SRDF.
And why would you conclude it does not apply to SRDF? AFAIK it's just another form of array replication; nothing really special about it.

Also, "not supported" does not necessarily mean it won't work. It might mean "it works, but performance will be horrible", or it might mean "it might work, but if anything goes wrong, don't ask us for help".
--
Fajar
Jankowski, Chris
2011-12-12 04:37:02 UTC
Permalink
Yu,

GFS2, like any other filesystem being replicated, is not aware of the block replication taking place in the storage layer. It is entirely transparent to the OS and to filesystems, clustered or not. Replication happens entirely in the array/SAN layer; the servers are not involved at all.

So, there is nothing for Red Hat to support or not support - they simply do not see it, nor do they have any ability to see it even if they wanted to. Very often the replication links use dedicated array ports in separate FC zones.

Storage replication may have some performance impact, but to the host this just looks like slower disks. GFS2 does not have any specific numerical requirements for I/O rate, bandwidth or latency.

Could you quote the Red Hat KB article - what exactly does it say, and in what context?

Regards,

Chris Jankowski



Steven Whitehouse
2011-12-12 17:03:01 UTC
Permalink
Hi,
Post by Jankowski, Chris
Yu,
GFS2 or any other filesystems being replicated are not aware at all of
the block replication taking place in the storage layer. This is
entirely transparent to the OS and filesystems clustered or not.
Replication happens entirely in the array/SAN layer and the servers
are not involved at all.
That is true if the cluster is contained within a single physical
location. If, for example, the plan is a split-site implementation,
then the inter-site interconnect is added into the equation too.
I'm not sure which is being proposed here.
Post by Jankowski, Chris
So, there is nothing for Red Hat to support or not support – they just
do not see it. Nor do they have any ability to see it even if they
wanted. Very often the array ports for replication are on separate
ports and in separate FC zones.
It is probably just a case of not supporting what we do not test. I'm
also wondering what the use case would be if both arrays were on the
same site, mirroring the same filesystem.

If they are on different sites, then knowing which end is active becomes
a problem. If both ends become active, even for a short time, there
is no way to merge the two filesystems together again later on.
Post by Jankowski, Chris
Storage replication may have some performance impact, but this just
looks like slower disks. GFS2 does not have any specific numerical
requirements for IO rate, bandwidth and latency.
True to a certain extent, but if things get too slow then obviously the
setup is not going to meet a reasonable expectation of performance. Disk
latency has a big effect on how quickly cached data can be migrated
between nodes.

Also, a guarantee of reasonable network bandwidth and latency is a
requirement for corosync, and thus for all the services running over it,
such as fencing. So there are some issues which need to be addressed in
order to ensure that everything works as intended.
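As an illustration of the kind of tuning this implies, on a cman-based cluster of that era the corosync token timeout could be raised in cluster.conf to tolerate extra link latency. This is only a hedged sketch: the cluster name and the 54000 ms value are hypothetical placeholders, not recommendations, and a real cluster.conf needs the full clusternodes/fencing sections as well.

```
<?xml version="1.0"?>
<cluster name="prod_cluster" config_version="2">
  <!-- raise the corosync token timeout (milliseconds) for slower links -->
  <totem token="54000"/>
  <!-- clusternodes, fencedevices, rm sections omitted from this fragment -->
</cluster>
```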

Steve.
--
Linux-cluster mailing list
https://www.redhat.com/mailman/listinfo/linux-cluster
Bryn M. Reeves
2011-12-13 14:26:25 UTC
Permalink
Post by Jankowski, Chris
GFS2 or any other filesystems being replicated are not aware at all
of the block replication taking place in the storage layer. This is
entirely transparent to the OS and filesystems clustered or not.
Replication happens entirely in the array/SAN layer and the servers
are not involved at all.
That's not strictly true. While it is the case that normal I/O issued on
the active (R1 in SRDF-speak) side of the replicated volume needs no
special handling, things get a bit more complex when the SRDF pair state
changes (e.g. splits, failovers, partitions).

SRDF pair states govern which side of the replicated volume is writable
and a change in state may mean a device abruptly becomes write-disabled.

There's currently no mechanism for handling these changes automatically
on the host side so there is a need for manual intervention (or clever
scripting) whenever such a change occurs.
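As a hedged sketch of what such host-side handling might look like: when an SRDF state change write-disables a device, the kernel's read-only flag on the device can be polled with blockdev(8). The device path and the reaction taken are hypothetical placeholders, not a tested procedure.

```shell
#!/bin/sh
# Sketch only: detect that a replicated LUN has abruptly become
# write-disabled (as can happen on an SRDF pair-state change).

check_writable() {
    dev="$1"
    # blockdev --getro prints 1 if the kernel marks the device read-only
    if [ "$(blockdev --getro "$dev")" = "1" ]; then
        echo "WARNING: $dev is write-disabled - possible SRDF state change"
        return 1
    fi
    return 0
}

# Example (hypothetical device and action):
#   check_writable /dev/mapper/mpatha || stop_cluster_services
```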
Post by Jankowski, Chris
So, there is nothing for Red Hat to support or not support - they
just do not see it. Nor do they have any ability to see it even if
they wanted. Very often the array ports for replication are on
separate ports and in separate FC zones.
SRDF is normally used between two or more Symmetrix arrays (often, but
not always, in physically separate locations). Replication traffic
travels over a dedicated inter-array link via a remote link director (a
specialised Symmetrix channel adapter) rather than the regular front-end
director ports of the array.

In a typical deployment the R2 (remote/slave) side of the replicated
volume is not reachable from the same fabric as the R1 (master).

This means additional considerations apply in a disaster-recovery
scenario, since any backup hosts on the R2 side will need to be
configured to cope with the fact that LUNs and ports will have different
WWIDs than on the R1 site (this may cause problems for some multipath
boot implementations, for example).
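One way to soften that WWID mismatch is to pin a stable per-site alias in /etc/multipath.conf, so that scripts and fstab entries reference the same name on both sides. A sketch only - the WWID and alias below are made up, and each site's hosts would bind the alias to their own local WWID:

```
# /etc/multipath.conf fragment on the R1-site hosts (WWID is a placeholder);
# the R2-site hosts carry the same alias bound to their own local WWID.
multipaths {
    multipath {
        wwid  360000970000192601111533030334546
        alias tier1_data
    }
}
```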

Regards,
Bryn.
Bryn M. Reeves
2011-12-13 14:43:10 UTC
Permalink
Post by yu song
My question is that GFS2 supports SRDF ?? looking at KB in redhat site, it
only says that GFS2 does not support using asynchronous or active/passive
array based replication. but it seems like does not apply for SRDF.
SRDF offers both synchronous and asynchronous replication, but it is
active/passive. That is, the administrator can configure whether the
primary (R1) site waits for write acknowledgement from the remote (R2)
site, but at any one time it is only possible to write to either the R1
or the R2 device, not both.

Synchronous replication guarantees write order fidelity for the R2 copy
and ensures the remote copy is crash consistent at all times.

Asynchronous replication allows SRDF to support longer distances (or
lower bandwidth / higher latency inter site links) by packaging multiple
writes into delta sets to be sent to the remote site.

More complex modes and combinations exist that allow consistency to be
maintained among a group of devices, for example a database's data store
and redo logs, or that relax some of the synchronous replication
guarantees to improve efficiency (semi-synchronous operation).

Active/passive in the context of storage replication usually refers to
the states of the devices on the two sites. In active/active replication
both sides are fully active at all times and writes may be issued on
either side of the replication (a bit like multi-master application
layer replication). An active/passive design only allows one side to be
active for writes at a time.

Most array based implementations are active/passive and offer
asynchronous, synchronous or semi-synchronous operation.

Regards,
Bryn.
Jankowski, Chris
2011-12-14 01:03:11 UTC
Permalink
*Unidirectional* replication is probably a better phrase for what EMC SRDF and other typical block-mode storage arrays do for replication.

Typically this is used for manual or semi-automated DR systems and works very well for this purpose. This approach splits the HA and DR domains.

It can also be used with an HA stretched-cluster configuration for failing over services from one site to the other. You need to integrate into the service scripts the unmounting of the filesystems on one site, changing the direction of the replication, and mounting the filesystem on the other site. This is complex and fiddly, to say the least. I have yet to see an implementation where the users were really happy with the robustness of the integrated solution.
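The failover sequence described above could be sketched as a small script. This is a hedged illustration only: the RDF group, device and mount-point names are hypothetical, the exact symrdf syntax must be checked against your SYMCLI version, and by default the script only prints the steps (DRYRUN=1) rather than running them.

```shell
#!/bin/sh
# Sketch of a manual site-failover sequence: unmount at the old site,
# reverse the replication direction, mount at the new site.
set -e

DRYRUN=${DRYRUN:-1}
run() { if [ "$DRYRUN" = "1" ]; then echo "DRYRUN: $*"; else "$@"; fi; }

failover_to_dr() {
    rdf_group="$1"; device="$2"; mountpoint="$3"
    run umount "$mountpoint"                    # stop using the R1 copy
    run symrdf -g "$rdf_group" failover         # make the R2 side write-enabled
    run mount -t gfs2 "$device" "$mountpoint"   # bring the filesystem up at DR
}

# Example (all names hypothetical):
#   failover_to_dr app_group /dev/mapper/appdata /mnt/appdata
```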

To implement an HA cluster that uses a cluster filesystem such as GFS2 across a geographical area, you need a different type of storage - geographically distributed storage - to have a chance of the cluster surviving an inter-site link failure or a site failure. Standard unidirectional replication won't do for this. I know of only one such product - the Left Hand Networks iSCSI arrays (now owned by HP: the P4300, P4500 and P4800 storage arrays). Again, implementation of such a cluster is very complex. IMHO it is easier to have local HA clusters on both sites and a good DR process based on replication.

You could also try to implement the stretched cluster purely in software, using separate LUNs on storage arrays at the two sites and mirroring them. Personally, I believe this will not yield a robust solution with the current versions of the software.

Regards,

Chris Jankowski

m***@emc.com
2011-12-14 21:56:01 UTC
Permalink
Actually, another product that implements "geographically distributed storage" is VPlex (from EMC). It works quite nicely, and yes, you can put an HA file system on top of it. I haven't tried it with GFS2 yet, but I have tried it with OCFS2, and there is no reason why GFS2 would be any different. I.e. there is every reason why it should work.

..m
Redding, Erik
2011-12-14 22:53:24 UTC
Permalink
HP had a product called SVSP - they're phasing it out now (they sold the IP or lost the lease), but it beat the *pants* off VPlex (which requires 24 front-side ports on your MDS - laughable next to the 4 needed for HP SVSP). We've got a 96 TB SVSP configuration, use it for our stretch VMware cluster, and it's doing a great job. I'm waiting for a firmware upgrade so I can use it with RHCS; there's an ALUA issue with the current firmware that causes problems with multipathd working correctly.


Erik Redding
Systems Programmer, RHCE
Core Systems
Texas State University-San Marcos
yu song
2011-12-15 02:06:20 UTC
Permalink
Gents,

Beauty!! It is great to see your ideas.

I found a doc in the Red Hat KB with the following statement:

"
*Multi-Site Disaster Recovery Clusters*

A multi-site cluster established for disaster recovery comprises two
completely different clusters. These clusters typically have the same
configuration, with one active and the other passive (and sometimes powered
off). If the primary site fails, the secondary site is manually activated
and takes over all services.



Multi-site clusters are supported since implementation involves two
separate clusters with the same configuration/architecture at two physical
locations. Shared storage must be replicated from the primary to the
back-up site using array-based replication. During a site failover, the
cluster administrator must first toggle the directionality of the storage
replication so that the back-up site becomes the primary and then start up
the back-up cluster. These steps cannot be automated since using heuristics
like site-to-site link failure might result in primary/back-up toggling
when there are intermittent network failures.
"

It gives me exactly the answer I was after.

have a great Christmas!!
Steven Whitehouse
2011-12-15 10:06:11 UTC
Permalink
Hi,
Also, just to clarify: these multi-site clusters are not supported in
combination with GFS2.

Steve.
Post by yu song
Actually, another product that implements "geographically
distributed storage" is VPlex (from EMC). VPlex is a product
for geographically distributed storage. It works quite
nicely. And, yes, you can put a HA file system on top of
that. I haven't tried it with GFS2 yet; but I have tried it
with OCFS2, and there is no reason why GFS2 would be any
different. I.e. there is every reason why it should work.
..m
Post by Jankowski, Chris
To implement a HA cluster that uses a cluster filesystem
such as GFS2 across a geographical area you need a different
type of storage - geographically distributed storage - to
have a chance of the cluster surviving an inter-site link
failure or site failure. Standard unidirectional replication
won't do for this. I know of only one such storage - Left
Hand Networks iSCSI arrays (now owned by HP - the P4300,
P4500 and P4800 storage arrays). Again, implementation of
such a cluster is very complex. IMHO it is easier to have
local HA clusters on both sites and a good DR process based
on replication.
-----Original Message-----
Jankowski, Chris
Sent: Wednesday, December 14, 2011 3:03 AM
To: linux clustering
Subject: Re: [Linux-cluster] GFS2 support EMC storage SRDF??
*Unidirectional* replication is probably a better phrase to
describe what EMC SRDF and all other typical block mode
storage arrays do for replication.
Typically this is used for manual or semi-automated DR systems
and works very well for this purpose. This approach splits the
HA and DR domains.
It can also be used with a HA stretched cluster configuration
for failing over services from one site to the other. You need
to integrate into the service scripts the unmounting of the
service's filesystems on one site, the reversal of the
replication direction, and the mounting of the filesystems on
the other site. This is quite complex and fiddly, to say the
least. I have yet to see an implementation where the users
were really happy with the robustness of the integrated
solution.
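A minimal sketch of that service-script integration, assuming an SRDF device group named "app_dg" and a single filesystem; the device group, volume and mount point names are all hypothetical, and RUN=echo dry-runs the commands rather than touching storage.

```shell
#!/bin/sh
# Dry-run sketch of moving a replicated (non-GFS2) service between sites.
# Device group, LV path and mount point are hypothetical examples.
RUN="echo"

move_service_to_other_site() {
    # Quiesce on the currently active site: release the filesystem.
    $RUN umount /mnt/appdata

    # Swap the R1/R2 personalities so the other site becomes writable,
    # then resume replication in the new direction.
    $RUN symrdf -g app_dg swap -noprompt
    $RUN symrdf -g app_dg establish -noprompt

    # On the newly active site: mount and restart the service.
    $RUN mount /dev/appvg/appdata /mnt/appdata
}

move_service_to_other_site
```

In a real cluster these steps would have to live inside the service's resource scripts and be coordinated across both sites, which is exactly the fiddly part being warned about here.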
To implement a HA cluster that uses a cluster filesystem such
as GFS2 across a geographical area you need a different type of
storage - geographically distributed storage - to have a
chance of the cluster surviving an inter-site link failure or
site failure. Standard unidirectional replication won't do for
this. I know of only one such storage - Left Hand Networks
iSCSI arrays (now owned by HP - the P4300, P4500 and P4800
storage arrays). Again, implementation of such a cluster is very
complex. IMHO it is easier to have local HA clusters on both
sites and a good DR process based on replication.
You could also try to implement the stretched cluster purely
in software using separate LUNs on storage arrays in two sites
and mirroring them. Personally, I believe that this will not
yield a robust solution with the current versions of software.
Regards,
Chris Jankowski
-----Original Message-----
Reeves
Sent: Wednesday, 14 December 2011 01:43
To: linux clustering
Subject: Re: [Linux-cluster] GFS2 support EMC storage SRDF??
Post by yu song
My question is that GFS2 supports SRDF ?? looking at KB in
redhat site, it only says that GFS2 does not support using
asynchronous or active/passive array based replication. but it
seems like does not apply for SRDF.
SRDF offers both synchronous and asynchronous replication but is
active/passive. I.e. the administrator can configure whether the primary
(R1) site waits for write acknowledgement from the remote (R2) site or
not but at any one time it is only possible to write to either the R1 or
the R2 device.
Synchronous replication guarantees write order fidelity for the R2 copy
and ensures the remote copy is crash consistent at all times.
Asynchronous replication allows SRDF to support longer
distances (or lower bandwidth / higher latency inter-site
links) by packaging multiple writes into delta sets to be
sent to the remote site.
More complex modes and combinations exist that allow
consistency to be maintained among a group of devices, for
example a database's data store and redo logs, or that relax
some of the synchronous replication guarantees to improve
efficiency (semi-synchronous operation).
Active/passive in the context of storage replication usually refers to
the states of the devices on the two sites. In active/active replication
both sides are fully active at all times and writes may be issued on
either side of the replication (a bit like multi-master application
layer replication). An active/passive design only allows one side to be
active for writes at a time.
Most array based implementations are active/passive and offer
asynchronous, synchronous or semi-synchronous operation.
Regards,
Bryn.
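For reference, the modes described above map onto the Solutions Enabler CLI roughly as follows. This is an illustrative dry-run sketch only; the device group name "app_dg" is an assumption.

```shell
#!/bin/sh
# Dry-run (RUN is "echo") illustration of selecting SRDF replication modes.
RUN="echo"

show_srdf_modes() {
    # SRDF/S: the R1 side waits for the R2 write acknowledgement,
    # so the remote copy is crash consistent at all times.
    $RUN symrdf -g app_dg set mode sync
    # SRDF/A: writes are batched into delta sets, tolerating longer
    # distances and higher-latency inter-site links.
    $RUN symrdf -g app_dg set mode async
}

show_srdf_modes
```

Either way the replication remains active/passive: changing the mode changes how writes reach R2, not which side is writable.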