Monday, May 17, 2021

NetApp ONTAP 9.8 S3 - Provisioning Object Storage

 


NetApp ONTAP 9.8 software supports the Amazon Simple Storage Service (S3) protocol. ONTAP supports a subset of AWS S3 API actions and allows data to be represented as objects in ONTAP-based systems, including AFF, FAS, and ONTAP Select.

 

The primary purpose of S3 in ONTAP is to provide support for objects on ONTAP-based systems. The ONTAP unified storage architecture now supports files (NFS and SMB), blocks (FC and iSCSI), and objects (S3).

 

Architecture

Object storage is an architecture that manages data as objects, as opposed to other storage architectures such as file or block storage. Objects are kept in a single flat container (such as a bucket) rather than being nested as files inside directories within other directories.

 



ONTAP - S3 Implementation:


1. Enable S3 Service in any data-SVM.

Requirements

Platforms

• NetApp AFF storage systems. S3 is supported on all AFF platforms using ONTAP 9.8+.

• FAS storage systems. S3 is supported on all FAS platforms using ONTAP 9.8+.

• NetApp ONTAP Select. S3 is supported on all platforms using ONTAP Select 9.8+.

• Cloud Volumes ONTAP. S3 is not supported on Cloud Volumes ONTAP.

 

Data LIFs

Storage virtual machines (SVMs) hosting object store servers require data LIFs to communicate with client applications using S3. When configured for remote cluster tiering, FabricPool is the client and the object store is the server.

Cluster LIFs

When configured for local cluster tiering, a local tier (also known as a storage aggregate in the ONTAP CLI) is attached to a local bucket. FabricPool uses cluster LIFs for intracluster traffic.




Configuring S3 Server (Object Storage Server)
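
If you prefer the CLI over System Manager, the configuration looks roughly like this sketch (the SVM name svm_s3, the server FQDN, and the user/group names are placeholders; HTTP is enabled here only for lab simplicity):

::> vserver object-store-server create -vserver svm_s3 -object-store-server s3.lab.local -is-http-enabled true -is-https-enabled false
::> vserver object-store-server user create -vserver svm_s3 -user s3user
::> vserver object-store-server group create -vserver svm_s3 -name s3group -users s3user -policies FullAccess

The user create step returns the access key and secret key that client applications (including FabricPool) use to authenticate.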


Creating Bucket for Object Storage tiering.
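
For example, a small bucket for tiering tests (bucket name and size are placeholders):

::> vserver object-store-server bucket create -vserver svm_s3 -bucket fabricpool-bucket -size 100GB
::> vserver object-store-server bucket show -vserver svm_s3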

















In another cluster, create a FabricPool using the following steps.

Add Cloud Tier and select the ONTAP S3
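
From the CLI, the equivalent cloud tier definition on the FabricPool cluster looks roughly like this (the object store name, server FQDN, bucket name, and keys are placeholders taken from the S3 SVM created earlier):

::> storage aggregate object-store config create -object-store-name ontap_s3_tier -provider-type ONTAP_S3 -server s3.lab.local -container-name fabricpool-bucket -access-key <access_key> -secret-password <secret_key>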









Cloud Tier Added Successfully.





In cluster2, list the object store server and bucket details.
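
For example:

::> vserver object-store-server show
::> vserver object-store-server bucket show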



Then you can attach the local tiers to this cloud tier.
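
A sketch, assuming a local tier named aggr1 and the cloud tier defined above:

::> storage aggregate object-store attach -aggregate aggr1 -object-store-name ontap_s3_tier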


Sunday, May 16, 2021

NetApp ONTAP 9.8 Snapmirror Business Continuity (SM-BC) Configuration

 


  

Beginning with ONTAP 9.8, you can use SnapMirror Business Continuity (SM-BC) to protect applications with LUNs, enabling applications to fail over transparently and ensuring business continuity in case of a disaster.

 


Benefits

SnapMirror Business Continuity provides the following benefits:

• Continuous availability for business-critical applications

• The ability to host critical applications alternately from the primary and the secondary site

• Simplified application management using consistency groups for dependent write-order consistency

• The ability to test failover for each application

 

Role of Mediator

ONTAP Mediator provides an alternate health path to the peer cluster, with the intercluster LIFs providing the other health path. With the Mediator’s health information, clusters can differentiate between intercluster LIF failure and site failure. When the site goes down, the Mediator passes the health information to the peer cluster on demand, enabling the peer cluster to fail over. Using the Mediator-provided information together with the intercluster LIF health checks, ONTAP determines whether to perform an automated failover or, if failover is not possible, whether to continue or stop serving I/O.

 








Hardware

• Only two-node HA clusters are supported

• Both clusters must be either AFF or ASA (no mixing)

Software

• ONTAP 9.8 or later

• ONTAP Mediator 1.2 or later

• A Linux server or virtual machine for the ONTAP Mediator running one of the following:

◦ Red Hat Enterprise Linux 7.6 or 7.7

◦ CentOS 8.0 or 8.1

Licensing

• SnapMirror synchronous (SM-S) license must be applied on both clusters

• SnapMirror license must be applied on both clusters

 

Supported protocols

• Only SAN protocols are supported (not NFS/CIFS)

• Only Fibre Channel and iSCSI protocols are supported

 

Steps to Implement SM-BC:
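
At a high level, the CLI sequence is to register the ONTAP Mediator on both clusters, create a consistency-group SnapMirror relationship with the AutomatedFailOver policy, and initialize it. A rough sketch (cluster, SVM, volume, and consistency-group names are placeholders):

::> snapmirror mediator add -mediator-address <mediator_ip> -peer-cluster <peer_cluster_name> -username mediatoradmin
::> snapmirror create -source-path vs_src:/cg/cg_app -destination-path vs_dst:/cg/cg_app_dst -cg-item-mappings lun_vol1:@lun_vol1_dst -policy AutomatedFailOver
::> snapmirror initialize -destination-path vs_dst:/cg/cg_app_dst
::> snapmirror show -destination-path vs_dst:/cg/cg_app_dst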


Friday, May 8, 2020

NetApp ONTAP 9.7 NVMe Configuration and Management

What is NVMe?


NVM Express (NVMe) is a data storage protocol that delivers the fastest response times for business-critical enterprise applications. However, NVMe is more than a storage specification; the broader NVMe over Fabrics protocol encompasses the entire data path, from server to network to storage system.

NVMe—the NVM Express data storage standard—is emerging as a core technology for enterprises that are building new storage infrastructures or upgrading to modern ones.
NVMe is both a protocol optimized for solid-state storage devices and a set of open architectural standards for nonvolatile memory (NVM) components and systems.

NVMe adds some new names for some common structures.


• An NVMe Qualified Name (NQN) identifies an endpoint and is similar to an iSCSI Qualified Name (IQN) in format (domain registration date, registered domain, and a unique string such as a serial number).
• A namespace is analogous to a LUN (Logical Unit Number, a unique identifier for a logical or physical device); both represent an array of blocks presented to an initiator.
• A subsystem is analogous to an initiator group (igroup). Subsystems have considerably more functionality, but for our purposes (mapping LUNs/namespaces) a subsystem is used to mask an initiator so that it can see and mount a LUN or namespace.
• Asymmetric Namespace Access (ANA) is a new protocol feature for monitoring and communicating path states to the host operating system’s Multipath I/O (MPIO) or multipath stack, which uses the information communicated through ANA to select and manage multiple paths between the initiator and target.

NVMe will become an essential part of the modern data center, because it addresses three crucial attributes of data storage performance: IOPS, throughput, and latency.

The IOPS and bandwidth improvements are primarily the result of NVMe’s flexibility and its ability to take advantage of fast transport technologies to move NVMe commands and data.

These transports include:
• FCP. Currently available in speeds of 16Gbps and 32Gbps, and soon 64Gbps.
• RDMA protocols:
  ◦ Data center fast Ethernet: currently available in 25, 40, 50, and 100Gbps.
  ◦ InfiniBand: currently available with speeds up to 100Gbps.
• PCI Express 3.0. Supports 8 gigatransfers per second (GT/s) per lane, which translates to roughly 1GB/s of usable bandwidth per lane.

NVMe accelerates many of today’s most important emergent business workloads:

Artificial intelligence (AI).

Machine learning (ML)/deep learning (DL).

Internet of Things (IoT).

NVMe as a Storage Attachment Architecture

NVMe is most commonly used today for attaching disks and disk shelves. Many storage vendors and suppliers have introduced offerings based on using NVMe as a storage-attachment architecture and standard. Technically, in most cases, NVMe is the protocol used to perform I/O, whereas the physical transport is primarily PCIe.
In this scenario, NVMe replaces the SCSI command set with the NVMe command set and frequently replaces SATA or serial-attached SCSI (SAS) with PCIe to connect drives to the storage controller. NVMe still relies on an underlying physical attachment and transport; in this case, PCIe serves as that transport.

NVMe-attached flash offers more bandwidth and reduced latencies because:
• It offers more and much deeper queues: 64K (65,535) queues, each with a queue depth of 64K.
• The NVMe command set is streamlined and therefore more efficient than legacy SCSI command sets.


Create a dedicated SVM with the NVMe protocol enabled.
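
From the CLI, the SVM setup looks roughly like this (SVM, root volume, and aggregate names are placeholders):

::> vserver create -vserver svm_nvme -rootvolume svm_nvme_root -aggregate aggr1 -rootvolume-security-style unix
::> vserver add-protocols -vserver svm_nvme -protocols nvme
::> vserver nvme create -vserver svm_nvme -status-admin up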



List the SVM details.
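
For example (svm_nvme is the placeholder SVM name used above):

::> vserver show -vserver svm_nvme -fields allowed-protocols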



Check and List the NVMe/FC adapters.
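
Something along these lines lists the adapters that report FC-NVMe support:

::> network fcp adapter show -data-protocols-supported fc-nvme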



Create NVMe LIFs.
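
A sketch, assuming node cluster1-01 and FC port 1a (repeat for additional nodes and ports):

::> network interface create -vserver svm_nvme -lif nvme_lif1 -role data -data-protocol fc-nvme -home-node cluster1-01 -home-port 1a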



List the NVMe Interfaces with their transport address.
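
For example:

::> vserver nvme show-interface -vserver svm_nvme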




Create a subsystem.
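
A sketch, assuming a Linux host (the subsystem name is a placeholder):

::> vserver nvme subsystem create -vserver svm_nvme -subsystem nvme_sub1 -ostype linux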




Get the host NQN (on the Linux host: # cat /etc/nvme/hostnqn) and add it to the subsystem.
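
For example (substitute the actual NQN string returned by the host):

::> vserver nvme subsystem host add -vserver svm_nvme -subsystem nvme_sub1 -host-nqn <host_nqn_value>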


Create a Namespace (like LUN).
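
A sketch, assuming the namespace lives in a volume named nvme_vol1 (volume name, path, and sizes are placeholders):

::> volume create -vserver svm_nvme -volume nvme_vol1 -aggregate aggr1 -size 200g
::> vserver nvme namespace create -vserver svm_nvme -path /vol/nvme_vol1/ns1 -size 100g -ostype linux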




Then Map the Namespace to the subsystem.
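
For example:

::> vserver nvme subsystem map add -vserver svm_nvme -subsystem nvme_sub1 -path /vol/nvme_vol1/ns1
::> vserver nvme namespace show -vserver svm_nvme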



You can view the namespaces in ONTAP System Manager.



You can view NVMe namespace health and performance in Active IQ Unified Manager.





Statistics view using the sysstat command.
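
For example, from the nodeshell (the node name is a placeholder):

::> system node run -node cluster1-01 -command sysstat -x 1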


To list the enabled NVMe feature and the maximum namespace size.
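
A hedged example; the exact fields displayed vary by ONTAP release:

::> vserver nvme show -vserver svm_nvme -instance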




Host Side Commands: (Linux Server)


To list the host nqn value.

# cat /etc/nvme/hostnqn

Check that the nvme-cli RPM is installed; install it if it is missing.

rpm -qa|grep nvme-cli
nvme-cli-1.6-1.el7.x86_64

To discover the namespaces (Devices).

nvme connect-all --transport=fc --traddr=nn-0x200a00a098c80f09:pn-0x200b00a098c80f09 --host-traddr=nn-0x20000090fae0ec9d:pn-0x10000090fae0ec9d

List the connected NVMe devices.

nvme list

Thursday, April 30, 2020

Three-Site Data Center Data Protection using NetApp ONTAP 9.7 SnapMirror



In today’s constantly connected global business environment, companies expect rapid recovery of critical application data with zero data loss. Organizations devise effective disaster recovery plans by keeping the following requirements in mind:

• Data should be recoverable in the event of catastrophic failure at one or more data centers (disaster recovery).
• Data should be replicated and distributed in an optimal way, taking into consideration major business criteria such as cost of storage, protection level against site failures, and so on.
• What needs to be protected, and for how long, should be driven by:
  ◦ Shrinking recovery point objective (RPO), to achieve zero data loss
  ◦ Near-zero recovery time objective (RTO), for faster recovery of business-critical applications in case of disaster

Three-Site Data Center Topology
The three-site data center configuration can be done in either fan-out or cascade topology.

Fan-out Topology
In a fan-out topology, the primary and near disaster recovery data centers are in-region sites (with a replication network round-trip time under 10ms) that synchronously replicate data between themselves to achieve zero RPO. The primary also asynchronously replicates data to the remote (far) disaster recovery data center outside the region, on a schedule that depends on the quantity of new data being generated. If disaster strikes the primary site, the application can be started with zero data loss from the near disaster recovery data center, which also takes over asynchronous replication to the far disaster recovery data center.



Steps to Configure Fan-out Topology:

Create intercluster LIFs and configure cluster peering and SVM peering.
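
A rough CLI sketch (addresses, ports, and names are placeholders; create an intercluster LIF on every node of every cluster, and repeat the peering for each cluster pair):

::> network interface create -vserver cluster1 -lif ic1 -role intercluster -home-node cluster1-01 -home-port e0c -address 192.168.10.11 -netmask 255.255.255.0
::> cluster peer create -address-family ipv4 -peer-addrs 192.168.10.21
::> vserver peer create -vserver svm1 -peer-vserver svm2 -applications snapmirror -peer-cluster cluster2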

Check the cluster peer and vserver peer relationships in all three NetApp clusters.
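
For example:

::> cluster peer show
::> vserver peer show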

Cluster1 is peered with cluster2 and cluster3.





Cluster2 is peered with cluster1 and cluster3.



Cluster3 is peered with cluster1 and cluster2.




In cluster1 (Primary site), create a new source volume (for example, volnew).
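
For example (SVM, aggregate, and size are placeholders):

::> volume create -vserver svm1 -volume volnew -aggregate aggr1 -size 10g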



In cluster2, create a destination volume with DP type. (Near DR Site)
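
For example:

::> volume create -vserver svm2 -volume volnew_dst -aggregate aggr1_c2 -size 10g -type DP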



In cluster2, create a SnapMirror Synchronous (SM-S) relationship.
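
A sketch, run on the destination cluster; the Sync policy provides zero RPO (StrictSync is the stricter alternative):

::> snapmirror create -source-path svm1:volnew -destination-path svm2:volnew_dst -policy Sync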




Then initialize the SnapMirror relationship; once the baseline transfer completes successfully, the relationship status shows InSync.
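
For example:

::> snapmirror initialize -destination-path svm2:volnew_dst
::> snapmirror show -destination-path svm2:volnew_dst -fields state,status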




In cluster3 (Far DR site), create a DP volume and create an async SnapMirror relationship between the primary site and the far DR site.
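
A sketch, run on cluster3 (volume names, policy, and schedule are placeholders):

::> volume create -vserver svm3 -volume volnew_far -aggregate aggr1_c3 -size 10g -type DP
::> snapmirror create -source-path svm1:volnew -destination-path svm3:volnew_far -policy MirrorAllSnapshots -schedule hourly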



Initialize the SnapMirror Relationship.
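
For example:

::> snapmirror initialize -destination-path svm3:volnew_far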







Primary Site to Near DR Site (Sync).


Primary Site to Far DR Site (Async).
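
Both relationships can then be verified with a single listing, for example:

::> snapmirror show -fields state,status,policy-type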




Cascade Topology

In a cascade topology, the primary synchronously replicates to the near disaster recovery site, and the near disaster recovery site asynchronously replicates data to the far disaster recovery site. If the near disaster recovery site goes down, you can configure asynchronous replication directly from the primary to the far disaster recovery site over the longer distance, ensuring that all delta updates continue to be replicated.


In cluster1 (Primary site), create a source volume to replicate.






In cluster2 (Near DR site), create a DP volume.



In Cluster2, create a SnapMirror Sync relationship between these two volumes.
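
For example, assuming a source volume vol_cas on svm1 and a DP volume vol_cas_dst on svm2 (placeholder names), run on cluster2:

::> snapmirror create -source-path svm1:vol_cas -destination-path svm2:vol_cas_dst -policy Sync
::> snapmirror initialize -destination-path svm2:vol_cas_dst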





In cluster3 (Far DR site), create a DP volume.




In cluster3, create an async SnapMirror relationship between the cluster2 and cluster3 volumes.
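
A sketch of the cascade's async leg, run on cluster3 (names are placeholders; note that the source here is the cluster2 secondary volume, not the primary):

::> snapmirror create -source-path svm2:vol_cas_dst -destination-path svm3:vol_cas_far -policy MirrorAllSnapshots -schedule hourly
::> snapmirror initialize -destination-path svm3:vol_cas_far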