Wednesday, July 31, 2019

NetApp MAX Data Configuration and Management



NetApp MAX Data is an application acceleration product that uses the latest server-side storage technologies to improve application response time and bring enterprise-class data protection to persistent memory. MAX Data provides a file system for your application data with the performance of Intel Optane DCPMM and the capacity of a NetApp AFF storage array.
MAX FS, the file system for MAX Data, spans the memory and storage tiers as a single file system and intelligently moves (tiers) data between them based on use. It keeps the hottest, most frequently accessed data in the memory tier and moves the cooler, less frequently used data down to the storage tier.
This behavior lets you accelerate your application's working set at memory speed while supporting a much larger total data set. MAX Data also brings enterprise data services, such as snapshots and replication, to Optane DCPMM. NetApp, as an industry leader, is the first to offer an application-independent solution that merges the highest local speeds with global data protection.


MAX Data is a file system driver, currently available for Linux, that uses DRAM and persistent memory in the server as the primary data tier. The persistent memory is Intel Optane (3D XPoint), either as NVDIMMs or as PCIe-attached devices. MAX Data extends the internal persistent memory with backing storage for the primary tier, called the secondary data tier: a NetApp all-flash array connected over a storage network.
Data stored on the server in the primary data tier is also protected by mirroring to a remote MAX Recovery server, which is another server running the MAX Data file system driver with persistent memory, connected over a network.
The NetApp all-flash array can be an All Flash FAS (AFF) or an E-Series system, connected as block storage over a storage network. A LUN from the NetApp storage appears as persistent memory to the application, with the MAX Data software managing the backing-store transfers between the storage and the server.
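Before installing MAX Data, it is worth confirming that the server actually exposes persistent memory to the operating system. A quick check with standard Linux tools (device names vary by system; ndctl comes from the ndctl package):

    # List the NVDIMM namespaces configured on the server
    ndctl list -N

    # Persistent memory devices appear as /dev/pmemN
    ls -l /dev/pmem*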





Extract the NetApp MAX Data package on Red Hat Enterprise Linux 7.x.
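For example, assuming the package was downloaded to /tmp (the exact file name depends on the release you downloaded):

    cd /tmp
    tar -xvf max_data-<version>.tgz    # substitute the actual package file name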





Go to the package directory and install the MAX Data software using the max_install.sh script.
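For example (run as root; the directory name below depends on the package version):

    cd max_data-<version>
    ./max_install.sh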



After a successful installation, you can access and configure MAX Data in a browser at https://<linux-server-hostname>:9090.
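If the page does not load, check that port 9090 is reachable. On RHEL 7 with firewalld, for example:

    firewall-cmd --permanent --add-port=9090/tcp
    firewall-cmd --reload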




Access it via a web browser and log in with your Linux credentials.



The MAX Data Configurator page opens, where you can manage your MAX Data deployment.

From here you can integrate with ONTAP 9, configure MAX Recovery (DR) servers and MAX Data hosts, and provision application storage.






First, configure the host cluster.



Specify the ONTAP 9 cluster SVM (Vserver) information: credentials, protocol, and the management SVM IP address.
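If you are unsure of the management SVM IP address, you can look it up from the ONTAP CLI (the cluster and SVM names below are examples):

    cluster1::> network interface show -vserver svm1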





After providing all the information, connect to the ONTAP 9 cluster.




The connection to the ONTAP 9 cluster succeeds.




For MAX DR, add the license and specify the MAX Recovery server IP address.



Now add the MAX Data hosts by providing the details of the Linux servers where the MAX Data software is installed.





The MAX Data host is added successfully and appears in the list.




The next step is to provision storage from the NetApp AFF system.
To do that, add a MAX Data license and specify the application type.



Then provide the number of LUNs, along with the mount name and size.


Then set up scheduled Snapshot copies for data protection.
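Scheduled Snapshot copies on ONTAP are driven by Snapshot policies. Whether MAX Data creates its own policy is not shown here, but you can list the policies that exist on the cluster:

    cluster1::> volume snapshot policy show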




The application is now added successfully with the specified details.



Now you can deploy it.




During the deployment process, it connects to ONTAP 9, creates the igroup and the LUNs, and maps the LUNs to the igroup.
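You can verify the objects it creates from the ONTAP CLI (the SVM name svm1 is an example):

    cluster1::> lun igroup show -vserver svm1
    cluster1::> lun show -vserver svm1
    cluster1::> lun mapping show -vserver svm1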



The server will reboot, so reconnect once it is back up.




You can also check the configuration logs.




The MAX Data cluster is now set up successfully, and the application is in the active state.




From here you can manage and monitor capacity, performance, and utilization.



The LUNs are deployed as data files here.
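On the host, the MAX FS mount can be checked with standard tools. This assumes a mount point of /maxdata1; yours depends on the mount name chosen during provisioning:

    df -h /maxdata1
    mount | grep -i max    # the reported file system type may differ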






For data protection, you can create a Snapshot copy of the MAX Data files.



Snapshot created.



If you check on the ONTAP cluster, you can see the Snapshot copy.
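For example, from the ONTAP CLI (the SVM and volume names are examples; the actual names are created during deployment):

    cluster1::> volume snapshot show -vserver svm1 -volume maxdata_vol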




To test this, delete some files and then restore the Snapshot copy to recover them.
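A simple way to test, again assuming the file system is mounted at /maxdata1 and contains a couple of hypothetical test files:

    ls -l /maxdata1
    rm /maxdata1/testfile1 /maxdata1/testfile2
    ls -l /maxdata1    # the files are gone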



Now restore the data from the Snapshot copy.





Data restored successfully.
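After the restore completes, confirm that the deleted files are back:

    ls -l /maxdata1    # testfile1 and testfile2 are present again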


