Frequently Asked Questions
Model9 Cloud Data Management for Mainframe
Yes. For an incremental backup the data set change bit must be turned on in the Policy options.
Yes. Like HSM or CA-Disk, when a data set is referenced by a batch job or online in TSO, the data set is automatically recalled to primary DASD for use.
Model9 Manager can use any of the following compression options when running on z14/z15 machines:
- LZ4 – for use on the zIIP
- GZIP – for use with zEDC or zIIP
- DFDSS-GZIP – for use with zEDC
- DFDSS-COMPRESS – for use when no zIIP is available
In addition to compressing the data, Model9 Manager breaks the data down into chunks before transmitting it over TCP/IP and moves the chunks in parallel.
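The chunked, parallel transfer described above follows the same general pattern as a multipart object upload. The sketch below uses boto3's transfer configuration to show that pattern on the S3 side; it is an illustration only, not Model9's internal code, and the bucket and key names are hypothetical.

```python
# Illustration only: chunked, parallel object upload using boto3.
# This is the generic S3 multipart pattern, not Model9's internal implementation.
# Bucket and key names below are hypothetical.
import boto3
from boto3.s3.transfer import TransferConfig

s3 = boto3.client("s3")

# Split the upload into 64 MiB chunks and move up to 8 chunks concurrently.
config = TransferConfig(
    multipart_threshold=64 * 1024 * 1024,
    multipart_chunksize=64 * 1024 * 1024,
    max_concurrency=8,
)

s3.upload_file(
    Filename="backup.dump",
    Bucket="example-backup-bucket",
    Key="volumes/VOL001/backup.dump",
    Config=config,
)
```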
No. Model9 Cloud Data Platform policies can work with your existing SMS management classes and storage group definitions.
All the information and logs are available in the Model9 Manager UI. When a Model9 Manager policy is run using JCL, the policy run log is included in the job output. This log can be reviewed either on the mainframe or through the Model9 Manager UI.
Yes.
Data sets migrated/archived with Model9 Manager have a VOLSER of M9ARCH (default) or a similar name chosen by the installer. The end user can browse, select, or edit the data set in ISPF in the same manner as if it had been migrated by HSM. The data set is automatically recalled, and the message ZM9RC01I RECALL NEEDED, DSN=dataset-name is issued.
Yes, Model9 Shield supports Object Lock enablement at the bucket level. The customer needs to set up a lifecycle policy to delete the old versions after a given time period.
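As an illustration of the lifecycle policy mentioned above, the following sketch applies an AWS S3 rule that expires noncurrent (old) object versions after a retention window; the bucket name and the 180-day period are hypothetical values to adapt to your own retention requirements.

```python
# Illustration only: an AWS S3 lifecycle rule that expires old (noncurrent)
# object versions after a retention window. Bucket name and the 180-day
# period are hypothetical; choose values that match your retention policy.
import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="example-shield-bucket",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "expire-old-versions",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},  # apply to the whole bucket
                "NoncurrentVersionExpiration": {"NoncurrentDays": 180},
            }
        ]
    },
)
```

Note that S3 does not permanently remove a version that is still protected by an Object Lock retention period, so the expiration takes effect only after the lock lapses.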
Model9 Manager automatically allows for backups of open data sets. Model9 can also be configured to finish a backup with a warning listing the data sets that were open during the backup.
While 10GbE OSA cards are becoming more widely available, some customers have only a limited number of 1GbE OSA cards. You do not need to compare the number of FICON cards you have to the number of OSA cards needed for data management. OSA cards can be driven at their full bandwidth, and multiple OSA cards can be used in parallel to increase both read and write throughput. In addition, Model9 Cloud Data Platform can compress the data on the mainframe using either zIIP engines or zEDC cards, reducing the amount of data sent over the network and therefore the bandwidth needed from the OSA cards.
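As a rough illustration of the bandwidth saving, the sketch below compresses a local buffer with gzip and reports the size reduction; it only demonstrates the effect of compressing before transfer and says nothing about Model9's zIIP/zEDC implementation. The file name is hypothetical.

```python
# Illustration only: compressing data before transfer reduces the bytes that
# cross the network. This host-side sketch shows the size effect of gzip
# compression; the input file name is hypothetical.
import gzip

with open("backup.dump", "rb") as f:
    raw = f.read()

compressed = gzip.compress(raw, compresslevel=6)
print(f"original: {len(raw)} bytes, compressed: {len(compressed)} bytes "
      f"({len(compressed) / len(raw):.0%} of original)")
```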
No changes are needed for any application using either BSAM or QSAM. The application will still behave as if it is reading from or writing to tape, while those reads/writes are intercepted by CDS and redirected to object storage. These data sets are cataloged with a special pseudo VOLSER to indicate that they reside on object storage.
Yes. Model9 Manager can run side by side with any backup/archive software. Both products will be able to archive and automatically recall their own archives/migrates in a transparent manner. One exception is when you want to incrementally back up the same data set using different products. You will not want both products to reset the change bit, as that would interfere with the other product's backup decisions. When needed, Model9 Manager can avoid resetting the change bit at the Policy level to avoid conflicts when running multiple backup products on the same data.
Standard HTTPS is used to communicate with the cloud storage and secure the data in transit, whether the target is on-site or in the public cloud.
Both in-flight and at-rest data encryption are transparent to the end user using Model9 Cloud Data Platform. This is done automatically by Model9 Cloud Data Platform and the cloud infrastructure.
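For illustration, at-rest encryption on an S3-compatible store is typically requested per object or configured as a bucket default, while in-flight encryption comes from the HTTPS endpoint itself. The sketch below shows the per-object form with boto3; the endpoint, bucket, and key names are hypothetical, and this is not Model9's code.

```python
# Illustration only: requesting server-side (at-rest) encryption from an
# S3-compatible object store. In-flight encryption is provided by the HTTPS
# endpoint. Endpoint, bucket, and key names are hypothetical.
import boto3

s3 = boto3.client("s3", endpoint_url="https://object-store.example.com")

s3.put_object(
    Bucket="example-backup-bucket",
    Key="archive/MY.DATA.SET",
    Body=b"...",
    ServerSideEncryption="AES256",  # cloud-managed keys; use "aws:kms" for KMS keys
)
```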
Data integrity of backup is maintained in several ways:
1. Data blocks are written by DFSMSdss, which has its own data integrity mechanism. Any changes to the data will be detected by DFSMSdss during restore and will cause the restore operation to fail.
2. Cloud object storage does not allow objects to be altered in place. Any alteration to an object requires uploading a new version of that object. Uploading a new object is recorded in the cloud vendor's audit trail and is detectable by the new object's creation date. The old version that was overwritten can also be kept if the versioning storage feature is enabled in the Policy options.
3. Model9 Cloud Data Platform can upload each object with a hash signature that is verified by the target cloud object storage before the object is accepted. This feature needs to be enabled explicitly (see the sketch below).
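The sketch below illustrates the mechanism behind item 3: the client sends a hash alongside the object and the object store verifies it before accepting the upload. It uses the standard S3 Content-MD5 check via boto3; names are hypothetical, and this is not Model9's implementation.

```python
# Illustration only: uploading an object together with a hash that the object
# store verifies before accepting the upload. Bucket, key, and file names are
# hypothetical.
import base64
import hashlib
import boto3

s3 = boto3.client("s3")

with open("backup.dump", "rb") as f:
    data = f.read()
md5_b64 = base64.b64encode(hashlib.md5(data).digest()).decode()

# If the provided MD5 does not match the uploaded bytes, S3 rejects the
# request with a BadDigest error instead of storing a corrupted object.
s3.put_object(
    Bucket="example-backup-bucket",
    Key="backups/MY.DATA.SET",
    Body=data,
    ContentMD5=md5_b64,
)
```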
All data set types are eligible for migration to object storage. The object storage repository serves as secondary storage; it is not intended to replace primary DASD for z/OS. Both backups and archives can reside on object storage. Data sets can also be imported from tape without intermediate DASD space being required.
Model9 Cloud Data Platform is hardware and vendor agnostic. We have formed strategic partnerships with various companies to bring a full backup/archive solution for all silos within the data center.
Model9 Cloud Data Management for Mainframe user interfaces include a web UI, a RESTful API that can be executed from z/OS, and a TSO command line interface (CLI) to perform Model9 functions from batch, REXX or TSO.
Yes, we can create a process for historical full dump import from DFSMSdss or DFSMShsm.
Backups and archives are defined through policies that can be executed either from the web UI or initiated from z/OS batch jobs using an API. There is also a z/OS command line interface (CLI) to initiate single data set backups/restores and archives/recalls, which can be called from TSO, REXX, or JCL.
Model9 Manager backs up the entire data set.
Yes. After each volume backup is performed, a list of the data sets that reside on that volume is generated and kept in the Model9 Cloud Data Platform database.
Yes. As mentioned above, after each full volume backup all data sets are indexed to allow data set level restore from the physical backup.
We only support STANDARD or INFREQUENT ACCESS (IA) as values for the object storage tier. Writing directly to Glacier is not permitted; instead, we write to S3 and an AWS lifecycle policy migrates the data to Glacier.
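For illustration, the sketch below defines the kind of AWS lifecycle rule referred to above, transitioning objects to Glacier after they age; the bucket name, prefix, and 90-day threshold are hypothetical.

```python
# Illustration only: an AWS S3 lifecycle rule that transitions objects to
# Glacier after they age, so data written to STANDARD or INFREQUENT ACCESS
# can eventually reach Glacier. Bucket, prefix, and age are hypothetical.
import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="example-backup-bucket",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "transition-to-glacier",
                "Status": "Enabled",
                "Filter": {"Prefix": "archives/"},
                "Transitions": [{"Days": 90, "StorageClass": "GLACIER"}],
            }
        ]
    },
)
```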
Requests to port 80 are redirected to port 443. This is simply so that the customer does not receive an unexpected HTTP response when trying to access the non-encrypted port.