This is the public portal for all IBM Z Hardware and Operating System related offerings. To view all of your ideas submitted to IBM, create and manage groups of Ideas, or create an idea explicitly set to be either visible by all (public) or visible only to you and IBM (private), use the IBM Unified Ideas Portal (https://ideas.ibm.com).
Forgot to add that the SELECTMULTI(ANY) keyword should be added to the DFDSS job using CHECKVTOC to cater for multi-volume datasets.
This isn't a DFDSS issue; it's a Storage Management issue. Without some idea of the number of datasets or GB involved, and how time-constrained these backups are - i.e. how many GB must be backed up in what window before the onlines come up - it's hard to offer solid solutions.
Some things to consider:
1. Why not back up direct to Tape in the first instance rather than to a Flash pool and then to Tape? Has it been tested to see what difference in timing that would make? With virtual tape you might find that going direct to Tape is fine.
2. If you have a number of these backups targeting the Flash pool, and each backup can vary, then you will always have a potential Flash pool sizing problem, especially if you have no control over what is included in the backups or over the dataset sizes. Even if you develop a mechanism to calculate the space, when do you run it, and how much time do you have to adjust the Flash pool space before the backups start? The best you can do is monitor the Flash pool over a reasonable period that includes peak processing periods and, for example, long holiday periods, when its usage will be at its highest. Use that as a baseline and add an extra 20-30% or so to cater for fluctuations.
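As a rough worked example of that baseline-plus-headroom rule (the figures and the function name are illustrative only, not from any product):

```python
def flash_pool_target_gb(peak_observed_gb: float, headroom: float = 0.25) -> float:
    """Flash pool sizing sketch: take the highest usage seen over the
    monitoring period and add a 20-30% cushion (25% used here)."""
    return peak_observed_gb * (1.0 + headroom)

# An observed peak of 800 GB over the monitoring window would suggest
# provisioning around 1000 GB.
print(flash_pool_target_gb(800))  # 1000.0
```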
3. If the INCLUDE and EXCLUDE masks for ALL the backups can be brought together, they can be used as input to a process that calculates the overall space the datasets use. If you run a DFDSS job with PARM='TYPRUN=NORUN' and the CHECKVTOC keyword using those masks, it will report each dataset's extents and extent sizes, but it does need a STORGRP(sgname) or LIDY(volser). Alternatively, DCOLLECT and numerous other utilities (including OEM products such as CRplus or FDREPORT) can handle the masking to produce totals, and subsequent REXX post-processing can deal with anything they can't. From the space calculated you have some idea of the Flash pool requirement, and if the process runs at a time that allows capacity to be added or enabled before the backups start, that's fine. The total Storgrp capacity and free space can be found using D SMS,STORGRP(sgname), the Naviquest Storgrp report, or other OEM utilities. I assume the backups are being compressed with zEDC, so depending on how many of the datasets are already compressed, you could expect the backup sizes to be substantially smaller than the total of the datasets.
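Pulling those pieces together (including the SELECTMULTI(ANY) suggestion above), a minimal sketch of such a reporting job might look like the following. The jobcard, the SGPROD storage group name, and the INCLUDE/EXCLUDE masks are placeholders, and keyword validity should be verified against the DFSMSdss reference for your release:

```jcl
//SPACECHK JOB (ACCT),'DFDSS SPACE CHECK',CLASS=A,MSGCLASS=X
//* TYPRUN=NORUN: parse and select only - reports the selected
//* datasets and their extents without writing any dump data
//NORUN    EXEC PGM=ADRDSSU,PARM='TYPRUN=NORUN'
//SYSPRINT DD  SYSOUT=*
//NOOUT    DD  DUMMY
//SYSIN    DD  *
  DUMP DATASET(INCLUDE(PROD.APPL.**)  -
       EXCLUDE(PROD.APPL.TEMP.**))    -
       STORGRP(SGPROD)                -
       SELECTMULTI(ANY)               -
       CHECKVTOC                      -
       OUTDDNAME(NOOUT)
/*
```

The SYSPRINT output can then be post-processed (e.g. with REXX, as mentioned above) to total the reported extent sizes.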