IBM Z Hardware and Operating Systems Ideas Portal

Status: Future consideration
Workspace: z/OS
Categories: DFSMS DSS
Created by: Guest
Created on: Jul 3, 2022

DFSMS - Total capacity required by DSS DUMP or COPY

Daily, we run a batch job that dumps a few datasets to flash volumes using ADRDSSU FR(REQ) while the datasets are closed by the onlines, and later dumps the data on those flash volumes to tape. The other day, we found that the flash volumes' capacity wasn't sufficient because the source dataset sizes had grown, and we had to increase the capacity of the flash volumes storage group.
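
For context, a minimal sketch of the kind of first-step job we run, assuming FlashCopy-capable target volumes; the dataset masks and volsers (PROD.ONLINE.**, FLSH01, and so on) are placeholders, not our real names:

//* Sketch: FlashCopy the closed online datasets to the flash pool
//COPYFC   EXEC PGM=ADRDSSU
//SYSPRINT DD  SYSOUT=*
//SYSIN    DD  *
    COPY DATASET( -
         INCLUDE(PROD.ONLINE.**) -
         EXCLUDE(PROD.ONLINE.WORK.**)) -
         OUTDYNAM((FLSH01),(FLSH02)) -
         FASTREPLICATION(REQUIRED)
/*

A second job then dumps the copies from the flash volumes to tape.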

Since we do not know how much capacity is needed, we tripled the size of the flash pool. I am trying to see whether a utility could be built that calculates the space required by a DUMP or COPY job. Is that doable? Could it be incorporated into DSS or some other utility? DCOLLECT and REXX could be used, but in ADRDSSU we use patterns with INCLUDE and EXCLUDE, which makes building such code complicated.
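
If it helps, a hedged sketch of the DCOLLECT side, assuming the pool is a single storage group; the DSN, SPACE/DCB attributes, and storage group name are placeholders, and the exact DCOLLECT keywords should be verified for your release:

//* Sketch: collect dataset-level space records for one storage group
//DCOL     EXEC PGM=IDCAMS
//SYSPRINT DD  SYSOUT=*
//DCOUT    DD  DSN=HLQ.DCOLLECT.OUT,DISP=(NEW,CATLG),
//             SPACE=(CYL,(5,5)),
//             DCB=(RECFM=VB,LRECL=1024,BLKSIZE=10240)
//SYSIN    DD  *
    DCOLLECT OUTFILE(DCOUT) STORAGEGROUP(PRODSG)
/*

A REXX exec would then read the data set ('D') records, apply the same INCLUDE/EXCLUDE masks, and total the allocated space, which is the part that gets complicated.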

Generally, flash pools are limited in capacity. So, when a team runs their jobs, if they can first validate that the data will fit into the pool, they can proceed with the actual backup. Please let me know if you need more info.

Idea priority: High
  • Guest | Jul 29, 2022

    Forgot to add: the SELECTMULTI(ANY) keyword should be added to the DFDSS job that uses CHECKVTOC, to cater for multi-volume datasets.
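
    For illustration, a hedged sketch of where those keywords would sit in the logical dump (the mask, storage group name, and output DSN are placeholders):

    //* Sketch: logical DUMP with CHECKVTOC plus SELECTMULTI(ANY)
    //DUMPCHK  EXEC PGM=ADRDSSU
    //SYSPRINT DD  SYSOUT=*
    //OUT1     DD  DSN=HLQ.BACKUP.FILE,DISP=(NEW,CATLG),UNIT=TAPE
    //SYSIN    DD  *
        DUMP DATASET(INCLUDE(PROD.ONLINE.**)) -
             STORGRP(FLASHSG) -
             SELECTMULTI(ANY) -
             CHECKVTOC -
             OUTDDNAME(OUT1)
    /*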

  • Guest | Jul 29, 2022

    This isn't a DFDSS issue; it is a storage management issue. Without some idea of the number of datasets or GB involved, and of how time-constrained these backups are (i.e., how many GB must be backed up within what window before the onlines come up), it's hard to offer solid solutions.

    Some things to consider:

    1. Why not back up directly to tape in the first instance, rather than to a flash pool and then to tape? Has it been tested to see what difference in timing that would make? With virtual tape you might find that direct-to-tape is fine.

    2. If you have a number of these backups that target the flash pool, and each backup can vary, then you're always going to have a potential flash pool sizing problem, especially if you have no control over what is included in the backups or over the dataset sizes. Even if you develop a mechanism to calculate the space, when do you run it, and how much time do you have to adjust the flash pool space before the backups start? The best you can do is monitor the flash pool over a reasonable period that includes peak processing periods and, for example, long holiday periods when its usage would be at its highest. Use that as a base point and add an extra 20-30% or so to cater for fluctuations; for example, a 600 GB observed peak would suggest a pool of roughly 750-780 GB.

    3. If the INCLUDE and EXCLUDE masks for ALL the backups can be brought together, then they can be used as input to a process that calculates the overall space the datasets use. If you run a DFDSS job in PARM='TYPRUN=NORUN' mode with the CHECKVTOC keyword and those masks, it will report each dataset's extents and extent sizes; it does, however, need a STORGRP(sgname) or LOGINDYNAM(volser) (see the sketch below). Alternatively, DCOLLECT and numerous other utilities (including OEM ones such as CRplus or FDREPORT) can cope with the masking to get totals, and subsequent REXX post-processing can deal with anything they can't.

       From the space calculated you have some idea of the flash pool requirement, and if the process runs at a time that allows capacity to be added or enabled before the backups start, that's fine. The total storage group capacity and free space can be found using D SMS,STORGRP(sgname), the NaviQuest storage group report, or other OEM utilities. I assume the backups are being compressed with zEDC, so depending on how many of the datasets are already compressed, you could expect the backup sizes to be substantially smaller than the total of the dataset sizes.
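
    For illustration, a minimal sketch of such a NORUN sizing job, assuming the masks can be brought together (all names are placeholders, and the dummy output DD is only there to satisfy the syntax):

    //* Sketch: report dataset extents/sizes without moving any data
    //SIZECHK  EXEC PGM=ADRDSSU,PARM='TYPRUN=NORUN'
    //SYSPRINT DD  SYSOUT=*
    //OUT1     DD  DUMMY
    //SYSIN    DD  *
        DUMP DATASET( -
             INCLUDE(PROD.ONLINE.**) -
             EXCLUDE(PROD.ONLINE.WORK.**)) -
             STORGRP(PRODSG) -
             SELECTMULTI(ANY) -
             CHECKVTOC -
             OUTDDNAME(OUT1)
    /*

    The extent sizes then appear in SYSPRINT, which a small REXX exec could total and compare against the capacity and free space reported for the pool.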