IBM Z Hardware and Operating Systems Ideas Portal


Status: Delivered
Workspace: z/OS
Created by: Guest
Created on: Jan 22, 2021

Extended GDG causes space issues

The LIMIT of 999 may be too high, not only because of the space consumed but also because of the space management burden it creates.
There are three major issues in the current implementation:
1) Installations have no way to cap the maximum number of generations in a GDG: the ceiling is either 255 (standard) or 999 (extended), with nothing in between (see the IDCAMS sketch after this list).
2) A single job can flood the HSM recall queue with too many recalls, which can be (see the ALLOCxx sketch after this list):
Too fast: with BATCH_RCLMIGDS(PARALLEL) in ALLOCxx, all 999 recalls would be processed at once, taking every HSM tape drive available for recall. Jobs that issued recalls after those 999 would have to wait for a while, and the storage group could be flooded with data sets with little opportunity for automation to react.
Too slow: with BATCH_RCLMIGDS(SERIAL) in ALLOCxx, all 999 recalls would be processed one at a time, which could also take a long while.
3) Flooding storage groups with too many data sets could cause an out-of-space condition. That problem already exists: nothing prevents a job from recalling 255 generations today. The difference is that LIMIT(999) makes it roughly 292% worse ((999 - 255) / 255 ≈ 2.92).
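
For illustration, a minimal IDCAMS sketch of the two ceilings in question. The data set names are hypothetical, and EXTENDED additionally requires GDGEXTENDED(YES) in the IGGCATxx parmlib member:

    //DEFGDG   EXEC PGM=IDCAMS
    //SYSPRINT DD  SYSOUT=*
    //SYSIN    DD  *
      /* Standard GDG: LIMIT cannot exceed 255 */
      DEFINE GDG (NAME(HLQ.STD.GDG) LIMIT(255) SCRATCH)
      /* Extended GDG: LIMIT may be as high as 999 */
      DEFINE GDG (NAME(HLQ.EXT.GDG) LIMIT(999) EXTENDED SCRATCH)
    /*

Between those two values there is no installation-settable ceiling: once extended GDGs are enabled, any user who can define one can specify LIMIT(999).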
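And a sketch of the ALLOCxx setting referenced above; note that the SYSTEM statement applies system-wide, with no per-job or per-user granularity:

    SYSTEM BATCH_RCLMIGDS(PARALLEL)  /* issue all recalls at once;  */
                                     /* SERIAL processes them one   */
                                     /* at a time                   */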

Idea priority: Medium
  • Guest (Jan 25, 2023)
    This function is available via APAR OA62222 for IDCAMS. A separate RFE has been opened to address the HSM aspect of this problem.
  • Guest (May 4, 2022)

    It would be nice to have some sort of granular control over this.

    e.g.

    - PARALLEL recall could involve a SAF call so that only authorised user IDs can exploit it, while others are transparently forced to non-parallel processing (a hypothetical profile is sketched after this comment).

    - Set a threshold of x recalls per user ID in ALLOCxx, after which DFHSM issues a WTOR that requires a reply to continue or not. That WTOR could have automation built around it to check the recall queue or storage group space, etc., based on site standards before replying.
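
    As one hedged sketch of the first suggestion, assuming a SAF check against a FACILITY-class profile: the profile name STGADMIN.ALC.BATCH.PARALLEL.RECALL and the user ID PRODSCHD are invented here purely for illustration, and no such profile exists today.

      //RACFDEF  EXEC PGM=IKJEFT01
      //SYSTSPRT DD  SYSOUT=*
      //SYSTSIN  DD  *
        RDEFINE FACILITY STGADMIN.ALC.BATCH.PARALLEL.RECALL UACC(NONE)
        PERMIT STGADMIN.ALC.BATCH.PARALLEL.RECALL CLASS(FACILITY) +
          ID(PRODSCHD) ACCESS(READ)
        SETROPTS RACLIST(FACILITY) REFRESH
      /*

    Under the proposal, user IDs with READ access would get parallel recall, while everyone else would be transparently forced to serial processing.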

  • Guest (Jan 25, 2021)

    We do not have this issue (or have not encountered it yet), but on our smaller systems I could see it potentially occurring.