IBM Z Hardware and Operating Systems Ideas Portal


This is the public portal for all IBM Z Hardware and Operating System related offerings. To view all of your ideas submitted to IBM, create and manage groups of Ideas, or create an idea explicitly set to be either visible by all (public) or visible only to you and IBM (private), use the IBM Unified Ideas Portal (https://ideas.ibm.com).



Status: Not under consideration
Workspace: z/OS
Categories: DFSMS SMS
Created by: Guest
Created on: May 17, 2023

Dataset extensions on EAV volumes limited

Moving to EAV volumes means fewer volumes in the STORGRP are available for dataset volume extensions. This can result in space failures for original JCL allocations that worked fine when the STORGRP contained more, smaller volumes: even if the total capacity is the same or greater, there are fewer volumes to extend to, and you are more likely to encounter duplicate dataset names during a volume extend.

If a variable such as &EAV were available, it would be possible to counter this by assigning a DATACLAS that forces larger allocation sizes for EAV allocations, especially for &DSNTYPE=BASIC or PDS. This could help avoid bulk JCL changes.
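
To illustrate the request (the &EAV variable is the proposal itself, and EAVLRG is a placeholder data class name, not an existing one), the data class ACS routine could then contain something like:

  WHEN (&EAV EQ 'YES' AND
        (&DSNTYPE EQ 'BASIC' OR &DSNTYPE EQ 'PDS'))
    SET &DATACLAS = 'EAVLRG'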

Idea priority: Medium
  • Guest | May 23, 2023
    There is no way for the system to know that a data set create request will go to EAVs before the storage group ACS routine has run.
    However, as a storage administrator you know your environment has been changed to EAVs. You may want to explore the variable &USER_ACSVAR, whose value you can set through the SYS1.PARMLIB(IGDSMSxx) member.
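
    As a sketch of that suggestion (the flag value 'EAV' and the data class name EAVSML are illustrative assumptions, not taken from the comment): the storage administrator could set the first USER_ACSVAR value to EAV in the IGDSMSxx member only on the plexes that have moved to EAV volumes, and then test it in the data class ACS routine, for example:

      WHEN (&USER_ACSVAR(1) EQ 'EAV' AND
            (&DSNTYPE EQ 'BASIC' OR &DSNTYPE EQ 'PDS'))
        SET &DATACLAS = 'EAVSML'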
  • Guest | May 22, 2023

    Yes, you have captured the essence of it, and I see the obvious flaw: an &EAV variable would of course not be available during DATACLAS ACS processing. So other variables would need to be used to identify datasets known to be EAV candidates.

    Yes, we are exploring ways to avoid bulk JCL changes, because this could be a phased implementation across different plexes. We could substitute DATACLASes specified in JCL with EAV-friendly equivalents in ACS according to &SYSPLEX/&SYSNAME, e.g.:

    WHEN (&DATACLAS EQ 'NEAVSML' AND
          &SYSPLEX EQ 'TEST')
      SET &DATACLAS = 'EAVSML'
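
    Extending that pattern, each non-EAV data class would get its own WHEN clause mapping it to an EAV-friendly counterpart on the target plex (the NEAVLRG/EAVLRG names below are placeholders):

    WHEN (&DATACLAS EQ 'NEAVLRG' AND
          &SYSPLEX EQ 'TEST')
      SET &DATACLAS = 'EAVLRG'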

    I was sure I had seen the duplicate name issue on our test systems using EAVs, but I can't reproduce it, which matches what you're saying. Perhaps it occurred only with temporary datasets.

  • Guest | May 19, 2023
    First, we would like to say that the statement "With fewer volumes that increases the chance that during a volume extension the system will attempt to allocate on a volume that already contains a portion of the dataset resulting in a duplicate dataset name situation" is not true. When extending a data set to a new volume, we never select a volume that already contains a portion of that data set.

    Your issues are:
    1. You don't want to modify your JCL jobs to update the SPACE parameter after you switch from 100 mod-54 volumes to 20 EAV volumes. Although the number of volumes in your storage group is reduced, the storage group capacity is the same or greater.
    2. You want to preserve the potential maximum size of your data sets after the number of volumes in the storage group changes, by somehow having the effective SPACE allocation changed for you.

    If we understand your request correctly, you are looking for a variable "&EAV... to counter this issue by assigning a DATACLAS that forces larger allocation sizes for EAV allocations". The order of the ACS routine invocations is: data class, storage class, management class, and then storage group. Only after the storage group ACS routine returns do we know where the allocation will go, EAV or non-EAV. So, before the data class ACS routine runs, there is no way to determine whether the allocation request will land on an EAV. If we gave you an &EAV variable, we would not know where to get its value.

    After reviewing your response, we think there is another issue if you don't want to change your JCL jobs. For example, if you have a job that creates a data set with a volume count of 21 and you only have 20 volumes in your storage group, our current logic will fail the request because the storage group has insufficient volumes.
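
    For example (the data set name is illustrative), a DD statement like the following requests a volume count of 21 via VOL=(,,,21) and would fail against a storage group containing only 20 volumes:

    //BIGDS    DD DSN=HLQ.EXAMPLE.BIGDS,DISP=(NEW,CATLG),
    //            SPACE=(CYL,(10,10)),
    //            VOL=(,,,21)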

    Do we understand your request correctly?
  • Guest | May 19, 2023

    If, for example, you have a dataset allocation of (10,10) cylinders, then depending on the &DSNTYPE that could potentially amount to 160 cylinders per volume or 1,230 cylinders per volume.

    Then, depending on the DATACLAS dynamic volume count (DYNVOL) or VOL=(,,,nn), that gives a theoretical maximum for the dataset of either 160 cylinders * DYNVOL/VOL or 1,230 cylinders * DYNVOL/VOL.

    If the STORGRP has, for example, 100 mod-54 volumes and DYNVOL/VOL is 59, the dataset could reach a potential maximum of either 9,440 cylinders or 72,570 cylinders across 59 volumes.

    If those volumes are replaced with, for example, 250 GB EAVs, the number of volumes in the STORGRP needed to provide the same capacity drops from 100 to 20.

    That means the (10,10) cylinder allocation is now limited to 160 cylinders * 20 (3,200 cylinders) or 1,230 cylinders * 20 (24,600 cylinders), which is only around a third of what it could have reached with more volumes available.
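
    Restating that arithmetic (assuming the 160 and 1,230 figures come from the 16-extents-per-volume limit for BASIC/PDS data sets and the 123-extents-per-volume limit for extended format):

    Per volume: 10 + 15*10 = 160 cylinders (BASIC/PDS) or 10 + 122*10 = 1,230 cylinders (extended format)
    100 mod-54 volumes, DYNVOL/VOL = 59: 160 * 59 = 9,440 cylinders or 1,230 * 59 = 72,570 cylinders
    20 EAV volumes: 160 * 20 = 3,200 cylinders or 1,230 * 20 = 24,600 cylinders, roughly a third of the previous maximum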

    Also, depending on other allocations and the fragmentation of volumes at the time, it may be that not all possible extents can be taken on each volume, which forces more volume extensions. With fewer volumes, that increases the chance that during a volume extension the system will attempt to allocate on a volume that already contains a portion of the dataset, resulting in a duplicate dataset name situation and a failure.