Hello,
It looks like you misunderstood the request. We know that the STORC report provides memory allocation for VTAM, but what we are asking is that the information shown in the STORC report include details for VTAM, since we know that the ECSA consumption that appears for VTAM is related to the services that it (VTAM) performs for other address spaces (CICS connections, among others). So we want to know who is driving these ECSA getmains done by VTAM, which today we can only determine with a dump.
                  Service      ELAP    -- Percent Used --     ----- Amount Used -----
Jobname   Act  C  Class   ASID Time    CSA ECSA  SQA ESQA      CSA    ECSA    SQA   ESQA
%MVS                                     4   10   28   88    81600     46M   539K   113M
%REMAIN                                 17    1    0    0     350K   3404K   3328   298K
VTAM      S       SYSSTC  0087 20.5D     1   11    0    0    15888     48M      0   4952
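(As a rough check on the sample above: the Percent Used columns appear to be relative to the total defined area, so %MVS's 46M of ECSA at 10% implies roughly 460M of ECSA defined, which is consistent with VTAM's 48M showing as 11%.)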
When I drill down to %REMAIN, I see the 'Common Storage Remaining' report.
What I want is that when I drill down on VTAM, I get a report with the details of who is consuming the ECSA, just as I see in the %REMAIN detail.
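For context, the dump-based approach mentioned above is typically an IPCS VSMDATA analysis. A minimal sketch, assuming a dump that captured the system at the time of interest (the dump dataset name below is a placeholder):

  SETDEF DSNAME('SYS1.DUMPDS')
  VERBX VSMDATA 'OWNCOMM DETAIL ALL SORTBY(ASIDADDR)'

The OWNCOMM detail report attributes each CSA/ECSA/SQA/ESQA allocation to its owning address space and lists the individual getmains, which is the kind of per-requestor insight this idea asks RMF to surface without needing a dump.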
Regards
RMF reports common storage at the address space level, so you only get the total ECSA used by each address space. That said, using the Monitor III batch reporting capability, it is possible to monitor the ECSA usage and issue a console message when the allocated space reaches a configured threshold.
Using the message processing facility (MPF), actions can be taken when the message is received.
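As a hedged illustration of that MPF step (the message ID CSAM001E and exit name CSAMEXIT below are hypothetical, site-chosen names): once the batch job issues its threshold message, an entry in the active MPFLSTxx parmlib member can flag it for automation:

  CSAM001E,SUP(NO),AUTO(YES),USEREXIT(CSAMEXIT)

Here SUP(NO) keeps the message displayed, AUTO(YES) marks it eligible for the automation subsystem, and USEREXIT names an installation exit routine; the member is activated with the SET MPF=xx console command.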
Hello,
The point here is to have more details about who is driving the ECSA allocation for z/OS (%MVS) and VTAM. There are many STCs that do getmains with the system as the owner, and these are attributed to MVS in the STORC panel. It is not z/OS itself doing the getmains, but it is not possible to know that without capturing a dump. If we could get those details in the same way that we get them when we click on %REMAIN in the STORC panel, it would be great.
For the VTAM case, it would be great if we could know which service is driving the need for ECSA getmains; in other words, is the allocation being done for the CommServer code itself, or to hold data because the target is not able to receive it right away? Another example: when more CICS connections between regions using VTAM are defined, more ECSA is allocated for each defined connection, regardless of whether it is actually used. Again, the idea is to have some insight into which service is demanding the ECSA getmains done by VTAM.
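For comparison, the closest non-dump visibility I am aware of today (hedged; the exact output depends on the VTAM/CommServer level) is the VTAM buffer-use display, which reports VTAM's own CSA totals and limits but still does not break usage down by exploiting service or connection:

  D NET,BFRUSE

That missing breakdown is exactly the gap this idea is about.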
I hope the above info clarifies a bit more what is being suggested in this idea.
Regards.
Does the STORC report not provide the requested information? The STORC report shows the Amount Used and Percent Used for CSA, ECSA, SQA, and ESQA for all address spaces, including VTAM. The header fields of the report contain the availability, average, and peak amounts for the same storage areas.
Thank you!