IBM Z Hardware and Operating Systems Ideas Portal


This is the public portal for all IBM Z Hardware and Operating System related offerings. To view all of your ideas submitted to IBM, create and manage groups of Ideas, or create an idea explicitly set to be either visible by all (public) or visible only to you and IBM (private), use the IBM Unified Ideas Portal (https://ideas.ibm.com).


Shape the future of IBM!

We invite you to shape the future of IBM, including product roadmaps, by submitting ideas that matter to you the most. Here's how it works:

Search existing ideas

Start by searching and reviewing ideas and requests to enhance a product or service. Take a look at ideas others have posted, and add a comment, vote, or subscribe to updates on them if they matter to you. If you can't find what you are looking for, post a new idea.

Post your ideas
  1. Post an idea.

  2. Get feedback from the IBM team and other customers to refine your idea.

  3. Follow the idea through the IBM Ideas process.


Specific links you will want to bookmark for future use

Welcome to the IBM Ideas Portal (https://www.ibm.com/ideas) - Use this site to find out additional information and details about the IBM Ideas process and statuses.

IBM Unified Ideas Portal (https://ideas.ibm.com) - Use this site to view all of your ideas, create new ideas for any IBM product, or search for ideas across all of IBM.

ideasibm@us.ibm.com - Use this email to suggest enhancements to the Ideas process or request help from IBM for submitting your Ideas.

Status Delivered
Workspace z/OS
Created by Guest
Created on Feb 15, 2018

Implement VIPA registration for the zNFS Server to make it "Sysplex aware"

The zNFS Server registers on startup with the LPAR's static VIPA for Kerberos. This means that the principal must be in the format:
nfs/static_vipa_dns
The principal name is defined in the KERB segment of the STC user ID.
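
For illustration, the principal name is set in the KERB segment with a RACF ALTUSER command; a minimal sketch from a z/OS UNIX shell, where the STC user ID MVSNFS and the hostname lpar1vipa.company.com are hypothetical placeholders:

# Hypothetical STC user ID and static VIPA DNS name; substitute real values.
tsocmd "ALTUSER MVSNFS KERB(KERBNAME('nfs/lpar1vipa.company.com'))"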

This behaviour requires either separate STC user IDs for running zNFS with Kerberos on different LPARs, or changing the KERB segment of the STC user every time the server is moved to another LPAR. It also requires that all clients remount the NFS shares once the zNFS Server moves to another LPAR. So there is no transparency or Sysplex awareness for the zNFS Server when using Kerberos.

The behaviour should be changed or enhanced so that the zNFS Server can register with the dynamic VIPA when a TCPIP BIND is done for the zNFS Server during startup. This would make it possible to run the zNFS Server on any LPAR in the Sysplex without needing new definitions for every LPAR that could possibly run the zNFS Server.

IBM made the statement of direction (SOD) that the zSMB Server is stabilized and names the zNFS Server as its full replacement. Unfortunately this is currently not possible because of the limitations of running the zNFS Server with Kerberos across the Sysplex. There are also security limitations when running SMB, so zNFS is not a full replacement.
Beginning with Windows 10, the zSMB Server is no longer compatible with the client side. The use of zNFS with Kerberos is also not possible in a Sysplex when the zNFS Server moves to other LPARs.

Idea priority Urgent
  • Guest
    Jan 31, 2021

    This RFE is implemented in D-APAR OA58912. The z/OS NFS server supports Kerberos on a bind-activated DVIPA in INET, CINET with stack affinity, and the CINET default stack.

    1) Basically, the z/OS NFS server (on LPAR SYA and SYB) MUST HAVE its TCP/IP profile modified to specify the PORT, server job name, and BIND, for example:
    PORT
    ...
    2043 TCP MVSNFS NOAUTOLOG BIND 10.1.1.124 ; associated host=vipaA_deptA.company.com
    2044 TCP MVSNFS NOAUTOLOG BIND 10.1.1.124
    2045 TCP MVSNFS NOAUTOLOG BIND 10.1.1.124
    2046 TCP MVSNFS NOAUTOLOG BIND 10.1.1.124
    2047 TCP MVSNFS NOAUTOLOG BIND 10.1.1.124
    2048 TCP MVSNFS NOAUTOLOG BIND 10.1.1.124
    2049 TCP MVSNFS NOAUTOLOG BIND 10.1.1.124

    2043 UDP MVSNFS NOAUTOLOG BIND 10.1.1.124
    2044 UDP MVSNFS NOAUTOLOG BIND 10.1.1.124
    2045 UDP MVSNFS NOAUTOLOG BIND 10.1.1.124
    2046 UDP MVSNFS NOAUTOLOG BIND 10.1.1.124
    2047 UDP MVSNFS NOAUTOLOG BIND 10.1.1.124
    2048 UDP MVSNFS NOAUTOLOG BIND 10.1.1.124
    2049 UDP MVSNFS NOAUTOLOG BIND 10.1.1.124
    ...

    The above DVIPA 10.1.1.124 MUST BE covered by a VIPARANGE statement.
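
    A minimal VIPARANGE sketch for the above, assuming the DVIPA may be created anywhere in the (assumed) 10.1.1.0/24 subnet:

    VIPADYNAMIC
      VIPARANGE DEFINE 255.255.255.0 10.1.1.0   ; covers DVIPA 10.1.1.124
    ENDVIPADYNAMIC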

    When MVSNFS is started on LPAR SYA (or LPAR SYB), it displays:

    GFSA327I (MVSNFS) z/OS Network File System Server starting with INET and 10.1.1.124
    EZD1205I DYNAMIC VIPA 10.1.1.124 WAS CREATED USING BIND BY MVSNFS ON

    2) The DNS (Domain Name System) server MUST HAVE a record of the DVIPA 10.1.1.124 and its associated hostname;
    i.e., nslookup 10.1.1.124 would report the hostname vipaA_deptA.company.com.
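
    A sketch of the corresponding DNS entries, assuming BIND-style zone files (zone names and layout are assumptions):

    ; forward zone company.com
    vipaA_deptA      IN  A    10.1.1.124
    ; reverse zone 1.1.10.in-addr.arpa
    124              IN  PTR  vipaA_deptA.company.com.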

    3) Create the Service Principal "nfs/vipaA_deptA.company.com" and securely transfer the keytab to LPAR SYA and LPAR SYB.
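
    A sketch of step 3, assuming an MIT-style KDC (the kadmin commands and keytab path are assumptions; other KDCs, e.g. Active Directory, use different tooling):

    # On the KDC: create the service principal and extract its keys (hypothetical path).
    kadmin -q "addprinc -randkey nfs/vipaA_deptA.company.com"
    kadmin -q "ktadd -k /tmp/nfs_dvipa.keytab nfs/vipaA_deptA.company.com"
    # Then transfer /tmp/nfs_dvipa.keytab securely (e.g. via sftp) to SYA and SYB.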

    4) Use the z/OS "keytable merge" to merge the generated keytab into the existing krb5.keytab.
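
    A merge sketch using MIT-style ktutil as a stand-in for the z/OS keytab merge (the tool choice and paths are assumptions; /etc/skrb/krb5.keytab is the usual z/OS default location):

    # Append the transferred keys to the existing keytab (hypothetical paths).
    ktutil <<'EOF'
    rkt /tmp/nfs_dvipa.keytab
    wkt /etc/skrb/krb5.keytab
    quit
    EOF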

    If DNS, Kerberos, and krb5.conf are correctly configured, the z/OS NFS Server will display:

    GFSA730I (MVSNFS) NETWORK FILE SYSTEM SERVER KERBEROS INITIALIZATION
    SUCCESSFUL FOR "nfs/vipaA_deptA.company.com"

    Please request the ++APAR OA58912 (for z/OS V2R3 or V2R4) to explore this new feature.

  • Guest
    Mar 7, 2018

    As discussed on the phone, we will further describe and prioritize our wishes for the implementation:

    Prio 1:
    Our preferred solution would be for zNFS to register on startup with a dynamic VIPA for Kerberos, so that the KERBNAME in the KERB segment of the STC user would be "nfs/vipa_dns@domain". With this behaviour it would be possible to move the zNFS STC to any system in the sysplex without the need to change any definitions in Kerberos or RACF.


    Alternate:
    An alternative for us would be if the solution you described works:
    a) z/OS NFS Server1 has "nfs/Server1.domain" Kerberos service name, and
    b) z/OS NFS Server2 has "nfs/Server2.domain" Kerberos service name, and
    c) the VIPA generic is "Server.domain" where all NFS Clients mount to "Server:\path".
    With static VIPA registration with Kerberos, and mounts with Kerberos over the VIPA address, the client would not be required to remount the file system when zNFS1 is stopped and zNFS2 is started on another LPAR. It would be OK to have separate STC users for separate LPARs and to maintain all RACF/Kerberos definitions for the users and systems where zNFS could be started.


    Anyway: the preferred solution would be the first one, so that no duplicate definitions are required.

  • Guest
    Feb 28, 2018

    Can you confirm that we have understood your last comment correctly?

    I took your sample:
    a) z/OS NFS Server1 has "nfs/Server1.domain" Kerberos service name, and
    b) z/OS NFS Server2 has "nfs/Server2.domain" Kerberos service name, and
    c) the VIPA generic is "Server.domain" where all NFS Clients mount to "Server:\path".

    Is it possible to make mounts with Kerberos using the VIPA address so that, after a move (stop NFS1/start NFS2), the mounts are still available and functional on the client side? The PMR is a bit misleading on this point, because we understood from it that Kerberos (including mounts) only works with the static VIPA definition.

    If this is true, it would help us a lot. If you can confirm it, we could test it with the z/OS NFS Client and the "move" (stop NFS1/start NFS2) of the z/OS NFS Server to verify it on our side. This was one of the main problems we saw.

    Please note:
    It would be very helpful if this were added to the documentation regarding sysplex awareness (or whatever it is called). I suppose that would help a lot of customers.

    Regarding part 2:
    Even if NFS1/NFS2 can be reached through a VIPA, it is still required to have one STC user ID per LPAR, because a user can only have one KERB segment. It would be helpful to have only one STC user ID with a generic KERBNAME, for example in the format nfs/vipa_dns.domain. This would be possible if zNFS used the VIPA for Kerberos instead of the static VIPA.
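
    For reference, the single KERBNAME tied to an STC user ID can be displayed with RACF LISTUSER; a small sketch from a z/OS UNIX shell, assuming the STC user ID matches the MVSNFS job name used in the example above:

    # Shows the one KERB segment (and thus the one KERBNAME) of the STC user ID.
    tsocmd "LISTUSER MVSNFS KERB NORACF"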

  • Guest
    Feb 23, 2018

    Assume that
    we have 2 LPARs and 2 z/OS NFS servers (one on each LPAR), both sharing the same VSAM MHDB and LDB data sets for a highly available z/OS NFS server.
    At any given time only one z/OS NFS Server is active.
    When z/OS NFS Server1 on LPAR1 is stopped, z/OS NFS Server2 on LPAR2 is started, and the VIPA directs NFS traffic from LPAR1 to LPAR2.

    Assume that
    a) z/OS NFS Server1 has "nfs/Server1.domain" Kerberos service name, and
    b) z/OS NFS Server2 has "nfs/Server2.domain" Kerberos service name, and
    c) the VIPA generic is "Server.domain" where all NFS Clients mount to "Server:\path".

    Assume that z/OS NFS Server1 is active and clients mount with < -o sec=krb5 Server:\path >; the VIPA directs all NFS traffic to Server1.
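
    A sketch of such a client mount on Linux, assuming NFSv4 and the generic VIPA hostname from c); the mount point is hypothetical:

    # Mount through the generic VIPA hostname with Kerberos (RPCSEC_GSS krb5).
    mount -t nfs4 -o sec=krb5 Server.domain:/path /mnt/zos_nfs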

    The Kerberos RPCSEC_GSS tokens/contexts are negotiated and used between the Clients and "nfs/Server1.domain".

    When the z/OS NFS Server1 is stopped and the z/OS NFS Server2 is started, VIPA directs all NFS traffic to Server2.

    Obviously all previously obtained Kerberos RPCSEC_GSS tokens/contexts that are associated with Server1 are NO longer valid at Server2.
    The RPCs with the old Kerberos RPCSEC_GSS tokens would be failed by Server2 with RPCSEC_GSS_CREDPROBLEM.
    Then the NFS clients refresh the Kerberos RPCSEC_GSS tokens/contexts by negotiating and exchanging information between the clients and "nfs/Server2.domain". This RPCSEC_GSS security negotiation is performed by the NFS client and Server2 without any user intervention.

    Secondly, the clients' TSO user logons are implicitly re-established on LPAR2.

    Please work with our NFS Level 2 Service and provide the details of the claim:
    {{
    This behaviour requires either separate STC user IDs for running zNFS with Kerberos on different LPARs, or changing the KERB segment of the STC user every time the server is moved to another LPAR. It also requires that all clients remount the NFS shares once the zNFS Server moves to another LPAR. So there is no transparency or Sysplex awareness for the zNFS Server when using Kerberos.
    }}

  • Guest
    Feb 15, 2018

    As an alternative to registration with the dynamic VIPA from BIND, it would also be possible to make the principal configurable (as it is implemented by the LDAPSRV). This would make it possible to use a distributed dynamic VIPA.
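
    To illustrate the suggestion only: a purely hypothetical site attribute, modeled on the configurable principal of the LDAP server; this attribute does not exist in the z/OS NFS server, and its name and syntax are invented here:

    # Hypothetical NFS server site attribute; NOT an actual z/OS NFS option.
    kerbprincipal(nfs/distributed_dvipa_dns.company.com)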