Everyone wants a faster, cheaper mainframe. They want it both ways: better performance on a budget. And while this might sound idealistic, mainframe managers can have their cake and eat it too by making mainframe optimization a priority.

At the heart of mainframe optimization is this: What resources are on the mainframe, how are they being used, and at what price? On the surface these might seem like easy questions to answer, but anyone who has worked with mainframes long enough knows there are no easy answers when it comes to z/OS.

Answering these questions requires an extensive collection of software: a data analytics dashboard, a capacity planning platform, a budget management interface, and more. Historically, that has made mainframe optimization too finicky for many teams to even consider. It’s worth it in the long run, but how can mainframe managers overcome the headache associated with mainframe optimization?

Mainframe optimization is a multi-tiered, ongoing process. To unlock new z/OS efficiencies you must employ a combination of the following techniques:

  • Review WLM
  • Troubleshoot inefficiencies
  • Optimize code
  • Be aware of mainframe expenses
  • Correlate mainframe activity with business services

Let’s dive into this optimization workflow in greater detail to better understand what contributes to mainframe issues and what managers can do to overcome common z/OS inefficiencies.

1. Review WLM

Workload Manager (WLM) is a standard z/OS component that allows users to manage system resources. Through it, you can control how the system's physical resources are distributed among the workloads running on the system.

A one-size-fits-all planning strategy cannot adequately accommodate the nuances of workload requirements, and WLM offers the level of granularity you need to make performance tweaks accurately. Different workloads need different resources depending on their impact on users. For example, transactional operations that directly affect user response time require more attention than a backend workload.

WLM can be used to organize workloads and control which applications receive system resources most urgently. By prioritizing mission-critical workloads, mainframe users can better cater to their specific business concerns.

WLM is often overlooked as an optimization tool due to its complexity and inaccessible interface. WLM distributes resources and reports performance metrics in the moment, but it provides no insight into past performance and no recording or cataloging capabilities. For anyone hoping to leverage performance metrics over the long term, WLM on its own might be lacking.

This procedural gap illustrates the need for analytical capability within WLM. An analytics dashboard offers a way to ensure WLM efforts are actually contributing to mainframe optimization: metrics on consumption by importance, performance indexes, transaction rates, and more help add actionable analytics to the otherwise static WLM interface.
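To make that concrete, here is a minimal sketch, in Python, of how an analytics layer might compute a WLM-style performance index per service class, where a value above 1 means a goal is being missed. The service-class names, field layout, and goal values are illustrative assumptions rather than actual WLM output.

```python
# Illustrative sketch: computing WLM-style performance indexes from
# hypothetical, already-collected service-class metrics. The data layout
# is an assumption; real figures would come from SMF/RMF workload records.

service_classes = [
    {"name": "ONLINE_HI", "goal_type": "response_time", "goal": 0.5, "actual": 0.8},
    {"name": "BATCH_LOW", "goal_type": "velocity", "goal": 30, "actual": 45},
]

def performance_index(sc):
    """PI > 1 means the goal is being missed; PI < 1 means it is being exceeded."""
    if sc["goal_type"] == "response_time":
        return sc["actual"] / sc["goal"]   # slower than the goal -> PI > 1
    if sc["goal_type"] == "velocity":
        return sc["goal"] / sc["actual"]   # faster than the goal -> PI < 1
    raise ValueError(f"unknown goal type: {sc['goal_type']}")

for sc in service_classes:
    print(f"{sc['name']}: PI = {performance_index(sc):.2f}")
```

Tracking a number like this over time, alongside consumption by importance level and transaction rates, is what turns WLM tuning from guesswork into something measurable.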

2. Troubleshoot inefficiencies

If, after reviewing WLM rules, your mainframe still experiences performance issues, it stands to reason that a procedural inefficiency exists elsewhere within the system. This can be a frustrating situation — inefficiencies can hide in countless places, and, left unaddressed, can slow mainframe processes. Without the proper diagnostic tools, finding optimization opportunities can be like trying to find a needle in a haystack.

A good place to start is to evaluate your hardware hierarchy and gain a global understanding of all of its pieces. Instead of relying on guesswork, perform a system-wide audit of all assets to discover what might be bogging down mainframe efficiency.

As you might imagine, a universal diagnosis of this nature requires many, many staff hours, which your team might not be able to spare for a diagnostic project. In practice, uncovering system inefficiencies this way can be impractical because it requires an exhaustive, manual review of SMF records.

SMF records offer acute insight into machine activity and can be a treasure trove of optimization data, but only if you know where to look. By default, there is no intuitive way to analyze SMF contents, and, with as many as 255 different SMF record types in play, the sheer volume of information can overwhelm even the most seasoned data scientist.

Mainframe analytics software makes this process more accessible. Analytics software can collect SMF records in real time, store them in a centralized database, and display the information in a user-friendly interface. This enables IT personnel to better synthesize data from SMF records and uncover new z/OS optimization opportunities.
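As a rough illustration of what such tooling does behind the scenes, the sketch below rolls up already-decoded, SMF-style records by record type and sums CPU time. The record layout and field names are invented for the example; real SMF data is binary and would be extracted by dedicated tooling before a step like this.

```python
from collections import defaultdict

# Hypothetical, already-decoded SMF-style records. Real SMF data is binary
# and would be unloaded and parsed by dedicated tooling first.
records = [
    {"type": 30, "jobname": "PAYROLL", "cpu_seconds": 12.4},
    {"type": 30, "jobname": "BILLING", "cpu_seconds": 48.1},
    {"type": 72, "service_class": "ONLINE_HI", "cpu_seconds": 310.7},
]

cpu_by_type = defaultdict(float)
for rec in records:
    cpu_by_type[rec["type"]] += rec["cpu_seconds"]

# A dashboard would chart this; here we just print the roll-up.
for rec_type, cpu in sorted(cpu_by_type.items()):
    print(f"SMF type {rec_type}: {cpu:.1f} CPU seconds")
```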

3. Optimize code

Contrary to the previous points, mainframe issues are not always the fault of the machine. It is not uncommon for bottlenecks to lurk in an application’s code. Hiding in plain sight, an unoptimized line of code can degrade application performance, compounding over time and effectively slowing the entire mainframe system.

Discovering application performance issues is not easy. Reviewing the code of every application is as inefficient as it is tedious. The better approach is to investigate historical data and identify ongoing, recurring inefficiencies. Questions to ask as you review historical data include: Which versions are running? Which applications consume the most resources? Are any applications consuming mainframe resources at an unsustainable rate?

With the help of a mainframe analytics dashboard, mainframe managers can easily answer the above questions and more. A mainframe analytics dashboard gives a big-picture view of z/OS activity and delivers application data through a comprehensible, user-friendly interface.
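For a sense of the kind of check a dashboard automates, this small sketch scans hypothetical month-over-month consumption figures per application and flags anything growing at an unsustainable rate. The applications, numbers, and 15% threshold are assumptions chosen purely for illustration.

```python
# Hypothetical monthly CPU consumption per application (units are illustrative).
history = {
    "CLAIMS_APP": [410, 415, 422, 430],
    "LEGACY_RPT": [120, 180, 260, 390],   # growing fast; worth a code review
}

GROWTH_THRESHOLD = 0.15  # flag >15% average month-over-month growth (arbitrary)

def avg_growth(series):
    rates = [(b - a) / a for a, b in zip(series, series[1:])]
    return sum(rates) / len(rates)

for app, series in history.items():
    growth = avg_growth(series)
    if growth > GROWTH_THRESHOLD:
        print(f"{app}: consumption growing ~{growth:.0%} per month, review its code paths")
```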

4. Be aware of mainframe expenses

Every month IBM sends mainframe users an invoice of expenses, but this document provides no insight into why costs amounted to the total that they did. Receiving a laundry list of expenses does little to inform future optimization initiatives.

This further underscores the importance of identifying mainframe expenses. While this might seem fundamental, after years of mainframe activity it is not uncommon for a system to accumulate outdated software and contracts. No matter how confident you are in your mainframe management, it’s likely that you could benefit from a little housekeeping.

First, consider the utility of software urbanism. Software urbanism is the process of analyzing the value of all mainframe assets — weighing the financial cost of the software against how much ROI it offers the IT team. This process helps to uncover any expired contracts, redundant applications, or outdated software that could be secretly contributing to your monthly bill.
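As one way to picture such an audit, the sketch below weighs the annual cost of a few hypothetical software assets against how often they were actually used, flagging unused items as candidates for contract review. The inventory and the simple cost-per-use metric are illustrative assumptions, not a prescribed methodology.

```python
# Hypothetical inventory: annual license cost vs. recorded usage.
assets = [
    {"name": "SORT_TOOL_A",   "annual_cost": 40_000, "invocations_last_year": 120_000},
    {"name": "REPORTGEN_OLD", "annual_cost": 25_000, "invocations_last_year": 0},
    {"name": "MONITOR_X",     "annual_cost": 60_000, "invocations_last_year": 35},
]

for asset in assets:
    uses = asset["invocations_last_year"]
    if uses == 0:
        print(f"{asset['name']}: unused, candidate for contract review")
    else:
        print(f"{asset['name']}: ~${asset['annual_cost'] / uses:,.2f} per use")
```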

Next, assess Workload License Charges. This is an area that likely costs a great deal, but also one in which mainframe managers can exert the most cost control. These charges are based on the rolling four-hour average (R4HA) of MSU consumption, so it stands to reason that limiting LPAR activity would yield savings. However, standard soft capping strategies such as Defined Capacity or Group Capacity Limit put a strain on performance to accommodate the budget.
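Because the four-hour rolling average drives the bill, it helps to see the arithmetic behind it. The sketch below derives the R4HA from hypothetical five-minute MSU samples (48 samples per four-hour window); in practice these values come from system measurement and sub-capacity reporting, not hand-entered lists.

```python
# Rolling four-hour average (R4HA) of MSU consumption from five-minute samples.
# 4 hours / 5 minutes = 48 samples per window. Sample values are made up.
msu_samples = [420, 455, 480, 510, 495, 470] * 8   # 48 five-minute readings

WINDOW = 48
r4ha = sum(msu_samples[-WINDOW:]) / WINDOW
print(f"Current R4HA: {r4ha:.1f} MSU")
```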

Instead of a static cap, leverage a dynamic soft capping solution that takes LPAR requirements into account as it makes MSU adjustments. This software automatically adjusts defined capacity, circumventing performance bottlenecks while still controlling mainframe expenses.
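A heavily simplified sketch of the idea behind dynamic capping follows: raise an LPAR's defined capacity when demand would otherwise be throttled, lower it when there is slack, and never exceed a budget ceiling. The decision rules and numbers are assumptions for illustration, not any vendor's actual algorithm.

```python
def adjust_defined_capacity(current_cap, r4ha, demand_msu, budget_cap):
    """Toy policy: give headroom to real demand, claw back slack, never exceed budget.

    current_cap - the LPAR's current defined capacity (MSU)
    r4ha        - rolling four-hour average consumption (MSU)
    demand_msu  - short-term demand estimate (MSU)
    budget_cap  - hard ceiling the business is willing to pay for (MSU)
    """
    if demand_msu > current_cap and r4ha < budget_cap:
        # Workload would be capped: raise the limit, but stay within budget.
        return min(demand_msu, budget_cap)
    if demand_msu < current_cap * 0.7:
        # Plenty of slack: lower the cap to keep the R4HA (and the bill) down.
        return max(demand_msu, int(current_cap * 0.8))
    return current_cap

# Example: demand exceeds the current cap, so the cap is raised toward demand.
print(adjust_defined_capacity(current_cap=500, r4ha=430, demand_msu=560, budget_cap=600))
```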

5. Correlate mainframe activity with business services

Above all else, you want to ensure your mainframe efforts actually translate into business ROI. All the IT improvements in the world don’t amount to much if they aren’t aligned with your organization’s goals.

Be sure IT improvements align with company initiatives by logging business services within a business intelligence tool. This software displays all technical components and their statuses in a central interface, so your team can easily identify which workloads are working for your business and which ones are not.
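One simple way to picture this correlation is a mapping from technical workloads to the business services they support, rolled up so cost can be read per service. The workload names, mapping, and figures below are invented for illustration.

```python
from collections import defaultdict

# Hypothetical mapping of technical workloads to the business services they support.
workload_to_service = {
    "CICSPROD": "Online banking",
    "BATCH_EOD": "Overnight settlement",
    "DB2ADHOC": "Ad-hoc reporting",
}

# Hypothetical monthly cost attributed to each workload.
workload_cost = {"CICSPROD": 180_000, "BATCH_EOD": 95_000, "DB2ADHOC": 40_000}

cost_by_service = defaultdict(float)
for workload, cost in workload_cost.items():
    cost_by_service[workload_to_service[workload]] += cost

for service, cost in sorted(cost_by_service.items(), key=lambda kv: -kv[1]):
    print(f"{service}: ${cost:,.0f} per month")
```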

Mainframe optimization helps IT departments do more with less. Leveraging activity insight to improve mainframe performance, streamlining sub-capacity management with automatic soft capping software, and aligning system activity with business goals all help improve z/OS performance without breaking the bank.
