Wednesday, May 13, 2015

For a Large DB2 for z/OS Table, Should You Go With Partition-by-Range or Partition-by-Growth?

There are several aspects to this question: What do I mean by "large?" Is the table in question new, or does it exist already? What is the nature of the data in the table, and how will that data be accessed and maintained? I'll try to cover these various angles in this blog entry, and I hope that you will find the information provided to be useful.

What is a "large" table?

Why even ask this question? Because the relevance of the partition-by-range (PBR) versus partition-by-growth (PBG) question is largely dependent on table size. If a table is relatively small, the question is probably moot because it is unlikely that range-partitioning a smaller table will deliver much value. Partitioning by growth would, in that case, be the logical choice (for many smaller tables, given the default DSSIZE of 4G, a PBG table space will never grow beyond a single partition).

OK, so what is "smaller" and what is "larger" when you're talking about a DB2 for z/OS table? There is, of course, no hard and fast rule here. In my mind, a larger DB2 for z/OS table is one that has 1 million or more rows. That's not to say that a table with fewer than 1 million rows would never be range-partitioned -- it's just that the benefits of range-partitioning are likely to be more appealing for a table that holds (or will hold) millions of rows (or more).

When the table in question is a new one

This, to me, is the most interesting scenario, because it is the one in which the options are really wide open. I'll start by saying that you definitely want to go with a universal table space here, primarily because a number of recently delivered DB2 features and functions require the use of universal table spaces. But should the table space be PBR or PBG? A partition-by-growth table space can be as large as a partition-by-range table space, so that's not a differentiator. What, then, would be your criteria?

To me, the appeal of a PBG table space is mostly a factor of it being a labor-saving device for DB2 for z/OS DBAs. PBG table spaces have an almost "set it and forget it" quality. There is no need to identify a partitioning key, no need to determine partition limit key values, no worries about one partition getting to be much larger than others in a table space. You just choose reasonable DSSIZE and MAXPARTITIONS values, and you're pretty much done -- you might check back on the table space once in a while, to see if the MAXPARTITIONS value should be bumped up, but that's about it. Pretty sweet deal if you're a DBA.
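
As a concrete (and hypothetical) illustration of that "set it and forget it" quality, a minimal PBG definition might look like the following -- the database, table space, table, and column names are made up for this sketch:

  CREATE TABLESPACE TSPBG01 IN DBTEST01
    MAXPARTITIONS 32
    DSSIZE 4 G   -- values larger than 4G require extended-addressability data sets
    SEGSIZE 64;

  CREATE TABLE BIGTAB1
    (ACCT_NUM  CHAR(10)      NOT NULL,
     TRAN_DATE DATE          NOT NULL,
     TRAN_AMT  DECIMAL(11,2) NOT NULL)
    IN DBTEST01.TSPBG01;

If the MAXPARTITIONS value later turns out to be too small, it can be bumped up with an ALTER TABLESPACE statement.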

On the other hand, PBR can deliver some unique benefits, and these should not be dismissed out of hand. Specifically:
  1. A PBR table space provides maximum partition independence from a utility perspective. You can even run the LOAD utility at the partition level for a PBR table space -- something you can't do with a PBG table space. You can also create data-partitioned secondary indexes (DPSIs) on a PBR table space (not do-able for a PBG table space), and that REALLY maximizes utility-related partition independence (though it should be noted that DPSIs can negatively impact the performance of queries that do not reference a PBR table space's partitioning key).
  2. PBR table spaces enable the use of page-range screening, a technique whereby the DB2 for z/OS optimizer can limit the partitions that have to be scanned to generate a result set when a query has a predicate that references a range-partitioned table space's partitioning key (or at least the lead column or columns thereof). Page-range screening doesn't apply to PBG table spaces, because a particular row in such a table space could be in any of the table space's partitions.
  3. A PBR table space can be a great choice for a table that would be effectively partitioned on a time-period basis. Suppose, for example, that the rows most recently inserted into a table are those most likely to be retrieved from the table. In that case, date-based partitioning (e.g., having each partition hold data for a particular week) would have the effect of concentrating a table's most "popular" rows in the pages of the most current partition(s), thereby reducing GETPAGE activity associated with retrieving sets of these rows (a sketch of such a set-up follows this list). Date-based partitioning also enables very efficient purging of a partition's data (when the purge criterion is age-of-data) via a partition-level LOAD REPLACE operation with a dummy input data set (the partition's data could be first unloaded and archived, if desired).
  4. A PBR table space tends to maximize the effectiveness of parallel processing, whether of the DB2-driven query parallelization variety or in the form of user-managed parallel batch jobs. This optimization of parallel processing can be particularly pronounced for joins of tables that are partitioned on the same key and by the same limit key values.
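
To make the date-based partitioning idea in item 3 a bit more concrete, here is a minimal sketch of a PBR definition -- the names, the weekly limit key values, and the three-partition scheme are made up for the example (a real weekly scheme would, of course, have many more partitions):

  CREATE TABLESPACE TSPBR01 IN DBTEST01
    NUMPARTS 3
    SEGSIZE 64;

  CREATE TABLE ORDER_HIST
    (ORDER_DATE DATE          NOT NULL,
     ACCT_NUM   CHAR(10)      NOT NULL,
     ORDER_AMT  DECIMAL(11,2) NOT NULL)
    IN DBTEST01.TSPBR01
    PARTITION BY RANGE (ORDER_DATE)
      (PARTITION 1 ENDING AT ('2015-01-10'),
       PARTITION 2 ENDING AT ('2015-01-17'),
       PARTITION 3 ENDING AT ('2015-01-24'));

With a set-up like this, an age-based purge of partition 1's data would be a matter of a partition-level LOAD REPLACE with a dummy input data set, as noted in item 3.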
Those are some attractive benefits, I'd say. Still, the previously mentioned DBA labor-saving advantages of PBG table spaces are not unimportant. That being the case, this is my recommendation when it comes to evaluating PBR versus PBG for a large, new table: consider first whether the advantages of PBR, listed above, are of significant value for the table in question. If they are, lean towards the PBR option. If they are not, PBG could be the right choice for the table's table space. In particular, PBG can make sense for a large table for which access will be mostly through transactions (as opposed to batch jobs), especially if those transactions will retrieve small result sets via queries for which most row filtering will occur at the index level. In that case, the advantages of range-partitioning could be of limited value.

When the table space in question is an existing one

Here, the assumption is that the table space is not currently of the universal type. When that is true, and the aim is (as it should be) to convert the table space from non-universal to universal, the PBR-or-PBG decision will usually be pretty straightforward and will be based on the easiest path to universal: you'll go with universal PBR for an existing non-universal range-partitioned table space (if it is a table-controlled, versus an index-controlled, partitioned table space), because that change can be accomplished non-disruptively with an ALTER TABLESPACE (to provide a SEGSIZE for the table space) followed by an online REORG (if you have DB2 10 running in new-function mode, or DB2 11). Similarly, for an existing non-partitioned table space (segmented or simple, as long as it contains only one table), you'll go with universal PBG because that change can be accomplished non-disruptively with an ALTER TABLESPACE (to provide a MAXPARTITIONS value for the table space) followed by an online REORG (again, if your DB2 environment is Version 10 in new-function mode, or DB2 11).
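
In rough SQL terms (the database and table space names are made up, and the SEGSIZE and MAXPARTITIONS values are arbitrary), the two conversion paths look like this:

  -- Table-controlled, range-partitioned table space to universal PBR:
  ALTER TABLESPACE DB1.TS1 SEGSIZE 64;
  -- ...followed by an online REORG of the table space to materialize the change

  -- Single-table segmented (or simple) table space to universal PBG:
  ALTER TABLESPACE DB1.TS2 MAXPARTITIONS 32;
  -- ...followed by an online REORG of the table space to materialize the change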

I recently encountered an exception to this rule: if you have a non-universal, range-partitioned table space, with almost all of the data in the last of the table space's partitions (something that could happen, depending on how partition limit keys were initially set), you might decide not to go for the non-disruptive change to universal PBR, because then you'd have a PBR table space with almost all of the data in the last of the table space's partitions. Yes, with enough ALTER TABLE ALTER PARTITION actions, you could get the table's rows to be spread across many partitions (and with DB2 11, alteration of partition limit key values is a non-disruptive change), but that would involve a lot of work. You might in that case just opt to go to a PBG table space through an unload/drop/re-create/re-load process.
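
For reference, the partition limit key adjustments mentioned above are done partition by partition, along these lines (the table name and limit key value are made up):

  -- Adjust the limit key of partition 3; repeat, with appropriate values, for other partitions
  ALTER TABLE ORDER_HIST ALTER PARTITION 3 ENDING AT ('2015-06-30');
  -- With DB2 11 this is a non-disruptive, pending change, materialized by a subsequent online REORG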

To sum things up: PBR and PBG have their respective advantages and disadvantages. In choosing between these flavors of universal table space, the most important thing is to put some thought into your decision. Give careful consideration to what PBR might deliver for a table, and think also of how useful PBG might be for the same table. If you weigh your options, the decision at which you ultimately arrive will likely be the right one.

Wednesday, April 29, 2015

DB2 for z/OS: Busting a Myth About Dynamic SQL

Twice in the past month, I've encountered a misunderstanding pertaining to dynamic SQL statements issued by applications that access DB2 for z/OS via network connections (these could also be called DDF-using applications, or DRDA requesters). Now seems as good a time as any to clear things up. I'll use this blog entry for that purpose.

The misunderstanding of which I speak: some people are under the impression that dynamic SQL statements issued by DDF-connected applications are zIIP-eligible when executed, while static SQL statements issued by DDF-connected applications are not zIIP-eligible.

This is not true. zIIP eligibility is not a dynamic or static SQL thing. It is a task thing (as described in a blog entry I wrote about 15 months ago). Here, "task" refers to the type of task in a z/OS system under which an SQL statement executes. zIIP-eligible work executes under a type of task called an enclave SRB (also referred to as a preemptible SRB); thus, when a SQL statement -- dynamic or static -- runs under an enclave SRB, it is zIIP-eligible. If it runs under a TCB, it is not zIIP-eligible. When does a SQL statement run under an enclave SRB in a z/OS system? Three scenarios come to mind:

When the SQL statement is issued by a DDF-connected application (i.e., by a DRDA requester). In that case, the SQL statement will run under an enclave SRB (again, that's a preemptible SRB) in the DB2 DDF address space. The statement's execution will be off-loadable -- as much as 60% so -- to a zIIP engine (if significantly less than 60% zIIP offload is seen for SQL statements issued by DDF-connected applications, the cause could well be zIIP engine contention in the LPAR). Note that static versus dynamic is a non-issue here -- either way, the statement runs under an enclave SRB and so is zIIP-eligible.

When the SQL statement is issued by a native SQL procedure that is called by a DDF-connected application. I underlined "native SQL procedure" because a SQL statement (again, static or dynamic) issued by an external stored procedure (such as an external SQL procedure, or a stored procedure written in COBOL or C or Java) will not be zIIP-eligible, regardless of whether it is called by a DDF-connected application or by a local-to-DB2 program (such as a CICS transaction or a batch job). A SQL statement issued by an external DB2 stored procedure will not be zIIP eligible because such a stored procedure always runs under a TCB in a stored procedure address space, no matter what type of application -- local to DB2, or remote -- issues the CALL to invoke the stored procedure. Conversely, a native SQL procedure always runs under the task of the calling application process. If that process is a DDF-connected application, the application's z/OS task will be, as pointed out above, an enclave SRB in the DDF address space. That being the case, a native SQL procedure called by a DDF-connected application will run under the DDF enclave SRB representing that application in the z/OS LPAR, and the SQL statements issued by the native SQL procedure (and all statements of that type of stored procedure are SQL statements -- it's a stored procedure written in SQL) will execute under that enclave SRB and so will be (as previously noted) up to 60% zIIP-eligible.

You might think, "Hey, isn't a Java stored procedure zIIP-eligible?" Yes, the Java part of that stored procedure program will be zIIP eligible, but SQL is not Java, and the SQL statements issued by a Java stored procedure will run under a TCB in a stored procedure address space and so will not be zIIP-eligible (actually, they might be a little zIIP-eligible, because the Java stored procedure might "hold on" to a zIIP processor for just a bit after a SQL statement issued by the stored procedure starts executing).

When the SQL statement is parallelized by DB2. A query can be a candidate for parallelization by DB2 if: it is static and the associated package was bound with DEGREE(ANY); it is dynamic and the value of the CURRENT DEGREE special register is 'ANY'; or there is a row for the query in the SYSIBM.SYSQUERY and SYSIBM.SYSQUERYOPTS catalog tables (introduced with DB2 10 to enable statement-level control over execution behaviors such as parallelization and degree of parallelization, for static and dynamic SQL). If a query is parallelized by DB2, the "pieces" of the split query will run under enclave SRBs and so will be zIIP-eligible (up to 80%).
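
As a quick reference, the first two of those parallelism "on switches" look like this in practice (the collection and package names are made up):

  -- Static SQL: bind (or rebind) the package with DEGREE(ANY)
  REBIND PACKAGE(COLL1.PKG1) DEGREE(ANY)

  -- Dynamic SQL: set the special register before issuing the query
  SET CURRENT DEGREE = 'ANY';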

And one more thing... Since DB2 10 for z/OS, prefetch read operations have been 100% zIIP-eligible; thus, even if a query is running under a TCB and is therefore not zIIP-eligible, prefetch reads executed by DB2 on behalf of the query are zIIP-eligible. Prefetch read CPU time, as always, shows up in the DB2 database services address space (DBM1), not in the address space associated with the DB2-accessing application process (e.g., a CICS region, or the DB2 DDF address space, or a batch initiator address space).

So there you have it. To repeat a point made up front: zIIP eligibility of SQL statement execution is a task thing, not a dynamic versus static SQL thing. A static SQL statement issued by a DDF-connected application (i.e., by a DRDA requester) will be zIIP-eligible because it will run under an enclave SRB (i.e., a preemptible SRB). A dynamic SQL statement issued by a CICS transaction program will not be zIIP-eligible, because it will execute under a TCB. Clear? I hope so.

Wednesday, April 22, 2015

A DB2 11 for z/OS Temporal Data Enhancement You Might Have Missed

When DB2 10 for z/OS introduced temporal data functionality about five years ago, one of the first use cases that jumped to mind for many people was data-change auditing: a table could be created with (or an existing table altered to add) the "system time" characteristic, and thereafter one would have, in the history table associated with the base table, a record of row changes resulting from UPDATE and DELETE statements targeting the base table.

That's nice, but suppose you want to see more than WHAT a row looked like before it was changed. Suppose you also want to see WHO changed a row, and by what means (i.e., INSERT, UPDATE, or DELETE). I was recently contacted by a programmer, working on a new application in a DB2 10 for z/OS environment, who wanted to do just that. He and his team had created a table with the system time property, and in this table they had two columns to capture the identity of a data-changer and the nature of the change: one column to record the ID of the user who added a row to the table, and another column to record the ID of any user who subsequently updated the row. The rub, as this programmer saw it, concerned delete activity. How could he capture the ID of the user who deleted a row in the table? The delete operation would cause DB2 (by way of its system time temporal capability) to move a copy of the deleted row to the base table's history table, but that "pre-delete" image of the row would contain no information about the ID of the user associated with the delete operation. The programmer thought about updating a row before deleting it, just to capture (via the UPDATE) the ID of the user that would subsequently drive the row-delete action. That didn't seem like a desirable solution to the developer, but what else could he do? On top of this problem, there was the matter of not being able to easily determine whether a DELETE or an UPDATE caused a "before" image of a row to be placed in the history table. Not a good situation.

I'll tell you, I like to give people good news, and I had good news for this guy. The good news, I told him, was that his organization was about to migrate their DB2 for z/OS subsystems to DB2 11, and new functionality in that release would address his "who did what?" requirements while also allowing his team to simplify their application code.

I'm actually talking here about capabilities added to DB2 11 after its general availability, by way of several APARs and their respective PTFs. Key among these APARs is PM99683 (the text of this APAR references the related APARs that, together with PM99683, provide the new functionality I'm about to describe). The first goody here is a new type of generated column specification, GENERATED ALWAYS AS (CURRENT SQLID). That enables code simplification: there's no need to programmatically place the ID of a data-changer in a column of a row -- DB2 11 will do it for you (and note that CURRENT SQLID is one of several special registers that can now be used with GENERATED ALWAYS -- you can read more about this in the section of the DB2 11 SQL Reference that covers CREATE TABLE).

There's more: you can also have in a table a column that is GENERATED ALWAYS AS (DATA CHANGE OPERATION). What's that? It's just a 1-character indication of the nature of a data change operation: I for INSERT, U for UPDATE, D for DELETE. Isn't that cool?

I'm still not done. In addition to the new GENERATED ALWAYS AS (CURRENT SQLID) and GENERATED ALWAYS AS (DATA CHANGE OPERATION) options of CREATE TABLE (and ALTER TABLE), there is a very handy clause that can now be added to the ALTER TABLE statement used to "turn on" versioning (i.e., system time) for a table: ON DELETE ADD EXTRA ROW. When system time activation for a table includes this clause, DB2 will add an extra row to the base table's history table when a row is deleted. That is to say, you'll get (as usual) the "pre-delete" image of the row (with the "row end" timestamp showing when the row was made non-current by the DELETE), and you'll ALSO get ANOTHER version of the row added to the history table -- this one with a 'D' in your GENERATED ALWAYS AS (DATA CHANGE OPERATION) column, and the ID of the deleting user in your GENERATED ALWAYS AS (CURRENT SQLID) column.
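
Pulling these pieces together, here is a minimal sketch of the set-up -- the table and column names are made up, and the full data type and other requirements for the generated columns are spelled out in the APAR text and the DB2 11 SQL Reference:

  CREATE TABLE ACCT_INFO
    (ACCT_NUM    CHAR(10)      NOT NULL,
     ACCT_BAL    DECIMAL(11,2) NOT NULL,
     CHANGED_BY  VARCHAR(128)  GENERATED ALWAYS AS (CURRENT SQLID),
     CHANGE_TYPE CHAR(1)       GENERATED ALWAYS AS (DATA CHANGE OPERATION),
     SYS_START   TIMESTAMP(12) NOT NULL GENERATED ALWAYS AS ROW BEGIN,
     SYS_END     TIMESTAMP(12) NOT NULL GENERATED ALWAYS AS ROW END,
     TRANS_ID    TIMESTAMP(12) GENERATED ALWAYS AS TRANSACTION START ID,
     PERIOD SYSTEM_TIME (SYS_START, SYS_END));

  -- The history table mirrors the base table's columns, without the generated attributes
  CREATE TABLE ACCT_INFO_HIST
    (ACCT_NUM    CHAR(10)      NOT NULL,
     ACCT_BAL    DECIMAL(11,2) NOT NULL,
     CHANGED_BY  VARCHAR(128),
     CHANGE_TYPE CHAR(1),
     SYS_START   TIMESTAMP(12) NOT NULL,
     SYS_END     TIMESTAMP(12) NOT NULL,
     TRANS_ID    TIMESTAMP(12));

  -- Turn on system time versioning, with the extra row on DELETE
  ALTER TABLE ACCT_INFO
    ADD VERSIONING USE HISTORY TABLE ACCT_INFO_HIST
    ON DELETE ADD EXTRA ROW;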

A little more information about this "extra row" that's added to the history table for a base table DELETE when ON DELETE ADD EXTRA ROW is in effect: first, the "row begin" and "row end" timestamps in the extra row are the same, and are equal to the "row end" value in the "as usual" row placed in the history table as a result of the DELETE (by "as usual" I mean the "before-change" row image that's always been placed in a history table when a base table row is deleted). Second, "extra rows" in the history table resulting from base table DELETEs with ON DELETE ADD EXTRA ROW in effect are NOT part of a base table query result set when that query has a FOR SYSTEM_TIME period specification, no matter what that specification is. If you want to see the extra rows added to a history table by way of ON DELETE ADD EXTRA ROW functionality, you'll need to query the history table explicitly.

The text of APAR PM99683, mentioned a few paragraphs up from here, provides a set of SQL DDL and DML statements that very effectively illustrate the use and effects of the enhancements about which I've written in this blog entry. I encourage you to try these statements (or variations of them) on a DB2 11 test or development system at your site, to see for yourself what the new capabilities can do for you.

Temporal data support was a gem when it was introduced with DB2 10. That gem just got shinier.

Tuesday, March 31, 2015

The DB2-Managed Data Archiving Feature of DB2 11 for z/OS

Over the past year and a half, I have been talking to lots of people about DB2 11 for z/OS. In the course of these discussions and presentations, I've noticed something interesting pertaining to the DB2-managed data archiving capability delivered with this latest release of the DBMS: a) it is one of the more misunderstood features of DB2 11, and b) when people do understand what DB2-managed archiving is about, it becomes one of the DB2 11 features about which they are most enthusiastic.

If you are not real clear as to what DB2-managed data archiving can do for you, I hope that this blog post will be illuminating. I also hope that it will stoke your enthusiasm for the new functionality.

What I most want to make clear about DB2-managed data archiving is this: it makes it easier and simpler to implement a mechanism that some organizations have used for years to improve performance for applications that access certain of their DB2 tables.

Before expanding on that statement, I want to draw a distinction between DB2-managed data archiving, introduced with DB2 11 for z/OS, and system-time temporal data support, which debuted with DB2 10. They are NOT the same thing (in fact, temporal support, in the form of system time and/or business time, and DB2-managed data archiving are mutually exclusive -- you can't use one with the other). A table for which system time has been enabled has an associated history table, while a table for which DB2-managed data archiving has been enabled has an associated archive table. Rows in a history table (when system time temporal support is in effect) are NOT CURRENT -- they are the "before" images of rows that were made NON-CURRENT through delete or update operations. In contrast, rows in an archive table will typically be there not because they are non-current, but because they are unpopular.

OK, "unpopular" is not a technical term, but it serves my purpose here and helps me to build the case for using DB2-managed data archiving. Consider a scenario in which a table is clustered by a non-continuously-ascending key. Given the nature of the clustering key, newly inserted rows will not be concentrated at the "end" of the table (as would be the case if the clustering key were continuously-ascending); rather, they will be placed here and there throughout the table (perhaps to go near rows having the same account number value, for example). Now, suppose further that the rows more recently inserted into the table are the rows most likely to be retrieved by application programs. Over time, either because data is not deleted at all from the table, or because the rate of inserts exceeds the rate of deletes, the more recently inserted rows (which I call "popular" because they are the ones most often sought by application programs) are separated by an ever-increasing number of older, "colder" (i.e., "unpopular") rows. The result? To get the same set of "popular" rows for a query's result set requires more and more DB2 GETPAGEs as time goes by, and that causes in-DB2 CPU times for transactions to climb (as I pointed out in an entry I posted several years ago to the blog I maintained while working as an independent DB2 consultant). The growing numbers of "old and cold" rows in the table, besides pushing "popular" rows further from each other, also cause utilities to consume more CPU and clock time when executed for the associated table space.

As I suggested earlier, some organizations faced with this scenario came up with a mitigating work-around: they created an archive table for the problematic base table, and moved "old and cold" (but still current, and occasionally retrieved) rows from the base to the archive table (and continued that movement as newer rows eventually became unpopular due to age). They also modified code for transactions that needed to retrieve even unpopular rows, so that the programs would issue SELECTs against both the base and archive tables, and merge the result sets with UNION ALL. This archiving technique did serve to make programs accessing only popular rows more CPU-efficient (because those rows were concentrated in the newly-lean base table), but it introduced hassles for both DBAs and developers, and those hassles kept the solution from being more widely implemented.

Enter DB2 11 for z/OS, and the path to this performance-enhancing archive set-up got much smoother. Now, it's this easy:
  1. A DBA creates an archive table that will be associated with a certain base table. Creation of the archive table could be through a CREATE TABLE xxx LIKE yyy statement, but in any case the archive table needs to have the same column layout as the base table (a sketch covering steps 1 through 3 follows this list).
  2. The DBA alters the base table to enable DB2-managed archiving, using the archive table mentioned in step 1, above. This is done via the new (with DB2 11) ENABLE ARCHIVE USE archive-table-name option of the ALTER TABLE statement.
  3. To move "old and cold" rows from the base table to the archive table requires only that the rows be deleted from the base table -- this thanks to a built-in global variable, provided by DB2 11, called SYSIBMADM.MOVE_TO_ARCHIVE. When a program sets the value of this global variable to 'Y' and subsequently deletes a row from an archive-enabled base table, that row will be moved from the base table to its associated archive table. In other words, the "mover" program just has to delete to-be-moved rows from the base table -- it doesn't have to insert a copy of the deleted row into the archive table because DB2 takes care of that when, as mentioned, the global variable SYSIBMADM.MOVE_TO_ARCHIVE is set to 'Y'. If you want the "mover" program to be able to insert rows into the base table and update existing rows in the base table, as well as delete base table rows (which then get moved by DB2 to the archive table), have that program set SYSIBMADM.MOVE_TO_ARCHIVE to 'E' instead of 'Y'. And note that the value of SYSIBMADM.MOVE_TO_ARCHIVE, or of any DB2 global variable, for that matter, has effect for a given thread (i.e., a given session). Some people take the word "global" in "global variable" the wrong way, thinking that it is global in scope, like a ZPARM parameter. Nope. "Global" here means that a global variable is globally available within a DB2 subsystem (i.e., any program can use a given built-in or a user-created global variable). It affects only the session in which it is set.
  4. If a program is ALWAYS to access ONLY data in an archive-enabled base table, and not data in the associated archive table, its package should be bound with the new ARCHIVESENSITIVE bind option set to NO. If a program will always or sometimes access data in both an archive-enabled base table and its associated archive table, its package should be bound with ARCHIVESENSITIVE set to YES. For a program bound with ARCHIVESENSITIVE(YES), the built-in global variable SYSIBMADM.GET_ARCHIVE provides a handy behavior-controlling "switch." Suppose that a bank has a DB2 for z/OS table in which the account activity of the bank's customers is recorded. When a customer logs in to the bank's Web site, a program retrieves and displays for the customer the last three months of activity for his or her account(s). Let's assume that more than 9 times out of 10, a customer does not request additional account activity history data, so it could make good sense to archive-enable the account activity table and have activity data older than three months moved to an associated archive table. An account activity data retrieval program could then be bound with ARCHIVESENSITIVE(YES). When a customer logs in to the bank's Web site, the program sets the SYSIBMADM.GET_ARCHIVE global variable to 'N', and a SELECT is issued to retrieve account activity data from the base table. When the occasional customer actually requests information on account activity beyond the past three months (less than 10% of the time, in this example scenario), the same account activity data retrieval program could set SYSIBMADM.GET_ARCHIVE to 'Y' and issue the same SELECT statement against the account activity base table. Even though the base table contains only the past three months of account activity data, because the program set SYSIBMADM.GET_ARCHIVE to 'Y' DB2 will take care of driving the SELECT against the archive table, too, and combining the results of the queries of the two tables with a UNION ALL.
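
Here is a minimal sketch of steps 1 through 3 -- the table and column names are made up, and the three-month retention criterion is simply borrowed from the bank example in step 4:

  -- Step 1: create the archive table, with the same column layout as the base table
  CREATE TABLE ACCT_ACTIVITY_AR LIKE ACCT_ACTIVITY;

  -- Step 2: enable DB2-managed archiving for the base table
  ALTER TABLE ACCT_ACTIVITY ENABLE ARCHIVE USE ACCT_ACTIVITY_AR;

  -- Step 3: in the "mover" program, a DELETE becomes a move to the archive table
  SET SYSIBMADM.MOVE_TO_ARCHIVE = 'Y';
  DELETE FROM ACCT_ACTIVITY
    WHERE ACTIVITY_DATE < CURRENT DATE - 3 MONTHS;

An ARCHIVESENSITIVE(YES) retrieval program would, in a similar way, set SYSIBMADM.GET_ARCHIVE to 'Y' or 'N' before issuing its SELECT, as described in step 4.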
And that's about the size of it. No great mystery here. This is, as I stated up front, all about making it easier -- for DBAs and for application developers -- to enhance CPU efficiency for retrieval of oft-requested rows from a table, when those "popular" rows are those that have been more recently added to the target table. You could have done this on your own, and a number of organizations did, but now DB2 11 gives you a nice assist. I hope that you will consider how DB2-managed data archiving could be used to advantage in your environment.

Friday, March 20, 2015

DB2 for z/OS: Non-Disruptively Altering a Native SQL Procedure

Not long ago, I was contacted by a DB2 for z/OS DBA who wanted to run a situation by me. On one of his organization's mainframe DB2 systems -- a system with very high transaction volumes and very stringent application availability requirements -- there was a native SQL procedure for which the ASUTIME specification had to be changed. The DBA wanted to see if I could help him to find a non-disruptive way to effect this stored procedure modification.

[Background: a native SQL procedure is a DB2 stored procedure, written in SQL procedure language (SQL PL), for which the associated package is the procedure's only executable (a native SQL procedure does not have an external-to-DB2 load module). ASUTIME is an option that can be included in a CREATE or ALTER statement for a DB2 stored procedure, and it indicates the maximum amount of mainframe processor time, in CPU service units, that can be consumed in one execution of the stored procedure. The default value for ASUTIME is NO LIMIT, and in this case the DBA needed to set an ASUTIME limit for a stored procedure because it was sometimes running too long.]

One stored procedure change mechanism that you really want to avoid, if at all possible, is drop and re-create. At some DB2 for z/OS sites, particularly when use of stored procedures is at an early stage, it is not uncommon for stored procedure changes to be accomplished through a drop and re-create process. That approach, aside from being relatively disruptive, can become unfeasible once an organization starts using nested stored procedures (referring to stored procedures that are called by other stored procedures). Native SQL procedures, in particular, make the drop-and-re-create means of stored procedure modification problematic, because an attempt to drop a stored procedure (whether native or external) will fail if that stored procedure is called by a native SQL procedure.

So, ALTER PROCEDURE is the way to go, versus drop and re-create, unless the change you want to make cannot be accomplished with ALTER (several parameter-related changes come to mind here -- ALTER PROCEDURE can be used to change the names of a native SQL procedure's parameters, but not to change the number of parameters for a stored procedure, or a parameter's usage (e.g., from IN to OUT), or a parameter's data type). You need to keep in mind, however, that changing some options of the current version of a native SQL procedure via ALTER PROCEDURE will cause the packages of programs that call the altered SQL procedure to be invalidated. Other ALTER PROCEDURE changes -- again, when it is the current version of the procedure that is modified -- cause the package of the native SQL procedure itself to be invalidated (and some current-version changes do both: they invalidate the SQL procedure's package and the packages of programs that call the SQL procedure). A table in the DB2 for z/OS SQL Reference shows the package invalidation effects of changing various options of the current version of a native SQL procedure via ALTER PROCEDURE. Here is the URL for information on ALTER PROCEDURE for a native SQL procedure, from the DB2 10 for z/OS Knowledge Center on IBM's Web site:

http://www-01.ibm.com/support/knowledgecenter/SSEPEK_10.0.0/com.ibm.db2z10.doc.sqlref/src/tpc/db2z_sql_alterproceduresqlnative.dita

When you go to the Web page pointed to by the above URL, scroll down to Table 2, and there you will find the information pertaining to the package invalidation effects of ALTER PROCEDURE changes applied to the current version of a native SQL procedure. You will see in that table that changing ASUTIME via ALTER PROCEDURE does not invalidate (i.e., does not drive an implicit rebind or regeneration of) the package of the target native SQL procedure. This change does, however, invalidate the package of any program that calls the altered native SQL procedure. The DBA who brought his situation to my attention could have identified dependent packages (i.e., packages of programs that invoke the native SQL procedure for which the ASUTIME would be changed) via a query of the SYSIBM.SYSPACKDEP table in the DB2 catalog, and then could have issued the ALTER PROCEDURE statement and rebound the affected packages, given a brief window of time during which the related programs could be offline. That approach was not attractive to the DBA, given the fact that scheduled application outages at his site were hard to come by.

Putting our heads together, the DBA and I came up with an alternate, non-disruptive process for effecting the desired change in the definition of the native SQL procedure. This approach relies on two ALTER PROCEDURE statements, not one, to get the job done. The first is an ALTER PROCEDURE with the ADD VERSION clause, along with the new ASUTIME value (and this ALTER PROCEDURE statement would also include the list of parameters for the procedure, any options that will have non-default values, and the body of the procedure). Because the ASUTIME change applies to the new, added version of the SQL procedure, packages associated with callers of the current version of the stored procedure are not invalidated when the ALTER PROCEDURE with ADD VERSION is executed. A second ALTER PROCEDURE statement with the ACTIVATE VERSION clause is issued to make the just-added procedure version (the one with the updated ASUTIME value) the current version, and the change process is complete. Callers of the native SQL procedure are not disrupted by the ALTER PROCEDURE statement with ACTIVATE VERSION, because that statement just causes subsequent CALLs naming the SQL procedure to use the version of the procedure that you previously added with step 1 of this 2-step process.
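
A bare-bones illustration of the two-step process follows -- the procedure name, version ID, parameters, ASUTIME value, and procedure body are all made up for the example:

  -- Step 1: add a new version that carries the ASUTIME change
  ALTER PROCEDURE PRODSCHM.UPDATE_BALANCE
    ADD VERSION V2 (IN P_ACCT CHAR(10), IN P_AMT DECIMAL(11,2))
    ASUTIME LIMIT 500000
    BEGIN
      UPDATE ACCT_INFO
        SET ACCT_BAL = ACCT_BAL + P_AMT
        WHERE ACCT_NUM = P_ACCT;
    END

  -- Step 2: make the just-added version the current version
  ALTER PROCEDURE PRODSCHM.UPDATE_BALANCE ACTIVATE VERSION V2;

(If you issue the first statement through SPUFI or a similar tool, keep in mind that a non-default statement termination character will be needed, because of the semicolon within the procedure body.)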

The DBA tested this 2-step native SQL procedure change process during a period of heavy activity on a DB2 for z/OS system, and as expected, no contention issues were seen (even so, his plan is to use this process during periods when the system is less busy, and that is probably a good idea).

So, if you're willing to issue two ALTER PROCEDURE statements versus one, you can non-disruptively change ASUTIME and most other characteristics of a native SQL procedure. An extra ALTER PROCEDURE statement seems to me to be a small price to pay for enhanced application availability.

Thursday, February 19, 2015

Of DB2 Connect "Gateway" Servers and DB2 for z/OS DSNL030I Authentication Messages

A few days ago I received a note in which a DBA reported strange and inconsistent information in some DSNL030I messages that were generated periodically by a DB2 for z/OS subsystem in his charge. The messages in question were tied to authentication failures associated with DDF-using applications (i.e., applications that access DB2 via network connections). Such a failure could occur if an application used an invalid password on a DB2 for z/OS connection request (and that could be the result of the password for the application's DB2 authorization ID being changed on the z/OS side, with the corresponding change to the new password not being made on the application server side, due to an oversight on someone's part).

When a failure of this nature occurred for a DB2 subsystem at the DBA's company, he would see a DSNL030I message from DB2, with a 00F30085 reason code ("The requester's password could not be verified"). Nothing strange there. What struck the DBA as odd was the information that he would see in the LUWID part of the DSNL030I message text. Now, at first glance the form of the information in the LUWID part of the message text might look weird: a three-part string of seemingly meaningless letters and numbers. In fact, the first of the three parts of that string is a client IP address, the second part is the associated port number, and the third part is just a unique sequence number generated by DB2 Connect (or by the IBM Data Server Driver Package if that DB2 client-side software is being used). OK, but what if the first part of the LUWID string looks like this:

J56F045C

That sure doesn't look like an IP address. And it couldn't be a hex (i.e., hexadecimal) representation of an IP address, could it? Not with a 'J' in the string.

Actually, it is an IP address in hex form, with the twist being that the high-order letter is translated to a number as described in the IBM "Technote" at this URL:

http://www-01.ibm.com/support/docview.wss?uid=swg21055269

As pointed out in the Technote, the string J56F045C in the first part of the LUWID section of a DB2 for z/OS DSNL030I message would resolve to an IP address of 53.111.4.92 (and using the same scheme for substituting a numerical value for the high-order letter in the second part of the DSNL030I LUWID string, G422 would be seen to be a representation of port number 1058).

The DBA who wrote to me wasn't thrown off by the representation of the IP address in the LUWID part of a DSNL030I message, because he'd already seen the aforementioned Technote and therefore knew how to derive the actual IP address. What had the DBA perplexed was the variability he saw in these IP addresses when there should not have been variability. Sometimes, he would see the IP address of a DB2 Connect "gateway" server (more on this in a moment), and sometimes he would see the address of a client application server "upstream" from the DB2 Connect gateway server. On top of that, when the DBA saw an upstream client IP address in the LUWID section of a DSNL030I message, that address was not always consistent with regard to the actual application server for which the authentication failure had occurred. What was going on?

The inconsistent IP address information that the DBA saw in the LUWID part of DB2 DSNL030I messages is related to the fact that client application servers at his site access DB2 for z/OS via DB2 Connect gateway servers, as opposed to going directly to DB2 using the IBM Data Server Driver Package. Here's how the two situations are linked: when a client application requests a connection to a DB2 for z/OS server through a DB2 Connect gateway server, authentication is a very early step in the process of establishing that connection. IF authentication is successful, the DB2 Connect gateway server will send the client's IP address to the DB2 for z/OS subsystem. If authentication is NOT successful then the client address will not be sent by the DB2 Connect gateway server to DB2 for z/OS.

If the DB2 Connect gateway server does not send the upstream client's IP address to DB2 for z/OS when client application authentication is not successful, why did the DBA sometimes see a client IP address in the LUWID part of a DSNL030I authentication failure message? That can happen when the DB2 Connect gateway server connection associated with the client authentication failure is being reused following a previously successful client authentication (keep in mind that the DB2 Connect gateway, by default, keeps a pool of connections to a downstream DB2 subsystem that it reuses for upstream clients -- a feature that boosts efficiency versus having to frequently terminate and then re-establish connections to the DB2 for z/OS host system). In that case -- when an authentication failure occurs using a pooled connection from the DB2 Connect gateway server to DB2 for z/OS that had previously been used for a successful authentication -- you will see in the LUWID part of the DSNL030I message an upstream client IP address.

What client IP address will that be? It will be the address of the last client to successfully authenticate using the DB2 Connect gateway server connection in question. That may OR MAY NOT be the client for which authentication failed. It WILL be the IP address of the client that encountered the authentication failure IF the same client was the last one to successfully authenticate to DB2 for z/OS using the connection. If the last client to successfully authenticate to DB2 using the connection between the DB2 Connect gateway server and the DB2 subsystem is DIFFERENT from the client that encountered the authentication failure, you'll see the IP address of that SUCCESSFULLY authenticated client application in the LUWID part of the DSNL030I authentication failure message.

But sometimes the DB2 DBA saw the IP address of a DB2 Connect gateway server, instead of a client IP address, in the LUWID part of a DSNL030I authentication failure message. Why? That can happen when the first client to use a connection between the DB2 Connect gateway server and the DB2 subsystem gets an authentication failure. In that case, the LUWID part of the DSNL030I message will contain the IP address of the "adjacent" (to DB2 for z/OS) server, and that will be, given a DB2 Connect gateway server set-up, the IP address of a DB2 Connect gateway server.

So, what you know is this: the LUWID part of a DB2 for z/OS DSNL030I authentication failure message will contain an IP address. Depending on the particular circumstances of the authentication failure, the IP address in the LUWID part of the DSNL030I message will be the IP address of the client that encountered the failure, or the IP address of a different client that had previously used the connection (and successfully authenticated to DB2), or the IP address of the DB2 Connect gateway server (if no client had previously used the connection and had successfully authenticated to DB2). The bottom line: you may not see a client IP address in the LUWID part of a DSNL030I message, and even if you do, that client IP address may be different from the address of the client that encountered the authentication failure.

To ensure some consistency in DSNL030I output, the fix for DB2 APAR PM82054 causes DB2 to consistently record the IP address of the "adjacent" server in the THREAD-INFO part of the DSNL030I message when an authentication error occurs. When DB2 Connect is running on a gateway server, that IP address will be the DB2 Connect gateway server's IP address. The information in the LUWID part of the message will not be consistent, and if it does contain a client IP address that address may or may not be that of the client that encountered the authentication failure.

This is another good reason to go to an IBM Data Server Driver Package, direct-to-DB2 connection set-up, versus a DB2 Connect gateway server set-up: if an authentication error occurs, the IP address of the application server on which the Data Server Driver Package is installed will show up -- consistently -- in the THREAD-INFO part of the DSNL030I message, because that server will be the "adjacent server" to the DB2 for z/OS subsystem. Note that entitlement to deploy the IBM Data Server Driver Package is based on DB2 Connect licensing: if you're licensed for the latter, you can deploy the former, and you should deploy the former -- not only for the reason I've just mentioned (having the IP address of the server "adjacent" to DB2 for z/OS be that of an application server versus a DB2 Connect gateway server), but also for a simplified IT infrastructure, better performance (through elimination of a "hop" between application servers and DB2 for z/OS), and easier upgrades to new releases of the DB2 client code (and, speaking of ease, if you license DB2 Connect Unlimited Edition for System z, you can deploy the IBM Data Server Driver on any application server or other client that directly accesses a DB2 for z/OS system, without having to have a license file on each of those client servers -- the client licenses are managed on the DB2 for z/OS host system). On top of that, going from a DB2 Connect gateway server configuration to the IBM Data Server Driver Package direct-to-DB2 configuration typically involves little to nothing in the way of application code changes -- it should just be a matter of updating the client's connection string for the target DB2 for z/OS server. In (usually) rare cases, there could be an application dependency on DB2 Connect, such as when an application needs two-phase commit capability AND the client transaction manager uses a dual-transport processing model (IBM's WebSphere Application Server uses a single-transport processing model).


The more you know about the IBM Data Server Driver Package, the better it looks. There was a time when DB2 Connect gateway server configurations made sense, but for most DB2 for z/OS-using organizations that time has passed.

Thursday, February 12, 2015

The New IBM z13 Mainframe: A DB2 for z/OS Perspective

Last month, IBM announced the z13 -- the latest generation of the mainframe computer. The z13 is chock full of great technical features, and there are already plenty of presentations, white papers, and "redbooks" in which you can find a large amount of related information. Going through all that information is an option, but you might be thinking, "I'm a DB2 for z/OS person. What's in the z13 for me?" In this blog entry, I'll give you my take on that question.

My favorite z13 feature is the larger and less expensive memory resource available on the servers. From a technical perspective, I like what I call Big Memory because nothing boosts application performance and CPU efficiency in a DB2 for z/OS environment like expansive real storage (as I pointed out in an entry that I posted to this blog a couple of months ago). A single z13 server can be configured with as much as 10 TB of memory (up from a maximum of 3 TB on a zEC12, the previous top-of-the-line mainframe), and a single z/OS LPAR on a z13 can use up to 4 TB of memory (with z/OS V2.2, or z/OS V2.1 with some PTFs) -- that's up from 1 TB for a z/OS LPAR on a zEC12. How much memory should a z/OS LPAR have? I will tell you that for an LPAR in which a production DB2 for z/OS subsystem is running, I like to see at least 20-40 GB of real storage per engine -- and I mean total engines, zIIP as well as general-purpose (so, for example, if a z/OS LPAR with a production DB2 for z/OS subsystem has eight engines -- four general-purpose and four of the zIIP variety -- then my recommendation would be to configure that LPAR with at least 160-320 GB of memory).

How would I want to exploit that memory for DB2 performance purposes? Well, for starters I'd like to have a big buffer pool configuration -- to the tune of 30-40% of the LPAR's memory resource (so, for example, if I had a z/OS LPAR with 300 GB of memory then I'd want the aggregate size of all the buffer pools allocated for a production DB2 subsystem in that LPAR to be 90-120 GB). I'd also want to have PGFIX(YES) specified for most of these buffer pools -- certainly for the more active pools. I'd also give consideration to specifying PGSTEAL(NONE) for one or more pools, and using those to cache some really performance-critical table spaces and/or indexes in memory in their entirety. [Note: my 30-40% of memory guideline for the size of a production DB2 subsystem's buffer pool configuration assumes one production DB2 subsystem in the LPAR. If there were several production DB2 subsystems in a z/OS LPAR, you would not want each of them to have a buffer pool configuration sized at 30-40% of the LPAR's real storage resource. If there were multiple production DB2 subsystems in an LPAR, I would generally try to keep the combined size of all of the subsystems' buffer pool configurations at not much more than 50% of the LPAR's memory.]
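
In command terms, that kind of buffer pool set-up comes down to -ALTER BUFFERPOOL specifications along these lines (the pool names and VPSIZE values here are just placeholders -- VPSIZE is expressed in buffers, so for a 4K pool, 2,621,440 buffers is about 10 GB):

  -- A large, page-fixed pool for high-I/O objects
  -ALTER BUFFERPOOL(BP10) VPSIZE(2621440) PGFIX(YES)

  -- A pool used to cache selected performance-critical objects in their entirety
  -ALTER BUFFERPOOL(BP11) VPSIZE(524288) PGFIX(YES) PGSTEAL(NONE)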

I would not stop my exploitation of a z13 Big Memory environment with a large buffer pool configuration. In addition to that, I'd look at enlarging a production DB2 subsystem's dynamic statement cache, to drive the cache "hit ratio," ideally, to 95% or more (avoided "full" prepares of dynamic SQL statements can reduce CPU consumption significantly). I'd also go for a larger in-memory sort work area (specifically, I'd consider a value of SRTPOOL in ZPARM of 40-60 MB or more, and a MAXSORT_IN_MEMORY value of 20-30 MB or more). Additionally, I'd make greater use of the RELEASE(DEALLOCATE) package bind option, together with persistent threads (e.g., high-performance DBATs and CICS-DB2 protected entry threads), to improve the CPU efficiency of frequently executed programs. With the kind of LPAR memory I'm talking about, I think that I could do all of these real-storage-leveraging things and still have a demand paging rate of zero (though I wouldn't get too concerned if I had a small but non-zero demand paging rate of maybe 1 or 2 per second).

Now, I mentioned that I like the z13 memory picture from both an expanse and an expense point of view. I just covered the expanse angle. Now a look through the expense lens. The cost of memory on the z13 starts out substantially lower versus the cost of memory on a zEC12 or z196 server (the latter being the mainframe generation that preceded the zEC12). Depending on how much real storage you order for a z13, beyond what you have on a zEC12 or z196 from which you are upgrading to a z13, the memory cost can go lower still -- a LOT lower. Talk to an IBM z Systems sales representative for details about the memory deals available when upgrading your mainframe to a z13. They are impressive, if I do say so myself. Time to load up on gigabytes (or terabytes).

While I am definitely enamored with Big Memory (as DB2 for z/OS people ought to be), that's not all that I like about the z13. I'm also a fan of two enhancements that, in different but complementary ways, enable z13 processors to work smarter for enhanced application efficiency and throughput. One of these processing enhancements is called SMT -- short for simultaneous multi-threading. SMT allows multiple software threads to run on the same processor core at the same time. On a z13, two threads can execute concurrently on one SMT-supporting processor, and that is why the feature is sometimes referred to as SMT2. With SMT in effect, each of the two threads using one core will run more slowly than would be the case for a single-thread core, but throughput is boosted. My colleague Jim Elliott likes to use the example of a two-lane road with a 45 miles-per-hour speed limit versus a single-lane road with a speed limit of 60 miles per hour (and don't read anything into these speed limit numbers -- the only point is that one is lower than the other). Cars go faster on the one-lane road, but considerably more cars per unit of time travel a given distance using the two-lane road. In other words, the two-lane road with the lower speed limit has a greater carrying capacity than the one-lane road with the higher speed limit. Similarly, an SMT-exploiting processor should provide greater transactional throughput than it would if it were running in the traditional single-thread mode.

SMT is not applicable to all z13 engines. Specifically, the z13 processors that support SMT are zIIP engines and IFLs (the latter being an acronym for Integrated Facility for Linux -- processors dedicated to Linux workloads on z Systems servers, running either in native, "on the metal" LPARs or as virtual Linux systems in z/VM LPARs). That these engines support SMT is good from a DB2 for z/OS standpoint. zIIP engine utilization is often driven to a large extent by DB2 for z/OS DDF workloads (i.e., client-server applications that access DB2 data over network connections). DB2 for z/OS also utilizes zIIP engines for significant portions of utility processing, and (starting with DB2 10) for prefetch read and database write operations. Another reason z13 zIIP support for SMT is good for DB2: Java programs running in z/OS systems use zIIP engines, and z/OS is an increasingly popular environment for Java applications, and those applications very often involve access to data managed by DB2. z13 IFL support for SMT is great performance news for applications that run under Linux on z Systems, and many such applications interact with DB2 in an adjacent z/OS LPAR.

The other "work smarter" z13 processing enhancement that I like a lot is SIMD, or Single Instruction Multiple Data. With SIMD, if the same operation needs to be performed on several data elements, the operation can be performed once for all of the data elements at the same time, versus being performed for first data element, then performed again for the second element, then again for the third, etc. Again, fellow IBMer Jim Elliott provided a nice analogy: suppose you need to get three packages of the same type from point A to point B. It would be more efficient to do that by sending all three packages in one truck, as opposed to sending three trucks carrying one package apiece. Here's the DB2 for z/OS angle: SIMD should boost the performance of two kinds of application in particular: those that are data-intensive (referring to the volume of data operated upon) and those that are compute-intensive (referring to operations performed on data elements). Put those two application characteristics together, and what do you get? Analytics (among other things). In recent years, there has been a marked increase in the use of z Systems servers for analytics applications, driven in part by the steady convergence of transactional and analytics processing. z13 SIMD technology will add fuel to this trend, and it's a trend that most definitely benefits DB2 for z/OS. Software has to be modified to take advantage of SIMD, but that work is already underway. Look for SIMD exploitation by IBM analytics tools in the near future. And, in a z/OS V2.2 system (and V2.1 with some PTFs), SIMD will be exploited by XML System Services (used for things such as schema validation when XML data is stored in DB2 for z/OS tables), by our Java SDKs (good for WebSphere Application Server), and by our COBOL and PL/I compilers.

And that, folks, is the short and sweet summary of what I most like about the z13: Big Memory (like jet fuel for DB2, and priced to sell), SMT (like multi-lane highways), and SIMD (like shipping multiple packages in one truck). There's plenty more z13 technology that is cool in its own right, but as a DB2 for z/OS specialist I'm keeping my "big three" features front-of-mind.