Wednesday, November 2, 2011

IOD Dispatch (3) - Gleanings from the DB2 for z/OS "Coffee Track"

IBM's 2011 Information on Demand conference took place last week in Las Vegas. I attended a number of the event's technical sessions, and summarized what I heard in some of those sessions in a couple of entries posted to this blog. In this entry, the last of my posts focused on IOD 2011, I'll share some of the more interesting items of information that I picked up in the "coffee track" -- that being the term often used to describe the discussions that take place during coffee breaks and otherwise outside of the formal presentations. I don't know about you, but the "coffee track" has always been for me one of the most valuable aspects of conferences such as IOD. I really enjoy informal conversations with IBM DB2 developers and with professionals in the DB2 user community. Even when these talks are brief, they often help me see a DB2 feature or function in a new light, broadening my understanding of DB2 technology. So, without further ado, here are some of the highlights from my "coffee track" time last week:

A lot of mainframe DB2 people still have a lot to learn about native SQL procedures. On Tuesday, I delivered a presentation on DB2 for z/OS stored procedures. During that hour I spent a lot of time talking about native SQL procedures, and some related questions that I got from a session attendee were illuminating. Native SQL procedures, by the way, were introduced with DB2 9 in new-function mode (I blogged about them a few years ago, when DB2 9 was new and I was working as an independent DB2 consultant). The functionality enables the development of stored procedures that are written entirely in SQL. Unlike external SQL procedures (introduced with DB2 Version 7), which are turned into C language programs that run in a stored procedure address space, native SQL procedures run in the DB2 database services address space and have no associated external-to-DB2 executable. Anyway, the aforementioned session attendee asked me about deployment of native SQL procedures from one DB2 environment to another (e.g., from test to production). My answer didn't quite hit the mark, and the question was asked again in a slightly different form. We went back and forth in this way for a while, until the session attendee finally asked, "With native SQL procedures, there's just one 'piece', right?" Right.

With the phrase "one piece," this person was contrasting native SQL procedures with traditional external stored procedures, which have two "pieces": an external-to-DB2 program written in a language such as COBOL or C or Java, and that program's DB2 package. In that application environment (one with which many mainframe DB2 people are very familiar), stored procedure deployment is mostly about deployment of the external-to-DB2 program, and the DB2 "piece" -- the package -- is dragged along in that process. In the case of COBOL stored procedures, for example, the external-to-DB2 executable exists in the form of a load module. Organizations have procedures, often involving the use of software tools, that are followed to get a load module deployed into a given system, and these procedures ensure that the associated DB2 package is also made available in the target system. But what if the only "piece" of a stored procedure is the DB2 package? If the DB2 package has previously been "along for the ride" as an external-to-DB2 program is deployed in, say, the production system, what's to be done when the package is in the driver's seat, so to speak, and the car (to continue the vehicular analogy) is otherwise empty (i.e., there's not a second "piece" to the stored procedure)? That is indeed something new for DB2 people, and dealing with that situation means learning about the new (with DB2 9) DEPLOY option of the BIND PACKAGE command, and about extensions to the ALTER PROCEDURE statement, such as the ACTIVATE VERSION option (useful for "backing out" a version of a native SQL procedure that is causing errors). So, if you're looking to take advantage of native SQL procedure functionality (and there can be multiple benefits of doing so), get familiar not only with development of these procedures, but with deployment, as well. I have some introductory information in an entry I posted a couple of years ago to the blog I maintained while working as an independent consultant, and you can find more detailed information in the DB2 Command Reference (for BIND PACKAGE) and the DB2 SQL Reference (for ALTER PROCEDURE) -- both the DB2 9 and the DB2 10 manuals are available online at IBM's Web site.

DB2 for z/OS for data warehousing -- when the application "cannot fail." I had a talk with a couple of IOD attendees from a large company. Their organization had recently spent months in developing go-forward plans for a mission-critical business intelligence application. The basic decision was this: leave the application's database on the mainframe DB2 platform, where it had been for some time, or move it to a non-mainframe, non-DB2 system. The company chose to stay with DB2 for z/OS, mainly for two reasons: 1) they are confident that DB2 on System z can scale to handle the big increase in database size and access activity expected over the coming years, and 2) they place a high value on the rock-solid reliability of the mainframe DB2 platform (as one of the people with whom I spoke put it: "The [decision support application] database cannot fail"). In expanding on that second point, these folks went beyond System z's and DB2's well-earned reputation for high availability, and talked up the excellence of their DB2 for z/OS support staff -- systems programmers and DBAs.

Sometimes, we take for granted the deep expertise possessed by a lot of companies' mainframe DB2 teams. DB2 for z/OS has been around for more than 25 years (IBM invented relational database technology), and many DB2 for z/OS professionals have two decades or more of experience in working with the product. They know how to configure DB2 systems and design DB2 databases for high availability, and they know what to do if a data recovery situation arises. On top of that, these teams tend to have processes, developed over a number of years, around things such as change management, capacity planning, and performance monitoring and tuning, that are very robust and very effective. Combine a super-reliable hardware/software platform with a veteran and highly knowledgeable support staff, and you get a data-serving system that organizational leaders trust to deliver information as needed, whenever it's needed.

DB2 10 for z/OS: it's about time. There was a lot of buzz at IOD 2011 about the temporal data support provided by DB2 10 for z/OS, and with good reason: this is functionality that would be a bear to provide via your own application code (something I know from first-hand experience), and even if you went that route you couldn't match the performance of the built-in DB2 capability.

There are two flavors of temporal data support available in a DB2 10 new-function mode environment, and they can be employed separately or together for a given table. System time support has to do with the automatic saving of "before" images of rows targeted by UPDATE and DELETE operations in a history table, along with timestamp information indicating when a row was in the "current" versus the "history" state. Business time support enables, among other things, a "future insert" capability, whereby a row with, for example, next year's price for a product can be added to a table without affecting programs that query "currently valid" rows.
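By way of illustration, here is roughly what the system time set-up looks like in DB2 10. The table and column names are my own invention, not from any of the sessions, so consider this a sketch of the pattern rather than a recipe:

  -- Base table, with generated timestamp columns that define the SYSTEM_TIME period
  CREATE TABLE POLICY
    (POLICY_ID   INTEGER NOT NULL,
     COVERAGE    INTEGER,
     SYS_START   TIMESTAMP(12) NOT NULL GENERATED ALWAYS AS ROW BEGIN,
     SYS_END     TIMESTAMP(12) NOT NULL GENERATED ALWAYS AS ROW END,
     TRANS_START TIMESTAMP(12) GENERATED ALWAYS AS TRANSACTION START ID,
     PERIOD SYSTEM_TIME (SYS_START, SYS_END));

  -- History table with matching columns (no generated attributes), then tie the two together
  CREATE TABLE POLICY_HIST
    (POLICY_ID   INTEGER NOT NULL,
     COVERAGE    INTEGER,
     SYS_START   TIMESTAMP(12) NOT NULL,
     SYS_END     TIMESTAMP(12) NOT NULL,
     TRANS_START TIMESTAMP(12));

  ALTER TABLE POLICY ADD VERSIONING USE HISTORY TABLE POLICY_HIST;

  -- "What did this policy look like in mid-2010?" -- DB2 reads base and history tables as needed
  SELECT COVERAGE
    FROM POLICY FOR SYSTEM_TIME AS OF TIMESTAMP('2010-06-15-00.00.00')
    WHERE POLICY_ID = 1234;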

I appreciated the value of system time temporal data support as soon as I learned of it, having done, in the mid-1990s, some consulting work for an insurance company that had written "row change history" logic into their application code -- this to enable determination of a policy holder's coverage at the time that an automobile accident (for example) occurred. With regard to business time, however, I'll admit that I initially had some trouble envisioning use cases. I heard of an interesting one at the conference last week: a company is utilizing DB2 10 business time to enable analysts to do future forecasting of profit margins. Very cool. I expect that as time goes by, organizations will come up with new and innovative ways to derive value from the system time and business time capabilities of DB2 10.
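To give a feel for how that kind of forward-looking analysis might be enabled, here is a sketch of the business time syntax. The table, columns, and values are hypothetical -- this is not how that company actually built things, just an illustration of the "future insert" idea:

  -- Price table with an application-maintained BUSINESS_TIME period
  CREATE TABLE PROD_PRICE
    (PROD_ID   INTEGER      NOT NULL,
     PRICE     DECIMAL(9,2) NOT NULL,
     BUS_START DATE         NOT NULL,
     BUS_END   DATE         NOT NULL,
     PERIOD BUSINESS_TIME (BUS_START, BUS_END));

  -- "Future insert": put next year's price in the table today...
  INSERT INTO PROD_PRICE VALUES (100, 25.00, '2012-01-01', '9999-12-31');

  -- ...without affecting programs that ask for the currently valid row...
  SELECT PRICE
    FROM PROD_PRICE FOR BUSINESS_TIME AS OF CURRENT DATE
    WHERE PROD_ID = 100;

  -- ...while analysts can ask what the price picture will look like next June
  SELECT PRICE
    FROM PROD_PRICE FOR BUSINESS_TIME AS OF DATE('2012-06-01')
    WHERE PROD_ID = 100;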

I had a great time last week, and I'm already looking forward to IOD 2012 -- hope to see you there.
