Why use PL/SQL?
If your application is mainly OLTP, with short, single-action DML statements issued by the client, there really isn’t a compelling reason to use PL/SQL. Just build a nice, data-layer framework in the language you’ve standardized on for the client side.
But if your application needs:
- to issue several SQL statements in succession for each user or event-triggered action
- to process large amounts of data in batches
- to avoid the latency and overhead of transporting data over the network for processing that could be done in the database
- to comply with an architectural directive to keep all the business or data-layer logic in PL/SQL
then PL/SQL is the ticket!
For those who have used PL/SQL as a core component of their enterprise applications architecture, you no doubt have run into a few frustrating limitations of PL/SQL. We’ll talk about these limitations, how 8i and 9i new features solve every one of them, and dive into some examples that should help solidify the concepts and give you ideas on where you could use the new features in database and application design.
However, I didn’t want to just give you all the answers; that doesn’t tend to stick. Getting your hands dirty and solving a work problem by yourself is what develops real PL/SQL skills and expertise. What’s that old maxim? “If you teach a man to fish…” I didn’t know the exact wording or author of that old proverb, so I looked it up. At first I ran across a few that didn’t quite say what I had in mind:
In that spirit, here is where I learned how to fish. These are the places where you should try to find your answers on your own. I’ve listed them in order of their past usefulness to me.
There are the two books listed at the back of the paper of course, and there are other sites like orafaq.com, experts exchange, etc. There are some excellent sites I use for DBA and performance tuning work, like Tim Gorman and Steve Adams’ sites, but these are what I use for PL/SQL programming.
There are a number of grievances I’ve had with PL/SQL. These include an incomplete UTL_FILE package (also fixed in 9i), the inability to “introspect” to see what your name is and what line you died on, Oracle’s annoying habit of stripping comments and comment headers from triggers and views, and then the list of items in this slide. The first two items on this slide were fixed in 8i. The remainder were fixed in 9.0.1 and 9.2.
Cannot store a list of values related to an individual row.  Alternative: create a separate attributive table to hold the row's list.  These "temporary" child tables clutter the schema, add processing and storage overhead, and make code more complex.
Cannot use a collection in SQL statements.  Alternative: create, populate and join to a temporary table.
Cannot access list values using character-based keys.  Alternative: create another kludge table to store the key/value pairs, or roll your own hash tables using a packaged PL/SQL table of records, hash algorithms and functions. Doable, but not fun.
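For what it’s worth, 9i’s fix for this one is the INDEX BY VARCHAR2 associative array. A minimal sketch (the names here are mine, not from the paper’s examples):

```sql
DECLARE
   -- 9i: associative array keyed by a string; no kludge table required
   TYPE tt_price IS TABLE OF NUMBER INDEX BY VARCHAR2(30);
   t_price  tt_price;
BEGIN
   t_price('books') := 19.95;
   t_price('shoes') := 49.50;
   -- random access straight off the character key
   DBMS_OUTPUT.put_line(t_price('shoes'));
END;
/
```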
Cannot insert or update a table using a record.  Alternative: None. Break out a record into its individual columns for insert/update.  This effectively eliminates the benefits of abstraction and de-coupling that records were meant to provide.
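As a preview of the 9i fix, here is a sketch of record-based DML against a hypothetical emp table (9iR2 syntax):

```sql
DECLARE
   r_emp  emp%ROWTYPE;   -- emp is a hypothetical table
BEGIN
   r_emp.empno := 7999;
   r_emp.ename := 'SMITH';
   -- whole record at once; no column-by-column breakout
   INSERT INTO emp VALUES r_emp;
   UPDATE emp SET ROW = r_emp WHERE empno = r_emp.empno;
END;
/
```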
The NDS implementation didn’t support some of the nifty new features. Since NDS is where the real power is, that was frustrating.
Could not model an array in more than one dimension. I haven’t really had a need for this since my C days, but I’m sure other Oracle programmers have really missed the ability.
Could not do array-like, or bulk SQL operations on collections.  Alternative: None. Limited to one-by-one row processing.  Shifting from the PL/SQL to SQL engine as each row was processed was a real drag on performance.
Cannot use awesome BULK ops to get data directly into collections of record.  Alternative: break every column down into its own collection of scalar. That was really unpleasant working with any more than a three-column table.
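Again as a preview of the fix: in 9iR2 you can BULK COLLECT straight into a collection of records. A sketch against the same hypothetical emp table:

```sql
DECLARE
   TYPE tt_emp IS TABLE OF emp%ROWTYPE INDEX BY BINARY_INTEGER;
   t_emp  tt_emp;
BEGIN
   -- one shift to the SQL engine fills the whole collection of records
   SELECT * BULK COLLECT INTO t_emp
     FROM emp;
   FOR i IN 1 .. t_emp.COUNT LOOP
      DBMS_OUTPUT.put_line(t_emp(i).ename);
   END LOOP;
END;
/
```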
I enjoyed that Vincent Price photo to express my frustration, but I was originally going to go with this little Orangutan baby. But there was something unsettling about the picture. I finally realized…
…that it looked a whole lot like my 8-month old baby girl. Then it wasn’t so funny ;-)
You need to see this briefly so that you can more readily read the code examples, as opposed to scratching your head wondering what on earth I’m trying to do.
Seeing this pseudo-Hungarian notation makes most professionals very glad they don’t work on my team. We’ve found it very helpful to always know exactly what we’re dealing with, rather than having to backtrack and hunt down the original variable declaration to see if it’s a string, a number or a collection. Good variable naming can bypass the need for a strict convention like this, but good variable naming often requires more characters than PL/SQL allows. So we think it’s nice for consistency and brevity, and it allows us to be a little lazy with the naming.
It’s certainly a far sight better than the sorry “p” for parameter, “v” for variable convention (started by unschooled Oracle Corporation programmers). What does that buy you? Almost nothing. In my mind, it’s worse than no standard at all.
A record is much like a row in a relational database table, composed of one or more individual fields, or columns.  The structure of the record’s type can be explicitly defined by the programmer, or implicitly defined by the %ROWTYPE attribute.  %ROWTYPE abstracts the details of the underlying object; as columns are added, deleted and modified, your code doesn’t have to change as often.  Records allow you to assign, access, refer to and pass entire rows and tables as a single unit, as opposed to scores of individual variables or parameters.

We choose PL/SQL when we need to 1) batch up multiple SQL operations, or 2) improve speed by performing logic within the database rather than moving the data over the network.  Since both goals involve operations on rows of data that is or will be stored in relational tables, you should be using records most of the time!  This is a common best practice in PL/SQL circles.  Leaning heavily on records leaves your code simpler, cleaner, easier to read and maintain, and more robust.
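To make that concrete, here is a minimal sketch of both flavors of record (the type and table names are my own):

```sql
DECLARE
   -- explicitly defined by the programmer
   TYPE tr_addr IS RECORD (
      street  VARCHAR2(80),
      city    VARCHAR2(40));
   r_addr  tr_addr;
   -- implicitly defined: shaped by the table, adapts as columns change
   r_emp   emp%ROWTYPE;   -- emp is a hypothetical table
BEGIN
   r_addr.city := 'Denver';
   r_emp.ename := 'JONES';
END;
/
```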
Open first code link. Cover it. Point out definitions and declarations.
A collection is nothing more than a list, an ordered group of elements, all of the same type – much like a single-dimension array in C or a one-column relational table.  Each element in a collection has a unique index, or key, that determines its position in the collection.  If you know the key, you are able to perform random access to the value, which is faster than iteration and even faster than SQL against the same information in an index-organized or cached table. Oracle provides three different collections: associative arrays, nested tables and varrays.  “Associative” because the value is directly tied, or associated, with its numeric or character-based key.  “Nested”, because as a column type, it allows you to embed, or nest, a table within a column of a row.  “Varray”, short for “variable-size array”, is mystifying.  The name and the Oracle docs[1] suggest that varrays are more flexible than the other collection types, when in fact, the opposite is true.  I have yet to find that a varray was better suited to my problem than a nested table.
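Side by side, declarations of the three collection types might look like this (a sketch; all names are mine):

```sql
DECLARE
   -- associative array: sparse, PL/SQL-only, keyed
   TYPE tt_assoc  IS TABLE OF NUMBER INDEX BY BINARY_INTEGER;
   -- nested table: no declared maximum size
   TYPE tt_nested IS TABLE OF NUMBER;
   -- varray: fixed maximum size
   TYPE tt_varray IS VARRAY(10) OF NUMBER;

   t_a  tt_assoc;
   t_n  tt_nested := tt_nested(1, 2, 3);
   t_v  tt_varray := tt_varray(1, 2);
BEGIN
   t_a(100) := 5;   -- any key, no EXTEND needed
   t_n.EXTEND;      -- nested tables and varrays must be extended
   t_n(4) := 4;
END;
/
```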
[1] The PL/SQL User’s Guide is poorly worded.  It says varrays “hold a fixed number of elements (although you can change the number of elements at runtime).”  It also implies varrays are extensible: “To increase the size of a nested table or varray, use EXTEND.”  It also shows a varray with a maxsize of 10, and then proceeds to explain that the upper bound could be extended to “8, 9, 10, and so on.”  Um, no.  You will get ORA-06532 if you try to extend beyond 10, its maximum size.
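If you want to see the error for yourself, a sketch:

```sql
DECLARE
   TYPE tt_nums IS VARRAY(10) OF NUMBER;
   t_nums  tt_nums := tt_nums();   -- empty, maxsize 10
BEGIN
   t_nums.EXTEND(10);   -- fine: right up to the declared limit
   t_nums.EXTEND;       -- ORA-06532: Subscript outside of limit
END;
/
```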
Locally means the collection type is only available inside PL/SQL, and unless it’s declared in a package specification, it’s only available to the current PL/SQL block.  Globally means it is available for use within PL/SQL, SQL, object type definitions, and as a datatype for columns in database tables.  If you want to perform SQL against your collection, it must be globally defined.
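A sketch of both, using names of my own invention:

```sql
-- Local: lives in a package spec, visible only to PL/SQL
CREATE OR REPLACE PACKAGE types_pkg AS
   TYPE tt_ids IS TABLE OF NUMBER INDEX BY BINARY_INTEGER;
END types_pkg;
/
-- Global: a schema-level type, usable in SQL, object types and table columns
CREATE OR REPLACE TYPE nt_ids AS TABLE OF NUMBER;
/
```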
Code 1
Code 2
I did searches and determined that my existing uses of collections fall into one of two categories: 1) where I needed a temporary staging area for PKs or CORs and did not want to create a separate table for temporary data storage. This temporary data would then drive a further process to update records (kicking off a lengthy process based on the PK being fed to it) or delete them. I was able to revisit these recent uses, add bulk operations, and speed them up 60%.
Recall your experience and training in relational data modeling.  One of the goals is to avoid redundancy.  This aids data integrity and simplifies data maintenance and extension.  A good data architect attempts to keep models simple and elegant using a technique called normalization.
TROUBLE_CALL(1) has a repeating attribute.  In first normal form, that attribute becomes a child table whose primary key is a foreign key back to its parent, an attributive relationship.  After first normalization, the model looks like TROUBLE_CALL(2).  It’s clean and it meets first normal form, but it’s a pain.  As a list of dates, the callbacks don’t describe anything useful without joining to the parent.  No other entity depends on them.  I like to call these attributive tables “Clingons”.  It would be nice to stuff them in the parent as shown in TROUBLE_CALL(3), but normalization and most databases won’t allow it.
What if you could virtually include the attributive Clingons in the parent record?  It would avoid excessive joins, eliminate table clutter, and still preserve normalization.  Cascading deletes would be a breeze too: delete the parent row, and Poof! all the child rows are gone too.  Updating would be easier too since you wouldn’t have to traverse several physical tables to update a table at the bottom of a hierarchy.  In OO all this is accomplished using a technique known as object composition.  In the relational world, it’s known as heresy.  Heresy or not, since Oracle 8, you can include collections with the parent row to which they belong.  Tables like TROUBLE_CALL(3) are easy to create, and make a lot of sense for certain scenarios.
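A hedged sketch of what TROUBLE_CALL(3) might look like in DDL (I’m guessing at the columns; only the shape matters):

```sql
-- the callbacks list becomes a globally defined collection type
CREATE TYPE nt_dates AS TABLE OF DATE;
/
CREATE TABLE trouble_call (
   call_id    NUMBER PRIMARY KEY,
   taken_on   DATE,
   callbacks  nt_dates)          -- the Clingon, stuffed into its parent
   NESTED TABLE callbacks STORE AS trouble_call_callbacks;
```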
· In practice, most varrays (per row) total less than 4000 bytes.  This data will be stored as RAW inline, which makes varrays your best choice for speed and efficiency.  If the data surpasses the 4000-byte mark, or if you specify a LOB storage clause for the varray, Oracle will create a LOB segment and move the data out of line.
· Varrays cannot be extended beyond their initially defined maxsize.  You will need to drop and recreate the type and table if you miscalculated the maxsize during modeling.
· Varray columns can't be modified using SQL; you are reduced to using PL/SQL to insert, update or delete varrays.  You can't just replace an element inside the varray either; you must replace the whole array.
· Physically, nested tables are implemented by Oracle as a child table, with a hidden column (nested_table_id) foreign-keyed to a surrogate, hidden column in the parent table (one for every nested table column).  The child table may not be queried or modified independently of the parent.
· There is no inherent order to how data in nested tables is stored or returned, and foreign keys are not supported.  If you want to impose order, you must add an attribute to the nested table for indexing or sorting.
· Unlike regular DML, when unnesting Oracle doesn't allow the subquery to return zero or multiple collections.  The subquery to get the row's nested table must bring back exactly one nested table.
· If you do create columns using nested tables, index the nested_table_id pseudocolumn.  Since the child table created by the server to store your nested table has a tendency to de-cluster the ordered data you've put in there, it is also a good idea to define the nested tables as index-organized (less physical I/O).
· The Oracle docs speak of compressing the leading column in the UK you add to the nested table.  In the docs, they show the nested_table_id as the letters A, B, C, etc.  In reality it is a chock-full RAW(16) column, which can really save some space if compressed.
· So both the parent and child end up with a rather large extra column, even though the parent probably already has a UK/PK.  There is also the overhead of the extra UK on the parent table.
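The storage advice above might look like this in DDL. A sketch with assumed names, using a nested table of objects so the storage table has a real column to key on:

```sql
CREATE TYPE r_callback AS OBJECT (call_dt DATE, note VARCHAR2(100));
/
CREATE TYPE nt_callback AS TABLE OF r_callback;
/
CREATE TABLE trouble_call (
   call_id    NUMBER PRIMARY KEY,
   callbacks  nt_callback)
   NESTED TABLE callbacks STORE AS trouble_call_cb (
      (PRIMARY KEY (nested_table_id, call_dt))
      ORGANIZATION INDEX COMPRESS 1);  -- compress the repeating RAW(16) key
```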
Most of the packages I share with you here are fairly new. You are more than welcome to find errors and let me know about them (preferably with your suggested solution).
If you need fast and random, bi-directional or multiple-pass access over a static set of data, either for lookups or comparisons, you can’t beat a collection.
If yours is a shop that builds multiple applications in a single schema or instance, it is now time to build more specific packages for storing types.  For each subsystem, you will want another package specification dedicated to defining a number of subtypes, record types and collections of record based on the columns and tables in the subsystem.  I think of this as the end of the database design phase.  Now you can move on to the detailed design of your application’s isolation layers, using your packaged types for most of the interfaces.
Of course, as with any advice, your particular environment may mean my advice is worthless. Your mileage may vary.
You should be using records all the time and asking yourself, “Now where can I use simple variables?”, not the other way around.  Collections of records are essential if you want to build PL/SQL isolation layers (presentation, rule, lookup, data access API, etc.)  It’s not a question of when should you use records, it’s a question of how much pain you can endure as you stick with individual fields for everything.
As long as you are writing lower-level PL/SQL routines that will be called only by other PL/SQL routines, you should generally use records as the primary parameters.  This is especially true and beneficial if you are working on the data access layer.  Some of you don’t have to imagine a table with 50, 80, even 100 columns or more.  You live with such modeling horrors every day.  Now imagine an INSERT procedure that uses 50 individual parameters.  Consider the very real possibility that Marketing gets an awful promotional approved, and you’re stuck figuring out how to model and codify that which defies logic.  There’s no budget to refactor your legacy model, so you end up tacking 5 more columns onto the end of that monster table.  Now you’ve got to hunt down all dependents on that table and paste in a bunch more parameters and code. Using a record reduces that entire headache to nothing.  Since the records are tied to the table using %ROWTYPE, the code automatically adjusts.  No parameters to add, delete or change type length.  If you’ve followed best practices, you’ll only have the insert and update statements to modify. If you’re on 9i, you can even avoid that!
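A sketch of that record-parameter insert API, against a hypothetical customer table:

```sql
CREATE OR REPLACE PACKAGE cust_dml AS
   PROCEDURE ins (r_cust IN customer%ROWTYPE);
END cust_dml;
/
CREATE OR REPLACE PACKAGE BODY cust_dml AS
   PROCEDURE ins (r_cust IN customer%ROWTYPE) IS
   BEGIN
      -- 9i record-based insert: columns come and go, this line never changes
      INSERT INTO customer VALUES r_cust;
   END ins;
END cust_dml;
/
```

Tack five more columns onto customer and this package recompiles without a single edit.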
If the callers of your PL/SQL routine are written in other languages, avoid records.  In some, like Perl, you can write anonymous blocks that can take advantage of PL/SQL-only datatypes, but that is the exception.  Java, for example, cannot handle records, period.  The only alternative that comes close is to use object types instead, then convert the object’s attributes to a record once the call has reached the PL/SQL level.  Then there’s the issue of driver support.  Even if your driver supports protocol Z, version X, it doesn’t mean they did it right.  We ended up having to abandon the idea of passing objects or arrays to PL/SQL thanks to our application server’s JDBC driver.
Nested tables
- When you don’t mind that there is no implicit ordering.
- When you need to access or manipulate individual entries in each collection.
- When you need efficient parent/child lookups.
- When you foresee the need to issue SQL statements against a list.
- When you have code that currently maintains temporary tables as staging areas or that does synchronization.
Story about re_rate_err or parent-child in alarm system.
Associative Arrays
For everything else! Bi-directional, random access and multiple scans. Especially good for the random access inherent with hash tables.
Story about resync process!
Varrays
Varrays are useful when you need to maintain the ordering of your collection elements. Varrays are very efficient when you always manipulate the entire collection as a unit and don’t require querying on individual elements of the collection. Since they can be stored inline, there is no transparent joining going on and no possibility of inefficient joins, because there is no child table and no unindexed nested_table_id. So they can be “faster.”
Nesting is all about getting relational or flat data into a collection.
MULTISET is used in nesting to convert relational data into a collection, usually a collection of objects. Since a multiple-row result set is not allowed inside CAST on its own, MULTISET tells Oracle to treat the multiple-row result set as ONE collection.
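A sketch, reusing the trouble-call shapes from earlier (nt_dates as a global TABLE OF DATE; the old_* source tables are my invention):

```sql
-- fold each call's child rows into ONE collection per parent row
INSERT INTO trouble_call (call_id, callbacks)
SELECT tc.call_id,
       CAST(MULTISET(SELECT cb.call_dt
                       FROM old_callback cb
                      WHERE cb.call_id = tc.call_id) AS nt_dates)
  FROM old_trouble_call tc;
```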
Unnesting is all about getting collection data out into a relational or flat form to be viewed or manipulated.
COLUMN_VALUE is the “hidden” name of the column where your values are stored in a scalar collection with no attributes. Collections of objects or records have named attributes for every field, and therefore do not need the COLUMN_VALUE column name to access data.
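For example (nt_ids here is an assumed global TABLE OF NUMBER):

```sql
DECLARE
   t_ids  nt_ids := nt_ids(10, 20, 30);
   n_cnt  PLS_INTEGER;
BEGIN
   -- a scalar collection has no attribute names; COLUMN_VALUE is the handle
   SELECT COUNT(column_value)
     INTO n_cnt
     FROM TABLE(CAST(t_ids AS nt_ids));
END;
/
```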
CAST doc
Systems grow. Business strategy and requirements change course. Marketing invents new promotions. Management requests new metrics. The only real constant is change. Our data models, development processes and testing methods must accommodate rapid change.
Unfortunately, the majority of us do not have such a setup. Especially with regard to the foundation of everything else, the data model. The data model is especially fragile. It needs to be designed correctly and flexibly from the first draft. If it is not, what occurs is something I call “mudballing.” The DBA or developer either cannot or does not dare alter the data model in significant ways. Instead, they just tack on yet another column to an already large, poorly modeled table, thereby adding another layer to their precious mudball they don’t dare peel or crack open. With time, these mudball tables take on a life of their own. Everything depends on them. No one dares remove anything from them “just in case.” Within 10 years, only 10% of the columns will actually be used, but there are so many layers, so many years of bad decisions that you just have to go with the flow or hope the data center burns down so you can start from scratch.
On the RMOUG listserver in January, in fact, a poster mentioned a gigantic mudball table he had inherited that had 350 columns in it.
Mudballing metaphor best represented by the fiddler crab, but I love the image of this little scarab carefully gathering and packing down his favorite stuff into another layer on his prized ball. When I hear of a mudball table or system, where the management isn’t willing to risk doing it right, cleaning up or starting over, but would rather live with all the poor decisions of the past, I like to close my eyes and think of this hard-working little beetle, carefully building and jealously guarding his ball. It makes me chuckle and helps me deal with the pain of working with such awful data structures.
Records are designed to minimize the impacts of change. If you have mudball tables and dung beetle managers, records could be especially beneficial to you.
http://www.quest.com/whitepapers/Using_PLsql_Records.pdf
Let’s look at the code seen in the paper, the rather morbid, whimsical model of soul tracking.
To perform DML on collections, the collection must be unnested with TABLE.
ORA-22905 cannot access rows from a non-nested table item
Use CAST to tell Oracle "again" that it is really what it says it is: a nested table
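Putting TABLE and CAST together, a sketch (the table and type names are my own):

```sql
DECLARE
   t_gone  nt_ids := nt_ids(101, 102, 103);  -- nt_ids: a global TABLE OF NUMBER
BEGIN
   DELETE FROM order_line ol
    WHERE ol.order_id IN
          (SELECT column_value
             FROM TABLE(CAST(t_gone AS nt_ids)));  -- CAST sidesteps ORA-22905
END;
/
```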
The docs say you can use RETURNING with INSERT, but it doesn’t currently work. “You can use the BULK COLLECT clause in the RETURNING INTO clause of an INSERT, UPDATE, or DELETE statement”
Similarly, there is no indication that you can’t have shared types accessed through synonyms. However, if you try it, you will get “ORA-22863: synonym for datatype schema.type not allowed”. Bug/Enhancement 1879471 has been filed, but it is off limits to the public. It is fixed in 9.2.
I was going to provide lots of screenshots showing how TOAD, SQL Navigator, Rapid SQL, SQL-Station and PL/SQL Developer all handle LOBs, objects and collections. But time was running short. I’ll have to leave the comparisons up to you.
If your PL/SQL involves records and collections, you owe it to yourself to find a tool that smoothly handles complex columns, types, attributes, parameters and return types. This is especially true for generating test code, insert/update statements and anonymous blocks. Such features could save you 50-60% of your coding time. If your tool does not do these things, demand enhancements from your tool vendor.
Here are some sample types, objects, tables and test code…
Oracle9i Application Developer's Guide - Object-Relational Features
Chapter 8 Design Considerations for Oracle Objects
Nested Tables
Oracle9i Application Developer's Guide - Object-Relational Features
Chapter 2 Basic Components of Oracle Objects
Collections