SQL: updating a large number of rows

The goal is to have several separate sessions applying UPDATE statements at once, rather than using the sometimes restrictive PARALLEL DML alternative. Each session drives a bulk-bound update of the form FORALL i IN 1 .. pk_tab.LAST UPDATE test SET fk = fk_tab(i), fill = fill_tab(i) WHERE pk = pk_tab(i); it then accumulates its running total with cnt := cnt + pk_tab.COUNT. In this round, I have removed the Foreign Key used in Round 2 and included a Bitmap index on TEST. PL/SQL solutions seem to incur a penalty when updating bitmap indexed tables.
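For illustration, here is a minimal sketch of the kind of block each session might run over its own slice of the key range. The collection names and the UPDATE statement follow the fragment above; the source table name (test_source), the slice boundary values and the LIMIT of 1000 are assumptions.

DECLARE
    -- Each session works its own slice of the key range (boundary values are placeholders,
    -- e.g. the first quarter of a 100K-row source)
    lo_pk CONSTANT NUMBER := 1;
    hi_pk CONSTANT NUMBER := 25000;
    CURSOR test_cur IS
        SELECT pk, fk, fill
        FROM   test_source                   -- hypothetical table holding the new values
        WHERE  pk BETWEEN lo_pk AND hi_pk;
    TYPE num_tab_t  IS TABLE OF NUMBER;
    TYPE char_tab_t IS TABLE OF VARCHAR2(40);
    pk_tab   num_tab_t;
    fk_tab   num_tab_t;
    fill_tab char_tab_t;
    cnt      PLS_INTEGER := 0;
BEGIN
    OPEN test_cur;
    LOOP
        -- Fetch a batch of rows into the collections (1000 is an assumed batch size)
        FETCH test_cur BULK COLLECT INTO pk_tab, fk_tab, fill_tab LIMIT 1000;
        EXIT WHEN pk_tab.COUNT = 0;
        -- One bulk-bound UPDATE per batch rather than one statement per row
        FORALL i IN 1 .. pk_tab.LAST
            UPDATE test
            SET    fk = fk_tab(i), fill = fill_tab(i)
            WHERE  pk = pk_tab(i);
        cnt := cnt + pk_tab.COUNT;
    END LOOP;
    CLOSE test_cur;
    COMMIT;
END;
/

Run one copy of this block in each session, giving every session a different lo_pk/hi_pk slice, and the updates proceed concurrently without PARALLEL DML.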

It's a bit of a kludge, but we can do this in PL/SQL using a Parallel Enable Table Function. The function runs the same bulk update loop and finishes with CLOSE test_cur; COMMIT; PIPE ROW(cnt); RETURN. Note that it receives its data via a Ref Cursor parameter (a fuller sketch follows below).

[Timing results table: RUN 1 vs RUN 2.]

A single bitmap index has added around 10% to the overall runtime of the PL/SQL solutions, whereas the set-based (SQL-based) solutions run faster than in the B-Tree indexes case (above).
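Here is a sketch of the parallel enabled pipelined function described above, reconstructed around the closing fragment. The function and type names (parallel_update, test_num_arr), the AUTONOMOUS_TRANSACTION pragma, the PARTITION ... BY ANY clause and the source query are assumptions; the Ref Cursor parameter and the piped row count come from the text.

-- Assumed SQL collection type for the piped row counts
CREATE OR REPLACE TYPE test_num_arr AS TABLE OF NUMBER;
/

CREATE OR REPLACE FUNCTION parallel_update (test_cur IN SYS_REFCURSOR)
    RETURN test_num_arr
    PARALLEL_ENABLE (PARTITION test_cur BY ANY)   -- rows may be distributed across slaves any way Oracle likes
    PIPELINED
IS
    PRAGMA AUTONOMOUS_TRANSACTION;                -- DML inside a function queried from SQL needs its own transaction
    TYPE num_tab_t  IS TABLE OF NUMBER;
    TYPE char_tab_t IS TABLE OF VARCHAR2(40);
    pk_tab   num_tab_t;
    fk_tab   num_tab_t;
    fill_tab char_tab_t;
    cnt      PLS_INTEGER := 0;
BEGIN
    LOOP
        FETCH test_cur BULK COLLECT INTO pk_tab, fk_tab, fill_tab LIMIT 1000;
        EXIT WHEN pk_tab.COUNT = 0;
        FORALL i IN 1 .. pk_tab.LAST
            UPDATE test
            SET    fk = fk_tab(i), fill = fill_tab(i)
            WHERE  pk = pk_tab(i);
        cnt := cnt + pk_tab.COUNT;
    END LOOP;
    CLOSE test_cur;
    COMMIT;
    PIPE ROW(cnt);                                -- each parallel slave reports the rows it updated
    RETURN;
END;
/

-- Invoked from plain SQL; the CURSOR expression feeds the Ref Cursor parameter.
-- For real parallelism the source query must itself run in parallel (e.g. via a PARALLEL hint).
SELECT SUM(column_value) AS rows_updated
FROM   TABLE(parallel_update(CURSOR(SELECT pk, fk, fill FROM test_source)));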

I want to test on a level playing field and remove special factors that unfairly favour one method, so there are some rules. The two tables involved are:

TEST (Update Source) - 100K rows              TEST (Update target) - 10M rows
Name                  Type                    Name                  Type
--------------------  ------------            --------------------  ------------
PK                    NUMBER                  PK                    NUMBER
FK                    NUMBER                  FK                    NUMBER
FILL                  VARCHAR2(40)            FILL                  VARCHAR2(40)

Not many people code this way, but there are some Pro*C programmers out there who are used to Explicit Cursor Loops (OPEN, FETCH and CLOSE commands) and translate those techniques directly to PL/SQL; a sketch of that style follows below.
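A minimal sketch of that Explicit Cursor Loop style against the same columns. Because the source and target are both listed as TEST above, test_source is used here to keep the two roles distinct; the cursor query is an assumption.

DECLARE
    CURSOR test_cur IS
        SELECT pk, fk, fill FROM test_source;   -- hypothetical source of the new values
    test_rec test_cur%ROWTYPE;
BEGIN
    OPEN test_cur;
    LOOP
        FETCH test_cur INTO test_rec;           -- one row, and one PL/SQL-to-SQL round trip, at a time
        EXIT WHEN test_cur%NOTFOUND;
        UPDATE test
        SET    fk   = test_rec.fk,
               fill = test_rec.fill
        WHERE  pk   = test_rec.pk;
    END LOOP;
    CLOSE test_cur;
    COMMIT;
END;
/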

Solution #3 - the final one for today. The approach is very similar to the previous one because it still uses paging, but now, instead of relying on the number of scanned records, we use the primary key value of the last record on the previous page. The key values are not sequential and can have gaps, e.g. 25 348 can come right after 25 345.
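A minimal sketch of this keyset approach, assuming a table named records with a numeric primary key id and a page size of 20 (the table, column and bind names and the FETCH FIRST syntax are all assumptions, since the post's schema is not shown):

-- First page
SELECT id, payload
FROM   records
ORDER  BY id
FETCH FIRST 20 ROWS ONLY;

-- Next page: pass in the largest id seen on the previous page as :last_id
SELECT id, payload
FROM   records
WHERE  id > :last_id
ORDER  BY id
FETCH FIRST 20 ROWS ONLY;

Because the predicate is on the key itself, the database can seek straight to :last_id in the primary key index instead of scanning and discarding all earlier rows.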

The solution also works if any records from future pages are deleted - even in that case the query does not skip records. For further learning, I recommend investigating the execution plans of the queries; I suspect that the join type is the part of the plan that made the largest contribution to the 3rd query's better performance.
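Assuming a PostgreSQL-style database (the post does not name one) and the hypothetical records table from above, one way to capture a plan is EXPLAIN ANALYZE:

EXPLAIN ANALYZE
SELECT id, payload
FROM   records
WHERE  id > 25345
ORDER  BY id
FETCH FIRST 20 ROWS ONLY;

The plan nodes to compare between the 2nd and 3rd queries are the scan and join operators, together with their actual row counts and timings.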

Another important thing is that the 2nd query is extremely dependent on the number of pages to scroll. There is more guidance available on understanding query-plan output and on paginating with a primary key (keyset pagination).
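For contrast, here is what an OFFSET-style page query generally looks like (again using the hypothetical records table; the post does not reproduce its 2nd query, so this is the general pattern rather than its exact statement):

-- Page 1250 with a page size of 20: the database still has to produce and
-- discard the first 24 980 ordered rows before returning the 20 requested ones.
SELECT id, payload
FROM   records
ORDER  BY id
OFFSET 24980 ROWS
FETCH NEXT 20 ROWS ONLY;

The work done for the OFFSET grows with the page number, which is why deep pages get slower and slower, while the keyset version above does roughly the same amount of work for every page.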

The interesting thing about the Explicit Cursor Loop method is that it performs a context switch between PL/SQL and SQL for every FETCH, which makes it less efficient than fetching in bulk.

I include it here because it allows us to compare the cost of context-switches to the cost of updates.