I have a DB2 table which has 1000 rows. After updating the first
110 rows, my job abends. What do I have to do so that, when I
restart the job, it starts updating from the 111th row
(without updating the first 110 rows again)?
Answer Posted / shashi
Instead of committing only at the very end, commit at regular
intervals (increase the commit frequency, say after every 100
records in this situation).
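A minimal COBOL sketch of that commit-interval loop, purely for illustration; the paragraph name UPDATE-ONE-ROW and the working-storage fields are invented for the example, not part of the original answer:

           PERFORM UNTIL WS-END-OF-ROWS = 'Y'
               PERFORM UPDATE-ONE-ROW
               ADD 1 TO WS-COMMIT-COUNT
               *> take a commit after every 100 updated rows,
               *> not just once at end-of-job
               IF WS-COMMIT-COUNT >= 100
                   EXEC SQL COMMIT END-EXEC
                   MOVE ZERO TO WS-COMMIT-COUNT
               END-IF
           END-PERFORM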
Create a separate restart (checkpoint) table, and for every
commit insert one row into it containing the key of the last
record processed and the commit occurrence number. This insert
should happen just before the COMMIT is issued, so the
checkpoint row and the updates are committed in the same unit
of work.
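For illustration, a sketch of that checkpoint insert, assuming a restart table named RESTART_TBL (JOB_NAME, LAST_KEY, COMMIT_NO); the table and the host-variable names are assumptions made up for this example:

           *> record the key of the last processed row just before committing,
           *> so the checkpoint goes into the same unit of work as the updates
           EXEC SQL
               INSERT INTO RESTART_TBL (JOB_NAME, LAST_KEY, COMMIT_NO)
               VALUES ('UPDJOB01', :WS-LAST-KEY, :WS-COMMIT-NO)
           END-EXEC
           EXEC SQL COMMIT END-EXEC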
In the PROCEDURE DIVISION, the first paragraph should read the
latest row of the restart table and skip the records that have
already been processed and committed. After processing all the
records, delete the entries from the restart table and issue
one final commit.
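Again only as a sketch under the same assumptions (the invented RESTART_TBL, a driving table EMP keyed on EMP_NO, and illustrative host variables), the restart paragraph and the final clean-up could look like this:

           *> on (re)start, find the highest key already committed; 0 on a fresh run
           EXEC SQL
               SELECT COALESCE(MAX(LAST_KEY), 0)
                 INTO :WS-RESTART-KEY
                 FROM RESTART_TBL
                WHERE JOB_NAME = 'UPDJOB01'
           END-EXEC

           *> the processing cursor then skips rows that are already done:
           *>   DECLARE C1 CURSOR FOR
           *>     SELECT EMP_NO FROM EMP
           *>      WHERE EMP_NO > :WS-RESTART-KEY
           *>      ORDER BY EMP_NO

           *> after the last row has been processed, clean up and take the final commit
           EXEC SQL DELETE FROM RESTART_TBL WHERE JOB_NAME = 'UPDJOB01' END-EXEC
           EXEC SQL COMMIT END-EXEC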
How can record locking be achieved in those DB2 versions which do not support it?
When is the LIKE statement used?
Why is DB2 called DB2?
What is the difference between cursor and select statement?
Explain correlated sub-queries.
How do I add a column to an existing table in DB2?
What are the prerogatives?
B37 abend during SPUFI?
Is the primary key a clustered index?
What is the difference between the File-AID tool and the File-AID utility?
Is DB2 a mainframe?
What is the difference between Oracle and DB2?
What is the DB2 bind process?
What are the different types of base tables?
What is data manager?