What is Cleanup Resources and when do you use it?
Use Clean Up Resources when:
1) You want to clean up the DataStage repository.
2) You try to open a job and get an error such as "Job <jobname> is being used by another user", so the job opens in read-only mode: it is protected by a lock, and to edit the job you must first release that lock.
3) A job is hanging or failing and not releasing its locks.
4) You need to stop a job whose status is stuck at "Running".
To do this, go to DataStage Director (not Administrator), select the job you want to access, and choose Clean Up Resources from the Job menu. A window opens with two panes: Processes (top) and Locks (bottom). Here you can view and end job processes and release the associated locks: select the process that holds the lock and click Logout, which unlocks the job. Remember that a job can be edited by only one user at a time.
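The Director steps above also have a command-line counterpart. As a hedged sketch (this assumes IBM DataStage's `dsjob` utility is on the path; the project and job names here are hypothetical placeholders):

```shell
# Check the job's current status (hypothetical project/job names)
dsjob -jobinfo myproject myjob

# Ask the engine to stop a job stuck in "Running"
dsjob -stop myproject myjob
```

If the job still does not release its locks after `dsjob -stop`, the Clean Up Resources window in Director remains the way to end the processes and log out the locks.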
Answer / santhu
Clean Up Resources is also needed when we want to kill a job's underlying UNIX process, because killing any process requires its process ID, and this window shows the PIDs of the job's processes. The feature is enabled through an option in Administrator (check the job administration option there), after which Clean Up Resources appears in Director.
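The point about needing a process ID can be illustrated with plain UNIX commands. This is a minimal sketch using a background `sleep` as a stand-in for a hung job process; in practice the PID would come from the Clean Up Resources window or from `ps`:

```shell
sleep 300 &               # stand-in for a hung job process
pid=$!                    # capture its process ID
kill "$pid"               # send SIGTERM to that PID
wait "$pid" 2>/dev/null   # reap the terminated process
# kill -0 only tests whether the PID is still alive
if kill -0 "$pid" 2>/dev/null; then
    echo "still running"
else
    echo "terminated"
fi
```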
Related questions:
Which memory is used by the Lookup and Join stages?
How do you sort two columns in a single job in DataStage?
Notification Activity
A sequential file has one record; I want 100 records in the target. How can we do that? Please explain which stages are needed and what the logic is.
Differentiate between an operational data store (ODS) and a data warehouse.
How can one source's columns or rows be loaded into two different tables?
What are .ctl (control) files? How does the Data Set stage get better performance from these files?
How can you find out how many nodes a DataStage job is running on?
Hi, this is Madan. In DataStage, one record in a table has Empno 12345678910; I want the target to hold Empno as 1 2 3 4 5 6 7 8 9 10.
A sequential file has 2 header rows and 100 data records. How do you skip the 2 headers and load the 100 records into the target?
How do you read 100 records at a time from the source (a) when the metadata is the same and (b) when the metadata is not the same?
How many nodes are supported by one CPU in parallel jobs?