Answer Posted / prakash balakrishnan
Deadlock is the condition in which a graph stops processing because of a mutual data dependency between flows.

For example, consider a Concatenate component with three input ports. Say the first port is to receive 20 million records, the second 1,000 records, and the third 500 records.

Even though records arrive on the second and third ports, the Concatenate component will not read them until it has consumed everything on its first port. Meanwhile, the upstream components feeding the second and third ports fill their flow buffers and block, which can in turn stall the component that is supposed to supply the first port. At that point neither side can make progress and the graph stops processing. This condition is called DEADLOCK.
Deadlock is now minimised (not fully prevented) by "automatic flow buffering": when a flow would otherwise block, pending records are spilled to additional buffer space, so the blocked writer can keep sending and processing continues faster. Automatic flow buffering has been available since Co>Operating System version 1.8.
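The scenario above can be sketched with a small single-threaded simulation. This is an illustrative model only, not Ab Initio's actual scheduler: a hypothetical producer writes records round-robin to three bounded flow buffers, while a concatenate-like reader insists on fully draining port 0 before touching ports 1 and 2. With tiny buffers the producer blocks on a full buffer while the reader waits on an empty one (deadlock); with enough buffer space, as automatic flow buffering provides, the run completes.

```python
from collections import deque

def run(capacity, total=9, ports=3):
    """Simulate a round-robin producer feeding `ports` flow buffers of
    size `capacity`, read by a concat-like consumer that drains port 0
    completely, then port 1, then port 2.
    Returns 'done' or 'deadlock'. Illustrative model only."""
    buffers = [deque() for _ in range(ports)]
    per_port = total // ports        # records each port will receive
    sent = 0                         # producer progress
    got = [0] * ports                # records consumed per port
    read_port = 0                    # port the reader is currently draining
    while read_port < ports:
        progress = False
        # Producer step: blocks if the next target buffer is full.
        if sent < total:
            target = sent % ports
            if len(buffers[target]) < capacity:
                buffers[target].append(sent)
                sent += 1
                progress = True
        # Reader step: reads only from its current port.
        if buffers[read_port]:
            buffers[read_port].popleft()
            got[read_port] += 1
            progress = True
        # Advance to the next port once this one is fully drained.
        if got[read_port] == per_port:
            read_port += 1
            progress = True
        if not progress:
            return "deadlock"        # neither side can advance
    return "done"
```

With `capacity=1` the producer stalls trying to write to a full buffer for port 1 while the reader still waits for more records on port 0, so `run(1)` reports a deadlock; with `capacity=3` (enough to hold everything destined for the not-yet-read ports) `run(3)` completes. This mirrors how extra flow buffering lets the blocked writer continue instead of stalling the graph.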
What are the features of ab initio?
Can anyone give me an example of realtime start script in the graph?
What are the different forms of output that can be obtained after processing of data?
What is the function that transfers a string into a decimal?
Explain the methods to improve performance of a graph?
Code check-in and check-out commands in AbInitio
How does the bre work with the co>operating system?
What is the difference between a user sandbox, private sandbox, public sandbox, and common-project sandbox?
Suppose we assign you a new project. What would be your initial point and the key steps that you follow?
Give one reason when you need to consider multiple data processing?
When we load data into an Oracle table from a staging table using exchange partition, I have read that the data doesn't actually move; the command only resets pointers in the data dictionary. If the data doesn't move, how does it get into the main table? In other words, what is the point of the pointer update in the data dictionary?
How co>operating system integrates with legacy codes?
Describe in detail about lookup?
What will be the skew for, input file->partition by key-> partition by round robin->output file