I have a source file which contains duplicate data. My
requirement is that unique data should go to one file and
duplicate data should go to another file. How?
Answer Posted / dilip anand k
It's simple!
All you have to do is link your source to a Sort stage.
Sort the data on the key column(s) and enable the Key Change column.
Key Change column = '1' marks the first record for each key
value (the unique occurrence), while Key Change column = '0'
marks the subsequent records (the duplicates).
Then add a Filter stage and route the data into two
different output links based on the generated Key Change
column (e.g. keyChange = 1 to the unique file, keyChange = 0
to the duplicates file).
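The Sort-stage "key change" technique above can be sketched outside DataStage as well. The following is a minimal Python illustration of the same logic, assuming a sample dataset with a key column named `id` (the column name and data are hypothetical, for demonstration only): sort by the key, flag the first record per key with a key-change value of 1, and route rows to two output lists accordingly.

```python
import csv
import io

# Hypothetical sample input; in practice this would be your source file.
source = """id,name
1,Jack
1,Kara
2,Bob
3,Eve
3,Ann
"""

rows = list(csv.DictReader(io.StringIO(source)))

# Sort stage: order the data by the key column.
rows.sort(key=lambda r: r["id"])

unique_rows, duplicate_rows = [], []
prev_key = object()  # sentinel that never matches a real key
for row in rows:
    # Key Change column: 1 for the first record of each key, 0 for duplicates.
    key_change = 1 if row["id"] != prev_key else 0
    # Filter stage: route each record to one of two outputs.
    (unique_rows if key_change == 1 else duplicate_rows).append(row)
    prev_key = row["id"]

# unique_rows    -> first record per key (keyChange = 1)
# duplicate_rows -> subsequent records per key (keyChange = 0)
```

In this sketch `unique_rows` plays the role of the "unique" output file and `duplicate_rows` the "duplicates" file; writing each list out with `csv.DictWriter` would complete the job.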
What is the use of datastage director?
What can we do with datastage director?
What is the difference between orabulk and bcp stages?
Can you explain kafka connector?
What are the functionalities of link partitioner and link collector?
What are the job parameters?
How to achieve this output? Given two input columns (ID, Name) with the rows (1, Jack) and (1, Kara), the output should have a single column populated with the concatenated values "1,Jack" and "1,Kara".
What are the main features of datastage?
What are the different plug-ins stages used in your projects?
what is the difference between == and eq in UNIX shell scripting?
What is size of a transaction and an array means in a datastage?
How rejected rows are managed in datastage?
What are constraints and derivations?
What is "fatal error/rdbms code 3996" error?
Explain entity, attribute and relationship in datastage?