Q1. Create a job to load all unique products into one table and the duplicate rows into another table. (The source column, inferred from the expected outputs, is A, B, B, B, C, C, D.)
The first table should contain the following output
A
D
The second table should contain the following output
B
B
B
C
C
Q2. Create a job to load each product once into one table and the remaining products which are duplicated into another table.
The first table should contain the following output
A
B
C
D
The second table should contain the following output
B
B
C
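Both splits can be prototyped outside DataStage. Below is a minimal Python sketch, assuming the source column is A, B, B, B, C, C, D (inferred from the expected outputs above): Q1 routes rows by whether their value occurs exactly once, while Q2 routes the first occurrence of each value one way and every remaining occurrence the other.

```python
from collections import Counter

# Assumed source column, reconstructed from the expected outputs.
rows = ["A", "B", "B", "B", "C", "C", "D"]
counts = Counter(rows)

# Q1: values occurring exactly once vs. every duplicated row.
q1_unique = [r for r in rows if counts[r] == 1]   # ['A', 'D']
q1_dups   = [r for r in rows if counts[r] > 1]    # ['B', 'B', 'B', 'C', 'C']

# Q2: first occurrence of each value vs. all remaining occurrences.
seen = set()
q2_first, q2_rest = [], []
for r in rows:
    if r in seen:
        q2_rest.append(r)      # a repeat: goes to the second table
    else:
        seen.add(r)
        q2_first.append(r)     # first time seen: goes to the first table
```

Note the difference between the two questions: Q1's first table excludes B and C entirely (they are not unique), whereas Q2's first table keeps one copy of every value.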
Answer / sudheer
Answer for Q2: use a Sort stage with the Create Key Change Column option enabled. Then use a Filter stage: send rows whose key-change column = 1 (the first row of each key group) to one file, which gives A B C D, and send rows whose key-change column = 0 to another file, which gives B B C.
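The key-change logic described above can be sketched in Python (a simulation under assumed source data, not DataStage code): after sorting, flag each row with 1 when its key differs from the previous row's key, otherwise 0, then split on that flag.

```python
# Simulate Sort stage with a key-change column, then a Filter split.
# Source values are assumed (reconstructed from the question's outputs).
rows = sorted(["A", "B", "B", "B", "C", "C", "D"])

flagged = []
prev = object()  # sentinel that never equals a product value
for r in rows:
    key_change = 1 if r != prev else 0  # 1 = first row of a new key group
    flagged.append((r, key_change))
    prev = r

first_file = [r for r, k in flagged if k == 1]  # ['A', 'B', 'C', 'D']
rest_file  = [r for r, k in flagged if k == 0]  # ['B', 'B', 'C']
```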
Answer / unknown
Q1: first use an Aggregator stage (Count Rows, grouped by product), join the counts back to the source rows, then use a Filter stage to separate rows with a count of 1 from rows with a count greater than 1.
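The aggregate-then-filter approach above can be sketched in Python (assumed source data; `Counter` plays the role of the grouped row count):

```python
from collections import Counter

# Emulate Aggregator (Count Rows per product) followed by a Filter
# that routes count == 1 rows and count > 1 rows to different targets.
rows = ["A", "B", "B", "B", "C", "C", "D"]   # assumed source column
row_count = Counter(rows)                    # {'A': 1, 'B': 3, 'C': 2, 'D': 1}

unique_rows    = [r for r in rows if row_count[r] == 1]  # ['A', 'D']
duplicate_rows = [r for r in rows if row_count[r] > 1]   # ['B', 'B', 'B', 'C', 'C']
```

Joining the counts back to the detail rows (here, the lookup into `row_count`) is what preserves the duplicate multiplicity in the second target; filtering the aggregated output alone would yield only one B and one C.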
Answer / nams
It can be done in UNIX with uniq -u file_name (values occurring exactly once) and uniq -d file_name. Note that uniq only compares adjacent lines, so the file must be sorted first, and that -d prints one copy of each duplicated value (B, C); to emit every duplicated row (B, B, B, C, C) use GNU uniq -D.
Alternatively, the same command can be supplied in DataStage through the Sequential File stage's Filter property.
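For clarity, here is a small Python simulation of the three uniq behaviors on sorted input (assumed source data; this mirrors the commands, it is not DataStage code):

```python
from itertools import groupby

# uniq compares adjacent lines, so input must be sorted first.
lines = sorted(["A", "B", "B", "B", "C", "C", "D"])  # assumed source
groups = [(k, len(list(g))) for k, g in groupby(lines)]

uniq_u = [k for k, n in groups if n == 1]                    # uniq -u -> ['A', 'D']
uniq_d = [k for k, n in groups if n > 1]                     # uniq -d -> ['B', 'C']
uniq_D = [k for k, n in groups for _ in range(n) if n > 1]   # uniq -D -> ['B','B','B','C','C']
```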