I have an input as:
Col
1
1
2
2
3
I want 3 outputs as:
Output1:
1
1
Output2:
2
2
Output3:
3
i.e. rows with one duplicate value should go to one target, rows with another duplicate value to another target, and so on. Please help.
Answers were sorted based on users' feedback
Answer / Ankit Kansal
Hi, if you know the number of duplicate values coming from your source, it is easy to route the duplicates to the defined targets using a Router transformation.
But if you do not know the number of duplicates, first sort the data on the duplicated column, then use the Transaction Control transformation available in Informatica to create n targets depending on the values encountered.
http://deepinopensource.blogspot.in/
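For the unknown-count case, the split logic can be sketched outside the tool. A minimal Python sketch, assuming a sorted single-column input file; the file names are illustrative:

from itertools import groupby

# Read the sorted single-column input (skip the "Col" header line).
with open("input.txt") as f:
    next(f)
    values = [line.strip() for line in f if line.strip()]

# One output "target" per distinct value, mirroring a Router /
# Transaction Control split when the number of values is unknown.
for i, (key, group) in enumerate(groupby(values), start=1):
    with open(f"output{i}.txt", "w") as out:
        for v in group:
            out.write(v + "\n")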
Answer / Kiran Kumar
Hi,
set hash partitioning on the input of an active stage, then run with a three-node configuration, and you get the output.
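Roughly what the hash partitioning does, as a Python sketch with three partitions; Python's hash() stands in for the engine's partitioner, and note that two distinct values can still hash to the same partition:

rows = ["1", "1", "2", "2", "3"]
partitions = {0: [], 1: [], 2: []}

for value in rows:
    node = hash(value) % 3          # same value -> same node, so duplicates stay together
    partitions[node].append(value)

for node, part in partitions.items():
    print("node", node, ":", part)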
Answer / suneelbabu.etl
Can you mention some more input records as well as the expected output?
Answer / Sher
Something like the below should work.
Job #1
Read the input in a Transformer and add a new column, sequence; sequence starts at 1 and increments whenever the key changes.
So output (A) will look like:
1 1
1 1
2 2
2 2
3 3
4 4
B 5
In the job sequence, use a job activity to read (A): use tail -1 and cut to read the 2nd column. We get 5.
Now, use a Start Loop activity and an End Loop activity in the job sequence to create a loop from 1 to jobactivity.output, incrementing by 1.
The loop will now run 5 times. Inside the loop, call a job and pass the loop value, i.e. 1, 2, 3, 4, 5, on each iteration.
Inside the called job, read the input file (A), pass it to a Transformer, and use a constraint to write only the records where the 2nd column equals the passed variable; then drop column 2.
Output of the Transformer:
1st loop output
1
1
2nd loop output
2
2
and so on...
When writing the file, use the passed variable in the file name so that each file name is distinct.
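The whole three-step design can be prototyped in Python. A minimal sketch, assuming the input is already sorted on the key; the file names are illustrative:

# Job #1: add a "sequence" column that starts at 1 and increments on key change.
with open("input.txt") as f:
    next(f)                                  # skip the "Col" header
    values = [line.strip() for line in f if line.strip()]

ranked, seq, prev = [], 0, None
for v in values:
    if v != prev:                            # key changed: bump the sequence
        seq += 1
        prev = v
    ranked.append((v, seq))

# Job sequence: the loop bound is the last sequence value (the tail -1 / cut step).
max_seq = ranked[-1][1] if ranked else 0

# Looped job: one pass per sequence value; write only matching rows,
# drop column 2, and put the loop variable in the file name.
for i in range(1, max_seq + 1):
    with open(f"output_{i}.txt", "w") as out:
        for v, s in ranked:
            if s == i:
                out.write(v + "\n")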
Answer / Narayan
Just add the Mod function, e.g. Mod(inputcol, 3), and then route each remainder to a different target.
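As a quick Python sketch; note this gives a fixed three targets keyed by remainder, not one target per distinct value, though for the sample input 1, 1, 2, 2, 3 the split happens to match:

rows = [1, 1, 2, 2, 3]
targets = {0: [], 1: [], 2: []}

for value in rows:
    targets[value % 3].append(value)   # Mod(inputcol, 3) routing

print(targets)                         # {0: [3], 1: [1, 1], 2: [2, 2]}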
Answer / Sivakeshava
Hi friends, looking at the source we need 3 targets: the duplicates 1, 1 go to one target, 2, 2 go to a second target, and 3 goes to another target.
SeqFile ----> Copy ----> Lookup (inner join) ----> Filter ----> targets
              Copy ----> Aggregator (count) ----> Filter (where clause) ----> Lookup reference
Copy to the Aggregator, then the Filter feeds the Lookup; this will give the 3 targets.
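The same copy/aggregate/lookup/filter idea as a rough Python sketch; the count from the Aggregator is joined back to each row, and the final split routes each distinct key to its own target:

from collections import Counter

rows = ["1", "1", "2", "2", "3"]

# Aggregator: count occurrences per key.
counts = Counter(rows)                         # {'1': 2, '2': 2, '3': 1}

# Lookup (inner join): attach each row's count, then route per key.
targets = {}
for value in rows:
    targets.setdefault(value, []).append((value, counts[value]))

for i, key in enumerate(sorted(targets), start=1):
    print(f"target {i}:", [v for v, _ in targets[key]])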
Answer / Ankur
That's a good question, but unfortunately I don't know the answer. DataStage gurus, can anyone help this guy and me with this question's answer? Appreciate your feedback, please!
Answer / Vaibhav
Hello Ankit,
I need the answer in DataStage. How could this be implemented in DataStage?
Answer / Vaibhav
Actually, it is like this:
I want as many target links as there are distinct duplicate values in my input file. Please help, guys.