I have an input as:
Col
1
1
2
2
3
I want 3 outputs as:
Output1:
1
1
Output2:
2
2
Output3:
3
i.e. rows with the same duplicate value should go to one target, rows with another duplicate value to another target, and so on. Please help.
Answer / ankit kansal
Hi, if you know the number of duplicates coming from your source, then it's easy to route the duplicates to the defined targets using a Router transformation.
But if you do not know the number of duplicates, then you must first sort the data on the duplicate column and then, using the Transaction Control transformation available in Informatica, create n targets depending on the values encountered.
http://deepinopensource.blogspot.in/
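A minimal Python sketch of the sorted-data idea above (not Informatica/DataStage code; the function name and sample values are illustrative): assign a group number that increments whenever the sorted key changes, then route each row to that group's output.

    from collections import OrderedDict

    def route_duplicates(sorted_values):
        # Assign a group number that increments whenever the key changes,
        # then collect rows into one output per group (input assumed sorted).
        outputs = OrderedDict()
        group = 0
        previous = object()  # sentinel that never equals a real input value
        for value in sorted_values:
            if value != previous:
                group += 1
                previous = value
                outputs[group] = []
            outputs[group].append(value)
        return outputs

    for group, rows in route_duplicates([1, 1, 2, 2, 3]).items():
        print("Output%d: %s" % (group, rows))

Running this on the sample input prints Output1: [1, 1], Output2: [2, 2], Output3: [3], one "target" per distinct value, without knowing the group count in advance.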
Answer / kiran kumar
Hi,
apply hash partitioning in an active stage, then use a three-node configuration, and you get the output.
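A minimal Python sketch of that hash-partition idea, with three lists standing in for the three nodes (names and sample data are illustrative). One caveat: equal keys always land on the same partition, but two distinct keys can collide on one partition, so this only matches the requirement when the distinct keys happen to spread across the nodes.

    def hash_partition(values, nodes=3):
        # Route each row to partition hash(key) % nodes; rows with equal
        # keys are guaranteed to end up together on one partition.
        partitions = [[] for _ in range(nodes)]
        for value in values:
            partitions[hash(value) % nodes].append(value)
        return partitions

    print(hash_partition([1, 1, 2, 2, 3]))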
Answer / suneelbabu.etl
Can you mention some more input records as well as the expected output?
Answer / sher
Something like the below should work...
Job #1
Read the input in a Transformer and add a new column, sequence; its value is initially 1 and increments when the key changes.
So output (A) will look like (value, sequence):
1 1
1 1
2 2
2 2
3 3
4 4
B 5
Under the job sequence, use a job activity to read (A) using tail -1 piped to cut to read the 2nd column; we get 5.
Now, use Start Loop and End Loop activity stages in the job sequence to create a loop from 1 to the job activity's output, incrementing by 1.
The loop will now run 5 times. Inside the loop, call a job and pass the counter value, i.e. 1, 2, 3, 4, 5, each time it runs.
Inside the called job, read the input file (A), pass it to a Transformer, and then use a constraint to write only records where the 2nd column equals the variable passed, dropping column 2.
Output of the Transformer:
1st loop output
1
1
2nd loop output
2
2
and so on...
When writing the file, use the passed variable in the file name, so each file name is distinct.
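A minimal Python sketch of this two-job design, with illustrative file and function names: job1_add_sequence plays Job #1's Transformer, last_sequence plays the tail -1 / cut step, and filter_job plays the job called inside the sequence loop.

    def job1_add_sequence(values, out_path="A.txt"):
        # Job #1: write "value sequence" pairs; sequence starts at 1 and
        # increments each time the key changes (input assumed sorted).
        seq, previous = 0, object()
        with open(out_path, "w") as f:
            for value in values:
                if value != previous:
                    seq += 1
                    previous = value
                f.write("%s %d\n" % (value, seq))
        return out_path

    def last_sequence(path):
        # Sequence step: equivalent of tail -1 A.txt | cut -d' ' -f2.
        with open(path) as f:
            last = f.readlines()[-1]
        return int(last.split()[1])

    def filter_job(path, wanted):
        # Called job: keep rows whose 2nd column equals the loop counter,
        # drop that column, and embed the counter in the file name.
        out = "output%d.txt" % wanted
        with open(path) as src, open(out, "w") as dst:
            for line in src:
                value, seq = line.split()
                if int(seq) == wanted:
                    dst.write(value + "\n")
        return out

    a = job1_add_sequence(["1", "1", "2", "2", "3"])
    for i in range(1, last_sequence(a) + 1):  # the Start/End Loop activities
        print(filter_job(a, i))

Each pass of the loop writes one distinct file (output1.txt, output2.txt, ...), which is exactly the "distinct file name per group" trick the answer describes.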
Answer / narayan
Just add a Mod function, like Mod(inputcol, 3), and then load into the different targets.
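A minimal Python sketch of that Mod routing, with illustrative names. Note it only yields clean groups here because the sample values 1, 2, 3 have distinct remainders mod 3; with arbitrary duplicate values, different keys could share a remainder, so it is not a general duplicate-grouping method.

    def route_by_mod(values, n=3):
        # Route each row to target value % n; works for this sample because
        # 1, 2, 3 map to the distinct remainders 1, 2, 0.
        targets = {r: [] for r in range(n)}
        for value in values:
            targets[value % n].append(value)
        return targets

    print(route_by_mod([1, 1, 2, 2, 3]))
    # {0: [3], 1: [1, 1], 2: [2, 2]}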
Answer / sivakeshava
Hi friends, by looking at the source we need 3 targets: the duplicates 1, 1 are one target, 2, 2 are a 2nd target, and 3 is another target.

SeqFile ---> Copy ---> Lookup (inner join) ---> Filter ---> 3 targets
                 \---> Aggregator (count) ---> Filter (where clause) ---> Lookup

i.e. Copy also feeds the Aggregator, the Aggregator's Filter output goes into the Lookup as the reference, and the final Filter then gives the 3 targets.
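A rough Python sketch of this Copy / Aggregator / Lookup / Filter flow, assuming my reading of the diagram above is right (all names and sample data are illustrative): count rows per key, join the count back onto every row, then filter into one target per key.

    from collections import Counter

    def split_by_key(values):
        counts = Counter(values)                    # Aggregator: count per key
        joined = [(v, counts[v]) for v in values]   # Lookup: inner-join counts back
        targets = {}
        for value, _count in joined:                # Filter: one output per key
            targets.setdefault(value, []).append(value)
        return targets

    for key, rows in sorted(split_by_key([1, 1, 2, 2, 3]).items()):
        print("target for %s: %s" % (key, rows))

The Filter's where clause could test either the key value itself or the joined count, depending on how the 3 target links are defined.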
Answer / ankur
That's a good question... but unfortunately I don't know the answer. DataStage gurus, can anyone help this guy and myself with the answer to this question? Appreciate your feedback, please!
Answer / vaibhav
Hello Ankit,
I need the answer in DataStage. How could this be implemented in DataStage?
Answer / vaibhav
Actually it is like this:
I want as many target links as there are distinct duplicated values in my input file. Please help, guys.