One file contains:
col1
100
200
300
400
500
100
300
600
300
From this I want to retrieve only the duplicate values, like this:
tr1
100
100
300
300
300
How is this possible in DataStage? Can anyone please explain clearly?
Answers were Sorted based on User's Feedback
In order to collect the duplicate values:
First, calculate a count output column in an Aggregator stage:
group by the column,
set Aggregation type = Count Rows,
and define the count output column.
Next, use a Filter stage to separate the values with multiple occurrences.
Finally, use a Join stage or Lookup stage to map the two tables back together, with join type INNER.
Then you get the desired output.
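Outside DataStage, the same aggregate/filter/inner-join logic can be sketched in a few lines of Python (a hypothetical stand-in for the stages; the variable names are assumptions, not DataStage terms):

```python
from collections import Counter

# Source column data, as in the question
rows = [100, 200, 300, 400, 500, 100, 300, 600, 300]

# Aggregator stage: group by the column and count rows per value
counts = Counter(rows)

# Filter stage: keep only the values that occur more than once
dup_keys = {k for k, c in counts.items() if c > 1}

# Join stage (INNER): map the duplicate keys back to the source rows
duplicates = [r for r in rows if r in dup_keys]

print(sorted(duplicates))  # [100, 100, 300, 300, 300]
```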
Answer / chandu
Use an Aggregator and calculate the count of the source column; after that, use a Filter or Transformer stage with the condition count > 1. This gives only the duplicate records.
Thanks,
Chandu-9538627859
Answer / prasad
        |--->Agg--->Filter1--->|
        |                      |
file--->cp-------------------->Join--->Filter2--->Target1
                                          |
                                          |
                                       Target2
Agg: use an Aggregator, select Aggregation type = Count Rows, and define the count output column = Count (user defined).
The Agg stage will generate counts like this:
Col1 | Count
-----|------
100  |  2
200  |  1
300  |  3
400  |  1
500  |  1
600  |  1
Filter1: give the condition Count=1 (you get the unique records from Filter1).
Join stage: take a left outer join.
Filter2:
Where = column_name = '' (null): you get the duplicate records.
Target1 o/p:
100
100
300
300
300
Where = column_name <> '' (you get the unique records).
Target2 o/p:
200
400
500
600
Please correct me if I am wrong :)
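This left-outer-join design can be sketched in Python as well (a minimal illustration under the answer's assumptions; `None` stands in for the '' (null) right-side column tested in Filter2):

```python
from collections import Counter

rows = [100, 200, 300, 400, 500, 100, 300, 600, 300]

# Agg + Filter1: keep only the keys whose count is exactly 1 (the unique keys)
counts = Counter(rows)
unique_keys = {k for k, c in counts.items() if c == 1}

# Join (left outer): each source row either matches a unique key or gets null
joined = [(r, r if r in unique_keys else None) for r in rows]

# Filter2: a null right side means the row's key was not unique -> duplicate
target1 = [r for r, matched in joined if matched is None]      # duplicates
target2 = [r for r, matched in joined if matched is not None]  # uniques

print(sorted(target1))  # [100, 100, 300, 300, 300]
print(sorted(target2))  # [200, 400, 500, 600]
```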
Answer / sudheer
            |--->Aggregator--->|
            |                  |
seq. file-->copy-------------->Join--->Filter--->seq. o/p

In the Aggregator: Count Rows.
In the Join: left outer join, with the sequential-file data as the left input.
In the Filter: where condition Cnt > 1.
Job Design:
        |--->Agg--->Filter1--->|
        |                      |
file--->cp-------------------->Join--->Filter2--->target

Agg: use an Aggregator, select Aggregation type = Count Rows, and define the count output column = Count (user defined).
Filter1: give the condition Count<>1.
Join: select a left outer join.
Filter2: give the condition Count<>0.
You will get the right output: all of the duplicate records.
If you want the unique records, give the condition Count=0 instead.
Job Design:
        |--->Agg--->Filter1--->|
        |                      |
file--->cp-------------------->Join--->Filter2--->Target1 (Duplicate)
                                          |
                                          |--->Target2 (Unique)

Agg: use an Aggregator, select Aggregation type = Count Rows, and define the count output column = Cnt (user defined).
Filter1: give the condition Where = Cnt=1.
You will get the unique values: 200, 400, 500, 600.
Join (or Lookup) stage: select a left outer join.
Filter2:
Where = Column_name = '' (duplicate values: 100, 100, 300, 300, 300)
Where = Column_name <> '' (unique values: 200, 400, 500, 600)
You will get the right output, whatever the duplicate records are.
Please correct me if I am wrong...
Answer / pooja
Follow these steps:
1. Sequential File stage: read the input data from the file (input1.txt).
2. Aggregator stage: count the number of rows (say CountRow) for each ID (group = ID).
3. Filter stage: filter the data where CountRow<>1.
4. Perform a join between the output of step 3 and input1.txt.
You will get the result :)
Answer / me
seq. file ---> Copy
From the Copy stage, one link goes to an Aggregator (apply the Count Rows option), then to a Filter (keeping rows where the count-rows output is greater than 1), and is sent as the reference link to the Lookup below.
From the Copy stage, the second link goes to the Lookup.
Apply the filter.
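The copy/aggregator/lookup flow above could be sketched like this in Python (a hedged stand-in; the `reference` dict plays the role of the Lookup stage's reference link built from the Aggregator + Filter):

```python
from collections import Counter

rows = [100, 200, 300, 400, 500, 100, 300, 600, 300]

# Aggregator + Filter on the reference link: keys seen more than once,
# with their counts
reference = {k: c for k, c in Counter(rows).items() if c > 1}

# Lookup stage: probe each source row against the reference link;
# only rows that find a match are duplicates
duplicates = [r for r in rows if r in reference]

print(sorted(duplicates))  # [100, 100, 300, 300, 300]
```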