

How do we eliminate duplicate records in a flat file without using Sorter and Aggregator?

Answers were Sorted based on User's Feedback



Answer / kiran

We can use a Lookup transformation with a dynamic cache to eliminate duplicates.

Is This Answer Correct ?    11 Yes 0 No

Answer / joe

Option 1: Use a Unix command on the flat file (see the sort/uniq command in priyank's answer below).

Option 2: Use a checksum function in an Expression transformation to generate a unique hexadecimal code for each record, and compare it with the checksum of the previous record, as sketched below.
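A minimal sketch of Option 2 as Expression transformation ports, assuming the MD5() expression function is available, a single input port IN_VALUE, and duplicate records arriving next to each other (all port names are illustrative). Variable ports are evaluated top to bottom, so V_PREV_HASH still holds the previous row's hash when V_IS_DUP is computed:

V_HASH      = MD5(IN_VALUE)                      -- hex checksum of the current record
V_IS_DUP    = IIF(V_HASH = V_PREV_HASH, 1, 0)    -- compare with the previous record's checksum
V_PREV_HASH = V_HASH                             -- carry the checksum forward to the next row
O_IS_DUP    = V_IS_DUP                           -- output port for the downstream Filter

A Filter transformation with the condition O_IS_DUP = 0 then drops the duplicates.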

Is This Answer Correct ?    5 Yes 2 No

Answer / ankur saini

Solution: Sequence Generator --> Rank --> Filter

Add a Sequence Generator to number every row.

Example input:

1 a
1 b
2 a
2 b

After the Sequence Generator:

1 a 1
1 b 2
2 a 3
2 b 4

Then rank it: group by the key port and rank on the Sequence Generator number:

input   seq   rank
1 a     1     1
1 b     2     2
2 a     3     1
2 b     4     2

Add a Filter with the condition rank = 1, so only the first row of each key passes through.

Is This Answer Correct ?    2 Yes 0 No

Answer / harish konda

Sort the data by giving a SQL query in the Source Qualifier transformation.

Then connect it to an Expression transformation and add one more port (say, flag) to generate numbers: when the previous row and the current row have the same key values, increment the number; otherwise set it to 1 (see the sketch below).

Next, connect to a Filter transformation and give the condition flag = 1.

Then route the data to the target.
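A minimal sketch of the flag logic as Expression transformation ports, assuming a single sort key port KEY (the names are illustrative). Variable ports are evaluated top to bottom, so V_PREV_KEY still holds the previous row's key when V_FLAG is computed:

V_FLAG     = IIF(KEY = V_PREV_KEY, V_FLAG + 1, 1)   -- 1 for the first row of a key, counts up on repeats
V_PREV_KEY = KEY                                     -- carry the key forward to the next row
O_FLAG     = V_FLAG                                  -- connect this port to the Filter

The Filter condition O_FLAG = 1 then keeps only the first occurrence of each key.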

Is This Answer Correct ?    2 Yes 1 No

Answer / isha

Select all source rows. The dynamic Lookup transformation builds its cache from the target table.

When the Lookup evaluates a source row that does not exist in the lookup cache, it inserts the row into the cache and assigns the NewLookupRow output port the value 1. When it evaluates a source row that already exists in the cache, it does not insert the row and assigns NewLookupRow the value 0.

The Filter in this mapping checks whether the row is a duplicate by evaluating the NewLookupRow port from the Lookup: if the value is 0, the row is filtered out as a duplicate; otherwise the row is passed on to the target table.
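The mapping flow for this approach, as a sketch (assuming the Lookup is built on the target table with the dynamic cache property enabled):

Source Qualifier --> Lookup (dynamic cache on target) --> Filter (NewLookupRow != 0) --> Target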

Is This Answer Correct ?    1 Yes 0 No

Answer / chandu

We can achieve this by setting the Lookup transformation's "Lookup policy on multiple match" property to Use First Value or Use Last Value.

Is This Answer Correct ?    0 Yes 0 No

Answer / priyank

There are several ways of achieving this. We can do it through an Expression transformation, or with a lookup on the target.

Expression transformation:

Create ports in this order (variable ports are evaluated top to bottom, so Var_PREV_KEY still holds the previous row's key when the comparison runs):

Var_CHK_DUPLICATE = IIF(Key = Var_PREV_KEY, 'DUP', 'NODUP')
Var_PREV_KEY = Key
OUT_DUPLICATE --> Var_CHK_DUPLICATE

A downstream Filter with the condition OUT_DUPLICATE = 'NODUP' then removes the duplicates.

Note: I have taken a scenario where the target table contains only one key. In the case of multiple keys, create a PREV variable port for each key and combine the comparisons in Var_CHK_DUPLICATE with an AND operator. E.g. for two keys:

Var_CHK_DUPLICATE = IIF(Key1 = Var_PREV_KEY1 AND Key2 = Var_PREV_KEY2, 'DUP', 'NODUP')
Var_PREV_KEY1 = Key1
Var_PREV_KEY2 = Key2
OUT_DUPLICATE --> Var_CHK_DUPLICATE

If Informatica is installed on Unix, you can give a Unix command in the pre-session command to remove the duplicates from the file, for example:

sort <file_name> | uniq > <file_name>.new

Hope it helps.

Is This Answer Correct ?    4 Yes 12 No
