Answer posted by julius caeser
Hi
There are two ways to do this, and both are efficient.
Method 1: Sorter (Distinct).
Send all the data to a Sorter transformation and sort by all the fields you want to deduplicate on. In the Properties tab, enable the Distinct option.
The Sorter will then pass only distinct rows forward.
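For illustration only, here is a minimal Python sketch of what the Sorter's Distinct option effectively does: sort on the chosen keys and forward one row per key combination. The field names (id, name) and sample rows are hypothetical, not from the original answer.

```python
rows = [
    {"id": 1, "name": "a"},
    {"id": 1, "name": "a"},   # duplicate row
    {"id": 2, "name": "b"},
]

keys = ("id", "name")
seen = set()
distinct_rows = []
# Sort on the dedup keys, then forward only the first row seen for each key.
for row in sorted(rows, key=lambda r: tuple(r[k] for k in keys)):
    key = tuple(row[k] for k in keys)
    if key not in seen:
        seen.add(key)
        distinct_rows.append(row)

print(distinct_rows)  # [{'id': 1, 'name': 'a'}, {'id': 2, 'name': 'b'}]
```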
Method 2: Aggregator.
Use an Aggregator transformation and group by the keys/fields you want to deduplicate on. With no aggregate expressions defined, the Aggregator returns one row per group (the last row read), so duplicates are dropped.
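And a comparable sketch of the Aggregator behaviour described above: group by the chosen keys and keep the last row seen per group. Again, the field names and data are only assumptions for the example.

```python
rows = [
    {"id": 1, "name": "a"},
    {"id": 1, "name": "a"},   # duplicate row
    {"id": 2, "name": "b"},
]

group_keys = ("id", "name")
last_per_group = {}
# Later rows overwrite earlier ones, so the last row per group "wins".
for row in rows:
    last_per_group[tuple(row[k] for k in group_keys)] = row

deduped = list(last_per_group.values())
print(deduped)  # [{'id': 1, 'name': 'a'}, {'id': 2, 'name': 'b'}]
```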
1) You receive files from multiple source systems; how do you load them into a mapping, and which transformations do you use? 2) You have files in an FTP location; how do you get them into a mapping using your ETL process?
What are mapplets? How are they different from a reusable transformation?
Briefly define a reusable transformation.
What is the SDLC approach to code development?
What is a degenerate dimension?
What is a surrogate key?
Explain pmcmd command usage in Informatica.
What are the types of caches in lookup?
Describe data concatenation?
What is the commit type if you have a transaction control transformation in the mapping?
Under what condition selecting sorted input in aggregator may fail the session?
If the source has duplicate records in the id and name columns (values: 1 a, 1 b, 1 c, 2 a, 2 b) and the target should be loaded as 1 a+b+c or 1 a||b||c, which transformations should be used?
Consider the following products data, which contains duplicate rows: A, B, C, C, B, D, B. Design a mapping to load all unique products into one table and the duplicate rows into another table. The first target should contain: A, D. The second target should contain: B, B, B, C, C.
What's the layout of parameter file (what does a parameter file contain?)?
What are the differences between joiner transformation and source qualifier transformation?