The source is a flat file, and we want to load unique records and duplicate records separately into two different targets. How can this be done?
Answers were Sorted based on User's Feedback
Answer / nitin
Create the mapping as below to load unique records and duplicate records into separate targets:
Source -> SQ -> Sorter -> Aggregator -> Router -> Tgt_Unique
                                               -> Tgt_Duplicate
In the Aggregator, group by all ports and define an output port OUTPUT_COUNT = COUNT(*).
In the Router, define two groups, OUTPUT_COUNT > 1 and OUTPUT_COUNT = 1. Connect the output of the OUTPUT_COUNT > 1 group to Tgt_Duplicate and the output of the OUTPUT_COUNT = 1 group to Tgt_Unique.
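A minimal Python sketch of the same group-by-and-count logic (illustrative only; the sample records and variable names are assumptions, not part of the original answer):

from collections import Counter

# Sample flat-file records; in the mapping, every port takes part in the group by.
records = [("A", 10), ("B", 20), ("A", 10), ("C", 30)]

# Aggregator step: group by all ports and compute OUTPUT_COUNT = COUNT(*).
counts = Counter(records)

# Router step: OUTPUT_COUNT = 1 goes to the unique target, OUTPUT_COUNT > 1 to the duplicate target.
tgt_unique = [rec for rec, n in counts.items() if n == 1]
tgt_duplicate = [rec for rec, n in counts.items() if n > 1]

print(tgt_unique)     # [('B', 20), ('C', 30)]
print(tgt_duplicate)  # [('A', 10)]

Note that, as with the Aggregator, each duplicated record appears only once in the duplicate target.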
| Is This Answer Correct ? | 1 Yes | 0 No |
Answer / ankit kansal
Hi,
What I have understood from your problem is that if your source contains 1,2,1,2,3, then only 3 is taken as unique, and 1 and 2 are considered duplicate values.
SRC -> SQ -> Sorter -> Expression (to set flags for duplicates) -> Router -> Joiner -> Expression -> Router -> 2 targets
http://deepinopensource.blogspot.in/
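A rough Python sketch of this interpretation (illustrative only; it shows the intended result, not the sorter/expression/joiner mechanics of the mapping):

from collections import Counter

# Source values as in the example: 1,2,1,2,3
source = [1, 2, 1, 2, 3]

# Count occurrences of each value (the duplicate flag set in the Expression logic).
counts = Counter(source)

# Only values occurring exactly once are routed to the unique target;
# every row of a repeated value goes to the duplicate target.
tgt_unique = [v for v in source if counts[v] == 1]      # [3]
tgt_duplicate = [v for v in source if counts[v] > 1]    # [1, 2, 1, 2]

print(tgt_unique, tgt_duplicate)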
| Is This Answer Correct ? | 1 Yes | 1 No |
Answer / mohank106
Refer to the link below; the answer is explained clearly there:
http://www.bullraider.com/database/informatica/scenario/11-informatica-scenario3
| Is This Answer Correct ? | 0 Yes | 0 No |
Answer / rani
Take a Source Qualifier, then place a Sorter transformation with the Distinct option selected and load the output into Unique_target.
Then take a Lookup transformation on the target and compare it with the source; when a record occurs more than once, delete that record from the target using an Update Strategy with DD_DELETE (constant 2) and load it into Duplicate_target. Alternatively, use the same source in another pipeline with an unconnected Lookup and write a lookup override with COUNT(*) ... HAVING COUNT(*) > 1, then load those records into Duplicate_target.
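A rough Python sketch of the intent behind this two-pass approach (the sample records are assumptions; the Lookup and Update Strategy mechanics are Informatica-specific and not modelled here):

from collections import Counter

source = [("A", 10), ("B", 20), ("A", 10), ("C", 30)]

# Pass 1: Sorter with Distinct loads all distinct records into Unique_target.
unique_target = list(dict.fromkeys(source))

# Pass 2: records occurring more than once are deleted from Unique_target
# (Update Strategy with DD_DELETE) and written to Duplicate_target.
counts = Counter(source)
duplicate_target = [rec for rec in unique_target if counts[rec] > 1]
unique_target = [rec for rec in unique_target if counts[rec] == 1]

print(unique_target)     # [('B', 20), ('C', 30)]
print(duplicate_target)  # [('A', 10)]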
| Is This Answer Correct ? | 0 Yes | 2 No |
If the number of source columns changes every time (the first time it is 10, the next time it is 20, and so on), how do you handle it without changing the mapping?
How can we load a table from source to target starting with the 11th record?
Name any tools for scheduling other than Workflow Manager and pmcmd.
What is a dimensional model?
Can we run a session without using workflows?
Explain data warehouse architecture. How does data flow from the initial source to the end target?
Commonly, how many mappings are there in banking projects?
I have a flat file in which the sal field has the value 10,000. I want to load the data in the same format, with sal as 10,000. If anybody knows the answer, please mail me. Thanks in advance. My mail id is chandranmca2007@gmail.com
Hi, the source data is: col1 values are 5,6,7; col2 values are 3,2,1; col3 values are 8,9,10. I want the target as: col1 5,6,7; col2 1,2,3; col3 8,9,10. How can this be done?
Which expressions can we not use in mapplets? Can we join (relate) two dimensions in a schema? Why and where do we use the 'sorted input' option?
Can anyone explain a sales project in Informatica?
What is meant by incremental aggregation?