The source is a flat file, and we want to load unique records and duplicate records separately into two separate targets, right?
Answers were Sorted based on User's Feedback
Answer / nitin
Create the mapping as below to load unique records and duplicate records into separate targets:

Source -> SQ -> Sorter -> Aggregator -> Router -> Tgt_Unique
                                               -> Tgt_Duplicate

In the Aggregator, group by all ports and define an output port OUTPUT_COUNT = COUNT(*).
In the Router, define two groups, OUTPUT_COUNT > 1 and OUTPUT_COUNT = 1. Connect the output of the
OUTPUT_COUNT > 1 group to Tgt_Duplicate and the OUTPUT_COUNT = 1 group to Tgt_Unique.
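Outside Informatica, a minimal Python sketch of the same group-by-all-columns-and-count logic could look like the following. The CSV file names and the assumption that the flat file has no header line are mine, not part of the mapping:

    import csv
    from collections import Counter

    def split_unique_and_duplicate(source_path, unique_path, duplicate_path):
        # Read the flat-file source; each row becomes a tuple so it can be grouped.
        with open(source_path, newline="") as src:
            rows = [tuple(r) for r in csv.reader(src)]

        # Group by all columns and count, like the Aggregator port OUTPUT_COUNT = COUNT(*).
        counts = Counter(rows)

        with open(unique_path, "w", newline="") as uq, \
             open(duplicate_path, "w", newline="") as dup:
            unique_writer = csv.writer(uq)
            duplicate_writer = csv.writer(dup)
            # Route one row per group, like the Router groups on OUTPUT_COUNT.
            for row, n in counts.items():
                if n == 1:
                    unique_writer.writerow(row)      # OUTPUT_COUNT = 1 -> Tgt_Unique
                else:
                    duplicate_writer.writerow(row)   # OUTPUT_COUNT > 1 -> Tgt_Duplicate

    # split_unique_and_duplicate("source.csv", "tgt_unique.csv", "tgt_duplicate.csv")

Like the Aggregator, this writes one representative row per distinct group, so the duplicate target receives each duplicated row only once.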
Answer / ankit kansal
Hi,
What I understand from your problem is that if your source contains 1, 2, 1, 2, 3, then only 3 is taken as unique, and 1 and 2 are considered duplicate values.
SRC -> SQ -> SRT -> EXP (to set flags for duplicates) -> ROUTER -> JOINER -> EXP -> RTR -> 2 targets
http://deepinopensource.blogspot.in/
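A rough Python sketch of that interpretation, assuming a single-column source such as 1, 2, 1, 2, 3. The counting step stands in for the flag-setting Expression and the routing loop stands in for the Joiner plus second Router; the function and variable names are illustrative only:

    from collections import Counter

    def route_rows(values):
        # Count each value first (the flag/aggregate step), then route every
        # original row by joining the count back to it: values seen once go to
        # the unique target, values seen more than once go, with all their
        # occurrences, to the duplicate target.
        counts = Counter(values)
        unique_target = [v for v in values if counts[v] == 1]
        duplicate_target = [v for v in values if counts[v] > 1]
        return unique_target, duplicate_target

    # With the example from the answer:
    # route_rows([1, 2, 1, 2, 3]) -> ([3], [1, 2, 1, 2])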
Answer / mohank106
Refer to the link below; the answer is explained clearly there:
http://www.bullraider.com/database/informatica/scenario/11-informatica-scenario3
Answer / rani
Take a Source Qualifier, then place a Sorter transformation, select the Distinct option in the Sorter, and load the result into Unique_target.
Take a Lookup transformation, look up on the target, and compare it with the source; when a record occurs more than once, delete that record from the target using an Update Strategy with DD_DELETE (value 2) and load it
into Duplicate_target. Using the source in another pipeline, take an unconnected Lookup, write a lookup override with COUNT(*) ... HAVING > 1, and load those records into Duplicate_target.
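A hedged Python sketch of that two-pass idea, treating the "targets" as in-memory lists and emulating the COUNT(*) ... HAVING > 1 lookup override with a Counter. All names are placeholders, since the original answer leaves several details open:

    from collections import Counter

    def two_pass_split(source_rows):
        # Pass 1: load distinct rows into the unique target (Sorter with Distinct).
        unique_target = list(dict.fromkeys(source_rows))
        # Pass 2: find rows occurring more than once (COUNT(*) ... HAVING > 1),
        # delete them from the unique target (Update Strategy / DD_DELETE) and
        # load them into the duplicate target.
        repeated = {r for r, n in Counter(source_rows).items() if n > 1}
        duplicate_target = [r for r in unique_target if r in repeated]
        unique_target = [r for r in unique_target if r not in repeated]
        return unique_target, duplicate_target

    # two_pass_split([1, 2, 1, 2, 3]) -> ([3], [1, 2])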
How will you update a row without using an Update Strategy?
What are the types of facts, with examples?
I did my MBA in 2008 and got a job as a business analyst in January 2008 through a consultancy, but after 3 months they gave me Informatica developer training, and I am continuing in this job now. My question is: when I go to interviews, HR people often ask me, "You are an MBA graduate; how were you selected for this position?" I explain what I have mentioned above. Please tell me how I should answer this question.
What is a junk dimension?
Let's say I have a large number of records in the source table and 3 destination tables A, B, and C. I have to insert records 1 to 10 into A, then 11 to 20 into B, and 21 to 30 into C. Then again 31 to 40 into A, 41 to 50 into B, and 51 to 60 into C, and so on up to the last record.
How do you load a relational source into a flat file target?
How do you create a target definition for flat files?
Can we use the mapping parameters or variables created in one mapping in any other reusable transformation?
How do you load the first and last records into a target table?
What testing is done at the mapping level? Please give a brief explanation.
What logic will you implement to load data into a fact table from n dimension tables?
How can I schedule an Informatica job using the Unix cron scheduling tool?