Data was transformed successfully from the source table to the target table. Now, how will you ensure that the data in the target table is correct? I answered that I would verify one or two records and check them. The follow-up question was: development already does that 1-2 record verification, but as a tester you have to verify the complete data. How will you do that? Please answer.
Answer / abhinaw prakash
You can write a query using both the source and target tables,
using NOT IN or the MINUS set operator.
You can also build a mapping that reads both tables and
generates a report of the missing records.
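For example, a minimal SQL sketch of the set-difference check (table and column names here are hypothetical; this assumes source and target are reachable from the same database connection):

-- rows in source that are missing or different in target
SELECT emp_id, emp_name, salary FROM src_employee
MINUS
SELECT emp_id, emp_name, salary FROM tgt_employee;

-- rows in target that have no matching source row (unexpected extras)
SELECT emp_id, emp_name, salary FROM tgt_employee
MINUS
SELECT emp_id, emp_name, salary FROM src_employee;

-- keys that never made it across, via NOT IN
-- (note: NOT IN misbehaves if tgt_employee.emp_id can be NULL)
SELECT emp_id FROM src_employee
WHERE emp_id NOT IN (SELECT emp_id FROM tgt_employee);

If both MINUS queries return zero rows, the two tables match on every selected column for every row; comparing SELECT COUNT(*) of both tables first is a cheap sanity check before running the full compare.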
Hi, can anyone tell me the difference between persistent and dynamic caches? Under which conditions do we use each of these caches?
If you are using a dynamic cache, there will be a lookup port in the transformation; which option will you select for it?
Can anyone give some input on "Additional Concurrent Pipelines for Lookup Cache Creation"? I know that this property is used to build the caches in a mapping concurrently, but which value should I set it to (i.e. 1, 2, 3, or something else) for concurrent cache building?
My source has records like ram 3, sam 5, tom 8, and I want to load the target with the ram record 3 times, the sam record 5 times, and the tom record 8 times.
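One way to do this is with a SQL override in the Source Qualifier (a sketch assuming an Oracle source; the table name emp and columns name, cnt are hypothetical): join the source to a generated numbers table so each row repeats cnt times.

SELECT e.name, e.cnt
FROM emp e,
     -- 100 is an assumed upper bound for cnt; adjust as needed
     (SELECT LEVEL AS n FROM dual CONNECT BY LEVEL <= 100) r
WHERE r.n <= e.cnt;

For ram/3, sam/5, tom/8 this emits ram three times, sam five times, and tom eight times. Inside the mapping itself, the usual alternative is a Java transformation that calls generateRow() in a loop driven by the count column.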
How big was your fact table?
What is intricate mapping?
Can anyone explain a retail domain project in Informatica?
I have a source like col1, col2, col3, col4 with the values
3,6,1,7
1,5,3,8
2,1,5,6
and I want the output to be
3,6,7
5,3,8
2,5,6
How can we achieve this scenario at the Informatica level? Please help me. Thanks in advance.
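Reading the example as "drop the value 1 from each row and keep the other three values in order", this can be done with conditional column shifting. A sketch in SQL (the source table name src_table is hypothetical, and it assumes exactly one column per row holds the value 1; the same logic maps one-to-one onto IIF() ports in an Expression transformation):

SELECT
  -- if col1 held the 1, start the output from col2 instead
  CASE WHEN col1 = 1 THEN col2 ELSE col1 END AS out1,
  -- if the 1 was in col1 or col2, shift col3 into the middle slot
  CASE WHEN col1 = 1 OR col2 = 1 THEN col3 ELSE col2 END AS out2,
  -- if col4 held the 1, fall back to col3 for the last slot
  CASE WHEN col4 = 1 THEN col3 ELSE col4 END AS out3
FROM src_table;

For (3,6,1,7) this yields (3,6,7), for (1,5,3,8) it yields (5,3,8), and for (2,1,5,6) it yields (2,5,6), matching the expected output.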
1. When we develop a project, what performance issues can arise? (KPIT)
2. If a table has an INDEX and a CONSTRAINT, why do they raise a performance issue? When we drop the index and disable the constraint, the load performs better. (KPIT)
3. What Unix commands are frequently used in Informatica work?
How can I generate unique sequence numbers for each partition in a session with an unconnected Lookup?

Hi all, please help me resolve the issue below while applying partitioning to my session.

This is a very simple mapping with a source, Lookup, Router, and target. I need to look up on the target and compare with the source data: if any piece of data is new, insert it, and if anything has changed in the existing data, update it. While inserting new records into the target table, I generate sequence numbers with an unconnected Lookup by fetching the maximum PK ID from the target table. This flow has been working fine for the last year.

Now I wish to apply partitioning to the above session. At the source I used 4 pass-through partitions (with a different filter condition for each partition to pull the data from the source), and at the target I used 4 pass-through partitions. It works fine for some data, but for some rows the insert operation throws unique-key errors, because it generates the same sequence key twice.

In detail: the 1st row comes from the 1st partition and gets sequence number 1. The 2nd row comes from the 1st partition and gets sequence number 2. The 3rd row comes from the 2nd partition and gets sequence number 2 again (it should get 3). The issue is that the same sequence numbers are generated twice across different partitions.

Can anyone please help me resolve this? While applying partitions, how can I generate unique sequence numbers from an unconnected Lookup for each partition's data?

Regards, N Kiran.
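A possible fix (not from the original thread, just a sketch): the duplicates appear because each partition fetches MAX(pk_id) independently and then increments its own local counter, so two partitions can hand out the same value. Moving the number generation to a single shared source avoids the collision. One option is a database sequence on the target side (Oracle syntax; the sequence name tgt_pk_seq is hypothetical):

-- START WITH set above the current MAX(pk_id) in the target
CREATE SEQUENCE tgt_pk_seq START WITH 1001 INCREMENT BY 1 CACHE 100;
-- every partition draws from the same sequence, so values cannot collide
SELECT tgt_pk_seq.NEXTVAL FROM dual;

Inside PowerCenter, a Sequence Generator transformation with a non-zero "Number of Cached Values" serves the same purpose: each partition reserves its own block of numbers up front instead of recomputing a maximum, so no two partitions can generate the same key.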
SRC1 -> EXP -> AGGR -> TGT
SRC2 -> EXP ->
Above is a mapping with two pipelines connected to the target TGT. Design-wise, is this design correct or not?
What is a query panel?