HOW DO YOU PERFORM INCREMENTAL LOAD?
Answers were Sorted based on User's Feedback
Answer / subbu
By using a date column in the source we do the incremental load, specifying the start date in the Source Qualifier and changing the start date in the parameter file for future runs.
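A minimal sketch of this approach, assuming an Oracle source, a hypothetical mapping parameter $$START_DATE, and a hypothetical source column LAST_UPDATE_DT (none of these names come from the answer above): the Source Qualifier filter references the parameter, and the parameter file supplies the value before each run.

    -- Source Qualifier source filter / SQL override WHERE clause (parameter is expanded as text)
    LAST_UPDATE_DT > TO_DATE('$$START_DATE', 'YYYY-MM-DD')

    ; Parameter file entry, updated before each run (folder/workflow/session names are illustrative)
    [MyFolder.WF:wf_incremental_load.ST:s_m_incremental_load]
    $$START_DATE=2024-01-01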
| Is This Answer Correct ? | 7 Yes | 0 No |
Answer / praveen
1. Taking the target definition as a source and using a Joiner and an Update Strategy, we can do the incremental loading.
2. By using a Lookup transformation, keeping the lookup on the target table and comparing.
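With the lookup approach, the insert/update routing is typically done in an Update Strategy expression. A rough sketch, assuming the lookup on the target returns a port named LKP_CUSTOMER_ID (a hypothetical name): if the lookup finds no match the row is new, otherwise it is an update.

    -- Update Strategy expression: insert rows not found in the target, update the rest
    IIF(ISNULL(LKP_CUSTOMER_ID), DD_INSERT, DD_UPDATE)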
| Is This Answer Correct ? | 9 Yes | 3 No |
Answer / tshiwela
You can perform incremental load by using auxiliary parameters.
| Is This Answer Correct ? | 2 Yes | 2 No |
Answer / sudhakar
By using mapping variables: first define the variable (e.g. E_DATE), then compare E_DATE against the source date; if it matches, the data is loaded automatically.
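A rough sketch of the mapping-variable approach, assuming a mapping variable $$LAST_RUN_DATE with aggregation type Max and a source column LAST_UPDATE_DT (both names are illustrative, not from the answer above): filter out rows already loaded, and let SETMAXVARIABLE advance the watermark for the next run.

    -- Filter transformation condition: pass only rows newer than the last run
    LAST_UPDATE_DT > $$LAST_RUN_DATE

    -- Expression transformation (variable port): advance the watermark to the newest date seen
    SETMAXVARIABLE($$LAST_RUN_DATE, LAST_UPDATE_DT)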
| Is This Answer Correct ? | 0 Yes | 0 No |
With the date-column approach in mind, I would expect a load pattern like this:
Every extract from the source will be a full load.
Every load, in terms of records, will be equal to the last load + new records – deleted records.
80-90% of the extracted records will already exist in the valid table instance.
Every load will be incrementally larger than the previous load, as more records are added to the source.
| Is This Answer Correct ? | 0 Yes | 0 No |
Answer / vaschiky
Disable the "Truncate target table" option in the session's target properties.
| Is This Answer Correct ? | 0 Yes | 0 No |
What if we sort the data in descending order instead of ascending order in the Sorter transformation and then send the data to the Aggregator transformation? Is there any performance downfall? Please answer below. Thank you.
SRC1 -> EXP -> AGGR -> TGT, SRC2 -> EXP -> ... The above is a mapping with two pipelines connected to the target TGT. Design-wise, is this design correct or not?
What are the different options used to configure the sequential batches?
Can anyone give some input on "Additional Concurrent Pipelines for Lookup Cache Creation"? I know that this property is used to build caches in a mapping concurrently. But which value should I set for this (i.e. 1, 2, 3, or something else) for concurrent cache building?
What are the types of metadata that are stored in the repository?
What are the transformations that are not allowed in a mapplet?
On a lookup on any table we can get only (a) any value or (b) the last value, but if I need both duplicate values, how can I achieve that?
I want to skip the first 5 rows when loading into the target. What will be the logic at the session level?
What is meant by query override?
What are the issues that you have faced while moving your project from the Test Environment to the Production Environment? Please explain in depth. Thanks in advance.
Design a mapping to get year of join for each employee.
What is the exact use of the 'Online' and 'Offline' server connect options while defining a workflow in the Workflow Manager?