How to design the Change Capture stage (DataStage parallel jobs) for SCD Type 2?
Answer Posted / pooja
Let me just elaborate the earlier answer clearly.
1. Two input datasets are required for the Change Capture
stage:
One is the old (before) dataset.
The second is the new or updated (after) dataset.
2. Connect the two inputs to the Change Capture stage and
set the target as a dataset.
3. Ensure the incoming data is sorted on the key column(s),
for performance, before it reaches the Change Capture stage.
4. Upon executing the job, viewing the data in the output
dataset shows a new column added alongside the output data.
A change_code column is generated by the Change Capture
stage with the values 0, 1, 2, 3, which indicate the result
of comparing the two input datasets: Copy (0), Insert (1),
Delete (2), Edit (3).
5. Decide what kind of data you need in the output target:
copy, insert, delete, or edit rows.
6. To apply SCD Type 2 we require Start Date and End Date
columns.
7. The Change Capture stage output is given to a
Transformer stage, where two new columns are generated:
Effective Start Date and End Date.
8. If you need all inserted (new) rows to be passed into a
particular dataset, specify an appropriate constraint in the
Transformer stage on the outgoing link, e.g. change_code = 1.
(Also make sure the Change Capture stage option Drop Output
For Insert is set to False, otherwise the insert rows never
reach the Transformer.)
9. In a similar way the other change types can be captured,
or a Filter stage can be placed after the Transformer stage
to route the data to the targets based on the requirement.
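Since DataStage stages are configured in the GUI rather than coded, here is a minimal Python sketch of the logic the steps above describe: a change-capture comparison that tags rows with change_code (0/1/2/3), followed by Transformer-style derivation of the SCD Type 2 effective dates. The function names, field names (id, eff_start_date, eff_end_date) and the high end date 9999-12-31 are illustrative assumptions, not DataStage APIs.

```python
from datetime import date

# Change Capture change_code values, as described in step 4
COPY, INSERT, DELETE, EDIT = 0, 1, 2, 3

def change_capture(before, after, key):
    """Compare two datasets (lists of dicts) on a key column and tag
    each row with a change_code, mimicking the Change Capture stage."""
    before_by_key = {row[key]: row for row in before}
    after_by_key = {row[key]: row for row in after}
    out = []
    for k, row in after_by_key.items():
        if k not in before_by_key:
            out.append({**row, "change_code": INSERT})   # new key
        elif row == before_by_key[k]:
            out.append({**row, "change_code": COPY})     # unchanged
        else:
            out.append({**row, "change_code": EDIT})     # changed
    for k, row in before_by_key.items():
        if k not in after_by_key:
            out.append({**row, "change_code": DELETE})   # key disappeared
    return out

def scd2_transform(rows, load_date=None, high_date=date(9999, 12, 31)):
    """Transformer-stage logic (steps 6-8): derive effective start/end
    dates. Inserts and edits open a new version valid until the high
    date; deletes close the row as of the load date."""
    load_date = load_date or date.today()
    out = []
    for row in rows:
        code = row["change_code"]
        if code in (INSERT, EDIT):
            out.append({**row, "eff_start_date": load_date,
                        "eff_end_date": high_date})
        elif code == DELETE:
            out.append({**row, "eff_start_date": None,
                        "eff_end_date": load_date})
        # COPY rows are dropped, like Drop Output For Copy = True
    return out
```

This is only a sketch of the comparison semantics; in a real job the Change Capture stage also expires the previous version of an edited row in the target table, which would be handled by an update link or a separate output.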