Input file one has data as shown below:
1
2
3
4
5
6
7
8
Input file two has data as shown below:
6
7
8
9
10
11
Design a DataStage job that produces the three output files shown below:
Output 1
6
7
8
Output 2
1
2
3
4
5
Output 3
9
10
11
Let me know your answers
Feed sequential file1 and file2 as inputs to a Change Capture stage, then use a Switch stage on the change code with cases 0, 1, and 2. Case 0 gives 6, 7, 8; case 1 gives 1, 2, 3, 4, 5; and case 2 gives 9, 10, 11.
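Outside DataStage, a minimal Python sketch of the same idea (the 0/1/2 codes below simply mirror the change codes named in this answer; this is an illustration, not DataStage code):

```python
# Emulate the Change Capture + Switch logic in plain Python.
# Code 0 = record in both files, 1 = only in the "after" file,
# 2 = only in the "before" file (matching the cases in the answer).
after_records = [1, 2, 3, 4, 5, 6, 7, 8]   # input file one
before_records = [6, 7, 8, 9, 10, 11]      # input file two

before, after = set(before_records), set(after_records)
outputs = {0: [], 1: [], 2: []}            # one list per switch case

for key in sorted(before | after):
    if key in before and key in after:
        outputs[0].append(key)             # case 0 -> output 1
    elif key in after:
        outputs[1].append(key)             # case 1 -> output 2
    else:
        outputs[2].append(key)             # case 2 -> output 3

print(outputs)  # {0: [6, 7, 8], 1: [1, 2, 3, 4, 5], 2: [9, 10, 11]}
```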
Answer / Suresh
Note that the first output holds the numbers common to the two inputs, while the remaining outputs hold the records unique to each input.
We can think of validating the job with this approach the same way we validate an application using equivalence partitioning.
Likewise, boundary-value checks with respect to the two inputs can be approached in the same way.
Answer / Pavani
Use a Join stage with join type = full outer. The output then has two columns: the first column holds the values from the first input (null where there is no match), and the second column holds the values from the second input (null where there is no match).
Then, in a Transformer stage, write the constraints:
1. Output 1 -----> 1st_column = 2nd_column
2. Output 2 -----> 2nd_column is null or '' (depending on the input data)
3. Output 3 -----> 1st_column is null or '' (depending on the data)
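A plain-Python sketch of this full-outer-join approach (again just an emulation of the stage logic, with None standing in for DataStage nulls):

```python
# Full outer join of the two inputs on the key, then transformer-style
# constraints that test which side of the joined row is null.
file1 = [1, 2, 3, 4, 5, 6, 7, 8]   # first input
file2 = [6, 7, 8, 9, 10, 11]       # second input

keys = set(file1) | set(file2)
# Each joined row is (1st_column, 2nd_column); a missing side is None.
joined = [(k if k in file1 else None, k if k in file2 else None)
          for k in sorted(keys)]

output1 = [c1 for c1, c2 in joined if c1 == c2 and c1 is not None]  # 1st_column = 2nd_column
output2 = [c1 for c1, c2 in joined if c2 is None]                   # 2nd_column is null
output3 = [c2 for c1, c2 in joined if c1 is None]                   # 1st_column is null

print(output1, output2, output3)  # [6, 7, 8] [1, 2, 3, 4, 5] [9, 10, 11]
```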
Use input file1 as the after file and input file2 as the before file for a Change Capture stage. The Change Capture stage generates change code 0 for 6, 7, 8; change code 1 for 1, 2, 3, 4, 5; and change code 2 for 9, 10, 11. Use a Filter or Switch stage after the Change Capture stage to write records to output file1 when the change code is 0, to output file2 when the change code is 1, and to output file3 when the change code is 2.