A parallel job contains more than 20 stages. I want to find out which stage is the most performance intensive.
Answer / nish
Read the dump score and the player timings to find out which stages/nodes are taking the most CPU time during execution.
Consider splitting your job into parts optimally and compare the performance with the original design.
Try to reduce the number of Transformer stages.
Check whether any unnecessary re-partitioning is being introduced.
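For illustration, the dump score can be produced by setting the APT_DUMP_SCORE environment variable and per-operator CPU figures by APT_PM_PLAYER_TIMING. Below is a rough Python sketch of how you might tally and rank CPU time per operator once those timing lines are in the job log; the log-line pattern it matches is only an assumption for illustration, not the exact format your engine version writes.

import re
from collections import defaultdict

# Assumed pattern for a player-timing line, e.g. "operator op3 ... cpu = 12.4".
# Adjust it to whatever your job log actually contains.
TIMING_LINE = re.compile(r"operator\s+(\S+).*?cpu\s*=\s*([\d.]+)", re.IGNORECASE)

def rank_operators_by_cpu(log_path):
    cpu_by_operator = defaultdict(float)
    with open(log_path) as log:
        for line in log:
            match = TIMING_LINE.search(line)
            if match:
                operator, cpu_seconds = match.group(1), float(match.group(2))
                cpu_by_operator[operator] += cpu_seconds  # sum across partitions
    # Highest CPU consumers first
    return sorted(cpu_by_operator.items(), key=lambda item: item[1], reverse=True)

if __name__ == "__main__":
    for operator, cpu in rank_operators_by_cpu("job_log.txt"):
        print(f"{operator}: {cpu:.2f} s CPU")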
Hope it helps!
How do you write server routine code?
What are the processing stages?
What is the difference between SQL*Loader and OCI in DataStage?
I have an input column Col with the values 1, 1, 2, 2, 3 and I want 3 outputs:
Output1: 1, 1
Output2: 2, 2
Output3: 3
That is, duplicates of one value should go to one target, duplicates of another value to another target, and so on. Please help.
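For what it's worth, here is a rough Python sketch of the grouping logic being asked for (not a DataStage design; the target file names output1.txt, output2.txt, ... are assumptions for illustration):

from collections import OrderedDict

def split_by_value(values):
    # Group duplicates of the same value together, keeping first-seen order.
    groups = OrderedDict()
    for value in values:
        groups.setdefault(value, []).append(value)
    return groups

if __name__ == "__main__":
    col = [1, 1, 2, 2, 3]
    for index, (value, rows) in enumerate(split_by_value(col).items(), start=1):
        with open(f"output{index}.txt", "w") as target:   # assumed target names
            target.write("\n".join(str(row) for row in rows) + "\n")
        print(f"Output{index}: {rows}")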
1. What is the repartitioning technique? 2. What deliverables are transferred to the client using DataStage? 3. How do you write loop statements using a nested loop sequence?
Input: 2 7 8 9 5 1 7 3 6. Output: 2 5 6. How do you get this output? Please explain.
Please tell me a good training centre with job opportunities for DataStage in Chennai.
Create a job that splits the data in the Jobs.txt file into four output files. You will direct the data to the different output files using constraints.
• Job name: JobLevels
• Source file: Jobs.txt
• Target file 1: LowLevelJobs.txt
  − min_lvl between 0 and 25 inclusive.
  − Same column types and headings as Jobs.txt.
  − Include column names in the first line of the output file.
  − The job description column should be preceded by the string “Job Title:” and embedded within square brackets. For example, if the job description is “Designer”, the derived value is: “Job Title: [Designer]”.
• Target file 2: MidLevelJobs.txt
  − min_lvl between 26 and 100 inclusive.
  − Same format and derivations as Target file 1.
• Target file 3: HighLevelJobs.txt
  − min_lvl between 101 and 500 inclusive.
  − Same format and derivations as Target file 1.
• Rejects file: JobRejects.txt
  − min_lvl is out of range, i.e., below 0 or above 500.
  − This file has only two columns: job_id and reject_desc.
  − reject_desc is a variable-length text field, maximum length 100. It should contain a string of the form “Level out of range: <min_lvl>”, where <min_lvl> is the value in the min_lvl field.
My question is: how do you write the stage variable for the reject rows?
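As a rough illustration of the constraint and derivation logic described above, here is a Python sketch (pseudologic, not actual Transformer or stage-variable expressions; the simplified column set job_id, min_lvl, job_desc is an assumption):

def derive_job_desc(job_desc):
    # e.g. "Designer" -> "Job Title: [Designer]"
    return f"Job Title: [{job_desc}]"

def route_job(job_id, min_lvl, job_desc):
    if 0 <= min_lvl <= 25:
        return "LowLevelJobs.txt", (job_id, min_lvl, derive_job_desc(job_desc))
    if 26 <= min_lvl <= 100:
        return "MidLevelJobs.txt", (job_id, min_lvl, derive_job_desc(job_desc))
    if 101 <= min_lvl <= 500:
        return "HighLevelJobs.txt", (job_id, min_lvl, derive_job_desc(job_desc))
    # Reject rows keep only job_id plus a derived reject_desc (max length 100)
    return "JobRejects.txt", (job_id, f"Level out of range: {min_lvl}"[:100])

if __name__ == "__main__":
    print(route_job(1, 10, "Designer"))   # -> LowLevelJobs.txt
    print(route_job(2, 700, "Manager"))   # -> JobRejects.txt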
Can a File Set stage support an input link and an output link at the same time?
Explain the scenarios where sequential file stage runs in parallel?
Can anybody describe in detail a complex DataStage job? I have worked only on direct load and full refresh jobs, but this question comes up in every interview.
Source:
department_no, employee_name
20, R
10, A
10, D
20, P
10, B
10, C
20, Q
20, S

Output should be like this:
department_no, employee_list
10, A
10, A,B
10, A,B,C
10, A,B,C,D
20, A,B,C,D,P
20, A,B,C,D,P,Q
20, A,B,C,D,P,Q,R
20, A,B,C,D,P,Q,R,S
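A rough Python sketch of the accumulation the sample output implies: sort by department and employee name, then build a running comma-separated employee list that is not reset at the department boundary (the sort order is inferred from the sample output itself, so treat it as an assumption):

def running_employee_list(rows):
    # rows are (department_no, employee_name) pairs
    ordered = sorted(rows)
    seen = []
    result = []
    for department_no, employee_name in ordered:
        seen.append(employee_name)
        result.append((department_no, ",".join(seen)))
    return result

if __name__ == "__main__":
    source = [(20, "R"), (10, "A"), (10, "D"), (20, "P"),
              (10, "B"), (10, "C"), (20, "Q"), (20, "S")]
    for department_no, employee_list in running_employee_list(source):
        print(department_no, employee_list)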