What is the method of removing duplicates without using the Remove Duplicates stage?
How do you run a DataStage job from the command line?
How do you find the length of a word in UNIX?
What are constraints and derivations?
How would you load your daily/monthly job data into fact and dimension tables using DataStage?
What types of jobs do we have in DataStage?
How do you read 100 records at a time from a source: a) when the metadata is the same, and b) when the metadata is not the same?
What is the difference between Informatica and DataStage?
Why is a fact table in normal form?
What are the types of containers?
Can you filter data in a hashed file?
What is QualityStage in the DataStage tool suite?
How can we improve performance in DataStage?
Can you explain link buffering?
Create a job that splits the data in the Jobs.txt file into four output files. You will direct the data to the different output files using constraints.
• Job name: JobLevels
• Source file: Jobs.txt
• Target file 1: LowLevelJobs.txt
− min_lvl between 0 and 25 inclusive.
− Same column types and headings as Jobs.txt.
− Include column names in the first line of the output file.
− Job description column should be preceded by the string “Job Title:” and embedded within square brackets. For example, if the job description is “Designer”, the derived value is: “Job Title: [Designer]”.
• Target file 2: MidLevelJobs.txt
− min_lvl between 26 and 100 inclusive.
− Same format and derivations as Target file 1.
• Target file 3: HighLevelJobs.txt
− min_lvl between 101 and 500 inclusive.
− Same format and derivations as Target file 1.
• Rejects file: JobRejects.txt
− min_lvl is out of range, i.e., below 0 or above 500.
− This file has only two columns: job_id and reject_desc.
− reject_desc is a variable-length text field, maximum length 100. It should contain a string of the form: “Level out of range:
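
In DataStage terms this spec maps onto a Sequential File source, a Transformer with three constrained output links plus a reject link, and four Sequential File targets. As a language-neutral way to sanity-check the constraint and derivation logic, the following is a minimal plain-Python sketch, not a DataStage artifact. The comma-delimited layout and the column names job_desc and job_id in the header are assumptions (the spec names only min_lvl, job_id, and the job description column), and the truncated reject message is assumed to read “Level out of range: <min_lvl>”.

    import csv

    # Hypothetical column names; adjust to match the real Jobs.txt header.
    JOB_ID, JOB_DESC, MIN_LVL = "job_id", "job_desc", "min_lvl"

    # Constraint ranges from the spec, mapped to their target files.
    TARGETS = {
        "LowLevelJobs.txt": (0, 25),
        "MidLevelJobs.txt": (26, 100),
        "HighLevelJobs.txt": (101, 500),
    }

    def derive_title(desc: str) -> str:
        """Derivation from the spec: 'Designer' -> 'Job Title: [Designer]'."""
        return f"Job Title: [{desc}]"

    def split_jobs(source: str = "Jobs.txt") -> None:
        with open(source, newline="") as src:
            reader = csv.DictReader(src)
            out_files = {name: open(name, "w", newline="") for name in TARGETS}
            rej_file = open("JobRejects.txt", "w", newline="")
            try:
                writers = {}
                for name, f in out_files.items():
                    w = csv.DictWriter(f, fieldnames=reader.fieldnames)
                    w.writeheader()  # column names in the first output line
                    writers[name] = w
                rejects = csv.DictWriter(rej_file,
                                         fieldnames=[JOB_ID, "reject_desc"])
                rejects.writeheader()

                for row in reader:
                    lvl = int(row[MIN_LVL])
                    for name, (lo, hi) in TARGETS.items():
                        if lo <= lvl <= hi:  # constraint: route by min_lvl range
                            row[JOB_DESC] = derive_title(row[JOB_DESC])
                            writers[name].writerow(row)
                            break
                    else:  # min_lvl below 0 or above 500 -> reject link
                        rejects.writerow({
                            JOB_ID: row[JOB_ID],
                            # Assumed completion of the truncated message,
                            # capped at the 100-character maximum length.
                            "reject_desc": f"Level out of range: {lvl}"[:100],
                        })
            finally:
                for f in [*out_files.values(), rej_file]:
                    f.close()

The for/else routing mirrors the Transformer behavior: a row is written to the first link whose constraint it satisfies, and a row that satisfies none of them falls through to the reject file with only job_id and reject_desc.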