If a job aborted inside a sequencer, how can we restart the sequence from the previous successful job?
What is the difference between DataStage versions 8.1, 8.5, and 9.1?
Create a job that splits the data in the Jobs.txt file into four output files. You will direct the data to the different output files using constraints.
• Job name: JobLevels
• Source file: Jobs.txt
• Target file 1: LowLevelJobs.txt
  − min_lvl between 0 and 25 inclusive.
  − Same column types and headings as Jobs.txt.
  − Include column names in the first line of the output file.
  − The job description column should be preceded by the string “Job Title:” and embedded within square brackets. For example, if the job description is “Designer”, the derived value is: “Job Title: [Designer]”.
• Target file 2: MidLevelJobs.txt
  − min_lvl between 26 and 100 inclusive.
  − Same format and derivations as Target file 1.
• Target file 3: HighLevelJobs.txt
  − min_lvl between 101 and 500 inclusive.
  − Same format and derivations as Target file 1.
• Rejects file: JobRejects.txt
  − min_lvl is out of range, i.e., below 0 or above 500.
  − This file has only two columns: job_id and reject_desc.
  − reject_desc is a variable-length text field, maximum length 100. It should contain a string of the form: “Level out of range: <min_lvl>”, where <min_lvl> is the value in the min_lvl field.
My question is: how do you write the stage variable for the reject rows? (See the sketch below.)
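One possible answer, sketched rather than prescribed: in a Transformer stage you could route the rows with four output-link constraints and build the reject text in a stage variable. The input link name lnk_Jobs and the stage-variable name svRejectDesc below are illustrative, not part of the original specification.

    Output-link constraints:
        LowLevelJobs:   lnk_Jobs.min_lvl >= 0   And lnk_Jobs.min_lvl <= 25
        MidLevelJobs:   lnk_Jobs.min_lvl >= 26  And lnk_Jobs.min_lvl <= 100
        HighLevelJobs:  lnk_Jobs.min_lvl >= 101 And lnk_Jobs.min_lvl <= 500
        JobRejects:     lnk_Jobs.min_lvl < 0    Or  lnk_Jobs.min_lvl > 500

    Derivation for job_desc on the three good links:
        "Job Title: [" : lnk_Jobs.job_desc : "]"

    Stage variable for the reject text:
        svRejectDesc = "Level out of range: " : lnk_Jobs.min_lvl

    Derivation for reject_desc on the JobRejects link:
        svRejectDesc

Alternatively, instead of writing an explicit constraint on the rejects link, you could mark that link with the Transformer's Otherwise option so it catches every row that fails the other constraints.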
What are the strengths of DataStage?
Why is a fact table in normal form?
How many keys can we define in the Remove Duplicates stage?
How do you convert a string to a date using only the Sequential File stage, without using any other stages?
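One hedged sketch of the usual approach: define the field as a Date in the Sequential File stage's column metadata and give it a date format string, so the stage itself performs the string-to-date conversion on read. The column name is illustrative, and the format below assumes the incoming text looks like 2023-01-31; adjust it to match the real data (the exact property location varies slightly by version).

    Column definition in the Sequential File stage:
        Column name:   hire_date          (illustrative)
        SQL type:      Date
        Date format:   %yyyy-%mm-%dd      (set in the column's extended/format properties)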
What are routines in DataStage? List the various types of routines.
How, or from where, can we get reference data in an SCD Type 2 implementation?
Explain the implementation of SCDs in DataStage in detail. Please describe the step-by-step procedure for performing SCD Types 1, 2, and 3. Please reply; thanks in advance.
What are the functionalities of the Link Partitioner stage?
How can we perform a second extraction from the client database without picking up the data that was already loaded during the first extraction?
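One common pattern, sketched here with illustrative table, column, and parameter names: record the timestamp (or maximum key) of the previous run, pass it into the job as a parameter, and let the source stage select only rows changed since then.

    User-defined SQL in the source database stage
    (LastExtractTimestamp is a job parameter updated after each successful run):

        SELECT cust_id, cust_name, last_updated
        FROM   customers
        WHERE  last_updated > '#LastExtractTimestamp#'

If the source has no reliable timestamp, comparing the new extract against the already-loaded data (for example with the Change Capture stage) is another option.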
What is the method of removing duplicates without using the Remove Duplicates stage?
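One hedged sketch of a Transformer-based alternative: sort the input on the key, then use stage variables to compare each row's key with the previous row's key and pass through only the first row of each group. The link, column, and variable names below are illustrative.

    Stage variables (evaluated top to bottom), input sorted on cust_id,
    with svPrevKey given an initial value that cannot match a real key:
        svIsDup   = If lnk_In.cust_id = svPrevKey Then 1 Else 0
        svPrevKey = lnk_In.cust_id

    Constraint on the output link (keep only the first row per key):
        svIsDup = 0

A Sort stage with the Create Key Change Column option, followed by a filter on keyChange = 1, is another variant that avoids stage variables altogether.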