When will you go for a data set and when for a file set?
Answers were Sorted based on User's Feedback
Answer / jaimy chacko
Just to add something to the previous answer: data sets are operating system files, each referred to by a control file, which by convention has the suffix .ds. The control file points IBM InfoSphere DataStage to a set of other files that carry the data.
The location of these data files is determined by the “resource disk” property in the configuration file used to run the job. Using data sets wisely can be key to good performance in a set of linked jobs.
You can also manage data sets independently of a job using the Data Set Management utility, available from
the IBM InfoSphere DataStage and QualityStage Designer or Director.
WebSphere DataStage can generate and name exported files, write them to their destination, and list the files it has generated in a control file whose extension is, by convention, .fs. The data files and the file that lists them are together called a file set. This capability is useful because some operating systems impose a 2 GB limit on the size of a single file, so you may need to distribute the data among nodes to prevent overruns. The amount of data that can be stored in each destination data file is limited by the characteristics of the file system and the amount of free disk space available.
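The description above can be sketched in a few lines of code. This is only an illustration of the *idea* of a file set (several size-capped data files plus one small control file that lists them); the splitting logic, file names, and the 200-byte cap are invented for the example and are not DataStage's actual on-disk format.

```python
import os
import tempfile

def write_file_set(records, base_path, max_bytes):
    """Write records across data files no larger than max_bytes each,
    then write a .fs control file listing the data files (one per line)."""
    data_files = []
    current = None
    written = 0
    for rec in records:
        line = rec + "\n"
        # Start a new data file when the current one would exceed the cap.
        if current is None or written + len(line) > max_bytes:
            if current is not None:
                current.close()
            path = f"{base_path}.part{len(data_files)}"
            data_files.append(path)
            current = open(path, "w")
            written = 0
        current.write(line)
        written += len(line)
    if current is not None:
        current.close()
    # The control file carries no data itself; it only lists the data files.
    with open(base_path + ".fs", "w") as ctl:
        ctl.write("\n".join(data_files) + "\n")
    return data_files

base = os.path.join(tempfile.mkdtemp(), "demo")
files = write_file_set((f"row{i}" for i in range(100)), base, max_bytes=200)
```

In a real parallel job, DataStage spreads these data files across the nodes named in the configuration file; here a tiny byte cap stands in for the per-file size limit the answer mentions.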
| Is This Answer Correct ? | 8 Yes | 4 No |
Answer / jaimy chacko
Data sets and file sets are almost the same. A data set is tool-dependent (DataStage's internal format), while a file set is OS-dependent (plain UNIX files). A data set has no fixed limit on the amount of data it holds, whereas a file set's data files are limited in size.
| Is This Answer Correct ? | 3 Yes | 5 No |
What are all the different ways to run a job?
What are stage variables?
For example, you have one table with 4 columns (Mgr ID, Department ID, Salary, Employee ID). Can you find the average salary and the number of employees per department and manager?
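A minimal sketch of the aggregation this question asks for, grouping on (Mgr ID, Department ID) and computing the mean salary and employee count. The sample rows are invented for illustration.

```python
from collections import defaultdict

# (mgr_id, dept_id, salary, emp_id) -- made-up sample data
rows = [
    (1, 10, 50000, 101),
    (1, 10, 70000, 102),
    (2, 20, 60000, 103),
]

# (mgr_id, dept_id) -> [salary_sum, employee_count]
totals = defaultdict(lambda: [0, 0])
for mgr, dept, salary, emp in rows:
    totals[(mgr, dept)][0] += salary
    totals[(mgr, dept)][1] += 1

# (mgr_id, dept_id) -> (average_salary, employee_count)
summary = {k: (s / n, n) for k, (s, n) in totals.items()}
```

In a DataStage job the equivalent would be an Aggregator stage with Mgr ID and Department ID as grouping keys, computing a mean of Salary and a count of rows.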
What are sequencers?
Is there no issue when you try to convert a NOT NULL column to nullable, and vice versa, in the Aggregator and Transformer stages? When I tried, I got warnings, but I can see such scenarios in running code. Please explain.
How can I capture unmatched records from the primary source and the secondary source in a Join stage?
How do you do performance tuning in DataStage?
What is quality stage?
What are the differences between the hash and modulus partitioning methods?
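The two partitioning ideas behind this question can be sketched as follows. Modulus partitioning uses an integer key directly (key % N), while hash partitioning hashes the key first, so non-numeric keys also work. The partition count and keys are made up for the example, and CRC32 is only a stand-in for DataStage's internal hash function.

```python
import zlib

N = 4  # hypothetical number of partitions (nodes)

def modulus_partition(key: int) -> int:
    # Modulus: the integer key value itself selects the partition,
    # so the key must be numeric.
    return key % N

def hash_partition(key) -> int:
    # Hash: hash the key first (CRC32 here as an illustrative stand-in),
    # so string keys and multi-part keys can also be partitioned.
    return zlib.crc32(str(key).encode()) % N
```

This mirrors the practical difference in DataStage: modulus partitioning requires a numeric key column, while hash partitioning accepts keys of any type.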
How many input links can you give to a Transformer stage?
I am getting an input value like X = Iconv("31 DEC 1967", "D"). What is the value of X, and how is it derived? In what situations do we use Iconv() and Oconv()?
What is the roundrobin collector?