1) How much data do you get every day?
2) What kind of data does your project contain?
3) What is the source in your project? What is the biggest table, and what is its size, in your schema or in your project?
Answers were Sorted based on User's Feedback
Answer / kiran
1) How much data do you get every day?
Ans: No one can say exactly how much data will arrive; it varies from company to company and from client to client. First, consider whether you are on the development side or the production side: real data volumes are seen only in production, not in development. Approximately, you can say around 5 lakh (500,000) records per day.
2) What kind of data does your project contain?
Ans: Flat files.
3) What is the source in your project? What is the biggest table, and what is its size, in your schema or in your project?
Ans: First, all the data arrives as text in source files; it is then populated into Oracle, then cleansed, and finally loaded into the data warehouse.
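The staging flow described in that answer can be sketched in a few lines. This is a minimal illustration only: SQLite stands in for Oracle, and the file layout, column names, and cleansing rules are assumptions, not details from the project.

```python
# Sketch of the flow: flat file -> staging table -> cleansing -> warehouse.
# SQLite replaces Oracle here; columns (cust_id, amount) are illustrative.
import csv
import io
import sqlite3

def run_pipeline(flat_file_text):
    conn = sqlite3.connect(":memory:")
    cur = conn.cursor()
    cur.execute("CREATE TABLE staging (cust_id TEXT, amount TEXT)")
    cur.execute("CREATE TABLE warehouse (cust_id INTEGER, amount REAL)")

    # 1. Land the raw flat-file records in the staging table as text.
    reader = csv.reader(io.StringIO(flat_file_text))
    cur.executemany("INSERT INTO staging VALUES (?, ?)", reader)

    # 2. Cleanse: reject rows whose key or amount is not numeric,
    #    then load the surviving rows into the warehouse table.
    for cust_id, amount in cur.execute("SELECT * FROM staging").fetchall():
        try:
            cleansed = (int(cust_id), float(amount))
        except (TypeError, ValueError):
            continue  # rejected row is simply dropped in this sketch
        cur.execute("INSERT INTO warehouse VALUES (?, ?)", cleansed)

    conn.commit()
    return cur.execute("SELECT * FROM warehouse ORDER BY cust_id").fetchall()

rows = run_pipeline("101,250.50\n102,abc\n103,99.99\n")
# rows -> [(101, 250.5), (103, 99.99)]  ("102,abc" is rejected by cleansing)
```

In a real DataStage job the same three steps would typically be a Sequential File stage, a staging database stage, and a Transformer with a reject link, rather than hand-written SQL.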
What are the partitioning techniques available in the Link Partitioner?
How are rejected rows managed in DataStage?
Can we use a Sequential File stage as a lookup?
Why use a Dataset stage?
How do you create environment variables and call them? What are user-defined variables?
What are horizontal, vertical, and diagonal transformations?
What is the Sort/Merge collector?
1) What is your configuration file structure? 2) I have two Oracle databases; loading data from source to target takes 30 minutes, but I want the load to take less time. How?
How many nodes did the configuration file in your last project use?
1) I have 5 jobs (1-5) connected to each other; I want to run only jobs 3-5. How? 2) How do you schedule a job in DataStage 7.5.2? 3) What is the difference between the grep and fgrep commands? 4) How do you cleanse the data in your project?
What will happen if we allow duplicates in a DataStage lookup: abort, drop the record, take the first value of the duplicate records, or none?
A flat file contains 200 records. I want to load the first 50 records the first time the job runs, the second 50 records the second time it runs, and so on. How would you develop this job? Please give the steps.