How do we stop loading records after 10,000 records have come in?
Answers were Sorted based on User's Feedback
-->If the source is a file:
"Read First Rows" = 10000 --> read only the first specified number of rows from each file.
"Filter" = sed -n '1,10000p' or head -n 10000 --> we can use UNIX commands in the Filter option to limit the data.
-->If the source is a database:
Oracle: WHERE ROWNUM <= 10000
DB2: FETCH FIRST 10000 ROWS ONLY
Teradata: SELECT TOP 10000 (or SAMPLE 10000 for a random sample)
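As a small Python sketch of the row-limited source queries (the table name 'orders' is made up; the generated SQL would be pasted into the source stage's user-defined query):

# Build the row-limited SELECT for each database type.
ROW_LIMIT = 10000
queries = {
    'Oracle':   f"SELECT * FROM orders WHERE ROWNUM <= {ROW_LIMIT}",
    'DB2':      f"SELECT * FROM orders FETCH FIRST {ROW_LIMIT} ROWS ONLY",
    'Teradata': f"SELECT TOP {ROW_LIMIT} * FROM orders",
}
for db, sql in queries.items():
    print(db, '->', sql)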
-->In the Transformer stage:
We can use the system variable @INROWNUM in the output constraint: @INROWNUM <= 10000.
Or we can use a stage variable as a counter and, in the constraint, use Stage_Var <= 10000. (In a parallel job @INROWNUM counts rows per partition, so run the Transformer in sequential mode if you need exactly 10,000 rows overall.)
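The constraint logic can be mimicked outside DataStage with a simple counter; in this Python sketch, 'rows' only stands in for the Transformer's input link:

# Pass a row only while the running count (like @INROWNUM or a
# stage-variable counter) is still <= 10,000; later rows are dropped.
def first_n(rows, limit=10000):
    count = 0
    for row in rows:
        count += 1
        if count > limit:
            break          # constraint fails, stop emitting rows
        yield row          # constraint passes, row goes to the output link

print(sum(1 for _ in first_n(range(25000))))   # -> 10000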
| Is This Answer Correct ? | 4 Yes | 0 No |
Answer / amulya kumar panda
The Transformer stage has system variables; select @INROWNUM and use
@INROWNUM <= 10000 in the output constraint.
| Is This Answer Correct ? | 0 Yes | 0 No |
Answer / guest
Before running the job, the Job Run Options dialog asks for row limits (stop stages after N rows) and warning limits; there we can mention the number of records (10,000).
| Is This Answer Correct ? | 0 Yes | 3 No |
A flat file contains 200 records. I want to load the first 50 records the first time the job runs, the second 50 records the second time, and so on. How can you develop this job?
In a sequence of 30 jobs, one job is very slow and because of it the entire sequence is slow. How can you find out which job is the slow one?
i/p: 1 1 1 2 2 3 4 5 6; o/p1: 1 1 1 2 2 3; o/p2: 4 5 6. How do you populate the i/p rows into o/p1 & o/p2 using DataStage stages? And how would you do the same scenario using SQL?
What is a surrogate key? What is the use of a surrogate key? How do you create a Surrogate Key Generator in SCD type 2 in 8.5?
If I have two tables, table1 (1 a / 1 b / 1 c / 1 d / 2 a / 2 b / 2 c / 2 d / 2 e) and table2 (1 a,b,c,d / 2 a,b,c,d,e), how can I get the data to look the same as in the tables? How can I implement SCD type 1 and type 2 in both server and parallel jobs? Given field1, field2, field3 with rows suresh, 10,324, 355, 1234; ram, 23,456, 450, 456; balu, 40,346,23, 275, 5678: how do you remove the duplicate rows in the fields?
Hi all, I have one scenario: source ---> Transformer ---> 2 target sequential files. The 1st target sequential file loads the data from the source, and the 2nd target sequential file contains the 1st target's total record count, the file name of the 1st target sequential file, and a timestamp, separated by a delimiter. For example, if the source has 10 records, the 1st target sequential file has 10 records and the 2nd target sequential file has one line like 10|xyz.txt|20101110 00:00:00. Could you please help me out with how I can implement this in a DataStage job?
Differentiate between hash file and sequential file?
I want to send all my duplicate records to one target and all unique records to another target. How will we perform this? Explain with an example: input data eid: 251, 251, 456, 456, 951, 985; output/target1: 251, 251, 456, 456; output/target2: 951, 985. How will we achieve this?
What can we do with datastage director?
How many ways are there to remove duplicate records?
How do you generate sequence number in datastage?
1) 8000 jobs are there and I have given a commit; suddenly the job aborts. What happens? 2) What is the difference between the Transformer stage and the Filter stage? 3) How do you load the data in the source?