
One file contains:

col1
100
200
300
400
500
100
300
600
300

From this I want to retrieve only the duplicate values, like this:

tr1
100
100
300
300
300

How is this possible in DataStage? Can anyone please explain clearly?

Answer Posted / vinod upputuri

In order to collect the duplicate values:

First, calculate the count in an Aggregator stage:
group by the column (col1),
aggregation type: count rows,
and write the count to an output column.

Next, use a Filter stage to separate the rows whose count is greater than 1 (the multiple occurrences).

Finally, use a Join stage or a Lookup stage to map the two datasets back together, with join type INNER.

Then you can get the desired output.
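For reference, the same three steps (count per key, filter on count > 1, inner join back to the source) can be sketched outside DataStage. Below is a minimal Python illustration of the logic only, not DataStage syntax, using the sample values from the question:

```python
from collections import Counter

# Values from the col1 column in the question.
rows = [100, 200, 300, 400, 500, 100, 300, 600, 300]

# Step 1 (Aggregator stage): count the occurrences of each key.
counts = Counter(rows)

# Step 2 (Filter stage): keep only the keys that occur more than once.
duplicate_keys = {key for key, n in counts.items() if n > 1}

# Step 3 (Join/Lookup stage, inner join): keep every original row whose key
# is in the duplicate set, so each occurrence of 100 and 300 survives.
duplicates = sorted(value for value in rows if value in duplicate_keys)

print(duplicates)  # [100, 100, 300, 300, 300]
```

The inner join in the last step is what preserves every occurrence of a duplicated key, which is why the output contains 100 twice and 300 three times.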



Please Help Members By Posting Answers For the Questions Below

What is the process of killing a job in DataStage?



In Informatica, for a given table I can find the corresponding dependent mappings. Likewise, can I find the dependent jobs, with all their information, by using the table name?



How can we perform a second extraction from the client database without picking up the data that was already loaded during the first extraction?



What are the different options associated with the dsjob command?



Name the different sorting methods in DataStage.



What are the functionalities of the Link Partitioner?



What is a DataStage macro?



How do you remove duplicate values in DataStage?



What are .ctl (control) files? How does the Dataset stage achieve better performance using these files?



What are the stages in DataStage?



What is the use of array size in DataStage?



Whom do you report to?



Create a job that splits the data in the Jobs.txt file into four output files. You will direct the data to the different output files using constraints.
• Job name: JobLevels
• Source file: Jobs.txt
• Target file 1: LowLevelJobs.txt
− min_lvl between 0 and 25 inclusive.
− Same column types and headings as Jobs.txt.
− Include column names in the first line of the output file.
− The job description column should be preceded by the string "Job Title:" and embedded within square brackets. For example, if the job description is "Designer", the derived value is: "Job Title: [Designer]".
• Target file 2: MidLevelJobs.txt
− min_lvl between 26 and 100 inclusive.
− Same format and derivations as Target file 1.
• Target file 3: HighLevelJobs.txt
− min_lvl between 101 and 500 inclusive.
− Same format and derivations as Target file 1.
• Rejects file: JobRejects.txt
− min_lvl is out of range, i.e., below 0 or above 500.
− This file has only two columns: job_id and reject_desc.
− reject_desc is a variable-length text field, maximum length 100. It should contain a string of the form "Level out of range: <min_lvl>", where <min_lvl> is the value in the min_lvl field.
My question is: how do you write the stage variable for the reject rows?
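From the specification above, the reject condition is simply the out-of-range test (min_lvl below 0 or above 500), and reject_desc is derived by concatenating the literal text with the min_lvl value. Below is a minimal Python sketch of that routing logic under those assumptions; it is not DataStage Transformer syntax, and the route_job helper is hypothetical, named here only for illustration:

```python
def route_job(job_id, job_desc, min_lvl):
    """Return (target_file, output_record) for one row of Jobs.txt."""
    derived_desc = f"Job Title: [{job_desc}]"  # derivation shared by the three target files
    if 0 <= min_lvl <= 25:
        return "LowLevelJobs.txt", (job_id, derived_desc, min_lvl)
    if 26 <= min_lvl <= 100:
        return "MidLevelJobs.txt", (job_id, derived_desc, min_lvl)
    if 101 <= min_lvl <= 500:
        return "HighLevelJobs.txt", (job_id, derived_desc, min_lvl)
    # Reject rows: min_lvl below 0 or above 500; only job_id and reject_desc are kept.
    return "JobRejects.txt", (job_id, f"Level out of range: {min_lvl}")

print(route_job(1, "Designer", 12))   # ('LowLevelJobs.txt', (1, 'Job Title: [Designer]', 12))
print(route_job(2, "Manager", 999))   # ('JobRejects.txt', (2, 'Level out of range: 999'))
```

In the actual job, the same out-of-range test can be placed in a stage variable (or directly in the reject link's constraint), with reject_desc derived from the literal string and the min_lvl value.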



Explain the Citrix scheduling tool used with DataStage.



Can we use a target hash file as a lookup?
