Explain SQL transformation in script mode with examples in Informatica.
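For context, a rough sketch of how script mode is commonly used: in PowerCenter, the SQL transformation in script mode receives the name of an external SQL script file on its ScriptName input port, runs the whole file against the database connection, and returns the outcome on the ScriptResult (PASSED/FAILED) and ScriptError output ports. The script below is only an illustrative example of such a file; the path, table, and index names are made up.

-- Contents of a script file, e.g. /scripts/create_emp_backup.sql, whose path
-- the mapping passes to the ScriptName port of the SQL transformation
-- running in script mode. All object names are illustrative.
CREATE TABLE emp_backup AS SELECT * FROM emp;
CREATE INDEX idx_emp_backup_no ON emp_backup (empno);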
Hi, what is version control in Informatica? Can anyone give an idea or a brief introduction about this? Thanks in advance.
What are the issues that you have faced while moving your project from the Test Environment to the Production Environment?
How can we eliminate duplicate rows from a flat file? Please explain.
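One common answer is a Sorter transformation with the Distinct option checked, or an Aggregator that groups by all ports. Purely as a sketch of what those transformations achieve, assuming the flat file were staged into a table called src_file with columns id and name (names are illustrative):

-- Equivalent of Sorter-with-Distinct / Aggregator grouping by all ports
SELECT DISTINCT id, name
FROM   src_file;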
Suppose we have a relational source table:

Product_Id  Month  Sales
1           Jan    x
1           Feb    x
...         ...    ...
1           Dec    x
2           Jan    x
...         ...    ...
2           Dec    x
3           Jan    x
...         ...    ...
3           Dec    x

and so on. Assume there can be any number of product keys, and for each product key the sales figures (denoted by 'x') are stored for each of the 12 months from Jan to Dec. We want the result in the target table in the following form:

Product_Id  Jan  Feb  Mar ... Dec
1           x    x    x       x
2           x    x    x       x
3           x    x    x       x
...

How would you design the ETL mapping for this case? Explain in terms of transformations.
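A typical design is an Aggregator that groups by Product_Id and builds one output port per month using conditional aggregation, e.g. MAX(Sales, Month = 'Jan') for the Jan port. Only as a sketch, here is the equivalent pivot written in SQL, assuming the source table is called product_sales (all names are illustrative):

-- Pivot twelve monthly rows per product into one row with twelve month columns
SELECT product_id,
       MAX(CASE WHEN month = 'Jan' THEN sales END) AS jan_sales,
       MAX(CASE WHEN month = 'Feb' THEN sales END) AS feb_sales,
       -- ... one such expression for each remaining month ...
       MAX(CASE WHEN month = 'Dec' THEN sales END) AS dec_sales
FROM   product_sales
GROUP  BY product_id;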
What is the Data Transformation Manager (DTM) process?
Source and target are flat files. The source file is as below:

ID,NAME
1,X
1,X
2,Y
2,Y

On the target flat file I want the data to be loaded as below:

ID,NAME,REPEAT
1,X,2
1,X,2
2,Y,2
2,Y,2

How can this be achieved? Can I get a mapping structure?
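One common mapping structure: Source Qualifier -> Sorter (on ID) -> Aggregator (group by ID, COUNT as REPEAT) -> Joiner (join the counts back to the detail rows on ID) -> Target. Purely as a sketch of the same logic in SQL, assuming the file were staged as a table called src_file (names are illustrative):

-- Count of rows per ID attached back to every detail row
SELECT id,
       name,
       COUNT(*) OVER (PARTITION BY id) AS repeat_cnt
FROM   src_file;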
When do we not use an Aggregator transformation in a mapping?
Write a SQL query to filter out the NULL values from the following table:

name   age
john   30
smith  null
null   34
sharp  24

I want the output to be:

name   age
john   30
sharp  24
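A minimal sketch of such a query, assuming the table is called person (adjust the table name as needed):

-- Keep only the rows where neither column is NULL
SELECT name, age
FROM   person
WHERE  name IS NOT NULL
  AND  age IS NOT NULL;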
How do we eliminate duplicates in a flat file without using a Sorter or an Aggregator?
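Common answers are a dynamic Lookup on the target, a Rank transformation grouping on the key, or an Expression transformation with variable ports that compares each row with the previous one (assuming the file already arrives sorted) followed by a Filter. Only to illustrate that consecutive-row comparison idea, here is the equivalent logic in SQL with hypothetical names (it is not how the flat file itself would be read):

-- Keep a row only when its key differs from the previous row's key
SELECT id, name
FROM  (SELECT id, name,
              LAG(id) OVER (ORDER BY id) AS prev_id
       FROM   src_file) t
WHERE  prev_id IS NULL OR id <> prev_id;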
Q. We are loading a table on a daily basis using incremental loading. A person Rahul's salary was 10000, so if I check before the run, the salary is 10000. Today there is an update that the salary is 15000, but that only shows up after the load completes. The broad criterion is that we do not want to show downstream teams partially updated data. What approach should be taken as an ETL developer?
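One widely used approach is to land the incremental feed in a staging table first and then apply it to the reporting table in a single transaction (or behind a view/synonym switch), so downstream readers see either the old data or the fully updated data, never a mix. A rough sketch, assuming Oracle-style MERGE syntax and hypothetical table names:

-- 1. Land the day's changes in a staging table
TRUNCATE TABLE emp_salary_stg;
INSERT INTO emp_salary_stg (emp_id, emp_name, salary)
SELECT emp_id, emp_name, salary
FROM   src_emp_salary
WHERE  last_updated_ts > (SELECT MAX(load_ts) FROM etl_control WHERE tgt = 'EMP_SALARY');

-- 2. Apply all changes atomically so readers never see a partial update
MERGE INTO emp_salary t
USING emp_salary_stg s
ON    (t.emp_id = s.emp_id)
WHEN MATCHED THEN UPDATE SET t.emp_name = s.emp_name, t.salary = s.salary
WHEN NOT MATCHED THEN INSERT (emp_id, emp_name, salary)
                      VALUES (s.emp_id, s.emp_name, s.salary);
COMMIT;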
If loading 100 records from the source table into the target table takes the session about 5 to 10 minutes to succeed, how long would the session take when the source has 100 million records, assuming no failures and the target table loads all 100 million records without any errors? Don't give an exact time; state your own assumptions and estimate.
In an SCD Type 1 load, the source has 10 billion records. On the first day the load completes successfully, but on the second day it takes a long time because some records need to be updated and new records inserted. As a developer, what would be the better solution for this?