Hi,
In a mapping I have one fixed-width file as the source and 3 targets,
with a total of 193 records. I connected one port from an Aggregator
to all 3 targets. The same values do get loaded into all 3 targets,
but in a different order in each one.
Why?
Shouldn't the insertion order be the same for all 3 targets?
Why does the order change?
Can anyone please help me?
Thanks in advance.
Answer Posted / sateesh a
Pass-through is true, and the default groups are different.
Briefly explain your complete project (sales) flow, i.e. from the source received from the client, through the transformations, to dispatch to the end user. What are all the processes involved? Kindly give a step-by-step description.
What happens when a session fails and you click on recover?
How do we call shell scripts from Informatica?
How do you know when to use a static cache versus a dynamic cache in a Lookup transformation?
Why do we use session partitioning in Informatica?
What is a joiner transformation?
What are multi-group transformations?
How to start a workflow using pmcmd command?
What are the limitations of joiner transformation?
What do you mean by enterprise data warehousing?
Hello, I am unable to get the SQL transformation to work at all. Where do I need to specify the connection for the SQL transformation? There is no such property at the session level. I have created a SQL transformation and chosen query mode, but do I need to pass connection information to it? I also don't know where to write the query: I wrote the query in a file and gave that file path in the properties of the SQL transformation, but it is not working. Could anyone please let me know how to work with the SQL transformation? Thanks in advance.
Source and target are flat files. The source is:
ID,NAME
1,X
1,X
2,Y
2,Y
On the target flat file I want the data loaded as:
ID,NAME,REPEAT
1,X,2
1,X,2
2,Y,2
2,Y,2
How can I achieve this? Can I get a map structure?
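In Informatica this is typically an Aggregator (COUNT grouped by the key) joined back to the source rows. The equivalent logic can be sketched outside the tool; this is a minimal Python sketch, and the function name and tuple layout are assumptions for illustration only:

```python
from collections import Counter

def add_repeat_column(rows):
    """rows: list of (id, name) tuples -> list of (id, name, repeat).

    First pass counts how many times each (id, name) row occurs
    (the Aggregator step); second pass emits every original row
    again with its count appended (the join-back step).
    """
    counts = Counter(rows)
    return [(i, n, counts[(i, n)]) for i, n in rows]

# Sample data from the question above.
source = [("1", "X"), ("1", "X"), ("2", "Y"), ("2", "Y")]
for i, n, r in add_repeat_column(source):
    print(f"{i},{n},{r}")   # prints 1,X,2 / 1,X,2 / 2,Y,2 / 2,Y,2
```

Note that every input row is preserved in the output, which is why the count is joined back to the detail rows rather than emitted once per group.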
Explain the Lookup transformation in Informatica.
What are the best practices for extracting data from a flat-file source larger than 100 MB?
What is meant by LDAP users?