How do we eliminate the first and last rows from the source and load only the in-between rows?
Answer / kamlesh mishra
To skip the first row, use the flat file source option that skips a specified number of initial rows: the PowerCenter Server skips that many rows before reading the file, which is meant for skipping title or header rows.
For the last row, implement the following logic:
1) Add a Sorter transformation and reverse the row order on some key field such as empno (sort descending).
2) Add an Expression transformation and bring in NEXTVAL from a Sequence Generator.
3) Add a Filter transformation with the condition NEXTVAL != 1, which drops the first row of the reversed set, i.e. the original last row.
4) (Optional) Reverse the rows again with another Sorter transformation to restore the original order.
5) Connect the respective ports from the Filter (or the second Sorter) to the target; see the sketch after the flow below.
Src ---> SQ ---> SORTER ---> EXPR ---> FILTER ---> SORTER ---> TGT
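For illustration only, here is the same row logic as a minimal plain-Python sketch (not Informatica code); the file name employees.csv and the single header row are assumptions:

    # Sketch of the mapping above: skip the header, then drop the last row by
    # reversing the data, numbering it, and filtering out sequence number 1.
    with open("employees.csv") as src:                   # hypothetical source file
        next(src)                                        # "initial rows to skip" = 1 (header)
        rows = [line.rstrip("\n") for line in src]

    rows.reverse()                                       # Sorter: reverse the row order
    numbered = list(enumerate(rows, start=1))            # Sequence Generator: NEXTVAL
    kept = [row for seq, row in numbered if seq != 1]    # Filter: NEXTVAL != 1
    kept.reverse()                                       # optional second Sorter: restore order

    for row in kept:                                     # rows that reach the target
        print(row)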
Answer / allen
Skipping the first row in a flat file is easy: the PowerCenter Server skips the specified number of initial rows before reading the file, so use that option to skip title or header rows.
Skipping the last row is not as easy. In my case the last row was a record count (a trailer row). I was able to check ports that should never be null, spaces, or zeros and reject that row. The LENGTH function can also be used to check the length of the data. Whether the file is delimited or fixed-width has to be considered in any such check; a rough sketch follows.
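For illustration only (not Informatica syntax), the kind of check described above looks roughly like this in Python; the field positions and file name are hypothetical:

    # Reject a trailer/record-count row by testing fields that should never be
    # null, blank, or zero in genuine data rows.
    def is_data_row(fields):
        if len(fields) < 2:                          # trailer rows are often shorter
            return False
        empno, ename = fields[0], fields[1]          # hypothetical ports
        if empno.strip() in ("", "0"):               # null / spaces / zero check
            return False
        if len(ename.strip()) == 0:                  # LENGTH-style check
            return False
        return True

    with open("employees.csv") as src:               # hypothetical delimited source
        for line in src:
            fields = line.rstrip("\n").split(",")
            if is_data_row(fields):
                print(fields)                        # rows that would pass the filter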
Answer / manoj subramanian
I have done it using Sequence Generator, Rank, and Filter transformations.
Filtering the first record:
Generate a sequence number with a Sequence Generator and pass it as input to a Rank transformation. In the Rank properties select Top, with the number of ranks set to the maximum allowed in Informatica, then follow it with a Filter transformation using the condition RANKINDEX <> 1.
Filtering the last record:
Follow that with another Rank and Filter pair, with the Rank property set to Bottom. The core idea is sketched below.
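Outside Informatica, this reduces to numbering every row and dropping the rows that hold the smallest and largest sequence numbers; a purely illustrative Python sketch with made-up row values:

    # Number every row, then drop the first (minimum sequence) and the last
    # (maximum sequence) while keeping everything in between.
    rows = ["r1", "r2", "r3", "r4", "r5"]            # stand-ins for source rows
    numbered = list(enumerate(rows, start=1))        # Sequence Generator output
    first_seq = min(seq for seq, _ in numbered)      # what the Top rank isolates
    last_seq = max(seq for seq, _ in numbered)       # what the Bottom rank isolates
    kept = [row for seq, row in numbered if seq not in (first_seq, last_seq)]
    print(kept)                                      # ['r2', 'r3', 'r4']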
Answer / sagarreddy
If the source is a relational (Oracle) table, you can get this easily with a SQL override in the Source Qualifier, for example:
select * from tablename
where rowid not in (select min(rowid) from tablename
                    union
                    select max(rowid) from tablename);
This selects the full rows rather than just the ROWIDs. Note that it works only for an Oracle relational source, and the minimum and maximum ROWID are not guaranteed to correspond to the first and last rows of the original load order.
Answer / maneesh
To eliminate the first row, simply use a counter in an Expression transformation and then a Filter.
To eliminate the last row, build two pipelines from the same source. In the first pipeline, use an Aggregator without any group-by ports, so only the last row passes through. In the second pipeline, join the full source with that single row using a Joiner (outer join), and then load all rows other than the one that joined, as sketched below.
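For illustration only (not Informatica syntax), the two-pipeline idea boils down to isolating the last row and filtering out whatever matches it; the keys and values below are made up:

    # An Aggregator with no group-by ports passes through only the last row,
    # so joining it back to the full source identifies the row to drop.
    rows = [(1, "a"), (2, "b"), (3, "c")]               # hypothetical (key, value) rows
    last_row = rows[-1]                                  # Aggregator with no group-by
    loaded = [r for r in rows if r[0] != last_row[0]]    # drop the row that joins on the key
    print(loaded)                                        # [(1, 'a'), (2, 'b')]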
What is the procedure for creating independent data marts using Informatica 7.1?
My source is a comma-delimited flat file with the fields eno, ename, sal, for example 111,sri,ram,kumar,1000, and my target should be eno = 111, ename = sri ram kumar, sal = 1000; i.e. we need to eliminate the commas within the data of a comma-delimited file.
What are the types of metadata that are stored in the repository?
In what order do we have to use ORDER BY, WHERE, and HAVING when we implement a SQL query?
A table contains some NULL values. How do we get "Not Applicable" (NA) in place of those NULL values in the target?
What are the Designer tools for creating transformations?
What is incremental aggregation? Give an example.
What is the size of your source (file system or database)? How many records come in daily in your banking project?
How do you generate unique sequence numbers for each partition in a session with an Unconnected Lookup?
Hi all, please help me resolve the following issue while applying partitioning to my session. It is a very simple mapping with a source, Lookup, Router, and target. I need to look up on the target and compare with the source data: if any piece of data is new, insert it, and if anything has changed in the existing data, update it. While inserting the new records into the target table, I generate sequence numbers with an Unconnected Lookup by fetching the maximum PK ID from the target table. This flow has been working fine for the last year.
Now I want to apply partitioning to this session. At the source I used 4 pass-through partitions (each partition has a different filter condition to pull data from the source), and at the target I used 4 pass-through partitions. It works fine for some data, but for some insert rows it throws unique-key errors, because the same sequence key is generated twice. In detail: the 1st row comes from the 1st partition and gets sequence number 1; the 2nd row comes from the 1st partition and gets sequence number 2; the 3rd row comes from the 2nd partition and again gets sequence number 2 (it should get 3). The issue is that the same sequence numbers are generated for different partitions.
Can anyone please help me resolve this? When applying partitions, how can I generate unique sequence numbers from an Unconnected Lookup for each partition's data? (A small sketch of why the collision happens is below.) Regards, N Kiran.
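For illustration only, a tiny Python sketch of why per-partition counters seeded from the same looked-up MAX(PK) collide; the numbers are made up and this is not Informatica behaviour code:

    # Each partition starts its own counter from the same MAX(PK) read by the
    # unconnected lookup, so two partitions can hand out the same next value.
    max_pk = 100                                     # MAX(PK) looked up at session start
    partition_counters = {1: max_pk, 2: max_pk}      # independent counters per partition

    def next_val(partition):
        partition_counters[partition] += 1
        return partition_counters[partition]

    print(next_val(1))   # 101
    print(next_val(1))   # 102
    print(next_val(2))   # 101 again -> unique-key error on insert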
How can we confirm all the mappings in the repository simultaneously?
How do we load duplicate records into a target table which has a primary key?
How do we create different types of slowly changing dimensions (SCD) in Informatica using the mapping wizard?