Which gives better performance: a fixed-width file or a delimited file, and why?
Answers were sorted based on users' feedback
Answer / ams
It all depends on the data and on what you are ultimately doing with it. Any consumer of that data will eventually want to break it down into fields, and while breaking those fields apart you have to check for the maximum field size as you read through the data stream.
So if you have an 80-character fixed-width field, you will ultimately make 80 comparisons to see whether you have reached the field maximum.
Breaking down delimited data requires not only the check for the field maximum, but also a check for the end-of-field delimiter.
So if your delimited field is only 20 characters long, you will make only about 40 comparisons across those 20 characters (two checks per character).
So the more padding there is in your fields, the less efficient fixed width becomes during processing.
An additional factor is the actual number of bytes that have to be transmitted. In almost all cases, a delimited file requires fewer characters to be transmitted.
Is this answer correct? 3 Yes | 0 No
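A rough sketch, in Python, of the comparison counting described in the answer above. The field width, delimiter, sample values, and helper functions are all made up for illustration (they are not part of any real file-reader API); the point is that a fixed-width field costs one check per padded byte, a delimited field costs roughly two checks per byte actually present, and the padding also has to be transmitted.

```python
# Hypothetical illustration: one 80-byte fixed-width field vs. a comma-delimited field.

def parse_fixed(record, width=80):
    """Read one fixed-width field: one 'reached the field max?' check per byte."""
    comparisons = 0
    for i, _ch in enumerate(record):
        comparisons += 1              # check: reached the field maximum?
        if i + 1 == width:
            break
    return record[:width].rstrip(), comparisons

def parse_delimited(record, delimiter=",", max_width=80):
    """Read one delimited field: a field-max check AND a delimiter check per byte."""
    comparisons = 0
    field = []
    for i, ch in enumerate(record):
        comparisons += 1              # check: reached the field maximum?
        if i == max_width:
            break
        comparisons += 1              # check: is this character the delimiter?
        if ch == delimiter:
            break
        field.append(ch)
    return "".join(field), comparisons

if __name__ == "__main__":
    fixed_record = "SMITH".ljust(80)          # 5 real characters + 75 bytes of padding
    delim_record = "SMITH,ACCOUNTING,NY"      # same value, comma-delimited

    print(parse_fixed(fixed_record))          # ('SMITH', 80) -> one check per padded byte
    print(parse_delimited(delim_record))      # ('SMITH', 12) -> roughly two checks per real byte

    # Bytes that would have to be transmitted for this one field:
    print(len(fixed_record), len("SMITH,"))   # 80 vs. 6
```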
Answer / koti
Fixed width surely gives the best performance, because the reader does not have to check every character to see where the delimiter occurs; the field positions are already known.
Is this answer correct? 2 Yes | 3 No
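A minimal sketch of the point this answer is making, assuming a made-up column layout: when the widths are fixed, each field can be taken as a slice at a known offset, with no per-character search for a delimiter at all.

```python
# Hypothetical fixed-width layout: (name, start offset, length). Offsets are invented.
LAYOUT = [("empname", 0, 10), ("empno", 10, 5), ("sal", 15, 8)]

def split_fixed(record):
    """Slice each field directly by offset -- no delimiter scan needed."""
    return {name: record[start:start + length].strip()
            for name, start, length in LAYOUT}

print(split_fixed("ram       101  1000    "))
# {'empname': 'ram', 'empno': '101', 'sal': '1000'}
```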