1) What is your project architecture?
2) How do you move a project from development to UAT?
3) What is the difference between DataStage 6, 7.1 and
DataStage 7.5?
4) How do you do error handling in DataStage?
5) What are unit testing, system testing and integration
testing?
6) What is the exact difference between the BASIC Transformer
and the normal Transformer? When would we go for the BASIC or
the normal Transformer?

7) Why do we use third-party tools in DataStage?
8) What is the purpose of the debugging stages? Where are they
used in real time?

Answers were Sorted based on User's Feedback




Answer / hemachandra

Answer for question 2.

2) We can move the project from DEV to UAT by using DataStage Manager:
- In DataStage Manager, export the project from the DEV server to your
local machine in .dsx format (project.dsx).
- Import the same .dsx file (project.dsx) into the UAT server, again
using DataStage Manager.
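
The same export/import can also be scripted instead of done through the Manager GUI. The sketch below is only an illustration: it assumes the DataStage client command-line tools dscmdexport and dscmdimport are installed and on PATH, and the host names, project name, credentials and the /NUA option are placeholders whose exact syntax may differ between DataStage versions.

"""Rough sketch: promote a project from DEV to UAT via .dsx export/import.
Assumes the DataStage client command-line tools dscmdexport/dscmdimport
are installed and on PATH; option syntax varies between DataStage
versions, so check your client documentation before relying on this.
"""
import subprocess

DEV_HOST = "dev_dsserver"           # placeholder DEV server name
UAT_HOST = "uat_dsserver"           # placeholder UAT server name
PROJECT = "myproject"               # placeholder project name
DSX_FILE = r"C:\exports\project.dsx"
USER, PASSWORD = "dsadm", "secret"  # placeholders only

def run(cmd):
    """Run a client command and stop if it returns a non-zero exit code."""
    print("Running:", " ".join(cmd))
    subprocess.run(cmd, check=True)

# 1. Export the whole project from the DEV server to a local .dsx file.
run(["dscmdexport",
     f"/H={DEV_HOST}", f"/U={USER}", f"/P={PASSWORD}",
     PROJECT, DSX_FILE])

# 2. Import that .dsx file into the project on the UAT server.
#    /NUA ("no usage analysis") just speeds the import up; drop it if
#    your version does not recognise the option.
run(["dscmdimport",
     f"/H={UAT_HOST}", f"/U={USER}", f"/P={PASSWORD}",
     "/NUA", PROJECT, DSX_FILE])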

Is This Answer Correct ?    21 Yes 0 No


Answer / ramya

Answer for 6th question.

1. The Transformer stage is inherent PX functionality,
whereas the BASIC Transformer uses a Server interface to
call a Server Transformer stage. There is a severe performance
impact as well as partitioning limitations, but it does
give a PX job some access to existing Server functionality.
2. There are two types of Transformer: i. the BASIC
Transformer and ii. the normal (parallel) Transformer. The BASIC
Transformer is used on SMP systems and not on MPP or cluster
systems (BASIC is the language supported by the DataStage
Server engine and is available in Server jobs), whereas in
DataStage PX the parallel Transformer is used.
3. Transformer stages are always active stages. The BASIC
Transformer stage is part of the Server product, but the PX
engine allows this stage to be called (the opposite, using a
PX stage in a Server job, is not possible).

4. In parallel jobs there are two, very different, stage
types. One is called the Transformer stage, the other is
called the BASIC Transformer stage.

Parallel jobs do not use the DataStage BASIC run machine;
instead they use a C-based environment that is/was called
osh (from "Orchestrate shell", Orchestrate being the
original name for the parallel execution environment, which
Ascential acquired by purchasing Torrent).

This is why the Transformer stage in parallel jobs does not
use BASIC: the BASIC language is not supported in osh.

The BASIC Transformer stage in parallel jobs causes osh to
invoke the DataStage BASIC run machine for the stage to be
executed. This is obviously an extra overhead.


5. The Transformer stage in parallel jobs cannot access
BASIC functions. It uses a completely different
expression editor, with a largely equivalent list of
operators, and a different, though overlapping, list of
functions and other operands (but they are not BASIC
functions and operators).

The BASIC Transformer stage in parallel jobs is a direct
equivalent of the Transformer stage in server jobs. Its use
carries a performance overhead, because the BASIC run
machine also needs to be loaded when the parallel job is
executed. But the convenience might well override this
consideration, particularly given the richness of the
function set in BASIC.

Is This Answer Correct ?    5 Yes 1 No


Answer / rajesh

1) Bottom-up architecture.
2) Through DataStage Manager, export the project from the development server to the local machine in .dsx format, and then import it into the UAT server.
4) Go to the status view in DataStage Director; there we can see the errors.
6) BASIC Transformer: compiles in the BASIC language and takes less time to compile. It does not support multiple nodes and is used in server jobs.
Normal Transformer: compiles to C++ and takes more time to compile than the BASIC Transformer. It is used in parallel jobs and supports multiple nodes.
7) Third-party tools are more reliable for supporting a particular process, such as scheduling or data modelling, e.g. AutoSys, Erwin.
8) To pick up sample data and to test the job.
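
To complement point 4: besides the Director status view, the same status and error information can be pulled from the command line. This is only a sketch, assuming the dsjob client command is available; the project and job names are made up and option spellings may differ slightly between DataStage versions.

"""Rough sketch: pull a job's status and recent log entries without the
Director GUI, assuming the DataStage dsjob command-line client is on PATH.
Project/job names are placeholders; verify options against your version.
"""
import subprocess

PROJECT = "myproject"   # placeholder project name
JOB = "my_load_job"     # placeholder job name

def dsjob(*args):
    """Run dsjob with the given arguments and return its stdout."""
    result = subprocess.run(["dsjob", *args],
                            capture_output=True, text=True)
    return result.stdout

# Summary of the job's current status (similar to the Director status view).
print(dsjob("-jobinfo", PROJECT, JOB))

# Summary of log entries for the last run; warning/fatal lines show the errors.
print(dsjob("-logsum", PROJECT, JOB))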

Is This Answer Correct ?    6 Yes 2 No


Answer / nisha

4) Error handling can be done by using a reject link. The errors coming
out of the job need to be captured in a sequential file, and that file
is then read by another job which loads these exceptions or errors into
the database.
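
For illustration only, outside DataStage itself: the reject-link pattern described above (split bad rows off to a reject file with a reason, then load that file into an error table later) looks roughly like the Python sketch below. All column names and validation rules here are made-up examples, not part of any real job.

"""Illustration of the reject-link pattern: rows that fail validation are
written to a reject file with a reason column, so a later job can load
them into an error table. Field names and rules are made-up examples.
"""
import csv

def split_rows(rows):
    """Yield ('ok', row) or ('reject', row + reason) for each input row."""
    for row in rows:
        if not row.get("customer_id"):
            yield "reject", {**row, "reject_reason": "missing customer_id"}
        elif not row.get("amount", "").replace(".", "", 1).isdigit():
            yield "reject", {**row, "reject_reason": "non-numeric amount"}
        else:
            yield "ok", row

with open("input.csv", newline="") as src, \
     open("target.csv", "w", newline="") as good, \
     open("rejects.csv", "w", newline="") as bad:
    reader = csv.DictReader(src)
    good_writer = csv.DictWriter(good, fieldnames=reader.fieldnames)
    bad_writer = csv.DictWriter(bad,
                                fieldnames=reader.fieldnames + ["reject_reason"])
    good_writer.writeheader()
    bad_writer.writeheader()
    for kind, row in split_rows(reader):
        (good_writer if kind == "ok" else bad_writer).writerow(row)

# A separate job (or script) would later load rejects.csv into an
# error/audit table in the database, as the answer above describes.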

Is This Answer Correct ?    6 Yes 4 No


Answer / sagar.

Answer for question 7.
Firstly, a C++ compiler (e.g. the GCC C++ compiler) is used in the UNIX
environment to compile the generated code.
Secondly, Tivoli critical alerts are used for monitoring jobs: if any
job fails, a notification mail is automatically fired to the support
person who maintains the project.

Is This Answer Correct ?    2 Yes 1 No


Answer / nish

Third-party tools are mostly employed for automation of job scheduling and monitoring.
Products include:
IBM TWS (Tivoli Workload Scheduler)
Control-M
AutoSys

Your jobs might be part of a chain of other jobs, so the use of such tools becomes inevitable.
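
These schedulers typically just invoke DataStage jobs from the command line and react to the exit code. Below is a minimal sketch of such a wrapper, assuming the dsjob client command is available; the project and job names and the example parameter are placeholders, and flag syntax can vary between DataStage versions.

"""Minimal sketch of the kind of wrapper a scheduler (TWS, Control-M,
AutoSys, ...) would call to run a DataStage job, assuming the dsjob
command-line client is installed. Names below are placeholders.
"""
import subprocess
import sys

PROJECT = "myproject"     # placeholder project name
JOB = "daily_load_job"    # placeholder job name

# -run starts the job; -jobstatus makes dsjob wait and reflect the job's
# finishing status in its exit code, which is what the scheduler keys on.
result = subprocess.run(
    ["dsjob", "-run", "-jobstatus",
     "-param", "LOAD_DATE=2024-01-01",   # example job parameter
     PROJECT, JOB]
)

# A non-zero exit code tells the scheduler the job failed or aborted, so it
# can hold the rest of the chain and raise an alert.
sys.exit(result.returncode)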

Is This Answer Correct ?    1 Yes 0 No


More Data Stage Interview Questions

SOURCE LIKE
I_D,F1,F2
---------
100,N,Y
100,N,N
100,Y,N
101,Y,Y
101,N,Y
102,Y,N
103,N,N
104,Y,Y
105,N,N
106,N,Y
102,N,Y
105,Y,Y

O/P LIKE
ID flag1 flag2
101 Y Y
101 N Y
102 Y N
102 N Y
104 Y Y
106 N Y

4 Answers  


1)Source file contains one record, I want 100 records in target file.

3 Answers  


What is the difference between odbc and drs stage?

0 Answers  


Where do the datastage jobs get stored?

0 Answers  


How to perform incremental load in datastage?

0 Answers  






Hi guys, please design a job for this.
MY INPUT IS:
COMPANY,LOCATION
IBM,CHENNAI
IBM,HYDRABAD
IBM,PUNE
IBM,BANGLOORE
TCS,CHENNAI
TCS,MUMBAI
TCS,BANGLOORE
WIPRO,HYDRABAD
WIPRO,CHENNAI
HSBC,PUNE
MY OUTPUT IS:
COMPANY,LOCATION,COUNT
IBM,chennai,hydrabad,pune,banglore,4
TCS,chennai,mumbai,bangloore,3
WIPRO,hydrabad,chennai,2
HSBC,pune,1
Thanks

3 Answers   IBM,


My source is a sequential file and my target is a dataset. I am running the job with a two-node configuration file. My source has 10 records; how does the data move to the target?

3 Answers   TCS,


How to create a user-defined environment variable (parameter)?

1 Answers   TCS,


Difference between IBM DataStage 8.5 and DataStage 9.1?

0 Answers   ABC, TCS,


I have 2 files and need 3 outputs like this:

fileA  fileB  Output1  Output2  Output3
1      6      1        6        11
2      7      2        7        12
3      8      3        8        13
4      9      4        9        14
5      10     5        10       15
6      11
7      12
8      13
9      14
10     15

Please let me know.

6 Answers  


Please tell me a good training centre with job opportunities for DataStage in Chennai?

1 Answers  


1. What is materialized data? 2. How to view the materialized data?

0 Answers   HCL, IBM,

