
Question

Is the Sorter transformation passive or active when the DISTINCT box is checked or unchecked? Why?
Answer
#1
The Sorter transformation is passive when the DISTINCT option is unchecked, because it does not change the number of records passing through it. Once the DISTINCT option is checked, it may change the number of records if the input contains duplicates.
Answer
#2
It is Active transformation.
If you configure the Sorter transformation for distinct output rows, the Mapping
Designer configures all ports as part of the sort key. When the PowerCenter Server
runs the session, it discards duplicate rows compared during the sort operation.
Answer
#4
The Sorter is an active transformation. If you check the Distinct option it will output only the distinct rows from the source.
Answer
#5
If we check the Distinct box we are eliminating duplicate records. So when Distinct is checked, the sorter removes duplicates while it sorts, and the number of records in the target can be less than in the source; for that reason it is called active. If we uncheck it, all the records pass through to the target table, so it is passive.
Question
if i have records like these
(source table)
rowid name
10001 gdgfj
10002 dkdfh
10003 fjfgdhgjk
10001 gfhgdgh
10002 hjkdghkfh

the target table should look like this by using an Expression transformation.
(Target table)
rowid name
10001 gdgfj
10002 dkdfh
10003 fjfgdhgjk
xx001 gfhgdgh
xx002 hjkdghkfh
(that means duplicate records should contain XX in their rowid)

Answer
#1
Create an output port and write an expression to replace the values, or create a stored procedure and call it from an Expression transformation.
Answer
#2
Through Dynamic lookup you can handle it very easily.
Answer
#4
First make sure the values are sorted by rowid and passed to an Expression transformation. Now create a variable port in the Expression, say VAR1, and leave it empty initially.

Now compare rowid with VAR1 using

IIF(VAR1 = ROWID, 'XX' || SUBSTR(ROWID, 3, 3), ROWID)

When the session starts, VAR1 is initially null, so it is simply replaced with the first rowid. When the next record with the same rowid is passed, the values match and the rowid is output in the required 'XX' format. Remember to assign the current rowid to VAR1 so the comparison works for the following row, and pass the result on to the next transformation or target.

Hope this solves it.


Please let me know if anything is wrong or does not work out.
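A minimal sketch of the variable-port logic described in #4 (port names are illustrative; the input must be sorted by rowid so duplicates arrive consecutively):

  v_FLAG  (variable) = IIF(ROWID = v_PREV, 1, 0)
  v_PREV  (variable) = ROWID
  o_ROWID (output)   = IIF(v_FLAG = 1, 'XX' || SUBSTR(ROWID, 3, 3), ROWID)

Because variable ports are evaluated in order before the output port, v_FLAG still sees the previous row's value of v_PREV when it makes the comparison.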
Answer
#5
its not working
Question
Hi Friends,
I want to truncate records from the target before loading the current month's data, but I don't have permission to use the truncate option. If you know any other way, please give your valuable input.

Thanks
Abhishek
Answer
#1
Use a DELETE FROM &lt;target table&gt; statement in the session pre-SQL.
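A minimal pre-SQL sketch (the table name is an example; add a commit only if the target connection does not commit pre-SQL automatically):

  DELETE FROM TGT_SALES_MONTHLY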
Answer
#2
Abhishek Bhai,

Possible solutions:

1. While creating the connection, use a generic ID that has all the permissions on the DB objects. That way you can definitely work with your target object.

2. If no such credential is available, a delete-and-commit process can be implemented in the pre-SQL, but the process will become very slow (depending on the volume of data).

Cheers!
Answer
#3
In the Warehouse Designer, under Targets, use the Generate and Execute option to truncate the table.
Answer
#4
If the Informatica user ID does not have permission to truncate the table in the database, then there is no other way we can truncate it.

Anyway, try this option: write a pre-SQL statement in the session, so that the SQL statement executes first and then the session starts executing.
Question
Transformer is a __________ stage
1. Passive
2. Active
3. Dynamic
4. Static
Answer
#1
active
Answer
#2
Dynamic more than active, because it does not take space in your DB; it is initiated at run time with the session, caches data, does the transformations, and ends with the session.
Question
How to list the top 10 salaries without using the Rank transformation?
Answer
#1
use sorter transformation with ascending,sequence and filter....

Answer
#2
If it is a flat file your answer is right; if it is a relational source then go to the Source Qualifier properties and write a query like
select distinct a.* from t1 a
where 10 >= (select count(distinct b.sal) from t1 b where b.sal >= a.sal)
I think this works.
Answer
#3
only use the Aggregator function.....

first(sal>=values)
Answer
#4
SQL:
SELECT id, salary FROM (SELECT id, salary FROM &lt;table name&gt; ORDER BY salary DESC)
WHERE rownum &lt;= 10;
Answer
#5
It can be achieved with an inline view query in the SQ transformation.
Answer
#6
Use Sorter --> Expression --> Filter:
1) Sort descending.
2) Use a Sequence Generator connected to the Expression to generate a sequence number.
3) Filter so that only rows with sequence number &lt;= 10 pass.
Answer
#7
I think the simplest way is what Raka (#4) suggested: we can override the SQL in the Source Qualifier transformation.
Answer
#8
First use a Sorter on salary, then a Sequence Generator, then a Filter transformation:

Sorter (salary descending) ---> Sequence Generator ---> Filter (seq &lt;= 10)
Answer
#9
Use the Source Qualifier transformation; we can edit the default SQL query and write it as:
select * from (select * from emp order by sal desc)
where rownum &lt; 11;
Answer
# 10
Do the following steps:

1. In the Source Qualifier, override the query with "order by salary desc".

2. Use a Sequence Generator transformation after it and name the column "sid".

3. Next use a Filter transformation and write a condition like "sid <= 10".


Question
How to join 2 tables without using any condition?
Answer
#1
use user defined join option.
Answer
#2
It is not possible to actually join two tables without using any condition.
If you would like to join two tables, then in the Source Qualifier transformation add all the ports from the two tables (if both tables are from the same database source) and write your join condition in the user-defined join option, or otherwise write the join condition inside the SQL query option with the generated SQL query.

If you use two different database sources, then use a Joiner to join the two tables.
Answer
#5
In SQL override (Source Qualifier level) write a query to
join the 2 tables like emp & dept as follows:

Select * from emp,dept;

The above query will return the Cartesian product of the 2 tables; BTW, this doesn't have any condition.
Answer
#6
Instead of Join use subquery
Answer
#7
simply use a dummy condition like 1=1.
This works :)
Answer
#8
select * from emp natural join dept;
Question
After we make a folder shared, can it be reversed? Why?
Answer
#1
A folder cannot be reverted to its previous non-shared status.
Answer
#2
They cannot be unshared.

Because it is to be assumed that users have created
shortcuts to objects in these folders. Un-sharing them would
render these shortcuts useless and could have disastrous
consequences.
Question
How can the following be achieved in one single Informatica mapping?

* If the HEADER table record has an error value (NULL), then that record and the corresponding child records in the SUBHEAD and DETAIL tables should not be loaded into the targets (TARGET1, TARGET2 or TARGET3).

* If the HEADER table record is valid, but the SUBHEAD or DETAIL table record has an error value (NULL), then no data should be loaded into TARGET1, TARGET2 or TARGET3.

* If the HEADER table record is valid and the SUBHEAD and DETAIL table records are also valid, only then should the data be loaded into TARGET1, TARGET2 and TARGET3.

===================================================
HEADER
COL1 COL2 COL3 COL5 COL6
1 ABC NULL NULL CITY1
2 XYZ 456 TUBE CITY2
3 GTD 564 PIN CITY3

SUBHEAD
COL1 COL2 COL3 COL5 COL6
1 1001 VAL3 748 543
1 1002 VAL4 33 22
1 1003 VAL6 23 11
2 2001 AAP1 334 443
2 2002 AAP2 44 22
3 3001 RAD2 NULL 33
3 3002 RAD3 NULL 234
3 3003 RAD4 83 31

DETAIL
COL1 COL2 COL3 COL5 COL6
1 D001 TXX2 748 543
1 D002 TXX3 33 22
1 D003 TXX4 23 11
2 D001 PXX2 56 224
2 D002 PXX3 666 332
========================================================

TARGET1
2 XYZ 456 TUBE CITY2

TARGET2
2 2001 AAP1 334 443
2 2002 AAP2 44 22

TARGET3
2 D001 PXX2 56 224
2 D002 PXX3 666 332
Answer
#1
Hi,
This could be implemented in many ways. One such way is:

Assumption: all 3 tables, namely HEADER, SUBHEAD and DETAIL, belong to the same database.

Solution: in the Source Qualifier for HEADER, write a query which selects COL1 for every HEADER record that has valid values in all 3 tables. The query would look like:
SELECT H.COL1
FROM
HEADER H
INNER JOIN
SUBHEAD S
ON
H.COL1=S.COL1
AND
H.COL2 IS NOT NULL AND H.COL3 IS NOT NULL AND H.COL5 IS NOT
NULL AND H.COL6 IS NOT NULL AND
S.COL2 IS NOT NULL AND S.COL3 IS NOT NULL AND S.COL5 IS NOT
NULL AND S.COL6 IS NOT NULL

INNER JOIN
DETAIL D
ON
H.COL1=D.COL1
AND
D.COL2 IS NOT NULL AND D.COL3 IS NOT NULL AND D.COL5 IS NOT
NULL AND D.COL6 IS NOT NULL
i.e. the above query will fetch the value 2. Insert this into a flat file.

Then in the same mapping, do the following:

HEADER Source Qualifier -> Lookup on the flat file such that HEADER.COL1 = FlatFile.COL1 -> insert the valid records into the target (records which do not return NULL from the lookup). Here only the record with COL1=2 will be inserted into the target.
Repeat the same for SUBHEAD -> TARGET2 and DETAIL -> TARGET3 in the same mapping.

Please let me know if I am wrong.

Thanks, Vinithra.
Answer
#2
Hi Vinithra,

I don't think that while using a flat file as source we can edit the SQL override option and write the above-mentioned query.

Ahmed

Answer
#3
1) In the mapping join all the 3 flat files on the common column (COL1). (2 Joiners are needed.)
2) Create an Expression and drag all the columns from the final Joiner into it. In the Expression create 3 ports: H_FLAG, S_FLAG, D_FLAG.

H_FLAG = IIF(ISNULL(COL2),'X', IIF(ISNULL(COL3),'X', IIF(ISNULL(COL5),'X', IIF(ISNULL(COL6),'X','Y'))))

The result of this is that if you have NULL in any HEADER column this flag value will be 'X'.
Repeat this for DETAIL and SUBHEAD as well.
Declare these flag ports as the last ports of the Expression.
3) Create a Router with a group for
H_FLAG != 'X' AND S_FLAG != 'X' AND D_FLAG != 'X'
This will get you the records with no NULL columns in all three tables, and accordingly you can load the targets.

And if you have any other conditions, you can still use these flags in Filters to control the data flow.
Question
How to join two flat files using the Joiner t/r if there is no matching port?
Answer
#1
Yes, you can join two flat files using the Joiner t/r even if you don't have any matching port. For that you need to add one dummy column to each source file, and based on the dummy column you can join them.
Answer
#2
Hi Jana,

How can we take a dummy column, i.e. what would the values be in that port? Do we need to take dummy columns in both files?

Could you explain clearly? My id is vaas31@yahoo.in


Answer
#3
Connect the Source Qualifiers of the two different flat files to two different Expression transformations. Create a dummy output port in both Expressions, then using that port connect to the Joiner transformation.
Answer
#5
Create a DUMMY1 port for flat file 1 and a DUMMY2 port for flat file 2.

Assign the value '1' to both DUMMY1 and DUMMY2.

In the join condition use DUMMY1 = DUMMY2. This is always going to be true, so all the records are taken into consideration.
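A minimal sketch of the dummy-port approach described above (port names are illustrative):

  Pipeline 1: file 1 -> SQ -> Expression with output port DUMMY1 = 1
  Pipeline 2: file 2 -> SQ -> Expression with output port DUMMY2 = 1
  Joiner condition: DUMMY1 = DUMMY2 (always true, so every row of one file is joined with every row of the other)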
Question
What is the difference between Oracle performance and Informatica performance? Which performance is better?
Answer
#1
Generally we check the busy percentage in the session log. If the source busy % or target busy % is high, then we try to do database tuning; if the transformation % is high, then we go for the Informatica part. But from my side I would rather suggest tuning both.
Answer
#2
Oracle performance deals with the sources & targets; Informatica performance deals with the transformations. For efficient results both are important.
Answer
#3
Oracle performance is better, because Informatica is nothing but metadata; Informatica is there only to play with the data. But at the end of the day what matters is how you design the logic.
Question
How to send duplicate rows to one target and unique rows to another target? The target is empty.
Answer
#1
1) Using the dynamic lookup concept
2) Using the variable-port concept

First solution:

source > sorter > dynamic lookup > filter > Target1 and Target2
Answer
#2
source > dynamic lookup > router, with 2 conditions: 1. if the lookup column port is null then insert into target1 (unique); 2. if the lookup column port is not null then insert into target2 (duplicates).
Answer
#4
We can do this in 2 ways:
1) By the dynamic lookup option (NewLookupRow) we can load duplicate rows into one target table and unique rows into another. To do this we need a Router transformation after the Lookup, with one group for unique rows (NewLookupRow = 1) and another for duplicates (NewLookupRow = 2).
2) We can also do this with an Aggregator transformation, using COUNT(*) > 1 to identify duplicate rows; here too we need a Router transformation.
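A minimal sketch of the dynamic-lookup route described in option 1 (the NewLookupRow port is added automatically by a dynamic lookup; target names are examples):

  Source -> SQ -> Lookup on the target (dynamic cache, insert new rows) -> Router
    Group UNIQUE   : NewLookupRow = 1   (row was just inserted into the cache, i.e. first occurrence)
    Group DUPLICATE: NewLookupRow <> 1  (the key was already present in the cache)
  UNIQUE group -> T_UNIQUE, DUPLICATE group -> T_DUPLICATE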
Answer
#5
Using the Source Qualifier transformation.
Explanation:
1. Take 2 Source Qualifier transformations.
2. Connect one SQ's ports to the unique target, then write a SQL override:
SELECT DISTINCT EMPNO, ENAME FROM EMP;
3. Take the other SQ, connect all its ports to the duplicates target, then write the SQL override:
SELECT * FROM EMP WHERE ROWID IN
(SELECT ROWID FROM EMP
 MINUS
 SELECT MAX(ROWID) FROM EMP GROUP BY EMPNO, ENAME);
Question
How to load a relational source into a file target?
Answer
#1
Use the Mapping Designer to create the relational source as well as the target flat file with the required column information.

While creating the session, provide the target file directory so the file can be written there.
Answer
#2
It should be a direct 1-1 mapping, with the source being a relational table and the target a flat file.
Answer
#3
Load the data from the source system to the target in two ways: one is storing it directly, the other is via a mapping (ETL) in Informatica.
Question
What is the difference between a procedure and a stored procedure?
Answer
#2
Hi,

I think the difference is between an application procedure and a stored procedure: an application procedure is developed in Oracle Developer tools such as Forms, etc. and stored in the application itself, but a stored procedure is stored in the Oracle server.
Question
What is the process of target load planning?
Answer
#1
If you have 2 or 3 pipelines in your mapping, which pipeline will load first? The target load plan option lets you decide which pipeline loads first and which one is second; you can decide based on your requirement or logic. If you don't set the target load plan option, the Informatica server loads the pipelines in its default order.
Answer
#2
As per my knowledge, the target load plan is an important consideration while loading the targets. Generally, with one Source Qualifier in the mapping, we do not need to think about it. When the mapping has multiple Source Qualifiers the problem arises: we designate which pipeline is first and which one is next. If we do not specify it, the Informatica server still loads the targets, but not necessarily in the proper order, so we always specify the order in which to load the targets.
Question
How do we create a data mart?
Answer
#1
A data mart is a subset of a data warehouse and it supports a particular region, business unit or business function.
Data warehouses and data marts are built on dimensional
data modeling where fact tables are connected with
dimension tables. It is designed for a particular line of
business, such as sales, marketing, or finance.
Question
What is the difference between Bitmap and Btree index?
Answer
#1
A bitmap index is an index in which the data is stored in the form of bitmaps; with this we can retrieve the data very fast. A B-tree index is the default, normal kind of index.
Answer
#2
B-tree indexes are usually used when we have many distinct values, i.e. high cardinality, and bitmap indexes are used for low-cardinality columns, usually when values are heavily repeated.
Question
What is CDC? How is it used in the creation of mappings?
Answer
#1
CDC stands for change data capture. This is used to implement an incremental load approach in a data warehouse. In this approach we traditionally keep a date field and pull data based on incremental date values. This ensures that we are picking up the latest data (or the new batch of data, whichever is applicable). Implementing this in a mapping requires you to use parameter files which keep a high-water mark and low-water mark that are repeatedly used to capture fresh data.
Answer
#2
It does not always have to be a date field. When there are 5 key columns and you want to check whether any of them changed, and would like to extract only those records whose key column values were updated, then you would also use this concept of change data capture.
Answer
#3
CDC is used when you want to pull the records which have changed or been newly added in the OLTP system.

Normally the OLTP tables have 2 columns: last_updated_timestamp and added_timestamp.

Whenever a new record is added for the first time to these tables, both columns have the same timestamp, i.e. the system timestamp.

Then, when that particular record is changed, only the last_updated_timestamp column changes; the added_timestamp column remains the same forever.

Now you need to pull this record both when it was added and when it was modified, to keep your warehouse in sync with the OLTP system.

So you pull the records based on the last_updated_timestamp column (not added_timestamp).

This can be achieved by overriding the SQ query in the WHERE clause.

Example: if the product table in OLTP has 2000 records on 11th May, and on 12th May 10 new records come in and 5 records are changed, then in the next load 15 records should be pulled into your warehouse.

SELECT prd_nam, typ, grp, category FROM product
WHERE last_updated_timestamp > $$date_parameter

This is your SQ override query.

$$date_parameter is a mapping parameter which can be picked up from a parameter file (you need to have the previous load's max date in that file, which is then used as the mapping parameter).

Hope this clears your doubt.
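A minimal parameter file sketch for the $$date_parameter mentioned above (the folder, workflow and session names are only examples, and the date format must match what the override query expects):

  [MyFolder.WF:wf_load_product.ST:s_m_load_product]
  $$date_parameter=2011-05-11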


Answer
#4
CDC means that whenever data is changed in the OLTP systems, only that changed data will be captured and loaded into our target. SCDs work internally based on CDC logic; basically CDC is implemented by using an effective date.
Question
If we are using an Aggregator with the Sorted Input option enabled, but the records are not sorted, what happens?
Answer
#1
This technique is used when you want a performance boost. An aggregator cache will be created and indexed. Although you have not given sorted input, it will still work fine as it is indexed in the cache. Please note this is not like a page fault, where you have to bring the page from secondary memory when it is not found in primary memory.
Answer
#2
The session fails.
Answer
#3
The session throws a warning; you can see it in the log file. In the case of a Joiner, I guess the session fails.
Answer
#5
I think it throws an error.
Answer
#6
If you enable the 'Sorted Input' option in the Aggregator transformation, the PowerCenter server assumes that all data is sorted by group. If you use sorted input but do not actually pass sorted data, the session fails.
Answer
#7
When the 'Sorted Input' is enabled and if the data is not
sorted, then unexpected results will occur. It means the
session will succeed but the end result may not be the way
as per the business requirement.

Another point to be remembered is:

Group by columns in the Aggregator transformation must be


in the same order as they appear in the Sorter
transformation
Question
How much data is sent to production daily?
Answer
#3
It depends on your project. I am in production support; I get almost 10 million records per day.
Answer
#4
Usually the records count is in millions but the count
completely depends upon the business requirements and
varies from client to client.

It may be as low as in thousands and can go to millions of


records per day.
Question

How to handle changing source file counts in a mapping?
Answer
#1
We can maintain versions depending on changes in the source file requirements:

our current version of the source is 1.1.0.0;

if we have minor changes it becomes 1.1.0.1;

if we have major changes it becomes 1.2.0.0.
Question
How can you generate a sequence of values when the target has more than 2 billion records? (With the Sequence Generator you can generate only up to 2 billion.)
Answer
#1
By using an unconnected Lookup transformation you can check a condition, i.e. counter > 0, which will always be true; or you can use an Oracle sequence or a stored procedure. If you have any queries, revert to me on chinnadw@yahoo.com.
Answer
#2
Generate sequence values through a Sequence Generator and connect it to the target. Get its max value every time from an unconnected Lookup and set this value (max value) into a mapping variable, so the last max value is stored in the repository. Once the Sequence Generator reaches its max value, say 2 billion, set its reset property so that it starts again producing sequence numbers.

Then in an Expression write logic like

new_seq = mapping_variable + sequence_generator_value

so the numbering continues as 2 billion + 1, + 2, ... every time the Sequence Generator resets to 1 after reaching its max value.
Answer
#3
By using a variable port in an Expression transformation we can generate sequence values beyond 2 billion.
Answer
#4
Step 1: drag the source that exceeds 2 billion records into the Mapping Designer and connect it to an Expression transformation.
Step 2: add a new input port in the same Expression with a decimal data type; assume the port name is IN_PK.
Step 3: now create a Sequence Generator transformation and, without changing any property, directly connect the CURRVAL port to the newly created port (IN_PK) in the Expression.
Step 4: now open the Expression ports and add another new port with a decimal data type, say OP_PK; enable the output option of this port, open the expression editor, and write the expression CUME(IN_PK). Finally connect this output port (OP_PK) to your target table's primary key column; you will get the sequence numbers.
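For comparison, a minimal Expression-only sketch of the variable-port idea from answer #3 (port names are examples; use a high-precision decimal so the counter does not overflow):

  v_SEQ (variable, decimal 28,0) = v_SEQ + 1
  o_SEQ (output,   decimal 28,0) = v_SEQ

Variable ports keep their value from row to row within a run, so o_SEQ keeps counting past 2 billion.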
Question
How can you tune your Informatica mappings?
Answer
#1
Mappings can be tuned at the level of the different transformations. For example: a Lookup can be optimised by enabling the cache; an Aggregator can be optimised by using sorted input; a Filter can be optimised by placing it as close to the Source Qualifier as possible; and so on.
Answer
#2
Mapping can be tuned by identifying the following bottlenecks in order shown
below.
1. Target Bottlenecks
2. Source Bottlenecks
3. Transformation/Mapping Bottlenecks
4. Session Bottlenecks
5. System Bottlenecks
Question
How can you approach your client?
Answer
#1
First understand the requirements; if anything is not understandable then send a mail to the client requesting a scheduled call, and in that mail mention what will be discussed on the call.
Question
Which quality process do you follow in your project?
Answer
#1
Many processes are followed in projects; it depends on the project and the client.
Question
What is the checksum terminology in Informatica? Where do you use it?
Answer
#1
It's a validation rule: if the data is altered outside the company firewall, the checksum will automatically detect the violation and deny validation of the data.
Answer
#2
Specify whether you want the PowerChannel Server to
calculate checksum for the file transfer. Enter "yes" to
enable checksum. Enter "no" to disable checksum. If you do
not enter a value for checksum, PcCmd uses the default
checksum value in the PcCmd.properties file.
Answer
#3
Checksum is used to create unique columns in a table. When you implement a checksum on any field in the table, during load time it will automatically find the max value of the field and add 1 for every next record inserted. It is basically used in SCDs to create an index.
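In practice a checksum is often computed in an Expression transformation and compared with the value stored on the target row to detect changes; a sketch (column names are examples, and MD5() assumes a PowerCenter version that provides that function):

  o_ROW_CHECKSUM = MD5(TO_CHAR(CUST_ID) || '|' || CUST_NAME || '|' || CUST_CITY)

If o_ROW_CHECKSUM differs from the checksum stored with the target row, the record has changed and can be routed to the update flow of an SCD mapping.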
Question
How to run a batch using the pmcmd command?
Answer
#1
Using Command task in the workflow
Answer
#2
We can write a shell script which can be fired by using session parameters.
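A sketch of the pmcmd call such a script would typically issue (the service, domain, folder and workflow names are placeholders, and the exact option set differs between PowerCenter versions):

  pmcmd startworkflow -sv INT_SVC -d Domain_dev -u admin -p admin -f MY_FOLDER wf_daily_load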
Question
What is target override? What advantages does it have compared to target update?
Answer
#1
When the table has a huge number of records and performance is the key factor, we implement the target override; the update strategy approach will take more time than the target override.
Answer
#2
Target update (by using the Update Strategy) will work only when we have a primary key in the target; the update happens based on this key. If we don't have the primary key, the update will not work.

To overcome this issue, if our target does not have any key column, or if we want to update the target based on a column other than the primary key column, then we need to go for the Target Update Override option available in the target properties.
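A minimal sketch of a target update override (table and column names are examples; :TU refers to the value Informatica passes for that target port):

  UPDATE T_EMPLOYEE
  SET SALARY = :TU.SALARY, DEPT_NAME = :TU.DEPT_NAME
  WHERE EMP_NAME = :TU.EMP_NAME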
Question
I'd like the load to be triggered by the client, by placement of a file or somehow. How is it done in Informatica? I'm
using version 7.1.4
This is so urgent - I really appreciate your help :-)
Answer
#1
You could use an Event-Wait task in the workflow as the first task. Once the Event-Wait task detects the file in the specified folder, all the other session tasks will run.

Alternatively you could use UNIX commands in a Command task.
Answer
#2
You can use a pmcmd command to trigger the workflow using a
shell script. You can also use the event wait task
described in the above answer.

Alternatively you can use a UNIX script to work as an active listener. This will trigger the workflow once the presence of the file is detected in a specific directory. A scheduler tool would also be useful.

If you could describe your requirement in more detail, one could add more thoughts to it.
Answer
#3
Add an Event-Wait in your workflow and have a Unix script touch a zero-byte file in the source file directory; configure the Event-Wait to look for this zero-byte file in the source directory. As soon as the Event-Wait detects the file, the session will execute.
Question
What happens if we use an Aggregator without using group by?
Answer
#1
If we use an Aggregator without the group-by option we will get only one row out. It is the default behaviour of Informatica.

e.g. if there are 20 rows in the source table, and the mapping is just the SQ followed by an Aggregator and then the target table, then the target table will be populated with the last row coming from the source table.
Answer
#2
if u use aggregator t/r without using group by function u
get one summirized result but group by option allows u get
deatil result.
for example .without using groupby u want see a releince
store sales .u will get sales of whole store but when uuse
group by u get imtemized sales result
storecode itemname sales
101 vegetable 500
101 vegetables 700
101 electronics 5000
101 electronics 10000
101 cooldrinks 200
101 cooldrinks 500

result without groupby: store sales:17400


result with groupby: vegetables sales :1200
electronics sales: 15000
cooldrinks sales: 700

Question
What testing is done at the mapping level? Please give a brief explanation.
Answer
#1
Hi,

Please find below testing points at the mapping level that might help you:

Verify Mapping is Available


Verify parameters are defined properly with proper
datatypes
Verify whether the shortcut to the source table is used as the source in the mapping from the replica connection
Verify whether WHERE clause in the SQ has used properly to
implement delta condition
Verify whether the primary key is selected properly in the
target table
Verify whether versioning is maintained
Verify Source name used in the mapping
Verify Target name used in the mapping
Changes made to the existing mapping(if applicable)
Verify whether the new fields handled for NULL values(if it
is a NOT NULL column in the Target table)
Verify whether lookup is added to the mapping(if applicable)
Verify whether the Lookup override used is proper
Verify whether the condition for Insert/Update is used in the UPDATE STRATEGY transformation
Verify whether the Filter condition used is proper
Question
What is the term PIPELINE in Informatica?
Answer
#1
Pipeline is used in the context of partitioning the source so that the DTM process executes in less time, i.e. to make the Informatica server read, transform and load the data into the targets in a relatively shorter duration.
Answer
#2
its informatica utility
Answer
#3
pipeline term in informatica gives the physical flow of data
from source to target or any other transformation.
Answer
#4
The term pipeline in Informatica means the way the source flows to the target via any transformations, e.g. a Filter transformation.
Answer
#5

A pipeline is the collection of sources, transformations and targets that receive data from a single active source.
Question
In which transformations can we use mapping parameters and mapping variables? And which one is reusable across mappings, the mapping parameter or the mapping variable?
Answer
#1
I believe it is the Sequence Generator, Filter and Expression in which we can use mapping parameters and variables.

The mapping parameter is reusable; we simply change the value of the parameter in the parameter file.
Answer
#2
What is T/r? Is it Transformation? If you are asking about transformations, then the lookup cache (named and unnamed) is something which we can use across multiple mappings and also in the same mapping in different lookups.
Question
I am getting five source files in a day and I do not know when I will get them. I need to load the data into the target and run the session, but I can't keep the session running and can't keep stopping and starting it. Please help me.
Answer
#1
You can use the "Event Wait" task and trigger the workflow whenever you get a particular file in a specified location. Here the file name should be in a specific format.
Answer
#2
Good answer, Rajsekhar.

Would you mind calling me once on my number, 9866188658?


Question
How can we update without using the Update Strategy transformation?
What is the pushdown operation in Informatica?
Which lookup gives better tuning performance, and why?
Answer
#1
We can update without using the Update Strategy transformation: in the session properties select Update against 'Treat source rows as'. This should definitely help you.

Pushdown is a feature of the 8.x versions of Informatica; it reduces the load on the Informatica server.

The unconnected lookup, obviously, because it is not connected to the data flow and uses only a static cache; also it can be called many times in a mapping as the result of an expression.

Reach me on 9866188658.
Answer
#2
By using Update override option in Target table we can
update the table
Answer
#3
We can use the update override in the target table in the mapping.

Pushdown Optimization is an optimization technique in Informatica 8: the Integration Service pushes the transformation logic either to the source database or to the target database rather than executing the logic by itself.

I think the dynamic lookup gives better performance since it allows us to tune the cache sizes (index and data).
Answer
#4
Just using the Update override won't get your work done.
You will have to select the session property Treat source
rows as 'Update'
Question
how to load only the first and last record of a flat file
into the target?
Answer
#1
The first record can be loaded using Top with a rank of 1 in a Rank transformation, and the last record using an Aggregator without the group-by option.
Answer
#2
We can write a shell script for it using head -1 and tail -1, and call it either in a Command task or in the pre-/post-session shell commands of the session.
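A minimal sketch of such a script (file names are examples):

  head -1 /data/src/emp.dat  >  /data/src/emp_first_last.dat
  tail -1 /data/src/emp.dat  >> /data/src/emp_first_last.dat

The session then reads emp_first_last.dat instead of the full file.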
Answer
#3
Use a Sequence Generator, then filter so that only the rows with the minimum and maximum sequence numbers pass to the target.
Answer
#4
Using the Aggregator transformation's FIRST and LAST functions we can pass the first and last record.
Question
Hi All,

I've 110 records in my table but the 101st record contains an error. When I run the session, I want to load the first 100 records into the target. Can anyone suggest the best solution for this?

Thanks in advance.
Answer
#1
Is what you want to say that the 101st observation has an error in your dataset and you want to save the observations up to the 100th into the target dataset? If so, I have an idea; I think it should help a bit.

data nn; /* your new dataset */
set base.agents; /* a dataset named 'agents' that has more than 100 observations */
if _ERROR_ = 0 then output; /* _ERROR_ is an automatic variable that is 1 when the current observation has a data error and 0 otherwise, so this keeps only the clean observations */
run;
Answer
#3
I believe this would serve your purpose.

Connect your source to a Filter transformation. If your source contains a primary key, then the condition should be p_key &lt;= 100. If it doesn't contain a primary key, e.g. a flat file, create a new port in the Filter transformation, name it s_no, connect the NEXTVAL port of a Sequence Generator to it, make the condition s_no &lt;= 100, and connect to the target.
Answer
#4
set commit interval as 100 in WF session and commitType as
target
Answer
#5
Go to the source definition; there is a property to skip a number of rows. Set it to 1, then save the mapping and run the workflow.
Ritu Sharma
Question
I have two flat files containing the same type of data and I want to load them into the DWH. How many Source Qualifiers do I need?
Answer
#1
If the 2 flat files have the same structure, then we can go for the file list (indirect) concept in Informatica.

Only one Source Qualifier is needed, and the source definition can be based on either of the flat files.
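A minimal sketch of the file-list setup described above (paths and names are examples):

  contents of emp_filelist.txt:
    /data/src/emp_region1.dat
    /data/src/emp_region2.dat

In the session's source file properties set the source filetype to Indirect and point the source filename at emp_filelist.txt; both files are then read through the single Source Qualifier.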
Answer
#3
If they have the same structure then you can use a Union transformation.
Answer
#4
If you have the same file structure, you can configure indirect loading in the file properties tab of the session. You will need only one Source Qualifier transformation.
Answer
#5
The Source Qualifier transformation's properties apply only to databases and not to flat files (up to version 7.x); I have no idea about later versions.

For a flat file it just acts like a label, and we can't configure or use any of the properties in the Source Qualifier.
Answer
#6
If both files have the same structure you can concatenate them into one file using Unix or DOS scripts, and call the script before execution of the session in your workflow.

This would automate the process and you would not have to do it manually.
Question
How can you connect the client to your Informatica server if the server is located in a different place (not local to the client)?
Answer
#1
Through IP Address
Answer
#2
Hi, you need to connect remotely to your server and access the repository.

You will be given a repository user name and password; add this repository and connect to it with your credentials.
Answer
#3
You need to connect to the server.
Question
What are Informatica file watch timers?
In a flat file I want to get the first record and the last record; how could I do that?
Answer
#1
I believe this should work:

1) Use the FIRST and LAST functions of the Aggregator t/r.
2) Use the Top option with rank 1 in a Rank t/r, and similarly the Bottom option with rank 1, to get only the first and last records.
3) Or else connect the flat file to a Filter t/r, create a new port in the Filter, name it s_no, connect the NEXTVAL port of a Sequence Generator to it, and write the condition (assuming the file contains 100 records) as s_no = 1 OR s_no = 100.
Question
How to generate HTML output using Informatica?
Answer
#1
Use the Java transformation; this option is available in version 8.1.
Answer
#2
you can use the shell scripting also for generating HTML
output in informatica
Question
What is the main data object present in between source and target? I answered mapping, transformation etc., but that was not the answer. So please give me an apt answer. Thanks in advance.
Answer
#1
ODS
Answer
#2
Hi,
I believe the ODS can't be called a data object; it's a temporary database for validating the data and applying business logic to it.

It should be a transformation; probably the answer could be "repository object", since a transformation is a repository object.
Answer
#3
It might be source qualifier
Answer
#4
Hi Anju,
Good thinking.
Are you trying for a break into the industry or currently working in it? Would you mind sending a mail to me at bsgsr12@gmail.com for knowledge transfer?

Answer
#7
It may be ETL tools like Informatica, DataStage, etc.
Answer
#8
Data object: the intermediate table, if we are using one, or the stage table (temp table), which is the resulting data object after applying the transformations.
Answer
#9
The main data object present in between source and target is the staging layer; the staging layer eliminates inconsistent data and gives the resulting data object.
Answer
# 11
Source Qualifier is the correct answer, because without the Source Qualifier you can't do anything.
Answer
# 12
The answer is Source Qualifier.

Since the source may be anything, Informatica should be able to understand the data types perfectly; only then can the data pass through to the target. That is why, when you drag a source in, the SQ is created automatically along with it.

Why not the other transformations? Because the other transformations are not capable of what the SQ transformation does.
Question
What is the full process of Informatica source to target, i.e. like staging to production and development?
Answer
#1
Initially data comes from the OLTP systems of a company, and gets loaded into a database or flat files with the help of legacy systems or any predefined methods. From here data is transferred to the staging database, applying business logic with the help of Informatica or other ETL tools; at times stage-to-target is also loaded using Informatica mappings. These are then transferred to a QA (quality analysis) database as XML exports, and from there deployment is done onto the production environment.
Question
How many repositories can we create in Informatica??
Answer
#1
We can create any number of repositories in Informatica under one server. As per my knowledge there is no limit.
Answer
#2
On Windows we can create only one repository. We may create more than one on Unix/Linux platforms. (I am giving what I read.)
Answer
#4
In Informatica PowerMart we can create any number of repositories, but we cannot share the metadata across the repositories.

In Informatica PowerCenter we can create any number of repositories, but we can designate only one repository as a global repository, which can access or share metadata from all the other repositories.
Answer
#5
We can create any number of repositories; the only thing is that the DB user name and connect string should be different. And you can make them global as well (telling this from personal experience, as I have created 2 repositories, both as global repositories).
Answer
#6
We can create 60535 repositories in Informatica. Actually the minimum port number is 5001 and the maximum is 65535, so each port number can hold one repository, in any version of Informatica; each repository maintains one port number.

This is the correct answer.

Answer
#7
In that case we could create 60535 repositories; on what basis are you saying that we can create 60535 repositories? Please describe it.
Question
WHAT IS THE MEANING OF UPGRADATION OF A REPOSITORY?
Answer
#1
Upgradation of a repository means you can upgrade it from a lower version to a higher version. You can do this in the Repository Manager: right-click on the repository, there is an Upgrade option; select that and then add the licence & product code.
Question
HOW DO YOU IMPLEMENT SCHEDULING IN INFORMATICA?
Answer
#1
Using the Informatica scheduler or third-party tools like Control-M, Maestro, Tivoli etc. If wrong, sorry.
Answer
#2
You can use the Autosys scheduler, or the Workflow Manager in Informatica.
Answer
#3
Scheduling is an administrative facility to run the session at a specific time and date. If wrong, please let me know.
Answer
#4

Informatica has its own scheduling components. But since Informatica is widely used as a data integration and data warehousing entity and there are lots of dependencies on other jobs, the Informatica application is often scheduled by an external scheduling tool such as:
1) Control-M
2) Crontab
3) Maestro

The basic functionality behind this scheduling is the pmcmd command: through shell scripts they trigger the Informatica jobs via pmcmd.
Answer
#5
There are following ways:

-- Use the scheduler in WF manager.


-- Use cron scheduler in Unix.
-- Use pmcmd scheduleworkflow command.
-- Use third party enterprise schedulers like Ctrl-M,
Redwood or Tidal etc.

If any query....gimme a call on 9833547028.


Answer
#6
I think no one knows about Informatica DAC - it's a scheduling tool used with Informatica, like Control-M.
Question
WHAT IS TEXT LOAD?
Answer
#1
I think if you want to test records from a table, you can go to the session properties, select the test load option and give the number of rows you want to test (e.g. 10). After completion of the session you will not be able to see the tested records in your target table;

you would get output like: no rows selected.


Answer
#2
I think it's not text load, rather it is TEST LOAD
Answer
#3
First, it is not text load, it is test load.
The PowerCenter server reads and transforms data without writing to the targets. It generates all session files and runs pre-/post-SQL functions as if running a full session. For relational targets the PowerCenter server does write the data, but rolls it back when the session completes.
Question
In a mapping with a flat file as source and a flat file as target, versus a flat file as source and Oracle as target, which is faster? I mean, which process completes first?
Answer
#1
ff as source and ff as target, this will be the faster
process. Because writing to the flatfile is faster than
writing it into a database.
Answer
#2
Flat files are faster than Oracle, because in flat files there are no constraints.
Question
How to load this? Give the mapping.

cty   state      o/p

c1    s1         c1
c1    s2         s1
c1    s1         c1
c2    s3         s2
c3    s4         c1
c3    s2         s1
                 c2
                 s3
                 c3
                 s4
                 c3
                 s2

The 2 columns should be loaded into one column in the target table.


Answer
#1
First create a Normalizer transformation. Double-click on it, select the Normalizer tab and create a column (column name CITYANDSTATE), set 2 in the Occurs field with datatype string, and click OK. Then automatically two input ports (CITYANDSTATE, CITYANDSTATE) and three output ports (CITYANDSTATE, GK_CITYANDSTATE, GCID_CITYANDSTATE) are created. Then connect city to one input port (CITYANDSTATE) and state to the other input port (CITYANDSTATE), and connect the CITYANDSTATE output port to the target table.
Question
Can we update the records in the target using the Update Strategy without generating a primary key? Explain.
Answer
#1
By using the "Update Override" option in the target. Say the key of your table is ID, NAME, but your mapping is passing only ID to the target; then you can have an update override query in the target to update the target based on ID only.
Answer
#2
No, using the Update Strategy without primary keys an update is not possible. Try reading the session log file once: it will display a message that updates are not supported without primary keys.

The update override in the target works together with the update function of the Update Strategy t/r, and it updates only the non-primary-key columns like DNAME and LOC, but not DEPTNO.

Gilbert, can I have your mail id? Here is mine: bsinivas1213@gmail.com, or else call me once on my number 9866188658; we can have a KT which is mutually beneficial.
Answer
#3
Other than the "Target Update Override" option, you can drag the target into the Warehouse Designer and mark the non-key columns you are trying to update on as keys for the time being. Even though those columns are not key attributes or key columns at the database level, for the time being they will be treated as key elements, and hence you can apply or use the Update Strategy.

Otherwise Informatica will generally throw an error like "No key specified".
Question
Suppose we have duplicate records in a table TEMP and now I want to pass unique values to T1 and duplicate values to T2 in a single mapping. How?
Answer
#1
Have a lookup to table T1 (should be dynamic one). If the
record already exists in T1 (i.e. duplicate) then route to
T2 else route to T1
Answer
#2
using constraint based transformation
Answer
#3
You can do this in one mapping by using a Sorter and then an Expression transformation. In the Expression use 3 ports: 1) current, 2) previous, 3) route. If previous = current, send to table 2, else to table 1.
Question
How do you use a sequence created in Oracle in Informatica? Explain with a simple example.
Answer
#1
By writing a SQL override in the Source Qualifier that calls the sequence which you have created in Oracle.
Answer
#3
By using a Stored Procedure transformation.

Please correct me if I am wrong.


Answer
#4
Using a Stored Procedure transformation we can call the Oracle sequence.
Question
Suppose your source table contains alphanumeric values like 1,2,3,a,b,c in one column, say C1, and now you have to load the data into 2 separate columns, such that the ID column should contain only the numbers 1,2,3 and the NAME column should contain a,b,c in the target. How?
Answer
#1
Say your input is VAR1 which are a1 and 1a.

Have an expression transformation to create two more


variables VAR2 and VAR3 out of VAR1 using the SUBSTR
function.
VAR2 = SUBSTR(VAR1,1,1) and VAR3 = SUBSTR(VAR1,2,1).
For VAR1=a1, VAR2=a and VAR3=1
VAR1=1a, VAR2=1 and VAR3=a

Pass VAR2 and VAR3 to the router. Have one output group
with condition IS_NUMERIC(VAR2) and the other obviously is
the default group. For the first group connect VAR2 to ID
of target and VAR3 to NAME of target. For default connect
VAR2 to NAME and VAR3 to ID

Output
=====
ID NAME
= =====
1 a
1 a
Answer
#2
Here you should not use a router as it sends the data to
two different target or two instances of the same target.

As the question here is to write the input row to different columns based on the value, you can just use an Expression: pass the column in and create two output ports. Output port 1 detects whether the value is numeric, and the second output port detects the alphabetic character.

output port 1 - op1


iif(is_numeric(to_int(c1)),c1)

output port 2 - op2


iif(is_alphabet(c1),c1)

Pass these two outputs to a Filter and set the condition

NOT ISNULL(op1) OR NOT ISNULL(op2)

Now link the columns to the target. Done!
Question
what is metadata?
Answer
#1
Metadata is data about data..it will have all information
about mappings and transformations.
Answer
#3
metadata is a structure of data, information about
mappings and transformations
Answer
#5
Commonly known as "data about data" it is the data
describing context, content and structure of records and
their management through time
Answer
#6
Metadata is data about data. The repository contains the metadata; it means all the information about the mappings, tasks, etc.
Answer
#7
Metadata is nothing but data about data. Metadata contains information related to the data: where the data comes from and the connections to the data.
Question
Explain about HLD and LLD ?
Answer
#1
HLD refers High Level Design and
LLD refers Low Level Design

Means HLD contains overview of the design and LLD contains


detailed design.
Answer
#2
HLD means high level design;
LLD means low level design, i.e. the mapping doc.
Answer
#3
In addition to the above:

the HLD will be prepared by the leads, or by managers in rare cases, whereas the LLD will be the detailed design of the HLD and is prepared by the developers, like SEs or SSEs.
Answer
#4
In addition to the above:

HLD covers the main modules of the application or software;

LLD covers the sub-modules of the application or software.
Answer
#5
Hi everyone,
HLD means high level design;
LLD means low level design.

HLD: it refers to the functionality to be achieved to meet the client requirement. Precisely speaking, it is a diagrammatic representation of the client's operational systems, staging areas, DWH and data marts, and also how and at what frequency the data is extracted and loaded into the target database.

LLD: it is prepared for every mapping along with the unit test plan. It contains the names of the source definitions, target definitions, transformations used, column names, data types, the business logic, the source-to-target field matrix, session name and mapping name.

Reach me on bsgsr12@gmail.com, 9866188658.


Answer
#8
High Level Design, means precisely that. A high level
design discusses an overall view of how something should
work and the top level components that will comprise the
proposed solution. It should have very little detail on
implementation, i.e. no explicit class definitions, and in
some cases not even details such as database type
(relational or object) and programming language and
platform.

A low level design has nuts and bolts type detail in it


which must come after high level design has been signed off
by the users, as the high level design is much easier to
change than the low level design.
Answer
# 11
HLD : High Level Design
LLD : Low Level Design

HLD: it gives the overall design to be done to complete a product. It is more of a block-diagram description of the complete design.

LLD: it reveals the complete details to be completed to get the product finished. It gives the design of the inside modules of the blocks specified in the HLD.

For example, the HLD just represents a computer by different blocks like CPU, I/O devices, memory etc., whereas the LLD, with respect to this example, is the detailed description of all the blocks of the HLD, like the CPU, I/O devices, memory etc.
Question
For example, in the source there are 10 records with a column SAL. Use a Filter transformation with the condition SAL=TRUE and connect it to the target. What will happen?

Answer
#1
I checked the result.

Case 1: when you use the filter condition SAL=TRUE, nothing is moved to the target table; the session succeeds but there is no data in the target table.

Case 2: when you use only TRUE as the filter condition, all 10 records from the source pass through the filter to the target table (the target table also gets 10 records).
Answer
#2
Nothing will happen when we use that filter condition; it just checks whether each source record satisfies the condition or not.
Answer
#4
By default the filter condition is TRUE, which means that regardless of whether the incoming data is numeric or string, all incoming data is passed to the next transformation or target.
If we manually assign the condition SAL=TRUE, the session executes successfully but not even a single record is loaded into the target.
Answer
# 10
if 'salary= true' ,no records will come to target.

if only 'true' it will pass all the 10 records.


Question
I have a flat file source. I want to load the maximum salary of each deptno into the target. What is the mapping flow?
Answer
#1
We can use an Aggregator to group by deptno and create a new port to find MAX(salary), then load deptno and salary; we'll get each unique deptno with its max salary.
Answer
#2
We can also use a Rank transformation by setting Top, number of ranks = 1, and enabling group by on the deptno port.
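A minimal sketch of the Aggregator flow from answer #1 (port names are examples):

  Flat file source -> SQ -> Aggregator (group by DEPTNO; O_MAX_SAL = MAX(SAL)) -> Target (DEPTNO, O_MAX_SAL)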
Question
How much memory (size) is occupied by a session at runtime?
Answer
#1
Approximately 200,000 bytes of shared memory for each
session slot at runtime.
Answer
#2
12,000,000 bytes of memory are allocated to the session.

Sorry, I made a mistake before.


Answer
#3
It occupies 4 GB of memory: 2 GB for the data cache & 2 GB for the index cache.
Answer
#4
A session contains a mapping, and the sources, transformations and targets in that mapping. I think the size of a session depends on the caches used for the different transformations in the mapping and the size of the data that passes through the transformations.

Please provide a better answer if you have one.


Question
How are DTM buffer size and buffer block size related?
Answer
#1
The number of buffer blocks in a session = DTM Buffer
Size / Buffer Block Size. Default settings create enough
buffer blocks for 83 sources and targets. If the session
contains more than 83, you might need to increase DTM
Buffer Size or decrease Default Buffer Block Size.
Answer
#2
(total number of sources + total number of targets) * 2 = (session buffer blocks)

(session buffer blocks) = 0.9 * (DTM Buffer Size) / (Default Buffer Block Size) * (number of partitions)
Answer
#3
DTM Buffer Size: 12 MB
Buffer Block Size: 64 KB
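Putting the two answers above together with those default values (a rough worked check, assuming one partition):

  session buffer blocks = 0.9 * 12,000,000 / 64,000 ≈ 168
  (sources + targets) * 2 = 168  =>  sources + targets ≈ 84

which matches the statement in answer #1 that the defaults create enough buffer blocks for roughly 83 sources and targets.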
Question
I have a flat file source and two targets T1, T2. I want to load the odd-numbered records into T1 and the even-numbered records into T2. What is the procedure, which t/r's are involved, and what is the mapping flow?
Answer
#1
Hi,

Steps:

1. Load your source table into the Source Analyzer.

2. Generate the target tables.

3. In the Mapping Designer take 2 pipelines and put these queries in the Source Qualifier SQL overrides (one per pipeline):

for the even-numbered records:
select * from &lt;tablename&gt; where (rowid, 0) in (select rowid, mod(rownum, 2) from &lt;tablename&gt;)

for the odd-numbered records:
select * from &lt;tablename&gt; where (rowid, 1) in (select rowid, mod(rownum, 2) from &lt;tablename&gt;)

4. Connect the output ports of these transformations to their respective target tables.

Note: send your responses to suriaslesha_sreekar@yahoo.co.in
Answer
#2
Can it be done in this way?

1) Drag the source into the Mapping Designer.
2) Take a Router transformation. Consider the EMP table, in which I am using EMPNO. In group 1, assign the condition MOD(EMPNO, 2) = 0, which gives even numbers, and in group 2 assign the condition MOD(EMPNO, 2) != 0, which gives odd numbers.
3) Connect group 1 to one target and group 2 to the other target.

If I am wrong please tell me.

Answer
#3
We can do this in the following way.

Take a Sequence Generator t/r and set the properties:
Start Value 1
End Value 2
and also enable the Cycle option.

Connect the NEXTVAL port to a port "SGTNO" (created by you) on the Router t/r, and also connect the ports from the Source Qualifier to the Router.

Now give the first group condition as SGTNO = 1 to go to the first target; otherwise go to the second target (here there is no need to mention a second condition, the default group can be used).

If I went wrong, please let me know.

Thanks,
Anand Kumar
Answer
#4
1. Drag the source and targets into the Mapping Designer workspace.
2. From the transformation developer take a Sequence Generator t/r, an Expression t/r and a Router t/r.
3. In the Sequence Generator give Start Value 1, Increment By 1, and connect NEXTVAL to a new port in the Expression t/r.
4. Drag all the ports of the SQ into the Expression in addition to the new port.
5. In the Router t/r create one group named ODD with the condition MOD(newport, 2) != 0.
6. Route the ODD group to T1 and the default group to T2.
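A compact sketch combining the approaches above (the sequence port name is an example; this splits rows by their position in the file, not by a key value):

  Source -> SQ -> Expression (add NEXTVAL from a Sequence Generator, start 1, increment 1) -> Router
    Group ODD : MOD(NEXTVAL, 2) = 1  -> T1
    Group EVEN: MOD(NEXTVAL, 2) = 0  -> T2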
Question
Why can't we put a Sequence Generator or Update Strategy transformation before a Joiner transformation?
Answer
#1
The Joiner is for joining two different sources. If you use an Update Strategy t/f and try to use the DD_DELETE or DD_REJECT option, some of the data will get deleted and you won't see it at the Joiner output.
So we can't go for this.
Question
In real time, what scenarios have you faced, what tough situations have you overcome, and explain about sessions.
Answer
#1
Getting first job in Informatica and working with
transformations...
Question
How do you know when to use a static cache and when a dynamic cache in a Lookup transformation?
Answer
#1
If you need the data coming out of the Lookup transformation to be passed on to other transformations more than once, you can use a dynamic cache; otherwise use a static cache.
Answer
#2
A dynamic cache is generally used when you are applying a lookup on a target table and, in one flow, the same data is coming twice for insertion, or once for insertion and once for updation.
Performance: a dynamic cache decreases performance in comparison to a static cache, since it first checks the whole cache to see whether the data was previously present and inserts only if it was not, so it takes more time.
A static cache does not check such things; the data is just passed through as many times as it arrives.
Answer
#3
When we use the connected type of lookup we use a dynamic cache, and when we use an unconnected lookup in the model we use a static cache.
Answer
#5
Hi Neetu,
I think you were a bit wrong: a static cache is also used to check whether a row exists or not.

Well, Krishna, for your question on the dynamic cache: a static cache is one in which changes to the target table cannot be incorporated into the lookup on the fly. When you use a dynamic cache, any change to the target table gets reflected in the lookup, even if the record is being inserted or updated earlier in the same mapping run.
Answer
#6
By default the lookup is static.
Dynamic is used when records come from the source to the target multiple times, i.e. if:
1) emp_id=101, city=hyd, age=25 (first this is inserted into the target);
2) the second time, employee 101 changes his city from hyd to chennai, so we need to update the target table with the city name chennai. Then how does the lookup cache know that the record was updated?
3) If it is a static lookup, after updating that record it will not refresh the lookup cache, whereas if it is a dynamic cache it will refresh the cache.
4) Based on our requirement we decide which one to use.
Question
Does the Sequence Generator t/r use caches? If so, what type of cache is it?
Answer
#1
multi-caches.
Answer
#2
No, it won't have any cache. We have caches for the following t/r:
Aggregator t/r
Joiner t/r
Sorter t/r
Lookup t/r
Answer
#3
Also a Rank transformation uses a cache.
Answer
#5
the seq t/r uses index cache for the sequential range of
numbers for the generated keys.
Answer
#6
The Sequence Generator uses a cache when it is reusable.

This option is to facilitate multiple sessions that are using the same reusable Sequence Generator.

The number of values cached can be set in the properties of the Sequence Generator.

Not sure about the type of cache.
Answer
#7
Reusable and non-reusable affect the caching: a reusable Sequence Generator will provide unique values when the same transformation is used across all sessions.
Question
What is the difference between Informatica 6.2 Workflow and
Informatica 7.1 Workflow?
Answer
#1
In Informatica 6.2 the Union transformation is not present,
whereas in Informatica 7.1 the Union transformation is present.
Answer
#2
New features in Informatica 7.1:
1)Union and custom transformations
2)Look up on flat files
Answer
#3
In Informatica 7.1 the enhancements are
flat file lookup, dynamic lookup cache enhancements, and the Union
transformation.
When we use a dynamic lookup cache, the PowerCenter
server can ignore some ports when it compares values in
lookup ports and input ports before it updates a row in
the cache.
Answer
#4
New features in Informatica 7.1:
1. Union transformation is added
2. Custom transformation is added
3. We can take a flat file as a lookup table
4. pmcmd command
5. Server grid
6. Transaction Control transformation
7. File repository concept
8. Test load
Question
What is a causal dimension?
Answer
#1
One of the most interesting and valuable dimensions in a
data warehouse is one that explains why a fact table record
exists. In most data warehouses, you build a fact table
record when something happens. For example:

When the cash register rings in a retail store, a fact
table record is created for each line item on the sales
ticket. The obvious dimensions of this fact table record
are product, store, customer, sales ticket, and time.
At a bank ATM, a fact table record is created for every
customer transaction. The dimensions of this fact table
record are financial service, ATM location, customer,
transaction type, and time.
When the telephone rings, the phone company creates a fact
table record for each "hook event." A complete call-
tracking data warehouse in a telephone company records each
completed call, busy signal, wrong number, and partially
dialed call.
In all three of these cases, a physical event takes place,
and the data warehouse responds by storing a fact table
record. However, the physical events and the corresponding
fact table records are more interesting than simply storing
a small piece of revenue. Each event represents a
conscious decision by the customer to use the product or
the service. A good marketing person is fascinated by these
events. Why did the customer choose to buy the product or
use the service at that exact moment? If we only had a
dimension called "Why Did The Customer Buy My Product Just
Now?" our data warehouses could answer almost any marketing
question. We call a dimension like this a "causal"
dimension, because it explains what caused the event.
Answer
#3
I sincerely appreciate your interest and, moreover, your patience
in explaining it so beautifully. Thanks a million for your answer.
Can I have your mail id or number, please?
Here are mine.
Question
Without using any transformations, how can you load the data into the
target?
Answer
#1
Simply connect the source with the target.
If there is any condition, we can put it in the Source Qualifier.
Answer
#2
Simply connect source to target,
but be very careful with the connections from source to
target.
Answer
#3
At the time of dragging the source into the workspace it automatically
generates the SQ t/r; from this you connect to the target.
Answer
#5
Write a SQL script like "insert into target_1 (select * from
source_1)" and run it in the pre/post source/target load SQL.

Answer
#7
Hi all,
We can't load the data from source to target without using
any transformation; once you drag the source structure into the
Designer, the source structure comes along with the Source
Qualifier t/r.
So it is not possible to load the data from source to target
without using any transformation.
Answer
# 10
If I were the candidate I would simply say: if there are no
transformations to be done, I will simply run an insert
script if the source and target can talk to each other, or
simply source -> source qualifier -> target. If the
interviewer says SQ is a transformation, then say "then I
don't know; I have always used Informatica when there is
some kind of transformation involved, because that is what
Informatica is mainly used for". That will shut him up. :)
Trust me, you won't lose the interview for just this
question. Personally, whoever asks this question is not
there in the interview process to find a good candidate for
the post; he is just there to have some fun torturing
candidates. Don't panic. As long as you know answers to
genuine Informatica questions and know your business area,
there is always a job waiting for you.
Answer
# 12
It is not possible to load data from source to target without
using a transformation. The reason is that while loading the
data from source to target there will definitely be
modifications to the data, for example calculating
a rank over the table. The interviewer is simply twisting the
question to test how confident we are in the subject.
Question
What is the difference between a View and a Materialized View?
Answer
#1
In a view we cannot do DML commands, whereas it is possible
in a materialized view.
Answer
#2
A view has a logical existence but a materialized view has
a physical existence. Moreover, a materialized view can be
indexed, analysed and so on; that is, all the things that
we can do with a table can also be done with a materialized
view.
Answer
#3
Hi,
We already know the basic restrictions of a view and a materialized
view:
a view is a logical or virtual table, it doesn't hold data of
its own, but a materialized view has a physical structure and
stores data on its own. A materialized view can
be refreshed automatically or manually, but with a view, if
any changes happen in the base tables and we want to reflect
them in the view, the view has to issue its SELECT
statement against the database again.
Answer
#4
A materialized view is a stored summary containing pre-computed
results. As the data is pre-computed, materialized
views allow for faster query answers.
In warehousing applications, large amounts of data are
processed and similar queries are frequently repeated. If
these queries are pre-computed and the results stored in
the data warehouse as a materialized view, using
materialized views significantly improves performance
by providing fast lookups into the set of results.
A materialized view generally behaves like an index, and it
can also have any number of aggregates and joins.
Answer
#9
Apart from all the above answers, one more difference:
suppose we drop a base table, then all the views on it become invalid, but this is not the
case with a materialized view. Its most interesting feature is the on-commit
refresh.
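As a hedged SQL sketch of the idea above (the table, column and view names are assumptions made up for illustration), a materialized view that pre-computes an aggregate so queries can read the stored result instead of re-scanning the detail table might look like:

-- Pre-compute total sales per product; refreshed on demand.
CREATE MATERIALIZED VIEW sales_summary_mv
BUILD IMMEDIATE
REFRESH COMPLETE ON DEMAND
AS
SELECT product_id, SUM(amount) AS total_amount
FROM   sales
GROUP  BY product_id;

An on-commit or fast refresh is also possible, but that needs materialized view logs on the base table, so the simple on-demand form is shown here.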
Question
1. Why do we need to use an unconnected transformation?
2. Where can we use a static cache and a dynamic cache?
Answer
#1
In the Lookup transformation:
an unconnected lookup is used whenever you want to call the same
transformation several times and you need only one return port.
Use a dynamic cache if you want the cache to be updated while
updating the target table itself; with a static cache the cache is
left untouched.
Answer
#2
1. We use an unconnected transformation to look up a number
of tables or views without physically bringing the entity
into the mapping flow. This kind of transformation is also helpful
when a single return port is required.
Answer
#3
We use an unconnected transformation when we need one
return column for each row; it receives values from a
:LKP expression in another transformation.
Answer
#4
An unconnected lookup is used for performance purposes because here
we can return only one port and it does not participate
in the mapping flow directly; in an unconnected lookup we can use
only a static cache.
Static cache: a static cache is used in an unconnected lookup; once
the data is cached it is not modified.
Dynamic cache: in a connected lookup we can use a dynamic cache. By
using a dynamic lookup we can capture the changed
data, because the dynamic cache automatically updates the
cached data.
Question
How could we generate the sequence of key values without
using sequence generator transformation in the target ??
Answer
#1
By using pre-SQL in the Source Qualifier we can generate a
sequence in the target.
Answer
#2
Use an Oracle sequence and create a function to call it
inside Informatica.
Answer
#3
It can be implemented through a Lookup t/r: develop the lookup
t/r with a condition like NEXTVAL = CURRVAL + 1; through this
condition we can achieve it.
Answer
#4
Do a lookup on the target table with a lookup SQL override such as
SELECT MAX(field_name), field1, field2 FROM target GROUP
BY field1, field2.
In the Expression transformation, increment the max value of the field
which you just got from the lookup by 1.
Here MAX(field_name) is the max value of the field you want
to generate the sequence of.
Answer
#5
Use an Expression transformation and
create two ports:
one is a variable port initialised to zero,
the other is an output port with the expression logic
o_seq = v_seq + 1.
Answer
#6
Either use Oracle to generate the sequence number, or use an
unconnected lookup transformation which looks up on the
target, gets the MAX(value) of the column which has to be
incremented, and increments that value by 1.
Answer
#8
Create two ports in the Expression transformation:
v_temp : v_temp + 1
o_seq : IIF(ISNULL(v_temp), 0, v_temp)
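As a hedged SQL sketch of the Oracle-sequence approach suggested above (the sequence, table and column names are assumptions used only for illustration), the key can also be assigned by the database at load time, for example from pre/post SQL or a stored procedure called from the mapping:

-- Create the sequence once in the target database.
CREATE SEQUENCE target_key_seq START WITH 1 INCREMENT BY 1 CACHE 100;

-- One possible use: let the database assign the surrogate key during the load.
INSERT INTO target_table (surrogate_key, business_key, load_date)
SELECT target_key_seq.NEXTVAL, business_key, SYSDATE
FROM   staging_table;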
Question
How do we come to know the Source data/file is
ready/Updated in the source location, when the session is
scheduled for @12:00AM and ready to run its job ? or Can we
schedule the session, when the source is updated in source
location without any time constraint?
Answer
#1
It all depends on the organisation; different organisations use
different scheduling software (Control-M is one of them). The
job will be triggered based on a notification message
= 'Y' when the source file is ready; otherwise it will have a
status = 'N'.
Answer
#2
This may help for flat files.
It may be done using the Event Wait task:
1) Upload the source file along with a touch (indicator) file.
2) Schedule your job at the particular time.
3) In the workflow use the Event Wait task.
4) The Event Wait will wait for the touch file; once it gets the
touch file, it will start the load.
Question
explain the scenario for bulk loading and the normal
loading option in Informatica Work flow manager ???
Answer
#1
1) Bulk load & normal load
Normal: in this case the server manager allocates the
resources (buffers) as per the parameter settings. The database
creates log entries for the load.
Bulk: in this case the server manager allocates the maximum
resources (buffers) available irrespective of the parameter
settings. The database will not create any log entries for the load.
In the first case the data loading process is time consuming
but other applications are not affected, while with
bulk loading the data loads much faster but other applications
are affected.
Answer
#2
NORMAL LOAD: IT LOADS THE RECORDS ONE BY ONE AND WRITES A LOG
ENTRY FOR EACH ROW. IT TAKES MORE TIME TO COMPLETE.
BULK LOAD: IT LOADS A NUMBER OF RECORDS AT A TIME; IT WON'T
FOLLOW ANY LOG FILES OR TRACE LEVELS. IT TAKES LESS TIME.
USE BULK MODE FOR IMPROVING THE SESSION PERFORMANCE.
Answer
#5
In normal loading the target writes all the rows to the
database log, while in bulk loading the database
log does not come into the picture (it skips that
property). So when a session fails we can easily
recover the session with the help of the database log, but in
the case of bulk loading we cannot.
But normal loading is very slow compared to bulk loading.

Answer
#6
NORMAL LOADING: THE INTEGRATION SERVICE CREATES DATABASE
LOG ENTRIES WHILE LOADING DATA INTO THE TARGET DATABASE, SO
THE INTEGRATION SERVICE CAN PERFORM ROLLBACK AND SESSION
RECOVERY.
BULK LOADING: THE INTEGRATION SERVICE INVOKES THE BULK
UTILITY AND BYPASSES THE DATABASE LOG.
-- THIS IMPROVES DATA-LOADING PERFORMANCE
-- BUT IT CANNOT PERFORM ROLLBACK
Question
What is Factless fact table ???
Answer
#1
A fact table without measures (numeric data) in its columns is
called a factless fact table.
E.g. a Promotion fact (only key values are available in the fact table).
Answer
#2
A FACT TABLE THAT CONTAINS ONLY IDS, KEYS AND DESCRIPTION COLUMNS AND
NO MEASURES IS KNOWN AS A FACTLESS FACT TABLE.
Answer
#3
A factless fact table is nothing but a
fact table that consists only of surrogate keys, nothing other than SKs.
Example:
student attendance: it consists only of student
information and nothing other than present or absent, i.e.
Boolean values (yes or no).
Answer
#5
A FACT TABLE WITHOUT FACTS IS CALLED A FACTLESS FACT TABLE.
SUPPOSE WE NEED TO COMBINE TWO DATAMARTS: ONE DATAMART
CONTAINS A FACTLESS FACT TABLE AND THE OTHER DATAMART CONTAINS
THE FACT TABLE.
FACTLESS FACT TABLES ARE USED TO CAPTURE DATE/TRANSACTION
EVENTS.
Answer
#6
A fact table that records business events or
coverage that can be represented in a fact table even though
there are no measures or facts associated with them.
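A hedged SQL sketch of the student-attendance example mentioned above (all table and column names are assumptions used only for illustration); note that the table holds nothing but dimension keys, with no numeric measure column:

-- A factless fact table: an event is recorded simply by the presence of a row.
CREATE TABLE fact_student_attendance (
    date_key    NUMBER NOT NULL,  -- FK to the date dimension
    student_key NUMBER NOT NULL,  -- FK to the student dimension
    class_key   NUMBER NOT NULL,  -- FK to the class dimension
    CONSTRAINT pk_attendance PRIMARY KEY (date_key, student_key, class_key)
);

-- Questions are answered by counting rows, e.g. attendance on a given day:
-- SELECT COUNT(*) FROM fact_student_attendance WHERE date_key = 20240101;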
Question
In real time scenario where can we use mapping parameters
and variables?
Answer
#1
In a mapping and a mapplet we can use mapping parameters and
variables. We can also create parameters and variables at the
session level.
Answer
#2
Before using mapping parameters and mapping variables we
should declare them in the parameters and variables tab of the Mapping
Designer.
A mapping parameter cannot change until the session has
completed, whereas a mapping variable can be changed
during the session.
Example:
if we declare a mapping parameter we use that fixed value
until the session completes, but if we declare a mapping
variable we can change it as the session runs. Use a mapping
variable, for example, with the Transaction Control transformation.
Answer
#3
A mapping variable, unlike a mapping parameter, changes its value
during session execution; it is used in incremental loading.
Parameters are used in many scenarios, like loading the data
of employees who joined in a particular year, for
timestamping, or loading the data related to a particular
product id, and the like.
If I am wrong let me know.
bsgsr12@gmail.com
Question
By using a Filter transformation, how do we pass rows that do
not satisfy the condition (discarded rows) to another target?
Answer
#1
You cannot pass the rows if the condition is not satisfied.
Answer
#2

Write the opposite of the condition mentioned in filter
transformation 1 in another Filter t/r and pass the rows
which do not satisfy filter 1 but satisfy filter 2 into target 2.
Answer
#3
Don't use filter transformation.

Router transformation is a better option in this scenario.


Answer
#4
Use a Router transformation to load the rejected data.
Connect the output group ports (i.e. condition-satisfied records) to
one target table and connect the default group ports (records that do
not satisfy the condition) to the other target.
Answer
#5
Connect the ports of the filter transformation to the
second target table and enable 'Forward Rejected Rows'
in the properties of the filter transformation; the
rejected rows will be forwarded to this table.
Please let me know if I am wrong anywhere.
Answer
#8
It is possible to see the rejected rows after the filter
transformation. The solution is to create an empty file, give
that file name in the reject file option of the session, and after
running your session look at the file you created: it contains the
rejected rows.
Answer
# 11
Use another Filter transformation with the reverse condition to get the discarded rows.
There is no such property in the Filter transformation as "Forward Rejected Rows", as mentioned
above in one of the answers.
Question
THREE DATE FORMATS ARE THERE. HOW TO CHANGE THESE THREE INTO
ONE FORMAT WITHOUT USING AN EXPRESSION TRANSFORMATION?
Answer
#1
You can write a procedure and call it to change the date
format as per your requirement
Answer
#2
Use SQL Override and apply the TO_DATE function against the
date columns with the appropriate format mask.
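A hedged sketch of answer #2 as a Source Qualifier SQL override (the table name, column name and the three incoming formats are assumptions for illustration): each format gets its own mask and the result is one consistent DATE.

SELECT CASE
         WHEN REGEXP_LIKE(order_dt, '^[0-9]{4}-[0-9]{2}-[0-9]{2}$') THEN TO_DATE(order_dt, 'YYYY-MM-DD')
         WHEN REGEXP_LIKE(order_dt, '^[0-9]{2}/[0-9]{2}/[0-9]{4}$') THEN TO_DATE(order_dt, 'MM/DD/YYYY')
         ELSE TO_DATE(order_dt, 'DD-MON-YYYY')
       END AS order_date
FROM   orders_stage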
Question
A TABLE CONTAINS SOME NULL VALUES . HOW TO GET (NOT
APPLICABLE(NA)) IN PLACE OF THAT NULL VALUE IN TARGET .?
Answer
#1
With the help of the ISNULL() function of Informatica.
Answer
#2
In the column properties sheet, write N/A in the Default
value text box for the particular column
Answer
#3
Use a DECODE or IIF function in the Expression transformation, taking
one new output column as the flag:
IIF(ISNULL(column_name), 'NA', column_name)
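If the replacement is pushed down to the database instead, a hedged SQL-override equivalent (the table and column names are assumptions) would be:

SELECT NVL(customer_name, 'NA') AS customer_name
FROM   customer_stage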
Question
ONE FLAT FILE IS THERE WHICH IS COMMA DELIMITED. HOW TO
CHANGE THAT COMMA DELIMITER TO ANY OTHER AT THE TIME OF
RUNNING?
Answer
#1
I think we can change it in the session properties on the Mapping
tab: if you select the flat file there, you will see the Set File
Properties option, where the delimiter can be changed.
Question
Two tables from two different databases are there. Both
have the same structure but different data. How to compare
these two tables?
Answer
#1
Use a Joiner transformation to compare the two tables.
Answer
#2
That is not a correct answer, because the Joiner is used for
joining two tables.
Instead, in the Source Analyzer right-click on the source table and
click Compare; by this you can compare them.
This is what I have seen somewhere.
Answer
#4
If by comparison you mean joining, it should be done using a Joiner
t/r with a join condition and a join type.
Answer
#9
A Joiner transformation is used to join two different sources
from the same database. A Lookup transformation and
expressions (if needed) can be used to compare data from two
different types of sources.

Answer
# 10
If you want to compare the data present in the tables, go for
joining and comparison.
If you want to compare the metadata (properties) of the tables,
go for "Compare Objects" in the Source Analyzer.
Question
IN SCD TYPE 1 WHAT IS THE ALTERNATIVE TO THAT LOOKUP
TRANSFORMATION ?
Answer
#1
The alternative to a Lookup is a Joiner. We need to import the
structure of the target as a source in the Source Analyzer, bring it
into the mapping, and use it for comparison like a Lookup.
Answer
#4
You can use a Joiner transformation to design SCD Type 1
manually: import the target as a source and use a Joiner
transformation, then use an Expression to flag the rows to insert
and update in the target.

Question
1)can anyone explain how to use Normalizer transformation
for the following scenario

Source table          |  Target table
Std_name ENG MAT ART  |  Subject Ramesh Himesh Mahesh
Ramesh    68  82  78  |  ENG      68     73     81
Himesh    73  87  89  |  MAT      82     87     79
Mahesh    81  79  64  |  ART      78     89     64
Please explain what should be
the Normalizer column(s) and
the GCID column.

2)Also please explain the Ni-or-1 rule.


Answer
#2
Hi,
according to my idea, take 3 different groups in the
normalizer transformation like....
1. stud_id studname
------- --------
1 Ramesh
2 Himesh
3 Mahesh

2. Sub_id Subname
   ------ -------
   10     ENG
   20     MAT
   30     ART

3. Stud_id Sub_id Marks
   ------- ------ -----
   1       10     68
   1       20     82
   1       30     78
   2       10     73
   2       20     87
   2       30     89
   3       10     81
   3       20     79
   3       30     64
make sure that all these 3 groups have proper relationship
with each other.
finally map the appropriate fields to the target.

I'm not exactly sure about this answer, but I would be
thankful for any suggestions.

Answer
#3
It is useful for combining multiple columns into a
single column, and vice versa.
Let me know if this is wrong!
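For intuition, a hedged SQL sketch of the first half of this pivot (the column-to-row step the Normalizer performs; the table and column names are taken from the scenario but the SQL itself is only an illustration, not the Normalizer):

-- One input row per student becomes one output row per student/subject pair.
SELECT std_name, 'ENG' AS subject, eng AS marks FROM source_table
UNION ALL
SELECT std_name, 'MAT' AS subject, mat AS marks FROM source_table
UNION ALL
SELECT std_name, 'ART' AS subject, art AS marks FROM source_table

Getting to the final target layout (subjects as rows, students as columns) then needs a second pivot step, for example an Aggregator grouping on subject.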
Question
In a mapping I have three dimensions. If I want to pass the
same surrogate key value to all three dimensions by
using one Sequence Generator, is that possible if the mapping
contains a single flow? And in the same case, if the mapping
contains 3 flows for the three dimensions, then by using
one Sequence Generator can we populate the surrogate key (same
value) to all three dimensions?
Answer
#1
You can pass the same surrogate key to the three dimensions; in my
view this is possible in case 2, i.e. with 3 flows.
Answer
#2
Yes, we can pass the same surrogate key to the 3 dimensions, because
the three dimensions are involved in the same mapping. We can also
reuse the Sequence Generator.
Answer
#3
Hi, I have a small doubt on this.
The three dimensions are in the same mapping but in three different
flows. Three different flows means that only when the first completes
will the second flow start; in this case, by using one
Sequence Generator, how will we pass the same values to all
the three dimensions? The surrogate key values should be passed
like this for all the dimensions:
dim1 dim2 dim3
---- ---- ----
1    1    1
2    2    2
3    3    3
4    4    4
Answer
#4
Use the Sequence Generator and Expression transformations: first
generate the surrogate key with the Sequence Generator, then send the
values to the Expression t/r, and connect the Expression t/r output
ports to the 3 dimensions.
First the Sequence Generator generates surrogate keys like 1, 2, 3, 4, 5;
we pass this column to the next transformation (Expression) and from
there we connect the output port to the dimensions, so '1' will go to
all the dimensions, then '2' will go, then '3', and so on.
Answer
#5
Solution algorithm:
1. The Sequence Generator generates the surrogate key.
2. An Expression transformation passes the values from step 1.
3. Connect the output port of step 2 to all 3 dimensions.
Answer
#6
Yes, of course we can do it.
To understand this you can refer to the SCD Type 2 version mapping;
the same thing happens there as in your question.
Question
WHAT IS UPDATE OVERRIDE . DIFFERENCE BETWEEN SQL OVERRIDE
AND UPDATE OVERRIDE ?
Answer
#5
Update override is an option available in the target
instance. By default the target table is updated based on
primary key values. To update the target table on
non-primary-key values you can generate the default query and
override the query according to the requirement. Suppose,
for example, you want to update the record in the target table
when a column value = 'AAA'; then you can include this condition
in the WHERE clause of the default query.
Coming to SQL override, it is an option available in the Source
Qualifier and Lookup transformations where you can include
joins, filters, GROUP BY and ORDER BY.
Answer
#6
In the case of the Source Qualifier, when we use the SQL override
option we get a query with our alias names; when we
use the update transformation override, on entering the update
override we get the column names with an alias name.
In the update override an ORDER BY clause is there by default, which
is not in the SQL override (according to my knowledge).
If I am wrong please correct me at my mail id.
Answer
#7
You can override the WHERE clause to include non-key
columns by using target override
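As a hedged sketch of what a target update override can look like (the table and port names are assumptions; the :TU prefix is the convention PowerCenter uses for target ports inside the override), first the default key-based query and then a version modified to update on a non-key condition:

-- Default generated override: update by primary key.
UPDATE customer_dim
SET    cust_name = :TU.cust_name, city = :TU.city
WHERE  cust_id = :TU.cust_id

-- Modified override: update on a non-key column plus a literal condition.
UPDATE customer_dim
SET    cust_name = :TU.cust_name, city = :TU.city
WHERE  cust_code = :TU.cust_code AND status = 'AAA'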
Question
HOW DO YOU PERFORM INCREMENTAL LOAD?
Answer
#1
Using incremental aggregation.
Answer
#2
1. Taking the target definition as a source and using a Joiner
and Update Strategy we can do the incremental loading.
2. By using a Lookup transformation, keeping the lookup on the target
and comparing.
Answer
#3
You can perform incremental load by using auxiliary
parameters.
Answer
#4
By using the date column in the source we do the incremental load,
specifying the start date in the Source Qualifier and changing the
start date in the parameter file for future runs.
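A hedged sketch of answer #4 as a Source Qualifier SQL override (the $$LAST_LOAD_DATE mapping parameter, table and column names are assumptions; the parameter value would come from the parameter file and be moved forward after each successful run):

SELECT order_id, customer_id, amount, last_updated_date
FROM   orders
WHERE  last_updated_date > TO_DATE('$$LAST_LOAD_DATE', 'YYYY-MM-DD')

Only rows changed since the previous run are read, which is what makes the load incremental.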
Question
For an unconnected lookup, what are the other transformations that can
be used in place of the Expression transformation?
Answer
#1
Filter,source qualifier
Answer
#3
Other than the Expression, we can call an unconnected lookup
from the Update Strategy transformation only.
Answer
#4
I think we can use the Aggregator t/r and Rank t/r for this.
We can't use the Filter and Update Strategy t/r because in these two
we write the condition in the conditions tab, not in the ports.
Answer
#5
Only the Filter can be used; no other transformation can be
used.

Answer
#7
Filter, Expression & update strategy
Answer
#8
You can use any t/r that has an expression editor; in such a t/r
you can use the :LKP value. You can use the
Update Strategy, Aggregator, Rank, Filter and SQ.
Question
What are the transformations that cannot be included in a mapplet?
Answer
#1
1. Normalizer
2. XML Source Qualifier
3. Target definitions
4. Sequence Generator
Answer
#3
*You cannot include the following objects in a mapplet:

1.Normalizer transformations
2.COBOL sources
3.XML Source Qualifier transformations
4.XML sources
5.Target definitions
6.Other mapplets
7.Pre- and post- session stored procedures
Answer
#6
You cannot include the following objects in a mapplet:

- Normalizer transformations
- Cobol sources
- XML Source Qualifier transformations
- XML sources
- Target definitions
- Pre- and post- session stored procedures
- Other mapplets
Question
What is the function of 'F10' in Informatica?
Answer
#1
used in debugging process
Answer
#2
F10 and F5 are used in the debugging process.
By pressing F10, the process moves to the next
transformation from the current transformation, and the
current data can be seen in the bottom panel of the window,
whereas F5 processes the full data at a stretch; in the case
of F5, you can see the data in the targets at the end of the
process but cannot see intermediate transformation values.
Answer
#3
F10 IS USUALLY USED FOR SWAPPING PURPOSES, FROM ONE PIECE OF
INFORMATION TO ANOTHER.
Question
what are the reusable tasks in informatica ?
Answer
#1
command task
session task
email task
Answer
#2
TAKING CALLS
SESSION TASK
MAIL TASK & CLOSING TASK
Question
surrogate keys usage in Oracle and Informatica?
Answer
#1
A surrogate key is one type of key which is used to maintain
history.
It is used in slowly changing dimensions (SCD).
Answer
#2
If you have multiple records for the same entity, then to maintain
those records we need to generate a surrogate key in Informatica.
Answer
#3
A surrogate key is the substitution of natural primary key
Answer
#4
A surrogate key is a system-generated sequence number which
is used for maintaining the history in SCD Type 2
applications.
Question
Why do you use shortcuts in Informatica?
Answer
#1
A shortcut is used to create a copy of an object from a shared
folder; that copied object inherits changes made to the original.
It works within the same repository.
Answer
#2
A shortcut is a concept of reusability. If there is a mapping
that can be reused across several folders, create it in one
folder and use shortcuts to it in the other folders. Thus, if
you have to make a change, you can do it in the main mapping,
and it is reflected in the shortcut mappings automatically.
Answer
#3
A shortcut provides the easiest way to reuse
objects. Shortcuts are created only from a shared folder: if
you want to reuse the object in your folder, go to the shared
folder and drag it into your local folder; in this way you can
create shortcuts.
This is as far as my knowledge goes.
Question
What is the filename which you need to configure in UNIX
while installing Informatica?
Answer
#1
pmserver.cfg
Answer
#2
In Informatica 7, under $PMRootDir there is a utility
(script) called pmconfig; through it we can configure
Informatica.
Question
What is data quality? How can a data quality solution be
implemented into my informatica transformations, even
internationally?
Answer
#1
Data will have special characters, symbols, nulls, zeros, etc.
You need to cure the data using default values, null
updates and the like; there are many such things.
In Informatica transformations, avoid the nulls, use the correct data
type with the right length, keep the data unique, etc.; in many such
ways you can cure the data.

Question
What is data merging, data cleansing and sampling?
Answer
#1
Data Cleansing: A two step process of detection and
correction of errors in a data set.
Answer
#2
Data merging: multiple detailed values are summarised into a
single summarised value.
Data cleansing: eliminating the inconsistent data.
Sampling: the process of arbitrarily reading the data from a
group of records.

Answer
#4
Data cleansing: the process of identifying and correcting
inconsistencies and inaccuracies.
Data merging: the process of integrating multiple
input sources into a single output with a similar structure and
data types.
Answer
#5
The main thing: merging of data is nothing but integrating data from
multiple source systems. It is of 2 types:
1. Horizontal merging (join)
2. Vertical merging (union)
Question
What are the different options used to configure the
sequential batches?
Answer
#1
Two options:
1. Run the session only if the previous session completes
successfully.
2. Always run the session.
Question
Which transformation should we use to normalize the COBOL
and relational sources?
Answer
#1
The Normalizer transformation. When you drag the COBOL source
into the Mapping Designer workspace, the Normalizer
transformation automatically appears, creating input and
output ports for every column in the source.
Answer
#3
The Normalizer t/r is especially used for COBOL sources because
they contain keywords such as OCCURS and REDEFINES that pack
multiple records into one; the Normalizer creates the generated key
(GK) and generated column id (GCID) ports to normalize them.
Question
What are the types of lookup caches?
Answer
#1
Persistent cache: you can save the lookup cache files and
reuse them the next time the Informatica server processes a
lookup transformation configured to use the cache.
Recache from database: if the persistent cache is not
synchronized with the lookup table, you can configure the
lookup transformation to rebuild the lookup cache.
Static cache: you can configure a static, or read-only, cache for
any lookup table. By default the Informatica server creates a
static cache. It caches the lookup table and lookup values
in the cache for each row that comes into the
transformation. When the lookup condition is true, the
Informatica server does not update the cache while it
processes the lookup transformation.
Dynamic cache: if you want to cache the target table and
insert new rows into the cache and the target, you can configure the
lookup transformation to use a dynamic cache. The Informatica
server dynamically inserts data into the target table.
Shared cache: you can share the lookup cache between
multiple transformations. You can share an unnamed cache between
transformations in the same mapping.
Question
what is meant by lookup caches?
Answer
#1
The Informatica server builds a cache in memory when it
processes the first row of data in a cached lookup
transformation. It allocates memory for the cache based on
the amount you configure in the transformation or session
properties. The Informatica server stores condition values
in the index cache and output values in the data cache.
Question
What is difference between Mapplet and reusable
transformation?
Answer
#1
A mapplet is nothing but a set of reusable transformations; we can
use a mapplet any number of times. In the case of a reusable
transformation we can't use it again.
Answer
#2
A reusable transformation is a single transformation which we
can use multiple times.
A mapplet is a set of reusable transformations which we can
use multiple times.
The only difference is that a reusable transformation is a single
transformation and a mapplet is a set of reusable transformations.
Answer
#3
1. A mapplet is a set of reusable transformations that we can use
multiple times;
a reusable transformation is a single transformation that
we can use multiple times.
2. In a mapplet the transformation logic is hidden.
3. If you create mapping variables or parameters in a
mapplet they can't be used in another mapping or mapplet,
unlike a reusable transformation, where you can use them in
another mapplet or mapping.
4. We can't include a source definition in a reusable
transformation, but we can include a source in a mapplet.
5. We can't use a COBOL source qualifier, Joiner or Normalizer
transformation in a mapplet.
Answer
#5
MAPPLET - A REUSABLE COLLECTION OF TRANSFORMATIONS CREATED TO
IMPLEMENT A PIECE OF LOGIC.
REUSABLE TRANS - A SINGLE REUSABLE TRANSFORMATION.
Answer
#6
Both are similar, but a mapplet consists of a set of reusable
transformations.
Question
Explain grouped cross tab?
Answer
#1
A grouped cross tab is the same as a cross tab report, only
grouped. E.g. with the emp and dept tables, take empno as the row,
ename as the column, deptno as the group item and sal as the cell;
then the output comes out like:
Deptno 10
-------------------
       raju | ramu | krishna | ...
7098 |  500 |
7034 |
7023 |  600 |
--------------
Deptno 20
......
and so on for each group.
Question
What is source qualifier?
Answer
#1
Using the SQ transformation we can draw data out of a source
object and drive it into the next transformation; it is a
mediator between the source object and the next
transformation, used to get the data from the source object.
Answer
#2
The Source Qualifier is also like a table; it acts as an
intermediary between the source and target metadata, and it
also generates the SQL used when creating the mapping between the
source and target metadata.
Question
What are the different types of schemas?
Answer
#2
Three types of schemas are available: star schema, starflake
schema and snowflake schema.
Star schema: it is highly denormalised, so we can retrieve
the data very fast.
Starflake schema: only one dimension contains one level of
hierarchy key.
Snowflake schema: it is highly normalised, so retrieval of
data is slow.
Question
What are slowly changing dimensions?
Answer
#1
There are three types of SCD:
Type 1: we overwrite the original record with the new
record.
Type 2: we create a new record.
Type 3: we create a new attribute.
Regards,
ande
Answer
#2
Slowly Changing Dimensions:(SCD)
Over period of time, the value /data associated with
dimensions may change. To track the changes we record the
changes as per the requirement.
There are three types of SCD
SCD 1:No history is maintained. As and when data comes, the
data is entered.
SCD 2: History is maintained
SCD 3: Partial History is maintained.
We maintain history for some columns but not for all.
For example, I have 3 records in a dimension
and I make 1 insert and 1 update. Then, depending on how the
dimension is to be maintained:
In SCD 1 the
total number of records is 4 (1 insert & 1 update in place).
In SCD 2 the
total number of records is 5 (1 insert & 1 update as a new row).
In SCD 3 the
total number of records is 4 (1 insert & 1 update in an extra column).
NOTE:
history means the difference between the stored data and the
incoming data; it doesn't mean years of data.
Answer
#3
DIMENSIONS ARE CLASSIFIED INTO THREE TYPES
SCD TYPE-1 (MAINTAIN CURRENT DATA)
SCD TYPE-2 (MAINTAIN CURRENT DATA+FULL HISTORY OF CHANGES)
SCD TYPE-3 (MAINTAIN CURRENT DATA+ONE TIME HISTORY)
Question
Explain one complicated mapping?
Answer
#1
SCD Type 2 is one of the complicated mappings in Informatica.
Answer
#2
The Normalizer transformation, which normalizes the data.
Ex:  year q1 q2 q3
     2006 10 20 30
     2007 20 30 40
After the Normalizer transformation the data
will be like this:
     year item sales
     2006  1    10
     2006  2    20
     2006  3    30
and likewise for 2007.
Question
What is Micro Strategy? Why is it used for?
Answer
#1
It is a BI tool used for reporting purposes.
Answer
#2
MicroStrategy is a business intelligence (BI), enterprise
reporting, and OLAP (on-line analytical processing)
software vendor.
Answer
#3
it is a bi tool for developing reports of company information by using database
Question
Explain about the concept of mapping parameters and
variables ?
Answer
#1
A mapping parameter is a constant value, whereas a mapping variable
can change during the mapping run. Mapping variables are mainly useful
for incremental load processing.
Answer
#2
A mapping parameter represents a constant value that we can't
change during the session run.
A mapping variable represents a value that we can change
during the session run.
Answer
#3
MAPPING PARAMETERS:
A MAPPING PARAMETER REPRESENTS A CONSTANT VALUE AND DOES NOT
CHANGE DURING THE SESSION.
MAPPING REUSABILITY CAN BE ACHIEVED.
MAPPING VARIABLES:
A MAPPING VARIABLE REPRESENTS A VALUE THAT CHANGES
DURING EXECUTION FROM AN INITIAL VALUE TO A FINAL VALUE.
MAPPING VARIABLES ARE USED IN THE INCREMENTAL LOAD PROCESS.
Question
What are the different types of Type 2 dimension mapping?

Answer
#1
Type 2 SCD maintains historical information plus current
information, with 3 options for marking versions:
1. effective date
2. version number
3. flag value
Question
Define informatica repository?
Answer
#1
Hi,
The Informatica repository is a central metadata storage place
which contains all the information necessary to
build a data warehouse or a data mart.
Metadata like source definitions, target definitions, business
rules, sessions, mappings, workflows, mapplets, worklets, database
connections, user information, shortcuts, etc.
Question
WHAT IS THE NAME OF THE PORT IN A DYNAMIC CACHE WHICH IS USED
FOR INSERT/UPDATE OPERATIONS?
Answer
#1
Associated port.
Answer
#3
If you mean inserting and updating the source records into the
cache, then the answer should be 'Insert Else Update', which
lies below the dynamic lookup cache property.
