
TNPM Bulk Collector

In TNPM, data can be collected using two modes:

SNMP based
SNMP collection uses predefined formulas in TNPM (known as discovery/collection formulas) to query SNMP-enabled devices and collect the necessary performance data.

Bulk based
Bulk collection reads data from flat files.

Regardless of the collection type used, the following steps are necessary:
Discover the elements/subelements that you will collect data from
Collect the data for those elements/subelements using a predefined polling period
Present the data using reports and charts (you can also export the data)

If using SNMP, you will use SNMP discovery formulas to create the elements and subelements with their properties. If using BULK, all discovery information will be available in the flat file (which we are going to call "pvline" from now on).

Internally, TNPM will create, for each BULK and each SNMP collector, a kind of "processing line" called a subchannel.
For SNMP, the first component in this processing line would be called something similar to SNMP.1.2 (channel 1, subchannel 2), and for BULK something like BCOL.2.3 (channel 2, subchannel 3).
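The "type.channel.subchannel" naming above is easy to take apart programmatically. As a small illustration (parse_subchannel() is a hypothetical helper, not a TNPM utility):

```python
def parse_subchannel(name):
    """Split a collector name like 'BCOL.2.3' into (type, channel, subchannel)."""
    kind, channel, sub = name.split(".")
    return kind, int(channel), int(sub)

# SNMP.1.2 -> SNMP collector, channel 1, subchannel 2
# BCOL.2.3 -> BULK collector, channel 2, subchannel 3
```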

PVLINE
Type Both
OPTION:Type=Line
OPTION:PVMVersion=3.0
OPTION:Element=NETD1
# Inventory Section
G1998/08/12 23:30:00 | Family | alias | NETD1_CPU1 | | inventory | NETD_CPU
G1998/08/12 23:30:00 | Label | alias | NETD1_CPU1 | | inventory | CPU C1
G1998/08/12 23:30:00 | Invariant | alias | | NETD1_CPU1 | inventory | invcpu1
G1998/08/12 23:30:00 | Instance | alias | NETD1_CPU1 | | inventory | cpu_<1>
G1998/08/12 23:30:00 | Slot | alias | NETD1_CPU1 | | property | S1
G1998/08/12 23:30:00 | Frequency | alias | NETD1_CPU1 | | property | 1GHz
(...)
OPTION:Element=NETD2
G1998/08/12 23:30:00 | Family | alias | NETD2_CPU1 | | inventory | NETD_CPU
G1998/08/12 23:30:00 | Label | alias | NETD2_CPU1 | | inventory | CPU C1
G1998/08/12 23:30:00 | Invariant | alias | | NETD2_CPU1 | inventory | invcpu1
G1998/08/12 23:30:00 | Instance | alias | NETD2_CPU1 | | inventory | cpu_<1>
G1998/08/12 23:30:00 | Slot | alias | NETD2_CPU1 | | property | S1
G1998/08/12 23:30:00 | Frequency | alias | NETD2_CPU1 | | property | 1GHz
(...)
# Data Section
G1998/08/12 23:30:00 | AP~Specific~Bulk~NETD_CPU~CPU_idle_pct | alias | NETD1_CPU1 | | float | 25.00
G1998/08/12 23:30:00 | AP~Specific~Bulk~NETD_CPU~CPU_user_pct | alias | NETD1_CPU1 | | float | 35.00
G1998/08/12 23:30:00 | AP~Specific~Bulk~NETD_CPU~CPU_system_pct | alias | NETD1_CPU1 | | float | 40.00
(...)
G1998/08/12 23:30:00 | AP~Specific~Bulk~NETD_CPU~CPU_idle_pct | alias | NETD2_CPU1 | | float | 35.00
G1998/08/12 23:30:00 | AP~Specific~Bulk~NETD_CPU~CPU_user_pct | alias | NETD2_CPU1 | | float | 40.00
G1998/08/12 23:30:00 | AP~Specific~Bulk~NETD_CPU~CPU_system_pct | alias | NETD2_CPU1 | | float | 25.00
(...)
G1998/08/12 23:45:00 | AP~Specific~Bulk~NETD_CPU~CPU_idle_pct | alias | NETD1_CPU1 | | float | 10.00
G1998/08/12 23:45:00 | AP~Specific~Bulk~NETD_CPU~CPU_user_pct | alias | NETD1_CPU1 | | float | 60.00
G1998/08/12 23:45:00 | AP~Specific~Bulk~NETD_CPU~CPU_system_pct | alias | NETD1_CPU1 | | float | 30.00
(...)
G1998/08/12 23:45:00 | AP~Specific~Bulk~NETD_CPU~CPU_idle_pct | alias | NETD2_CPU1 | | float | 20.00
G1998/08/12 23:45:00 | AP~Specific~Bulk~NETD_CPU~CPU_user_pct | alias | NETD2_CPU1 | | float | 70.00
G1998/08/12 23:45:00 | AP~Specific~Bulk~NETD_CPU~CPU_system_pct | alias | NETD2_CPU1 | | float | 10.00
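A pvline file like the example above can be produced from any scripting language. The following is a minimal sketch in Python, based only on the field layout shown in the example (timestamp | metric-or-property | alias | subelement | | type | value); pvline_row() is an illustrative helper, not part of any TNPM API:

```python
from datetime import datetime

def pvline_row(ts, name, subelement, kind, value):
    """Format one pvline row; 'kind' is e.g. 'inventory', 'property' or 'float'."""
    stamp = ts.strftime("G%Y/%m/%d %H:%M:%S")
    return f"{stamp} | {name} | alias | {subelement} | | {kind} | {value}"

ts = datetime(1998, 8, 12, 23, 30)
lines = [
    "Type Both",
    "OPTION:Type=Line",
    "OPTION:PVMVersion=3.0",
    "OPTION:Element=NETD1",
    # Inventory rows first, then data rows, all in time sequence.
    pvline_row(ts, "Family", "NETD1_CPU1", "inventory", "NETD_CPU"),
    pvline_row(ts, "AP~Specific~Bulk~NETD_CPU~CPU_idle_pct",
               "NETD1_CPU1", "float", "25.00"),
]
# Remember: the file this content is written to must use the ".pvline" extension.
content = "\n".join(lines) + "\n"
```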

Important Points
When creating the pvline, it is mandatory to add the ".pvline" extension to the filename (bfile.pvline, for example).
Some important details about the pvline format (and common mistakes):
Please be careful with typos. Using "TYPE Both" is not the same as "Type Both".
The file content must be in time sequence. The collector will ignore any line older than the last line read.
You can split the file into two main sections: the inventory section (lines with "inventory" or "property" in the example above) and the data section (lines with "float"). In the official documentation, you will see that the inventory and data sections are mixed. This is only useful if your inventory data for the same subelement changes inside the same file for different timestamps. If that is not the case, just write all inventory lines at the beginning of the file using the oldest timestamp ("G1998/08/12 23:30:00" in our example) and then put all data after it.
The "OPTION:Element=" line is only necessary in the inventory section; you don't have to put it in the data section.
Notice that the formula path does not contain "~" at the beginning. Adding a leading "~" is a very common error.
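The common mistakes above are easy to catch before dropping the file into the collector. Here is a hedged sanity-check sketch (check_pvline() is an illustrative helper, not part of TNPM):

```python
import re

# Data/inventory rows start with a "G" timestamp; OPTION and header lines do not.
TS_RE = re.compile(r"^G\d{4}/\d{2}/\d{2} \d{2}:\d{2}:\d{2} \|")

def check_pvline(filename, lines):
    """Return a list of problems found in a pvline file's name and content."""
    errors = []
    if not filename.endswith(".pvline"):
        errors.append("file name must end in .pvline")
    last_ts = ""
    for n, line in enumerate(lines, 1):
        if not TS_RE.match(line):
            continue  # "Type Both" / "OPTION:..." lines carry no timestamp
        ts = line.split(" | ", 1)[0]
        # Fixed-width "Gyyyy/mm/dd hh:mm:ss" stamps compare correctly as strings.
        if ts < last_ts:
            errors.append(f"line {n}: timestamp out of sequence")
        last_ts = ts
        name = line.split(" | ")[1]
        if name.startswith("~"):
            errors.append(f"line {n}: formula path must not start with '~'")
    return errors
```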

That is it. Once you have the BCOL running and you move the pvline file into its input directory, the file will be processed.
Speaking of file processing, some important points are listed below:
When using BULK collection, three conditions must be true before any data can be stored in the database:
The subelement(s) must exist and be active
The collection formulas must exist and the formula requests must be configured and active in the Request Editor
The oldest timestamp present in the file must be equal to or newer than the last timestamp read by the BCOL.

If configured to do the discovery, the BCOL will generate a new inventory file after every discovery time window (usually 60 min) of data processed. Notice that the time window is related to the processed data, meaning that if your pvline file contains data from 09:00 until 12:30, it will generate 3 discovery files: one when processing the data for 10:00, another at 11:00, and another at 12:00.
The first processed hour for a new subelement is always discarded, since the subelement won't exist until the discovery window is reached and the inventory file is created and processed.
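The window arithmetic above can be sketched as follows, assuming the 60-minute window counts whole-window boundaries crossed after the first timestamp in the data (an assumption; the product documentation defines the exact alignment):

```python
from datetime import datetime, timedelta

def discovery_files(start, end, window_min=60):
    """Count discovery files produced for data spanning start..end."""
    window = timedelta(minutes=window_min)
    boundary, n = start + window, 0
    while boundary <= end:
        n += 1
        boundary += window
    return n

# Data from 09:00 until 12:30 -> files when processing 10:00, 11:00 and 12:00.
```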

What should the collection formula contain in the case of BULK collection?
Answer: The collection formula should be empty. You just have to create it and add it to the Request Editor.
Is it mandatory to set the DISCOVERY_MODE to "sync_inventory"?
Answer: Yes, it is mandatory to set the DISCOVERY_MODE to "sync_inventory" in order to create the subelements. Please keep in mind that the BCOL will create one inventory file per processed hour (this file will be transmitted to the DataMart server using FTP or SFTP). In theory, you don't even have to create the inventory profile; it will be created automatically with the name "bulk_<# subchannel>". Also, don't forget to add the "pollinv" command to the crontab on the DataMart server. This command will consume the inventory files from the BCOL. If it is not in the cron, the BCOL will keep waiting until the command is executed.

How are differences (i.e., inventory changes) handled in the case of BULK collection?
Answer: The inventory process follows the same steps as for SNMP. After the profile is created, you can change some parameters (like grouping rules to be executed or sync actions) just as you do for SNMP. The only real difference is that the BCOL pushes the inventory files to the DataMart for every processed hour.
