Main background application tasks

The main goal of BigFix Inventory is to generate PVU and RVU audit reports based on the collected data. All calculations follow the PVU and RVU license pricing rules that are described in the official subcapacity licensing documents.

Aggregation

Aggregation is the main calculation task in BigFix Inventory. The aggregation process is a scheduled background task that runs daily at a particular hour. By default, it runs when the server time is midnight. The task calculates the PVU and RVU values based on the data that is collected from the agents during software and capacity scans.
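The daily schedule described above can be sketched as a small helper that computes the next run time. The midnight default and the `run_at` parameter are illustrative assumptions for this sketch, not the product's actual configuration interface:

```python
from datetime import datetime, time, timedelta

def next_aggregation_run(now: datetime, run_at: time = time(0, 0)) -> datetime:
    """Return the next scheduled start of the daily aggregation task.

    By default the task runs at server midnight; `run_at` stands in for
    the configurable hour mentioned in the text (hypothetical parameter).
    """
    candidate = datetime.combine(now.date(), run_at)
    if candidate <= now:
        # Today's slot has already passed, so schedule for tomorrow.
        candidate += timedelta(days=1)
    return candidate
```

A call at 10:30 on a given day would therefore return midnight of the following day.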

Reaggregation

If the initial software bundles are correct after software discovery and rebundling is unnecessary, aggregation is the only calculation process that is required on the BigFix Inventory server, and the data that was once calculated remains correct. However, this is rarely the case. You must always modify some parts of the initial bundles that BigFix Inventory proposes, and you must always confirm which bundles are correct for complex products. A complex product is a product that can be bundled with more than one software offering. After rebundling is complete, the PVU and RVU values that were already calculated must be refreshed. Reaggregation tasks were designed to recalculate these previously calculated values. Manual actions that might trigger data reaggregation include:
  • Rebundling a software instance from one product to another
  • Confirming the default bundle
  • Including a software instance in PVU or RVU calculations
  • Excluding a software instance from PVU or RVU calculations
  • Sharing an instance
These five actions are basic operations that application users perform frequently to adjust the bundling data. In addition to manual actions, a refresh of the calculated data can also be triggered by automated bundling.
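The triggering logic above amounts to marking only the affected products for recalculation rather than recalculating everything. The action names and data shapes in this sketch are hypothetical; the product does not expose such an API:

```python
# Hypothetical names for the five manual actions listed above.
TRIGGER_ACTIONS = {"rebundle", "confirm_bundle", "include", "exclude", "share"}

def queue_for_reaggregation(action: str, product_ids: set, pending: set) -> set:
    """Add the products affected by a manual bundling action to the
    reaggregation queue; unrelated actions leave the queue untouched."""
    if action in TRIGGER_ACTIONS:
        pending.update(product_ids)
    return pending
```

Only the queued products are then recalculated by the reaggregation task, which is what makes it cheap for small changes.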

Aggregation versus reaggregation

The aggregation process was designed to calculate the data from many agents for all products over a short time. In contrast, the reaggregation process was designed to quickly recalculate the PVU and RVU values for a selected subset of products that were already aggregated. Aggregating all products from all agents is much quicker (even hundreds of times) than reaggregating the same amount of data. However, when you must recalculate the PVU and RVU values of only one product, reaggregation is usually quicker than aggregation, which cannot recalculate the reporting value of a single product but must reaggregate all discovered products simultaneously.

Inventory builder

Inventory builder is another background task that is executed periodically. During each run, the software inventory is built based on the data from the agent software scans. In other words, this task transforms the list of discovered software components into a list of discovered software products. In most cases, the initial bundling of detected components that the inventory builder performs has a very low level of confidence.
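The component-to-product transformation can be illustrated with a toy catalog. The catalog contents, product names, and confidence values below are invented for the example; they are not the product's real data model:

```python
# Invented mini-catalog: a component can belong to several products
# (complex) or exactly one (simple); names and scores are illustrative.
CATALOG = {
    "db.core": [("Product A", 0.5), ("Product B", 0.5)],  # complex: low confidence
    "tool.cli": [("Product C", 1.0)],                     # simple: certain match
}

def build_inventory(discovered_components):
    """Turn a list of discovered components into candidate product
    assignments, each carrying an initial confidence level."""
    inventory = []
    for component in discovered_components:
        for product, confidence in CATALOG.get(component, []):
            inventory.append((component, product, confidence))
    return inventory
```

A complex component such as `db.core` yields several low-confidence candidates, which is why the initial bundling usually needs manual confirmation.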

Automated bundling

Like aggregation, automated bundling is a periodic background task. It is strictly related to the inventory builder task: when the execution of the inventory builder ends, automated bundling starts.

When the automated bundling task runs, it determines the best bundle connections for all unconfirmed product instances. If the newly calculated bundles have a higher level of confidence than the current product bundles, automated bundling rebundles those product instances to the new, best-matching product. In the vast majority of cases, subsequent automated bundling runs calculate the same level of confidence for most or even all of the unconfirmed product instances. From time to time, however, the newly calculated confidence level turns out to be higher than the old one; in large environments with a large percentage of unconfirmed instances, this can happen frequently. In this case, the software instance is rebundled. The most common reasons why automated bundling rebundles old unconfirmed product instances are:
  • Import of a new set of part numbers
  • Import of a new software catalog
  • Manual rebundling or manual confirmation of one product instance, after which other unconfirmed product instances can be better bundled by using partition or infrastructure collocation rules
  • Detection of a new simple software component (a component that can be assigned to only one product) by agents, which might change the calculations for other unconfirmed instances because of partition or infrastructure collocation automated bundling rules
However, automated bundling does not replace the manual work that is required to confirm or rebundle all unconfirmed product instances. The confidence level that automated bundling calculates is meant to facilitate manual bundling by providing the best potential bundling options for all unconfirmed product instances.
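The rebundling rule described above can be sketched as follows. The dictionary shape and the `propose` callback are assumptions made for this sketch, not the product's internal design:

```python
def automated_bundling(instances, propose):
    """Rebundle unconfirmed instances whenever a proposal scores strictly
    higher than the current bundle; confirmed instances are never touched.

    `instances` maps an id to a dict with "confirmed", "product", and
    "confidence" keys; `propose` returns the currently best-scoring
    (product, confidence) pair. Both shapes are hypothetical.
    """
    for inst in instances.values():
        if inst["confirmed"]:
            continue  # manual confirmations are never overridden
        product, confidence = propose(inst)
        if confidence > inst["confidence"]:
            inst["product"], inst["confidence"] = product, confidence
    return instances
```

The strict `>` comparison mirrors the text: when a new run calculates the same confidence as before, nothing is rebundled.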

Extract, Transform, Load (ETL)

In general, Extract, Transform, Load (ETL) is a database process that combines three functions to transfer data from one database to another. The first stage, Extract, involves reading and extracting data from various source systems. The second, Transform, converts the data from its original form into a form that meets the requirements of the target database. The last stage, Load, saves the new data into the target database, which finishes the process of transferring the data.

In BigFix Inventory, the Extract stage involves extracting data from the BigFix server. Such data includes information about the infrastructure, installed agents, and detected software. ETL also checks whether a new software catalog is available, gathers information about the software scans and the files that are present on the endpoints, and collects data from the VM managers.

The extracted data is then transformed into a single format that can be loaded into the BigFix Inventory database. This stage also involves matching scan data with the software catalog, calculating processor value units (PVU), processing the capacity scan, and converting information that is contained in the XML files.

After the data is extracted and transformed, it is loaded into the database, where it can be used by BigFix Inventory.
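The three stages can be illustrated with a toy pipeline. The record fields and the in-memory "databases" below are invented for the example and stand in for the BigFix server and the inventory database:

```python
def extract(source_rows):
    """Extract: read raw records from a source system (a stand-in for
    the BigFix server in this sketch)."""
    return list(source_rows)

def transform(rows):
    """Transform: normalize each record into the shape that the target
    database expects (hypothetical field names)."""
    return [{"host": row["computer"].lower(), "software": row["sw"]} for row in rows]

def load(rows, target):
    """Load: persist the transformed records in the target database
    (a plain list here)."""
    target.extend(rows)
    return target

# The full pipeline chains the three stages in order.
target_db = []
load(transform(extract([{"computer": "HOST1", "sw": "DB2"}])), target_db)
```

Chaining the stages in this order is what the nightly import does at scale: each stage consumes the previous stage's output and never touches the source systems again.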