Since the predictive network management system closely approximates the future behavior of the managed data, as shown by the GVT versus real-time state values in Figures 3, 6, 8, and 10, the verification query period can be determined automatically as a function of the look-ahead window and the tolerance. The goal is to minimize the frequency of verification queries, thus addressing the polling problem in network management.
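One way to realize this rule can be sketched as follows. The function name and the proportional-shrink policy are illustrative assumptions, not the paper's algorithm; the source only states that the period is a function of the look-ahead window and the tolerance.

```python
def verification_period(lookahead, tolerance, predicted_error):
    """Illustrative rule: verify as rarely as the prediction allows.

    lookahead       -- seconds the predictor can see ahead (look-ahead window)
    tolerance       -- maximum acceptable prediction error
    predicted_error -- predictor's expected error over the window
    (all names are hypothetical; the shrink policy is an assumption)
    """
    if predicted_error >= tolerance:
        # Prediction is not trustworthy over the full window;
        # shrink the period in proportion to the error overshoot.
        return lookahead * tolerance / predicted_error
    # Prediction holds within tolerance: one verification per window.
    return lookahead
```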

In most standards-based approaches, network management stations sample counters in managed entities; these counters simply increment in value until they roll over. A management station that is simply plotting data will poll at some fixed interval and record the absolute difference between successive counter values. Such a graph is not a perfectly accurate representation of the data; it states only that, sometime within a polling interval, the counter monotonically increased by some amount. Spikes in this data, which may be very important to the current state of the system, can go unnoticed if the polling interval is so long that a spike followed by low values averages out to a normal or low value. Our goal is to determine the minimum polling interval required to represent the data accurately.
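The per-interval counter difference described above can be sketched as follows; this is a minimal illustration, assuming a 32-bit wrap-around counter (as with SNMP Counter32) and at most one rollover per polling interval.

```python
COUNTER_MAX = 2**32  # an SNMP Counter32 rolls over at 2^32

def counter_delta(prev, curr, modulus=COUNTER_MAX):
    """Increase of a monotonically increasing counter between two polls,
    accounting for at most one rollover within the interval."""
    return (curr - prev) % modulus
```

For example, a reading of 5 following a reading of 2**32 - 10 yields a delta of 15, not a negative value.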

From the information provided by the predictive management system, a polling interval which provides the desired degree of accuracy can be determined and dynamically adjusted; however, the cost of polling must also be determined, as discussed next.

An upper limit on the number of systems which can be polled is

    N ≤ T/τ

where *N* is the number of devices capable of being polled, *T* is the polling interval, and *τ* is the time required for a single poll. Thus, although the data accuracy will be constrained by this upper limit, taking advantage of characteristics of the data to be monitored can help distribute the polling intervals efficiently within this constraint. Assume that *τ* is a calculated and fixed value, as is *N*. Thus *Nτ* is a lower bound on the value of *T*.

The overhead bandwidth required for use by the management system to perform polling is shown in Equation 3. Let *K* be the number of packets, *L* be the bits/packet, *N* be the number of devices polled, and *T* be the polling period; then

    BW_o = (K × L × N) / T     (3)

where *BW* is the total available bandwidth and *BW_o* is the overhead bandwidth of the management traffic. The packet size will vary depending upon whether it is an SNMP or CMIP packet and the MIB object(s) being polled, and the number of packets varies with the amount of management data requested.
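Equation 3 evaluates directly; a minimal sketch, assuming the reconstructed form BW_o = K·L·N/T in bits per second.

```python
def overhead_bandwidth(k_packets, bits_per_packet, n_devices, period):
    """Management overhead bandwidth (Equation 3): BW_o = K*L*N / T.

    k_packets       -- K, packets per poll of one device
    bits_per_packet -- L, bits per packet
    n_devices       -- N, devices polled
    period          -- T, polling period in seconds
    Returns bits per second of management traffic."""
    return k_packets * bits_per_packet * n_devices / period
```

For example, 2 packets of 800 bits to each of 50 devices every 10 seconds costs 8000 bits/s of overhead.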

We may want to limit the bandwidth used for polling system management data to no more than a certain percentage of the total bandwidth. The optimal polling interval is then the one that uses the least bandwidth while keeping the variance due to error in the data signal as small as possible. All of the information required to maintain cost versus accuracy at a desired level is provided by the predictive network management system.
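Solving the overhead relation K·L·N/T ≤ p·BW for *T* gives the shortest interval that respects a bandwidth budget; a minimal sketch, with hypothetical parameter names.

```python
def min_interval_for_budget(k_packets, bits_per_packet, n_devices,
                            total_bw, max_fraction):
    """Smallest polling interval T whose overhead K*L*N/T stays within
    max_fraction of the total bandwidth BW:

        K*L*N/T <= max_fraction * BW
        =>  T >= K*L*N / (max_fraction * BW)
    """
    return (k_packets * bits_per_packet * n_devices
            / (max_fraction * total_bw))
```

For example, holding management traffic to 5% of a 1 Mb/s link with 2 packets of 800 bits to each of 50 devices requires T of at least 1.6 seconds.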

Thu Feb 27 15:34:42 CST 1997