OKSI has developed and demonstrated an advanced fire detection system. The test system consists of individual smoke detectors wired to a PC that provides a real-time digital readout. Data were acquired at the UL fire facility for various "standard" fires and fuels, including wood, paper, gasoline, plastics, and smoldering fires. The algorithms described below were tested with these data and demonstrated faster detection with a lower false alarm incidence than conventional multi-purpose sensors. The paradigm produces detection at a quantifiable, preselected false alarm rate. The algorithms can be expanded to any number or type of detectors, including various gas sensors, thermal, ionization, photo-optical, etc. Moreover, these algorithms serve as the basis for an intelligent fire protection paradigm discussed below.
The complexity and wide variability of fire events and their signatures make reliable fire detection at a low false alarm rate a very difficult task. Many investigators have found that neither the absolute level of a signal produced by fire detection equipment nor its rate of change, by themselves or jointly, is a reliable indicator. Present equipment almost invariably relies on a threshold for the decision to activate an alarm. In recent years, manufacturers have recognized the potential of multi-criteria detectors to produce a more sensitive indicator. However, the algorithms that are used, such as "majority vote" or "winner-takes-all", do not really take advantage of the combined features, and the system still suffers from the disadvantages of using a single sensor. Sensor data fusion techniques, however, can be utilized to produce an earlier, more reliable detection at a lower false alarm rate. Data fusion can be applied to events characterized by several types of sensors, or by similar sensors separated spatially or in the event space.
OKSI's approach to fire detection is based on several principles:
I. Situational Awareness & Event Anticipation: In the present context this refers to a continuous analysis of the signal from one or multiple detectors. The purpose of the analysis is (i) to ascertain whether the steady-state status (no event) has been perturbed, and if so (ii) to determine the possible nature of the perturbation. The nature of a perturbation can be assessed by comparison with the background environment, and predictions can be made as to the trend exhibited by the detected events. As more data are acquired, the predictions are corrected and become more accurate. Typical methods for this prediction/correction process include Kalman Filters and Neural Networks. A filter, defined as a predictor of a state vector, uses consecutive measurements to improve the predictions. The simplest approach is analogous to a linear Kalman Filter, although it suffers from significant limitations; Extended Kalman Filters and generalized non-linear filtering techniques are applied.
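As a concrete illustration of this prediction/correction cycle, the sketch below runs a scalar level-plus-trend Kalman Filter over a rising smoke-density signal. The state model, noise covariances, and readings are illustrative assumptions for the sketch, not parameters of the actual system.

```python
import numpy as np

def kalman_trend_filter(measurements, dt=1.0, q=1e-4, r=0.05):
    """Scalar level+slope Kalman filter: predicts the next sensor
    reading, then corrects the prediction with each new measurement."""
    F = np.array([[1.0, dt], [0.0, 1.0]])    # state transition (level, slope)
    H = np.array([[1.0, 0.0]])               # only the level is observed
    Q = q * np.eye(2)                        # process noise covariance
    R = np.array([[r]])                      # measurement noise covariance
    x = np.array([[measurements[0]], [0.0]]) # initial state estimate
    P = np.eye(2)                            # initial state covariance
    estimates = []
    for z in measurements:
        # prediction step
        x = F @ x
        P = F @ P @ F.T + Q
        # correction step
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (np.array([[z]]) - H @ x)
        P = (np.eye(2) - K @ H) @ P
        estimates.append((x[0, 0], x[1, 0]))  # (level, trend)
    return estimates

# A steadily rising smoke signal should yield a positive estimated trend.
readings = [0.0, 0.1, 0.22, 0.29, 0.41, 0.5, 0.62, 0.7]
level, trend = kalman_trend_filter(readings)[-1]
```

The estimated trend component is the quantity of interest for event anticipation: a persistently positive trend, rather than any single reading, signals a perturbation of the steady state.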
II. Distributed Control: A system with a distributed control architecture utilizes a local microcontroller or microprocessor at each sensor location or node. The system may still be connected to a common central control panel to inform the operator of the situation, but it has several advantages over conventional systems based on a central fire control panel:
- Local processing is faster, since a local processor is not occupied with the many tasks that a central control computer must perform
- Distributed control is not susceptible to a single point of failure
- Significantly improved performance is possible by resorting to parallel processing
- Local control provides autonomous operation long after other parts of the building or the control center have been lost.
III. Data Fusion: Data fusion includes the collection, association, and merging of data to generate a coherent representation of the situation, and to anticipate evolving situations. This requires efficient multisensor integration. The system must accommodate sensors that respond at different intervals and in different parts of the event space, both similar and dissimilar sensors that provide different types of data with different degrees of accuracy. The advantages of multi-sensor data fusion are:
- Improved robustness (broader range of phenomenology utilized, less sensitivity to a single faulty sensor, faster response)
- Performance enhancement (multiple looks at the event, increased detection probability due to the merger of information, inherent redundancy, higher survivability)
- Better coverage (extended spatial coverage, reduced ambiguity).
This technique involves the development of a communications policy and interactions between sensors, the sharing of partial information, and a global control strategy to produce coherent and consistent results. As such, the control system and logic within each sensor must be capable of (i) understanding the information provided by other sensors and other locations, (ii) fusing the information provided and forming a statistical and probabilistic identification of the fire event in order to assess the temporal situation, and (iii) resolving sensor conflicts and failures.
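One simple way to merge corroborating evidence, sketched below under an assumed conditional-independence model, is to multiply per-sensor likelihood ratios into the odds of the fire hypothesis. The prior probability and the ratios shown are purely illustrative values, not measured sensor characteristics.

```python
def fuse_independent_sensors(prior_fire, likelihood_ratios):
    """Fuse evidence from several sensors under a conditional-independence
    assumption: each sensor k contributes a likelihood ratio
    L_k = P(reading_k | fire) / P(reading_k | no fire)."""
    odds = prior_fire / (1.0 - prior_fire)
    for lr in likelihood_ratios:
        odds *= lr                 # independent evidence multiplies the odds
    return odds / (1.0 + odds)     # posterior probability of fire

# Three sensors individually lean only mildly toward "fire"; fused,
# they are far more conclusive than any single reading alone.
p = fuse_independent_sensors(prior_fire=0.01, likelihood_ratios=[8.0, 5.0, 12.0])
```

The same skeleton also accommodates conflict resolution: a sensor believed faulty can simply be assigned a likelihood ratio of 1, removing its influence without restructuring the fusion rule.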
Examples of fire detection and sensor data fusion considerations also include areas such as:
- The specific setting (e.g., industrial, commercial, residential, etc.)
- Building structure information (e.g., construction materials, location relative to other sensors, type of structure)
- Purpose of the location in the building (e.g., aircraft hangar, storage room, office, welding shop, warehouse, etc.)
- Temporal considerations (time of day, occupancy, etc.)
- Individual sensor parameters (reliability, response time, single-sensor probability of detection and false alarm rate, sufficient measurement statistics)
- Phenomenology (e.g., mass transfer effects, including conduction and convection, for chemical and particulate sensors)
- Corroborating evidence and conflict resolution (expected versus observed evidence from other types of sensors and other locations)
Considerations for fire suppression and decision theory include:
- Extent of the fire event: which abatement means to activate (considering water damage vs. fire damage), degree of abatement (partial or proportional and localized sprinkler flow, mist vs. full flow, etc.), damage from water (and toxic product) run-off
- Hypothesis testing and the cost associated with a wrong decision, based on "optimal decision" rules
- Concerns regarding toxic gases
- Continuous assessment of the results of the abatement, as a basis for further action
- Ability to distinguish between smoke and steam, etc.
The concept of an intelligent fire detection and control system is consistent with the trend toward increasingly computer-controlled building and home functions. Under this scenario the control of appliances, power consumption and distribution, and safety and security features will be assisted by a computer, allowing the owner to program, control, and receive reports and feedback on any desired function and event. Intelligent fire detection and control could be an integral part of such systems, yet it should also be able to operate independently and autonomously.
Nonparametric versus Parametric Detection
We consider the general detection problem for a sensor measurement which produces a random variable x. A decision must be made as to which one of two possibilities, the null hypothesis H0 (no fire) or the alternative hypothesis H1 (fire), is true. If H0 and H1 could be observed directly, there would be no uncertainty in the detection process, and a correct decision could be made every time. Unfortunately, however, the observations of H0 and H1 are usually corrupted by noise. Furthermore, because of the phenomenology of fire, the signal associated with H1 itself exhibits fluctuations. We must thus devise a detector that attempts to compensate for the distortions introduced by the noise and fluctuations, and chooses either H0 or H1 as the correct source of the observed data.
Let f( x | H0 ) and f( x | H1) denote the probability density functions associated with x, considered as a stochastic process, under hypothesis H0 and H1 respectively.
Parametric detectors assume that these densities are known, and utilize this knowledge to formulate a decision rule, D. If the actual density functions are the same as those assumed in determining D, the performance of the detector, in terms of probability of detection and false alarm, will be good. If, however, there is a significant difference between the assumed and actual densities associated with the input x, the performance of the parametric detector will be poor.
Nonparametric detectors, on the other hand, make no assumptions about these densities; only generalities, such as continuity of the cumulative distribution function, are sometimes invoked. As such, they maintain a fairly constant level of performance even under wide variations of the sensor noise density and signal fluctuations.
The development and application of nonparametric detectors was considerably delayed by the overwhelming acceptance, by statisticians and engineers alike, of normal distribution theory. Since it could be shown that the sum of a large number of independent and identically distributed random variables approaches the normal distribution, practitioners in almost every field used the normal distribution for whatever purpose they had in mind. Only with the publication of the fundamental paper of Hotelling and Pabst did this trend begin to change.
The detection theory is cast in a modified Neyman-Pearson framework, to allow for composite hypotheses. For all types of detectors, the main advantage of that paradigm over the Bayes decision criterion is that it yields a decision rule that keeps the false alarm probability, α, less than or equal to some prechosen value, while maximizing the probability of detection for this value of α.
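The sketch below shows one simple way such a constraint can be enforced in practice: given a sample of the test statistic under H0, the detection threshold is placed at the empirical (1 − α) quantile, so the false alarm rate on that sample does not exceed α. The Gaussian H0 statistics used here are an illustrative assumption only.

```python
import numpy as np

def np_threshold(h0_statistics, alpha):
    """Neyman-Pearson style thresholding: pick the smallest threshold
    whose empirical false alarm rate on H0 data does not exceed alpha."""
    stats = np.sort(np.asarray(h0_statistics))
    # index of the empirical (1 - alpha) quantile
    k = int(np.ceil((1.0 - alpha) * len(stats))) - 1
    return stats[min(max(k, 0), len(stats) - 1)]

rng = np.random.default_rng(0)
h0 = rng.normal(0.0, 1.0, 10_000)   # test statistic under "no fire"
t = np_threshold(h0, alpha=0.01)
pfa = np.mean(h0 > t)               # empirical false alarm rate
```

Whatever the distribution of the H0 statistics, the threshold adapts so that the preselected false alarm rate is respected, which is exactly the property that makes the Neyman-Pearson framing attractive here.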
Modality for Nonparametric Detection
For a detector to be classified as nonparametric or distribution free, its false alarm rate must remain constant for a broad range of sensor signal densities. A detector whose test statistic uses the actual amplitudes of its inputs (i.e., the amplitudes of the sensor measurements) is not likely to have a constant false alarm rate for a "small" number of observations, as is the case in fire detection. Here, by small we mean less than a couple of hundred observations, as opposed to an asymptotic (or large) number. This can readily be inferred from any elementary discussion of parametric detectors and the t-test. As suggested by Thomas' fundamental review of the field, amplitudes of observations may be avoided by using polarity information, or by ranking the data and using the relative ranks and polarities in the decision process.
After careful consideration, we believe that the most useful fire detection paradigm will use advanced features of rank statistics. To be specific, individual sensor measurements, x_i, are stored until n samples are available (a user-specified window). The observations are then ranked in order of increasing absolute value. The corresponding test statistics have the following general form:

T = Σ_{i=1..n} u(x_i) a(r_i),

where r_i is the rank of the i-th observation, u(.) is the unit step function, and a(.) is a function of the ranks. The function of the ranks, a(.), can be chosen in many different ways, each resulting in a different detector.
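As a sketch of one such choice, taking a(r) = r yields the Wilcoxon signed-rank statistic; the sample values below are illustrative noise-like numbers, not recorded sensor data.

```python
def signed_rank_statistic(samples, a=lambda r: r):
    """General rank test statistic T = sum_i u(x_i) * a(r_i), where r_i is
    the rank of |x_i| (1 = smallest) and u is the unit step function.
    With a(r) = r this is the Wilcoxon signed-rank statistic."""
    order = sorted(range(len(samples)), key=lambda i: abs(samples[i]))
    ranks = [0] * len(samples)
    for rank, i in enumerate(order, start=1):
        ranks[i] = rank
    # sum a(r_i) over the positive observations only (u(x_i) = 1)
    return sum(a(ranks[i]) for i, x in enumerate(samples) if x > 0)

# Zero-mean, noise-like data give a middling value of T; adding a
# positive shift (a "signal") pushes T toward its maximum n(n+1)/2.
noise_like = [0.2, -0.3, 0.1, -0.15, 0.05, -0.25]
shifted = [x + 0.5 for x in noise_like]
```

Because T depends only on ranks and polarities, never on raw amplitudes, its distribution under H0 does not depend on the noise density, which is what makes the false alarm rate controllable without knowing that density.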
We have constructed a polarity test for the simple instance of detecting whether a positive signal is embedded in noise of arbitrary distribution; the test statistic counts the polarities (signs) of the observations. We can show that it is uniformly most powerful (by the Neyman-Pearson lemma) at level α. Of course, if more information about the densities is available, a more efficient nonparametric test may be constructed. Also, if the densities were, for example, Gaussian, a parametric detector would give better results. This does not appear to be the case in fire detection.
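A minimal sketch of such a polarity test follows, assuming i.i.d. samples whose noise has zero median and is symmetric under H0, so that each polarity is a fair coin flip; the window length and α below are illustrative choices.

```python
from math import comb

def sign_test_threshold(n, alpha):
    """Smallest k such that P(#positives >= k | H0: p = 0.5) <= alpha."""
    tail = 0.0
    # accumulate the binomial upper tail, walking down from k = n
    for k in range(n, -1, -1):
        tail += comb(n, k) * 0.5 ** n
        if tail > alpha:
            return k + 1
    return 0

def polarity_detect(samples, alpha=0.05):
    """Declare H1 (signal present) when the count of positive samples
    exceeds what zero-median noise alone would produce at level alpha."""
    k = sign_test_threshold(len(samples), alpha)
    return sum(1 for x in samples if x > 0) >= k

# 17 of 20 positive polarities is well above the alpha = 0.05 cutoff.
alarm = polarity_detect(
    [0.4, 0.1, 0.3, -0.2, 0.5, 0.2, 0.6, 0.1, -0.1, 0.3,
     0.2, 0.4, 0.1, 0.5, 0.3, 0.2, -0.3, 0.4, 0.2, 0.1], alpha=0.05)
```

Note that α is met exactly by construction: the threshold is derived from the binomial tail under H0, not from any assumption about the noise amplitude distribution.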
Our main effort was, however, devoted to the construction of an efficient detector based upon the rank-correlation coefficient introduced by Kendall. This coefficient is not used directly; rather, we construct a related statistic. The interesting features of this approach are as follows:
- the algorithm can readily be extended from one to two sensors;
- under certain conditions it possesses a nonparametric test statistic;
- the detection process is extremely sensitive to monotonic signal trends;
- it is insensitive to signal spikes;
- one can implement a moving window that is updated recursively, which keeps the memory requirements of the sensor node quite modest.
For a single data sequence x = (x_1, ..., x_n) from one sensor, let T denote the trend-construction operator and let t be a threshold to be determined. If [.] represents the logical operator taking the value +1 if the condition in the brackets is true, and 0 if it is false, one sets

T(x) = Σ_{i<j} [ x_j > x_i ],

and declares a trend when T(x) exceeds t.
If two sequences of data, x and y, originating from two sensors are available, we construct the statistic

T(x, y) = Σ_{i<j} [ (x_j - x_i)(y_j - y_i) > 0 ],

and operate with T(x, y) as above.
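A sketch of the windowed trend statistic and its recursive update is given below: the statistic counts increasing pairs within a moving window, consistent with the +1/0 logical operator described above. The window length, threshold, and class name are illustrative choices for the sketch.

```python
from collections import deque

def trend_statistic(window):
    """T = sum over i<j of [x_j > x_i]: the number of increasing pairs.
    A monotone rise drives T toward n(n-1)/2, while a single spike can
    shift it by at most n-1, so isolated outliers barely affect it."""
    n = len(window)
    return sum(1 for i in range(n) for j in range(i + 1, n)
               if window[j] > window[i])

class MovingTrendDetector:
    """Moving-window trend detector; the statistic is updated recursively
    as samples enter and leave, so only the window itself is stored."""
    def __init__(self, n, threshold):
        self.window = deque(maxlen=n)
        self.threshold = threshold
        self.t = 0
    def update(self, x):
        if len(self.window) == self.window.maxlen:
            # drop pairs in which the departing oldest sample took part
            old = self.window.popleft()
            self.t -= sum(1 for v in self.window if v > old)
        # add pairs in which the arriving sample is the later element
        self.t += sum(1 for v in self.window if x > v)
        self.window.append(x)
        return self.t >= self.threshold

# Five strictly rising samples make all 10 pairs increasing.
detector = MovingTrendDetector(n=5, threshold=10)
alarms = [detector.update(x) for x in [0.1, 0.2, 0.3, 0.5, 0.8]]
```

Each update costs only O(n) comparisons and O(n) memory regardless of how long the sensor runs, which is what makes the recursive window attractive for the small local processors of a distributed-control node.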