On the Comparison of Estimators in a Rectangular Distribution
Given two estimators θ̂₁ and θ̂₂ of an unknown parameter θ, Pitman (1937) suggested calling θ̂₁ a "closer" estimator of θ than θ̂₂ if

    Pr(|θ̂₁ − θ| < |θ̂₂ − θ|) > 1/2

for all values of θ. Using this "closeness" criterion, Johnson (1950) compared several well-known estimators of θ based on a random sample from the rectangular distribution

    f(x; θ) = 1/θ,  0 < x < θ;  = 0, otherwise.

In addition, each estimator's mean squared error was given, since Pitman's criterion does not take into account the degree of closeness. Unfortunately, when two such estimators of θ are compared, one of them is often preferred under the "closeness" criterion while the other is favored under the "smaller mean squared error" criterion. Yet each criterion does provide information not given by the other, and as Johnson comments: "Ideally, of course, it would be desirable to use both criteria (and, indeed, others such as the mean absolute error)."

Accordingly, when comparing two estimators of θ, we present a technique which essentially examines (a) the odds in favor of one of the estimators being closer to θ than the other, and (b) each estimator's average closeness to θ, not only when it is the closer of the two but also when it is not. Joint consideration of these concepts is shown to form a basis for determining which of the two estimators is preferred in a given situation. And while in this paper we consider only the aforementioned estimation problem with which Johnson dealt, the technique has broad applicability (Dyer and Keating 1979a; Dyer, Keating, and Hensley 1979b).
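To make the tension between the two criteria concrete, the following Monte Carlo sketch (our illustration, not a computation from the paper) compares two familiar estimators of θ for the rectangular distribution on (0, θ): the sample maximum and twice the sample mean. It estimates both Pitman's closeness probability and each estimator's mean squared error; the sample size n = 10 and the particular estimator pair are illustrative choices.

```python
import numpy as np

# Illustrative comparison of two estimators of θ for the rectangular
# (uniform) distribution on (0, θ):
#   t1 = max(X)        (maximum-likelihood estimator)
#   t2 = 2 * mean(X)   (method-of-moments estimator)
# We approximate (a) Pitman's closeness probability
# Pr(|t1 − θ| < |t2 − θ|) and (b) each estimator's mean squared error.

rng = np.random.default_rng(0)
theta, n, reps = 1.0, 10, 100_000

# reps independent samples of size n from the rectangular distribution
x = rng.uniform(0.0, theta, size=(reps, n))
t1 = x.max(axis=1)            # sample maximum
t2 = 2.0 * x.mean(axis=1)     # twice the sample mean

# (a) fraction of samples in which t1 lands closer to θ than t2 does
closer = np.mean(np.abs(t1 - theta) < np.abs(t2 - theta))

# (b) Monte Carlo mean squared errors
mse1 = np.mean((t1 - theta) ** 2)
mse2 = np.mean((t2 - theta) ** 2)

print(f"Pr(t1 closer than t2) ~ {closer:.3f}")
print(f"MSE(t1) ~ {mse1:.4f}, MSE(t2) ~ {mse2:.4f}")
```

Here the sample maximum happens to win on both criteria; substituting other estimator pairs from Johnson's comparison (for example, the unbiased multiple (n+1)/n · max(X)) produces cases where the two criteria disagree.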