The methods used by the U.S. Geological Survey to measure subsurface temperatures have evolved considerably over the years. Although some of the early measurements were obtained using thermistor strings frozen into permafrost, the vast majority of the measurements were made in fluid-filled holes using a custom temperature sensor. A typical sensor used in Alaska prior to 1989 consisted of a series-parallel network of 20 thermistors; see Sass et al. for a more detailed description. During a logging experiment, the resistance of the thermistor network was determined using a Wheatstone bridge prior to 1967. After that time, a 4-wire resistance measurement was made using a commercial 5.5-digit multimeter (DMM).

Before 1984, boreholes were logged in the 'incremental' or 'stop-and-go' modes; the vertical spacing of the measurements was typically 3-15 m. Beginning in 1984, the depth/resistance measurements were automatically stored on magnetic tape, allowing boreholes to be logged in the 'continuous' mode; the typical data spacing for the continuous temperature logs was 0.3 m (1 ft). Many of the Alaskan boreholes were re-logged several times to quantify the thermal disturbance caused by drilling the holes (see Lachenbruch and Brewer). A review of current temperature measuring techniques used by the USGS in the polar regions is given by Clow et al. Data from 1950-1989 are included on the CAPS CD-ROM Version 1.0, June 1998.
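The temperature values in these logs were obtained by converting the measured resistance of the thermistor network to temperature. As a minimal sketch of the general principle only (the actual USGS calibration for the 20-thermistor network is described in Sass et al. and is not reproduced here), a single thermistor's resistance-temperature relation is commonly modeled with the Steinhart-Hart equation; the coefficients below are generic example values for a 10 kOhm NTC thermistor, not USGS calibration constants.

```python
import math

# Example Steinhart-Hart coefficients for a generic 10 kOhm NTC thermistor.
# These are illustrative values only, not the USGS network calibration.
A = 1.129241e-3
B = 2.341077e-4
C = 8.775468e-8

def resistance_to_temperature(r_ohms: float) -> float:
    """Convert a measured thermistor resistance (ohms) to temperature (deg C)
    using the Steinhart-Hart equation: 1/T = A + B*ln(R) + C*(ln(R))**3."""
    ln_r = math.log(r_ohms)
    t_kelvin = 1.0 / (A + B * ln_r + C * ln_r ** 3)
    return t_kelvin - 273.15

# With these example coefficients, 10 kOhm corresponds to about 25 deg C.
print(resistance_to_temperature(10000.0))
```

In practice, a 4-wire measurement such as the one described above is used precisely so that the lead-wire resistance of the borehole cable does not contaminate the resistance value fed into a conversion like this one.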