HET Mid-Trimester Report
Second Period of 2004
April 1 - June 1

This report is composed of five sections:
Facility Status

In this section we discuss the status of the HET facility and each instrument, along with any limitations on instrument configurations that occurred during the period.


Observing Statistics

The following FWHM image quality statistics were taken from the LRS pre-imaging measurements and the FIF acquisition guider values reported for science operations in the night reports.

For comparison, here is the LRS pre-imaging image quality for the same period of 2003.

Here are the DIMM values reported in the night report.

Month by Month Summary

The following table gives the observing statistics for each month. Column A gives the fraction of the month that was spent attempting science (as opposed to engineering or instrument commissioning). Science time is defined to begin at 18 degree twilight or the first science target and to end at 18 degree twilight or the last science target. Column C gives the fraction of the possible science time lost due to weather. Columns D through I break down the remaining science time (after removing weather losses) into time spent with the shutter open on science targets and time lost to overhead, calibrations, alignment, problems, and unaccounted causes. Please note that the first stack of the night often occurs before 18 degree twilight.

Column key:
  A: Fraction of the time that was possible science
  B: Average night length (hours)
  C: Fraction of the total possible science time lost due to weather
  D: Fraction of actual science time spent with shutter open
  E: Fraction of actual science time lost due to overhead
  F: Fraction of actual science time lost due to calibrations
  G: Fraction of actual science time lost due to alignment
  H: Fraction of actual science time lost due to problems
  I: Fraction of actual science time not accounted for or lost

Month   A      B     C     D     E     F     G     H     I
April   0.926  8.22  0.38  0.44  0.30  0.03  0.08  0.02  0.13
May     0.917  7.20  0.26  0.50  0.27  0.04  0.04  0.08  0.08

The last column (I) is new and accounts for all of the lost or unused minutes. Some of this time is accounting error and some is time not charged to any program due to operations inefficiency.
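As an illustration of how that remainder is computed, here is a minimal sketch; the minute totals are made-up placeholders, not the actual night-report numbers.

    # Sketch: derive the "not accounted for or lost" fraction as the remainder of
    # actual science time once the tracked categories are removed.
    # All minute totals below are hypothetical placeholders.

    def science_time_breakdown(shutter, overhead, calibration, alignment, problems,
                               actual_science_minutes):
        """Return each category as a fraction of actual (post-weather) science time."""
        tracked = shutter + overhead + calibration + alignment + problems
        return {
            "shutter_open": shutter / actual_science_minutes,
            "overhead": overhead / actual_science_minutes,
            "calibrations": calibration / actual_science_minutes,
            "alignment": alignment / actual_science_minutes,
            "problems": problems / actual_science_minutes,
            # Whatever is left over is accounting error or operational inefficiency.
            "not_accounted": (actual_science_minutes - tracked) / actual_science_minutes,
        }

    # Made-up numbers chosen to roughly reproduce the April row above.
    print(science_time_breakdown(shutter=4000, overhead=2700, calibration=270,
                                 alignment=720, problems=180,
                                 actual_science_minutes=9000))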

Details on nightly cloud cover, based on the TO's observations of the sky reported three times a night in the night report:

Fraction of the nights in each cloud-cover category:

Month   Clear   Mostly Clear   Partly Cloudy   Mostly Cloudy   Cloudy
April   0.50    0.10           0.23            0.10            0.07
May     0.52    0.10           0.23            0.10            0.07

Please note that the HET could be closed due to humidity, smoke or high dust count and still have a "Clear" statistic in the night report.

The following tables give a breakdown of all attempted visits as well as the category that each falls into.

Charged exposures
Number of Times   Shutter Open (hours)   Type
561               96.2                   A - Acceptable
62                12.9                   B - Acceptable but borderline conditions
132               19.1                   4 - Priority 4 visits (does not include 1/2 charge)
4                 0.0                    Q - Charged but PI error
0                 0.0                    C - Acceptable by RA but PI rejects

Uncharged exposures
Number of Times   Shutter Open (hours)   Type
1                 0.3                    I - Targets observed under otherwise idle conditions
11                3.3                    E - Rejected by RA for Equipment Failure
13                2.4                    H - Rejected for Human failure
25                4.7                    W - Rejected by RA for Weather
1                 0.2                    P - Rejected by PI and confirmed by RA
3                 0.7                    N - Rejected due to unknown cause

This gives a total of 11.6 hours of uncharged spectra, with an additional possible 12.9 hours of borderline (type B) spectra that may still be rejected.
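As a quick arithmetic check of those totals (the hours are simply copied from the tables above):

    # Sum of the uncharged-exposure hours from the table above.
    uncharged_hours = {
        "I": 0.3,  # idle-condition targets
        "E": 3.3,  # equipment failure
        "H": 2.4,  # human failure
        "W": 4.7,  # weather
        "P": 0.2,  # PI reject confirmed by RA
        "N": 0.7,  # unknown cause
    }
    borderline_hours = 12.9  # type B: charged, but may still be rejected

    print(round(sum(uncharged_hours.values()), 1))                     # 11.6
    print(round(sum(uncharged_hours.values()) + borderline_hours, 1))  # 24.5 in the worst case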

The following overhead statistics include slew, setup, readout, and refocus between exposures (if there are multiple exposures per visit). In the summary page for each program the average setup time is calculated. The table below gives the average setup time (overhead) for each instrument PER VISIT and the average, median, and maximum COMPLETED science exposure and visit lengths.

The "Exposure" is defined by when the CCD opens and closes. A "Visit" is the requested total CCD shutter open time during a track and might be made up of several "Exposures". "Visit" as defined here contains no overhead. To calculate one type of observing efficiency metric one might divide the "Visit" by the sum of "Visit" + "Overhead".

The average overhead per actual visit is the overhead for each acceptable priority 0-3 science target (not borderline, and with overheads > 4 minutes to avoid 2nd halves of exposures with unrealistically low overheads). This number reflects how quickly we can move from object to object on average for each instrument; however, this statistic tends to weight the overhead toward programs with a large number of targets, such as planet search programs.

The average overhead per requested visit is the total charged overhead per requested priority 0-3 visit, averaged per program. To get this value we average the average overhead for each program as presented in the program status web pages. The average overhead per visit can be inflated by extra overhead charged for PI mistakes (such as bad finding charts or no targets found at the sky detection limits) or for incomplete visits, e.g. 2 visits of 1800 s are done instead of 1 visit with a CRsplit of 2. The average overhead per visit can be deflated by the 15 minute cap applied to the HRS and MRS. This method tends to weight the overhead toward programs with few targets and poorly chosen requested visit lengths, i.e. very close to the track length.
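A simplified sketch of the difference between the two averages, treating attempted and requested visits as the same (which they are not in general) and using made-up (program, overhead) records rather than the actual htopx and night-report data:

    from collections import defaultdict
    from statistics import mean

    # Hypothetical records: (program_id, overhead_minutes) for acceptable
    # priority 0-3 visits.
    visits = [
        ("UT04-2-001", 9.5), ("UT04-2-001", 11.0),
        ("PSU04-2-014", 14.0), ("PSU04-2-014", 12.5), ("PSU04-2-014", 13.0),
        ("STA04-2-001", 21.0),
    ]

    # "Per actual visit": every visit is weighted equally, so programs with many
    # targets (e.g. planet searches) dominate the average.
    per_actual_visit = mean(o for _, o in visits if o > 4.0)  # drop unrealistically low overheads

    # "Per requested visit": average within each program first, then average the
    # program averages, so small programs with awkward visit lengths weigh more.
    by_program = defaultdict(list)
    for program, overhead in visits:
        by_program[program].append(overhead)
    per_requested_visit = mean(mean(v) for v in by_program.values())

    print(per_actual_visit, per_requested_visit)  # 13.5 vs ~14.8 for this toy data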

Instrument   Avg Overhead per     Avg Overhead per        Avg Exposure   Median Exposure   Max Exposure   Avg Visit   Median Visit   Max Visit
             Actual Visit (min)   Requested Visit (min)   (sec)          (sec)             (sec)          (sec)       (sec)          (sec)
LRS          13.0                 19.1                    708.7          600               2400           1209.2      600            6300
HRS          9.7                  10.3                    607.8          800               2700           800.0       900            3600
MRS          10.4                 10.3                    865.0          900               1200           1061.0      900            2100

NOTE: AS OF 2003-3 THE SETUP TIME FOR AN ATTEMPTED MRS OR HRS TARGET IS CAPPED AT 15 MINUTES. LRS VALUES WILL CHANGE WHEN A CAP FOR LRS OVERHEADS IS DETERMINED.

The overhead statistics can be shortened by multiple setups (each one counted as a separate visit) while on the same target, as is the case for planet search programs. They can be lengthened by multiple tracks that add up to a single htopx visit, as can happen for very long tracks where each attempt might only yield half a visit.

A way to reduce the overhead accumulated for programs with long exposure times is to add double the above overhead to the requested visit length and make sure that the total is shorter than the actual track length. This avoids the RA having to split requested visits between several different tracks.
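A sketch of that rule of thumb (the 13 minute figure in the example is just the LRS average from the table above):

    def padded_visit_fits_track(requested_visit_s, track_length_s, avg_overhead_min):
        """Rule of thumb: the requested visit plus twice the typical per-visit
        overhead should still fit inside a single track."""
        padding_s = 2.0 * avg_overhead_min * 60.0
        return requested_visit_s + padding_s < track_length_s

    # Example: an 1800 s LRS visit in a 3000 s track, assuming ~13 min of overhead.
    print(padded_visit_fits_track(1800.0, 3000.0, 13.0))  # False: the RA would have to split it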


Observing Programs Status

The following links give the summary for each institution and its programs. The resulting table gives, for each program, the total number of targets in the queue and the number completed, the CCD shutter open hours, the average overhead for that program, and the TAC allocated time. This is usually the best metric for judging completeness, but there are times when a PI will tell us that a target is "done" before the total number of visits is complete.


Institution Status

This is how each institution has allocated its time by priority.

Time Allocation by Institution (hours)
Institution   Priority 0    Priority 1     Priority 2     Priority 3     Priority 4
PSU           6.000 (3%)    32.170 (17%)   31.830 (17%)   26.450 (14%)   91.200 (48%)
UT            15.500 (8%)   33.000 (17%)   54.000 (27%)   47.000 (24%)   50.000 (25%)
Stanford      0.000         5.330 (22%)    10.000 (42%)   0.000 (0%)     8.660 (36%)
Munich        0.000 (0%)    7.000 (100%)   0.000 (0%)     0.000 (0%)     0.000 (0%)
Goetting      0.000 (0%)    10.000 (50%)   5.870 (30%)    4.000 (20%)    0.000 (0%)
NOAO          0.000         14.000 (25%)   4.000 (7%)     38.000 (68%)   0.000
SALT          0.000         0.000          0.000          0.000          0.000
DDT           0.000         0.000          0.000          0.000          0.000

The following is a summary of the Acceptable CCD shutter time for each institution based on our htopx database. It does not include any overhead and does not weight by priority (i.e. priority 4 visits are counted at full cost).

CCD shutter Open by Institution (hours)
Institution   Used     % of All
PSU           38.090   29.8
UT            68.009   53.2
Stanford      3.333    2.6
Munich        5.525    4.3
Goetting      12.922   10.1
NOAO          5.367    --
SALT          0.000    --
DDT           0.000    --

The following is a summary of the total charged time for each institution based on our htopx database (for shutter open) and the night reports (for overhead). It includes shutter open time (with priority four weighted by half) and overhead.
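A sketch of how the charge for a single acceptable visit is computed under that scheme, assuming (as the wording above suggests) that the half weighting applies to the shutter-open time only:

    def charged_hours(shutter_open_hours, overhead_hours, priority):
        """Charge for one acceptable visit: overhead plus shutter-open time,
        with priority 4 shutter time weighted by one half."""
        weight = 0.5 if priority == 4 else 1.0
        return weight * shutter_open_hours + overhead_hours

    # Example: a 0.5 h priority-4 visit with 10 minutes of overhead.
    print(charged_hours(0.5, 10.0 / 60.0, priority=4))  # ~0.42 h charged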

Time Charged by Institution (hours)
Institution   Used THIS PERIOD   % of All THIS PERIOD   -TOTAL TO DATE-   -% TO DATE-
PSU           55.43              29.5                   862.9             29.2
UT            92.97              49.4                   1675.6            56.7
Stanford      7.89               4.2                    199.8             6.8
Munich        10.91              5.8                    95.9              3.2
Goetting      20.83              11.1                   119.6             4.0
NOAO          N/A                --                     --                --
SALT          0.00               --                     --                --
DDT           0.00               --                     --                --

The original "TOTAL TO DATE" was found to be in error on Dec. 11 2003; it did not include the overhead time. The values given here in red are the corrected totals.
Total to Date starting from Oct 1999. Priority weighting began in the 2003-2 period.

In the current queue Munich has no more viable targets. This will bring Munich under specification by the end of the trimester.


Future Priorities

The following are the high priority targets for the rest of the period for each institution:

UT
Rank   Program       Targets and Constraints
1      UT04-2-015    2 WD1 targets; LRS g2, EE50 < 1.5, Vsky > 21.0
2      UT04-2-013    1 Toot; LRS, EE50 < 2.0, Vsky > 20.4
3      UT04-2-001    Draco 19219; HRS, EE50 < 1.8, Vsky > 20.6

PSU
Rank   Program        Targets and Constraints
1      PSU04-2-014    many M101 targets; MRS, EE50 < 2.5, Vsky > 20.5
2      PSU04-2-012    several targets; LRS g3, EE50 < 2.0, Vsky > 20.6

STA
Rank   Program        Targets and Constraints
1      STA04-2-001    J1314+5306; LRS, EE50 < 1.6, Vsky > 20.6
2      STA04-2-001    J1624+2748; LRS, EE50 < 2.0, Vsky > 20.0

M
Rank   Program      Targets and Constraints
1      M04-2-001    NGC4594; LRS, EE50 < 3.0, Vsky > 20.5

G04
Rank   Program      Targets and Constraints
1      G04-2-002    SBS1150+599; LRS, EE50 < 1.5, Vsky > 19.5


The following is a histogram of the current HET queue visits for the rest of the period. A line has been drawn at the expected number of hour-long visits that we hope to achieve in each hour bin.

The following histogram shows some of the extrema in observing conditions: good-seeing dark time, bright time, and bad-seeing dark time. A line has been drawn at the expected number of hour-long visits that we hope to achieve in each hour bin.

The following histogram shows the bright time priority 0-3 contribution from each partner. The dashed line is a rough estimate of the number of hour long visits one could complete during the remainder of the period.

The following histogram shows the dark time priority 0-3 contribution from each partner. The dashed line is a rough estimate of the number of hour long visits one could complete during the remainder of the period.
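For reference, a minimal matplotlib sketch of how such a histogram with an expected-completions line might be produced; the visit lists, the binning by RA, and the expected rate are all placeholders, not the actual queue data.

    import matplotlib.pyplot as plt
    import numpy as np

    # Placeholder data: one entry per remaining priority 0-3 queue visit, binned
    # here by RA (hours); the real plots may bin on a different quantity.
    rng = np.random.default_rng(0)
    dark_visits = rng.uniform(0.0, 24.0, 120)
    bright_visits = rng.uniform(0.0, 24.0, 80)

    expected_per_bin = 6  # rough number of hour-long visits completable per bin

    fig, ax = plt.subplots()
    ax.hist([dark_visits, bright_visits], bins=np.arange(0, 25, 1),
            stacked=True, label=["dark time", "bright time"])
    ax.axhline(expected_per_bin, linestyle="--", label="expected completions")
    ax.set_xlabel("RA (hours)")
    ax.set_ylabel("Number of queue visits")
    ax.legend()
    plt.show()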

From the above plots I have determined that:

  • Future IC and Engineering: Some engineering time will be taken in the next few months: most likely 3% of the dark time and 7% of the bright time.

    TAC Response

    Roger Romani wrote:
    > I would like to see again a break down of the pre-image histogram into
    > FIF acq and LRS pre-image. I still get the impression from the night reports
    > that the LRS pre-image is consistently larger; I think this should be
    > understood and, if appropriate, fixed.

    The LRS on average does give poorer images than the FIF acq. There are a couple of good reasons for this:

    Even when we have made attempts at getting the best possible LRS image quality, I still do not seem to reach the FIF acq image quality. This should be tested more extensively in a controlled fashion, not by using image quality numbers from pre-science images.

    > Also the TAC report as it stands has no section for Telescope issues.
    > We might want these at the top. For example, recently there seems to have
    > been a large number of hexapod failures. If these are continuing, this could
    > affect the prioritization strategy.

    This section has been added above.

    Ulrich Hopp wrote:
    >I have a specific proposal. I would prefer if the RA's can plot the
    >mesaured image qualities into TWO instead of three figures (e.g.:
    >pre-image quality of this period compared to DIMM in one figure and
    >compared to previous year into another one). This would make the
    >comparison easier, faster and more reliable (I've to scroll the screen up
    >and down to see all three graph's at a time...).
    >

    We will make an attempt to do this for next month's trimester report, since it may require some significant reworking of code written by Jeff Mader, who is no longer employed here.

    >I am woundering slightly about the still high overheads. During the
    >last board meetings as well as during my last visit to HET, I got the
    >impression that a severe fraction of the overhaeds results from missing
    >software - which in principle is straighforward to do but was not done due
    >to missing person power. My understanding was further that now with the
    >newly hired persons, Jin Fowler should have the time to prepare all the
    >scripts he had in mind but could not do. As the overheads remain at the
    >previous levels I am asking myself what is going on...

    The higher overheads will be addressed at the board meeting. Essentially, the percentage of time spent with the CCD shutter open has gone up, but the average visit has decreased from 1200 sec to 900 sec, which forces the percentage of time spent setting up on targets to rise even though we have made small improvements in the setup time.
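    To illustrate the arithmetic, here is a small sketch using the ~13 minute LRS per-visit overhead from the table above as a stand-in for a roughly fixed setup cost:

        # With a roughly fixed per-visit overhead, shorter visits push a larger
        # fraction of the science time into setup even if the setup itself improves.
        overhead_s = 13.0 * 60.0  # ~13 min per visit (LRS average from the table above)

        for visit_s in (1200.0, 900.0):
            frac = overhead_s / (visit_s + overhead_s)
            print(f"{visit_s:.0f} s visits -> {frac:.0%} of the time goes to setup")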