Hi, Peter,

Our DAQ report is below.  Would you mind forwarding it to the collaboration?

Thanks,

-- Fred

-- Fred Gray / Visiting Postdoctoral Researcher                         --
-- Department of Physics / University of California, Berkeley           --
-- fegray@socrates.berkeley.edu / phone 510-642-4057 / fax 510-642-9811 --

Dear MuCap Collaboration,

We are sending this e-mail to provide a brief update on the current
status of the MuCap DAQ.  We have spent the last one and a half months at PSI
improving and overhauling the DAQ where necessary.  Our efforts were
largely successful, and we feel that the DAQ is now on a firm foundation for
future experimental work and development.

Let me begin by listing the main components of the current DAQ:
     1. pc3608 (PC) -- MIDAS event builder and logger
     2. psfe90 (PPC in crate1) -- MIDAS front-end for 3 TDC400s
     3. psfe91 (PPC in crate2) -- MIDAS front-end for 4 TDC400s
     4. psfe92 (PPC in crate3) -- MIDAS front-end for CAENs, COMPs, FADCs
     5. mulandaq (PC in crate4) -- MIDAS front-end for WFDs
Below are the most important system features which merit description...

-------------------------------
Features of the present system:
-------------------------------
1. As noted above, the DAQ is now a MIDAS-based system.  The 3 PPCs and
mulandaq function as MIDAS front-ends, sending their data to pc3608's MIDAS
server.  There, the MIDAS event builder assembles the corresponding crate
events into a single event which is written to disk by the MIDAS logger. 
As those of you already familiar with MIDAS know, the MIDAS interface greatly
facilitates the DAQ operation.  All pertinent run information is stored in the
MIDAS online database (ODB)--for instance, the front-ends retrieve their
configuration settings before each run.  The ODB is also written to disk with 
every logged run, thereby providing a record in the data stream of the DAQ
configuration at the time of data-taking.  The ODB offers greater versatility
as well: crates can be enabled and disabled in the ODB, and reconfigurations
of modules can be accommodated with relative ease since the module arrangement
is no longer hardcoded into the front-end programs.
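
To make the ODB-driven configuration concrete, here is a minimal sketch of how
a front-end might pull one of its settings from the ODB at begin-of-run.  The
key path and variable names are invented for illustration, and the exact MIDAS
call signatures may differ slightly between releases:

#include "midas.h"

/* begin_of_run() is the standard MIDAS front-end hook called at run start. */
INT begin_of_run(INT run_number, char *error)
{
   HNDLE hDB;
   INT   enabled = 1;                 /* default used if the key is absent */
   INT   size    = sizeof(enabled);

   cm_get_experiment_database(&hDB, NULL);

   /* hypothetical key -- the real ODB layout may differ */
   db_get_value(hDB, 0, "/Equipment/Crate1/Settings/Enabled",
                &enabled, &size, TID_INT, TRUE);

   if (!enabled)
      cm_msg(MINFO, "begin_of_run", "crate 1 disabled in ODB, skipping readout");

   return SUCCESS;
}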

2. All 3 crate PPCs now run Linux rather than VxWorks.  This is a great relief,
as it eliminates our dependency on the SLS license server which was previously
used to compile the VxWorks front-end executables.  It has also simplified
the development process, since the usual Linux tools can be used on both
the front- and back-end systems.

3. The PVIC chain which connects the DAQ computers has been supplanted by a
Gigabit Ethernet data transfer system.  (More on this later.)

4. The "double-buffered" DAQ mode is functional, wherein the TDC400s take data 
into one buffer while the other is simultaneously read out.  This drastically
reduces the deadtime during data-taking.  Unfortunately, it complicates the 
readout of the CAENs and COMPs, since they do *not* have the TDC400 
double-buffer capability.  However, data rates into the crate3 modules should 
be low enough that we can keep up with the data and read it out as fast as it 
arrives.  This active readout capability has already been implemented for the 
CAENs, where we have managed to successfully keep up with about 250 kHz of
input signals.
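
For anyone unfamiliar with the scheme, the toy program below illustrates the
ping-pong idea in plain C.  The fill and drain routines are stand-ins for the
real VME acquisition and readout, not actual driver code:

#include <stdio.h>

#define BUF_WORDS 8

static unsigned int buf[2][BUF_WORDS];   /* the two TDC400-style banks */

static void fill(unsigned int *b, unsigned int seed)
{                                        /* stands in for data-taking */
   int i;
   for (i = 0; i < BUF_WORDS; i++)
      b[i] = seed + i;
}

static void drain(const unsigned int *b)
{                                        /* stands in for the VME readout */
   int i;
   for (i = 0; i < BUF_WORDS; i++)
      printf("%u ", b[i]);
   printf("\n");
}

int main(void)
{
   int active = 0;                       /* bank currently taking data */
   int full;
   unsigned int cycle;

   for (cycle = 0; cycle < 4; cycle++) {
      fill(buf[active], cycle * 100);    /* acquisition into the active bank */
      full   = active;
      active = 1 - active;               /* swap banks: new data immediately
                                            go to the other bank */
      drain(buf[full]);                  /* read out the full bank (in the real
                                            system this overlaps with
                                            data-taking into the other bank) */
   }
   return 0;
}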

5. All of the barrack PCs are now running the standard PSI Linux distribution,
which is based on Red Hat Linux 7.3. 

6. The WFDs and FADCs are integrated into the system for the first time.  The
WFDs can be made to run synchronously or asynchronously with the other crates
(the asynchronous mode is necessary if we wish to minimize the DAQ deadtime; it
allows the WFDs to take part in an event if they are ready; otherwise the
rest of the DAQ continues to operate while the WFDs are read out).  Currently,
we are simply triggering the FADCs on every event, and we have no FADC analysis
software in place.  But the FADCs are participating, and their data have been 
incorporated into the data stream.
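
As a cartoon of the asynchronous idea (not of the actual event-builder logic),
the toy loop below ships every event regardless of whether the WFD fragment
has arrived, so nothing else waits on the WFD readout; the readiness test is
faked with a random number:

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
   int event;

   srand(12345);                          /* fixed seed, reproducible toy */
   for (event = 0; event < 5; event++) {
      int wfd_ready = rand() % 2;         /* stand-in for "WFD readout done" */

      if (wfd_ready)
         printf("event %d: crates 1-3 plus WFD fragment\n", event);
      else
         printf("event %d: crates 1-3 only, WFDs still reading out\n", event);
   }
   return 0;
}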

7. An "automated" DAQ startup is in place.  Compared with the procedure used
in the previous run, it is very, very simple to get MIDAS running and to take 
data.  It can all be accomplished through the MIDAS Web interface without any
typing.
-------------------------------

The features listed above form the essential backbone of the new and improved 
DAQ.  For the sake of completeness, it is worth mentioning the problems and 
tasks which remain...

-------------------------------
Remaining tasks:
-------------------------------
1. PVICs vs. GB ethernet: The PVICs have long proved themselves troublesome.  
We had hoped to make throughput measurements on each system, and compare their 
performance.  However, the PVICs were so unreliable that we were never able to
operate the DAQ at significant data rates for more than a few seconds before 
the system would inexplicably crash, freezing computers and necessitating 
lengthy reboots.  The GB ethernet, on the other hand, has so far proved to be 
robust and reliable.  From benchmark tests, it appears that the GB ethernet
should be capable of meeting all of our data transfer needs.  The GB system is 
currently in place, and the PVIC cards have been returned to CES in Geneva for 
upgrades.  It may still prove best, though, to implement a "hybrid" system of
sorts--that depends ultimately on the performance we can get from the GB
ethernet, and there are several "tweaks" left to explore.  With the MVME2600
boards used in crates 1 and 2, we were able to
transfer up to about 40 MB/s per crate using a simple benchmark program.  
However, it is not yet clear what overall rate will be achieved when network
transfers are combined with VME access using the MIDAS software.
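
For reference, a throughput test of the kind quoted above can be as simple as
the sketch below (this is not our actual benchmark program).  It streams
fixed-size blocks over a TCP connection and reports the average rate; the
address, port, and block count are placeholders, and a simple receiver (e.g.
netcat writing to /dev/null) must be listening on the far end:

#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/time.h>
#include <unistd.h>

#define BLOCK   (1 << 20)                /* 1 MB per block */
#define NBLOCKS 400                      /* ~400 MB total  */

static char block[BLOCK];

int main(void)
{
   struct sockaddr_in addr;
   struct timeval t0, t1;
   double sec;
   int i, s;

   s = socket(AF_INET, SOCK_STREAM, 0);

   memset(&addr, 0, sizeof(addr));
   addr.sin_family = AF_INET;
   addr.sin_port   = htons(5000);                      /* placeholder port    */
   inet_pton(AF_INET, "192.168.1.1", &addr.sin_addr);  /* placeholder address */

   if (connect(s, (struct sockaddr *) &addr, sizeof(addr)) < 0) {
      perror("connect");
      return 1;
   }

   gettimeofday(&t0, NULL);
   for (i = 0; i < NBLOCKS; i++) {
      ssize_t sent = 0, n;
      while (sent < BLOCK) {             /* handle short writes */
         n = write(s, block + sent, BLOCK - sent);
         if (n <= 0) { perror("write"); return 1; }
         sent += n;
      }
   }
   gettimeofday(&t1, NULL);

   sec = (t1.tv_sec - t0.tv_sec) + (t1.tv_usec - t0.tv_usec) / 1e6;
   printf("%.1f MB/s\n", (double) NBLOCKS * BLOCK / (1 << 20) / sec);

   close(s);
   return 0;
}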

2. Optimization of VME transfers from TDC400s in crates 1 and 2.  According to
Claude, transfer rates from the TDC400s are limited to approximately 15 MB/s
per crate.  Although we have reproduced this rate with simple test programs,
it has not yet been seen in the context of the DAQ system.

3. The CAENs may still be displaying strange behavior, and we are no closer to 
understanding it now than we were before.

4. There is still some information which needs to be incorporated into the 
ODB--for example, the detector/module channel correspondence that has 
previously been written into a separate "wire mapping" file.

5. The compressors have not been tested in the current DAQ setup.  This is not
any real cause for concern, though, since the software which controlled their 
operation during run 6 is basically the same.  The difficult aspect of the 
COMP operation lies in timing their readout: the COMP data cannot be 
transferred to the barrack during the measuring period, since this introduces
wire chamber noise.  Confining their readout to the time outside the measuring
period without generating additional DAQ deadtime will be tricky.

6. Implementation of an electronic logbook.  Actually, the latest version of
Stefan Ritt's online logbook software is installed on pc3608, so this is now
more of a social question than a technical one; as a collaboration, we need 
to develop a protocol for using it.

7. Analysis software, both online and off.  We have written a program which
converts the MIDAS data structure into the old data structure, which can then
be analyzed by the current version of mu.  Eventually, the analysis software will
be modified to read MIDAS files directly.  At the very least, we need to 
develop the capability of looking at TPC and FADC data for the upcoming runs.

8. Compression of TDC400 data.  In order to reduce the rate of TDC400 data
from a 20-30 MB/s firehose to something more manageable, compression will 
be required.  Initially, we plan to implement a lossless algorithm that, we 
estimate, should reduce the rate by a factor of 3 to 5 while retaining the
same information as the raw TDC400 data.  In the longer term, we will continue
the development of an online track-fitting program that will reduce the 
rate significantly more, but whose systematic biases must be carefully
evaluated.
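
As an illustration of what a lossless scheme can look like (not necessarily
the algorithm we will use), the sketch below replaces each run of empty data
words with a zero marker and a count; since real data words are non-zero in
this invented format, the original block can be reconstructed exactly:

#include <stdio.h>
#include <stdint.h>

/* Encode: each run of zero words becomes the pair (0, run length); non-zero
 * words are copied through unchanged.  Returns the number of output words.
 * Decoding simply expands each (0, n) pair back into n zero words. */
size_t zero_suppress(const uint32_t *in, size_t n, uint32_t *out)
{
   size_t i = 0, j = 0;

   while (i < n) {
      if (in[i] == 0) {
         uint32_t run = 0;
         while (i < n && in[i] == 0) { run++; i++; }
         out[j++] = 0;                   /* marker: a run of zeros follows */
         out[j++] = run;                 /* length of the run              */
      } else {
         out[j++] = in[i++];             /* data word copied verbatim      */
      }
   }
   return j;
}

int main(void)
{
   uint32_t raw[16] = { 0, 0, 0, 0, 0x1a2b, 0, 0, 0,
                        0, 0, 0, 0x3c4d, 0, 0, 0, 0 };
   uint32_t packed[32];                  /* worst case: ~2x expansion */
   size_t m;

   m = zero_suppress(raw, 16, packed);
   printf("16 raw words -> %lu packed words\n", (unsigned long) m);
   return 0;
}

Note that such a scheme never loses information but can slightly expand
pathological data, so the actual reduction factor has to be measured on real
TDC400 output.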

9. General bulletproofing of the DAQ: So far the DAQ has only been tested
with artificial data (pulse generators) and cosmic data from the eSCs.
Testing and commissioning of the detector elements will likely reveal 
numerous bugs and problems.
-------------------------------

Although our "To Do" list is longer than our "Done" list, we feel
that the DAQ is in great shape.  Its operation is simpler, faster, and more 
reliable than ever; future refinements and improvements will be easier to 
implement and test.

We will also soon compose a list of DAQ operation instructions, to be posted 
on the MuCap web site.

-- Tom Banks and Fred Gray