WR+FT+TS Alignment #93

Open
tomeichlersmith opened this issue May 11, 2022 · 7 comments
Comments

tomeichlersmith (Member) commented May 11, 2022

Use the reformat infrastructure to develop alignment for these "fast" subsystems.

| Subsystem | Timestamp Units |
| --- | --- |
| WR | Time since the Unix epoch, down to below a ns, with the ability to determine when spills begin |
| FT | Time since the Unix epoch, in 8 ns ticks; no storage of when spills begin |
| TS | Time since spill, in 8 ns ticks; no storage of when spills begin |

tomeichlersmith (Member Author)

Using Erik's correlation script as a starting point.

The main issue I'm encountering is determining which spill a TS event is a part of. The correlation script pulls the WR and TS timestamps into memory and then performs an alignment: the first O(10) WR events are skipped, then the TS timestamps are compared to the WR timestamps and assigned to whichever spills have the highest number of matches. Spills in TS are separated based on a deprecated flag written to the raw-data txt file.

I've tried two methods for separating TS timestamps.

  1. If the current time since spill is less than the previous one, start a new spill.
  2. If the current time since spill is zero, start a new spill.

For the WR, I simply check if the "channel" being sent in the event packet is the "Start of Spill" channel. If it is, I start a new spill. In the correlation script, Erik checks that the time difference between starts of spills is at least 5s to avoid the occasional happenstance where two spill signals are sent at once.

Testing these methods on the raw data from Run 203, I get widely varying spill counts.

| Subsystem | Method | Events | Spills |
| --- | --- | --- | --- |
| WR | | 152778 | 62 |
| TS | 1 | 28545 | 594 |
| TS | 2 | 28545 | 348 |

The elog mentions that Run 203 was used for testing the monitoring and calibration, so my next step is to try another run: Run 205.

| Subsystem | Method | Events | Spills |
| --- | --- | --- | --- |
| WR | | 702789 | 270 |
| TS | 1 | 131689 | 2705 |
| TS | 2 | 131689 | 1688 |

I use the decode_2fibers_to_RAW_fromBin.py script to convert the eudaq data file into a more easily understood format.

What's puzzling to me is that the WR has more events but fewer spills. If it had more of both, that could easily be handled by skipping spills that only appear in the WR.

awhitbeck (Contributor)

I think method one is the right choice. It looks like from the elog that the Cherenkovs were used in the trigger for TS, so I think you should expect fewer events for TS. I can't say why the WR seems to be seeing 1/10 of the spills, though. The rate of events for TS (using method 1) is only ~50 events per spill, which seems too low to me. Where is the code that extracts the timestamp from the data? If it is truncating the most significant bits from the TS timestamp, you would see high spill counts.

tomeichlersmith (Member Author) commented May 12, 2022

I shift four bytes from the event into a 32-bit timestamp:

```cpp
// first 8 bytes are the two deprecated timestamps (UTC seconds and UTC clock ticks)
static const std::size_t TIMESINCESPILL_POS = 8;
// next 4 bytes are the time since spill
static const std::size_t TIMESINCESPILL_LEN_BYTES = 4;
uint32_t timeSpill = 0;
for (std::size_t iW = 0; iW < TIMESINCESPILL_LEN_BYTES; iW++) {
  std::size_t pos = TIMESINCESPILL_POS + iW;
  // shift in one byte at a time, most significant byte first
  timeSpill |= (uint32_t(buff.at(pos)) << (TIMESINCESPILL_LEN_BYTES - 1 - iW) * 8);
}
reformat_log(debug) << "time since spill " << timeSpill;
```

I tried the opposite byte order, but that produced pseudo-random timestamps, so I went with this one.

bryngemark

Would it help to use the old format with the UTC timestamp still in the header, to see if your method of identifying when to increment the spill count makes sense? I think O(1 s) precision would be enough to tell you that, right?

tomeichlersmith (Member Author) commented May 12, 2022

Yes, the spills were at minimum 30 s apart, so that would be wonderful.
Edit: Do you have a run number I can look at? I have a copy of everything that was in /u1/ldmx/data/ and I can find other stuff at SLAC.

bryngemark commented May 12, 2022

OK, so what do you need, a .txt file or .root? I had just implemented putting the UTC timestamp into the ldmx-sw format when it was taken out, so I might need to rerun. Otherwise you can get any older .txt file from
/nfs/slac/g/ldmx/TS-data/test_stand_data/raw/
from, say, some time April 2-6.

tomeichlersmith (Member Author)

I can make do with a text file easy enough :) thank you
