Single Point Velocity Meters on Buoys


I am looking at some VELPT data from the CE Array. When processing the data into the ENU output format, are the pitch and roll of the instrument included in this calculation?



The OOI algorithms for the VELPT only correct for magnetic declination and then scale the measurements from mm/s to m/s (see the link below for the code used to correct the eastward velocity for magnetic declination). The data is converted from beam to earth coordinates (ENU) onboard the sensor. We collect the data as a 3-minute ensemble average (sampled at 1 Hz) every 15 minutes. Per the vendor documentation, the transformation from beam to ENU does use the pitch and roll. Also per the vendor documentation, tilts less than 20 degrees are considered acceptable. Between 20 and 30 degrees, the velocities would be considered suspect, and any value where the tilt exceeds 30 degrees should be discarded.
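To illustrate what that magnetic declination correction and unit scaling amount to, here is a minimal sketch. The function name and sign convention (declination positive east) are my own, not the OOI ion-functions implementation:

```python
import numpy as np

def magnetic_correction(u_mag, v_mag, declination):
    """Rotate velocities from magnetic to true ENU and scale to m/s.

    u_mag, v_mag : eastward/northward velocity relative to magnetic
                   north, in mm/s (as reported by the sensor)
    declination  : magnetic declination in degrees, positive east

    Illustrative sketch only -- names and conventions are assumptions.
    """
    theta = np.radians(declination)
    # standard rotation from magnetic to true coordinates
    u_true = u_mag * np.cos(theta) + v_mag * np.sin(theta)
    v_true = -u_mag * np.sin(theta) + v_mag * np.cos(theta)
    # scale from mm/s to m/s
    return u_true / 1000.0, v_true / 1000.0
```

For example, a 1000 mm/s flow toward magnetic north with a 90-degree easterly declination comes out as 1 m/s toward true east.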


Thanks, Chris! I have some follow-up questions. How do I calculate tilt from pitch and roll? I was looking through their manual but didn’t find an equation; they just say to use their software to check that tilt is within reasonable limits during data processing. Is this done in the OOI QA/QC? I’ve attached a plot of the data that I’ve pulled (all with an 8-day moving mean)

for site = ‘CE02SHSM’, node = ‘BUOY’, instrument_class = ‘VELPT’, method = ‘RecoveredInst’. Ultimately I’m just looking to get current speeds over this time period, but I don’t know how to QA/QC that data, and I am confused by differences that correspond with deployments. Any insight you have would be really helpful. Thanks!

Hello Kristin,

I’ve been looking into the vendor documentation, and I cannot find anything on calculating tilt. That being said, in my usual processing of this data I use both pitch and roll in an OR conditional rather than tilt:

if abs(pitch) > 20 or abs(roll) > 20:
    data = suspect

if abs(pitch) > 30 or abs(roll) > 30:
    data = bad
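A vectorized version of that check might look like the sketch below. This assumes pitch and roll are NumPy arrays already converted to degrees, and the flag values (1 = good, 3 = suspect, 4 = bad) are my own QARTOD-style choice, not an OOI convention:

```python
import numpy as np

def tilt_flags(pitch, roll):
    """Flag samples by pitch/roll against the vendor tilt limits.

    pitch, roll : tilt values in degrees (note: the instrument
                  reports decidegrees, so divide the raw values
                  by 10 before calling this)

    Returns an integer array: 1 = good, 3 = suspect, 4 = bad
    (QARTOD-style flags, chosen here for illustration).
    """
    pitch = np.abs(np.asarray(pitch))
    roll = np.abs(np.asarray(roll))
    flags = np.ones_like(pitch, dtype=int)
    flags[(pitch > 20) | (roll > 20)] = 3  # suspect
    flags[(pitch > 30) | (roll > 30)] = 4  # bad, overrides suspect
    return flags
```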

In your processing, make sure you are dividing the pitch and roll values by 10; the instrument reports those values in decidegrees. I’ve looked at the data and recreated your plot from above. I can’t really explain the differences in the pitch and roll values from deployment to deployment, but they are well within the vendor limits. For your QA/QC, I’d use a test like the pseudocode above to remove any data where the pitch or roll exceeds the vendor-specified limits.

You could also tweak the Matlab code you are using to grab the raw amplitude data for the three beams (amplitude_beam1, amplitude_beam2, amplitude_beam3). Those values are reported in counts, and they should be very close to each other in both magnitude and shape. If one of the three beams does not align with the others (either significantly different in magnitude or disagreeing in shape over time), that beam is likely blocked or otherwise damaged. There are a few other tests you could use, but looking at the pitch and roll and the beam amplitudes will be the most useful.
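The beam-agreement idea can be sketched as below. The threshold of 20 counts is purely illustrative, not a vendor-specified value:

```python
import numpy as np

def beam_amplitude_check(amp1, amp2, amp3, max_spread=20):
    """Flag samples where the three beam amplitudes disagree.

    amp1..amp3 : per-beam amplitude time series, in counts
    max_spread : illustrative spread threshold in counts
                 (an assumption, not a vendor limit)

    Returns a boolean mask, True where the beams disagree and the
    sample is suspect.
    """
    amps = np.vstack([amp1, amp2, amp3])
    # spread across the three beams at each time step
    spread = amps.max(axis=0) - amps.min(axis=0)
    return spread > max_spread
```

A check like this catches the "significantly different" case; catching a diverging shape over time would take something more like a rolling correlation between beams.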

All that being said, for this data set, the current velocities from deployment to deployment look very good, are quite reasonable for this site, and show good deployment-to-deployment overlap. I’ve created two plots of the data to show you what I am seeing, using yours as a template. In the first, I’m looking at all the data from deployments 6-9, resampled to 1-hour intervals.

The biggest offsets in pitch and roll occur between deployments 8 and 9, so I zoomed in on those two and re-plotted the data. This plot really highlights the good overlap in the current speeds between the two deployments (top and bottom panels) in spite of the offsets in pitch and roll between the deployments.

Question: how are you averaging the data? Are you running the 8-day average per deployment, or are you taking the entire record from start to finish?
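If the per-deployment route helps, here is a pandas sketch (assuming a DataFrame with a datetime index, a ‘deployment’ column, and a ‘speed’ column; the names are my own):

```python
import pandas as pd

def smooth_per_deployment(df, window="8D"):
    """8-day rolling mean of current speed, computed per deployment.

    Grouping by deployment keeps each deployment's rolling window
    from bleeding across the gap between recoveries. Column names
    are illustrative assumptions.
    """
    return (df.groupby("deployment")["speed"]
              .transform(lambda s: s.rolling(window).mean()))
```

Averaging the entire record start-to-finish instead would let the window span the recovery gaps, which can smear the deployment-to-deployment offsets you are seeing.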