Sensor axis alignment from DTS #90765
Conversation
Force-pushed from 8c05551 to d2111a7 (compare)
Also, the approach we took appears to match how actual devices handle axis configuration/mapping internally. For instance, see page 26 of the BNO055 datasheet: it is essentially identical to our approach for describing this in DTS, which is also a very good reason why this should at the very least be an option at the driver level (even if a higher-level option like @yperess mentioned is also chosen/used).
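For reference, the BNO055's AXIS_MAP_CONFIG/AXIS_MAP_SIGN registers express exactly this kind of remap in hardware. A devicetree description in the same spirit might look like the sketch below; the property names (`axis-map`, `axis-sign`) and the node are purely illustrative, not the binding this PR defines.

```dts
/* Illustrative only: the compatible and property names are hypothetical. */
imu0: imu@68 {
	compatible = "vendor,imu";
	reg = <0x68>;
	/* Output X <= sensor Y, output Y <= sensor X, output Z <= -sensor Z,
	 * mirroring a BNO055-style AXIS_MAP_CONFIG / AXIS_MAP_SIGN remap. */
	axis-map = <1 0 2>;
	axis-sign = <1 1 (-1)>;
};
```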
I agree that if the driver allows it directly we can/should support it. I'm sure everyone would appreciate not running through the rotation-matrix math on the MCU. My concern is that if drivers which don't do this on the IC all try to support it, we'll get a lot of bloat. For varying dimensionality I don't think it should be that difficult to have the implementation of
To your point, I think that's where we could "strongly suggest" that lower-rate sensors use what you are suggesting. But for higher-rate sensors, especially at multi-kHz ODR, I think we need something like this. Arguably we could have a default policy: if a sensor is below 400 Hz ODR, it should use a rotation-matrix approach like you mention (or for legacy drivers/sensors that nobody wants to update); if it's above 400 Hz ODR, it's up to the contributor to decide whether to implement the method I have here for performance reasons or to use what you suggest.
LGTM, I prefer this approach over using a rotation matrix since sign inversion and axis swapping are much more efficient computationally.
I'm OK with this. The Bosch datasheet above, showing a vendor also doing axis alignment with sign corrections/mappings, suggests this isn't uncommon and could be useful beyond Bosch or this one sensor.
We do need some way to help align the axes of various sensors; often this is just an axis correction like this one, without actually needing to apply a rotation to bring the sensor frame into the desired frame.
I'm also OK with this. Curious what @MaureenHelm thinks, though.
I think it's reasonable to add a devicetree property that specifies a transformation from the sensor frame to a reference frame, but I don't think we should apply that transformation at this early stage in the signal chain because it eliminates the possibility for an application to access raw, unmanipulated data from the sensor device. The motivation for providing a decoder function was to allow an application to choose if/when to manipulate raw sensor data into standard units, and I think this would be a better place to add support for reference frame transformations.
Interesting, so apply the transform (axis map/sign inversion) as an opt-in for the decoder?
Thank you for your review @MaureenHelm. I'm not sure I completely understand what changes you are suggesting. I wanted to preserve "raw data by default": if no rotation is explicitly provided in DTS, the output is the individual sensor frame. To me, the "tree" in DTS implies that sensors are placed in a collective "device frame", meaning their "real raw output" should make sense in a collective, device-level frame (notably all unscaled). I agree that if this were to scale or alter the data beyond simple sign inversion or axis swapping, it should 100% live at a different location/later stage. But in my opinion this does not "alter" the raw output; it merely aligns it to the context of the device and subsystem to begin with. That said, if this view isn't shared, I am more than glad to make whatever changes are needed to get this in, with the one caveat that the goal of this approach is to be compute-resource/latency sensitive, so any solution, whether in the Zephyr main tree or out of it, would need to preserve that motivation/constraint. Would you mind describing a little more how you would like to see something like this done instead?
A bit of real-world context here. What we see in various real robotics applications is that a cluster of components making up an IMU (inertial measurement unit) often gets physically laid out on a board in an orientation meant to be compatible with the rest of the system, but with one or more parts oriented differently from a previous or compatible version. This can be seen in drone flight controllers. They may follow a published standard like Pixhawk V6X-RT: there is a cluster of accel/mag/gyro on the main board, and a second set connected via an offboard module. The orientation of the offboard module, or of components on it, inevitably changes between vendors and revisions for a variety of reasons, for example the way the flex cable orients the devices upside down on one board but not the other. This is NOT going to change in practice; it is simply expected that fixing an orientation will happen in software for that specific design. For this type of scenario it makes sense to normalize back to whatever convention the drone flight controller uses for its frame of reference. These "device frame" IMUs generally require pushing the limits on data rate.
Introduces the ability to set static axis alignment of a sensor from DT params. Signed-off-by: Benjamin Perseghetti <[email protected]>
Allow setting fifo-highres from DT, also unify the naming of all registers and expand to a more complete list. Signed-off-by: Benjamin Perseghetti <[email protected]>
Adding asserts to RTIO stream to avoid hardfaults. Signed-off-by: Benjamin Perseghetti <[email protected]>
Force-pushed from ed7e4c1 to 49920d8 (compare)
Discussion in the Sensor WG clarified that the transformation is being applied at the decoder stage, which resolves my concern.
Adds sensor axis alignment that allows you to choose the desired output reference frame.