vrmath
 Tools for VR-related math¶
Toolbox of classes and functions for performing VR-related math. These are used to describe and compute the spatial configuration of objects in a VR scene for the purpose of rendering and interaction.
Be aware that this module is currently in an early phase of development and may be incomplete and buggy. Please test it out and report any bugs encountered.
Overview¶
Classes¶
RigidBodyPose ([pos, ori]) 
Class representing a rigid body pose. 
BoundingBox ([extents]) 
Class for constructing and representing 3D bounding boxes. 
Functions¶
calcEyePoses (RigidBodyPose headPose, float iod) 
Compute the poses of the viewer’s eyes given the tracked head position. 
Details¶
Classes¶

class
psychxr.tools.vrmath.
RigidBodyPose
(pos=(0., 0., 0.), ori=(0., 0., 0., 1.))¶ Class representing a rigid body pose.
This class is an abstract representation of a rigid body pose, where the position of the body in a scene is represented by a vector/coordinate and the orientation by a quaternion. There are many class methods and properties provided for accessing, manipulating, and interacting with poses. Rigid body poses assume a right-handed coordinate system (-Z is forward and +Y is up).
Poses can be manipulated using the *, ~, and *= operators. One pose can be transformed by another by multiplying them using the * operator:

newPose = pose1 * pose2

The above code returns pose2 transformed by pose1, putting pose2 into the reference frame of pose1. Using the in-place multiplication operator *=, you can transform a pose into another reference frame without making a copy. One can get the inverse of a pose by using the ~ operator:

poseInv = ~pose
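The pose algebra above can be sketched in plain NumPy. The following is a hedged illustration of the standard rigid body math (an assumption about what * and ~ compute, not the library's actual implementation); quaternions are in [x, y, z, w] order, as elsewhere in this module:

```python
import numpy as np

def quat_mul(q1, q2):
    """Hamilton product of two quaternions in [x, y, z, w] order."""
    x1, y1, z1, w1 = q1
    x2, y2, z2, w2 = q2
    return np.array([
        w1 * x2 + x1 * w2 + y1 * z2 - z1 * y2,
        w1 * y2 - x1 * z2 + y1 * w2 + z1 * x2,
        w1 * z2 + x1 * y2 - y1 * x2 + z1 * w2,
        w1 * w2 - x1 * x2 - y1 * y2 - z1 * z2])

def quat_rotate(q, v):
    """Rotate vector `v` by unit quaternion `q`."""
    qv = np.append(np.asarray(v, dtype=float), 0.0)  # pure quaternion
    qc = np.array([-q[0], -q[1], -q[2], q[3]])       # conjugate
    return quat_mul(quat_mul(q, qv), qc)[:3]

def compose(pos1, ori1, pos2, ori2):
    """What `pose1 * pose2` computes: pose2 in pose1's reference frame."""
    return quat_rotate(ori1, pos2) + np.asarray(pos1), quat_mul(ori1, ori2)
```

Here compose mirrors pose transformation: rotate the child position into the parent's frame, add the parent position, and multiply the quaternions.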
Poses can be converted to 4x4 transformation matrices with getModelMatrix, getViewMatrix, and getNormalMatrix. One can use these matrices when rendering to transform the vertices and normals of a model associated with the pose by passing the matrices to OpenGL. The ctypes property eliminates the need to copy data by providing pointers to data stored by instances of this class. This is useful for some Python OpenGL libraries which require matrices to be provided as pointers.
Parameters:  pos (array_like) – Initial position vector (x, y, z).
 ori (array_like) – Initial orientation quaternion (x, y, z, w).
Notes
 This class is intended to be a drop-in replacement for the LibOVRPose class, sharing many of the same attributes and methods. However, this class does not require the LibOVR SDK, making it suitable for use with other VR drivers.

apply
(self, v, ndarray out=None)¶ Apply a transform to a position vector. This is similar to transform.
Parameters:  v (array_like) – Vector to transform [x, y, z].
 out (ndarray, optional) – Optional output array. Must have dtype=float32 and shape=(3,).
Returns: Vector transformed by the pose’s position and orientation.
Return type: ndarray

at
¶ Forward vector of this pose (-Z is forward) (read-only).
Type: ndarray

bounds
¶ Bounding object associated with this pose.

copy
(self)¶ Create an independent copy of this object.
Returns: Copy of this pose. Return type: RigidBodyPose

ctypes
¶ Pointers to matrix data.
This attribute provides a dictionary of pointers to cached matrix data to simplify passing data to OpenGL. This is particularly useful when using pyglet which accepts matrices as pointers. Dictionary keys are strings sharing the same name as the attributes whose data they point to.
Examples
Setting the model matrix:
glMatrixMode(GL_MODELVIEW)
glPushMatrix()
glMultTransposeMatrixf(myPose.ctypes['modelMatrix'])
# run draw commands ...
glPopMatrix()
If using fragment shaders, the matrix can be passed on to them as such:
# after the program was installed in the current rendering state via
# `glUseProgram` ...
loc = glGetUniformLocation(program, b"m_Model")
glUniformMatrix4fv(loc, 1, GL_TRUE, myPose.ctypes['modelMatrix'])  # `transpose` must be `True`

distanceTo
(self, v)¶ Distance to a point or pose from this pose.
Parameters: v (array_like) – Point (x, y, z) or pose to compute the distance to.
Returns: Distance to the point or RigidBodyPose.
Return type: float
Examples
Get the distance between poses:
distance = thisPose.distanceTo(otherPose)
Get the distance to a point coordinate:
distance = thisPose.distanceTo([0.0, 0.0, -5.0])

getAt
(self, ndarray out=None)¶ Get the at vector for this pose.
Parameters: out (ndarray or None) – Optional array to write values to. Must have shape (3,) and a float32 data type.
Returns: The vector for at.
Return type: ndarray
Examples
Setting the listener orientation for 3D positional audio (PyOpenAL):
myListener.set_orientation((*myPose.getAt(), *myPose.getUp()))
See also
getUp()
 Get the up vector.

getModelMatrix
(self, bool inverse=False, ndarray out=None)¶ Get this pose as a 4x4 transformation matrix.
Parameters:  inverse (bool) – If True, return the inverse of the matrix.
 out (ndarray, optional) – Optional array to write matrix values to. Must be an ndarray with shape (4, 4) and a float32 data type. Values are written assuming row-major order.
Returns: 4x4 transformation matrix.
Return type: ndarray
Notes
 This function creates a new ndarray with data copied from the cache. Use the modelMatrix or inverseModelMatrix attributes for direct access to the cached memory.
Examples
Using model matrices with PyOpenGL (fixed-function):
glMatrixMode(GL_MODELVIEW)
glPushMatrix()
glMultTransposeMatrixf(myPose.getModelMatrix())
# run draw commands ...
glPopMatrix()
For Pyglet (which is the standard GL interface for PsychoPy), you need to convert the matrix to a ctypes pointer before passing it to glMultTransposeMatrixf:
M = myPose.getModelMatrix().ctypes.data_as(
    ctypes.POINTER(ctypes.c_float))
glMatrixMode(GL_MODELVIEW)
glPushMatrix()
glMultTransposeMatrixf(M)
# run draw commands ...
glPopMatrix()
If using fragment shaders, the matrix can be passed on to them as such:
M = myPose.getModelMatrix().ctypes.data_as(
    ctypes.POINTER(ctypes.c_float))
# after the program was installed in the current rendering state via
# `glUseProgram` ...
loc = glGetUniformLocation(program, b"m_Model")
glUniformMatrix4fv(loc, 1, GL_TRUE, M)  # `transpose` must be `True`

getNormalMatrix
(self, ndarray out=None)¶ Get a normal matrix used to transform normals within a fragment shader.
Parameters: out (ndarray, optional) – Optional array to write matrix values to. Must be an ndarray with shape (4, 4) and a float32 data type. Values are written assuming row-major order.
Returns: 4x4 normal matrix.
Return type: ndarray
Notes
 This function creates a new ndarray with data copied from the cache. Use the normalMatrix attribute for direct access to the cached memory.
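As a hedged sketch of the underlying math (not the cached implementation): a normal matrix is the inverse-transpose of the model matrix's upper-left 3x3, which keeps normals perpendicular to surfaces even under non-uniform scaling:

```python
import numpy as np

def normal_matrix(model):
    """Inverse-transpose of the upper-left 3x3, padded back to 4x4."""
    n = np.identity(4, dtype=np.float32)
    n[:3, :3] = np.linalg.inv(model[:3, :3]).T
    return n
```

For a pure rotation (as produced by a RigidBodyPose with no scaling), the inverse-transpose equals the rotation itself.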

getOri
(self, ndarray out=None)¶ Orientation quaternion X, Y, Z, W. Components X, Y, Z are imaginary and W is real.
The returned object is a NumPy array which references data stored in an internal structure (pxrPosef). The array is conformal with the internal data’s type (float32) and size (length 4).
Parameters: out (ndarray or None) – Optional array to write values to. Must have a float32 data type.
Returns: Orientation quaternion of this pose.
Return type: ndarray
Notes
 The orientation quaternion should be normalized.

getOriAxisAngle
(self, degrees=True)¶ The axis and angle of rotation for this pose’s orientation.
Parameters: degrees (bool, optional) – Return angle in degrees. Default is True.
Returns: Axis and angle.
Return type: tuple (ndarray, float)
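A hedged sketch of the conversion this method performs, using only the standard quaternion-to-axis-angle formula (quaternions in [x, y, z, w] order; the function name is illustrative, not the library's internals):

```python
import math

def quat_to_axis_angle(q, degrees=True):
    """Recover the rotation axis and angle from a unit quaternion."""
    x, y, z, w = q
    angle = 2.0 * math.acos(max(-1.0, min(1.0, w)))
    s = math.sqrt(max(0.0, 1.0 - w * w))  # length of the imaginary part
    if s < 1e-8:  # identity rotation: axis is arbitrary
        axis = (1.0, 0.0, 0.0)
    else:
        axis = (x / s, y / s, z / s)
    return axis, math.degrees(angle) if degrees else angle
```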

getPos
(self, ndarray out=None)¶ Position vector X, Y, Z.
The returned object is a NumPy array which contains a copy of the data stored in an internal structure (pxrPosef). The array is conformal with the internal data’s type (float32) and size (length 3).
Parameters: out (ndarray or None) – Optional array to write values to. Must have a float32 data type.
Returns: Position coordinate of this pose.
Return type: ndarray
Examples
Get the position coordinates:
x, y, z = myPose.getPos()  # Python float literals
# ... or ...
pos = myPose.getPos()  # NumPy array shape=(3,) and dtype=float32
Write the position to an existing array by specifying out:
position = numpy.zeros((3,), dtype=numpy.float32)  # mind the dtype!
myPose.getPos(position)  # position now contains myPose.pos
You can also pass a view/slice to out:
coords = numpy.zeros((100, 3), dtype=numpy.float32)  # big array
myPose.getPos(coords[42, :])  # row 42

getUp
(self, ndarray out=None)¶ Get the ‘up’ vector for this pose.
Parameters: out (ndarray, optional) – Optional array to write values to. Must have shape (3,) and a float32 data type.
Returns: The vector for up.
Return type: ndarray
Examples
Using the up vector with gluLookAt:
up = myPose.getUp()  # myPose.up also works
center = myPose.pos
target = targetPose.pos  # some target pose
gluLookAt(*center, *target, *up)
See also
getAt()
 Get the forward ('at') vector.

getViewMatrix
(self, bool inverse=False, ndarray out=None)¶ Convert this pose into a view matrix.
Creates a view matrix which transforms points into eye space using the current pose as the eye position in the scene. Furthermore, you can use view matrices for rendering shadows if light positions are defined as RigidBodyPose objects. Using calcEyePoses() and getEyeViewMatrix() is preferred when rendering VR scenes, since features like visibility culling are not available otherwise.
Parameters:  inverse (bool, optional) – Return the inverse of the view matrix. Default is False.
 out (ndarray, optional) – Optional array to write matrix values to. Must be an ndarray with shape (4, 4) and a float32 data type. Values are written assuming row-major order.
Returns: 4x4 view matrix derived from the pose.
Return type: ndarray
Notes
 This function creates a new ndarray with data copied from the cache. Use the viewMatrix attribute for direct access to the cached memory.
Examples
Compute eye poses from a head pose and compute view matrices:
iod = 0.062  # 62 mm
headPose = RigidBodyPose((0., 1.5, 0.))  # 1.5 meters up from origin
leftEyePose = RigidBodyPose((-(iod / 2.), 0., 0.))
rightEyePose = RigidBodyPose((iod / 2., 0., 0.))
# transform eye poses relative to the head pose
leftEyeRenderPose = headPose * leftEyePose
rightEyeRenderPose = headPose * rightEyePose
# compute view matrices
eyeViewMatrix = [leftEyeRenderPose.getViewMatrix(),
                 rightEyeRenderPose.getViewMatrix()]
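For reference, the relationship between the two matrix types can be sketched as follows. This assumes a rigid transform (rotation plus translation), for which the view matrix is simply the inverse of the model matrix, computable without a general matrix inverse:

```python
import numpy as np

def view_from_model(model):
    """Invert a rigid transform [R|t]: R' = R^T, t' = -R^T t."""
    r = model[:3, :3]
    t = model[:3, 3]
    view = np.identity(4, dtype=np.float32)
    view[:3, :3] = r.T            # inverse of a rotation is its transpose
    view[:3, 3] = -r.T @ t
    return view
```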

inverseModelMatrix
¶ Pose as a 4x4 homogeneous inverse transformation matrix.

inverseRotate
(self, v, ndarray out=None)¶ Inverse rotate a position vector.
Parameters:  v (array_like) – Vector to inverse rotate (x, y, z).
 out (ndarray, optional) – Optional output array. Must have dtype=float32 and shape=(3,).
Returns: Vector rotated by the pose’s inverse orientation.
Return type: ndarray

inverseTransform
(self, v, ndarray out=None)¶ Inverse transform a position vector.
Parameters:  v (array_like) – Vector to transform (x, y, z).
 out (ndarray, optional) – Optional output array. Must have dtype=float32 and shape=(3,).
Returns: Vector transformed by the inverse of the pose’s position and orientation.
Return type: ndarray

inverseTransformNormal
(self, v, ndarray out=None)¶ Inverse transform a normal vector.
Parameters:  v (array_like) – Vector to transform (x, y, z).
 out (ndarray, optional) – Optional output array. Must have dtype=float32 and shape=(3,).
Returns: Normal vector transformed by the inverse pose’s position and orientation.
Return type: ndarray

inverseViewMatrix
¶ View matrix inverse.

invert
(self)¶ Invert this pose.

inverted
(self)¶ Get the inverse of the pose.
Returns: Inverted pose. Return type: RigidBodyPose

isEqual
(self, RigidBodyPose pose, float tolerance=1e-5)¶ Check if poses are close to equal in position and orientation.
Same as using the equality operator (==) on poses, but you can specify an arbitrary value for tolerance.
Parameters:  pose (RigidBodyPose) – The other pose.
 tolerance (float, optional) – Tolerance for the comparison, default is 1e-5.
Returns: True if pose components are within tolerance from this pose.
Return type: bool

modelMatrix
¶ Pose as a 4x4 homogeneous transformation matrix.

normalMatrix
¶ Normal matrix for transforming normals of meshes associated with poses.

normalize
(self)¶ Normalize this pose.

normalized
(self)¶ Get a normalized version of this pose.
Returns: Normalized pose. Return type: RigidBodyPose

ori
¶ Orientation quaternion [X, Y, Z, W].
Type: ndarray

pos
¶ Position vector [X, Y, Z].
Examples
Set the position of the pose:
myPose.pos = [0., 0., 1.5]
Get the x, y, and z coordinates of a pose:
x, y, z = myPose.pos
The ndarray returned by pos directly references the position field data in the pose data structure (pxrPosef). Updating values will directly edit the values in the structure. For instance, you can specify a component of a pose’s position:
myPose.pos[2] = 10.0 # z = 10.0
Assigning pos a name will create a reference to that ndarray which can edit values in the structure:
p = myPose.pos
p[1] = 1.5  # sets the Y position of 'myPose' to 1.5
Do not do the following, since the intermediate object returned by the multiplication operator will be garbage collected and pos will end up referencing invalid values:
pos = (myPose * myPose2).pos  # BAD!

# do this instead ...
myPoseCombined = myPose * myPose2  # keep the intermediate alive
pos = myPoseCombined.pos  # get the pos
Type: ndarray

raycastSphere
(self, targetPose, float radius=0.5, rayDir=(0., 0., -1.), float maxRange=0.0)¶ Raycast to a sphere.
Project an invisible ray of finite or infinite length from this pose in rayDir and check if it intersects with the targetPose bounding sphere.
This method allows for very basic interaction between objects represented by poses in a scene, including tracked devices.
Specifying maxRange as >0.0 casts a ray of finite length in world units. The distance between the target and the ray origin is checked prior to casting the ray, automatically failing if the ray can never reach the edge of the bounding sphere centered about targetPose. This avoids having to do the costly transformations required for picking.
This raycast implementation can only determine if contact is being made with the object’s bounding sphere, not where on the object the ray intersects. This method might not work for irregular or elongated objects since bounding spheres may not approximate those shapes well. In such cases, one may use multiple spheres at different locations and radii to pick the same object.
Parameters:  targetPose (array_like) – Coordinates of the center of the target sphere (x, y, z).
 radius (float, optional) – The radius of the target.
 rayDir (array_like, optional) – Vector indicating the direction for the ray (default is -Z).
 maxRange (float, optional) – The maximum range of the ray. Ray testing will fail automatically if the target is out of range. Ray is infinite if maxRange=0.0.
Returns: True if the ray intersects anywhere on the bounding sphere, False in every other condition.
Return type: bool
Examples
Basic example to check if the HMD is aligned to some target:
targetPose = RigidBodyPose((0.0, 1.5, -5.0))
targetRadius = 0.5  # 50 cm
isAligned = hmdPose.raycastSphere(targetPose.pos, radius=targetRadius)
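The test described above amounts to a standard ray-sphere intersection. A hedged NumPy sketch of the math (the function name and exact short-circuit behavior are illustrative, not the library's internals):

```python
import numpy as np

def raycast_sphere(ray_origin, ray_dir, center, radius, max_range=0.0):
    """Return whether a ray from `ray_origin` along `ray_dir` hits a sphere."""
    o = np.asarray(ray_origin, dtype=float)
    d = np.asarray(ray_dir, dtype=float)
    d = d / np.linalg.norm(d)
    oc = np.asarray(center, dtype=float) - o
    if max_range > 0.0 and np.linalg.norm(oc) - radius > max_range:
        return False  # a finite ray can never reach the sphere's edge
    t = np.dot(oc, d)                       # closest approach along the ray
    closest_sq = np.dot(oc, oc) - t * t     # squared miss distance
    return t >= 0.0 and closest_sq <= radius * radius
```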

rotate
(self, v, ndarray out=None)¶ Rotate a position vector.
Parameters:  v (array_like) – Vector to rotate.
 out (ndarray, optional) – Optional output array. Must have dtype=float32 and shape=(3,).
Returns: Vector rotated by the pose’s orientation.
Return type: ndarray

setIdentity
(self)¶ Clear this pose’s translation and orientation.

setOri
(self, ori)¶ Set the orientation of the pose in a scene.
Parameters: ori (array_like) – Orientation quaternion [X, Y, Z, W].

setOriAxisAngle
(self, axis, float angle, bool degrees=True)¶ Set the orientation of this pose using an axis and angle.
Parameters:  axis (array_like) – Axis of rotation [rx, ry, rz].
 angle (float) – Angle of rotation.
 degrees (bool, optional) – Specify True if angle is in degrees. Default is True.
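A hedged sketch of the conversion this method performs, using the standard axis-angle-to-quaternion formula ([x, y, z, w] order; the function name is illustrative):

```python
import math

def axis_angle_to_quat(axis, angle, degrees=True):
    """Build a unit quaternion from a rotation axis and angle."""
    if degrees:
        angle = math.radians(angle)
    x, y, z = axis
    n = math.sqrt(x * x + y * y + z * z)
    s = math.sin(angle / 2.0) / n   # normalize the axis
    return (x * s, y * s, z * s, math.cos(angle / 2.0))
```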

setPos
(self, pos)¶ Set the position of the pose in a scene.
Parameters: pos (array_like) – Position vector [X, Y, Z].

transform
(self, v, ndarray out=None)¶ Transform a position vector.
Parameters:  v (array_like) – Vector to transform [x, y, z].
 out (ndarray, optional) – Optional output array. Must have dtype=float32 and shape=(3,).
Returns: Vector transformed by the pose's position and orientation.
Return type: ndarray

transformNormal
(self, v, ndarray out=None)¶ Transform a normal vector.
Parameters:  v (array_like) – Vector to transform (x, y, z).
 out (ndarray, optional) – Optional output array. Must have dtype=float32 and shape=(3,).
Returns: Vector transformed by the pose’s position and orientation.
Return type: ndarray

translate
(self, v, ndarray out=None)¶ Translate a position vector.
Parameters:  v (array_like) – Vector to translate [x, y, z].
 out (ndarray, optional) – Optional output array. Must have dtype=float32 and shape=(3,).
Returns: Vector translated by the pose’s position.
Return type: ndarray

up
¶ Up vector of this pose (+Y is up) (readonly).
Type: ndarray

viewMatrix
¶ View matrix derived from the current pose.

class
psychxr.tools.vrmath.
BoundingBox
(extents=None)¶ Class for constructing and representing 3D bounding boxes.
A bounding box is a construct which represents a 3D rectangular volume about some pose, defined by its minimum and maximum extents in the reference frame of the pose or world coordinate system. The axes of the bounding box are aligned to the axes of the world or the associated pose.
Bounding boxes are primarily used for visibility testing; to determine if the extents of an object associated with a pose (e.g., the vertices of a model) fall completely outside of the viewing frustum. If so, the model can be culled during rendering to avoid wasting CPU/GPU resources on objects not visible to the viewer. See cullPose() for more information.
Bounding boxes on their own are axis-aligned. When associated with a RigidBodyPose class, they become object-aligned and are transformed by their parent pose during visibility checks.
Parameters: extents (tuple, optional) – Minimum and maximum extents of the bounding box (mins, maxs), where mins and maxs are specified as coordinates [x, y, z]. If no extents are specified, the bounding box will be invalid until defined.
Examples
Create a bounding box and add it to a pose:
# minimum and maximum extents of the bounding box
mins = (-.5, -.5, -.5)
maxs = (.5, .5, .5)
bounds = (mins, maxs)
# create the bounding box and add it to a pose
bbox = BoundingBox(bounds)
modelPose = RigidBodyPose()
modelPose.bounds = bbox
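The extents pair can be derived from a model's vertex data with a simple per-axis minimum and maximum. A hedged NumPy sketch with hypothetical vertices:

```python
import numpy as np

vertices = np.array([            # hypothetical model vertices
    [-0.5, -0.25, -0.5],
    [ 0.5, -0.25, -0.5],
    [ 0.0,  0.75,  0.5]], dtype=np.float32)

mins = vertices.min(axis=0)      # minimum extent per axis
maxs = vertices.max(axis=0)      # maximum extent per axis
extents = (mins, maxs)
```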

clear
(self)¶ Clear the bounding box.
After calling, the isValid property will be False until mins and maxs are specified or new points are provided.

extents
¶ The extents of the bounding box (mins, maxs).

isValid
¶ True if the bounding box is valid. Bounding boxes are valid if all dimensions of mins are less than each of maxs, which is not the case after clear() is called.
If a bounding box is invalid, cullPose() will always return True.

maxs
¶ Point defining the maximum extent of the bounding box.

mins
¶ Point defining the minimum extent of the bounding box.

Functions¶

psychxr.tools.vrmath.
calcEyePoses
(RigidBodyPose headPose, float iod)¶ Compute the poses of the viewer’s eyes given the tracked head position.
Parameters:  headPose (RigidBodyPose) – Object representing the pose of the head. This should be transformed so that the position is located between the viewer’s eyes.
 iod (float) – Interocular (or lens) separation of the viewer in meters (m).
Returns: Left and right eye poses as RigidBodyPose objects.
Return type: tuple (RigidBodyPose, RigidBodyPose)
Examples
Calculate the poses of the user’s eyes given the tracked head position and get the view matrices for rendering:
leftEyePose, rightEyePose = calcEyePoses(headPose, iod=0.062)
leftViewMatrix = leftEyePose.viewMatrix
rightViewMatrix = rightEyePose.viewMatrix
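Conceptually, the eye poses are the head pose offset by half the IOD along the head's local X axis. A hedged NumPy sketch of just the positional part (the function and its head_right parameter are illustrative, not part of this module):

```python
import numpy as np

def calc_eye_positions(head_pos, head_right, iod):
    """Return (left, right) eye positions given the head's local +X axis."""
    head_pos = np.asarray(head_pos, dtype=float)
    half = np.asarray(head_right, dtype=float) * (iod / 2.0)
    return head_pos - half, head_pos + half
```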