The tyMesher object can be used to convert point clouds into surfaces. Point clouds can be derived from both particles and the vertices of regular geometry.
Blob mesh: input particles/geometry will be converted to a blob mesh using OpenVDB.
Combined mesh: input particles/geometry will be combined into a single mesh.
Pathfinding mesh: input particles/geometry will be combined into a mesh that is ideal for use with the Pathfinding operator.
“Pathfinding mesh” mode takes input geometry and resolves all intersections using PRISM. It then subtracts any specified obstacles, and may also contract open edges and relax topology, depending on the specified parameters. Technically, these operations can all be performed separately using a tyBoolean modifier (for the PRISM operations), a tyRelax modifier (for the relax operation), etc. However, using a tyMesher in “pathfinding mesh” mode is a shortcut that performs all of those operations without setting up separate modifiers, and a quicker way to generate a mesh optimized for pathfinding.
Tet mesh: input particles/geometry will be converted into a volume of tetrahedrons.
UVW mesh: input particles/geometry will have their specified map channel data extracted as a mesh.
Object list: the list of objects whose particles/vertices will be meshed.
Hide after adding: controls whether objects will be hidden in the scene after adding them to the listbox.
Enable in viewport: controls whether the tyMesher will generate its mesh in the viewport.
Show icon: controls whether the tyMesher icon is displayed in the viewport.
Icon size: controls the size of the tyMesher icon.
Show name: displays the name of the tyMesher next to its icon in the viewport.
For example, if the time slider is at frame 5 and the offset value is set to -5, the input objects will return their mesh/particle data from frame 0.
You can use the “frame” mode to do in-place retiming of input meshes (normally retiming input objects like that would require the use of something like the Point Cache modifier).
Range: when enabled, sets a range of frames (start to end) from which to extract mesh data from input objects.
Time step: when in “range” mode, time step allows you to specify the evaluation step size from the start to the end of the range.
“Range” mode allows you to sample multiple frames and extract their combined meshes together.
For example, if “Frame” is set to 2.5, the resulting mesh will be a linear interpolation of the meshes queried at frames 2 and 3. This setting requires that the whole-frame meshes surrounding the subframe have identical vertex counts. This setting is useful in situations where there is no subframe data available for input meshes that you wish to retime.
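A minimal sketch of that whole-frame interpolation, assuming the meshes sampled at the surrounding whole frames have identical vertex counts (vertex positions here are plain tuples, not tyFlow data):

```python
def lerp_vertices(verts_a, verts_b, t):
    """Linearly interpolate two vertex lists; t=0.5 corresponds to frame 2.5."""
    return [tuple(a + t * (b - a) for a, b in zip(va, vb))
            for va, vb in zip(verts_a, verts_b)]

frame2 = [(0.0, 0.0, 0.0), (10.0, 0.0, 0.0)]  # mesh vertices sampled at frame 2
frame3 = [(0.0, 5.0, 0.0), (10.0, 5.0, 0.0)]  # mesh vertices sampled at frame 3
print(lerp_vertices(frame2, frame3, 0.5))     # -> vertices of the interpolated frame-2.5 mesh
```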
Classic blob mesh: a standard marching-cubes, quad-based mesher.
Zhu-Bridson: a modification of the classic blob mesh algorithm that blends and flattens more densely populated areas of particles.
Blend distance: the absolute distance to search from each particle, to find neighbors and determine which areas are more densely populated. The distance will be clamped such that it will be at least the radius of the largest particle.
Blend multiplier: the relative distance to search from each particle, to find neighbors and determine which areas are more densely populated. The relative distance is the blend multiplier multiplied by the radius of the largest particle.
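As a rough sketch of how those two parameters relate (a hypothetical helper; the descriptions above only specify the clamping and the multiplication, so treating them as two ways of arriving at a single neighbor-search distance is an assumption):

```python
def blend_search_distance(max_particle_radius, blend_distance=None, blend_multiplier=None):
    """Return the neighbor-search distance used for Zhu-Bridson blending."""
    if blend_distance is not None:
        # absolute distance, clamped to at least the largest particle radius
        return max(blend_distance, max_particle_radius)
    # relative distance: multiplier times the largest particle radius
    return blend_multiplier * max_particle_radius

print(blend_search_distance(5.0, blend_distance=2.0))    # -> 5.0 (clamped up to the max radius)
print(blend_search_distance(5.0, blend_multiplier=1.5))  # -> 7.5
```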
Adaptive: controls whether planar areas of generated meshes will undergo polygon reduction.
When “absolute” radius is disabled, voxelization radius will be derived from particle properties.
Multiplier: an overall multiplier applied to radii values.
Voxel Size: the size of the mesher voxels. Smaller values increase the resulting mesh resolution.
Render Size: when enabled, overrides the size of the mesher voxels at rendertime.
Applies filtering to voxels prior to meshing. Enabling a filter can help smooth out boundaries between voxel radii.
Filter type: the filtering kernel to apply to voxels.
Filter width: the width of the filtering kernel in voxel units.
Only at rendertime: when enabled, the filtering will only be applied at rendertime.
“Direct conversion to SDF” mode only applies to input meshes (not particles), and is not (yet) compatible with the matID/UVW inheritance settings. The mesh resampling setting is ignored for meshes directly converted to SDFs.
SDF from vertices: when enabled, input meshes will be converted to an SDF by sampling their vertices and generating an SDF from the resulting cloud of points.
Input mesh resampling: controls whether input meshes will be resampled with additional implicit points prior to meshing (see the note below).
Multiplier: the face area ratio multiplier. Higher values will generate more resample points.
Sometimes input meshes from particles or objects are too low resolution to generate smooth mesher surfaces. Mesh resampling will generate implicit uniformly-distributed vertices over the faces of input meshes, which will help to fill in holes in the resulting mesher surface. The number of implicit points generated on a face is determined by the ratio between the area of the face and the voxel size/radius, multiplied by the resample multiplier. In other words, the bigger a face is compared to the voxel size/radius, the more resample points it will receive.
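A hedged sketch of that density rule (the exact internal formula isn’t documented here, so treating the voxel size/radius as a square of comparable area is an assumption):

```python
import math

def resample_point_count(face_area, voxel_size, multiplier):
    """Approximate implicit point count for a face: area ratio times the resample multiplier."""
    ratio = face_area / (voxel_size * voxel_size)  # how many voxel-sized cells the face spans
    return int(math.floor(ratio * multiplier))

print(resample_point_count(face_area=100.0, voxel_size=2.0, multiplier=1.0))  # -> 25
print(resample_point_count(face_area=100.0, voxel_size=2.0, multiplier=2.0))  # -> 50
```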
Inherit material IDs: controls whether the faces of the resulting mesh inherit material ID properties from input points.
Inherit UVWs: controls whether the faces of the resulting mesh inherit UVW coordinates from the input points.
Inherit particle velocity: controls whether vertices of the resulting mesh will be smeared at subframes along the velocity vectors of the input points. Enable this setting to allow renderers to render blob meshes with motion blur.
Save to map channel: when enabled, blob mesh surface velocity vectors will be saved to the specified mapping channel.
Only at rendertime: when enabled, subframe smearing and particle velocity inheritance will only be computed while rendering.
When “inherit particle velocity” is enabled, blob meshes generated from moving particles can be rendered with motion blur, because subframe data is just smeared whole-frame data, which keeps topology consistent over the [-0.5, 0.5] subframe interval of each whole frame. Normally motion blur is not possible on a blob mesh due to changing topology at a subframe level, but this mode prevents topological changes from happening while applying the overall velocity of the source particles to the generated blob mesh.

Note: for motion blur to work, the renderer’s motion blur frame duration must be less than 1.0, and the center of the motion blur interval must be set to a whole frame. So, for example, if you’re rendering frame 100 and want this method of motion blur to work, the total motion blur frame interval should be no greater than [99.501, 100.499] and the center of the motion blur interval should be exactly frame 100. In VRay, this would be a motion blur frame duration less than 0.999 and an interval center of 0.0. When using a Physical Camera, make sure the duration is less than 0.999 and the offset is set to negative half the duration (so for a duration of 0.5, set the offset to -0.25).
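The interval arithmetic in that note can be verified with a small sketch, assuming the Physical Camera convention that the sampled interval is [frame + offset, frame + offset + duration] (consistent with the duration/offset example above; these are not tyFlow or renderer API calls):

```python
def blur_interval(frame, duration, offset):
    """Return the (start, end) frame interval sampled for motion blur."""
    start = frame + offset
    return start, start + duration

def blur_is_valid_for_blob_mesh(frame, duration, offset):
    start, end = blur_interval(frame, duration, offset)
    center = 0.5 * (start + end)
    # duration must stay under one frame and the interval must be centered on a whole frame
    return (end - start) < 1.0 and abs(center - round(center)) < 1e-6

print(blur_interval(100, 0.5, -0.25))                # -> (99.75, 100.25), centered on frame 100
print(blur_is_valid_for_blob_mesh(100, 0.5, -0.25))  # -> True
print(blur_is_valid_for_blob_mesh(100, 0.5, 0.0))    # -> False (interval center lands at 100.25)
```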
Accuracy: controls the maximum number of input points that will be used to interpolate matID/UVW/velocity values.
Influence: controls the radius multiplier applied to the UVW/velocity inheritance algorithms. The larger the value, the further away a particle can influence a particular vertex’s UVW/velocity values in the resulting mesh.
Use if available: particles will be retrieved from source objects if source objects’ particle interfaces are enabled.
Force interface: particles will be retrieved from source objects even if source objects’ particle interfaces aren’t explicitly enabled.
“Force interface” mode applies to tyFlow and tyCache objects that have their particle interfaces disabled.
Clustering allows you to split up the input particle cloud into sub-clouds which are meshed separately.
Clustering disabled: no clustering will occur.
Custom float clustering: cluster groupings will be determined by the custom float data values of input tyFlow particles. The data values will be converted into integers and grouped accordingly (see the sketch following the clustering parameters below).
Channel: the custom float data channel from which cluster values will be retrieved.
Material ID clustering: cluster groupings will be determined by the material ID values of input points.
Texmap clustering: cluster groupings will be determined by an input world/object-space texmap.
Cluster count: the number of cluster groups to create from the input texmap values.
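A minimal sketch of the custom float clustering described above, assuming the float-to-integer conversion is a simple truncation (particles here are plain (id, value) pairs, not tyFlow particles):

```python
from collections import defaultdict

def cluster_by_custom_float(particles):
    """Group particle IDs by the integer portion of their custom float value."""
    clusters = defaultdict(list)
    for pid, value in particles:
        clusters[int(value)].append(pid)  # each integer bucket becomes its own sub-cloud
    return dict(clusters)

particles = [(0, 0.2), (1, 0.9), (2, 1.4), (3, 2.0)]
print(cluster_by_custom_float(particles))  # -> {0: [0, 1], 1: [2], 2: [3]}
```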
Enable MatID filter: when enabled, only faces with material IDs found in the match list will be copied from the input geometry.
MatIDs: the list of material ID values to match.
Enable normals filter: when enabled, only faces with qualifying surface normals will be copied from the input geometry.
X/Y/Z/Threshold: if the angle between the face’s normal and the specified normal (from the X/Y/Z values) is greater than the specified threshold, the face will be removed.
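A small sketch of that angle test (a hypothetical helper; face normals are plain tuples):

```python
import math

def face_passes_normal_filter(face_normal, filter_normal, threshold_degrees):
    """Keep a face only if the angle between its normal and the filter normal is within the threshold."""
    ax, ay, az = face_normal
    bx, by, bz = filter_normal
    dot = ax * bx + ay * by + az * bz
    len_a = math.sqrt(ax * ax + ay * ay + az * az)
    len_b = math.sqrt(bx * bx + by * by + bz * bz)
    angle = math.degrees(math.acos(max(-1.0, min(1.0, dot / (len_a * len_b)))))
    return angle <= threshold_degrees

print(face_passes_normal_filter((0, 0, 1), (0, 0, 1), 45.0))  # -> True  (angle is 0 degrees)
print(face_passes_normal_filter((1, 0, 0), (0, 0, 1), 45.0))  # -> False (angle is 90 degrees)
```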
Remove degenerate faces: when enabled, faces with two or more identical vertex indices will be removed.
Remove isolated vertices: when enabled, isolated vertices (vertices not attached to any faces) will be removed.
Merge: when selected, intersections between faces in input geometry will be resolved, but no further considerations will be made about whether or not remaining faces are inside or outside of the original geometry.
Union: when selected, intersections between faces in input geometry will be resolved, and remaining faces that are classified as inside the original geometry will be removed. Input geometry needs to have proper volume (be composed of a closed surface, with depth/thickness) for this mode to work properly.
Open edges are contracted by first extruding them into tubular meshes, and then subtracting those tubular meshes from the base mesh. This process can cause various artifacts to appear in the base mesh if the contraction radius is too large, but it is generally reliable enough for most purposes. Contracting the open edges of a pathfinding mesh can prevent paths from being generated too close to obstacle objects in the scene (obstacle objects are used to create holes in the pathfinding mesh).
Segments: controls how many face segments will be generated around the circumference of the tubular meshes extruded from open edge splines. Higher values will increase the resolution of those tubes.
Enable spline relaxation: enabling this option, when contract is set to a non-zero value, will relax the open edge splines used to perform the contract operation, allowing you to create smoother contract borders.
Enabling spline relaxation can help to smooth hard corners in a pathfinding mesh’s open edge borders. However, spline relaxation can result in open edge splines that do not correctly overlap the original edge borders (because the relaxation function shifts the open edge spline knots in a way that may move them too far away from the original open edge borders). A good rule of thumb, if spline relaxation is enabled, is to also enable spline normalization and make sure the normalization length threshold is set to roughly half of the open edge contract value.
Normalize: when enabled, the open edge splines used in the contract operation will be normalized by the specified length threshold.
Strength: the strength of the relaxation algorithm applied to open edge splines.
Iterations: the number of times the relaxation algorithm will be applied to the open edge splines.
Visualize edge splines: when enabled, the meshes generated from open edge splines will be displayed in the viewport.
Use all/front/back map faces: controls which map faces will be extracted. Front/back faces are determined by examining the face normal in UVW space.
Map channel: controls which map channel will be used to extract UVW coordinates.
Scale: a scaling multiplier applied to extracted UVW coordinates.