In the Relationship Editor tab, you will designate the relationships of your 3D assets to each other and to elements in the scene (e.g. ground_plane):
Workflow: The workflow is the space where you will be defining all object relationships.
Load Collection: This module is used to place collections of 3D assets into the scene and to define their relationship to other objects and elements using a parent/child system.
Locate: This module is used for designating the range of placement in the scene via a vector.
Rotate: This module is used for rotating an object; the rotation can be defined as a range or as a static value. Units are in degrees.
Neighbor: This module will place an object to a specific side of the parent object. You can also declare an offset as a range.
Drop: This module will place an object directly onto a parent object or element.
Center: This module will center an object on a particular axis.
Seahaven uses Blender (.blend) files for 3D asset population, and there are a few preparation steps to ensure your model will work effectively in your simulation.
Blender is a free download available through the Blender Foundation.
Remove everything from the scene except the 3D asset and any materials you will be packing into the file.
Purge any orphan data from the scene. This unused data can interfere with cameras, lighting, and many other parts of the simulation.
To purge orphan data, set the Outliner's display mode to Orphan Data, then select Purge.
Lastly, make sure to pack all your data into the .blend file.
Go to File, then External Data, then Pack All Into .blend
If you want to precisely position a camera in your scene you can load a scene with a curve object to define a specific range of positions for your camera.
For example, in this simulation, the user wanted the camera to follow the perimeter of a building and they drew a curve to represent the range of positions for the camera to take.
You can name the curve whatever you like; however, be sure to use the same name when you assign it in your camera module later.
The workflow is the container where you will be defining all object relationships.
You can assign a name to objects in the outliner. These names are typically used to tell manipulators what objects in a scene to transform or manipulate in some defined way.
You can assign an object a name in the Blender Outliner.
If you have multiple objects with the same name, you can use a "." as a delimiter and assign an index number. Seahaven will only look at the prefix if you try to modify these objects later with a manipulator; for example, objects named chair.001, chair.002, and chair.003 would all be matched by a manipulator targeting "chair".
This is a great way to manipulate large groups of objects.
In this example, the user is using a Blender empty to define a possible location to place a person.
The 3D Assets tab is where you will bring 3D assets to use in your simulation into Seahaven. Here, assets are separated into folders called collections.
Collections are an important part of Seahaven. Collections not only contain groupings of 3D assets; their names will also be used to manage annotations and other aspects of your simulations. Please name your collections carefully.
First, designate a Collection name on the top left-hand side of the upload tool. Each collection comprises the files you want to assign to a specific category.
Drag and drop your .blend files into the upload tool and select upload when they have finished processing.
This Collection should now be available for placement in the Relationship Editor.
Relationship modules
This is a variation on the relationship module. You may want to array certain types of objects like desks in an office or shelving in a store.
X and Y Axis Count : These fields accept min/max variable blocks and set the number of elements in each direction.
X and Y Distance: These fields accept min/max variable blocks and set the distance between elements in each direction.
Placements are collections of operations that are used to transform the position and orientation of objects within the scene.
Many of the placement rules will make reference to "front", "back", "right", or "left". These tags refer to sides of the parent object's bounding box. They are designed to be intuitive and will help you quickly define the approximate relationship between objects.
If an element is dependent on another element, such as placing a window or a door on a wall, you will define that object's location in local (U,V) coordinates.
The Relationship module is used to place collections of 3D assets into the scene and to define their relationship to other objects and elements (e.g. ground_plane) using a parent/child system.
Bring your Load Collection module into the Workflow bay.
The top two drop-down menus in this module are used to define the parent/child relationship. In the box on the left, select the name of the parent object; the box on the right will have the name of the child object.
The Placement bay is where you will be placing the Relationship modules, which specify the configuration of the relationship between the parent and child object.
Use Item Count to designate how many models to sample, as a min/max range.
Seahaven is a platform that enables you to create synthetic data to train computer vision applications.
Our platform enables data scientists and AI engineers to create simulations of real-world places and events. With Lexset you can configure and run simulations that generate training data for your algorithms.
Once your account has been activated, you will receive a link to your account page. Click the "login" button in the upper right corner.
Next, you will be directed to a login page. You can set a username and password or use your Gmail account.
After logging in, go to your account page, where you can find your user ID, bearer token, and other account information that you will need in order to use the Lexset APIs.
Note: We have blurred out all personal information for this tutorial. Please use your own account information.
Add a payment method
In the Colormap Editor you will be creating a colormap for your dataset. Color mapping is a common visualization technique that maps data to color, and displays the colors in a rendered image.
Start by dragging the Colormap Workflow from the Start Workflow menu. This is the bay you will be placing all of your categories into.
To create a new category, drag the Category module out of the menu and place it in the Colormap Workflow bay.
Each category should have a corresponding collection, and share the collection name. Category names will be used for annotation as well as placement in the relationship editor.
Select the color you wish to use for your collection; repeated colors are not recommended.
It is also possible to define a category and supercategory. In COCO JSON, a supercategory is a group of categories. For example, you might want all the screws to be under a supercategory called "hardware".
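In the COCO annotation file this corresponds to entries like the following (the category names and IDs here are hypothetical placeholders):

```json
{
  "categories": [
    { "id": 1, "name": "wood_screw", "supercategory": "hardware" },
    { "id": 2, "name": "machine_screw", "supercategory": "hardware" }
  ]
}
```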
Choose the name of your colormap and click "Add".
Uniformly scale objects within a given collection
Define a scale factor in the input field.
This placement rule will place an object to a specific side of the parent object within a maximum and minimum offset distance.
Place the Neighbor module in the Placements bay.
Use the drop-down menu to select the orientation relative to the parent object at which to place the child object.
Create an offset range. The distance is in meters.
This module will place an object directly onto a parent object or element.
This module is used for placing an object in the scene within a given 3d bounding box.
This module centers an object on a particular axis.
Scene objects are used to define context, including lighting and procedurally or parametrically generated objects.
Place an object on another object
Select the two sides of the objects you want to join. You can then project the child object to the parent within a range of local (u,v) coordinates.
This module rotates an object on the z-axis. Rotation is defined as a range. Units are in degrees.
Import 3D Assets into your simulation
The number of samples determines how many unique items to sample out of the collection.
For example, if a collection contains four different 3D models of fruit (apple, orange, pear, pineapple) and you set the samples to 2, two of them will be selected at random. If "Load Duplicate" is enabled, the same asset may be loaded more than once.
To select a collection, use a "Collection" block from the "Variables" submenu.
Align an object with another object
Align an object to a specific side of the parent object.
The workflow is the container you will use to define the parameters of your simulation.
Scene: Scene modules are used to configure your scene context. This will include importing/generating 3D environments, backgrounds, and/or lighting.
Relationships: Relationships between classes of objects defined in the relationship editor are imported into your workflow here.
Camera: Camera type, position, and camera properties are all defined in this section.
Manipulators: Manipulators can be used to transform and randomize your scene each time a data point is generated.
Colormap: Import your color map and define the labels for your COCO JSON file.
Additional Output: Additional render passes are defined in this section. For example, a depth map would be defined in this workflow portion.
Annotation Settings: Any additional limitations or transformations you might want to apply to the data annotations are defined here.
Post-Processing: Applies post-processing effects to images, such as adding shot noise.
Raytrace Samples: Number of paths to trace for each pixel in the RGB images. Increasing the number of samples improves render quality, but takes more time.
Stereoscopic: Renders a stereoscopic image (left and right).
Exclude Annotations: If checked, RGB images will be generated without any annotations.
Defines a room type, which uses accompanying object relationships to populate the space.
Room type: Choose from a list of room layout presets including "bedroom" and "living room" (more coming soon).
Object relationships: Object relationships created from the relationship editor which will be used to populate rooms of this type.
This module generates a ground plane at the origin of the scene.
Shadow Plane: If "Shadow Plane" is enabled objects will cast a shadow on the ground plane.
Import 3D Assets & Relationships: This module imports relationships between collections that have been defined in the Relationship Editor.
Place an object on top of another object.
Place a child object on top of a parent object. You can then project the child object to the parent within a range of local (u,v) coordinates.
The Load Relationships module allows you to select a relationship file from a drop down menu. If you created a relationship in the Relationship Editor tab, it will appear here.
Animates/tracks a camera along a curve.
Frames: Number of frames to sample along the curve.
Curve Name: Name of the curve the camera will follow.
Properties: Perspective camera intrinsic properties. Additional documentation.
Shutter Speed: Shutter speed of the camera in seconds.
Camera Height (Min/Max): Height of the camera above the curve in meters.
Curve Parameter Start (Min/Max): A value between 0 and 1 indicating the start position of the camera where 0 is the start of the curve and 1 is the end of the curve.
Curve Parameter Length (Min/Max): A value between 0 and 1 indicating the distance to travel along the curve, where 1 corresponds to the full length of the curve. For example, a start of 0.25 and a length of 0.5 would move the camera from 25% to 75% of the way along the curve.
High Dynamic Range Images, or HDRIs, are large images that project ambient light into your scene. The HDRI Lighting module allows the user to select from a drop-down menu of environment types.
With the HDRI Lighting module, you can load a variety of pre-loaded HDRI images to use as a background in your images. In addition, these images provide physically accurate environmental lighting.
Project to Ground: You can project your HDRI image onto the scene's ground plane, resulting in a semi-spherical projection. You can learn more about how this feature can be used when creating photorealistic images on our blog.
Height: Height refers to the height of the camera when the HDRI was captured. Most HDRI images are taken between 2 and 3 meters off the ground. You can adjust this parameter to reduce distortion around the horizon, which might appear if your camera moves too high above eye level on the z axis.
Intensity: Set a minimum and maximum light intensity. The range will dictate the range of lighting brightness across all the images in your dataset.
Generates an interior scene using a given room type preset.
Room Type: The room layout to use for the generated scene. Options: (bedroom, livingroom). (See Room Type for more information on room types.)
Room Height: The ceiling height of the generated room (in meters).
Wall type: The material to use for the walls in the room. Options: (Gypsum, Wallpaper).
Floor type: The material to use for the floors in the room. Options: (Oak, Maple, Tile, Ceramic).
Ceiling type: The material to use for the ceilings in the room. Options: (Wood, Gypsum, ACT).
Window type: The window shape to use. The "Tall" preset is approx. 2.3m tall. Options: (Standard, Tall).
Generate recessed lights: If checked, recessed lights will be generated in the ceiling of the room.
Color temp: Color temperature of the lights, in degrees Kelvin. See Wikipedia for more information.
Light strength multiplier: A factor by which to multiply the strength of the recessed lights. Setting both the min and max to 0 will disable the lights.
Define your camera and how it is positioned within your simulated environment.
Start Workflow modules
Allows for additional camera positions to be sampled per frame.
Note: All values are expressed relative to an assumed origin and orientation. In other words, it can be treated as adding cameras to a device (such as a head mounted display), which informs the initial location and orientation.
Offset: Distance to offset along each axis (in meters) for this specific camera.
Rotation: Additional rotation to apply to the camera (in degrees).
Filename Suffix: Suffix to append to each image rendered by this camera. For example, using a suffix of "_L_UP" would change a filename from "rgb_0001.png" to "rgb_0001_L_UP.png"
Randomly samples a camera pose on a curve.
Frames: Number of frames to sample along the curve.
Curve Name: Name of the curve to sample.
Yaw/Pitch/Roll (Min/Max): Rotation ranges for the camera about its 3 axes. Setting all values to 0 will point the camera straight down. Setting the pitch to 90 degrees will make the camera parallel with the XY/ground plane.
Camera Height (Min/Max): Height of the camera above the curve in meters.
Properties: Perspective camera intrinsic properties. Additional documentation.
Camera Configuration is a module that is loaded into the Camera bay and allows placement of Position and Properties modules.
Seahaven currently supports two different camera types (perspectival and fisheye).
Each of these camera types can either be positioned by a curve or by using one of the position modules in Seahaven.
Curves can be imported with the 3D Asset module in the scene menu. For more information, please see the prepare your models section of this documentation.
Additionally, rotation ranges may be defined for each curve-based camera.
Frames: The number of unique frames to create in the current scene. Each frame represents a unique data point.
Curve name: The name of the curve with which the camera location will be determined.
Yaw (min/max): A range of values, determining the rotation of the camera about its up axis (looking left and right).
Pitch (min/max): A range of values, determining the rotation of the camera about its right axis (looking up and down).
Roll (min/max): A range of values, determining the rotation of the camera about its forward axis (rotating clockwise/counter-clockwise).
Camera Height (min/max): The height of the camera relative to the curve (in meters).
This module moves the camera along a cone
Look at: Define where your camera points using one of the blocks from the orientation menu.
This module moves the camera along a sphere.
Look at: Define where your camera points using one of the blocks from the orientation menu.
Positions a camera on the perimeter of a room generated using the "generate interior scene" block.
Frames: Number of frames to sample in the room.
Position: The location within the room to sample. Options: (Wall, Corner).
Offset Distance: The distance from the camera to the wall (in meters).
Camera Height: Camera height above the floor (in meters).
Yaw/Pitch/Roll: Random rotation along the path (in degrees).
Required Categories: Categories which are required to be visible in each image. Additional documentation.
Properties: Fisheye camera intrinsic properties. Additional documentation.
Minimum Distance to Camera: Restricts objects from being closer than this distance to the camera.
Single Raycast Distance Check: If checked, distance will only be checked from the center of the frame.
This block will automatically point the camera towards the items in the scene.
The bounding box of all the items in the scene is calculated and the camera will point towards the center of that bounding box.
Category: Name of the category to check in each image.
Min/Max: The minimum and maximum number of instances of the category allowed in each image.
This module allows you to choose points in a 3D bounding box for the camera to sample from
Look at: Define where your camera points using one of the blocks from the orientation menu.
Cameras are positioned within ranges of values. Camera position is defined as a bounding volume or point cloud within which you will sample locations. You can choose from the following options.
Manipulators are modules that can be used to transform or manipulate objects that are already loaded into a scene.
This block will point the camera towards a specific location within the scene.
Define a target point to point your camera at.
Positions a camera in a room generated using the "generate interior scene" block.
Frames: Number of frames to sample in the room. Frames are generated along a random path within the room.
Control Points: Number of points which are used to create the camera path. Higher numbers increase the complexity of the path.
Distance Between Points: The min/max distance between consecutive points in the camera path.
Camera Height: Camera height above the floor (in meters).
Yaw/Pitch/Roll: Random rotation along the path (in degrees).
Additional Cameras: Optional list of additional camera positions/orientations to sample per frame. See documentation for additional information.
Properties: Fisheye camera intrinsic properties. Additional documentation.
Camera manipulators modify camera extrinsics by translating and rotating the camera with a variety of methods.
This block positions the camera above an object matching the provided name and method (category or object name).
Match By: Property with which to match an object (Category/Object Name).
Match Name: A string to match (case-insensitive) the object category or name.
Use Partial Match: If checked, all objects with a partial match to the provided name will be sampled.
Override Existing Camera Rotation: If checked, the initial rotation of the camera will be replaced with the provided values.
Initial Rotation (Yaw/Pitch/Roll): Ranges of rotation values to use as the camera's initial rotation. If "Override Existing Camera Rotation" is unchecked, the camera's current rotation values will be used.
Additional Rotation (Yaw/Pitch/Roll): Rotation values which are applied in addition to the initial rotation.
Additional Translation (X/Y/Z): Translation applied after the camera's position has already been overridden.
FOV: Sets the field of view (in degrees).
Sensor Size: The size of the camera sensor, expressed in millimeters.
Focal Length: The camera focal length, expressed in millimeters.
Adjust the intensity of lights in your scene
This block will change the intensity of all light objects in your scene. Please note that this will impact HDRI light sources. IMPORTANT: always use this block before the "Near IR" block. The "Near IR" block adds a light to your scene, and modifiers are executed sequentially, so if this block runs first the light used in the IR simulation will not be impacted.
Randomizes the color of a material.
Material name: The name of the material to be randomized.
Chance: Chance (as a percentage, between 0 and 100) to randomize the material’s color.
Camera intrinsic properties for a perspectival camera.
FOV (Min/Max): Sets the horizontal field of view (in radians).
F-Stop: The F-Stop (aperture) of the lens. Affects the depth-of-field of the image.
Aperture Blades: Number of blades for the aperture. Affects the shape of the bokeh.
Aperture Ratio: Simulates distortion of bokeh. Values below 1 stretch the shape of the bokeh horizontally while values above 1 will stretch the bokeh vertically.
Camera Near/Far Clip: Sets the distances of the near and far clipping planes.
Adds a point light coincident with the camera and converts color space to panchromatic to mimic the appearance of near-field IR imagery.
Illuminator Power: Radiant power of the point light in Watts.
Randomizes a numerical property of a material.
Material name: The name of the material to be randomized.
Property name: Name of the material property to randomize.
Value (min/max): A range of values to uniformly sample for the new property value.
This block will point the camera toward a specific object or bone as well as move the camera to a specified distance from said object.
Object Name: Name of the object. If Bone Names is used, the name of this object should be that of a rig.
Distance to Target (min/max): Distance from the target object (or bone). If both values are set to 0, the distance will not be overridden.
Bone Names: Name(s) of the bones within the specified object to point to.
Define how materials are mapped to other materials in the scene and at what frequency they are replaced.
Replace: Enter the name of the material in your scene file; in the final port on this row, input an object name variable. With: Provide a material name with a weight.
Randomly replace a material in your scene.
Import: Load a material collection into your scene.
Replace by: Select materials to be replaced by either "object name" or by category.
Input a material name and a weight.
Weight defines the likelihood of this material being used over another one in the list.
This module applies a pose to a rigged object
This block can be added to change the pose of an object given the name of the rig and a list of poses. There are two variants of this block; the following variant will only apply a single pose.
Rig Name: The name of the rig to modify.
Poses: A list of poses. See Pose Pair for more information.
This variant of the block will apply multiple poses (one from each "pose set").
Rig Name: The name of the rig to modify.
Pose Sets: A list of pose sets, each of which contain a list of pose pairs. See Pose Set for more information.
Defines a pair of poses for interpolation.
This is used in conjunction with the Pose Objects block. This will interpolate between two poses, the amount of which is determined by min and max bias values (as a percentage). The example below will interpolate between the open and closed poses by between 0% and 25%. A bias below 50% will label the object using the name of Pose A, whereas a bias above 50% will label the object using the name of Pose B.
Pose A: The first pose.
Pose B: The second pose.
Bias Min: The minimum of the bias range (weighting toward Pose A).
Bias Max: The maximum of the bias range (weighting toward Pose B).
Pose name partial match: If checked, pose names partially matching the given name will also be sampled. For example, using the pose name "sitting" will match all poses containing the string "sitting".
Constrains (parents) one object to another. All transformations applied to the parent object will be inherited by the child object.
Child Collection: Collection from which to load a child object.
Parent Object Name: Name of the parent object (case-insensitive).
Parent Bone Name: (Optional) Name of the bone in the parent object. Only applies to rigged objects.
Partial Match Name: If checked, a partial match will be used for the parent object name.
Number of Parents (Min/Max): Number of objects to act as parents.
Number of Children (Min/Max): Number of objects to load as children.
The replace module can be used to replace objects in your scene with other objects.
You can also transform the items in your scene by rotating them, aligning them to the camera or setting them a minimum distance from your camera.
Object Name: Name of the object to replace.
Items: Number of objects to replace in the scene.
Collection: Collection of 3d assets to sample (objects which will be used to replace). If the Load Child Object block is used, this will serve as the parent object.
New Object Name: (Optional) new name for the replaced object.
Rotation (Min/Max): Rotation range, in degrees, about the object's vertical axis.
View settings (optional): A block defining view-related settings. See more.
Load child object (optional): A block which loads and parents a child object. See more.
Adds a particle system to an object which populates the geometry with instances of another collection of objects. Commonly used to populate a landscape with vegetation.
Target Object Name: Name of the object to populate with particles.
Particle Collection: Collection to instance for unique particles.
Number of Particles (Min/Max): The total number of particles on the target object geometry.
Unique Particle Instance (Min/Max): The number of unique objects to use for the particle system.
Particle Random Scale: A value between 0 and 1 where 0 maintains the original object scale and 1 is the highest level of randomization.
Load a set of materials from a collection.
Material collections are created on the "Materials" page. You can upload your own shaders and materials to a designated collection.
This module will remove objects of a specific name from your scene.
Define the object name and what percentage of the items you want to be removed from the scene.
This can be a useful way to randomize a scene by randomly deleting items each time a new data point is generated.
Object Name: Name of the object to delete.
Percentage to delete: Percentage of objects matching the name to delete.
Ignore: Number of objects to skip deleting.
Min/Max Distance: Distance range from the camera in which objects may be ignored.
View Limits X (Min/Max): A pair of values where 0 is the left side of the frame and 1 is the right side of the frame.
View Limits Y (Min/Max): A pair of values where 0 is the bottom of the frame and 1 is the top of the frame.
A list of pose pairs.
A pose set block is a list of pose pairs, from which one pose pair will be chosen and then applied.
In the image above, two sets of poses will be applied to a single rig, "rig". The first set of poses will apply a pose partially matching the name "_L" (e.g. a left-handed pose).
The second set of poses will apply a pose partially matching the name "_R" (e.g. a right-handed pose).
This block loads and parents a child object.
This block loads a child object, using two bones to parent the object.
Load child object from: Collection of 3d assets to sample (objects which will be used to replace).
Parent bone name: Name of the bone in the parent rig.
Child bone name: Name of the bone in the child rig.
This block provides replacement settings related to the camera view.
Align to Camera: If checked, replaced objects will face the camera (affected by rotation values).
Distance to Replace (Min/Max): Distance range from the camera within which objects will be replaced. If both values are set to 0, there will be no restriction.
Replace in View: If checked, only objects within the view of the camera will be replaced. The extents of the view may be restricted using X and Y view limits.
View Limits X (Min/Max): A pair of values where 0 is the left side of the frame and 1 is the right side of the frame.
View Limits Y (Min/Max): A pair of values where 0 is the bottom of the frame and 1 is the top of the frame.
This block will output a json file containing a 3x3 intrinsic matrix (denoted as "K") and a 4x3 extrinsic matrix (denoted as "M").
Note: Intrinsic matrices are currently only supported for perspectival cameras.
Sample output:
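Below is an illustrative sketch of this structure with placeholder values: "K" is the 3x3 intrinsic matrix (focal lengths and principal point in pixels), and "M" is the 4x3 extrinsic matrix (shown here, as an assumed layout, as three rotation rows followed by a translation row). It is not actual Seahaven output.

```json
{
  "K": [[800.0, 0.0, 640.0],
        [0.0, 800.0, 360.0],
        [0.0, 0.0, 1.0]],
  "M": [[1.0, 0.0, 0.0],
        [0.0, 1.0, 0.0],
        [0.0, 0.0, 1.0],
        [0.0, 0.0, 0.0]]
}
```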
Your workflow module will generate RGB images and standard COCO output, including semantic segmentations and bounding box information. Additional non-COCO output can be defined with additional modules.
Loads objects along a curve
Curve Name: Name of the curve(s) along which the objects will be loaded.
Object Collection: Collection of objects to load along the curve.
Curve Parameter (Min/Max): Defines the part of the curve to sample where 0 is the start of the curve and 1 is the end of the curve.
Curve Parameter Tolerance: The tolerance (between 0 and 1) to use when sampling points along the curve. Using a larger value will cause objects to be placed farther apart from one another.
Number of Samples (Min/Max): Number of points to sample for placing objects.
Random Rotation (Min/Max): A range of rotation values (in degrees) to apply to the objects after they have been placed and aligned to the curve.
Exports keypoints (if applicable) in global coordinates (right-handed XYZ).
This data may be used in conjunction with ex/intrinsic matrices.
Check for self-occlusion: If checked, keypoint visibility will also be determined by self-occlusion, e.g. a hand behind one's back, not visible by the camera.
Use helper mesh for self-occlusion: If checked, a simplified version of the object will be used when checking for self-occlusion. In general, this will resolve most false negatives, i.e. a keypoint determined to be occluded when it, in fact, is not.
Self occlusion tolerance: Tolerance (in meters) of the self-occlusion test. Distances below this value will be ignored. For example, setting this to an extremely high value such as 1 meter may cause no keypoints to be labeled as "occluded".
Keypoints are provided in a separate keypoints.json file in global XYZ coordinates (right-handed, Z-up) as well as in image coordinates in the coco_annotations.json file, in standard COCO format.
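For reference, the image-space keypoints in coco_annotations.json use the standard COCO encoding of (x, y, visibility) triplets, where visibility is 0 (not labeled), 1 (labeled but not visible), or 2 (labeled and visible). A minimal, hypothetical annotation fragment with placeholder coordinates:

```json
{
  "keypoints": [412.0, 310.5, 2, 455.0, 298.0, 1, 0.0, 0.0, 0],
  "num_keypoints": 2
}
```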
This is where you load the color map you created. The color map will be used to generate your annotations.
Filter or modify your annotations by bounding box area and/or category.
This module will check each data point to see if the annotations meet specific criteria.
You can check if a particular category is visible and/or filter out annotations of that category under a specific size.
Exclude Background: If checked, all annotations with the category "background" will be excluded from the final set of annotations.
Categories for Area Check: (Optional) Filters out annotations based on a defined minimum area of the segmentation (in pixels).
Categories for Unoccluded Bbox: (Optional) Adds information for the size of the unoccluded bounding box of the provided categories.
This module creates a depth map as an output of the simulation
Depth Maps are included within your simulation as an additional output. Annotations that fall outside of standard COCO format will be included as additional output.
A normal pass may be exported as additional output. Normals are represented in camera space and are provided as RGB values in an EXR image file.
The save and import feature can be used to save your workflows locally or to import locally saved workflows.
Adds glare as a compositing step to the final image.
Glare Exposure (Stops): Exposure of the glare effect. Setting a higher value will increase the intensity of the glare in the image.
Visualizations of the COCO annotations may be produced, with the ability to specify which annotations to display and for which categories.
Show Bounding Box: Toggle to display 2D bounding boxes around each of the annotations.
Show Label: Toggle to display the category name inside of the annotation's bounding box.
Show Segmentation: Toggle to display segmentation contours.
Show Keypoints: Toggle to display keypoint annotations, if applicable.
Categories to Visualize: The names of the categories to visualize. If left empty, annotations for all categories will be visualized. Category names are provided using a Category Name block under Variables/Text Inputs.
Name your simulation, give it a description, and designate the number of Datapoints you wish to produce. Then select Run Simulation.
Number of Compute Nodes tells Seahaven how many GPUs to use for this simulation.
Navigate to the Simulation Manager tab. Here you will see your simulation in the queue.
Click the play button to start your simulation.
Video Walkthrough (recommended):
Blog Walkthrough:
Adds shot noise (Poisson noise) to the final RGB images.
Noise Amount (min/max): The amount of noise to add, defined as 1/n where n is the average number of photons per fully-illuminated pixel. A value of 0 adds no noise.
Chance to add noise: Chance that noise will be added to a given image.
Base class for all types of generator modules. Generator modules are used to generate procedural assets within a simulation.
Please contact info@lexset.ai for any support-related issues.
Simulations are configured through a series of YAML files and uploaded to the Lexset servers for processing.
This tutorial will explain the structure of these configuration files and how to edit them.
Simulations are composed of both "pipeline configs" and "placement rules."
Pipeline configs are composed of key/value pairs that coordinate various aspects of running a simulation, including the version of Lexset's software you are using, global variables (to be shared across the application), dependencies, and more. Modules contain all the necessary instructions to run your simulation, including loading 3D assets, manipulating cameras, and exporting data.
Placement rules govern how objects relate spatially. For example, placement rules might describe how the television is mounted on a wall or the probability that a nightstand might be next to the bed in a bedroom.
A Module serves to execute a single step in the render pipeline, whether it be populating the scene with objects (loader), manipulating camera positions (camera), or outputting files of various formats (writer).
Every config contains four key/value pairs, namely "version", "global", "setup", and "modules".
"version"
: Indicates the version of Lexset being used. The most recent version of Lexset is version 2
. This should not be changed.
"global"
: Provides the ability to set param values across all modules in the pipeline. Within the parameter "all"
, global parameters are set by using the param name as its key and the param value as its value. This can be an empty dict if desired. For example:
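A minimal config skeleton might look like the following sketch; the specific parameter under "all" and the "pip" key/package under "setup" are hypothetical examples, not required values:

```yaml
"version": 2                      # version of Lexset being used
"global":
  "all":
    "output_dir": "output/"       # hypothetical param shared with every module; "all" may also be an empty dict
"setup":
  "pip": ["numpy"]                # pip modules needed by the project (key name and package are assumptions)
"modules": []                     # list of module entries, described below
```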
"setup"
: A dict containing several values related to project setup. In the example above, pip modules are declared that might be necessary for the project.
"modules"
: A list of Modules to be used in the pipeline. Each module is formatted as follows:
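A hypothetical sketch of a single entry; the exact key names ("module" and "config") are assumptions based on the module names used elsewhere in this documentation:

```yaml
"modules":
  - "module": "main.Initializer"  # the module to run at this step of the pipeline
    "config": {}                  # parameters for this module (empty here)
```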
Some modules can have other modules attached to them.
The structure modules are what enable you to generate structures and procedural assets within your virtual environment. These may include buildings and other architectural elements.
In some cases, structure modules can generate "sub-domains." For example, within a house, you might have different rooms and you want objects and events to occur differently within each room.
Each sub-domain will have its own label and its own set of placement rules. Placement rules are assigned to each sub-domain by the "placement.PlacementHandler" module, which will be discussed later in this section.
Modules control all aspects of your simulations. This section will outline the various modules currently available through Lexset.
Every config file must contain a "main.Initializer" module. It is responsible for basic project setup, including setting the compute device for rendering.
Modules are divided into 5 different types:
Structure Modules
Lighting Modules
Camera Modules
Render Modules
Annotation Modules
The simulation manager can be used to start and stop your simulations. This is also where you can download your datasets and retrieve information about your simulations.
Your simulation can be in a variety of states. The available options will change depending on the simulation state.
When your simulation is in a ready state, it will show as green in the interface. This indicates that the configuration you submitted is valid and is ready to run.
Start your simulation (the play button icon) transitions your simulation to a running state and starts generating data.
Queue your simulation (the clock icon) places your simulation in the queue state.
When your simulation is in a queue state it will be placed in a queue to be run when resources become available.
You might want to use this feature if you have prepared a large number of simulations that you want to run over many hours or days. Once you place a simulation in the queue it will be automatically started once resources become available in your organization.
For example, your organization may have a number of running simulations that are consuming all your available nodes. As those simulations complete and nodes free up, our servers will autostart the next job in the queue.
Remove from queue (the eject icon) will remove the simulation from the queue and return the simulation to a ready state.
A running simulation is one that is generating data. In this state you can preview images and check the status.
Options:
Stop your simulation (the red square button), this will transition your simulation to a complete state, and the data you have generated so far will be packaged up for download.
Retrieve information (the letter "i" icon) will report simulation progress and simulation state, such as "starting up" or "not yet started".
Preview (the eye icon) will open a new browser tab showing a live preview of data being generated on the server. You will also be able to see a live snapshot of the logs.
Once your simulation is complete, your dataset will be packaged up and any post-processing operations will be applied. Your data will then be available for download.
Download (the cloud icon) will give you the option to save a copy of your dataset to a local folder in .zip format.
Retrieve information (the letter "i" icon) will report simulation progress and simulation state, such as "starting up" or "not yet started".
Preview (the eye icon) will open a new browser tab showing a live preview of data being generated on the server. You will also be able to see a live snapshot of the logs.
Delete (the trash can icon) will remove the simulation from the simulation manager. Lexset will keep that simulation archived for 30 days before we remove it. If you delete anything by accident please contact support.
This module will generate an architectural interior space with doors and windows.
Instructions for generating your space are stored in a separate configuration file. The configuration is referenced by the "multi_room_config" parameter.
Each room is composed of the following properties.
"name"
: Each room is assigned a unique string. This is important. This label will be used to connect this room to other rooms and to apply placement rules.
"diagonal"
: Define the maximum and minimum size of your space.
"components"
: Add doors and windows to your space.
"connects"
: If this is defined when the rooms are rendered no wall will exist between the two spaces. For example, if you wanted the corridor to connect to the living room directly without a door. You would specify it as in the following example.
"has_wall"
: This property looks for the "connects"
property. If rooms are connected and this value is false
the rooms will be joined without a wall between them.
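As referenced under "connects" above, a hypothetical pair of room entries might look like the following sketch; the room names and the exact value types (for example, whether "connects" takes a single name or a list) are assumptions for illustration:

```yaml
- "name": "corridor"
  "connects": "living_room"       # assumed to reference the other room's "name"
  "has_wall": false               # the corridor and living room are joined with no wall between them
- "name": "living_room"
```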
Generating a room in Lexset means you define a maximum and minimum size for the room. You also define where doors and windows get placed. To do this we have developed a unique way of describing these spaces.
Rooms are composed of segments. In Lexset version 2, each room is composed of four segments.
In this example, we place a door on segment 1.
Coordinates defining room size need to represent a range of possible values and are expressed as a nested array; for example:
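A hypothetical sketch of such a nested range (the exact array layout is an assumption; the idea is a minimum and a maximum footprint in meters):

```yaml
"diagonal": [[3.0, 4.0], [5.0, 6.5]]   # e.g. [minimum width/depth, maximum width/depth], layout assumed
```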
If an element is dependent on another element, such as placing a window or a door on a wall, you will define that object's location in local (U,V) coordinates.
In the following example, a 1.2 m wide door is placed on "segment 1" (a wall within a room) around the center. The "u" position is defined as a range between 0.4 and 0.6, meaning the door is located between 40% and 60% of the way along the wall, i.e. within 10% of the wall's length of its center.
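Using those values, the component entry for this door might be sketched as follows; the key names are assumptions for illustration, and only the numbers come from the example above:

```yaml
"components":
  - "type": "door"                # assumed key for the component kind
    "segment": 1                  # the wall segment the door is placed on
    "width": 1.2                  # door width in meters
    "u": [0.4, 0.6]               # placement range along the wall, centered on 0.5
```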