The Worlds Actions node is a multi-resource node that provides access to various Worlds platform capabilities beyond event management. Select a resource to access different operations.
Resources
Process Detection Image
Create still images or animated GIFs from detection data. Used to capture visual evidence for events.
Create Still Image

Generates a single annotated image showing the detection at a specific timestamp.

| Parameter | Type | Description |
|---|---|---|
| Track IDs | Array | Track IDs to include in the image |
| Timestamp | DateTime | The point in time to capture |
| Zone IDs | Array | Zones to overlay on the image |

Returns a processedImage field containing the base64-encoded image and a metadata object with the timestamp.

Create GIF

Generates an animated GIF showing the detection sequence over time.

| Parameter | Type | Description |
|---|---|---|
| Track IDs | Array | Track IDs to include |
| Timestamp | DateTime | Reference timestamp |

Returns a gif_base64 field containing the base64-encoded GIF.
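Downstream nodes usually need the image as raw bytes rather than a base64 string. A minimal sketch of the decoding step (the field names mirror the output described above; the base64 content here is a stand-in, not real image data):

```javascript
// Sketch: decoding the base64 image payload from Create Still Image.
// Field names mirror the node output described above; the base64
// content is a stand-in, not real image data.
const response = {
  processedImage: Buffer.from('example-bytes').toString('base64'),
  metadata: { timestamp: '2026-01-13T15:12:52.000Z' },
};

// Decode back to raw bytes, e.g. before writing to disk or attaching
// as binary data in a workflow.
const imageBuffer = Buffer.from(response.processedImage, 'base64');
```

The same decoding applies to the gif_base64 field returned by Create GIF.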
Send Worlds Email
Send email notifications using the Worlds SendGrid email template system.
| Parameter | Type | Description |
|---|---|---|
| Email Subject | String | Subject line |
| Alert Image | String | Base64-encoded image or GIF to include |
| Alert Title | String | Title displayed in the email |
| Site Name | String | Site name for context |
| Event Timestamp | DateTime | When the event occurred |
| Camera Name | String | Camera/data source name |
| Event ID | String | Worlds event ID for linking |
| To Recipients | Collection | Email addresses to send to |
| CC/BCC Recipients | Collection | Additional recipients (in advanced options) |
Credentials: Requires SendGrid API credentials.
AI Operations
Access Worlds AI capabilities:
| Operation | Description |
|---|---|
| OCR | Extract text from detection images |
| Image Segmentation | Segment objects in detection images |
| Embeddings | Generate vector embeddings from detections |
Get Track State
Query the current state of a track from the state machine.
Get Zone State
Query the current state of a zone from the state machine.
Create Event
Create events directly (alternative to Event Manager for simpler workflows).
Event Producer
Query available event producers for your organization.
Get Event
Query existing events from the Worlds platform.
Get Track Snapshot
Compute a track’s position, velocity, and zone membership at a specific historical timestamp. Unlike Get Track State, which returns the latest or final state, this resource analyzes raw detections within a time window around the requested timestamp to reconstruct what the track was doing at that moment.
This is particularly useful in batch workflows after a Type III check, where you need to know what a track was doing at the moment of an interaction — for example, its velocity when two tracks were closest, or which zone it occupied at the time of a near-miss.
| Parameter | Type | Default | Description |
|---|---|---|---|
| Track IDs | Array | — | One or more track UUIDs to query |
| Timestamps | Array | — | ISO 8601 timestamp per track, or a single shared timestamp for all tracks |
| Window Seconds | Number | 10 | Seconds before and after the timestamp to include in computation |
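As the table notes, Timestamps is either matched index-by-index with Track IDs or a single value shared by all tracks. A small sketch of that pairing rule (the function name and track IDs are illustrative, not part of the node):

```javascript
// Sketch: pairing Track IDs with Timestamps as described above.
// A single timestamp is shared by every track; otherwise the two
// arrays are matched index-by-index.
function pairTracksWithTimestamps(trackIds, timestamps) {
  if (timestamps.length === 1) {
    return trackIds.map((id) => ({ trackId: id, timestamp: timestamps[0] }));
  }
  if (timestamps.length !== trackIds.length) {
    throw new Error('Provide one timestamp per track, or a single shared timestamp');
  }
  return trackIds.map((id, i) => ({ trackId: id, timestamp: timestamps[i] }));
}

// One shared timestamp applied to two tracks:
const pairs = pairTracksWithTimestamps(
  ['track-a', 'track-b'],
  ['2026-01-13T15:12:52.000Z'],
);
```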
Output:
```json
{
  "track_snapshots": {
    "019bb7ea-4050-7e9b-8761-f8df773fd3e5": {
      "track_id": "019bb7ea-4050-7e9b-8761-f8df773fd3e5",
      "tag": "Person",
      "timestamp": "2026-01-13T15:12:52.000Z",
      "window_seconds": 10,
      "detections_in_window": 14,
      "position": {
        "method": "linear",
        "offset_ms": 82,
        "pix": { "x": 512, "y": 347 },
        "geo": { "lon": -73.9857, "lat": 40.7484 },
        "polygon": [[480, 290], [544, 290], [544, 404], [480, 404]]
      },
      "motion": {
        "pix": {
          "avg_velocity": 22.4,
          "max_velocity": 38.1,
          "direction": 135.0,
          "sample_count": 13
        },
        "geo": {
          "avg_velocity": 1.1,
          "max_velocity": 1.9,
          "direction": 135.0,
          "sample_count": 13
        }
      },
      "zones": {
        "active_at_timestamp": {
          "a3f1c2d4-8899-4b6e-b012-567890abcdef": {
            "zone_id": "a3f1c2d4-8899-4b6e-b012-567890abcdef",
            "zone_name": "Loading Dock",
            "intersection_percent": 0.78,
            "dwell_at_timestamp": 14.5
          }
        }
      }
    }
  }
}
```
The method field tells you how the position was determined:
- real — an exact detection exists at the requested timestamp
- linear — interpolated between two bracketing detections
- forward_extrapolation / backward_extrapolation — the timestamp is outside the detection range, so the nearest detection was used
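As an illustration of the linear case, a position between two bracketing detections can be derived like this (a sketch of the semantics only; the detection shape and function name are illustrative, not the node's actual implementation):

```javascript
// Sketch of the "linear" method: interpolate position between two
// bracketing detections. Detection shape {tMs, x, y} is illustrative.
function interpolatePosition(before, after, tMs) {
  const span = after.tMs - before.tMs;
  const f = span === 0 ? 0 : (tMs - before.tMs) / span;
  return {
    x: before.x + f * (after.x - before.x),
    y: before.y + f * (after.y - before.y),
  };
}

// Requested timestamp falls halfway between the two detections:
const pos = interpolatePosition(
  { tMs: 0, x: 500, y: 340 },
  { tMs: 1000, x: 520, y: 350 },
  500,
);
// pos → { x: 510, y: 345 }
```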
Credentials: Requires GraphQL Subscription API credentials when querying tracks not already cached by the state machine.
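A hypothetical example of consuming the snapshot output, e.g. to answer "which zone was this track in, and how fast was it moving?" after a Type III check. Field names follow the example output above; the helper and the abbreviated input are illustrative:

```javascript
// Hypothetical helper reading an (abbreviated) track snapshot shaped
// like the example output above.
function summarizeSnapshot(snap) {
  const zoneNames = Object.values(snap.zones.active_at_timestamp)
    .map((z) => z.zone_name);
  return {
    tag: snap.tag,
    geoSpeed: snap.motion.geo.avg_velocity,
    zones: zoneNames,
  };
}

const summary = summarizeSnapshot({
  tag: 'Person',
  motion: { geo: { avg_velocity: 1.1 } },
  zones: {
    active_at_timestamp: {
      'zone-uuid': { zone_name: 'Loading Dock' },
    },
  },
});
// summary → { tag: 'Person', geoSpeed: 1.1, zones: ['Loading Dock'] }
```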
Closest Frame
Find the optimal timestamp where multiple tracks are closest together in a single camera frame. This is particularly useful in zone state workflows where you’re working with multiple tracks and need a single image that shows them all.
| Parameter | Type | Description |
|---|---|---|
| Track IDs | Array | Track IDs to analyze |
| Timestamp | DateTime | Reference timestamp to search around |
The node calculates the minimax edge-to-edge bounding box distance across all specified tracks and returns the timestamp where they are closest together.
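A sketch of what "minimax edge-to-edge bounding box distance" means: the gap between two boxes is zero when they overlap, each candidate frame is scored by its worst pairwise gap, and the frame with the smallest score wins. The box format and function names are illustrative; the node's internal computation may differ:

```javascript
// Edge-to-edge distance between two axis-aligned bounding boxes,
// 0 when they overlap. Box format {x1, y1, x2, y2} is illustrative.
function edgeDistance(a, b) {
  const dx = Math.max(a.x1 - b.x2, b.x1 - a.x2, 0);
  const dy = Math.max(a.y1 - b.y2, b.y1 - a.y2, 0);
  return Math.hypot(dx, dy);
}

// Minimax reduction: score a frame by its worst pairwise gap; the
// frame with the smallest score is where the tracks are "closest".
function frameScore(boxes) {
  let worst = 0;
  for (let i = 0; i < boxes.length; i++) {
    for (let j = i + 1; j < boxes.length; j++) {
      worst = Math.max(worst, edgeDistance(boxes[i], boxes[j]));
    }
  }
  return worst;
}

const gap = edgeDistance(
  { x1: 0, y1: 0, x2: 10, y2: 10 },
  { x1: 20, y1: 0, x2: 30, y2: 10 },
); // → 10
```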
Output:
```json
{
  "closest_frame": {
    "optimal_timestamp": "2026-02-19T18:36:22.045Z"
  }
}
```
Use the optimal_timestamp as the timestamp input for the Process Detection Image node to capture the best possible image of all tracks together.
Closest Frame serves a similar purpose to batch mode’s interaction data, but works with streaming zone state data. In batch workflows where interactions are available, you may not need this node. In zone state workflows, it’s the recommended way to find the best image timestamp for multi-track scenarios.
Using VLM (Vision Language Model) with images
A common advanced pattern is to pass a captured still image through a Vision Language Model for additional analysis. This uses n8n’s built-in Basic LLM Chain node with image input:
- Capture a still image using Process Detection Image
- Pass the image binary to a VLM (Azure OpenAI, GPT-4V, etc.) with a structured prompt
- Use a Structured Output Parser to get typed JSON output (e.g., { "boolean": true, "confidence": 0.92, "context": "..." })
- Merge the VLM output back with the original data
The VLM output can be used two ways:
- As context metadata — attach the VLM’s description and confidence to the event for human review
- As a check gate — use the boolean output with an n8n IF node to conditionally create events, reducing false positives
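The check-gate pattern can be sketched as plain logic. In a real workflow an n8n IF node makes this comparison; the field names follow the structured output example above, and the 0.8 confidence threshold is an assumption:

```javascript
// Sketch of the check gate: only create an event when the VLM confirms
// with sufficient confidence. The threshold default is an assumption.
function shouldCreateEvent(vlmOutput, minConfidence = 0.8) {
  return vlmOutput.boolean === true && vlmOutput.confidence >= minConfidence;
}

const accept = shouldCreateEvent({ boolean: true, confidence: 0.92 }); // → true
const reject = shouldCreateEvent({ boolean: true, confidence: 0.55 }); // → false
```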
See the Streaming Zone State Workflow for a complete example using VLM confirmation.
Credentials
Most resources require GraphQL Subscription API credentials. The email resource requires SendGrid API credentials.