OpenSRE integrates with Slurm, providing detailed observability for batch jobs and array tasks. It works by running the OpenSRE agent on each compute node — no modification to job scripts required.

Why use OpenSRE with Slurm

Slurm provides job-level scheduling visibility, but no insight into what happens inside each job. OpenSRE adds that missing layer:
  • Per-process telemetry inside each job allocation
  • Job array correlation and node-level performance
  • Resource and cost insights across users and queues
  • Zero changes to job submission or scripts
  • Real-time updates in the OpenSRE dashboard

Getting Started

Prerequisites

  • Slurm cluster access with sudo or admin privileges for installation
  • OpenSRE installed on your operating system

Just run your pipeline; OpenSRE attaches automatically

If OpenSRE is already installed on your system, you only need to enable the OpenSRE agent for pipelines that have not yet been run under OpenSRE. To do so, run:
sudo tracer init --token <your-token>
Visit our onboarding page to get your personal token.
The command prompts you to name your pipeline so it is clearly labeled in the dashboard.

Examples

Run a Slurm pipeline under OpenSRE. Save the following script as my_job.sh:
#!/bin/bash
#SBATCH --job-name=test
#SBATCH --cpus-per-task=8
#SBATCH --time=01:00:00

module load python
python analysis.py
Submit the job as usual with:
sbatch my_job.sh
or launch the OpenSRE demo workflow:
sudo tracer demo
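Job arrays work the same way: no changes to the script are needed for OpenSRE to pick up each task. A minimal sketch of an array script (job name, array range, and CPU counts are illustrative; the echo stands in for real work such as the analysis.py step above):

```shell
#!/bin/bash
# Illustrative Slurm job-array script; names and ranges are placeholders.
#SBATCH --job-name=array-test
#SBATCH --array=0-3
#SBATCH --cpus-per-task=4
#SBATCH --time=00:30:00

# Slurm exports SLURM_ARRAY_TASK_ID for each array element; default to 0
# so the script also runs outside an allocation.
TASK_ID="${SLURM_ARRAY_TASK_ID:-0}"
echo "processing shard ${TASK_ID}"
```

Submitted with sbatch, this expands to four tasks (indices 0 through 3), which Slurm schedules independently and OpenSRE correlates back to the parent array, as noted in the feature list above.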
Once the pipeline starts, open the OpenSRE dashboard, and you’ll see each Slurm job as a timeline step updating in real time.
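To cross-reference a dashboard timeline step with Slurm's own records (for example squeue or sacct), capture the job ID that sbatch prints on submission. A minimal sketch, using a stand-in string so the snippet runs anywhere; on a cluster you would assign from the real sbatch call instead:

```shell
# sbatch prints a line of the form "Submitted batch job <id>".
# Stand-in output so the snippet runs without a cluster; on a login node use:
#   submit_out="$(sbatch my_job.sh)"
submit_out="Submitted batch job 12345"

# Strip everything up to the last space to keep only the numeric job ID.
job_id="${submit_out##* }"
echo "Slurm job ID: ${job_id}"
```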
