by Pascal Jansen and Mark Colley
This Unity asset provides an end-to-end, Human-in-the-Loop (HITL) Bayesian Optimization workflow (single- and multi-objective) built on botorch.org. It lets you declare design parameters and objectives in Unity, runs a Python backend, and loops with users inside your Unity scene. The result is an efficient search over large design spaces, yielding trade-off designs on the Pareto front.
Why this matters. Users typically have diverse preferences, needs, and abilities. Thus, manual design parameter tuning is often slow and potentially biased; A/B and grid search scale poorly. Instead, MOBO uses probabilistic surrogate models and principled acquisition to balance design exploration and exploitation, reducing the number of user trials required to reach a high-quality design for every user.
- Configure design parameters, objectives, and optimizer hyperparameters directly in Unity.
- Automatic, robust communication with a BoTorch-based MOBO process.
- Built-in integration with the QuestionnaireToolkit for explicit feedback in a HITL process; compatible with implicit telemetry.
- Automatic CSV logging of parameters/objectives and optimization metric traces (hypervolume for MOBO, best-objective trace for BO); warm-start from prior runs.
- Two example scenes demonstrating end-to-end optimization.
To improve interface usability, treat selected UI attributes as design parameters.
Several scientific publications have used ‘Bayesian Optimization for Unity’ in their methodology:
- OptiCarVis: Improving Automated Vehicle Functionality Visualizations Using Bayesian Optimization to Enhance User Experience (CHI '25, Best Paper Honorable Mention, top 5%)
- Improving External Communication of Automated Vehicles Using Bayesian Optimization (CHI '25)
- Glossary (Plain Language)
- Background
- Installation
- Integration Checklist (Required)
- Quick Start (10 Minutes)
- Example Usage
- Demo Video
- Configuration
- Troubleshooting
- System Architecture
- Portability to Your Own Project
- Citation
- License
| Term | Meaning |
|---|---|
| Parameter | A setting Unity can change automatically (for example size, color, speed). |
| Objective | A score the optimizer tries to improve (for example usability, trust, completion time). |
| Smaller is Better | Unity flag for an objective where lower values are preferred (for example time or errors). |
| Sampling Iterations | Initial rounds used to explore the space before model-based optimization starts. |
| Optimization Iterations | Main BO rounds where the model proposes the next best design. |
| Warm Start | Start from existing CSV data instead of collecting new initial samples. |
| Pareto Front | Best trade-offs when you have multiple objectives and no single best point exists. |
| Dominated Point | A point for which some other point is at least as good in every objective and strictly better in at least one. |
| Hypervolume | Single MOBO progress metric computed from current non-dominated points in maximize-space. |
| coverage | Runtime metric sent from Python to Unity (hypervolume for MOBO, best objective for BO). |
| tempCoverage | Sampling-progress value in [0,1] during initial sampling rounds. |
| USER_LOG_ID | Folder-safe log identifier derived from User ID (invalid path characters are normalized). |
| Seed | Number used to make stochastic parts reproducible across runs with the same setup. |
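To make the Pareto terms above concrete, here is a minimal, illustrative Python sketch (not part of the asset) that checks dominance and extracts the non-dominated set in maximize-space:

```python
def dominates(a, b):
    """True if point a dominates b: a is at least as good as b in every
    objective (maximize-space) and strictly better in at least one."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def pareto_front(points):
    """Return the non-dominated points (the Pareto front)."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

# Example: maximize usability (first value) and trust (second value)
points = [(0.9, 0.2), (0.5, 0.5), (0.2, 0.9), (0.4, 0.4)]
print(pareto_front(points))  # [(0.9, 0.2), (0.5, 0.5), (0.2, 0.9)]
```

Here `(0.4, 0.4)` is dominated by `(0.5, 0.5)`, so it drops out of the front; the remaining three points are the trade-offs among which no single best exists.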
In MOBO, the goal is to find a parameter configuration (e.g., color, transparency, visibility) that maximizes objective values (e.g., usability, trust) while respecting the design space $X$.

The optimization problem is:

$$x^* = \arg\max_{x \in X} f(x)$$

where:

- $x$ is a parameter vector in $X$,
- $f(x)$ is a vector of objectives, $f(x) = [f_1(x), f_2(x), \dots, f_k(x)]$,
- $x^*$ maximizes $f(x)$ over $X$.
The figure below shows the HITL process for this asset. Step by step:
- Design Selection:
  The optimizer selects a design instance $x$ from the design space $X$. In the example, a design includes color (ColorR, ColorG, ColorB), transparency, and visibility of the shapes (Cube & Cylinder). Parameter ranges limit $X$.
- Simulation:
  The appearance parameterized by $x$ is shown in the simulation so the user can experience the design.
- User Feedback:
  After the simulation, the user rates the design via a questionnaire. Ratings are translated into objective values $y$. In the example, the objectives are trust and usability, each with defined ranges ($Y$).
- Optimization:
  Based on current objective values, MOBO proposes another design, considering prior feedback. The loop repeats.
The entire process consists of two phases:
- Sampling Phase:
Sobol sampling (see note) selects evenly spread designs across the space. The optimizer records objective values to learn the landscape before optimization starts. In these rounds, visual changes may not correlate with ratings.
Note: I. M. Sobol. 1967. On the distribution of points in a cube and the approximate evaluation of integrals. U.S.S.R. Comput. Math. and Math. Phys. 7 (1967), 86–112. (DOI)
- Optimization Phase:
The optimizer balances exploitation (refining known good regions) and exploration (searching new regions).
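For intuition about the sampling phase: the asset uses Sobol sampling (cited above) to spread initial designs evenly. As a dependency-free stand-in, the sketch below uses a Halton sequence, another low-discrepancy sequence that illustrates the same idea of evenly covering parameter bounds (this is illustrative only; the backend's actual Sobol implementation differs):

```python
def van_der_corput(n, base=2):
    """n-th element of the base-b van der Corput low-discrepancy sequence in [0, 1)."""
    q, denom = 0.0, 1.0
    while n > 0:
        denom *= base
        n, rem = divmod(n, base)
        q += rem / denom
    return q

def halton_point(i, bounds, bases=(2, 3, 5, 7, 11)):
    """Scale the i-th Halton point into per-parameter [lower, upper] bounds."""
    return [lo + van_der_corput(i, b) * (hi - lo)
            for (lo, hi), b in zip(bounds, bases)]

# Two design parameters, e.g. size in [0, 1] and speed in [10, 50]
bounds = [(0.0, 1.0), (10.0, 50.0)]
samples = [halton_point(i, bounds) for i in range(1, 6)]
```

Unlike uniform random draws, successive low-discrepancy points fill gaps left by earlier ones, which is why a handful of sampling iterations already gives the surrogate model a rough picture of the whole design space.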
This asset uses the QuestionnaireToolkit to collect explicit subjective feedback. This feedback serves as a design objective in the HITL process.
- Only questionnaire question-item outputs are considered for BO objective updates (via objective-key/header matching).
- `additionalCsvItems` are written only to the questionnaire results CSV and are not forwarded to the BO manager/backend.
- `User ID`, `Condition ID`, and `Group ID` are not BO objectives. They are logged as context columns in `ObservationsPerEvaluation.csv`.
- Final-design selection uses the full context triad (`User ID`, `Condition ID`, `Group ID`) when filtering candidate observation rows.
MOBO can optimize for multiple, potentially conflicting objectives. Rather than a single optimum, it identifies the Pareto front, representing the best trade-offs.
A solution is Pareto optimal if no other solution improves one objective without worsening another. The diagram below illustrates this.
The x-axis shows the first objective (usability) and the y-axis the second (trust). As in the HITL diagram, both axes are objective values ($y$).
MOBO uses surrogate models (e.g., Gaussian processes) to approximate objectives, enabling efficient prediction. An acquisition function (e.g., Expected Hypervolume Improvement) selects the next points, trading off performance gains and exploration.
In short, the optimizer maximizes the acquisition function to choose each next design, aiming to improve the approximated Pareto front with as few user trials as possible.
MOBO is used in hyperparameter tuning, materials discovery, and engineering design where multiple objectives matter.
Set up the asset as follows:
- Clone the repository.
- Run `installation_python.bat` (Windows) or `install_python.sh` (macOS) to install Python and required libraries. Files are in Assets/StreamingAssets/BOData/Installation.
- Install Unity Hub.
- Create or log in to your (student) Unity account.
- Install Unity 2022.3.21f1 or higher. We recommend Unity 6.2 or newer.
- Add the project to Unity Hub by selecting the repository folder.
- Open the project and set the Python Settings.
Note: You may set the Python path manually if you already have a local Python installation. See Python Settings. Also, read Configuration to ensure settings are saved.
Before running your own scene, verify the following minimum setup:
- A `BOforUnityManager` object exists in the scene and has the tag `BOforUnityManager`.
- The same object contains these components:
  - `BoForUnityManager`
  - `PythonStarter`
  - `SocketNetwork`
  - `Optimizer`
  - `MainThreadDispatcher`
- In `BoForUnityManager`, required references are assigned:
  - `Output Text`
  - `Loading Obj`
  - `Welcome Panel`
  - `Optimizer State Panel`
- If `Iteration Advance Mode = NextButton`, `Next Button` is assigned and wired to `BoForUnityManager.ButtonNextIteration()`.
- If `Iteration Advance Mode = ExternalSignal`, your UI/game logic calls:

  ```csharp
  var bo = GameObject.FindWithTag("BOforUnityManager").GetComponent<BoForUnityManager>();
  bo.RequestNextIteration();
  ```

- Every objective key in `BoForUnityManager` has a matching data source (questionnaire item or manual script assignment).
- If you use QuestionnaireToolkit mapping, each question `Header Name` matches the objective key exactly.
- Parameter and objective keys are unique (no duplicates).
- Python settings are valid (`Manually Installed Python` path or automatic detection works).
If any item above is missing, the loop may start but stall before sending/receiving valid optimization data.
Use this path for a first successful run with the provided demo scene.
- Open `Assets/BOforUnity/BO-example-scene.unity`.
- Select `BOforUnityManager` in the hierarchy and verify the Integration Checklist.
- In the `BoForUnityManager` inspector:
  - keep `Iteration Advance Mode = NextButton`
  - keep `Warm Start = false`
  - keep `Seed = 3`
- Set `Optimizer Backend = BoTorch` and confirm there are at least two objectives (m >= 2) so `mobo.py` is used.
- Press Play.
- Click `Next` to start initialization.
- Wait for "The system has been started successfully!".
- Click `Next` to start an evaluation.
- Run the simulation flow and click `End Simulation`.
- Complete the questionnaire and click `Finish`.
- Repeat at least one more iteration.
Expected successful outcome:
- Parameter values in the scene change between iterations.
- `Assets/StreamingAssets/BOData/LogData/<USER_LOG_ID>/` is created.
- `ObservationsPerEvaluation.csv` and `ExecutionTimes.csv` are populated.
- For MOBO (`m >= 2`), `HypervolumePerEvaluation.csv` is written and Unity receives `coverage` updates.
If these outputs appear, your full Unity-Python loop is working.
This section walks through the demo workflow. Install the asset first as described in Installation.
Note: ObservationsPerEvaluation.csv must be empty (except for the header). Find it at Assets/StreamingAssets/BOData/LogData/<USER_LOG_ID>/. By default this equals `User ID`, but invalid path characters are normalized for folder safety. You can delete the folder to recreate a clean one.
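As an illustration of this folder-safe normalization (the asset's exact character rules are internal; the regex and function name below are assumptions for illustration only):

```python
import re

def folder_safe(user_id):
    """Replace characters that are invalid in folder names with underscores.
    Illustrative only -- the asset's exact normalization rule may differ."""
    return re.sub(r'[<>:"/\\|?*]', "_", user_id).strip() or "default"

print(folder_safe("user/07:a"))  # user_07_a
```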
- In Unity, open Assets/BOforUnity and double-click BO-example-scene.unity.
- Press the Play button (⏵).
- Click `Next`, wait for loading, then click `Next` again.
- The simulation appears. You will see up to two colored shapes to evaluate.
- When finished, click `End Simulation`. A questionnaire appears.
- Answer, then press `Finish`. The optimizer saves your input and updates parameters.
- Press `Next` to start a new iteration. Repeat from step 3 until all iterations finish. The system then indicates you can close the application.
Note: Results are in Assets/StreamingAssets/BOData/LogData/<USER_LOG_ID>/ (typically your `User ID`, normalized for folder-safe naming if needed).
Click the thumbnail for a short demo showing how to export the main-branch package and import it into a new Unity project. It also shows what to do after import if you have an up-to-date Python (currently, we recommend 3.13.5) on Windows. You can also open the video in the images folder.
Note: This video shows a previous version of this asset's user interface in Unity. The procedure is similar for the current version.
All configuration is done in Unity. Open Assets/BOforUnity/BO-example-scene.unity. Select the BOforUnityManager object in the hierarchy, then click Select at the top of the inspector. Adjust settings as needed.
Save the scene after changes. Re-select BOforUnityManager to confirm your edits. The BOforUnityManager prefab must be correct; it overrides previous settings (see the inspector top left).
Note: All configuration lives in this object. The options below follow the inspector from top to bottom. Note: If you add or remove parameters/objectives, back up and clear the current user log folder to regenerate CSV headers.
Parameters are automatically adjusted by the system during optimization. This section shows how to create, change, or remove parameters before runtime.
Click + at the bottom of the parameter list to add a prefilled entry, then edit it as described here.
Note: Ensure the new parameter is used by your simulation.
Note: If headers are out of sync, back up logs in Assets/StreamingAssets/BOData/LogData/<USER_LOG_ID>/ and then delete the folder to refresh headers.
Note: If you use the warm start option, ensure CSV headers match after adding parameters.
Adjustable options, top to bottom:
| Name | Description |
|---|---|
| Value | Value assigned by the optimizer in each sampling/optimization iteration. |
| Lower/Upper Bound | Bounds that restrict the parameter. |
| CABOP Group | Parameter group used by CABOP for group-wise cost modeling (default if empty). |
| CABOP Tolerance | Matching tolerance for reuse/swap decisions in CABOP (>= 0). |
| CABOP Prefabricated Values | Optional discrete values for CABOP snapping (nearest value is used). |
Select the parameter by clicking the = icon in its top-left corner (it turns blue). Click - at the bottom to remove it.
Note: Ensure the removed parameter is not used in your simulation.
Note: If headers are out of sync, back up and remove the log folder Assets/StreamingAssets/BOData/LogData/<USER_LOG_ID>/.
Objectives are sent to the optimizer as feedback in each iteration. You can create, change, or remove objectives.
Click + at the bottom of the objective list to add a prefilled entry, then edit it as described here.
Note: Each objective must receive a value before optimization. In the demo, create a new questionnaire item or map an existing one to the objective (see below).
Note: If headers are out of sync, back up logs in Assets/StreamingAssets/BOData/LogData/<USER_LOG_ID>/ and then delete the folder.
Note: For warm start, CSV headers must match after adding objectives.
In BO-example-scene hierarchy, go to QTQuestionnaireManager/QuestionPage-1. In Question Item Creation, set the inputs (the Header Name must match the objective name), then click Create Item. Edit as needed.
In QTQuestionnaireManager/QuestionPage-1/Scroll View/Viewpoint/Content/, select the question and set its Header Name to the objective name.
Options, top to bottom:
Select the objective by clicking the = icon in its top-left corner (turns blue). Click - at the bottom to remove it.
Note: Reverse the steps you performed when adding the objective.
You can read the current parameter values each iteration by indexing into the parameter list on the BOforUnityManager instance. Here is an example snippet:
```csharp
// assuming you have a reference to the manager
BoForUnityManager bo = GameObject.Find("BOforUnityManager").GetComponent<BoForUnityManager>();

// during an iteration, read the i-th parameter
var i = 0;
float value = bo.parameters[i].value.Value;

// or loop through all parameters
for (int j = 0; j < bo.parameters.Count; j++) {
    var parameter = bo.parameters[j];
    Debug.Log($"Parameter {j} ({parameter.key}) = {parameter.value.Value}");
}
```

This gives you programmatic access to the parameter settings that the optimizer proposes. The index follows the order of the parameter list visible in the Unity inspector view.
By default, QuestionnaireToolkit updates objective values in each iteration. If you want to override or set them manually, you can write into the objective list on the BOforUnityManager instance. Example:
```csharp
BoForUnityManager bo = GameObject.Find("BOforUnityManager").GetComponent<BoForUnityManager>();
var i = 0;

// during an iteration, assign the i-th objective
// if the objective has a single sub-measure:
var myScore = 1.5f;
bo.objectives[i].value.values[0] = myScore;

// if the objective has several sub-measures, assign each one;
// their average is sent to the optimizer as a single value for this objective
var myScoreA = 7.1f;
var myScoreB = 10f;
var myScoreC = 3.24f;

// if the number of sub-measures was set in the inspector, write by index:
bo.objectives[i].value.values[0] = myScoreA;
bo.objectives[i].value.values[1] = myScoreB;
bo.objectives[i].value.values[2] = myScoreC;

// otherwise, declare the count and add the values instead:
bo.objectives[i].value.numberOfSubMeasures = 3;
bo.objectives[i].value.values.Add(myScoreA);
bo.objectives[i].value.values.Add(myScoreB);
bo.objectives[i].value.values.Add(myScoreC);

// or assign multiple objectives in a loop
for (int j = 0; j < bo.objectives.Count; j++) {
    bo.objectives[j].value.values[0] = myScore + j;
    Debug.Log($"Objective {j} ({bo.objectives[j].value.values[0]}) = {myScore + j}");
}
```

The index follows the order of the objective list visible in the Unity inspector view. Make sure you assign objective values before the optimizer proceeds so that the backend receives the feedback correctly.
Default:
If you leave Manually Installed Python unchecked, the system will automatically search for a valid Python path in the OS and install the requirements via pip.
If only an older Python is found, the runtime now attempts to install the preferred target runtime (3.13.x) first (platform installer prompt may appear), then continues setup.
On macOS the runtime auto-install path uses the bundled Python .pkg installer payload. The install_python.sh script remains available for manual terminal setup.
The runtime now validates Python versions (3.9+ supported) and prefers 3.13.x automatically when it is installed.
If the target-runtime installation is cancelled or fails, startup is aborted with a clear error instead of silently continuing on the old interpreter.
You can override this behavior by checking Manually Installed Python and following the steps below:
- Open a terminal and search for Python installations:
  - Windows: `where python`
  - Linux/macOS: `which python3`

  Copy the path to a compatible Python (3.9+, ideally 3.13.x).
- In BOforUnityManager → Python Settings, check the `Manually Installed Python` box.
- Paste the path in the `Path of Python Executable` field.
Set User ID, Condition ID, and Group ID in the inspector section shown in the image.
If your study does not use any of these IDs, leave the field at -1. The value will still be logged, but you can ignore it in analysis.
These three values are always logged as context columns in ObservationsPerEvaluation.csv and are used together for final-design row filtering.
`BoForUnityManager` now supports two backends:

- BoTorch: existing behavior (`bo.py` for single-objective, `mobo.py` for multi-objective).
- CABOP: cost-aware optimization backend with selectable objective mode:
  - `SingleObjective` -> `cabop_bo.py` (requires exactly 1 objective).
  - `MultiObjectiveScalarized` -> `cabop_mobo.py` (requires at least 2 objectives; objectives are scalarized to one minimized score).
CABOP inspector settings:

- CABOP Use Cost Aware Acquisition: enables EI-per-cost behavior.
- CABOP Update Rule: `Actual` (recommended), `Intended`, or `Both`.
- CABOP Enable Cost Budget + CABOP Max Cumulative Cost: optional stopping criterion in addition to iteration counts.
- CABOP Group Costs: group-level `unchanged`/`swapped`/`acquired` costs for both model cost (`cost`) and realized cost (`actual_cost`).

API equivalents are available directly on BoForUnityManager fields:

- `optimizerBackend`
- `cabopObjectiveMode`
- `cabopUseCostAwareAcquisition`
- `cabopUpdateRule`
- `cabopEnableCostBudget`
- `cabopMaxCumulativeCost`
- `cabopGroupCosts`
- Parameter-level CABOP fields in `parameters[i].value`: `cabopGroup`, `cabopTolerance`, `cabopPrefabricatedValues`
- Objective-level CABOP field in `objectives[i].value`: `cabopWeight`
What “prefabricated values / prefab snapping” means here: this is not a Unity prefab asset reference. In CABOP, “prefab” means a predefined list of numeric parameter values that already exist (for example, already manufactured/fabricated settings). If a list is provided for a parameter, CABOP snaps proposals to the nearest listed value before sending parameters to Unity.
In BoForUnityManager (Inspector), you can enable Show Prior Rating Hint.
Behavior:
- On slider questions (`QTSlider`) and Likert/linear scale questions (`QTLinearScale`), the questionnaire shows a subtle marker indicating the participant's previous rating for that same question.
- The question is still reset to unanswered for the new iteration. The hint is visual-only and does not submit an answer.
- If no previous rating exists yet, no hint is shown.
Bias control:
- Use Hint Opacity to keep the marker subtle (recommended range: low opacity).
- This helps users calibrate relative to their last response while minimizing anchoring pressure.
Technical note:
- The hint is keyed per questionnaire and question identity (slider/Likert) and persists across BO iterations during the same app run.
Here, the current setup of design parameters (d) and design objectives (m) is shown as defined in the parameter and objectives list in the inspector. This serves as an overview to decide the optimization budget below.
Backend selection:
- `Optimizer Backend = BoTorch`
  - `m = 1` uses `bo.py`.
  - `m >= 2` uses `mobo.py`.
- `Optimizer Backend = CABOP`
  - `CABOP Objective Mode = SingleObjective` uses `cabop_bo.py`.
  - `CABOP Objective Mode = MultiObjectiveScalarized` uses `cabop_mobo.py`.
  - CABOP internally minimizes a scalar objective. For multi-objective mode, scalarization uses objective bounds, direction (`Smaller is Better`), and `CABOP Weight`.
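CABOP's exact scalarization lives in `cabop_mobo.py`; purely as an illustration of the idea described above (normalize by bounds, flip maximize objectives so everything is minimized, weight, and combine), a sketch could look like this. The function name and the averaging are assumptions, not the asset's actual formula:

```python
def scalarize(values, bounds, smaller_is_better, weights):
    """Illustrative weighted scalarization to one minimized score.
    values: raw objective values; bounds: (lo, hi) per objective;
    smaller_is_better: direction flags; weights: CABOP Weight per objective.
    The asset's actual formula in cabop_mobo.py may differ."""
    score = 0.0
    for v, (lo, hi), minimize, w in zip(values, bounds, smaller_is_better, weights):
        t = (v - lo) / (hi - lo)   # normalize to [0, 1]
        if not minimize:           # flip maximize objectives so lower is better
            t = 1.0 - t
        score += w * t
    return score / sum(weights)

# TaskTime 30 in [0, 120] (minimize), Usability 70 in [0, 100] (maximize)
print(scalarize([30, 70], [(0, 120), (0, 100)], [True, False], [1.0, 1.0]))  # 0.275
```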
These options are in the lower part of this image.
- Checking Warm Start skips the initial rounds. Optimization starts from prior results supplied as CSVs, formatted like the examples in Assets/StreamingAssets/BOData/BayesianOptimization/InitData.
- Copying a prior ObservationsPerEvaluation.csv into the new study’s log folder is optional and only needed if you want one continuous observation log across runs.
- Leaving it unchecked uses the default start. After the specified number of initial iterations (minimum 2), optimization begins using the collected values.
- Warm Start Objective Format controls how objective values in the warm-start objective CSV are interpreted:
  - `auto` (default): detect format automatically.
  - `raw`: values are in original objective bounds (`Lower/Upper Bound`), then converted internally.
  - `normalized_max`: values are already normalized to `[-1, 1]` in maximize-space.
  - `normalized_native`: values are normalized to `[-1, 1]` in native objective direction (entries with `Smaller is Better` are flipped internally).
Note: CSV formats for warm start must match the examples. Headers must match the current number of parameters and objectives. Using logs from a prior study with the same settings satisfies this.
- Both files must be in Assets/StreamingAssets/BOData/BayesianOptimization/InitData and referenced by file name in the inspector.
- Parameter CSV headers must exactly match parameter keys; objective CSV headers must exactly match objective keys.
- Parameter and objective CSVs must have the same number of rows and at least one row.
- All values must be numeric and finite (no `NaN`/`Inf`).
- For best compatibility, provide parameter values in original parameter bounds (`Lower/Upper Bound`).
- Objective values must follow the selected Warm Start Objective Format.
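The checklist above can be approximated in code. The sketch below (not part of the asset; `validate_warm_start` is a hypothetical helper) checks a pair of warm-start CSVs for matching headers, equal non-empty row counts, and finite numeric values, using `;` as the delimiter:

```python
import csv
import math

def validate_warm_start(param_csv, obj_csv, param_keys, objective_keys):
    """Validate warm-start CSVs against the checklist rules (illustrative)."""
    def load(path, keys):
        with open(path, newline="") as f:
            rows = list(csv.reader(f, delimiter=";"))
        assert rows and rows[0] == keys, f"{path}: header must be {keys}"
        data = [[float(v) for v in r] for r in rows[1:]]
        assert data, f"{path}: needs at least one data row"
        assert all(math.isfinite(v) for r in data for v in r), f"{path}: NaN/Inf found"
        return data
    params, objs = load(param_csv, param_keys), load(obj_csv, objective_keys)
    assert len(params) == len(objs), "parameter and objective CSVs must have equal row counts"
    return len(params)  # number of warm-start observations
```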
The examples below use `;` as delimiter and require headers that match your exact parameter/objective keys.

raw (original bounds):

```
ButtonSize;Contrast
0.35;0.70
0.55;0.40
```

```
Usability;TaskTime;ErrorCount
72;38;4
68;31;3
```

normalized_max (already in maximize-space `[-1, 1]`):

```
ButtonSize;Contrast
0.35;0.70
0.55;0.40
```

```
Usability;TaskTime;ErrorCount
0.44;0.36;0.60
0.36;0.48;0.70
```

normalized_native (native direction `[-1, 1]`, Python flips minimize objectives internally):

```
ButtonSize;Contrast
0.35;0.70
0.55;0.40
```

```
Usability;TaskTime;ErrorCount
0.44;-0.36;-0.60
0.36;-0.48;-0.70
```

- Internally, the optimizer always works in maximize-space.
- If Smaller is Better is enabled for an objective, the backend flips that objective internally.
- This flip is applied consistently in optimization, Pareto computation, and logging conversions.
Assume these three Unity objectives:
| Objective Key | Bounds | Smaller is Better | Example Raw Value | Internal Maximize-Space Value |
|---|---|---|---|---|
| `TaskTime` | `[0, 120]` | `true` | `30` | `+0.50` |
| `ErrorCount` | `[0, 20]` | `true` | `4` | `+0.60` |
| `Usability` | `[0, 100]` | `false` | `70` | `+0.40` |
How this is handled:
- Values are normalized to `[-1, 1]`.
- Objectives with `Smaller is Better = true` are multiplied by `-1`.
- Pareto checks (`is_non_dominated`) and hypervolume are computed on this consistent maximize-space representation.
- `ObservationsPerEvaluation.csv` stores denormalized values in your original objective units.
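The table values above can be reproduced directly from the normalize-then-flip rule (a sketch of the documented conversion, not the asset's actual code):

```python
def to_maximize_space(value, lower, upper, smaller_is_better):
    """Normalize a raw objective value to [-1, 1] and flip it when
    'Smaller is Better' is enabled, matching the example table."""
    t = 2.0 * (value - lower) / (upper - lower) - 1.0
    return -t if smaller_is_better else t

print(to_maximize_space(30, 0, 120, True))   # 0.5  (TaskTime)
print(to_maximize_space(4, 0, 20, True))     # 0.6  (ErrorCount)
print(to_maximize_space(70, 0, 100, False))  # 0.4  (Usability)
```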
- Disabled by default.
- Enable Perfect Rating to terminate when a perfect rating is achieved.
- If Perfect Rating In Initial Rounds is checked (visible only when perfect rating is active), a perfect rating can also terminate during sampling.
- Iteration Advance Mode controls how the next evaluation iteration starts:
  - `NextButton`: legacy behavior (user presses the assigned Next button).
  - `ExternalSignal`: no built-in button dependency; trigger progression from your own logic.
  - `Automatic`: starts the next iteration automatically after a configurable delay.
- Automatic Advance Delay (s) is used only in `Automatic` mode.
- Reload Scene On Advance controls whether the manager reloads the active scene when progressing.
  - Keep this enabled for the default sample-loop behavior.
  - Disable it if your app handles iteration transitions without scene reloads.
For ExternalSignal, call this from your own UI/event logic:

```csharp
var bo = GameObject.FindWithTag("BOforUnityManager").GetComponent<BoForUnityManager>();
bo.RequestNextIteration();
```

If you use the bundled QTQuestionnaireManager, this request is queued automatically after questionnaire completion when Iteration Advance Mode is set to ExternalSignal.
If Enable Final Design Round is active, the system adds one extra participant-facing round after BO completes.
What happens:
- The Python backend finishes normal BO iterations and sends `optimization_finished`.
- Unity reads the latest `ObservationsPerEvaluation.csv` for the current context (`User ID`, `Condition ID`, `Group ID`).
- Unity deterministically selects one final design and applies its parameter values.
- The user runs one final round (`totalIterations + 1`), but this round does not send objectives back to Python and does not continue optimization.
- Unity appends this last evaluation to `ObservationsPerEvaluation.csv` with `Phase=finaldesign` and the marker column `IsPareto`/`IsBest` set to `NULL`.
Selection logic (deterministic):
- Normalize each objective via min-max over all CSV rows, after objective direction handling (`Smaller is Better` is internally flipped).
- Primary criterion: smallest Euclidean distance to utopia (`[1, 1, ..., 1]`) in normalized objective space.
- Tie-break 1: largest maximin (maximize the worst normalized objective).
- Tie-break 2: least-aggressive parameter profile (smallest L2 distance to parameter baseline in normalized parameter space; baseline uses parameter-range midpoints).
- Tie-break 3: earliest iteration index.
Candidate rows:
- MOBO: rows flagged by `IsPareto` are preferred.
- BO: rows flagged by `IsBest` are preferred.
- If no preferred rows exist, all rows are considered.
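The primary criterion and the first and last tie-breaks can be sketched as follows. This is an illustration of the documented rules, not the asset's implementation: it assumes objectives are already min-max normalized in maximize-space and omits the epsilon tolerances and the parameter-aggressiveness tie-break:

```python
import math

def select_final_design(rows):
    """rows: list of (iteration_index, normalized_objectives), objectives in
    maximize-space so utopia is [1, 1, ..., 1]. Picks the smallest distance
    to utopia, then the largest worst-case objective (maximin), then the
    earliest iteration index (illustrative sketch)."""
    def key(row):
        idx, objs = row
        dist = math.sqrt(sum((1.0 - v) ** 2 for v in objs))
        maximin = min(objs)
        return (dist, -maximin, idx)  # min() prefers small dist, large maximin, small idx
    return min(rows, key=key)

rows = [(0, [0.9, 0.4]), (1, [0.7, 0.7]), (2, [0.4, 0.9])]
print(select_final_design(rows))  # (1, [0.7, 0.7]) -- closest to utopia [1, 1]
```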
Inspector controls:
- Enable Final Design Round: activates the feature.
- Utopia Distance Epsilon, Maximin Epsilon, Aggression Epsilon: tolerances for deterministic tie handling.
Integration note:
- If you use `QTQuestionnaireManager`, finishing the final round still triggers its normal completion flow.
- `BoForUnityManager` detects that this is the final non-BO round and ends the loop without sending objectives to Python.
| Name | Default Value | Description |
|---|---|---|
| Sampling Iterations | 2d+1 | Number of sampling iterations before optimization; the recommended value is 2*Number of Design Parameters + 1. You can overwrite this default by checking Set Sampling Iterations Manually. |
| Optimization Iterations | — | Number of iterations used to refine results; here, the actual optimization takes place. |
| Total Iterations | — | Sum of Sampling Iterations and Optimization Iterations. This is how long the HITL process will run in total. |
The hyperparameters affect how efficiently the optimizer searches the space. The adjustable hyperparameters are shown in this image.
| Name | Default Value | Description | More Information |
|---|---|---|---|
| Batch Size | 1 | Number of evaluations performed in parallel. The current HITL implementation supports only 1; larger values are forced to 1 at runtime. | Batch Size Explanation |
| Num Restarts | 10 | Optimization restarts to escape local optima. | |
| Raw Samples | 1024 | Random samples to initialize acquisition optimization. | |
| MC Samples | 512 | Monte Carlo samples to approximate the acquisition function. | MC Samples Explanation |
| Seed | 3 | Random seed for reproducibility. | Seed Explanation |
Note: Recommended: `Sampling Iterations >= 2` (default is `2d + 1`). Warm start sets sampling iterations to `0`.
All result files are written to:
- Assets/StreamingAssets/BOData/LogData/<USER_LOG_ID>/
- Assets/StreamingAssets/BOData/LogData/CABOP/single/<USER_LOG_ID>/ (CABOP single-objective runs)
- Assets/StreamingAssets/BOData/LogData/CABOP/multi/<USER_LOG_ID>/ (CABOP multi-objective-scalarized runs)
- Legacy runs may exist under Assets/StreamingAssets/BOData/BayesianOptimization/LogData/<USER_LOG_ID>/; final-design selection checks both locations.
Common files:
- `ObservationsPerEvaluation.csv`: denormalized parameter/objective observations per evaluation.
- `ExecutionTimes.csv`: optimization-step runtimes.
MOBO (`mobo.py`, `m >= 2`):

- `ObservationsPerEvaluation.csv` uses `IsPareto`.
- `HypervolumePerEvaluation.csv` stores hypervolume per iteration.
- Unity `coverage` corresponds to the current hypervolume.
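For intuition about the hypervolume metric: with two maximize-space objectives, it is the area dominated by the Pareto front above a reference point. The backend relies on BoTorch for this; the sweep below and the `(0, 0)` reference point are illustrative assumptions:

```python
def hypervolume_2d(front, ref):
    """Area dominated by a 2-D maximize-space Pareto front above ref.
    front: non-dominated (f1, f2) points; ref: reference point below the front."""
    pts = sorted(front)              # ascending in f1; non-dominance => descending in f2
    hv, prev_x = 0.0, ref[0]
    for x, y in pts:
        hv += (x - prev_x) * (y - ref[1])  # add each rectangular slab
        prev_x = x
    return hv

front = [(0.2, 0.9), (0.5, 0.5), (0.9, 0.2)]
hv = hypervolume_2d(front, ref=(0.0, 0.0))  # ~0.41
```

As the optimizer finds better trade-offs, the front pushes outward and this area grows, which is why a rising hypervolume trace signals progress.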
Single-objective BO (`bo.py`, `m = 1`):

- `ObservationsPerEvaluation.csv` uses `IsBest`.
- `BestObjectivePerEvaluation.csv` stores the best-so-far objective per iteration.
- `HypervolumePerEvaluation.csv` is also written for backward compatibility (mirrors the best-objective trace).
- Unity `coverage` corresponds to the current best normalized objective.
CABOP (`cabop_bo.py` / `cabop_mobo.py`):

- Logs are separated under `LogData/CABOP/single` and `LogData/CABOP/multi`.
- `ObservationsPerEvaluation.csv` is reused (`IsBest` for single mode, `IsPareto` marker column for multi mode).
- `ExecutionTimes.csv` is reused.
- `CABOPMetricsPerEvaluation.csv` stores the scalarized objective trace and realized/cumulative cost.
- Compatibility metric file:
  - single mode: `BestObjectivePerEvaluation.csv`
  - multi mode: `HypervolumePerEvaluation.csv` (stores the CABOP coverage trace for compatibility)
- Unity `coverage` is `1 - best_scalarized_objective` (higher is better).
During sampling, Unity tempCoverage is a progress value in [0,1].
| Symptom | Likely Cause | Fix |
|---|---|---|
| "The system could not be started..." in Unity | Python path/setup failed or dependencies are missing | Re-check Python Settings, rerun installer scripts in Assets/StreamingAssets/BOData/Installation, then restart Unity. |
| Loop stalls after questionnaire `Finish` | Objective values were not assigned, or objective keys do not match questionnaire headers | Verify each objective key is mapped and receives a value each iteration. |
| Loop does not progress in `ExternalSignal` mode | `RequestNextIteration()` is not called from your custom flow | Add the explicit call after your evaluation step ends. |
| Loop does not progress in `NextButton` mode | `Next Button` not assigned or not wired to `ButtonNextIteration()` | Assign the button reference and Unity OnClick event to `BoForUnityManager.ButtonNextIteration()`. |
| Warm start fails on startup | Missing CSV files, wrong headers, non-numeric values, or wrong format setting | Validate files against Warm-Start CSV Checklist (Required) and Warm-Start CSV Examples. |
| `ObservationsPerEvaluation.csv` columns mismatch error | Existing log file schema no longer matches current parameters/objectives | Back up and remove Assets/StreamingAssets/BOData/LogData/<USER_LOG_ID>/, then rerun to regenerate headers. |
| No parameter changes between iterations | Simulation does not apply incoming parameter values from `bo.parameters` | Confirm your scene reads and applies updated parameter values each iteration. |
| `coverage`/Pareto behavior seems inconsistent with minimize objectives | Misunderstanding of internal maximize-space conversion | See Objective Direction Semantics and Objective Direction Example (2 Minimize, 1 Maximize). |
This section explains the architecture to help you extend the asset. The diagram below summarizes the flow.
At the top is BoForUnityManagerEditor.cs, which edits the BoForUnityManager.prefab (what can be set and how it is described). The prefab’s settings are configured in the Unity Inspector as explained in Configuration.
BoForUnityManager.cs manages the process and first starts the Python server via PythonStarter.cs.
Once the server is running, BoForUnityManager.cs communicates with the selected backend script (bo.py/mobo.py or cabop_bo.py/cabop_mobo.py) using SocketNetwork.cs.
After receiving data from SocketNetwork.cs, it passes it to Optimizer.cs, which updates simulation parameters.
BoForUnityManager.cs also tracks the current iteration and orchestrates the loop.
To reuse this tool in another project, export it as a Unity package:
- In the Unity hierarchy, ensure you are in Assets.
- `Assets` → `Export Package...`
- Click None to deselect all files.
- Select:
  - BOforUnity
  - QuestionnaireToolkit
  - StreamingAssets
- Click Export... and save the package.
To import: Assets → Import Package → Custom Package..., select your package, keep all selected, and press Import.
Note: Avoid spaces in the project path; otherwise, the Python script may not resolve paths correctly.
Note: On first use of TextMeshPro, install TextMeshPro-Essentials when prompted. Refresh the scene if needed.
If you use this software, please cite:
```bibtex
@software{jansen_bayesian_optimization_unity_2025,
  author    = {Pascal Jansen and Mark Colley},
  title     = {Bayesian-Optimization-for-Unity},
  year      = {2025},
  month     = aug,
  publisher = {Zenodo},
  version   = {v1.1.0},
  doi       = {10.5281/zenodo.16849778},
  url       = {https://doi.org/10.5281/zenodo.16849778}
}
```

This project is under the MIT License, available in the repository folder containing this README.
README originally written by Sebastian Lommen as part of his student software engineering project.











