Creating a Visual Behaviour Tree Editor

By Tirso Meijer

Table of Contents

  1. Introduction
  2. Explaining the System
    2.1. Basic Functionality
    2.2. Behind the Scenes
    2.3. Behaviour Tree or State Machine?
  3. Getting Started
    3.1. Deciding on a UI System
    3.2. IMGUI Rundown
    3.3. Initial Editor Designs
    3.4. Preparing to Build the Editor
  4. Building the Editor
    4.1. Dealing with IMGUI
    4.2. Creating the Interface
    4.3. Serializing Behaviour Data
    4.4. The Result
  5. Testing the Editor
    5.1. Preparing the Test
    5.2. Test Limitations
    5.3. Test Results
  6. Conclusion & Future Plans
  7. Sources

1. Introduction

During my first semester at the HvA I worked on the game Spacenite as part of the project Fasten Your Seatbelts. Spacenite is an arcade-style twin-stick shooter where the player must fight off waves of alien ships while avoiding their fire. The game was well received, which was in part thanks to its relatively large amount of content. We had decided early on in the project to focus on content generation systems so that, down the line, we could create lots of interesting enemies, upgrades and weapons for the player to encounter.

One of the systems we needed to make was a way to create simple cycling enemy behaviours quickly. Luckily, I had already encountered something like that in a previous project. In the past, I had worked on private servers for the popular Flash-based bullet hell MMORPG Realm of the Mad God. The source code that was commonly used for those servers contained a Behaviour Tree-esque system that dictated how enemies would behave. I had used the system myself but at the time didn’t really understand the backend, so I spent quite some time researching it and porting it over to Java for use in Spacenite. Through this I became intricately familiar with the system and how it worked.

For project Game Development the following year I once again did the same, this time porting the system to Unity for a different game. The way Unity works with MonoBehaviours meant it needed some changes, but overall the system stayed mostly the same. During my technical assessment on the project I presented it as part of my work and one of the teachers present asked an unexpected question: “Why is it so ugly?”. I was a bit taken aback by this, but the question did have a point.

The way the system was structured meant that to actually create behaviours, developers and designers needed to write long lists of actions with parameters for entities to perform. This meant a moderately complex behaviour could easily span over 50 lines of pseudocode. This limitation was born out of necessity, because in previous projects I had only made use of script-based editors. With a visual editing environment like Unity, however, there was no reason for this limitation to remain. The prompt from the teacher had gotten me to consider: “How can I convert the behaviour tree system I am already familiar with into a powerful and user-friendly visual scripting tool?” Game Lab seemed like the perfect opportunity for me to investigate this question further.

An example of boss behaviour in the behaviour tree system. While it is not completely unintelligible, the code can be difficult to read and is unapproachable to designers without programming experience. Especially missing or extra parentheses and commas can be hard to catch.

2. Explaining the System

Before getting into anything else I first want to explain the base functionality of the behaviour system. As mentioned I had already ported a script-based version of the system over to Unity as part of Project Game Development. That ported system was my starting point for this project. In order to understand the process of creating a visual editor around the system, I will first need to explain how the system itself works.

2.1. Basic Functionality

To understand how the system works we’ll look at a simple behaviour from PGD. Below is a behaviour for an enemy that will jump at the player every 2 seconds. When the enemy’s health is reduced to 33% or lower, it will start jumping faster.

The behaviour above can be split up into three basic elements: states, actions and transitions. I will explain each of these separately.

States

States contain actions, transitions and other states. When a state is executed, its actions, transitions and active substate are executed. A state can only have one active substate at a given time. When a state is executed for the first time, its active substate is the first state among its child elements.

To illustrate, our example behaviour has three states total. The main state (each behaviour has a single state as its root referred to as the main state) and its two substates (“state1” and “enraged“). In this case “state1” is the first child state of our main state, making it the active substate. As such, when the main state is executed for the first time, it will execute “state1” as its active substate, and “state1” will then execute its own child elements. The “enraged” state is inactive and won’t be executed until it is made to be the active substate instead.

Actions

Actions are pieces of code that the behaviour is intended to perform. When an action is executed, it performs its execution code using the parameters that it’s been given.

Our example behaviour contains two instances of the same action, namely the “JumpAtPlayer” action. One is located in “state1” while the other is in the “enraged” state.

The action has three parameters. A jump force, a jump angle, and finally a cooldown. The jump force and angle are consistent between the two instances, but the cooldown in the “enraged” state is shorter, meaning it will perform the action more frequently.

Transitions

Finally, transitions are a special kind of action which run a given evaluation when they are executed. If that evaluation equates to true, the behaviour will transition from the current state to another. In other words, the currently active substate becomes inactive, and a different state is made active.

In our behaviour, we have a single transition located in “state1“. Specifically, this is a “HealthTransition”. This transition takes two parameters: firstly the target state, which all transitions must have, and secondly a health threshold. When executed, this transition will divide the entity’s current health by its total health and compare the result against the given health threshold. If the entity’s health percentage is lower than the threshold, the transition evaluates to true and the behaviour transitions to the target state.

Putting it all together

By now you should have a decent understanding of how our example behaviour works. When the behaviour is first entered, the main state is executed, which executes its first substate (in this case “state1“). The substate will then execute its own children. First it will check whether the enemy’s health has dropped below 33% and, if so, switch to the “enraged” state. It will also jump at the player every 2 seconds. Once it enters the “enraged” state it will begin jumping at the player every second instead, which it will continue to do until it is destroyed.
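To make this concrete, here is a rough sketch of how that example behaviour might be written in the original script-based system. The constructor shapes follow the ones shown later in this article, but the state names and the JumpAtPlayer parameter values (force, angle, cooldown) are placeholders of my own.

// Hypothetical reconstruction of the example behaviour; parameter values are illustrative only.
State mainState = new State("main",
    new State("state1",
        new HealthTransition("enraged", 0.33f),   // switch to "enraged" below 33% health
        new JumpAtPlayer(10f, 45f, 2f)            // jump force, jump angle, 2 second cooldown
    ),
    new State("enraged",
        new JumpAtPlayer(10f, 45f, 1f)            // same jump, but with a 1 second cooldown
    )
);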

2.2. Behind the Scenes

It should hopefully be clear now how behaviours are set up, but I have yet to explain how they work behind the scenes. I won’t go too far into detail, as there will be plenty of that further down, but I’ll give a rough explanation.

At its core, this system makes heavy use of the command programming pattern. States, actions and transitions are encapsulated as objects with parameters passed into their constructor and stored as variables of that object’s state.

As part of this, states, actions and transitions all inherit from an interface called IStateChildren, which defines an Execute function that is called on the main state of the behaviour every frame. For states, the Execute function simply invokes Execute on its own IStateChildren (excluding any inactive substates). In the case of actions, the Execute function performs whatever functionality is specific to that action. Similarly, transitions handle their evaluation when Execute is invoked and, if the evaluation equates to true, perform their transition. In this way, Execute calls propagate downwards through the active branch of the tree.

In order for states to contain any number of substates, actions and transitions, their constructor is set up with an array of IStateChildren using the params keyword.
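As a minimal sketch of that structure (names follow the article, everything else is simplified; how transitions swap the active substate is left out):

using System.Linq;

public interface IStateChildren
{
    void Execute();
}

public class State : IStateChildren
{
    private readonly string name;
    private readonly IStateChildren[] children;
    private State activeSubstate;

    // The params keyword lets a state take any number of substates, actions and transitions.
    public State(string name, params IStateChildren[] children)
    {
        this.name = name;
        this.children = children;
    }

    public void Execute()
    {
        // On the first execution, the first substate among the children becomes active.
        if (activeSubstate == null)
            activeSubstate = children.OfType<State>().FirstOrDefault();

        foreach (IStateChildren child in children)
        {
            // Inactive substates are skipped; everything else propagates Execute downwards.
            if (child is State substate && substate != activeSubstate)
                continue;
            child.Execute();
        }
    }
}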

Users of the system can extend the Action and Transition classes to easily create their own custom actions and transitions. In the case of actions they simply need to implement the Execute method, while for transitions they implement the CheckCondition method.

DebugLogAction implements Action to log a given value to the console.
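The caption above refers to a screenshot; as a stand-in, here is a hedged sketch of what such custom executables could look like. The base class members used here (an overridable Execute, a CheckCondition hook, and an Owner reference with health values) are assumptions for illustration, not the system’s actual API.

using UnityEngine;

// A custom action only needs to implement Execute.
public class DebugLogAction : Action
{
    private readonly string message;

    public DebugLogAction(string message)
    {
        this.message = message;
    }

    public override void Execute()
    {
        Debug.Log(message);
    }
}

// A custom transition only needs to implement CheckCondition; the base class
// performs the actual state switch when the condition evaluates to true.
public class HealthTransition : Transition
{
    private readonly float threshold;

    public HealthTransition(string targetState, float threshold) : base(targetState)
    {
        this.threshold = threshold;
    }

    protected override bool CheckCondition()
    {
        // Owner, CurrentHealth and MaxHealth are placeholder names for however
        // the transition reaches the entity it is attached to.
        return Owner.CurrentHealth / Owner.MaxHealth < threshold;
    }
}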

2.3. Behaviour Tree or State Machine?

When I initially described the system in the introduction, I described it as behaviour tree-esque. The reason for this is that the system doesn’t entirely function as one would expect of a traditional behaviour tree.

In a traditional behaviour tree system such as Behaviour Designer and Node Canvas, you generally don’t interface with states. Rather, the tree is built up of conditional and action based nodes. Every cycle, the tree is evaluated in its entirety, moving from top to bottom and then left to right. If a condition evaluates to true, that branch of the tree is entered rather than any of the other branches, however on the next cycle that condition may evaluate to false and it will evaluate the alternative branches instead. You can see this functionality in the linked behaviour tree systems.

In our system of states and transitions, if a conditional argument of a given transition is evaluated to true and the transition is performed, then the behaviour will switch to the target state of that transition. It doesn’t matter whether the condition evaluates to false on subsequent cycles, because the branch has already been swapped. For example, with our example behaviour from earlier, if the enemy were to somehow heal back to above 33% health, they would not revert back to the slower jumping behaviour. If we wanted to add that functionality, we would need to explicitly add an extra transition to our “enraged” state for it to revert back to “state1“. Essentially, once a condition has been reached, a behaviour may become “locked” into a given state, unless that state has its own transition logic.

“state1” can transition to “enraged” but not the other way around.

Given the fact that this system uses states in order to organize performed actions, one might be tempted to call it a state machine rather than a behaviour tree. Specifically, this system bears some resemblance to a hierarchical state machine. This is a type of state machine where states contain different substates that are performed based on conditional arguments. The higher-level states can then contain functionality shared between substates, reducing duplicate code. This sounds a little bit like our system, but it’s not exactly the same. In this Unity forums thread, one user describes the difference between finite state machines (FSMs), hierarchical finite state machines (HFSMs) and behaviour trees. Specifically, they mention that for HFSMs “… nodes in each hierarchy are not modular, they are still coupled tightly to their hierarchy“. But that doesn’t quite fit our system. The contents of our states are composed of actions and transitions as building blocks, and those actions and transitions have their own parameters that can modify their behaviour, so the whole system is modular and uncoupled.

So, what is the system then? Calling it an HFSM would be reductive in my opinion, as that would imply that the system lacks modularity. On the other hand, the state-transition model is quite different from how the average behaviour tree is organized, so perhaps something like “State-based Behaviour Tree” would be a more apt name. After all, while you can achieve the same things with both systems, the way you would do so would be quite different. Particularly for behaviours with large numbers of highly conditional actions, the standardized behaviour tree model would be more suitable, as the state model would need many different substates and transitions to handle all those conditions. On the other hand, for very transitional behaviours like phase-based enemy patterns, the state system wins out in terms of suitability.

Keeping the above in mind, I’ve decided that for the sake of simplicity I will refer to the system as a behaviour tree for the remainder of this article, as I feel that it is the most accurate term for it without getting overly specific. While the system doesn’t work exactly as you’d expect a traditional behaviour tree model to do, it primarily differs in the way that a user may organize a behaviour rather than what the user can create with it, so I think this is a fair decision.

3. Getting Started

Now that I’ve explained the initial system I started with, I can get into the development process on the editor for said system.

I’d like to note that while the allotted time period for this project spanned only a single school block, the work I’ve done for it has fallen far outside that scope. I have spent about a year working on and off on the system whenever I had the time to do so. With that in mind, I will likely be glossing over some of the smaller parts of the development process in order to keep this article at a somewhat reasonable length.

Contribution log of the system’s GitHub repository.

3.1. Deciding on a UI System

For my preliminary research my main focus was to look into how you can create a custom editor window in Unity. There are three major UI systems for Unity at the moment. The first is the uGUI system, which is used for in-game UI such as the world-space canvas, but it doesn’t offer any support for editor tooling, so it’s irrelevant for our use case.

The other two systems are the IMGUI system and UI Toolkit. UI Toolkit is the newer of the two but is still in active development, so it is not yet at feature parity with the older systems. On the other hand, IMGUI is older and has some limitations compared to UI Toolkit, but it has far more tutorial support because it has been around for much longer. Especially at the time when I began the project a year ago, there was far more information on the IMGUI system.

While UI Toolkit is the system Unity uses for almost all of its own node-based editors, such as VFX Graph and Shader Graph, it’s surprisingly difficult to find any good documentation or tutorial content on how to replicate something like that. On the other hand, I was able to find a good amount of beginner-friendly tutorial content on using IMGUI and creating a node-based editor with it. As a result I decided to go with IMGUI for this project. In hindsight, it would probably have been better to use UI Toolkit. There are several limitations to IMGUI that make it somewhat unsuitable for the type of editor I wanted to make. For example, zooming is unsupported and must be implemented through a workaround in IMGUI. These limitations significantly slowed down development, and because I had to write special code to bypass them, the system is less efficient than it could be and contains some relatively obtuse functions meant to “hack around” certain issues. As a whole, the way the IMGUI system handles drawing and input events is also not suitable for more complicated editor tools, as the way it is set up will naturally steer the codebase towards large chains of conditional logic that are hard to navigate (more on that later).

On the other hand, it’s hard to say whether using UI Toolkit would necessarily have given me an easier time, as finding helpful support for it would have been quite difficult. Googling for UI Toolkit documentation and examples will often lead you to IMGUI-based content instead, so it would have required a lot of digging and may have taken me even longer to build the editor as a result. That said, the final result would most likely be more organized and efficient using UI Toolkit, so I’d like to rebuild the visual side of the editor with it at some point.

3.2. IMGUI Rundown

While IMGUI has been around for a long time, its documentation on Unity’s end is remarkably thin. I often found myself running into problems because the official documentation had excluded critical information. (For example, the difference between GUILayout.TextField and EditorGUILayout.TextField cost me several hours by itself.) Luckily, user Bunny83 has written an excellent write-up of the workings of the IMGUI system that gave me a much better understanding of it. I’ll give an explanation below on the most important parts of the system, but for further detail I would recommend seeing Bunny83’s guide.

EditorWindow

In order to create our own Editor Window in Unity, we extend the EditorWindow class. This class gives us access to the core functionality we need to draw our window and the elements within it. By using the MenuItem attribute we can add a menu item to Unity’s menus to open our window. For opening the window itself, we can use EditorWindow.GetWindow<T>.
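A minimal sketch of that setup (the menu path and window title here are arbitrary choices of my own):

using UnityEditor;
using UnityEngine;

public class BehaviourEditorWindow : EditorWindow
{
    // The MenuItem attribute adds an entry to Unity's menu bar that opens the window.
    [MenuItem("Window/Behaviour Editor")]
    private static void OpenWindow()
    {
        // GetWindow creates the window if it doesn't exist yet, or focuses the existing one.
        BehaviourEditorWindow window = GetWindow<BehaviourEditorWindow>();
        window.titleContent = new GUIContent("Behaviour Editor");
    }

    private void OnGUI()
    {
        // All drawing and event processing happens here (see the next section).
    }
}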

OnGUI
The OnGUI method handles drawing as well as event processing.

At the core of the IMGUI system is the OnGUI method. This method is essentially an update and a draw loop in one, and handles all input events as well as displaying the window to the user. This is handled through the Event class. The static member Event.current gives us access to the event that is currently being processed by OnGUI. We can then read out the type of that event in order to know how to respond to it. OnGUI may be called several times per frame for different event types, so you need to make sure your code only responds to the correct event type.

For example, you may want some code to be called when the user clicks the mouse. For this, you would look for the MouseDown event type. You would then check which mouse button was pressed through the event’s button property.

This is why I mentioned earlier that IMGUI will lead to long, conditional code chains. Because all the event logic is processed in a single method in which you need to check for every event type you want to process, you end up having to create large logic chains to account for all your functionality. This gets messier the more your project grows, especially when you want to start using modifier keys (like ctrl+click). A switch statement on the event type can make things a little easier to read, but it’s still not ideal. I’ll explain more about how I dealt with this issue later on.

Incidentally, the Repaint and Layout events, which are used for drawing, don’t need to be checked for when using GUI drawing methods, as they will already check for those event types internally.

Using a switch statement makes event handling a bit more organized.
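In place of that screenshot, here is a sketch of what such a switch inside OnGUI might look like; the specific interactions handled are just examples.

// Called from OnGUI with Event.current.
private void ProcessEvents(Event e)
{
    switch (e.type)
    {
        case EventType.MouseDown:
            if (e.button == 0)               // left mouse button
            {
                // e.g. select whatever element is under e.mousePosition...
                e.Use();                     // mark the event as consumed
            }
            else if (e.button == 1)          // right mouse button
            {
                // e.g. open a context menu...
                e.Use();
            }
            break;

        case EventType.MouseDrag:
            // e.g. pan the canvas or drag a node by e.delta...
            break;

        // Repaint and Layout don't need cases here: the GUI drawing
        // methods check for those event types internally.
    }
}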
(Editor)GUI and (Editor)GUILayout

In order to draw elements to your editor window in IMGUI, you will need to use the GUI and GUILayout classes, and by extension the EditorGUI and EditorGUILayout classes. The Editor versions are simply extensions containing functionality specifically for use in the Editor rather than Play Mode (a remnant from when the uGUI system did not exist and IMGUI was used for Editor as well as Play Mode UI). These classes contain functionality for drawing different types of controls, such as buttons and text fields, to the screen, as well as ways to determine the layout of those elements. The difference between GUI and GUILayout is how that layout is determined.

In general, with GUI, you are expected to supply your own rectangles and, by extension, coordinates for elements and groups to be drawn. On the other hand, GUILayout only asks for a rectangle when using GUILayout.BeginArea (which restricts your layouting to that area of the screen). All other controls are drawn and have their layout determined using the supplied style (see the GUIStyle section below) and additional GUILayoutOptions.

Generally speaking, this makes GUILayout much simpler to use, as the majority of layouting work is handled for you rather than having to write it yourself. The downside is that you have less information on where your elements are drawn, and by extension what their interactable area is when it comes to input events. I’ll get into that issue a bit later as well.

GUIStyle

When drawing elements using GUI and GUILayout methods you can supply a GUIStyle that describes how to draw the element. This lets you define things like backgrounds, margins, font types and more for controls to be drawn with. This is how you’re expected to do most of your styling when making editor windows. As with OnGUI and GUILayout, this part of IMGUI also comes with its own issues which I’ll cover later.

3.3. Initial Editor Designs

When I had just started out on this project I had difficulty figuring out what I should be doing. I had followed a tutorial on building a node based editor in Unity to gain some familiarity with IMGUI but beyond that I was somewhat directionless. I couldn’t yet envision how the visual editor should look and because of that didn’t know what I should have been doing or where to begin.

The result of following oguzkonya‘s node based editor tutorial.

After being stuck in analysis paralysis for a while, I figured that I should begin by simply coming up with a mockup of a visual interface for the system. From there I could then start determining what changes were necessary to the core system as well as what editor elements I would need to be creating. I generally prefer to create technical prototypes rather than wireflows and graphic designs, but in this case there was too much uncertainty on technical requirements to simply start programming away.

To create the design I came up with a few essential requirements.

  • The system must have visually distinct states, each clearly containing actions and transitions.
  • The user must be able to add actions and transitions to these states.
  • The user must be able to clearly assign a target for a given transition.
  • States must be able to be parents or children of each other and this must be clearly indicated.

Based on these requirements I came up with the following initial mockup.

In this design actions and transitions are categorized into their own sections within a state. Transitions would have output nodes that could lead to input nodes on the left or right sides of states in order to indicate transition targets. I figured that grouping transitions together so that the output nodes were adjacent would look better.

Parent/Child relationships would be indicated by input/output nodes at the top and bottom of states, where upper states would be parents, creating a vertical hierarchy in the system. I had some concerns about whether the lines indicating parentage and transitions would be distinct enough, so I also decided to give them different colors.

I eventually worked out a slightly more detailed version of the same design as seen below.

In this design I added visuals for the input/output nodes on the states and transitions. I decided to give the input and output nodes for parentage a distinct shape from those for transitions to further differentiate them. Between the vertical versus horizontal orientation, the colored lines and the differently shaped in/out nodes, I figured the difference between the two connection types should be clear enough, but I still spent some time trying to come up with alternative designs that would handle parenting differently.

Neither of these designs ended up going anywhere, mainly because cascading substates would need to become smaller and smaller to fit inside each other. I decided to move ahead with the initial design for now and perhaps revisit it later if the difference between the types of connections between states wasn’t clear enough.

3.4. Preparing to Build the Editor

Now that I had a design in mind for the editor, I was almost ready to start working on it. But before I could do so I needed to do some preliminary work on the behaviour system itself in order to implement some technical changes. A lot of this was simply refactoring or renaming elements, but the main functional change was the way IStateChildren were categorized in our behaviour.

In the original model states could contain substates, actions and transitions in any order, and as such those executables could be run in any order. However, in our editor design they fall into three distinct categories, so we should also treat them as such when executing them.

As shown earlier, states didn’t distinguish between the different IStateChildren they contained. They just used a large array of them as a parameter in their constructor, and beyond that it was up to the creator of the behaviour to order the different substates, actions and transitions in whatever order they chose.

In our new system these three elements needed to become distinct within the state hierarchy in order to be able to execute them based on their type.

As for the execution order, I decided it would be natural for states to execute their contents vertically downwards before moving on to the substates.

The new state constructor now takes actions, transitions and substates separately. Execution order is: actions > transitions > substates.

In the original system you could list and run substates, actions and transitions in any order. In the new system actions, transitions and substates are run in sequence based on their type.
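A sketch of what the reworked State might look like with that separation (serialization concerns from chapter 4.3 are left out, and member names are assumptions):

using System.Collections.Generic;

public class State
{
    public string Name;
    public List<Action> Actions;
    public List<Transition> Transitions;
    public List<State> Substates;

    private State activeSubstate;

    public State(string name, List<Action> actions, List<Transition> transitions, List<State> substates)
    {
        Name = name;
        Actions = actions;
        Transitions = transitions;
        Substates = substates;
    }

    public void Execute()
    {
        // Fixed execution order: actions first, then transitions, then the active substate.
        foreach (Action action in Actions)
            action.Execute();

        foreach (Transition transition in Transitions)
            transition.Execute();

        if (activeSubstate == null && Substates.Count > 0)
            activeSubstate = Substates[0];   // the primary substate becomes active first

        activeSubstate?.Execute();
    }
}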

Aside from making some changes to the behaviour system itself, I also decided to make a basic node editor to start working off of. For this, I essentially took the node editor I had made earlier and simplified it, removing the connections between nodes for the time being and simply having a window in which I could add or remove nodes to and from a canvas.

4. Building the Editor

The development process on the editor itself took a long time and many of the elements in the system went through several iterations before getting to the point where they are now. I was working with many different aspects of Unity development I had never really touched before, like IMGUI and Unity’s serialization system. As my knowledge of these systems and their limitations grew the requirements for my own systems changed to match them. Furthermore, as more elements were added the codebase grew larger and more disorganized, making refactors a necessity.

Because of this, going through every iteration of every part of the product chronologically would simply be too much to cover and too disorganized to put into writing. Instead I’ve categorized the development into a few sections. First I’ll be covering how I dealt with IMGUI’s limitations. Secondly I’ll cover the building of the interface itself. Finally I’ll get into serialization of behaviour data.

4.1. Dealing with IMGUI

As I mentioned in the IMGUI rundown section there were a few issues that caused trouble for my workflow as I continued to work on the system. Here I’ll be going over what these issues were exactly and how I dealt with them.

OnGUI

OnGUI‘s centralized handling of all drawing and interaction code becomes more disorganized as more elements are added to the mix. You can somewhat lessen this by subdividing parts of the code into their own functions. For a time I had a system where the elements I was drawing all contained their own Draw and ProcessEvent methods that would be called by their parent elements.

The OnGUI method called different drawing and event processing methods.
An old version of state interactable event processing.

The issue with this approach was that a lot of code ended up being copied in several different processing methods as they were all fully independent from one another. For example, to check whether an element was left clicked, you’d need to check if the event type is a MouseDown event, then check if the mouse button was 0, and then check if the mouse position was over the element’s rectangle. Each element where you wanted to check for a left click would need to implement those checks in their event processing. It would be much more preferable if all of that was handled in a single location.

With that in mind I decided to create an abstract InteractableElement class to help with event processing and drawing. This class still contains a ProcessEvents method, but rather than element interactions being handled directly inside of it, the class instead contains several interaction methods such as OnClick and OnHover that are called by ProcessEvents. These methods also invoke events for said interactions, meaning that functionality can either be implemented by overriding the methods themselves when implementing InteractableElement, or by subscribing to the events they invoke.

ProcessEvents now only deals with the processing of events, whereas the handling of said events is implemented in the methods it calls.
StateInteractable implementing OnClicked through inheritance.
Interactions can either be implemented in subclasses or subscribed to through invoked events.
Subscribing to a Clicked event on a generic InteractableLabel.
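The captions above refer to screenshots; as a rough, simplified sketch of the event side of InteractableElement (the members of the real class differ):

using UnityEngine;

public abstract class InteractableElement
{
    // The element's interactable screen area, updated after it is drawn (see below).
    public Rect Rect { get; protected set; }

    // Outside code can subscribe to these instead of subclassing.
    public event System.Action<InteractableElement> Clicked;
    public event System.Action<InteractableElement> HoverEntered;

    private bool hovered;

    public virtual void ProcessEvents(Event e)
    {
        switch (e.type)
        {
            case EventType.MouseDown:
                if (e.button == 0 && Rect.Contains(e.mousePosition))
                {
                    OnClick(e);
                    e.Use();
                }
                break;

            case EventType.MouseMove:
            case EventType.MouseDrag:
                bool containsMouse = Rect.Contains(e.mousePosition);
                if (containsMouse && !hovered)
                    OnHover(e);
                hovered = containsMouse;
                break;
        }
    }

    // Subclasses override these for element-specific behaviour;
    // both also raise the corresponding event.
    protected virtual void OnClick(Event e) => Clicked?.Invoke(this);
    protected virtual void OnHover(Event e) => HoverEntered?.Invoke(this);
}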

As for drawing, InteractableElement contains two methods named Draw and DrawAtRect, which call the virtual methods DrawGUI and DrawGUIAtRect respectively. DrawGUI and DrawGUIAtRect can be implemented in inheriting classes in order to specify how they should be drawn onto the screen. The difference between the two is that DrawGUIAtRect will target a specific area on the screen to be drawn, whereas the standard DrawGUI will assume use of GUILayout to determine draw location. After calling DrawGUI and DrawGUIAtRect the Draw and DrawAtRect functions will update the interactable’s interaction rect based on where it was just drawn. (I realize the naming on these methods is a bit unclear, the code below should hopefully clear it up.)

Draw and DrawAtRect will call the drawing methods if the element is visible, and afterward update their interaction rects.
DrawGUI and DrawGUIAtRect are intended to handle the actual drawing and should be implemented as such.
InteractableLabel‘s implementation of DrawGUI and DrawGUIAtRect.
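Continuing that sketch, the drawing side might look roughly like this, with the interaction rect updated right after the element is drawn:

// Part of the InteractableElement sketch above.
public bool Visible = true;

public void Draw()
{
    if (!Visible) return;

    DrawGUI();   // the subclass draws itself using GUILayout calls

    // GetLastRect only returns a meaningful rect during the Repaint event.
    if (Event.current.type == EventType.Repaint)
        Rect = GUILayoutUtility.GetLastRect();
}

public void DrawAtRect(Rect rect)
{
    if (!Visible) return;

    DrawGUIAtRect(rect);   // the subclass draws into an explicitly supplied rect
    Rect = rect;
}

// Implemented by subclasses to do the actual drawing.
protected abstract void DrawGUI();
protected abstract void DrawGUIAtRect(Rect rect);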

With this, I no longer have to add event processing for every element I want to draw to the screen. I can simply extend InteractableElement and implement the specific interactions that I want my element to respond to.

GUILayout

As mentioned, GUILayout’s drawing methods are generally much simpler to use than their GUI equivalents as Unity will handle the layout and formatting of your elements behind the scenes. This means you don’t need to implement your own layout calculations. However, the fact that you’re not supplying the rectangles to draw also means that you don’t exactly know where elements are being drawn. We need to know that information to compare against the mouse position when processing interactions.

The system has a solution for this, which is the GUILayoutUtility.GetLastRect function. This method returns the rectangle where the previous element was drawn, but it’s not ideal for two reasons. Firstly, this means you will need a GetLastRect call after the draw call of every element that you want to make interactable. As you can imagine, this causes significant amounts of clutter in your code. The second issue is that it returns the previous rect relative to the area it was drawn in, meaning that you cannot use it by itself to determine the interactive area of the element. You will also need to add the position of the area that contains it.

Having to determine interactive rectangles for elements introduces clutter into our drawing functions.

The InteractableElement class comes into use here. The first problem is resolved by the previously mentioned Draw and DrawAtRect methods, which each update the element’s interactive rectangle to that of the drawn element. In a scenario without the InteractableElement class, this would need to be implemented for each element we wanted to draw.

The second issue is a little trickier. My solution has basically been to give InteractableElements both a LocalRect as well as a ParentRectPosition value. When doing a get call on the Rect property of the element, it will return these two values combined. We then use SetParentPositionsPreDraw before we draw any of our elements to update the ParentRectPosition for all our elements.
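In code terms, that roughly means replacing the simple Rect property from the earlier sketch with a combination of the two values (again, a simplification of the real class; Draw would then assign LocalRect instead of Rect):

// The rect as returned by GUILayoutUtility.GetLastRect, relative to the containing area.
public Rect LocalRect { get; protected set; }

// The position of the GUILayout area the element was drawn in.
public Vector2 ParentRectPosition { get; private set; }

// The actual interactable area in window coordinates.
public Rect Rect => new Rect(LocalRect.position + ParentRectPosition, LocalRect.size);

// Called for each element before drawing, so interaction checks use up-to-date coordinates.
public void SetParentPositionsPreDraw(Vector2 parentPosition) => ParentRectPosition = parentPosition;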

This solution is admittedly not the cleanest, and mostly intended as a temporary fix for the problem. In reality the InteractableElement should have a hierarchy structure with children and parents that would handle these positional updates automatically, but this would be fairly complex to implement. Given that the system is potentially getting migrated to UI Toolkit at some point, I decided it wouldn’t be worth the time to implement a parent/child hierarchy in the InteractableElement system for the time being.

One last issue with GUILayout is that the way the layout is determined is part of Unity’s internal code and as such not known for certain. This means it is very difficult to know how your elements are going to be drawn before drawing them, which is a problem when you want to scale a containing area to its contents. When creating a GUILayout area you need to supply a rectangle for its sub-elements to be drawn in, but without knowing exactly how those elements will be drawn, you cannot know how large the area should be.

For example, if we want to draw a node at a certain point on the screen, we need to use a GUILayout area, for which we will need to supply a rect. We know where we want to place it, but we don’t know how the elements inside our node are going to look, so we can’t determine the size yet. Elements are constrained to the area they’re drawn in, so if we make the area too small, it will cut off the elements we want to show. On the other hand, if we make it too large, our node will be noticeably larger than its contents. Through some experimentation I did find a workaround for this issue as well.

Calculating individual element sizes gets convoluted quickly and would be difficult to scale.

At first I attempted to use methods like GUILayoutUtility.GetRect and GUIStyle.CalcHeight to determine the content size beforehand, but the implementation was very convoluted and the output wasn’t entirely accurate. (Changes in node contents would cause the relative node size to shift slightly.) I believe it would be possible to use this approach to predetermine rect sizes, but without insight into the way the GUILayout methods work it’s very difficult to get the implementation right.

The solution I eventually ended up with was to first draw an empty area without a background that spans the size that our node could potentially be. We can then use GUILayout.BeginVertical to create a vertical group that will scale to its contents. That vertical group will draw the node background instead, so we don’t have to worry about it being too large for the contents. This was eventually implemented into the DynamicSizeInteractable class.

This solution uses a lot less code while also being more accurate.
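As a sketch of the trick (the sizes and contents here are placeholders): the outer area is oversized and has no background, while the inner vertical group carries the node style and shrinks to fit.

private void DrawNode(Vector2 position, GUIStyle nodeStyle)
{
    // The area is deliberately larger than the node will ever need to be,
    // and is drawn without a background of its own.
    GUILayout.BeginArea(new Rect(position.x, position.y, 400f, 600f));

    // The vertical group draws the node background and scales to its contents,
    // so the visible node is never larger than what's inside it.
    GUILayout.BeginVertical(nodeStyle);

    GUILayout.Label("Actions");
    GUILayout.Label("DebugLogAction");
    GUILayout.Label("Transitions");

    GUILayout.EndVertical();
    GUILayout.EndArea();
}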
GUIStyle

When drawing elements with GUI or GUILayout, you supply GUIStyles to determine how those elements are drawn. You can create your own styles either through code or with the GUISkin object.

The GUISkin object is a sort of style library that you can create inside Unity. It’s a collection of styles that you can retrieve by name in script with a reference to the skin. I had some issues with this, though. The main problem was that certain styles seemed to draw differently based on their given name. For example, if you copied the button style and gave it a name that was not “button”, any elements drawn with the new style would suddenly no longer be drawn as buttons. You also could not just give them the same name, as you need to use the name as an ID to retrieve the style. The sparse documentation on GUISkin did not help either.

So the next best thing is to create the GUIStyle via code, but this gave some issues as well, specifically because you can only create styles during OnGUI processing. I wanted these styles to be stored as properties in my InteractableElement class through their constructors, but that would mean I wouldn’t be able to instantiate any interactable elements before OnGUI was called to create the styles first.

For this, I found a solution by Bunny83 in the form of a LazyGUIStyle class. This is a sort of GUIStyle wrapper class that only initializes itself once it is cast to a GUIStyle. The constructor takes an initializer delegate that returns a GUIStyle when invoked. This delegate is then invoked the first time the LazyGUIStyle is implicitly cast to a GUIStyle. In this way I can simply store my element styles as LazyGUIStyles, and only once they’re actually used for drawing (which always happens during OnGUI anyway) will they be passed to the drawing functions as GUIStyles and invoke their own initialization.

The implicit cast to GUIStyle invokes our initializer.
Creating a LazyGUIStyle that will initialize itself with the proper settings when used as a GUIStyle
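A sketch of the idea, based on Bunny83’s approach (member names and the example style are my own):

using System;
using UnityEngine;

public class LazyGUIStyle
{
    private readonly Func<GUIStyle> initializer;
    private GUIStyle style;

    public LazyGUIStyle(Func<GUIStyle> initializer)
    {
        this.initializer = initializer;
    }

    // The conversion only ever happens inside OnGUI (when the style is passed to a
    // drawing call), so it is safe to build the GUIStyle here.
    public static implicit operator GUIStyle(LazyGUIStyle lazy)
    {
        return lazy.style ?? (lazy.style = lazy.initializer());
    }
}

// Usage inside an element class, built on first use:
//   private static readonly LazyGUIStyle NodeStyle = new LazyGUIStyle(() =>
//       new GUIStyle(GUI.skin.box) { padding = new RectOffset(8, 8, 8, 8) });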

4.2. Creating the Interface

In this section I’ll go over how I went about building the interface from both a design and programming perspective. A good amount of the work on the interface was done before I created the InteractableElement class, so many elements were adapted into that model later. The code showcased here will be from after that was done, but fundamentally the logic behind the work should be the same.

StateInteractables

As mentioned in chapter 3.4, to begin working on the editor I decided to use a simplified version of the tutorial I had been following as a basis. I decided to start with only a gridded canvas on which I could create or remove nodes, as well as dragging the nodes and the canvas around.

The starting point of the editor

From here, my next goal was to draw the action and transition lists and their contents onto the nodes as in the wireframes I had made. For now I just used example data as list contents in order to make sure they were drawing correctly.

To begin with I converted the class I was using to draw nodes to the screen into a StateInteractable class. This class contains a reference to the state it represents in our behaviour. (Initially I just used a newly generated placeholder state to run tests with.)

I then created an abstract GenericActionList class with two subclasses, the TransitionList and the ActionList classes. This collection of classes is similar to the StateInteractable in that they represent parts of our behaviour data visually. The lists are slightly different in how they represent our data (transitions need to also draw output nodes for their target state) which is why they get distinct subclasses. When creating a StateInteractable, we create both an ActionList as well as a TransitionList object. When doing so we pass the actual action and transition lists from our StateInteractable‘s state into their constructors.

From the StateInteractable constructor.

The GenericActionList features its own ProcessEvents and Draw functions in order to handle drawing of the elements they contain. These lists are actually the only UI elements left that don’t implement InteractableElement, mostly because they function as a group of elements rather than an element themselves.

The GenericActionList draws the labels of all the actions it represents.

The GenericActionList class keeps track of two main lists. The first is the actual actions/transitions it needs to represent. The second is a list of InteractableLabels that actually represent those actions/transitions. (InteractableLabel is an implementation of InteractableElement to easily draw labels). Whenever a draw call is made on our GenericActionList, it loops through its list of labels and draws all of them. It does the same thing for event processing.

Creating and adding new actions to the list will immediately create a corresponding label, and because they are added together their indices correspond to each other and they can be removed together as well. Furthermore, when creating the labels I also immediately implement the handling for their interactions, such as showing the action’s data when it is clicked.

Label events are subscribed to as the action is created.
States are now properly represented by the StateInteractable.
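A simplified sketch of that parallel-list setup (the real GenericActionList has more responsibilities, and the ActionDataWindow call is just an assumed example of wiring up an interaction):

using System.Collections.Generic;
using UnityEngine;

public abstract class GenericActionList
{
    // The behaviour data this list represents...
    protected readonly List<Action> actions = new List<Action>();
    // ...and the labels that visually represent it, kept at matching indices.
    protected readonly List<InteractableLabel> labels = new List<InteractableLabel>();

    public void Add(Action action)
    {
        InteractableLabel label = new InteractableLabel(action.GetType().Name);

        // Interactions are wired up as soon as the label is created,
        // e.g. showing the action's data when the label is clicked.
        label.Clicked += clicked => ActionDataWindow.Instance.Show(action, clicked.Rect.position);

        actions.Add(action);
        labels.Add(label);
    }

    public void Draw()
    {
        foreach (InteractableLabel label in labels)
            label.Draw();
    }

    public void ProcessEvents(Event e)
    {
        foreach (InteractableLabel label in labels)
            label.ProcessEvents(e);
    }
}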

Whenever a state is drawn, it will make a draw call on both its action and transition lists. The same happens for processing events. With some placeholder data added, StateInteractables were now drawing with their state’s actions and transitions inside of them. I also added a + button in order to add additional actions/transitions. The next step was to implement the functionality for that button.

Action Search Window

Users of the system need to be able to add any Action or Transition within their project source to a StateInteractable. For this they need a way to search for the action or transition they want to add. I decided to draw some inspiration from the way Unity handles adding components to GameObjects.

When clicking the “Add Component” button in the inspector it brings up a list of MonoBehaviours in your project to add to the GameObject you have selected. You can search for the MonoBehaviour you want to add by name to find it more quickly.

I wanted to replicate this functionality for adding actions and transitions to states. Whenever the user clicks the + button under the action or transition list, it should bring up a search popup in which they can search for any of the actions or transitions in the project. For this I created the ActionSearchWindow class. I only ever need to show one of these search windows at a time, so I decided to use the Singleton pattern for it. It also inherits from the DynamicSizeInteractable class.

The Show method updates the list data and starts displaying it.

In order to show the window when clicking on one of the + buttons I use a Show method. This method takes a few parameters such as the list it should target when adding the clicked action/transition, the position at which it should render and the list of options it should display (determined by reflection). The method then sets the window to visible, which then starts displaying the options it needs to.

In order to filter the actions, I use Regex to determine whether the text entered into the search bar is part of any of the elements in the displayed action list. Just like when searching for components to add to a GameObject, I wanted this to highlight the matching part of the string in bold, so I insert bold tags into the string where the regex matches. The actions that match the regex will be added to a filtered list of InteractableLabels which is the list that actually ends up being drawn when drawing the search window. Upon being clicked one of those labels will add an instance of the corresponding action to the action list that was passed to the search window when it was shown, after which the window is closed.
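A sketch of the filtering step (simplified to plain strings; the real implementation builds InteractableLabels from the results):

using System.Collections.Generic;
using System.Text.RegularExpressions;

// Case-insensitive matching with the matched part wrapped in rich-text bold tags,
// similar to Unity's Add Component search.
private static List<string> FilterOptions(IEnumerable<string> optionNames, string searchText)
{
    var filtered = new List<string>();
    string pattern = Regex.Escape(searchText);   // treat the user's input as literal text

    foreach (string name in optionNames)
    {
        Match match = Regex.Match(name, pattern, RegexOptions.IgnoreCase);
        if (!match.Success)
            continue;

        // Insert <b>...</b> around the matched substring; the labels are drawn
        // with a GUIStyle that has richText enabled.
        string highlighted = name.Substring(0, match.Index)
            + "<b>" + name.Substring(match.Index, match.Length) + "</b>"
            + name.Substring(match.Index + match.Length);

        filtered.Add(highlighted);
    }
    return filtered;
}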

The search window in action.
Action Data Window
Just like with the ActionSearchWindow we have a Show method that sets our window data.

The user needs some way to access the parameters of an action and modify them. For this I envisioned a pop up style window that would appear when either hovering over or when clicking on an action. To make this I created a very similar structure to the search window, namely a singleton class with a Show method inheriting from DynamicSizeInteractable to which I would send data for the action that was clicked/hovered. For the styling I simply decided to use the same visuals as I was using for StateInteractables.

I eventually decided to make the data appear on click rather than on hover, mainly because hovering would create a lot of visual noise whenever the user moved their mouse around in the editor and passed over a list of actions. Having the data appear on click ensured that it only showed information the user actually wanted to see, and that the data wouldn’t be hidden immediately if the user accidentally hovered off of the label.

There was an issue that occurred when I tried to implement clicking within the ActionDataWindow. I wanted the data window to close when clicking outside either the window or the label, but because these elements were distinct from each other, a click in one would result in the ClickedOutside/HoverExited event being invoked on the other. My solution was to introduce a list of “sibling elements” in the InteractableElement class. For the purpose of processing click and hover actions, these elements are considered as one.

Processing a hover checks whether the element or any of its siblings contain the mouse position.
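In sketch form, the check is as simple as widening the containment test to the element’s siblings (again as part of the InteractableElement sketch from earlier):

// Siblings share hover/click containment with this element.
public readonly List<InteractableElement> Siblings = new List<InteractableElement>();

private bool ContainsMouse(Vector2 mousePosition)
{
    if (Rect.Contains(mousePosition))
        return true;

    foreach (InteractableElement sibling in Siblings)
    {
        if (sibling.Rect.Contains(mousePosition))
            return true;
    }
    return false;
}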
In/Out Nodes and Connections

Our states need node connections between them, both to indicate transition targets as well as parent/child relationships between states. In the initial tutorial I followed, there was some explanation on how to create in/out nodes and how to draw lines between them. I used that knowledge as a basis to create my own node connections between states.

The solution I came up with uses a NodeConnection class and an abstract InteractableNode class, the latter of which inherits from InteractableElement. InteractableNode has two subclasses: the SingleConnectionNode and the MultiConnectionNode. This is because certain in/out nodes should only be able to have one connection to other nodes whereas others should be able to connect to multiple. (For example, a state can have multiple children but only one parent.) These subclasses share mostly the same functionality, however one stores all its connections in a list, while the other overrides its existing connection when the new one is made. In other words, they just use different implementations for adding and removing connections.

SingleConnectionNode‘s implementation.
MultiConnectionNode‘s implementation.
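In place of those screenshots, a sketch of the two variants (the base class members and the Remove helper used here are assumptions):

using System.Collections.Generic;

// Only allows one connection; making a new one replaces the old one.
public class SingleConnectionNode : InteractableNode
{
    private NodeConnection connection;

    public override void AddConnection(NodeConnection newConnection)
    {
        connection?.Remove();        // assumed helper that also detaches the other end
        connection = newConnection;
    }

    public override void RemoveConnection(NodeConnection removed)
    {
        if (connection == removed)
            connection = null;
    }
}

// Allows any number of connections, stored in a list.
public class MultiConnectionNode : InteractableNode
{
    private readonly List<NodeConnection> connections = new List<NodeConnection>();

    public override void AddConnection(NodeConnection newConnection) => connections.Add(newConnection);

    public override void RemoveConnection(NodeConnection removed) => connections.Remove(removed);
}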

The InteractableNode implements several of the interaction functions from InteractableElement in order to provide the functionality for creating and removing connections. To help with knowing which connections are actually valid to make, there are two enums. Firstly we have the NodeType enum, which specifies whether the node is a Transition or ParentChild node. Secondly the ConnectionType enum, which specifies whether the node is an In or Out node. When making a connection, the NodeType of the two nodes must be the same, whereas the ConnectionType must be distinct. (For example, a transition-out node should connect to a transition-in node.)

InteractableNode also extends the InteractableElement class with a few extra events for when connections are created or removed. This lets us implement the functionality of actually setting parent/child connections and setting transition targets in our behaviour data generically. Below is an example of how that works.

When adding a transition to a list, the out node representing the transition target is also added. Setting the target state for the transition is implemented by subscribing anonymous functions to the OnConnectionMade and OnConnectionRemoved events.
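As a small sketch of that wiring (TargetState, GetOtherNode and OwnerState are assumed member names):

// Inside the TransitionList, when a transition and its out node are added together:
outNode.OnConnectionMade += connection =>
    transition.TargetState = connection.GetOtherNode(outNode).OwnerState;

outNode.OnConnectionRemoved += connection =>
    transition.TargetState = null;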

As for drawing, the InteractableNode is slightly unique in that it uses a GUI-based drawing method rather than GUILayout. This is because we don’t want the node to be constrained by the GUILayout groups it is being drawn in. Instead we just determine where we want the node to be drawn based on the elements it needs to be drawn next to or on top of.

Transition output nodes just draw slightly offset relative to the label of the transition they represent.

The drawing of transition input nodes on states actually became a small design consideration, as they could either be drawn centered between the two action lists, or centered on the state as a whole. After sharing the below example with a few friends, I decided to go with the former option based on their feedback.

Mockup comparison: transition input nodes centered between the action and transition lists versus centered on the state as a whole.

As for the NodeConnection class, it’s simply a data class that holds references to the two nodes it connects, along with a Draw method for drawing a bezier curve between the two points. On the topic of drawing them, NodeConnections shouldn’t be allowed to obscure any of the other UI, so we need to make sure they are drawn behind all other elements.

IMGUI does have a draw depth system, but as with almost anything in IMGUI, it doesn’t work how you’d like it to, so we have to resort to just ordering our draw calls in the way we want the UI to be viewed. In order to make sure this happens, our BehaviourEditorWindow class stores a list of all connections, to which a connection is added or removed whenever it is created or destroyed. We can then just loop through that list to draw connections before drawing the rest of our elements in OnGUI.

A NodeConnection just adds itself to the list when it is created.
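A sketch of what NodeConnection might look like under those constraints; the static Connections list on the window and the curve styling are assumptions.

using UnityEditor;
using UnityEngine;

public class NodeConnection
{
    public InteractableNode From { get; }
    public InteractableNode To { get; }

    public NodeConnection(InteractableNode from, InteractableNode to)
    {
        From = from;
        To = to;

        // Register with the window so connections are drawn before everything else.
        BehaviourEditorWindow.Connections.Add(this);
    }

    public void Remove()
    {
        BehaviourEditorWindow.Connections.Remove(this);
    }

    public void Draw()
    {
        Vector2 start = From.Rect.center;
        Vector2 end = To.Rect.center;

        // Horizontal tangents give the usual node-editor curve shape.
        Handles.DrawBezier(
            start, end,
            start + Vector2.right * 50f,
            end + Vector2.left * 50f,
            Color.white, null, 3f);
    }
}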

This solution does create one other problem which relates to the updating of the node positions. As mentioned earlier, InteractableElements update their rects after they are drawn, but NodeConnections draw before anything else. This means that NodeConnections will draw using the node positions from the previous frame. I have been able to implement a solution to fix this in almost all circumstances, which involves getting the positional data of nodes early, but connections will still draw incorrectly for a frame when a behaviour is opened.

To completely fix the issue I would have to restructure the system to entirely predetermine all element positions before drawing them, but that would mean a huge restructuring of the entire OnGUI pass, which is not worth it for such a small issue from both a workload and performance perspective.

Creating connections between states.
Title Bar and Primary Substate Indicator

With the node connections implemented, the UI for the system was nearly complete, but one critical visual element was still missing. As mentioned all the way back in chapter 2.1, states have a single primary substate, which is the first substate made active when the state is executed for the first time. In our old system this was just the first state in the hierarchy, but that hierarchy is not clearly visible in the visual editor. Making it the first state added as a child of another state would technically work, but without any indicator the user could easily lose track of which state that is.

Knowing the primary substate is important because it determines the initial active branch of our behaviour, which significantly affects the flow of actions. That means we have to somehow convey it to the user, so I made a few simple mockup solutions in paint and collected some feedback from some developers I knew.

The feedback I received was fairly mixed, but there was a slight preference for the marker for its simplicity, so I decided to go with that option and test it with users later. The feedback I got did also tell me that most users would still like to have a state name to be able to edit, even if I went with the marker solution, so I decided to implement that as well separately.

I wasn’t entirely happy with how the state names looked in the mockups though, so I looked at some other node-based editors for some inspiration and realized that most of them use a sort of titlebar for their nodes, like in the example below from a tool named HA Lovelace Editor.

I liked how this looked so I decided to add a titlebar to my nodes as well, which was a fairly simple addition. I then added a textfield for the name as well as a marker for the primary substate. You can see them both in action below.

Other Features

With all the above UI implemented the interface was ready to make behaviours, but there were still some other interactions that I considered necessary to make the editor properly usable. The two I want to highlight here are zooming/panning and the dragging of UI elements.

For panning, I simply draw all elements on the screen with a certain offset based on the current panned position. Zooming is a little more complicated, however, on account of IMGUI offering no support for it. I won’t go into too much detail on how I accomplished it, but it involves forcibly exiting the EditorWindow’s own GUI drawing and restarting it after applying a GUI matrix. This solution was pretty much entirely sourced from this guide by Martin Ecker.

The initial editor already supported the dragging of nodes, which simply involved updating their positions when the mouse was dragged after clicking them. However, I also wanted to be able to drag action and transition labels around in order to re-order them, and this would be a little more complicated, as they would have to go from being drawn within the action list to being drawn at the mouse position instead.

The solution I came up with for this is a bit hacky, but it works pretty well. I created a Singleton class called DraggedActionDrawer which holds a reference to a currently dragged action. Whenever an action is dragged it is then set as the dragged action in this class. A few other pieces of information are also sent, such as the offset of the mouse relative to the action label when it was first clicked, and the list to which the action previously belonged.

DraggedActionDrawer stores relevant information when an action starts being dragged.

When drawing the action list, if an action is currently the dragged action of our DraggedActionDrawer we ignore it. Instead, at a later point, the DraggedActionDrawer will draw the dragged action at the current mouse position, plus whatever offset was set when the action was set as dragged action. I also created a ListFillerInteractable class for use in the action lists. These list fillers are added between every action in the list and will normally have a height of zero and as such not be visible when drawn. However, if the dragged action is hovered over the interactable’s position the interactable expands to create an empty space in the list, creating the effect below.
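A rough sketch of the DraggedActionDrawer idea; the stored fields follow the description above, everything else (method and member names) is assumed.

using UnityEngine;

public class DraggedActionDrawer
{
    public static DraggedActionDrawer Instance { get; } = new DraggedActionDrawer();

    public Action DraggedAction { get; private set; }            // null when nothing is dragged
    public GenericActionList SourceList { get; private set; }    // the list the action came from

    private InteractableLabel draggedLabel;                      // the label that represents the action
    private Vector2 dragOffset;                                  // mouse offset relative to the label

    public void StartDrag(Action action, InteractableLabel label, GenericActionList sourceList, Vector2 offset)
    {
        DraggedAction = action;
        draggedLabel = label;
        SourceList = sourceList;
        dragOffset = offset;
    }

    public void StopDrag() => DraggedAction = null;

    // Called late in OnGUI so the dragged label draws on top of everything else.
    public void Draw()
    {
        if (DraggedAction == null) return;

        Vector2 mouse = Event.current.mousePosition;
        draggedLabel.DrawAtRect(new Rect(mouse + dragOffset, draggedLabel.Rect.size));
    }
}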

I also added functionality to drag actions between different states. When a dragged action is released, it is added to the target list at the point in the list that is hovered. In the case of transitions, the connection is also carried over, unless the connection becomes invalid, for example because the target would be the transition’s own new state.

4.3. Serializing Behaviour Data

Having a good looking interface is nice, but if it has no way of actually modifying or creating any behaviour data there’s no point to it. In order to modify data continuously in the editor, we need to serialize the data somehow. In a Unity editor window, the SerializedObject and SerializedProperty classes are by far the best way to do this.

Before I was aware of those classes, I was under the impression that I would have to implement custom serialization to get the job done. I actually had a working implementation of this, but it was removed because the SerializedObject/Property classes are much more convenient for actual use. The most important reason for that is that they can automatically handle drawing input fields for your values.

Creating the Behaviour Data Object

First off, before we can start doing anything with serialization, we need an object we can modify. For this I created a class called BehaviourData that inherits from ScriptableObject. This class only holds one value: a single State called “mainState”. This acts as the root state of our behaviour, and any states added to the behaviour will be added under it in the hierarchy.

Using the CreateAssetMenu attribute, we can add an item to Unity’s asset creation context menu that creates a behaviour data object. This allows us to create an instance of BehaviourData as an asset in our project structure.
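
Sketched out, the data container might look something like this (the file and menu names are placeholders, not necessarily what the real asset uses):

```csharp
using UnityEngine;

// Sketch of the BehaviourData container; fileName and menuName are illustrative.
[CreateAssetMenu(fileName = "NewBehaviourData", menuName = "Behaviour Data")]
public class BehaviourData : ScriptableObject
{
    // Root of the behaviour; every other state is added under it.
    public State mainState;
}
```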

One important point is that for other objects to be saved properly to an asset, they need to be instanced and stored as sub-assets themselves. For example, if I were to just instantiate a state and add it to my BehaviourData asset’s main state, a reference to that object would be stored in the BehaviourData, but the referenced object would only exist in working memory. As soon as Unity recompiles, restarts or does anything of the sort, the object would cease to exist and our BehaviourData would be left holding a reference to a now non-existent object.

So whenever we want to add a reference to an object to our BehaviourData asset, we need to make sure that object exists as a sub-asset within our main asset so that it is not lost when working memory is cleared. To do this, we can simply use AssetDatabase.AddObjectToAsset. We need to do this for states as well as actions and transitions, so I added a method called AddStateExecutableAsAsset to our BehaviourData class. For this to work, our States as well as our Actions and Transitions need to inherit from ScriptableObject, as they otherwise cannot be stored as sub-assets.
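
As a sketch of the idea (the real AddStateExecutableAsAsset lives on BehaviourData and its exact signature may differ):

```csharp
using UnityEditor;
using UnityEngine;

// Illustrative helper; in the real editor this logic sits inside BehaviourData
// as AddStateExecutableAsAsset.
public static class SubAssetUtility
{
    public static void AddAsSubAsset(ScriptableObject executable, BehaviourData owner)
    {
        // Store the state/action/transition inside the BehaviourData asset so it
        // survives recompiles and editor restarts instead of living only in
        // working memory. This only works because it derives from ScriptableObject.
        AssetDatabase.AddObjectToAsset(executable, owner);

        // Persist the change so the sub-asset appears under the main asset.
        AssetDatabase.SaveAssets();
    }
}
```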

The BehaviourData class only holds a mainState field and a method for adding sub-assets.

Modifying Serialized Data

As mentioned, the best way in Unity to deal with serialized data is to use the SerializedObject and SerializedProperty classes. These classes make it simple to perform changes on an asset through user input, and it’s the same system that Unity uses to handle serialized properties being displayed in the inspector.

You can create a SerializedObject from any object whose type derives from UnityEngine.Object. The SerializedObject is a sort of wrapper around the original object that allows for generic manipulation of the object’s serialized fields through SerializedProperties. You can use the SerializedObject.FindProperty method to retrieve a SerializedProperty by its field name. The SerializedProperty is a sort of generic version of the serialized field it represents. It contains fields such as stringValue, boolValue and objectReferenceValue; you simply modify whichever one matches the type of your actual backing field and ignore the others. Then, once you want to apply your changes, you call ApplyModifiedProperties on the SerializedObject to do so.

Using SerializedObject/Property to set a new main state for our BehaviourData.
The ActionDataWindow draw method iterates through all the properties of an action and draws them using EditorGUILayout.PropertyField.
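
As a concrete sketch of that pattern, assigning a new main state to a BehaviourData might look roughly like this (not the editor’s exact code):

```csharp
using UnityEditor;

// Sketch: assign a new root state to a BehaviourData asset through the
// SerializedObject/SerializedProperty API so the change is recorded properly.
public static class BehaviourDataEditing
{
    public static void SetMainState(BehaviourData data, State newMainState)
    {
        var serializedData = new SerializedObject(data);
        serializedData.Update(); // pull the latest serialized values

        // Look the field up by its serialized name and assign through the
        // generic objectReferenceValue slot.
        SerializedProperty mainStateProp = serializedData.FindProperty("mainState");
        mainStateProp.objectReferenceValue = newMainState;

        // Nothing is written back to the asset until the modifications are applied.
        serializedData.ApplyModifiedProperties();
    }
}
```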

Unity provides a dedicated method for this in EditorGUILayout: PropertyField. This method essentially draws the property while automatically inferring its value type in order to add an appropriate input field. This is also how serialized fields are drawn in the inspector window, so you can use this method to draw properties exactly as they would appear there.

Additionally, SerializedProperties have Next and NextVisible methods which can be used to iterate through the SerializedProperties of a SerializedObject. Combining this with the drawing method from before allows you to draw all the properties of an object in sequence through a simple while loop. This is how our ActionDataWindow displays the values of the action it is currently representing.
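
A minimal sketch of that loop, assuming the window already knows which action it is displaying:

```csharp
using UnityEditor;
using UnityEngine;

// Sketch of the property-iteration loop used by the action data window
// (window plumbing omitted; "target" stands for the currently selected action).
public static class ActionPropertyDrawing
{
    public static void DrawAllVisibleProperties(Object target)
    {
        var serializedTarget = new SerializedObject(target);
        serializedTarget.Update();

        SerializedProperty property = serializedTarget.GetIterator();
        bool enterChildren = true; // step into the root once, then stay at the top level

        while (property.NextVisible(enterChildren))
        {
            enterChildren = false;

            // Skip the script reference Unity adds to every serialized object.
            if (property.name == "m_Script") continue;

            // Draws an input field appropriate for the property's type,
            // exactly as the inspector would.
            EditorGUILayout.PropertyField(property, true);
        }

        serializedTarget.ApplyModifiedProperties();
    }
}
```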

With this system, we can simply add the SerializeField attribute to any properties of our actions and transitions that we want to expose to the editor. For properties that we want to serialize but don’t want to show in our action data window (for example transition targets, as they’re already represented by our connection nodes), we just add the HideInInspector attribute so that they are ignored when using SerializedProperty.NextVisible.
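
For example, a hypothetical transition class might mark its fields like this:

```csharp
using UnityEngine;

// Hypothetical transition showing the two attributes in combination; the real
// transition classes contain more than this.
public class ExampleTransition : ScriptableObject
{
    // Exposed in the action data window via the property iteration above.
    [SerializeField] private float cooldown = 1f;

    // Serialized so it is saved with the asset, but hidden from the window
    // because the connection nodes already visualize it.
    [SerializeField, HideInInspector] private State target;
}
```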

Extending SerializedObjects

While editing behaviour data is pretty simple thanks to the SerializedObject system, it does introduce some boilerplate for modifying behaviour properties. Rather than changing a state property directly, we need to wrap the state in a SerializedObject, find the correct property by its field name, assign the desired value and then apply the modified properties to the object. This means any property change takes a few lines of code, and for more complex changes, especially those involving list modification, things can get pretty verbose.

I decided to create a few extensions of SerializedObject to help with this: namely, the abstract SerializedStateExecutable class, which is extended by the SerializedState and SerializedAction classes. These classes contain methods for the changes that might be made to states or actions, such as setting parent/child connections and setting primary states. They handle all the interfacing with SerializedProperties needed to change our behaviour data, meaning the work is compressed down into a single method call. They also hold a reference to the actual object they represent and can be implicitly cast to it when convenient.

A few of the methods in SerializedState.
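
As a rough sketch of the convenience layer, written here as a thin wrapper holding a SerializedObject; the field and method names (including the “primaryState” field) are approximations of the real classes:

```csharp
using UnityEditor;

// Rough sketch of the convenience-wrapper idea behind SerializedState.
public class SerializedStateSketch
{
    private readonly State state;
    private readonly SerializedObject serialized;

    public SerializedStateSketch(State state)
    {
        this.state = state;
        this.serialized = new SerializedObject(state);
    }

    // Compresses the FindProperty/assign/apply dance into a single call.
    public void SetPrimaryState(State primary)
    {
        serialized.Update();
        serialized.FindProperty("primaryState").objectReferenceValue = primary; // assumed field name
        serialized.ApplyModifiedProperties();
    }

    // Lets the wrapper be used wherever a plain State is expected.
    public static implicit operator State(SerializedStateSketch wrapper) => wrapper.state;
}
```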

4.4. The Result

With all of the above work completed, the editor was finally in a state where it was technically ready to make behaviours with. While there were still many changes I wanted to implement, it was now ready for an initial test run. Before I get into the process of testing the system, however, let me explain its general workflow.

Once the system has been imported as a package, the user can open the editor window in two different ways. Firstly, they can navigate to Window > Behaviour Editor. They’ll then have the option to create a new behaviour or load an existing one.
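
Behind the scenes, that menu entry is the kind of thing Unity’s MenuItem attribute handles. A minimal sketch, with the window class name as a placeholder:

```csharp
using UnityEditor;

// Sketch of how the menu entry is typically wired up; the real window class
// and its contents are of course more involved.
public class BehaviourEditorWindowSketch : EditorWindow
{
    [MenuItem("Window/Behaviour Editor")]
    private static void Open()
    {
        // Focuses an existing window or creates one if none is open.
        GetWindow<BehaviourEditorWindowSketch>("Behaviour Editor");
    }
}
```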

Alternatively, they can create a new Behaviour Data asset using the creation context menu and then double-click on the asset to open it directly.
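
The double-click behaviour can be wired up with Unity’s OnOpenAsset callback; presumably the editor does something along these lines (a sketch reusing the placeholder types from above, with the hand-off of the asset to the window omitted):

```csharp
using UnityEditor;
using UnityEditor.Callbacks;

// Sketch of a double-click handler for BehaviourData assets.
public static class BehaviourDataOpenHandler
{
    [OnOpenAsset]
    private static bool OnOpenAsset(int instanceID, int line)
    {
        var data = EditorUtility.InstanceIDToObject(instanceID) as BehaviourData;
        if (data == null) return false; // not our asset type, let Unity handle it

        // Open/focus the editor window; the real implementation would also hand
        // the double-clicked asset over to it (e.g. a LoadBehaviour call, name hypothetical).
        EditorWindow.GetWindow<BehaviourEditorWindowSketch>("Behaviour Editor");
        return true; // we handled the open ourselves
    }
}
```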

The user can then start building a behaviour in the editor. Right clicking on the canvas opens a context menu for adding states, and any actions and transitions can be added to those states using the add buttons below their respective lists.

Action and transition properties can be modified by clicking on them and changing them in the data window.

By drawing connections, you can set parent/child relationships between states as well as transition targets for your transitions.

Once the behaviour is ready to be tested, you can add a BehaviourRunner component to any object in your scene. You can then drag the behaviour data asset onto the runner to set it as the behaviour to run.

The behaviour will now automatically start running when you enter Play Mode.

You can also modify your action/transition properties during runtime to change how the behaviour runs.
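
For completeness, the runner side of that workflow boils down to a component with a single BehaviourData field. The sketch below is purely illustrative; the real BehaviourRunner’s update logic, and the State API it drives, are described earlier in the article and differ in detail.

```csharp
using UnityEngine;

// Purely illustrative stand-in for the BehaviourRunner component.
public class BehaviourRunnerSketch : MonoBehaviour
{
    // Drag the BehaviourData asset onto this field in the inspector.
    [SerializeField] private BehaviourData behaviour;

    private void Update()
    {
        // Hypothetical entry point: tick the behaviour's root state each frame.
        // behaviour.mainState.Perform(gameObject); // method name assumed
    }
}
```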

5. Testing the Editor

With the editor having all its core functionality implemented, the system was now ready for testing. My primary goal for this testing run was to see whether the system was intuitive to use and whether testers would understand what they were doing. For this I devised a qualitative test in which users would perform a list of tasks to create various behaviours and run them to see if they were working properly. They would be expected to perform these tasks with minimal input from my end, but the system would include a usage guide for them to consult.

5.1. Preparing the Test

Test Instructions and Usage Guide

As mentioned, the test would require the user to create several different behaviours step by step and test them out. There would be no instructions on how to perform each step in the test, and users would be encouraged to figure out how to perform them through experimentation. In case they got stuck, they were advised to consult the usage guide included in the package. This would give me insight into which elements of the system were perhaps not intuitive enough, and whether the guide contained any blind spots regarding certain unintuitive actions.

The list of instructions I devised covers every aspect of creating behaviours using the editor, from importing the package and opening the editor window to things like setting primary substates and dragging actions and transitions between states. In this way every aspect of the system is tested as a step in the process. The only interactions I did not specifically instruct users to perform were things like zooming and panning the editor, as they were naturally expected to try doing this as they were performing the tasks.

By having users verify the proper functioning of their created behaviours themselves, I gain insight into whether they understand how the behaviours are actually working. I created the usage guide at the same time as I was devising the test, slowly covering every aspect of the system as I worked my way through them. You can download PDFs of both the instruction list as well as the usage guide below.

Test Setup

The test was composed of three stages. Firstly, the pre-test stage. Users would be given a zip file containing the test instructions, the package and the usage guide. They would be instructed to follow the first task in the instructions, which consisted of setting up a Unity project on a supported Unity version. They would be asked to share their screen and, while waiting for the first task to complete, would be asked a small set of questions regarding their experience with Unity as well as node-based editors in general. I would also record their name in case I had any follow-up questions when going through the results later.

All questions were recorded via Google Forms, but I did not have users fill in the forms themselves. Having them silently fill in the answers would have limited my ability to ask them to elaborate on answers that were not clear. This was a qualitative test, so the comprehensiveness of the results was important here. Instead, I simply went through the test questions and posed them to the users, asking for further information when necessary and filling out the answers myself.

The second portion of the testing process was the test itself. During this I mostly stayed silent while observing the users perform the tasks. I noted down whether users were having difficulties with the tasks, and if they were, where the difficulty stemmed from. On some occasions I would speak up to ask for their thought process in order to understand what exactly was keeping them from knowing what to do. When users got completely stuck (which luckily did not happen for the most part), I would step in to give some extra instruction, which I of course recorded in the notes. The other points where I would speak up were when unexpected behaviour occurred that might send the user off track. For example, there was a bug that caused behaviours to run incorrectly even though they had been set up as instructed in the test.

The third and final portion of the test was the post-test questionnaire. This was a large set of questions regarding the experience the user had while going through the test, separated into several categories. Firstly they were asked about the usage guide and whether they felt it was complete and easy to navigate. After that they were asked a set of general questions regarding the user experience of the editor. The next category had them rate specific interactions such as adding or removing states. They were then asked to answer some questions regarding the Asset Store, and whether they would be willing to pay for this tool. Finally they were asked to rate a set of potential future changes to the editor and to share any further insights.

5.2. Test Limitations

This was a fairly in-depth test to run, and testing sessions took quite a bit of time (around an hour). Having to create a new Unity project, import the package and build several behaviours, as well as answering a whole list of questions afterwards, was a fairly intensive process, especially when you account for the user getting stuck on certain aspects of the test. This meant that the group of potential testers I was able to get was fairly small (I tested with 6 people in total). Furthermore, a chunk of the testers were close friends to whom I had already shown footage of the system, meaning they would have already known how certain interactions worked. (I did, however, get a couple of testers who had never seen the system before, and they showed similar results to those who had.)

Another limitation in the testing is the demographics of the testers. While this system is intended for use by both designers with no programming knowledge and programmers, the people I tested the system with were all primarily programmers. It could be that there are certain aspects of the process that would have been less intuitive to those unfamiliar with programming, which I haven’t been able to control for with this group of testers.

5.3. Test Results

Overall, feedback on the editor was very positive. Users found the editor both intuitive to use and visually pleasing. They did, however, have some trouble with drawing transition and parent-child connections between states, which I had somewhat expected. Specifically in the context of transitions, drawing them from left to right was fairly simple, but users got a bit confused when drawing them from right to left, as the connections would get hidden behind the states. I’ll have to look into whether there is a better way to visualize those connections. One user also suggested that valid connection targets be highlighted while dragging a connection, which I thought was a good idea. Aside from drawing connections, there was also some trouble with running the behaviours: a few users tried dragging the asset directly onto the inspector window. I’m not sure if it would be possible to automatically create a behaviour runner and add the asset when this is done, but it would be a nice quality-of-life feature.

A big hurdle for the testers was the step for importing the package. None of the users had ever imported a Unity package from disk before, nor had they ever imported package samples into their project. I had omitted this from the usage guide as I didn’t consider it part of using the system itself, but I had to guide all the testers through it so there would definitely be some benefit to including an installation section in the guide.

A few users also suggested adding more organizational tools to the editor, such as ways to group states or color-code them. Some users tried panning with the middle mouse button, which is in line with Unity’s other node-based tools. A few users had issues locating the behaviour asset they created, as the system did not properly indicate where it had been saved. Some users also tried to add actions/transitions by right clicking rather than using the + button. There are many more things like this that I observed while watching other people use the editor, and it has given me quite a few insights into where it can be improved. The users also encountered a few bugs which I have made note of to look into later. Overall, I’d say the testing has given me quite a lot of value despite the small sample size.

6. Conclusion & Future Plans

Over the course of a year of on and off work, I’ve created a fully functional and professional looking visual scripting tool for use in Unity. The tool allows users to quickly construct behaviours using actions and transitions that they can create themselves. The behaviours can then be added to GameObjects and they can even be modified during runtime.

Obviously this project took up more time than originally intended, but I wanted to be able to present it as a fully working and visually pleasing product that one could imagine finding on the Unity Asset Store. On some days that I spent working on it I didn’t write a single line of code, simply staring at the screen trying to conceptualize solutions to problems I was struggling with. Other times I was rewriting things over and over, pulling my hair out as I struggled against IMGUI’s limitations and Unity’s horrible documentation. But there were also times when things clicked and I was able to build core elements of the system within a few hours.

With that in mind, this was probably the most educational project I’ve ever worked on when it comes to advancing my knowledge of (Unity) development. I had no experience with things like IMGUI and serialization. Other things that I had only a vague understanding of, like ScriptableObjects and C# delegates and events, became core components of the system, making me much more confident in their use. There were even a few topics I learned about while experimenting that aren’t present in the system at all anymore, such as custom serialization and generic C# types. As a whole, I think the creation of the editor has also made me better at system design for complicated codebases where many subsystems interact. Of course, building the UI has also given me some more insight into UX design.

There are still many improvements to be made to the system though. I would eventually like to release this product on the Unity Asset Store, and while the core functionality has been implemented, I don’t think the system is ready for a full release. There are still features missing, such as hotkey actions and runtime feedback on which states and actions are being executed. The testing also revealed plenty of interactions that could be improved, as well as a few bugs that need to be squashed, so there’s still plenty of work left to be done.

For the time being I am working on a game that implements the system in order to reveal any issues that might come up during actual use. I will probably make improvements to the editor as I work with it, and I will also be looking into switching over to UI Toolkit for the interface, as I don’t want to do too much more work on the IMGUI version if I end up deciding to switch.

This concludes this (way too long) article. Thanks to any readers who’ve made it all the way to the end. I hope those of you who’ve made it here have perhaps gained some new insights from reading about the making of my behaviour editor. If the tool itself seemed interesting to you, please look forward to seeing it in the asset store!

7. Sources

Nystrom, R. (2011). Command. Game Programming Patterns.
https://gameprogrammingpatterns.com/command.html

Microsoft. (2021, September 15). Params. C# Documentation.
https://docs.microsoft.com/en-us/dotnet/csharp/language-reference/keywords/params

Opsive. (2021, December 8). Behavior Designer. Unity Asset Store.
https://assetstore.unity.com/packages/tools/visual-scripting/behavior-designer-behavior-trees-for-everyone-15277#description

Paradox Notion. (2022, April 4). NodeCanvas. Unity Asset Store.
https://assetstore.unity.com/packages/tools/visual-scripting/nodecanvas-14914

Billy4184. (2017, March 26). Behaviour Trees and Finite State Machine Discussion. Unity Forums.
https://forum.unity.com/threads/behavior-trees-and-finite-state-machine-discussion.462903/#:~:text=To%20put%20it%20simply%2C%20a,anywhere%20else%20in%20the%20tree

Unity. (2022). Editor Windows. Unity Documentation.
https://docs.unity3d.com/Manual/editor-EditorWindows.html

Brackeys. (2017, July 7). How to make an EDITOR WINDOW in Unity. Youtube.
https://www.youtube.com/watch?v=491TSNwXTIg

Oguz Konya. (2019, August 15). Creating a Node Based Editor in Unity. oguzkonya.com.
https://oguzkonya.com/creating-node-based-editor-unity/

Bunny83. (2018, September 20). IMGUI crash course. Github.com.
https://github.com/Bunny83/Unity-Articles/blob/master/IMGUI%20crash%20course.md

Unity. (2022). Editor Window. Unity Documentation.
https://docs.unity3d.com/ScriptReference/EditorWindow.html

Unity. (2022). MenuItem. Unity Documentation.
https://docs.unity3d.com/ScriptReference/MenuItem.html

Unity. (2022). Event. Unity Documentation.
https://docs.unity3d.com/ScriptReference/Event.html

Unity. (2022). GUI. Unity Documentation.
https://docs.unity3d.com/ScriptReference/GUI.html

Unity. (2022). GUILayout. Unity Documentation.
https://docs.unity3d.com/ScriptReference/GUILayout.html

Unity. (2022). EditorGUI. Unity Documentation.
https://docs.unity3d.com/ScriptReference/EditorGUI.html

Unity. (2022). EditorGUILayout. Unity Documentation.
https://docs.unity3d.com/ScriptReference/EditorGUILayout.html

Unity. (2022). GUILayoutOption. Unity Documentation.
https://docs.unity3d.com/ScriptReference/GUILayoutOption.html

Unity. (2022). GUILayoutUtility. Unity Documentation.
https://docs.unity3d.com/ScriptReference/GUILayoutUtility

Unity. (2022). GUIStyle. Unity Documentation.
https://docs.unity3d.com/Manual/class-GUIStyle.html

Unity. (2022). GUISkin. Unity Documentation.
https://docs.unity3d.com/ScriptReference/GUISkin.html

Bunny83. (2020, August 8). Editorstyles null Reference. Unity Answers.
https://answers.unity.com/questions/1759291/editorstyles-null-reference.html

Nystrom, R. (2011). Singleton. Game Programming Patterns.
https://gameprogrammingpatterns.com/singleton.html

Microsoft. (2021, November 5). Reflection. C# Documentation.
https://docs.microsoft.com/en-us/dotnet/csharp/programming-guide/concepts/reflection

Microsoft. (n.d.). Regex Class. C# Documentation.
https://docs.microsoft.com/en-us/dotnet/api/system.text.regularexpressions.regex?view=net-6.0

asierralozano. (2021, April 1). HomeAssistant Custom Lovelace UI Editor. Github.com.
https://github.com/asierralozano/ha-ui-editor

Unity. (2022). SerializedObject. Unity Documentation.
https://docs.unity3d.com/ScriptReference/SerializedObject.html

Unity. (2022). SerializedProperty. Unity Documentation.
https://docs.unity3d.com/ScriptReference/SerializedProperty.html

Unity. (2022). Custom serialization. Unity Documentation (archive).
https://web.archive.org/web/20220530223317/https://docs.unity3d.com/Manual/script-Serialization-Custom.html

Unity. (2022). AssetDatabase. Unity Documentation.
https://docs.unity3d.com/ScriptReference/AssetDatabase.html

Game Dev Guide. (2019, July 8). Easy Editor Windows in Unity with Serialized Properties. Youtube.
https://www.youtube.com/watch?v=c_3DXBrH-Is

Unity. (2022). HideInInspector. Unity Documentation.
https://docs.unity3d.com/ScriptReference/HideInInspector.html
