(1) Overview
Introduction
Tree-like affine transformation hierarchies—structures that organize objects in a parent-child relationship with transformations relative to their parent—are essential in rendering, interaction, and computer vision, all core disciplines of visual computing. These hierarchies enable hierarchical space partitioning, a process that recursively divides space into non-overlapping subspaces, forming the foundation for advanced real-time rendering techniques such as view-frustum and occlusion culling [21, 3, 34], ray-tracing [31, 20, 8], multiresolution meshes [9, 18], and collision detection [32, 15].
Additionally, they support the creation of compound objects, enabling key interaction tasks like navigation, picking, and spatial manipulation [12]. They also form the basis of articulated structures, supporting applications such as inverse kinematics [5], and (non-)human skeletal animation and motion retargeting [4, 19, 2]. These tasks are central to post-WIMP (Window, Icon, Menu, Pointer) user interfaces, which transcend traditional graphical paradigms by incorporating 3D interaction, gestures, and immersive environments [17, 27].
This paper introduces the nub library, a lightweight, extensible framework for setting up node trees—acyclic hierarchies rooted at a single null root—designed for rendering and interaction experiments. The library integrates seamlessly with Processing, a flexible Java-based language for creative programming in visual design, widely used for teaching, prototyping, and computational art [26, 29, 25, 7]. Processing provides a simple syntax, extensible architecture, and vibrant community, making it an ideal foundation for tools like nub, which builds on its core principles to support advanced rendering and interaction workflows. Its key features include:
Simplicity and Portability: The library has a simple design with no external dependencies, exposing functionality through a high-level, declarative API centered on the Node and Scene classes. This design promotes API learnability and portability [30].
Customizable Rendering Scenarios: Users can define custom spatial subdivision algorithms during hierarchy creation—such as Bounding Volume Hierarchies (BVHs), Binary Space Partitioning (BSP) trees, or octrees (see the Octree code example below)—and customize culling criteria during traversal, supporting a wide range of advanced real-time rendering techniques [1].
Decoupled Interactivity: The library exposes interaction through concise, high-level patterns, handling input devices directly (e.g., via Processing’s mouseMoved or mouseDragged functions) and providing customizable defaults for motion actions. By decoupling rendering from interaction, it supports easily configurable input-handling setups without reliance on agents and hidden profiles, unlike frameworks such as Proscene [6].
Seamless Processing Integration: nub integrates smoothly with Processing, supporting desktop, Python, and Android 2D/3D modes [26].
By combining a simple and extensible design, seamless integration, decoupling of interaction from rendering, and customizability of rendering scenarios, the nub library provides a robust foundation for visual computing experiments.
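As a concrete illustration of the customizable rendering scenarios above, the octree case can be sketched as a recursive parent-child hierarchy. The following is a minimal, framework-agnostic Java sketch, not nub code: the Cell class and the build/count helpers are hypothetical names standing in for nodes carrying a cubic subspace.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: an octree built as a parent-child hierarchy,
// mirroring how a spatial-subdivision node tree might be structured.
public class OctreeSketch {
  static class Cell {
    final float cx, cy, cz, half;            // cube center and half-extent
    final List<Cell> children = new ArrayList<>();
    Cell(float cx, float cy, float cz, float half) {
      this.cx = cx; this.cy = cy; this.cz = cz; this.half = half;
    }
  }

  // Recursively split a cell into 8 octants until the depth limit.
  static Cell build(float cx, float cy, float cz, float half, int depth) {
    Cell cell = new Cell(cx, cy, cz, half);
    if (depth == 0) return cell;
    float h = half / 2;
    for (int i = 0; i < 8; i++) {
      float ox = ((i & 1) == 0 ? -h : h);
      float oy = ((i & 2) == 0 ? -h : h);
      float oz = ((i & 4) == 0 ? -h : h);
      cell.children.add(build(cx + ox, cy + oy, cz + oz, h, depth - 1));
    }
    return cell;
  }

  // Count all cells in the hierarchy (1 + 8 + 64 + ... for full trees).
  static int count(Cell c) {
    int n = 1;
    for (Cell child : c.children) n += count(child);
    return n;
  }

  public static void main(String[] args) {
    Cell root = build(0, 0, 0, 100, 2);
    System.out.println(count(root)); // 1 + 8 + 64 = 73 cells
  }
}
```

In a nub setting, each such cell would correspond to a node whose visual hint draws its cube, with culling criteria applied during traversal.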
Implementation and architecture
A nub visual application consists of a static node tree and one or more scenes, each encapsulating an eye node which defines view transformations during rendering, and a rendering context used to render node (sub)trees (see Figure 1). All of the main framework functionality is built on just two core classes:
Node: This class represents a 2D or 3D subspace and caches a composed affine transformation comprising a translation, a rotation, and a uniform positive scaling, applied in that order. A node can have inertia, filters to constrain its motion, and keyframes to animate it [35]. Nodes may be rendered according to a visual hint, with details provided in the repository’s readme manual.
Node instances are organized into a static tree whose root is the null reference, as depicted in Figure 1 (left). Nodes support precise picking through their screen projection hints. Methods for node localization and spatial transformations are also described in the repository’s readme manual.
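The hierarchical composition of node transformations can be sketched as follows. This is an illustrative 2D Java sketch under stated assumptions, not the Node class itself: NodeSketch and worldLocation are hypothetical names, and each node applies its scale, rotation, and translation relative to its parent, walking up to the null root.

```java
// Hypothetical sketch of hierarchical transform composition: each node
// stores a translation, rotation angle, and uniform positive scale
// relative to its parent; a local point is mapped to world space by
// walking the ancestor chain up to the (null) root.
public class NodeSketch {
  final NodeSketch parent;   // null for top-level nodes
  final double tx, ty;       // translation relative to parent
  final double angle;        // rotation (radians) relative to parent
  final double scale;        // uniform positive scaling

  NodeSketch(NodeSketch parent, double tx, double ty, double angle, double scale) {
    this.parent = parent; this.tx = tx; this.ty = ty;
    this.angle = angle; this.scale = scale;
  }

  // Apply scale, then rotation, then translation, recursively upward,
  // so the composed map is T ∘ R ∘ S per ancestor.
  double[] worldLocation(double x, double y) {
    double sx = x * scale, sy = y * scale;
    double rx = sx * Math.cos(angle) - sy * Math.sin(angle);
    double ry = sx * Math.sin(angle) + sy * Math.cos(angle);
    double wx = rx + tx, wy = ry + ty;
    return parent == null ? new double[]{wx, wy} : parent.worldLocation(wx, wy);
  }
}
```

For example, a child translated by (0, 5) and scaled by 2 under a parent translated by (10, 0) maps its local point (1, 0) to the world point (12, 5).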
Scene: This class acts as a wrapper for a Processing PGraphics, which serves as the rendering context for the node tree. Each scene encapsulates an eye node to define view transformations and caches its own view, projection, projectionView, and projectionViewInverse matrices. It also provides routines for visibility checks, including point, box, and ball visibility (see the ViewFrustumCulling code example below). Additionally, it gathers input data from Processing’s input methods or other sources, such as multi-touch gestures and game controllers, and applies interaction patterns to implement custom interactive actions for nodes.
Scene methods for transforming between world, screen, and normalized-device-coordinates (NDC) spaces are detailed in the repository’s readme manual.
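The kind of visibility routine a scene offers can be sketched with the standard plane-based test. This is a generic Java sketch of view-frustum visibility checks, not the Scene implementation: the Plane, pointVisible, and ballVisible names are illustrative.

```java
// Hypothetical sketch of point and ball visibility checks against a
// convex region (e.g., a view frustum) given as inward-facing planes.
public class CullingSketch {
  // A plane n·p + d = 0 whose normal points to the inside of the region.
  static class Plane {
    final double nx, ny, nz, d;
    Plane(double nx, double ny, double nz, double d) {
      this.nx = nx; this.ny = ny; this.nz = nz; this.d = d;
    }
    double distance(double x, double y, double z) {
      return nx * x + ny * y + nz * z + d;  // signed distance (unit normal)
    }
  }

  // A point is visible when it lies on the inner side of every plane.
  static boolean pointVisible(Plane[] planes, double x, double y, double z) {
    for (Plane p : planes)
      if (p.distance(x, y, z) < 0) return false;
    return true;
  }

  // A ball is culled only when it lies fully outside some plane: the
  // conservative test commonly used for view-frustum culling.
  static boolean ballVisible(Plane[] planes, double x, double y, double z, double r) {
    for (Plane p : planes)
      if (p.distance(x, y, z) < -r) return false;
    return true;
  }
}
```

A box test follows the same pattern by checking the box corner farthest along each plane normal; culled subtrees are then simply skipped during traversal.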

Figure 1
nub main framework elements: Left: a static Node tree implementing a multi-view application for a human skeleton and defining shape visual hints which specify what to draw for each node (spine, neck, skull, and light). The tree supports three scenes (mainView, sideView, and topView), each encapsulating an eye Node (mainEye, sideEye, and topEye, respectively) and a PGraphics rendering context (mainContext, sideContext, and topContext, respectively); Right: rendering of the neck subtree from each scene’s eye viewpoint, demonstrating ray-casting picking and manipulation. The 3D model of the human skeleton was adapted from [14].
Rendering
The scene’s render([subtree]) and display([background], [axes], [grid], [subtree], [worldCallback], [pixelX], [pixelY]) methods support diverse rendering scenarios, as illustrated in Figure 1 (right) and Figure 2.

Figure 2
render([subtree]) and display([background], [axes], [grid], [subtree], [worldCallback], [pixelX], [pixelY]) scene methods. The render method, which may be called multiple times per frame (e.g., with specific ordering to render node subtrees), recursively renders the node subtree onto the scene context and collects the subtrees to be rendered onto the backbuffer (_renderBackBuffer) for resolving picking in the next frame (_track). The display method renders the axes, grid hints, and node tree (render) on the screen. Setting up the rendering context’s view and projection matrices according to the eye (_bind) is handled automatically by the registered Processing pre function (for onscreen scenes) or by the display method (for offscreen scenes). This setup can also be invoked explicitly multiple times to render offscreen scenes using openContext and closeContext, which call Processing’s beginDraw and endDraw on the scene context, respectively. Displaying an offscreen-rendered scene requires invoking the scene’s high-level image(pixelX, pixelY) method (whose arguments specify the top-left corner), which display handles automatically. Additionally, rays used for picking, gathered during the current and previous frames (e.g., through mouse events), are swapped in Processing’s draw function, leveraging temporal coherence similarly to double buffering but at a much lower computational cost.
The render([subtree]) method traverses the node subtree hierarchy, rendering each node’s visual hint onto the scene rendering context. It also supports defining custom node behaviors, such as periodic tasks, which are executed during the traversal. These behaviors can be specified using the setBehavior(Node node, Consumer<Node> behavior) method.
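The traversal-with-behaviors mechanism can be sketched in plain Java. The following stand-alone sketch only mirrors the shape of setBehavior(Node node, Consumer&lt;Node&gt; behavior); the TraversalSketch class, the visited list (standing in for actual drawing), and the attach helper are hypothetical.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Consumer;

// Hypothetical sketch of a render traversal with per-node behaviors:
// the traversal visits the subtree depth-first and runs any behavior
// registered for a node before "rendering" its visual hint.
public class TraversalSketch {
  static class Node {
    final String name;
    final List<Node> children = new ArrayList<>();
    Node(String name) { this.name = name; }
    Node attach(Node child) { children.add(child); return child; }
  }

  final Map<Node, Consumer<Node>> behaviors = new HashMap<>();
  final List<String> visited = new ArrayList<>(); // stands in for drawing

  void setBehavior(Node node, Consumer<Node> behavior) {
    behaviors.put(node, behavior);
  }

  void render(Node subtree) {
    Consumer<Node> behavior = behaviors.get(subtree);
    if (behavior != null) behavior.accept(subtree); // e.g., a periodic task
    visited.add(subtree.name);                      // draw the visual hint
    for (Node child : subtree.children) render(child);
  }
}
```

Registering a behavior thus requires no subclassing: a lambda attached to a node runs on every traversal that reaches it.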
The display([background], [axes], [grid], [subtree], [worldCallback], [pixelX], [pixelY]) method follows a three-step process: first, it fills the background and displays world axes and grid; second, it invokes the render([subtree]) method and the user-provided worldCallback function; finally, it displays the rendered scene context at the screen position (pixelX, pixelY) which specifies its top-left corner.
The above methods handle rendering scenarios ranging from simple to complex, including rendering the same node tree onto different rendering contexts from various viewpoints (e.g., for bokeh effects [22, 28, 33]) or supporting 2D/3D interactive minimaps and shadow mapping (see the MiniMap code example below).
Interaction
All use-case interaction scenarios are handled by the scene’s interact(node, gesture), interact(tag, gesture), and interact(gesture) patterns.
The interact(node, gesture) pattern sends gesture data to the specified node, or to the eye when the node parameter is null. This triggers a user-provided functor, defined by the node’s setInteraction(Consumer<Object[]> functor) method, which implements the action to be performed by the node (see the CustomNodeInteraction code example below).
The interact(tag, gesture) pattern enables node picking and manipulation by resolving the node parameter using the node(tag) method. This is equivalent to calling interact(node(tag), gesture), as illustrated in Figure 1 (right). Nodes can be tagged using tag([tag], node) or by performing ray-casting against the node’s visual hint with tag([tag], [pixelX], [pixelY]).
The interact(gesture) pattern sends gesture data directly to the eye unless a node with the null tag exists, in which case the node becomes the interaction target.
Default motion actions for both the eye and nodes, derived from screen-space gesture data, are included in nub and detailed in the code repository’s readme manual.
These interaction patterns support a wide spectrum of interactive scenarios, ranging from simple to highly complex. They are compatible with various input devices, including those with multiple degrees of freedom (see the CustomNodeInteraction code example below).
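The dispatch logic behind these patterns can be sketched as follows. This Java sketch mirrors the paper's method names (interact, tag, node, setInteraction) but its bodies are illustrative stand-ins, not nub's implementation; gesture data is modeled as a plain Object array.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Consumer;

// Hypothetical sketch of the interact(...) dispatch patterns: gesture
// data is routed to a node's user-provided functor, or to the eye when
// no node target is resolved.
public class InteractionSketch {
  static class Node {
    Consumer<Object[]> functor = gesture -> { };  // default: no-op
    void setInteraction(Consumer<Object[]> functor) { this.functor = functor; }
  }

  final Node eye = new Node();
  final Map<String, Node> tags = new HashMap<>();

  void tag(String tag, Node node) { tags.put(tag, node); }
  Node node(String tag) { return tags.get(tag); }

  // interact(node, gesture): the eye is the target when node is null.
  void interact(Node node, Object... gesture) {
    (node == null ? eye : node).functor.accept(gesture);
  }

  // interact(tag, gesture): resolve the node from the tag, then dispatch.
  void interact(String tag, Object... gesture) {
    interact(node(tag), gesture);
  }
}
```

Because the gesture payload is an untyped array, the same dispatch accommodates devices with arbitrary degrees of freedom; each node's functor decides how to interpret the components it receives.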
Quality control
The nub library employs a comprehensive testing framework, publicly available at https://github.com/VisualComputing/nub/tree/master/testing, which integrates unit and regression tests. For every new feature implementation, dedicated examples are created to rigorously test the introduced functionality. This approach not only ensures the robustness of new features but also preserves the overall integrity of the library.
After feature-specific testing, all existing examples are subjected to regression tests to identify and address any unintended side effects or alterations to the library’s existing functionality. Once these tests are successfully completed, selected examples are integrated into the Processing release of the library, initiating a new cycle of development and testing.
Users can report issues on GitHub for additional support with the library, and contributions from the community are actively encouraged to enhance its development.
(2) Availability
Operating system
nub is platform-agnostic, enabling it to run on any operating system (GNU/Linux, macOS, Android, Windows) that supports Processing.
Programming language
nub v1.1.1 runs on Processing 4.2 and later.
Dependencies
nub has no dependencies other than Processing 4.2 and later. It can be installed directly through the Processing IDE’s library import utility.
Software location
Code repository GitHub
Name: nub
Persistent identifier: https://doi.org/10.5281/zenodo.8033963
Licence: GPL-v3
Version published: 1.1.1
Date published: 13/6/23
The first version of nub, previously known as frames during its proof-of-concept stage, was published on GitHub on 25/09/2019.
Language
Processing
(3) Reuse potential
nub is a library developed for Processing, a widely used open-source programming language, and designed to be accessible to a diverse audience within the visual computing community. Its Node class is decoupled from Processing, potentially enabling integration with other frameworks by implementing a Scene interface specific to the target framework.
The nub API, fully documented at https://visualcomputing.github.io/nub-javadocs/, provides extensive customization capabilities for core library features. The functional and declarative API leverages modern Java features, eliminating the need for class inheritance and enhancing simplicity. The library has been applied in various research fields, including inverse kinematics [5].
Feedback can be submitted through the GitHub issue tracker or via email to support the library’s development and usability.
Code examples
The current nub release includes several examples highlighting different aspects of the library. A selection of these examples, illustrating the library’s capabilities, is detailed below (see Figure 3):
A forward kinematics-based scene featuring Pixar’s iconic Luxo [16];
A scene showcasing a cube and an eye with keyframe hints;
A depth-map rendered onto an offscreen context from an arbitrary box viewpoint;
A shadow mapping scene [11, 24, 23], requiring a depth-map of the node hierarchy rendered from the light’s viewpoint (building on the previous example);
A scene with multiple toric solenoids parsing gesture data to modify their topology; and,
A scene demonstrating view-frustum culling.

Figure 3
nub selected examples: a) Luxo; b) Keyframes; c) Depth map; d) Shadow mapping; e) Custom node interaction; and, f) View frustum culling.
Acknowledgements
Gratitude is extended to the anonymous reviewers for testing the software, running the examples, and reviewing the API documentation, as well as for their valuable suggestions. Thanks are also due to Sebastian Chaparro for his thorough insights and for releasing several nub-based experiments as free software, and to all Processing contributors.
Competing Interests
The author has no competing interests to declare.
