
nub: A Rendering and Interaction Library for Visual Computing in Processing

Open Access | Mar 2025

Figures & Tables

Figure 1

nub main framework elements: Left: a static Node tree implementing a multi-view application for a human skeleton and defining shape visual hints which specify what to draw for each node (spine, neck, skull, and light). The tree supports three scenes (mainView, sideView, and topView), each encapsulating an eye Node (mainEye, sideEye, and topEye, respectively) and a PGraphics rendering context (mainContext, sideContext, and topContext, respectively); Right: rendering of the neck subtree from each scene’s eye viewpoint, demonstrating ray-casting picking and manipulation. The 3D model of the human skeleton was adapted from [14].
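The node-tree organization the caption describes can be sketched in plain Java. This is a minimal, hypothetical sketch (the class and method names below are illustrative, not nub's actual API): each node keeps a child list, and rendering a scene amounts to a depth-first traversal of the subtree rooted at a chosen node, as in the neck-subtree rendering shown on the right of the figure.

```java
import java.util.ArrayList;
import java.util.List;

// Minimal scene-graph node sketch (hypothetical names; nub's actual Node
// class carries much richer state: transforms, visual hints, picking data).
class Node {
    final String name;
    final List<Node> children = new ArrayList<>();

    Node(String name) { this.name = name; }

    Node attach(Node child) { children.add(child); return child; }

    // Depth-first traversal, analogous to recursively rendering a subtree.
    void visit(List<String> out) {
        out.add(name);
        for (Node child : children) child.visit(out);
    }
}

public class SkeletonTree {
    public static void main(String[] args) {
        // Mirror the Figure 1 hierarchy: spine -> neck -> skull, plus a light.
        Node spine = new Node("spine");
        Node neck = spine.attach(new Node("neck"));
        neck.attach(new Node("skull"));
        spine.attach(new Node("light"));

        List<String> whole = new ArrayList<>();
        spine.visit(whole);            // traverse the whole tree
        System.out.println(whole);     // [spine, neck, skull, light]

        List<String> neckOnly = new ArrayList<>();
        neck.visit(neckOnly);          // traverse only the neck subtree
        System.out.println(neckOnly);  // [neck, skull]
    }
}
```

Rooting the traversal at any interior node yields the subtree rendering behavior the figure illustrates for the neck.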

Figure 2

render([subtree]) and display([background], [axes], [grid], [subtree], [worldCallback], [pixelX], [pixelY]) scene methods. The render method, which may be called multiple times per frame (e.g., with a specific ordering of node subtrees), recursively renders the node subtree onto the scene context and collects the subtrees to be rendered onto the backbuffer (_renderBackBuffer) for resolving picking in the next frame (_track). The display method renders the axes and grid hints and the node tree (render) on the screen. Setting up the rendering context’s view and projection matrices according to the eye (_bind) is handled automatically by the Processing pre-registered function (for onscreen scenes) or by the display method (for offscreen scenes). This setup can also be invoked explicitly, multiple times, to render offscreen scenes using openContext and closeContext, which call Processing’s beginDraw and endDraw on the scene context, respectively. Displaying an offscreen-rendered scene requires invoking the scene’s high-level image(pixelX, pixelY) method (top-left corner), which display handles automatically. Additionally, the picking rays gathered during the current and previous frames (e.g., through mouse events) are swapped in Processing’s draw, leveraging temporal coherence similarly to double buffering but at a much lower computational cost.
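The ray-swapping scheme the caption describes, where rays gathered during one frame are resolved against the backbuffer on the next, can be sketched with a buffer that hands off its contents at each frame boundary. The names below are hypothetical and the sketch omits the actual backbuffer lookup; it only illustrates the double-buffer-style handoff that gives the temporal-coherence behavior.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of double-buffered picking rays (nub's internals differ).
// Rays cast during frame N (e.g., from mouse events) are handed off at the
// frame boundary and resolved against the backbuffer during frame N + 1.
class PickingRays {
    private List<String> current = new ArrayList<>(); // rays cast this frame

    void cast(String ray) { current.add(ray); }

    // Called once per draw(): return last frame's rays for resolution
    // and start collecting afresh, mimicking a buffer swap.
    List<String> swap() {
        List<String> handoff = current;
        current = new ArrayList<>();
        return handoff;
    }
}

public class RaySwapDemo {
    public static void main(String[] args) {
        PickingRays rays = new PickingRays();
        rays.cast("mouse@(10,20)");          // gathered during frame 1
        List<String> toResolve = rays.swap(); // resolved during frame 2
        System.out.println(toResolve.size()); // 1
        System.out.println(rays.swap().size()); // 0: nothing cast in frame 2
    }
}
```

Resolving rays one frame late is usually imperceptible because consecutive frames are nearly identical, which is the temporal-coherence argument the caption makes.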

Figure 3

nub selected examples: a) Luxo; b) Keyframes; c) Depth map; d) Shadow mapping; e) Custom node interaction; and f) View frustum culling.

DOI: https://doi.org/10.5334/jors.477 | Journal eISSN: 2049-9647
Language: English
Submitted on: Jun 15, 2023
Accepted on: Mar 20, 2025
Published on: Mar 27, 2025
Published by: Ubiquity Press
In partnership with: Paradigm Publishing Services
Publication frequency: 1 issue per year

© 2025 Jean Pierre Charalambos, published by Ubiquity Press
This work is licensed under the Creative Commons Attribution 4.0 License.