
eFFT-C++: An Open-Source Implementation of the Event-Based Fast Fourier Transform

Open Access | Apr 2026


(1) Overview

Introduction

The history of the Fast Fourier Transform (FFT) begins in 1965 with the seminal work of Cooley and Tukey [1], who introduced an algorithm that drastically reduced the computational complexity of the Discrete Fourier Transform (DFT) from quadratic to quasilinear, O(N log N), order. This milestone marked the start of a revolution in digital signal and image processing. Since then, numerous algorithms have been developed to improve efficiency, such as Radix-2 and Radix-4, which recursively decompose the transform into smaller subproblems; the Rader–Brenner algorithm [2, 3], which trades multiplications for additions; the Split-Radix algorithms [4, 5], which combine the advantages of Radix-2 and Radix-4; and the Quick Fourier Transform [6, 7], which enhances efficiency through arithmetic simplifications (see Table 1). Formulations such as Winograd's algorithm [8, 9] and the Prime Factor algorithm (also known as the Good–Thomas FFT) [10, 11] explore alternative mathematical strategies to minimize multiplication operations and optimize performance depending on data length and structure.

Table 1

Number of non-trivial floating-point operations required by different 1D FFT algorithms to compute an N-length data vector: Radix-2 (Rad2), Radix-4 (Rad4), Rader–Brenner (RB) [2, 3], Split-Radix (SR) [4, 5], and Quick Fourier Transform (QFT) [6, 7].

N      RAD2    RAD4    RB      SR      QFT
2^4    176     168     168     168     77
2^5    496     –       492     456     –
2^6    1296    1184    1300    1160    587
2^7    3216    –       3236    2824    –
2^8    7696    6880    7748    6664    3491
2^9    17926   –       17972   15368   –
2^10   40976   36232   41016   34824   18293

In the following decades, improvements in processing power and hardware capabilities drove the development of increasingly optimized software and hardware implementations. In software, libraries such as FFTPACK [12], FFTW [13, 14], and SPIRAL [15] emerged, implementing adaptive combinations of several FFT algorithms (Radix-2, PFA, and Rader, among others) to automatically select the most efficient strategy for a given problem. In parallel, dedicated hardware implementations were developed, including Field-Programmable Gate Array (FPGA)-based architectures [16, 17] and FFT-specific processors [18] built upon parallel and pipelined designs [19]. The rise of multicore and parallel computing further accelerated FFT computation [20], consolidating it as a cornerstone technique across a wide range of disciplines, from image and signal processing to spectral analysis, data compression, and computer vision.

Event cameras have emerged as a revolutionary sensing technology that departs from the conventional frame-based paradigm of visual perception. Instead of capturing full images at fixed rates, these sensors asynchronously report brightness changes at each pixel with microsecond temporal resolution, high dynamic range, and minimal latency. This unique representation closely mimics biological vision and enables efficient perception in challenging scenarios such as high-speed robotics, autonomous driving, aerospace navigation, and neuromorphic computing. Over the past few years, event-based methods have shown remarkable progress in tasks such as 3D reconstruction, visual odometry, optical flow estimation, object detection, and simultaneous localization and mapping (SLAM), among others [21]. However, the asynchronous and sparse nature of events also demands new signal processing methods specifically tailored to their temporal structure. In particular, frequency-domain analysis plays a central role in understanding and filtering visual signals, yet existing Fourier transform libraries are designed for dense, synchronous data. Consequently, applying standard FFT techniques to events usually requires binning events into frames by time-binning (or count-binning) over an interval and then computing an FFT of the aggregated image, which quantizes the update times to the bin boundaries.

The recently proposed eFFT algorithm addresses this limitation by introducing a method to compute the exact Fourier transform of events [22]. In eFFT, events are interpreted as asynchronous updates to a spatial signal I_t(x,y). At any time t, eFFT provides the 2D DFT of the current state F_t(u,v) = F{I_t(x,y)}. The key idea is to maintain the intermediate matrices of the Radix-2 FFT in a tree structure and update only the affected branches when new events arrive. eFFT can operate according to the two main event processing schemes: event-by-event and using event packets. When operating event-by-event, each incoming event triggers a localized recomputation that updates the internal matrices of eFFT, improving efficiency through computation reuse. When operating with packets, the groups of events are efficiently integrated by exploiting shared intermediate results, filtering redundant events, and avoiding unnecessary operations, hence achieving remarkable computational efficiency. The contribution of events is therefore not accumulated into time windows; rather, the spatial state and its spectrum are updated at the timestamps of events (or after a packet of events). The eFFT enables low-latency frequency-domain operations, making it particularly suitable for applications such as event-based denoising and filtering, pattern analysis, and registration, where repeated updates of a 2D spectrum are required without recomputing an FFT from scratch (see Figure 4).

eFFT-C++ provides an open-source and general-purpose implementation of this algorithm. The library is written in modern C++17, designed as a header-only package for ease of integration, and relies solely on Eigen3 as its external dependency. It includes test suites against FFTW3 using Google Test to guarantee numerical correctness and benchmarking tools built with Google Benchmark to measure performance under realistic workloads. This makes it a practical tool for researchers and developers who wish to reproduce the results of the original publication, explore event-based frequency analysis, or deploy the algorithm in resource-constrained platforms.

The availability of a reliable implementation has several implications. First, it lowers the barrier to entry for scientists interested in event-based signal processing by providing a validated codebase. Second, it enables systematic benchmarking and comparison with conventional FFT libraries, highlighting the computational savings of the event-driven paradigm. Third, it supports integration into robotics and vision pipelines, where low-latency and efficient spectral analysis can improve tasks such as feature extraction, motion estimation, or event-based filtering. Finally, by releasing the code under GPLv3, the project fosters reproducibility and encourages contributions from the broader research community.

The main contributions of this work are: (i) beyond prior code releases [22], we provide the first complete, general-purpose C++17 implementation of the eFFT, designed for easy usage; (ii) we expose a compact, consistent Application Programming Interface (API) that decouples raw events from persistent pixel states (stimuli), enabling seamless switching between streaming and batched workflows; (iii) we release a rigorous validation suite, together with a benchmark that reports latency/throughput across frame and packet sizes; (iv) we include Python bindings and a documented, reproducible build and emulation environment (CMake and Docker) to facilitate adoption, testing, and comparison; and (v) we document design choices, reference implementations, and usage patterns, providing a stable baseline for reproducible research and downstream integration.

Implementation and architecture

The eFFT algorithm computes the exact discrete Fourier transform of the spatial information in the asynchronous events generated by event cameras. To achieve this, eFFT exploits the Radix-2 decomposition of the FFT. The main idea is to map incoming events into the Fourier domain by updating only the coefficients affected by the event, instead of recomputing the entire transform. The matrices associated with the decomposition are stored in a quadtree structure, where each node corresponds to a stage in the FFT and contains precomputed factors. When a new event arrives, only a localized subset of the tree needs to be updated. Most of the existing values are reused, thereby preserving exactness while avoiding redundant computations.

Let F_t(u,v) denote the DFT of an image I_t(x,y) of size N×N, with N being any power of 2:

(1)   F_t(u,v) = \sum_{x=0}^{N-1} \sum_{y=0}^{N-1} I_t(x,y) \, e^{-j 2\pi (ux/N + vy/N)}.

In the event camera model, an event is defined as e = (x, y, t)¹, where (x, y) are the pixel coordinates and t is the timestamp. Events are the raw, time-stamped measurements generated by the sensor. For Fourier analysis, eFFT uses the notion of a stimulus, which represents the persistent contribution of a pixel to the Fourier domain: (x, y, σ), where σ ∈ {0, 1} encodes whether the pixel is active (i.e., its contribution should be included in the spectrum computation) or inactive (its contribution should be removed from the computation). Stimuli provide an explicit representation of changes in the 2D spatial information (I_t in Equation (1)) of the events used for spectral analysis. Updating F_t(u,v) naively after each event would require numerous operations. The eFFT algorithm avoids this by exploiting the Radix-2 Cooley-Tukey factorization of the FFT, which can be written as:

(2)   F_t(u,v) = \sum_{r_x, r_y \in \{0,1\}} W_N^{u r_x + v r_y} \sum_{n=0}^{N/2-1} \sum_{m=0}^{N/2-1} I_t(2n + r_x, 2m + r_y) \, W_{N/2}^{u n + v m},

with W_t = e^{-j 2π/t}. This decomposition can be represented as a quadtree of depth log2(N), where each node stores intermediate results of the FFT stages. eFFT maintains this Radix-2 decomposition in a quadtree structure. When a new stimulus update occurs, its contribution propagates only along one tree branch (one node per dimension). Rather than explicitly updating all coefficients, eFFT modifies only the affected branches of the decomposition tree (while reusing previously computed values), which yields the same result when the Fourier coefficients are read out. Hence, eFFT maintains an internal state (the matrices stored in the tree), and each incoming stimulus efficiently updates this state, avoiding the recomputation of all branches of the FFT decomposition. Formally, each event triggers a state transition of I_t(x,y) at its pixel location. For an activation stimulus (x, y, σ = 1), the value of I_t(x,y) is set to one; for a deactivation stimulus (x, y, σ = 0), it is set to zero. Note that there is no need to explicitly materialize I_t(x,y) (i.e., binning), as this information is encoded at the last level of the quadtree (the leaves; 1×1 matrices). As a result, the matrices in the quadtree represent the most recent configuration of pixel activations implied by the event history up to time t. The output of eFFT is the 2D Fourier spectrum of this state. This output is stored at the top level of the tree (the root node; an N×N matrix) and is incrementally updated as stimuli are inserted.

Two modes of operation are supported:

  • Event-by-event: Each incoming event is immediately integrated by updating the corresponding stimulus, which triggers a localized update of the tree.

  • Packet-based: A set of events {e_i} is accumulated and integrated together. The tree structure detects overlapping updates within the packet and reuses shared factors, lowering the average per-event cost. Packet-based updates are a computational batching strategy: events are still applied as a sequence of state changes, but the implementation amortizes work by pruning redundant updates and reusing shared factors.

This tree-based incremental update guarantees that at any instant, the Fourier transform is equal to that produced by recomputing the FFT, but with substantially fewer operations. In practice, events arriving from a sensor or dataset can be accumulated in a queue structure (e.g., std::queue) and progressively converted into stimuli by updating the activation state of each pixel. This mechanism decouples the asynchronous event flow from the internal representation: the queue preserves the temporal order of events (x,y,t), while the library updates the corresponding stimuli (x,y,σ) to maintain the Fourier transform. This design enables the seamless integration of eFFT into event-based pipelines that deliver data as continuous, asynchronous events.

Software architecture

The library is designed as a modular, header-only C++17 package (efft.hpp), relying solely on the Eigen3 library for matrix operations. Additional components for validation and performance evaluation are provided, but they are kept independent of the core code. The architecture of eFFT-C++ is summarized in Figure 1. Its main components are:

  • Core library: The template class eFFT<N> encapsulates the Radix-2 quadtree structure that stores intermediate FFT factors. It implements update methods for both single stimuli and packets of stimuli, ensuring exact maintenance of the Fourier transform. Being header-only, it can be easily included in larger projects without additional compilation steps.

  • Stimulus abstraction: Sensor events are represented internally as Stimulus(x,y, σ), where σ denotes the current activation state of the pixel. Collections of stimuli are handled through the Stimuli container, a lightweight vector-like structure that provides utilities such as filtering redundant events or applying global state changes. This abstraction allows seamless switching between event-by-event and packet-based updates.

  • Matrix backend: The library employs Eigen3 for linear algebra operations, enabling efficient storage and manipulation of complex-valued matrices that represent the Fourier coefficients.

  • Validation and testing: Numerical correctness is verified through a Google Test suite (tests/efft.cpp), which compares eFFT outputs against ground-truth results obtained with FFTW3. Tests cover both single-event and packet-based updates, ensuring that the incremental approach yields results within a predefined error bound.

  • Performance benchmarking: A dedicated Google Benchmark suite (benchmarks/efft_benchmark.cpp) evaluates runtime performance for different frame sizes and packet lengths. This provides quantitative measurements of the computational savings achieved by eFFT compared to frame-based FFT computation.

  • Build system integration: A CMake-based build system manages dependencies, compilation options, and test/benchmark targets. FFTW3 is optional and only required for validation and comparative benchmarking.

Figure 1

Scheme of the system architecture of eFFT-C++. The flow begins with asynchronous events, which are converted into stimulus abstractions and processed by the core eFFT<N> library based on a Radix-2 quadtree structure. The architecture enables the operation on both single stimulus and stimuli batches. Intermediate coefficients are handled through the Eigen3 backend, while the current spectrum is exposed via getFFT(). Validation and benchmarking modules (Google Test and Google Benchmark) operate independently from the core, and the build system orchestrated by CMake provides header-only distribution and Python bindings for seamless integration.

Software functionalities

The functionalities provided by the eFFT-C++ API are summarized in Table 2, which lists the available classes, methods, and their inputs and outputs. The main features of the library can be summarized as follows:

  1. Initialization: Construction and reset of the Radix-2 quadtree structure for frame sizes N×N. This prepares the internal state for subsequent event or packet updates.

  2. Single-event updates: Insertion or removal of individual stimuli (x,y,σ). Each update propagates locally through the tree, reusing intermediate results and avoiding full recomputation. This provides exact maintenance of the Fourier transform with low traversal cost per dimension.

  3. Packet-based updates: Efficient integration of batches of events by operating directly on Stimuli containers. The algorithm detects and eliminates redundant or contradictory events within the packet (e.g., successive activations and deactivations of the same pixel), reducing the average per-event cost.

  4. Fourier transform retrieval: Direct access to the current spectrum via getFFT(), which returns the coefficients as an Eigen complex matrix. This interface allows seamless downstream processing with the full Eigen3 numerical framework.

Table 2

Concise public API of eFFT-C++. Types: cfloat = std::complex<float> and cfloatmat = Eigen::Matrix<cfloat, Eigen::Dynamic, Eigen::Dynamic>.

Class      Method        Input        Output         Summary
eFFT<N>    eFFT          –            –              Builds lookup twiddles and allocates quadtree buffers.
eFFT<N>    ~eFFT         –            –              Releases FFTW plans when enabled (no-op otherwise).
eFFT<N>    framesize     –            unsigned int   Compile-time frame size N as a runtime integer.
eFFT<N>    initialize    –            void           Initialize internal state from a zero image.
eFFT<N>    initialize    cfloatmat&   void           Initialize from an N×N complex image (Eigen matrix).
eFFT<N>    update        Stimulus&    bool           Apply one stimulus. Returns true if spectrum changed.
eFFT<N>    update        Stimuli&     bool           Apply a batch of stimuli; prunes redundancies.
eFFT<N>    getFFT        –            cfloatmat&     Current Fourier spectrum.
eFFT<N>    initializeGT  cfloatmat&   void           Prepare FFTW plan and set input image.
eFFT<N>    updateGT      Stimulus&    bool           Apply one stimulus. Returns true if spectrum changed.
eFFT<N>    updateGT      Stimuli&     bool           Apply a batch of stimuli; prunes redundancies.
eFFT<N>    getGTFFT      –            cfloatmat      Ground-truth FFT with FFTW (if enabled).
eFFT<N>    check         –            double         Norm of difference: ‖getFFT() − getGTFFT()‖.
Stimulus   on            –            Stimulus&      Set state to true.
Stimulus   off           –            Stimulus&      Set state to false.
Stimulus   set           bool         Stimulus&      Explicitly set state.
Stimulus   toggle        –            Stimulus&      Flip state (on/off).
Stimuli    on            –            void           Set all contained stimuli to true.
Stimuli    off           –            void           Set all contained stimuli to false.
Stimuli    set           bool         void           Apply same state to all contained stimuli.
Stimuli    toggle        –            void           Flip state of all contained stimuli.

Code snippets

eFFT-C++ provides a simple and consistent API that abstracts the complexity of maintaining an incremental Fourier transform. Users interact only with high-level objects such as eFFT, Stimulus, and Stimuli, while the internal tree structure is updated transparently. Both event-by-event and packet-based modes follow the same interface, making it straightforward to switch between single updates and batch processing, depending on the application requirements. All examples are designed to compile without modification in a standard C++17 environment with Eigen3, and closely mirror the unit tests and benchmark programs distributed with the codebase. The following code snippets illustrate the most common usage patterns, ranging from minimal setup to validation and integration.

  • Minimal usage example: This example demonstrates the most straightforward workflow: initializing the transform, inserting a single stimulus, and retrieving the current Fourier spectrum. It highlights the library’s header-only design and direct access to Fourier coefficients.

    [Code listing: jors-14-642-g5.png]

  • Packet update with multiple stimuli: This example demonstrates how a batch of stimuli can be integrated efficiently in a single step. The Stimuli container automatically manages collections of activations and deactivations.

    [Code listing: jors-14-642-g6.png]

  • Validation with FFTW3: This example illustrates the validation process against FFTW3. By mirroring updates into a ground-truth FFT implementation, users can verify the correctness of incremental updates using check(), which measures the error norm between the eFFT and FFTW3 outputs. This mechanism serves as the basis for the automated test suite integrated into the build system.

    [Code listing: jors-14-642-g7.png]

  • Converting events into stimuli using a queue: This example shows how raw events (x,y,t) from a sensor can be converted into stimuli (x,y,σ) and processed in a streaming fashion. A fixed-size buffer implements a sliding window of active pixels, where new activations are inserted and the oldest entries are deactivated once the buffer exceeds its maximum size. This pattern demonstrates how eFFT can be integrated with real-time event camera data.

    [Code listing: jors-14-642-g8.png]

Python bindings

To facilitate rapid experimentation and integration in Python pipelines, eFFT-C++ ships with thin, zero-copy Python bindings that expose the same concepts as the C++ API while returning numpy.ndarray objects. The bindings are provided through nanobind and are built alongside the C++ library using CMake. The Stimulus class and the Stimuli container mirror the C++ types. The factory function eFFT(n) returns an instance of the corresponding eFFT class. As in the C++ implementation, the spectrum can be updated with either a single Stimulus or a Stimuli batch. The current spectrum is retrieved with get_fft(), which returns a complex NumPy array in np.complex64. An example of the use of eFFT in pure Python is provided next:

[Code listing: jors-14-642-g9.png]

The Python bindings convert internal complex matrices into numpy.ndarray without copies when safe. This enables direct downstream use with scientific Python tools (e.g., scipy.signal). Unit tests via pytest validate: (i) object construction and field semantics for Stimulus and Stimuli; (ii) shape and type of get_fft() output; and (iii) numerical equivalence of eFFT with numpy.fft.fft2 under randomized events for both single-event and packet updates. The bindings keep the Python surface minimal and idiomatic: plain data classes for stimuli, an iterable container for batches, a read-only framesize, and an explicit initialize() for deterministic resource control. Returning NumPy arrays from get_fft() avoids copies when possible, minimizing overhead.

Quality control

Testing

eFFT-C++ is validated with a Google Test (gtest) suite that verifies the correctness of the public API (Stimulus, Stimuli, and eFFT<N>) and validates eFFT against an FFTW3 ground truth. Tests are organized to cover: (i) data types and semantics; (ii) operations in batches; and (iii) numerical equivalence with the ground truth under both event-by-event and packet-based updates across multiple frame sizes. To compute the FFTW3 ground truth, we explicitly materialize the current spatial state I_t(x,y) implied by the stimulus history. After each single-stimulus update (or after applying a packet), this explicit image I_t is provided as input to a conventional 2D FFT computed with FFTW3, yielding F_t^GT(u,v). We then compare the spectrum maintained by eFFT, F_t(u,v), against F_t^GT(u,v) by measuring the Frobenius norm of their difference (i.e., ‖F_t − F_t^GT‖_F). For all runs, the suite enforces ASSERT_LT(check(), 1e-3) after every single update and ASSERT_LT(check(), 1e-1) after each packet update. Across all settings, eFFT-C++ produces spectra that match FFTW3 to numerical precision: in all runs, the Frobenius error remained below the assert threshold, with no drift over long sequences. This holds for both event-by-event and packet-based processing.

Benchmarking

This section compares eFFT-C++ and standard frame-based FFT computation using FFTW3 with a Google Benchmark (benchmark) suite. Evaluation was performed on synthetic events generated within the benchmark suite. Events are drawn uniformly at random over the N×N image grid, simulating asynchronous activations and deactivations. Randomness is implemented with the C++17 standard <random> facilities: a reproducible Mersenne Twister generator (std::mt19937) with a uniform integer distribution over pixel coordinates [0, N−1], while a Bernoulli distribution with p = 0.5 (implemented via parity) decides the activation state. In the event-by-event benchmarks, independent events are produced and processed one at a time. In the packet-based benchmarks, random packets of fixed sizes are generated by repeatedly sampling from the distributions and then fed in a single update step. This synthetic generation avoids dataset-specific biases and isolates the algorithmic behavior. FFTW3 is used as the ground-truth frame-based method. For a fair comparison, hardware-specific and memory-alignment optimizations were disabled (FFTW_ESTIMATE, FFTW_UNALIGNED, and FFTW_NO_SIMD). Although these flags do not maximize FFTW's absolute performance, this configuration ensures a comparable computational baseline for both algorithms. We acknowledge the extensive optimization work behind FFTW, which lies well beyond the scope of this work.

  • Benchmark 1: It measures the per-update cost when the spectrum is read out after each event-by-event update. For each frame size N ∈ {16, 32, 64, 128, 256}, the harness processes exactly N_e = 250 events per iteration. Two variants were run: (i) a frame-based baseline using FFTW3 (BenchmarkFeedWithEventsFFTW); and (ii) the incremental method (BenchmarkFeedWithEvents). Results are reported in Figure 2. The CPU time includes not only event processing but also random-event generation and spectrum retrieval at each iteration. Let t_iter denote the total wall-clock time per iteration as measured by Google Benchmark. Then:

    • ■ Per-event latency: t_iter / N_e

    • ■ Throughput (in MEPS²): N_e / (t_iter · 10^6)

  • Benchmark 2: It evaluates amortized performance when events are integrated in packets. For each frame size N ∈ {128, 256} and packet size B ∈ {1000, 5000, 10000, 50000, 100000}, the harness processes a fixed total of N_e = 500000 events per iteration. Again, two versions were run: (i) the frame-based baseline using FFTW3 (BenchmarkFeedWithPacketsFFTW) and (ii) the incremental method (BenchmarkFeedWithPackets). Each packet is generated by sampling B random events and then applied in a single update call, after which the spectrum is retrieved. Results are reported in Figure 3. As in Benchmark 1, the reported CPU time comprises event generation, packet integration, and spectrum retrieval. It is interesting to note how the cost of eFFT tends toward that of the standard FFT as the packet size increases, since, assuming a uniform distribution of events within the frame, the likelihood that all matrices in the tree need to be recalculated also increases. This behavior allows for modulation of a trade-off between latency and computational efficiency, and is discussed in more detail (including how to reduce the cost in real-world scenarios) in [22].

    • ■ Packets per iteration: N_e / B

    • ■ Per-packet latency: t_iter · B / N_e

    • ■ Throughput (in MEPS): N_e / (t_iter · 10^6)

Figure 2

Event-by-event benchmark (Benchmark 1) results. Google Benchmark time per iteration (wall-clock) versus frame size N ∈ {16, 32, 64, 128, 256}. Each iteration processes N_e = 250 events, including random-event generation, updates, and spectrum retrieval after every event.

Figure 3

Packet-based benchmark (Benchmark 2) results. Google Benchmark time per iteration (wall-clock) versus packet size B ∈ {1, 5, 10, 50, 100}·10^3 for frame size 128 (top) and 256 (bottom). Each iteration integrates a fixed total of N_e = 5·10^5 events. Timing includes random-event generation, packet updates, and spectrum retrieval after each packet.

(2) Availability

Operating system

Linux, macOS, and Windows

Programming language

C++ ≥ 17, Python ≥ 3.8

Additional system requirements

CMake ≥ 3.20

Dependencies

Eigen3 ≥ 3.4.0, FFTW3 (optional), Google Test, Google Benchmark

List of contributors

Raul Tapia, José Ramiro Martínez-de Dios, Anibal Ollero

Software location

Code repository

Emulation environment

Language

C++, Python, CMake, Dockerfile

(3) Reuse Potential

The release of eFFT-C++ has several impacts on both the scientific community and practical applications:

  • Scientific reproducibility: By providing the first official, open-source implementation of the eFFT algorithm, the library ensures that the results reported in [22] can be independently verified and extended. Researchers can build directly upon a validated baseline, accelerating adoption of frequency-domain methods in event-based vision.

  • New research opportunities: The availability of an exact, efficient Fourier transform for events enables a wide range of frequency-domain approaches that were previously impractical. Potential applications include event-based denoising, motion segmentation, template matching, and registration tasks. Some examples of currently implemented applications are presented in Figure 4.

  • Improving existing research: Traditional event processing pipelines typically rely on spatio-temporal representations. eFFT-C++ reduces the computational cost of incorporating spectral analysis while maintaining exactness, thereby lowering the barrier to applying Fourier-based tools to high-rate event data.

  • Practical deployment: Benchmarks demonstrate very fast event processing times for realistic packet sizes, making the library suitable for robotics scenarios with event rates above one million events per second. Its header-only C++17 implementation, with minimal dependencies (Eigen3 and optionally FFTW3), facilitates deployment on resource-constrained platforms, such as embedded ARM boards.

  • Potential for industrial impact: Event cameras are gaining traction in autonomous driving, aerial robotics, and defense and security. The efficient online spectral representation provided by eFFT-C++ could serve as a foundation for commercial applications in real-time filtering, tracking, and compression, and may inspire further developments toward GPU and FPGA acceleration.

Figure 4

Examples of applications of eFFT. From top to bottom: denoising, pattern analysis, and registration. (1) Denoising: A low-pass filter in the frequency domain is used to suppress high-frequency noise. Artificial noise (5000 random noise events; see top-left) was added to the original events. (2) Pattern analysis: A directional edge filter in the frequency domain is applied to enhance edges within a specific angular range (∼90°). Green lines (middle-right) have been thickened for a better visualization. (3) Registration: Two event slices from different time instants are aligned via phase cross-correlation. The sequence used is urban from the Event Camera Dataset [23].

The software is distributed under the GPLv3 license, which guarantees that derivative works remain open-source and promotes reproducibility and collaborative development within the research community. A LICENSE file is included in the eFFT-C++ repository.

Contributions: The repository is open to contributions from any researcher; issues and pull requests are welcome. The CONTRIBUTING.md and CODEOWNERS files are also available in the repository.

Notes

[1] The polarity of the event, that is, whether the light brightness increases (positive) or decreases (negative), is not considered in this work.

[2] Million Events Per Second.

DOI: https://doi.org/10.5334/jors.642 | Journal eISSN: 2049-9647
Language: English
Submitted on: Nov 12, 2025
Accepted on: Mar 16, 2026
Published on: Apr 16, 2026
Published by: Ubiquity Press
In partnership with: Paradigm Publishing Services
Publication frequency: 1 issue per year

© 2026 Raul Tapia, José Ramiro Martínez-de Dios, Anibal Ollero, published by Ubiquity Press
This work is licensed under the Creative Commons Attribution 4.0 License.