
Figure 1
A comparison of the human control-to-sound and audio feedback loops in a traditional grand piano (left) and an interactive system (right). In the traditional setting, the human performer physically interacts with the piano keyboard, triggering its mechanical sound production and receiving immediate acoustic feedback. In an interactive system, human actions (e.g., via a computer keyboard) are mapped to synthesis parameters (e.g., amplitude, frequency), which generate digital audio feedback (e.g., guitar, violin, piano timbre) rendered through audio output devices.
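The key-to-parameter mapping described above can be sketched in a few lines. This is a minimal illustration, not an implementation from any reviewed system: the key-to-frequency table and the plain sine-wave synthesizer are assumptions chosen for simplicity.

```python
# Minimal sketch of the interactive-system loop in Figure 1 (right):
# a computer-keyboard key is mapped to synthesis parameters
# (frequency, amplitude), which drive a simple sine-wave synthesizer.
import math

# Illustrative mapping: keys "a", "s", "d" -> C4, D4, E4 (Hz).
KEY_TO_FREQ = {"a": 261.63, "s": 293.66, "d": 329.63}

def synthesize(key, amplitude=0.5, duration=0.1, sample_rate=44100):
    """Render a sine tone for the given key as a list of samples."""
    freq = KEY_TO_FREQ[key]
    n = int(duration * sample_rate)
    return [amplitude * math.sin(2 * math.pi * freq * i / sample_rate)
            for i in range(n)]

samples = synthesize("a")          # 0.1 s of a C4 tone
print(len(samples))                # number of rendered samples
```

In a real interactive system the rendered buffer would be streamed to an audio output device, and the mapping layer would typically be richer (velocity, timbre selection, continuous controllers) than this single key-to-frequency table.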

Figure 2
Temporal trajectory of sound synthesis research. Each point represents a distinct synthesis method reported in the literature, categorized into abstract digital sound synthesis, physical modeling synthesis, and neural audio synthesis.

Figure 3
Taxonomy of sound synthesis methods, organized into two main categories—parametric and data‑driven—following the structure proposed by Schwarz (2007) and Hayes et al. (2024). The parametric category includes both abstract digital sound synthesis and physical modeling synthesis (Bilbao, 2009). The data‑driven category comprises neural audio synthesis. At the third level, we list representative methods for string instrument synthesis within each category.

Figure 4
Sound synthesis methods used for various string instruments, grouped by excitation category: plucked, bowed, hammered, and other. Generic labels such as ‘Strings’ are preserved from the original studies where specific instruments were not defined or the method was applied to a general string model.

Figure 5
Mapping between a four‑dimensional evaluation framework and established criteria by Castagné and Cadoz (2003) and Jaffe (1995).

Figure 6
Distribution of formal and informal subjective evaluation methods across reviewed studies (Quest. = Questionnaires; Music. = Musicians).
