[LV2] should plugins somehow indicate whether they support MPE?

Stefan Westerfeld stefan at space.twc.de
Sun Jun 25 05:44:40 PDT 2017


On Wed, Jun 21, 2017 at 04:18:30PM +0200, Hanspeter Portner wrote:
> On 20.06.2017 21:20, Stefan Westerfeld wrote:
> >    Hi!
> > 
> > On Tue, Jun 20, 2017 at 07:26:20PM +0200, Hanspeter Portner wrote:
> >> On 20.06.2017 18:11, Stefan Westerfeld wrote:
> >>> On Sat, Jun 17, 2017 at 05:39:08PM +0200, Hanspeter Portner wrote:
> >>>> On 17.06.2017 13:04, Stefan Westerfeld wrote:
> >>>>> I don't know if that affects other plugins (yet), but the next version of
> >>>>> SpectMorph will support MPE (Multidimensional Polyphonic Expression). That way
> >>>>> a host can change parameters of individual notes that are already playing.
> >>>>
> >>>> LV2 parameters/controls for hosts are strict singletons; there is no notion of
> >>>> either polyphony or per-note parameters/controls.
> >>>>
> >>>> But you can of course support that internally in your plugin.
> >>>
> >>> Right. In a way, MPE is one big workaround for the constraint that the host and
> >>> plugin communicate notes via MIDI. It allows me to keep providing only my
> >>> current VST2.4 and LV2 plugins, and still get per-note parameters, such as pitch.
> >>>
> >>> I wonder if I should look into supporting VST3, which seems to have this by
> >>> design.  But then, LV2, which doesn't have it by design, would also need to be
> >>> extended somehow.
> >>>
> >>>>> What I have implemented so far in SpectMorph is changing the pitch. Unlike
> >>>>> conventional pitch-bend messages, this allows users to bend each note
> >>>>> individually, so you could slide from a C major chord to D minor. Obviously, in
> >>>>> this case a per-note-pitch UI like the one Bitwig provides - which I used for
> >>>>> development and testing - makes sense.
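
To make the mechanism concrete: per-note pitch falls out of ordinary MIDI
parsing once each note lives on its own channel, which is all MPE really does.
A minimal sketch (names made up; assuming the draft's suggested +/-48 semitone
bend range on member channels):

  #include <stdint.h>

  #define N_CHANNELS 16

  typedef struct {
    int   note;   /* note sounding on this channel, -1 if none */
    float bend;   /* per-note pitch bend in semitones */
  } Voice;

  static Voice voices[N_CHANNELS];

  static void
  handle_midi (const uint8_t *msg)
  {
    const int channel = msg[0] & 0x0f;
    switch (msg[0] & 0xf0)
      {
        case 0x90: /* note on: an MPE host puts each note on its own channel */
          voices[channel].note = msg[1];
          voices[channel].bend = 0;
          break;
        case 0x80: /* note off */
          voices[channel].note = -1;
          break;
        case 0xe0: /* pitch bend: affects only the note on this channel */
          {
            const int value = (msg[2] << 7) | msg[1]; /* 14 bit, center 8192 */
            voices[channel].bend = (value - 8192) * (48.0f / 8192.0f);
          }
          break;
      }
  }
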
> >>>>
> >>>> How do you plan to implement the MPE controller messages (e.g. pressure/timbre
> >>>> in MPE terms)? Would you take the current singleton lv2:Control/Parameter value
> >>>> as the initial value for each new note and overwrite it according to MPE
> >>>> controller messages? What if the parent lv2:Control/Parameter is changed by the
> >>>> host simultaneously? Just curious.
> >>>
> >>> To be honest, I don't know yet how per-note timbre events should be handled. It
> >>> is probably not as simple as you say, because we often have internal LFOs in the
> >>> plugin which affect timbre. This is because most hosts (unlike Bitwig) - as far
> >>> as I know - do not allow you to say: this timbre parameter should by default
> >>> vary with a 0.2 Hz sine wave.
> >>>
> >>> So just taking the LV2 control parameter for timbre is often not good enough;
> >>> the user can already make this parameter vary according to an LFO.
> >>>
> >>> It could work like this: use either the
> >>>  - internal LFO
> >>>  - LV2 control parameter
> >>>
> >>> if no MPE information is there, and use the MPE timbre directly otherwise. This
> >>> would mean that the MPE timbre overrides any other (default) specification
> >>> for timbre.
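
In code, the override rule I have in mind would be roughly this (a sketch
only; all names are hypothetical):

  /* MPE timbre, once received for a voice, wins over both the internal
   * LFO and the singleton LV2 control. */
  typedef struct {
    int   mpe_timbre_received;  /* host sent CC 74 on this voice's channel */
    float mpe_timbre;
  } VoiceTimbre;

  static float
  effective_timbre (const VoiceTimbre *v, float lv2_control,
                    float lfo_value, int lfo_enabled)
  {
    if (v->mpe_timbre_received)
      return v->mpe_timbre;   /* MPE overrides any default */
    if (lfo_enabled)
      return lfo_value;       /* user-configured internal LFO */
    return lv2_control;       /* plain singleton control as fallback */
  }
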
> >>>
> >>>>> In VST, my plugin reacts to a new canDo("MPE") to indicate to hosts with MPE
> >>>>> support that MPE messages (like per-note pitch bend) should be sent. Bitwig,
> >>>>> for instance, will not send any MPE messages to the plugin unless this canDo
> >>>>> is supported.
> >>>>>
> >>>>> Since MPE is MIDI-only, my LV2 plugin will automatically support MPE now. I
> >>>>> wonder if it should somehow indicate in the plugin description that it does.
> >>>>
> >>>> Support could be annotated on the port with an atom:supports property.
> >>>>
> >>>> mybundle:myplug
> >>>>   a lv2:Plugin ;
> >>>>
> >>>>   lv2:port [
> >>>>     a lv2:InputPort, atom:AtomPort ;
> >>>>     lv2:index 0 ;
> >>>>     lv2:symbol "myPort" ;
> >>>>     atom:bufferType atom:Sequence ;
> >>>>     atom:supports midi:MidiEvent, midi:MpeMessage ;
> >>>>   ] .
> >>>>
> >>>> And midi.ttl could be extended with a definition of midi:MpeMessage:
> >>>>
> >>>> midi:MpeMessage
> >>>>   a rdfs:Class ;
> >>>>   rdfs:subClassOf midi:MidiEvent ;
> >>>>   rdfs:label "Multidimensional Polyphonic Expression" .
> >>>
> >>> Sounds reasonable.
> >>>
> >>>>> I don't know of any LV2 host that supports MPE so far, so if the LV2 strategy
> >>>>> for MPE were to wait until we have at least one host which supports MPE, and
> >>>>> then discuss negotiation, that would be OK for me, too.
> >>>> Or better: wait until the MPE spec is actually finalized. Currently the draft is
> >>>> under (closed) consideration by the MIDI Association, IIRC, and it may well
> >>>> change. Once something is in the LV2 spec, it cannot (easily) be changed...
> >>>
> >>> Ok. So we can wait for that to happen before annotating it.
> >>>
> >>>> I'm pretty interested in polyphonic expression, but the MPE draft is terribly
> >>>> broken, as it only allows 15 (!) concurrent notes with independent expression.
> >>>
> >>> MPE is a workaround. The proper way to do it would be something like VST3, where
> >>> the host (as far as I understand it) tags each note-on event with a unique id.
> >>
> >> Hm, this reminds me of something. I vaguely remember having proposed an addition
> >> to the spec to get unique IDs. Will bump it in the corresponding thread.
> > 
> > Right, but as far as I understand it, this only allows generating IDs; to make
> > them useful, they also need to be communicated along with the note-on events.
> 
> Sure, this is your usual chicken-and-egg problem. Generating unique IDs does not
> need to be bound to event handling; it can be useful for a lot of other cases,
> so it makes sense for it to reside in the URID extension.
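
Something with the shape of the existing URID map feature would presumably do.
Purely hypothetical - nothing like this exists in the spec today:

  #include <stdint.h>

  /* Hypothetical host feature for allocating unique IDs, modelled after
   * the shape of LV2_URID_Map. */
  typedef void *LV2_ID_Allocator_Handle;

  typedef struct {
    LV2_ID_Allocator_Handle handle;
    uint32_t (*next_id) (LV2_ID_Allocator_Handle handle); /* returns a fresh ID */
  } LV2_ID_Allocator;
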
> 
> >>> That way, subsequent note expression parameter changes are addressed to a note
> >>> id. Since this is not limited to 15 ids, we can avoid the problem you mentioned.
> >>>
> >>> But this would mean that LV2 would somehow need to have
> >>>  - a protocol (other than MIDI) to send note-on events, so that a note id can
> >>>    be assigned by the host
> >>>  - a way to send per-note expression control changes
> >>>  - a way to define per-note controls
> >>>
> >>> If LV2 had this, I could probably support it, and hosts that supported it would
> >>> simply work with LV2's per-note-expression protocol. Standard parameters like
> >>> per-note pitch should also be defined, and would then only affect the note with
> >>> the right id.
> >>
> >> I'm experimenting/prototyping with such event systems in LV2. This is definitely
> >> doable with LV2's extensible atom event system.
> > 
> > Yes, I believe so, too. LV2 is flexible enough that adding note expression
> > should be possible without a complete redesign. A port type which supports
> > more-than-MIDI messages would be necessary.
> > 
> >> But I wonder if plugins should need to implement such complexity at all.
> >> Wouldn't it be more straightforward to force such plugins to be monophonic in
> >> their very nature and just let the host spawn the needed number of instances to
> >> achieve polyphony (I think Ingen can do that)? The host would thus decode MIDI
> >> MPE (or something better), plugins wouldn't need to implement it, it would not
> >> interfere with LV2's single-value control/parameter scheme, and state
> >> saving/restoration would also work as intended.
> > 
> > I am fairly certain that this is not good enough for all cases.
> > 
> > Consider a simple soundfont player. If you instantiate a monophonic plugin 64
> > times, once per potential voice, the naive implementation would load the
> > soundfont 64 times. You could propose workarounds, like making the plugin
> > instances implicitly share state, but then it is no longer as simple and as
> > transparent as you say.
> > 
> > Also, a soundfont player would probably want to apply effects such as reverb
> > to all notes being played. In your model, this would mean computing the reverb
> > 64 times, which is inefficient.
> > 
> > If I look at SpectMorph, there are LFOs which affect all voices. It is possible
> > to share the LFO phase between all voices, so if you play a chord, all voices
> > will have the same slowly changing timbre. Sharing the phase is no longer
> > trivial if the individual voices live in different isolated plugin instances.
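
Inside one instance, sharing is just one phase accumulator advanced once per
block; a sketch (render_voice is a hypothetical per-voice render function):

  #include <math.h>
  #include <stdint.h>

  #define N_VOICES 64

  typedef struct {
    double lfo_phase;   /* one accumulator, shared by all voices */
    double lfo_freq;    /* e.g. 0.2 Hz */
  } SharedLFO;

  void render_voice (int voice, float lfo_value, uint32_t n_frames);

  void
  run_block (SharedLFO *s, double sample_rate, uint32_t n_frames)
  {
    const float lfo_value = (float) sin (2 * M_PI * s->lfo_phase);
    for (int v = 0; v < N_VOICES; v++)
      render_voice (v, lfo_value, n_frames); /* every voice sees the same LFO */
    s->lfo_phase = fmod (s->lfo_phase + s->lfo_freq * n_frames / sample_rate, 1.0);
  }

Split the voices across 64 plugin instances and this single variable becomes
distributed state that has to be synchronized somehow.
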
> > 
> > Another SpectMorph example is that the behaviour is not determined by the LV2
> > controls alone. The UI and plugin share a complex piece of state, the "morph
> > plan", which determines the sound. Currently, we have a 1:1 mapping between the
> > UI and the DSP code, so the UI sends state change events produced by the user
> > to the single DSP plugin. If we had a 1:64 mapping, each UI state change
> > message would have to be processed 64 times, once by each voice. This would be
> > inefficient at the very least, and might cause other problems.
> > 
> > Also, if we added some visualization, with the current model we could send
> > visualization data once from the DSP plugin to the UI. It is not clear to me
> > how this case would even work in your proposed monophonic model.
> 
> Sure, these are all valid points. For complex synths, spawning multiple
> instances makes no sense.

Right.

> I was just thinking out loud about whether there is a host-side-only way of
> handling expressive polyphony which would be backwards-compatible with a lot of
> existing MIDI synth plugins out there.
> 
> I'm just being realistic here. New extensions are generally picked up very
> slowly by plugin/host authors. A new extension also makes it harder to port
> existing MIDI-based synths to LV2, etc.

Ok, let me also think out loud. I see three different cases here:

1. Modular synths: these can obviously use existing LV2 plugins (like
oscillators or filters) to implement note expression, without the plugins being
aware that note expression even exists. However, a sequencer (Ardour/Qtractor)
will have to communicate with the modular synth and send it note-on events
(which the synth redistributes as note expression to the controls of the
oscillators/filters). But then the interface between Ardour/Qtractor and the
modular synth will need per-note controls, which cannot be reduced to the
existing LV2 MIDI interface for the reasons given above.

2. Per-note instantiation: if a MIDI synth is simple enough, it can be
instantiated per note. This could be implemented in the host, or as in the
modular synth case: a modular synth LV2 plugin instantiates MIDI synths per
note, but uses as input an LV2 extension which supports per-note controls.

3. Complex synths: these should use an LV2 extension which supports per-note
controls.

To sum it up: if we had an LV2 extension with per-note controls, sequencers
could use that. This would cover all cases for the host. The host could also
directly support cases 1 and 2 if it chooses to, but hosts that cannot (for
instance because they don't want to implement a full modular synthesis
environment) could rely on LV2 plugins which do it for them - like embedding
Ingen, which does the polyphonic processing of a graph of LV2 plugins. If such
a modular synth supports an LV2 extension with per-note controls, the host
would not need to know about it.
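
To make that a bit more concrete, the events of such an extension could look
roughly like this (a pure sketch - none of these types exist in LV2 today, and
the real thing would presumably be atom-based):

  #include <stdint.h>
  #include "lv2/lv2plug.in/ns/ext/urid/urid.h"

  /* Note on: the host assigns the unique id. */
  typedef struct {
    uint32_t note_id;   /* unique per sounding note, not limited to 15 */
    uint8_t  key;       /* MIDI-style note number */
    uint8_t  velocity;
  } NoteOnEvent;

  /* Per-note control change: addressed to a note id instead of a channel. */
  typedef struct {
    uint32_t note_id;
    LV2_URID control;   /* e.g. a URID for per-note pitch or timbre */
    float    value;
  } NoteControlEvent;

With a note id in both events, per-note pitch is just a NoteControlEvent
carrying the pitch control URID, and nothing limits the number of notes that
can be expressive at the same time.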

   Cu... Stefan
-- 
Stefan Westerfeld, http://space.twc.de/~stefan

