[LV2] LV2 atom:Sequence question
Dan H
madautodev at gmail.com
Mon Dec 3 14:25:17 PST 2018
>>> Hanspeter Portner wrote:
>>> E.g. you want some infrastructure you can push newly received events to
>>> and pop previous events destined for dispatch in this run() cycle.
>>
>> Dan H wrote:
>> Yes, so from the perspective of LV2 convention... is an array of structs
>> acceptable in the context of run(), or better to use something like
>> atom:Tuple for this?
>
> Hanspeter Portner wrote:
> Use whatever fifo-like structure that suits your use case. Just make sure
> it is appropriate to be used in an audio thread, e.g. it needs to be
> lock-free, wait-free, realtime-safe, etc.
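Right, that is what I ended up doing. For the archive, the structure boils
down to something like the sketch below. The names and sizes are just mine
for illustration (nothing from the LV2 API), error handling is stripped, and
since push and pop both happen inside run() on the same thread, plain indices
are enough and nothing allocates or locks:

/* Fixed-capacity queue for events waiting to be dispatched in a later
 * run() cycle.  Capacity and payload size are arbitrary examples. */
#include <stdbool.h>
#include <stdint.h>
#include <string.h>

#define PENDING_CAPACITY 64
#define PENDING_MAX_SIZE 16            /* enough for short MIDI messages */

typedef struct {
  double   beats;                      /* musical time at which to dispatch */
  uint32_t size;                       /* payload size in bytes */
  uint8_t  body[PENDING_MAX_SIZE];     /* raw MIDI bytes */
} PendingEvent;

typedef struct {
  PendingEvent events[PENDING_CAPACITY];
  uint32_t     head;                   /* next slot to pop */
  uint32_t     tail;                   /* next slot to push */
} PendingQueue;

/* Push a copy of the event; returns false if the queue is full. */
static bool
pending_push(PendingQueue* q, double beats, const uint8_t* body, uint32_t size)
{
  const uint32_t next = (q->tail + 1) % PENDING_CAPACITY;
  if (next == q->head || size > PENDING_MAX_SIZE) {
    return false;                      /* full, or payload too large */
  }
  q->events[q->tail].beats = beats;
  q->events[q->tail].size  = size;
  memcpy(q->events[q->tail].body, body, size);
  q->tail = next;
  return true;
}

/* Pop the oldest event if it is due before `until_beats`, else NULL.
 * The returned pointer is only valid until the next push. */
static const PendingEvent*
pending_pop_due(PendingQueue* q, double until_beats)
{
  if (q->head == q->tail) {
    return NULL;                       /* empty */
  }
  const PendingEvent* ev = &q->events[q->head];
  if (ev->beats >= until_beats) {
    return NULL;                       /* not due in this cycle */
  }
  q->head = (q->head + 1) % PENDING_CAPACITY;
  return ev;
}

It pops in push order, so it assumes events are queued with non-decreasing
beat times; anything fancier would need a small sorted structure instead.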
I have solved this with the info on this thread, roughly along the lines
sketched above; the run() side is below. I should also mention that I found
https://github.com/x42/midifilter.lv2 somewhat useful as an example.
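The run() side then converts incoming frame stamps to beat time, pushes them,
and pops whatever has come due in the current cycle, clamping the forged
frame offsets into 0..(nframes-1). Again only a sketch: the field names
(queue, bpm, beat_at_start, delay_frames, ...) are my own, and the
time:Position handling that updates bpm/beat_at_start is elided:

#include "lv2/lv2plug.in/ns/lv2core/lv2.h"
#include "lv2/lv2plug.in/ns/ext/atom/forge.h"
#include "lv2/lv2plug.in/ns/ext/atom/util.h"
#include "lv2/lv2plug.in/ns/ext/urid/urid.h"

typedef struct {
  const LV2_Atom_Sequence* in_port;        /* atom:Sequence input */
  LV2_Atom_Sequence*       out_port;       /* atom:Sequence output */
  LV2_Atom_Forge           forge;
  PendingQueue             queue;          /* from the sketch above */
  LV2_URID                 midi_MidiEvent; /* mapped midi:MidiEvent URID */
  double                   rate;           /* sample rate */
  double                   bpm;            /* from the last time:Position */
  double                   beat_at_start;  /* beat time at frame 0 of this cycle */
  double                   delay_frames;   /* requested delay, in frames */
} Plugin;

static void
run(LV2_Handle instance, uint32_t nframes)
{
  Plugin* self = (Plugin*)instance;

  /* Set up the forge to write directly into the output port buffer. */
  const uint32_t capacity = self->out_port->atom.size;
  lv2_atom_forge_set_buffer(&self->forge, (uint8_t*)self->out_port, capacity);
  LV2_Atom_Forge_Frame frame;
  lv2_atom_forge_sequence_head(&self->forge, &frame, 0);

  /* Frames per beat according to the last tempo we were told about. */
  const double fpb = (60.0 / self->bpm) * self->rate;

  LV2_ATOM_SEQUENCE_FOREACH(self->in_port, ev) {
    if (ev->body.type == self->midi_MidiEvent) {
      /* Store the event with the beat time at which it should be emitted. */
      const double when = self->beat_at_start
                        + (ev->time.frames + self->delay_frames) / fpb;
      pending_push(&self->queue, when,
                   (const uint8_t*)(ev + 1), ev->body.size);
    }
    /* (time:Position events would update self->bpm / self->beat_at_start.) */
  }

  /* Dispatch everything that falls inside this cycle. */
  const double beats_end = self->beat_at_start + nframes / fpb;
  const PendingEvent* p;
  while ((p = pending_pop_due(&self->queue, beats_end))) {
    /* Map beat time back to a frame offset, clamped to 0..nframes-1. */
    double offset = (p->beats - self->beat_at_start) * fpb;
    if (offset > nframes - 1.0) offset = nframes - 1.0;
    if (offset < 0.0)           offset = 0.0;

    lv2_atom_forge_frame_time(&self->forge, (int64_t)offset);
    lv2_atom_forge_atom(&self->forge, p->size, self->midi_MidiEvent);
    lv2_atom_forge_write(&self->forge, p->body, p->size);
  }

  self->beat_at_start = beats_end;   /* advance the running beat counter */
  lv2_atom_forge_pop(&self->forge, &frame);
}

The clamp is the part that was missing from my first attempt with
lv2_atom_forge_frame_time(..., ev->time.frames + n): frame times written to
the output sequence have to stay inside the current cycle, so anything later
simply stays queued for a later call to run().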
Many thanks
On Sun, Dec 2, 2018 at 9:26 AM Hanspeter Portner
<dev at open-music-kontrollers.ch> wrote:
> On 01.12.18 09:05, Dan H wrote:
> >
> > On 30.11.18, Hanspeter Portner wrote:
> >
> > In a given run(LV2_Handle instance, uint32_t nframes) method you are only
> > allowed to send events for frames 0 up to (nframes-1). If you need to
> > delay your events (and the scheduled event time is >= nframes), you need
> > to store your events and dispatch them at a later call to run().
> >
> >
> > Perfect explanation, that clarifies why that did not work properly.
> >
> > As you support time:Position events on your input event port, you most
> > probably want to store incoming events with their respective musical/beat
> > time in the scheduler structure instead of the frame time.
> > E.g. if the tempo doubles (via host/user intervention) while your events
> > are waiting for dispatch in your scheduler structure, they'd automatically
> > be dispatched at double the speed, too.
> >
> >
> > Yes, that makes sense. I've been having some synchronization issues with
> > beat time, so I have considered using the sample rate, last known tempo,
> > and time signature to calculate the beat from the MIDI event timestamp.
> > As you have pointed out, that could present issues if I am queuing events
> > for output, so I will use beat time.
> >
> > E.g. you want some infrastructure you can push newly received events to
> > and pop previous events destined for dispatch in this run() cycle.
> >
> >
> > Yes, so from the perspective of LV2 convention... is an array of structs
> > acceptable in the context of run(), or better to use something like
> > atom:Tuple for this?
>
> Use whatever fifo-like structure that suits your use case. Just make sure
> it is appropriate to be used in an audio thread, e.g. it needs to be
> lock-free, wait-free, realtime-safe, etc.
>
> > On Sat, Dec 1, 2018 at 12:15 AM Hanspeter Portner
> > <dev at open-music-kontrollers.ch> wrote:
> >
> > On 30.11.18 13:14, Dan H wrote:
> > > Hi all
> > >
> > > I can't find much in the way of 'MIDI in, MIDI out' examples to follow,
> > > so I hope someone here can help.
> > > Let's say I have an LV2 plugin with an in-port buffer of type
> > > atom:Sequence supporting midi:Event and time:Position, and I have an
> > > out-port buffer also of type atom:Sequence. In the context of run(), I
> > > receive an incoming MIDI event at frame #96000 and I want to delay the
> > > output of that event by n frames. Is there an 'LV2' way to queue events
> > > and manage the timing and order of atoms in a sequence already
> > > implemented in the API?
> >
> > No
> >
> > > or do I need to find a way to do it in C code ... e.g. waiting for
> > > nframes to arrive before forging the event atom?
> >
> > Yes (better use beat time, though, see below)
> >
> > > I have tried simply using lv2_atom_forge_frame_time(&self->forge,
> > > ev->time.frames + n) when forging the event, but that does not produce
> > > the desired result.
> >
> > In a given run(LV2_Handle instance, uint32_t nframes) method you are only
> > allowed to send events for frames 0 up to (nframes-1). If you need to
> > delay your events (and the scheduled event time is >= nframes), you need
> > to store your events and dispatch them at a later call to run().
> >
> > E.g. you want some infrastructure you can push newly received events to
> > and pop previous events destined for dispatch in this run() cycle.
> >
> > As you support time:Position events on your input event port, you most
> > probably want to store incoming events with their respective musical/beat
> > time in the scheduler structure instead of the frame time.
> >
> > E.g. if the tempo doubles (via host/user intervention) while your events
> > are waiting for dispatch in your scheduler structure, they'd automatically
> > be dispatched at double the speed, too.
>
--
Dan Hobday
E: madautodev at gmail.com