Discussion:
[PATCH 1/6] lavfi/buffersink: add accessors for the stream properties.
Nicolas George
2016-12-18 12:22:16 UTC
av_buffersink_get_frame_rate() already existed; its argument becomes const.

TODO minor version bump

API-Change: libavfilter
Signed-off-by: Nicolas George <***@nsup.org>
---
libavfilter/buffersink.c | 25 +++++++++++++++++++------
libavfilter/buffersink.h | 22 ++++++++++++++++++++--
2 files changed, 39 insertions(+), 8 deletions(-)


I think the const change is acceptable.

Note: I am introducing the "API-Change" Git tag: I think it will be much
more convenient than maintaining doc/APIchanges. Later I intend to write a
script using git log --grep to pretty-print it.
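A minimal sketch of what such a pretty-printing script could look like; only the "API-Change:" trailer name comes from this patch, the output format and everything else is guessed:

```shell
#!/bin/sh
# List commits carrying the proposed "API-Change:" trailer,
# short hash plus subject, one per line.
git log --grep='^API-Change:' --pretty=format:'%h %s'
```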


diff --git a/libavfilter/buffersink.c b/libavfilter/buffersink.c
index 7b7b477..030ca80 100644
--- a/libavfilter/buffersink.c
+++ b/libavfilter/buffersink.c
@@ -279,14 +279,27 @@ void av_buffersink_set_frame_size(AVFilterContext *ctx, unsigned frame_size)
inlink->partial_buf_size = frame_size;
}

-AVRational av_buffersink_get_frame_rate(AVFilterContext *ctx)
-{
- av_assert0( !strcmp(ctx->filter->name, "buffersink")
- || !strcmp(ctx->filter->name, "ffbuffersink"));
-
- return ctx->inputs[0]->frame_rate;
+#define MAKE_AVFILTERLINK_ACCESSOR(type, field) \
+type av_buffersink_get_##field(const AVFilterContext *ctx) { \
+ av_assert0(ctx->filter->uninit == uninit); \
+ return ctx->inputs[0]->field; \
}

+MAKE_AVFILTERLINK_ACCESSOR(enum AVMediaType , type );
+MAKE_AVFILTERLINK_ACCESSOR(AVRational , time_base );
+MAKE_AVFILTERLINK_ACCESSOR(int , format );
+
+MAKE_AVFILTERLINK_ACCESSOR(AVRational , frame_rate );
+MAKE_AVFILTERLINK_ACCESSOR(int , w );
+MAKE_AVFILTERLINK_ACCESSOR(int , h );
+MAKE_AVFILTERLINK_ACCESSOR(AVRational , sample_aspect_ratio);
+
+MAKE_AVFILTERLINK_ACCESSOR(int , channels );
+MAKE_AVFILTERLINK_ACCESSOR(uint64_t , channel_layout );
+MAKE_AVFILTERLINK_ACCESSOR(int , sample_rate );
+
+MAKE_AVFILTERLINK_ACCESSOR(AVBufferRef * , hw_frames_ctx );
+
static av_cold int vsink_init(AVFilterContext *ctx, void *opaque)
{
BufferSinkContext *buf = ctx->priv;
diff --git a/libavfilter/buffersink.h b/libavfilter/buffersink.h
index e399b91..f51fa7c 100644
--- a/libavfilter/buffersink.h
+++ b/libavfilter/buffersink.h
@@ -101,9 +101,27 @@ AVABufferSinkParams *av_abuffersink_params_alloc(void);
void av_buffersink_set_frame_size(AVFilterContext *ctx, unsigned frame_size);

/**
- * Get the frame rate of the input.
+ * @defgroup lavfi_buffersink_accessors Buffer sink accessors
+ * Get the properties of the stream
+ * @{
*/
-AVRational av_buffersink_get_frame_rate(AVFilterContext *ctx);
+
+enum AVMediaType av_buffersink_get_type (const AVFilterContext *ctx);
+AVRational av_buffersink_get_time_base (const AVFilterContext *ctx);
+int av_buffersink_get_format (const AVFilterContext *ctx);
+
+AVRational av_buffersink_get_frame_rate (const AVFilterContext *ctx);
+int av_buffersink_get_w (const AVFilterContext *ctx);
+int av_buffersink_get_h (const AVFilterContext *ctx);
+AVRational av_buffersink_get_sample_aspect_ratio (const AVFilterContext *ctx);
+
+int av_buffersink_get_channels (const AVFilterContext *ctx);
+uint64_t av_buffersink_get_channel_layout (const AVFilterContext *ctx);
+int av_buffersink_get_sample_rate (const AVFilterContext *ctx);
+
+AVBufferRef * av_buffersink_get_hw_frames_ctx (const AVFilterContext *ctx);
+
+/** @} */

/**
* Get a frame with filtered data from sink and put it in frame.
--
2.10.2
Nicolas George
2016-12-18 12:22:20 UTC
Signed-off-by: Nicolas George <***@nsup.org>
---
libavfilter/tests/filtfmts.c | 1 +
1 file changed, 1 insertion(+)

diff --git a/libavfilter/tests/filtfmts.c b/libavfilter/tests/filtfmts.c
index f59199c..199d74d 100644
--- a/libavfilter/tests/filtfmts.c
+++ b/libavfilter/tests/filtfmts.c
@@ -30,6 +30,7 @@

#include "libavfilter/avfilter.h"
#include "libavfilter/formats.h"
+#include "libavfilter/internal.h"

static void print_formats(AVFilterContext *filter_ctx)
{
--
2.10.2
Nicolas George
2016-12-18 12:22:21 UTC
API-Change: libavfilter
Signed-off-by: Nicolas George <***@nsup.org>
---
libavfilter/avfilter.h | 2 +
libavfilter/internal.h | 199 +++++++++++++++++++++++++++++++++++++++++++++++++
libavfilter/version.h | 3 +
3 files changed, 204 insertions(+)


Not sure what the preferred delay would be. I suspect not many programs use
libavfilter yet. In the meantime, all new fields must be added at both
places and tested.


diff --git a/libavfilter/avfilter.h b/libavfilter/avfilter.h
index 828b270..6109e58 100644
--- a/libavfilter/avfilter.h
+++ b/libavfilter/avfilter.h
@@ -377,6 +377,7 @@ struct AVFilterContext {
unsigned ready;
};

+#if FF_API_AVFILTERLINK_PUBLIC
/**
* A link between two filters. This contains pointers to the source and
* destination filters between which this link exists, and the indexes of
@@ -593,6 +594,7 @@ struct AVFilterLink {
#endif /* FF_INTERNAL_FIELDS */

};
+#endif /* FF_API_AVFILTERLINK_PUBLIC */

/**
* Link two filters together.
diff --git a/libavfilter/internal.h b/libavfilter/internal.h
index a8b69fd..599be24 100644
--- a/libavfilter/internal.h
+++ b/libavfilter/internal.h
@@ -145,6 +145,205 @@ struct AVFilterPad {
int needs_writable;
};

+#if !FF_API_AVFILTERLINK_PUBLIC
+/**
+ * A link between two filters. This contains pointers to the source and
+ * destination filters between which this link exists, and the indexes of
+ * the pads involved. In addition, this link also contains the parameters
+ * which have been negotiated and agreed upon between the filter, such as
+ * image dimensions, format, etc.
+ */
+struct AVFilterLink {
+ AVFilterContext *src; ///< source filter
+ AVFilterPad *srcpad; ///< output pad on the source filter
+
+ AVFilterContext *dst; ///< dest filter
+ AVFilterPad *dstpad; ///< input pad on the dest filter
+
+ enum AVMediaType type; ///< filter media type
+
+ /* These parameters apply only to video */
+ int w; ///< agreed upon image width
+ int h; ///< agreed upon image height
+ AVRational sample_aspect_ratio; ///< agreed upon sample aspect ratio
+ /* These parameters apply only to audio */
+ uint64_t channel_layout; ///< channel layout of current buffer (see libavutil/channel_layout.h)
+ int sample_rate; ///< samples per second
+
+ int format; ///< agreed upon media format
+
+ /**
+ * Define the time base used by the PTS of the frames/samples
+ * which will pass through this link.
+ * During the configuration stage, each filter is supposed to
+ * change only the output timebase, while the timebase of the
+ * input link is assumed to be an unchangeable property.
+ */
+ AVRational time_base;
+
+ /**
+ * Lists of formats and channel layouts supported by the input and output
+ * filters respectively. These lists are used for negotiating the format
+ * to actually be used, which will be loaded into the format and
+ * channel_layout members, above, when chosen.
+ *
+ */
+ AVFilterFormats *in_formats;
+ AVFilterFormats *out_formats;
+
+ /**
+ * Lists of channel layouts and sample rates used for automatic
+ * negotiation.
+ */
+ AVFilterFormats *in_samplerates;
+ AVFilterFormats *out_samplerates;
+ struct AVFilterChannelLayouts *in_channel_layouts;
+ struct AVFilterChannelLayouts *out_channel_layouts;
+
+ /**
+ * Audio only, the destination filter sets this to a non-zero value to
+ * request that buffers with the given number of samples should be sent to
+ * it. AVFilterPad.needs_fifo must also be set on the corresponding input
+ * pad.
+ * Last buffer before EOF will be padded with silence.
+ */
+ int request_samples;
+
+ /** stage of the initialization of the link properties (dimensions, etc) */
+ enum {
+ AVLINK_UNINIT = 0, ///< not started
+ AVLINK_STARTINIT, ///< started, but incomplete
+ AVLINK_INIT ///< complete
+ } init_state;
+
+ /**
+ * Graph the filter belongs to.
+ */
+ struct AVFilterGraph *graph;
+
+ /**
+ * Current timestamp of the link, as defined by the most recent
+ * frame(s), in link time_base units.
+ */
+ int64_t current_pts;
+
+ /**
+ * Current timestamp of the link, as defined by the most recent
+ * frame(s), in AV_TIME_BASE units.
+ */
+ int64_t current_pts_us;
+
+ /**
+ * Index in the age array.
+ */
+ int age_index;
+
+ /**
+ * Frame rate of the stream on the link, or 1/0 if unknown or variable;
+ * if left to 0/0, will be automatically copied from the first input
+ * of the source filter if it exists.
+ *
+ * Sources should set it to the best estimation of the real frame rate.
+ * If the source frame rate is unknown or variable, set this to 1/0.
+ * Filters should update it if necessary depending on their function.
+ * Sinks can use it to set a default output frame rate.
+ * It is similar to the r_frame_rate field in AVStream.
+ */
+ AVRational frame_rate;
+
+ /**
+ * Buffer partially filled with samples to achieve a fixed/minimum size.
+ */
+ AVFrame *partial_buf;
+
+ /**
+ * Size of the partial buffer to allocate.
+ * Must be between min_samples and max_samples.
+ */
+ int partial_buf_size;
+
+ /**
+ * Minimum number of samples to filter at once. If filter_frame() is
+ * called with fewer samples, it will accumulate them in partial_buf.
+ * This field and the related ones must not be changed after filtering
+ * has started.
+ * If 0, all related fields are ignored.
+ */
+ int min_samples;
+
+ /**
+ * Maximum number of samples to filter at once. If filter_frame() is
+ * called with more samples, it will split them.
+ */
+ int max_samples;
+
+ /**
+ * Number of channels.
+ */
+ int channels;
+
+ /**
+ * Link processing flags.
+ */
+ unsigned flags;
+
+ /**
+ * Number of past frames sent through the link.
+ */
+ int64_t frame_count_in, frame_count_out;
+
+ /**
+ * A pointer to a FFVideoFramePool struct.
+ */
+ void *video_frame_pool;
+
+ /**
+ * True if a frame is currently wanted on the output of this filter.
+ * Set when ff_request_frame() is called by the output,
+ * cleared when a frame is filtered.
+ */
+ int frame_wanted_out;
+
+ /**
+ * For hwaccel pixel formats, this should be a reference to the
+ * AVHWFramesContext describing the frames.
+ */
+ AVBufferRef *hw_frames_ctx;
+
+ /**
+ * Queue of frames waiting to be filtered.
+ */
+ FFFrameQueue fifo;
+
+ /**
+ * If set, the source filter can not generate a frame as is.
+ * The goal is to avoid repeatedly calling the request_frame() method on
+ * the same link.
+ */
+ int frame_blocked_in;
+
+ /**
+ * Link input status.
+ * If not zero, all attempts of filter_frame will fail with the
+ * corresponding code.
+ */
+ int status_in;
+
+ /**
+ * Timestamp of the input status change.
+ */
+ int64_t status_in_pts;
+
+ /**
+ * Link output status.
+ * If not zero, all attempts of request_frame will fail with the
+ * corresponding code.
+ */
+ int status_out;
+
+};
+#endif /* !FF_API_AVFILTERLINK_PUBLIC */
+
struct AVFilterGraphInternal {
void *thread;
avfilter_execute_func *thread_execute;
diff --git a/libavfilter/version.h b/libavfilter/version.h
index e3bd8d0..8256781 100644
--- a/libavfilter/version.h
+++ b/libavfilter/version.h
@@ -67,5 +67,8 @@
#ifndef FF_API_NOCONST_GET_NAME
#define FF_API_NOCONST_GET_NAME (LIBAVFILTER_VERSION_MAJOR < 7)
#endif
+#ifndef FF_API_AVFILTERLINK_PUBLIC
+#define FF_API_AVFILTERLINK_PUBLIC (LIBAVFILTER_VERSION_MAJOR < 8)
+#endif

#endif /* AVFILTER_VERSION_H */
--
2.10.2
wm4
2016-12-18 12:33:25 UTC
On Sun, 18 Dec 2016 13:22:21 +0100
Post by Nicolas George
[...]
Did you send the same patches to Libav? This makes the API incompatible
with Libav.
Nicolas George
2016-12-18 12:37:44 UTC
Post by wm4
Did you send the same patches to Libav? This makes the API incompatible
with Libav.
Their API has been non-working for a long time, even if technically
compatible with ours. Their problem.

Regards,
--
Nicolas George
wm4
2016-12-18 12:48:13 UTC
On Sun, 18 Dec 2016 13:37:44 +0100
Post by Nicolas George
Post by wm4
Did you send the same patches to Libav? This makes the API incompatible
with Libav.
Their API has been non-working for a long time, even if technically
compatible with ours. Their problem.
Can't confirm that assessment.
Nicolas George
2016-12-18 18:41:14 UTC
How does this patchset relate to the openness of the API?
You aren't saying anything about the plans, goals, intent of this (or
I missed it or failed to associate it with the patchset).
I am doing this to accommodate people who object to having a different
view of AVFilterLink for the public API and the internal implementation,
mostly Hendrik, Andreas and Clément.

As for me, I am pretty happy with the current code that gives a
different view of AVFilterLink to the public than the one used for
implementation. Something like that is needed because some fields have a
type that is not itself public.
I am asking because it seems like this is moving libavfilter further away
from being "open" and centralizing control over filters more.
I most likely misinterpret this, as I just see the change in the code
and don't know the whole story, but IMO we should move toward a clean and
stable API that everyone can use.
That also implies allowing filters to use only the public API,
while this patchset seems to make filters use more private API by
making more needed API private. I think an open API and external
filter support would drive developers and users towards libavfilter,
while locking it down would likely do the opposite.
I am not sure I understand what you mean by openness. Do you mean
applications writing their own filter and inserting it in a filter
graph? If so, I can tell you it is not currently possible, and has not
been since at least 2012-06-12 (9d0bfc5 lavfi: make AVFilterPad opaque
after two major bumps.).

Regards,
--
Nicolas George
Michael Niedermayer
2016-12-20 22:56:27 UTC
Though I wonder what mistake we made to end up here; the original design
was intended to be very simple ...
That was the mistake: trying for a simple design.
Ha ha! Only serious. Linear filter chains are simple, and could have
worked easily with a simple design. But non-linear filter graphs are
inherently more complex: the system needs to decide what filter to
activate first from several parallel branches.
It shouldn't really be complex, not in concept, maybe in (efficient)
implementation.
For example, as a concept, one could imagine each filter runs as its
own thread and waits on its inputs' availability and output space.
If it gets input and has output space, it is woken up, works,
produces output, and then wakes its surroundings up.
There is no difference between a linear chain and a complex graph here,

and it is simple because we use and need just a local view centered on a
filter and its neighbors.

IIUC you already implemented the gory details of filters handing stuff
around, that working and not getting stuck nor bloating up too badly.
But as a concept, ignoring the implementation details, I would call it
simple.

I am not sure it is useful, but
another view from a different direction would be to see the filter
graph as a network of pipes with flowing water and interconnected
pumps. This too is a simple concept, and intuition would suggest that
this would not easily end up with water being stuck nor too much
accumulating. It does not match real filters 1:1, though, which work
on discrete frames; I wonder if this view is useful.
It would allow answering global questions about a graph, like which
inputs are useless when some output is welded shut, or vice versa.
Graphs with several inputs and/or outputs are even more difficult,
because the application needs to decide which input needs a frame most
urgently.
Both lavfi's original design (API and implementation) and its use in
ffmpeg (actually, avconv) for complex filter graphs partially or
completely neglect this issue.
I think the original lavfi design didn't really have any issue with graphs
with multiple inputs or outputs. A user app could decide where to put input
and output, but FFmpeg didn't support this at the time IIRC, so the people
working on the original lavfi had nothing to implement.
The problems came when this support was added much later.


[...]
You complain that no one else is working on libavfilter,
but how many people know what _NEEDS_ to be done to make the API good
enough for a first public and stable API?
While I have a long todo list and maybe no time for this, as it is I can't
even work on making libavfilter public+stable. It's not a technical
issue, it's one of people's requirements about what must be
changed first.
This is true, but there is another side to the human problem: explaining
the plans and requirements of the API takes time; if nobody will read it
carefully, that time is better spent actually coding (I am writing this
mail by snippets while FATE is running).
Here is an outline, in approximate order (there are dependencies, but
the order is not strict):
- Add a callback AVFilter.activate() to replace filter_frame() on all
inputs and request_frame() on all outputs. Most non-trivial filters
are written that way in the first place.
OK, that's mostly cosmetic, I'd say.
I think we possibly even had something somewhat in that direction in
the original lavfi discussions MANY years ago.
This makes more sense with non-recursive stuff.
- Change buffersink to implement that callback and peek directly in the
FIFO.
ok, "cosmetic"
- Write a generic activate callback for 1-to-1 filters and use it.
There were plans for simplifying generic filters:
1-in 1-out frame filters,
apply-something-to-every-pixel filters,
some more cases I forgot.

While different now with activate(), the idea is old and IIRC was
supposed to be done long long ago. I think it's a good idea if it does
happen.
- Rewrite framesync (the utility system for filters with several video
inputs that need synchronized frames) to implement activate and use the
FIFO directly.
cosmetic :)
- Allow to give buffersrc a timestamp on EOF, make sure the timestamp is
forwarded by most filters and allow to retrieve it from buffersink.
(If somebody has a suggestion of a good data structure for that...)
AVFrame.duration
This is possibly a big subject for discussion on its own, but maybe
not; I don't know.
- Allow to set a callback on buffersinks and buffersrcs to be notified
when a frame arrives or is needed. It is much more convenient than
walking the buffers to check one by one.
Agree that walking is bad.
I cannot ATM argue on what is better, as I don't feel that I have a clear
enough view of this and the surroundings.
- Allow to merge several filter graphs into one. This may be useful when
applications have several graphs to run simultaneously, so that they
do not need to decide which one to activate. Another option would be
to have the functions in the next step work on possibly several
graphs.
This may be orthogonal, but I think a filter graph should be a filter
(maybe not at struct level literally, but it should be possible to have an
AVFilter that is backed by an arbitrary user-specified filter graph).
- Keep a priority queue of ready filters and use it instead of walking
the graph.
Something better than O(n), yes, agree.
- Add a function to run the graph until "something" happens; "something"
meaning a stop instruction called by the callbacks.
I don't understand.
run until a frame comes out from any sink (we get the frame and
something useful identifying the sink) or one is needed from a source
(we get something identifying the source) or EOF.
If the API itself is simpler with utility functions, then this sounds
like a good idea.

thx

[...]
--
Michael GnuPG fingerprint: 9FF2128B147EF6730BADF133611EC787040B0FAB

There will always be a question for which you do not know the correct answer.
Nicolas George
2016-12-21 16:40:50 UTC
The framework could monitor filters to determine their apparent
behavior. This would of course not give exact prediction of future
behavior.
I can't say how useful it would be to use this, of course ...
I do not see how to integrate that in something reliable, but somebody
may.
Well, in the original design a filter graph can either be used in a
pull-based application, in which primarily data is requested from its
outputs and the requests recursively move to its inputs, triggering
data reads through callbacks from some source filter. [Applications
could implement their own source filter, as there was a public API.]
Or, in a push-based application, each source would have a FIFO;
if it is empty, the application needs to push data into the FIFO, and data
again is returned by requesting from the sink(s).
Which sink to pull data from could be determined by first pulling
from ones that had data when polled, and then it would be up to the
application to decide; your lowest-timestamp choice would have been
a possibility, keeping track of apparent in-out relations would
be another. (This was in either case the application's choice, not
lavfi's.)
I am not sure I can easily keep up the discussion: we are going back to
the basics of the scheduling, I worked on it in spring-summer 2012.
Since then, I remember a lot of "that does not work" but not all the
"because" that come after. I can give a few examples.

I think pull-only mode may work if the application is aware of all the
sinks and makes assumptions about their synchronization (i.e.
split -> normal speed + slow motion would not work), which seems
reasonable. Unfortunately, pull-only is the least useful way of using
that kind of library.

Push-only mode does not work with several inputs (or sources), because
you can not know which one needs a frame (the actual need may be far
downstream), and assumptions about synchronization are really not
acceptable in input.

Local-mixed pull-push (i.e. filters can issue a pull in reaction to a
push and reciprocally) solves these issues, but can result in infinite
loops: split pushes, first on out0 connected to overlay in0, overlays
pulls on in1 connected to split out1, split pushes, Redo From Start,
stack overflow. That was what we had before 2012.

Global-mixed pull-push (i.e. the application pushes on inputs and drives
by pulling on outputs, but filters are not allowed to make U-turns)
works, with the same caveats as pull-only. That is what we have since
2012.

And on top of that, you have to consider the case of inputs versus
sources that are always ready like testsrc. They bring their own
complications.

Of course, any of these issues can be solved using flags or something,
but the hard part is to solve all of them at the same time. We no longer
are at the simple original design, but a complicated set of workarounds
on top of the original design.

And the new non-recursive design is not more complex than the recursive
one, the one that works. It is the same, plus the code for the FIFO. If
it was done like that in the first place, it would have worked fine. The
complex part in it is the compatibility layer: use filters designed for
the recursive version unchanged in the non-recursive version. Hopefully,
some of that complexity will go away as difficult filters are adapted.

Plus, unlike the recursive design that mandates a depth-first order of
processing, the non-recursive design can work in any order. I suspect it
will give us the local-mixed pull-push mode for almost free. But I have
yet to test.
Differences in corner cases, yes; I didn't mean to imply that it's
purely and 100% cosmetic. More that it's basically a cosmetic change,
replacing how the more or less same code is triggered, and that maybe
some of this could be done by some GSoC student or other volunteer.
I.e. at least part of this seems time-consuming but not highly complex
work.
Indeed. And I must say it makes a nice change after a year spent
understanding why corner cases in the non-recursive design did not work.
I don't think not knowing the duration is a problem.
You possibly need replicated frames elsewhere already. As for
subtitles, it's not much different from duplicating the last frame, with
the duration remaining to EOF added to the last 1 or 0 duration,
but I didn't think deeply about this now, so I might miss details.
The issue is also not specific to subtitles; audio tracks with "holes"
in them exist too, and so do video slideshows. At least in some use cases,
limiting the distance between frames is needed. (For example, to
ensure random access, as with keyframes. The issue can to some extent
be pushed into the container format, I guess, but for truly streamed
formats, if you don't repeat your video frame and the subtitles which
are currently displayed, they just won't get displayed if you start viewing
around that point.)
So to me it seems there are a lot of issues that can all be dealt with
by some support for replicating frames in long stretches without frames,
and later dropping them if they aren't needed; the last frame carrying
the EOF duration could then be just another such case.
But again, I didn't think deeply about this.
That may be true, but I really think that relying only on timestamps
instead of mixing duration (optional) and timestamps (almost mandatory)
will be simpler.
- Allow setting a callback on buffersinks and buffersrcs to be notified
when a frame arrives or is needed. It is much more convenient than
walking the buffers to check them one by one.
You could have said it: cosmetic :)
I should be more verbose: I said cosmetic, but I meant more than just
what cosmetic means literally :)
No, no, I meant: you really could have said it, this change is really
just syntactic sugar.

Regards,
--
Nicolas George
Nicolas George
2016-12-18 12:22:18 UTC
Permalink
Signed-off-by: Nicolas George <***@nsup.org>
---
ffplay.c | 16 ++++++++--------
1 file changed, 8 insertions(+), 8 deletions(-)

diff --git a/ffplay.c b/ffplay.c
index 911fd7f..90bd97b 100644
--- a/ffplay.c
+++ b/ffplay.c
@@ -2075,7 +2075,7 @@ static int audio_thread(void *arg)
goto the_end;

while ((ret = av_buffersink_get_frame_flags(is->out_audio_filter, frame, 0)) >= 0) {
- tb = is->out_audio_filter->inputs[0]->time_base;
+ tb = av_buffersink_get_time_base(is->out_audio_filter);
#endif
if (!(af = frame_queue_peek_writable(&is->sampq)))
goto the_end;
@@ -2183,7 +2183,7 @@ static int video_thread(void *arg)
last_format = frame->format;
last_serial = is->viddec.pkt_serial;
last_vfilter_idx = is->vfilter_idx;
- frame_rate = filt_out->inputs[0]->frame_rate;
+ frame_rate = av_buffersink_get_frame_rate(filt_out);
}

ret = av_buffersrc_add_frame(filt_in, frame);
@@ -2204,7 +2204,7 @@ static int video_thread(void *arg)
is->frame_last_filter_delay = av_gettime_relative() / 1000000.0 - is->frame_last_returned_time;
if (fabs(is->frame_last_filter_delay) > AV_NOSYNC_THRESHOLD / 10.0)
is->frame_last_filter_delay = 0;
- tb = filt_out->inputs[0]->time_base;
+ tb = av_buffersink_get_time_base(filt_out);
#endif
duration = (frame_rate.num && frame_rate.den ? av_q2d((AVRational){frame_rate.den, frame_rate.num}) : 0);
pts = (frame->pts == AV_NOPTS_VALUE) ? NAN : frame->pts * av_q2d(tb);
@@ -2641,7 +2641,7 @@ static int stream_component_open(VideoState *is, int stream_index)
case AVMEDIA_TYPE_AUDIO:
#if CONFIG_AVFILTER
{
- AVFilterLink *link;
+ AVFilterContext *sink;

is->audio_filter_src.freq = avctx->sample_rate;
is->audio_filter_src.channels = avctx->channels;
@@ -2649,10 +2649,10 @@ static int stream_component_open(VideoState *is, int stream_index)
is->audio_filter_src.fmt = avctx->sample_fmt;
if ((ret = configure_audio_filters(is, afilters, 0)) < 0)
goto fail;
- link = is->out_audio_filter->inputs[0];
- sample_rate = link->sample_rate;
- nb_channels = avfilter_link_get_channels(link);
- channel_layout = link->channel_layout;
+ sink = is->out_audio_filter;
+ sample_rate = av_buffersink_get_sample_rate(sink);
+ nb_channels = av_buffersink_get_channels(sink);
+ channel_layout = av_buffersink_get_channel_layout(sink);
}
#else
sample_rate = avctx->sample_rate;
--
2.10.2
Nicolas George
2016-12-18 12:22:19 UTC
Permalink
Signed-off-by: Nicolas George <***@nsup.org>
---
libavdevice/lavfi.c | 37 +++++++++++++++++++------------------
1 file changed, 19 insertions(+), 18 deletions(-)

diff --git a/libavdevice/lavfi.c b/libavdevice/lavfi.c
index f9b2694..eca3f15 100644
--- a/libavdevice/lavfi.c
+++ b/libavdevice/lavfi.c
@@ -312,31 +312,32 @@ av_cold static int lavfi_read_header(AVFormatContext *avctx)

/* fill each stream with the information in the corresponding sink */
for (i = 0; i < lavfi->nb_sinks; i++) {
- AVFilterLink *link = lavfi->sinks[lavfi->stream_sink_map[i]]->inputs[0];
+ AVFilterContext *sink = lavfi->sinks[lavfi->stream_sink_map[i]];
+ AVRational time_base = av_buffersink_get_time_base(sink);
AVStream *st = avctx->streams[i];
- st->codecpar->codec_type = link->type;
- avpriv_set_pts_info(st, 64, link->time_base.num, link->time_base.den);
- if (link->type == AVMEDIA_TYPE_VIDEO) {
+ st->codecpar->codec_type = av_buffersink_get_type(sink);
+ avpriv_set_pts_info(st, 64, time_base.num, time_base.den);
+ if (av_buffersink_get_type(sink) == AVMEDIA_TYPE_VIDEO) {
st->codecpar->codec_id = AV_CODEC_ID_RAWVIDEO;
- st->codecpar->format = link->format;
- st->codecpar->width = link->w;
- st->codecpar->height = link->h;
+ st->codecpar->format = av_buffersink_get_format(sink);
+ st->codecpar->width = av_buffersink_get_w(sink);
+ st->codecpar->height = av_buffersink_get_h(sink);
st ->sample_aspect_ratio =
- st->codecpar->sample_aspect_ratio = link->sample_aspect_ratio;
+ st->codecpar->sample_aspect_ratio = av_buffersink_get_sample_aspect_ratio(sink);
avctx->probesize = FFMAX(avctx->probesize,
- link->w * link->h *
- av_get_padded_bits_per_pixel(av_pix_fmt_desc_get(link->format)) *
+ av_buffersink_get_w(sink) * av_buffersink_get_h(sink) *
+ av_get_padded_bits_per_pixel(av_pix_fmt_desc_get(av_buffersink_get_format(sink))) *
30);
- } else if (link->type == AVMEDIA_TYPE_AUDIO) {
- st->codecpar->codec_id = av_get_pcm_codec(link->format, -1);
- st->codecpar->channels = avfilter_link_get_channels(link);
- st->codecpar->format = link->format;
- st->codecpar->sample_rate = link->sample_rate;
- st->codecpar->channel_layout = link->channel_layout;
+ } else if (av_buffersink_get_type(sink) == AVMEDIA_TYPE_AUDIO) {
+ st->codecpar->codec_id = av_get_pcm_codec(av_buffersink_get_format(sink), -1);
+ st->codecpar->channels = av_buffersink_get_channels(sink);
+ st->codecpar->format = av_buffersink_get_format(sink);
+ st->codecpar->sample_rate = av_buffersink_get_sample_rate(sink);
+ st->codecpar->channel_layout = av_buffersink_get_channel_layout(sink);
if (st->codecpar->codec_id == AV_CODEC_ID_NONE)
av_log(avctx, AV_LOG_ERROR,
"Could not find PCM codec for sample format %s.\n",
- av_get_sample_fmt_name(link->format));
+ av_get_sample_fmt_name(av_buffersink_get_format(sink)));
}
}

@@ -400,7 +401,7 @@ static int lavfi_read_packet(AVFormatContext *avctx, AVPacket *pkt)
/* iterate through all the graph sinks. Select the sink with the
* minimum PTS */
for (i = 0; i < lavfi->nb_sinks; i++) {
- AVRational tb = lavfi->sinks[i]->inputs[0]->time_base;
+ AVRational tb = av_buffersink_get_time_base(lavfi->sinks[i]);
double d;
int ret;
--
2.10.2
Nicolas George
2016-12-18 12:22:17 UTC
Permalink
Signed-off-by: Nicolas George <***@nsup.org>
---
ffmpeg.c | 46 ++++++++++++++++++++++++----------------------
ffmpeg_filter.c | 12 ++++++------
2 files changed, 30 insertions(+), 28 deletions(-)

diff --git a/ffmpeg.c b/ffmpeg.c
index e4890a4..ff177e0 100644
--- a/ffmpeg.c
+++ b/ffmpeg.c
@@ -1014,6 +1014,7 @@ static void do_video_out(OutputFile *of,
AVPacket pkt;
AVCodecContext *enc = ost->enc_ctx;
AVCodecParameters *mux_par = ost->st->codecpar;
+ AVRational frame_rate;
int nb_frames, nb0_frames, i;
double delta, delta0;
double duration = 0;
@@ -1024,9 +1025,9 @@ static void do_video_out(OutputFile *of,
if (ost->source_index >= 0)
ist = input_streams[ost->source_index];

- if (filter->inputs[0]->frame_rate.num > 0 &&
- filter->inputs[0]->frame_rate.den > 0)
- duration = 1/(av_q2d(filter->inputs[0]->frame_rate) * av_q2d(enc->time_base));
+ frame_rate = av_buffersink_get_frame_rate(filter);
+ if (frame_rate.num > 0 && frame_rate.den > 0)
+ duration = 1/(av_q2d(frame_rate) * av_q2d(enc->time_base));

if(ist && ist->st->start_time != AV_NOPTS_VALUE && ist->st->first_dts != AV_NOPTS_VALUE && ost->frame_rate.num)
duration = FFMIN(duration, 1/(av_q2d(ost->frame_rate) * av_q2d(enc->time_base)));
@@ -1416,7 +1417,7 @@ static int reap_filters(int flush)
av_log(NULL, AV_LOG_WARNING,
"Error in av_buffersink_get_frame_flags(): %s\n", av_err2str(ret));
} else if (flush && ret == AVERROR_EOF) {
- if (filter->inputs[0]->type == AVMEDIA_TYPE_VIDEO)
+ if (av_buffersink_get_type(filter) == AVMEDIA_TYPE_VIDEO)
do_video_out(of, ost, NULL, AV_NOPTS_VALUE);
}
break;
@@ -1427,25 +1428,26 @@ static int reap_filters(int flush)
}
if (filtered_frame->pts != AV_NOPTS_VALUE) {
int64_t start_time = (of->start_time == AV_NOPTS_VALUE) ? 0 : of->start_time;
+ AVRational filter_tb = av_buffersink_get_time_base(filter);
AVRational tb = enc->time_base;
int extra_bits = av_clip(29 - av_log2(tb.den), 0, 16);

tb.den <<= extra_bits;
float_pts =
- av_rescale_q(filtered_frame->pts, filter->inputs[0]->time_base, tb) -
+ av_rescale_q(filtered_frame->pts, filter_tb, tb) -
av_rescale_q(start_time, AV_TIME_BASE_Q, tb);
float_pts /= 1 << extra_bits;
// avoid exact midoints to reduce the chance of rounding differences, this can be removed in case the fps code is changed to work with integers
float_pts += FFSIGN(float_pts) * 1.0 / (1<<17);

filtered_frame->pts =
- av_rescale_q(filtered_frame->pts, filter->inputs[0]->time_base, enc->time_base) -
+ av_rescale_q(filtered_frame->pts, filter_tb, enc->time_base) -
av_rescale_q(start_time, AV_TIME_BASE_Q, enc->time_base);
}
//if (ost->source_index >= 0)
// *filtered_frame= *input_streams[ost->source_index]->decoded_frame; //for me_threshold

- switch (filter->inputs[0]->type) {
+ switch (av_buffersink_get_type(filter)) {
case AVMEDIA_TYPE_VIDEO:
if (!ost->frame_aspect_ratio.num)
enc->sample_aspect_ratio = filtered_frame->sample_aspect_ratio;
@@ -3148,19 +3150,19 @@ static int init_output_stream_encode(OutputStream *ost)

switch (enc_ctx->codec_type) {
case AVMEDIA_TYPE_AUDIO:
- enc_ctx->sample_fmt = ost->filter->filter->inputs[0]->format;
+ enc_ctx->sample_fmt = av_buffersink_get_format(ost->filter->filter);
if (dec_ctx)
enc_ctx->bits_per_raw_sample = FFMIN(dec_ctx->bits_per_raw_sample,
av_get_bytes_per_sample(enc_ctx->sample_fmt) << 3);
- enc_ctx->sample_rate = ost->filter->filter->inputs[0]->sample_rate;
- enc_ctx->channel_layout = ost->filter->filter->inputs[0]->channel_layout;
- enc_ctx->channels = avfilter_link_get_channels(ost->filter->filter->inputs[0]);
+ enc_ctx->sample_rate = av_buffersink_get_sample_rate(ost->filter->filter);
+ enc_ctx->channel_layout = av_buffersink_get_channel_layout(ost->filter->filter);
+ enc_ctx->channels = av_buffersink_get_channels(ost->filter->filter);
enc_ctx->time_base = (AVRational){ 1, enc_ctx->sample_rate };
break;
case AVMEDIA_TYPE_VIDEO:
enc_ctx->time_base = av_inv_q(ost->frame_rate);
if (!(enc_ctx->time_base.num && enc_ctx->time_base.den))
- enc_ctx->time_base = ost->filter->filter->inputs[0]->time_base;
+ enc_ctx->time_base = av_buffersink_get_time_base(ost->filter->filter);
if ( av_q2d(enc_ctx->time_base) < 0.001 && video_sync_method != VSYNC_PASSTHROUGH
&& (video_sync_method == VSYNC_CFR || video_sync_method == VSYNC_VSCFR || (video_sync_method == VSYNC_AUTO && !(oc->oformat->flags & AVFMT_VARIABLE_FPS)))){
av_log(oc, AV_LOG_WARNING, "Frame rate very high for a muxer not efficiently supporting it.\n"
@@ -3171,27 +3173,27 @@ static int init_output_stream_encode(OutputStream *ost)
AV_TIME_BASE_Q,
enc_ctx->time_base);

- enc_ctx->width = ost->filter->filter->inputs[0]->w;
- enc_ctx->height = ost->filter->filter->inputs[0]->h;
+ enc_ctx->width = av_buffersink_get_w(ost->filter->filter);
+ enc_ctx->height = av_buffersink_get_h(ost->filter->filter);
enc_ctx->sample_aspect_ratio = ost->st->sample_aspect_ratio =
ost->frame_aspect_ratio.num ? // overridden by the -aspect cli option
av_mul_q(ost->frame_aspect_ratio, (AVRational){ enc_ctx->height, enc_ctx->width }) :
- ost->filter->filter->inputs[0]->sample_aspect_ratio;
+ av_buffersink_get_sample_aspect_ratio(ost->filter->filter);
if (!strncmp(ost->enc->name, "libx264", 7) &&
enc_ctx->pix_fmt == AV_PIX_FMT_NONE &&
- ost->filter->filter->inputs[0]->format != AV_PIX_FMT_YUV420P)
+ av_buffersink_get_format(ost->filter->filter) != AV_PIX_FMT_YUV420P)
av_log(NULL, AV_LOG_WARNING,
"No pixel format specified, %s for H.264 encoding chosen.\n"
"Use -pix_fmt yuv420p for compatibility with outdated media players.\n",
- av_get_pix_fmt_name(ost->filter->filter->inputs[0]->format));
+ av_get_pix_fmt_name(av_buffersink_get_format(ost->filter->filter)));
if (!strncmp(ost->enc->name, "mpeg2video", 10) &&
enc_ctx->pix_fmt == AV_PIX_FMT_NONE &&
- ost->filter->filter->inputs[0]->format != AV_PIX_FMT_YUV420P)
+ av_buffersink_get_format(ost->filter->filter) != AV_PIX_FMT_YUV420P)
av_log(NULL, AV_LOG_WARNING,
"No pixel format specified, %s for MPEG-2 encoding chosen.\n"
"Use -pix_fmt yuv420p for compatibility with outdated media players.\n",
- av_get_pix_fmt_name(ost->filter->filter->inputs[0]->format));
- enc_ctx->pix_fmt = ost->filter->filter->inputs[0]->format;
+ av_get_pix_fmt_name(av_buffersink_get_format(ost->filter->filter)));
+ enc_ctx->pix_fmt = av_buffersink_get_format(ost->filter->filter);
if (dec_ctx)
enc_ctx->bits_per_raw_sample = FFMIN(dec_ctx->bits_per_raw_sample,
av_pix_fmt_desc_get(enc_ctx->pix_fmt)->comp[0].depth);
@@ -3274,8 +3276,8 @@ static int init_output_stream(OutputStream *ost, char *error, int error_len)
!av_dict_get(ost->encoder_opts, "ab", NULL, 0))
av_dict_set(&ost->encoder_opts, "b", "128000", 0);

- if (ost->filter && ost->filter->filter->inputs[0]->hw_frames_ctx) {
- ost->enc_ctx->hw_frames_ctx = av_buffer_ref(ost->filter->filter->inputs[0]->hw_frames_ctx);
+ if (ost->filter && av_buffersink_get_hw_frames_ctx(ost->filter->filter)) {
+ ost->enc_ctx->hw_frames_ctx = av_buffer_ref(av_buffersink_get_hw_frames_ctx(ost->filter->filter));
if (!ost->enc_ctx->hw_frames_ctx)
return AVERROR(ENOMEM);
}
diff --git a/ffmpeg_filter.c b/ffmpeg_filter.c
index 0064014..1daca77 100644
--- a/ffmpeg_filter.c
+++ b/ffmpeg_filter.c
@@ -1079,15 +1079,15 @@ int configure_filtergraph(FilterGraph *fg)
* make sure they stay the same if the filtergraph is reconfigured later */
for (i = 0; i < fg->nb_outputs; i++) {
OutputFilter *ofilter = fg->outputs[i];
- AVFilterLink *link = ofilter->filter->inputs[0];
+ AVFilterContext *sink = ofilter->filter;

- ofilter->format = link->format;
+ ofilter->format = av_buffersink_get_format(sink);

- ofilter->width = link->w;
- ofilter->height = link->h;
+ ofilter->width = av_buffersink_get_w(sink);
+ ofilter->height = av_buffersink_get_h(sink);

- ofilter->sample_rate = link->sample_rate;
- ofilter->channel_layout = link->channel_layout;
+ ofilter->sample_rate = av_buffersink_get_sample_rate(sink);
+ ofilter->channel_layout = av_buffersink_get_channel_layout(sink);
}

fg->reconfiguration = 1;
--
2.10.2
wm4
2016-12-18 20:43:45 UTC
Permalink
On Sun, 18 Dec 2016 19:32:16 +0100
By "actual internal structs", I suspect you mean something similar to
typedef struct AVFormatContext {
...
AVFormatInternal *internal;
...
} AVFormatContext;
Introducing these structures was a big mistake. For the reasons, see the
recent discussion about making filter_frame() non-recursive (the short of
it: it makes the actual code unreadable), plus another discussion I did
not take part in about using options on these structures (the short of
it: a lot of work, if even possible).
I do not intend to extend that mistake in libavfilter. If possible, I
would rather like to kick it out, but fortunately the current uses are
very limited.
For buffersink, you could simply return a struct with the parameters.
As a value type, it'd be a copy and wouldn't need accessors.

Since you moved the discussion to general API issues...

Using public fields and such "internal" structs is functionally
equivalent to having an opaque struct with trivial accessors. It's
basically a style issue.

(The former approach, internal structs, is used and preferred for
AVCodecContext etc. because it has no impact on the API.)

The difference between them is essentially syntax only. Both of them
have multiple disadvantages:
- too much to deal with at once (whether it's fields or
setters/getters), no logical separation of functionality that is
less needed or doesn't make sense in some contexts. (Consider all
these AVCodecContext fields for tuning encoders - what do they do for
decoding? What do audio-only fields do if video is
encoded or decoded?)
- it's unclear when these fields can be set or not. (Why _can_ you set
a field if the graph is already "configured"? What happens then? How
is the user supposed to know when it's ok to set them?)
- even if you argue that the previous point can be fixed by having the
setters check the state and return an error value on misuse, it's
complex for both API implementer and user
- many uses of this hide internal fields from the public API user,
which is good. But at the same time, this moves away from the
(probably worthy) goal of allowing outside implementation of
codecs, (de)muxers, filters.

So I would suggest that instead of changing the whole API to use
accessors, we should make the API more "transactional", and force
correct use by API design. For example, we could make AVCodecContext
opaque, and provide instantiation functions that take specialized
structs (such as AVCodecParameters) to open the decoder. Separating
creation and use of an API would be a good step toward improving the API
IMHO. I found myself confused over which fields are always immutable in
an AVHWFramesContext, and which are mutable until "init", and that's an
example of the more classic mixed init/use API concept in ffmpeg.
Nicolas George
2016-12-19 08:40:41 UTC
Permalink
Post by wm4
For buffersink, you could simply return a struct with the parameters.
As a value type, it'd be a copy and wouldn't need accessors.
You mean a single structure returned by a single accessor with all the
stream's properties instead of individual accessors for individual
properties? Well, the naive method of returning a structure to the
caller would make the size of the structure part of the ABI, but there
are ways around it.

I do not dislike the idea, if that is what people prefer.

(AVCodecParameters would have been an obvious candidate for that, but it
lacks a few necessary fields. And it has a lot of extra fields, which
goes against what you write below.)
Post by wm4
Since you moved the discussion to general API issues...
Using public fields and such "internal" structs is functionally
equivalent to having an opaque struct with trivial accessors. It's
basically a style issue.
This is true, but "functionally equivalent" is a very myopic criterion.
It misses all the impact of the design on code readability. Just
look at all the "->inputs[0]" that this patch allows us to eliminate (and
that James missed, I think).
Post by wm4
The difference between them is essentially syntax only. Both of them
- too much to deal with at once (whether it's fields or
setters/getters), no logical separation of functionality that is
lesser needed or doesn't make sense in some contexts. (Consider all
these AVCodecContext fields for tuning encoders - what do they do for
decoding? What do fields, which are audio-only, do if video is
encoded or decoded?)
It feels more like a matter of documentation and auto-documentation than
anything else. "There is a crf field, does it apply to the x262 encoder?"
and "There is a crf option, does it apply to the x262 encoder?" are
exactly the same question, the only difference being that options are
auto-documenting and answer the question automatically. But we could
have found a way for fields as well.
Post by wm4
- it's unclear when these fields can be set or not. (Why _can_ you set
a field if the graph is already "configured"? What happens then? How
is the user supposed to know when it's ok to set them?)
- even if you argue that the previous point can be fixed by having the
setters check the state and return an error value on misuse, it's
complex for both API implementer and user
All considered, the complexity is in the task: video encoding is an
incredibly complex subject. An API can only do so much to ease it.
Post by wm4
- many uses of this hide internal fields from the public API user,
which is good. But at the same time, this moves away from the
(probably worthy) goal of allowing outside implementation of
codecs, (de)muxers, filters.
This ties in with my recent answer to Michael about making AVFilterLink
and AVFilterPad opaque: allowing foreign modules requires stability for
APIs that are currently internal, and makes development that much harder.
Post by wm4
So I would suggest that instead of changing the whole API to use
accessors, we should make the API more "transactional", and force
correct use by API design.
Unfortunately, if doing that were simple, we would already be doing it.

Regards,
--
Nicolas George
wm4
2016-12-21 13:51:02 UTC
Permalink
Nicolas George
2016-12-23 14:49:16 UTC
Permalink
On primidi, 1 Nivôse, year CCXXV, wm4 wrote:

[about windows COM system]
Post by wm4
To make it short, everything in COM consists of structs with function
pointers. Structs are never extended, if you need new function
pointers, you just add new structs, which you can dynamically query
from the old ones. This gives you 100% ABI downwards and upwards
compatibility. You also don't have to worry about linking to functions
not present in older Windows or whatever versions, because structs with
function pointers are a compile-time thing. You will merely get a
runtime failure when trying to query support for an unsupported struct.
(I tried to avoid COM terms, but COM calls these structs "interfaces".)
It's a pretty fascinating contrast to the fuckhackery we do with
extending structs while trying to keep ABI compatibility, version
and configure checks in API user code when trying to stay compatible
with multiple libavcodec versions, etc.
It is probably a large part of the reason Windows is slow on tomorrow's
computers while Linux and BSD are fast on yesterday's.

Regards,
--
Nicolas George
wm4
2016-12-23 16:48:22 UTC
Permalink
On Fri, 23 Dec 2016 15:49:16 +0100
Post by Nicolas George
[about windows COM system]
Post by wm4
To make it short, everything in COM consists of structs with function
pointers. Structs are never extended, if you need new function
pointers, you just add new structs, which you can dynamically query
from the old ones. This gives you 100% ABI downwards and upwards
compatibility. You also don't have to worry about linking to functions
not present in older Windows or whatever versions, because structs with
function pointers are a compile-time thing. You will merely get a
runtime failure when trying to query support for an unsupported struct.
(I tried to avoid COM terms, but COM calls these structs "interfaces".)
It's a pretty fascinating contrast to the fuckhackery we do with
extending structs while trying to keep ABI compatibility, version
and configure checks in API user code when trying to stay compatible
with multiple libavcodec versions, etc.
It is probably a large part of the reason windows is slow on tomorrow's
computers while Linux and BSD are fast on yesterday's.
No, it's not.
wm4
2016-12-21 13:52:22 UTC
Permalink
On Wed, 21 Dec 2016 01:56:59 +0100
In general, replacing public fields with macro-generated accessors is
really just a rename. Admittedly, it removes a confusing indirection
(through ->inputs[0]) in this case, but in general the improvements are
very minor. What does it matter whether there are 100 fields or 100
set/get functions?
* It is much easier to keep the ABI stable for accessor functions than
for public structs, because the order of members doesn't affect them.
* It is much easier to check which binary uses which ABI, because the
functions are listed in the symbols table, while checking which
struct member is accessed requires disassembling.
* Having the struct private means it can't be allocated on the stack
by API users, preventing problems when the struct size changes.
Acknowledged. I was talking mostly about API.
Nicolas George
2016-12-18 19:52:33 UTC
Permalink
You didn't answer what's the gain here.
Yes I did: discuss that with Hendrik, Andreas and Clément, not me.
How is this better than keeping the
struct public and letting library users keep accessing its fields normally?
Why are you trying to make libavfilter so different from the rest? We have
scheduled the deprecation and removal of /all/ accessors, and now you want
to add more?
I think you did not read the code carefully enough. These accessors, on
top of addressing Hendrik, Andreas and Clément's concerns, make the code
actually more robust and readable.
If people didn't use lavfi before, they will feel less motivated to do it
now since they can't even expect consistency with lavf or lavc.
Once again: consistency is only a means to an end.

Regards,
--
Nicolas George
Andreas Cadhalpun
2016-12-20 00:31:29 UTC
Permalink
Post by Nicolas George
av_buffersink_get_frame_rate() did already exist; its argument becomes const.
TODO minor version bump
API-Change: libavfilter
---
libavfilter/buffersink.c | 25 +++++++++++++++++++------
libavfilter/buffersink.h | 22 ++++++++++++++++++++--
2 files changed, 39 insertions(+), 8 deletions(-)
I think the const change is acceptable.
I'm not aware of a user outside of ffmpeg and const changes shouldn't cause big
problems, as only the API changes, not the ABI. So it is probably OK.
Post by Nicolas George
Note: I am introducing the "API-Change" Git tag: I think it will be much
more convenient than maintaining doc/APIchanges. Later I intend to write a
script using git log --grep to pretty print it.
I'm not convinced that this is more convenient. APIchanges can still be
changed after a commit, but the commit message can't. Also it can only replace
APIchanges when everyone (including Libav) uses it.
So I'd prefer if you added an APIchanges entry in addition to using this tag.

Best regards,
Andreas
Nicolas George
2016-12-21 09:50:03 UTC
Permalink
Post by Andreas Cadhalpun
I'm not aware of a user outside of ffmpeg and const changes shouldn't cause big
problems, as only the API changes, not the ABI. So it is probably OK.
Thanks.
Post by Andreas Cadhalpun
I'm not convinced that this is more convenient. APIchanges can still be
changed after a commit, but the commit message can't. Also it can only replace
APIchanges when everyone (including Libav) uses it.
So I'd prefer if you added an APIchanges entry in addition to using this tag.
I can think of a few solutions to merge APIchanges with the snippets
from the commit messages and allow fixes.

But you are right in principle: I should not start using a feature
before it is implemented.

Regards,
--
Nicolas George
Nicolas George
2016-12-23 14:31:45 UTC
Permalink
Post by Nicolas George
+AVRational av_buffersink_get_frame_rate (const AVFilterContext *ctx);
+int av_buffersink_get_w (const AVFilterContext *ctx);
+int av_buffersink_get_h (const AVFilterContext *ctx);
+AVRational av_buffersink_get_sample_aspect_ratio (const AVFilterContext *ctx);
So, I ask this of everybody who cares: what API do you prefer?

This one, i.e.:

encoder->width = av_buffersink_get_w(sink);
encoder->height = av_buffersink_get_h(sink);
encoder->sample_aspect_ratio = av_buffersink_get_sample_aspect_ratio(sink);

Or one with a single access to all the properties:

const AVBufferSinkProperties *fmt = av_buffersink_get_properties(sink);
encoder->width = fmt->w;
encoder->height = fmt->h;
encoder->sample_aspect_ratio = fmt->sample_aspect_ratio;

Regards,
--
Nicolas George
Michael Niedermayer
2016-12-23 15:59:45 UTC
Permalink
Post by Nicolas George
Post by Nicolas George
+AVRational av_buffersink_get_frame_rate (const AVFilterContext *ctx);
+int av_buffersink_get_w (const AVFilterContext *ctx);
+int av_buffersink_get_h (const AVFilterContext *ctx);
+AVRational av_buffersink_get_sample_aspect_ratio (const AVFilterContext *ctx);
So, I ask this of everybody who care: what API do you prefer?
encoder->width = av_buffersink_get_w(sink);
encoder->height = av_buffersink_get_h(sink);
encoder->sample_aspect_ratio = av_buffersink_get_sample_aspect_ratio(sink);
const AVBufferSinkProperties *fmt = av_buffersink_get_properties(sink);
encoder->width = fmt->w;
encoder->height = fmt->h;
encoder->sample_aspect_ratio = fmt->sample_aspect_ratio;
From these 2, the first; but I think the user app needs more access
to be able to implement filters, and this could make either API
obsolete.

[...]
--
Michael GnuPG fingerprint: 9FF2128B147EF6730BADF133611EC787040B0FAB

If a bugfix only changes things apparently unrelated to the bug with no
further explanation, that is a good sign that the bugfix is wrong.
Michael Niedermayer
2016-12-23 16:05:01 UTC
Permalink
Post by Michael Niedermayer
Post by Nicolas George
Post by Nicolas George
+AVRational av_buffersink_get_frame_rate (const AVFilterContext *ctx);
+int av_buffersink_get_w (const AVFilterContext *ctx);
+int av_buffersink_get_h (const AVFilterContext *ctx);
+AVRational av_buffersink_get_sample_aspect_ratio (const AVFilterContext *ctx);
So, I ask this of everybody who care: what API do you prefer?
encoder->width = av_buffersink_get_w(sink);
encoder->height = av_buffersink_get_h(sink);
encoder->sample_aspect_ratio = av_buffersink_get_sample_aspect_ratio(sink);
const AVBufferSinkProperties *fmt = av_buffersink_get_properties(sink);
encoder->width = fmt->w;
encoder->height = fmt->h;
encoder->sample_aspect_ratio = fmt->sample_aspect_ratio;
From these 2, the first; but I think the user app needs more access
to be able to implement filters, and this could make either API
obsolete.
Also, AVCodecParameters would be an option to use as a struct, if a
struct is used; the lack of AVClass/AVOption in it may cause problems,
though, when lib versions differ and fields have been added in between.

[...]
--
Michael GnuPG fingerprint: 9FF2128B147EF6730BADF133611EC787040B0FAB

I know you won't believe me, but the highest form of Human Excellence is
to question oneself and others. -- Socrates
Nicolas George
2016-12-23 16:11:51 UTC
Permalink
Post by Michael Niedermayer
also AVCodecParameters would be an option to use as a struct if a
struct is used
Unfortunately not, since it lacks at least time_base and frame_rate. We
could add them, but I think it would be a bad idea. It also has a lot of
other fields that are of no use for this case and would need
documentation.

Regards,
--
Nicolas George
Nicolas George
2016-12-23 16:08:49 UTC
Permalink
Post by Michael Niedermayer
From these 2 the first
Ok, that makes one.
Post by Michael Niedermayer
but I think the user app needs more access
to be able to implement filters, and this could make either API
obsolete.
User apps implementing filters are not for today nor tomorrow. I think it
is better to start thinking about how to cross that bridge when we start
seeing it on the horizon.

Regards,
--
Nicolas George
Michael Niedermayer
2016-12-23 17:31:22 UTC
Permalink
Post by Nicolas George
Post by Michael Niedermayer
From these 2 the first
Ok, that makes one.
Post by Michael Niedermayer
but I think the user app needs more access
to be able to implement filters, and this could make either API
obsolete.
User apps implementing filters are not for today nor tomorrow. I think it
is better to start thinking about how to cross that bridge when we start
seeing it on the horizon.
libavfilter had a public API that allowed filters to be implemented;
these changes move it farther away from that.
I do care about having some public API that allows filters
to be implemented.
You pointed out that you work alone and people don't review your lavfi
patches. I think if you integrated others' goals, more people would.

For example, your plans can stay the same but include the declared
goal of making the API public once they are all done. And suddenly
I would be interested in this. Of course it doesn't make days
have more hours or my todo list shorter, but pushing the public API away
makes me lose interest for sure.

Other people might have other ideas and goals too, which they want
lavfi to achieve.
If more things can be integrated, more people might contribute ...
thx

[...]
--
Michael GnuPG fingerprint: 9FF2128B147EF6730BADF133611EC787040B0FAB

Its not that you shouldnt use gotos but rather that you should write
readable code and code with gotos often but not always is less readable
Nicolas George
2016-12-23 17:51:38 UTC
Permalink
Post by Michael Niedermayer
libavfilter had a public API that allowed filters to be implemented;
these changes move it farther away from that.
I do care about having some public API that allows filters
to be implemented.
You pointed out that you work alone and people don't review your lavfi
patches. I think if you integrated others' goals, more people would.
For example, your plans can stay the same but include the declared
goal of making the API public once they are all done. And suddenly
I would be interested in this. Of course it doesn't make days
have more hours or my todo list shorter, but pushing the public API away
makes me lose interest for sure.
Other people might have other ideas and goals too, which they want
lavfi to achieve.
If more things can be integrated, more people might contribute ...
You misunderstood: I am not excluding an API to allow external filters,
I am saying that it is too soon to think about it. The way filters must
be written is in the middle of a complete rework. Once it is stabilized, tested
extensively, and we know it will not change again, then we can make parts
of it public to allow external filters.

Anything else would be a compatibility nightmare. Do you not agree?

Regards,
--
Nicolas George
Michael Niedermayer
2016-12-23 20:02:09 UTC
Permalink
Post by Nicolas George
Post by Michael Niedermayer
libavfilter had a public API that allowed filters to be implemented;
these changes move it farther away from that.
I do care about having some public API that allows filters
to be implemented.
You pointed out that you work alone and people don't review your lavfi
patches. I think if you integrate others' goals, more people would
For example, your plans can stay the same but include the declared
goal of making the API public once they are all done. And suddenly
I would be interested in this. Of course it doesn't make days
have more hours or my todo list shorter, but pushing the public API away
makes me lose interest for sure.
Other people might have other ideas and goals too, which they want
lavfi to achieve.
If more things can be integrated, more people might contribute ...
You misunderstood: I am not excluding an API to allow external filters,
I am saying that it is too soon to think about it. The way filters must
be written is in the middle of a complete rework. Once it is stabilized, tested
extensively, and we know it will not change again, then we can make parts
of it public to allow external filters.
Anything else would be a compatibility nightmare. Do you not agree?
APIs in FFmpeg will change as long as the project is alive.

New developers join, older ones leave, people's goals and opinions
change. The libavfilter API is based on the lessons learned from
previous projects and frameworks; in that way this codebase has a quite
long timeline and rests, in some sense, on the shoulders of many
experienced developers from multiple other free software projects.

If we only make the API public once the ultimate global optimum is
reached, we will never do so.
What I am asking for is not that you declare an API public that you
intend to change, but that we commit to declaring the API public in the
reachable future, not as a goal line that keeps shifting farther
into the future forever.
You may eventually be happy with the API in libavfilter, but by that time
others with other goals and opinions will just be starting their journey.

Also, making the API public should be a real goal and be considered in
the steps taken.

For example, if AVFilterLink isn't intended to be substantially changed,
and is intended to be public when the API becomes public,
then I don't see why it should not stay public.
Making it private, rewriting code, adding public accessors, just to
then make it public again and deprecate the accessors, is a huge
amount of wasted time.

[...]
Nicolas George
2016-12-23 20:23:50 UTC
Permalink
Post by Michael Niedermayer
APIs in FFmpeg will change as long as the project is alive.
New developers join, older ones leave, people's goals and opinions
change. The libavfilter API is based on the lessons learned from
previous projects and frameworks; in that way this codebase has a quite
long timeline and rests, in some sense, on the shoulders of many
experienced developers from multiple other free software projects.
If we only make the API public once the ultimate global optimum is
reached, we will never do so.
I was not referring to a hypothetical global optimum, but right now we
are nowhere near a local optimum: you have certainly noticed that since
I pushed the 45k patch making filter_frame non-recursive, I had to fix
several bugs. Well, there are still a few of them lurking, and then we
need to reap the benefits of the design change.

Two days ago, I outlined my plans for lavfi:

https://ffmpeg.org/pipermail/ffmpeg-devel/2016-December/204686.html

I say: first do all these points, because any of these might require a
surprise rework of some internal API. Then (in parallel when relevant),
adapt ffmpeg.c to work with the cleaned-up API of lavfi and fix the
scheduling.

Then wait a year, to be sure.

Then start working on external filters.

Does it seem unreasonable?

I can assure you, when the last points in the plan are done, I will
be so fed up with it that I will easily leave the API alone for a year.
There are plenty of other parts of the code I would like to work on.

For reference, I am halfway through the second item in the plan.
Probably more than halfway, actually.

Regards,
--
Nicolas George
Michael Niedermayer
2016-12-23 22:01:36 UTC
Permalink
Post by Nicolas George
Post by Michael Niedermayer
APIs in FFmpeg will change as long as the project is alive.
New developers join, older ones leave, people's goals and opinions
change. The libavfilter API is based on the lessons learned from
previous projects and frameworks; in that way this codebase has a quite
long timeline and rests, in some sense, on the shoulders of many
experienced developers from multiple other free software projects.
If we only make the API public once the ultimate global optimum is
reached, we will never do so.
I was not referring to a hypothetical global optimum, but right now we
are nowhere near a local optimum: you have certainly noticed that since
I pushed the 45k patch making filter_frame non-recursive, I had to fix
several bugs. Well, there are still a few of them lurking, and then we
need to reap the benefits of the design change.
https://ffmpeg.org/pipermail/ffmpeg-devel/2016-December/204686.html
I say: first do all these points, because any of these might require a
surprise rework of some internal API. Then (in parallel when relevant),
adapt ffmpeg.c to work with the cleaned-up API of lavfi and fix the
scheduling.
Then wait a year, to be sure.
Then start working on external filters.
Does it seem unreasonable?
Shouldn't there be a public announcement about the intent to make the API public?
Shouldn't there be a public call for API design suggestions and discussion?
Shouldn't there be a public call for API-related patches, with a deadline?
Shouldn't there be a go/no-go poll of the FFmpeg developers?

I don't think a wait period makes sense. By the time everyone is OK
with making the API public, the code will have been more than
extensively tested, and waiting further would likely add little

[...]
Nicolas George
2016-12-23 22:46:07 UTC
Permalink
Post by Michael Niedermayer
Shouldn't there be a public announcement about the intent to make the API public?
Shouldn't there be a public call for API design suggestions and discussion?
Shouldn't there be a public call for API-related patches, with a deadline?
Shouldn't there be a go/no-go poll of the FFmpeg developers?
I am not sure what you are angling at here. I announced my plans to
rework the internal workings and global API when I started working on
making request_frame non-recursive (note: request_frame, not
filter_frame; this patch landed more than a year ago) and mentioned them
several times since then, for example when Paul considered working on
threading, IIRC. I did not give that much detail because nobody seemed
interested in them.

And of course, no significant change in API or design went in without
staying for quite a time on the mailing list.
Post by Michael Niedermayer
I don't think a wait period makes sense. By the time everyone is OK
with making the API public, the code will have been more than
extensively tested, and waiting further would likely add little
So you are saying that the wait period will take care of itself even if
we do not decide to enforce it. I am ok with it.

And, after all, what do you want, concretely?

I, personally, am not working on making external filters possible. But I
am not preventing it either. (Well, this patch makes it a little bit
more work, but just work, nothing hard.) And I think that when I am done
with my plan, it will actually be easier, because a cleaner internal API
is easier to make public.

Another point: this particular patch, and the series that surrounds it,
I do not really want it. For all I care, we could drop it (either all
the series, or just 5-6, because 1-4 make the code clearer).

On the other hand, Andreas, Clément, Hendrik and James would not be
happy, because they insisted on hiding the innards of AVFilterLink more
deeply after the filter_frame patch. This was discussed on the mailing
list. I probably should let you discuss the issue with them. The code is
written, it did not take much time and does not interact with my current
work: it can be applied or dropped either way.

Regards,
--
Nicolas George
Michael Niedermayer
2016-12-24 01:25:11 UTC
Permalink
Post by Nicolas George
Post by Michael Niedermayer
Shouldn't there be a public announcement about the intent to make the API public?
Shouldn't there be a public call for API design suggestions and discussion?
Shouldn't there be a public call for API-related patches, with a deadline?
Shouldn't there be a go/no-go poll of the FFmpeg developers?
I am not sure what you are angling at here. I announced my plans to
rework the internal workings and global API when I started working on
making request_frame non-recursive (note: request_frame, not
filter_frame; this patch landed more than a year ago) and mentioned them
several times since then, for example when Paul considered working on
threading, IIRC. I did not give that much detail because nobody seemed
interested in them.
And of course, no significant change in API or design went in without
staying for quite a time on the mailing list.
You snipped the context and replied to the quote as if it was meant
in a different context.
Not everyone intends to attack you :)

The quoted text lists the steps which IMHO make sense to design an
API and then make it public. What you talk about are the steps that
make sense to get the changes you want in.
There's nothing wrong with that, it just has a different goal, and I
thought we were talking about the other.
Post by Nicolas George
Post by Michael Niedermayer
I don't think a wait period makes sense. By the time everyone is OK
with making the API public, the code will have been more than
extensively tested, and waiting further would likely add little
So you are saying that the wait period will take care of itself even if
we do not decide to enforce it. I am ok with it.
And, after all, what do you want, concretely?
as you ask ...

I'd like to see libavfilter be less centralized. I'd like to be able
to write a filter and know that if it gets rejected on ffmpeg-dev,
I can still publish it in my own git repo and everyone could easily use
it (as with plugins, which could have binary packages in distros for easy
use, not requiring users to patch / merge source).
This is an especially annoying situation when there's a consulting job
and then someone decides to demand some change which practically
prevents solving the actual use case one is paid for.

If it's not a consulting job, dropping some patch isn't a huge
problem if a reviewer is stubborn and insists on something which is
impractical.
But if one is doing the work for someone else, dropping a patch is much
more annoying. Being able to create a plugin sidesteps this.
Post by Nicolas George
I, personally, am not working on making external filters possible. But I
am not preventing it either. (Well, this patch makes it a little bit
more work, but just work, nothing hard.) And I think that when I am done
with my plan, it will actually be easier, because a cleaner internal API
is easier to make public.
Another point: this particular patch, and the series that surrounds it,
I do not really want it. For all I care, we could drop it (either all
the series, or just 5-6, because 1-4 make the code clearer).
On the other hand, Andreas, Clément, Hendrik and James would not be
happy, because they insisted on hiding the innards of AVFilterLink more
deeply after the filter_frame patch. This was discussed on the mailing
list. I probably should let you discuss the issue with them. The code is
written, it did not take much time and does not interact with my current
work: it can be applied or dropped either way.
My opinion is that if we intend to have AVFilterLink public in the
future, then making it private now is a bad idea and wasted time.

[...]
Nicolas George
2016-12-24 16:44:39 UTC
Permalink
Post by Michael Niedermayer
You snipped the context and replied to the quote as if it was meant
in a different context.
Not everyone intends to attack you :)
Sorry; I might have still been a little on edge.
Post by Michael Niedermayer
The quoted text lists the steps which IMHO make sense to design an
API and then make it public. What you talk about are the steps that
make sense to get the changes you want in.
There's nothing wrong with that, it just has a different goal, and I
thought we were talking about the other.
Oh, ok.

But the changes that I want in will significantly change how filters
must be written, and therefore have a big impact on the necessary API.
Post by Michael Niedermayer
as you ask ...
I'd like to see libavfilter be less centralized. I'd like to be able
to write a filter and know that if it gets rejected on ffmpeg-dev,
I can still publish it in my own git repo and everyone could easily use
it (as with plugins, which could have binary packages in distros for easy
use, not requiring users to patch / merge source).
This is an especially annoying situation when there's a consulting job
and then someone decides to demand some change which practically
prevents solving the actual use case one is paid for.
If it's not a consulting job, dropping some patch isn't a huge
problem if a reviewer is stubborn and insists on something which is
impractical.
But if one is doing the work for someone else, dropping a patch is much
more annoying. Being able to create a plugin sidesteps this.
In the long term, I agree.
Post by Michael Niedermayer
My opinion is that if we intend to have AVFilterLink public in the
future, then making it private now is a bad idea and wasted time.
As I said, I am not the one who wants to make AVFilterLink opaque.
Note that making it visible again when it is ready is a matter of
minutes.

Regards,
--
Nicolas George
Nicolas George
2016-12-25 10:22:37 UTC
Permalink
Post by Michael Niedermayer
My oppinion is that If we intend to have AVFilterLink public in the
future then making it private now is a bad idea and wasted time.
Something I just remembered:

With the prospect of inter-filters threading, it is better if even our
own filters do not access links directly, because then it is only
necessary to add synchronization in the helper functions.

What about moving the definition in its own header?

For compatibility at the beginning, this header can be included by
avfilter.h, but then it can either be made private or stay there to be
included by the parts of the code that really need it.

Regards,
--
Nicolas George
wm4
2017-01-09 09:50:34 UTC
Permalink
On Fri, 23 Dec 2016 21:02:09 +0100
Post by Michael Niedermayer
Post by Nicolas George
Post by Michael Niedermayer
libavfilter had a public API that allowed filters to be implemented;
these changes move it farther away from that.
I do care about having some public API that allows filters
to be implemented.
You pointed out that you work alone and people don't review your lavfi
patches. I think if you integrate others' goals, more people would
For example, your plans can stay the same but include the declared
goal of making the API public once they are all done. And suddenly
I would be interested in this. Of course it doesn't make days
have more hours or my todo list shorter, but pushing the public API away
makes me lose interest for sure.
Other people might have other ideas and goals too, which they want
lavfi to achieve.
If more things can be integrated, more people might contribute ...
You misunderstood: I am not excluding an API to allow external filters,
I am saying that it is too soon to think about it. The way filters must
be written is in the middle of a complete rework. Once it is stabilized, tested
extensively, and we know it will not change again, then we can make parts
of it public to allow external filters.
Anything else would be a compatibility nightmare. Do you not agree?
APIs in FFmpeg will change as long as the project is alive.
A public API for something as complicated and far-reaching as external
filters or codecs should be tiny and concise, not expose the current
internal API-vomit we have to the world.

James Almer
2016-12-23 16:52:44 UTC
Permalink
Post by Nicolas George
Post by Nicolas George
+AVRational av_buffersink_get_frame_rate (const AVFilterContext *ctx);
+int av_buffersink_get_w (const AVFilterContext *ctx);
+int av_buffersink_get_h (const AVFilterContext *ctx);
+AVRational av_buffersink_get_sample_aspect_ratio (const AVFilterContext *ctx);
So, I ask this of everybody who cares: what API do you prefer?
encoder->width = av_buffersink_get_w(sink);
encoder->height = av_buffersink_get_h(sink);
encoder->sample_aspect_ratio = av_buffersink_get_sample_aspect_ratio(sink);
const AVBufferSinkProperties *fmt = av_buffersink_get_properties(sink);
encoder->width = fmt->w;
encoder->height = fmt->h;
encoder->sample_aspect_ratio = fmt->sample_aspect_ratio;
Regards,
I very much prefer the latter. Only one symbol, a (hopefully) easily
extensible struct if needed, etc.