Discussion:
[PATCH 00/24] Generic hardware device setup and miscellaneous related merges
Mark Thompson
2017-06-12 22:40:17 UTC
This merges a set of stuff from libav to do with hardware codecs/processing.

The two most interesting features of this are:

* Generic hardware device setup. This finishes the uniform structure for hardware device setup which has been in progress for a while, finally deleting several of the ffmpeg_X.c hardware-specific files. Initially this works for VAAPI and VDPAU, with partial support for QSV. A follow-up series by wm4 (starting from <https://git.libav.org/?p=libav.git;a=commit;h=fff90422d181744cd75dbf011687ee7095f02875>) will add DXVA2/D3D11 support as well.

* Mapping between hardware APIs. Initially this supports VAAPI/DXVA2 and QSV; OpenCL integration with those is to follow. The main use of this at the moment is to allow use of the lavc decoder via a platform hwaccel, and hence avoid the nastiness of the specific *_qsv decoders (for example: "./ffmpeg_g -y -hwaccel vaapi -hwaccel_output_format vaapi -i in.mp4 -an -vf 'hwmap=derive_device=qsv,format=qsv' -c:v h264_qsv -b 5M -maxrate 5M -look_ahead 0 out.mp4", and similarly with DXVA2).

Other oddments:
* Support for the VAAPI driver which wraps VDPAU.
* Field rate output for the VAAPI deinterlacer.
* hw_device_ctx support for QSV codecs using software frames (fixes some current silly failure cases when using multiple independent instances together).
* Profile mismatch option for hwaccels (primarily to allow hardware decoding of H.264 constrained baseline profile streams which erroneously fail to set constraint_set1_flag).
* Documentation for the hardware frame movement filters (hwupload, hwdownload, hwmap).

VP9 VAAPI encode support would be here, but is not included because it depends on the vp9_raw_reorder BSF, which is only written with the bitstream API rather than with get_bits. I know that was skipped earlier, but has there been any more discussion on merging that? Would it be easiest to just convert the BSF?

Thanks,

- Mark
Mark Thompson
2017-06-12 22:40:18 UTC
The driver is somewhat bitrotten (not updated for years) but is still
usable for decoding with this change. To support it, this adds a new
driver quirk to indicate no support at all for surface attributes.

Based on a patch by wm4 <***@googlemail.com>.

(cherry picked from commit e791b915c774408fbc0ec9e7270b021899e08ccc)
---
libavutil/hwcontext_vaapi.c | 79 ++++++++++++++++++++++++++-------------------
libavutil/hwcontext_vaapi.h | 7 ++++
2 files changed, 52 insertions(+), 34 deletions(-)

diff --git a/libavutil/hwcontext_vaapi.c b/libavutil/hwcontext_vaapi.c
index 3b50e95615..3970726d30 100644
--- a/libavutil/hwcontext_vaapi.c
+++ b/libavutil/hwcontext_vaapi.c
@@ -155,7 +155,8 @@ static int vaapi_frames_get_constraints(AVHWDeviceContext *hwdev,
unsigned int fourcc;
int err, i, j, attr_count, pix_fmt_count;

- if (config) {
+ if (config &&
+ !(hwctx->driver_quirks & AV_VAAPI_DRIVER_QUIRK_SURFACE_ATTRIBUTES)) {
attr_count = 0;
vas = vaQuerySurfaceAttributes(hwctx->display, config->config_id,
0, &attr_count);
@@ -273,6 +274,11 @@ static const struct {
"ubit",
AV_VAAPI_DRIVER_QUIRK_ATTRIB_MEMTYPE,
},
+ {
+ "VDPAU wrapper",
+ "Splitted-Desktop Systems VDPAU backend for VA-API",
+ AV_VAAPI_DRIVER_QUIRK_SURFACE_ATTRIBUTES,
+ },
};

static int vaapi_device_init(AVHWDeviceContext *hwdev)
@@ -451,43 +457,48 @@ static int vaapi_frames_init(AVHWFramesContext *hwfc)
}

if (!hwfc->pool) {
- int need_memory_type = !(hwctx->driver_quirks & AV_VAAPI_DRIVER_QUIRK_ATTRIB_MEMTYPE);
- int need_pixel_format = 1;
- for (i = 0; i < avfc->nb_attributes; i++) {
- if (ctx->attributes[i].type == VASurfaceAttribMemoryType)
- need_memory_type = 0;
- if (ctx->attributes[i].type == VASurfaceAttribPixelFormat)
- need_pixel_format = 0;
- }
- ctx->nb_attributes =
- avfc->nb_attributes + need_memory_type + need_pixel_format;
+ if (!(hwctx->driver_quirks & AV_VAAPI_DRIVER_QUIRK_SURFACE_ATTRIBUTES)) {
+ int need_memory_type = !(hwctx->driver_quirks & AV_VAAPI_DRIVER_QUIRK_ATTRIB_MEMTYPE);
+ int need_pixel_format = 1;
+ for (i = 0; i < avfc->nb_attributes; i++) {
+ if (ctx->attributes[i].type == VASurfaceAttribMemoryType)
+ need_memory_type = 0;
+ if (ctx->attributes[i].type == VASurfaceAttribPixelFormat)
+ need_pixel_format = 0;
+ }
+ ctx->nb_attributes =
+ avfc->nb_attributes + need_memory_type + need_pixel_format;

- ctx->attributes = av_malloc(ctx->nb_attributes *
+ ctx->attributes = av_malloc(ctx->nb_attributes *
sizeof(*ctx->attributes));
- if (!ctx->attributes) {
- err = AVERROR(ENOMEM);
- goto fail;
- }
+ if (!ctx->attributes) {
+ err = AVERROR(ENOMEM);
+ goto fail;
+ }

- for (i = 0; i < avfc->nb_attributes; i++)
- ctx->attributes[i] = avfc->attributes[i];
- if (need_memory_type) {
- ctx->attributes[i++] = (VASurfaceAttrib) {
- .type = VASurfaceAttribMemoryType,
- .flags = VA_SURFACE_ATTRIB_SETTABLE,
- .value.type = VAGenericValueTypeInteger,
- .value.value.i = VA_SURFACE_ATTRIB_MEM_TYPE_VA,
- };
- }
- if (need_pixel_format) {
- ctx->attributes[i++] = (VASurfaceAttrib) {
- .type = VASurfaceAttribPixelFormat,
- .flags = VA_SURFACE_ATTRIB_SETTABLE,
- .value.type = VAGenericValueTypeInteger,
- .value.value.i = fourcc,
- };
+ for (i = 0; i < avfc->nb_attributes; i++)
+ ctx->attributes[i] = avfc->attributes[i];
+ if (need_memory_type) {
+ ctx->attributes[i++] = (VASurfaceAttrib) {
+ .type = VASurfaceAttribMemoryType,
+ .flags = VA_SURFACE_ATTRIB_SETTABLE,
+ .value.type = VAGenericValueTypeInteger,
+ .value.value.i = VA_SURFACE_ATTRIB_MEM_TYPE_VA,
+ };
+ }
+ if (need_pixel_format) {
+ ctx->attributes[i++] = (VASurfaceAttrib) {
+ .type = VASurfaceAttribPixelFormat,
+ .flags = VA_SURFACE_ATTRIB_SETTABLE,
+ .value.type = VAGenericValueTypeInteger,
+ .value.value.i = fourcc,
+ };
+ }
+ av_assert0(i == ctx->nb_attributes);
+ } else {
+ ctx->attributes = NULL;
+ ctx->nb_attributes = 0;
}
- av_assert0(i == ctx->nb_attributes);

ctx->rt_format = rt_format;

diff --git a/libavutil/hwcontext_vaapi.h b/libavutil/hwcontext_vaapi.h
index da1d4fe6c2..0b2e071cb3 100644
--- a/libavutil/hwcontext_vaapi.h
+++ b/libavutil/hwcontext_vaapi.h
@@ -51,6 +51,13 @@ enum {
* so the surface allocation code will not try to use it.
*/
AV_VAAPI_DRIVER_QUIRK_ATTRIB_MEMTYPE = (1 << 2),
+
+ /**
+ * The driver does not support surface attributes at all.
+ * The surface allocation code will never pass them to surface allocation,
+ * and the results of the vaQuerySurfaceAttributes() call will be faked.
+ */
+ AV_VAAPI_DRIVER_QUIRK_SURFACE_ATTRIBUTES = (1 << 3),
};

/**
--
2.11.0
wm4
2017-06-13 11:59:58 UTC
On Mon, 12 Jun 2017 23:40:18 +0100
Post by Mark Thompson
The driver is somewhat bitrotten (not updated for years) but is still
usable for decoding with this change. To support it, this adds a new
driver quirk to indicate no support at all for surface attributes.
[...]
Fine, of course only if you want to (consenting adults etc.)
Mark Thompson
2017-06-12 22:40:19 UTC
Previously this was leaking, though it actually hit an assert making
sure that the buffer had already been cleared when freeing the picture.

(cherry picked from commit 17aeee5832b9188b570c3d3de4197e4cdc54c634)
---
libavcodec/vaapi_encode.c | 2 ++
1 file changed, 2 insertions(+)

diff --git a/libavcodec/vaapi_encode.c b/libavcodec/vaapi_encode.c
index 7e9c00f51d..7aaf263d25 100644
--- a/libavcodec/vaapi_encode.c
+++ b/libavcodec/vaapi_encode.c
@@ -428,6 +428,8 @@ fail:
fail_at_end:
av_freep(&pic->codec_picture_params);
av_frame_free(&pic->recon_image);
+ av_buffer_unref(&pic->output_buffer_ref);
+ pic->output_buffer = VA_INVALID_ID;
return err;
}
--
2.11.0
Mark Thompson
2017-06-12 22:40:20 UTC
Creates a new device context from another of a different type which
refers to the same underlying hardware.

(cherry picked from commit b266ad56fe0e4ce5bb70118ba2e2b1dabfaf76ce)
---
doc/APIchanges | 3 ++
libavutil/hwcontext.c | 65 ++++++++++++++++++++++++++++++++++++++++++
libavutil/hwcontext.h | 26 +++++++++++++++++
libavutil/hwcontext_internal.h | 8 ++++++
libavutil/version.h | 2 +-
5 files changed, 103 insertions(+), 1 deletion(-)

diff --git a/doc/APIchanges b/doc/APIchanges
index 67a6142401..a6889f3930 100644
--- a/doc/APIchanges
+++ b/doc/APIchanges
@@ -15,6 +15,9 @@ libavutil: 2015-08-28

API changes, most recent first:

+2017-06-xx - xxxxxxx - lavu 55.64.100 - hwcontext.h
+ Add av_hwdevice_ctx_create_derived().
+
2017-05-15 - xxxxxxxxxx - lavc 57.96.100 - avcodec.h
VideoToolbox hardware-accelerated decoding now supports the new hwaccel API,
which can create the decoder context and allocate hardware frames automatically.
diff --git a/libavutil/hwcontext.c b/libavutil/hwcontext.c
index 8d50a32b84..86d290d322 100644
--- a/libavutil/hwcontext.c
+++ b/libavutil/hwcontext.c
@@ -68,6 +68,8 @@ static void hwdevice_ctx_free(void *opaque, uint8_t *data)
if (ctx->free)
ctx->free(ctx);

+ av_buffer_unref(&ctx->internal->source_device);
+
av_freep(&ctx->hwctx);
av_freep(&ctx->internal->priv);
av_freep(&ctx->internal);
@@ -538,6 +540,69 @@ fail:
return ret;
}

+int av_hwdevice_ctx_create_derived(AVBufferRef **dst_ref_ptr,
+ enum AVHWDeviceType type,
+ AVBufferRef *src_ref, int flags)
+{
+ AVBufferRef *dst_ref = NULL, *tmp_ref;
+ AVHWDeviceContext *dst_ctx, *tmp_ctx;
+ int ret = 0;
+
+ tmp_ref = src_ref;
+ while (tmp_ref) {
+ tmp_ctx = (AVHWDeviceContext*)tmp_ref->data;
+ if (tmp_ctx->type == type) {
+ dst_ref = av_buffer_ref(tmp_ref);
+ if (!dst_ref) {
+ ret = AVERROR(ENOMEM);
+ goto fail;
+ }
+ goto done;
+ }
+ tmp_ref = tmp_ctx->internal->source_device;
+ }
+
+ dst_ref = av_hwdevice_ctx_alloc(type);
+ if (!dst_ref) {
+ ret = AVERROR(ENOMEM);
+ goto fail;
+ }
+ dst_ctx = (AVHWDeviceContext*)dst_ref->data;
+
+ tmp_ref = src_ref;
+ while (tmp_ref) {
+ tmp_ctx = (AVHWDeviceContext*)tmp_ref->data;
+ if (dst_ctx->internal->hw_type->device_derive) {
+ ret = dst_ctx->internal->hw_type->device_derive(dst_ctx,
+ tmp_ctx,
+ flags);
+ if (ret == 0) {
+ dst_ctx->internal->source_device = av_buffer_ref(src_ref);
+ if (!dst_ctx->internal->source_device) {
+ ret = AVERROR(ENOMEM);
+ goto fail;
+ }
+ goto done;
+ }
+ if (ret != AVERROR(ENOSYS))
+ goto fail;
+ }
+ tmp_ref = tmp_ctx->internal->source_device;
+ }
+
+ ret = AVERROR(ENOSYS);
+ goto fail;
+
+done:
+ *dst_ref_ptr = dst_ref;
+ return 0;
+
+fail:
+ av_buffer_unref(&dst_ref);
+ *dst_ref_ptr = NULL;
+ return ret;
+}
+
static void ff_hwframe_unmap(void *opaque, uint8_t *data)
{
HWMapDescriptor *hwmap = (HWMapDescriptor*)data;
diff --git a/libavutil/hwcontext.h b/libavutil/hwcontext.h
index cfc6ad0e28..782dbf22e1 100644
--- a/libavutil/hwcontext.h
+++ b/libavutil/hwcontext.h
@@ -271,6 +271,32 @@ int av_hwdevice_ctx_create(AVBufferRef **device_ctx, enum AVHWDeviceType type,
const char *device, AVDictionary *opts, int flags);

/**
+ * Create a new device of the specified type from an existing device.
+ *
+ * If the source device is a device of the target type or was originally
+ * derived from such a device (possibly through one or more intermediate
+ * devices of other types), then this will return a reference to the
+ * existing device of the same type as is requested.
+ *
+ * Otherwise, it will attempt to derive a new device from the given source
+ * device. If direct derivation to the new type is not implemented, it will
+ * attempt the same derivation from each ancestor of the source device in
+ * turn looking for an implemented derivation method.
+ *
+ * @param dst_ctx On success, a reference to the newly-created
+ * AVHWDeviceContext.
+ * @param type The type of the new device to create.
+ * @param src_ctx A reference to an existing AVHWDeviceContext which will be
+ * used to create the new device.
+ * @param flags Currently unused; should be set to zero.
+ * @return Zero on success, a negative AVERROR code on failure.
+ */
+int av_hwdevice_ctx_create_derived(AVBufferRef **dst_ctx,
+ enum AVHWDeviceType type,
+ AVBufferRef *src_ctx, int flags);
+
+
+/**
* Allocate an AVHWFramesContext tied to a given device context.
*
* @param device_ctx a reference to a AVHWDeviceContext. This function will make
diff --git a/libavutil/hwcontext_internal.h b/libavutil/hwcontext_internal.h
index cf05323e15..6451c0e2c5 100644
--- a/libavutil/hwcontext_internal.h
+++ b/libavutil/hwcontext_internal.h
@@ -66,6 +66,8 @@ typedef struct HWContextType {

int (*device_create)(AVHWDeviceContext *ctx, const char *device,
AVDictionary *opts, int flags);
+ int (*device_derive)(AVHWDeviceContext *dst_ctx,
+ AVHWDeviceContext *src_ctx, int flags);

int (*device_init)(AVHWDeviceContext *ctx);
void (*device_uninit)(AVHWDeviceContext *ctx);
@@ -95,6 +97,12 @@ typedef struct HWContextType {
struct AVHWDeviceInternal {
const HWContextType *hw_type;
void *priv;
+
+ /**
+ * For a derived device, a reference to the original device
+ * context it was derived from.
+ */
+ AVBufferRef *source_device;
};

struct AVHWFramesInternal {
diff --git a/libavutil/version.h b/libavutil/version.h
index fb61dcc666..dd8d2407da 100644
--- a/libavutil/version.h
+++ b/libavutil/version.h
@@ -80,7 +80,7 @@


#define LIBAVUTIL_VERSION_MAJOR 55
-#define LIBAVUTIL_VERSION_MINOR 63
+#define LIBAVUTIL_VERSION_MINOR 64
#define LIBAVUTIL_VERSION_MICRO 100

#define LIBAVUTIL_VERSION_INT AV_VERSION_INT(LIBAVUTIL_VERSION_MAJOR, \
--
2.11.0
Mark Thompson
2017-06-12 22:40:21 UTC
Adds functions to convert to/from strings and a function to iterate
over all supported device types. Also adds a new invalid type
AV_HWDEVICE_TYPE_NONE, which acts as a sentinel value.

(cherry picked from commit b7487f4f3c39b4b202e1ea7bb2de13902f2dee45)
---
doc/APIchanges | 4 ++++
libavutil/hwcontext.c | 42 ++++++++++++++++++++++++++++++++++++++++++
libavutil/hwcontext.h | 28 ++++++++++++++++++++++++++++
libavutil/version.h | 2 +-
4 files changed, 75 insertions(+), 1 deletion(-)

diff --git a/doc/APIchanges b/doc/APIchanges
index a6889f3930..5b2203f2b4 100644
--- a/doc/APIchanges
+++ b/doc/APIchanges
@@ -15,6 +15,10 @@ libavutil: 2015-08-28

API changes, most recent first:

+2017-06-xx - xxxxxxx - lavu 55.65.100 - hwcontext.h
+ Add AV_HWDEVICE_TYPE_NONE, av_hwdevice_find_type_by_name(),
+ av_hwdevice_get_type_name() and av_hwdevice_iterate_types().
+
2017-06-xx - xxxxxxx - lavu 55.64.100 - hwcontext.h
Add av_hwdevice_ctx_create_derived().

diff --git a/libavutil/hwcontext.c b/libavutil/hwcontext.c
index 86d290d322..7f9b1d33e3 100644
--- a/libavutil/hwcontext.c
+++ b/libavutil/hwcontext.c
@@ -50,6 +50,48 @@ static const HWContextType *hw_table[] = {
NULL,
};

+const char *hw_type_names[] = {
+ [AV_HWDEVICE_TYPE_CUDA] = "cuda",
+ [AV_HWDEVICE_TYPE_DXVA2] = "dxva2",
+ [AV_HWDEVICE_TYPE_QSV] = "qsv",
+ [AV_HWDEVICE_TYPE_VAAPI] = "vaapi",
+ [AV_HWDEVICE_TYPE_VDPAU] = "vdpau",
+ [AV_HWDEVICE_TYPE_VIDEOTOOLBOX] = "videotoolbox",
+};
+
+enum AVHWDeviceType av_hwdevice_find_type_by_name(const char *name)
+{
+ int type;
+ for (type = 0; type < FF_ARRAY_ELEMS(hw_type_names); type++) {
+ if (hw_type_names[type] && !strcmp(hw_type_names[type], name))
+ return type;
+ }
+ return AV_HWDEVICE_TYPE_NONE;
+}
+
+const char *av_hwdevice_get_type_name(enum AVHWDeviceType type)
+{
+ if (type >= 0 && type < FF_ARRAY_ELEMS(hw_type_names))
+ return hw_type_names[type];
+ else
+ return NULL;
+}
+
+enum AVHWDeviceType av_hwdevice_iterate_types(enum AVHWDeviceType prev)
+{
+ enum AVHWDeviceType next;
+ int i, set = 0;
+ for (i = 0; hw_table[i]; i++) {
+ if (prev != AV_HWDEVICE_TYPE_NONE && hw_table[i]->type <= prev)
+ continue;
+ if (!set || hw_table[i]->type < next) {
+ next = hw_table[i]->type;
+ set = 1;
+ }
+ }
+ return set ? next : AV_HWDEVICE_TYPE_NONE;
+}
+
static const AVClass hwdevice_ctx_class = {
.class_name = "AVHWDeviceContext",
.item_name = av_default_item_name,
diff --git a/libavutil/hwcontext.h b/libavutil/hwcontext.h
index 782dbf22e1..37e8831f6b 100644
--- a/libavutil/hwcontext.h
+++ b/libavutil/hwcontext.h
@@ -31,6 +31,7 @@ enum AVHWDeviceType {
AV_HWDEVICE_TYPE_DXVA2,
AV_HWDEVICE_TYPE_QSV,
AV_HWDEVICE_TYPE_VIDEOTOOLBOX,
+ AV_HWDEVICE_TYPE_NONE,
};

typedef struct AVHWDeviceInternal AVHWDeviceInternal;
@@ -224,6 +225,33 @@ typedef struct AVHWFramesContext {
} AVHWFramesContext;

/**
+ * Look up an AVHWDeviceType by name.
+ *
+ * @param name String name of the device type (case-insensitive).
+ * @return The type from enum AVHWDeviceType, or AV_HWDEVICE_TYPE_NONE if
+ * not found.
+ */
+enum AVHWDeviceType av_hwdevice_find_type_by_name(const char *name);
+
+/** Get the string name of an AVHWDeviceType.
+ *
+ * @param type Type from enum AVHWDeviceType.
+ * @return Pointer to a static string containing the name, or NULL if the type
+ * is not valid.
+ */
+const char *av_hwdevice_get_type_name(enum AVHWDeviceType type);
+
+/**
+ * Iterate over supported device types.
+ *
+ * @param type AV_HWDEVICE_TYPE_NONE initially, then the previous type
+ * returned by this function in subsequent iterations.
+ * @return The next usable device type from enum AVHWDeviceType, or
+ * AV_HWDEVICE_TYPE_NONE if there are no more.
+ */
+enum AVHWDeviceType av_hwdevice_iterate_types(enum AVHWDeviceType prev);
+
+/**
* Allocate an AVHWDeviceContext for a given hardware type.
*
* @param type the type of the hardware device to allocate.
diff --git a/libavutil/version.h b/libavutil/version.h
index dd8d2407da..322b683cf4 100644
--- a/libavutil/version.h
+++ b/libavutil/version.h
@@ -80,7 +80,7 @@


#define LIBAVUTIL_VERSION_MAJOR 55
-#define LIBAVUTIL_VERSION_MINOR 64
+#define LIBAVUTIL_VERSION_MINOR 65
#define LIBAVUTIL_VERSION_MICRO 100

#define LIBAVUTIL_VERSION_INT AV_VERSION_INT(LIBAVUTIL_VERSION_MAJOR, \
--
2.11.0
Michael Niedermayer
2017-06-13 20:23:31 UTC
Post by Mark Thompson
Adds functions to convert to/from strings and a function to iterate
over all supported device types. Also adds a new invalid type
AV_HWDEVICE_TYPE_NONE, which acts as a sentinel value.
(cherry picked from commit b7487f4f3c39b4b202e1ea7bb2de13902f2dee45)
---
doc/APIchanges | 4 ++++
libavutil/hwcontext.c | 42 ++++++++++++++++++++++++++++++++++++++++++
libavutil/hwcontext.h | 28 ++++++++++++++++++++++++++++
libavutil/version.h | 2 +-
4 files changed, 75 insertions(+), 1 deletion(-)
diff --git a/doc/APIchanges b/doc/APIchanges
index a6889f3930..5b2203f2b4 100644
--- a/doc/APIchanges
+++ b/doc/APIchanges
@@ -15,6 +15,10 @@ libavutil: 2015-08-28
+2017-06-xx - xxxxxxx - lavu 55.65.100 - hwcontext.h
+ Add AV_HWDEVICE_TYPE_NONE, av_hwdevice_find_type_by_name(),
+ av_hwdevice_get_type_name() and av_hwdevice_iterate_types().
+
2017-06-xx - xxxxxxx - lavu 55.64.100 - hwcontext.h
Add av_hwdevice_ctx_create_derived().
diff --git a/libavutil/hwcontext.c b/libavutil/hwcontext.c
index 86d290d322..7f9b1d33e3 100644
--- a/libavutil/hwcontext.c
+++ b/libavutil/hwcontext.c
@@ -50,6 +50,48 @@ static const HWContextType *hw_table[] = {
NULL,
};
+const char *hw_type_names[] = {
was this intended to be static const ?

it lacks a prefix like av_ for a non static

[...]
--
Michael GnuPG fingerprint: 9FF2128B147EF6730BADF133611EC787040B0FAB

I do not agree with what you have to say, but I'll defend to the death your
right to say it. -- Voltaire
Mark Thompson
2017-06-13 21:22:03 UTC
Post by Michael Niedermayer
Post by Mark Thompson
Adds functions to convert to/from strings and a function to iterate
over all supported device types. Also adds a new invalid type
AV_HWDEVICE_TYPE_NONE, which acts as a sentinel value.
(cherry picked from commit b7487f4f3c39b4b202e1ea7bb2de13902f2dee45)
---
doc/APIchanges | 4 ++++
libavutil/hwcontext.c | 42 ++++++++++++++++++++++++++++++++++++++++++
libavutil/hwcontext.h | 28 ++++++++++++++++++++++++++++
libavutil/version.h | 2 +-
4 files changed, 75 insertions(+), 1 deletion(-)
diff --git a/doc/APIchanges b/doc/APIchanges
index a6889f3930..5b2203f2b4 100644
--- a/doc/APIchanges
+++ b/doc/APIchanges
@@ -15,6 +15,10 @@ libavutil: 2015-08-28
+2017-06-xx - xxxxxxx - lavu 55.65.100 - hwcontext.h
+ Add AV_HWDEVICE_TYPE_NONE, av_hwdevice_find_type_by_name(),
+ av_hwdevice_get_type_name() and av_hwdevice_iterate_types().
+
2017-06-xx - xxxxxxx - lavu 55.64.100 - hwcontext.h
Add av_hwdevice_ctx_create_derived().
diff --git a/libavutil/hwcontext.c b/libavutil/hwcontext.c
index 86d290d322..7f9b1d33e3 100644
--- a/libavutil/hwcontext.c
+++ b/libavutil/hwcontext.c
@@ -50,6 +50,48 @@ static const HWContextType *hw_table[] = {
NULL,
};
+const char *hw_type_names[] = {
was this intended to be static const ?
Yes; fixed.

Thanks,

- Mark
Mark Thompson
2017-06-12 22:40:22 UTC
Not yet enabled for any hwaccels.

(cherry picked from commit d2e6dd32a445b5744a51d090c0822dbd7e434592)
(cherry picked from commit 9203aac22874c7259e155b7d00f1f33bb1355129)
---
Makefile | 2 +-
ffmpeg.c | 18 +++
ffmpeg.h | 17 +++
ffmpeg_hw.c | 387 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
ffmpeg_opt.c | 39 ++++--
5 files changed, 455 insertions(+), 8 deletions(-)
create mode 100644 ffmpeg_hw.c

diff --git a/Makefile b/Makefile
index a2df8b9d8d..913a890a78 100644
--- a/Makefile
+++ b/Makefile
@@ -31,7 +31,7 @@ ALLAVPROGS_G = $(AVBASENAMES:%=%$(PROGSSUF)_g$(EXESUF))
$(foreach prog,$(AVBASENAMES),$(eval OBJS-$(prog) += cmdutils.o))
$(foreach prog,$(AVBASENAMES),$(eval OBJS-$(prog)-$(CONFIG_OPENCL) += cmdutils_opencl.o))

-OBJS-ffmpeg += ffmpeg_opt.o ffmpeg_filter.o
+OBJS-ffmpeg += ffmpeg_opt.o ffmpeg_filter.o ffmpeg_hw.o
OBJS-ffmpeg-$(CONFIG_VIDEOTOOLBOX) += ffmpeg_videotoolbox.o
OBJS-ffmpeg-$(CONFIG_LIBMFX) += ffmpeg_qsv.o
OBJS-ffmpeg-$(CONFIG_VAAPI) += ffmpeg_vaapi.o
diff --git a/ffmpeg.c b/ffmpeg.c
index cd19594f8b..6170bd453c 100644
--- a/ffmpeg.c
+++ b/ffmpeg.c
@@ -2884,6 +2884,15 @@ static int init_input_stream(int ist_index, char *error, int error_len)

if (!av_dict_get(ist->decoder_opts, "threads", NULL, 0))
av_dict_set(&ist->decoder_opts, "threads", "auto", 0);
+
+ ret = hw_device_setup_for_decode(ist);
+ if (ret < 0) {
+ snprintf(error, error_len, "Device setup failed for "
+ "decoder on input stream #%d:%d : %s",
+ ist->file_index, ist->st->index, av_err2str(ret));
+ return ret;
+ }
+
if ((ret = avcodec_open2(ist->dec_ctx, codec, &ist->decoder_opts)) < 0) {
if (ret == AVERROR_EXPERIMENTAL)
abort_codec_experimental(codec, 0);
@@ -3441,6 +3450,14 @@ static int init_output_stream(OutputStream *ost, char *error, int error_len)
ost->enc_ctx->hw_frames_ctx = av_buffer_ref(av_buffersink_get_hw_frames_ctx(ost->filter->filter));
if (!ost->enc_ctx->hw_frames_ctx)
return AVERROR(ENOMEM);
+ } else {
+ ret = hw_device_setup_for_encode(ost);
+ if (ret < 0) {
+ snprintf(error, error_len, "Device setup failed for "
+ "encoder on output stream #%d:%d : %s",
+ ost->file_index, ost->index, av_err2str(ret));
+ return ret;
+ }
}

if ((ret = avcodec_open2(ost->enc_ctx, codec, &ost->encoder_opts)) < 0) {
@@ -4643,6 +4660,7 @@ static int transcode(void)
}

av_buffer_unref(&hw_device_ctx);
+ hw_device_free_all();

/* finished ! */
ret = 0;
diff --git a/ffmpeg.h b/ffmpeg.h
index a806445e0d..5c115cf9a3 100644
--- a/ffmpeg.h
+++ b/ffmpeg.h
@@ -42,6 +42,7 @@
#include "libavutil/dict.h"
#include "libavutil/eval.h"
#include "libavutil/fifo.h"
+#include "libavutil/hwcontext.h"
#include "libavutil/pixfmt.h"
#include "libavutil/rational.h"
#include "libavutil/threadmessage.h"
@@ -74,8 +75,15 @@ typedef struct HWAccel {
int (*init)(AVCodecContext *s);
enum HWAccelID id;
enum AVPixelFormat pix_fmt;
+ enum AVHWDeviceType device_type;
} HWAccel;

+typedef struct HWDevice {
+ char *name;
+ enum AVHWDeviceType type;
+ AVBufferRef *device_ref;
+} HWDevice;
+
/* select an input stream for an output stream */
typedef struct StreamMap {
int disabled; /* 1 is this mapping is disabled by a negative map */
@@ -661,4 +669,13 @@ int vaapi_decode_init(AVCodecContext *avctx);
int vaapi_device_init(const char *device);
int cuvid_init(AVCodecContext *s);

+HWDevice *hw_device_get_by_name(const char *name);
+int hw_device_init_from_string(const char *arg, HWDevice **dev);
+void hw_device_free_all(void);
+
+int hw_device_setup_for_decode(InputStream *ist);
+int hw_device_setup_for_encode(OutputStream *ost);
+
+int hwaccel_decode_init(AVCodecContext *avctx);
+
#endif /* FFMPEG_H */
diff --git a/ffmpeg_hw.c b/ffmpeg_hw.c
new file mode 100644
index 0000000000..3acf8b4532
--- /dev/null
+++ b/ffmpeg_hw.c
@@ -0,0 +1,387 @@
+/*
+ * This file is part of FFmpeg.
+ *
+ * FFmpeg is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation; either
+ * version 2.1 of the License, or (at your option) any later version.
+ *
+ * FFmpeg is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with FFmpeg; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+ */
+
+#include <string.h>
+
+#include "ffmpeg.h"
+
+static int nb_hw_devices;
+static HWDevice **hw_devices;
+
+static HWDevice *hw_device_get_by_type(enum AVHWDeviceType type)
+{
+ HWDevice *found = NULL;
+ int i;
+ for (i = 0; i < nb_hw_devices; i++) {
+ if (hw_devices[i]->type == type) {
+ if (found)
+ return NULL;
+ found = hw_devices[i];
+ }
+ }
+ return found;
+}
+
+HWDevice *hw_device_get_by_name(const char *name)
+{
+ int i;
+ for (i = 0; i < nb_hw_devices; i++) {
+ if (!strcmp(hw_devices[i]->name, name))
+ return hw_devices[i];
+ }
+ return NULL;
+}
+
+static HWDevice *hw_device_add(void)
+{
+ int err;
+ err = av_reallocp_array(&hw_devices, nb_hw_devices + 1,
+ sizeof(*hw_devices));
+ if (err) {
+ nb_hw_devices = 0;
+ return NULL;
+ }
+ hw_devices[nb_hw_devices] = av_mallocz(sizeof(HWDevice));
+ if (!hw_devices[nb_hw_devices])
+ return NULL;
+ return hw_devices[nb_hw_devices++];
+}
+
+int hw_device_init_from_string(const char *arg, HWDevice **dev_out)
+{
+ // "type=name:device,key=value,key2=value2"
+ // "type:device,key=value,key2=value2"
+ // -> av_hwdevice_ctx_create()
+ // "type=***@name"
+ // "***@name"
+ // -> av_hwdevice_ctx_create_derived()
+
+ AVDictionary *options = NULL;
+ char *type_name = NULL, *name = NULL, *device = NULL;
+ enum AVHWDeviceType type;
+ HWDevice *dev, *src;
+ AVBufferRef *device_ref;
+ int err;
+ const char *errmsg, *p, *q;
+ size_t k;
+
+ k = strcspn(arg, ":=@");
+ p = arg + k;
+
+ type_name = av_strndup(arg, k);
+ if (!type_name) {
+ err = AVERROR(ENOMEM);
+ goto fail;
+ }
+ type = av_hwdevice_find_type_by_name(type_name);
+ if (type == AV_HWDEVICE_TYPE_NONE) {
+ errmsg = "unknown device type";
+ goto invalid;
+ }
+
+ if (*p == '=') {
+ k = strcspn(p + 1, ":@");
+
+ name = av_strndup(p + 1, k);
+ if (!name) {
+ err = AVERROR(ENOMEM);
+ goto fail;
+ }
+ if (hw_device_get_by_name(name)) {
+ errmsg = "named device already exists";
+ goto invalid;
+ }
+
+ p += 1 + k;
+ } else {
+ // Give the device an automatic name of the form "type%d".
+ // We arbitrarily limit at 1000 anonymous devices of the same
+ // type - there is probably something else very wrong if you
+ // get to this limit.
+ size_t index_pos;
+ int index, index_limit = 1000;
+ index_pos = strlen(type_name);
+ name = av_malloc(index_pos + 4);
+ if (!name) {
+ err = AVERROR(ENOMEM);
+ goto fail;
+ }
+ for (index = 0; index < index_limit; index++) {
+ snprintf(name, index_pos + 4, "%s%d", type_name, index);
+ if (!hw_device_get_by_name(name))
+ break;
+ }
+ if (index >= index_limit) {
+ errmsg = "too many devices";
+ goto invalid;
+ }
+ }
+
+ if (!*p) {
+ // New device with no parameters.
+ err = av_hwdevice_ctx_create(&device_ref, type,
+ NULL, NULL, 0);
+ if (err < 0)
+ goto fail;
+
+ } else if (*p == ':') {
+ // New device with some parameters.
+ ++p;
+ q = strchr(p, ',');
+ if (q) {
+ device = av_strndup(p, q - p);
+ if (!device) {
+ err = AVERROR(ENOMEM);
+ goto fail;
+ }
+ err = av_dict_parse_string(&options, q + 1, "=", ",", 0);
+ if (err < 0) {
+ errmsg = "failed to parse options";
+ goto invalid;
+ }
+ }
+
+ err = av_hwdevice_ctx_create(&device_ref, type,
+ device ? device : p, options, 0);
+ if (err < 0)
+ goto fail;
+
+ } else if (*p == '@') {
+ // Derive from existing device.
+
+ src = hw_device_get_by_name(p + 1);
+ if (!src) {
+ errmsg = "invalid source device name";
+ goto invalid;
+ }
+
+ err = av_hwdevice_ctx_create_derived(&device_ref, type,
+ src->device_ref, 0);
+ if (err < 0)
+ goto fail;
+ } else {
+ errmsg = "parse error";
+ goto invalid;
+ }
+
+ dev = hw_device_add();
+ if (!dev) {
+ err = AVERROR(ENOMEM);
+ goto fail;
+ }
+
+ dev->name = name;
+ dev->type = type;
+ dev->device_ref = device_ref;
+
+ if (dev_out)
+ *dev_out = dev;
+
+ name = NULL;
+ err = 0;
+done:
+ av_freep(&type_name);
+ av_freep(&name);
+ av_freep(&device);
+ av_dict_free(&options);
+ return err;
+invalid:
+ av_log(NULL, AV_LOG_ERROR,
+ "Invalid device specification \"%s\": %s\n", arg, errmsg);
+ err = AVERROR(EINVAL);
+ goto done;
+fail:
+ av_log(NULL, AV_LOG_ERROR,
+ "Device creation failed: %d.\n", err);
+ goto done;
+}
+
+void hw_device_free_all(void)
+{
+ int i;
+ for (i = 0; i < nb_hw_devices; i++) {
+ av_freep(&hw_devices[i]->name);
+ av_buffer_unref(&hw_devices[i]->device_ref);
+ av_freep(&hw_devices[i]);
+ }
+ av_freep(&hw_devices);
+ nb_hw_devices = 0;
+}
+
+static enum AVHWDeviceType hw_device_match_type_by_hwaccel(enum HWAccelID hwaccel_id)
+{
+ int i;
+ if (hwaccel_id == HWACCEL_NONE)
+ return AV_HWDEVICE_TYPE_NONE;
+ for (i = 0; hwaccels[i].name; i++) {
+ if (hwaccels[i].id == hwaccel_id)
+ return hwaccels[i].device_type;
+ }
+ return AV_HWDEVICE_TYPE_NONE;
+}
+
+static enum AVHWDeviceType hw_device_match_type_in_name(const char *codec_name)
+{
+ const char *type_name;
+ enum AVHWDeviceType type;
+ for (type = av_hwdevice_iterate_types(AV_HWDEVICE_TYPE_NONE);
+ type != AV_HWDEVICE_TYPE_NONE;
+ type = av_hwdevice_iterate_types(type)) {
+ type_name = av_hwdevice_get_type_name(type);
+ if (strstr(codec_name, type_name))
+ return type;
+ }
+ return AV_HWDEVICE_TYPE_NONE;
+}
+
+int hw_device_setup_for_decode(InputStream *ist)
+{
+ enum AVHWDeviceType type;
+ HWDevice *dev;
+ const char *type_name;
+ int err;
+
+ if (ist->hwaccel_device) {
+ dev = hw_device_get_by_name(ist->hwaccel_device);
+ if (!dev) {
+ char *tmp;
+ size_t len;
+ type = hw_device_match_type_by_hwaccel(ist->hwaccel_id);
+ if (type == AV_HWDEVICE_TYPE_NONE) {
+ // No match - this isn't necessarily invalid, though,
+ // because an explicit device might not be needed or
+ // the hwaccel setup could be handled elsewhere.
+ return 0;
+ }
+ type_name = av_hwdevice_get_type_name(type);
+ len = strlen(type_name) + 1 +
+ strlen(ist->hwaccel_device) + 1;
+ tmp = av_malloc(len);
+ if (!tmp)
+ return AVERROR(ENOMEM);
+ snprintf(tmp, len, "%s:%s", type_name, ist->hwaccel_device);
+ err = hw_device_init_from_string(tmp, &dev);
+ av_free(tmp);
+ if (err < 0)
+ return err;
+ }
+ } else {
+ if (ist->hwaccel_id != HWACCEL_NONE)
+ type = hw_device_match_type_by_hwaccel(ist->hwaccel_id);
+ else
+ type = hw_device_match_type_in_name(ist->dec->name);
+ if (type != AV_HWDEVICE_TYPE_NONE) {
+ dev = hw_device_get_by_type(type);
+ if (!dev) {
+ hw_device_init_from_string(av_hwdevice_get_type_name(type),
+ &dev);
+ }
+ } else {
+ // No device required.
+ return 0;
+ }
+ }
+
+ if (!dev) {
+ av_log(ist->dec_ctx, AV_LOG_WARNING, "No device available "
+ "for decoder (device type %s for codec %s).\n",
+ av_hwdevice_get_type_name(type), ist->dec->name);
+ return 0;
+ }
+
+ ist->dec_ctx->hw_device_ctx = av_buffer_ref(dev->device_ref);
+ if (!ist->dec_ctx->hw_device_ctx)
+ return AVERROR(ENOMEM);
+
+ return 0;
+}
+
+int hw_device_setup_for_encode(OutputStream *ost)
+{
+ enum AVHWDeviceType type;
+ HWDevice *dev;
+
+ type = hw_device_match_type_in_name(ost->enc->name);
+ if (type != AV_HWDEVICE_TYPE_NONE) {
+ dev = hw_device_get_by_type(type);
+ if (!dev) {
+ av_log(ost->enc_ctx, AV_LOG_WARNING, "No device available "
+ "for encoder (device type %s for codec %s).\n",
+ av_hwdevice_get_type_name(type), ost->enc->name);
+ return 0;
+ }
+ ost->enc_ctx->hw_device_ctx = av_buffer_ref(dev->device_ref);
+ if (!ost->enc_ctx->hw_device_ctx)
+ return AVERROR(ENOMEM);
+ return 0;
+ } else {
+ // No device required.
+ return 0;
+ }
+}
+
+static int hwaccel_retrieve_data(AVCodecContext *avctx, AVFrame *input)
+{
+ InputStream *ist = avctx->opaque;
+ AVFrame *output = NULL;
+ enum AVPixelFormat output_format = ist->hwaccel_output_format;
+ int err;
+
+ if (input->format == output_format) {
+ // Nothing to do.
+ return 0;
+ }
+
+ output = av_frame_alloc();
+ if (!output)
+ return AVERROR(ENOMEM);
+
+ output->format = output_format;
+
+ err = av_hwframe_transfer_data(output, input, 0);
+ if (err < 0) {
+ av_log(avctx, AV_LOG_ERROR, "Failed to transfer data to "
+ "output frame: %d.\n", err);
+ goto fail;
+ }
+
+ err = av_frame_copy_props(output, input);
+ if (err < 0) {
+ av_frame_unref(output);
+ goto fail;
+ }
+
+ av_frame_unref(input);
+ av_frame_move_ref(input, output);
+ av_frame_free(&output);
+
+ return 0;
+
+fail:
+ av_frame_free(&output);
+ return err;
+}
+
+int hwaccel_decode_init(AVCodecContext *avctx)
+{
+ InputStream *ist = avctx->opaque;
+
+ ist->hwaccel_retrieve_data = &hwaccel_retrieve_data;
+
+ return 0;
+}
diff --git a/ffmpeg_opt.c b/ffmpeg_opt.c
index c997ea8faf..6755e09e47 100644
--- a/ffmpeg_opt.c
+++ b/ffmpeg_opt.c
@@ -67,25 +67,32 @@

const HWAccel hwaccels[] = {
#if HAVE_VDPAU_X11
- { "vdpau", vdpau_init, HWACCEL_VDPAU, AV_PIX_FMT_VDPAU },
+ { "vdpau", vdpau_init, HWACCEL_VDPAU, AV_PIX_FMT_VDPAU,
+ AV_HWDEVICE_TYPE_NONE },
#endif
#if HAVE_DXVA2_LIB
- { "dxva2", dxva2_init, HWACCEL_DXVA2, AV_PIX_FMT_DXVA2_VLD },
+ { "dxva2", dxva2_init, HWACCEL_DXVA2, AV_PIX_FMT_DXVA2_VLD,
+ AV_HWDEVICE_TYPE_NONE },
#endif
#if CONFIG_VDA
- { "vda", videotoolbox_init, HWACCEL_VDA, AV_PIX_FMT_VDA },
+ { "vda", videotoolbox_init, HWACCEL_VDA, AV_PIX_FMT_VDA,
+ AV_HWDEVICE_TYPE_NONE },
#endif
#if CONFIG_VIDEOTOOLBOX
- { "videotoolbox", videotoolbox_init, HWACCEL_VIDEOTOOLBOX, AV_PIX_FMT_VIDEOTOOLBOX },
+ { "videotoolbox", videotoolbox_init, HWACCEL_VIDEOTOOLBOX, AV_PIX_FMT_VIDEOTOOLBOX,
+ AV_HWDEVICE_TYPE_NONE },
#endif
#if CONFIG_LIBMFX
- { "qsv", qsv_init, HWACCEL_QSV, AV_PIX_FMT_QSV },
+ { "qsv", qsv_init, HWACCEL_QSV, AV_PIX_FMT_QSV,
+ AV_HWDEVICE_TYPE_NONE },
#endif
#if CONFIG_VAAPI
- { "vaapi", vaapi_decode_init, HWACCEL_VAAPI, AV_PIX_FMT_VAAPI },
+ { "vaapi", vaapi_decode_init, HWACCEL_VAAPI, AV_PIX_FMT_VAAPI,
+ AV_HWDEVICE_TYPE_NONE },
#endif
#if CONFIG_CUVID
- { "cuvid", cuvid_init, HWACCEL_CUVID, AV_PIX_FMT_CUDA },
+ { "cuvid", cuvid_init, HWACCEL_CUVID, AV_PIX_FMT_CUDA,
+ AV_HWDEVICE_TYPE_NONE },
#endif
{ 0 },
};
@@ -463,6 +470,21 @@ static int opt_vaapi_device(void *optctx, const char *opt, const char *arg)
}
#endif

+static int opt_init_hw_device(void *optctx, const char *opt, const char *arg)
+{
+ if (!strcmp(arg, "list")) {
+ enum AVHWDeviceType type = AV_HWDEVICE_TYPE_NONE;
+ printf("Supported hardware device types:\n");
+ while ((type = av_hwdevice_iterate_types(type)) !=
+ AV_HWDEVICE_TYPE_NONE)
+ printf("%s\n", av_hwdevice_get_type_name(type));
+ printf("\n");
+ exit_program(0);
+ } else {
+ return hw_device_init_from_string(arg, NULL);
+ }
+}
+
/**
* Parse a metadata specifier passed as 'arg' parameter.
* @param arg metadata string to parse
@@ -3674,5 +3696,8 @@ const OptionDef options[] = {
"set QSV hardware device (DirectX adapter index, DRM path or X11 display name)", "device"},
#endif

+ { "init_hw_device", HAS_ARG | OPT_EXPERT, { .func_arg = opt_init_hw_device },
+ "initialise hardware device", "args" },
+
{ NULL, },
};
--
2.11.0
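For reference, the specification grammar accepted by hw_device_init_from_string() above can be exercised in isolation. Below is a minimal, self-contained sketch of the same strcspn()-based tokenization; DeviceSpec and parse_device_spec are illustrative names invented for this example (not part of the patch), and key=value option parsing after ',' is omitted:

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Illustrative-only container for the parsed fields. */
typedef struct DeviceSpec {
    char type[32];    /* device type, e.g. "vaapi"             */
    char name[32];    /* explicit name after '=', if any       */
    char device[64];  /* device path/ordinal after ':', if any */
    char source[32];  /* source device after '@', if any       */
} DeviceSpec;

/* Tokenize "type=name:device" / "type:device" / "type@source" with
 * strcspn(), as the patch does; options after ',' are ignored here. */
static int parse_device_spec(const char *arg, DeviceSpec *spec)
{
    size_t k = strcspn(arg, ":=@");
    const char *p = arg + k;

    memset(spec, 0, sizeof(*spec));
    snprintf(spec->type, sizeof(spec->type), "%.*s", (int)k, arg);

    if (*p == '=') {
        k = strcspn(p + 1, ":@");
        snprintf(spec->name, sizeof(spec->name), "%.*s", (int)k, p + 1);
        p += 1 + k;
    }
    if (*p == ':') {
        k = strcspn(p + 1, ",");
        snprintf(spec->device, sizeof(spec->device), "%.*s", (int)k, p + 1);
    } else if (*p == '@') {
        snprintf(spec->source, sizeof(spec->source), "%s", p + 1);
    } else if (*p) {
        return -1; /* mirrors the "parse error" branch */
    }
    return 0;
}
```

An empty remainder after the type/name part corresponds to the "new device with no parameters" case in the real function.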
Mark Thompson
2017-06-12 22:40:28 UTC
(cherry picked from commit 8848ba0bd6b035af77d4f13aa0d8aaaad9806fe8)
---
libavcodec/qsvdec.c | 16 ++++++++++++++--
1 file changed, 14 insertions(+), 2 deletions(-)

diff --git a/libavcodec/qsvdec.c b/libavcodec/qsvdec.c
index 74866b57ff..c00817f1d9 100644
--- a/libavcodec/qsvdec.c
+++ b/libavcodec/qsvdec.c
@@ -42,7 +42,7 @@
#include "qsvdec.h"

static int qsv_init_session(AVCodecContext *avctx, QSVContext *q, mfxSession session,
- AVBufferRef *hw_frames_ref)
+ AVBufferRef *hw_frames_ref, AVBufferRef *hw_device_ref)
{
int ret;

@@ -68,6 +68,18 @@ static int qsv_init_session(AVCodecContext *avctx, QSVContext *q, mfxSession ses
}

q->session = q->internal_session;
+ } else if (hw_device_ref) {
+ if (q->internal_session) {
+ MFXClose(q->internal_session);
+ q->internal_session = NULL;
+ }
+
+ ret = ff_qsv_init_session_device(avctx, &q->internal_session,
+ hw_device_ref, q->load_plugins);
+ if (ret < 0)
+ return ret;
+
+ q->session = q->internal_session;
} else {
if (!q->internal_session) {
ret = ff_qsv_init_internal_session(avctx, &q->internal_session,
@@ -133,7 +145,7 @@ static int qsv_decode_init(AVCodecContext *avctx, QSVContext *q)
iopattern = MFX_IOPATTERN_OUT_SYSTEM_MEMORY;
q->iopattern = iopattern;

- ret = qsv_init_session(avctx, q, session, avctx->hw_frames_ctx);
+ ret = qsv_init_session(avctx, q, session, avctx->hw_frames_ctx, avctx->hw_device_ctx);
if (ret < 0) {
av_log(avctx, AV_LOG_ERROR, "Error initializing an MFX session\n");
return ret;
--
2.11.0
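The branch added to qsv_init_session() sits between the existing frames-context path and the internal-session fallback, so the selection priority becomes: frames context first, then the device context, then a self-created session. A standalone sketch of that ordering (SessionSource and choose_session_source are stand-ins invented for this example, not libmfx or lavc API):

```c
#include <assert.h>
#include <stddef.h>

typedef enum SessionSource {
    SESSION_FROM_FRAMES,  /* hw_frames_ref present          */
    SESSION_FROM_DEVICE,  /* only hw_device_ref present     */
    SESSION_INTERNAL      /* neither: internal session used */
} SessionSource;

/* Mirrors the simplified if/else chain in qsv_init_session():
 * the frames context wins, then the device context, then an
 * internally created session as the last resort. */
static SessionSource choose_session_source(const void *hw_frames_ref,
                                           const void *hw_device_ref)
{
    if (hw_frames_ref)
        return SESSION_FROM_FRAMES;
    if (hw_device_ref)
        return SESSION_FROM_DEVICE;
    return SESSION_INTERNAL;
}
```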
Mark Thompson
2017-06-12 22:40:23 UTC
(cherry picked from commit 62a1ef9f26c654a3e988aa465c4ac1d776c4c356)
---
Makefile | 1 -
ffmpeg.h | 2 -
ffmpeg_opt.c | 20 ++++-
ffmpeg_vaapi.c | 233 ---------------------------------------------------------
4 files changed, 16 insertions(+), 240 deletions(-)
delete mode 100644 ffmpeg_vaapi.c

diff --git a/Makefile b/Makefile
index 913a890a78..26f9d93d85 100644
--- a/Makefile
+++ b/Makefile
@@ -34,7 +34,6 @@ $(foreach prog,$(AVBASENAMES),$(eval OBJS-$(prog)-$(CONFIG_OPENCL) += cmdutils_o
OBJS-ffmpeg += ffmpeg_opt.o ffmpeg_filter.o ffmpeg_hw.o
OBJS-ffmpeg-$(CONFIG_VIDEOTOOLBOX) += ffmpeg_videotoolbox.o
OBJS-ffmpeg-$(CONFIG_LIBMFX) += ffmpeg_qsv.o
-OBJS-ffmpeg-$(CONFIG_VAAPI) += ffmpeg_vaapi.o
ifndef CONFIG_VIDEOTOOLBOX
OBJS-ffmpeg-$(CONFIG_VDA) += ffmpeg_videotoolbox.o
endif
diff --git a/ffmpeg.h b/ffmpeg.h
index 5c115cf9a3..231d362f5f 100644
--- a/ffmpeg.h
+++ b/ffmpeg.h
@@ -665,8 +665,6 @@ int dxva2_init(AVCodecContext *s);
int vda_init(AVCodecContext *s);
int videotoolbox_init(AVCodecContext *s);
int qsv_init(AVCodecContext *s);
-int vaapi_decode_init(AVCodecContext *avctx);
-int vaapi_device_init(const char *device);
int cuvid_init(AVCodecContext *s);

HWDevice *hw_device_get_by_name(const char *name);
diff --git a/ffmpeg_opt.c b/ffmpeg_opt.c
index 6755e09e47..51671e0dd4 100644
--- a/ffmpeg_opt.c
+++ b/ffmpeg_opt.c
@@ -87,8 +87,8 @@ const HWAccel hwaccels[] = {
AV_HWDEVICE_TYPE_NONE },
#endif
#if CONFIG_VAAPI
- { "vaapi", vaapi_decode_init, HWACCEL_VAAPI, AV_PIX_FMT_VAAPI,
- AV_HWDEVICE_TYPE_NONE },
+ { "vaapi", hwaccel_decode_init, HWACCEL_VAAPI, AV_PIX_FMT_VAAPI,
+ AV_HWDEVICE_TYPE_VAAPI },
#endif
#if CONFIG_CUVID
{ "cuvid", cuvid_init, HWACCEL_CUVID, AV_PIX_FMT_CUDA,
@@ -462,10 +462,22 @@ static int opt_sdp_file(void *optctx, const char *opt, const char *arg)
#if CONFIG_VAAPI
static int opt_vaapi_device(void *optctx, const char *opt, const char *arg)
{
+ HWDevice *dev;
+ const char *prefix = "vaapi:";
+ char *tmp;
int err;
- err = vaapi_device_init(arg);
+ tmp = av_malloc(strlen(prefix) + strlen(arg) + 1);
+ if (!tmp)
+ return AVERROR(ENOMEM);
+ strcpy(tmp, prefix);
+ strcat(tmp, arg);
+ err = hw_device_init_from_string(tmp, &dev);
+ av_free(tmp);
if (err < 0)
- exit_program(1);
+ return err;
+ hw_device_ctx = av_buffer_ref(dev->device_ref);
+ if (!hw_device_ctx)
+ return AVERROR(ENOMEM);
return 0;
}
#endif
diff --git a/ffmpeg_vaapi.c b/ffmpeg_vaapi.c
deleted file mode 100644
index d011cacef7..0000000000
--- a/ffmpeg_vaapi.c
+++ /dev/null
@@ -1,233 +0,0 @@
-/*
- * This file is part of FFmpeg.
- *
- * FFmpeg is free software; you can redistribute it and/or
- * modify it under the terms of the GNU Lesser General Public
- * License as published by the Free Software Foundation; either
- * version 2.1 of the License, or (at your option) any later version.
- *
- * FFmpeg is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
- * Lesser General Public License for more details.
- *
- * You should have received a copy of the GNU Lesser General Public
- * License along with FFmpeg; if not, write to the Free Software
- * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
- */
-
-#include "config.h"
-
-#include "libavutil/avassert.h"
-#include "libavutil/frame.h"
-#include "libavutil/hwcontext.h"
-#include "libavutil/log.h"
-
-#include "ffmpeg.h"
-
-
-static AVClass vaapi_class = {
- .class_name = "vaapi",
- .item_name = av_default_item_name,
- .version = LIBAVUTIL_VERSION_INT,
-};
-
-#define DEFAULT_SURFACES 20
-
-typedef struct VAAPIDecoderContext {
- const AVClass *class;
-
- AVBufferRef *device_ref;
- AVHWDeviceContext *device;
- AVBufferRef *frames_ref;
- AVHWFramesContext *frames;
-
- // The output need not have the same format, width and height as the
- // decoded frames - the copy for non-direct-mapped access is actually
- // a whole vpp instance which can do arbitrary scaling and format
- // conversion.
- enum AVPixelFormat output_format;
-} VAAPIDecoderContext;
-
-
-static int vaapi_get_buffer(AVCodecContext *avctx, AVFrame *frame, int flags)
-{
- InputStream *ist = avctx->opaque;
- VAAPIDecoderContext *ctx = ist->hwaccel_ctx;
- int err;
-
- err = av_hwframe_get_buffer(ctx->frames_ref, frame, 0);
- if (err < 0) {
- av_log(ctx, AV_LOG_ERROR, "Failed to allocate decoder surface.\n");
- } else {
- av_log(ctx, AV_LOG_DEBUG, "Decoder given surface %#x.\n",
- (unsigned int)(uintptr_t)frame->data[3]);
- }
- return err;
-}
-
-static int vaapi_retrieve_data(AVCodecContext *avctx, AVFrame *input)
-{
- InputStream *ist = avctx->opaque;
- VAAPIDecoderContext *ctx = ist->hwaccel_ctx;
- AVFrame *output = 0;
- int err;
-
- av_assert0(input->format == AV_PIX_FMT_VAAPI);
-
- if (ctx->output_format == AV_PIX_FMT_VAAPI) {
- // Nothing to do.
- return 0;
- }
-
- av_log(ctx, AV_LOG_DEBUG, "Retrieve data from surface %#x.\n",
- (unsigned int)(uintptr_t)input->data[3]);
-
- output = av_frame_alloc();
- if (!output)
- return AVERROR(ENOMEM);
-
- output->format = ctx->output_format;
-
- err = av_hwframe_transfer_data(output, input, 0);
- if (err < 0) {
- av_log(ctx, AV_LOG_ERROR, "Failed to transfer data to "
- "output frame: %d.\n", err);
- goto fail;
- }
-
- err = av_frame_copy_props(output, input);
- if (err < 0) {
- av_frame_unref(output);
- goto fail;
- }
-
- av_frame_unref(input);
- av_frame_move_ref(input, output);
- av_frame_free(&output);
-
- return 0;
-
-fail:
- if (output)
- av_frame_free(&output);
- return err;
-}
-
-static void vaapi_decode_uninit(AVCodecContext *avctx)
-{
- InputStream *ist = avctx->opaque;
- VAAPIDecoderContext *ctx = ist->hwaccel_ctx;
-
- if (ctx) {
- av_buffer_unref(&ctx->frames_ref);
- av_buffer_unref(&ctx->device_ref);
- av_free(ctx);
- }
-
- av_buffer_unref(&ist->hw_frames_ctx);
-
- ist->hwaccel_ctx = NULL;
- ist->hwaccel_uninit = NULL;
- ist->hwaccel_get_buffer = NULL;
- ist->hwaccel_retrieve_data = NULL;
-}
-
-int vaapi_decode_init(AVCodecContext *avctx)
-{
- InputStream *ist = avctx->opaque;
- VAAPIDecoderContext *ctx;
- int err;
- int loglevel = (ist->hwaccel_id != HWACCEL_VAAPI ? AV_LOG_VERBOSE
- : AV_LOG_ERROR);
-
- if (ist->hwaccel_ctx)
- vaapi_decode_uninit(avctx);
-
- // We have -hwaccel without -vaapi_device, so just initialise here with
- // the device passed as -hwaccel_device (if -vaapi_device was passed, it
- // will always have been called before now).
- if (!hw_device_ctx) {
- err = vaapi_device_init(ist->hwaccel_device);
- if (err < 0)
- return err;
- }
-
- ctx = av_mallocz(sizeof(*ctx));
- if (!ctx)
- return AVERROR(ENOMEM);
- ctx->class = &vaapi_class;
- ist->hwaccel_ctx = ctx;
-
- ctx->device_ref = av_buffer_ref(hw_device_ctx);
- ctx->device = (AVHWDeviceContext*)ctx->device_ref->data;
-
- ctx->output_format = ist->hwaccel_output_format;
- avctx->pix_fmt = ctx->output_format;
-
- ctx->frames_ref = av_hwframe_ctx_alloc(ctx->device_ref);
- if (!ctx->frames_ref) {
- av_log(ctx, loglevel, "Failed to create VAAPI frame context.\n");
- err = AVERROR(ENOMEM);
- goto fail;
- }
-
- ctx->frames = (AVHWFramesContext*)ctx->frames_ref->data;
-
- ctx->frames->format = AV_PIX_FMT_VAAPI;
- ctx->frames->width = avctx->coded_width;
- ctx->frames->height = avctx->coded_height;
-
- // It would be nice if we could query the available formats here,
- // but unfortunately we don't have a VAConfigID to do it with.
- // For now, just assume an NV12 format (or P010 if 10-bit).
- ctx->frames->sw_format = (avctx->sw_pix_fmt == AV_PIX_FMT_YUV420P10 ?
- AV_PIX_FMT_P010 : AV_PIX_FMT_NV12);
-
- // For frame-threaded decoding, at least one additional surface
- // is needed for each thread.
- ctx->frames->initial_pool_size = DEFAULT_SURFACES;
- if (avctx->active_thread_type & FF_THREAD_FRAME)
- ctx->frames->initial_pool_size += avctx->thread_count;
-
- err = av_hwframe_ctx_init(ctx->frames_ref);
- if (err < 0) {
- av_log(ctx, loglevel, "Failed to initialise VAAPI frame "
- "context: %d\n", err);
- goto fail;
- }
-
- ist->hw_frames_ctx = av_buffer_ref(ctx->frames_ref);
- if (!ist->hw_frames_ctx) {
- err = AVERROR(ENOMEM);
- goto fail;
- }
-
- ist->hwaccel_uninit = &vaapi_decode_uninit;
- ist->hwaccel_get_buffer = &vaapi_get_buffer;
- ist->hwaccel_retrieve_data = &vaapi_retrieve_data;
-
- return 0;
-
-fail:
- vaapi_decode_uninit(avctx);
- return err;
-}
-
-static AVClass *vaapi_log = &vaapi_class;
-
-av_cold int vaapi_device_init(const char *device)
-{
- int err;
-
- av_buffer_unref(&hw_device_ctx);
-
- err = av_hwdevice_ctx_create(&hw_device_ctx, AV_HWDEVICE_TYPE_VAAPI,
- device, NULL, 0);
- if (err < 0) {
- av_log(&vaapi_log, AV_LOG_ERROR, "Failed to create a VAAPI device\n");
- return err;
- }
-
- return 0;
-}
--
2.11.0
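A side effect of routing -vaapi_device through hw_device_init_from_string() is that such devices join the common registry, including the automatic "type%d" naming used for anonymous devices earlier in the series. That naming loop can be sketched standalone; name_taken() and the fixed-size registry below are illustrative stand-ins for hw_device_get_by_name() and the global hw_devices array:

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Illustrative registry of already-used device names. */
static const char *taken[8];
static int nb_taken;

static int name_taken(const char *name)
{
    int i;
    for (i = 0; i < nb_taken; i++)
        if (!strcmp(taken[i], name))
            return 1;
    return 0;
}

/* Mirrors the patch: try "type0", "type1", ... up to a fixed limit,
 * returning the first name not already registered. */
static int auto_name(const char *type, char *buf, size_t size)
{
    int index, index_limit = 1000;
    for (index = 0; index < index_limit; index++) {
        snprintf(buf, size, "%s%d", type, index);
        if (!name_taken(buf))
            return 0;
    }
    return -1; /* "too many devices" */
}
```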
Michael Niedermayer
2017-06-13 20:19:22 UTC
Post by Mark Thompson
[...]
@@ -462,10 +462,22 @@ static int opt_sdp_file(void *optctx, const char *opt, const char *arg)
#if CONFIG_VAAPI
static int opt_vaapi_device(void *optctx, const char *opt, const char *arg)
{
+ HWDevice *dev;
+ const char *prefix = "vaapi:";
+ char *tmp;
int err;
- err = vaapi_device_init(arg);
+ tmp = av_malloc(strlen(prefix) + strlen(arg) + 1);
+ if (!tmp)
+ return AVERROR(ENOMEM);
+ strcpy(tmp, prefix);
+ strcat(tmp, arg);
You can simplify this with av_asprintf()

[...]
--
Michael GnuPG fingerprint: 9FF2128B147EF6730BADF133611EC787040B0FAB

If you fake or manipulate statistics in a paper in physics you will never
get a job again.
If you fake or manipulate statistics in a paper in medicine you will get
a job for life at the pharma industry.
Mark Thompson
2017-06-13 22:01:44 UTC
Post by Michael Niedermayer
Post by Mark Thompson
[...]
@@ -462,10 +462,22 @@ static int opt_sdp_file(void *optctx, const char *opt, const char *arg)
#if CONFIG_VAAPI
static int opt_vaapi_device(void *optctx, const char *opt, const char *arg)
{
+ HWDevice *dev;
+ const char *prefix = "vaapi:";
+ char *tmp;
int err;
- err = vaapi_device_init(arg);
+ tmp = av_malloc(strlen(prefix) + strlen(arg) + 1);
+ if (!tmp)
+ return AVERROR(ENOMEM);
+ strcpy(tmp, prefix);
+ strcat(tmp, arg);
You can simplify this with av_asprintf()
Yep, changed. (Also a similar instance in 4/24.)

Thanks,

- Mark
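For readers outside libavutil, av_asprintf() is essentially the measure-allocate-format idiom. A portable sketch of the same behaviour (asprintf_like is an invented name for this example, not the FFmpeg function):

```c
#include <assert.h>
#include <stdarg.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Portable stand-in for av_asprintf(): measure the formatted length
 * with vsnprintf(NULL, 0, ...), allocate, then format for real. */
static char *asprintf_like(const char *fmt, ...)
{
    va_list ap;
    int len;
    char *buf;

    va_start(ap, fmt);
    len = vsnprintf(NULL, 0, fmt, ap);
    va_end(ap);
    if (len < 0)
        return NULL;

    buf = malloc(len + 1);
    if (!buf)
        return NULL;

    va_start(ap, fmt);
    vsnprintf(buf, len + 1, fmt, ap);
    va_end(ap);
    return buf;
}
```

With av_asprintf() the malloc/strcpy/strcat sequence in opt_vaapi_device() collapses to roughly `tmp = av_asprintf("vaapi:%s", arg);` plus the existing error handling.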
Mark Thompson
2017-06-12 22:40:29 UTC
(cherry picked from commit 3d197514e613ccd9eab43180c0a7c8b09a307606)
---
libavcodec/qsvenc.c | 7 +++++++
1 file changed, 7 insertions(+)

diff --git a/libavcodec/qsvenc.c b/libavcodec/qsvenc.c
index 64227cea6e..5eb506fb76 100644
--- a/libavcodec/qsvenc.c
+++ b/libavcodec/qsvenc.c
@@ -700,6 +700,13 @@ static int qsvenc_init_session(AVCodecContext *avctx, QSVEncContext *q)
}

q->session = q->internal_session;
+ } else if (avctx->hw_device_ctx) {
+ ret = ff_qsv_init_session_device(avctx, &q->internal_session,
+ avctx->hw_device_ctx, q->load_plugins);
+ if (ret < 0)
+ return ret;
+
+ q->session = q->internal_session;
} else {
ret = ff_qsv_init_internal_session(avctx, &q->internal_session,
q->load_plugins);
--
2.11.0
Mark Thompson
2017-06-12 22:40:24 UTC
(cherry picked from commit aa6b2e081c504cb99f5e2e0ceb45295ef24bdac2)
---
Makefile | 1 -
ffmpeg.h | 1 -
ffmpeg_opt.c | 4 +-
ffmpeg_vdpau.c | 159 ---------------------------------------------------------
4 files changed, 2 insertions(+), 163 deletions(-)
delete mode 100644 ffmpeg_vdpau.c

diff --git a/Makefile b/Makefile
index 26f9d93d85..ea90ec8b44 100644
--- a/Makefile
+++ b/Makefile
@@ -39,7 +39,6 @@ OBJS-ffmpeg-$(CONFIG_VDA) += ffmpeg_videotoolbox.o
endif
OBJS-ffmpeg-$(CONFIG_CUVID) += ffmpeg_cuvid.o
OBJS-ffmpeg-$(HAVE_DXVA2_LIB) += ffmpeg_dxva2.o
-OBJS-ffmpeg-$(HAVE_VDPAU_X11) += ffmpeg_vdpau.o
OBJS-ffserver += ffserver_config.o

TESTTOOLS = audiogen videogen rotozoom tiny_psnr tiny_ssim base64 audiomatch
diff --git a/ffmpeg.h b/ffmpeg.h
index 231d362f5f..fbb9172d74 100644
--- a/ffmpeg.h
+++ b/ffmpeg.h
@@ -660,7 +660,6 @@ int ifilter_parameters_from_frame(InputFilter *ifilter, const AVFrame *frame);

int ffmpeg_parse_options(int argc, char **argv);

-int vdpau_init(AVCodecContext *s);
int dxva2_init(AVCodecContext *s);
int vda_init(AVCodecContext *s);
int videotoolbox_init(AVCodecContext *s);
diff --git a/ffmpeg_opt.c b/ffmpeg_opt.c
index 51671e0dd4..1facc82f44 100644
--- a/ffmpeg_opt.c
+++ b/ffmpeg_opt.c
@@ -67,8 +67,8 @@

const HWAccel hwaccels[] = {
#if HAVE_VDPAU_X11
- { "vdpau", vdpau_init, HWACCEL_VDPAU, AV_PIX_FMT_VDPAU,
- AV_HWDEVICE_TYPE_NONE },
+ { "vdpau", hwaccel_decode_init, HWACCEL_VDPAU, AV_PIX_FMT_VDPAU,
+ AV_HWDEVICE_TYPE_VDPAU },
#endif
#if HAVE_DXVA2_LIB
{ "dxva2", dxva2_init, HWACCEL_DXVA2, AV_PIX_FMT_DXVA2_VLD,
diff --git a/ffmpeg_vdpau.c b/ffmpeg_vdpau.c
deleted file mode 100644
index 7d4fbf8a37..0000000000
--- a/ffmpeg_vdpau.c
+++ /dev/null
@@ -1,159 +0,0 @@
-/*
- * This file is part of FFmpeg.
- *
- * FFmpeg is free software; you can redistribute it and/or
- * modify it under the terms of the GNU Lesser General Public
- * License as published by the Free Software Foundation; either
- * version 2.1 of the License, or (at your option) any later version.
- *
- * FFmpeg is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
- * Lesser General Public License for more details.
- *
- * You should have received a copy of the GNU Lesser General Public
- * License along with FFmpeg; if not, write to the Free Software
- * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
- */
-
-#include <stdint.h>
-
-#include "ffmpeg.h"
-
-#include "libavcodec/vdpau.h"
-
-#include "libavutil/buffer.h"
-#include "libavutil/frame.h"
-#include "libavutil/hwcontext.h"
-#include "libavutil/hwcontext_vdpau.h"
-#include "libavutil/pixfmt.h"
-
-typedef struct VDPAUContext {
- AVBufferRef *hw_frames_ctx;
- AVFrame *tmp_frame;
-} VDPAUContext;
-
-static void vdpau_uninit(AVCodecContext *s)
-{
- InputStream *ist = s->opaque;
- VDPAUContext *ctx = ist->hwaccel_ctx;
-
- ist->hwaccel_uninit = NULL;
- ist->hwaccel_get_buffer = NULL;
- ist->hwaccel_retrieve_data = NULL;
-
- av_buffer_unref(&ctx->hw_frames_ctx);
- av_frame_free(&ctx->tmp_frame);
-
- av_freep(&ist->hwaccel_ctx);
- av_freep(&s->hwaccel_context);
-}
-
-static int vdpau_get_buffer(AVCodecContext *s, AVFrame *frame, int flags)
-{
- InputStream *ist = s->opaque;
- VDPAUContext *ctx = ist->hwaccel_ctx;
-
- return av_hwframe_get_buffer(ctx->hw_frames_ctx, frame, 0);
-}
-
-static int vdpau_retrieve_data(AVCodecContext *s, AVFrame *frame)
-{
- InputStream *ist = s->opaque;
- VDPAUContext *ctx = ist->hwaccel_ctx;
- int ret;
-
- ret = av_hwframe_transfer_data(ctx->tmp_frame, frame, 0);
- if (ret < 0)
- return ret;
-
- ret = av_frame_copy_props(ctx->tmp_frame, frame);
- if (ret < 0) {
- av_frame_unref(ctx->tmp_frame);
- return ret;
- }
-
- av_frame_unref(frame);
- av_frame_move_ref(frame, ctx->tmp_frame);
-
- return 0;
-}
-
-static int vdpau_alloc(AVCodecContext *s)
-{
- InputStream *ist = s->opaque;
- int loglevel = (ist->hwaccel_id == HWACCEL_AUTO) ? AV_LOG_VERBOSE : AV_LOG_ERROR;
- VDPAUContext *ctx;
- int ret;
-
- AVBufferRef *device_ref = NULL;
- AVHWDeviceContext *device_ctx;
- AVVDPAUDeviceContext *device_hwctx;
- AVHWFramesContext *frames_ctx;
-
- ctx = av_mallocz(sizeof(*ctx));
- if (!ctx)
- return AVERROR(ENOMEM);
-
- ist->hwaccel_ctx = ctx;
- ist->hwaccel_uninit = vdpau_uninit;
- ist->hwaccel_get_buffer = vdpau_get_buffer;
- ist->hwaccel_retrieve_data = vdpau_retrieve_data;
-
- ctx->tmp_frame = av_frame_alloc();
- if (!ctx->tmp_frame)
- goto fail;
-
- ret = av_hwdevice_ctx_create(&device_ref, AV_HWDEVICE_TYPE_VDPAU,
- ist->hwaccel_device, NULL, 0);
- if (ret < 0)
- goto fail;
- device_ctx = (AVHWDeviceContext*)device_ref->data;
- device_hwctx = device_ctx->hwctx;
-
- ctx->hw_frames_ctx = av_hwframe_ctx_alloc(device_ref);
- if (!ctx->hw_frames_ctx)
- goto fail;
- av_buffer_unref(&device_ref);
-
- frames_ctx = (AVHWFramesContext*)ctx->hw_frames_ctx->data;
- frames_ctx->format = AV_PIX_FMT_VDPAU;
- frames_ctx->sw_format = s->sw_pix_fmt;
- frames_ctx->width = s->coded_width;
- frames_ctx->height = s->coded_height;
-
- ret = av_hwframe_ctx_init(ctx->hw_frames_ctx);
- if (ret < 0)
- goto fail;
-
- if (av_vdpau_bind_context(s, device_hwctx->device, device_hwctx->get_proc_address, 0))
- goto fail;
-
- av_log(NULL, AV_LOG_VERBOSE, "Using VDPAU to decode input stream #%d:%d.\n",
- ist->file_index, ist->st->index);
-
- return 0;
-
-fail:
- av_log(NULL, loglevel, "VDPAU init failed for stream #%d:%d.\n",
- ist->file_index, ist->st->index);
- av_buffer_unref(&device_ref);
- vdpau_uninit(s);
- return AVERROR(EINVAL);
-}
-
-int vdpau_init(AVCodecContext *s)
-{
- InputStream *ist = s->opaque;
-
- if (!ist->hwaccel_ctx) {
- int ret = vdpau_alloc(s);
- if (ret < 0)
- return ret;
- }
-
- ist->hwaccel_get_buffer = vdpau_get_buffer;
- ist->hwaccel_retrieve_data = vdpau_retrieve_data;
-
- return 0;
-}
--
2.11.0
Mark Thompson
2017-06-12 22:40:30 UTC
The non-H.26[45] codecs already use this form. Since we don't
currently generate I frames for codecs which support them separately
from IDR, the p_per_i variable is set to infinity by default so that it
doesn't interfere with any other calculation. (All the code for I
frames still exists, and it works for H.264 if set manually.)

(cherry picked from commit 6af014f4028238b4c50f1731b3369a41d65fa9c4)
---
libavcodec/vaapi_encode.c | 3 +--
libavcodec/vaapi_encode_h264.c | 4 ++--
libavcodec/vaapi_encode_h265.c | 4 ++--
3 files changed, 5 insertions(+), 6 deletions(-)

diff --git a/libavcodec/vaapi_encode.c b/libavcodec/vaapi_encode.c
index 7aaf263d25..2de5f76cab 100644
--- a/libavcodec/vaapi_encode.c
+++ b/libavcodec/vaapi_encode.c
@@ -1435,8 +1435,7 @@ av_cold int ff_vaapi_encode_init(AVCodecContext *avctx)
ctx->output_order = - ctx->output_delay - 1;

// Currently we never generate I frames, only IDR.
- ctx->p_per_i = ((avctx->gop_size - 1 + avctx->max_b_frames) /
- (avctx->max_b_frames + 1));
+ ctx->p_per_i = INT_MAX;
ctx->b_per_p = avctx->max_b_frames;

if (ctx->codec->sequence_params_size > 0) {
diff --git a/libavcodec/vaapi_encode_h264.c b/libavcodec/vaapi_encode_h264.c
index 92e29554ed..f9fcd805a4 100644
--- a/libavcodec/vaapi_encode_h264.c
+++ b/libavcodec/vaapi_encode_h264.c
@@ -905,8 +905,8 @@ static int vaapi_encode_h264_init_sequence_params(AVCodecContext *avctx)
mseq->nal_hrd_parameters_present_flag = 0;
}

- vseq->intra_period = ctx->p_per_i * (ctx->b_per_p + 1);
- vseq->intra_idr_period = vseq->intra_period;
+ vseq->intra_period = avctx->gop_size;
+ vseq->intra_idr_period = avctx->gop_size;
vseq->ip_period = ctx->b_per_p + 1;
}

diff --git a/libavcodec/vaapi_encode_h265.c b/libavcodec/vaapi_encode_h265.c
index 6e008b7b9c..1d648a6d87 100644
--- a/libavcodec/vaapi_encode_h265.c
+++ b/libavcodec/vaapi_encode_h265.c
@@ -832,8 +832,8 @@ static int vaapi_encode_h265_init_sequence_params(AVCodecContext *avctx)
vseq->vui_time_scale = avctx->time_base.den;
}

- vseq->intra_period = ctx->p_per_i * (ctx->b_per_p + 1);
- vseq->intra_idr_period = vseq->intra_period;
+ vseq->intra_period = avctx->gop_size;
+ vseq->intra_idr_period = avctx->gop_size;
vseq->ip_period = ctx->b_per_p + 1;
}
--
2.11.0
Jun Zhao
2017-06-14 03:12:36 UTC
Post by Mark Thompson
The non-H.26[45] codecs already use this form. Since we don't
currently generate I frames for codecs which support them separately
to IDR, the p_per_i variable is set to infinity by default so that it
doesn't interfere with any other calculation. (All the code for I
frames still exists, and it works for H.264 if set manually.)
(cherry picked from commit 6af014f4028238b4c50f1731b3369a41d65fa9c4)
---
libavcodec/vaapi_encode.c | 3 +--
libavcodec/vaapi_encode_h264.c | 4 ++--
libavcodec/vaapi_encode_h265.c | 4 ++--
3 files changed, 5 insertions(+), 6 deletions(-)
diff --git a/libavcodec/vaapi_encode.c b/libavcodec/vaapi_encode.c
index 7aaf263d25..2de5f76cab 100644
--- a/libavcodec/vaapi_encode.c
+++ b/libavcodec/vaapi_encode.c
@@ -1435,8 +1435,7 @@ av_cold int ff_vaapi_encode_init(AVCodecContext *avctx)
ctx->output_order = - ctx->output_delay - 1;
// Currently we never generate I frames, only IDR.
- ctx->p_per_i = ((avctx->gop_size - 1 + avctx->max_b_frames) /
- (avctx->max_b_frames + 1));
+ ctx->p_per_i = INT_MAX;
Why not remove p_per_i if this field isn't used?
Post by Mark Thompson
ctx->b_per_p = avctx->max_b_frames;
if (ctx->codec->sequence_params_size > 0) {
diff --git a/libavcodec/vaapi_encode_h264.c b/libavcodec/vaapi_encode_h264.c
index 92e29554ed..f9fcd805a4 100644
--- a/libavcodec/vaapi_encode_h264.c
+++ b/libavcodec/vaapi_encode_h264.c
@@ -905,8 +905,8 @@ static int vaapi_encode_h264_init_sequence_params(AVCodecContext *avctx)
mseq->nal_hrd_parameters_present_flag = 0;
}
- vseq->intra_period = ctx->p_per_i * (ctx->b_per_p + 1);
- vseq->intra_idr_period = vseq->intra_period;
+ vseq->intra_period = avctx->gop_size;
+ vseq->intra_idr_period = avctx->gop_size;
vseq->ip_period = ctx->b_per_p + 1;
}
diff --git a/libavcodec/vaapi_encode_h265.c b/libavcodec/vaapi_encode_h265.c
index 6e008b7b9c..1d648a6d87 100644
--- a/libavcodec/vaapi_encode_h265.c
+++ b/libavcodec/vaapi_encode_h265.c
@@ -832,8 +832,8 @@ static int vaapi_encode_h265_init_sequence_params(AVCodecContext *avctx)
vseq->vui_time_scale = avctx->time_base.den;
}
- vseq->intra_period = ctx->p_per_i * (ctx->b_per_p + 1);
- vseq->intra_idr_period = vseq->intra_period;
+ vseq->intra_period = avctx->gop_size;
+ vseq->intra_idr_period = avctx->gop_size;
vseq->ip_period = ctx->b_per_p + 1;
}
Mark Thompson
2017-06-14 09:58:09 UTC
Post by Jun Zhao
Post by Mark Thompson
The non-H.26[45] codecs already use this form. Since we don't
currently generate I frames for codecs which support them separately
to IDR, the p_per_i variable is set to infinity by default so that it
doesn't interfere with any other calculation. (All the code for I
frames still exists, and it works for H.264 if set manually.)
(cherry picked from commit 6af014f4028238b4c50f1731b3369a41d65fa9c4)
---
libavcodec/vaapi_encode.c | 3 +--
libavcodec/vaapi_encode_h264.c | 4 ++--
libavcodec/vaapi_encode_h265.c | 4 ++--
3 files changed, 5 insertions(+), 6 deletions(-)
diff --git a/libavcodec/vaapi_encode.c b/libavcodec/vaapi_encode.c
index 7aaf263d25..2de5f76cab 100644
--- a/libavcodec/vaapi_encode.c
+++ b/libavcodec/vaapi_encode.c
@@ -1435,8 +1435,7 @@ av_cold int ff_vaapi_encode_init(AVCodecContext *avctx)
ctx->output_order = - ctx->output_delay - 1;
// Currently we never generate I frames, only IDR.
- ctx->p_per_i = ((avctx->gop_size - 1 + avctx->max_b_frames) /
- (avctx->max_b_frames + 1));
+ ctx->p_per_i = INT_MAX;
Why not remove p_per_i if this field isn't used?
It's useful for testing the I-frame support in H.264, which works but is currently inaccessible to the user. I have vague plans to make it user-controllable somehow, but I'm not yet sure how. (There is <https://lists.libav.org/pipermail/libav-devel/2017-May/083691.html> outstanding to generate SEI recovery points so that I frames in open-GOP style are sensibly usable in H.264, but no options to actually enable it yet.)
Post by Jun Zhao
Post by Mark Thompson
ctx->b_per_p = avctx->max_b_frames;
if (ctx->codec->sequence_params_size > 0) {
diff --git a/libavcodec/vaapi_encode_h264.c b/libavcodec/vaapi_encode_h264.c
index 92e29554ed..f9fcd805a4 100644
--- a/libavcodec/vaapi_encode_h264.c
+++ b/libavcodec/vaapi_encode_h264.c
@@ -905,8 +905,8 @@ static int vaapi_encode_h264_init_sequence_params(AVCodecContext *avctx)
mseq->nal_hrd_parameters_present_flag = 0;
}
- vseq->intra_period = ctx->p_per_i * (ctx->b_per_p + 1);
- vseq->intra_idr_period = vseq->intra_period;
+ vseq->intra_period = avctx->gop_size;
+ vseq->intra_idr_period = avctx->gop_size;
vseq->ip_period = ctx->b_per_p + 1;
}
diff --git a/libavcodec/vaapi_encode_h265.c b/libavcodec/vaapi_encode_h265.c
index 6e008b7b9c..1d648a6d87 100644
--- a/libavcodec/vaapi_encode_h265.c
+++ b/libavcodec/vaapi_encode_h265.c
@@ -832,8 +832,8 @@ static int vaapi_encode_h265_init_sequence_params(AVCodecContext *avctx)
vseq->vui_time_scale = avctx->time_base.den;
}
- vseq->intra_period = ctx->p_per_i * (ctx->b_per_p + 1);
- vseq->intra_idr_period = vseq->intra_period;
+ vseq->intra_period = avctx->gop_size;
+ vseq->intra_idr_period = avctx->gop_size;
vseq->ip_period = ctx->b_per_p + 1;
}
Mark Thompson
2017-06-12 22:40:31 UTC
(cherry picked from commit 64a5260c695dd8051509d3270295fd64eac56587)
---
doc/APIchanges | 3 +++
libavcodec/avcodec.h | 14 ++++++++++++++
libavcodec/version.h | 2 +-
3 files changed, 18 insertions(+), 1 deletion(-)

diff --git a/doc/APIchanges b/doc/APIchanges
index 5b2203f2b4..12c4877b9b 100644
--- a/doc/APIchanges
+++ b/doc/APIchanges
@@ -15,6 +15,9 @@ libavutil: 2015-08-28

API changes, most recent first:

+2017-06-xx - xxxxxxx - lavc 57.99.100 - avcodec.h
+ Add AV_HWACCEL_FLAG_ALLOW_PROFILE_MISMATCH.
+
2017-06-xx - xxxxxxx - lavu 55.65.100 - hwcontext.h
Add AV_HWDEVICE_TYPE_NONE, av_hwdevice_find_type_by_name(),
av_hwdevice_get_type_name() and av_hwdevice_iterate_types().
diff --git a/libavcodec/avcodec.h b/libavcodec/avcodec.h
index dcdcfe00ae..39be8cf717 100644
--- a/libavcodec/avcodec.h
+++ b/libavcodec/avcodec.h
@@ -4002,6 +4002,20 @@ typedef struct AVHWAccel {
#define AV_HWACCEL_FLAG_ALLOW_HIGH_DEPTH (1 << 1)

/**
+ * Hardware acceleration should still be attempted for decoding when the
+ * codec profile does not match the reported capabilities of the hardware.
+ *
+ * For example, this can be used to try to decode baseline profile H.264
+ * streams in hardware - it will often succeed, because many streams marked
+ * as baseline profile actually conform to constrained baseline profile.
+ *
+ * @warning If the stream is actually not supported then the behaviour is
+ * undefined, and may include returning entirely incorrect output
+ * while indicating success.
+ */
+#define AV_HWACCEL_FLAG_ALLOW_PROFILE_MISMATCH (1 << 2)
+
+/**
* @}
*/

diff --git a/libavcodec/version.h b/libavcodec/version.h
index c93487273a..a44a88832d 100644
--- a/libavcodec/version.h
+++ b/libavcodec/version.h
@@ -28,7 +28,7 @@
#include "libavutil/version.h"

#define LIBAVCODEC_VERSION_MAJOR 57
-#define LIBAVCODEC_VERSION_MINOR 98
+#define LIBAVCODEC_VERSION_MINOR 99
#define LIBAVCODEC_VERSION_MICRO 100

#define LIBAVCODEC_VERSION_INT AV_VERSION_INT(LIBAVCODEC_VERSION_MAJOR, \
--
2.11.0
Mark Thompson
2017-06-12 22:40:25 UTC
(cherry picked from commit 303fadf5963e01b8edf4ba2701e45f7e9e586aeb)
---
doc/ffmpeg.texi | 85 +++++++++++++++++++++++++++++++++++++++------------------
1 file changed, 58 insertions(+), 27 deletions(-)

diff --git a/doc/ffmpeg.texi b/doc/ffmpeg.texi
index dcc0cfb341..db7f05a3e0 100644
--- a/doc/ffmpeg.texi
+++ b/doc/ffmpeg.texi
@@ -715,6 +715,56 @@ would be more efficient.
When doing stream copy, copy also non-key frames found at the
beginning.

+@item -init_hw_device @var{type}[=@var{name}][:@var{device}[,@var{key=value}...]]
+Initialise a new hardware device of type @var{type} called @var{name}, using the
+given device parameters.
+If no name is specified it will receive a default name of the form "@var{type}%d".
+
+The meaning of @var{device} and the following arguments depends on the
+device type:
+@table @option
+
+@item cuda
+@var{device} is the number of the CUDA device.
+
+@item dxva2
+@var{device} is the number of the Direct3D 9 display adapter.
+
+@item vaapi
+@var{device} is either an X11 display name or a DRM render node.
+If not specified, it will attempt to open the default X11 display (@emph{$DISPLAY})
+and then the first DRM render node (@emph{/dev/dri/renderD128}).
+
+@item vdpau
+@var{device} is an X11 display name.
+If not specified, it will attempt to open the default X11 display (@emph{$DISPLAY}).
+
+@item qsv
+@var{device} selects a value in @samp{MFX_IMPL_*}. Allowed values are:
+@table @option
+@item auto
+@item sw
+@item hw
+@item auto_any
+@item hw_any
+@item hw2
+@item hw3
+@item hw4
+@end table
+If not specified, @samp{auto_any} is used.
+(Note that it may be easier to achieve the desired result for QSV by creating the
+platform-appropriate subdevice (@samp{dxva2} or @samp{vaapi}) and then deriving a
+QSV device from that.)
+
+@end table
+
+@item -init_hw_device @var{type}[=@var{name}]@@@var{source}
+Initialise a new hardware device of type @var{type} called @var{name},
+deriving it from the existing device with the name @var{source}.
+
+@item -init_hw_device list
+List all hardware device types supported in this build of ffmpeg.
+
@item -hwaccel[:@var{stream_specifier}] @var{hwaccel} (@emph{input,per-stream})
Use hardware acceleration to decode the matching stream(s). The allowed values
of @var{hwaccel} are:
@@ -734,6 +784,9 @@ Use VDPAU (Video Decode and Presentation API for Unix) hardware acceleration.
@item dxva2
Use DXVA2 (DirectX Video Acceleration) hardware acceleration.

+@item vaapi
+Use VAAPI (Video Acceleration API) hardware acceleration.
+
@item qsv
Use the Intel QuickSync Video acceleration for video transcoding.

@@ -757,33 +810,11 @@ useful for testing.
@item -hwaccel_device[:@var{stream_specifier}] @var{hwaccel_device} (@emph{input,per-stream})
Select a device to use for hardware acceleration.

-This option only makes sense when the @option{-hwaccel} option is also
-specified. Its exact meaning depends on the specific hardware acceleration
-method chosen.
-
-@table @option
-@item vdpau
-For VDPAU, this option specifies the X11 display/screen to use. If this option
-is not specified, the value of the @var{DISPLAY} environment variable is used
-
-@item dxva2
-For DXVA2, this option should contain the number of the display adapter to use.
-If this option is not specified, the default adapter is used.
-
-@item qsv
-For QSV, this option corresponds to the values of MFX_IMPL_* . Allowed values
-are:
-@table @option
-@item auto
-@item sw
-@item hw
-@item auto_any
-@item hw_any
-@item hw2
-@item hw3
-@item hw4
-@end table
-@end table
+This option only makes sense when the @option{-hwaccel} option is also specified.
+It can either refer to an existing device created with @option{-init_hw_device}
+by name, or it can create a new device as if
+@samp{-init_hw_device} @var{type}:@var{hwaccel_device}
+were called immediately before.

@item -hwaccels
List all hardware acceleration methods supported in this build of ffmpeg.
--
2.11.0
Mark Thompson
2017-06-12 22:40:26 UTC
In order to work correctly with the i965 driver, this also fixes the
direction of forward/backward references - forward references are
intended to be those from the past to the current frame, not from the
current frame to the future.

(cherry picked from commit 9aa251c98ce60e5ee83156e5292547a7671ced3a)
---
libavfilter/vf_deinterlace_vaapi.c | 289 +++++++++++++++++++++----------------
1 file changed, 166 insertions(+), 123 deletions(-)

diff --git a/libavfilter/vf_deinterlace_vaapi.c b/libavfilter/vf_deinterlace_vaapi.c
index 5e7f7cf1c2..838eb89c90 100644
--- a/libavfilter/vf_deinterlace_vaapi.c
+++ b/libavfilter/vf_deinterlace_vaapi.c
@@ -22,6 +22,7 @@
#include <va/va_vpp.h>

#include "libavutil/avassert.h"
+#include "libavutil/common.h"
#include "libavutil/hwcontext.h"
#include "libavutil/hwcontext_vaapi.h"
#include "libavutil/mem.h"
@@ -42,6 +43,8 @@ typedef struct DeintVAAPIContext {
AVBufferRef *device_ref;

int mode;
+ int field_rate;
+ int auto_enable;

int valid_ids;
VAConfigID va_config;
@@ -63,6 +66,7 @@ typedef struct DeintVAAPIContext {
int queue_depth;
int queue_count;
AVFrame *frame_queue[MAX_REFERENCES];
+ int extra_delay_for_timestamps;

VABufferID filter_buffer;
} DeintVAAPIContext;
@@ -211,8 +215,12 @@ static int deint_vaapi_build_filter_params(AVFilterContext *avctx)
return AVERROR(EIO);
}

+ ctx->extra_delay_for_timestamps = ctx->field_rate == 2 &&
+ ctx->pipeline_caps.num_backward_references == 0;
+
ctx->queue_depth = ctx->pipeline_caps.num_backward_references +
- ctx->pipeline_caps.num_forward_references + 1;
+ ctx->pipeline_caps.num_forward_references +
+ ctx->extra_delay_for_timestamps + 1;
if (ctx->queue_depth > MAX_REFERENCES) {
av_log(avctx, AV_LOG_ERROR, "Pipeline requires too many "
"references (%u forward, %u back).\n",
@@ -227,6 +235,7 @@ static int deint_vaapi_build_filter_params(AVFilterContext *avctx)
static int deint_vaapi_config_output(AVFilterLink *outlink)
{
AVFilterContext *avctx = outlink->src;
+ AVFilterLink *inlink = avctx->inputs[0];
DeintVAAPIContext *ctx = avctx->priv;
AVVAAPIHWConfig *hwconfig = NULL;
AVHWFramesConstraints *constraints = NULL;
@@ -326,8 +335,13 @@ static int deint_vaapi_config_output(AVFilterLink *outlink)
if (err < 0)
goto fail;

- outlink->w = ctx->output_width;
- outlink->h = ctx->output_height;
+ outlink->w = inlink->w;
+ outlink->h = inlink->h;
+
+ outlink->time_base = av_mul_q(inlink->time_base,
+ (AVRational) { 1, ctx->field_rate });
+ outlink->frame_rate = av_mul_q(inlink->frame_rate,
+ (AVRational) { ctx->field_rate, 1 });

outlink->hw_frames_ctx = av_buffer_ref(ctx->output_frames_ref);
if (!outlink->hw_frames_ctx) {
@@ -375,7 +389,7 @@ static int deint_vaapi_filter_frame(AVFilterLink *inlink, AVFrame *input_frame)
VABufferID params_id;
VAStatus vas;
void *filter_params_addr = NULL;
- int err, i;
+ int err, i, field, current_frame_index;

av_log(avctx, AV_LOG_DEBUG, "Filter input: %s, %ux%u (%"PRId64").\n",
av_get_pix_fmt_name(input_frame->format),
@@ -394,17 +408,16 @@ static int deint_vaapi_filter_frame(AVFilterLink *inlink, AVFrame *input_frame)
ctx->frame_queue[i] = input_frame;
}

- input_frame =
- ctx->frame_queue[ctx->pipeline_caps.num_backward_references];
+ current_frame_index = ctx->pipeline_caps.num_forward_references;
+
+ input_frame = ctx->frame_queue[current_frame_index];
input_surface = (VASurfaceID)(uintptr_t)input_frame->data[3];
- for (i = 0; i < ctx->pipeline_caps.num_backward_references; i++)
- backward_references[i] = (VASurfaceID)(uintptr_t)
- ctx->frame_queue[ctx->pipeline_caps.num_backward_references -
- i - 1]->data[3];
for (i = 0; i < ctx->pipeline_caps.num_forward_references; i++)
forward_references[i] = (VASurfaceID)(uintptr_t)
- ctx->frame_queue[ctx->pipeline_caps.num_backward_references +
- i + 1]->data[3];
+ ctx->frame_queue[current_frame_index - i - 1]->data[3];
+ for (i = 0; i < ctx->pipeline_caps.num_backward_references; i++)
+ backward_references[i] = (VASurfaceID)(uintptr_t)
+ ctx->frame_queue[current_frame_index + i + 1]->data[3];

av_log(avctx, AV_LOG_DEBUG, "Using surface %#x for "
"deinterlace input.\n", input_surface);
@@ -417,129 +430,148 @@ static int deint_vaapi_filter_frame(AVFilterLink *inlink, AVFrame *input_frame)
av_log(avctx, AV_LOG_DEBUG, " %#x", forward_references[i]);
av_log(avctx, AV_LOG_DEBUG, "\n");

- output_frame = av_frame_alloc();
- if (!output_frame) {
- err = AVERROR(ENOMEM);
- goto fail;
- }
-
- err = av_hwframe_get_buffer(ctx->output_frames_ref,
- output_frame, 0);
- if (err < 0) {
- err = AVERROR(ENOMEM);
- goto fail;
- }
-
- output_surface = (VASurfaceID)(uintptr_t)output_frame->data[3];
- av_log(avctx, AV_LOG_DEBUG, "Using surface %#x for "
- "deinterlace output.\n", output_surface);
-
- memset(&params, 0, sizeof(params));
-
- input_region = (VARectangle) {
- .x = 0,
- .y = 0,
- .width = input_frame->width,
- .height = input_frame->height,
- };
+ for (field = 0; field < ctx->field_rate; field++) {
+ output_frame = ff_get_video_buffer(outlink, ctx->output_width,
+ ctx->output_height);
+ if (!output_frame) {
+ err = AVERROR(ENOMEM);
+ goto fail;
+ }

- params.surface = input_surface;
- params.surface_region = &input_region;
- params.surface_color_standard = vaapi_proc_colour_standard(
- input_frame->colorspace);
+ output_surface = (VASurfaceID)(uintptr_t)output_frame->data[3];
+ av_log(avctx, AV_LOG_DEBUG, "Using surface %#x for "
+ "deinterlace output.\n", output_surface);
+
+ memset(&params, 0, sizeof(params));
+
+ input_region = (VARectangle) {
+ .x = 0,
+ .y = 0,
+ .width = input_frame->width,
+ .height = input_frame->height,
+ };
+
+ params.surface = input_surface;
+ params.surface_region = &input_region;
+ params.surface_color_standard =
+ vaapi_proc_colour_standard(input_frame->colorspace);
+
+ params.output_region = NULL;
+ params.output_background_color = 0xff000000;
+ params.output_color_standard = params.surface_color_standard;
+
+ params.pipeline_flags = 0;
+ params.filter_flags = VA_FRAME_PICTURE;
+
+ if (!ctx->auto_enable || input_frame->interlaced_frame) {
+ vas = vaMapBuffer(ctx->hwctx->display, ctx->filter_buffer,
+ &filter_params_addr);
+ if (vas != VA_STATUS_SUCCESS) {
+ av_log(avctx, AV_LOG_ERROR, "Failed to map filter parameter "
+ "buffer: %d (%s).\n", vas, vaErrorStr(vas));
+ err = AVERROR(EIO);
+ goto fail;
+ }
+ filter_params = filter_params_addr;
+ filter_params->flags = 0;
+ if (input_frame->top_field_first) {
+ filter_params->flags |= field ? VA_DEINTERLACING_BOTTOM_FIELD : 0;
+ } else {
+ filter_params->flags |= VA_DEINTERLACING_BOTTOM_FIELD_FIRST;
+ filter_params->flags |= field ? 0 : VA_DEINTERLACING_BOTTOM_FIELD;
+ }
+ filter_params_addr = NULL;
+ vas = vaUnmapBuffer(ctx->hwctx->display, ctx->filter_buffer);
+ if (vas != VA_STATUS_SUCCESS)
+ av_log(avctx, AV_LOG_ERROR, "Failed to unmap filter parameter "
+ "buffer: %d (%s).\n", vas, vaErrorStr(vas));
+
+ params.filters = &ctx->filter_buffer;
+ params.num_filters = 1;
+
+ params.forward_references = forward_references;
+ params.num_forward_references =
+ ctx->pipeline_caps.num_forward_references;
+ params.backward_references = backward_references;
+ params.num_backward_references =
+ ctx->pipeline_caps.num_backward_references;
+
+ } else {
+ params.filters = NULL;
+ params.num_filters = 0;
+ }

- params.output_region = NULL;
- params.output_background_color = 0xff000000;
- params.output_color_standard = params.surface_color_standard;
+ vas = vaBeginPicture(ctx->hwctx->display,
+ ctx->va_context, output_surface);
+ if (vas != VA_STATUS_SUCCESS) {
+ av_log(avctx, AV_LOG_ERROR, "Failed to attach new picture: "
+ "%d (%s).\n", vas, vaErrorStr(vas));
+ err = AVERROR(EIO);
+ goto fail;
+ }

- params.pipeline_flags = 0;
- params.filter_flags = VA_FRAME_PICTURE;
+ vas = vaCreateBuffer(ctx->hwctx->display, ctx->va_context,
+ VAProcPipelineParameterBufferType,
+ sizeof(params), 1, &params, &params_id);
+ if (vas != VA_STATUS_SUCCESS) {
+ av_log(avctx, AV_LOG_ERROR, "Failed to create parameter buffer: "
+ "%d (%s).\n", vas, vaErrorStr(vas));
+ err = AVERROR(EIO);
+ goto fail_after_begin;
+ }
+ av_log(avctx, AV_LOG_DEBUG, "Pipeline parameter buffer is %#x.\n",
+ params_id);

- vas = vaMapBuffer(ctx->hwctx->display, ctx->filter_buffer,
- &filter_params_addr);
- if (vas != VA_STATUS_SUCCESS) {
- av_log(avctx, AV_LOG_ERROR, "Failed to map filter parameter "
- "buffer: %d (%s).\n", vas, vaErrorStr(vas));
- err = AVERROR(EIO);
- goto fail;
- }
- filter_params = filter_params_addr;
- filter_params->flags = 0;
- if (input_frame->interlaced_frame && !input_frame->top_field_first)
- filter_params->flags |= VA_DEINTERLACING_BOTTOM_FIELD_FIRST;
- filter_params_addr = NULL;
- vas = vaUnmapBuffer(ctx->hwctx->display, ctx->filter_buffer);
- if (vas != VA_STATUS_SUCCESS)
- av_log(avctx, AV_LOG_ERROR, "Failed to unmap filter parameter "
- "buffer: %d (%s).\n", vas, vaErrorStr(vas));
-
- params.filters = &ctx->filter_buffer;
- params.num_filters = 1;
-
- params.forward_references = forward_references;
- params.num_forward_references =
- ctx->pipeline_caps.num_forward_references;
- params.backward_references = backward_references;
- params.num_backward_references =
- ctx->pipeline_caps.num_backward_references;
-
- vas = vaBeginPicture(ctx->hwctx->display,
- ctx->va_context, output_surface);
- if (vas != VA_STATUS_SUCCESS) {
- av_log(avctx, AV_LOG_ERROR, "Failed to attach new picture: "
- "%d (%s).\n", vas, vaErrorStr(vas));
- err = AVERROR(EIO);
- goto fail;
- }
+ vas = vaRenderPicture(ctx->hwctx->display, ctx->va_context,
+ &params_id, 1);
+ if (vas != VA_STATUS_SUCCESS) {
+ av_log(avctx, AV_LOG_ERROR, "Failed to render parameter buffer: "
+ "%d (%s).\n", vas, vaErrorStr(vas));
+ err = AVERROR(EIO);
+ goto fail_after_begin;
+ }

- vas = vaCreateBuffer(ctx->hwctx->display, ctx->va_context,
- VAProcPipelineParameterBufferType,
- sizeof(params), 1, &params, &params_id);
- if (vas != VA_STATUS_SUCCESS) {
- av_log(avctx, AV_LOG_ERROR, "Failed to create parameter buffer: "
- "%d (%s).\n", vas, vaErrorStr(vas));
- err = AVERROR(EIO);
- goto fail_after_begin;
- }
- av_log(avctx, AV_LOG_DEBUG, "Pipeline parameter buffer is %#x.\n",
- params_id);
+ vas = vaEndPicture(ctx->hwctx->display, ctx->va_context);
+ if (vas != VA_STATUS_SUCCESS) {
+ av_log(avctx, AV_LOG_ERROR, "Failed to start picture processing: "
+ "%d (%s).\n", vas, vaErrorStr(vas));
+ err = AVERROR(EIO);
+ goto fail_after_render;
+ }

- vas = vaRenderPicture(ctx->hwctx->display, ctx->va_context,
- &params_id, 1);
- if (vas != VA_STATUS_SUCCESS) {
- av_log(avctx, AV_LOG_ERROR, "Failed to render parameter buffer: "
- "%d (%s).\n", vas, vaErrorStr(vas));
- err = AVERROR(EIO);
- goto fail_after_begin;
- }
+ if (ctx->hwctx->driver_quirks &
+ AV_VAAPI_DRIVER_QUIRK_RENDER_PARAM_BUFFERS) {
+ vas = vaDestroyBuffer(ctx->hwctx->display, params_id);
+ if (vas != VA_STATUS_SUCCESS) {
+ av_log(avctx, AV_LOG_ERROR, "Failed to free parameter buffer: "
+ "%d (%s).\n", vas, vaErrorStr(vas));
+ // And ignore.
+ }
+ }

- vas = vaEndPicture(ctx->hwctx->display, ctx->va_context);
- if (vas != VA_STATUS_SUCCESS) {
- av_log(avctx, AV_LOG_ERROR, "Failed to start picture processing: "
- "%d (%s).\n", vas, vaErrorStr(vas));
- err = AVERROR(EIO);
- goto fail_after_render;
- }
+ err = av_frame_copy_props(output_frame, input_frame);
+ if (err < 0)
+ goto fail;

- if (ctx->hwctx->driver_quirks &
- AV_VAAPI_DRIVER_QUIRK_RENDER_PARAM_BUFFERS) {
- vas = vaDestroyBuffer(ctx->hwctx->display, params_id);
- if (vas != VA_STATUS_SUCCESS) {
- av_log(avctx, AV_LOG_ERROR, "Failed to free parameter buffer: "
- "%d (%s).\n", vas, vaErrorStr(vas));
- // And ignore.
+ if (ctx->field_rate == 2) {
+ if (field == 0)
+ output_frame->pts = 2 * input_frame->pts;
+ else
+ output_frame->pts = input_frame->pts +
+ ctx->frame_queue[current_frame_index + 1]->pts;
}
- }
+ output_frame->interlaced_frame = 0;

- err = av_frame_copy_props(output_frame, input_frame);
- if (err < 0)
- goto fail;
+ av_log(avctx, AV_LOG_DEBUG, "Filter output: %s, %ux%u (%"PRId64").\n",
+ av_get_pix_fmt_name(output_frame->format),
+ output_frame->width, output_frame->height, output_frame->pts);

- av_log(avctx, AV_LOG_DEBUG, "Filter output: %s, %ux%u (%"PRId64").\n",
- av_get_pix_fmt_name(output_frame->format),
- output_frame->width, output_frame->height, output_frame->pts);
+ err = ff_filter_frame(outlink, output_frame);
+ if (err < 0)
+ break;
+ }

- return ff_filter_frame(outlink, output_frame);
+ return err;

fail_after_begin:
vaRenderPicture(ctx->hwctx->display, ctx->va_context, &params_id, 1);
@@ -592,6 +624,17 @@ static const AVOption deint_vaapi_options[] = {
0, AV_OPT_TYPE_CONST, { .i64 = VAProcDeinterlacingMotionAdaptive }, .unit = "mode" },
{ "motion_compensated", "Use the motion compensated deinterlacing algorithm",
0, AV_OPT_TYPE_CONST, { .i64 = VAProcDeinterlacingMotionCompensated }, .unit = "mode" },
+
+ { "rate", "Generate output at frame rate or field rate",
+ OFFSET(field_rate), AV_OPT_TYPE_INT, { .i64 = 1 }, 1, 2, FLAGS, "rate" },
+ { "frame", "Output at frame rate (one frame of output for each field-pair)",
+ 0, AV_OPT_TYPE_CONST, { .i64 = 1 }, .unit = "rate" },
+ { "field", "Output at field rate (one frame of output for each field)",
+ 0, AV_OPT_TYPE_CONST, { .i64 = 2 }, .unit = "rate" },
+
+ { "auto", "Only deinterlace fields, passing frames through unchanged",
+ OFFSET(auto_enable), AV_OPT_TYPE_INT, { .i64 = 0 }, 0, 1, FLAGS },
+
{ NULL },
};
--
2.11.0
wm4
2017-06-13 12:02:37 UTC
On Mon, 12 Jun 2017 23:40:26 +0100
Post by Mark Thompson
In order to work correctly with the i965 driver, this also fixes the
direction of forward/backward references - forward references are
intended to be those from the past to the current frame, not from the
current frame to the future.
Isn't this comment kind of outdated (or missing context), given that we decided that's how the vpp API works?

But LGTM.
Mark Thompson
2017-06-12 22:40:32 UTC
Uses the just-added ALLOW_PROFILE_MISMATCH flag.

(cherry picked from commit 7acb90333a187b0e847b66f9d3511245423dc0ce)
---
libavcodec/vaapi_decode.c | 11 ++---------
1 file changed, 2 insertions(+), 9 deletions(-)

diff --git a/libavcodec/vaapi_decode.c b/libavcodec/vaapi_decode.c
index b63fb94fc1..cf58aae4c6 100644
--- a/libavcodec/vaapi_decode.c
+++ b/libavcodec/vaapi_decode.c
@@ -286,14 +286,6 @@ static int vaapi_decode_make_config(AVCodecContext *avctx)
int profile_count, exact_match, alt_profile;
const AVPixFmtDescriptor *sw_desc, *desc;

- // Allowing a profile mismatch can be useful because streams may
- // over-declare their required capabilities - in particular, many
- // H.264 baseline profile streams (notably some of those in FATE)
- // only use the feature set of constrained baseline. This flag
- // would have to be be set by some external means in order to
- // actually be useful. (AV_HWACCEL_FLAG_IGNORE_PROFILE?)
- int allow_profile_mismatch = 0;
-
codec_desc = avcodec_descriptor_get(avctx->codec_id);
if (!codec_desc) {
err = AVERROR(EINVAL);
@@ -348,7 +340,8 @@ static int vaapi_decode_make_config(AVCodecContext *avctx)
goto fail;
}
if (!exact_match) {
- if (allow_profile_mismatch) {
+ if (avctx->hwaccel_flags &
+ AV_HWACCEL_FLAG_ALLOW_PROFILE_MISMATCH) {
av_log(avctx, AV_LOG_VERBOSE, "Codec %s profile %d not "
"supported for hardware decode.\n",
codec_desc->name, avctx->profile);
--
2.11.0
Mark Thompson
2017-06-12 22:40:27 UTC
Permalink
(cherry picked from commit 4936a48b1e6fc2147599541f8b25f43a8a9d1f16)
---
libavcodec/qsv.c | 49 ++++++++++++++++++++++++++++++++---------------
libavcodec/qsv_internal.h | 9 ++++++---
libavcodec/qsvdec.c | 6 +++---
libavcodec/qsvenc.c | 6 +++---
4 files changed, 46 insertions(+), 24 deletions(-)

diff --git a/libavcodec/qsv.c b/libavcodec/qsv.c
index 1284419741..b9e2cd990d 100644
--- a/libavcodec/qsv.c
+++ b/libavcodec/qsv.c
@@ -535,27 +535,16 @@ static mfxStatus qsv_frame_get_hdl(mfxHDL pthis, mfxMemId mid, mfxHDL *hdl)
return MFX_ERR_NONE;
}

-int ff_qsv_init_session_hwcontext(AVCodecContext *avctx, mfxSession *psession,
- QSVFramesContext *qsv_frames_ctx,
- const char *load_plugins, int opaque)
+int ff_qsv_init_session_device(AVCodecContext *avctx, mfxSession *psession,
+ AVBufferRef *device_ref, const char *load_plugins)
{
static const mfxHandleType handle_types[] = {
MFX_HANDLE_VA_DISPLAY,
MFX_HANDLE_D3D9_DEVICE_MANAGER,
MFX_HANDLE_D3D11_DEVICE,
};
- mfxFrameAllocator frame_allocator = {
- .pthis = qsv_frames_ctx,
- .Alloc = qsv_frame_alloc,
- .Lock = qsv_frame_lock,
- .Unlock = qsv_frame_unlock,
- .GetHDL = qsv_frame_get_hdl,
- .Free = qsv_frame_free,
- };
-
- AVHWFramesContext *frames_ctx = (AVHWFramesContext*)qsv_frames_ctx->hw_frames_ctx->data;
- AVQSVFramesContext *frames_hwctx = frames_ctx->hwctx;
- AVQSVDeviceContext *device_hwctx = frames_ctx->device_ctx->hwctx;
+ AVHWDeviceContext *device_ctx = (AVHWDeviceContext*)device_ref->data;
+ AVQSVDeviceContext *device_hwctx = device_ctx->hwctx;
mfxSession parent_session = device_hwctx->session;

mfxSession session;
@@ -605,6 +594,36 @@ int ff_qsv_init_session_hwcontext(AVCodecContext *avctx, mfxSession *psession,
return ret;
}

+ *psession = session;
+ return 0;
+}
+
+int ff_qsv_init_session_frames(AVCodecContext *avctx, mfxSession *psession,
+ QSVFramesContext *qsv_frames_ctx,
+ const char *load_plugins, int opaque)
+{
+ mfxFrameAllocator frame_allocator = {
+ .pthis = qsv_frames_ctx,
+ .Alloc = qsv_frame_alloc,
+ .Lock = qsv_frame_lock,
+ .Unlock = qsv_frame_unlock,
+ .GetHDL = qsv_frame_get_hdl,
+ .Free = qsv_frame_free,
+ };
+
+ AVHWFramesContext *frames_ctx = (AVHWFramesContext*)qsv_frames_ctx->hw_frames_ctx->data;
+ AVQSVFramesContext *frames_hwctx = frames_ctx->hwctx;
+
+ mfxSession session;
+ mfxStatus err;
+
+ int ret;
+
+ ret = ff_qsv_init_session_device(avctx, &session,
+ frames_ctx->device_ref, load_plugins);
+ if (ret < 0)
+ return ret;
+
if (!opaque) {
qsv_frames_ctx->logctx = avctx;

diff --git a/libavcodec/qsv_internal.h b/libavcodec/qsv_internal.h
index 814db08e6c..c0305508dd 100644
--- a/libavcodec/qsv_internal.h
+++ b/libavcodec/qsv_internal.h
@@ -90,9 +90,12 @@ int ff_qsv_map_pixfmt(enum AVPixelFormat format, uint32_t *fourcc);
int ff_qsv_init_internal_session(AVCodecContext *avctx, mfxSession *session,
const char *load_plugins);

-int ff_qsv_init_session_hwcontext(AVCodecContext *avctx, mfxSession *session,
- QSVFramesContext *qsv_frames_ctx,
- const char *load_plugins, int opaque);
+int ff_qsv_init_session_device(AVCodecContext *avctx, mfxSession *psession,
+ AVBufferRef *device_ref, const char *load_plugins);
+
+int ff_qsv_init_session_frames(AVCodecContext *avctx, mfxSession *session,
+ QSVFramesContext *qsv_frames_ctx,
+ const char *load_plugins, int opaque);

int ff_qsv_find_surface_idx(QSVFramesContext *ctx, QSVFrame *frame);

diff --git a/libavcodec/qsvdec.c b/libavcodec/qsvdec.c
index d7664ce581..74866b57ff 100644
--- a/libavcodec/qsvdec.c
+++ b/libavcodec/qsvdec.c
@@ -59,9 +59,9 @@ static int qsv_init_session(AVCodecContext *avctx, QSVContext *q, mfxSession ses
if (!q->frames_ctx.hw_frames_ctx)
return AVERROR(ENOMEM);

- ret = ff_qsv_init_session_hwcontext(avctx, &q->internal_session,
- &q->frames_ctx, q->load_plugins,
- q->iopattern == MFX_IOPATTERN_OUT_OPAQUE_MEMORY);
+ ret = ff_qsv_init_session_frames(avctx, &q->internal_session,
+ &q->frames_ctx, q->load_plugins,
+ q->iopattern == MFX_IOPATTERN_OUT_OPAQUE_MEMORY);
if (ret < 0) {
av_buffer_unref(&q->frames_ctx.hw_frames_ctx);
return ret;
diff --git a/libavcodec/qsvenc.c b/libavcodec/qsvenc.c
index 57bc83a47f..64227cea6e 100644
--- a/libavcodec/qsvenc.c
+++ b/libavcodec/qsvenc.c
@@ -691,9 +691,9 @@ static int qsvenc_init_session(AVCodecContext *avctx, QSVEncContext *q)
if (!q->frames_ctx.hw_frames_ctx)
return AVERROR(ENOMEM);

- ret = ff_qsv_init_session_hwcontext(avctx, &q->internal_session,
- &q->frames_ctx, q->load_plugins,
- q->param.IOPattern == MFX_IOPATTERN_IN_OPAQUE_MEMORY);
+ ret = ff_qsv_init_session_frames(avctx, &q->internal_session,
+ &q->frames_ctx, q->load_plugins,
+ q->param.IOPattern == MFX_IOPATTERN_IN_OPAQUE_MEMORY);
if (ret < 0) {
av_buffer_unref(&q->frames_ctx.hw_frames_ctx);
return ret;
--
2.11.0
Mark Thompson
2017-06-12 22:40:33 UTC
This only supports one device globally, but more can be used by
passing them with input streams in hw_frames_ctx or by deriving new
devices inside a filter graph with hwmap.

(cherry picked from commit e669db76108de8d7a36c2274c99da82cc94d1dd1)
---
doc/ffmpeg.texi | 11 +++++++++++
ffmpeg.h | 1 +
ffmpeg_filter.c | 10 ++++++++--
ffmpeg_opt.c | 17 +++++++++++++++++
4 files changed, 37 insertions(+), 2 deletions(-)

diff --git a/doc/ffmpeg.texi b/doc/ffmpeg.texi
index db7f05a3e0..4616a4239e 100644
--- a/doc/ffmpeg.texi
+++ b/doc/ffmpeg.texi
@@ -765,6 +765,17 @@ deriving it from the existing device with the name @var{source}.
@item -init_hw_device list
List all hardware device types supported in this build of ffmpeg.

+@item -filter_hw_device @var{name}
+Pass the hardware device called @var{name} to all filters in any filter graph.
+This can be used to set the device to upload to with the @code{hwupload} filter,
+or the device to map to with the @code{hwmap} filter. Other filters may also
+make use of this parameter when they require a hardware device. Note that this
+is typically only required when the input is not already in hardware frames -
+when it is, filters will derive the device they require from the context of the
+frames they receive as input.
+
+This is a global setting, so all filters will receive the same device.
+
@item -hwaccel[:@var{stream_specifier}] @var{hwaccel} (@emph{input,per-stream})
Use hardware acceleration to decode the matching stream(s). The allowed values
of @var{hwaccel} are:
diff --git a/ffmpeg.h b/ffmpeg.h
index fbb9172d74..c3854bcb4a 100644
--- a/ffmpeg.h
+++ b/ffmpeg.h
@@ -628,6 +628,7 @@ extern AVBufferRef *hw_device_ctx;
#if CONFIG_QSV
extern char *qsv_device;
#endif
+extern HWDevice *filter_hw_device;


void term_init(void);
diff --git a/ffmpeg_filter.c b/ffmpeg_filter.c
index 817f48f473..aacc185059 100644
--- a/ffmpeg_filter.c
+++ b/ffmpeg_filter.c
@@ -1046,9 +1046,15 @@ int configure_filtergraph(FilterGraph *fg)
if ((ret = avfilter_graph_parse2(fg->graph, graph_desc, &inputs, &outputs)) < 0)
goto fail;

- if (hw_device_ctx) {
+ if (filter_hw_device || hw_device_ctx) {
+ AVBufferRef *device = filter_hw_device ? filter_hw_device->device_ref
+ : hw_device_ctx;
for (i = 0; i < fg->graph->nb_filters; i++) {
- fg->graph->filters[i]->hw_device_ctx = av_buffer_ref(hw_device_ctx);
+ fg->graph->filters[i]->hw_device_ctx = av_buffer_ref(device);
+ if (!fg->graph->filters[i]->hw_device_ctx) {
+ ret = AVERROR(ENOMEM);
+ goto fail;
+ }
}
}

diff --git a/ffmpeg_opt.c b/ffmpeg_opt.c
index 1facc82f44..90c31c0f58 100644
--- a/ffmpeg_opt.c
+++ b/ffmpeg_opt.c
@@ -98,6 +98,7 @@ const HWAccel hwaccels[] = {
};
int hwaccel_lax_profile_check = 0;
AVBufferRef *hw_device_ctx;
+HWDevice *filter_hw_device;

char *vstats_filename;
char *sdp_filename;
@@ -497,6 +498,20 @@ static int opt_init_hw_device(void *optctx, const char *opt, const char *arg)
}
}

+static int opt_filter_hw_device(void *optctx, const char *opt, const char *arg)
+{
+ if (filter_hw_device) {
+ av_log(NULL, AV_LOG_ERROR, "Only one filter device can be used.\n");
+ return AVERROR(EINVAL);
+ }
+ filter_hw_device = hw_device_get_by_name(arg);
+ if (!filter_hw_device) {
+ av_log(NULL, AV_LOG_ERROR, "Invalid filter device %s.\n", arg);
+ return AVERROR(EINVAL);
+ }
+ return 0;
+}
+
/**
* Parse a metadata specifier passed as 'arg' parameter.
* @param arg metadata string to parse
@@ -3710,6 +3725,8 @@ const OptionDef options[] = {

{ "init_hw_device", HAS_ARG | OPT_EXPERT, { .func_arg = opt_init_hw_device },
"initialise hardware device", "args" },
+ { "filter_hw_device", HAS_ARG | OPT_EXPERT, { .func_arg = opt_filter_hw_device },
+ "set hardware device used when filtering", "device" },

{ NULL, },
};
--
2.11.0
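For illustration, the new option pairs with -init_hw_device roughly like this; the device name `va`, the render node path, and the use of `scale_vaapi` are placeholders for whatever the local build and hardware provide:

```shell
./ffmpeg -init_hw_device vaapi=va:/dev/dri/renderD128 -filter_hw_device va \
    -i in.mp4 -an -vf 'format=nv12,hwupload,scale_vaapi=w=1280:h=720' \
    -c:v h264_vaapi out.mp4
```

Without -filter_hw_device, hwupload would have no device to upload to here, since the input arrives as software frames and there is no frames context to derive a device from.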
Mark Thompson
2017-06-12 22:40:34 UTC
(cherry picked from commit aa51bb3d2756ed912ee40645efccf5f4a9609696)
---
libavutil/hwcontext_qsv.c | 113 ++++++++++++++++++++++++++++++++++------------
1 file changed, 84 insertions(+), 29 deletions(-)

diff --git a/libavutil/hwcontext_qsv.c b/libavutil/hwcontext_qsv.c
index 5550ffe143..505a8e709d 100644
--- a/libavutil/hwcontext_qsv.c
+++ b/libavutil/hwcontext_qsv.c
@@ -792,21 +792,96 @@ static mfxIMPL choose_implementation(const char *device)
return impl;
}

-static int qsv_device_create(AVHWDeviceContext *ctx, const char *device,
- AVDictionary *opts, int flags)
+static int qsv_device_derive_from_child(AVHWDeviceContext *ctx,
+ mfxIMPL implementation,
+ AVHWDeviceContext *child_device_ctx,
+ int flags)
{
AVQSVDeviceContext *hwctx = ctx->hwctx;
- QSVDevicePriv *priv;
- enum AVHWDeviceType child_device_type;
- AVDictionaryEntry *e;
+ QSVDeviceContext *s = ctx->internal->priv;

mfxVersion ver = { { 3, 1 } };
- mfxIMPL impl;
mfxHDL handle;
mfxHandleType handle_type;
mfxStatus err;
int ret;

+ switch (child_device_ctx->type) {
+#if CONFIG_VAAPI
+ case AV_HWDEVICE_TYPE_VAAPI:
+ {
+ AVVAAPIDeviceContext *child_device_hwctx = child_device_ctx->hwctx;
+ handle_type = MFX_HANDLE_VA_DISPLAY;
+ handle = (mfxHDL)child_device_hwctx->display;
+ }
+ break;
+#endif
+#if CONFIG_DXVA2
+ case AV_HWDEVICE_TYPE_DXVA2:
+ {
+ AVDXVA2DeviceContext *child_device_hwctx = child_device_ctx->hwctx;
+ handle_type = MFX_HANDLE_D3D9_DEVICE_MANAGER;
+ handle = (mfxHDL)child_device_hwctx->devmgr;
+ }
+ break;
+#endif
+ default:
+ ret = AVERROR(ENOSYS);
+ goto fail;
+ }
+
+ err = MFXInit(implementation, &ver, &hwctx->session);
+ if (err != MFX_ERR_NONE) {
+ av_log(ctx, AV_LOG_ERROR, "Error initializing an MFX session: "
+ "%d.\n", err);
+ ret = AVERROR_UNKNOWN;
+ goto fail;
+ }
+
+ err = MFXVideoCORE_SetHandle(hwctx->session, handle_type, handle);
+ if (err != MFX_ERR_NONE) {
+ av_log(ctx, AV_LOG_ERROR, "Error setting child device handle: "
+ "%d\n", err);
+ ret = AVERROR_UNKNOWN;
+ goto fail;
+ }
+
+ ret = qsv_device_init(ctx);
+ if (ret < 0)
+ goto fail;
+ if (s->handle_type != handle_type) {
+ av_log(ctx, AV_LOG_ERROR, "Error in child device handle setup: "
+ "type mismatch (%d != %d).\n", s->handle_type, handle_type);
+ ret = AVERROR_UNKNOWN;
+ goto fail;
+ }
+
+ return 0;
+
+fail:
+ if (hwctx->session)
+ MFXClose(hwctx->session);
+ return ret;
+}
+
+static int qsv_device_derive(AVHWDeviceContext *ctx,
+ AVHWDeviceContext *child_device_ctx, int flags)
+{
+ return qsv_device_derive_from_child(ctx, MFX_IMPL_HARDWARE_ANY,
+ child_device_ctx, flags);
+}
+
+static int qsv_device_create(AVHWDeviceContext *ctx, const char *device,
+ AVDictionary *opts, int flags)
+{
+ QSVDevicePriv *priv;
+ enum AVHWDeviceType child_device_type;
+ AVHWDeviceContext *child_device;
+ AVDictionaryEntry *e;
+
+ mfxIMPL impl;
+ int ret;
+
priv = av_mallocz(sizeof(*priv));
if (!priv)
return AVERROR(ENOMEM);
@@ -830,32 +905,11 @@ static int qsv_device_create(AVHWDeviceContext *ctx, const char *device,
if (ret < 0)
return ret;

- {
- AVHWDeviceContext *child_device_ctx = (AVHWDeviceContext*)priv->child_device_ctx->data;
-#if CONFIG_VAAPI
- AVVAAPIDeviceContext *child_device_hwctx = child_device_ctx->hwctx;
- handle_type = MFX_HANDLE_VA_DISPLAY;
- handle = (mfxHDL)child_device_hwctx->display;
-#elif CONFIG_DXVA2
- AVDXVA2DeviceContext *child_device_hwctx = child_device_ctx->hwctx;
- handle_type = MFX_HANDLE_D3D9_DEVICE_MANAGER;
- handle = (mfxHDL)child_device_hwctx->devmgr;
-#endif
- }
+ child_device = (AVHWDeviceContext*)priv->child_device_ctx->data;

impl = choose_implementation(device);

- err = MFXInit(impl, &ver, &hwctx->session);
- if (err != MFX_ERR_NONE) {
- av_log(ctx, AV_LOG_ERROR, "Error initializing an MFX session\n");
- return AVERROR_UNKNOWN;
- }
-
- err = MFXVideoCORE_SetHandle(hwctx->session, handle_type, handle);
- if (err != MFX_ERR_NONE)
- return AVERROR_UNKNOWN;
-
- return 0;
+ return qsv_device_derive_from_child(ctx, impl, child_device, 0);
}

const HWContextType ff_hwcontext_type_qsv = {
@@ -868,6 +922,7 @@ const HWContextType ff_hwcontext_type_qsv = {
.frames_priv_size = sizeof(QSVFramesContext),

.device_create = qsv_device_create,
+ .device_derive = qsv_device_derive,
.device_init = qsv_device_init,
.frames_get_constraints = qsv_frames_get_constraints,
.frames_init = qsv_frames_init,
--
2.11.0
Mark Thompson
2017-06-12 22:40:35 UTC
Some frames contexts are not usable without additional format-specific
state in hwctx. This change adds two new functions, frames_derive_from
and frames_derive_to, to initialise that state appropriately when
deriving a frames context that will require it.

(cherry picked from commit 27978155bc661eec9f22bcf82c9cfc099cff4365)
---
libavutil/hwcontext.c | 9 ++++++++-
libavutil/hwcontext_internal.h | 5 +++++
2 files changed, 13 insertions(+), 1 deletion(-)

diff --git a/libavutil/hwcontext.c b/libavutil/hwcontext.c
index 7f9b1d33e3..ba7ffd1951 100644
--- a/libavutil/hwcontext.c
+++ b/libavutil/hwcontext.c
@@ -819,7 +819,14 @@ int av_hwframe_ctx_create_derived(AVBufferRef **derived_frame_ctx,
goto fail;
}

- ret = av_hwframe_ctx_init(dst_ref);
+ ret = AVERROR(ENOSYS);
+ if (src->internal->hw_type->frames_derive_from)
+ ret = src->internal->hw_type->frames_derive_from(dst, src, flags);
+ if (ret == AVERROR(ENOSYS) &&
+ dst->internal->hw_type->frames_derive_to)
+ ret = dst->internal->hw_type->frames_derive_to(dst, src, flags);
+ if (ret == AVERROR(ENOSYS))
+ ret = 0;
if (ret)
goto fail;

diff --git a/libavutil/hwcontext_internal.h b/libavutil/hwcontext_internal.h
index 6451c0e2c5..0a0c4e86ce 100644
--- a/libavutil/hwcontext_internal.h
+++ b/libavutil/hwcontext_internal.h
@@ -92,6 +92,11 @@ typedef struct HWContextType {
const AVFrame *src, int flags);
int (*map_from)(AVHWFramesContext *ctx, AVFrame *dst,
const AVFrame *src, int flags);
+
+ int (*frames_derive_to)(AVHWFramesContext *dst_ctx,
+ AVHWFramesContext *src_ctx, int flags);
+ int (*frames_derive_from)(AVHWFramesContext *dst_ctx,
+ AVHWFramesContext *src_ctx, int flags);
} HWContextType;

struct AVHWDeviceInternal {
--
2.11.0
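The dispatch added to av_hwframe_ctx_create_derived() above tries the source side's hook first, falls back to the destination side on ENOSYS, and treats "neither implemented" as success. A minimal stand-alone sketch of that control flow, with function pointers standing in for the real hw_type hooks (the real code uses AVERROR(ENOSYS) rather than bare -ENOSYS):

```c
#include <errno.h>
#include <stddef.h>

/* Stand-ins for the hw_type hooks: NULL means the format does not
 * implement that direction; -ENOSYS from a hook means "this pairing is
 * unsupported here, try the other side". */
typedef int (*derive_fn)(void);

static int derive_ok(void)    { return 0; }
static int derive_unsup(void) { return -ENOSYS; }
static int derive_fail(void)  { return -1; }

/* Mirrors the fallback order in av_hwframe_ctx_create_derived(): try the
 * source's frames_derive_from, then the destination's frames_derive_to,
 * and treat "neither implemented" as success (no extra state needed). */
static int run_derive(derive_fn from_src, derive_fn to_dst)
{
    int ret = -ENOSYS;
    if (from_src)
        ret = from_src();
    if (ret == -ENOSYS && to_dst)
        ret = to_dst();
    if (ret == -ENOSYS)
        ret = 0;
    return ret;
}
```

Note that a real (non-ENOSYS) error from the source hook is returned directly; the destination hook is only a fallback, not a retry.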
Mark Thompson
2017-06-12 22:40:36 UTC
Factorise out the existing surface initialisation code so that it can be
reused.

(cherry picked from commit eaa5e0710496db50fc164806e5f49eaaccc83bb5)
---
libavutil/hwcontext_qsv.c | 174 +++++++++++++++++++++++++++++++++++++---------
1 file changed, 142 insertions(+), 32 deletions(-)

diff --git a/libavutil/hwcontext_qsv.c b/libavutil/hwcontext_qsv.c
index 505a8e709d..8dbff88b0a 100644
--- a/libavutil/hwcontext_qsv.c
+++ b/libavutil/hwcontext_qsv.c
@@ -94,6 +94,16 @@ static const struct {
{ AV_PIX_FMT_PAL8, MFX_FOURCC_P8 },
};

+static uint32_t qsv_fourcc_from_pix_fmt(enum AVPixelFormat pix_fmt)
+{
+ int i;
+ for (i = 0; i < FF_ARRAY_ELEMS(supported_pixel_formats); i++) {
+ if (supported_pixel_formats[i].pix_fmt == pix_fmt)
+ return supported_pixel_formats[i].fourcc;
+ }
+ return 0;
+}
+
static int qsv_device_init(AVHWDeviceContext *ctx)
{
AVQSVDeviceContext *hwctx = ctx->hwctx;
@@ -272,18 +282,48 @@ fail:
return ret;
}

+static int qsv_init_surface(AVHWFramesContext *ctx, mfxFrameSurface1 *surf)
+{
+ const AVPixFmtDescriptor *desc;
+ uint32_t fourcc;
+
+ desc = av_pix_fmt_desc_get(ctx->sw_format);
+ if (!desc)
+ return AVERROR(EINVAL);
+
+ fourcc = qsv_fourcc_from_pix_fmt(ctx->sw_format);
+ if (!fourcc)
+ return AVERROR(EINVAL);
+
+ surf->Info.BitDepthLuma = desc->comp[0].depth;
+ surf->Info.BitDepthChroma = desc->comp[0].depth;
+ surf->Info.Shift = desc->comp[0].depth > 8;
+
+ if (desc->log2_chroma_w && desc->log2_chroma_h)
+ surf->Info.ChromaFormat = MFX_CHROMAFORMAT_YUV420;
+ else if (desc->log2_chroma_w)
+ surf->Info.ChromaFormat = MFX_CHROMAFORMAT_YUV422;
+ else
+ surf->Info.ChromaFormat = MFX_CHROMAFORMAT_YUV444;
+
+ surf->Info.FourCC = fourcc;
+ surf->Info.Width = ctx->width;
+ surf->Info.CropW = ctx->width;
+ surf->Info.Height = ctx->height;
+ surf->Info.CropH = ctx->height;
+ surf->Info.FrameRateExtN = 25;
+ surf->Info.FrameRateExtD = 1;
+
+ return 0;
+}
+
static int qsv_init_pool(AVHWFramesContext *ctx, uint32_t fourcc)
{
QSVFramesContext *s = ctx->internal->priv;
AVQSVFramesContext *frames_hwctx = ctx->hwctx;
- const AVPixFmtDescriptor *desc;

int i, ret = 0;

- desc = av_pix_fmt_desc_get(ctx->sw_format);
- if (!desc)
- return AVERROR_BUG;
-
if (ctx->initial_pool_size <= 0) {
av_log(ctx, AV_LOG_ERROR, "QSV requires a fixed frame pool size\n");
return AVERROR(EINVAL);
@@ -295,26 +335,9 @@ static int qsv_init_pool(AVHWFramesContext *ctx, uint32_t fourcc)
return AVERROR(ENOMEM);

for (i = 0; i < ctx->initial_pool_size; i++) {
- mfxFrameSurface1 *surf = &s->surfaces_internal[i];
-
- surf->Info.BitDepthLuma = desc->comp[0].depth;
- surf->Info.BitDepthChroma = desc->comp[0].depth;
- surf->Info.Shift = desc->comp[0].depth > 8;
-
- if (desc->log2_chroma_w && desc->log2_chroma_h)
- surf->Info.ChromaFormat = MFX_CHROMAFORMAT_YUV420;
- else if (desc->log2_chroma_w)
- surf->Info.ChromaFormat = MFX_CHROMAFORMAT_YUV422;
- else
- surf->Info.ChromaFormat = MFX_CHROMAFORMAT_YUV444;
-
- surf->Info.FourCC = fourcc;
- surf->Info.Width = ctx->width;
- surf->Info.CropW = ctx->width;
- surf->Info.Height = ctx->height;
- surf->Info.CropH = ctx->height;
- surf->Info.FrameRateExtN = 25;
- surf->Info.FrameRateExtD = 1;
+ ret = qsv_init_surface(ctx, &s->surfaces_internal[i]);
+ if (ret < 0)
+ return ret;
}

if (!(frames_hwctx->frame_type & MFX_MEMTYPE_OPAQUE_FRAME)) {
@@ -466,15 +489,10 @@ static int qsv_frames_init(AVHWFramesContext *ctx)

int opaque = !!(frames_hwctx->frame_type & MFX_MEMTYPE_OPAQUE_FRAME);

- uint32_t fourcc = 0;
+ uint32_t fourcc;
int i, ret;

- for (i = 0; i < FF_ARRAY_ELEMS(supported_pixel_formats); i++) {
- if (supported_pixel_formats[i].pix_fmt == ctx->sw_format) {
- fourcc = supported_pixel_formats[i].fourcc;
- break;
- }
- }
+ fourcc = qsv_fourcc_from_pix_fmt(ctx->sw_format);
if (!fourcc) {
av_log(ctx, AV_LOG_ERROR, "Unsupported pixel format\n");
return AVERROR(ENOSYS);
@@ -723,6 +741,96 @@ static int qsv_transfer_data_to(AVHWFramesContext *ctx, AVFrame *dst,
return 0;
}

+static int qsv_frames_derive_to(AVHWFramesContext *dst_ctx,
+ AVHWFramesContext *src_ctx, int flags)
+{
+ QSVFramesContext *s = dst_ctx->internal->priv;
+ AVQSVFramesContext *dst_hwctx = dst_ctx->hwctx;
+ int i;
+
+ switch (src_ctx->device_ctx->type) {
+#if CONFIG_VAAPI
+ case AV_HWDEVICE_TYPE_VAAPI:
+ {
+ AVVAAPIFramesContext *src_hwctx = src_ctx->hwctx;
+ s->surfaces_internal = av_mallocz_array(src_hwctx->nb_surfaces,
+ sizeof(*s->surfaces_internal));
+ if (!s->surfaces_internal)
+ return AVERROR(ENOMEM);
+ for (i = 0; i < src_hwctx->nb_surfaces; i++) {
+ qsv_init_surface(dst_ctx, &s->surfaces_internal[i]);
+ s->surfaces_internal[i].Data.MemId = src_hwctx->surface_ids + i;
+ }
+ dst_hwctx->nb_surfaces = src_hwctx->nb_surfaces;
+ dst_hwctx->frame_type = MFX_MEMTYPE_VIDEO_MEMORY_DECODER_TARGET;
+ }
+ break;
+#endif
+#if CONFIG_DXVA2
+ case AV_HWDEVICE_TYPE_DXVA2:
+ {
+ AVDXVA2FramesContext *src_hwctx = src_ctx->hwctx;
+ s->surfaces_internal = av_mallocz_array(src_hwctx->nb_surfaces,
+ sizeof(*s->surfaces_internal));
+ if (!s->surfaces_internal)
+ return AVERROR(ENOMEM);
+ for (i = 0; i < src_hwctx->nb_surfaces; i++) {
+ qsv_init_surface(dst_ctx, &s->surfaces_internal[i]);
+ s->surfaces_internal[i].Data.MemId = (mfxMemId)src_hwctx->surfaces[i];
+ }
+ dst_hwctx->nb_surfaces = src_hwctx->nb_surfaces;
+ if (src_hwctx->surface_type == DXVA2_VideoProcessorRenderTarget)
+ dst_hwctx->frame_type = MFX_MEMTYPE_VIDEO_MEMORY_PROCESSOR_TARGET;
+ else
+ dst_hwctx->frame_type = MFX_MEMTYPE_VIDEO_MEMORY_DECODER_TARGET;
+ }
+ break;
+#endif
+ default:
+ return AVERROR(ENOSYS);
+ }
+
+ dst_hwctx->surfaces = s->surfaces_internal;
+
+ return 0;
+}
+
+static int qsv_map_to(AVHWFramesContext *dst_ctx,
+ AVFrame *dst, const AVFrame *src, int flags)
+{
+ AVQSVFramesContext *hwctx = dst_ctx->hwctx;
+ int i, err;
+
+ for (i = 0; i < hwctx->nb_surfaces; i++) {
+#if CONFIG_VAAPI
+ if (*(VASurfaceID*)hwctx->surfaces[i].Data.MemId ==
+ (VASurfaceID)(uintptr_t)src->data[3])
+ break;
+#endif
+#if CONFIG_DXVA2
+ if ((IDirect3DSurface9*)hwctx->surfaces[i].Data.MemId ==
+ (IDirect3DSurface9*)(uintptr_t)src->data[3])
+ break;
+#endif
+ }
+ if (i >= hwctx->nb_surfaces) {
+ av_log(dst_ctx, AV_LOG_ERROR, "Trying to map from a surface which "
+ "is not in the mapped frames context.\n");
+ return AVERROR(EINVAL);
+ }
+
+ err = ff_hwframe_map_create(dst->hw_frames_ctx,
+ dst, src, NULL, NULL);
+ if (err)
+ return err;
+
+ dst->width = src->width;
+ dst->height = src->height;
+ dst->data[3] = (uint8_t*)&hwctx->surfaces[i];
+
+ return 0;
+}
+
static int qsv_frames_get_constraints(AVHWDeviceContext *ctx,
const void *hwconfig,
AVHWFramesConstraints *constraints)
@@ -931,7 +1039,9 @@ const HWContextType ff_hwcontext_type_qsv = {
.transfer_get_formats = qsv_transfer_get_formats,
.transfer_data_to = qsv_transfer_data_to,
.transfer_data_from = qsv_transfer_data_from,
+ .map_to = qsv_map_to,
.map_from = qsv_map_from,
+ .frames_derive_to = qsv_frames_derive_to,

.pix_fmts = (const enum AVPixelFormat[]){ AV_PIX_FMT_QSV, AV_PIX_FMT_NONE },
};
--
2.11.0
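The chroma-format rule factored into qsv_init_surface() above depends only on the pixel format descriptor's log2 chroma shifts. A stand-alone sketch, with placeholder enum values rather than the real MFX_CHROMAFORMAT_* constants:

```c
enum chroma_format {
    CHROMA_420 = 1,  /* placeholder values, not the MFX constants */
    CHROMA_422,
    CHROMA_444,
};

/* Same selection logic as qsv_init_surface(): subsampling in both
 * directions means 4:2:0, horizontal-only means 4:2:2, none means 4:4:4. */
static enum chroma_format chroma_from_shifts(int log2_chroma_w,
                                             int log2_chroma_h)
{
    if (log2_chroma_w && log2_chroma_h)
        return CHROMA_420;   /* e.g. NV12, P010 */
    else if (log2_chroma_w)
        return CHROMA_422;   /* e.g. YUYV422 */
    else
        return CHROMA_444;   /* no chroma subsampling */
}
```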
Mark Thompson
2017-06-12 22:40:37 UTC
(cherry picked from commit e1c5d56b18b82e3fb42382b1b1f972e8b371fc38)
---
libavutil/hwcontext_qsv.c | 88 +++++++++++++++++++++++++++++++++++++++++++++--
1 file changed, 86 insertions(+), 2 deletions(-)

diff --git a/libavutil/hwcontext_qsv.c b/libavutil/hwcontext_qsv.c
index 8dbff88b0a..75057f7d52 100644
--- a/libavutil/hwcontext_qsv.c
+++ b/libavutil/hwcontext_qsv.c
@@ -577,13 +577,62 @@ static int qsv_transfer_get_formats(AVHWFramesContext *ctx,
return 0;
}

+static int qsv_frames_derive_from(AVHWFramesContext *dst_ctx,
+ AVHWFramesContext *src_ctx, int flags)
+{
+ AVQSVFramesContext *src_hwctx = src_ctx->hwctx;
+ int i;
+
+ switch (dst_ctx->device_ctx->type) {
+#if CONFIG_VAAPI
+ case AV_HWDEVICE_TYPE_VAAPI:
+ {
+ AVVAAPIFramesContext *dst_hwctx = dst_ctx->hwctx;
+ dst_hwctx->surface_ids = av_mallocz_array(src_hwctx->nb_surfaces,
+ sizeof(*dst_hwctx->surface_ids));
+ if (!dst_hwctx->surface_ids)
+ return AVERROR(ENOMEM);
+ for (i = 0; i < src_hwctx->nb_surfaces; i++)
+ dst_hwctx->surface_ids[i] =
+ *(VASurfaceID*)src_hwctx->surfaces[i].Data.MemId;
+ dst_hwctx->nb_surfaces = src_hwctx->nb_surfaces;
+ }
+ break;
+#endif
+#if CONFIG_DXVA2
+ case AV_HWDEVICE_TYPE_DXVA2:
+ {
+ AVDXVA2FramesContext *dst_hwctx = dst_ctx->hwctx;
+ dst_hwctx->surfaces = av_mallocz_array(src_hwctx->nb_surfaces,
+ sizeof(*dst_hwctx->surfaces));
+ if (!dst_hwctx->surfaces)
+ return AVERROR(ENOMEM);
+ for (i = 0; i < src_hwctx->nb_surfaces; i++)
+ dst_hwctx->surfaces[i] =
+ (IDirect3DSurface9*)src_hwctx->surfaces[i].Data.MemId;
+ dst_hwctx->nb_surfaces = src_hwctx->nb_surfaces;
+ if (src_hwctx->frame_type == MFX_MEMTYPE_VIDEO_MEMORY_DECODER_TARGET)
+ dst_hwctx->surface_type = DXVA2_VideoDecoderRenderTarget;
+ else
+ dst_hwctx->surface_type = DXVA2_VideoProcessorRenderTarget;
+ }
+ break;
+#endif
+ default:
+ return AVERROR(ENOSYS);
+ }
+
+ return 0;
+}
+
static int qsv_map_from(AVHWFramesContext *ctx,
AVFrame *dst, const AVFrame *src, int flags)
{
QSVFramesContext *s = ctx->internal->priv;
mfxFrameSurface1 *surf = (mfxFrameSurface1*)src->data[3];
AVHWFramesContext *child_frames_ctx;
-
+ const AVPixFmtDescriptor *desc;
+ uint8_t *child_data;
AVFrame *dummy;
int ret = 0;

@@ -591,6 +640,40 @@ static int qsv_map_from(AVHWFramesContext *ctx,
return AVERROR(ENOSYS);
child_frames_ctx = (AVHWFramesContext*)s->child_frames_ref->data;

+ switch (child_frames_ctx->device_ctx->type) {
+#if CONFIG_VAAPI
+ case AV_HWDEVICE_TYPE_VAAPI:
+ child_data = (uint8_t*)(intptr_t)*(VASurfaceID*)surf->Data.MemId;
+ break;
+#endif
+#if CONFIG_DXVA2
+ case AV_HWDEVICE_TYPE_DXVA2:
+ child_data = surf->Data.MemId;
+ break;
+#endif
+ default:
+ return AVERROR(ENOSYS);
+ }
+
+ if (dst->format == child_frames_ctx->format) {
+ ret = ff_hwframe_map_create(s->child_frames_ref,
+ dst, src, NULL, NULL);
+ if (ret < 0)
+ return ret;
+
+ dst->width = src->width;
+ dst->height = src->height;
+ dst->data[3] = child_data;
+
+ return 0;
+ }
+
+ desc = av_pix_fmt_desc_get(dst->format);
+ if (desc && desc->flags & AV_PIX_FMT_FLAG_HWACCEL) {
+ // This only supports mapping to software.
+ return AVERROR(ENOSYS);
+ }
+
dummy = av_frame_alloc();
if (!dummy)
return AVERROR(ENOMEM);
@@ -603,7 +686,7 @@ static int qsv_map_from(AVHWFramesContext *ctx,
dummy->format = child_frames_ctx->format;
dummy->width = src->width;
dummy->height = src->height;
- dummy->data[3] = surf->Data.MemId;
+ dummy->data[3] = child_data;

ret = av_hwframe_map(dst, dummy, flags);

@@ -1042,6 +1125,7 @@ const HWContextType ff_hwcontext_type_qsv = {
.map_to = qsv_map_to,
.map_from = qsv_map_from,
.frames_derive_to = qsv_frames_derive_to,
+ .frames_derive_from = qsv_frames_derive_from,

.pix_fmts = (const enum AVPixelFormat[]){ AV_PIX_FMT_QSV, AV_PIX_FMT_NONE },
};
--
2.11.0
Mark Thompson
2017-06-12 22:40:38 UTC
Use the flags argument of av_hwframe_ctx_create_derived() to pass the
mapping flags which will be used on allocation. Also, set the format
and hardware context on the allocated frame automatically - the user
should not be required to do this themselves.

(cherry picked from commit c5714b51aad41fef56dddac1d542e7fc6b984627)
---
doc/APIchanges | 4 ++++
libavutil/hwcontext.c | 14 +++++++++++++-
libavutil/hwcontext.h | 4 +++-
libavutil/hwcontext_internal.h | 5 +++++
libavutil/version.h | 2 +-
5 files changed, 26 insertions(+), 3 deletions(-)

diff --git a/doc/APIchanges b/doc/APIchanges
index 12c4877b9b..19776f830e 100644
--- a/doc/APIchanges
+++ b/doc/APIchanges
@@ -15,6 +15,10 @@ libavutil: 2015-08-28

API changes, most recent first:

+2017-06-xx - xxxxxxx - lavu 55.66.100 - hwcontext.h
+ av_hwframe_ctx_create_derived() now takes some AV_HWFRAME_MAP_* combination
+ as its flags argument (which was previously unused).
+
2017-06-xx - xxxxxxx - lavc 57.99.100 - avcodec.h
Add AV_HWACCEL_FLAG_ALLOW_PROFILE_MISMATCH.

diff --git a/libavutil/hwcontext.c b/libavutil/hwcontext.c
index ba7ffd1951..4726986902 100644
--- a/libavutil/hwcontext.c
+++ b/libavutil/hwcontext.c
@@ -458,6 +458,11 @@ int av_hwframe_get_buffer(AVBufferRef *hwframe_ref, AVFrame *frame, int flags)
// and map the frame immediately.
AVFrame *src_frame;

+ frame->format = ctx->format;
+ frame->hw_frames_ctx = av_buffer_ref(hwframe_ref);
+ if (!frame->hw_frames_ctx)
+ return AVERROR(ENOMEM);
+
src_frame = av_frame_alloc();
if (!src_frame)
return AVERROR(ENOMEM);
@@ -467,7 +472,8 @@ int av_hwframe_get_buffer(AVBufferRef *hwframe_ref, AVFrame *frame, int flags)
if (ret < 0)
return ret;

- ret = av_hwframe_map(frame, src_frame, 0);
+ ret = av_hwframe_map(frame, src_frame,
+ ctx->internal->source_allocation_map_flags);
if (ret) {
av_log(ctx, AV_LOG_ERROR, "Failed to map frame into derived "
"frame context: %d.\n", ret);
@@ -819,6 +825,12 @@ int av_hwframe_ctx_create_derived(AVBufferRef **derived_frame_ctx,
goto fail;
}

+ dst->internal->source_allocation_map_flags =
+ flags & (AV_HWFRAME_MAP_READ |
+ AV_HWFRAME_MAP_WRITE |
+ AV_HWFRAME_MAP_OVERWRITE |
+ AV_HWFRAME_MAP_DIRECT);
+
ret = AVERROR(ENOSYS);
if (src->internal->hw_type->frames_derive_from)
ret = src->internal->hw_type->frames_derive_from(dst, src, flags);
diff --git a/libavutil/hwcontext.h b/libavutil/hwcontext.h
index 37e8831f6b..edf12cc631 100644
--- a/libavutil/hwcontext.h
+++ b/libavutil/hwcontext.h
@@ -566,7 +566,9 @@ int av_hwframe_map(AVFrame *dst, const AVFrame *src, int flags);
* AVHWFramesContext on.
* @param source_frame_ctx A reference to an existing AVHWFramesContext
* which will be mapped to the derived context.
- * @param flags Currently unused; should be set to zero.
+ * @param flags Some combination of AV_HWFRAME_MAP_* flags, defining the
+ * mapping parameters to apply to frames which are allocated
+ * in the derived device.
* @return Zero on success, negative AVERROR code on failure.
*/
int av_hwframe_ctx_create_derived(AVBufferRef **derived_frame_ctx,
diff --git a/libavutil/hwcontext_internal.h b/libavutil/hwcontext_internal.h
index 0a0c4e86ce..68f78c0a1f 100644
--- a/libavutil/hwcontext_internal.h
+++ b/libavutil/hwcontext_internal.h
@@ -121,6 +121,11 @@ struct AVHWFramesInternal {
* context it was derived from.
*/
AVBufferRef *source_frames;
+ /**
+ * Flags to apply to the mapping from the source to the derived
+ * frame context when trying to allocate in the derived context.
+ */
+ int source_allocation_map_flags;
};

typedef struct HWMapDescriptor {
diff --git a/libavutil/version.h b/libavutil/version.h
index 322b683cf4..308d16f95b 100644
--- a/libavutil/version.h
+++ b/libavutil/version.h
@@ -80,7 +80,7 @@


#define LIBAVUTIL_VERSION_MAJOR 55
-#define LIBAVUTIL_VERSION_MINOR 65
+#define LIBAVUTIL_VERSION_MINOR 66
#define LIBAVUTIL_VERSION_MICRO 100

#define LIBAVUTIL_VERSION_INT AV_VERSION_INT(LIBAVUTIL_VERSION_MAJOR, \
--
2.11.0
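Only the mapping-related bits of the flags argument are retained as the allocation-time mapping flags; any other bits are dropped. A sketch with placeholder bit values (the real AV_HWFRAME_MAP_* constants live in libavutil/hwcontext.h):

```c
/* Placeholder bit values standing in for AV_HWFRAME_MAP_READ etc. */
#define MAP_READ      (1 << 0)
#define MAP_WRITE     (1 << 1)
#define MAP_OVERWRITE (1 << 2)
#define MAP_DIRECT    (1 << 3)

/* av_hwframe_ctx_create_derived() keeps only these bits of its flags
 * argument as source_allocation_map_flags, used later when a frame
 * allocated in the derived context is mapped back to its source. */
static int allocation_map_flags(int flags)
{
    return flags & (MAP_READ | MAP_WRITE | MAP_OVERWRITE | MAP_DIRECT);
}
```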
Mark Thompson
2017-06-12 22:40:39 UTC
Also refactor a little and improve error messages to make failure
cases easier to understand.

(cherry picked from commit 38cb05f1c89cae1862b360d4e7e3f0cd2b5bbb67)
---
libavfilter/vf_hwmap.c | 67 ++++++++++++++++++++++++++++++++++++--------------
1 file changed, 49 insertions(+), 18 deletions(-)

diff --git a/libavfilter/vf_hwmap.c b/libavfilter/vf_hwmap.c
index 654477c6f2..c0fb42a1bc 100644
--- a/libavfilter/vf_hwmap.c
+++ b/libavfilter/vf_hwmap.c
@@ -30,10 +30,10 @@
typedef struct HWMapContext {
const AVClass *class;

- AVBufferRef *hwdevice_ref;
AVBufferRef *hwframes_ref;

int mode;
+ char *derive_device_type;
int map_backwards;
} HWMapContext;

@@ -56,6 +56,7 @@ static int hwmap_config_output(AVFilterLink *outlink)
HWMapContext *ctx = avctx->priv;
AVFilterLink *inlink = avctx->inputs[0];
AVHWFramesContext *hwfc;
+ AVBufferRef *device;
const AVPixFmtDescriptor *desc;
int err;

@@ -63,30 +64,58 @@ static int hwmap_config_output(AVFilterLink *outlink)
av_get_pix_fmt_name(inlink->format),
av_get_pix_fmt_name(outlink->format));

+ av_buffer_unref(&ctx->hwframes_ref);
+
+ device = avctx->hw_device_ctx;
+
if (inlink->hw_frames_ctx) {
hwfc = (AVHWFramesContext*)inlink->hw_frames_ctx->data;

+ if (ctx->derive_device_type) {
+ enum AVHWDeviceType type;
+
+ type = av_hwdevice_find_type_by_name(ctx->derive_device_type);
+ if (type == AV_HWDEVICE_TYPE_NONE) {
+ av_log(avctx, AV_LOG_ERROR, "Invalid device type.\n");
+ goto fail;
+ }
+
+ err = av_hwdevice_ctx_create_derived(&device, type,
+ hwfc->device_ref, 0);
+ if (err < 0) {
+ av_log(avctx, AV_LOG_ERROR, "Failed to create derived "
+ "device context: %d.\n", err);
+ goto fail;
+ }
+ }
+
desc = av_pix_fmt_desc_get(outlink->format);
- if (!desc)
- return AVERROR(EINVAL);
+ if (!desc) {
+ err = AVERROR(EINVAL);
+ goto fail;
+ }

if (inlink->format == hwfc->format &&
(desc->flags & AV_PIX_FMT_FLAG_HWACCEL)) {
// Map between two hardware formats (including the case of
// undoing an existing mapping).

- ctx->hwdevice_ref = av_buffer_ref(avctx->hw_device_ctx);
- if (!ctx->hwdevice_ref) {
- err = AVERROR(ENOMEM);
+ if (!device) {
+ av_log(avctx, AV_LOG_ERROR, "A device reference is "
+ "required to map to a hardware format.\n");
+ err = AVERROR(EINVAL);
goto fail;
}

err = av_hwframe_ctx_create_derived(&ctx->hwframes_ref,
outlink->format,
- ctx->hwdevice_ref,
+ device,
inlink->hw_frames_ctx, 0);
- if (err < 0)
+ if (err < 0) {
+ av_log(avctx, AV_LOG_ERROR, "Failed to create derived "
+ "frames context: %d.\n", err);
goto fail;
+ }

} else if ((outlink->format == hwfc->format &&
inlink->format == hwfc->sw_format) ||
@@ -94,8 +123,6 @@ static int hwmap_config_output(AVFilterLink *outlink)
// Map from a hardware format to a software format, or
// undo an existing such mapping.

- ctx->hwdevice_ref = NULL;
-
ctx->hwframes_ref = av_buffer_ref(inlink->hw_frames_ctx);
if (!ctx->hwframes_ref) {
err = AVERROR(ENOMEM);
@@ -119,15 +146,17 @@ static int hwmap_config_output(AVFilterLink *outlink)
// returns frames mapped from that to the previous link in
// order to fill them without an additional copy.

- ctx->map_backwards = 1;
-
- ctx->hwdevice_ref = av_buffer_ref(avctx->hw_device_ctx);
- if (!ctx->hwdevice_ref) {
- err = AVERROR(ENOMEM);
+ if (!device) {
+ av_log(avctx, AV_LOG_ERROR, "A device reference is "
+ "required to create new frames with backwards "
+ "mapping.\n");
+ err = AVERROR(EINVAL);
goto fail;
}

- ctx->hwframes_ref = av_hwframe_ctx_alloc(ctx->hwdevice_ref);
+ ctx->map_backwards = 1;
+
+ ctx->hwframes_ref = av_hwframe_ctx_alloc(device);
if (!ctx->hwframes_ref) {
err = AVERROR(ENOMEM);
goto fail;
@@ -165,7 +194,6 @@ static int hwmap_config_output(AVFilterLink *outlink)

fail:
av_buffer_unref(&ctx->hwframes_ref);
- av_buffer_unref(&ctx->hwdevice_ref);
return err;
}

@@ -273,7 +301,6 @@ static av_cold void hwmap_uninit(AVFilterContext *avctx)
HWMapContext *ctx = avctx->priv;

av_buffer_unref(&ctx->hwframes_ref);
- av_buffer_unref(&ctx->hwdevice_ref);
}

#define OFFSET(x) offsetof(HWMapContext, x)
@@ -297,6 +324,10 @@ static const AVOption hwmap_options[] = {
0, AV_OPT_TYPE_CONST, { .i64 = AV_HWFRAME_MAP_DIRECT },
INT_MIN, INT_MAX, FLAGS, "mode" },

+ { "derive_device", "Derive a new device of this type",
+ OFFSET(derive_device_type), AV_OPT_TYPE_STRING,
+ { .str = NULL }, 0, 0, FLAGS },
+
{ NULL }
};
--
2.11.0
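With the derive_device option above, the QSV-over-VAAPI case from the cover letter needs no separate QSV device setup; an illustrative invocation (input path and bitrate options are placeholders):

```shell
./ffmpeg -hwaccel vaapi -hwaccel_output_format vaapi -i in.mp4 -an \
    -vf 'hwmap=derive_device=qsv,format=qsv' \
    -c:v h264_qsv -b:v 5M out.mp4
```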
Mark Thompson
2017-06-12 22:40:40 UTC
This is something of a hack. It allocates a new hwframe context for
the target format, then maps it back to the source link and overwrites
the input link hw_frames_ctx so that the previous filter will receive
the frames we want from ff_get_video_buffer(). It may fail if
the previous filter imposes any additional constraints on the frames
it wants to use as output.

(cherry picked from commit 81a4cb8e58636d4efd200c2b4fec786a7e948d8b)
---
libavfilter/vf_hwmap.c | 68 ++++++++++++++++++++++++++++++++++++++++++++------
1 file changed, 61 insertions(+), 7 deletions(-)

diff --git a/libavfilter/vf_hwmap.c b/libavfilter/vf_hwmap.c
index c0fb42a1bc..c40ed4baf7 100644
--- a/libavfilter/vf_hwmap.c
+++ b/libavfilter/vf_hwmap.c
@@ -34,7 +34,7 @@ typedef struct HWMapContext {

int mode;
char *derive_device_type;
- int map_backwards;
+ int reverse;
} HWMapContext;

static int hwmap_query_formats(AVFilterContext *avctx)
@@ -96,7 +96,8 @@ static int hwmap_config_output(AVFilterLink *outlink)
}

if (inlink->format == hwfc->format &&
- (desc->flags & AV_PIX_FMT_FLAG_HWACCEL)) {
+ (desc->flags & AV_PIX_FMT_FLAG_HWACCEL) &&
+ !ctx->reverse) {
// Map between two hardware formats (including the case of
// undoing an existing mapping).

@@ -117,6 +118,56 @@ static int hwmap_config_output(AVFilterLink *outlink)
goto fail;
}

+ } else if (inlink->format == hwfc->format &&
+ (desc->flags & AV_PIX_FMT_FLAG_HWACCEL) &&
+ ctx->reverse) {
+ // Map between two hardware formats, but do it in reverse.
+ // Make a new hwframe context for the target type, and then
+ // overwrite the input hwframe context with a derived context
+ // mapped from that back to the source type.
+ AVBufferRef *source;
+ AVHWFramesContext *frames;
+
+ ctx->hwframes_ref = av_hwframe_ctx_alloc(device);
+ if (!ctx->hwframes_ref) {
+ err = AVERROR(ENOMEM);
+ goto fail;
+ }
+ frames = (AVHWFramesContext*)ctx->hwframes_ref->data;
+
+ frames->format = outlink->format;
+ frames->sw_format = hwfc->sw_format;
+ frames->width = hwfc->width;
+ frames->height = hwfc->height;
+ frames->initial_pool_size = 64;
+
+ err = av_hwframe_ctx_init(ctx->hwframes_ref);
+ if (err < 0) {
+ av_log(avctx, AV_LOG_ERROR, "Failed to initialise "
+ "target frames context: %d.\n", err);
+ goto fail;
+ }
+
+ err = av_hwframe_ctx_create_derived(&source,
+ inlink->format,
+ hwfc->device_ref,
+ ctx->hwframes_ref,
+ ctx->mode);
+ if (err < 0) {
+ av_log(avctx, AV_LOG_ERROR, "Failed to create "
+ "derived source frames context: %d.\n", err);
+ goto fail;
+ }
+
+ // Here is the naughty bit. This overwriting changes what
+ // ff_get_video_buffer() in the previous filter returns -
+ // it will now give a frame allocated here mapped back to
+ // the format it expects. If there were any additional
+ // constraints on the output frames there then this may
+ // break nastily.
+ av_buffer_unref(&inlink->hw_frames_ctx);
+ inlink->hw_frames_ctx = source;
+
} else if ((outlink->format == hwfc->format &&
inlink->format == hwfc->sw_format) ||
inlink->format == hwfc->format) {
@@ -148,13 +199,13 @@ static int hwmap_config_output(AVFilterLink *outlink)

if (!device) {
av_log(avctx, AV_LOG_ERROR, "A device reference is "
- "required to create new frames with backwards "
+ "required to create new frames with reverse "
"mapping.\n");
err = AVERROR(EINVAL);
goto fail;
}

- ctx->map_backwards = 1;
+ ctx->reverse = 1;

ctx->hwframes_ref = av_hwframe_ctx_alloc(device);
if (!ctx->hwframes_ref) {
@@ -171,7 +222,7 @@ static int hwmap_config_output(AVFilterLink *outlink)
err = av_hwframe_ctx_init(ctx->hwframes_ref);
if (err < 0) {
av_log(avctx, AV_LOG_ERROR, "Failed to create frame "
- "context for backward mapping: %d.\n", err);
+ "context for reverse mapping: %d.\n", err);
goto fail;
}

@@ -203,7 +254,7 @@ static AVFrame *hwmap_get_buffer(AVFilterLink *inlink, int w, int h)
AVFilterLink *outlink = avctx->outputs[0];
HWMapContext *ctx = avctx->priv;

- if (ctx->map_backwards) {
+ if (ctx->reverse && !inlink->hw_frames_ctx) {
AVFrame *src, *dst;
int err;

@@ -261,7 +312,7 @@ static int hwmap_filter_frame(AVFilterLink *link, AVFrame *input)
goto fail;
}

- if (ctx->map_backwards && !input->hw_frames_ctx) {
+ if (ctx->reverse && !input->hw_frames_ctx) {
// If we mapped backwards from hardware to software, we need
// to attach the hardware frame context to the input frame to
// make the mapping visible to av_hwframe_map().
@@ -327,6 +378,9 @@ static const AVOption hwmap_options[] = {
{ "derive_device", "Derive a new device of this type",
OFFSET(derive_device_type), AV_OPT_TYPE_STRING,
{ .str = NULL }, 0, 0, FLAGS },
+ { "reverse", "Map in reverse (create and allocate in the sink)",
+ OFFSET(reverse), AV_OPT_TYPE_INT,
+ { .i64 = 0 }, 0, 1, FLAGS },

{ NULL }
};
--
2.11.0
Mark Thompson
2017-06-12 22:40:41 UTC
Permalink
(cherry picked from commit 66aa9b94dae217a0fc5acfb704490707629d95ed)
---
doc/filters.texi | 98 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++
1 file changed, 98 insertions(+)

diff --git a/doc/filters.texi b/doc/filters.texi
index 023096f4e0..db0bdfe254 100644
--- a/doc/filters.texi
+++ b/doc/filters.texi
@@ -9040,6 +9040,104 @@ A floating point number which specifies chroma temporal strength. It defaults to
@var{luma_tmp}*@var{chroma_spatial}/@var{luma_spatial}.
@end table

+@section hwdownload
+
+Download hardware frames to system memory.
+
+The input must be in hardware frames, and the output a non-hardware format.
+Not all formats will be supported on the output - it may be necessary to insert
+an additional @option{format} filter immediately following in the graph to get
+the output in a supported format.
+
+@section hwmap
+
+Map hardware frames to system memory or to another device.
+
+This filter has several different modes of operation; which one is used depends
+on the input and output formats:
+@itemize
+@item
+Hardware frame input, normal frame output
+
+Map the input frames to system memory and pass them to the output. If the
+original hardware frame is later required (for example, after overlaying
+something else on part of it), the @option{hwmap} filter can be used again
+in the next mode to retrieve it.
+@item
+Normal frame input, hardware frame output
+
+If the input is actually a software-mapped hardware frame, then unmap it -
+that is, return the original hardware frame.
+
+Otherwise, a device must be provided. Create new hardware surfaces on that
+device for the output, then map them back to the software format at the input
+and give those frames to the preceding filter. This will then act like the
+@option{hwupload} filter, but may be able to avoid an additional copy when
+the input is already in a compatible format.
+@item
+Hardware frame input and output
+
+A device must be supplied for the output, either directly or with the
+@option{derive_device} option. The input and output devices must be of
+different types and compatible - the exact meaning of this is
+system-dependent, but typically it means that they must refer to the same
+underlying hardware context (for example, refer to the same graphics card).
+
+If the input frames were originally created on the output device, then unmap
+to retrieve the original frames.
+
+Otherwise, map the frames to the output device - create new hardware frames
+on the output corresponding to the frames on the input.
+@end itemize
+
+The following additional parameters are accepted:
+
+@table @option
+@item mode
+Set the frame mapping mode. Some combination of:
+@table @var
+@item read
+The mapped frame should be readable.
+@item write
+The mapped frame should be writeable.
+@item overwrite
+The mapping will always overwrite the entire frame.
+
+This may improve performance in some cases, as the original contents of the
+frame need not be loaded.
+@item direct
+The mapping must not involve any copying.
+
+Indirect mappings to copies of frames are created in some cases where either
+direct mapping is not possible or it would have unexpected properties.
+Setting this flag ensures that the mapping is direct and will fail if that is
+not possible.
+@end table
+Defaults to @var{read+write} if not specified.
+
+@item derive_device @var{type}
+Rather than using the device supplied at initialisation, instead derive a new
+device of type @var{type} from the device the input frames exist on.
+
+@item reverse
+In a hardware to hardware mapping, map in reverse - create frames in the sink
+and map them back to the source. This may be necessary in some cases where
+a mapping in one direction is required but only the opposite direction is
+supported by the devices being used.
+
+This option is dangerous - it may break the preceding filter in undefined
+ways if there are any additional constraints on that filter's output.
+Do not use it without fully understanding the implications of its use.
+@end table
+
+@section hwupload
+
+Upload system memory frames to hardware surfaces.
+
+The device to upload to must be supplied when the filter is initialised. If
+using ffmpeg, select the appropriate device with the @option{-filter_hw_device}
+option.
+
@anchor{hwupload_cuda}
@section hwupload_cuda
--
2.11.0
wm4
2017-06-13 12:07:05 UTC
Permalink
On Mon, 12 Jun 2017 23:40:17 +0100
Post by Mark Thompson
This merges a set of stuff from libav to do with hardware codecs/processing.
All patches LGTM. I don't think it makes sense to delay pushing those
either.
Mark Thompson
2017-06-14 22:06:19 UTC
Permalink
Post by wm4
On Mon, 12 Jun 2017 23:40:17 +0100
Post by Mark Thompson
This merges a set of stuff from libav to do with hardware codecs/processing.
All patches LGTM. I don't think it makes sense to delay pushing those
either.
Set applied (with some fixups from Michael).

Thanks,

- Mark
James Almer
2017-06-13 20:36:06 UTC
Permalink
Post by Mark Thompson
This merges a set of stuff from libav to do with hardware codecs/processing.
* Generic hardware device setup. This finishes the uniform structure for hardware device setup which has been in progress for a while, finally deleting several of the ffmpeg_X.c hardware specific files. Initially this is working for VAAPI and VDPAU, with partial support for QSV. A following series by wm4 (start from <https://git.libav.org/?p=libav.git;a=commit;h=fff90422d181744cd75dbf011687ee7095f02875>) will add DXVA2/D3D11 support as well.
* Mapping between hardware APIs. Initially this supports VAAPI/DXVA2 and QSV; OpenCL integration with those is to follow. The main use of this at the moment is to allow use of the lavc decoder via a platform hwaccel and hence avoid the nastiness of the specific *_qsv decoders (for example: "./ffmpeg_g -y -hwaccel vaapi -hwaccel_output_format vaapi -i in.mp4 -an -vf 'hwmap=derive_device=qsv,format=qsv' -c:v h264_qsv -b 5M -maxrate 5M -look_ahead 0 out.mp4", and similarly with DXVA2).
* Support for the VAAPI driver which wraps VDPAU.
* Field rate output for the VAAPI deinterlacer.
* hw_device_ctx support for QSV codecs using software frames (fixes some current silly failure cases when using multiple independent instances together).
* Profile mismatch option for hwaccels (primarily to allow hardware decoding of H.264 constrained baseline profile streams which erroneously fail to set constraint_set1_flag).
* Documentation for the hardware frame movement filters (hwupload, hwdownload, hwmap).
VP9 VAAPI encode support would be here, but is not included because it depends on the vp9_raw_reorder BSF, which is only written with the bitstream API rather than with get_bits. I know that was skipped earlier, but has there been any more discussion on merging that? Would it be easiest to just convert the BSF?
It will be easier to just convert the BSF to use get_bits for now.
As mentioned in the thread where the new bitstream reader was skipped,
until it's confirmed there are no considerable regressions in speed on
some modules, we're not going to merge it.
In this case it should be a simple search & replace.

BitstreamContext -> GetBitContext
bitstream_read -> get_bits (Since all of them read <= 25 bits)
bitstream_read_bit -> get_bits1
bitstream_init8 -> init_get_bits8
etc.
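If it helps, the renames above could be scripted in one pass along these
lines (a hypothetical sketch, not a tested conversion; the target file name
is assumed, and note that bitstream_read_bit must be substituted before
bitstream_read, since the latter is a prefix of the former):

```shell
# Hypothetical one-shot rename following the substitution table above.
# Order matters: longer identifiers are replaced before their prefixes.
rename_bitstream_api() {
    sed \
        -e 's/BitstreamContext/GetBitContext/g' \
        -e 's/bitstream_read_bit/get_bits1/g' \
        -e 's/bitstream_read/get_bits/g' \
        -e 's/bitstream_init8/init_get_bits8/g' \
        "$1"
}
```

Run as e.g. `rename_bitstream_api libavcodec/vp9_raw_reorder_bsf.c > converted.c`
and review the result by hand; any remaining calls not in the table still
need manual attention.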
Post by Mark Thompson
Thanks,
- Mark
_______________________________________________
ffmpeg-devel mailing list
http://ffmpeg.org/mailman/listinfo/ffmpeg-devel
Reino Wijnsma
2017-06-13 22:28:59 UTC
Permalink
Are you sure you built ffmpeg using OpenJPEG v2.2? Because your patch is
missing the openjpeg_2_2_openjpeg_h entry in HEADERS_LIST in configure, so
you shouldn't be able to successfully build with OpenJPEG v2.2.
Whoops! In my script I'm patching 'configure' with sed
<https://github.com/Reino17/ffmpeg-windows-build-helpers/commit/5a54632b80e479a5ec03aa7ecbd92041420db96c>.
While turning these changes into a patch I seem to have forgotten the
HEADERS_LIST entry indeed. New patch attached.
Michael Bradshaw
2017-06-20 17:44:47 UTC
Permalink
From 70b53c1ea5a56a03cfef24d5b551b983ba2473b2 Mon Sep 17 00:00:00 2001
Date: Wed, 14 Jun 2017 00:19:12 +0200
Subject: [PATCH] Add support for LibOpenJPEG v2.2/git
---
configure | 4 +++-
libavcodec/libopenjpegdec.c | 10 +++++++---
libavcodec/libopenjpegenc.c | 12 ++++++++----
3 files changed, 18 insertions(+), 8 deletions(-)
diff --git a/configure b/configure
index e3941f9..0190966 100755
--- a/configure
+++ b/configure
@@ -1868,6 +1868,7 @@ HEADERS_LIST="
machine_ioctl_meteor_h
malloc_h
opencv2_core_core_c_h
+ openjpeg_2_2_openjpeg_h
openjpeg_2_1_openjpeg_h
openjpeg_2_0_openjpeg_h
openjpeg_1_5_openjpeg_h
@@ -5831,7 +5832,8 @@ enabled libopencv && { check_header opencv2/core/core_c.h &&
                               require opencv opencv2/core/core_c.h cvCreateImageHeader -lopencv_core -lopencv_imgproc; } ||
                               require_pkg_config opencv opencv/cxcore.h cvCreateImageHeader; }
 enabled libopenh264       && require_pkg_config openh264 wels/codec_api.h WelsGetCodecVersion
-enabled libopenjpeg       && { { check_lib libopenjpeg openjpeg-2.1/openjpeg.h opj_version -lopenjp2 -DOPJ_STATIC && add_cppflags -DOPJ_STATIC; } ||
+enabled libopenjpeg       && { { check_lib libopenjpeg openjpeg-2.2/openjpeg.h opj_version -lopenjp2 -DOPJ_STATIC && add_cppflags -DOPJ_STATIC; } ||
+                               { check_lib libopenjpeg openjpeg-2.1/openjpeg.h opj_version -lopenjp2 -DOPJ_STATIC && add_cppflags -DOPJ_STATIC; } ||
                                check_lib libopenjpeg openjpeg-2.1/openjpeg.h opj_version -lopenjp2 ||
                                { check_lib libopenjpeg openjpeg-2.0/openjpeg.h opj_version -lopenjp2 -DOPJ_STATIC && add_cppflags -DOPJ_STATIC; } ||
                                { check_lib libopenjpeg openjpeg-1.5/openjpeg.h opj_version -lopenjpeg -DOPJ_STATIC && add_cppflags -DOPJ_STATIC; } ||
diff --git a/libavcodec/libopenjpegdec.c b/libavcodec/libopenjpegdec.c
index ce4e2b0..5ed9ce1 100644
--- a/libavcodec/libopenjpegdec.c
+++ b/libavcodec/libopenjpegdec.c
@@ -34,7 +34,9 @@
#include "internal.h"
#include "thread.h"
-#if HAVE_OPENJPEG_2_1_OPENJPEG_H
+#if HAVE_OPENJPEG_2_2_OPENJPEG_H
+# include <openjpeg-2.2/openjpeg.h>
+#elif HAVE_OPENJPEG_2_1_OPENJPEG_H
# include <openjpeg-2.1/openjpeg.h>
#elif HAVE_OPENJPEG_2_0_OPENJPEG_H
# include <openjpeg-2.0/openjpeg.h>
@@ -44,7 +46,7 @@
# include <openjpeg.h>
#endif
-#if HAVE_OPENJPEG_2_1_OPENJPEG_H || HAVE_OPENJPEG_2_0_OPENJPEG_H
+#if HAVE_OPENJPEG_2_2_OPENJPEG_H || HAVE_OPENJPEG_2_1_OPENJPEG_H || HAVE_OPENJPEG_2_0_OPENJPEG_H
# define OPENJPEG_MAJOR_VERSION 2
# define OPJ(x) OPJ_##x
#else
@@ -429,7 +431,9 @@ static int libopenjpeg_decode_frame(AVCodecContext *avctx,
opj_stream_set_read_function(stream, stream_read);
opj_stream_set_skip_function(stream, stream_skip);
opj_stream_set_seek_function(stream, stream_seek);
-#if HAVE_OPENJPEG_2_1_OPENJPEG_H
+#if HAVE_OPENJPEG_2_2_OPENJPEG_H
+ opj_stream_set_user_data(stream, &reader, NULL);
+#elif HAVE_OPENJPEG_2_1_OPENJPEG_H
opj_stream_set_user_data(stream, &reader, NULL);
Please merge these two conditions, since both #if conditions are executing
the same code. That is:

#if HAVE_OPENJPEG_2_2_OPENJPEG_H || HAVE_OPENJPEG_2_1_OPENJPEG_H
opj_stream_set_user_data(stream, &reader, NULL);
#elif HAVE_OPENJPEG_2_0_OPENJPEG_H
...
#elif HAVE_OPENJPEG_2_0_OPENJPEG_H
opj_stream_set_user_data(stream, &reader);
diff --git a/libavcodec/libopenjpegenc.c b/libavcodec/libopenjpegenc.c
index 4a12729..d3b9161 100644
--- a/libavcodec/libopenjpegenc.c
+++ b/libavcodec/libopenjpegenc.c
@@ -32,7 +32,9 @@
#include "avcodec.h"
#include "internal.h"
-#if HAVE_OPENJPEG_2_1_OPENJPEG_H
+#if HAVE_OPENJPEG_2_2_OPENJPEG_H
+# include <openjpeg-2.2/openjpeg.h>
+#elif HAVE_OPENJPEG_2_1_OPENJPEG_H
# include <openjpeg-2.1/openjpeg.h>
#elif HAVE_OPENJPEG_2_0_OPENJPEG_H
# include <openjpeg-2.0/openjpeg.h>
@@ -42,7 +44,7 @@
# include <openjpeg.h>
#endif
-#if HAVE_OPENJPEG_2_1_OPENJPEG_H || HAVE_OPENJPEG_2_0_OPENJPEG_H
+#if HAVE_OPENJPEG_2_2_OPENJPEG_H || HAVE_OPENJPEG_2_1_OPENJPEG_H || HAVE_OPENJPEG_2_0_OPENJPEG_H
# define OPENJPEG_MAJOR_VERSION 2
# define OPJ(x) OPJ_##x
#else
@@ -305,7 +307,7 @@ static av_cold int libopenjpeg_encode_init(AVCodecContext *avctx)
opj_set_default_encoder_parameters(&ctx->enc_params);
-#if HAVE_OPENJPEG_2_1_OPENJPEG_H
+#if HAVE_OPENJPEG_2_2_OPENJPEG_H || HAVE_OPENJPEG_2_1_OPENJPEG_H
switch (ctx->cinema_mode) {
ctx->enc_params.rsiz = OPJ_PROFILE_CINEMA_2K;
@@ -769,7 +771,9 @@ static int libopenjpeg_encode_frame(AVCodecContext *avctx, AVPacket *pkt,
opj_stream_set_write_function(stream, stream_write);
opj_stream_set_skip_function(stream, stream_skip);
opj_stream_set_seek_function(stream, stream_seek);
-#if HAVE_OPENJPEG_2_1_OPENJPEG_H
+#if HAVE_OPENJPEG_2_2_OPENJPEG_H
+ opj_stream_set_user_data(stream, &writer, NULL);
+#elif HAVE_OPENJPEG_2_1_OPENJPEG_H
opj_stream_set_user_data(stream, &writer, NULL);
Same comment as above.
#elif HAVE_OPENJPEG_2_0_OPENJPEG_H
opj_stream_set_user_data(stream, &writer);
--
2.8.3
Reino Wijnsma
2017-06-21 22:43:20 UTC
Permalink
Post by Michael Bradshaw
Please merge these two conditions, since both #if conditions are executing
#if HAVE_OPENJPEG_2_2_OPENJPEG_H || HAVE_OPENJPEG_2_1_OPENJPEG_H
opj_stream_set_user_data(stream, &reader, NULL);
#elif HAVE_OPENJPEG_2_0_OPENJPEG_H
...
New patch included. Thanks.
Michael Bradshaw
2017-06-22 00:00:30 UTC
Permalink
Post by Reino Wijnsma
New patch included. Thanks.
Almost done! The OPJ_STATIC change that was introduced in OpenJPEG 2.1+
means FFmpeg's configure script has to do some extra work. You'll see that
there are two check_lib calls for openjpeg-2.1. You'll need to mimic both
of those for v2.2. That is, the configure script diff should be:

...
+enabled libopenjpeg       && { { check_lib libopenjpeg openjpeg-2.2/openjpeg.h opj_version -lopenjp2 -DOPJ_STATIC && add_cppflags -DOPJ_STATIC; } ||
+                               check_lib libopenjpeg openjpeg-2.2/openjpeg.h opj_version -lopenjp2 ||
+                               { check_lib libopenjpeg openjpeg-2.1/openjpeg.h opj_version -lopenjp2 -DOPJ_STATIC && add_cppflags -DOPJ_STATIC; } ||
                                check_lib libopenjpeg openjpeg-2.1/openjpeg.h opj_version -lopenjp2 ||
...
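For readers unfamiliar with configure's style, the chain above is plain
short-circuit `||` evaluation: each probe runs in order, the first success
ends the chain, and `add_cppflags` only runs when the static variant linked.
A toy sketch of the pattern (stand-in functions, not ffmpeg's real helpers):

```shell
# Toy model of the configure fallback chain.  probe() stands in for
# check_lib and succeeds only for the version we pretend is installed.
probe()        { [ "$1" = "$INSTALLED" ]; }
add_cppflags() { CPPFLAGS="${CPPFLAGS:+$CPPFLAGS }$1"; }

INSTALLED=openjpeg-2.1
CPPFLAGS=
# 2.2 static fails, so evaluation falls through to the 2.1 static probe,
# which succeeds and stops the chain before the 2.0 probe is tried.
{ probe openjpeg-2.2 && add_cppflags -DOPJ_STATIC; } ||
{ probe openjpeg-2.1 && add_cppflags -DOPJ_STATIC; } ||
probe openjpeg-2.0
echo "CPPFLAGS=$CPPFLAGS"
# prints: CPPFLAGS=-DOPJ_STATIC
```

This is why each new OpenJPEG version needs both a static and a non-static
entry placed before the older ones: ordering in the chain is the version
preference.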

--Michael
Reino Wijnsma
2017-06-23 21:57:55 UTC
Permalink
Post by Michael Bradshaw
Almost done! The OPJ_STATIC change that was introduced in OpenJPEG 2.1+
means FFmepg's configure script has to do some extra work. You'll see that
there are two check_lib calls for openjpeg-2.1. You'll need to mimic both
...
+enabled libopenjpeg       && { { check_lib libopenjpeg openjpeg-2.2/openjpeg.h opj_version -lopenjp2 -DOPJ_STATIC && add_cppflags -DOPJ_STATIC; } ||
+                               check_lib libopenjpeg openjpeg-2.2/openjpeg.h opj_version -lopenjp2 ||
+                               { check_lib libopenjpeg openjpeg-2.1/openjpeg.h opj_version -lopenjp2 -DOPJ_STATIC && add_cppflags -DOPJ_STATIC; } ||
                                check_lib libopenjpeg openjpeg-2.1/openjpeg.h opj_version -lopenjp2 ||
...
Like this?
Michael Bradshaw
2017-06-24 03:33:55 UTC
Permalink
Post by Reino Wijnsma
Like this?
Yup, just like that. Thanks for the patch! I've applied it.
