
Other articles (44)
-
The SPIPmotion queue
28 November 2010 — A queue stored in the database
When it is installed, SPIPmotion creates a new table in the database named spip_spipmotion_attentes.
This new table consists of the following fields: id_spipmotion_attente, the unique numeric identifier of the task to be processed; id_document, the numeric identifier of the original document to encode; id_objet, the unique identifier of the object to which the encoded document is to be attached automatically; objet, the type of object to which (...)
-
Websites made with MediaSPIP
2 May 2011 — This page lists some websites based on MediaSPIP.
-
Creating farms of unique websites
13 April 2011 — MediaSPIP platforms can be installed as a farm, with a single "core" hosted on a dedicated server and used by multiple websites.
This allows (among other things):
- implementation costs to be shared between several different projects / individuals
- rapid deployment of multiple unique sites
- creation of groups of like-minded sites, making it possible to browse media in a more controlled and selective environment than the major "open" (...)
On other sites (7722)
-
Neutral net or neutered
4 June 2013, by Mans — Law and liberty
In recent weeks, a number of high-profile events, in the UK and elsewhere, have been quickly seized upon to promote a variety of schemes for monitoring or filtering Internet access. These proposals, despite their good intentions of protecting children or fighting terrorism, pose a serious threat to fundamental liberties. Although at a glance the ideas may seem like a reasonable price to pay for the prevention of some truly hideous crimes, there is more to them than first meets the eye. Internet regulation in any form whatsoever is the thin end of a wedge at whose other end we find severely restricted freedom of expression of the kind usually associated with oppressive dictatorships. Where the Internet was once a novelty, it now forms an integral part of modern society; regulating the Internet means regulating our lives.
Terrorism
Following the brutal murder of British soldier Lee Rigby in Woolwich, attempts were made in the UK to revive the controversial Communications Data Bill, also dubbed the snooper’s charter. The bill would give police and security services unfettered access to details (excluding content) of all digital communication in the UK without needing so much as a warrant.
The powers afforded by the snooper’s charter would, the argument goes, enable police to prevent crimes such as the one witnessed in Woolwich. True or not, the proposal would, if implemented, also bring about infrastructure for snooping on anyone at any time for any purpose. Once available, the temptation may become strong to extend, little by little, the legal use of these abilities to cover ever more everyday activities, all in the name of crime prevention, of course.
In the emotional aftermath of a gruesome act, anything with the promise of preventing it happening again may seem like a good idea. At times like these it is important, more than ever, to remain rational and carefully consider all the potential consequences of legislation, not only the intended ones.
Hate speech
Hand in hand with terrorism goes hate speech, preachings designed to inspire violence against people of some singled-out nation, race, or other group. Naturally, hate speech is often to be found on the Internet, where it can reach large audiences while the author remains relatively protected. Naturally, we would prefer for it not to exist.
To fulfil the utopian desire of a clean Internet, some advocate mandatory filtering by Internet service providers and search engines to remove this unwanted content. Exactly how such censoring might be implemented is however rarely dwelt upon, much less the consequences inadvertent blocking of innocent material might have.
Pornography
Another common target of calls for filtering is pornography. While few object to the blocking of child pornography, at least in principle, the debate runs hotter when it comes to the legal variety. Pornography, it is claimed, promotes violence towards women and is immoral or generally offensive. As such it ought to be blocked in the name of the greater good.
The conviction last week of paedophile Mark Bridger for the abduction and murder of five-year-old April Jones renewed the debate about filtering of pornography in the UK; his laptop was found to contain child pornography. John Carr of the UK government’s Council on Child Internet Safety went so far as to suggest a default block on all pornography, access being granted to an Internet user only once he or she had registered with some unspecified entity. Registering people who wish only to access perfectly legal material is not something we do in a democracy.
The reality is that Google and other major search engines already remove illegal images from search results and report them to the appropriate authorities. In the UK, the Internet Watch Foundation, a non-government organisation, maintains a blacklist of what it deems ‘potentially criminal’ content, and many Internet service providers block access based on this list.
While well-intentioned, the IWF and its blacklist should raise some concerns. Firstly, a vigilante organisation operating in secret and with no government oversight, acting as the nation’s morality police, has serious implications for freedom of speech. Secondly, the blocks imposed are sometimes more far-reaching than intended. In one incident, an attempt to block the cover image of the Scorpions album Virgin Killer hosted by Wikipedia (in itself a dubious decision) rendered the entire related article inaccessible and interfered with editing.
Net neutrality
Content filtering, or more precisely the lack thereof, is central to the concept of net neutrality. Usually discussed in the context of Internet service providers, this is the principle that the user should have equal, unfiltered access to all content. As a consequence, ISPs should not be held responsible for the content they deliver. Compare this to how the postal system works.
The current debate shows that the principle of net neutrality is important not only at the ISP level, but should also include providers of essential services on the Internet. This means search engines should not be responsible for or be required to filter results, email hosts should not be required to scan users’ messages, and so on. No mandatory censoring can be effective without infringing the essential liberties of freedom of speech and press.
Social networks operate in a less well-defined space. They are clearly not part of the essential Internet infrastructure, and they require that users sign up and agree to their terms and conditions. Because of this, they can include restrictions that would be unacceptable for the Internet as a whole. At the same time, social networks are growing in importance as means of communication between people, and as such they have a moral obligation to act fairly and apply their rules in a transparent manner.
Facebook was recently under fire, accused of not taking sufficient measures to curb ‘hate speech,’ particularly against women. Eventually they pledged to review their policies and methods, and reducing the proliferation of such content will surely make the web a better place. Nevertheless, one must ask how Facebook (or another social network) might react to similar pressure from, say, a religious group demanding removal of ‘blasphemous’ content. What about demands from a foreign government? Only yesterday, the Turkish prime minister Erdogan branded Twitter ‘a plague’ in a TV interview.
Rather than impose upon Internet companies the burden of law enforcement, we should provide them the latitude to set their own policies as well as the legal confidence to stand firm in the face of unreasonable demands. The usual market forces will promote those acting responsibly.
Further reading
- Tory-Labour pact could save data bill, says Lord Howard
- Internet companies warn May over ‘snooper’s charter’
- Snooper’s charter ‘should be replaced by strengthening of existing powers’
- Exclusive: ‘Snooper’s charter’ would not have prevented Woolwich attack, says MI5
- Search engines urged to block more online porn sites
- Why technology must be the solution to child abuse material online
- Google must take more action to police explicit content, says Vince Cable
- Facebook bows to campaign groups over ‘hate speech’
- Facebook sexism campaign attracts thousands online
- Turkish prime minister: Twitter is a plague (Türkischer Ministerpräsident: Twitter ist eine Plage)
- Valls: “Tracking on the Internet must be a priority for us” (La traque sur Internet doit être une priorité pour nous)
- The CNIL, future judge of the Internet (La Cnil, futur juge d’Internet)
- “National security matter”: Third agency caught unilaterally blocking web sites
-
How to convert the same audio twice using libswresample's swr_convert
25 July 2019, by JoshuaCWebDeveloper
I'm working on an audio processing system that sometimes requires that the same audio be resampled twice. The first resampling of the audio from FFmpeg works fine; the second results in distorted audio. I've reproduced this problem by modifying the resampling_audio example provided by FFmpeg. How do I convert the same audio twice using swr_convert?
Below I've attached a modified version of the resampling_audio example. In order to reproduce the issue, follow these steps:
- Clone the FFmpeg project at https://github.com/FFmpeg/FFmpeg
- Run ./configure
- Run make -j4 examples (this will take a while the first time)
- Run doc/examples/resampling_audio to produce the expected output
- Replace doc/examples/resampling_audio.c with the version I've attached below
- Run make -j4 examples
- Run doc/examples/resampling_audio again (with the new args) to output two new files (one for each conversion)
- Import each file into Audacity as raw data; the first file should be 44100 Hz, the second 32000 Hz
- The first file will sound the same as the original; the second file will be distorted
The environment I ran this in was Ubuntu 16.04; I then copied the output files to a Windows PC to open them in Audacity.
Here is my modified resampling_audio.c file. I've created some new variables and duplicated the blocks of code that do the conversion. The first conversion should be unchanged; the second conversion takes in data from the first conversion and attempts to convert it again. (A condensed sketch of the chaining follows the listing.)
/*
* Copyright (c) 2012 Stefano Sabatini
*
* Permission is hereby granted, free of charge, to any person obtaining a copy
* of this software and associated documentation files (the "Software"), to deal
* in the Software without restriction, including without limitation the rights
* to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
* copies of the Software, and to permit persons to whom the Software is
* furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
* OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
* THE SOFTWARE.
*/
/**
* @example resampling_audio.c
* libswresample API use example.
*/
#include <libavutil/opt.h>
#include <libavutil/channel_layout.h>
#include <libavutil/samplefmt.h>
#include <libswresample/swresample.h>
static int get_format_from_sample_fmt(const char **fmt,
enum AVSampleFormat sample_fmt)
{
int i;
struct sample_fmt_entry {
enum AVSampleFormat sample_fmt; const char *fmt_be, *fmt_le;
} sample_fmt_entries[] = {
{ AV_SAMPLE_FMT_U8, "u8", "u8" },
{ AV_SAMPLE_FMT_S16, "s16be", "s16le" },
{ AV_SAMPLE_FMT_S32, "s32be", "s32le" },
{ AV_SAMPLE_FMT_FLT, "f32be", "f32le" },
{ AV_SAMPLE_FMT_DBL, "f64be", "f64le" },
};
*fmt = NULL;
for (i = 0; i < FF_ARRAY_ELEMS(sample_fmt_entries); i++) {
struct sample_fmt_entry *entry = &sample_fmt_entries[i];
if (sample_fmt == entry->sample_fmt) {
*fmt = AV_NE(entry->fmt_be, entry->fmt_le);
return 0;
}
}
fprintf(stderr,
"Sample format %s not supported as output format\n",
av_get_sample_fmt_name(sample_fmt));
return AVERROR(EINVAL);
}
/**
* Fill dst buffer with nb_samples, generated starting from t.
*/
static void fill_samples(double *dst, int nb_samples, int nb_channels, int sample_rate, double *t)
{
int i, j;
double tincr = 1.0 / sample_rate, *dstp = dst;
const double c = 2 * M_PI * 440.0;
/* generate sin tone with 440Hz frequency and duplicated channels */
for (i = 0; i < nb_samples; i++) {
*dstp = sin(c * *t);
for (j = 1; j < nb_channels; j++)
dstp[j] = dstp[0];
dstp += nb_channels;
*t += tincr;
}
}
int main(int argc, char **argv)
{
int64_t src_ch_layout = AV_CH_LAYOUT_STEREO, dst_ch_layout = AV_CH_LAYOUT_SURROUND;
int src_rate = 48000, dst_rate = 44100;
uint8_t **src_data = NULL, **dst_data = NULL, **dst_data2 = NULL;
int src_nb_channels = 0, dst_nb_channels = 0;
int src_linesize, dst_linesize;
int src_nb_samples = 1024, dst_nb_samples, max_dst_nb_samples, dst_nb_samples2, max_dst_nb_samples2;
enum AVSampleFormat src_sample_fmt = AV_SAMPLE_FMT_DBL, dst_sample_fmt = AV_SAMPLE_FMT_S16;
const char *dst_filename = NULL, *dst_filename2 = NULL;
FILE *dst_file, *dst_file2;
int dst_bufsize, dst_bufsize2;
const char *fmt;
struct SwrContext *swr_ctx;
struct SwrContext *swr_ctx2;
double t;
int ret;
if (argc != 3) {
fprintf(stderr, "Usage: %s output_file_first output_file_second\n"
"API example program to show how to resample an audio stream with libswresample.\n"
"This program generates a series of audio frames, resamples them to a specified "
"output format and rate and saves them to an output file named output_file.\n",
argv[0]);
exit(1);
}
dst_filename = argv[1];
dst_filename2 = argv[2];
dst_file = fopen(dst_filename, "wb");
if (!dst_file) {
fprintf(stderr, "Could not open destination file %s\n", dst_filename);
exit(1);
}
dst_file2 = fopen(dst_filename2, "wb");
if (!dst_file2) {
fprintf(stderr, "Could not open destination file 2 %s\n", dst_filename2);
exit(1);
}
/* create resampler context */
swr_ctx = swr_alloc();
if (!swr_ctx) {
fprintf(stderr, "Could not allocate resampler context\n");
ret = AVERROR(ENOMEM);
goto end;
}
/* set options */
av_opt_set_int(swr_ctx, "in_channel_layout", src_ch_layout, 0);
av_opt_set_int(swr_ctx, "in_sample_rate", src_rate, 0);
av_opt_set_sample_fmt(swr_ctx, "in_sample_fmt", src_sample_fmt, 0);
av_opt_set_int(swr_ctx, "out_channel_layout", dst_ch_layout, 0);
av_opt_set_int(swr_ctx, "out_sample_rate", dst_rate, 0);
av_opt_set_sample_fmt(swr_ctx, "out_sample_fmt", dst_sample_fmt, 0);
/* initialize the resampling context */
if ((ret = swr_init(swr_ctx)) < 0) {
fprintf(stderr, "Failed to initialize the resampling context\n");
goto end;
}
/* create resampler context 2 */
swr_ctx2 = swr_alloc();
if (!swr_ctx2) {
fprintf(stderr, "Could not allocate resampler context 2\n");
ret = AVERROR(ENOMEM);
goto end;
}
/* set options */
av_opt_set_int(swr_ctx2, "in_channel_layout", dst_ch_layout, 0);
av_opt_set_int(swr_ctx2, "in_sample_rate", dst_rate, 0);
av_opt_set_sample_fmt(swr_ctx2, "in_sample_fmt", dst_sample_fmt, 0);
av_opt_set_int(swr_ctx2, "out_channel_layout", dst_ch_layout, 0);
av_opt_set_int(swr_ctx2, "out_sample_rate", 32000, 0);
av_opt_set_sample_fmt(swr_ctx2, "out_sample_fmt", dst_sample_fmt, 0);
/* initialize the resampling context */
if ((ret = swr_init(swr_ctx2)) < 0) {
fprintf(stderr, "Failed to initialize the resampling context 2\n");
goto end;
}
/* allocate source and destination samples buffers */
src_nb_channels = av_get_channel_layout_nb_channels(src_ch_layout);
ret = av_samples_alloc_array_and_samples(&src_data, &src_linesize, src_nb_channels,
src_nb_samples, src_sample_fmt, 0);
if (ret < 0) {
fprintf(stderr, "Could not allocate source samples\n");
goto end;
}
/* compute the number of converted samples: buffering is avoided
* ensuring that the output buffer will contain at least all the
* converted input samples */
max_dst_nb_samples = dst_nb_samples =
av_rescale_rnd(src_nb_samples, dst_rate, src_rate, AV_ROUND_UP);
/* buffer is going to be directly written to a rawaudio file, no alignment */
dst_nb_channels = av_get_channel_layout_nb_channels(dst_ch_layout);
ret = av_samples_alloc_array_and_samples(&dst_data, &dst_linesize, dst_nb_channels,
dst_nb_samples, dst_sample_fmt, 0);
if (ret < 0) {
fprintf(stderr, "Could not allocate destination samples\n");
goto end;
}
/* compute the number of converted samples: buffering is avoided
* ensuring that the output buffer will contain at least all the
* converted input samples */
max_dst_nb_samples2 = dst_nb_samples2 =
av_rescale_rnd(dst_nb_samples, 32000, dst_rate, AV_ROUND_UP);
/* buffer is going to be directly written to a rawaudio file, no alignment */
// dst_nb_channels2 = av_get_channel_layout_nb_channels(dst_ch_layout);
ret = av_samples_alloc_array_and_samples(&dst_data2, &dst_linesize, dst_nb_channels,
dst_nb_samples2, dst_sample_fmt, 0);
if (ret < 0) {
fprintf(stderr, "Could not allocate destination samples 2\n");
goto end;
}
t = 0;
do {
/* generate synthetic audio */
fill_samples((double *)src_data[0], src_nb_samples, src_nb_channels, src_rate, &t);
/* compute destination number of samples */
dst_nb_samples = av_rescale_rnd(swr_get_delay(swr_ctx, src_rate) +
src_nb_samples, dst_rate, src_rate, AV_ROUND_UP);
if (dst_nb_samples > max_dst_nb_samples) {
av_freep(&dst_data[0]);
ret = av_samples_alloc(dst_data, &dst_linesize, dst_nb_channels,
dst_nb_samples, dst_sample_fmt, 1);
if (ret < 0)
break;
max_dst_nb_samples = dst_nb_samples;
}
/* convert to destination format */
ret = swr_convert(swr_ctx, dst_data, dst_nb_samples, (const uint8_t **)src_data, src_nb_samples);
if (ret < 0) {
fprintf(stderr, "Error while converting\n");
goto end;
}
dst_bufsize = av_samples_get_buffer_size(&dst_linesize, dst_nb_channels,
ret, dst_sample_fmt, 1);
if (dst_bufsize < 0) {
fprintf(stderr, "Could not get sample buffer size\n");
goto end;
}
printf("t:%f in:%d out:%d\n", t, src_nb_samples, ret);
fwrite(dst_data[0], 1, dst_bufsize, dst_file);
/* compute destination number of samples 2 */
dst_nb_samples2 = av_rescale_rnd(swr_get_delay(swr_ctx2, dst_rate) +
dst_nb_samples2, 32000, dst_rate, AV_ROUND_UP);
if (dst_nb_samples2 > max_dst_nb_samples2) {
av_freep(&dst_data2[0]);
ret = av_samples_alloc(dst_data2, &dst_linesize, dst_nb_channels,
dst_nb_samples2, dst_sample_fmt, 1);
if (ret < 0)
break;
max_dst_nb_samples2 = dst_nb_samples2;
}
/* convert to destination format */
ret = swr_convert(swr_ctx2, dst_data2, dst_nb_samples2, (const uint8_t **)dst_data, dst_nb_samples);
if (ret < 0) {
fprintf(stderr, "Error while converting 2\n");
goto end;
}
dst_bufsize2 = av_samples_get_buffer_size(&dst_linesize, dst_nb_channels,
ret, dst_sample_fmt, 1);
if (dst_bufsize2 < 0) {
fprintf(stderr, "Could not get sample buffer size 2\n");
goto end;
}
printf("t:%f in:%d out:%d\n", t, dst_nb_samples, ret);
fwrite(dst_data2[0], 1, dst_bufsize2, dst_file2);
} while (t < 10);
if ((ret = get_format_from_sample_fmt(&fmt, dst_sample_fmt)) < 0)
goto end;
fprintf(stderr, "Resampling succeeded. Play the output file with the command:\n"
"ffplay -f %s -channel_layout %"PRId64" -channels %d -ar %d %s\n",
fmt, dst_ch_layout, dst_nb_channels, dst_rate, dst_filename);
end:
fclose(dst_file);
if (src_data)
av_freep(&src_data[0]);
av_freep(&src_data);
if (dst_data)
av_freep(&dst_data[0]);
av_freep(&dst_data);
swr_free(&swr_ctx);
return ret < 0;
}
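
A note on the API involved: swr_convert() returns the number of samples per channel it actually wrote, and when two contexts are chained, that return value, rather than the capacity of the intermediate buffer, is what the second call should consume as its input count. In the listing above, the second swr_convert() is passed dst_nb_samples while the first call reported its real output in ret; whenever the two differ, the second stage also reads uninitialized samples at the end of the intermediate buffer, which could account for distortion. Below is a minimal sketch of chaining two stages on the actual counts; the names (chain_convert, stage1, stage2) are illustrative and not taken from the code above.

#include <stdint.h>
#include <libswresample/swresample.h>

/* Chain two resampling stages: in -> mid -> out.
 * mid_capacity and out_capacity are the allocated sizes, in samples
 * per channel, of the intermediate and final buffers. */
static int chain_convert(struct SwrContext *stage1, struct SwrContext *stage2,
                         uint8_t **in, int in_samples,
                         uint8_t **mid, int mid_capacity,
                         uint8_t **out, int out_capacity)
{
    /* first stage: returns how many samples it actually produced */
    int produced = swr_convert(stage1, mid, mid_capacity,
                               (const uint8_t **)in, in_samples);
    if (produced < 0)
        return produced;

    /* second stage: feed exactly the samples produced above,
     * not the capacity of the intermediate buffer */
    return swr_convert(stage2, out, out_capacity,
                       (const uint8_t **)mid, produced);
}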
-
FFmpeg C++ API: Using HW acceleration (VAAPI) to transcode video coming from a webcam
17 April 2024, by nico
I'm currently trying to use HW acceleration with the FFmpeg C++ API in order to transcode the video coming from a webcam (which may vary from one config to another) into a given output format (for example, converting the video stream coming from the webcam from MJPEG to H264 so that it can be written into an MP4 file).


I have already managed to achieve this by transferring the AVFrame output by the HW decoder from the GPU to the CPU, then transferring it to the HW encoder input (so from CPU back to GPU).
This is not very efficient, and on top of that, for the config given above (MJPEG => H264), I cannot feed the decoder's output directly to the encoder, as the MJPEG HW decoder wants to output in RGBA pixel format while the H264 encoder wants NV12. So I have to perform the pixel format conversion on the CPU side.
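
For reference, the GPU-to-CPU round trip described above is typically done with av_hwframe_transfer_data(), followed by a software pixel format conversion through libswscale. A minimal sketch of that path follows; the function and variable names are illustrative, and error cleanup is omitted for brevity.

#include <libavutil/frame.h>
#include <libavutil/hwcontext.h>
#include <libswscale/swscale.h>

/* Download a decoded hardware surface to system memory and convert
 * it to NV12 in software. Returns a new frame the caller must free. */
static AVFrame *download_and_convert(const AVFrame *hw_frame)
{
    AVFrame *sw_frame = av_frame_alloc();
    AVFrame *nv12     = av_frame_alloc();
    if (!sw_frame || !nv12)
        return NULL;

    /* copies the surface contents to CPU memory; width, height and
     * format of sw_frame are filled in by the call */
    if (av_hwframe_transfer_data(sw_frame, hw_frame, 0) < 0)
        return NULL;

    nv12->format = AV_PIX_FMT_NV12;
    nv12->width  = sw_frame->width;
    nv12->height = sw_frame->height;
    if (av_frame_get_buffer(nv12, 0) < 0)
        return NULL;

    struct SwsContext *sws = sws_getContext(
        sw_frame->width, sw_frame->height, (enum AVPixelFormat)sw_frame->format,
        nv12->width, nv12->height, AV_PIX_FMT_NV12,
        SWS_BILINEAR, NULL, NULL, NULL);
    if (!sws)
        return NULL;

    sws_scale(sws, (const uint8_t * const *)sw_frame->data, sw_frame->linesize,
              0, sw_frame->height, nv12->data, nv12->linesize);
    sws_freeContext(sws);
    av_frame_free(&sw_frame);
    return nv12;
}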


That's why I would like to connect the output of the HW video decoder directly to the input of the HW encoder (inside the GPU).
To do this, I followed this example given by FFmpeg: https://github.com/FFmpeg/FFmpeg/blob/master/doc/examples/vaapi_transcode.c.


This works fine when transcoding an AVI file containing MJPEG to H264, but it fails when using an MJPEG stream coming from a webcam as input.
In this case, the encoder says:


[h264_vaapi @ 0x5555555e5140] No usable encoding profile found.



Below is the code of the FFmpeg example, which I modified to connect to the webcam instead of opening an input file:


/*
 * Permission is hereby granted, free of charge, to any person obtaining a copy
 * of this software and associated documentation files (the "Software"), to deal
 * in the Software without restriction, including without limitation the rights
 * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
 * copies of the Software, and to permit persons to whom the Software is
 * furnished to do so, subject to the following conditions:
 *
 * The above copyright notice and this permission notice shall be included in
 * all copies or substantial portions of the Software.
 *
 * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
 * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
 * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
 * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
 * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
 * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
 * THE SOFTWARE.
 */

/**
 * @file Intel VAAPI-accelerated transcoding API usage example
 * @example vaapi_transcode.c
 *
 * Perform VAAPI-accelerated transcoding.
 * Usage: vaapi_transcode input_stream codec output_stream
 * e.g: - vaapi_transcode input.mp4 h264_vaapi output_h264.mp4
 * - vaapi_transcode input.mp4 vp9_vaapi output_vp9.ivf
 */

#include <stdio.h>
#include <errno.h>
#include <iostream>

//#define USE_INPUT_FILE

extern "C"{
#include <libavutil/hwcontext.h>
#include <libavcodec/avcodec.h>
#include <libavformat/avformat.h>
#include <libavdevice/avdevice.h>
}

static AVFormatContext *ifmt_ctx = NULL, *ofmt_ctx = NULL;
static AVBufferRef *hw_device_ctx = NULL;
static AVCodecContext *decoder_ctx = NULL, *encoder_ctx = NULL;
static int video_stream = -1;
static AVStream *ost;
static int initialized = 0;

static enum AVPixelFormat get_vaapi_format(AVCodecContext *ctx,
 const enum AVPixelFormat *pix_fmts)
{
 const enum AVPixelFormat *p;

 for (p = pix_fmts; *p != AV_PIX_FMT_NONE; p++) {
 if (*p == AV_PIX_FMT_VAAPI)
 return *p;
 }

 std::cout << "Unable to decode this file using VA-API." << std::endl;
 return AV_PIX_FMT_NONE;
}

static int open_input_file(const char *filename)
{
 int ret;
 AVCodec *decoder = NULL;
 AVStream *video = NULL;
 AVDictionary *pInputOptions = nullptr;

#ifdef USE_INPUT_FILE
 if ((ret = avformat_open_input(&ifmt_ctx, filename, NULL, NULL)) < 0) {
 char errMsg[1024] = {0};
 std::cout << "Cannot open input file '" << filename << "', Error code: " << av_make_error_string(errMsg, 1024, ret) << std::endl;
 return ret;
 }
#else
 avdevice_register_all();
 av_dict_set(&pInputOptions, "input_format", "mjpeg", 0);
 av_dict_set(&pInputOptions, "framerate", "30", 0);
 av_dict_set(&pInputOptions, "video_size", "640x480", 0);

 if ((ret = avformat_open_input(&ifmt_ctx, "/dev/video0", NULL, &pInputOptions)) < 0) {
 char errMsg[1024] = {0};
 std::cout << "Cannot open input file '" << filename << "', Error code: " << av_make_error_string(errMsg, 1024, ret) << std::endl;
 return ret;
 }
#endif

 ifmt_ctx->flags |= AVFMT_FLAG_NONBLOCK;

 if ((ret = avformat_find_stream_info(ifmt_ctx, NULL)) < 0) {
 char errMsg[1024] = {0};
 std::cout << "Cannot find input stream information. Error code: " << av_make_error_string(errMsg, 1024, ret) << std::endl;
 return ret;
 }

 ret = av_find_best_stream(ifmt_ctx, AVMEDIA_TYPE_VIDEO, -1, -1, &decoder, 0);
 if (ret < 0) {
 char errMsg[1024] = {0};
 std::cout << "Cannot find a video stream in the input file. Error code: " << av_make_error_string(errMsg, 1024, ret) << std::endl;
 return ret;
 }
 video_stream = ret;

 if (!(decoder_ctx = avcodec_alloc_context3(decoder)))
 return AVERROR(ENOMEM);

 video = ifmt_ctx->streams[video_stream];
 if ((ret = avcodec_parameters_to_context(decoder_ctx, video->codecpar)) < 0) {
 char errMsg[1024] = {0};
 std::cout << "avcodec_parameters_to_context error. Error code: " << av_make_error_string(errMsg, 1024, ret) << std::endl;
 return ret;
 }

 decoder_ctx->hw_device_ctx = av_buffer_ref(hw_device_ctx);
 if (!decoder_ctx->hw_device_ctx) {
 std::cout << "A hardware device reference create failed." << std::endl;
 return AVERROR(ENOMEM);
 }
 decoder_ctx->get_format = get_vaapi_format;

 if ((ret = avcodec_open2(decoder_ctx, decoder, NULL)) < 0)
 {
 char errMsg[1024] = {0};
 std::cout << "Failed to open codec for decoding. Error code: " << av_make_error_string(errMsg, 1024, ret) << std::endl;
 }

 return ret;
}

static int encode_write(AVPacket *enc_pkt, AVFrame *frame)
{
 int ret = 0;

 av_packet_unref(enc_pkt);

 AVHWDeviceContext *pHwDevCtx = reinterpret_cast<AVHWDeviceContext *>(encoder_ctx->hw_device_ctx->data);
 AVHWFramesContext *pHwFrameCtx = reinterpret_cast<AVHWFramesContext *>(encoder_ctx->hw_frames_ctx->data);

 if ((ret = avcodec_send_frame(encoder_ctx, frame)) < 0) {
 char errMsg[1024] = {0};
 std::cout << "Error during encoding. Error code: " << av_make_error_string(errMsg, 1024, ret) << std::endl;
 goto end;
 }
 while (1) {
 ret = avcodec_receive_packet(encoder_ctx, enc_pkt);
 if (ret)
 break;

 enc_pkt->stream_index = 0;
 av_packet_rescale_ts(enc_pkt, ifmt_ctx->streams[video_stream]->time_base,
 ofmt_ctx->streams[0]->time_base);
 ret = av_interleaved_write_frame(ofmt_ctx, enc_pkt);
 if (ret < 0) {
 char errMsg[1024] = {0};
 std::cout << "Error during writing data to output file. Error code: " << av_make_error_string(errMsg, 1024, ret) << std::endl;
 return -1;
 }
 }

end:
 if (ret == AVERROR_EOF)
 return 0;
 ret = ((ret == AVERROR(EAGAIN)) ? 0:-1);
 return ret;
}

static int dec_enc(AVPacket *pkt, const AVCodec *enc_codec, AVCodecContext *pDecCtx)
{
 AVFrame *frame;
 int ret = 0;

 ret = avcodec_send_packet(decoder_ctx, pkt);
 if (ret < 0) {
 char errMsg[1024] = {0};
 std::cout << "Error during decoding. Error code: " << av_make_error_string(errMsg, 1024, ret) << std::endl;
 return ret;
 }

 while (ret >= 0) {
 if (!(frame = av_frame_alloc()))
 return AVERROR(ENOMEM);

 ret = avcodec_receive_frame(decoder_ctx, frame);
 if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF) {
 av_frame_free(&frame);
 return 0;
 } else if (ret < 0) {
 char errMsg[1024] = {0};
 std::cout << "Error while decoding. Error code: " << av_make_error_string(errMsg, 1024, ret) << std::endl;
 goto fail;
 }

 if (!initialized) {
 AVHWFramesContext *pHwFrameCtx = reinterpret_cast<AVHWFramesContext *>(decoder_ctx->hw_frames_ctx->data);
 
 /* we need to ref hw_frames_ctx of decoder to initialize encoder's codec.
 Only after we get a decoded frame, can we obtain its hw_frames_ctx */
 encoder_ctx->hw_frames_ctx = av_buffer_ref(pDecCtx->hw_frames_ctx);
 if (!encoder_ctx->hw_frames_ctx) {
 ret = AVERROR(ENOMEM);
 goto fail;
 }
 /* set AVCodecContext Parameters for encoder, here we keep them stay
 * the same as decoder.
 * xxx: now the sample can't handle resolution change case.
 */
 if(encoder_ctx->time_base.den == 1 && encoder_ctx->time_base.num == 0)
 {
 encoder_ctx->time_base = av_inv_q(ifmt_ctx->streams[video_stream]->avg_frame_rate);
 }
 else
 {
 encoder_ctx->time_base = av_inv_q(decoder_ctx->framerate);
 }
 encoder_ctx->pix_fmt = AV_PIX_FMT_VAAPI;
 encoder_ctx->width = decoder_ctx->width;
 encoder_ctx->height = decoder_ctx->height;

 if ((ret = avcodec_open2(encoder_ctx, enc_codec, NULL)) < 0) {
 char errMsg[1024] = {0};
 std::cout << "Failed to open encode codec. Error code: " << av_make_error_string(errMsg, 1024, ret) << std::endl;
 goto fail;
 }

 if (!(ost = avformat_new_stream(ofmt_ctx, enc_codec))) {
 std::cout << "Failed to allocate stream for output format." << std::endl;
 ret = AVERROR(ENOMEM);
 goto fail;
 }

 ost->time_base = encoder_ctx->time_base;
 ret = avcodec_parameters_from_context(ost->codecpar, encoder_ctx);
 if (ret < 0) {
 char errMsg[1024] = {0};
 std::cout << "Failed to copy the stream parameters. Error code: " << av_make_error_string(errMsg, 1024, ret) << std::endl;
 goto fail;
 }

 /* write the stream header */
 if ((ret = avformat_write_header(ofmt_ctx, NULL)) < 0) {
 char errMsg[1024] = {0};
 std::cout << "Error while writing stream header. Error code: " << av_make_error_string(errMsg, 1024, ret) << std::endl;
 goto fail;
 }

 initialized = 1;
 }

 if ((ret = encode_write(pkt, frame)) < 0)
 std::cout << "Error during encoding and writing." << std::endl;

fail:
 av_frame_free(&frame);
 if (ret < 0)
 return ret;
 }
 return 0;
}

int main(int argc, char **argv)
{
 const AVCodec *enc_codec;
 int ret = 0;
 AVPacket *dec_pkt;

 if (argc != 4) {
 fprintf(stderr, "Usage: %s <input file="file" /> <encode codec="codec"> <output file="file">\n"
 "The output format is guessed according to the file extension.\n"
 "\n", argv[0]);
 return -1;
 }

 ret = av_hwdevice_ctx_create(&hw_device_ctx, AV_HWDEVICE_TYPE_VAAPI, NULL, NULL, 0);
 if (ret < 0) {
 char errMsg[1024] = {0};
 std::cout << "Failed to create a VAAPI device. Error code: " << av_make_error_string(errMsg, 1024, ret) << std::endl;
 return -1;
 }

 dec_pkt = av_packet_alloc();
 if (!dec_pkt) {
 std::cout << "Failed to allocate decode packet" << std::endl;
 goto end;
 }

 if ((ret = open_input_file(argv[1])) < 0)
 goto end;

 if (!(enc_codec = avcodec_find_encoder_by_name(argv[2]))) {
 std::cout << "Could not find encoder '" << argv[2] << "'" << std::endl;
 ret = -1;
 goto end;
 }

 if ((ret = (avformat_alloc_output_context2(&ofmt_ctx, NULL, NULL, argv[3]))) < 0) {
 char errMsg[1024] = {0};
 std::cout << "Failed to deduce output format from file extension. Error code: " << av_make_error_string(errMsg, 1024, ret) << std::endl;
 goto end;
 }

 if (!(encoder_ctx = avcodec_alloc_context3(enc_codec))) {
 ret = AVERROR(ENOMEM);
 goto end;
 }

 ret = avio_open(&ofmt_ctx->pb, argv[3], AVIO_FLAG_WRITE);
 if (ret < 0) {
 char errMsg[1024] = {0};
 std::cout << "Cannot open output file. Error code: " << av_make_error_string(errMsg, 1024, ret) << std::endl;
 goto end;
 }

 /* read all packets and only transcoding video */
 while (ret >= 0) {
 if ((ret = av_read_frame(ifmt_ctx, dec_pkt)) < 0)
 break;

 if (video_stream == dec_pkt->stream_index)
 ret = dec_enc(dec_pkt, enc_codec, decoder_ctx);

 av_packet_unref(dec_pkt);
 }

 /* flush decoder */
 av_packet_unref(dec_pkt);
 ret = dec_enc(dec_pkt, enc_codec, decoder_ctx);

 /* flush encoder */
 ret = encode_write(dec_pkt, NULL);

 /* write the trailer for output stream */
 av_write_trailer(ofmt_ctx);

end:
 avformat_close_input(&ifmt_ctx);
 avformat_close_input(&ofmt_ctx);
 avcodec_free_context(&decoder_ctx);
 avcodec_free_context(&encoder_ctx);
 av_buffer_unref(&hw_device_ctx);
 av_packet_free(&dec_pkt);
 return ret;
}
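
As a side note on the API used in dec_enc() above: instead of reffing the decoder's hw_frames_ctx, whose sw_format is whatever the MJPEG decoder chose to output, an encoder can be given its own VAAPI frames context with sw_format forced to NV12, using av_hwframe_ctx_alloc() and av_hwframe_ctx_init(). Whether this resolves the "No usable encoding profile found" error in this setup is an untested assumption on my part, and decoded frames would still need a GPU-side conversion (for instance the scale_vaapi filter) before they match such a pool. A sketch:

#include <libavcodec/avcodec.h>
#include <libavutil/hwcontext.h>

/* Give the encoder a dedicated VAAPI frames context whose surfaces
 * use the NV12 layout. This is a sketch of the API, not a verified
 * fix for the setup in the question. */
static int set_encoder_hw_frames(AVCodecContext *enc, AVBufferRef *device_ref,
                                 int width, int height)
{
    AVBufferRef *frames_ref = av_hwframe_ctx_alloc(device_ref);
    if (!frames_ref)
        return AVERROR(ENOMEM);

    AVHWFramesContext *frames = (AVHWFramesContext *)frames_ref->data;
    frames->format    = AV_PIX_FMT_VAAPI; /* surfaces stay on the GPU */
    frames->sw_format = AV_PIX_FMT_NV12;  /* layout h264_vaapi accepts */
    frames->width     = width;
    frames->height    = height;
    frames->initial_pool_size = 20;

    int err = av_hwframe_ctx_init(frames_ref);
    if (err < 0) {
        av_buffer_unref(&frames_ref);
        return err;
    }

    enc->hw_frames_ctx = av_buffer_ref(frames_ref);
    av_buffer_unref(&frames_ref);
    return enc->hw_frames_ctx ? 0 : AVERROR(ENOMEM);
}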


And here is the content of the associated CMakeLists.txt file to build it using gcc:


cmake_minimum_required(VERSION 3.5)

include(FetchContent)

set(CMAKE_CXX_STANDARD 17)
set(CMAKE_CXX_STANDARD_REQUIRED ON)

set(CMAKE_VERBOSE_MAKEFILE ON)

SET (FFMPEG_HW_TRANSCODE_INCS
 ${CMAKE_CURRENT_LIST_DIR})

include_directories(
 ${CMAKE_INCLUDE_PATH}
 ${CMAKE_CURRENT_LIST_DIR}
)

project(FFmpeg_HW_transcode LANGUAGES CXX)

set(CMAKE_CXX_FLAGS "-Wall -Werror=return-type -pedantic -fPIC -gdwarf-4")
set(CMAKE_CPP_FLAGS "-Wall -Werror=return-type -pedantic -fPIC -gdwarf-4")

set(EXECUTABLE_OUTPUT_PATH "${CMAKE_CURRENT_LIST_DIR}/build/${CMAKE_BUILD_TYPE}/FFmpeg_HW_transcode")
set(LIBRARY_OUTPUT_PATH "${CMAKE_CURRENT_LIST_DIR}/build/${CMAKE_BUILD_TYPE}/FFmpeg_HW_transcode")

add_executable(${PROJECT_NAME})

target_sources(${PROJECT_NAME} PRIVATE
 vaapi_transcode.cpp)

target_link_libraries(${PROJECT_NAME}
 -L${CMAKE_CURRENT_LIST_DIR}/../build/${CMAKE_BUILD_TYPE}/FFmpeg_HW_transcode
 -lavdevice
 -lavformat
 -lavutil
 -lavcodec)



Has anyone tried to do this kind of stuff?


Thanks for your help.