
Other articles (79)
-
General document management
13 May 2011 — MediaSPIP never modifies the original document uploaded to the site.
For each uploaded document it performs two successive operations: it creates an additional version that can easily be viewed online, while keeping the original downloadable in case it cannot be read in a web browser; and it extracts the original document's metadata to describe the file textually.
The tables below explain what MediaSPIP can do (...) -
User profiles
12 April 2011 — Each user has a profile page where they can edit their personal information. In the default top-of-page menu, a menu item is created automatically when MediaSPIP is initialized, visible only when the visitor is logged in to the site.
The user can edit their profile from their author page; a "Modifier votre profil" link in the navigation is (...) -
Sites built with MediaSPIP
2 May 2011 — This page presents some of the sites running MediaSPIP.
You can of course add yours using the form at the bottom of the page.
On other sites (8788)
-
Problems encoding audio stream for RTMP server using ffmpeg libraries
18 March, by Filipe José — I am working on a C++ application that captures audio using miniaudio and sends the data to an RTMP server running on nginx, which generates an HLS stream to be consumed by a web browser.
I was successful in encoding the data and writing it to an .flv file (which I believe to be the container format used by RTMP), and everything works out fine.


I've also tested the server by running the ffmpeg cli tool directly, and the RTMP server is generating the .m3u8 playlist file as well as the .ts files correctly.
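A CLI smoke test along those lines might look like the following (the input file name is an assumption; the application name `live` and stream key `test` match the URL used in the code below):

```shell
# Hypothetical test of the nginx RTMP ingest with the ffmpeg CLI.
# -re paces the input in real time, as a live source would.
ffmpeg -re -i sample.wav -c:a aac -b:a 128k -f flv rtmp://localhost/live/test
```

If this produces the .m3u8 and .ts files, the server side is known-good and the problem is isolated to the libav code.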


The problem I'm having is that when I change the output in avio_open2() to use the RTMP server URL instead of a file name, the HLS files are not generated, even though I get log output about a successful connection, and Wireshark shows "Audio Data" packets being sent to the server along with the corresponding ACK responses.


Is there anything conceptually different between encoding to a file and encoding to a server when using the ffmpeg libav libraries?


I have a class for the audio encoding:


AudioEncoder(int sampleRate) {
    av_log_set_level(AV_LOG_DEBUG);
    m_formatContext = avformat_alloc_context();
    const AVOutputFormat* outputFormat = av_guess_format("flv", NULL, NULL);
    if (!outputFormat) {
        std::cerr << "Could not find flv output format" << std::endl;
        exit(-1);
    }
    m_formatContext->oformat = outputFormat;

    int result = avio_open2(&m_ioContext, "rtmp://localhost/live/test", AVIO_FLAG_WRITE, NULL, NULL);
    if (result < 0) {
        std::cerr << "Could not open output stream: " << error_string(result) << std::endl;
        exit(-1);
    }
    m_formatContext->pb = m_ioContext;

    const AVCodec* codec = avcodec_find_encoder(AV_CODEC_ID_AAC);
    if (!codec) {
        std::cerr << "Codec not found." << std::endl;
        exit(-1);
    }

    m_codecContext = avcodec_alloc_context3(codec);
    if (!m_codecContext) {
        std::cerr << "Could not alloc codec context" << std::endl;
        exit(-1);
    }
    AVChannelLayout layout;
    av_channel_layout_default(&layout, 1);
    m_codecContext->sample_fmt = AV_SAMPLE_FMT_FLTP;
    m_codecContext->bit_rate = 128000;
    m_codecContext->sample_rate = sampleRate;
    m_codecContext->ch_layout = layout;

    m_avStream = avformat_new_stream(m_formatContext, codec);
    if (!m_avStream) {
        std::cerr << "Could not create stream." << std::endl;
        exit(-1);
    }
    result = avcodec_parameters_from_context(m_avStream->codecpar, m_codecContext);
    if (result < 0) {
        std::cerr << "Failed to copy codec parameters to stream: " << error_string(result) << std::endl;
        exit(-1);
    }
    m_avStream->time_base = AVRational{1, m_codecContext->sample_rate};

    result = avcodec_open2(m_codecContext, codec, NULL);
    if (result < 0) {
        std::cerr << "Failed to open codec: " << error_string(result) << std::endl;
        exit(-1);
    }

    result = avformat_write_header(m_formatContext, NULL);
    if (result < 0) {
        std::cerr << "Failed to write format header: " << error_string(result) << std::endl;
        exit(-1);
    }
}



And an Encode function that is called by miniaudio:


void Encode(const void* data, unsigned int frames) {
    AVPacket* packet = av_packet_alloc();
    if (!packet) {
        std::cerr << "Error allocating packet" << std::endl;
        return;
    }

    AVFrame* frame = av_frame_alloc();
    if (!frame) {
        std::cerr << "Error allocating frame" << std::endl;
        av_packet_free(&packet);
        return;
    }
    frame->format = m_codecContext->sample_fmt;
    frame->ch_layout = m_codecContext->ch_layout;
    frame->sample_rate = m_codecContext->sample_rate;
    frame->nb_samples = frames;
    if (frames) {
        int result = av_frame_get_buffer(frame, 0);
        if (result < 0) {
            std::cerr << "Error allocating frame buffer: " << error_string(result) << std::endl;
            av_frame_free(&frame);
            av_packet_free(&packet);
            return;
        }
    }
    frame->data[0] = (uint8_t*)data;

    int result = avcodec_send_frame(m_codecContext, frame);
    if (result < 0) {
        std::cerr << "Error sending frame to encoder: " << error_string(result) << std::endl;
    }

    result = avcodec_receive_packet(m_codecContext, packet);
    if (result == AVERROR(EAGAIN) || result == AVERROR_EOF) {
        std::cout << "EAGAIN" << std::endl;
        // The encoder needs more data, so return and try again later.
        av_frame_free(&frame);
        av_packet_free(&packet);
        return;
    } else if (result < 0) {
        std::cerr << "Error receiving packet from encoder: " << error_string(result) << std::endl;
        av_frame_free(&frame);
        av_packet_free(&packet);
        return;
    } else {
        packet->stream_index = m_avStream->index;
        packet->pts = av_rescale_q(frames, AVRational{1, m_codecContext->sample_rate}, m_avStream->time_base);
        packet->dts = packet->pts;
        packet->duration = av_rescale_q(1024, AVRational{1, m_codecContext->sample_rate}, m_avStream->time_base);

        result = av_write_frame(m_formatContext, packet);
        if (result < 0) {
            std::cerr << "Error writing frame: " << error_string(result) << std::endl;
        }
    }
    // Clean up resources (av_packet_free also unreferences the packet)
    av_frame_free(&frame);
    av_packet_free(&packet);
}



On a final note, I have been able to stream to a TCP server written in Go using ADTS as the container with the same method, but I've decided to delegate that task to nginx.


-
ffmpeg nvenc encode too slow
13 August 2016, by sweetsource — I use ffmpeg 3.1 compiled with nvenc. When I run the ffmpeg encode example like this:
#include
#include <libavutil/opt.h>
#include <libavcodec/avcodec.h>
#include <libavutil/channel_layout.h>
#include <libavutil/common.h>
#include <libavutil/imgutils.h>
#include <libavutil/mathematics.h>
#include <libavutil/samplefmt.h>
#include <ace/ace_os.h>
#define INBUF_SIZE 4096
#define AUDIO_INBUF_SIZE 20480
#define AUDIO_REFILL_THRESH 4096
/*
* Video encoding example
*/
static void video_encode_example(const char *filename, const char* codec_name)
{
AVCodec *codec;
AVCodecContext *c= NULL;
int i, ret, x, y, got_output;
ACE_INT64 nstart,nend;
FILE *f;
AVFrame *frame;
AVPacket pkt;
uint8_t endcode[] = { 0, 0, 1, 0xb7 };
printf("Encode video file %s\n", filename);
/* find the video encoder */
codec = avcodec_find_encoder_by_name(codec_name);
if (!codec) {
fprintf(stderr, "Codec not found\n");
exit(1);
}
c = avcodec_alloc_context3(codec);
if (!c) {
fprintf(stderr, "Could not allocate video codec context\n");
exit(1);
}
/* put sample parameters */
c->bit_rate = 400000;
/* resolution must be a multiple of two */
c->width = 352;
c->height = 288;
/* frames per second */
c->time_base = (AVRational){1,25};
/* emit one intra frame every ten frames
* check frame pict_type before passing frame
* to encoder, if frame->pict_type is AV_PICTURE_TYPE_I
* then gop_size is ignored and the output of encoder
* will always be I frame irrespective to gop_size
*/
c->gop_size = 25;
c->max_b_frames = 0;
c->thread_count = 1;
c->refs = 4;
c->pix_fmt = AV_PIX_FMT_YUV420P;
if(!strcmp(codec_name,"libx264"))
{
av_opt_set(c->priv_data, "preset", "superfast", 0);
av_opt_set(c->priv_data, "tune", "zerolatency", 0);
}
if(!strcmp(codec_name,"h264_nvenc"))
{
av_opt_set(c->priv_data, "gpu","any",0);
av_opt_set(c->priv_data, "preset", "llhp", 0);
av_opt_set(c->priv_data,"profile","main",0);
c->refs = 0;
c->flags = 0;
c->qmax = 31;
c->qmin = 2;
}
/* open it */
if (avcodec_open2(c, codec, NULL) < 0) {
fprintf(stderr, "Could not open codec\n");
exit(1);
}
f = fopen(filename, "wb");
if (!f) {
fprintf(stderr, "Could not open %s\n", filename);
exit(1);
}
frame = av_frame_alloc();
if (!frame) {
fprintf(stderr, "Could not allocate video frame\n");
exit(1);
}
frame->format = c->pix_fmt;
frame->width = c->width;
frame->height = c->height;
/* the image can be allocated by any means and av_image_alloc() is
* just the most convenient way if av_malloc() is to be used */
ret = av_image_alloc(frame->data, frame->linesize, c->width, c->height,
c->pix_fmt, 32);
if (ret < 0) {
fprintf(stderr, "Could not allocate raw picture buffer\n");
exit(1);
}
/* encode 1 second of video */
for (i = 0; i < 25; i++) {
av_init_packet(&pkt);
pkt.data = NULL; // packet data will be allocated by the encoder
pkt.size = 0;
fflush(stdout);
/* prepare a dummy image */
/* Y */
for (y = 0; y < c->height; y++) {
for (x = 0; x < c->width; x++) {
frame->data[0][y * frame->linesize[0] + x] = x + y + i * 3;
}
}
/* Cb and Cr */
for (y = 0; y < c->height/2; y++) {
for (x = 0; x < c->width/2; x++) {
frame->data[1][y * frame->linesize[1] + x] = 128 + y + i * 2;
frame->data[2][y * frame->linesize[2] + x] = 64 + x + i * 5;
}
}
frame->pts = i;
/* encode the image */
nstart = ACE_OS::gettimeofday().get_msec();
ret = avcodec_encode_video2(c, &pkt, frame, &got_output);
if (ret < 0) {
fprintf(stderr, "Error encoding frame\n");
exit(1);
}
if (got_output) {
printf("%s take time:%d\n",codec_name,ACE_OS::gettimeofday().get_msec()-nstart);
printf("Write frame %3d (size=%5d)\n", i, pkt.size);
fwrite(pkt.data, 1, pkt.size, f);
av_packet_unref(&pkt);
}
}
/* get the delayed frames */
for (got_output = 1; got_output; i++) {
fflush(stdout);
ret = avcodec_encode_video2(c, &pkt, NULL, &got_output);
if (ret < 0) {
fprintf(stderr, "Error encoding frame\n");
exit(1);
}
if (got_output) {
printf("Write frame %3d (size=%5d)\n", i, pkt.size);
fwrite(pkt.data, 1, pkt.size, f);
av_packet_unref(&pkt);
}
}
/* add sequence end code to have a real MPEG file */
fwrite(endcode, 1, sizeof(endcode), f);
fclose(f);
avcodec_close(c);
av_free(c);
av_freep(&frame->data[0]);
av_frame_free(&frame);
printf("\n");
}
int main(int argc, char **argv)
{
const char *output_type;
/* register all the codecs */
avcodec_register_all();
video_encode_example("test.h264", "h264_nvenc");
return 0;
}it encode one frame to a packet about 1800ms,this is too slow. I use Nvidia Grid K1.Is there some parameter error ? Thanke you very much
-
ffmpeg: is it necessary to create a copy of the original AVCodecContext to call avcodec_decode_video2?
8 December 2015, by cgwic — Should I use a copy:
videoCodecCtx = avcodec_alloc_context3(videoDecoder);
avcodec_copy_context(videoCodecCtx, formatContext->streams[videoStreamIndex]->codec);
ret = avcodec_open2(videoCodecCtx, videoDecoder, NULL);
if (ret < 0) {
avErrorMsg("Error opening video codec context", ret);
goto exit;
}

or simply use the existing codec context from the AVFormatContext:
videoCodecCtx = formatContext->streams[videoStreamIndex]->codec;
ret = avcodec_open2(videoCodecCtx, videoDecoder, NULL);
if (ret < 0) {
avErrorMsg("Error opening video codec context", ret);
goto exit;
}

In my case both work fine, but in the old Dranger tutorial he creates a copy, and the updated code uses the existing AVCodecContext.
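For what it's worth, in FFmpeg 3.1 and later both patterns are discouraged: the stream's embedded context (streams[i]->codec) and avcodec_copy_context are deprecated, and the usual approach is to allocate a fresh context and fill it from the stream's codecpar. A sketch of that pattern (error handling condensed; this is a fragment, not a complete program):

```c
#include <libavcodec/avcodec.h>
#include <libavformat/avformat.h>

/* Sketch: build a private decoder context from the stream parameters
 * instead of copying or reusing streams[i]->codec. */
AVCodecContext *open_decoder(AVFormatContext *fmt, int stream_index,
                             const AVCodec *decoder)
{
    AVCodecContext *ctx = avcodec_alloc_context3(decoder);
    if (!ctx)
        return NULL;
    /* Copies width/height, pix_fmt, extradata, etc. from the demuxer. */
    if (avcodec_parameters_to_context(ctx, fmt->streams[stream_index]->codecpar) < 0 ||
        avcodec_open2(ctx, decoder, NULL) < 0) {
        avcodec_free_context(&ctx);
        return NULL;
    }
    return ctx;
}
```

With a private context, nothing you set during decoding can interfere with the demuxer's own state, which is the concern the copy in the Dranger tutorial was addressing.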