
Media (1)
-
Rennes Emotion Map 2010-11
19 October 2011
Updated: July 2013
Language: French
Type: Text
Other articles (34)
-
Participate in its documentation
10 April 2011
Documentation is one of the most important and most demanding tasks when building a technical tool.
Any outside contribution on this subject is essential: critiquing what already exists; helping to write articles aimed at users (MediaSPIP administrators or simply content producers) or at developers; creating explanatory screencasts; translating the documentation into a new language.
To do so, you can register on (...)
-
Contribute to a better visual interface
13 April 2011
MediaSPIP is based on a system of themes and templates. Templates define the placement of information on the page, and can be adapted to a wide range of uses. Themes define the overall graphic appearance of the site.
Anyone can submit a new graphic theme or template and make it available to the MediaSPIP community.
-
Libraries and binaries specific to video and audio processing
31 January 2010
The following software and libraries are used by SPIPmotion in one way or another.
Required binaries: FFmpeg: the main encoder, which can transcode almost every type of video and audio file into formats readable on the Internet (see this tutorial for its installation); Oggz-tools: tools for inspecting Ogg files; MediaInfo: retrieves information from most video and audio formats.
Complementary, optional binaries: flvtool2: (...)
On other sites (7765)
-
FFMPEG RTSP stream to MPEG4/H264 file using libx264
16 October 2020, by Phi
Heyo folks,



I'm attempting to transcode/remux an RTSP stream in H264 format into an MPEG-4 container containing just the H264 video stream. Basically, webcam output into an MP4 container.



I can get a poorly coded MP4 produced, using this code:



// Variables here for demo
AVOutputFormat * video_file_output_format = nullptr;   // output format from av_guess_format
AVFormatContext * video_file_format_context = nullptr; // output (MP4) container
AVFormatContext * rtsp_format_context = nullptr;       // input (RTSP) container
AVCodecContext * video_file_codec_context = nullptr;
AVCodecContext * rtsp_vidstream_codec_context = nullptr;
AVCodec * video_file_encoder_codec = nullptr;
AVCodec * rtsp_decoder_codec = nullptr;
AVPacket packet = {0};
AVStream * video_file_stream = nullptr;
int errorNum = 0, video_stream_index = 0;
std::string outputMP4file = "D:\\somemp4file.mp4";

// begin
AVDictionary * opts = nullptr;
av_dict_set(&opts, "rtsp_transport", "tcp", 0);

if ((errorNum = avformat_open_input(&rtsp_format_context, uriANSI.c_str(), NULL, &opts)) < 0) {
 errOut << "Connection failed: avformat_open_input failed with error " << errorNum << ":\r\n" << ErrorRead(errorNum);
 TacticalAbort();
 return;
}

rtsp_format_context->max_analyze_duration = 50000;
if ((errorNum = avformat_find_stream_info(rtsp_format_context, NULL)) < 0) {
 errOut << "Connection failed: avformat_find_stream_info failed with error " << errorNum << ":\r\n" << ErrorRead(errorNum);
 TacticalAbort();
 return;
}

video_stream_index = errorNum = av_find_best_stream(rtsp_format_context, AVMEDIA_TYPE_VIDEO, -1, -1, NULL, 0);

if (video_stream_index < 0) {
 errOut << "Connection in unexpected state; made a connection, but there was no video stream.\r\n"
 "Attempts to find a video stream resulted in error " << errorNum << ": " << ErrorRead(errorNum);
 TacticalAbort();
 return;
}

rtsp_vidstream_codec_context = rtsp_format_context->streams[video_stream_index]->codec;

av_init_packet(&packet);

if (!(video_file_output_format = av_guess_format(NULL, outputMP4file.c_str(), NULL))) {
 TacticalAbort();
 throw std::exception("av_guess_format");
}

if (!(rtsp_decoder_codec = avcodec_find_decoder(rtsp_vidstream_codec_context->codec_id))) {
 errOut << "Connection failed: connected, but avcodec_find_decoder returned null.\r\n"
 "Couldn't find codec with an AV_CODEC_ID value of " << rtsp_vidstream_codec_context->codec_id << ".";
 TacticalAbort();
 return;
}

video_file_format_context = avformat_alloc_context();
video_file_format_context->oformat = video_file_output_format;

if (strcpy_s(video_file_format_context->filename, sizeof(video_file_format_context->filename), outputMP4file.c_str())) {
 errOut << "Couldn't open video file: strcpy_s failed with error " << errno << ".";
 std::string log = errOut.str();
 TacticalAbort();
 throw std::exception("strcpy_s");
}

if (!(video_file_encoder_codec = avcodec_find_encoder(video_file_output_format->video_codec))) {
 TacticalAbort();
 throw std::exception("avcodec_find_encoder");
}

// MARKER ONE

if (!outputMP4file.empty() &&
 !(video_file_output_format->flags & AVFMT_NOFILE) &&
 (errorNum = avio_open2(&video_file_format_context->pb, outputMP4file.c_str(), AVIO_FLAG_WRITE, nullptr, &opts)) < 0) {
 errOut << "Couldn't open video file \"" << outputMP4file << "\" for writing : avio_open2 failed with error " << errorNum << ": " << ErrorRead(errorNum);
 TacticalAbort();
 return;
}

// Create stream in MP4 file
if (!(video_file_stream = avformat_new_stream(video_file_format_context, video_file_encoder_codec))) {
 TacticalAbort();
 return;
}

video_file_codec_context = video_file_stream->codec;

// MARKER TWO

// error -22/-21 in avio_open2 if this is skipped
if ((errorNum = avcodec_copy_context(video_file_codec_context, rtsp_vidstream_codec_context)) != 0) {
 TacticalAbort();
 throw std::exception("avcodec_copy_context");
}

//video_file_codec_context->codec_tag = 0;

/*
// MARKER 3 - is this not needed? Examples suggest not.
if ((errorNum = avcodec_open2(video_file_codec_context, video_file_encoder_codec, &opts)) < 0)
{
 errOut << "Couldn't open video file codec context: avcodec_open2 failed with error " << errorNum << ": " << ErrorRead(errorNum);
 std::string log = errOut.str();
 TacticalAbort();
 throw std::exception("avcodec_open2, video file");
}*/

//video_file_format_context->flags |= AVFMT_FLAG_GENPTS;
if (video_file_format_context->oformat->flags & AVFMT_GLOBALHEADER)
{
 video_file_codec_context->flags |= CODEC_FLAG_GLOBAL_HEADER;
}

if ((errorNum = avformat_write_header(video_file_format_context, &opts)) < 0) {
 errOut << "Couldn't open video file: avformat_write_header failed with error " << errorNum << ":\r\n" << ErrorRead(errorNum);
 std::string log = errOut.str();
 TacticalAbort();
 return;
}




However, there are several issues:

1. I can't pass any x264 options to the output file. The output H264 matches the input H264's profile/level; switching cameras to a different model switches the H264 level.
2. The timing of the output file is noticeably off.
3. The duration of the output file is massively off. A few seconds of footage becomes hours, although the playtime doesn't match. (FWIW, I'm using VLC to play them.) For issues 2 and 3, see the loop sketch just after this list.
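Regarding issues 2 and 3: every remuxing example I've read rescales each packet's timestamps from the input stream's time base to the output stream's time base before writing it. A minimal sketch of that loop, reusing the variable names from my code above (error handling trimmed; this is the pattern I'm comparing against, not a verbatim copy of my loop):

// Read packets from the RTSP input and remux the video stream into the MP4.
while (av_read_frame(rtsp_format_context, &packet) >= 0) {
 if (packet.stream_index != video_stream_index) {
  av_packet_unref(&packet);
  continue;
 }

 // Rescale PTS/DTS/duration from the RTSP stream's time base to the
 // time base the muxer picked for the output stream.
 av_packet_rescale_ts(&packet,
  rtsp_format_context->streams[video_stream_index]->time_base,
  video_file_stream->time_base);
 packet.pos = -1;
 packet.stream_index = video_file_stream->index;

 if ((errorNum = av_interleaved_write_frame(video_file_format_context, &packet)) < 0)
  break;

 av_packet_unref(&packet);
}

av_write_trailer(video_file_format_context);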









If I manually increment PTS per packet and set DTS equal to PTS, it plays too fast: 2-3 seconds' worth of footage in one second of playtime, and the duration is hours long. The footage also blurs past, about 10 seconds' footage in a second.

If I let FFmpeg decide (with or without the GENPTS flag), the file has a variable frame rate (probably as expected), but it plays the whole file in an instant and still has a huge duration (over forty hours for a few seconds). The duration isn't "real", as the file plays in an instant.

Passing x264 options



At Marker One, I try to set the profile by passing options to avio_open2. The options are simply ignored by libx264. I've tried:


av_dict_set(&opts, "vprofile", "main", 0);
av_dict_set(&opts, "profile", "main", 0); // error, missing '('
// FF_PROFILE_H264_MAIN equals 77, so I also tried
av_dict_set(&opts, "vprofile", "77", 0); 
av_dict_set(&opts, "profile", "77", 0);




It does seem to read the profile setting, but it doesn't use it. At Marker Two, I tried to set it after the avio_open2, before avformat_write_header.


// I tried all 4 av_dict_set from earlier, passing it to avformat_write_header.
// None had any effect, they weren't consumed.
av_opt_set(video_file_codec_context, "profile", "77", 0);
av_opt_set(video_file_codec_context, "profile", "main", 0);
video_file_codec_context->profile = FF_PROFILE_H264_MAIN;
av_opt_set(video_file_codec_context->priv_data, "profile", "77", 0);
av_opt_set(video_file_codec_context->priv_data, "profile", "main", 0);




Messing with priv_data made the program unstable, but I was trying anything at that point.
I'd like to solve issue 1 (passing settings) first, since I imagine it would bottleneck any attempt to solve issues 2 or 3.
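For what it's worth, the only way I've seen libx264 actually consume a "profile" option in the examples is as a private option dictionary handed to avcodec_open2 on an encoder context that is genuinely used for encoding, i.e. when the video is re-encoded rather than stream-copied with avcodec_copy_context as above. A rough sketch of what I understand that to look like (the pixel format and frame rate here are placeholders I'd have to take from the RTSP stream):

// Sketch: re-encoding path where libx264 private options such as "profile"
// are passed to avcodec_open2. Field values below are assumptions.
AVDictionary * x264_opts = nullptr;
av_dict_set(&x264_opts, "profile", "main", 0); // libx264 private option
av_dict_set(&x264_opts, "preset", "veryfast", 0);

video_file_codec_context->width = rtsp_vidstream_codec_context->width;
video_file_codec_context->height = rtsp_vidstream_codec_context->height;
video_file_codec_context->pix_fmt = AV_PIX_FMT_YUV420P; // assumed
video_file_codec_context->time_base = AVRational{ 1, 25 }; // assumed frame rate

if ((errorNum = avcodec_open2(video_file_codec_context, video_file_encoder_codec, &x264_opts)) < 0) {
 errOut << "avcodec_open2 failed with error " << errorNum << ": " << ErrorRead(errorNum);
}
av_dict_free(&x264_opts);

Of course, that path would also mean decoding packets with rtsp_decoder_codec and feeding the raw frames to the encoder instead of copying packets straight through, which is why I'd rather get options working with the current setup if that's possible.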



I've been fiddling with this for the better part of a month now. I've been through dozens of documentation pages, Q&As, and examples. It doesn't help that quite a few are outdated.



Any help would be appreciated.



Cheers


-
How to set up a Python AWS Lambda zip with OpenCV + FFmpeg?
5 June 2017, by dannyeuu
On my desktop I have FFmpeg and OpenCV compiled with CUDA, and it works well.
The function loads a video from S3 and runs a model that extracts some frames and stores them in another S3 bucket.
But I have to deploy a conda env with my Lambda function, and the packaged OpenCV comes without FFmpeg; yet I need FFmpeg for OpenCV to open MP4 video files.
python-project/
- cv2/
- numpy/
- lambda_handler.py
How can I set up OpenCV so that it can read videos?
-
How to create a Python AWS Lambda zip with OpenCV + FFmpeg?
8 June 2017, by dannyeuu
On my desktop I have FFmpeg and OpenCV compiled correctly, and it works well.
The function loads a video from S3 and runs a model that extracts some frames and stores them in another S3 bucket.
But I have to deploy a conda env with my Lambda function, and the packaged OpenCV comes without FFmpeg; yet I need FFmpeg for OpenCV to open MP4 video files.
python-project/
- cv2/
- numpy/
- lambda_handler.py
How can I set up OpenCV so that it can read videos? I tried to compile it on another desktop, but without success.