
Media (1)
-
GetID3 - File information block
9 April 2013, by
Updated: May 2013
Language: French
Type: Image
Other articles (59)
-
The SPIPmotion queue
28 November 2010
A queue stored in the database
When it is installed, SPIPmotion creates a new table in the database named spip_spipmotion_attentes.
This new table is made up of the following fields: id_spipmotion_attente, the unique numeric identifier of the task to be processed; id_document, the numeric identifier of the original document to be encoded; id_objet, the unique identifier of the object to which the encoded document will automatically be attached; objet, the type of object to which (...)
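As a rough sketch only, the queue row described above can be pictured as a struct; the field names come from the excerpt, while the types are assumptions and the columns lost at "(...)" are omitted.

#include <string>

// Hypothetical mirror of one spip_spipmotion_attentes row (types assumed).
struct SpipmotionAttente {
    long id_spipmotion_attente; // unique numeric identifier of the task to process
    long id_document;           // numeric identifier of the original document to encode
    long id_objet;              // unique identifier of the object the encoded document is attached to
    std::string objet;          // type of that object
};
-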
Customising categories
21 June 2013
Category creation form
For those who know SPIP well, a category can be thought of as a section (rubrique).
For a document of type "category", the fields offered by default are: Texte (text)
This form can be modified under:
Administration > Configuration des masques de formulaire.
For a document of type "media", the fields not displayed by default are: Descriptif rapide (short description)
It is also in this configuration section that one can specify the (...)
-
Contribute to documentation
13 April 2011
Documentation is vital to the development of improved technical capabilities.
MediaSPIP welcomes documentation from users as well as developers, including: critiques of existing features and functions; articles contributed by developers, administrators, content producers and editors; screenshots to illustrate the above; translations of existing documentation into other languages.
To contribute, register for the project users' mailing (...)
On other sites (8387)
-
Decoding H.264 frames from an RTP stream
9 October 2013, by Dmitry Bakhtiyarov
I am using the live555 and ffmpeg libraries to receive and decode an RTP H.264 stream from a server. The video stream was encoded by ffmpeg, using the Baseline profile and
x264_param_default_preset(m_params, "veryfast", "zerolatency")
I read this topic and add the SPS and PPS data to every frame that I receive from the network:
void ClientSink::NewFrameHandler(unsigned frameSize, unsigned numTruncatedBytes,
    timeval presentationTime, unsigned durationInMicroseconds)
{
    ...
    EncodedFrame tmp;
    tmp.m_frame = std::vector<unsigned char>(m_tempBuffer.data(), m_tempBuffer.data() + frameSize);
    tmp.m_duration = durationInMicroseconds;
    tmp.m_pts = presentationTime;
    // Prepend SPS and PPS data to the frame; TODO: some devices may already send SPS and PPS data in the frame.
    tmp.m_frame.insert(tmp.m_frame.begin(), m_spsPpsData.cbegin(), m_spsPpsData.cend());
    emit newEncodedFrame( SharedEncodedFrame(tmp) );
    m_frameCounter++;
    this->continuePlaying();
}
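One detail worth calling out: live555 delivers bare NAL units, while ffmpeg's H.264 decoder expects Annex B framing, i.e. a 00 00 00 01 start code in front of the SPS, the PPS and the slice data. A minimal sketch of that prefixing, assuming m_sps and m_pps hold the bare parameter sets from the SDP's sprop-parameter-sets (the helper and member names here are hypothetical, not from the original post):

#include <vector>
#include <cstddef>

static const unsigned char kStartCode[4] = { 0x00, 0x00, 0x00, 0x01 };

// Append one NAL unit preceded by its Annex B start code.
static void appendNal(std::vector<unsigned char>& dst, const unsigned char* nal, std::size_t len)
{
    dst.insert(dst.end(), kStartCode, kStartCode + 4);
    dst.insert(dst.end(), nal, nal + len);
}

std::vector<unsigned char> ClientSink::buildAnnexBFrame(std::size_t frameSize)
{
    std::vector<unsigned char> out;
    appendNal(out, m_sps.data(), m_sps.size());     // SPS first
    appendNal(out, m_pps.data(), m_pps.size());     // then PPS
    appendNal(out, m_tempBuffer.data(), frameSize); // then the received slice
    return out;
}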
These frames are then received by the decoder:
bool H264Decoder::decodeFrame(SharedEncodedFrame orig_frame)
{
    ...
    while (m_packet.size > 0)
    {
        int got_picture;
        int len = avcodec_decode_video2(m_decoderContext, m_picture, &got_picture, &m_packet);
        if (len < 0)
        {
            emit criticalError(QString("Decoding error"));
            return false;
        }
        if (got_picture)
        {
            std::vector<unsigned char> result;
            this->storePicture(result);
            if ( m_picture->format == AVPixelFormat::AV_PIX_FMT_YUV420P )
            {
                //QImage img = QImage(result.data(), m_picture->width, m_picture->height, QImage::Format_RGB888);
                Frame_t result_rgb;
                if (!convert_yuv420p_to_rgb32(result, m_picture->width, m_picture->height, result_rgb))
                {
                    emit criticalError( QString("Failed to convert YUV420p image into rgb32; can't create QImage!"));
                    return false;
                }
                unsigned char* copy_img = new unsigned char[result_rgb.size()];
                // The copy is needed because QImage shares the buffer it is given;
                // using the QImage after result_rgb is destroyed would crash.
                std::copy(result_rgb.cbegin(), result_rgb.cend(), copy_img);
                QImage img = QImage(copy_img, m_picture->width, m_picture->height, QImage::Format_RGB32,
                    [](void* array)
                    {
                        delete[] array;
                    }, copy_img);
                img.save(QString("123.bmp"));
                emit newDecodedFrame(img);
            }
avcodec_decode_video2 decodes the frames without any error message, but the decoded frames, after converting them from yuv420p to rgb32, are invalid. An example image is available at this link.
Do you have any idea what I am doing wrong?
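For what it's worth, skewed or garbled output after a hand-rolled YUV420p-to-RGB conversion is frequently caused by ignoring AVFrame::linesize, since decoded planes usually carry per-row alignment padding. A minimal sketch that converts through libswscale instead, assuming the decoded AVFrame from the post (the function name is hypothetical; QImage::Format_RGB32 corresponds to AV_PIX_FMT_BGRA on little-endian machines):

extern "C" {
#include <libswscale/swscale.h>
#include <libavutil/frame.h>
}
#include <vector>

bool yuv420p_to_rgb32(const AVFrame* pic, std::vector<unsigned char>& out)
{
    SwsContext* sws = sws_getContext(pic->width, pic->height, AV_PIX_FMT_YUV420P,
                                     pic->width, pic->height, AV_PIX_FMT_BGRA,
                                     SWS_BILINEAR, nullptr, nullptr, nullptr);
    if (!sws)
        return false;

    out.resize(static_cast<size_t>(pic->width) * pic->height * 4);
    uint8_t* dst[4] = { out.data(), nullptr, nullptr, nullptr };
    int dst_linesize[4] = { pic->width * 4, 0, 0, 0 };

    // sws_scale walks each source row using pic->linesize, so padded rows
    // are read correctly instead of drifting a few bytes per line.
    sws_scale(sws, pic->data, pic->linesize, 0, pic->height, dst, dst_linesize);
    sws_freeContext(sws);
    return true;
}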
-
Access violation reading location when calling avformat_open_input
28 May 2023, by nokla
I am trying to build a function that reads a video from an .mp4 file using ffmpeg and C++.


This function was already working on one computer, but when I copied the code to another one with the same environment, it returned the following error:


Exception thrown at 0x00007FFC81B7667C (avutil-57.dll)
in VideoEditor.exe: 0xC0000005: Access violation 
reading location 0x0000000000000000.



If anyone has ever encountered such an issue or has any idea how to solve it, please share.



This is the function :


void VideoSource::ReadSource()
{
 auto lock = this->LockSource();
 std::vector<SyncObject<AVFrame*>> newSource;

 // Open the file using libavformat
 AVFormatContext* av_format_ctx = avformat_alloc_context();
 if (!av_format_ctx) {
 //wxMessageBox("Couldn't create AVFormatContext\n");
 read = false;
 return;
 }
 if (avformat_open_input(&av_format_ctx, path.c_str(), NULL, NULL) != 0) { // error here
 //wxMessageBox("Couldn't open video file\n");
 read = false;
 return;
 }

 // Find the first valid video stream inside the file
 int video_stream_index = -1;
 AVCodecParameters* av_codec_params = NULL;
 const AVCodec* av_codec = NULL;
 for (uint i = 0; i < av_format_ctx->nb_streams; i++)
 {
 av_codec_params = av_format_ctx->streams[i]->codecpar;
 av_codec = avcodec_find_decoder(av_codec_params->codec_id);

 if (!av_codec) {
 continue;
 }
 if (av_codec_params->codec_type == AVMEDIA_TYPE_VIDEO) {
 video_stream_index = i;
 break;
 }
 }

 if (video_stream_index == -1) {
 //wxMessageBox("Couldn't find valid video stream inside file\n");
 read = false;
 return;
 }

 // Set up a codec context for the decoder
 AVCodecContext* av_codec_ctx = avcodec_alloc_context3(av_codec);
 if (!av_codec_ctx) {
 //wxMessageBox("Couldn't create AVCpdecContext\n");
 read = false;
 return;
 }

 if (avcodec_parameters_to_context(av_codec_ctx, av_codec_params) < 0)
 {
 //wxMessageBox("Couldn't initialize AVCodecContext\n");
 read = false;
 return;
 }
 if (avcodec_open2(av_codec_ctx, av_codec, NULL) < 0) {
 //wxMessageBox("Couldn't open codec\n");

 read = false;
 return;
 }

 AVFrame* av_frame = av_frame_alloc();
 if (!av_frame) {
 //wxMessageBox("Couldn't allocate AVFrame\n");

 read = false;
 return;
 }
 AVPacket* av_packet = av_packet_alloc();
 if (!av_packet) {
 //wxMessageBox("Couldn't allocate AVPacket\n");

 read = false;
 return;
 }
 int response;
 int counter = 0;
 while (av_read_frame(av_format_ctx, av_packet) >= 0 && counter < 100000) {
 if (av_packet->stream_index != video_stream_index) {
 av_packet_unref(av_packet);
 continue;
 }
 response = avcodec_send_packet(av_codec_ctx, av_packet);
 if (response < 0) {
 //wxMessageBox("Failed to decode packet: %s\n", av_err2str(response));

 read = false;
 return;
 }
 response = avcodec_receive_frame(av_codec_ctx, av_frame);
 if (response == AVERROR(EAGAIN) || response == AVERROR_EOF) {
 av_packet_unref(av_packet);
 continue;
 }
 else if (response < 0) {
 //wxMessageBox("Failed to decode frame: %s\n", av_err2str(response));

 read = false;
 return;
 }
 counter++;
 av_packet_unref(av_packet); // reset the packet so it can be refilled by the next av_read_frame() call

 response = avcodec_send_frame(av_codec_ctx, av_frame);
 std::string tmp = av_err2str(response);
 //source.push_back(*av_frame);
 //auto mat_frame = Avframe2Cvmat(av_frame);

 //source.push_back(im);
 //bool isEqual = (cv::sum(Avframe2Cvmat(av_frame) != Avframe2Cvmat(&source[0])) == cv::Scalar(0, 0, 0, 0));
 //bool isEqual = (cv::sum(im != source[0]) == cv::Scalar(0, 0, 0, 0));
 //im.release();
 newSource.push_back(SyncObject(av_frame_clone(av_frame)));

 /*
 if (int iRet = av_frame_copy(&source.back(), av_frame) == 0) {
 av_log(NULL, AV_LOG_INFO, "Ok");
 }
 else {
 av_log(NULL, AV_LOG_INFO, "Error: %s\n", av_err2str(iRet));
 }*/
 av_frame_unref(av_frame);
 }


 avformat_close_input(&av_format_ctx);
 avformat_free_context(av_format_ctx);
 av_frame_free(&av_frame);
 av_packet_free(&av_packet);
 avcodec_free_context(&av_codec_ctx);
 //this->LockSource();
 source_.swap(newSource);
}



This function is inside a class, and these are its members:


bool created;
 bool read;
 std::string path; // THE PATH TO THE FILE
 std::vector<SyncObject<AVFrame*>> source_; // vector containing all of the video frames





This is what I get in the debugger when the error occurs:


[Screenshot: debugger values]
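Not an answer, but a hedged diagnostic sketch: an access violation reading address 0 inside avutil-57.dll, on a machine where the identical code works elsewhere, often means the ffmpeg DLLs found at run time do not match the headers the program was compiled against. Comparing compile-time and run-time library versions can confirm or rule that out:

extern "C" {
#include <libavutil/avutil.h>
#include <libavformat/avformat.h>
#include <libavcodec/avcodec.h>
}
#include <cstdio>

int main()
{
    // The macros come from the headers used at build time; the function
    // calls report what the loaded shared libraries actually are.
    std::printf("avutil   headers %u, runtime %u\n",
                (unsigned)LIBAVUTIL_VERSION_INT, avutil_version());
    std::printf("avformat headers %u, runtime %u\n",
                (unsigned)LIBAVFORMAT_VERSION_INT, avformat_version());
    std::printf("avcodec  headers %u, runtime %u\n",
                (unsigned)LIBAVCODEC_VERSION_INT, avcodec_version());
    return 0;
}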



-
Extracting motion vectors from an H.264 bitstream [on hold]
22 July 2015, by Serhan
I'm looking for an open-source tool/code or some guidance to extract the motion vectors (MVs) of an H.264-encoded bit sequence. I'm already aware that motion vectors can be visualized using ffmpeg with the following command:
ffplay -flags2 +export_mvs input.mp4 -vf codecview=mv=pf+bf+bb
However, I want to produce a log file where the MVs of P and B frames are listed frame by frame. I checked the structure of MVs in libavutil/motion_vector.h, but I couldn't find an example showing how they are extracted and overlaid on the original sequence by ffplay. I thought that if I could find that out, I could possibly rearrange the code to extract the MVs to a text file.
I also tried the code given in this answer, but it doesn't seem to work with newer versions of ffmpeg.
I would appreciate any example code or hints.
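For reference, ffmpeg ships doc/examples/extract_mvs.c, which does essentially this: it opens the decoder with flags2=+export_mvs and reads the AV_FRAME_DATA_MOTION_VECTORS side data off every decoded frame. A condensed sketch along those lines, with error handling trimmed and "input.mp4" as a placeholder path, printing one CSV-style line per motion vector:

extern "C" {
#include <libavformat/avformat.h>
#include <libavcodec/avcodec.h>
#include <libavutil/motion_vector.h>
#include <libavutil/dict.h>
}
#include <cstdio>

int main()
{
    AVFormatContext* fmt = nullptr;
    if (avformat_open_input(&fmt, "input.mp4", nullptr, nullptr) < 0)
        return 1;
    avformat_find_stream_info(fmt, nullptr);

    const AVCodec* dec = nullptr;
    int vstream = av_find_best_stream(fmt, AVMEDIA_TYPE_VIDEO, -1, -1, &dec, 0);
    if (vstream < 0)
        return 1;

    AVCodecContext* ctx = avcodec_alloc_context3(dec);
    avcodec_parameters_to_context(ctx, fmt->streams[vstream]->codecpar);

    // Ask the decoder to attach motion vectors to each frame as side data.
    AVDictionary* opts = nullptr;
    av_dict_set(&opts, "flags2", "+export_mvs", 0);
    avcodec_open2(ctx, dec, &opts);
    av_dict_free(&opts);

    AVPacket* pkt = av_packet_alloc();
    AVFrame* frame = av_frame_alloc();
    long frame_no = 0;
    std::printf("frame,source,blockw,blockh,src_x,src_y,dst_x,dst_y\n");
    while (av_read_frame(fmt, pkt) >= 0) {
        if (pkt->stream_index == vstream && avcodec_send_packet(ctx, pkt) >= 0) {
            while (avcodec_receive_frame(ctx, frame) >= 0) {
                AVFrameSideData* sd =
                    av_frame_get_side_data(frame, AV_FRAME_DATA_MOTION_VECTORS);
                if (sd) {
                    const AVMotionVector* mvs = (const AVMotionVector*)sd->data;
                    size_t n = sd->size / sizeof(*mvs);
                    for (size_t i = 0; i < n; i++)
                        std::printf("%ld,%d,%d,%d,%d,%d,%d,%d\n", frame_no,
                                    mvs[i].source, mvs[i].w, mvs[i].h,
                                    mvs[i].src_x, mvs[i].src_y,
                                    mvs[i].dst_x, mvs[i].dst_y);
                }
                frame_no++;
                av_frame_unref(frame);
            }
        }
        av_packet_unref(pkt);
    }
    av_frame_free(&frame);
    av_packet_free(&pkt);
    avcodec_free_context(&ctx);
    avformat_close_input(&fmt);
    return 0;
}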