Advanced search

Media (1)

Keyword: - Tags -/biomaping

Other articles (26)

  • What is an editorial

    21 June 2013, by

    Write your point of view in an article. It will be filed in a section set aside for this purpose.
    An editorial is a text-only article. Its purpose is to gather points of view in a dedicated section. Only one editorial is featured on the home page; to read earlier ones, browse the dedicated section.
    You can customize the editorial creation form.
    Editorial creation form In the case of a document of the editorial type, the (...)

  • Publishing on MédiaSpip

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, if your Médiaspip installation is at version 0.2 or higher. If needed, contact the administrator of your MédiaSpip to find out.

  • Contribute to translation

    13 April 2011

    You can help us improve the language used in the software interface to make MediaSPIP more accessible and user-friendly. You can also translate the interface into any language, allowing it to spread to new linguistic communities.
    To do this, we use the translation interface of SPIP, where all the language modules of MediaSPIP are available. Just subscribe to the mailing list and request further information on translation.
    MediaSPIP is currently available in French and English (...)

On other sites (3939)

  • Benchmarking video encoding with FFMPEG

    27 February 2020, by TheLebDev

    I’ve built a Go-based application that transforms MP4 files into HLS playlists for VoD purposes using FFmpeg. After running a few tests on my server, I’ve measured how long it takes to process each minute of output video. I’ve decided to finally upgrade my server because, obviously, it’s an IO/CPU-heavy process, so the bigger the CPU/RAM, the better. However, I need to find out how much to upgrade my server to reach a desired output efficiency.

    Is there a way of benchmarking that would let me determine the required specifications of any given server for a given required efficiency?
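There is no standard FFmpeg switch that sizes servers for you; a common approach is to time a short, representative transcode on the current machine and extrapolate. A minimal sketch of that idea (hypothetical helper names; the linear-scaling estimate is a rough heuristic, since encoders rarely scale perfectly with core count):

```cpp
#include <chrono>
#include <cstdlib>
#include <string>

// Times one run of a shell command (e.g. a short representative ffmpeg
// transcode) and returns the wall-clock seconds, or -1.0 on failure.
double time_transcode(const std::string& cmd) {
    auto start = std::chrono::steady_clock::now();
    if (std::system(cmd.c_str()) != 0) return -1.0;
    return std::chrono::duration<double>(
        std::chrono::steady_clock::now() - start).count();
}

// Rough linear estimate: how much faster a server must be to hit the
// target seconds-per-output-minute, given the value measured here.
// Treat this as a lower bound when sizing hardware.
double required_scaling(double measured_sec_per_out_min,
                        double target_sec_per_out_min) {
    return measured_sec_per_out_min / target_sec_per_out_min;
}
```

For example, timing `ffmpeg -y -i input.mp4 -t 60 -c:v libx264 -f hls out.m3u8` (with `input.mp4` as a placeholder sample) yields the seconds needed per output minute; FFmpeg's own `-benchmark` flag adds CPU time and peak memory per run to refine the wall-clock figure.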

  • Screen Recording with FFmpeg-Lib with c++

    27 January 2020, by Baschdel

    I’m trying to record the whole desktop stream with FFmpeg on Windows.
    I found a working example here. The problem is that some of the functions are deprecated, so I tried to replace them with the updated ones.

    But there are some slight problems. The errors "has triggered a breakpoint." and "not able to read the location." occur.
    The bigger problem is that I don’t know whether this is the right way to do this.

    My code looks like this:

    using namespace std;

    /* initialize the resources*/
    Recorder::Recorder()
    {

       av_register_all();
       avcodec_register_all();
       avdevice_register_all();
       cout<<"\nall required functions are registered successfully";
    }

    /* uninitialize the resources */
    Recorder::~Recorder()
    {

       avformat_close_input(&pAVFormatContext);
       if( !pAVFormatContext )
       {
           cout<<"\nfile closed successfully";
       }
       else
       {
           cout<<"\nunable to close the file";
           exit(1);
       }

       avformat_free_context(pAVFormatContext);
       if( !pAVFormatContext )
       {
           cout<<"\navformat free successfully";
       }
       else
       {
           cout<<"\nunable to free avformat context";
           exit(1);
       }

    }

    /* establishing the connection between camera or screen through its respective folder */
    int Recorder::openCamera()
    {

       value = 0;
       options = NULL;
       pAVFormatContext = NULL;

       pAVFormatContext = avformat_alloc_context();//Allocate an AVFormatContext.

       openScreen(pAVFormatContext);

       /* set frame per second */
       value = av_dict_set( &options,"framerate","30",0 );
       if(value < 0)
       {
         cout<<"\nerror in setting dictionary value";
          exit(1);
       }

       value = av_dict_set( &options, "preset", "medium", 0 );
       if(value < 0)
       {
         cout<<"\nerror in setting preset values";
         exit(1);
       }

    //  value = avformat_find_stream_info(pAVFormatContext,NULL);
       if(value < 0)
       {
         cout<<"\nunable to find the stream information";
         exit(1);
       }

       VideoStreamIndx = -1;

       /* find the first video stream index. (There is also an API available for this.) */
       for(int i = 0; i < pAVFormatContext->nb_streams; i++ ) // find video stream position/index.
       {
         if( pAVFormatContext->streams[i]->codecpar->codec_type == AVMEDIA_TYPE_VIDEO )
         {
            VideoStreamIndx = i;
            break;
         }

       }

       if( VideoStreamIndx == -1)
       {
         cout<<"\nunable to find the video stream index. (-1)";
         exit(1);
       }

       // get the codec context of the video stream (deprecated AVStream::codec access)
       pAVCodecContext = pAVFormatContext->streams[VideoStreamIndx]->codec;

       pAVCodec = avcodec_find_decoder(pAVCodecContext->codec_id);
       if( pAVCodec == NULL )
       {
         cout<<"\nunable to find the decoder";
         exit(1);
       }

       value = avcodec_open2(pAVCodecContext , pAVCodec , NULL);//Initialize the AVCodecContext to use the given AVCodec.
       if( value < 0 )
       {
         cout<<"\nunable to open the av codec";
         exit(1);
       }
    }

    /* initialize the video output file and its properties  */
    int Recorder::init_outputfile()
    {
       outAVFormatContext = NULL;
       value = 0;
       output_file = "output.mp4";

       avformat_alloc_output_context2(&outAVFormatContext, NULL, NULL, output_file);
       if (!outAVFormatContext)
       {
           cout<<"\nerror in allocating av format output context";
           exit(1);
       }

    /* Returns the output format in the list of registered output formats which best matches the provided parameters, or returns NULL if there is no match. */
       output_format = av_guess_format(NULL, output_file ,NULL);
       if( !output_format )
       {
        cout<<"\nerror in guessing the video format. try with correct format";
        exit(1);
       }

       video_st = avformat_new_stream(outAVFormatContext ,NULL);
       if( !video_st )
       {
           cout<<"\nerror in creating a av format new stream";
           exit(1);
       }

       if (codec_id == AV_CODEC_ID_H264)
       {
           av_opt_set(outAVCodecContext->priv_data, "preset", "slow", 0);
       }

       outAVCodec = avcodec_find_encoder(AV_CODEC_ID_MPEG4);
       if (!outAVCodec)
       {
           cout << "\nerror in finding the av codecs. try again with correct codec";
           exit(1);
       }

       outAVCodecContext = avcodec_alloc_context3(outAVCodec);
       if( !outAVCodecContext )
       {
           cout<<"\nerror in allocating the codec contexts";
           exit(1);
       }

       /* set property of the video file */
       outAVCodecContext = video_st->codec;
       outAVCodecContext->codec_id = AV_CODEC_ID_MPEG4;// AV_CODEC_ID_MPEG4; // AV_CODEC_ID_H264 // AV_CODEC_ID_MPEG1VIDEO
       outAVCodecContext->codec_type = AVMEDIA_TYPE_VIDEO;
       outAVCodecContext->pix_fmt  = AV_PIX_FMT_YUV420P;
       outAVCodecContext->bit_rate = 400000; // 2500000
       outAVCodecContext->width = 1920;
       outAVCodecContext->height = 1080;
       outAVCodecContext->gop_size = 3;
       outAVCodecContext->max_b_frames = 2;
       outAVCodecContext->time_base.num = 1;
       outAVCodecContext->time_base.den = 30; //15fps


       /* Some container formats (like MP4) require global headers to be present
          Mark the encoder so that it behaves accordingly. */

       if ( outAVFormatContext->oformat->flags & AVFMT_GLOBALHEADER)
       {
           outAVCodecContext->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;
       }

       value = avcodec_open2(outAVCodecContext, outAVCodec, NULL);
       if( value < 0)
       {
           cout<<"\nerror in opening the avcodec";
           exit(1);
       }

       /* create empty video file */
       if ( !(outAVFormatContext->flags & AVFMT_NOFILE) )
       {
        if( avio_open2(&outAVFormatContext->pb , output_file , AVIO_FLAG_WRITE ,NULL, NULL) < 0 )
        {
         cout<<"\nerror in creating the video file";
         exit(1);
        }
       }

       if(!outAVFormatContext->nb_streams)
       {
           cout<<"\noutput file does not contain any stream";
           exit(1);
       }

       /* important: MP4 and some other advanced containers require header information */
       value = avformat_write_header(outAVFormatContext , &options);
       if(value < 0)
       {
           cout<<"\nerror in writing the header context";
           exit(1);
       }

       /*
       // uncomment here to view the complete video file informations
       cout<<"\n\nOutput file information :\n\n";
       av_dump_format(outAVFormatContext , 0 ,output_file ,1);
       */
    }

    int Recorder::stop() {
       threading = false;

       demux->join();
       rescale->join();
       mux->join();

       return 0;
    }

    int Recorder::start() {
       initVideoThreads();
       return 0;
    }

    int Recorder::initVideoThreads() {
       demux = new thread(&Recorder::demuxVideoStream, this, pAVCodecContext, pAVFormatContext, VideoStreamIndx);

       rescale = new thread(&Recorder::rescaleVideoStream, this, pAVCodecContext, outAVCodecContext);

       mux = new thread(&Recorder::encodeVideoStream, this, outAVCodecContext);
       return 0;
    }

    void Recorder::demuxVideoStream(AVCodecContext* codecContext, AVFormatContext* formatContext, int streamIndex)
    {
       // init packet
       AVPacket* packet = (AVPacket*)av_malloc(sizeof(AVPacket));
       av_init_packet(packet);

       int ctr = 0;

       while (threading)
       {
           if (av_read_frame(formatContext, packet) < 0) {
               exit(1);
           }

           if (packet->stream_index == streamIndex)
           {
               int return_value; // = 0;
               ctr++;

               do
               {
                   return_value = avcodec_send_packet(codecContext, packet);
               } while (return_value == AVERROR(EAGAIN) && threading);

               //int i = avcodec_send_packet(codecContext, packet);
               if (return_value < 0 && threading) { // call Decoder
                   cout << "unable to decode video";
                   exit(1);
               }
           }
       }

       avcodec_send_packet(codecContext, NULL); // flush decoder

       // return 0;
    }

    void Recorder::rescaleVideoStream(AVCodecContext* inCodecContext, AVCodecContext* outCodecContext)
    {
       bool closing = false;
       AVFrame* inFrame = av_frame_alloc();
       if (!inFrame)
       {
           cout << "\nunable to release the avframe resources";
           exit(1);
       }

       int nbytes = av_image_get_buffer_size(outAVCodecContext->pix_fmt, outAVCodecContext->width, outAVCodecContext->height, 32);
       uint8_t* video_outbuf = (uint8_t*)av_malloc(nbytes);
       if (video_outbuf == NULL)
       {
           cout << "\nunable to allocate memory";
           exit(1);
       }

       AVFrame* outFrame = av_frame_alloc();//Allocate an AVFrame and set its fields to default values.
       if (!outFrame)
       {
           cout << "\nunable to release the avframe resources for outframe";
           exit(1);
       }

       // Setup the data pointers and linesizes based on the specified image parameters and the provided array.
       int value = av_image_fill_arrays(outFrame->data, outFrame->linesize, video_outbuf, AV_PIX_FMT_YUV420P, outAVCodecContext->width, outAVCodecContext->height, 1); // returns : the size in bytes required for src
       if (value < 0)
       {
           cout << "\nerror in filling image array";
       }
       int ctr = 0;

       while (threading || !closing) {
           int value = avcodec_receive_frame(inCodecContext, inFrame);
           if (value == 0) {
               ctr++;
               SwsContext* swsCtx_ = sws_getContext(inCodecContext->width,
                   inCodecContext->height,
                   inCodecContext->pix_fmt,
                   outAVCodecContext->width,
                   outAVCodecContext->height,
                   outAVCodecContext->pix_fmt,
                   SWS_BICUBIC, NULL, NULL, NULL);
               sws_scale(swsCtx_, inFrame->data, inFrame->linesize, 0, inCodecContext->height, outFrame->data, outFrame->linesize);


               int return_value;
               do
               {
                   return_value = avcodec_send_frame(outCodecContext, outFrame);
               } while (return_value == AVERROR(EAGAIN) && threading);
           }
           closing = (value == AVERROR_EOF);
       }
       avcodec_send_frame(outCodecContext, NULL);


       // av_free(video_outbuf);

       // return 0;
    }

    void Recorder::encodeVideoStream(AVCodecContext* codecContext)
    {
       bool closing = true;
       AVPacket* packet = (AVPacket*)av_malloc(sizeof(AVPacket));
       av_init_packet(packet);

       int ctr = 0;

       while (threading || !closing) {
           packet->data = NULL;    // packet data will be allocated by the encoder
           packet->size = 0;
           ctr++;
           int value = avcodec_receive_packet(codecContext, packet);
           if (value == 0) {
               if (packet->pts != AV_NOPTS_VALUE)
                   packet->pts = av_rescale_q(packet->pts, video_st->codec->time_base, video_st->time_base);
               if (packet->dts != AV_NOPTS_VALUE)
                   packet->dts = av_rescale_q(packet->dts, video_st->codec->time_base, video_st->time_base);

               //printf("Write frame %3d (size= %2d)\n", j++, packet->size / 1000);
               if (av_write_frame(outAVFormatContext, packet) != 0)
               {
                   cout << "\nerror in writing video frame";
               }
           }

           closing = (value == AVERROR_EOF);
       }

       value = av_write_trailer(outAVFormatContext);
       if (value < 0)
       {
           cout << "\nerror in writing av trailer";
           exit(1);
       }

       // av_free(packet);

       // return 0;
    }


    int Recorder::openScreen(AVFormatContext* pFormatCtx) {
       /*

       X11 video input device.
       To enable this input device during configuration you need libxcb installed on your system. It will be automatically detected during configuration.
       This device allows one to capture a region of an X11 display.
       refer : https://www.ffmpeg.org/ffmpeg-devices.html#x11grab
       */
       /* current below is for screen recording. to connect with camera use v4l2 as a input parameter for av_find_input_format */
       pAVInputFormat = av_find_input_format("gdigrab");
       //value = avformat_open_input(&pAVFormatContext, ":0.0+10,250", pAVInputFormat, NULL);

       value = avformat_open_input(&pAVFormatContext, "desktop", pAVInputFormat, NULL);
       if (value != 0)
       {
           cout << "\nerror in opening input device";
           exit(1);
       }
       return 0;
    }
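On the deprecation question itself: since FFmpeg 4.0, av_register_all() and avcodec_register_all() are no-ops (avdevice_register_all() is still required for gdigrab), and AVStream::codec has been removed in favour of AVStream::codecpar plus a separately allocated codec context. A sketch of what the decoder setup in openCamera() might look like against the newer API, reusing the question's own member names (an assumption about the current headers, not compiled against any particular FFmpeg version):

```cpp
// av_find_best_stream() can replace the manual stream-index loop:
// VideoStreamIndx = av_find_best_stream(pAVFormatContext,
//                       AVMEDIA_TYPE_VIDEO, -1, -1, NULL, 0);
AVStream* st = pAVFormatContext->streams[VideoStreamIndx];
const AVCodec* dec = avcodec_find_decoder(st->codecpar->codec_id);
if (!dec) { cout << "\nunable to find the decoder"; exit(1); }

// Allocate a fresh context instead of borrowing the removed st->codec,
// then copy the demuxed stream parameters (width, height, pix_fmt, ...).
pAVCodecContext = avcodec_alloc_context3(dec);
if (avcodec_parameters_to_context(pAVCodecContext, st->codecpar) < 0) exit(1);
if (avcodec_open2(pAVCodecContext, dec, NULL) < 0) exit(1);
```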
  • FFMPEG API - Recording video and audio - Syncing problems

    16 June 2016, by Solidus

    I’m developing an app that can record video from a webcam and audio from a microphone. I had been using Qt, but unfortunately its camera module does not work on Windows, which led me to use FFmpeg to record the video/audio.

    My camera module now works well, apart from a slight syncing problem. The audio and video sometimes end up out of sync by a small amount (less than a second, I’d say, although it might get worse with longer recordings).

    When I encode the frames I add the PTS in the following way (which I took from the muxing.c example):

    • For the video frames I increment the PTS one by one (starting at 0).
    • For the audio frames I increment the PTS by the nb_samples of the audio frame (starting at 0).

    I am saving the file at 25 fps and asking the camera to give me 25 fps (which it can). I am also converting the video frames to the YUV420P format. For the audio frame conversion I need to use an AVAudioFifo, because the microphone sends bigger samples than the MP4 stream supports, so I have to split them into chunks. I used the transcode.c example for this.

    I am out of ideas about how to sync the audio and video. Do I need to use a clock or something to correctly sync up both streams?

    The full code is too big to post here but should it be necessary I can add it to github for example.

    Here is the code for writing a frame :

    int FFCapture::writeFrame(const AVRational *time_base, AVStream *stream, AVPacket *pkt) {
       /* rescale output packet timestamp values from codec to stream timebase */
       av_packet_rescale_ts(pkt, *time_base, stream->time_base);
       pkt->stream_index = stream->index;
       /* Write the compressed frame to the media file. */
       return av_interleaved_write_frame(oFormatContext, pkt);
    }

    Code for getting the elapsed time:

    qint64 FFCapture::getElapsedTime(qint64 *previousTime) {
       qint64 newTime = timer.elapsed();
       if(newTime > *previousTime) {
           *previousTime = newTime;
           return newTime;
       }
       return -1;
    }

    Code for adding the PTS (video and audio streams, respectively):

    qint64 time = getElapsedTime(&previousVideoTime);
    if(time >= 0) outFrame->pts = time;
    //if(time >= 0) outFrame->pts = av_rescale_q(time, outStream.videoStream->codec->time_base, outStream.videoStream->time_base);

    qint64 time = getElapsedTime(&previousAudioTime);
    if(time >= 0) {
       AVRational aux;
       aux.num = 1;
       aux.den = 1000;
       outFrame->pts = time;
       //outFrame->pts = av_rescale_q(time, aux, outStream.audioStream->time_base);
    }
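A common approach (not necessarily the only fix) is to derive both streams' PTS from the same wall clock, which is exactly what the commented-out av_rescale_q() lines above sketch: timestamp in milliseconds (time base 1/1000), then rescale into each stream's time_base. The arithmetic av_rescale_q performs is just a change of time base; a minimal self-contained stand-in (the real function rounds to nearest and guards against overflow):

```cpp
#include <cstdint>

// Minimal illustration of av_rescale_q semantics: convert timestamp `a`
// from time base bq to time base cq, i.e. a * bq / cq in rational math.
struct Rational { int num, den; };

int64_t rescale_q(int64_t a, Rational bq, Rational cq) {
    return a * bq.num * cq.den / (static_cast<int64_t>(bq.den) * cq.num);
}
```

For instance, 1000 ms in a 1/1000 base becomes 90000 ticks in a 1/90000 base. With both streams stamped from one clock this way, neither can drift relative to the other, whereas counting frames drifts as soon as the camera or microphone delivers at a slightly off-nominal rate.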