Advanced search

Media (0)

Keyword: - Tags -/acrobat

No media matching your criteria is available on the site.

Other articles (80)

  • Customizing by adding your logo, banner, or background image

    5 September 2013, by

    Some themes support three customization elements: adding a logo; adding a banner; adding a background image.

  • User profiles

    12 April 2011, by

    Each user has a profile page where they can edit their personal information. In the default top-of-page menu, a menu item is created automatically when MediaSPIP is initialized, visible only when the visitor is logged in to the site.
    The user can reach the profile editor from their author page; a "Modify your profile" link in the navigation is (...)

  • Configuring language support

    15 November 2010, by

    Accessing the configuration and adding supported languages
    To configure support for new languages, you need to go to the "Administer" section of the site.
    From there, in the navigation menu, you can reach a "Language management" section that lets you enable support for new languages.
    Each newly added language can still be disabled as long as no object has been created in that language; once one exists, the language is greyed out in the configuration and (...)

On other sites (12823)

  • FFMPEG I/O output buffer

    6 June 2015, by peacer212

    I’m currently having issues trying to encapsulate raw H264 NAL packets into an mp4 container. Instead of writing them to disk, however, I want the result stored in memory. I followed the approach from "Raw H264 frames in mpegts container using libavcodec" but haven’t been successful so far.

    First, is this the right way to write to memory? I have a small struct in my header

    struct IOOutput {
       uint8_t* outBuffer;  // destination buffer for the muxed output
       int bytesSet;        // number of bytes written so far
    };

    where I initialize the buffer and bytesSet. I then initialize my AVIOContext variable

    AVIOContext* pIOCtx = avio_alloc_context(pBuffer, iBufSize, 1, outptr, NULL, write_packet, NULL);

    where outptr is a void pointer to the IOOutput variable output, and write_packet looks like the following

    int write_packet (void *opaque, uint8_t *buf, int buf_size) {
       IOOutput* out = reinterpret_cast<IOOutput*>(opaque);  // opaque carries the IOOutput*
       memcpy(out->outBuffer + out->bytesSet, buf, buf_size);
       out->bytesSet += buf_size;
       return buf_size;
    }

    I then set

    fc->pb = pIOCtx;
    fc->flags = AVFMT_FLAG_CUSTOM_IO;

    on my AVFormatContext *fc variable.

    Then, whenever I encode the NAL packets I have from a frame, I write them to the AVFormatContext via av_interleaved_write_frame and then get the mp4 contents via

    void getBufferContent(char* buffer) {
       memcpy(buffer, output.outBuffer, output.bytesSet);
       output.bytesSet=0;
    }

    and thus reset the variable bytesSet, so during the next write operation bytes will be inserted at the start of the buffer. Is there a better way to do this? Is this actually a valid way to do it? Does FFMPEG do any read operations if I only call av_interleaved_write_frame and avformat_write_header in order to add packets? (A growable-buffer sketch follows at the end of this question.)

    Thank you very much in advance!

    EDIT

    Here is the code for the muxing process; in my encode function I have something like

    int frame_size = x264_encoder_encode(obj->mEncoder, &obj->nals, &obj->i_nals, obj->pic_in, obj->pic_out);
    int total_size=0;

    for(int i = 0; i < obj->i_nals; i++)
       {
           if ( !obj->fc ) {
               obj->create( obj->nals[i].p_payload, obj->nals[i].i_payload );
           }

           if ( obj->fc ) {
               obj->write_frame( obj->nals[i].p_payload, obj->nals[i].i_payload);
           }
       }

    // Here I get the output values
    int currentBufferSize = obj->output.bytesSet;
    char* mem = new char[currentBufferSize];
    obj->getBufferContent(mem);

    And the create and write functions look like this

    int create(void *p, int len) {

      AVOutputFormat *of = av_guess_format( "mp4", 0, 0 );

      fc = avformat_alloc_context();

      // Add video stream
      AVStream *pst = av_new_stream( fc, 0 );
      vi = pst->index;

      void* outptr = (void*) &output;

    // Create Buffer
      pIOCtx = avio_alloc_context(pBuffer, iBufSize, 1, outptr, NULL, write_packet, NULL);

      fc->oformat = of;
      fc->pb = pIOCtx;
      fc->flags = AVFMT_FLAG_CUSTOM_IO;

      pcc = pst->codec;

      AVCodec c= {0};
      c.type= AVMEDIA_TYPE_VIDEO;

      avcodec_get_context_defaults3( pcc, &c );
      pcc->codec_type = AVMEDIA_TYPE_VIDEO;

      pcc->codec_id = codec_id;
      pcc->bit_rate = br;
      pcc->width = w;
      pcc->height = h;
      pcc->time_base.num = 1;
      pcc->time_base.den = fps;

      return 0;
    }

    void write_frame( const void* p, int len ) {

      AVStream *pst = fc->streams[ vi ];

      // Init packet
      AVPacket pkt;
      av_init_packet( &pkt );
      pkt.flags |= ( 0 >= getVopType( p, len ) ) ? AV_PKT_FLAG_KEY : 0;  
      pkt.stream_index = pst->index;
      pkt.data = (uint8_t*)p;
      pkt.size = len;

      pkt.dts = AV_NOPTS_VALUE;
      pkt.pts = AV_NOPTS_VALUE;

      av_interleaved_write_frame( fc, &pkt );

    }
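
    On the buffer question above: a fixed-size outBuffer can overflow if the muxer hands write_packet more data than anticipated between reads. Below is a minimal sketch of a growable variant, assuming the question’s IOOutput struct plus a hypothetical capacity field; the doubling policy is an illustration, not part of the FFmpeg API.

    extern "C" {
    #include <libavutil/mem.h>    // av_realloc
    #include <libavutil/error.h>  // AVERROR
    }
    #include <cerrno>
    #include <cstring>
    #include <cstdint>

    struct IOOutput {
       uint8_t* outBuffer;  // destination for muxed bytes
       int bytesSet;        // bytes written so far
       int capacity;        // hypothetical extra field: allocated size
    };

    int write_packet(void *opaque, uint8_t *buf, int buf_size) {
       IOOutput* out = reinterpret_cast<IOOutput*>(opaque);
       if (out->bytesSet + buf_size > out->capacity) {
           int newCap = out->capacity > 0 ? out->capacity : 4096;
           while (newCap < out->bytesSet + buf_size)
               newCap *= 2;  // grow until the new packet fits
           uint8_t* p = static_cast<uint8_t*>(av_realloc(out->outBuffer, newCap));
           if (!p)
               return AVERROR(ENOMEM);  // report allocation failure to avio
           out->outBuffer = p;
           out->capacity = newCap;
       }
       memcpy(out->outBuffer + out->bytesSet, buf, buf_size);
       out->bytesSet += buf_size;
       return buf_size;
    }

    As for the read question: pure muxing does not read, so a NULL read callback is fine, but the stock mp4 muxer does seek back to finalize its headers; with a non-seekable context like this one (no seek callback), fragmented-mp4 movflags (e.g. empty_moov) are usually needed.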
  • Best / simplest way to display FFmpeg frames in Qt5

    15 June 2015, by user412

    I need to display ffmpeg frames on a Qt widget. I know about QtFFmpegWrapper, but it seems outdated. I tried to use memcpy() to copy data from an RGB ffmpeg frame to a QImage and got an unhandled exception inside it.

    QImage lastFrame;
    lastFrame = QImage( screen_w, screen_h, QImage::Format_RGB888 );
    for( int y = 0; y < screen_h; ++y )
       memcpy( lastFrame.scanLine(y),
               frameRGB -> data[0] + y * frameRGB -> linesize[0],
               screen_w * 3 );

    I tried sws_getContext() and sws_getCachedContext(), and both AV_PIX_FMT_BGR24 and AV_PIX_FMT_RGB24, in all parts of the ffmpeg processing. All the ffmpeg code is from the popular tutorials and works fine with SDL and PIX_FMT_YUV420P.

    Any ideas?
    Maybe it’s not the best/simplest way to display ffmpeg frames on a Qt widget?

    Edit.

    OK, I used Murat Şeker’s solution with QImage::copy(), but now QImage::isNull() returns true (a possible cause is sketched after the code below).

    Some of my ffmpeg code:

    out_buffer = (uint8_t*)av_malloc( avpicture_get_size( AV_PIX_FMT_RGB32,
                                     codecCtx -> width, codecCtx -> height ));
    avpicture_fill((AVPicture *)frameRGB, out_buffer, AV_PIX_FMT_RGB32,
                  codecCtx -> width, codecCtx -> height);
    img_convert_ctx = sws_getContext( codecCtx -> width, codecCtx -> height,
                                     codecCtx -> pix_fmt, codecCtx -> width,
                                     codecCtx -> height, AV_PIX_FMT_RGB32,
                                     SWS_BICUBIC, NULL, NULL, NULL );
    /* ... */

    if( got_picture ){
       sws_scale( img_convert_ctx, (const uint8_t* const*)frame -> data,
                  frame -> linesize, 0, codecCtx -> height, frameRGB -> data,
                  frameRGB -> linesize );
       QImage imgFrame = QImage( frameRGB -> data[0], frameRGB -> width,
                                 frameRGB -> height, frameRGB -> linesize[0],
                                 QImage::Format_RGB32 ).copy();
       if( imgFrame.isNull()){} // true

       // But I can write this frame to hard disk using BITMAPFILEHEADER
       SaveBMP( frameRGB, codecCtx -> width, codecCtx -> height ); // it works
    }
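
    One detail worth checking in the snippet above: avpicture_fill() only sets the frame’s data pointers and linesizes; it does not fill in frameRGB->width or frameRGB->height, which stay 0, and a QImage constructed with zero dimensions reports isNull(). A minimal sketch, reusing the question’s variable names and taking the dimensions from the codec context instead:

    #include <QImage>
    extern "C" {
    #include <libavcodec/avcodec.h>
    }

    // Build a deep-copied QImage from the converted RGB32 frame.
    QImage frameToImage(AVFrame *frameRGB, AVCodecContext *codecCtx) {
       return QImage(frameRGB->data[0],
                     codecCtx->width,              // frameRGB->width is 0 here
                     codecCtx->height,             // frameRGB->height is 0 here
                     frameRGB->linesize[0],
                     QImage::Format_RGB32).copy(); // copy() detaches from ffmpeg’s buffer
    }

    This would also be consistent with SaveBMP() working, since it is passed codecCtx->width and codecCtx->height explicitly.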
  • x264enc not respecting timestamps

    23 May 2015, by Austin A.

    I am trying to write a gstreamer pipeline that reads video from a FIFO on disk as raw 720p RGB images, encodes the frames with x264, and saves an .mkv file to disk. Here is my current pipeline:

    gst-launch-1.0 --eos-on-shutdown -v \
       filesrc location="/path/to/fifo_name" do-timestamp=true \
       ! videoparse format="GST_VIDEO_FORMAT_RGB" width="1280" height="720" \
         framerate="2997/100" \
       ! videoconvert \
       ! "video/x-raw, format=(string)I420, width=(int)1280,\
         height=(int)720" \
       ! videorate \
       ! "video/x-raw,framerate=(fraction)2997/100" \
       ! x264enc \
       ! matroskamux name=mux \
       ! filesink location=/path/to/output.mkv sync=false

    The frames are normally pushed onto the FIFO at a regular rate (29.97 fps), but sometimes a frame or two is dropped, so the delay between adjacent frames might be 66 ms or 100 ms instead of the regular 33 ms. However, the x264enc element is not respecting those timestamps: it gives each frame the same 33 ms duration and skips over the dropped frames without the appropriate delay. This makes the video play back faster than expected.

    To verify that the x264enc element was indeed causing the issue, I tried skipping the encoder and just showing the result in a display, and this puts the correct delays into the video:

    gst-launch-1.0 --eos-on-shutdown -v \
       filesrc location="/path/to/fifo_name" do-timestamp=true \
       ! videoparse format="GST_VIDEO_FORMAT_RGB" width="1280" height="720" \
         framerate="2997/100" \
       ! videoconvert \
       ! "video/x-raw, format=(string)I420, width=(int)1280,\
         height=(int)720" \
       ! videorate \
       ! "video/x-raw,framerate=(fraction)2997/100" \
       ! xvimagesink sync=false

    So, something about the x264 encoder is not respecting the timestamps it’s given.

    According to the x264 options page, I can provide my own timestamps in a file and hand it to x264 with the ’tcfile-in’ parameter. But when I try to pass that in through x264enc’s option-string parameter as

    ! x264enc option-string="tcfile-in=/path/to/timestamp.txt"

    I get a

    Bad name for option tcfile-in=/path/to/timestamp.txt

    error.

    Any ideas?
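
    For what it’s worth, tcfile-in appears to be an option of the x264 command-line tool rather than a libx264 encoder parameter, which would explain why x264enc’s option-string (parsed as library parameters) rejects it. The timecode file itself ("timecode format v2") is just a header line followed by one presentation timestamp in milliseconds per frame. A minimal sketch that generates such a file for a 2997/100 fps stream, with hypothetical gaps standing in for dropped frames (the drop positions are made up for illustration):

    #include <cstdio>

    int main() {
       const double frame_ms = 1000.0 * 100.0 / 2997.0;  // ~33.367 ms per frame
       std::FILE *f = std::fopen("/path/to/timestamp.txt", "w");
       if (!f) return 1;
       std::fprintf(f, "# timecode format v2\n");  // required v2 header
       double t = 0.0;
       for (int i = 0; i < 300; ++i) {
           std::fprintf(f, "%.3f\n", t);  // one timestamp per frame
           t += frame_ms;
           if (i == 100 || i == 200)  // hypothetical dropped frames:
               t += frame_ms;         // leave a one-frame gap
       }
       std::fclose(f);
       return 0;
    }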