Advanced search

Media (0)


No media matching your criteria is available on this site.

Other articles (64)

  • Customise by adding your logo, banner or background image

    5 September 2013, by

    Some themes support three customisation elements: adding a logo; adding a banner; adding a background image.

  • Publishing on MédiaSpip

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, if your MédiaSpip installation is version 0.2 or higher. If needed, contact your MédiaSpip administrator to find out.

  • Contribute to a better visual interface

    13 April 2011

    MediaSPIP is based on a system of themes and templates. Templates define the placement of information on the page, and can be adapted to a wide range of uses. Themes define the overall graphic appearance of the site.
    Anyone can submit a new graphic theme or template and make it available to the MediaSPIP community.

On other sites (4274)

  • FFMPEG Chapter Metadata added to a song file is sometimes off by 1 second

    6 February 2021, by Elthfa

    I attempted to add chapter metadata to a .opus file, but afterward the file's metadata showed timestamps different from the ones I tried to add.
It seems to add an extra second to the timestamps under certain conditions.
I'm not sure whether this is a bug or I'm just using the command wrong.

    Here are the commands I performed:

    First I removed all existing metadata from the file. I did this using a command I found on Stack Overflow:

    ffmpeg -i test_song.opus -c copy -map_metadata -1 -fflags +bitexact -flags:a +bitexact no_metadata.opus


    Then I added the new metadata from an external file I had written:

    ffmpeg -i no_metadata.opus -f ffmetadata -i metadata.txt -c copy -map_metadata 1 out.opus


    The file 'metadata.txt' looks like:

    ;FFMETADATA1
[CHAPTER]
TIMEBASE=1/1000
START=0
END=400
title=400 ms
[CHAPTER]
TIMEBASE=1/1000
START=400
END=500
title=100 ms
[CHAPTER]
TIMEBASE=1/1000
START=500
END=2000
title=1500 ms
[CHAPTER]
TIMEBASE=1/1000
START=2000
END=97000
title=The Rest


    When I print out the basic data from the file, not all the timestamps shown match the ones I had in the metadata file.

    > ffmpeg -i out.opus
...
Input #0, ogg, from 'out.opus':
  Duration: 00:01:37.00, start: 0.000000, bitrate: 147 kb/s

    Chapter #0:0: start 0.000000, end 0.400000
    Metadata:
      title           : 400 ms

    Chapter #0:1: start 0.400000, end 1.500000
    Metadata:
      title           : 100 ms
    
    Chapter #0:2: start 1.500000, end 2.000000
    Metadata:
      title           : 1500 ms
    
    Chapter #0:3: start 2.000000, end 97.000000
    Metadata:
      title           : The Rest
...


    You can see the issue for chapters 0:1 and 0:2: the end of 0:1 and the start of 0:2 are shown as 1.5 seconds, when both should be 0.5 seconds.
I tried several combinations, and it seems that whenever the hundreds-of-milliseconds digit is between 5 and 9 inclusive, an extra second is added to the timestamp saved in the metadata.
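As an editorial aside (not from the original post): one hypothesis consistent with the ffprobe output above is that the whole-second field gets computed by rounding the total time to the nearest second while the millisecond remainder is appended separately. A minimal Python sketch of that hypothetical rounding, using the chapter boundaries from metadata.txt:

```python
def suspected_chapter_time(ms):
    """Hypothetical buggy conversion: whole seconds obtained by rounding
    the total time half-up, milliseconds appended separately."""
    whole_seconds = int(ms / 1000 + 0.5)   # rounds up when the fraction is >= .5
    remainder_ms = ms % 1000
    return whole_seconds + remainder_ms / 1000

# Chapter boundaries from metadata.txt, in milliseconds
for ms in (0, 400, 500, 2000, 97000):
    print(ms, "->", suspected_chapter_time(ms))
# 500 -> 1.5, reproducing the bad 1.5 s boundary ffprobe reports
```

This model also matches the observation that only timestamps whose hundreds-of-milliseconds digit is 5-9 gain a second; it is only a sketch of the symptom, not a claim about ffmpeg's actual code.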

    Is this due to me using the command wrong? Or formatting the metadata file wrong? Or is there an issue in the code with rounding timestamps?

    Thanks!

  • ffmpeg - preserve time base and pts of all frames during transcode

    18 March 2021, by jdizzle

    Context:

    I have an application that produces mp4s with HEVC encoding. I want to convert them to AVC for use in browser-based displaying. A crucial part of my use case is preserving exact PTS times, as this is how we correlate the frames to other data streams not included in the video.

    Question:

    How do I make ffmpeg preserve this information across the transcode? All the obvious flags seem to have no effect, and ffmpeg just does whatever it wants.

    $ ffprobe -show_streams original.mp4 2>/dev/null | grep time_base
codec_time_base=16666667/500000000
time_base=1/1000


    Here is my convert command:

    $ ffmpeg -i original.mp4 -copyts -copytb 0 test.mp4


    And its result:

    $ ffprobe -show_streams test.mp4 2>/dev/null | grep time_base
codec_time_base=1/60
time_base=1/15360


    I would expect the time bases to match. The PTS of the frames also don't match when doing ffprobe -show_frames.

    EDIT:
@Gyan suggested using -video_track_timescale, but that didn't get the exact behavior I was looking for:

    $ sdiff <(ffprobe -show_frames test.mp4  | grep pkt_pts_time) <(ffprobe -show_frames original.mp4 | grep pkt_pts_time)
pkt_pts_time=0.000000                           pkt_pts_time=0.000000
pkt_pts_time=0.033000                           pkt_pts_time=0.033000
pkt_pts_time=0.067000                         | pkt_pts_time=0.066000
pkt_pts_time=0.100000                           pkt_pts_time=0.100000
pkt_pts_time=0.133000                           pkt_pts_time=0.133000
pkt_pts_time=0.167000                         | pkt_pts_time=0.166000
pkt_pts_time=0.200000                           pkt_pts_time=0.200000
pkt_pts_time=0.233000                           pkt_pts_time=0.233000
pkt_pts_time=0.267000                         | pkt_pts_time=0.266000
pkt_pts_time=0.300000                           pkt_pts_time=0.300000
pkt_pts_time=0.333000                           pkt_pts_time=0.333000
pkt_pts_time=0.367000                         | pkt_pts_time=0.366000
pkt_pts_time=0.400000                           pkt_pts_time=0.400000
pkt_pts_time=0.433000                           pkt_pts_time=0.433000
pkt_pts_time=0.467000                           pkt_pts_time=0.467000
pkt_pts_time=0.500000                           pkt_pts_time=0.500000
pkt_pts_time=0.533000                         | pkt_pts_time=0.532000
pkt_pts_time=0.567000                         | pkt_pts_time=0.565000
pkt_pts_time=0.600000                         | pkt_pts_time=0.598000
pkt_pts_time=0.633000                         | pkt_pts_time=0.631000
pkt_pts_time=0.667000                         | pkt_pts_time=0.665000
pkt_pts_time=0.700000                         | pkt_pts_time=0.698000
...


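An editorial aside on the sdiff above: frame times of a ~30 fps stream fall at n/30 s, which has no exact representation in a 1/1000 timescale, so the stored PTS depends on the muxer's rounding policy. A minimal sketch (the 30 fps figure is an assumption; the original files' rate isn't stated) of how round-to-nearest and truncation diverge on every third frame:

```python
FPS = 30  # assumed frame rate, not stated in the original post

for n in range(8):
    exact_ms = n * 1000 / FPS                 # ideal frame time in milliseconds
    print(n, round(exact_ms), int(exact_ms))  # round-to-nearest vs truncation
# frame 2: 67 vs 66 ms -- the same 1 ms disagreement the sdiff shows
```

Under this model, neither a 1/1000 nor a 1/15360 track timescale can reproduce the other's integer timestamps exactly, which is consistent with the residual 1 ms differences above.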
  • h264 encoding of bgr images into .mp4 file with libav

    10 February 2021, by xyfix

    I'm trying to encode (h264) a series of .png images into an mp4 file. A cv::Mat holds the png data (BGR), which is converted to YUV420P, then encoded and written to a .mp4 file. I have added two block statements in the code to store images on disk (before and after encoding). The first image, before it gets encoded, is correct, but the second one, after encoding, is not. avcodec_send_frame returns 0, so up to that point everything works.
Edit: I get an mp4 file of 1 MB, but I can't open it with VLC.
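An editorial aside before the code: the snippet below reinterprets pkt->data as a YUV420P frame when saving 'rawencodedimage.png', but after avcodec_receive_packet that buffer holds a compressed H.264 bitstream, so it cannot be viewed as pixels. A quick size sketch for the 800x640 frames the codec context is configured with (layout math only; the byte count follows from the YUV420P format):

```python
# YUV420P: one full-resolution luma plane plus two chroma planes
# subsampled by 2 in each direction
width, height = 800, 640          # values from m_codecContextOut below
luma_bytes = width * height
chroma_bytes = (width // 2) * (height // 2)
frame_bytes = luma_bytes + 2 * chroma_bytes
print(frame_bytes)  # 768000 bytes per raw frame; an encoded packet is far smaller
```

Separately, fwrite of the packets produces a raw H.264 elementary stream; naming the file .mp4 does not add an MP4 container, which may be why VLC refuses to open it.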
ecodec.h

    class ECodec
{
public:

    ECodec();

    ~ECodec();

    void MatToFrame( cv::Mat& image );

    void encode( AVFrame *frame, AVPacket *pkt );

private:
 
    FILE* m_file;

    AVCodec* m_encoder = NULL;

    AVCodecContext* m_codecContextOut = NULL;

    AVPacket* m_packet = NULL;

};


    ecodec.cpp

    ECodec::ECodec() :
//    m_encoder( avcodec_find_encoder_by_name( videoCodec.c_str()))
    m_encoder( avcodec_find_encoder( AV_CODEC_ID_H264 ))
{
    m_file = fopen( "c:\\tmp\\outputVideo.mp4", "wb");
}



void ECodec::MatToFrame( cv::Mat& image )
{
    int ret( 0 );
    int frameRate( 24 );
    AVFrame *frame = NULL;

    m_codecContextOut = avcodec_alloc_context3( m_encoder );

    m_codecContextOut->width = 800;
    m_codecContextOut->height = 640;
    m_codecContextOut->bit_rate = 400000;//m_codecContextOut->width * m_codecContextOut->height * 3;
    m_codecContextOut->time_base = (AVRational){1, 24};
    m_codecContextOut->framerate = (AVRational){24, 1};
    m_codecContextOut->codec_tag = AV_CODEC_ID_H264;
    m_codecContextOut->pix_fmt = AV_PIX_FMT_YUV420P;
    m_codecContextOut->codec_type = AVMEDIA_TYPE_VIDEO;
    m_codecContextOut->gop_size = 1;
    m_codecContextOut->max_b_frames = 1;

    av_log_set_level(AV_LOG_VERBOSE);

    ret = av_opt_set(m_codecContextOut->priv_data, "preset", "slow", 0);

    ret = avcodec_open2(m_codecContextOut, m_encoder, NULL);

    frame = av_frame_alloc();

    frame->format = AV_PIX_FMT_YUV420P;
    frame->width = image.cols();
    frame->height = image.rows();


    ret = av_image_alloc(frame->data, frame->linesize, frame->width,  frame->height, AV_PIX_FMT_YUV420P, 1);

    if (ret < 0)
    {
        return;
    }

    struct SwsContext *sws_ctx;
    sws_ctx = sws_getContext((int)image.cols(), (int)image.rows(), AV_PIX_FMT_RGB24,
                             (int)image.cols(), (int)image.rows(), AV_PIX_FMT_YUV420P,
                             0, NULL, NULL, NULL);

    const uint8_t* rgbData[1] = { (uint8_t* )image.getData() };
    int rgbLineSize[1] = { 3 * image.cols() };

    sws_scale(sws_ctx, rgbData, rgbLineSize, 0, image.rows(), frame->data, frame->linesize);

    frame->pict_type = AV_PICTURE_TYPE_I;

cv::Mat yuv420p(frame->height + frame->height/2, frame->width, CV_8UC1,frame->data[0]);
cv::Mat cvmIm;
cv::cvtColor(yuv420p,cvmIm,CV_YUV420p2BGR);
cv::imwrite("c:\\tmp\\rawimage.png", cvmIm);
//OK

    m_packet = av_packet_alloc();
    ret = av_new_packet( m_packet, m_codecContextOut->width * m_codecContextOut->height * 3 );

    /* encode the image */
    encode( frame, m_packet );


    avcodec_free_context(&m_codecContextOut);
    av_frame_free(&frame);
    av_packet_free( &m_packet );
}




void ECodec::encode( AVFrame *frame, AVPacket *pkt )
{
    int ret;
    
    /* send the frame to the encoder */
    ret = avcodec_send_frame( m_codecContextOut, frame);

    if (ret < 0)
    {
        fprintf(stderr, "Error sending a frame for encoding\n");
        exit(1);
    }

    do
    {
        ret = avcodec_receive_packet(m_codecContextOut, pkt);
        if (ret == 0)
        {

cv::Mat yuv420p(frame->height + frame->height/2, frame->width, CV_8UC1,pkt->data);
cv::Mat cvmIm;
cv::cvtColor(yuv420p,cvmIm,CV_YUV420p2BGR);
cv::imwrite("c:\\tmp\\rawencodedimage.png", cvmIm);
//NOT OK
            fwrite(pkt->data, 1, pkt->size, m_file );
            av_packet_unref(pkt);

            break;
        }
        else if ((ret < 0) && (ret != AVERROR(EAGAIN)))
        {
            return;
        }
        else if (ret == AVERROR(EAGAIN))
        {
             ret = avcodec_send_frame(m_codecContextOut, NULL);
             if (0 > ret)
             {
                 return;
             }
        }
    } while (ret == 0);
}