Advanced search

Media (91)

Other articles (104)

  • MediaSPIP 0.1 Beta version

    25 April 2011, by

    MediaSPIP 0.1 beta is the first version of MediaSPIP proclaimed as "usable".
    The zip file provided here only contains the sources of MediaSPIP in its standalone version.
    To get a working installation, you must manually install all software dependencies on the server.
    If you want to use this archive for an installation in "farm mode", you will also need to proceed to other manual (...)

  • MediaSPIP version 0.1 Beta

    16 April 2011, by

    MediaSPIP 0.1 beta is the first version of MediaSPIP declared "usable".
    The zip file provided here only contains the sources of MediaSPIP in its standalone version.
    To get a working installation, you must manually install all software dependencies on the server.
    If you want to use this archive for a "farm mode" installation, you will also need to make other modifications (...)

  • Improvements to the base version

    13 September 2013

    Nicer multiple selection
    The Chosen plugin improves the ergonomics of multiple-select fields. See the following two images for a comparison.
    To use it, enable the Chosen plugin (Site general configuration > Plugin management), then configure it (Templates > Chosen) by enabling Chosen on the public site and specifying the form elements to enhance, for example select[multiple] for multiple-select lists (...)

On other sites (11328)

  • Revision b12e060b55 : Add speed feature to disable splitmv

    30 June 2013, by Yunqing Wang

    Changed Paths :
     Modify /vp9/encoder/vp9_onyx_if.c
     Modify /vp9/encoder/vp9_onyx_int.h

    Add speed feature to disable splitmv

    Added a speed feature in speed 1 to disable splitmv for HD (>=720)
    clips. Test result on stdhd set : 0.3% psnr loss and 0.07% ssim
    loss. Encoding speedup is 36%.

    (For reference : The test result on derf set showed 2% psnr loss
    and 1.6% ssim loss. Encoding speedup is 34%. SPLITMV should be
    enabled for small resolution videos.)

    Change-Id : I54f72b94f506c6d404b47c42e71acaa5374d6ee6
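    The gating described in this commit can be sketched as a tiny predicate. This is a hypothetical sketch with illustrative names, not the actual libvpx code, and it assumes "HD (>=720)" refers to the frame's smaller dimension:

    ```cpp
    #include <algorithm>

    // Hypothetical sketch of the speed feature described above: at speed 1,
    // SPLITMV partitioning is disabled for HD clips (>= 720 lines), where the
    // reported loss is small (0.3% PSNR on the stdhd set), and kept for small
    // resolutions, where the derf results (2% PSNR loss) argue for keeping it.
    bool use_splitmv(int speed, int width, int height) {
        const bool is_hd = std::min(width, height) >= 720;
        return !(speed >= 1 && is_hd);
    }
    ```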

  • How to extract elementary video from mp4 using ffmpeg programmatically?

    4 July 2013, by epipav

    I started learning ffmpeg a few weeks ago. At the moment I am able to transcode any video to mp4 using the h264/AVC codec. The main scheme is something like this:

    - open input
    - demux
    - decode
    - encode
    - mux

    The actual code is below :

    #include <iostream>
    #include <string>

    extern "C"
    {

    #ifndef __STDC_CONSTANT_MACROS
    #undef main /* Prevents SDL from overriding main() */
    #define __STDC_CONSTANT_MACROS
    #endif

    #pragma comment (lib,"avcodec.lib")
    #pragma comment (lib,"avformat.lib")
    #pragma comment (lib,"swscale.lib")
    #pragma comment (lib,"avutil.lib")

    #include <libavcodec/avcodec.h>
    #include <libavformat/avformat.h>
    #include <libavformat/avio.h>
    #include <libavutil/opt.h>
    #include <libavutil/mathematics.h>
    #include <libswscale/swscale.h>

    }


    using namespace std;


    void open_video(AVFormatContext *oc, AVCodec *codec, AVStream *st)
    {
    int ret;
    AVCodecContext *c;
    c = st->codec;

    /* open codec */
    cout << "probably starts here" << endl;
    ret = avcodec_open2(c, codec, NULL);
    cout << "and ends here" << endl;

    if (ret < 0)
    {
       cout << "Could not open video codec" << endl;
    }
    }

    /* This function adds a new stream to our file.
    @param oc        format context the new stream will be added to
    @param codec     receives the encoder found for the stream
    @param codec_id  id of the codec the stream should use
    @param chWidth   output frame width
    @param chHeight  output frame height
    @param fps       output frame rate
    */

    AVStream *addStream(AVFormatContext *oc, AVCodec **codec, enum AVCodecID codec_id, int chWidth, int chHeight, int fps)
    {
    AVCodecContext *c;
    AVStream *st;

    //find the encoder of the stream; it is passed back through @codec, and later on
    //it will be used when encoding the video with avcodec_encode_video2 in the loop.
    *codec = avcodec_find_encoder(AV_CODEC_ID_H264);

    if (!(*codec))
       printf("Could not find encoder for '%s'\n", avcodec_get_name(codec_id));

    //create a new stream with the found codec inside oc (AVFormatContext).
    st = avformat_new_stream(oc, *codec);

    if (!st)
       cout << "Cannot allocate stream" << endl;

    //Set the stream id.
    //Since there can be other streams in this AVFormatContext,
    //we take the first unused index, which is oc->nb_streams - 1.
    st->id = oc->nb_streams - 1;

    c = st->codec;

    //set the stream's codec properties.
    c->codec_id = codec_id;
    c->bit_rate = 4000000;
    c->width = chWidth;
    c->height = chHeight;
    c->time_base.den = fps;
    c->time_base.num = 1;
    c->gop_size = 12;
    c->pix_fmt = AV_PIX_FMT_YUV420P;

    if (c->codec_id == AV_CODEC_ID_MPEG2VIDEO) {
        /* just for testing, we also add B frames */
        c->max_b_frames = 2;
    }

    if (c->codec_id == AV_CODEC_ID_MPEG1VIDEO) {
        /* Needed to avoid using macroblocks in which some coeffs overflow.
         * This does not happen with normal video, it just happens here as
         * the motion of the chroma plane does not match the luma plane. */
        c->mb_decision = 2;
    }

    /* Some formats want stream headers to be separate. */
    if (oc->oformat->flags & AVFMT_GLOBALHEADER)
       c->flags |= CODEC_FLAG_GLOBAL_HEADER;

    //return our brand new stream.
    return st;
    }

    int changeResolution ( string source, int format )
    {
    //Data members
    struct SwsContext   *sws_ctx = NULL;
    AVFrame             *pFrame = NULL;
    AVFrame             *outFrame = NULL;  
    AVPacket            packet;
    uint8_t             *buffer = NULL;
    uint8_t endcode[] = { 0, 0, 1, 0xb7 };
    AVDictionary        *optionsDict = NULL;
    AVFormatContext     *pFormatCtx = NULL;
    AVFormatContext     *outputContext = NULL;
    AVCodecContext      *pCodecCtx;
    AVCodec             *pCodec ;
    AVCodec             *codec;
    AVCodec             *videoCodec;
    AVOutputFormat      *fmt;
    AVStream            *video_stream;
    int                 changeWidth;
    int                  changeHeight;
    int                 frameFinished;
    int                 numBytes;
    int                 fps;


    int lock = 0;

    //Register all codecs & other important stuff. Vital!
    av_register_all();


    //Selects the desired resolution.
    if (format == 0)
    {
       changeWidth = 320;
       changeHeight = 180;
    }

    else if (format == 1)
    {
       changeWidth = 640;
       changeHeight = 480;

    }
    else if (format == 2)
    {
       changeWidth = 960;
       changeHeight = 540;

    }
    else if (format == 3)
    {
       changeWidth = 1024;
       changeHeight = 768;

    }
    else
    {
       changeWidth = 1280;
       changeHeight = 720;
    }


    // Open video file
    int aaa;
    aaa = avformat_open_input(&pFormatCtx, source.c_str(), NULL, NULL);
    if (aaa != 0)
    {
       cout << "cannot open input file" << endl;
       cout << "aaa = " << aaa << endl;
       return -1; // Couldn't open file
    }

    // Retrieve stream information
    if (avformat_find_stream_info(pFormatCtx, NULL) < 0)
       return -1; // Couldn't find stream information

    //just checking duration casually for no reason
    /*int64_t duration = pFormatCtx->duration;

    cout << "the duration is " << duration << endl;*/

    //this writes the info about the file
    av_dump_format(pFormatCtx, 0, 0, 0);
    cin >> lock;

    // Find the first video stream
    int videoStream = -1;
    int i;

    for (i = 0; i < pFormatCtx->nb_streams; i++)
        if (pFormatCtx->streams[i]->codec->codec_type == AVMEDIA_TYPE_VIDEO) {
            videoStream = i;
            break;
        }

    if (videoStream == -1)
       return -1; // Didn't find a video stream

    // Get a pointer to the codec context for the video stream
    pCodecCtx = pFormatCtx->streams[videoStream]->codec;
    fps = pCodecCtx->time_base.den;

    //Find the decoder of the input file, for the video stream
    pCodec = avcodec_find_decoder(pCodecCtx->codec_id);

    if (pCodec == NULL) {
       fprintf(stderr, "Unsupported codec!\n");
       return -1; // Codec not found
    }

    // Open codec; you must open it first in order to use it.
    if (avcodec_open2(pCodecCtx, pCodec, &optionsDict) < 0)
       return -1; // Could not open codec

    // Allocate video frames (pFrame receives the decoded packets, outFrame holds processed frames to encode)
    pFrame = avcodec_alloc_frame();
    outFrame = avcodec_alloc_frame();

    i = 0;

    int ret;
    int video_frame_count = 0;

    //Initiate the outFrame, set the buffer & fill the properties
    numBytes = avpicture_get_size(PIX_FMT_YUV420P, changeWidth, changeHeight);
    buffer = (uint8_t *)av_malloc(numBytes * sizeof(uint8_t));
    avpicture_fill((AVPicture *)outFrame, buffer, PIX_FMT_YUV420P, changeWidth, changeHeight);


    int pp;
    int frameNo = 0;

    //allocate the outputContext; it will be the AVFormatContext of our output file.
    //It will try to deduce the format from the file name.
    avformat_alloc_output_context2(&outputContext, NULL, NULL, "myoutput.mp4");

    //Can't deduce the format from the file extension, use MPEG as default.
    if (!outputContext) {
       printf("Could not deduce output format from file extension: using MPEG.\n");
       avformat_alloc_output_context2(&outputContext, NULL, "mpeg", "myoutput.mp4");
    }

    //Still no output format, exit.
    if (!outputContext) {
       return 1;
    }

    //set AVOutputFormat fmt to our outputContext's format.
    fmt = outputContext->oformat;
    video_stream = NULL;

    //If fmt has a valid codec_id, create a new video stream.
    //This function will set the stream's codec & the codec's desired properties.
    //The stream's codec will be passed to videoCodec for later usage.
    if (fmt->video_codec != AV_CODEC_ID_NONE)
       video_stream = addStream(outputContext, &videoCodec, fmt->video_codec, changeWidth, changeHeight, fps);

    //open the video using videoCodec, i.e. open the codec via avcodec_open2().
    if (video_stream)
       open_video(outputContext, videoCodec, video_stream);

    //Create our new output file.
    if (!(fmt->flags & AVFMT_NOFILE)) {
       ret = avio_open(&outputContext->pb, "toBeStreamed.264", AVIO_FLAG_WRITE);
       if (ret < 0) {
           cout << "can't open file" << endl;
           return 1;
       }
    }

    //Write the header of the format context.
    ret = avformat_write_header(outputContext, NULL);

    if (ret >= 0) {
       cout << "writing header success !!!" << endl;
    }

    //Start reading packets from the input file.
    while (av_read_frame(pFormatCtx, &packet) >= 0) {

    // Is this a packet from the video stream?
    if (packet.stream_index == videoStream) {

    // Decode the video packet into a frame
    ret = avcodec_decode_video2(pCodecCtx, pFrame, &frameFinished, &packet);

    if (ret < 0)
    {
       printf("Error decoding frame !!..");
       return ret;
    }

    if (frameFinished) {
       printf("video_frame n:%d    coded_n:%d\n", video_frame_count++, pFrame->coded_picture_number);
    }

    av_free_packet(&packet);

    //do stuff with the frame; in this case we are changing the resolution.
    static struct SwsContext *img_convert_ctx_in = NULL;
    if (img_convert_ctx_in == NULL)
    {
       img_convert_ctx_in = sws_getContext(pCodecCtx->width,
           pCodecCtx->height,
           pCodecCtx->pix_fmt,
           changeWidth,
           changeHeight,
           PIX_FMT_YUV420P,
           SWS_BICUBIC,
           NULL,
           NULL,
           NULL);
    }

    //scale the frame
    sws_scale(img_convert_ctx_in,
       pFrame->data,
       pFrame->linesize,
       0,
       pCodecCtx->height,
       outFrame->data,
       outFrame->linesize);

    //initiate the pts value
    if (frameNo == 0)
       outFrame->pts = 0;

    //calculate the pts value & set it.
    outFrame->pts += av_rescale_q(1, video_stream->codec->time_base, video_stream->time_base);

    //encode the frame into a packet. The packet is passed back in @packet.
    if (avcodec_encode_video2(outputContext->streams[0]->codec, &packet, outFrame, &pp) < 0)
       cout << "Encoding frames into packets failed." << endl;

    frameNo++;

    //write the packets into the file, resulting in a video file.
    av_interleaved_write_frame(outputContext, &packet);

    }

    }



    av_free_packet(&packet);
    av_write_trailer(outputContext);

    avio_close(outputContext->pb);

    // Free the scaled image buffer
    av_free(buffer);
    av_free(outFrame);

    // Free the YUV frame
    av_free(pFrame);

    // Close the codecs
    avcodec_close(video_stream->codec);
    avcodec_close(pCodecCtx);

    // Close the video file
    avformat_close_input(&pFormatCtx);

    return 0;
    }

    At the end of the process I get my desired file with the desired codec, container and resolution.

    My problem is that in one part of our project I need to write the elementary video stream to a file, such as example.264. However, I cannot add a stream without creating an AVFormatContext, and I cannot create an AVFormatContext because .264 files do not have a container; they are just raw video, as far as I know.

    I have tried the approach in decoding_encoding.c, which uses fwrite. However, that example was for the MPEG-2 codec, and when I tried to adapt it to the H264/AVC codec I got a "floating point division by zero" error from mediainfo; moreover, some of the properties of the video were not showing (such as FPS, play time and quality factor). I think it has to do with the "endcode" the example appends at the end of the stream ( uint8_t endcode[] = { 0, 0, 1, 0xb7 }; ), which is specific to MPEG-2.

    Anyway, I would love a starting point for this task. I have managed to come this far using internet resources (quite few and outdated for ffmpeg), but now I'm stuck a little.
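    For what it's worth, a raw .264 file in the Annex-B byte-stream format is just NAL units laid end to end, each prefixed by a start code; there is no container and no MPEG-2 style sequence-end code (which is why the MPEG-2 endcode trick misleads tools like mediainfo). A minimal sketch of that layout follows; the NAL payload here is dummy bytes, not a real NAL unit:

    ```cpp
    #include <cstdint>
    #include <cstdio>
    #include <vector>

    // Append one NAL unit to an Annex-B byte stream: a 4-byte start code
    // (00 00 00 01) followed by the raw NAL payload. Returns bytes appended.
    size_t append_annexb_nal(std::vector<uint8_t> &out,
                             const uint8_t *nal, size_t len) {
        static const uint8_t start_code[4] = {0, 0, 0, 1};
        out.insert(out.end(), start_code, start_code + 4);
        out.insert(out.end(), nal, nal + len);
        return 4 + len;
    }

    int main() {
        std::vector<uint8_t> stream;
        const uint8_t dummy_nal[2] = {0x67, 0x42}; // placeholder bytes, not a real SPS

        append_annexb_nal(stream, dummy_nal, 2);

        // Writing the buffer with plain fwrite yields a .264 elementary stream
        // once the payloads are real encoded NAL units.
        FILE *f = fopen("example.264", "wb");
        fwrite(stream.data(), 1, stream.size(), f);
        fclose(f);
        return 0;
    }
    ```

    In the transcoding loop above, the equivalent move would be to fwrite each packet produced by avcodec_encode_video2 straight to a file; in my understanding the libx264 encoder emits Annex-B packets when global headers are disabled, so no muxer is needed. Alternatively, libavformat ships a raw "h264" muxer that avformat_alloc_output_context2 can select by name. Treat this as a sketch under those assumptions, not a drop-in fix.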

  • ffmpeg : images to 29.97fps mpeg2, audio not in sync [migrated]

    21 November 2011, by Andy Le

    I have spent a lot of time on this issue. Hope someone can help.

    I want to convert 3147 images + ac3 audio file into an mpeg2 video at 29.97fps (about 1m45s). My command :

    ~/ffmpeg/ffmpeg/ffmpeg -loop_input -t 105 -i v%4d.tga -i final.ac3 -vcodec mpeg2video -qscale 1 -s 400x400 -r 30000/1001 -acodec copy -y out.mpeg 2> out.txt

    However, the audio ends before the frame sequence does, which means the video runs slower than the audio.

    I checked the output file with mediainfo and see:

    General
    Complete name                    : out.mpeg
    Format                           : MPEG-PS
    File size                        : 7.18 MiB
    Duration                         : 1mn 44s
    Overall bit rate                 : 574 Kbps

    Video
    ID                               : 224 (0xE0)
    Format                           : MPEG Video
    Format version                   : Version 2
    Format profile                   : Main@Main
    Format settings, BVOP            : No
    Format settings, Matrix          : Default
    Format_Settings_GOP              : M=1, N=12
    Duration                         : 1mn 44s
    Bit rate mode                    : Variable
    Bit rate                         : 103 Kbps
    Width                            : 400 pixels
    Height                           : 400 pixels
    Display aspect ratio             : 1.000
    Frame rate                       : 29.970 fps
    Resolution                       : 8 bits
    Colorimetry                      : 4:2:0
    Scan type                        : Progressive
    Bits/(Pixel*Frame)               : 0.021
    Stream size                      : 1.29 MiB (18%)

    Audio
    ID                               : 128 (0x80)
    Format                           : AC-3
    Format/Info                      : Audio Coding 3
    Duration                         : 1mn 44s
    Bit rate mode                    : Constant
    Bit rate                         : 448 Kbps
    Channel(s)                       : 6 channels
    Channel positions                : Front: L C R, Side: L R, LFE
    Sampling rate                    : 44.1 KHz
    Stream size                      : 5.61 MiB (78%)

    The log from ffmpeg shows many duplicate frames. But I don't know how to get rid of that.

    -loop_input is deprecated, use -loop 1
    [image2 @ 0x9c17a80] max_analyze_duration 5000000 reached at 5000000
    Input #0, image2, from 'v%4d.tga':
     Duration: 00:02:05.88, start: 0.000000, bitrate: N/A
       Stream #0:0: Video: targa, bgr24, 400x400, 25 fps, 25 tbr, 25 tbn, 25 tbc
    -loop_input is deprecated, use -loop 1
    [ac3 @ 0x9ca5420] max_analyze_duration 5000000 reached at 5014400
    [ac3 @ 0x9ca5420] Estimating duration from bitrate, this may be inaccurate
    Input #1, ac3, from 'Final.ac3':
     Duration: 00:20:10.68, start: 0.000000, bitrate: 447 kb/s
       Stream #1:0: Audio: ac3, 44100 Hz, 5.1(side), s16, 448 kb/s
    Incompatible pixel format 'bgr24' for codec 'mpeg2video', auto-selecting format 'yuv420p'
    [buffer @ 0x9c1e060] w:400 h:400 pixfmt:bgr24 tb:1/1000000 sar:0/1 sws_param:
    [buffersink @ 0x9dd56c0] auto-inserting filter 'auto-inserted scale 0' between the filter 'src' and the filter 'out'
    [scale @ 0x9c178e0] w:400 h:400 fmt:bgr24 -> w:400 h:400 fmt:yuv420p flags:0x4
    [mpeg @ 0x9d58060] VBV buffer size not set, muxing may fail
    Output #0, mpeg, to &#39;out.mpeg&#39;:
     Metadata:
       encoder         : Lavf53.21.0
       Stream #0:0: Video: mpeg2video, yuv420p, 400x400, q=2-31, 200 kb/s, 90k tbn, 29.97 tbc
       Stream #0:1: Audio: ac3, 44100 Hz, 5.1(side), 448 kb/s
    Stream mapping:
     Stream #0:0 -> #0:0 (targa -> mpeg2video)
     Stream #1:0 -> #0:1 (copy)
    Press [q] to stop, [?] for help
    frame=  267 fps=  0 q=1.0 size=     564kB time=00:00:08.87 bitrate= 520.6kbits/s dup=43 drop=0    
    frame=  544 fps=542 q=1.0 size=    1186kB time=00:00:18.11 bitrate= 536.2kbits/s dup=89 drop=0    
    frame=  821 fps=546 q=1.0 size=    1818kB time=00:00:27.36 bitrate= 544.3kbits/s dup=135 drop=0    
    frame= 1098 fps=548 q=1.0 size=    2444kB time=00:00:36.60 bitrate= 547.0kbits/s dup=181 drop=0    
    frame= 1376 fps=549 q=1.0 size=    3072kB time=00:00:45.87 bitrate= 548.5kbits/s dup=227 drop=0    
    frame= 1653 fps=550 q=1.0 size=    3700kB time=00:00:55.12 bitrate= 549.9kbits/s dup=273 drop=0    
    frame= 1930 fps=550 q=1.0 size=    4326kB time=00:01:04.36 bitrate= 550.6kbits/s dup=319 drop=0    
    frame= 2208 fps=551 q=1.0 size=    4960kB time=00:01:13.64 bitrate= 551.8kbits/s dup=365 drop=0    
    frame= 2462 fps=546 q=1.0 size=    5746kB time=00:01:22.11 bitrate= 573.2kbits/s dup=407 drop=0    
    frame= 2728 fps=544 q=1.0 size=    6354kB time=00:01:30.99 bitrate= 572.1kbits/s dup=451 drop=0    
    frame= 3007 fps=545 q=1.0 size=    6980kB time=00:01:40.28 bitrate= 570.2kbits/s dup=498 drop=0    
    frame= 3146 fps=546 q=1.0 Lsize=    7352kB time=00:01:44.93 bitrate= 573.9kbits/s dup=521 drop=0    

    video:1518kB audio:5745kB global headers:0kB muxing overhead 1.230493%
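    A likely cause is visible in the log above: the image sequence is read at the 25 fps default (`Stream #0:0: Video: targa, ... 25 fps`), so converting to 29.97 fps duplicates roughly one frame in five, which is exactly the climbing `dup=` counter. A hedged suggestion, assuming the goal is to play all 3147 frames at 29.97 fps:

    ```shell
    # Untested suggestion: set the input frame rate BEFORE -i so the image
    # sequence itself is read at 30000/1001 fps instead of the 25 fps default:
    #
    #   ffmpeg -r 30000/1001 -i v%4d.tga -i final.ac3 -vcodec mpeg2video \
    #          -qscale 1 -s 400x400 -acodec copy -y out.mpeg
    #
    # Sanity check: 3147 frames at 30000/1001 fps last about 105 s, which
    # matches the intended 1m45s and the -t 105 in the original command.
    awk 'BEGIN { printf "%.2f\n", 3147 * 1001 / 30000 }'
    ```

    With the input and output rates matched, ffmpeg has no reason to duplicate frames, and the video and audio durations should line up.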