Other articles (56)

  • Support for all media types

    10 April 2011

    Unlike many modern document-sharing software packages and platforms, MediaSPIP aims to handle as many different document formats as possible, whether they are: images (png, gif, jpg, bmp and others...); audio (MP3, Ogg, Wav and others...); video (Avi, MP4, Ogv, mpg, mov, wmv and others...); textual content, code or other (OpenOffice, Microsoft Office (spreadsheet, presentation), web (html, css), LaTeX, Google Earth) (...)

  • Supporting all media types

    13 April 2011

    Unlike most software and media-sharing platforms, MediaSPIP aims to manage as many different media types as possible. The following are just a few examples from an ever-expanding list of supported formats: images (png, gif, jpg, bmp and more); audio (MP3, Ogg, Wav and more); video (AVI, MP4, OGV, mpg, mov, wmv and more); text, code and other data (OpenOffice, Microsoft Office (Word, PowerPoint, Excel), web (html, CSS), LaTeX, Google Earth) and (...)

  • Changing the graphic theme

    22 February 2011

    The graphic theme does not change the actual layout of the elements on the page; it only modifies their appearance.
    The placement can indeed be altered, but this change is purely visual and does not affect the semantic representation of the page.
    Changing the graphic theme in use
    To change the graphic theme in use, the zen-garden plugin must be enabled on the site.
    It is then enough to go to the configuration area of the (...)

On other sites (9806)

  • ffmpeg: split mp3, encode aac and join produce artifacts and empty space

    18 June 2016, by aganeiro

    Source mp3

       ffprobe -show_frames -select_streams a -print_format csv -show_entries  
    frame=index,pkt_dts_time ~/demo_files/000.orig.5352357791787324393.mp3
    frame,0.000000
    frame,0.026122
    frame,0.052245
    frame,0.078367

    Every part is made with the following command; the -ss position and -t duration are taken and calculated from the previous ffprobe output:

       /home/xxx/bin/ffmpeg -analyzeduration 50000000 -probesize 50000000  
    -ss 0.000000 -i /home/xxx/demo_files/000.orig.5352357791787324393.mp3  
    -s 0 -t 0.926276 -flags +global_header -c:a libfdk_aac -strict -2  
    -b:a 64k -ac 2 -ar 44100 -vn -f mpegts -y /tmp/p0.ts

       /home/xxx/bin/ffmpeg -analyzeduration 50000000 -probesize 50000000  
    -ss 1.018776 -i /home/xxx/demo_files/000.orig.5352357791787324393.mp3  
    -s 0 -t 0.900153 -flags +global_header -c:a libfdk_aac -strict -2  
    -b:a 64k -ac 2 -ar 44100 -vn -f mpegts -y /tmp/p1.ts

    This produces:

    [mp3 @ 0x39ca980] Estimating duration from bitrate, this may be inaccurate
       Input #0, mp3, from '/home/xxx/demo_files/000.orig.5352357791787324393.mp3':
       Duration: 00:05:17.20, start: 0.000000, bitrate: 320 kb/s
       Stream #0:0: Audio: mp3, 44100 Hz, stereo, s16p, 320 kb/s
       [mpegts @ 0x39ccea0] Using AVStream.codec to pass codec  
    parameters to muxers is deprecated, use AVStream.codecpar instead.
       [mpegts @ 0x39ccea0] frame size not set
       Output #0, mpegts, to '/tmp/p0.ts':
         Metadata:
           encoder         : Lavf57.38.100
           Stream #0:0: Audio: aac (libfdk_aac), 44100 Hz, stereo, s16, 64 kb/s
           Metadata:
             encoder         : Lavc57.46.100 libfdk_aac
       Stream mapping:
         Stream #0:0 -> #0:0 (mp3 (native) -> aac (libfdk_aac))
       Press [q] to stop, [?] for help
       size=      10kB time=00:00:00.92 bitrate=  92.3kbits/s speed=39.8x    
       video:0kB audio:8kB subtitle:0kB other streams:0kB global  
    headers:0kB muxing overhead: 24.619143%
         Duration: 00:00:00.63, start: 1.400000, bitrate: 127 kb/s

    Part info

       ffmpeg -hide_banner -i /tmp/p0.ts 2>&1 |grep -P 'Duration|Stream'
       Duration: 00:00:00.95, start: 1.400000, bitrate: 90 kb/s
       Stream #0:0[0x100]: Audio: aac (LC) ([15][0][0][0] / 0x000F),  
    44100 Hz, stereo, fltp, 68 kb/s

    Then I join all parts together with

       /home/xxx/bin/ffmpeg -i /tmp/p0.ts -i /tmp/p1.ts -i /tmp/p2.ts  
    -i /tmp/p3.ts -i /tmp/p4.ts -i /tmp/p5.ts -filter_complex  
    "[0:a]asetpts=PTS-STARTPTS[a0];[1:a]asetpts=PTS-STARTPTS[a1];  
    [2:a]asetpts=PTS-STARTPTS[a2];[3:a]asetpts=PTS-STARTPTS[a3];  
    [4:a]asetpts=PTS-STARTPTS[a4];[5:a]asetpts=PTS-STARTPTS[a5];  
    [a0][a1][a2][a3][a4][a5] concat=n=6:v=0:a=1 [a]"  
    -map [a] -strict experimental -fflags +genpts -flags +global_header  
    -c libfdk_aac -bsf:a aac_adtstoasc -y /tmp/res.m4a

    Waveform of the original and of the joined file (original on the left):
    i68.tinypic.com/magcnl.jpg

    So, as you can see, the joined file has delays and its waveform starts later. Why? Maybe it is because all the encoded parts have a start time of 1.400000? How do I set the start time to 0 when encoding?
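
    One option (a sketch using ffmpeg's generic -muxdelay/-muxpreload output options; whether the whole 1.400000 offset comes from the muxer delay here is an assumption) would be to zero the MPEG-TS muxer's initial delay when encoding each part:

       /home/xxx/bin/ffmpeg -analyzeduration 50000000 -probesize 50000000  
    -ss 0.000000 -i /home/xxx/demo_files/000.orig.5352357791787324393.mp3  
    -t 0.926276 -muxdelay 0 -muxpreload 0 -flags +global_header  
    -c:a libfdk_aac -strict -2 -b:a 64k -ac 2 -ar 44100 -vn  
    -f mpegts -y /tmp/p0.ts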

    I also tried to cut the empty space while joining, using filter_complex, but the result is still not good and contains artifacts, because the trim position looks different in every part.

       /home/xxx/bin/ffmpeg -i /tmp/p0.ts -i /tmp/p1.ts -i /tmp/p2.ts  
    -i /tmp/p3.ts -i /tmp/p4.ts -i /tmp/p5.ts -filter_complex  
    "[0:a]atrim=0.020000,asetpts=PTS-STARTPTS[a0];  
    [1:a]atrim=0.020000,asetpts=PTS-STARTPTS[a1];  
    [2:a]atrim=0.020000,asetpts=PTS-STARTPTS[a2];  
    [3:a]atrim=0.020000,asetpts=PTS-STARTPTS[a3];  
    [4:a]atrim=0.020000,asetpts=PTS-STARTPTS[a4];  
    [5:a]atrim=0.020000,asetpts=PTS-STARTPTS[a5];  
    [a0][a1][a2][a3][a4][a5] concat=n=6:v=0:a=1 [a]"  
    -map [a] -strict experimental -fflags +genpts  
    -flags +global_header -c libfdk_aac -bsf:a aac_adtstoasc  
    -y /tmp/res.m4a

    Why does this happen, and how can I solve it?
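
    One variant that might avoid the artifacts introduced by the second lossy encode (a sketch based on ffmpeg's documented concat protocol for MPEG-TS input, untested against these files) is to join the already-encoded .ts parts with stream copy instead of re-encoding them:

       /home/xxx/bin/ffmpeg -i "concat:/tmp/p0.ts|/tmp/p1.ts|/tmp/p2.ts|/tmp/p3.ts|/tmp/p4.ts|/tmp/p5.ts"  
    -c copy -bsf:a aac_adtstoasc -y /tmp/res.m4a

    The per-part AAC encoder delay (priming samples) would still be present at each boundary, so small gaps may remain unless the parts are trimmed sample-accurately.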

  • Creating GIF from QImages with ffmpeg

    17 August 2016, by Sierra

    I would like to generate a GIF from QImages using ffmpeg, all of it programmatically (C++). I'm working with Qt 5.6 and the latest build of ffmpeg (git-0a9e781, 2016-06-10).

    I'm already able to convert these QImages to .mp4 and it works. I tried to use the same principle for the GIF, changing the pixel format and the codec. The GIF is generated from two pictures (1 second each) at 15 FPS.

    ## INITIALIZATION
    #####################################################################

    // Filepath : "C:/Users/.../qt_temp.Jv7868.gif"  
    // Allocating an AVFormatContext for an output format...
    avformat_alloc_output_context2(&formatContext, NULL, NULL, filepath);

    ...

    // Adding the video streams using the default format codecs and initializing the codecs.
    stream = avformat_new_stream(formatContext, *codec);

    AVCodecContext * codecContext = avcodec_alloc_context3(*codec);

    codecContext->codec_id       = codecId;
    codecContext->bit_rate       = 400000;
    ...
    codecContext->pix_fmt        = AV_PIX_FMT_BGR8;

    ...

    // Opening the codec...
    avcodec_open2(codecContext, *codec, NULL);

    ...

    frame = allocPicture(codecContext->width, codecContext->height, codecContext->pix_fmt);
    tmpFrame = allocPicture(codecContext->width, codecContext->height, AV_PIX_FMT_RGBA);

    ...

    avformat_write_header(formatContext, NULL);

    ## ADDING A NEW FRAME
    #####################################################################

    // The QImage to encode is received as a parameter: newFrame(const QImage & image)
    const qint32 width  = image.width();
    const qint32 height = image.height();

    // Converting QImage into AVFrame
    for (qint32 y = 0; y < height; y++) {
       const uint8_t * scanline = image.scanLine(y);

       for (qint32 x = 0; x < width * 4; x++) {
           tmpFrame->data[0][y * tmpFrame->linesize[0] + x] = scanline[x];
       }
    }

    ...

    // Scaling...
    if (codecContext->pix_fmt != AV_PIX_FMT_BGRA) {
       if (!swsCtx) {
           swsCtx = sws_getContext(codecContext->width, codecContext->height,
                                   AV_PIX_FMT_BGRA,
                                   codecContext->width, codecContext->height,
                                   codecContext->pix_fmt,
                                   SWS_BICUBIC, NULL, NULL, NULL);
       }

       sws_scale(swsCtx,
                 (const uint8_t * const *)tmpFrame->data,
                 tmpFrame->linesize,
                 0,
                 codecContext->height,
                 frame->data,
                 frame->linesize);
    }
    frame->pts = nextPts++;

    ...

    int gotPacket = 0;
    AVPacket packet = {0};

    av_init_packet(&packet);
    avcodec_encode_video2(codecContext, &packet, frame, &gotPacket);

    if (gotPacket) {
       av_packet_rescale_ts(&packet, codecContext->time_base, stream->time_base);
       packet.stream_index = stream->index;

       av_interleaved_write_frame(formatContext, &packet);
    }

    But when I try to modify the video codec and pixel format to match the GIF specification, I run into some issues.
    I tried several codecs such as AV_CODEC_ID_GIF and AV_CODEC_ID_RAWVIDEO, but none of them seems to work. During the initialization phase, avcodec_open2() always returns errors of this kind:

    Specified pixel format rgb24 is invalid or not supported
    Could not open video codec:  gif
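
    One way to check which pixel formats the GIF encoder actually accepts (a small sketch using the public AVCodec::pix_fmts list; the variable name gifCodec is mine, and avcodec_register_all() must already have been called):

    // Sketch: print every pixel format the GIF encoder advertises.
    // AVCodec::pix_fmts is an AV_PIX_FMT_NONE-terminated array and may be NULL.
    // Headers (wrapped in extern "C" in C++): libavcodec/avcodec.h, libavutil/pixdesc.h.
    AVCodec *gifCodec = avcodec_find_encoder(AV_CODEC_ID_GIF);
    if (gifCodec && gifCodec->pix_fmts) {
        for (const AVPixelFormat *p = gifCodec->pix_fmts; *p != AV_PIX_FMT_NONE; ++p) {
            printf("gif encoder supports: %s\n", av_get_pix_fmt_name(*p));
        }
    }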

    EDIT 17/06/2016

    Digging a little bit more, avcodec_open2() returns -22:

    #define EINVAL          22      /* Invalid argument */

    EDIT 22/06/2016

    Here are the flags used to compile ffmpeg:

    "FFmpeg/Libav configuration: --disable-static --enable-shared --enable-gpl --enable-version3 --disable-w32threads --enable-nvenc --enable-avisynth --enable-bzlib --enable-fontconfig --enable-frei0r --enable-gnutls --enable-iconv --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libfreetype --enable-libgme --enable-libgsm --enable-libilbc --enable-libmodplug --enable-libmfx --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-libopus --enable-librtmp --enable-libschroedinger --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libtheora --enable-libtwolame --enable-libvidstab --enable-libvo-amrwbenc --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxavs --enable-libxvid --enable-libzimg --enable-lzma --enable-decklink --enable-zlib"

    Did I miss a crucial one for GIF?

    EDIT 27/06/2016

    Thanks to Gwen, I have a first output: I set codecContext->pix_fmt to AV_PIX_FMT_BGR8. However, I'm still facing some issues with the generated GIF: it doesn't play, and encoding appears to fail.

    [Image: GIF generated from the command line with ffmpeg (left) vs. GIF generated programmatically (right)]
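
    If the file never plays, one thing worth double-checking (an assumption, since the finalization code is not shown above) is that the muxer gets finalized after the last frame; without av_write_trailer() the GIF file is left incomplete:

    // Sketch: finalize the muxer once every frame has been written.
    av_write_trailer(formatContext);
    if (!(formatContext->oformat->flags & AVFMT_NOFILE))
        avio_closep(&formatContext->pb);   // only if the file was opened with avio_open()
    avformat_free_context(formatContext);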

    It looks like some options are not defined... or maybe there is a wrong conversion between the QImage and the AVFrame? I updated the code above. It represents a lot of code, so I tried to keep it short. Don't hesitate to ask for more details.
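
    For the conversion itself, here is a minimal sketch of an alternative (my assumption: the frames are fed to swscale as RGBA; convertToFormat(), constBits() and bytesPerLine() are standard QImage calls) that hands the QImage buffer to sws_scale() directly instead of using the per-byte copy loop:

    // Sketch: force the QImage into a known 32-bit layout, then let swscale convert
    // to the encoder's pixel format. Assumes swsCtx was created with AV_PIX_FMT_RGBA
    // as the source format; the code above creates it with AV_PIX_FMT_BGRA while
    // tmpFrame is allocated as AV_PIX_FMT_RGBA, which may be part of the problem.
    QImage rgba = image.convertToFormat(QImage::Format_RGBA8888);

    const uint8_t *srcData[4]   = { rgba.constBits(), NULL, NULL, NULL };
    int            srcStride[4] = { rgba.bytesPerLine(), 0, 0, 0 };

    sws_scale(swsCtx, srcData, srcStride, 0, rgba.height(),
              frame->data, frame->linesize);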

    End of EDIT

    I'm not really familiar with ffmpeg; any kind of help would be highly appreciated. Thank you.

  • FFMPEG - FFV1 frame encoding crashes during cleanup

    21 May 2016, by Yan

    I'm trying to implement frame encoding functionality using the ffmpeg C API. I am receiving frames from a camera in the gray16le format. I want to encode them using the ffv1 encoder and copy the resulting frame into the variable "data". This is the code I have so far. It seems to be working in the sense that it doesn't crash until the part where I free up my variables.

    /* Video compression variables///////////////////////////////////*/
    struct timeval stop, start;

    AVCodec *codec;
    AVCodecContext *context= NULL;
    const AVPixFmtDescriptor *avPixDesc = NULL; // used to get bits per pixel
    int ret, got_output = 0;
    int bufferSize = 0; // Size of encoded image frame in bytes
    //uint8_t* outBuffer;
    AVFrame *inFrame; //
    AVPacket pkt;
    Data* data;
    /* Video compression ///////////////////////////////////*/

    Frame* frame;
    /////////////////////////////////////////////////////////////////////////
    // start frame compression - current codec is ffv1
    //////////////////////////////////////////////////////////////////////////

    gettimeofday(&start, NULL); // get current time
    avcodec_register_all(); // register all the codecs
    codec = avcodec_find_encoder(AV_CODEC_ID_FFV1); // find the ffv1 encoder
    if (!codec) {
       fprintf(stderr, "Codec not found\n");
       exit(1);
    }

    context = avcodec_alloc_context3(codec);
    if (!context) {
       fprintf(stderr, "Could not allocate video codec context\n");
       exit(1);
    }

    frame = getFrame(); // get frame so we can set context params

    /* put sample parameters */
    context->bit_rate = 400000; // from example, half might also work
    /* resolution must be a multiple of two */
    context->width = frame->size[0];
    context->height = frame->size[1];
    /* frames per second */
    context->time_base = (AVRational){1,22}; // 22 fps

    context->gop_size = 1; // typical for ffv1 codec
    context->max_b_frames = 1; // set to 1 for now; the higher the b-frame count, the more resources are needed
    context->pix_fmt = AV_PIX_FMT_GRAY16LE; // same as source: Y, 16bpp, little-endian, 12 of the 16 bits are used

    /* open it */
    if (avcodec_open2(context, codec, NULL) < 0) {
           fprintf(stderr, "Could not open codec\n");
           exit(1);
    }

    inFrame = av_frame_alloc();
    if(!inFrame)
    {
       printf("Could not allocate video frame\n! Exciting..");
       exit(1);
    }

    // allocate image in inFrame
    ret = av_image_alloc(inFrame->data, inFrame->linesize, context->width, context->height, context->pix_fmt, 16);
    if(ret<0)
    {
       printf("Error allocating image of inFrame! Exiting..\n");
       exit(1);
    }

    // copy data of frame of type Frame* into frame of type AVFrame* so we can use ffmpeg to encode it
    int picFill = avpicture_fill((AVPicture*)inFrame, (uint8_t*)frame->image, context->pix_fmt, context->width, context->height);

    if(picFill < 0)
    {
       printf("Error filling inFrame with frame->image! Exiting..\n");
       exit(1);
    }
    else
    {
       printf("Successfully filled inFrame with frame->image..\n");
       printf("Size of bytes filled:  %d", picFill);
    }

    inFrame->width = context->width;
    inFrame->height = context->height;
    inFrame->format = context->pix_fmt;

    if(frame->image[0] == NULL)
    {
           printf("Error! frame->image[0] == NULL.. Exiting..\n");
           exit(1);
    }

    fflush(stdout);
    int i=0;

    // start encoding
    while(!got_output) // while we didn't get a complete package
    {
       /* Start encoding the given frame */
       av_init_packet(&pkt);
       pkt.data = NULL;    // packet data will be allocated by the encoder
       pkt.size = 0;

       i++;

       /* encode the image */
       ret = avcodec_encode_video2(context, &pkt, inFrame, &got_output);
       if (ret < 0) {
               fprintf(stderr, "Error encoding frame\n");
               exit(1);
       }

       inFrame->pts = i;

       if(got_output)
       {
           printf("Got a valid package after %d frames..\n", i);
           // encoding of frame done, adapt "data"-field accordingly
           avPixDesc = av_pix_fmt_desc_get(context->pix_fmt); // Get pixelFormat descriptor
           bufferSize = av_image_get_buffer_size(context->pix_fmt, inFrame->width, inFrame->height,16);
           if(bufferSize <= 0)
           {
               printf("Error! Buffersize of encoded frame is <= 0, exciting...\n");
           }
           else
           {
               printf("Buffersize determined to be %d\n", bufferSize);
           }

           data->m_size[0] = inFrame->width;
           data->m_size[1] = inFrame->height;
           data->m_bytesPerPixel = av_get_bits_per_pixel(avPixDesc)/8;

           if (0 != av_get_bits_per_pixel(avPixDesc) % 8)
                   data->m_bytesPerPixel += 1;

           printf("Buffersize is: %d, should be %d\n", bufferSize, inFrame->width * inFrame->height * data->m_bytesPerPixel);
           data->m_image = malloc(bufferSize);
           printf("copying data into final variable...\n");

           memcpy(data->m_image, pkt.data, bufferSize); // copy data from ffmpeg frame
           printf("copying of data done\n");

           printf("Unrefing packet..\n");
           av_packet_unref(&pkt);
           printf("Unrefing packet done..\n");
       }
       else
       {
           printf("Didnt get package, so we get and encode next frame..\n");
           frame = getFrame(); // get next frame            

           picFill = avpicture_fill((AVPicture*)inFrame, (uint8_t*)frame->image, context->pix_fmt, context->width, context->height);
           if(!picFill)
           {
               printf("Error filling frame with data!!..\n");
               exit(1);
           }
           else
           {
               printf("Size required to store received frame in AVFrame in bytes: %d", picFill);
           }
       }
    }

    printf("\nDone with encoding.. cleaning up..\n");
    printf("Closing context...\n");
    avcodec_close(context);
    printf("Closing context done...\n");
    printf("Freeing context...\n");
    av_free(context);
    printf("Freeing context done...\n");
    if(inFrame->data[0] != NULL)
    {
       printf("avfreep() pointer to FFMPEG frame data...\n");
       av_freep(&inFrame->data[0]);
       printf("Freeing pointer to FFMPEG frame data done...\n");
    }
    else
    {
       printf("infRame->data[0] was not deleted because it was NULL\n");
    }

    printf("Freeing frame...\n");
    av_frame_free(&inFrame);
    printf("Freeing inFrame done...\n");
    printf("Compression of frame done...\n");
    gettimeofday(&stop, NULL);
    printf("took %lu ms\n", (stop.tv_usec - start.tv_usec) / 1000);

    This is the output I get when I run the program:

    [ffv1 @ 0x75101970] bits_per_raw_sample > 8, forcing range coder
    Successfully filled inFrame with frame->image..
    Size of bytes filled:  1377792Got a valid package after 1 frames..
    Buffersize determined to be 1377792
    Buffersize is: 1377792, should be 1377792
    copying data into final variable...
    copying of data done
    Unrefing packet..
    Unrefing packet done..

    Done with encoding.. cleaning up..
    Closing context...
    Closing context done...
    Freeing context...
    Freeing context done...
    avfreep() pointer to FFMPEG frame data...
    *** Error in `./encoding': free(): invalid pointer: 0x74a66428 ***
    Aborted

    The error seems to occur when calling the av_freep() function. If you could point me in the right direction, it would be greatly appreciated! This is my first time working with the ffmpeg API, and I feel that I am not that close to my goal yet, though I have already spent quite some time looking for the error.

    Best regards!
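
    One possible explanation, sketched below as an assumption rather than a verified fix: avpicture_fill() repoints inFrame->data[0] at frame->image, a buffer that was not allocated with av_malloc(), so the later av_freep(&inFrame->data[0]) frees a pointer that ffmpeg does not own. Copying the camera buffer into the planes created by av_image_alloc() keeps the ownership consistent:

    // Sketch (needs libavutil/imgutils.h): wrap the camera buffer in temporary
    // plane pointers, then copy its contents into the planes that av_image_alloc()
    // allocated for inFrame, instead of repointing inFrame->data at it.
    uint8_t *srcData[4];
    int      srcLinesize[4];

    av_image_fill_arrays(srcData, srcLinesize, (const uint8_t *)frame->image,
                         context->pix_fmt, context->width, context->height, 1);
    av_image_copy(inFrame->data, inFrame->linesize,
                  (const uint8_t **)srcData, srcLinesize,
                  context->pix_fmt, context->width, context->height);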