Media (0)

No media matching your criteria is available on this site.

Other articles (89)

  • Websites made with MediaSPIP

    2 May 2011

    This page lists some websites based on MediaSPIP.

  • Improvements to the basic version

    13 September 2013

    Nicer multiple selection
    The Chosen plugin improves the usability of multiple-selection fields. See the following two images to compare.
    All you need to do is enable the Chosen plugin (Site general configuration > Plugin management), then configure it (Templates > Chosen) by enabling Chosen on the public site and specifying which form elements to enhance, for example select[multiple] for multiple-selection lists (...)

  • Emballe médias: what is it for?

    4 February 2011

    This plugin is intended to manage sites that publish documents of all types.
    It creates "media", meaning: a "media" is an article in the SPIP sense, created automatically when a document is uploaded, whether audio, video, image or text; only one document can be linked to a given "media" article;

On other sites (8518)

  • Creating TS file from MP4 file - Differing durations

    2 May 2018, by Mark Walsh

    I am running the following command to trim an MP4 file:

    -i "C:\FFMPEG\Temp\S3\2ad239d1-f4b9-4854-afe4-7e28157893daHighRes.mp4" -q:v 0 -y -ss 00:00:01.000 -to 00:00:29.834 -vf "fade=t=out:st=29.334:d=0.500, scale=iw*min(1080/iw\,720/ih):ih*min(1080/iw\,720/ih),pad=1080:720:(1080-iw)/2:(720-ih)/2" "C:\FFMPEG\Temp\Crops\5ae9806e32ab040978d97013_0.ts"

    As you can see, I want to trim the video to exactly 28834 milliseconds (00:00:01.000 to 00:00:29.834). However, when I inspect the created file with ffprobe, it is 28873 milliseconds long. Why is this?
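
    For what it's worth, the 39 ms discrepancy is close to a single frame period (40 ms if the source is 25 fps), so one plausible explanation is that the cut points get snapped to whole frames: a file can only contain an integer number of frames, and ffprobe reports the timestamp span it actually finds in the TS. A minimal sketch (assuming a C toolchain with libavformat available) that reads back the container duration the same way ffprobe does:

    #include <stdio.h>
    #include <libavformat/avformat.h>

    int main(int argc, char **argv)
    {
        AVFormatContext *fmt = NULL;
        if (argc < 2) { fprintf(stderr, "usage: %s <file>\n", argv[0]); return 1; }
        av_register_all(); /* required on older FFmpeg; deprecated no-op on 4.0+ */
        if (avformat_open_input(&fmt, argv[1], NULL, NULL) != 0) return 1;
        if (avformat_find_stream_info(fmt, NULL) < 0) return 1;
        /* fmt->duration is expressed in AV_TIME_BASE (microsecond) units */
        printf("%s: %.3f ms\n", argv[1], fmt->duration / 1000.0);
        avformat_close_input(&fmt);
        return 0;
    }

    Running this against both the source .mp4 and the produced .ts would show whether the extra 39 ms comes from the cut itself or from how the TS muxer stamps the final packets.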

  • Support scaling to UHD

    6 May 2018, by Aurelius Schnitzler

    I am using ffmpeg with:

    ffmpeg -i GOPR1373.MP4 -c:v libx264 -vf "pad=width=5848:height=2924:x=1964:y=922:color=black,scale=3840:2160" GOPR1373_w.MP4

    This fails with Error initializing output stream 0:0 -- Error while opening encoder for output stream #0:0 - maybe incorrect parameters such as bit_rate, rate, width or height, and x264 [error]: malloc of size 43688832 failed,

    yet

    ffmpeg -i GOPR1373.MP4 -c:v libx264 -vf "pad=width=5848:height=2924:x=1964:y=922:color=black,scale=1920:1080" GOPR1373_w.MP4

    works.

    How can I fix this?

    EDIT: I solved it by freeing up memory.
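
    For context (an observation, not part of the original post): a failed malloc of 43688832 bytes (~42 MiB) points at memory exhaustion rather than genuinely incorrect parameters, which the EDIT above is consistent with. Note also that the filter chain pads up to 5848x2924 before scaling down, so every frame passes through that large intermediate. A small sketch using libavutil to put numbers on the per-frame buffers involved:

    #include <stdio.h>
    #include <libavutil/imgutils.h>

    int main(void)
    {
        /* Size of one YUV420P frame at each resolution in the pipeline. */
        int padded = av_image_get_buffer_size(AV_PIX_FMT_YUV420P, 5848, 2924, 1);
        int uhd    = av_image_get_buffer_size(AV_PIX_FMT_YUV420P, 3840, 2160, 1);
        int fhd    = av_image_get_buffer_size(AV_PIX_FMT_YUV420P, 1920, 1080, 1);
        printf("padded 5848x2924: %.1f MiB\n", padded / (1024.0 * 1024.0));
        printf("UHD    3840x2160: %.1f MiB\n", uhd / (1024.0 * 1024.0));
        printf("FHD    1920x1080: %.1f MiB\n", fhd / (1024.0 * 1024.0));
        return 0;
    }

    That is roughly 24 MiB per padded frame and 12 MiB per UHD frame versus 3 MiB at 1080p; since x264 keeps several reference frames plus lookahead buffers at the output resolution, the UHD run can plausibly exhaust a memory-constrained (for example 32-bit) process where the 1080p run does not.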

  • mpeg2 ts android ffmpeg openmax

    10 October 2014, by WLGfx

    The setup is as follows:

    1. Multicast server, 1000 Mb/s, UDP, MPEG2-TS Part 1 (H.222), streaming live TV channels.
    2. Quad-core 1.5 GHz Android 4.2.2 device with a GLES 2.0 renderer.
    3. FFmpeg library.
    4. Eclipse Kepler, Android SDK/NDK, etc. Running on Windows 8.1.
    5. Output screen 1920 x 1080; I am using a texture of 2048 x 1024 and getting between 35 and 45 frames per second.

    The app:

    1. Renderer thread runs continuously and updates a single texture by uploading segments to the GPU when media images are ready.
    2. Media handler thread, which downloads and processes media from the server or local storage.
    3. Video thread(s): one for buffering the UDP packets and another for decoding the packets into frames.

    I am connecting ffmpeg to the UDP stream just fine, and the packets are being buffered and seemingly decoded fine. The packet buffers are plentiful, with no under- or overflows. The problem I am facing is that playback appears to chop up frames (i.e. it only plays back one out of every so many frames). I understand that I need to distinguish I/P/B frames, but at the moment, hands up, I haven't got a clue. I've even tried a hack to detect I-frames, to no avail. Plus, I am only rendering the frames to less than a quarter of the screen, so I'm not using full-screen decoding.
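
    For reference, the libav* API of that era exposes both pieces of information directly, so no hack should be needed; a minimal sketch (the helper names are illustrative, not from the post):

    #include <libavcodec/avcodec.h>

    /* The 0x0001 flag tested in the decoder below is AV_PKT_FLAG_KEY:
       set by the demuxer when the packet starts a keyframe. */
    static int packet_is_keyframe(const AVPacket *pkt)
    {
        return (pkt->flags & AV_PKT_FLAG_KEY) != 0;
    }

    /* A decoded frame also carries its own picture type. */
    static char decoded_picture_type(const AVFrame *frame)
    {
        return av_get_picture_type_char(frame->pict_type); /* 'I', 'P', 'B', ... */
    }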

    The decoded frames are also stored in separate buffers to avoid page tearing. I have also varied the number of buffers, from 1 to 10, with no luck.

    From what I've found about OpenMAX IL, it only handles MPEG2-TS Part 3 (H.264 and AAC), but you can use your own decoder. I understand that you can add your own decode component to it. Would it be worth trying this route, or should I continue with ffmpeg?

    The frame decoder (only the renderer will convert and scale the frames when ready):
    /*
    * This function runs through the buffered packets and keeps decoding
    * until a frame is ready, or until it runs out of packets.
    */

    while (packetsUsed[decCurrent])
    {
       hack_for_i_frame:
       // Decode the next buffered packet; frameFinished is set once a
       // complete picture is available.
       avcodec_decode_video2(pCodecCtx, pFrame, &frameFinished, &packets[decCurrent]);
       i = packets[decCurrent].flags & AV_PKT_FLAG_KEY; // 0x0001: demuxer marked a keyframe
       av_free_packet(&packets[decCurrent]); // release the buffer before giving the slot back, otherwise it leaks
       packetsUsed[decCurrent] = 0; // finished with this one
       decCurrent++;
       if (decCurrent >= MAXPACKETS) decCurrent = 0;
       if (frameFinished)
       {
           // Publish the frame and rotate to the next buffer so the
           // renderer never reads a frame that is being overwritten.
           ready_pFrame = pFrame;
           frameReady = true;  // notify renderer
           frameCounter++;
           if (frameCounter >= MAXFRAMES) frameCounter = 0;
           pFrame = pFrames[frameCounter];
           return 0;
       }
       else if (i)
           goto hack_for_i_frame; // keyframe but no complete frame yet; decode the next packet
    }

    return 0;

    The packet reader (spawned as a pthread):
    void *mainPacketReader(void *voidptr)
    {
       int res;

       while ( threadState == TS_RUNNING )
       {
          if (packetsUsed[prCurrent])
          {
              // Ring buffer is full: read into a throwaway packet and
              // drop it so the UDP stream keeps flowing.
              LOGE("Packet buffer overflow, dropping packet...");
              av_read_frame( pFormatCtx, &packet );
              av_free_packet( &packet ); // release the dropped packet's buffer
          }
          else if ( av_read_frame( pFormatCtx, &packets[prCurrent] ) >= 0 )
          {
              if ( packets[prCurrent].stream_index == videoStream )
              {
                  packetsUsed[prCurrent] = 1; // flag as used
                  prCurrent++;
                  if ( prCurrent >= MAXPACKETS )
                  {
                      prCurrent = 0;
                  }
              }

              // here check if the packet is audio and add to audio buffer
          }
       }
       return NULL;
    }

    And the renderer simply does this:
    // texture has already been bound before calling this function

    if ( frameReady == false ) return;

    AVFrame *temp;  // set to the frame 'not' currently being decoded
    temp = ready_pFrame;

    // Convert/scale the decoded YUV frame into the RGBA staging buffer.
    sws_scale(sws_ctx, (uint8_t const* const *)temp->data,
           temp->linesize, 0, pCodecCtx->height,
           pFrameRGB->data, pFrameRGB->linesize);

    // Upload the converted pixels into the on-screen sub-region of the texture.
    glTexSubImage2D(GL_TEXTURE_2D, 0,
           XPOS, YPOS, WID, HGT,
           GL_RGBA, GL_UNSIGNED_BYTE, buffer);

    frameReady = false;
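
    One thing worth noting (my observation, not from the post): frameReady and ready_pFrame are shared between the decoder and renderer threads without any synchronization, so the renderer can observe a half-published handoff. A minimal sketch of a mutex-protected handoff using pthreads (which the reader thread already uses); the names are illustrative:

    #include <pthread.h>
    #include <libavcodec/avcodec.h>

    static pthread_mutex_t frame_lock = PTHREAD_MUTEX_INITIALIZER;
    static AVFrame *shared_ready_frame = NULL; /* replaces the bare frameReady flag */

    /* Decoder side: publish a completed frame. */
    void publish_frame(AVFrame *frame)
    {
        pthread_mutex_lock(&frame_lock);
        shared_ready_frame = frame; /* overwrites any unconsumed frame */
        pthread_mutex_unlock(&frame_lock);
    }

    /* Renderer side: take the latest frame, or NULL if none is pending. */
    AVFrame *take_frame(void)
    {
        pthread_mutex_lock(&frame_lock);
        AVFrame *frame = shared_ready_frame;
        shared_ready_frame = NULL;
        pthread_mutex_unlock(&frame_lock);
        return frame;
    }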

    In the past, libvlc had audio syncing problems too, which is why I decided to go with ffmpeg and do all the donkey work from scratch.

    If anybody has any pointers on how to stop the choppiness of the video playback (it works great in VLC player), or possibly another route to go down, it would be seriously appreciated.

    EDIT: I removed the I-frame hack (completely useless), moved the sws_scale call from the renderer to the packet decoder, and left the UDP packet reader thread alone.

    In the meantime, I have also changed the packet reader and packet decoder threads' priority to real-time. Since doing that, I no longer get shedloads of dropped packets.