Other articles (50)

  • HTML5 audio and video support

    10 April 2011

    MediaSPIP uses the HTML5 video and audio tags to play multimedia documents, taking advantage of the latest W3C innovations supported by modern browsers.
    For older browsers, the Flowplayer Flash player is used instead.
    The HTML5 player was created specifically for MediaSPIP: its appearance is fully customisable to match the chosen theme.
    These technologies make it possible to deliver video and sound both to conventional computers (...)

  • Other interesting software

    12 April 2011, by

    We do not claim to be the only ones doing what we do... and we certainly do not claim to be the best either... What we do, we simply try to do well, and better and better...
    The following list covers software that more or less tries to do what MediaSPIP does, or that MediaSPIP more or less tries to emulate; either way, it does not really matter...
    We do not know these projects and have not tried them, but you may want to take a look.
    Videopress
    Website: (...)

  • Encoding and processing into web-friendly formats

    13 April 2011, by

    MediaSPIP automatically converts uploaded files to internet-compatible formats.
    Video files are encoded as MP4, OGV and WebM; HTML5 plays all three, while Flash uses the MP4 version.
    Audio files are encoded as MP3 and OGG; HTML5 plays both, while Flash uses the MP3 version.
    Where possible, text documents are analysed to extract the data needed so search engines can find them, and are then exported as a series of image files.
    All uploaded files are stored online in their original format, so you can (...)

On other sites (7480)

  • FFmpeg SwrContext incorrectly converting leftover data after seek

    27 April 2017, by trigger_death

    I currently have my own custom SFML SoundFileReader that uses FFmpeg to support more file formats. It works great for the most part, but after a seek, swr_convert returns leftover data from the previous position. My current workaround, which feels hackish, is to call swr_init after seeking to discard whatever data is left in the resampler. I assumed the flushing behaviour described in swr_convert's documentation would solve this, yet either it doesn't help or I'm not using it correctly. Is there a proper way to clear the leftover data in the SwrContext after seeking? (A sketch of one possible alternative follows the code below.)

    void seekBeginning() {
       av_seek_frame(
           m_formatContext, m_audioStream,
           m_formatContext->streams[m_audioStream]->first_dts,
           AVSEEK_FLAG_BACKWARD | AVSEEK_FLAG_ANY
       );
       avcodec_flush_buffers(m_codecContext);

       // This fixes the issue but it seems like a horribly incorrect way of doing it
       swr_init(m_convertContext);

       // I've tried this but it doesn't seem to work
       //swr_convert(m_convertContext, NULL, 0, NULL, 0);
    }

    Uint64 read(Int16* samples, Uint64 maxCount) {
       Uint64 count = 0;
       while (count < maxCount) {
           if (m_packet->stream_index == m_audioStream) {
               while (m_packet->size > 0) {
                   int gotFrame = 0;
                   int result = avcodec_decode_audio4(m_codecContext, m_frame, &gotFrame, m_packet);
                   if (result >= 0 && gotFrame) {
                       int samplesToRead = static_cast<int>(maxCount - count) / m_codecContext->channels;
                       if (samplesToRead > m_frame->nb_samples)
                           samplesToRead = m_frame->nb_samples;
                       m_packet->size -= result;
                       m_packet->data += result;
                       result = swr_convert(m_convertContext, (uint8_t**)&samples, samplesToRead, (const uint8_t**)m_frame->data, m_frame->nb_samples);

                       if (result > 0) {
                           count += result * m_codecContext->channels;
                           samples += result * m_codecContext->channels;
                       }
                       else {
                           m_packet->size = 0;
                           m_packet->data = NULL;
                       }
                   }
                   else {
                       m_packet->size = 0;
                       m_packet->data = NULL;
                   }
               }
           }
           av_free_packet(m_packet);
       }

       return count;
    }
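
    One possible approach, sketched under the assumption that the SwrContext above is configured for interleaved 16-bit output (as read() implies); drainResampler(), outChannels and outSampleRate are illustrative names, not part of the original code. Instead of re-initialising the whole context, it drains whatever the resampler still buffers into a throw-away buffer right after the seek, using swr_get_delay() and the NULL-input flushing behaviour of swr_convert():

    #include <cstdint>
    #include <vector>
    extern "C" {
    #include <libswresample/swresample.h>
    }

    void drainResampler(SwrContext* swr, int outChannels, int outSampleRate) {
       // Scratch buffer for samples that are about to be thrown away.
       std::vector<int16_t> scratch(4096 * static_cast<size_t>(outChannels));
       uint8_t* out[1] = { reinterpret_cast<uint8_t*>(scratch.data()) };

       // Passing a NULL input asks swr_convert() to flush its internal buffer;
       // keep flushing until swr_get_delay() reports nothing left.
       while (swr_get_delay(swr, outSampleRate) > 0) {
           int got = swr_convert(swr, out, 4096, nullptr, 0);
           if (got <= 0)
               break;   // 0 = nothing buffered, < 0 = error; stop either way
       }
    }

    That said, swr_init() re-initialises the context's internal state, which is why calling it after the seek also clears the leftover samples, so the "hack" above may be closer to intended usage than it looks.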
  • avformat/demux: Remove fake-loop

    9 December 2021, by Andreas Rheinhardt
    avformat/demux: Remove fake-loop

    When flushing, try_decode_frame() itself loops until the desired
    properties have been found or the decoder is drained.

    Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>

    • [DH] libavformat/demux.c
  • C++/C FFmpeg artifact build up across video frames

    6 March 2017, by ChiragRaman

    Context:
    I am building a recorder that captures video and audio in separate threads (using Boost thread groups), with FFmpeg 2.8.6 on Ubuntu 16.04. I followed the demuxing_decoding example here: https://www.ffmpeg.org/doxygen/2.8/demuxing_decoding_8c-example.html

    Video capture specifics:
    I am reading H264 off a Logitech C920 webcam and writing the video to a raw file. The issue I notice is that artifacts seem to build up across frames until a particular frame resets them. Here are my frame-grabbing and decoding functions:

    // Used for injecting decoding functions for different media types, allowing
    // for a generic decode loop
    typedef std::function<int(AVPacket *, int *, int)> PacketDecoder;

    /**
    * Decodes a video packet.
    * If the decoding operation is successful, returns the number of bytes decoded,
    * else returns the result of the decoding process from ffmpeg
    */
    int decode_video_packet(AVPacket *packet,
                           int *got_frame,
                           int cached){
       int ret = 0;
       int decoded = packet->size;

       *got_frame = 0;

       //Decode video frame
       ret = avcodec_decode_video2(video_decode_context,
                                   video_frame, got_frame, packet);
       if (ret < 0) {
           //FFmpeg users should use av_err2str
           char errbuf[128];
           av_strerror(ret, errbuf, sizeof(errbuf));
           std::cerr << "Error decoding video frame " << errbuf << std::endl;
           decoded = ret;
       } else {
           if (*got_frame) {
               video_frame->pts = av_frame_get_best_effort_timestamp(video_frame);

               //Write to log file
               AVRational *time_base = &video_decode_context->time_base;
               log_frame(video_frame, time_base,
                         video_frame->coded_picture_number, video_log_stream);

    #if( DEBUG )
               std::cout << "Video frame " << ( cached ? "(cached)" : "" )
                         << " coded:" << video_frame->coded_picture_number
                         << " pts:" << video_frame->pts << std::endl;
    #endif

               /*Copy decoded frame to destination buffer:
                *This is required since rawvideo expects non aligned data*/
               av_image_copy(video_dest_attr.video_destination_data,
                             video_dest_attr.video_destination_linesize,
                             (const uint8_t **)(video_frame->data),
                             video_frame->linesize,
                             video_decode_context->pix_fmt,
                             video_decode_context->width,
                             video_decode_context->height);

               //Write to rawvideo file
               fwrite(video_dest_attr.video_destination_data[0],
                      1,
                      video_dest_attr.video_destination_bufsize,
                      video_out_file);

               //Unref the refcounted frame
               av_frame_unref(video_frame);
           }
       }

       return decoded;
    }

    /**
    * Grabs frames in a loop and decodes them using the specified decoding function
    */
    int process_frames(AVFormatContext *context,
                      PacketDecoder packet_decoder) {
       int ret = 0;
       int got_frame;
       AVPacket packet;

       //Initialize packet, set data to NULL, let the demuxer fill it
       av_init_packet(&packet);
       packet.data = NULL;
       packet.size = 0;

       // read frames from the file
       for (;;) {
           ret = av_read_frame(context, &packet);
           if (ret < 0) {
               if (ret == AVERROR(EAGAIN)) {
                   continue;
               } else {
                   break;
               }
           }

           //Convert timing fields to the decoder timebase
           unsigned int stream_index = packet.stream_index;
           av_packet_rescale_ts(&packet,
                                context->streams[stream_index]->time_base,
                                context->streams[stream_index]->codec->time_base);

           AVPacket orig_packet = packet;
           do {
               ret = packet_decoder(&packet, &got_frame, 0);
               if (ret < 0) {
                   break;
               }
               packet.data += ret;
               packet.size -= ret;
           } while (packet.size > 0);
           av_free_packet(&orig_packet);

           if(stop_recording == true) {
               break;
           }
       }

       //Flush cached frames
       std::cout << "Flushing frames" << std::endl;
       packet.data = NULL;
       packet.size = 0;
       do {
           packet_decoder(&amp;packet, &amp;got_frame, 1);
       } while (got_frame);

       av_log(0, AV_LOG_INFO, "Done processing frames\n");
       return ret;
    }

    Questions:

    1. How do I go about debugging the underlying issue? (See the logging sketch after this list.)
    2. Is it possible that running the decoding code in a thread other than the one in which the decoding context was opened is causing the problem?
    3. Am I doing something wrong in the decoding code?
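
    On question 1, a minimal sketch of one way to get more signal out of FFmpeg itself while reproducing the corruption; enable_ffmpeg_debug_logging() is an illustrative helper, not part of the original code:

    extern "C" {
    #include <libavutil/log.h>
    }

    static void enable_ffmpeg_debug_logging() {
       // Per-frame decoder diagnostics (error concealment, missing references,
       // corrupt NAL units) are printed at the more verbose log levels.
       av_log_set_level(AV_LOG_DEBUG);   // AV_LOG_VERBOSE is a quieter alternative
    }

    // Optionally, make the decoder fail hard instead of concealing errors, which
    // helps pinpoint the first bad packet:
    //     video_decode_context->err_recognition |= AV_EF_EXPLODE;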

    Things I have tried/found:

    1. I found a thread about the same problem here: FFMPEG decoding artifacts between keyframes
      (I cannot post samples of my corrupted frames due to privacy issues, but the image linked in that question depicts the same issue I have.)
      However, the answer to that question was posted by the OP without specific details about how the issue was fixed. The OP only mentions that he wasn't "preserving the packets correctly", but not what was wrong or how to fix it. I do not have enough reputation to post a comment seeking clarification. (A packet-copying sketch follows this list.)

    2. I was initially passing the packet into the decoding function by value, but switched to passing by pointer on the off chance that the packet freeing was being done incorrectly.

    3. I found another question about debugging decoding issues, but couldn't find anything conclusive: How is video decoding corruption debugged?
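
    On the "preserving the packets correctly" remark from point 1: the linked answer does not spell this out, but in the avcodec_decode_video2 era a packet returned by av_read_frame() could reference the demuxer's internal buffer, so a packet that has to outlive the next read (for example, one handed to another thread) needs its data duplicated or referenced first. A minimal sketch; queue_packet_copy() is an illustrative helper, not part of the original code:

    extern "C" {
    #include <libavcodec/avcodec.h>
    }

    // Take an owning, refcounted copy of a demuxed packet so it stays valid
    // after the next av_read_frame() call. Older code used av_dup_packet()
    // for the same purpose.
    static bool queue_packet_copy(const AVPacket *src, AVPacket *dst) {
       av_init_packet(dst);
       return av_packet_ref(dst, src) >= 0;
    }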

    I'd appreciate any insight. Thanks a lot!

    [EDIT] In response to Ronald's answer, I am adding a little more information that wouldn't fit in a comment:

    1. I am only calling decode_video_packet() from the thread processing video frames; the other thread processing audio frames calls a similar decode_audio_packet() function, so only one thread calls the function. I should also mention that I have set thread_count in the decoding context to 1, failing which I would get a segfault in malloc.c while flushing the cached frames.

    2. I can see this being a problem if process_frames and the frame-decoder function were run on separate threads, which is not the case. Is there a specific reason why it would matter whether the freeing is done within the function or after it returns? I believe the freeing function is passed a copy of the original packet because multiple decode calls may be required for an audio packet when the decoder does not consume the entire packet at once.

    3. A general problem is that the corruption does not occur all the time. I could debug better if it were deterministic; otherwise I cannot even say whether a solution works or not.
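
    On point 3, one way to make the problem reproducible, assuming the C920 delivers Annex B H.264; dump_compressed_packet() and dump_file are illustrative, not part of the original code. Writing every compressed packet to disk as it is read lets the exact same bitstream be decoded offline as often as needed and compared against ffmpeg's own decoder:

    #include <cstdio>
    extern "C" {
    #include <libavcodec/avcodec.h>
    }

    // Call right after a successful av_read_frame() in process_frames().
    // Concatenated Annex B packets form a raw .h264 elementary stream that can
    // be replayed offline, e.g. with `ffmpeg -i capture.h264 -f null -`.
    static void dump_compressed_packet(const AVPacket *packet, FILE *dump_file) {
       if (packet->data && packet->size > 0)
           fwrite(packet->data, 1, static_cast<size_t>(packet->size), dump_file);
    }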