
Media (91)

Other articles (39)

  • Contribute to translation

13 April 2011

    You can help us improve the language used in the software interface to make MediaSPIP more accessible and user-friendly. You can also translate the interface into any language, which allows it to spread to new linguistic communities.
    To do this, we use the SPIP translation interface, where all the language modules of MediaSPIP are available. Just subscribe to the mailing list and request further information on translation.
    MediaSPIP is currently available in French and English (...)

  • Selection of projects using MediaSPIP

2 May 2011

    The examples below are representative of how MediaSPIP is used for specific projects.
    MediaSPIP farm @ Infini
    The non-profit organization Infini develops hospitality activities, an internet access point, training, innovative projects in the field of information and communication technologies, and website hosting. It plays a unique and prominent role in the Brest (France) area and, at the national level, among the half-dozen such associations. Its members (...)

  • Libraries and binaries specific to video and audio processing

31 January 2010

    The following software and libraries are used by SPIPmotion in one way or another.
    Required binaries — FFmpeg: the main encoder; it can transcode almost any type of video or audio file into formats readable on the Internet (see this tutorial for its installation); Oggz-tools: inspection tools for ogg files; Mediainfo: retrieves information from most video and audio formats.
    Complementary, optional binaries — flvtool2: (...)
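    The FFmpeg role described above can be illustrated with a short Python sketch (a hypothetical example, not part of MediaSPIP or SPIPmotion; the file names and codec choices are assumptions) that builds a command line to transcode an arbitrary input into a web-readable MP4:

    ```python
    import subprocess

    def build_transcode_cmd(src, dst):
        """Build an ffmpeg command that transcodes src into a web-readable
        H.264/AAC MP4 file; paths and codec choices are illustrative."""
        return [
            "ffmpeg", "-i", src,
            "-c:v", "libx264",          # H.264 video, widely playable in browsers
            "-c:a", "aac",              # AAC audio
            "-movflags", "+faststart",  # move the index up front for streaming
            dst,
        ]

    cmd = build_transcode_cmd("input.avi", "output.mp4")
    # subprocess.run(cmd, check=True)  # uncomment only if ffmpeg is installed
    print(" ".join(cmd))
    ```

    Mediainfo and oggz-tools then play the complementary role mentioned above: inspecting what a file actually contains before or after such a transcode.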

On other sites (5639)

  • avcodec library not extracting all the frames from a video

22 November 2018, by Guru Govindan

    I have written a library that can be used from Python to extract frames from a video into a buffer for processing. I used the examples from ffmpeg to build my library on top of libavcodec.
    This seems to work fine for most videos, but on some videos my library does not extract all the frames (far fewer than the FPS would suggest).

    It seems like the packets might somehow be out of order and the decoder is not able to handle them. I keep getting a lot of the following errors:

    pid=100 pes_code=0x1e0

    nal_unit_type : 9, nal_ref_idc : 0

    nal_unit_type : 1, nal_ref_idc : 2

    deblocking_filter_idc 7 out of range

    decode_slice_header error

    no frame !

    I use the following steps to set up the decoder.

       //open input file, allocate context
    if ((ret = avformat_open_input(&format_ctx, in_filename, 0, 0)) < 0) {
       PyErr_SetString(ExtractorError, "Could not open input file!");
       goto end;
    }

    if ((ret = avformat_find_stream_info(format_ctx, 0)) < 0) {
       PyErr_SetString(ExtractorError, "Failed to retrieve input stream information!");
       goto end;
    }

    av_dump_format(format_ctx, 0, in_filename, 0);

    // Get the video index from the stream
    for (int i = 0; i < format_ctx->nb_streams; i++)
    {
       if( format_ctx->streams[i]->codecpar->codec_type == AVMEDIA_TYPE_VIDEO )
       {
           video_stream_index = i;
           break;
       }
    }

    /* if video stream not available */
    if (video_stream_index == -1)
    {
       PyErr_SetString(ExtractorError, "video stream to retrieve fps is not found!");
       return NULL;
    }

    long duration = format_ctx->duration + (format_ctx->duration <= INT64_MAX - 5000 ? 5000 : 0);

    float duration_in_secs = (float)duration / AV_TIME_BASE;

    stream_mapping_size = format_ctx->nb_streams;
    stream_mapping = av_mallocz_array(stream_mapping_size, sizeof(*stream_mapping));
    if (!stream_mapping) {
       ret = AVERROR(ENOMEM);
       goto end;
    }

    AVCodec *pCodec = NULL;
    AVCodecParameters *in_codecpar = NULL;
    for (int i = 0; i < format_ctx->nb_streams; i++) {
       AVStream *in_stream = format_ctx->streams[i];
       in_codecpar = in_stream->codecpar;

       if (in_codecpar->codec_type != AVMEDIA_TYPE_AUDIO &&
           in_codecpar->codec_type != AVMEDIA_TYPE_VIDEO &&
           in_codecpar->codec_type != AVMEDIA_TYPE_SUBTITLE) {
           stream_mapping[i] = -1;
           continue;
       }

       if (in_codecpar->codec_type == AVMEDIA_TYPE_VIDEO){
           pCodec = avcodec_find_decoder(in_codecpar->codec_id);
           stream_mapping[i] = stream_index++;
           break;
       }
    }

    if(!pCodec){
       PyErr_SetString(ExtractorError, "error, no pCodec!");
       return NULL;
    }

    // convert CodecParam to CodecCtx
    AVCodecContext *pCodecCtx = avcodec_alloc_context3(pCodec);
    if (!pCodecCtx) {
       PyErr_SetString(ExtractorError, "Failed to convert codecParam to CodecCtx!");
       return NULL;
    }

    ret = avcodec_parameters_to_context(pCodecCtx, in_codecpar);
    if (ret < 0)
       goto end;

    //open video decoder
    if (avcodec_open2(pCodecCtx, pCodec, NULL) < 0)
    {
       logging("EXTRACTOR: failed to open codec through avcodec_open2\n");
       return NULL;
    }

    To test it, I used the following ffmpeg command to extract frames from the same video, and it worked fine.

    ffmpeg -i /thuuz/extractor/03000.ts -f image2 -qscale:v 7 ffmpeg-detect-%03d.jpg -hide_banner -v quiet

    Am I not passing the right codec parameters? Please give me some pointers on how to debug this issue.
    Thanks a lot.
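    Two hedged debugging hints for questions like the one above (general libavcodec observations, not from the original post). First, with the send/receive API the decoder buffers frames internally, and a NULL packet must be sent at EOF to drain them; skipping that flush loses trailing frames. Second, it helps to quantify the loss by comparing the decoded frame count against duration × frame rate, e.g. in Python (the numbers are hypothetical):

    ```python
    def missing_frame_ratio(decoded_frames, duration_s, fps):
        # Fraction of the frames we expected (duration × fps) that the
        # decoder never produced; 0.0 means nothing is missing.
        expected = duration_s * fps
        return max(0.0, 1.0 - decoded_frames / expected)

    # Hypothetical numbers: a 10-second clip at 25 fps yielding only 150 frames.
    print(round(missing_frame_ratio(150, 10.0, 25.0), 2))  # 0.4
    ```

    A ratio well above zero on files that decode cleanly with the ffmpeg CLI (as in the test command above) points at the library's own read/decode loop rather than at the file.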

  • How can I fix an audio delay caused by a batch trim of the start and end of a video file

15 May 2019, by trozz

    When using this ffmpeg batch trimmer that I found on here (posted below in "show code"; I tried to format it correctly, but I'm new here), the audio ends up delayed by 100 milliseconds. Is there a way to fix this in the same bat file, or maybe afterwards with another edit?

    Thanks

    @Echo Off
    SetLocal

    Set "ext=mp4"

    Set "opts=-v quiet"

    Set "opts=%opts% -print_format "compact=print_section=0:nokey=1:escape=csv"

    Set "opts=%opts% -show_entries "format=duration""

    If Exist *.%ext% (If Not Exist "Trimmed\" MD Trimmed)

    For %%a In (*.%ext%) Do Call :Sub "%%~a"

    Exit/B

    :Sub

    For /f "Tokens=1* Delims=." %%a In (
       'FFProbe %opts% %1') Do (Set/A "ws=%%a-7.85" & Set "ps=%%b")

    rem If %ws% Lss 20 GoTo :EOF

    Set/A hh=ws/(60*60), lo=ws%%(60*60), mm=lo/60, ss=lo%%60

    If %hh% Lss 10 Set hh=0%hh%

    If %mm% Lss 10 Set mm=0%mm%

    If %ss% Lss 10 Set ss=0%ss%

    FFMpeg -i %1 -ss 00:00:04.5000 -to %hh%:%mm%:%ss%.%ps:~,3% -c:v copy -c:a copy "Trimmed\%~1"
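    A hedged side note on the script above: the `Set/A` lines split a duration in whole seconds into zero-padded HH:MM:SS fields, which can be sketched in Python like this (the sample duration is made up):

    ```python
    def seconds_to_hms(ws):
        # Mirror of the batch arithmetic: hh = ws/3600, then mm and ss
        # from the remainder, each zero-padded to two digits.
        hh, lo = divmod(int(ws), 60 * 60)
        mm, ss = divmod(lo, 60)
        return "%02d:%02d:%02d" % (hh, mm, ss)

    print(seconds_to_hms(3725))  # 01:02:05
    ```

    As for the 100 ms delay itself: `-c:v copy -c:a copy` can only cut on packet/keyframe boundaries, so the audio and video cuts may not land at exactly the same timestamp. A commonly suggested workaround worth testing is to re-encode the audio (e.g. `-c:a aac`) instead of copying it; treat that as a hint to verify, not a guaranteed fix.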
  • Why aren't the videos in my S3 bucket buffering to the HTML video tag?

2 June 2019, by Michael Cain

    I have so far successfully programmed a Node script on a Udoo x86 Advanced Plus that captures an Ethernet-connected IP cam's RTSP stream. I use ffmpeg to transcode the stream into 5-second mp4 files. As soon as the files show up in the folder, they are uploaded/synced to my AWS S3 bucket. Next, a Node server GETs the most recently created mp4 file from the S3 bucket and runs it through Media Source Extensions and finally into an HTML video tag.

    The videos play in the browser, but not in any kind of synchronous manner. No buffering seems to be taking place: one video plays, then another, and so on. The video skips all over the place.

    I would really appreciate any guidance with this bug.

    export function startlivestream() {
     const videoElement = document.getElementById("my-video");
     const myMediaSource = new MediaSource();
     const url = URL.createObjectURL(myMediaSource);
     videoElement.src = url;
     myMediaSource.addEventListener("sourceopen", sourceOpen);
    }
    function sourceOpen() {
     if (window.MediaSource.isTypeSupported(
         'video/mp4; codecs="avc1.42E01E, mp4a.40.2"'
       )
     )
    {
          console.log("YES");
     }

    // 1. add source buffers

     const mediaCodec = 'video/mp4; codecs="avc1.4D601F"';
     var mediasource = this;
     const videoSourceBuffer = mediasource.addSourceBuffer(mediaCodec);

    // 2. download and add our audio/video to the SourceBuffers

    function checkVideo(url) {
     var oReq = new XMLHttpRequest();
       oReq.open("GET", url, true);
       oReq.responseType = "arraybuffer";

        oReq.onload = function(oEvent) {
         var arrayBuffer = oReq.response; // Note: not oReq.responseText
         if (arrayBuffer) {
            videoSourceBuffer.addEventListener("updateend", function(_) {
              mediasource.endOfStream();
              document.getElementById("my-video").play();
      });
              videoSourceBuffer.appendBuffer(arrayBuffer);
         }
       };

       oReq.send(null);
     }

     setInterval(function() {
       checkVideo("http://localhost:8080");
     }, 5000);
    } // closing brace of sourceOpen (was missing)

    My ffmpeg arguments:

    const startRecording = () => {
     const args = [
       "-rtsp_transport",
       "tcp",
       "-i",
       inputURL,
       "-f",
       "segment",
       "-segment_time",
       "5",
       "-segment_format",
       "mp4",
       "-segment_format_options",
       "movflags=frag_keyframe+empty_moov+default_base_moof",
       "-segment_time",
       "5",
       "-segment_list_type",
       "m3u8",
       "-c:v",
       "copy",
       "-strftime",
       "1",
       `${path.join(savePath, "test-%Y-%m-%dT%H-%M-%S.mp4")}`
     ];

    From what I have learned about Media Source Extensions, they allow multiple videos to be taken in and buffered by the client so that, in simple terms, it looks like one longer video.