
Other articles (46)

  • Multilang: improving the interface for multilingual blocks

    18 February 2011

    Multilang is an additional plugin that is not enabled by default when MediaSPIP is initialised.
    Once it is activated, a preconfiguration is set up automatically by MediaSPIP init so that the new feature is immediately operational. No configuration step is therefore required.

  • Authorizations overridden by plugins

    27 April 2010

    MediaSPIP core
    autoriser_auteur_modifier(), so that visitors can edit their own information on the authors page

  • Customizing categories

    21 June 2013

    Category creation form
    For those familiar with SPIP, a category can be thought of as a rubrique (section).
    For a document of type category, the fields offered by default are: Texte
    This form can be modified under:
    Administration > Configuration des masques de formulaire.
    For a document of type media, the fields not displayed by default are: Descriptif rapide
    It is also in this configuration section that you can specify the (...)

On other sites (7458)

  • Concatenate sped-up videos and normal-speed ones using ffmpeg

    5 July 2020, by FlucTuAte

    I have a Python script that uses ffmpeg to cut different sections of a video. There are sections I want to keep as-is, and I use this ffmpeg command for that:

    os.system((
        "ffmpeg "
        "-loglevel error "
        "-nostats "
        "-hide_banner "
        f"-ss {start / fps} "
        f"-i \"{uncutVideo}\" "
        f"-t {(end - start) / fps} "
        "-c copy "
        "-avoid_negative_ts make_zero "
        f"tmp\\out{processedClips}.mp4"
    ))
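
    As an aside, because os.system hands the whole string to the shell, a path with spaces or quotes in uncutVideo can break the command. A minimal sketch of the same cut using subprocess.run with an argument list instead (the start, end, fps, and file-name values here are made up for illustration):

```python
import subprocess

# Placeholder values standing in for the script's real variables.
start, end, fps = 120, 360, 30.0
uncutVideo = "input.mp4"
processedClips = 0

# With an argument list there is no shell, so no quoting is needed.
cmd = [
    "ffmpeg",
    "-loglevel", "error",
    "-nostats",
    "-hide_banner",
    "-ss", str(start / fps),
    "-i", uncutVideo,
    "-t", str((end - start) / fps),
    "-c", "copy",
    "-avoid_negative_ts", "make_zero",
    f"tmp\\out{processedClips}.mp4",
]
# subprocess.run(cmd, check=True)  # uncomment to actually run ffmpeg
```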

    Start and end are frame numbers, which is why I divide by fps.
    Then I have the parts where I want to speed up the video. I use this command for that:

    os.system((
        "ffmpeg "
        "-loglevel error "
        "-nostats "
        "-hide_banner "
        f"-ss {start / fps} "
        f"-i \"{uncutVideo}\" "
        f"-t {(end - start) / fps / cfg.speed} "
        f"-vf \"setpts=PTS/{cfg.speed}\" "
        "-avoid_negative_ts make_zero "
        "-an "
        f"tmp\\out{processedClips}.mp4"
    ))

    After these are done, I want to concatenate them using the concat demuxer:

    os.system((
        "ffmpeg "
        "-loglevel error "
        "-nostats "
        "-f concat "
        "-safe 0 "
        "-i videoClips.txt "
        "-c copy "
        f"\"{cfg.saveDir}\\{srcFileName}-Cut.mp4\""
    ))
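
    For reference, videoClips.txt has to be in the concat demuxer's list format: one file '...' line per clip, in playback order. A small sketch that generates it (the clip count is assumed; forward slashes keep the quoting simple and ffmpeg accepts them on Windows too):

```python
# Write the list file consumed by "ffmpeg -f concat -safe 0 -i videoClips.txt".
# Format: one line per clip, in playback order:  file 'path'
clip_count = 3  # assumed; in the real script this would be processedClips

with open("videoClips.txt", "w") as f:
    for i in range(clip_count):
        f.write(f"file 'tmp/out{i}.mp4'\n")
```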

    This runs fine, but in the output file there is a long pause where the sped-up part should start. The last frame of the preceding normal-speed part stays on screen while the timeline keeps advancing, and playback only resumes at the next normal-speed part. That is what I see in VLC, at least. I've also tried MPC-BE and MPC-HC; they also misbehave, except there the sped-up part plays properly and the pause comes right after it. These pauses seem to last as long as the sped-up part would.
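
    One way to narrow this down is to check what duration each clip's container actually reports, since the pauses seem to match the sped-up clips' length. A sketch using ffprobe (the clip names are placeholders):

```python
import subprocess

def ffprobe_duration_cmd(path):
    # Command line asking ffprobe for the container's duration in seconds.
    return ["ffprobe", "-v", "error",
            "-show_entries", "format=duration",
            "-of", "default=noprint_wrappers=1:nokey=1",
            path]

def clip_duration(path):
    out = subprocess.run(ffprobe_duration_cmd(path),
                         capture_output=True, text=True, check=True)
    return float(out.stdout.strip())

# Compare what each clip reports against the length the cut should have;
# a mismatch on the sped-up clips would explain pauses of exactly that size.
# for i in range(3):
#     print(i, clip_duration(f"tmp/out{i}.mp4"))
```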

    What could cause this?

    Any help is appreciated.

  • Treating a video stream as playback with pausing

    21 January 2020, by kealist

    I am working on an application that streams multiple h264 video streams to a video wall. I am using libav/ffmpeg libs to stream multiple video files at once from inside the application. The application will control playback speed, seeking, pausing, resuming, stopping, and the video wall will only be receiving udp streams.

    I want to implement streaming such that if the videos are paused, the same frame is sent continually so that it looks as if it is a video window in a paused state.

    How can I insert copies of the same h264 frame into the stream without messing up the sending of later frames?

    My code is almost an exact port of transcoding.c from ffmpeg.exe. I am planning to retain a copy of the last frame/packet and send that while paused. Is this likely to work, or should I approach it a different way?

    while (true)
    {
       if (paused) {
           // USE LAST PACKET
       }
       else
       {
           if ((ret = ffmpeg.av_read_frame(ifmt_ctx, &packet)) < 0)
               break;
       }
       stream_index = packet.stream_index;

       type = ifmt_ctx->streams[packet.stream_index]->codec->codec_type;
       Console.WriteLine("Demuxer gave frame of stream_index " + stream_index);
       if (filter_ctx[stream_index].filter_graph != null)
       {
           Console.WriteLine("Going to reencode&filter the frame\n");
           frame = ffmpeg.av_frame_alloc();
           if (frame == null)
           {
               ret = ffmpeg.AVERROR(ffmpeg.ENOMEM);
               break;
           }

           packet.dts = ffmpeg.av_rescale_q_rnd(packet.dts,
                   ifmt_ctx->streams[stream_index]->time_base,
                   ifmt_ctx->streams[stream_index]->codec->time_base,
                   AVRounding.AV_ROUND_NEAR_INF | AVRounding.AV_ROUND_PASS_MINMAX);
           packet.pts = ffmpeg.av_rescale_q_rnd(packet.pts,
                   ifmt_ctx->streams[stream_index]->time_base,
                   ifmt_ctx->streams[stream_index]->codec->time_base,
                   AVRounding.AV_ROUND_NEAR_INF | AVRounding.AV_ROUND_PASS_MINMAX);
           if (type == AVMediaType.AVMEDIA_TYPE_VIDEO)
           {

               ret = ffmpeg.avcodec_decode_video2(stream_ctx[packet.stream_index].dec_ctx, frame,
                   &got_frame, &packet);

           }
           else
           {
               ret = ffmpeg.avcodec_decode_audio4(stream_ctx[packet.stream_index].dec_ctx, frame,
                   &got_frame, &packet);
           }
           if (ret < 0)
           {
               ffmpeg.av_frame_free(&frame);
               Console.WriteLine("Decoding failed\n");
               break;
           }
           if (got_frame != 0)
           {
               frame->pts = ffmpeg.av_frame_get_best_effort_timestamp(frame);
               ret = filter_encode_write_frame(frame, (uint)stream_index);
               // SAVE LAST FRAME/PACKET HERE
               ffmpeg.av_frame_free(&frame);
               if (ret < 0)
                   goto end;
           }
           else
           {
               ffmpeg.av_frame_free(&frame);
           }
       }
       else
       {
           /* remux this frame without reencoding */
           packet.dts = ffmpeg.av_rescale_q_rnd(packet.dts,
                   ifmt_ctx->streams[stream_index]->time_base,
                   ofmt_ctx->streams[stream_index]->time_base,
                   AVRounding.AV_ROUND_NEAR_INF | AVRounding.AV_ROUND_PASS_MINMAX);
           packet.pts = ffmpeg.av_rescale_q_rnd(packet.pts,
                   ifmt_ctx->streams[stream_index]->time_base,
                   ofmt_ctx->streams[stream_index]->time_base,
                   AVRounding.AV_ROUND_NEAR_INF | AVRounding.AV_ROUND_PASS_MINMAX);
           ret = ffmpeg.av_interleaved_write_frame(ofmt_ctx, &packet);
           if (ret < 0)
               goto end;
       }
       ffmpeg.av_free_packet(&packet);
    }
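
    Setting the FFmpeg API aside, the timestamp bookkeeping the paused branch implies can be sketched in a few lines: re-sending the last packet only stays valid if every copy gets a fresh, strictly increasing timestamp, because muxers reject non-monotonic pts/dts. The 90 kHz clock, the 30 fps rate, and all names below are assumptions for illustration, not taken from the code above:

```python
from dataclasses import dataclass

TIMEBASE = 90_000                 # assumed 90 kHz MPEG-TS clock
FRAME_DURATION = TIMEBASE // 30   # assumed 30 fps source

@dataclass
class Packet:
    pts: int
    payload: bytes

def stream(packets, pause_after=None, pause_frames=0):
    """Yield packets in order; after index pause_after, simulate a pause
    by re-sending the last payload pause_frames times with fresh pts."""
    next_pts = 0
    for i, pkt in enumerate(packets):
        last = Packet(next_pts, pkt.payload)
        next_pts += FRAME_DURATION
        yield last
        if i == pause_after:
            for _ in range(pause_frames):
                # Same payload, but never a reused timestamp.
                last = Packet(next_pts, last.payload)
                next_pts += FRAME_DURATION
                yield last

pkts = [Packet(0, b"A"), Packet(0, b"B"), Packet(0, b"C")]
out = list(stream(pkts, pause_after=1, pause_frames=2))
```

    One caveat: with h264 an arbitrary repeated packet would decode relative to earlier frames, so the unit being repeated may need to be a keyframe rather than just whatever packet came last.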
  • What is faster: raw PCM audio files, or mp3 files decoded with FFmpeg?

    8 January 2020, by Matthew Strom

    I'm pretty deep into the development of my Android app, and as I rework my audio files a second time to try longer audio clips (1000 ms long), I am now getting audio glitches again. Before, I was not getting any glitches with 160 ms files.

    • Background: I'm making a metronome, so imagine roughly 100 lines of code in the callback, constantly figuring out which audio file to play and for how long.

    Without getting into my code, I was just wondering whether file size or file type has any impact on performance. I believe I'm using the sample Player rendering class (source) (for raw file input), which seems to load the file's audio data on each callback. Perhaps loading data from a larger array would slow it down? Although it could also be the new features/logic that I'm adding to the callback.

    Using mp3s and decoding them with FFmpeg is talked about frequently. Has anyone benchmarked mp3 against raw, and is there any performance advantage to using mp3s, or is it mainly to cut down on APK size?

    Sorry if this has been discussed somewhere; I wasn't able to find any articles comparing the two file types in this respect. Looking more closely at the rendering class, my gut tells me that file size "shouldn't" be a factor... Otherwise I'll continue to debug and maybe capture some systraces if I can.