
Other articles (35)

  • Publishing on MediaSPIP

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, if your MediaSPIP installation is at version 0.2 or higher. If needed, contact your MediaSPIP administrator to find out.

  • HTML5 audio and video support

    10 April 2011

    MediaSPIP uses the HTML5 video and audio tags to play multimedia documents, taking advantage of the latest W3C innovations supported by modern browsers.
    For older browsers, the Flowplayer Flash player is used instead.
    The HTML5 player was created specifically for MediaSPIP: its appearance is fully customizable to match a chosen theme.
    These technologies make it possible to deliver video and sound both on conventional computers (...)

  • From upload to the final video [standalone version]

    31 January 2010

    The path of an audio or video document through SPIPMotion is divided into three distinct stages.
    Upload and retrieval of information about the source video
    First, a SPIP article must be created and the "source" video document attached to it.
    When this document is attached to the article, two actions are performed in addition to the normal behaviour: retrieving the technical information about the file's audio and video streams; generating a thumbnail: extracting a (...)

On other sites (6480)

  • Serving video stream in Node with ffmpeg

    30 March 2023, by Spedwards

    I have a local-only utility where the backend is Adonis.js and the frontend is Vue.js. I'm trying to get a readable stream and have it play in my frontend. I have the very basics down: the video plays, but I can't skip to anywhere else in the video; it just jumps back to where it left off and continues playing.

    


    I've been told that it requires a bi-directional data flow. What I was planning on doing was updating the frontend stream URL to add a query string to the end with the timestamp of where the user (me) skips to. This would go back to the backend and I'd use ffmpeg to create a new stream from the video starting at that timestamp.
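
    A rough sketch of the frontend half of that plan, assuming a custom seek control rather than the native scrubber (the names player, fileId and the /stream/:id route are illustrative, not taken from the project):

    function seekTo(player: HTMLVideoElement, fileId: number, seconds: number) {
        // Reload the stream from the backend, asking it to start `seconds` into the video
        player.src = `/stream/${fileId}?t=${Math.floor(seconds)}`
        player.play()
    }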

    


    The problem is that I've never really messed around with streams before and I'm finding all of this very confusing. I'm able to get a ReadStream of my video and serve it, but I can't write to it. I can create a WriteStream and have it start at my timestamp (I think) but I can't serve it because I can only return a ReadStream, ReadWriteStream, or ReadableStream. The ReadWriteStream sounds perfect but I have no idea how to create one, and I couldn't find anything fruitful after a few hours of searching, nor any way of converting a WriteStream to a ReadStream.
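
    For reference, Node's built-in stream.PassThrough is the closest thing to such a "ReadWriteStream": it is a duplex stream that can be piped into on one side and read, or returned to the framework, on the other. A minimal sketch, where source stands in for whatever produces the data:

    import { PassThrough, Readable } from 'stream'

    // Anything written or piped into a PassThrough can be read back out of it,
    // so it satisfies APIs that expect a readable stream.
    const source: Readable = Readable.from(['hello, ', 'world']) // stand-in for the real video source
    const passthrough = new PassThrough()
    source.pipe(passthrough)
    // passthrough can now be handed to e.g. response.stream(passthrough)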

    


    There's also the problem I alluded to: I have no idea if my ffmpeg method is actually working, since I can't serve it to test.

    


    My working controller method, without any of the timestamp stuff, is as follows:

    


    public async stream({ params, response }: HttpContextContract) {
        const file = await File.find(params.id)
        if (!file) {
            return response.badRequest()
        }
        const stream = await Drive.getStream(file.path) // this creates a ReadableStream
        return response.stream(stream)
    }


    


    For all the ffmpeg stuff, I'm using fluent-ffmpeg as it was the best wrapper I could find.

    


    This was my first attempt.

    


    public async stream({ params, request, response }: HttpContextContract) {
        const file = await File.find(params.id)
        if (!file) {
            return response.badRequest()
        }
        const stream = await Drive.getStream(file.path) // this creates a ReadableStream
        if (request.input('t')) {
            const timestamp = request.input('t')
            ffmpeg()
                .input(stream)
                .seekInput(timestamp)
                .output(stream)
        }
        return response.stream(stream)
    }


    


    How can I achieve what I want? Am I going about this the wrong way, and/or is there a better way?
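
    One possible direction, sketched only as an illustration of the timestamp idea above: fluent-ffmpeg's pipe(), called with no argument, returns a PassThrough stream, which is readable and can be handed to response.stream(). The fragmented-MP4 output options and the fluent-ffmpeg import are assumptions; the rest mirrors the controller above.

    public async stream({ params, request, response }: HttpContextContract) {
        const file = await File.find(params.id)
        if (!file) {
            return response.badRequest()
        }
        const timestamp = Number(request.input('t', 0))
        const source = await Drive.getStream(file.path)
        if (!timestamp) {
            return response.stream(source) // no seek requested: behave as before
        }
        const output = ffmpeg(source) // assumes: import ffmpeg from 'fluent-ffmpeg'
            .seekInput(timestamp) // start at the requested position; seeking a piped input is slow, a real file path seeks much faster
            .outputOptions('-movflags frag_keyframe+empty_moov') // fragmented MP4, so ffmpeg never needs to seek its output
            .format('mp4')
            .on('error', (err) => console.error('ffmpeg failed', err))
            .pipe() // no argument: returns a readable PassThrough stream
        response.header('Content-Type', 'video/mp4')
        return response.stream(output)
    }

    Whether the browser will treat such a restarted stream as a seek is a separate question; this only shows how an ffmpeg output can be turned back into something response.stream() accepts.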

    


  • Treating a video stream as playback with pausing

    21 January 2020, by kealist

    I am working on an application that streams multiple h264 video streams to a video wall. I am using libav/ffmpeg libs to stream multiple video files at once from inside the application. The application will control playback speed, seeking, pausing, resuming, stopping, and the video wall will only be receiving udp streams.

    I want to implement streaming such that if the videos are paused, the same frame is sent continually so that it looks as if it is a video window in a paused state.

    How can I insert copies of the same h264 frame into the stream so that it does not mess up the sending of later frames?

    My code is almost an exact port of transcoding.c from ffmpeg.exe. I'm planning on retaining a copy of the last frame/packet and sending it while paused. Is this likely to function properly, or should I approach this a different way?

    while (true)
    {
       if (paused) {
           // USE LAST PACKET
       }
       else
       {
           if ((ret = ffmpeg.av_read_frame(ifmt_ctx, &packet)) < 0)
               break;
       }
       stream_index = packet.stream_index;

       type = ifmt_ctx->streams[packet.stream_index]->codec->codec_type;
       Console.WriteLine("Demuxer gave frame of stream_index " + stream_index);
       if (filter_ctx[stream_index].filter_graph != null)
       {
           Console.WriteLine("Going to reencode&filter the frame\n");
           frame = ffmpeg.av_frame_alloc();
           if (frame == null)
           {
               ret = ffmpeg.AVERROR(ffmpeg.ENOMEM);
               break;
           }

           packet.dts = ffmpeg.av_rescale_q_rnd(packet.dts,
                   ifmt_ctx->streams[stream_index]->time_base,
                   ifmt_ctx->streams[stream_index]->codec->time_base,
                   AVRounding.AV_ROUND_NEAR_INF | AVRounding.AV_ROUND_PASS_MINMAX);
           packet.pts = ffmpeg.av_rescale_q_rnd(packet.pts,
                   ifmt_ctx->streams[stream_index]->time_base,
                   ifmt_ctx->streams[stream_index]->codec->time_base,
                   AVRounding.AV_ROUND_NEAR_INF | AVRounding.AV_ROUND_PASS_MINMAX);



           if (type == AVMediaType.AVMEDIA_TYPE_VIDEO)
           {

               ret = ffmpeg.avcodec_decode_video2(stream_ctx[packet.stream_index].dec_ctx, frame,
                   &got_frame, &packet);

           }
           else
           {
               ret = ffmpeg.avcodec_decode_audio4(stream_ctx[packet.stream_index].dec_ctx, frame,
                   &got_frame, &packet);
           }
           if (ret < 0)
           {
               ffmpeg.av_frame_free(&frame);
               Console.WriteLine("Decoding failed\n");
               break;
           }
           if (got_frame != 0)
           {
               frame->pts = ffmpeg.av_frame_get_best_effort_timestamp(frame);
               ret = filter_encode_write_frame(frame, (uint)stream_index);
               // SAVE LAST FRAME/PACKET HERE
               ffmpeg.av_frame_free(&frame);
               if (ret < 0)
                   goto end;
           }
           else
           {
               ffmpeg.av_frame_free(&frame);
           }
       }
       else
       {
           /* remux this frame without reencoding */
           packet.dts = ffmpeg.av_rescale_q_rnd(packet.dts,
                   ifmt_ctx->streams[stream_index]->time_base,
                   ofmt_ctx->streams[stream_index]->time_base,
                   AVRounding.AV_ROUND_NEAR_INF | AVRounding.AV_ROUND_PASS_MINMAX);
           packet.pts = ffmpeg.av_rescale_q_rnd(packet.pts,
                   ifmt_ctx->streams[stream_index]->time_base,
                   ofmt_ctx->streams[stream_index]->time_base,
                   AVRounding.AV_ROUND_NEAR_INF | AVRounding.AV_ROUND_PASS_MINMAX);
           ret = ffmpeg.av_interleaved_write_frame(ofmt_ctx, &packet);
           if (ret < 0)
               goto end;
       }
       ffmpeg.av_free_packet(&packet);
    }

  • Using gcovr with FFmpeg

    6 September 2010, by Multimedia Mike — FATE Server

    When I started investigating code coverage tools to analyze FFmpeg, I knew there had to be an easier way to do what I was trying to do (obtain code coverage statistics on a macro level for the entire project). I was hoping there was a way to ask the GNU gcov tool to do this directly. John K informed me in the comments of a tool called gcovr. Like my tool from the previous post, gcovr is a Python script that aggregates data collected by gcov. gcovr proves to be a little more competent than my tool.

    Results
    Here is the spreadsheet of results, reflecting FATE code coverage as of this writing. All FFmpeg source files are on the same sheet this time, including header files, sorted by percent covered (ascending), then total lines (descending).

    Methodology
    I wasn’t easily able to work with the default output from the gcovr tool. So I modified it into a tool called gcovr-csv which creates data that spreadsheets can digest more easily.

    • Build FFmpeg using '-fprofile-arcs -ftest-coverage' in both the extra cflags and extra ldflags configuration options
    • 'make'
    • 'make fate'
    • From the build directory: 'gcovr-csv > output.csv'
    • Massage the data a bit, deleting information about system header files (assuming you don’t care how much of /usr/include/stdlib.h is covered — 66%, BTW)

    Leftovers
    I became aware of some spreadsheet limitations thanks to this tool:

    1. OpenOffice can’t process percent values correctly: it imports the percent data from the CSV file but sorts it alphabetically rather than numerically.
    2. Google Spreadsheet expects CSV to really be comma-delimited; forget about any other delimiters. Also, line length is an issue, which is why I needed my tool to omit the uncovered line number ranges, which it does in its default state.