Media (91)

Other articles (104)

  • MediaSPIP 0.1 Beta version

25 April 2011, by

    MediaSPIP 0.1 beta is the first version of MediaSPIP proclaimed as "usable".
    The zip file provided here only contains the sources of MediaSPIP in its standalone version.
To get a working installation, you must manually install all the software dependencies on the server.
    If you want to use this archive for an installation in "farm mode", you will also need to proceed to other manual (...)

  • MediaSPIP version 0.1 Beta

16 April 2011, by

MediaSPIP 0.1 beta is the first version of MediaSPIP declared "usable".
    The zip file provided here contains only the sources of MediaSPIP in its standalone version.
    To get a working installation, you must manually install all the software dependencies on the server.
    If you want to use this archive for an installation in "farm" mode, you will also need to make other modifications (...)

  • Encoding and processing into web-friendly formats

13 April 2011, by

    MediaSPIP automatically converts uploaded files to internet-compatible formats.
Video files are encoded in Ogv and WebM (supported by HTML5) and MP4 (supported by Flash).
    Audio files are encoded in Ogg (supported by HTML5) and MP3 (supported by Flash).
    Where possible, text is analyzed in order to retrieve the data needed for search engine indexing, and then exported as a series of image files.
    All uploaded files are stored online in their original format, so you can (...)

On other sites (37902)

  • Stream RTSP to HTML5 Video - which is the best approach?

13 January 2020, by Daniel

As this task is quite complicated, I’ve come up with two approaches, and I’d like to know which is the better one for my purpose.

Approach 1:

    H264 frames are grabbed out of RTSP stream on the server (i.e. by ffmpeg), then they are put into a websocket and sent to the client. Client uses mp4box.js to fragment the h264 and then HTML5 video can render it with MSE.

Approach 2:

    H264 frames are grabbed out of RTSP stream and also fragmented on the server (i.e. by ffmpeg), then they are transferred directly to the client’s HTML5 video to render it with MSE.
    Here is an example for this approach.

If we consider today’s client devices (modern phones, notebooks), we can argue that approach1 would be the better solution, because it avoids concentrating the load on the server.

However, I have not found any good resource or material on how to implement approach1, so I have not yet been able to try it out.

I would like to know whether approach1 is really better than approach2, because maybe grabbing and fragmenting would not put a much higher load on the server than grabbing alone.

    (Why am I asking? Because for approach2 I have a concrete example, whereas for approach1 I don’t. If approach1 is really better, I’ll go for it and implement the whole thing.)

To put it more exactly: does ffmpeg stress the server more if it grabs and fragments an rtsp-h264 stream to fmp4 than if it only grabs the frames from the rtsp-h264 stream?
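    For reference, approach2’s server-side step can be sketched as a single ffmpeg invocation (a sketch only: the RTSP URL is a placeholder and the transport flag is an assumption, not from the question). With -c:v copy, ffmpeg only remuxes the H264 bitstream into fragmented MP4 without re-encoding, so the extra cost of fragmenting on top of grabbing should be small:

```shell
# Sketch: remux (no transcode) an RTSP H.264 stream into fragmented MP4 for MSE.
# rtsp://example.com/stream is a placeholder; -rtsp_transport tcp forces TCP interleaving.
ffmpeg -rtsp_transport tcp -i rtsp://example.com/stream \
  -c:v copy -an \
  -movflags frag_keyframe+empty_moov+default_base_moof \
  -f mp4 pipe:1
```

    The fMP4 bytes written to stdout can then be pushed over a websocket and appended to a MediaSource SourceBuffer on the client.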

  • How can I get the start time of an rtsp-session via ffmpeg (C++)? start_time_realtime always equals -9223372036854775808

5 August 2019, by chuchuchu

    I’m trying to get a frame by rtsp and calculate its real-world timestamp. I previously used Live555 for this (presentationTime).

    As far as I understand, ffmpeg does not provide such functionality, but provides the ability to read the relative time of each frame and the start time of the stream. In my case, the frame timestamps (pts) works correctly, but the stream start time (start_time_realtime) is always -9223372036854775808.

I’m trying to use the simple example from this Q: https://stackoverflow.com/a/11054652/5355846

    The value does not change, regardless of the position in the code:

    int main(int argc, char** argv) {
        // Open the initial context variables that are needed
        SwsContext *img_convert_ctx;
        AVFormatContext* format_ctx = avformat_alloc_context();
        AVCodecContext* codec_ctx = NULL;
        int video_stream_index;

        // Register everything
        av_register_all();
        avformat_network_init();

        //open RTSP
        if (avformat_open_input(&format_ctx, "path_to_rtsp_stream",
                                NULL, NULL) != 0) {
            return EXIT_FAILURE;
        }
        ...

        while (av_read_frame(format_ctx, &packet) >= 0 && cnt < 1000) { //read ~ 1000 frames

            //// here!
            std::cout << " ***** "
                      << std::to_string(format_ctx->start_time_realtime)
                      << " | " << format_ctx->start_time
                      << " | " << frame->best_effort_timestamp;

            ...
        }
    }

    ***** -9223372036854775808 | 0 | 4120 | 40801 Frame : 103

    What am I doing wrong ?

  • avcodec/ac3_parser : add avpriv_ac3_parse_header2() and use it in libavcodec

1 March 2014, by Michael Niedermayer
    avcodec/ac3_parser : add avpriv_ac3_parse_header2() and use it in libavcodec
    

    The new function has the ability to allocate the structure, allowing it to grow
    without needing major bumps

    Signed-off-by : Michael Niedermayer <michaelni@gmx.at>

    • [DH] libavcodec/ac3_parser.c
    • [DH] libavcodec/ac3_parser.h
    • [DH] libavcodec/ac3dec.c