Other articles (112)

  • HTML5 audio and video support

    13 April 2011

    MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
    The MediaSPIP player was created specifically for MediaSPIP and can easily be adapted to fit a chosen theme.
    For older browsers, the Flowplayer Flash fallback is used.
    MediaSPIP allows for media playback on major mobile platforms with the above (...)


  • From upload to the final video [standalone version]

    31 January 2010

    The path of an audio or video document through SPIPMotion is divided into three distinct stages.
    Upload and retrieval of information about the source video
    First, a SPIP article must be created and the "source" video document attached to it.
    When the document is attached to the article, two actions beyond the normal behaviour are executed: retrieval of the technical information about the file's audio and video streams; and generation of a thumbnail by extraction of a (...)

On other sites (7694)

  • resending a stream causes ffmpeg to crash

    18 August 2020, by Arikael

    We use ffmpeg 3.4.8 to receive UDP streams from a source. While a stream may be sent 24/7 or only from time to time, ffmpeg should always listen 24/7. Those streams are out of my control.

    The streams that are sent intermittently are not merely paused while off the air; they do not exist at all. So although they always arrive on the same address/port, and our application sees them as one stream (one input, in ffmpeg terminology), they are technically multiple separate streams sent to the same address.

    The problem: when one of those streams stops, ffmpeg keeps listening on the input (which is good), but it crashes as soon as the stream is sent again, with the error:

    Application provided invalid, non monotonically increasing dts to muxer in stream 4...
    av_interleaved_write_frame(): Invalid argument.

    Stream 4 contains synchronous KLV metadata. The error is to be expected, since the new stream will probably start with a lower dts than the old one ended with.

    I can't use the reconnect_* options, since the source is UDP.

    Example:

    ffmpeg -i udp://192.168.2.255:1234 -map 0 -c copy -f mpegts udp://192.168.2.255:1235

    Log (with loglevel verbose):

    [mpegts @ 0x1eb8e60] Timestamps are unset in a packet for stream 3. This is deprecated and will stop working in the future. Fix your code to set the timestamps properly
[mpegts @ 0x1eb8e60] Application provided invalid, non monotonically increasing dts to muxer in stream 4: 2097600 >= 1732800
av_interleaved_write_frame(): Invalid argument
No more output streams to write to, finishing.
frame=  317 fps= 20 q=-1.0 Lsize=    7252kB time=00:00:23.30 bitrate=2549.1kbits/s speed=1.44x
video:6166kB audio:400kB subtitle:0kB other streams:58kB global headers:0kB muxing overhead: 9.481929%
Input file #0 (udp://192.168.2.255:1234):
  Input stream #0:0 (video): 318 packets read (6334176 bytes);
  Input stream #0:1 (audio): 534 packets read (410112 bytes);
  Input stream #0:2 (data): 0 packets read (0 bytes);
  Input stream #0:3 (data): 56 packets read (9968 bytes);
  Input stream #0:4 (data): 252 packets read (49542 bytes);
  Total: 1160 packets (6803798 bytes) demuxed
Output file #0 (udp://192.168.2.255:1235?broadcast=1):
  Output stream #0:0 (video): 317 packets muxed (6313576 bytes);
  Output stream #0:1 (audio): 534 packets muxed (410112 bytes);
  Output stream #0:2 (data): 0 packets muxed (0 bytes);
  Output stream #0:3 (data): 56 packets muxed (9968 bytes);
  Output stream #0:4 (data): 252 packets muxed (49542 bytes);
  Total: 1159 packets (6783198 bytes) muxed
Conversion failed!

    As mentioned, this only happens when the input stream is stopped and then started again.

    So the question is: is it somehow possible to give ffmpeg a timeout after which it treats the input as a new stream, and/or to have it ignore those dts errors? How else could I solve this?

    I know I could probably just wait for ffmpeg to fail and then restart it, but maybe there's a cleaner solution.
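
    A minimal sketch of that fallback, assuming a POSIX-like system with ffmpeg on the PATH (the endless-restart policy is illustrative only, not a recommendation):

    #include <cstdio>
    #include <cstdlib>

    /* Supervisor sketch: rerun the same ffmpeg command whenever it exits,
     * so a stream that reappears is simply picked up as a fresh input. */
    int main() {
        const char *cmd =
            "ffmpeg -i udp://192.168.2.255:1234 -map 0 -c copy "
            "-f mpegts udp://192.168.2.255:1235";
        for (;;) {
            int rc = std::system(cmd);  /* blocks until ffmpeg exits */
            std::fprintf(stderr, "ffmpeg exited (status %d), restarting\n", rc);
        }
    }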

  • Stream #0:0: Unknown: none (pcm_s16be)

    1 November 2016, by bot1131357

    I am trying to create an RTP audio stream using FFmpeg with the code below. When running on my Windows 10 machine, I get the following response:

    Output #0, rtp, to 'rtp://127.0.0.1:8554':
       Stream #0:0: Audio: pcm_s16be, 8000 Hz, mono, s16, 128 kb/s
    SDP dump:
    =================
    v=0
    o=- 0 0 IN IP4 127.0.0.1
    s=No Name
    c=IN IP4 127.0.0.1
    t=0 0
    a=tool:libavformat 57.25.101
    m=audio 8554 RTP/AVP 96
    b=AS:128
    a=rtpmap:96 L16/8000/1
    ret = 0

    but when running on Linux (14.04.1-Ubuntu), FFmpeg treats the stream as "Unknown":

    Output #0, rtp, to 'rtp://127.0.0.1:8554':
       Stream #0:0: Unknown: none (pcm_s16be)
    SDP dump:
    =================
    v=0
    o=- 0 0 IN IP4 127.0.0.1
    s=No Name
    c=IN IP4 127.0.0.1
    t=0 0
    a=tool:libavformat 57.57.100
    m=application 8554 RTP/AVP 3

    Does anyone know why this is the case? Any help would be much appreciated.

    #include <stdio.h>
    #include <stdlib.h>
    extern "C"
    {
    #include <libavutil/opt.h>
    #include <libavcodec/avcodec.h>
    #include <libavutil/channel_layout.h>
    #include <libavutil/common.h>
    #include <libavutil/imgutils.h>
    #include <libavutil/mathematics.h>
    #include <libavutil/samplefmt.h>
    #include <libavformat/avformat.h>
    }

    /*
    * Audio encoding example
    */
    static void audio_encode_example(const char *filename)
    {
     int ret;
     AVCodec *outCodec = NULL;
     AVCodecContext *outCodecCtx = NULL;
     AVFormatContext *outFormatCtx = NULL;
     AVStream * outAudioStream = NULL;
     AVFrame *outFrame = NULL;

     ret = avformat_alloc_output_context2(&outFormatCtx, NULL, "rtp", filename);
     if (!outFormatCtx || ret < 0)
     {
       fprintf(stderr, "Could not allocate output context");
     }

     outFormatCtx->flags |= AVFMT_FLAG_NOBUFFER | AVFMT_FLAG_FLUSH_PACKETS;
     outFormatCtx->oformat->audio_codec = AV_CODEC_ID_PCM_S16BE;
     // outFormatCtx->audio_codec_id = AV_CODEC_ID_PCM_S16BE;
     // outFormatCtx->oformat->video_codec = AV_CODEC_ID_NONE;
     // outFormatCtx->oformat->data_codec = AV_CODEC_ID_PCM_S16BE;

     /* find the encoder */
     outCodec = avcodec_find_encoder(outFormatCtx->oformat->audio_codec);
     if (!outCodec) {
       fprintf(stderr, "Codec not found\n");
       exit(1);
     }

     outAudioStream = avformat_new_stream(outFormatCtx, outCodec);
     if (!outAudioStream)
     {
       fprintf(stderr, "Cannot add new audio stream\n");
       exit(1);
     }

     outAudioStream->time_base.den = 8000;
     outAudioStream->time_base.num = 1;
     outCodecCtx = outAudioStream->codec;
     outCodecCtx->sample_fmt = AV_SAMPLE_FMT_S16;

     /* select other audio parameters supported by the encoder */
     outCodecCtx->sample_rate = 8000;
     outCodecCtx->channel_layout = AV_CH_LAYOUT_MONO;
     outCodecCtx->channels = 1;

     /* open it */
     if (avcodec_open2(outCodecCtx, outCodec, NULL) < 0) {
       fprintf(stderr, "Could not open codec\n");
       exit(1);
     }
     outCodecCtx->frame_size = 372;

     av_dump_format(outFormatCtx, 0, filename, 1);

     char buff[10000] = { 0 };
     ret = av_sdp_create(&outFormatCtx, 1, buff, sizeof(buff));
     printf("SDP dump:\n"
             "=================\n"
             "%s", buff);
     /*
     Running the program returns the following:

       Output #0, rtp, to 'rtp://127.0.0.1:8554':
           Stream #0:0: Unknown: none (pcm_s16be)
       SDP dump:
       =================
       v=0
       o=- 0 0 IN IP4 127.0.0.1
       s=No Name
       c=IN IP4 127.0.0.1
       t=0 0
       a=tool:libavformat 57.57.100
       m=application 8554 RTP/AVP 3

     */

     exit(1);
    }


    int main(int argc, char **argv)
    {
     const char *output;

     av_register_all();
     avformat_network_init(); // for network streaming

     audio_encode_example("rtp://127.0.0.1:8554");

     return 0;
    }
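
    A likely explanation, judging by the libavformat versions in the two dumps (57.25 on Windows vs 57.57 on Linux): since FFmpeg 3.1, muxers and av_dump_format read stream parameters from AVStream::codecpar instead of the deprecated AVStream::codec, so a stream whose codecpar is never filled in is reported as "Unknown: none". A hedged sketch of the fix (available from FFmpeg 3.1 on), inserted after avcodec_open2() succeeds:

     /* Copy the opened encoder's parameters onto the stream so newer
        libavformat (which ignores the deprecated stream->codec) sees them. */
     ret = avcodec_parameters_from_context(outAudioStream->codecpar, outCodecCtx);
     if (ret < 0) {
       fprintf(stderr, "Could not copy codec parameters to stream\n");
       exit(1);
     }
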
  • On the fly transcoding and HLS streaming with ffmpeg

    12 January 2023, by syfluqs

    I am building a web application that involves serving various kinds of video content. Web-friendly audio and video codecs are handled without any problems, but I am having trouble designing delivery for video files that HTML5 video players cannot handle, such as mkv containers or H265.

    What I have done till now is use ffmpeg to transcode the video file on the server, produce HLS master and VOD playlists, and use hls.js on the frontend. The problem, however, is that ffmpeg treats the playlist as a live-stream playlist until transcoding of the whole file is complete, and only then changes it to a VOD playlist. So the user can't seek until transcoding is over, and the server has unnecessarily transcoded the whole file if the user only wanted to watch from halfway through. I am using the following ffmpeg command line arguments:

    ffmpeg -i sample.mkv \
        -c:v libx264 \
        -crf 18 \
        -preset ultrafast \
        -maxrate 4000k \
        -bufsize 8000k \
        -vf "scale=1280:-1,format=yuv420p" \
        -c:a copy -start_number 0 \
        -hls_time 10 \
        -hls_list_size 0 \
        -f hls \
        file.m3u8

    Now to improve upon this system, I tried to generate the VOD playlist through my app and not ffmpeg, since the format is self explanatory. The webapp would generate the HLS master and VOD playlists beforehand using the video properties such as duration, resolution and bitrate (which are known to the server) and serve the master playlist to the client. The client then starts requesting the individual video segments at which point the server will individually transcode and generate each segment and serve them. Seeking would be possible as the client already has the complete VOD playlist and it can request the specific segment that the user seeks to. The benefit, as I see it, would be that my server would not have to transcode the whole file, if the user decides to seek forward and play the video halfway through.
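
    As a sketch of that idea (the duration here is a placeholder; in practice it would come from ffprobe or the application's own metadata), the complete VOD playlist can be emitted up front:

    #include <algorithm>
    #include <cstdio>

    /* Sketch: emit a full HLS VOD playlist from the known duration alone,
     * so the client can seek before any segment has been transcoded. */
    int main() {
        const double duration = 5400.0; /* placeholder: total duration in seconds */
        const double segLen = 10.0;     /* must match the -ss/-t window of each segment */
        std::printf("#EXTM3U\n"
                    "#EXT-X-PLAYLIST-TYPE:VOD\n"
                    "#EXT-X-TARGETDURATION:10\n"
                    "#EXT-X-VERSION:4\n"
                    "#EXT-X-MEDIA-SEQUENCE:0\n");
        int n = 0;
        for (double t = 0.0; t < duration; t += segLen, ++n)
            std::printf("#EXTINF:%.1f,\nfileSequence%d.mp4\n",
                        std::min(segLen, duration - t), n);
        std::printf("#EXT-X-ENDLIST\n");
    }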

    Now I tried manually creating segments (10 s each) from my sample.mkv using the following command:

    ffmpeg -ss 90 \
        -t 10 \
        -i sample.mkv \
        -g 52 \
        -strict experimental \
        -movflags +frag_keyframe+separate_moof+omit_tfhd_offset+empty_moov \
        -c:v libx264 \
        -crf 18 \
        -preset ultrafast \
        -maxrate 4000k \
        -bufsize 8000k \
        -vf "scale=1280:-1,format=yuv420p" \
        -c:a copy \
        fileSequence0.mp4

    and so on for the other segments, with the VOD playlist as:

    #EXTM3U
    #EXT-X-PLAYLIST-TYPE:VOD
    #EXT-X-TARGETDURATION:10
    #EXT-X-VERSION:4
    #EXT-X-MEDIA-SEQUENCE:0
    #EXTINF:10.0,
    fileSequence0.mp4
    #EXTINF:10.0,
    fileSequence1.mp4
    ...
    ... and so on
    ...
    #EXT-X-ENDLIST

    which plays the first segment just fine but not the subsequent ones.

    Now, my questions:

    1. Why don't the subsequent segments play? What am I doing wrong?

    2. Is my technique even viable? Would there be any problem with presetting the segment durations, given that segmenting is only possible at keyframes? Can ffmpeg get around this?

    My knowledge regarding video processing and generation borders on modest at best. I would greatly appreciate some pointers.
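
    On the keyframe question specifically, one knob that may be worth testing (an assumption on my part, not verified against this exact pipeline) is forcing keyframes at the planned cut points, so every 10-second boundary starts on a seekable frame:

    ffmpeg -i sample.mkv \
        -force_key_frames "expr:gte(t,n_forced*10)" \
        ... (remaining encoding options as above) ...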
