Other articles (72)

  • Sites built with MediaSPIP

    2 May 2011

    This page presents some of the sites running MediaSPIP.
    You can of course add your own using the form at the bottom of the page.

  • HTML5 audio and video support

    10 April 2011

    MediaSPIP uses the HTML5 video and audio tags to play multimedia documents, taking advantage of the latest W3C innovations supported by modern browsers.
    For older browsers, the Flowplayer Flash player is used.
    The HTML5 player was created specifically for MediaSPIP: its appearance is fully customizable to match a chosen theme.
    These technologies make it possible to deliver video and audio both on conventional computers (...)

  • HTML5 audio and video support

    13 April 2011

    MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
    The MediaSPIP player used has been created specifically for MediaSPIP and can be easily adapted to fit in with a specific theme.
    For older browsers the Flowplayer flash fallback is used.
    MediaSPIP allows for media playback on major mobile platforms with the above (...)

On other sites (6381)

  • OpenCV access live streaming from FFserver delay at start

    2 mai 2022, par Sandro Pellizza

    I've already posted this question on superuser.com (https://superuser.com/questions/1718608/ffmpeg-live-streaming-to-ffserver-delay-at-start), but since part of it is OpenCV-related, they suggested I also post it here.

    I'm trying to achieve simple camera streaming using FFmpeg and FFserver. I have two slightly different systems acting as the source, both running Debian:

    the first one runs ffmpeg 3.4.8, as indicated in figure 1

    [Image: First system FFMPEG version]

    the second one runs ffmpeg 2.8.17, as indicated in figure 2

    [Image: Second system FFMPEG version]

    The ffmpeg command used to send the stream to ffserver is the following, identical on both systems:

    ffmpeg -re -f v4l2 -s 640x360 -thread_queue_size 20 -probesize 32 -i /dev/video0 -threads 4 -fflags nobuffer -tune zerolatency http://myserverIP:myserverPort/liveFeed.ffm

    To view the result, I access the live stream from a third system using OpenCV, pointing at the server URL:

    VideoCapture videoCap = new VideoCapture("http://myserverIP:myserverPort/liveFeed.flv");
    ...
    videoCap.read(imageInput);


    and start grabbing the incoming frames from the stream.

    The weird thing happens here:

    • with the first system, the video stream visualized through OpenCV is close to real time, with 1-2 seconds of delay from the original source.
    • with the second system, the video stream is affected by a variable delay comparable to the time elapsed between the start of the source stream and the start of the acquisition with OpenCV (for example: if I start the source stream at 12:00:00 and wait 30 seconds before accessing the stream with OpenCV, the third system shows a delay of about 30 seconds).
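    The second system's behaviour looks like classic buffering: if the server keeps every frame from the moment the source starts, a reader that connects late starts at the head of the buffer and then stays behind by exactly the startup gap. A toy Python simulation of that effect (illustrative only, not ffserver code; the tick-based producer/consumer and `fps` value are made up for the sketch):

    ```python
    from collections import deque

    def steady_state_lag(reader_delay_s, duration_s, fps=20):
        """Producer pushes one frame per tick; the reader joins late but then
        consumes at the same rate, so the backlog never shrinks."""
        buf = deque()
        lag = 0.0
        for t in range(duration_s * fps):
            buf.append(t)                     # source pushes frame at tick t
            if t >= reader_delay_s * fps:     # reader has joined by now
                frame = buf.popleft()         # always reads the oldest buffered frame
                lag = (t - frame) / fps       # seconds behind live
        return lag

    print(steady_state_lag(30, 60))  # 30.0
    print(steady_state_lag(0, 60))   # 0.0
    ```

    If this model matches what is happening, the fix lies on the buffering side (keeping the feed backlog small or starting the reader at the live edge), not in OpenCV, which simply reads whatever the server sends.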


    The ffserver configuration is the following:

    HTTPBindAddress 0.0.0.0
    MaxHTTPConnections 2000
    MaxClients 1000
    MaxBandwidth 6000
    CustomLog -
    #NoDaemon

    <feed>
        File /tmp/SCT-0001_3.ffm
        FileMaxSize 5M
    </feed>

    <stream>
        Format flv
        Feed liveFeed.ffm

        VideoCodec libx264
        VideoFrameRate 20
        VideoBitRate 200
        VideoSize 640x360
        AVOptionVideo preset superfast
        AVOptionVideo tune zerolatency
        AVOptionVideo flags +global_header

        NoAudio
    </stream>

    ##################################################################
    # Special streams
    ##################################################################
    <stream>
        Format status
        # Only allow local people to get the status
        ACL allow localhost
        ACL allow 192.168.0.0 192.168.255.255
    </stream>

    # Redirect index.html to the appropriate site
    <redirect>
        URL http://www.ffmpeg.org/
    </redirect>


    Any help spotting the problem would be great! Thanks


  • x264: How to access NAL units from the encoder?

    18 April 2014, by user1884325

    When I call

    frame_size = x264_encoder_encode(encoder, &nals, &i_nals, &pic_in, &pic_out);

    and subsequently write each NAL to a file like this:

        if (frame_size >= 0)
        {
           int i;

           for (i = 0; i < i_nals; i++)
           {
              printf("******************* NAL %d (%d bytes) *******************\n", i, nals[i].i_payload);
              fwrite(nals[i].p_payload, 1, nals[i].i_payload, fid);
           }
        }

    then I get this:

    [Image: Beginning of NAL file]

    My questions are:

    1) Is it normal that there are readable parameters at the beginning of the file?

    2) How do I configure the x264 encoder so that it returns frames I can send via UDP without the packets getting fragmented (the size must be below 1390 bytes or thereabouts)?

    3) With x264.exe I pass in these options:

    "--threads 1 --profile baseline --level 3.2 --preset ultrafast --bframes 0 --force-cfr --no-mbtree --sync-lookahead 0 --rc-lookahead 0 --keyint 1000 --intra-refresh"

    How do I map those to the settings in the x264 parameter structure (x264_param_t)?

    4) I have been told that the x264 static library doesn't support bitmap input to the encoder, and that I have to use libswscale to convert the 24-bit RGB input bitmap to YUV2. The encoder supposedly only takes YUV2 as input? Is this true? If so, how do I build libswscale for the x264 static library?
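    On question 1: an Annex-B H.264 file begins with SPS/PPS parameter sets, and x264 normally also emits an SEI NAL unit that embeds its version and option string as plain text, which is what shows up as readable parameters; decoders skip SEI units they do not understand, so this is harmless. A minimal Python sketch (synthetic bytes, not a full parser) of how such a stream splits on start codes:

    ```python
    import re

    # Annex-B start codes: 00 00 00 01 or 00 00 01 (try the longer one first).
    START_CODE = re.compile(b'\x00\x00\x00\x01|\x00\x00\x01')

    def split_nals(stream: bytes) -> list:
        """Split an Annex-B byte stream into the NAL payloads between start codes."""
        marks = list(START_CODE.finditer(stream))
        out = []
        for i, m in enumerate(marks):
            end = marks[i + 1].start() if i + 1 < len(marks) else len(stream)
            out.append(stream[m.end():end])
        return out

    def nal_type(nal: bytes) -> int:
        """Low 5 bits of the first byte give the NAL unit type (6 = SEI, 7 = SPS, 8 = PPS)."""
        return nal[0] & 0x1F

    # Synthetic stream: a fake SPS then a fake PPS (payload bytes are made up).
    stream = b'\x00\x00\x00\x01\x67\x42\x00\x1f' + b'\x00\x00\x01\x68\xce\x38\x80'
    nals = split_nals(stream)
    print([nal_type(n) for n in nals])  # [7, 8]
    ```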
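    On question 3 (and the slice-size part of question 2), the CLI options have direct counterparts in `x264_param_t`. A configuration sketch, with field names taken from x264's public header; treat it as a hedged starting point rather than a verified mapping for any particular x264 build:

    ```c
    #include <x264.h>

    /* Sketch: mirror the CLI options in x264_param_t before x264_encoder_open(). */
    static void configure(x264_param_t *param)
    {
        x264_param_default_preset(param, "ultrafast", NULL); /* --preset ultrafast */
        param->i_threads        = 1;    /* --threads 1 */
        param->i_level_idc      = 32;   /* --level 3.2 */
        param->i_bframe         = 0;    /* --bframes 0 */
        param->b_vfr_input      = 0;    /* --force-cfr */
        param->rc.b_mb_tree     = 0;    /* --no-mbtree */
        param->i_sync_lookahead = 0;    /* --sync-lookahead 0 */
        param->rc.i_lookahead   = 0;    /* --rc-lookahead 0 */
        param->i_keyint_max     = 1000; /* --keyint 1000 */
        param->b_intra_refresh  = 1;    /* --intra-refresh */
        /* Question 2: cap slice size so each NAL fits in one UDP datagram. */
        param->i_slice_max_size = 1390;
        x264_param_apply_profile(param, "baseline");         /* --profile baseline */
    }
    ```

    Note the ordering: the preset is applied first and the profile last, since `x264_param_apply_profile` clamps any settings the chosen profile forbids.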
