
Other articles (36)

  • Publishing on MediaSPIP

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, if your MediaSPIP installation is at version 0.2 or higher. If necessary, contact your MediaSPIP administrator to find out.

  • HTML5 audio and video support

    13 April 2011

    MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
    The player used by MediaSPIP was created specifically for it and can easily be adapted to fit a particular theme.
    For older browsers, the Flowplayer Flash fallback is used.
    MediaSPIP allows for media playback on major mobile platforms with the above (...)

  • From upload to the final video [standalone version]

    31 January 2010

    The path of an audio or video document through SPIPMotion is divided into three distinct stages.
    Upload and retrieval of information about the source video
    First of all, a SPIP article must be created and the "source" video document attached to it.
    When this document is attached to the article, two actions beyond the normal behaviour are carried out: retrieval of the technical information about the file's audio and video streams; generation of a thumbnail: extraction of a (...)

On other sites (5304)

  • How to convert an RTSP stream into an flv/swf stream (with ffmpeg)?

    22 October 2012, by acy

    I want to embed a webcam stream (from a Geovision video server) into a website. Unfortunately, only the RTSP stream gives direct access to the video data.

    I tried a bunch of different variants. With this version I got no errors:

    openRTSP -b 50000 -w 352 -h 288 -f 5 -v -c -u admin password rtsp://xxxxxx.dyndns.org:8554/CH001.sdp | \
    ffmpeg -r 5 -b 256000 -f mp4 -i - http://127.0.0.1:8090/feed1.ffm

    Unfortunately I get no video. Sometimes I see a single frame of the webcam, but no livestream.

    This is my ffserver.conf

    Port 8090
    BindAddress 0.0.0.0
    MaxClients 200
    MaxBandwidth 20000
    CustomLog /var/log/flvserver/access.log

    NoDaemon

    # Server Status
    <stream>
    Format status
    </stream>

    <Feed feed1.ffm>
    File /tmp/feed1.ffm
    FileMaxSize 200K
    ACL allow 127.0.0.1
    </Feed>

    # SWF output - great for testing
    <Stream test.swf>
    # the source feed
    Feed feed1.ffm
    # the output stream format - SWF = flash
    Format swf
    #VideoCodec flv
    # this must match the ffmpeg -r argument
    VideoFrameRate 5
    # another quality tweak
    VideoBitRate 256K
    # quality ranges - 1-31 (1 = best, 31 = worst)
    VideoQMin 1
    VideoQMax 3
    VideoSize 352x288
    # webcams don't have audio
    NoAudio
    </Stream>

    What am I doing wrong? The test.swf seems to load forever...
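
    As a side note, ffserver is normally fed by pointing ffmpeg directly at the feed URL rather than piping through openRTSP, and in the command above the "-f mp4" placed before "-i -" forces ffmpeg to parse the piped data as an MP4 container, which openRTSP does not normally emit. The following is an untested sketch under those assumptions, reusing the camera URL, credentials, frame rate, size and bitrate from the question; "-rtsp_transport tcp" is an assumption about the ffmpeg build:

    ffmpeg -rtsp_transport tcp -i rtsp://admin:password@xxxxxx.dyndns.org:8554/CH001.sdp \
           -an -r 5 -s 352x288 -b 256000 \
           http://127.0.0.1:8090/feed1.ffm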

  • Why does the use of H264 in sender/receiver pipelines introduce such a HUGE delay?

    24 January 2012, by Serguey Zefirov

    When I try to create a pipeline that uses H264 to transmit video, I get an enormous delay, up to 10 seconds, to transmit video from my machine to... my machine! This is unacceptable for my goals and I'd like to consult Stack Overflow about what I (or someone else) am doing wrong.

    I took the pipelines from the gstrtpbin documentation page and slightly modified them to use Speex:

    This is the sender pipeline:
    #!/bin/sh

    gst-launch -v gstrtpbin name=rtpbin \
           v4l2src ! ffmpegcolorspace ! ffenc_h263 ! rtph263ppay ! rtpbin.send_rtp_sink_0 \
                     rtpbin.send_rtp_src_0 ! udpsink host=127.0.0.1 port=5000                            \
                     rtpbin.send_rtcp_src_0 ! udpsink host=127.0.0.1 port=5001 sync=false async=false    \
                     udpsrc port=5005 ! rtpbin.recv_rtcp_sink_0                           \
           pulsesrc ! audioconvert ! audioresample  ! audio/x-raw-int,rate=16000 !    \
                     speexenc bitrate=16000 ! rtpspeexpay ! rtpbin.send_rtp_sink_1                   \
                     rtpbin.send_rtp_src_1 ! udpsink host=127.0.0.1 port=5002                            \
                     rtpbin.send_rtcp_src_1 ! udpsink host=127.0.0.1 port=5003 sync=false async=false    \
                     udpsrc port=5007 ! rtpbin.recv_rtcp_sink_1

    Receiver pipeline:

    #!/bin/sh

    gst-launch -v\
       gstrtpbin name=rtpbin                                          \
       udpsrc caps="application/x-rtp,media=(string)video, clock-rate=(int)90000, encoding-name=(string)H263-1998" \
               port=5000 ! rtpbin.recv_rtp_sink_0                                \
           rtpbin. ! rtph263pdepay ! ffdec_h263 ! xvimagesink                    \
        udpsrc port=5001 ! rtpbin.recv_rtcp_sink_0                               \
        rtpbin.send_rtcp_src_0 ! udpsink port=5005 sync=false async=false        \
       udpsrc caps="application/x-rtp,media=(string)audio, clock-rate=(int)16000, encoding-name=(string)SPEEX, encoding-params=(string)1, payload=(int)110" \
               port=5002 ! rtpbin.recv_rtp_sink_1                                \
           rtpbin. ! rtpspeexdepay ! speexdec ! audioresample ! audioconvert ! alsasink \
        udpsrc port=5003 ! rtpbin.recv_rtcp_sink_1                               \
        rtpbin.send_rtcp_src_1 ! udpsink host=127.0.0.1 port=5007 sync=false async=false

    Those pipelines, a combination of H263 and Speex, work well enough. I snap my fingers near the camera and microphone and then I see the movement and hear the sound at the same time.

    Then I changed the pipelines to use H264 along the video path.

    The sender becomes:
    #!/bin/sh

    gst-launch -v gstrtpbin name=rtpbin \
           v4l2src ! ffmpegcolorspace ! x264enc bitrate=300 ! rtph264pay ! rtpbin.send_rtp_sink_0 \
                     rtpbin.send_rtp_src_0 ! udpsink host=127.0.0.1 port=5000                            \
                     rtpbin.send_rtcp_src_0 ! udpsink host=127.0.0.1 port=5001 sync=false async=false    \
                     udpsrc port=5005 ! rtpbin.recv_rtcp_sink_0                           \
           pulsesrc ! audioconvert ! audioresample  ! audio/x-raw-int,rate=16000 !    \
                     speexenc bitrate=16000 ! rtpspeexpay ! rtpbin.send_rtp_sink_1                   \
                     rtpbin.send_rtp_src_1 ! udpsink host=127.0.0.1 port=5002                            \
                     rtpbin.send_rtcp_src_1 ! udpsink host=127.0.0.1 port=5003 sync=false async=false    \
                     udpsrc port=5007 ! rtpbin.recv_rtcp_sink_1

    And the receiver becomes:
    #!/bin/sh

    gst-launch -v\
       gstrtpbin name=rtpbin                                          \
       udpsrc caps="application/x-rtp,media=(string)video, clock-rate=(int)90000, encoding-name=(string)H264" \
               port=5000 ! rtpbin.recv_rtp_sink_0                                \
           rtpbin. ! rtph264depay ! ffdec_h264 ! xvimagesink                    \
        udpsrc port=5001 ! rtpbin.recv_rtcp_sink_0                               \
        rtpbin.send_rtcp_src_0 ! udpsink port=5005 sync=false async=false        \
       udpsrc caps="application/x-rtp,media=(string)audio, clock-rate=(int)16000, encoding-name=(string)SPEEX, encoding-params=(string)1, payload=(int)110" \
               port=5002 ! rtpbin.recv_rtp_sink_1                                \
           rtpbin. ! rtpspeexdepay ! speexdec ! audioresample ! audioconvert ! alsasink \
        udpsrc port=5003 ! rtpbin.recv_rtcp_sink_1                               \
        rtpbin.send_rtcp_src_1 ! udpsink host=127.0.0.1 port=5007 sync=false async=false

    This is what happens under Ubuntu 10.04. I didn't notice such huge delays on Ubuntu 9.04; the delays there were in the range of 2-3 seconds, AFAIR.
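
    One remark on the symptom above: with default settings x264enc buffers a sizeable window of frames for lookahead and B-frames, which by itself can add several seconds of latency before the first packets even leave the sender. A low-latency encoder configuration is commonly suggested for this; the untested sketch below shows only the video branch of the sender, and the tune=zerolatency and speed-preset properties are assumptions about the installed x264enc version (older GStreamer 0.10 plugin releases, such as the one shipped with Ubuntu 10.04, may not have them):

    gst-launch -v gstrtpbin name=rtpbin \
           v4l2src ! ffmpegcolorspace ! \
           x264enc bitrate=300 tune=zerolatency speed-preset=ultrafast key-int-max=30 ! \
           rtph264pay ! rtpbin.send_rtp_sink_0 \
                     rtpbin.send_rtp_src_0 ! udpsink host=127.0.0.1 port=5000

    On the receiving side, setting sync=false on xvimagesink is sometimes used as well, trading A/V synchronisation for lower display latency.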

  • Changing int main() to JNI interface prototype

    13 March 2012, by iSun

    I changed ffmpeg.c according to the following link:

    http://www.roman10.net/how-to-port-ffmpeg-the-program-to-androidideas-and-thoughts/

    He said to change main() to a JNI interface prototype. Well, I'm not familiar with the JNI interface prototype, but I read an article about JNI and changed it accordingly.

    Can anyone look at my code and tell me whether this is correct or not?

    JNIEXPORT jint JNICALL Java_com_ffmpegtest_MainActivity_main(JNIEnv *pEnv, int argc, char **argv) {
        int64_t ti;

        av_log_set_flags(AV_LOG_SKIP_REPEATED);

        if (argc > 1 && !strcmp(argv[1], "-d")) {
            run_as_daemon = 1;
            verbose = -1;
            av_log_set_callback(log_callback_null);
            argc--;
            argv++;
        }

        avcodec_register_all();
    #if CONFIG_AVDEVICE
        avdevice_register_all();
    #endif
    #if CONFIG_AVFILTER
        avfilter_register_all();
    #endif
        av_register_all();

    #if HAVE_ISATTY
        if (isatty(STDIN_FILENO))
            avio_set_interrupt_cb(decode_interrupt_cb);
    #endif

        init_opts();

        if (verbose >= 0)
            show_banner();

        /* parse options */
        parse_options(argc, argv, options, opt_output_file);

        if (nb_output_files <= 0 && nb_input_files == 0) {
            show_usage();
            fprintf(stderr, "Use -h to get full help or, even better, run 'man ffmpeg'\n");
            ffmpeg_exit(1);
        }

        /* file converter / grab */
        if (nb_output_files <= 0) {
            fprintf(stderr, "At least one output file must be specified\n");
            ffmpeg_exit(1);
        }

        if (nb_input_files == 0) {
            fprintf(stderr, "At least one input file must be specified\n");
            ffmpeg_exit(1);
        }

        ti = getutime();
        if (transcode(output_files, nb_output_files, input_files, nb_input_files,
                      stream_maps, nb_stream_maps) < 0)
            ffmpeg_exit(1);
        ti = getutime() - ti;
        if (do_benchmark) {
            int maxrss = getmaxrss() / 1024;
            printf("bench: utime=%0.3fs maxrss=%ikB\n", ti / 1000000.0, maxrss);
        }

        return ffmpeg_exit(0);
    }
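
    For what it is worth, a native method called from Java cannot receive int argc, char **argv directly: after the JNIEnv pointer, JNI passes a jobject (or jclass for a static method) followed by the declared Java parameters, so the arguments arrive as a jobjectArray of strings that has to be converted by hand. The sketch below illustrates that wrapper; it assumes a hypothetical Java declaration public static native int main(String[] args); in com.ffmpegtest.MainActivity and a hypothetical ffmpeg_main(int, char **) entry point (the renamed main() shown above), so the names and the minimal error handling are illustrative only:

    #include <jni.h>
    #include <stdlib.h>
    #include <string.h>

    /* Hypothetical entry point: ffmpeg's main() renamed so it can be called from JNI. */
    int ffmpeg_main(int argc, char **argv);

    /* Assumed Java-side declaration (illustrative):
     *   public static native int main(String[] args);   // in com.ffmpegtest.MainActivity
     */
    JNIEXPORT jint JNICALL
    Java_com_ffmpegtest_MainActivity_main(JNIEnv *env, jclass clazz, jobjectArray args)
    {
        int argc = (*env)->GetArrayLength(env, args);
        char **argv = calloc(argc + 1, sizeof(char *));
        int i, ret;

        /* Copy each Java string into a C string for ffmpeg's option parser. */
        for (i = 0; i < argc; i++) {
            jstring s = (jstring) (*env)->GetObjectArrayElement(env, args, i);
            const char *utf = (*env)->GetStringUTFChars(env, s, NULL);
            argv[i] = strdup(utf);
            (*env)->ReleaseStringUTFChars(env, s, utf);
            (*env)->DeleteLocalRef(env, s);
        }

        ret = ffmpeg_main(argc, argv);

        for (i = 0; i < argc; i++)
            free(argv[i]);
        free(argv);

        return ret;
    }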