
Other articles (91)

  • Websites made with MediaSPIP

    2 May 2011

    This page lists some websites based on MediaSPIP.

  • The plugin: Mutualisation management

    2 March 2010

    The Mutualisation management plugin makes it possible to manage the various MediaSPIP channels from a master site. Its purpose is to provide a pure SPIP solution to replace this older solution.
    Basic installation
    Install the SPIP files on the server.
    Then add the "mutualisation" plugin at the root of the site, as described here.
    Customise the central mes_options.php file as desired. As an example, here is the one from the mediaspip.net platform:
    <?php (...)

  • The farm's regular Cron tasks

    1 December 2010

    Managing the farm relies on running several repetitive tasks, known as Cron tasks, at regular intervals.
    The super Cron (gestion_mutu_super_cron)
    This task, scheduled every minute, simply calls the Cron of all the instances of the mutualisation at regular intervals. Combined with a system Cron on the central site of the mutualisation, this generates regular visits to the various sites and keeps the tasks of rarely visited sites from being too (...)

On other sites (13879)

  • How to use a video stream as input in a Python/OpenCV program

    11 July 2016, by vasu gupta

    I am able to stream and receive a webcam feed in two terminals via UDP.

    Command for streaming:

    ffmpeg -i /dev/video0 -b 50k -r 20 -s 858x500 -f mpegts udp://127.0.0.1:2000  

    Command for receiving:

    ffplay  udp://127.0.0.1:2000

    Now I have to use this received video stream as input in Python/OpenCV. How can I do that?
    I will be doing this using RTP and RTSP as well.
    But in the case of RTSP it is essential to start the receiving terminal first; if I do that, the port becomes busy and my program will not be able to take the feed. How can this be resolved?
    I am currently using OpenCV 2.4.13 and Python 2.7 on Ubuntu 14.04.
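
    One approach, assuming OpenCV was built with FFmpeg support, is to skip ffplay entirely and hand the UDP URL straight to VideoCapture: only one process can bind the port, so the program itself has to be the receiver. A minimal sketch with OpenCV's Java bindings (3.x class names; in the asker's Python 2.4 setup, the equivalent one-liner would be cv2.VideoCapture("udp://127.0.0.1:2000")):

    import org.opencv.core.Core;
    import org.opencv.core.Mat;
    import org.opencv.videoio.VideoCapture;

    public class UdpCaptureSketch {
        public static void main(String[] args) {
            // Load the native OpenCV library before calling into the bindings.
            System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
            // Open the same URL that ffplay would; OpenCV delegates to its FFmpeg backend.
            VideoCapture capture = new VideoCapture("udp://127.0.0.1:2000");
            if (!capture.isOpened()) {
                throw new RuntimeException("could not open UDP stream");
            }
            Mat frame = new Mat();
            while (capture.read(frame)) {
                // Hand each decoded frame to the rest of the processing pipeline here.
                System.out.println("got frame " + frame.width() + "x" + frame.height());
            }
            capture.release();
        }
    }

    The same pattern should cover the RTSP case: pass the rtsp:// URL to VideoCapture instead of starting a separate receiving terminal, and the port conflict disappears.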

  • seeking with av_seek_frame returns invalid data

    27 June 2016, by mtadmk

    Based on the dranger tutorial, I'm trying to implement seeking in a video file. The problem is with seeking to the first milliseconds: it always returns 0 when seeking backward, or some other place in the file (depending on the file format).

    I.e. with this video file, seeking forward to 500 pts always returns 6163 pts, and seeking backward to 500 always returns 0. I have no idea why.

    Code to do this:

    import java.util.Arrays;
    import java.util.List;
    import org.bytedeco.javacpp.BytePointer;
    import org.bytedeco.javacpp.DoublePointer;
    import org.bytedeco.javacpp.PointerPointer;
    import org.bytedeco.javacpp.avcodec;
    import org.bytedeco.javacpp.avformat;
    import org.bytedeco.javacpp.avutil;
    import org.bytedeco.javacpp.swscale;

    import static org.bytedeco.javacpp.avcodec.av_free_packet;
    import static org.bytedeco.javacpp.avcodec.avcodec_close;
    import static org.bytedeco.javacpp.avcodec.avcodec_decode_video2;
    import static org.bytedeco.javacpp.avcodec.avcodec_find_decoder;
    import static org.bytedeco.javacpp.avcodec.avcodec_flush_buffers;
    import static org.bytedeco.javacpp.avcodec.avcodec_open2;
    import static org.bytedeco.javacpp.avcodec.avpicture_fill;
    import static org.bytedeco.javacpp.avcodec.avpicture_get_size;
    import static org.bytedeco.javacpp.avformat.AVSEEK_FLAG_BACKWARD;
    import static org.bytedeco.javacpp.avformat.av_dump_format;
    import static org.bytedeco.javacpp.avformat.av_read_frame;
    import static org.bytedeco.javacpp.avformat.av_register_all;
    import static org.bytedeco.javacpp.avformat.av_seek_frame;
    import static org.bytedeco.javacpp.avformat.avformat_close_input;
    import static org.bytedeco.javacpp.avformat.avformat_find_stream_info;
    import static org.bytedeco.javacpp.avformat.avformat_open_input;
    import static org.bytedeco.javacpp.avutil.AVMEDIA_TYPE_VIDEO;
    import static org.bytedeco.javacpp.avutil.AV_PIX_FMT_RGB24;
    import static org.bytedeco.javacpp.avutil.AV_TIME_BASE;
    import static org.bytedeco.javacpp.avutil.av_frame_alloc;
    import static org.bytedeco.javacpp.avutil.av_frame_get_best_effort_timestamp;
    import static org.bytedeco.javacpp.avutil.av_free;
    import static org.bytedeco.javacpp.avutil.av_make_q;
    import static org.bytedeco.javacpp.avutil.av_malloc;
    import static org.bytedeco.javacpp.avutil.av_q2d;
    import static org.bytedeco.javacpp.avutil.av_rescale_q;
    import static org.bytedeco.javacpp.swscale.SWS_BILINEAR;
    import static org.bytedeco.javacpp.swscale.sws_getContext;
    import static org.bytedeco.javacpp.swscale.sws_scale;

    public class SeekTest1 {
       private avformat.AVFormatContext videoFile;
       private final avcodec.AVPacket framePacket = new avcodec.AVPacket();
       private avcodec.AVCodecContext videoCodec;
       private avutil.AVFrame yuvFrame;
       private swscale.SwsContext scalingContext;
       private avutil.AVFrame rgbFrame;
       private int streamId;
       private long lastTime;

       public static void main(String[] args) {
           if (args.length > 0) {
               new SeekTest1().start(args[0]);
           } else {
               new SeekTest1().start("/path/to/file");
           }

       }

       private void start(String path) {
           init(path);
           //test for forward
    //        List<Integer> seekPos = Arrays.asList(0, 100, 0, 200, 0, 300, 0, 400, 0, 500, 0, 600, 0, 700);
           //test for backward
        List<Integer> seekPos = Arrays.asList(10000, 8000, 10000, 7000, 10000, 6000, 10000, 4000, 10000, 2000, 10000, 1000, 10000, 200);
           seekPos.forEach(seekPosition -> {
               readFrame();
               seek(seekPosition);
           });
           readFrame();
           cleanUp();
       }

       long seek(long seekPosition) {
           final avutil.AVRational timeBase = fetchTimeBase();
           long timeInBaseUnit = fetchTimeInBaseUnit(seekPosition, timeBase);

           //changing flag to AVSEEK_FLAG_ANY does not change behavior
           int flag = 0;
        if (timeInBaseUnit < lastTime) {
               flag = AVSEEK_FLAG_BACKWARD;
               System.out.println("seeking backward to " + timeInBaseUnit);
           } else {
               System.out.println("seeking forward to " + timeInBaseUnit);
           }
           avcodec_flush_buffers(videoCodec);
           av_seek_frame(videoFile, streamId, timeInBaseUnit, flag);
           return timeInBaseUnit;
       }

       private long fetchTimeInBaseUnit(long seekPositionMillis, avutil.AVRational timeBase) {
           return av_rescale_q((long) (seekPositionMillis / 1000.0 * AV_TIME_BASE), av_make_q(1, AV_TIME_BASE), timeBase);
       }

       private void readFrame() {
           while (av_read_frame(videoFile, framePacket) >= 0) {
               if (framePacket.stream_index() == streamId) {
                   // Decode video frame
                   final int[] frameFinished = new int[1];
                   avcodec_decode_video2(this.videoCodec, yuvFrame, frameFinished, framePacket);

                   if (frameFinished[0] != 0) {
                       av_free_packet(framePacket);
                       long millis = produceFinishedFrame();
                       System.out.println("produced time " + millis);
                       break;
                   }
               }
               av_free_packet(framePacket);
           }
       }

       private long produceFinishedFrame() {
           sws_scale(scalingContext, yuvFrame.data(), yuvFrame.linesize(), 0,
               videoCodec.height(), rgbFrame.data(), rgbFrame.linesize());
           final long pts = av_frame_get_best_effort_timestamp(yuvFrame);
           final double timeBase = av_q2d(fetchTimeBase());
           final long foundMillis = (long) (pts * 1000 * timeBase);
           lastTime = foundMillis;
           return foundMillis;
       }

       private avutil.AVRational fetchTimeBase() {
           return videoFile.streams(streamId).time_base();
       }

       private void init(String videoPath) {
           final avformat.AVFormatContext videoFile = new avformat.AVFormatContext(null);

           av_register_all();
           if (avformat_open_input(videoFile, videoPath, null, null) != 0) throw new RuntimeException("unable to open");
        if (avformat_find_stream_info(videoFile, (PointerPointer) null) < 0) throw new RuntimeException("Couldn't find stream information");

           av_dump_format(videoFile, 0, videoPath, 0);

           final int streamId = findFirstVideoStream(videoFile);

           final avcodec.AVCodecContext codec = videoFile.streams(streamId).codec();

           final avcodec.AVCodec pCodec = avcodec_find_decoder(codec.codec_id());
           if (pCodec == null) throw new RuntimeException("Unsupported codec");

        if (avcodec_open2(codec, pCodec, (avutil.AVDictionary) null) < 0) throw new RuntimeException("Could not open codec");

           final avutil.AVFrame yuvFrame = av_frame_alloc();

           final avutil.AVFrame rgbFrame = av_frame_alloc();
           if (rgbFrame == null) throw new RuntimeException("Can't allocate avframe");

           final int numBytes = avpicture_get_size(AV_PIX_FMT_RGB24, codec.width(), codec.height());
           final BytePointer frameBuffer = new BytePointer(av_malloc(numBytes));

           final swscale.SwsContext swsContext = sws_getContext(codec.width(), codec.height(), codec.pix_fmt(), codec.width(), codec.height(),
               AV_PIX_FMT_RGB24, SWS_BILINEAR, null, null, (DoublePointer) null);

           avpicture_fill(new avcodec.AVPicture(rgbFrame), frameBuffer, AV_PIX_FMT_RGB24, codec.width(), codec.height());

           this.videoFile = videoFile;
           this.videoCodec = codec;
           this.yuvFrame = yuvFrame;
           this.scalingContext = swsContext;
           this.rgbFrame = rgbFrame;
           this.streamId = streamId;
       }

       private static int findFirstVideoStream(avformat.AVFormatContext videoFile) {
           int videoStream = -1;
        for (int i = 0; i < videoFile.nb_streams(); i++) {
               if (videoFile.streams(i).codec().codec_type() == AVMEDIA_TYPE_VIDEO) {
                   videoStream = i;
                   break;
               }
           }
           if (videoStream == -1) throw new RuntimeException("Didn't find video stream");
           return videoStream;
       }

       private void cleanUp() {
           av_free(this.rgbFrame);
           av_free(yuvFrame);
           avcodec_close(videoCodec);
           avformat_close_input(videoFile);
       }
    }

    The file linked above should be provided as the input argument.
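
    For what it's worth, this matches documented FFmpeg behaviour rather than a bug in the code above: av_seek_frame can only position the demuxer on a keyframe (with AVSEEK_FLAG_BACKWARD, the keyframe at or before the requested timestamp), so a backward seek to 500 lands on the preceding keyframe at pts 0, and a forward seek lands on the next one at pts 6163. The usual workaround is to seek backward to the target and then decode and discard frames until the target pts is reached. A hypothetical seekExact method, sketched against the same class and fields as above:

    long seekExact(long seekPositionMillis) {
        final avutil.AVRational timeBase = fetchTimeBase();
        final long target = fetchTimeInBaseUnit(seekPositionMillis, timeBase);
        avcodec_flush_buffers(videoCodec);
        // Always seek backward so decoding starts from a keyframe before the target.
        av_seek_frame(videoFile, streamId, target, AVSEEK_FLAG_BACKWARD);
        final int[] frameFinished = new int[1];
        while (av_read_frame(videoFile, framePacket) >= 0) {
            if (framePacket.stream_index() == streamId) {
                avcodec_decode_video2(videoCodec, yuvFrame, frameFinished, framePacket);
                // Discard decoded frames until the first one at or after the target.
                if (frameFinished[0] != 0
                        && av_frame_get_best_effort_timestamp(yuvFrame) >= target) {
                    av_free_packet(framePacket);
                    return av_frame_get_best_effort_timestamp(yuvFrame);
                }
            }
            av_free_packet(framePacket);
        }
        return -1; // end of file reached before the target timestamp
    }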

  • Reading a video MPEG-TS stream from stdin using libavformat/libavcodec

    1 February 2012, by D Starkweather

    I'm trying to read video frames from stdin using ffmpeg's libav* lib collection. Simply passing "pipe:0" or "pipe :" as the filename to avformat_open_input_file() in my program does not do the trick. (e.g. cat /dev/video0 | ./myprogram OR ./myprogram < /dev/video0 ; it also fails using file.avi in place of /dev/video0).

    Peeking into ffmpeg.c, I find the following bit of code using tcsetattr() to set the termios:

    struct termios tty;

    if (tcgetattr (0, &tty) == 0) {
        oldtty = tty;
        restore_tty = 1;
        atexit(term_exit);

        tty.c_iflag &= ~(IGNBRK|BRKINT|PARMRK|ISTRIP
                         |INLCR|IGNCR|ICRNL|IXON);
        tty.c_oflag |= OPOST;
        tty.c_lflag &= ~(ECHO|ECHONL|ICANON|IEXTEN);
        tty.c_cflag &= ~(CSIZE|PARENB);
        tty.c_cflag |= CS8;
        tty.c_cc[VMIN] = 1;
        tty.c_cc[VTIME] = 0;

        tcsetattr (0, TCSANOW, &tty);
    }
    signal(SIGQUIT, sigterm_handler); /* Quit (POSIX).  */
    }

    avformat_network_deinit();

    signal(SIGINT , sigterm_handler); /* Interrupt (ANSI).    */
    signal(SIGTERM, sigterm_handler); /* Termination (ANSI).  */
    signal(SIGXCPU, sigterm_handler);

    However, when I try this, it appears to have no effect. What settings do I need in the termios struct to continuously read video frames in?

    Currently, all I am able to do is call av_read_frame a few times (return code 0) before it usually either segfaults or hangs waiting. The buffer usually contains just a blank screen (all black). With any luck, the video buffer will contain the first video frame it comes across, but it does not update to later frames.

    (I'm running this on Debian 6.0) Here's my version of ffmpeg :

    ffmpeg version N-36936-g4cf81d9 Copyright (c) 2000-2012 the FFmpeg developers
      built on Jan 19 2012 20:51:47 with gcc 4.4.5
      configuration: --enable-shared --enable-gray --enable-hardcoded-tables --enable-runtime-cpudetect --enable-libmp3lame --enable-libopenjpeg --enable-librtmp --enable-libtheora --enable-libvorbis --enable-libx264 --enable-libxvid --enable-zlib --enable-gpl
      libavutil      51. 34.101 / 51. 34.101
      libavcodec     53. 57.100 / 53. 57.100
      libavformat    53. 30.100 / 53. 30.100
      libavdevice   53.  4.100 / 53.  4.100
      libavfilter    2. 59.101 /  2. 59.101
      libswscale     2.  1.100 /  2.  1.100
      libswresample  0.  6.100 /  0.  6.100
      libpostproc   52.  0.100 / 52.  0.100
    Hyper fast Audio and Video encoder
    usage: ffmpeg [options] [[infile options] -i infile].

    Thank you very much for any helpful tips. It is much appreciated.

    D. Grant Starkweather
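
    Two things are worth separating here, at least as a working hypothesis. The termios block in ffmpeg.c is a red herring: it only puts the terminal into raw mode so that ffmpeg's interactive key commands ('q' to quit, etc.) work, and has nothing to do with reading media from fd 0. The more likely problems are that a pipe is not seekable (which would explain why file.avi fails on stdin: AVI demuxing wants to seek to its index) and that /dev/video0 emits raw V4L2 frames rather than an MPEG-TS byte stream, so format probing finds nothing to parse. A sketch that forces the mpegts demuxer on "pipe:0", written with the same javacpp bindings as the previous post (the underlying C calls are identical):

    import org.bytedeco.javacpp.PointerPointer;
    import org.bytedeco.javacpp.avformat;
    import org.bytedeco.javacpp.avutil;

    import static org.bytedeco.javacpp.avformat.av_find_input_format;
    import static org.bytedeco.javacpp.avformat.av_register_all;
    import static org.bytedeco.javacpp.avformat.avformat_find_stream_info;
    import static org.bytedeco.javacpp.avformat.avformat_open_input;

    public class StdinMpegtsSketch {
        public static void main(String[] args) {
            av_register_all();
            avformat.AVFormatContext ctx = new avformat.AVFormatContext(null);
            // Force the mpegts demuxer instead of relying on probing: a pipe cannot
            // be rewound, so a failed probe cannot be retried.
            avformat.AVInputFormat mpegts = av_find_input_format("mpegts");
            if (avformat_open_input(ctx, "pipe:0", mpegts, (avutil.AVDictionary) null) != 0) {
                throw new RuntimeException("could not open stdin as mpegts");
            }
            if (avformat_find_stream_info(ctx, (PointerPointer) null) < 0) {
                throw new RuntimeException("could not read stream info");
            }
            System.out.println("found " + ctx.nb_streams() + " stream(s) on stdin");
            // From here, av_read_frame(ctx, packet) works as with a regular file.
        }
    }

    Feeding it an actual transport stream, for example ffmpeg -i /dev/video0 -f mpegts - | java StdinMpegtsSketch, should then behave much like reading from a file.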