
Other articles (112)

  • Publishing on MediaSPIP

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, provided your MediaSPIP installation is version 0.2 or later. If in doubt, contact your MediaSPIP administrator to find out.

  • Encoding and processing into web-friendly formats

    13 April 2011

    MediaSPIP automatically converts uploaded files to internet-compatible formats.
    Video files are encoded to MP4, OGV and WebM (supported by HTML5); the MP4 files are also playable through Flash.
    Audio files are encoded to MP3 and Ogg (supported by HTML5); MP3 is also playable through Flash.
    Where possible, text is analyzed to retrieve the data needed for search-engine indexing, and then exported as a series of image files.
    All uploaded files are stored online in their original format, so you can (...)

  • Adding user-specific information and other author-related behaviour changes

    12 April 2011

    The simplest way to add information to authors is to install the Inscription3 plugin. It also lets you change certain user-related behaviours (see its documentation for more information).
    It is also possible to add fields to authors by installing the plugins "champs extras 2" and "Interface pour champs extras".

On other sites (7914)

  • OpenCV won't capture frames from a RTMP source, while FFmpeg does

    27 August 2015, by user2957378

    My goal is to capture a frame from an RTMP stream every second and process it with OpenCV. I'm using FFmpeg version N-71899-g6ef3426 and OpenCV 2.4.9 with the Java interface (but I'm experimenting with Python first).
    For the moment, I have only the quick-and-dirty solution: capture images with FFmpeg, store them on disk, and then read those images back from my OpenCV program. This is the FFmpeg command I'm using:

    ffmpeg -i "rtmp://antena3fms35livefs.fplive.net:1935/antena3fms35live-live/stream-lasexta_1 live=1" -r 1 capImage%03d.jpg

    This currently works for me, at least with this particular RTMP source. I would then need to read those images back from my OpenCV program in a proper way. I have not actually implemented that part, because I'm trying to find a better solution.

    I think the ideal way would be to capture the RTMP frames directly from OpenCV, but I cannot find a way to do it. This is the Python code I'm using:

    cv2.namedWindow("camCapture", cv2.WINDOW_AUTOSIZE)
    cap = cv2.VideoCapture()
    cap.open("rtmp://antena3fms35livefs.fplive.net:1935/antena3fms35live-live/stream-lasexta_1 live=1")
    if not cap.isOpened():
       print "Not open"
    while True:
       ret, img = cap.read()  # ret is a success flag, not an error
       if ret and img is not None and img.size > 0:
           cv2.imwrite("img1.png", img)
           cv2.imshow("camCapture", img)
       else:
           print "No frame received"
           break
       cv2.waitKey(30)

    Instead of the read() function, I have also tried the grab() and retrieve() functions, with no better result. read() executes every time, but no image or success flag ever comes back.
    Is there any other way to do it? Or is there simply no way to get frames from a stream like this with OpenCV 2.4.9?

    I’ve read that OpenCV uses FFmpeg for this kind of task, but as you can see, in my case FFmpeg is able to get frames from the stream while OpenCV is not.

    If I cannot find a way to get the frames directly from OpenCV, my next idea is to somehow pipe FFmpeg's output into OpenCV, which seems harder to implement.
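    Piping is more approachable than it sounds: ask FFmpeg to decode the stream and write raw BGR24 frames to stdout, then reshape each fixed-size chunk into the numpy array OpenCV functions accept. A minimal sketch of that idea follows; the stream dimensions (WIDTH, HEIGHT) and the helper names are assumptions for illustration, not part of any library — probe the real stream first (e.g. with ffprobe).

    ```python
    import subprocess
    import numpy as np

    WIDTH, HEIGHT = 640, 360  # assumed stream dimensions -- check with ffprobe first

    def ffmpeg_pipe_cmd(url, fps=1):
        """Build an ffmpeg command that decodes `url` and writes raw BGR24
        frames (one per 1/fps seconds) to stdout."""
        return ["ffmpeg", "-i", url, "-r", str(fps),
                "-f", "rawvideo", "-pix_fmt", "bgr24", "pipe:1"]

    def bytes_to_frame(buf, width, height):
        """Reinterpret one raw BGR24 frame as an OpenCV-compatible array."""
        return np.frombuffer(buf, dtype=np.uint8).reshape((height, width, 3))

    # Usage sketch (needs a live stream, so not run here):
    # proc = subprocess.Popen(ffmpeg_pipe_cmd("rtmp://... live=1"),
    #                         stdout=subprocess.PIPE)
    # raw = proc.stdout.read(WIDTH * HEIGHT * 3)   # exactly one frame
    # img = bytes_to_frame(raw, WIDTH, HEIGHT)     # usable with cv2.imshow etc.
    ```

    This bypasses OpenCV's own FFmpeg wrapper entirely, so whatever is failing inside cap_ffmpeg_impl.hpp no longer matters; the cost is that you must know the frame geometry up front.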

    Any ideas? Thank you!

    UPDATE 1:
    I'm on Windows 8.1. I had been running the Python script from Eclipse PyDev; this time I ran it from cmd instead, and I get the following warning:

    warning: Error opening file (../../modules/highgui/src/cap_ffmpeg_impl.hpp:545)

    As far as I can tell, this warning means that either the file path is wrong or the codec is not supported. The question remains the same: is OpenCV unable to get frames from this source?

  • FFMPEG images to video with reverse sequence with other filters

    4 July 2019, by Archimedes Trajano

    Similar to this ffmpeg - convert image sequence to video with reversed order

    But I was wondering if I can create a video loop by specifying the image range and have the reverse order appended in one command.

    Ideally I’d like to combine it with this Make an Alpha Mask video from PNG files

    What I am doing now is generating the reverse using https://stackoverflow.com/a/43301451/242042 and combining the video files together.

    However, I am thinking it would be similar to Concat a video with itself, but in reverse, using ffmpeg

    My current attempt assumed 60 images, which makes -vframes twice that (120):

    ffmpeg -y -framerate 20 -f image2 -i \
     running_gear/%04d.png -start_number 0 -vframes 120 \
     -filter_complex "[0:v]reverse,fifo[r];[0:v][r] concat=n=2:v=1 [v]" \
     -filter_complex alphaextract[a] \
     -map 0:v -b:v 5M -crf 20 running_gear.webm \
     -map [a] -b:v 5M -crf 20 running_gear-alpha.webm

    Without the alpha masking I can get it working using

    ffmpeg -y -framerate 20 -f image2 -i running_gear/%04d.png \
     -start_number 0 -vframes 120 \
     -filter_complex "[0:v]reverse,fifo[r];[0:v][r] concat=n=2:v=1 [v]" \
     -map "[v]" -b:v 5M -crf 20 running_gear.webm

    With just the alpha masking I can do

    ffmpeg -y -framerate 20 -f image2 -i running_gear/%04d.png \
     -start_number 0 -vframes 120 \
     -filter_complex "[0:v]reverse,fifo[r];[0:v][r] concat=n=2:v=1 [vc];[vc]alphaextract[a]" \
     -map [a] -b:v 5M -crf 20 alpha.webm

    So I am trying to do it so that the alpha mask is produced at the same time.

    Ultimately, though, the ideal would be to take the images, reverse them, get an alpha mask, and put the two side by side so the result can be used in Ren'Py.
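    One way to combine the two working commands above into a single invocation (an untested sketch, using the same paths and frame counts as the attempts above) is to duplicate the concatenated stream with the split filter, so one copy is written out directly and the other feeds alphaextract:

    ```shell
    # split duplicates the forward+reverse stream: [v] -> colour output,
    # [vc] -> alphaextract -> alpha output. Two -map options, two files.
    ffmpeg -y -framerate 20 -f image2 -start_number 0 -i running_gear/%04d.png \
     -filter_complex "[0:v]reverse,fifo[r];[0:v][r]concat=n=2:v=1,split[v][vc];[vc]alphaextract[a]" \
     -map "[v]" -vframes 120 -b:v 5M -crf 20 running_gear.webm \
     -map "[a]" -vframes 120 -b:v 5M -crf 20 running_gear-alpha.webm
    ```

    The reason the two-`-filter_complex` attempt fails is that the second filtergraph has no connection to the first; split keeps everything in one graph.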

  • FFmpeg jump to most recent frame

    7 March 2019, by je42

    I am looking for some help with dropping/skipping FFmpeg frames. The project I am working on streams live video; when the app goes into the background and later returns to the active state, the stream spends a long time catching up, fast-forwarding itself to the current frame. This isn't ideal; what I'm aiming for is to have the app jump immediately to the most recent frame.

    What I need to do is drop the frames that are being fast-forwarded in order to catch up to the most recent frame. Is this possible? Here is my current code, which decodes the frames:

    - (NSArray *) decodeFrames: (CGFloat) minDuration
    {
       NSMutableArray *result = [NSMutableArray array];

       @synchronized (lock) {

           if([_reading integerValue] != 1){

               _reading = [NSNumber numberWithInt:1];

               @synchronized (_seekPosition) {
                   if([_seekPosition integerValue] != -1 && _seekPosition){
                       [self seekDecoder:[_seekPosition longLongValue]];
                       _seekPosition = [NSNumber numberWithInt:-1];
                   }
               }

               if (_videoStream == -1 &&
                   _audioStream == -1)
                   return nil;

               AVPacket packet;

               CGFloat decodedDuration = 0;

               CGFloat totalDuration = [TimeHelper calculateTimeDifference];

               do {

                   BOOL finished = NO;
                   int count = 0;


                   while (!finished) {

                       if (av_read_frame(_formatCtx, &packet) < 0) {
                           _isEOF = YES;
                           [self endOfFileReached];
                           break;
                       }

                       [self frameRead];

                       if (packet.stream_index ==_videoStream) {

                           int pktSize = packet.size;

                           while (pktSize > 0) {

                               int gotframe = 0;
                               int len = avcodec_decode_video2(_videoCodecCtx,
                                                               _videoFrame,
                                                               &gotframe,
                                                               &packet);

                               if (len < 0) {
                                   LoggerVideo(0, @"decode video error, skip packet");
                                   break;
                               }

                               if (gotframe) {

                                   if (!_disableDeinterlacing &&
                                       _videoFrame->interlaced_frame) {
                                       avpicture_deinterlace((AVPicture*)_videoFrame,
                                                             (AVPicture*)_videoFrame,
                                                             _videoCodecCtx->pix_fmt,
                                                             _videoCodecCtx->width,
                                                             _videoCodecCtx->height);
                                   }

                                   KxVideoFrame *frame = [self handleVideoFrame];
                                   if (frame) {
                                       [result addObject:frame];
                                       _position = frame.position;
                                       decodedDuration += frame.duration;
                                       if (decodedDuration > minDuration)
                                           finished = YES;
                                   }
                               } else {
                                   count++;
                               }

                               if (0 == len)
                                   break;

                               pktSize -= len;
                           }
                       }
                       av_free_packet(&packet);
                   }
               } while (totalDuration > 0);
               _reading = [NSNumber numberWithInt:0];
               return result;
           }

       }

       return result;
    }
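    Whatever the decoder loop looks like, the catch-up policy itself can be sketched separately from FFmpeg: instead of rendering the backlog, keep only frames whose timestamp is within some lag of the live edge and discard the rest. A minimal sketch follows, in Python for brevity; drain_to_latest and max_lag are hypothetical names for illustration, not FFmpeg API.

    ```python
    def drain_to_latest(frames, now, max_lag=0.5):
        """Given decoded frames as (pts_seconds, payload) pairs, drop every
        frame that lags the live edge `now` by more than `max_lag` seconds.

        Rendering only what this returns skips the fast-forward backlog."""
        return [(pts, data) for pts, data in frames if now - pts <= max_lag]

    # e.g. after backgrounding, the queue holds ~10 s of stale frames:
    backlog = [(0.0, "f0"), (9.6, "f1"), (10.0, "f2")]
    fresh = drain_to_latest(backlog, now=10.0)  # only the recent frames survive
    ```

    In the Objective-C code above, the equivalent would be to read and free packets without decoding them until the packet timestamps catch up to the wall clock, then resume normal decoding.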