Other articles (61)

  • Permissions overridden by plugins

    27 April 2010, by

    Mediaspip core
    autoriser_auteur_modifier() so that visitors are able to edit their information on the authors page

  • Customizing categories

    21 June 2013, by

    Category creation form
    For those who know SPIP well, a category can be thought of as a rubrique (section).
    For a document of type category, the fields offered by default are: Text
    This form can be modified under:
    Administration > Configuration des masques de formulaire.
    For a document of type media, the fields not displayed by default are: Descriptif rapide (short description)
    It is also in this configuration section that you can specify the (...)

  • The farm's regular cron tasks

    1 December 2010, by

    Managing the farm relies on running several repetitive tasks, known as cron tasks, at regular intervals.
    The super cron (gestion_mutu_super_cron)
    This task, scheduled every minute, simply calls the cron of all the instances in the shared installation on a regular basis. Combined with a system cron on the central site of the installation, this generates regular visits to the various sites and keeps the tasks of rarely visited sites from being too (...)

On other sites (9322)

  • Issue after video rotation, how to fix?

    2 April 2015, by Vahagn

    I have the following code to rotate a video:

    OpenCVFrameConverter.ToIplImage converter2 = new OpenCVFrameConverter.ToIplImage();

    for (int i = firstIndex; i <= lastIndex; i++) {
        long t = timestamps[i % timestamps.length] - startTime;
        if (t >= 0) {
            if (t > recorder.getTimestamp()) {
                recorder.setTimestamp(t);
            }
            Frame g = converter2.convert(rotate(converter2.convertToIplImage(images[i % images.length]), 90));
            recorder.record(g);
        }
    }

    images[i] is a Frame in JavaCV.
    Afterwards, the video has green lines.
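
    The rotate() helper itself is not shown in the question. A minimal sketch of what such a helper typically looks like with the old OpenCV C API exposed by JavaCPP (this is an assumption, not the asker's actual code; it expects the static imports from org.bytedeco.javacpp.opencv_core shown further below):

    // Hypothetical rotate(): 90-degree clockwise rotation via transpose + flip.
    // Note that the output has swapped width and height; if the recorder was
    // opened with the original dimensions, the resulting size/stride mismatch
    // can surface as green artifacts in the encoded video.
    static IplImage rotate(IplImage src, int angle) { // angle assumed to be 90
        IplImage dst = IplImage.create(src.height(), src.width(), src.depth(), src.nChannels());
        cvTranspose(src, dst); // swap rows and columns
        cvFlip(dst, dst, 1);   // mirror around the vertical axis -> clockwise
        return dst;
    }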

    UPDATE
    Conversion function:

    /*
    * Copyright (C) 2015 Samuel Audet
    *
    * This file is part of JavaCV.
    *
    * JavaCV is free software: you can redistribute it and/or modify
    * it under the terms of the GNU General Public License as published by
    * the Free Software Foundation, either version 2 of the License, or
    * (at your option) any later version (subject to the "Classpath" exception
    * as provided in the LICENSE.txt file that accompanied this code).
    *
    * JavaCV is distributed in the hope that it will be useful,
    * but WITHOUT ANY WARRANTY; without even the implied warranty of
    * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
    * GNU General Public License for more details.
    *
    * You should have received a copy of the GNU General Public License
    * along with JavaCV.  If not, see <http://www.gnu.org/licenses/>.
    */

    package com.example.vvardanyan.ffmpeg;

    import org.bytedeco.javacpp.BytePointer;
    import org.bytedeco.javacpp.Pointer;

    import java.nio.Buffer;

    import static org.bytedeco.javacpp.opencv_core.CV_16S;
    import static org.bytedeco.javacpp.opencv_core.CV_16U;
    import static org.bytedeco.javacpp.opencv_core.CV_32F;
    import static org.bytedeco.javacpp.opencv_core.CV_32S;
    import static org.bytedeco.javacpp.opencv_core.CV_64F;
    import static org.bytedeco.javacpp.opencv_core.CV_8S;
    import static org.bytedeco.javacpp.opencv_core.CV_8U;
    import static org.bytedeco.javacpp.opencv_core.CV_MAKETYPE;
    import static org.bytedeco.javacpp.opencv_core.IPL_DEPTH_16S;
    import static org.bytedeco.javacpp.opencv_core.IPL_DEPTH_16U;
    import static org.bytedeco.javacpp.opencv_core.IPL_DEPTH_32F;
    import static org.bytedeco.javacpp.opencv_core.IPL_DEPTH_32S;
    import static org.bytedeco.javacpp.opencv_core.IPL_DEPTH_64F;
    import static org.bytedeco.javacpp.opencv_core.IPL_DEPTH_8S;
    import static org.bytedeco.javacpp.opencv_core.IPL_DEPTH_8U;
    import static org.bytedeco.javacpp.opencv_core.IplImage;
    import static org.bytedeco.javacpp.opencv_core.Mat;

    /**
    * A utility class to map data between {@link Frame} and {@link IplImage} or {@link Mat}.
    * Since this is an abstract class, one must choose between two concrete classes:
    * {@link ToIplImage} or {@link ToMat}.
    *
    * @author Samuel Audet
    */
    public abstract class OpenCVFrameConverter<F> extends FrameConverter<F> {
       IplImage img;
       Mat mat;

       public static class ToIplImage extends OpenCVFrameConverter<IplImage> {
           @Override public IplImage convert(Frame frame) { return convertToIplImage(frame); }
       }

       public static class ToMat extends OpenCVFrameConverter<Mat> {
           @Override public Mat convert(Frame frame) { return convertToMat(frame); }
       }

       public static int getFrameDepth(int depth) {
           switch (depth) {
               case IPL_DEPTH_8U:  case CV_8U:  return Frame.DEPTH_UBYTE;
               case IPL_DEPTH_8S:  case CV_8S:  return Frame.DEPTH_BYTE;
               case IPL_DEPTH_16U: case CV_16U: return Frame.DEPTH_USHORT;
               case IPL_DEPTH_16S: case CV_16S: return Frame.DEPTH_SHORT;
               case IPL_DEPTH_32F: case CV_32F: return Frame.DEPTH_FLOAT;
               case IPL_DEPTH_32S: case CV_32S: return Frame.DEPTH_INT;
               case IPL_DEPTH_64F: case CV_64F: return Frame.DEPTH_DOUBLE;
               default: return -1;
           }
       }

       public static int getIplImageDepth(Frame frame) {
           switch (frame.imageDepth) {
               case Frame.DEPTH_UBYTE:  return IPL_DEPTH_8U;
               case Frame.DEPTH_BYTE:   return IPL_DEPTH_8S;
               case Frame.DEPTH_USHORT: return IPL_DEPTH_16U;
               case Frame.DEPTH_SHORT:  return IPL_DEPTH_16S;
               case Frame.DEPTH_FLOAT:  return IPL_DEPTH_32F;
               case Frame.DEPTH_INT:    return IPL_DEPTH_32S;
               case Frame.DEPTH_DOUBLE: return IPL_DEPTH_64F;
               default:  return -1;
           }
       }
       static boolean isEqual(Frame frame, IplImage img) {
           return img != null && frame != null && frame.image != null && frame.image.length > 0
                   && frame.imageWidth == img.width() && frame.imageHeight == img.height()
                   && frame.imageChannels == img.nChannels() && getIplImageDepth(frame) == img.depth()
                   && new Pointer(frame.image[0]).address() == img.imageData().address()
                   && frame.imageStride * Math.abs(frame.imageDepth) / 8 == img.widthStep();
       }
       public IplImage convertToIplImage(Frame frame) {
           if (frame == null) {
               return null;
           } else if (frame.opaque instanceof IplImage) {
               return (IplImage)frame.opaque;
           } else if (!isEqual(frame, img)) {
               int depth = getIplImageDepth(frame);
           img = depth < 0 ? null : IplImage.createHeader(frame.imageWidth, frame.imageHeight, depth, frame.imageChannels)
                       .imageData(new BytePointer(new Pointer(frame.image[0].position(0)))).widthStep(frame.imageStride * Math.abs(frame.imageDepth) / 8);
           }
           return img;
       }
       public Frame convert(IplImage img) {
           if (img == null) {
               return null;
           } else if (!isEqual(frame, img)) {
               frame = new Frame();
               frame.imageWidth = img.width();
               frame.imageHeight = img.height();
               frame.imageDepth = getFrameDepth(img.depth());
               frame.imageChannels = img.nChannels();
               frame.imageStride = img.widthStep() * 8 / Math.abs(frame.imageDepth);
               frame.image = new Buffer[] { img.createBuffer() };
               frame.opaque = img;
           }
           return frame;
       }

       public static int getMatDepth(Frame frame) {
           switch (frame.imageDepth) {
               case Frame.DEPTH_UBYTE:  return CV_8U;
               case Frame.DEPTH_BYTE:   return CV_8S;
               case Frame.DEPTH_USHORT: return CV_16U;
               case Frame.DEPTH_SHORT:  return CV_16S;
               case Frame.DEPTH_FLOAT:  return CV_32F;
               case Frame.DEPTH_INT:    return CV_32S;
               case Frame.DEPTH_DOUBLE: return CV_64F;
               default:  return -1;
           }
       }
       static boolean isEqual(Frame frame, Mat mat) {
           return mat != null && frame != null && frame.image != null && frame.image.length > 0
                   && frame.imageWidth == mat.cols() && frame.imageHeight == mat.rows()
                   && frame.imageChannels == mat.channels() && getMatDepth(frame) == mat.depth()
                   && new Pointer(frame.image[0]).address() == mat.data().address()
                   && frame.imageStride * Math.abs(frame.imageDepth) / 8 == (int)mat.step();
       }
       public Mat convertToMat(Frame frame) {
           if (frame == null) {
               return null;
           } else if (frame.opaque instanceof Mat) {
               return (Mat)frame.opaque;
           } else if (!isEqual(frame, mat)) {
               int depth = getMatDepth(frame);
           mat = depth < 0 ? null : new Mat(frame.imageHeight, frame.imageWidth, CV_MAKETYPE(depth, frame.imageChannels),
                       new Pointer(frame.image[0].position(0)), frame.imageStride * Math.abs(frame.imageDepth) / 8);
           }
           return mat;
       }
       public Frame convert(Mat mat) {
           if (mat == null) {
               return null;
           } else if (!isEqual(frame, mat)) {
               frame = new Frame();
               frame.imageWidth = mat.cols();
               frame.imageHeight = mat.rows();
               frame.imageDepth = getFrameDepth(mat.depth());
               frame.imageChannels = mat.channels();
               frame.imageStride = (int)mat.step() * 8 / Math.abs(frame.imageDepth);
               frame.image = new Buffer[] { mat.createBuffer() };
               frame.opaque = mat;
           }
           return frame;
       }
    }
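
    For context, a minimal usage sketch of the converter class above (assuming a JavaCV Frame named frame is already available):

    OpenCVFrameConverter.ToIplImage converter = new OpenCVFrameConverter.ToIplImage();
    IplImage img = converter.convert(frame); // Frame -> IplImage, wraps the same pixel buffer
    Frame back = converter.convert(img);     // IplImage -> Frame, again without copying
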
  • FFMPEG UDP output not working

    12 July 2017, by nameless

    I’m currently trying to send a video through ffmpeg to a UDP stream. To do so, I pipe raw video directly into ffmpeg via ffmpeg.stdin.write(data). These are my options/parameters:

    var ffmpegArgs = [
       '-c:v', 'rawvideo',// input container
       '-f', 'rawvideo',
       '-pix_fmt', 'rgba', // input pixel format
       '-s', '600x600', //input size
       '-video_size', '600x600',
       '-i', 'pipe:0', // input source
       '-format', 'mpegts', // output container format
       '-c:v', 'libx264', // output video codec
       '-b:v', '2m', // output bitrate
       'udp://239.255.123.46:1234' // output destination
    ];

    What I’m wondering about is: when starting, I immediately get an error saying Unable to find a suitable output format for 'udp://239.255.123.46:1234', but when I put a filename there instead (to save the video), such as video.mp4, the video is recorded and rendered just fine, and I can open it after stopping.

    So why does the UDP streaming not work? Any ideas? When running ffmpeg directly from the command line with a video file and exactly that UDP stream address, everything works completely fine.

    What’s the problem?
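
    For comparison, a command line of the same shape is known to work once the output container is selected explicitly with -f (ffmpeg cannot infer a muxer from a udp:// URL, and -format is not the option that does this). A sketch of the equivalent invocation, not a verified fix for the Node.js setup above (note the uppercase M in the bitrate; SI prefixes are case-sensitive and a lowercase m means milli):

    ffmpeg -f rawvideo -pix_fmt rgba -s 600x600 -i pipe:0 -f mpegts -c:v libx264 -b:v 2M udp://239.255.123.46:1234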

  • C++: Extracting an H.264 Subsequence from a Byte Stream

    10 January 2017, by Simon

    I have a raw H.264 byte stream coming from an RTSP network camera. In order to get the byte stream, I catch the piped output from ffmpeg using popen():

    ffmpeg -i rtsp://address -c:v copy -an -c:v copy -an -f h264 pipe:1

    At some point in time, I would like to start recording from the stream for a while (and save everything to an mp4 file). I want to achieve this without decoding the stream to an intermediate format (e.g., yuv420p) and encoding it back. As a first test, I just started writing the output buffer to disk after a couple of seconds. Then, I can encode the video again using

    ffmpeg -i cam.h264 -c:v h264 -an cam_out.mp4

    Here, ffmpeg complains that the first part of the data is corrupted (it still seems to be able to recover from this, as it just throws away the corrupted parts). This of course makes sense, as I simply start recording without looking for the start of frames, etc. Ideally, I would like to start and stop recording at the correct positions in the stream. I took a quick look at the H.264 format and NAL units. Is there some simple way of detecting "good" positions in the stream to start recording?
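
    One common approach: in an Annex-B stream, every NAL unit is preceded by a start code (00 00 01 or 00 00 00 01), and the low 5 bits of the byte that follows give the NAL unit type. A clean cut-in point is an SPS (type 7), since cameras typically emit SPS/PPS immediately before each IDR frame (type 5), which is exactly what a decoder needs to start. A minimal sketch of the idea, written in Java to match the code earlier in this section (the same logic ports directly to C++):

    // Scan a raw Annex-B H.264 buffer for a safe position to begin recording.
    static int findRecordingStart(byte[] buf, int len) {
        for (int i = 0; i + 3 < len; i++) {
            // 3-byte start code 00 00 01 (also matches the tail of 00 00 00 01)
            if (buf[i] == 0 && buf[i + 1] == 0 && buf[i + 2] == 1) {
                int nalType = buf[i + 3] & 0x1F; // low 5 bits = NAL unit type
                if (nalType == 7) {              // SPS: parameter sets precede an IDR
                    // include the possible leading zero of a 4-byte start code
                    return (i > 0 && buf[i - 1] == 0) ? i - 1 : i;
                }
            }
        }
        return -1; // no suitable start point in this buffer
    }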