Advanced search

Media (0)

Keyword: - Tags -/xmlrpc

No media matching your criteria is available on this site.

Other articles (36)

  • Authorizations overridden by plugins

    27 April 2010, by

    Mediaspip core
    autoriser_auteur_modifier() so that visitors are able to modify their information on the authors page

  • HTML5 audio and video support

    13 April 2011, by

    MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
    The MediaSPIP player was created specifically for MediaSPIP and can easily be adapted to fit a specific theme.
    For older browsers, the Flowplayer Flash fallback is used.
    MediaSPIP allows for media playback on major mobile platforms with the above (...)

  • From upload to the final video [standalone version]

    31 January 2010, by

    The path of an audio or video document in SPIPMotion is divided into three distinct stages.
    Upload and retrieval of information about the source video
    First of all, you need to create a SPIP article and attach the "source" video document to it.
    When this document is attached to the article, two actions are carried out in addition to the normal behaviour: retrieval of the technical information about the file's audio and video streams; generation of a thumbnail: extraction of a (...)

On other sites (3195)

  • Generate and concatenate videos from images with ffmpeg in single command

    17 August 2022, by YulkyTulky

    My goal is to generate a video from images. Let's say I have two images, 1.png and 2.png.

    I can do

    ffmpeg -loop 1 -i 1.png -t 3 1.mp4

    ffmpeg -loop 1 -i 2.png -t 5 2.mp4

    to create a 3-second video from the first image and a 5-second video from the second image.

    Then I merge the two videos using

    ffmpeg -i 1.mp4 -i 2.mp4 -filter_complex "concat" final.mp4

    to create my final 8-second video.

    This process seems extremely inefficient; I should not have to spend all that processing power and disk reading/writing on two intermediate video files when all I want is the single final video.

    Is there a way to execute this entire process efficiently in a single ffmpeg command?
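
    One way to do it in a single command (a sketch, not part of the original post, and assuming both images share the same resolution and pixel format) is to loop each image as its own input, limit each input with -t, and concatenate the two streams in one filter graph:

    ffmpeg -loop 1 -t 3 -i 1.png -loop 1 -t 5 -i 2.png \
    -filter_complex "[0:v][1:v]concat=n=2:v=1:a=0[v]" \
    -map "[v]" -pix_fmt yuv420p final.mp4

    If the images differ in size, a scale or pad filter has to be inserted before concat.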

  • Humble Video take snapshot of given time

    23 May 2021, by Boldbayar
      

    Hello, I'm using https://github.com/artclarke/humble-video to take a thumbnail from a video. So far I have successfully managed to take a snapshot at the start of a video with the following method.

 private static Path generateThumbnail(final Path videoFile)
         throws InterruptedException, IOException {

 final Demuxer demuxer = Demuxer.make();
 demuxer.open(videoFile.toString(), null, false, true, null, null);

 int streamIndex = -1;
 Decoder videoDecoder = null;
 String rotate = null;
 final int numStreams = demuxer.getNumStreams();
 for (int i = 0; i < numStreams; ++i) {
     final DemuxerStream stream = demuxer.getStream(i);
     final KeyValueBag metaData = stream.getMetaData();
     final Decoder decoder = stream.getDecoder();
     if (decoder != null
             && decoder.getCodecType() == MediaDescriptor.Type.MEDIA_VIDEO) {
         videoDecoder = decoder;
         streamIndex = i;
         rotate = metaData.getValue("rotate", KeyValueBag.Flags.KVB_NONE);
         break;
     }
 }

 if (videoDecoder == null) {
     throw new IOException("Not a valid video file");
 }
 videoDecoder.open(null, null);

 final MediaPicture picture = MediaPicture.make(videoDecoder.getWidth(),
         videoDecoder.getHeight(), videoDecoder.getPixelFormat());

 final MediaPictureConverter converter = MediaPictureConverterFactory
         .createConverter(MediaPictureConverterFactory.HUMBLE_BGR_24, picture);

 final MediaPacket packet = MediaPacket.make();
 BufferedImage image = null;
 // Read packets from the container until the first complete video frame is decoded.
 MUX: while (demuxer.read(packet) >= 0) {
     if (packet.getStreamIndex() != streamIndex) {
         continue;
     }
     int offset = 0;
     int bytesRead = 0;
     // Decode the packet, possibly in several chunks, until a complete frame is produced.
     do {
         bytesRead += videoDecoder.decode(picture, packet, offset);
         if (picture.isComplete()) {
             image = converter.toImage(null, picture);
             break MUX;
         }
         offset += bytesRead;

     } while (offset < packet.getSize());
 }
 if (image == null) {
     throw new IOException("Unable to find a complete video frame");
 }
 // Apply any rotation stored in the stream metadata (common for phone recordings).
 if (rotate != null) {
     final AffineTransform transform = new AffineTransform();
     transform.translate(0.5 * image.getHeight(), 0.5 * image.getWidth());
     transform.rotate(Math.toRadians(Double.parseDouble(rotate)));
     transform.translate(-0.5 * image.getWidth(), -0.5 * image.getHeight());
     final AffineTransformOp op = new AffineTransformOp(transform,
             AffineTransformOp.TYPE_BILINEAR);
     image = op.filter(image, null);
 }

 final Path target = videoFile.getParent()
         .resolve(videoFile.getFileName() + ".thumb.jpg");

 // Scale so that the longer side becomes 216 pixels, preserving the aspect ratio.
 final double mul;
 if (image.getWidth() > image.getHeight()) {
     mul = 216 / (double) image.getWidth();
 } else {
     mul = 216 / (double) image.getHeight();
 }

 final int newW = (int) (image.getWidth() * mul);
 final int newH = (int) (image.getHeight() * mul);
 final Image thumbnailImage = image.getScaledInstance(newW, newH,
         Image.SCALE_SMOOTH);
 image = new BufferedImage(newW, newH, BufferedImage.TYPE_INT_BGR);

 final Graphics2D g2d = image.createGraphics();
 g2d.drawImage(thumbnailImage, 0, 0, null);
 g2d.dispose();

 ImageIO.write(image, "jpeg", target.toFile());
 return target.toAbsolutePath();
 }


    Now, what I want to do is take a snapshot 2 seconds after the video starts. Is that possible? I have tried using the Demuxer's seek method, but with no luck.
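
    One way to do this without relying on Demuxer's seek is to keep the decoding loop from the method above but discard complete frames until their timestamp has passed the 2-second mark. The sketch below is not from the original post; it assumes MediaPicture.getTimeStamp() returns the frame timestamp expressed in the stream's time base and that DemuxerStream.getTimeBase() exposes that base as a Rational; both should be checked against the Humble Video version in use.

 // Sketch only: replaces the MUX loop inside generateThumbnail().
 // Assumption: picture.getTimeStamp() is in the stream's time base (verify for your build).
 final Rational timeBase = demuxer.getStream(streamIndex).getTimeBase();
 final double targetSeconds = 2.0;

 MUX: while (demuxer.read(packet) >= 0) {
     if (packet.getStreamIndex() != streamIndex) {
         continue;
     }
     int offset = 0;
     do {
         offset += videoDecoder.decode(picture, packet, offset);
         if (picture.isComplete()
                 && picture.getTimeStamp() * timeBase.getDouble() >= targetSeconds) {
             image = converter.toImage(null, picture);
             break MUX; // first complete frame at or after the 2-second mark
         }
     } while (offset < packet.getSize());
 }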

  • Black overlay appears when merging a transparent video to another video

    13 June 2012, by RakeshS

    This is what I have done so far:

    Command to create a transparent PNG image:

    convert -size 640x480 -background transparent -fill blue \
    -gravity South label:ROCK image1-0.png

    Command to create a transparent video:

    ffmpeg -loop 1 -f image2 -i image1-0.png -r 20 -vframes 100 \
    -vcodec png -pix_fmt bgra mov-1.mov

    (as per this post). I expect this video to be transparent.

    Command to overlay a video with another:

    ffmpeg -i final-video.mov -sameq -ar 44100 \
    -vf "movie=mov-1.mov [logo];[in][logo] overlay=0:0 [out]" \
    -strict experimental final-video.mov

    The above commands work fine and I have not faced any problems, but I don't get what I expect, which is a kind of watermarking effect: I want mov-1.mov to remain transparent on top of final-video.mov.

    Questions:

    1. Is there any way to verify whether the generated video is transparent, other than merging it? (One possible check is sketched below.)
    2. I am not sure why mov-1.mov is not transparent when it is merged with final-video.mov; any information that helps solve this problem would be great.

    Please help.
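
    For the first question, one quick check (a sketch, not part of the original post, and assuming a reasonably recent ffprobe) is to ask ffprobe for the pixel format of the generated file; a format with an alpha channel such as rgba or bgra means the stream carries transparency, while something like yuv420p means the alpha was already lost when mov-1.mov was written:

    ffprobe -v error -select_streams v:0 -show_entries stream=pix_fmt \
    -of default=noprint_wrappers=1 mov-1.mov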