Advanced search

Media (1)

Keyword: - Tags -/ipad

Other articles (108)

  • Videos

    21 April 2011, by

    As with "audio" documents, MediaSPIP displays videos wherever possible using the HTML5 video tag.
    One drawback of this tag is that it is not correctly supported by some browsers (Internet Explorer, to name one), and each browser natively handles only certain video formats.
    Its main advantage is native video playback in the browser, which avoids relying on Flash and (...)

  • Websites made with MediaSPIP

    2 May 2011, by

    This page lists some websites based on MediaSPIP.

  • Possibility of farm deployment

    12 April 2011, by

    MediaSPIP can be installed as a farm, with a single "core" hosted on a dedicated server and used by a multitude of different sites.
    This makes it possible, for example: to share setup costs between several projects or individuals; to quickly deploy a multitude of unique sites; to avoid having to dump every creation into a digital catch-all, as is the case with the big general-public platforms scattered across the (...)

On other sites (8261)

  • SWF to PNG.. but merge all pngs inside into 1

    14 November 2014, by Jayden

    The title says it all,

    I’ve tried swftools and ffmpeg, but I don’t think they achieve what I need to do.

    Basically, there is an SWF file made up of up to 60 images; those 60 images inside the SWF are all parts that make up one final image or animation in an online game.

    I need to be able to download the SWF as a PNG image with the same output it will have in the virtual game - so not 60 separate graphics, just one image with them all merged together.

    I am not even sure if this is possible to do from the command line, so if you understand any of this, please point me in the right direction.

    I am using Ubuntu 14.04
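    One avenue worth checking (an assumption on my part, not something the question confirms works for this game's files): swftools ships a `swfrender` tool that rasterizes the composed frame of an SWF, with all internal layers flattened, into a single PNG. The sketch below just builds that command line; the `-p` frame-selection flag is from memory of the swftools manual, so verify it against `swfrender --help` before relying on it.

    ```python
    import subprocess

    def build_render_cmd(swf_path, png_path, frame=None):
        """Build a swfrender invocation that flattens an SWF frame to one PNG.

        swfrender (from swftools) composites every layer of the chosen frame,
        so the ~60 internal images come out merged into a single bitmap.
        """
        cmd = ["swfrender", swf_path, "-o", png_path]
        if frame is not None:
            cmd += ["-p", str(frame)]  # assumed frame/page-selection flag
        return cmd

    if __name__ == "__main__":
        cmd = build_render_cmd("game.swf", "merged.png")
        print(" ".join(cmd))
        # Uncomment to actually run (requires swftools to be installed):
        # subprocess.run(cmd, check=True)
    ```

    On Ubuntu 14.04, swftools is available via `apt-get install swftools`.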

  • Superimposing two videos onto a static image?

    15 December 2014, by Archagon

    I have two videos that I’d like to combine into a single video, in which both videos would sit on top of a static background image. (Think something like this.) My requirements are that the software I use is free, that it runs on OSX, and that I don’t have to re-encode my videos an excessive number of times. I’d also like to be able to perform this operation from the command line or via script, since I’ll be doing it a lot. (But this isn’t strictly necessary.)

    I tried fiddling with ffmpeg for a couple of hours, but it just doesn’t seem very well suited for post-processing. I could potentially hack something together via the overlay feature, but so far I haven’t figured out how to do it, aside from painstakingly converting the image to a video (which takes 2x as long as my videos!) and then superimposing the two videos onto it in another rendering step.

    Any tips? Thank you!


    Update:

    Thanks to LordNeckbeard’s help, I was able to achieve my desired result with a single ffmpeg call! Unfortunately, encoding is quite slow, taking 6 seconds to encode 1 second of video. I believe this is caused by the background image. Any tips on speeding up encoding? Here’s the ffmpeg log:

    MacBook-Pro:Video archagon$ ffmpeg -loop 1 -i underlay.png -i test-slide-video-short.flv -i test-speaker-video-short.flv -filter_complex "[1:0]scale=400:-1[a];[2:0]scale=320:-1[b];[0:0][a]overlay=0:0[c];[c][b]overlay=0:0" -shortest -t 5 -an output.mp4
    ffmpeg version 1.0 Copyright (c) 2000-2012 the FFmpeg developers
     built on Nov 14 2012 16:18:58 with Apple clang version 4.0 (tags/Apple/clang-421.0.60) (based on LLVM 3.1svn)
     configuration: --prefix=/opt/local --enable-swscale --enable-avfilter --enable-libmp3lame --enable-libvorbis --enable-libopus --enable-libtheora --enable-libschroedinger --enable-libopenjpeg --enable-libmodplug --enable-libvpx --enable-libspeex --mandir=/opt/local/share/man --enable-shared --enable-pthreads --cc=/usr/bin/clang --arch=x86_64 --enable-yasm --enable-gpl --enable-postproc --enable-libx264 --enable-libxvid
     libavutil      51. 73.101 / 51. 73.101
     libavcodec     54. 59.100 / 54. 59.100
     libavformat    54. 29.104 / 54. 29.104
     libavdevice    54.  2.101 / 54.  2.101
     libavfilter     3. 17.100 /  3. 17.100
     libswscale      2.  1.101 /  2.  1.101
     libswresample   0. 15.100 /  0. 15.100
     libpostproc    52.  0.100 / 52.  0.100
    Input #0, image2, from 'underlay.png':
     Duration: 00:00:00.04, start: 0.000000, bitrate: N/A
       Stream #0:0: Video: png, rgb24, 1024x768, 25 fps, 25 tbr, 25 tbn, 25 tbc
    Input #1, flv, from 'test-slide-video-short.flv':
     Metadata:
       author          :
       copyright       :
       description     :
       keywords        :
       rating          :
       title           :
       presetname      : Custom
       videodevice     : VGA2USB Pro V3U30343
       videokeyframe_frequency: 5
       canSeekToEnd    : false
       createdby       : FMS 3.5
       creationdate    : Mon Aug 16 16:35:34 2010
       encoder         : Lavf54.29.104
     Duration: 00:50:32.75, start: 0.000000, bitrate: 90 kb/s
       Stream #1:0: Video: vp6f, yuv420p, 640x480, 153 kb/s, 8 tbr, 1k tbn, 1k tbc
    Input #2, flv, from 'test-speaker-video-short.flv':
     Metadata:
       author          :
       copyright       :
       description     :
       keywords        :
       rating          :
       title           :
       presetname      : Custom
       videodevice     : Microsoft DV Camera and VCR
       videokeyframe_frequency: 5
       audiodevice     : Microsoft DV Camera and VCR
       audiochannels   : 1
       audioinputvolume: 75
       canSeekToEnd    : false
       createdby       : FMS 3.5
       creationdate    : Mon Aug 16 16:35:34 2010
       encoder         : Lavf54.29.104
     Duration: 00:50:38.05, start: 0.000000, bitrate: 238 kb/s
       Stream #2:0: Video: vp6f, yuv420p, 320x240, 204 kb/s, 25 tbr, 1k tbn, 1k tbc
       Stream #2:1: Audio: mp3, 22050 Hz, mono, s16, 32 kb/s
    File 'output.mp4' already exists. Overwrite ? [y/N] y
    using cpu capabilities: none!
    [libx264 @ 0x7fa84c02f200] profile High, level 3.1
    [libx264 @ 0x7fa84c02f200] 264 - core 119 - H.264/MPEG-4 AVC codec - Copyleft 2003-2011 - http://www.videolan.org/x264.html - options: cabac=1 ref=3 deblock=1:0:0 analyse=0x3:0x113 me=hex subme=7 psy=1 psy_rd=1.00:0.00 mixed_ref=1 me_range=16 chroma_me=1 trellis=1 8x8dct=1 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_offset=-2 threads=3 sliced_threads=0 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=3 b_pyramid=2 b_adapt=1 b_bias=0 direct=1 weightb=1 open_gop=0 weightp=2 keyint=250 keyint_min=25 scenecut=40 intra_refresh=0 rc_lookahead=40 rc=crf mbtree=1 crf=23.0 qcomp=0.60 qpmin=0 qpmax=69 qpstep=4 ip_ratio=1.40 aq=1:1.00
    Output #0, mp4, to 'output.mp4':
     Metadata:
       encoder         : Lavf54.29.104
       Stream #0:0: Video: h264 ([33][0][0][0] / 0x0021), yuv420p, 1024x768, q=-1--1, 25 tbn, 25 tbc
    Stream mapping:
     Stream #0:0 (png) -> overlay:main
     Stream #1:0 (vp6f) -> scale
     Stream #2:0 (vp6f) -> scale
     overlay -> Stream #0:0 (libx264)
    Press [q] to stop, [?] for help

    Update 2:

    It works! One important tweak was to move the underlay.png input to the end of the input list, which increased performance substantially. Here’s my final ffmpeg call. (The maps at the end aren’t required for this particular arrangement, but I sometimes have a few extra audio inputs that I want to map to my output.)

    ffmpeg
       -i VideoOne.flv
       -i VideoTwo.flv
       -loop 1 -i Underlay.png
       -filter_complex "[2:0] [0:0] overlay=20:main_h/2-overlay_h/2 [overlay];[overlay] [1:0] overlay=main_w-overlay_w-20:main_h/2-overlay_h/2 [output]"
       -map [output]:v
       -map 0:a
       OutputVideo.m4v
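    The chained overlay graph above can be easier to tweak if it is generated programmatically. This sketch is plain string-building (no ffmpeg required) and reproduces the exact filter_complex from the final call; the 20-pixel margin and the input ordering are taken from the command above.

    ```python
    def two_box_filter(margin=20):
        """Recreate the filter_complex used above: the underlay (input 2) is
        the main canvas, VideoOne (input 0) is overlaid on the left and
        VideoTwo (input 1) on the right, both vertically centred."""
        left = f"[2:0] [0:0] overlay={margin}:main_h/2-overlay_h/2 [overlay]"
        right = f"[overlay] [1:0] overlay=main_w-overlay_w-{margin}:main_h/2-overlay_h/2 [output]"
        return ";".join([left, right])

    print(two_box_filter())
    ```

    Changing `margin` moves both boxes in or out symmetrically, which is handy when the input videos change size.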

  • MediaCodec - save timing info for ffmpeg?

    18 November 2014, by Mark

    I have a requirement to encrypt video before it hits the disk. It seems that on Android the only way to do this is to use MediaCodec, and to encrypt and save the raw H.264 elementary stream. (The MediaRecorder and Muxer classes operate on FileDescriptors, not an OutputStream, so I can’t wrap them with a CipherOutputStream.)

    Using the grafika code as a base, I’m able to save a raw H.264 elementary stream by replacing the Muxer in the VideoEncoderCore class with a WritableByteChannel backed by a CipherOutputStream (code below, minus the CipherOutputStream).

    If I take the resulting output file over to the desktop, I’m able to use ffmpeg to mux the H.264 stream into a playable mp4 file. What’s missing, however, is timing information: ffmpeg always assumes 25 fps. What I’m looking for is a way to save the timing info, perhaps to a separate file, that I can use to give ffmpeg the right information on the desktop.

    I’m not doing audio yet, but I can imagine I’ll need to do the same thing there, if I’m to have any hope of remotely accurate syncing.

    FWIW, I’m a total newbie here, and I really don’t know much of anything about SPS, NAL, Atoms, etc.
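    One workable approach (my assumption, not something the post confirms): log each frame’s mBufferInfo.presentationTimeUs to a sidecar file on the device, then on the desktop derive the real frame rate from those timestamps and pass it to ffmpeg ahead of the raw stream, along the lines of `ffmpeg -framerate <fps> -i raw.h264 -c copy out.mp4`. A sketch of the desktop-side arithmetic:

    ```python
    def fps_from_timestamps(pts_us):
        """Estimate the average frame rate from MediaCodec presentation
        timestamps (microseconds), logged one per encoded frame."""
        if len(pts_us) < 2:
            raise ValueError("need at least two timestamps")
        span_us = pts_us[-1] - pts_us[0]
        # (frames - 1) intervals span span_us microseconds
        return (len(pts_us) - 1) * 1_000_000 / span_us

    # Three frames 33333 us apart -> ~30 fps
    print(round(fps_from_timestamps([0, 33_333, 66_666]), 2))
    ```

    This only fixes the average rate; per-frame variable timing would need a container-level remux (e.g. per-frame durations), which this sketch does not attempt.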

    /*
    * Copyright 2014 Google Inc. All rights reserved.
    *
    * Licensed under the Apache License, Version 2.0 (the "License");
    * you may not use this file except in compliance with the License.
    * You may obtain a copy of the License at
    *
    *      http://www.apache.org/licenses/LICENSE-2.0
    *
    * Unless required by applicable law or agreed to in writing, software
    * distributed under the License is distributed on an "AS IS" BASIS,
    * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    * See the License for the specific language governing permissions and
    * limitations under the License.
    */


    import android.media.MediaCodec;
    import android.media.MediaCodecInfo;
    import android.media.MediaFormat;
    import android.util.Log;
    import android.view.Surface;

    import java.io.BufferedOutputStream;
    import java.io.File;
    import java.io.FileOutputStream;
    import java.io.IOException;
    import java.nio.ByteBuffer;
    import java.nio.channels.Channels;
    import java.nio.channels.WritableByteChannel;

    /**
    * This class wraps up the core components used for surface-input video encoding.
    * <p>
    * Once created, frames are fed to the input surface.  Remember to provide the presentation
    * time stamp, and always call drainEncoder() before swapBuffers() to ensure that the
    * producer side doesn't get backed up.
    * <p>
    * This class is not thread-safe, with one exception: it is valid to use the input surface
    * on one thread, and drain the output on a different thread.
    */
    public class VideoEncoderCore {
       private static final String TAG = MainActivity.TAG;
       private static final boolean VERBOSE = false;

       // TODO: these ought to be configurable as well
       private static final String MIME_TYPE = "video/avc";    // H.264 Advanced Video Coding
       private static final int FRAME_RATE = 30;               // 30fps
       private static final int IFRAME_INTERVAL = 5;           // 5 seconds between I-frames

       private Surface mInputSurface;
       private MediaCodec mEncoder;
       private MediaCodec.BufferInfo mBufferInfo;
       private int mTrackIndex;
       //private MediaMuxer mMuxer;
       //private boolean mMuxerStarted;
       private WritableByteChannel outChannel;

       /**
        * Configures encoder and muxer state, and prepares the input Surface.
        */
       public VideoEncoderCore(int width, int height, int bitRate, File outputFile)
               throws IOException {
           mBufferInfo = new MediaCodec.BufferInfo();

           MediaFormat format = MediaFormat.createVideoFormat(MIME_TYPE, width, height);

           // Set some properties.  Failing to specify some of these can cause the MediaCodec
           // configure() call to throw an unhelpful exception.
           format.setInteger(MediaFormat.KEY_COLOR_FORMAT,
                   MediaCodecInfo.CodecCapabilities.COLOR_FormatSurface);
           format.setInteger(MediaFormat.KEY_BIT_RATE, bitRate);
           format.setInteger(MediaFormat.KEY_FRAME_RATE, FRAME_RATE);
           format.setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, IFRAME_INTERVAL);
           if (VERBOSE) Log.d(TAG, "format: " + format);

           // Create a MediaCodec encoder, and configure it with our format.  Get a Surface
           // we can use for input and wrap it with a class that handles the EGL work.
           mEncoder = MediaCodec.createEncoderByType(MIME_TYPE);
           mEncoder.configure(format, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
           mInputSurface = mEncoder.createInputSurface();
           mEncoder.start();

           // Create a MediaMuxer.  We can't add the video track and start() the muxer here,
           // because our MediaFormat doesn't have the Magic Goodies.  These can only be
           // obtained from the encoder after it has started processing data.
           //
           // We're not actually interested in multiplexing audio.  We just want to convert
           // the raw H.264 elementary stream we get from MediaCodec into a .mp4 file.
           //mMuxer = new MediaMuxer(outputFile.toString(),
           //        MediaMuxer.OutputFormat.MUXER_OUTPUT_MPEG_4);

           mTrackIndex = -1;
           //mMuxerStarted = false;
           outChannel = Channels.newChannel(new BufferedOutputStream(new FileOutputStream(outputFile)));
       }

       /**
        * Returns the encoder's input surface.
        */
       public Surface getInputSurface() {
           return mInputSurface;
       }

       /**
        * Releases encoder resources.
        */
       public void release() {
           if (VERBOSE) Log.d(TAG, "releasing encoder objects");
           if (mEncoder != null) {
               mEncoder.stop();
               mEncoder.release();
               mEncoder = null;
           }
           try {
               outChannel.close();
           }
           catch (Exception e) {
               Log.e(TAG,"Couldn't close output stream.");
           }
       }

       /**
        * Extracts all pending data from the encoder and forwards it to the muxer.
        * <p>
        * If endOfStream is not set, this returns when there is no more data to drain.  If it
        * is set, we send EOS to the encoder, and then iterate until we see EOS on the output.
        * Calling this with endOfStream set should be done once, right before stopping the muxer.
        * <p>
        * We're just using the muxer to get a .mp4 file (instead of a raw H.264 stream).  We're
        * not recording audio.
        */
       public void drainEncoder(boolean endOfStream) {
           final int TIMEOUT_USEC = 10000;
           if (VERBOSE) Log.d(TAG, "drainEncoder(" + endOfStream + ")");

           if (endOfStream) {
               if (VERBOSE) Log.d(TAG, "sending EOS to encoder");
               mEncoder.signalEndOfInputStream();
           }

           ByteBuffer[] encoderOutputBuffers = mEncoder.getOutputBuffers();
           while (true) {
               int encoderStatus = mEncoder.dequeueOutputBuffer(mBufferInfo, TIMEOUT_USEC);
               if (encoderStatus == MediaCodec.INFO_TRY_AGAIN_LATER) {
                   // no output available yet
                   if (!endOfStream) {
                       break;      // out of while
                   } else {
                       if (VERBOSE) Log.d(TAG, "no output available, spinning to await EOS");
                   }
               } else if (encoderStatus == MediaCodec.INFO_OUTPUT_BUFFERS_CHANGED) {
                   // not expected for an encoder
                   encoderOutputBuffers = mEncoder.getOutputBuffers();
               } else if (encoderStatus == MediaCodec.INFO_OUTPUT_FORMAT_CHANGED) {
                   // should happen before receiving buffers, and should only happen once
                   //if (mMuxerStarted) {
                   //    throw new RuntimeException("format changed twice");
                   //}
                   MediaFormat newFormat = mEncoder.getOutputFormat();
                   Log.d(TAG, "encoder output format changed: " + newFormat);

                   // now that we have the Magic Goodies, start the muxer
                   //mTrackIndex = mMuxer.addTrack(newFormat);
                   //mMuxer.start();
                   //mMuxerStarted = true;
                } else if (encoderStatus < 0) {
                   Log.w(TAG, "unexpected result from encoder.dequeueOutputBuffer: " +
                           encoderStatus);
                   // let's ignore it
               } else {
                   ByteBuffer encodedData = encoderOutputBuffers[encoderStatus];
                   if (encodedData == null) {
                       throw new RuntimeException("encoderOutputBuffer " + encoderStatus +
                               " was null");
                   }

                   /*
                      FFMPEG needs this info.
                    if ((mBufferInfo.flags & MediaCodec.BUFFER_FLAG_CODEC_CONFIG) != 0) {
                       // The codec config data was pulled out and fed to the muxer when we got
                       // the INFO_OUTPUT_FORMAT_CHANGED status.  Ignore it.
                       if (VERBOSE) Log.d(TAG, "ignoring BUFFER_FLAG_CODEC_CONFIG");
                       mBufferInfo.size = 0;
                   }
                   */

                   if (mBufferInfo.size != 0) {
                       /*
                       if (!mMuxerStarted) {
                           throw new RuntimeException("muxer hasn't started");
                       }
                       */

                       // adjust the ByteBuffer values to match BufferInfo (not needed?)
                       encodedData.position(mBufferInfo.offset);
                       encodedData.limit(mBufferInfo.offset + mBufferInfo.size);

                       try {
                           outChannel.write(encodedData);
                       }
                       catch (Exception e) {
                           Log.e(TAG,"Error writing output.",e);
                       }
                       if (VERBOSE) {
                           Log.d(TAG, "sent " + mBufferInfo.size + " bytes to muxer, ts=" +
                                   mBufferInfo.presentationTimeUs);
                       }
                   }

                   mEncoder.releaseOutputBuffer(encoderStatus, false);

                    if ((mBufferInfo.flags & MediaCodec.BUFFER_FLAG_END_OF_STREAM) != 0) {
                       if (!endOfStream) {
                           Log.w(TAG, "reached end of stream unexpectedly");
                       } else {
                           if (VERBOSE) Log.d(TAG, "end of stream reached");
                       }
                       break;      // out of while
                   }
               }
           }
       }
    }