
Other articles (77)

  • List of compatible distributions

    26 April 2011

    The table below lists the Linux distributions compatible with MediaSPIP's automated installation script.

    Distribution name | Version name         | Version number
    Debian            | Squeeze              | 6.x.x
    Debian            | Wheezy               | 7.x.x
    Debian            | Jessie               | 8.x.x
    Ubuntu            | The Precise Pangolin | 12.04 LTS
    Ubuntu            | The Trusty Tahr      | 14.04

    If you want to help us improve this list, you can give us access to a machine whose distribution is not listed above, or send us the fixes needed to add it (...)

  • Farm notifications

    1 December 2010

    To ensure proper management of the farm, several things need to be notified during specific actions, both to the user and to all of the farm's administrators.
    Status-change notifications
    When an instance's status changes, all of the farm's administrators must be notified of the change, as well as the instance's administrator user.
    When a channel is requested
    Change to the "publie" status
    Change to the (...)

  • Publishing on MédiaSpip

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, if your installed MédiaSpip is at version 0.2 or higher. If needed, contact the administrator of your MédiaSpip to find out.

On other sites (10888)

  • Multiple overlays using ffmpeg

    23 March 2018, by lhan

    I’m trying to satisfy a few layering scenarios for building video files using ffmpeg.

    Scenario 1: Overlay a video (specifying the opacity of the video) on top of an image, creating a new video as the result.

    I solved this with:

    ffmpeg -i video.mp4 -i image.jpg -filter_complex '[0]format=rgba,colorchannelmixer=aa=0.7,scale=w=3840:h=2160[a];[1][a]overlay=0:0' -t 30 output.mp4

    I’m scaling the video to 3840x2160 to match my image (ideally I’d have them matching beforehand).

    Scenario 2: three layers now: video - image - image. The middle layer is a transparent image containing text. So we have a base image with text overlaid on it, and a video on top of that at a certain opacity.

    I solved this with:

    ffmpeg -i video.mp4 -i image.jpg -i text.png -filter_complex '[0]format=rgba,colorchannelmixer=aa=0.7,scale=w=3840:h=2160[a];[2][a]overlay=0:0,scale=w=3840:h=2160[b];[1][b]overlay=0:0' -t 30 output.mp4

    Scenario 3 (which I can't get working): the same as Scenario 2, but with the text on top of the video.

    I tried re-arranging my filter, hoping to affect the layering order:

    ffmpeg -i video.mp4 -i image.jpg -i text.png -filter_complex '[2]overlay=0:0,scale=w=3840:h=2160[a];[0][a]format=rgba,colorchannelmixer=aa=0.7,scale=w=3840:h=2160[b];[1][b]overlay=0:0' -t 5 output.mp4

    But that gives the following error:

    Too many inputs specified for the "format" filter. Error initializing complex filters. Invalid argument

    Full error:

    Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'video.mp4':
      Metadata:
        major_brand     : mp42
        minor_version   : 0
        compatible_brands: mp42mp41
        creation_time   : 2018-03-09T20:52:18.000000Z
      Duration: 00:00:30.00, start: 0.000000, bitrate: 8002 kb/s
        Stream #0:0(eng): Video: h264 (High) (avc1 / 0x31637661), yuv420p(tv, bt709), 1920x1080 [SAR 1:1 DAR 16:9], 7997 kb/s, 24 fps, 24 tbr, 24k tbn, 48 tbc (default)
        Metadata:
          creation_time   : 2018-03-09T20:52:18.000000Z
          handler_name    : Alias Data Handler
          encoder         : AVC Coding
    Input #1, image2, from 'image.jpg':
      Duration: 00:00:00.04, start: 0.000000, bitrate: 526829 kb/s
        Stream #1:0: Video: mjpeg, yuvj444p(pc, bt470bg/unknown/unknown), 3840x2160 [SAR 96:96 DAR 16:9], 25 tbr, 25 tbn, 25 tbc
    Input #2, png_pipe, from 'text.png':
      Duration: N/A, bitrate: N/A
        Stream #2:0: Video: png, rgba(pc), 1500x1500, 25 tbr, 25 tbn, 25 tbc
    [AVFilterGraph @ 0x7fc37d402de0] Too many inputs specified for the "format" filter.
    Error initializing complex filters.
    Invalid argument

    I can sort of get around that by tweaking the command so that the text isn't an input to the overlay:

    ffmpeg -i lightTexture.mp4 -i image.jpg -i textSample.png -filter_complex '[2]overlay=0:0,scale=w=3840:h=2160;[0]format=rgba,colorchannelmixer=aa=0.7,scale=w=3840:h=2160[b];[1][b]overlay=0:0' -t 5 output_text_on_top.mp4

    But then my output video is all messed up. I suspect I am on the wrong track in trying to cram all of this into one -filter_complex. I'm wondering if I need to create two overlays and then overlay those (i.e. overlay the text onto the video, and then overlay that onto the base image), though I'm not sure how to accomplish that.

    If anyone could point me in the right direction here, I’d be super grateful.
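
    Since format is a single-input filter, the rearranged graph's '[0][a]format=rgba' feeds it two streams, which is what triggers the error above. A filtergraph along the lines of the two-overlay idea might look like this (a sketch assuming the same inputs and sizes as the commands above, not a tested answer): the opacity chain is applied to the video alone, the video is overlaid on the base image, and the text is overlaid last so it lands on top.

    ffmpeg -i video.mp4 -i image.jpg -i text.png -filter_complex '[0]format=rgba,colorchannelmixer=aa=0.7,scale=w=3840:h=2160[vid];[1][vid]overlay=0:0[base];[base][2]overlay=0:0' -t 30 output.mp4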

  • How to make a video from an ArrayList of images with music

    6 April 2018, by hardik sojitra

    I am making an app that creates a video from selected images stored on the SD card. I have added the library
    group: 'com.github.hiteshsondhi88.libffmpeg', name: 'FFmpegAndroid', version: '0.2.5' and written the classes below, but it does not work.

    Utility.java

    public class Utility {
       private final static String TAG = Utility.class.getName();
       private static Context mContext;
       public Utility(Context context) {
           mContext = context;
       }
       public static String excuteCommand(String command)
       {
           try {
               Log.d(TAG, "execute command : " + command);

               Process process = Runtime.getRuntime().exec(command);

               BufferedReader reader = new BufferedReader(
                       new InputStreamReader(process.getInputStream()));
               int read;
               char[] buffer = new char[4096];
               StringBuffer output = new StringBuffer();
               while ((read = reader.read(buffer)) > 0) {
                   output.append(buffer, 0, read);
               }
               reader.close();
               process.waitFor();
               Log.d(TAG, "command result: " + output.toString());
               return output.toString();
           } catch (IOException e) {
               Log.e(TAG, e.getMessage(), e);
           } catch (InterruptedException e) {
               Log.e(TAG, e.getMessage(), e);
           }
           return "";
       }
       // NOTE: despite its name, this returns the *external* storage root;
       // saveFileToAppInternalStorage() below writes via openFileOutput(),
       // which targets the app's internal files directory, so the two paths differ.
       public String getPathOfAppInternalStorage()
       {
           return Environment.getExternalStorageDirectory().getAbsolutePath();
       }
       public void saveFileToAppInternalStorage(InputStream inputStream, String fileName)
       {
           File file = new File(getPathOfAppInternalStorage() + "/" + fileName);
           if (file.exists())
           {
               Log.d(TAG, "SaveRawToAppDir Delete Exsisted File");
               file.delete();
           }
           FileOutputStream outputStream;
           try {
               outputStream = mContext.openFileOutput(fileName, Context.MODE_PRIVATE);
               byte[] buffer = new byte[1024];
               int length;
               while ((length = inputStream.read(buffer)) > 0)
               {
                   outputStream.write(buffer, 0, length);
               }
               outputStream.close();
               inputStream.close();
           } catch (Exception e) {
               Log.e(TAG, e.getMessage(), e);
           }
       }
       public static boolean isFileExsisted(String filePath)
       {
           File file = new File(filePath);
           return file.exists();
       }
       public static void deleteFileAtPath(String filePath)
       {
           File file = new File(filePath);
           file.delete();
       }
    }


     FfmpegController.java

    package com.example.admin.imagetovideowithnative;

    import android.content.Context;
    import android.util.Log;

    import java.io.ByteArrayInputStream;
    import java.io.InputStream;

    public class FfmpegController {
       private static Context mContext;
       private static Utility mUtility;
       private static String mFfmpegBinaryPath;
       public FfmpegController(Context context) {
           mContext = context;
           mUtility = new Utility(context);
           initFfmpeg();
       }
       private void initFfmpeg()
       {
           mFfmpegBinaryPath = mContext.getApplicationContext().getFilesDir().getAbsolutePath() + "/ffmpeg";
           if (Utility.isFileExsisted(mFfmpegBinaryPath))
               return;
           InputStream inputStream = mContext.getResources().openRawResource(R.raw.ffmpeg);
           mUtility.saveFileToAppInternalStorage(inputStream, "ffmpeg");
           Utility.excuteCommand(CommandHelper.commandChangeFilePermissionForExecuting(mFfmpegBinaryPath));
       }
       public void convertImageToVideo(String inputImgPath)
       {
           Log.e("Image Parth", "inputImgPath - "+inputImgPath);
           if (Utility.isFileExsisted(pathOuputVideo()))
               Utility.deleteFileAtPath(pathOuputVideo());
           saveShellCommandImg2VideoToAppDir(inputImgPath);
           Utility.excuteCommand("sh" + " " + pathShellScriptImg2Video());
       }
       public String pathOuputVideo()
       {
           return mUtility.getPathOfAppInternalStorage() + "/out.mp4";
       }
       private String pathShellScriptImg2Video()
       {
           return mUtility.getPathOfAppInternalStorage() + "/img2video.sh";
       }
       private void saveShellCommandImg2VideoToAppDir(String inputImgPath)
       {
           String command = CommandHelper.commandConvertImgToVideo(mFfmpegBinaryPath, inputImgPath, pathOuputVideo());
           InputStream is = new ByteArrayInputStream(command.getBytes());
           mUtility.saveFileToAppInternalStorage(is, "img2video.sh");
       }
    }

    CommandHelper.java

    package com.example.admin.imagetovideowithnative;

    import android.util.Log;

    public class CommandHelper {
       public static String commandConvertImgToVideo(String ffmpegBinaryPath, String inputImgPath, String outputVideoPath) {
           Log.e("ffmpegBinaryPath", "ffmpegBinaryPath - " + ffmpegBinaryPath);
           Log.e("inputImgPath", "inputImgPath - " + inputImgPath);
           Log.e("outputVideoPath", "outputVideoPath - " + outputVideoPath);

           return ffmpegBinaryPath + " -r 1/1 -i " + inputImgPath + " -c:v libx264 -crf 23 -pix_fmt yuv420p -s 640x480 " + outputVideoPath;
       }

       public static String commandChangeFilePermissionForExecuting(String filePath) {
           return "chmod 777 " + filePath;
       }
    }

     The AsyncTask:

     // Parameterized so that doInBackground(Object...) and onPostExecute(Void) are valid overrides.
     class Asynck extends AsyncTask<Object, Void, Void> {

           FfmpegController mFfmpegController = new FfmpegController(PhotosActivity.this);
           Utility mUtility = new Utility(PhotosActivity.this);

           @Override
           protected void onPreExecute() {
               super.onPreExecute();
               Log.e("Video Process Start", "======================== Video Process Start ======================================");

           }

           @Override
           protected Void doInBackground(Object... objects) {
               mFfmpegController.convertImageToVideo("");
               return null;
           }

           @Override
           protected void onPostExecute(Void aVoid) {
               super.onPostExecute(aVoid);
               Log.e("Video Process Complete", "======================== Video Process Complete ======================================");
               Log.e("Video Path", "Path - " + mFfmpegController.pathOuputVideo());
               Toast.makeText(PhotosActivity.this, "Video Process Complete", Toast.LENGTH_LONG).show();
           }
       }
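
    For reference, a typical ffmpeg invocation that turns a numbered image sequence plus an audio track into a video looks like the following (a sketch only; img%03d.jpg and music.mp3 are placeholder names, not paths from the code above, and the options mirror CommandHelper's x264 settings):

    ffmpeg -framerate 1 -i img%03d.jpg -i music.mp3 -c:v libx264 -crf 23 -pix_fmt yuv420p -s 640x480 -shortest out.mp4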

  • How to play a video in reverse in Unity?

    30 April 2018, by azusa

    I am implementing a VR 360° video viewer in Unity and need to implement a "play in reverse" function. Some approaches I tried (all of which failed):

    1. Set the playbackSpeed field of the VideoPlayer to a negative number.
      • Result: the video pauses
    2. Reverse the video frame by frame using the method suggested here: How to Rewind Video Player in Unity?
      • Result: extremely laggy playback
    3. Instead of using the default VideoPlayer, use Vive Media Player, which builds on top of ffmpeg (https://assetstore.unity.com/packages/tools/video/vive-media-decoder-63938). Reverse the video frame by frame and force the renderer to render the frame on each call of Update(), even if the decoder state is DecoderState.SEEK_FRAME.

    Code (Based on ViveMediaDecoder.cs from the asset) :

       //  Video progress is triggered using Update. Progress time would be set by nativeSetVideoTime.
       void Update() {
           Debug.Log(decoderState);
           switch (decoderState) {
               case DecoderState.START:
                   if (isVideoEnabled) {
                       //  Prevent empty texture generate green screen.(default 0,0,0 in YUV which is green in RGB)
                       if (useDefault && nativeIsContentReady(decoderID)) {
                           getTextureFromNative();
                           setTextures(videoTexYch, videoTexUch, videoTexVch);
                           useDefault = false;
                       }

                       //  Update video frame by dspTime.
                       double setTime = AudioSettings.dspTime - globalStartTime;

                       //  Normal update frame.
                       if (setTime < videoTotalTime || videoTotalTime == -1.0f) {
                           if (seekPreview && nativeIsContentReady(decoderID)) {
                               setPause();
                               seekPreview = false;
                               unmute();
                           } else {
                               nativeSetVideoTime(decoderID, (float) setTime);
                               GL.IssuePluginEvent(GetRenderEventFunc(), decoderID);
                           }
                       } else {
                           isVideoReadyToReplay = true;
                       }
                   }

                   if (nativeIsVideoBufferEmpty(decoderID) && !nativeIsEOF(decoderID)) {
                       decoderState = DecoderState.BUFFERING;
                       hangTime = AudioSettings.dspTime - globalStartTime;
                   }

                   break;

               case DecoderState.SEEK_FRAME:

                       //
                       // Code Added:
                       //

                        // Declared locally: the setTime in the START case is scoped to its if-block.
                        double seekTime = AudioSettings.dspTime - globalStartTime;
                        nativeSetVideoTime(decoderID, (float) seekTime);
                       GL.IssuePluginEvent(GetRenderEventFunc(), decoderID);

                       //
                       //

                   if (nativeIsSeekOver(decoderID)) {
                       globalStartTime = AudioSettings.dspTime - hangTime;
                       decoderState = DecoderState.START;
                       if (lastState == DecoderState.PAUSE) {
                           seekPreview = true;
                           mute();
                       }
                   }
                   break;

               case DecoderState.BUFFERING:
                   if (nativeIsVideoBufferFull(decoderID) || nativeIsEOF(decoderID)) {
                       decoderState = DecoderState.START;
                       globalStartTime = AudioSettings.dspTime - hangTime;
                   }
                   break;

               case DecoderState.PAUSE:
               case DecoderState.EOF:
               default:
                   break;
           }

           if (isVideoEnabled || isAudioEnabled) {
               if ((!isVideoEnabled || isVideoReadyToReplay) && (!isAudioEnabled || isAllAudioChEnabled || isAudioReadyToReplay)) {
                   decoderState = DecoderState.EOF;
                   isVideoReadyToReplay = isAudioReadyToReplay = false;

                   if (onVideoEnd != null) {
                       onVideoEnd.Invoke();
                   }
               }
           }
       }

    - Result: the video pauses

    I currently work around this problem by generating a reversed video beforehand and switching to it whenever the user wants to rewind. However, given that our project uses more than one 360° video and allows custom videos, the time needed to generate the reversed videos and the lag when switching between them are unacceptably long.
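
    For reference, the pre-reversal step itself can be done with ffmpeg's reverse and areverse filters (a sketch; both filters buffer the entire stream in memory, so this is only practical for reasonably short clips):

    ffmpeg -i input.mp4 -vf reverse -af areverse reversed.mp4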

    Since the feature sounds conceptually simple, I suspect there must be a much simpler solution. I have been stuck on this problem for a long time, so any pointers would be a big help!