
Other articles (92)

  • Updating from version 0.1 to 0.2

    24 June 2013

    An explanation of the notable changes involved in moving a MediaSPIP installation from version 0.1 to version 0.3. What is new?
    Regarding software dependencies: the latest versions of FFMpeg (>= v1.2.1) are used; the dependencies for Smush are installed; MediaInfo and FFprobe are installed to retrieve metadata; ffmpeg2theora is no longer used; flvtool2 is no longer installed, in favour of flvtool++; ffmpeg-php, which is no longer maintained, is no longer installed (...)

  • Customising by adding your logo, banner or background image

    5 September 2013

    Some themes support three customisation elements: adding a logo; adding a banner; adding a background image.

  • Writing a news item

    21 June 2013

    Present changes to your MediaSPIP, or news about your projects, on your MediaSPIP using the news section.
    In MediaSPIP's default spipeo theme, news items are displayed at the bottom of the main page, below the editorials.
    You can customise the news item creation form.
    News item creation form: for a document of type "news item", the fields offered by default are: publication date (customise the publication date) (...)

On other sites (10858)

  • Using Python script to cut long videos into chunks in FFMPEG

    23 February 2016, by Michael Hamilton

    I'll start off by saying I'm not a programmer, but I really need what this Python script I found claims it can do.

    Auto-Splitting Script by Antarctic Nest of Icephoenix

    Basically I have a directory of long .MP4s that need to be cut into equal parts, each under a maximum running time of 3 hours 15 minutes. For example, an 8-hour video would need to be cut into smaller parts, each under 3:15:00.

    We've been manually creating FFmpeg commands to do this, but I found the Python script above, which seems like it will do what we need. The issue is that I have no Python experience. I don't know where in the script to enter the folder path with the videos, where to specify my codecs, or where to tell the program that the maximum length for each video chunk is 3:15:00.

    I'm on a 64-bit Windows system, working in the command prompt.

    Here's what I have done:

    • Installed Python 3
    • Downloaded the script
    • I can click on the script and see the cmd window flash, which indicates it's running
    • I enter "C:\Python34\python.exe V:\ffmpeg\ffmpeg-split.py" into cmd
    • The output is:

      File "V:\ffmpeg\ffmpeg-split.py", line 16
      print "Split length can’t be 0"

       SyntaxError: Missing parentheses in call to 'print'

    I have no idea where to go from here. It seems like the script is loading properly, but I haven’t entered my variables. Any help with where to put the information would be appreciated.

    Here is the FFmpeg command we usually use:

    ffmpeg -i V:\ffmpeg\88518_63c392af.mp4 -vcodec libx264 -acodec copy -vf fps=fps=30000/1001 -ss 00:05:01.000 -t 02:43:49.000 V:\events\88518.mp4

    The ffmpeg options we use:

    -i is a .mp4

    -vcodec h.264 codec

    -acodec should be “copy” or can be “libvo_aacenc”

    -vf fps=30000/1001 forces an fps of 29.97

    -ss is start time (we would use this to manually cut into parts along with -t)

    -t is duration (we would calculate the duration for each part as the total run time divided by the equal time under 3:15:00 be it two, three, or four parts)

    Thank you a million dollars
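
    The SyntaxError above is what Python 3 reports when it meets a Python 2 print statement, so the linked script is written for Python 2. Purely as an illustration, here is a minimal Python 3 sketch of the splitting workflow described in this question. It is not the linked ffmpeg-split.py: the folder paths are hypothetical placeholders, it assumes ffmpeg and ffprobe are on the PATH, and it reuses the options from the command above. It probes each file's duration, works out how many equal parts keep every part under 3:15:00, and calls ffmpeg once per part.

    import math
    import subprocess
    from pathlib import Path

    SOURCE_DIR = Path(r"V:\ffmpeg")   # hypothetical folder holding the long .MP4s
    OUTPUT_DIR = Path(r"V:\events")   # hypothetical output folder
    MAX_PART = 3 * 3600 + 15 * 60     # 3:15:00 expressed in seconds

    def duration_seconds(video):
        """Ask ffprobe for the container duration, in seconds."""
        out = subprocess.check_output([
            "ffprobe", "-v", "error",
            "-show_entries", "format=duration",
            "-of", "default=noprint_wrappers=1:nokey=1",
            str(video),
        ])
        return float(out.decode().strip())

    def split(video):
        total = duration_seconds(video)
        parts = max(1, math.ceil(total / MAX_PART))  # e.g. an 8-hour file needs 3 parts
        part_len = total / parts                     # equal parts, each under 3:15:00
        for i in range(parts):
            target = OUTPUT_DIR / "{}_part{}.mp4".format(video.stem, i + 1)
            subprocess.check_call([
                "ffmpeg", "-y", "-i", str(video),
                "-ss", str(i * part_len), "-t", str(part_len),
                "-vcodec", "libx264", "-acodec", "copy",
                "-vf", "fps=30000/1001",
                str(target),
            ])

    if __name__ == "__main__":
        for mp4 in sorted(SOURCE_DIR.glob("*.mp4")):
            split(mp4)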

  • How to improve the performance of Audio Queue Services when playing audio?

    11 December 2014, by 谢小进

    I want to play an RTMP stream's H.264/AAC, using FFmpeg to decode and play the video and Audio Queue Services to play the audio. I have got it all working, but there are still some tough issues, for example high memory allocation when playing audio. I have debugged and found that the Audio Queue Services code leads to high memory allocation and then a crash! Does anybody know how to improve the memory behaviour? Here is my code for playing audio with Audio Queue Services.

    //
    //  RTMPAudioPlayer.h
    //

    #import <Foundation/Foundation.h>
    #import <AudioToolbox/AudioToolbox.h>

    @interface AQBuffer : NSObject

    @property (nonatomic) AudioQueueBufferRef buffer;

    @end


    @interface RTMPAudioPlayer : NSObject {
       AudioQueueRef queue;
       AudioStreamBasicDescription dataFormat;
       NSMutableArray *buffers;
       NSMutableArray *reusableBuffers;
    }

    - (id)initWithSampleRate:(int)sampleRate channels:(int)channels bitsPerChannel:(int)bitsPerChannel;
    - (void)start;
    - (void)stop;
    - (void)putData:(NSData *)data;

    @end




    //
    //  RTMPAudioPlayer.m
    //

    #import "RTMPAudioPlayer.h"

    static const int kNumberBuffers = 3;
    static const int kBufferSize    = 0xA000;

    @implementation AQBuffer

    @end


    @implementation RTMPAudioPlayer

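    // C callback invoked by the audio queue each time a buffer finishes playing;
    // it simply forwards to the Objective-C handler below.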
    void AQOutputCallback(void *inUserData, AudioQueueRef inAQ, AudioQueueBufferRef inCompleteAQBuffer) {
       RTMPAudioPlayer *THIS = (__bridge RTMPAudioPlayer *)inUserData;
       [THIS handleAQOutputCallback:inAQ buffer:inCompleteAQBuffer];
    }

    - (void)handleAQOutputCallback:(AudioQueueRef)audioQueue buffer:(AudioQueueBufferRef)buffer {
       for (int i = 0; i < [buffers count]; ++i) {
           if (buffer == [buffers[i] buffer]) {
               [reusableBuffers addObject:buffers[i]];
               break;
           }
       }
    }

    - (id)initWithSampleRate:(int)sampleRate channels:(int)channels bitsPerChannel:(int)bitsPerChannel {
       self = [super init];
       if (self) {
           memset(&dataFormat, 0, sizeof(dataFormat));
           dataFormat.mSampleRate = sampleRate;
           dataFormat.mFormatID = kAudioFormatLinearPCM;
           dataFormat.mFormatFlags = kLinearPCMFormatFlagIsSignedInteger | kLinearPCMFormatFlagIsPacked;
           dataFormat.mBitsPerChannel = bitsPerChannel;
           dataFormat.mChannelsPerFrame = channels;
           dataFormat.mFramesPerPacket = 1;
           dataFormat.mBytesPerFrame = (dataFormat.mBitsPerChannel / 8) * dataFormat.mChannelsPerFrame;
           dataFormat.mBytesPerPacket = dataFormat.mBytesPerFrame * dataFormat.mFramesPerPacket;
       }
       return self;
    }

    - (void)start {
       OSStatus status = AudioQueueNewOutput(&dataFormat, AQOutputCallback, (__bridge void *)self, NULL, NULL, 0, &queue);
       if (status == noErr) {
           buffers = [NSMutableArray array];
           reusableBuffers = [NSMutableArray array];
           for (int i = 0; i < kNumberBuffers; i++) {
               AudioQueueBufferRef buffer;
               status = AudioQueueAllocateBuffer(queue, kBufferSize, &buffer);
               if (status == noErr) {
                   AQBuffer *bufferObj = [[AQBuffer alloc] init];
                   bufferObj.buffer = buffer;
                   [buffers addObject:bufferObj];
                   [reusableBuffers addObject:bufferObj];
               } else {
                   AudioQueueDispose(queue, true);
                   queue = NULL;
                   break;
               }
           }

           AudioQueueStart(queue, NULL);
       } else {
           queue = NULL;
       }
    }

    - (void)stop {
       if (queue) {
           AudioQueueStop(queue, true);
       }
    }

    - (void)putData:(NSData *)data {
       AQBuffer *bufferObj = [reusableBuffers firstObject];
       [reusableBuffers removeObject:bufferObj];
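       // A pooled buffer was just taken from reusableBuffers, but the code below
       // allocates a brand-new audio queue buffer on every call instead of
       // refilling the pooled one.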
       AudioQueueBufferRef buffer;
       OSStatus status = AudioQueueAllocateBuffer(queue, kBufferSize, &buffer);
       if (status == noErr) {
           bufferObj = [[AQBuffer alloc] init];
           bufferObj.buffer = buffer;

           memcpy(bufferObj.buffer->mAudioData, [data bytes], [data length]);
           bufferObj.buffer->mAudioDataByteSize = [data length];

           AudioQueueEnqueueBuffer(queue, bufferObj.buffer, 0, NULL);
       }
    }

    @end

  • ffmpeg: given a video of any length, create timelapse video of 6-8 seconds long

    14 May 2018, by CDub

    I have an application which consumes video of any length (could be a few seconds, could be several minutes) and I want to use ffmpeg to return a "timelapse" version of the input video which will always be between 6 and 8 seconds long.

    I've been fiddling with the following:

    ffmpeg -y -i in.mp4 -r 10 -vf setpts='0.01*PTS' -an out.mp4

    This seems like a good start, but the output length still varies a lot depending on how long in.mp4 is.

    I also don’t fully understand the verbiage in the ffmpeg documentation, so I’m not even sure what arguments to use for setpts, or if setpts is the correct filter I want to use.
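
    In the setpts filter, each frame's presentation timestamp is multiplied by the given factor, so the output duration is roughly the input duration times that factor; a fixed 0.01 therefore produces very different lengths for different inputs. One way to always land in the 6-8 second window is to probe the input duration first and derive the factor from it. Below is a small Python sketch of that idea, not a drop-in tool: the file names are placeholders and it assumes ffmpeg and ffprobe are on the PATH.

    import subprocess

    TARGET_SECONDS = 7.0  # aim for the middle of the 6-8 second window

    def probe_duration(path):
        """Return the input duration in seconds, as reported by ffprobe."""
        out = subprocess.check_output([
            "ffprobe", "-v", "error",
            "-show_entries", "format=duration",
            "-of", "default=noprint_wrappers=1:nokey=1",
            path,
        ])
        return float(out.decode().strip())

    def make_timelapse(src, dst):
        # Output length is roughly input length * factor, so compute the factor
        # from the measured duration instead of hard-coding 0.01.
        factor = TARGET_SECONDS / probe_duration(src)
        subprocess.check_call([
            "ffmpeg", "-y", "-i", src,
            "-r", "10",
            "-vf", "setpts={:.6f}*PTS".format(factor),
            "-an", dst,
        ])

    make_timelapse("in.mp4", "out.mp4")

    Note that for an input already shorter than the target, the factor comes out above 1, which would slow the clip down rather than speed it up.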