Advanced search

Media (91)

Other articles (67)

  • General document management

    13 May 2011, by

    MédiaSPIP never modifies the original document that is put online.
    For each document put online it performs two successive operations: creating an additional version that can easily be viewed online, while keeping the original available for download in case the original document cannot be read in a web browser; and retrieving the metadata of the original document to describe the file textually.
    The tables below explain what MédiaSPIP can do (...)

  • Sites built with MediaSPIP

    2 May 2011, by

    This page presents some of the sites running MediaSPIP.
    You can of course add your own using the form at the bottom of the page.

  • HTML5 audio and video support

    13 April 2011, by

    MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
    The MediaSPIP player used has been created specifically for MediaSPIP and can be easily adapted to fit in with a specific theme.
    For older browsers the Flowplayer flash fallback is used.
    MediaSPIP allows for media playback on major mobile platforms with the above (...)

On other sites (5446)

  • How to catch an FFmpeg exception with subprocess?

    13 May 2022, by Faindirnomainzein

    I'm doing some work on subtitles, and some videos have 1 subtitle track while others have 2. For those that have 2, I use the 2nd one (index = 1). I'm trying to automate this with Python.

    For files with 2 subtitle tracks, I use:

    -vf "subtitles='file.mkv':si=1"

    and for those with 1 subtitle track, I use:

    -vf "subtitles='file.mkv':si=0"

    I'm using this code:

    for mkv in all_mkvs:
        try:
            subprocess.call(f'ffmpeg -i ... -vf "subtitles=\'file.mkv\':si=1" ...')
        except:
            subprocess.call(f'ffmpeg -i ... -vf "subtitles=\'file.mkv\':si=0" ...')

    But it doesn't seem to care about the exception: it just ends the loop whenever it meets a file with only 1 subtitle track and gives me the error anyway.

    Press [q] to stop, [?] for help
    [Parsed_subtitles_0 @ 0000011af4912880] Shaper: FriBidi 1.0.10 (SIMPLE) HarfBuzz-ng 2.7.2 (COMPLEX)
    [Parsed_subtitles_0 @ 0000011af4912880] Unable to locate subtitle stream in ./test/349.mkv
    [AVFilterGraph @ 0000011af623d880] Error initializing filter 'subtitles' with args './test/349.mkv:si=1'
    Error reinitializing filters!
    Failed to inject frame into filter network: Operation not permitted
    Error while processing the decoded data for stream #0:3
    Conversion failed!

    As you can see in the error message above, it says: "Error initializing filter .... si=1" because it should be si=0 for that specific file, which is why I added the exception handling, but it doesn't seem to care about it.

    So I'm trying to catch that error and say "ok, in that case, let's do si=0 instead".
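
    One likely reason the except branch never fires is that subprocess.call does not raise an exception when ffmpeg exits with an error; it only returns the non-zero exit code. Below is a minimal sketch of the fallback idea, assuming subprocess.run with check=True; the output file name and the elided options are placeholders, not taken from the question:

    import subprocess

    for mkv in all_mkvs:
        # placeholder command: "out.mkv" and the remaining options are assumptions
        cmd = 'ffmpeg -i "{0}" -vf "subtitles=\'{0}\':si={1}" out.mkv'
        try:
            # check=True turns a non-zero ffmpeg exit status into CalledProcessError
            subprocess.run(cmd.format(mkv, 1), shell=True, check=True,
                           capture_output=True)
        except subprocess.CalledProcessError:
            # the file presumably has only one subtitle track, so retry with si=0
            subprocess.run(cmd.format(mkv, 0), shell=True, check=True,
                           capture_output=True)

    Alternatively, the number of subtitle streams could be probed up front (for example with ffprobe -v error -select_streams s -show_entries stream=index -of csv=p=0 file.mkv) and the si index chosen from that count, so the first ffmpeg run never fails.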

    


  • Meaning of Timestamp Retrieved by AudioQueueGetCurrentTime() in AudioQueue Callback

    30 July 2024, by White0930

    I'm working on an audio project and have a question regarding the meaning of the timestamp retrieved by AudioQueueGetCurrentTime().

    


    According to the Apple Developer Documentation, the following calculation gives the audio time being played (since AudioQueueStart):

    - (Float64) GetCurrentTime {
        AudioTimeStamp c;
        AudioQueueGetCurrentTime(playState.queue, NULL, &c, NULL);
        return c.mSampleTime / _av->audio.sample_rate;
    }

    However, in a project I'm working on, I noticed the following code inside the fillAudioBuffer callback function of AudioQueue:

static void fillAudioBuffer(AudioQueueRef queue, AudioQueueBufferRef buffer){
    
    int lengthCopied = INT32_MAX;
    int dts= 0;
    int isDone = 0;

    buffer->mAudioDataByteSize = 0;
    buffer->mPacketDescriptionCount = 0;
    
    OSStatus err = 0;
    AudioTimeStamp bufferStartTime;

    AudioQueueGetCurrentTime(queue, NULL, &bufferStartTime, NULL);
    

    
    while(buffer->mPacketDescriptionCount < numPacketsToRead && lengthCopied > 0){
        if (buffer->mAudioDataByteSize) {
            break;
        }
        
        lengthCopied = getNextAudio(_av,buffer->mAudioDataBytesCapacity-buffer->mAudioDataByteSize, (uint8_t*)buffer->mAudioData+buffer->mAudioDataByteSize,&dts,&isDone);
        if(!lengthCopied || isDone) break;
      
        if(aqStartDts < 0) aqStartDts = dts;
        if (dts>0) currentDts = dts;
        if(buffer->mPacketDescriptionCount ==0){
            bufferStartTime.mFlags = kAudioTimeStampSampleTimeValid;
            bufferStartTime.mSampleTime = (Float64)(dts-aqStartDts) * _av->audio.frame_size;
            
            if (bufferStartTime.mSampleTime <0 ) bufferStartTime.mSampleTime = 0;
            PMSG2("AQHandler.m fillAudioBuffer: DTS for %x: %lf time base: %lf StartDTS: %d\n", (unsigned int)buffer, bufferStartTime.mSampleTime, _av->audio.time_base, aqStartDts);
            
        }
        buffer->mPacketDescriptions[buffer->mPacketDescriptionCount].mStartOffset = buffer->mAudioDataByteSize;
        buffer->mPacketDescriptions[buffer->mPacketDescriptionCount].mDataByteSize = lengthCopied;
        buffer->mPacketDescriptions[buffer->mPacketDescriptionCount].mVariableFramesInPacket = _av->audio.frame_size;
        
        buffer->mPacketDescriptionCount++;
        buffer->mAudioDataByteSize += lengthCopied;
        
    }
    
#ifdef DEBUG
    int audioBufferCount, audioBufferTotal,  videoBufferCount, videoBufferTotal;
    bufferCheck(_av,&videoBufferCount, &videoBufferTotal, &audioBufferCount, &audioBufferTotal);
    
    PMSG2("AQHandler.m fillAudioBuffer: Video Buffer: %d/%d Audio Buffer: %d/%d\n", videoBufferCount, videoBufferTotal, audioBufferCount, audioBufferTotal);
    
    PMSG2("AQHandler.m fillAudioBuffer: Bytes copied for buffer 0x%x: %d\n",(unsigned int)buffer, (int)buffer->mAudioDataByteSize );
#endif  
    if(buffer->mAudioDataByteSize){
        
        if(err=AudioQueueEnqueueBufferWithParameters(queue, buffer, 0, NULL, 0, 0, 0, NULL, &bufferStartTime, NULL))
        {
#ifdef DEBUG
            char sErr[10];

            PMSG2(@"AQHandler.m fillAudioBuffer: Could not enqueue buffer 0x%x: %d %s.", buffer, err, FormatError(sErr, err));
#endif
        }
    }

}


    


    Based on the documentation for AudioQueueEnqueueBufferWithParameters and the variable naming used by the author, bufferStartTime seems to represent the time when the newly filled audio buffer will start playing, i.e., the time when all the audio currently in the queue has finished playing and the new audio starts. This interpretation suggests that bufferStartTime is not the same as the time of the audio currently being played.

    I have browsed through many related questions, but I still have some doubts. I'm currently fixing an audio-video synchronization issue in my project, and there isn't much detailed information in the Apple Developer Documentation (or maybe my search skills are lacking).

    Can someone clarify the exact meaning of the timestamp returned by AudioQueueGetCurrentTime() in this context? Is it the time when the current audio will finish playing, or is it the time when the new audio will start playing? Any additional resources or documentation that explain this in detail would also be appreciated.

  • Is it possible to catch ffmpeg errors with Python?

    4 April 2019, by Elros Romeo

    Hi, I'm trying to make a video converter for Django with Python. I forked the django-ffmpeg module, which does almost everything I want, except that it doesn't catch the error if the conversion fails.

    Basically, the module passes the ffmpeg command for the conversion to the command-line interface, like this:

    /usr/bin/ffmpeg -hide_banner -nostats -i %(input_file)s -target film-dvd %(output_file)s

    The module uses this method to pass the ffmpeg command to the CLI and get the output:

    def _cli(self, cmd, without_output=False):
       print 'cli'
       if os.name == 'posix':
           import commands
           return commands.getoutput(cmd)
       else:
           import subprocess
           if without_output:
               DEVNULL = open(os.devnull, 'wb')
               subprocess.Popen(cmd, stdout=DEVNULL, stderr=DEVNULL)
           else:
               p = subprocess.Popen(cmd, stdout=subprocess.PIPE)
               return p.stdout.read()

    But if, for example, you upload a corrupted video file, it only returns the ffmpeg message printed on the CLI; nothing is triggered to indicate that something failed.

    This is a sample of the ffmpeg output when the conversion failed:

    [mov,mp4,m4a,3gp,3g2,mj2 @ 0x237d500] Format mov,mp4,m4a,3gp,3g2,mj2 detected only with low score of 1, misdetection possible!
    [mov,mp4,m4a,3gp,3g2,mj2 @ 0x237d500] moov atom not found
    /home/user/PycharmProjects/videotest/media/videos/orig/270f412927f3405aba041265725cdf6b.mp4: Invalid data found when processing input

    I was wondering if there's any way to turn that into an exception, and how, so I can handle it easily.

    The only option that came to my mind is to search for "Invalid data found when processing input" in the CLI output string, but I'm not sure this is the best approach. Can anyone help me and guide me with this, please?
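
    Not part of the original question, but one way to get an exception without scanning the output text: ffmpeg exits with a non-zero status when a conversion fails, so the return code can be checked instead. Below is a rough sketch, not a drop-in patch for django-ffmpeg; the convert_or_raise name is hypothetical, and shell=True mirrors how commands.getoutput runs the command string:

    import subprocess

    def convert_or_raise(cmd):
        # run ffmpeg and capture its output; ffmpeg writes its diagnostics to stderr
        p = subprocess.Popen(cmd, shell=True,
                             stdout=subprocess.PIPE,
                             stderr=subprocess.PIPE)
        out, err = p.communicate()
        # a failed conversion (e.g. "moov atom not found") yields a non-zero return code
        if p.returncode != 0:
            raise RuntimeError('ffmpeg failed with code %d: %s' % (p.returncode, err))
        return out

    A caught RuntimeError (or a custom exception) can then be mapped to whatever failure state the conversion model needs.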