
Other articles (88)

  • Updating from version 0.1 to 0.2

    24 June 2013

    Explains the notable changes involved in moving from MediaSPIP version 0.1 to version 0.3. What's new?
    Software dependencies: the latest versions of FFMpeg (>= v1.2.1) are now used; the dependencies for Smush are installed; MediaInfo and FFprobe are installed to retrieve metadata; ffmpeg2theora is no longer used; flvtool2 is no longer installed, in favour of flvtool++; ffmpeg-php, which is no longer maintained, is no longer installed (...)

  • Customising by adding your logo, banner or background image

    5 September 2013

    Some themes support three customisation elements: adding a logo; adding a banner; adding a background image.

  • Writing a news item

    21 June 2013

    Present changes to your MediaSPIP, or news about your projects, using the news section of your MediaSPIP.
    In spipeo, MediaSPIP's default theme, news items are displayed at the bottom of the main page, below the editorials.
    You can customise the form used to create a news item.
    News-item creation form: for a document of type "news item", the fields offered by default are: publication date (customise the publication date) (...)

On other sites (13735)

  • Problem with ffmpeg syntax in a looping batch file. Windows, not Ubuntu

    24 February 2019, by user1738673

    My goal is to loop through my music folders, read the bitrate of each file, and if greater than 192k, convert it to 192k, then delete the original. I have working batch files that will do each task. The problem comes when I try to combine the lines from the working code. Depending on where I put the quotation marks, I get varying errors about filenames. So, I need someone who really knows how to read this code to solve the issue.

    Here are each of the working batch files. They do everything exactly as they should.

    First: loop through the files and get the bitrate

    @echo off & setlocal
    FOR /r %%i in (*.mp3) DO (
               ffprobe -v error -show_entries stream=bit_rate -of default=noprint_wrappers=1:nokey=1 "%%~fi"
               )
    ECHO Completed %~n0
    PAUSE

    Here is the output:

    320000
    320000
    192000
    320000
    320000
    320000
    320000
    320000
    320000
    320000
    320000
    320000
    Completed BrTestLoop6
    Press any key to continue . . .

    Next, the batch file that converts all mp3 files to 192k, then deletes the original:

    @echo off & setlocal
    FOR /r %%i in (*.mp3) DO (
                 ffmpeg -i "%%~fi" -ab 192k -map_metadata 0 -id3v2_version 3 "%%~dpni_192.mp3"
                 if not errorlevel 1 if exist "%%~fi" del /q "%%~fi"
               )
    ECHO Completed %~n0
    PAUSE

    I get perfect conversions at 192k and all of the originals are deleted.

    Now, the combined version:

    @echo off & setlocal

    FOR /R %%i in (*.mp3) DO (
       FOR /F "TOKENS=1 DELIMS==" %%H IN ('ffprobe -v error -show_entries stream=bit_rate -of default=noprint_wrappers=1:nokey=1 "%%i"') DO (
           IF %%H GTR 192000 ffmpeg -i "%%i" -ab 192k -map_metadata 0 -id3v2_version 3 "%%~dpni_192.mp3"
       )
    )

    PAUSE

    I get the following output:

    Argument 'noprint_wrappers' provided as input filename, but 'bit_rate' was already specified.
    Argument 'noprint_wrappers' provided as input filename, but 'bit_rate' was already specified.
    Argument 'noprint_wrappers' provided as input filename, but 'bit_rate' was already specified.
    Argument 'noprint_wrappers' provided as input filename, but 'bit_rate' was already specified.
    Argument 'noprint_wrappers' provided as input filename, but 'bit_rate' was already specified.
    Argument 'noprint_wrappers' provided as input filename, but 'bit_rate' was already specified.
    Argument 'noprint_wrappers' provided as input filename, but 'bit_rate' was already specified.
    Argument 'noprint_wrappers' provided as input filename, but 'bit_rate' was already specified.
    Argument 'noprint_wrappers' provided as input filename, but 'bit_rate' was already specified.
    Argument 'noprint_wrappers' provided as input filename, but 'bit_rate' was already specified.
    Argument 'noprint_wrappers' provided as input filename, but 'bit_rate' was already specified.
    Argument 'noprint_wrappers' provided as input filename, but 'bit_rate' was already specified.
    Press any key to continue . . .

    Can anyone help, please?
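    A likely culprit: FOR /F runs its quoted command through a second cmd.exe, where an unquoted = acts as an argument separator, so the -of default=noprint_wrappers=1:nokey=1 option reaches ffprobe in pieces. As a cross-check, here is a rough Python equivalent of the combined loop that sidesteps cmd quoting by passing arguments as a list; the helper names are mine, and it assumes ffmpeg and ffprobe are on PATH:

```python
import subprocess
from pathlib import Path

THRESHOLD = 192_000  # bits per second

def needs_transcode(bitrate_text, threshold=THRESHOLD):
    """True if ffprobe's bit_rate output exceeds the threshold."""
    try:
        return int(bitrate_text.strip()) > threshold
    except ValueError:
        return False  # "N/A" or empty output: skip the file

def probe_bitrate(path):
    """Read the stream bit rate via ffprobe (argument-list form,
    so no shell re-quoting is involved)."""
    out = subprocess.run(
        ["ffprobe", "-v", "error", "-show_entries", "stream=bit_rate",
         "-of", "default=noprint_wrappers=1:nokey=1", str(path)],
        capture_output=True, text=True, check=True)
    return out.stdout

def transcode_tree(root):
    """Recursively re-encode every mp3 above the threshold to 192k."""
    for mp3 in Path(root).rglob("*.mp3"):
        if needs_transcode(probe_bitrate(mp3)):
            target = mp3.with_name(mp3.stem + "_192.mp3")
            subprocess.run(
                ["ffmpeg", "-i", str(mp3), "-ab", "192k",
                 "-map_metadata", "0", "-id3v2_version", "3", str(target)],
                check=True)
            mp3.unlink()  # delete the original only after a clean exit
```

    If the batch version must be kept, the equivalent fix would be to protect the = characters inside the FOR /F command string (e.g. by quoting that option) so cmd.exe passes it through intact.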

  • Getting realtime output from ffmpeg to be used in progress bar (PyQt4, stdout)

    8 September 2023, by Jason O'Neil

    I've looked at a number of questions but still can't quite figure this out. I'm using PyQt, and am hoping to run ffmpeg -i file.mp4 file.avi and get the output as it streams so I can create a progress bar.

    I've looked at these questions:
Can ffmpeg show a progress bar?
catching stdout in realtime from subprocess

    I'm able to see the output of an rsync command, using this code:

    import subprocess

cmd = "rsync -vaz -P source/ dest/"

p = subprocess.Popen(cmd,
                     shell=True,
                     bufsize=64,
                     stdin=subprocess.PIPE,
                     stderr=subprocess.PIPE,
                     stdout=subprocess.PIPE)

for line in p.stdout:
    # p.stdout yields bytes; decode before printing
    print("OUTPUT>>> " + line.decode().rstrip())

    But when I change the command to ffmpeg -i file.mp4 file.avi I receive no output. I'm guessing this has something to do with stdout / output buffering, but I'm stuck as to how to read the line that looks like

    frame=   51 fps= 27 q=31.0 Lsize=     769kB time=2.04 bitrate=3092.8kbits/s

    Which I could use to figure out progress.
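    Two details likely explain the silence: ffmpeg writes its status line to stderr, not stdout, and it separates updates with carriage returns rather than newlines. Assuming the status format shown above (the seconds-style time= field; newer builds print HH:MM:SS.ss instead), a small parser for a progress bar might look like this:

```python
import re

# Matches the seconds-style "time=" field, e.g. "time=2.04".
TIME_RE = re.compile(r"time=\s*(\d+(?:\.\d+)?)")

def parse_time_seconds(status_line):
    """Extract the elapsed media time from an ffmpeg status line, or None."""
    m = TIME_RE.search(status_line)
    return float(m.group(1)) if m else None

def progress_percent(status_line, total_seconds):
    """Convert a status line into a 0-100 progress value."""
    t = parse_time_seconds(status_line)
    return None if t is None else min(100.0, 100.0 * t / total_seconds)
```

    Reading ffmpeg's output would then mean launching it with stderr=subprocess.PIPE and splitting the stream on \r as well as \n before feeding each chunk to the parser.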

    Can someone show me an example of how to get this info from ffmpeg into python, with or without the use of PyQt, if possible?

    EDIT:
I ended up going with jlp's solution; my code looked like this:

    #!/usr/bin/python
import pexpect

cmd = 'ffmpeg -i file.MTS file.avi'
thread = pexpect.spawn(cmd)
print("started %s" % cmd)
cpl = thread.compile_pattern_list([
    pexpect.EOF,
    r"frame= *\d+",
    r'(.+)'
])
while True:
    i = thread.expect_list(cpl, timeout=None)
    if i == 0:  # EOF
        print("the sub process exited")
        break
    elif i == 1:
        frame_number = thread.match.group(0)
        print(frame_number)
    elif i == 2:
        # unrecognised line: ignore
        pass

    Which gives this output:

    started ffmpeg -i file.MTS file.avi
frame=   13
frame=   31
frame=   48
frame=   64
frame=   80
frame=   97
frame=  115
frame=  133
frame=  152
frame=  170
frame=  188
frame=  205
frame=  220
frame=  226
the sub process exited

    Perfect!

  • Programmatically accessing PTS times in MP4 container

    9 November 2022, by mcandril

    Background

    For a research project, we are recording video data from two cameras and feed a synchronization pulse directly into the microphone ADC every second.

    Problem

    We want to derive, for each camera frame, a timestamp in the clock of the pulse source, so that the camera images can be related temporally. With our current method (see below), we get a frame offset of around 2 frames between the cameras. Unfortunately, inspection of the video shows that we are clearly 6 frames off (at least at one point) between the cameras.
    I assume this is because we are relating the audio and video signals incorrectly (see below).

    Approach I think I need help with

    I read that in the MP4 container, there should be PTS times for video and audio. How do we access those programmatically? Python would be perfect, but if we have to call ffmpeg via system calls, we may do that too ...
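    For what it's worth, ffprobe can dump per-packet presentation timestamps directly, so one option is a thin Python wrapper around a system call (a sketch; it assumes ffprobe is on PATH, and the function names are mine):

```python
import subprocess

def parse_pts_lines(text):
    """Parse ffprobe's one-value-per-line pts_time output into floats."""
    return [float(t) for t in text.split() if t != "N/A"]

def packet_pts_times(video_path, stream="v:0"):
    """Return the PTS (in seconds) of every packet of one stream."""
    out = subprocess.run(
        ["ffprobe", "-v", "error", "-select_streams", stream,
         "-show_entries", "packet=pts_time", "-of", "csv=p=0", video_path],
        capture_output=True, text=True, check=True)
    return parse_pts_lines(out.stdout)

# video vs. audio timestamps of the same file, in the same clock:
# v_times = packet_pts_times("recording.mp4", "v:0")
# a_times = packet_pts_times("recording.mp4", "a:0")
```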

    What we currently fail with

    The original idea was to find video and audio times as

    audio_sample_times = np.arange(N_audiosamples) / audio_sampling_rate
video_frame_times = np.arange(N_videoframes) / video_frame_rate

    then identify audio_pulse_times on the audio_sample_times axis, compute the relative position of each video_time between the audio_pulse_times surrounding it, and take the same relative position between the corresponding source_pulse_times.
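    If the pulse pairs are matched up, that relative-position step is just piecewise-linear interpolation between anchors, which numpy provides directly (a sketch using the variable names from the paragraph above):

```python
import numpy as np

def map_to_source_clock(video_frame_times, audio_pulse_times, source_pulse_times):
    """Map each video frame time into the pulse source's clock by
    piecewise-linear interpolation between matched pulse pairs."""
    return np.interp(video_frame_times, audio_pulse_times, source_pulse_times)
```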

    However, a first indication that this approach is problematic is that, for some videos, N_audiosamples/audio_sampling_rate differs from N_videoframes/video_frame_rate by multiple frames.

    What I have found by now

    OpenCV's cv2.CAP_PROP_POS_MSEC seems to do exactly what we already do, rather than access any PTS ...

    Edit: what I took from the winning answer

    import av
import numpy as np
from tqdm import tqdm

container = av.open(video_path)
signal = []
audio_sample_times = []
video_sample_times = []

for frame in tqdm(container.decode(video=0, audio=0)):
    if isinstance(frame, av.audio.frame.AudioFrame):
        sample_times = (frame.pts + np.arange(frame.samples)) / frame.sample_rate
        audio_sample_times += list(sample_times)
        signal_f_ch0 = frame.to_ndarray().reshape((-1, len(frame.layout.channels))).T[0]
        signal += list(signal_f_ch0)
    elif isinstance(frame, av.video.frame.VideoFrame):
        video_sample_times.append(float(frame.pts*frame.time_base))

signal = np.abs(np.array(signal))
audio_sample_times = np.array(audio_sample_times)
video_sample_times = np.array(video_sample_times)


    Unfortunately, in my particular case, all pts are consecutive and gapless, so the result is the same as with the naive solution ...
    From picture clues, we identified a 10 s section of the videos somewhere in which they desync, but we can't find any trace of that in the data.