Advanced search

Media (91)

Other articles (104)

  • Customising categories

    21 June 2013, by

    Category creation form
    For those who know SPIP well, a category can be thought of as a rubrique (section).
    For a document of type "category", the fields offered by default are: Text
    This form can be modified under:
    Administration > Configuration des masques de formulaire.
    For a document of type "media", the fields not displayed by default are: Short description
    It is also in this configuration section that you can specify the (...)

  • Libraries and binaries specific to video and audio processing

    31 January 2010, by

    The following software and libraries are used by SPIPmotion in one way or another.
    Required binaries FFMpeg: the main encoder, used to transcode almost every type of video and audio file into formats readable on the Internet. See this tutorial for its installation; Oggz-tools: tools for inspecting ogg files; Mediainfo: retrieves information from most video and audio formats;
    Additional, optional binaries flvtool2: (...)

  • HTML5 audio and video support

    10 April 2011

    MediaSPIP uses the HTML5 video and audio tags to play multimedia documents, taking advantage of the latest W3C innovations supported by modern browsers.
    For older browsers, the Flowplayer Flash player is used instead.
    The HTML5 player used was created specifically for MediaSPIP: its appearance is fully customisable to match a chosen theme.
    These technologies make it possible to deliver video and sound both to conventional computers (...)

On other sites (13243)

  • How to avoid stopping or record all dynamic videos in the process of capturing screen video with ffmpeg from a python program?

    3 December 2020, by fengnix

    I have many robotframework test cases, and in the first case an ffmpeg command like the following is invoked to record the whole running process:

    


    ffmpeg -framerate 30 -f gdigrab -i desktop -c:v libx264rgb -crf 0 -preset ultrafast output.mkv


    


    Whenever I run all the cases and manually run the above command from an additional command console, the recorded video always looks fine: everything on the screen appears to be captured correctly.

    


    However, once I execute the same command inside the first test case by calling the following code:

    


    p=subprocess.Popen(cmd, shell=True, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE)


    


    and then, in the final test case, the recording is stopped by calling the following code to tell ffmpeg that we want to stop the recording:

    


    p.stdin.write(bytes("q",'UTF-8'))  


    


    the resulting video only contains correct content for the "start" and the "end" of the whole process; everything in between no longer changes and looks like a static image, which means none of the dynamic effects on the screen are captured.

    


    Could anyone be so kind as to let me know what the problem is and how to solve it?
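    For reference, here is a minimal self-contained sketch of the start/stop flow described above; the explicit flush, close and wait at the end are assumptions on top of the original snippets, not part of the actual script:

    import subprocess

    # Start the screen recording described above (Windows gdigrab capture).
    cmd = ("ffmpeg -framerate 30 -f gdigrab -i desktop "
           "-c:v libx264rgb -crf 0 -preset ultrafast output.mkv")
    p = subprocess.Popen(cmd, shell=True, stdin=subprocess.PIPE,
                         stdout=subprocess.PIPE, stderr=subprocess.PIPE)

    # ... the robotframework test cases run here ...

    # Ask ffmpeg to stop: send 'q' on stdin, make sure it is actually
    # delivered, then wait for ffmpeg to finish writing the file.
    p.stdin.write(b"q")
    p.stdin.flush()
    p.stdin.close()
    p.wait(timeout=30)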

    


  • ffmpeg h264 to mp4 conversion from multiple files fails to preserve in-sequence resolution changes

    1 July 2023, by LB2

    This will be a long post, so I thank you in advance for your patience in digesting it.

    


    Context

    


    I have different sources that generate visual content, all of which eventually needs to be composed into a single .mp4 file. The sources are:

    


      

    • H.264 video (encoded using CUDA NVENC).
      • This video can have in-sequence resolution changes, which are natively supported by the H.264 codec.
      • I.e. the stream may start at HxW resolution and change mid-stream to WxH. This happens because the video comes from a camera device that can be rotated and flipped between portrait and landscape (think of a phone camera recording video while the phone is flipped from one orientation to the other, with the recording adjusting its encoding for proper scaling and orientation).
      • When rotation occurs, most of the time H & W are just swapped, but they may actually be entirely new values; e.g. in some cases 1024x768 will switch to 768x1024, but in other cases 1024x768 may become 460x640 (this depends on source camera capabilities that I have no control over).
    • JPEGs. A series (a.k.a. batch) of still JPEGs.
      • The native resolution of the JPEGs may or may not match the video resolution in the earlier bullet.
      • JPEGs can also reflect rotation of the device, so some JPEGs in a sequence may start at HxW resolution and then, from some arbitrary JPEG onwards, flip and become WxH. As with the video, the dimensions are most likely to be just a swap, but may become altogether different values.
    • There can be any number of batches and intermixes between video and still sources, e.g. V1 + S2 + S3 + V4 + V5 + V6 + S7 + ...
    • There can be any number of resolution changes between or within batches, e.g. V1;r1 + V1;r2 + S2;r1 + S2;r3 + V3;r2 + ... (where the first subscript is the batch sequence and rX is the resolution)


    


    Problem

    


    I'm attempting to do this conversion with ffmpeg and can't quite get it right. The problem is that I can't get the output to respect the source resolutions; it just squishes everything into a single output resolution.

    


    Example of squishing problem

    


    As already mentioned above, H.264 supports in-sequence (mid-stream) resolution changes, so it should be possible to convert and concatenate all the content and have the final output contain in-sequence resolution changes.

    


    Since MP4 is just a container, I'm assuming that MP4 files can do so as well?

    


    Attempts so far

    


    The approach thus far has been to take each batch of content (i.e. a .h264 video or a set of JPEGs) and individually convert it to .mp4. Video is converted using -c copy to ensure it doesn't get transcoded, e.g.:

    


    ffmpeg -hide_banner -i videoX.h264 -c copy -vsync vfr -video_track_timescale 90000 intermediateX.mp4


    


    ... and JPEGs are converted using -f concat

    


    ffmpeg -hide_banner -f concat -safe 0 -i jpegsX.txt -vf 'scale=trunc(iw/2)*2:trunc(ih/2)*2' -r 30 -vsync vfr -video_track_timescale 90000 intermediateX.mp4
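    For context, the jpegsX.txt referenced here is a standard concat-demuxer list file; it would look roughly like the following (the file names and durations are illustrative, not taken from the actual batches, and the duration lines are optional):

    file '/path/to/batch2/img0001.jpg'
    duration 0.033333
    file '/path/to/batch2/img0002.jpg'
    duration 0.033333

    (final.txt used further below follows the same format, listing the intermediateX.mp4 files in playback order.)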


    


    ... and then all the intermediates concatenated together

    


    ffmpeg -hide_banner -f concat -safe 0 -i final.txt -pix_fmt yuv420p -c copy -vsync vfr -video_track_timescale 90000 -metadata title='yabadabadoo' -fflags +bitexact -flags:v +bitexact -flags:a +bitexact final.mp4


    


    This concatenates, but if resolution changes at some mid point, then that part of content comes up squished/stretched in final output.

    


    Use h.264 as intermediates

    


    All the intermediates are produced the same way, except as .h264. All intermediate .h264 files are cat'ed together, like cat intermediate1.h264 intermediate2.h264 > final.h264.

    


    If the final output is final.mp4, the output is incorrect and the images are squished/stretched.

    


    If it is final.h264, then at least it seems to respect the aspect ratios of the input and manages to produce correct-looking output. However, examining it with ffprobe shows weird SAR ratios: the first frames are width=1440 height=3040 sample_aspect_ratio=1:1, but later the SAR takes on values like width=176 height=340 sample_aspect_ratio=1545:176, which I suspect isn't right, since all the original input had "square pixels". I think the reason is that the stream was composed out of different-sized JPEGs, and the concat filter somehow caused ffmpeg to manipulate the SAR "to get things to fit".

    


    But at least it renders respectably, though it is hard to tell with ffplay whether a player would actually see the resolution change and resize accordingly.

    


    And that's .h264; I need the final output to be .mp4.

    


    Use -vf filter

    


    I tried enforcing the SAR using -vf 'scale=trunc(iw/2)*2:trunc(ih/2)*2,setsar=1:1' (the scaling is there to deal with odd-dimension JPEGs), but it still produces frames with SAR values like those stated earlier.

    


    Other thoughts

    


    For now, while I haven't given up, I'm trying to avoid having my code examine each individual JPEG in a batch to see whether there are differing sizes, split the batch so that each sub-batch is homogeneous resolution-wise, and generate an individual intermediate .h264 for each so that the SAR stays sane, keeping fingers crossed that the final output works correctly. It would be very slow, unfortunately.
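    For what it's worth, a rough sketch of that splitting idea (the directory layout and the use of Pillow to read image dimensions are assumptions, not part of the actual pipeline):

    from itertools import groupby
    from pathlib import Path
    from PIL import Image  # Pillow, assumed available; only used to read dimensions

    def split_batch_by_resolution(jpeg_paths):
        """Split a JPEG batch into consecutive sub-batches of identical resolution."""
        def size_of(path):
            with Image.open(path) as img:
                return img.size  # (width, height)
        # groupby only merges *consecutive* files with the same size,
        # so the original ordering of the batch is preserved.
        return [list(files) for _, files in groupby(jpeg_paths, key=size_of)]

    # each sub-batch would then get its own intermediate .h264/.mp4
    sub_batches = split_batch_by_resolution(sorted(Path("batch2").glob("*.jpg")))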

    


    Question

    


    What's the right way to deal with all of that using ffmpeg, and how do I concatenate multiple varying-resolution sources into a final mp4 so that it respects resolution changes mid-stream?

    


  • FFMPEG output to the Exact Folder using Python

    6 August 2021, by Ande Caleb

    I'm working on a simple script that uses ffmpeg to reduce the size of a video and add a watermark to it, then move the final output into the compressed folder... this is my script.

    


    The compression works and the watermark works, but the issue I'm having is that the final output is placed in the root folder and not in the compressed folder... below are my folder structure and my script.

    


    Folder Structure

    


    rootfolder
    |--media
    |  |--vids
    |  |  |--(video files: mov, mp4, ...)
    |  |--compressed
    |--encode.py


    


    Script (encode.py) file

    


    import os
    import subprocess
    from pathlib import Path

    dir_path = os.path.dirname(os.path.realpath(__file__))
    vidfile = dir_path + '/media/vids/mv1.mov'
    watermark = dir_path + '/media/watermark.png'
    compressed = str(Path.cwd() / '/media/compressed/')

    # 1. compress the video and store it in the media out folder
    media_out = str(dir_path + "/compressed_mv1s.mov").replace(" ", "\\ ")
    subprocess.run("ffmpeg -i " + vidfile.replace(" ", "\\ ") +
                   " -vcodec libx264 -crf 22 " + media_out, shell=True)

    # 2. add watermark to the video and move it to the compressed folder
    media_watermarked = str(compressed + '/w_mv1.mov').replace(" ", "\\ ")
    subprocess.run("ffmpeg -i " + media_out + " -i " + watermark +
                   " -filter_complex \"overlay=main_w-(overlay_w+10) : main_h-(10+overlay_h)\" " + media_watermarked, shell=True)

    


    In summary, compressing the video works and adding the watermark works, but the last line fails: the error comes from the media_watermarked variable. I'm not sure what I'm doing wrong, but it isn't resolving the folder correctly and moving the final output into that folder... this is the error I get:

    


    [screenshot of the error message]

    


    Also, how can I run the two ffmpeg commands concurrently, compressing the video and adding the watermark at once instead of doing it separately?
    Thanks.
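    For the last point, a single-pass variant can be sketched roughly as follows; the paths are illustrative and the flags are an assumption based on the two commands above, not tested against this project:

    import subprocess

    # One ffmpeg run that both compresses (libx264, CRF 22) and overlays the
    # watermark, writing directly into media/compressed/. Passing the command
    # as a list avoids the manual space-escaping used above.
    cmd = [
        "ffmpeg", "-i", "media/vids/mv1.mov", "-i", "media/watermark.png",
        "-filter_complex", "overlay=main_w-(overlay_w+10):main_h-(10+overlay_h)",
        "-vcodec", "libx264", "-crf", "22",
        "media/compressed/w_mv1.mov",
    ]
    subprocess.run(cmd, check=True)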