Other articles (111)

  • Publishing on MediaSPIP

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, if your installed MediaSPIP is at version 0.2 or later. If necessary, contact your MediaSPIP administrator to find out.

  • From upload to the final video [standalone version]

    31 January 2010

    The path of an audio or video document through SPIPMotion is divided into three distinct stages.
    Upload and retrieval of information about the source video
    First, a SPIP article must be created and the "source" video document attached to it.
    When this document is attached to the article, two actions beyond the normal behaviour are executed: retrieval of the technical information of the file's audio and video streams; generation of a thumbnail: extraction of a (...)

  • Videos

    21 April 2011

    Like "audio" documents, MediaSPIP displays videos wherever possible using the HTML5 video tag.
    One drawback of this tag is that it is not recognized correctly by some browsers (Internet Explorer, to name one) and that each browser natively supports only certain video formats.
    Its main advantage, on the other hand, is native video playback in browsers, which makes it possible to do without Flash and (...)

On other sites (8621)

  • How can I automate the subtitle-adding process in FFmpeg?

    23 September 2022, by Phoeniqz

    The situation is that I have 10 MP4 videos and, for each of them, a folder with the same name as the video. Each folder contains around 30 SRT files that I need to add. I would like to automate this: a script that adds each SRT file to its video and sets the correct handler for the subtitles, so that a subtitle appears as "English" instead of "Subtitle #12" in a movie player. I made a Python script; it's far from perfect, and it does not set the handlers correctly.

    The name of each SRT file is something like "20_Hebrew.srt".

import os
import subprocess

file_dir = r"Path/to/my/files"
sub_dir = os.path.join(file_dir, "Subs")


def add_sub(file, file_name):
    # Build the ffmpeg call as an argument list: os.system with hand-quoted
    # strings breaks on paths containing spaces or quotes, especially on
    # Windows, where single quotes are not stripped by the shell.
    cmd = ["ffmpeg", "-i", file]
    sub_list = []

    no_extension, _ = os.path.splitext(file_name)
    folder = os.path.join(sub_dir, no_extension)

    # Add one extra input per SRT file found in the matching folder.
    for sub_name in sorted(os.listdir(folder)):
        s = os.path.join(folder, sub_name)
        if os.path.isfile(s) and s.endswith(".srt"):
            cmd += ["-i", s]
            sub_list.append(s)

    # Keep all video and audio streams of the main input, then map each
    # subtitle input (inputs 1..N).
    cmd += ["-map", "0:v", "-map", "0:a"]
    for i in range(len(sub_list)):
        cmd += ["-map", str(i + 1)]

    cmd += ["-c:v", "copy", "-c:a", "copy"]

    for i, v in enumerate(sub_list):
        # "20_Hebrew.srt" -> "Hebrew": take the basename, drop the
        # extension, and keep everything after the first underscore.
        base, _ = os.path.splitext(os.path.basename(v))
        sub_lang = base.split("_", 1)[-1]

        # Caveat: the MP4 "language" tag expects an ISO 639-2 code such as
        # "heb", not a name like "Hebrew"; the human-readable label that
        # players display comes from handler_name (the metadata key is
        # "handler_name", not "handler").
        cmd += [f"-c:s:{i}", "mov_text",
                f"-metadata:s:s:{i}", f"language={sub_lang}",
                f"-metadata:s:s:{i}", f"handler_name={sub_lang}",
                f"-metadata:s:s:{i}", f"title={sub_lang}"]

    root, _ = os.path.splitext(file)
    cmd.append(f"{root}_OUTP.mp4")

    subprocess.run(cmd, check=True)


for file_name in os.listdir(file_dir):
    f = os.path.join(file_dir, file_name)
    if os.path.isfile(f) and file_name.endswith(".mp4"):
        add_sub(f, file_name)

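    For a single video with two subtitle files, the intended command then has roughly this shape (the file names here are hypothetical placeholders, and the language values use ISO 639-2 codes):

ffmpeg -i movie.mp4 -i Subs/movie/19_English.srt -i Subs/movie/20_Hebrew.srt \
    -map 0:v -map 0:a -map 1 -map 2 -c:v copy -c:a copy \
    -c:s:0 mov_text -metadata:s:s:0 language=eng -metadata:s:s:0 handler_name=English \
    -c:s:1 mov_text -metadata:s:s:1 language=heb -metadata:s:s:1 handler_name=Hebrew \
    movie_OUTP.mp4
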
  • Interval option when downloading HLS format videos published on Internet sites using ffmpeg

    28 September 2021, by atsu8492

    I'm trying to download an HLS-format video as MP4 using ffmpeg.

    I don't want to burden the server with excessive requests, so I want to add an interval between .ts segment downloads.

    When I actually run ffmpeg, I sometimes get an HTTP 429 error, as shown below.

    In addition, ffmpeg itself does not seem to offer a suitable option for this.

    [https @ 000001bfef570100] Opening 'https://~/ts/9.ts' for reading
[https @ 000001bfeef31140] HTTP error 429 Too Many Requests
[hls @ 000001bfee6be200] keepalive request failed for 'https://~/ts/9.ts' with error: 'Server returned 4XX Client Error, but not one of 40{0,1,3,4}' when opening url, retrying with new connection
[https @ 000001bfef570100] Opening 'https://~/ts/9.ts' for reading
[https @ 000001bfef570100] Opening 'https://~/ts/10.ts' for reading
[https @ 000001bfef570100] Opening 'https://~/ts/11.ts' for reading
[https @ 000001bfef570100] Opening 'https://~/ts/12.ts' for reading
[https @ 000001bfef570100] Opening 'https://~/ts/13.ts' for reading
[https @ 000001bfeef31140] HTTP error 429 Too Many Requests
[hls @ 000001bfee6be200] keepalive request failed for 'https://~/ts/13.ts' with error: 'Server returned 4XX Client Error, but not one of 40{0,1,3,4}' when opening url, retrying with new connection
[https @ 000001bfef570100] Opening 'https://~/ts/13.ts' for reading
…

    Can anyone come up with a good way to do this?

    I'm not good at English, so I may not be able to respond well; sorry. This post was written with Google Translate.

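    One possible workaround (a sketch, not a tested fix): ffmpeg's -re flag reads the input at its native frame rate, which spaces segment requests out to roughly real time. For finer control over the interval, the playlist can be fetched manually and the segments downloaded with a pause in between. The Python sketch below assumes a simple, unencrypted VOD playlist whose segment URIs are listed directly; playlist_url is a placeholder:

import time
import urllib.request
from urllib.parse import urljoin

playlist_url = "https://example.com/video/playlist.m3u8"  # placeholder

# Fetch the playlist and keep the non-comment lines (the segment URIs).
with urllib.request.urlopen(playlist_url) as r:
    playlist = r.read().decode("utf-8")
segments = [ln.strip() for ln in playlist.splitlines()
            if ln.strip() and not ln.startswith("#")]

# Download the segments with a fixed interval; MPEG-TS segments can be
# concatenated byte for byte.
with open("all.ts", "wb") as out:
    for seg in segments:
        with urllib.request.urlopen(urljoin(playlist_url, seg)) as r:
            out.write(r.read())
        time.sleep(1.0)  # the interval between segment requests

# Afterwards, remux the concatenated stream to MP4 without re-encoding:
#   ffmpeg -i all.ts -c copy output.mp4
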
  • ffmpeg: crop a video into two grayscale sub-videos, guarantee monotonic frames, and get timestamps

    13 March 2021, by lurix66

    The need

    Hello, I need to extract two regions of a .h264 video file into two files via the crop filter. The output videos need to be monochrome, with the extension .mp4. The encoding (or format?) should guarantee that video frames are organized monotonically. Finally, I need to get the timestamps for both files (which, I'd bet, are the same timestamps I would get from the input file; see below).

    In the end I would be happy to do everything in one command via an elegant one-liner (via a complex filter, I guess), but I am starting with multiple steps to break the task down into simpler problems.

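    For reference, such a one-liner can be sketched with the split filter and two mapped outputs (crop offsets taken from the step 1 commands below; untested here):

    $ ffmpeg -i inVideo.h264 -filter_complex "[0:v]split[a][b];[a]crop=400:ih:260:0,format=gray[L];[b]crop=400:ih:1280:0,format=gray[R]" -map "[L]" outL.mp4 -map "[R]" outR.mp4
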
    Along this path I run into many difficulties, and despite having searched in many places I don't seem to find solutions that work. Unfortunately I'm no expert in ffmpeg or video conversion, so the more I search and the more details I discover, the less I solve.

    Below are some of my attempts with the following options:

    • -filter:v "crop=400:ih:260:0,format=gray" to do the crop and the monochrome conversion
    • -vf showinfo, possibly combined with -vsync 0 or -copyts, to get the timestamps via stderr redirection &> filename
    • -c:v mjpeg to force monotonic frames (are there other ways?)

    1. Cropping each region and obtaining monochrome videos

    $ ffmpeg -y -hide_banner -i inVideo.h264 -filter:v "crop=400:ih:260:0,format=gray" outL.mp4
$ ffmpeg -y -hide_banner -i inVideo.h264 -filter:v "crop=400:ih:1280:0,format=gray" outR.mp4

    The issue here is that in the output files the frames are not organized monotonically (I don't understand why; how would that make sense in any video format? I can't tell whether it comes from the input file).

    EDIT. Maybe it is not the frames but the packets, as returned by av's demux() method, that are not monotonic (see "instructions to reproduce..." below).

    I got the advice to run ffmpeg -i outL.mp4 outL.mjpeg afterwards, but this produces two videos that look very pixelated (at least when played with ffplay), despite being, surprisingly, 4x bigger than the input. Needless to say, I need both monotonic frames and lossless conversion.

    EDIT. I acknowledge the advice to specify -q:v 1; this fixes the pixelation but produces an even bigger file, 12x in size. Is it necessary? (see "instructions to reproduce..." below)

    2. Getting the timestamps

    I found a piece of advice about this, but I don't want to generate hundreds of image files, so I tried the following:

    $ ffmpeg -y -hide_banner -i outL.mp4 -vf showinfo -vsync 0 &>tsL.txt
$ ffmpeg -y -hide_banner -i outR.mp4 -vf showinfo -vsync 0 &>tsR.txt

    The issue here is that I don't get any output, because ffmpeg complains that it needs an output file.

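    A common way to satisfy that requirement without writing a file is the null muxer; showinfo logs to stderr, so only the redirection is needed:

    $ ffmpeg -hide_banner -i outL.mp4 -vf showinfo -vsync 0 -f null - 2>tsL.txt
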
    The need to produce an output file, and the worry that the timestamps could have been lost in the previous conversions, leads me back to a first attempt at a one-liner, where I am also testing the -copyts option and forcing the encoding with -c:v mjpeg, as per the advice mentioned above (I don't know whether it is in the right position, though):

    ffmpeg -y -hide_banner -i testTex2.h264 -copyts -filter:v "crop=400:ih:1280:0,format=gray" -vf showinfo -c:v mjpeg eyeL.mp4 &>tsL.txt

    This does not work because, surprisingly, the output .mp4 I get is the same as the input. If instead I put the -vf showinfo option just before the stderr redirection, I get no redirected output:

    ffmpeg -y -hide_banner -i testTex2.h264 -copyts -filter:v "crop=400:ih:260:0,format=gray" -c:v mjpeg outR.mp4 -vf showinfo dummy.mp4 &>tsR.txt

    In this case I get the desired timestamps output (too much of it: I will need some way to grab only the pts and pts_time fields), but I have to produce a big dummy file. Worst of all, the mjpeg encoding again produces a low-resolution, very pixelated video.

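    As for grabbing only the pts and pts_time fields, a small sketch (assuming the showinfo log was saved to tsL.txt):

import re

# showinfo lines look like:
#   [Parsed_showinfo_0 @ ...] n:   0 pts:      0 pts_time:0       pos: ...
pattern = re.compile(r"pts:\s*(-?\d+)\s+pts_time:(-?[\d.]+)")

with open("tsL.txt") as f:
    for line in f:
        m = pattern.search(line)
        if m:
            print(int(m.group(1)), float(m.group(2)))
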
    I admit that the logic of how to place the options and the output files on the command line is obscure to me. The possible combinations are many, and the more options I try, the more complicated it gets, without getting much closer to the solution.

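    One detail that may explain the behaviour above: -vf is an alias for -filter:v, so when both are given for the same output the second silently overrides the first (which is why the crop disappeared). The filters can instead be chained in a single graph, e.g. (a sketch, untested):

    $ ffmpeg -y -hide_banner -i testTex2.h264 -copyts -filter:v "crop=400:ih:1280:0,format=gray,showinfo" -c:v mjpeg eyeL.mp4 2>tsL.txt
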
    3. [EDIT] Instructions to reproduce this

    • get a .h264 video
    • turn it into .mp4 with the command $ ffmpeg -i inVideo.h264 out.mp4
    • run the following Python cell in a Jupyter notebook
    • see that the diffs of the packet timestamps are both greater than and less than zero

%matplotlib inline
import av
import numpy as np
import matplotlib.pyplot as mpl

fname, ext = "outL.direct", "mp4"

# Packet timestamps, in the order the packets are stored in the file.
cont = av.open(f"{fname}.{ext}")
pk_pts = np.array([p.pts for p in cont.demux(video=0) if p.pts is not None])

# Frame timestamps, in the order the decoder outputs the frames
# (the container is reopened because demux() consumed the stream).
cont = av.open(f"{fname}.{ext}")
fm_pts = np.array([f.pts for f in cont.decode(video=0) if f.pts is not None])

print(pk_pts.shape, fm_pts.shape)

# Plot successive differences: negative values reveal non-monotonic pts.
mpl.subplot(211)
mpl.plot(np.diff(pk_pts))

mpl.subplot(212)
mpl.plot(np.diff(fm_pts))

    • finally, also create the mjpeg-encoded files in various ways and check packet monotonicity with the same script (note the file sizes too)

    $ ffmpeg -i inVideo.h264 out.mjpeg
$ ffmpeg -i inVideo.h264 -c:v mjpeg out.c_mjpeg.mp4
$ ffmpeg -i inVideo.h264 -c:v mjpeg -q:v 1 out.c_mjpeg_q1.mp4
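
    A quicker check than the notebook, using ffprobe (which ships with ffmpeg), dumps the packet timestamps directly:

    $ ffprobe -v error -select_streams v:0 -show_entries packet=pts -of csv=p=0 out.mp4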

    Finally, the question

    What is a working way, or the right way, to do this?

    Any hints will be appreciated, even about single steps and how to combine them correctly. Also, I am not limited to the command line: I could try a more programmatic solution in Python (in a Jupyter notebook) if someone points me in that direction.