
Other articles (35)

  • HTML5 audio and video support

    10 April 2011

    MediaSPIP uses the HTML5 video and audio tags to play multimedia documents, taking advantage of the latest W3C innovations supported by modern browsers.
    For older browsers, the Flowplayer Flash player is used instead.
    The HTML5 player used was created specifically for MediaSPIP: its appearance is fully customizable to match a chosen theme.
    These technologies make it possible to deliver video and sound both on conventional computers (...)

  • Configurable image and logo sizes

    9 February 2011

    In many places on the site, logos and images are resized to fit the spots defined by the themes. Since all of these sizes can change from one theme to another, they can be defined directly in the theme, which saves the user from having to configure them manually after changing the site's appearance.
    These image sizes are also available in the MediaSPIP Core specific configuration. The maximum size of the site logo in pixels, one can (...)

  • Farm management

    2 March 2010

    The farm as a whole is managed by "super admins".
    Some settings can be adjusted in order to regulate the needs of the different channels.
    Initially it uses the "Gestion de mutualisation" plugin

On other sites (5469)

  • Transcode from a live m3u8 using -ss

    20 August 2015, by pgm

    I’m trying to create a VOD hls clip from a live hls stream on adobe media server using ffmpeg and nodejs.

    An example of the command I’m using looks like this:

    ffmpeg -report -analyzeduration 999999999 -probesize 999999999 -ss 50 -i http://live.m3u8 -y -r 29.97 -threads 0 -hls_list_size 0 -c:v copy -c:a copy streamoutput.m3u8

    The problem is the -ss param (start time) is calculating the start time from the live point on the stream, rather than from the first ’ts’ fragment. I’d like to be able to encode inside of a "DVR window," meaning seeking from the beginning of the stream, not from the live point of the stream.

    Example: I use the param -ss 50 and it won’t encode for 50 seconds until the live stream catches up, outputting this in the ffmpeg log:

    [h264 @ 0000000002beae00] non-existing PPS 0 referenced
    [h264 @ 0000000002beae00] non-existing PPS 0 referenced
    [h264 @ 0000000002beae00] decode_slice_header error

    Once the live stream catches up to the 50 second delay it begins encoding. It works this way when I use -ss as either an input parameter or output parameter.

    Is there a way to accomplish this? I’ve noticed that if I leave -ss completely out of the command, it will start at the beginning of the stream, but as soon as it’s there, even as a 0, it will start at the "live point."

    Any help is much appreciated!
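
    One avenue worth exploring, offered as a hedged sketch rather than a confirmed answer: ffmpeg's HLS demuxer exposes a live_start_index input option, and setting it to 0 asks the demuxer to start from the first segment listed in the playlist (the "DVR window") instead of the live edge, so -ss then seeks relative to that point. Whether the option is available depends on the ffmpeg build. A minimal Python wrapper around the command from the question:

    import subprocess

    # Hedged sketch: -live_start_index 0 (HLS demuxer option, availability depends on
    # the ffmpeg build) starts reading at the oldest segment in the playlist instead
    # of the live point; -ss then seeks from there. The URL and output name are the
    # placeholders used in the question.
    subprocess.run([
        "ffmpeg",
        "-live_start_index", "0",
        "-ss", "50",
        "-i", "http://live.m3u8",
        "-c:v", "copy", "-c:a", "copy",
        "-hls_list_size", "0",
        "-y", "streamoutput.m3u8",
    ], check=True)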

  • ffmpeg hundreds of videos-to-images as shell script in Keyboard Maestro

    28 July 2023, by Leopardi

    First I would like to say that I use macOS, I am new to ffmpeg and Keyboard Maestro, and I have little to no experience in coding. But I think I did learn a bit in the past few weeks trying to solve the problem I will ask now. Believe me, I tried searching for answers online and did a lot of trial and error before coming here to ask my question. So here is what I am trying to do:

    I have a folder (/Users/Documents/clips) with 450 short AVI clips (foto1.avi, foto2.avi, ..., foto450.avi) from which I would like to extract all frames. I know how to extract all frames from one clip and copy them to an existing directory:

    ffmpeg -i /Users/Documents/clips/foto1.avi /Users/Documents/Frames/Foto1/frame%06d.png

    With this command all frames from the clip Foto1.avi (frame000001.png, frame000002.png, ..., frame000132.png) are copied to the folder /Foto1.

    With Keyboard Maestro I created 450 folders named after the 450 .avi clips.

    Is there a way to write an ffmpeg command that will extract the frames of all 450 .avi clips to the folder with the same name as the file?

    I was hoping there would be a way of using variables in this kind of way:

    ffmpeg -i /Users/Documents/clips/foto%.avi /Users/Documents/Frames/Foto%/frame%06d.png

    I really appreciate any help. Thank you!

    I tried searching for answers in forums and the ffmpeg documentation, but since I have almost no knowledge of coding it is sometimes hard for me to decipher and understand the meaning of the code.
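    A hedged sketch of one way to do this, since ffmpeg by itself does not expand that kind of % placeholder across input files; a small loop (written here in Python, though a shell for loop would work the same way) runs the single-clip command once per file. The paths and the Foto1-style folder names are assumptions taken from the question:

    import subprocess
    from pathlib import Path

    clips_dir = Path("/Users/Documents/clips")    # folder with foto1.avi ... foto450.avi
    frames_dir = Path("/Users/Documents/Frames")  # parent of the Foto1, Foto2, ... folders

    for clip in sorted(clips_dir.glob("*.avi")):
        # foto1.avi -> /Users/Documents/Frames/Foto1 (capitalised, as in the question)
        out_dir = frames_dir / clip.stem.capitalize()
        out_dir.mkdir(parents=True, exist_ok=True)  # create it if it does not exist yet
        subprocess.run(
            ["ffmpeg", "-i", str(clip), str(out_dir / "frame%06d.png")],
            check=True,
        )
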
  • Accented characters are not recognized in Python [closed]

    10 April 2023, by CorAnna

    I have a problem in my Python script: the script should add subtitles to a video given an .srt file. This .srt file is written by another script, but that script replaces the accents and all the special characters with a black square symbol with a question mark inside it... The problem, I think, lies in the writing of this file; the result is that when I overlay the subtitles with ffmpeg, the sentences that contain an accented word are not written.

# Imports reconstructed from the calls below; the module aliases (mp for moviepy,
# gt for googletrans) and the write_srt helper from whisper.utils are assumptions.
import os
import time
from pathlib import Path

import googletrans as gt
import moviepy.editor as mp
import whisper
from whisper.utils import write_srt


def video_audio_file_writer(video_file):
    # Extract the audio track of the video into a .wav file.
    videos_folder = "Video"
    audios_folder = "Audio"

    video_path = f"{videos_folder}\\{video_file}"

    video_name = Path(video_path).stem
    audio_name = f"{video_name}Audio"

    audio_path = f"{audios_folder}\\{audio_name}.wav"

    video = mp.VideoFileClip(video_path)
    audio = video.audio.write_audiofile(audio_path)

    return video_path, audio_path, video_name


def audio_file_transcription(audio_path, lang):
    # Transcribe (and translate) the audio with Whisper.
    model = whisper.load_model("base")
    tran = gt.Translator()

    audio_file = str(audio_path)

    options = dict(beam_size=5, best_of=5)
    translate = dict(task="translate", **options)
    result = model.transcribe(audio_file, **translate)

    return result


def audio_subtitles_transcription(result, video_name):
    # Write the transcription segments to an .srt file.
    subtitle_folder = "Content"
    subtitle_name = f"{video_name}Subtitle"
    subtitle_path_form = "srt"

    subtitle_path = f"{subtitle_folder}\\{subtitle_name}.{subtitle_path_form}"

    with open(os.path.join(subtitle_path), "w") as srt:
        # write_vtt(result["segments"], file=vtt)
        write_srt(result["segments"], file=srt)

    return subtitle_path


def video_subtitles(video_path, subtitle_path, video_name):
    # Burn the subtitles into the video with ffmpeg's subtitles filter.
    video_subtitled_folder = "VideoSubtitles"
    video_subtitled_name = f"{video_name}Subtitles"
    video_subtitled_path = f"{video_subtitled_folder}\\{video_subtitled_name}.mp4"

    video_path_b = bytes(video_path, 'utf-8')
    subtitle_path_b = bytes(subtitle_path, 'utf-8')
    video_subtitled_path_b = bytes(video_subtitled_path, 'utf-8')

    path_abs_b = os.getcwdb() + b"\\"

    path_abs_bd = path_abs_b.decode('utf-8')
    video_path_bd = video_path_b.decode('utf-8')
    subtitle_path_bd = subtitle_path_b.decode('utf-8')
    video_subtitled_path_bd = video_subtitled_path_b.decode('utf-8')

    video_path_abs = str(path_abs_bd + video_path_bd)
    subtitle_path_abs = str(path_abs_bd + subtitle_path_bd).replace("\\", "\\\\").replace(":", "\\:")
    video_subtitled_path_abs = str(path_abs_bd + video_subtitled_path_bd)

    time.sleep(3)

    os.system(f"ffmpeg -i {video_path_abs} -vf subtitles='{subtitle_path_abs}' -y {video_subtitled_path_abs}")

    return video_subtitled_path_abs, video_path_abs, subtitle_path_abs


if __name__ == "__main__":

    video_path, audio_path, video_name = video_audio_file_writer(video_file="ChiIng.mp4")
    result = audio_file_transcription(audio_path=audio_path, lang="it")
    subtitle_path = audio_subtitles_transcription(result=result, video_name=video_name)
    video_subtitled_path_abs, video_path_abs, subtitle_path_abs = video_subtitles(video_path=video_path, subtitle_path=subtitle_path, video_name=video_name)

    print("Video Subtitled")

    Windows 11, Python 3.10
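
    A hedged note on the likely cause rather than a confirmed fix: on Windows, open() without an explicit encoding writes the file in the locale code page, while ffmpeg's subtitles filter expects UTF-8, so accented characters written by write_srt can come out garbled. A minimal sketch of the usual remedy, replacing the with block inside audio_subtitles_transcription and leaving the rest of the script unchanged:

# Hedged sketch: write the subtitle file explicitly as UTF-8 so accented characters
# survive; subtitle_path and result are the variables from the script above.
with open(subtitle_path, "w", encoding="utf-8") as srt:
    write_srt(result["segments"], file=srt)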