Advanced search

Media (1)

Keyword: - Tags -/artwork

Other articles (42)

  • Retrieving information from the master site when installing an instance

    26 November 2010, by

    Purpose
    On the main site, a mutualisation instance is defined by several things: its data in the spip_mutus table; its logo; its main author (the id_admin in the spip_mutus table, corresponding to an id_auteur in the spip_auteurs table), who will be the only one able to definitively create the mutualisation instance;
    It can therefore be quite useful to retrieve some of this information in order to complete the installation of an instance, for example to retrieve the (...)

  • No talk of market, cloud, etc.

    10 April 2011

    The vocabulary used on this site tries to avoid any reference to the fashions that flourish freely
    on web 2.0 and in the companies that make a living from them.
    You are therefore invited to avoid using terms such as "Brand", "Cloud", "Market", etc.
    Our motivation is above all to create a simple tool, accessible to everyone, that encourages
    the sharing of creative work on the Internet and allows authors to keep as much autonomy as possible.
    No "Gold or Premium contract" is therefore planned, no (...)

  • HTML5 audio and video support

    13 April 2011, by

    MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
    The MediaSPIP player used has been created specifically for MediaSPIP and can be easily adapted to fit in with a specific theme.
    For older browsers, the Flowplayer Flash fallback is used.
    MediaSPIP allows for media playback on major mobile platforms with the above (...)

On other sites (4654)

  • Precise seeking with ffmpeg

    6 August 2015, by Taras Galchenko

    Let's say I have an audio file being decoded with ffmpeg. The source format is something like AAC where the audio is split into packets. When seeking to a particular time, it is clear that most of the time the requested time will not fall on a packet boundary but somewhere within a packet's duration. Do I have to seek within the packet myself, or does av_seek_frame do it all by itself and set up decoding so that the next decoded frame starts at the position I've requested?

    If I use av_seek_frame with the AVSEEK_FLAG_BACKWARD flag, I assume that the next packet returned by av_read_frame will be the packet containing the time position I am seeking to. Is that right?

    If I decode this packet with avcodec_decode_audio4, will the returned frame contain the audio data from the beginning of the packet or from the time I passed to av_seek_frame? In the latter case, how can I find out the frame/packet timestamp so as to estimate the number of samples to skip in the decoded frame? The PTS after seeking is zero and the DTS does not look useful either.

    Is it possible to seek with precision to a particular time using ffmpeg?
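
    As a rough sketch of the flow the question describes (using the 2015-era decode API it names), the idea is: seek with AVSEEK_FLAG_BACKWARD so you land at or before the target, flush the decoder, then decode forward and use the first decoded frame's own timestamp to work out how many leading samples to throw away. The helper below is hypothetical (its name and structure are not taken from any FFmpeg example), error handling is omitted, and fmt_ctx, dec_ctx and audio_stream_index are assumed to have been opened elsewhere.

    #include <libavformat/avformat.h>
    #include <libavcodec/avcodec.h>
    #include <libavutil/mathematics.h>

    /* Hypothetical helper: returns how many leading samples of the first frame
     * overlapping target_seconds lie before that time (i.e. how many the caller
     * should discard), or a negative value on error. */
    static int seek_audio_precise(AVFormatContext *fmt_ctx, AVCodecContext *dec_ctx,
                                  int audio_stream_index, double target_seconds)
    {
        AVStream *st = fmt_ctx->streams[audio_stream_index];
        int64_t target_ts = av_rescale_q((int64_t)(target_seconds * AV_TIME_BASE),
                                         AV_TIME_BASE_Q, st->time_base);

        /* AVSEEK_FLAG_BACKWARD lands on the packet at or before the target. */
        if (av_seek_frame(fmt_ctx, audio_stream_index, target_ts, AVSEEK_FLAG_BACKWARD) < 0)
            return -1;
        avcodec_flush_buffers(dec_ctx);

        AVFrame *frame = av_frame_alloc();
        AVPacket pkt;
        int skip = -1;

        while (skip < 0 && av_read_frame(fmt_ctx, &pkt) >= 0) {
            if (pkt.stream_index == audio_stream_index) {
                int got_frame = 0;
                if (avcodec_decode_audio4(dec_ctx, frame, &got_frame, &pkt) >= 0 && got_frame) {
                    /* Use the frame's own timestamp, not the requested time, to
                     * work out how much of it precedes the target. */
                    double start = av_frame_get_best_effort_timestamp(frame) * av_q2d(st->time_base);
                    double end   = start + (double)frame->nb_samples / frame->sample_rate;
                    if (end > target_seconds)
                        skip = target_seconds > start
                             ? (int)((target_seconds - start) * frame->sample_rate)
                             : 0;
                }
            }
            av_packet_unref(&pkt);
        }
        av_frame_free(&frame);
        return skip;
    }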

  • avcodec/motion_est_template: Avoid using last + 1 element of array

    21 October 2022, by Andreas Rheinhardt
    avcodec/motion_est_template: Avoid using last + 1 element of array
    

    For an int array[8][2] using &array[8][0] (which is an int*
    pointing to the element beyond the last element of array)
    triggers a "runtime error: index 8 out of bounds for type 'int[8][2]'"
    from (Clang-)UBSan in the fate-vsynth(1|2|_lena)-snow tests.

    I don't know whether this is really undefined behaviour or does not
    actually fall under the "pointer arithmetic with the element beyond
    the last element of the array is allowed as long as it is not
    accessed" exception". All I know is that the code itself does not
    read from beyond the last element of the array.

    Nevertheless rewrite the code to a form that UBSan does not complain
    about.

    Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>

    • [DH] libavcodec/motion_est_template.c
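
    As a minimal stand-alone illustration of the pattern the commit describes (this is not the FFmpeg code and not necessarily the rewrite that was actually applied, just a sketch of the idea): writing &array[8][0] indexes a non-existent ninth row before taking its address, which is what UBSan reports, while deriving the same past-the-end address from the first element is one form UBSan does not flag.

    #include <stdio.h>

    int main(void)
    {
        int array[8][2] = {{1, 2}};
        int sum = 0;

        /* Flagged form (UBSan: "index 8 out of bounds for type 'int[8][2]'"):
         *     int *end = &array[8][0];
         * One rewrite UBSan accepts: compute the past-the-end address from the
         * first element, so no out-of-range row index is ever written. */
        int *end = &array[0][0] + 8 * 2;

        for (int *p = &array[0][0]; p < end; p++)
            sum += *p;

        printf("sum = %d\n", sum);   /* prints 3 */
        return 0;
    }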
  • Subtitle Overlay Isn't Working, how do I fix it? [closed]

    27 October 2024, by michael tan

    I'm trying to make a program to create clips with subtitles, but instead of the overlaid subtitles syncing with the clip, they just start from the beginning of the movie.


    import subprocess
    from moviepy.editor import VideoFileClip

    def parse_srt(srt_file):
        """Parse the SRT file and return a list of subtitles with their timestamps."""
        subtitles = []
        with open(srt_file, 'r') as f:
            content = f.read().strip().split('\n\n')
            for entry in content:
                lines = entry.split('\n')
                if len(lines) >= 3:
                    index = lines[0]
                    timestamps = lines[1]
                    text = '\n'.join(lines[2:])
                    start, end = timestamps.split(' --> ')
                    subtitles.append((start.strip(), end.strip(), text.strip()))
        return subtitles

    def print_subtitles_in_range(subtitles, start_time, end_time):
        """Print subtitles that fall within the given start and end times."""
        for start, end, text in subtitles:
            start_seconds = convert_srt_time_to_seconds(start)
            end_seconds = convert_srt_time_to_seconds(end)
            if start_seconds >= start_time and end_seconds <= end_time:
                print(f"{start} --> {end}: {text}")


    def convert_srt_time_to_seconds(time_str):
        """Convert SRT time format to total seconds."""
        hours, minutes, seconds = map(float, time_str.replace(',', '.').split(':'))
        return hours * 3600 + minutes * 60 + seconds

    def create_captioned_clip(input_file, start_time, end_time, srt_file, output_file):
        # Step 1: Extract the clip from the main video
        clip = VideoFileClip(input_file).subclip(start_time, end_time)
        print("Clip duration:", clip.duration)
        temp_clip_path = "temp_clip.mp4"
        clip.write_videofile(temp_clip_path, codec="libx264")

        # Step 2: Parse the SRT file to get subtitles
        subtitles = parse_srt(srt_file)

        # Step 3: Print subtitles that fall within the start and end times
        print("\nSubtitles for the selected clip:")
        print_subtitles_in_range(subtitles, start_time, end_time)

        # Step 2: Add subtitles using FFmpeg
        ffmpeg_command = [
            "ffmpeg",
            "-ss", str(start_time),  # Seek to the start time of the clip
            "-i", input_file,        # Use the original input file for subtitles
            "-vf", f"subtitles='{srt_file}:force_style=Alignment=10,TimeOffset={start_time}'",  # Overlay subtitles
            "-t", str(end_time - start_time),  # Set duration for the output
            output_file
        ]

        print("Running command:", ' '.join(ffmpeg_command))
        subprocess.run(ffmpeg_command, capture_output=True, text=True)

    # Define input video and srt file
    input_video = "Soul.2020.720p.BluRay.x264.AAC-[YTS.MX].mp4"
    subtitle_file = "Soul.2020.720p.BluRay.x264.AAC-[YTS.MX].srt"

    # Define multiple clips with start and end times
    clips = [
        {"start": (5 * 60), "end": (5 * 60 + 30), "output": "output_folder/captioned_clip1.mp4"},
        {"start": (7 * 60), "end": (7 * 60 + 25), "output": "output_folder/captioned_clip2.mp4"},
    ]

    # Process each clip
    for clip_info in clips:
        create_captioned_clip(input_video, clip_info["start"], clip_info["end"], subtitle_file, clip_info["output"])


    I thought the subtitles would sync with the clip automatically; after that didn't work, I tried to sync them manually by passing the start time, the duration, and an offset, but the subtitles still start from 0:00 of the movie. There's nothing wrong with the .srt file; it's formatted correctly.
