Advanced search

Media (1)

Keyword: - Tags - / lev manovitch

Other articles (66)

  • General document management

    13 May 2011, by

    MédiaSPIP never modifies the original document that is put online.
    For each document put online it performs two successive operations: creating an additional version that can easily be viewed online, while keeping the original downloadable in case it cannot be read in a web browser; and retrieving the original document's metadata to describe the file textually.
    The tables below explain what MédiaSPIP can do (...)

  • Sites built with MediaSPIP

    2 May 2011, by

    This page presents some of the sites running MediaSPIP.
    You can of course add yours using the form at the bottom of the page.

  • Submit bugs and patches

    13 April 2011

    Unfortunately, software is never perfect.
    If you think you have found a bug, report it using our ticket system, and please help us fix it by providing the following information: the browser you are using, including the exact version; as precise an explanation of the problem as possible; if possible, the steps that led to the problem; and a link to the site/page in question.
    If you think you have solved the bug, fill in a ticket and attach a corrective patch to it.
    You may also (...)

On other sites (7647)

  • Bash script for splitting video using ffmpeg outputs wrong video lengths

    19 June 2018, by jonny

    I have a video that is x seconds long. I want to split that video up into equal segments, where each segment is no longer than a minute. To do that, I cobbled together a fairly simple bash script which uses ffprobe to get the duration of the video, works out how long each segment should be, and then iteratively splits the video using ffmpeg:

    INPUT_FILE=$1
    INPUT_DURATION="$(./bin/ffprobe.exe -i "$INPUT_FILE" -show_entries format=duration -v quiet -of csv="p=0")"

    NUM_SPLITS="$(perl -w -e "use POSIX; print ceil($INPUT_DURATION/60), qq{\n}")"

    printf "\nVideo duration: $INPUT_DURATION; "
    printf "Number of videos to output: $NUM_SPLITS; "
    printf "Approximate length of each video: $(echo "$INPUT_DURATION" "$NUM_SPLITS" | awk '{print ($1 / $2)}')\n\n"

    for i in `seq 1 "$NUM_SPLITS"`; do
       START="$(echo "$INPUT_DURATION" "$NUM_SPLITS" "$i" | awk '{print (($1 / $2) * ($3 - 1))}')"
       END="$(echo "$INPUT_DURATION" "$NUM_SPLITS" "$i" | awk '{print (($1 / $2) * $3)}')"

       echo ./bin/ffmpeg.exe -v quiet -y -i "$INPUT_FILE" \
       -vcodec copy -acodec copy -ss "$START" -t "$END" -sn test_${i}.mp4

       ./bin/ffmpeg.exe -v quiet -y -i "$INPUT_FILE" \
       -vcodec copy -acodec copy -ss "$START" -t "$END" -sn test_${i}.mp4
    done

    printf "\ndone\n"

    If I run that script on the 30 MB / 02:50 Big Buck Bunny sample from here, the output of the program suggests the videos should all be of equal length:

    λ bash split.bash .\media\SampleVideo_1280x720_30mb.mp4

    Video duration: 170.859000; Number of videos to output: 3; Approximate length of each video: 56.953

    ./bin/ffmpeg.exe -v quiet -y -i .\media\SampleVideo_1280x720_30mb.mp4 -vcodec copy -acodec copy -ss 0 -t 56.953 -sn test_1.mp4
    ./bin/ffmpeg.exe -v quiet -y -i .\media\SampleVideo_1280x720_30mb.mp4 -vcodec copy -acodec copy -ss 56.953 -t 113.906 -sn test_2.mp4
    ./bin/ffmpeg.exe -v quiet -y -i .\media\SampleVideo_1280x720_30mb.mp4 -vcodec copy -acodec copy -ss 113.906 -t 170.859 -sn test_3.mp4

    done

    The duration of each partial video, i.e. the time between -ss and -t, is the same for each successive ffmpeg command. But the durations I actually get are closer to:

    test_1.mp4 = 00:56
    test_2.mp4 = 01:53
    test_3.mp4 = 00:56

    and the contents of the partial videos overlap. What am I missing here?
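    For reference, ffmpeg's -t option takes a duration while -to takes an absolute end time, so passing the computed END value to -t is one plausible explanation for the uneven, overlapping segments. Below is a minimal sketch of the loop body passing an explicit segment length instead; it is an illustration, not the original script:

    # Sketch only: give -t a duration (END - START) rather than an end timestamp,
    # or use -to "$END" to stop at an absolute position instead.
    SEGMENT_LENGTH="$(echo "$END" "$START" | awk '{print ($1 - $2)}')"

    ./bin/ffmpeg.exe -v quiet -y -i "$INPUT_FILE" \
       -vcodec copy -acodec copy -ss "$START" -t "$SEGMENT_LENGTH" -sn test_${i}.mp4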

  • Start and end time of MoviePy's VideoClip not working

    21 March 2024, by ernesto casco velazquez

    I'm trying to add captions to a video. The desired outcome is to show each word at the exact moment it is being said.

    


    I have a method that gives me the accurate start and end time for each word:

    


    import whisper  # assumed import: openai-whisper provides load_model()/transcribe()


    def get_words_per_time(audio_speech_file):
        model = whisper.load_model("base")
        transcribe = model.transcribe(
            audio=audio_speech_file, fp16=False, word_timestamps=True
        )
        segments = transcribe["segments"]
        words = []

        # Flatten the per-segment word timestamps into a single list.
        for seg in segments:
            for word in seg["words"]:
                words.append(
                    {
                        "word": word["word"],
                        "start": word["start"],
                        "end": word["end"],
                        "prob": round(word["probability"], 4),
                    }
                )
        return words


    


    Then I have code that uses MoviePy to create a TextClip and assign a given start and end time to each pair of words (I know there are redundant statements, sorry):

    


    from moviepy.editor import TextClip  # assumed import (MoviePy 1.x)
    from rich.progress import track  # assumed import for the progress bar


    def generate_captions(
        words,
        font="Komika",
        fontsize=32,
        color="White",
        align="center",
        stroke_width=3,
        stroke_color="black",
    ):
        text_comp = []
        # Build one TextClip per pair of words, timed from the first word's start
        # to the second word's end (or the first word's end for a trailing word).
        for i in track(range(0, len(words), 2), description="Creating captions..."):
            word1 = words[i]
            if i + 1 < len(words):
                word2 = words[i + 1]
            text_clip = TextClip(
                f"{word1['word']} {word2['word'] if i + 1 < len(words) else ''}",
                font=font,  # Change font if not found
                fontsize=fontsize,
                color=color,
                align=align,
                method="caption",
                size=(660, None),
                stroke_width=stroke_width,
                stroke_color=stroke_color,
            )
            text_clip = text_clip.set_start(word1["start"])
            text_clip = text_clip.set_end(
                word2["end"] if i + 1 < len(words) else word1["end"]
            )
            text_comp.append(text_clip)
        return text_comp


    


    Finally, I concatenate the words into a single video:

    


    vid_clip = CompositeVideoClip(
        [vid_clip, concatenate_videoclips(text_comp).set_position(("center", 860))]
    )


    


    The output is this, but you can clearly see that the words are not in sync with the speech. They somehow move faster, as if the start/end times did not matter. Here's the video

    


    The words, with their respective start/end times, look like this:

    


    [
    {
        'word': 'This',
        'start': 0.0,
        'end': 0.22,
        'prob': 0.805
    },
    {
        'word': 'is',
        'start': 0.22,
        'end': 0.42,
        'prob': 0.9991
    },
    {
        'word': 'a',
        'start': 0.42,
        'end': 0.6,
        'prob': 0.999
    },
    {
        'word': 'test,',
        'start': 0.6,
        'end': 1.04,
        'prob': 0.9939
    },
    {
        'word': 'to',
        'start': 1.18,
        'end': 1.3,
        'prob': 0.9847
    },
    {
        'word': 'show',
        'start': 1.3,
        'end': 1.54,
        'prob': 0.9971
    },
    {
        'word': 'words',
        'start': 1.54,
        'end': 1.9,
        'prob': 0.995
    },
    {
        'word': 'does',
        'start': 1.9,
        'end': 2.16,
        'prob': 0.997
    },
    {
        'word': 'not',
        'start': 2.16,
        'end': 2.4,
        'prob': 0.9978
    },
    {
        'word': 'appear.',
        'start': 2.4,
        'end': 2.82,
        'prob': 0.9984
    },
    {
        'word': 'At',
        'start': 3.46,
        'end': 3.6,
        'prob': 0.9793
    },
    {
        'word': 'their',
        'start': 3.6,
        'end': 3.8,
        'prob': 0.9984
    },
    {
        'word': 'proper',
        'start': 3.8,
        'end': 4.22,
        'prob': 0.9976
    },
    {
        'word': 'time.',
        'start': 4.22,
        'end': 4.72,
        'prob': 0.999
    },
    {
        'word': 'Thanks',
        'start': 5.04,
        'end': 5.4,
        'prob': 0.9662
    },
    {
        'word': 'for,',
        'start': 5.4,
        'end': 5.66,
        'prob': 0.9941
    },
    {
        'word': 'watching.',
        'start': 5.94,
        'end': 6.36,
        'prob': 0.7701
    }
]


    


    What could be causing this?
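    For reference, and as a guess rather than something established above: concatenate_videoclips plays the clips strictly one after another, so the start/end times set on each TextClip are ignored and the silent gaps between words disappear, which would make the captions run ahead of the speech. Below is a minimal sketch that composites the caption clips directly, so each keeps its own start and end; it assumes vid_clip and text_comp are built as in the snippets above:

    from moviepy.editor import CompositeVideoClip

    # Sketch only: overlay each caption at its own start time instead of concatenating.
    captions = [clip.set_position(("center", 860)) for clip in text_comp]
    vid_clip = CompositeVideoClip([vid_clip, *captions])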

    


  • How to convert an IP camera video stream into a video file?

    14 November 2014, by AgentFire

    I have a URL (<ip>/ipcam/mpeg4.cgi) that points to my IP camera, which is connected via Ethernet.
    Accessing the URL results in an infinite stream of video (possibly with audio) data.

    I would like to store this data into a video file and play it later with a video player (HTML5’s video tag is preferred as the player).

    However, the straightforward approach of simply saving the stream data into an .mp4 file didn't work.

    I have looked into the file, and here is what I saw:

    It turned out there are some HTTP headers, which I then manually removed using a binary editing tool, and yet no player could play the rest of the file.

    The headers are:

    --myboundary
    Content-Type: image/mpeg4
    Content-Length: 76241
    X-Status: 0
    X-Tag: 1693923
    X-Flags: 0
    X-Alarm: 0
    X-Frametype: I
    X-Framerate: 30
    X-Resolution: 1920*1080
    X-Audio: 1
    X-Time: 2000-02-03 02:46:31
    alarm: 0000

    My question is pretty clear now, and I would appreciate any help or suggestions. I suspect I have to create some MP4 headers myself based on the values above; however, I fail to understand format descriptions such as these.

    I have the following video stream settings on my IP camera:

    I could also use the ffmpeg tool, but no matter how I mix the arguments to the program, it keeps telling me this error:
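    For reference, and not the command or error from the question: a common first attempt with ffmpeg for this kind of camera feed is to let it read the multipart stream over HTTP and stream-copy it into an MP4 container; the URL and options below are assumptions.

    # Sketch only: read the camera's stream and remux it without re-encoding.
    ffmpeg -i "http://<ip>/ipcam/mpeg4.cgi" -c copy output.mp4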