
Other articles (103)

  • Updating from version 0.1 to 0.2

    24 June 2013, by

    An explanation of the notable changes when moving from version 0.1 of MediaSPIP to version 0.3. What's new?
    Software dependencies: the latest versions of FFMpeg (>= v1.2.1) are now used; the dependencies for Smush are installed; MediaInfo and FFprobe are installed to retrieve metadata; ffmpeg2theora is no longer used; flvtool2 is no longer installed, in favour of flvtool++; ffmpeg-php, which is no longer maintained, is no longer installed (...)

  • Writing a news item

    21 June 2013, by

    Present changes to your MédiaSPIP, or news about your projects, on your MédiaSPIP using the news section.
    In spipeo, MédiaSPIP's default theme, news items are displayed at the bottom of the main page, below the editorials.
    You can customise the form used to create a news item.
    News item creation form: for a document of the news type, the fields offered by default are: Publication date (customise the publication date) (...)

  • Publishing on MédiaSpip

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, if your MédiaSpip installation is at version 0.2 or higher. If in doubt, contact your MédiaSpip administrator to find out.

On other sites (20161)

  • How to stop perl buffering ffmpeg output

    4 February 2017, by Sebastian King

    I am trying to have a Perl program process the output of an ffmpeg encode, but my test program only seems to receive ffmpeg's output in periodic chunks, so I am assuming some sort of buffering is going on. How can I make it process the output in real time?

    My test program (the tr command is there because I thought maybe ffmpeg’s carriage returns were causing perl to see one big long line or something) :

    #!/usr/bin/perl

    $i = "test.mkv"; # big file, long encode time
    $o = "test.mp4";

    open(F, "-|", "ffmpeg -y -i '$i' '$o' 2>&1 | tr '\r' '\n'")
           or die "oh no";

    while (<F>) {
           print "A12345: $_"; # some random text so I know the output was processed in Perl
    }

    Everything works fine when I replace the ffmpeg command with this script :

    #!/bin/bash

    echo "hello";

    for i in `seq 1 10`; do
           sleep 1;
           echo "hello $i";
    done

    echo "bye";

    When using the above script I see the output each second as it happens. With ffmpeg, nothing appears for some 5-10 seconds, and then sometimes 100 lines arrive in one burst.

    I have tried using the program unbuffer ahead of ffmpeg in the command call but it seems to have no effect. Is it perhaps the 2>&1 that might be buffering?
    Any help is much appreciated.

    If you are unfamiliar with ffmpeg’s output, it outputs a bunch of file information and stuff to STDOUT and then during encoding it outputs lines like

    frame=  332 fps= 93 q=28.0 size=     528kB time=00:00:13.33 bitrate= 324.2kbits/s speed=3.75x

    which begin with carriage returns instead of newlines (hence tr) on STDERR (hence 2>&1).
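    Two likely culprits fit this symptom: tr block-buffers its stdout when writing to a pipe rather than a terminal, and a line-oriented read loop waits for newlines that ffmpeg never emits on its progress line. As an illustration only, here is a Python sketch (file names are placeholders) that reads the raw byte stream in small chunks and treats both \r and \n as record separators, so each progress record is processed the moment ffmpeg writes it:

```python
import subprocess

def cr_records(cmd):
    """Yield records from cmd's stderr, treating \\r and \\n as delimiters.

    Reading in small raw chunks (bufsize=0) means no data sits in a
    block buffer waiting for a newline that never comes.
    """
    proc = subprocess.Popen(cmd, stderr=subprocess.PIPE, bufsize=0)
    buf = b""
    while True:
        chunk = proc.stderr.read(256)
        if not chunk:
            break
        buf += chunk
        # Normalise \r to \n, then split off every complete record.
        *records, buf = buf.replace(b"\r", b"\n").split(b"\n")
        for rec in records:
            if rec:
                yield rec.decode(errors="replace")
    if buf:
        yield buf.decode(errors="replace")
    proc.wait()

# e.g.:
# for rec in cr_records(["ffmpeg", "-y", "-i", "test.mkv", "test.mp4"]):
#     print("A12345:", rec)
```

    In Perl, the analogous move is setting the input record separator to a carriage return ($/ = "\r") so the read loop no longer waits for a newline, which also removes the need for tr in the pipeline.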

  • How to use ffmpeg to transcode many live streamed videos ? [closed]

    21 September 2020, by user14258924

    PREMISE


    As a pet project, I am writing a live video streaming service in Go that can consume video streams from OBS via the SRT (TS -> h264/aac) and RTMP (FLV -> h264/aac) protocols, and I am planning to support streaming video from the web browser as well, captured from a webcam via JS. This ingress server will receive many video streams in various containers and codecs, and I need to normalize them into a single container and codec and then create multiple versions at various bitrates (i.e. 240p, 360p, 480p, 720p, 1080p...) to pass along where needed in the application. Each stream is split into 2-second GOP segments, separate for the audio and video tracks, producing fragmented MP4 as the end result - which can be consumed by a web browser.
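    For the 2-second GOP requirement, the usual approach with ffmpeg's x264 encoder is to force a keyframe on every segment boundary and disable scene-cut keyframes, so every rendition can be cut at the same timestamps. A sketch of the relevant flags (illustrative, not a tuned configuration):

```python
# Illustrative only: force a keyframe every `seg` seconds so 2 s segments
# can be cut on GOP boundaries, aligned across all renditions.
seg = 2  # segment length in seconds
gop_flags = [
    "-force_key_frames", f"expr:gte(t,n_forced*{seg})",
    "-sc_threshold", "0",  # x264: no extra keyframes at scene cuts
]
```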


    The issue is that I am using Go, which has no libraries for transcoding video, so I need to use either ffmpeg or vlc, both of which are C code. I have decided to avoid the CGo route and use ffmpeg/vlc as standalone binaries.


    QUESTION


    My question is how to use either of these projects in the most efficient way - avoiding the use of files in favour of unix sockets/streams - and also the performance aspect: handling hundreds of video segments at any one time, fast enough to avoid creating too much of a lag between producer and consumer.
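    On the files-vs-streams point: ffmpeg can read from stdin and write to stdout (pipe:0 / pipe:1), and fragmented MP4 is one of the few MP4 flavours that can be written to a non-seekable pipe. A Python sketch of transcoding one segment entirely through pipes (the same shape applies from Go's os/exec; the codecs and flags here are assumptions, not a tuned config):

```python
import subprocess

# Illustrative: transcode one segment entirely through pipes, no temp files.
cmd = [
    "ffmpeg", "-hide_banner", "-loglevel", "error",
    "-i", "pipe:0",                           # read the segment from stdin
    "-c:v", "libx264", "-c:a", "aac",
    "-movflags", "frag_keyframe+empty_moov",  # fragmented MP4: pipe-friendly
    "-f", "mp4", "pipe:1",                    # write the result to stdout
]

def transcode_segment(segment_bytes: bytes) -> bytes:
    """Run one ffmpeg pass over an in-memory segment and return the fMP4."""
    proc = subprocess.run(cmd, input=segment_bytes,
                          stdout=subprocess.PIPE, check=True)
    return proc.stdout
```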


    So let's say I pick the most used one - ffmpeg: how should I actually use it to achieve what I have described? How would you set it up, and which flags/config would you use with it?
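    One common pattern is a single ffmpeg process per incoming stream that fans out to every rendition at once, so the input is decoded only once. A sketch of building that argument list with per-output-stream options (the heights and bitrates are placeholders, not recommendations):

```python
# Illustrative: per-output-stream options for an ABR ladder in one process.
renditions = [(240, "400k"), (360, "800k"), (480, "1200k"),
              (720, "2500k"), (1080, "4500k")]

def ladder_args(renditions):
    """Build ffmpeg args mapping the input once per rendition, with the
    i-th output scaled and encoded at its own bitrate."""
    args = []
    for i, (height, vbitrate) in enumerate(renditions):
        args += [
            "-map", "0:v:0", "-map", "0:a:0",        # one A/V pair per rendition
            f"-filter:v:{i}", f"scale=-2:{height}",  # scale the i-th video output
            f"-c:v:{i}", "libx264", f"-b:v:{i}", vbitrate,
            f"-c:a:{i}", "aac",
        ]
    return args
```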


    Can that performance even be achieved, or is it just too much, and I will need some sort of ffmpeg cluster to come close to useful performance/low delay?


  • Why is my .mp4 file created using cv2.VideoWriter not syncing up with the audio when i combine the video and audio using ffmpeg [closed]

    27 December 2024, by joeS125

    The aim of the script is to take text from a text file and put it onto a stock video, with an AI voice reading the text - similar to those Reddit stories on social media with Minecraft parkour in the background.


    import cv2
    import time
    from ffpyplayer.player import MediaPlayer
    from Transcription import newTranscribeAudio
    from pydub import AudioSegment

    # get a gpt text generation to create a story based on a prompt, for example a sci-fi story spread over 3-4 parts
    # get stock footage, like minecraft parkour etc
    # write the text of the script on the footage
    # create a video for each part
    # have an ai voiceover read the transcript
    cap = cv2.VideoCapture(r"Stock_Videos\Minecraft_Parkour.mp4")  # raw string: keep the backslash literal
    transcription = newTranscribeAudio("final_us.wav")
    player = MediaPlayer("final_us.mp3")
    audio = AudioSegment.from_file("final_us.mp3")
    story = open("Story.txt", "r").read()
    story_split = story.split("||")
    fps = cap.get(cv2.CAP_PROP_FPS)
    frame_count = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    video_duration = frame_count / fps  # duration of one loop of the video
    fourcc = cv2.VideoWriter_fourcc(*"mp4v")
    audio_duration = len(audio) / 1000  # duration in seconds
    video_writer = cv2.VideoWriter("CompletedVideo.mp4", fourcc, fps, (1080, 1920))

    choice = 0  # part of the story to use
    part_split = story_split[choice].split(" ")
    with open("Segment.txt", "w") as file:
        file.write(story_split[choice])
    length = len(part_split) - 1
    next_text = []
    for j in range(0, length):
        temp = part_split[j].replace("\n", "")
        next_text.append([temp])
    index = 0
    word_index = 0
    frame_size_x = 1080
    frame_size_y = 1920
    video_text = ""  # initialise so getTextSize never sees an undefined name
    start_time = time.time()
    wait_time = 1 / fps
    while (time.time() - start_time) < audio_duration:
        cap.set(cv2.CAP_PROP_POS_FRAMES, 0)  # restart the video loop
        if index >= len(transcription):
            break
        while cap.isOpened():
            # capture frames in the video
            ret, frame = cap.read()
            if not ret:
                break
            audio_frame, val = player.get_frame()
            if val == 'eof':  # end of file
                print("Audio playback finished.")
                break
            if index >= len(transcription):
                break

            elapsed_time = time.time() - start_time

            # font to be used for the subtitle text
            font = cv2.FONT_HERSHEY_SIMPLEX
            trans = transcription[index]["words"]
            if trans[word_index]["start"] < elapsed_time < trans[word_index]["end"]:
                video_text = trans[word_index]["text"]
            elif elapsed_time >= trans[word_index]["end"]:
                word_index += 1
            if word_index >= len(trans):
                index += 1
                word_index = 0
            # get the boundary of this text
            textsize = cv2.getTextSize(video_text, font, 3, 6)[0]
            # centre the text based on that boundary
            textX = int((frame.shape[1] - textsize[0]) / 2)
            textY = int((frame.shape[0] + textsize[1]) / 2)

            cv2.putText(frame, video_text, (textX, textY),
                        font, 3, (0, 255, 255), 6, cv2.LINE_4)

            # resize the frame to the output dimensions
            new_size = (1080, 1920)
            resized_frame = cv2.resize(frame, new_size)
            video_writer.write(resized_frame)
            cv2.imshow('video', resized_frame)
            # waitKey takes an integer number of milliseconds
            if cv2.waitKey(max(1, int(wait_time * 1000))) & 0xFF == ord('q'):
                break
    cv2.destroyAllWindows()
    video_writer.release()
    cap.release()


    When I run this script the audio matches the text in the video perfectly, and it runs for the correct amount of time to match the audio (2 min 44 sec). However, the saved video CompletedVideo.mp4 only lasts 1 min 10 sec. I am unsure why the video has sped up; the fps is 60. If you require any more information please let me know, and thanks in advance.


    I have tried changing the fps and changing the wait_time after writing each frame. I am expecting CompletedVideo.mp4 to be 2 min 44 sec long, not 1 min 10 sec.
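    The mismatch described here is consistent with the writer being frame-count driven while the loop is wall-clock driven: the container's duration is simply frames_written / fps, so if each iteration takes longer than 1/60 s (decode + putText + imshow + waitKey easily exceed 16 ms), fewer frames get written and the result plays short and fast. A sketch of the arithmetic, with the numbers taken from the question, and a frame-indexed alternative for the subtitle lookup:

```python
# The output duration is determined by frames written, not by elapsed time.
fps = 60
audio_duration = 164.0  # 2 min 44 s of narration

frames_needed = round(fps * audio_duration)  # frames the file must contain
written_duration = frames_needed / fps       # what the player will report

# A 70 s result implies far fewer frames were actually written:
frames_actually_written = round(fps * 70)

def subtitle_time(frame_idx: int, fps: float) -> float:
    """Drive subtitle lookup from the frame index instead of time.time(),
    so text stays in sync no matter how slowly the loop runs."""
    return frame_idx / fps
```

    Writing exactly frames_needed frames (and dropping imshow/waitKey from the export path, since a file writer needs no real-time pacing) yields a 164 s video; the narration can then be muxed in afterwards with something like ffmpeg -i CompletedVideo.mp4 -i final_us.mp3 -c:v copy -c:a aac -shortest out.mp4.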
