
Media (1)
-
Ogg detection bug
22 March 2013, by
Updated: April 2013
Language: French
Type: Video
Other articles (103)
-
Updating from version 0.1 to 0.2
24 June 2013, by
An explanation of the notable changes when moving from MediaSPIP version 0.1 to version 0.3. What's new?
At the level of software dependencies: use of the latest versions of FFMpeg (>= v1.2.1); installation of the dependencies for Smush; installation of MediaInfo and FFprobe to retrieve metadata; ffmpeg2theora is no longer used; flvtool2 is no longer installed, in favour of flvtool++; ffmpeg-php, which is no longer maintained, is no longer installed (...) -
Writing a news item
21 June 2013, by
Present the changes on your MédiaSPIP, or news about your projects, on your MédiaSPIP via the news section.
In MédiaSPIP's default theme, spipeo, news items are displayed at the bottom of the main page, below the editorials.
You can customise the news item creation form.
News item creation form: for a document of the news item type, the fields offered by default are: publication date (customise the publication date) (...) -
Publishing on MédiaSpip
13 June 2013
Can I post content from an iPad tablet?
Yes, if your installed MédiaSpip is at version 0.2 or higher. If needed, contact the administrator of your MédiaSpip to find out.
On other sites (20161)
-
How to stop Perl buffering ffmpeg output
4 February 2017, by Sebastian King
I am trying to have a Perl program process the output of an ffmpeg encode; however, my test program only receives the output of ffmpeg in periodic chunks, so I assume some sort of buffering is going on. How can I make it process the output in real time?
My test program (the tr command is there because I thought maybe ffmpeg's carriage returns were causing Perl to see one big long line or something):

#!/usr/bin/perl
$i = "test.mkv"; # big file, long encode time
$o = "test.mp4";
open(F, "-|", "ffmpeg -y -i '$i' '$o' 2>&1 | tr '\r' '\n'")
    or die "oh no";
while (<F>) {
    print "A12345: $_"; # some random text so I know the output was processed in Perl
}
Everything works fine when I replace the ffmpeg command with this script:

#!/bin/bash
echo "hello";
for i in `seq 1 10`; do
    sleep 1;
    echo "hello $i";
done
echo "bye";

When using the above script I see the output each second, as it happens. With ffmpeg it is some 5-10 seconds or so until it outputs, and it will sometimes output 100 lines at a time.

I have tried using the program unbuffer ahead of ffmpeg in the command call, but it seems to have no effect. Is it perhaps the 2>&1 that might be buffering?
Any help is much appreciated.

If you are unfamiliar with ffmpeg's output: it outputs a bunch of file information and stuff to STDOUT, and then during encoding it outputs lines like

frame= 332 fps= 93 q=28.0 size= 528kB time=00:00:13.33 bitrate= 324.2kbits/s speed=3.75x

which begin with carriage returns instead of newlines (hence the tr) on STDERR (hence the 2>&1).
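For what it's worth, the chunking usually comes from tr rather than Perl or ffmpeg: once its stdout is a pipe instead of a terminal, tr switches to block buffering. Below is a minimal sketch of the same pipeline with that worked around, assuming GNU coreutils' stdbuf is available; it is written in Python only to show that the pipeline itself, not Perl, is the culprit, and the modified command string should drop straight into the Perl open above:

import subprocess

# Sketch: identical pipeline, but stdbuf (GNU coreutils, assumed installed)
# forces tr to line-buffer its output, so progress lines arrive one at a time.
cmd = "ffmpeg -y -i 'test.mkv' 'test.mp4' 2>&1 | stdbuf -oL tr '\\r' '\\n'"
proc = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE,
                        text=True, bufsize=1)
for line in proc.stdout:
    print("A12345:", line, end="")  # now printed as each line is produced
proc.wait()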
-
How to use ffmpeg to transcode many live streamed videos? [closed]
21 September 2020, by user14258924
PREMISE


As a pet project, I am writing a live video streaming service in Go that can consume video streams from OBS via the SRT (TS -> h264/aac) and RTMP (FLV -> h264/aac) protocols, and I am planning to support streaming video from the web browser as well, captured from a webcam via JS. This ingress server will receive many video streams in various containers and codecs, and I need to normalize them into a single container and codec, then create multiple versions at various bitrates (i.e. 240p, 360p, 480p, 720p, 1080p...) to pass along where needed in the application. Each stream is split into 2-second GOP segments, separate for the audio and video tracks, which will produce fragmented MP4 as the end result - which can be consumed by a web browser.


The issue is that I am using Go, which has no libraries for transcoding video, so I need to use either ffmpeg or VLC, which are C code. I have decided to avoid the CGo route and use ffmpeg/vlc as standalone binaries.


QUESTION


My question is how to use either of these projects in the most efficient way - avoiding the use of files in favour of unix sockets/streams - and also the performance aspect: handling hundreds of video segments at any one time, quickly enough to avoid creating too much of a lag between producer and consumer.


So let's say I pick the most used one - ffmpeg: how should I actually use it to achieve what I have described? How would you set it up, and which flags/config should I use with it?


Can that performance even be achieved, or is it just too much, and will I need some sort of ffmpeg cluster to even come close to useful performance/low delay?
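Not an authoritative recipe, but one common shape for this, sketched below: run one ffmpeg process per ingest, feed it the stream on stdin (so a Go pipe or unix socket can drive it instead of files), decode once, split the decoded video into one branch per rendition, and write each rendition out as fragmented MP4. Every flag, bitrate, and path here is an illustrative assumption, not a tested configuration:

import subprocess

# Sketch: one ffmpeg per ingest stream, fed on stdin, emitting two example
# renditions as fragmented MP4. All values below are illustrative assumptions.
cmd = [
    "ffmpeg",
    "-i", "pipe:0",                       # ingest arrives on stdin (pipe/socket)
    "-filter_complex",
    "[0:v]split=2[a][b];"                 # decode once, fan out per rendition
    "[a]scale=-2:720[v720];"
    "[b]scale=-2:480[v480]",
    # 720p rendition
    "-map", "[v720]", "-map", "0:a",
    "-c:v", "libx264", "-b:v", "2500k",
    "-g", "60", "-keyint_min", "60",      # fixed GOP: 2 seconds at 30 fps
    "-c:a", "aac", "-b:a", "128k",
    "-movflags", "+frag_keyframe+empty_moov+default_base_moof",
    "-f", "mp4", "out_720.mp4",
    # 480p rendition
    "-map", "[v480]", "-map", "0:a",
    "-c:v", "libx264", "-b:v", "1200k",
    "-g", "60", "-keyint_min", "60",
    "-c:a", "aac", "-b:a", "96k",
    "-movflags", "+frag_keyframe+empty_moov+default_base_moof",
    "-f", "mp4", "out_480.mp4",
]
proc = subprocess.Popen(cmd, stdin=subprocess.PIPE)
# ...write bytes received from the SRT/RTMP ingest into proc.stdin...

Whether one box keeps up is mostly an encoding question: libx264 at several renditions per stream is CPU-heavy, so hundreds of concurrent ingests will realistically need either hardware encoders (e.g. -c:v h264_nvenc) or a pool of transcode workers.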


-
Why is my .mp4 file created using cv2.VideoWriter not syncing up with the audio when I combine the video and audio using ffmpeg [closed]
27 December 2024, by joeS125
The aim of the script is to take text from a text file and put it onto a stock video, with an AI voice reading the text - similar to those Reddit stories on social media with Minecraft parkour in the background.


import cv2
import time
from ffpyplayer.player import MediaPlayer
from Transcription import newTranscribeAudio
from pydub import AudioSegment

# get a GPT text generation to create a story based on a prompt, for example a sci-fi story, and spread it over 3-4 parts
# get stock footage, like Minecraft parkour etc.
# write the text of the script on the footage
# create a video for each part
# have an AI voiceover read the transcript
cap = cv2.VideoCapture("Stock_Videos\Minecraft_Parkour.mp4")
transcription = newTranscribeAudio("final_us.wav")
player = MediaPlayer("final_us.mp3")
audio = AudioSegment.from_file("final_us.mp3")
story = open("Story.txt", "r").read()
story_split = story.split("||")
fps = cap.get(cv2.CAP_PROP_FPS)
frame_count = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
video_duration = frame_count / fps  # duration of one loop of the video
fourcc = cv2.VideoWriter_fourcc(*"mp4v")
audio_duration = len(audio) / 1000  # duration in seconds
video_writer = cv2.VideoWriter("CompletedVideo.mp4", fourcc, fps, (1080, 1920))

choice = 0  # part of the story to use
part_split = story_split[choice].split(" ")
with open("Segment.txt", "w") as file:
    file.write(story_split[choice])
start_time = time.time()
length = len(part_split) - 1
next_text = []
for j in range(0, length):
    temp = part_split[j].replace("\n", "")
    next_text.append([temp])
index = 0
word_index = 0
frame_size_x = 1080
frame_size_y = 1920
audio_duration = len(audio) / 1000  # duration in seconds
start_time = time.time()
wait_time = 1 / fps
while (time.time() - start_time) < audio_duration:
    cap.set(cv2.CAP_PROP_POS_FRAMES, 0)  # restart video
    elapsed_time = time.time() - start_time
    print(video_writer)
    if index >= len(transcription):
        break
    while cap.isOpened():
        # capture frames in the video
        ret, frame = cap.read()
        if not ret:
            break
        audio_frame, val = player.get_frame()
        if val == 'eof':  # end of file
            print("Audio playback finished.")
            break
        if index >= len(transcription):
            break

        if frame_size_x == -1:
            frame_size_x = frame.shape[1]
            frame_size_y = frame.shape[0]

        elapsed_time = time.time() - start_time

        # describe the type of font to be used
        font = cv2.FONT_HERSHEY_SIMPLEX
        trans = transcription[index]["words"]
        end_time = trans[word_index]["end"]
        if trans[word_index]["start"] < elapsed_time < trans[word_index]["end"]:
            video_text = trans[word_index]["text"]
        elif elapsed_time >= trans[word_index]["end"]:
            # index += 1
            word_index += 1
            if word_index >= len(trans):
                index += 1
                word_index = 0
        # get the boundary of this text
        textsize = cv2.getTextSize(video_text, font, 3, 6)[0]
        # get coords based on the boundary
        textX = int((frame.shape[1] - textsize[0]) / 2)
        textY = int((frame.shape[0] + textsize[1]) / 2)

        cv2.putText(frame,
                    video_text,
                    (textX, textY),
                    font, 3,
                    (0, 255, 255),
                    6,
                    cv2.LINE_4)

        # define the resize scale
        scale_percent = 50  # resize to 50% of the original size
        # get new dimensions
        width = 1080
        height = 1920
        new_size = (width, height)

        # resize the frame
        resized_frame = cv2.resize(frame, new_size)
        video_writer.write(resized_frame)
        cv2.imshow('video', resized_frame)
        cv2.waitKey(wait_time)
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break
cv2.destroyAllWindows()
video_writer.release()
cap.release()




When I run this script, the audio matches the text in the video perfectly, and the script runs for the correct amount of time to match the audio (2 min 44 sec). However, the saved video CompletedVideo.mp4 only lasts 1 min 10 sec. I am unsure why the video has sped up. The frame rate is 60 fps. If you require any more information please let me know, and thanks in advance.


I have tried changing the fps and changing the wait_time after writing each frame. I am expecting CompletedVideo.mp4 to be 2 min 44 sec long, not 1 min 10 sec.
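A hedged sketch of one likely fix: the loop above is paced by wall-clock time and by waitKey, so in 164 seconds it only manages to write roughly 4200 frames, and a 4200-frame file declared at 60 fps plays back in about 70 seconds. Driving the loop by output frame count instead guarantees exactly fps * audio_duration frames regardless of how fast Python runs, and the audio can then be muxed on with ffmpeg. Variable names reuse the script above; this is a sketch under those assumptions, not a drop-in replacement:

import subprocess

# Sketch: write exactly one frame per output timestamp, independent of
# how fast the Python loop happens to run.
total_frames = int(fps * audio_duration)  # frames the finished file must hold
for k in range(total_frames):
    t = k / fps                            # the timestamp this frame represents
    ret, frame = cap.read()
    if not ret:                            # loop the stock footage when it ends
        cap.set(cv2.CAP_PROP_POS_FRAMES, 0)
        ret, frame = cap.read()
    # ...look up the word whose start <= t < end in `transcription` and draw
    #    it with cv2.putText exactly as in the script above...
    video_writer.write(cv2.resize(frame, (1080, 1920)))
video_writer.release()
cap.release()

# Mux the audio onto the finished video: copy the video stream, encode audio to AAC.
subprocess.run(["ffmpeg", "-y", "-i", "CompletedVideo.mp4", "-i", "final_us.mp3",
                "-c:v", "copy", "-c:a", "aac", "-shortest", "FinalVideo.mp4"],
               check=True)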