
Other articles (53)
-
The SPIPmotion queue
28 November 2010
A queue stored in the database
When it is installed, SPIPmotion creates a new table in the database named spip_spipmotion_attentes.
This new table consists of the following fields: id_spipmotion_attente, the unique numeric identifier of the task to be processed; id_document, the numeric identifier of the original document to be encoded; id_objet, the unique identifier of the object to which the encoded document will be attached automatically; objet, the type of object to which (...) -
Sites built with MediaSPIP
2 May 2011
This page presents some of the sites running MediaSPIP.
You can, of course, add yours using the form at the bottom of the page. -
HTML5 audio and video support
10 April 2011
MediaSPIP uses the HTML5 video and audio tags to play multimedia documents, taking advantage of the latest W3C innovations supported by modern browsers.
For older browsers, the Flowplayer Flash player is used instead.
The HTML5 player was created specifically for MediaSPIP: its appearance is fully customizable to match a chosen theme.
These technologies make it possible to deliver video and audio both on conventional computers (...)
On other sites (7725)
-
ffmpeg concatenating videos of different fps while keeping the total length unchanged
23 November 2017, by A_Matar
I want to pad an mp4 video stream with another video clip of a static image that I created using:

import os
import subprocess

def generate_white_vid(duration):
    # Build the output path, embedding the requested duration in the file name.
    output_filename = os.path.join(p_path, 'white_vid_' + "{0:.2f}".format(duration) + '.mp4')
    # Loop a static white image for `duration` seconds and encode it with x264.
    ffmpeg_create_vid_from_static_img = 'ffmpeg -loop 1 -i /path/WhiteBackground.jpg -c:v libx264 -t %f -pix_fmt yuv420p -vf scale=1920:1080 %s' % (duration, output_filename)
    p = subprocess.Popen(ffmpeg_create_vid_from_static_img, shell=True)
    p.communicate()
    return output_filename
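As a side note, a minimal way to check what duration the generated clip actually ends up with (a sketch assuming ffprobe is on the PATH, not part of the original question); a 0.42 s request may come back slightly different because the clip can only hold a whole number of frames:

import subprocess

def probe_duration(path):
    # Ask ffprobe for the container duration in seconds.
    out = subprocess.check_output([
        'ffprobe', '-v', 'error',
        '-show_entries', 'format=duration',
        '-of', 'default=noprint_wrappers=1:nokey=1', path])
    return float(out)

# e.g. probe_duration(generate_white_vid(0.42)) may report something close to,
# but not exactly, 0.42 once the length is rounded to whole frames.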
I use the following to concatenate:

def concat_vids(clip_paths):
    # Write the list file that ffmpeg's concat demuxer expects.
    filenames_txt = open('clips_to_join.txt', 'w')
    for clip in clip_paths:
        filenames_txt.write("file '" + clip + "'\n")
    filenames_txt.close()
    output_filename = clip_paths[0].split('.', 2)[0]
    output_file_path = os.path.join(root_path, output_filename + '-padded.mp4')
    # Join the clips without re-encoding.
    ffmpeg_command = ["ffmpeg", "-f", "concat", "-safe", "0", "-i", "clips_to_join.txt",
                      "-codec", "copy", output_file_path]  # output_filename = ch0X-start_time-end_time
    p = subprocess.Popen(ffmpeg_command)
    p.communicate()  # wait till the subprocess finishes; you can also send commands to the process
    return output_file_path

When I check the length of the resulting video after concatenation, I find that it is not equal to the sum of the two segments that I concatenated, and sometimes it is even shorter by a few seconds!
Here is how I get the video length in seconds:

import re
import subprocess

def ffmpeg_len(vid_path):
    '''
    Returns length in seconds using ffmpeg
    '''
    # Let ffmpeg print the file info and keep only the "Duration:" line.
    ffmpeg_get_mediafile_length = ['sh', '-c', 'ffmpeg -i "$1" 2>&1 | grep Duration', '_', vid_path]
    p = subprocess.Popen(ffmpeg_get_mediafile_length, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    output, err = p.communicate()
    length_regexp = r'Duration: (\d{2}):(\d{2}):(\d{2})(\.\d+),'
    re_length = re.compile(length_regexp)
    matches = re_length.search(output)
    if matches:
        video_length = int(matches.group(1)) * 3600 + \
                       int(matches.group(2)) * 60 + \
                       int(matches.group(3)) + float(matches.group(4))
        return video_length
    else:
        print("Can't determine video length.")
        print(err)
        raise SystemExit

My guess is that maybe the concatenation unifies the fps rate for all the clips to be joined; if this is the case, how do I prevent it from happening? How can I get a video of exactly the desired length?

It may be worth mentioning that the video to be padded is very short (0.42 seconds), the original video is 210.58 seconds, and the resulting video is 210.56 seconds! I have verified that ffmpeg does generate the desired padding region and that it is of the desired length, 0.42 (I got a 0.43 segment when I forced 30 fps, but that is okay).
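One way to narrow this down (a sketch assuming ffprobe is available; it is not from the original post) is to compare each input's own frame rate and duration with the concatenated result before suspecting the concat step itself:

import subprocess

def probe_stream_info(path):
    # Report the first video stream's average frame rate plus the container
    # duration, so per-clip numbers can be compared with the joined file.
    out = subprocess.check_output([
        'ffprobe', '-v', 'error', '-select_streams', 'v:0',
        '-show_entries', 'stream=avg_frame_rate:format=duration',
        '-of', 'default=noprint_wrappers=1', path])
    return out.decode()

# for clip in clips_to_join: print(clip, probe_stream_info(clip))
# If the padding clip's own duration already differs from the requested 0.42 s,
# the missing hundredths of a second appear before concatenation; forcing a
# frame rate on the padding clip (e.g. adding -r to the generation command)
# only changes how coarsely its length is rounded to whole frames.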
-
How to detect a blue screen in an ffmpeg video packet?
28 November 2017, by 심상원
Good morning. I have a question about FFMPEG.
I am using FFMPEG to study C++ on Linux.
When the camera feed is RTSP and the format is H.264, I would like to determine whether the camera image is a blue screen, but the following concepts are confusing:
-
A KeyFrame arrives every second or every X seconds. Does the camera still deliver a KeyFrame even if the image has not changed?
-
If the KeyFrame is delivered, is the size of the packets transmitted between those cycles zero?
-
If the above behaves the same as for a normal image, should I compare the individual frames after decoding?
If these questions miss the point, please let me know if there is a better approach.
Thank you.
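For what it is worth, point 3 (comparing decoded frames) can be sketched without touching the FFmpeg C API; the snippet below uses OpenCV to read the RTSP feed and flag frames that are close to a uniform blue. The URL and thresholds are placeholders, not from the original post:

import cv2

def looks_like_blue_screen(frame, flatness=12.0, margin=30):
    # A "blue screen" frame is nearly uniform (tiny per-channel deviation)
    # with the blue channel clearly above green and red.
    mean, std = cv2.meanStdDev(frame)        # OpenCV frames are B, G, R
    b_mean, g_mean, r_mean = mean.ravel()
    return float(std.max()) < flatness and \
        b_mean > g_mean + margin and b_mean > r_mean + margin

cap = cv2.VideoCapture('rtsp://camera.example/stream')  # placeholder URL
ok, frame = cap.read()
if ok:
    print(looks_like_blue_screen(frame))
cap.release()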
-
Cut AVI video via FFMPEG results in black screen video, but audio is OK
25 December 2017, by mipi
I want to trim an AVI video (H264 codec) via ffmpeg. The time interval for the result is given as START_TIME_ORIG and DURATION_ORIG (both in microseconds). To make sure that the resulting video starts with an IDR frame, I determine START_TIME and DURATION via ffprobe by executing
ffprobe -show_frames -pretty -read_intervals [TIME_FROM%TIME_TO] input.avi
twice, to get the IDR frames closest to (1st call) START_TIME_ORIG and (2nd call) START_TIME_ORIG+DURATION_ORIG. TIME_FROM and TIME_TO span an interval of plus/minus 5 seconds around (1st call) START_TIME_ORIG and (2nd call) START_TIME_ORIG+DURATION_ORIG. To identify a frame as an IDR frame I verify that key_frame=1 and pict_type=I. START_TIME is then set to the pkt_dts_time of that frame. DURATION is calculated in a similar way.
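For illustration, the ffprobe step described above could be scripted roughly like this (a sketch only: it assumes a JSON-capable ffprobe and that pkt_dts_time is reported for the video frames):

import json
import subprocess

def closest_idr_time(path, target, window=5.0):
    # Scan +/- `window` seconds around `target` and return the timestamp of
    # the IDR frame (key_frame=1, pict_type=I) closest to it.
    interval = '%f%%%f' % (max(target - window, 0.0), target + window)
    out = subprocess.check_output([
        'ffprobe', '-v', 'error', '-select_streams', 'v:0',
        '-show_frames', '-print_format', 'json',
        '-read_intervals', interval, path])
    frames = json.loads(out)['frames']
    idr_times = [float(f['pkt_dts_time']) for f in frames
                 if f.get('key_frame') == 1 and f.get('pict_type') == 'I'
                 and f.get('pkt_dts_time', 'N/A') != 'N/A']
    if not idr_times:
        raise ValueError('no IDR frame found near %.3f s' % target)
    return min(idr_times, key=lambda t: abs(t - target))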
Then ffmpeg is called:
ffmpeg -ss [START_TIME] -i input.avi -t [DURATION] -codec copy -reset_timestamps 1 -async 1 -map 0 -y output.avi
Unfortunately the resulting video shows only a black screen; the audio is OK. What is wrong with my approach?
Thanks, mipi