
Other articles (25)
-
Encoding and processing into web-friendly formats
13 April 2011
MediaSPIP automatically converts uploaded files to internet-compatible formats.
Video files are encoded in Ogv and WebM (supported by HTML5) and MP4 (supported by Flash).
Audio files are encoded in Ogg (supported by HTML5) and MP3 (supported by Flash).
Where possible, text is analyzed in order to extract the data needed for search-engine indexing, and then exported as a series of image files.
All uploaded files are stored online in their original format, so you can (...)
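As a rough illustration (the exact commands MediaSPIP runs are not shown in this excerpt), conversions of this kind could be scripted with ffmpeg; the file names, codecs and settings below are assumptions rather than MediaSPIP's actual configuration:

import subprocess

# Hypothetical source file; MediaSPIP's real pipeline and settings may differ.
src = "upload.avi"

# HTML5-friendly variants (WebM, Ogv) plus an MP4 fallback, as described above.
subprocess.run(["ffmpeg", "-i", src, "-c:v", "libvpx", "-c:a", "libvorbis", "out.webm"], check=True)
subprocess.run(["ffmpeg", "-i", src, "-c:v", "libtheora", "-c:a", "libvorbis", "out.ogv"], check=True)
subprocess.run(["ffmpeg", "-i", src, "-c:v", "libx264", "-c:a", "aac", "out.mp4"], check=True)
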
-
Support for all types of media
10 April 2011
Unlike many programs and other modern document-sharing platforms, MediaSPIP aims to handle as many different document formats as possible, whether they are: images (png, gif, jpg, bmp and others...); audio (MP3, Ogg, Wav and others...); video (Avi, MP4, Ogv, mpg, mov, wmv and others...); text content, code or other (OpenOffice, Microsoft Office (spreadsheet, presentation), web (html, css), LaTeX, Google Earth) (...)
-
List of compatible distributions
26 April 2011
The table below lists the Linux distributions compatible with the automated installation script of MediaSPIP.

Distribution name   Version name           Version number
Debian              Squeeze                6.x.x
Debian              Wheezy                 7.x.x
Debian              Jessie                 8.x.x
Ubuntu              The Precise Pangolin   12.04 LTS
Ubuntu              The Trusty Tahr        14.04

If you want to help us improve this list, you can provide us access to a machine whose distribution is not mentioned above or send the necessary fixes to add (...)
On other sites (6475)
-
ffmpeg takes a while to start
17 October 2020, by Suspended
I have this command in a Python script, in a loop:


ffmpeg -i somefile.mp4 -ss 00:03:12 -t 00:00:35 piece.mp4 -loglevel error -stats



It cuts out pieces of the input file (-i). The input filename, as well as the start time (-ss) and the length of the piece I cut out (-t), varies, so it reads a number of mp4 files and cuts out a number of pieces from each one. During execution of the script it might be called around 100 times. My problem is that each time before it starts, there is a delay of a few seconds, and it adds up to significant time. How can I get it to start immediately?
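
One thing worth checking here: in the command above, -ss is given after -i, so it acts as an output option and ffmpeg decodes and discards everything up to the cut point, which typically causes exactly this kind of start-up delay. Placing -ss before -i makes ffmpeg seek in the input instead. A minimal sketch of the adjusted call (mirroring the cmd3 list used in the script below; this is not code from the original post):

import subprocess

# Sketch only: -ss before -i seeks in the input (fast); -t still limits the
# length of the extracted piece.
cmd3 = ["ffmpeg", "-ss", "00:03:12", "-i", "somefile.mp4", "-t", "00:00:35",
        "piece.mp4", "-loglevel", "error", "-stats"]
subprocess.run(cmd3)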


The script (process_videos.py):


import subprocess
import sys
import math
import time

class TF:
    """TimeFormatter class (TF).
    This class' reason for being is to convert time in short
    form, e.g. 1:33, 0:32, or 23 into long form accepted by
    mp4cut function in bash, e.g. 00:01:22, 00:00:32, etc"""

    def toLong(self, shrt):
        """Converts time to its long form"""
        sx = '00:00:00'
        ladd = 8 - len(shrt)
        n = sx[:ladd] + shrt
        return n

    def toShort(self, lng):
        """Converts time to short form"""
        if lng[0] == '0' or lng[0] == ':':
            return self.toShort(lng[1:])
        else:
            return lng

    def toSeconds(self, any_time):
        """Converts time to seconds"""
        if len(any_time) < 3:
            return int(any_time)
        tt = any_time.split(':')
        if len(any_time) < 6:
            return int(tt[0])*60 + int(tt[1])
        return int(tt[0])*3600 + int(tt[1])*60 + int(tt[2])

    def toTime(self, secsInt):
        """"""
        tStr = ''
        hrs, mins, secs = 0, 0, 0
        if secsInt >= 3600:
            hrs = math.floor(secsInt / 3600)
            secsInt = secsInt % 3600
        if secsInt >= 60:
            mins = math.floor(secsInt / 60)
            secsInt = secsInt % 60
        secs = secsInt
        return str(hrs).zfill(2) + ':' + str(mins).zfill(2) + ':' + str(secs).zfill(2)

    def minus(self, t_start, t_end):
        """"""
        t_e = self.toSeconds(t_end)
        t_s = self.toSeconds(t_start)
        t_r = t_e - t_s
        hrs, mins, secs = 0, 0, 0
        if t_r >= 3600:
            hrs = math.floor(t_r / 3600)
            t_r = t_r - (hrs * 3600)
        if t_r >= 60:
            mins = math.floor(t_r / 60)
            t_r = t_r - (mins * 60)
        secs = t_r
        hrsf = str(hrs).zfill(2)
        minsf = str(mins).zfill(2)
        secsf = str(secs).zfill(2)
        t_fnl = hrsf + ':' + minsf + ':' + secsf
        return t_fnl


def go_main():
    tf = TF()
    vid_n = 0
    arglen = len(sys.argv)
    if arglen == 2:
        with open(sys.argv[1], 'r') as f_in:
            lines = f_in.readlines()
            start = None
            end = None
            cnt = 0
            for line in lines:
                if line[:5] == 'BEGIN':
                    start = cnt
                if line[:3] == 'END':
                    end = cnt
                cnt += 1
            if start == None or end == None:
                print('Invalid file format. start = {}, end = {}'.format(start, end))
                return
            else:
                lines_r = lines[start+1:end]
                del lines
                print('videos to process: {}'.format(len(lines_r)))
                f_out_prefix = ""
                for vid in lines_r:
                    vid_n += 1
                    print('\nProcessing video {}/{}'.format(vid_n, len(lines_r)))
                    f_out_prefix = 'v' + str(vid_n) + '-'
                    dat = vid.split('!')[1:3]
                    title = dat[0]
                    dat_t = dat[1].split(',')
                    v_pieces = len(dat_t)
                    piece_n = 0
                    video_pieces = []
                    cmd1 = "echo -n \"\" > tmpfile"
                    subprocess.run(cmd1, shell=True)
                    print(' new tmpfile created')
                    for v_times in dat_t:
                        piece_n += 1
                        f_out = f_out_prefix + str(piece_n) + '.mp4'
                        video_pieces.append(f_out)
                        print(' piece filename {} added to video_pieces list'.format(f_out))
                        v_times_spl = v_times.split('-')
                        v_times_start = v_times_spl[0]
                        v_times_end = v_times_spl[1]
                        t_st = tf.toLong(v_times_start)
                        t_dur = tf.toTime(tf.toSeconds(v_times_end) - tf.toSeconds(v_times_start))
                        cmd3 = ["ffmpeg", "-i", title, "-ss", t_st, "-t", t_dur, f_out, "-loglevel", "error", "-stats"]
                        print(' cutting out piece {}/{} - {}'.format(piece_n, len(dat_t), t_dur))
                        subprocess.run(cmd3)
                    for video_piece_name in video_pieces:
                        cmd4 = "echo \"file " + video_piece_name + "\" >> tmpfile"
                        subprocess.run(cmd4, shell=True)
                        print(' filename {} added to tmpfile'.format(video_piece_name))
                    vname = f_out_prefix[:-1] + ".mp4"
                    print(' name of joined file: {}'.format(vname))
                    cmd5 = "ffmpeg -f concat -safe 0 -i tmpfile -c copy joined.mp4 -loglevel error -stats"
                    to_be_joined = " ".join(video_pieces)
                    print(' joining...')
                    join_cmd = subprocess.Popen(cmd5, shell=True)
                    join_cmd.wait()
                    print(' joined!')
                    cmd6 = "mv joined.mp4 " + vname
                    rename_cmd = subprocess.Popen(cmd6, shell=True)
                    rename_cmd.wait()
                    print(' File joined.mp4 renamed to {}'.format(vname))
                    cmd7 = "rm " + to_be_joined
                    rm_cmd = subprocess.Popen(cmd7, shell=True)
                    rm_cmd.wait()
                    print('rm command completed - pieces removed')
                    cmd8 = "rm tmpfile"
                    subprocess.run(cmd8, shell=True)
                    print('tmpfile removed')
                print('All done')
    else:
        print('Incorrect number of arguments')


############################
if __name__ == '__main__':
    go_main()



process_videos.py is called from a bash terminal like this:


$ python process_videos.py video_data 



The video_data file has the following format:


BEGIN
!first_video.mp4!3-23,55-1:34,2:01-3:15,3:34-3:44!
!second_video.mp4!2-7,12-44,1:03-1:33!
END
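For reference, this is how one of those lines breaks down under the split('!') parsing the script uses (the variable names below are illustrative, not from the original post):

# One entry from video_data: !<input file>!<comma-separated start-end ranges>!
line = "!first_video.mp4!3-23,55-1:34,2:01-3:15,3:34-3:44!"

dat = line.split('!')[1:3]   # ['first_video.mp4', '3-23,55-1:34,2:01-3:15,3:34-3:44']
title = dat[0]               # input filename passed to ffmpeg via -i
ranges = [r.split('-') for r in dat[1].split(',')]
# ranges == [['3', '23'], ['55', '1:34'], ['2:01', '3:15'], ['3:34', '3:44']]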



My system details:


System: Host: snowflake Kernel: 5.4.0-52-generic x86_64 bits: 64 Desktop: Gnome 3.28.4
 Distro: Ubuntu 18.04.5 LTS
Machine: Device: desktop System: Gigabyte product: N/A serial: N/A
Mobo: Gigabyte model: Z77-D3H v: x.x serial: N/A BIOS: American Megatrends v: F14 date: 05/31/2012
CPU: Quad core Intel Core i5-3570 (-MCP-) cache: 6144 KB 
 clock speeds: max: 3800 MHz 1: 1601 MHz 2: 1601 MHz 3: 1601 MHz 4: 1602 MHz
Drives: HDD Total Size: 1060.2GB (55.2% used)
 ID-1: /dev/sda model: ST31000524AS size: 1000.2GB
 ID-2: /dev/sdb model: Corsair_Force_GT size: 60.0GB
Partition: ID-1: / size: 366G used: 282G (82%) fs: ext4 dev: /dev/sda1
 ID-2: swap-1 size: 0.70GB used: 0.00GB (0%) fs: swap dev: /dev/sda5
Info: Processes: 313 Uptime: 16:37 Memory: 3421.4/15906.9MB Client: Shell (bash) inxi: 2.3.56



-
"File doesn't exist" - streamio FFMPEG on screenshot after create method
3 May 2013, by dodgerogers747
I have videos being directly uploaded to S3 using Amazon's CORS configuration. Videos are uploaded via a dedicated S3 form; once they have been uploaded successfully, the URL of the video is appended to the @video.file hidden_field via JavaScript and then the video saves.
I can't get this after_save method to work; it takes a screenshot of the video and saves it to S3 via carrierwave after the video has been saved as a Rails object. (It was previously working using a carrierwave video upload instance.) It errors out with:

Errno::ENOENT - No such file or directory - the file 'http://bucket-name.s3.amazonaws.com/uploads/video/file/secure-random-hex/video_name.m4v' does not exist:

I have tried running this method as a class method to call it from the console, but it always comes back with the same error, even though the video exists. My bucket is set to public, read and write. How come it doesn't think the file exists?
If anyone needs more code just shout, thanks in advance.
application trace
Started POST "/videos" for 127.0.0.1 at 2013-05-03 10:48:07 -0700
Processing by VideosController#create as JS
Parameters: {"utf8"=>"✓", "authenticity_token"=>"MAHxrVcmPDtVIMfDWZBwL0YnzaAaAe1PTGip5M4OVoY=", "video"=>{"user_id"=>"5", "file"=>"http://bucket-name.s3.amazonaws.com/uploads/video/file/secure-random-hex/video.m4v"}}
User Load (0.3ms) SELECT `users`.* FROM `users` WHERE `users`.`id` = 5 LIMIT 1
(0.1ms) BEGIN
SQL (20.5ms) INSERT INTO `videos` (`created_at`, `file`, `question_id`, `screenshot`, `updated_at`, `user_id`) VALUES ('2013-05-03 17:48:07', 'http://teebox-network.s3.amazonaws.com/uploads/video/file/secure-random-hex/video.m4v', NULL, NULL, '2013-05-03 17:48:07', 5)
(44.0ms) ROLLBACK
Completed 500 Internal Server Error in 71ms
Errno::ENOENT - No such file or directory - the file 'http://teebox-network.s3.amazonaws.com/uploads/video/file/secure-random-hex/video.m4v' does not exist:
(gem) streamio-ffmpeg-0.9.0/lib/ffmpeg/movie.rb:10:in `initialize'
app/models/video.rb:25:in `new'
app/models/video.rb:25:in `take_screenshot'

video.rb
attr_accessible :user_id, :question_id, :file, :screenshot
belongs_to :question
belongs_to :user
default_scope order('created_at DESC')
after_create :take_screenshot
mount_uploader :screenshot, ImageUploader
validates_presence_of :user_id, :file

def take_screenshot
  FFMPEG.ffmpeg_binary = '/opt/local/bin/ffmpeg'
  movie = FFMPEG::Movie.new("#{self.file}")
  self.screenshot = movie.screenshot("#{Rails.root}/public/uploads/tmp/screenshots/#{File.basename(self.file)}.jpg", seek_time: 2)
  self.save!
end

videos/_form.html.erb
<form action="http://bucket-name.s3.amazonaws.com" data-remote="true" class="direct-upload" enctype="multipart/form-data" method="post">
<input type="hidden" />
<input type="hidden" value="ACCESS_KEY" />
<input type="hidden" value="public-read" />
<input type="hidden" />
<input type="hidden" />
<input type="hidden" value="201" />
<input type="file" />
</form>
<%= form_for @video, html: { multipart: true, id: "new_video" }, remote: true do |f| %>
<% if @video.errors.any? %>
<div>
<h2><%= pluralize(@video.errors.count, "error") %> prohibited this post from being saved:</h2>
<ul>
<% @video.errors.full_messages.each do |msg| %>
<li><%= msg %></li>
<% end %>
</ul>
</div>
<% end %>
<%= f.hidden_field :user_id, value: current_user.id %>
<%= f.hidden_field :file %><br />
<% end %>

ImageUploader
class ImageUploader < CarrierWave::Uploader::Base
  include CarrierWave::RMagick
  include Sprockets::Helpers::RailsHelper
  include Sprockets::Helpers::IsolatedHelper

  storage :fog

  before :store, :remember_cache_id
  after :store, :delete_tmp_dir

  def cache_dir
    Rails.root.join('public/uploads/tmp/')
  end

  def remember_cache_id(new_file)
    @cache_id_was = cache_id
  end

  def delete_tmp_dir(new_file)
    if @cache_id_was.present? && @cache_id_was =~ /\A[\d]{8}\-[\d]{4}\-[\d]+\-[\d]{4}\z/
      FileUtils.rm_rf(File.join(root, cache_dir, @cache_id_was))
    end
  end

  process resize_and_pad: [306, 150, '#000']

  def store_dir
    "uploads/#{model.class.to_s.underscore}/#{mounted_as}/#{model.id}"
  end

  def extension_white_list
    %w(jpg)
    # %w(ogg ogv 3gp mp4 m4v webm mov)
  end
end

-
Sending per frame metadata with H264 encoded frames
21 September 2013, by user2459280
We're looking for a way to send per-frame metadata (for example an ID) with H264-encoded frames from a server to a client.
We're currently developing a remote rendering application, where both client and server side are actively involved.
The server renders a high quality image with all effects, lighting etc.
The client also has model information and renders a diffuse image that is used when the bandwidth is too low or the images have to be warped in order to avoid stuttering. So far we're encoding the frames on the server side with ffmpeg and streaming them with live555 to the client, which receives an RTSP stream and decodes the frames again using ffmpeg.
For our application, we now need to send per frame metadata.
We want the client to tell the server where the camera is right now.
Ideally we'd be able to send the client's view matrix to the server, render the corresponding frame and send it back to the client together with its view matrix. So when the client receives a frame, we need to know exactly at what camera position the frame was rendered.
Alternatively we could also tag each view matrix with an ID, send it to the server, render the frame, tag it with the same ID and send it back. In this case we'd have to assign the right matrix to the frame again on the client side.
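To make that second variant concrete, the client-side bookkeeping is essentially a map from ID to view matrix, filled when a request is sent and consulted when the tagged frame comes back; a minimal sketch (the names and types below are made up for illustration, not code from the original question):

# Hypothetical client-side matching of frame IDs to view matrices.
pending_views = {}   # frame_id -> view matrix (e.g. a tuple of 16 floats)
next_id = 0

def register_view(view_matrix):
    """Tag a view matrix with a fresh ID before sending it to the server."""
    global next_id
    next_id += 1
    pending_views[next_id] = view_matrix
    return next_id

def matrix_for_frame(frame_id):
    """Recover the matrix a received frame was rendered with (or None)."""
    return pending_views.pop(frame_id, None)

# Usage sketch:
frame_id = register_view((1.0, 0.0, 0.0, 0.0))   # shortened matrix for brevity
assert matrix_for_frame(frame_id) == (1.0, 0.0, 0.0, 0.0)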
After several attempts to realize the above intent with ffmpeg, we came to the conclusion that ffmpeg does not provide the required functionality. ffmpeg only provides a fixed, predefined set of metadata fields that either cannot store a matrix or can only be set on every key frame, which is not frequent enough for our purpose.
Now we're considering using live555. So far we have an on-demand server, which gets a VideoSubsession with an H264VideoStreamDiscreteFramer that contains our own FramedSource class. In this class we load the encoded AVPacket (from ffmpeg) and send its data buffer over the network. Now we need a way to send some kind of metadata with every frame to the client.
Do you have any ideas how to solve this metadata problem with live555 or another library?
Thanks for your help!
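
One approach often used for this kind of problem (not something settled in the original post) is to carry the metadata inside the H.264 bitstream itself, as an SEI NAL unit of type "user data unregistered" (payload type 5) prepended to each frame's Annex-B data, so it travels through live555 and most decoders untouched. A rough sketch of building such a NAL unit; the application UUID and the 4-byte frame-ID payload are arbitrary choices for illustration:

import struct
import uuid

# Arbitrary application UUID; the receiver must look for the same 16 bytes.
APP_UUID = uuid.uuid5(uuid.NAMESPACE_URL, "example.invalid/frame-metadata").bytes

def _escape_emulation(data):
    """Insert 0x03 emulation-prevention bytes so the NAL body never
    contains a raw 0x000000..0x000003 sequence."""
    out, zeros = bytearray(), 0
    for b in data:
        if zeros >= 2 and b <= 0x03:
            out.append(0x03)
            zeros = 0
        out.append(b)
        zeros = zeros + 1 if b == 0 else 0
    return bytes(out)

def make_user_data_sei(payload):
    """Build an Annex-B SEI NAL unit (type 6, payload type 5) carrying payload."""
    body = APP_UUID + payload
    rbsp = bytearray([0x05])            # payload type: user_data_unregistered
    size = len(body)
    while size >= 255:                  # payload size, 0xFF-escaped
        rbsp.append(0xFF)
        size -= 255
    rbsp.append(size)
    rbsp += body
    rbsp.append(0x80)                   # rbsp_trailing_bits
    return b"\x00\x00\x00\x01\x06" + _escape_emulation(bytes(rbsp))

# Example: tag a frame with a 4-byte ID and prepend the SEI to its Annex-B data.
sei_nal = make_user_data_sei(struct.pack(">I", 42))
# annex_b_frame = sei_nal + encoded_frame_bytes  # then hand the buffer to live555

The receiving side would scan incoming NAL units for type 6 with the same UUID and read the frame ID (or a full view matrix) back out before or after decoding.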