
Other articles (35)
-
Publishing on MediaSPIP
13 June 2013 — Can I post content from an iPad tablet?
Yes, if your MediaSPIP installation is version 0.2 or later. If needed, contact your MediaSPIP administrator to find out.
-
Encoding and processing into web-friendly formats
13 April 2011 — MediaSPIP automatically converts uploaded files to internet-compatible formats.
Video files are encoded as MP4, OGV and WebM (supported by HTML5), with MP4 also supported by Flash.
Audio files are encoded as MP3 and OGG (supported by HTML5), with MP3 also supported by Flash.
Where possible, text is analyzed in order to extract the data needed for search-engine indexing, and is then exported as a series of image files.
All uploaded files are stored online in their original format, so you can (...)
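As a rough illustration of the kind of conversion described above (a sketch only, not MediaSPIP's actual pipeline; file names and ffmpeg flags are assumptions), via Python's subprocess module:

import subprocess

# Illustrative only: transcode an uploaded file into the HTML5-friendly
# formats mentioned above (WebM and MP4). Paths and flags are assumptions.
src = "upload.mov"
subprocess.run(["ffmpeg", "-i", src, "-c:v", "libvpx", "-c:a", "libvorbis", "out.webm"], check=True)
subprocess.run(["ffmpeg", "-i", src, "-c:v", "libx264", "-c:a", "aac", "-movflags", "+faststart", "out.mp4"], check=True)
-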
Accepted formats
28 January 2010 — The following commands give information about the formats and codecs handled by the local ffmpeg installation:
ffmpeg -codecs
ffmpeg -formats
Accepted input video formats
This list is not exhaustive; it highlights the main formats in use: h264: H.264 / AVC / MPEG-4 AVC / MPEG-4 part 10; m4v: raw MPEG-4 video format; flv: Flash Video (FLV) / Sorenson Spark / Sorenson H.263; Theora; wmv:
Possible output video formats
To begin with, we (...)
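As a quick illustration of how the two commands above could be used (a sketch, not part of the original article; it assumes ffmpeg is available on the PATH), one could check whether a given codec is listed by the local build:

import subprocess

def codec_supported(name: str) -> bool:
    # Parse the output of `ffmpeg -codecs` and look for the codec name.
    out = subprocess.run(
        ["ffmpeg", "-hide_banner", "-codecs"],
        capture_output=True, text=True, check=True,
    ).stdout
    return any(name in line.split() for line in out.splitlines())

print(codec_supported("h264"), codec_supported("theora"))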
On other sites (6890)
-
Start and end time of MoviePy's VideoClip not working
21 March 2024, by ernesto casco velazquez — I'm trying to add captions to a video. The desired outcome is to show each word at the exact moment it is being said.


I have a method that gives me the exact start and end time for each word:


import whisper


def get_words_per_time(audio_speech_file):
    # Transcribe with word-level timestamps and return one dict per word.
    model = whisper.load_model("base")
    transcribe = model.transcribe(
        audio=audio_speech_file, fp16=False, word_timestamps=True
    )
    segments = transcribe["segments"]
    words = []

    for seg in segments:
        for word in seg["words"]:
            words.append(
                {
                    "word": word["word"],
                    "start": word["start"],
                    "end": word["end"],
                    "prob": round(word["probability"], 4),
                }
            )
    return words
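For reference, a minimal way to exercise this helper (the audio path here is hypothetical):

# Hypothetical usage: print word-level timestamps for a narration file.
words = get_words_per_time("narration.mp3")
for w in words[:5]:
    print(f"{w['word']!r}: {w['start']:.2f}s -> {w['end']:.2f}s (p={w['prob']})")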



Then I have code that uses MoviePy to create a TextClip and assign a given start and end time to each pair of words (I know there are redundant statements, sorry):


from moviepy.editor import TextClip
from rich.progress import track  # assumed source of `track` (progress bar helper)


def generate_captions(
    words,
    font="Komika",
    fontsize=32,
    color="White",
    align="center",
    stroke_width=3,
    stroke_color="black",
):
    text_comp = []
    for i in track(range(0, len(words), 2), description="Creating captions..."):
        word1 = words[i]
        if i + 1 < len(words):
            word2 = words[i + 1]
        # One TextClip per pair of words, timed from the Whisper timestamps.
        text_clip = TextClip(
            f"{word1['word']} {word2['word'] if i + 1 < len(words) else ''}",
            font=font,  # Change font if not found
            fontsize=fontsize,
            color=color,
            align=align,
            method="caption",
            size=(660, None),
            stroke_width=stroke_width,
            stroke_color=stroke_color,
        )
        text_clip = text_clip.set_start(word1["start"])
        text_clip = text_clip.set_end(
            word2["end"] if i + 1 < len(words) else word1["end"]
        )
        text_comp.append(text_clip)
    return text_comp



Finally, I concatenate the caption clips into a single video:


vid_clip = CompositeVideoClip(
    [vid_clip, concatenate_videoclips(text_comp).set_position(("center", 860))]
)
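For completeness, this is roughly how the composite would then be written out (a sketch; the output name, fps and codecs are assumptions, not values from the original post):

# Hypothetical render step; filename and encoding settings are placeholders.
vid_clip.write_videofile("captioned.mp4", fps=24, codec="libx264", audio_codec="aac")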



The output is this, but you can clearly see that the words do not flow with the speech. They somehow move faster, as if the start/end times did not matter. Here's the video.


The words, with their respective start/end times, look like this:


[
 {
 'word': 'This',
 'start': 0.0,
 'end': 0.22,
 'prob': 0.805
 },
 {
 'word': 'is',
 'start': 0.22,
 'end': 0.42,
 'prob': 0.9991
 },
 {
 'word': 'a',
 'start': 0.42,
 'end': 0.6,
 'prob': 0.999
 },
 {
 'word': 'test,',
 'start': 0.6,
 'end': 1.04,
 'prob': 0.9939
 },
 {
 'word': 'to',
 'start': 1.18,
 'end': 1.3,
 'prob': 0.9847
 },
 {
 'word': 'show',
 'start': 1.3,
 'end': 1.54,
 'prob': 0.9971
 },
 {
 'word': 'words',
 'start': 1.54,
 'end': 1.9,
 'prob': 0.995
 },
 {
 'word': 'does',
 'start': 1.9,
 'end': 2.16,
 'prob': 0.997
 },
 {
 'word': 'not',
 'start': 2.16,
 'end': 2.4,
 'prob': 0.9978
 },
 {
 'word': 'appear.',
 'start': 2.4,
 'end': 2.82,
 'prob': 0.9984
 },
 {
 'word': 'At',
 'start': 3.46,
 'end': 3.6,
 'prob': 0.9793
 },
 {
 'word': 'their',
 'start': 3.6,
 'end': 3.8,
 'prob': 0.9984
 },
 {
 'word': 'proper',
 'start': 3.8,
 'end': 4.22,
 'prob': 0.9976
 },
 {
 'word': 'time.',
 'start': 4.22,
 'end': 4.72,
 'prob': 0.999
 },
 {
 'word': 'Thanks',
 'start': 5.04,
 'end': 5.4,
 'prob': 0.9662
 },
 {
 'word': 'for,',
 'start': 5.4,
 'end': 5.66,
 'prob': 0.9941
 },
 {
 'word': 'watching.',
 'start': 5.94,
 'end': 6.36,
 'prob': 0.7701
 }
]



What could be causing this?
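One detail worth checking, as a sketch under the assumption that the code shown above is complete: concatenate_videoclips plays clips back to back and recomputes their start times from the cumulative durations, so the per-clip set_start/set_end values coming from Whisper are effectively discarded (and the gaps between words disappear, which is why the captions run ahead of the speech). Overlaying the timed text clips directly in the CompositeVideoClip keeps their timing:

# Sketch: composite the timed text clips directly instead of concatenating them,
# so each clip keeps the start/end assigned from the Whisper timestamps.
vid_clip = CompositeVideoClip(
    [vid_clip] + [tc.set_position(("center", 860)) for tc in text_comp]
)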


-
Make better marketing decisions with attribution modeling
Do you suspect some traffic sources are not getting the credit they deserve? Do you want to know how much credit each of your marketing channels actually gets?
When you look at which referrers contribute the most to your goal conversions or purchases, Piwik shows you only the referrer of the last visit. However, in reality, a visitor often visits a website multiple times from different referrers before they convert a goal. Giving all credit to the referrer of the last visit ignores all other referrers that contributed to a conversion as well.
You can now push your marketing analysis to the next level with attribution modeling and finally discover the true value of all your marketing channels. As a result, you will be able to shift your marketing efforts and spending accordingly, maximize your success, and stop wasting resources. In marketing, this kind of analysis is called attribution modeling.
Get the true value of your referrers
Attribution is a premium feature that you can easily purchase from the Piwik marketplace.
Once installed, you will be able to:
- identify valuable referrers that you did not see before
- invest in potential new partners
- attribute conversions at a whole new level of detail
- set all of this up very easily by filling in just a couple of form fields
Identify valuable referrers that you did not see before
You probably have hundreds or even thousands of different sources listed within the referrer reports. We also guess that you have the feeling that it is always the same referrers that get credited with conversions.
Guess what: this data is probably biased, or at least not telling you the whole story.
Why? Because by default, Piwik attributes all credit to the last referrer. It is likely that many non-credited sources played a role in the conversion process as well, since people often visit your website several times before converting, and they may come from different referrers.
This is exactly where attribution modeling comes into play. With attribution modeling, you can decide which touchpoints you want to study. For example, you can choose to give credit to all the referrers a single visitor came from each time they visited your website, and not only look at the last one. Without this feature, chances are that you have spent too much money and/or effort on the wrong referrer channels in the past, because many referrers that contributed to conversions were ignored. Based on the insights you get by applying different attribution models, you can make better decisions on where to shift your marketing spending and efforts.
Invest in potential new partners
Once you apply different attribution models, you will find out that you need to consider a new list of referrers that you previously either over- or under-estimated in terms of how much they contributed to your conversions. You probably did not identify those sources before because Piwik shows only the last referrer before a conversion. You can now look at what these newly discovered referrers are saying about your company, look for any advertising programs they may offer, get in contact with the website owners, and more.
Apply up to 6 different attribution models
By default, Piwik attributes the conversion to the last referrer only. With attribution modeling you can analyze 6 different models:
- Last Interaction: the conversion is attributed to the last referrer, even if it is a direct access.
- Last Non-Direct: the conversion is attributed to the last referrer, excluding direct accesses.
- First Interaction: the conversion is attributed to the first referrer that brought the visitor to your site.
- Linear: whatever the number of referrers that contributed to the conversion, they all get the same value.
- Position Based: the first and last referrers are each attributed 40% of the conversion value; the remaining 20% is divided among the other referrers.
- Time Decay: the closer a referrer's visit is to the date of the conversion, the more credit it gets.
Those attribution models will enable you to analyze all your referrers in depth and increase your conversions; the short sketch below illustrates how each model splits the credit.
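To make the arithmetic behind these models concrete, here is a small illustrative sketch (not Piwik code; referrer names are made up) that distributes one conversion across an ordered list of referrer touchpoints:

def attribute(referrers, model="linear"):
    # Split one conversion (value 1.0) across ordered referrer touchpoints.
    n = len(referrers)
    if model == "last":            # Last Interaction
        weights = [0.0] * (n - 1) + [1.0]
    elif model == "first":         # First Interaction
        weights = [1.0] + [0.0] * (n - 1)
    elif model == "linear":        # Linear: equal share for every touchpoint
        weights = [1.0 / n] * n
    elif model == "position":      # Position Based: 40% first, 40% last, 20% shared in between
        if n == 1:
            weights = [1.0]
        elif n == 2:
            weights = [0.5, 0.5]
        else:
            weights = [0.4] + [0.2 / (n - 2)] * (n - 2) + [0.4]
    else:
        raise ValueError(model)
    return dict(zip(referrers, weights))

print(attribute(["google", "newsletter", "hongkiat.com"], model="position"))
# {'google': 0.4, 'newsletter': 0.2, 'hongkiat.com': 0.4}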
Let’s look at an example comparing two models: “Last Interaction” and “First Interaction”. Our goal is to identify whether some referrers that we currently consider less important are in fact playing a serious role in the total number of conversions:
Comparing Last Interaction model to First Interaction model
Here it is interesting to observe that the website www.hongkiat.com brings almost 90% more conversions under the First Interaction model than under the Last Interaction model.
As a result, we can look at this website and take the following actions:
- have a look at the message on this website
- look at opportunities to change the message
- look at opportunities to display extra marketing messages
- get in contact with the owner to identify any other communication opportunities
The Multi Channel Attribution report
Attribution modeling in Piwik does not require you to add any tracking code. The only thing you need is to install the plugin and let the magic happen.
“Simple as pie” is the phrase to keep in mind for this feature. Once installed, you will find the report within the Goals section, just above the goals you created:
The Multi Attribution menu
There you can select the attribution model you would like to apply or compare.
Attribution modeling is not just about playing with a new report. It is above all an opportunity to increase the number of conversions by identifying referrers that you may not have recognized as valuable in the past. To grow your business, it is crucial to identify the most (and least) successful channels correctly so you can spend your time and money wisely.
-
Unable to convert mov to mp4 using ffmpeg carrierwave in ruby
9 April 2021, by LearningROR — I am unable to convert MOV files to MP4 using CarrierWave and FFmpeg.


The file does get uploaded, but it is not converted to MP4, so in the browser only the sound plays, not the video.


What am I missing here? I have tried most of the solutions.


Code :


class VideoUploader < CarrierWave::Uploader::Base
  include CarrierWave::MiniMagick
  include CarrierWave::Video
  include CarrierWave::Video::Thumbnailer
  include CarrierWave::FFmpeg

  # Choose what kind of storage to use for this uploader:
  storage :file

  # Override the directory where uploaded files will be stored.
  # This is a sensible default for uploaders that are meant to be mounted:
  def store_dir
    "uploads/#{model.class.to_s.underscore}/#{mounted_as}/#{model.id}"
  end

  version :video, :if => :video? do
    process :encode
  end

  version :thumb do
    process thumbnail: [{format: 'jpg', quality: 8, size: 360, logger: Rails.logger, square: false}]

    def full_filename for_file
      jpg_name for_file, version_name
    end
  end

  def encode
    tmp_path = File.join(File.dirname(current_path), "tmpfile.mp4")
    movie = FFMPEG::Movie.new(current_path)
    movie.transcode(tmp_path, custom: %w(-c:v libx264 -c:a aac -vf format=yuv420p -movflags +faststart)) do |progress|
      puts progress
    end
    File.rename tmp_path, current_path
  end

  protected

  def video?(new_file)
    new_file.content_type.include? 'video'
  end
end



Logs :


I, [2021-04-09T15:33:24.783337 #9397] INFO -- : Running transcoding...
["/usr/local/bin/ffmpeg", "-y", "-i", "/Users/osx/workspace_ror/xxx-react/tmp/1617964403-459168728101935-0021-9213/video/sample_iTunes__1_.mov", "-c:v", "libx264", "-c:a", "aac", "-vf", "format=yuv420p", "-movflags", "+faststart", "/Users/osx/workspace_ror/xxx-react/tmp/1617964403-459168728101935-0021-9213/video/tmpfile.mp4"]

0.0
0.08304093567251461
0.1672514619883041
0.2327485380116959
0.3229239766081871
0.4008187134502924
0.45426900584795327
0.5033918128654971
0.5403508771929825
0.6100584795321637
0.663859649122807
0.711812865497076
0.7657309941520468
0.8196491228070175
0.87953216374269
0.9394152046783625
1.0
1.0
I, [2021-04-09T15:33:33.555508 #9397] INFO -- : Transcoding of /Users/osx/workspace_ror/xxx-react/tmp/1617964403-459168728101935-0021-9213/video/sample_iTunes__1_.mov to /Users/osx/workspace_ror/xxx-react/tmp/1617964403-459168728101935-0021-9213/video/tmpfile.mp4 succeeded

Running....ffmpegthumbnailer -i /Users/osx/workspace_ror/xxx-react/tmp/1617964403-459168728101935-0021-9213/thumb/sample_iTunes__1_.mov -o /Users/osx/workspace_ror/xxx-react/tmp/1617964403-459168728101935-0021-9213/thumb/tmpfile.jpg -c jpg -q 8 -s 360
Success!