
Media (1)
-
Collections - Quick creation form
19 February 2013
Updated: February 2013
Language: French
Type: Image
Other articles (42)
-
The SPIPmotion queue
28 November 2010
A queue stored in the database
When it is installed, SPIPmotion creates a new table in the database named spip_spipmotion_attentes.
This new table consists of the following fields: id_spipmotion_attente, the unique numeric identifier of the task to be processed; id_document, the numeric identifier of the original document to be encoded; id_objet, the unique identifier of the object to which the encoded document should be attached automatically; objet, the type of object to which (...)
-
Contribute to documentation
13 April 2011
Documentation is vital to the development of improved technical capabilities.
MediaSPIP welcomes documentation from users as well as developers, including:
- critiques of existing features and functions
- articles contributed by developers, administrators, content producers and editors
- screenshots to illustrate the above
- translations of existing documentation into other languages
To contribute, register to the project users' mailing (...)
-
Supporting all media types
13 April 2011
Unlike most software and media-sharing platforms, MediaSPIP aims to manage as many different media types as possible. The following are just a few examples from an ever-expanding list of supported formats:
- images: png, gif, jpg, bmp and more
- audio: MP3, Ogg, Wav and more
- video: AVI, MP4, OGV, mpg, mov, wmv and more
- text, code and other data: OpenOffice, Microsoft Office (Word, PowerPoint, Excel), web (html, CSS), LaTeX, Google Earth and (...)
On other sites (4456)
-
4K Screen Recording on 1080p Monitors [closed]
10 April, by Souhail Benlhachemi
I have created a basic Windows screen recording app (ffmpeg + GUI), but I noticed that the quality of the recording depends on the monitor used to record it: a video recorded on a full HD monitor looks different from one recorded on a 4K monitor (which is obvious).
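For reference, screen capture with ffmpeg on Windows typically goes through the gdigrab device, which grabs the desktop at whatever resolution the monitor is actually running; a minimal sketch (the frame rate, CRF value and output name are placeholders, not the poster's exact setup):

ffmpeg -f gdigrab -framerate 30 -i desktop -c:v libx264 -crf 18 -pix_fmt yuv420p capture.mp4

Since the capture happens at the desktop's native resolution, the 1080p and 4K recordings necessarily differ.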


There is not much difference between the two when playing the recorded video at 100% scale, but when I zoom to 150% or more, the difference between the two recordings (1920x1080 vs. 4K) is clearly visible.


I did some research on how to do 4K-quality screen recording on a full HD monitor, and here is what I found:


I played with the Windows Desktop Duplication API (the AcquireNextFrame function, which gives you the next frame from the swap chain). I successfully managed to convert the buffer to a PNG image and save it locally to my machine, but as you would expect, the quality was the same as a normal screenshot, because AcquireNextFrame returns the frame only after it has been rasterized.
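As a side note, recent ffmpeg builds expose this same Desktop Duplication API through the ddagrab source filter, so the frames can stay on the GPU during capture; a minimal sketch, assuming an ffmpeg 5.1+ Windows build with D3D11 support:

ffmpeg -f lavfi -i ddagrab,hwdownload,format=bgra -c:v libx264 -crf 18 capture.mp4

Like a raw AcquireNextFrame loop, though, this still hands over the already-rasterized desktop, so it does not solve the resolution problem either.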


Then I came across what's called the "graphics pipeline". I spent some time understanding the basics, and finally concluded that I would need to somehow intercept the pre-rasterization data (the data that comes before the Rasterizer Stage: geometry shaders, etc.), duplicate it, and do an off-screen render to a new 4K render target. But the Windows API doesn't allow that; there is no way to do it. The only option in the docs is the Stream Output Stage, but that is only useful if you want to render your own shaders, not the ones my display is using. (I tried to use MinHook to intercept the data, but no luck.)


After that, I tried a different approach: I managed to create a virtual display as an extended monitor with 4K resolution and record it using ffmpeg. But as you know, what I see on my main monitor is different from the virtual display (which shows only an empty desktop); I would have to drag and drop app windows onto that screen manually with my mouse, which creates a problem when recording: we are not seeing what we are recording. xD


I found some YouTube videos about DSR (Dynamic Super Resolution). I tried it in my NVIDIA Control Panel (manually, through the GUI) and it works: I managed to make the system believe I have a 4K monitor, and the quality of the recording was crystal clear. But I didn't find any way to do that programmatically using NVAPI, and there is no API for it on AMD.


Has anyone worked on a similar project, or does anyone know of a similar project I could use as a reference?


Any suggestions?


-
How to apply 'simple' opacity to combined (layered) MP4s in FFmpeg
27 May 2021, by Cam
I am not getting the final image results I need when layering multiple MP4s of the same length and format into a single output MP4. I am using ffmpeg to create a pseudo 'motion blur' effect on animation, and I need to layer the MP4s together with identical opacities to produce the final video.


I am using a base 'black' MP4 as the first layer for a background, and then adding a series of source MP4s with equal opacity over the top in each pass. Here I am showing a Photoshop mockup using its 'normal' blending mode, which is exactly the blending effect I am trying to replicate with ffmpeg. I understand that the final composite is less "bright", but that's fine (unless you have any ideas).



Instead of looking like the result above, I am getting output where the colors are all pink, garbled, far too dark, or hugely overbright, depending on which blend modes I try.


Here are the commands I am using:


To create the original (uncompressed?) 'black' MP4 from a sequence of black PNGs:


ffmpeg -start_number 0 -r 24 -f image2 -s 1920x1080 -i black_seq.%04d.png -vcodec libx264 -crf 0 -pix_fmt yuv420p black_seq.mp4 -y
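As an aside, a solid black clip like this can also be generated directly from ffmpeg's built-in color source, with no PNG sequence at all; a sketch, where the 10-second duration is a placeholder for the real clip length:

ffmpeg -f lavfi -i color=c=black:s=1920x1080:r=24 -t 10 -c:v libx264 -crf 0 -pix_fmt yuv420p black_seq.mp4 -y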



I then take that black_seq.mp4 and blend a set of n source MP4s over the top, each with an opacity value. This runs in a loop: the output.mp4 of each pass becomes the input.mp4 of the next pass until it completes. In this example, with a total of 10 source MP4s, the opacity is set to 0.1 for each pass, and a single pass is shown below. The source MP4s are all very similar in appearance and color, essentially the same animation sequence offset in time by fractions of a single frame, and they were generated from color PNGs using the same code that produced the first black layer (above).


ffmpeg -i input.mp4 -i n_layer.mp4 -vcodec libx264 -crf 0 -pix_fmt yuv420p -filter_complex "blend=all_mode='overlay':all_opacity=0.1" output.mp4 -y



Then, finally, I add some compression to the result to produce the final "blur.mp4":


ffmpeg -i "output.mp4" -vcodec libx264 -crf 25 -pix_fmt yuv420p "blur.mp4" -y



And yes, this is certainly a highly inefficient approach, but I am learning. The main issue I am trying to solve is that, despite the final blur.mp4 being less "bright", its colors do not match the original animation; instead, it looks as if the animation has somehow been hue-shifted.
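One likely culprit, for what it's worth: ffmpeg's blend filter works plane by plane, so on yuv420p input a mode like 'overlay' is applied to the chroma planes as well, which garbles colors. A sketch of a single pass that converts both inputs to planar RGB first and uses the plain 'normal' mode (the Photoshop-style blend the mockup shows), assuming the same looping scheme:

ffmpeg -i input.mp4 -i n_layer.mp4 -filter_complex "[0:v]format=gbrp[a];[1:v]format=gbrp[b];[a][b]blend=all_mode='normal':all_opacity=0.1,format=yuv420p" -c:v libx264 -crf 0 output.mp4 -y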


This image shows a cropped output for comparison (the processed blur is set to zero for clarity)



I would love some insight.


-
How to use qt-faststart in ffmpeg arguments while merging 2 flv files into 1 mp4 format
21 November 2014, by Mohsin Sharpen
I want to do pseudo-streaming of an MP4 file generated from 2 different FLVs. For that I am using the qt-faststart tool in ffmpeg. My generated file matches my requirements, but pseudo-streaming is still not working.
Here is my code, written in Ruby on Rails, which merges 2 different FLV files and then moves the result to its final location after generating the final MP4.
class VideoProcess < BaseJob
  @queue = :video_process
  @config_file = 'video_process'

  def self.perform(session_name)
    new(session_name).video_merge
  end

  def initialize(session_name)
    @session_name = session_name
    @candidate_file = "#{@session_name}candidate.flv"
    @expert_file = "#{@session_name}expert.flv"
    load_config
  end

  def video_merge
    left_video = @config[:src_path] + @candidate_file
    right_video = @config[:src_path] + @expert_file
    unless File.exist?(left_video)
      raise "The file '#{left_video}' does not exist!"
    end
    unless File.exist?(right_video)
      raise "The file '#{right_video}' does not exist!"
    end
    prepare_output_dir @config[:dest_path]
    output_video = "#{@config[:dest_path]}#{@session_name}.#{@config[:output_ext]}"
    filter = generate_filter
    args = strip_spaces %Q|
      -i "#{left_video}"
      -i "#{right_video}"
      -filter_complex "#{filter}"
      -map "[left+right]"
      -y
      -movflags faststart
      #{@config[:output_format]}
      #{output_video}
    |
    command = "#{@config[:command]} #{args} 2>&1" # 2>&1 redirects stderr into stdout
    output = `#{command}`
    puts output
    unless $?.success?
      raise output
    end
    Resque.enqueue(FileMove, output_video)
    # File.delete left_video, right_video
  end

  def generate_filter
    strip_spaces %Q!
      nullsrc=size=1040x400 [background];
      [0:v] setpts=PTS-STARTPTS, scale=520x400 [left];
      [1:v] setpts=PTS-STARTPTS, scale=520x400 [right];
      [background][left] overlay=shortest=1 [background+left];
      [background+left][right] overlay=shortest=1:x=520 [left+right];
      [0:1] [1:1] amerge
    !
  end

  def load_config
    super
    @config.merge!({
      command: @config[:ffmpeg],
      padding: @config[:video_w] + @config[:video_space],
      overlay: @config[:video_w] * 2 + @config[:video_space]
    })
  end

  # Collapse multiple spaces into one and remove newline characters
  def strip_spaces(string)
    string.gsub("\n", '').gsub(/\s+/, ' ').strip!
  end
end
Can someone look at my code and check whether I have set the -movflags faststart flag properly, or whether I need to do something else?
I am badly stuck, as I am not good with the Ruby on Rails/ffmpeg side of things.
Your help would be really appreciated.
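For reference, -movflags faststart (often written -movflags +faststart) is an output option, so it must appear after the inputs and before the output file name, which the generated args above do appear to satisfy; a minimal standalone sketch, with placeholder file names and the filter graph elided:

ffmpeg -i left.flv -i right.flv -filter_complex "..." -map "[left+right]" -movflags +faststart -y output.mp4

If the output still does not pseudo-stream, the server side (range-request support or the streaming module configuration) is worth checking, since the flag only relocates the moov atom to the front of the file.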