
Media (3)
-
MediaSPIP Simple: future default graphic theme?
26 September 2013
Updated: October 2013
Language: French
Type: Video
-
GetID3 - File information block
9 April 2013
Updated: May 2013
Language: French
Type: Image
-
GetID3 - Extra buttons
9 April 2013
Updated: April 2013
Language: French
Type: Image
Other articles (112)
-
Publishing on MédiaSpip
13 June 2013
Can I post content from an iPad tablet?
Yes, if your MédiaSpip installation is at version 0.2 or higher. If needed, contact the administrator of your MédiaSpip to find out.
-
Encoding and processing into web-friendly formats
13 April 2011
MediaSPIP automatically converts uploaded files to internet-compatible formats.
Video files are encoded in MP4, OGV and WebM (supported by HTML5) and MP4 (supported by Flash).
Audio files are encoded in MP3 and Ogg (supported by HTML5) and MP3 (supported by Flash).
Where possible, text is analyzed in order to retrieve the data needed for indexing by search engines, and the document is then exported as a series of image files.
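As an illustration, conversions of the kind described above can be sketched with ffmpeg (assuming an uploaded file named source.mov; the exact settings MediaSPIP uses may differ):
ffmpeg -i source.mov -c:v libx264 -c:a aac output.mp4
ffmpeg -i source.mov -c:v libtheora -c:a libvorbis output.ogv
ffmpeg -i source.mov -c:v libvpx -c:a libvorbis output.webm
ffmpeg -i source.mov -vn -c:a libmp3lame output.mp3
ffmpeg -i source.mov -vn -c:a libvorbis output.ogg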
All uploaded files are stored online in their original format, so you can (...)
-
Adding user-specific information and other author-related behavior changes
12 April 2011
The simplest way to add information to authors is to install the Inscription3 plugin. It also makes it possible to modify certain user-related behaviors (see its documentation for more information).
It is also possible to add fields to authors by installing the plugins "champs extras 2" and "Interface pour champs extras".
On other sites (7461)
-
Error: spawn process ffmpeg ChildProcessError
10 June 2019, by Karnon
I wanted to make a simple OBS-like program.
My ideal behavior is to create a child process in Node.js and run an FFmpeg command that sends a webcam stream to the YouTube Live RTMP server. The actual behavior, however, is an error raised by the child-process-promise module used in Node.js.
I've checked several related questions, but I don't have enough experience to understand them, and I hope there's a clear solution.
My guess is that the FFmpeg executable cannot be found in the Node environment. Or is calling it from the socket handler the problem?
I checked that the FFmpeg command works in a Windows command-prompt environment.
※ Note: FFmpeg's environment variable (its PATH entry) is registered.
Environment: Windows 10, Node.js, FFmpeg
The code is based on a simple WebSocket example.
When I first investigated, I thought that the only way to do this was to use "fluent-ffmpeg".
I tried "fluent-ffmpeg", but in a Windows environment I couldn't get my laptop's webcam working as a parameter of the "fluent-ffmpeg" command.
I've also thought about using WebRTC, but I think it's not suited for personal use because it's a P2P connection. (I also saw how to connect a peer connection to a WebRTC server like Janus, but I didn't have enough references to understand it.)
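For reference, the prompt check can be reproduced with FFmpeg's dshow device listing followed by a short test capture (a sketch; "HP Truevision HD" is my laptop's webcam and test.mp4 is a throwaway output file):
ffmpeg -list_devices true -f dshow -i dummy
ffmpeg -f dshow -i video="HP Truevision HD" -t 5 test.mp4
Note that the prompt strips the double quotes around the device name before FFmpeg sees them; they are not part of the name itself.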
Below is the problematic code.
const SocketIO = require("socket.io");
const ffmpeg = require("fluent-ffmpeg");
const spawn = require("child-process-promise").spawn;

module.exports = server => {
    const io = SocketIO(server, { path: "/socket.io" });
    io.on("connection", socket => {
        const req = socket.request;
        const ip = req.headers["x-forwarded-for"] || req.connection.remoteAddress;
        console.log("새로운 클라이언트 접속!", ip, socket.id, req.ip); // "New client connected!"
        socket.on("disconnect", () => {
            console.log("클라이언트 접속해제", ip, socket.id); // "Client disconnected"
            clearInterval(socket.interval);
        });
        socket.on("error", error => {
            console.error(error);
        });
        socket.on("reply", data => {
            console.log(data);
            ffmpeg_command();
        });
    });

    function ffmpeg_command() {
        let arg = [
            // silent placeholder audio from lavfi (anullsrc)
            "-f", "lavfi",
            "-i", "anullsrc=r=16000:cl=mono",
            // webcam input via DirectShow
            // (NB: spawn() passes argv verbatim, so the inner single quotes
            // reach dshow as part of the device name)
            "-f", "dshow",
            "-ac", "2",
            "-i", "video='HP Truevision HD'",
            "-s", "1280x720",
            "-r", "10",
            // H.264 video for the FLV/RTMP output
            "-vcodec", "libx264",
            "-pix_fmt", "yuv420p",
            "-preset", "ultrafast",
            "-r", "25",
            "-g", "20",
            "-b:v", "2500k",
            // MP3 audio
            "-codec:a", "libmp3lame",
            "-ar", "44100",
            "-threads", "6",
            "-b:a", "11025",
            "-bufsize", "512k",
            "-f", "flv",
            "rtmp://a.rtmp.youtube.com/live2/8dfu-69k0-dxyw-896q"
        ];
        spawn("ffmpeg", arg).catch(e => {
            console.log(e);
        });
    }
};

The expected result is that the webcam starts and the YouTube live stream works. Here's the error:
{ ChildProcessError: `ffmpeg -f lavfi -i anullsrc=r=16000:cl=mono -f dshow -ac 2 -i video='HP Truevision HD' -s 1280x720 -r 10 -vcodec libx264 -pix_fmt yuv420p -preset ultrafast -r 25 -g 20 -b:v 2500k -codec:a libmp3lame -ar 44100 -threads 6 -b:a 11025 -bufsize 512k -f flv rtmp://a.rtmp.youtube.com/live2/8dfu-69k0-dxyw-896q` failed with code 1
at ChildProcess.<anonymous> (C:\Users\Tricky\Desktop\Work\ESC\ESC_temp\node_modules\child-process-promise\lib\index.js:132:23)
at ChildProcess.emit (events.js:182:13)
at ChildProcess.cp.emit (C:\Users\Tricky\Desktop\Work\ESC\ESC_temp\node_modules\child-process-promise\node_modules\cross-spawn\lib\enoent.js:40:29)
at maybeClose (internal/child_process.js:962:16)
at Socket.stream.socket.on (internal/child_process.js:381:11)
at Socket.emit (events.js:182:13)
at Pipe._handle.close (net.js:606:12)
name: 'ChildProcessError',
code: 1,
childProcess:
ChildProcess {
_events: { error: [Function], close: [Function] },
_eventsCount: 2,
_maxListeners: undefined,
_closesNeeded: 3,
_closesGot: 3,
connected: false,
signalCode: null,
exitCode: 1,
killed: false,
spawnfile: 'ffmpeg',
_handle: null,
spawnargs:
[ 'ffmpeg',
'-f',
'lavfi',
'-i',
'anullsrc=r=16000:cl=mono',
'-f',
'dshow',
'-ac',
'2',
'-i',
'video=\'HP Truevision HD\'',
'-s',
'1280x720',
'-r',
'10',
'-vcodec',
'libx264',
'-pix_fmt',
'yuv420p',
'-preset',
'ultrafast',
'-r',
'25',
'-g',
'20',
'-b:v',
'2500k',
'-codec:a',
'libmp3lame',
'-ar',
'44100',
'-threads',
'6',
'-b:a',
'11025',
'-bufsize',
'512k',
'-f',
'flv',
'rtmp://a.rtmp.youtube.com/live2/8dfu-69k0-dxyw-896q' ],
pid: 18928,
stdin:
Socket {
connecting: false,
_hadError: false,
_handle: null,
_parent: null,
_host: null,
_readableState: [ReadableState],
readable: false,
_events: [Object],
_eventsCount: 1,
_maxListeners: undefined,
_writableState: [WritableState],
writable: false,
allowHalfOpen: false,
_sockname: null,
_pendingData: null,
_pendingEncoding: '',
server: null,
_server: null,
[Symbol(asyncId)]: 132,
[Symbol(lastWriteQueueSize)]: 0,
[Symbol(timeout)]: null,
[Symbol(kBytesRead)]: 0,
[Symbol(kBytesWritten)]: 0 },
stdout:
Socket {
connecting: false,
_hadError: false,
_handle: null,
_parent: null,
_host: null,
_readableState: [ReadableState],
readable: false,
_events: [Object],
_eventsCount: 2,
_maxListeners: undefined,
_writableState: [WritableState],
writable: false,
allowHalfOpen: false,
_sockname: null,
_pendingData: null,
_pendingEncoding: '',
server: null,
_server: null,
write: [Function: writeAfterFIN],
[Symbol(asyncId)]: 133,
[Symbol(lastWriteQueueSize)]: 0,
[Symbol(timeout)]: null,
[Symbol(kBytesRead)]: 0,
[Symbol(kBytesWritten)]: 0 },
stderr:
Socket {
connecting: false,
_hadError: false,
_handle: null,
_parent: null,
_host: null,
_readableState: [ReadableState],
readable: false,
_events: [Object],
_eventsCount: 2,
_maxListeners: undefined,
_writableState: [WritableState],
writable: false,
allowHalfOpen: false,
_sockname: null,
_pendingData: null,
_pendingEncoding: '',
server: null,
_server: null,
write: [Function: writeAfterFIN],
[Symbol(asyncId)]: 134,
[Symbol(lastWriteQueueSize)]: 0,
[Symbol(timeout)]: null,
[Symbol(kBytesRead)]: 1615,
[Symbol(kBytesWritten)]: 0 },
stdio: [ [Socket], [Socket], [Socket] ],
emit: [Function] },
stdout: undefined,
stderr: undefined }
-
H.264 to DNxHR 444 issue. Colors are not transcoded correctly (HDR project)
21 January 2020, by Raulo1985
I'm having an issue transcoding an H.264 UHD HDR file to a DNxHR file in an MXF container with FFmpeg. The issue is that the two files don't look the same at all: the colors look washed out in the DNxHR video, even though I tried to make the transcode as lossless as possible (the DNxHR 444 flavor). The original file is a movie I ripped a while ago: H.264, UHD, HDR, in an MKV container.
My goal: to create an almost lossless DNxHR file to use as the source file in Adobe Premiere Pro, with another, lower-quality DNxHR file as a proxy for editing. I want to do it that way rather than use the original H.264 as the source file because it's out of sync with the proxy file (when I toggle the proxy icon on and off, you can tell there's a short delay between them, which defeats the purpose of proxy editing). My guess is that this happens because H.264 is compressed and DNxHR isn't, and since I edit with a lot of fast cuts, I need the source file and the proxy file to be as tightly synced as possible. When the source file and the proxy file are both DNxHR, whatever the flavor, they are perfectly synced. I don't want to go with ProRes for the proxies, because the sync problem is much worse there (several seconds of delay between files), maybe because it's a VBR codec while my original file and DNxHR are CBR (for the record, I always prefer CBR).
Well, the thing is that when I import the original H.264 file into Premiere Pro, use a DNxHR proxy, edit a little, and export directly from the original file (H.264, 10-bit, with all the settings required for HDR output enabled), the colors look as they should. When I do the same with the high-quality DNxHR as the source file, with the exact same export settings, the colors look washed out. The same happens with every DNxHR flavor.
Then I opened both files (the original H.264 and the high-quality DNxHR transcoded from it) with VLC, and I can also tell that the MXF file looks washed out while the H.264 file doesn't. So it's not an export issue on Premiere's side; it's something in the original transcoding.
I understand that DNxHR 444 is as close to lossless as that codec gets, preserving all the required HDR data, and I believe the MXF container has some advantages over MOV, the other container that supports DNxHD/DNxHR. So I don't know what's happening, really.
The command I used was:
ffmpeg -channel_layout 63 -i input.mkv -map 0:0 -c:v dnxhd -vf "scale=in_range=limited:out_range=full" -color_range 2 -profile:v dnxhr_444 -pix_fmt yuv444p10le -acodec pcm_s24le -ar 48000 -ac 6 -channel_layout 63 -map 0:2 -hide_banner output.mxf
Like I said, after the transcoding both video files look very different from each other, color-wise. And after using them in Premiere and exporting with the exact same settings, the output files show the same difference.
MediaInfo shows the expected data for both files:
10-bit, Main 10, Level 5, 4:2:0, CBR, BT.2020 for the original H.264 file.
10-bit, 4:4:4, CBR for the DNxHR 444 file.
One thing I noticed in MediaInfo is that both have YUV as the color space, but the DNxHR 444 video has an extra field that says ColorSpace_Original : RGB. Honestly, I don't know what that means, since the original is YUV. The color range is fine, from 0 to 1023 (and chroma range 1023). The other thing is that it says "limited" in the color range field of the H.264 file, but I've read that this could be a bug or a misinterpretation of the file by MediaInfo.
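For completeness, the same fields can be cross-checked with ffprobe (a sketch; input.mkv and output.mxf stand in for my two files):
ffprobe -v error -select_streams v:0 -show_entries stream=pix_fmt,color_range,color_space,color_transfer,color_primaries -of default=noprint_wrappers=1 input.mkv
ffprobe -v error -select_streams v:0 -show_entries stream=pix_fmt,color_range,color_space,color_transfer,color_primaries -of default=noprint_wrappers=1 output.mxf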
Well, that's it; any help would be appreciated. I'd really like to edit with DNxHR 444 as the source file and DNxHR LB for the proxies, so I can edit at a fast pace and without sync issues, but the color is just not acceptable. I do understand that I'm adding an extra transcoding step (from the original to DNxHR), but the sync issue between the original and the DNxHR proxies, even if it's a delay of a fraction of a second, makes my workflow much harder, since I'd have to export many times to check that the cuts land exactly where I want them. Not ideal by any means. And ProRes is apparently not an option; its sync issue is much worse. For me, it all comes down to getting a DNxHR 444 file that looks as close to lossless as it can be, and that goal obviously includes the colors.
Thanks in advance.
PS: file size is not an issue for me, so having an entire UHD HDR movie transcoded to DNxHR 444 is not a problem.
PS2: I tried with a different chroma subsampling (DNxHR HQX 10-bit, which is 4:2:2); same result. Haven't tried 8-bit yet, but I don't see the point, since this is an HDR project.
EXTRA INFO:
1) FFprobe output for the MXF DNxHR file (this one is 4:2:2; the only difference from the command stated in the OP is that -pix_fmt yuv444p10le becomes -pix_fmt yuv422p10le):
libavutil 56. 31.100 / 56. 31.100
libavcodec 58. 54.100 / 58. 54.100
libavformat 58. 29.100 / 58. 29.100
libavdevice 58. 8.100 / 58. 8.100
libavfilter 7. 57.100 / 7. 57.100
libswscale 5. 5.100 / 5. 5.100
libswresample 3. 5.100 / 3. 5.100
libpostproc 55. 5.100 / 55. 5.100
[mxf @ 000001f4d17fbac0] Stream #0: not enough frames to estimate rate; consider increasing probesize
Input #0, mxf, from 'Interstellar_Master_DNxHR_444_UHD_422_PCM24_5.1.mxf':
Metadata:
operational_pattern_ul: 060e2b34.04010101.0d010201.01010900
uid : adab4424-2f25-4dc7-92ff-29bd000c0000
generation_uid : adab4424-2f25-4dc7-92ff-29bd000c0001
company_name : FFmpeg
product_name : OP1a Muxer
product_version : 58.29.100
product_uid : adab4424-2f25-4dc7-92ff-29bd000c0002
material_package_umid: 0x060A2B340101010501010D001393EE79529471348D93EE7900529471348D9300
timecode : 00:00:00:00
Duration: 02:49:03.97, start: 0.000000, bitrate: 1404833 kb/s
Stream #0:0: Video: dnxhd (DNXHR 444), yuv444p10le(bt709/unknown/unknown, progressive), 3840x2160, SAR 1:1 DAR 16:9, 23.98 tbr, 23.98 tbn, 23.98 tbc
Metadata:
file_package_umid: 0x060A2B340101010501010D001393EE79529471348D93EE7900529471348D9301
Stream #0:1: Audio: pcm_s24le, 48000 Hz, 6 channels, s32 (24 bit), 6912 kb/s
Metadata:
file_package_umid: 0x060A2B340101010501010D001393EE79529471348D93EE7900529471348D9301
2) FFprobe output for the MP4 H.264 source file (this one is 4:2:0, 10-bit, HDR):
Stream #0:0(eng): Video: hevc (Main 10) (hev1 / 0x31766568), yuv420p10le(tv, bt2020nc/bt2020/smpte2084), 3840x2160 [SAR 1:1 DAR 16:9], 15584 kb/s, 23.98 fps, 23.98 tbr, 16k tbn, 23.98 tbc (default)
Metadata:
handler_name : VideoHandler
Stream #0:1(eng): Audio: ac3 (ac-3 / 0x332D6361), 48000 Hz, 5.1(side), fltp, 640 kb/s (default)
Metadata:
handler_name : SoundHandler
Side data:
audio service type: main
Stream #0:2(eng): Data: bin_data (text / 0x74786574)
Metadata:
handler_name : SubtitleHandler
Unsupported codec with id 100359 for input stream 2
-
Record video stream in Rust
19 November 2024, by El_Loco
I have bought a stereo camera with global shutter and a frame rate of at most 120 fps: https://www.amazon.com/dp/B0D8T3ZSL4?ref_=pe_386300_442618370_TE_sc_as_ri_0#


My next step is to write a program that can show and record video at the desired fps and resolution.


use opencv::{
    core, highgui,
    prelude::*,
    videoio::{self, VideoCapture},
    Result,
};

fn open_camera() -> Result<VideoCapture> {
    // index 2 = the stereo camera (/dev/video2)
    let capture = videoio::VideoCapture::new(2, videoio::CAP_ANY)?;
    Ok(capture)
}

fn main() -> Result<()> {
    let window = "video capture";
    highgui::named_window(window, highgui::WINDOW_AUTOSIZE)?;
    let mut cam = open_camera()?;
    let opened = videoio::VideoCapture::is_opened(&cam)?;
    if !opened {
        panic!("Unable to open default camera!");
    }
    let width = 3200.0;
    let height = 1200.0;
    cam.set(videoio::CAP_PROP_FRAME_WIDTH, width)?;
    cam.set(videoio::CAP_PROP_FRAME_HEIGHT, height)?;

    // Set the frame rate (FPS)
    let fps = 60.0;
    // NB: `fps` is only used by the VideoWriter below; nothing here asks the
    // capture device itself for 60 fps (or for the MJPG pixel format).

    let fourcc = videoio::VideoWriter::fourcc('M', 'J', 'P', 'G')?;
    let mut writer = videoio::VideoWriter::new(
        "video_output.avi",
        fourcc,
        fps,
        core::Size::new(width as i32, height as i32),
        true,
    )?;

    if !writer.is_opened()? {
        println!("Error: Could not open the video writer.");
    }

    let mut frame = core::Mat::default();
    let mut ctr = 0;
    while cam.read(&mut frame)? {
        if frame.empty() {
            break;
        }
        writer.write(&frame)?;
        highgui::imshow(window, &frame)?;

        let key = highgui::wait_key(1)?;
        if key > 0 {
            break;
        }
        ctr += 1;
        if ctr == 600 {
            break;
        }
    }
    cam.release()?;
    writer.release()?;
    Ok(())
}


When I run this code the frame rate is terrible, like 1 fps or so. For debugging I tried running the camera in Cheese. There I got 30 fps at the full resolution of 3200x1200, but as far as I can see I cannot change the fps to 60.

Then I tried to capture a video using ffmpeg:


ffmpeg -f v4l2 -framerate 60 -video_size 3200x1200 -i /dev/video2 output.mp4


With the following output:


[video4linux2,v4l2 @ 0x5a72cbbd1400] The driver changed the time per frame from 1/60 to 1/2
Input #0, video4linux2,v4l2, from '/dev/video2':
 Duration: N/A, start: 2744.250608, bitrate: 122880 kb/s
 Stream #0:0: Video: rawvideo (YUY2 / 0x32595559), yuyv422, 3200x1200, 122880 kb/s, 2 fps, 2 tbr, 1000k tbn
File 'output.mp4' already exists. Overwrite? [y/N]



The frame rate is lowered to 2 fps.


Then I tried to run v4l2-ctl --list-formats-ext -d 2, with the following output:

ioctl: VIDIOC_ENUM_FMT
 Type: Video Capture

 [0]: 'MJPG' (Motion-JPEG, compressed)
 Size: Discrete 3200x1200
 Interval: Discrete 0.017s (60.000 fps)
 Interval: Discrete 0.033s (30.000 fps)
 Interval: Discrete 0.040s (25.000 fps)
 Interval: Discrete 0.050s (20.000 fps)
 Interval: Discrete 0.067s (15.000 fps)
 Interval: Discrete 0.100s (10.000 fps)
 Size: Discrete 2560x720
 Interval: Discrete 0.017s (60.000 fps)
 Interval: Discrete 0.033s (30.000 fps)
 Interval: Discrete 0.040s (25.000 fps)
 Interval: Discrete 0.050s (20.000 fps)
 Interval: Discrete 0.067s (15.000 fps)
 Interval: Discrete 0.100s (10.000 fps)
 Size: Discrete 1600x600
 Interval: Discrete 0.008s (120.000 fps)
 Interval: Discrete 0.017s (60.000 fps)
 Interval: Discrete 0.033s (30.000 fps)
 Interval: Discrete 0.040s (25.000 fps)
 Interval: Discrete 0.050s (20.000 fps)
 Interval: Discrete 0.067s (15.000 fps)



I then tried to open the camera using qv4l2, and there it seemed to work. It does not seem like I can record a video with it, though.
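One thing I have not fully tried is forcing the format and frame rate directly with v4l2-ctl before opening the camera (a sketch, untested on my side):
v4l2-ctl -d /dev/video2 --set-fmt-video=width=3200,height=1200,pixelformat=MJPG --set-parm=60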

I am using Rust to learn. I want to be able to record a video programmatically somehow and then do computer vision. The easiest would be to do it in Rust, but other solutions are OK.


Edit: I have found out some more this morning:


v4l2-ctl -d 2 --list-formats-ext
ioctl: VIDIOC_ENUM_FMT
 Type: Video Capture

 [0]: 'MJPG' (Motion-JPEG, compressed)
 Size: Discrete 3200x1200
 Interval: Discrete 0.017s (60.000 fps)
 Interval: Discrete 0.033s (30.000 fps)
 Interval: Discrete 0.040s (25.000 fps)
 Interval: Discrete 0.050s (20.000 fps)
 Interval: Discrete 0.067s (15.000 fps)
 Interval: Discrete 0.100s (10.000 fps)

 [1]: 'YUYV' (YUYV 4:2:2)
 Size: Discrete 3200x1200
 Interval: Discrete 0.500s (2.000 fps)
 Size: Discrete 2560x720
 Interval: Discrete 0.500s (2.000 fps)
 Size: Discrete 1600x600
 Interval: Discrete 0.100s (10.000 fps)



I also found here that the order of the flags is important for ffmpeg. Running this, I can actually record a video at 60 fps:

ffmpeg -framerate 60 -f v4l2 -video_size 3200x1200 -input_format mjpeg -i /dev/video2 output.avi


A drawback is that the images do not look very sharp; you can clearly see the pixels. (I am new to video formats etc. as well; before, things just worked.)


If I change from avi to mkv, it is slow again.

In the link above I also saw a suggestion to first do:


ffmpeg -framerate 60 -f v4l2 -video_size 3200x1200 -input_format mjpeg -i /dev/video2 -c copy mjpeg.mkv


and then:


ffmpeg -i mjpeg.mkv -c:v libx264 -crf 23 -preset medium -pix_fmt yuv420p out.mkv


which worked. But I am not sure those flags are ideal for the camera I have. I think it is a good start to get it running as expected from the command line with ffmpeg, so that I know which format to use and that it actually works as intended before doing it programmatically.
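For what it's worth, the two steps can presumably be combined into a single command, assuming the machine can keep up with real-time x264 encoding (untested on my side):
ffmpeg -framerate 60 -f v4l2 -video_size 3200x1200 -input_format mjpeg -i /dev/video2 -c:v libx264 -preset ultrafast -crf 23 -pix_fmt yuv420p output.mkv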