Advanced search

Media (0)

Word: - Tags -/serveur

No media matching your criteria is available on this site.

Other articles (112)

  • Publishing on MediaSPIP

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, if your MediaSPIP installation is at version 0.2 or higher. If needed, contact the administrator of your MediaSPIP to find out.

  • Encoding and processing into web-friendly formats

    13 April 2011

    MediaSPIP automatically converts uploaded files to internet-compatible formats.
    Video files are encoded in MP4, Ogv and WebM (supported by HTML5) and MP4 (supported by Flash).
    Audio files are encoded in MP3 and Ogg (supported by HTML5) and MP3 (supported by Flash).
    Where possible, text is analyzed in order to retrieve the data needed for search engine detection, and then exported as a series of image files.
    All uploaded files are stored online in their original format, so you can (...)
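
    MediaSPIP drives these conversions internally; purely as an illustration (these are not MediaSPIP's actual commands, and the filenames are hypothetical), equivalent FFmpeg invocations could look like:

    # Illustrative only, not MediaSPIP's real pipeline
    ffmpeg -i source.avi -c:v libvpx -b:v 1M -c:a libvorbis web.webm    # WebM (HTML5)
    ffmpeg -i source.avi -c:v libtheora -q:v 6 -c:a libvorbis web.ogv   # Ogv (HTML5)
    ffmpeg -i source.avi -c:v libx264 -crf 23 -c:a aac web.mp4          # MP4 (HTML5/Flash)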

  • Adding user-specific information and other changes to author-related behaviour

    12 April 2011

    The simplest way to add information to authors is to install the Inscription3 plugin. It also lets you change certain user-related behaviours (refer to its documentation for more information).
    It is also possible to add fields to authors by installing the plugins champs extras 2 and Interface pour champs extras.

On other sites (7461)

  • H.264 to DNxHR 444 issue. Colors are not transcoded correctly (HDR project). Note: issue not solved yet

    22 January 2020, by Raulo1985

    I’m having an issue transcoding an H.264 UHD HDR file to a DNxHR file in an MXF container with FFmpeg. The issue is that the two files don’t look the same at all: the colors look washed out on the DNxHR video, even though I tried to make the transcode as lossless as possible (the DNxHR 444 flavor). The original file is a movie I ripped a while ago: H.264, UHD, HDR, in an MKV container.

    My goal: to create an almost lossless DNxHR file to use as the source file in Adobe Premiere Pro, plus another, lower-quality DNxHR file as a proxy for editing. I wanted to do it that way, rather than use the original H.264 as the source file, because the H.264 is out of sync with the proxy file (when I toggle the proxy icon on and off, you can tell there’s a short delay between them, which defeats the whole purpose of proxies for editing). My guess is that this is because H.264 is compressed and DNxHR isn’t, and since I edit with a lot of fast cuts, I need the source file and the proxy file to be as tightly synced as possible. When the source file and the proxy file are both DNxHR, whatever the flavor, they are perfectly synced. I don’t want to use ProRes for the proxies, because the sync problem is a lot worse there (several seconds of delay between files), maybe because it’s a VBR codec while my original file and DNxHR are CBR (for the record, I always prefer CBR).

    Well, the thing is that when I import the original H.264 file into Premiere Pro, use a DNxHR proxy, edit a little, and export directly from the original file (H.264, 10 bits, with all the settings required for HDR output enabled), the colors look as they should. When I do the same with the high-quality DNxHR as the source file, with the exact same export settings, the colors look washed out. The same happens with any DNxHR flavor.

    Then I opened both files (the original H.264 and the high-quality DNxHR transcoded from it) with VLC, and I can also tell that the MXF file looks washed out while the H.264 file doesn’t. So it’s not an export issue on Premiere’s side; it’s something to do with the original transcoding.

    I understand that DNxHR 444 is as lossless as you can get with that codec, preserving all the required HDR data, and I believe the MXF container has some advantages over MOV, the other container that supports DNxHD/DNxHR. So I don’t know what’s happening, really.

    The command I used was:

    ffmpeg -channel_layout 63 -i input.mkv -map 0:0 -c:v dnxhd -vf "scale=in_range=limited:out_range=full" -color_range 2 -profile:v dnxhr_444 -pix_fmt yuv444p10le -acodec pcm_s24le -ar 48000 -ac 6 -channel_layout 63 -map 0:2 -hide_banner output.mxf

    Like I said, after the transcoding the two video files look very different from each other, color-wise. And after using them in Premiere and exporting with the exact same settings, the output files show the same difference.

    Mediainfo shows the expected data for both files:
    - 10 bits, Main 10, Level 5, 4:2:0, CBR, BT.2020 for the original H.264 file.
    - 10 bits, 4:4:4, CBR for the DNxHR 444 file.

    One thing I noticed in Mediainfo is that both have YUV as the color space, but the DNxHR 444 video has an extra field that says ColorSpace_Original: RGB. Honestly, I don’t know what that means, since the original is YUV. The color range is fine, from 0 to 1023 (and chroma range 1023). The other thing is that it says "limited" in the color range field of the H.264 file, but I’ve read that this could be a bug or a misinterpretation of the file by Mediainfo.
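
    For what it's worth, a quick way to compare what each file is actually tagged with (a hedged suggestion; the fields are standard ffprobe stream entries) is:

    ffprobe -v error -select_streams v:0 -show_entries stream=pix_fmt,color_range,color_space,color_transfer,color_primaries -of default=noprint_wrappers=1 output.mxf

    Running the same command on input.mkv puts the two sets of tags side by side.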

    Well, that’s it, any help would be appreciated. I’d really like to edit with DNxHR 444 as the source file and DNxHR LB for the proxies, so I can edit at a fast pace and without sync issues, but the color is just not acceptable. And I do understand that I’m adding an extra transcoding step (from the original to DNxHR), but the sync issue between the original and the DNxHR proxies, even if the delay is only a fraction of a second, makes my workflow a lot harder, since I’d have to export many times to check that the cuts land exactly where I want them. Not ideal by any means. And ProRes is apparently not an option; the sync issue there is a lot worse. For me, it all comes down to getting a DNxHR 444 file that looks, well, as close to lossless as it can be, and that goal obviously includes the colors.

    Thanks in advance.

    PS: file size is not an issue for me, so having an entire UHD HDR movie transcoded to DNxHR 444 is not a problem.

    PS2: I tried with a different chroma subsampling (like DNxHR HQX 10 bits, which is 4:2:2), same result. Haven’t tried with 8 bits yet, but I don’t see the point since this is an HDR project.

    UPDATE:

    I tried transcoding from the H.264 source file to a DNxHR video in an MXF container using Adobe Media Encoder instead of FFmpeg, and the colors are again not transcoded correctly, but this time they seem oversaturated instead of washed out. Adobe Media Encoder doesn’t give much room for tweaking, but I made sure to select the 444 10-bit profile, the same resolution (UHD), the same frame rate, and to render with maximum quality and maximum bit depth. FFprobe output of the resulting file again shows BT.709 as the color space (the same thing happens with the output file after transcoding with FFmpeg). So it seems to be something not specific to FFmpeg. Any ideas? It’s like there’s no way I can transcode from H.264 to DNxHR and retain the colors correctly, even using its highest-quality flavor and what look to me like correct command settings. How can I post this so that developers or people with lots of experience can give us a clue about what’s happening? Thanks.
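
    One thing that might be worth trying (a hedged sketch, not a confirmed fix: it assumes the washed-out look comes from the DNxHR output being tagged bt709/unknown, as the ffprobe dump below shows, rather than BT.2020/PQ) is to drop the range-expanding scale filter and tag the colorimetry explicitly on the output:

    ffmpeg -i input.mkv -map 0:0 -map 0:2 -c:v dnxhd -profile:v dnxhr_444 -pix_fmt yuv444p10le -color_range tv -color_primaries bt2020 -color_trc smpte2084 -colorspace bt2020nc -c:a pcm_s24le -ar 48000 -ac 6 -hide_banner output.mxf

    Whether those tags survive the MXF muxer (and whether Premiere honours them) is something the ffprobe check above should reveal.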

    PS: More potentially useful info in the comments below.

    EXTRA INFO:

    1) FFprobe output of the MXF DNxHR file (this one is 4:2:2; the only difference from the command in the OP is -pix_fmt yuv422p10le instead of -pix_fmt yuv444p10le):

     libavutil      56. 31.100 / 56. 31.100
     libavcodec     58. 54.100 / 58. 54.100
     libavformat    58. 29.100 / 58. 29.100
     libavdevice    58.  8.100 / 58.  8.100
     libavfilter     7. 57.100 /  7. 57.100
     libswscale      5.  5.100 /  5.  5.100
     libswresample   3.  5.100 /  3.  5.100
     libpostproc    55.  5.100 / 55.  5.100
    [mxf @ 000001f4d17fbac0] Stream #0: not enough frames to estimate rate; consider increasing probesize
    Input #0, mxf, from 'Interstellar_Master_DNxHR_444_UHD_422_PCM24_5.1.mxf':
     Metadata:
       operational_pattern_ul: 060e2b34.04010101.0d010201.01010900
       uid             : adab4424-2f25-4dc7-92ff-29bd000c0000
       generation_uid  : adab4424-2f25-4dc7-92ff-29bd000c0001
       company_name    : FFmpeg
       product_name    : OP1a Muxer
       product_version : 58.29.100
       product_uid     : adab4424-2f25-4dc7-92ff-29bd000c0002
       material_package_umid: 0x060A2B340101010501010D001393EE79529471348D93EE7900529471348D9300
       timecode        : 00:00:00:00
     Duration: 02:49:03.97, start: 0.000000, bitrate: 1404833 kb/s
       Stream #0:0: Video: dnxhd (DNXHR 444), yuv444p10le(bt709/unknown/unknown, progressive), 3840x2160, SAR 1:1 DAR 16:9, 23.98 tbr, 23.98 tbn, 23.98 tbc
       Metadata:
         file_package_umid: 0x060A2B340101010501010D001393EE79529471348D93EE7900529471348D9301
       Stream #0:1: Audio: pcm_s24le, 48000 Hz, 6 channels, s32 (24 bit), 6912 kb/s
       Metadata:
         file_package_umid: 0x060A2B340101010501010D001393EE79529471348D93EE7900529471348D9301

    2) FFprobe output of the MP4 H.264 source file (this one is 4:2:0, 10 bits, HDR):

       Stream #0:0(eng): Video: hevc (Main 10) (hev1 / 0x31766568), yuv420p10le(tv, bt2020nc/bt2020/smpte2084), 3840x2160 [SAR 1:1 DAR 16:9], 15584 kb/s, 23.98 fps, 23.98 tbr, 16k tbn, 23.98 tbc (default)
       Metadata:
         handler_name    : VideoHandler
       Stream #0:1(eng): Audio: ac3 (ac-3 / 0x332D6361), 48000 Hz, 5.1(side), fltp, 640 kb/s (default)
       Metadata:
         handler_name    : SoundHandler
       Side data:
         audio service type: main
       Stream #0:2(eng): Data: bin_data (text / 0x74786574)
       Metadata:
         handler_name    : SubtitleHandler
    Unsupported codec with id 100359 for input stream 2
  • ffmpeg - strange timestamps after concatenation

    25 August 2020, by codefox

    I want to cut a scene from a video into another file while avoiding lengthy re-encoding. To do that, I generate two sub-videos: the first is re-encoded from the start position to the nearest keyframe, and the second is stream-copied (without encoding) from that keyframe to the end position. I then merge the two files with concat, but the result is not ideal: the first frame does not start at 0.0 s and is longer than expected. These issues do not appear when the audio stream is disabled while generating the two videos.
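
    The concat list file itself is not shown in the printout below; presumably (an assumption based on the output paths used in the commands) c:/db/list.txt contains something like:

    file 'C:/db/v0.mp4'
    file 'C:/db/v1.mp4'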

    


    Printout:

    


    C:\ffmpeg\x86>ffmpeg -ss 0ms -to 33ms -i C:\db\test.mp4 -profile:v baseline -y -video_track_timescale 30030 C:/db/v0.mp4 -hide_banner
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'C:\db\test.mp4':
  Metadata:
    major_brand     : isom
    minor_version   : 512
    compatible_brands: isomiso2avc1mp41
    encoder         : Lavf57.71.100
  Duration: 00:00:09.67, start: 0.000000, bitrate: 429 kb/s
    Stream #0:0(und): Video: h264 (Constrained Baseline) (avc1 / 0x31637661), yuv420p(tv, bt709), 640x360 [SAR 1:1 DAR 16:9], 323 kb/s, 30 fps, 30 tbr, 30030 tbn, 60 tbc (default)
    Metadata:
      handler_name    : VideoHandler
    Stream #0:1(und): Audio: aac (LC) (mp4a / 0x6134706D), 44100 Hz, stereo, fltp, 96 kb/s (default)
    Metadata:
      handler_name    : SoundHandler
Stream mapping:
  Stream #0:0 -> #0:0 (h264 (native) -> h264 (libx264))
  Stream #0:1 -> #0:1 (aac (native) -> aac (native))
Press [q] to stop, [?] for help
[libx264 @ 03d5f100] using SAR=1/1
[libx264 @ 03d5f100] using cpu capabilities: MMX2 SSE2Fast SSSE3 SSE4.2
[libx264 @ 03d5f100] profile Constrained Baseline, level 3.0, 4:2:0, 8-bit
[libx264 @ 03d5f100] 264 - core 160 - H.264/MPEG-4 AVC codec - Copyleft 2003-2020 - http://www.videolan.org/x264.html - options: cabac=0 ref=3 deblock=1:0:0 analyse=0x1:0x111 me=hex subme=7 psy=1 psy_rd=1.00:0.00 mixed_ref=1 me_range=16 chroma_me=1 trellis=1 8x8dct=0 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_offset=-2 threads=6 lookahead_threads=1 sliced_threads=0 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=0 weightp=0 keyint=250 keyint_min=25 scenecut=40 intra_refresh=0 rc_lookahead=40 rc=crf mbtree=1 crf=23.0 qcomp=0.60 qpmin=0 qpmax=69 qpstep=4 ip_ratio=1.40 aq=1:1.00
Output #0, mp4, to 'C:/db/v0.mp4':
  Metadata:
    major_brand     : isom
    minor_version   : 512
    compatible_brands: isomiso2avc1mp41
    encoder         : Lavf58.49.100
    Stream #0:0(und): Video: h264 (libx264) (avc1 / 0x31637661), yuv420p, 640x360 [SAR 1:1 DAR 16:9], q=-1--1, 30 fps, 30030 tbn, 30 tbc (default)
    Metadata:
      handler_name    : VideoHandler
      encoder         : Lavc58.97.100 libx264
    Side data:
      cpb: bitrate max/min/avg: 0/0/0 buffer size: 0 vbv_delay: N/A
    Stream #0:1(und): Audio: aac (LC) (mp4a / 0x6134706D), 44100 Hz, stereo, fltp, 128 kb/s (default)
    Metadata:
      handler_name    : SoundHandler
      encoder         : Lavc58.97.100 aac
frame=    1 fps=0.0 q=29.0 Lsize=       9kB time=00:00:00.03 bitrate=1994.2kbits/s speed=1.12x
video:7kB audio:1kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 16.875153%
[libx264 @ 03d5f100] frame I:1     Avg QP:26.34  size:  6797
[libx264 @ 03d5f100] mb I  I16..4: 51.3%  0.0% 48.7%
[libx264 @ 03d5f100] coded y,uvDC,uvAC intra: 34.2% 43.3% 4.0%
[libx264 @ 03d5f100] i16 v,h,dc,p: 39% 21% 16% 24%
[libx264 @ 03d5f100] i4 v,h,dc,ddl,ddr,vr,hd,vl,hu: 30% 21% 17%  5%  7%  5%  8%  4%  3%
[libx264 @ 03d5f100] i8c dc,h,v,p: 58% 19% 18%  4%
[libx264 @ 03d5f100] kb/s:1631.28
[aac @ 03d80a80] Qavg: 212.419

C:\ffmpeg\x86>ffmpeg -ss 900ms -to 1100ms -i C:\db\test.mp4 -y -c copy C:/db/v1.mp4 -hide_banner
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'C:\db\test.mp4':
  Metadata:
    major_brand     : isom
    minor_version   : 512
    compatible_brands: isomiso2avc1mp41
    encoder         : Lavf57.71.100
  Duration: 00:00:09.67, start: 0.000000, bitrate: 429 kb/s
    Stream #0:0(und): Video: h264 (Constrained Baseline) (avc1 / 0x31637661), yuv420p(tv, bt709), 640x360 [SAR 1:1 DAR 16:9], 323 kb/s, 30 fps, 30 tbr, 30030 tbn, 60 tbc (default)
    Metadata:
      handler_name    : VideoHandler
    Stream #0:1(und): Audio: aac (LC) (mp4a / 0x6134706D), 44100 Hz, stereo, fltp, 96 kb/s (default)
    Metadata:
      handler_name    : SoundHandler
Output #0, mp4, to 'C:/db/v1.mp4':
  Metadata:
    major_brand     : isom
    minor_version   : 512
    compatible_brands: isomiso2avc1mp41
    encoder         : Lavf58.49.100
    Stream #0:0(und): Video: h264 (Constrained Baseline) (avc1 / 0x31637661), yuv420p(tv, bt709), 640x360 [SAR 1:1 DAR 16:9], q=2-31, 323 kb/s, 30 fps, 30 tbr, 30030 tbn, 30030 tbc (default)
    Metadata:
      handler_name    : VideoHandler
    Stream #0:1(und): Audio: aac (LC) (mp4a / 0x6134706D), 44100 Hz, stereo, fltp, 96 kb/s (default)
    Metadata:
      handler_name    : SoundHandler
Stream mapping:
  Stream #0:0 -> #0:0 (copy)
  Stream #0:1 -> #0:1 (copy)
Press [q] to stop, [?] for help
frame=    6 fps=0.0 q=-1.0 Lsize=      23kB time=00:00:00.18 bitrate=1018.0kbits/s speed= 183x
video:19kB audio:3kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 7.228971%

C:\ffmpeg\x86>ffmpeg -f concat -i c:/db/list.txt -c copy -y c:/db/out.mp4 -hide_banner
[mov,mp4,m4a,3gp,3g2,mj2 @ 03544800] Auto-inserting h264_mp4toannexb bitstream filter
Input #0, concat, from 'c:/db/list.txt':
  Duration: N/A, start: -0.008209, bitrate: 1923 kb/s
    Stream #0:0(und): Video: h264 (Constrained Baseline) (avc1 / 0x31637661), yuv420p, 640x360 [SAR 1:1 DAR 16:9], 1778 kb/s, 30 fps, 30 tbr, 30030 tbn, 60 tbc
    Metadata:
      handler_name    : VideoHandler
    Stream #0:1(und): Audio: aac (LC) (mp4a / 0x6134706D), 44100 Hz, stereo, fltp, 144 kb/s
    Metadata:
      handler_name    : SoundHandler
Output #0, mp4, to 'c:/db/out.mp4':
  Metadata:
    encoder         : Lavf58.49.100
    Stream #0:0(und): Video: h264 (Constrained Baseline) (avc1 / 0x31637661), yuv420p, 640x360 [SAR 1:1 DAR 16:9], q=2-31, 1778 kb/s, 30 fps, 30 tbr, 30030 tbn, 30030 tbc
    Metadata:
      handler_name    : VideoHandler
    Stream #0:1(und): Audio: aac (LC) (mp4a / 0x6134706D), 44100 Hz, stereo, fltp, 144 kb/s
    Metadata:
      handler_name    : SoundHandler
Stream mapping:
  Stream #0:0 -> #0:0 (copy)
  Stream #0:1 -> #0:1 (copy)
Press [q] to stop, [?] for help
[mov,mp4,m4a,3gp,3g2,mj2 @ 03544800] Auto-inserting h264_mp4toannexb bitstream filter
frame=    7 fps=0.0 q=-1.0 Lsize=      31kB time=00:00:00.23 bitrate=1083.9kbits/s speed=58.3x
video:26kB audio:3kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 5.516988%

C:\ffmpeg\x86>ffprobe c:\db\v0.mp4 -show_entries frame=pict_type,pkt_pts_time,pkt_duration_time -select_streams v:0 -of csv=print_section=0 -hide_banner
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'c:\db\v0.mp4':
  Metadata:
    major_brand     : isom
    minor_version   : 512
    compatible_brands: isomiso2avc1mp41
    encoder         : Lavf58.49.100
  Duration: 00:00:00.04, start: 0.000000, bitrate: 1815 kb/s
    Stream #0:0(und): Video: h264 (Constrained Baseline) (avc1 / 0x31637661), yuv420p, 640x360 [SAR 1:1 DAR 16:9], 1778 kb/s, 30 fps, 30 tbr, 30030 tbn, 60 tbc (default)
    Metadata:
      handler_name    : VideoHandler
    Stream #0:1(und): Audio: aac (LC) (mp4a / 0x6134706D), 44100 Hz, stereo, fltp, 144 kb/s (default)
    Metadata:
      handler_name    : SoundHandler
0.000000,0.033333,I


C:\ffmpeg\x86>ffprobe c:\db\v1.mp4 -show_entries frame=pict_type,pkt_pts_time,pkt_duration_time -select_streams v:0 -of csv=print_section=0 -hide_banner
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'c:\db\v1.mp4':
  Metadata:
    major_brand     : isom
    minor_version   : 512
    compatible_brands: isomiso2avc1mp41
    encoder         : Lavf58.49.100
  Duration: 00:00:00.21, start: 0.000000, bitrate: 891 kb/s
    Stream #0:0(und): Video: h264 (Constrained Baseline) (avc1 / 0x31637661), yuv420p(tv, bt709), 640x360 [SAR 1:1 DAR 16:9], 765 kb/s, 30 fps, 30 tbr, 30030 tbn, 60 tbc (default)
    Metadata:
      handler_name    : VideoHandler
    Stream #0:1(und): Audio: aac (LC) (mp4a / 0x6134706D), 44100 Hz, stereo, fltp, 99 kb/s (default)
    Metadata:
      handler_name    : SoundHandler
0.000000,0.033267,I

0.033267,0.033333,P
0.066600,0.033400,P
0.100000,0.033300,P
0.133300,0.033300,P
0.166600,0.033400,P

C:\ffmpeg\x86>ffprobe c:\db\out.mp4 -show_entries frame=pict_type,pkt_pts_time,pkt_duration_time -select_streams v:0 -of csv=print_section=0 -hide_banner
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'c:\db\out.mp4':
  Metadata:
    major_brand     : isom
    minor_version   : 512
    compatible_brands: isomiso2avc1mp41
    encoder         : Lavf58.49.100
  Duration: 00:00:00.26, start: 0.000000, bitrate: 984 kb/s
    Stream #0:0(und): Video: h264 (Constrained Baseline) (avc1 / 0x31637661), yuv420p, 640x360 [SAR 1:1 DAR 16:9], 880 kb/s, 28.93 fps, 30 tbr, 30030 tbn, 60 tbc (default)
    Metadata:
      handler_name    : VideoHandler
    Stream #0:1(und): Audio: aac (LC) (mp4a / 0x6134706D), 44100 Hz, stereo, fltp, 104 kb/s (default)
    Metadata:
      handler_name    : SoundHandler
0.007992,0.041991,I

0.049983,0.033267,I

0.083250,0.033333,P
0.116583,0.033400,P
0.149983,0.033300,P
0.183283,0.033300,P
0.216583,0.025408,P


    


    How can I generate these files and merge them with stream copy so that the final video starts at 0.0 and has uniform frame durations?
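
    Two things that might help (hedged suggestions, not verified against these exact files): the concat input above reports start: -0.008209, so asking the muxer to shift timestamps to zero, and keeping the output on the same track timescale as the parts, could be worth a try:

    ffmpeg -f concat -i c:/db/list.txt -c copy -avoid_negative_ts make_zero -video_track_timescale 30030 -y c:/db/out.mp4 -hide_banner

    The AAC encoder delay (priming samples) added when the first segment is re-encoded is another plausible source of the offset, which would fit the observation that the problem disappears when audio is disabled.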

    


  • Node.js Error: spawn process ffmpeg ChildProcessError

    12 June 2019, by Karnon

    I wanted to make a simple program like OBS.

    The intended behavior is for Node.js to create a child process and run an FFmpeg command that sends my webcam stream to the YouTube Live RTMP server. In practice, however, the child-process-promise module used in Node.js throws an error.

    I’ve checked several questions, but I don’t have enough experience to understand them, and I hope there’s a clear solution.

    My guess is that the FFmpeg executable cannot be found in the Node environment. Or is calling it from the socket environment the problem?

    I checked that the FFmpeg command works from a Windows command prompt.

    ※ Note: FFmpeg is registered in the environment variables.

    Environment: Windows 10, Node.js, FFmpeg
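
    Two quick sanity checks (assuming a standard Windows setup) are to confirm, from the same console that launches Node, that the binary actually resolves:

    where ffmpeg
    ffmpeg -version

    If both work there, a missing PATH entry is probably not the cause of the error shown further down.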

    The code is based on a simple WebSocket example.

    When I first investigated, I thought the only way to do this was to use "fluent-ffmpeg".

    I tried "fluent-ffmpeg", but I couldn’t get my laptop’s webcam working as an input for the fluent-ffmpeg command in a Windows environment.

    I’ve also thought about using WebRTC, but I think it’s not for personal use because it’s a P2P connection. (I also saw how to connect a peer connection to a WebRTC server like Janus, but I didn’t have enough references to understand it.)

    Below is the problematic code.

    const SocketIO = require("socket.io");
    const ffmpeg = require("fluent-ffmpeg");
    const spawn = require("child-process-promise").spawn;

    module.exports = server => {
     const io = SocketIO(server, { path: "/socket.io" });

     io.on("connection", socket => {
       const req = socket.request;
       const ip = req.headers["x-forwarded-for"] || req.connection.remoteAddress;
        console.log("New client connected!", ip, socket.id, req.ip);
       socket.on("disconnect", () => {
          console.log("Client disconnected", ip, socket.id);
         clearInterval(socket.interval);
       });
       socket.on("error", error => {
         console.error(error);
       });
       socket.on("reply", data => {
         console.log(data);
         ffmpeg_command();
       });
     });

     function ffmpeg_command() {
       let arg = [
         "-f",
         "lavfi",
         "-i",
         "anullsrc=r=16000:cl=mono",
         "-f",
         "dshow",
         "-ac",
         "2",
         "-i",
         "video='HP Truevision HD'",
         "-s",
         "1280x720",
         "-r",
         "10",
         "-vcodec",
         "libx264",
         "-pix_fmt",
         "yuv420p",
         "-preset",
         "ultrafast",
         "-r",
         "25",
         "-g",
         "20",
         "-b:v",
         "2500k",
         "-codec:a",
         "libmp3lame",
         "-ar",
         "44100",
         "-threads",
         "6",
         "-b:a",
         "11025",
         "-bufsize",
         "512k",
         "-f",
         "flv",
         "rtmp://a.rtmp.youtube.com/live2/8dfu-69k0-dxyw-896q"
       ];
       spawn("ffmpeg", arg).catch(e => {
         console.log(e);
       });
     }
    };

    Here’s the error. The expected result was that the webcam would work and the YouTube live stream would start.

    { ChildProcessError: `ffmpeg -f lavfi -i anullsrc=r=16000:cl=mono -f dshow -ac 2 -i video='HP Truevision HD' -s 1280x720 -r 10 -vcodec libx264 -pix_fmt yuv420p -preset ultrafast -r 25 -g 20 -b:v 2500k -codec:a libmp3lame -ar 44100 -threads 6 -b:a 11025 -bufsize 512k -f flv rtmp://a.rtmp.youtube.com/live2/8dfu-69k0-dxyw-896q` failed with code 1
       at ChildProcess.<anonymous> (C:\Users\Tricky\Desktop\Work\ESC\ESC_temp\node_modules\child-process-promise\lib\index.js:132:23)
       at ChildProcess.emit (events.js:182:13)
       at ChildProcess.cp.emit (C:\Users\Tricky\Desktop\Work\ESC\ESC_temp\node_modules\child-process-promise\node_modules\cross-spawn\lib\enoent.js:40:29)
       at maybeClose (internal/child_process.js:962:16)
       at Socket.stream.socket.on (internal/child_process.js:381:11)
       at Socket.emit (events.js:182:13)
       at Pipe._handle.close (net.js:606:12)
     name: 'ChildProcessError',
     code: 1,
     childProcess:
      ChildProcess {
        _events: { error: [Function], close: [Function] },
        _eventsCount: 2,
        _maxListeners: undefined,
        _closesNeeded: 3,
        _closesGot: 3,
        connected: false,
        signalCode: null,
        exitCode: 1,
        killed: false,
        spawnfile: 'ffmpeg',
        _handle: null,
        spawnargs:
         [ 'ffmpeg',
           '-f',
           'lavfi',
           '-i',
           'anullsrc=r=16000:cl=mono',
           '-f',
           'dshow',
           '-ac',
           '2',
           '-i',
           'video=\'HP Truevision HD\'',
           '-s',
           '1280x720',
           '-r',
           '10',
           '-vcodec',
           'libx264',
           '-pix_fmt',
           'yuv420p',
           '-preset',
           'ultrafast',
           '-r',
           '25',
           '-g',
           '20',
           '-b:v',
           '2500k',
           '-codec:a',
           'libmp3lame',
           '-ar',
           '44100',
           '-threads',
           '6',
           '-b:a',
           '11025',
           '-bufsize',
           '512k',
           '-f',
           'flv',
           'rtmp://a.rtmp.youtube.com/live2/8dfu-69k0-dxyw-896q' ],
        pid: 18928,
        stdin:
         Socket {
           connecting: false,
           _hadError: false,
           _handle: null,
           _parent: null,
           _host: null,
           _readableState: [ReadableState],
           readable: false,
           _events: [Object],
           _eventsCount: 1,
           _maxListeners: undefined,
           _writableState: [WritableState],
           writable: false,
           allowHalfOpen: false,
           _sockname: null,
           _pendingData: null,
           _pendingEncoding: '',
           server: null,
           _server: null,
           [Symbol(asyncId)]: 132,
           [Symbol(lastWriteQueueSize)]: 0,
           [Symbol(timeout)]: null,
           [Symbol(kBytesRead)]: 0,
           [Symbol(kBytesWritten)]: 0 },
        stdout:
         Socket {
           connecting: false,
           _hadError: false,
           _handle: null,
           _parent: null,
           _host: null,
           _readableState: [ReadableState],
           readable: false,
           _events: [Object],
           _eventsCount: 2,
           _maxListeners: undefined,
           _writableState: [WritableState],
           writable: false,
           allowHalfOpen: false,
           _sockname: null,
           _pendingData: null,
           _pendingEncoding: '',
           server: null,
           _server: null,
           write: [Function: writeAfterFIN],
           [Symbol(asyncId)]: 133,
           [Symbol(lastWriteQueueSize)]: 0,
           [Symbol(timeout)]: null,
           [Symbol(kBytesRead)]: 0,
           [Symbol(kBytesWritten)]: 0 },
        stderr:
         Socket {
           connecting: false,
           _hadError: false,
           _handle: null,
           _parent: null,
           _host: null,
           _readableState: [ReadableState],
           readable: false,
           _events: [Object],
           _eventsCount: 2,
           _maxListeners: undefined,
           _writableState: [WritableState],
           writable: false,
           allowHalfOpen: false,
           _sockname: null,
           _pendingData: null,
           _pendingEncoding: '',
           server: null,
           _server: null,
           write: [Function: writeAfterFIN],
           [Symbol(asyncId)]: 134,
           [Symbol(lastWriteQueueSize)]: 0,
           [Symbol(timeout)]: null,
           [Symbol(kBytesRead)]: 1615,
           [Symbol(kBytesWritten)]: 0 },
        stdio: [ [Socket], [Socket], [Socket] ],
        emit: [Function] },
     stdout: undefined,
     stderr: undefined }
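
    For what it's worth, a likely culprit (an assumption, not verified on this machine) is the quoting of the dshow device name: spawn passes arguments to the process verbatim, without any shell parsing, so the literal single quotes in video='HP Truevision HD' become part of the device name and FFmpeg cannot open the webcam. A minimal sketch of a corrected call, which also prints FFmpeg's own stderr so the real reason behind "failed with code 1" becomes visible:

     const spawn = require("child-process-promise").spawn;

     // Sketch only: device name taken from the original command; short smoke test
     // instead of the RTMP push. Note there are no quotes inside "video=...",
     // because spawn does not go through a shell.
     const promise = spawn("ffmpeg", [
       "-f", "dshow", "-i", "video=HP Truevision HD",
       "-t", "5", "-f", "null", "-"
     ]);
     // Forward FFmpeg's stderr so its error message is visible
     promise.childProcess.stderr.on("data", d => process.stderr.write(d));
     promise.catch(e => console.log("ffmpeg failed:", e.message));

    The exact DirectShow device name can be double-checked from a Windows prompt with ffmpeg -list_devices true -f dshow -i dummy.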