Advanced search

Media (0)

No media matching your criteria is available on the site.

Other articles (48)

  • Websites made with MediaSPIP

    2 May 2011, by

    This page lists some websites based on MediaSPIP.

  • Creating farms of unique websites

    13 April 2011, by

    MediaSPIP platforms can be installed as a farm, with a single "core" hosted on a dedicated server and used by multiple websites.
    This allows (among other things): implementation costs to be shared between several different projects/individuals; rapid deployment of multiple unique sites; and creation of groups of like-minded sites, making it possible to browse media in a more controlled and selective environment than the major "open" (...)

  • Websites built with MediaSPIP

    2 May 2011, by

    This page presents some of the websites running MediaSPIP.
    You can of course add your own using the form at the bottom of the page.

On other sites (6126)

  • How do terminal pipes in Python differ from those in Rust?

    5 October 2022, by rust_convert

    To work on learning Rust (in a Tauri project), I am converting a Python 2 program that uses ffmpeg to create a custom video format from a GUI. The video portion converts successfully, but I am unable to get the audio to work. Based on the debugging I have done over the past few days, it looks like I am not reading the audio data correctly from the pipe in Rust: the approach that works for reading the video data does not work for the audio. I have tried reading the audio data as a string and then converting it to bytes, but then the byte array appears empty. I have been reading about piping data in the Rust and Python documentation and am unsure how the Rust pipe could be empty or incorrect when it works for the video.

    


    From this Python article and this Rust Stack Overflow exchange, it looks like the Python stdout pipe is equivalent to the Rust stdin pipe?
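
    Distilled to its essentials, the Python side of that pattern looks roughly like the sketch below (the input file name, frame size and stderr handling are placeholder assumptions, not taken from the program); the comments note the usual Rust counterparts.

import subprocess as sp

FFMPEG_BIN = "ffmpeg"        # assumed to be on PATH
FRAME_BYTES = 320 * 240 * 2  # hypothetical size of one raw bgr565 frame

cmd = [FFMPEG_BIN, "-i", "input.mp4",
       "-vcodec", "rawvideo", "-pix_fmt", "bgr565be",
       "-f", "rawvideo", "-"]

# stdout=sp.PIPE corresponds to .stdout(Stdio::piped()) on a Rust Command;
# proc.stdout is the parent's read end of that pipe, like child.stdout.take().
proc = sp.Popen(cmd, stdout=sp.PIPE, stderr=sp.DEVNULL)

# The program relies on read(N) returning a full frame (it returns less only
# at end of stream); the closest Rust equivalent is read_exact(&mut buf) on
# the ChildStdout, not read_to_end.
frame = proc.stdout.read(FRAME_BYTES)
while len(frame) == FRAME_BYTES:
    # ... process one frame ...
    frame = proc.stdout.read(FRAME_BYTES)

proc.stdout.close()
proc.wait()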

    


    The Python code snippet for video and audio conversion:

    


output = open(self.outputFile, 'wb')
devnull = open(os.devnull, 'wb')

vidcommand = [FFMPEG_BIN,
              '-i', self.inputFile,
              '-f', 'image2pipe',
              '-r', '%d' % (self.outputFrameRate),
              '-vf', scaleCommand,
              '-vcodec', 'rawvideo',
              '-pix_fmt', 'bgr565be',
              '-f', 'rawvideo', '-']

vidPipe = ''
if os.name == 'nt':
    startupinfo = sp.STARTUPINFO()
    startupinfo.dwFlags |= sp.STARTF_USESHOWWINDOW
    vidPipe = sp.Popen(vidcommand, stdin=sp.PIPE, stdout=sp.PIPE, stderr=devnull,
                       bufsize=self.inputVidFrameBytes*10, startupinfo=startupinfo)
else:
    vidPipe = sp.Popen(vidcommand, stdin=sp.PIPE, stdout=sp.PIPE, stderr=devnull,
                       bufsize=self.inputVidFrameBytes*10)

vidFrame = vidPipe.stdout.read(self.inputVidFrameBytes)

audioCommand = [ FFMPEG_BIN,
    '-i', self.inputFile,
    '-f', 's16le',
    '-acodec', 'pcm_s16le',
    '-ar', '%d' % (self.outputAudioSampleRate),
    '-ac', '1',
    '-']

audioPipe = ''
if (self.audioEnable.get() == 1):
    if os.name == 'nt':
        startupinfo = sp.STARTUPINFO()
        startupinfo.dwFlags |= sp.STARTF_USESHOWWINDOW
        audioPipe = sp.Popen(audioCommand, stdin=sp.PIPE, stdout=sp.PIPE, stderr=devnull,
                             bufsize=self.audioFrameBytes*10, startupinfo=startupinfo)
    else:
        audioPipe = sp.Popen(audioCommand, stdin=sp.PIPE, stdout=sp.PIPE, stderr=devnull,
                             bufsize=self.audioFrameBytes*10)

    audioFrame = audioPipe.stdout.read(self.audioFrameBytes)

currentFrame = 0

while len(vidFrame)==self.inputVidFrameBytes:
    currentFrame+=1
    if(currentFrame%30==0):
        self.progressBarVar.set(100.0*(currentFrame*1.0)/self.totalFrames)
    if (self.videoBitDepth.get() == 16):
        output.write(vidFrame)
    else:
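        # Pack each 16-bit bgr565be pixel down to one byte: keep the top 3 bits
        # of blue, the top 3 bits of green and the top 2 bits of red.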
        b16VidFrame=bytearray(vidFrame)
        b8VidFrame=[]
        for p in range(self.outputVidFrameBytes):
            b8VidFrame.append(((b16VidFrame[(p*2)+0]>>0)&0xE0)|((b16VidFrame[(p*2)+0]<<2)&0x1C)|((b16VidFrame[(p*2)+1]>>3)&0x03))
        output.write(bytearray(b8VidFrame))

    vidFrame = vidPipe.stdout.read(self.inputVidFrameBytes) # Read where vidframe is to match up with audio frame and output?
    if (self.audioEnable.get() == 1):


        if len(audioFrame)==self.audioFrameBytes:
            audioData=bytearray(audioFrame) 
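            # Convert each signed 16-bit little-endian sample to offset binary
            # (+0x8000) and reduce it to outputAudioSampleBitDepth bits before
            # writing it back as two little-endian bytes.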

            for j in range(int(round(self.audioFrameBytes/2))):
                sample = ((audioData[(j*2)+1]<<8) | audioData[j*2]) + 0x8000
                sample = (sample>>(16-self.outputAudioSampleBitDepth)) & (0x0000FFFF>>(16-self.outputAudioSampleBitDepth))

                audioData[j*2] = sample & 0xFF
                audioData[(j*2)+1] = sample>>8

            output.write(audioData)
            audioFrame = audioPipe.stdout.read(self.audioFrameBytes)

        else:
            emptySamples=[]
            for samples in range(int(round(self.audioFrameBytes/2))):
                emptySamples.append(0x00)
                emptySamples.append(0x00)
            output.write(bytearray(emptySamples))

self.progressBarVar.set(100.0)

vidPipe.terminate()
vidPipe.stdout.close()
vidPipe.wait()

if (self.audioEnable.get() == 1):
    audioPipe.terminate()
    audioPipe.stdout.close()
    audioPipe.wait()

output.close()


    


    The Rust snippet that should accomplish the same goals:

    


    let output_file = OpenOptions::new()
    .create(true)
    .truncate(true)
    .write(true)
    .open(&output_path)
    .unwrap();
let mut writer = BufWriter::with_capacity(
    options.video_frame_bytes.max(options.audio_frame_bytes),
    output_file,
);
let ffmpeg_path = sidecar_path("ffmpeg");
#[cfg(debug_assertions)]
let timer = Instant::now();

let mut video_cmd = Command::new(&ffmpeg_path);
#[rustfmt::skip]
video_cmd.args([
    "-i", options.path,
    "-f", "image2pipe",
    "-r", options.frame_rate,
    "-vf", options.scale,
    "-vcodec", "rawvideo",
    "-pix_fmt", "bgr565be",
    "-f", "rawvideo",
    "-",
])
.stdin(Stdio::null())
.stdout(Stdio::piped())
.stderr(Stdio::null());

// windows creation flag CREATE_NO_WINDOW: stops the process from creating a CMD window
// https://docs.microsoft.com/en-us/windows/win32/procthread/process-creation-flags
#[cfg(windows)]
video_cmd.creation_flags(0x08000000);

let mut video_child = video_cmd.spawn().unwrap();
let mut video_stdout = video_child.stdout.take().unwrap();
let mut video_frame = vec![0; options.video_frame_bytes];

let mut audio_cmd = Command::new(&ffmpeg_path);
#[rustfmt::skip]
audio_cmd.args([
    "-i", options.path,
    "-f", "s16le",
    "-acodec", "pcm_s16le",
    "-ar", options.sample_rate,
    "-ac", "1",
    "-",
])
.stdin(Stdio::null())
.stdout(Stdio::piped())
.stderr(Stdio::null());

#[cfg(windows)]
audio_cmd.creation_flags(0x08000000);

let mut audio_child = audio_cmd.spawn().unwrap();
let mut audio_stdout = audio_child.stdout.take().unwrap();
let mut audio_frame = vec![0; options.audio_frame_bytes];

while video_stdout.read_exact(&mut video_frame).is_ok() {
    writer.write_all(&video_frame).unwrap();
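    // The audio read below uses read_to_end(), which waits for ffmpeg to close its
    // stdout and appends to audio_frame rather than filling the pre-sized buffer,
    // unlike the read_exact() used for the video frame above.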

    if audio_stdout.read_to_end(&mut audio_frame).is_ok() {
        if audio_frame.len() == options.audio_frame_bytes {
            for i in 0..options.audio_frame_bytes / 2 {
                let temp_sample = ((u32::from(audio_frame[(i * 2) + 1]) << 8)
                    | u32::from(audio_frame[i * 2]))
                    + 0x8000;
                let sample = (temp_sample >> (16 - 10)) & (0x0000FFFF >> (16 - 10));

                audio_frame[i * 2] = (sample & 0xFF) as u8;
                audio_frame[(i * 2) + 1] = (sample >> 8) as u8;
            }
        } else {
            audio_frame.fill(0x00);
        }
    }
    writer.write_all(&audio_frame).unwrap();
}


video_child.wait().unwrap();
audio_child.wait().unwrap();

#[cfg(debug_assertions)]
{
    let elapsed = timer.elapsed();
    dbg!(elapsed);
}

writer.flush().unwrap();


    


    I have looked at the hex data of the files using HxD - regardless of how I alter the Rust program, I am unable to get data different from what is previewed in the attached image - so the audio pipe is being interfaced incorrectly. I have included a screenshot of the hex data from the working Python program, which converts the video and audio correctly.

    


    HxD Python program hex output: [screenshot]

    HxD Rust program hex output: [screenshot]

    


  • FFMPEG command runs in terminal but not by subprocess

    1 September 2022, by Basilique

    I am trying to run a bash command using the subprocess module from within Python 3.10.

    


    The bash command is:

    


    ffmpeg -framerate 1 -pattern_type glob -i '*.png' -c:v libx264 -pix_fmt yuv420p -vf "crop=trunc(iw/2)*2:trunc(ih/2)*2" out.mp4


    


    In the terminal, the command runs fine. Here is the output:

    


    ffmpeg version 4.2.7-0ubuntu0.1 Copyright (c) 2000-2022 the FFmpeg developers
  built with gcc 9 (Ubuntu 9.4.0-1ubuntu1~20.04.1)
  configuration: --prefix=/usr --extra-version=0ubuntu0.1 --toolchain=hardened --libdir=/usr/lib/x86_64-linux-gnu --incdir=/usr/include/x86_64-linux-gnu --arch=amd64 --enable-gpl --disable-stripping --enable-avresample --disable-filter=resample --enable-avisynth --enable-gnutls --enable-ladspa --enable-libaom --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libcdio --enable-libcodec2 --enable-libflite --enable-libfontconfig --enable-libfreetype --enable-libfribidi --enable-libgme --enable-libgsm --enable-libjack --enable-libmp3lame --enable-libmysofa --enable-libopenjpeg --enable-libopenmpt --enable-libopus --enable-libpulse --enable-librsvg --enable-librubberband --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libssh --enable-libtheora --enable-libtwolame --enable-libvidstab --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx265 --enable-libxml2 --enable-libxvid --enable-libzmq --enable-libzvbi --enable-lv2 --enable-omx --enable-openal --enable-opencl --enable-opengl --enable-sdl2 --enable-libdc1394 --enable-libdrm --enable-libiec61883 --enable-nvenc --enable-chromaprint --enable-frei0r --enable-libx264 --enable-shared
  libavutil      56. 31.100 / 56. 31.100
  libavcodec     58. 54.100 / 58. 54.100
  libavformat    58. 29.100 / 58. 29.100
  libavdevice    58.  8.100 / 58.  8.100
  libavfilter     7. 57.100 /  7. 57.100
  libavresample   4.  0.  0 /  4.  0.  0
  libswscale      5.  5.100 /  5.  5.100
  libswresample   3.  5.100 /  3.  5.100
  libpostproc    55.  5.100 / 55.  5.100
Input #0, image2, from '*.png':
  Duration: 00:16:39.00, start: 0.000000, bitrate: N/A
    Stream #0:0: Video: png, rgba(pc), 895x332 [SAR 3937:3937 DAR 895:332], 1 fps, 1 tbr, 1 tbn, 1 tbc
Stream mapping:
  Stream #0:0 -> #0:0 (png (native) -> h264 (libx264))
Press [q] to stop, [?] for help
[libx264 @ 0x55726ab95d00] using SAR=1/1
[libx264 @ 0x55726ab95d00] using cpu capabilities: MMX2 SSE2Fast SSSE3 SSE4.2 AVX FMA3 BMI2 AVX2 AVX512
[libx264 @ 0x55726ab95d00] profile High, level 2.2
[libx264 @ 0x55726ab95d00] 264 - core 155 r2917 0a84d98 - H.264/MPEG-4 AVC codec - Copyleft 2003-2018 - http://www.videolan.org/x264.html - options: cabac=1 ref=3 deblock=1:0:0 analyse=0x3:0x113 me=hex subme=7 psy=1 psy_rd=1.00:0.00 mixed_ref=1 me_range=16 chroma_me=1 trellis=1 8x8dct=1 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_offset=-2 threads=10 lookahead_threads=1 sliced_threads=0 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=3 b_pyramid=2 b_adapt=1 b_bias=0 direct=1 weightb=1 open_gop=0 weightp=2 keyint=250 keyint_min=1 scenecut=40 intra_refresh=0 rc_lookahead=40 rc=crf mbtree=1 crf=23.0 qcomp=0.60 qpmin=0 qpmax=69 qpstep=4 ip_ratio=1.40 aq=1:1.00
Output #0, mp4, to 'out.mp4':
  Metadata:
    encoder         : Lavf58.29.100
    Stream #0:0: Video: h264 (libx264) (avc1 / 0x31637661), yuv420p, 894x332 [SAR 1:1 DAR 447:166], q=-1--1, 1 fps, 16384 tbn, 1 tbc
    Metadata:
      encoder         : Lavc58.54.100 libx264
    Side data:
      cpb: bitrate max/min/avg: 0/0/0 buffer size: 0 vbv_delay: -1
frame=  173 fps=0.0 q=17.0 size=     512kB time=00:01:56.00 bitrate=  36.2kbits/frame=  351 fps=350 q=17.0 size=    1536kB time=00:04:54.00 bitrate=  42.8kbits/frame=  517 fps=343 q=17.0 size=    2560kB time=00:07:40.00 bitrate=  45.6kbits/frame=  725 fps=361 q=17.0 size=    3328kB time=00:11:08.00 bitrate=  40.8kbits/frame=  913 fps=364 q=17.0 size=    4352kB time=00:14:16.00 bitrate=  41.6kbits/frame=  999 fps=361 q=-1.0 Lsize=    4986kB time=00:16:36.00 bitrate=  41.0kbits/s speed= 360x    
video:4974kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 0.241361%
[libx264 @ 0x55726ab95d00] frame I:4     Avg QP: 6.12  size: 24072
[libx264 @ 0x55726ab95d00] frame P:346   Avg QP:12.94  size:  5708
[libx264 @ 0x55726ab95d00] frame B:649   Avg QP:18.19  size:  4655
[libx264 @ 0x55726ab95d00] consecutive B-frames:  5.8% 16.0% 20.1% 58.1%
[libx264 @ 0x55726ab95d00] mb I  I16..4: 59.1% 10.6% 30.4%
[libx264 @ 0x55726ab95d00] mb P  I16..4:  5.6%  0.6%  2.2%  P16..4: 10.5%  4.3%  2.3%  0.0%  0.0%    skip:74.5%
[libx264 @ 0x55726ab95d00] mb B  I16..4:  2.2%  0.1%  1.7%  B16..8: 16.9%  4.8%  1.6%  direct: 1.1%  skip:71.5%  L0:50.9% L1:45.2% BI: 3.9%
[libx264 @ 0x55726ab95d00] 8x8 transform intra:5.9% inter:10.4%
[libx264 @ 0x55726ab95d00] coded y,uvDC,uvAC intra: 20.1% 18.3% 17.3% inter: 4.7% 4.7% 4.6%
[libx264 @ 0x55726ab95d00] i16 v,h,dc,p: 66% 33%  1%  0%
[libx264 @ 0x55726ab95d00] i8 v,h,dc,ddl,ddr,vr,hd,vl,hu: 18%  8% 73%  0%  0%  0%  0%  0%  0%
[libx264 @ 0x55726ab95d00] i4 v,h,dc,ddl,ddr,vr,hd,vl,hu: 23% 31% 31%  2%  3%  2%  4%  2%  3%
[libx264 @ 0x55726ab95d00] i8c dc,h,v,p: 73% 23%  3%  0%
[libx264 @ 0x55726ab95d00] Weighted P-Frames: Y:0.0% UV:0.0%
[libx264 @ 0x55726ab95d00] ref P L0: 57.2%  1.5% 24.3% 17.0%
[libx264 @ 0x55726ab95d00] ref B L0: 69.6% 24.8%  5.6%
[libx264 @ 0x55726ab95d00] ref B L1: 92.4%  7.6%
[libx264 @ 0x55726ab95d00] kb/s:40.78


    


    In my Python script I tried the following solutions:

    


    video_cmd = """ffmpeg -framerate 1 -pattern_type glob -i '*.png' -c:v libx264 -pix_fmt yuv420p -vf "crop=trunc(iw/2)*2:trunc(ih/2)*2" out.mp4"""

subprocess.run(shlex.split(video_cmd), shell=False, cwd=path_viz, stderr=subprocess.STDOUT, check=True, text=False)

subprocess.run(video_cmd, shell=True, cwd=path_viz, stderr=subprocess.STDOUT, check=True, text=False)


    


    as well as the solution proposed in this similar question

    


    subprocess.Popen(video_cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE, shell=True)


    


    None of them worked. Apparently the right command is being run (output of the check_out function):

    


    Command 'ffmpeg -y -framerate 1 -pattern_type glob -i '*.png' -c:v libx264 -pix_fmt yuv420p -vf "crop=trunc(iw/2)*2:trunc(ih/2)*2" out.mp4' returned non-zero exit status 1.


    


    The first part of the job (up to Stream mapping:) is also done correctly:

    


    ffmpeg version 4.3 Copyright (c) 2000-2020 the FFmpeg developers
  built with gcc 7.3.0 (crosstool-NG 1.23.0.449-a04d0)
  configuration: --prefix=/home/rsghazanfari/anaconda3/envs/_cuda --cc=/opt/conda/conda-bld/ffmpeg_1597178665428/_build_env/bin/x86_64-conda_cos6-linux-gnu-cc --disable-doc --disable-openssl --enable-avresample --enable-gnutls --enable-hardcoded-tables --enable-libfreetype --enable-libopenh264 --enable-pic --enable-pthreads --enable-shared --disable-static --enable-version3 --enable-zlib --enable-libmp3lame
  libavutil      56. 51.100 / 56. 51.100
  libavcodec     58. 91.100 / 58. 91.100
  libavformat    58. 45.100 / 58. 45.100
  libavdevice    58. 10.100 / 58. 10.100
  libavfilter     7. 85.100 /  7. 85.100
  libavresample   4.  0.  0 /  4.  0.  0
  libswscale      5.  7.100 /  5.  7.100
  libswresample   3.  7.100 /  3.  7.100
Input #0, image2, from '*.png':
  Duration: 00:16:39.00, start: 0.000000, bitrate: N/A
    Stream #0:0: Video: png, rgba(pc), 895x332 [SAR 3937:3937 DAR 895:332], 1 fps, 1 tbr, 1 tbn, 1 tbc


    


    but it then pops up the following error:

    


Unknown encoder 'libx264'
Traceback (most recent call last):
  File "/home/rsgh/anaconda3/envs/_cuda/lib/python3.10/code.py", line 90, in runcode
    exec(code, self.locals)
  File "<input>", line 1, in <module>
  File "/home/rsgh/anaconda3/envs/_cuda/lib/python3.10/subprocess.py", line 524, in run
    raise CalledProcessError(retcode, process.args,

subprocess.CalledProcessError: Command 'ffmpeg -y -framerate 1 -pattern_type glob -i '*.png' -c:v libx264 -pix_fmt yuv420p -vf "crop=trunc(iw/2)*2:trunc(ih/2)*2" out.mp4' returned non-zero exit status 1.


    Any idea why this error is produced in Python while in the terminal it runs fine? Thank you in advance.
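
    One difference stands out between the two logs: the terminal run uses the Ubuntu ffmpeg 4.2.7 installed under /usr and built with --enable-libx264, while the subprocess run picks up an ffmpeg 4.3 from the anaconda environment whose configuration line does not include --enable-libx264, which would explain the "Unknown encoder 'libx264'" error. A quick way to check which binary Python resolves, and to pin an explicit path, is sketched below (the /usr/bin/ffmpeg location is an assumption; cwd=path_viz is taken from the snippets above).

import shutil
import subprocess

# Which ffmpeg does this Python process see first on PATH? Inside a conda
# environment this often resolves to .../anaconda3/envs/<env>/bin/ffmpeg.
print(shutil.which("ffmpeg"))

# Point at the system build (the one configured with --enable-libx264)
# explicitly instead of relying on PATH; /usr/bin/ffmpeg is assumed here.
video_cmd = [
    "/usr/bin/ffmpeg", "-framerate", "1",
    "-pattern_type", "glob", "-i", "*.png",
    "-c:v", "libx264", "-pix_fmt", "yuv420p",
    "-vf", "crop=trunc(iw/2)*2:trunc(ih/2)*2",
    "out.mp4",
]
# ffmpeg expands the *.png glob itself (-pattern_type glob), so no shell is needed.
subprocess.run(video_cmd, cwd=path_viz, check=True)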


    PS: ffmpeg -version outputs:


ffmpeg version 4.2.7-0ubuntu0.1 Copyright (c) 2000-2022 the FFmpeg developers
built with gcc 9 (Ubuntu 9.4.0-1ubuntu1~20.04.1)
configuration: --prefix=/usr --extra-version=0ubuntu0.1 --toolchain=hardened --libdir=/usr/lib/x86_64-linux-gnu --incdir=/usr/include/x86_64-linux-gnu --arch=amd64 --enable-gpl --disable-stripping --enable-avresample --disable-filter=resample --enable-avisynth --enable-gnutls --enable-ladspa --enable-libaom --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libcdio --enable-libcodec2 --enable-libflite --enable-libfontconfig --enable-libfreetype --enable-libfribidi --enable-libgme --enable-libgsm --enable-libjack --enable-libmp3lame --enable-libmysofa --enable-libopenjpeg --enable-libopenmpt --enable-libopus --enable-libpulse --enable-librsvg --enable-librubberband --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libssh --enable-libtheora --enable-libtwolame --enable-libvidstab --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx265 --enable-libxml2 --enable-libxvid --enable-libzmq --enable-libzvbi --enable-lv2 --enable-omx --enable-openal --enable-opencl --enable-opengl --enable-sdl2 --enable-libdc1394 --enable-libdrm --enable-libiec61883 --enable-nvenc --enable-chromaprint --enable-frei0r --enable-libx264 --enable-shared
libavutil      56. 31.100 / 56. 31.100
libavcodec     58. 54.100 / 58. 54.100
libavformat    58. 29.100 / 58. 29.100
libavdevice    58.  8.100 / 58.  8.100
libavfilter     7. 57.100 /  7. 57.100
libavresample   4.  0.  0 /  4.  0.  0
libswscale      5.  5.100 /  5.  5.100
libswresample   3.  5.100 /  3.  5.100
libpostproc    55.  5.100 / 55.  5.100


    Ubuntu version:


Distributor ID: Ubuntu
Description:    Ubuntu 20.04.4 LTS
Release:        20.04
Codename:       focal


  • How to install ffmpeg for generating thumbnail images? I got "no package ffmpeg available" when running it in a Linux (CentOS) terminal

    16 May 2015, by Intrepid Uliyar

    How to install ffmpeg for generating thumbnail images? I got "no package ffmpeg available" when running the "yum ffmpeg install" command in a Linux (CentOS) terminal.

    I need to install ffmpeg to generate thumbnail images. It works fine on localhost (Windows), but it doesn't work on the Linux server (I don't know the exact server platform). I have read on some websites that .exe files will not execute on a Linux server and that ffmpeg needs to be installed there programmatically. Can anyone give me a suggestion? The websites I have looked at so far were not useful.

    My code:

$video = "../../".$down_path;
$rand = mt_rand(111111, 999999);
// where to save the images
$image1 = "../../completed_project/thumb/".$rand.'ss.jpg';
$image2 = "../../completed_project/thumb/".$rand.'uu.jpg';
$image3 = "../../completed_project/thumb/".$rand.'rr.jpg';

$thumb_image1 = "completed_project/thumb/".$rand.'ss.jpg';
$thumb_image2 = "completed_project/thumb/".$rand.'uu.jpg';
$thumb_image3 = "completed_project/thumb/".$rand.'rr.jpg';

// times to take the screenshots at
$interval1 = 2;
$interval2 = 4;
$interval3 = 5;
// screenshot size
$size = '320x240';

// ffmpeg commands
$cmd1 = "$ffmpeg -i $video -deinterlace -an -ss $interval1 -f mjpeg -t 1 -r 1 -y -s $size $image1 2>&1";
$cmd2 = "$ffmpeg -i $video -deinterlace -an -ss $interval2 -f mjpeg -t 1 -r 1 -y -s $size $image2 2>&1";
$cmd3 = "$ffmpeg -i $video -deinterlace -an -ss $interval3 -f mjpeg -t 1 -r 1 -y -s $size $image3 2>&1";
$return = `$cmd1`.`$cmd2`.`$cmd3`;
print_r($cmd1);

    I've got this output:

    ffmpeg.exe -i ../../completed_project/606560114793SK_PROMO-45s.mp4 -deinterlace -an -ss 2 -f mjpeg -t 1 -r 1 -y -s 320x240 ../../completed_project/thumb/362796ss.jpg 2>&1