
Media (91)

Other articles (1)

  • Supporting all media types

    13 April 2011, by

    Unlike most software and media-sharing platforms, MediaSPIP aims to manage as many different media types as possible. The following are just a few examples from an ever-expanding list of supported formats: images: png, gif, jpg, bmp and more; audio: MP3, Ogg, Wav and more; video: AVI, MP4, OGV, mpg, mov, wmv and more; text, code and other data: OpenOffice, Microsoft Office (Word, PowerPoint, Excel), web (HTML, CSS), LaTeX, Google Earth and (...)

On other sites (2489)

  • Multiple overlays using ffmpeg

    23 March 2018, by lhan

    I’m trying to satisfy a few layering scenarios for building video files using ffmpeg.

    Scenario 1: Overlay a video (specifying the opacity of the video) on top of an image, creating a new video as the result.

    I solved this with:

    ffmpeg -i video.mp4 -i image.jpg -filter_complex '[0]format=rgba,colorchannelmixer=aa=0.7,scale=w=3840:h=2160[a];[1][a]overlay=0:0' -t 30 output.mp4

    I’m scaling the video to 3840x2160 to match my image (ideally I’d have them matching beforehand).

    Scenario 2: three layers now: video, image, image. The middle layer is a transparent image with text. So we have a base image, with text overlaid on it, and a video on top of that at a certain opacity.

    I solved this with:

    ffmpeg -i video.mp4 -i image.jpg -i text.png -filter_complex '[0]format=rgba,colorchannelmixer=aa=0.7,scale=w=3840:h=2160[a];[2][a]overlay=0:0,scale=w=3840:h=2160[b];[1][b]overlay=0:0' -t 30 output.mp4

    Scenario 3 (which I can’t get working): same as Scenario #2, but with the text on top of the video.

    I tried re-arranging my filter, hoping to change the layering order:

    ffmpeg -i video.mp4 -i image.jpg -i text.png -filter_complex '[2]overlay=0:0,scale=w=3840:h=2160[a];[0][a]format=rgba,colorchannelmixer=aa=0.7,scale=w=3840:h=2160[b];[1][b]overlay=0:0' -t 5 output.mp4

    But that gives the following error:

    Too many inputs specified for the "format" filter. Error initializing complex filters. Invalid argument

    Full error:

    Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'video.mp4':
      Metadata:
        major_brand     : mp42
        minor_version   : 0
        compatible_brands: mp42mp41
        creation_time   : 2018-03-09T20:52:18.000000Z
      Duration: 00:00:30.00, start: 0.000000, bitrate: 8002 kb/s
        Stream #0:0(eng): Video: h264 (High) (avc1 / 0x31637661), yuv420p(tv, bt709), 1920x1080 [SAR 1:1 DAR 16:9], 7997 kb/s, 24 fps, 24 tbr, 24k tbn, 48 tbc (default)
        Metadata:
          creation_time   : 2018-03-09T20:52:18.000000Z
          handler_name    : Alias Data Handler
          encoder         : AVC Coding
    Input #1, image2, from 'image.jpg':
      Duration: 00:00:00.04, start: 0.000000, bitrate: 526829 kb/s
        Stream #1:0: Video: mjpeg, yuvj444p(pc, bt470bg/unknown/unknown), 3840x2160 [SAR 96:96 DAR 16:9], 25 tbr, 25 tbn, 25 tbc
    Input #2, png_pipe, from 'text.png':
      Duration: N/A, bitrate: N/A
        Stream #2:0: Video: png, rgba(pc), 1500x1500, 25 tbr, 25 tbn, 25 tbc
    [AVFilterGraph @ 0x7fc37d402de0] Too many inputs specified for the "format" filter.
    Error initializing complex filters.
    Invalid argument

    I can sort of get around that by tweaking the command so that the text isn’t an input to the overlay:

    ffmpeg -i lightTexture.mp4 -i image.jpg -i textSample.png -filter_complex '[2]overlay=0:0,scale=w=3840:h=2160;[0]format=rgba,colorchannelmixer=aa=0.7,scale=w=3840:h=2160[b];[1][b]overlay=0:0' -t 5 output_text_on_top.mp4

    But then my output video is all messed up. I suspect I am on the wrong track by trying to cram all of this into one -filter_complex. I’m wondering if I need to create two overlays and then overlay those (i.e. overlay the text onto the video, and then overlay that onto the base image), though I’m not sure how to accomplish that.

    If anyone could point me in the right direction here, I’d be super grateful.
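    For what it’s worth, the error happens because format is a single-input filter: in the rearranged graph, [0][a]format=rgba,... feeds it two streams. One possible fix (a sketch, untested against these exact files, with filenames and sizes taken from the question) is to blend the video first, composite it onto the image, and put the text on top last:

    ```shell
    # [0] video, [1] background image, [2] text PNG (1500x1500, not rescaled).
    # Blend the video to 70% opacity and scale it, overlay it onto the image,
    # then overlay the text as the topmost layer.
    ffmpeg -i video.mp4 -i image.jpg -i text.png \
      -filter_complex '[0]format=rgba,colorchannelmixer=aa=0.7,scale=w=3840:h=2160[a];[1][a]overlay=0:0[b];[b][2]overlay=0:0' \
      -t 5 output.mp4
    ```

    If the text should cover the whole frame, scale it first, e.g. add [2]scale=w=3840:h=2160[t] to the graph and overlay [t] instead of [2].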

  • Changing bit-rate with timestamp copy still offsets events by 1 or 2 frames

    10 April 2018, by harkmug

    As a novice user, I am trying to understand the following. I have an mp4 (encoder=Lavf57.66.104) that is 127 MB in size, and I use the following to reduce its size:

    ffmpeg -i original.mp4 -start_at_zero -copyts -b:v 1000k -c:a copy output.mp4

    The duration and number of frames stay the same after this process, however, when I annotate (using ELAN) the same events (at millisecond level, e.g. a blink) the output video (encoder=Lavf57.55.100) seems to be offset relative to the original by 1 or 2 frames.

    Can someone help me understand this shift? Thanks!

    UPDATE (2018-04-10):
    As per @Mulvya’s suggestion, I ran:

    ffmpeg -i original.mp4 -copyts -b:v 1000k -c:a copy output.mp4

    Looking at the two files:

    ffprobe -v error -select_streams v:0 -show_frames -show_entries frame=key_frame,pkt_pts_time,pict_type,coded_picture_number -of default=noprint_wrappers=1:nokey=1 -of csv=p=0 original.mp4 | head -n 5

    1,5292.861000,I,0
    0,5292.894333,P,1
    0,5292.927667,P,2
    0,5292.961000,P,3
    0,5292.994333,P,4

    Same for the output:

    1,5292.866016,I,0
    0,5292.899349,P,1
    0,5292.932682,P,2
    0,5292.966016,P,3
    0,5292.999349,P,4

    I am trying to understand how to get the same timestamps for the same frames. Maybe this is not possible?
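    One avenue that may be worth trying (an assumption on my part, not a confirmed fix): the constant ~5 ms per-frame shift looks like PTS rounding onto a different output time base, so asking the MP4 muxer to keep the source’s track timescale might put the re-encoded frames back on the same grid. The value 24000 below is a placeholder; read the real timescale (tbn) from the original with ffprobe first:

    ```shell
    # Re-encode at the lower bitrate, but force the output video track to use
    # the same timescale as the source (24000 here is an assumed value).
    ffmpeg -i original.mp4 -copyts -b:v 1000k -c:a copy \
      -video_track_timescale 24000 output.mp4
    ```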

  • FFmpeg: mixing audio into a video creates a silent video after the audio is added

    25 March 2018, by 1234567

    Mixing audio into a video with FFmpeg creates a silent video after the added audio ends.

    This is the command I am using:

    "-y","-i",j,"-filter_complex","amovie="+audio+":loop=999,asetpts=N/SR/TB,atrim=0:18,adelay=50000|50000,aformat=sample_fmts=fltp:sample_rates=44100:channel_layouts=stereo,volume=1.5[a1];[0:a]aformat=sample_fmts=fltp:sample_rates=44100:channel_layouts=stereo,volume=2.0[a2];[a1][a2]amerge,pan=stereo:c0code>

    I have a video which is 2 minutes 7 seconds long, and I want to mix audio into the part starting at 50 seconds and ending at 68 seconds.

    What I want is to keep the original sound of the video and mix the new audio into it (not overwrite it): keep the old video sound and add the new audio.

    What happens is: until 50 seconds the video has its own sound; from 50 to 68 seconds the audio overwrites the video’s sound; and from then until the end it is silent.

    What I want is for the video to keep its own sound throughout, with the audio mixed in from 50 to 68 seconds.

    Two images in the original question illustrate what I want versus what I am getting.
    Another problem I am facing: if I try this on a silent video (a video with no sound at all), the command fails. This is the error I am getting:

    ffmpeg version n3.0.1 Copyright (c) 2000-2016 the FFmpeg developers
     built with gcc 4.8 (GCC)
     configuration: --target-os=linux --cross-prefix=/home/vagrant/SourceCode/ffmpeg-android/toolchain-android/bin/arm-linux-androideabi- --arch=arm --cpu=cortex-a8 --enable-runtime-cpudetect --sysroot=/home/vagrant/SourceCode/ffmpeg-android/toolchain-android/sysroot --enable-pic --enable-libx264 --enable-libass --enable-libfreetype --enable-libfribidi --enable-libmp3lame --enable-fontconfig --enable-pthreads --disable-debug --disable-ffserver --enable-version3 --enable-hardcoded-tables --disable-ffplay --disable-ffprobe --enable-gpl --enable-yasm --disable-doc --disable-shared --enable-static --pkg-config=/home/vagrant/SourceCode/ffmpeg-android/ffmpeg-pkg-config --prefix=/home/vagrant/SourceCode/ffmpeg-android/build/armeabi-v7a --extra-cflags='-I/home/vagrant/SourceCode/ffmpeg-android/toolchain-android/include -U_FORTIFY_SOURCE -D_FORTIFY_SOURCE=2 -fno-strict-overflow -fstack-protector-all' --extra-ldflags='-L/home/vagrant/SourceCode/ffmpeg-android/toolchain-android/lib -Wl,-z,relro -Wl,-z,now -pie' --extra-libs='-lpng -lexpat -lm' --extra-cxxflags=
     libavutil      55. 17.103 / 55. 17.103
     libavcodec     57. 24.102 / 57. 24.102
     libavformat    57. 25.100 / 57. 25.100
     libavdevice    57.  0.101 / 57.  0.101
     libavfilter     6. 31.100 /  6. 31.100
     libswscale      4.  0.100 /  4.  0.100
     libswresample   2.  0.101 /  2.  0.101
     libpostproc    54.  0.100 / 54.  0.100
    Input #0, mov,mp4,m4a,3gp,3g2,mj2, from '/storage/sdcard0/abcd/Videos/20180325_164206.mp4':
     Metadata:
       major_brand     : mp42
       minor_version   : 0
       compatible_brands: isommp42
       creation_time   : 2018-03-25 16:44:15
     Duration: 00:02:05.21, start: 0.000000, bitrate: 1585 kb/s
       Stream #0:0(eng): Video: h264 (High) (avc1 / 0x31637661), yuv420p, 720x1280, 1584 kb/s, SAR 1:1 DAR 9:16, 11.83 fps, 90k tbr, 90k tbn, 180k tbc (default)
       Metadata:
         creation_time   : 2018-03-25 16:44:15
         handler_name    : VideoHandle
    [mp3 @ 0xaca26b40] Skipping 0 bytes of junk at 417.
    [Parsed_pan_9 @ 0xac9b17e0] This syntax is deprecated. Use '|' to separate the list items.
    Stream specifier ':a' in filtergraph description amovie=/storage/sdcard0/abcd/Videos/baby.mp3:loop=999,asetpts=N/SR/TB,atrim=0:18,adelay=50000|50000,aformat=sample_fmts=fltp:sample_rates=44100:channel_layouts=stereo,volume=1.5[a1];[0:a]aformat=sample_fmts=fltp:sample_rates=44100:channel_layouts=stereo,volume=2.0[a2]; [a1][a2]amerge,pan=stereo:c0code>

    How can we add audio to a silent video?
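    A possible direction (a sketch under assumptions, since the full filtergraph above is truncated): the [0:a] specifier matches nothing on a video that has no audio stream, which is what the last error message is complaining about. To mix the song into the original sound rather than replace it, amix with duration=first keeps the original track for the full length of the video. Timings and the mp3 name are taken from the question; input.mp4 stands in for the question’s video path:

    ```shell
    # Trim 18 s of the song, delay it so it starts at 50 s, then mix it with
    # the video's own audio; duration=first keeps the output as long as the
    # video's audio track, so sound continues after the song ends.
    ffmpeg -i input.mp4 -i baby.mp3 \
      -filter_complex '[1:a]atrim=0:18,adelay=50000|50000[song];[0:a][song]amix=inputs=2:duration=first[a]' \
      -map 0:v -map '[a]' -c:v copy output.mp4
    ```

    Note that amix lowers input volumes while mixing, so a volume filter may be needed to compensate. For a silent video there is no [0:a] at all; one workaround is to add a silent bed as an extra input with -f lavfi -i anullsrc=r=44100:cl=stereo, use its stream in place of [0:a], and add -shortest so the infinite silence does not run forever.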