
Other articles (48)

  • Requesting the creation of a channel

    12 March 2010

    Depending on how the platform is configured, the user may have two different ways of requesting the creation of a channel. The first is at the moment of registration; the second, after registration, by filling in a request form.
    Both methods ask for the same information and work in roughly the same way: the prospective user has to fill in a series of form fields that first of all give the administrators information about (...)

  • Accepted formats

    28 January 2010

    The following commands give information about the formats and codecs handled by the local ffmpeg installation (a short usage sketch follows this list of articles):
    ffmpeg -codecs
    ffmpeg -formats
    Accepted input video formats
    This list is not exhaustive; it highlights the main formats in use: h264: H.264 / AVC / MPEG-4 AVC / MPEG-4 part 10; m4v: raw MPEG-4 video format; flv: Flash Video (FLV) / Sorenson Spark / Sorenson H.263; Theora; wmv:
    Possible output video formats
    To begin with, we (...)

  • Supporting all media types

    13 April 2011

    Unlike most software and media-sharing platforms, MediaSPIP aims to manage as many different media types as possible. The following are just a few examples from an ever-expanding list of supported formats: images: png, gif, jpg, bmp and more; audio: MP3, Ogg, Wav and more; video: AVI, MP4, OGV, mpg, mov, wmv and more; text, code and other data: OpenOffice, Microsoft Office (Word, PowerPoint, Excel), web (html, CSS), LaTeX, Google Earth and (...)
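
    A quick usage note on the "Accepted formats" article above: both commands print long lists, so in practice their output is usually filtered. A minimal sketch — the grep patterns and codec names below are only illustrations, not part of the article:

    # check whether a given codec / encoder is present in the local ffmpeg build
    ffmpeg -codecs 2>/dev/null | grep -i h264
    ffmpeg -encoders 2>/dev/null | grep -i libx264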

On other sites (6223)

  • FFMPEG segment creates black frame in the beginning - and key_frame looks offset [closed]

    30 September 2022, by jackyroo

    I'm trying to split a video using the ffmpeg segment muxer:

    


    ffmpeg -r 60 -accurate_seek -i input.mov -map 0:a? -map 0:v? -codec copy -reset_timestamps 1 -map_metadata 0 -avoid_negative_ts 1 -f segment output_%%03d.mov -y -loglevel debug

    


    The problem with this approach is that the first split segment has 2 black frames at the beginning, while the subsequent ones are fine.

    


    I cannot figure out why, and I'm genuinely lost in this process.

    


    I've also tried specifying -force_key_frames but it makes no difference at all.
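
    For what it's worth, -force_key_frames is an encoder option, so combined with -codec copy (as in the command above) it cannot change where keyframes fall; it only takes effect when re-encoding. A rough sketch of the re-encoding variant, where the 10-second segment length and the libx264/CRF settings are assumptions rather than anything from the question:

    ffmpeg -i input.mov -map 0:v? -map 0:a? \
           -c:v libx264 -crf 18 -force_key_frames "expr:gte(t,n_forced*10)" \
           -c:a copy \
           -f segment -segment_time 10 -reset_timestamps 1 \
           output_%03d.mov

    With stream copy, the segment muxer can only cut on keyframes that already exist in the input, which is worth keeping in mind when the cut points look offset.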

    


    Then I found something interesting by comparing keyframes of the original video and the first split chunk using this command:

    


    ffprobe -select_streams v -skip_frame nokey -show_frames -v quiet input.mov
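
    If only the timestamps matter for the comparison, a trimmed-down variant of the same probe can be handy (the field selection here is my own choice, not from the question):

    ffprobe -v quiet -select_streams v -skip_frame nokey \
            -show_entries frame=pts_time,pict_type -of csv input.mov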

    


    Original Video Keyframes
The first keyframe of the original video looks fine:

    


    [FRAME]
media_type=video
stream_index=0
key_frame=1
pts=0
pts_time=0.000000
pkt_dts=127488
pkt_dts_time=8.300000
best_effort_timestamp=0
best_effort_timestamp_time=0.000000
pkt_duration=256
pkt_duration_time=0.016667
duration=256
duration_time=0.016667
pkt_pos=40
pkt_size=128838
width=1420
height=2000
pix_fmt=yuv420p
sample_aspect_ratio=N/A
pict_type=I
coded_picture_number=0
display_picture_number=0
interlaced_frame=0
top_field_first=0
repeat_pict=0
color_range=unknown
color_space=unknown
color_primaries=unknown
color_transfer=unknown
chroma_location=left
[SIDE_DATA]
side_data_type=H.26[45] User Data Unregistered SEI message
[/SIDE_DATA]
[/FRAME]


    


    Split Video - First Chunk Keyframes

    


    On the other hand, it seems like the first keyframe of the chunk (the one with the two added black frames at the beginning) gets shifted!

    


    [FRAME]
media_type=video
stream_index=1
key_frame=1
pts=507
pts_time=0.033008
pkt_dts=N/A
pkt_dts_time=N/A
best_effort_timestamp=507
best_effort_timestamp_time=0.033008
pkt_duration=256
pkt_duration_time=0.016667
duration=256
duration_time=0.016667
pkt_pos=40
pkt_size=128838
width=1420
height=2000
pix_fmt=yuv420p
sample_aspect_ratio=N/A
pict_type=I
coded_picture_number=0
display_picture_number=0
interlaced_frame=0
top_field_first=0
repeat_pict=0
color_range=unknown
color_space=unknown
color_primaries=unknown
color_transfer=unknown
chroma_location=left
[SIDE_DATA]
side_data_type=H.26[45] User Data Unregistered SEI message
[/SIDE_DATA]
[/FRAME]


    


    Might this be the cause of the two black frames at the beginning?

    


    The following is the first (and only) keyframe of the second chunk, and it looks fine:

    


    [FRAME]
media_type=video
stream_index=1
key_frame=1
pts=0
pts_time=0.000000
pkt_dts=N/A
pkt_dts_time=N/A
best_effort_timestamp=0
best_effort_timestamp_time=0.000000
pkt_duration=256
pkt_duration_time=0.016667
duration=256
duration_time=0.016667
pkt_pos=40
pkt_size=158598
width=1420
height=2000
pix_fmt=yuv420p
sample_aspect_ratio=N/A
pict_type=I
coded_picture_number=0
display_picture_number=0
interlaced_frame=0
top_field_first=0
repeat_pict=0
color_range=unknown
color_space=unknown
color_primaries=unknown
color_transfer=unknown
chroma_location=left
[/FRAME]


    


    Please help me shed some light on this...

    


    Thanks!

    


  • ffmpeg output to stdout is a black box (python)

    27 July 2022, by dwilliams

    Project Background:
The project I'm working on takes an image from a UE simulation camera, uses OpenCV to convert the image to .jpg, and streams the .jpg to a localhost port so that a second program can retrieve the images and annotate the objects in the simulation. I am attempting to convert these images, via ffmpeg, to a format suitable for UE4's WmfMediaPlayer, so that they can be streamed to another localhost port.

    


    Problem:
I have managed to get the stream active, but the video at the end of the stream is a black square. I'm confident, but not 100% certain, that I've set up ffmpeg correctly. The output to the console seems to imply that the conversion is working, but I've been fooled before. It could also be the Flask yield component.

    


    ffmpeg declaration/setup:

    


    process = (
        ffmpeg
        .input('C:/MOT/test.jpg')   
        .output('pipe:1', vcodec='libx264', format='avi')   
        .overwrite_output()
        .run_async(pipe_stdin=True, pipe_stdout=True)
    )
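
    For reference, the ffmpeg-python pipeline above should compile to roughly the following command line (reconstructed by hand from the arguments shown, so treat it as an approximation):

    # single JPEG in, one-frame H.264-in-AVI out, written once to stdout
    ffmpeg -i C:/MOT/test.jpg -vcodec libx264 -f avi pipe:1 -y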


    


    Flask Integration:

    


    @app.route('/annotated')
def annotated():
    return Response(
        frame_generator_annotated(),
        mimetype='multipart/x-mixed-replace; boundary=frame'
    )


def frame_generator_annotated():
    global process
    while (True):
        frame = process.stdout.read()
        yield (b'--frame\r\n'
               b'Content-Type: video/avi\r\n\r\n' + frame + b'\r\n\r\n')


    


    Stream view:
http://localhost:5001/annotated

    


    ffmpeg console output:

    


    Running on all addresses.
   WARNING: This is a development server. Do not use it in a production deployment.
 * Running on http://192.168.1.220:5001/ (Press CTRL+C to quit)
ffmpeg version 4.3.1 Copyright (c) 2000-2020 the FFmpeg developers
  built with gcc 10.2.1 (GCC) 20200726
  configuration: --disable-static --enable-shared --enable-gpl --enable-version3 --enable-sdl2 --enable-fontconfig --enable-gnutls --enable-iconv --enable-libass --enable-libdav1d --enable-libbluray --enable-libfreetype --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-libopus --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libsrt --enable-libtheora --enable-libtwolame --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxml2 --enable-libzimg --enable-lzma --enable-zlib --enable-gmp --enable-libvidstab --enable-libvmaf --enable-libvorbis --enable-libvo-amrwbenc --enable-libmysofa --enable-libspeex --enable-libxvid --enable-libaom --enable-libgsm --enable-librav1e --disable-w32threads --enable-libmfx --enable-ffnvcodec --enable-cuda-llvm --enable-cuvid --enable-d3d11va --enable-nvenc --enable-nvdec --enable-dxva2 --enable-avisynth --enable-libopenmpt --enable-amf
  libavutil      56. 51.100 / 56. 51.100
  libavcodec     58. 91.100 / 58. 91.100
  libavformat    58. 45.100 / 58. 45.100
  libavdevice    58. 10.100 / 58. 10.100
  libavfilter     7. 85.100 /  7. 85.100
  libswscale      5.  7.100 /  5.  7.100
  libswresample   3.  7.100 /  3.  7.100
  libpostproc    55.  7.100 / 55.  7.100
[mjpeg @ 00000269be0faec0] EOI missing, emulating
Input #0, image2, from 'C:/MOT/test.jpg':
  Duration: 00:00:00.04, start: 0.000000, bitrate: 1968 kb/s
    Stream #0:0: Video: mjpeg (Baseline), yuvj420p(pc, bt470bg/unknown/unknown), 1024x576 [SAR 1:1 DAR 16:9], 25 tbr, 25 tbn, 25 tbc
Stream mapping:
  Stream #0:0 -> #0:0 (mjpeg (native) -> h264 (libx264))
Press [q] to stop, [?] for help
[libx264 @ 00000269be0fc240] using SAR=1/1
[libx264 @ 00000269be0fc240] using cpu capabilities: MMX2 SSE2Fast SSSE3 SSE4.2 AVX FMA3 BMI2 AVX2
[libx264 @ 00000269be0fc240] profile High, level 3.1, 4:2:0, 8-bit
Output #0, avi, to 'pipe:1':
  Metadata:
    ISFT            : Lavf58.45.100
    Stream #0:0: Video: h264 (libx264) (H264 / 0x34363248), yuvj420p(pc), 1024x576 [SAR 1:1 DAR 16:9], q=-1--1, 25 fps, 25 tbn, 25 tbc
    Metadata:
      encoder         : Lavc58.91.100 libx264
    Side data:
      cpb: bitrate max/min/avg: 0/0/0 buffer size: 0 vbv_delay: N/A
frame=    1 fps=0.0 q=28.0 Lsize=       2kB time=00:00:00.04 bitrate= 447.2kbits/s speed= 2.9x    
video:1kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 157.307251%
[libx264 @ 00000269be0fc240] frame I:1     Avg QP:13.00  size:   869
[libx264 @ 00000269be0fc240] mb I  I16..4: 100.0%  0.0%  0.0%
[libx264 @ 00000269be0fc240] 8x8 transform intra:0.0%
[libx264 @ 00000269be0fc240] coded y,uvDC,uvAC intra: 0.0% 0.0% 0.0%
[libx264 @ 00000269be0fc240] i16 v,h,dc,p: 97%  0%  3%  0%
[libx264 @ 00000269be0fc240] i8c dc,h,v,p: 100%  0%  0%  0%
[libx264 @ 00000269be0fc240] kb/s:173.80


    


  • ffmpeg - convert background to black? [closed]

    22 June 2022, by david furst

    The basic problem:
Convert all pixels in all frames of a source video to black if their white value is below a certain threshold, and output the results as a series of static images with no transparency.

    


    My solution so far:
I am able to do this with a two-step process:

    


      

    1. Convert pixels below the threshold to transparent using ffmpeg's colorkey filter, outputting a series of PNGs.
    2. Use ImageMagick to convert the PNGs to JPEG.


    


    This approach is very slow. Ideally I'd like to do everything in one go, within ffmpeg.

    


    The reason I haven't been able to do that so far is that the resulting transparency isn't discarded (as I'd hoped) when outputting to non-transparent formats like JPG, even if I try to 'discard' the alpha layer beforehand using combinations of split and lutrgb; the resulting JPEGs still resemble the original images.

    


    My current filter chain:

    


    ffmpeg -hide_banner -y -i input.mp4 -f lavfi -i color=c=white \
    -filter_complex "[0:v]format=gray[src];
        [1][src]scale2ref[white][vid];
        [vid][white]blend=all_mode=multiply:shortest=1,colorkey=black:0.95" %05d.png
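
    For comparison only, a single-step sketch that skips the alpha/colorkey route and maps everything below a luma threshold straight to black before writing JPEGs. The geq filter and the threshold value of 200 are my own choices, not taken from the question, and geq is itself not fast, so this collapses the steps rather than guaranteeing a speedup:

    # luma below 200 becomes black (Y=16 with neutral chroma, assuming limited-range input);
    # everything else is passed through unchanged
    ffmpeg -hide_banner -y -i input.mp4 \
        -vf "geq=lum='if(lt(lum(X,Y),200),16,lum(X,Y))':cb='if(lt(lum(X,Y),200),128,cb(X,Y))':cr='if(lt(lum(X,Y),200),128,cr(X,Y))'" \
        -q:v 2 %05d.jpg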