Advanced search

Media (1)

Word: - Tags -/portrait

Other articles (104)

  • Managing object creation and editing rights

    8 February 2011, by

    By default, many features are restricted to administrators, but each can be configured independently to change the minimum status required to use it, notably: writing content on the site, configurable in the form-template management; adding notes to articles; adding captions and annotations to images;

  • Supporting all media types

    13 April 2011, by

    Unlike most software and media-sharing platforms, MediaSPIP aims to manage as many different media types as possible. The following are just a few examples from an ever-expanding list of supported formats: images: png, gif, jpg, bmp and more; audio: MP3, Ogg, Wav and more; video: AVI, MP4, OGV, mpg, mov, wmv and more; text, code and other data: OpenOffice, Microsoft Office (Word, PowerPoint, Excel), web (html, CSS), LaTeX, Google Earth and (...)

  • Keeping control of your media in your hands

    13 April 2011, by

    The vocabulary used on this site, and around MediaSPIP in general, aims to avoid references to Web 2.0 and the companies that profit from media-sharing.
    While using MediaSPIP, you are invited to avoid using words like "Brand", "Cloud" and "Market".
    MediaSPIP is designed to facilitate the sharing of creative media online, while allowing authors to retain complete control of their work.
    MediaSPIP aims to be accessible to as many people as possible and development is based on expanding the (...)

On other sites (11224)

  • ffmpeg 4 : Using the stream_loop parameter to loop the audio during a video ends up with an infinite loop

    17 June 2020, by JarsOfJam-Scheduler

    Summary

    1. Context
    2. The software I use
    3. The problem
    4. Results
       4.1. Actual Results
       4.2. Expected Results
    5. What did I try to fix the bug?
    6. How to reproduce this bug: minimal and testable example with the provided required data
    7. The question
    8. Sources


    Context

    I would like to set an audio WAV file as the background sound of a WEBM video. The video can be shorter or longer than the audio. At the moment I add the audio to the video, I don't know the length of either stream. The audio must repeat until the video ends (the audio can be truncated if the video ends before the end of the last repetition of the audio).

    



    The software I use

    I use ffmpeg version 4.2.2-1ubuntu1~18.04.sav0.

    The problem

    ffmpeg seems to enter an infinite loop when it processes the mix of the audio and the video. Also, the length of the output file being generated (which contains both video and audio) is equal to the length of the audio instead of the length of the video.

    



    The problem seems to be triggered by this command line:

    ffmpeg -i directory_1/video.webm -stream_loop -1 -fflags +shortest -max_interleave_delta 50000 -i directory_2/audio.wav directory_3/video_and_audio.webm
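A variant often suggested for this kind of looping problem (a sketch only, not verified against this exact ffmpeg build; the directory names are the question's own placeholders): place `-stream_loop -1` immediately before the input it should loop, and use the output option `-shortest` so that encoding stops when the shortest mapped stream (the video) ends.

```shell
# Sketch: -stream_loop is an *input* option, so it must precede the -i it
# applies to; -shortest is an *output* option that ends the encode when the
# shortest mapped stream (here the video) runs out, bounding the looped audio.
ffmpeg -i directory_1/video.webm \
       -stream_loop -1 -i directory_2/audio.wav \
       -map 0:v -map 1:a \
       -shortest \
       directory_3/video_and_audio.webm
```

If `-shortest` alone still hangs when combined with `-stream_loop`, a commonly reported workaround is to keep the muxer-side `-fflags +shortest` together with a large `-max_interleave_delta`, but that combination is build-dependent and worth testing.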

    Results

    Actual Results

    Three things:

    1. The infinite loop of the ffmpeg process: I must stop the ffmpeg process manually.
    2. The output video file with music (still being generated, but written out anyway): it contains both audio and video, but its length is equal to the length of the audio instead of the length of the video.
    3. The following output logs:



    


    ffmpeg version 4.2.2-1ubuntu1~18.04.sav0 Copyright (c) 2000-2019 the FFmpeg developers
      built with gcc 7 (Ubuntu 7.5.0-3ubuntu1~18.04)
      configuration: --prefix=/usr --extra-version='1ubuntu1~18.04.sav0' --toolchain=hardened --libdir=/usr/lib/x86_64-linux-gnu --incdir=/usr/include/x86_64-linux-gnu --arch=amd64 --enable-gpl --disable-stripping --enable-avresample --disable-filter=resample --enable-avisynth --enable-gnutls --enable-ladspa --enable-libaom --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libcdio --enable-libcodec2 --enable-libflite --enable-libfontconfig --enable-libfreetype --enable-libfribidi --enable-libgme --enable-libgsm --enable-libjack --enable-libmp3lame --enable-libmysofa --enable-libopenjpeg --enable-libopenmpt --enable-libopus --enable-libpulse --enable-librsvg --enable-librubberband --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libssh --enable-libtheora --enable-libtwolame --enable-libvidstab --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx265 --enable-libxml2 --enable-libxvid --enable-libzmq --enable-libzvbi --enable-lv2 --enable-omx --enable-openal --enable-opencl --enable-opengl --enable-sdl2 --enable-libdc1394 --enable-libdrm --enable-libiec61883 --enable-nvenc --enable-chromaprint --enable-frei0r --enable-libx264 --enable-shared
      libavutil      56. 31.100 / 56. 31.100
      libavcodec     58. 54.100 / 58. 54.100
      libavformat    58. 29.100 / 58. 29.100
      libavdevice    58.  8.100 / 58.  8.100
      libavfilter     7. 57.100 /  7. 57.100
      libavresample   4.  0.  0 /  4.  0.  0
      libswscale      5.  5.100 /  5.  5.100
      libswresample   3.  5.100 /  3.  5.100
      libpostproc    55.  5.100 / 55.  5.100
    Input #0, matroska,webm, from 'youtubed/my_youtube_video.webm':
      Metadata:
        encoder         : Chrome
      Duration: N/A, start: 0.000000, bitrate: N/A
        Stream #0:0(eng): Video: vp8, yuv420p(progressive), 3200x1608, SAR 1:1 DAR 400:201, 1k tbr, 1k tbn, 1k tbc (default)
        Metadata:
          alpha_mode      : 1
    Guessed Channel Layout for Input Stream #1.0 : stereo
    Input #1, wav, from 'tmp_music/original_music.wav':
      Duration: 00:00:11.78, bitrate: 1411 kb/s
        Stream #1:0: Audio: pcm_s16le ([1][0][0][0] / 0x0001), 44100 Hz, stereo, s16, 1411 kb/s
    Stream mapping:
      Stream #0:0 -> #0:0 (vp8 (native) -> vp9 (libvpx-vp9))
      Stream #1:0 -> #0:1 (pcm_s16le (native) -> opus (libopus))
    Press [q] to stop, [?] for help
    [libvpx-vp9 @ 0x5645268aed80] v1.8.2
    [libopus @ 0x5645268b09c0] No bit rate set. Defaulting to 96000 bps.
    Output #0, webm, to 'youtubed/my_youtube_video_with_music.webm':
      Metadata:
        encoder         : Lavf58.29.100
        Stream #0:0(eng): Video: vp9 (libvpx-vp9), yuv420p(progressive), 3200x1608 [SAR 1:1 DAR 400:201], q=-1--1, 200 kb/s, 1k fps, 1k tbn, 1k tbc (default)
        Metadata:
          alpha_mode      : 1
          encoder         : Lavc58.54.100 libvpx-vp9
        Side data:
          cpb: bitrate max/min/avg: 0/0/0 buffer size: 0 vbv_delay: -1
        Stream #0:1: Audio: opus (libopus), 48000 Hz, stereo, s16, 96 kb/s
        Metadata:
          encoder         : Lavc58.54.100 libopus

    


    



    Expected Results

    1. No infinite loop during the ffmpeg process.
    2. Concerning the output logs, I don't know what they should look like.
    3. The output file with the audio and the video should behave as follows:
       3.1. If the video is longer than the audio, the audio is repeated until it exactly fits the video. The audio can be truncated.
       3.2. If the video is shorter than the audio, the audio is truncated and exactly fits the video.
       3.3. If both video and audio are of the same length, the audio exactly fits the video.


    



    How to reproduce this bug? (+ required data)

    1. Download the following files (resp. audio and video) (I must refresh these download links every 24 hours):
       1.1. https://a.uguu.se/dmgsmItjJMDq_audio.wav
       1.2. https://a.uguu.se/w3qHDlGq6mOW_video.webm
    2. Move them into the directory/directories of your choice.
    3. Open your CLI, move to the appropriate directory and copy/paste/execute the command given in the section "The problem" (don't forget to modify that command to use the directories you chose in step 2).
    4. You'll face my problem.


    



    What did I try to fix the bug?

    Nothing, since I don't even understand why the bug occurs.

    



    The question

    How should I correct my command in order to mix these audio and video streams without any infinite loop during the ffmpeg process, keeping in mind that I don't know their lengths, and that the audio must be repeated to fit the video, even if it must be truncated (when the last repetition of the audio is cut because the video stream has just ended)?
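One way to check whether a candidate command meets this success criterion (a sketch; it assumes `ffprobe` from the same ffmpeg install and reuses the question's placeholder paths): compare the container durations of the input video and the mixed output, which should be (almost) equal when the audio loop is bounded correctly.

```shell
# Print the duration (in seconds) of the input video and of the mixed output;
# the two numbers should nearly match if the looped audio was cut at video end.
ffprobe -v error -show_entries format=duration -of default=noprint_wrappers=1:nokey=1 directory_1/video.webm
ffprobe -v error -show_entries format=duration -of default=noprint_wrappers=1:nokey=1 directory_3/video_and_audio.webm
```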

    



    Sources

    The source is the command line given in the section "The problem".

    


  • FFmpeg with Nvidia GPU - full HW transcode with 50i to 50p deinterlacing

    5 January 2018, by Jernej Stopinšek

    I’m trying to do a full hardware transcode of a UDP stream to HLS
    with 50i to 50p deinterlacing.

    I’m using ffmpeg and Nvidia GPU.

    Since HLS requires deinterlacing

    https://developer.apple.com/library/content/documentation/General/Reference/HLSAuthoringSpec/Requirements.html

    I would like to deinterlace an interlaced source stream and preserve
    as much smooth motion and picture quality as possible.

    My hardware, software and driver info :

    GPU : Tesla P100-PCIE-12GB
    Nvidia Driver Version : 387.26
    Cuda compilation tools, release 9.1, V9.1.85
    FFmpeg from git on 20171218

    ffmpeg version N-89520-g3f88744067 Copyright (c) 2000-2017 the FFmpeg developers
      built with gcc 6.3.0 (Debian 6.3.0-18) 20170516
      configuration: --enable-gpl --enable-cuda-sdk --enable-libx264 --enable-libx265 --enable-nonfree --enable-libnpp --enable-opengl --enable-opencl --enable-libfreetype --enable-openssl --enable-libzvbi --enable-libfontconfig --enable-libfreetype --enable-libfribidi --extra-cflags=-I/usr/local/cuda/include --extra-ldflags=-L/usr/local/cuda/lib64 --arch=x86_64
      libavutil      56.  6.100 / 56.  6.100
      libavcodec     58.  8.100 / 58.  8.100
      libavformat    58.  3.100 / 58.  3.100
      libavdevice    58.  0.100 / 58.  0.100
      libavfilter     7.  7.100 /  7.  7.100
      libswscale      5.  0.101 /  5.  0.101
      libswresample   3.  0.101 /  3.  0.101
      libpostproc    55.  0.100 / 55.  0.100

    Input stream info :

    ffmpeg -t 00:05:00 -i udp://xxx.xxx.xxx.xxx:xxxx -map 0:0 -vf idet -c rawvideo -y -f rawvideo /dev/null

    Input #0, mpegts, from 'udp://xxx.xxx.xxx.xxx:xxxx':
      Duration: N/A, start: 49634.159411, bitrate: N/A
      Program xxxxx
        Metadata:
          service_name    :
          service_provider:
        Stream #0:0[0x44d]: Video: h264 (Main) ([27][0][0][0] / 0x001B), yuv420p(tv, bt709, top first), 1920x1080 [SAR 1:1 DAR 16:9], 25 fps, 50 tbr, 90k tbn, 50 tbc
        Stream #0:1[0x19de]: Audio: mp2 ([3][0][0][0] / 0x0003), 48000 Hz, stereo, s16p, 192 kb/s
        Stream #0:2[0x19e1]: Subtitle: dvb_subtitle ([6][0][0][0] / 0x0006)

    Output #0, rawvideo, to '/dev/null':
      Metadata:
        encoder         : Lavf58.3.100
        Stream #0:0: Video: rawvideo (I420 / 0x30323449), yuv420p, 1920x1080 [SAR 1:1 DAR 16:9], q=2-31, 622080 kb/s, 25 fps, 25 tbn, 25 tbc
        Metadata:
          encoder         : Lavc58.8.100 rawvideo
    frame= 7538 fps= 25 q=-0.0 Lsize=22896675kB time=00:05:01.52 bitrate=622080.0kbits/s dup=38 drop=0 speed=1.02x
    video:22896675kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 0.000000%
    [Parsed_idet_0 @ 0x56370b3c5080] Repeated Fields: Neither: 7458 Top: 24 Bottom: 18
    [Parsed_idet_0 @ 0x56370b3c5080] Single frame detection: TFF: 281 BFF: 13 Progressive: 5639 Undetermined: 1567
    [Parsed_idet_0 @ 0x56370b3c5080] Multi frame detection: TFF: 380 BFF: 0 Progressive: 7120 Undetermined: 0


    This is my command for adaptive hardware deinterlacing. It gives great picture quality, but the sound is out of sync.

    ffmpeg -y -err_detect ignore_err -loglevel debug -vsync -1 -hwaccel cuvid -hwaccel_device 1 -c:v h264_cuvid -deint adaptive -r:v 50 -gpu:v 1 -i "udp://xxx.xxx.xxx.xxx:xxxx=?overrun_nonfatal=1&fifo_size=84450&buffer_size=33554432" -map 0:0 -map 0:1 -c:a aac -b:a 196k -c:v h264_nvenc -flags -global_header+cgop -gpu:v 1 -g:v 50 -bf:v 4 -coder:v cabac -b_adapt:v false -b:v 5184000 -minrate:v 5184000 -maxrate:v 5184000 -bufsize:v 2488320 -rc:v cbr_hq -2pass:v true -rc-lookahead:v 25 -no-scenecut:v 1 -profile:v high -preset:v slow -color_range:v 1 -color_trc:v 1 -color_primaries:v 1 -colorspace:v 1 -f hls -hls_time 5 -hls_list_size 3 -start_number 0 -hls_flags delete_segments /srv/hls/program_01/1080p/index.m3u8

    If I add the option "-drop_second_field 1" to h264_cuvid, remove -r:v 50 from the input and put it on h264_nvenc, then the transcoded stream has synced audio, but I think I’m losing quality due to the drop_second_field option.

    ffmpeg -y -err_detect ignore_err -loglevel debug -vsync -1 -hwaccel cuvid -hwaccel_device 1 -c:v h264_cuvid -deint adaptive -drop_second_field 1 -gpu:v 1 -i "udp://xxx.xxx.xxx.xxx:xxxx=?overrun_nonfatal=1&fifo_size=84450&buffer_size=33554432" -map 0:0 -map 0:1 -c:a aac -b:a 196k -c:v h264_nvenc -flags -global_header+cgop -gpu:v 1 -g:v 50 -r:v 50 -bf:v 4 -coder:v cabac -b_adapt:v false -b:v 5184000 -minrate:v 5184000 -maxrate:v 5184000 -bufsize:v 2488320 -rc:v cbr_hq -2pass:v true -rc-lookahead:v 25 -no-scenecut:v 1 -profile:v high -preset:v slow -color_range:v 1 -color_trc:v 1 -color_primaries:v 1 -colorspace:v 1 -f hls -hls_time 5 -hls_list_size 3 -start_number 0 -hls_flags delete_segments /srv/hls/program_01/1080p/index.m3u8

    Could someone please point me in the right direction on how to properly deinterlace with cuvid with the minimal possible loss of quality?
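One direction worth trying (a sketch, not tested on this setup; it requires a newer ffmpeg build than the one quoted above, since the `yadif_cuda` filter landed in late 2018, and it keeps the question's placeholder URL and bitrates): instead of the decoder's built-in `-deint adaptive`, decode to CUDA frames and deinterlace with `yadif_cuda` in field-rate mode, which produces 50p from 50i without forcing `-r:v` on the input and without dropping fields.

```shell
# Sketch: GPU decode -> yadif_cuda field-rate (50i -> 50p) deinterlace -> NVENC.
# mode=send_field emits one frame per field, doubling the frame rate to 50p.
ffmpeg -y -hwaccel cuda -hwaccel_output_format cuda -c:v h264_cuvid \
  -i "udp://xxx.xxx.xxx.xxx:xxxx?overrun_nonfatal=1&fifo_size=84450&buffer_size=33554432" \
  -map 0:0 -map 0:1 \
  -vf yadif_cuda=mode=send_field \
  -c:a aac -b:a 196k \
  -c:v h264_nvenc -g:v 50 -b:v 5184000 -minrate:v 5184000 -maxrate:v 5184000 \
  -bufsize:v 2488320 -rc:v cbr_hq -profile:v high -preset:v slow \
  -f hls -hls_time 5 -hls_list_size 3 -start_number 0 -hls_flags delete_segments \
  /srv/hls/program_01/1080p/index.m3u8
```

Because deinterlacing happens in a filter rather than in the decoder, the audio timestamps are left alone, which in similar reports avoids the sync drift seen with `-deint adaptive -r:v 50`.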

  • ffmpeg capture from ip camera video in h264 stream [closed]

    23 March 2023, by Иванов Иван

    I can't read the frames from the camera and then write them to a video file (of any format). The problem is that even the frames I do get are distorted: the coordinates of the pixels seem shifted, so the video comes out crooked and warped.

    


    C++ code:

    


    https://drive.google.com/file/d/1W2sZMR5D5pvVmnhiQyhiaQhC9frhdeII/view?usp=sharing

    


     #define INBUF_SIZE 4096

     //Writes the minimal required header for a pgm file format, then the picture.
     //Portable graymap format -> https://en.wikipedia.org/wiki/Netpbm_format#PGM_example
     static void save_gray_frame(unsigned char *buf, int wrap, int xsize, int ysize, char *filename)
     {
         FILE *f = fopen(filename, "wb");
         fprintf(f, "P5\n%d %d\n%d\n", xsize, ysize, 255);
         //Writing line by line: copy xsize bytes per row, stepping by the stride
         //(wrap = linesize), not by the width - otherwise the picture comes out skewed.
         for (int i = 0; i < ysize; i++)
             fwrite(buf + i * wrap, 1, xsize, f);
         fclose(f);
     }

        //Contains data on the configuration of media content, such as bitrate,
        //frame rate, sampling frequency, channels, height and many other things.
        AVCodecContext *AVCodecContext_ = NULL;
        AVCodecParameters *AVCodecParametr_ = NULL;
        //This structure describes decoded (raw) audio or video data.
        AVFrame *frame;
        uint8_t inbuf[INBUF_SIZE + AV_INPUT_BUFFER_PADDING_SIZE];
        uint8_t *data;
        size_t data_size;
        int ret;
        int eof;
        AVFormatContext *AVfc = NULL;
        int ERRORS;
        const AVCodec *codec;
        char buf[1024];
        const char *filename;
        const char *outfilename;

        //https://habr.com/ru/post/137793/
        //Stores one compressed frame.
        AVPacket *pkt;

        //**********************************************************************
        //Beginning of reading video from the camera.
        //**********************************************************************

        avdevice_register_all();

        filename = "rtsp://admin: 754HG@192.168.1.75:554/11";
        //filename = "c:\\1.avi";
        outfilename = "C:\\2.MP4";

        //We open the video stream (a file or the camera).
        ERRORS = avformat_open_input(&AVfc, filename, NULL, NULL);
        if (ERRORS < 0) {
            fprintf(stderr, "ffmpeg: could not open file\n");
            return -1;
        }

        //After opening, we can print information about the video file (iformat = format
        //name; duration = duration). With the camera connected it prints: Duration: N/A,
        //start: 0.000000, bitrate: N/A
        printf("Format %s, duration %lld us", AVfc->iformat->long_name, AVfc->duration);

        ERRORS = avformat_find_stream_info(AVfc, NULL);
        if (ERRORS < 0) {
            fprintf(stderr, "ffmpeg: Unable to find stream info\n");
            return -1;
        }

        //Number of streams.
        int CountStream = AVfc->nb_streams;

        //Look for the video stream.
        int video_stream;
        for (video_stream = 0; video_stream < AVfc->nb_streams; ++video_stream) {
            if (AVfc->streams[video_stream]->codecpar->codec_type == AVMEDIA_TYPE_VIDEO) {
                break;
            }
        }

        if (video_stream == AVfc->nb_streams) {
            fprintf(stderr, "ffmpeg: Unable to find video stream\n");
            return -1;
        }

        //Determine the codec; for my camera it is AV_CODEC_ID_HEVC (what the camera broadcasts).
        codec = avcodec_find_decoder(AVfc->streams[video_stream]->codecpar->codec_id);

        //Allocate the codec context.
        AVCodecContext_ = avcodec_alloc_context3(codec);
        if (!AVCodecContext_) {
            fprintf(stderr, "Could not allocate a video codec context: the codec is not supported\n");
            exit(1);
        }

        //Copy the stream parameters (dimensions, extradata, pixel format) into the
        //context; without this step many decoders produce garbage frames.
        avcodec_parameters_to_context(AVCodecContext_, AVfc->streams[video_stream]->codecpar);

        //avcodec_open2() (declared in libavcodec/avcodec.h) initializes the
        //AVCodecContext for a video or audio codec. We open the codec.
        ERRORS = avcodec_open2(AVCodecContext_, codec, NULL);
        if (ERRORS < 0) {
            fprintf(stderr, "ffmpeg: It is not possible to open the codec\n");
            return -1;
        }

        //Reserved for sound processing.
        //swr_alloc_set_opts()
        //swr_init();

        //Print all information about the video file.
        av_dump_format(AVfc, 0, argv[1], 0);

        //=====================================================================
        //Next we receive frames; until now we only read information about the input video.
        //=====================================================================

        //We are going to read packets from the stream and decode them into frames,
        //but first we need to allocate memory for both components (AVPacket and AVFrame).
        frame = av_frame_alloc();
        if (!frame) {
            fprintf(stderr, "Could not allocate memory for video frames\n");
            exit(1);
        }
        //Allocate memory for a packet.
        pkt = av_packet_alloc();
        //File name for saving the picture.
        const char *FileName1 = "C:\\Users\\Павел\\Desktop\\NyFile.PGM";
        //Read data while there is some.
        while (av_read_frame(AVfc, pkt) >= 0) {
            //Is this packet from the video stream? Because there is also a soundtrack.
            if (pkt->stream_index == video_stream) {
                //Send the raw packet data as input to the decoder.
                ret = avcodec_send_packet(AVCodecContext_, pkt);
                if (ret < 0) {
                    std::cout << "avcodec_send_packet: " << ret;
                    break;
                }
                while (ret >= 0) {
                    //Get the decoded output data from the decoder.
                    ret = avcodec_receive_frame(AVCodecContext_, frame);
                    if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF) {
                        break;
                    }
                    std::cout << "frame " << AVCodecContext_->frame_number;
                    //Experimentally - save a frame as a picture.
                    save_gray_frame(frame->data[0], frame->linesize[0], frame->width, frame->height, (char *)FileName1);
                }
            }
            av_packet_unref(pkt);
        }

        avcodec_free_context(&AVCodecContext_);
        av_frame_free(&frame);
        av_packet_free(&pkt);

        return 0;
