Advanced search

Media (21)

Keyword: - Tags -/Nine Inch Nails

Other articles (104)

  • Submit improvements and additional plugins

    10 April 2011

    If you have developed a new extension that adds one or more useful features to MediaSPIP, let us know, and its integration into the official distribution will be considered.
    You can use the development mailing list to announce it or to ask for help with writing the plugin. Since MediaSPIP is based on SPIP, the SPIP-zone mailing list of SPIP can also be used to (...)

  • Emballe médias: what is it for?

    4 February 2011, by

    This plugin is designed to manage sites that publish documents of all types online.
    It creates "media", namely: a "media" is an article, in the SPIP sense, created automatically when a document is uploaded, whether audio, video, image or text; only one document can be linked to a so-called "media" article;

  • APPENDIX: Plugins used specifically for the farm

    5 March 2010, by

    The central/master site of the farm needs several plugins, in addition to those of the channels, to work properly: the Gestion de la mutualisation plugin; the inscription3 plugin, to manage registrations and requests to create a shared-hosting instance at user sign-up; the verifier plugin, which provides a field-validation API (used by inscription3); the champs extras v2 plugin, required by inscription3 (...)

On other sites (11948)

  • RTMP streaming using FFMPEG and HLS conversion in NGINX

    1 May 2019, by Jonathan Piat

    I have some FFmpeg code in C++ that generates an RTMP stream from H264 NALUs and audio samples encoded in AAC. I am using NGINX to take the RTMP stream and forward it to clients, and that works fine. My issue is that when I use NGINX to convert the RTMP stream to HLS, no HLS chunks or playlist are generated. Yet if I use ffmpeg to copy the RTMP stream and publish a new stream to NGINX, the HLS conversion works.
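
    For reference, the re-publish that does produce HLS output is a plain stream copy; presumably something like the following (the URLs are taken from the logs below; the exact flags are an assumption):

    ```shell
    # Hypothetical reconstruction of the working re-publish: read the original
    # RTMP stream and re-publish it unchanged (stream copy) to a second application.
    ffmpeg -i rtmp://127.0.0.1/live/beam_0 -c copy -f flv rtmp://localhost/live/copy_stream
    ```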

    Here is what I get when I do the stream copy using FFmpeg:

    Input #0, flv, from 'rtmp://127.0.0.1/live/beam_0'
    Metadata:
    Server          : NGINX RTMP (github.com/sergey-dryabzhinsky/nginx-rtmp-module)
    displayWidth    : 1920
    displayHeight   : 1080
    fps             : 30
    profile         :
    level           :
    Duration: 00:00:00.00, start: 5.019000, bitrate: N/A
    Stream #0:0: Audio: aac, 44100 Hz, mono, fltp, 128 kb/s
    Stream #0:1: Video: h264 (High), 1 reference frame, yuv420p(progressive, left), 1920x1080 (1920x1088), 8000 kb/s, 30 fps, 30.30 tbr, 1k tbn, 60 tbc

    Output #0, flv, to 'rtmp://localhost/live/copy_stream':
    Metadata:
    Server          : NGINX RTMP (github.com/sergey-dryabzhinsky/nginx-rtmp-module)
    displayWidth    : 1920
    displayHeight   : 1080
    fps             : 30
    profile         :
    level           :
    encoder         : Lavf57.83.100
    Stream #0:0: Video: h264 (High), 1 reference frame ([7][0][0][0] / 0x0007), yuv420p(progressive, left), 1920x1080 (0x0), q=2-31, 8000 kb/s, 30 fps, 30.30 tbr, 1k tbn, 1k tbc
    Stream #0:1: Audio: aac ([10][0][0][0] / 0x000A), 44100 Hz, mono, fltp, 128 kb/s

    There are not many differences between the two streams, so I don't really see what is going wrong or what I should change in my C++ code. One very weird issue is that my audio stream is 48 kHz when I publish it, but here it is recognized as 44100 Hz:

    Output #0, flv, to 'rtmp://127.0.0.1/live/beam_0':
    Stream #0:0, 0, 1/1000: Video: h264 (libx264), 1 reference frame, yuv420p, 1920x1080, 0/1, q=-1--1, 8000 kb/s, 30 fps, 1k tbn, 1k tbc
    Stream #0:1, 0, 1/1000: Audio: aac, 48000 Hz, 1 channels, fltp, 128 kb/s

    UPDATE 1:

    The output context is created using the following code:

    pOutputFormatContext->oformat = av_guess_format("flv", url.toStdString().c_str(), nullptr);
    memcpy(pOutputFormatContext->filename, url.toStdString().c_str(), url.length());
    avio_open(&pOutputFormatContext->pb, url.toStdString().c_str(), AVIO_FLAG_WRITE);
    pOutputFormatContext->oformat->video_codec = AV_CODEC_ID_H264 ;
    pOutputFormatContext->oformat->audio_codec = AV_CODEC_ID_AAC ;

    The audio stream is created with:

    AVDictionary *opts = nullptr;
    //pAudioCodec = avcodec_find_encoder(AV_CODEC_ID_VORBIS);
    pAudioCodec = avcodec_find_encoder(AV_CODEC_ID_AAC);

    pAudioCodecContext = avcodec_alloc_context3(pAudioCodec);

    pAudioCodecContext->thread_count = 1;
    pAudioFrame = av_frame_alloc();

    av_dict_set(&opts, "strict", "experimental", 0);
    pAudioCodecContext->bit_rate = 128000;
    pAudioCodecContext->sample_fmt = AV_SAMPLE_FMT_FLTP;
    pAudioCodecContext->sample_rate = static_cast<int>(sample_rate) ;
    pAudioCodecContext->channels = nb_channels ;
    pAudioCodecContext->time_base.num = 1;
    pAudioCodecContext->time_base.den = 1000 ;

    //pAudioCodecContext->time_base.den = static_cast<int>(sample_rate) ;

    pAudioCodecContext->codec_type = AVMEDIA_TYPE_AUDIO;
    avcodec_open2(pAudioCodecContext, pAudioCodec, &opts);


    pAudioFrame->nb_samples     = pAudioCodecContext->frame_size;
    pAudioFrame->format         = pAudioCodecContext->sample_fmt;
    pAudioFrame->channel_layout = pAudioCodecContext->channel_layout;
    mAudioSamplesBufferSize = av_samples_get_buffer_size(nullptr, pAudioCodecContext->channels, pAudioCodecContext->frame_size, pAudioCodecContext->sample_fmt, 0);

    avcodec_fill_audio_frame(pAudioFrame, pAudioCodecContext->channels, pAudioCodecContext->sample_fmt, (const uint8_t*) pAudioSamples, mAudioSamplesBufferSize, 0);

    if( pOutputFormatContext->oformat->flags & AVFMT_GLOBALHEADER )
       pAudioCodecContext->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;

    pAudioStream = avformat_new_stream(pOutputFormatContext, 0);

    pAudioStream->codec = pAudioCodecContext ;
    pAudioStream->id = pOutputFormatContext->nb_streams - 1;
    pAudioStream->time_base.den = pAudioStream->pts.den =  pAudioCodecContext->time_base.den;
    pAudioStream->time_base.num = pAudioStream->pts.num =  pAudioCodecContext->time_base.num;

    mAudioPacketTs = 0 ;
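    Since the audio stream's time base is set to 1/1000 above, each AAC frame (frame_size samples) has to be converted from samples to milliseconds when timestamping packets. A self-contained sketch of that conversion (the helper name is illustrative, not from the code above):

    ```cpp
    #include <cassert>
    #include <cstdint>

    // Duration of one AAC frame (nb_samples samples) in a 1/1000 time base,
    // rounded to nearest -- what mAudioPacketTs should advance by per packet.
    static int64_t frame_duration_ms(int64_t nb_samples, int64_t sample_rate) {
        return (nb_samples * 1000 + sample_rate / 2) / sample_rate;
    }

    int main() {
        // a typical AAC frame_size is 1024 samples
        assert(frame_duration_ms(1024, 48000) == 21);  // ~21.33 ms rounds to 21
        assert(frame_duration_ms(1024, 44100) == 23);  // ~23.22 ms rounds to 23
        return 0;
    }
    ```

    The two asserts show why a 48 kHz/44.1 kHz mix-up makes timestamps drift: the per-frame increment differs by 2 ms.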

    The video stream is created with:

    pVideoCodec = avcodec_find_encoder(AV_CODEC_ID_H264);

    pVideoCodecContext = avcodec_alloc_context3(pVideoCodec);

    pVideoCodecContext->codec_type = AVMEDIA_TYPE_VIDEO ;
    pVideoCodecContext->thread_count = 1 ;
    pVideoCodecContext->width = width;
    pVideoCodecContext->height = height;
    pVideoCodecContext->bit_rate = 8000000 ;
    pVideoCodecContext->time_base.den = 1000 ;
    pVideoCodecContext->time_base.num = 1 ;
    pVideoCodecContext->gop_size = 10;
    pVideoCodecContext->pix_fmt = AV_PIX_FMT_YUV420P;
    pVideoCodecContext->flags = 0x0007 ;

    pVideoCodecContext->extradata_size = sizeof(extra_data_buffer);
    pVideoCodecContext->extradata = (uint8_t *) av_malloc ( sizeof(extra_data_buffer) );
    memcpy ( pVideoCodecContext->extradata, extra_data_buffer, sizeof(extra_data_buffer));

    avcodec_open2(pVideoCodecContext,pVideoCodec,0);

    pVideoFrame = av_frame_alloc();

    AVDictionary *opts = nullptr;
    av_dict_set(&opts, "strict", "experimental", 0);
    memcpy(pOutputFormatContext->filename, this->mStreamUrl.toStdString().c_str(), this->mStreamUrl.length());
    pOutputFormatContext->oformat->video_codec = AV_CODEC_ID_H264 ;

    if( pOutputFormatContext->oformat->flags & AVFMT_GLOBALHEADER )
       pVideoCodecContext->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;

    pVideoStream = avformat_new_stream(pOutputFormatContext, pVideoCodec);


    //The following section is here because AVFormat complains about parameters being passed through the context and not CodecPar
    pVideoStream->codec = pVideoCodecContext ;
    pVideoStream->id = pOutputFormatContext->nb_streams-1;
    pVideoStream->time_base.den = pVideoStream->pts.den =  pVideoCodecContext->time_base.den;
    pVideoStream->time_base.num = pVideoStream->pts.num =  pVideoCodecContext->time_base.num;
    pVideoStream->avg_frame_rate.num = fps ;
    pVideoStream->avg_frame_rate.den = 1 ;
    pVideoStream->codec->gop_size = 10 ;

    mVideoPacketTs = 0 ;

    Then each video packet and audio packet is pushed with correctly scaled pts/dts. I have fixed the 48 kHz issue: it happened because I was configuring the stream through the codec context and then through the codec parameters (because of a warning at runtime).

    This RTMP stream still does not work for HLS conversion by NGINX, but if I just use FFmpeg to take the RTMP stream from NGINX and re-publish it with the copy codec, then it works.
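
    For reference, the fix described in the update, configuring the muxer's stream from the codec parameters instead of the deprecated AVStream::codec field, is normally done with avcodec_parameters_from_context(). A sketch only, reusing the pAudioStream/pAudioCodecContext names from the code above (it needs the FFmpeg headers and a live encoder context, so it is not runnable standalone):

    ```cpp
    // Sketch: after avcodec_open2() succeeds, copy the encoder's settings
    // (sample rate, channel layout, extradata, ...) into the stream's codecpar
    // instead of assigning pAudioStream->codec directly.
    int ret = avcodec_parameters_from_context(pAudioStream->codecpar, pAudioCodecContext);
    if (ret < 0) {
        // handle error: the muxer would otherwise see stale/default parameters
    }
    pAudioStream->time_base = pAudioCodecContext->time_base;
    ```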

  • Non-Monotonous DTS on concat (ffmpeg)

    2 May 2015, by whitesiroi

    After running the command ffmpeg -f concat -i mylist.txt -c copy output.mp4, I get this message:

    ffmpeg -f concat -i mylist.txt -c copy output.mp4
    ffmpeg version 2.6.2 Copyright (c) 2000-2015 the FFmpeg developers
     built with Apple LLVM version 6.1.0 (clang-602.0.49) (based on LLVM 3.6.0svn)
     configuration: --prefix=/usr/local/Cellar/ffmpeg/2.6.2 --enable-shared --enable-pthreads --enable-gpl --enable-version3 --enable-hardcoded-tables --enable-avresample --cc=clang --host-cflags= --host-ldflags= --enable-libx264 --enable-libmp3lame --enable-libvo-aacenc --enable-libxvid --enable-vda
     libavutil      54. 20.100 / 54. 20.100
     libavcodec     56. 26.100 / 56. 26.100
     libavformat    56. 25.101 / 56. 25.101
     libavdevice    56.  4.100 / 56.  4.100
     libavfilter     5. 11.102 /  5. 11.102
     libavresample   2.  1.  0 /  2.  1.  0
     libswscale      3.  1.101 /  3.  1.101
     libswresample   1.  1.100 /  1.  1.100
     libpostproc    53.  3.100 / 53.  3.100
    Input #0, concat, from 'mylist.txt':
     Duration: N/A, start: 0.000000, bitrate: 829 kb/s
       Stream #0:0: Video: h264 (High) (avc1 / 0x31637661), yuv420p, 1440x900, 701 kb/s, 30 fps, 30 tbr, 15360 tbn, 60 tbc
       Stream #0:1: Audio: aac (LC) (mp4a / 0x6134706D), 44100 Hz, stereo, fltp, 128 kb/s
    Output #0, mp4, to 'output.mp4':
     Metadata:
       encoder         : Lavf56.25.101
       Stream #0:0: Video: h264 ([33][0][0][0] / 0x0021), yuv420p, 1440x900, q=2-31, 701 kb/s, 30 fps, 30 tbr, 15360 tbn, 15360 tbc
       Stream #0:1: Audio: aac ([64][0][0][0] / 0x0040), 44100 Hz, stereo, 128 kb/s
    Stream mapping:
     Stream #0:0 -> #0:0 (copy)
     Stream #0:1 -> #0:1 (copy)
    Press [q] to stop, [?] for help
    [mp4 @ 0x7f897a01bc00] Non-monotonous DTS in output stream 0:0; previous: 598061, current: 467644; changing to 598062. This may result in incorrect timestamps in the output file.
    [mp4 @ 0x7f897a01bc00] Non-monotonous DTS in output stream 0:0; previous: 598062, current: 468044; changing to 598063. This may result in incorrect timestamps in the output file.
    [mp4 @ 0x7f897a01bc00] Non-monotonous DTS in output stream 0:0; previous: 598063, current: 468444; changing to 598064. This may result in incorrect timestamps in the output file.
    ...
    [mp4 @ 0x7f897a01bc00] Non-monotonous DTS in output stream 0:0; previous: 598362, current: 588044; changing to 598363. This may result in incorrect timestamps in the output file.
    frame= 1472 fps=0.0 q=-1.0 Lsize=    5825kB time=00:00:49.04 bitrate= 973.0kbits/s
    video:4903kB audio:877kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 0.776358%

    Content of mylist.txt

    file 'cut.mp4'
    file 'cut2.mp4'

    cut.mp4 output from ffmpeg:

    ffmpeg -i cut.mp4
    ffmpeg version 2.6.2 Copyright (c) 2000-2015 the FFmpeg developers
     built with Apple LLVM version 6.1.0 (clang-602.0.49) (based on LLVM 3.6.0svn)
     configuration: --prefix=/usr/local/Cellar/ffmpeg/2.6.2 --enable-shared --enable-pthreads --enable-gpl --enable-version3 --enable-hardcoded-tables --enable-avresample --cc=clang --host-cflags= --host-ldflags= --enable-libx264 --enable-libmp3lame --enable-libvo-aacenc --enable-libxvid --enable-vda
     libavutil      54. 20.100 / 54. 20.100
     libavcodec     56. 26.100 / 56. 26.100
     libavformat    56. 25.101 / 56. 25.101
     libavdevice    56.  4.100 / 56.  4.100
     libavfilter     5. 11.102 /  5. 11.102
     libavresample   2.  1.  0 /  2.  1.  0
     libswscale      3.  1.101 /  3.  1.101
     libswresample   1.  1.100 /  1.  1.100
     libpostproc    53.  3.100 / 53.  3.100
    Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'cut.mp4':
     Metadata:
       major_brand     : isom
       minor_version   : 512
       compatible_brands: isomiso2avc1mp41
       encoder         : Lavf56.25.101
     Duration: 00:00:39.04, start: 0.036281, bitrate: 837 kb/s
       Stream #0:0(und): Video: h264 (High) (avc1 / 0x31637661), yuv420p, 1440x900, 701 kb/s, 30 fps, 30 tbr, 15360 tbn, 60 tbc (default)
       Metadata:
         handler_name    : VideoHandler
       Stream #0:1(und): Audio: aac (LC) (mp4a / 0x6134706D), 44100 Hz, stereo, fltp, 128 kb/s (default)
       Metadata:
         handler_name    : SoundHandler

    cut2.mp4 output from ffmpeg:

    ffmpeg -i cut2.mp4
    ffmpeg version 2.6.2 Copyright (c) 2000-2015 the FFmpeg developers
     built with Apple LLVM version 6.1.0 (clang-602.0.49) (based on LLVM 3.6.0svn)
     configuration: --prefix=/usr/local/Cellar/ffmpeg/2.6.2 --enable-shared --enable-pthreads --enable-gpl --enable-version3 --enable-hardcoded-tables --enable-avresample --cc=clang --host-cflags= --host-ldflags= --enable-libx264 --enable-libmp3lame --enable-libvo-aacenc --enable-libxvid --enable-vda
     libavutil      54. 20.100 / 54. 20.100
     libavcodec     56. 26.100 / 56. 26.100
     libavformat    56. 25.101 / 56. 25.101
     libavdevice    56.  4.100 / 56.  4.100
     libavfilter     5. 11.102 /  5. 11.102
     libavresample   2.  1.  0 /  2.  1.  0
     libswscale      3.  1.101 /  3.  1.101
     libswresample   1.  1.100 /  1.  1.100
     libpostproc    53.  3.100 / 53.  3.100
    Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'cut2.mp4':
     Metadata:
       major_brand     : isom
       minor_version   : 512
       compatible_brands: isomiso2avc1mp41
       encoder         : Lavf56.25.101
     Duration: 00:00:10.07, start: 0.000000, bitrate: 1498 kb/s
       Stream #0:0(und): Video: h264 (Main) (avc1 / 0x31637661), yuv420p, 1440x900, 1271 kb/s, 30 fps, 30 tbr, 12k tbn, 60 tbc (default)
       Metadata:
         handler_name    : VideoHandler
       Stream #0:1(und): Audio: aac (LC) (mp4a / 0x6134706D), 44100 Hz, stereo, fltp, 218 kb/s (default)
       Metadata:
         handler_name    : SoundHandler
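
    Worth noting when comparing the two probes: the inputs differ in video track timescale (15360 tbn vs 12k tbn) and H264 profile (High vs Main). With -c copy the concat demuxer has to rescale the second segment's timestamps into the first's time base and offset them by the first segment's duration. A self-contained sketch of that arithmetic (the helper name is made up; the rounding mirrors FFmpeg's av_rescale_q):

    ```cpp
    #include <cassert>
    #include <cstdint>

    // Rescale a timestamp from a 1/src_den time base to a 1/dst_den time base,
    // rounding to nearest -- the same idea as FFmpeg's av_rescale_q.
    static int64_t rescale_ts(int64_t ts, int64_t src_den, int64_t dst_den) {
        return (ts * dst_den + src_den / 2) / src_den;
    }

    int main() {
        // one second of video: 12000 ticks at 12k tbn == 15360 ticks at 15360 tbn
        assert(rescale_ts(12000, 12000, 15360) == 15360);
        // cut.mp4 lasts 39.04 s; in 15360 tbn that is the offset at which
        // cut2.mp4's timestamps should resume
        assert(rescale_ts(39040, 1000, 15360) == 599654);
        return 0;
    }
    ```

    The second value is close to the "previous: 598061" DTS in the warnings above, which is consistent with the second file's timestamps restarting below the first file's final DTS.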

    I produced cut.mp4 with this command: ffmpeg -ss 00:00:11 -i myfile.mp4 -to 00:00:39 -vf 'drawbox= : x=0 : y=0 : color=invert' cut.mp4

    I produced cut2.mp4 with this command: ffmpeg -ss 00:00:00 -i myfile.mp4 -to 00:00:10 -c copy cut2.mp4

    I searched a lot but didn't find any solution; maybe someone can help me out with this one.
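
    A common workaround for this class of error (untested here; the parameters below are assumptions based on the probes above) is to re-encode the segments to identical parameters before concatenating:

    ```shell
    # Hypothetical fix: re-encode cut2.mp4 so its codec profile and track
    # timescale match cut.mp4, then retry the stream-copy concat.
    ffmpeg -i cut2.mp4 -c:v libx264 -profile:v high -video_track_timescale 15360 \
           -c:a aac -b:a 128k cut2_fixed.mp4
    ffmpeg -f concat -i mylist.txt -c copy output.mp4
    ```

    (mylist.txt would need to reference cut2_fixed.mp4 instead of cut2.mp4.)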

    +++UPDATE+++

    output.mp4 is playable, but looks weird.


  • Ffmpeg video output is 0 seconds with correct filesize when uploading to google cloud bucket

    22 August 2022, by Turgut

    I've made a C++ program that lives in GKE and takes some videos as input using FFmpeg, does something with that input using OpenGL (not relevant), then encodes those edited videos as a single output. Normally the program works perfectly fine on my local machine: it encodes just as I want, with no warnings or valgrind errors whatsoever. After encoding the video, I want my program to upload it to Google Cloud Storage, and this is where the problem comes in. I have tried two methods: first, uploading with curl through a signed URL; second, mounting the bucket with gcsfuse (I was already mounting the bucket to access the inputs). Both methods yielded weird, seemingly undefined behavior, ranging from: a 0-byte or 44-byte output file; a file of the correct size (500 MB) whose video is 0 seconds long (this is the most common case); a 0.4-second video; or, rarely, the desired output encoded normally.

    From the logs I can't see anything unusual; everything seems to work fine, and neither ffmpeg nor valgrind reports any errors or warnings. Even when I use curl, the output is perfectly fine as first encoded (before sending it with curl), but the video gets messed up once curl uploads it to the cloud.

    I'm using the muxing.c example of FFmpeg to encode my video, with the only difference being:

    void video_encoder::fill_yuv_image(AVFrame *frame, struct SwsContext *sws_context) {
        const int in_linesize[1] = { 4 * width };
        //uint8_t* dest[4] = { rgb_data, NULL, NULL, NULL };
        sws_context = sws_getContext(
                width, height, AV_PIX_FMT_RGBA,
                width, height, AV_PIX_FMT_YUV420P,
                SWS_BICUBIC, 0, 0, 0);

        sws_scale(sws_context, (const uint8_t * const *)&rgb_data, in_linesize, 0,
                height, frame->data, frame->linesize);
    }

    rgb_data is the data I got after editing the inputs. Again, this works fine and I don't think there are any errors here.


    I'm not sure where the error is and since the code is huge I can't provide a replicable example. I'm just looking for someone to point me to the right direction.


    Running the cloud's output in mplayer yields this result (this is when the video has the right size but is 0 seconds long, the most common case):


    MPlayer 1.4 (Debian), built with gcc-11 (C) 2000-2019 MPlayer Team
    do_connect: could not connect to socket
    connect: No such file or directory
    Failed to open LIRC support. You will not be able to use your remote control.

    Playing /media/c36c2633-d4ee-4d37-825f-88ae54b86100.
    libavformat version 58.76.100 (external)
    libavformat file format detected.
    [mov,mp4,m4a,3gp,3g2,mj2 @ 0x7f2cba1168e0]moov atom not found
    LAVF_header: av_open_input_stream() failed
    libavformat file format detected.
    [mov,mp4,m4a,3gp,3g2,mj2 @ 0x7f2cba1168e0]moov atom not found
    LAVF_header: av_open_input_stream() failed
    RAWDV file format detected.
    VIDEO:  [DVSD]  720x480  24bpp  29.970 fps    0.0 kbps ( 0.0 kbyte/s)
    X11 error: BadMatch (invalid parameter attributes)
    Failed to open VDPAU backend libvdpau_nvidia.so: cannot open shared object file: No such file or directory
    [vdpau] Error when calling vdp_device_create_x11: 1
    ==========================================================================
    Opening video decoder: [ffmpeg] FFmpeg's libavcodec codec family
    libavcodec version 58.134.100 (external)
    [dvvideo @ 0x7f2cb987a380]Requested frame threading with a custom get_buffer2() implementation which is not marked as thread safe. This is not supported anymore, make your callback thread-safe.
    Selected video codec: [ffdv] vfm: ffmpeg (FFmpeg DV)
    ==========================================================================
    Load subtitles in /media/
    ==========================================================================
    Opening audio decoder: [libdv] Raw DV Audio Decoder
    Unknown/missing audio format -> no sound
    ADecoder init failed :(
    Opening audio decoder: [ffmpeg] FFmpeg/libavcodec audio decoders
    [dvaudio @ 0x7f2cb987a380]Decoder requires channel count but channels not set
    Could not open codec.
    ADecoder init failed :(
    ADecoder init failed :(
    Cannot find codec for audio format 0x56444152.
    Audio: no sound
    Starting playback...
    [dvvideo @ 0x7f2cb987a380]could not find dv frame profile
    Error while decoding frame!
    [dvvideo @ 0x7f2cb987a380]could not find dv frame profile
    Error while decoding frame!
    V:   0.0   2/  2 ??% ??% ??,?% 0 0

    Exiting... (End of file)


    Edit: Since the code runs on a VM, I'm using xvfb-run to start my application, but again, even with xvfb-run everything works completely fine when not encoding to the cloud.
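
    The repeated "moov atom not found" lines in the mplayer log are typical of an MP4 whose moov box was never written, which happens when the muxer cannot seek back on the output (gcsfuse mounts and streamed uploads are usually not seekable). A possible direction to investigate (a sketch, not a confirmed fix; output_format_context stands in for the muxing.c AVFormatContext) is to request a fragmented MP4 layout before avformat_write_header():

    ```cpp
    // Sketch: fragmented MP4 writes an empty moov up front and appends moof
    // fragments, so the muxer never has to seek back on a non-seekable output.
    AVDictionary *mux_opts = nullptr;
    av_dict_set(&mux_opts, "movflags", "frag_keyframe+empty_moov", 0);
    int ret = avformat_write_header(output_format_context, &mux_opts);
    av_dict_free(&mux_opts);
    ```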
