
Other articles (23)

  • Support for all media types

    10 April 2011

    Unlike many programs and other modern document-sharing platforms, MediaSPIP aims to handle as many different document formats as possible, whether images (png, gif, jpg, bmp and others...), audio (MP3, Ogg, Wav and others...), video (Avi, MP4, Ogv, mpg, mov, wmv and others...) or textual content, code and other data (OpenOffice, Microsoft Office (spreadsheet, presentation), web (html, css), LaTeX, Google Earth) (...)

  • Other interesting software

    12 April 2011, by

    We don't claim to be the only ones doing what we do... and we certainly don't claim to be the best either... We just try to do what we do well, and better and better...
    The following list covers software that more or less aims to do what MediaSPIP does, or that MediaSPIP more or less tries to match; the distinction hardly matters...
    We don't know them and haven't tried them, but you may want to take a look.
    Videopress
    Website: (...)

  • Supporting all media types

    13 April 2011, by

    Unlike most software and media-sharing platforms, MediaSPIP aims to manage as many different media types as possible. The following are just a few examples from an ever-expanding list of supported formats: images: png, gif, jpg, bmp and more; audio: MP3, Ogg, Wav and more; video: AVI, MP4, OGV, mpg, mov, wmv and more; text, code and other data: OpenOffice, Microsoft Office (Word, PowerPoint, Excel), web (html, CSS), LaTeX, Google Earth and (...)

On other sites (4425)

  • ffmpeg: Opus-encoded sound in webm does not work with ffplay or YouTube, only VLC [on hold]

    2 August 2017, by Mockarutan

    I'm having trouble getting Opus-encoded sound in the webm container to work. I'm using libopus in ffmpeg.

    The file works in VLC, but not in ffplay or on YouTube. If I take the same raw wav data and convert it to Opus/webm with the pre-compiled ffmpeg.exe, the result works in VLC, ffplay and YouTube.
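
    For reference, the pre-compiled-binary conversion I mean is just a plain invocation along these lines (the flags here are illustrative, not my exact command):

        ffmpeg -i input.wav -c:a libopus -b:a 192k output.webm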

    So ffmpeg can obviously do it correctly; I must be doing something wrong in my code.

    The file my code produces: https://drive.google.com/file/d/0B16rIXjPXJCqcU5HVllIYW1iODg/view?usp=sharing

    Edit, more details that I forgot in my frustration: the file can be opened by ffplay and uploaded to YouTube (when I interleave it with VP9 video), but the sound is just "ticks"; example: https://www.youtube.com/watch?v=j_ShBbuizeo&feature=youtu.be

    I have read through all the example code I know of from ffmpeg, but all of it uses the old API rather than the send/receive API, so a big part of it no longer applies. This code works with every other codec I've tested, including H.264+AAC in mp4, VP8+Opus in ogg and raw PCM F32LE in wav. I would have gone with VP8+Opus in ogg if the license were as straightforward as the webm license.
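
    For reference, my mental model of the send/receive pattern is the minimal sketch below (function name and structure are illustrative; my actual loop is further down):

    int encode_and_write(AVCodecContext *enc, AVFrame *frame,
                         AVFormatContext *fmt, AVStream *st)
    {
        /* frame == NULL puts the encoder into flush mode */
        int ret = avcodec_send_frame(enc, frame);
        if (ret < 0)
            return ret;

        AVPacket pkt;
        av_init_packet(&pkt);
        pkt.data = NULL;
        pkt.size = 0;

        /* drain every packet the encoder has ready */
        while ((ret = avcodec_receive_packet(enc, &pkt)) == 0) {
            pkt.stream_index = st->index;
            av_packet_rescale_ts(&pkt, enc->time_base, st->time_base);
            ret = av_interleaved_write_frame(fmt, &pkt);
            av_packet_unref(&pkt);
            if (ret < 0)
                return ret;
        }

        /* EAGAIN: encoder wants more input; EOF: fully flushed */
        return (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF) ? 0 : ret;
    }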

    I've looked through the source of the ffmpeg.exe command-line tool and copied everything applicable into my code base.

    (Edit 3: reduced the code as much as I can)
    Here is my code: https://pastebin.com/HTuc0g8K

    Setup:

    int initialize(int sample_rate, int per_frame_audio_samples, int audio_bitrate, const char *filename)
       {
           int ret;

           avcodec_register_all();
           av_register_all();

           ret = avformat_alloc_output_context2(&outctx, NULL, "webm", filename);

           if (ret < 0)
               return ret;

           // av_register_all() above already registers every codec,
           // so a separate avcodec_register() call is not needed here
           aud_codec = avcodec_find_encoder(aud_codec_id);

           if (!aud_codec)
               return -1;

           // Setup Audio Stream

           aud_codec_context = avcodec_alloc_context3(aud_codec);
           if (!aud_codec_context)
               return -1;

           /* select other audio parameters supported by the encoder */
           aud_codec_context->bit_rate = audio_bitrate;
           aud_codec_context->sample_rate = sample_rate;
           aud_codec_context->sample_fmt = sample_fmt;
           aud_codec_context->channel_layout = AV_CH_LAYOUT_STEREO;
           aud_codec_context->channels = av_get_channel_layout_nb_channels(aud_codec_context->channel_layout);

           aud_codec_context->codec = aud_codec;
           aud_codec_context->codec_id = aud_codec_id;


           AVRational time_base;
           time_base.num = per_frame_audio_samples;
           time_base.den = aud_codec_context->sample_rate;
           aud_codec_context->time_base = time_base;

           ret = avcodec_open2(aud_codec_context, aud_codec, NULL);

           if (ret < 0)
               return ret;

           outctx->audio_codec = aud_codec;
           outctx->audio_codec_id = aud_codec_id;

           audio_st = avformat_new_stream(outctx, aud_codec);

           avcodec_parameters_from_context(audio_st->codecpar, aud_codec_context);

           conv_time_base.num = aud_codec_context->frame_size;
           conv_time_base.den = aud_codec_context->sample_rate;

           // Setup audio frame
           aud_frame = av_frame_alloc();
           aud_frame->nb_samples = aud_codec_context->frame_size;
           aud_frame->format = aud_codec_context->sample_fmt;
           aud_frame->channel_layout = aud_codec_context->channel_layout;
           aud_frame->sample_rate = aud_codec_context->sample_rate;

           int buffer_size;
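           // frame_size == 0 means the encoder does not fix a frame size;
           // fall back to the input chunk size (2 channels * 4 bytes per float)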
           if (aud_codec_context->frame_size == 0)
           {
               buffer_size = per_frame_audio_samples * 2 * 4;
               aud_frame->nb_samples = per_frame_audio_samples;
           }
           else
           {
               buffer_size = av_samples_get_buffer_size(NULL, aud_codec_context->channels, aud_codec_context->frame_size,
                   aud_codec_context->sample_fmt, 0);
           }

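           // note: av_frame_get_buffer()'s second argument is the buffer
           // alignment, not a byte size (0 would let FFmpeg pick a default)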
           if (av_sample_fmt_is_planar(sample_fmt))
               ret = av_frame_get_buffer(aud_frame, buffer_size / 2);
           else
               ret = av_frame_get_buffer(aud_frame, buffer_size);

           if (!aud_frame || ret < 0)
               return ret;

           // Setup audio resampler

           audio_swr_ctx = swr_alloc();
           if (!audio_swr_ctx)
               return -1;

           /* set options */
           av_opt_set_int(audio_swr_ctx, "in_channel_layout", aud_codec_context->channel_layout, 0);
           av_opt_set_int(audio_swr_ctx, "in_sample_rate", sample_rate, 0);
           av_opt_set_int(audio_swr_ctx, "in_frame_size", per_frame_audio_samples, 0);
           av_opt_set_sample_fmt(audio_swr_ctx, "in_sample_fmt", AV_SAMPLE_FMT_FLT, 0);

           av_opt_set_int(audio_swr_ctx, "out_channel_layout", aud_codec_context->channel_layout, 0);
           av_opt_set_int(audio_swr_ctx, "out_sample_rate", aud_codec_context->sample_rate, 0);
           av_opt_set_int(audio_swr_ctx, "out_frame_size", aud_codec_context->frame_size, 0);
           av_opt_set_sample_fmt(audio_swr_ctx, "out_sample_fmt", aud_codec_context->sample_fmt, 0);

           /* initialize the resampling context */
           if ((ret = swr_init(audio_swr_ctx)) < 0)
           {
               return ret;
           }

           dst_rate = aud_codec_context->sample_rate;
           src_rate = sample_rate;

           src_nb_samples = per_frame_audio_samples;
           dst_nb_samples = aud_codec_context->frame_size;

           max_dst_nb_samples = av_rescale_rnd(src_nb_samples, dst_rate, src_rate, AV_ROUND_UP);

           dst_nb_channels = av_get_channel_layout_nb_channels(aud_codec_context->channel_layout);

           ret = av_samples_alloc_array_and_samples(&dst_data, &dst_linesize, dst_nb_channels, dst_nb_samples, sample_fmt, 0);

           aud_frame_counter = 0;

           if (ret < 0)
               return ret;

           av_dump_format(outctx, 0, filename, 1);

           if (!(outctx->oformat->flags & AVFMT_NOFILE))
           {
               ret = avio_open(&outctx->pb, filename, AVIO_FLAG_WRITE);
               if (ret < 0)
               {
                   return ret;
               }
           }

           ret = avformat_write_header(outctx, NULL);
           if (ret < 0)
               return ret;

           return 0;
       }

    Encoding and ending:

    int process_encode_loop(AVFormatContext *local_outctx, AVCodecContext *codec_context, AVStream *stream, AVRational time_base, bool flush)
       {
           int ret;

           AVPacket pkt;
           av_init_packet(&pkt);
           pkt.data = NULL;
           pkt.size = 0;

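           // write out whatever the encoder produces; when flushing,
           // keep receiving until it signals AVERROR_EOF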
           while (true)
           {
               ret = avcodec_receive_packet(codec_context, &pkt);
               if (!ret)
               {
                   pkt.stream_index = stream->index;
                   av_packet_rescale_ts(&pkt, time_base, stream->time_base);
                   av_interleaved_write_frame(local_outctx, &pkt);

                   av_packet_unref(&pkt);
               }

               if (ret == AVERROR(EAGAIN))
                   break;
               else if (ret == AVERROR_EOF)
                   break;
               else if (ret < 0)
                   return ret;
               else if (flush == false)
                   break;
           }

           return 0;
       }

       int write_audio_frame(float_t *aud_sample)
       {
           int ret;
        /* grow the destination buffer if the resampler would need more room
           (freeing dst_data, not the frame's buffer) */
        if (dst_nb_samples > max_dst_nb_samples)
        {
            av_freep(&dst_data[0]);
            ret = av_samples_alloc(dst_data, &dst_linesize, dst_nb_channels, dst_nb_samples, sample_fmt, 1);
            if (ret < 0)
                return ret;

            max_dst_nb_samples = dst_nb_samples;
        }

           ret = swr_convert(audio_swr_ctx, dst_data, dst_nb_samples, (const uint8_t **)&aud_sample, src_nb_samples);
           if (ret < 0)
           {
               return ret;
           }

           aud_frame->data[0] = (uint8_t*)dst_data[0];
           aud_frame->extended_data[0] = (uint8_t*)dst_data[0];

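        /* pts is counted in encoder frames; conv_time_base
           (frame_size / sample_rate) converts it to the stream
           time base when the packet is written */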
           aud_frame->pts = aud_frame_counter++;

        ret = avcodec_send_frame(aud_codec_context, aud_frame);
        if (ret < 0)
            return ret;

        ret = process_encode_loop(outctx, aud_codec_context, audio_st, conv_time_base, false);

           if (ret < 0)
               return ret;

           return 0;
       }

       int finish_audio_encoding()
       {
           int ret = avcodec_send_frame(aud_codec_context, NULL);
           if (ret < 0)
               return ret;

           ret = process_encode_loop(outctx, aud_codec_context, audio_st, conv_time_base, true);
           if (ret < 0)
               return ret;

           av_write_trailer(outctx);

           return ret;
       }

    Main:

    void fill_samples(float_t *dst, int nb_samples, int nb_channels, int sample_rate, float_t *t)
       {
           int i, j;
           float_t tincr = 1.0 / sample_rate;
           const float_t c = 2 * M_PI * 440.0;
           /* generate sin tone with 440Hz frequency and duplicated channels */
           for (i = 0; i < nb_samples; i++) {
               *dst = sin(c * *t);
               for (j = 1; j < nb_channels; j++)
                   dst[j] = dst[0];
               dst += nb_channels;
               *t += tincr;
           }
       }

       int main()
       {
           int frame_rate = 30;
           int sec = 12;
           int bit_rate = 192000;
           float t = 0;

           int src_samples_linesize;
           int src_nb_samples = 1024;
           int src_channels = 2;
           int sample_rate = 48000;

           uint8_t **src_data = NULL;

           int ret;

           initialize(sample_rate, src_nb_samples, bit_rate, "sound_test.webm");

           ret = av_samples_alloc_array_and_samples(&src_data, &src_samples_linesize, src_channels,
               src_nb_samples, AV_SAMPLE_FMT_FLT, 0);

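        // note: frame_rate * sec = 360 chunks of 1024 samples at 48 kHz,
        // i.e. roughly 7.7 seconds of audio rather than 12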
           for (size_t i = 0; i < frame_rate * sec; i++)
           {
               fill_samples((float *)src_data[0], src_nb_samples, src_channels, sample_rate, &t);
               write_audio_frame((float *)src_data[0]);
           }
           finish_audio_encoding();

           cleanup();

           return 0;
       }

    Edit 2: this code reproduces the issue exactly and is fully self-contained, provided you have the ffmpeg 3.3.x libraries. I've tried 3.3.1 and 3.3.2 with the same results.

    So what could I be missing? I don't think anything is wrong with the sample rate or the other parameters, or it would not work in VLC or in an ogg file. I think the audio stream itself is correct; it is some part of the header, or how the file is formatted, that is wrong (see the EBML inspection further down).

    As explained earlier, the licensing of VP9+Opus in webm is why I have these specific requirements. And the exact problem is that I want the audio stream I produce to work when I upload it to YouTube.

    Any suggestion is appreciated, thanks in advance!

    Some other things I’ve tried :

    I've looked at the header with the "MediaInfo" tool built into MKVToolNix:
    https://i.gyazo.com/3b29b41629a28bd526bf7637ce3f2601.png
    It all looks fine to me.

    I've also inspected the raw EBML file with EBML-Viewer (https://code.google.com/archive/p/ebml-viewer/), and there I can see some differences between the files:

    My file : https://i.gyazo.com/6fa8c540a2698a8a4d3421d363aede0a.png
    File produced with ffmpeg.exe : https://i.gyazo.com/04d60e64ff3c3040ea83e98cdf507530.png

    In my file it's "Cluster" -> "BlockGroup" -> "Block", "?"
    In the other it's just "Cluster" -> "SimpleBlock"
    And the webm spec says both are supported (https://www.webmproject.org/docs/container/)

    But I don't know much about these specific details; I'm just looking for anything.

  • FFmpeg Auto Level, Auto Color, etc., similar to YouTube's Auto-Fix

    22 June 2019, by Umer

    I used to use YouTube's Auto-Fix, which presumably auto-fixed levels, color and contrast and added a bit of vibrance (the result was almost always slightly oversaturated, which I kind of liked).

    I am looking for an alternative in ffmpeg. I tried using -vf pp=al, but it only lightens the video.
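
    For context, the full command shape I tried was simply this (input/output names are illustrative):

        ffmpeg -i input.mp4 -vf pp=al output.mp4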

    Any ideas?

    P.S. I can do this in Premiere/After Effects, but I am looking for an ffmpeg solution.

  • How can I stream a canvas generated in Node.js through ffmpeg to YouTube or any other RTMP server?

    10 October 2020, by DDC

    I wanted to generate some images in Node.js, compile them into a video and stream them to YouTube. To generate the images I'm using the node-canvas module. This sounds simple enough, but I wanted to generate the images continuously and stream the result in real time. I'm very new to this whole thing, and what I was thinking of doing, after reading a bunch of resources on the internet, was:

    1. Open ffmpeg with spawn('ffmpeg', ...args), setting the output to the destination RTMP server
    2. Generate the image in the canvas
    3. Convert the content of the canvas to a buffer and write it to the ffmpeg process through stdin
    4. Enjoy the result on YouTube


    But it's not as simple as that, is it ? I saw people sharing their code involving client-side JS running on the browser, but i wanted it to be a Node app so that i could run it from a remote VPS.
Is there a way for me to do this without using something like p5 in my browser and capturing the window to restream it ?
Is my thought process even remotely adequate ? For now i don't really care about performance/resources usage. Thanks in advance.

    


    EDIT :

    


    I worked on it for a bit, and i couldn't get it to work...
This is my code :

    


    const { spawn } = require('child_process');
    const { createCanvas } = require('canvas');

    const canvas = createCanvas(1920, 1080);
    const ctx = canvas.getContext('2d');

    // spawn ffmpeg reading PNG images from stdin and pushing an FLV stream
    // to the RTMP ingest point
    const ffmpeg = spawn("ffmpeg",
        ["-re", "-f", "png_pipe", "-vcodec", "png", "-i", "pipe:0",
         "-vcodec", "h264", "-re", "-f", "flv",
         "rtmp://a.rtmp.youtube.com/live2/key-i-think"],
        { stdio: 'pipe' });

    const randomColor = (depth) => Math.floor(Math.random() * depth);
    const random = (min, max) => (Math.random() * (max - min)) + min;

    let i = 0;
    let drawSomething = function () {
        // draw one random line, then pipe the whole canvas to ffmpeg as a PNG
        ctx.strokeStyle = `rgb(${randomColor(255)}, ${randomColor(255)}, ${randomColor(255)})`;
        let x1 = random(0, canvas.width);
        let x2 = random(0, canvas.width);
        let y1 = random(0, canvas.height);
        let y2 = random(0, canvas.height);
        ctx.moveTo(x1, y1);
        ctx.lineTo(x2, y2);
        ctx.stroke();

        let out = canvas.toBuffer();
        ffmpeg.stdin.write(out);
        i++;
        // stop after 30 frames
        if (i >= 30) {
            ffmpeg.stdin.end();
            clearInterval(int);
        }
    };

    drawSomething();
    let int = setInterval(drawSomething, 1000);

    I'm not getting any errors, but I'm not getting any video data from it either. I have set up an RTMP server that I can connect to and watch the stream with VLC, but no video data arrives. Am I doing something wrong? I looked around for a while and can't seem to find anyone who has tried this, so I don't really have a clue...

    EDIT 2:
    Apparently I was on the right track, but my approach only gave me about 2 seconds of "good" video before it became blocky and messy. I think that, most likely, my method of generating images is just too slow. I'll try some GPU-accelerated code to generate the images instead of using the canvas, which means I'll be doing fractals all the time, since I don't know how to do anything else with that. A bigger buffer in ffmpeg might help too.