
Other articles (60)

  • Authorizations overridden by plugins

    27 April 2010, by Mediaspip core

    autoriser_auteur_modifier() so that visitors are able to edit their own information on the authors page

  • Publishing on MédiaSpip

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, if your MédiaSpip installation is at version 0.2 or higher. If needed, contact the administrator of your MédiaSpip to find out.

  • Enhancing its visual appearance

    10 April 2011

    MediaSPIP is based on a system of themes and templates. Templates define where information is placed on the page, defining a specific use of the platform, while themes provide the overall graphic design.
    Anyone can propose a new graphic theme or template and make it available to the community.

On other sites (10517)

  • ffmpeg background worker runs in debug but not in application

    2 November 2019, by Purgitoria

    My application has a function that takes captured images and uses an FFmpeg background worker to stitch them into a time-lapse video. The GUI has some simple options for video quality, the source folder, and the output file. An older version of my application written in VB.NET worked without issue, but I am rewriting it in C# because it supports additional capture and filter capability in the image processing, and I am having real trouble figuring out what is wrong with this function.

    I have tried relocating FFmpeg to different locations in case it was a permissions issue, but that had no effect. I also tried to put the function in a "try" with a message box to output any exceptions, but I got different errors that prevented me from compiling the code. When I run the application from within the VS 2015 debugger, the function works just fine and creates a video from a collection of still images, but when I build and install the application it does not work at all, and I cannot see what is causing it to fail. I used ffmpeg's -report option to output a log of what happens in the background worker: in debug it creates this log, but from the installed application it does not, so I presume ffmpeg is not even running and the function skips straight to the completed stage.

    Function startConversion()

       CheckForIllegalCrossThreadCalls = False
       Dim quality As Integer = trbQuality.Value
       Dim input As String = tbFolderOpen.Text
       Dim output As String = tbFolderSave.Text
       Dim exepath As String = Application.StartupPath & "\bin\ffmpeg.exe"
       input = input & "\SCAImg_%1d.bmp"
       input = Chr(34) & input & Chr(34)
       output = Chr(34) & output & Chr(34)

       Dim sr As StreamReader
       Dim ffmpegOutput As String

       ' all parameters required to run the process
       Dim proc As New Process()
       proc.StartInfo.UseShellExecute = False
       proc.StartInfo.CreateNoWindow = True
       proc.StartInfo.RedirectStandardError = True
       proc.StartInfo.FileName = exepath
       proc.StartInfo.Arguments = "-start_number 0 -pattern_type sequence -framerate 10 -i " & input & " -r 10 -c:v libx264 -preset slow -crf " & quality & " " & output
       proc.Start()

       lblInfo.Text = "Conversion in progress... Please wait..."
       sr = proc.StandardError ' ffmpeg writes its progress to standard error
       btnMakeVideo.Enabled = False
       Do
           ffmpegOutput = sr.ReadLine()
           tbProgress.Text = ffmpegOutput
       Loop Until proc.HasExited AndAlso String.IsNullOrEmpty(ffmpegOutput)

       tbProgress.Text = "Finished!"
       lblInfo.Text = "Completed!"
       MsgBox("Completed!", MsgBoxStyle.Exclamation)
       btnMakeVideo.Enabled = True
       Return 0

    End Function

    I checked the application folder and it does contain a subfolder \bin with ffmpeg.exe inside it, so I then used cmd to run the installed ffmpeg from the application folder, and it seemed to be throwing permissions errors:

    Failed to open report "ffmpeg-20191101-191452.log": Permission denied
    Failed to set value '1' for option 'report': Permission denied
    Error parsing global options: Permission denied

    This certainly seems like a permissions problem, but where, I am not sure. I did not run into this error when using VB.NET, so I am wondering where I am going wrong now. I thought perhaps it was just a write permission in the application folder, so I then removed the -report option and ran ffmpeg again using cmd from my application folder, and it then gave the error

    C:\Users\CEAstro\Pictures\AnytimeCap: Permission denied

    Am I missing something really obvious in my code, or is there something more fundamental wrong with my setup?

    I should also add that I tried running ffmpeg via cmd from a copy that was manually placed elsewhere (I used the same file), and that actually worked. For some reason, it seems like it just will not work from wherever my application installs it.
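    A plausible reading of the symptoms above is that the installed application lives in a write-protected location (e.g. under Program Files), so ffmpeg can launch but cannot write its -report log or the output file next to the executable. The sketch below (Python rather than VB.NET, with hypothetical paths) shows the same command line with a single -framerate flag, directing output to a user-writable temp folder; it only builds the argument list and does not assume ffmpeg is installed.

    ```python
    import os
    import tempfile

    def build_timelapse_cmd(ffmpeg_exe, frames_dir, out_file, crf=23, fps=10):
        """Build an ffmpeg command that stitches SCAImg_0.bmp, SCAImg_1.bmp, ...
        into an H.264 time-lapse. All paths here are illustrative placeholders."""
        pattern = os.path.join(frames_dir, "SCAImg_%1d.bmp")
        return [
            ffmpeg_exe,
            "-start_number", "0",
            "-pattern_type", "sequence",
            "-framerate", str(fps),   # one framerate flag (the question's code passed two)
            "-i", pattern,
            "-r", str(fps),
            "-c:v", "libx264",
            "-preset", "slow",
            "-crf", str(crf),
            out_file,
        ]

    # Write the video (and any -report log) somewhere the process can write,
    # e.g. the user's temp directory, never the application's install folder.
    out = os.path.join(tempfile.gettempdir(), "timelapse.mp4")
    cmd = build_timelapse_cmd(r"C:\bin\ffmpeg.exe", r"C:\frames", out)
    ```

    The list could then be passed to Process.Start (or subprocess.run in Python); the key point is only where the writable paths point, not the language used.
    
    
    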

  • How to convert a Buffer to a video with fluent-ffmpeg and write the output to a WritableStream

    30 October 2023, by isaac.g

    For some context, I need to quickly render a small video (6.5 seconds long, at 15 FPS) and send it in a Discord channel with discord.js. I don't want to write anything to disk, because that would slow things down and I just don't need the file. So far, I have been able to write the video to disk, but now I want to skip that step and send the video Buffer straight to discord.js. I was also able to output the video from ffmpeg to Discord as an audio file, but when I try to use the .mp4 format, I get a "Conversion failed!" error.

    


    I rendered the individual frames for the video using the canvas module, exported each of them as a PNG using canvas.toBuffer("image/png");, and pushed all the frames to an array. Then I combined the frames into one Buffer using Buffer.concat(), and created a ReadableStream from the nodejs stream module. I also needed to write a custom WritableStream class that implements the _write method. Here's the class:

    


    class MyWritable extends Stream.Writable {
        constructor() {
            super();
            this.buffer = Buffer.from([]);
        }

        _write(chunk, encoding, callback) {
            this.buffer = Buffer.concat([this.buffer, chunk]);
            callback();
        }
    }


    


    And here's how I implement everything using fluent-ffmpeg:

    


    const allFrames = Buffer.concat(framesArray);
const readable = Stream.Readable.from(allFrames);
const writable = new MyWritable();

const output = await (new Promise((resolve, reject) => {
    ffmpeg()
        .input(readable)
        .inputOptions([
            `-framerate 15`,
        ])
        .input("path_to_audio_file.mp3")
        .videoCodec("libx264")
        .format("mp4")
        .outputOptions([
            "-pix_fmt yuv420p"
        ])
        .duration(6.5)
        .fps(15)
        .writeToStream(writable)
        .on("finish", () => {
            // this is never reached, but it should resolve the promise with all the
            // data that was written to the WritableStream (which should be the video)
            resolve(writable.buffer);
        })
        .on("error", (err) => {
            // this is never reached either
            reject(err);
        });
}));

// then I try to send the output buffer to Discord as an attachment, but I don't ever get here anyway


    


    And here's the error I get:

    


    node:events:491
          throw er; // Unhandled 'error' event
          ^

    Error: ffmpeg exited with code 1: Conversion failed!

        at ChildProcess.<anonymous> (/Users/isaac/Documents/Github/DiscordBot/node_modules/fluent-ffmpeg/lib/processor.js:182:22)
        at ChildProcess.emit (node:events:513:28)
        at ChildProcess._handle.onexit (node:internal/child_process:293:12)
    Emitted 'error' event on FfmpegCommand instance at:
        at emitEnd (/Users/isaac/Documents/Github/DiscordBot/node_modules/fluent-ffmpeg/lib/processor.js:424:16)
        at endCB (/Users/isaac/Documents/Github/DiscordBot/node_modules/fluent-ffmpeg/lib/processor.js:544:13)
        at handleExit (/Users/isaac/Documents/Github/DiscordBot/node_modules/fluent-ffmpeg/lib/processor.js:170:11)
        at ChildProcess.<anonymous> (/Users/isaac/Documents/Github/DiscordBot/node_modules/fluent-ffmpeg/lib/processor.js:182:11)
        at ChildProcess.emit (node:events:513:28)
        at ChildProcess._handle.onexit (node:internal/child_process:293:12)


    Like I said earlier, if I change the format to an audio format, it exports perfectly fine. But I have no idea where the problem is with converting to a video.
    Thanks in advance.

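    One common cause of this exact pattern (audio formats stream fine, MP4 fails) is that the default MP4 muxer needs a seekable output to finalize its moov atom, and a pipe is not seekable; fragmented MP4 is the usual workaround. The sketch below (Python building raw ffmpeg CLI arguments, not fluent-ffmpeg; file names are placeholders) shows the flags involved; it only constructs the command and does not run ffmpeg.

    ```python
    def build_pipe_cmd(fps=15, audio="path_to_audio_file.mp3", duration=6.5):
        """ffmpeg arguments for reading concatenated PNG frames from stdin and
        writing MP4 to stdout. A pipe cannot be seeked, so the muxer must be
        told to emit fragmented MP4 instead of seeking back to write moov."""
        return [
            "ffmpeg",
            "-f", "image2pipe", "-framerate", str(fps), "-i", "pipe:0",
            "-i", audio,
            "-c:v", "libx264", "-pix_fmt", "yuv420p",
            "-t", str(duration), "-r", str(fps),
            "-movflags", "frag_keyframe+empty_moov",  # fragmented MP4: no seeking
            "-f", "mp4", "pipe:1",
        ]

    cmd = build_pipe_cmd()
    # Usage sketch (requires ffmpeg on PATH and real PNG data):
    # video = subprocess.run(cmd, input=all_frames_buffer, capture_output=True).stdout
    ```

    With fluent-ffmpeg, the same flag could presumably be passed through .outputOptions(["-movflags frag_keyframe+empty_moov"]); whether that resolves this particular "Conversion failed!" would need checking against the full ffmpeg stderr.
    
    
    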

  • ffmpeg library pcm to ac3 encoding

    16 July 2014, by Dave Camp

    I’m new to the ffmpeg library and I’m working on a custom DirectShow filter. I decided to use the ffmpeg library for the encoding of what I need to achieve. I’m a little confused by some of the parameters and the values ffmpeg expects.

    I’m currently working on the ac3 part of the custom filter.
    I’ve looked through the audio encoding example (for MP2 encoding) in the ffmpeg docs and I understand it, but I don’t understand how I should adapt it to my specific needs.

    The incoming samples are 48K samples per second, 16 bits per sample, stereo interleaved. The upstream filter gives them to me at 25 fps, so I get an incoming ’audio sample packet’ of 1920 bytes for each audio frame. I want to encode that data into an AC-3 data packet that I pass on to the next process, which I’ll be doing myself.

    But I’m unsure of the correct parameters for each component in the following code...

    The code I have so far. There are several questions in the comments at key places.

    AVCodec*         g_pCodec = nullptr;
    AVCodecContext*  g_pContext = nullptr;
    AVFrame*         g_pFrame = nullptr;
    AVPacket         g_pPacket;
    LPVOID           g_pInSampleBuffer;

    avcodec_register_all();
    g_pCodec = avcodec_find_encoder(CODEC_ID_AC3);

    // What am I describing here: the incoming sample params or the outgoing ones?
    // An educated guess is the outgoing sample params
    g_pContext = avcodec_alloc_context3(g_pCodec);
    g_pContext->bit_rate = 448000;
    g_pContext->sample_rate = 48000;
    g_pContext->channels = 2;
    g_pContext->sample_fmt = AV_SAMPLE_FMT_FLTP;
    g_pContext->channel_layout = AV_CH_LAYOUT_STEREO;

    // And this is the incoming sample params?
    g_pFrame = av_frame_alloc();
    g_pFrame->nb_samples = 1920; // What figure is the codec expecting here? 1920 / bytes_per_sample?
    g_pFrame->format = AV_SAMPLE_FMT_S16;
    g_pFrame->channel_layout = AV_CH_LAYOUT_STEREO;

    // I assume this gives me the size of a buffer that I fill with my incoming samples?
    // I get a dwSize of 15360 but my samples only come in at 1920 - does this matter?
    dwSize = av_samples_get_buffer_size(nullptr, 2, 1920, AV_SAMPLE_FMT_S16, 0);

    // Do I need to use av_malloc and copy my samples into g_pInSampleBuffer, or can I
    // supply the address of my own buffer (created outside of the libav framework)?
    g_pInSampleBuffer = (LPVOID)av_malloc(dwSize);
    avcodec_fill_audio_frame(g_pFrame, 2, AV_SAMPLE_FMT_S16, (const uint8_t*)g_pInSampleBuffer, dwSize, 0);

    // Encoding loop - samples are given to me through a DirectShow interface;
    // DSInbuffer is the buffer containing the incoming samples
    av_init_packet(&g_pPacket);
    g_pPacket.data = nullptr;
    g_pPacket.size = 0;

    int gotpacket = 0;
    int ok = avcodec_encode_audio2(g_pContext, &g_pPacket, g_pFrame, &gotpacket);
    if ((ok == 0) && gotpacket) {
        // I then copy g_pPacket.size bytes from g_pPacket.data into another
        // DirectShow interface buffer that sends the encoded sample downstream.

        av_free_packet(&g_pPacket);
    }

    Currently it crashes at the avcodec_encode_audio2 call. If I change the format parameter to AV_SAMPLE_FMT_FLTP in the avcodec_fill_audio_frame call, it doesn’t crash, but it only encodes one frame of data and I get error -22 on the next frame. The g_pPacket.size parameter is 1792 (7 * 256) after the first avcodec_encode_audio2 call.

    As I’m new to ffmpeg, I’m sure it is probably something quite straightforward that I’ve missed or misunderstood, and I’m confused about where the parameters for the incoming samples go versus those for the outgoing samples.

    This is obviously an extract from the main function that I’ve created, typed manually into the forum. Any spelling mistakes here are transcription errors; the original code compiles and runs.

    Dave.
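    As a sanity check on the numbers in the question above, here is a quick arithmetic sketch (Python; the per-frame figures come from the question itself, while the 1536-samples-per-frame figure is a fixed property of the AC-3 codec, not something stated in the question):

    ```python
    # Figures taken from the question: 48 kHz, 16-bit, stereo, delivered at 25 fps.
    sample_rate = 48000       # samples per second per channel
    video_fps = 25
    channels = 2
    bytes_per_sample = 2      # 16-bit PCM

    # Samples per channel arriving with each 25 fps packet, and the byte count
    # of that packet when interleaved stereo 16-bit.
    samples_per_packet = sample_rate // video_fps
    bytes_per_packet = samples_per_packet * channels * bytes_per_sample

    # AC-3 always encodes a fixed 1536 samples per channel per codec frame, so
    # the 25 fps packet size does not line up with the encoder's frame size:
    # input must be re-buffered into 1536-sample chunks before encoding.
    ac3_frame_size = 1536
    ```

    If this arithmetic is right, the "1920 bytes" in the question is more plausibly 1920 samples per channel (7680 bytes interleaved), and nb_samples would normally be taken from the encoder's frame_size after the codec is opened rather than hard-coded.
    
    
    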