
Media (2)


Other articles (33)

  • The SPIPmotion queue

    28 November 2010, by

    A queue stored in the database
    When it is installed, SPIPmotion creates a new table in the database named spip_spipmotion_attentes.
    This new table is made up of the following fields: id_spipmotion_attente, the unique numeric identifier of the task to be processed; id_document, the numeric identifier of the original document to be encoded; id_objet, the unique identifier of the object to which the encoded document should be attached automatically; objet, the type of object to which (...)

  • Requesting the creation of a channel

    12 March 2010, by

    Depending on the platform’s configuration, the user may have two different methods available for requesting the creation of a channel. The first is at the time of registration; the second, after registration, by filling in a request form.
    Both methods ask for the same information and work in much the same way: the prospective user must fill in a series of form fields that first of all give the administrators information about (...)

  • Contribute to documentation

    13 April 2011

    Documentation is vital to the development of improved technical capabilities.
    MediaSPIP welcomes documentation from users as well as developers, including: critiques of existing features and functions; articles contributed by developers, administrators, content producers and editors; screenshots to illustrate the above; and translations of existing documentation into other languages.
    To contribute, register to the project users’ mailing (...)

On other sites (4510)

  • ffmpeg to convert mov to flv

    27 October 2014, by jeet

    I’m trying to convert a MOV video to FLV, but I keep getting the errors shown below.
    I tried two commands; both are listed below.

    ffmpeg -y -i video.mov -deinterlace -acodec copy -r 25 -qmin 3 -qmax 6 video.flv


    FFmpeg version SVN-r16573, Copyright (c) 2000-2009 Fabrice Bellard, et al.
    configuration: --extra-cflags=-fno-common --enable-memalign-hack --enable-pthreads --enable-libmp3lame --enable-libxvid --enable-libvorbis --enable-libtheora --enable-libspeex --enable-libfaac --enable-libgsm --enable-libx264 --enable-libschroedinger --enable-avisynth --enable-swscale --enable-gpl
    libavutil     49.12. 0 / 49.12. 0
    libavcodec    52.10. 0 / 52.10. 0
    libavformat   52.23. 1 / 52.23. 1
    libavdevice   52. 1. 0 / 52. 1. 0
    libswscale     0. 6. 1 /  0. 6. 1
    built on Jan 13 2009 02:57:09, gcc: 4.2.4
    Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'vid\video.mov':
    Duration: 00:03:16.00, start: 0.000000, bitrate: 398 kb/s
    Stream #0.0(eng): Video: mpeg4, yuv420p, 800x600 [PAR 1:1 DAR 4:3], 30.00 tb(r)
    Stream #0.1(eng): Audio: pcm_u8, 8000 Hz, mono, s16, 64 kb/s
    Output #0, flv, to 'vid\video.flv':
    Stream #0.0(eng): Video: flv, yuv420p, 800x600 [PAR 1:1 DAR 4:3], q=3-6, 200 kb/s, 25.00 tb(c)
    Stream #0.1(eng): Audio: pcm_u8, 8000 Hz, mono, s16, 64 kb/s
    Stream mapping:
    Stream #0.0 -> #0.0
    Stream #0.1 -> #0.1
    [NULL @ 0x1714390]codec not compatible with flv
    Could not write header for output file #0 (incorrect codec parameters ?)

    Second command:

    ffmpeg -y -i video.mov -deinterlace -ar 44100 -r 25 -qmin 3 -qmax 6 video.flv

    Audio resampler only works with 16 bits per sample, patch welcome.

    With a newer version of ffmpeg:

    ffmpeg version N-49610-gc2dd5a1 Copyright (c) 2000-2013 the FFmpeg developers
    built on Feb  5 2013 13:20:59 with gcc 4.7.2 (GCC)
    configuration: --enable-gpl --enable-version3 --disable-w32threads --enable-avisynth --enable-bzlib --enable-fontconfig --enable-frei0r --enable-gnutls --enable-libass --enable-libbluray --enable-libcaca --enable-libfreetype --enable-libgsm --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-libopus --enable-librtmp --enable-libschroedinger --enable-libsoxr --enable-libspeex --enable-libtheora --enable-libtwolame --enable-libvo-aacenc --enable-libvo-amrwbenc --enable-libvorbis --enable-libvpx --enable-libx264 --enable-libxavs --enable-libxvid --enable-zlib
     libavutil      52. 17.101 / 52. 17.101
     libavcodec     54. 91.100 / 54. 91.100
     libavformat    54. 61.104 / 54. 61.104
     libavdevice    54.  3.103 / 54.  3.103
     libavfilter     3. 35.101 /  3. 35.101
     libswscale      2.  2.100 /  2.  2.100
     libswresample   0. 17.102 /  0. 17.102
     libpostproc    52.  2.100 / 52.  2.100
    Guessed Channel Layout for  Input Stream #0.1 : mono
    Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'vid\video.mov':
     Metadata:
       major_brand     : qt  
       minor_version   : 512
       compatible_brands: qt  
       creation_time   : 1970-01-01 00:00:00
     Duration: 00:02:50.39, start: 0.000000, bitrate: 370 kb/s
       Stream #0:0(eng): Video: mpeg4 (Simple Profile) (mp4v / 0x7634706D), yuv420p, 1366x768 [SAR 1:1 DAR 683:384], 308 kb/s, 11.50 fps, 11.50 tbr, 23 tbn, 23 tbc
       Metadata:
         creation_time   : 1970-01-01 00:00:00
         handler_name    : DataHandler
       Stream #0:1(eng): Audio: pcm_u8 (raw  / 0x20776172), 8000 Hz, mono, u8, 64 kb/s
       Metadata:
         creation_time   : 1970-01-01 00:00:00
         handler_name    : DataHandler
    [flv @ 026347a0] FLV does not support sample rate 8000, choose from (44100, 22050, 11025)
    Output #0, flv, to 'vid\video.flv':
     Metadata:
       major_brand     : qt  
       minor_version   : 512
    compatible_brands: qt  
       encoder         : Lavf54.61.104
       Stream #0:0(eng): Video: flv1 ([2][0][0][0] / 0x0002), yuv420p, 1366x768 [SAR 1:1 DAR 683:384], q=2-31, 200 kb/s, 1k tbn, 11.50 tbc
       Metadata:
         creation_time   : 1970-01-01 00:00:00
         handler_name    : DataHandler
       Stream #0:1(eng): Audio: mp3 ([2][0][0][0] / 0x0002), 8000 Hz, mono, s16p
       Metadata:
         creation_time   : 1970-01-01 00:00:00
         handler_name    : DataHandler
    Stream mapping:
     Stream #0:0 -> #0:0 (mpeg4 -> flv)
     Stream #0:1 -> #0:1 (pcm_u8 -> libmp3lame)
    Could not write header for output file #0 (incorrect codec parameters ?): Invalid data found when processing input

    One more thing:
    If I use this newer version of ffmpeg to create a video with the command below, I get a video with a very hazy display.
    It’s like a few black dots on a blank screen:

    ffmpeg -i img%d.png -i audio.wav -acodec copy output.mov

    What could be the reason for this display?

  • Tracking granular duration from an ffmpeg stream piped into Nodejs

    20 May 2020, by slifty

    The Situation

    I have a video file that I want to process in nodejs, but I want to run it through ffmpeg first in order to normalize the encoding. As each 'data' event arrives from the piped stream, I would like to know how far into the video the stream has progressed, with as close to frame-level granularity as possible.

    0.1 second granularity would be fine.

    The Code

    I am using Nodejs to invoke ffmpeg with the video file path, outputting the data to stdout:

    const ffmpegSettings = [
    '-i', './path/to/video.mp4',       // Ingest a file
    '-f', 'mpegts',                    // Specify the wrapper
    '-'                                // Output to stdout
]
const ffmpegProcess = spawn('ffmpeg', ffmpegSettings)
const readableStream = ffmpegProcess.stdout
readableStream.on('data', async (data) => {
    readableStream.pause() // Pause the stream to make sure we process in order.

    // Process the data.
    // This is where I want to know the "timestamp" for the data.

    readableStream.resume() // Restart the stream.
})

    The Question

    How can I efficiently and accurately keep track of how far into the video a given 'data' event represents? Keep in mind that a single 'data' event in this context might not even contain enough bytes to represent a full frame.

    Temporary Edit

    (I will delete this section once a conclusive answer is identified)

    Since posting this question I've done some exploration using ffprobe.

        start = () => {
        this.ffmpegProcess = spawn('ffmpeg', this.getFfmpegSettings())
        this.ffprobeProcess = spawn('ffprobe', this.getFfprobeSettings())
        this.inputStream = this.getInputStream()

        this.inputStream.on('data', (rawData) => {
            this.inputStream.pause()
            if(rawData) {
                this.ffmpegProcess.stdin.write(rawData)
            }
            this.inputStream.resume()
        })

        this.ffmpegProcess.stdout.on('data', (mpegtsData) => {
            this.ffmpegProcess.stdout.pause()
            this.ffprobeProcess.stdin.write(mpegtsData)
            console.log(this.currentPts)
            this.ffmpegProcess.stdout.resume()
        })

        this.ffprobeProcess.stdout.on('data', (probeData) => {
            this.ffprobeProcess.stdout.pause()
            const lastLine = probeData.toString().trim().split('\n').slice(-1).pop()
            const lastPacket = lastLine.split(',')
            const pts = lastPacket[4]
            console.log(`FFPROBE: ${pts}`)
            this.currentPts = pts
            this.ffprobeProcess.stdout.resume()
        })

        logger.info(`Starting ingestion from ${this.constructor.name}...`)
    }

    /**
     * Returns an ffmpeg settings array for this ingestion engine.
     *
     * @return {String[]} A list of ffmpeg command line parameters.
     */
    getFfmpegSettings = () => [
        '-loglevel', 'info',
        '-i', '-',
        '-f', 'mpegts',
        // '-vf', 'fps=fps=30,signalstats,metadata=print:key=lavfi.signalstats.YDIF:file=\'pipe\\:3\'',
        '-',
    ]

    /**
     * Returns an ffprobe settings array for this ingestion engine.
     *
     * @return {String[]} A list of ffprobe command line parameters.
     */
    getFfprobeSettings = () => [
        '-f', 'mpegts',
        '-i', '-',
        '-print_format', 'csv',
        '-show_packets',
    ]

    This pipes the ffmpeg output into ffprobe and uses that to “estimate” where in the stream the processing has gotten. It starts off wildly inaccurate, but after about 5 seconds of processed video, ffprobe and ffmpeg produce data at a similar pace.

    This is a hack, but a step towards the granularity I want. It may be that I need an mpegts parser / chunker that can run on the ffmpeg output directly in NodeJS.
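    If a direct parser does become necessary, the PTS field itself is straightforward to decode once a PES header has been located: it is a 33-bit counter on a 90 kHz clock spread over 5 bytes. The sketch below is an illustration added here, not code from the question; the function name is made up, and a real MPEG-TS chunker would still need to walk the 188-byte TS packets and their adaptation fields to find PES headers reliably.

```javascript
// Sketch: decode the PTS from an MPEG PES header. Assumes buf[i..]
// already points at a PES start code (00 00 01 <stream id>), which a
// full TS demuxer would have located inside 188-byte TS packets.
function readPesPts(buf, i) {
    if (buf[i] !== 0x00 || buf[i + 1] !== 0x00 || buf[i + 2] !== 0x01) {
        return null // not a PES start code
    }
    const flags = buf[i + 7]
    if ((flags & 0x80) === 0) return null // PTS flag not set
    const b = buf.subarray(i + 9, i + 14)
    // 33 bits spread over 5 bytes as 3 + 8 + 7 + 8 + 7 bits, with marker
    // bits in between. Multiplication (not <<) keeps values above 2^31 safe.
    const pts = ((b[0] >> 1) & 0x07) * 0x40000000
              + ((b[1] & 0xFF) << 22)
              + (((b[2] >> 1) & 0x7F) << 15)
              + ((b[3] & 0xFF) << 7)
              + ((b[4] >> 1) & 0x7F)
    return pts / 90000 // 90 kHz clock -> seconds
}
```

    Chunk boundaries still matter: a PES header can be split across two 'data' events, so unconsumed bytes must be carried over between events just as with any other incremental parser.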

    The output of the ffprobe hack above is something along the lines of:

    undefined (repeated around 100x as I assume ffprobe needs more data to start)
FFPROBE: 1.422456 // Note that this represents around 60 ffprobe messages sent at once.
FFPROBE: 1.933867
1.933867 // These lines represent mpegts data sent from ffmpeg, and the latest pts reported by ffprobe
FFPROBE: 2.388989
2.388989
FFPROBE: 2.728578
FFPROBE: 2.989811
FFPROBE: 3.146544
3.146544
FFPROBE: 3.433889
FFPROBE: 3.668989
FFPROBE: 3.802400
FFPROBE: 3.956333
FFPROBE: 4.069333
4.069333
FFPROBE: 4.426544
FFPROBE: 4.609400
FFPROBE: 4.870622
FFPROBE: 5.184089
FFPROBE: 5.337267
5.337267
FFPROBE: 5.915522
FFPROBE: 6.104700
FFPROBE: 6.333478
6.333478
FFPROBE: 6.571833
FFPROBE: 6.705300
6.705300
FFPROBE: 6.738667
6.738667
FFPROBE: 6.777567
FFPROBE: 6.772033
6.772033
FFPROBE: 6.805400
6.805400
FFPROBE: 6.829811
FFPROBE: 6.838767
6.838767
FFPROBE: 6.872133
6.872133
FFPROBE: 6.882056
FFPROBE: 6.905500
6.905500
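    One incidental weakness of the hack above: `probeData.toString()` can end mid-line, so the last line split out of a chunk may be a half-written packet record. Below is a sketch of explicit line buffering that avoids this; the helper name is made up here, while `-show_entries packet=pts_time` and `-of csv=p=0` are standard ffprobe options that would slim the output to bare pts_time values, one per line:

```javascript
// Sketch: incremental, line-buffered parsing of ffprobe packet output,
// assuming ffprobe is run with: -show_entries packet=pts_time -of csv=p=0
// so every complete line is a bare pts_time value in seconds.
// parsePtsChunk() takes a raw stdout chunk plus any leftover partial line
// from the previous chunk; it returns the latest complete pts seen in this
// chunk (or null) and the new leftover to carry into the next call.
function parsePtsChunk(chunk, pending) {
    const lines = (pending + chunk.toString()).split('\n')
    const rest = lines.pop() // partial trailing line, kept for next time
    let latestPts = null
    for (const line of lines) {
        const pts = parseFloat(line)
        if (!Number.isNaN(pts)) latestPts = pts
    }
    return { latestPts, rest }
}
```

    In the engine above, the ffprobe 'data' handler would keep `rest` between calls and update `this.currentPts` only when `latestPts` is non-null.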

  • Write wave audio samples to vorbis by FFMPEG

    3 April 2014, by Ankush

    I am recording the screen, mic & speakers to a WebM video. The video part is working fine, but when I add audio samples recorded from the mic to the video, I can’t hear any sound in the generated video. I am adding 3 screenshots per second, and after each screenshot I add the audio buffer gathered by NAudio. Any clues as to what I am doing wrong?

    C# part:

    static void wmic_DataAvailable(object sender, NAudio.Wave.WaveInEventArgs e)
    {
        if (e.BytesRecorded > 0)
            vencoder.AddAudioSampleBuffer(e.Buffer, e.BytesRecorded);
    }

    C++ part:

    AVStream * VideoEncoder::AddAudioStream(AVFormatContext *pContext, AVCodecID codec_id)
    {
     AVCodecContext *pCodecCxt = NULL;
     AVStream *pStream = NULL;

     // Try create stream.
     pStream = av_new_stream(pContext, 1);
     if (!pStream)
     {
       printf("Cannot add new audio stream\n");
       return NULL;
     }

     // Codec.
     pCodecCxt = pStream->codec;
     pCodecCxt->codec_id = codec_id;
     pCodecCxt->codec_type = AVMEDIA_TYPE_AUDIO;
     // Set format
     pCodecCxt->bit_rate    = 128000;
     pCodecCxt->sample_rate = 44100;
     pCodecCxt->channels    = 1;
     pCodecCxt->sample_fmt  = AV_SAMPLE_FMT_S16P;//AV_SAMPLE_FMT_S16; //libvorbis supports only AV_SAMPLE_FMT_FLTP

     nSizeAudioEncodeBuffer = 4 * MAX_AUDIO_PACKET_SIZE;
     if (pAudioEncodeBuffer == NULL)
     {      
       pAudioEncodeBuffer = (uint8_t * )av_malloc(nSizeAudioEncodeBuffer);
     }

     // Some formats want stream headers to be separate.
     if(pContext->oformat->flags & AVFMT_GLOBALHEADER)
     {
       pCodecCxt->flags |= CODEC_FLAG_GLOBAL_HEADER;
     }

     return pStream;
    }

    bool VideoEncoder::OpenAudio(AVFormatContext *pContext, AVStream *pStream)
    {
     AVCodecContext *pCodecCxt = NULL;
     AVCodec *pCodec = NULL;
     pCodecCxt = pStream->codec;

     // Find the audio encoder.
     pCodec = avcodec_find_encoder(pCodecCxt->codec_id);
     if (!pCodec)
     {
       printf("Cannot open audio codec\n");
       return false;
     }

     // Open it.
     if (avcodec_open2(pCodecCxt, pCodec,NULL) < 0)
     {
       printf("Cannot open audio codec\n");
       return false;
     }

     if (pCodecCxt->frame_size <= 1)
     {
       // Ugly hack for PCM codecs (will be removed ASAP with new PCM
       // support to compute the input frame size in samples).
       audioInputSampleSize = nSizeAudioEncodeBuffer / pCodecCxt->channels;
       switch (pStream->codec->codec_id)
       {
         case CODEC_ID_PCM_S16LE:
         case CODEC_ID_PCM_S16BE:
         case CODEC_ID_PCM_U16LE:
         case CODEC_ID_PCM_U16BE:
           audioInputSampleSize >>= 1;
           break;
         default:
           break;
       }
       pCodecCxt->frame_size = audioInputSampleSize;
     }
     else
     {
       audioInputSampleSize = pCodecCxt->frame_size;
     }

     return true;
    }

    bool VideoEncoder::AddAudioSample(AVFormatContext *pFormatContext, AVStream *pStream,
                                           const char* soundBuffer, int soundBufferSize)
    {
     AVCodecContext *pCodecCxt;    
     bool res = true;  

     pCodecCxt       = pStream->codec;
     memcpy(audioBuffer + nAudioBufferSizeCurrent, soundBuffer, soundBufferSize);
     nAudioBufferSizeCurrent += soundBufferSize;

     BYTE * pSoundBuffer = (BYTE *)audioBuffer;
     int nCurrentSize    = nAudioBufferSizeCurrent;

     // Size of packet on bytes.
     // FORMAT s16
     DWORD packSizeInSize = 2 * audioInputSampleSize;

     while(nCurrentSize >= packSizeInSize)
     {
       AVPacket pkt;
       av_init_packet(&pkt);

       pkt.size = avcodec_encode_audio(pCodecCxt, pAudioEncodeBuffer,
         nSizeAudioEncodeBuffer, (const short *)pSoundBuffer);

       if (pCodecCxt->coded_frame && pCodecCxt->coded_frame->pts != AV_NOPTS_VALUE)
       {
         pkt.pts = av_rescale_q(pCodecCxt->coded_frame->pts, pCodecCxt->time_base, pStream->time_base);
       }

       pkt.flags |= AV_PKT_FLAG_KEY;
       pkt.stream_index = pStream->index;
       pkt.data = pAudioEncodeBuffer;

       // Write the compressed frame in the media file.
       if (av_interleaved_write_frame(pFormatContext, &pkt) != 0)
       {
         res = false;
         break;
       }

       nCurrentSize -= packSizeInSize;  
       pSoundBuffer += packSizeInSize;      
     }

     // save excess
     memcpy(audioBuffer, audioBuffer + nAudioBufferSizeCurrent - nCurrentSize, nCurrentSize);
     nAudioBufferSizeCurrent = nCurrentSize;

     return res;
    }

    char* ByteArray_to_charptr(array<System::Byte>^ byteArray)
    {
       pin_ptr<System::Byte> p = &byteArray[0];
       unsigned char* pby = p;
       char* pch = reinterpret_cast<char*>(pby);
       return pch;
       // use it...
    }

    void VideoEncoder::AddAudioSampleBuffer(array<System::Byte>^ byteArray, int^ buffersize)
    {
       char* buffer = ByteArray_to_charptr(byteArray);

       if (buffer && (int)buffersize > 0)
       {
           AddAudioSample(pFormatContext, pVideoStream, buffer, (int)buffersize);
       }
    }

    Q2. How can I mix the mic & speaker samples in real time so that I hear both in one audio stream?