Advanced search

Media (1)

Keyword: - Tags -/artwork

Other articles (102)

  • Multilang: improving the interface for multilingual blocks

    18 February 2011, by

    Multilang is an additional plugin that is not enabled by default when MediaSPIP is initialized.
    After it is activated, a preconfiguration is set up automatically by MediaSPIP init so that the new feature is immediately operational. It is therefore not necessary to go through a configuration step for this.

  • Sites built with MediaSPIP

    2 May 2011, by

    This page presents some of the sites running MediaSPIP.
    You can of course add your own using the form at the bottom of the page.

  • APPENDIX: The plugins used specifically for the farm

    5 March 2010, by

    The central/master site of the farm needs several additional plugins, compared with the channel sites, in order to work properly: the Gestion de la mutualisation plugin; the inscription3 plugin, to manage registrations and requests to create a shared-hosting instance as soon as users sign up; the verifier plugin, which provides a field-validation API (used by inscription3); the champs extras v2 plugin, required by inscription3 (...)

On other websites (8974)

  • Wrap audio data of the pcm_alaw type into an MKA audio file using the ffmpeg API

    19 September 2020, by bbdd

    Imagine that in my project I receive RTP packets with payload type 8, in order to later save this payload as the Nth part of an audio track. I extract the payload from each RTP packet and save it to a temporary buffer:

    


    ...

while ((rtp = receiveRtpPackets()).withoutErrors()) {
   payloadData.push(rtp.getPayloadData());
}

audioGenerator.setPayloadData(payloadData);
audioGenerator.recordToFile();

...


    


    After filling a temporary buffer of a certain size with this payload, I process the buffer: I extract the whole payload and encode it with ffmpeg so that it can be saved to an audio file in Matroska format. But I have a problem. Since the RTP payload is type 8, I have to save raw pcm_alaw audio data into the mka audio format. When I save the raw pcm_alaw data to an audio file, I get these messages from the library:

    


    ...

[libopus @ 0x18eff60] Queue input is backward in time
[libopus @ 0x18eff60] Queue input is backward in time
[libopus @ 0x18eff60] Queue input is backward in time
[libopus @ 0x18eff60] Queue input is backward in time

...


    


    When I open the audio file in VLC, nothing is played (the audio track timestamp is missing).

    


    The task in my project is simply to take pcm_alaw data and pack it into a container in mka format. The best way to determine the codec seems to be the av_guess_codec() function, which automatically selects the desired codec ID. But I do not know how to pack the raw data into the container correctly.
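
    I suspect that a pure remux of the A-law bytes into Matroska (no encoder at all) might be what I need; a rough sketch of that idea is below. The 8 kHz mono parameters follow from payload type 8, while the 160-byte packet size (20 ms of audio) and the muxAlawToMka name are just placeholders. I am not sure this is the correct approach:

// Rough sketch only: remux raw pcm_alaw bytes into an .mka file without transcoding.
// Assumes 8 kHz mono A-law (RTP payload type 8); error handling is abbreviated.
#include <cstdint>

extern "C"
{
#include "libavformat/avformat.h"
#include "libavutil/channel_layout.h"
#include "libavutil/mathematics.h"
}

static bool muxAlawToMka(const uint8_t *alaw, int size, const char *fileName)
{
    AVFormatContext *oc = nullptr;
    if (avformat_alloc_output_context2(&oc, nullptr, "matroska", fileName) < 0)
        return false;

    // Describe the stream directly; no encoder is opened anywhere.
    AVStream *st = avformat_new_stream(oc, nullptr);
    if (st == nullptr)
        return false;
    st->codecpar->codec_type            = AVMEDIA_TYPE_AUDIO;
    st->codecpar->codec_id              = AV_CODEC_ID_PCM_ALAW;
    st->codecpar->sample_rate           = 8000;
    st->codecpar->channels              = 1;
    st->codecpar->channel_layout        = AV_CH_LAYOUT_MONO;
    st->codecpar->bits_per_coded_sample = 8;   // one byte per A-law sample
    st->time_base                       = AVRational{ 1, 8000 };

    if (avio_open(&oc->pb, fileName, AVIO_FLAG_WRITE) < 0)
        return false;
    if (avformat_write_header(oc, nullptr) < 0)
        return false;

    const int samplesPerPacket = 160;          // 20 ms at 8 kHz
    int64_t samplesWritten = 0;
    for (int offset = 0; offset < size; offset += samplesPerPacket) {
        const int chunk = (size - offset < samplesPerPacket) ? size - offset : samplesPerPacket;

        AVPacket pkt;
        av_init_packet(&pkt);
        pkt.data         = const_cast<uint8_t *>(alaw + offset);
        pkt.size         = chunk;
        pkt.stream_index = st->index;
        // Timestamps are counted in samples (1/8000 s) and rescaled to whatever
        // time base the muxer actually chose for the stream after the header.
        pkt.pts = pkt.dts = av_rescale_q(samplesWritten, AVRational{ 1, 8000 }, st->time_base);
        pkt.duration      = av_rescale_q(chunk, AVRational{ 1, 8000 }, st->time_base);
        samplesWritten   += chunk;

        if (av_interleaved_write_frame(oc, &pkt) < 0)
            break;
    }

    av_write_trailer(oc);
    avio_closep(&oc->pb);
    avformat_free_context(oc);
    return true;
}

    (The point of this idea is that nothing ever goes through avcodec_send_frame(); the A-law bytes are written straight into packets, so no default encoder such as libopus would be involved.)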

    


    It is important to note that the raw data I receive can be in any audio format defined by the RTP payload type (any of the RTP payload types). All I know is that, in every case, I have to pack the audio data into an mka container.

    


    I also attach the code (borrowed from this resource) that I use:

    


    audiogenerater.h

    


#pragma once

#include <QString>
#include <QByteArray>

extern "C"
{
#include "libavformat/avformat.h"
#include "libavcodec/avcodec.h"
#include "libswresample/swresample.h"
}

class AudioGenerater
{
public:

    AudioGenerater();
   ~AudioGenerater();

    void generateAudioFileWithOptions(
            QString        fileName,
            QByteArray     pcmData,
            int            channel,
            int            bitRate,
            int            sampleRate,
            AVSampleFormat format);
            
private:

    // init Format
    bool initFormat(QString audioFileName);

private:

    AVCodec         *m_AudioCodec        = nullptr;
    AVCodecContext  *m_AudioCodecContext = nullptr;
    AVFormatContext *m_FormatContext     = nullptr;
    AVOutputFormat  *m_OutputFormat      = nullptr;
};


    


    audiogenerater.cpp

    


#include "audiogenerater.h"

#include <QDebug>

AudioGenerater::AudioGenerater()
{
    av_register_all();
    avcodec_register_all();
}

AudioGenerater::~AudioGenerater()
{
    // ... 
}

bool AudioGenerater::initFormat(QString audioFileName)
{
    // Create an output Format context
    int result = avformat_alloc_output_context2(&m_FormatContext, nullptr, nullptr, audioFileName.toLocal8Bit().data());
    if (result < 0) {
        return false;
    }

    m_OutputFormat = m_FormatContext->oformat;

    // Create an audio stream
    AVStream* audioStream = avformat_new_stream(m_FormatContext, m_AudioCodec);
    if (audioStream == nullptr) {
        avformat_free_context(m_FormatContext);
        return false;
    }

    // Set the parameters in the stream
    audioStream->id = m_FormatContext->nb_streams - 1;
    audioStream->time_base = { 1, 8000 };
    result = avcodec_parameters_from_context(audioStream->codecpar, m_AudioCodecContext);
    if (result < 0) {
        avformat_free_context(m_FormatContext);
        return false;
    }

    // Print FormatContext information
    av_dump_format(m_FormatContext, 0, audioFileName.toLocal8Bit().data(), 1);

    // Open file IO
    if (!(m_OutputFormat->flags & AVFMT_NOFILE)) {
        result = avio_open(&m_FormatContext->pb, audioFileName.toLocal8Bit().data(), AVIO_FLAG_WRITE);
        if (result < 0) {
            avformat_free_context(m_FormatContext);
            return false;
        }
    }

    return true;
}

void AudioGenerater::generateAudioFileWithOptions(
    QString _fileName,
    QByteArray _pcmData,
    int _channel,
    int _bitRate,
    int _sampleRate,
    AVSampleFormat _format)
{
    AVFormatContext* oc;
    if (avformat_alloc_output_context2(
            &oc, nullptr, nullptr, _fileName.toStdString().c_str())
        < 0) {
        qDebug() << "Error in line: " << __LINE__;
        return;
    }
    if (!oc) {
        printf("Could not deduce output format from file extension: using mka.\n");
        avformat_alloc_output_context2(
            &oc, nullptr, "mka", _fileName.toStdString().c_str());
    }
    if (!oc) {
        qDebug() << "Error in line: " << __LINE__;
        return;
    }
    AVOutputFormat* fmt = oc->oformat;
    if (fmt->audio_codec == AV_CODEC_ID_NONE) {
        qDebug() << "Error in line: " << __LINE__;
        return;
    }

    AVCodecID codecID = av_guess_codec(
        fmt, nullptr, _fileName.toStdString().c_str(), nullptr, AVMEDIA_TYPE_AUDIO);
    // Find Codec
    m_AudioCodec = avcodec_find_encoder(codecID);
    if (m_AudioCodec == nullptr) {
        qDebug() << "Error in line: " << __LINE__;
        return;
    }
    // Create an encoder context
    m_AudioCodecContext = avcodec_alloc_context3(m_AudioCodec);
    if (m_AudioCodecContext == nullptr) {
        qDebug() << "Error in line: " << __LINE__;
        return;
    }

    // Setting parameters
    m_AudioCodecContext->bit_rate = _bitRate;
    m_AudioCodecContext->sample_rate = _sampleRate;
    m_AudioCodecContext->sample_fmt = _format;
    m_AudioCodecContext->channels = _channel;

    m_AudioCodecContext->channel_layout = av_get_default_channel_layout(_channel);
    m_AudioCodecContext->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;

    // Turn on the encoder
    int result = avcodec_open2(m_AudioCodecContext, m_AudioCodec, nullptr);
    if (result < 0) {
        avcodec_free_context(&m_AudioCodecContext);
        if (m_FormatContext != nullptr)
            avformat_free_context(m_FormatContext);
        return;
    }

    // Create a package
    if (!initFormat(_fileName)) {
        avcodec_free_context(&m_AudioCodecContext);
        if (m_FormatContext != nullptr)
            avformat_free_context(m_FormatContext);
        return;
    }

    // write to the file header
    result = avformat_write_header(m_FormatContext, nullptr);
    if (result < 0) {
        avcodec_free_context(&m_AudioCodecContext);
        if (m_FormatContext != nullptr)
            avformat_free_context(m_FormatContext);
        return;
    }

    // Create Frame
    AVFrame* frame = av_frame_alloc();
    if (frame == nullptr) {
        avcodec_free_context(&m_AudioCodecContext);
        if (m_FormatContext != nullptr)
            avformat_free_context(m_FormatContext);
        return;
    }

    int nb_samples = 0;
    if (m_AudioCodecContext->codec->capabilities & AV_CODEC_CAP_VARIABLE_FRAME_SIZE) {
        nb_samples = 10000;
    }
    else {
        nb_samples = m_AudioCodecContext->frame_size;
    }

    // Set the parameters of the Frame
    frame->nb_samples = nb_samples;
    frame->format = m_AudioCodecContext->sample_fmt;
    frame->channel_layout = m_AudioCodecContext->channel_layout;

    // Apply for data memory
    result = av_frame_get_buffer(frame, 0);
    if (result < 0) {
        av_frame_free(&frame);
        {
            avcodec_free_context(&m_AudioCodecContext);
            if (m_FormatContext != nullptr)
                avformat_free_context(m_FormatContext);
            return;
        }
    }

    // Set the Frame to be writable
    result = av_frame_make_writable(frame);
    if (result < 0) {
        av_frame_free(&frame);
        {
            avcodec_free_context(&m_AudioCodecContext);
            if (m_FormatContext != nullptr)
                avformat_free_context(m_FormatContext);
            return;
        }
    }

    int perFrameDataSize = frame->linesize[0];
    int count = _pcmData.size() / perFrameDataSize;
    bool needAddOne = false;
    if (_pcmData.size() % perFrameDataSize != 0) {
        count++;
        needAddOne = true;
    }

    int frameCount = 0;
    for (int i = 0; i < count; ++i) {
        // Create a Packet
        AVPacket* pkt = av_packet_alloc();
        if (pkt == nullptr) {
            avcodec_free_context(&m_AudioCodecContext);
            if (m_FormatContext != nullptr)
                avformat_free_context(m_FormatContext);
            return;
        }
        av_init_packet(pkt);

        if (i == count - 1)
            perFrameDataSize = _pcmData.size() % perFrameDataSize;

        // Synthesize WAV files
        memset(frame->data[0], 0, perFrameDataSize);
        memcpy(frame->data[0], &(_pcmData.data()[perFrameDataSize * i]), perFrameDataSize);

        frame->pts = frameCount++;
        // send Frame
        result = avcodec_send_frame(m_AudioCodecContext, frame);
        if (result < 0)
            continue;

        // Receive the encoded Packet
        result = avcodec_receive_packet(m_AudioCodecContext, pkt);
        if (result < 0) {
            av_packet_free(&pkt);
            continue;
        }

        // write to file
        av_packet_rescale_ts(pkt, m_AudioCodecContext->time_base, m_FormatContext->streams[0]->time_base);
        pkt->stream_index = 0;
        result = av_interleaved_write_frame(m_FormatContext, pkt);
        if (result < 0)
            continue;

        av_packet_free(&pkt);
    }

    // write to the end of the file
    av_write_trailer(m_FormatContext);
    // Close file IO
    avio_closep(&m_FormatContext->pb);
    // Release Frame memory
    av_frame_free(&frame);

    avcodec_free_context(&m_AudioCodecContext);
    if (m_FormatContext != nullptr)
        avformat_free_context(m_FormatContext);
}


    


    main.cpp

    


#include <cstdlib>

#include <QFile>

#include "audiogenerater.h"

extern "C"
{
#include "libavutil/log.h"
}

int main(int argc, char **argv)
{
    av_log_set_level(AV_LOG_TRACE);

    QFile file("rawDataOfPcmAlawType.bin");
    if (!file.open(QIODevice::ReadOnly)) {
        return EXIT_FAILURE;
    }
    QByteArray rawData(file.readAll());

    AudioGenerater generator;
    generator.generateAudioFileWithOptions(
               "test.mka",
               rawData,
               1, 
               64000, 
               8000,
               AV_SAMPLE_FMT_S16);

    return 0;
}


    


    It is IMPORTANT that you help me find the most appropriate way to record pcm_alaw, or another data format, into an MKA audio file.

    


    I am asking everyone who knows anything about this to help (there is very little time left to implement this project).

    


  • Using ffmpeg headers with a C program

    15 April 2014, by orpgol

    I'm trying to learn FFmpeg from Dranger's guide (for school). I have a Mac, so the first thing I did was use MacPorts to get ffmpeg and SDL...

    Now, when I try to compile Dranger's code, the compiler doesn't recognize the headers... so I used -I when compiling with gcc, and I also gave the "whole/path/name..." to the headers, but I always get an error about some header being missing...

    Below I put the code with my corrections.

    I also tried including every header I get an error about, but there is always another header the compiler can't find.

    I tried both gcc (on the console) and Xcode.
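
    For what it's worth, the build line I am trying looks roughly like this (assuming the MacPorts prefix /opt/local that appears in the include paths in the code below; extra -l flags such as -lavutil or -lswscale may be needed depending on the FFmpeg build):

    gcc -o tutorial01 tutorial01.c \
        -I/opt/local/include -L/opt/local/lib \
        -lavformat -lavcodec -lz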

    // tutorial01.c
    // Code based on a tutorial by Martin Bohme (boehme@inb.uni-luebeckREMOVETHIS.de)
    // Tested on Gentoo, CVS version 5/01/07 compiled with GCC 4.1.1

    // A small sample program that shows how to use libavformat and libavcodec to
    // read video from a file.
    //
    // Use
    //
    // gcc -o tutorial01 tutorial01.c -lavformat -lavcodec -lz
    //
    // to build (assuming libavformat and libavcodec are correctly installed
    // your system).
    //
    // Run using
    //
    // tutorial01 myvideofile.mpg
    //
    // to write the first five frames from "myvideofile.mpg" to disk in PPM
    // format.

    #include "/opt/local/include/libavcodec/avcodec.h"
    #include "/opt/local/include/libavformat/avformat.h"

     #include <stdio.h>

    void SaveFrame(AVFrame *pFrame, int width, int height, int iFrame) {
     FILE *pFile;
     char szFilename[32];
     int  y;

     // Open file
     sprintf(szFilename, "frame%d.ppm", iFrame);
     pFile=fopen(szFilename, "wb");
     if(pFile==NULL)
       return;

     // Write header
     fprintf(pFile, "P6\n%d %d\n255\n", width, height);

     // Write pixel data
     for(y=0; y<height; y++)
       fwrite(pFrame->data[0]+y*pFrame->linesize[0], 1, width*3, pFile);

     // Close file
     fclose(pFile);
    }

    int main(int argc, char *argv[]) {
     AVFormatContext *pFormatCtx;
     int             i, videoStream;
     AVCodecContext  *pCodecCtx;
     AVCodec         *pCodec;
     AVFrame         *pFrame;
     AVFrame         *pFrameRGB;
     AVPacket        packet;
     int             frameFinished;
     int             numBytes;
     uint8_t         *buffer;

     if(argc < 2) {
       printf("Please provide a movie file\n");
       return -1;
     }
     // Register all formats and codecs
     av_register_all();

     // Open video file
     if(av_open_input_file(&pFormatCtx, argv[1], NULL, 0, NULL)!=0)
       return -1; // Couldn't open file

     // Retrieve stream information
     if(av_find_stream_info(pFormatCtx)<0)
       return -1; // Couldn't find stream information

     // Dump information about file onto standard error
     dump_format(pFormatCtx, 0, argv[1], 0);

     // Find the first video stream
     videoStream=-1;
     for(i=0; i<pFormatCtx->nb_streams; i++)
       if(pFormatCtx->streams[i]->codec->codec_type==CODEC_TYPE_VIDEO) {
         videoStream=i;
         break;
       }
     if(videoStream==-1)
       return -1; // Didn't find a video stream

     // Get a pointer to the codec context for the video stream
     pCodecCtx=pFormatCtx->streams[videoStream]->codec;

     // Find the decoder for the video stream
     pCodec=avcodec_find_decoder(pCodecCtx->codec_id);
     if(pCodec==NULL) {
       fprintf(stderr, "Unsupported codec!\n");
       return -1; // Codec not found
     }
     // Open codec
     if(avcodec_open(pCodecCtx, pCodec)<0)
       return -1; // Could not open codec

     // Allocate video frame
     pFrame=avcodec_alloc_frame();

     // Allocate an AVFrame structure
     pFrameRGB=avcodec_alloc_frame();
     if(pFrameRGB==NULL)
       return -1;

     // Determine required buffer size and allocate buffer
     numBytes=avpicture_get_size(PIX_FMT_RGB24, pCodecCtx->width,
                     pCodecCtx->height);
     buffer=(uint8_t *)av_malloc(numBytes*sizeof(uint8_t));

     // Assign appropriate parts of buffer to image planes in pFrameRGB
     // Note that pFrameRGB is an AVFrame, but AVFrame is a superset
     // of AVPicture
     avpicture_fill((AVPicture *)pFrameRGB, buffer, PIX_FMT_RGB24,
            pCodecCtx->width, pCodecCtx->height);

     // Read frames and save first five frames to disk
     i=0;
     while(av_read_frame(pFormatCtx, &packet)>=0) {
       // Is this a packet from the video stream?
       if(packet.stream_index==videoStream) {
         // Decode video frame
         avcodec_decode_video(pCodecCtx, pFrame, &frameFinished,
                  packet.data, packet.size);

         // Did we get a video frame?
         if(frameFinished) {
       // Convert the image from its native format to RGB
       img_convert((AVPicture *)pFrameRGB, PIX_FMT_RGB24,
                       (AVPicture*)pFrame, pCodecCtx->pix_fmt, pCodecCtx->width,
                       pCodecCtx->height);

       // Save the frame to disk
       if(++i<=5)
         SaveFrame(pFrameRGB, pCodecCtx->width, pCodecCtx->height,
               i);
         }
       }

       // Free the packet that was allocated by av_read_frame
       av_free_packet(&packet);
     }

     // Free the RGB image
     av_free(buffer);
     av_free(pFrameRGB);

     // Free the YUV frame
     av_free(pFrame);

     // Close the codec
     avcodec_close(pCodecCtx);

     // Close the video file
     av_close_input_file(pFormatCtx);

     return 0;
    }
  • Getting video duration with ffmpeg, PHP function

    17 November 2014, by keithp

    I have FFmpeg installed and I know it is functional. I'm trying to get the duration of an FLV video through PHP, but when I use this code:

    function mbmGetFLVDuration($file)
    {
        /*
         * Determine video duration with ffmpeg.
         * ffmpeg should be installed on your server.
         */

        // $time is in 00:00:00.000 format
        $ffmpeg = "../ffmpeg/ffmpeg";

        $time = exec("$ffmpeg -i $file 2>&1 | grep 'Duration' | cut -d ' ' -f 4 | sed s/,//");

        $duration = explode(":", $time);
        $duration_in_seconds = $duration[0]*3600 + $duration[1]*60 + round($duration[2]);

        return $duration_in_seconds;
    }

    and:

    $duration = mbmGetFLVDuration('http://www.videoaddsite.com/videos/intro.flv');
    echo $duration;

    I get an output of 220. The video is 3:40. Can anyone help me with what I'm doing wrong, or is there something else I can use?