Advanced search

Media (0)

Word: - Tags -/xmlrpc

No media matching your criteria is available on this site.

Other articles (103)

  • Authorizations overridden by plugins

    27 April 2010, by

    Mediaspip core
    autoriser_auteur_modifier() so that visitors are able to modify their information on the authors page

  • Support for all media types

    10 April 2011

    Unlike many modern document-sharing applications and platforms, MediaSPIP aims to handle as many different document formats as possible, whether they are: images (png, gif, jpg, bmp and others...); audio (MP3, Ogg, Wav and others...); video (Avi, MP4, Ogv, mpg, mov, wmv and others...); textual content, code or other (open office, microsoft office (spreadsheet, presentation), web (html, css), LaTeX, Google Earth) (...)

  • APPENDIX: Plugins used specifically for the farm

    5 March 2010, by

    The central/master site of the farm needs several additional plugins, beyond those of the channels, to work properly: the Gestion de la mutualisation plugin; the inscription3 plugin to manage registrations and requests to create a mutualisation instance upon user sign-up; the verifier plugin, which provides a field-validation API (used by inscription3); the champs extras v2 plugin, required by inscription3 (...)

On other sites (11741)

  • Concatenate MOV files without re-encoding on iOS with ffmpeg libs

    2 July 2013, by Developer82

    I would like to concatenate MOV files without re-encoding. I want to do it on iOS (iPhone). All the MOV files are recorded with the same settings, no difference in dimensions or encoding profiles.

    I have succeeded in doing it with command-line ffmpeg:
    ffmpeg -re -f concat -i files.txt -c copy ...
    But I am having difficulties using the libraries.
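
    For context, the files.txt consumed by ffmpeg's concat demuxer is just a plain-text list of file directives (the clip names below are placeholders):

    ```
    # files.txt: one 'file' directive per line, paths relative to this file
    file 'clip1.mov'
    file 'clip2.mov'
    file 'clip3.mov'
    ```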

    I think the demuxing part is OK; I have the H.264+AAC packets. After demuxing I shift the PTS and DTS of each packet so they have ascending values in the joined MOV file.
    The hard part is the muxing.
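
    The timestamp shifting described above boils down to adding, to each packet of a chunk, the accumulated duration of all previously written chunks (in the stream's time base). A minimal sketch of that arithmetic, using a hypothetical Packet struct standing in for AVPacket:

    ```c
    #include <stdint.h>

    /* Hypothetical stand-in for AVPacket; only the timing fields matter here. */
    typedef struct {
        int64_t pts;
        int64_t dts;
        int64_t duration;
    } Packet;

    /* Shift one demuxed packet by the accumulated duration of the chunks
     * already written, so timestamps ascend across the joined file. */
    static void shift_packet(Packet *pkt, int64_t offset)
    {
        pkt->pts += offset;
        pkt->dts += offset;
    }

    /* After a chunk is done, grow the offset by that chunk's total length.
     * 'last' is the chunk's final packet in its original (unshifted) time
     * base; chunks are assumed to start at timestamp 0. */
    static int64_t next_offset(int64_t offset, const Packet *last)
    {
        return offset + last->dts + last->duration;
    }
    ```

    With the real API the same arithmetic is applied to AVPacket's pts and dts fields, after rescaling first if the input and output time bases differ.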

    I have built the FFmpeg libraries with libx264, so it can be used if necessary, but I am not sure whether I need the x264 codec, since I don't want to re-encode the MOV files; I just want to join them.

    Problems I have encountered:

    1. In this case I do not use the x264 codec. At muxing time I create the stream with a NULL codec parameter. Writing the header, packets and trailer succeeds; all the function calls return a zero error code. However, while the output can be opened, a black "screen" is displayed during playback. The FFprobe report is attached. I have also examined the output with the MediaInfo tool and attached that report as well (MediaInfo report - without x264 codec.txt). As you can see, no H.264 profile or pixel info is found, which might be the problem.

    2. In this case I use the x264 codec with these functions: avcodec_find_encoder, avformat_new_stream and avcodec_open2. Again: no decode/encode! Now there is much more metadata in the output file, such as the H.264 profile and pixel info (YUV), but the av_interleaved_write_frame call simply does nothing, yet returns the success code (0). No packet is written to the file. :( I don't know how this can happen. fwrite works, but results in an un-openable file. I have also attached the MediaInfo report of this output (MediaInfo report - with x264 codec.txt).

    Questions:

    • How should I process the demuxed packets to feed the muxer?
    • What format context and codec context settings should be made, including AVOption settings?
    • Should I use the x264 codec to do this? I just want to re-mux the chunks into a single joined file.
    • The chunks have their own header/trailer. Should I somehow filter the demuxed packets to skip them?
    • The final goal is creating a network stream (RTP or RTMP), also by re-muxing and without re-encoding. It works with command-line ffmpeg:
      ffmpeg -re -f concat -i files.txt -vcodec copy -an -f rtp rtp://127.0.0.1:20000 -vn -acodec copy -f rtp rtp://127.0.0.1:30000

    Concatenating to the MOV format is only an intermediate pilot. Would it be better to work on the network format directly, given that it is such a different task that there may be no benefit in solving the MOV muxing first?

    Any help, advice or suggestion is greatly appreciated.
    I can share code to make a deeper investigation possible.

    Thanks !

  • FFMPEG AAC encoding causes audio to be lower in pitch

    14 February 2017, by Paul Knopf

    I built a sample application that encodes AAC (from PortAudio) into an MP4 container (no video stream).

    The resulting audio is lower in pitch.

    #include "stdafx.h"
    #include "TestRecording.h"
    #include "libffmpeg.h"

    TestRecording::TestRecording()
    {
    }


    TestRecording::~TestRecording()
    {
    }

    struct RecordingContext
    {
       RecordingContext()
       {
           formatContext = NULL;
           audioStream = NULL;
           audioFrame = NULL;
           audioFrameframeNumber = 0;
       }

       libffmpeg::AVFormatContext* formatContext;
       libffmpeg::AVStream* audioStream;
       libffmpeg::AVFrame* audioFrame;
       int audioFrameframeNumber;
    };

    static int AudioRecordCallback(const void *inputBuffer, void *outputBuffer,
       unsigned long framesPerBuffer,
       const PaStreamCallbackTimeInfo* timeInfo,
       PaStreamCallbackFlags statusFlags,
       void *userData)
    {
       RecordingContext* recordingContext = (RecordingContext*)userData;

       libffmpeg::avcodec_fill_audio_frame(recordingContext->audioFrame,
           recordingContext->audioFrame->channels,
           recordingContext->audioStream->codec->sample_fmt,
           static_cast<const uint8_t*>(inputBuffer),
           (framesPerBuffer * sizeof(float) * recordingContext->audioFrame->channels),
           0);

       libffmpeg::AVPacket pkt;
       libffmpeg::av_init_packet(&pkt);
       pkt.data = NULL;
       pkt.size = 0;

       int gotpacket;
       int result = avcodec_encode_audio2(recordingContext->audioStream->codec, &pkt, recordingContext->audioFrame, &gotpacket);

       if (result < 0)
       {
           LOGINT_WITH_MESSAGE(ERROR, result, "Couldn't encode the audio frame to AAC");
           return paContinue;
       }

       if (gotpacket)
       {
           pkt.stream_index = recordingContext->audioStream->index;
           recordingContext->audioFrameframeNumber++;

           // this codec requires no bitstream filter, just send it to the muxer!
           result = libffmpeg::av_write_frame(recordingContext->formatContext, &pkt);
           if (result < 0)
           {
               LOG(ERROR) << "Couldn't write the encoded audio frame";
               libffmpeg::av_free_packet(&pkt);
               return paContinue;
           }

           libffmpeg::av_free_packet(&pkt);
       }

       return paContinue;
    }

    static bool InitializeRecordingContext(RecordingContext* recordingContext)
    {
       int result = libffmpeg::avformat_alloc_output_context2(&recordingContext->formatContext, NULL, NULL, "C:\\Users\\Paul\\Desktop\\test.mp4");
       if (result < 0)
       {
           LOGINT_WITH_MESSAGE(ERROR, result, "Couldn't create output format context");
           return false;
       }

       libffmpeg::AVCodec *audioCodec;
       audioCodec = libffmpeg::avcodec_find_encoder(libffmpeg::AV_CODEC_ID_AAC);
       if (audioCodec == NULL)
       {
           LOG(ERROR) << "Couldn't find the encoder for AAC";
       }

       recordingContext->audioStream = libffmpeg::avformat_new_stream(recordingContext->formatContext, audioCodec);
       if (!recordingContext->audioStream)
       {
           LOG(ERROR) << "Couldn't create the audio stream";
           return false;
       }

       recordingContext->audioStream->codec->bit_rate = 64000;
       recordingContext->audioStream->codec->sample_fmt = libffmpeg::AV_SAMPLE_FMT_FLTP;
       recordingContext->audioStream->codec->sample_rate = 48000;
       recordingContext->audioStream->codec->channel_layout = AV_CH_LAYOUT_STEREO;
       recordingContext->audioStream->codec->channels = libffmpeg::av_get_channel_layout_nb_channels(recordingContext->audioStream->codec->channel_layout);

       recordingContext->audioStream->codecpar->bit_rate = recordingContext->audioStream->codec->bit_rate;
       recordingContext->audioStream->codecpar->format = recordingContext->audioStream->codec->sample_fmt;
       recordingContext->audioStream->codecpar->sample_rate = recordingContext->audioStream->codec->sample_rate;
       recordingContext->audioStream->codecpar->channel_layout = recordingContext->audioStream->codec->channel_layout;
       recordingContext->audioStream->codecpar->channels = recordingContext->audioStream->codec->channels;

       result = libffmpeg::avcodec_open2(recordingContext->audioStream->codec, audioCodec, NULL);
       if (result < 0)
       {
           LOGINT_WITH_MESSAGE(ERROR, result, "Couldn't open the audio codec");
           return false;
       }

       // create a new frame to store the audio samples
       recordingContext->audioFrame = libffmpeg::av_frame_alloc();
       if (!recordingContext->audioFrame)
       {
           LOG(ERROR) << "Couldn't allocate the output audio frame";
           return false;
       }

       recordingContext->audioFrame->nb_samples = recordingContext->audioStream->codec->frame_size;
       recordingContext->audioFrame->channel_layout = recordingContext->audioStream->codec->channel_layout;
       recordingContext->audioFrame->channels = recordingContext->audioStream->codec->channels;
       recordingContext->audioFrame->format = recordingContext->audioStream->codec->sample_fmt;
       recordingContext->audioFrame->sample_rate = recordingContext->audioStream->codec->sample_rate;

       result = libffmpeg::av_frame_get_buffer(recordingContext->audioFrame, 0);
       if (result < 0)
       {
           LOG(ERROR) << "Couldn't initialize the output audio frame buffer";
           return false;
       }

       // some formats want stream headers to be separate
       if (!strcmp(recordingContext->formatContext->oformat->name, "mp4") || !strcmp(recordingContext->formatContext->oformat->name, "mov") || !strcmp(recordingContext->formatContext->oformat->name, "3gp"))
       {
           recordingContext->audioStream->codec->flags |= CODEC_FLAG_GLOBAL_HEADER;
       }

       // open the output file
       if (!(recordingContext->formatContext->oformat->flags & AVFMT_NOFILE))
       {
           result = libffmpeg::avio_open(&recordingContext->formatContext->pb, recordingContext->formatContext->filename, AVIO_FLAG_WRITE);
           if (result < 0)
           {
               LOGINT_WITH_MESSAGE(ERROR, result, "Couldn't open the output file");
               return false;
           }
       }

       // write the stream headers
       result = libffmpeg::avformat_write_header(recordingContext->formatContext, NULL);
       if (result < 0)
       {
           LOGINT_WITH_MESSAGE(ERROR, result, "Couldn't write the headers to the file");
           return false;
       }

       return true;
    }

    static bool FinalizeRecordingContext(RecordingContext* recordingContext)
    {
       int result = 0;

       // write the trailing information
       if (recordingContext->formatContext->pb)
       {
           result = libffmpeg::av_write_trailer(recordingContext->formatContext);
           if (result < 0)
           {
               LOGINT_WITH_MESSAGE(ERROR, result, "Couldn't write the trailer information");
               return false;
           }
       }

       // close all the codecs
       for (int i = 0; i < (int)recordingContext->formatContext->nb_streams; i++)
       {
           result = libffmpeg::avcodec_close(recordingContext->formatContext->streams[i]->codec);
           if (result < 0)
           {
               LOGINT_WITH_MESSAGE(ERROR, result, "Couldn't close the codec");
               return false;
           }
       }

       // close the output file
       if (recordingContext->formatContext->pb)
       {
           if (!(recordingContext->formatContext->oformat->flags & AVFMT_NOFILE))
           {
               result = libffmpeg::avio_close(recordingContext->formatContext->pb);
               if (result < 0)
               {
                   LOGINT_WITH_MESSAGE(ERROR, result, "Couldn't close the output file");
                   return false;
               }
           }
       }

       // free the format context and all of its data
       libffmpeg::avformat_free_context(recordingContext->formatContext);

       recordingContext->formatContext = NULL;
       recordingContext->audioStream = NULL;

       if (recordingContext->audioFrame)
       {
           libffmpeg::av_frame_free(&recordingContext->audioFrame);
           recordingContext->audioFrame = NULL;
       }

       return true;
    }

    int TestRecording::Test()
    {
       PaError result = paNoError;

       result = Pa_Initialize();
       if (result != paNoError) LOGINT_WITH_MESSAGE(ERROR, result, "Error initializing audio device framework");

       RecordingContext recordingContext;
       if (!InitializeRecordingContext(&recordingContext))
       {
           LOG(ERROR) << "Couldn't start recording file";
           return 0;
       }

       auto defaultDevice = Pa_GetDefaultInputDevice();
       auto deviceInfo = Pa_GetDeviceInfo(defaultDevice);

       PaStreamParameters  inputParameters;
       inputParameters.device = defaultDevice;
       inputParameters.channelCount = 2;
       inputParameters.sampleFormat = paFloat32;
       inputParameters.suggestedLatency = deviceInfo->defaultLowInputLatency;
       inputParameters.hostApiSpecificStreamInfo = NULL;

       PaStream* stream = NULL;
       result = Pa_OpenStream(
           &stream,
           &inputParameters,
           NULL,
           48000,
           1024,
           paClipOff,
           AudioRecordCallback,
           &recordingContext);
       if (result != paNoError) LOGINT_WITH_MESSAGE(ERROR, result, "Couldn't open the audio stream");

       result = Pa_StartStream(stream);
       if (result != paNoError) LOGINT_WITH_MESSAGE(ERROR, result, "Couldn't start the audio stream");

       Sleep(1000 * 5);

       result = Pa_StopStream(stream);
       if (result != paNoError) LOGINT_WITH_MESSAGE(ERROR, result, "Couldn't stop the audio stream");

       if (!FinalizeRecordingContext(&recordingContext)) LOG(ERROR) << "Couldn't stop recording file";

       result = Pa_CloseStream(stream);
       if (result != paNoError) LOGINT_WITH_MESSAGE(ERROR, result, "Couldn't close the audio stream");

       return 0;
    }

    Here is the stdout, in case it helps.

    https://gist.github.com/pauldotknopf/9f24a604ce1f8a081aa68da1bf169e98

    Why is the audio lower in pitch? I assume I am overlooking a parameter that needs to be configured between PortAudio and FFmpeg. Is there something super obvious that I am missing?
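
    A pitch drop like this is commonly caused by a sample-rate mismatch somewhere between capture and playback: audio captured at one rate but interpreted at a lower one drops in pitch by the ratio of the two rates. A small helper (illustrative only, not from the code above) to quantify such a shift in semitones:

    ```c
    #include <math.h>

    /* Pitch shift, in semitones, when audio captured at capture_hz is
     * interpreted as if it were playback_hz. Negative means lower pitch. */
    static double pitch_shift_semitones(double capture_hz, double playback_hz)
    {
        return 12.0 * log2(playback_hz / capture_hz);
    }
    ```

    For example, 48 kHz material interpreted as 44.1 kHz comes out roughly one and a half semitones low, which is audible and worth ruling out first.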