
Other articles (25)

  • Encoding and processing into web-friendly formats

    13 April 2011

    MediaSPIP automatically converts uploaded files to internet-compatible formats.
    Video files are encoded in Ogv and WebM (supported by HTML5) and MP4 (supported by Flash).
    Audio files are encoded in Ogg (supported by HTML5) and MP3 (supported by Flash).
    Where possible, text is analyzed in order to retrieve the data needed for search engine detection, and then exported as a series of image files.
    All uploaded files are stored online in their original format, so you can (...)

  • Support for all media types

    10 April 2011

    Unlike many programs and other modern document-sharing platforms, MediaSPIP aims to handle as many different document formats as possible, whether they are: images (png, gif, jpg, bmp and others...); audio (MP3, Ogg, Wav and others...); video (Avi, MP4, Ogv, mpg, mov, wmv and others...); textual content, code or other (OpenOffice, Microsoft Office (spreadsheet, presentation), web (html, css), LaTeX, Google Earth) (...)

  • List of compatible distributions

    26 April 2011

    The table below lists the Linux distributions compatible with the automated installation script of MediaSPIP.

    Distribution name    Version name           Version number
    Debian               Squeeze                6.x.x
    Debian               Wheezy                 7.x.x
    Debian               Jessie                 8.x.x
    Ubuntu               The Precise Pangolin   12.04 LTS
    Ubuntu               The Trusty Tahr        14.04

    If you want to help us improve this list, you can provide us access to a machine whose distribution is not mentioned above or send the necessary fixes to add (...)

On other sites (6695)

  • Merge video without sound and audio with offset using ffmpeg

    17 August 2022, by Stratocaster

    I need to merge a video (which has no audio) and an audio file with an offset (the audio should start at 9 seconds). I use this:
    ffmpeg -i video_without_sound.mp4 -i audio_file.mp3 -c:v copy -filter_complex "[1]adelay=9000|9000[s1];[s0:a][s1]amix=2[a]" -map 0:v -map "[a]" -c:a aac output.mp4

    And I get an error:

    Stream specifier 's0:a' in filtergraph description [1]adelay=9000|9000[s1];[s0:a][s1]amix=2[a] matches no streams.

    I think the problem is that the video file has no audio stream. How do I merge such files with an offset?
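    That is indeed the cause: input 0 contributes no audio stream, so the `[s0:a]` label matches nothing. One possible fix (a suggestion, not a verified answer from the thread, using the same filenames as above) is to drop the `amix` stage entirely and just delay the MP3, since there is nothing to mix it with:

```shell
# Copy the video as-is, delay both channels of the MP3 by 9000 ms,
# and encode the delayed audio to AAC. No amix stage is needed,
# because input 0 has no audio to mix with.
ffmpeg -i video_without_sound.mp4 -i audio_file.mp3 \
  -c:v copy \
  -af "adelay=9000|9000" \
  -map 0:v -map 1:a \
  -c:a aac \
  output.mp4
```

    Note that `adelay` takes one delay per channel (in milliseconds), hence `9000|9000` for a stereo file, and `-af` applies to the single mapped audio stream, so no explicit filtergraph labels are required.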

    


  • AVAssetWriter creating mp4 with no sound in last 50msec

    12 August 2015, by Joseph K

    I’m working on a project involving live streaming from the iPhone’s camera.

    To minimize loss during AVAssetWriter finishWriting, I use an array of 2 asset writers and swap them whenever I need to create an mp4 fragment out of the recorded buffers.

    Code responsible for capturing Audio & Video sample buffers

     func captureOutput(captureOutput: AVCaptureOutput!, didOutputSampleBuffer sampleBuffer: CMSampleBuffer!, fromConnection connection: AVCaptureConnection!) {
         // Skip buffers whose data is not yet ready
         if !CMSampleBufferDataIsReady(sampleBuffer) {
             println("Skipped sample because it was not ready")
             return
         }

         if captureOutput == audioOutput {
             if audioWriterBuffers[0].readyForMoreMediaData {
                 if !writers[0].appendAudio(sampleBuffer) {
                     println("Failed to append: \(recordingPackages[0].name). Error: \(recordingPackages[0].outputWriter.error.localizedDescription)")
                 } else {
                     writtenAudioFrames++

                     if writtenAudioFrames == framesPerFragment {
                         writeFragment()
                     }
                 }
             } else {
                 println("Skipped audio sample; it is not ready.")
             }
         } else if captureOutput == videoOutput {
             // Video sample buffer
             if videoWriterBuffers[0].readyForMoreMediaData {
                 // Call startSessionAtSourceTime if needed
                 // Append sample buffer with a source time
             }
         }
     }

    Code responsible for the writing and swapping

     func writeFragment() {
         writtenAudioFrames = 0

         swap(&writers[0], &writers[1])
         if !writers[0].startWriting() {
             println("Failed to start OTHER writer writing")
         } else {
             startTime = CFAbsoluteTimeGetCurrent()
         }

         audioWriterBuffers[0].markAsFinished()
         videoWriterBuffers[0].markAsFinished()

         writers[1].outputWriter.finishWritingWithCompletionHandler { () -> Void in
             println("Finish Package record Writing, now Resetting")
             //
             // Handle written MP4 fragment code
             //

             // Reset the writer: reallocate it as a new AVAssetWriter with a
             // given URL and MPEG4 file type, and add inputs to it
             self.resetWriter()
         }
     }

    The issue at hand

    The written MP4 fragments are being sent over to a local sandbox server to be analyzed.

    When the MP4 fragments are stitched together using FFmpeg, there is a noticeable glitch in the sound because there is no audio in the last 50 msec of every fragment.

    My audio AVAssetWriterInput’s settings are the following:

    static let audioSettings: [NSObject : AnyObject]! =
    [
       AVFormatIDKey : NSNumber(integer: kAudioFormatMPEG4AAC),
       AVNumberOfChannelsKey : NSNumber(integer: 1),
       AVSampleRateKey : NSNumber(int: 44100),
       AVEncoderBitRateKey : NSNumber(int: 64000),
    ]

    As such, I encode 44 audio sample buffers every second. They are all being successfully appended.

    Further resources

    Here’s a waveform display of the audio stream after concatenating the mp4 fragments:

    [Image: waveform of the audio stream]

     Note that my fragments are about 2 seconds in length.
     Note that I’m focusing on audio, since video frames are extremely smooth when jumping from one fragment to another.

    Any idea as to what is causing this? I can provide further code or info if needed.

  • FFmpeg transcoded sound (AAC) stops after half video time

    17 August 2015, by TheSHEEEP

    I have a strange problem in my C/C++ FFmpeg transcoder, which takes an input MP4 (varying input codecs) and produces an output MP4 (x264 baseline & AAC LC @ 44100 sample rate with libfdk_aac):

    The resulting mp4 video has fine images (x264) and the audio (AAC LC) works fine as well, but is only played until exactly the half of the video.

    The audio is not slowed down, not stretched and doesn’t stutter. It just stops right in the middle of the video.

    One hint may be that the input file has a sample rate of 22050 and 22050/44100 is 0.5, but I really don’t get why this would make the sound just stop after half the time. I’d expect such an error leading to sound being at the wrong speed. Everything works just fine if I don’t try to enforce 44100 and instead just use the incoming sample_rate.

    Another guess would be that the pts calculation doesn’t work. But the audio sounds just fine (until it stops) and I do exactly the same for the video part, where it works flawlessly. "Exactly", as in the same code, but "audio"-variables replaced with "video"-variables.

    FFmpeg reports no errors during the whole process. I also flush the decoders/encoders/interleaved writing after all the packet reading from the input is done. It works well for the video, so I doubt there is much wrong with my general approach.

    Here are the functions of my code (stripped of the error handling & other class stuff):

    AudioCodecContext Setup

    outContext->_audioCodec = avcodec_find_encoder(outContext->_audioTargetCodecID);
    outContext->_audioStream =
           avformat_new_stream(outContext->_formatContext, outContext->_audioCodec);
    outContext->_audioCodecContext = outContext->_audioStream->codec;
    outContext->_audioCodecContext->channels = 2;
    outContext->_audioCodecContext->channel_layout = av_get_default_channel_layout(2);
    outContext->_audioCodecContext->sample_rate = 44100;
    outContext->_audioCodecContext->sample_fmt = outContext->_audioCodec->sample_fmts[0];
    outContext->_audioCodecContext->bit_rate = 128000;
    outContext->_audioCodecContext->strict_std_compliance = FF_COMPLIANCE_EXPERIMENTAL;
    outContext->_audioCodecContext->time_base =
           (AVRational){1, outContext->_audioCodecContext->sample_rate};
    outContext->_audioStream->time_base = (AVRational){1, outContext->_audioCodecContext->sample_rate};
    int retVal = avcodec_open2(outContext->_audioCodecContext, outContext->_audioCodec, NULL);

    Resampler Setup

    outContext->_audioResamplerContext =
           swr_alloc_set_opts( NULL, outContext->_audioCodecContext->channel_layout,
                               outContext->_audioCodecContext->sample_fmt,
                               outContext->_audioCodecContext->sample_rate,
                               _inputContext._audioCodecContext->channel_layout,
                               _inputContext._audioCodecContext->sample_fmt,
                               _inputContext._audioCodecContext->sample_rate,
                               0, NULL);
    int retVal = swr_init(outContext->_audioResamplerContext);

    Decoding

    decodedBytes = avcodec_decode_audio4(   _inputContext._audioCodecContext,
                                           _inputContext._audioTempFrame,
                                           &p_gotAudioFrame, &_inputContext._currentPacket);

    Converting (only if decoding produced a frame, of course)

    int retVal = swr_convert(   outContext->_audioResamplerContext,
                               outContext->_audioConvertedFrame->data,
                               outContext->_audioConvertedFrame->nb_samples,
                               (const uint8_t**)_inputContext._audioTempFrame->data,
                               _inputContext._audioTempFrame->nb_samples);

    Encoding (only if decoding produced a frame, of course)

    outContext->_audioConvertedFrame->pts =
           av_frame_get_best_effort_timestamp(_inputContext._audioTempFrame);

    // Init the new packet
    av_init_packet(&outContext->_audioPacket);
    outContext->_audioPacket.data = NULL;
    outContext->_audioPacket.size = 0;

    // Encode
    int retVal = avcodec_encode_audio2( outContext->_audioCodecContext,
                                       &outContext->_audioPacket,
                                       outContext->_audioConvertedFrame,
                                       &p_gotPacket);


    // Set pts/dts time stamps for writing interleaved
    av_packet_rescale_ts(   &outContext->_audioPacket,
                           outContext->_audioCodecContext->time_base,
                           outContext->_audioStream->time_base);
    outContext->_audioPacket.stream_index = outContext->_audioStream->index;

    Writing (only if encoding produced a packet, of course)

    int retVal = av_interleaved_write_frame(outContext->_formatContext, &outContext->_audioPacket);

    I am quite out of ideas about what could cause such behaviour.