
Other articles (26)

  • Publishing on MédiaSpip

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, if your MediaSPIP installation is version 0.2 or higher. If needed, contact the administrator of your MediaSPIP to find out.

  • Contribute to documentation

    13 April 2011

    Documentation is vital to the development of improved technical capabilities.
    MediaSPIP welcomes documentation by users as well as developers, including: critique of existing features and functions; articles contributed by developers, administrators, content producers and editors; screenshots to illustrate the above; translations of existing documentation into other languages.
    To contribute, register to the project users’ mailing (...)

  • Use, discuss, criticize

    13 April 2011

    Talk to people directly involved in MediaSPIP’s development, or to people around you who could use MediaSPIP to share, enhance or develop their creative projects.
    The bigger the community, the more MediaSPIP’s potential will be explored and the faster the software will evolve.
    A discussion list is available for all exchanges between users.

On other sites (4460)

  • Merge commit 'c29da01ac95ea2c8c5c4b3a312a33aaaa8fb7068'

    26 September 2017, by James Almer
    Merge commit 'c29da01ac95ea2c8c5c4b3a312a33aaaa8fb7068'
    

    * commit 'c29da01ac95ea2c8c5c4b3a312a33aaaa8fb7068':
    svq3: Convert to the new bitstream reader

    This commit is a noop, see
    http://ffmpeg.org/pipermail/ffmpeg-devel/2017-April/209609.html

    Merged-by: James Almer <jamrial@gmail.com>

  • Merge commit 'f7ec7f546f0021d28da284b024416b916b61c974'

    27 September 2017, by James Almer
    Merge commit 'f7ec7f546f0021d28da284b024416b916b61c974'
    

    * commit 'f7ec7f546f0021d28da284b024416b916b61c974':
    wma: Convert to the new bitstream reader

    This commit is a noop, see
    http://ffmpeg.org/pipermail/ffmpeg-devel/2017-April/209609.html

    Merged-by: James Almer <jamrial@gmail.com>

  • FFmpeg + OpenAL - playback streaming sound from video won't work

    28 January 2014, by TheSHEEEP

    I am decoding an OGG video (theora & vorbis as codecs) and want to show it on the screen (using Ogre 3D) while playing its sound. I can decode the image stream just fine and the video plays perfectly with the correct frame rate, etc.

    However, I cannot get the sound to play at all with OpenAL.

    Edit: I managed to make the playing sound resemble the actual audio in the video at least somewhat. Updated sample code.

    Edit 2: I was able to get "almost" correct sound now. I had to set OpenAL to use AL_FORMAT_STEREO_FLOAT32 (after initializing the extension) instead of just STEREO16. Now the sound is "only" extremely high pitched and stuttering, but at the correct speed.
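
    The float32 formats only exist through the AL_EXT_float32 extension, so it is worth guarding against the extension being absent before picking the streaming format. A minimal sketch of that check (the helper name is only illustrative, not from my actual code):

    #include <AL/al.h>

    // Sketch: pick a streaming format, falling back to plain 16-bit stereo when
    // the float32 extension is not available. Helper name is illustrative only.
    ALenum pickStreamingFormat()
    {
       if (alIsExtensionPresent("AL_EXT_float32"))
       {
           // The float formats are defined by the extension, so look the enum up by name.
           return alGetEnumValue("AL_FORMAT_STEREO_FLOAT32");
       }
       return AL_FORMAT_STEREO16; // core OpenAL: 16-bit interleaved stereo
    }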

    Here is how I decode audio packets (in a background thread; the equivalent works just fine for the image stream of the video file):

    //------------------------------------------------------------------------------
    int decodeAudioPacket(  AVPacket& p_packet, AVCodecContext* p_audioCodecContext, AVFrame* p_frame,
                           FFmpegVideoPlayer* p_player, VideoInfo& p_videoInfo)
    {
       // Decode audio frame
       int got_frame = 0;
       int decoded = avcodec_decode_audio4(p_audioCodecContext, p_frame, &got_frame, &p_packet);
       if (decoded < 0)
       {
           p_videoInfo.error = "Error decoding audio frame.";
           return decoded;
       }

       // Frame is complete, store it in audio frame queue
       if (got_frame)
       {
           int bufferSize = av_samples_get_buffer_size(NULL, p_audioCodecContext->channels, p_frame->nb_samples,
                                                       p_audioCodecContext->sample_fmt, 0);

           int64_t duration = p_frame->pkt_duration;
           int64_t dts = p_frame->pkt_dts;

           if (staticOgreLog)
           {
               staticOgreLog->logMessage("Audio frame bufferSize / duration / dts: "
                       + boost::lexical_cast<std::string>(bufferSize) + " / "
                       + boost::lexical_cast<std::string>(duration) + " / "
                       + boost::lexical_cast<std::string>(dts), Ogre::LML_NORMAL);
           }

           // Create the audio frame
           AudioFrame* frame = new AudioFrame();
           frame->dataSize = bufferSize;
           frame->data = new uint8_t[bufferSize];
           if (p_frame->channels == 2)
           {
               memcpy(frame->data, p_frame->data[0], bufferSize >> 1);
               memcpy(frame->data + (bufferSize >> 1), p_frame->data[1], bufferSize >> 1);
           }
           else
           {
               memcpy(frame->data, p_frame->data, bufferSize);
           }
           double timeBase = ((double)p_audioCodecContext->time_base.num) / (double)p_audioCodecContext->time_base.den;
           frame->lifeTime = duration * timeBase;

           p_player->addAudioFrame(frame);
       }

       return decoded;
    }
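
    One thing I am not sure about is the sample layout: as far as I know, FFmpeg's Vorbis decoder outputs planar float (AV_SAMPLE_FMT_FLTP), where data[0] and data[1] are separate channel planes, while AL_FORMAT_STEREO_FLOAT32 expects interleaved samples. If that is the case here, copying the two planes back to back would hand OpenAL the wrong layout. A sketch of converting to interleaved float with libswresample (the helper name is made up; it assumes a stereo stream and the same API generation as avcodec_decode_audio4):

    extern "C" {
    #include <libavcodec/avcodec.h>
    #include <libswresample/swresample.h>
    #include <libavutil/channel_layout.h>
    }

    // Sketch: convert whatever the decoder produces into interleaved float
    // (AV_SAMPLE_FMT_FLT), the layout AL_FORMAT_STEREO_FLOAT32 expects.
    static SwrContext* createInterleaver(AVCodecContext* p_audioCodecContext)
    {
       SwrContext* swr = swr_alloc_set_opts(NULL,
           AV_CH_LAYOUT_STEREO, AV_SAMPLE_FMT_FLT, p_audioCodecContext->sample_rate,               // output
           AV_CH_LAYOUT_STEREO, p_audioCodecContext->sample_fmt, p_audioCodecContext->sample_rate, // input
           0, NULL);
       if (swr && swr_init(swr) < 0)
       {
           swr_free(&swr);
           return NULL;
       }
       return swr;
    }

    // Per decoded frame, instead of the two memcpy calls above:
    //    uint8_t* out = frame->data;   // nb_samples * 2 channels * sizeof(float) bytes
    //    swr_convert(swr, &out, p_frame->nb_samples,
    //                (const uint8_t**)p_frame->extended_data, p_frame->nb_samples);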

    So, as you can see, I decode the frame and memcpy it into my own struct, AudioFrame. When the sound is played, I use these audio frames like this:

       int numBuffers = 4;
       ALuint buffers[4];
       alGenBuffers(numBuffers, buffers);
       ALenum success = alGetError();
       if(success != AL_NO_ERROR)
       {
           CONSOLE_LOG("Error on alGenBuffers : " + Ogre::StringConverter::toString(success) + alGetString(success));
           return;
       }

       // Fill a number of data buffers with audio from the stream
       std::vector<AudioFrame*> audioBuffers;
       std::vector<unsigned int> audioBufferSizes;
       unsigned int numReturned = FFMPEG_PLAYER->getDecodedAudioFrames(numBuffers, audioBuffers, audioBufferSizes);

       // Assign the data buffers to the OpenAL buffers
       for (unsigned int i = 0; i < numReturned; ++i)
       {
           alBufferData(buffers[i], _streamingFormat, audioBuffers[i]->data, audioBufferSizes[i], _streamingFrequency);

           success = alGetError();
           if(success != AL_NO_ERROR)
           {
               CONSOLE_LOG("Error on alBufferData : " + Ogre::StringConverter::toString(success) + alGetString(success)
                               + " size: " + Ogre::StringConverter::toString(audioBufferSizes[i]));
               return;
           }
       }

       // Queue the buffers into OpenAL
       alSourceQueueBuffers(_source, numReturned, buffers);
       success = alGetError();
       if(success != AL_NO_ERROR)
       {
           CONSOLE_LOG("Error queuing streaming buffers: " + Ogre::StringConverter::toString(success) + alGetString(success));
           return;
       }
    }

    alSourcePlay(_source);

    The format and frequency I give to OpenAL are AL_FORMAT_STEREO_FLOAT32 (it is a stereo sound stream, and I did initialize the FLOAT32 extension) and 48000 (which is the sample rate of the AVCodecContext of the audio stream).
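
    For reference, at that format one second of audio is 48000 samples × 2 channels × 4 bytes = 384000 bytes, so each of the four queued buffers only covers bufferSize / 384000 seconds of playback; if the refill loop below cannot keep up within that window, the source runs dry and playback stutters or stops.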

    And during playback, I do the following to refill OpenAL's buffers:

    ALint numBuffersProcessed;

    // Check if OpenAL is done with any of the queued buffers
    alGetSourcei(_source, AL_BUFFERS_PROCESSED, &numBuffersProcessed);
    if(numBuffersProcessed <= 0)
       return;

    // Fill a number of data buffers with audio from the stream
    std::vector<AudioFrame*> audioBuffers;
    std::vector<unsigned int> audioBufferSizes;
    unsigned int numFilled = FFMPEG_PLAYER->getDecodedAudioFrames(numBuffersProcessed, audioBuffers, audioBufferSizes);

    // Assign the data buffers to the OpenAL buffers
    ALuint buffer;
    for (unsigned int i = 0; i < numFilled; ++i)
    {
       // Pop the oldest queued buffer from the source,
       // fill it with the new data, then re-queue it
       alSourceUnqueueBuffers(_source, 1, &buffer);

       ALenum success = alGetError();
       if(success != AL_NO_ERROR)
       {
           CONSOLE_LOG("Error Unqueuing streaming buffers: " + Ogre::StringConverter::toString(success));
           return;
       }

       alBufferData(buffer, _streamingFormat, audioBuffers[i]->data, audioBufferSizes[i], _streamingFrequency);

       success = alGetError();
       if(success != AL_NO_ERROR)
       {
           CONSOLE_LOG("Error on re- alBufferData: " + Ogre::StringConverter::toString(success));
           return;
       }

       alSourceQueueBuffers(_source, 1, &amp;buffer);

       success = alGetError();
       if(success != AL_NO_ERROR)
       {
           CONSOLE_LOG("Error re-queuing streaming buffers: " + Ogre::StringConverter::toString(success) + " "
                       + alGetString(success));
           return;
       }
    }

    // Make sure the source is still playing,
    // and restart it if needed.
    ALint playStatus;
    alGetSourcei(_source, AL_SOURCE_STATE, &playStatus);
    if(playStatus != AL_PLAYING)
       alSourcePlay(_source);

    As you can see, I do quite heavy error checking, but I do not get any errors from either OpenAL or FFmpeg.
    Edit: What I hear somewhat resembles the actual audio from the video, but it is VERY high pitched and stutters VERY much. Also, it seems to be playing on top of TV noise. Very strange. Plus, it is playing much slower than the correct audio would.
    Edit 2: After using AL_FORMAT_STEREO_FLOAT32, the sound plays at the correct speed, but is still very high pitched and stuttering (though less than before).

    The video itself is not broken; it plays fine in any player. OpenAL can also play *.wav files just fine in the same application, so it is working as well.

    Any ideas what could be wrong here or how to do this correctly?

    My only guess is that somehow, FFmpeg's decode function does not produce data OpenAL can read. But this is as far as the FFmpeg decode example goes, so I don't know what's missing. As I understand it, avcodec_decode_audio4 decodes the frame to raw data, and OpenAL should be able to work with raw data (or rather, doesn't work with anything else).
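
    If it helps, a check like the following could confirm what the decoder actually hands back (just a sketch; the function name is made up):

    extern "C" {
    #include <libavcodec/avcodec.h>
    #include <libavutil/samplefmt.h>
    }
    #include <cstdio>

    // Sketch: log the decoder's output sample format. A planar format (e.g. FLTP)
    // means the channel planes still need interleaving before alBufferData.
    void reportSampleFormat(const AVCodecContext* p_audioCodecContext)
    {
       std::printf("sample_fmt=%s planar=%d channels=%d rate=%d\n",
                   av_get_sample_fmt_name(p_audioCodecContext->sample_fmt),
                   av_sample_fmt_is_planar(p_audioCodecContext->sample_fmt),
                   p_audioCodecContext->channels,
                   p_audioCodecContext->sample_rate);
    }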