Other articles (60)

  • Publishing on MédiaSpip

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, as long as your MédiaSpip installation is at version 0.2 or higher. If necessary, contact the administrator of your MédiaSpip to find out.

  • HTML5 audio and video support

    10 April 2011

    MediaSPIP uses the HTML5 video and audio tags to play multimedia documents, taking advantage of the latest W3C innovations supported by modern browsers.
    For older browsers, the Flowplayer flash player is used.
    The HTML5 player was created specifically for MediaSPIP: its appearance is fully customizable to match a chosen theme.
    These technologies make it possible to deliver video and sound both to conventional computers (...)

  • HTML5 audio and video support

    13 April 2011, by

    MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
    The MediaSPIP player used has been created specifically for MediaSPIP and can be easily adapted to fit in with a specific theme.
    For older browsers the Flowplayer flash fallback is used.
    MediaSPIP allows for media playback on major mobile platforms with the above (...)

On other sites (8654)

  • Revision 21711: r21657 had added an authorization on the top-level menu entries, defined...

    25 October 2014, by cedric -

    This is fixed by renaming the authorization to distinguish it from the sub-entries, and by setting a default authorization of true for everyone.

  • iOS: How to fill audioUnit ioData from avFrame

    4 September 2012, by Michelle Cannon

    I'm a little confused. I understand the basics of audio units in general, but I can't seem to come up with an easy approach to my problem.

    I have an RTSP player application using ffmpeg and Audio Queues. This works very well for some formats, but the Audio Queue API seems to have issues with mu-law (G.711) audio.
    I did some preliminary testing with another player that uses SDL audio, which is a wrapper around audio units, and playback was fairly smooth with little latency. I'd like to duplicate this without the SDL middleware.

    What I am having problems with is how to take the decoded AVFrame. I assume that with audio units I have to decode using avcodec_decode_audio4, since audio units can only handle uncompressed formats, unlike the Audio Queue API, which can play compressed formats.
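
    For reference, that decode call looks roughly like this (a sketch against the circa-2012 ffmpeg API; audioCodecCtx and pkt are hypothetical stand-ins for the stream's codec context and a dequeued AVPacket, not names from the post):

    // Sketch: decode one queued packet into an uncompressed AVFrame.
    AVFrame *frame = avcodec_alloc_frame();   // later renamed av_frame_alloc()
    int gotFrame = 0;
    int used = avcodec_decode_audio4(audioCodecCtx, frame, &gotFrame, &pkt);
    if (used >= 0 && gotFrame) {
        // frame->data[0] now holds uncompressed samples in
        // audioCodecCtx->sample_fmt, frame->nb_samples per channel --
        // this is the data that eventually has to reach ioData.
    }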

    My audio callback and fillAudioBuffer method for Audio Queues look like the following:

    void audioQueueOutputCallback(void *info, AudioQueueRef unused, AudioQueueBufferRef buffer) {
        // NSLog(@"buffer size on callback %lu ", sizeof(buffer));
        [(__bridge FLPlayer *)info fillAudioBuffer:buffer];
    }


    - (void)fillAudioBuffer:(AudioQueueBufferRef)buffer {
       AudioTimeStamp bufferStartTime;

       AudioQueueGetCurrentTime(audioQueue, NULL, &bufferStartTime, NULL);

       buffer->mAudioDataByteSize = 0;
       buffer->mPacketDescriptionCount = 0;

       if (audioPacketQueue.count <= 0) {
           //NSLog(@"Warning: No audio packets in queue  %d ",audioPacketQueue.count);
           emptyAudioBuffer = buffer;
           return;
       }
       //NSLog(@" audio packets in queue  %d ",audioPacketQueue.count);
       emptyAudioBuffer = nil;

       while (audioPacketQueue.count && buffer->mPacketDescriptionCount < buffer->mPacketDescriptionCapacity) {
           NSMutableData *packetData = [audioPacketQueue objectAtIndex:0];
           AVPacket *packet = [packetData mutableBytes];

           if (buffer->mAudioDataBytesCapacity - buffer->mAudioDataByteSize >= packet->size) {
               if (buffer->mPacketDescriptionCount == 0) {
                   bufferStartTime.mSampleTime = (Float64)packet->dts * avfContext->streams[audio_index]->codec->frame_size;
                   bufferStartTime.mFlags = kAudioTimeStampSampleTimeValid;
               }

               memcpy((uint8_t *)buffer->mAudioData + buffer->mAudioDataByteSize, packet->data, packet->size);
               buffer->mPacketDescriptions[buffer->mPacketDescriptionCount].mStartOffset = buffer->mAudioDataByteSize;
               buffer->mPacketDescriptions[buffer->mPacketDescriptionCount].mDataByteSize = packet->size;
               buffer->mPacketDescriptions[buffer->mPacketDescriptionCount].mVariableFramesInPacket = avfContext->streams[audio_index]->codec->frame_size;

               buffer->mAudioDataByteSize += packet->size;
               buffer->mPacketDescriptionCount++;

               [audioPacketQueueLock lock];
               audioPacketQueueSize -= packet->size;
               [audioPacketQueue removeObjectAtIndex:0];
               [audioPacketQueueLock unlock];

               av_free_packet(packet);
           }
           else {
               break;
           }
       }

       if (buffer->mPacketDescriptionCount > 0) {
           OSStatus err;

       err = AudioQueueEnqueueBufferWithParameters(audioQueue, buffer, 0, NULL, 0, 0, 0, NULL, &bufferStartTime, NULL);

           if (err != noErr ) {
               NSLog(@"Error enqueuing audio buffer: %d", err);
           }

           [decodeDoneLock lock];
           if (decodeDone && audioPacketQueue.count == 0) {
               err = AudioQueueStop(audioQueue, false);

               if (err != noErr) {
                   NSLog(@"Error: Failed to stop audio queue: %d", err);
               }
               else {
                   NSLog(@"Stopped audio queue");
               }
           }
           [decodeDoneLock unlock];
       }
    }

    But the audio unit render callback looks like the following. How do I fill ioData from my buffers?

    #pragma mark Playback callback

    static OSStatus playbackCallback(void *inRefCon,
                                     AudioUnitRenderActionFlags *ioActionFlags,
                                     const AudioTimeStamp *inTimeStamp,
                                     UInt32 inBusNumber,
                                     UInt32 inNumberFrames,
                                     AudioBufferList *ioData) {
        // ioData has to be filled here -- this is the open question.
        return noErr;
    }
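
    One plausible shape for that body (a sketch, not from the post: it assumes the unit's stream format was configured as interleaved sint16 LPCM so a plain byte copy suffices, and ringBufferRead is a hypothetical dequeue that returns the number of bytes it actually copied):

    static OSStatus playbackCallback(void *inRefCon,
                                     AudioUnitRenderActionFlags *ioActionFlags,
                                     const AudioTimeStamp *inTimeStamp,
                                     UInt32 inBusNumber,
                                     UInt32 inNumberFrames,
                                     AudioBufferList *ioData) {
        for (UInt32 i = 0; i < ioData->mNumberBuffers; i++) {
            AudioBuffer *buf = &ioData->mBuffers[i];
            // Pull as many bytes as the unit asked for from the ring buffer
            // the ffmpeg decode thread keeps filled (ringBufferRead is hypothetical).
            UInt32 got = ringBufferRead(buf->mData, buf->mDataByteSize);
            if (got < buf->mDataByteSize) {
                // Underrun: pad with silence instead of playing stale data.
                memset((uint8_t *)buf->mData + got, 0, buf->mDataByteSize - got);
            }
        }
        return noErr;
    }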

    Edited update:

    Wow, this really is a lot more difficult than I thought, though I'm not sure why. Anyway, I think I have most of the pieces.

    So to get from my 8-bit (mu-law) samples to sint16 LPCM with ffmpeg, we can do something like this:

    UInt8 *packetPtr = packet->data;
    int packetSize = packet->size;
    int16_t audioBuf[AVCODEC_MAX_AUDIO_FRAME_SIZE];
    int dataSize, decodedSize;

    while (0 < packetSize) {
        dataSize = AVCODEC_MAX_AUDIO_FRAME_SIZE;
        // avcodec_decode_audio2 returns the number of input bytes consumed
        // and leaves dataSize bytes of sint16 LPCM in audioBuf.
        decodedSize = avcodec_decode_audio2(_audioContext(trackId),
                                            audioBuf, &dataSize,
                                            packetPtr, packetSize);
        if (decodedSize < 0) {
            break;                    // decode error: drop the rest of the packet
        }
        packetPtr  += decodedSize;    // advance past the consumed bytes
        packetSize -= decodedSize;
    }

    Now audioBuf has our 16-bit samples, but we need 32-bit samples, so we can do something like this:

    - (OSStatus)getSamples:(AudioBufferList *)ioData
    {
        for (UInt32 i = 0; i < ioData->mNumberBuffers; ++i)
        {
            SInt32 *outBuffer = (SInt32 *)(ioData->mBuffers[i].mData);

            const int mDataByteSize = ioData->mBuffers[i].mDataByteSize;
            UInt32 numSamples = mDataByteSize / 4;   // 4 bytes per SInt32 sample

            // do I get more than 1 sample from ffmpeg?? audioBuf needs to feed my ring buffer
            for (UInt32 j = 0; j < numSamples; j++)
            {
                outBuffer[j] = (SInt32)(ringbuf[j] * 32767.0);
            }
        }

        return noErr;
    }
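
    The missing producer step hinted at in the comment above would be a write into that same ring buffer after each decode (a sketch; ringBufferWrite is a hypothetical counterpart to the read side, assuming the ring buffer stores the raw sint16 samples as decoded):

    // After each successful avcodec_decode_audio2() call, hand the
    // dataSize bytes of sint16 LPCM in audioBuf to the ring buffer
    // that the render callback drains (ringBufferWrite is hypothetical).
    if (dataSize > 0) {
        ringBufferWrite((uint8_t *)audioBuf, (UInt32)dataSize);
    }
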
  • ffmpeg decode and encode frames of video stream

    6 October 2014, by SetV

    I followed the ffmpeg sample tutorial for reading frames from an MPEG file (C interface). The raw (YUV) frames are decoded from the video stream, converted to RGB frames, and written to disk.

    I want to encode the same frame, or a modified frame (after converting it back to a raw frame), back into the video stream. Is that possible, given that MPEG-1/2 supports editing (is that true?)

    I don't want to create another file, as I need to decode and encode the audio/video streams along with any other streams that are available.

    Whatever I tried didn't help solve this; please point me to where I can look. A rough sketch of the encode step follows below.

    Thanks!
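
    For reference, the re-encode step with the circa-2014 ffmpeg API looks roughly like this (a sketch; encCtx, outFrame, ofmtCtx, and videoStreamIndex are hypothetical stand-ins for an opened encoder context, the modified raw frame, an output AVFormatContext, and the output stream index; in practice the result is muxed into a new container rather than patched into the original file):

    // Sketch: encode one (possibly modified) raw frame and mux it out.
    AVPacket pkt;
    av_init_packet(&pkt);
    pkt.data = NULL;    // let the encoder allocate the payload
    pkt.size = 0;
    int gotPacket = 0;
    int ret = avcodec_encode_video2(encCtx, &pkt, outFrame, &gotPacket);
    if (ret >= 0 && gotPacket) {
        pkt.stream_index = videoStreamIndex;
        // av_interleaved_write_frame takes ownership of the packet data.
        av_interleaved_write_frame(ofmtCtx, &pkt);
    }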