Newest 'ffmpeg' Questions - Stack Overflow

http://stackoverflow.com/questions/tagged/ffmpeg

Articles published on the site

  • How to encode VP6 codec in ffmpeg? [migrated]

    20 November 2011, by userffmpeg

    Can anyone tell me if there is a way to encode to the VP6 codec with ffmpeg? I used libvpx, only to find out that it encodes VP8...
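
    As far as I know, stock ffmpeg only ships a VP6 decoder; the On2 VP6 encoder was never open-sourced, so libvpx producing VP8 is expected. A minimal sketch, assuming the 2011-era libavcodec API (CODEC_ID_VP6, avcodec_register_all), that asks the linked build which VP6 components it actually provides:

    /* Check whether libavcodec provides a VP6 encoder and/or decoder. */
    #include <stdio.h>
    #include <libavcodec/avcodec.h>

    int main(void)
    {
        avcodec_register_all();

        AVCodec *enc = avcodec_find_encoder(CODEC_ID_VP6);
        AVCodec *dec = avcodec_find_decoder(CODEC_ID_VP6);

        printf("VP6 encoder: %s\n", enc ? enc->name : "not available");
        printf("VP6 decoder: %s\n", dec ? dec->name : "not available");
        return 0;
    }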

  • Can't record audio with ffmpeg on Linux

    20 November 2011, by FGraviton

    I'm trying to do a screencast with ffmpeg on OpenSUSE, but the audio isn't working:

    ffmpeg -f oss -i /dev/audio -f x11grab -s $SCREEN -r 24 -b 100k -bf 2 -g 300 -i :0.0 -ar 22050 -ab 128k -acodec libmp3lame -vcodec libxvid -aspect 1.6 -sameq out.avi

    This command tells me that /dev/audio isn't there!

    Any pointers?

    Thanks, community.
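
    For the capture problem above: many current Linux systems no longer expose an OSS /dev/audio device at all, which is why ffmpeg reports it as missing; the usual workaround (assuming ffmpeg was built with ALSA support) is to capture through ALSA instead, for example replacing "-f oss -i /dev/audio" with "-f alsa -i default". A minimal C sketch of the same idea through libavdevice, using the 2011-era registration calls:

    /* Open the default ALSA capture device via libavdevice instead of OSS. */
    #include <stdio.h>
    #include <libavdevice/avdevice.h>
    #include <libavformat/avformat.h>

    int main(void)
    {
        av_register_all();
        avdevice_register_all();

        AVInputFormat *alsa = av_find_input_format("alsa");
        if (!alsa) {
            fprintf(stderr, "libavdevice was built without ALSA support\n");
            return 1;
        }

        AVFormatContext *ctx = NULL;
        if (avformat_open_input(&ctx, "default", alsa, NULL) < 0) {
            fprintf(stderr, "could not open ALSA capture device 'default'\n");
            return 1;
        }

        printf("ALSA capture opened with %u stream(s)\n", ctx->nb_streams);
        avformat_close_input(&ctx);   /* av_close_input_file() on older builds */
        return 0;
    }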

  • Setting up audio queue for ffmpeg rtsp stream playing

    20 November 2011, by illew

    I'm working on an RTSP streaming (AAC format) client for iOS using ffmpeg. Right now I can only say my app works, but the streamed sound is very noisy and even a little distorted, far worse than when the same stream is played by VLC or MPlayer.

    The stream is read by av_read_frame() and decoded by avcodec_decode_audio3(); then I just send the decoded raw audio to the Audio Queue.
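
    For reference, a minimal sketch of that read/decode path with the calls named above (2011-era API); fmt_ctx, codec_ctx and audio_stream_index are assumed to be set up elsewhere, and the decoded s16 data would go into the ring buffer that feeds the Audio Queue:

    #include <stdint.h>
    #include <libavformat/avformat.h>
    #include <libavcodec/avcodec.h>

    static void decodeAudioLoop(AVFormatContext *fmt_ctx,
                                AVCodecContext *codec_ctx,
                                int audio_stream_index)
    {
        /* avcodec_decode_audio3() writes interleaved s16 samples into this buffer */
        static int16_t samples[AVCODEC_MAX_AUDIO_FRAME_SIZE / 2];
        AVPacket pkt;

        while (av_read_frame(fmt_ctx, &pkt) >= 0)
        {
            if (pkt.stream_index == audio_stream_index)
            {
                int out_size = sizeof(samples);   /* in: capacity, out: decoded bytes */
                int used = avcodec_decode_audio3(codec_ctx, samples, &out_size, &pkt);

                if (used > 0 && out_size > 0)
                {
                    /* hand (samples, out_size) plus pkt.dts to the audio ring buffer here */
                }
            }
            av_free_packet(&pkt);
        }
    }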

    When decoding a local AAC file with my app, the sound is not noisy at all. I know the initial encoding can dramatically affect the result, but at the very least I should try to make it sound like the other streaming clients...

    Many parts of my implementation actually came from trial and error. I believe I'm doing something wrong in setting up the Audio Queue and in the callback function that fills the audio buffers.

    Any hints, suggestions or help are greatly appreciated.

    // -- info of the test material dumped by av_dump_format() --

    Metadata:
        title           : /demo/test.3gp
      Duration: 00:00:30.11, start: 0.000000, bitrate: N/A
        Stream #0:0: Audio: aac, 32000 Hz, stereo, s16
    aac  Advanced Audio Coding 
    

    // -- the Audio Queue setup procedure --

    - (void) startPlayback
    {
        OSStatus err = 0;
        if(playState.playing) return;
    
        playState.started = false;
    
        if(!playState.queue) 
        {
    
            UInt32 bufferSize;
    
    
            playState.format.mSampleRate = _av->audio.sample_rate;
            playState.format.mFormatID = kAudioFormatLinearPCM;
            playState.format.mFormatFlags = kAudioFormatFlagsCanonical;
            playState.format.mChannelsPerFrame = _av->audio.channels_per_frame;
            playState.format.mBytesPerPacket = sizeof(AudioSampleType) *_av->audio.channels_per_frame;
            playState.format.mBytesPerFrame = sizeof(AudioSampleType) *_av->audio.channels_per_frame;
            playState.format.mBitsPerChannel = 8 * sizeof(AudioSampleType);
    
            playState.format.mFramesPerPacket = 1;        
            playState.format.mReserved = 0;
    
    
            pauseStart = 0;
            DeriveBufferSize(playState.format,playState.format.mBytesPerPacket,BUFFER_DURATION,&bufferSize,&numPacketsToRead);
            err= AudioQueueNewOutput(&playState.format, aqCallback, &playState, NULL, kCFRunLoopCommonModes, 0, &playState.queue);
    
            if(err != 0)
            {
                printf("AQHandler.m startPlayback: Error creating new AudioQueue: %d \n", (int)err);
            }
    
            for(int i = 0 ; i < NUM_BUFFERS ; i ++)
            {
                err = AudioQueueAllocateBufferWithPacketDescriptions(playState.queue, bufferSize, numPacketsToRead , &playState.buffers[i]);
    
                if(err != 0)
                    printf("AQHandler.m startPlayback: Error allocating buffer %d", i);
                fillAudioBuffer(&playState,playState.queue, playState.buffers[i]);
            }
    
        }
    
        startTime = mu_currentTimeInMicros();
    
        err=AudioQueueStart(playState.queue, NULL);
    
        if(err)
        {
    
            char sErr[20];
            printf("AQHandler.m startPlayback: Could not start queue %ld %s.", (long)err, FormatError(sErr, err));
    
            playState.playing = NO;
        } 
        else
        {
            AudioSessionSetActive(true);
            playState.playing = YES;
        }           
    }
    

    // -- callback for filling audio buffer --

    static int ct = 0;
    static void fillAudioBuffer(void *info,AudioQueueRef queue, AudioQueueBufferRef buffer)
    {
    
        int lengthCopied = INT32_MAX;
        int dts= 0;
        int isDone = 0;
    
        buffer->mAudioDataByteSize = 0;
        buffer->mPacketDescriptionCount = 0;
    
        OSStatus err = 0;
        AudioTimeStamp bufferStartTime;
    
        AudioQueueGetCurrentTime(queue, NULL, &bufferStartTime, NULL);
    
    
        PlayState *ps = (PlayState *)info;
    
        if (!ps->started)
            ps->started = true;
    
        while(buffer->mPacketDescriptionCount < numPacketsToRead && lengthCopied > 0)
        {
            lengthCopied = getNextAudio(_av,
                            buffer->mAudioDataBytesCapacity-buffer->mAudioDataByteSize,
                        (uint8_t*)buffer->mAudioData+buffer->mAudioDataByteSize,
                            &dts,&isDone);
    
            ct+= lengthCopied;
    
            if(lengthCopied < 0 || isDone) 
            {
                printf("nothing to read....\n\n");
                PlayState *ps = (PlayState *)info;
                ps->finished = true;
                ps->started = false;
                break;
            }
    
            if(aqStartDts < 0) aqStartDts = dts;
    
            if(buffer->mPacketDescriptionCount ==0)
            {
                bufferStartTime.mFlags = kAudioTimeStampSampleTimeValid;
                bufferStartTime.mSampleTime = (Float64)(dts-aqStartDts);//* _av->audio.frame_size;
    
                if (bufferStartTime.mSampleTime <0 ) 
                    bufferStartTime.mSampleTime = 0;
    
                printf("AQHandler.m fillAudioBuffer: DTS for %x: %lf time base: %lf StartDTS: %d\n", 
                        (unsigned int)buffer, 
                        bufferStartTime.mSampleTime, 
                        _av->audio.time_base, 
                        aqStartDts);
    
            }
    
            buffer->mPacketDescriptions[buffer->mPacketDescriptionCount].mStartOffset = buffer->mAudioDataByteSize;
            buffer->mPacketDescriptions[buffer->mPacketDescriptionCount].mDataByteSize = lengthCopied;
    
    
    
            buffer->mPacketDescriptions[buffer->mPacketDescriptionCount].mVariableFramesInPacket = 0;
    
            buffer->mPacketDescriptionCount++;
    
            buffer->mAudioDataByteSize += lengthCopied;
        }
    
        int audioBufferCount, audioBufferTotal,  videoBufferCount, videoBufferTotal;
        bufferCheck(_av,&videoBufferCount, &videoBufferTotal, &audioBufferCount, &audioBufferTotal);
    
        if(buffer->mAudioDataByteSize)
        {
    
            err = AudioQueueEnqueueBufferWithParameters(queue, buffer, 0, NULL, 0, 0, 0, NULL, &bufferStartTime, NULL);
    
            if(err)
            {
                char sErr[20];
                printf("AQHandler.m fillAudioBuffer: Could not enqueue buffer %p: %d %s.", (void *)buffer, (int)err, FormatError(sErr, err));
    
            }
    
        }
    
    }
    
    
    
    
    /* Waits for decoded audio in the shared ring buffer, copies one packet into buf,
       sets *pts and *isDone, and returns the number of bytes copied (-1 at end of stream). */
    int getNextAudio(video_data_t* vInst, int maxlength, uint8_t* buf, int* pts, int* isDone) 
    {
    
        struct video_context_t  *ctx = vInst->context;
        int    datalength            = 0;
    
        while(ctx->audio_ring.lock || (ctx->audio_ring.count <= 0 && ((ctx->play_state & STATE_DIE) != STATE_DIE)))
        {
    
            if (ctx->play_state & STATE_EOF) return -1;        
            usleep(100);
        }
    
        *pts = 0;
        ctx->audio_ring.lock = kLocked;
    
        if(ctx->audio_ring.count>0 && maxlength > ctx->audio_buffer[ctx->audio_ring.read].size)
        {    
            memcpy(buf, ctx->audio_buffer[ctx->audio_ring.read].data,ctx->audio_buffer[ctx->audio_ring.read].size);
    
            *pts = ctx->audio_buffer[ctx->audio_ring.read].pts;
    
            datalength = ctx->audio_buffer[ctx->audio_ring.read].size;
    
            ctx->audio_ring.read++;        
            ctx->audio_ring.read %= ABUF_SIZE;        
            ctx->audio_ring.count--;
    
        }
        ctx->audio_ring.lock = kUnlocked;
    
        if((ctx->play_state & STATE_EOF) == STATE_EOF && ctx->audio_ring.count == 0) *isDone = 1;
    
        return datalength;
    }
    
  • How to play raw audio on iPhone? (using ffmpeg)

    19 November 2011, by KayKay

    I am a student trying to make an MMS streaming audio app.
    I get the MMS stream using libmms and decode the WMA audio using ffmpeg.
    However, I don't know what to do next.

    I recently saw a similar question on Stack Overflow (the author is c4r1o5),
    but he used CFWriteStreamWrite after avcodec_decode_audio2.
    Is that right? I think it is not necessary, because the networking part is already done once mms_connect and the ffmpeg decode have finished.

    Is it necessary to use it?
    I tried to put the raw audio into the audio buffer, but when I play it, all I get is white noise.

    Please help me.
    Any hint or comment would be very much appreciated.
    Thanks in advance.
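
    White noise when playing decoded PCM usually means the AudioStreamBasicDescription handed to the Audio Queue does not match what the decoder actually produces (sample rate, channel count, 16-bit interleaved samples). A minimal sketch, assuming codec_ctx is the AVCodecContext used for decoding and that the decoder outputs interleaved s16 (as the old avcodec_decode_audio* calls do):

    #include <AudioToolbox/AudioToolbox.h>
    #include <libavcodec/avcodec.h>

    /* Describe the decoder's raw PCM output to an Audio Queue. */
    static AudioStreamBasicDescription pcmFormatFor(const AVCodecContext *codec_ctx)
    {
        AudioStreamBasicDescription fmt = {0};

        fmt.mSampleRate       = codec_ctx->sample_rate;
        fmt.mFormatID         = kAudioFormatLinearPCM;
        fmt.mFormatFlags      = kLinearPCMFormatFlagIsSignedInteger
                              | kLinearPCMFormatFlagIsPacked;
        fmt.mChannelsPerFrame = codec_ctx->channels;
        fmt.mBitsPerChannel   = 16;                       /* s16 samples */
        fmt.mBytesPerFrame    = 2 * codec_ctx->channels;
        fmt.mFramesPerPacket  = 1;
        fmt.mBytesPerPacket   = fmt.mBytesPerFrame;

        return fmt;
    }

    If this description and the byte counts of the enqueued buffers are consistent with the decoder output, the queue should play the data cleanly; a mismatched channel count or bit depth typically produces exactly the static described above.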


  • Playing raw audio using AudioQueue on iPhone

    18 November 2011, by zomerc

    I have been trying to play raw audio on iPhone. I am using libmms to open an MMS stream and decoding the audio to raw PCM, but I have been having problems playing the raw audio with AudioQueue.

    I was wondering if anyone has solved this problem successfully?