Newest 'ffmpeg' Questions - Stack Overflow
-
MacOSx - port install package warning
13 February 2017, by beetlej
I may have installed the ffmpeg package via port before, but I manually deleted the binary.
After that, whenever I install any other package with port, it always shows a warning message:
Warning: Error parsing file /opt/local/bin/ffmpeg: Error opening or reading file
How can I fix that? I don't want to install ffmpeg anyway, since I have built it myself.
-
Audio conversion of CAF file
13 February 2017, by John
I am recording audio on the iPhone to a CAF file with kAudioFormatiLBC; the recording works fine.
I want to be able to take a sample and also convert it to other formats after I have uploaded it to my Ruby on Rails web service.
I am trying to use sox but get:
sox in.caf out.mp3
sox FAIL formats: can't open input file `in.caf': Supported file format but unsupported encoding.
Similarly, with ffmpeg I get:
Unable to find a suitable output format for 'in.caf'
Any ideas?
Thanks
-
How to add a silent audio frame using av_audio_fifo_write ?
13 February 2017, by Michael
How do you add a silent audio frame to the audio FIFO using the ffmpeg function:
int av_audio_fifo_write (AVAudioFifo *af, void **data, int nb_samples)
How do you initialize the 'data' parameter?
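For reference, one way the 'data' parameter could be initialized is to allocate per-channel sample buffers, fill them with the silence value for the sample format, and pass the resulting pointer array to av_audio_fifo_write(). The sketch below is untested and assumes the format and channel count match the values the FIFO was allocated with; write_silence is an illustrative name, not an FFmpeg API.

#include <libavutil/audio_fifo.h>
#include <libavutil/samplefmt.h>
#include <libavutil/mem.h>

/* Write nb_samples of silence into an existing AVAudioFifo. */
static int write_silence(AVAudioFifo *fifo, enum AVSampleFormat fmt,
                         int channels, int nb_samples)
{
    uint8_t **data = NULL;
    int linesize;

    /* Allocate the array of channel pointers plus the sample buffers
       themselves (one buffer per channel for planar formats, one
       interleaved buffer for packed formats). */
    int ret = av_samples_alloc_array_and_samples(&data, &linesize, channels,
                                                 nb_samples, fmt, 0);
    if (ret < 0)
        return ret;

    /* Fill the buffers with the correct silence value for this format. */
    av_samples_set_silence(data, 0, nb_samples, channels, fmt);

    /* av_audio_fifo_write() takes the same pointer array, cast to void **,
       and the number of samples per channel. */
    ret = av_audio_fifo_write(fifo, (void **)data, nb_samples);

    av_freep(&data[0]);
    av_freep(&data);
    return ret;
}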
-
Piping PCM data from FFMPEG to another process with Python subprocess
13 February 2017, by Pete Bleackley
I am trying to transcribe a podcast. To do so, I am decoding the MP3 stream with FFmpeg and piping the resulting PCM output to the speech-recognition component. My code looks like this:
mp3 = subprocess.Popen(['ffmpeg', '-i', audio_url, '-f', 's16le', '-ac', '1', '-ar', '16000', 'pipe:0'], stdout=subprocess.PIPE)
sphinx = subprocess.Popen(['java', '-jar', 'transcriber.jar'], stdin=mp3.stdout, stdout=subprocess.PIPE)
where audio_url is the URL of the MP3 file.
When I try to run this, it hangs. It appears that feeding the decoded PCM data through the pipe has deadlocked. What can I do to fix this? The size of the output data is likely to be too big for subprocess.Popen.communicate to be an option, and explicitly calling mp3.stdout.close() has had no effect.
-
Playing MP3 with FFMPEG Library
13 February 2017, by Keith Chambers
I am attempting to make a C program that will play an MP3 file (using FFmpeg & SDL).
Here's the general gist of what I'm trying to do, based on my understanding thus far.
- Open the file and load 'container'-level data into an AVFormatContext.
- Find the stream inside the container that you want to work with. In my case I believe there is only a single audio stream, so format_context->streams[0] references it.
- Fetch a decoder (stored in an AVCodec) by the codec ID found in the stream, and populate an AVCodecContext.
- Set up your desired SDL_AudioSpec struct. Notably, here you pass it a callback function for filling up an audio buffer provided by SDL itself, and your AVCodecContext, which is passed to the callback.
- Open an SDL audio device using your wanted spec and call SDL_PauseAudioDevice(), which creates a new thread to continually call the callback function (i.e. start the audio playback).
- The main thread continues and continually reads encoded packets/frames and puts them on a global queue until the stream data has been completely processed.
- The callback function dequeues the encoded packets, decodes them and writes them to the buffer passed by SDL audio as a parameter.
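The last two steps describe a producer/consumer hand-off between the reader loop and the audio callback. For illustration only, a bare-bones version of such a queue, guarded by an SDL mutex and condition variable, could look roughly like the untested sketch below (the names PacketQueue, packet_queue_put and packet_queue_get are illustrative, and the mutex/cond are assumed to have been created with SDL_CreateMutex() and SDL_CreateCond()):

#include <libavcodec/avcodec.h>
#include <libavutil/mem.h>
#include <SDL2/SDL.h>

typedef struct PacketNode {
    AVPacket          *pkt;
    struct PacketNode *next;
} PacketNode;

typedef struct PacketQueue {
    PacketNode *first, *last;
    SDL_mutex  *mutex;   /* created with SDL_CreateMutex() */
    SDL_cond   *cond;    /* created with SDL_CreateCond()  */
} PacketQueue;

/* Producer side: called from the main read loop. */
static void packet_queue_put(PacketQueue *q, const AVPacket *pkt)
{
    PacketNode *node = av_malloc(sizeof(*node));
    node->pkt  = av_packet_clone(pkt);   /* keep our own reference */
    node->next = NULL;

    SDL_LockMutex(q->mutex);
    if (q->last)
        q->last->next = node;
    else
        q->first = node;
    q->last = node;
    SDL_CondSignal(q->cond);             /* wake a waiting consumer */
    SDL_UnlockMutex(q->mutex);
}

/* Consumer side: called from the audio callback; blocks until a packet
   is available. The caller owns the returned packet and should release
   it with av_packet_free() when done. */
static AVPacket *packet_queue_get(PacketQueue *q)
{
    SDL_LockMutex(q->mutex);
    while (!q->first)
        SDL_CondWait(q->cond, q->mutex);
    PacketNode *node = q->first;
    q->first = node->next;
    if (!q->first)
        q->last = NULL;
    SDL_UnlockMutex(q->mutex);

    AVPacket *pkt = node->pkt;
    av_freep(&node);
    return pkt;
}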
Questions:
What exactly is the difference between a frame and a packet in the context of audio?
My understanding is that frames are encapsulated packets that are easier to feed as raw data since they are more uniform in size. A frame may require multiple packets to fill it, but multiple frames cannot be made from a single packet (I could have mixed that up, as I don't remember where I read it). Are packets always decoded into frames, or are the terms somewhat interchangeable?
Where is the encoded stream of packets stored? In my code I'm using av_read_frame(AVFormatContext *, AVPacket *) to somehow fetch a packet from somewhere so I can put the packet on my queue.
Lots of things about the callback function:
`void audio_callback(void *userdata, Uint8 *stream, int len)`
- Just to confirm, stream and len are decided by SDL audio? The programmer has no way of passing values for these arguments?
- len is the size, in bytes, of the decoded audio data that SDL audio is requesting?
- How do I know how many frames are enough to write len bytes to the buffer? To my knowledge a frame is an encapsulation of a FIXED amount of decoded audio data, so how do I know how much data each frame contains?
- avcodec_send_packet(AVCodecContext *, AVPacket *) and avcodec_receive_frame(AVCodecContext *, AVFrame *) are the new ways to decode data; how exactly do they work?
So it seems like you pass a valid packet to avcodec_send_packet(), it decodes it, and then to get the corresponding frame you call avcodec_receive_frame().
An issue I can see with this is that, size-wise, a packet might not correspond to a frame; a number of packets might be needed to make up a single frame.
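For what it's worth, the send/receive API is usually driven as a small loop: a packet is submitted once, and then frames are drained until the decoder reports it needs more input, so one packet can yield zero, one, or several frames. A rough, untested sketch (decode_packet is an illustrative name, not an FFmpeg function):

#include <libavcodec/avcodec.h>

static int decode_packet(AVCodecContext *ctx, const AVPacket *pkt, AVFrame *frame)
{
    /* Feed one compressed packet to the decoder. A full implementation
       would also handle AVERROR(EAGAIN) here by draining frames first. */
    int ret = avcodec_send_packet(ctx, pkt);
    if (ret < 0)
        return ret;

    /* Pull out every frame this packet produced (possibly none). */
    while (ret >= 0) {
        ret = avcodec_receive_frame(ctx, frame);
        if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF)
            return 0;            /* decoder wants more packets, or is done */
        if (ret < 0)
            return ret;          /* a real decoding error */

        /* frame->nb_samples decoded samples are now in frame->data[];
           hand them to the playback side here. */
    }
    return 0;
}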
OK, here's the stripped-down version of my code; I can link to the full thing if required.
I omitted all error checking, cleanup, and the implementation of the PacketQueue structure, since it's pretty intuitive. I haven't tested this specifically, but my original program is segfaulting when it gets to avcodec_send_packet().
// Includes etc omitted

// My own type, just assume it works and was allocated in main
PacketQueue *audioq;

void audio_callback(void *userdata, Uint8 *stream, int requestedLen)
{
    AVCodecContext *codec_context = (AVCodecContext *)userdata;
    AVPacket *nextPacket;
    AVFrame outputFrame;

    // Take packet off of audioq Queue and store in nextPacket
    deQueue(audioq, nextPacket);

    avcodec_send_packet(codec_context, nextPacket);
    avcodec_receive_frame(codec_context, &outputFrame);

    memcpy(stream, (uint8_t *)outputFrame.data, outputFrame.sample_rate);
}

int main(int argc, char *argv[])
{
    AVCodec *audio_codec;
    AVCodecContext *codec_context;
    AVFormatContext *format_context = NULL;
    SDL_AudioSpec want;
    SDL_AudioSpec have;
    SDL_AudioDeviceID audio_device_id;
    const char *audio_device_name = NULL;

    SDL_Init(SDL_INIT_AUDIO | SDL_INIT_TIMER);
    av_register_all();

    format_context = avformat_alloc_context();
    avformat_open_input(&format_context, *(argv + 1), NULL, NULL);
    avformat_find_stream_info(format_context, NULL);

    audio_codec = avcodec_find_decoder(format_context->streams[0]->codecpar->codec_id);
    codec_context = avcodec_alloc_context3(audio_codec);
    avcodec_open2(codec_context, audio_codec, NULL);

    want.freq = codec_context->sample_rate;
    want.format = AUDIO_S16SYS;
    want.channels = 2;
    want.samples = SDL_AUDIO_BUFFER_SIZE;
    want.callback = audio_callback;
    want.userdata = codec_context;
    want.silence = 0;

    audio_device_id = SDL_OpenAudioDevice(NULL, 0, &want, &have, SDL_AUDIO_ALLOW_FORMAT_CHANGE);
    audio_device_name = SDL_GetAudioDeviceName(0, 0);

    SDL_PauseAudioDevice(audio_device_id, 0);

    while (quit == 0) {
        AVPacket *packet = malloc(sizeof(AVPacket));
        av_read_frame(format_context, packet);

        // Put packet onto audioq Queue
        enQueue(audioq, packet);

        if (packet)
            free(packet);
    }

    // Cleanup
    return 0;
}
I'm sure my callback function is highly simplistic and somewhat naive, but after I replaced the old avcodec_decode_audio4() much of the code I had seemed irrelevant (don't worry, the program wasn't working at that point either).
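As a point of comparison only, a callback that respects SDL's requested length typically keeps a leftover buffer between calls, refills it when it runs dry, and copies exactly len bytes per invocation. The sketch below is untested; decode_more() is a hypothetical helper (not an FFmpeg or SDL function) that would dequeue packets, run a send/receive loop like the one shown earlier, and return the number of bytes of interleaved PCM it produced.

#include <libavcodec/avcodec.h>
#include <SDL2/SDL.h>
#include <string.h>

/* Hypothetical helper: decode queued packets into buf, return bytes written. */
int decode_more(AVCodecContext *ctx, uint8_t *buf, int buf_size);

static uint8_t  audio_buf[192000];    /* decoded but not yet played samples */
static unsigned audio_buf_size  = 0;  /* bytes currently held in audio_buf  */
static unsigned audio_buf_index = 0;  /* bytes already handed to SDL        */

void audio_callback(void *userdata, Uint8 *stream, int len)
{
    AVCodecContext *ctx = userdata;

    while (len > 0) {
        if (audio_buf_index >= audio_buf_size) {
            /* Leftover buffer is empty: decode more audio into it. */
            int decoded = decode_more(ctx, audio_buf, sizeof(audio_buf));
            if (decoded <= 0) {
                memset(stream, 0, len);   /* nothing decoded: output silence */
                return;
            }
            audio_buf_size  = (unsigned)decoded;
            audio_buf_index = 0;
        }
        /* Copy at most len bytes of what is currently buffered. */
        int chunk = (int)(audio_buf_size - audio_buf_index);
        if (chunk > len)
            chunk = len;
        memcpy(stream, audio_buf + audio_buf_index, chunk);
        stream          += chunk;
        len             -= chunk;
        audio_buf_index += chunk;
    }
}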