
Media (1)
-
Ogg detection bug
22 March 2013, by
Updated: April 2013
Language: French
Type: Video
Other articles (38)
-
Contribute to documentation
13 April 2011. Documentation is vital to the development of improved technical capabilities.
MediaSPIP welcomes documentation by users as well as developers, including: critiques of existing features and functions; articles contributed by developers, administrators, content producers and editors; screenshots to illustrate the above; and translations of existing documentation into other languages.
To contribute, register to the project users’ mailing (...)
-
Selection of projects using MediaSPIP
2 May 2011. The examples below are representative of specific uses of MediaSPIP for specific projects.
MediaSPIP farm @ Infini
The non-profit organization Infini develops hospitality activities, an internet access point, training, and innovative projects in the field of information and communication technologies, and hosts websites. It plays a unique and prominent role in the Brest (France) area and, at the national level, among the half-dozen such associations. Its members (...)
-
Use, discuss, criticize
13 April 2011. Talk to people directly involved in MediaSPIP’s development, or to people around you who could use MediaSPIP to share, enhance or develop their creative projects.
The bigger the community, the more MediaSPIP’s potential will be explored and the faster the software will evolve.
A discussion list is available for all exchanges between users.
On other sites (8271)
-
Why is ffmpeg faster than this minimal example?
23 July 2022, by Dave Ceddia. I want to read the audio out of a video file as fast as possible, using the libav libraries. It's all working fine, but it seems like it could be faster.


To get a performance baseline, I ran this ffmpeg command and timed it:


time ffmpeg -threads 1 -i file -map 0:a:0 -f null -



On a test file (a 2.5 GB, 2-hour .MOV with pcm_s16be audio) this comes out to about 1.35 seconds on my M1 MacBook Pro.


On the other hand, this minimal C code (based on FFmpeg's "Demuxing and decoding" example) is consistently around 0.3 seconds slower.


#include <stdlib.h>

#include <libavcodec/avcodec.h>
#include <libavformat/avformat.h>

static int decode_packet(AVCodecContext *dec, const AVPacket *pkt, AVFrame *frame)
{
    int ret = 0;

    // submit the packet to the decoder
    ret = avcodec_send_packet(dec, pkt);

    // get all the available frames from the decoder
    while (ret >= 0) {
        ret = avcodec_receive_frame(dec, frame);
        av_frame_unref(frame);
    }

    return 0;
}

int main(int argc, char **argv)
{
    int ret = 0;
    AVFormatContext *fmt_ctx = NULL;
    AVCodecContext *dec_ctx = NULL;
    AVFrame *frame = NULL;
    AVPacket *pkt = NULL;

    if (argc != 3) {
        exit(1);
    }

    int stream_idx = atoi(argv[2]);

    /* open input file, and allocate format context */
    avformat_open_input(&fmt_ctx, argv[1], NULL, NULL);

    /* get the stream */
    AVStream *st = fmt_ctx->streams[stream_idx];

    /* find a decoder for the stream */
    AVCodec *dec = avcodec_find_decoder(st->codecpar->codec_id);

    /* allocate a codec context for the decoder */
    dec_ctx = avcodec_alloc_context3(dec);

    /* copy codec parameters from input stream to output codec context */
    avcodec_parameters_to_context(dec_ctx, st->codecpar);

    /* init the decoder */
    avcodec_open2(dec_ctx, dec, NULL);

    /* allocate frame and packet structs */
    frame = av_frame_alloc();
    pkt = av_packet_alloc();

    /* read frames from the specified stream */
    while (av_read_frame(fmt_ctx, pkt) >= 0) {
        if (pkt->stream_index == stream_idx)
            ret = decode_packet(dec_ctx, pkt, frame);

        av_packet_unref(pkt);
        if (ret < 0)
            break;
    }

    /* flush the decoder */
    decode_packet(dec_ctx, NULL, frame);

    return ret < 0;
}



I tried measuring parts of this program to see if it was spending a lot of time in the setup, but it's not; at least 1.5 seconds of the runtime is the loop where it's reading frames.
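
For reference, here is one minimal way the read/decode loop could be timed on its own (a sketch using the POSIX monotonic clock, not the exact instrumentation used for the numbers above):

#include <stdio.h>
#include <time.h>

/* Sketch: time just the packet-reading/decoding loop with a monotonic clock. */
static double now_seconds(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec + ts.tv_nsec / 1e9;
}

/* Inside main(), wrapping the existing loop:

    double t0 = now_seconds();
    while (av_read_frame(fmt_ctx, pkt) >= 0) {
        ...
    }
    fprintf(stderr, "read/decode loop: %.3f s\n", now_seconds() - t0);
*/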


So I took some flamegraph recordings (using cargo-flamegraph) and ran each a few times to make sure the timing was consistent. There's probably some overhead since both were consistently higher than running normally, but they still have the 0.3 second delta.


# 1.812 total
time sudo flamegraph ./minimal file 1

# 1.542 total
time sudo flamegraph ffmpeg -threads 1 -i file -map 0:a:0 -f null - 2>&1



Here are the flamegraphs stacked up, scaled so that the faster one is only 85% as wide as the slower one (flamegraph images not reproduced here).




The interesting thing that stands out to me is how long is spent on read in the minimal example vs. ffmpeg.

The time spent on lseek is also a lot longer in the minimal program: it's plainly visible in that flamegraph, but in the ffmpeg flamegraph, lseek is a single pixel wide.

What's causing this discrepancy? Is ffmpeg actually doing less work than I think it is here? Is the minimal code doing something naive? Are there buffering or other I/O optimizations that ffmpeg has enabled?


How can I shave 0.3 seconds off the minimal example's runtime?
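
One difference worth investigating (an assumption, not something verified against ffmpeg's source): with -map 0:a:0, ffmpeg only needs the first audio stream and can mark every other stream as discardable, which would let the MOV demuxer skip over the video packets instead of reading and seeking through them, whereas the minimal program reads every interleaved packet and only filters by stream_index afterwards. A sketch of the same idea, placed right after avformat_open_input():

/* Sketch (assumption): tell the demuxer to skip every stream except the one
   being decoded, so it does not have to read the other streams' packet data. */
for (unsigned int i = 0; i < fmt_ctx->nb_streams; i++) {
    if ((int)i != stream_idx)
        fmt_ctx->streams[i]->discard = AVDISCARD_ALL;
}

Whether this accounts for the full 0.3 seconds would need to be checked by re-running the flamegraph comparison.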


-
How to contribute to open source, for companies
I have seen many nigh-incomprehensible attempts by companies to contribute to open source projects, including x264. Developers are often simply boggled, wondering why the companies seem incapable of proper communication. The companies assume the developers are being unreceptive, while the developers assume the companies are being incompetent, idiotic, or malicious. Most of this seems to boil down to a basic lack of understanding of how open source works, resulting in a wide variety of misunderstandings. Accordingly, this post will cover the dos and don’ts of corporate contribution to open source.
Do: contact the project using their preferred medium of communication.
Most open source projects use public methods of communication, such as mailing lists and IRC. It’s not the end of the world if you mistakenly make contact with the wrong people or via the wrong medium, but be prepared to switch to the correct one once informed! You may not be experienced using whatever form of communication the project uses, but if you refuse to communicate through proper channels, they will likely not be as inclined to assist you. Larger open source projects are often much like companies in that they have different parts to their organization with different roles. Don’t assume that everyone is a major developer!
If you don’t know what to do, a good bet is often to just ask someone.
Don’t: contact only one person.
Open source projects are a communal effort. Major contributions are looked over by multiple developers and are often discussed by the community as a whole. Yet many companies tend to contact only a single person in lieu of dealing with the project proper. This has many flaws: to begin with, it forces a single developer (who isn’t paid by you) to act as your liaison, adding yet another layer between what you want and the people you want to talk to. Contribution to open source projects should not be a game of telephone.
Of course, there are exceptions to this : sometimes a single developer is in charge of the entirety of some particular aspect of a project that you intend to contribute to, in which case this might not be so bad.
Do: make clear exactly what it is you are contributing.
Are you contributing code? Development resources? Money? API documentation? Make it as clear as possible, from the start! How developers react, which developers get involved, and their expectations will depend heavily on what they think you are providing. Make sure their expectations match reality. Great confusion can result when they do not.
This also applies in reverse: if there’s something you need from the project, such as support or assistance with development of your patch, make that explicitly clear.
Don’t: code dump.
Code does not have intrinsic value: it is only useful as part of a working, living project. Most projects react very negatively to large “dumps” of code without associated human resources. That is, they expect you to work with them to finalize the code until it is ready to be committed. Of course, it’s better to work with the project from the start: this avoids the situation of writing 50,000 lines of code independently and then finding that half of it needs to be rewritten. Or, worse, writing an enormous amount of code only to find it completely unnecessary.
Of course, the reverse option — keeping such code to yourself — is often even more costly, as it forces you to maintain the code instead of the official developers.
Do: ignore trolls.
As mentioned above, many projects use public communication methods, which, by nature of being public, allow anyone to communicate. Not everyone on a project’s IRC or mailing list is necessarily qualified to officially represent the project. It is not uncommon for a prospective corporate contributor to be put off by the uninviting words of someone who isn’t even involved in the project, simply because they assumed that person was. Make sure you’re dealing with the right people before drawing conclusions.
Don’t: disappear.
If you are going to try to be involved in a project, you need to stay in contact. We’ve had all too many companies who simply disappear after the initial introduction. Some tell us that we’ll need an NDA, then never provide it or send status updates. You may know why you’re not in contact (political issues at the company, product launch crunches, a nice vacation to the Bahamas), but we don’t! If you disappear, we will assume that you gave up.
Above all, don’t assume that being at a large successful company makes you immune to these problems. If anything, these problems seem to be the most common at the largest companies. I didn’t name any names in this post, but practically every single one of these rules has been violated at some point by companies looking to contribute to x264. In the larger scale of open source, these problems happen constantly. Don’t fall into the same traps that many other companies have.
If you’re an open source developer reading this post, remember it next time you see a company acting seemingly nonsensically in an attempt to contribute: it’s quite possible they just don’t know what to do. And just because they’re doing it wrong doesn’t mean that it isn’t your responsibility to try to help them do it right.
-
How to use audio frames after decoding an MP3 file using PyAV, ffmpeg, Python
2 January 2021, by Long Tran Dai. I am using Python with PyAV and ffmpeg to decode an MP3 in memory. I know there are other ways to do it, like piping an ffmpeg command, but I would like to explore the PyAV and ffmpeg APIs. So I have the following code. It works, but the sound is very noisy, although audible:


import numpy as np
import av       # to convert mp3 to wav using ffmpeg
import pyaudio  # to play music

mp3_path = 'D:/MyProg/python/SauTimThiepHong.mp3'

def decodeStream(mp3_path):
    # Run NOT OK

    container = av.open(mp3_path)
    stream = next(s for s in container.streams if s.type == 'audio')
    frame_count = 0
    data = bytearray()
    for packet in container.demux(stream):
        # type(packet): av.packet.Packet
        # We need to skip the "flushing" packets that `demux` generates.
        #if frame_count == 5000: break
        if packet.dts is None:
            continue
        for frame in packet.decode():
            # type(frame): av.audio.frame.AudioFrame
            # frame.samples = 1152 data points: number of audio samples (per channel)
            # each frame has size = 1152 (samples) * 2 (channels) * 4 (bytes/sample) = 9216 bytes
            # 11021 frames
            #arr = frame.to_ndarray()  # arr.nbytes = 9216

            #channels = []
            channels = frame.to_ndarray().astype("float16")
            #for plane in frame.planes:
            #    channels.append(plane.to_bytes())  # a plane has 4 bytes/sample, but the audio has only 2 bytes
            #    channels.append(np.frombuffer(plane, dtype=np.single).astype("float16"))
            #    channels.append(np.frombuffer(plane, dtype=np.single))  # np.single is 4 bytes
            if not frame.is_corrupt:
                #data.extend(np.frombuffer(frame.planes[0], dtype=np.single).astype("float16"))  # 1 channel: noisy
                # frame.planes: tuple of av.audio.plane.AudioPlane
                frame_count += 1
                #print('>>>> %04d' % frame_count, frame)
                #if frame_count == 5000: break
                # mix channels:
                for i in range(frame.samples):
                    for ch in channels:  # dec_ctx->channels
                        data.extend(ch[i])  # noisy
                    #fwrite(frame->data[ch] + data_size*i, 1, data_size, outfile)
    return bytes(data)


I pipe ffmpeg to get decoded data for comparison, and find that the two outputs are different:


def RunFFMPEG(mp3_path, target_fs="44100"):
    # Run OK
    import subprocess
    # init command
    ffmpeg_command = ["ffmpeg", "-i", mp3_path,
                      "-ab", "128k", "-acodec", "pcm_s16le", "-ac", "0", "-ar", target_fs, "-map",
                      "0:a", "-map_metadata", "-1", "-sn", "-vn", "-y",
                      "-f", "wav", "pipe:1"]
    # execute ffmpeg command
    pipe = subprocess.run(ffmpeg_command, stdout=subprocess.PIPE, stderr=subprocess.PIPE, bufsize=10**8)
    # debug
    #print(pipe.stdout, pipe.stderr)
    # read signal as numpy array and assign sampling rate
    #audio_np = np.frombuffer(buffer=pipe.stdout, dtype=np.uint16, offset=44)
    #audio_np = np.frombuffer(buffer=pipe.stdout, dtype=np.uint16)
    #sig, fs = audio_np, target_fs
    #return audio_np
    return pipe.stdout[78:]



Then I use pyaudio to play the data and find it very noisy:


p = pyaudio.PyAudio()
streamOut = p.open(format=pyaudio.paInt16, channels=2, rate= 44100, output=True)
#streamOut = p.open(format=pyaudio.paInt16, channels=1, rate= 44100, output=True)

mydata = decodeStream(mp3_path)
print("bytes of mydata = ", len(mydata))
#print("bytes of mydata = ", mydata.nbytes)

ffMpegdata = RunFFMPEG(mp3_path)
print("bytes of ffMpegdata = ", len(ffMpegdata)) 
#print("bytes of ffMpegdata = ", ffMpegdata.nbytes)

minlen = min(len(mydata), len(ffMpegdata))
print("mydata == ffMpegdata", mydata[:minlen] == ffMpegdata[:minlen]) # ffMpegdata.tobytes()[:minlen] )

#bytes of mydata = 50784768
#bytes of ffMpegdata = 50784768
#mydata == ffMpegdata False

streamOut.write(mydata)
streamOut.write(ffMpegdata)
streamOut.stop_stream()
streamOut.close()
p.terminate()



Please help me understand the decoded frames from the PyAV API (after for frame in packet.decode():). Should they be processed further, or do I have an error somewhere?
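
One possible explanation (an assumption based on the code above, not a verified diagnosis): MP3 frames usually decode to planar float samples ("fltp"), while the pyaudio stream is opened for interleaved 16-bit samples (paInt16), so writing float16 bytes to it would indeed sound like noise. PyAV's AudioResampler can convert each decoded frame to packed s16 before playback. A minimal sketch, assuming the file is stereo at 44100 Hz to match the pyaudio stream settings (decode_to_s16 is just an illustrative helper name):

import av

def decode_to_s16(mp3_path, rate=44100):
    # Sketch: convert decoded MP3 frames (typically planar float, "fltp")
    # to interleaved signed 16-bit samples, which is what paInt16 expects.
    container = av.open(mp3_path)
    stream = next(s for s in container.streams if s.type == 'audio')
    resampler = av.AudioResampler(format='s16', layout='stereo', rate=rate)
    data = bytearray()
    for packet in container.demux(stream):
        if packet.dts is None:
            continue  # skip the flushing packets that demux generates
        for frame in packet.decode():
            out = resampler.resample(frame)
            # Newer PyAV versions return a list of frames here; older ones return a single frame.
            frames = out if isinstance(out, list) else [out]
            for f in frames:
                if f is not None:
                    data.extend(f.to_ndarray().tobytes())
    return bytes(data)

Playing the result through the existing paInt16 / 2-channel / 44100 Hz pyaudio stream should at least rule the sample format in or out as the source of the noise.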


This has been driving me crazy for 3 days. I cannot figure out where to go from here.


Thank you very much.