
Media (91)
-
Head down (wav version)
26 September 2011
Updated: April 2013
Language: English
Type: Audio
-
Echoplex (wav version)
26 September 2011
Updated: April 2013
Language: English
Type: Audio
-
Discipline (wav version)
26 September 2011
Updated: April 2013
Language: English
Type: Audio
-
Letting you (wav version)
26 September 2011
Updated: April 2013
Language: English
Type: Audio
-
1 000 000 (wav version)
26 September 2011
Updated: April 2013
Language: English
Type: Audio
-
999 999 (wav version)
26 September 2011
Updated: April 2013
Language: English
Type: Audio
Other articles (47)
-
MediaSPIP 0.1 Beta version
25 April 2011. MediaSPIP 0.1 beta is the first version of MediaSPIP proclaimed as "usable".
The zip file provided here contains only the sources of MediaSPIP in its standalone version.
To get a working installation, you must manually install all software dependencies on the server.
If you want to use this archive for an installation in "farm mode", you will also need to proceed with other manual (...) -
Submit bugs and patches
13 April 2011. Unfortunately, software is never perfect.
If you think you have found a bug, report it using our ticket system. Please help us fix it by providing the following information: the browser you are using, including the exact version; as precise an explanation of the problem as possible; if possible, the steps that lead to the problem; and a link to the site/page in question.
If you think you have solved the bug, fill in a ticket and attach a corrective patch to it.
You may also (...) -
The Podcasts plugin
14 July 2010. The problem with podcasting is once again a problem that highlights the standardization of data transport on the Internet.
Two interesting formats exist: the one developed by Apple, strongly oriented toward the use of iTunes, whose SPEC is here; and the "Media RSS Module" format, which is more "free", notably supported by Yahoo and the Miro software.
File types supported in feeds
Apple's format only allows the following formats in its feeds: .mp3 audio/mpeg .m4a audio/x-m4a .mp4 (...)
On other sites (6001)
-
Perturbation of an audio/video decoding execution trace with GStreamer
9 March 2013, by KKc
I am a newcomer to the GStreamer community and I have a pipeline to decode and trace a .mp4 file:

gst-launch --gst-debug=filesrc:7,queue:7,audioconvert:7,audioresample:7,qtdemux:7,faad:7,ffmpeg:7,audioresample:7,audioconvert:7,autoaudiosink:7,autovideosink:7 filesrc location=... ! qtdemux name=demuxer demuxer. ! queue ! faad ! audioconvert ! audioresample ! autoaudiosink demuxer. ! queue ! ffdec_h264 ! ffmpegcolorspace ! autovideosink > file1

I inserted the "identity" element to disturb the decoding, and indeed I saw the video become very slow and the sound disappear. I used this command:

gst-launch --gst-debug=filesrc:7,queue:7,audioconvert:7,audioresample:7,qtdemux:7,faad:7,ffmpeg:7,audioresample:7,audioconvert:7,autoaudiosink:7,autovideosink:7 filesrc location=... ! qtdemux name=demuxer demuxer. ! queue ! faad ! audioconvert ! audioresample ! autoaudiosink demuxer. ! queue ! identity sleep-time=1000000 ! ffdec_h264 ! ffmpegcolorspace ! autovideosink > file2
The first time I ran this, two new functions appeared in file2:
(i) gst_ffmpegdec_chain ... 'skipping ...'
(ii) gst_ffmpegdec_video_frame ... 'Dropping ...'
I assumed this meant that some data was being dropped, or something similar.
However, for several days now I have been using the same pipelines with the same video to decode; I get the same degraded playback, but no new function in file2 :(
The only difference is the number of occurrences of this function: gst_ffmpegdec_update_qos ... 'update' appears 558 times in one case and 4 times in the other.
I don't know why I am unable to reproduce a disturbed trace with 'skipping ...' and 'dropping ...'.
My questions are:
1. Do you have any idea what the above functions mean?
2. Do you know any other element useful for disturbing an A/V decoding process?
Thank you for any reply.
-
How to build and link FFmpeg for iOS?
30 June 2015, by Alexander Tkachenko
Hi all!
I know there are a lot of questions here about FFmpeg on iOS, but no answer fits my case :(
Something strange happens every time I try to link FFmpeg in my project, so please help me! My task is to write a video-chat application for iOS that uses the RTMP protocol to publish and read a video stream to/from a custom Flash Media Server.
I decided to use rtmplib, a free open-source library for streaming FLV video over RTMP, as it is the only appropriate library.
Many problems appeared when I began researching it, but later I understood how it should work.
Now I can read a live FLV video stream (from a URL) and send it back to the channel with the help of my application.
My trouble now is sending video FROM the camera.
The basic sequence of operations, as I understand it, should be the following:
-
Using AVFoundation, with the sequence Device -> AVCaptureSession -> AVVideoDataOutput -> AVAssetWriter, I write the capture to a file (if you need, I can describe this flow in more detail, but in the context of the question it is not important). This flow performs hardware-accelerated conversion of the live camera video into the H.264 codec, but it ends up in the MOV container format. (This step is complete.)
-
I read this temporary file as each sample is written, and obtain the stream of bytes of video data (H.264-encoded, in the QuickTime container). (This step is already complete.)
-
I need to convert the video data from the QuickTime container format to FLV, all in real time (packet by packet).
-
Once I have the packets of video data in the FLV container format, I will be able to send them over RTMP using rtmplib.
Now, the most complicated part for me is step 3.
I think I need to use the ffmpeg libraries (libavformat) for this conversion. I even found source code showing how to decode H.264 data packets from a MOV file (looking into libavformat, I found that it is possible to extract these packets even from a byte stream, which is more appropriate for me). Having done that, I will need to encapsulate the packets into FLV (using ffmpeg, or manually by adding FLV headers to the H.264 packets; that is not a problem and is easy, if I am correct).
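For reference, this container-to-container step can be done with libavformat alone, without touching the codec data. The sketch below follows FFmpeg's stock remuxing pattern; it is a minimal, file-based illustration using the current AVCodecParameters API (2015-era builds would use avcodec_copy_context() instead), not code from the question, and error handling is trimmed to the essentials.

    #include <libavformat/avformat.h>

    /*
     * Minimal MOV -> FLV remux sketch: copies H.264 packets without
     * re-encoding. Assumes the input holds only streams FLV can carry
     * (one H.264 video and one AAC/MP3 audio track).
     */
    static int remux_mov_to_flv(const char *in_path, const char *out_path)
    {
        AVFormatContext *in_ctx = NULL, *out_ctx = NULL;
        AVPacket *pkt = NULL;
        int ret;

        if ((ret = avformat_open_input(&in_ctx, in_path, NULL, NULL)) < 0)
            return ret;
        if ((ret = avformat_find_stream_info(in_ctx, NULL)) < 0)
            goto end;

        /* Request the FLV muxer explicitly. */
        avformat_alloc_output_context2(&out_ctx, NULL, "flv", out_path);
        if (!out_ctx) { ret = AVERROR_UNKNOWN; goto end; }

        /* Mirror each input stream in the output, copying codec parameters. */
        for (unsigned i = 0; i < in_ctx->nb_streams; i++) {
            AVStream *out_st = avformat_new_stream(out_ctx, NULL);
            if (!out_st) { ret = AVERROR_UNKNOWN; goto end; }
            avcodec_parameters_copy(out_st->codecpar, in_ctx->streams[i]->codecpar);
            out_st->codecpar->codec_tag = 0; /* let the FLV muxer pick its own tag */
        }

        if ((ret = avio_open(&out_ctx->pb, out_path, AVIO_FLAG_WRITE)) < 0)
            goto end;
        if ((ret = avformat_write_header(out_ctx, NULL)) < 0)
            goto end;

        pkt = av_packet_alloc();
        while (av_read_frame(in_ctx, pkt) >= 0) {
            AVStream *in_st  = in_ctx->streams[pkt->stream_index];
            AVStream *out_st = out_ctx->streams[pkt->stream_index];
            /* Rescale timestamps from the input time base to the output's. */
            av_packet_rescale_ts(pkt, in_st->time_base, out_st->time_base);
            pkt->pos = -1;
            if ((ret = av_interleaved_write_frame(out_ctx, pkt)) < 0)
                break;
            av_packet_unref(pkt);
        }
        av_write_trailer(out_ctx);
    end:
        av_packet_free(&pkt);
        avformat_close_input(&in_ctx);
        if (out_ctx && out_ctx->pb)
            avio_closep(&out_ctx->pb);
        avformat_free_context(out_ctx);
        return ret;
    }

For the real-time, packet-by-packet case described above, the same calls apply once the file-backed out_ctx->pb is replaced with a custom write-callback AVIOContext (created via avio_alloc_context()), so that each av_interleaved_write_frame() delivers FLV bytes ready to hand to rtmplib.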
FFmpeg has great documentation and is a very powerful library, and I don't think using it will be a problem. BUT the problem is that I cannot get it working in an iOS project.
I have spent 3 days reading documentation, Stack Overflow, and googling for the answer to "How to build FFmpeg for iOS", and I think my PM is going to fire me if I spend one more week trying to compile this library :))
I tried many different build scripts and configure files, but when I build FFmpeg, I get libavformat, libavcodec, etc. for the x86 architecture (even when I specify the armv6 arch in the build script). (I use "lipo -info libavcodec.a" to show the architectures.)
So I could not build these sources, and decided to find a prebuilt FFmpeg built for the armv7, armv6, and i386 architectures.
I downloaded the iOS Comm Lib from MidnightCoders on GitHub; it contains an example of FFmpeg usage and prebuilt .a files of avcodec, avformat, and other FFmpeg libraries.
I checked their architecture:

iMac-2:MediaLibiOS root# lipo -info libavformat.a
Architectures in the fat file: libavformat.a are: armv6 armv7 i386

And I found that it is appropriate for me!
When I add these libraries and headers to the Xcode project, it compiles fine (I don't even get warnings like "Library was compiled for another architecture"), and I can use the structures from the headers. But when I try to call a C function from libavformat (av_register_all()), the linker shows the error "Symbol(s) not found for architecture armv7: av_register_all".
I thought that maybe there were no symbols in the lib, and tried to list them:

root# nm -arch armv6 libavformat.a | grep av_register_all
00000000 T _av_register_all

Now I am stuck; I don't understand why Xcode cannot see these symbols, and I cannot move forward.
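One classic cause of this exact symptom, offered here only as a guess since the question does not show the calling code, is C++ name mangling: if av_register_all() is called from an Objective-C++ (.mm) file, the compiler emits a reference to a mangled C++ symbol rather than the plain C _av_register_all, and the linker reports "symbol not found" even though nm shows the symbol. The usual guard is to wrap the FFmpeg includes:

    /* Prevent C++ name mangling of FFmpeg's C symbols when the including
     * file is compiled as C++/Objective-C++ (.mm). Harmless in plain C. */
    #ifdef __cplusplus
    extern "C" {
    #endif
    #include <libavformat/avformat.h>
    #include <libavcodec/avcodec.h>
    #ifdef __cplusplus
    }
    #endif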
Please correct me if I am wrong in my understanding of the flow for publishing an RTMP stream from iOS, and help me with building and linking FFmpeg for iOS.
I have the iOS 5.1 SDK and Xcode 4.2.
-
Finding Optimal Code Coverage
7 March 2012, by Multimedia Mike — Programming
A few months ago, I published a procedure for analyzing code coverage of the test suites exercised in FFmpeg and Libav. I used it to add some more tests, and I have it on good authority that it has helped other developers fill in some gaps as well (beginning with students helping out with the projects as part of the Google Code-In program). Now I'm wondering about ways to do better.
Current Process
When adding a test that depends on a sample (like a demuxer or decoder test), it's ideal to add a sample that is A) small, and B) exercises as much of the codebase as possible. When I was studying code coverage statistics for the WC4-Xan video decoder, I noticed that the sample didn't exercise one of the 2 possible frame types. So I scouted samples until I found one that covered both types, trimmed the sample down, and updated the coverage suite.

I started wondering about a method for finding the optimal test sample for a given piece of code, one that exercises every code path in a module. Okay, so that's foolhardy in the vast majority of cases (although I was able to add one test spec that pushed a module's code coverage from 0% all the way to 100% — but the module in question only had 2 exercisable lines). Still, given a large enough corpus of samples, how can I find the smallest set of samples that exercises the complete codebase?
This almost sounds like an NP-complete problem. But why should that stop me from trying to find a solution?
Science Project
Here's the pitch:
- Instrument FFmpeg with code coverage support
- Download lots of media to exercise a particular module
- Run FFmpeg against each sample and log code coverage statistics
- Distill the resulting data in some meaningful way in order to obtain more optimal code coverage
That first step sounds harsh: downloading lots and lots of media. Fortunately, there is at least one multimedia format in the projects that tends to be extremely small: ANSI. These are files that are designed to display elaborate scrolling graphics using text mode. Further, the FATE sample currently deployed for this test (TRE_IOM5.ANS) only exercises a little less than 50% of the code in libavcodec/ansi.c. I believe this makes the ANSI video decoder a good candidate for this experiment.
Procedure
First, find a site that hosts a lot of ANSI files. Hi, sixteencolors.net. This site has lots (on the order of 4000) of artpacks, which are ZIP archives that contain multiple ANSI files (and sometimes some other files). I scraped a list of all the artpack names. In an effort to be responsible, I randomized the list of artpacks and downloaded periodically and with limited bandwidth ('wget --limit-rate=20k').

Run 'gcov' on ansi.c in order to gather the full set of line numbers to be covered.

For each artpack, unpack the contents, run the instrumented FFmpeg on each file inside, run 'gcov' on ansi.c, and log statistics including the file's size, the file's location (artpack.zip:filename), and a comma-separated list of line numbers touched.
Definition of ‘Optimal’
The foregoing procedure worked and yielded useful raw data. Now I have to figure out how to analyze it.

I think it's most desirable to have the smallest files (in terms of bytes) that exercise the most lines of code. To that end, I sorted the results by file size, ascending. A Python script initializes a set of all exercisable line numbers in ansi.c, then iterates through each file's stats line, adding the file to the list of candidate samples if its set of exercised lines can remove any line numbers from the overall set of lines. Ideally, that set of lines should devolve to an empty set.
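The post does not include the script itself; as a rough illustration, the greedy pass it describes amounts to a simple set-cover sweep. Here is a minimal sketch (in C rather than Python, and assuming a hypothetical input format of one 'size name line,line,...' record per file on stdin, pre-sorted by size ascending, with line numbers bounded by MAX_LINES):

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    #define MAX_LINES 4096  /* assumed upper bound on line numbers in ansi.c */

    int main(void)
    {
        char covered[MAX_LINES] = {0};  /* lines claimed so far */
        char buf[65536];                /* assumes records fit on one line */

        while (fgets(buf, sizeof(buf), stdin)) {
            char *size = strtok(buf, " \t\n");
            char *name = strtok(NULL, " \t\n");
            char *list = strtok(NULL, " \t\n");
            if (!size || !name || !list)
                continue;

            /* Count how many not-yet-covered lines this file touches. */
            int new_lines = 0;
            for (char *tok = strtok(list, ","); tok; tok = strtok(NULL, ",")) {
                int line = atoi(tok);
                if (line > 0 && line < MAX_LINES && !covered[line]) {
                    covered[line] = 1;
                    new_lines++;
                }
            }
            /* Keep the file only if it adds coverage. */
            if (new_lines > 0)
                printf("%s bytes: %s (+%d lines)\n", size, name, new_lines);
        }
        return 0;
    }

Each record is kept only if it claims at least one line that no earlier (smaller) file has touched, which is exactly the "can remove any line numbers from the overall set" test described above.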
I think a second possible approach is to find the single sample that exercises the most code and then proceed with the previously described method.
Initial Results
So far, I have analyzed 13324 samples from 357 different artpacks provided by sixteencolors.net.

Using the first method, I can find a set of samples that covers nearly 80% of ansi.c:
0 bytes: bad-0494.zip:5
1 bytes: grip1293.zip:-ANSI---.---
1 bytes: pur-0794.zip:.
2 bytes: awe9706.zip:-ANSI───.───
61 bytes: echo0197.zip:-(ART)-
62 bytes: hx03.zip:HX005.DAT
76 bytes: imp-0494.zip:IMPVIEW.CFG
82 bytes: ice0010b.zip:_cont'd_.___
101 bytes: bdp-0696.zip:BDP2.WAD
112 bytes: plain12.zip:--------.---
181 bytes: ins1295v.zip:-°VGA°-. н
219 bytes: purg-22.zip:NEM-SHIT.ASC
289 bytes: srg1196.zip:HOWTOREQ.JNK
315 bytes: karma-04.zip:FASHION.COM
318 bytes: buzina9.zip:ox-rmzzy.ans
411 bytes: solo1195.zip:FU-BLAH1.RIP
621 bytes: ciapak14.zip:NA-APOC1.ASC
951 bytes: lght9404.zip:AM-TDHO1.LIT
1214 bytes: atb-1297.zip:TX-ROKL.ASC
2332 bytes: imp-0494.zip:STATUS.ANS
3218 bytes: acepak03.zip:TR-STAT5.ANS
6068 bytes: lgc-0193.zip:LGC-0193.MEM
16778 bytes: purg-20.zip:EZ-HIR~1.JPG
20582 bytes: utd0495.zip:LT-CROW3.ANS
26237 bytes: quad0597.zip:MR-QPWP.GIF
29208 bytes: mx-pack17.zip:mx-mobile-source-logo.jpg
----
109440 bytes total

A few notes about that list: some of those filenames consist primarily of control characters. 133t, and all that. The first file is 0 bytes. I wondered if I should discard 0-length files but decided to keep them in, especially if they exercise lines that wouldn't normally be activated. Also, there are a few JPEG and GIF files in the set. I should point out that I forced the tty demuxer using '-f tty', and there isn't much in the way of signatures for this format. So, again, whatever exercises more lines is better.

Using this same corpus, I tried approach 2: which single sample exercises the most lines of the decoder? Answer: blde9502.zip:REQUEST.EXE. Huh. I checked it out and 'file' IDs it as an MS-DOS executable. So that approach wasn't fruitful, at least not for this corpus, since I'm forcing everything through this narrow code path.
Think About The Future
Where can I take this next? The cloud! I have people inside the search engine industry who have furnished me with extensive lists of specific types of multimedia files from around the internet. I also see that Amazon Web Services Elastic Compute Cloud (AWS EC2) instances don't charge for incoming bandwidth.

I think you can see where I'm going with this.
See Also: