
Media (1)
-
1 000 000 (wav version)
26 September 2011
Updated: April 2013
Language: English
Type: Audio
Other articles (67)
-
Submit bugs and patches
13 April 2011. Unfortunately, software is never perfect.
If you think you have found a bug, report it using our ticket system. Please help us fix it by providing the following information: the browser you are using, including its exact version; as precise a description of the problem as possible; if possible, the steps that led to the problem; and a link to the site / page in question.
If you think you have fixed the bug, open a ticket and attach your corrective patch to it.
You may also (...) -
Sites built with MediaSPIP
2 May 2011. This page presents some of the sites running MediaSPIP.
You can of course add your own using the form at the bottom of the page. -
Automatic backup of SPIP channels
1 April 2010. When setting up an open platform, it is important for hosting providers to have reasonably regular backups available to guard against any eventual problem.
This task relies on two SPIP plugins: Saveauto, which performs a regular backup of the database as a MySQL dump (usable in phpmyadmin), and mes_fichiers_2, which builds a zip archive of the site's important data (the documents, the elements (...)
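For context (the snippet above only names the plugins), a rough manual equivalent of what Saveauto and mes_fichiers_2 automate is a scheduled mysqldump plus a zip archive of the site's data directories. A minimal sketch, assuming placeholder database credentials and an IMG/ directory for uploaded documents:

```python
# Rough manual equivalent of the two plugins: dump the SPIP database and
# zip the uploaded documents. Credentials and paths are placeholders.
import os
import shutil
import subprocess
from datetime import date

stamp = date.today().isoformat()
os.makedirs("backup", exist_ok=True)

# 1) Database dump (what Saveauto schedules), importable via phpMyAdmin.
with open(f"backup/spip-{stamp}.sql", "wb") as dump:
    subprocess.run(
        ["mysqldump", "--user=spip", "--password=secret", "spip_db"],
        stdout=dump,
        check=True,
    )

# 2) Zip archive of the important site data (what mes_fichiers_2 produces),
#    e.g. the uploaded documents under IMG/.
shutil.make_archive(f"backup/spip-files-{stamp}", "zip", root_dir="IMG")
```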
On other sites (8173)
-
checkasm: use perf API on Linux ARM*
1 September 2017, by Clément Bœsch
checkasm: use perf API on Linux ARM*
On ARM platforms, accessing the PMU registers requires special user
access permissions. Since there is no other way to get accurate timers,
the current implementation of timers in FFmpeg relies on these registers.
Unfortunately, enabling user access to these registers on Linux is not
trivial, and generally involves compiling a random and unreliable GitHub
kernel module, or somehow patching your kernel. Such a module is very unlikely to reach upstream anytime soon. Quoting
Robin Murphy from ARM:
> Say you do give userspace direct access to the PMU; now run two or more
> programs at once that believe they can use the counters for their own
> "minimal-overhead" profiling. Have fun interpreting those results...
>
> And that's not even getting into the implications of scheduling across
> different CPUs, CPUidle, etc. where the PMU state is completely beyond
> userspace's control. In general, the plan to provide userspace with
> something which might happen to just about work in a few corner cases,
> but is meaningless, misleading or downright broken in all others, is to
> never do so.
As a result, the alternative is to use the Linux Performance Monitoring
API, which makes use of these registers internally (assuming the PMU of
your ARM board is supported in the kernel, which is definitely not a
given...). While the Linux API is obviously cross-platform, it does have a
significant overhead which needs to be taken into account. As a result,
that mode is only weakly enabled, and exclusively on ARM platforms.
A note on the inflexibility of the implementation: the timers (native
FFmpeg vs. the Linux API) are selected at compile time to avoid the
need for function calls, which would have a negative impact on the
cycle counters. -
Passing additional values to S3 event notifications for Lambda consumption
8 September 2017, by user1790300
I have to write code in react-native that allows a user to upload videos to Amazon S3 to be transcoded for consumption by various devices. For the processing after the upload occurs, I am reviewing two approaches:
1) I can use Lambda with ffmpeg to handle the transcoding immediately after the upload occurs (my concern here is the amount of time required to transcode the videos and the effect on pricing if it takes a considerable amount of time).
2) I can have S3 send an SNS message to a REST API after the object-created event occurs, and have the REST API generate a RabbitMQ message that is processed by a worker performing the transcoding with ffmpeg.
Option 1) seems preferable from a completion-time perspective. How concerned should I be about using 1), considering how long video transcoding might take, as opposed to option 2)?
Also, regardless of the option chosen, I need a way to pass additional parameters to Lambda, or along with the SNS message, that would let me associate the user who uploaded the video with their account. Is there a way to pass additional text-based values to S3 that get passed along to Lambda or SNS when the upload completes? As a caveat, I plan to upload the video directly to S3 using the REST layer (found this here: http://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectPUT.html#RESTObjectPUT-responses-examples).
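Not part of the original question, but one common way to attach such values: S3 lets the client set user-defined metadata (x-amz-meta-* headers) on the PUT request, and a Lambda handler triggered by the object-created event can read it back with a HEAD request, since the event itself only carries the bucket and key. A minimal sketch, assuming a hypothetical x-amz-meta-user-id header set at upload time:

```python
# Sketch of a Lambda handler that recovers user-defined metadata attached
# to the uploaded object. Assumes the client set an "x-amz-meta-user-id"
# header on the PUT; the header name is a hypothetical example.
import urllib.parse

import boto3

s3 = boto3.client("s3")

def handler(event, context):
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        # Object keys in S3 event notifications are URL-encoded.
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])

        # The event does not include user metadata, so fetch it with a
        # HEAD request on the object.
        head = s3.head_object(Bucket=bucket, Key=key)
        user_id = head["Metadata"].get("user-id")  # "x-amz-meta-" prefix is stripped

        print(f"Transcode request for {bucket}/{key}, uploaded by user {user_id}")
        # ...start the ffmpeg transcode or enqueue a job here...
```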
-
Make a web-based ffmpeg live transcoder on Linux for multiple streams
11 July 2017, by Dlniya Dlzar
Hi, I am planning to make a web-based ffmpeg live transcoder on Linux for multiple streams.
Using ffmpeg and nginx-rtmp is the basic approach I found and plan to use.
My plan is: a web interface for adding and modifying streams (name, input, output, etc.) in a database, by which I mean a JSON file, and then execute an ffmpeg command for each entry in the JSON file. One more thing I want to do is to monitor the streams based on
nginx-rtmp-module/stat.xsl
git https://github.com/arut/nginx-rtmp-module/blob/master/stat.xsl
and restart streams if there is a problem, like no audio or picture.
What is the best structure for doing this? Which language is a good fit for the processing?
Is there any knowledge I am missing? Is there a better way you can think of?
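Not part of the question itself, but to make the plan concrete: below is a minimal supervisor sketch in Python, with an assumed streams.json layout and illustrative ffmpeg arguments; a real version would also parse nginx-rtmp's stat page to detect streams with missing audio or video before restarting them.

```python
# Minimal sketch of the supervisor loop described above: read stream
# definitions from a JSON file, launch one ffmpeg process per stream,
# and restart any process that has died. The streams.json layout and the
# ffmpeg arguments are illustrative assumptions, not a fixed format.
import json
import subprocess
import time

def load_streams(path="streams.json"):
    # Example layout: [{"name": "cam1", "input": "rtmp://in/live/cam1",
    #                   "output": "rtmp://127.0.0.1/live/cam1"}, ...]
    with open(path) as f:
        return json.load(f)

def start_ffmpeg(stream):
    cmd = [
        "ffmpeg", "-hide_banner", "-loglevel", "error",
        "-i", stream["input"],
        "-c:v", "libx264", "-preset", "veryfast",
        "-c:a", "aac",
        "-f", "flv", stream["output"],
    ]
    return subprocess.Popen(cmd)

def main():
    procs = {}
    while True:
        for stream in load_streams():
            proc = procs.get(stream["name"])
            if proc is None or proc.poll() is not None:
                # Not running (or exited): (re)start it. Health checks such
                # as parsing nginx-rtmp's stat page for missing audio would
                # also go here.
                procs[stream["name"]] = start_ffmpeg(stream)
        time.sleep(10)

if __name__ == "__main__":
    main()
```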