
Other articles (53)

  • Authorizations overridden by plugins

    27 April 2010, by

    Mediaspip core
    autoriser_auteur_modifier() so that visitors can modify their information on the authors page

  • Publishing on MédiaSpip

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, if your MédiaSpip installation is at version 0.2 or higher. If needed, contact the administrator of your MédiaSpip to find out

  • Support for all media types

    10 April 2011

    Unlike many modern document-sharing applications and platforms, MediaSPIP aims to handle as many different document formats as possible, whether they are: images (png, gif, jpg, bmp and others...); audio (MP3, Ogg, Wav and others...); video (Avi, MP4, Ogv, mpg, mov, wmv and others...); textual content, code or other (OpenOffice, Microsoft Office (spreadsheet, presentation), web (html, css), LaTeX, Google Earth) (...)

On other sites (10509)

  • Live Streaming WebM with Wowza Server

    2 December 2010, by noreply@blogger.com (John Luther)

    Guest blogger Charlie Good is CTO and co-founder of Wowza Media Systems

    As a company, we at Wowza move fast and like to tinker. When WebM was announced in May, we saw it as a promising new approach to HTML5 video and decided to do an experiment with live WebM streaming over HTTP.

    Adding WebM VP8 video and Vorbis audio to the other encoding formats that our server supported was easy (we designed the Wowza server to be codec-agnostic). We then created a WebM file and implemented WebM HTTP streaming.

    We originally created the demo as a proof of concept for the IBC show in September 2010, but have made it available to watch on our web site.

    The file is streamed live (more precisely, "pseudo-live") over HTTP using the Wowza server-side publishing API (PDF). The result is very impressive; playback starts fast and the VP8 image quality is fantastic.

    You will need a WebM-enabled browser or VLC media player 1.1.5 to view the live stream.

    If you’re interested in keeping up with Wowza’s WebM progress, visit Wowza Labs or drop us a note at info@wowzamedia.com.

  • avcodec/speexdec: Consider mode in frame size check

    26 December 2021, by Michael Niedermayer
    avcodec/speexdec: Consider mode in frame size check
    

    No speex samples with non-default frame sizes are known (to me);
    the official speexenc seems to only generate the 3 default ones.
    Thus it may be that the fuzzer samples were the first non-default
    values encountered by the decoder.
    Possibly the "<" should be "!=".

    Fixes: out of array access
    Fixes: 42821/clusterfuzz-testcase-minimized-ffmpeg_AV_CODEC_ID_SPEEX_fuzzer-5640695772217344

    Found-by: continuous fuzzing process https://github.com/google/oss-fuzz/tree/master/projects/ffmpeg
    Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>

    • [DH] libavcodec/speexdec.c
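
    To make the terse description above concrete, here is a small self-contained C sketch of mode-aware frame size validation in the spirit of this change; the table and function names are hypothetical illustrations, and this is not the actual libavcodec/speexdec.c code:

    /*
     * Hypothetical illustration only -- not the actual libavcodec/speexdec.c
     * code. It shows the idea of checking a stream's frame size against the
     * size implied by its Speex mode, rather than against a single bound.
     */
    #include <stdio.h>

    /* Default frame sizes in samples for the three standard Speex modes
     * (narrowband, wideband, ultra-wideband): 20 ms at 8/16/32 kHz. */
    static const int speex_default_frame_size[3] = { 160, 320, 640 };

    /* The commit message suggests a strict "!=" may be the right comparison,
     * since only the default sizes are known to occur in real files. */
    static int validate_frame_size(int mode, int frame_size)
    {
        if (mode < 0 || mode > 2)
            return -1;
        if (frame_size != speex_default_frame_size[mode])
            return -1;
        return 0;
    }

    int main(void)
    {
        /* A wideband (mode 1) header claiming a narrowband frame size is the
         * kind of inconsistency a fuzzer-generated input could exercise. */
        printf("%d\n", validate_frame_size(1, 160)); /* -1: rejected */
        printf("%d\n", validate_frame_size(1, 320)); /*  0: consistent */
        return 0;
    }
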
  • three ways to obtain an H264 file

    18 March 2016, by Kindermann

    Here I have three ways to get an H264 file; like all forensic scientists, I am very curious about the differences between them:

    1.

    ffmpeg -i video.mp4 video.h264

    2.

    ffmpeg -i video.mp4 -vcodec copy -an -f h264 video.h264

    3. Using the example "demuxing_decoding.c" provided on the ffmpeg official website:
    http://ffmpeg.org/doxygen/trunk/demuxing_decoding_8c-example.html

    Obviously, the first one does the transcoding, and the second one does the demuxing. They render different H264 files which, however, have similar file sizes (in my case, about 24 MB). Surprisingly, the third one, which is also supposed to do the demuxing job, renders an H264 file of 8.4 GB! Why?

    What I really wonder is how the internals of these three methods work. (The third one is already in source code, so it is quite easy to get an insight.) What about the first two commands? What APIs are called when executing them, how are those APIs called (namely, in what sequence), and things like that.
    One thing that is also important to me: I have no idea how I can trace the execution routines of ffmpeg command lines. I want to see what is going on behind ffmpeg commands at the source-code level. Is that possible?
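
    As a rough illustration of what the second command boils down to at the API level, here is a minimal, self-contained libavformat sketch (an assumption about the flow, not the actual ffmpeg.c code path): open and demux the MP4, copy the compressed video packets unchanged into the raw "h264" muxer, and skip audio. Error handling is kept deliberately minimal.

    /*
     * Hypothetical sketch: roughly what
     *   ffmpeg -i video.mp4 -vcodec copy -an -f h264 video.h264
     * does with the libavformat API -- demux the MP4 and copy the H.264
     * packets into a raw h264 bitstream, without decoding or re-encoding.
     * Build (assumption): gcc remux_h264.c $(pkg-config --cflags --libs libavformat libavcodec libavutil)
     */
    #include <libavformat/avformat.h>
    #include <libavcodec/avcodec.h>

    int main(int argc, char **argv)
    {
        AVFormatContext *in = NULL, *out = NULL;
        AVPacket *pkt = NULL;
        int video_idx;

        if (argc < 3)
            return 1;

        /* Open and probe the input container (the demuxing step). */
        if (avformat_open_input(&in, argv[1], NULL, NULL) < 0)
            return 1;
        if (avformat_find_stream_info(in, NULL) < 0)
            return 1;
        video_idx = av_find_best_stream(in, AVMEDIA_TYPE_VIDEO, -1, -1, NULL, 0);
        if (video_idx < 0)
            return 1;

        /* Ask for the raw "h264" muxer, the equivalent of "-f h264". */
        avformat_alloc_output_context2(&out, NULL, "h264", argv[2]);
        if (!out)
            return 1;
        AVStream *ost = avformat_new_stream(out, NULL);
        /* "-vcodec copy": copy codec parameters instead of re-encoding. */
        avcodec_parameters_copy(ost->codecpar, in->streams[video_idx]->codecpar);

        if (avio_open(&out->pb, argv[2], AVIO_FLAG_WRITE) < 0)
            return 1;
        if (avformat_write_header(out, NULL) < 0)
            return 1;

        /* Copy compressed packets straight through; skipping non-video
         * packets here plays the role of "-an". The raw h264 muxer keeps
         * no container timestamps, so no rescaling is needed. */
        pkt = av_packet_alloc();
        while (av_read_frame(in, pkt) >= 0) {
            if (pkt->stream_index == video_idx) {
                pkt->stream_index = 0;
                av_interleaved_write_frame(out, pkt);
            }
            av_packet_unref(pkt);
        }

        av_write_trailer(out);
        av_packet_free(&pkt);
        avio_closep(&out->pb);
        avformat_close_input(&in);
        avformat_free_context(out);
        return 0;
    }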

    I appreciate any comment.