Other articles (14)

  • Supporting all media types

    13 April 2011

    Unlike most software and media-sharing platforms, MediaSPIP aims to manage as many different media types as possible. The following are just a few examples from an ever-expanding list of supported formats: images (png, gif, jpg, bmp and more); audio (MP3, Ogg, Wav and more); video (AVI, MP4, OGV, mpg, mov, wmv and more); text, code and other data (OpenOffice, Microsoft Office (Word, PowerPoint, Excel), web (html, CSS), LaTeX, Google Earth) and (...)

  • Media-specific libraries and software

    10 December 2010

    For correct and optimal operation, several things need to be taken into account.
    After installing apache2, mysql and php5, it is important to install the other required software, whose installation is described in the related links. A set of multimedia libraries (x264, libtheora, libvpx) is used for encoding and decoding video and audio so as to support as many file types as possible. See: this tutorial; FFMpeg with the maximum number of decoders and (...)

  • Emballe Médias: a simple way to put documents online

    29 October 2010

    The emballe médias plugin was developed primarily for the mediaSPIP distribution, but it is also used in other related projects such as géodiversité. Required and compatible plugins
    To work, this plugin requires the following plugins to be installed: CFG Saisies SPIP Bonux Diogène swfupload jqueryui
    Other plugins can be used alongside it to extend its capabilities: Ancres douces Légendes photo_infos spipmotion (...)

On other sites (5953)

  • FFMPEG : Recurring onMetaData for RTMP ? [on hold]

    30 November 2017, by stevendesu

    For whatever reason this was put on hold as "too broad", although I felt I was quite specific. So I’ll try rephrasing here:

    My former understanding:

    The RTMP protocol involves sending several parallel streams of data as a series of packets, each carrying an ID that indicates which stream it belongs to. For instance:

    [VIDEO] <data>
    [AUDIO] <data>
    [VIDEO] <data>
    [VIDEO] <data>
    [SERVER] <metadata about bandwidth>
    [VIDEO] <data>
    [AUDIO] <data>
    ...

    Then on the player side these packets are split up into separate buffers based on type (all video data is concatenated, all audio data is concatenated, etc.).
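
    As a rough illustration of that reassembly step, a player-side demuxer might do something like the following (a conceptual C++ sketch, not any particular server’s implementation; the type IDs are the standard RTMP message type IDs, e.g. 0x12 for data/metadata):

    #include <cstdint>
    #include <map>
    #include <vector>

    // One parsed RTMP message: its type ID plus its payload bytes.
    enum class PacketType : uint8_t { Audio = 0x08, Video = 0x09, Metadata = 0x12 };

    struct Packet {
        PacketType type;
        std::vector<uint8_t> payload;
    };

    // Concatenate every packet's payload into a per-type buffer, the way a
    // player splits the interleaved stream back apart.
    void demux(const std::vector<Packet> &packets,
               std::map<PacketType, std::vector<uint8_t>> &buffers) {
        for (const auto &p : packets)
            buffers[p.type].insert(buffers[p.type].end(),
                                   p.payload.begin(), p.payload.end());
    }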

    One of the packet types is called onMetaData (ID: 0x12).

    An onMetaData packet includes a timestamp for when to trigger the metadata (this way it can be synchronized with the video) as well as the contents of the metadata (a text string)

    My setup:

    I’m using Red5Pro as my ingest server to take in an RTMP stream and then watch this stream via WebRTC. When an onMetaData packet is received by Red5, it sends a JSON object containing the metadata’s contents to all subscribers of the stream over WebSockets.

    What I want:

    I want to take advantage of this onMetaData channel to embed the server’s system clock into a stream. This way anyone viewing the stream can determine when (according to the server) a stream was encoded and, if they synchronize their clock with the server, they can then compute the end-to-end latency of the stream. Due to Red5’s use of WebSockets to send metadata, this isn’t a perfect solution (you may receive the metadata before or after you actually receive the video information); however, I have some plans to work around this.
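
    The latency computation itself is then a simple subtraction. A minimal C++ sketch, assuming synchronized clocks and a hypothetical encodeTimeMs metadata field holding the server’s wall clock (milliseconds since the epoch) at encode time:

    #include <chrono>
    #include <cstdint>

    // Returns the estimated end-to-end latency in milliseconds.
    // encodeTimeMs is the (hypothetical) metadata field described above.
    int64_t latency_ms(int64_t encodeTimeMs) {
        using namespace std::chrono;
        int64_t now = duration_cast<milliseconds>(
            system_clock::now().time_since_epoch()).count();
        return now - encodeTimeMs;
    }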

    In other words, I want my stream to look like this:

    [VIDEO] <data>
    [AUDIO] <data>
    [ONMETADATA] time: 2:05:77.382
    [VIDEO] <data>
    [VIDEO] <data>
    [SERVER] <metadata about bandwidth>
    [VIDEO] <data>
    [ONMETADATA] time: 2:05:77.423
    [AUDIO] <data>
    ...

    What I would like is to generate this stream (with the server’s current time periodically embedded in the onMetaData channel) using FFMPEG.

    Simpler problem:

    FFMPEG offers a -metadata command-line parameter.

    In my experiments, using this parameter caused a single onMetaData event to be fired including things like "title", "author", etc. I could not inject additional onMetaData packets periodically as the stream progressed.

    Even if the metadata packets do not contain the system clock, if I could send any metadata packets periodically using FFMPEG then I could include something static like "the server’s clock at the time the broadcast started". I can then compare this to the current timestamp of the video and calculate the latency.
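
    For reference, the one-shot behaviour described above comes from an invocation along these lines (a sketch; the input file and RTMP URL are placeholders):

    ffmpeg -re -i input.mp4 -c copy \
        -metadata title="Test stream" \
        -metadata comment="server clock at start: 2017-11-30T12:00:00Z" \
        -f flv rtmp://localhost/live/stream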

    My confusion:

    Continuing to look into this after creating my post, there are a couple of things that I don’t fully understand or that don’t quite make sense to me. For one, if FFMPEG only injects a single onMetaData packet into the stream, then I would expect anyone joining the stream late to miss it. However, when I join the stream 8 hours later, I see Red5 send me the metadata packet complete with title, author, etc. So it’s almost as if the metadata packet has no timestamp associated with it and is instead just generic metadata about the video.

    Furthermore, there’s something called "AMF" that I’m not familiar with, but it may be important?

    Original Post

    I spent today playing around with ways to embed the system clock at encode time into a stream, so that I could compare this value against the system clock at decode time to get a rough estimate of RTMP latency. Unfortunately, the majority of the techniques I tried ended up failing.

    One thing I wanted to try next was taking advantage of RTMP’s onMetaData to send the current system clock periodically (maybe every 5 seconds) as part of the stream for any clients to listen for.

    Unfortunately FFMPEG’s -metadata option seems to only be for one-time metadata when the stream first loads. I can’t figure out how to add continuous (and generated) values to a stream.

    Is there a way to do this?

  • Save RTP vp8 payload packets to .webm file

    3 October 2017, by Ibrahim

    I have saved a video call to a .pcap file with Wireshark, and I want to extract the video from the RTP packets. The RTP payload type is VP8, and I was able to identify the VP8 RTP packets using the libpcap library in C++. I then saved the contents of all the VP8 RTP packets to a file, but I cannot convert this raw VP8 data to .mp4 using ffmpeg; the conversion fails with: Invalid data found when processing input

    What are the steps to get a .mp4 or .webm video file from VP8 RTP packets?

    Edit: I was able to get the raw VP8 data by excluding the VP8 payload descriptor, payload header and keyframe header. Then I added an IVF file header, plus a frame header for each raw VP8 frame, according to the IVF document.
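
    For concreteness, writing that IVF wrapper looks roughly like this (a C++ sketch; the width, height and frame rate are assumptions and must match what the encoder actually produced, e.g. values parsed from the VP8 keyframe header):

    #include <cstdint>
    #include <cstdio>
    #include <vector>

    // Write a little-endian integer as n bytes.
    static void put_le(FILE *f, uint64_t v, int n) {
        for (int i = 0; i < n; i++) fputc((v >> (8 * i)) & 0xff, f);
    }

    // 32-byte IVF file header, then a 12-byte header before each VP8 frame.
    void write_ivf(FILE *f, const std::vector<std::vector<uint8_t>> &frames,
                   uint16_t width, uint16_t height, uint32_t fps) {
        fwrite("DKIF", 1, 4, f);       // signature
        put_le(f, 0, 2);               // version
        put_le(f, 32, 2);              // header length
        fwrite("VP80", 1, 4, f);       // codec FourCC
        put_le(f, width, 2);           // must match the encoded stream
        put_le(f, height, 2);
        put_le(f, fps, 4);             // timebase denominator
        put_le(f, 1, 4);               // timebase numerator
        put_le(f, frames.size(), 4);   // frame count
        put_le(f, 0, 4);               // unused
        uint64_t pts = 0;
        for (const auto &frame : frames) {
            put_le(f, frame.size(), 4);    // frame size, excluding this header
            put_le(f, pts++, 8);           // 64-bit presentation timestamp
            fwrite(frame.data(), 1, frame.size(), f);
        }
    }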

    But when I try to convert my IVF file (output1) to output1.mp4 using ffmpeg

    ffmpeg -i output1 -c:v vp8 output1.mp4

    I get errors:

    conversion error
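
    One thing worth trying, assuming the IVF file itself is well-formed, is remuxing without re-encoding, since ffmpeg can stream-copy VP8 into WebM (a sketch, not a verified fix):

    ffmpeg -i output1 -c:v copy output1.webm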

  • FFMPEG iOS 7 Library

    15 September 2017, by Destiny Dawn

    I’ve tried reading many tutorials.
    I’ve spent hours on Google and Stack Overflow trying to find an answer.
    So far I’ve read: Trying to compile the FFMPEG libraries for iPhoneOS platform with armv6 and arv7 architecture, FFMPEG integration on iphone/ipad project, and https://github.com/lajos/iFrameExtractor, among many others.

    I’m trying to build this library for iOS 7/Xcode 5 compatibility, but it’s not working.
    A common error I’d get is:

    Configured with: --prefix=/Applications/Xcode.app/Contents/Developer/usr --with-gxx-include-dir=/usr/include/c++/4.2.1
    yasm/nasm not found or too old. Use --disable-yasm for a crippled build.

    If you think configure made a mistake, make sure you are using the latest
    version from Git.  If the latest version fails, report the problem to the
    ffmpeg-user@ffmpeg.org mailing list or IRC #ffmpeg on irc.freenode.net.
    Include the log file "config.log" produced by configure as this will help
    solving the problem.
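
    The first error simply means the assembler is missing; installing a recent yasm before running configure usually resolves it (assuming Homebrew is available; building yasm from source works too):

    brew install yasm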

    Once configure finished, I’d also get many more, such as:

    rm: illegal option -- .
    usage: rm [-f | -i] [-dPRrvW] file ...
          unlink file
    make: *** [clean] Error 64

    I’ve mostly tried starting with this command, but it always crashes on "make clean":

    ./configure \
    --cc=/Applications/Xcode.app/Contents/Developer/Platforms/iPhoneOS.platform/Developer/usr/bin/gcc \
    --as='/usr/local/bin/gas-preprocessor.pl /Applications/Xcode.app/Contents/Developer/Platforms/iPhoneOS.platform/Developer/usr/bin/gcc' \
    --sysroot=/Applications/Xcode.app/Contents/Developer/Platforms/iPhoneOS.platform/Developer/SDKs/iPhoneOS7.0.sdk \
    --target-os=darwin \
    --arch=arm \
    --cpu=cortex-a8 \
    --extra-cflags='-arch armv7' \
    --extra-ldflags='-arch armv7 -isysroot /Applications/Xcode.app/Contents/Developer/Platforms/iPhoneOS.platform/Developer/SDKs/iPhoneOS7.0.sdk' \
    --prefix=compiled/armv7 \
    --enable-cross-compile \
    --enable-nonfree \
    --enable-gpl \
    --disable-armv5te \
    --disable-swscale-alpha \
    --disable-doc \
    --disable-ffmpeg \
    --disable-ffplay \
    --disable-ffprobe \
    --disable-ffserver \
    --disable-asm \
    --disable-debug
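
    If configure completes cleanly, the usual follow-up is the standard build-and-install sequence (the library then lands under the compiled/armv7 prefix given above):

    make clean
    make
    make install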