Other articles (108)

  • Update from version 0.1 to 0.2

    24 June 2013

    An explanation of the notable changes made when moving from MediaSPIP version 0.1 to version 0.2. What's new?
    Regarding software dependencies: the latest versions of FFMpeg (>= v1.2.1) are used; the dependencies for Smush are installed; MediaInfo and FFprobe are installed to retrieve metadata; ffmpeg2theora is no longer used; flvtool2 is no longer installed, in favour of flvtool++; ffmpeg-php, which is no longer maintained, is no longer (...)

  • Customising by adding your logo, banner or background image

    5 September 2013

    Some themes support three customisation elements: adding a logo; adding a banner; adding a background image.

  • Writing a news item

    21 June 2013

    Present changes to your MediaSPIP, or news about your projects, using the news section of your MediaSPIP.
    In the default MediaSPIP theme, spipeo, news items are displayed at the bottom of the main page, below the editorials.
    You can customise the form used to create a news item.
    News creation form: for a document of the news type, the default fields are: publication date (customise the publication date) (...)

On other sites (15342)

  • avformat/matroskadec: Don't output uninitialized data for RealAudio 28.8

    22 April 2020, by Andreas Rheinhardt
    avformat/matroskadec: Don't output uninitialized data for RealAudio 28.8
    

    The Matroska demuxer splits every sequence of h Matroska Blocks into
    h * w / cfs packets of size cfs; here h (sub_packet_h), w (frame_size)
    and cfs (coded_framesize) are parameters from the track's CodecPrivate.

    It does this by splitting the Block's data into h/2 pieces of size cfs each
    and putting them into a buffer at offset m * 2 * w + n * cfs where
    m (range 0..(h/2 - 1)) indicates the index of the current piece in the
    current Block and n (range 0..(h - 1)) is the index of the current Block
    in the current sequence of Blocks. The data in this buffer is then used
    for the output packets.

    The problem is that there is currently no check to actually guarantee
    that no uninitialized data will be output. One instance where this is
    trivially so is if h == 1; another is if cfs * h is so small that the
    input pieces do not cover everything that is output. In order to
    preclude this, rmdec.c checks for h * cfs == 2 * w and h >= 2. The
    former requirement certainly makes much sense, as it means that for
    every given m the input pieces (corresponding to the h different values
    of n) form a nonoverlapping partition of the two adjacent frames of size w
    corresponding to m. But precluding h == 1 is not enough, other odd
    values can cause problems, too. That is because the assumption behind
    the code is that h frames of size w contain data to be output, although
    the real number is h/2 * 2. E.g. for h = 3, cfs = 2 and w = 3 the
    current code would output four (== h * w / cfs) packets, although only
    data for three (== h/2 * h) packets has been read.

    (Notice that if h * cfs == 2 * w, h being even is equivalent to
    cfs dividing w; the latter condition also seems very reasonable:
    it means that the subframes are a partition of the frames.)

    Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>

    • [DH] libavformat/matroskadec.c
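
    As a sketch of the layout described above (illustrative only, not the actual matroskadec.c code), the deinterleave amounts to:

        #include <stdint.h>
        #include <string.h>

        /* Copy the h/2 pieces of Block n (n in 0..h-1) into a reassembly
         * buffer of h * w bytes; piece m lands at offset m * 2 * w + n * cfs. */
        static void deinterleave_block(uint8_t *buf, const uint8_t *block,
                                       int n, int h, int w, int cfs)
        {
            for (int m = 0; m < h / 2; m++)
                memcpy(buf + m * 2 * w + n * cfs, block + m * cfs, cfs);
        }

    With the rmdec.c conditions h * cfs == 2 * w and h even, the h * (h/2) pieces of size cfs tile the h * w buffer bytes exactly, so no uninitialized bytes can reach the output packets.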
  • Embed video stream with custom metadata

    15 May 2022, by Sergey Kolesnik

    I have an optical system that provides a UDP video stream.


    From the device specification FAQ:


    Both single metadata (KLV) stream and compressed video (H.264) with metadata (KLV) are available on Ethernet link. Compressed video and metadata are coupled in the same stream compliant with STANAG 4609 standard. Each encoded video stream is encapsulated with the associated metadata within an MPEG-TS single program stream over Ethernet UDP/IP. The video and metadata are synchronized through the use of timestamps.


    There are also other devices that provide data about the state of an aircraft (velocity, coordinates, etc.). This data should be displayed in a client GUI alongside the video, and of course it has to be synchronized with the current video frame.


    One of the approaches I thought of is to embed this data into the video stream, but I am not sure whether that is possible, or whether I should use a protocol other than UDP for this purpose.


    Is it possible/reasonable to use such an approach? Is the ffmpeg library suitable in this case? If not, what are other ways to synchronize data with a video frame? Latency is crucial, although bandwidth is limited to 2-5 Mbps.


    It seems to be possible using ffmpeg: an AVPacket can be given additional data via the function av_packet_add_side_data, which takes a preallocated buffer, a size and an AVPacketSideDataType. However, I am not sure yet which enum value of AVPacketSideDataType can be used for custom user-provided binary data.


    Something similar that might be used for my needs:


    How do I encode KLV packets to an H.264 video using libav*
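
    One hedged sketch of the usual alternative with the ffmpeg libraries: instead of per-packet side data, declare a second, data-only KLV stream and interleave its packets with the video by timestamp, which matches the MPEG-TS arrangement the FAQ above describes. Illustrative only (error handling trimmed; helper names are made up):

        #include <libavformat/avformat.h>
        #include <string.h>

        /* Declare a KLV data stream next to the H.264 stream; returns its index. */
        static int add_klv_stream(AVFormatContext *ofmt)
        {
            AVStream *st = avformat_new_stream(ofmt, NULL);
            if (!st)
                return AVERROR(ENOMEM);
            st->codecpar->codec_type = AVMEDIA_TYPE_DATA;
            st->codecpar->codec_id   = AV_CODEC_ID_SMPTE_KLV;
            return st->index;
        }

        /* Mux one KLV record, timestamped to match the video frame it describes. */
        static int write_klv(AVFormatContext *ofmt, int klv_index,
                             const uint8_t *klv, int size, int64_t pts)
        {
            AVPacket *pkt = av_packet_alloc();
            int ret;
            if (!pkt)
                return AVERROR(ENOMEM);
            if ((ret = av_new_packet(pkt, size)) < 0) {
                av_packet_free(&pkt);
                return ret;
            }
            memcpy(pkt->data, klv, size);
            pkt->stream_index = klv_index;
            pkt->pts = pkt->dts = pts;   /* same time base as the video stream */
            ret = av_interleaved_write_frame(ofmt, pkt);
            av_packet_free(&pkt);
            return ret;
        }

    The receiver then sees the KLV packets as their own stream and matches them to frames by pts; as far as I can tell there is no AVPacketSideDataType reserved for arbitrary user data.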


  • How to work with data from streaming services in my Java application?

    24 November 2020, by gabriel garcia

    I'm currently trying to develop a "streaming client" as a way to organize multiple streaming services (twitch, yt, mitele...) in a single desktop application written in Java.


    It basically relies on streamlink (which relies on ffmpeg) thanks to all its features, so my project could be defined as a frontend for streamlink.


    Straight to the point: one of the features I'd like to add is the option to programmatically record streams in the background and show the video stream to the user when requested. Since there's also the possibility that the user wants to watch the stream without recording it, I'm forced to work with the raw byte data sent from those streaming sources.


    So the problem is basically that I do not know much about video coding/decoding/muxing/demuxing, nor about video theory such as container structure, video formats and so on.


    But the idea is to work with all the data sent from the stream source (let's say twitch, for example), read these bytes (I'm not sure what kind of information is sent to the client, nor in what format) from the java.lang.Process's stdout, and then present it to the client.


    Here's another problem: I don't know how to play video streams in JavaFX, and I don't think it's even supported right now. So I would have to extract each frame and its associated sound from stdout and show them to the user each time a new frame is received (oops, another problem, since I don't know where each frame starts/ends while reading stdout line by line).


    As a summary:

    • How can I know where each frame starts/stops?
    • How can I extract the image and sound from each frame?
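
    If streamlink is asked to write the stream to stdout (streamlink --stdout <url> best), the bytes arriving on the Java side are a standard container (typically MPEG-TS), and a demuxer answers both questions: it finds the frame boundaries and separates video from audio. A minimal sketch with libavformat (in C for illustration; Java wrappers such as JavaCV expose the same calls):

        #include <libavformat/avformat.h>
        #include <stdio.h>

        /* Minimal sketch: let libavformat do the framing of a container
         * piped to stdin, e.g.  streamlink --stdout <url> best | ./reader */
        int main(void)
        {
            AVFormatContext *fmt = NULL;
            AVPacket *pkt = av_packet_alloc();

            if (!pkt || avformat_open_input(&fmt, "pipe:0", NULL, NULL) < 0 ||
                avformat_find_stream_info(fmt, NULL) < 0)
                return 1;

            /* Each call yields one complete, timestamped packet belonging
             * to one stream (video or audio); no manual start/stop logic. */
            while (av_read_frame(fmt, pkt) >= 0) {
                printf("stream %d pts %lld size %d\n", pkt->stream_index,
                       (long long)pkt->pts, pkt->size);
                av_packet_unref(pkt);
            }
            avformat_close_input(&fmt);
            av_packet_free(&pkt);
            return 0;
        }

    Decoding those packets into displayable images and audio samples is then a separate step (avcodec_send_packet / avcodec_receive_frame), rather than anything parsed out of stdout lines.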

    I hope I'm not asking too much and that you could shed some light upon my darkness.
