
Other articles (58)

  • HTML5 audio and video support

    13 April 2011

    MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
    The MediaSPIP player used has been created specifically for MediaSPIP and can be easily adapted to fit in with a specific theme.
    For older browsers the Flowplayer flash fallback is used.
    MediaSPIP allows for media playback on major mobile platforms with the above (...)

  • From upload to the final video [standalone version]

    31 January 2010

    The path of an audio or video document through SPIPMotion is divided into three distinct stages.
    Upload and retrieval of information about the source video
    First, a SPIP article must be created and the "source" video document attached to it.
    When this document is attached to the article, two actions are executed in addition to the normal behaviour: retrieval of the technical information about the file's audio and video streams; generation of a thumbnail: extraction of a (...)
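The two extra actions described above (stream-info retrieval and thumbnail generation) can be sketched against the ffprobe/ffmpeg command line; this is a minimal illustration, not SPIPmotion's actual code, and the helper names are ours:

```python
import subprocess

def probe_cmd(source):
    """Build an ffprobe command that dumps the file's technical stream info as JSON."""
    return ["ffprobe", "-v", "quiet", "-print_format", "json",
            "-show_format", "-show_streams", source]

def thumbnail_cmd(source, out_png, at_seconds=1.0):
    """Build an ffmpeg command that extracts a single frame as a thumbnail."""
    return ["ffmpeg", "-y", "-ss", str(at_seconds), "-i", source,
            "-frames:v", "1", out_png]

def run(cmd):
    """Run a command and return its stdout (raises if the binary is missing)."""
    return subprocess.run(cmd, capture_output=True, check=True).stdout
```

Both commands would be invoked right after the document is attached, with the JSON from the first feeding the article's technical metadata.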

  • Libraries and binaries specific to video and audio processing

    31 January 2010

    The following software and libraries are used by SPIPmotion in one way or another.
    Required binaries FFmpeg: the main encoder; transcodes almost every type of video and audio file into formats playable on the Internet. See this tutorial for its installation; Oggz-tools: inspection tools for Ogg files; Mediainfo: retrieves information from most video and audio formats;
    Additional, optional binaries flvtool2: extraction / (...)
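A setup script can verify that the required binaries are on the PATH before processing starts. This is a sketch, not part of SPIPmotion; the exact tool names (e.g. which of the oggz-* tools is needed) are assumptions:

```python
import shutil

# Assumed binary names; Oggz-tools actually ships several oggz-* utilities.
REQUIRED = ["ffmpeg", "oggz-info", "mediainfo"]
OPTIONAL = ["flvtool2"]

def missing_binaries(names, which=shutil.which):
    """Return the binaries from `names` that are not found on PATH.

    `which` is injectable so the check can be tested without the real PATH."""
    return [n for n in names if which(n) is None]
```

A caller would abort on `missing_binaries(REQUIRED)` and merely warn on `missing_binaries(OPTIONAL)`.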

On other websites (9111)

  • How to improve video player processing using Qt and FFmpeg?

    13 September 2016, by Eric Menezes

    Some time ago, I started to develop a video player/analyser. Since it is an analyser as well, the application must keep both the next frames and the previous ones in its buffer. That's where the complication begins.

    To achieve that, we use a VideoProducer that decodes the frames and audio from the video (using ffmpeg) and adds them to a buffer, from which the video and audio consumers retrieve those objects (VideoFrame and AudioChunk). For this job we have several QThreads: one producer, two consumers and (the biggest troublemaker) two workers, which retrieve objects from the producer's buffer and insert them into a circular buffer (needed because of the previous frames). These workers are important for the backwards buffering job (this player should play backwards too).

    So, the player now runs, but not that well: it is noticeably losing performance. I have already tried some changes, such as removing the producer buffer and using just the circular one. Still, some questions remain:

    • Should I keep using QThread with a reimplemented run()? I have read that Signals & Slots work better;

    • If Signals & Slots are worth it, does the producer still need to reimplement QThread::run()?

    • Considering that the buffer must hold some previous frames and that poor-quality videos will be played, is this design (VideoProducer inserts objects into a Buffer; AudioConsumer and FrameConsumer retrieve those objects from the Buffer and display/play them) the best approach?

    • What is the best way to sync audio and video? Syncing on the audio pts works well, but problems appear occasionally; and

    • For buffering backwards, ffmpeg does not deliver frames in that direction, so I need to seek back, decode older frames, reorder them and prepend them to the buffer. This job is done by the Workers, other QThreads that keep consuming from the Producer's buffer and, when buffering backwards, ask for a seek and do the reordering. I can only guess that this is bad, and I assume the reordering should be done at the Producer level. Is there a better way to do this?

    I know it’s a lot of questions, and I’m sorry for that, but I don’t know where to find these answers.

    Thanks for helping.

    For better understanding, here is how it has been done:

    • VideoProducer -> Decoder QThread. Runs in a loop, decoding and enqueuing frames into a Buffer.

    • FrameConsumer -> Video consumer. Retrieves frames from the frame CircularBuffer in a loop using another QThread, displays each frame and sleeps a few milliseconds based on the video fps and the AudioConsumer clock time.

    • AudioConsumer -> Audio consumer and the video's clock. Works with signals, using QAudioOutput::notify() to retrieve chunks of audio from the audio CircularBuffer and insert them into the QAudioOutput buffer. When the first frame is decoded, its pts is used to start the clock (if a seek has been requested, the next audio frame marks the clock's start time).

    • Worker -> One per stream (audio and video). A QThread running in a loop (run() reimplemented), retrieving objects from the Buffer and inserting them (backwards or forwards) into the CircularBuffer.

    And other threads that manage the UI, filters and some operations with frames/chunks...
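The core of the architecture described above (a decoding producer, a bounded circular buffer that supports prepending for the backwards case, and a blocking consumer) can be sketched in a language-agnostic way. This is a simplified illustration in Python rather than Qt/C++, with stand-ins for the ffmpeg decode loop and the display step; all names are ours:

```python
import threading
from collections import deque

class CircularBuffer:
    """A bounded, thread-safe FIFO; supports prepending for backwards playback."""
    def __init__(self, capacity):
        self.capacity = capacity
        self._items = deque()
        self._cond = threading.Condition()

    def push(self, item, front=False):
        with self._cond:
            while len(self._items) >= self.capacity:
                self._cond.wait()          # block the producer when full
            (self._items.appendleft if front else self._items.append)(item)
            self._cond.notify_all()

    def pop(self):
        with self._cond:
            while not self._items:
                self._cond.wait()          # block the consumer when empty
            item = self._items.popleft()
            self._cond.notify_all()
            return item

def producer(buf, frames):
    for f in frames:                       # stand-in for the ffmpeg decode loop
        buf.push(f)
    buf.push(None)                         # sentinel: end of stream

def consumer(buf, out):
    while (f := buf.pop()) is not None:
        out.append(f)                      # stand-in for displaying the frame

buf = CircularBuffer(capacity=16)
shown = []
t1 = threading.Thread(target=producer, args=(buf, list(range(5))))
t2 = threading.Thread(target=consumer, args=(buf, shown))
t1.start(); t2.start(); t1.join(); t2.join()
```

Blocking on a full buffer (rather than silently dropping) gives natural backpressure between the producer and the consumers, which is one plausible source of the performance loss the question describes.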

  • FFmpeg Composer Definition

    9 May 2017, by amanguel

    Could you please help me with the following ffmpeg command, in order to compose 0 to 10 MKV video and MKA audio files into a single MPEG-4 video and audio file? The resulting layout is a 3x3 grid of circular videos.

    ffmpeg \
    -i "1.mkv" \
    -f matroska -vcodec vp8 \
    -an \
    -i "2.mkv" \
    -f matroska -vcodec vp8 \
    -an \
    -i "1.mka" \
    -i "2.mka" \
    -filter_complex "color=s=360x640:c=white [base]; \
    [0:v] setpts=PTS-STARTPTS, scale=100x100, geq='st(3,pow(X-(W/2),2)+pow(Y-(H/2),2));if(lte(ld(3),50*50),255,0)' [upperleft]; \
    [1:v] setpts=PTS-STARTPTS, scale=100x100, geq='st(3,pow(X-(W/2),2)+pow(Y-(H/2),2));if(lte(ld(3),50*50),255,0)' [upperright]; \
    [base][upperleft] overlay=shortest=0:x=5:y=5 [tmp1]; \
    [tmp1][upperright] overlay=shortest=0:x=125:y=5" \
    -map 2:a -map 3:a \
    -t 60 \
    -y "composed.mp4"

    Thank you very much in advance,
    Andres
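One way to extend the two-input command above to more inputs is to generate the -filter_complex string programmatically instead of writing it by hand. The sketch below reuses the exact geq mask and the 120-pixel grid step (cell of 100 plus margins) from the command; the function name is ours, and it only builds the string, it does not run ffmpeg:

```python
def grid_filter(n, cell=100, pad=5, canvas="360x640"):
    """Build an ffmpeg -filter_complex string placing n video inputs on a
    3x3 grid, each scaled to cell x cell and run through the circular geq
    mask from the question."""
    mask = ("geq='st(3,pow(X-(W/2),2)+pow(Y-(H/2),2));"
            "if(lte(ld(3),50*50),255,0)'")
    parts = [f"color=s={canvas}:c=white [base]"]
    for i in range(n):
        parts.append(f"[{i}:v] setpts=PTS-STARTPTS, scale={cell}x{cell}, {mask} [v{i}]")
    prev = "base"
    for i in range(n):
        # Grid step of cell + 20 reproduces x=5, x=125, ... from the command.
        x, y = pad + (i % 3) * (cell + 20), pad + (i // 3) * (cell + 20)
        out = f"tmp{i + 1}" if i < n - 1 else ""   # last overlay is the final output
        step = f"[{prev}][v{i}] overlay=shortest=0:x={x}:y={y}"
        parts.append(step + (f" [{out}]" if out else ""))
        prev = out
    return "; ".join(parts)
```

For n=2 this reproduces the filter graph in the command above; a 3x3 grid tops out at nine tiles, so a tenth input would need a different layout.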

  • How to stream lossy audio in chunks? [closed]

    18 May 2024, by Adam

    On the client side, I'm collecting the periodically received audio slices into a circular buffer during playback, so that I never have to hold the entire file at once, only a small portion of its size.

    


    On the server side I tried splitting MP3/M4A/OGG files into frames, but even with ffmpeg a gap is created in each chunk (usually at the beginning/end), which prevents me from seamlessly concatenating them on the client side. Not even decoding the MP3 to PCM beforehand and then splitting the WAV file into MP3 chunks helped. Of course, when splitting into WAV chunks there is no gap, but then a single slice is larger than the entire MP3 file.
    (image: MP3 gap)

    


    I also tried both stream copying and re-encoding:

    


    ffmpeg -i input.mp3 -ss 0 -t 30 -c:a copy -f mp3 pipe:1
    ffmpeg -i input.mp3 -ss 0 -t 30 -c:a libmp3lame -f mp3 pipe:1
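A likely cause of the gaps: an MPEG-1 Layer III stream is a sequence of fixed-duration frames (1152 samples each), and encoders such as LAME add encoder delay and padding to every independently encoded piece, so re-encoding each chunk inserts silence at its edges; even byte-level splitting of an existing stream can be imperfect because the bit reservoir lets a frame borrow bytes from earlier frames. For a CBR stream, cutting on frame boundaries at least keeps every chunk decodable, and the boundary arithmetic follows directly from the header fields. A sketch of that arithmetic (helper names are ours):

```python
SAMPLES_PER_FRAME = 1152  # MPEG-1 Layer III

def frame_bytes(bitrate_bps, sample_rate, padding=0):
    """Byte length of one MPEG-1 Layer III frame, from the header formula
    144 * bitrate / sample_rate (+1 byte when the padding bit is set)."""
    return 144 * bitrate_bps // sample_rate + padding

def frame_duration_ms(sample_rate):
    """Duration of one frame in milliseconds."""
    return 1000 * SAMPLES_PER_FRAME / sample_rate

def frames_for_seconds(seconds, sample_rate):
    """How many whole frames cover `seconds` of audio (round down)."""
    return int(seconds * sample_rate // SAMPLES_PER_FRAME)
```

For a 128 kbps, 44.1 kHz CBR file, each frame is 417 or 418 bytes and lasts about 26.1 ms, so a 30-second slice is 1148 whole frames; slicing at those byte offsets avoids re-encoding entirely, though the first frames of a chunk may still decode imperfectly when the bit reservoir is in use.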