Other articles (88)

  • Improving the base version

    13 September 2013

    Nicer multiple selection
    The Chosen plugin improves the usability of multiple-select fields; compare the two images below.
    To use it, enable the Chosen plugin (Configuration générale du site > Gestion des plugins), then configure it (Les squelettes > Chosen) by enabling Chosen on the public site and specifying which form elements to enhance, for example select[multiple] for multiple-selection lists (...)

  • Custom menus

    14 November 2010

    MediaSPIP uses the Menus plugin to manage several configurable navigation menus.
    This lets channel administrators fine-tune how these menus are set up.
    Menus created when the site is initialised
    By default, three menus are created automatically when the site is initialised: The main menu; Identifier: barrenav; This menu is generally inserted at the top of the page, after the header block, and its identifier makes it compatible with templates based on Zpip; (...)

  • Possible deployments

    31 January 2010

    Two types of deployment are possible, depending on two factors: the intended installation method (standalone or as a farm), and the expected number of daily encodes and level of traffic.
    Video encoding is a heavy process that consumes a great deal of system resources (CPU and RAM), so this must be taken into account; the system is therefore only viable on one or more dedicated servers.
    Single-server version
    The single-server version consists of using only one (...)

On other sites (8915)

  • Live Stream using .m3u8 and .ts files with iPhone as server

    26 February 2015, by Bhumit

    I am trying to live stream from the iPhone camera. From some research, I found that I can use .m3u8 playlists for live video streaming, and that these playlists should reference .ts (MPEG-2 transport stream) files.

    The file I have on my iPhone is an .mp4, however, and that does not work with .m3u8, so I figured I would have to convert the .mp4 to .ts, but I have not succeeded in doing so.

    I found that it is possible to convert video with the ffmpeg library, as mentioned in this article here. I have successfully imported the ffmpeg library, but I cannot figure out how to use it to convert a video, as this is my first time using it.

    Another thing: the Apple documentation says

    There are a number of hardware and software encoders that can create
    MPEG-2 transport streams carrying MPEG-4 video and AAC audio in real
    time.

    What is being said here? Is there another way to use .mp4 files for live streaming from iOS without converting them?

    Let me know if I am not being clear; I can provide more information. Any suggestion is appreciated. I would also like to know whether I am on the right path here.

    EDIT

    I am adding more information to my question. Basically, what I am asking is: we can convert an .mp4 video to .ts using the following command

    ffmpeg -i file.mp4 -acodec libfaac -vcodec libx264 -an -map 0 -f segment -segment_time 10 -segment_list test.m3u8 -segment_format mpegts -vbsf h264_mp4toannexb -flags -global_header stream%05d.ts

    How can I use the ffmpeg library to do what this command does on iOS?
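
    As an aside, the quoted command both requests an audio codec (-acodec libfaac) and disables audio with -an, and libfaac is no longer shipped with modern ffmpeg builds. A cleaned-up sketch of the same segmenting step, assuming a stock ffmpeg build with libx264 and the native aac encoder (file names are placeholders), might look like:

    ```shell
    # Split file.mp4 into 10-second MPEG-TS segments plus an HLS playlist.
    # -bsf:v h264_mp4toannexb rewrites H.264 into the Annex B form MPEG-TS expects.
    ffmpeg -i file.mp4 \
      -c:v libx264 -c:a aac \
      -f segment -segment_time 10 \
      -segment_list test.m3u8 -segment_format mpegts \
      -bsf:v h264_mp4toannexb \
      stream%05d.ts
    ```

    The same arguments map onto ffmpeg's library API when it is compiled for iOS, though the exact wrapper code depends on how the library was built and linked.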

  • Node.js Live Streaming : Avoid buffering

    27 October 2012, by Shirish Kamath

    I've written a small Node.js server that outputs system audio, captured by ffmpeg on Windows (using DirectShow), to the browser as a streaming MP3 file. The audio needs to be as live as possible, with minimal or no buffering; a "skipping" effect in the audio is perfectly acceptable.

    When I play the audio in Chrome using the HTML5 audio tag, there's a delay of about 8-10 secs over a low-latency LAN connection. I suspected this to be a client-side buffer, and used a Flash MP3 player on the client-side, which brought down the delay to 2-3 secs.

    Now, the buffering seems to be taking place on the server side. The documentation for Node.js's response.write mentions that the data is written to kernel buffers. How do I avoid this buffering altogether, or at least work around it, so that the client always gets the latest audio data? Are there strategies for handling 'drain' events so as to always push live data?

    On the request object, I've used setNoDelay(true) to disable Nagle's algorithm. The following snippet shows how data is written whenever the spawned ffmpeg process emits data.

    var clients = []; //List of client connections currently being served
    ffmpeg.stdout.on('data', function(data) {
       for(var i = 0; i < clients.length; i++){
           clients[i].res.write(data);
       }
    });
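
    One way to act on the 'drain' question above is to treat write() returning false as a signal that a client's socket buffer is full, stop writing to that client, and resume only after 'drain' fires. A slow client then skips chunks instead of accumulating a growing backlog, which matches the acceptable "skipping" behaviour described. This is only a sketch built on the snippet above; the stalled flag and the broadcast name are assumptions, not part of the original code.

    ```javascript
    // Sketch: skip (rather than queue) data for clients whose socket buffer is full.
    // `clients` holds { res } entries as in the snippet above; `stalled` is added here.
    const clients = [];

    function broadcast(data) {
      for (const client of clients) {
        if (client.stalled) continue;        // still draining: drop this chunk for it
        const ok = client.res.write(data);   // false => data queued in kernel/socket buffer
        if (!ok) {
          client.stalled = true;             // stop writing until the socket catches up
          client.res.once('drain', () => {   // fired once the buffer has emptied
            client.stalled = false;
          });
        }
      }
    }
    ```

    With this in place, ffmpeg.stdout.on('data', broadcast) replaces the loop in the snippet above, and a stalled client simply misses a few MP3 frames, since an MP3 decoder can resynchronise on the next frame header.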
  • ffmpeg - preserve time base and pts of all frames during transcode

    18 March 2021, by jdizzle

    Context:

    I have an application that produces mp4s with HEVC encoding. I want to convert them to AVC for use in browser-based display. A crucial requirement is to preserve the exact PTS of every frame, as this is how we correlate the frames to other data streams not included in the video.
    Question:

    How do I make ffmpeg preserve this information across the transcode? All the obvious flags seem to have no effect, and ffmpeg just does whatever it wants.
    $ ffprobe -show_streams original.mp4 2>/dev/null | grep time_base
codec_time_base=16666667/500000000
time_base=1/1000

    Here is my convert command:

    $ ffmpeg -i original.mp4 -copyts -copytb 0 test.mp4

    And its result:

    $ ffprobe -show_streams test.mp4 2>/dev/null | grep time_base
codec_time_base=1/60
time_base=1/15360

    I would expect the time_bases to match. The PTS values of the frames also don't match when running ffprobe -show_frames.

    EDIT:
    @Gyan suggested using -video_track_timescale, but that didn't produce exactly the behavior I was looking for:

    $ sdiff <(ffprobe -show_frames test.mp4  | grep pkt_pts_time) <(ffprobe -show_frames original.mp4 | grep pkt_pts_time)
pkt_pts_time=0.000000                           pkt_pts_time=0.000000
pkt_pts_time=0.033000                           pkt_pts_time=0.033000
pkt_pts_time=0.067000                         | pkt_pts_time=0.066000
pkt_pts_time=0.100000                           pkt_pts_time=0.100000
pkt_pts_time=0.133000                           pkt_pts_time=0.133000
pkt_pts_time=0.167000                         | pkt_pts_time=0.166000
pkt_pts_time=0.200000                           pkt_pts_time=0.200000
pkt_pts_time=0.233000                           pkt_pts_time=0.233000
pkt_pts_time=0.267000                         | pkt_pts_time=0.266000
pkt_pts_time=0.300000                           pkt_pts_time=0.300000
pkt_pts_time=0.333000                           pkt_pts_time=0.333000
pkt_pts_time=0.367000                         | pkt_pts_time=0.366000
pkt_pts_time=0.400000                           pkt_pts_time=0.400000
pkt_pts_time=0.433000                           pkt_pts_time=0.433000
pkt_pts_time=0.467000                           pkt_pts_time=0.467000
pkt_pts_time=0.500000                           pkt_pts_time=0.500000
pkt_pts_time=0.533000                         | pkt_pts_time=0.532000
pkt_pts_time=0.567000                         | pkt_pts_time=0.565000
pkt_pts_time=0.600000                         | pkt_pts_time=0.598000
pkt_pts_time=0.633000                         | pkt_pts_time=0.631000
pkt_pts_time=0.667000                         | pkt_pts_time=0.665000
pkt_pts_time=0.700000                         | pkt_pts_time=0.698000
...
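
    A likely explanation for the remaining one-millisecond differences: original.mp4 stores PTS in a 1/1000 time base, so a 1/30 s frame interval cannot be represented exactly in whole milliseconds, and the transcoded file rounds the same instants differently. Matching the container timescale narrows the gap but cannot always eliminate it. A minimal sketch of the suggested approach, assuming the source's reported time_base of 1/1000 (file names are the question's):

    ```shell
    # Keep input timestamps (-copyts) and write the mp4 video track with the
    # same 1/1000 timescale that ffprobe reported for original.mp4.
    ffmpeg -i original.mp4 -copyts -video_track_timescale 1000 test.mp4
    ```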