
Media (1)

Keyword: - Tags -/Christian Nold

Other articles (53)

  • Publishing on MédiaSpip

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, provided your MédiaSpip installation is at version 0.2 or higher. If in doubt, contact your MédiaSpip administrator to find out.

  • Libraries and binaries specific to video and audio processing

    31 January 2010

    The following software and libraries are used by SPIPmotion in one way or another.
    Required binaries: FFMpeg: the main encoder; it transcodes almost every type of video and audio file into formats readable on the Internet (see this tutorial for its installation); Oggz-tools: inspection tools for ogg files; Mediainfo: retrieves information from most video and audio formats;
    Complementary, optional binaries: flvtool2: (...)
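
    Purely as an illustration (this command is not from the article), the kind of FFMpeg invocation that turns a source file into a web-readable Ogg Theora/Vorbis video; source.mov and output.ogv are placeholder names:

    # Illustration only, not from the article; file names are placeholders.
    # Requires an FFmpeg build with libtheora and libvorbis enabled.
    ffmpeg -i source.mov -c:v libtheora -q:v 6 -c:a libvorbis -q:a 4 output.ogv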

  • From upload to the final video [standalone version]

    31 January 2010

    The path of an audio or video document through SPIPMotion is divided into three distinct stages.
    Upload and retrieval of information about the source video
    First, a SPIP article must be created and the "source" video document attached to it.
    When this document is attached to the article, two actions are executed in addition to the normal behavior: retrieval of the technical information of the file's audio and video streams; generation of a thumbnail: extraction of a (...)
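
    As a rough sketch of what those two extra actions could amount to with the tools named above (SPIPmotion's actual implementation may differ; source.mp4 is a placeholder):

    # Hypothetical equivalents of the two extra actions; not SPIPmotion's code.
    # Retrieve the technical information of the audio and video streams:
    mediainfo source.mp4
    # Generate a thumbnail by extracting a single frame from the video:
    ffmpeg -ss 3 -i source.mp4 -frames:v 1 -vf scale=320:-2 thumbnail.jpg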

On other sites (10746)

  • "At least one output file must be specified" error when using concat protocol

    18 July 2021, by DataMsc

    I've been attempting to concatenate all the .aac files in a directory to make one output .aac. Here is the command I'm using:

    ffmpeg -i concat:1.aac|2.aac|3.aac -c copy output.aac

    When I try to execute this command in the directory, I get this error:

    At least one output file must be specified

    I have no idea where I went wrong, but any guidance is appreciated.
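
    The likely cause: the unquoted | characters are interpreted by the shell as pipes, so ffmpeg only ever sees -i concat:1.aac and no output file, hence the error. Quoting the concat argument should fix it:

    # The unquoted "|" splits the command line at each pipe, so ffmpeg never
    # sees an output file. Quoting keeps the concat spec intact:
    ffmpeg -i "concat:1.aac|2.aac|3.aac" -c copy output.aac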

  • ffmpeg always gets broken video streaming from RTSP protocol [on hold]

    6 January 2014, by poc

    I'm trying to capture an RTSP stream from an IP camera with this command:

    ffmpeg -t 10 -i rtsp://172.19.1.42/live.sdp -ss 00:00:02.500 -c:v copy download.mp4

    However, I always get a broken stream; when I use VLC or QuickTime to watch the same RTSP stream, it works fine with no breakage.

    Are there any settings or options for ffmpeg that can improve the streaming quality? Thanks

    [rtsp @ 0x7f9a84017c00] Estimating duration from bitrate, this may be inaccurate
    Input #0, rtsp, from 'rtsp://172.19.1.42/live.sdp':
     Metadata:
       title           : RTSP server
     Duration: N/A, start: 0.000000, bitrate: N/A
       Stream #0:0: Video: h264 (Main), yuv420p, 2048x1536, 13.33 tbr, 90k tbn, 180k tbc
    -t is not an input option, keeping it for the next output; consider fixing your command line.
    Output #0, mp4, to 'c0_s1_h264_1920x1536_10_cbr_500_6000000_imagequality.mp4':
     Metadata:
       title           : RTSP server
       encoder         : Lavf54.63.104
       Stream #0:0: Video: h264 ([33][0][0][0] / 0x0021), yuv420p, 2048x1536, q=2-31, 90k tbn, 90k tbc
    Stream mapping:
     Stream #0:0 -> #0:0 (copy)
    Press [q] to stop, [?] for help
    [NULL @ 0x7f9a83805200] RTP: missed 3 packets
       Last message repeated 1 times
    RTP: missed 3 packets=-1.0 size=     521kB time=00:00:00.80 bitrate=5303.8kbits/s
    [NULL @ 0x7f9a83805200] RTP: missed 3 packets
    RTP: missed 3 packets=-1.0 size=     895kB time=00:00:01.40 bitrate=5224.0kbits/s
    [NULL @ 0x7f9a83805200] RTP: missed 3 packets
    RTP: missed 3 packets=-1.0 size=    1271kB time=00:00:02.00 bitrate=5192.3kbits/s
    [NULL @ 0x7f9a83805200] RTP: missed 3 packets
    RTP: missed 3 packets=-1.0 size=    1646kB time=00:00:02.60 bitrate=5175.1kbits/s
    RTP: missed 2 packets=-1.0 size=    2029kB time=00:00:03.20 bitrate=5183.5kbits/s
    RTP: missed 3 packets=-1.0 size=    2420kB time=00:00:03.80 bitrate=5207.3kbits/s
    RTP: missed 2 packets=-1.0 size=    2801kB time=00:00:04.40 bitrate=5206.0kbits/s
    [NULL @ 0x7f9a83805200] RTP: missed 3 packets
    RTP: missed 2 packets=-1.0 size=    3180kB time=00:00:05.00 bitrate=5201.6kbits/s
    [NULL @ 0x7f9a83805200] RTP: missed 3 packets
    RTP: missed 3 packets=-1.0 size=    4708kB time=00:00:07.41 bitrate=5204.6kbits/s
    RTP: missed 2 packets=-1.0 size=    5088kB time=00:00:08.01 bitrate=5202.7kbits/s
    RTP: missed 2 packets=-1.0 size=    5857kB time=00:00:09.21 bitrate=5207.3kbits/s
    [NULL @ 0x7f9a83805200] RTP: missed 2 packets
    frame=  100 fps= 10 q=-1.0 Lsize=    6382kB time=00:00:09.96 bitrate=5246.4kbits/s
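
    The repeated "RTP: missed N packets" messages point to UDP packet loss, which would explain the broken output. One variant worth trying, assuming the camera supports RTSP over TCP, forces interleaved TCP transport and passes -t as an output option so the placement warning also goes away:

    # Force RTSP over TCP to avoid the UDP packet loss behind the
    # "RTP: missed" messages; -t 10 now cleanly limits the output duration.
    ffmpeg -rtsp_transport tcp -i rtsp://172.19.1.42/live.sdp \
           -ss 00:00:02.500 -t 10 -c:v copy download.mp4
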
  • How to Add PulseAudio Server to quay.io/browser/google-chrome-stable Docker Image for Audio Support with Screen Recording?

    17 April, by Ahmed Seddik Bouchiba

    I’m trying to set up an environment for recording the screen of a Chrome browser running in a Docker container, and I need to enable audio support. I’m using the quay.io/browser/google-chrome-stable:133.0.6943.98-6 image for the browser and quay.io/aerokube/xvfb:21.1 for the virtual framebuffer to capture the screen.

    However, I’m facing an issue where audio is not supported in the Chrome Docker image, which I need for recording. The setup involves using FFmpeg in a separate container to stream the recorded video, but without audio from the browser this setup isn’t complete.

    I’m looking for guidance on how to add a PulseAudio server to the Chrome image to enable audio support. Specifically:

    How can I configure the Docker image quay.io/browser/google-chrome-stable:133.0.6943.98-6 to support PulseAudio?

    Are there any considerations or best practices when adding PulseAudio to a headless browser Docker container?

    Is it possible to run the PulseAudio server in a separate container and link it to the Chrome container, or should it be included directly in the Chrome container?
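
    On the last point, a separate PulseAudio container sharing a Unix socket with the Chrome container is one workable layout. A minimal sketch, assuming an image with PulseAudio installed (my-pulseaudio-image is a placeholder) and that Chrome honors the standard PULSE_SERVER variable:

    # Sketch only; "my-pulseaudio-image" is a placeholder for any image that
    # ships PulseAudio. Both containers share the socket via a named volume.
    docker run -d --name pulse -v pulse-socket:/tmp/pulse my-pulseaudio-image \
      pulseaudio --daemonize=no --exit-idle-time=-1 \
        --load="module-native-protocol-unix socket=/tmp/pulse/native auth-anonymous=1"

    # Point Chrome at that socket through PULSE_SERVER.
    docker run -d --name chrome \
      -v pulse-socket:/tmp/pulse \
      -e PULSE_SERVER=unix:/tmp/pulse/native \
      quay.io/browser/google-chrome-stable:133.0.6943.98-6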

    Any help on adding PulseAudio support to this Chrome Docker image would be greatly appreciated!

    Additional Context:

    The goal is to run a headless Chrome browser with audio support, record the browser’s activities (both video and audio), and stream the result using FFmpeg.

    I’m using Docker Compose to orchestrate the containers but haven’t figured out how to integrate PulseAudio into the setup effectively.
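
    For the FFmpeg side, the capture could then combine the Xvfb display and the PulseAudio source in a single command. A sketch, assuming the display is :99 and PULSE_SERVER points at the shared socket as above; the output name is a placeholder:

    # Sketch: grab the Xvfb display and the default PulseAudio source, then
    # encode both into one file. ":99" and "output.mp4" are assumptions; run
    # with PULSE_SERVER=unix:/tmp/pulse/native so "-f pulse" finds the server.
    ffmpeg -f x11grab -video_size 1920x1080 -i :99 \
           -f pulse -i default \
           -c:v libx264 -preset veryfast -c:a aac output.mp4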

    Thanks in advance!