
Media (2)


Other articles (67)

  • General document management

    13 May 2011, by

    MédiaSPIP never modifies the original document that is uploaded.
    For each uploaded document it performs two successive operations: creating an additional version that can easily be viewed online, while leaving the original downloadable in case it cannot be read in a web browser; and retrieving the original document's metadata to describe the file textually.
    The tables below explain what MédiaSPIP can do (...)
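
    As a rough illustration only (this is not MediaSPIP's own code, just a sketch of those two operations using the ffmpeg tools it relies on, with a hypothetical input file name), the equivalent steps from Python might look like:

import json
import subprocess

SOURCE = "original_upload.mov"  # hypothetical uploaded document

# Operation 1: create an additional, easily viewable version while leaving
# the original file untouched (it stays available for download).
subprocess.run(
    ["ffmpeg", "-i", SOURCE, "-c:v", "libx264", "-c:a", "aac",
     "-movflags", "+faststart", "web_version.mp4"],
    check=True,
)

# Operation 2: read the original document's metadata to describe it textually.
probe = subprocess.run(
    ["ffprobe", "-v", "quiet", "-print_format", "json",
     "-show_format", "-show_streams", SOURCE],
    capture_output=True, text=True, check=True,
)
metadata = json.loads(probe.stdout)
print(metadata["format"].get("tags", {}))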

  • Publishing on MédiaSpip

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, if your Médiaspip installation is at version 0.2 or higher. If necessary, contact the administrator of your MédiaSpip to find out.

  • Sites built with MediaSPIP

    2 May 2011, by

    This page presents some of the sites running MediaSPIP.
    You can of course add your own using the form at the bottom of the page.

On other sites (8541)

  • Adding ffmpeg path on remote server not working [closed]

    26 March 2024, by Vatsal A Mehta

    I am working on mp3 audio processing and need the ffmpeg library for my code to work. In my local setup I have ffmpeg installed and the code works fine, but I have to make changes to the codebase on a remote server where ffmpeg isn't installed. I tried copying the ffmpeg installation folders over and editing the PATH environment variable to add the ffmpeg path, but I get an error that says: FileNotFoundError: [Errno 2] No such file or directory: 'ffprobe'.

    The ffmpeg files installed in my local setup are:

    Now I copied this folder into the codebase on my remote server, then added its /bin path to the environment variables there, but it doesn't seem to work.
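
    A minimal sketch of one way around this, assuming the code uses a library such as pydub that shells out to ffprobe, and that the copied ffmpeg build sits in a hypothetical /opt/ffmpeg/bin directory on the remote server (the path is an assumption, not taken from the post):

import os
import shutil
import subprocess

# Hypothetical location of the ffmpeg build copied onto the remote server.
FFMPEG_BIN = "/opt/ffmpeg/bin"

# Prepend the directory to PATH for this process and its children, so that
# libraries which shell out to "ffmpeg"/"ffprobe" can find the binaries.
os.environ["PATH"] = FFMPEG_BIN + os.pathsep + os.environ.get("PATH", "")

# Sanity checks: the binaries must be present, executable (chmod +x), and
# built for the remote server's OS/architecture; binaries copied from a
# different operating system will not run.
print(shutil.which("ffmpeg"), shutil.which("ffprobe"))
subprocess.run(["ffprobe", "-version"], check=True)

    Exporting the variable in the shell profile or the service's environment on the remote server achieves the same thing without touching the code; the key point is that the process that actually spawns ffprobe must see the updated PATH.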

    


  • How to Add PulseAudio Server to quay.io/browser/google-chrome-stable Docker Image for Audio Support with Screen Recording?

    17 April, by Ahmed Seddik Bouchiba

    I’m trying to set up an environment for recording the screen of a Chrome browser running in a Docker container, and I need to enable audio support. I’m using the quay.io/browser/google-chrome-stable:133.0.6943.98-6 image for the browser and quay.io/aerokube/xvfb:21.1 for the virtual framebuffer to capture the screen.

    


    However, I’m facing an issue where audio is not supported in the Chrome Docker image, which I need for recording. The setup involves using FFmpeg in a separate container to stream the recorded video, but without audio from the browser, this setup isn’t complete.

    


    I’m looking for guidance on how to add a PulseAudio server to the Chrome image to enable audio support. Specifically:

    


    How can I configure the Docker image quay.io/browser/google-chrome-stable:133.0.6943.98-6 to support PulseAudio?

    Are there any considerations or best practices when adding PulseAudio to a headless browser Docker container?

    Is it possible to run the PulseAudio server in a separate container and link it to the Chrome container, or should it be included directly in the Chrome container?


    


    Any help on adding PulseAudio support to this Chrome Docker image would be greatly appreciated!

    


    Additional context:

    


    The goal is to run a headless Chrome browser with audio support to record the browser’s activities (both video and audio) and stream it using FFmpeg.

    I’m using Docker Compose to orchestrate the containers but haven’t figured out how to integrate PulseAudio into the setup effectively.


    


    Thanks in advance!
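
    A rough Docker Compose sketch of the separate-container option (the third question above): a PulseAudio sidecar exposes its native socket on a shared volume, and the Chrome container is pointed at it via PULSE_SERVER. The Alpine base image, socket path, sink name and anonymous-auth setting are assumptions for illustration, not part of the original setup:

services:
  pulseaudio:
    image: alpine:3.19
    command:
      - sh
      - -c
      - |
        apk add --no-cache pulseaudio
        # -n skips the default config; load only a null sink (for Chrome to
        # play into) and the native protocol on a socket in the shared volume.
        pulseaudio -n --daemonize=no --exit-idle-time=-1 \
          -L "module-null-sink sink_name=recording" \
          -L "module-native-protocol-unix socket=/tmp/pulse/native auth-anonymous=1"
    volumes:
      - pulse-socket:/tmp/pulse

  chrome:
    image: quay.io/browser/google-chrome-stable:133.0.6943.98-6
    environment:
      # PulseAudio clients (Chrome included) honour PULSE_SERVER.
      PULSE_SERVER: unix:/tmp/pulse/native
    volumes:
      - pulse-socket:/tmp/pulse
    depends_on:
      - pulseaudio

volumes:
  pulse-socket:

    An FFmpeg container attached to the same socket can then capture the sink's monitor (for example with the pulse input device and the recording.monitor source) alongside the X11 screen grab; baking PulseAudio directly into the Chrome image also works, at the cost of rebuilding it for every browser update.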

    


  • Gitlab CI - Combine two docker images into a single stage

    12 March 2024, by seal.r00t

    I have this gitlab-ci.yaml file. Using this file I run my k6.io load test from the pipeline. Now I need to execute some FFmpeg commands during the run stage, and my question is how to make the FFmpeg tool available to that stage. Do I need to grab an image that has FFmpeg and add it next to grafana/k6, or something else?

    


default:
  tags:
    - default

workflow:
  name: "$PIPELINE_NAME"

stages:
  - lint
  - setup
  - run
  - teardown

lint:js:
  stage: lint
  image:
    name: tmknom/prettier
    entrypoint:
      - ""
  rules:
    - if: '$CI_PIPELINE_SOURCE == "push"'
      when: always
    - if: '$CI_PIPELINE_SOURCE == "schedule"'
      when: always
  script:
    - prettier --check '**/*.js'

setup:
  stage: setup
  image: alpine
  rules:
    - if: '$CI_PIPELINE_SOURCE == "web"'
      when: always
    - if: '$CI_PIPELINE_SOURCE == "schedule"'
      when: always
  script:
    - echo 'set up!'

run:
  stage: run
  environment:
    name: run
  image:
    name: grafana/k6:latest
    entrypoint: [ "" ]
  artifacts:
    when: always
    paths:
      - summaries/
  rules:
    - if: '$CI_PIPELINE_SOURCE == "web"'
      when: always  # Prevent pipeline run for push event
    - if: '$CI_PIPELINE_SOURCE == "schedule"'
      when: always
  script:
    - ./run.sh

teardown:
  stage: teardown
  image: alpine
  rules:
    - if: '$CI_PIPELINE_SOURCE == "web"'
      when: always
    - if: '$CI_PIPELINE_SOURCE == "schedule"'
      when: always
  script:
    - echo 'tear down!'


    


    I tried adding two name tags under the run stage to use two images, but it didn't work and returned a syntax error.

    


run:
  stage: run
  environment:
    name: run
  image:
    name: grafana/k6:latest
    name: linuxserver/ffmpeg
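
    That fails because a job's image: accepts a single image name. One common workaround, sketched below, is to build and push a small custom image that layers FFmpeg on top of grafana/k6 and point the run job at it; the Dockerfile shown in the comment and the registry path are placeholders, not an existing image:

# The image below would be built beforehand from a Dockerfile along these lines
# (grafana/k6 is Alpine-based and runs as a non-root user, hence USER root):
#
#   FROM grafana/k6:latest
#   USER root
#   RUN apk add --no-cache ffmpeg
#
run:
  stage: run
  environment:
    name: run
  image:
    # placeholder registry path; push the custom image wherever suits your setup
    name: registry.example.com/load-testing/k6-ffmpeg:latest
    entrypoint: [ "" ]
  artifacts:
    when: always
    paths:
      - summaries/
  rules:
    - if: '$CI_PIPELINE_SOURCE == "web"'
      when: always
    - if: '$CI_PIPELINE_SOURCE == "schedule"'
      when: always
  script:
    - ffmpeg -version   # ffmpeg is now available alongside k6
    - ./run.sh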