
Other articles (98)

  • From upload to the final video [standalone version]

    31 January 2010

    The path of an audio or video document through SPIPMotion is divided into three distinct steps.
    Upload and retrieval of information about the source video
    First, a SPIP article must be created and the "source" video document attached to it.
    When this document is attached to the article, two actions are executed in addition to the normal behaviour: retrieval of the technical information of the file's audio and video streams; generation of a thumbnail: extraction of a (...)
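    As a rough illustration (not SPIPMotion's actual code), those two extra actions correspond to FFmpeg calls along the following lines; the file names and the 5-second offset are purely illustrative:

    # Retrieve the technical information of the file's audio and video streams
    ffprobe -v quiet -print_format json -show_streams source_video.mp4

    # Generate a thumbnail by extracting a single frame (here at the 5-second mark)
    ffmpeg -ss 5 -i source_video.mp4 -frames:v 1 thumbnail.jpg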

  • Automated installation script of MediaSPIP

    25 April 2011

    To overcome the difficulties, mainly due to the installation of server-side software dependencies, an "all-in-one" installation script written in bash was created to make this step easier on a server running a compatible Linux distribution.
    To use it you must have SSH access to your server and a root account, which is needed to install the dependencies. Contact your hosting provider if you do not have these.
    The documentation for using this installation script is available here.
    The code of this (...)
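    A minimal illustration of the required access (the script file name and host are hypothetical, not taken from the MediaSPIP documentation):

    # Copy the installation script to the server and run it there as root over SSH
    scp mediaspip_install.sh root@example.org:
    ssh root@example.org 'bash mediaspip_install.sh'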

  • The statuses of mutualisation instances

    13 March 2010

    For reasons of general compatibility between the mutualisation management plugin and SPIP's original functions, instance statuses are the same as for any other object (articles, etc.); only their names in the interface differ slightly.
    The possible statuses are: prepa (requested), which corresponds to an instance requested by a user. If the site has already been created in the past, it is switched to disabled mode. publie (validated), which corresponds to an instance validated by a (...)

On other sites (10480)

  • ffplay playback h264 encoded camera preview blur

    11 July 2019, by doufu

    I use the ffplay tool provided by FFmpeg to play an H.264-encoded camera. When the camera is still, the picture is very clear, but when I cover the camera for a while and then remove the occlusion, or move the camera quickly, the picture becomes blurred with mosaic-like artefacts, and only becomes clear again after a few seconds.
    Why is this happening?
    Am I missing some options when running ffplay?
    PS: Using AMCap does not show this problem.
    The command I used to open my camera:

    ffplay.exe -f dshow -i video="camera name"

    A screencast of the preview has been uploaded to YouTube.

    The ffplay console output was posted as a screenshot.
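    A plausible explanation (an assumption, not stated in the question) is the camera's H.264 GOP length: between keyframes only predicted frames are sent, so sudden scene changes produce macroblock artefacts until the next keyframe arrives. One way to see what the device can deliver, and to request a non-H.264 format if the camera offers one (device name as above):

    rem List the formats, resolutions and frame rates exposed by the DirectShow device
    ffmpeg -f dshow -list_options true -i video="camera name"

    rem Request an MJPEG stream instead of the camera's H.264 output, if available
    ffplay.exe -f dshow -vcodec mjpeg -i video="camera name"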

  • ffmpeg - How to convert massive amounts of files in parallel?

    15 July 2019, by Forivin

    I need to convert about 1.5 TiB of audio files that are in either FLAC or WAV format. They need to be converted into MP3 files, keeping the important metadata, cover art, etc., and the bitrate needs to be 320k.

    This alone is easy:

    ffmpeg -i "$flacFile" -ab 320k -map_metadata 0 -id3v2_version 3 -vsync 2 "$mp3File" < /dev/null

    But the problem is making it faster. The command above only uses 12.5% of the CPU; I'd much rather use around 80%. So I played around with the -threads flag, but it makes things neither faster nor slower:

    ffmpeg -i "$flacFile" -ab 320k -map_metadata 0 -id3v2_version 3 -vsync 2 -threads 4 "$mp3File" < /dev/null

    But it still only utilizes about 13% of my CPU. I think it only uses one thread. My CPU has 8 physical cores, by the way (plus hyperthreading).

    So my idea now is to somehow have multiple instances of ffmpeg running at the same time, but I have no clue how to do that properly.

    This is my current script, which takes all FLAC/WAV files from one directory (recursively) and converts them to MP3 files in a new directory with the exact same structure:

    #!/bin/bash

    SOURCE_DIR="/home/fedora/audiodata_flac"
    TARGET_DIR="/home/fedora/audiodata_mp3"

    echo "FLAC/WAV files will be read from '$SOURCE_DIR' and MP3 files will be written to '$TARGET_DIR'!"
    read -p "Are you sure? (y/N)" -n 1 -r
    echo    # (optional) move to a new line
    if [[ $REPLY =~ ^[Yy]$ ]] ; then # Continue if user enters "y"

       # Find all flac/wav files in the given SOURCE_DIR and iterate over them:
       find "${SOURCE_DIR}" -type f \( -iname "*.flac" -or -iname "*.wav" \) -print0 | while IFS= read -r -d '' flacFile; do
           if [[ "$(basename "${flacFile}")" != ._* ]] ; then # Skip files starting with "._"
               tmpVar="${flacFile%.*}.mp3"
               mp3File="${tmpVar/$SOURCE_DIR/$TARGET_DIR}"
               mp3FilePath=$(dirname "${mp3File}")
               mkdir -p "${mp3FilePath}"
               if [ ! -f "$mp3File" ]; then # If the mp3 file doesn't exist already
                   echo "Input: $flacFile"
                   echo "Output: $mp3File"
                   ffmpeg -i "$flacFile" -ab 320k -map_metadata 0 -id3v2_version 3 -vsync 2 "$mp3File" < /dev/null
               fi
           fi
       done
    fi

    I guess I could append an & to the ffmpeg command, but that would cause thousands of ffmpeg instances to run at the same time, which is far too many.
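    A common pattern, sketched here under the assumption that bash and GNU xargs are available (the convert_one helper and the job count are illustrative, not part of the original script), is to let xargs -P keep a fixed number of ffmpeg processes running:

    #!/bin/bash
    SOURCE_DIR="/home/fedora/audiodata_flac"
    TARGET_DIR="/home/fedora/audiodata_mp3"
    JOBS=6   # number of concurrent ffmpeg processes

    convert_one() {
        flacFile="$1"
        tmpVar="${flacFile%.*}.mp3"
        mp3File="${tmpVar/$SOURCE_DIR/$TARGET_DIR}"
        mkdir -p "$(dirname "$mp3File")"
        if [ ! -f "$mp3File" ]; then
            # -nostdin replaces the "< /dev/null" redirection used in the original loop
            ffmpeg -nostdin -i "$flacFile" -ab 320k -map_metadata 0 -id3v2_version 3 "$mp3File"
        fi
    }
    export -f convert_one
    export SOURCE_DIR TARGET_DIR

    # Run up to $JOBS conversions in parallel over the same file list as before
    find "$SOURCE_DIR" -type f \( -iname "*.flac" -or -iname "*.wav" \) ! -name "._*" -print0 |
        xargs -0 -P "$JOBS" -I{} bash -c 'convert_one "$1"' _ {}

    GNU parallel can be used in much the same way if it is installed.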

  • Most efficient way to render bitmap to screen on Linux [on hold]

    22 July 2019, by Maximus

    My goal is to receive a video feed over Wi-Fi and display it on my screen. For this, I've created a couple of small programs and a bash script to automate running them. It works like this:

    UDPBitmap/Plotter & ffplay -i - < UDPBitmap/pipe & python requester.py;

    Translation: there is a C++ program called Plotter; its job is to receive packets on an assigned UDP port, process them, and write the result to a named pipe (UDPBitmap/pipe). The pipe is read by ffplay, which renders the video on screen. The Python file is solely responsible for accessing and controlling the camera with various HTTP requests.

    The above command works fine and everything behaves as expected. However, the resulting latency and framerate are a bit worse than I wanted. The bottleneck is not the pipe, which is fast enough, and the Wi-Fi transmission is also fast enough. The only thing left is ffplay.

    Question:

    What is the most efficient way to render a bitmap to the screen on Linux? Is there a de facto library for this that I can use?

    Note:

    • Language/framework/library does not matter (C, C++, Java, Python, native Linux tools and so on...)
    • I do not need a window handle, but is SDL+OpenGL the way to go?
    • Writing directly to the framebuffer would be super cool...
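    On that last point, FFmpeg itself has an fbdev output device, so as a rough, untested sketch (pipe path as in the question; the pixel format and the /dev/fb0 device are assumptions) the stream could be pushed straight to the Linux framebuffer without ffplay:

    # Read the stream from the named pipe and blit it to the framebuffer
    ffmpeg -i UDPBitmap/pipe -pix_fmt bgra -f fbdev /dev/fb0

    Whether this is faster depends on the framebuffer driver; when an X or Wayland session is running, SDL2 with a streaming texture is the more common route.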