Other articles (46)

  • MediaSPIP v0.2

    21 June 2013, by

    MediaSPIP 0.2 is the first stable version of MediaSPIP.
    Its official release date is 21 June 2013, as announced here.
    The zip file provided here contains only the MediaSPIP sources in the standalone version.
    As with the previous version, all of the software dependencies must be installed manually on the server.
    If you wish to use this archive for a farm-mode installation, you will also need to make further changes (...)

  • Making files available

    14 April 2011, by

    By default, when it is initialised, MediaSPIP does not allow visitors to download files, whether they are the originals or the result of their transformation or encoding. It only allows them to be viewed.
    However, it is possible and easy to give visitors access to these documents, in various forms.
    All of this happens in the skeleton configuration page. You need to go to the channel's administration area and choose, in the navigation, (...)

  • Installation in farm mode

    4 February 2011, by

    Farm mode lets you host several MediaSPIP-type sites while installing the functional core only once.
    This is the method we use on this very platform.
    Using farm mode requires some familiarity with how SPIP works, unlike the standalone version, which does not really require any specific knowledge since SPIP's usual private area is no longer used.
    First of all, you must have installed the same files as for the installation (...)

On other sites (5830)

  • ffmpeg command exports flac with wrong 'length' metadata, works fine for mp3

    21 July 2023, by Martin

    I have some audio recorded in Audacity 3.2.3 that I have exported as both an mp3 and a flac. Then I have this file, split_by_silence.sh, which has hardcoded input path values: it takes an input file, splits it by detecting silence, and finally runs an ffmpeg command to write the individual tracks. If you save the code below into a file split_by_silence.sh, you can call it with the command $ ./split_by_silence.sh "value1" "value2"

    


    # ./split_by_silence.sh "full_lowq.flac" %03d_output.flac
#IN=$1
#OUT=$2

OUT="%03d_output.flac"
IN="/mnt/e/martinradio/rips/vinyl/WIP/Dogs On Fire (1983, Vinyl)/dog on fire.flac"
OUTPUT_LOCATION="/mnt/e/martinradio/rips/vinyl/WIP/Dogs On Fire (1983, Vinyl)/"

true ${SD_PARAMS:="-18dB"};
true ${MIN_FRAGMENT_DURATION:="20"};
export MIN_FRAGMENT_DURATION
if [ -z "$OUT" ]; then
    echo "Usage: split_by_silence.sh full.mp3 output_template_%03d.mp3"
    echo "Depends on FFmpeg, Bash, Awk, Perl 5. Not tested on Mac or Windows."
    echo ""
    echo "Environment variables (with their current values):"
    echo "    SD_PARAMS=$SD_PARAMS       Parameters for FFmpeg's silencedetect filter: noise tolerance and minimal silence duration"
    echo "    MIN_FRAGMENT_DURATION=$MIN_FRAGMENT_DURATION    Minimal fragment duration"
    exit 1
fi
#
# get comma separated list of split points (use ffmpeg to determine points where audio is at SD_PARAMS [-18db] )
#

echo "_______________________"
echo "Determining split points..." >& 2
SPLITS=$(
    ffmpeg -v warning -i "$IN" -af silencedetect="$SD_PARAMS",ametadata=mode=print:file=-:key=lavfi.silence_start -vn -sn  -f s16le  -y /dev/null \
    | grep lavfi.silence_start= \
    | cut -f 2-2 -d= \
    | perl -ne '
        our $prev;
        INIT { $prev = 0.0; }
        chomp;
        if (($_ - $prev) >= $ENV{MIN_FRAGMENT_DURATION}) {
            print "$_,";
            $prev = $_;
        }
    ' \
    | sed 's!,$!!'
)
echo "SPLITS= $SPLITS"

#
# Add 5 seconds to each of the comma separated numbers
#
# Convert the comma-separated string into an array
arr=($(echo $SPLITS | tr ',' '\n'))
# Initialize a new array to store the results
new_arr=()
# Iterate through each element and add 5 seconds of padding
for i in "${arr[@]}"; do
  result=$(echo "$i + 5" | bc -l)
  new_arr+=("$result")
done
# Convert the array back into a comma-separated string
NEW_SPLITS=$(IFS=,; echo "${new_arr[*]}")
# Print the result
echo "NEW_SPLITS= $NEW_SPLITS"
SPLITS=$NEW_SPLITS

#
# Print how many tracks should be exported
#
res="${SPLITS//[^,]}"
CHARCOUNT="${#res}"
num=$((CHARCOUNT + 2))
echo "Exporting $num tracks"
echo "_______________________"

#
# Split audio into individual tracks
#
current_directory=$(pwd)

cd "$OUTPUT_LOCATION"

echo "Running ffmpeg command: "

ffmpeg -i "$IN" -c copy -map 0 -f segment -segment_times "$SPLITS" "$OUT"
#ffmpeg -i "full_lowq.flac" -c copy -map 0 -f segment -segment_times "302.825,552.017" "%03d_output.flac"


echo "Done."

cd "$current_directory"

echo "running flac command"
# check flac file intrgrity
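# a possible way to do the integrity check mentioned above
# (assumes the reference `flac` command-line tool is installed;
#  `flac -t` decodes each segment and reports errors without writing output)
for f in "$OUTPUT_LOCATION"*_output.flac; do
    flac -t "$f"
done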


    


    If I call this script for my flac file:

    


    OUT="%03d_output.flac"
IN="/mnt/e/martinradio/rips/vinyl/WIP/Dogs On Fire (1983, Vinyl)/dog on fire.flac"


    


    The output files have incorrect length metadata: they all report the same length, but if I import any of them into Audacity, the file has the correct length.

    


    (screenshot: the segmented flac files all showing the same, incorrect length)

    


    But if I run this for my mp3 file, we can see the correct length metadata:

    


    OUT="%03d_output.mp3"
IN="/mnt/e/martinradio/rips/vinyl/WIP/Dogs On Fire (1983, Vinyl)/dogs on fire.mp3"


    


    (screenshot: the segmented mp3 files showing correct lengths)

    


    So something in my ffmpeg command causes it to export flac files with the wrong 'length' metadata:

    


    ffmpeg -i "$IN" -c copy -map 0 -f segment -segment_times "$SPLITS" "$OUT"



    


    For the flac example I've tried changing -c copy to -c:a flac, but that just gives every output flac file a length of 00:00:00.

    


    Is it a problem with my ffmpeg command, or with my files? https://file.io/tIFsa1l70076
    It works just fine for mp3 files, so why does it have this issue with flac?
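
    One way to check whether only the header is wrong (a diagnostic sketch, assuming standard ffmpeg/ffprobe builds and using the first segment, 000_output.flac, as the example) is to compare the duration stored in the container metadata with the duration obtained by actually decoding the file:

    # duration as reported by the metadata ffprobe reads from the header
    ffprobe -v error -show_entries format=duration -of default=noprint_wrappers=1 000_output.flac

    # duration obtained by decoding the whole file (the time= value on ffmpeg's final progress line)
    ffmpeg -i 000_output.flac -f null -

    If the two disagree, the audio itself is intact and only the duration written into each segment's header during the stream-copy segmentation is off. Since FLAC is lossless, re-encoding each finished segment (for example with the flac command-line tool) is one possible way to get a correct header rewritten without quality loss, though that is untested against this exact case.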

    


  • FFMPEG - Struggling to find correct input audio codec parameters on macOS

    20 September 2024, by Xavi

    I am trying to read my external stereo microphone with ffmpeg within my Qt Windows+macOS application, but I am struggling to obtain consistently correct input codec parameters on macOS. My findings and suspicions so far:

    


    The code I'm using on macOS is the following; every call returns a successful return code:

    


 avdevice_register_all();

 // macOS only; the same code on Windows looks for "dshow"
 const AVInputFormat *inputFormat = av_find_input_format("avfoundation");

 // the context pointer must be NULL (or come from avformat_alloc_context())
 // before avformat_open_input(); an uninitialized pointer here is a bug
 AVFormatContext *inputFormatContext = NULL;
 avformat_open_input(&inputFormatContext, inputDevice, inputFormat, NULL);

 avformat_find_stream_info(inputFormatContext, NULL);

 // ... allocate the codec context for the single input stream and
 // copy the parameters from the stream to the context



    


    In my standalone minimal reproducer this always results in the codec ID of the single stream being AV_CODEC_ID_PCM_F32LE, on both macOS and Windows. When I integrate this code into my Qt application on Windows, I get the same result. On macOS, however, the stream's codec ID usually comes out as AV_CODEC_ID_PCM_S16LE (via AV_CODEC_ID_FIRST_AUDIO) and only sometimes as AV_CODEC_ID_PCM_F32LE. Both sample formats are supported by my microphone.

    


    AV_CODEC_ID_PCM_F32LE always produces correct output. AV_CODEC_ID_PCM_S16LE produces buzzy, noisy audio slowed down to 0.5x, and if in that case I decode with AV_CODEC_ID_PCM_F32LE instead of copying the codec parameters from the stream, the output sounds correct again.

    


    I am trying to write generic code, so while enforcing the AV_CODEC_ID_PCM_F32LE codec works, I'd rather understand what is happening.

    


    What am I missing? Is Qt interfering in some way that I can't think of? I am compiling and linking my own ffmpeg libraries (6.1.1), not Qt's.
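
    To cross-check what avfoundation itself reports for the device, independently of Qt (a command-line sketch, assuming a standard ffmpeg build with avfoundation enabled and that the microphone is audio device 0; probe.wav is just a throwaway output name), one can capture a few seconds from the shell and read the parameters ffmpeg prints:

    # list the avfoundation video and audio devices with their indices
    ffmpeg -f avfoundation -list_devices true -i ""

    # capture 5 seconds from audio device 0 (":0" means no video, audio index 0)
    # and read the sample format ffmpeg prints in the input stream banner
    ffmpeg -f avfoundation -i ":0" -t 5 probe.wav

    If the CLI consistently reports pcm_f32le for the device while the stream opened inside the Qt application comes up as pcm_s16le, that would suggest the difference comes from how the device is opened in that environment rather than from the microphone itself.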

    


  • ffmpeg encoder streaming issues

    8 August 2017, by bobsingh1

    I am trying to build an ffmpeg encoder on Linux. I started with a custom-built server with dual 2.6 GHz Xeon CPUs (6 cores, socket 1366) and 16 GB RAM, running a minimal Ubuntu 16.04 install, and built ffmpeg with h264 and aac support. I am taking live OTA source channels and encoding/streaming them with the following parameters:

    -vcodec libx264 -preset superfast -crf 25 -x264opts keyint=60:min-keyint=60:scenecut=-1 -bufsize 7000k -b:v 6000k -maxrate 6300k -muxrate 6000k -s 1920x1080 -format yuv420p -g 60 -sn -c:a aac -b:a 384k -ar 44100

    I am able to stream out successfully over UDP using mpegts. My problem starts with the 5th stream: the server can handle four streams, but as soon as I introduce a 5th I start seeing hiccups in the output. Looking at CPU usage with top, I still see only 65% to 75% usage with occasional spikes to 80%, and memory usage is well within acceptable limits. So I am wondering whether top is not giving me accurate CPU usage or something is not right with ffmpeg. The server is isolated for UDP in/out on a 1 Gbps network.
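
    For reference, one complete per-channel invocation assembled from the options above would look roughly like this (a sketch, not the exact command from the post: the input and output UDP addresses are placeholders, and -pix_fmt is used for the pixel-format option where the options above say -format):

    ffmpeg -i "udp://239.0.0.1:1234" \
        -vcodec libx264 -preset superfast -crf 25 \
        -x264opts keyint=60:min-keyint=60:scenecut=-1 \
        -bufsize 7000k -b:v 6000k -maxrate 6300k -muxrate 6000k \
        -s 1920x1080 -pix_fmt yuv420p -g 60 -sn \
        -c:a aac -b:a 384k -ar 44100 \
        -f mpegts "udp://239.0.0.2:1234?pkt_size=1316"

    Each additional channel is another independent ffmpeg process of this shape; checking per-core load (for example by pressing 1 inside top) rather than the aggregate figure can show whether individual cores are saturating even while the overall percentage stays around 70%.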

    I decided to increase the CPU power and installed two 3.5 GHz CPUs (6 cores), thinking the CPU clock was perhaps the limit. To my surprise the results were no different, so now I am wondering whether there is some built-in limit I am hitting when processing at 1080p. If I change the resolution to 720p the server can process 8 streams, but 720p is not acceptable.
    My target is 10 1080p streams per server.
    So my questions are:
    1. If I use a quad-socket motherboard and up the CPU count to 4 (6 or 8 cores each), will I get 10 1080p streams? Is there any theoretical maximum I can reach with ffmpeg per machine?
    2. Do cores matter more, or does clock speed matter more?
    3. Any suggestions for improving my options? I have tried the ultrafast preset, but the output quality is unacceptable.

    Thanks in advance