
Other articles (90)
-
Multilang: improving the interface for multilingual blocks
18 February 2011
Multilang is an additional plugin that is not enabled by default when MediaSPIP is initialized.
Once it is activated, MediaSPIP init automatically sets up a preconfiguration that makes the new feature immediately operational, so no configuration step is required.
-
APPENDIX: The plugins used specifically for the farm
5 March 2010
The central/master site of the farm needs several additional plugins, beyond those used by the channels, to work properly: the Gestion de la mutualisation plugin; the inscription3 plugin, which manages registrations and requests to create a mutualisation instance as soon as users sign up; the verifier plugin, which provides a field-validation API (used by inscription3); the champs extras v2 plugin, required by inscription3 (...)
-
Mediabox: opening images in the maximum space available to the user
8 February 2011
Image display is constrained by the width allotted by the site's design (which depends on the theme in use), so images are shown at a reduced size. To take advantage of all the space available on the user's screen, a feature can be added that displays the image in a multimedia box overlaid on the rest of the content.
To do this, the "Mediabox" plugin must be installed.
Configuring the multimedia box
As soon as (...)
On other sites (7362)
-
How to make multiple ffmpeg commands run in parallel [duplicate]
9 February, by Kim Mỹ
I'm using the following ffmpeg command to compress video:


`nice -n 10 ${ffmpegPath} -i "${chunkPath}" -c:v libx264 -preset fast -crf 28 "${compressedPath}"`



However, when I run two instances of this ffmpeg command to achieve parallelism:


either in two child processes within a single Node.js application or in two separate Node.js applications running at the same time, it seems only one command is processed and the other is skipped.


I've noticed that two FFmpeg instances are loaded into RAM, and both create a starting file for the final compressed video; they finish compression around the same time. However, the total processing time is effectively doubled compared to compressing a single video file alone, which takes only half the time.


I also tried passing the -threads argument, but it produced the same result.
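
For what it's worth, launching the two encodes concurrently and waiting for both amounts to something like this minimal Go sketch (the file names and the bare "ffmpeg" binary name are assumptions; the same pattern applies to Node.js child processes):

package main

import (
    "log"
    "os/exec"
    "sync"
)

func main() {
    // Hypothetical input -> output pairs; each gets its own ffmpeg process.
    jobs := map[string]string{
        "chunk1.mp4": "chunk1_compressed.mp4",
        "chunk2.mp4": "chunk2_compressed.mp4",
    }

    var wg sync.WaitGroup
    for in, out := range jobs {
        wg.Add(1)
        go func(in, out string) {
            defer wg.Done()
            // Each encode is an independent OS process, so neither
            // should block the other.
            cmd := exec.Command("ffmpeg", "-i", in,
                "-c:v", "libx264", "-preset", "fast", "-crf", "28", out)
            if err := cmd.Run(); err != nil {
                log.Printf("%s: %v", in, err)
            }
        }(in, out)
    }
    wg.Wait() // both encodes run in parallel
}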

-
How to seamlessly concatenate multiple Opus files together without popping sound?
16 February, by Gurdie Derilus
I have a large PCM file that I've split into N chunks (N being the number of threads), and I encode them in parallel into Opus files with FFmpeg.


Note: all PCM files are 16-bit little-endian, 2 channels, 48000 Hz sample rate.


I then concatenate the Opus files using FFmpeg's concat demuxer, but I can hear an audible pop between each segment.
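
For reference, the concat-demuxer step amounts to something like this sketch (file names are hypothetical; -c copy joins without re-encoding):

// Sketch of the concat step (hypothetical file names). The concat
// demuxer reads a text list of inputs; -c copy avoids re-encoding.
list := "file 'chunk1.opus'\nfile 'chunk2.opus'\n"
if err := os.WriteFile("list.txt", []byte(list), 0o644); err != nil {
    log.Fatalln(err)
}
err := exec.Command("ffmpeg", "-f", "concat", "-safe", "0",
    "-i", "list.txt", "-c", "copy", "out.opus").Run()
if err != nil {
    log.Fatalln(err)
}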


Opening this sample file in Audacity reveals the issue (screenshot: note the pops introduced in the Opus output).


I created a simple, short Golang project on GitHub with a sample PCM file for easy testing. Note that it's not production code, so it obviously doesn't follow best practices.


#1: I suspected the pops might have been introduced while parallel-encoding each PCM chunk to an Opus file. This, however, wasn't the case (image: concatenated Opus files vs separate Opus files).


#2: using the concat filter works; however, it re-encodes the files, which is not workable in my case because it's too slow (these files can and do reach up to an hour). I know Opus files are chainable, so I can't imagine why they don't concatenate flawlessly.


#3: I heard that Opus has a 20 ms frame size, so I split the file against that frame size, but this made no difference.


// Split the PCM file into one chunk per CPU thread, rounding up to a
// multiple of 4 bytes so each chunk holds whole sample frames
// (2 channels x 2 bytes per 16-bit sample). roundUpToNearestMultiple
// is a helper from the linked project.
chunkSize := largePcmFileStat.Size() / int64(runtime.GOMAXPROCS(0))
chunkSize = int64(roundUpToNearestMultiple(float64(chunkSize), 4))
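
As a point of comparison, aligning to whole 20 ms Opus frames rather than 4-byte samples could look like the sketch below (a hypothetical helper, not from the original project, assuming 48 kHz stereo 16-bit input):

// Hypothetical helper: round a chunk size up to a whole number of
// 20 ms Opus frames. At 48000 Hz, 20 ms is 960 samples, and stereo
// 16-bit audio is 4 bytes per sample frame, so 3840 bytes per frame.
const opusFrameBytes = 960 * 4

func alignToOpusFrame(size int64) int64 {
    if rem := size % opusFrameBytes; rem != 0 {
        size += opusFrameBytes - rem
    }
    return size
}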



The entire sample looks like this:


package main

import (
    "context"
    "fmt"
    "io"
    "log"
    "os"
)

func main() {
    // Grab the large PCM file.
    largePcmFile, err := os.Open("files/full_raw.pcm")
    if err != nil {
        log.Fatalln(err)
    }
    defer largePcmFile.Close()

    // Split into 2 chunks of 20 seconds each
    // (16-bit samples = 2 bytes, stereo, 48000 Hz).
    ByteRate := 2
    SampleRate := 48000
    Channels := 2
    Seconds := 20
    chunkSize := Seconds * Channels * SampleRate * ByteRate

    // encodePcmToOpus and concatOpusFiles are defined in the linked project.
    file1, err := encodePcmToOpus(context.TODO(), io.LimitReader(largePcmFile, int64(chunkSize)))
    if err != nil {
        log.Fatalln(err)
    }

    file2, err := encodePcmToOpus(context.TODO(), io.LimitReader(largePcmFile, int64(chunkSize)))
    if err != nil {
        log.Fatalln(err)
    }

    fmt.Println("Check if these play with no defects:")
    fmt.Println("file1:", file1)
    fmt.Println("file2:", file2)
    fmt.Println()

    concatFile, err := concatOpusFiles(context.TODO(), []string{file1, file2})
    if err != nil {
        log.Fatalln(err)
    }

    fmt.Println("concatted file:", concatFile.Name())
}



-
ffmpeg inconsistent speed results by version breaking large audio file into multiple pieces with -ss/-to positional parameters [closed]
2 November 2024, by BenH
I am trying to chop a large (12+ hour) audio file into multiple segments using multiple -ss/-to positional operations.


ffmpeg.exe -loglevel error -stats -i "C:\data\chapters\joined_output.mp3" -ss -1 -to 1159 -c copy "C:\data\chapters\001 - Chapter 1.mp3" -ss 1159 -to 1800 -c copy "C:\data\chapters\002 - Chapter 2.mp3" -ss 1800 -to 3181 -c copy "C:\data\chapters\003 - Chapter 3.mp3" ... output.mp3



The '...' indicates that there are more than 20 such repeated statements, splitting the audio into 20 or more chapter files.


I arrived at this approach because individual commands were processing the entire file each time to extract the section I wanted. I realize there is an option to place -ss/-to before the input file, and have since discovered that this appears to work quicker, but I have not found a syntax that does this in a single command, so I must create a separate command for each chapter.
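
For comparison, the input-seeking form, one command per chapter, looks roughly like this (a sketch: with -ss before the input, -t takes a duration rather than an end time, so 641 below is just 1800 minus 1159):

ffmpeg -loglevel error -stats -ss 0 -i "C:\data\chapters\joined_output.mp3" -t 1159 -c copy "C:\data\chapters\001 - Chapter 1.mp3"
ffmpeg -loglevel error -stats -ss 1159 -i "C:\data\chapters\joined_output.mp3" -t 641 -c copy "C:\data\chapters\002 - Chapter 2.mp3"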


The above syntax appears to work fine but takes about 4 minutes to process. When I reverted to older versions, the operation completed much quicker: about 20 seconds with version 6.1 and about 10 seconds with version 5.


There is some discrepancy in how the old versions report the length of the file (the "out_time" value shows only about 6.5 hours processed), but the resulting output files appear to be correct. I think it may be reporting the out_time of only the longest section being processed, as the 6.5 hours appears to match the length of that output section.


To be clear, version 5 using my above syntax appears to create all my output files correctly in 10 seconds.


If I split them up into individual commands with -ss/-to before the input, it actually takes longer with ffmpeg versions 5 and 6 (about 45 seconds compared to 10-20 seconds).
With the latest version 7 it takes about 1 minute 15 seconds: much better than the 4 minutes with my syntax above, but still well slower than versions 5/6 with that same syntax.


So, in short, why am I able to (apparently) split this 12-hour file properly into about 25 segments in about 10 seconds using the syntax above on version 5, while it takes 2x as long on version 6 and 30x as long on version 7? I assume there are syntax changes I haven't figured out, or some change in default behavior?