
Media (1)
-
Ogg detection bug
22 March 2013, by
Updated: April 2013
Language: French
Type: Video
Other articles (95)
-
Multilang: improving the interface for multilingual blocks
18 February 2011, by
Multilang is an additional plugin that is not enabled by default when MediaSPIP is initialized.
After it has been enabled, MediaSPIP init automatically puts a preconfiguration in place so that the new feature is immediately operational. No separate configuration step is therefore required.
-
APPENDIX: Plugins used specifically for the farm
5 March 2010, by
The central/master site of the farm needs several additional plugins, compared with the channels, in order to work properly: the Gestion de la mutualisation plugin; the inscription3 plugin, to manage registrations and requests to create a pooled instance as soon as users sign up; the verifier plugin, which provides a field-verification API (used by inscription3); the champs extras v2 plugin, required by inscription3 (...)
-
HTML5 audio and video support
10 April 2011
MediaSPIP uses the HTML5 video and audio tags to play multimedia documents, taking advantage of the latest W3C innovations supported by modern browsers.
For older browsers, the Flowplayer Flash player is used instead.
The HTML5 player was created specifically for MediaSPIP: it is fully customizable graphically to match a chosen theme.
These technologies make it possible to deliver video and sound both to conventional computers (...)
On other sites (8148)
-
Animations with multiple images using FFmpeg or other tools
14 November 2022, by Oleksandr Maliuta
What I have:

- different video templates without a logo
- a UI where users will select a template from that list
- a UI where users can upload their logotype, an mp3 file, and text

What should be done:

- a new video generated based on this configuration

What I found:

- I can use ffmpeg and combine it all, but I'm not sure how to make such animations with a logo. Maybe there is an existing GUI.
- I also found https://github.com/inlife/nexrender, but it works with Adobe After Effects, and it seems like what I need.
Example of the result: https://www.introbrand.com/logo-opening-mobiles.html

I'm not looking for a ready-made solution, just a few words on how to go about it and what to use.
This is absolutely new to me, so if you could suggest something or just tell me the best way, I'd appreciate it.
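Regarding the "I can use ffmpeg and combine it all" idea above, here is a rough, purely illustrative sketch of a single ffmpeg invocation that overlays a logo, draws one line of text and muxes in an mp3. The file names, font path, text and positions are placeholders, not taken from the question:

# overlay logo.png in the top-right corner, draw a caption near the bottom, replace the audio with music.mp3
ffmpeg -i template.mp4 -i logo.png -i music.mp3 \
       -filter_complex "[0:v][1:v]overlay=W-w-20:20[bg];[bg]drawtext=text='Brand name':fontfile=/path/to/font.ttf:fontsize=48:fontcolor=white:x=(w-text_w)/2:y=h-80[out]" \
       -map "[out]" -map 2:a -c:v libx264 -c:a aac -shortest output.mp4

Animated logo reveals like the linked examples go beyond a static overlay, which is where template-based renderers such as nexrender (driving After Effects) come in.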


-
Command-line ffmpeg or sox to convert audio tracks from stereo to a 7.1 upmix
4 November 2016, by user1320370
Hi, I have 3 FLAC multichannel files that I need to mix:
1) 7.1 channels
2) 7.1 channels
3) Stereo channels
To avoid possible problems when mixing (some of these already exist), I would like to convert the stereo track to 7.1, so that all three files are 7.1 files ready to mix.
I am searching for a command line like:
ffmpeg input_stereo.flac output_7.1.flac
or
sox input_stereo.flac output_7.1.flac
I suppose the settings are:
Left -----> FL (front left)
Right -----> FR (front right)
Left + right -----> FC (front center)
(empty) -----> LFE (subwoofer)
Left -----> BL (back left)
Right -----> BR (back right)
Left -----> SL (side left)
Right -----> SR (side right)
Is it possible to post a command line that does this?
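For what it's worth, here is a sketch of such a command using ffmpeg's pan filter, following the mapping above; the 0.5 gains on the front centre are an assumption (to avoid clipping when summing L and R), not something from the question:

# duplicate L/R into the front, back and side pairs, mix both into the centre, keep the LFE silent
ffmpeg -i input_stereo.flac \
       -af "pan=7.1|FL=FL|FR=FR|FC=0.5*FL+0.5*FR|LFE=0*FL|BL=FL|BR=FR|SL=FL|SR=FR" \
       output_7.1.flac

Whether a plain duplicated upmix like this sounds acceptable is a mixing decision; the command only implements the mapping above.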
-
ffmpeg - Poll a folder for files and stream them as video over RTP
17 January 2019, by Omer
(I'm a newbie when it comes to ffmpeg.)
I have an image source which saves files to a given folder at a rate of 30 fps. I want to wait for every chunk of (let's say) 30 frames, encode it to H.264 and stream it over RTP to some other app. I thought about writing a Python app which just waits for the images and then executes an ffmpeg command. For that I wrote the following code:
main.py:
import os
import Helpers
import argparse
import IniParser
import subprocess
from functools import partial
from Queue import Queue
from threading import Semaphore, Thread

def Run(config):
    os.chdir(config.Workdir)
    iteration = 1
    q = Queue()
    Thread(target=RunProcesses, args=(q, config.AllowedParallelRuns)).start()
    while True:
        Helpers.FileCount(config.FramesPathPattern, config.ChunkSize * iteration)
        command = config.FfmpegCommand.format(startNumber=(iteration - 1) * config.ChunkSize, vFrames=config.ChunkSize)
        runFunction = partial(subprocess.Popen, command)
        q.put(runFunction)
        iteration += 1

def RunProcesses(queue, semaphoreSize):
    semaphore = Semaphore(semaphoreSize)
    while True:
        runFunction = queue.get()
        Thread(target=HandleProcess, args=(runFunction, semaphore)).start()

def HandleProcess(runFunction, semaphore):
    semaphore.acquire()
    p = runFunction()
    p.wait()
    semaphore.release()

if __name__ == '__main__':
    argparser = argparse.ArgumentParser()
    argparser.add_argument("config", type=str, help="Path for the config file")
    args = argparser.parse_args()
    iniFilePath = args.config
    config = IniParser.Parse(iniFilePath)
    Run(config)

Helpers.py (not really relevant):
import os
import time
from glob import glob

def FileCount(pattern, count):
    count = int(count)
    lastCount = 0
    while True:
        currentCount = glob(pattern)
        if lastCount != currentCount:
            lastCount = currentCount
            if len(currentCount) >= count and all([CheckIfClosed(f) for f in currentCount]):
                break
        time.sleep(0.05)

def CheckIfClosed(filePath):
    try:
        os.rename(filePath, filePath)
        return True
    except:
        return False

I used the following config file:
Workdir = "C:\Developer\MyProjects\Streaming\OutputStream\PPM"
; Workdir is the reference directory; all relative paths are resolved against it.
; You may still use full paths if you wish.
FramesPathPattern = "F*.ppm"
; The path pattern (wildcards allowed) where the rendered images are stored to.
; We use this pattern to detect how many rendered images are available for streaming.
; When a chunk of frames is ready - we stream it (or store to disk).
ChunkSize = 30 ; Number of frames for bulk.
; ChunkSize sets the number of frames we need to wait for, in order to execute the ffmpeg command.
; If the folder already contains several chunks, it will first process the first chunk, then second, and so on...
AllowedParallelRuns = 1 ; Number of parallel allowed processes of ffmpeg.
; This sets how many parallel ffmpeg processes are allowed.
; If more than one chunk is available in the folder for processing, we will execute several ffmpeg processes in parallel.
; Only when one of the processes finishes will another process be allowed to run.
FfmpegCommand = "ffmpeg -re -r 30 -start_number {startNumber} -i F%08d.ppm -vframes {vFrames} -vf vflip -f rtp rtp://127.0.0.1:1234" ; Command to execute when a bulk is ready for streaming.
; Once a chunk is ready for processing, this is the command that will be executed (same as running it from the terminal).
; There is however a minor difference. Since every chunk starts with a different frame number, you can use the
; expression of "{startNumber}" which will automatically takes the value of the matching start frame number.
; You can also use "{vFrames}" as an expression for the ChunkSize which was set above in the "ChunkSize" entry.
Please note that if I set "AllowedParallelRuns = 2", then multiple ffmpeg processes are allowed to run simultaneously.
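To make the substitution concrete: with ChunkSize = 30, the second chunk (iteration 2) expands {startNumber} to 30 and {vFrames} to 30, so the command actually executed is:

ffmpeg -re -r 30 -start_number 30 -i F%08d.ppm -vframes 30 -vf vflip -f rtp rtp://127.0.0.1:1234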
I then tried to play it with ffplay to see if I'm doing it right.
The first chunk was streamed fine. The following chunks weren't so great: I got a lot of
[sdp @ 0000006de33c9180] RTP: dropping old packet received too late
messages. What should I do to get ffplay to play the stream in the order of the incoming images? Is it right to run parallel ffmpeg processes? Is there a better solution to my problem?
Thank you!
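Not an authoritative answer, but one direction that may be worth exploring, assuming a POSIX shell, PPM frames that are already complete on disk, and an ffmpeg build with libx264: each per-chunk ffmpeg process restarts its RTP timestamps, which would explain the "dropping old packet" messages, so a single long-running encoder fed over a pipe avoids the problem, and ffplay can be pointed at the SDP it writes:

# one encoder instance, so the RTP timestamps keep increasing across "chunks"
cat F*.ppm | ffmpeg -re -f image2pipe -vcodec ppm -framerate 30 -i - \
                    -vf vflip -c:v libx264 -f rtp rtp://127.0.0.1:1234 -sdp_file stream.sdp

# player side: RTP needs a session description, so open the SDP instead of the raw URL
ffplay -protocol_whitelist file,udp,rtp stream.sdp

The cat only sends the frames that already exist; a polling loop like the one in main.py would still be needed to push newly written frames into the pipe.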