
Other articles (41)
-
Websites made with MediaSPIP
2 May 2011
This page lists some websites based on MediaSPIP.
-
Creating farms of unique websites
13 April 2011
MediaSPIP platforms can be installed as a farm, with a single "core" hosted on a dedicated server and used by multiple websites.
This allows (among other things): implementation costs to be shared between several different projects/individuals; rapid deployment of multiple unique sites; creation of groups of like-minded sites, making it possible to browse media in a more controlled and selective environment than the major "open" (...) -
Other interesting software
13 April 2011
We don't claim to be the only ones doing what we do, and certainly don't claim to be the best at it; we simply try to do it well and to keep getting better.
The following list presents software that is more or less similar to MediaSPIP, or whose goals MediaSPIP more or less shares.
We don't know them and haven't tried them, but you can take a peek.
Videopress
Website: http://videopress.com/
License: GNU/GPL v2
Source code: (...)
On other sites (7904)
-
Python: mp3 to alsaaudio through ffmpeg pipe and wave.open(f,'r')
3 July 2017, by user2754098
I'm trying to decode mp3 to wav using ffmpeg:
import alsaaudio
import wave
from subprocess import Popen, PIPE
with open('filename.mp3', 'rb') as infile:
    p = Popen(['ffmpeg', '-i', '-', '-f', 'wav', '-'], stdin=infile, stdout=PIPE)
...
Next I want to redirect the data from p.stdout.read() to wave.open(file, 'r') so I can use readframes(n) and the other methods. But I can't, because 'file' in wave.open(file, 'r') can only be a file name or an open file pointer.
...
file = wave.open(p.stdout.read(), 'r')
card = 'default'
device = alsaaudio.PCM(card=card)
device.setchannels(file.getnchannels())
device.setrate(file.getframerate())
device.setformat(alsaaudio.PCM_FORMAT_S16_LE)
device.setperiodsize(320)
data = file.readframes(320)
while data:
    device.write(data)
    data = file.readframes(320)

I got:
TypeError: file() argument 1 must be encoded string without NULL bytes, not str
So is it possible to handle the data from p.stdout.read() with wave.open()?
Making a temporary .wav file isn't a solution.
Sorry for my English.
Thanks.
UPDATE
Thanks to PM 2Ring for the hint about io.BytesIO.
However, the resulting code does not work.
import io
import alsaaudio
import wave
from subprocess import Popen, PIPE

with open('sometrack.mp3', 'rb') as infile:
    p = Popen(['ffmpeg', '-i', '-', '-f', 'wav', '-'], stdin=infile, stdout=PIPE, stderr=PIPE)
    fobj = io.BytesIO(p.stdout.read())
    fwave = wave.open(fobj, 'rb')

Trace:
File "./script.py", line x, in <module>
fwave = wave.open(fobj, 'rb')
File "/usr/lib/python2.7/wave.py", line x, in open
return Wave_read(f)
File "/usr/lib/python2.7/wave.py", line x, in __init__
self.initfp(f)
File "/usr/lib/python2.7/wave.py", line x, in initfp
raise Error, 'not a WAVE file'
wave.Error: not a WAVE file
From /usr/lib/python2.7/wave.py:
...
self._file = Chunk(file, bigendian = 0)
if self._file.getname() != 'RIFF':
    raise Error, 'file does not start with RIFF id'
if self._file.read(4) != 'WAVE':
    raise Error, 'not a WAVE file'
...
The check fails because the self._file object is 'bad'.
Inside /usr/lib/python2.7/chunk.py I found the source of the problem:
...
try:
    self.chunksize = struct.unpack(strflag+'L', file.read(4))[0]
except struct.error:
    raise EOFError
...
This is because struct.unpack(strflag+'L', file.read(4))[0] returns 0.
But the call itself works correctly; the size field really is 0. As specified here:
"5-8 bytes - File size(integer)
Size of the overall file - 8 bytes, in bytes (32-bit integer). Typically, you’d fill this in after creation."
That's why my script doesn't work: wave.open and the other functions cannot handle my file object because self.chunksize = 0. It looks like ffmpeg cannot fill in the file size when writing to a pipe.
SOLUTION
It's simple.
I've changed the __init__ function of the Chunk class.
Before:
...
try:
    self.chunksize = struct.unpack(strflag+'L', file.read(4))[0]
except struct.error:
    raise EOFError
...
After:
...
try:
    self.chunksize = struct.unpack(strflag+'L', file.read(4))[0]
    currtell = file.tell()
    if self.chunksize == 0:
        file.seek(0)
        file.read(currtell)
        self.chunksize = len(file.read()) - 4
        file.seek(0)
        file.read(currtell)
except struct.error:
    raise EOFError
...
Of course, editing the original module is a bad idea, so I've created custom forks of the two classes Chunk and Wave_read.
You can find the full working (but unstable) code here.
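Another option worth noting, a rough sketch rather than the code linked above: since the whole stream has already been read into memory, the two missing size fields can be patched directly in the buffer before handing it to wave.open, with no changes to the standard library. It assumes the usual ffmpeg WAV layout in which the 'data' chunk is the last chunk of the stream:

import io
import struct
import wave

def fix_wav_sizes(wav_bytes):
    # ffmpeg cannot seek back on a pipe, so the RIFF size and the 'data'
    # chunk size are left unset; recompute both from the buffer length.
    buf = bytearray(wav_bytes)
    if buf[0:4] != b'RIFF' or buf[8:12] != b'WAVE':
        raise ValueError('not a RIFF/WAVE stream')
    struct.pack_into('<I', buf, 4, len(buf) - 8)        # overall RIFF size
    pos = 12
    while pos + 8 <= len(buf):
        chunk_id = bytes(buf[pos:pos + 4])
        size = struct.unpack_from('<I', buf, pos + 4)[0]
        if chunk_id == b'data':
            # assume the data chunk runs to the end of the buffer
            struct.pack_into('<I', buf, pos + 4, len(buf) - pos - 8)
            break
        pos += 8 + size + (size & 1)                    # chunks are word aligned
    return bytes(buf)

# With p as in the UPDATE above:
# fobj = io.BytesIO(fix_wav_sizes(p.stdout.read()))
# fwave = wave.open(fobj, 'rb')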
Sorry for my awful English.
Thanks.
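A different way to sidestep the WAV header problem entirely is to ask ffmpeg for headerless raw PCM with -f s16le, so there is no RIFF chunk to parse and the bytes can go straight to ALSA. A minimal sketch along the lines of the code above, assuming 44100 Hz and 2 channels (the values must match what you tell ffmpeg, because raw PCM carries no metadata):

import alsaaudio
from subprocess import Popen, PIPE

RATE, CHANNELS, PERIOD = 44100, 2, 320

with open('filename.mp3', 'rb') as infile:
    p = Popen(['ffmpeg', '-i', '-', '-f', 's16le', '-acodec', 'pcm_s16le',
               '-ar', str(RATE), '-ac', str(CHANNELS), '-'],
              stdin=infile, stdout=PIPE)

    device = alsaaudio.PCM(card='default')
    device.setchannels(CHANNELS)
    device.setrate(RATE)
    device.setformat(alsaaudio.PCM_FORMAT_S16_LE)
    device.setperiodsize(PERIOD)

    # 2 bytes per sample per channel, read one period at a time
    chunk = p.stdout.read(PERIOD * CHANNELS * 2)
    while chunk:
        device.write(chunk)
        chunk = p.stdout.read(PERIOD * CHANNELS * 2)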
-
python 3 using ffmpeg in a subprocess getting stderr decoding error
4 May 2024, by jdauthre
I am running ffmpeg as a subprocess and using its stderr to get various bits of data, like the subtitle stream IDs. It works fine for most videos, but one with Japanese subtitles results in an error:


'charmap' codec can't decode byte in position xxx: character maps to <undefined>


Much googling suggests the problem is that the Japanese text requires Unicode, whereas English does not. The solutions offered refer to problems with files, and I cannot find a way of doing the same with stderr. The relevant code is below:


command = [ffmpeg, "-y", "-i", fileSelected, "-acodec", "pcm_s16le",
           "-vn", "-t", "3", "-f", "null", "-"]
print(command)
proc = subprocess.Popen(command, stderr=PIPE, stdin=subprocess.PIPE,
                        universal_newlines=True, startupinfo=startupinfo)

stream = ""
for line in proc.stderr:
    try:
        print("line", line)
    except Exception as error:
        print("print", error)
    line = line[:-1]
    if "Stream #" in line:
        estream = line.split("#", 1)[1]
        estream = estream.split(" (", 1)[0]
        print("estream", estream)
        stream = stream + estream + "\n"
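For what it's worth, this error usually comes from universal_newlines=True, which decodes the child's output with the locale codec (the 'charmap' codec on many Windows setups), while ffmpeg's log lines are not guaranteed to be in that encoding. One way around it on Python 3.6+ is to tell Popen which encoding to use and to replace undecodable bytes instead of raising. A minimal sketch, with a hypothetical input path standing in for fileSelected and the startupinfo argument dropped for brevity:

import subprocess
from subprocess import PIPE

file_selected = "input.mkv"  # hypothetical path, use your own file
command = ["ffmpeg", "-y", "-i", file_selected, "-acodec", "pcm_s16le",
           "-vn", "-t", "3", "-f", "null", "-"]

# encoding=... puts the pipes in text mode (like universal_newlines=True)
# but with an explicit codec; errors="replace" turns undecodable bytes into
# U+FFFD instead of raising a decode error.
proc = subprocess.Popen(command, stderr=PIPE, stdin=PIPE,
                        encoding="utf-8", errors="replace")

stream = ""
for line in proc.stderr:
    if "Stream #" in line:
        estream = line.split("#", 1)[1].split(" (", 1)[0]
        stream += estream + "\n"
print(stream)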



-
FFmpeg, videotoolbox and avplayer in iOS
9 January 2017, by Hwangho Kim
I have a question about how these things are connected and what exactly they do.
FYI, I have some experience with video players and with encoding and decoding.
In my job I handle UDP streams from a server, decode them with ffmpeg, and draw the result with OpenGL. I also use ffmpeg for a video player.
These are the questions...
1. Can only ffmpeg decode the UDP stream (encoded with ffmpeg on the server), or not?
I found some useful information about VideoToolbox, which can decode streams with hardware acceleration on iOS. So could I also decode the stream from the server with VideoToolbox?
2. If it is possible to decode with VideoToolbox (I mean, if VideoToolbox could replace ffmpeg), then what is the videotoolbox source code in ffmpeg? Why is it there?
In my decoder I create an AVCodecContext from the stream, and its hwaccel and hwaccel_context fields are both set to NULL. I thought this videotoolbox code was a kind of API that helps ffmpeg use the hardware acceleration of iOS, but that doesn't seem to be true so far...
3. If VideoToolbox can decode streams, can it also decode local H.264 files, or is only streaming possible?
AVPlayer is a good tool for playing a video, but if VideoToolbox could replace AVPlayer, what would the benefit be? Or is that impossible?
4. Does FFmpeg use only the CPU for decoding (a software decoder), or hardware acceleration as well?
When I play a video with the ffmpeg player, CPU usage goes over 100%. Does that mean this ffmpeg uses only the software decoder, or is there a way to use hwaccel?
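On question 4: ffmpeg can use hardware decoders when the build includes them, and on Apple platforms that is the videotoolbox hwaccel. A quick, hedged way to check a desktop ffmpeg build from Python (assuming an ffmpeg binary on the PATH and a hypothetical local file sample.mp4):

import subprocess

# List the hardware accelerators this ffmpeg build was compiled with.
subprocess.run(["ffmpeg", "-hide_banner", "-hwaccels"], check=True)

# Decode with VideoToolbox and discard the frames; if the hwaccel is not
# available, ffmpeg prints a warning and may fall back to software decoding.
subprocess.run(["ffmpeg", "-hide_banner", "-hwaccel", "videotoolbox",
                "-i", "sample.mp4", "-f", "null", "-"], check=True)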
Please forgive my poor English; any answer would be appreciated.
Thanks.