
Media (91)
-
Richard Stallman and free software
19 October 2011
Updated: May 2013
Language: French
Type: Text
-
Stereo master soundtrack
17 October 2011
Updated: October 2011
Language: English
Type: Audio
-
Elephants Dream - Cover of the soundtrack
17 October 2011
Updated: October 2011
Language: English
Type: Picture
-
#7 Ambience
16 October 2011
Updated: June 2015
Language: English
Type: Audio
-
#6 Teaser Music
16 October 2011
Updated: February 2013
Language: English
Type: Audio
-
#5 End Title
16 October 2011
Updated: February 2013
Language: English
Type: Audio
Other articles (85)
-
Updating from version 0.1 to 0.2
24 June 2013. An explanation of the notable changes involved in moving from version 0.1 of MediaSPIP to version 0.3. What's new?
Software dependencies: the latest versions of FFMpeg (>= v1.2.1) are now used; the dependencies for Smush are installed; MediaInfo and FFprobe are installed to retrieve metadata; ffmpeg2theora is no longer used; flvtool2 is no longer installed, in favour of flvtool++; ffmpeg-php, which is no longer maintained, is no longer installed, in favour of (...) -
Writing a news item
21 June 2013. Present the changes on your MediaSPIP site, or news about your projects, through the news section.
In MediaSPIP's default theme, spipeo, news items are displayed at the bottom of the main page, below the editorials.
You can customise the form used to create a news item.
News creation form: for a document of the news type, the default fields are: publication date (customise the publication date) (...) -
HTML5 audio and video support
10 April 2011. MediaSPIP uses the HTML5 video and audio tags to play multimedia documents, taking advantage of the latest W3C innovations supported by modern browsers.
For older browsers, the Flowplayer Flash player is used instead.
The HTML5 player was created specifically for MediaSPIP: its appearance is fully customisable to match a chosen theme.
These technologies make it possible to deliver video and sound both to conventional computers (...)
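As a rough illustration of the pattern just described (a sketch only, not MediaSPIP's actual markup; all file names are placeholders), an HTML5 player can list several sources and fall back to a Flash player such as Flowplayer for browsers that do not understand the video tag:
<video controls width="640" height="360">
  <source src="movie.mp4" type="video/mp4" />
  <source src="movie.ogv" type="video/ogg" />
  <!-- browsers without HTML5 video render this Flash fallback instead -->
  <object data="flowplayer.swf" type="application/x-shockwave-flash" width="640" height="360">
    <param name="movie" value="flowplayer.swf" />
    <param name="flashvars" value='config={"clip":"movie.mp4"}' />
  </object>
</video>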
On other websites (11578)
-
ffmpeg - resize video and merge with image
25 April 2018, by Sudesh. I have 3 inputs (1st, 2nd and 3rd blocks):
1st: an mp4 video at 600x400;
2nd: a png image at 600x400;
3rd: a jpeg image with a red background.
Output (4th block): I need a 600x400 mp4 video as output; it should contain the video resized to 422x282, with all three inputs merged as shown in the image.
Can this be done with the ffmpeg command line? I am able to resize the video and the image separately, but I am having trouble producing the desired output; a possible command is sketched below.
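One possible command (a sketch only: the file names video.mp4, frame.png and background.jpg and the overlay coordinates are assumptions, since the reference image is not reproduced here) scales the jpeg background to 600x400, overlays the resized video on it, then overlays the png on top, assuming the png is transparent where the video should show through:
ffmpeg -i video.mp4 -i frame.png -i background.jpg -filter_complex "[2:v]scale=600:400[bg];[0:v]scale=422:282[small];[bg][small]overlay=89:59[tmp];[tmp][1:v]overlay=0:0" -c:v libx264 output.mp4
The overlay x:y values (here centring the 422x282 video in the 600x400 frame) would need adjusting to match the intended layout.
-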
Is it possible to merge two or more videos in real-time like this?
25 February 2015, by Marko. Is it possible to play a video online that is made up of two or more video files?
Since my original post wasn't clear enough, here is an expanded explanation and question.
My site is hosted on a Linux/Apache/PHP server. I have video files in FLV/F4V format. I can also convert them to other available formats if necessary. All videos have the same aspect ratio and other parameters.
What I want is to build (or use, if one exists) an online video player that plays a video composed of multiple video files concatenated together in real time, i.e. when the user clicks to see a video.
For example, a visitor comes to my site and sees a video titled "Welcome" available to play. When he/she clicks to play that video, I take the video files "Opening.f4v", "Welcome.f4v" and "Ending.f4v" and join/merge/concatenate them one after another to create one continuous video on the fly.
The resulting video looks like one video, with no visual clues, lags or even the smallest observable delay between the parts. Basically, some form of on-the-fly editing or pre-editing is done, and the user sees the result. The resulting video is not saved on the server; it is composed and played that way in real time.
Also, if possible, the user shouldn't have to wait for the merging to finish before seeing the result, but should be able to start watching the first part immediately while the merging happens simultaneously.
Is this possible with Flash/ActionScript, ffmpeg, HTML5 or some other online technology? I don't need an explanation of how it's possible, just a nod that it is possible and some links to investigate further.
Also, if one option is to use Flash, what are the alternatives for making this work when the site is visited from an iPhone/iPad?
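One server-side building block worth knowing about (a sketch, not from the original thread) is ffmpeg's concat demuxer, which joins files that share codecs and parameters, as these do, without re-encoding; the names parts.txt and combined.mp4 are assumptions:
# parts.txt lists the clips in playback order
file 'Opening.f4v'
file 'Welcome.f4v'
file 'Ending.f4v'
ffmpeg -f concat -i parts.txt -c copy combined.mp4
Because -c copy avoids re-encoding, the join is nearly instantaneous, so in principle the output could be piped to the player as it is produced rather than written to disk first.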
-
FFmpeg command works on the command line, but not in a Python script (semi-solved)
20 February 2015, by Fooldj. Okay, kind of a weird problem. But I'm not sure whether it's Python, ffmpeg, or some stupid thing I'm doing wrong.
I'm trying to take a video, grab one frame per second, and output each frame as an image. Right now, if I use the command line with ffmpeg:
ffmpeg -i test.avi -r 1 -f image2 image-%3d.jpeg -pix_fmt rgb24 -vcodec rawvideo
It outputs about 10 images, and the images look fine, awesome. Now I have this code (right now some code from GitHub, as I wanted stuff that I was relatively sure would work, and mine is all convoluted):
import subprocess as sp
import numpy as np
import re
import cv2
import time
FFMPEG_BIN = r'ffmpeg.exe'
INPUT_VID = 'test.avi'
def getInfo():
    # probe the file: ffmpeg prints stream info (including WxH) on stderr
    command = [FFMPEG_BIN, '-i', INPUT_VID, '-']
    pipe = sp.Popen(command, stdout=sp.PIPE, stderr=sp.PIPE)
    pipe.stdout.readline()
    pipe.terminate()
    infos = pipe.stderr.read()
    # pull the first " WxH " token out of the banner and split into [w, h]
    res = re.search(' \d+x\d+ ', infos)
    res = [int(x) for x in res.group(0).split('x')]
    return res
res = getInfo()
command = [ FFMPEG_BIN,
'-i', INPUT_VID,
'-f', 'image2pipe',
'-pix_fmt', 'rgb24',
'-vcodec', 'rawvideo', '-']
pipe = sp.Popen(command, stdout = sp.PIPE, bufsize=10**8)
n = 0
im2 = []
try:
    mog = cv2.BackgroundSubtractorMOG2(120, 2, True)
    while True:
        # read exactly one frame's worth of raw RGB bytes from the pipe
        raw_image = pipe.stdout.read(res[0]*res[1]*3)
        # transform the bytes read into a numpy array
        image = np.fromstring(raw_image, dtype='uint8')
        image = image.reshape((res[1], res[0], 3))
        rgbImg = image.copy()
        fname = '_tmp%03d.png' % time.time()
        cv2.imwrite(fname, rgbImg)
        # throw away the data in the pipe's buffer.
        #pipe.stdout.flush()
        n += 1
        print n
except:
    # reshape fails once the stream runs dry, landing us here
    print 'done', n
    pipe.kill()
    cv2.destroyAllWindows()
When I run this, I get 10 images, but they all have a blue tint! I cannot for the life of me figure out why. I've done tons of searches and I've tried quite a few different codecs (which usually just messes things up worse). The media info for the video file is here:
General
Complete name : test.avi
Format : AVI
Format/Info : Audio Video Interleave
File size : 85.0 KiB
Duration : 133ms
Overall bit rate : 5 235 Kbps
Video
ID : 0
Format : JPEG
Codec ID : MJPG
Duration : 133ms
Bit rate : 1 240 Kbps
Width : 640 pixels
Height : 480 pixels
Display aspect ratio : 4:3
Frame rate : 30.000 fps
Color space : YUV
Chroma subsampling : 4:2:2
Bit depth : 8 bits
Compression mode : Lossy
Bits/(Pixel*Frame) : 0.135
Stream size : 20.1 KiB (24%)
Any suggestions? It seems like it should be an RGB mix-up... just not sure where...
EDIT: So I fixed the problem by switching the blue and red channels with this code:
bChannel = rgbImg[:,:,0]
gChannel = rgbImg[:,:,1]
rChannel = rgbImg[:,:,2]
rgbArray = np.zeros((res[1], res[0], 3), 'uint8')
rgbArray[...,0] = rChannel
rgbArray[...,1] = gChannel
rgbArray[...,2] = bChannel
So I guess this is now a question of: why is Python mixing up these channels? Is it a problem with Python, ffmpeg, or the codec?
Thanks!
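For what it's worth, the likely culprit is OpenCV rather than Python or the codec: cv2.imwrite expects pixel data in BGR order, while the pipe above was asked for rgb24, so red and blue end up swapped on disk. Under that assumption, two minimal fixes, either of which makes the manual channel shuffle unnecessary:
# convert the RGB frame to BGR just before writing
cv2.imwrite(fname, cv2.cvtColor(rgbImg, cv2.COLOR_RGB2BGR))
# or ask ffmpeg for BGR in the first place
command = [FFMPEG_BIN, '-i', INPUT_VID,
           '-f', 'image2pipe',
           '-pix_fmt', 'bgr24',
           '-vcodec', 'rawvideo', '-']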