
Media (2)
-
Core Media Video
4 April 2013, by
Updated: June 2013
Language: French
Type: Video
-
Bee video in portrait
14 May 2011, by
Updated: February 2012
Language: French
Type: Video
Other articles (41)
-
HTML5 audio and video support
13 April 2011, by
MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
The MediaSPIP player used has been created specifically for MediaSPIP and can be easily adapted to fit in with a specific theme.
For older browsers the Flowplayer flash fallback is used.
MediaSPIP allows for media playback on major mobile platforms with the above (...) -
From upload to the final video [standalone version]
31 January 2010, by
The path of an audio or video document through SPIPMotion is divided into three distinct stages.
Upload and retrieval of information about the source video
First, a SPIP article must be created and the "source" video document attached to it.
When this document is attached to the article, two actions are executed in addition to the normal behaviour: retrieval of the technical information about the file's audio and video streams; generation of a thumbnail: extraction of a (...) -
HTML5 audio and video support
10 April 2011
MediaSPIP uses the HTML5 video and audio tags to play multimedia documents, taking advantage of the latest W3C innovations supported by modern browsers.
For older browsers, the Flowplayer Flash fallback is used.
The HTML5 player was created specifically for MediaSPIP: it is fully skinnable to match a chosen theme.
These technologies make it possible to deliver video and sound both on conventional computers (...)
On other sites (7889)
-
Most efficient way to render bitmap to screen on Linux [on hold]
22 July 2019, by Maximus
My goal is to receive a video feed over Wi-Fi and display it on my screen. For this, I've created a couple of small programs and a bash script to automate running them. It works like this:
UDPBitmap/Plotter & ffplay -i - < UDPBitmap/pipe & python requester.py;
Translation: there is a C++ program called Plotter; its job is to receive packets on an assigned UDP port, process them and write them to a named pipe (UDPBitmap/pipe). The pipe is read by ffplay, which renders the video on screen. The Python script is solely responsible for accessing and controlling the camera via various HTTP requests.
The above command works as expected, but the resulting latency and framerate are a bit worse than I'd like. The bottleneck is not the pipe, which is fast enough, and the Wi-Fi transmission is also fast enough. The only thing left is ffplay.
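Before replacing ffplay altogether, one hedged tweak is to shrink its input buffering; these are standard ffplay/libavformat options, though how much they help here is untested:

ffplay -fflags nobuffer -flags low_delay -probesize 32 -analyzeduration 0 -i - < UDPBitmap/pipe

-fflags nobuffer and -flags low_delay reduce demuxing and decoding buffering, while -probesize 32 -analyzeduration 0 makes ffplay start rendering almost immediately instead of probing the stream first.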
Question:
What is the most efficient way to render a bitmap to screen on Linux? Is there a de facto library for this that I can use?
Note:
- Language/framework/library does not matter (C, C++, Java, Python, native Linux tools and so on...)
- I do not need a window handle, but is SDL+OpenGL the way to go? (see the sketch after this list)
- Writing directly to the framebuffer would be super cool...
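For what it's worth, here is a minimal sketch of the SDL route in Python, using pygame (SDL under the hood) to blit raw frames read from the named pipe; the frame geometry and RGB24 pixel format are assumptions, not taken from the question:

import pygame

FRAME_W, FRAME_H = 640, 480   # assumed frame geometry, adjust to the real feed
PIPE_PATH = "UDPBitmap/pipe"  # the named pipe Plotter writes to
FRAME_BYTES = FRAME_W * FRAME_H * 3  # raw RGB24, one frame

pygame.init()
screen = pygame.display.set_mode((FRAME_W, FRAME_H))

with open(PIPE_PATH, "rb") as pipe:
    running = True
    while running:
        for event in pygame.event.get():
            if event.type == pygame.QUIT:
                running = False
        buf = pipe.read(FRAME_BYTES)
        if len(buf) < FRAME_BYTES:  # writer closed the pipe
            break
        # frombuffer wraps the bytes without copying the pixel data
        surface = pygame.image.frombuffer(buf, (FRAME_W, FRAME_H), "RGB")
        screen.blit(surface, (0, 0))
        pygame.display.flip()

pygame.quit()

pygame.display.flip() hands the surface to SDL's backend; for more headroom, SDL2 streaming textures or an OpenGL texture upload are the usual next steps.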
-
ffmpeg itsoffset doesn't apply offset to mp4
26 July 2019, by brux
The following command mixes an mp4 and an mp3 together, keeping the audio from the mp4.
ffmpeg -i video.mp4 -i audio.mp3 -map 0:v -c:v copy -filter_complex '[0:a][1:a]amix[aout]' -map '[aout]' -shortest out.mp4
The command works as expected.
Now I want to offset the mp4 file (both its audio and video streams) so that there is a delay of 500 ms at the start of the mp4. Here is my command:
ffmpeg -itsoffset 00:00:00.500 -i video.mp4 -i audio.mp3 -map 0:v -c:v copy -filter_complex '[0:a][1:a]amix[aout]' -map '[aout]' -shortest out.mp4
This doesn't work as expected: the output doesn't have the expected 500 ms delay at the start of the mp4 streams. The output appears to be the same as that of the first command.
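One possible explanation (untested against this exact build): audio routed through filter_complex gets fresh timestamps, so the -itsoffset on input 0 is effectively dropped for the mixed audio. Expressing the delay explicitly with the adelay filter, while keeping -itsoffset for the copied video stream, is one way around it:

ffmpeg -itsoffset 00:00:00.500 -i video.mp4 -i audio.mp3 -map 0:v -c:v copy -filter_complex '[0:a]adelay=500|500[a0];[a0][1:a]amix[aout]' -map '[aout]' -shortest out.mp4

Here adelay=500|500 delays both stereo channels of the mp4's audio by 500 ms before mixing.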
The version of ffmpeg I am using is:
ffmpeg version n4.0-39-gda39990
Here are the files I'm using:
video.mp4
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from '/storage/emulated/0/video.mp4':
  Metadata:
    major_brand     : mp42
    minor_version   : 0
    compatible_brands: isommp42
    creation_time   : 2019-07-26T03:28:49.000000Z
    com.android.version: 8.0.0
  Duration: 00:00:07.64, start: 0.000000, bitrate: 20534 kb/s
    Stream #0:0(eng): Video: h264 (avc1 / 0x31637661), yuvj420p(pc, smpte170m), 1920x1080, 20966 kb/s, SAR 1:1 DAR 16:9, 29.70 fps, 29.75 tbr, 90k tbn, 180k tbc (default)
    Metadata:
      rotate          : 270
      creation_time   : 2019-07-26T03:28:49.000000Z
      handler_name    : VideoHandle
    Side data:
      displaymatrix: rotation of 90.00 degrees
    Stream #0:1(eng): Audio: aac (mp4a / 0x6134706D), 48000 Hz, stereo, fltp, 96 kb/s (default)
    Metadata:
      creation_time   : 2019-07-26T03:28:49.000000Z
      handler_name    : SoundHandle
audio.mp3
Input #0, mp3, from '/storage/emulated/0/audio.mp3':
  Metadata:
    album           : Sunflower (Spider-Man: Into the Spider-Verse) - Single
    composer        : Louis Bell, Carter Lang, Austin Richard Post, Billy Walsh & Khalif Malik Ibin Shaman Brown
    genre           : Hip-Hop/Rap
    copyright       : This Compilation ℗ 2018 Republic Records, a division of UMG Recordings, Inc.
    title           : Sunflower (Spider-Man: Into the Spider-Verse)
    artist          : Post Malone & Swae Lee
    album_artist    : Post Malone & Swae Lee
    track           : 01/01
    TYER            : 2018-10-18T07:00:00Z
  Duration: 00:02:38.07, start: 0.000000, bitrate: 325 kb/s
    Stream #0:0: Audio: mp3, 44100 Hz, stereo, fltp, 320 kb/s
    Stream #0:1: Video: mjpeg, yuvj444p(pc, bt470bg/unknown/unknown), 1400x1400 [SAR 300:300 DAR 1:1], 90k tbr, 90k tbn, 90k tbc
    Metadata:
      comment         : Cover (front)
-
How to use Python (for example, ffmpeg or MoviePy) to split a large video for display on multiple screens [on hold]
27 July 2019, by Chen Clarence
I am trying to find a way to split a very long (1080×15360) video into 8 1080p videos of the same duration. Is there anything that can achieve this, or better yet allow control over each pixel (e.g. using a function to cover part of the video with a black circle)? Right now I brute-force it, as in the following, but I'm sure there are much more efficient methods. Thanks in advance!
import os
import cv2

cap = cv2.VideoCapture('sample.avi')
# inputs & sanity check
numberOfScreens = 8
screenArrangement = (1, 8)  # (rows, columns)
if numberOfScreens != screenArrangement[0] * screenArrangement[1]:
    raise ValueError('The screen arrangement does not match the number of screens')

currentFrame = 0
while True:
    # read frames
    ret, frame = cap.read()
    if not ret:
        break
    height, width, layers = frame.shape
    unitHeight = height // screenArrangement[0]  # could be hoisted out of the loop
    unitWidth = width // screenArrangement[1]
    # cut each frame into the desired tiles
    for i in range(screenArrangement[0]):
        for j in range(screenArrangement[1]):
            screenDir = 'Screen' + str(i * screenArrangement[1] + j + 1)
            try:
                if not os.path.exists(screenDir):
                    os.makedirs(screenDir)
                    print('creating directory ' + screenDir)
            except OSError:
                print('Error Creating Directory')
            name = './' + screenDir + '/frame' + str(currentFrame) + '.png'
            cropImg = frame[i * unitHeight:(i + 1) * unitHeight,
                            j * unitWidth:(j + 1) * unitWidth]
            print('creating ' + name)
            # save the cropped frame losslessly
            cv2.imwrite(name, cropImg, [cv2.IMWRITE_PNG_COMPRESSION, 0])
    currentFrame += 1
Frames = currentFrame

# set up the writer object and reassemble Screen1's frames into a video
fourcc = cv2.VideoWriter_fourcc(*'XVID')
writer = cv2.VideoWriter('Screen1.avi', fourcc, 30, (unitWidth, unitHeight), True)
for i in range(Frames):
    img = cv2.imread('./Screen1/frame' + str(i) + '.png')
    writer.write(img)
writer.release()
cap.release()
cv2.destroyAllWindows()
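An alternative sketch that skips the per-frame PNG round trip entirely: let ffmpeg's crop filter cut each screen's strip in one pass per screen. The output names are made up here, and ffmpeg is assumed to be on the PATH:

import subprocess

SCREENS = 8
W, H = 1920, 1080  # per-screen geometry: 15360 / 8 = 1920

for k in range(SCREENS):
    # crop=w:h:x:y selects screen k's horizontal strip; -an drops the audio
    subprocess.run([
        'ffmpeg', '-y', '-i', 'sample.avi',
        '-vf', 'crop={}:{}:{}:0'.format(W, H, k * W),
        '-an', 'screen{}.avi'.format(k + 1),
    ], check=True)

Since all eight clips are cut from the same source in full, they keep exactly the same duration, and per-pixel effects such as the black circle can be appended to the same -vf chain (e.g. with ffmpeg's drawbox or geq filters).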