
Media (91)
-
Richard Stallman et le logiciel libre
19 October 2011
Updated: May 2013
Language: French
Type: Text
-
Stereo master soundtrack
17 October 2011
Updated: October 2011
Language: English
Type: Audio
-
Elephants Dream - Cover of the soundtrack
17 October 2011
Updated: October 2011
Language: English
Type: Image
-
#7 Ambience
16 October 2011
Updated: June 2015
Language: English
Type: Audio
-
#6 Teaser Music
16 October 2011
Updated: February 2013
Language: English
Type: Audio
-
#5 End Title
16 October 2011
Updated: February 2013
Language: English
Type: Audio
Other articles (100)
-
Customising categories
21 June 2013
Category creation form
For those who know SPIP well, a category can be thought of as the equivalent of a SPIP section (rubrique).
For a document of type "category", the fields offered by default are: Text
This form can be modified under:
Administration > Configuration des masques de formulaire.
For a document of type "media", the fields not displayed by default are: Short description (Descriptif rapide)
It is also in this configuration section that you can specify the (...) -
Emballe médias: what is it for?
4 February 2011
This plugin is designed to manage sites that publish documents of all types.
It creates "media" items, namely: a "media" is an article in the SPIP sense, created automatically when a document is uploaded, whether audio, video, image or text; only a single document can be linked to a "media" article; -
Contribute to translation
13 April 2011
You can help us improve the language used in the software interface to make MediaSPIP more accessible and user-friendly. You can also translate the interface into other languages, which allows it to spread to new linguistic communities.
To do this, we use the SPIP translation interface, where all the language modules of MediaSPIP are available. Just subscribe to the mailing list and request further information on translation.
MediaSPIP is currently available in French and English (...)
On other sites (8788)
-
Decoding VP8 On A Sega Dreamcast
20 February 2011, by Multimedia Mike — Sega Dreamcast, VP8
I got Google’s libvpx VP8 codec library to compile and run on the Sega Dreamcast with its Hitachi/Renesas SH-4 200 MHz CPU. So give Google/On2 their due credit for writing portable software. I’m not sure how best to illustrate this so please accept this still photo depicting my testbench Dreamcast console driving video to my monitor:
Why? Because I wanted to try my hand at porting some existing software to this console and because I tend to be most comfortable working with assorted multimedia software components. This seemed like it would be a good exercise.
You may have observed that the video is blue. Shortest, simplest answer: Pure laziness. Short, technical answer: Path of least resistance for getting through this exercise. Longer answer follows.
Update: I did eventually realize that the Dreamcast can work with YUV textures. Read more in my follow-up post.
Process and Pitfalls
libvpx comes with a number of little utilities, including decode_to_md5.c. The first order of business was porting over enough source files to make the VP8 decoder compile along with the MD5 testbench utility.
Again, I used the KallistiOS (KOS) console RTOS (aside: I’m still working to get modern Linux kernels compiled for the Dreamcast). I started by configuring and compiling libvpx on a regular desktop Linux system. From there, I was able to modify a number of configuration options to make the build more amenable to the embedded RTOS.
I had to create a few shim header files that mapped various functions related to threading and synchronization to their KOS equivalents. For example, KOS has a threading library cleverly named kthreads which is mostly compatible with the more common pthread library functions. KOS apparently also predates stdint.h, so I had to contrive a file with those basic types.
So I got everything compiled and then uploaded the binary along with a small VP8 IVF test vector. Imagine my surprise when an MD5 sum came out of the serial console. Further, visualize my utter speechlessness when I noticed that the MD5 sum matched what my desktop platform produced. It worked!
Almost. When I tried to decode all frames in a test vector, the program would invariably crash. The problem was that the file that manages motion compensation (reconinter.c) needs to define MUST_BE_ALIGNED, which compiles in byte-wise block copy functions. This is necessary for CPUs like the SH-4 which can’t load unaligned data. Apparently, even ARM CPUs these days can handle unaligned memory accesses, which is why this isn’t a configure-time option.
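As a concrete picture of what a decode_to_md5-style check boils down to, here is a minimal sketch of the concept in Python (not libvpx’s actual C code; treating the raw I420 planes as the hashed bytes is an assumption here): digest every decoded frame and compare the output across platforms.

import hashlib

def frame_md5(y_plane, u_plane, v_plane):
    # Digest the raw I420 planes of one decoded frame. If the SH-4 build
    # and a desktop build print identical digests for a test vector, the
    # decoder output is bit-exact across the two platforms.
    md5 = hashlib.md5()
    for plane in (y_plane, u_plane, v_plane):
        md5.update(bytes(plane))
    return md5.hexdigest()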
Showing The Work
I completed the first testbench application, which ran the MD5 test on all 17 official IVF test vectors. The SH-4/Dreamcast version aces the whole suite.
However, this is a video game console, so I had better be able to show the decoded video. The Dreamcast is strictly RGB: forget about displaying YUV data directly. I could take the performance hit to convert YUV -> RGB. Or, I could just display the intensity information (Y plane) rendered on a random color scale (I chose blue) on an RGB565 texture (the DC’s graphics hardware can also do paletted textures, but those need to be rearranged/twiddled/swizzled).
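To make the blue-scale trick concrete, here is a minimal Python sketch of just the bit packing (the actual port does this in C against the DC’s texture memory, so only the arithmetic is meant literally). RGB565 lays a pixel out as RRRRRGGGGGGBBBBB, so reducing each 8-bit Y sample to its top 5 bits and leaving them in the low bit positions produces a blue-only pixel:

import numpy as np

def y_to_blue_rgb565(y_plane):
    # y_plane: 2-D uint8 array of luma samples.
    # Keep the top 5 bits of each Y value and store them in bits 0-4,
    # the blue field of RGB565; red and green stay zero, hence the tint.
    return y_plane.astype(np.uint16) >> 3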
Results
So, can the Dreamcast decode VP8 video in realtime? Sure! Well, I really need to qualify that. In the test depicted in the picture, it seems to be realtime (though I wasn’t enforcing proper frame timings, just decoding and displaying as quickly as possible). Obviously, I wasn’t bothering to properly convert YUV -> RGB. Plus, that Big Buck Bunny test vector clip is only 176x144. Obviously, no audio decoding either.
So: realtime playback, with a little fine print.
On the plus side, it’s trivial to get the Dreamcast video hardware to upscale that little blue image to fullscreen.
I was able to tally the total milliseconds’ worth of wall clock time required to decode the 17 VP8 test vectors. As you can probably work out from this list (see the quick arithmetic after it), when I try to play a 320x240 video, things start to break down.
- Processed 29 176x144 frames in 987 milliseconds.
- Processed 49 176x144 frames in 1809 milliseconds.
- Processed 49 176x144 frames in 704 milliseconds.
- Processed 29 176x144 frames in 255 milliseconds.
- Processed 49 176x144 frames in 339 milliseconds.
- Processed 48 175x143 frames in 2446 milliseconds.
- Processed 29 176x144 frames in 432 milliseconds.
- Processed 2 1432x888 frames in 2060 milliseconds.
- Processed 49 176x144 frames in 1884 milliseconds.
- Processed 57 320x240 frames in 5792 milliseconds.
- Processed 29 176x144 frames in 989 milliseconds.
- Processed 29 176x144 frames in 740 milliseconds.
- Processed 29 176x144 frames in 839 milliseconds.
- Processed 49 175x143 frames in 2849 milliseconds.
- Processed 260 320x240 frames in 29719 milliseconds.
- Processed 29 176x144 frames in 962 milliseconds.
- Processed 29 176x144 frames in 933 milliseconds.
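A quick bit of arithmetic on two of those lines (nothing re-measured, just the numbers above) shows the cliff:

# 176x144: 29 frames in 0.987 s is about 29 fps, roughly realtime for this clip
print(29 / 0.987)     # ~29.4
# 320x240: 260 frames in 29.719 s is under 9 fps, well short of realtime
print(260 / 29.719)   # ~8.7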
-
Why can I not change the number of frames (nframes) in a gganimate animation?
26 December 2022, by Gekin
I have produced an animation with gganimate and rendered it with ffmpeg. It works just fine, but only if I do not change the number of frames. If I do set the number of frames, I get this error message:


nframes and fps adjusted to match transition
Error parsing framerate 8,4. 
Error: Rendering with ffmpeg failed



I produced the gganim object MonthlyAveragePrecipitationMap the following way:

options(scipen = 999, OutDec = ",")

MonthlyAveragePrecipitationMap = ggplot(MonthlyAverageExtremePrecipitation) +
  geom_path(data = map_data("world", "Germany"),
            aes(x = long, y = lat, group = group)) +
  coord_fixed(xlim = c(6, 15),
              ylim = c(47, 55)) +
  geom_point(aes(x = lon, y = lat,
                 colour = ShareOfExtremePrecipitationEvents,
                 group = MonthOfYear),
             size = 3) +
  scale_color_gradient(low = "blue", high = "yellow") +
  xlab("Longitude (degree)") +
  ylab("Latitude (degree)") +
  theme_bw() +
  transition_manual(frames = MonthOfYear) +
  labs(title = '{unique(MonthlyAverageExtremePrecipitation$MonthOfYear)[as.integer(frame)]}',
       color = paste0("Share of Extreme Precipitation Events \namong all Precipitation Events"))



I call the animation the following way:


animate(MonthlyAveragePrecipitationMap,
        nframes = 300,
        renderer = ffmpeg_renderer(
          format = "auto",
          ffmpeg = NULL,
          options = list(pix_fmt = "yuv420p")))




I used this exact code just a few days ago and it worked fine.


Has anyone had similar experiences?
Thanks in advance.


-
Open CV Codec FFMPEG Error fallback to use tag 0x7634706d/'mp4v'
22 May 2019, by Cohen
I am doing a filter recording and all is fine. The code runs, but at the end the video is not saved as MP4. I get this error:
OpenCV: FFMPEG: tag 0x44495658/'XVID' is not supported with codec id 12 and format 'mp4 / MP4 (MPEG-4 Part 14)'
OpenCV: FFMPEG: fallback to use tag 0x7634706d/'mp4v'
I am using a Mac and the code runs correctly, but it does not save. I tried to find more details about this error, but wasn’t so fortunate. I use Sublime as my editor. The code does run on Atom, though it gives this error:
OpenCV: FFMPEG: tag 0x44495658/'XVID' is not supported with codec id 12 and format 'mp4 / MP4 (MPEG-4 Part 14)'
OpenCV: FFMPEG: fallback to use tag 0x7634706d/'mp4v'
2018-05-28 15:04:25.274 Python[17483:2224774] AVF: AVAssetWriter status: Cannot create file....
import numpy as np
import cv2
import random
from utils import CFEVideoConf, image_resize
import glob
import math

cap = cv2.VideoCapture(0)

frames_per_seconds = 24
save_path = 'saved-media/filter.mp4'
config = CFEVideoConf(cap, filepath=save_path, res='360p')
out = cv2.VideoWriter(save_path, config.video_type, frames_per_seconds, config.dims)

def verify_alpha_channel(frame):
    try:
        frame.shape[3]  # looking for the alpha channel
    except IndexError:
        frame = cv2.cvtColor(frame, cv2.COLOR_BGR2BGRA)
    return frame

def apply_hue_saturation(frame, alpha, beta):
    hsv_image = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    h, s, v = cv2.split(hsv_image)
    s.fill(199)
    v.fill(255)
    hsv_image = cv2.merge([h, s, v])
    out = cv2.cvtColor(hsv_image, cv2.COLOR_HSV2BGR)
    frame = verify_alpha_channel(frame)
    out = verify_alpha_channel(out)
    cv2.addWeighted(out, 0.25, frame, 1.0, .23, frame)
    return frame

def apply_color_overlay(frame, intensity=0.5, blue=0, green=0, red=0):
    frame = verify_alpha_channel(frame)
    frame_h, frame_w, frame_c = frame.shape
    sepia_bgra = (blue, green, red, 1)
    overlay = np.full((frame_h, frame_w, 4), sepia_bgra, dtype='uint8')
    cv2.addWeighted(overlay, intensity, frame, 1.0, 0, frame)
    return frame

def apply_sepia(frame, intensity=0.5):
    frame = verify_alpha_channel(frame)
    frame_h, frame_w, frame_c = frame.shape
    sepia_bgra = (20, 66, 112, 1)
    overlay = np.full((frame_h, frame_w, 4), sepia_bgra, dtype='uint8')
    cv2.addWeighted(overlay, intensity, frame, 1.0, 0, frame)
    return frame

def alpha_blend(frame_1, frame_2, mask):
    alpha = mask / 255.0
    blended = cv2.convertScaleAbs(frame_1 * (1 - alpha) + frame_2 * alpha)
    return blended

def apply_circle_focus_blur(frame, intensity=0.2):
    frame = verify_alpha_channel(frame)
    frame_h, frame_w, frame_c = frame.shape
    y = int(frame_h / 2)
    x = int(frame_w / 2)
    mask = np.zeros((frame_h, frame_w, 4), dtype='uint8')
    cv2.circle(mask, (x, y), int(y / 2), (255, 255, 255), -1, cv2.LINE_AA)
    mask = cv2.GaussianBlur(mask, (21, 21), 11)
    blured = cv2.GaussianBlur(frame, (21, 21), 11)
    blended = alpha_blend(frame, blured, 255 - mask)
    frame = cv2.cvtColor(blended, cv2.COLOR_BGRA2BGR)
    return frame

def portrait_mode(frame):
    cv2.imshow('frame', frame)
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, 120, 255, cv2.THRESH_BINARY)
    mask = cv2.cvtColor(mask, cv2.COLOR_GRAY2BGRA)
    blured = cv2.GaussianBlur(frame, (21, 21), 11)
    blended = alpha_blend(frame, blured, mask)
    frame = cv2.cvtColor(blended, cv2.COLOR_BGRA2BGR)
    return frame

def apply_invert(frame):
    return cv2.bitwise_not(frame)

while True:
    # Capture frame-by-frame
    ret, frame = cap.read()
    frame = cv2.cvtColor(frame, cv2.COLOR_BGR2BGRA)
    #cv2.imshow('frame', frame)
    hue_sat = apply_hue_saturation(frame.copy(), alpha=3, beta=3)
    cv2.imshow('hue_sat', hue_sat)
    sepia = apply_sepia(frame.copy(), intensity=.8)
    cv2.imshow('sepia', sepia)
    color_overlay = apply_color_overlay(frame.copy(), intensity=.8, red=123, green=231)
    cv2.imshow('color_overlay', color_overlay)
    invert = apply_invert(frame.copy())
    cv2.imshow('invert', invert)
    blur_mask = apply_circle_focus_blur(frame.copy())
    cv2.imshow('blur_mask', blur_mask)
    portrait = portrait_mode(frame.copy())
    cv2.imshow('portrait', portrait)
    if cv2.waitKey(20) & 0xFF == ord('q'):
        break

# When everything done, release the capture
cap.release()
cv2.destroyAllWindows()
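For what it’s worth, the usual way out of this class of error is to make the FourCC agree with the container: 'mp4v' (or 'avc1') when writing .mp4, or keep 'XVID' and write an .avi instead. The "Cannot create file" line also hints that the output directory may not exist. Below is a minimal, self-contained sketch along those lines; it bypasses the CFEVideoConf helper above (whose internals are not shown) and reads the frame size from the capture instead:

import os
import cv2

# "Cannot create file" often just means the output directory is missing.
os.makedirs('saved-media', exist_ok=True)

cap = cv2.VideoCapture(0)
width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))

# 'mp4v' is a FourCC that FFmpeg accepts for the MP4 container;
# 'XVID' belongs with .avi, which is what triggers the fallback warning.
fourcc = cv2.VideoWriter_fourcc(*'mp4v')
out = cv2.VideoWriter('saved-media/filter.mp4', fourcc, 24.0, (width, height))

while True:
    ret, frame = cap.read()
    if not ret:
        break
    out.write(frame)  # frames must be BGR and match (width, height)
    cv2.imshow('recording', frame)
    if cv2.waitKey(20) & 0xFF == ord('q'):
        break

cap.release()
out.release()
cv2.destroyAllWindows()

If XVID is a hard requirement, changing save_path to end in .avi avoids the fallback entirely.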