
Media (91)
-
Spitfire Parade - Crisis
15 May 2011
Updated: September 2011
Language: English
Type: Audio
-
Wired NextMusic
14 May 2011
Updated: February 2012
Language: English
Type: Video
-
Video d’abeille en portrait
14 May 2011
Updated: February 2012
Language: French
Type: Video
-
Sintel MP4 Surround 5.1 Full
13 May 2011
Updated: February 2012
Language: English
Type: Video
-
Carte de Schillerkiez
13 May 2011
Updated: September 2011
Language: English
Type: Text
-
Publier une image simplement
13 April 2011
Updated: February 2012
Language: French
Type: Video
Other articles (74)
-
HTML5 audio and video support
13 April 2011
MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
The MediaSPIP player was created specifically for MediaSPIP and can easily be adapted to fit a specific theme.
For older browsers, the Flowplayer Flash fallback is used.
MediaSPIP allows for media playback on major mobile platforms with the above (...)
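As a rough illustration of this feature-detection-plus-fallback pattern (this is not the actual MediaSPIP player code; the function name and Flash markup are invented), in TypeScript:

// Sketch only: use the HTML5 video element when the browser can decode MP4,
// otherwise fall back to a Flash-based player such as Flowplayer.
function attachPlayer(container: HTMLElement, src: string): void {
  const video = document.createElement('video')
  if (video.canPlayType('video/mp4') !== '') {
    video.src = src
    video.controls = true
    container.appendChild(video)
  } else {
    // hypothetical Flash embed, standing in for the Flowplayer fallback
    container.innerHTML =
      '<object type="application/x-shockwave-flash" data="flowplayer.swf">' +
      '<param name="flashvars" value="config={\'clip\':\'' + src + '\'}"></object>'
  }
}
-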
Support audio et vidéo HTML5
10 April 2011
MediaSPIP uses the HTML5 video and audio tags to play multimedia documents, taking advantage of the latest W3C innovations supported by modern browsers.
For older browsers, the Flowplayer Flash player is used instead.
The HTML5 player was created specifically for MediaSPIP: its appearance is fully customisable to match a chosen theme.
These technologies make it possible to deliver video and sound both to conventional computers (...)
-
De l’upload à la vidéo finale [version standalone]
31 January 2010
The path of an audio or video document through SPIPMotion is divided into three distinct stages.
Upload and retrieval of information about the source video
First of all, a SPIP article must be created and the "source" video document attached to it.
When this document is attached to the article, two actions are executed on top of the normal behaviour: the technical information of the file's audio and video streams is retrieved, and a thumbnail is generated by extracting a (...)
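As a loose illustration of those two extra actions (this is not SPIPMotion's actual code; fluent-ffmpeg and the paths are stand-ins), in TypeScript:

import ffmpeg from 'fluent-ffmpeg'

// Sketch only: probe the attached "source" video for its audio/video stream
// info, then extract a single frame to use as a thumbnail.
function probeAndThumbnail(sourcePath: string, thumbDir: string): void {
  ffmpeg.ffprobe(sourcePath, (err, metadata) => {
    if (err) throw err
    // codec, dimensions, duration, bitrate, ... of each stream
    console.log(metadata.streams)
  })

  ffmpeg(sourcePath).screenshots({
    count: 1, // one extracted frame
    folder: thumbDir,
    filename: 'thumb.png',
  })
}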
On other sites (8274)
-
Why can't an mp4 be shared on Facebook or Instagram on Android?
8 January 2023, by János
This is the file: https://t44-post-cover.s3.eu-central-1.amazonaws.com/2e3s.mp4


When I download it and try to share it, it fails. Why? These are the parameters it was rendered with. Downloading and sharing works from an iPhone to Facebook Messenger or Instagram, but not from an Android phone.

What parameters does ffmpeg need for the mp4 to be shareable from Android to Facebook / Instagram?

await new Promise<void>((resolve, reject) => {
  ffmpeg()
    .setFfmpegPath(pathToFfmpeg)
    .input(`/tmp/input_${imgId}.gif`)
    .inputFormat('gif')
    .inputFPS(30)
    .input('anullsrc')
    .inputFormat('lavfi')
    .audioCodec('aac')
    .audioChannels(2)
    .audioFilters(['apad'])
    .videoCodec('libx264')
    .videoBitrate(1000)
    .videoFilters([
      'tpad=stop_mode=clone:stop_duration=2',
      'scale=trunc(iw/2)*2:trunc(ih/2)*2:flags=lanczos,split[s0][s1];[s0]palettegen[p];[s1][p]paletteuse',
    ])
    .fps(30)
    .duration(5)
    .format('mp4')
    .outputOptions([
      '-pix_fmt yuvj420p',
      '-profile:v baseline',
      '-level 3.0',
      '-crf 17',
      '-movflags +faststart',
      '-movflags frag_keyframe+empty_moov',
    ])
    .output(`/tmp/output_${imgId}.mp4`)
    .on('end', () => {
      console.log('MP4 video generated')
      resolve()
    })
    .on('error', (e) => {
      console.log(e)
      reject()
    })
    .run()
})

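One plausible explanation, offered as a guess rather than a confirmed answer: the second -movflags option overrides the first, so the output is a fragmented MP4 with no regular moov atom, which many Android apps refuse to handle; the full-range yuvj420p pixel format is also less widely supported than plain yuv420p. A minimal sketch of output settings that Android share targets generally accept (the ffmpeg-static import and the paths are illustrative):

import ffmpeg from 'fluent-ffmpeg'
import pathToFfmpeg from 'ffmpeg-static'

// Sketch only: the same x264 settings as above, but written as a single
// non-fragmented, faststart MP4 in limited-range yuv420p.
ffmpeg()
  .setFfmpegPath(pathToFfmpeg as string)
  .input('/tmp/input.gif') // illustrative path
  .videoCodec('libx264')
  .outputOptions([
    '-pix_fmt yuv420p',     // not yuvj420p: full range trips up some players
    '-profile:v baseline',
    '-level 3.0',
    '-crf 17',
    '-movflags +faststart', // keep only this; drop frag_keyframe+empty_moov
  ])
  .output('/tmp/output.mp4')
  .run()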

-
Render YUV video from ffmpeg in OpenGL using CVPixelBufferRef and shaders
4 September 2012, by resident_
I'm using the iOS 5.0 method "CVOpenGLESTextureCacheCreateTextureFromImage" to render YUV frames from ffmpeg, following Apple's GLCameraRipple example.
My result on the iPhone screen is this: [screenshot]
I need to know what I'm doing wrong.
Here is part of my code, to help find the errors.
ffmpeg frame configuration:
ctx->p_sws_ctx = sws_getContext(ctx->p_video_ctx->width,
                                ctx->p_video_ctx->height,
                                ctx->p_video_ctx->pix_fmt,
                                ctx->p_video_ctx->width,
                                ctx->p_video_ctx->height,
                                PIX_FMT_YUV420P, SWS_FAST_BILINEAR, NULL, NULL, NULL);

// Buffer for the converted YUV420P frames
ctx->p_frame_buffer = malloc(avpicture_get_size(PIX_FMT_YUV420P,
                                                ctx->p_video_ctx->width,
                                                ctx->p_video_ctx->height));

avpicture_fill((AVPicture*)ctx->p_picture_rgb, ctx->p_frame_buffer, PIX_FMT_YUV420P,
               ctx->p_video_ctx->width,
               ctx->p_video_ctx->height);

My render method:
if (NULL == videoTextureCache) {
    NSLog(@"displayPixelBuffer error");
    return;
}

CVPixelBufferRef pixelBuffer;
CVPixelBufferCreateWithBytes(kCFAllocatorDefault, mTexW, mTexH,
                             kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange,
                             buffer, mFrameW * 3, NULL, 0, NULL, &pixelBuffer);

CVReturn err;

// Y-plane
glActiveTexture(GL_TEXTURE0);
err = CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault,
                                                   videoTextureCache,
                                                   pixelBuffer,
                                                   NULL,
                                                   GL_TEXTURE_2D,
                                                   GL_RED_EXT,
                                                   mTexW,
                                                   mTexH,
                                                   GL_RED_EXT,
                                                   GL_UNSIGNED_BYTE,
                                                   0,
                                                   &_lumaTexture);
if (err)
{
    NSLog(@"Error at CVOpenGLESTextureCacheCreateTextureFromImage %d", err);
}

glBindTexture(CVOpenGLESTextureGetTarget(_lumaTexture), CVOpenGLESTextureGetName(_lumaTexture));
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

// UV-plane
glActiveTexture(GL_TEXTURE1);
err = CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault,
                                                   videoTextureCache,
                                                   pixelBuffer,
                                                   NULL,
                                                   GL_TEXTURE_2D,
                                                   GL_RG_EXT,
                                                   mTexW/2,
                                                   mTexH/2,
                                                   GL_RG_EXT,
                                                   GL_UNSIGNED_BYTE,
                                                   1,
                                                   &_chromaTexture);
if (err)
{
    NSLog(@"Error at CVOpenGLESTextureCacheCreateTextureFromImage %d", err);
}

glBindTexture(CVOpenGLESTextureGetTarget(_chromaTexture), CVOpenGLESTextureGetName(_chromaTexture));
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

glBindFramebuffer(GL_FRAMEBUFFER, defaultFramebuffer);

// Set the view port to the entire view
glViewport(0, 0, backingWidth, backingHeight);

static const GLfloat squareVertices[] = {
     1.0f,  1.0f,
    -1.0f,  1.0f,
     1.0f, -1.0f,
    -1.0f, -1.0f,
};

GLfloat textureVertices[] = {
    1, 1,
    1, 0,
    0, 1,
    0, 0,
};

// Draw the texture on the screen with OpenGL ES 2
[self renderWithSquareVertices:squareVertices textureVertices:textureVertices];

// Flush the CVOpenGLESTexture cache and release the texture
CVOpenGLESTextureCacheFlush(videoTextureCache, 0);
CVPixelBufferRelease(pixelBuffer);
[moviePlayerDelegate bufferDone];

My renderWithSquareVertices method:
- (void)renderWithSquareVertices:(const GLfloat*)squareVertices textureVertices:(const GLfloat*)textureVertices
{
    // Use shader program.
    glUseProgram(shader.program);

    // Update attribute values.
    glVertexAttribPointer(ATTRIB_VERTEX, 2, GL_FLOAT, 0, 0, squareVertices);
    glEnableVertexAttribArray(ATTRIB_VERTEX);
    glVertexAttribPointer(ATTRIB_TEXTUREPOSITON, 2, GL_FLOAT, 0, 0, textureVertices);
    glEnableVertexAttribArray(ATTRIB_TEXTUREPOSITON);

    glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);

    // Present
    glBindRenderbuffer(GL_RENDERBUFFER, colorRenderbuffer);
    [context presentRenderbuffer:GL_RENDERBUFFER];
}
My fragment shader:
uniform sampler2D SamplerY;
uniform sampler2D SamplerUV;

varying highp vec2 _texcoord;

void main()
{
    mediump vec3 yuv;
    lowp vec3 rgb;

    yuv.x = texture2D(SamplerY, _texcoord).r;
    yuv.yz = texture2D(SamplerUV, _texcoord).rg - vec2(0.5, 0.5);

    // BT.601, the standard for SDTV, kept as a reference:
    /* rgb = mat3(      1,       1,      1,
                        0, -.34413,  1.772,
                    1.402, -.71414,      0) * yuv; */

    // Using BT.709, the standard for HDTV:
    rgb = mat3(      1,       1,      1,
                     0, -.18732, 1.8556,
               1.57481, -.46813,      0) * yuv;

    gl_FragColor = vec4(rgb, 1);
}

Many thanks.
-
How can I combine HDR video with a still PNG in FFmpeg without messing up colors?
2 June 2022, by Robert
I'm trying to add a still image to the end of a video as a title card. Normally this is simple, but the video is HDR (shot on an iPhone), and for reasons I don't understand that causes the colors in the PNG to go nuts.


Here's the original PNG: [image]

Here's a screenshot from the end of the output video: [image]

Clearly something is amiss. I made the PNG image in Photoshop from a photo, and I've tried it with and without embedding the color profile without any noticeable difference.


Here's the command for ffmpeg:


ffmpeg -y\
 -t 5 -i '../inputs/video.MOV'\
 -loop 1 -t 5 -i '../../common/end-youtube.png'\
 -filter_complex '
[1:v] fade=type=in:duration=1:alpha=1, setpts=PTS-STARTPTS+5/TB[end];
[0:v][end] overlay [outV]
'\
 -map [outV] '../test.mp4'



I've tried this with non-HDR videos and the colors look perfectly normal, so clearly the HDR is somehow involved. I've also tried adding the following options to set the colorspace of the output. They help with the PNG (although it's not perfect), but then the video is washed out.


-colorspace bt709 -color_trc bt709 -color_primaries bt709 -color_range mpeg \



I don't really care if the output video is HDR or not, as long as the video and PNG parts both look visually close to the inputs. This is something I'll need to do to many videos, so if possible a solution that doesn't involve tweaking colors for each one would be ideal.
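One approach worth trying, offered as a sketch rather than a tested command: tonemap the HDR track down to BT.709 SDR before the overlay, so the PNG and the video share one colorspace. This assumes an ffmpeg build with the zscale (zimg) filter, and is expressed with fluent-ffmpeg in TypeScript to match the earlier snippet; the paths are the ones from the question:

import ffmpeg from 'fluent-ffmpeg'

// Sketch only: linearize the HDR input, tonemap it (hable), tag the result
// as BT.709 SDR, then run the original fade + overlay on the SDR frames.
ffmpeg()
  .input('../inputs/video.MOV')
  .inputOptions(['-t 5'])
  .input('../../common/end-youtube.png')
  .inputOptions(['-loop 1', '-t 5'])
  .complexFilter(
    [
      '[0:v]zscale=t=linear:npl=100,format=gbrpf32le,zscale=p=bt709,' +
        'tonemap=tonemap=hable:desat=0,zscale=t=bt709:m=bt709:r=tv,format=yuv420p[sdr]',
      '[1:v]fade=type=in:duration=1:alpha=1,setpts=PTS-STARTPTS+5/TB[end]',
      '[sdr][end]overlay[outV]',
    ],
    'outV', // maps [outV] to the output, as -map [outV] did
  )
  .output('../test.mp4')
  .run()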