Other articles (6)

  • Selection of projects using MediaSPIP

    2 May 2011, by

    The examples below are representative of specific uses of MediaSPIP for particular projects.
    MediaSPIP farm @ Infini
    The non-profit organization Infini develops hospitality activities, an internet access point, training, innovative projects in the field of information and communication technologies, and website hosting. It plays a unique and prominent role in the Brest (France) area and at the national level among the half-dozen associations of its kind. Its members (...)

  • The plugin: Podcasts.

    14 July 2010, by

    The podcasting problem is once again one that reveals the issue of standardizing data transport over the Internet.
    Two interesting formats exist: the one developed by Apple, heavily geared toward iTunes, whose spec is here; and the "Media RSS Module" format, which is more "open" and is notably backed by Yahoo and the Miro software.
    File types supported in the feeds
    Apple's format only allows the following formats in its feeds: .mp3 audio/mpeg .m4a audio/x-m4a .mp4 (...)

  • Other interesting software

    13 April 2011, by

    We don’t claim to be the only ones doing what we do, and we certainly don’t claim to be the best; we simply try to do it well and to keep getting better.
    The following list includes software that does more or less what MediaSPIP does, or whose functionality MediaSPIP more or less tries to match.
    We don’t know them and we haven’t tried them, but you can take a peek.
    Videopress
    Website: http://videopress.com/
    License: GNU/GPL v2
    Source code: (...)

On other sites (3692)

  • Is it possible to merge two or more videos in real-time like this?

    25 February 2015, by Marko

    Is it possible to play a video online that's made of two or more video files?

    Since my original post wasn't clear enough, here's an expanded explanation and question.

    My site is hosted on a Linux/Apache/PHP server. I have video files in FLV/F4V format. I can also convert them to other available formats if necessary. All videos have the same aspect ratio and other parameters.

    What I want is to build (or use, if one exists) an online video player that plays a video composed of multiple video files concatenated together in real time, i.e. when the user clicks to see a video.

    For example, a visitor comes to my site and sees a video titled "Welcome" available to play. When he/she clicks to play that video, I take the video files "Opening.f4v", "Welcome.f4v" and "Ending.f4v" and join/merge/concatenate them one after another to create one continuous video on the fly.

    The resulting video looks like one video, with no visual cues, lags or even the smallest observable delay between the parts. Basically, some form of on-the-fly editing or pre-editing is done, and the user sees the result. The resulting video is not saved on the server; it's just composed and played that way in real time.

    Also, if possible, the user shouldn't have to wait for the merging to finish before seeing the resulting video; the first part of the video should start playing immediately while the merging continues in the background.

    Is this possible with Flash/ActionScript, ffmpeg, HTML5 or some other online technology? I don't need an explanation of how it's possible, just a nod that it is possible and some links to investigate further.

    Also, if one option is to use Flash, what are the alternatives for making this work when the site is visited from an iPhone/iPad?
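
    For reference, one way this kind of server-side, on-the-fly concatenation is often sketched is with ffmpeg's concat demuxer, piping the joined stream to the player instead of writing a file. The following is only an illustrative sketch using the hypothetical file names from the question; it assumes all parts share the same codec and encoding parameters, so the streams can be copied without re-encoding:

    # list.txt: one entry per part, in playback order
    file 'Opening.f4v'
    file 'Welcome.f4v'
    file 'Ending.f4v'

    # Concatenate without re-encoding and stream the result to stdout as FLV
    ffmpeg -f concat -safe 0 -i list.txt -c copy -f flv pipe:1

    Because nothing is re-encoded, the first part can start playing while the later parts are still being read.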

  • How can I combine HDR video with a still PNG in FFMPEG without messing up colors?

    2 June 2022, by Robert

    I'm trying to add a still image to the end of a video as a title-card. Normally this is simple, but the video is HDR (shot on an iPhone) and for reasons I don't understand that's causing the colors in the PNG to go nuts.

    


    Here's the original PNG: [image: original colors]

    


    Here's a screenshot from the end of the output video: [image: changed colors]

    


    Clearly something is amiss. I made the PNG image in Photoshop from a photo, and I've tried it both with and without an embedded color profile, with no noticeable difference.

    


    Here's the ffmpeg command:

    


    ffmpeg -y\
 -t 5 -i '../inputs/video.MOV'\
 -loop 1 -t 5 -i '../../common/end-youtube.png'\
 -filter_complex '
[1:v] fade=type=in:duration=1:alpha=1, setpts=PTS-STARTPTS+5/TB[end];
[0:v][end] overlay [outV]
'\
 -map [outV] '../test.mp4'


    


    I've tried this with non-HDR videos and the colors look perfectly normal, so clearly the HDR-ness is somehow involved. I've also tried adding the following flags to set the colorspace, etc., of the output. They help with the PNG (although it's not perfect), but then the video is washed out.

    


     -colorspace bt709 -color_trc bt709 -color_primaries bt709 -color_range mpeg \


    


    I don't really care if the output video is HDR or not, as long as the video and PNG parts both look visually close to the inputs. This is something I'll need to do to many videos, so if possible a solution that doesn't involve tweaking colors for each one would be ideal.
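
    One direction worth trying (an illustrative sketch only, not a command tested on this footage) is to tonemap the HDR video down to BT.709 before the overlay, so both inputs end up in the same colorspace. This assumes an ffmpeg build with the zscale (libzimg) and tonemap filters, and that the iPhone clip is BT.2020 HLG or PQ:

    ffmpeg -y\
 -t 5 -i '../inputs/video.MOV'\
 -loop 1 -t 5 -i '../../common/end-youtube.png'\
 -filter_complex '
[0:v] zscale=t=linear:npl=100, format=gbrpf32le, zscale=p=bt709,
      tonemap=tonemap=hable:desat=0,
      zscale=t=bt709:m=bt709:r=tv, format=yuv420p [sdr];
[1:v] fade=type=in:duration=1:alpha=1, setpts=PTS-STARTPTS+5/TB[end];
[sdr][end] overlay [outV]
'\
 -map [outV] -colorspace bt709 -color_trc bt709 -color_primaries bt709 '../test.mp4'

    With this approach the overlay happens entirely in SDR, so the PNG should keep its colors; the trade-off is that the output is no longer HDR.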

    


  • Render YUV video from ffmpeg in OpenGL using CVPixelBufferRef and shaders

    4 September 2012, by resident_

    I'm using the iOS 5.0 method "CVOpenGLESTextureCacheCreateTextureFromImage" to render YUV frames decoded with ffmpeg.

    I'm following the Apple sample GLCameraRipple.

    My result on the iPhone screen looks like this: [screenshot]

    I need to know what I'm doing wrong.

    Here is part of my code so the errors can be found.

    ffmpeg frame setup:

    ctx->p_sws_ctx = sws_getContext(ctx->p_video_ctx->width,
                                   ctx->p_video_ctx->height,
                                   ctx->p_video_ctx->pix_fmt,
                                   ctx->p_video_ctx->width,
                                   ctx->p_video_ctx->height,
                                   PIX_FMT_YUV420P, SWS_FAST_BILINEAR, NULL, NULL, NULL);


    // Buffer for the YUV420P frame data
    ctx->p_frame_buffer = malloc(avpicture_get_size(PIX_FMT_YUV420P,
                                                   ctx->p_video_ctx->width,
                                                   ctx->p_video_ctx->height));

    avpicture_fill((AVPicture*)ctx->p_picture_rgb, ctx->p_frame_buffer,PIX_FMT_YUV420P,
                  ctx->p_video_ctx->width,
                  ctx->p_video_ctx->height);

    My render method:

    if (NULL == videoTextureCache) {
       NSLog(@"displayPixelBuffer error");
       return;
    }    


    CVPixelBufferRef pixelBuffer;    
      CVPixelBufferCreateWithBytes(kCFAllocatorDefault, mTexW, mTexH, kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange, buffer, mFrameW * 3, NULL, 0, NULL, &pixelBuffer);
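      // Note: the bytes-per-row above is an assumption worth double-checking:
      // mFrameW * 3 corresponds to a packed 3-byte-per-pixel layout, whereas the
      // bi-planar 420YpCbCr8BiPlanarVideoRange format normally has a luma stride
      // of mFrameW, and planar buffers are usually created with
      // CVPixelBufferCreateWithPlanarBytes rather than CVPixelBufferCreateWithBytes.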



    CVReturn err;    
    // Y-plane
    glActiveTexture(GL_TEXTURE0);
    err = CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault,
                                                      videoTextureCache,
                                                      pixelBuffer,
                                                      NULL,
                                                      GL_TEXTURE_2D,
                                                      GL_RED_EXT,
                                                      mTexW,
                                                      mTexH,
                                                      GL_RED_EXT,
                                                      GL_UNSIGNED_BYTE,
                                                      0,
                                                      &_lumaTexture);
    if (err)
    {
       NSLog(@"Error at CVOpenGLESTextureCacheCreateTextureFromImage %d", err);
    }  

    glBindTexture(CVOpenGLESTextureGetTarget(_lumaTexture), CVOpenGLESTextureGetName(_lumaTexture));
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);    

    // UV-plane
    glActiveTexture(GL_TEXTURE1);
    err = CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault,
                                                      videoTextureCache,
                                                      pixelBuffer,
                                                      NULL,
                                                      GL_TEXTURE_2D,
                                                      GL_RG_EXT,
                                                      mTexW/2,
                                                      mTexH/2,
                                                      GL_RG_EXT,
                                                      GL_UNSIGNED_BYTE,
                                                      1,
                                                      &_chromaTexture);
    if (err)
    {
       NSLog(@"Error at CVOpenGLESTextureCacheCreateTextureFromImage %d", err);
    }

    glBindTexture(CVOpenGLESTextureGetTarget(_chromaTexture), CVOpenGLESTextureGetName(_chromaTexture));
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);    

    glBindFramebuffer(GL_FRAMEBUFFER, defaultFramebuffer);

    // Set the view port to the entire view
    glViewport(0, 0, backingWidth, backingHeight);

    static const GLfloat squareVertices[] = {
       1.0f, 1.0f,
       -1.0f, 1.0f,
       1.0f,  -1.0f,
       -1.0f,  -1.0f,
    };

    GLfloat textureVertices[] = {
       1, 1,
       1, 0,
       0, 1,
       0, 0,
    };

    // Draw the texture on the screen with OpenGL ES 2
    [self renderWithSquareVertices:squareVertices textureVertices:textureVertices];


    // Flush the CVOpenGLESTexture cache and release the texture
    CVOpenGLESTextureCacheFlush(videoTextureCache, 0);    
    CVPixelBufferRelease(pixelBuffer);    

    [moviePlayerDelegate bufferDone];

    The renderWithSquareVertices method:

    - (void)renderWithSquareVertices:(const GLfloat*)squareVertices textureVertices:(const GLfloat*)textureVertices
    {
    // Use shader program.
    glUseProgram(shader.program);

    // Update attribute values.
    glVertexAttribPointer(ATTRIB_VERTEX, 2, GL_FLOAT, 0, 0, squareVertices);
    glEnableVertexAttribArray(ATTRIB_VERTEX);
    glVertexAttribPointer(ATTRIB_TEXTUREPOSITON, 2, GL_FLOAT, 0, 0, textureVertices);
    glEnableVertexAttribArray(ATTRIB_TEXTUREPOSITON);

    glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);

    // Present
    glBindRenderbuffer(GL_RENDERBUFFER, colorRenderbuffer);
    [context presentRenderbuffer:GL_RENDERBUFFER];

    }

    My fragment shader:

    uniform sampler2D SamplerY;
    uniform sampler2D SamplerUV;


    varying highp vec2 _texcoord;

    void main()
    {

    mediump vec3 yuv;
    lowp vec3 rgb;

    yuv.x = texture2D(SamplerY, _texcoord).r;
    yuv.yz = texture2D(SamplerUV, _texcoord).rg - vec2(0.5, 0.5);
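    // Note: this conversion assumes full-range Y; if the decoded frames are
    // video-range (16-235), yuv.x would normally need a (Y - 16/255) offset
    // and correspondingly scaled matrix coefficients.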

    // BT.601, which is the standard for SDTV, is provided as a reference

    /* rgb = mat3(    1,       1,     1,
    0, -.34413, 1.772,
    1.402, -.71414,     0) * yuv;*/


    // Using BT.709 which is the standard for HDTV
    rgb = mat3(      1,       1,      1,
              0, -.18732, 1.8556,
              1.57481, -.46813,      0) * yuv;

      gl_FragColor = vec4(rgb, 1);

    }

    Many thanks,