
Other articles (70)

  • Websites made with MediaSPIP

    2 May 2011

    This page lists some websites based on MediaSPIP.

  • Customising by adding a logo, a banner, or a background image

    5 September 2013

    Some themes support three customisation elements: adding a logo; adding a banner; adding a background image.

  • Creating farms of unique websites

    13 April 2011

    MediaSPIP platforms can be installed as a farm, with a single "core" hosted on a dedicated server and used by multiple websites.
    This allows (among other things):
      • implementation costs to be shared between several different projects/individuals;
      • rapid deployment of multiple unique sites;
      • creation of groups of like-minded sites, making it possible to browse media in a more controlled and selective environment than the major "open" (...)

On other sites (8287)

  • Conversion from iPhone Core Surface RGB frame into ffmpeg AVFrame

    10 March 2015, by Raghu

    I am trying to convert a Core Surface RGB frame buffer (iPhone) to an ffmpeg AVFrame to encode into a movie file, but I am not getting the correct video output (the video shows dazzling colours instead of the correct picture).

    I guess there is something wrong in the conversion from the Core Surface frame buffer to the AVFrame.

    Here is my code:

    Surface *surface = [[Surface alloc]initWithCoreSurfaceBuffer:coreSurfaceBuffer];
    [surface lock];
    unsigned int height = surface.height;
    unsigned int width = surface.width;
    unsigned int alignmentedBytesPerRow = (width * 4);
    if (!readblePixels) {
       readblePixels = CGBitmapAllocateData(alignmentedBytesPerRow * height);
       NSLog(@"alloced readablepixels");
    }
    unsigned int bytesPerRow = surface.bytesPerRow;
    void *pixels = surface.baseAddress;
    for (unsigned int j = 0; j < height; j++) {
       memcpy(readblePixels + alignmentedBytesPerRow * j, pixels + bytesPerRow * j, bytesPerRow);
    }

    pFrameRGB->data[0] = readblePixels; // I guess here is what I am doing wrong.
    pFrameRGB->data[1] = NULL;
    pFrameRGB->data[2] = NULL;
    pFrameRGB->data[3] = NULL;

    pFrameRGB->linesize[0] = pCodecCtx->width;
    pFrameRGB->linesize[1] = 0;
    pFrameRGB->linesize[2] = 0;
    pFrameRGB->linesize[3] = 0;

    sws_scale (img_convert_ctx, pFrameRGB->data, pFrameRGB->linesize,
          0, pCodecCtx->height,
          pFrameYUV->data, pFrameYUV->linesize);  

    Please help me out.

    Thanks,

    Raghu
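
    For what it's worth, the likely culprit here is the linesize: for a 32-bit RGB surface, linesize[0] must be the stride in bytes (width * 4), not the pixel width, and the slice height passed to sws_scale should be the source height. A minimal sketch of the corrected calls, assuming the surface is BGRA and that img_convert_ctx is created to match (the AV_PIX_FMT_* names follow recent ffmpeg; older builds spell them PIX_FMT_BGRA / PIX_FMT_YUV420P):

    #include <libswscale/swscale.h>

    // linesize is a stride in BYTES: 4 bytes per pixel for BGRA.
    pFrameRGB->data[0]     = readblePixels;
    pFrameRGB->linesize[0] = alignmentedBytesPerRow;   // width * 4, not width

    // The scaler context must name the real source pixel format;
    // iPhone surfaces are typically BGRA rather than RGBA.
    img_convert_ctx = sws_getContext(width, height, AV_PIX_FMT_BGRA,
                                     pCodecCtx->width, pCodecCtx->height,
                                     AV_PIX_FMT_YUV420P,
                                     SWS_BILINEAR, NULL, NULL, NULL);

    // Pass the SOURCE height as the slice height.
    sws_scale(img_convert_ctx,
              (const uint8_t *const *) pFrameRGB->data, pFrameRGB->linesize,
              0, height,
              pFrameYUV->data, pFrameYUV->linesize);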

  • iPhone slow motion video transcode

    25 May 2016, by coverboy

    I'm developing a feature to upload videos (taken on an iPhone) to my server; however, I have no idea how to implement it.

    Any Objective-C or Swift source code would be welcome.

    I have 120 fps or 240 fps video (it's slo-mo). When I play these videos back on my iPhone 6, I can see the slo-mo effect.
    (I know the playback frame rate is 30 fps.)

    I want to convert that video before uploading it to my server, from 120/240 fps down to a 30 fps video. (I don't mean adjusting the playback frame rate; I mean transcoding the video to 30 fps.)
    Additionally, I want to find the start and end points of the slo-mo effect.
    (Maybe the iPhone records this information in the video binary; it might reside in the file's header.)

    Well, I guess if I use the ffmpeg library, it should be easy(?).

    So any suggestions are welcome.
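
    For what it's worth, a minimal sketch of one approach using the ffmpeg command-line tool (file names are placeholders, and this assumes the clip has already been copied off the device). Stretching the timestamps with setpts bakes the slow-motion effect into a true 30 fps stream, whereas fps=30 on its own would simply drop frames in real time:

    # Assumes a 120 fps source; for 240 fps use setpts=8.0*PTS instead.
    # setpts stretches the timestamps 4x (120 -> 30 effective fps),
    # then fps=30 resamples to a constant 30 fps.
    # -an drops the audio, which would otherwise need a matching atempo.
    ffmpeg -i slomo_120fps.mov -vf "setpts=4.0*PTS,fps=30" -an slowed_30fps.mp4

    To simply resample to 30 fps without preserving the slow motion, drop the setpts filter and keep only fps=30.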

  • Display YUV420P video using OpenGLES on iPhone

    25 December 2014, by user3487978

    I am writing an app that displays YUV420p video using the ffmpeg library on iPhone. Everything works fine, but the OpenGL colour is a little strange; see the picture below. The bottom image is displayed using an AVPicture in RGB format, the top one using an AVPicture in YUV format.
    Any ideas?

    My shader:

    varying highp vec2 varTexcoord;
    uniform sampler2D SamplerY;
    uniform sampler2D SamplerUV;
    void main (void)
    {
       mediump vec3 yuv;
       lowp vec3 rgb;
       yuv.x = texture2D(SamplerY, varTexcoord).r;
       yuv.yz = texture2D(SamplerUV, varTexcoord).rg - vec2(0.5, 0.5);
       // Using BT.709 which is the standard for HDTV
       rgb = mat3(      1,       1,      1,
                        0, -.18732, 1.8556,
                  1.57481, -.46813,      0) * yuv;
       gl_FragColor = vec4(rgb,1.0);
    }

    Renderer:

    glActiveTexture(GL_TEXTURE0);
    glTexImage2D(GL_TEXTURE_2D,
                0,
                GL_LUMINANCE,
                (GLsizei)model.width,
                (GLsizei)model.height,
                0,
                GL_LUMINANCE,
                GL_UNSIGNED_BYTE,
                [model.y bytes]);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

    glActiveTexture(GL_TEXTURE1);
    glTexImage2D(GL_TEXTURE_2D,
                0,
                GL_LUMINANCE,
                (GLsizei)model.width/2,
                (GLsizei)model.height/2,
                0,
                GL_LUMINANCE,
                GL_UNSIGNED_BYTE,
                [model.uv bytes]);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

    [Screenshot: the bottom image is rendered from the RGB AVPicture, the top one from the YUV AVPicture.]
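
    For what it's worth, one plausible cause is the chroma sampling rather than the matrix: texture2D on a GL_LUMINANCE texture yields (L, L, L, 1), so yuv.yz = ...rg reads the same value for U and V on every pixel. Interleaved chroma (NV12-style) would need a GL_LUMINANCE_ALPHA texture sampled with .ra; genuinely planar YUV420p needs U and V in two separate GL_LUMINANCE textures. A minimal sketch of the planar variant (SamplerU and SamplerV are assumed names, each uploaded at width/2 x height/2), also switching to the BT.601 video-range matrix that typically matches ffmpeg-decoded SD material:

    varying highp vec2 varTexcoord;
    uniform sampler2D SamplerY;
    uniform sampler2D SamplerU;   // hypothetical: separate U plane
    uniform sampler2D SamplerV;   // hypothetical: separate V plane
    void main (void)
    {
       mediump vec3 yuv;
       lowp vec3 rgb;
       // Video-range Y occupies [16/255, 235/255]; remove the offset.
       yuv.x = texture2D(SamplerY, varTexcoord).r - (16.0 / 255.0);
       yuv.y = texture2D(SamplerU, varTexcoord).r - 0.5;
       yuv.z = texture2D(SamplerV, varTexcoord).r - 0.5;
       // BT.601 video-range matrix (column-major).
       rgb = mat3(1.164,  1.164, 1.164,
                  0.0,   -0.392, 2.017,
                  1.596, -0.813, 0.0) * yuv;
       gl_FragColor = vec4(rgb, 1.0);
    }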