Advanced search

Media (9)

Keyword: - Tags - /soundtrack

Other articles (39)

  • Customising the categories

    21 June 2013

    Category creation form
    For those who know SPIP well, a category can be thought of as a section (rubrique).
    For a document of the category type, the fields offered by default are: Texte
    This form can be modified under:
    Administration > Configuration des masques de formulaire.
    For a document of the media type, the fields not displayed by default are: Descriptif rapide
    It is also in this configuration section that you can specify the (...)

  • Publishing on MédiaSpip

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, provided your MediaSPIP installation is at version 0.2 or higher. If in doubt, contact the administrator of your MédiaSpip site to find out.

  • HTML5 audio and video support

    13 April 2011

    MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
    The MediaSPIP player used has been created specifically for MediaSPIP and can be easily adapted to fit in with a specific theme.
    For older browsers the Flowplayer flash fallback is used.
    MediaSPIP allows for media playback on major mobile platforms with the above (...)

On other sites (9872)

  • FFMPEG segment creates black frame in the beginning - and key_frame looks offset [closed]

    30 September 2022, by jackyroo

    I'm trying to split a video using the ffmpeg segment muxer:

    


    ffmpeg -r 60 -accurate_seek -i input.mov -map 0:a? -map 0:v? -codec copy -reset_timestamps 1 -map_metadata 0 -avoid_negative_ts 1 -f segment output_%%03d.mov -y -loglevel debug

    


    The problem with this approach is that the first split segment has 2 black frames at the beginning, while the subsequent ones are fine.

    


    I cannot figure out why, and I'm genuinely lost in this process.

    


    I've also tried specifying -force_key_frames but it makes no difference at all.
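
    For reference (this is not from the original post), -force_key_frames is an encoder option, so it only has an effect when the video is re-encoded; it is ignored with -codec copy. A typical invocation, with purely illustrative values, looks like this:

    ffmpeg -i input.mov -c:v libx264 -force_key_frames "expr:gte(t,n_forced*2)" -f segment -segment_time 2 -reset_timestamps 1 output_%03d.mov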

    


    Then I found something interesting by comparing the keyframes of the original video and the first split chunk, using this command:

    


    ffprobe -select_streams v -skip_frame nokey -show_frames -v quiet input.mov

    


    Original Video Keyframes
The first keyframe of the original video looks fine:

    


    [FRAME]
media_type=video
stream_index=0
key_frame=1
pts=0
pts_time=0.000000
pkt_dts=127488
pkt_dts_time=8.300000
best_effort_timestamp=0
best_effort_timestamp_time=0.000000
pkt_duration=256
pkt_duration_time=0.016667
duration=256
duration_time=0.016667
pkt_pos=40
pkt_size=128838
width=1420
height=2000
pix_fmt=yuv420p
sample_aspect_ratio=N/A
pict_type=I
coded_picture_number=0
display_picture_number=0
interlaced_frame=0
top_field_first=0
repeat_pict=0
color_range=unknown
color_space=unknown
color_primaries=unknown
color_transfer=unknown
chroma_location=left
[SIDE_DATA]
side_data_type=H.26[45] User Data Unregistered SEI message
[/SIDE_DATA]
[/FRAME]


    


    Split Video - First Chunk Keyframes

    


    On the other hand, it seems like the first keyframe of the chunk (the one with the two added black frames at the beginning) gets shifted!

    


    [FRAME]
media_type=video
stream_index=1
key_frame=1
pts=507
pts_time=0.033008
pkt_dts=N/A
pkt_dts_time=N/A
best_effort_timestamp=507
best_effort_timestamp_time=0.033008
pkt_duration=256
pkt_duration_time=0.016667
duration=256
duration_time=0.016667
pkt_pos=40
pkt_size=128838
width=1420
height=2000
pix_fmt=yuv420p
sample_aspect_ratio=N/A
pict_type=I
coded_picture_number=0
display_picture_number=0
interlaced_frame=0
top_field_first=0
repeat_pict=0
color_range=unknown
color_space=unknown
color_primaries=unknown
color_transfer=unknown
chroma_location=left
[SIDE_DATA]
side_data_type=H.26[45] User Data Unregistered SEI message
[/SIDE_DATA]
[/FRAME]


    


    Might this be the cause of the two black frames at the beginning?
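
    One way to check (a suggestion, not part of the original post) is to dump the first few frames of the chunk, non-key frames included, and see whether the black frames carry the timestamps that precede the shifted keyframe. ffprobe can limit the dump with -read_intervals; assuming the first segment is named output_000.mov per the pattern above:

    ffprobe -select_streams v -show_frames -read_intervals "%+#5" -v quiet output_000.mov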

    


    The following is the first (and only) keyframe of the second chunk, and it looks fine:

    


    [FRAME]
media_type=video
stream_index=1
key_frame=1
pts=0
pts_time=0.000000
pkt_dts=N/A
pkt_dts_time=N/A
best_effort_timestamp=0
best_effort_timestamp_time=0.000000
pkt_duration=256
pkt_duration_time=0.016667
duration=256
duration_time=0.016667
pkt_pos=40
pkt_size=158598
width=1420
height=2000
pix_fmt=yuv420p
sample_aspect_ratio=N/A
pict_type=I
coded_picture_number=0
display_picture_number=0
interlaced_frame=0
top_field_first=0
repeat_pict=0
color_range=unknown
color_space=unknown
color_primaries=unknown
color_transfer=unknown
chroma_location=left
[/FRAME]


    


    Please help me shed some light on this...

    


    Thanks!

    


  • Convert Black to Transparency in FFMPEG

    12 February 2019, by Kapitano

    I have a sequence of BMP images, which I'm concatenating into an MPNG video file with this FFmpeg command:

    ffmpeg -f image2 -framerate 12 -i "Frame_%d.bmp" -c:v png "Video.avi"

    I want to replace all black pixels with transparency in the alpha channel, so the video can be laid over other videos in an editor. The following command doesn't work:

    ffmpeg -f image2 -framerate 12 -i "Frame_%d.bmp" -filter_complex "[0:v]colorkey=0x000000:0.01:0.0[BlackToTransparancy]" -map [BlackToTransparancy] -c:v png "Video.avi"
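
    For reference (not part of the original post): the colorkey filter takes color:similarity:blend, and the transparency only survives if the filter-graph output uses a pixel format with an alpha channel. A sketch with a looser similarity threshold and an explicit rgba format, using purely illustrative values, would be:

    ffmpeg -f image2 -framerate 12 -i "Frame_%d.bmp" -filter_complex "[0:v]colorkey=0x000000:0.1:0.0,format=rgba[keyed]" -map "[keyed]" -c:v png "Video.avi"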

    As requested, here's the full log:

    ffmpeg version N-93020-g3224d6691c Copyright (c) 2000-2019 the FFmpeg developers
     built with gcc 8.2.1 (GCC) 20181201
     configuration: --enable-gpl --enable-version3 --enable-sdl2 --enable-fontconfig --enable-gnutls --enable-iconv --enable-libass --enable-libbluray --enable-libfreetype --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-libopus --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libtheora --enable-libtwolame --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxml2 --enable-libzimg --enable-lzma --enable-zlib --enable-gmp --enable-libvidstab --enable-libvorbis --enable-libvo-amrwbenc --enable-libmysofa --enable-libspeex --enable-libxvid --enable-libaom --enable-libmfx --enable-amf --enable-ffnvcodec --enable-cuvid --enable-d3d11va --enable-nvenc --enable-nvdec --enable-dxva2 --enable-avisynth --enable-libopenmpt
     libavutil      56. 26.100 / 56. 26.100
     libavcodec     58. 44.100 / 58. 44.100
     libavformat    58. 26.100 / 58. 26.100
     libavdevice    58.  6.101 / 58.  6.101
     libavfilter     7. 48.100 /  7. 48.100
     libswscale      5.  4.100 /  5.  4.100
     libswresample   3.  4.100 /  3.  4.100
     libpostproc    55.  4.100 / 55.  4.100
    Input #0, image2, from 'D:\ReadOut_Process\Test\Test__Frame_%d.bmp':
     Duration: 00:00:16.83, start: 0.000000, bitrate: N/A
       Stream #0:0: Video: bmp, bgra, 1280x720, 12 tbr, 12 tbn, 12 tbc
    Stream mapping:
     Stream #0:0 (bmp) -> colorkey
     colorkey -> Stream #0:0 (png)
    Press [q] to stop, [?] for help
    Output #0, avi, to 'D:\ReadOut_Process\Test\Test__Scribe.avi':
     Metadata:
       ISFT            : Lavf58.26.100
       Stream #0:0: Video: png (MPNG / 0x474E504D), rgba, 1280x720, q=2-31, 200 kb/s, 12 fps, 12 tbn, 12 tbc
       Metadata:
         encoder         : Lavc58.44.100 png
    frame=   34 fps=0.0 q=-0.0 size=     774kB time=00:00:02.16 bitrate=2924.7kbits/s speed=4.21x    
    frame=   65 fps= 64 q=-0.0 size=    1798kB time=00:00:04.75 bitrate=3100.1kbits/s speed=4.66x    
    frame=   95 fps= 62 q=-0.0 size=    3078kB time=00:00:07.25 bitrate=3477.4kbits/s speed=4.76x    
    frame=  124 fps= 61 q=-0.0 size=    4358kB time=00:00:09.66 bitrate=3692.8kbits/s speed=4.76x    
    frame=  155 fps= 61 q=-0.0 size=    6150kB time=00:00:12.25 bitrate=4112.4kbits/s speed=4.83x    
    frame=  185 fps= 61 q=-0.0 size=    7686kB time=00:00:14.75 bitrate=4268.5kbits/s speed=4.84x    
    frame=  202 fps= 58 q=-0.0 Lsize=    9187kB time=00:00:16.83 bitrate=4470.8kbits/s speed=4.83x    
    video:9176kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 0.112806%

    The BMPs are too large to upload, but here’s a link to a section of a typical one : https://pasteboard.co/I0RzfjI.bmp

    Can it be done?

  • why ffmpeg frame to opengles texture is black

    5 June 2012, by joe

    I'm trying to convert video frames decoded with ffmpeg into an OpenGL ES texture in JNI, but I just get a black texture. I've checked for OpenGL errors with glGetError(), and no error is reported.
    Here is my code:

    void *pixels;
    int err;
    int i;
    int frameFinished = 0;
    AVPacket packet;
    static struct SwsContext *img_convert_ctx;
    static struct SwsContext *scale_context = NULL;
    int64_t seek_target;

    int target_width = 320;
    int target_height = 240;
    GLenum error = GL_NO_ERROR;
    sws_freeContext(img_convert_ctx);  

    i = 0;
    while((i==0) && (av_read_frame(pFormatCtx, &packet)>=0)) {
       if(packet.stream_index==videoStream) {
           avcodec_decode_video2(pCodecCtx, pFrame, &frameFinished, &packet);

           if(frameFinished) {
               LOGI("packet pts %llu", packet.pts);
               img_convert_ctx = sws_getContext(pCodecCtx->width, pCodecCtx->height,
                      pCodecCtx->pix_fmt,
                      target_width, target_height, PIX_FMT_RGB24, SWS_BICUBIC,
                      NULL, NULL, NULL);
               if(img_convert_ctx == NULL) {
                   LOGE("could not initialize conversion context\n");
                   return;
               }
               sws_scale(img_convert_ctx, (const uint8_t* const*)pFrame->data, pFrame->linesize, 0, pCodecCtx->height, pFrameRGB->data, pFrameRGB->linesize);
               LOGI("sws_scale");

               videoTextures = new Texture*[1];
               videoTextures[0]->mWidth = 256; //(unsigned)pCodecCtx->width;
               videoTextures[0]->mHeight = 256; //(unsigned)pCodecCtx->height;
               videoTextures[0]->mData = pFrameRGB->data[0];

               glPixelStorei(GL_UNPACK_ALIGNMENT, 1);

               glGenTextures(1, &(videoTextures[0]->mTextureID));
               glBindTexture(GL_TEXTURE_2D, videoTextures[0]->mTextureID);
               glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
               glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

               if(0 == got_texture)
               {
                   glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, videoTextures[0]->mWidth, videoTextures[0]->mHeight, 0, GL_RGBA, GL_UNSIGNED_BYTE, (GLvoid *)videoTextures[0]->mData);

                   glTexSubImage2D(GL_TEXTURE_2D, 0, 0,0, videoTextures[0]->mWidth, videoTextures[0]->mHeight, GL_RGBA, GL_UNSIGNED_BYTE, (GLvoid *)videoTextures[0]->mData);
               }else
               {
                   glTexSubImage2D(GL_TEXTURE_2D, 0, 0,0, videoTextures[0]->mWidth, videoTextures[0]->mHeight, GL_RGBA, GL_UNSIGNED_BYTE, (GLvoid *)videoTextures[0]->mData);

               }

               i = 1;
               error = glGetError();
               if( error != GL_NO_ERROR ) {
                   LOGE("couldn't create texture!!");
                      switch (error) {
                       case GL_INVALID_ENUM:
                       LOGE("GL Error: Enum argument is out of range");
                       break;
                       case GL_INVALID_VALUE:
                           LOGE("GL Error: Numeric value is out of range");
                       break;
                       case GL_INVALID_OPERATION:
                           LOGE("GL Error: Operation illegal in current state");
                       break;
                       case GL_OUT_OF_MEMORY:
                           LOGE("GL Error: Not enough memory to execute command");
                       break;
                       default:
                           break;
                      }
               }
           }
       }
       av_free_packet(&packet);
    }

    I have managed to convert pFrameRGB to a Java bitmap, but I just want to turn it into a texture in the C code.
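
    For comparison, here is a minimal sketch (not the code from the question) of one way to keep the swscale output format, the buffer size, and the glTexImage2D parameters consistent with each other, using the same FFmpeg-era PIX_FMT_* constants as above; the helper name and dimensions are only illustrative:

    /* Hypothetical helper: the caller passes the codec context and a decoded
       AVFrame; the frame is converted to RGBA and uploaded as a GLES texture
       whose size and format match the converted buffer. */
    #include <stdlib.h>
    #include <libavcodec/avcodec.h>
    #include <libswscale/swscale.h>
    #include <GLES2/gl2.h>

    static GLuint upload_frame_as_texture(AVCodecContext *ctx, AVFrame *frame,
                                          int tex_w, int tex_h)
    {
        /* Convert to RGBA so the buffer layout matches GL_RGBA below. */
        struct SwsContext *sws = sws_getContext(ctx->width, ctx->height, ctx->pix_fmt,
                                                tex_w, tex_h, PIX_FMT_RGBA,
                                                SWS_BICUBIC, NULL, NULL, NULL);
        uint8_t *rgba = malloc(tex_w * tex_h * 4);
        uint8_t *dst_data[4]  = { rgba, NULL, NULL, NULL };
        int      dst_lines[4] = { tex_w * 4, 0, 0, 0 };
        sws_scale(sws, (const uint8_t * const *)frame->data, frame->linesize,
                  0, ctx->height, dst_data, dst_lines);
        sws_freeContext(sws);

        GLuint tex;
        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_2D, tex);
        glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        /* ES 2.0 needs clamp-to-edge wrapping for non-power-of-two textures. */
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
        /* The width/height here are the same tex_w/tex_h given to sws_scale,
           and GL_RGBA matches the 4-byte pixels produced above. */
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, tex_w, tex_h, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, rgba);
        free(rgba);
        return tex;
    }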