Advanced search

Media (1)

Search term: tag "ogg"

Other articles (14)

  • Permissions overridden by plugins

    27 April 2010, by

    Mediaspip core
    autoriser_auteur_modifier() so that visitors can edit their own information on the authors page

  • Encoding and processing into web-friendly formats

    13 April 2011

    MediaSPIP automatically converts uploaded files to internet-compatible formats.
    Video files are encoded as Ogv and WebM (supported by HTML5) and MP4 (supported by Flash).
    Audio files are encoded as Ogg (supported by HTML5) and MP3 (supported by Flash).
    Where possible, text is analyzed in order to retrieve the data needed by search engines, and is then exported as a series of image files.
    All uploaded files are stored online in their original format, so you can (...)

  • Supporting all media types

    13 April 2011

    Unlike most software and media-sharing platforms, MediaSPIP aims to manage as many different media types as possible. The following are just a few examples from an ever-expanding list of supported formats: images: png, gif, jpg, bmp and more; audio: MP3, Ogg, Wav and more; video: AVI, MP4, OGV, mpg, mov, wmv and more; text, code and other data: OpenOffice, Microsoft Office (Word, PowerPoint, Excel), web (html, CSS), LaTeX, Google Earth and (...)

On other sites (5073)

  • Merge audio and video RTP data into mp4 file

    4 July 2015, by Kaidul Islam

    I am receiving audio and video RTP data over a socket connection. Now I want to merge the video and audio RTP data into an MP4 file. How can I achieve this? Do I need to save the video RTP as H.264 and the audio RTP as PCMU separately, and later merge these into an MP4 file? Or is it possible to merge the audio-video RTP into an MP4 file directly?

    Thanks in advance!
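
    Not from the original thread, just a hedged sketch of the two-step route: dump the H.264 RTP payload to an Annex-B elementary stream, then remux it into MP4 with libavformat without re-encoding. The MP4 container does not take PCMU audio as-is, so the audio leg would typically be transcoded (e.g. to AAC) before muxing. The file names and the 30 fps assumption below are made up for illustration:

    #include <libavformat/avformat.h>
    #include <libavutil/mathematics.h>

    int main(void)
    {
        AVFormatContext* in = NULL;
        AVFormatContext* out = NULL;

        // Open the dumped Annex-B elementary stream (hypothetical file name).
        if (avformat_open_input(&in, "video.h264", NULL, NULL) < 0) return 1;
        if (avformat_find_stream_info(in, NULL) < 0) return 1;

        // Create the MP4 muxer and copy codec parameters over: no re-encoding.
        if (avformat_alloc_output_context2(&out, NULL, NULL, "out.mp4") < 0) return 1;
        AVStream* vs = avformat_new_stream(out, NULL);
        avcodec_parameters_copy(vs->codecpar, in->streams[0]->codecpar);
        vs->codecpar->codec_tag = 0;

        if (avio_open(&out->pb, "out.mp4", AVIO_FLAG_WRITE) < 0) return 1;
        if (avformat_write_header(out, NULL) < 0) return 1;

        // A raw elementary stream carries no timestamps, so invent them at an
        // assumed constant 30 fps (and assume no B-frame reordering); a real
        // RTP receiver would derive pts/dts from the RTP timestamps instead.
        AVPacket* pkt = av_packet_alloc();
        int64_t n = 0;
        AVRational fps = { 1, 30 };
        while (av_read_frame(in, pkt) >= 0) {
            pkt->stream_index = 0;
            pkt->pts = pkt->dts = av_rescale_q(n++, fps, vs->time_base);
            pkt->duration = av_rescale_q(1, fps, vs->time_base);
            av_interleaved_write_frame(out, pkt);
            av_packet_unref(pkt);
        }

        av_write_trailer(out);
        av_packet_free(&pkt);
        avio_closep(&out->pb);
        avformat_close_input(&in);
        avformat_free_context(out);
        return 0;
    }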

  • How to save h264 frames as jpeg images using ffmpeg?

    17 August 2020, by Matthew Czarnek

    I would like to save thumbnails, as JPEGs, from an h264 stream that I'm turning into ffmpeg AVPackets.

    I start with an h264 AVPacket (an I-frame) and decode it into an AVFrame using avcodec_send_packet/avcodec_receive_frame. Now I'm trying to go from the AVFrame back to an AVPacket using avcodec_send_frame/avcodec_receive_packet.

    I can convert to PNG instead of JPG, though the output looks like three separate frames squeezed side by side into one. I wonder whether one frame is R, the next G, and the last B; I'm not sure, but clearly I'm doing something wrong there. I figured it might be the PNG encoder, and since I don't need PNG anyway, let's get JPG working first. But JPG is outputting unopenable files.

    Any advice?

    Here is my code:

int output_thumbnails(AVPacket* video_packet)
{
    char png_file_name[max_chars_per_filename];
    char thumbnail_id_char[10];
    _itoa_s(thumbnail_id, thumbnail_id_char, 10);
    strcpy_s(png_file_name, max_chars_per_filename, time_stamped_filepath);
    strcat_s(png_file_name, max_chars_per_filename, time_stamped_filename);
    strcat_s(png_file_name, max_chars_per_filename, thumbnail_id_char);
    strcat_s(png_file_name, max_chars_per_filename, ".jpg");
    thumbnail_id++;

    int error_code = send_AVPacket_to_videocard(video_packet, av_codec_context_RTSP);
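    // In ffmpeg's send/receive API, EAGAIN from avcodec_send_packet() means
    // the decoder has output waiting: a frame must be pulled with
    // avcodec_receive_frame() before more input is accepted.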

    //if (error_code == AVERROR_EOF)
    //{
    //  //  error_code = videocard_to_PNG(png_file_name, av_codec_context_RTSP, av_codec_RTSP);
    //}
    if (error_code == AVERROR(EAGAIN)) //send packets to videocard until function returns EAGAIN
    {
        error_code = videocard_to_PNG(png_file_name, av_codec_context_RTSP);

        //EAGAIN means that the video card buffer is ready to have the png pulled off of it
        if (error_code == AVERROR_EOF)
        {
            //  error_code = videocard_to_PNG(png_file_name, av_codec_context_RTSP, av_codec_RTSP);
        }
        else if (error_code == AVERROR(EAGAIN))
        {

        }
        else
        {
            deal_with_av_errors(error_code, __LINE__, __FILE__);
        }
    }
    else
    {
        deal_with_av_errors(error_code, __LINE__, __FILE__);
    }
    
    return 0;
}

    VideoThumbnailGenerator.cpp:
#include "VideoThumbnailGenerator.h"

    bool decoder_context_created = false;
bool encoder_context_created = false;

AVCodecContext* h264_decoder_codec_ctx;
AVCodecContext* thumbnail_encoder_codec_ctx;

int send_AVPacket_to_videocard(AVPacket* packet, AVCodecContext* codec_ctx)
{
    if(!decoder_context_created)
    {
        AVCodec* h264_codec = avcodec_find_decoder(codec_ctx->codec_id);
        if (!h264_codec) {
            return -1; // check the lookup before using the codec
        }
        h264_decoder_codec_ctx = avcodec_alloc_context3(h264_codec);

        h264_decoder_codec_ctx->width = codec_ctx->width;
        h264_decoder_codec_ctx->height = codec_ctx->height;
        h264_decoder_codec_ctx->pix_fmt = AV_PIX_FMT_RGB24;
        h264_decoder_codec_ctx->codec_type = AVMEDIA_TYPE_VIDEO;
        h264_decoder_codec_ctx->skip_frame = AVDISCARD_NONINTRA;//AVDISCARD_NONREF;//AVDISCARD_NONINTRA;
        
        h264_decoder_codec_ctx->time_base.num = 1;
        h264_decoder_codec_ctx->time_base.den = 30;

        h264_decoder_codec_ctx->extradata = codec_ctx->extradata;
        h264_decoder_codec_ctx->extradata_size = codec_ctx->extradata_size;
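        // NOTE: sharing the extradata pointer between two contexts risks a
        // double free if both are ever closed; copying it with av_memdup()
        // would be safer.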
        
        int error_code = avcodec_open2(h264_decoder_codec_ctx, h264_codec, NULL);
        if (error_code < 0)
        {
            return error_code;
        }
        decoder_context_created = true;
    }

    
    //use hardware decoding to decode video frame
    int error_code = avcodec_send_packet(h264_decoder_codec_ctx, packet);
    if(error_code == AVERROR(EAGAIN))
    {
        return AVERROR(EAGAIN);
    }
    if(error_code<0)
    {
        printf("Error: Could not send packet to video card");
        return error_code;
    }

    return 0;
}

int videocard_to_PNG(char *png_file_path, AVCodecContext* codec_ctx)
{
    if (!encoder_context_created)
    {
        //AVCodec* thumbnail_codec = avcodec_find_encoder(AV_CODEC_ID_PNG);
        AVCodec* thumbnail_codec = avcodec_find_encoder(AV_CODEC_ID_JPEG2000);
        if (!thumbnail_codec) {
            return -1; // check the lookup before using the codec
        }
        thumbnail_encoder_codec_ctx = avcodec_alloc_context3(thumbnail_codec);

        thumbnail_encoder_codec_ctx->width = 128;
        thumbnail_encoder_codec_ctx->height = (int)(((float)codec_ctx->height/(float)codec_ctx->width) * 128);
        thumbnail_encoder_codec_ctx->pix_fmt = AV_PIX_FMT_RGB24; //AV_PIX_FMT_YUVJ420P
        thumbnail_encoder_codec_ctx->codec_type = AVMEDIA_TYPE_VIDEO;

        thumbnail_encoder_codec_ctx->time_base.num = 1;
        thumbnail_encoder_codec_ctx->time_base.den = 30;

        bool thread_check = thumbnail_encoder_codec_ctx->thread_type & FF_THREAD_FRAME;
        bool frame_threads_check = thumbnail_encoder_codec_ctx->codec->capabilities & AV_CODEC_CAP_FRAME_THREADS;
        
        int error_code = avcodec_open2(thumbnail_encoder_codec_ctx, thumbnail_codec, NULL);
        if (error_code < 0)
        {
            return error_code;
        }
        encoder_context_created = true;
    }
    
    AVFrame* thumbnail_frame = av_frame_alloc();
    AVPacket* thumbnail_packet = av_packet_alloc();
    //av_init_packet(png_packet);
    int error_code = avcodec_receive_frame(h264_decoder_codec_ctx, thumbnail_frame);

    //check for errors everytime
    //note EAGAIN errors won't get here since they won't get past while
    if (error_code < 0 && error_code != AVERROR(EAGAIN))
    {
        printf("Error: Could not get frame from video card");
        av_frame_free(&thumbnail_frame); // don't leak on the error path
        av_packet_free(&thumbnail_packet);
        return error_code;
    }
    //empty buffer if there are any more frames to pull (there shouldn't be)
    //while(error_code != AVERROR(EAGAIN))
    //{
    //  //check for errors everytime
    //  //note EAGAIN errors won't get here since they won't get past while
    //  if (error_code < 0)
    //  {
    //      printf("Error: Could not get frame from video card");
    //      return error_code;
    //  }
    //  
    //  error_code = avcodec_receive_frame(h264_decoder_codec_ctx, png_frame);
    //}

    //now we convert back to AVPacket, this time one holding PNG info, so we can store to file
    error_code = avcodec_send_frame(thumbnail_encoder_codec_ctx, thumbnail_frame);

    if (error_code >= 0) {
        error_code = avcodec_receive_packet(thumbnail_encoder_codec_ctx, thumbnail_packet);

        if (error_code >= 0) {
            FILE* out_PNG;

            errno_t err = fopen_s(&out_PNG, png_file_path, "wb");
            if (err == 0) {
                fwrite(thumbnail_packet->data, thumbnail_packet->size, 1, out_PNG);
                fclose(out_PNG); // only close a successfully opened file
            }
        }
    }

    av_frame_free(&thumbnail_frame); // the frame and packet were never freed
    av_packet_free(&thumbnail_packet);
    return error_code;
}
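
    A hedged note, not part of the original post: the "three frames squeezed side by side" symptom is what planar YUV looks like when it is read as packed RGB. Setting pix_fmt on the decoder context does not make the H.264 decoder emit RGB; it returns YUV planes, which need an explicit sws_scale conversion before they go to an image encoder. For JPEG files the encoder would also normally be AV_CODEC_ID_MJPEG (ffmpeg's JPEG encoder) rather than AV_CODEC_ID_JPEG2000, fed with AV_PIX_FMT_YUVJ420P. A minimal sketch under those assumptions; frame_to_jpeg is a hypothetical helper, not code from the post:

#include <stdio.h>
#include <libavcodec/avcodec.h>
#include <libswscale/swscale.h>

// Hypothetical helper: encode one decoded AVFrame to a JPEG file.
static int frame_to_jpeg(const AVFrame* frame, const char* path)
{
    const AVCodec* codec = avcodec_find_encoder(AV_CODEC_ID_MJPEG); // JPEG, not JPEG2000
    if (!codec) return -1;
    AVCodecContext* ctx = avcodec_alloc_context3(codec);
    ctx->width = 128;
    ctx->height = (frame->height * 128 / frame->width) & ~1; // keep even for 4:2:0
    ctx->pix_fmt = AV_PIX_FMT_YUVJ420P; // full-range YUV, what the MJPEG encoder expects
    ctx->time_base.num = 1;
    ctx->time_base.den = 30;
    if (avcodec_open2(ctx, codec, NULL) < 0) {
        avcodec_free_context(&ctx);
        return -1;
    }

    // The decoder hands back planar YUV, whatever pix_fmt was set on its
    // context; convert and scale it explicitly instead of reinterpreting it.
    AVFrame* yuv = av_frame_alloc();
    yuv->format = ctx->pix_fmt;
    yuv->width = ctx->width;
    yuv->height = ctx->height;
    av_frame_get_buffer(yuv, 0);
    struct SwsContext* sws = sws_getContext(frame->width, frame->height, frame->format,
                                            ctx->width, ctx->height, ctx->pix_fmt,
                                            SWS_BILINEAR, NULL, NULL, NULL);
    sws_scale(sws, (const uint8_t* const*)frame->data, frame->linesize,
              0, frame->height, yuv->data, yuv->linesize);

    AVPacket* pkt = av_packet_alloc();
    int ret = avcodec_send_frame(ctx, yuv);
    if (ret >= 0)
        ret = avcodec_receive_packet(ctx, pkt);
    if (ret >= 0) {
        FILE* f = fopen(path, "wb");
        if (f) {
            fwrite(pkt->data, 1, pkt->size, f); // the packet is a complete JPEG
            fclose(f);
        }
    }

    av_packet_free(&pkt);
    av_frame_free(&yuv);
    sws_freeContext(sws);
    avcodec_free_context(&ctx);
    return ret < 0 ? ret : 0;
}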

  • Capture Video with opencv, save to ffmpeg pipe and live stream

    9 March 2018, by Chris

    The goal is to stream an analysed live video over RTSP to a media server. To make the edits/analysis I use opencv, save the edited frames as JPEG into an FFmpeg image pipe, and use the same FFmpeg process to create an RTSP stream. Sorry if the terminology is not that accurate; I still find it quite confusing.

    I have the following code after quite some struggle:

    import cv2
    from subprocess import Popen, PIPE
    from PIL import Image

    # open pipe
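    # -f image2pipe -vcodec mjpeg -r 24: read the JPEG frames written to stdin
    # as a 24 fps stream; -vcodec h264 -f rtsp -rtsp_transport tcp: re-encode
    # to H.264 and push the result to the RTSP URL over TCP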
    p = Popen('ffmpeg -y -f image2pipe -vcodec mjpeg -r 24 -i - -vcodec h264 -f rtsp -rtsp_transport tcp rtsp://localhost:8081/test.sdp', stdin=PIPE)

    video = cv2.VideoCapture(0)
    i = 0
    while video.isOpened():
        i += 1
        ret, frame = video.read()
        if ret:
            # [...do some analysis stuff]
            im = Image.fromarray(frame)
            im.save(p.stdin, 'JPEG')

            """
            alternatively
            img_str = cv2.imencode('.jpg', frame)[1].tostring()
            p.stdin.write(img_str)
            """
        else:
            break

        print(i)
        if i == 1000:
            break


    p.stdin.close()
    p.wait()
    video.release()
    cv2.destroyAllWindows()
    print("done streaming video")

    This runs for 124 frames (i=124), then the loop hangs, and I get some output from ffmpeg that I am not sure about; it does not look like an error, though:

    push frame
    122
    push frame
    123
    push frame
    124
    Input #0, image2pipe, from 'pipe:':
     Duration: N/A, bitrate: N/A
       Stream #0:0: Video: mjpeg, yuvj420p(pc, bt470bg/unknown/unknown), 640x480 [SAR 1:1 DAR 4:3], 24 fps, 24 tbr, 24 tbn, 24 tbc
    Stream mapping:
     Stream #0:0 -> #0:0 (mjpeg (native) -> h264 (libx264))
    [libx264 @ 000002076650d980] using SAR=1/1
    [libx264 @ 000002076650d980] using cpu capabilities: MMX2 SSE2Fast SSSE3 SSE4.2 AVX
    [libx264 @ 000002076650d980] profile High, level 3.0
    [libx264 @ 000002076650d980] 264 - core 155 r2893 b00bcaf - H.264/MPEG-4 AVC codec - Copyleft 2003-2017 - http://www.videolan.org/x264.html - options: cabac=1 ref=3 deblock=1:0:0 analyse=0x3:0x113 me=hex subme=7 psy=1 psy_rd=1.00:0.00 mixed_ref=1 me_range=16 chroma_me=1 trellis=1 8x8dct=1 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_offset=-2 threads=6 lookahead_threads=1 sliced_threads=0 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=3 b_pyramid=2 b_adapt=1 b_bias=0 direct=1 weightb=1 open_gop=0 weightp=2 keyint=250 keyint_min=24 scenecut=40 intra_refresh=0 rc_lookahead=40 rc=crf mbtree=1 crf=23.0 qcomp=0.60 qpmin=0 qpmax=69 qpstep=4 ip_ratio=1.40 aq=1:1.00
    push frame

    The webcam seems to keep running, but no more frames are pushed into the pipe. It looks like some buffer fills up or something. If I write directly to a video file instead of RTSP, it works. If I open the RTSP stream simultaneously with ffplay, it also works (although with a 5 second lag).
    Does anyone have an idea where this is coming from and how to solve it?