Advanced search

Media (0)

Keyword: - Tags -/configuration

No media matching your criteria is available on this site.

Other articles (15)

  • Customizing categories

    21 June 2013

    Category creation form
    For those who know SPIP well, a category can be likened to a rubrique (section).
    For a document of type category, the fields offered by default are: Text
    This form can be modified under:
    Administration > Configuration des masques de formulaire.
    For a document of type media, the fields not displayed by default are: Descriptif rapide (short description)
    It is also in this configuration section that one can specify the (...)

  • Accepted formats

    28 January 2010

    The following commands give information about the formats and codecs handled by the local ffmpeg installation (see the encoder query example after this list):
    ffmpeg -codecs
    ffmpeg -formats
    Accepted input video formats
    This list is not exhaustive; it highlights the main formats in use: h264: H.264 / AVC / MPEG-4 AVC / MPEG-4 part 10; m4v: raw MPEG-4 video format; flv: Flash Video (FLV) / Sorenson Spark / Sorenson H.263; Theora; wmv:
    Possible output video formats
    As a first step, we (...)

  • Supporting all media types

    13 April 2011

    Unlike most software and media-sharing platforms, MediaSPIP aims to manage as many different media types as possible. The following are just a few examples from an ever-expanding list of supported formats: images: png, gif, jpg, bmp and more; audio: MP3, Ogg, Wav and more; video: AVI, MP4, OGV, mpg, mov, wmv and more; text, code and other data: OpenOffice, Microsoft Office (Word, PowerPoint, Excel), web (html, CSS), LaTeX, Google Earth and (...)
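
Following up on the "Accepted formats" entry above: beyond listing every codec and container, ffmpeg can describe a single encoder in detail. For instance (assuming the local build includes libx264):

ffmpeg -h encoder=libx264

This prints the encoder's supported pixel formats and options, which helps when deciding what conversions are needed before encoding.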

On other sites (4872)

  • How to encode input images from a camera into an H.264 stream?

    22 April 2015, by kuu

    I’m trying to encode the input images from a MacBook Pro’s built-in FaceTime HD Camera into an H.264 video stream in real time, using libx264 on Mac OS X 10.9.5.

    Below are the steps I took:

    1. Get 1280x720 32BGRA images from the camera at 15 fps using the AVFoundation API (the AVCaptureDevice class, etc.)
    2. Convert the images to 320x180 YUV420P format using libswscale.
    3. Encode the images into an H.264 video stream (baseline profile) using libx264.

    I apply the above steps each time an image arrives from the camera, on the assumption that the encoder keeps track of the encoding state and emits NAL units as they become available.

    Since I wanted to get the encoded frames out while still feeding input images to the encoder, I decided to drain it every 30 frames (2 seconds), calling x264_encoder_encode() with a NULL input picture for as long as x264_encoder_delayed_frames() reports pending frames.

    However, when I restart encoding after such a flush, the encoder stalls after a while (x264_encoder_encode() never returns). I tried changing the number of frames before flushing, but the situation didn’t change.

    Below is the relevant code. (I omitted the image-capture code, as it appears to work fine.)

    Can you point out anything I might be doing wrong?

    #include <stdint.h>
    #include <x264.h>
    #include <libswscale/swscale.h>
    #include <libavutil/imgutils.h>
    #include <libavutil/error.h>

    x264_t *encoder;
    x264_param_t param;

    // Will be called only once, before the first frame.
    int initEncoder() {
     int ret;

     if ((ret = x264_param_default_preset(&param, "medium", NULL)) < 0) {
       return ret;
     }

     param.i_csp = X264_CSP_I420;
     param.i_width  = 320;
     param.i_height = 180;
     param.b_vfr_input = 0;
     param.b_repeat_headers = 1;
     param.b_annexb = 1;

     if ((ret = x264_param_apply_profile(&param, "baseline")) < 0) {
       return ret;
     }

     encoder = x264_encoder_open(&param);
     if (!encoder) {
       return AVERROR_UNKNOWN;
     }

     return 0;
    }

    // Will be called from encodeFrame() defined below.
    int convertImage(const enum AVPixelFormat srcFmt, const int srcW, const int srcH,
                     const uint8_t *srcData,
                     const enum AVPixelFormat dstFmt, const int dstW, const int dstH,
                     x264_image_t *dstData) {
     struct SwsContext *sws_ctx;
     int ret;
     int src_linesize[4];
     uint8_t *src_data[4];

     sws_ctx = sws_getContext(srcW, srcH, srcFmt,
                          dstW, dstH, dstFmt,
                          SWS_BILINEAR, NULL, NULL, NULL);

     if (!sws_ctx) {
       return AVERROR_UNKNOWN;
     }

     if ((ret = av_image_fill_linesizes(src_linesize, srcFmt, srcW)) < 0) {
       sws_freeContext(sws_ctx);
       return ret;
     }

     if ((ret = av_image_fill_pointers(src_data, srcFmt, srcH, (uint8_t *) srcData, src_linesize)) < 0) {
       sws_freeContext(sws_ctx);
       return ret;
     }

     sws_scale(sws_ctx, (const uint8_t * const*)src_data, src_linesize, 0, srcH, dstData->plane, dstData->i_stride);
     sws_freeContext(sws_ctx);
     return 0;
    }

    // Will be called for each frame.
    int encodeFrame(const uint8_t *data, const int width, const int height) {
     int ret;
     x264_picture_t pic;
     x264_picture_t pic_out;
     x264_nal_t *nal;
     int i_nal;

     if ((ret = x264_picture_alloc(&pic, param.i_csp, param.i_width, param.i_height)) < 0) {
       return ret;
     }

     if ((ret = convertImage(AV_PIX_FMT_RGB32, width, height, data, AV_PIX_FMT_YUV420P, 320, 180, &pic.img)) < 0) {
       x264_picture_clean(&pic);
       return ret;
     }

     if ((ret = x264_encoder_encode(encoder, &nal, &i_nal, &pic, &pic_out)) < 0) {
       x264_picture_clean(&pic);
       return ret;
     }

     // A positive return value is the size of the encoded payload,
     // meaning one or more NAL units are ready.
     if (ret > 0) {
       for (int i = 0; i < i_nal; i++) {
         printNAL(nal + i);
       }
     }

     x264_picture_clean(&pic);
     return 0;
    }

    // Will be called every 30 frames.
    int flushEncoder() {
     int ret;
     x264_nal_t *nal;
     int i_nal;
     x264_picture_t pic_out;

     /* Flush delayed frames */
     while (x264_encoder_delayed_frames(encoder)) {
       if ((ret = x264_encoder_encode(encoder, &nal, &i_nal, NULL, &pic_out)) < 0) {
         return ret;
       }

       if (ret) {
         for (int j = 0; j < i_nal; j++) {
           printNAL(nal + j);
         }
       }
     }
     return 0;
    }
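
    For comparison, here is a minimal sketch of the conventional drive loop, modeled on x264's bundled example code, where delayed frames are drained only once, at end of stream, rather than periodically. read_frame() is a hypothetical stand-in for the capture and conversion steps above:

    // Sketch only: encoder, param and printNAL as defined above;
    // read_frame() is a hypothetical input step returning 0 when
    // no frames remain.
    int encodeAll(void) {
     int ret;
     x264_picture_t pic, pic_out;
     x264_nal_t *nal;
     int i_nal;

     if ((ret = x264_picture_alloc(&pic, param.i_csp, param.i_width, param.i_height)) < 0) {
       return ret;
     }

     // Feed frames one by one; NAL units come out with some delay.
     while (read_frame(&pic)) {
       if ((ret = x264_encoder_encode(encoder, &nal, &i_nal, &pic, &pic_out)) < 0) {
         x264_picture_clean(&pic);
         return ret;
       }
       for (int i = 0; i < i_nal; i++) {
         printNAL(nal + i);
       }
     }

     // End of stream: flush delayed frames exactly once.
     while (x264_encoder_delayed_frames(encoder)) {
       if ((ret = x264_encoder_encode(encoder, &nal, &i_nal, NULL, &pic_out)) < 0) {
         x264_picture_clean(&pic);
         return ret;
       }
       for (int i = 0; i < i_nal; i++) {
         printNAL(nal + i);
       }
     }

     x264_picture_clean(&pic);
     return 0;
    }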
  • Why don't Android cameras and GoPros use B-frames?

    26 August 2021, by Photo_Survey

    I am using ffmpeg to extract the GOP structure of videos that I recorded with my smartphone (Samsung Galaxy A51) and my GoPro (Hero 7 Black). The GOP structures I get all look like this: IPPPPPPPPPPPPPP. The videos from the two devices differ only in the number of P-frames per GOP. The ffprobe command I used for this is the following:

    ffprobe -show_frames inputvideo.mp4 -print_format json

    Now my question is: why don't the encoders of either device use B-frames? Is it because encoding B-frames is more demanding for the hardware, or something like that?
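
    For anyone reproducing this, the frame types can be extracted directly instead of reading the full JSON dump. A small variation on the command above, assuming the same input file:

    ffprobe -v error -select_streams v:0 -show_entries frame=pict_type -of csv inputvideo.mp4

    This prints one line per video frame (I, P or B), which makes the GOP pattern easy to see at a glance.
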
  • Live streaming from h.264 IP camera to Web browser

    21 March 2019, by ilmix

    I want to live stream video from an h.264/h.265 IP camera to a browser, with little to no delay and in decent quality (Full HD). I know there are a couple of questions like this one, but the answers seem to be either incomplete or outdated. So far I’ve tried ffmpeg and ffserver and had some success with them, but there are problems:

    When I stream to mjpg the quality isn’t great; if I use webm the quality is better, but there is a significant delay (approx. 5 seconds), probably due to transcoding from h264 to vp9. How can I improve it? Is it possible to stream h264 without transcoding it to a different format (see the copy-based sketch after the configs below)? Are there any better solutions than ffserver and ffmpeg?

    Here is the config I’ve used for mjpg:

    ffmpeg -rtsp_transport tcp -i rtsp://rtsp_user:Rtsp_pass@192.168.3.83:554/Streaming/Channels/101 -q:v 3 http://localhost:8090/feed3.ffm

    On ffserver:

    <Feed feed3.ffm>
      file /tmp/feed3.ffm
      filemaxsize 1G
      acl allow 127.0.0.1
    </Feed>

    <Stream>
       Feed feed3.ffm
       Format mpjpeg
       VideoCodec mjpeg
       VideoFrameRate 25
       VideoIntraOnly
       VideoBufferSize 8192
       VideoBitRate 8192
       VideoSize 1920x1080
       VideoQMin 5
       VideoQMax 15
       NoAudio
       Strict -1
    </Stream>

    And for webm:

    ffmpeg -rtsp_transport tcp -i rtsp://rtsp_user:Rtsp_pass@192.168.3.83:554/Streaming/Channels/101 -c:v libvpx http://127.0.0.1:8090/feed4.ffm

    On ffserver:

    <Stream>
      Feed feed4.ffm
      Format webm
      # Audio settings
      NoAudio
      # Video settings
      VideoCodec libvpx
      VideoSize 720x576          
      VideoFrameRate 25          
      AVOptionVideo qmin 10
      AVOptionVideo qmax 42
      AVOptionAudio flags +global_header
      PreRoll -1
      StartSendOnKey
      VideoBitRate 400            
    </Stream>
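
    For what it's worth, the "no transcoding" route can be tested independently of ffserver by remuxing the camera's h264 as-is, for example into HLS segments served by an ordinary web server. A minimal sketch, assuming the same camera URL and a hypothetical /var/www/hls output directory:

    ffmpeg -rtsp_transport tcp -i rtsp://rtsp_user:Rtsp_pass@192.168.3.83:554/Streaming/Channels/101 -c:v copy -an -f hls -hls_time 2 -hls_list_size 5 -hls_flags delete_segments /var/www/hls/stream.m3u8

    Since -c:v copy avoids re-encoding entirely, the transcoding delay disappears; note, however, that HLS adds its own segment-buffering latency, so this is a trade-off rather than a complete answer to the low-delay requirement.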