
Other articles (77)

  • Participating in its translation

    10 April 2011

    You can help us improve the wording used in the software, or translate it into any new language so that it can reach new linguistic communities.
    To do so, we use SPIP's translation interface, where all of MediaSPIP's language modules are available. You simply need to sign up for the translators' mailing list to ask for more information.
    At present, MediaSPIP is only available in French and (...)

  • Authorizations overridden by plugins

    27 April 2010, by

    Mediaspip core
    autoriser_auteur_modifier() so that visitors are able to edit their information on the authors page

  • Supporting all media types

    13 April 2011, by

    Unlike most software and media-sharing platforms, MediaSPIP aims to manage as many different media types as possible. The following are just a few examples from an ever-expanding list of supported formats: images: png, gif, jpg, bmp and more; audio: MP3, Ogg, Wav and more; video: AVI, MP4, OGV, mpg, mov, wmv and more; text, code and other data: OpenOffice, Microsoft Office (Word, PowerPoint, Excel), web (html, CSS), LaTeX, Google Earth and (...)

On other sites (11258)

  • Confused about x264 and encoding video frames

    26 February 2015, by spartygw

    I built a test driver for encoding a series of images I have captured. I am using libx264 and based my driver on this answer:

    StackOverflow link

    In my case I start by reading in a JPG image, converting it to YUV, and passing that same frame to the x264 encoder over and over in a loop.

    My expectation was that, since the frame is the same, the output from the encoder would be very small and constant.

    Instead I find that the NAL payload varies from a few bytes to a few KB, and also depends heavily on the frame rate I specify in the encoder parameters.

    Obviously I don't understand video encoding. Why does the output size vary so much?

    // Headers added for completeness; WIDTH, HEIGHT, FPS and the Image
    // helper class are defined elsewhere in the original project.
    #include <stdint.h>
    #include <iostream>
    extern "C" {
    #include <x264.h>
    #include <libswscale/swscale.h>
    }
    using namespace std;

    int main()
    {
     Image image(WIDTH, HEIGHT);
     image.FromJpeg("frame-1.jpg");

     unsigned char *data = image.GetRGB();

     x264_param_t param;

     x264_param_default_preset(&param, "fast", "zerolatency");
     param.i_threads = 1;
     param.i_width = WIDTH;
     param.i_height = HEIGHT;
     param.i_fps_num = FPS;
     param.i_fps_den = 1;

     // Intra refresh:
     param.i_keyint_max = FPS;
     param.b_intra_refresh = 1;

     // Rate control:
     param.rc.i_rc_method = X264_RC_CRF;
     param.rc.f_rf_constant = FPS-5;
     param.rc.f_rf_constant_max = FPS+5;

     // For streaming:
     param.b_repeat_headers = 1;
     param.b_annexb = 1;

     x264_param_apply_profile(&param, "baseline");

     // initialize the encoder
     x264_t* encoder = x264_encoder_open(&param);
     x264_picture_t pic_in, pic_out;
     x264_picture_alloc(&pic_in, X264_CSP_I420, WIDTH, HEIGHT);

     // x264 expects YUV420P data; use libswscale (from ffmpeg)
     // to convert the RGB image to the right format
     struct SwsContext* convertCtx =
           sws_getContext(WIDTH, HEIGHT, PIX_FMT_RGB24, WIDTH, HEIGHT,
                          PIX_FMT_YUV420P, SWS_FAST_BILINEAR,
                          NULL, NULL, NULL);

     // encoding is then as simple as this; for each frame do:
     // data is a pointer to your RGB structure
     int srcstride = WIDTH*3; // RGB stride is just 3*width
     sws_scale(convertCtx, &data, &srcstride, 0, HEIGHT,
               pic_in.img.plane, pic_in.img.i_stride);
     x264_nal_t* nals;
     int i_nals;
     int frame_size =
           x264_encoder_encode(encoder, &nals, &i_nals, &pic_in, &pic_out);

     int max_loop = 15;
     int this_loop = 1;

     while (frame_size >= 0 && --max_loop)
     {
         cout << "------------" << this_loop++ << "-----------------\n";
         cout << "Frame size = " << frame_size << endl;
         cout << "output has " << pic_out.img.i_csp << " colorspace\n";
         cout << "output has " << pic_out.img.i_plane << " # img planes\n";

         cout << "i_nals = " << i_nals << endl;
         for (int n = 0; n < i_nals; n++)
             cout << "nal " << n << " has a " << nals[n].i_payload
                  << " byte payload\n";

         // encode the very same frame again
         frame_size = x264_encoder_encode(encoder, &nals, &i_nals,
                                          &pic_in, &pic_out);
     }

     // clean up
     sws_freeContext(convertCtx);
     x264_picture_clean(&pic_in);
     x264_encoder_close(encoder);
     return 0;
    }

  • ffmpeg QSV hardware encoder with x11grab screen capture

    11 January 2020, by Toby Eggitt

    I believe I have built ffmpeg with support for my motherboard's Intel graphics processor chip, but I have not succeeded in showing this working in any way. My goal is to use it for screen capture; the ffmpeg I built does capture the screen successfully using software encoding, but that is far too slow to be useful (it manages about 12 fps at a very modest quality).

    My main problem, I think, is that I don't know how to use these encoders; the examples I found all fail, which makes me suspect that what I've built is broken in some way. However, I also have no idea how to verify that I built it correctly, but the following are true:

    • The five components that I built to get to this all compiled without
      errors (they were libva, gmmlib, intel-media-driver, libmfx, and
      ffmpeg).
    • The output of ffmpeg -encoders includes four encoders with _qsv in
      their names, including h264_qsv.
    • Most of the commands I have tried result in output of this form:
       [h264_qsv @ 0x55ef1dc72040] Low power mode is unsupported
       [h264_qsv @ 0x55ef1dc72040] Current frame rate is unsupported
       [h264_qsv @ 0x55ef1dc72040] Current picture structure is unsupported
       [h264_qsv @ 0x55ef1dc72040] Current resolution is unsupported
       [h264_qsv @ 0x55ef1dc72040] Current pixel format is unsupported
       [h264_qsv @ 0x55ef1dc72040] some encoding parameters are not supported by the QSV runtime. Please double check the input parameters.
       Error initializing output stream 0:0 -- Error while opening encoder for output stream #0:0 - maybe incorrect parameters such as bit_rate, rate, width or height

    I have the impression this encoder might be fussy about many parameters of this sort, but I have no idea where to find out what it wants. Any suggestions at all on how to verify the build, or better yet, how to issue a command that captures the screen and encodes with the hardware, would be most welcome.
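
    A minimal command sketch for this kind of setup, assuming the QSV encoder wants NV12 input (x11grab delivers BGR frames, which would match the "Current pixel format is unsupported" message above); the display :0.0, the 1920x1080 size, the 30 fps rate and the quality value are placeholder assumptions:

     ffmpeg -f x11grab -framerate 30 -video_size 1920x1080 -i :0.0 \
            -vf format=nv12 -c:v h264_qsv -global_quality 25 output.mp4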

  • Recording live stream video from a TV card using ffmpeg on Windows [on hold]

    6 December 2013, by user2688423

    I want to record live stream video from a TV card (TV signal) every 1 second using ffmpeg on Windows.

    First of all, to record live video from the TV card, I tried the following.

    1. First I tried this:

    ffmpeg -list_devices true -f dshow -i dummy

    Then the result is:

    [dshow @ 000000000024e6fe0] DirectShow video devices
    [dshow @ 000000000024e6fe0]  "SKYTV HD USB Maxx Video Capture"
    [dshow @ 000000000024e6fe0] DirectShow audio devices
    [dshow @ 000000000024e6fe0]  "Analog Audio In(SKYTV HD USB Ma"

    So I tried:

    ffmpeg -f dshow -i video="SKYTV HD USB Maxx Video Capture" -r 20 -threads 0 D://test.mkv

    But it didn't work. The error message is:

    [dshow @ 000000000034d920] Could not run filter
    video=SKYTV HD USB Maxx Video Capture: Input/output error

    I use the device called 'SKYTV HD USB Maxx Video Capture' to get the TV signal (TV card).

    2. Since the first way didn't work, I tried a different way.

    ffmpeg -y -f vfwcap -i list

    Then the result is:

    [dshow @ 00000000003fd760] Driver 0
    [dshow @ 00000000003fd760] Microsoft WDM Image Capture (Win32)
    [dshow @ 00000000003fd760] Version: 6.1.7601.17514
    list: Input/output error

    So I tried:

    ffmpeg -y -f vfwcap -r 25 -i 0 D://out.mp4

    Then there is an out.mp4 file on the D drive, but the file contains nothing
    (I don't think it is the TV signal).

    What should I do to record live video every 1 second from the TV card (TV signal) using ffmpeg on Windows? And how can I select the channel on the TV card (since I want the TV signal, and there are many channels)?

    Please help..!
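
    If "every 1 second" means one file per second, a minimal command sketch is to let ffmpeg's segment muxer cut the recording into one-second files; the device name comes from the listing above, while the codec, preset and output path are placeholder assumptions:

     ffmpeg -f dshow -i video="SKYTV HD USB Maxx Video Capture" -r 25 \
            -c:v libx264 -preset veryfast -f segment -segment_time 1 D:/out%03d.mkv

    For choosing a channel on an analog tuner, newer ffmpeg builds expose dshow input options such as -show_analog_tv_tuner_dialog true (placed before -i), which opens the card's own tuner dialog; whether the SKYTV driver supports this is an assumption.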