
Other articles (108)

  • MediaSPIP 0.1 Beta version

    25 April 2011, by

    MediaSPIP 0.1 beta is the first version of MediaSPIP declared "usable".
    The zip file provided here only contains the sources of MediaSPIP in its standalone version.
    To get a working installation, you must manually install all software dependencies on the server.
    If you want to use this archive for an installation in "farm mode", you will also need to carry out other manual (...)

  • MediaSPIP version 0.1 Beta

    16 April 2011, by

    MediaSPIP 0.1 beta is the first version of MediaSPIP declared "usable".
    The zip file provided here only contains the sources of MediaSPIP in its standalone version.
    To get a working installation, it is necessary to manually install all the software dependencies on the server.
    If you want to use this archive for an installation in "farm mode", you will also need to make other modifications (...)

  • HTML5 audio and video support

    13 April 2011, by

    MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
    The MediaSPIP player used has been created specifically for MediaSPIP and can be easily adapted to fit in with a specific theme.
    For older browsers, the Flowplayer Flash fallback is used.
    MediaSPIP allows for media playback on major mobile platforms with the above (...)

On other sites (16434)

  • FFmpeg extracts black image from H264 stream

    8 June 2022, by massivemoisture

    I have a C# application that receives an H264 stream through a socket. I want to continuously get the latest image from that stream.

    Here's what I did with FFmpeg 5.0.1, just a rough sample to get ONE latest image. This is how I start FFmpeg:

    // Launch FFmpeg: read H264 from stdin ("pipe:") and write to stdout
    var ffmpegInfo = new ProcessStartInfo(FFMPEG_PATH);
    ffmpegInfo.RedirectStandardInput = true;
    ffmpegInfo.RedirectStandardOutput = true;
    ffmpegInfo.RedirectStandardError = true;
    ffmpegInfo.UseShellExecute = false;
    ffmpegInfo.CreateNoWindow = true;

    ffmpegInfo.Arguments = "-i pipe: -f h264 -pix_fmt bgr24 -an -sn pipe:";

    Process myFFmpeg = new Process();
    myFFmpeg.StartInfo = ffmpegInfo;
    myFFmpeg.EnableRaisingEvents = true;
    myFFmpeg.Start();

    var inStream = myFFmpeg.StandardInput.BaseStream;
    var baseStream = myFFmpeg.StandardOutput.BaseStream;
    myFFmpeg.BeginErrorReadLine();


    


    Then I start a task that receives the stream through the socket:

    // inStream is "myFFmpeg.StandardInput.BaseStream" from the code block above
    var t = Task.Run(() => ReceiveStream(inStream));
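
    ReceiveStream itself is not shown in the question. A minimal sketch of what such a pump might look like, assuming a connected System.Net.Sockets.Socket in a field named socket, is:

    // Hypothetical sketch (not the question's code): forward socket bytes to FFmpeg's stdin
    void ReceiveStream(Stream ffmpegStdin)
    {
        var buffer = new byte[64 * 1024];
        int received;
        while ((received = socket.Receive(buffer)) > 0)
        {
            ffmpegStdin.Write(buffer, 0, received);
        }
        ffmpegStdin.Close(); // signal EOF so FFmpeg can flush and exit
    }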


    


    Next I read the output from FFmpeg:

    // One bgr24 frame is Width * Height * 3 bytes
    byte[] decoded = new byte[Width * Height * 3];
    int numBytesToRead = Width * Height * 3;
    int numBytesRead = 0;

    while (numBytesToRead > 0)
    {
        // Read at the current offset so earlier bytes are not overwritten
        int n = baseStream.Read(decoded, numBytesRead, numBytesToRead);
        Console.WriteLine($"Read {n} bytes");
        if (n == 0)
        {
            break;
        }
        numBytesRead += n;
        numBytesToRead -= n;
    }
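
    Since the goal is to continuously keep the latest image, a loop along these lines could sit on top of the read above. This is only a sketch, assuming FFmpeg's stdout really carries raw bgr24 frames (see the edit below), not the question's actual code:

    // Sketch: keep reading complete frames, holding on to the newest one
    byte[] frame = new byte[Width * Height * 3];
    byte[] latest = new byte[Width * Height * 3];
    bool eof = false;
    while (!eof)
    {
        int offset = 0;
        while (offset < frame.Length)
        {
            int n = baseStream.Read(frame, offset, frame.Length - offset);
            if (n == 0) { eof = true; break; } // FFmpeg closed its stdout
            offset += n;
        }
        if (eof) break;
        (latest, frame) = (frame, latest); // "latest" now holds the newest complete frame
    }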


    


    Lastly, I use the ImageSharp library to save the decoded byte array as a JPEG file.

    image.Save("test.jpeg", encoder);
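
    The question does not show how image and encoder are created. With ImageSharp (SixLabors), wrapping the raw bgr24 buffer would presumably look something like the following; this is an assumption, not the question's actual code:

    // Assumed glue code: wrap the raw bgr24 bytes so ImageSharp can encode them.
    // Requires SixLabors.ImageSharp, SixLabors.ImageSharp.PixelFormats,
    // and SixLabors.ImageSharp.Formats.Jpeg.
    using var image = Image.LoadPixelData<Bgr24>(decoded, Width, Height);
    var encoder = new JpegEncoder();
    image.Save("test.jpeg", encoder);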


    


    However, test.jpeg always comes out as a black image. What did I do wrong?
    Here's the stderr log that I got from FFmpeg:

    


    
  Duration: N/A, bitrate: N/A
  Stream #0:0: Video: h264 (High), yuv420p(tv, smpte170m/bt470bg/smpte170m, progressive), 1080x2256, 25 fps, 25 tbr, 1200k tbn
Stream mapping:
  Stream #0:0 -> #0:0 (h264 (native) -> h264 (libx264))
Incompatible pixel format 'bgr24' for codec 'libx264', auto-selecting format 'yuv444p'
[libx264 @ 0x11b9068f0] using cpu capabilities: ARMv8 NEON
[libx264 @ 0x11b9068f0] profile High 4:4:4 Predictive, level 5.0, 4:4:4, 8-bit
Output #0, h264, to 'pipe:':
  Metadata:
    encoder         : Lavf59.16.100
  Stream #0:0: Video: h264, yuv444p(tv, smpte170m/bt470bg/smpte170m, progressive), 1080x2256, q=2-31, 25 fps, 25 tbn
    Metadata:
      encoder         : Lavc59.18.100 libx264
    Side data:
      cpb: bitrate max/min/avg: 0/0/0 buffer size: 0 vbv_delay: N/A
frame=    1 fps=0.0 q=0.0 size=       0kB time=00:00:00.00 bitrate=N/A speed=   0x    
frame=   56 fps=0.0 q=0.0 size=       0kB time=00:00:00.00 bitrate=N/A speed=   0x
frame=   87 fps= 83 q=28.0 size=     370kB time=00:00:01.16 bitrate=2610.6kbits/s speed=1.11x
frame=  118 fps= 75 q=28.0 size=     698kB time=00:00:02.40 bitrate=2381.4kbits/s speed=1.54x
frame=  154 fps= 75 q=28.0 size=    1083kB time=00:00:03.84 bitrate=2311.1kbits/s speed=1.86x
...


    


    Thank you!

    Edit: as suggested by @kesh, I have changed h264 to rawvideo; the arguments are now: -i pipe: -f rawvideo -pix_fmt bgr24 -an -sn pipe: (with -f h264 the output was being re-encoded by libx264, as the "Incompatible pixel format" line in the log above shows, so stdout carried an H.264 bitstream rather than raw BGR pixels). Note that at 1080x2256, one raw bgr24 frame is 1080 × 2256 × 3 = 7,309,440 bytes.

    Here's the output of FFmpeg:

    


    Input #0, h264, from 'pipe:':
  Duration: N/A, bitrate: N/A
  Stream #0:0: Video: h264 (High), yuv420p(tv, smpte170m/bt470bg/smpte170m, progressive), 1080x2256, 25 fps, 25 tbr, 1200k tbn
Stream mapping:
  Stream #0:0 -> #0:0 (h264 (native) -> rawvideo (native))
// About 9 of these "No accelerated colorspace..." messages
[swscaler @ 0x128690000] [swscaler @ 0x1286a0000] No accelerated colorspace conversion found from yuv420p to bgr24.
Output #0, rawvideo, to 'pipe:':
  Metadata:
    encoder         : Lavf59.16.100
  Stream #0:0: Video: rawvideo (BGR[24] / 0x18524742), bgr24(pc, gbr/bt470bg/smpte170m, progressive), 1080x2256, q=2-31, 1461888 kb/s, 25 fps, 25 tbn
    Metadata:
      encoder         : Lavc59.18.100 rawvideo
frame=    1 fps=0.0 q=0.0 size=       0kB time=00:00:00.00 bitrate=N/A speed=   0x
// FFmpeg produces no further log output after this


    


  • Output black when I decode h264 720p with ffmpeg

    6 December 2017, by José Marqueses Saxo

    First, sorry for my English. When I decode 720p H264 from the AR.Drone 2.0, my output is black and I can't see anything.

    I have tried changing pCodecCtx->pix_fmt = AV_PIX_FMT_BGR24; to pCodecCtx->pix_fmt = AV_PIX_FMT_YUV420P;, and likewise pCodecCtxH264->pix_fmt = AV_PIX_FMT_BGR24; to pCodecCtxH264->pix_fmt = AV_PIX_FMT_YUV420P;, but my program crashes. What am I doing wrong? Thank you. Here is part of my code:

    av_register_all();
    avcodec_register_all();
    avformat_network_init();

    // 1.2. Open video file
    if(avformat_open_input(&pFormatCtx, drone_addr, NULL, NULL) != 0) {
     mexPrintf("No conecct with Drone");
     EndVideo();
     return;
    }

    pCodec    = avcodec_find_decoder(AV_CODEC_ID_H264);

    pCodecCtx = avcodec_alloc_context3(pCodec);
    pCodecCtx->pix_fmt = AV_PIX_FMT_BGR24;
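    // NOTE: decoders choose the output pix_fmt from the bitstream; assigning
    // pix_fmt here does not make the h264 decoder emit BGR24 (it decodes to
    // yuv420p), which is why sws_scale converts the frames further down.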
    pCodecCtx->skip_frame = AVDISCARD_DEFAULT;
    pCodecCtx->error_concealment = FF_EC_GUESS_MVS | FF_EC_DEBLOCK;
    pCodecCtx->err_recognition = AV_EF_CAREFUL;
    pCodecCtx->skip_loop_filter = AVDISCARD_DEFAULT;
    pCodecCtx->workaround_bugs = FF_BUG_AUTODETECT;
    pCodecCtx->codec_type = AVMEDIA_TYPE_VIDEO;
    pCodecCtx->codec_id = AV_CODEC_ID_H264;
    pCodecCtx->skip_idct = AVDISCARD_DEFAULT;
    pCodecCtx->width = 1280;
    pCodecCtx->height = 720;

    pCodecH264 = avcodec_find_decoder(AV_CODEC_ID_H264);
    pCodecCtxH264 = avcodec_alloc_context3(pCodecH264);


    pCodecCtxH264->pix_fmt = AV_PIX_FMT_BGR24;
    pCodecCtxH264->skip_frame = AVDISCARD_DEFAULT;
    pCodecCtxH264->error_concealment = FF_EC_GUESS_MVS | FF_EC_DEBLOCK;
    pCodecCtxH264->err_recognition = AV_EF_CAREFUL;
    pCodecCtxH264->skip_loop_filter = AVDISCARD_DEFAULT;
    pCodecCtxH264->workaround_bugs = FF_BUG_AUTODETECT;
    pCodecCtxH264->codec_type = AVMEDIA_TYPE_VIDEO;
    pCodecCtxH264->codec_id = AV_CODEC_ID_H264;
    pCodecCtxH264->skip_idct = AVDISCARD_DEFAULT;

    if(avcodec_open2(pCodecCtxH264, pCodecH264, &optionsDict) < 0)
    {
      mexPrintf("Error opening H264 codec");
      return ;
    }

    pFrame_BGR24 = av_frame_alloc();


    if(pFrame_BGR24 == NULL) {
      mexPrintf("Could not allocate pFrame_BGR24\n");
      return ;
    }

    // Determine required buffer size and allocate buffer

    // av_image_get_buffer_size already accounts for 3 bytes per BGR24 pixel,
    // so the height argument must not be multiplied by 3
    buffer_BGR24 = (uint8_t *)av_mallocz(
        av_image_get_buffer_size(AV_PIX_FMT_BGR24,
                                 pCodecCtx->width, pCodecCtx->height, 1));

    // Assign buffer to image planes

    av_image_fill_arrays(pFrame_BGR24->data, pFrame_BGR24->linesize,
                         buffer_BGR24, AV_PIX_FMT_BGR24,
                         pCodecCtx->width, pCodecCtx->height, 1);

    // format conversion context
    pConvertCtx_BGR24 = sws_getContext(pCodecCtx->width, pCodecCtx->height,
    pCodecCtx->pix_fmt, pCodecCtx->width, pCodecCtx->height,  AV_PIX_FMT_BGR24,
                                    SWS_BILINEAR | SWS_ACCURATE_RND, 0, 0, 0);
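    // NOTE: pCodecCtx->pix_fmt was forced to BGR24 above and pCodecCtx is never
    // opened, so this context is effectively built as BGR24 -> BGR24, while the
    // frames decoded by pCodecCtxH264 are yuv420p; the source format here should
    // come from the decoded frame (pFrame->format) instead.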

    // 1.6. get video frames
    pFrame = av_frame_alloc();

    av_init_packet(&packet);
    packet.data = NULL;
    packet.size = 0;
    }

    // Capture one frame
    void video::capture(mxArray *plhs[]) {

     if(av_read_frame(pFormatCtx, &packet) < 0){
         mexPrintf("Error reading frame");
         return;
     }
      do {
          do {
             rest = avcodec_send_packet(pCodecCtxH264, &packet);
          } while(rest == AVERROR(EAGAIN));
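          // NOTE: AVERROR(EAGAIN) from avcodec_send_packet means the decoder
          // must be drained with avcodec_receive_frame first; retrying the
          // send in a tight loop without receiving risks spinning forever.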

          if(rest == AVERROR_EOF || rest == AVERROR(EINVAL)) {
               printf("AVERROR(EAGAIN): %d, AVERROR_EOF: %d, AVERROR(EINVAL): %d\n",
                      AVERROR(EAGAIN), AVERROR_EOF, AVERROR(EINVAL));
               printf("fe_read_frame: Frame getting error (%d)!\n", rest);
               return;
          }

          rest = avcodec_receive_frame(pCodecCtxH264, pFrame);
      } while(rest == AVERROR(EAGAIN));

      if(rest == AVERROR_EOF || rest == AVERROR(EINVAL)) {
           // An error or EOF occurred; break out and return what we have so far.
           printf("AVERROR(EAGAIN): %d, AVERROR_EOF: %d, AVERROR(EINVAL): %d\n",
                  AVERROR(EAGAIN), AVERROR_EOF, AVERROR(EINVAL));
           printf("fe_read_frame: EOF or some other decoding error (%d)!\n", rest);
           return;
      }


      // 2.1.1. convert frame to BGR24 for OpenCV
      sws_scale(pConvertCtx_BGR24, (const uint8_t* const*)pFrame->data,
                pFrame->linesize, 0, pCodecCtx->height,
                pFrame_BGR24->data, pFrame_BGR24->linesize);
    //}
      av_packet_unref(&packet);
      av_init_packet(&packet);
      mwSize dims[] = {(pCodecCtx->width) *
                       ((pCodecCtx->height == 720) ? 720 : pCodecCtx->height) *
                       sizeof(uint8_t) * 3};
      plhs[0] = mxCreateNumericArray(1, dims, mxUINT8_CLASS, mxREAL);
      //plhs[0] = mxCreateDoubleMatrix(pCodecCtx->height, pCodecCtx->width, mxREAL);
      point = mxGetPr(plhs[0]);
      memcpy(point, pFrame_BGR24->data[0],
             (pCodecCtx->width) * (pCodecCtx->height) * sizeof(uint8_t) * 3);
    }

  • how to encode videos for the web and mobile phones using ffmpeg

    3 February 2012, by Arnar Yngvason

    I'm running a website where users can upload their videos, and they are all transcoded to the same format (MP4 at the moment). Up until now I've been using Zencoder (transcoding as a service), but I want to start transcoding the videos on my own server.

    What I would like to know is:

    • Which formats should I transcode to, and which sizes are needed for the videos to play on most mobile phones?
    • Do I actually need WebM?
    • Which is better: CRF or VBR?
    • I would like the videos to have the same bitrate/quality as the originals. Can I set a max?
    • Is there a max bitrate I should not exceed if I want the videos to play everywhere?

    If someone would be so kind as to write down the commands I need and explain how they work and what they do, I would be very thankful :)
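
    By way of illustration only (this does not answer the question definitively): a common baseline for broad playback is H.264 Baseline profile video plus AAC audio in an MP4 container, encoded with CRF for quality and a bitrate cap for compatibility. Launched from C# in the style of the first question above, that might look like the sketch below; the file names are placeholders and the exact flags are assumptions to check against the FFmpeg documentation.

    // A sketch only: encode an upload to a widely compatible H.264 Baseline + AAC MP4.
    // FFMPEG_PATH, input.mov and output.mp4 are placeholders.
    var psi = new ProcessStartInfo(FFMPEG_PATH)
    {
        UseShellExecute = false,
        Arguments = "-i input.mov -c:v libx264 -profile:v baseline -level 3.0 " +
                    "-crf 23 -maxrate 2M -bufsize 4M -vf scale=-2:720 " +
                    "-c:a aac -b:a 128k -movflags +faststart output.mp4"
    };
    using var p = Process.Start(psi);
    p.WaitForExit();

    The -crf plus -maxrate/-bufsize combination ("capped CRF") preserves quality where possible while enforcing a maximum bitrate, which is one way to address the "Can I set a max?" bullet.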