
Other articles (36)

  • Adding notes and captions to images

    7 February 2011, by

    To add notes and captions to images, the first step is to install the "Légendes" plugin.
    Once the plugin is activated, you can configure it in the configuration area to change the rights for creating, modifying and deleting notes. By default, only site administrators can add notes to images.
    Changes when adding a media item
    When adding a media item of type "image", a new button appears above the preview (...)

  • The farm's regular Cron tasks

    1 December 2010, by

    Managing the farm relies on running several repetitive tasks, known as Cron tasks, at regular intervals.
    The super Cron (gestion_mutu_super_cron)
    This task, scheduled every minute, simply calls the Cron of all the instances of the shared-hosting setup (mutualisation) on a regular basis. Coupled with a system Cron on the central site of the farm, this is enough to generate regular visits to the various sites and to keep the tasks of rarely visited sites from being too (...)

  • Support for all media types

    10 April 2011

    Unlike many modern document-sharing programs and platforms, MediaSPIP aims to handle as many different document formats as possible, whether of type: images (png, gif, jpg, bmp and others...); audio (MP3, Ogg, Wav and others...); video (Avi, MP4, Ogv, mpg, mov, wmv and others...); textual content, code or other (OpenOffice, Microsoft Office (spreadsheet, presentation), web (html, css), LaTeX, Google Earth) (...)

On other sites (12440)

  • Output black when I decode h264 720p with ffmpeg

    6 December 2017, by José Marqueses Saxo

    First, sorry for my English. When I decode H264 720p on the ARDrone 2.0, my output is black and I can't see anything.

    I have tried changing the value of pCodecCtx->pix_fmt = AV_PIX_FMT_BGR24; to pCodecCtx->pix_fmt = AV_PIX_FMT_YUV420P;, and the value of pCodecCtxH264->pix_fmt = AV_PIX_FMT_BGR24; to pCodecCtxH264->pix_fmt = AV_PIX_FMT_YUV420P;, but my program crashes. What am I doing wrong? Thank you; here is part of my code:

    av_register_all();
    avcodec_register_all();
    avformat_network_init();

    // 1.2. Open video file
    if(avformat_open_input(&pFormatCtx, drone_addr, NULL, NULL) != 0) {
        mexPrintf("No connection with drone");
        EndVideo();
        return;
    }

    pCodec    = avcodec_find_decoder(AV_CODEC_ID_H264);

    pCodecCtx = avcodec_alloc_context3(pCodec);
    pCodecCtx->pix_fmt = AV_PIX_FMT_BGR24;
    pCodecCtx->skip_frame = AVDISCARD_DEFAULT;
    pCodecCtx->error_concealment = FF_EC_GUESS_MVS | FF_EC_DEBLOCK;
    pCodecCtx->err_recognition = AV_EF_CAREFUL;
    pCodecCtx->skip_loop_filter = AVDISCARD_DEFAULT;
    pCodecCtx->workaround_bugs = FF_BUG_AUTODETECT;
    pCodecCtx->codec_type = AVMEDIA_TYPE_VIDEO;
    pCodecCtx->codec_id = AV_CODEC_ID_H264;
    pCodecCtx->skip_idct = AVDISCARD_DEFAULT;
    pCodecCtx->width = 1280;
    pCodecCtx->height = 720;

    pCodecH264 = avcodec_find_decoder(AV_CODEC_ID_H264);
    pCodecCtxH264 = avcodec_alloc_context3(pCodecH264);


    pCodecCtxH264->pix_fmt = AV_PIX_FMT_BGR24;
    pCodecCtxH264->skip_frame = AVDISCARD_DEFAULT;
    pCodecCtxH264->error_concealment = FF_EC_GUESS_MVS | FF_EC_DEBLOCK;
    pCodecCtxH264->err_recognition = AV_EF_CAREFUL;
    pCodecCtxH264->skip_loop_filter = AVDISCARD_DEFAULT;
    pCodecCtxH264->workaround_bugs = FF_BUG_AUTODETECT;
    pCodecCtxH264->codec_type = AVMEDIA_TYPE_VIDEO;
    pCodecCtxH264->codec_id = AV_CODEC_ID_H264;
    pCodecCtxH264->skip_idct = AVDISCARD_DEFAULT;

    if(avcodec_open2(pCodecCtxH264, pCodecH264, &optionsDict) < 0)
    {
      mexPrintf("Error opening H264 codec");
      return ;
    }

    pFrame_BGR24 = av_frame_alloc();


    if(pFrame_BGR24 == NULL) {
      mexPrintf("Could not allocate pFrame_BGR24\n");
      return ;
    }

    // Determine required buffer size and allocate buffer
    buffer_BGR24 = (uint8_t *)av_mallocz(
        av_image_get_buffer_size(AV_PIX_FMT_BGR24, pCodecCtx->width,
                                 ((pCodecCtx->height == 720) ? 720 : pCodecCtx->height) * sizeof(uint8_t) * 3,
                                 1));

    // Assign buffer to image planes
    av_image_fill_arrays(pFrame_BGR24->data, pFrame_BGR24->linesize, buffer_BGR24,
                         AV_PIX_FMT_BGR24, pCodecCtx->width, pCodecCtx->height, 1);

    // format conversion context
    pConvertCtx_BGR24 = sws_getContext(pCodecCtx->width, pCodecCtx->height,
    pCodecCtx->pix_fmt, pCodecCtx->width, pCodecCtx->height,  AV_PIX_FMT_BGR24,
                                    SWS_BILINEAR | SWS_ACCURATE_RND, 0, 0, 0);

    // 1.6. get video frames
    pFrame = av_frame_alloc();

    av_init_packet(&packet);
    packet.data = NULL;
    packet.size = 0;
    }

    // Capture one frame
    void video::capture(mxArray *plhs[]) {

      if(av_read_frame(pFormatCtx, &packet) < 0){
          mexPrintf("Error reading frame");
          return;
      }
      do {
          do {
              rest = avcodec_send_packet(pCodecCtxH264, &packet);
          } while(rest == AVERROR(EAGAIN));

          if(rest == AVERROR_EOF || rest == AVERROR(EINVAL)) {
              printf("AVERROR(EAGAIN): %d, AVERROR_EOF: %d, AVERROR(EINVAL): %d\n",
                     AVERROR(EAGAIN), AVERROR_EOF, AVERROR(EINVAL));
              printf("fe_read_frame: Frame getting error (%d)!\n", rest);
              return;
          }

          rest = avcodec_receive_frame(pCodecCtxH264, pFrame);
      } while(rest == AVERROR(EAGAIN));

      if(rest == AVERROR_EOF || rest == AVERROR(EINVAL)) {
          // An error or EOF occurred; break out and return what we have so far.
          printf("AVERROR(EAGAIN): %d, AVERROR_EOF: %d, AVERROR(EINVAL): %d\n",
                 AVERROR(EAGAIN), AVERROR_EOF, AVERROR(EINVAL));
          printf("fe_read_frame: EOF or some other decoding error (%d)!\n", rest);
          return;
      }


      // 2.1.1. convert frame to GRAYSCALE [or BGR] for OpenCV
      sws_scale(pConvertCtx_BGR24, (const uint8_t* const*)pFrame->data,
                pFrame->linesize, 0, pCodecCtx->height,
                pFrame_BGR24->data, pFrame_BGR24->linesize);
      //}
      av_packet_unref(&packet);
      av_init_packet(&packet);

      mwSize dims[] = {(pCodecCtx->width) *
                       ((pCodecCtx->height == 720) ? 720 : pCodecCtx->height) *
                       sizeof(uint8_t) * 3};
      plhs[0] = mxCreateNumericArray(1, dims, mxUINT8_CLASS, mxREAL);
      //plhs[0] = mxCreateDoubleMatrix(pCodecCtx->height, pCodecCtx->width, mxREAL);
      point = mxGetPr(plhs[0]);
      memcpy(point, pFrame_BGR24->data[0],
             (pCodecCtx->width) * (pCodecCtx->height) * sizeof(uint8_t) * 3);
    }
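
    For comparison, here is a minimal decode-and-convert sketch using the same send/receive API. It is an illustration only (the function name decode_to_bgr24 and its error handling are not from the question): the decoder is left to determine its own output format and dimensions from the bitstream (normally yuv420p for H.264), and the format actually found in the received frame is then used as the sws_scale source instead of a value forced on the codec context beforehand.

    /* Sketch, assuming FFmpeg 3.x or later; names are illustrative. */
    #include <libavcodec/avcodec.h>
    #include <libswscale/swscale.h>
    #include <libavutil/imgutils.h>

    /* Decode one packet and convert the first decoded frame to BGR24.
     * Returns 0 on success, a negative AVERROR code otherwise. */
    static int decode_to_bgr24(AVCodecContext *dec, AVPacket *pkt,
                               AVFrame *frame, AVFrame *bgr)
    {
        int ret = avcodec_send_packet(dec, pkt);
        if (ret < 0 && ret != AVERROR(EAGAIN))
            return ret;

        ret = avcodec_receive_frame(dec, frame);
        if (ret < 0)                       /* AVERROR(EAGAIN): feed more packets first */
            return ret;

        /* Build the scaler from what the decoder actually produced,
         * not from a pixel format forced on the context. */
        struct SwsContext *sws = sws_getContext(frame->width, frame->height,
                                                (enum AVPixelFormat)frame->format,
                                                frame->width, frame->height,
                                                AV_PIX_FMT_BGR24,
                                                SWS_BILINEAR, NULL, NULL, NULL);
        if (!sws)
            return AVERROR(EINVAL);

        bgr->format = AV_PIX_FMT_BGR24;
        bgr->width  = frame->width;
        bgr->height = frame->height;
        ret = av_frame_get_buffer(bgr, 1);  /* allocates bgr->data / bgr->linesize */
        if (ret >= 0)
            sws_scale(sws, (const uint8_t * const *)frame->data, frame->linesize,
                      0, frame->height, bgr->data, bgr->linesize);

        sws_freeContext(sws);
        return ret;
    }
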
  • FFmpeg extracts black image from H264 stream

    8 June 2022, by massivemoisture

    I have a C# application that receives H264 stream through a socket. I want to continuously get the latest image from that stream.

    


    Here's what I did with FFmpeg 5.0.1, just a rough sample to get ONE latest image, how I start FFmpeg :

    


    var ffmpegInfo = new ProcessStartInfo(FFMPEG_PATH);
ffmpegInfo.RedirectStandardInput = true;
ffmpegInfo.RedirectStandardOutput = true;
ffmpegInfo.RedirectStandardError = true;
ffmpegInfo.UseShellExecute = false;
ffmpegInfo.CreateNoWindow = true;

ffmpegInfo.Arguments = "-i pipe: -f h264 -pix_fmt bgr24 -an -sn pipe:";

Process myFFmpeg = new Process();
myFFmpeg.StartInfo = ffmpegInfo;
myFFmpeg.EnableRaisingEvents = true;
myFFmpeg.Start();

var inStream = myFFmpeg.StandardInput.BaseStream;
FileStream baseStream = myFFmpeg.StandardOutput.BaseStream as FileStream;
myFFmpeg.BeginErrorReadLine();


    


    Then I start a new thread to receive the stream through socket :

    


    // inStream is "myFFmpeg.StandardInput.BaseStream" from the code block above
var t = Task.Run(() => ReceiveStream(inStream));


    


    Next I read the output from FFmpeg :

    


    byte[] decoded = new byte[Width * Height * 3];
int numBytesToRead = Width * Height * 3;
int numBytesRead = 0;

while (numBytesToRead > 0)
{
   // Read into the buffer at the current offset so partial reads accumulate
   int n = baseStream.Read(decoded, numBytesRead, numBytesToRead);
   Console.WriteLine($"Read {n} bytes");
   if (n == 0)
   {
      break;
   }
   numBytesRead += n;
   numBytesToRead -= n;
}


    


    Lastly, I use the ImageSharp library to save the decoded byte array as a JPEG file.

    


    image.Save("test.jpeg", encoder);


    


    However, test.jpeg always comes out as a black image. What did I do wrong?
Here's the stderr log that I got from ffmpeg:

    


    
  Duration: N/A, bitrate: N/A
  Stream #0:0: Video: h264 (High), yuv420p(tv, smpte170m/bt470bg/smpte170m, progressive), 1080x2256, 25 fps, 25 tbr, 1200k tbn
Stream mapping:
  Stream #0:0 -> #0:0 (h264 (native) -> h264 (libx264))
Incompatible pixel format 'bgr24' for codec 'libx264', auto-selecting format 'yuv444p'
[libx264 @ 0x11b9068f0] using cpu capabilities: ARMv8 NEON
[libx264 @ 0x11b9068f0] profile High 4:4:4 Predictive, level 5.0, 4:4:4, 8-bit
Output #0, h264, to 'pipe:':
  Metadata:
    encoder         : Lavf59.16.100
  Stream #0:0: Video: h264, yuv444p(tv, smpte170m/bt470bg/smpte170m, progressive), 1080x2256, q=2-31, 25 fps, 25 tbn
    Metadata:
      encoder         : Lavc59.18.100 libx264
    Side data:
      cpb: bitrate max/min/avg: 0/0/0 buffer size: 0 vbv_delay: N/A
frame=    1 fps=0.0 q=0.0 size=       0kB time=00:00:00.00 bitrate=N/A speed=   0x    
frame=   56 fps=0.0 q=0.0 size=       0kB time=00:00:00.00 bitrate=N/A speed=   0x
frame=   87 fps= 83 q=28.0 size=     370kB time=00:00:01.16 bitrate=2610.6kbits/s speed=1.11x
frame=  118 fps= 75 q=28.0 size=     698kB time=00:00:02.40 bitrate=2381.4kbits/s speed=1.54x
frame=  154 fps= 75 q=28.0 size=    1083kB time=00:00:03.84 bitrate=2311.1kbits/s speed=1.86x
...


    


    Thank you!

    


    Edit: as suggested by @kesh, I have changed h264 to rawvideo; the arguments are now: -i pipe: -f rawvideo -pix_fmt bgr24 -an -sn pipe:

    


    Here's the output of ffmpeg:

    


    Input #0, h264, from 'pipe:':
  Duration: N/A, bitrate: N/A
  Stream #0:0: Video: h264 (High), yuv420p(tv, smpte170m/bt470bg/smpte170m, progressive), 1080x2256, 25 fps, 25 tbr, 1200k tbn
Stream mapping:
  Stream #0:0 -> #0:0 (h264 (native) -> rawvideo (native))
// About 9 of these "No accelerated colorspace..." message
[swscaler @ 0x128690000] [swscaler @ 0x1286a0000] No accelerated colorspace conversion found from yuv420p to bgr24.
Output #0, rawvideo, to 'pipe:':
  Metadata:
    encoder         : Lavf59.16.100
  Stream #0:0: Video: rawvideo (BGR[24] / 0x18524742), bgr24(pc, gbr/bt470bg/smpte170m, progressive), 1080x2256, q=2-31, 1461888 kb/s, 25 fps, 25 tbn
    Metadata:
      encoder         : Lavc59.18.100 rawvideo
frame=    1 fps=0.0 q=0.0 size=       0kB time=00:00:00.00 bitrate=N/A speed=   0x
// FFmpeg outputs no more log after this
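
    As a side check (not part of the original post), the rawvideo numbers in this log can be verified with a few lines of arithmetic: a bgr24 frame is width × height × 3 bytes, and at 25 fps that works out to exactly the 1461888 kb/s reported above, so the reading side has to accumulate that many bytes before it holds one complete frame. A small C snippet, just for the arithmetic:

#include <stdio.h>
#include <stdint.h>

int main(void) {
    /* Geometry taken from the ffmpeg log: 1080x2256, bgr24, 25 fps. */
    const int64_t width = 1080, height = 2256, bytes_per_pixel = 3, fps = 25;
    int64_t frame_bytes  = width * height * bytes_per_pixel;   /* 7,309,440 bytes per frame */
    int64_t bits_per_sec = frame_bytes * 8 * fps;              /* 1,461,888,000 bits/s */
    printf("bytes per frame: %lld\n", (long long)frame_bytes);
    printf("raw bitrate: %lld kb/s\n", (long long)(bits_per_sec / 1000)); /* matches 1461888 kb/s in the log */
    return 0;
}
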


    


  • Fill output video with black screen in case of missing the input stream or switch input UDP stream to another source

    13 May 2018, by Omar Mahmoud

    I am using FFmpeg to record streams from UDP sources, but unfortunately once the input stream is lost, ffmpeg stops recording and only appends video to the file once the stream is seen by the server again.

    As these recording files are time based, I need to fill the video with a black screen whenever the input stream is missing.

    ffmpeg -i 'udp://224.12.12.1:4000' -t 00:45:00 -vcodec copy -acodec copy -f mpegts /record/eEs1526947.ts

    How can I do this, either with ffmpeg or with another CLI-based service/process?
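
    One possible approach (a sketch only; the 1280x720 size, 25 fps rate, gap duration and file names are assumptions, not taken from the question) is to generate a black filler clip for the missing interval from the lavfi color and anullsrc sources, then join the recorded MPEG-TS segments around it with the concat protocol:

    ffmpeg -f lavfi -i color=c=black:s=1280x720:r=25 -f lavfi -i anullsrc=channel_layout=stereo:sample_rate=48000 -t 00:05:00 -c:v libx264 -c:a aac -f mpegts /record/filler.ts
    ffmpeg -i "concat:/record/part1.ts|/record/filler.ts|/record/part2.ts" -c copy /record/eEs1526947.ts

    This assumes the recorder is restarted (producing a new segment) when the UDP stream reappears, so that the gap duration is known after the fact; as far as I know, a single ffmpeg command does not automatically fail over from a dead UDP input to a generated source.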