Other articles (39)

  • Customizing categories

    21 June 2013

    Category creation form
    For those who know SPIP well, a category can be thought of as a rubrique (section).
    For a document of type category, the fields offered by default are: Texte
    This form can be modified under:
    Administration > Configuration des masques de formulaire.
    For a document of type media, the fields not displayed by default are: Descriptif rapide
    It is also in this configuration section that you can specify the (...)

  • Adding notes and captions to images

    7 February 2011

    To be able to add notes and captions to images, the first step is to install the "Légendes" plugin.
    Once the plugin is activated, you can configure it in the configuration area to change the rights for creating, modifying and deleting notes. By default, only site administrators can add notes to images.
    Changes when adding a media item
    When adding a media item of type "image", a new button appears above the preview (...)

  • Contribute to translation

    13 avril 2011

    You can help us to improve the language used in the software interface to make MediaSPIP more accessible and user-friendly. You can also translate the interface into any language, which allows it to spread to new linguistic communities.
    To do this, we use the translation interface of SPIP, where all the language modules of MediaSPIP are available. Just subscribe to the mailing list and request further information on translation.
    MediaSPIP is currently available in French and English (...)

On other sites (7562)

  • How can I mux H.264 RTP output into a container using FFMPEG?

    23 September 2013, by Grad

    I am working on the effects of network losses in video transmission. In order to simulate the network losses I use a simple program which drops random RTP packets from the output of H.264 RTP encoding.

    I use Joint Model (JM) 14.2 to encode the video. However, I don't use the Annex B format for my output; instead I choose RTP packets as the output. The JM output is generated as a sequence of RTP packets, each consisting of an RTP header followed by its payload. After that, some of the RTP packets are dropped using a simple program. Then I decode the output, also using JM and its error concealment methods. That gives me a YUV file as output. The format of the output is as follows:

        ----------------------------------------------------------------------
        | RTP Header #1 | RTP Payload #1 | RTP Header #2 | RTP Payload #2 |...
        ----------------------------------------------------------------------

    I want to run a subjective test with these bitstreams, and it's very inconvenient to crowdsource this subjective test with GBs of video data. So I want to mux these bitstreams into a container (e.g. AVI) using FFMPEG. I have tried to decode these bitstreams with FFMPEG and FFPLAY; however, neither of them worked. I also tried the following command, and it didn't work either.

       ffmpeg -f h264 -i  -vcodec copy -r 25 out.avi

    Which format or muxer should I use? Do I need to convert these files to any other format?
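
    A possible route, sketched below under loud assumptions, is to strip the RTP headers offline and rebuild an Annex B elementary stream that FFMPEG's h264 demuxer can read. The sketch assumes each dumped packet is prefixed with a 4-byte little-endian length (check how your JM build delimits packets on disk), that the RTP padding bit is unused, and that every payload is a single NAL unit (FU-A fragments would need reassembly). The file names in.rtp and out.264 are placeholders.

     /* rtp2annexb.c - sketch: JM-style RTP dump -> Annex B H.264 stream.
      * Assumptions: 4-byte little-endian length before each packet,
      * single-NAL-unit payloads, no padding, no FU-A/STAP-A aggregation. */
     #include <stdio.h>
     #include <stdint.h>

     int main(int argc, char **argv)
     {
         if (argc != 3) {
             fprintf(stderr, "usage: %s in.rtp out.264\n", argv[0]);
             return 1;
         }
         FILE *in = fopen(argv[1], "rb"), *out = fopen(argv[2], "wb");
         if (!in || !out) { perror("fopen"); return 1; }

         static const uint8_t startcode[4] = { 0, 0, 0, 1 };
         uint8_t buf[4], pkt[65536];

         while (fread(buf, 1, 4, in) == 4) {
             uint32_t len = buf[0] | buf[1] << 8 | buf[2] << 16 | (uint32_t)buf[3] << 24;
             if (len < 12 || len > sizeof(pkt) || fread(pkt, 1, len, in) != len)
                 break;                                 /* truncated or implausible packet */
             uint32_t hdr = 12 + 4 * (pkt[0] & 0x0F);   /* fixed header + CSRC list */
             if ((pkt[0] & 0x10) && hdr + 4 <= len)     /* optional header extension */
                 hdr += 4 + 4 * ((pkt[hdr + 2] << 8) | pkt[hdr + 3]);
             if (hdr >= len) continue;                  /* no payload left: skip */
             fwrite(startcode, 1, 4, out);              /* Annex B start code */
             fwrite(pkt + hdr, 1, len - hdr, out);      /* raw NAL unit */
         }
         fclose(in); fclose(out);
         return 0;
     }

    If the conversion holds up, the muxing step is then just a stream copy, e.g. ffmpeg -f h264 -i out.264 -vcodec copy -r 25 out.avi.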

  • ffmpegthumbnailer error with carrierwave-video-thumbnailer

    30 janvier 2014, par scientiffic

    I am getting the error "No such file or directory" when I try to run ffmpegthumbnailer using the carrierwave-video-thumbnailer gem.

    I confirmed that ffmpegthumbnailer is working correctly on my computer since I can generate a thumbnail image from a video straight from the command line.

    From my logs, it looks like my app thinks that it has generated a thumbnail image. However, when I look in the directory, there is no file tmpfile.png, and my app fails with the error.

    Has anyone successfully used the carrierwave-video-thumbnailer gem to create thumbnails, and if so, what am I doing wrong? Alternatively, if there is some way I can just run ffmpegthumbnailer within my model, I could do that too.

    Here are my logs:

    Running....ffmpegthumbnailer -i /Users/.../Website/public/uploads/tmp/1380315873-21590-2814/thumb_Untitled.mov -o /Users/.../Website/public/uploads/tmp/1380315873-21590-2814/tmpfile.png -c png -q 10 -s 192 -f
    Success!
    Errno::ENOENT: No such file or directory - (/Users/.../Website/public/uploads/tmp/1380315873-21590-2814/tmpfile.png, /Users/.../Website/public/uploads/tmp/1380315873-21590-2814/thumb_Untitled.mov)

    video_path_uploader.rb

     class VideoPathUploader < CarrierWave::Uploader::Base
       include CarrierWave::Video
       include CarrierWave::Video::Thumbnailer

       process encode_video: [:mp4]

       # Include RMagick or MiniMagick support:
       # include CarrierWave::RMagick
       include CarrierWave::MiniMagick

       # Choose what kind of storage to use for this uploader:
       # storage :file
       storage :fog

       # Override the directory where uploaded files will be stored.
       # This is a sensible default for uploaders that are meant to be mounted:
       def store_dir
         "#{model.class.to_s.underscore}/#{mounted_as}/#{model.id}"
       end

       version :thumb do
         process thumbnail: [{ format: 'png', quality: 10, size: 192, strip: true, logger: Rails.logger }]

         def full_filename(for_file)
           png_name for_file, version_name
         end
       end

       def png_name(for_file, version_name)
         %Q{#{version_name}_#{for_file.chomp(File.extname(for_file))}.png}
       end
     end

    Video.rb

    class Video < ActiveRecord::Base
     # maybe we should add a title attribute to the video?
     attr_accessible :position, :project_id, :step_id, :image_id, :saved, :embed_url, :thumbnail_url, :video_path
     mount_uploader :video_path, VideoPathUploader
    ...
    end

  • FFmpeg avcodec_decode_video2 decodes RTSP H264 HD video packets with errors

    29 May 2018, by Nguyen Ba Thi

    I am using FFmpeg library version 4.0 in a simple C++ program, in which a thread receives RTSP H264 video data from an IP camera and displays it in the program window.

    The code of this thread is as follows:

    DWORD WINAPI GrabbProcess(LPVOID lpParam)
    // Grabbing thread
    {
     DWORD i;
     int ret = 0, nPacket=0;
     FILE *pktFile;
     // Open video file
     pFormatCtx = avformat_alloc_context();
     if(avformat_open_input(&pFormatCtx, nameVideoStream, NULL, NULL)!=0)
         fGrabb=-1; // Couldn't open file
     else
     // Retrieve stream information
     if(avformat_find_stream_info(pFormatCtx, NULL)<0)
         fGrabb=-2; // Couldn't find stream information
     else
     {
         // Find the first video stream
         videoStream=-1;
          for(i=0; i<pFormatCtx->nb_streams; i++)
           if(pFormatCtx->streams[i]->codec->codec_type==AVMEDIA_TYPE_VIDEO)
           {
             videoStream=i;
             break;
           }
         if(videoStream==-1)
             fGrabb=-3; // Didn't find a video stream
         else
         {
             // Get a pointer to the codec context for the video stream
             pCodecCtxOrig=pFormatCtx->streams[videoStream]->codec;
             // Find the decoder for the video stream
             pCodec=avcodec_find_decoder(pCodecCtxOrig->codec_id);
             if(pCodec==NULL)
                 fGrabb=-4; // Codec not found
             else
             {
                 // Copy context
                 pCodecCtx = avcodec_alloc_context3(pCodec);
                 if(avcodec_copy_context(pCodecCtx, pCodecCtxOrig) != 0)
                     fGrabb=-5; // Error copying codec context
                 else
                 {
                     // Open codec
                     if(avcodec_open2(pCodecCtx, pCodec, NULL)<0)
                         fGrabb=-6; // Could not open codec
                     else
                     // Allocate video frame for input
                     pFrame=av_frame_alloc();
                     // Determine required buffer size and allocate buffer
                     numBytes=avpicture_get_size(pCodecCtx->pix_fmt, pCodecCtx->width,
                         pCodecCtx->height);
                     buffer=(uint8_t *)av_malloc(numBytes*sizeof(uint8_t));
                     // Assign appropriate parts of buffer to image planes in pFrame
                     // Note that pFrame is an AVFrame, but AVFrame is a superset
                     // of AVPicture
                     avpicture_fill((AVPicture *)pFrame, buffer, pCodecCtx->pix_fmt,
                         pCodecCtx->width, pCodecCtx->height);

                     // Allocate video frame for display
                     pFrameRGB=av_frame_alloc();
                     // Determine required buffer size and allocate buffer
                     numBytes=avpicture_get_size(AV_PIX_FMT_RGB24, pCodecCtx->width,
                         pCodecCtx->height);
                     bufferRGB=(uint8_t *)av_malloc(numBytes*sizeof(uint8_t));
                     // Assign appropriate parts of buffer to image planes in pFrameRGB
                     // Note that pFrameRGB is an AVFrame, but AVFrame is a superset
                     // of AVPicture
                     avpicture_fill((AVPicture *)pFrameRGB, bufferRGB, AV_PIX_FMT_RGB24,
                         pCodecCtx->width, pCodecCtx->height);
                     // initialize SWS context for software scaling to FMT_RGB24
                     sws_ctx_to_RGB = sws_getContext(pCodecCtx->width,
                         pCodecCtx->height,
                         pCodecCtx->pix_fmt,
                         pCodecCtx->width,
                         pCodecCtx->height,
                         AV_PIX_FMT_RGB24,
                         SWS_BILINEAR,
                         NULL,
                         NULL,
                         NULL);

                     // Allocate video frame (grayscale YUV420P) for processing
                     pFrameYUV=av_frame_alloc();
                     // Determine required buffer size and allocate buffer
                     numBytes=avpicture_get_size(AV_PIX_FMT_YUV420P, pCodecCtx->width,
                         pCodecCtx->height);
                     bufferYUV=(uint8_t *)av_malloc(numBytes*sizeof(uint8_t));
                     // Assign appropriate parts of buffer to image planes in pFrameYUV
                     // Note that pFrameYUV is an AVFrame, but AVFrame is a superset
                     // of AVPicture
                     avpicture_fill((AVPicture *)pFrameYUV, bufferYUV, AV_PIX_FMT_YUV420P,
                         pCodecCtx->width, pCodecCtx->height);
                     // initialize SWS context for software scaling to FMT_YUV420P
                     sws_ctx_to_YUV = sws_getContext(pCodecCtx->width,
                         pCodecCtx->height,
                         pCodecCtx->pix_fmt,
                         pCodecCtx->width,
                         pCodecCtx->height,
                         AV_PIX_FMT_YUV420P,
                         SWS_BILINEAR,
                         NULL,
                         NULL,
                         NULL);
                   RealBsqHdr.biWidth = pCodecCtx->width;
                   RealBsqHdr.biHeight = -pCodecCtx->height;
                 }
             }
         }
     }
     while ((fGrabb==1)||(fGrabb==100))
     {
         // Grabb a frame
         if (av_read_frame(pFormatCtx, &packet) >= 0)
         {
           // Is this a packet from the video stream?
           if(packet.stream_index==videoStream)
           {
               // Decode video frame
               int len = avcodec_decode_video2(pCodecCtx, pFrame, &frameFinished, &packet);
               nPacket++;
               // Did we get a video frame?
               if(frameFinished)
               {
                   // Convert the image from its native format to YUV
                   sws_scale(sws_ctx_to_YUV, (uint8_t const * const *)pFrame->data,
                       pFrame->linesize, 0, pCodecCtx->height,
                       pFrameYUV->data, pFrameYUV->linesize);
                   // Convert the image from its native format to RGB
                   sws_scale(sws_ctx_to_RGB, (uint8_t const * const *)pFrame->data,
                       pFrame->linesize, 0, pCodecCtx->height,
                       pFrameRGB->data, pFrameRGB->linesize);
                   HDC hdc=GetDC(hWndM);
                   SetDIBitsToDevice(hdc, 0, 0, pCodecCtx->width, pCodecCtx->height,
                       0, 0, 0, pCodecCtx->height,pFrameRGB->data[0], (LPBITMAPINFO)&RealBsqHdr, DIB_RGB_COLORS);
                   ReleaseDC(hWndM,hdc);
                   av_frame_unref(pFrame);
               }
           }
           // Free the packet that was allocated by av_read_frame
           av_free_packet(&packet);
         }
      }
      // Free the org frame
     av_frame_free(&pFrame);
     // Free the RGB frame
     av_frame_free(&pFrameRGB);
     // Free the YUV frame
     av_frame_free(&pFrameYUV);

     // Close the codec
     avcodec_close(pCodecCtx);
     avcodec_close(pCodecCtxOrig);

     // Close the video file
     avformat_close_input(&pFormatCtx);
     avformat_free_context(pFormatCtx);

     if (fGrabb==1)
         sprintf(tmpstr,"Grabbing Completed %d frames", nCntTotal);
     else if (fGrabb==2)
         sprintf(tmpstr,"User break on %d frames", nCntTotal);
     else if (fGrabb==3)
         sprintf(tmpstr,"Can't Grabb at frame %d", nCntTotal);
     else if (fGrabb==-1)
         sprintf(tmpstr,"Couldn't open file");
     else if (fGrabb==-2)
         sprintf(tmpstr,"Couldn't find stream information");
     else if (fGrabb==-3)
         sprintf(tmpstr,"Didn't find a video stream");
     else if (fGrabb==-4)
         sprintf(tmpstr,"Codec not found");
     else if (fGrabb==-5)
         sprintf(tmpstr,"Error copying codec context");
     else if (fGrabb==-6)
         sprintf(tmpstr,"Could not open codec");
     i=(UINT) fGrabb;
     fGrabb=0;
     SetWindowText(hWndM,tmpstr);
     ExitThread(i);
     return 0;
    }
    // End Grabbing thread  

    When the program receives an RTSP H264 stream at resolution 704x576, the decoded video pictures are OK. When it receives an RTSP H264 HD stream at resolution 1280x720, the first video picture appears to be decoded correctly, but the following pictures are always decoded with some error.

    Please help me to fix this problem!

    Here is a brief summary of the problem:
    I have an IP camera, model HI3518E_50H10L_S39 (made in China).
    The camera can provide an H264 video stream at resolution 704x576 (RTSP URI "rtsp://192.168.1.18:554/user=admin_password=tlJwpbo6_channel=1_stream=1.sdp?real_stream") or at 1280x720 (RTSP URI "rtsp://192.168.1.18:554/user=admin_password=tlJwpbo6_channel=1_stream=0.sdp?real_stream").
    Using the FFplay utility I can access and display both streams with good picture quality.
    To test grabbing from this camera, I wrote the simple program shown above in VC-2005. In the grabbing thread, the program uses FFmpeg library version 4.0 to open the camera's RTSP stream, retrieve the stream information, find the first video stream... and prepare some variables.
    The core of this thread is a loop: grab a frame (av_read_frame), decode it if it is video (avcodec_decode_video2), convert it to RGB format (sws_scale), and display it in the program window (GDI function SetDIBitsToDevice).
    When the program runs with the camera's RTSP stream at resolution 704x576, I get a good video picture. Here is a sample:
    [screenshot: 704x576 sample]
    When the program runs with the camera's RTSP stream at resolution 1280x720, the first video picture is good:
    [screenshot: first frame at 1280x720, good]
    but the following ones are not:
    [screenshot: corrupted frame at 1280x720]
    It seems that my call to avcodec_decode_video2 cannot fully decode certain packets for some reason.
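
    A note on one possible cause, not established by the post: with RTSP over UDP, HD streams are far more sensitive to dropped packets, and a lost slice produces exactly this pattern of one clean frame followed by corrupted ones. A minimal sketch of forcing TCP interleaved transport to rule that out, using the rtsp_transport option documented for FFmpeg's RTSP demuxer:

     // Hedged sketch: open the RTSP stream over TCP instead of UDP to rule
     // out packet loss as the source of the corrupted 1280x720 frames.
     // Compile as C, or wrap the include in extern "C" for C++.
     #include <libavformat/avformat.h>

     static int open_rtsp_over_tcp(AVFormatContext **fmt, const char *url)
     {
         AVDictionary *opts = NULL;
         int ret;
         // "rtsp_transport" is a documented option of FFmpeg's RTSP demuxer.
         av_dict_set(&opts, "rtsp_transport", "tcp", 0);
         ret = avformat_open_input(fmt, url, NULL, &opts);
         av_dict_free(&opts);  // options consumed by the demuxer are removed
         return ret;           // 0 on success, negative AVERROR on failure
     }

    In the grabbing thread above, this would replace the plain avformat_open_input call; if the 1280x720 stream then decodes cleanly, the problem was transport loss rather than avcodec_decode_video2.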