
Media (1)

Keyword: - Tags -/publishing

Other articles (98)

  • MediaSPIP 0.1 Beta version

    25 April 2011, by

    MediaSPIP 0.1 beta is the first version of MediaSPIP declared "usable".
    The zip file provided here contains only the sources of MediaSPIP in its standalone version.
    To get a working installation, you must manually install all the software dependencies on the server.
    If you want to use this archive for an installation in "farm mode", you will also need to carry out other manual (...)

  • Customising by adding your logo, banner or background image

    5 September 2013, by

    Some themes support three customisation elements: adding a logo; adding a banner; adding a background image.

  • Multilang: improving the interface for multilingual blocks

    18 February 2011, by

    Multilang is an additional plugin that is not enabled by default when MediaSPIP is initialised.
    Once it has been activated, a preconfiguration is set up automatically by MediaSPIP init, allowing the new feature to be operational straight away. There is therefore no need to go through a configuration step for this.

On other sites (15815)

  • Memory Leak in c++/cli application

    10 December 2013, by Ankush

    I am passing a bitmap from my C# app to a C++/CLI DLL that adds it to a video.
    The problem is that the program slowly leaks memory. _CrtDumpMemoryLeaks() shows me a leak of the bitmap and another 40-byte leak, but I am disposing the bitmap.
    Can anyone see the memory leak? Here is the code.

    Flow:

    1) Capture a screenshot with takescreeshot()

    2) Pass it to the C++/CLI function

    3) Dispose of the bitmap

    Lines from my C# app:

    Bitmap snap = takescreeshot();
    vencoder.AddBitmap(snap);
    snap.Dispose();
    vencoder.printleak();

    private static Bitmap takescreeshot()
       {
           System.Drawing.Bitmap bitmap = null;
           System.Drawing.Graphics graphics = null;

           bitmap = new Bitmap
           (
               System.Windows.Forms.Screen.PrimaryScreen.Bounds.Width,
               System.Windows.Forms.Screen.PrimaryScreen.Bounds.Height,
               System.Drawing.Imaging.PixelFormat.Format24bppRgb
           );

           graphics = System.Drawing.Graphics.FromImage(bitmap);

           graphics.CopyFromScreen(Screen.PrimaryScreen.Bounds.X, Screen.PrimaryScreen.Bounds.Y, 0, 0, Screen.PrimaryScreen.Bounds.Size);

           // Write timestamp
           Rectangle rect = new Rectangle(1166, 738, 200, 20);
           String datetime= System.String.Format("{0:dd:MM:yy  hh:mm:ss}",DateTime.Now);
           System.Drawing.Font sysfont = new System.Drawing.Font("Times New Roman", 14, FontStyle.Bold);
           graphics.DrawString(datetime, sysfont, Brushes.Red,rect);
           //

           Grayscale filter = new Grayscale(0.2125, 0.7154, 0.0721);
           Bitmap grayImage = filter.Apply(bitmap);

           //Dispose
           bitmap.Dispose();
           graphics.Dispose();

           return grayImage;
       }

    now in c++/cli dll

    bool VideoEncoder::AddBitmap(Bitmap^ bitmap)
       {
           BitmapData^ bitmapData = bitmap->LockBits( System::Drawing::Rectangle( 0, 0,bitmap->Width, bitmap->Height ),ImageLockMode::ReadOnly,PixelFormat::Format8bppIndexed);
           uint8_t* ptr = reinterpret_cast<uint8_t*>( static_cast<void*>( bitmapData->Scan0 ) );
           uint8_t* srcData[4] = { ptr, NULL, NULL, NULL };
           int srcLinesize[4] = { bitmapData->Stride, 0, 0, 0 };

           pCurrentPicture = CreateFFmpegPicture(pVideoStream->codec->pix_fmt, pVideoStream->codec->width, pVideoStream->codec->height);

           sws_scale(pImgConvertCtx, srcData, srcLinesize, 0, bitmap->Height, pCurrentPicture->data, pCurrentPicture->linesize );

           bitmap->UnlockBits( bitmapData );

           write_video_frame();

           bitmapData=nullptr;
           ptr=NULL;

           return true;
       }

    AVFrame * VideoEncoder::CreateFFmpegPicture(int pix_fmt, int nWidth, int nHeight)
       {

         AVFrame *picture     = NULL;
         uint8_t *picture_buf = NULL;
         int size;

         picture = avcodec_alloc_frame();
         if ( !picture)
         {
           printf("Cannot create frame\n");
           return NULL;
         }

         size = avpicture_get_size((AVPixelFormat)pix_fmt, nWidth, nHeight);

         picture_buf = (uint8_t *) av_malloc(size);

         if (!picture_buf)
         {
           av_free(picture);
           printf("Cannot allocate buffer\n");
           return NULL;
         }

         avpicture_fill((AVPicture *)picture, picture_buf,
           (AVPixelFormat)pix_fmt, nWidth, nHeight);

         return picture;
       }

       void VideoEncoder::write_video_frame()
       {
           AVCodecContext* codecContext = pVideoStream->codec;

           int out_size, ret = 0;

           if ( pFormatContext->oformat->flags & AVFMT_RAWPICTURE )
           {
               printf( "raw picture must be written" );
           }
           else
           {
               out_size = avcodec_encode_video( codecContext, pVideoEncodeBuffer,nSizeVideoEncodeBuffer, pCurrentPicture );

               if ( out_size > 0 )
               {
                   AVPacket packet;
                   av_init_packet( &packet );

                   if ( codecContext->coded_frame->pts != AV_NOPTS_VALUE )
                   {
                       packet.pts = av_rescale_q( packet.pts, codecContext->time_base, pVideoStream->time_base );
                   }

                   if ( codecContext->coded_frame->pkt_dts != AV_NOPTS_VALUE )
                   {
                       packet.dts = av_rescale_q( packet.dts, codecContext->time_base, pVideoStream->time_base );
                   }

                   if ( codecContext->coded_frame->key_frame )
                   {
                       packet.flags |= AV_PKT_FLAG_KEY;
                   }

                   packet.stream_index = pVideoStream->index;
                   packet.data = pVideoEncodeBuffer;
                   packet.size = out_size;

                   ret = av_interleaved_write_frame( pFormatContext, &packet );

                   av_free_packet(&packet);
                   av_freep(pCurrentPicture);
               }
               else
               {
                   // image was buffered
               }
           }

           if ( ret != 0 )
           {
               throw gcnew Exception( "Error while writing video frame." );
           }
       }


       void VideoEncoder::printleak()
       {
           printf("No of leaks: %d",_CrtDumpMemoryLeaks());
           printf("\n");
       }
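
    A hedged aside on the code above: with the legacy avcodec_alloc_frame()/av_malloc() pattern used in CreateFFmpegPicture, the picture buffer and the AVFrame are two separate allocations, and av_freep(pCurrentPicture) in write_video_frame only releases the first data plane, so the AVFrame allocated on each AddBitmap() call appears to go unreleased. A minimal cleanup sketch along those lines (FreeCurrentPicture is a hypothetical helper, not part of the code above):

    // Hedged sketch, assuming the legacy FFmpeg allocation API used above
    // (avcodec_alloc_frame / av_malloc / avpicture_fill).
    void VideoEncoder::FreeCurrentPicture()   // hypothetical helper
       {
           if (pCurrentPicture != NULL)
           {
               av_freep(&pCurrentPicture->data[0]);  // buffer from av_malloc()
               av_free(pCurrentPicture);             // frame from avcodec_alloc_frame()
               pCurrentPicture = NULL;
           }
       }

    Calling something like this after write_video_frame(), instead of av_freep(pCurrentPicture), would release both allocations.
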
  • How to extract jpeg/png images from RTSP stream without encoding the whole stream?

    25 September 2020, by Rune Aspvik

    I am building a video system in Node.js based on IP cameras. They all output RTSP streams and MJPEG streams. However, the MJPEG is unreliable for web-browser snapshot display, so I want to create my own snapshots. I'm using FFMPEG to record the RTSP stream, so that's been my tool. So far I've tried two options:

    1. FFMPEG with two outputs:

    ffmpeg -i rtsp://.... -c copy output.ts -r 1/5 -s 640x360 -f image2 -update 1 -y snap.jpg

    This works really well, but it uses a lot of resources: even though I only need a low-res image every 5 seconds, it encodes the whole stream.

    2. Get the snapshot on demand, using FFMPEG with output.ts as the input:

    ffmpeg -sseof -1 -i output.ts -frames:v 1 -f image2 pipe:1

    This also works really well and doesn't require a lot of resources. However, I believe it's not the most elegant solution.

    How can I use option 1, but without the resource-demanding re-encoding?
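
    As a hedged aside for anyone approaching this from the libav* API instead of the CLI: the expensive part of option 1 is that the image2 output forces every frame to be decoded and re-encoded. One way to reduce that is to decode only keyframes when a snapshot is actually needed. A rough sketch with the FFmpeg C API (error handling, scaling and JPEG writing omitted; the URL and the function name are placeholders, not part of the question):

    // Hedged sketch: open the stream, ask the decoder to skip non-keyframes,
    // and stop after the first decoded picture.
    extern "C" {
    #include <libavformat/avformat.h>
    #include <libavcodec/avcodec.h>
    }

    int grab_one_keyframe(const char *url, AVFrame *out)
    {
        AVFormatContext *fmt = NULL;
        if (avformat_open_input(&fmt, url, NULL, NULL) < 0) return -1;
        if (avformat_find_stream_info(fmt, NULL) < 0) return -1;

        int vindex = av_find_best_stream(fmt, AVMEDIA_TYPE_VIDEO, -1, -1, NULL, 0);
        AVCodecParameters *par = fmt->streams[vindex]->codecpar;
        const AVCodec *dec = avcodec_find_decoder(par->codec_id);
        AVCodecContext *ctx = avcodec_alloc_context3(dec);
        avcodec_parameters_to_context(ctx, par);
        ctx->skip_frame = AVDISCARD_NONKEY;              // decode keyframes only
        avcodec_open2(ctx, dec, NULL);

        AVPacket *pkt = av_packet_alloc();
        int got = -1;
        while (got < 0 && av_read_frame(fmt, pkt) >= 0) {
            if (pkt->stream_index == vindex) {
                avcodec_send_packet(ctx, pkt);
                if (avcodec_receive_frame(ctx, out) >= 0)
                    got = 0;                             // one picture is enough
            }
            av_packet_unref(pkt);
        }

        av_packet_free(&pkt);
        avcodec_free_context(&ctx);
        avformat_close_input(&fmt);
        return got;
    }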

    


  • How do I use the CLI interface of FFMpeg from a static build?

    13 December 2019, by deek0146

    I have added this (https://github.com/kewlbear/FFmpeg-iOS-build-script) version of ffmpeg to my project. I can’t see the entry point to the library in the headers included.

    How do I get access to the same text-command-based system that the standalone application has, or an equivalent?

    I would also be happy if someone could point me towards documentation that allows you to use FFmpeg without the command line interface.

    This is what I am trying to execute (I have it working on Windows and Android using the CLI version of ffmpeg):

    ffmpeg -framerate 30 -i snap%03d.jpg -itsoffset 00:00:03.23333 -itsoffset 00:00:05 -i soundEffect.WAV -c:v libx264 -vf fps=30 -pix_fmt yuv420p result.mp4
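
    A hedged note on how this is often approached: the kewlbear script builds the FFmpeg libraries, not the ffmpeg command-line tool, so the CLI entry point is not in the installed headers. One common workaround (an assumption about your setup, not something the build script itself provides) is to also compile the fftools sources (ffmpeg.c and its companions) into the app, rename their main() to something like ffmpeg_main(), and call it with a hand-built argv. Roughly:

    // Hedged sketch: ffmpeg_main is a hypothetical renamed entry point for
    // ffmpeg.c's main(), compiled into the app alongside the static libraries.
    extern "C" int ffmpeg_main(int argc, char **argv);

    int runEncodeCommand()
    {
        // Mirrors the command from the question, one argument per array entry.
        const char *args[] = {
            "ffmpeg",
            "-framerate", "30",
            "-i", "snap%03d.jpg",
            "-itsoffset", "00:00:03.23333",
            "-itsoffset", "00:00:05",
            "-i", "soundEffect.WAV",
            "-c:v", "libx264",
            "-vf", "fps=30",
            "-pix_fmt", "yuv420p",
            "result.mp4"
        };
        int argc = (int)(sizeof(args) / sizeof(args[0]));
        return ffmpeg_main(argc, const_cast<char **>(args));
    }

    The alternative route is to drive libavformat/libavcodec directly instead of the CLI, which is the documented library approach but requires rewriting the command as API calls.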