
Other articles (100)

  • MediaSPIP 0.1 Beta version

    25 April 2011, by

    MediaSPIP 0.1 beta is the first version of MediaSPIP proclaimed as "usable".
    The zip file provided here only contains the sources of MediaSPIP in its standalone version.
    To get a working installation, you must manually install all software dependencies on the server.
    If you want to use this archive for an installation in "farm mode", you will also need to proceed to other manual (...)

  • Multilang: improving the interface for multilingual blocks

    18 February 2011, by

    Multilang is an additional plugin that is not enabled by default when MediaSPIP is initialized.
    Once it is enabled, MediaSPIP init automatically sets up a preconfiguration so that the new feature is immediately operational. It is therefore not necessary to go through a separate configuration step.

  • Websites made with MediaSPIP

    2 May 2011, by

    This page lists some websites based on MediaSPIP.

On other sites (9842)

  • x86inc: Add debug symbols indicating sizes of compiled functions

    12 October 2015, by Geza Lore
    x86inc: Add debug symbols indicating sizes of compiled functions
    

    Some debuggers/profilers use this metadata to determine which function a
    given instruction is in; without it they can get confused by local labels
    (if you haven’t stripped those). On the other hand, some tools are still
    confused even with this metadata, e.g. this fixes `gdb`, but not `perf`.

    Currently only implemented for ELF.

    • [DH] common/x86/x86inc.asm
    • [DH] tools/checkasm-a.asm
  • avcodec_encode_video2() error -1: Could not encode video packet - javacv

    7 March 2016, by 404error

    I want to create a video (mp4) from a set of images and add a background sound to it. The background sound can either be recorded or browsed as a file using a content chooser in Android.
    The following code creates the video when a new audio track is recorded in 3gp format. However, when I browse an audio file (an mp3, for example), it shows this error and the recorded video cannot be played.

    The error shown is:

    org.bytedeco.javacv.FrameRecorder$Exception: avcodec_encode_video2() error -1: Could not encode video packet. :at video_settings$Generate.doInBackground(video_settings.java:298)

    The code at video_settings.java:298 is:

                       recorder.record(frame2);

    The relevant code is:

    protected Void doInBackground(Void... arg0) {


           try {
               FrameGrabber grabber1 = new FFmpegFrameGrabber(paths.get(0));
               FrameGrabber grabber2 = new FFmpegFrameGrabber(backgroundSoundPath);
               Log.d("hgbj", backgroundSoundPath);
               grabber1.start();
               grabber2.start();

               FFmpegFrameRecorder recorder = new FFmpegFrameRecorder(video, 320,
                       240, grabber2.getAudioChannels());// 320, 240
               recorder.setVideoCodec(avcodec.AV_CODEC_ID_MPEG4);//
               recorder.setPixelFormat(avutil.AV_PIX_FMT_YUV420P);
               recorder.setFormat("mp4");
               recorder.setFrameRate(2);
               recorder.setVideoBitrate(10 * 1024 * 1024);

               recorder.start();
               Frame frame1, frame2;
               long timeLength = grabber2.getLengthInTime();
               System.out.println("total time = " + timeLength);


               for (int i = 0; i < paths.size(); i++) {
                   // record this frame and then record (numFrames*percentageTime[i]/100) number of frames of the audio.
                   frame1 = grabber1.grabFrame();
                   long startTime = System.currentTimeMillis();
                   recorder.setTimestamp(startTime * 1000);
                   recorder.record(frame1);
                   boolean first = true;
                   // while current time - start time < percentage time * total time / 100: record frame2
                   long temp = timeLength * percentageTime[i] / 100000 + startTime;
                   while (System.currentTimeMillis() <= temp) {
                       frame2 = grabber2.grabFrame();
                       if (frame2 == null) break;
                       if (first) {
                           recorder.setTimestamp(startTime * 1000);
                           first = false;
                       }
                       recorder.record(frame2);
                   }
                   if (i < paths.size() - 1) {
                       grabber1.stop();
                       grabber1 = new FFmpegFrameGrabber(paths.get(i + 1));
                       grabber1.start();
                   }
                }

                grabber1.stop();
                grabber2.stop();
                recorder.stop();
            } catch (Exception e) { // closing code, not shown in the original post
                e.printStackTrace();
            }
            return null;
        }

    My question is: if it works for 3gp recorded files, why doesn’t it work for browsed mp3 files, and what should I do to make it work?
    I have tried changing the codecs, the frame height and width, and the video bitrate, but I don’t know of any way to determine which bitrate etc. is compatible with a given codec/format.
    I am converting the content URI obtained from the file browser into the real path, so that’s not the issue.
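    A note on the timestamp logic in the loop above: setting recorder timestamps from `System.currentTimeMillis()` feeds wall-clock values into the stream, which can produce very large or unevenly spaced timestamps. Whether that is the cause of this particular error is only a guess; it is also worth checking whether the browsed mp3 audio needs re-encoding to fit the mp4 container (javacv’s `FFmpegFrameRecorder` exposes `setAudioCodec`, e.g. with `avcodec.AV_CODEC_ID_AAC`). The monotonic, frame-indexed timestamp idea can be sketched in C (`frame_timestamp_us` is a hypothetical helper, not part of FFmpeg or javacv):

    ```c
    #include <stdio.h>
    #include <stdint.h>

    /* Hypothetical helper: timestamp in microseconds of the n-th frame of a
     * stream at frame_rate fps. Deriving timestamps from the frame index
     * keeps them monotonic and evenly spaced, unlike wall-clock times. */
    static int64_t frame_timestamp_us(int64_t frame_index, int frame_rate)
    {
        return frame_index * 1000000LL / frame_rate;
    }

    int main(void)
    {
        /* At the recorder's frame rate of 2 fps, consecutive frames are
         * 500000 us apart. */
        for (int i = 0; i < 4; i++)
            printf("%lld\n", (long long)frame_timestamp_us(i, 2));
        return 0;
    }
    ```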

  • how to change the frame rate and capturing image size from IP camera

    6 December 2015, by rockycai

    I have an IP camera and I want to capture images from it through RTSP. I use the code below and it works well, but the camera’s frame rate is 25 fps, so I get a lot of images per second, which I don’t want. Each image is also 6.2 MB, and I don’t need such high-quality images. What can I do to lower the frame rate and reduce the image size?

    #ifndef INT64_C
    #define INT64_C(c) (c ## LL)
    #define UINT64_C(c) (c ## ULL)
    #endif

    #ifdef __cplusplus
    extern "C" {
    #endif
       /*Include ffmpeg header file*/
     #include <libavformat/avformat.h>
     #include <libavcodec/avcodec.h>
     #include <libswscale/swscale.h>
    #include
    #ifdef __cplusplus
    }
    #endif

    #include


    static void SaveFrame(AVFrame *pFrame, int width, int height, int iFrame);

    int main (int argc, const char * argv[])
    {
       AVFormatContext *pFormatCtx;
       int             i, videoStream;
       AVCodecContext  *pCodecCtx;
       AVCodec         *pCodec;
       AVFrame         *pFrame;
       AVFrame         *pFrameRGB;
       AVPacket        packet;
       int             frameFinished;
       int             numBytes;
       uint8_t         *buffer;

       // Register all formats and codecs
       av_register_all();
    //  const char *filename="C:\libraries\gfjyp.avi";
       // Open video file
       //AVDictionary *options = NULL;
       //av_dict_set(&options,"rtsp_transport","tcp",0);
       if(av_open_input_file(&pFormatCtx, argv[1], NULL, 0, NULL)!=0)
           return -1; // Couldn't open file

       // Retrieve stream information
       if(av_find_stream_info(pFormatCtx)<0)
           return -1; // Couldn't find stream information

       // Dump information about file onto standard error
       dump_format(pFormatCtx, 0, argv[1], false);

       // Find the first video stream
       videoStream=-1;
       for(i=0; i<pFormatCtx->nb_streams; i++)
           if(pFormatCtx->streams[i]->codec->codec_type==AVMEDIA_TYPE_VIDEO)
           {
               videoStream=i;
               break;
           }
           if(videoStream==-1)
               return -1; // Didn't find a video stream

           // Get a pointer to the codec context for the video stream
           pCodecCtx=pFormatCtx->streams[videoStream]->codec;

           // Find the decoder for the video stream
           pCodec=avcodec_find_decoder(pCodecCtx->codec_id);
           if(pCodec==NULL)
               return -1; // Codec not found

           // Open codec
           if(avcodec_open(pCodecCtx, pCodec)<0)
               return -1; // Could not open codec

           // Hack to correct wrong frame rates that seem to be generated by some codecs

           if(pCodecCtx->time_base.num>1000 && pCodecCtx->time_base.den==1)
               pCodecCtx->time_base.den=1000;

           //pCodecCtx->time_base.den=1;
           //pCodecCtx->time_base.num=1;
           // Allocate video frame
           pFrame=avcodec_alloc_frame();

           // Allocate an AVFrame structure
           pFrameRGB=avcodec_alloc_frame();
           if(pFrameRGB==NULL)
               return -1;

           // Determine required buffer size and allocate buffer
           numBytes=avpicture_get_size(PIX_FMT_RGB24, pCodecCtx->width,
               pCodecCtx->height);

           //buffer=malloc(numBytes);
           buffer=(uint8_t *)av_malloc(numBytes*sizeof(uint8_t));

           // Assign appropriate parts of buffer to image planes in pFrameRGB
           avpicture_fill((AVPicture *)pFrameRGB, buffer, PIX_FMT_RGB24,
               pCodecCtx->width, pCodecCtx->height);

           // Read frames and save first five frames to disk
           i=0;
           while(av_read_frame(pFormatCtx, &packet)>=0)
           {
               // Is this a packet from the video stream?
               if(packet.stream_index==videoStream)
               {
                   // Decode video frame
                   avcodec_decode_video2(pCodecCtx, pFrame, &frameFinished,
                       &packet);

                   // Did we get a video frame?
                   if(frameFinished)
                   {

                       static struct SwsContext *img_convert_ctx;

    #if 0
                       // Older removed code
                       // Convert the image from its native format to RGB swscale
                       img_convert((AVPicture *)pFrameRGB, PIX_FMT_RGB24,
                           (AVPicture*)pFrame, pCodecCtx->pix_fmt, pCodecCtx->width,
                           pCodecCtx->height);

                       // function template, for reference
                       int sws_scale(struct SwsContext *context, uint8_t* src[], int srcStride[], int srcSliceY,
                           int srcSliceH, uint8_t* dst[], int dstStride[]);
    #endif
                       // Convert the image into YUV format that SDL uses
                       if(img_convert_ctx == NULL) {
                           int w = pCodecCtx->width;
                           int h = pCodecCtx->height;

                           img_convert_ctx = sws_getContext(w, h,
                               pCodecCtx->pix_fmt,
                               w, h, PIX_FMT_RGB24, SWS_BICUBIC,
                               NULL, NULL, NULL);
                           if(img_convert_ctx == NULL) {
                               fprintf(stderr, "Cannot initialize the conversion context!\n");
                               exit(1);
                           }
                       }
                       int ret = sws_scale(img_convert_ctx, pFrame->data, pFrame->linesize, 0,
                           pCodecCtx->height, pFrameRGB->data, pFrameRGB->linesize);
    #if 0
                       // this use to be true, as of 1/2009, but apparently it is no longer true in 3/2009
                       if(ret) {
                           fprintf(stderr, "SWS_Scale failed [%d]!\n", ret);
                           exit(-1);
                       }
    #endif

                       // Save the frame to disk
                       if(i++<=1000)
                           SaveFrame(pFrameRGB, pCodecCtx->width, pCodecCtx->height, i);
                   }
               }

               // Free the packet that was allocated by av_read_frame
               av_free_packet(&packet);
               //sleep(1);
           }

           // Free the RGB image
           //free(buffer);
           av_free(buffer);
           av_free(pFrameRGB);

           // Free the YUV frame
           av_free(pFrame);

           // Close the codec
           avcodec_close(pCodecCtx);

           // Close the video file
           av_close_input_file(pFormatCtx);

           return 0;
    }

    static void SaveFrame(AVFrame *pFrame, int width, int height, int iFrame)
    {
       FILE *pFile;
       char szFilename[32];
       int  y;

       // Open file
       sprintf(szFilename, "frame%d.ppm", iFrame);
       pFile=fopen(szFilename, "wb");
       if(pFile==NULL)
           return;

       // Write header
       fprintf(pFile, "P6\n%d %d\n255\n", width, height);

       // Write pixel data
       for(y=0; y<height; y++)
           fwrite(pFrame->data[0]+y*pFrame->linesize[0], 1, width*3, pFile);

       // Close file
       fclose(pFile);
    }
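
    On the two asks above, a lower frame rate and smaller images: one simple approach, a sketch under assumptions rather than a confirmed fix, is to decimate the decoded frames (save only every N-th one) and to pass a smaller destination size to `sws_getContext` (its fourth and fifth parameters are the destination width and height, so e.g. `w/2, h/2` halves the resolution). The decimation arithmetic looks like this (`keep_frame` is a hypothetical helper, not part of FFmpeg):

    ```c
    #include <stdio.h>

    /* Hypothetical helper: decide whether to keep a decoded frame when
     * decimating an input stream from src_fps down to dst_fps. It keeps
     * roughly every (src_fps / dst_fps)-th frame. */
    static int keep_frame(int frame_index, int src_fps, int dst_fps)
    {
        int step = src_fps / dst_fps;   /* e.g. 25 / 5 = keep every 5th */
        return frame_index % step == 0;
    }

    int main(void)
    {
        int kept = 0;
        for (int i = 0; i < 25; i++)    /* one second of 25 fps input */
            if (keep_frame(i, 25, 5))
                kept++;
        printf("%d\n", kept);           /* frames 0, 5, 10, 15, 20 -> 5 kept */
        return 0;
    }
    ```

    In the loop above, the call to `SaveFrame` would then be guarded by this check, so only the kept frames are converted and written to disk.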