Advanced search

Media (1)

Word: - Tags -/iphone

Other articles (107)

  • Permissions overridden by plugins

    27 April 2010, by

    Mediaspip core
    autoriser_auteur_modifier() so that visitors are able to edit their information on the authors page

  • Common problems

    10 March 2010, by

    PHP with safe_mode enabled
    One of the main sources of problems comes from the PHP configuration, in particular having safe_mode enabled.
    The solution would be either to disable safe_mode or to place the script in a directory that Apache can access for the site.

  • HTML5 audio and video support

    13 April 2011, by

    MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
    The MediaSPIP player used has been created specifically for MediaSPIP and can be easily adapted to fit in with a specific theme.
    For older browsers, the Flowplayer Flash fallback is used.
    MediaSPIP allows for media playback on major mobile platforms with the above (...)

On other sites (11126)

  • Capture JPEG frame from avi file using ffmpeg library. How to open captured files?

    30 November 2013, by ios198

    This is the code I found on the ffmpeg tutorial website:

    #include <libavcodec/avcodec.h>
    #include <libavformat/avformat.h>
    #include <libswscale/swscale.h>
    #include <stdio.h>
    void SaveFrame(AVFrame *pFrame, int width, int height, int iFrame) {
     FILE *pFile;
     char szFilename[32];
     int  y;

     // Open file
     sprintf(szFilename, "frame%d.ppm", iFrame); // e.g. szFilename = "frame4.ppm"
     pFile=fopen(szFilename, "wb");
     if (pFile == NULL) {
      return;
     }
     // Write header
     fprintf(pFile, "P6\n%d %d\n255\n", width, height);

     // Write pixel data
     for(y=0; y<height; y++)
       fwrite(pFrame->data[0]+y*pFrame->linesize[0], 1, width*3, pFile);

     // Close file
     fclose(pFile);
    }

    int main(int argc, char *argv[])
    {
       AVFormatContext *pFormatCtx = NULL;
       int             i, videoStream;
       AVCodecContext  *pCodecCtx = NULL;
       AVCodec         *pCodec = NULL;
       AVFrame         *pFrame = NULL;
       AVFrame         *pFrameRGB = NULL;
       AVPacket        packet;
       int             frameFinished;
       int             numBytes;
       uint8_t         *buffer = NULL;

       AVDictionary    *optionsDict = NULL;
       struct SwsContext      *sws_ctx = NULL;


       // Register all formats and codecs
       av_register_all();

       // Open video file
       if(avformat_open_input(&pFormatCtx, "/root/dhquan/AVI/turning_pages.avi", NULL, NULL)!=0)
           return -1; // couldn't open file

       // Retrieve stream information
       if(avformat_find_stream_info(pFormatCtx, NULL)<0)
           return -1; // couldn't find stream information
                      // This function populates pFormatCtx->streams with the proper information

       // dump information about file onto standard error
       av_dump_format(pFormatCtx, 0, "/root/dhquan/AVI/turning_pages.avi", 0);

       // Find the first video stream
       videoStream = -1;
       for(i=0; i<pFormatCtx->nb_streams; i++)
           if(pFormatCtx->streams[i]->codec->codec_type==AVMEDIA_TYPE_VIDEO){
               videoStream=i;
               break;
           }
       if(videoStream==-1)
           return -1; // didn't find a video stream

       // Get a pointer to the codec context for the video stream
           pCodecCtx= pFormatCtx->streams[videoStream]->codec;

       // Find the decoder for the video stream
           pCodec=avcodec_find_decoder(pCodecCtx->codec_id);
           if(pCodec==NULL){
               fprintf(stderr,"Unsupported codec!\n");
               return -1;
           }

       // Open Codec
            if(avcodec_open2(pCodecCtx, pCodec, &optionsDict)<0)
               return -1; // Could not open codec

       // Allocate video frame
           pFrame = avcodec_alloc_frame();

       // Allocate an AVFrame structure
            pFrameRGB=avcodec_alloc_frame();
           if(pFrameRGB==NULL)
               return -1;
       // Determine required buffer size and allocate buffer
            numBytes=avpicture_get_size(PIX_FMT_RGB24, pCodecCtx->width,
                     pCodecCtx->height);
            buffer=(uint8_t *)av_malloc(numBytes*sizeof(uint8_t));

            sws_ctx =
            sws_getContext
       (
           pCodecCtx->width,
           pCodecCtx->height,
           pCodecCtx->pix_fmt,
           pCodecCtx->width,
           pCodecCtx->height,
           PIX_FMT_RGB24,
           SWS_BILINEAR,
           NULL,
           NULL,
           NULL
       );

     // Assign appropriate parts of buffer to image planes in pFrameRGB
     // Note that pFrameRGB is an AVFrame, but AVFrame is a superset
     // of AVPicture
     avpicture_fill((AVPicture *)pFrameRGB, buffer, PIX_FMT_RGB24,
            pCodecCtx->width, pCodecCtx->height);

     // Read frames and save first five frames to disk
     i=0;
     while(av_read_frame(pFormatCtx, &packet)>=0) {
       // Is this a packet from the video stream?
       if(packet.stream_index==videoStream) {
         // Decode video frame
          avcodec_decode_video2(pCodecCtx, pFrame, &frameFinished,
                   &packet);

         // Did we get a video frame?
         if(frameFinished) {
       // Convert the image from its native format to RGB
           sws_scale
           (
               sws_ctx,
               (uint8_t const * const *)pFrame->data,
               pFrame->linesize,
               0,
               pCodecCtx->height,
               pFrameRGB->data,
               pFrameRGB->linesize
           );

       // Save the frame to disk
        if(++i<=5)
         SaveFrame(pFrameRGB, pCodecCtx->width, pCodecCtx->height,
               i);
         }
       }

       // Free the packet that was allocated by av_read_frame
       av_free_packet(&packet);
     }

     // Free the RGB image
     av_free(buffer);
     av_free(pFrameRGB);

     // Free the YUV frame
     av_free(pFrame);

     // Close the codec
     avcodec_close(pCodecCtx);

     // Close the video file
     avformat_close_input(&pFormatCtx);

     return 0;
     //getch();
    }

    In the line:
    sprintf(szFilename, "frame%d.ppm", iFrame);

    I changed it to frame%d.jpg. It creates a .jpg file in my folder, but I can't open it. How can I open this file? Please help me.
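    (Editor's note: the SaveFrame() function above writes PPM data: a plain-text "P6" header followed by raw RGB bytes. Changing the extension to .jpg only renames the file; the contents are still PPM, which is why viewers that expect JPEG data refuse to open it. The stdlib-only sketch below, with a made-up file name frame0.ppm and gradient pattern, writes the same kind of file; any PPM-aware viewer such as GIMP or IrfanView opens it fine.)

    ```c
    #include <stdio.h>
    #include <stdlib.h>

    /* Writes a width x height PPM file the same way SaveFrame() does:
     * an ASCII "P6" header, then raw 8-bit RGB triplets, row by row. */
    static void write_ppm(const char *name, int width, int height)
    {
        FILE *f = fopen(name, "wb");
        if (f == NULL) {
            perror(name);
            exit(1);
        }

        fprintf(f, "P6\n%d %d\n255\n", width, height);

        for (int y = 0; y < height; y++) {
            for (int x = 0; x < width; x++) {
                unsigned char rgb[3];
                rgb[0] = (unsigned char)(255 * x / width);  /* red ramp   */
                rgb[1] = (unsigned char)(255 * y / height); /* green ramp */
                rgb[2] = 0;
                fwrite(rgb, 1, 3, f);
            }
        }
        fclose(f);
    }

    int main(void)
    {
        write_ppm("frame0.ppm", 64, 48);

        /* The first two bytes identify the format: "P6" for binary PPM.
         * A real JPEG would start with the bytes 0xFF 0xD8 instead. */
        FILE *f = fopen("frame0.ppm", "rb");
        char magic[3] = {0};
        if (f != NULL && fread(magic, 1, 2, f) == 2)
            printf("magic bytes: %s\n", magic); /* prints "magic bytes: P6" */
        if (f != NULL)
            fclose(f);
        return 0;
    }
    ```

    To produce a real JPEG, the decoded frame would have to be re-encoded (for example through libavcodec's MJPEG encoder) rather than just renamed.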

  • ffmpeg screen blending mode with transparency

    17 November 2013, by gilad s

    I have a sequence of PNG files with transparency, plus a video, video1.mp4.

    1. I'd like to convert the PNG files into a video with an alpha channel, video2.mp4

    2. I'd like to merge video2.mp4 with video1.mp4 so that the transparent parts of video2 are not seen and the other parts are blended with video1 using screen blending mode

    When I do:

    ffmpeg -i "%03d.png" -vf scale=640:480 -vcodec libx264 video2.mp4

    a video is created with black where the transparent parts are supposed to be (I'm not sure whether it's really transparent; I tried other encoders and they all gave the same result).

    Afterwards, when I do:

    ffmpeg -i video1.mp4 -i video2.mp4 -filter_complex "blend=all_mode='screen'" output.mp4

    the videos are indeed merged, but the output has a pink overlay?!

  • Showing a video using C (not C++ or C#)

    1 October 2013, by user2833591

    I learned programming in C using the Tscope library (which is incompatible with C++ and C#), so I'm completely stuck with C.

    The Tscope library is used to program small psychological experiments; it provides functions that generate random numbers or draw images on the screen. Not sure if it might be a problem, but Tscope does generate its own 'window'.

    So I wanted my experiment to show videos (currently in .wmv format, but that can be changed, no problem), but I don't know how to do so (neither in code nor in concept).

    I have come across FFmpeg, but the longer I look at its code, the more I worry it's not meant for C (parts of the code look completely unfamiliar to me). Could someone please help me? If FFmpeg is indeed the answer, could someone give a quick run-down of the idea behind how it works (I've seen something about frames being put together)?