Other articles (65)

  • Sites built with MediaSPIP

    2 May 2011

    This page presents some of the sites running MediaSPIP.
    You can of course add your own using the form at the bottom of the page.

  • HTML5 audio and video support

    10 April 2011

    MediaSPIP uses the HTML5 video and audio tags to play multimedia documents, taking advantage of the latest W3C innovations supported by modern browsers.
    For older browsers, the Flowplayer flash player is used as a fallback.
    The HTML5 player was created specifically for MediaSPIP: its appearance is fully customizable to match a chosen theme.
    These technologies make it possible to deliver video and sound both on conventional computers (...)

  • HTML5 audio and video support

    13 April 2011

    MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
    The player has been created specifically for MediaSPIP and can easily be adapted to fit a specific theme.
    For older browsers the Flowplayer flash fallback is used.
    MediaSPIP allows for media playback on major mobile platforms with the above (...)

On other sites (9052)

  • Python and FFMPEG video streaming not displaying via HTML5

    18 March 2016, by arussell

    I'm trying to write a Python script that serves a video over HTTP and displays it via the HTML5 video tag. I'm using FFmpeg to send the video over HTTP and receiving it via sockets in Python. FFmpeg seems to be sending the video and my Python script is receiving it, but for some reason I'm not able to display it in my web browser, nor do I get any visible error in my script.

    Any help will be highly appreciated.

    This is the FFmpeg command I'm using to send the video over HTTP:

    ffmpeg -re -i video_file.webm -c:v libx264 -c:a copy -f h264 http://127.0.0.1:8081

    Here is my Python code

    import socket   #for sockets handling
    import time     #for time functions
    import sys

    hostIP = '127.0.0.1'
    SourcePort = 8081 #FFMPEG
    PlayerPort = 8082 #Internet Browser

    def gen_headers():
        # determine response code
        h = 'HTTP/1.1 200 OK\n'
        # write further headers
        current_date = time.strftime("%a, %d %b %Y %H:%M:%S", time.localtime())
        h += 'Date: ' + current_date +'\n'
        h += 'Content-Type: video/mp4\n\n'
        return h

    def start_server():
       socketFFMPEG = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
       # this is for easy starting/killing the app
       socketFFMPEG.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
       print('Socket created')

       try:
           socketFFMPEG.bind((hostIP, SourcePort))
           print('Socket bind complete')
       except socket.error as msg:
           print('Bind failed. Error : ' + str(sys.exc_info()))
           sys.exit()

       #Start listening on socketFFMPEG
       socketFFMPEG.listen(10)
       print('Socket now listening. Waiting for video source from FFMPEG on port', SourcePort)

       conn, addr = socketFFMPEG.accept()
       ip, port = str(addr[0]), str(addr[1])
       print('Accepting connection from ' + ip + ':' + port)

       socketPlayer = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
       socketPlayer.bind((hostIP, PlayerPort))
       socketPlayer.listen(1) #listen just 1 petition
       print('Waiting for Internet Browser')
       conn2, addr2 = socketPlayer.accept()
       conn2.sendall(gen_headers().encode())

       while True:
           try:
               # receive data from the FFMPEG client
               input_from_FFMPEG = conn.recv(1024)
               if not input_from_FFMPEG:
                   break  # FFMPEG closed the connection
               # forward the data to the internet browser
               conn2.sendall(input_from_FFMPEG)
           except socket.error:
               print('send error: ' + str(sys.exc_info()))
               conn2.close()
               sys.exit()

       socketFFMPEG.close()

    start_server()

    I'm getting error 10053 ("An established connection was aborted by the software in your host machine") when relaying the following byte data:

    \x00\x00\x00\x01A\x9b\x1dI\xe1\x0f&S\x02\xbf\xb1\x82j3{qz\x85\xca\\\xb2\xb7\xc5\xdfi\x92y\x0c{\xb0\xde\xd1\x96j\xccE\xa3G\x87\x84Z\x0191\xba\x8a3\x8e\xe2lfX\x82\xd4*N\x8a\x9f\xa9\xc9\xfb\x13\xfc_]D\x0f\x9e\x1c"\x0fN\xceu\t\x17n\xbe\x95\xd1\x10Wj\xf5t\x90\xa8\x1am\xf7!d\x82py\n\x10\xf5\x9b{\xd9\xf8\x8e^\xc7\xb3o+\x0eQX\xb3\x17B?\xb8\x1c\xecP\xa0\xf10\xc7\xc8\x8d\xf1P\xd3\xdf\xd0\xd5\x13ah+bM\x9c\xbe\xca\xb4\x9a?\xac\xb9\x0fao\xf3\xed\x9c\xe4^\x10\x079t\xf4\x0f\xce\xbe*\xd4w\x1f\x1a\x02\xbd\xed\xe9\x16\x8a\x98\xe0\x1d\xc4\xde5\xa8\xf0\x88\xb4\x07=\xe2w\xc3Q\xc1\x99K7\xff\x01`(\xb3sN\x88\x18\xfd7\xd4\x07\xab\x95\xf95\x05\xcd\xd6,!=\xfb\xc4\xc8\xbf\xad\x96\x83\xc0\x9b%\xdds\x92s\xc0lN\xdd\x14\xba\xbd\x04L\xb1\x08\xec[~tB~`\r\xbe\xa9\xbe\xa4r`\xa3\x98\x19z\xa9\xe9\xd3jK>(\xd5\x8c\x9eY~\xa8\x9f\x86\x90\x148R\xfd<\xb2\xdaUy\xa8\xb5\xba\x1d\xd1\xf6\xa6N\xb0#\x08Xo\xa6\x1c \xbaB\x8cbp\x1c\r\xa1\xa4"\x06\xd8\xe5\x85[\x89\x8a\xcba\xa3\xcc\xe0C\x946\xad6\x08\x90\r&\xcb\x13\xa6\xfbG\xc5\x85I<\x96\xcb\x89}\xcb\xda\xa5\x02\xbcB\xb9\x93\x938\x89\x1c\x92\xb3\x83\xfe\xa7\xf6\xa8\x1f\xdf\xa8\xef\xd55\xb6\xbf>#\xba\xd7\x8e\xd2z\xc2\xca\xf9\xdd2\xdd\x96\xb6\xf8\xc3\xc1\x0f/D\x05\xd3?\x18\xb1\x85T\x08\xcd\xfc\xc7p\xba\x0c\x93\xcdY\xf3 
!4\x13\xaen\x82\x10[\x07I>\xe4\xc3\xb7\xca\xee\x93\r\xc3\xe1>\xe9\xd6\x9a\xbeLQ\x93\x86n\xadb\x13\xcas\xc0\xdeh\x1a\x9f\x00Dp\x94fv\xb7\xd9\xba\x93\x0c\xd1H2\x0e\xa2]`\xf2Q{+o\x80\xf0\x8a\x11"\x94u\x9b1\xc3\xdaV\xd9\x9e\xc6\xf7?\x18\xd9\xfbs\xf3\x07\xc6\x91\x83\x19\'\x13\xe4o\xa9S\x1cP\xa4w\xbc\xe36\xb9k\xc3\xaa":~`\xe7\x18\xe8\x9bu\n\x16\xf3\x89\xe2k\x07\x08\xf6\x8c\x98\x98\xbd\x8f*\x11\xe7\xa1\nj1\'\xe2=\x7f\xdf\x16\xc8\xf6\xec\xe1\xe6G\xd1\x1b\xeb\xc0\xd4\xf7\xc3c\xc7v\xc3\xf8\xa5\xac\x89\xdd4\x90i\t\x98\xfe\xfcx\xad{[\xf4\x92\x16^O\xf2\xc2]\xec\xa7\xe9Gu\\dF\xa6\xa7\xd3k?\xba\xedY\xba\x85\'\x1a\xa6.(\xcfB\x82tN\xdc\xad\xe6\xfcM\x01:\x0b\x14\x070\xf4\x99l2C\x92\x9c\x13h\x82\xf6w\xc4$5\xe1~\x11T~\xc9\x8f\xaeUAI%\xa6\x12(\x9c\x17\x9d*\xcc9\xee\xb7\xb8w \x92\x9a\x1cD\xfd\xd8wi7rt\xd8\x93\xbd7\x83\xf1\xe3\xbd\x92\x81\xe0\xfel\xfa\\\x9c\xebM\xf3m`p\xb9\xe2\x13Kd\xe08\xcc\x15\x96[G\xda`\x8cD\xa7\xf1\xd3\xc8T\xcf\xb1)\xa5E$\x91\x94{\x88&\xac\xc1\x92\xd5E\xa98\xd2\x89\xd1?\xd7\x9c\xdc\xbb!\x18\xc1\xa1m\xba*L\xab\xa0\xff\xd8\xee\xbbH\xe3\xa2\xe4\x9d=9\x05\xb4\x9bm\xe7\xc6J\xd9\xc3\xb1\xe9b*jB`4t\x9fv\xe8\xc4F\x9c`\xd0\x03\xd8\x12}\x8b\xb3$A\x9c\xdc;\x81@)rH\xf1\x18\xe1\xba\x0c4\x06\xe9xa\x94\xdd\xde\xa8&\xef)\xd7F\x94F\xa7j\xd3\x13O\xe03\xc9\xc9\xf2\x15\x1a\x9bsy\x16\x83H\xb4\x9e\xee\xc9M\xe7\xf4x \xa5\x9c^\xb9m\xeee\x03=_\x11\xda^l\xfe\xba\xa4\x98mjW\xf0\xa9\xc4\x11g\xd9C\xf7K.\x8c\xab3~n%\x7f\xc0p\xc8\xb1\xd6\x8d\xe5E\xb1\xc1\xe3(~\x9e\x9c\x91.\xdc\x08\xfb\xa0\xbe\x98y$U\xdeH\x08\xb2z,yX\xfaqx\xfe\xb0\xa9\xb4Q\xf2P\x95d\xc8\x88\r\xc3\x1dr\x88\xba\xc8\x990`(\x08m\x19\xebi\xf8\x11\xc6g\xd6\xc4\x12C\xad~\xe1$2\x01Hmg\xdb\x920\x18\xcc\xc0K\x04~\x1e\xeb\xd9>\x81F*I\x99\xe4\x00\xa3\xc4,U\x89\xdf\x843\xa3\xfb\xea\xc9d\x05\xeb]
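    One detail worth checking in the script above: HTTP header lines must end with CRLF (`\r\n`), and the header block must be terminated by a blank line, whereas `gen_headers()` uses bare `\n` throughout. A minimal sketch of a spec-conformant header builder (the `video/mp4` content type is kept from the question, although what `-f h264` actually emits is a raw H.264 elementary stream, not MP4):

```python
import time

def gen_http_headers(content_type='video/mp4'):
    """Build a minimal HTTP/1.1 response header block with CRLF endings."""
    # HTTP dates are expressed in GMT
    date = time.strftime('%a, %d %b %Y %H:%M:%S GMT', time.gmtime())
    lines = [
        'HTTP/1.1 200 OK',
        'Date: ' + date,
        'Content-Type: ' + content_type,
        'Connection: close',
    ]
    # each header line ends with CRLF; an empty line ends the header block
    return '\r\n'.join(lines) + '\r\n\r\n'
```

    Browsers are usually lenient about bare `\n`, so this alone may not explain error 10053; a likelier cause is the browser aborting the connection because the `video` tag cannot play a raw H.264 elementary stream served as `video/mp4`.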

  • How can we open /dev/video0 or any v4l2 node with ffmpeg to capture raw frames and encode them in JPEG format?

    19 March 2016, by satinder

    I am new to the video domain. I am working with FFmpeg now; I can use the ffmpeg command line, but using FFmpeg from my own C code is a big challenge for me. I have read some tutorials, such as dranger.com, but I am not able to capture from v4l2 or my laptop's /dev/video0 node. I want to capture the raw video stream, overlay it with some text, and then compress it to JPEG format. I have the following code; it works for .mp4 or other encoded files, but it does not work with the /dev/video0 node. Please, can anyone help me? Thanks in advance!

    Please see the following code snippet, which is tutorial01.c from dranger.com:

    #include <libavcodec/avcodec.h>
    #include <libavformat/avformat.h>
    #include <libswscale/swscale.h>

    #include <stdio.h>

    void SaveFrame(AVFrame *pFrame, int width, int height, int iFrame) {
     FILE *pFile;
     char szFilename[32];
     int  y;

     // Open file
     sprintf(szFilename, "frame%d.ppm", iFrame);
     pFile=fopen(szFilename, "wb");
     if(pFile==NULL)
       return;

     // Write header
     fprintf(pFile, "P6\n%d %d\n255\n", width, height);

     // Write pixel data
      for(y=0; y<height; y++)
        fwrite(pFrame->data[0]+y*pFrame->linesize[0], 1, width*3, pFile);

     // Close file
     fclose(pFile);
    }

    int main(int argc, char *argv[]) {
     AVFormatContext *pFormatCtx = NULL;
     int             i, videoStream;
     AVCodecContext  *pCodecCtx = NULL;
     AVCodec         *pCodec = NULL;
     AVFrame         *pFrame = NULL;
     AVFrame         *pFrameRGB = NULL;
     AVPacket        packet;
     int             frameFinished;
     int             numBytes;
     uint8_t         *buffer = NULL;

     AVDictionary    *optionsDict = NULL;
     struct SwsContext      *sws_ctx = NULL;

     if(argc < 2) {
       printf("Please provide a movie file\n");
       return -1;
     }
     // Register all formats and codecs
     av_register_all();

     // Open video file
     if(avformat_open_input(&pFormatCtx, argv[1], NULL, NULL)!=0)
       return -1; // Couldn't open file

     // Retrieve stream information
     if(avformat_find_stream_info(pFormatCtx, NULL)<0)
       return -1; // Couldn't find stream information

     // Dump information about file onto standard error
     av_dump_format(pFormatCtx, 0, argv[1], 0);

     // Find the first video stream
     videoStream=-1;
     for(i=0; i<pFormatCtx->nb_streams; i++)
       if(pFormatCtx->streams[i]->codec->codec_type==AVMEDIA_TYPE_VIDEO) {
         videoStream=i;
         break;
       }
     if(videoStream==-1)
       return -1; // Didn't find a video stream

     // Get a pointer to the codec context for the video stream
     pCodecCtx=pFormatCtx->streams[videoStream]->codec;

     // Find the decoder for the video stream
     pCodec=avcodec_find_decoder(pCodecCtx->codec_id);
     if(pCodec==NULL) {
       fprintf(stderr, "Unsupported codec!\n");
       return -1; // Codec not found
     }
     // Open codec
     if(avcodec_open2(pCodecCtx, pCodec, &optionsDict)<0)
       return -1; // Could not open codec

     // Allocate video frame
     pFrame=av_frame_alloc();

     // Allocate an AVFrame structure
     pFrameRGB=av_frame_alloc();
     if(pFrameRGB==NULL)
       return -1;

     // Determine required buffer size and allocate buffer
     numBytes=avpicture_get_size(AV_PIX_FMT_RGB24, pCodecCtx->width,
                     pCodecCtx->height);
     buffer=(uint8_t *)av_malloc(numBytes*sizeof(uint8_t));

     sws_ctx =
       sws_getContext
       (
           pCodecCtx->width,
           pCodecCtx->height,
           pCodecCtx->pix_fmt,
           pCodecCtx->width,
           pCodecCtx->height,
           AV_PIX_FMT_RGB24,
           SWS_BILINEAR,
           NULL,
           NULL,
           NULL
       );

     // Assign appropriate parts of buffer to image planes in pFrameRGB
     // Note that pFrameRGB is an AVFrame, but AVFrame is a superset
     // of AVPicture
     avpicture_fill((AVPicture *)pFrameRGB, buffer, AV_PIX_FMT_RGB24,
            pCodecCtx->width, pCodecCtx->height);

     // Read frames and save first five frames to disk
     i=0;
     while(av_read_frame(pFormatCtx, &packet)>=0) {
       // Is this a packet from the video stream?
       if(packet.stream_index==videoStream) {
         // Decode video frame
         avcodec_decode_video2(pCodecCtx, pFrame, &frameFinished,
                  &packet);

         // Did we get a video frame?
         if(frameFinished) {
       // Convert the image from its native format to RGB
           sws_scale
           (
               sws_ctx,
               (uint8_t const * const *)pFrame->data,
               pFrame->linesize,
               0,
               pCodecCtx->height,
               pFrameRGB->data,
               pFrameRGB->linesize
           );

       // Save the frame to disk
        if(++i<=5)
         SaveFrame(pFrameRGB, pCodecCtx->width, pCodecCtx->height,
               i);
         }
       }

       // Free the packet that was allocated by av_read_frame
       av_free_packet(&packet);
     }

     // Free the RGB image
     av_free(buffer);
     av_free(pFrameRGB);

     // Free the YUV frame
     av_free(pFrame);

     // Close the codec
     avcodec_close(pCodecCtx);

     // Close the video file
     avformat_close_input(&pFormatCtx);

     return 0;
    }
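    As a point of comparison for the question above, the ffmpeg command line can already do all three steps (grab from a v4l2 node, overlay text, encode as JPEG) using the `-f v4l2` input format, the `drawtext` filter (available only in builds with libfreetype), and the `mjpeg` encoder. A hedged sketch that merely assembles such a command from Python, without running it (the device path, text, and output name are assumptions):

```python
def build_v4l2_mjpeg_cmd(device='/dev/video0', text='hello', output='out.avi'):
    """Assemble an ffmpeg command: v4l2 capture -> drawtext overlay -> MJPEG."""
    return [
        'ffmpeg',
        '-f', 'v4l2',                        # force the video4linux2 demuxer
        '-i', device,                        # the capture device node
        '-vf', "drawtext=text='%s'" % text,  # burn the text into each frame
        '-c:v', 'mjpeg',                     # encode every frame as a JPEG
        output,
    ]

cmd = build_v4l2_mjpeg_cmd()
```

    The C-API equivalent of `-f v4l2` is passing the result of `av_find_input_format("video4linux2")` as the third argument of `avformat_open_input()`; the tutorial code passes NULL there, which is presumably why a device node is not recognized as an input.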

  • Torn images acquired when decoding video frames with FFmpeg

    22 March 2016, by bot1131357

    I am trying to decode images using the tutorial at dranger.com. Below is the code I'm working with. The code is pretty much untouched aside from the ppm_save() function and the replacement of deprecated functions.

    The program compiled successfully, but when I try to process a video I get a tearing effect, like this: image1 and image2.

    (Side question: I tried to replace the deprecated avpicture_fill() with av_image_copy_to_buffer(), but I got an access violation error, so I left it as is. I wonder what the proper way is to assign the frame data to a buffer.)

    The library that I’m using is ffmpeg-20160219-git-98a0053-win32-dev. Would really appreciate it if someone could help me with this.

    // Decode video and save frames

    char filename[] = "test%0.3d.ppm";
    static void ppm_save(unsigned char *buf, int wrap, int xsize, int ysize,
                        int framenum )
    {

       char filenamestr[sizeof(filename)];
       FILE *f;
       int i;

       sprintf_s(filenamestr, sizeof(filenamestr), filename, framenum);
       fopen_s(&f,filenamestr,"w");
       fprintf(f,"P6\n%d %d\n%d\n",xsize,ysize,255);
       for(i=0;i<ysize;i++)
           fwrite(buf + i*wrap, 1, xsize*3, f);
       fclose(f);
    }

    int main(int argc, char *argv[])
    {
       AVFormatContext *pFormatCtx = NULL;
       AVCodecContext  *codecCtx = NULL;
       AVCodec         *codec = NULL;
       AVFrame         *inframe = NULL;
       AVFrame         *outframe = NULL;
       AVPacket        avpkt;
       struct SwsContext *sws_ctx = NULL;
       int             i, videoStream, frameFinished;

       // Register all formats and codecs
       av_register_all();

       // Open video file
       if (avformat_open_input(&pFormatCtx, argv[1], NULL, NULL) != 0)
           return -1; // Couldn't open file

       // Retrieve stream information
       if (avformat_find_stream_info(pFormatCtx, NULL) < 0)
           return -1; // Couldn't find stream information

       // Dump information about file onto standard error (Not necessary)
       av_dump_format(pFormatCtx, 0, argv[1], 0);

       // Find the first video stream
       videoStream = -1;
       for (i = 0; i < pFormatCtx->nb_streams; i++)
           if (pFormatCtx->streams[i]->codec->codec_type == AVMEDIA_TYPE_VIDEO) {
               videoStream = i;
               break;
           }
       if (videoStream == -1)
           return -1; // Didn't find a video stream

       /* find the video decoder */
       codec = avcodec_find_decoder(pFormatCtx->streams[videoStream]->codec->codec_id);
       if (!codec) {
           fprintf(stderr, "codec not found\n");
           exit(1);
       }

       codecCtx= avcodec_alloc_context3(codec);
       if(avcodec_copy_context(codecCtx, pFormatCtx->streams[i]->codec) != 0) {
           fprintf(stderr, "Couldn't copy codec context");
           return -1; // Error copying codec context
       }  

       /* open it */
       if (avcodec_open2(codecCtx, codec, NULL) < 0) {
           fprintf(stderr, "could not open codec\n");
           exit(1);
       }

       // Allocate video frame
       inframe= av_frame_alloc();
       if(inframe==NULL)
           return -1;

       // Allocate output frame
       outframe=av_frame_alloc();
       if(outframe==NULL)
           return -1;

       // Determine required buffer size and allocate buffer
       int numBytes=av_image_get_buffer_size(AV_PIX_FMT_RGB24, codecCtx->width,
                       codecCtx->height,1);
       uint8_t* buffer=(uint8_t *)av_malloc(numBytes*sizeof(uint8_t));

       // Assign appropriate parts of buffer to image planes in outframe
       // Note that outframe is an AVFrame, but AVFrame is a superset
       // of AVPicture


       avpicture_fill((AVPicture *)outframe, buffer, AV_PIX_FMT_RGB24,
            codecCtx->width, codecCtx->height );
       //av_image_copy_to_buffer(buffer, numBytes,
    //                           outframe->data, outframe->linesize,
    //                           AV_PIX_FMT_RGB24, codecCtx->width, codecCtx->height,1);

       // initialize SWS context for software scaling
       sws_ctx = sws_getContext(codecCtx->width,
                  codecCtx->height,
                  codecCtx->pix_fmt,
                  codecCtx->width,
                  codecCtx->height,
                  AV_PIX_FMT_RGB24,
                  SWS_BILINEAR,
                  NULL,
                  NULL,
                  NULL
                  );  


       // av_init_packet(&avpkt);


       i = 0;
       while(av_read_frame(pFormatCtx, &avpkt)>=0) {
           // Is this a packet from the video stream?
           if(avpkt.stream_index==videoStream) {
             // Decode video frame
          avcodec_decode_video2(codecCtx, inframe, &frameFinished, &avpkt);

             // Did we get a video frame?
             if(frameFinished) {
           // Convert the image from its native format to RGB
           sws_scale(sws_ctx, (uint8_t const * const *)inframe->data,
                 inframe->linesize, 0, codecCtx->height,
                 outframe->data, outframe->linesize);

           // Save the frame to disk
           if(++i%15 == 0)
               ppm_save(outframe->data[0], outframe->linesize[0],
                           codecCtx->width, codecCtx->height, i);

             }
           }

       // Free the packet that was allocated by av_read_frame
       av_packet_unref(&avpkt);
       }


       // Free the RGB image
       av_free(buffer);
       av_frame_free(&outframe);

       // Free the original frame
       av_frame_free(&inframe);

       // Close the codecs
       avcodec_close(codecCtx);
       av_free(codecCtx);

       // Close the video file
       avformat_close_input(&pFormatCtx);


       printf("\n");


       return 0;
    }
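
    One plausible cause of the torn frames in the question above is the `fopen_s(&f, filenamestr, "w")` call: P6 PPM is a binary format, and on Windows a text-mode stream silently rewrites every 0x0A byte of pixel data as 0x0D 0x0A, so the file should be opened with `"wb"`. The format itself is just a small text header followed by raw RGB24 bytes; a minimal Python sketch for comparison:

```python
def write_ppm(path, width, height, rgb_bytes):
    """Write a binary P6 PPM: text header, then width*height*3 raw RGB bytes."""
    assert len(rgb_bytes) == width * height * 3
    with open(path, 'wb') as f:   # binary mode: no newline translation
        f.write(b'P6\n%d %d\n255\n' % (width, height))
        f.write(rgb_bytes)

# a 2x1 image: one red pixel followed by one green pixel
write_ppm('tiny.ppm', 2, 1, bytes([255, 0, 0, 0, 255, 0]))
```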