Advanced search

Media (91)

Other articles (34)

  • Sites built with MediaSPIP

    2 May 2011, by

    This page presents some of the sites running MediaSPIP.
    You can of course add your own via the form at the bottom of the page.

  • HTML5 audio and video support

    10 April 2011

    MediaSPIP uses the HTML5 video and audio tags to play multimedia documents, taking advantage of the latest W3C innovations supported by modern browsers.
    For older browsers, the Flowplayer Flash player is used instead.
    The HTML5 player was created specifically for MediaSPIP: its appearance is fully customizable to match a chosen theme.
    These technologies make it possible to deliver video and audio both on conventional computers (...)

  • HTML5 audio and video support

    13 April 2011, by

    MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
    The MediaSPIP player used has been created specifically for MediaSPIP and can be easily adapted to fit in with a specific theme.
    For older browsers the Flowplayer flash fallback is used.
    MediaSPIP allows for media playback on major mobile platforms with the above (...)

On other sites (4672)

  • Change ffmpeg output directory

    6 May 2019, by Filip

    I'm using ffmpeg to compress footage, and I want to compress the footage of a specific day. But when I overwrite the files in place, the output is an empty stream, because ffmpeg writes the file while it is still reading it. So I want to rename the output file. find gives me the full path, which I need, but I don't know how to change the actual file name rather than the path.

    Any suggestions?

    find /home/server/recordings/compress -name '*.mp4' -print0 | xargs -0 -I{} ffmpeg -i {}  -c:v libx265 -preset fast -crf 25 -x265-params "vbv-maxrate=1500:vbv-bufsize=1000" -c:a aac {}
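    One way to do this (a sketch, untested against the original setup; the `-x265` suffix is illustrative) is to derive a new output name from each input path with shell parameter expansion, so ffmpeg never writes to the file it is reading:

```shell
# Compress each .mp4 into a sibling file with a "-x265" suffix,
# leaving the original untouched while ffmpeg reads it.
find /home/server/recordings/compress -name '*.mp4' -print0 |
while IFS= read -r -d '' f; do
    out="${f%.mp4}-x265.mp4"     # strip the .mp4 suffix, append -x265.mp4
    ffmpeg -i "$f" -c:v libx265 -preset fast -crf 25 \
           -x265-params "vbv-maxrate=1500:vbv-bufsize=1000" \
           -c:a aac "$out"
done
```

    Once an encode has succeeded, the original could then be replaced with `mv "$out" "$f"` if overwriting is really the goal.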
  • Access violation reading location 0x000000148965F000

    14 February 2014, by user3012914

    I tried to encode BMP images, which I get from a buffer, and store them as an H264 video. I am stuck with these errors, which occur randomly and repeatedly:

    I am using Visual Studio 2012

    1) Access violation reading location 0x000000148965F000.

    2) Heap corruption

    The debugger shows the error at this point:

       struct SwsContext* fooContext = sws_getContext(_imgWidth,_imgHeight,PIX_FMT_RGB32,c->width,c->height,PIX_FMT_YUV420P, SWS_FAST_BILINEAR,NULL,NULL,NULL);
                       sws_scale(fooContext, inpic->data, inpic->linesize, 0, c->height, outpic->data, outpic->linesize);    // converting frame size and format

    I guess the read violation happens due to uninitialized values, but I couldn't work out exactly why. I have also attached part of the code below:

    PagedImage *inImg = getUpdatedInputImage(0);
           ML_CHECK(inImg);
           ImageVector imgExt = inImg->getImageExtent();
           if ((imgExt.x == _imgWidth) && (imgExt.y == _imgHeight))
           {
               if (((imgExt.x % 4) == 0) && ((imgExt.y % 4) == 0))
               {
                  _numFramesFld->setIntValue(_numFramesFld->getIntValue() + 1);
                   MLFree(unicodeFilename);
                   // configure header
                   //BITMAPINFO bitmapInfo
                   // read out input image and write output image into video
                   // get input image as an array
                   void* imgData = NULL;
                   SubImageBox imageBox(imgExt); // get the whole image
                   getTile(inImg, imageBox, MLuint8Type, &imgData);
                   MLuint8* iData = (MLuint8*)imgData;
                   // since we have only images with
                   // a z-ext of 1, we can compute the c stride as follows
                   int cStride = _imgWidth * _imgHeight;
                   int offset  = 0;
                   MLuint8 r=0, g=0, b=0;
                   // pointer into the bitmap that is
                   // used to write images into an video
                   UCHAR* dst = (UCHAR*)_bits;
                   for (int y = _imgHeight-1; y >= 0; y--)
                   { // reversely scan the image. if y-rows of DIB are set in normal order, no compression will be available.
                       offset = _imgWidth * y;
                       for (int x = 0; x < _imgWidth; x++)
                       {
                           if (_isGreyValueImage)
                           {
                               r = iData[offset + x];
                               *dst++ = (UCHAR)r;
                               *dst++ = (UCHAR)r;
                               *dst++ = (UCHAR)r;
                           }
                           else
                           {
                               b = iData[offset + x]; // windows bitmap need reverse order: bgr instead of rgb
                               g = iData[offset + x + cStride          ];
                               r = iData[offset + x + cStride + cStride];
                               *dst++ = (UCHAR)r;
                               *dst++ = (UCHAR)g;
                               *dst++ = (UCHAR)b;
                           }
                           // alpha channel in input image is ignored
                       }
                   }
                   outbuf_size = 100000 + c->width*c->height*(32>>3);      // allocate output buffer
                    outbuf = static_cast<uint8_t*>(malloc(outbuf_size));
                   fileName_ = (_outputFilenameFld->getStringValue()).c_str();
                   FILE* f = fopen(fileName_,"wb");                    // opening video file for writing
                   if(!f)
                   {
                       _messageFld->setStringValue("Cannot open file");
                   }
                   else _messageFld->setStringValue("Opened video file for writing\n");

                   //for(i=0;i<_numFramesFld->getIntValue();i++)
                   //{
                       fflush(stdout);
                       int nbytes = avpicture_get_size(PIX_FMT_YUV420P, c->width, c->height);                                // allocating outbuffer
                       uint8_t* outbuffer = (uint8_t*)av_malloc(nbytes*sizeof(uint8_t));
                       AVFrame* inpic = avcodec_alloc_frame();                                                               // mandatory frame allocation
                       AVFrame* outpic = avcodec_alloc_frame();
                       //outpic->pts = (int64_t)((float)i * (1000.0/((float)(c->time_base.den))) * 90);                        // setting frame pts
                       avpicture_fill((AVPicture*)inpic,(uint8_t*)dst, PIX_FMT_RGB32, c->width, c->height);                            // fill image with input screenshot
                       avpicture_fill((AVPicture*)outpic, outbuffer, PIX_FMT_YUV420P, c->width, c->height);                  // clear output picture for buffer copy
                       av_image_alloc(outpic->data, outpic->linesize, c->width, c->height, c->pix_fmt, 1);

                       inpic->data[0] += inpic->linesize[0]*(c->height-1);                                                   // flipping frame
                       inpic->linesize[0] = -inpic->linesize[0];                                                             // flipping frame

                       struct SwsContext* fooContext = sws_getContext(_imgWidth,_imgHeight,PIX_FMT_RGB32,c->width,c->height,PIX_FMT_YUV420P, SWS_FAST_BILINEAR,NULL,NULL,NULL);
                       sws_scale(fooContext, inpic->data, inpic->linesize, 0, c->height, outpic->data, outpic->linesize);    // converting frame size and format
                       out_size = avcodec_encode_video(c, outbuf, outbuf_size, outpic);                                      // encoding video
                       _messageFld->setStringValue("Encoding frame %3d (size=%5d)\n");
                        fwrite(outbuf, 1, out_size, f);
                        delete [] dst;                                                                                         // freeing memory
                       av_free(outbuffer);    
                       av_free(inpic);
                       av_free(outpic);
                       av_free(fooContext);
                       DeleteObject(_hbitmap);

                        while (out_size) {                                                                              // encode the delayed frames
                            out_size = avcodec_encode_video(c, outbuf, outbuf_size, NULL);
                            fwrite(outbuf, 1, out_size, f);
                        }
                       //outbuf[0] = 0x00;
                       //outbuf[1] = 0x00;                                                                                               // add sequence end code to have a real mpeg file
                       //outbuf[2] = 0x01;
                       //outbuf[3] = 0xb7;
                       //fwrite(outbuf, 1, 4, f);
                       fclose(f);
                       avcodec_close(c);                                                                                               // freeing memory
                       free(outbuf);
                       av_free(c);
                       printf("Closed codec and Freed\n");
                   }
               }
  • HTTP header for the duration of an MP4 for HTML5 video

    9 March 2014, by Mustafa

    I am trying to stream MP4 video as it is encoded from a webserver. I believe I used the appropriate flags, but it is not working correctly. When I download the video from my stream and open it with VLC, it properly shows the duration. Since a socket is not seekable, I assume the metadata is written at the end? My Chrome browser always shows a duration of 8 seconds. The first 8 seconds play at normal speed, but afterwards the pause button turns into a play button and the video plays very fast, probably as fast as it is received. The audio, however, plays at normal speed. I tried document.getElementById('myVid').duration = 20000 but it is a read-only field.

    I wonder, is there any way to explicitly state the duration in HTTP headers or in any other way? I cannot find any documentation about it.

    ffmpeg -i - -vcodec libx264 -acodec libvo_aacenc -ar 44100 -ac 2 -ab 128000 -f mp4 -movflags frag_keyframe+faststart pipe:1 -fflags +genpts -re -profile baseline -level 30 -preset fast

    To the close-voters who think it is not programming-related: I use this in my own server, which I coded myself, and I need to set the duration programmatically, via JavaScript or by setting an HTTP header. I believe it may be related to either ffmpeg or the HTTP headers, which is why I posted it here.

    app.get("/video/*", function(req,res){
       res.writeHead(200, {
           'Content-Type': 'video/mp4',
       });
       var dir = req.url.split("/").splice(2).join("/");
       var buf = new Buffer(dir, 'base64');
       var src = buf.toString();

       var Transcoder = require('stream-transcoder');
       var stream = fs.createReadStream(src);
       // I added my own flags to this module, they are at below:
       new Transcoder(stream)
           .videoCodec('libx264')
           .audioCodec("libvo_aacenc")
           .sampleRate(44100)
           .channels(2)
           .audioBitrate(128 * 1000)
           .format('mp4')
           .on('finish', function() {
               console.log("finished");
           })
           .stream().pipe(res);
    });

    The exec function in that stream-transcoder module:

       a.push("-fflags");
       a.push("+genpts");
       a.push("-re");
       a.push("-profile");
       a.push("baseline");
       a.push("-level");
       a.push("30");
       a.push("-preset");
       a.push("fast");
       a.push("-strict");
       a.push("experimental");
       a.push("-frag_duration");
       a.push("" + 2 * (1000 * 1000));
       var child = spawn('ffmpeg', a, {
           cwd: os.tmpdir()
       });
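    Two hedged observations on the command above. First, no widely supported HTTP header conveys media duration to browsers (the old Firefox-only X-Content-Duration header was never standard for MP4 and has since been removed); the duration has to come from the container itself, and with frag_keyframe the initial moov carries no overall duration, so the browser only learns it as fragments arrive. Second, ffmpeg options apply to the file that follows them on the command line, so everything placed after pipe:1 is trailing and, depending on the version, warned about or ignored. A reordered version of the same command, assuming the same codecs, would look like:

```shell
# input options before -i, output options before the output target
ffmpeg -re -fflags +genpts -i - \
    -vcodec libx264 -profile:v baseline -level 30 -preset fast \
    -acodec libvo_aacenc -ar 44100 -ac 2 -ab 128000 \
    -f mp4 -movflags frag_keyframe+faststart pipe:1
```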