Advanced search

Media (0)

No media matching your criteria is available on the site.

Other articles (16)

  • Publish on MediaSPIP

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, if your installed MediaSPIP is at version 0.2 or higher. If needed, contact the administrator of your MediaSPIP to find out.

  • Encoding and processing into web-friendly formats

    13 April 2011

    MediaSPIP automatically converts uploaded files to internet-compatible formats.
    Video files are encoded to MP4, OGV and WebM (supported by HTML5), with MP4 also supported by Flash.
    Audio files are encoded to MP3 and Ogg (supported by HTML5), with MP3 also supported by Flash.
    Where possible, text files are analyzed to extract the data needed for search-engine indexing, and then exported as a series of image files.
    All uploaded files are stored online in their original format, so you can (...)
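
    The article does not show the commands involved, but as a purely illustrative sketch (not MediaSPIP's actual pipeline), encoding an upload into the formats listed above could be done with ffmpeg along these lines:

        # Illustration only: one source video into the HTML5/Flash formats mentioned above
        ffmpeg -i input.mov -c:v libx264 -crf 23 -c:a aac output.mp4
        ffmpeg -i input.mov -c:v libtheora -q:v 6 -c:a libvorbis output.ogv
        ffmpeg -i input.mov -c:v libvpx -b:v 1M -c:a libvorbis output.webm
        # and for audio uploads
        ffmpeg -i input.wav -c:a libmp3lame -q:a 3 output.mp3
        ffmpeg -i input.wav -c:a libvorbis -q:a 4 output.ogg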

  • Contribute to translation

    13 April 2011

    You can help us improve the language used in the software interface to make MediaSPIP more accessible and user-friendly. You can also translate the interface into any language, which allows it to spread to new linguistic communities.
    To do this, we use the translation interface of SPIP, where all the language modules of MediaSPIP are available. Just subscribe to the mailing list and request further information on translation.
    MediaSPIP is currently available in French and English (...)

On other sites (4784)

  • Change ffmpeg output directory

    6 May 2019, by Filip

    I'm using ffmpeg to compress footage, and I want to compress the footage of a specific day. But when I overwrite the files, the output is an empty stream, because ffmpeg writes the file as it reads it, so I want to rename the output file instead. find gives me the full path, which is necessary, but I don't know how to change the actual file name rather than the path.

    Any suggestions?

    find /home/server/recordings/compress -name '*.mp4' -print0 | xargs -0 -I{} ffmpeg -i {}  -c:v libx265 -preset fast -crf 25 -x265-params "vbv-maxrate=1500:vbv-bufsize=1000" -c:a aac {}
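
    One way around the in-place overwrite, sketched below assuming bash and GNU find (the "-x265" suffix is just an example), is to derive a new output name from each input path instead of reusing {} for both input and output:

        find /home/server/recordings/compress -name '*.mp4' -print0 |
        while IFS= read -r -d '' src; do
            dst="${src%.mp4}-x265.mp4"                 # same directory, new file name
            ffmpeg -nostdin -n -i "$src" -c:v libx265 -preset fast -crf 25 \
                   -x265-params "vbv-maxrate=1500:vbv-bufsize=1000" \
                   -c:a aac "$dst"                     # -nostdin stops ffmpeg from consuming the loop's stdin
        done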
  • Access violation reading location 0x000000148965F000

    14 February 2014, by user3012914

    I tried to encode BMP images, which I get from a buffer, and store them as an H.264 video. I am stuck with these errors, which occur randomly and repeatedly:

    I am using Visual Studio 2012

    1) Access violation reading location 0x000000148965F000.

    2) Heap corruption

    The debugger shows the error at this point:

       struct SwsContext* fooContext = sws_getContext(_imgWidth,_imgHeight,PIX_FMT_RGB32,c->width,c->height,PIX_FMT_YUV420P, SWS_FAST_BILINEAR,NULL,NULL,NULL);
                       sws_scale(fooContext, inpic->data, inpic->linesize, 0, c->height, outpic->data, outpic->linesize);    // converting frame size and format

    I guess the read violation happens because of values that were not pre-initialized, but I couldn't understand exactly why. I have also attached part of the code below.

    PagedImage *inImg = getUpdatedInputImage(0);
           ML_CHECK(inImg);
           ImageVector imgExt = inImg->getImageExtent();
           if ((imgExt.x == _imgWidth) && (imgExt.y == _imgHeight))
           {
               if (((imgExt.x % 4) == 0) && ((imgExt.y % 4) == 0))
               {
                  _numFramesFld->setIntValue(_numFramesFld->getIntValue() + 1);
                   MLFree(unicodeFilename);
                   // configure header
                   //BITMAPINFO bitmapInfo
                   // read out input image and write output image into video
                   // get input image as an array
                   void* imgData = NULL;
                   SubImageBox imageBox(imgExt); // get the whole image
                   getTile(inImg, imageBox, MLuint8Type, &imgData);
                   MLuint8* iData = (MLuint8*)imgData;
                   // since we have only images with
                   // a z-ext of 1, we can compute the c stride as follows
                   int cStride = _imgWidth * _imgHeight;
                   int offset  = 0;
                   MLuint8 r=0, g=0, b=0;
                   // pointer into the bitmap that is
                   // used to write images into an video
                   UCHAR* dst = (UCHAR*)_bits;
                   for (int y = _imgHeight-1; y >= 0; y--)
                   { // reversely scan the image. if y-rows of DIB are set in normal order, no compression will be available.
                       offset = _imgWidth * y;
                       for (int x = 0; x < _imgWidth; x++)
                       {
                           if (_isGreyValueImage)
                           {
                               r = iData[offset + x];
                               *dst++ = (UCHAR)r;
                               *dst++ = (UCHAR)r;
                               *dst++ = (UCHAR)r;
                           }
                           else
                           {
                               b = iData[offset + x]; // windows bitmap need reverse order: bgr instead of rgb
                               g = iData[offset + x + cStride          ];
                               r = iData[offset + x + cStride + cStride];
                               *dst++ = (UCHAR)r;
                               *dst++ = (UCHAR)g;
                               *dst++ = (UCHAR)b;
                           }
                           // alpha channel in input image is ignored
                       }
                   }
                   outbuf_size = 100000 + c->width*c->height*(32>>3);      // allocate output buffer
                   outbuf = static_cast<uint8_t*>(malloc(outbuf_size));
                   fileName_ = (_outputFilenameFld->getStringValue()).c_str();
                   FILE* f = fopen(fileName_,"wb");                    // opening video file for writing
                   if(!f)
                   {
                       _messageFld->setStringValue("Cannot open file");
                   }
                   else _messageFld->setStringValue("Opened video file for writing\n");

                   //for(i=0;i<_numFramesFld->getIntValue();i++)
                   //{
                       fflush(stdout);
                       int nbytes = avpicture_get_size(PIX_FMT_YUV420P, c->width, c->height);                                // allocating outbuffer
                       uint8_t* outbuffer = (uint8_t*)av_malloc(nbytes*sizeof(uint8_t));
                       AVFrame* inpic = avcodec_alloc_frame();                                                               // mandatory frame allocation
                       AVFrame* outpic = avcodec_alloc_frame();
                       //outpic->pts = (int64_t)((float)i * (1000.0/((float)(c->time_base.den))) * 90);                        // setting frame pts
                       avpicture_fill((AVPicture*)inpic,(uint8_t*)dst, PIX_FMT_RGB32, c->width, c->height);                            // fill image with input screenshot
                       avpicture_fill((AVPicture*)outpic, outbuffer, PIX_FMT_YUV420P, c->width, c->height);                  // clear output picture for buffer copy
                       av_image_alloc(outpic->data, outpic->linesize, c->width, c->height, c->pix_fmt, 1);

                       inpic->data[0] += inpic->linesize[0]*(c->height-1);                                                   // flipping frame
                       inpic->linesize[0] = -inpic->linesize[0];                                                             // flipping frame

                       struct SwsContext* fooContext = sws_getContext(_imgWidth,_imgHeight,PIX_FMT_RGB32,c->width,c->height,PIX_FMT_YUV420P, SWS_FAST_BILINEAR,NULL,NULL,NULL);
                       sws_scale(fooContext, inpic->data, inpic->linesize, 0, c->height, outpic->data, outpic->linesize);    // converting frame size and format
                       out_size = avcodec_encode_video(c, outbuf, outbuf_size, outpic);                                      // encoding video
                       _messageFld->setStringValue("Encoding frame %3d (size=%5d)\n");
                        fwrite(outbuf, 1, out_size, f);
                        delete [] dst;                                                                                         // freeing memory
                       av_free(outbuffer);    
                       av_free(inpic);
                       av_free(outpic);
                       av_free(fooContext);
                       DeleteObject(_hbitmap);

                       for(int Z = 0; out_size; Z++)                                                           // encode the delayed frames
                       {
                           out_size = avcodec_encode_video(c, outbuf, outbuf_size, NULL);
                           fwrite(outbuf, 1, out_size, f);
                       }
                       //outbuf[0] = 0x00;
                       //outbuf[1] = 0x00;                                                                                               // add sequence end code to have a real mpeg file
                       //outbuf[2] = 0x01;
                       //outbuf[3] = 0xb7;
                       //fwrite(outbuf, 1, 4, f);
                       fclose(f);
                       avcodec_close(c);                                                                                               // freeing memory
                       free(outbuf);
                       av_free(c);
                       printf("Closed codec and Freed\n");
                   }
               }
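
    A common cause of this kind of read violation is that the buffer sizes and the dimensions passed to sws_getContext/sws_scale disagree: here the source frame is filled as a c->width x c->height RGB32 picture while the context expects an _imgWidth x _imgHeight source, and outpic is filled with outbuffer and then allocated again with av_image_alloc. A minimal sketch of a consistent setup, using the same deprecated FFmpeg API as above and a hypothetical rgbData buffer holding srcW x srcH RGB32 pixels, would be:

        // Sketch only: keep buffer sizes and sws context dimensions consistent
        struct SwsContext* ctx = sws_getContext(srcW, srcH, PIX_FMT_RGB32,
                                                c->width, c->height, PIX_FMT_YUV420P,
                                                SWS_FAST_BILINEAR, NULL, NULL, NULL);

        AVFrame* src = avcodec_alloc_frame();
        avpicture_fill((AVPicture*)src, rgbData, PIX_FMT_RGB32, srcW, srcH);                // source uses its own size

        AVFrame* dst = avcodec_alloc_frame();
        av_image_alloc(dst->data, dst->linesize, c->width, c->height, PIX_FMT_YUV420P, 1);  // allocate the destination once

        sws_scale(ctx, src->data, src->linesize, 0, srcH,                                   // srcSliceH is the source height
                  dst->data, dst->linesize);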
  • HTTP Header for Duration of a MP4 for HTML 5 video

    9 March 2014, by Mustafa

    I am trying to stream MP4 video as it is encoded from a webserver. I believe I used the appropriate flags, but it is not working correctly. When I download the video from my stream and open it with VLC, it properly shows the duration. Since a socket is not seekable, I assume it writes the metadata at the end? My Chrome browser always shows a duration of 8 seconds. The first 8 seconds play at normal speed, but afterwards the pause button turns into a play button and the video plays very fast, probably as fast as it is received. The audio, however, plays at normal speed. I tried document.getElementById('myVid').duration = 20000, but it is a read-only field.

    I wonder, is there any way to explicitly state the duration, in HTTP headers or in any other way? I cannot find any documentation about it.

    ffmpeg -i - -vcodec libx264 -acodec libvo_aacenc -ar 44100 -ac 2 -ab 128000 -f mp4 -movflags frag_keyframe+faststart pipe:1 -fflags +genpts -re -profile baseline -level 30 -preset fast

    To close-voters who think this is not programming-related: I use this in my own server that I coded, and I need to set the duration programmatically, via JavaScript or by setting an HTTP header. I believe it may be related to ffmpeg or to HTTP headers, which is why I posted it here.

    app.get("/video/*", function(req,res){
       res.writeHead(200, {
           'Content-Type': 'video/mp4',
       });
       var dir = req.url.split("/").splice(2).join("/");
       var buf = new Buffer(dir, 'base64');
       var src = buf.toString();

       var Transcoder = require('stream-transcoder');
       var stream = fs.createReadStream(src);
       // I added my own flags to this module, they are at below:
       new Transcoder(stream)
           .videoCodec('libx264')
           .audioCodec("libvo_aacenc")
           .sampleRate(44100)
           .channels(2)
           .audioBitrate(128 * 1000)
           .format('mp4')
           .on('finish', function() {
               console.log("finished");
           })
           .stream().pipe(res);
    });

    The exec function in that stream-transcoder module adds the following flags:

       a.push("-fflags");
       a.push("+genpts");
       a.push("-re");
       a.push("-profile");
       a.push("baseline");
       a.push("-level");
       a.push("30");
       a.push("-preset");
       a.push("fast");
       a.push("-strict");
       a.push("experimental");
       a.push("-frag_duration");
       a.push("" + 2 * (1000 * 1000));
       var child = spawn('ffmpeg', a, {
           cwd: os.tmpdir()
       });
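
    There is no standard HTTP response header that tells a browser the duration of a fragmented MP4, and video.duration is read-only. One workaround, sketched below assuming the server knows the duration and that Media Source Extensions are an option, is to feed the stream through a MediaSource and set mediaSource.duration explicitly (the codecs string and the fetch URL are placeholders):

        const video = document.getElementById('myVid');
        const mediaSource = new MediaSource();
        video.src = URL.createObjectURL(mediaSource);

        mediaSource.addEventListener('sourceopen', async () => {
            // The codecs string is a placeholder; it must match what ffmpeg actually produces.
            const sb = mediaSource.addSourceBuffer('video/mp4; codecs="avc1.42E01E, mp4a.40.2"');
            mediaSource.duration = 20;                          // duration in seconds, known server-side
            const resp = await fetch('/video/' + encodedPath);  // encodedPath: the base64 path used above
            const reader = resp.body.getReader();
            for (;;) {
                const { done, value } = await reader.read();
                if (done) { mediaSource.endOfStream(); break; }
                sb.appendBuffer(value);
                await new Promise(resolve => sb.addEventListener('updateend', resolve, { once: true }));
            }
        });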