
Other articles (46)

  • Publishing on MediaSPIP

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, as long as your MediaSPIP installation is at version 0.2 or higher. If in doubt, contact the administrator of your MediaSPIP site to check.

  • Encoding and processing into web-friendly formats

    13 April 2011

    MediaSPIP automatically converts uploaded files to internet-compatible formats.
    Video files are encoded to MP4, OGV and WebM for HTML5 playback, with MP4 also used for Flash playback.
    Audio files are encoded to MP3 and Ogg for HTML5 playback, with MP3 also used for Flash.
    Where possible, text is analyzed in order to retrieve the data needed for search engine detection, and then exported as a series of image files.
    All uploaded files are stored online in their original format, so you can (...)

  • Media quality after processing

    21 June 2013

    Getting the settings of the media-processing software right matters for balancing the interests involved (the host's bandwidth, media quality for the author and the visitor, accessibility for the visitor). How should you set the quality of your media?
    The higher the media quality, the more bandwidth is used; a visitor with a low-speed internet connection will have to wait longer. Conversely, the lower the quality, the more the media is degraded, or even (...)
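
    As a rough worked example (the figures here are illustrative, not from the article): the visitor's wait time grows linearly with the encoded bitrate, since

    $t_{\text{download}} = \dfrac{\text{bitrate} \times \text{duration}}{\text{connection speed}}$

    A 5-minute video encoded at 2 Mbit/s weighs 600 Mbit and takes about 10 minutes to fetch over a 1 Mbit/s connection; the same video encoded at 0.5 Mbit/s arrives in about 2.5 minutes, at a visible cost in quality.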

On other sites (8395)

  • Streaming video with FFmpeg through pipe causes partial file offset error and moov atom not found

    12 March 2023, by Moe

    I'm trying to stream a video from Firebase Cloud Storage through FFmpeg and on to the HTML video player. A very basic example using the Range header worked fine and was exactly what I was trying to do, but now that I pipe the stream from Firebase through FFmpeg and then to the browser, only the first couple of requests (the first 10 seconds) work. After that, it runs into these issues:

    • Unable to get the actual time of the video in the browser (it keeps changing, as if the metadata is unknown)
    • On the server, it fails to continue streaming the request, with the following:

[NULL @ 000001d8c239e140] Invalid NAL unit size (110356 > 45446).
[NULL @ 000001d8c239e140] missing picture in access unit with size 45450
pipe:0: corrupt input packet in stream 0
[mov,mp4,m4a,3gp,3g2,mj2 @ 000001d8c238c7c0] stream 0, offset 0x103fcf: partial file

    and then this error as well:

[mov,mp4,m4a,3gp,3g2,mj2 @ 000001dc9590c7c0] moov atom not found
pipe:0: Invalid data found when processing input

    Using Node.js with FFmpeg v5.0.1 in a serverless environment:

    


const { spawn } = require('child_process'); // assumed imports; the post says the snippet is trimmed
const pathToFfmpeg = require('ffmpeg-static');

const filePath = req.query.path || `sample.mp4`; // fall back to the sample file

// Create a read stream from the video file
const videoFile = bucket.file(filePath);

const range = req.headers.range;

if (!range) {
  return res.status(500).json({ mes: 'Not Found' });
}

// Get the file size for the 'Content-Length' header
const [metadata] = await videoFile.getMetadata();
const videoSize = Number(metadata.size); // metadata.size is returned as a string

const CHUNK_SIZE = 1000000; // 1MB

const start = Number(range.replace(/\D/g, ""));
const end = Math.min(start + CHUNK_SIZE, videoSize - 1);

// Create headers
const contentLength = end - start + 1;
const headers = {
  "Content-Range": `bytes ${start}-${end}/${videoSize}`,
  "Accept-Ranges": "bytes",
  "Content-Length": contentLength,
  "Content-Type": "video/mp4",
  // 'Transfer-Encoding': 'chunked'
};

// HTTP Status 206 for Partial Content
res.writeHead(206, headers);

// Create a read stream for this particular byte range only
const inputStream = videoFile.createReadStream({ start, end });

const ffmpeg = spawn(pathToFfmpeg, [
  // '-re', '-y',
  '-f', 'mp4',
  '-i', 'pipe:0', // read input from standard input (pipe)
  '-c:v', 'copy', // copy video codec
  '-c:a', 'copy', // copy audio codec
  '-map_metadata', '0',
  '-movflags', 'frag_keyframe+empty_moov+faststart+default_base_moof',
  '-f', 'mp4', // output format
  'pipe:1', // write output to standard output (pipe)
], {
  stdio: ['pipe', 'pipe', 'inherit']
});

inputStream.pipe(ffmpeg.stdin);

ffmpeg.stdout.pipe(res);
      


    


    Note that this version is trimmed; I do have plenty of logging and error handling, of course. Again, what happens is that the first request works fine, but when the player then requests a later part (minute 5, for example), it produces the errors mentioned above.
    What I have tried:

    • At first, I tried adjusting the FFmpeg parameters with the following to fix the moov atom error, but it still persisted:

'-map_metadata', '0',
'-movflags', 'frag_keyframe+empty_moov+faststart+default_base_moof',

    • I have also tried streaming to a file first and then piping from it, but that gave the same errors.
    Finally, after googling for about a day and a half and trying dozens of solutions, nothing worked. I'm now stuck on this error, unable to process a specific fragment of a video through FFmpeg. Is that even possible, or am I doing something wrong?
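
    One workaround consistent with the errors above (a sketch of an alternative approach, not from the original post): an arbitrary byte range cut from the middle of an MP4 contains neither the moov atom nor complete NAL units, which is exactly what the two logs complain about. Instead of piping a byte-range fragment into FFmpeg, you can hand it the whole object and seek by time with -ss. A minimal sketch, assuming a signed download URL and an FFmpeg build with HTTPS support; the t query parameter and the getSignedUrl options are illustrative:

const { spawn } = require('child_process');
const pathToFfmpeg = require('ffmpeg-static');

// Hypothetical handler that seeks by time instead of by byte range.
app.get('/video', async (req, res) => {
  const seconds = Number(req.query.t) || 0; // requested start time in seconds

  // Give FFmpeg the complete file via a signed URL, so the moov atom is reachable.
  const [url] = await bucket.file('sample.mp4').getSignedUrl({
    action: 'read',
    expires: Date.now() + 60 * 60 * 1000, // valid for one hour
  });

  res.writeHead(200, { 'Content-Type': 'video/mp4' });

  const ffmpeg = spawn(pathToFfmpeg, [
    '-ss', String(seconds), // time-based seek on a seekable input
    '-i', url,              // the full MP4, not a byte-range fragment
    '-c', 'copy',
    '-movflags', 'frag_keyframe+empty_moov+default_base_moof', // streamable fragmented MP4
    '-f', 'mp4',
    'pipe:1',
  ], { stdio: ['ignore', 'pipe', 'inherit'] });

  ffmpeg.stdout.pipe(res);
  req.on('close', () => ffmpeg.kill('SIGKILL')); // stop processing if the client disconnects
});

    The trade-off is that the response is no longer a 206 byte-range reply, so the player cannot seek natively; the page has to ask the server for a new start time (the t parameter) instead, which happens to fit the per-user filtering described below.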

    


    Why am I even streaming through FFmpeg?

    I am indeed going to add filters to the video, including dynamic watermark text for every request. That is why I need to run FFmpeg on a stream rather than directly on a file: the video filters change on demand for each user.
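
    In that setup, the per-request watermark can be passed as a filter argument in the same spawn call. A minimal sketch under the same assumptions as the sketch above (the username value is hypothetical); note that filtering forces a re-encode, so '-c:v copy' can no longer be used, and drawtext requires an FFmpeg build with libfreetype:

// Illustrative only: burn a per-request text watermark while streaming.
const username = 'alice'; // would come from the authenticated request
const args = [
  '-ss', String(seconds),
  '-i', url, // full input, as in the sketch above
  '-vf', `drawtext=text='${username}':x=10:y=10:fontsize=24:fontcolor=white`,
  '-c:v', 'libx264', '-preset', 'veryfast', // re-encode video with the filter applied
  '-c:a', 'copy',
  '-movflags', 'frag_keyframe+empty_moov+default_base_moof',
  '-f', 'mp4', 'pipe:1',
];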

    


  • lavc/ffv1: change FFV1SliceContext.plane into a RefStruct object

    11 July 2024, by Anton Khirnov
    lavc/ffv1: change FFV1SliceContext.plane into a RefStruct object
    

    Frame threading in the FFV1 decoder works in a very unusual way - the
    state that needs to be propagated from the previous frame is not decoded
    pixels(¹), but each slice's entropy coder state after decoding the slice.

    For that purpose, the decoder's update_thread_context() callback stores
    a pointer to the previous frame thread's private data. Then, when
    decoding each slice, the frame thread uses the standard progress
    mechanism to wait for the corresponding slice in the previous frame to
    be completed, then copies the entropy coder state from the
    previously-stored pointer.

    This approach is highly dubious, as update_thread_context() should be
    the only point where frame-thread contexts come into direct contact.
    There are no guarantees that the stored pointer will be valid at all, or
    will contain any particular data after update_thread_context() finishes.

    More specifically, this code can break due to the fact that keyframes
    reset entropy coder state and thus do not need to wait for the previous
    frame. As an example, consider a decoder process with 2 frame threads -
    thread 0 with its context 0, and thread 1 with context 1 - decoding a
    previous frame P, current frame F, followed by a keyframe K. Then
    consider concurrent execution consistent with the following sequence of
    events:
    * thread 0 starts decoding P
    * thread 0 reads P's slice header, then calls
    ff_thread_finish_setup() allowing next frame thread to start
    * main thread calls update_thread_context() to transfer state from
    context 0 to context 1; context 1 stores a pointer to context 0's private
    data
    * thread 1 starts decoding F
    * thread 1 reads F's slice header, then calls
    ff_thread_finish_setup() allowing the next frame thread to start
    decoding
    * thread 0 finishes decoding P
    * thread 0 starts decoding K; since K is a keyframe, it does not
    wait for F and reallocates the arrays holding entropy coder state
    * thread 0 finishes decoding K
    * thread 1 reads entropy coder state from its stored pointer to context
    0, however it finds state from K rather than from P

    This execution is currently prevented by special-casing FFV1 in the
    generic frame threading code, however that is supremely ugly. It also
    involves unnecessary copies of the state arrays, when in fact they can
    only be used by one thread at a time.

    This commit addresses these deficiencies by changing the array of
    PlaneContext (each of which contains the allocated state arrays)
    embedded in FFV1SliceContext into a RefStruct object. This object can
    then be propagated across frame threads in the standard manner. Since the
    code structure guarantees only one thread accesses it at a time, no
    copies are necessary. It is also re-created for keyframes, solving the
    above issue cleanly.

    Special-casing of FFV1 in the generic frame threading code will be
    removed in a later commit.

    (¹) except in the case of a damaged slice, when previous frame's pixels
    are used directly

    • [DH] libavcodec/ffv1.c
    • [DH] libavcodec/ffv1.h
    • [DH] libavcodec/ffv1dec.c
  • Encoding to h264 failed to send some frames using ffmpeg c api

    8 July 2020, by Vuwox

    Using the FFmpeg C API, I'm trying to push generated images into an MP4 file.

    


    When I push frames one at a time, the muxing seems to fail: avcodec_receive_packet(...) returns AVERROR(EAGAIN) for the first frames, but after a while it starts adding frames, beginning with the first one.

    


    What I mean is that when I push frames 1 to 13, I get errors, but from frame 14 to the end (36), frames are added to the video; however, the encoded images are not frames 14 to 36 but frames 1 to 23.

    


    I don't understand: is this a problem with the framerate (I want 12 fps) or with key/inter-frames?

    


    Here is the code for the different parts of the class.

    


    NOTE:

    


      

    • m_filename = "C:\tmp\test.mp4"
    • m_framerate = 12
    • m_width = 1080
    • m_height = 1080


    ctor

    


// Allocate the temporary frame that holds our generated image in RGB.
picture_rgb24 = av_frame_alloc();
picture_rgb24->pts = 0;
picture_rgb24->data[0] = NULL;
picture_rgb24->linesize[0] = -1;
picture_rgb24->format = AV_PIX_FMT_RGB24;
picture_rgb24->height = m_height;
picture_rgb24->width = m_width;

// Use a power-of-two buffer alignment (32, not 24).
if ((_ret = av_image_alloc(picture_rgb24->data, picture_rgb24->linesize, m_width, m_height, (AVPixelFormat)picture_rgb24->format, 32)) < 0)
    throw ...

// Allocate the temporary frame that will be converted from RGB to YUV using the ffmpeg API.
frame_yuv420 = av_frame_alloc();
frame_yuv420->pts = 0;
frame_yuv420->data[0] = NULL;
frame_yuv420->linesize[0] = -1;
frame_yuv420->format = AV_PIX_FMT_YUV420P;
frame_yuv420->width = m_width;   // width and height were swapped here in the original
frame_yuv420->height = m_height;

if ((_ret = av_image_alloc(frame_yuv420->data, frame_yuv420->linesize, m_width, m_height, (AVPixelFormat)frame_yuv420->format, 32)) < 0)
    throw ...

init_muxer(); // see below.

m_inited = true;

// One tick per frame in the encoder time_base {1, m_framerate};
// packets are rescaled to the stream time_base when muxing.
m_pts_increment = 1;

// Context that converts RGB24 to YUV420P (used instead of a filter, as for GIF).
swsCtx = sws_getContext(m_width, m_height, AV_PIX_FMT_RGB24, m_width, m_height, AV_PIX_FMT_YUV420P, SWS_BICUBIC, 0, 0, 0);


    


    init_muxer:

    


AVOutputFormat* oformat = av_guess_format(nullptr, m_filename.c_str(), nullptr);
if (!oformat) throw ...

_ret = avformat_alloc_output_context2(&ofmt_ctx, oformat, nullptr, m_filename.c_str());
if (_ret) throw ...

AVCodec *codec = avcodec_find_encoder(oformat->video_codec);
if (!codec) throw ...

AVStream *stream = avformat_new_stream(ofmt_ctx, codec);
if (!stream) throw ...

o_codec_ctx = avcodec_alloc_context3(codec);
if (!o_codec_ctx) throw ...

stream->codecpar->codec_id = oformat->video_codec;
stream->codecpar->codec_type = AVMEDIA_TYPE_VIDEO;
stream->codecpar->width = m_width;
stream->codecpar->height = m_height;
stream->codecpar->format = AV_PIX_FMT_YUV420P;
stream->codecpar->bit_rate = 400000;

avcodec_parameters_to_context(o_codec_ctx, stream->codecpar);
o_codec_ctx->time_base = { 1, m_framerate };
stream->time_base = o_codec_ctx->time_base; // a hint only; the muxer may change it in avformat_write_header

// gop_size == 0 forces intra-only encoding, so no B-frames are generated.
o_codec_ctx->max_b_frames = 0;
o_codec_ctx->gop_size = 0;
o_codec_ctx->b_quant_offset = 0;
//o_codec_ctx->framerate = { m_framerate , 1 };

// MP4 expects codec extradata (SPS/PPS) in the stream header rather than in-band.
if (ofmt_ctx->oformat->flags & AVFMT_GLOBALHEADER)
    o_codec_ctx->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;

if (stream->codecpar->codec_id == AV_CODEC_ID_H264)
    av_opt_set(o_codec_ctx, "preset", "ultrafast", 0);      // fastest x264 preset (not lossless)
else if (stream->codecpar->codec_id == AV_CODEC_ID_H265)
    av_opt_set(o_codec_ctx, "preset", "ultrafast", 0);      // fastest x265 preset (not lossless)

if ((_ret = avcodec_open2(o_codec_ctx, codec, NULL)) < 0)
    throw ...

// Copy back after opening, so extradata generated by the encoder reaches the stream.
avcodec_parameters_from_context(stream->codecpar, o_codec_ctx);

if ((_ret = avio_open(&ofmt_ctx->pb, m_filename.c_str(), AVIO_FLAG_WRITE)) < 0)
    throw ...

if ((_ret = avformat_write_header(ofmt_ctx, NULL)) < 0)
    throw ...

av_dump_format(ofmt_ctx, 0, m_filename.c_str(), 1);


    


    add_frame:

    


// Loop to transfer our image format into the ffmpeg buffer.
for (int y = 0; y < m_height; y++)
{
    for (int x = 0; x < m_width; x++)
    {
        // RGB24 is packed: 3 bytes per pixel, rows padded to linesize.
        const int idx = y * picture_rgb24->linesize[0] + x * 3;
        picture_rgb24->data[0][idx] = ...;
        picture_rgb24->data[0][idx + 1] = ...;
        picture_rgb24->data[0][idx + 2] = ...;
    }
}

// From RGB to YUV
sws_scale(swsCtx, (const uint8_t * const *)picture_rgb24->data, picture_rgb24->linesize, 0, m_height, frame_yuv420->data, frame_yuv420->linesize);

// Mux the YUV frame
muxing_one_frame(frame_yuv420);

// Advance the pts for the next frame.
picture_rgb24->pts += m_pts_increment;
frame_yuv420->pts += m_pts_increment;


    


    muxing_one_frame:

    


int ret = avcodec_send_frame(o_codec_ctx, frame); // frame == NULL puts the encoder into drain mode
AVPacket *pkt = av_packet_alloc(); // already initialized; av_init_packet() is deprecated and unneeded

while (ret >= 0) {
    ret = avcodec_receive_packet(o_codec_ctx, pkt);
    // EAGAIN is not a failure: the encoder simply needs more input before it
    // can emit a packet (x264 buffers frames internally), which explains the
    // "errors" on the first frames.
    if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF) break;
    // Timestamps come out in the encoder time_base; rescale to the stream time_base.
    av_packet_rescale_ts(pkt, o_codec_ctx->time_base, ofmt_ctx->streams[0]->time_base);
    av_write_frame(ofmt_ctx, pkt);
    av_packet_unref(pkt);
}
av_packet_free(&pkt);


    


    close_file:

    


muxing_one_frame(NULL); // drain the buffered (delayed) packets out of the encoder before finalizing
av_write_trailer(ofmt_ctx);
avio_close(ofmt_ctx->pb);