
Other articles (65)

  • Contribute to its translation

    10 April 2011

    You can help us improve the wording used in the software, or translate it into any new language, allowing it to reach new linguistic communities.
    To do so, we use SPIP's translation interface, where all of MediaSPIP's language modules are available. You just need to subscribe to the translators' mailing list to ask for more information.
    MediaSPIP is currently available only in French and (...)

  • Configuring language support

    15 November 2010

    Accessing the configuration and adding supported languages
    To configure support for new languages, go to the "Administer" section of the site.
    From there, the navigation menu gives access to a "Language management" section where support for new languages can be enabled.
    Each newly added language can still be disabled as long as no object has been created in that language; otherwise it is greyed out in the configuration and (...)

  • Authorizations overridden by plugins

    27 April 2010

    MediaSPIP core
    autoriser_auteur_modifier(), so that visitors can edit their own information on the authors page

On other sites (10444)

  • Choose Program 2 in ffmpeg

    11 June 2020, by jilboobs seksi

    I am trying to stream a video and I want to select Program 2 from this ffmpeg input. How can I do that?

    



    Input #0, hls, from 'https://videodelivery.net/d0b94c4c5e737af01ff8f6a56e5fc1aa/manifest/video.m3u8':
Duration: 00:34:11.80, start: 0.062089, bitrate: N/A
Program 0
Metadata:
variant_bitrate : 400000
Stream #0:0(en): Audio: aac (LC) ([15][0][0][0] / 0x000F), 48000 Hz, stereo, fltp, 4 kb/s (default)
Metadata:
variant_bitrate : 5200000
comment         : eng
Stream #0:1: Video: h264 (Constrained Baseline) ([27][0][0][0] / 0x001B), yuv420p, 426x240 [SAR 640:639 DAR 16:9], 23.98 fps, 23.98 tbr, 90k tbn, 47.95 tbc
Metadata:
variant_bitrate : 400000
Program 1
Metadata:
variant_bitrate : 800000
Stream #0:0(en): Audio: aac (LC) ([15][0][0][0] / 0x000F), 48000 Hz, stereo, fltp, 4 kb/s (default)
Metadata:
variant_bitrate : 5200000
comment         : eng
Stream #0:2: Video: h264 (Main) ([27][0][0][0] / 0x001B), yuv420p, 640x360 [SAR 1:1 DAR 16:9], 23.98 fps, 23.98 tbr, 90k tbn, 47.95 tbc
Metadata:
variant_bitrate : 800000
Program 2
Metadata:
variant_bitrate : 1800000
Stream #0:0(en): Audio: aac (LC) ([15][0][0][0] / 0x000F), 48000 Hz, stereo, fltp, 4 kb/s (default)
Metadata:
variant_bitrate : 5200000
comment         : eng
Stream #0:3: Video: h264 (Main) ([27][0][0][0] / 0x001B), yuv420p, 854x480 [SAR 1280:1281 DAR 16:9], 23.98 fps, 23.98 tbr, 90k tbn, 47.95 tbc
Metadata:
variant_bitrate : 1800000
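
    For reference, ffmpeg's -map option accepts a program stream specifier, so selecting Program 2 might look like the sketch below (the output name and the -c copy choice are assumptions, not something given above):

ffmpeg -i 'https://videodelivery.net/d0b94c4c5e737af01ff8f6a56e5fc1aa/manifest/video.m3u8' \
  -map 0:p:2 -c copy output.ts

    Here -map 0:p:2 selects every stream belonging to the program with id 2 (in this dump, the 854x480 video plus its audio), and -c copy passes them through without re-encoding.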


    


  • Transcode HLS segments individually with FFMPEG

    9 July 2020, by Mathix420

    I'm trying to transcode a video to HLS by first splitting it into segments without re-encoding, and then transcoding each segment individually. The goal is to transcode the segments on multiple EC2 instances in parallel to save time.

    


    I am using this script right now:

    


    # Split input file in multiple segments

ffmpeg -hide_banner -y -i $input -c copy -map 0 -an -segment_time 4 -reset_timestamps 1 -f segment output%03d.webm

# Transcode each segments in multiple resolutions

find . -name 'output*.webm' -exec ffmpeg -hide_banner -y -i {} \
  -vf "scale=-2:360" -c:v libx264 -profile:v main -crf 20 -sc_threshold 0 -b:v 800k -maxrate 856k -bufsize 1200k {}.360p.ts \
  -vf "scale=-2:480" -c:v libx264 -profile:v main -crf 20 -sc_threshold 0 -b:v 1400k -maxrate 1498k -bufsize 2100k {}.480p.ts \
  -vf "scale=-2:720" -c:v libx264 -profile:v main -crf 20 -sc_threshold 0 -b:v 2800k -maxrate 2996k -bufsize 4200k {}.720p.ts \
  -vf "scale=-2:1080" -c:v libx264 -profile:v main -crf 20 -sc_threshold 0 -b:v 5000k -maxrate 5350k -bufsize 7500k {}.1080p.ts \;
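
    Since the stated goal is to run the per-segment transcodes in parallel, the same loop can also be driven by GNU parallel instead of find -exec; a sketch, assuming parallel is installed (on one machine it saturates the local cores; across EC2 instances each node would take its own slice of the segment list):

# One job per segment (sketch; shown for the 360p rung only).
find . -name 'output*.webm' | parallel ffmpeg -hide_banner -y -i {} \
  -vf "scale=-2:360" -c:v libx264 -profile:v main -crf 20 -sc_threshold 0 \
  -b:v 800k -maxrate 856k -bufsize 1200k {}.360p.ts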


    


    But when I tried to get all the segment durations to build an m3u8 playlist (with the command below)

    


    # List segments duration

find . -name 'output*.webm.360p.ts' \
  -exec echo -n {} \; \
  -exec ffprobe -v error -show_entries format=duration -of default=noprint_wrappers=1:nokey=1 {} \;


    


    I got this result

    


    output000.webm.360p.ts 5.120000
output001.webm.360p.ts 5.120000
output002.webm.360p.ts 4.400000
output003.webm.360p.ts 5.480000
output004.webm.360p.ts 0.360000
output005.webm.360p.ts 5.120000
output006.webm.360p.ts 4.960000
output007.webm.360p.ts 0.001000


    


    I can't figure out why my output004 is only 0.360000 seconds long.

    


    When I try to play it with VLC, it shows just one or two frames and then fails with main decoder error: buffer deadlock prevented.
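
    A plausible cause, offered as a guess rather than a diagnosis: with -c copy, the segment muxer can only cut on keyframes, so -segment_time 4 yields segments whose real durations depend on where the source keyframes fall, and a keyframe landing just after a boundary produces a very short segment like output004. The keyframe spacing of the source can be inspected with ffprobe (input.webm stands in for the real input):

# Print the timestamp of every key frame in the source video.
ffprobe -v error -select_streams v:0 \
  -show_entries packet=pts_time,flags -of csv=p=0 input.webm | grep K

    If the spacing is irregular, re-encoding once with forced keyframes (for example -force_key_frames "expr:gte(t,n_forced*4)") before splitting makes every 4-second boundary cuttable.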

    


    Thanks for trying to help me!
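
    As for assembling the playlist once the durations are correct, the same ffprobe call can feed the #EXTINF entries directly; a minimal sketch (the target duration of 6 is an assumption and must be at least the longest segment duration, rounded):

# Build a simple VOD media playlist for the 360p segments.
{
  echo "#EXTM3U"
  echo "#EXT-X-VERSION:3"
  echo "#EXT-X-TARGETDURATION:6"
  echo "#EXT-X-MEDIA-SEQUENCE:0"
  for f in output*.webm.360p.ts; do
    d=$(ffprobe -v error -show_entries format=duration \
      -of default=noprint_wrappers=1:nokey=1 "$f")
    echo "#EXTINF:$d,"
    echo "$f"
  done
  echo "#EXT-X-ENDLIST"
} > 360p.m3u8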

    


  • Encoding to h264 fails to send some frames using the FFmpeg C API

    8 July 2020, by Vuwox

    Using the FFmpeg C API, I'm trying to push generated images into an MP4 file.

    


    When I push frames one by one, the muxing seems to fail on avcodec_receive_packet(...), which returns AVERROR(EAGAIN) for the first frames; after a while it starts adding frames to the file, but not the ones I just sent.

    


    What I mean is that when I push frames 1 to 13, I get errors, but from frame 14 to the end (36) the frames are added to the video; the encoded images, however, are not frames 14 to 36 but frames 1 to 23.

    


    I don't understand: is this a problem with the framerate (I want 12 fps), or with key/inter frames?
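
    One point worth noting before the code: AVERROR(EAGAIN) from avcodec_receive_packet() is the documented way for the encoder to say it is still buffering input and has no packet ready yet, so the first frames are not lost, only delayed. Whatever is still buffered at the end of the stream has to be drained explicitly; a minimal sketch of that drain, reusing o_codec_ctx and ofmt_ctx from the code below:

// Enter draining mode by sending a NULL frame, then read packets until none remain.
avcodec_send_frame(o_codec_ctx, NULL);

AVPacket *pkt = av_packet_alloc();
for (;;) {
    int ret = avcodec_receive_packet(o_codec_ctx, pkt);
    if (ret < 0)
        break;                      // AVERROR_EOF: encoder fully drained
    av_write_frame(ofmt_ctx, pkt);  // write the delayed packet
    av_packet_unref(pkt);
}
av_packet_free(&pkt);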

    


    Here is the code for the different parts of the class.

    


    NOTE:

    • m_filename = "C:\tmp\test.mp4"
    • m_framerate = 12
    • m_width = 1080
    • m_height = 1080


    ctor:

    


// Allocate the temporary buffer that holds our generated image in RGB.
picture_rgb24 = av_frame_alloc();
picture_rgb24->pts = 0;
picture_rgb24->data[0] = NULL;
picture_rgb24->linesize[0] = -1;
picture_rgb24->format = AV_PIX_FMT_RGB24;
picture_rgb24->height = m_height;
picture_rgb24->width = m_width;

// The alignment argument must be a power of two; the original value of 24 was a bug.
if ((_ret = av_image_alloc(picture_rgb24->data, picture_rgb24->linesize, m_width, m_height, (AVPixelFormat)picture_rgb24->format, 32)) < 0)
    throw ...

// Allocate the temporary frame that will be converted from RGB to YUV using the ffmpeg api.
frame_yuv420 = av_frame_alloc();
frame_yuv420->pts = 0;
frame_yuv420->data[0] = NULL;
frame_yuv420->linesize[0] = -1;
frame_yuv420->format = AV_PIX_FMT_YUV420P;
frame_yuv420->width = m_width;   // width and height were swapped in the original
frame_yuv420->height = m_height;

if ((_ret = av_image_alloc(frame_yuv420->data, frame_yuv420->linesize, m_width, m_height, (AVPixelFormat)frame_yuv420->format, 32)) < 0)
    throw ...

init_muxer(); // see below.

m_inited = true;

m_pts_increment = av_rescale_q(1, { 1, m_framerate }, ofmt_ctx->streams[0]->time_base);

// Context that converts RGB24 to YUV420P (used here instead of a filter graph, as for GIF).
swsCtx = sws_getContext(m_width, m_height, AV_PIX_FMT_RGB24, m_width, m_height, AV_PIX_FMT_YUV420P, SWS_BICUBIC, 0, 0, 0);


    


    init_muxer:

    


    AVOutputFormat* oformat = av_guess_format(nullptr, m_filename.c_str(), nullptr);
if (!oformat) throw ...

_ret = avformat_alloc_output_context2(&ofmt_ctx, oformat, nullptr, m_filename.c_str());
if (_ret) throw ...

AVCodec *codec = avcodec_find_encoder(oformat->video_codec);
if (!codec) throw ...

AVStream *stream = avformat_new_stream(ofmt_ctx, codec);
if (!stream) throw ...

o_codec_ctx = avcodec_alloc_context3(codec);
if (!o_codec_ctx) throw ...

stream->codecpar->codec_id = oformat->video_codec;
stream->codecpar->codec_type = AVMEDIA_TYPE_VIDEO;
stream->codecpar->width = m_width;
stream->codecpar->height = m_height;
stream->codecpar->format = AV_PIX_FMT_YUV420P;
stream->codecpar->bit_rate = 400000;

avcodec_parameters_to_context(o_codec_ctx, stream->codecpar);
o_codec_ctx->time_base = { 1, m_framerate };

// With gop_size == 0 we ask for intra-only encoding, so no B-frames are generated.
o_codec_ctx->max_b_frames = 0;
o_codec_ctx->gop_size = 0;
o_codec_ctx->b_quant_offset = 0;
//o_codec_ctx->framerate = { m_framerate , 1 };

if (stream->codecpar->codec_id == AV_CODEC_ID_H264)
    av_opt_set(o_codec_ctx, "preset", "ultrafast", 0);      // fastest x264 preset (not lossless)
else if (stream->codecpar->codec_id == AV_CODEC_ID_H265)
    av_opt_set(o_codec_ctx, "preset", "ultrafast", 0);      // fastest x265 preset (not lossless)

avcodec_parameters_from_context(stream->codecpar, o_codec_ctx);

if ((_ret = avcodec_open2(o_codec_ctx, codec, NULL)) < 0)
    throw ...

if ((_ret = avio_open(&ofmt_ctx->pb, m_filename.c_str(), AVIO_FLAG_WRITE)) < 0)
    throw ...

if ((_ret = avformat_write_header(ofmt_ctx, NULL)) < 0)
    throw ...

av_dump_format(ofmt_ctx, 0, m_filename.c_str(), 1);


    


    add_frame:

    


// Copy our generated image into the RGB frame, row by row.
for (int y = 0; y < m_height; y++)
{
    for (int x = 0; x < m_width; x++)
    {
        const int idx = y * picture_rgb24->linesize[0] + x * 3; // 3 bytes per RGB24 pixel
        picture_rgb24->data[0][idx] = ...;
        picture_rgb24->data[0][idx + 1] = ...;
        picture_rgb24->data[0][idx + 2] = ...;
    }
}

// From RGB to YUV
sws_scale(swsCtx, (const uint8_t * const *)picture_rgb24->data, picture_rgb24->linesize, 0, m_height, frame_yuv420->data, frame_yuv420->linesize);

// mux the YUV frame
muxing_one_frame(frame_yuv420);

// Advance the pts by one frame interval, ready for the next frame.
picture_rgb24->pts += m_pts_increment;
frame_yuv420->pts += m_pts_increment;


    


    muxing_one_frame:

    


int ret = avcodec_send_frame(o_codec_ctx, frame);
AVPacket *pkt = av_packet_alloc(); // already initialized; av_init_packet() is unnecessary here

while (ret >= 0) {
    ret = avcodec_receive_packet(o_codec_ctx, pkt);
    // EAGAIN only means "send more input"; the packet arrives on a later call.
    if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF) break;
    av_write_frame(ofmt_ctx, pkt);
    av_packet_unref(pkt); // unref after every written packet
}
av_packet_free(&pkt);


    


    close_file:

    


muxing_one_frame(NULL); // flush: drain the frames still buffered in the encoder

av_write_trailer(ofmt_ctx);
avio_close(ofmt_ctx->pb);