
Other articles (33)

  • Authorizations overridden by plugins

    27 April 2010, by

    Mediaspip core
    autoriser_auteur_modifier() so that visitors can modify their information on the authors page

  • HTML5 audio and video support

    13 April 2011, by

    MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
    The MediaSPIP player used has been created specifically for MediaSPIP and can be easily adapted to fit in with a specific theme.
    For older browsers, the Flowplayer Flash fallback is used.
    MediaSPIP allows for media playback on major mobile platforms with the above (...)

  • From upload to the final video [standalone version]

    31 January 2010, by

    The path of an audio or video document through SPIPMotion is divided into three distinct stages.
    Upload and retrieval of information about the source video
    First of all, an SPIP article must be created and the "source" video document attached to it.
    When this document is attached to the article, two actions are executed in addition to the normal behaviour: retrieval of the technical information of the file's audio and video streams; generation of a thumbnail: extraction of a (...)
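    As an illustration of that first stage, here is a minimal, hypothetical C sketch (not SPIPMotion's actual code; the probe_source name is invented) showing how libavformat can read the technical information of a file's audio and video streams:

#include <libavformat/avformat.h>
#include <libavcodec/avcodec.h>
#include <libavutil/log.h>

/* Open the uploaded document and log basic information about its streams. */
int probe_source(const char *path)
{
    AVFormatContext *fmt = NULL;

    if (avformat_open_input(&fmt, path, NULL, NULL) < 0)
        return -1;                              /* file cannot be opened */
    if (avformat_find_stream_info(fmt, NULL) < 0) {
        avformat_close_input(&fmt);
        return -1;                              /* no usable stream information */
    }

    for (unsigned i = 0; i < fmt->nb_streams; i++) {
        AVCodecParameters *par = fmt->streams[i]->codecpar;
        if (par->codec_type == AVMEDIA_TYPE_VIDEO)
            av_log(NULL, AV_LOG_INFO, "video: %s, %dx%d\n",
                   avcodec_get_name(par->codec_id), par->width, par->height);
        else if (par->codec_type == AVMEDIA_TYPE_AUDIO)
            av_log(NULL, AV_LOG_INFO, "audio: %s, %d Hz\n",
                   avcodec_get_name(par->codec_id), par->sample_rate);
    }

    avformat_close_input(&fmt);
    return 0;
}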

On other sites (5559)

  • libswscale error Slice Parameters 0, 1080 are invalid

    3 May 2023, by lokit khemka

    I am trying to scale a video from 1080p to 480p. For that, I have set up the swscale context as:

    encoder_sc->sws_ctx = sws_getContext(1920, 1080, AV_PIX_FMT_YUV420P,
                                         854, 480, AV_PIX_FMT_YUV420P,
                                         SWS_BICUBIC, NULL, NULL, NULL);

    I then call the frame-scaling function as:

    sws_scale_frame(encoder->sws_ctx, input_frame, input_frame);

    However, when I do that I get the error "Slice parameters 0, 1080 are invalid". I am very new to FFmpeg and video processing in general. I could not find any solution while searching. Any help is greatly appreciated.

    EDIT: I am including the entire source code because I cannot seem to solve the issue.

typedef struct StreamingContext{
    AVFormatContext* avfc;
    AVCodec *video_avc;
    AVCodec *audio_avc;
    AVStream *video_avs;
    AVStream *audio_avs;
    AVCodecContext *video_avcc;
    AVCodecContext *audio_avcc;
    int video_index;
    int audio_index;
    char* filename;
    struct SwsContext *sws_ctx;
}StreamingContext;


typedef struct StreamingParams{
    char copy_video;
    char copy_audio;
    char *output_extension;
    char *muxer_opt_key;
    char *muxer_opt_value;
    char *video_codec;
    char *audio_codec;
    char *codec_priv_key;
    char *codec_priv_value;
}StreamingParams;


int prepare_video_encoder(StreamingContext *encoder_sc, AVCodecContext *decoder_ctx, AVRational input_framerate,
                          StreamingParams sp)
{
    encoder_sc->video_avs = avformat_new_stream(encoder_sc->avfc, NULL);
    encoder_sc->video_avc = avcodec_find_encoder_by_name(sp.video_codec);
    if (!encoder_sc->video_avc)
    {
        logging("Cannot find the Codec.");
        return -1;
    }

    encoder_sc->video_avcc = avcodec_alloc_context3(encoder_sc->video_avc);
    if (!encoder_sc->video_avcc)
    {
        logging("Could not allocate memory for Codec Context.");
        return -1;
    }

    av_opt_set(encoder_sc->video_avcc->priv_data, "preset", "fast", 0);
    if (sp.codec_priv_key && sp.codec_priv_value)
        av_opt_set(encoder_sc->video_avcc->priv_data, sp.codec_priv_key, sp.codec_priv_value, 0);

    encoder_sc->video_avcc->height = decoder_ctx->height;
    encoder_sc->video_avcc->width = decoder_ctx->width;
    encoder_sc->video_avcc->sample_aspect_ratio = decoder_ctx->sample_aspect_ratio;

    if (encoder_sc->video_avc->pix_fmts)
        encoder_sc->video_avcc->pix_fmt = encoder_sc->video_avc->pix_fmts[0];
    else
        encoder_sc->video_avcc->pix_fmt = decoder_ctx->pix_fmt;

    encoder_sc->video_avcc->bit_rate = 2 * 1000 * 1000;
    encoder_sc->video_avcc->rc_buffer_size = 4 * 1000 * 1000;
    encoder_sc->video_avcc->rc_max_rate = 2 * 1000 * 1000;
    encoder_sc->video_avcc->rc_min_rate = 2.5 * 1000 * 1000;

    encoder_sc->video_avcc->time_base = av_inv_q(input_framerate);
    encoder_sc->video_avs->time_base = encoder_sc->video_avcc->time_base;

    //Creating Scaling Context
    encoder_sc->sws_ctx = sws_getContext(1920, 1080, decoder_ctx->pix_fmt,
                                         854, 480, encoder_sc->video_avcc->pix_fmt,
                                         SWS_BICUBIC, NULL, NULL, NULL);
    if (!encoder_sc->sws_ctx){logging("Cannot Create Scaling Context."); return -1;}

    if (avcodec_open2(encoder_sc->video_avcc, encoder_sc->video_avc, NULL) < 0)
    {
        logging("Could not open the Codec.");
        return -1;
    }
    avcodec_parameters_from_context(encoder_sc->video_avs->codecpar, encoder_sc->video_avcc);
    return 0;
}



int transcode_video(StreamingContext *decoder, StreamingContext *encoder, AVPacket *input_packet, AVFrame *input_frame, AVFrame *scaled_frame)
{
    int response = avcodec_send_packet(decoder->video_avcc, input_packet);
    if (response < 0)
    {
        logging("Error while sending the Packet to Decoder: %s", av_err2str(response));
        return response;
    }

    while (response >= 0)
    {
        response = avcodec_receive_frame(decoder->video_avcc, input_frame);
        
        if (response == AVERROR(EAGAIN) || response == AVERROR_EOF)
        {
            break;
        }
        else if (response < 0)
        {
            logging("Error while receiving frame from Decoder: %s", av_err2str(response));
            return response;
        }
        if (response >= 0)
        {
            scaled_frame->format = encoder->video_avcc->pix_fmt;
            scaled_frame->width = 854;
            scaled_frame->height = 480;
            sws_scale_frame(encoder->sws_ctx, scaled_frame, input_frame);
            //ERROR is in the scaled_frame
            if (encode_video(decoder, encoder, scaled_frame)) 
                return -1;
        }

        av_frame_unref(input_frame);
    }
    return 0;
}
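
    For reference, here is a minimal sketch of how sws_scale_frame() is usually driven. It is an illustration under assumptions (the downscale_frame helper and the fixed 854x480 target are invented for the example), not a confirmed fix for the error above: the scaler is built from the decoded frame's own geometry, the destination frame only has its format, width and height set, and libswscale allocates the destination buffers itself.

#include <libswscale/swscale.h>
#include <libavutil/frame.h>
#include <libavutil/pixfmt.h>

/* Hypothetical helper: scale one decoded frame to 854x480 YUV420P.
 * In real code the SwsContext would be created once and reused. */
static int downscale_frame(const AVFrame *decoded, AVFrame *scaled)
{
    struct SwsContext *ctx = sws_getContext(decoded->width, decoded->height,
                                            (enum AVPixelFormat)decoded->format,
                                            854, 480, AV_PIX_FMT_YUV420P,
                                            SWS_BICUBIC, NULL, NULL, NULL);
    if (!ctx)
        return -1;

    /* The destination only needs its geometry set; its buffers are
     * allocated by libswscale during the call. */
    scaled->format = AV_PIX_FMT_YUV420P;
    scaled->width  = 854;
    scaled->height = 480;

    /* Note the argument order: destination frame first, source frame second. */
    int ret = sws_scale_frame(ctx, scaled, decoded);

    sws_freeContext(ctx);
    return ret;
}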

  • Make 2 videos on top of each other in a 1080×1920 scene

    6 October 2023, by Byte me

    I am trying to put two videos on top of each other. Right now I am scaling both to 1080x960 using ffmpeg and putting them together using vstack. Unfortunately, no success. Can anyone help me?

    


      ffmpeg()
      .input('./placeholder.mp4').videoCodec('copy')
      .input("./scaled_YT.mp4").videoCodec('copy')
      .complexFilter([
        `[0:v]scale=1080x960[v0];[1:v]scale=1080x960[v1];[v0][v1]vstack=inputs=2[v]`
    ], ['v'])
      .toFormat('mp4')
      .on('end', () => {
          console.log('Files have been merged!');
      })
      .on('error', (err) => {
          console.error('Error:', err)
      })
      .save(outputPath);


    


    Error: Error: ffmpeg exited with code 1:
        at ChildProcess.<anonymous> (D:\Discord Bots\TEMP_TEST\done_projects\videoEditor_bot\node_modules\fluent-ffmpeg\lib\processor.js:182:22)
        at ChildProcess.emit (node:events:513:28)
        at ChildProcess._handle.onexit (node:internal/child_process:291:12)

  • EC2 for video-encoding

    24 September 2012, by TK Kocheran

    I have a potential job which will require me to do some video encoding with FFmpeg and x264. I'll have a series of files which I'll need to encode once; then I'll be able to bring down the instances. Since I'm not really sure of the resource utilization of x264 and FFmpeg, what kind of instances should I get? I'm thinking either a

    High-CPU Extra Large Instance

    7 GB of memory
    20 EC2 Compute Units (8 virtual cores with 2.5 EC2 Compute Units each)
    1690 GB of instance storage
    64-bit platform
    I/O Performance: High
    API name: c1.xlarge

    or, alternatively a

    Cluster GPU Quadruple Extra Large Instance

    22 GB of memory
    33.5 EC2 Compute Units (2 x Intel Xeon X5570, quad-core “Nehalem” architecture)
    2 x NVIDIA Tesla “Fermi” M2050 GPUs
    1690 GB of instance storage
    64-bit platform
    I/O Performance: Very High (10 Gigabit Ethernet)
    API name: cg1.4xlarge

    What should I use? Does x264/FFmpeg perform better with faster/more CPUs, or does it really pound the GPU more? In any case, the Cluster GPU seems to be the higher-performance instance. What should I prefer?