Advanced search

Media (91)

Other articles (80)

  • The user profile

    12 April 2011, by

    Each user has a profile page for editing their personal information. In the default top-of-page menu, a menu item is created automatically when MediaSPIP is initialised, visible only when the visitor is logged in to the site.
    The user can reach the profile editor from their author page; a "Modifier votre profil" (edit your profile) link in the navigation is (...)

  • Configuring language support

    15 November 2010, by

    Accessing the configuration and adding supported languages
    To enable support for new languages, go to the "Administrer" (administration) section of the site.
    From there, the navigation menu gives access to a "Gestion des langues" (language management) section where new languages can be activated.
    Each newly added language can still be deactivated as long as no object has been created in that language; once one exists, the language is greyed out in the configuration and (...)

  • Publishing on MediaSPIP

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, if your MediaSPIP installation is at version 0.2 or later. If in doubt, contact the administrator of your MediaSPIP to check.

On other sites (12048)

  • OSError: MoviePy error: the file guitar.mp4 could not be found

    9 September 2023, by dunnjm814

    I'm working on a video-to-audio converter using React and Flask/Python.
I received a 500 with this error:

    


    raise IOError(("MoviePy error: the file %s could not be found!\n"
OSError: MoviePy error: the file guitar.mp4 could not be found!
Please check that you entered the correct path.


    


    EDIT: As stated in the comments, MoviePy's VideoFileClip expects a file path. Per a suggestion, I am now writing the incoming video file to a temp directory housed in the backend of the app. The updated stack trace shows the file path printing; however, VideoFileClip is still unhappy when handed that path.
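    A minimal sketch of that temp-directory step (stdlib only; `save_upload` and its arguments are hypothetical names, not from the original handler): the upload's bytes have to actually be written to disk before MoviePy can open them by path.

    ```python
    import os

    def save_upload(stream, filename, upload_dir):
        """Write an uploaded file-like object to upload_dir and return its path.

        Flask's request.files values expose .read() like any file object;
        VideoFileClip needs a real on-disk path, not the in-memory upload.
        """
        os.makedirs(upload_dir, exist_ok=True)
        path = os.path.join(upload_dir, os.path.basename(filename))
        with open(path, "wb") as out:
            out.write(stream.read())
        return path
    ```

    In a handler this would look something like `video_file = save_upload(request.files['mp3'], filename, './temp')` before constructing `VideoFileClip(video_file)`.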

    


    The following snippet is the onSubmit for the video file upload:

    


    const onSubmit = async (e) => {
      e.preventDefault()
      const data = new FormData()
      console.log('hopefully the mp4', videoData)
      data.append('mp3', videoData)
      console.log('hopefully a form object with mp4', data)
      const response = await fetch('/api/convert', {
        method: "POST",
        body: data
      })
      if (response.ok) {
        const converted = await response.json()
        setMp3(converted)
        console.log(mp3)
      } else {
        window.alert("something went wrong :(");
      }
    }


    


    Here is a link to an image depicting the console output of my file upload.

    From within __init__.py:

    


    app = Flask(__name__)

app.config.from_object(Config)
app.register_blueprint(convert, url_prefix='/api/convert')

CORS(app)



    


    From within converter.py:

    


    import os
from flask import Blueprint, jsonify, request, send_from_directory
from werkzeug.utils import secure_filename
import imageio
from moviepy.editor import *


convert = Blueprint('convert', __name__)

@convert.route('', methods=['POST'])
def convert_mp4():
  if request.files['mp3'].filename:
    os.getcwd()
    filename = request.files['mp3'].filename
    print('hey its a file again', filename)
    safe_filename = secure_filename(filename)
    video_file = os.path.join("/temp/", safe_filename)
    print('hey its the file path', video_file)
    video_clip = VideoFileClip(video_file)
    print('hey its the VideoFileClip', video_clip)
    audio_clip = video_clip.audio
    audio_clip.write_audiofile(os.path.join("/temp/", f"{safe_filename}-converted.mp3"))

    video_clip.close()
    audio_clip.close()

    return jsonify(send_from_directory(os.path.join("/temp/", f"{safe_filename}-converted.mp3")))
  else:
    return {'error': 'something went wrong :('}




    


    In the stack trace below you can see the print statement emitting the video's file path. My only other thought on why this may not be working was that the file was getting lost in the POST request; however, the fact that it prints after my `if` check on the file leaves me pretty confused.
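    For what it's worth, the `if request.files['mp3'].filename:` test only proves the form field carried a name; it says nothing about bytes existing at the path later handed to VideoFileClip. A tiny hypothetical guard (not in the original code) makes that distinction explicit:

    ```python
    import os

    def checked_media_path(path):
        """Fail fast with a clearer message when the media file was never saved."""
        if not os.path.isfile(path):
            raise FileNotFoundError(
                f"{path} is not on disk; the upload was probably never written"
            )
        return path
    ```

    Calling this on `/temp/guitar.mp4` right before constructing the clip would turn the MoviePy error into an immediate, more specific one.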

    


    hey its a file again guitar.mp4
hey its the file path /temp/guitar.mp4
127.0.0.1 - - [22/Apr/2021 12:12:15] "POST /api/convert HTTP/1.1" 500 -
Traceback (most recent call last):
  File "/home/jasondunn/projects/audioconverter/.venv/lib/python3.8/site-packages/flask/app.py", line 2464, in __call__
    return self.wsgi_app(environ, start_response)
  File "/home/jasondunn/projects/audioconverter/.venv/lib/python3.8/site-packages/flask/app.py", line 2450, in wsgi_app
    response = self.handle_exception(e)
  File "/home/jasondunn/projects/audioconverter/.venv/lib/python3.8/site-packages/flask_cors/extension.py", line 161, in wrapped_function
    return cors_after_request(app.make_response(f(*args, **kwargs)))
  File "/home/jasondunn/projects/audioconverter/.venv/lib/python3.8/site-packages/flask/app.py", line 1867, in handle_exception
    reraise(exc_type, exc_value, tb)
  File "/home/jasondunn/projects/audioconverter/.venv/lib/python3.8/site-packages/flask/_compat.py", line 39, in reraise
    raise value
  File "/home/jasondunn/projects/audioconverter/.venv/lib/python3.8/site-packages/flask/app.py", line 2447, in wsgi_app
    response = self.full_dispatch_request()
  File "/home/jasondunn/projects/audioconverter/.venv/lib/python3.8/site-packages/flask/app.py", line 1952, in full_dispatch_request
    rv = self.handle_user_exception(e)
  File "/home/jasondunn/projects/audioconverter/.venv/lib/python3.8/site-packages/flask_cors/extension.py", line 161, in wrapped_function
    return cors_after_request(app.make_response(f(*args, **kwargs)))
  File "/home/jasondunn/projects/audioconverter/.venv/lib/python3.8/site-packages/flask/app.py", line 1821, in handle_user_exception
    reraise(exc_type, exc_value, tb)
  File "/home/jasondunn/projects/audioconverter/.venv/lib/python3.8/site-packages/flask/_compat.py", line 39, in reraise
    raise value
  File "/home/jasondunn/projects/audioconverter/.venv/lib/python3.8/site-packages/flask/app.py", line 1950, in full_dispatch_request
    rv = self.dispatch_request()
  File "/home/jasondunn/projects/audioconverter/.venv/lib/python3.8/site-packages/flask/app.py", line 1936, in dispatch_request
    return self.view_functions[rule.endpoint](**req.view_args)
  File "/home/jasondunn/projects/audioconverter/back/api/converter.py", line 20, in convert_mp4
    video_clip = VideoFileClip(video_file)
  File "/home/jasondunn/projects/audioconverter/.venv/lib/python3.8/site-packages/moviepy/video/io/VideoFileClip.py", line 88, in __init__
    self.reader = FFMPEG_VideoReader(filename, pix_fmt=pix_fmt,
  File "/home/jasondunn/projects/audioconverter/.venv/lib/python3.8/site-packages/moviepy/video/io/ffmpeg_reader.py", line 35, in __init__
    infos = ffmpeg_parse_infos(filename, print_infos, check_duration,
  File "/home/jasondunn/projects/audioconverter/.venv/lib/python3.8/site-packages/moviepy/video/io/ffmpeg_reader.py", line 270, in ffmpeg_parse_infos
    raise IOError(("MoviePy error: the file %s could not be found!\n"
OSError: MoviePy error: the file /temp/guitar.mp4 could not be found!
Please check that you entered the correct path.


    


    Thanks in advance for taking a look, and for any advice. First official post on Stack Overflow :)

    


  • ffmpeg.wasm - How to do literally anything with a blob url

    24 November 2024, by SeriousLee

    I'm using ffmpeg.wasm for the first time and I can't get anything working beyond loading it. I have this function that does nothing in particular (I got it from the Vite + React example in the docs, slightly modified), and all I want is to pass it a blob URL like blob:http://localhost:5173/c7a9ea7c-aa26-4f4f-9c80-11b8aef3e81f, run it through the function, and get anything back. Instead, it hangs on the ffmpeg.exec command and never completes. And yes, I've confirmed that the input blob works: it's an 8 MB, 12-second mp4 clip.

    


    Here's the function:

    


    const processOutputVideo = async (videoURL) => {
      const ffmpeg = ffmpegRef.current;

      await ffmpeg.writeFile("input.mp4", await fetchFile(videoURL));
      await ffmpeg.exec(["-i", "input.mp4", "output.mp4"]);

      const fileData = await ffmpeg.readFile("output.mp4");
      const blob = new Blob([fileData.buffer], { type: "video/mp4" });
      const blobUrl = URL.createObjectURL(blob);

      return blobUrl;
    };


    


    And here are the FFmpeg logs from my console:

    


    [FFMPEG stderr] ffmpeg version 5.1.4 Copyright (c) 2000-2023 the FFmpeg developers
Post.jsx:35 [FFMPEG stderr]   built with emcc (Emscripten gcc/clang-like replacement + linker emulating GNU ld) 3.1.40 (5c27e79dd0a9c4e27ef2326841698cdd4f6b5784)
Post.jsx:35 [FFMPEG stderr]   configuration: --target-os=none --arch=x86_32 --enable-cross-compile --disable-asm --disable-stripping --disable-programs --disable-doc --disable-debug --disable-runtime-cpudetect --disable-autodetect --nm=emnm --ar=emar --ranlib=emranlib --cc=emcc --cxx=em++ --objcc=emcc --dep-cc=emcc --extra-cflags='-I/opt/include -O3 -msimd128 -sUSE_PTHREADS -pthread' --extra-cxxflags='-I/opt/include -O3 -msimd128 -sUSE_PTHREADS -pthread' --enable-gpl --enable-libx264 --enable-libx265 --enable-libvpx --enable-libmp3lame --enable-libtheora --enable-libvorbis --enable-libopus --enable-zlib --enable-libwebp --enable-libfreetype --enable-libfribidi --enable-libass --enable-libzimg
Post.jsx:35 [FFMPEG stderr]   libavutil      57. 28.100 / 57. 28.100
Post.jsx:35 [FFMPEG stderr]   libavcodec     59. 37.100 / 59. 37.100
Post.jsx:35 [FFMPEG stderr]   libavformat    59. 27.100 / 59. 27.100
Post.jsx:35 [FFMPEG stderr]   libavdevice    59.  7.100 / 59.  7.100
Post.jsx:35 [FFMPEG stderr]   libavfilter     8. 44.100 /  8. 44.100
Post.jsx:35 [FFMPEG stderr]   libswscale      6.  7.100 /  6.  7.100
Post.jsx:35 [FFMPEG stderr]   libswresample   4.  7.100 /  4.  7.100
Post.jsx:35 [FFMPEG stderr]   libpostproc    56.  6.100 / 56.  6.100
Post.jsx:35 [FFMPEG stderr] Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'input.mp4':
Post.jsx:35 [FFMPEG stderr]   Metadata:
Post.jsx:35 [FFMPEG stderr]     major_brand     : mp42
Post.jsx:35 [FFMPEG stderr]     minor_version   : 0
Post.jsx:35 [FFMPEG stderr]     compatible_brands: mp42mp41isomavc1
Post.jsx:35 [FFMPEG stderr]     creation_time   : 2019-03-15T17:39:05.000000Z
Post.jsx:35 [FFMPEG stderr]   Duration: 00:00:12.82, start: 0.000000, bitrate: 5124 kb/s
Post.jsx:35 [FFMPEG stderr]   Stream #0:0[0x1](und): Video: h264 (High) (avc1 / 0x31637661), yuv420p(tv, bt709, progressive), 1920x1080, 4985 kb/s, 29.97 fps, 29.97 tbr, 30k tbn (default)
Post.jsx:35 [FFMPEG stderr]     Metadata:
Post.jsx:35 [FFMPEG stderr]       creation_time   : 2019-03-15T17:39:05.000000Z
Post.jsx:35 [FFMPEG stderr]       handler_name    : L-SMASH Video Handler
Post.jsx:35 [FFMPEG stderr]       vendor_id       : [0][0][0][0]
Post.jsx:35 [FFMPEG stderr]       encoder         : AVC Coding
Post.jsx:35 [FFMPEG stderr]   Stream #0:1[0x2](und): Audio: aac (LC) (mp4a / 0x6134706D), 48000 Hz, stereo, fltp, 137 kb/s (default)
Post.jsx:35 [FFMPEG stderr]     Metadata:
Post.jsx:35 [FFMPEG stderr]       creation_time   : 2019-03-15T17:39:05.000000Z
Post.jsx:35 [FFMPEG stderr]       handler_name    : L-SMASH Audio Handler
Post.jsx:35 [FFMPEG stderr]       vendor_id       : [0][0][0][0]
Post.jsx:35 [FFMPEG stderr] Stream mapping:
Post.jsx:35 [FFMPEG stderr]   Stream #0:0 -> #0:0 (h264 (native) -> h264 (libx264))
Post.jsx:35 [FFMPEG stderr]   Stream #0:1 -> #0:1 (aac (native) -> aac (native))
Post.jsx:35 [FFMPEG stderr] [libx264 @ 0x154e4f0] using cpu capabilities: none!
Post.jsx:35 [FFMPEG stderr] [libx264 @ 0x154e4f0] profile High, level 4.0, 4:2:0, 8-bit
Post.jsx:35 [FFMPEG stderr] [libx264 @ 0x154e4f0] 264 - core 164 - H.264/MPEG-4 AVC codec - Copyleft 2003-2022 - http://www.videolan.org/x264.html - options: cabac=1 ref=3 deblock=1:0:0 analyse=0x3:0x113 me=hex subme=7 psy=1 psy_rd=1.00:0.00 mixed_ref=1 me_range=16 chroma_me=1 trellis=1 8x8dct=1 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_offset=-2 threads=6 lookahead_threads=1 sliced_threads=0 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=3 b_pyramid=2 b_adapt=1 b_bias=0 direct=1 weightb=1 open_gop=0 weightp=2 keyint=250 keyint_min=25 scenecut=40 intra_refresh=0 rc_lookahead=40 rc=crf mbtree=1 crf=23.0 qcomp=0.60 qpmin=0 qpmax=69 qpstep=4 ip_ratio=1.40 aq=1:1.00


    


    And it just hangs there. When I use the video URL from the official example (https://raw.githubusercontent.com/ffmpegwasm/testdata/master/video-15s.avi), it doesn't hang: the function completes and returns a blob URL in the same format as the first one I showed, and this is what the FFmpeg output looks like in my console in that case:

    


    Post.jsx:35 [FFMPEG stderr] [libx264 @ 0xdf3000] frame P:160   Avg QP:23.62  size:  1512
Post.jsx:35 [FFMPEG stderr] [libx264 @ 0xdf3000] frame B:385   Avg QP:26.75  size:   589
Post.jsx:35 [FFMPEG stderr] [libx264 @ 0xdf3000] consecutive B-frames:  5.5%  3.6%  0.0% 90.9%
Post.jsx:35 [FFMPEG stderr] [libx264 @ 0xdf3000] mb I  I16..4: 12.6% 87.4%  0.0%
Post.jsx:35 [FFMPEG stderr] [libx264 @ 0xdf3000] mb P  I16..4:  3.8% 47.5%  1.6%  P16..4: 12.9%  7.4%  5.0%  0.0%  0.0%    skip:21.7%
Post.jsx:35 [FFMPEG stderr] [libx264 @ 0xdf3000] mb B  I16..4:  1.2% 10.3%  0.4%  B16..8: 22.3%  6.9%  1.4%  direct: 2.7%  skip:54.8%  L0:46.9% L1:40.2% BI:12.9%
Post.jsx:35 [FFMPEG stderr] [libx264 @ 0xdf3000] 8x8 transform intra:88.7% inter:74.7%
Post.jsx:35 [FFMPEG stderr] [libx264 @ 0xdf3000] coded y,uvDC,uvAC intra: 68.3% 0.0% 0.0% inter: 11.8% 0.0% 0.0%
Post.jsx:35 [FFMPEG stderr] [libx264 @ 0xdf3000] i16 v,h,dc,p: 33% 40% 24%  3%
Post.jsx:35 [FFMPEG stderr] [libx264 @ 0xdf3000] i8 v,h,dc,ddl,ddr,vr,hd,vl,hu: 15% 26% 52%  2%  1%  1%  1%  1%  3%
Post.jsx:35 [FFMPEG stderr] [libx264 @ 0xdf3000] i4 v,h,dc,ddl,ddr,vr,hd,vl,hu: 27% 21% 20%  5%  5%  5%  4%  6%  5%
Post.jsx:35 [FFMPEG stderr] [libx264 @ 0xdf3000] i8c dc,h,v,p: 100%  0%  0%  0%
Post.jsx:35 [FFMPEG stderr] [libx264 @ 0xdf3000] Weighted P-Frames: Y:12.5% UV:0.0%
Post.jsx:35 [FFMPEG stderr] [libx264 @ 0xdf3000] ref P L0: 48.9% 12.5% 22.3% 14.7%  1.6%
Post.jsx:35 [FFMPEG stderr] [libx264 @ 0xdf3000] ref B L0: 77.5% 15.7%  6.8%
Post.jsx:35 [FFMPEG stderr] [libx264 @ 0xdf3000] ref B L1: 90.9%  9.1%
Post.jsx:35 [FFMPEG stderr] [libx264 @ 0xdf3000] kb/s:242.65
Post.jsx:35 [FFMPEG stderr] Aborted()


    


    Where am I going wrong? What should I convert my input blob into? And just FYI, ChatGPT has been completely garbage at helping me solve this lmao.

    


  • How to use the FFmpeg API to make a filter overlay watermark?

    6 September 2022, by Leon Lee

    OS: Ubuntu 20.04

    


    FFmpeg: 4.4.0

    


    Test video:

    


    Input #0, hevc, from './videos/akiyo_352x288p25.265':
Duration: N/A, bitrate: N/A
Stream #0:0: Video: hevc (Main), yuv420p(tv), 352x288, 25 fps, 25 tbr, 1200k tbn, 25 tbc

    


    Test watermark:

    


    200*200.png

    


    I copied the official FFmpeg example.

    


    It compiles without errors and runs without errors, but I can't see the watermark being added.

    


    Here is my code:

    


    #include <libavcodec/avcodec.h>
#include <libavformat/avformat.h>
#include <libavutil/opt.h>
#include <libavfilter/buffersink.h>
#include <libavfilter/buffersrc.h>

int open_input_file(AVFormatContext *fmt, AVCodecContext **codecctx, AVCodec *codec, const char *filename, int index)
{
    int ret = 0;
    char msg[500];
    *codecctx = avcodec_alloc_context3(codec);
    ret = avcodec_parameters_to_context(*codecctx, fmt->streams[index]->codecpar);
    if (ret < 0)
    {
        printf("avcodec_parameters_to_context error,ret:%d\n", ret);
        return -1;
    }

    // open the decoder
    ret = avcodec_open2(*codecctx, codec, NULL);
    if (ret < 0)
    {
        printf("avcodec_open2 error,ret:%d\n", ret);
        return -2;
    }
    printf("pix:%d\n", (*codecctx)->pix_fmt);
    return ret;
}

int init_filter(AVFilterContext **buffersrc_ctx, AVFilterContext **buffersink_ctx, AVFilterGraph **filter_graph, AVStream *stream, AVCodecContext *codecctx, const char *filter_desc)
{
    int ret = -1;
    char args[512];
    char msg[500];
    const AVFilter *buffersrc = avfilter_get_by_name("buffer");
    const AVFilter *buffersink = avfilter_get_by_name("buffersink");

    AVFilterInOut *input = avfilter_inout_alloc();
    AVFilterInOut *output = avfilter_inout_alloc();

    AVRational time_base = stream->time_base;
    enum AVPixelFormat pix_fmts[] = {AV_PIX_FMT_YUV420P, AV_PIX_FMT_NONE};

    if (!output || !input || !filter_graph)
    {
        ret = -1;
        printf("avfilter_graph_alloc/avfilter_inout_alloc error,ret:%d\n", ret);
        goto end;
    }
    snprintf(args, sizeof(args), "video_size=%dx%d:pix_fmt=%d:time_base=%d/%d:pixel_aspect=%d/%d",
             codecctx->width, codecctx->height, codecctx->pix_fmt,
             stream->time_base.num, stream->time_base.den,
             codecctx->sample_aspect_ratio.num, codecctx->sample_aspect_ratio.den);
    ret = avfilter_graph_create_filter(buffersrc_ctx, buffersrc, "in", args, NULL, *filter_graph);
    if (ret < 0)
    {
        printf("avfilter_graph_create_filter buffersrc error,ret:%d\n", ret);
        goto end;
    }

    ret = avfilter_graph_create_filter(buffersink_ctx, buffersink, "out", NULL, NULL, *filter_graph);
    if (ret < 0)
    {
        printf("avfilter_graph_create_filter buffersink error,ret:%d\n", ret);
        goto end;
    }
    ret = av_opt_set_int_list(*buffersink_ctx, "pix_fmts", pix_fmts, AV_PIX_FMT_NONE, AV_OPT_SEARCH_CHILDREN);
    if (ret < 0)
    {
        printf("av_opt_set_int_list error,ret:%d\n", ret);
        goto end;
    }
    /*
     * The buffer source output must be connected to the input pad of
     * the first filter described by filters_descr; since the first
     * filter input label is not specified, it is set to "in" by
     * default.
     */
    output->name = av_strdup("in");
    output->filter_ctx = *buffersrc_ctx;
    output->pad_idx = 0;
    output->next = NULL;

    /*
     * The buffer sink input must be connected to the output pad of
     * the last filter described by filters_descr; since the last
     * filter output label is not specified, it is set to "out" by
     * default.
     */
    input->name = av_strdup("out");
    input->filter_ctx = *buffersink_ctx;
    input->pad_idx = 0;
    input->next = NULL;

    if ((ret = avfilter_graph_parse_ptr(*filter_graph, filter_desc, &input, &output, NULL)) < 0)
    {
        printf("avfilter_graph_parse_ptr error,ret:%d\n", ret);
        goto end;
    }

    if ((ret = avfilter_graph_config(*filter_graph, NULL)) < 0)
    {
        printf("avfilter_graph_config error,ret:%d\n", ret);
        goto end;
    }
end:
    avfilter_inout_free(&input);
    avfilter_inout_free(&output);
    return ret;
}

int main(int argc, char **argv)
{
    int ret;
    char msg[500];
    const char *filter_descr = "drawbox=x=100:y=100:w=100:h=100:color=pink@0.5"; // OK
    //const char *filter_descr = "movie=200.png[wm];[in][wm]overlay=10:10[out]"; //Test
    // const char *filter_descr = "scale=640:360,transpose=cclock";
    AVFormatContext *pFormatCtx = NULL;
    AVCodecContext *pCodecCtx;
    AVFilterContext *buffersink_ctx;
    AVFilterContext *buffersrc_ctx;
    AVFilterGraph *filter_graph;
    AVCodec *codec;
    int video_stream_index = -1;

    AVPacket packet;
    AVFrame *pFrame;
    AVFrame *pFrame_out;
    filter_graph = avfilter_graph_alloc();
    FILE *fp_yuv = fopen("test.yuv", "wb+");
    ret = avformat_open_input(&pFormatCtx, argv[1], NULL, NULL);
    if (ret < 0)
    {
        printf("avformat_open_input error,ret:%d\n", ret);
        return -1;
    }

    ret = avformat_find_stream_info(pFormatCtx, NULL);
    if (ret < 0)
    {
        printf("avformat_find_stream_info error,ret:%d\n", ret);
        return -2;
    }

    ret = av_find_best_stream(pFormatCtx, AVMEDIA_TYPE_VIDEO, -1, -1, &codec, 0);
    if (ret < 0)
    {
        printf("av_find_best_stream error,ret:%d\n", ret);
        return -3;
    }
    // got the video stream index
    video_stream_index = ret;

    av_dump_format(pFormatCtx, 0, argv[1], 0);
    if ((ret = open_input_file(pFormatCtx, &pCodecCtx, codec, argv[1], video_stream_index)) < 0)
    {
        ret = -1;
        printf("open_input_file error,ret:%d\n", ret);
        goto end;
    }

    if ((ret = init_filter(&buffersrc_ctx, &buffersink_ctx, &filter_graph, pFormatCtx->streams[video_stream_index], pCodecCtx, filter_descr)) < 0)
    {
        ret = -2;
        printf("init_filter error,ret:%d\n", ret);
        goto end;
    }
    pFrame = av_frame_alloc();
    pFrame_out = av_frame_alloc();
    while (1)
    {
        if ((ret = av_read_frame(pFormatCtx, &packet)) < 0)
            break;

        if (packet.stream_index == video_stream_index)
        {
            ret = avcodec_send_packet(pCodecCtx, &packet);
            if (ret < 0)
            {
                printf("avcodec_send_packet error,ret:%d\n", ret);
                break;
            }

            while (ret >= 0)
            {
                ret = avcodec_receive_frame(pCodecCtx, pFrame);
                if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF)
                {
                    break;
                }
                else if (ret < 0)
                {
                    printf("avcodec_receive_frame error,ret:%d\n", ret);
                    goto end;
                }

                pFrame->pts = pFrame->best_effort_timestamp;

                /* push the decoded frame into the filtergraph */
                ret = av_buffersrc_add_frame_flags(buffersrc_ctx, pFrame, AV_BUFFERSRC_FLAG_KEEP_REF);
                if (ret < 0)
                {
                    printf("av_buffersrc_add_frame_flags error,ret:%d\n", ret);
                    break;
                }

                /* pull filtered frames from the filtergraph */
                while (1)
                {
                    ret = av_buffersink_get_frame(buffersink_ctx, pFrame_out);
                    if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF)
                        break;
                    if (ret < 0)
                        goto end;
                    if (pFrame_out->format == AV_PIX_FMT_YUV420P)
                    {
                        // Y, U, V planes
                        for (int i = 0; i < pFrame_out->height; i++)
                        {
                            fwrite(pFrame_out->data[0] + pFrame_out->linesize[0] * i, 1, pFrame_out->width, fp_yuv);
                        }
                        for (int i = 0; i < pFrame_out->height / 2; i++)
                        {
                            fwrite(pFrame_out->data[1] + pFrame_out->linesize[1] * i, 1, pFrame_out->width / 2, fp_yuv);
                        }
                        for (int i = 0; i < pFrame_out->height / 2; i++)
                        {
                            fwrite(pFrame_out->data[2] + pFrame_out->linesize[2] * i, 1, pFrame_out->width / 2, fp_yuv);
                        }
                    }
                    av_frame_unref(pFrame_out);
                }
                av_frame_unref(pFrame);
            }
        }
        av_packet_unref(&packet);
    }
end:
    avcodec_free_context(&pCodecCtx);
    fclose(fp_yuv);
}
