
Other articles (96)

  • MediaSPIP 0.1 Beta version

    25 April 2011

    MediaSPIP 0.1 beta is the first version of MediaSPIP proclaimed as "usable".
    The zip file provided here only contains the sources of MediaSPIP in its standalone version.
    To get a working installation, you must manually install all software dependencies on the server.
    If you want to use this archive for an installation in "farm mode", you will also need to proceed to other manual (...)

  • HTML5 audio and video support

    13 April 2011

    MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
    The MediaSPIP player used has been created specifically for MediaSPIP and can be easily adapted to fit in with a specific theme.
    For older browsers the Flowplayer flash fallback is used.
    MediaSPIP allows for media playback on major mobile platforms with the above (...)

  • APPENDIX: Plugins used specifically for the farm

    5 March 2010

    The central/master site of the farm needs several additional plugins, beyond those used on the channels, to work properly: the Gestion de la mutualisation plugin; the inscription3 plugin, to manage registrations and requests to create a pooled instance as soon as users sign up; the verifier plugin, which provides a field-validation API (used by inscription3); the champs extras v2 plugin, required by inscription3 (...)

On other sites (5279)

  • How can I take an audio file (mp3) and a video file (mp4) as input in Java Spring?

    29 July 2022, by Jawad Un Islam Abir

    I am using this code to read the audio file, but I don't know how to take user input.

    


    InputStream inputStream = new FileInputStream("example.ogg");
    FFmpegInput input = new FFmpegInput(inputStream);
    FFmpegSourceStream stream = input.open(inputFormat);

    stream.registerStreams();

    AudioSourceSubstream audioSourceSubstream = null;
    for (MediaSourceSubstream substream : stream.getSubstreams()) {
        if (substream.getMediaType() != MediaType.AUDIO) continue;

        audioSourceSubstream = (AudioSourceSubstream) substream;
    }

    if (audioSourceSubstream == null) throw new NullPointerException();
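    The FFmpegInput object above wraps a plain InputStream, so taking user input in Spring is mostly a matter of replacing the hard-coded FileInputStream with the uploaded file's stream. A minimal sketch, assuming Spring's MultipartFile API for the upload (the controller mapping, names, and the stand-in stream below are illustrative, not from the original post):

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;

public class UploadExample {
    // In a Spring controller the upload arrives as a MultipartFile, e.g.:
    //
    //   @PostMapping("/upload")
    //   public String handle(@RequestParam("file") MultipartFile file) throws IOException {
    //       try (InputStream in = file.getInputStream()) {
    //           FFmpegInput input = new FFmpegInput(in);  // same code as before from here on
    //       }
    //       return "ok";
    //   }
    //
    // Below, a ByteArrayInputStream stands in for the uploaded file so the
    // stream-based handoff can be exercised without the Spring framework.
    static int countBytes(InputStream in) throws IOException {
        int n = 0;
        while (in.read() != -1) n++;
        return n;
    }

    public static void main(String[] args) throws IOException {
        InputStream upload = new ByteArrayInputStream(new byte[]{1, 2, 3, 4});
        System.out.println(countBytes(upload));  // 4
    }
}
```

    The point is that nothing in the decoding code needs a file path: any InputStream works, whether it comes from disk or from an HTTP multipart upload.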


    


  • How to minimize latency in an ffmpeg stream in Java?

    13 July 2022, by Taavi Sõerd

    I need to stream an ffmpeg video feed in Android Studio with minimal latency. The code below achieves that when playing on a Galaxy S21 Ultra, but when I play it on a Galaxy Tab it looks like slow motion. When I set the buffer size to 0 I get minimal latency but can't actually see the video, as it's all corrupted (all gray and colored noise).

    


    public class Decode implements Runnable {
public Activity activity;
AVFrame pFrameRGB;
SwsContext sws_ctx;
ByteBuffer bitmapBuffer;
Bitmap bmp;
byte[] array;
int imageViewWidth = 0;
int imageViewHeight = 0;
boolean imageChanged = true;
int v_stream_idx = -1;
int klv_stream_idx = -1;

boolean imageDrawMutex = false;

boolean imageIsSet = false;
ImageView imageView =  MainActivity.getmInstanceActivity().findViewById(R.id.imageView);

String mFilename = "udp://@" + MainActivity.connectionIP;
UasDatalinkLocalSet mLatestDls;

public Decode(Activity _activity) {
    this.activity = _activity;
}

public void create_decoder(AVCodecContext codec_ctx) {
    imageChanged = true;

    // Determine required buffer size and allocate buffer
    int numBytes = av_image_get_buffer_size(AV_PIX_FMT_RGBA, codec_ctx.width(),
            codec_ctx.height(), 1);
    BytePointer buffer = new BytePointer(av_malloc(numBytes));

    bmp = Bitmap.createBitmap(codec_ctx.width(), codec_ctx.height(), Bitmap.Config.ARGB_8888);

    array = new byte[codec_ctx.width() * codec_ctx.height() * 4];
    bitmapBuffer = ByteBuffer.wrap(array);

    sws_ctx = sws_getContext(
            codec_ctx.width(),
            codec_ctx.height(),
            codec_ctx.pix_fmt(),
            codec_ctx.width(),
            codec_ctx.height(),
            AV_PIX_FMT_RGBA,
            SWS_POINT,
            null,
            null,
            (DoublePointer) null
    );

    if (sws_ctx == null) {
        Log.d("app", "Can not use sws");
        throw new IllegalStateException();
    }

    av_image_fill_arrays(pFrameRGB.data(), pFrameRGB.linesize(),
            buffer, AV_PIX_FMT_RGBA, codec_ctx.width(), codec_ctx.height(), 1);
}

@Override
public void run() {
    Log.d("app", "Start decoder");

    int ret = -1, i = 0;
    String vf_path = mFilename;

    AVFormatContext fmt_ctx = new AVFormatContext(null);
    AVPacket pkt = new AVPacket();


    AVDictionary multicastDict = new AVDictionary();

    av_dict_set(multicastDict, "rtsp_transport", "udp_multicast", 0);

    av_dict_set(multicastDict, "localaddr", getIPAddress(true), 0);
    av_dict_set(multicastDict, "reuse", "1", 0);

    av_dict_set(multicastDict, "buffer_size", "0.115M", 0);

    ret = avformat_open_input(fmt_ctx, vf_path, null, multicastDict);
    if (ret < 0) {
        Log.d("app", String.format("Open video file %s failed \n", vf_path));
        byte[] error_message = new byte[1024];
        int elen = av_strerror(ret, error_message, 1024);
        String s = new String(error_message, 0, 20);
        Log.d("app", String.format("Return: %d", ret));
        Log.d("app", String.format("Message: %s", s));
        throw new IllegalStateException();
    }
    
    if (avformat_find_stream_info(fmt_ctx, (PointerPointer) null) < 0) {
        //System.exit(-1);
        Log.d("app", "Stream info not found");
    }


    avformat.av_dump_format(fmt_ctx, 0, mFilename, 0);

    int nstreams = fmt_ctx.nb_streams();

    for (i = 0; i < fmt_ctx.nb_streams(); i++) {
        if (fmt_ctx.streams(i).codecpar().codec_type() == AVMEDIA_TYPE_VIDEO) {
            v_stream_idx = i;
        }
        if (fmt_ctx.streams(i).codecpar().codec_type() == AVMEDIA_TYPE_DATA) {
            klv_stream_idx = i;
        }
    }
    if (v_stream_idx == -1) {
        Log.d("app", "Cannot find video stream");
        throw new IllegalStateException();
    } else {
        Log.d("app", String.format("Video stream %d with resolution %dx%d\n", v_stream_idx,
                fmt_ctx.streams(v_stream_idx).codecpar().width(),
                fmt_ctx.streams(v_stream_idx).codecpar().height()));
    }

    AVCodecContext codec_ctx = avcodec_alloc_context3(null);
    avcodec_parameters_to_context(codec_ctx, fmt_ctx.streams(v_stream_idx).codecpar());


    AVCodec codec = avcodec_find_decoder(codec_ctx.codec_id());


    AVDictionary avDictionary = new AVDictionary();

    av_dict_set(avDictionary, "fflags", "nobuffer", 0);


    if (codec == null) {
        Log.d("app", "Unsupported codec for video file");
        throw new IllegalStateException();
    }
    ret = avcodec_open2(codec_ctx, codec, avDictionary);
    if (ret < 0) {
        Log.d("app", "Can not open codec");
        throw new IllegalStateException();
    }

    AVFrame frm = av_frame_alloc();

    // Allocate an AVFrame structure
    pFrameRGB = av_frame_alloc();
    if (pFrameRGB == null) {
        //System.exit(-1);
        Log.d("app", "unable to init pframergb");
    }

    create_decoder(codec_ctx);

    int width = codec_ctx.width();
    int height = codec_ctx.height();

    double fps = 15;
    

    while (true) {
        try {
            Thread.sleep(1);
        } catch (Exception e) {

        }

        try {
            if (av_read_frame(fmt_ctx, pkt) >= 0) {
                if (pkt.stream_index() == v_stream_idx) {
                    avcodec_send_packet(codec_ctx, pkt);

                    if (codec_ctx.width() != width || codec_ctx.height() != height) {
                        create_decoder(codec_ctx);
                        width = codec_ctx.width();
                        height = codec_ctx.height();
                    }
                }

                if (pkt.stream_index() == klv_stream_idx) {

                    byte[] klvDataBuffer = new byte[pkt.size()];

                    for (int j = 0; j < pkt.size(); j++) {
                        klvDataBuffer[j] = pkt.data().get(j);
                    }

                    try {
                        KLV k = new KLV(klvDataBuffer, KLV.KeyLength.SixteenBytes, KLV.LengthEncoding.BER);
                        byte[] main_payload = k.getValue();

                        // decode the Uas Datalink Local Set from main_payload binary blob.
                        mLatestDls = new UasDatalinkLocalSet(main_payload);

                        if (mLatestDls != null) {

                            MainActivity.getmInstanceActivity().runOnUiThread(new Runnable() {
                                @RequiresApi(api = Build.VERSION_CODES.Q)
                                @Override
                                public void run() {
                                    MainActivity.getmInstanceActivity().updateKlv(mLatestDls);
                                }
                            });
                        }
                    } catch (Exception e) {
                        e.printStackTrace();
                    }
                    
                }

                int wasFrameDecoded = 0;
                while (wasFrameDecoded >= 0) {
                    wasFrameDecoded = avcodec_receive_frame(codec_ctx, frm);

                    if (wasFrameDecoded >= 0) {
                        // get clip fps
                        fps = 15; //av_q2d(fmt_ctx.streams(v_stream_idx).r_frame_rate());

                        sws_scale(
                                sws_ctx,
                                frm.data(),
                                frm.linesize(),
                                0,
                                codec_ctx.height(),
                                pFrameRGB.data(),
                                pFrameRGB.linesize()
                        );

                        if(!imageDrawMutex) {
                            MainActivity.getmInstanceActivity().runOnUiThread(new Runnable() {
                                @Override
                                public void run() {
                                    if (imageIsSet) {
                                        imageDrawMutex = true;
                                        pFrameRGB.data(0).position(0).get(array);
                                        bitmapBuffer.rewind();
                                        bmp.copyPixelsFromBuffer(bitmapBuffer);

                                        if (imageChanged) {
                                            (imageView).setImageBitmap(bmp);
                                            imageChanged = false;
                                        }

                                        (imageView).invalidate();
                                        imageDrawMutex = false;
                                    } else {
                                        (imageView).setImageBitmap(bmp);
                                        imageIsSet = true;
                                    }
                                }
                            });
                        }
                    }
                }
                av_packet_unref(pkt);

            }
        } catch (Exception e) {
            e.printStackTrace();
        }

        if (false) {
            Log.d("threads", "false");

            av_frame_free(frm);

            avcodec_close(codec_ctx);
            avcodec_free_context(codec_ctx);

            avformat_close_input(fmt_ctx);
        }
    }
}


    


    This code is running in Android Studio with Java. I'm quite new to this topic, so I'm not really sure where to start.
What could be the cause of this?
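    One detail worth checking in the code above: the "fflags"/"nobuffer" dictionary is passed to avcodec_open2(), but fflags is a format-level option, so it only takes effect when given to avformat_open_input(). A hedged sketch of the demuxer-side options that usually matter for latency (the values are illustrative assumptions, not tuned for these devices):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class LowLatencyOptions {
    // All four entries are AVFormatContext options, so they belong in the
    // dictionary passed to avformat_open_input(), next to the multicast settings.
    static Map<String, String> demuxerOptions() {
        Map<String, String> opts = new LinkedHashMap<>();
        opts.put("fflags", "nobuffer");    // do not buffer packets for timestamp smoothing
        opts.put("probesize", "32768");    // probe less input before streams become usable
        opts.put("analyzeduration", "0");  // skip the long stream-analysis phase
        opts.put("max_delay", "0");        // maximum demuxing delay, in microseconds
        return opts;
    }

    public static void main(String[] args) {
        // With the javacpp ffmpeg bindings this would be applied as:
        //   for (Map.Entry<String, String> e : demuxerOptions().entrySet())
        //       av_dict_set(multicastDict, e.getKey(), e.getValue(), 0);
        //   avformat_open_input(fmt_ctx, vf_path, null, multicastDict);
        System.out.println(demuxerOptions().size());  // 4
    }
}
```

    Whether these settings fix the slow-motion playback on the tablet is device-dependent; they only bound how much the demuxer buffers before handing packets to the decoder.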

    


  • Error when making an animation through ffmpeg (python3.9)

    20 April 2024, by Taehyung Ghim

    When I try to make a 2D animation map for cow tracking (matching two camera views) through ffmpeg, the following error occurs.

    


    
subprocess.CalledProcessError: Command '['ffmpeg', '-f', 'rawvideo', '-vcodec', 'rawvideo', '-s', '4000x4000', '-pix_fmt', 'rgba', '-r', '5', '-loglevel', 'error', '-i', 'pipe:', '-vcodec', 'h264', '-pix_fmt', 'yuv420p', '-metadata', 'artist=Me', '-y', '../out_detect/run7/TRACKS_ANIMATION_fused.mp4']' returned non-zero exit status 1.



    


    The full error follows:

    


Plotting the last 1800.9391813674797 frames.
INFO:Animation.save using <class>
INFO:MovieWriter._run: running command: ffmpeg -f rawvideo -vcodec rawvideo -s 4000x4000 -pix_fmt rgba -r 5 -loglevel error -i pipe: -vcodec h264 -pix_fmt yuv420p -metadata artist=Me -y ../out_detect/run7/TRACKS_ANIMATION_fused.mp4
WARNING:MovieWriter stderr:
[libopenh264 @ 0x55b93df81fc0] [OpenH264] this = 0x0x55b93df8ef10, Error:ParamValidationExt(), width > 0, height > 0, width * height <= 9437184, invalid 4000 x 4000 in dependency layer settings!
[libopenh264 @ 0x55b93df81fc0] [OpenH264] this = 0x0x55b93df8ef10, Error:WelsInitEncoderExt(), ParamValidationExt failed return 2.
[libopenh264 @ 0x55b93df81fc0] [OpenH264] this = 0x0x55b93df8ef10, Error:CWelsH264SVCEncoder::Initialize(), WelsInitEncoderExt failed.
[libopenh264 @ 0x55b93df81fc0] Initialize failed
Error initializing output stream 0:0 -- Error while opening encoder for output stream #0:0 - maybe incorrect parameters such as bit_rate, rate, width or height

Traceback (most recent call last):
  File "/home/rom/anaconda3/envs/cow_tracking_env/lib/python3.9/site-packages/matplotlib/animation.py", line 236, in saving
    yield self
  File "/home/rom/anaconda3/envs/cow_tracking_env/lib/python3.9/site-packages/matplotlib/animation.py", line 1095, in save
    writer.grab_frame(**savefig_kwargs)
  File "/home/rom/anaconda3/envs/cow_tracking_env/lib/python3.9/site-packages/matplotlib/animation.py", line 353, in grab_frame
    self.fig.savefig(self._proc.stdin, format=self.frame_format,
  File "/home/rom/anaconda3/envs/cow_tracking_env/lib/python3.9/site-packages/matplotlib/figure.py", line 3012, in savefig
    self.canvas.print_figure(fname, **kwargs)
  File "/home/rom/anaconda3/envs/cow_tracking_env/lib/python3.9/site-packages/matplotlib/backend_bases.py", line 2314, in print_figure
    result = print_method(
  File "/home/rom/anaconda3/envs/cow_tracking_env/lib/python3.9/site-packages/matplotlib/backend_bases.py", line 1643, in wrapper
    return func(*args, **kwargs)
  File "/home/rom/anaconda3/envs/cow_tracking_env/lib/python3.9/site-packages/matplotlib/_api/deprecation.py", line 412, in wrapper
    return func(*inner_args, **inner_kwargs)
  File "/home/rom/anaconda3/envs/cow_tracking_env/lib/python3.9/site-packages/matplotlib/backends/backend_agg.py", line 486, in print_raw
    fh.write(renderer.buffer_rgba())
BrokenPipeError: [Errno 32] Broken pipe

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/rom/PycharmProjects/cow_tracking_package/tracking-cows/main.py", line 330, in <module>
    inference_tracking_video(opt=args, device=dev, detector=detector, keypoint_tfm=keypoint_tfm,
  File "/home/rom/PycharmProjects/cow_tracking_package/tracking-cows/tracking.py", line 325, in inference_tracking_video
    postprocess_tracking_results(track_args=track_args, cfg_postprocess=cfg_matching_parameters.POSTPROCESS,
  File "/home/rom/PycharmProjects/cow_tracking_package/tracking-cows/postprocessing/postprocess_results.py", line 90, in postprocess_tracking_results
    postprocess_trajectories(track_args=track_args, analysis_matching_cfg=cfg_analysis)
  File "/home/rom/PycharmProjects/cow_tracking_package/tracking-cows/postprocessing/postprocess_results.py", line 58, in postprocess_trajectories
    analyse_trajectories(analysis_arguments, full_width, full_height, video_fps, frame_rate_animation)
  File "/home/rom/PycharmProjects/cow_tracking_package/tracking-cows/postprocessing/trajectory_postprocess.py", line 115, in analyse_trajectories
    create_virtual_map_animation_final(opt.save_dir, final_matching_both_cams, color_dict3, full_width,
  File "/home/rom/PycharmProjects/cow_tracking_package/tracking-cows/output/output_plot_fused_trajectory_animation.py", line 236, in create_virtual_map_animation_final
    virtual_map_animation.save(traj_file_path, writer=writer)
  File "/home/rom/anaconda3/envs/cow_tracking_env/lib/python3.9/site-packages/matplotlib/animation.py", line 1095, in save
    writer.grab_frame(**savefig_kwargs)
  File "/home/rom/anaconda3/envs/cow_tracking_env/lib/python3.9/contextlib.py", line 137, in __exit__
    self.gen.throw(typ, value, traceback)
  File "/home/rom/anaconda3/envs/cow_tracking_env/lib/python3.9/site-packages/matplotlib/animation.py", line 238, in saving
    self.finish()
  File "/home/rom/anaconda3/envs/cow_tracking_env/lib/python3.9/site-packages/matplotlib/animation.py", line 344, in finish
    self._cleanup()  # Inline _cleanup() once cleanup() is removed.
  File "/home/rom/anaconda3/envs/cow_tracking_env/lib/python3.9/site-packages/matplotlib/animation.py", line 375, in _cleanup
    raise subprocess.CalledProcessError(
subprocess.CalledProcessError: Command '['ffmpeg', '-f', 'rawvideo', '-vcodec', 'rawvideo', '-s', '4000x4000', '-pix_fmt', 'rgba', '-r', '5', '-loglevel', 'error', '-i', 'pipe:', '-vcodec', 'h264', '-pix_fmt', 'yuv420p', '-metadata', 'artist=Me', '-y', '../out_detect/run7/TRACKS_ANIMATION_fused.mp4']' returned non-zero exit status 1.


    The ffmpeg version is 4.3, built with gcc 7.3.0; the OS is Ubuntu 20.04. My conda env is below.


channels:
  - pytorch
  - defaults
dependencies:
  - _libgcc_mutex=0.1=main
  - _openmp_mutex=4.5=1_gnu
  - blas=1.0=mkl
  - bzip2=1.0.8=h7b6447c_0
  - ca-certificates=2021.10.26=h06a4308_2
  - certifi=2021.10.8=py39h06a4308_2
  - cudatoolkit=11.3.1=h2bc3f7f_2
  - ffmpeg=4.3=hf484d3e_0
  - freetype=2.11.0=h70c0345_0
  - giflib=5.2.1=h7b6447c_0
  - gmp=6.2.1=h2531618_2
  - gnutls=3.6.15=he1e5248_0
  - intel-openmp=2021.4.0=h06a4308_3561
  - jpeg=9d=h7f8727e_0
  - lame=3.100=h7b6447c_0
  - lcms2=2.12=h3be6417_0
  - ld_impl_linux-64=2.35.1=h7274673_9
  - libffi=3.3=he6710b0_2
  - libgcc-ng=9.3.0=h5101ec6_17
  - libgomp=9.3.0=h5101ec6_17
  - libiconv=1.15=h63c8f33_5
  - libidn2=2.3.2=h7f8727e_0
  - libpng=1.6.37=hbc83047_0
  - libstdcxx-ng=9.3.0=hd4cf53a_17
  - libtasn1=4.16.0=h27cfd23_0
  - libtiff=4.2.0=h85742a9_0
  - libunistring=0.9.10=h27cfd23_0
  - libuv=1.40.0=h7b6447c_0
  - libwebp=1.2.0=h89dd481_0
  - libwebp-base=1.2.0=h27cfd23_0
  - lz4-c=1.9.3=h295c915_1
  - mkl=2021.4.0=h06a4308_640
  - mkl-service=2.4.0=py39h7f8727e_0
  - mkl_fft=1.3.1=py39hd3c417c_0
  - mkl_random=1.2.2=py39h51133e4_0
  - ncurses=6.3=h7f8727e_2
  - nettle=3.7.3=hbbd107a_1
  - numpy=1.21.2=py39h20f2e39_0
  - numpy-base=1.21.2=py39h79a1101_0
  - olefile=0.46=pyhd3eb1b0_0
  - openh264=2.1.0=hd408876_0
  - openssl=1.1.1m=h7f8727e_0
  - pillow=8.4.0=py39h5aabda8_0
  - pip=21.2.4=py39h06a4308_0
  - python=3.9.7=h12debd9_1
  - pytorch=1.10.0=py3.9_cuda11.3_cudnn8.2.0_0
  - pytorch-mutex=1.0=cuda
  - readline=8.1=h27cfd23_0
  - setuptools=58.0.4=py39h06a4308_0
  - six=1.16.0=pyhd3eb1b0_0
  - sqlite=3.36.0=hc218d9a_0
  - tk=8.6.11=h1ccaba5_0
  - torchaudio=0.10.0=py39_cu113
  - torchvision=0.11.1=py39_cu113
  - typing_extensions=3.10.0.2=pyh06a4308_0
  - wheel=0.37.0=pyhd3eb1b0_1
  - xz=5.2.5=h7b6447c_0
  - zlib=1.2.11=h7b6447c_3
  - zstd=1.4.9=haebb681_0
  - pip:
    - absl-py==1.0.0
    - addict==2.4.0
    - cachetools==4.2.4
    - charset-normalizer==2.0.8
    - cloudpickle==2.0.0
    - cycler==0.11.0
    - cython==0.29.24
    - docutils==0.18.1
    - easydict==1.9
    - filterpy==1.4.5
    - fonttools==4.28.2
    - geohash2==1.1
    - google-auth==2.3.3
    - google-auth-oauthlib==0.4.6
    - grpcio==1.42.0
    - idna==3.3
    - imageio==2.13.5
    - importlib-metadata==4.8.2
    - joblib==1.1.0
    - kiwisolver==1.3.2
    - loguru==0.6.0
    - markdown==3.3.6
    - matplotlib==3.5.0
    - natsort==8.0.2
    - networkx==2.6.3
    - oauthlib==3.1.1
    - opencv-python==4.5.4.60
    - packaging==21.3
    - pandas==1.3.4
    - protobuf==3.19.1
    - pyasn1==0.4.8
    - pyasn1-modules==0.2.8
    - pycocotools==2.0.4
    - pyparsing==3.0.6
    - pyqt5==5.15.6
    - pyqt5-qt5==5.15.2
    - pyqt5-sip==12.9.0
    - python-dateutil==2.8.2
    - pytz==2021.3
    - pytz-deprecation-shim==0.1.0.post0
    - pywavelets==1.2.0
    - pyyaml==6.0
    - requests==2.26.0
    - requests-oauthlib==1.3.0
    - rsa==4.8
    - scikit-image==0.19.1
    - scikit-learn==1.0.2
    - scipy==1.7.3
    - seaborn==0.11.2
    - setuptools-scm==6.3.2
    - shapely==1.8.0
    - sklearn==0.0
    - split-folders==0.4.3
    - tabulate==0.8.9
    - tensorboard==2.7.0
    - tensorboard-data-server==0.6.1
    - tensorboard-plugin-wit==1.8.0
    - terminaltables==3.1.10
    - thop==0.0.31-2005241907
    - threadpoolctl==3.1.0
    - tifffile==2021.11.2
    - timm==0.4.12
    - tomli==1.2.2
    - tqdm==4.62.3
    - traja==0.2.8
    - tzdata==2021.5
    - tzlocal==4.1
    - urllib3==1.26.7
    - werkzeug==2.0.2
    - yacs==0.1.8
    - yapf==0.32.0
    - zipp==3.6.0


    I also installed ffmpy through conda.


    I would be very grateful if anyone could help.
