
Other articles (50)

  • Contribute to its translation

    10 April 2011

    You can help us improve the wording used in the software, or translate it into any new language, allowing it to reach new linguistic communities.
    To do so, we use SPIP's translation interface, where all of MediaSPIP's language modules are available. You just need to sign up to the translators' mailing list to ask for more information.
    At present, MediaSPIP is only available in French and (...)

  • Accepted formats

    28 January 2010, by

    The following commands give information on the formats and codecs supported by the local ffmpeg installation:
    ffmpeg -codecs
    ffmpeg -formats
    Accepted input video formats
    This list is not exhaustive; it highlights the main formats in use: h264: H.264 / AVC / MPEG-4 AVC / MPEG-4 part 10; m4v: raw MPEG-4 video format; flv: Flash Video (FLV) / Sorenson Spark / Sorenson H.263; Theora; wmv:
    Possible output video formats
    As a first step, (...)

  • Supporting all media types

    13 April 2011, by

    Unlike most software and media-sharing platforms, MediaSPIP aims to manage as many different media types as possible. The following are just a few examples from an ever-expanding list of supported formats: images: png, gif, jpg, bmp and more; audio: MP3, Ogg, Wav and more; video: AVI, MP4, OGV, mpg, mov, wmv and more; text, code and other data: OpenOffice, Microsoft Office (Word, PowerPoint, Excel), web (html, CSS), LaTeX, Google Earth and (...)

On other sites (6787)

  • Matomo 2 reaches end of life soon (December 2017), update now!

    7 December 2017, by Matomo Core Team — Community

    In less than three weeks, Matomo (Piwik) 2 will no longer be supported. This means that no further (security) updates will be released for this version. As per our Long Term Support announcement, Matomo 2.X is supported for 12 months after the initial release of Matomo 3.0.0, which was on December 18th, 2016. Therefore, Matomo 2 will no longer receive any updates after December 18th, 2017.

    It has been almost a year since we released Matomo (Piwik) 3, and we highly recommend updating to Matomo 3 as soon as possible. The major new release came with a new UI as well as performance and security improvements. If you are still on Matomo 2, the security improvements alone should make it worth updating to Matomo 3 now. We cannot recommend this enough.

    The update to Matomo (Piwik) 3 should be smooth, but may take a while depending on the amount of data you have.

    • If you have any problem with the update, feel free to get in touch with us, or ask in the forums.
    • If you are currently using Matomo (Piwik) self-hosted and would like to be upgraded and have your Matomo managed in the official cloud-hosted service, contact InnoCraft Cloud and they will migrate your database.

    At Matomo (Piwik) and InnoCraft, the company of the makers of Matomo, we have seen many thousands of Matomo installations upgraded over the past year and look forward to an exciting future for Matomo 3 and beyond!

  • Layout Video Recording Like Instagram/TikTok Feature

    21 March 2023, by Amarchand K

    I'm trying to record video in different layouts, like Instagram, in Flutter. For this feature I used the ffmpeg_kit_flutter package. I followed this solution, but the output video is blank; can anyone help me solve this?
    The video input and output paths are valid, and the error below is shown when printing the logs:

    


    `ffmpeg version n5.1.2 Copyright (c) 2000-2022 the FFmpeg developers
    built with Android (7155654, based on r399163b1) clang version 11.0.5 (https://android.googlesource.com/toolchain/llvm-project 87f1315dfbea7c137aa2e6d362dbb457e388158d)
    configuration: --cross-prefix=aarch64-linux-android- --sysroot=/files/android-sdk/ndk/22.1.7171670/toolchains/llvm/prebuilt/linux-x86_64/sysroot --prefix=/home/taner/Projects/ffmpeg-kit/prebuilt/android-arm64/ffmpeg --pkg-config=/usr/bin/pkg-config --enable-version3 --arch=aarch64 --cpu=armv8-a --target-os=android --enable-neon --enable-asm --enable-inline-asm --ar=aarch64-linux-android-ar --cc=aarch64-linux-android24-clang --cxx=aarch64-linux-android24-clang++ --ranlib=aarch64-linux-android-ranlib --strip=aarch64-linux-android-strip --nm=aarch64-linux-android-nm --extra-libs='-L/home/taner/Projects/ffmpeg-kit/prebuilt/android-arm64/cpu-features/lib -lndk_compat' --disable-autodetect --enable-cross-compile --enable-pic --enable-jni --enable-optimizations --enable-swscale --disable-static --enable-shared --enable-pthreads --enable-v4l2-m2m --disable-outdev=fbdev --disable-indev=fbdev --enable-small --disable-xmm-clobber-test --disable-debug --enable-lto --disable-neon-clobber-test --disable-programs --disable-postproc --disable-doc --disable-htmlpages --disable-manpages --disable-podpages --disable-txtpages --disable-sndio --disable-schannel --disable-securetransport --disable-xlib --disable-cuda --disable-cuvid --disable-nvenc --disable-vaapi --disable-vdpau --disable-videotoolbox --disable-audiotoolbox --disable-appkit --disable-alsa --disable-cuda --disable-cuvid --disable-nvenc --disable-vaapi --disable-vdpau --enable-gmp --enable-gnutls --enable-iconv --disable-sdl2 --disable-openssl --enable-zlib --enable-mediacodec
    libavutil      57. 28.100 / 57. 28.100
    libavcodec     59. 37.100 / 59. 37.100
    libavformat    59. 27.100 / 59. 27.100
    libavdevice    59.  7.100 / 59.  7.100
    libavfilter     8. 44.100 /  8. 44.100
    libswscale      6.  7.100 /  6.  7.100
    libswresample   4.  7.100 /  4.  7.100
  -vsync is deprecated. Use -fps_mode
  Passing a number to -vsync is deprecated, use a string argument as described in the manual.
  Input #0, mov,mp4,m4a,3gp,3g2,mj2, from '/data/user/0/com.example.tuki_taki/cache/REC1453994379216336834.mp4':
    Metadata:
      major_brand     : mp42
      minor_version   : 0
      compatible_brands: isommp42
      creation_time   : 2023-03-21T07:15:58.000000Z
      com.android.version: 12
    Duration: 00:00:03.77, start: 0.000000, bitrate: 2204 kb/s
    Stream #0:0[0x1](eng): Video: h264, 1 reference frame (avc1 / 0x31637661), yuv420p(tv, bt470bg/smpte170m/bt709, progressive, left), 640x480, 2199 kb/s, 29.61 fps, 29.58 tbr, 90k tbn (default)
      Metadata:
        creation_time   : 2023-03-21T07:15:58.000000Z
        handler_name    : VideoHandle
        vendor_id       : [0][0][0][0]
      Side data:
        displaymatrix: rotation of -90.00 degrees
    Stream #0:1[0x2](eng): Audio: aac (mp4a / 0x6134706D), 48000 Hz, mono, fltp, 128 kb/s (default)
      Metadata:
        creation_time   : 2023-03-21T07:15:58.000000Z
        handler_name    : SoundHandle
        vendor_id       : [0][0][0][0]
  Input #1, mov,mp4,m4a,3gp,3g2,mj2, from '/data/user/0/com.example.tuki_taki/cache/REC5972384708251368209.mp4':
    Metadata:
      major_brand     : mp42
      minor_version   : 0
      compatible_brands: isommp42
      creation_time   : 2023-03-21T07:16:05.000000Z
      com.android.version: 12
    Duration: 00:00:02.84, start: 0.000000, bitrate: 2703 kb/s
    Stream #1:0[0x1](eng): Video: h264, 1 reference frame (avc1 / 0x31637661), yuv420p(tv, bt470bg/smpte170m/bt709, progressive, left), 640x480, 2801 kb/s, 29.61 fps, 29.58 tbr, 90k tbn (default)
      Metadata:
        creation_time   : 2023-03-21T07:16:05.000000Z
        handler_name    : VideoHandle
        vendor_id       : [0][0][0][0]
      Side data:
        displaymatrix: rotation of -90.00 degrees
    Stream #1:1[0x2](eng): Audio: aac (mp4a / 0x6134706D), 48000 Hz, mono, fltp, 128 kb/s (default)
      Metadata:
        creation_time   : 2023-03-21T07:16:05.000000Z
        handler_name    : SoundHandle
        vendor_id       : [0][0][0][0]
  Stream mapping:
    Stream #0:0 (h264) -> scale:default
    Stream #0:1 (aac) -> amix
    Stream #1:0 (h264) -> scale:default
    Stream #1:1 (aac) -> amix
    hstack:default -> Stream #0:0 (mpeg4)
    amix:default -> Stream #0:1 (aac)
  Press [q] to stop, [?] for help
  Output #0, mp4, to '/data/user/0/com.example.tuki_taki/cache/output.mp4':
    Metadata:
      major_brand     : mp42
      minor_version   : 0
      compatible_brands: isommp42
      com.android.version: 12
      encoder         : Lavf59.27.100
    Stream #0:0: Video: mpeg4, 1 reference frame (mp4v / 0x7634706D), yuv420p(progressive), 960x640 (0x0) [SAR 1:1 DAR 3:2], q=2-31, 200 kb/s, 29.58 fps, 11360 tbn
      Metadata:
        encoder         : Lavc59.37.100 mpeg4
      Side data:
        cpb: bitrate max/min/avg: 0/0/200000 buffer size: 0 vbv_delay: N/A
    Stream #0:1: Audio: aac (mp4a / 0x6134706D), 48000 Hz, mono, fltp, delay 1024, 69 kb/s
      Metadata:
        encoder         : Lavc59.37.100 aac
  frame=    1 fps=0.0 q=4.3 size=       0kB time=00:00:00.23 bitrate=   1.5kbits/s speed=3.49x    


    


    [mpeg4 @ 0xb400007122ebac50] Invalid pts (8) <= last (8)
    Error submitting video frame to the encoder
    [aac @ 0xb4000071232208c0] Qavg: 9911.349
    [aac @ 0xb4000071232208c0] 2 frames left in the queue on closing
    Conversion failed`


    I tried:


      `Future<void> onLayoutDone() async {
        try {
          final String outputPath = await _getTempPath();
          const String filter =
              "[0:v]scale=480:640,setsar=1[l];[1:v]scale=480:640,setsar=1[r];[l][r]hstack;[0][1]amix -vsync 0 ";
          log("left path: ${layoutVideoPathList[0]} right: ${layoutVideoPathList[1]} $outputPath");
          // these paths are valid
          final String command =
              " -y -i ${layoutVideoPathList[0]} -i ${layoutVideoPathList[1]} -filter_complex$filter$outputPath -loglevel verbose";
          await FFmpegKit.execute(command).then((value) async {
            String? error = await value.getAllLogsAsString();
            log(error!);
            final ReturnCode? returnCode = await value.getReturnCode();
            if (returnCode != null) {
              setVideo(videoPath: outputPath);
              layoutVideoPathList.clear();
            }
          });
        } catch (e) {
          log("error while combine -========-=-=-=-=-=-=-=-=- $e");
        }
      }`


  • Vulkan image data to AVFrames and to video

    12 April 2024, by W4zab1

    I am trying to encode Vulkan image data into video in MPEG-4 format. For some reason the output video file is corrupted: FFprobe shows discontinuities in the timestamps, and the frames are corrupted.
    First I prepare my video encoder.
    Then I get FrameEnded events from my engine, where I can get the image data from the Vulkan swapchain.
    I then convert the image data from Vulkan to AVFrames (RGBA to YUV420P), then pass the frames into a queue.
    This queue is then handled in another thread, where the frames are processed and written into the video.
    I am a bit of a noob with ffmpeg, so there may be some code that does not make sense.


    This seems like fairly straightforward logic, but there are probably some problems with the codec params, with the way I am converting the image data to an AVFrame, or something of that sort.
    The video file still gets created and has some data in it (it is > 0 bytes, and the longer the recording, the bigger the file size).
    There are no errors from ffmpeg with the log level set to DEBUG.


    struct FrameData {
        AVFrame* frame;
        int frame_index;
    };

    class EventListenerVideoCapture : public VEEventListenerGLFW {
    private:
        AVFormatContext* format_ctx = nullptr;
        AVCodec* video_codec = nullptr;
        AVCodecContext* codec_context = nullptr;
        AVStream* video_stream = nullptr;
        AVDictionary* muxer_opts = nullptr;
        int frame_index = 0;

        std::queue<FrameData*> frame_queue;
        std::mutex queue_mtx;
        std::condition_variable queue_cv;
        std::atomic<bool> stop_processing{ false };
        std::thread video_processing_thread;

        int prepare_video_encoder()
        {
            av_log_set_level(AV_LOG_DEBUG);
            // Add video stream to format context
            avformat_alloc_output_context2(&format_ctx, nullptr, nullptr, "video.mpg");
            video_stream = avformat_new_stream(format_ctx, NULL);
            video_codec = (AVCodec*)avcodec_find_encoder(AV_CODEC_ID_MPEG4);
            codec_context = avcodec_alloc_context3(video_codec);
            if (!format_ctx) { std::cerr << "Error: Failed to allocate format context" << std::endl; system("pause"); }
            if (!video_stream) { std::cerr << "Error: Failed to create new stream" << std::endl; system("pause"); }
            if (!video_codec) { std::cerr << "Error: Failed to find video codec" << std::endl; system("pause"); }
            if (!codec_context) { std::cerr << "Error: Failed to allocate codec context" << std::endl; system("pause"); }

            if (avio_open(&format_ctx->pb, "video.mpg", AVIO_FLAG_WRITE) < 0) { std::cerr << "Error: Failed to open file for writing!" << std::endl; return -1; }

            av_opt_set(codec_context->priv_data, "preset", "fast", 0);

            codec_context->codec_id = AV_CODEC_ID_MPEG4;
            codec_context->codec_type = AVMEDIA_TYPE_VIDEO;
            codec_context->pix_fmt = AV_PIX_FMT_YUV420P;
            codec_context->width = getWindowPointer()->getExtent().width;
            codec_context->height = getWindowPointer()->getExtent().height;
            codec_context->bit_rate = 1000 * 1000; // Bitrate
            codec_context->time_base = { 1, 30 }; // 30 FPS
            codec_context->gop_size = 10;

            av_dict_set(&muxer_opts, "movflags", "faststart", 0);

            // Unnecessary? Since the params are copied anyway
            video_stream->time_base = codec_context->time_base;

            // Try to open the codec after changes,
            // copy codec_context params to the video stream,
            // and write headers to format_context
            if (avcodec_open2(codec_context, video_codec, NULL) < 0) { std::cerr << "Error: Could not open codec!" << std::endl; return -1; }
            if (avcodec_parameters_from_context(video_stream->codecpar, codec_context) < 0) { std::cerr << "Error: Could not copy params from context to stream!" << std::endl; return -1; }
            if (avformat_write_header(format_ctx, &muxer_opts) < 0) { std::cerr << "Error: Failed to write output file headers!" << std::endl; return -1; }
            return 0;
        }

        void processFrames() {
            while (!stop_processing) {
                FrameData* frameData = nullptr;
                {
                    std::unique_lock<std::mutex> lock(queue_mtx);
                    queue_cv.wait(lock, [&]() { return !frame_queue.empty() || stop_processing; });

                    if (stop_processing && frame_queue.empty())
                        break;

                    frameData = frame_queue.front();
                    frame_queue.pop();
                }

                if (frameData) {
                    encodeAndWriteFrame(frameData);
                    AVFrame* frame = frameData->frame;
                    av_frame_free(&frame); // Free the processed frame
                    delete frameData;
                }
            }
        }

        void encodeAndWriteFrame(FrameData* frameData) {
            // Validation
            if (!frameData->frame) { std::cerr << "Error: Frame was null!" << std::endl; return; }
            if (frameData->frame->format != codec_context->pix_fmt) { std::cerr << "Error: Frame format mismatch!" << std::endl; return; }
            if (av_frame_get_buffer(frameData->frame, 0) < 0) { std::cerr << "Error allocating frame buffer" << std::endl; return; }
            if (!codec_context) return;

            AVPacket* pkt = av_packet_alloc();
            if (!pkt) { std::cerr << "Error: Failed to allocate AVPacket" << std::endl; system("pause"); }

            int ret = avcodec_send_frame(codec_context, frameData->frame);
            if (ret < 0) {
                std::cerr << "Error sending frame to codec: " << ret << std::endl;
                delete frameData;
                av_packet_free(&pkt); return;
            }

            while (ret >= 0) {
                ret = avcodec_receive_packet(codec_context, pkt);

                // Error checks
                if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF) { break; }
                else if (ret < 0) { std::cerr << "Error receiving packet from codec: " << ret << std::endl; av_packet_free(&pkt); return; }
                if (!video_stream) { std::cerr << "Error: video stream is null!" << std::endl; av_packet_free(&pkt); return; }

                int64_t frame_duration = codec_context->time_base.den / codec_context->time_base.num;
                pkt->stream_index = video_stream->index;
                pkt->duration = frame_duration;
                pkt->pts = frameData->frame_index * frame_duration;

                int write_ret = av_interleaved_write_frame(format_ctx, pkt);
                if (write_ret < 0) { std::cerr << "Error: failed to write a frame! " << write_ret << std::endl; }

                av_packet_unref(pkt);
            }

            av_packet_free(&pkt);
        }

    protected:
        virtual void onFrameEnded(veEvent event) override {
            // Get the image data from Vulkan
            VkExtent2D extent = getWindowPointer()->getExtent();
            uint32_t imageSize = extent.width * extent.height * 4;
            VkImage image = getEnginePointer()->getRenderer()->getSwapChainImage();

            uint8_t* dataImage = new uint8_t[imageSize];

            vh::vhBufCopySwapChainImageToHost(getEnginePointer()->getRenderer()->getDevice(),
                getEnginePointer()->getRenderer()->getVmaAllocator(),
                getEnginePointer()->getRenderer()->getGraphicsQueue(),
                getEnginePointer()->getRenderer()->getCommandPool(),
                image, VK_FORMAT_R8G8B8A8_UNORM,
                VK_IMAGE_ASPECT_COLOR_BIT, VK_IMAGE_LAYOUT_PRESENT_SRC_KHR,
                dataImage, extent.width, extent.height, imageSize);

            // Create AVFrame for the converted image data
            AVFrame* frame = av_frame_alloc();
            if (!frame) { std::cout << "Could not allocate memory for frame!" << std::endl; return; }

            frame->format = AV_PIX_FMT_YUV420P;
            frame->width = extent.width;
            frame->height = extent.height;
            if (av_frame_get_buffer(frame, 0) < 0) { std::cerr << "Failed to allocate frame buffer!" << std::endl; return; }

            // Prepare context for converting from RGBA to YUV420P
            SwsContext* sws_ctx = sws_getContext(
                extent.width, extent.height, AV_PIX_FMT_RGBA,
                extent.width, extent.height, AV_PIX_FMT_YUV420P,
                SWS_BILINEAR, nullptr, nullptr, nullptr);

            // Convert the Vulkan image data to an AVFrame
            uint8_t* src_data[1] = { dataImage };
            int src_linesize[1] = { static_cast<int>(extent.width) * 4 };
            int scale_ret = sws_scale(sws_ctx, src_data, src_linesize, 0, extent.height,
                      frame->data, frame->linesize);

            if (scale_ret <= 0) { std::cerr << "Failed to scale the image to frame" << std::endl; return; }

            sws_freeContext(sws_ctx);
            delete[] dataImage;

            // Add frame to the queue
            {
                std::lock_guard<std::mutex> lock(queue_mtx);

                FrameData* frameData = new FrameData;
                frameData->frame = frame;
                frameData->frame_index = frame_index;
                frame_queue.push(frameData);

                frame_index++;
            }

            // Notify processing thread
            queue_cv.notify_one();
        }

    public:
        EventListenerVideoCapture(std::string name) : VEEventListenerGLFW(name) {
            // Prepare the video encoder
            int ret = prepare_video_encoder();
            if (ret < 0)
            {
                std::cerr << "Failed to prepare video encoder!" << std::endl;
                exit(-1);
            }
            else
            {
                // Start video processing thread
                video_processing_thread = std::thread(&EventListenerVideoCapture::processFrames, this);
            }
        }

        ~EventListenerVideoCapture() {
            // Stop video processing thread
            stop_processing = true;
            queue_cv.notify_one(); // Notify processing thread to stop

            if (video_processing_thread.joinable()) {
                video_processing_thread.join();
            }

            // Flush codec and close output file
            avcodec_send_frame(codec_context, nullptr);
            av_write_trailer(format_ctx);

            av_dict_free(&muxer_opts);
            avio_closep(&format_ctx->pb);
            avcodec_free_context(&codec_context);
            avformat_free_context(format_ctx);
        }
    };
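    One thing worth checking (a guess based on the code above, not a confirmed diagnosis): `frame->pts` is never assigned before `avcodec_send_frame`; the timestamp is only patched onto the packet afterwards, and encoders generally derive packet timing from the frame's pts, which can produce exactly this kind of timestamp discontinuity. The sketch below illustrates the usual timestamp arithmetic in plain C++; `Rational` and `rescale` are hypothetical stand-ins mirroring FFmpeg's `AVRational` and `av_rescale_q`, used only so the example is self-contained:

    ```cpp
    #include <cstdint>
    #include <iostream>

    // Hypothetical stand-in for AVRational: a time base expressed as num/den seconds per tick.
    struct Rational { int num; int den; };

    // Hypothetical stand-in for av_rescale_q: rescale a tick count a from
    // time base bq into time base cq (a * bq / cq), ignoring overflow handling.
    int64_t rescale(int64_t a, Rational bq, Rational cq) {
        return a * bq.num * cq.den / (static_cast<int64_t>(bq.den) * cq.num);
    }

    int main() {
        Rational codec_tb  = {1, 30};    // 30 fps encoder time base: 1 tick = 1/30 s
        Rational stream_tb = {1, 90000}; // typical 90 kHz MPEG stream time base

        for (int64_t frame_index = 0; frame_index < 3; ++frame_index) {
            // In the codec time base, one frame advances the pts by exactly 1 tick,
            // so frame->pts can simply be the running frame index.
            int64_t frame_pts  = frame_index;
            // After encoding, packet timestamps are rescaled into the stream time base.
            int64_t packet_pts = rescale(frame_pts, codec_tb, stream_tb);
            std::cout << packet_pts << "\n"; // prints 0, 3000, 6000
        }
        return 0;
    }
    ```

    In real FFmpeg code the equivalent would be setting `frame->pts = frame_index;` before `avcodec_send_frame`, then letting `av_packet_rescale_ts(pkt, codec_context->time_base, video_stream->time_base)` move the packet timestamps from the codec time base to the stream time base, rather than overwriting `pkt->pts` by hand.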


    I have tried changing the codec params, debugging, and printing the video frame data, with no success.
