
Media (2)
- Example of action buttons for a collaborative collection
27 February 2013, by
Updated: March 2013
Language: French
Type: Image
- Example of action buttons for a personal collection
27 February 2013, by
Updated: February 2013
Language: English
Type: Image
Other articles (50)
- Helping to translate it
10 April 2011 — You can help us improve the wording used in the software, or translate it into any new language so it can reach new linguistic communities.
To do this, we use the SPIP translation interface, where all of MediaSPIP's language modules are available. You just need to sign up on the translators' mailing list to ask for more information.
At the moment, MediaSPIP is only available in French and (...)
- Accepted formats
28 January 2010, by — The following commands give information about the formats and codecs supported by the local ffmpeg installation:
ffmpeg -codecs
ffmpeg -formats
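For instance, to check whether a particular codec is present in the local build and to inspect a specific encoder's options, the output of these commands can be filtered (a quick sketch, assuming a Unix shell; not part of the original article):
ffmpeg -codecs | grep h264
ffmpeg -h encoder=mpeg4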
Accepted input video formats
This list is not exhaustive; it highlights the main formats used:
h264: H.264 / AVC / MPEG-4 AVC / MPEG-4 part 10
m4v: raw MPEG-4 video format
flv: Flash Video (FLV) / Sorenson Spark / Sorenson H.263
Theora
wmv:
Possible output video formats
To begin with, we (...)
- Supporting all media types
13 April 2011, by — Unlike most software and media-sharing platforms, MediaSPIP aims to manage as many different media types as possible. The following are just a few examples from an ever-expanding list of supported formats: images: png, gif, jpg, bmp and more; audio: MP3, Ogg, Wav and more; video: AVI, MP4, OGV, mpg, mov, wmv and more; text, code and other data: OpenOffice, Microsoft Office (Word, PowerPoint, Excel), web (html, CSS), LaTeX, Google Earth and (...)
On other sites (6787)
- Matomo 2 reaches end of life soon (December 2017), update now!
7 December 2017, by Matomo Core Team — Community — In less than three weeks, Matomo (Piwik) 2 will no longer be supported. This means that no further (security) updates will be released for this version. As per our Long Term Support announcement, Matomo 2.X is supported for 12 months after the initial release of Matomo 3.0.0, which was on December 18th 2016. Therefore, Matomo 2 will no longer receive any updates after December 18th 2017.
It has been almost a year since we released Matomo (Piwik) 3, and we highly recommend updating to Matomo 3 as soon as possible. The major new release came with a new UI as well as performance and security improvements. If you are still on Matomo 2, the security improvements alone should be worth updating your Matomo to Matomo 3 now. We cannot recommend this enough.
The update to Matomo (Piwik) 3 should be smooth, but may take a while depending on the amount of data you have.
- If you have any problem with the update, feel free to get in touch with us, or ask in the forums.
- If you are currently using self-hosted Matomo (Piwik) and would like to be upgraded and have your Matomo managed in the official cloud-hosted service, contact InnoCraft Cloud and they will migrate your database.
At Matomo (Piwik) and InnoCraft, the company of the makers of Matomo, we have seen many thousands of Matomo installations upgraded over the past year and look forward to an exciting future for Matomo 3 and beyond!
- Layout video recording like the Instagram/TikTok feature
21 March 2023, by Amarchand K — I'm trying to record video in different layouts, like Instagram does, in Flutter. For this feature I used the ffmpeg_kit_flutter package and followed this solution, but the output video is blank. Can anyone help me solve this?
The video input and output paths are valid. The following error also shows up when the logs are printed:


ffmpeg version n5.1.2 Copyright (c) 2000-2022 the FFmpeg developers
 built with Android (7155654, based on r399163b1) clang version 11.0.5 (https://android.googlesource.com/toolchain/llvm-project 87f1315dfbea7c137aa2e6d362dbb457e388158d)
 configuration: --cross-prefix=aarch64-linux-android- --sysroot=/files/android-sdk/ndk/22.1.7171670/toolchains/llvm/prebuilt/linux-x86_64/sysroot --prefix=/home/taner/Projects/ffmpeg-kit/prebuilt/android-arm64/ffmpeg --pkg-config=/usr/bin/pkg-config --enable-version3 --arch=aarch64 --cpu=armv8-a --target-os=android --enable-neon --enable-asm --enable-inline-asm --ar=aarch64-linux-android-ar --cc=aarch64-linux-android24-clang --cxx=aarch64-linux-android24-clang++ --ranlib=aarch64-linux-android-ranlib --strip=aarch64-linux-android-strip --nm=aarch64-linux-android-nm --extra-libs='-L/home/taner/Projects/ffmpeg-kit/prebuilt/android-arm64/cpu-features/lib -lndk_compat' --disable-autodetect --enable-cross-compile --enable-pic --enable-jni --enable-optimizations --enable-swscale --disable-static --enable-shared --enable-pthreads --enable-v4l2-m2m --disable-outdev=fbdev --disable-indev=fbdev --enable-small --disable-xmm-clobber-test --disable-debug --enable-lto --disable-neon-clobber-test --disable-programs --disable-postproc --disable-doc --disable-htmlpages --disable-manpages --disable-podpages --disable-txtpages --disable-sndio --disable-schannel --disable-securetransport --disable-xlib --disable-cuda --disable-cuvid --disable-nvenc --disable-vaapi --disable-vdpau --disable-videotoolbox --disable-audiotoolbox --disable-appkit --disable-alsa --disable-cuda --disable-cuvid --disable-nvenc --disable-vaapi --disable-vdpau --enable-gmp --enable-gnutls --enable-iconv --disable-sdl2 --disable-openssl --enable-zlib --enable-mediacodec
 libavutil 57. 28.100 / 57. 28.100
 libavcodec 59. 37.100 / 59. 37.100
 libavformat 59. 27.100 / 59. 27.100
 libavdevice 59. 7.100 / 59. 7.100
 libavfilter 8. 44.100 / 8. 44.100
 libswscale 6. 7.100 / 6. 7.100
 libswresample 4. 7.100 / 4. 7.100
 -vsync is deprecated. Use -fps_mode
 Passing a number to -vsync is deprecated, use a string argument as described in the manual.
 Input #0, mov,mp4,m4a,3gp,3g2,mj2, from '/data/user/0/com.example.tuki_taki/cache/REC1453994379216336834.mp4':
 Metadata:
 major_brand : mp42
 minor_version : 0
 compatible_brands: isommp42
 creation_time : 2023-03-21T07:15:58.000000Z
 com.android.version: 12
 Duration: 00:00:03.77, start: 0.000000, bitrate: 2204 kb/s
 Stream #0:0[0x1](eng): Video: h264, 1 reference frame (avc1 / 0x31637661), yuv420p(tv, bt470bg/smpte170m/bt709, progressive, left), 640x480, 2199 kb/s, 29.61 fps, 29.58 tbr, 90k tbn (default)
 Metadata:
 creation_time : 2023-03-21T07:15:58.000000Z
 handler_name : VideoHandle
 vendor_id : [0][0][0][0]
 Side data:
 displaymatrix: rotation of -90.00 degrees
 Stream #0:1[0x2](eng): Audio: aac (mp4a / 0x6134706D), 48000 Hz, mono, fltp, 128 kb/s (default)
 Metadata:
 creation_time : 2023-03-21T07:15:58.000000Z
 handler_name : SoundHandle
 vendor_id : [0][0][0][0]
 Input #1, mov,mp4,m4a,3gp,3g2,mj2, from '/data/user/0/com.example.tuki_taki/cache/REC5972384708251368209.mp4':
 Metadata:
 major_brand : mp42
 minor_version : 0
 compatible_brands: isommp42
 creation_time : 2023-03-21T07:16:05.000000Z
 com.android.version: 12
 Duration: 00:00:02.84, start: 0.000000, bitrate: 2703 kb/s
 Stream #1:0[0x1](eng): Video: h264, 1 reference frame (avc1 / 0x31637661), yuv420p(tv, bt470bg/smpte170m/bt709, progressive, left), 640x480, 2801 kb/s, 29.61 fps, 29.58 tbr, 90k tbn (default)
 Metadata:
 creation_time : 2023-03-21T07:16:05.000000Z
 handler_name : VideoHandle
 vendor_id : [0][0][0][0]
 Side data:
 displaymatrix: rotation of -90.00 degrees
 Stream #1:1[0x2](eng): Audio: aac (mp4a / 0x6134706D), 48000 Hz, mono, fltp, 128 kb/s (default)
 Metadata:
 creation_time : 2023-03-21T07:16:05.000000Z
 handler_name : SoundHandle
 vendor_id : [0][0][0][0]
 Stream mapping:
 Stream #0:0 (h264) -> scale:default
 Stream #0:1 (aac) -> amix
 Stream #1:0 (h264) -> scale:default
 Stream #1:1 (aac) -> amix
 hstack:default -> Stream #0:0 (mpeg4)
 amix:default -> Stream #0:1 (aac)
 Press [q] to stop, [?] for help
 Output #0, mp4, to '/data/user/0/com.example.tuki_taki/cache/output.mp4':
 Metadata:
 major_brand : mp42
 minor_version : 0
 compatible_brands: isommp42
 com.android.version: 12
 encoder : Lavf59.27.100
 Stream #0:0: Video: mpeg4, 1 reference frame (mp4v / 0x7634706D), yuv420p(progressive), 960x640 (0x0) [SAR 1:1 DAR 3:2], q=2-31, 200 kb/s, 29.58 fps, 11360 tbn
 Metadata:
 encoder : Lavc59.37.100 mpeg4
 Side data:
 cpb: bitrate max/min/avg: 0/0/200000 buffer size: 0 vbv_delay: N/A
 Stream #0:1: Audio: aac (mp4a / 0x6134706D), 48000 Hz, mono, fltp, delay 1024, 69 kb/s
 Metadata:
 encoder : Lavc59.37.100 aac
 frame= 1 fps=0.0 q=4.3 size= 0kB time=00:00:00.23 bitrate= 1.5kbits/s speed=3.49x 



[mpeg4 @ 0xb400007122ebac50] Invalid pts (8) <= last (8)
Error submitting video frame to the encoder
[aac @ 0xb4000071232208c0] Qavg : 9911.349
[aac @ 0xb4000071232208c0] 2 frames left in the queue on closing
Conversion failed


Here is what I tried:


Future<void> onLayoutDone() async {
  try {
    final String outputPath = await _getTempPath();
    const String filter =
        "[0:v]scale=480:640,setsar=1[l];[1:v]scale=480:640,setsar=1[r];[l][r]hstack;[0][1]amix -vsync 0 ";
    log("left path: ${layoutVideoPathList[0]} right : ${layoutVideoPathList[1]} $outputPath");
    // These paths are valid.
    final String command =
        " -y -i ${layoutVideoPathList[0]} -i ${layoutVideoPathList[1]} -filter_complex$filter$outputPath -loglevel verbose";

    await FFmpegKit.execute(command).then((value) async {
      String? error = await value.getAllLogsAsString();
      log(error!);
      final ReturnCode? returnCode = await value.getReturnCode();
      if (returnCode != null) {
        setVideo(videoPath: outputPath);
        layoutVideoPathList.clear();
      }
    });
  } catch (e) {
    log("error while combine -========-=-=-=-=-=-=-=-=- $e");
  }
}
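For reference, the command this code assembles corresponds roughly to the following ffmpeg invocation (left.mp4, right.mp4 and output.mp4 are placeholder paths, not the actual cache files). In a typical hstack/amix graph the labelled filter outputs are selected with -map, and -vsync is a separate output option rather than part of the -filter_complex string:
ffmpeg -y -i left.mp4 -i right.mp4 \
  -filter_complex "[0:v]scale=480:640,setsar=1[l];[1:v]scale=480:640,setsar=1[r];[l][r]hstack[v];[0:a][1:a]amix=inputs=2[a]" \
  -map "[v]" -map "[a]" output.mp4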


- Vulkan image data to AVFrames and to video
12 April 2024, by W4zab1 — I am trying to encode Vulkan image data into a video in MPEG4 format. For some reason the output video file is corrupted. ffprobe shows discontinuities in the timestamps, and the frames are corrupted.
First I prepare my video encoder

Then I get FrameEnded events from my engine where I can get the image data from the vulkan swapchain.

I then convert the image data from Vulkan to AVFrames (RGBA to YUV420P) and pass the frames into a queue.

This queue is then handled in another thread, where the frames are processed, and written into video.

I am a bit of a noob with ffmpeg, so some of the code may not make sense.

The logic seems straightforward, but there are probably problems with the codec parameters, with the way I am converting the image data to an AVFrame, or something of that sort.

The video file still gets created and has some data in it (it is > 0 bytes, and the longer the recording, the bigger the file size).

There are no errors from ffmpeg with the log level set to DEBUG.

struct FrameData {
 AVFrame* frame;
 int frame_index;
};

class EventListenerVideoCapture : public VEEventListenerGLFW {
private:
 AVFormatContext* format_ctx = nullptr;
 AVCodec* video_codec = nullptr;
 AVCodecContext* codec_context = nullptr;
 AVStream* video_stream = nullptr;
 AVDictionary* muxer_opts = nullptr;
 int frame_index = 0;

 std::queue<FrameData*> frame_queue;
 std::mutex queue_mtx;
 std::condition_variable queue_cv;
 std::atomic<bool> stop_processing{ false };
 std::thread video_processing_thread;
 int prepare_video_encoder()
 {
 av_log_set_level(AV_LOG_DEBUG);
 // Add video stream to format context
 avformat_alloc_output_context2(&format_ctx, nullptr, nullptr, "video.mpg");
 video_stream = avformat_new_stream(format_ctx, NULL);
 video_codec = (AVCodec*)avcodec_find_encoder(AV_CODEC_ID_MPEG4);
 codec_context = avcodec_alloc_context3(video_codec);
 if (!format_ctx) { std::cerr << "Error: Failed to allocate format context" << std::endl; system("pause"); }
 if (!video_stream) { std::cerr << "Error: Failed to create new stream" << std::endl; system("pause"); }
 if (!video_codec) { std::cerr << "Error: Failed to find video codec" << std::endl; system("pause"); }
 if (!codec_context) { std::cerr << "Error: Failed to allocate codec context" << std::endl; system("pause"); }

 if (avio_open(&format_ctx->pb, "video.mpg", AVIO_FLAG_WRITE) < 0) { std::cerr << "Error: Failed to open file for writing!" << std::endl; return -1; }

 av_opt_set(codec_context->priv_data, "preset", "fast", 0);

 codec_context->codec_id = AV_CODEC_ID_MPEG4;
 codec_context->codec_type = AVMEDIA_TYPE_VIDEO;
 codec_context->pix_fmt = AV_PIX_FMT_YUV420P;
 codec_context->width = getWindowPointer()->getExtent().width;
 codec_context->height = getWindowPointer()->getExtent().height;
 codec_context->bit_rate = 1000 * 1000; // Bitrate
 codec_context->time_base = { 1, 30 }; // 30 FPS
 codec_context->gop_size = 10;

 av_dict_set(&muxer_opts, "movflags", "faststart", 0);

 //Unecessary? Since the params are copied anyways
 video_stream->time_base = codec_context->time_base;

 //Try to open codec after changes
 //copy codec_context params to videostream
 //and write headers to format_context
 if (avcodec_open2(codec_context, video_codec, NULL) < 0) { std::cerr << "Error: Could not open codec!" << std::endl; return -1; }
 if (avcodec_parameters_from_context(video_stream->codecpar, codec_context) < 0) { std::cerr << "Error: Could not copy params from context to stream!" << std::endl; return -1; };
 if (avformat_write_header(format_ctx, &muxer_opts) < 0) { std::cerr << "Error: Failed to write output file headers!" << std::endl; return -1; }
 return 0;
 }

 void processFrames() {
 while (!stop_processing) {
 FrameData* frameData = nullptr;
 {
 std::unique_lock lock(queue_mtx);
 queue_cv.wait(lock, [&]() { return !frame_queue.empty() || stop_processing; });

 if (stop_processing && frame_queue.empty())
 break;

 frameData = frame_queue.front();
 frame_queue.pop();
 }

 if (frameData) {
 encodeAndWriteFrame(frameData);
 AVFrame* frame = frameData->frame;
 av_frame_free(&frame); // Free the processed frame
 delete frameData;
 }
 }
 }

 void encodeAndWriteFrame(FrameData* frameData) {

 // Validation
 if (!frameData->frame) { std::cerr << "Error: Frame was null! " << std::endl; return; }
 if (frameData->frame->format != codec_context->pix_fmt) { std::cerr << "Error: Frame format mismatch!" << std::endl; return; }
 if ( av_frame_get_buffer(frameData->frame, 0) < 0) { std::cerr << "Error allocating frame buffer: " << std::endl; return; }
 if (!codec_context) return;

 AVPacket* pkt = av_packet_alloc();
 if (!pkt) { std::cerr << "Error: Failed to allocate AVPacket" << std::endl; system("pause"); }

 int ret = avcodec_send_frame(codec_context, frameData->frame);
 if (ret < 0) { 
 std::cerr << "Error receiving packet from codec: " << ret << std::endl;
 delete frameData;
 av_packet_free(&pkt); return; 
 }

 while (ret >= 0) {
 ret = avcodec_receive_packet(codec_context, pkt);

 //Error checks
 if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF) { break; }
 else if (ret < 0) { std::cerr << "Error receiving packet from codec: " << ret << std::endl; av_packet_free(&pkt); return; }
 if (!video_stream) { std::cerr << "Error: video stream is null!" << std::endl; av_packet_free(&pkt); return; }
 
 int64_t frame_duration = codec_context->time_base.den / codec_context->time_base.num;
 pkt->stream_index = video_stream->index;
 pkt->duration = frame_duration;
 pkt->pts = frameData->frame_index * frame_duration;

 int write_ret = av_interleaved_write_frame(format_ctx, pkt);
 if (write_ret < 0) { std::cerr << "Error: failed to write a frame! " << write_ret << std::endl;}

 av_packet_unref(pkt);
 }

 av_packet_free(&pkt);

 }

protected:
 virtual void onFrameEnded(veEvent event) override {
 // Get the image data from vulkan
 VkExtent2D extent = getWindowPointer()->getExtent();
 uint32_t imageSize = extent.width * extent.height * 4;
 VkImage image = getEnginePointer()->getRenderer()->getSwapChainImage();

 uint8_t *dataImage = new uint8_t[imageSize];
 
 vh::vhBufCopySwapChainImageToHost(getEnginePointer()->getRenderer()->getDevice(),
 getEnginePointer()->getRenderer()->getVmaAllocator(),
 getEnginePointer()->getRenderer()->getGraphicsQueue(),
 getEnginePointer()->getRenderer()->getCommandPool(),
 image, VK_FORMAT_R8G8B8A8_UNORM,
 VK_IMAGE_ASPECT_COLOR_BIT, VK_IMAGE_LAYOUT_PRESENT_SRC_KHR,
 dataImage, extent.width, extent.height, imageSize);
 
 // Create AVFrame for the converted image data
 AVFrame* frame = av_frame_alloc();
 if (!frame) { std::cout << "Could not allocate memory for frame!" << std::endl; return; }

 frame->format = AV_PIX_FMT_YUV420P;
 frame->width = extent.width;
 frame->height = extent.height;
 if (av_frame_get_buffer(frame, 0) < 0) { std::cerr << "Failed to allocate frame buffer! " << std::endl; return;} ;

 // Prepare context for converting from RGBA to YUV420P
 SwsContext* sws_ctx = sws_getContext(
 extent.width, extent.height, AV_PIX_FMT_RGBA,
 extent.width, extent.height, AV_PIX_FMT_YUV420P,
 SWS_BILINEAR, nullptr, nullptr, nullptr);

 // Convert the vulkan image data to AVFrame
 uint8_t* src_data[1] = { dataImage };
 int src_linesize[1] = { extent.width * 4 };
 int scale_ret = sws_scale(sws_ctx, src_data, src_linesize, 0, extent.height,
 frame->data, frame->linesize);

 if (scale_ret <= 0) { std::cerr << "Failed to scale the image to frame" << std::endl; return; }

 sws_freeContext(sws_ctx);
 delete[] dataImage;

 // Add frame to the queue
 {
 std::lock_guard lock(queue_mtx);

 FrameData* frameData = new FrameData;
 frameData->frame = frame;
 frameData->frame_index = frame_index;
 frame_queue.push(frameData);

 frame_index++;
 }

 // Notify processing thread
 queue_cv.notify_one();
 }

public:
 EventListenerVideoCapture(std::string name) : VEEventListenerGLFW(name) {
 //Prepare the video encoder
 int ret = prepare_video_encoder();
 if (ret < 0)
 {
 std::cerr << "Failed to prepare video encoder! " << std::endl;
 exit(-1);
 }
 else
 {
 // Start video processing thread
 video_processing_thread = std::thread(&EventListenerVideoCapture::processFrames, this);
 }
 }

 ~EventListenerVideoCapture() {
 // Stop video processing thread
 stop_processing = true;
 queue_cv.notify_one(); // Notify processing thread to stop

 if (video_processing_thread.joinable()) {
 video_processing_thread.join();
 }

 // Flush codec and close output file
 avcodec_send_frame(codec_context, nullptr);
 av_write_trailer(format_ctx);

 av_dict_free(&muxer_opts);
 avio_closep(&format_ctx->pb);
 avcodec_free_context(&codec_context);
 avformat_free_context(format_ctx);
 }
};



I have tried changing the codec parameters, and debugging and printing the video frame data, with no success.
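For comparison, the stock FFmpeg encoding examples stamp each AVFrame with a pts before avcodec_send_frame and then rescale the packet timestamps from the codec time base into the stream time base, rather than assigning pkt->pts after the packet comes back from the encoder. A minimal sketch, reusing the member names from the code above (this is not the original code, just the conventional pattern):
// Sketch only: assumes the codec_context, video_stream and format_ctx
// members declared above, with codec_context->time_base = {1, 30}.
void writeStampedFrame(AVCodecContext* codec_context, AVStream* video_stream,
                       AVFormatContext* format_ctx, AVFrame* frame, int frame_index) {
    frame->pts = frame_index;  // pts counted in codec time_base ticks, one per frame

    if (avcodec_send_frame(codec_context, frame) < 0) return;

    AVPacket* pkt = av_packet_alloc();
    int ret = 0;
    while (ret >= 0) {
        ret = avcodec_receive_packet(codec_context, pkt);
        if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF) break;
        if (ret < 0) break;

        // Convert pts/dts/duration from {1, 30} into the muxer stream's time base.
        av_packet_rescale_ts(pkt, codec_context->time_base, video_stream->time_base);
        pkt->stream_index = video_stream->index;
        av_interleaved_write_frame(format_ctx, pkt);
        av_packet_unref(pkt);
    }
    av_packet_free(&pkt);
}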