
Media (1)
-
Richard Stallman and free software
19 October 2011, by
Updated: May 2013
Language: French
Type: Text
Other articles (112)
-
MediaSPIP version 0.1 Beta
16 April 2011. MediaSPIP 0.1 beta is the first version of MediaSPIP declared "usable".
The zip file provided here contains only the MediaSPIP sources, in the standalone version.
For a working installation, you must manually install all the software dependencies on the server.
If you want to use this archive for a "farm mode" installation, you will also need to make other modifications (...) -
MediaSPIP 0.1 Beta version
25 April 2011. MediaSPIP 0.1 beta is the first version of MediaSPIP proclaimed as "usable".
The zip file provided here only contains the sources of MediaSPIP in its standalone version.
To get a working installation, you must manually install all software dependencies on the server.
If you want to use this archive for an installation in "farm mode", you will also need to carry out other manual (...) -
HTML5 audio and video support
13 April 2011. MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
The player was created specifically for MediaSPIP and can easily be adapted to fit a specific theme.
For older browsers, the Flowplayer Flash fallback is used.
MediaSPIP allows media playback on major mobile platforms with the above (...)
On other sites (15026)
-
The hardware decoding was successful, but the hw_frames_ctx in the received frame is empty
15 July 2024, by mercuric taylor. I tried to use QSV hardware decoding under FFmpeg, using the integrated graphics 730 on my computer. Here's the code I used to initialize the decoder:


const AVCodec* codec = NULL;
int ret;
int err = 0;

// Create the QSV hardware device.
ret = av_hwdevice_ctx_create(&hw_device_ctx, AV_HWDEVICE_TYPE_QSV, "auto", NULL, 0);
if (ret < 0)
{
    char error_string[AV_ERROR_MAX_STRING_SIZE];
    av_make_error_string(error_string, AV_ERROR_MAX_STRING_SIZE, ret);
    LError("Error creating QSV device: {}", error_string);
    return NULL;
}

// Search for QSV decoders, either for H.264 or H.265.
codec = avcodec_find_decoder_by_name(codec_name);
if (!codec)
{
    LError("Failed to find QSV decoder.");
    return NULL;
}

// Create a decoder context and associate it with the hardware device.
decoder_ctx = avcodec_alloc_context3(codec);
if (!decoder_ctx)
{
    ret = AVERROR(ENOMEM);
    LError("Failed to allocate decoder context.\n");
    return NULL;
}
decoder_ctx->codec_id = AV_CODEC_ID_H264;
decoder_ctx->opaque = &hw_device_ctx;
decoder_ctx->get_format = get_format;
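// NB: ->opaque is set to an AVBufferRef** above, while get_format() below
// casts it to a DecodeContext* and reads ->hw_device_ref, a type mismatch
// worth double-checking.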

// Open the decoder.
if ((ret = avcodec_open2(decoder_ctx, NULL, NULL)) < 0)
{
    LError("Failed to open decoder: %d\n", ret);
    return NULL;
}

parser_ctx = av_parser_init(avcodec_find_encoder_by_name(codec_name)->id);



The decoding itself then proceeds as follows:


AVFrame* frame = av_frame_alloc();
AVFrame* dstFrame = av_frame_alloc();

res = avcodec_send_packet(decoder_ctx, pkt);
if (res < 0)
{
    return;
}

int num = 0;
while (res >= 0)
{
    res = avcodec_receive_frame(decoder_ctx, frame);
    if (res == AVERROR(EAGAIN) || res == AVERROR_EOF)
    {
        break;
    }
    else if (res < 0)
    {
        return;
    }

    frameNumbers_++;
    if (frame->hw_frames_ctx == NULL)
    {
        LError("hw_frames_ctx is null");
        LError("avcodec_receive_frame return is {}", res);
    }
}



My issue is that decoding succeeds: the return value of avcodec_receive_frame is 0, and the width and height of the AVFrame match the input video stream.


However, the hw_frames_ctx field of the AVFrame is empty. Why would this happen when hardware decoding succeeds?


Could it be due to some incorrect configuration? I've set up a get_format function like this:


static enum AVPixelFormat get_format(AVCodecContext *avctx, const enum AVPixelFormat *pix_fmts)
{
    while (*pix_fmts != AV_PIX_FMT_NONE) {
        if (*pix_fmts == AV_PIX_FMT_QSV) {
            DecodeContext *decode = (DecodeContext*)avctx->opaque;
            AVHWFramesContext *frames_ctx;
            AVQSVFramesContext *frames_hwctx;
            int ret;

            /* create a pool of surfaces to be used by the decoder */
            avctx->hw_frames_ctx = av_hwframe_ctx_alloc(decode->hw_device_ref);
            if (!avctx->hw_frames_ctx)
                return AV_PIX_FMT_NONE;
            frames_ctx = (AVHWFramesContext*)avctx->hw_frames_ctx->data;
            frames_hwctx = (AVQSVFramesContext*)frames_ctx->hwctx;

            frames_ctx->format = AV_PIX_FMT_QSV;
            frames_ctx->sw_format = avctx->sw_pix_fmt;
            frames_ctx->width = FFALIGN(avctx->coded_width, 32);
            frames_ctx->height = FFALIGN(avctx->coded_height, 32);
            frames_ctx->initial_pool_size = 32;
            frames_hwctx->frame_type = MFX_MEMTYPE_VIDEO_MEMORY_DECODER_TARGET;

            ret = av_hwframe_ctx_init(avctx->hw_frames_ctx);
            if (ret < 0)
                return AV_PIX_FMT_NONE;

            return AV_PIX_FMT_QSV;
        }
        pix_fmts++;
    }

    fprintf(stderr, "The QSV pixel format not offered in get_format()\n");
    return AV_PIX_FMT_NONE;
}



But I also noticed that even though I set decoder_ctx->get_format = get_format;, this callback is never actually invoked.
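
For comparison, a minimal sketch of the usual wiring in libavcodec (hw_device_ctx here is the AVBufferRef filled in by av_hwdevice_ctx_create above; this uses the standard hw_device_ctx field on the codec context rather than the ->opaque pointer):

// Sketch: attach the device to the decoder context itself, before
// avcodec_open2(). With hw_device_ctx set, libavcodec can drive the QSV
// format negotiation through get_format and create a frames context on its
// own, so received frames should carry a non-NULL hw_frames_ctx.
decoder_ctx->hw_device_ctx = av_buffer_ref(hw_device_ctx);
if (!decoder_ctx->hw_device_ctx)
{
    LError("Failed to reference the QSV device.");
    return NULL;
}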


I observed that my GPU is busy while the program runs, which suggests hardware decoding really is taking place. My next goal is to render a frame from the decoded AVFrame. My understanding is that the hw_frames_ctx of the AVFrame refers to a texture on the GPU, and I would like to use this field directly for D3D11 rendering and display it on screen.
My questions are:


- Can hw_frames_ctx be empty even when hardware decoding succeeds?
- Does it represent a texture handle on the GPU?
- If my rendering approach is wrong, how can I correctly render this AVFrame using D3D11? (See the sketch after this list.)
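
On the rendering question, a hedged baseline sketch, assuming frame is a received AVFrame with a valid hw_frames_ctx: download the surface to system memory and upload it to a D3D11 texture yourself; zero-copy interop is possible but more involved.

// Sketch: copy the decoded QSV surface into a software frame. The result is
// typically NV12 (the frames context's sw_format), which can be uploaded to
// a D3D11 texture (e.g. DXGI_FORMAT_NV12) for rendering. For zero-copy,
// av_hwdevice_ctx_create_derived() to AV_HWDEVICE_TYPE_D3D11VA plus
// av_hwframe_map() is the direction to explore.
AVFrame* sw_frame = av_frame_alloc();
int err2 = av_hwframe_transfer_data(sw_frame, frame, 0);
if (err2 < 0)
{
    LError("Failed to download frame to system memory: {}", err2);
}
// ... upload sw_frame->data[0] / data[1] (Y plane and interleaved UV) ...
av_frame_free(&sw_frame);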








-
Encoding a growing video file in realtime fails prematurely
17 January 2023, by Macster. This batch script repeatedly concatenates video clips listed in a text file. The output file is then encoded in realtime into DASH format. Unfortunately, the realtime encoding always ends prematurely, and I can't figure out why. From what I observed, the encoding shouldn't be able to catch up with the concatenation, which happens each time after the duration of the clip that was just added, because I set an offset via timeout for when the encoding starts.


I've tried other formats like .mp4 and .h264 and other options, but nothing seems to help. So my assumption is, that there is a conflict when read/write operation is made and these operations overlap at a certain point. But how do I find out when and how to avoid it ? I haven't had the feeling that something was happening at the exact same time, when observing the command promt.



The screenshot was taken right as it failed. As you can see, the concat file queue1.webm is already more than 10 seconds longer than the realtime encode at its failing position. That's why I don't think it has to do with catching up too fast. It fails randomly: one time at 25 seconds, the next at 2 minutes and 20 seconds.

To rule out different video settings causing trouble, I'm using only one video file. I will link it here: BigBuckBunny (Mega NZ). It's a 10-second snippet from Big Buck Bunny. I hope this is legal!? But you can use whatever clip you want.


IMPORTANT: If you try to reproduce the behaviour, please make sure mylist.txt contains at least one entry, like file 'bigbuckbunny_webm.webm', because adding something when the file is empty is kinda broken :)

So here is the code:


Just the FFmpeg commands:


ffmpeg -f concat -i "mylist.txt" -safe 0 -c copy -f webm -reset_timestamps 1 -streaming 1 -live 1 -y queue1.webm
[..]
ffmpeg -re -i queue1.webm -c copy -map 0:v -use_timeline 1 -use_template 1 -remove_at_exit 0 -window_size 10 -adaptation_sets "id=0,streams=v" -streaming 1 -live 1 -f dash -y queue.mpd



makedir.bat


@ECHO on

:: Create new queue
IF NOT EXIST "queue1.webm" mkfifo "queue1.webm"

setlocal EnableDelayedExpansion

set string=file 'bigbuckbunny_webm.webm'
set video_path=""
SET /a c=0
set file=-1
set file_before=""

:loop
::Get last entry from "mylist.txt"
for /f "delims=" %%a in ('type mylist.txt ^| findstr /b /c:"file"') do (
 set video_path=%%a
)
echo %video_path%

::Insert file 'bigbuckbunny_webm.webm' if mylist.txt is empty.
if "%video_path%" EQU """" (echo %string% >> mylist.txt && set file=%string:~6,-1%) else (set file=%video_path:~6,-1%)

::Insert file 'bigbuckbunny_webm.webm' into mylist.txt if the current entry (%file%) is the same as the previous one (file 'bigbuckbunny_webm.webm').
if "%file%" EQU "%file_before%" (echo. >> mylist.txt && echo %string%>>mylist.txt) 

echo %file%

::Get the video duration
for /f "tokens=1* delims=:" %%a in ('ffmpeg -i %file% 2^>^&1 ^| findstr "Duration"') do (set duration=%%b)
echo %duration%

::Crop format to HH:MM:SS
set duration=%duration:~1,11%
echo %duration%

::Check for a leading zero in the seconds, like 09, and use only the 9
::(set /a would otherwise read 08/09 as invalid octal).
if %duration:~6,1% EQU 0 (
    set /a sec=%duration:~7,1%
) else (
    set /a sec=%duration:~6,2%
)
echo %sec%

::Convert duration into seconds
set /a duration=%duration:~0,2%*3600+%duration:~3,2%*60+%sec%
echo %duration%
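
:: (Hypothetical alternative: the leading-zero pitfall handled above for the
:: seconds also applies to hours and minutes, since set /a reads 08/09 as
:: invalid octal. Prefixing each field with 1 and subtracting 100 avoids the
:: special-casing entirely:
::   set /a duration=(1%duration:~0,2%-100)*3600+(1%duration:~3,2%-100)*60+(1%duration:~6,2%-100)
:: )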

::echo %duration%

::Increase iteration count.
set /a c=c+1

::Add new clip to queue.
ffmpeg -f concat -i "mylist.txt" -safe 0 -c copy -f webm -reset_timestamps 1 -streaming 1 -live 1 -y queue1.webm

::Start realtime encoding queue1, if a first clip was added.
if !c! EQU 1 (
 start cmd /k "startRealtimeEncoding.bat"
)

::Wait for the duration of the inserted video.
timeout /t %duration%

::Set the actual filename as the previous file for the next iteration.
set file_before=%file%

::Stop after c loops.
if !c! NEQ 20 goto loop

echo %c%

endlocal

:end 



startRealtimeEncoding.bat


@ECHO off

timeout /t 5
ffmpeg -re -i queue1.webm -c copy -map 0:v -seg_duration 2 -keyint_min 48 -use_timeline 1 -use_template 1 -remove_at_exit 0 -window_size 10 -adaptation_sets "id=0,streams=v" -streaming 1 -live 1 -f dash -y queue.mpd

:end



-
Manim Animation Rendering Fails on Google Cloud Run: Segment Combination Issues [closed]
28 June, by Ahaskar Kashyap

Problem Summary


I'm running a Manim animation server on Google Cloud Run that successfully creates video segments but fails during the FFmpeg combination step. The behavior is inconsistent based on the number of segments created.


Environment

- Platform: Google Cloud Run (8 GB RAM, 4 CPU)
- Container: Debian 12 (bookworm) with Python 3.9.23
- FFmpeg: 5.1.6 (with h264 support enabled)
- Manim: latest version with the -ql (480p15) quality setting
- Timeout: 240 seconds

Observed Behavior

Animation Complexity  | Segments Created | Final Video              | Status
----------------------|------------------|--------------------------|--------------------
Simple (2 segments)   | ✅ Success       | ✅ Created (7,681 bytes) | ❌ Reports "failed"
Complex (8+ segments) | ✅ Success       | ❌ Not created           | ❌ Actually fails

Code Structure


# Manim command used
manim_cmd = [
 'manim', python_file, scene_class,
 '--media_dir', output_dir,
 '-ql', # Low quality (480p15)
 '--disable_caching',
 '--output_file', f"{output_filename}.mp4",
 '--verbosity', 'ERROR',
 '--progress_bar', 'none',
 '--write_to_movie'
]
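
For context, a hedged sketch of how a command list like this might be executed; the 240-second timeout mirrors the Environment section, and capturing output preserves the FFmpeg errors that the failure message refers to:

# Hypothetical runner: capture Manim's full output instead of relying on the
# exit code alone, since (per Issue 1 below) the final file can exist even
# when the process returns 1.
import subprocess

result = subprocess.run(manim_cmd, capture_output=True, text=True, timeout=240)
if result.returncode != 0:
    print("Manim stderr tail:", result.stderr[-4000:])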



Specific Issues

Issue 1: False Negatives (Simple Animations)

- What happens: Manim creates 2 segments successfully, and FFmpeg combines them into the final video
- Problem: the final video exists and is playable, but the process reports "Manim failed (code 1)"
- Evidence: the "failed" video can be downloaded via /videos/filename.mp4 and plays correctly

Issue 2: Real Failures (Complex Animations)

- What happens: Manim creates 8+ segments successfully
- Problem: the FFmpeg combination step genuinely fails; no final video is created
- Error: the process exits with code 1 and only partial segments remain

Key Questions

- Why does FFmpeg combination work for 2 segments but fail for 8+ segments?
- Why does the same code work locally but fail on Cloud Run?
- Is this a Cloud Run container limitation, an FFmpeg configuration issue, or a Manim-specific problem?
- How can I debug FFmpeg combination failures in a containerized environment? (See the sketch after this list.)
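
A hedged debugging sketch for the last question, reusing the paths from the File Structure section below: re-run the combination step by hand with full FFmpeg logging, since the reported error only shows "[stderr contains FFmpeg errors]".

# Hypothetical debug helper: re-run FFmpeg's concat step by hand over the
# partial segments so its full stderr is visible. Paths mirror the
# "File Structure" section below.
import subprocess
from pathlib import Path

seg_dir = Path("/app/manim_animations/animation_name/videos"
               "/animation_name_1234/480p15/partial_movie_files/SceneClass")
list_file = seg_dir / "list.txt"
list_file.write_text(
    "".join(f"file '{p}'\n" for p in sorted(seg_dir.glob("uncached_*.mp4")))
)
result = subprocess.run(
    ["ffmpeg", "-v", "verbose", "-f", "concat", "-safe", "0",
     "-i", str(list_file), "-c", "copy", "-y", "/tmp/combined_debug.mp4"],
    capture_output=True, text=True,
)
print(result.returncode)
print(result.stderr[-4000:])  # tail of FFmpeg's log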










File Structure (When Working)


/app/manim_animations/
└── animation_name/
    └── videos/
        └── animation_name_1234/
            └── 480p15/
                ├── partial_movie_files/
                │   └── SceneClass/
                │       ├── uncached_00000.mp4
                │       └── uncached_00001.mp4
                └── final_animation.mp4  # This gets created for 2 segments



Error Output


🔒 ISOLATED: Manim return code: 1
Manim failed (code 1): [stderr contains FFmpeg errors]



Has anyone encountered similar issues with Manim + FFmpeg on Cloud Run or other containerized environments? Any insight into why segment count affects combination success would be greatly appreciated.


Investigation Results


What Works:

- ✅ Local development (identical code works perfectly)
- ✅ FFmpeg installation (ffmpeg -version works, h264 encoders available)
- ✅ Segment creation (all uncached_*.mp4 files created with correct sizes)
- ✅ Simple animations after a container restart

What Doesn't Work :


- 

- ❌ Segment combination for 8+ segments
- ❌ Status detection for 2-segment animations
- ❌ Animations after multiple renders (resource accumulation ?)








Theories Tested:

- Resource constraints: upgraded to 16 GB RAM / 8 CPU; this made things worse
- FFmpeg version: upgrading 5.1.6 → 7.x broke basic functionality
- File accumulation: a container restart helps temporarily
- Path detection: the isolation script may be looking in the wrong directories (see the sketch below)
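
For the path-detection theory, a hedged sketch; find_final_video, media_dir, and output_filename are hypothetical names, typed for Python 3.9:

# Hypothetical check: search the media tree for the combined output instead of
# trusting one hardcoded path, ignoring the partial_movie_files segments, and
# only declare failure if nothing is found.
from pathlib import Path
from typing import Optional

def find_final_video(media_dir: str, output_filename: str) -> Optional[Path]:
    hits = sorted(Path(media_dir).rglob(f"{output_filename}.mp4"))
    hits = [h for h in hits if "partial_movie_files" not in h.parts]
    return hits[0] if hits else None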