
Other articles (43)
-
Configuring language support
15 November 2010. Accessing the configuration and adding supported languages
To configure support for new languages, you need to go to the "Administer" section of the site.
From there, in the navigation menu, you can access a "Language management" section that lets you enable support for new languages.
Each newly added language remains deactivatable as long as no object has been created in that language. In that case, it is greyed out in the configuration and (...) -
Supporting all media types
13 April 2011. Unlike most software and media-sharing platforms, MediaSPIP aims to manage as many different media types as possible. The following are just a few examples from an ever-expanding list of supported formats: images: png, gif, jpg, bmp and more; audio: MP3, Ogg, Wav and more; video: AVI, MP4, OGV, mpg, mov, wmv and more; text, code and other data: OpenOffice, Microsoft Office (Word, PowerPoint, Excel), web (html, CSS), LaTeX, Google Earth and (...)
-
HTML5 audio and video support
10 April 2011. MediaSPIP uses the HTML5 video and audio tags to play multimedia documents, taking advantage of the latest W3C innovations supported by modern browsers.
For older browsers, the Flowplayer Flash player is used instead.
The HTML5 player was created specifically for MediaSPIP: its appearance is fully customizable to match a chosen theme.
These technologies make it possible to deliver video and sound both to conventional computers (...)
On other sites (4712)
-
Revision 17664: spip-2-stable is back and is now spip 2.1.10
6 April 2011, by Ben. -
Compression Artifacts using sws_scale() AVFrame YUV420P -> openCV Mat BGR24 and back
8 September 2023, by Morph. I transcode, using C++ and FFmpeg, an H264 video in an .mp4 container to H265 video in an .mp4 container. That works perfectly, with crisp and clear images, and the encoding conversion is confirmed by checking with FFprobe.


Then, I call one extra function between the end of the H264 decoding and the start of the H265 encoding. At that point I have an allocated AVFrame* that I pass to that function as an argument.


The function converts the AVFrame into an openCV cv::Mat and back. Technically that is the easy part, yet I ran into a compression-artifact problem in the process, and I don't understand why it happens.


The function code (including a workaround for the question that follows) is as follows :


void modifyVideoFrame(AVFrame* frame)
{
    // STEP 1: WORKAROUND; overwriting AV_PIX_FMT_YUV420P BEFORE both sws_scale()
    // calls below solves the "compression artifacts" problem.
    frame->format = AV_PIX_FMT_RGB24;

    // STEP 2: Convert the FFmpeg AVFrame to an openCV cv::Mat (matrix) object.
    cv::Mat image(frame->height, frame->width, CV_8UC3);
    int stride = image.step1();  // bytes per row of an 8-bit, 3-channel Mat

    SwsContext* context = sws_getContext(frame->width, frame->height, (AVPixelFormat)frame->format,
                                         frame->width, frame->height, AV_PIX_FMT_BGR24,
                                         SWS_FAST_BILINEAR, NULL, NULL, NULL);
    sws_scale(context, frame->data, frame->linesize, 0, frame->height, &image.data, &stride);
    sws_freeContext(context);

    // STEP 3: Change the pixels.
    if (false)
    {
        // TODO when the "compression artifacts" problem with baseline YUV420P to BGR24
        // and back BGR24 to YUV420P is solved or explained and understood.
    }

    // UPDATE: Added VISUAL CHECK
    cv::imshow("Visual check of conversion AVFrame to cv::Mat", image);
    cv::waitKey(20);

    // STEP 4: Convert the openCV Mat object back to the FFmpeg AVFrame.
    stride = image.step1();
    context = sws_getContext(frame->width, frame->height, AV_PIX_FMT_BGR24,
                             frame->width, frame->height, (AVPixelFormat)frame->format,
                             SWS_FAST_BILINEAR, NULL, NULL, NULL);
    sws_scale(context, &image.data, &stride, 0, frame->height, frame->data, frame->linesize);
    sws_freeContext(context);
}



The code as shown, including the workaround, works perfectly, but the reason why is NOT understood.


Using FFprobe I established that the input pixel format is YUV420P, which is indeed the AV_PIX_FMT_YUV420P found in the frame's format field. If I convert it to BGR24 and back to YUV420P without the workaround in step 1, I get slight compression artifacts, clearly visible when viewing with VLC. So there is a loss somewhere, and that is what I am trying to understand.


However, when I use the workaround in step 1, I obtain exactly the same output as if this extra function were never called (crisp and clear H265 without compression artifacts). To be sure the conversion took place, I modified the red value (inside the part of the code that now says if (false)), and I can indeed see the changes when playing the H265 output file with VLC.


From that test it seems clear that after converting the input data present in the AVFrame from YUV420P to a BGR24 cv::Mat, all the information needed to convert it back into the original YUV420P input data was available. Yet that is not what happens without the workaround, as the compression artifacts prove.


I used the first 17 seconds of the movie clip "Charge", encoded in H264 and available on the Blender website.


Is there anyone who can explain, or help me understand, why the code WITHOUT the workaround does not cleanly convert the input data forwards and then back into the original input data, compared to what I see with the workaround OR (update) in the visual-check section (cv::imshow) if step 4 of the code is commented out?



These are the FFmpeg StreamingParams that I used on input:


copy_audio => 1
copy_video => 0
vid_codec => "libx265"
vid_video_codec_priv_key => "x265-params"
vid_codec_priv_value => "keyint=60:min-keyint=60:scenecut=0"

// Encoder output
x265 [info]: HEVC encoder version 3.5+98-753305aff
x265 [info]: build info [Windows][GCC 12.2.0][64 bit] 8bit+10bit+12bit
x265 [info]: using cpu capabilities: MMX2 SSE2Fast LZCNT SSSE3 SSE4.2 AVX FMA3 BMI2 AVX2
x265 [info]: Main profile, Level-3.1 (Main tier)
x265 [info]: Thread pool 0 using 64 threads on numa nodes 0
x265 [info]: Slices : 1
x265 [info]: frame threads / pool features : 1 / wpp(12 rows)
x265 [info]: Coding QT: max CU size, min CU size : 64 / 8
x265 [info]: Residual QT: max TU size, max depth : 32 / 1 inter / 1 intra
x265 [info]: ME / range / subpel / merge : hex / 57 / 2 / 2
x265 [info]: Lookahead / bframes / badapt : 15 / 4 / 0
x265 [info]: b-pyramid / weightp / weightb : 1 / 1 / 0
x265 [info]: References / ref-limit cu / depth : 3 / on / on
x265 [info]: AQ: mode / str / qg-size / cu-tree : 2 / 1.0 / 32 / 1
x265 [info]: Rate Control / qCompress : ABR-2000 kbps / 0.60
x265 [info]: VBV/HRD buffer / max-rate / init : 4000 / 2000 / 0.750
x265 [info]: tools: rd=2 psy-rd=2.00 rskip mode=1 signhide tmvp fast-intra
x265 [info]: tools: strong-intra-smoothing lslices=4 deblock sao



-
How to Gradually Fade Audio Volume in ffmpeg from X% to Y% and Back to X%?
23 July 2024, by DomingoSL. I'm trying to process an audio file (bg-music.mp3) using ffmpeg to achieve specific volume changes at certain time intervals and trim the file at 30 seconds. The requirements are:


- First 5 seconds at 100% volume
- Fade from 100% to 10% volume between 5 and 7 seconds
- From 25 to 27 seconds, fade audio back to 100% volume and keep it until the end of the file
- Trim the file at 30 seconds
Here is the command I have used so far :


ffmpeg -i bg-music.mp3 -filter_complex "[0]volume=1:enable='between(t,0,5)',volume='if(between(t,5,7),1-(0.45*(t-5)),0.1)':enable='between(t,5,25)',volume='if(between(t,25,27),0.1+(0.45*(t-25)),1)':enable='gt(t,27)'" -t 30 output.mp3



While this command processes the file, the volume changes are not gradual between the specified intervals (5 to 7 seconds and 25 to 27 seconds). Instead, the volume changes abruptly rather than transitioning smoothly.


How can I correctly apply a gradual volume fade using ffmpeg to meet these requirements?
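One plausible explanation (a sketch, not verified against this exact file): the volume filter evaluates its expression only once by default (eval=once), so a time-dependent expression such as if(between(t,5,7),...) is frozen at its initial value; adding :eval=frame makes it re-evaluate on every frame. A single piecewise expression also avoids chaining three separate volume filters:

```shell
# Hypothetical single-filter variant: one piecewise volume expression,
# re-evaluated every frame via eval=frame, then trimmed at 30 s.
#   t <  5       -> 1.0                (100%)
#   5 <= t < 7   -> 1 - 0.45*(t-5)     (fades 100% -> 10%)
#   7 <= t < 25  -> 0.1                (10%)
#   25 <= t < 27 -> 0.1 + 0.45*(t-25)  (fades 10% -> 100%)
#   t >= 27      -> 1.0                (100%)
ffmpeg -i bg-music.mp3 \
  -af "volume='if(lt(t,5),1,if(lt(t,7),1-0.45*(t-5),if(lt(t,25),0.1,if(lt(t,27),0.1+0.45*(t-25),1))))':eval=frame" \
  -t 30 output.mp3
```

The 0.45 slope makes the expression continuous at the boundaries: at t=7 it reaches 1 - 0.45*2 = 0.1, and at t=27 it reaches 0.1 + 0.45*2 = 1.0.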