
Media (91)
-
Spitfire Parade - Crisis
15 May 2011
Updated: September 2011
Language: English
Type: Audio
-
Wired NextMusic
14 May 2011
Updated: February 2012
Language: English
Type: Video
-
Video d’abeille en portrait
14 May 2011
Updated: February 2012
Language: French
Type: Video
-
Sintel MP4 Surround 5.1 Full
13 May 2011
Updated: February 2012
Language: English
Type: Video
-
Carte de Schillerkiez
13 May 2011
Updated: September 2011
Language: English
Type: Text
-
Publier une image simplement
13 April 2011
Updated: February 2012
Language: French
Type: Video
Other articles (58)
-
HTML5 audio and video support
13 April 2011
MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
The MediaSPIP player was created specifically for MediaSPIP and can easily be adapted to fit a specific theme.
For older browsers, the Flowplayer Flash fallback is used.
MediaSPIP allows for media playback on major mobile platforms with the above (...)
-
HTML5 audio and video support
10 April 2011
MediaSPIP uses the HTML5 video and audio tags to play multimedia documents, taking advantage of the latest W3C innovations supported by modern browsers.
For older browsers, the Flowplayer Flash fallback is used.
The HTML5 player was created specifically for MediaSPIP: its appearance is fully customisable to match a chosen theme.
These technologies make it possible to deliver video and sound both to conventional computers (...)
-
From upload to the final video [standalone version]
31 January 2010
In SPIPMotion, an audio or video document goes through three distinct stages.
Upload and retrieval of information about the source video
First, a SPIP article must be created and the "source" video document attached to it.
When this document is attached to the article, two actions are executed in addition to the normal behaviour: retrieval of the technical information of the file's audio and video streams, and generation of a thumbnail: extraction of a (...)
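The two extra actions described above (stream-information retrieval and thumbnail generation) can be sketched with stock FFmpeg tools. This is only an illustration, not SPIPMotion's actual code, and the file names are placeholders:

```shell
# Retrieve the technical information of the audio and video streams
ffprobe -v error -show_streams -of json source.mp4

# Generate a thumbnail: extract a single frame (the 5-second offset is arbitrary)
ffmpeg -ss 5 -i source.mp4 -frames:v 1 -q:v 2 thumbnail.jpg
```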
On other sites (8376)
-
Compression artifacts using sws_scale(): AVFrame YUV420P -> OpenCV Mat BGR24 and back
8 September 2023, by Morph
Using C++ and FFmpeg, I transcode an H264 video in an .mp4 container to H265 video in an .mp4 container. That works perfectly, with crisp and clear images, and the encoding conversion is confirmed by checking with FFprobe.


Then I call one extra function between the end of the H264 decoding and the start of the H265 encoding. At that point I have an allocated AVFrame* that I pass to that function as an argument.


The function converts the AVFrame into an OpenCV cv::Mat and back. Technically that is the easy part, yet I ran into a compression-artifact problem in the process, and I don't understand why it happens.


The function code (including a workaround for the question that follows) is:


#include <opencv2/opencv.hpp>
extern "C" {
#include <libavutil/frame.h>
#include <libswscale/swscale.h>
}

void modifyVideoFrame(AVFrame* frame)
{
    // STEP 1: WORKAROUND - overwriting AV_PIX_FMT_YUV420P BEFORE both sws_scale()
    // calls below makes the "compression artifacts" problem disappear.
    frame->format = AV_PIX_FMT_RGB24;

    // STEP 2: Convert the FFmpeg AVFrame to an OpenCV cv::Mat (matrix) object.
    cv::Mat image(frame->height, frame->width, CV_8UC3);
    int stride = static_cast<int>(image.step1());

    SwsContext* context = sws_getContext(frame->width, frame->height, (AVPixelFormat)frame->format,
                                         frame->width, frame->height, AV_PIX_FMT_BGR24,
                                         SWS_FAST_BILINEAR, NULL, NULL, NULL);
    sws_scale(context, frame->data, frame->linesize, 0, frame->height, &image.data, &stride);
    sws_freeContext(context);

    // STEP 3: Change the pixels.
    if (false)
    {
        // TODO once the "compression artifacts" problem with the round trip
        // YUV420P -> BGR24 -> YUV420P is solved or explained and understood.
    }

    // UPDATE: added VISUAL CHECK.
    cv::imshow("Visual check of the AVFrame -> cv::Mat conversion", image);
    cv::waitKey(20);

    // STEP 4: Convert the OpenCV Mat object back to the FFmpeg AVFrame.
    stride = static_cast<int>(image.step1());
    context = sws_getContext(frame->width, frame->height, AV_PIX_FMT_BGR24,
                             frame->width, frame->height, (AVPixelFormat)frame->format,
                             SWS_FAST_BILINEAR, NULL, NULL, NULL);
    sws_scale(context, &image.data, &stride, 0, frame->height, frame->data, frame->linesize);
    sws_freeContext(context);
}



The code as shown, including the workaround, works perfectly, but I do NOT understand why.


Using FFprobe I established that the input pixel format is YUV420P, which indeed matches the AV_PIX_FMT_YUV420P found in the frame's format field. If I convert it to BGR24 and back to YUV420P without the workaround in step 1, I get slight compression artifacts, clearly visible when viewing with VLC. So there is a loss somewhere, which is what I am trying to understand.


However, when I use the workaround in step 1, I obtain exactly the same output as if this extra function had not been called (that is, crisp and clear H265 without compression artifacts). To be sure that the conversion took place, I modified the red value (inside the part of the code that now says if (false)), and I can indeed see the changes when playing the H265 output file with VLC.


From that test it is clear that after converting the input data in the AVFrame from YUV420P to a BGR24 cv::Mat, all the information needed to convert it back into the original YUV420P input data was available. Yet that is not what happens without the workaround, as proven by the compression artifacts.


I used the first 17 seconds of the movie clip "Charge" encoded in H264 and available on the 'Blender' website.


Can anyone explain, or help me understand, why the code WITHOUT the workaround does not cleanly convert the input data forwards and then back into the original input data?

(Screenshots: the output without the workaround, compared to what I see with the workaround OR (update) in the visual-check section (cv::imshow) if step 4 of the code is commented out.)



These are the FFmpeg StreamingParams I used on input:


copy_audio => 1
copy_video => 0
vid_codec => "libx265"
vid_video_codec_priv_key => "x265-params"
vid_codec_priv_value => "keyint=60:min-keyint=60:scenecut=0"

// Encoder output
x265 [info]: HEVC encoder version 3.5+98-753305aff
x265 [info]: build info [Windows][GCC 12.2.0][64 bit] 8bit+10bit+12bit
x265 [info]: using cpu capabilities: MMX2 SSE2Fast LZCNT SSSE3 SSE4.2 AVX FMA3 BMI2 AVX2
x265 [info]: Main profile, Level-3.1 (Main tier)
x265 [info]: Thread pool 0 using 64 threads on numa nodes 0
x265 [info]: Slices : 1
x265 [info]: frame threads / pool features : 1 / wpp(12 rows)
x265 [info]: Coding QT: max CU size, min CU size : 64 / 8
x265 [info]: Residual QT: max TU size, max depth : 32 / 1 inter / 1 intra
x265 [info]: ME / range / subpel / merge : hex / 57 / 2 / 2
x265 [info]: Lookahead / bframes / badapt : 15 / 4 / 0
x265 [info]: b-pyramid / weightp / weightb : 1 / 1 / 0
x265 [info]: References / ref-limit cu / depth : 3 / on / on
x265 [info]: AQ: mode / str / qg-size / cu-tree : 2 / 1.0 / 32 / 1
x265 [info]: Rate Control / qCompress : ABR-2000 kbps / 0.60
x265 [info]: VBV/HRD buffer / max-rate / init : 4000 / 2000 / 0.750
x265 [info]: tools: rd=2 psy-rd=2.00 rskip mode=1 signhide tmvp fast-intra
x265 [info]: tools: strong-intra-smoothing lslices=4 deblock sao



-
How to gradually fade audio volume in ffmpeg from X% to Y% and back to X%?
23 July 2024, by DomingoSL
I'm trying to process an audio file (bg-music.mp3) with ffmpeg to achieve specific volume changes at certain time intervals and to trim the file at 30 seconds. The requirements are:


- First 5 seconds at 100% volume
- Fade from 100% to 10% volume between 5 and 7 seconds
- From 25 to 27 seconds, fade audio back to 100% volume and keep it until the end of the file
- Trim the file at 30 seconds
Here is the command I have used so far :


ffmpeg -i bg-music.mp3 -filter_complex "[0]volume=1:enable='between(t,0,5)',volume='if(between(t,5,7),1-(0.45*(t-5)),0.1)':enable='between(t,5,25)',volume='if(between(t,25,27),0.1+(0.45*(t-25)),1)':enable='gt(t,27)'" -t 30 output.mp3



While this command processes the file, the volume changes between the specified intervals (5 to 7 seconds and 25 to 27 seconds) are not gradual: the volume jumps abruptly instead of transitioning smoothly.


How can I correctly apply a gradual volume fade with ffmpeg to meet these requirements?
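For reference, one commonly suggested direction (a sketch, untested against the asker's exact material): the volume filter evaluates its expression only once by default, so time-dependent expressions need eval=frame. A single expression covering all the intervals might look like:

```shell
ffmpeg -i bg-music.mp3 -af "volume='if(lt(t,5), 1, if(lt(t,7), 1-0.45*(t-5), if(lt(t,25), 0.1, if(lt(t,27), 0.1+0.45*(t-25), 1))))':eval=frame" -t 30 output.mp3
```

At t=7 the expression reaches 1 - 0.45*2 = 0.1, and at t=27 it is back at 0.1 + 0.45*2 = 1.0, so the ramps are linear over exactly the requested two-second windows.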


-
Crop black padding and resize back to the original 1920x1080
1 June 2020, by Satish Kumar
I have a video with a resolution of 1920x1080 (16:9 aspect ratio). When played, it is padded with black boxes on all sides. How can I remove the black boxes to get the 1920x1080 video back?






Below are the audio and video details:



Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'Maths Logic.mp4':
 Metadata:
 major_brand : isom
 minor_version : 512
 compatible_brands: isomiso2avc1mp41
 encoder : Lavf58.19.102
 Duration: 00:43:11.24, start: 0.000000, bitrate: 1475 kb/s
 Stream #0:0(und): Video: h264 (High) (avc1 / 0x31637661), yuv420p, 1920x1080 [SAR 1:1 DAR 16:9], 1405 kb/s, 25 fps, 25 tbr, 90k tbn, 50 tbc (default)
 Metadata:
 handler_name : VideoHandler
 Stream #0:1(und): Audio: aac (LC) (mp4a / 0x6134706D), 48000 Hz, stereo, fltp, 64 kb/s (default)
 Metadata:
 handler_name : SoundHandler
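Since the stream is already 1920x1080, the black bars are baked into the picture, so they have to be cropped out and the remainder scaled back up. A common two-step sketch (the crop coordinates below are placeholders; cropdetect prints the actual crop=w:h:x:y values to use in the log):

```shell
# 1) Let FFmpeg suggest the crop area (look for "crop=w:h:x:y" in the output)
ffmpeg -i "Maths Logic.mp4" -t 10 -vf cropdetect=24:16:0 -f null -

# 2) Crop the detected area and scale back up to 1920x1080 (example coordinates)
ffmpeg -i "Maths Logic.mp4" -vf "crop=1440:810:240:135,scale=1920:1080" -c:a copy output.mp4
```

Note that upscaling the cropped picture back to 1920x1080 cannot restore detail lost to the padding; it only restores the frame size.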