
Other articles (108)
-
The videos
21 April 2011 — Like "audio" documents, MediaSPIP displays videos wherever possible using the HTML5 <video> tag.
One drawback of this tag is that it is not recognised correctly by some browsers (Internet Explorer, to name one), and that each browser natively handles only certain video formats.
Its main advantage is native video playback in the browser, which avoids relying on Flash and (...) -
Websites made with MediaSPIP
2 May 2011 — This page lists some websites based on MediaSPIP.
-
Farm deployment option
12 April 2011 — MediaSPIP can be installed as a farm, with a single "core" hosted on a dedicated server and used by a multitude of different sites.
This makes it possible, for example: to share setup costs between several projects/individuals; to deploy a multitude of unique sites quickly; to avoid having to dump every creation into a digital catch-all, as happens on the big general-public platforms scattered across the (...)
On other sites (8261)
-
Can anyone help merge these 2 FFmpeg commands?
7 June 2019, by WebDev — I have two commands which are part of a larger set of commands. Basically I need to merge these two into one command to speed things up. Can anyone help me, please?
ffmpeg -y -f concat -safe 0 -protocol_whitelist "file,http,https,tcp,tls" -i "photos.txt" -i "mainscreen.png" -i "audio.mp3" -filter_complex "scale=3840x2160,zoompan=z='if(lte(zoom,1.0),1.2,max(1.001,zoom-0.0015))':x='iw/2-(iw/zoom/2)':y='ih/2-(ih/zoom/2)':s=1280x720:fps=15:d=120, overlay=(main_w-overlay_w)/2:(main_h-overlay_h)/2, drawtext=fontfile=font.otf:text='%%~ni':fontcolor=black:fontsize=32:x=90:y=582" -preset veryfast -tune stillimage -shortest -pix_fmt yuv420p "slideshow.mp4"
ffmpeg -y -i "slideshow.mp4" -filter_complex "[0:a]showwaves=mode=cline:s=110x36:r=15:scale=sqrt:colors=0x222222,colorkey=0x000000:0.01:0.1,format=yuva420p[v];[0:v][v]overlay=77:444,scale=1280:720[outv]" -map "[outv]" -map 0:a -preset veryfast "done.mp4"firstly creates a slideshow and adds text.
then draws a showwaves effect onto the video.thank you in advance
UPDATE:
From Gyan’s response, and after tinkering for a while, it mostly works how I needed it to. It does what I wanted, but keeps throwing a "deprecated pixel format" warning. Here is the updated command once I finished. Can you spot the problem? And is the command written properly?
ffmpeg -y -f concat -safe 0 -protocol_whitelist "file,http,https,tcp,tls" -i "images.txt" -i "screen.png" -i "audio.mp3" -filter_complex "scale=3840x2160,zoompan=z='if(lte(zoom,1.0),1.2,max(1.001,zoom-0.0015))':x='iw/2-(iw/zoom/2)':y='ih/2-(ih/zoom/2)':s=1280x720:fps=15:d=120, overlay=(main_w-overlay_w)/2:(main_h-overlay_h)/2, drawtext=fontfile=Assets/Fonts/font.otf:text='%%~ni':fontcolor=black:fontsize=32:x=90:y=582[v];[2:a]showwaves=mode=cline:s=110x36:r=15:scale=sqrt:colors=0x222222,colorkey=0x000000:0.01:0.1,format=yuva420p[w];[v][w]overlay=77:444,scale=1280:720[outv]" -map "[outv]" -map 2:a -c:v libx264 -c:a aac -preset veryfast -shortest -pix_fmt yuv420p "done.mp4"
SECOND UPDATE:
Thanks to Gyan (once again) for helping me better understand the command. Here is the final command, which does what I need:
ffmpeg -y -f concat -safe 0 -protocol_whitelist "file,http,https,tcp,tls" -i "images.txt" -i "screen.png" -i "tmp.audio.mp3" -filter_complex "[0]scale=3840x2160,zoompan=z='if(lte(zoom,1.0),1.2,max(1.001,zoom-0.0015))':x='iw/2-(iw/zoom/2)':y='ih/2-(ih/zoom/2)':s=1280x720:fps=15:d=120, overlay=(main_w-overlay_w)/2:(main_h-overlay_h)/2, drawtext=fontfile=font.otf:text='%%~ni':fontcolor=black:fontsize=32:x=90:y=582[v];[2:a]showwaves=mode=cline:s=110x36:r=15:scale=sqrt:colors=0x222222,colorkey=0x000000:0.01:0.1,format=yuva420p[w];[v][w]overlay=77:444,scale=1280:720[outv]" -map "[outv]" -map 2:a -preset veryfast -shortest -pix_fmt yuv420p "done.mp4"
The only change from Gyan’s code is that I removed [p];[1][p] and replaced it with a comma to achieve what I needed. It seems to work perfectly now, ignoring the deprecated pixel format warning. An annotated breakdown of the merged graph is sketched below.
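For readers skimming the thread, here is the same merged command with the graph split into its three chains (a sketch only: filenames, coordinates and the font path are the asker's, and the Windows batch variable %%~ni is replaced with a literal 'title' since it only exists inside a batch file). The "deprecated pixel format" warning is most likely harmless here: it is typically emitted when JPEG source images decode to the full-range yuvj420p pixel format.

#!/bin/sh
# Chain 1: slideshow from the concat input [0]; the unlabeled second pad of
# overlay is auto-wired by ffmpeg to the next unused input, the PNG (input 1).
SLIDESHOW="[0]scale=3840x2160,zoompan=z='if(lte(zoom,1.0),1.2,max(1.001,zoom-0.0015))':x='iw/2-(iw/zoom/2)':y='ih/2-(ih/zoom/2)':s=1280x720:fps=15:d=120,overlay=(main_w-overlay_w)/2:(main_h-overlay_h)/2,drawtext=fontfile=font.otf:text='title':fontcolor=black:fontsize=32:x=90:y=582[v]"

# Chain 2: waveform rendered from the audio input [2:a], black keyed out.
WAVES="[2:a]showwaves=mode=cline:s=110x36:r=15:scale=sqrt:colors=0x222222,colorkey=0x000000:0.01:0.1,format=yuva420p[w]"

# Chain 3: composite the waveform onto the slideshow.
COMPOSITE="[v][w]overlay=77:444,scale=1280:720[outv]"

ffmpeg -y -f concat -safe 0 -protocol_whitelist "file,http,https,tcp,tls" \
  -i "images.txt" -i "screen.png" -i "tmp.audio.mp3" \
  -filter_complex "$SLIDESHOW;$WAVES;$COMPOSITE" \
  -map "[outv]" -map 2:a -preset veryfast -shortest -pix_fmt yuv420p "done.mp4"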
FINAL UPDATE :
Gyan’s updated code works perfectly, a massive thank you to you, sir. You also helped me understand your work, which was very helpful.
-
M4a (mp4) audio file encoded with pydub+ffmpeg doesn't play on Android
6 July 2020, by Evelyn — I have a Python script to split up some wav files and export to m4a using pydub. I'm able to get these files to play on several devices, but not on an Android device (a Google Pixel 3). When I try encoding with straight ffmpeg in the terminal, that works fine on the Android device.

What is the difference between these two files, and since pydub is using ffmpeg, what do I need to change to make it do exactly the same as the ffmpeg command?

Not working


>>> from pydub import AudioSegment
>>> audio = AudioSegment.from_wav("input.wav")
>>> slice = audio[1000:3000]
>>> slice.export("pydub_export.m4a", format="mp4", parameters=["-ac", "1", "-c:a", "libfdk_aac", "-profile:a", "aac_he", "-vbr", "2"])



mediainfo output:

General
Complete name : pydub_export.m4a
Format : MPEG-4
Format profile : Base Media
Codec ID : isom (isom/iso2/mp41)
File size : 11.7 KiB
Duration : 2 s 115 ms
Overall bit rate mode : Constant
Overall bit rate : 45.1 kb/s
Writing application : Lavf58.29.100

Audio
ID : 1
Format : AAC LC SBR
Format/Info : Advanced Audio Codec Low Complexity with Spectral Band Replication
Commercial name : HE-AAC
Format settings : NBC
Codec ID : mp4a-40-5
Duration : 2 s 115 ms
Duration_LastFrame : -22 ms
Bit rate mode : Constant
Bit rate : 41.4 kb/s
Channel(s) : 1 channel
Channel layout : C
Sampling rate : 44.1 kHz
Frame rate : 21.533 FPS (2048 SPF)
Compression mode : Lossy
Stream size : 10.7 KiB (92%)
Default : Yes
Alternate group : 1



Working


$ ffmpeg -i input.wav -acodec copy -ss 1 -to 3 input_slice.wav
$ ffmpeg -i input_slice.wav -ac 1 -c:a libfdk_aac -profile:a aac_he -vbr 2 ffmpeg_export.m4a



mediainfo output:

General
Complete name : ffmpeg_export.m4a
Format : MPEG-4
Format profile : Apple audio with iTunes info
Codec ID : M4A (isom/iso2)
File size : 11.2 KiB
Duration : 2 s 112 ms
Overall bit rate mode : Constant
Overall bit rate : 43.6 kb/s
Writing application : Lavf58.29.100

Audio
ID : 1
Format : AAC LC SBR
Format/Info : Advanced Audio Codec Low Complexity with Spectral Band Replication
Commercial name : HE-AAC
Format settings : NBC
Codec ID : mp4a-40-5
Duration : 2 s 112 ms
Duration_LastFrame : -25 ms
Bit rate mode : Constant
Bit rate : 39.9 kb/s
Channel(s) : 1 channel
Channel layout : C
Sampling rate : 44.1 kHz
Frame rate : 21.533 FPS (2048 SPF)
Compression mode : Lossy
Stream size : 10.3 KiB (91%)
Default : Yes
Alternate group : 1



I already tried moving the metadata to the front with -movflags faststart on the broken file and it didn't make a difference.
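
One visible difference between the two mediainfo dumps above is the container brand: the broken file is tagged isom with a "Base Media" profile, while the working one carries the M4A brand with the "Apple audio with iTunes info" profile. Since pydub passes its format argument straight through to ffmpeg's -f option, a hedged guess is to request ffmpeg's ipod muxer, which writes the M4A brand, instead of the generic mp4 muxer (a sketch, untested on the device in question; codec parameters unchanged from above):

>>> from pydub import AudioSegment
>>> audio = AudioSegment.from_wav("input.wav")
>>> slice = audio[1000:3000]
>>> # "ipod" selects ffmpeg's iPod/M4A flavour of the MP4 muxer
>>> slice.export("pydub_export.m4a", format="ipod", parameters=["-ac", "1", "-c:a", "libfdk_aac", "-profile:a", "aac_he", "-vbr", "2"])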

-
How AVCodecContext bitrate, framerate and timebase are used when encoding a single frame
28 March 2023, by Cyrus — I am trying to learn FFmpeg from examples, as there is a tight schedule. The task is to encode a raw YUV image into JPEG format at a given width and height. I have found examples on the official FFmpeg website, which turn out to be quite straightforward. However, there are some fields in AVCodecContext that I thought only make sense when encoding videos (e.g. bitrate, framerate, timebase, gop size, max_b_frames, etc.).


I understand at a high level what those values mean for video, but do I need to care about them when I just want a single image? Currently, for testing, I am just setting them to dummy values and it seems to work. But I want to make sure that I am not making terrible assumptions that will break in the long run.
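
For an intra-only codec such as MJPEG, only a few of those fields actually matter. Below is a minimal sketch, assuming the same codec_ctx as set up in the code further down (the qscale value of 5 is an arbitrary illustration; 2 is best quality, 31 is worst):

    // time_base must be a valid rational because the encoder checks it, but its
    // value is irrelevant for a single frame; gop_size and max_b_frames are
    // ignored by an intra-only codec, and quality is steered better with qscale
    // than with bit_rate.
    codec_ctx->width = thumbnail_width;
    codec_ctx->height = thumbnail_height;
    codec_ctx->pix_fmt = AV_PIX_FMT_YUVJ420P;     // full-range YUV, as JPEG expects
    codec_ctx->time_base = (AVRational){1, 25};   // any valid rational works here
    codec_ctx->flags |= AV_CODEC_FLAG_QSCALE;     // drive quality instead of bitrate
    codec_ctx->global_quality = FF_QP2LAMBDA * 5; // JPEG-style quantiser, 2..31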


EDIT:


Here is the code I have. Most of it is copied from the examples, with some changes to replace old APIs with newer ones.


#include "thumbnail.h"
#include "libavcodec/avcodec.h"
#include "libavutil/imgutils.h"
#include 
#include 
#include 

void print_averror(int error_code) {
 char err_msg[100] = {0};
 av_strerror(error_code, err_msg, 100);
 printf("Reason: %s\n", err_msg);
}

ffmpeg_status_t save_yuv_as_jpeg(uint8_t* source_buffer, char* output_thumbnail_filename, int thumbnail_width, int thumbnail_height) {
 const AVCodec* mjpeg_codec = avcodec_find_encoder(AV_CODEC_ID_MJPEG);
 if (!mjpeg_codec) {
 printf("Codec for mjpeg cannot be found.\n");
 return FFMPEG_THUMBNAIL_CODEC_NOT_FOUND;
 }

 AVCodecContext* codec_ctx = avcodec_alloc_context3(mjpeg_codec);
 if (!codec_ctx) {
 printf("Codec context cannot be allocated for the given mjpeg codec.\n");
 return FFMPEG_THUMBNAIL_ALLOC_CONTEXT_FAILED;
 }

 AVPacket* pkt = av_packet_alloc();
 if (!pkt) {
 printf("Thumbnail packet cannot be allocated.\n");
 return FFMPEG_THUMBNAIL_ALLOC_PACKET_FAILED;
 }

 AVFrame* frame = av_frame_alloc();
 if (!frame) {
 printf("Thumbnail frame cannot be allocated.\n");
 return FFMPEG_THUMBNAIL_ALLOC_FRAME_FAILED;
 }

 // The part that I don't understand
 codec_ctx->bit_rate = 400000;
 codec_ctx->width = thumbnail_width;
 codec_ctx->height = thumbnail_height;
 codec_ctx->time_base = (AVRational){1, 25};
 codec_ctx->framerate = (AVRational){1, 25};

 codec_ctx->gop_size = 10;
 codec_ctx->max_b_frames = 1;
 codec_ctx->pix_fmt = AV_PIX_FMT_YUV420P;
 int ret = av_image_fill_arrays(frame->data, frame->linesize, source_buffer, AV_PIX_FMT_YUV420P, thumbnail_width, thumbnail_height, 32);
 if (ret < 0) {
 print_averror(ret);
 printf("Pixel format: yuv420p, width: %d, height: %d\n", thumbnail_width, thumbnail_height);
 return FFMPEG_THUMBNAIL_FILL_FRAME_DATA_FAILED;
 }

 ret = avcodec_send_frame(codec_ctx, frame);
 if (ret < 0) {
 print_averror(ret);
 printf("Failed to send frame to encoder.\n");
 return FFMPEG_THUMBNAIL_FILL_SEND_FRAME_FAILED;
 }

 ret = avcodec_receive_packet(codec_ctx, pkt);
 if (ret < 0) {
 print_averror(ret);
 printf("Failed to receive packet from encoder.\n");
 return FFMPEG_THUMBNAIL_FILL_SEND_FRAME_FAILED;
 }

 // store the thumbnail in output
 int fd = open(output_thumbnail_filename, O_CREAT | O_RDWR);
 write(fd, pkt->data, pkt->size);
 close(fd);

 // freeing allocated structs
 avcodec_free_context(&codec_ctx);
 av_frame_free(&frame);
 av_packet_free(&pkt);
 return FFMPEG_SUCCESS;
}
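
MJPEG emits its packet immediately, which is why the single avcodec_receive_packet call above happens to work. A more defensive sketch of the send/receive step, for codecs with internal delay, flushes the encoder with a NULL frame and drains until EOF:

    ret = avcodec_send_frame(codec_ctx, frame);
    if (ret < 0) { /* handle and return */ }
    ret = avcodec_send_frame(codec_ctx, NULL); // enter draining mode
    if (ret < 0) { /* handle and return */ }
    while ((ret = avcodec_receive_packet(codec_ctx, pkt)) == 0) {
        write(fd, pkt->data, pkt->size); // fd must already be open at this point
        av_packet_unref(pkt);
    }
    // AVERROR_EOF here means the encoder is fully drained; anything else is an error.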