
Other articles (41)

  • The farm's regular Cron tasks

    1 December 2010, by

    Managing the farm relies on running several repetitive tasks, known as Cron tasks, at regular intervals.
    The super Cron (gestion_mutu_super_cron)
    This task, scheduled every minute, simply calls the Cron of every instance in the mutualised farm on a regular basis. Combined with a system Cron on the central site of the farm, it generates regular visits to the various sites and prevents the tasks of rarely visited sites from being too (...)
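    The super Cron loop described above can be sketched in a few lines. Everything below is an assumption for illustration: the endpoint name and the site list are hypothetical, not MediaSPIP's actual URLs.

```python
# Hypothetical sketch of a "super Cron": hit each mutualised site's cron
# endpoint so that rarely visited sites still run their scheduled tasks.
# The endpoint and site list below are assumptions, not real URLs.
from urllib.parse import urljoin

SITES = [
    "https://site-a.example.org/",
    "https://site-b.example.org/",
]

def cron_urls(sites, endpoint="spip.php?action=cron"):
    """Build the cron-trigger URL for every site in the farm."""
    return [urljoin(site, endpoint) for site in sites]

for url in cron_urls(SITES):
    # A real super Cron would issue an HTTP GET here (e.g. urllib.request);
    # printing keeps the sketch side-effect free.
    print(url)
```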

  • Supported formats

    28 January 2010, by

    The following commands give information about the formats and codecs handled by the local ffmpeg installation:
    ffmpeg -codecs
    ffmpeg -formats
    Accepted input video formats
    This list is not exhaustive; it highlights the main formats in use: h264: H.264 / AVC / MPEG-4 AVC / MPEG-4 part 10; m4v: raw MPEG-4 video format; flv: Flash Video (FLV) / Sorenson Spark / Sorenson H.263; Theora; wmv:
    Possible output video formats
    As a first step, (...)
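    The `ffmpeg -codecs` listing mentioned above uses a fixed-width flag column (D for decoding supported, E for encoding supported). A small sketch of how that column can be read; the sample lines are hardcoded so the sketch runs without ffmpeg installed, and in practice you would capture the real output with subprocess.

```python
# Sketch: parse lines of `ffmpeg -codecs` output to check decode/encode
# support. The SAMPLE string mimics the listing's fixed-width flag column;
# with ffmpeg installed you would instead capture the output via
# subprocess.run(["ffmpeg", "-codecs"], capture_output=True, text=True).
SAMPLE = """\
 DEV.LS h264                 H.264 / AVC / MPEG-4 AVC / MPEG-4 part 10
 D.V.L. flv1                 FLV / Sorenson Spark / Sorenson H.263 (Flash Video)
 DEV.L. theora               Theora
"""

def codec_support(listing):
    """Map codec name -> (can_decode, can_encode) from -codecs output lines."""
    support = {}
    for line in listing.splitlines():
        if len(line) <= 8:          # skip blank or malformed lines
            continue
        flags, name = line[1:7], line[8:].split()[0]
        support[name] = (flags[0] == "D", flags[1] == "E")
    return support

support = codec_support(SAMPLE)
print(support["h264"])  # decode and encode both available in this sample
```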

  • Authorizations overridden by plugins

    27 April 2010, by

    Mediaspip core
    autoriser_auteur_modifier() so that visitors can edit their own information on the authors page

On other sites (6048)

  • How to execute docker in cloud (ffmpeg, docker)

    8 October 2022, by M.K. Kim

    I want to execute a dockerized Python CLI that transcodes a video file in the cloud.
Locally it works fine, but I'd like to hear your thoughts on how best to implement this in the cloud. Below are the Dockerfile and the command used to run the container. As you can see, it takes two arguments (the input and output file paths).

    


    # Dockerfile

FROM alpine:3.12

# Install requirements
RUN apk add python3 && \
    apk add py3-pip  && \
    pip3 install --upgrade pip && \
    apk add ffmpeg


# Install dependency
RUN pip3 install dependency1

# Create work directory and run container as user
RUN addgroup -S app && adduser -S app -G app
RUN mkdir /app && chown app:app /app
WORKDIR /app
USER app

# Use my-app as entrypoint (allows passing arguments)
ENTRYPOINT ["/usr/bin/my-app"]



    


    docker build . -t my-app:latest
docker run --rm -it -v $PWD:/app my-app:latest inputfile.mp4 outputfile.mp4


    


    Locally it works. But I want to build a front-end where I can upload a video file to cloud storage (S3, Google Cloud Storage, etc.), and I want the output file either uploaded to cloud storage or downloaded from the browser.

    


    What would be the best approach in this case? Would using this Docker image as the basis for a Lambda function be a good approach? I'm not sure how to pass the arguments (the paths of the input and output files) in the cloud.
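    One common pattern, sketched below under the assumption of S3 plus Lambda, is to pass object URIs instead of local paths and let the container download and upload around the transcode. The event keys and the `/usr/bin/my-app` call are hypothetical illustrations, not a definitive design.

```python
# Hypothetical sketch: instead of local file paths, the containerized CLI
# receives S3 URIs, downloads the input, transcodes, and uploads the result.
# Only parse_s3_uri is exercised here; the boto3 and subprocess calls are
# left as comments because they need AWS credentials and the real image.
from urllib.parse import urlparse

def parse_s3_uri(uri):
    """Split 's3://bucket/some/key.mp4' into (bucket, key)."""
    parts = urlparse(uri)
    if parts.scheme != "s3" or not parts.netloc:
        raise ValueError(f"not an S3 URI: {uri}")
    return parts.netloc, parts.path.lstrip("/")

def handler(event, context=None):
    bucket_in, key_in = parse_s3_uri(event["input"])
    bucket_out, key_out = parse_s3_uri(event["output"])
    # s3 = boto3.client("s3")
    # s3.download_file(bucket_in, key_in, "/tmp/in.mp4")
    # subprocess.run(["/usr/bin/my-app", "/tmp/in.mp4", "/tmp/out.mp4"], check=True)
    # s3.upload_file("/tmp/out.mp4", bucket_out, key_out)
    return {"input": (bucket_in, key_in), "output": (bucket_out, key_out)}

print(handler({"input": "s3://videos/in.mp4", "output": "s3://videos/out.mp4"}))
```

    With this shape the event payload replaces the CLI arguments; the same image, rebased on a Lambda-compatible base image, could also run as a Batch or Cloud Run job.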

    


  • FFMPEG xfade and acrossfade not working properly when used in combination

    18 October 2022, by Khawar Raza

    I am trying to merge videos with transition effects. I am using the ffmpeg-kit v5.1 Android wrapper, which uses ffmpeg v5.1.2 internally. When I use xfade and acrossfade in combination, the output video and audio are out of sync: sometimes the video finishes first, and sometimes the audio does.

    


    Here is the sample command:

    


     -i "input1.mp4"    // 1080x1920, 6132 milliseconds,
 -i "input2.mp4"    // 1080x808,  4808 milliseconds
 -i "input3.mp4"    // 1280x720,  5399 milliseconds
 
 -filter_complex " 
 
 [0:v] scale=w=1280:h=1920:force_original_aspect_ratio=disable, boxblur=40[blurcanvas0]; 
 [0:v] scale=1080.0:1920.0:force_original_aspect_ratio=disable[scaled0]; 
 [blurcanvas0][scaled0] overlay=(main_w-overlay_w)*0.5:(main_h-overlay_h)*0.5, settb=AVTB, format=yuv420p,fps=30[part0]; 
 
 [1:v] scale=w=1280:h=1920:force_original_aspect_ratio=disable, boxblur=40[blurcanvas1]; 
 [1:v] scale=1280.0:957.62964:force_original_aspect_ratio=disable[scaled1]; 
 [blurcanvas1][scaled1] overlay=(main_w-overlay_w)*0.5:(main_h-overlay_h)*0.5, settb=AVTB, format=yuv420p,fps=30[part1]; 
 
 [2:v] scale=w=1280:h=1920:force_original_aspect_ratio=disable, boxblur=40[blurcanvas2]; 
 [2:v] scale=1280.0:720.0:force_original_aspect_ratio=disable[scaled2]; 
 [blurcanvas2][scaled2] overlay=(main_w-overlay_w)*0.5:(main_h-overlay_h)*0.5, settb=AVTB, format=yuv420p,fps=30[part2];  
 
 [part0][part1] xfade=transition=smoothleft:duration=2:offset=5.132[transition0]; 
 [transition0][part2] xfade=transition=smoothleft:duration=2:offset=8.940001[transition1];    
 
 [0:a][1:a]acrossfade=d=2[afade0];
 [afade0][2:a]acrossfade=d=2[afade1] " 
 
 -vsync passthrough -c:v libx264 -pix_fmt yuv420p 
 -map [transition1] -map [afade1] -preset ultrafast  "output.mp4"


    


    Basically, the script takes videos of different resolutions and formats, scales them up to the maximum size while respecting the aspect ratio, fills the extra space with a blurred background, and then joins the individual parts using a transition effect. The audio streams are then joined with the acrossfade filter, using the same transition duration as the videos.

    


    As per the xfade requirements, the script converts the videos to the same resolution and pixel format (yuv420p), changes the fps to 30, and sets the timebase with settb=AVTB. All the requirements are met, but the resulting video and audio are not in sync. Any hint as to what is missing here?

    


    Edit:

    


    As per @kesh's reply, here is the updated command, which had no impact:

    


    -i "input1.mp4" -i "input2.mp4" -i "input3.mp4" -filter_complex " 

[0:v]scale=w=1280:h=1920:force_original_aspect_ratio=disable, boxblur=40[canvas0]; 
[0:v] scale=1080.0:1920.0:force_original_aspect_ratio=disable[scaled0]; 
[canvas0][scaled0] overlay=(main_w-overlay_w)*0.5:(main_h-overlay_h)*0.5, settb=AVTB, format=yuv420p,fps=30[overlay0];

[1:v]scale=w=1280:h=1920:force_original_aspect_ratio=disable, boxblur=40[canvas1]; 
[1:v] scale=1280.0:957.62964:force_original_aspect_ratio=disable[scaled1]; 
[canvas1][scaled1] overlay=(main_w-overlay_w)*0.5:(main_h-overlay_h)*0.5, settb=AVTB, format=yuv420p,fps=30[overlay1];

[2:v]scale=w=1280:h=1920:force_original_aspect_ratio=disable, boxblur=40[canvas2]; 
[2:v] scale=1280.0:720.0:force_original_aspect_ratio=disable[scaled2]; 
[canvas2][scaled2] overlay=(main_w-overlay_w)*0.5:(main_h-overlay_h)*0.5, settb=AVTB, format=yuv420p,fps=30[overlay2];  

[overlay0][overlay1] xfade=transition=smoothleft:duration=2:offset=5.132[transition0]; 
[transition0][overlay2] xfade=transition=smoothleft:duration=2:offset=8.940001[transition1];

[0:a]asettb=AVTB[audio0]; 
[1:a]asettb=AVTB[audio1]; 
[2:a]asettb=AVTB[audio2];    
[audio0][audio1]acrossfade=d=2[afade0]; 
[afade0][audio2]acrossfade=d=2[afade1]"

-vsync passthrough -c:v libx264 -pix_fmt yuv420p -map [transition1] -map [afade1] -preset ultrafast  "output.mp4"
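    A quick way to narrow down a desync like this is to compare the expected lengths of the two output streams. The sketch below assumes, per the ffmpeg filter documentation, that a chained xfade ends at the last offset plus the last input's duration, while each acrossfade=d removes d seconds from the total audio; the durations and offsets are the ones from the question.

```python
# Back-of-the-envelope length check for the command above.
# Assumption (from the ffmpeg docs): an xfade output ends at
# offset + second_input_duration, and each acrossfade=d trims d seconds
# from the concatenated audio.
durations = [6.132, 4.808, 5.399]    # input lengths in seconds, from the question
fade = 2.0
offsets = [5.132, 8.940001]          # xfade offsets used in the question

video_len = offsets[-1] + durations[-1]           # end of the last xfade
audio_len = sum(durations) - fade * len(offsets)  # acrossfade trims once per join

print(round(video_len, 3), round(audio_len, 3))   # prints: 14.339 12.339
```

    Under these assumptions the two totals differ by exactly 2 seconds, one transition's worth, which is the kind of gap that would produce the behaviour described; whether the offsets or the acrossfade durations should be adjusted is the open question.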


    


  • FFMPEG sendcmd command limitation

    18 November 2022, by Stahlauge

    I am trying to draw text onto a video with drawtext and sendcmd.
The sendcmd file defines over 9,100 commands (each frame needs a different text).
All my commands are built this way:

    


    0 Parsed_drawtext_1 reinit 'text=\102030GA01, 102040Ap05
von oben, in Fließrichtung
[0,00]   17.11.2022 10\:22\:04:fontsize=28.928569157918293:y=5:x=0:fontcolor=#FFFFFF';
0.04 Parsed_drawtext_1 reinit 'text=\102030GA01, 102040Ap05
von oben, in Fließrichtung
[0,00]   17.11.2022 10\:22\:04:fontsize=28.928569157918293:y=5:x=0:fontcolor=#FFFFFF';
0.08 Parsed_drawtext_1 reinit 'text=\102030GA01, 102040Ap05
von oben, in Fließrichtung
[0,00]   17.11.2022 10\:22\:05:fontsize=28.928569157918293:y=5:x=0:fontcolor=#FFFFFF';


    


    My command is:

    


    ffmpeg.exe -y -i "17112022_102204.mp4" -vf "sendcmd=f=drawtextcommands.cmd,drawtext=fontfile='Ubuntu-L.ttf':text=:fontcolor=black@1.0:line_spacing=5, drawtext=fontfile='Ubuntu-L.ttf':text=:fontcolor=yellow@1.0:line_spacing=5,drawtext=fontfile='Ubuntu-L.ttf':text=:fontcolor=yellow@1.0:line_spacing=5,drawtext=fontfile='Ubuntu-L.ttf':text=:fontcolor=green@1.0:line_spacing=5" -c:v mpeg4 -qscale:v 5 -f mp4 "102030GA01_2022-11-17_10-21.mp4"


    


    My ffmpeg version is n5.0.4 lgpl.
If I run this command with the sendcmd file on my development laptop, it works just fine (i7-10850H, 32 GB RAM), but if I run the exact same command on a Surface (i5-1035G4, 8 GB RAM) or a Dell Latitude 5175 (Intel m5-6Y57, 8 GB RAM), I get an error

    


    [Parsed_sendcmd_0 @ 000002454188c540] No Targed specified in interval #XXXX, command #0 


    


    where XXXX is different each time I run it. All of these PCs run Windows 10.

    


    I can't really figure out why I get this error on the tablets but not on my laptop.
Could it be a hardware resource problem, even though FFMPEG doesn't state any requirements at all?

    


    I can only get it to run on the tablets if I delete a lot of commands from the sendcmd file.
I can't pin down exactly how many I need to delete, because on each run a different number of commands works.
But I can say that I have to delete far more commands on the Dell tablet than on the Surface to get it to run.
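    One way to shrink the file, whatever the root cause on the tablets turns out to be: the excerpt above sends an identical reinit payload at 0 and 0.04, and drawtext keeps the last payload until the next command arrives, so consecutive duplicates can be dropped without changing the rendered output. A sketch, assuming each command starts with a time token and commands are separated by ";" followed by a newline:

```python
# Hypothetical sketch for shrinking a sendcmd file: emit a command only when
# its payload differs from the previous one. Assumes the format
# "<time> <target> <cmd> '<payload>';" with commands separated by ";\n"
# (payloads may themselves span multiple lines, as in the question).

def dedupe_sendcmd(text):
    commands = [c for c in text.split(";\n") if c.strip()]
    kept, prev_payload = [], None
    for cmd in commands:
        # payload = everything after the leading time token
        payload = cmd.split(None, 1)[1]
        if payload != prev_payload:
            kept.append(cmd)
            prev_payload = payload
    return ";\n".join(kept) + ";\n"

sample = "0 T reinit 'a';\n0.04 T reinit 'a';\n0.08 T reinit 'b';\n"
print(dedupe_sendcmd(sample))
```

    In the real file the timestamp baked into each text changes only once per second at 25 fps, so roughly 24 out of every 25 commands could be dropped this way.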