
Other articles (47)

  • Permissions overridden by plugins

    27 April 2010, by

    Mediaspip core
    autoriser_auteur_modifier() so that visitors are able to modify their information on the authors page

  • Publishing on MédiaSpip

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, if your installed MédiaSpip is at version 0.2 or higher. If needed, contact your MédiaSpip administrator to find out.

  • Libraries and binaries specific to video and audio processing

    31 January 2010, by

    The following software and libraries are used by SPIPmotion in one way or another.
    Required binaries FFMpeg: the main encoder, which transcodes almost every type of video and audio file into formats readable on the Internet. See this tutorial for its installation; Oggz-tools: tools for inspecting ogg files; Mediainfo: retrieves information from most video and audio formats;
    Complementary and optional binaries flvtool2: (...)

On other sites (10190)

  • Output a video with "slide up" transition using more than 100 images in FFMPEG?

    9 July 2021, by Joseph Ladera Fugata

    I have more than a hundred images of the same size and format that my company wants to display on the big 9:16 (rotated 16:9) screen outside the front gate. It's supposed to be easy, but they required me to have it slide from top to bottom, meaning it should look like a smooth auto-scroll effect. I searched here and there but had no luck.

    


    I have tried xfade like this:

    


    ffmpeg -loop 1 -i input.txt -filter_complex
"xfade=transition=slideup:duration=10:offset=0,format=yuv420p" output.mp4


    


    It didn't do anything, just a bunch of errors referring to the inputs, which are supposed to be just 2 images in the first place.
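    For reference, a minimal sketch of what a working xfade call looks like, since it takes exactly two inputs (file names img1.png and img2.png are hypothetical; both images assumed the same size):

    ffmpeg -loop 1 -framerate 25 -t 12 -i img1.png -loop 1 -framerate 25 -t 12 -i img2.png -filter_complex "xfade=transition=slideup:duration=10:offset=1,format=yuv420p" output.mp4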

    


    The next thing I tried was the concat approach from @Gyan's reply HERE, and here's my version of the code:

    


    ffmpeg -y -f concat -safe 0 -i input.txt
-vf tile=1x%img_count%,loop=%_my_loop_count_var%:1:0,
crop=iw:ih/%img_count%:0:clip((t-%_start_time%)/%sec_per_img%*ih/%img_count%\,0\,ih*%img_count_minus_one%/%img_count%)
-r 25 -c:v libx264 -preset ultrafast output.mp4


    


    When I played with it, it gave a different output even though the images are all the same dimensions.
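    For concreteness, here is roughly what I believe those placeholders expand to with, say, 100 images at two seconds per image and 25 fps (all numbers hypothetical, not tested): tile=1x100 stacks the 100 frames into one tall frame, loop repeats it for the full scroll duration, and crop slides a one-image-high window down it over time.

    ffmpeg -y -f concat -safe 0 -i input.txt -vf "tile=1x100,loop=5000:1:0,crop=iw:ih/100:0:clip((t-0)/2*ih/100\,0\,ih*99/100)" -r 25 -c:v libx264 -preset ultrafast output.mp4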

    


    I found someone on YouTube who did this, but it uses bash and I am on Windows. Unless someone here can convert it to a batch script, which would be great. I looked into it and it seems like he's just V-stacking them, kind of like what I did, but there's more to it. I know I could have gone through bash on Windows, but I doubt the script will run in a non-Unix environment just by having bash, and I'm not yet familiar with Cygwin either.
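    In the meantime, here is a minimal batch sketch of the part I can do on Windows: generating input.txt for the concat demuxer from a folder of images (folder and file names hypothetical):

    @echo off
    setlocal enabledelayedexpansion
    rem Build the concat list from every PNG in the images folder
    del input.txt 2>nul
    set /a img_count=0
    for %%f in (images\*.png) do (
        rem Each line tells the concat demuxer to read one image
        echo file '%%f'>> input.txt
        set /a img_count+=1
    )
    echo !img_count! images listed in input.txt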

    


    I also tried other options posted by others here (I just forgot to bookmark them), but none of them works on more than a hundred images.

    


    I'd love to hear a response if anyone can help.

    


  • Create Panorama from Non-Sequential Video Frames

    6 May 2021, by M.Innat

    There is a similar question (not that detailed and no exact solution).

    



    


    I want to create a single panorama image from video frames. And for that, I first need to get a minimal set of non-sequential video frames. A demo video file is uploaded here.

    


    What I Need

    


    A mechanism that can produce not only non-sequential video frames, but frames chosen in such a way that they can be used to create a panorama image. A sample is given below. As we can see, to create a panorama image all the input samples must share a minimum overlapping region with each other, otherwise it cannot be done.

    


    [image: sample of input frames with overlapping regions]

    


    So, if I have the following video frame order

    


    A, A, A, B, B, B, B, C, C, A, A, C, C, C, B, B, B ...


    


    To create a panorama image, I need to get something as follows: reduced sequential frames (or adjacent frames), but keeping the minimum overlap.

    


    [overlap]  [overlap]  [overlap]  [overlap]  [overlap]
A,    A,B,       B,C,       C,A,       A,C,       C,B,   ...


    


    What I've Tried and Where I'm Stuck

    


    A demo video clip is given above. To get non-sequential video frames, I primarily rely on ffmpeg.

    


    Trial 1 Ref.

    


    ffmpeg -i check.mp4 -vf mpdecimate,setpts=N/FRAME_RATE/TB -map 0:v out.mp4
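    In case tuning matters: mpdecimate also exposes hi, lo, and frac thresholds that control how aggressively near-duplicate frames get dropped. A more aggressive variant (the threshold values are guesses, not tested numbers) would be:

    ffmpeg -i check.mp4 -vf "mpdecimate=hi=64*80:lo=64*24:frac=0.5,setpts=N/FRAME_RATE/TB" -map 0:v out.mp4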


    


    After that, I sliced out.mp4 into individual frames using OpenCV:

    


    import cv2
    from pathlib import Path

    vframe_dir = Path("vid_frames/")
    vframe_dir.mkdir(parents=True, exist_ok=True)

    vidcap = cv2.VideoCapture('out.mp4')
    success, image = vidcap.read()
    count = 0

    # Save every frame of the decimated video as a JPEG
    while success:
        cv2.imwrite(f"{vframe_dir}/frame{count}.jpg", image)
        success, image = vidcap.read()
        count += 1

    vidcap.release()


    


    Next, I rotated these saved images to horizontal (as my video is a vertical view).

    


    import os
    import cv2
    from pathlib import Path
    from tqdm import tqdm

    vframe_dir = Path("out/")
    vframe_dir.mkdir(parents=True, exist_ok=True)

    vframe_dir_rot = Path("vframe_dir_rot/")
    vframe_dir_rot.mkdir(parents=True, exist_ok=True)

    for each_img in tqdm(os.listdir(vframe_dir)):
        # imread/imwrite both use BGR, so no channel swap is needed here
        image = cv2.imread(f"{vframe_dir}/{each_img}")
        # 180° then 90° clockwise is the same as a single 90° counter-clockwise turn
        image = cv2.rotate(image, cv2.ROTATE_90_COUNTERCLOCKWISE)
        cv2.imwrite(f"{vframe_dir_rot}/{each_img}", image)


    


    The output of this method (with ffmpeg) is OK, but it's inappropriate for creating the panorama image, because it doesn't keep sequentially overlapping frames in the results, so the panorama can't be generated.

    
 


    Trial 2 Ref.

    


    ffmpeg -i check.mp4 -vf decimate=cycle=2,setpts=N/FRAME_RATE/TB -map 0:v out.mp4


    


    didn't work at all (as far as I can tell, decimate=cycle=2 just drops one frame out of every two regardless of content).

    


    Trial 3

    


    ffmpeg -i check.mp4 -ss 0 -qscale 0 -f image2 -r 1 out/images%5d.png
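    Presumably -r is the sampling knob here; a variant keeping one frame every two seconds instead of one per second (untested) would be:

    ffmpeg -i check.mp4 -ss 0 -qscale 0 -f image2 -r 0.5 out/images%5d.png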


    


    No luck either. However, I've found this last ffmpeg command was the closest by far, but it wasn't enough. Compared to the others, it gave me a small number of non-duplicate frames (good), but the bad thing is it still keeps frames I don't need, so I kinda manually picked some desired frames, and then the OpenCV stitching algorithm worked. So, after picking some frames and rotating (as mentioned before):

    


    stitcher = cv2.Stitcher.create()
status, pano = stitcher.stitch(images) # images: manually picked video frames -_- 
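    A sketch of how I imagine the manual picking could be automated (the ORB settings and the lo/hi match thresholds below are guesses, not tested values): greedily keep a frame only when it still shares a moderate number of feature matches with the last kept frame, i.e. partial overlap rather than near-duplicate.

    import cv2

    def select_frames(video_path, lo=30, hi=150):
        """Greedily pick frames that partially overlap the last kept frame."""
        orb = cv2.ORB_create(nfeatures=1000)
        bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        cap = cv2.VideoCapture(video_path)
        ok, anchor = cap.read()
        if not ok:
            return []
        kept = [anchor]
        _, d_anchor = orb.detectAndCompute(cv2.cvtColor(anchor, cv2.COLOR_BGR2GRAY), None)
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            _, d_frame = orb.detectAndCompute(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY), None)
            if d_frame is None or d_anchor is None:
                continue
            n_matches = len(bf.match(d_anchor, d_frame))
            # Too many matches = near-duplicate; too few = no usable overlap
            if lo <= n_matches <= hi:
                kept.append(frame)
                d_anchor = d_frame
        cap.release()
        return kept

    The kept list could then be fed to stitcher.stitch() as above.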


    
 


    Update

    


    After some trials, I'm kinda adopting the non-programming solution. But I would love to see an efficient programmatic approach.

    


    On the given demo video, I used Adobe products (Premiere Pro and Photoshop) to do this task, video instruction. But the issue was that I took all the video frames at first via Premiere (without dropping any frames, which is computationally costly later) and used Photoshop to stitch them (according to the YouTube video instruction). It was too heavy for these editor tools and didn't seem like a better way, but the output was better than anything until now, even though I used only a few (400+) of the 1200+ video frames.

    


    [image: panorama result from the Adobe workflow]

    



    


    Here are some big challenges. The original video clips have some conditions, though, and they're quite serious. Unlike the given demo video clip:

    


      

    • It's not straightforward, i.e. there is camera shaking
    • Lighting conditions, i.e. the same spot can look visually different
    • Camera flickering or banding


    


    This scenario is not included in the given demo video, and it brings additional, heavy challenges for creating panorama images from such videos. Even in the non-programming way (using Adobe tools), I couldn't make it any good.

    



    


    However, for now, all I'm interested in is getting a panorama image from the given demo video, which doesn't have the above conditions. But I would love to hear any comment or suggestion on that.

    


  • Determine which decoders/demuxers/parsers ffmpeg needs to successfully consume file

    10 October 2024, by rschristian

    I'm trying to custom compile a build of ffmpeg.wasm, as the prebuilt "support everything" build is a tad hefty at 35 MB. This base build (as well as standard ffmpeg running on my desktop) works perfectly fine on the provided file, so I do have something I can work against.

    


    My issue is that I'm a bit stuck on figuring out what precisely I need to support the provided file: the correct combination of decoders, demuxers, parsers, etc., and the encoders and muxers I'll need to convert it to my desired output.

    


    I'm sure I can brute-force this with time, but is there a way of having ffmpeg report precisely which combination it's using when running against a file? I've tried -report, but it doesn't seem to contain this information; really, it contains no more useful codec information than the standard output log, as far as I can tell.
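    As an aside, the codec/container identification I mention below came from probing the file; assuming ffprobe is available, something like this prints the container's format_name and each stream's codec_name:

    ffprobe -v error -show_entries format=format_name:stream=codec_name -of default=noprint_wrappers=1 foo.m4s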

    


    For example, I can see the current file I'm testing with (foo.m4s) is h264 video and aac audio, so I tried the following flags, based on what I've been able to find online and by looking through the list of muxers:

    


    --enable-decoder=aac,h264
--enable-demuxer=aac,h264
--enable-parser=aac,h264


    


    However, this results in the following error:

    


    foo.m4s: Invalid data found when processing input


    


    So it seems like it's not quite the correct list.

    


    Is there any good way to debug this? Some way of having ffmpeg itself report exactly what I'll need to handle this conversion using my own compilation? As the goal is a minimum build, adding the kitchen sink and slowly reducing over time will obviously be super time-consuming, so I'd like to avoid that if at all possible.

    



    


    Edit: Trial and error got me down to this, though I don't quite understand it (and the question still stands, as I could reasonably need to handle other files in the future):

    


    --enable-demuxer=aac,mov
--enable-parser=aac


    


    mov for some reason ended up being the fix? The first line of ffmpeg's output was Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'foo.m4s', so I simply grabbed those one by one, and sure enough mov worked, despite the video having h264. Would love it if someone could explain this too.